A good article on free will

Summary:  Although the majority of intellectuals deny the existence of free will, a few have defended it.  These posts have already referred extensively to Ayn Rand’s defense of free will as based on a primary choice to think or not.  This post will review the work of a scientist who also provides a vigorous defense of free will, and who makes other valuable points within the field of psychology and neuroscience.

In a previous post, a book chapter by Albert Bandura 1 was referenced as providing a devastating critique of neurophysiological determinism, that is, the view that all of our conscious activity is determined by brain states.  Some of Bandura’s key criticisms were discussed in that earlier post.  Bandura makes several other excellent points for which there was no space earlier.

The first point he makes is that the conceptual level is what is called an “emergent property.”  Such a property is something that results from the combination of elements but is not present in those original elements.  An example is the saltiness of table salt, which results from the combination of sodium and chlorine yet is present in neither element.  Bandura states:

“Cognitive processes are emergent brain activities that exert determinative influence. In emergence, constituent elements are transformed into new physical and functional properties that are not reducible to the elements. For example, the novel emergent properties of water, such as fluidity and viscosity, are not simply the combined properties of its hydrogen and oxygen microcomponents … Through their interactive effects, the constituents are transformed into new phenomena.”

This is the essential counter-argument against all of the neurophysiological determinists who claim that free will does not exist because it cannot be found in the brain’s physical components. Recall that the anti-free-will argument presented by Harris rests on the idea that there would have to be something “extra,” some (in his view) mystical element to explain choice.  Speaking of a criminal, for example, and supposing he changed places with the criminal “atom for atom,” Harris states, “There is no extra part of me that could decide to see the world differently or to resist the impulse to victimize other people.”  In contrast to this view, if free will and the cognitive processes it involves are combination-enabled “emergent” properties, there is no need to hunt for an “extra” property, let alone to assert it is mystical.  Free will is as natural as water’s wetness.

Bandura goes on to explain the proper level at which to view man’s control.  He rejects the anti-free-will view that because man is not aware of the brain states underlying choice, man therefore has no choice:

“In acting as agents, individuals obviously are neither aware of nor directly control their neuronal mechanisms. Rather, they exercise second-order control. They do so by intentionally engaging in activities at the macrobehavioral level known to be functionally related to given outcomes. In pursuing these activities, over which they can exercise direct control, they shape their neural circuitry and enlist subpersonal neurophysiological events subserving their chosen pursuits.”

Bandura gives the analogy of driving a vehicle, which requires combustion, conversion of energy into motion, steering linkages, brake pads, etc.  The driver activates all of these subsystems by choice, but not directly.  He does so through the conscious activities we normally take to be part of driving: planning the route (and making reservations if the trip involves an overnight stay), fueling the car, starting it, pressing the gas pedal, and steering.  As Bandura states, in addition to the conscious choices made in simply starting and steering the car, “The deliberate planning … for these diverse activities far in advance requires considerable proactive top-down cognitive regulation.”

This concept of “top-down” cognitive regulation is the most important contribution Bandura makes.  What he is saying is that man actually exercises control over all these subsystems that operate in the background, but only at the “macrobehavioral” level (the “top”), i.e. only at the level of what one can directly choose.  Man cannot choose or know about the details of the combustion engine’s momentary operation (the “down”), just as he cannot choose or know about what his neurons are doing.  When he chooses to do something, however, man activates these subsystems.  In simple terms, he is a causal agent in enabling these subsystems to function to his benefit.

Bandura continues his explanation using an example from exercise physiology and psychology:

“Individuals obviously do not intentionally direct their atrial and ventricular cardiac muscle fibers to fire and their aortic and pulmonary valves to open and close.  However, by intentionally engaging in an exercise routine and controlling their activity level, they can enhance their cardiac function and regulate their heart rate without having the foggiest idea of how they indirectly recruited, by their intentional actions, the subserving neurophysiological mechanisms.”

You, the reader, can see the above mechanisms in an everyday example.  Suppose you are nervous about an upcoming job interview or examination.  Suppose this causes you to feel shortness of breath.  You can choose to identify your nervousness, and the underlying worry about how you will perform. Further, you can choose to say to yourself: “Calm down, I know I’m prepared for this.  Breathe deeply and evenly.  Steady now.”  These actions – identifying and correcting mistaken thoughts, and purposely controlling your breathing – are at the “macrobehavioral” level, where choice is operative.  Yet they cause the underlying bodily systems to deliver oxygen to the cells more efficiently, a function that no one can control directly.

Bandura summarizes (here, and below, brackets indicate words substituted to explain a technical term):

“Framing the issue of conscious cognitive regulation in terms of direct control over the neurophysiological mechanics of action production casts the issue in the wrong terms at the wrong level of control … Because individuals have no awareness of their brain processes does not mean that they are just quiescent hosts of automata that dictate their behavior.  Neuroimaging can shed light on the neural mechanisms of cognitive control and how controllable [chosen] action indirectly develops functional neuronal structures and orchestrates the neurodynamics for selected purposes.”

Finally, Bandura discusses the details of recent neuroscience research that rejects the mechanistic theories of the free-will deniers.  This research investigates what happens to the brain when man exercises choice.  Bandura urges lines of research that elaborate on these findings:

“Thoughts change the brain by cognitive practice in much the same way as does physical practice (Pascual-Leone, et al., 1995). … prior cognitive practice reduces the time needed to learn a skill by physical practice. There is much excitement [by current practitioners of neuroscience] about how the brain regulates behavior to the neglect of how individuals train the brain to serve desired purposes.

Research on brain development underscores the influential role that [choice] plays in shaping the functional structure of the brain (Diamond, 1988 2; Kolb & Whishaw, 1998). It is not mere exposure to stimulation but [choice] in exploring, manipulating, and influencing the environment that counts.  By regulating their motivation and activities, people produce the experiences that form the functional neurobiological substrate of [skill development] … This is a realm of inquiry in which psychology can make unique contributions to the … understanding of human development, adaptation, and change.”

Unfortunately, the great value of this article is undercut by Bandura’s views about the foundations of ethics.  His views are a mixture of individualism (standards derived from reason, applying to each individual man) and collectivism (group or societal standards, whatever they are).  Implicitly Bandura accepts the centrality of the individual valuer, in that he emphasizes the survival value to each person of being able to control his own cognitive processes and behavior.  In Bandura’s words: “Forethoughtful, regulative, and reflective capabilities are vital for survival.”  Because of that implicit individualism, Bandura has insightful things to say about the role of chosen moral values in shaping how people act.  However, despite that implicit individualism, parts of his section on man as a moral agent come from an ethical position that gives equal weight to individual and collective moral standards.  That perspective is also reflected in Bandura’s use of terminology (e.g. “biopsychosocial,” “social cognitive theory”) that represents confusing package-deals of individualism and collectivism.  The package lumps together 1) biologically important aspects of each individual man in his control over his environment with 2) man’s psychology (on this view) as a “social” agent.

This flaw aside, the article’s major contribution is to identify the actual psychological and brain processes involved in a person being an “agent,” that is, exercising choice.  Removing the determinist and mystical aspects entirely, Bandura and those psychologists he cites are asking serious questions about the role of processes and mechanisms that accompany and are activated by choice.  This is neuroscience as it could and should be.

Two views of the power of man

Summary:  Two views of man’s worth and scope of choice are contrasted.  One view holds that nature is immensely powerful and humbles man.  The other holds that man, while of course subject to the laws of nature, has the power to reshape nature to valuable ends, achieving results nature could never accomplish.  This power of man is the result of his reason and choice.  Which of man’s achievements would you judge as most exemplifying his power?

By speaking of greater forces than we can possibly invoke, and by confronting us with greater spans of time than we can possibly envisage, mountains refute our excessive trust in the man-made. They pose profound questions about our durability and the importance of our schemes. They induce, I suppose, a modesty in us. 1

This view is virtually the motto adopted by the environmentalist movement (of which its author is a prominent member).  It represents a strain of thought about man and his capabilities that is increasingly common in our culture.  This view emphasizes the power of nature and belittles man by contrast.  It urges man to become modest and jettison what it considers his unwarranted sense of importance.

In fact, however, man’s power is far greater than the mountains he is asked to be humbled by.  With the faculty of reason, culturally unleashed since the Renaissance, man has used nature to accomplish life-enhancing tasks that nature, by itself, did not and could not accomplish.

Consider one example, from a biography of a great scientist:  A mold can grow penicillin.  This fact of nature lay unknown for the entire 200,000 years of man’s life on earth, until it was discovered by science in the 1920s.  By itself, however, even this discovery wasn’t sufficient to make penicillin useful.  Making it beneficial required, first, investigation by scientists to understand penicillin’s properties and medical applications.  Second, an enormous scale-up effort was required to take this research from laboratory to widespread medical use.  At the beginning of World War II, with soldiers injured on the battlefield in desperate need of antibiotics, penicillin was extremely limited in supply: The soldiers, after receiving a dose, had their urine collected so the rare commodity could be re-purified from it and used for another dose.  By the latter part of World War II – with the industrialization of the penicillin process, most notably by the drug companies Merck and Pfizer – penicillin supplies were adequate to provide doses as needed for the Normandy invasion.  In Britain alone, from 1930 to 1960, the number of people dying from infection decreased from 115,000 to 24,000 per year, while the population grew (some of this was due to hygiene and other medical advances, but most is attributed to penicillin and its derivatives).

A mold (“nature alone”) can grow infinitesimal amounts of a useful chemical.  It takes man’s reason to recognize the chemical’s beneficial properties, and to develop the scientific understanding and production processes necessary to make it available for widespread use.

With regard to mountains, yes, the building of them by the earth involves massive forces, blind consequences of earth motion, heat and gravity.  The moving of mountains, to achieve something of great value, requires man’s mind and focused choice.  Few achievements illustrate that more clearly than the building of the Panama Canal.

The Canal was first conceived and seriously proposed in the 1850s, as a way to shorten the months-long journey between the Atlantic and Pacific oceans.  Decades of planning and work resulted in very little, as the first efforts failed and went bankrupt.  When American industry and medicine got involved, in the 1900s, Panama was still a pestilential, disease-ridden jungle, with a few skeletons of the earlier work remaining.  The task seemed, and had proven, impossible.  First, man would have to find a way to control yellow fever and malaria so that it was possible to work in Panama at all.  Then, the enormous task of slicing a waterway through a continent had to be accomplished.

There were political factors at play in this vast effort, but that is not the subject of concern here.  What matters is what a combination of science, engineering and business accomplished in merely 11 years.

David McCullough’s excellent history of the building of the Canal details the extent of the accomplishment.

  • The American effort at Panama started in 1903, after a 20-year French effort had ended in failure both technically and financially, and a 10-year lapse in construction had occurred.
  • Before work could begin, a medical team had to assemble all that was known about yellow fever and malaria in order to reduce or eliminate the extraordinarily high death rate (12 of every 1,000 employees) in Panama. This team, led by William C. Gorgas, conducted a war against mosquitoes (whose role in transmitting yellow fever and malaria had only recently been discovered).  The campaign consisted of drainage, brush and grass cutting, killing larvae (with a crude larvicide they had to invent), prophylactic quinine for all employees, screening, and killing adult mosquitoes. The result of this four-year effort was the eradication of yellow fever and a dramatic reduction of the malaria hospitalization rate (from 10% to 2%).
  • Culebra Cut was the heart of the canal, a massive man-made canyon with vertical rock walls subject to progress-reversing mudslides.  96 million cubic yards of material were removed from the canyon, by around 100 “monster” machines and 61 million pounds of dynamite.  All the types of work – blasting, drilling, shoveling, material removal – had to be coordinated so as not to have one interfere with or delay any other.
  • A large dam was built at Gatun in order to control the flow of water in the canal. The dam spanned a waterway a mile and a half wide, and was fifteen times as thick at its base as it was high.  Daily concrete pours for the dam reached 3,400 cubic yards.
  • The Canal, once operational, would be run by at least 1,500 electric motors, designed by General Electric.
  • Pittsburgh steel mills supplied all of the Canal’s steel needs.  Special steels, which needed to be high-strength and corrosion-resistant, were used in the construction of the locks.  Allegheny Steel had invented the new vanadium steel alloy.

McCullough provides an eloquent summary of all these technical achievements:

“The creation of a water passage across Panama was one of the supreme human achievements of all time, the culmination of a heroic dream of four hundred years and of more than twenty years of phenomenal effort … The fifty miles between the oceans were among the hardest ever won by human effort and ingenuity, and no statistics on tonnage or tolls can begin to convey the grandeur of what was accomplished.”

Among the many elements of free will highlighted in the Panama Canal story, one stands out:  the difference between the characters of the leaders of the failed attempt and those of the successful one.  Ferdinand de Lesseps, the French builder who had become famous for the Suez Canal, tried to apply the same techniques used in building Suez to the Panama region.  De Lesseps ignored the overwhelming evidence that methods suitable for a desert climate were not working in a muddy jungle, and his effort collapsed.  His successors, American medical personnel and engineers, were by contrast open to the facts and did not evade them.  Under the leadership of two engineers – John Stevens and George Goethals – they devised methods appropriate to the particulars of Panama.  It was no accident, for example, that Stevens was an expert railroad engineer:  Building the Canal required innovative railroad solutions to the problem of deploying equipment and materials at dispersed sites.  Nor was it an accident that Stevens had trained under the legendary railroad builder James J. Hill, whose management philosophy is summed up as follows:  “Intelligent management . . . must be based on exact knowledge of facts. Guesswork will not do.”  A determination to have exact factual information vs. stubborn evasion and wishful thinking – those are the contrasting approaches of the leaders whose exertions had such different outcomes.  This fundamental difference in character (not anything related to nationality) explains why the later effort to build a canal at Panama succeeded, despite the enormity of the task and all the setbacks along the way.

For those who love to visit and view mountains, whether as a respite from a productive life or as a vocation (such as guiding tours), nothing said here should be taken as a suggestion against doing so – as long as one keeps the full context in mind.  Don’t adopt the anti-man viewpoint of the environmentalist movement.  That movement promotes a poisonous package deal, combining your enjoyment of nature with a denigration of man’s power and choice.

Man’s reason allows him to move mountains, and to choose to slice pathways through continents.  It also gives him the power to read those mountains, understanding their geologic history even though that history extends for epochs before man inhabited the earth.  Reason allows man to grasp possibilities on extraterrestrial bodies and choose to travel there.  Reason makes it possible to eliminate disease.  But reason is a power that has to be exercised by choice – requiring focus on reality, effort, and perseverance in the face of failure.  Rather than be humbled by the sight of nature, or overawed by its power, man may legitimately experience an earned pride in his choice to comprehend and make use of it.

Intelligence – a learnable skill

Summary:  Intelligence is viewed by most people as a fixed quantity, determined by genes and unchangeable.  What if that isn’t true?  What if, like any complex set of skills (such as swimming or driving), people can learn to improve it?  Read what the research shows.  And read also about which intellectual movement opposed such research. 

Even many of those who accept free will think that intelligence is not within the scope of what one is free to choose.  IQ testing and interpretation are based on the premise that there is a certain human capacity, called “g,” that represents an innate ability to deal with abstractions and complex mathematical and verbal relationships.  But is that true?

Research conducted within the educational community over many decades, directed to fixing deficiencies in student problem solving, has shed new light on this issue.  Intelligence has come to be viewed as trainable, like any complex skill.

An insightful book, Intelligence Can Be Taught, by Arthur Whimbey, summarizes the research underlying the trainability of intelligence.  Whimbey first identifies the component skills involved in solving intelligence-test problems.  He uses this background to define “intelligence” as the capacity for paying careful, skilled attention to the analysis of relations, whether verbal, spatial or mathematical.

The studies reviewed by Whimbey spanned the entire age range from preschool through college.  Interestingly, the conclusions from the studies of younger students were the same as those from the studies of older students:  Differences in performance on intelligence tests were explained by differences in the mental habits of the students.

Low-aptitude students had the following characteristics:

  1. They were one-shot thinkers: If they didn’t already know the answer to a question, they gave up and rushed to pick an answer rather than performing any analysis.
  2. They were quick – mentally careless and superficial. They glossed over details – often writing down an answer in a fraction of the time necessary to absorb the data and construct one.  As a result of this habit, they missed the main point or based an answer on superficial clues. In preschool, such students would start answering a question before the questioner had finished speaking.
  3. Their answers were often feeling-based – when asked for the reason for an answer, they didn’t respond in terms of the problem description but in terms of some feeling or attitude that they, the solvers, possessed.
  4. They were not particularly concerned about having an accurate picture of the problem or of the words in the problem. In a manner very similar to that of “look-say” readers, they guessed at concepts rather than analyzing and ultimately grasping them.

By contrast, high-aptitude students had the following characteristics:

  1. They were active and deliberate in attacking problems. If they didn’t already know the answer, they used analysis to reach one.
  2. They worked slowly enough to grasp and process the entire description of a problem. If they didn’t understand everything, they constructed the meaning based on what they did understand, carefully proceeding through a sequence of questions and steps to clarify the full meaning.
  3. They based their answers on analysis of the problem and other facts/relationships they knew, rather than what they felt.
  4. They were determined to get an accurate picture of the problem, including the meaning of every concept in the problem statement. They were determined to get an accurate solution.

The researchers found that the attributes of low-aptitude students were not innate and fixed – they were modifiable and within the power of a student’s ability to choose.  Dealing with abstract, complex material to solve a problem has definite subskills and activities that the students could be coached to adopt.  For example, students were taught not to rush but rather to take the time necessary to obtain an unambiguous picture of what the problem stated and what was requested.  They were taught to focus on the problem statement and facts as opposed to their feelings.  They were taught to ask a series of probing questions when material was initially unclear, paying careful attention to the exact meaning of each word, and using other available knowledge.  This training improved not only the students’ scores on standardized intelligence tests but also, and most importantly, their performance in further education.  The researchers concluded that it is within one’s power to raise intelligence by learning and automatizing a proper method.

The value of this kind of training was found to depend on its continuity, its length, the number of hours per week, and the starting age of the trainees: the earlier the training started, the better.  The researchers found that in the homes of the better preschool students such training was delivered as a natural part of parent-child interaction – via consistent and intensive verbal exchanges. In the homes of the poorer students, such verbal interactions were missing.  In contrast to children who engaged in consistent verbal interactions, children from these “silent” homes didn’t know the meaning of simple relations like either-neither, over-under and big-little.  The difference in the level of verbal interaction later develops into a difference in ability with complex patterns of thought – if-then, cause-effect, exclusion/inclusion and generalization.

The researchers encountered one serious methodological impediment to progress in this field:  the opposition of the psychological theory of Behaviorism to introspection. Behaviorism claimed that introspection is not objective because it cannot be observed from the outside – on its view, only external behavior, not thinking, is objective.  During the early part of the 20th century, intelligence researchers endured heavy criticism and lack of support for their focus on the patterns of thought engaged in by the students.  Yet, as they showed, those patterns of thought can not only be identified by student introspection, they can also be communicated through careful self-reporting.

Behaviorism eventually lost its dominance of the psychological and educational fields, mostly because of the ascendancy of cognitive theorists who re-asserted the importance of the mind and its processes.  By the end of the 20th century, more intellectuals accepted research methods that focus on thought.

The objectivity and proper use of verbal self-reports of thinking are legitimate issues for discussion even among those who do not dismiss cognition and the mind.  The basic question is: to what extent can psychologists and educators rely on verbal self-reports as a scientific or training database?  Since the mind is visible only to the individual, how is it possible to trust and verify the objectivity of what he says?

The answer, based on extensive investigation of the issue by Ericsson and Simon, is that a verbal self-report can be trusted if it a) is contemporaneous with the student’s actual thinking (rather than remembered), b) consists of simple reports rather than interpretations, c) is spontaneous rather than prompted (i.e. no leading questions or suggestions), and d) integrates with other knowledge the observer can glean about the thinking patterns of the student, such as what the student writes down on the blackboard.  Ericsson and Simon also provided an in-depth critique (as does Bandura 1) of the one paper most cited as proving verbal self-reports are nonobjective.

As a result of the successful defense of introspection, it became more widely accepted as a key source of data, and has been applied in areas as diverse as study methods, emotional harmony, and Whimbey’s own area of training thinking methods.

Whimbey and the other researchers found the best strategy for improving intelligence:  The poor student should read aloud a thinking protocol of a model high-achieving student prior to attempting to solve problems.  The student, under teacher guidance, explains the process of the model student and then attempts to implement it in his own problem solving, reporting his own thinking as he proceeds.  The teacher identifies areas where the student is deficient in successfully implementing the model student’s thinking methods, and gradually tutors the poorer student to improve his thinking and problem-solving method.

The student chooses to adopt the new methodology and to make it a habit.  The teacher cannot force the student to adopt it, but can be an invaluable guide in showing the student where he might improve his method.  The previously poor student, who would walk away from problem-solving sessions with confusion and frustration, thinking he could never be successful, develops a new habitual method and confidence in his ability.

Unfortunately, in the early part of the 21st century the reality of thinking and the mind, and the choice to alter one’s thinking patterns, have again come under attack, this time implicitly, through the ideas advocated by the anti-free-will movement.

Whether intelligence is within the scope of one’s free will is not merely an academic question – a student’s future academic performance and success in life are at stake.  An important study showed that students who think intelligence can be trained are more willing to modify their thinking and exert effort.  Students who believed in free will had a positive trajectory in performance; those who did not had a negative trajectory.  Thus those free-will deniers, like Harris, who claim that people will continue to behave the same way whether or not they believe in free will, are dead wrong.  Accepting free will, in any area where it applies, is the first and most fundamental step in improving one’s success in that area, whatever one’s ultimate physical or mental capacity.

If there is one criticism one can level at this book, it is Whimbey’s unwarranted dismissal of the ability to deal with abstractions as a definition of intelligence.  He seems to have an incorrect view of what is legitimately involved in forming abstractions, arguing that careful, systematic analysis of relationships has no role in abstraction.  In fact, however, a proper theory of forming abstractions requires exactly such a systematic approach.

That criticism aside, this is a valuable book.  It not only reviews research in training intelligence but also debunks much of the so-called research (especially separated-twin studies) purporting to show that intelligence is 80% hereditary, or (even worse) that it is racially determined.  To their great credit, Whimbey and his colleagues have added intelligence to the list of what a person can and does build.

Critique of Sam Harris’ book Free Will – Conclusion – Anti-hierarchy

Summary: Harris repeatedly commits the fallacy of hierarchical inversion, which consists of denying a concept or idea on which one’s own argument rests.  This fallacy is committed with respect to concepts such as “science,” and with respect to the very descriptions he provides of the experiments supposedly proving that free will is an illusion.

One principle that has emerged in the discussions in several posts is hierarchy.  It was observed in the discussion of methods that methods come in a hierarchy, from simple techniques to tactics, strategies, and more abstract and all-encompassing methods like logic and scientific method.  It was observed in several other discussions that knowledge has a hierarchy:  Later conclusions depend, implicitly or explicitly, on earlier ones.  It was observed that violating the hierarchy – advocating ideas higher up in the chain while denying those lower-level ideas on which the higher-up ones depend – was a fundamental error made by those who deny free will.  This post will expand on that theme.

As an example, an individual such as Harris can write an entire book denying the existence of free will, advocating strenuously that man is an automaton whose every idea and action is programmed by his brain chemistry.  During the writing of that book, this individual made thousands of choices, framing words, sentences and paragraphs as carefully as he could to convey his thesis.  Further, he cited many examples of scientists conducting neurological experiments – scientists who chose one experimental design over another, who corrected error by making careful and repeated measurements, and who designed experiments to methodically support or reject hypotheses.  Yet Harris fails to see the contradiction between all that is alleged to support his thesis, and the thesis itself.  The thesis depends on a hierarchy in which the lower levels are used yet simultaneously repudiated.

That is why all Harris’ interpretations of neurology experiments are invalid, whether the experiments themselves were valid in their particulars or not. (The famous Libet experiment, shown to be in violation of hierarchy in an earlier post, has also been critiqued persuasively in its experimental design by Bandura. 1)

It was also observed in the previous post, and in other posts, that the locus of choice fundamentally rests not on any specific, higher-level choices, but on the choice of whether and how to use one’s consciousness.  In other words, the fundamental choice is not whether to take this job or one of the two others on offer.  It’s about how one will make that choice.

It can be seen, therefore, that choice itself has a hierarchy – there is a fundamental choice that conditions and frames the higher-level choices.  Those higher-level choices are not possible without the context set by the fundamental choice.  Those higher-level choices may be said to have reasons that explain them (one’s values, interests, skills), but the fundamental choice about whether to think about the issue at all is a primary, always open to one’s own will and not explainable in the same sense.

Harris and the free-will deniers say that the nearly ubiquitous observation of turning one’s thinking or focus on or off is a delusion.  We can’t trust our observations.  We don’t observe ourselves choosing, they say.  We don’t even observe ourselves making the fundamental choice of bringing focus to our minds.  Rather, we simply “find” we go one way or the other (the choice just appears), and misinterpret that as “choice.”  The last post discussed in detail what is wrong with the idea that the choice just appears.

Consider another example of how writing a book involves issues of hierarchy that the writer who denies free will must ignore.  That writer is writing the book for a purpose, namely to convince others of his position.  He is relying on an audience of thinkers who will read and evaluate his ideas, accepting those ideas because they view the arguments as logical and convincing.  It would be absurd for the author to state that his ideas are no more logical than the ideas of the person who holds the opposite view, that he simply spewed those ideas out automatically rather than their opposite, and that members of his audience only accept those ideas because they are necessitated to do so by their own brain physiologies.  Such a statement would make the author’s attempt to convince us a ludicrous exercise in self-deception.  Yet that author implicitly asserts all of this by rejecting free will.  Only by ignoring the hierarchical dependence of his motivation for writing on the existence of choice in his audience can he simultaneously depend on that choice and wipe out its foundation.

Harris’ most egregious and morally reprehensible violation of hierarchy is in the use of the term “moral” itself.  He cashes in on his denial of choice by objecting to moral condemnation of monstrous criminals:  “Once we recognize that even the most terrifying predators are, in a very real sense, unlucky to be who they are, the logic of hating (as opposed to fearing) them begins to unravel.”  (“Hating” is not actually a moral term – it’s an emotion – but Harris means it as a consequence of a moral condemnation.)    This kind of amorality is to be expected from someone who rejects personal responsibility for any action.  Harris goes further, however, saying “it seems immoral not to recognize just how much luck is involved in morality itself.”  Get that?  He is accusing his opponents – those who support free will – of being immoral.  He is using the concept “immoral” while having denied the base of the concept – choice.

There has to be a method and a motivation behind such a blatant contradiction.  Apparently, he wants to reserve for himself the use of that concept, so he can condemn his opponents, while denying that concept to them.  He violates hierarchy to do so, whereas his opponents do not – they are simply following the implications of free will in using the concept.  Readers who do not deny choice and morality may form their own moral judgments of such a motive and such a tactic.

Hierarchy is not something we happen to observe, as some accident, in issues as different as scientific method and free will.  Hierarchy is inherent in the nature of conceptual awareness.  It is inherent because concepts themselves, the building blocks of all knowledge, are hierarchically dependent.  The concepts formed from the lowest-level concretes of perception (like “dog,” “elephant,” “fish,” “bird”) can then themselves be treated as constituents in a later act of abstraction, for example to form the concept “animal.”  That new concept would be both unnecessary and impossible without the earlier concepts.  “Animal” distinguishes all of the above concretes, and innumerable others, from concretes such as “tree,” “bush,” and “flower.”  This holding of abstractions as “units” that can be treated as a base for going forward is a distinctive capacity of a conceptual consciousness.

When one denies hierarchy, it is a mistake, an error of thought that necessarily invalidates any further thought process.  Just as a car without a chassis has nothing to support its body, an idea or concept that is proposed while denying its logical roots has no support – it’s a self-contradiction.  With respect to concepts, this fallacy is called the “stolen concept fallacy.”  But the broader kind of “stealing” employed by the free-will deniers is much wider in scope and deeper in its pathology.  So much territory must be wiped out, so many facts must be ignored by such stealing, that it is hard to imagine a mind going forward from that point on.  Nothing but absurdities can result from such a flawed reasoning process.  And those who respect reason and logic should diagnose it as contradictory and dismiss it.

To summarize all of the posts on Harris’ book, Free Will:  The first part of this review identified the arbitrary underlying premise behind Harris’ view that past brain states necessitate all future actions.  He simply ignores, without any argument, the possibility that a being could possess capabilities that are enabled by and emerge from the brain yet are not completely necessitated in every detail by the brain’s neurology.  The second part of this review analyzed the gimmick that gives plausibility to the argument, namely focusing only on a straw man (the last split second of the process of choice) rather than the true nature of free will (the entire sequence of mental events and choices from the primary choice to focus and leading up to a final, higher-level choice).  Finally, the present post identified the conceptual inversion involved in denying the validity of free will while depending on it for an argument.  This vast collection of fallacies – arbitrariness, straw-man tactics and hierarchy violations – is the means used by the neurological determinist to deny the universal experience of free will.  Any one of those transgressions alone would be sufficient reason to reject Harris’ arguments, and to accept what one grasps from personal experience rather than dismiss it as a delusion.  The combination of all three logical insults should make one recoil from the free-will denier’s poisonous doctrine.

Critique of Sam Harris’ book Free Will – Part 2 – Sleepwalking

Summary:  Harris mischaracterizes choice – describing it only in terms of the split second the choice is made instead of identifying all the thinking, planning and goal-setting preceding it.  This narrow lens on free will is used by Harris to assert that man is essentially sleepwalking, making choices and taking actions for which there is no explanation and no root in conscious thought.

Harris has a second argument that is emphasized throughout the book.  It might be called:  You didn’t choose – the “choice” just appeared.  He says in regard to his choice of coffee or tea on a particular day (p. 7):

“Did I consciously choose coffee over tea? No. The choice was made for me by events in my brain that I, as the conscious witness of my thoughts and actions, could not inspect or influence…The intention to do one thing and not another does not originate in consciousness—rather, it appears in consciousness, as does any thought or impulse that might oppose it.”

And (p. 33, regarding a pain in his back that led him to consider physical therapy):

“Did I, the conscious person, create my pain? No. It simply appeared.  Did I create the thoughts about it that led me to consider physical therapy? No. They, too, simply appeared. This process … offers no foundation for freedom of will.”

As a final example (of many he gives), he says with regard to a person’s idea of starting a website (p. 37):

“Where did this idea for a website come from? It just appeared in your mind.  Did you, as the conscious agent you feel yourself to be, create it?”

The first quote, and much of the discussion around the other quotes, make it clear that this is not really a second, original argument, but depends on his first argument that mental contents are fed to the conscious mind from unconscious physical causes over which there is no control.  That argument and its arbitrariness have been dealt with in the previous post.  There are some additional things to say, however, about Harris’ idea of choices just “appearing” in the mind with no explanation or conscious antecedent.

Harris’ examples are a mixture of two types of choices – simple taste-type preferences (coffee or tea, vanilla or chocolate ice cream) and more fundamental choices like the one to start a website or to seek physical therapy.  The taste-type preferences can be ignored, because they often are made on the basis of physical attraction and do have a strong element of the physical (taste or smell).  They are also very superficial (but still actual) choices where operating on physical desire is perfectly legitimate.  Note, however, that the mixing in of these types of choices with more fundamental ones is used by Harris to lend credibility to the “choice appears” idea.  In effect, what he is asserting in this package deal is exactly that the more fundamental choices are just like those superficial ones in that they are physical and automatic.

In order to make it plausible that the deeper choices are also of this nature, Harris narrows the focus in describing them to the very split second that the thoughts or intentions were finalized by the conscious mind.  Only in this way can he avoid having to explain why one person has those thoughts and intentions and another does not.  This issue of level is critical to his argument, and to a refutation of it.  Essentially, he mischaracterizes choice as something that occurs in discrete, unrelated little bites.  He ignores the goals a person sets, the thinking he engages in and the other methods he employs that integrate those bites.  The way Harris describes his snapshots of choices is designed to make it natural that they would seem inexplicable.  Yet they are not.

This aspect of level has been dealt with admirably by Ghate as well as by Bandura 1 (in a devastating critique of neurological determinism).  The essence of their counter-argument to Harris will be recapitulated here, and additional points made.  Ghate’s lecture, directed at a young student audience, discusses Ayn Rand’s view that free will must be identified on a very fundamental level as the method by which a conceptual being uses his consciousness.

Consider, for example, two individuals with back pain and how they arrive at the decision to seek physical therapy.  The first individual is not passive about the issue.  He asks acquaintances with back pain what remedies they used, and which ones were demonstrated to work.  He considers their answers, inquires further, and checks the facts, relying on trustworthy references rather than uncritical web browsing.  He gives himself a mental order to revisit the matter at whatever small intervals of spare time his busy schedule affords.  Via this process, he gradually becomes convinced that his problem requires professional help, and that a physical therapist has the skills and training needed.  He even asks his insurance company whether visits to such a specialist are covered under his plan.  On the basis of all this thinking, he decides to seek an appointment with a particular physical therapist.

The second individual takes a very different approach.  He at first dismisses the pain, attributing it to a short-term sprain that will heal on its own.  He puts the matter off for months, sometimes resolving to do something about it and at other times simply ignoring it.  He happens to be at a cocktail party where a woman is rhapsodizing about her chiropractor, assuring anyone who will listen that the man is the best “back-cracker” in town.  The next day, this second individual calls the telephone number the woman gave him, and makes an appointment with her chiropractor.

By Harris’ description of a choice to “consider therapy,” focusing only on the final result and not on the process that led up to it, both of these individuals would seem to be identical, having automatically arrived at the “same” choice.  Yet the two are fundamentally different.  No one can directly “will” a complex choice like this into being. The first individual wills it into being indirectly, via a conscious decision to pursue the matter, to focus on it over an extended period, to seek and weigh information, and to hold the full context of what is involved (such as how to pay for it).  The second individual makes the kind of “choice” that Harris describes as choice:  He drifts, acts emotionally, weighs no evidence and asks no questions.  In effect, he doesn’t consciously make a choice at all: his feelings make it for him, along with the opinion of some other person.  The final choices made by each of these individuals, far from being inexplicable, are perfectly predictable (in direction, if not the details) from the different methods they use.

Those who read Harris’ book will observe that every single example he describes, including two lengthy examples about a person pursuing martial arts or getting fit, is of this second, emotional type.

The difference between these two individuals elucidates where the real level of free will is: not at the level of the narrow decision in question but rather at the level of how to go about making that decision.  What is within one’s direct power of choice is not the end product but the method of getting there – not the specific destination but the train one chooses to ride.   There are subsidiary choices, but underlying them is the basic choice: to think, to exercise all the rational methods at one’s disposal to arrive at a solution to a problem, or to default on thinking and drift.

Harris would no doubt criticize the conclusion being drawn here by saying that each individual’s brain mysteriously feeds him even the method he uses.  This could seem plausible with respect to the second individual, since he drifts without control, like a rudderless boat.

With respect to the first individual, Harris’ conclusion is impossible.  None of the things the first individual does is automatic, nor can it be.  Any number of results could have been delivered to that individual from a web search, and the easiest and most automatic thing to do would be to accept the top results listed.  Exerting oneself to seek reputable results, reminding oneself that not everything one reads is valid, takes an act of focus and attention requiring effort.

More generally, no conceptual idea (whether about how to plant a crop or cure back pain) is innate or guaranteed to be true. Truth must be obtained by a definite process, a process that must include free will.  Perhaps in those transitional eras between the animals and the evolution of man there existed beings who got ideas fed to them by their brains but had no way to check their validity, no way to do anything but follow those ideas – unable to distinguish between food and poison, between a safe cave and an impending rock slide.  Such beings would have perished very quickly.  Then there arrived on the scene the human being, with a faculty enabling him to evaluate the evidence, the reasons for and against an idea, and to reject those ideas that were in error.  This being had to have the capacity to look inward and monitor the operations of his mind, identifying not only the correct ideas from all those presented by the brain but also the correct methods of arriving at those ideas.  Making such a selection – choice – is a necessity for that being, if he is to continue to survive.  What choice such a being makes is not necessitated (if it were, he would have his survival guaranteed).  If he chooses to think, like the first individual in the above example, this being succeeds.  If he defaults, like the second individual, this being harms himself, whether immediately or in the long run.

To reverse a pat phrase of Harris’ in condemning choice:  No one has come up with a mechanism by which such persistent, repeated monitoring, evaluation and re-establishing of one’s direction in the face of error could be the result of neurological impulses or genes or chemistry.  A thought process can go off track.  Using the earlier analogy, the train one chooses to ride can be derailed.  To keep it going requires taking the controls and deliberately keeping it on course.  Harris denies the possibility of taking those controls by choice.  A few narrowly conceived “choice” descriptions, or stories of emotionally driven choices, like the examples Harris provides, might deceive one into agreeing.  The lengthy thought process described with respect to the first individual in the example – not to mention the kind of long-term effort involved in producing the Oxford English Dictionary, or Les Misérables, or the Empire State Building, or the moon landing – unmasks the absurdity of such a denial.

The next and concluding post will identify the fundamental mistake Harris makes in using concepts, and how that error invalidates his argument.

Critique of Sam Harris’ book Free Will – Part 1 – Man as Robot

Summary: The book Free Will, by Sam Harris (2012), rejects the validity of free will and proposes that man is an automaton whose every thought and action is determined by his brain neurology.  In this and following posts, the fallacies in his argument will be identified, the most basic of which is an arbitrary assertion that the physical state of man’s brain at one time necessitates every future action.

“[M]y mental life is simply given to me by the cosmos.”

This remarkable statement is neither the mumblings of a psychotic nor a mystic’s claim to authority. Rather, it represents part of the argument against free will by a highly respected intellectual, Sam Harris.  His 2012 book Free Will is in the top tier of Amazon sellers in its category.  The favorable reviews of this work, by university professors from several different disciplines, demonstrate that he is not only a popular author but an influential writer promoting ideas well-thought-of by his peers.

Despite the popularity and respect this book has garnered, this and subsequent posts will argue that his primary conclusion – that free will is an illusion – is wrong and, in fact, self-contradictory.  His arguments are riddled with fallacies and arbitrary assertions, some of which have been identified in a prior post.

The primary argument presented by Harris is that everything in our minds is determined by the activity of the brain.  Because the brain has a physical state, all future actions can be predicted by physical laws from that state (in principle, if not yet in practice).  Consider the following excerpts (except as otherwise noted, italics are added to emphasize certain of his key ideas):

  1. He says with respect to a depraved criminal who murdered an entire family: “I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: There is no extra part of me that could decide to see the world differently or to resist the impulse to victimize other people.” (p. 4)
  2. Harris states that he could excuse a criminal if the criminal had a brain tumor, and further: “…a neurological disorder appears to be just a special case of physical events giving rise to thoughts and actions. Understanding the neurophysiology of the brain, therefore, would seem to be as exculpatory as finding a tumor in it.” (p. 5)
  3. Again, regarding criminals, he says: “To say that they were free not (his italics) to rape and murder is to say that they could have resisted the impulse to do so (or could have avoided feeling such an impulse altogether) – with the universe, including their brains, in precisely the same state it was in at the moment they committed their crimes.” (p. 17)
  4. “And the moment we see that such [brain-induced] causes are fully effective – as any detailed account of the neurophysiology of human thought and behavior would reveal – we can no longer locate a plausible hook upon which to hang our conventional notions of personal responsibility.” (p. 17)

With phrases like “atom for atom,” “physical events giving rise to thoughts and actions,” “in precisely the same state,” and “brain-induced causes are fully effective,” Harris paints a picture of mental activity as being fully explained by the brain and its chemistry/physiology.  The main assumption here is that a physical state of being is equivalent to a deterministic cause.  That is, if every atom or neuron is in some definite physical state, has some particular describable identity, then every action of the mind must be fully necessitated.  In short, identity invalidates volition.

Nowhere, however, does Harris even attempt to support this claim, let alone prove it.  He assumes it is true without question, and he brings this assumption to his interpretation of everything related to free will.  It is apparent that Harris views this point as so obvious as not to need any validation or even discussion.

But this conclusion is not at all obvious. A being that could choose would also have a physical identity, a describable physical state.  How could it not?  Living beings are not immaterial ghosts.  Beings with no physical specificity are a fiction of the mystics.  Every choosing being would still be a being with a particular orientation of its genes, neurons and brain molecules.  Why does that fact alone invalidate free will?  Why is it impossible for a being to be so constructed physically that it has the mental capacity for choosing? Why can’t the ability to choose be the result of the combination of its physical elements, just as the aroma of a rose is the result of the combination of its atoms?  Why would choosing have to be something “extra?”  Harris neither asks nor answers any of these questions.  He simply makes the assumption that specificity (“atom for atom”) contradicts choice.

A basic premise that is so fundamental as to be taken for granted and not even argued for must have at root some assumptions that seem obvious to the author and to others. We can see a hint of what Harris is assuming by looking at what he emphasizes.  His repeated use of phrases like “atom for atom” and “physical events” suggests that for him the ultimate reality is not a living being and its capacities, but rather the chemicals and small corpuscles that make it up – and particularly their actions.  Focusing not on the resulting being and its observed abilities but on the actions of its building blocks, he looks to those actions for basic explanations.

One of the most important activities of science is the study of underlying mechanisms, and it is perfectly legitimate for scientists to pursue it.  The field of neuroscience, the study of the brain and neural system, has resulted in many valuable discoveries, especially in medicine and health (examples include spinal cord disorders, stroke, dementia and Parkinson’s disease).  Properly, however, when an underlying mechanism is discovered, scientists do not deny the existence of the phenomenon they are studying.  For example, when it was discovered that the eye’s experience of color is made possible by cellular structures called “cones” in the human retina, the phenomenon of color vision was not made synonymous with those structures.  No scientist (only certain philosophers) came along to deny the existence of color vision, saying it is just an “illusion.”  Even in the purely physical sciences, observations are not discounted as illusions when underlying explanations are found.  The table in front of you is no less real, and it continues to be flat, hard and level, when you learn that it is composed of atoms, and those atoms of protons, neutrons and electrons.

Harris and those determinists who agree with him take the unique approach of saying that since there are underlying physical mechanisms that enable choice, choice doesn’t exist.  He is perfectly willing to grant that we observe choice, experiencing it on a day-to-day basis, but he claims it isn’t really real. “For most purposes, it makes sense to ignore the deep causes of desires and intentions – genes, synaptic potentials, etc. – and focus instead on the conventional outlines of the person.  We do this when thinking about our own choices and behaviors – because it’s the easiest way to organize our thoughts and actions …  Knowing that I like beer more than wine is all I need to know to function in a restaurant.  Whatever the reason, I prefer one taste to the other.  Is there freedom in this? None whatsoever.” (p. 59) Why does Harris believe that there is no freedom in this?  Because it’s enabled by those underlying mechanisms.  In other words, because a capacity has its roots in the brain, the capacity doesn’t actually exist.

The argument Harris presents, it should be noted, would not only invalidate choice, but consciousness as such (this was identified by Ayn Rand in her critique of Kant – see below).

Harris goes further in his attempt to make this plausible.  He argues (p. 25): “How can we be ‘free’ as conscious agents if everything we consciously intend is caused by events in our brain that we do not intend and of which we are entirely unaware?”  He is asserting here that these underlying events do not just enable man to intend things (i.e. make choices) but that those underlying events cause those intentions.  He denies that this is real choice.  The emphasis on actions of the brain as causal is a key theme with Harris, as when he states that “the next choice you make will come out of the darkness of prior causes that you, the conscious witness of your experience, did not bring into being.”

This causation argument adds nothing beyond what he has argued so far: an assertion that everything is necessitated by the prior state of the brain.  He is merely adding that, besides future actions being necessitated, their causes are mysterious and inaccessible to man.

There is a trend in modern philosophy since Kant to view everything regarding man’s consciousness in this way.  Kant’s categories are built-in structures (Harris’ “events in our brain”) that fully determine what we think we perceive.  We only think we actually perceive spatial relations, says Kant.  In fact, that appearance (the “phenomenal” world) is caused by the categories, and isn’t really real.  In addition, Kant’s categories are inaccessible to man (like Harris’ “darkness of prior causes”).  Man cannot get under them, or know true reality (the “noumenal” world).  Harris, then, is simply applying Kantian argumentation to the phenomenon of choice.

The tie to Kantianism adds no validity to Harris’ argument that identity invalidates volition.  The assertion is still arbitrary and presented with no evidence to support it (and as shown elsewhere, Kant’s own argument that identity invalidates perception is also arbitrary and self-contradictory).  The fact that Harris is in the mainstream of modern philosophy merely helps one understand how his underlying premise could be held by him as so unquestionable as to need no discussion.

Harris has many additional and subsidiary arguments.  Several of them, however, involve erecting straw men as descriptions of what free will entails – straw men that he can easily knock down.  These and other arguments will be addressed in upcoming posts.

Faraday and Hamilton

Summary:  Read about men who exercise their free will by choosing to focus, reason and develop new knowledge across their entire lifetimes.  Such a choice, whether made in the physical sciences or the humanities, can move man to unprecedented new levels of prosperity and happiness.  Here are two such exemplary figures.

The last two posts have asked you to dwell with the bottom feeders who evade the responsibility of thinking, adopt a victim stance, and even deny the existence of choice. This post will let you swim to the surface and breathe the fresh air of men who adopted the opposite approach.

Michael Faraday (1791-1867) and Alexander Hamilton (1755-1804) were two figures of the Enlightenment period.  This was a period in which it was fully accepted that man can think, that reason is efficacious, and that by his own choice and actions he can raise himself to the heights of knowledge, productive achievement and wealth.  In the Renaissance, men had begun to study the ancient volumes newly available in Europe, and now they pushed the boundaries further – in medicine, science, politics, and other practical arts there was a new flourishing, a new outpouring of discoveries, books and improvements.  Faraday and Hamilton, each in his own way, adopted the new philosophy of reason, and built on it to discover and implement the innovative, life-enhancing values of the future.

Faraday was born into a poor family and had only the most basic education until he was 13.  He was apprenticed to a bookbinder, a position Faraday earned by impressing a bookshop owner he worked for.  Unlike other apprentices, Faraday actually read many of the books he bound in the shop, in his spare time after a long, hard day as an apprentice.  He read the 2nd edition of the Encyclopedia Britannica and a 600-page chemistry book, among many others.  He spent money from his small salary to buy and experiment with chemicals and chemical apparatus.

A customer at the bookshop gave Faraday tickets to a series of lectures by the famous scientist Humphry Davy at the Royal Institution in London.  After the lectures, Faraday sent Davy a 300-page summary of the series, which so impressed Davy that he ultimately hired Faraday as an assistant.  From this point on, and for several decades thereafter, Faraday performed meticulous and original experiments in chemistry and electricity, invented new equipment (for example, a precursor to the Bunsen burner), discovered electromagnetic induction and developed the science and practice of electric motors.  In addition, he was a lucid lecturer, who began the Royal Institution’s Friday lecture series and delivered lectures on a wide range of topics for many years.

Nothing better characterizes Faraday’s mental approach than a description of one of his most influential investigations.  He had been repeating earlier experiments by others related to magnetic fields around electrical coils, looking to see if those fields could induce current in other nearby coils.  Experiments with different materials, different numbers of loops in a coil, or different voltages on the coil all ended in failure to see any current in a nearby coil.  However, Faraday noticed that a small current was visible at the moment he connected or disconnected the coil’s battery wires.  Rather than shrugging that tiny effect off as an anomaly or something impossible to understand, he studied it in detail.  He designed several different kinds of experiments – with different shaped coils, with circuits involving passage of current through brine, with different materials – to evaluate this phenomenon.  Ultimately he identified the fact that it was the movement of a magnetic field that induced a current, not the magnetism itself, and that the effect was proportional to the rate of movement.  Faraday described these experiments thoroughly in the Philosophical Transactions of the Royal Society of London in 1832.
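As an aside, and only as a compact modern summary (the notation postdates Faraday, who reasoned geometrically rather than algebraically), the relationship he identified is today written as Faraday’s law of induction: the induced electromotive force equals the negative rate of change of the magnetic flux through the circuit,

\[
\mathcal{E} \;=\; -\,\frac{d\Phi_B}{dt}
\]

so a magnetic field that is merely present, however strong, induces no current, while one that changes rapidly induces a proportionally larger one.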

Faraday made certain basic choices in life: to devote himself to thought, deliberate observation, experimentation, and conceptualizing scientific phenomena.  He chose to do this not for a week or a year, but for a lifetime.

Hamilton, like Faraday, had impoverished beginnings.  “The bastard brat of a Scottish peddler,” in John Adams’ graphic description, Hamilton was poor in knowledge, money, and position, a youngster stranded in a Caribbean island backwater. He started working at the age of eleven. At one job, as an accounting clerk for a firm engaged in international trade, Hamilton studied accounting and trade, and impressed his employer.  He also impressed a minister and newspaper editor by his articulate letter describing a hurricane that hit the island.  The employer and the minister decided to help Hamilton get a full education by agreeing to pay his way to America.  Once in America, Hamilton’s self-improvement program accelerated.  He studied languages, law, politics and military history and strategy.  Arriving just before the Revolution against Britain, he became convinced of the Patriot cause, and wrote political articles defending and explaining it.

Early in the war for independence, Hamilton’s ability impressed George Washington, who appointed him staff assistant and adviser.  Hamilton’s self-developed writing ability enabled him to write many (some say most) of Washington’s military orders to other commanders.  Near the war’s end, at the Battle of Yorktown, Hamilton fought brilliantly, leading a daring charge against a British redoubt.

A student of military history and political philosophy, Hamilton observed how chaotic the war effort was, with independent militias each contributing to a heterogeneous and poorly trained army.  This was the germ of what later became a conviction that America needed not a loose and disorganized Confederation but a central government.  Such a government would respect the rights of the individual but have the authority to protect those rights adequately.  Hamilton (and the other Founders) thus discovered new knowledge in the arena of political philosophy.  Hamilton ultimately supported the drafting of a new Constitution of the United States, attended the Convention that produced it, and promoted its ratification.  His arduous effort to explain the Constitution to the American people in numerous brilliant essays (two-thirds of which he wrote himself) represents one of the greatest achievements in political history, and was instrumental in the Constitution’s adoption.  Besides those achievements, he had a successful law career and became Washington’s first Secretary of the Treasury.

Like Faraday, Hamilton made certain basic choices in life: to devote himself to overcoming ignorance and poverty, to understanding the complexities of law and political science, to explaining those complexities to the world.  Like Faraday, he chose to do this not for a week or a year, but for a lifetime (his tragic early end notwithstanding).

It is common for those of the “you didn’t build that” school to focus on the help men obtained rather than on a) what they did with that help, and b) what they did to deserve that help. Faraday was offered a job by Humphry Davy. Hamilton was offered a formal education in America by friends and benefactors on St. Croix. Neither man ever denied the help he obtained, or tried to claim his achievements didn’t benefit from it.  More importantly, however, neither man stopped there. Each, starting with that help, built on it.  Further, the help itself is a testament to their self-made value: It was bestowed on them because of the value seen in them by their benefactors, value that preceded the offer of help.  Therefore, for each man, what he built, he did build.

More generally, it is absurd to say that a successful surveyor didn’t build his career because he relied on geometry, which was developed by Euclid in ancient Greece.  By the same argument, it is wrong to say that men like Faraday and Hamilton didn’t build successful lives, just because they relied on benefactors, or on the books they learned from, or on the roads they traveled on to get to their schools.

With regard to free will, there is an even more fundamental point that can be made from observing the lives of these two men.  They were men who discovered new knowledge, one in physical science and one in political science.  Those who deny free will have a serious problem to confront with anyone who discovers new conceptual knowledge:  How did he do it?  Without free will, it is impossible to explain how error is overcome.  Man’s conceptual level is fallible.  It is not guided automatically to the correct identification of essentials, the proper formulation of definitions, or unerring generalizations.  These require a definite method: the discarding of error, the awareness of contradictions, sifting, analysis, editing, selection.

The free-will deniers would have a more plausible case if they tried to explain the mind of an animal, whose perceptual level is a product of what the brain automatically does with the external stimuli impinging on it.  They might even have a case to make about the type of mind that spits out un-thought-out tweets or unsubstantiated assertions.  That sort of mind is operating automatically (but only because its driver has, by choice, relinquished control of his steering wheel).  The scientific mind, by contrast, with its unremitting dedication to facts, its continuous error correction, its high level of focus and attention, its discarding of unfruitful methods like tea-leaf reading, cannot operate on automatic.  The development of knowledge requires a certain process, a sequence of steps, often spanning decades, without which nothing of value automatically appears.  Certain men engage in this process, and others do not.  And those, like Hamilton and Faraday, who engage in such a process, are doing something different with their minds than those others who do not.

On automatic, man’s mind produces falsehood, confusion, bull sessions.  When there is conscious direction, man arrives at knowledge, clarity, The United States of America and the electric motor.

Free will and mass terror

Summary: Totalitarian and terrorist movements make a direct, explicit attack on the idea of free will.  Read how intimately connected the 9/11 Twin Towers attack and the issue of free will are – even more intimately than you might think.

The previous post identified the kinds of mental patterns at the root of destructive and criminal behavior.  First, the mind is de-focused, so that the context of the consequences of an act is blurred or obliterated.  Next, various thought patterns are employed to justify and sanction the behavior, further shielding the actor from the nature and consequences of the action.  The actor even thinks of himself as a good person: He engages in acts that simulate moral goodness based on a substitute standard that he adopts to help him pretend that he is not the monster he is.  Once the mind is reduced to a state of fog by all of these means, and only when that state is induced, repetition of the criminal acts becomes tolerable to the felon.

In keeping with the theme of these posts, all of these mental patterns are chosen.  If a person has a normal brain, he has the capacity to focus his mind, to choose to see the nature of his actions, or not.  Piling on mental defenses and substitutes for self-esteem is chosen as a means of, first, avoiding effort, and second, blocking the pain that would result from awareness of his true character and the nature of his actions.  This person’s actions are both explainable and chosen, in the sense that the defenses help explain the lack of awareness, yet fundamentally the defenses themselves are chosen.

Despite all these mechanisms, the choice to adopt awareness, to focus one’s mind, to repudiate and change past thinking patterns and past actions, is always possible.

There are actions, however, that are so morally repugnant, and require planning over such a long time with a sustained effort, that there are repeated reminders to the actor of the nature of his actions, reminders that threaten to break through all the defenses so far discussed. One such action of this type is mass murder and/or mass terror, characteristic of totalitarian movements such as communism, fascism and Islamic fundamentalism. In these cases, further mechanisms are adopted to help the actor in his effort to block awareness of reality, lest he see what he is doing and recoil from the sight.

What are those further mechanisms?

Terror requires first and foremost a moral sanction of mass murder.  Each of these movements has an enemy, and grudges, that it claims justify the movement’s actions (similar to the criminal described in the last post, but on a much wider scale).  They derive the sanction for their horrific actions from fully developed ideological systems.  Such ideologies do not always originate with these movements.  They may derive from the writings of an “ivory tower” philosopher of previous decades or centuries, whose thoughts are only now being translated into their logical conclusions.  Hegel, with his argument for the primacy of the State, led to both communism and Nazism. Nietzsche’s emphasis on the Superman and the remaking of the human race also contributed to both movements.

A very powerful and necessary adjunct to a moral sanction is an explicit rejection of free will.  These movements tell their storm troopers:  You are both justified in engaging in murder and you cannot do otherwise.  Each of these movements actually makes a concerted attack on the idea of free will.  Communism has as part of its ideological foundation Hegel’s historical necessity, by means of which history unfolds in a predetermined pattern and requires (in the Communist’s view) the deterioration of capitalism and its development into its opposite.  Fascism has various flavors, but the nationalist variety practiced by Hitler, Mussolini and others requires the unfolding of a historical plan or “destiny” to justify plunder and expansion by conquest.  Islamic fundamentalism takes literally and fully seriously those (many) passages in the Koran that speak of Allah as all-powerful, and of history as evolving according to His plan for Islam to conquer the earth.

Few people realize it, but the issue of free will was of fundamental importance in the lead-up to the 9/11 Twin Towers attack.  The worst terrorist attack of the fundamentalist Islamic variety (communism has done much worse), 9/11 involved four teams of terrorists managed by their leader, Mohamed Atta. Atta’s personal effects contained what the FBI called a “spiritual manual,” a numbered instruction sheet in which he exhorts the other terrorists to stick to their purpose and not let the difficulties or any moral repugnance deter them.  This document, full of quotes from the Koran and references to past battles in the history of Islam, contained these words under the section titled “Last Night”:

(#9): “…remember that you will return to God and remember that anything that happens to you could never be avoided, and what did not happen to you could never have happened to you.”

(#14) after suggesting some practical steps to succeed in their plan, says “[…although God decrees what will work and what won’t] and the rest is left to God, the best One to depend on.”

These instructions, given to followers who were presumed to be committed Islamists, testify to the fact that the planners of the carnage were themselves worried that those who implemented it might change their minds at the last minute, either aborting the mission or compromising it with weakness and lack of resolve.  The propagandizing in Islamic fundamentalism had been progressing for these men all their lives, and certainly in recent times before the attack.  Yet they were still subject to choice.  And the planners, knowing that, needed extra tools to continue the “instruction,” and to further sustain the acolytes in continuing their out-of-focus submission to the plan.

Although the most dramatic example, 9/11 is not unique.  Suicide bombers have testified when their plots were foiled that they believed they were already dead – even before the act.

It was explained earlier that choice is axiomatic.  Even in the act of denying it, free will is confirmed to be true.  The example of the 9/11 attackers illustrates a further aspect of this point:  The choice is not a one-time event but continuous.  Each and every moment, a person has the choice to assert mental control, process data rationally, think – or to refuse to do so.  Danger lurks around every corner for those who choose the foggy state of mind, with an unyielding reality ever-present and threatening to break through the fog, stopping the deliberately unaware individual in his tracks.  Continuing evasions are needed at each moment.

The philosophic systems that reject choice, and build into their ideologies justifications for doing so, are merely using weapons to trap the mentally passive, giving such individuals excuses for further evasion.  Such evaders, when confronted with the order to imprison, torture and murder enemies, carry out the instructions without conscious qualms.  That is why those philosophical ideas are so dangerous.  The systems can’t make terrorists perform their acts but they provide terrorists with ready rationalizations for choosing to do so.

The will is free even in the face of powerful emotions

In the last post, it was shown that an emotion cannot be willed out of existence.  The reason is that an emotion is a consequence, and so it cannot be controlled directly.  What can be controlled are the thoughts and values that underlie the emotion.  These are within one’s power of choice.  Once the thoughts leading to an emotion are changed, even with the same object observed, the emotion changes accordingly.

There is a further aspect of emotions that needs to be addressed: What about overpowering emotions, emotions so powerful that in the moment of experiencing them they seem to control one’s action to the point where one is not free to do otherwise (the idea behind the “crime of passion” defense)?  What about what people call psychological “addiction,” according to which a person is compelled to act in a certain way, such as to overeat, to stay in an abusive relationship, to be the aggressor in an abusive relationship, or to gamble to the point of financial ruin?  Are these counter-arguments or qualifications to the existence of free will?

The first point to make about these questions is that there is a fifth step after the four steps previously discussed in regard to emotional responses.  The four steps were, in their necessary order: perceive, identify, evaluate and respond emotionally.  For the current discussion, there is a fifth and critical step: action.  This is a separate step, and never subject to becoming fully automatic.  No matter what emotion one experiences, nothing can make a man actually raise his fist against his wife.  He might feel like doing it, want desperately to do it, but he is always free along the way, and even just before the action, no matter how powerful the emotion seems, to stop, “take a deep breath,” and reconsider his action.  Recall that the neuroscientist Benjamin Libet, most often quoted for his scientific experiments allegedly proving that free will doesn’t exist, himself strongly defended free will precisely because of this possibility of aborting the action.  As he stated: “The existence of a veto possibility is not in doubt.”  (Libet’s critics have argued against this conclusion by stating that if neurons caused the onset of the action, they caused the veto of it as well, so even the veto is not “free.”  This and other arguments of the neurological determinists will be addressed in upcoming posts.)

You, the reader, have experienced this situation many times in your own life.  Perhaps you have had a passionate desire to go to the movies when you knew you should study for tomorrow’s exam.  In the times when you exercised the proper control, you saw yourself say, “No, I can do that tomorrow night (or this weekend), but right now I must study or I’ll do poorly on the exam.”  Even prior to this reassertion of your values, there is a more basic step: putting your mind in focus, deciding to think about and address the conflict.

In cases where a person did not exercise the proper control in the face of a powerful emotion, he could also see a different choice or set of choices being made: He could see himself, if he slowed his thinking processes down to slow motion, saying “F** it – I wanna see the movie, so I’m going out with my friends.”  What has happened here?  This individual has chosen to ignore the context of the situation, to force from his mind the need to study, to de-focus his mind and literally not think, taking his emotion as a primary.  All of these actions are precisely what is in the realm of free will.  Free will, as was stated in the first post, consists fundamentally of the choice of whether and how a person will use his mind.  It can be easily seen from the above description that the individual did make a choice, though in this case the individual chose not to think or exercise his mind.  He engaged in a process of actively avoiding such thinking and evaded the knowledge that such thinking was required.

The emotion in the situation just described is not as powerful or all-encompassing as some emotions.  What about “overpowering” emotions?  It is inadvisable to even call these “overpowering” emotions because such terminology already presupposes that control is impossible in the face of them.  However, there are emotions that stem from deeply rooted patterns of behavior, perhaps years or decades of evasions of the type just described, that can seem overpowering to those experiencing them.  Still, they are not.

One of the best proofs of that fact is the example of the rehabilitation of criminals and addicts.  By “rehabilitation” here is not meant the kind of walk-in-the-park programs where such individuals are forgiven their transgressions and dealt with tenderly.  The rehabilitation referred to here is real rehabilitation, where a career criminal or a life-long addict is changed to the point of never committing such actions again.

In criminal science, a multi-volume work by Samuel Yochelson and Stanton Samenow titled The Criminal Personality demonstrates the method and the process of such fundamental change.  These two psychologists (Yochelson, also an M.D., was the one who first developed the methods they describe) performed their research and practiced at Saint Elizabeth’s Hospital and George Washington University Medical School.  Their program participants were, indeed, “reluctant converts” (the title of their first chapter): career criminals.  Such men are not simply petty thieves but rather men who have devoted most of their lives to committing thousands of property crimes, and even murder, stopping only when they are imprisoned.  Yochelson and Samenow discovered a consistent pattern of thought among such criminals.  For example, one thought pattern was the view that anything belonging to someone else was really the criminal’s own property, being kept in good hands by its current “caretaker” until the criminal could come along to relieve him of it (the authors call this thinking pattern “ownership”).  Other examples are “fragmentation,” a “rapidly fluctuating mental state” in which the criminal chooses not to focus or concentrate, and “victim stance,” in which the criminal considers himself the victim of forces outside his control.

Fundamental to the criminal’s approach are habits of thought that facilitate the other criminal thoughts and behavior.  These criminals engaged in a process of what Yochelson and Samenow called “cut-off,” which is basically the same action of evasion described above with respect to the student who went to the movies instead of studying.  Cut-off is the blocking of one’s own thoughts.  If the criminal does begin to think about the fact that the property of another person is really and rightfully that person’s own and not the criminal’s, or if he does begin to have empathy for the person who would be subject to his depredations, he cuts those thoughts off.  That is, he evades them so that he can continue with his actions and not be consciously confronted with the horror of his chosen form of life.

Further, such a criminal has the thought that he is “really” a good person, even though other people “think” he isn’t. Of course, a criminal who has spent the majority of his life preying on others, who has no respect for life or property, who has evaded rational thought time and time again, cannot authentically experience that he is a “good person.”  The criminal is engaging in yet another evasion when he repeats to himself that he is a good person, when he gives money to charity or helps others, all in an attempt to falsely construct the appearance of the self-worth that he lacks and that can only come from thinking and achievement.

In rehabilitating such individuals, Yochelson’s method is to fully confront them with the depravity of their actions.  He tells them that they are morally corrupt, and doesn’t let them get away with any evasions, soft-pedaling or substitution of pseudo-self-esteem for their true moral worth.  Then Yochelson systematically rebuilds their thinking processes by observing and confronting them with their facilitating thoughts, session after session, week after week, month after month.  He requires them to re-think and re-state the actual truth of the situation, and to act accordingly.  Needless to say, there are many drop-outs from the program, since criminals have choice not only about their behavior before prison but also about whether they are willing to endure the grueling psychological makeover Yochelson’s program requires.  However, those who make it through the program do live a non-criminal life afterwards.

The same sort of total makeover is required for those addicts (such as alcoholics) who choose to change.  The thought of the alcoholic, for example, when he thinks about never drinking to excess again, might be that it is too overwhelming to contemplate such a massive goal as “never.”  When programs such as Alcoholics Anonymous tell him to re-orient his thinking to set a goal of “not drinking today,” they are focusing on a change in the destructive thought patterns that stop many who drink excessively from being willing to change.  (Which particular program an alcoholic enters is not the subject of this discussion.  However, the specific contents of some of these programs are certainly open to challenge, for example the goal of seeking a “higher power.” What is relevant here is that the change in a person’s actions is a consequence of a change in his underlying thinking.)

Nothing demonstrates the fact that thinking controls emotion, and that action is always under one’s control, more than examples like these of such extreme personality change.  Excuses, evasions, pretenses at really being good, or at “being able to stop drinking at any time,” are what make the previous destructive behavior possible.  By contrast, the full conviction that one can choose how one thinks and acts, and the commitment to doing so, are what make the subsequent productive behavior possible.

Free will cannot include willing an emotion to disappear

Recall from the introductory post that willing an emotion to disappear is not included in free will.  Why is that, and what is the relation between emotion and free will?  Do those who say that an emotion “made” them do something have an argument against free will?  In this post, the first of these questions will be addressed.

Emotions are self-evidently present in our awareness – in fact, they represent a large part of man’s conscious life.  Emotions are the means by which we experience an evaluation of the objects of perception.  A man is reading a news story.  Is the subject discussed (a new law, a recently opened play, a technological breakthrough) for his values or against them, or of no relation to them, and in what way?  Those differences in the evaluation would determine whether his emotional response is positive, negative or neutral.  If the new law discussed in the news story represents a cherished value to the man, he will feel elation.  If the law has no particular significance in the hierarchy of his values, he will experience no emotion.  If the law is opposed to his values, or he considers it nonsensical, he will have a strong negative emotion.  Both the positive or negative relation of the law to his values, and the importance to him of those values, determine the specific emotion and its strength.

For the same reason – the connection to his values – the story about the new law may generate a very strong emotion, whereas the story about the recently opened play may generate nothing but a yawn.  A single individual will therefore react very differently to different objects.  Further, the very same object will produce different emotions in persons with different values.  Another man may read the exact same news story about the law and experience boredom, skipping to the next story.  That second man might be excited about the opening of the new play because its plot is about a topic that he values highly.

These examples demonstrate that the emotion is a consequence of a perception of some object and of one’s evaluation of it.  The key here is that it is a consequence.  Emotions are not primaries, but require these two antecedents.  The emotion is a consequence of the value-significance of each set of facts to each individual, a consequence of the entire mental context of the person who experiences it.

It is not hard to see, therefore, why man is not “free” to will an emotion to immediately disappear:  If it is a consequence of the object and the value, as long as both are present the emotion will be present.  To change the emotion requires a change in either the value(s) or the fact pattern (for example, the man excited about the opening of a new play might have his emotion diminished when he reads at the end of the story that the opening is in a distant city).  Absent one of these, an emotion cannot be willed to disappear, or even to be modified in its intensity (as in: “Don’t feel so sad…”).

The “fact pattern” as described in the previous example is really a combination of two steps: perception and identification. For example, the man reads the story about the new law – that is just a perception.  The meaning of the perception is: something I consider fundamentally important has just been enacted into law.  This occurs before any evaluation, and hence before the emotional response.  All of these steps – perception, identification, evaluation and emotional response – are conceptually separable, but normally some of the steps are automatized subconsciously, and not immediately differentiated in our awareness.  This automatized combination of several steps may make it difficult to untangle complex emotional responses.

Though an emotion cannot be willed away, it may dissolve due to a person’s analyzing the underlying premises and changing one or more of them.  This is common in stressful situations.  An athlete experiences high anxiety, but asks himself the source of the emotion.  He sees himself thinking “I’ll never win this event. The competitors are so strong.” He then reminds himself that he has trained for this event, and has every reason to expect a good outcome.  The emotion of anxiety that he’d experienced moments before, prior to analyzing and correcting his mistaken premise, is immediately resolved to a normal level of healthy stress.

In such cases as this, however, an emotion has not directly been willed away.  On the contrary, a conclusion has been changed, an idea rejected and replaced with another one – in the athlete’s case, a better one, one representing the truth.  In the new context, with the new premise leading to a new identification of the meaning of the situation, the emotion changes.

What has just been described is the correct approach to emotional change, as opposed to attempting to directly will the emotion away.

The primary historical advocate of willing emotions to disappear has been religion.  Consider the following from two of Christianity’s most admired men:  St. Francis, when he was tempted by sexual desire, would “plunge into a ditch full of snow, that he might both utterly subdue the foe within him, and might preserve his white robe of chastity from the fire of lust” (St. Bonaventure’s account). St. Benedict, on an occasion when he was tempted by sexual desire, dealt with it this way, according to Gregory: “Seeing near at hand a thick growth of briars and nettles, he stripped off his habit and cast himself into the midst of them and plunged and tossed about until his whole body was lacerated. Thus, through those bodily wounds, he cured the wounds of his soul.”  When emotion was opposed to religious edicts, the heroes of Christianity simply attempted to emasculate the emotion.  Such counsels are not confined to past centuries.  A twentieth-century theologian, Josemaria Escriva, is famous for this saying: “To defend his purity, Saint Francis of Assisi rolled in the snow, Saint Benedict threw himself into a thorn bush, and Saint Bernard plunged into an icy pond… You – what have you done?”

Even though the West has secularized since the Renaissance, and such men are not universally revered as moral models, the West has never fully jettisoned their approach to morality or to emotional conflict.  Men still retain many attitudes about how to be moral that are rooted in those earlier times.  With regard to emotions, it is not unusual even today for a parent troubled by a child’s fear or pain to counsel the child with phrases like “don’t be so afraid” or “cheer up.”  Such advice can hardly fail to be interpreted by a vulnerable child as a suggestion to rid himself of the emotion in any way possible.

This mistaken approach to dealing with threatening emotions can lead to only one of two outcomes: 1) the emotion is apparently willed away, though in fact it is simply submerged so one is not aware of it consciously.  In such cases, the emotion is not actually eliminated because its antecedents are still present, though forced into the subconscious.  The emotion often re-emerges at some future time. Or 2) the emotion continues despite the attempt to will it away (as happens in the case of deep-seated fears such as phobias). The emotion remains due to a person’s not analyzing and eliminating the subconscious premises and values that gave rise to it.

The proper approach is, first, to reject the idea that free will includes omnipotent power over emotions.  There is ample evidence to reject that idea, and every reason to avoid the harmful effects of adopting such a policy.  Second, in trying to change a destructive emotion, one should analyze the ideas and values underlying the emotion, and correct mistaken ones.  Trying to suppress a consequence cannot work.  And living with an intolerable consequence (such as unanalyzed emotional conflict) cannot be justified when it can be corrected and lead to emotional harmony.  As the example of the athlete in competition illustrates, such harmony is achievable if one uses the correct method.