The history of free will thought – the ancients other than Aristotle

Summary – This post presents the views of the ancients other than Aristotle.  These philosophers cast doubt on, or turned away from, accepting free will.  Atomists presented theories that strongly implied determinism, even if the implications of those theories were left to later thinkers.  The Stoics emphasized fate and determinism.  Members of both schools tried to reconcile determinism with a degree of “freedom” that would allow punishment and reward. 

The previous post presented Aristotle’s views on free will, a promising beginning for free-will thought, even though not everything Aristotle said was correct.  Another prior post presented the unsuccessful attempts of the medieval theologians to reconcile free will with supernaturalism.  In this post, the views of philosophers whose theories implied rejection of free will are presented: The Greek Atomists and Stoics, and their Roman interpreters.

Atomists

Atomist physics postulates that everything in the universe is composed of only two things: small indivisible particles (atoms) and the void.  In the words of Democritus:  “Only the atoms and the void are real.”  It may appear that there are many things, not just atoms, but those things and their motions are nothing but rearrangements of the atoms.  According to Atomists, the atoms move by inexorable causal law, like billiard balls being struck by other billiard balls.  Even the human mind and thought, on their view, are only complex movements of the atoms, governed by the same inexorable law.  As a consequence, everything that goes on in the mind, even the appearance of choosing, is actually completely determined.

Surprisingly, the earlier Atomists like Democritus did not see, or at least did not explore in their fragmentary writings, the deterministic implication.  Only a later Atomist, Epicurus, did.  Disturbed by the implication, he modified the theory to allow for choice, virtue, punishment and reward.  His solution consisted of asserting that the atoms made unpredictable “swerves,” motions that were not governed by law but were surprising, indeterminate phenomena.  As Lucretius, Epicurus’ Roman expositor, said: “[T]hat the … mind feels not some necessity within in doing all things, and is not constrained like a conquered thing, to bear and suffer, this is brought about by the tiny swerve of the first-beginnings in no determined direction of place and at no determined time.”  Epicurus, therefore, countered complete determinism with complete indeterminism, attempting to refute absolute necessity by postulating chance.

This is one of the earliest false alternatives posited in the history of free will thought, and it persists to this day.  Free-will deniers such as Harris hold, in Atomist-inspired fashion, that the brain cannot do anything other than what it is determined to do, and they accuse anyone who argues for free will of advocating complete chance – of asserting that causation doesn’t exist.  Forgotten in this accusation is the position of Aristotle:  choice is not chance, but a kind of first cause appropriate to the rational, deliberating individual.

With free-will thought having taken this unfortunate path, later thinkers would have to either adopt the determinist line or reject Atomist theory.  The Stoics continued the determinist tradition but attempted to reconcile it with free will in a different way than Epicurus.

Stoics

The Stoics were explicit advocates of determinism.  They accepted the ancient Greek idea of the unity of nature, and argued that everything was “in accordance with nature.”  To the Stoics, this meant that all events were determined, the result of fate.  An explicit statement of this is by Hippolytus, referring to the two most well-known Stoics: “These men [Chrysippus and Zeno] maintain the universal rule of fate by using the following illustration.  Suppose a dog to be tied to a wagon.  If he wishes to follow, the wagon pulls him and he follows, so that his own power and necessity unite.  But if he does not wish to follow, he will be compelled to anyhow.  The same is the case with mankind also.  Even if they do not wish to follow, they will be absolutely forced to enter into the fated event.”

Just like Epicurus, however, the Stoics were unwilling to reject morality, reward and punishment.  After all, the Stoics themselves advocated an explicit morality, one that follows from their metaphysics of fate:  Act in accordance with nature, in other words, accept your fate and don’t rebel against it (act like the dog willingly following the wagon, in the same direction and with the same speed, and don’t struggle or worry about your fate).  This Stoic viewpoint of acceptance, of apathy, has come down to modern times as the word “stoic.”

How could advocates of determinism still make recommendations about how to act, saying that the man who did not accept fate apathetically was acting against morality?  How could they propose to punish wrongdoers and reward virtuous men?  They did so by introducing a trick, a distinction between kinds of causes.  Chrysippus, quoted by the Roman writer Cicero, says: “… perfect and principal causes are one thing, auxiliary and proximate causes are another.  For which reason, when we say everything happens by fate and antecedent causes, we do not mean perfect and principal causes, but auxiliary and proximate.”

This distinction is illustrated with a physical analogy:  If a cylinder or a top were to be put in motion, it would eventually rotate in a particular way according to its nature.  The original impetus is an auxiliary and proximate cause, a cause for sure, but not one that completely necessitates every aspect of the motion.  The spinning or rotation of the individual object is caused by its own nature, by the kind of thing it is:  A cylinder rotates on an inclined plane with its axis parallel to the plane.  A top spins with its axis perpendicular to the plane.  The Stoics extend this analogy to man, arguing that it is his nature to want or will things, and that in that sense man is like the cylinder or the top:  Fate is the auxiliary cause, but man’s internal will is what originates his actions in response to fate.

Of course, what is completely left out is the essential attribute of free will:  that the choice could have been otherwise than what was fated.

Readers will recognize this attempt at reconciliation of fate and free will:  It is virtually identical to the attempts of the medieval theologians to reconcile them, an attempt shown in a previous post to be inadequate to establish actual free will.  This reconciliation relies on a watered-down, contradictory version of fate, basically amounting to saying that fate causes everything but it doesn’t (a version of determinism often called “soft determinism”).  The Stoics were the first to introduce this kind of distinction, which was picked up by the theologians and persists to this day in the arguments of “Compatibilists” (those modern philosophers who attempt the same compromise).

Alexander – Aristotle re-introduced and improved

It would seem from the philosophical movements described above that the promising start for free-will thought, which Aristotle had initiated, was completely forgotten.  However, a brilliant Aristotelian named Alexander of Aphrodisias reinstated and extended Aristotle’s concept of “deliberation.”  He made cogent arguments in favor of a conception of free will that includes its essential attribute:  that at the time one acts, he is free to either act or not.  Recall that Aristotle had distinguished between the voluntary and the chosen, the latter being connected necessarily to deliberation or thinking.  Aristotle was not too explicit on the absolute requirement that what is chosen must be open to alternative possibilities, but Alexander was.  In his discussion of the use of deliberating, he says (XI-XV Difficulties of the determinist position iii: Deliberation and responsibility):

“…man has this advantage from nature over the other living creatures, that he does not follow appearances in the same way as them, but has reason from her as a judge of the appearances that impinge on him concerning certain things as deserving to be chosen.  Using this, if, when they are examined, the things that appeared are indeed as they initially appeared, he assents to the appearance and so goes in pursuit of them; but if they appear different or something else appears more deserving to be chosen, he chooses that, leaving behind what initially appeared to him as deserving of choice.  At any rate there are many things which, having seemed different to us in their first appearances from what they appeared to us subsequently, no longer remained as in our previous notion when reason put them to the test; and so, though they would have been done so far as concerned the appearance of them, on account of our deliberating about them they were not done – we being in control of the deliberating and of the choice of the things that resulted from the deliberation.”

Much more would have to be demonstrated – particularly the primacy of the choice to deliberate – in order for this position to be as strong a defense of free will as is argued for in these posts.  However, in the history of philosophy prior to the present this was one of the best statements defending free will and, most importantly, identifying it with thinking.

Unfortunately, Alexander came too late in antiquity for his arguments to take hold.  Very soon after Alexander wrote this, the theologians rose to dominance, and the teachings of his school were declared heresy.  It would be close to 1000 years before the grip of the Church was loosened and philosophers could openly debate this question free from the threat of heresy.  The rebirth of discussion regarding free will is the subject of a future post.

The history of free will thought – the beginning – with wisdom for today

Summary:  A previous post has already discussed the thoughts of the theologians regarding the topic of free will, and distinguished the defense provided here from their attempts.  However, the theologians represent the middle of a history.  When and where did discussions of free will first begin?  What did the great philosophers think about the topic?  This post reviews the ideas of the greatest of the ancients – Aristotle – on the subject.  Do you know what he said at the beginning that is as relevant today as it was 2300 years ago?

Up until the time of Aristotle, there was virtually no discussion of the issue of free will as we mean it today.  Socrates, in urging others to follow the good life, and to think carefully about philosophical issues (especially the meaning of concepts), implicitly accepted free will.  On the other hand, some passages attributed to him, especially those discussed earlier from Plato’s dialogue Protagoras, may be taken to reject free will:  On that view, if a person had the knowledge of what was a good action, he would necessarily perform the good action and not the evil one.  Neither did Plato say anything directly about free will – its nature or existence.  However, as implied by his emphasis on virtue and law in his dialogues, he also took free will for granted.

The philosophies most associated with challenging free will (Stoicism and Atomism) post-date Aristotle.  Those systems will be discussed in a future post.

Aristotle made the first attempt to discuss free will at length.  In addition, his more fundamental discoveries in the field of logic have an important implication for defending free will.  As mentioned in a previous post, Aristotle was the philosopher who discovered the method of “re-affirmation through denial,” by means of which any attempt to reject the most primary underlying ideas (such as free will) can be refuted.  The attempt is refuted (or the ideas it rejects are “re-affirmed”) by showing the attempt must rely on those ideas for its argument.  An earlier post on the hierarchical nature of ideas elaborated on the fundamental fallacy implicit in denying free will.

Beyond his superlatively important contribution in logical hierarchy, Aristotle discusses free will directly.  In the Nicomachean Ethics, Book III, chapters 1-5, he investigates the nature of free will, but not whether it exists.  Its existence he accepted fully.  Nonetheless, his thought is enlightening even on the question of free will’s existence, because getting the nature of free will correct is key to a proper defense of it (as argued in several prior posts – the introductory post, the post critiquing a free-will denier and the post about a good article on free will).

What is considered choice or free will today is a combination of two of Aristotle’s categories – the “voluntary” and the “chosen.”  Consider each of these in turn.

Voluntary vs. Involuntary

Aristotle first distinguishes between the “voluntary” and the “involuntary.”  (All quotes, here and below, are from the W.D. Ross translation).  “Voluntary [is] that of which the moving principle is in the agent himself, he being aware of the particular circumstances of the action.”  The involuntary are those things “which take place under compulsion or owing to ignorance; and [something] is compulsory of which the moving principle is outside, being a principle in which nothing is contributed by the person …, e.g. if he were to be carried somewhere by a wind, or by men who had him in their power.”

Aristotle rejects the idea that men are acting involuntarily when they are caught in the sway of emotion:  He says that “…it is absurd to make external circumstance responsible, and not oneself, [if one is] easily caught by such attractions.”  Here, Aristotle is making a distinction between the truly involuntary and what seems involuntary in the present moment.  It only seems involuntary now because the actor has in some way set himself up by his own prior choices to act in this way, now.

Aristotle makes the same point with respect to ignorance, distinguishing between “acting by reason of ignorance,” and “acting in ignorance.”  “Acting by reason of ignorance” is the case in which a person is truly unaware of key facts that would change his action.  “Acting in ignorance” is the case in which a person has made himself unaware by his own responsibility.  “… [T]he man who is drunk or in a rage is thought to act as a result not of ignorance … but in ignorance.”  The man is “in ignorance” because he put himself in that state, by not taking sufficient care to think about the consequences of his action, or to control his action despite what emotion he experienced.

As Aristotle states in Book III, ch. 5,

“…we punish a man for his very ignorance, if he is thought responsible for the ignorance, as when penalties are doubled in the case of drunkenness; for the moving principle is in the man himself, since he had the power of not getting drunk and his getting drunk was the cause of his ignorance.  And we punish those who are ignorant of anything in the laws that they ought to know and that is not difficult, and so too in the case of anything else that they are thought to be ignorant of through carelessness; we assume that it is in their power not to be ignorant, since they have the power of taking care.

“But perhaps a man is the kind of man not to take care.  Still they are themselves by their slack lives responsible for becoming men of that kind, and men make themselves responsible for being unjust or self-indulgent, in the one case by cheating and in the other by spending their time in drinking bouts and the like; for it is activities exercised on particular objects that make the corresponding character.”

Choice

Aristotle discusses “choice” in Book III, ch. 2-3.  “Choice … seems to be voluntary, but not the same thing as the voluntary; the latter extends more widely.  For both children and the lower animals share in voluntary action, but not in choice, and acts done on the spur of the moment we describe as voluntary, but not as chosen.”

Here he is making the distinction between all voluntary action (a wider category) and deliberation.  It is the latter that he calls “choice.” He suggests in what follows that man’s distinctive nature as a reasoning being is what gives rise to choice:

“We deliberate about things that are in our power and can be done …  For nature, necessity, and chance are thought to be causes, and also reason and everything that depends on man.  Now every class of men deliberates about the things that can be done by their own efforts …, e.g. questions of medical treatment or of money-making.”

Aristotle included moral virtue as within man’s power to deliberate: “Virtue … is in our own power, and so too vice.  For where it is in our power to act it is also in our power not to act, and vice versa …”

Aristotle’s view contains one significant error, namely the conclusion that choice is only about the means to achieve an end, but not about the end:  “The end, then, being what we wish for, the means what we deliberate about and choose, actions concerning means must be according to choice and voluntary.”  He is wrong to hold that ethical ends are not within our power to deliberate about rationally, and that they are simply what one wishes for.  That conclusion takes the foundation of ethics out of the realm of reason and makes it captive of emotion.  A further discussion of this point, however, would take us too far afield from the subject of this post.  (See Ayn Rand’s essay “The Objectivist Ethics” for a discussion of how to identify an ultimate end in ethics by reason.)

That point aside, perhaps the weakest part of Aristotle’s analysis is that too much has been left implicit:  Deliberation, although one kind of choice, is not the primary.  It has been argued throughout these posts (e.g. in the critique of the Harris book) that deliberation or thought requires a pre-existing mindset, a focus, which sets up or enables thought.  Whereas a thought process is higher level and can be “explained” – in terms of one’s interests, knowledge, values – the choice to focus, to establish contact with reality and follow a course of reasoning, is a primary.  If one does not properly identify the primary, the existence of choice can come into question:  Free-will deniers always (legitimately) ask: How can a choice be “free” if there are prior causes that explain it?

Despite the limitations in Aristotle’s analysis, much of what he said was correct, and could serve as a guide to free-will defenders even today.  His thinking represents a very promising beginning for the history of free will thought.  As stated earlier, later philosophies denied the existence of free will, or severely delimited its scope.

Aristotle’s view that man’s character is shaped by the man himself, and therefore he is responsible for it (and its consequences), is the most important part of his discussion.  If men learned nothing from Aristotle’s view of free will but this conclusion, much of the current debate (certainly in ethics, politics and law) would end.  No one who accepted Aristotle’s view would argue that a criminal should be excused because he “felt,” in the moment, that he wanted to slaughter a whole family, or because he was too drunk to know what he was doing when he T-boned another car.  Maybe all that is true – maybe he didn’t, in the moment, know what he was doing.  But according to reason, and to Aristotle, that is beside the point.  The criminal brought himself to this moment by his own choices, and could have done otherwise.  That is why we do, and should continue to, “punish a man for his very ignorance, if he is … responsible for the ignorance.”

Macreedy vs. Forrest

Summary: This post contrasts two movies, which illustrate opposite positions on the question of whether man possesses free will or is moved by forces outside his control.  The pro-free-will movie shows that the issue is not only one of determining the course of the events in one’s life, but also of determining one’s own character – and how the latter is responsible for the former.

If a man accepts free will, he holds he can determine his own character and his own future.  If he rejects free will, he holds that his character and his future are determined by something other than his own choices.  The acceptance or rejection of free will is therefore a fundamental premise that underlies everything else that man does.  In particular, this premise is implicit in every artistic product of man, such as literature and theater.  Consider the issue with respect to movies, by contrasting the view implicit in each of the following:  Bad Day at Black Rock, starring Spencer Tracy, and Forrest Gump, starring Tom Hanks.

Bad Day at Black Rock (1955) is set several years after the end of World War II.  From the moment that John Macreedy (played by Tracy) gets off the train at Black Rock, he finds the townspeople both unhelpful and threatening.  It goes beyond the suspiciousness that some small-town inhabitants exhibit toward strangers – it is clear from the outset that the town is hiding something.  Macreedy is looking for a man, Komoko, in nearby Adobe Flat.  No one will tell him where that is, or help him to get there by taxi or car rental.  The hotel manager claims there are no vacancies, though the hotel is obviously empty of guests.  Exposing the lie by perusing the hotel register, Macreedy helps himself to a room key.  Soon, goons from town start threatening him, physically camping in his room and obstructing his path to goad him into a fight.  Their gang leader is Reno Smith, who gives them orders at every step.  The goal is to terrorize Macreedy, as everyone in the town seems to be terrorized by Smith, and thus to drive the newcomer from the town.

But surprisingly, Macreedy is simply not afraid.  Nothing the townspeople do dissuades him from his goal to find the man Komoko.  In fact, Macreedy gradually becomes more determined to find Komoko as a consequence of the town’s actions.  He is told by a townsman that Komoko died trying in vain to farm a dusty desert plot.  Macreedy by now is suspicious enough that he doesn’t believe that story.  He finds a woman who will rent him her jeep, and drives to Adobe Flat, barely dodging death as one of Smith’s men tries to drive him off the road.  Again, Macreedy exposes the townsman’s lie about Komoko’s death, spotting a deep water well on the property that would have made the farm a success.  Macreedy also spots what looks like a grave on the property.

With what he has found out, Macreedy tries to contact the state police, but the telephone won’t connect to the outside – “lines busy” the operator says – and the wire Macreedy pens at the telegraph office is never sent.  Macreedy tries to rent the jeep again to drive to another town, but the woman refuses him – Smith has had a “conversation” with her.

Meanwhile, Macreedy’s refusal to be intimidated, and his commitment to the truth, have had an effect on two of the townspeople who for years have kept quiet and evaded what happened between Smith and Komoko.  The town’s doctor and the sheriff have both despised themselves for their cowardice in going along with Smith.  Each has lost his self-respect.  Seeing Macreedy act with quiet conviction has galvanized them to act as men instead of slaves, and they help Macreedy fight against Smith.

See the movie to discover what the town has been hiding all these years, why Macreedy stopped there, and how this fight comes out.  Whatever the outcome, the essence of the movie’s theme emerges from the above events:  Man can choose whether he will focus on facts or bury them in terrorized silence and liquor.  Man can choose to focus even after having made the opposite choice for years.  And a man’s character can have a powerful impact on others by exemplifying a moral ideal, at least for those who choose to contrast his character with theirs and see the difference.

At the time Black Rock was filmed, the cultural onslaught against free will had not yet gained full steam.  Philosophers, at least those in the cultural mainstream dominating the universities, were indeed teaching their students that free will is an illusion.  But their arguments were not as widely accepted as they are today.  Fast forward four decades to a time when the premise of free will had been under active, even aggressive, attack by intellectuals throughout the culture (and the 60’s-70’s counter-culture) – in newspapers, literature, theater, popular books, as well as classrooms.  It is now 1994, and one of the most popular movies of the year is Forrest Gump.  Forrest, played by Hanks, is a mildly retarded boy, whose mother fights energetically to keep him in normal classes in school so that he won’t be stigmatized in “special” education.  One of her favorite lines is “stupid is as stupid does,” meaning if one doesn’t act stupid one won’t be.  Her wishful thinking aside, Forrest’s subnormal intelligence can’t be hidden:  The movie portrays him blundering into one idiotic adventure after another (accidentally teaching Elvis to swivel his hips, exposing the Watergate scandal, unintentionally inventing a viral tee-shirt slogan).  Another favorite line of Forrest’s mother (and the theme of the movie) is “Life is like a box of chocolates – you never know what you’re going to get.”  In other words, things just happen to a person in life.  He is neither the cause of what happens, nor is he able to know what will happen in advance.  Life is just like selecting chocolates from a box – you might get cream-filled, or solid chocolate.  Who knows?

Gump is not intended to be taken as seriously as is Black Rock.  It is played partially tongue in cheek, cataloging how many zany situations befall Forrest.  That lack of seriousness is perfectly consistent with Gump’s theme.  If consequences just happen by accident and are not connected to effort, thought or action, then how can they be taken seriously?  One absurd outcome is as likely as another.  All one can say about Forrest’s character is that he has a passionate stubbornness: he devotedly sticks to the phrases his mother gave him, repeating them throughout his adventures.  Not exactly in the same universe as Spencer Tracy choosing to expose the tyranny and hidden crimes at Black Rock.

The political Left began the “you didn’t build it” movement (see introductory post).  They accuse the “right” of being defenders of free will, and of using the concept in an attempt to exclude the supposedly underprivileged from getting their “fair share” of society’s pie.  In an illuminating demonstration that the issue of free will is not a strictly political one, however, and that the right is hardly an advocate of free will, this movie has been taken to be a “conservative” movie politically.  National Review ranked the film in 2009 in the top 25 of the “Best Conservative Movies” of the last quarter century, apparently reacting positively to Gump’s being too much of an “amiable dunce” to get involved in drugs and the 60’s hippie movement.  Gump is just a “regular guy,” experiencing the joys and sadnesses of life, however they happen to occur.  Bob Dole said the movie proved that the American Dream was within everyone’s reach, apparently undeterred by the fact that none of Forrest’s “dreams” were anything he set out to achieve – they simply happened by chance.  Or perhaps Dole was admiring Forrest’s devotion to barely understood slogans.

A fundamental premise such as free will has an enormous impact on the life of an individual man, and on the life of a culture.  If men believe they have choice, that those choices are responsible for the outcomes in their lives, they will act accordingly – planning, thinking, taking actions that help them achieve their goals.  They will also establish moral codes holding men responsible for their choices and actions.  Their art and literature (including lighter entertainment like much in cinema) will reflect that underlying premise.  Movies like Bad Day at Black Rock will be the result.

By contrast, if men believe that they have no choice, that whatever happens is an accident (whether of birth or mysterious firing of neurons), they will not plan or pursue goals, or spend the effort to think about their futures.  They will act with no particular reason or purpose.  They will deny the intelligent any distinction from the morons.  And in such a culture, the art and literature (including cinema) will reflect that underlying premise.  Movies like Forrest Gump will be the result.

In the final analysis, it is the responsibility of philosophy to present the arguments defending free will and exposing the fallacies in the arguments against it.  For those considering the two alternatives – free will or accident – there are an enormous number of arguments and sub-issues to digest and evaluate.  It could seem an overwhelming task to make sense of it all.  It is therefore helpful to have a concretization that sums up the two sets of fundamental premises in stark terms.  Is the life you strive for that of a Forrest Gump, mouthing inherited lines like a parrot and swaying like a feather with every wind, absurdly becoming a Medal of Honor winner and making a fortune in Apple stock, which he thought was a fruit company?  Or is it that of a John Macreedy, confident, just, courageous, choosing to make a stand against what seems like an all-powerful evil?  The concrete images make the difference clearer to direct perception.  The choice between the two alternatives is then not only easier to make, but also comes with the level of conviction that only a powerful concrete, added to the philosophical arguments, can engender.

A unique defense of free will

Summary:  Our culture presents a false alternative – either accept science and reason while rejecting free will, or defend free will on the basis of mysticism.  These posts reject both sides of this alternative, and defend free will on the basis of science and reason.  Can you identify why religion is no ally to a defender of free will, and is in fact just as much of a free-will denier as are the modern intellectuals?

Readers of this blog will be aware of its thorough critique of the culturally popular position opposing free will.  Readers of this blog may also have observed that no language or ideas stemming from or related to mysticism are used to defend free will.  The posts presented here emphasize the thinking mind, focus, observation, conceptualization, logic – all the methods of reason.  The arguments presented here come from a reality-oriented perspective, grounded in this world, containing only facts and concepts derived from this world.

However, there is another cultural group, purporting to defend free will – from a religious perspective.  For a non-mystical person, it is perhaps a natural reaction to such a defense to shake one’s head and wearily think “with friends like these…”  A more complete response starts, simply, with the categorical statement that this blog completely disassociates itself from any such attempt at a defense.  Further, even though some religious writers present cogent critiques of the dominant cultural position, those critiques rest on a non-rational, otherworldly approach.  Hence, this blog will neither refer to nor discuss them.  It is this blog’s position that reason is fully adequate to refute determinism – no “help” is needed from those who may seem on the surface to be allied but in fact are enemies of reason.  It will also become obvious as their arguments are presented that the religious perspective actually implies determinism, rather than an opposition to it.

Religion has struggled with the concept of free will for millennia.  Discussions of free will are present in all the major theological works, the most well-known of which is St. Augustine’s The City of God.  The problem that these thinkers try to address is inherent in the religious position:  How can there be free will if there is an all-knowing, all-powerful deity that has a full plan for the universe for all of eternity?  If man’s destiny is already written by that plan, how can man determine his own destiny?  And further:  How can man be held accountable for moral crimes – how can morality even exist – with the kind of free will that would be compatible with such a deity?  The answer from theologians, as we’ll see, amounts to various attempts to smuggle free will in through the back door while still denying its roots, attempts that fail utterly to pass the test of logic.  In the end, many of these attempts amount simply to the assertion that one has to accept the contradiction on faith, because the deity’s master plan includes free will for man!

As representative of the sorts of equivocations, and even direct assaults on reason, involved in the religious attempt at a defense, consider the arguments of Augustine.  He uses the Roman Cicero (a critic of God’s foreknowledge of all things) as a foil (Book Fifth, Section 9):

“[Holding] either that something is in our own power, or that there is foreknowledge…of those two [he] chose the freedom of the will … and thus, wishing to make men free, he makes them sacrilegious.  But the religious mind chooses both, confesses both, and maintains both by the faith of piety.”

Although Augustine states here that the religious mind adopts this position by faith (i.e. in the absence of or against reason), he does supply an argument:

“It does not follow that, though there is for God a certain order of all causes, there must therefore be nothing depending on the free exercise of our own wills, for our wills themselves are included in that order of causes which is certain to God, and is embraced by his foreknowledge, for human wills are also causes of human actions; and He who foreknew all the causes of things would certainly among those causes not have been ignorant of our wills…”

Now consider this argument.  Augustine is trying to escape the implication of saying God is all-knowing.  By the standard of reason, omniscience contradicts free will: if God knows all, then how can man “choose” what God already knows will happen?  If man “chooses” only what God knows will happen already, is that really choice?  Could man choose otherwise?  Of course not.  Augustine’s solution is to assert that God knows everything, including that man will choose.  But is that a solution?  There is no middle ground.  Either man chooses his actions or he doesn’t.  If God knows man has choice, then man has choice, and that means man can take an action that isn’t necessitated, that couldn’t be known in advance.  “Solving” the problem by saying that the Being who knows all, in advance, also knows that man can do something that can’t be known in advance is a contradiction.  Just because Augustine declares that the Being can do it doesn’t make it so.

Thomas Aquinas, centuries later, provided a similar but more sophisticated version of Augustine’s argument.  First, it should be emphasized that Thomas is light years ahead of Augustine, in that he made room for reason in his theology.  This ultimately led to the widespread acceptance of reason, and the decline of religion’s power.  Even on the issue of free will, Thomas at times has a very fact-based approach:  Although man possesses emotions that may incline him to act in particular ways, “These [natural] inclinations are subject to the judgment of reason …Therefore this is in no way prejudicial to free choice.” (Summa Theologica, Pt. I, Question 83, Art. I, Reply to Objection 5).

However, when it comes to reconciling man’s free will with God’s omnipotence, His power to do anything, Thomas is in the same situation as Augustine.  Thomas wants to allow for both, despite the contradiction, and he does so by equivocating about priority (man may cause his actions but God caused man to cause his actions).

“… It does not of necessity belong to liberty that what is free should be the first cause of itself, as neither for one thing to be cause of another need it be the first cause.  God, therefore, is the first cause, Who moves causes both natural and voluntary.  And just as by moving natural causes He does not prevent their actions from being natural, so by moving voluntary causes He does not deprive their actions of being voluntary; but rather is He the cause of this very thing in them, for He operates in each thing according to its own nature.” (Summa Theologica, Pt. I, Question 83, Art. 1, Reply to Objection 3).

To unravel this complexity, what he is saying here is that God operates in each man, but because man’s nature is to have free will, God operates through man’s free will.  So, in a manner very similar to Augustine, Thomas says that God causes everything, including the man who causes other things.  God causes everything but He doesn’t, just as for Augustine God knows everything but He doesn’t.  The only way this could work is if Thomas limits God’s power, saying He causes some things but not others.  This Thomas does not do (although, true to his Aristotelianism, Thomas says elsewhere that God cannot do what is impossible. See Summa Theologica, Pt. I, Question 25, Art. 4.)

These arguments are not the off-the-cuff arguments of some modern televangelist.  They are the arguments of the two most influential theologians in all of history.   And yet they fail to reconcile the contradictions involved in positing supernatural beings while still attempting to support free will in man.

A recent excellent course on the history of the concept of free will, presented by the philosopher Ben Bayer, shows how theologians ultimately “reconcile” the contradictions by presenting a very delimited definition of free will:  one has free will if he does something and at the same time wants to do it.  This weak definition of free will, however, is not what is rationally meant by the concept.  The definition completely leaves out the essential characteristic: the possibility that a person could have done otherwise.  Later theologians (notably Luther) made just that point in fully rejecting any hint of free will.

To readers who find it both odious and tedious to wade through arguments based on mysticism, your suffering is now over.  The purpose of presenting the above is to illustrate why those in the religious camp are of no value to a pro-reason person in defending free will.  The fact that many today, including many scientists, argue for determinism while rejecting the religious perspective does not mean that there is a basic opposition between religion and determinism.  In fact, as is easily grasped from the above discussion, religion – with all its emphasis on omnipotent and omniscient beings possessing the unlimited power to defy nature and to know everything for all eternity – is completely deterministic.  Both the modern free-will deniers and the religious thinkers are thus on the same side when it comes to being opposed to free will, whether the latter group acknowledges it or not.

Allying with today’s free-will deniers, just because they appear to be scientific and rational, in contrast to all the so-called free-will defenders who are religious, is no solution.  Whichever side a person chooses if he accepts that false alternative, he leaves out the real opposition to both:  those who embrace the scientific and rational, and on that basis defend free will, with all the best and most rational arguments available.  That is the purpose of the posts presented here.

Free will and the subconscious

Summary:  Those who deny free will often reference the subconscious as a counter-argument to free will.  How can man have free will if the subconscious continuously feeds him ideas he didn’t choose and over which he has no control?  Previous posts have identified the need to edit and select from what the subconscious delivers.  But how does that process work?

A subject that frequently arises in discussions of free will is the subconscious.   The free-will deniers harp on the subconscious repeatedly.  For example, Harris states in his attack on free will:  “The intention to do one thing and not another does not originate in consciousness—rather, it appears in consciousness, as does any thought or impulse that might oppose it.”  In effect, the anti-free-will argument rests on the idea that there is a part of the mind not in one’s awareness that feeds one data, ideas, and feelings that ultimately control one’s actions.  If that is the case, they argue, how can man have free will?

The answer emphasized in previous posts is the process of selecting from, discarding, and editing what the subconscious feeds one.  Whether one is building the Panama Canal, sending rockets to a celestial body or planting flowers in one’s backyard, the process of selection is critical to achieving productive results.

But what exactly does that process of selection consist of?  What guidance can be provided to someone who wants to gain or regain control of his consciousness?  What specific steps are involved, and how are they connected?

Purposefulness

To edit, one needs a criterion of selection.  Only a purpose can provide it.  In any context or project, the purpose provides the standard by which one judges any idea that comes into the mind.  Without it, one cannot edit or select.  Consider the example of writing a novel or story, where the theme represents the purpose in this context.  In the Sherlock Holmes stories by Arthur Conan Doyle, the theme is:  the methods of a brilliant detective.  Based on this theme, the author includes concretes that support and further the theme, such as how Holmes observes physical detail.  When editing a rough draft, the author uses the theme to select, from the material originally written, only those concretes that achieve the purpose.

It cannot be overemphasized that one needs a hierarchy of purposes.  One cannot simply have a grand, distant purpose and expect anything useful to come from it as a guide for what to do today.  Especially in constructing a product as complex as a novel, the purpose is broken down into sub-purposes, such as: write the description of a gentleman’s attire in 19th century London.  Those who study the issue of goal-setting, in the fields of psychology and business management, emphasize the need for one’s goals or purposes to be specific and proximal. That is, on any given day of engaging in one’s larger purpose, one must translate that purpose into goals that are specific enough to grasp and achieve on that day.  Further, one must set those goals at a time close to that particular day.  Only in this way can those goals be retained and used as a guide.  Nesting today’s and tomorrow’s purposes hierarchically into the larger purpose results ultimately in achieving that larger purpose.

Reality-orientation

Reality is the ultimate standard of selection.  Even in a work of fiction or a fantasy story, what is real and possible (in the context of the project) should always guide one in establishing a) whether the purpose itself is achievable, and b) whether the project as it unfolds actually embodies one’s purpose.  Evasion, wishful thinking, and emotionalism are as destructive to editing as they are to treating patients in an emergency room.

Integration

This is the central method of editing.  Integration is connection of one’s ideas to each other and to reality.  One cannot edit without performing the process of connecting each idea to the purpose.  If an idea “comes to you in the night,” it may be worthwhile or it may be useless.  It may advance the project or diminish it.  It won’t even be possible to know which it does without making this connection.

Integration has a further critical role:  all new ideas, all innovations, are based on connections.  Creativity is not a mystical process but rather a process of making new connections among what seemed like disconnected elements.  Whether it is Newton connecting the fall of the moon in its orbit to the fall of an apple, Pasteur connecting microbial activity in fermentation to microbial activity in disease, or an English teacher connecting a new word to its Greek and Latin roots for his students, the expansion of knowledge rests on such integrations.

The two kinds of integration just described – connection to goals and connection to knowledge – have a fundamental unity: both are a method of widening the context of the current contents of consciousness – to knowledge and to values.  Both involve asking the question: what else do I know that conditions or relates to this situation?  It is precisely the failure to broaden that context to the fullest extent that leads to the puzzling phenomenon of a person acting or saying something that clearly is within his power to avoid.

Socrates said in Protagoras that man could not act against his knowledge, and so Socrates thought that committing evil must be a failure of knowledge.  Socrates was wrong on two counts:  If a person simply lacked knowledge that he had no way of obtaining, then he cannot be said to be committing evil; he had no choice in the matter, and a moral evaluation therefore is not relevant.  On the other hand, we see examples all the time of evil acts (cheating on exams, lying about one’s credentials, colluding with enemy foreign governments, committing sexual harassment) committed by people who have enough knowledge and intelligence to understand their destructive consequences.  What explains it?  What explains the kinds of acts criminals commit, as described previously?  The answer, to connect back to the theme of integration, is disintegration:  the failure or refusal to enlarge the context beyond the narrow frame of momentary consciousness, to look beyond the out-of-context emotion fed to consciousness, and to include the knowledge and values one does possess.  The subconscious mind is permitted to set the terms of thought and action, with no editing.

Integration has so many benefits, across the entire spectrum of conscious activities – including improving memory, maintaining one’s goals and values, improving creativity – that it is worth significant effort to make it a habit.  Forming habits is part of a larger discussion of the full scope of free will’s control.  Free will as presented so far is merely defensive, in its role of editing subconsciously offered material.  Free will in fact has a much larger role:  A person has control (within definable limits) even over what the subconscious provides.

Preprogramming

By definition the subconscious is sub-conscious, so one cannot have direct control over it.  On that point, the free-will deniers are correct.  The subconscious delivers to the mind, out of the vast storehouse of data perceived and assimilated in the past, the result of connections not in one’s immediate power to alter.  Thus the subconscious itself is an integration engine, albeit an automatic one.  Nonetheless, one does have control over 1) what goes into the storehouse, and 2) how much attention to pay to particular items that come out of it.  What goes into the storehouse is determined by one’s choice of what to read, what thinking and research to do, what experiences to deliberately pursue.  What to pay attention to is determined by one’s deliberate choice of what is relevant in a particular context.

The process of choosing to notice some aspect of reality is accomplished by giving oneself standing orders, and reinforcing those standing orders repeatedly until they themselves become automatic.  A simple example is: reminding oneself to count calories if one is pursuing a healthy eating style.  During the first several days, or weeks, of beginning this activity a person may find himself being haphazard about counting.  With monitoring and constant reminders, the process becomes automatized.  Automatized does not mean fully automatic, as anyone who has gone on a diet knows full well.  What it does mean is that one finds it easier and more probable that he will notice what he has set his mind to.  The only guarantee of continuing to succeed in making the action as automatic as it can be is to intentionally monitor and remind oneself of the goal.

Self-monitoring has been implicit all along in the process of editing, in that without it one isn’t even aware of the ideas one needs to connect to reality and to one’s purpose.  Self-monitoring, then, is an aspect of integration.  A previous post on intelligence as a learned skill referred to monitoring and the critical role it plays in building intelligence.  It is also critical in study methods and motivation, in psychological reform, and as has been seen, in forming good habits and by implication breaking poor ones.

With respect to accomplishing a purpose like writing a story, the standing order might be:  notice everything one encounters (ideas one has, information one reads) regarding, say, how detectives evaluate clues to a crime.  The principle and the methods are the same, on a more exalted scale, when one is trying to automatize not information delivery but a method like the skill of integration.  One gives oneself a standing order to ask the integration question:  what is the larger context of values and knowledge that connects to my current line of thought or action?  What other things do I know that relate to this?  It’s the same preprogramming one uses in forming any other habit.

Even with preprogramming – standing orders guided by monitoring – there will still be much delivered by the subconscious that is irrelevant, fanciful, exaggerated or in other ways useless to the purpose at hand.   That is why integration, guided by reality and one’s purpose, is indispensable if one is to achieve anything worthwhile.

Those who use the methods discussed here do not find their subconscious minds to be the enemy of choice.  They do not feel the way Sam Harris did when he stated “[M]y mental life is simply given to me by the cosmos.”  They do not deny the automatic character of the subconscious – that is an obvious psychological fact – but they are not ruled by it either.  Rather, they can program the subconscious, habituating new methods of thinking, and regulating within limits what it provides to the conscious mind.  Subconscious material thus obtained becomes the indispensable feed-stock from which integrations are made, and from which the selection process sifts, high-grades, refines and molds raw data to achieve the purposes men strive for.

A good article on free will

Summary:  Although the majority of intellectuals deny the existence of free will, a few have defended it.  These posts have already referred extensively to Ayn Rand’s defense of free will as based on a primary choice to think or not.  This post will review the work of a scientist who also provides a vigorous defense of free will, and who makes other valuable points within the field of psychology and neuroscience.

In a previous post, a book chapter by Albert Bandura was referenced as providing a devastating critique of neurophysiological determinism, that is, the view that all of our conscious activity is determined by brain states.  Some of Bandura’s key criticisms were discussed in that earlier post.  Bandura makes several other excellent points that there was no space to cover earlier.

The first point he makes is that the conceptual level is what is called an “emergent property.”   Such a property is something that results from the combination of elements, but is not present in those original elements.  An example is the saltiness resulting from the combination of sodium and chlorine, which is not present in either ingredient.  Bandura states:

“Cognitive processes are emergent brain activities that exert determinative influence. In emergence, constituent elements are transformed into new physical and functional properties that are not reducible to the elements. For example, the novel emergent properties of water, such as fluidity and viscosity, are not simply the combined properties of its hydrogen and oxygen microcomponents … Through their interactive effects, the constituents are transformed into new phenomena.”

This is the essential counter-argument against all of the neurophysiological determinists who claim that free will does not exist because it cannot be found in the brain’s physical components. Recall that the anti-free will argument presented by Harris rests on the idea that there would have to be something “extra,” some (in his view) mystical element to explain choice.  Speaking of a criminal, for example, and supposing he changed places with the criminal “atom for atom,” Harris states “There is no extra part of me that could decide to see the world differently or to resist the impulse to victimize other people.”  In contrast to this view, if free will and the cognitive processes it involves are combination-enabled “emergent” properties, there is no need to hunt for an “extra” property, let alone to assert it is mystical.  Free will is as natural as water’s wetness.

Bandura goes on to explain the proper level at which to view man’s control.  He rejects the anti-free-will view that because man is not aware of the brain states underlying choice, man therefore has no choice:

“In acting as agents, individuals obviously are neither aware of nor directly control their neuronal mechanisms. Rather, they exercise second-order control. They do so by intentionally engaging in activities at the macrobehavioral level known to be functionally related to given outcomes. In pursuing these activities, over which they can exercise direct control, they shape their neural circuitry and enlist subpersonal neurophysiological events subserving their chosen pursuits.”

Bandura gives the analogy of driving a vehicle, which requires combustion, conversion of energy into motion, steering linkages, brake pads, etc.  The driver activates all of these subsystems by choice, but not directly.  He does so through the conscious activities we normally take to be part of driving, which include planning the route (getting reservations if one is embarking on an overnight stay), gassing the car, starting the car, pressing the gas pedal, steering.  As Bandura states, in addition to the conscious choices made in simply starting and steering the car, “The deliberate planning … for these diverse activities far in advance requires considerable proactive top-down cognitive regulation.”

This concept of “top-down” cognitive regulation is the most important contribution Bandura makes.  What he is saying is that man actually exercises control over all these subsystems that operate in the background, but only at the “macrobehavioral” level (the “top”), i.e. only at the level of what one can directly choose.  Man cannot choose or know about the details of the combustion engine’s momentary operation (the “down”), just as he cannot choose or know about what his neurons are doing.  When he chooses to do something, however, man activates these subsystems.  In simple terms, he is a causal agent in enabling these subsystems to function to his benefit.

Bandura continues his explanation using an example from exercise physiology and psychology:

“Individuals obviously do not intentionally direct their atrial and ventricular cardiac muscle fibers to fire and their aortic and pulmonary valves to open and close.  However, by intentionally engaging in an exercise routine and controlling their activity level, they can enhance their cardiac function and regulate their heart rate without having the foggiest idea of how they indirectly recruited, by their intentional actions, the subserving neurophysiological mechanisms.”

You, the reader, can see the above mechanisms in an everyday example.  Suppose you are nervous about an upcoming job interview, or examination.  Suppose this causes you to feel shortness of breath.  You can choose to identify your nervousness, and the underlying worry about how you will perform.  Further, you can choose to say to yourself: “Calm down, I know I’m prepared for this.  Breathe deeply and evenly.  Steady now.”  These actions – identifying and correcting mistaken thoughts, and purposely controlling your breathing – are at the “macrobehavioral” level, where choice is operative.  Yet they result in causing the underlying bodily systems to deliver oxygen to the cells more efficiently, a function that no one can control directly.

Bandura summarizes (here, and below, brackets indicate words substituted to explain a technical term):

“Framing the issue of conscious cognitive regulation in terms of direct control over the neurophysiological mechanics of action production casts the issue in the wrong terms at the wrong level of control … Because individuals have no awareness of their brain processes does not mean that they are just quiescent hosts of automata that dictate their behavior.  Neuroimaging can shed light on the neural mechanisms of cognitive control and how controllable [chosen] action indirectly develops functional neuronal structures and orchestrates the neurodynamics for selected purposes.”

Finally, Bandura discusses the details of recent neuroscience research that rejects the mechanistic theories of the free-will deniers.  This research investigates what happens to the brain when man exercises choice.  Bandura urges lines of research that elaborate on these findings:

“Thoughts change the brain by cognitive practice in much the same way as does physical practice (Pascual-Leone, et al., 1995). … prior cognitive practice reduces the time needed to learn a skill by physical practice. There is much excitement [by current practitioners of neuroscience] about how the brain regulates behavior to the neglect of how individuals train the brain to serve desired purposes.

Research on brain development underscores the influential role that [choice] plays in shaping the functional structure of the brain (Diamond, 1988 2; Kolb & Whishaw, 1998). It is not mere exposure to stimulation but [choice] in exploring, manipulating, and influencing the environment that counts.  By regulating their motivation and activities, people produce the experiences that form the functional neurobiological substrate of [skill development] … This is a realm of inquiry in which psychology can make unique contributions to the … understanding of human development, adaptation, and change.”

Unfortunately, the great value of this article is undercut by Bandura’s views about the foundations of ethics.  His views are a mixture of individualism (standards derived from reason, applying to each individual man) and collectivism (group or societal standards, whatever they are).  Implicitly Bandura accepts the centrality of the individual valuer, in that he emphasizes the survival value to each person of being able to control his own cognitive processes and behavior.  In Bandura’s words: “Forethoughtful, regulative, and reflective capabilities are vital for survival.”  Because of that implicit individualism, Bandura has insightful things to say about the role of chosen moral values in shaping how people act.  However, despite that implicit individualism, parts of his section on man as a moral agent come from an ethical position that gives equal weight to individual and collective moral standards.  That perspective is also reflected in Bandura’s use of terminology (e.g. “biopsychosocial,” “social cognitive theory”) that represents confusing package-deals of individualism and collectivism.  The package lumps together 1) biologically important aspects of each individual man in his control over his environment with 2) man’s psychology (on this view) as a “social” agent.

This flaw aside, the article’s major contribution is to identify the actual psychological and brain processes involved in a person being an “agent,” that is, in exercising choice.  Removing the determinist and mystical aspects entirely, Bandura and the psychologists he cites are asking serious questions about the role of the processes and mechanisms that accompany and are activated by choice.  This is neuroscience as it could and should be.

Two views of the power of man

Summary:  Two views of man’s worth and scope of choice are contrasted.  One view holds that nature is immensely powerful and humbles man.  The other holds that man, while of course subject to the laws of nature, has the power to reshape nature to valuable ends, achieving results nature could never accomplish.  This power of man is the result of his reason and choice.  Which of man’s achievements would you judge as most exemplifying his power?

“By speaking of greater forces than we can possibly invoke, and by confronting us with greater spans of time than we can possibly envisage, mountains refute our excessive trust in the man-made. They pose profound questions about our durability and the importance of our schemes. They induce, I suppose, a modesty in us.” 1

This view is virtually the motto adopted by the environmentalist movement (of which its author is a prominent member).  It represents a strain of thought about man and his capabilities that is increasingly common in our culture.  This view emphasizes the power of nature and belittles man by contrast.  It urges man to become modest and jettison what it considers his unwarranted sense of importance.

In fact, however, man’s power is far greater than the mountains he is asked to be humbled by.  With the faculty of reason, culturally unleashed since the Renaissance, man has used nature to accomplish life-enhancing tasks that nature, by itself, did not and could not accomplish.

Consider one example, from a biography of a great scientist:  A mold can grow penicillin.  This fact of nature lay unknown for the entire 200,000 years of man’s life on earth, until science discovered it in the 1920s.  By itself, however, even this discovery wasn’t sufficient to make penicillin useful.  Making it beneficial required, first, investigation by scientists to understand penicillin’s properties and medical applications.  Second, an enormous scale-up effort was required to take this research from laboratory to widespread medical use.  At the beginning of World War II, with soldiers injured on the battlefield in desperate need of antibiotics, penicillin was extremely limited in supply: Soldiers, after receiving a dose, had their urine collected so the rare commodity could be re-purified from it and used for another dose.  By the time of the Normandy invasion – with the industrialization of the penicillin process, most notably by the drug companies Merck and Pfizer – penicillin supplies were adequate to provide doses as needed.  In Britain alone, from 1930 to 1960, the number of people dying from infection decreased from 115,000 to 24,000 per year, even as the population grew (some of this decline was due to hygiene and other medical advances, but most is attributed to penicillin and its derivatives).

A mold (“nature alone”) can grow infinitesimal amounts of a useful chemical.  It takes man’s reason to recognize the chemical’s beneficial properties, and to develop the scientific understanding and production processes necessary to make it available for widespread use.

With regard to mountains, yes, the building of them by the earth involves massive forces, blind consequences of earth motion, heat and gravity.  The moving of mountains, to achieve something of great value, requires man’s mind and focused choice.  Few achievements illustrate that more clearly than the building of the Panama Canal.

The Canal was first conceived and seriously proposed in the 1850s, as a way to reduce the months-long journey between the Atlantic and Pacific oceans.  Decades of planning and work resulted in very little, as the first efforts failed and went bankrupt.  When American industry and medicine got involved, in the 1900s, Panama was still a pestilential, disease-ridden jungle, with a few skeletons of the earlier work remaining.  The task seemed, and had proven, impossible.  First, man would have to find a way to conquer yellow fever and malaria so that it was possible to work in Panama at all.  Then, the enormous task of slicing a waterway through a continent had to be accomplished.

There were political factors at play in this vast effort, but that is not the subject of concern here.  What matters is what a combination of science, engineering and business accomplished in merely 11 years.

David McCullough’s excellent history of the building of the Canal details the extent of the accomplishment.

  • The American effort at Panama started in 1903, after a 20-year French effort had ended in failure both technically and financially, and a 10-year lapse in construction had occurred.
  • Before work could begin, a medical team had to assemble all that was known about yellow fever and malaria in order to reduce or eliminate the staggering death rate (12 of every 1,000 employees) in Panama. This team, led by William C. Gorgas, conducted a war against mosquitoes (whose role in transmitting yellow fever and malaria had been recently discovered).  This campaign consisted of drainage, brush and grass cutting, killing larvae (with a crude larvicide they had to invent), prophylactic quinine for all employees, screening, and killing adult mosquitoes. The result of this four-year effort was the eradication of yellow fever and a dramatic reduction in the malaria hospitalization rate (from 10% to 2%).
  • Culebra Cut was the heart of the canal, a massive man-made canyon with vertical rock walls subject to progress-reversing mudslides.  Some 96 million cubic yards of material were removed from the canyon, by around 100 “monster” machines and 61 million pounds of dynamite.  All the types of work – blasting, drilling, shoveling, material removal – had to be coordinated so that none interfered with or delayed any other.
  • A large dam was built at Gatun in order to control the flow of water in the canal. The dam spanned a waterway a mile and a half wide, and was fifteen times as thick at the bottom as it was high.  Daily concrete pours for the dam reached 3,400 cubic yards.
  • The Canal, once operational, would be run by at least 1,500 electric motors, designed by General Electric.
  • Pittsburgh steel mills supplied all of the Canal’s steel needs.  Special steels, which needed to be high-strength and corrosion-resistant, were used in the construction of the locks; Allegheny Steel had invented the new vanadium steel alloy.

McCullough provides an eloquent summary of all these technical achievements:

“The creation of a water passage across Panama was one of the supreme human achievements of all time, the culmination of a heroic dream of four hundred years and of more than twenty years of phenomenal effort … The fifty miles between the oceans were among the hardest ever won by human effort and ingenuity, and no statistics on tonnage or tolls can begin to convey the grandeur of what was accomplished.”

Among the many elements of free will highlighted in the Panama Canal story, one stands out:  the difference between the characters of the leaders of the failed attempt and those of the successful one.  Ferdinand de Lesseps, the French builder who had become famous for the Suez Canal, tried to apply the same techniques used in building Suez to the Panama region.  De Lesseps ignored the overwhelming evidence that methods suitable for a desert climate were not working in a muddy jungle, and his effort collapsed.  His successors, American medical personnel and engineers, by contrast, were open to the facts and did not evade them.  Under the leadership of two engineers – John Stevens and George Goethals – they devised methods appropriate to the particulars of Panama.  It was no accident, for example, that Stevens was an expert railroad engineer:  Building the Canal required innovative railroad solutions to the problem of deploying equipment and materials at dispersed sites.  Nor was it an accident that Stevens had trained under the legendary railroad builder James J. Hill, whose management philosophy is summed up as follows:  “Intelligent management . . . must be based on exact knowledge of facts. Guesswork will not do.”  A determination to have exact factual information vs. stubborn evasion and wishful thinking – those are the contrasting approaches adopted by the leaders whose exertions had such different outcomes.  This fundamental difference in character (not anything related to nationality) explains how the later effort to build a canal at Panama resulted in success, despite the immensity of the task and all the setbacks along the way.

For those who love to visit and view the mountains, whether as a rest from a productive life or as a vocation (such as being a tour guide), nothing said here should be taken as a suggestion against doing so – as long as one considers the full context of the situation.  Don’t adopt the anti-man viewpoint of the environmentalist movement.  That movement promotes a poisonous package deal, combining your enjoyment of nature with a denigration of man’s power and choice.

Man’s reason allows him to move mountains, and to choose to slice pathways through continents.  It also gives him the power to read those mountains, understanding their geologic history even though that history extends for epochs before man inhabited the earth.  Reason allows man to grasp possibilities on extra-terrestrial bodies and choose to travel there.  Reason makes it possible to eliminate disease.  But reason is a power that has to be exercised by choice – requiring focus on reality, effort, and perseverance in the face of failure.  Rather than be humbled by the sight of nature, or be overawed by its power, man may legitimately experience an earned pride in his choice to comprehend and make use of it.

Intelligence – a learnable skill

Summary:  Intelligence is viewed by most people as a fixed quantity, determined by genes and unchangeable.  What if that isn’t true?  What if, like any complex set of skills (such as swimming or driving), intelligence is something people can learn to improve?  Read what the research shows.  And read also about which intellectual movement opposed such research.

Even many of those who accept free will think that intelligence is not within the scope of what one is free to choose.  IQ testing and interpretation are based on the premise that there is a certain human capacity, called “g,” that represents an innate ability to deal with abstractions and complex mathematical and verbal relationships.  But is that true?

Research conducted within the educational community over many decades, directed to fixing deficiencies in student problem solving, has shed new light on this issue.  Intelligence has come to be viewed as trainable, like any complex skill.

An insightful book, Intelligence Can Be Taught, by Arthur Whimbey, summarizes the research underlying the trainability of intelligence.  Whimbey first identifies the component skills involved in solving intelligence-test problems.  He uses this background to define “intelligence” as the capacity to pay careful, skilled attention to the analysis of relations, whether verbal, spatial or mathematical.

The studies reviewed by Whimbey spanned the entire age range from preschool through college.  Interestingly, the conclusions of the studies of younger students were the same as those of the studies of older students:  Performance on intelligence tests was explained by differences in the mental habits of the students.

Low-aptitude students had the following characteristics:

  1. They were one-shot thinkers: If they didn’t already know the answer to a question, they gave up and rushed to pick an answer rather than performing any analysis.
  2. They were hasty – mentally careless and superficial. They glossed over detail – often writing down an answer in a fraction of the time necessary to absorb the data and construct one.  As a result of this habit, they missed the main point or based an answer on superficial clues. In preschool, such students would start answering a question before the questioner finished speaking.
  3. Their answers were often feeling-based – when asked for the reason for an answer, they didn’t respond in terms of the problem description but in terms of some feeling or attitude that they, the solvers, possessed.
  4. They were not particularly concerned about having an accurate picture of the problem or of the words in the problem. In a manner very similar to that of “look-say” readers, they guessed at concepts rather than analyzing and ultimately grasping them.

By contrast, high-aptitude students had the following characteristics:

  1. They were active and deliberate in attacking problems. If they didn’t already know the answer, they used analysis to reach one.
  2. They worked slowly enough to grasp and process the entire description of a problem. If they didn’t understand everything, they constructed the meaning based on what they did understand, carefully proceeding through a sequence of questions and steps to clarify the full meaning.
  3. They based their answers on analysis of the problem and other facts/relationships they knew, rather than what they felt.
  4. They were determined to get an accurate picture of the problem, including the meaning of every concept in the problem statement. They were determined to get an accurate solution.

The researchers found that the attributes of low-aptitude students were not innate and fixed – they were modifiable and within a student’s power to choose.  Dealing with abstract, complex material to solve a problem involves definite subskills and activities that the students could be coached to adopt.  For example, students were taught not to rush, but rather to take the time necessary to obtain an unambiguous picture of what the problem stated and what was requested.  They were taught to focus on the problem statement and facts as opposed to their feelings.  They were taught to ask a series of probing questions when material was initially unclear, paying careful attention to the exact meaning of each word, and using other available knowledge.  This training improved not only the students’ scores on standardized intelligence tests but also, most importantly, their performance in further education.  The researchers concluded that it is within one’s power to raise intelligence by learning and automatizing a proper method.

The value of this kind of training was found to depend on its continuity, its length, the number of hours per week, and the starting age of the trainees: the earlier the training started, the better.  The researchers found that in the homes of the better preschool students such training was delivered as a natural part of the parent-child interaction – via consistent and intensive verbal exchanges. In the homes of the poorer students, such verbal interactions were missing.  In contrast to the child who engaged in consistent verbal interactions, the “silent” home produced children who didn’t know the meaning of simple relations like either-neither, over-under and big-little.  This difference in the level of verbal interaction later develops into a difference in ability with complex patterns of thought – if-then, cause-effect, exclusion/inclusion and generalization.

The researchers encountered one serious methodological impediment to progress in this field:  opposition to introspection from the psychological school of Behaviorism. Behaviorism’s claim was that introspection is not objective because it cannot be observed from the outside – on this view, only external behavior, not thinking, is objective.  During the early part of the 20th century, intelligence researchers endured heavy criticism and lack of support for their focus on the patterns of thought engaged in by students.  Yet, as they showed, those patterns of thought can not only be identified by student introspection, they can also be communicated by careful self-reporting.

Behaviorism lost its dominance of the psychological and educational fields, mostly because of the ascendancy of cognitive theorists who re-asserted the importance of the cognitive aspects of the mind.  By the end of the 20th century, more intellectuals accepted research methods that focus on thought.

The objectivity and usefulness of verbal self-reports of thinking are a legitimate issue for discussion even among those who do not dismiss cognition and the mind.  The basic question is: to what extent can psychologists and educators rely on verbal self-reports as a scientific or training database?  Since the mind is visible only to the individual, how is it possible to trust and verify the objectivity of what he says?

The answer, based on Ericsson and Simon’s extensive investigation of the issue, is that such reports can be trusted if they a) are contemporaneous with the student’s actual thinking (rather than remembered), b) consist of simple reports rather than interpretations, c) are spontaneous rather than prompted (i.e. no leading questions or suggestions), and d) integrate with other knowledge the observer can glean about the student’s thinking patterns, such as what the student writes on the blackboard.  Ericsson and Simon also provided an in-depth critique (as does Bandura 1) of the one paper most cited as proving that verbal self-reports are nonobjective.

As a result of this successful defense of introspection, it became more widely accepted as a key source of data, and it has been applied in areas as diverse as study methods, emotional harmony, and Whimbey’s own area, the training of thinking methods.

Whimbey and the other researchers found the best strategy for improving intelligence:  Before attempting to solve problems, the poor student reads aloud a thinking protocol of a model high-achieving student.  The student, under teacher guidance, explains the model student’s process and then attempts to implement it in his own problem solving, reporting his own thinking as he proceeds.  The teacher identifies areas where the student falls short in implementing the model student’s thinking methods, and gradually tutors the poorer student toward a better thinking and problem-solving method.

The student chooses to adopt the new methodology and to make it a habit.  The teacher cannot force the student to adopt it, but can be an invaluable guide in showing the student where he might improve his method.  The previously poor student, who would walk away from problem-solving sessions with confusion and frustration, thinking he could never be successful, develops a new habitual method and confidence in his ability.

Unfortunately, in the early part of the 21st century the reality of thinking and the mind, and the choice to alter one’s thinking patterns, have again come under attack – this time by implication of the ideas advocated by the anti-free-will movement.

This issue of whether intelligence is within the scope of one’s free will is not academic – a student’s future academic performance and life success are at stake.  An important study showed that students who think intelligence can be trained are more willing to modify their thinking and exert effort.  Students who believed in free will had a positive trajectory in performance; those who did not had a negative trajectory.  Thus those free-will deniers, like Harris, who claim that people will continue to behave the same way whether or not they believe in free will, are dead wrong.  Accepting free will, in any area where it applies, is the first and most fundamental step toward improving one’s success in that area, whatever one’s ultimate physical or mental capacity.

If there is one criticism to level at this book, it is Whimbey’s unwarranted dismissal of the ability to deal with abstractions as a definition of intelligence.  He seems to have an incorrect view of what is legitimately involved in forming abstractions, arguing that the careful, systematic analysis of relationships has no role in abstraction.  In fact, a proper theory of forming abstractions requires exactly such a systematic approach.

That criticism aside, this is a valuable book.  It not only reviews research in training intelligence but also debunks much of the so-called research (especially separated-twin studies) purporting to show that intelligence is 80% hereditary, or (even worse) that it is racially determined.  To their great credit, Whimbey and his colleagues have added intelligence to the list of what a person can and does build.

Critique of Sam Harris’ book Free Will – Conclusion – Anti-hierarchy

Summary: Harris repeatedly commits the fallacy of hierarchical inversion, which consists of denying a concept or idea on which one’s own argument rests.  This fallacy is committed with respect to concepts such as “science,” and with respect to the very descriptions he provides of the experiments supposedly proving that free will is an illusion.

One principle that has emerged in the discussions in several posts is hierarchy.  It was observed in the discussion of methods that methods come in a hierarchy, from simple techniques to tactics, strategies, and more abstract and all-encompassing methods like logic and scientific method.  It was observed in several other discussions that knowledge has a hierarchy:  Later conclusions depend, implicitly or explicitly, on earlier ones.  It was observed that violating the hierarchy – advocating ideas higher up in the chain while denying those lower-level ideas on which the higher-up ones depend – was a fundamental error made by those who deny free will.  This post will expand on that theme.

As an example, an individual such as Harris can write an entire book denying the existence of free will, advocating strenuously that man is an automaton whose every idea and action is programmed by his brain chemistry.  In writing that book, this individual makes thousands of choices, framing words, sentences and paragraphs as carefully as he can to convey his thesis.  Further, he cites many examples of scientists conducting neurological experiments – scientists who chose one experimental design over another, who corrected error by making careful and repeated measurements, and who designed experiments to methodically support or reject hypotheses.  Yet Harris fails to see the contradiction between all that is alleged to support his thesis and the thesis itself.  The thesis depends on a hierarchy whose lower levels are used yet simultaneously repudiated.

That is why all Harris’ interpretations of neurology experiments are invalid, whether the experiments themselves were valid in their particulars or not. (The famous Libet experiment, shown to be in violation of hierarchy in an earlier post, has also been critiqued persuasively in its experimental design by Bandura. 1)

It was also observed in the previous post, and in other posts, that the locus of choice rests fundamentally not on any specific, higher-level choices, but on the choice of whether and how to use one’s consciousness.  In other words, the fundamental choice is not whether to take this job or one of the two others on offer.  It is the choice of how one will make that decision.

It can be seen, therefore, that choice itself has a hierarchy – there is a fundamental choice that conditions and frames the higher-level choices.  Those higher-level choices are not possible without the context set by the fundamental choice.  Those higher-level choices may be said to have reasons that explain them (one’s values, interests, skills), but the fundamental choice about whether to think about the issue at all is a primary, always open to one’s own will and not explainable in the same sense.

Harris and the free-will deniers say that the nearly ubiquitous observation of turning one’s thinking or focus on or off is a delusion.  We can’t trust our observations.  We don’t observe ourselves choosing, they say.  We don’t even observe ourselves making the fundamental choice of bringing focus to our minds.  Rather, we simply “find” we go one way or the other (the choice just appears), and misinterpret that as “choice.”  The last post discussed in detail what is wrong with the idea that the choice just appears.

Consider another example of how writing a book involves issues of hierarchy that the writer who denies free will must ignore.  That writer is writing the book for a purpose, namely to convince others of his position.  He is relying on an audience of thinkers who will read and evaluate his ideas, accepting those ideas because they view the arguments as logical and convincing.  It would be absurd for the author to state that his ideas are no more logical than the ideas of the person who holds the opposite view, that he simply spewed those ideas out automatically rather than their opposites, and that members of his audience only accept those ideas because they are necessitated to do so by their own brain physiologies.  Such a statement would make the author’s attempt to convince us a ludicrous exercise in self-deception.  Yet that author implicitly asserts all of this by rejecting free will.  Only by ignoring the hierarchical dependence of his motivation for writing on the existence of choice in his audience can he simultaneously depend on that choice and wipe out its foundation.

Harris’ most egregious and morally reprehensible violation of hierarchy is in the use of the term “moral” itself.  He cashes in on his denial of choice by objecting to moral condemnation of monstrous criminals:  “Once we recognize that even the most terrifying predators are, in a very real sense, unlucky to be who they are, the logic of hating (as opposed to fearing) them begins to unravel.”  (“Hating” is not actually a moral term – it’s an emotion – but Harris means it as a consequence of a moral condemnation.)    This kind of amorality is to be expected from someone who rejects personal responsibility for any action.  Harris goes further, however, saying “it seems immoral not to recognize just how much luck is involved in morality itself.”  Get that?  He is accusing his opponents – those who support free will – of being immoral.  He is using the concept “immoral” while having denied the base of the concept – choice.

There has to be a method and a motivation behind such a blatant contradiction.  Apparently, he wants to reserve for himself the use of that concept, so he can condemn his opponents, while denying that concept to them.  He violates hierarchy to do so, whereas his opponents do not – they are simply following the implications of free will in using the concept.  Readers who do not deny choice and morality may form their own moral judgments of such a motive and such a tactic.

Hierarchy is not something we happen to observe, as some accident, in issues as different as scientific method and free will.  Hierarchy is inherent in the nature of conceptual awareness.  It is inherent because concepts themselves, the building blocks of all knowledge, are hierarchically dependent.  The concepts formed from the lowest-level concretes of perception (like “dog,” “elephant,” “fish,” “bird”) can then themselves be treated as constituents in a later act of abstraction, for example to form the concept “animal.”  That new concept would be both unnecessary and impossible without the earlier concepts.  “Animal” distinguishes all of the above concretes, and innumerable others, from concretes such as “tree,” “bush,” and “flower.”  This holding of abstractions as “units” that can be treated as a base to go forward is a distinctive capacity of a conceptual consciousness.

Denying hierarchy is a mistake, an error of thought that necessarily invalidates any further thought process.  Just as a car without a chassis has nothing to support its body, an idea or concept proposed while denying its logical roots has no support – it is a self-contradiction.  With respect to concepts, this fallacy is called the “stolen concept” fallacy.  But the broader kind of “stealing” employed by the free-will deniers is much wider in scope and deeper in its pathology.  So much territory must be wiped out, so many facts must be ignored by such stealing, that it is hard to imagine a mind going forward from that point.  Nothing but absurdities can result from such a flawed reasoning process, and those who respect reason and logic should diagnose it as contradictory and dismiss it.

To summarize all of the posts on Harris’ book, Free Will:  The first part of this review identified the arbitrary underlying premise behind Harris’ view that past brain states necessitate all future actions.  He simply ignores, without any argument, the possibility that a being could possess capabilities that are enabled by and emerge from the brain yet are not completely necessitated in every detail by the brain’s neurology.  The second part of this review analyzed the gimmick that gives plausibility to the argument, namely focusing only on a straw man (the last split second of the process of choice) rather than the true nature of free will (the entire sequence of mental events and choices from the primary choice to focus, leading up to a final, higher-level choice).  Finally, the present post identified the conceptual inversion involved in denying the validity of free will while depending on it for an argument.  This vast collection of fallacies – arbitrariness, straw-man tactics and hierarchy violations – constitutes the means used by the neurological determinist to deny the universal experience of free will.  Any one of those transgressions alone would be sufficient reason to reject Harris’ arguments, and to accept what one grasps from personal experience rather than deny it as a delusion.  The combination of all three logical insults should make one recoil from the free-will denier’s poisonous doctrine.

Critique of Sam Harris’ book Free Will – Part 2 – Sleepwalking

Summary:  Harris mischaracterizes choice – describing it only in terms of the split second the choice is made instead of identifying all the thinking, planning and goal-setting preceding it.  This narrow lens on free will is used by Harris to assert that man is essentially sleepwalking, making choices and taking actions for which there is no explanation and no root in conscious thought.

Harris has a second argument that is emphasized throughout the book.  It might be called:  You didn’t choose – the “choice” just appeared.  He says in regard to his choice of coffee or tea on a particular day (p. 7):

“Did I consciously choose coffee over tea? No. The choice was made for me by events in my brain that I, as the conscious witness of my thoughts and actions, could not inspect or influence…The intention to do one thing and not another does not originate in consciousness—rather, it appears in consciousness, as does any thought or impulse that might oppose it.”

And (p. 33, regarding a pain in his back that led him to consider physical therapy):

“Did I, the conscious person, create my pain? No. It simply appeared.  Did I create the thoughts about it that led me to consider physical therapy? No. They, too, simply appeared. This process … offers no foundation for freedom of will.”

As a final example (of many he gives), he says with regard to a person’s idea of starting a website (p. 37):

“Where did this idea for a website come from? It just appeared in your mind.  Did you, as the conscious agent you feel yourself to be, create it?”

The first quote, and much of the discussion around the other quotes, make it clear that this is not really a second, original argument, but depends on his first argument that mental contents are fed to the conscious mind from unconscious physical causes over which there is no control.  That argument and its arbitrariness have been dealt with in the previous post.  There are some additional things to say, however, about Harris’ idea of choices just “appearing” in the mind with no explanation or conscious antecedent.

Harris’ examples are a mixture of two types of choices – simple taste-type preferences (coffee or tea, vanilla or chocolate ice cream) and more fundamental choices like the one to start a website or to seek physical therapy.  The taste-type preferences can be ignored, because they often are made on the basis of physical attraction and do have a strong element of the physical (taste or smell).  They are also very superficial (but still actual) choices where operating on physical desire is perfectly legitimate.  Note, however, that the mixing in of these types of choices with more fundamental ones is used by Harris to lend credibility to the “choice appears” idea.  In effect, what he is asserting in this package deal is exactly that the more fundamental choices are just like those superficial ones in that they are physical and automatic.

In order to make it plausible that the deeper choices are also of this nature, Harris narrows the focus in describing them to the very split second in which the thoughts or intentions are finalized by the conscious mind.  Only in this way can he avoid having to explain why one person has those thoughts and intentions and another does not.  This issue of level is critical to his argument, and to a refutation of it.  Essentially, he mischaracterizes choice as something that occurs in discrete, unrelated little bites.  He ignores the goals a person sets, the thinking he engages in, and the other methods he employs that integrate those bites.  The way Harris describes his snapshots of choices is designed to make it seem natural that they are inexplicable.  Yet they are not.

This aspect of level has been dealt with admirably by Ghate as well as by Bandura 1 (in a devastating critique of neurological determinism).  The essence of their counter-argument to Harris will be recapitulated here, and additional points made.  Ghate’s lecture, directed at a young student audience, discusses Ayn Rand’s view that free will must be identified on a very fundamental level as the method by which a conceptual being uses his consciousness.

Consider, for example, two individuals with back pain and how they arrive at the decision to seek physical therapy.  The first individual is not passive about the issue.  He asks acquaintances with back pain what remedies they used, and which ones were demonstrated to work.  He considers their answers, inquires further, and checks the facts, relying on trustworthy references rather than uncritical web browsing.  He gives himself a mental order to revisit the matter at whatever small intervals of spare time his busy schedule affords.  Via this process, he gradually becomes convinced that his problem requires professional help, and that a physical therapist has the skills and training needed.  He even asks his insurance company whether visits to such a specialist are covered under his plan.  On the basis of all this thinking, he decides to seek an appointment with a particular physical therapist.

The second individual takes a very different approach.  He at first dismisses the pain, attributing it to a short-term sprain that will heal on its own.  He ignores it on and off for months, sometimes resolving to do something about it and at other times simply ignoring it further.  He happens to be at a cocktail party where a woman is rhapsodizing about her chiropractor, assuring anyone who will listen that the man is the best “back-cracker” in town.  The next day, this second individual calls the telephone number the woman gave him, and makes an appointment with her chiropractor.

By Harris’ description of a choice to “consider therapy,” which focuses only on the final result and not on the process that led up to it, these two individuals would seem identical, having automatically arrived at the “same” choice.  Yet the two are fundamentally different.  No one can directly “will” a complex choice like this into being.  The first individual wills it into being indirectly, via a conscious decision to pursue the matter, to focus on it over an extended period, to seek and weigh information, and to hold the full context of what is involved (such as how to pay for it).  The second individual makes the kind of “choice” that Harris describes as choice:  He drifts, acts emotionally, weighs no evidence and asks no questions.  In effect, he doesn’t consciously make a choice at all: his feelings make it for him, along with the opinion of some other person.  The final choices made by each of these individuals, far from being inexplicable, are perfectly predictable (in direction, if not in detail) from the different methods they use.

Those who read Harris’ book will observe that every single example he describes, including two lengthy examples about a person pursuing martial arts or getting fit, is of this second, emotional type.

The difference between these two individuals elucidates where the real level of free will is: not at the level of the narrow decision in question but rather at the level of how to go about making that decision.  What is within one’s direct power of choice is not the end product but the method of getting there – not the specific destination but the train one chooses to ride.   There are subsidiary choices, but underlying them is the basic choice: to think, to exercise all the rational methods at one’s disposal to arrive at a solution to a problem, or to default on thinking and drift.

Harris would no doubt criticize the conclusion being drawn here by saying that each individual’s brain mysteriously feeds him even the method he uses.  This could seem plausible with respect to the second individual, since he drifts without control, like a rudderless boat.

With respect to the first individual, Harris’ conclusion is impossible.  None of the things the first individual does is automatic, nor can it be.  Any number of results could have been delivered to that individual from a web search, and the easiest and most automatic thing to do would be to accept the top results listed.  Exerting oneself to seek reputable results, reminding oneself that not everything one reads is valid, takes an act of focus and attention requiring effort.

More generally, no conceptual idea (whether about how to plant a crop or cure back pain) is innate or guaranteed to be true. Truth must be obtained by a definite process, a process that must include free will.  Perhaps in those transitional eras between the animals and the evolution of man there existed beings who got ideas fed to them by their brains but had no way to check their validity, no way to do anything but follow those ideas – unable to distinguish between food and poison, between a safe cave and an impending rock slide.  Such beings would have perished very quickly.  Then there arrived on the scene the human being, with a faculty enabling him to evaluate the evidence, the reasons for and against an idea, and to reject those ideas that were in error.  This being had to have the capacity to look inward and monitor the operations of his mind, identifying not only the correct ideas from all those presented by the brain but also the correct methods of arriving at those ideas.  Making such a selection – choice – is a necessity for that being, if he is to continue to survive.  What choice such a being makes is not necessitated (if it were, he would have his survival guaranteed).  If he chooses to think, like the first individual in the above example, this being succeeds.  If he defaults, like the second individual, this being harms himself, whether immediately or in the long run.

To reverse a pat phrase Harris uses in condemning choice:  No one has come up with a mechanism by which such persistent, repeated monitoring, evaluation and re-establishing of one’s direction in the face of error could be the result of neurological impulses or genes or chemistry.  A thought process can go off track.  Using the earlier analogy, the train one chooses to ride can be derailed.  To keep it going requires grabbing the controls and deliberately keeping it on course.  Harris denies the possibility of grabbing the controls by choice.  A few narrowly conceived “choice” descriptions, or stories of emotionally driven choices, like the examples Harris provides, might deceive one into agreeing.  The lengthy thought process described with respect to the first individual in the example – not to mention the kind of long-term effort involved in producing the Oxford English Dictionary, or Les Misérables, or the Empire State Building, or the moon landing – unmasks the absurdity of such a denial.

The next and concluding post will identify the fundamental mistake Harris makes in using concepts, and how that error invalidates his argument.