Category Archives: Philosophy

Constitutive ends, transcendent ends, and rules

I don’t care about making money, I just love to sell carpet!

— Buddy Kallick

If you ask what the goal or end of a particular action is, there are a number of actions for which that question admits of two distinct answers — which, after rejecting several even less felicitous terms (trust me, it’s possible!), I have decided to call the constitutive end and the transcendent end.

This distinction was brought to my attention as I was rereading some of the epigrams filed under “Diversion” in Pascal’s Pensées (Krailsheimer’s translation), so I might as well use one of his examples to explain what I mean.

[Those who say] that people are quite unreasonable to spend all day chasing a hare that they would not have wanted to buy, have little knowledge of our nature. The hare itself would not save us from thinking about death and the miseries distracting us, but hunting does so.

In this case, catching the hare is the constitutive end of the hunt — so called because such an end is an essential part of what makes a hunt a hunt. Without a hare (or some other quarry), there can be no hunt. Or, to be more precise, it is not the hare itself that is necessary so much as the idea of the hare, as a goal imagined in the hunters’ minds. It is quite possible to hunt even when there are no hares about, so long as the hunters do not know this.

The constitutive end is therefore absolutely necessary, in that the activity in question is by definition the pursuit of that end. Unless a person pursues that end, it is impossible for him to engage in that activity. However, psychologically, it often happens that the constitutive end is not the “real” end — not the thing that would really satisfy those who are ostensibly pursuing it. This is where transcendent ends come into play. (Sorry if that sounds a little too maharishi; again, you’ll just have to trust me when I say that these are the least infelicitous terms I could come up with.) In Pascal’s example, the transcendent end of the hunt is simply to distract the hunter, thus relieving him of the pain of thinking about our miserable human condition. I call it transcendent because it transcends the activity itself. The hunt as a hunt is perfectly intelligible on its own terms even without the knowledge that the hunters are seeking to be distracted from their own mortality — but not without the knowledge that they are pursuing a hare.

*

Furthermore, although the transcendent end is the “real” end — the one that the hunters actually care about — they must nevertheless focus all their attentions and energies on the constitutive end. In another of his examples, Pascal discusses a man who gambles a small sum every day for entertainment. Like the hare hunters, he doesn’t really want his ostensible goal (money) but rather diversion and distraction. If you offered to just give him some money each day on condition that he give up gambling, he wouldn’t be interested. However —

He must have excitement, he must delude himself into imagining that he would be happy to win what he would not want as a gift if it meant giving up gambling. He must create some target for his passions and then arouse his desire, anger, fear, for this object he has created, just like children taking fright at a face they have daubed themselves.

The transcendent end is such that it cannot be pursued directly, but only by means of the pursuit of a wholly different (constitutive) end. And the more you can lose yourself in the pursuit of the CE, forgetting all about the TE if possible, the more likely you will be to attain the TE.

*

Actively pursuing one end (the CE) in order to attain a quite different end (the TE) is a dicey business, since there will nearly always be actions which, while effective ways of reaching the CE, are actually detrimental to the TE. This is why it is often necessary to pursue the CE within a framework of rules.

Hunting, for example, is subject to standards of sportsmanship, and too-effective methods are deemed unsportsmanlike. This is very strange if you know only that the goal of the hunt is to capture a hare — but once you understand the transcendent end of distraction-from-one’s-mortality (or, in less charged language, “fun”), it becomes clear why hunting must not be permitted to become so easy that it fails to keep the mind occupied.

The same is true for the rules of other sports. If your purpose is to get the ball into the goal, it seems quite counterproductive to refuse to touch it with your hands — but in light of the transcendent end of soccer (distraction again), such rules make sense.

*

So far my examples — Pascal’s examples — involve only sports and other diversions, where the TE is distraction. However, I think the logic of constitutive ends, transcendent ends, and rules can also be applied to many other kinds of activities. A few examples follow.

*

St. Augustine, back before he was a saint, used to enjoy stealing for the sake of stealing. As the noted Augustine scholar Sir Michael P. Jagger puts it, “Augustine knew temptation,” loving not only “women, wine, and song,” but also “all the special pleasures of doing something wrong.” In the saint’s own words (translated by E. B. Pusey):

I lusted to thieve, and did it, compelled by no hunger, nor poverty, but through a cloyedness of well-doing, and a pamperedness of iniquity. For I stole that, of which I had enough, and much better. Nor cared I to enjoy what I stole, but joyed in the theft and sin itself.

A pear tree there was near our vineyard, laden with fruit, tempting neither for colour nor taste. To shake and rob this, some lewd young fellows of us went, late one night (having according to our pestilent custom prolonged our sports in the streets till then), and took huge loads, not for our eating, but to fling to the very hogs, having only tasted them. And this, but to do what we liked only, because it was misliked.

Behold my heart, O God, behold my heart, which Thou hadst pity upon in the bottom of the bottomless pit. Now, behold, let my heart tell Thee what it sought there, that I should be gratuitously evil, having no temptation to ill, but the ill itself. It was foul, and I loved it; I loved to perish, I loved mine own fault, not that for which I was faulty, but my fault itself. Foul soul, falling from Thy firmament to utter destruction; not seeking aught through the shame, but the shame itself! . . .

So then, not even Catiline himself loved his own villainies, but something else, for whose sake he did them. What then did wretched I so love in thee, thou theft of mine, thou deed of darkness, in that sixteenth year of my age? Lovely thou wert not, because thou wert theft. But art thou any thing, that thus I speak to thee? Fair were the pears we stole, because they were Thy creation, Thou fairest of all, Creator of all, Thou good God; God, the sovereign good and my true good. Fair were those pears, but not them did my wretched soul desire; for I had store of better, and those I gathered, only that I might steal. For, when gathered, I flung them away, my only feast therein being my own sin, which I was pleased to enjoy. For if aught of those pears came within my mouth, what sweetened it was the sin.

The pears were a constitutive end for young Augustine — no theft without something to steal — but the transcendent end was to taste “all the special pleasures of doing something wrong.” These are seemingly paradoxical pleasures, but everyone knows them; no one reads Augustine but recognizes himself in this passage. (I suppose the root of the pleasure is pride — glorying in the fact that one can do such things and enjoy them, in defiance of God, reason, and society.)

The theft was by definition a means to the end of getting pears — but the pears were valued only as a means to the end of committing theft.

*

Everyone knows the story of the widow’s mite (as it is always called for some reason; it should be “the widow’s mites”).

And Jesus sat over against the [temple] treasury, and beheld how the people cast money into the treasury: and many that were rich cast in much. And there came a certain poor widow, and she threw in two mites, which make a farthing. And he called unto him his disciples, and saith unto them, Verily I say unto you, That this poor widow hath cast more in, than all they which have cast into the treasury: For all they did cast in of their abundance; but she of her want did cast in all that she had, even all her living (Mark 12:41-44).

This is not generally considered one of Jesus’s “hard sayings”; most people naturally and intuitively understand and agree with the judgment expressed. For me, though, it has always been a major sticking point, something I have brooded over again and again in an attempt to understand Jesus’s message.

From a utilitarian point of view (and we moderns are all utilitarians to some degree), how can the widow’s donation possibly be judged better than those of the rich men? The rich men contributed substantially to the support of the temple at no real inconvenience to themselves — maximum benefit for the temple, minimum harm for the donors — whereas the widow made an enormous sacrifice which scarcely benefited the temple at all. By what criteria is that donation judged better which produces greater harm and less benefit?

One easy, not to say facile, explanation is that Jesus was making a statement about the general goodness of the attitude exemplified by the widow, not of this particular instance of it. What he meant was that it would be good if people in general (particularly rich people) were as proportionally generous as this poor widow had been. This particular widow’s gift was worthless and even actively harmful, but we ought nevertheless to praise it so as to encourage a similar attitude in others — specifically, in others who are not poor widows.

But that’s not what Jesus said. He didn’t say, “If only the rich could be so generous!” He said, “This poor widow hath cast more in.” It’s hard to avoid the (anti-utilitarian) conclusion that, for Jesus, the primary value of the gifts lay not in the good they did to the temple but in the harm they caused to the donors. The temple received more from the rich, but the widow sacrificed more, and thus her gift was superior.

Sacrifice is thus valued qua sacrifice, regardless of whether or not it helps anyone. However, not just any sacrifice will do. It must be a sacrifice motivated by love or piety — which in turn means that its constitutive end must be to help some other person, to further the work of God, etc. Although the widow wasn’t really helping the temple at all, it was nevertheless important that it was into the temple treasury that she cast her mites. Had she just cast them into the sea, it seems unlikely that Jesus would have been as approving. Likewise, when Jesus wanted the rich young man to sacrifice his wealth, he didn’t tell him to scatter his flocks and burn down his house; he told him, “sell whatsoever thou hast, and give to the poor.” Was the point really to help the poor? No, of course not. But helping the poor (or some similar “good cause”) was nevertheless necessary as a constitutive end. The transcendent end was the sacrifice itself, or perhaps the moral effects which sacrifice engenders.

*

Is all charity similar in kind to that demonstrated by the widow or demanded of the rich young man? There is, after all, something paradoxical about on the one hand scorning worldly goods and comforts (as a virtuous person should), and on the other hand trying to provide those goods and comforts for others as if we were thereby doing them some great service. Just as Pascal’s gambler had to “delude himself into imagining that he would be happy to win what he would not want as a gift,” doesn’t the charitable Christian have to delude himself into imagining that he can contribute to others’ happiness by giving them what he knows cannot bring happiness?

The constitutive end of charitable giving is to alleviate poverty — and that could be most effectively achieved by forcibly taking money from the rich and giving it to the poor. But such means would be detrimental to the transcendent end (namely, the happiness that comes from love, generosity, and gratitude), so a rule is needed (“thou shalt not steal”).

*

My work as a language teacher is necessarily based on the pursuit of constitutive ends. The transcendent end is to develop proficiency in English, and that goal can be achieved only through practice using the language. Language, however, is such that it cannot really be used without some communicative goal (which is why “Say something in Chinese!” is such an annoying request). In a recent class, for instance, I had my students read an English version of H. C. Andersen’s story “The Swineherd” and discuss whether the characters’ actions were right or wrong. (I knew from experience that women tend to sympathize with the princess, and men with the prince, leading to lively debate.) The real purpose of this whole exercise was to practice a few specific grammatical constructions — perfect modals (“he shouldn’t have deceived her,” “I would have done the same thing,” etc.) and the third conditional (“if she hadn’t kissed him, they wouldn’t have been banished”) — but this was accomplished by focusing almost entirely on the constitutive end of passing moral judgment on fairy-tale characters.

Of course, the most effective way for a group of Taiwanese people to reach any communicative goal would be for them to speak Chinese. Hence the need for rules (English only) to ensure that the transcendent end is served.

*

One type of error is to disregard rules and focus too exclusively on the constitutive end. Another is to focus directly on the transcendent end, forgetting that the activity cannot maintain its character — and thus cannot lead to the TE — unless the CE is kept in focus.

“We don’t keep score; we just play for fun.” People who say this exhibit a fundamental misunderstanding. Yes, fun is the real point (transcendent end) of playing — but unless you’re trying to win (the constitutive end) you’re not actually playing the game and therefore won’t have as much fun.

1 Comment

Filed under Ethics, New Testament, Philosophy, Psychology

The insufficiency of mere virtue

But the injunctions “Be virtuous,” “Be courageous,” “Be great-souled,” “Be liberal” do not tell us what to do in the sense of what to aim at; they rather tell us how we should behave in the pursuit of our aim, whatever it is. But what should that aim be?

— Alasdair MacIntyre, A Short History of Ethics

*

I’ve got a good idea for a game. Like all games, it needs rules, so I’ve got several. First, you’ve got to keep the ball within bounds; there’s a line painted around the edge of the playing field, and if the ball goes over the line it’s out of bounds. You can kick the ball, but you can’t touch it with your hands. Also, you’re not allowed to hit, kick, push, or spit on the other players. No steroids are allowed. Oh, and you have to wear a regulation uniform. Jumping is permitted, as long as you don’t jump too much. Excessive jumping is frowned on. That’s about it.

I know you’re thinking I must have forgotten to mention something — like what the goal is, how to win the game. Well, your goal is to score points, and you score points (here’s the beauty of the game design) by following the rules. The ref watches you while you play, and for every minute spent following the rules you get a point. Points are deducted for infractions — how many depends on how serious the offense — and in the end if you have a positive number of points, you win! (This is not an inherently competitive game. It’s perfectly possible for everyone to win.) Winning — ending with a positive number of points — is the main point of the game, but of course the more points you can get, the better.

That was the original version of the game, but I found that it had a few problems — the biggest one being that it was possible to win by just standing on the field doing nothing. So I added some more “positive” scoring criteria, to encourage players to actively play well rather than taking the negative path of merely avoiding violations. In the new version of the game, you also get points for helping other players. So, for example, if one of the other players has decided to try to keep the ball in the air for a full five minutes, you can get points by helping him do that. Or if he’s decided to stand perfectly still for the whole game, you can help prop him up — but of course don’t push him! Team spirit counts, too. Enthusiastically helping other players will get you even more points. You also get points for accuracy — for making the ball go precisely wherever it is that you want it to go — and for general grace of movement. In this new, richer version of the game, obeying the rules is the bare minimum expected; most of your effort will be devoted to being accurate and graceful and helping others.

*

A deeply unsatisfying game, obviously. A pointless one. But notice how much more intelligible it becomes with the addition of an objective goal, however arbitrary. Tell a player that his goal is to see to it that the ball enters net A more times than it enters net B, and suddenly all his grace and accuracy and teamwork become meaningful.

As with sports, so with war. Send an army onto the field with no orders but to be brave, loyal, and self-sacrificing, and nothing will come of it. But tell them that their goal is to capture Jerusalem — or to protect it from being captured, or whatever (just about any goal will do, really) — and you create a situation in which real bravery, loyalty, and self-sacrifice can appear.

The goal itself doesn’t really matter. The real point is not the achievement of the ostensible goal, but rather the virtue and excellence which are manifested in its pursuit. As Nietzsche’s Zarathustra says, “a good war hallows every cause.” The cause itself may be completely pointless — as, for example, in soccer or World War I — but it must not be thought of as pointless. Unless the participants really care about the ostensible goal, no “good war” will result.

*

Moving from sports and war to life in general, I find that most moral philosophy is as unsatisfactory as the imaginary game described at the beginning of this post:

Be virtuous.

In the service of what goal?

Happiness.

And how is happiness obtained?

By being virtuous.

So what exactly am I supposed to do?

Implied answer: Whatever strikes your fancy, so long as you do it in a virtuous way.

*

This can be done. A more-or-less arbitrary goal can be chosen and pursued, and the result can be a life of virtue and happiness. (It can be instructive to Google the phrases “the purpose of life is” and “he devoted his life to” — the sentences tend to end in totally different ways.) But it only works if you don’t think about it too much. The soldiers must never stumble upon the disillusioning thought, “This? Is this the face that launched a thousand ships?”

3 Comments

Filed under Ethics, Philosophy

The Consolation of Philosophy

What is philosophy, you ask?
They say it’s learning how to die.
But should you chance to flub that task,
Don’t fret: You’ll get another try —
And then a third, and so on, till
You get it right — you surely will!
It’s guaranteed, so don’t despair!
Philosophy is more than fair.
Her students may be plagued with doubt,
But not a one has yet flunked out.


1 Comment

Filed under Philosophy, Poetry

Evidence against free will?

What would count as evidence against free will?

I’ve said before that there can be no evidence for or against free will because it is a doctrine about the ontological status of things that don’t happen. A person with free will might very well do precisely the same things as a person without free will — the only difference being that the former could have done otherwise. But what “could have happened” is invisible to us; we can only observe what actually happens. Therefore, a person with free will and a person without free will are empirically indistinguishable. I’ve said before that this means all we can do is assume that we have free will (or not), and that it makes more practical sense to assume that we do. But what it would actually mean, if it were true, is that free will vs. determinism is a fake question, a distinction which makes no difference. If it is indeed true that the one belief is more practically useful than the other, then the two beliefs must be empirically distinguishable, at least in principle.

*

My argument that there can be no evidence for or against free will could also be used to argue that there can be no evidence for or against the proposition that nature is governed by laws. The sun rose in the east today, but could it have risen in the west instead? We believe that the sun and the earth move in accordance with fixed laws of gravity and that it is impossible for them to do otherwise — but isn’t it also possible that they are free, that they just happen to choose to behave in a uniform way but could just as easily choose otherwise? G. K. Chesterton somewhere discusses the possibility that every single day God freely chooses to tell the sun (or rather the earth) to “do it again.” Have we no empirical grounds for favoring Kepler/Newton/Einstein over Chesterton? After all, we only observe what happens, not what could have happened.

I’ve been conflating absolute proof with mere evidence. There can be no conclusive disproof of Chesterton’s “do it again” hypothesis, but we can and do have evidence against it. Suppose an astronomer predicts precisely where and when the sun will rise tomorrow. If the sun and earth are bound by the laws the astronomer thinks they are bound by, what is the probability of the prediction coming true? 1. If they could behave otherwise, what is the probability of the prediction coming true? Unknown, but necessarily less than 1. You can do the Bayesian math and see that, whatever prior probability we assign to the Chesterton hypothesis, it should be revised downward if the astronomer’s prediction comes true. Therefore, every successful prediction is evidence that things could not have happened otherwise. (How strong that evidence is cannot be calculated, though.)
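
To spell the Bayesian math out (a worked sketch of my own, with illustrative symbols: C is the Chesterton hypothesis, E is the event that the astronomer’s prediction comes true, q < 1 is the unknown probability of E under C, and P(E | ¬C) = 1 under lawfulness):

\[
P(C \mid E) \;=\; \frac{P(E \mid C)\,P(C)}{P(E \mid C)\,P(C) + P(E \mid \lnot C)\,P(\lnot C)} \;=\; \frac{q\,P(C)}{q\,P(C) + \bigl(1 - P(C)\bigr)} \;<\; P(C).
\]

The inequality holds for any prior whenever q < 1, but the size of the downward revision depends on the unknown q.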

*

When it comes to human behavior, things are less straightforward, since no one claims to be able to predict it. Determinists say it is predictable in theory but, due to the fantastic complexity of the human brain, not in practice. Indeterminists say it is not predictable even in principle. The two theories therefore make no distinct predictions.

However, if someone were to make a detailed and accurate prediction of human behavior, comparable to the predictions of astronomers, that would be evidence for determinism and against free will. The more specific the prediction, the stronger the evidence (though, again, we cannot assess exactly how strong).

Prophecies like those featured in Greek tragedy would be relatively weak evidence against free will. One gets the impression that Oedipus and his parents were free to do many different things, but that some unseen power was seeing to it that, whatever they chose to do, the final result would be the same. (It could be compared to a chess master’s prediction that, whatever his novice opponent may choose to do, the master will still win in the end.) Much stronger evidence can be found in the Gospels, where Jesus says to Peter, “Before the cock crow twice, thou shalt deny me thrice” — and then, despite knowing the prophecy and being unwilling to do any such thing, Peter proceeds to fulfill it. If the story is true, it offers strong evidence against free will, since specific details of Peter’s behavior (which Peter, apparently erroneously, believed were under his control) were successfully predicted. No wonder the poor devil “went out and wept bitterly” at this revelation that he was, despite appearances, a robot!

*

What, then, would count as evidence for free will? Well, any failed prediction of human behavior. Granted, that seems like a very strange thing to say. If I am unsuccessful in predicting the behavior of a given system, that doesn’t mean the system isn’t governed by rules — it just means it isn’t governed by the particular rules I thought it was governed by. But, logically, that is evidence (very weak evidence) that it isn’t rule-governed at all — just as the fact that I wasn’t born on February 12 is evidence that I wasn’t born in February at all.
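
The birthday analogy can be made quantitative (my own illustration, assuming a 28-day February and uniformly distributed birthdays). Let F be “born in February” and E be “not born on February 12,” so that P(E | ¬F) = 1 and P(E | F) = 27/28. Bayes’ theorem in odds form gives

\[
\frac{P(\lnot F \mid E)}{P(F \mid E)} \;=\; \frac{P(E \mid \lnot F)}{P(E \mid F)} \cdot \frac{P(\lnot F)}{P(F)} \;=\; \frac{28}{27} \cdot \frac{P(\lnot F)}{P(F)},
\]

a shift in the odds of barely four percent: real evidence, but of just the feeble sort described above.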

*

The problem is that none of this evidence is at all quantifiable, so it remains impossible to say whether, on balance, there is more evidence for free will or against it. In the end, then, there’s still nothing to do but to make an assumption one way or the other.

3 Comments

Filed under Philosophy

Life, or knowledge of good and evil: choose one

There is a way which seemeth right unto a man, but the end thereof are the ways of death.

— Proverbs 14:12 (and 16:25)

The upshot of my discussion (qv) of Bruce Charlton’s argument against atheism is that, yes, it is very likely a pathological belief — but that we cannot therefore write it off as a delusion. It is pathological not because it misrepresents reality, but because (like most religions) it fails to provide the artificial motives which are necessary in order to induce human populations to reproduce themselves under modern conditions. Humans, like many zoo animals, don’t breed well “in captivity” (i.e., in an unnatural and evolutionarily novel environment), and most belief systems, including all known atheistic ones, fail to cure that problem.

Only a handful of belief systems (the most prominent being Mormonism) qualify as non-pathological under modern conditions. The problem is that, by ordinary standards of evidence, these belief systems just don’t seem to be true. For Dr. Charlton, Mormonism’s effectiveness as an antidote to the modern pathology of voluntary infertility is evidence for its truth. However, the pathology is not essentially about incorrect beliefs, but about the inadequacy of evolved motives to induce reproduction under evolutionarily novel conditions. If certain forms of theism can cure that pathology, this is not evidence that they are true, but only that they are expedient under modern conditions. (The pathology will correct itself in any case, either by evolutionary changes in human nature or by the collapse of modernity — most likely the latter. However, if we want to continue to be both modern and human — and we do — it would certainly be expedient to convert to Mormonism or something similar.)

So, we find ourselves in the dilemma described in Proverbs: The beliefs that seem right lead to death; the beliefs that will save us seem wrong. If we — not we individuals, but we cultures, we nations, kindreds, tongues, and peoples — choose to die for what we believe (or disbelieve), is that heroic or just stupid? The Christian answer is clear: If your eyes cause you to fall, pluck them out; better to enter into life blind than to perish outright.

*

The writer who addresses this dilemma most explicitly is Friedrich Nietzsche. I hadn’t read Beyond Good and Evil since I was a child, but a couple of days ago I felt a sudden urge to reread it (in Marianne Cowan’s English translation). By “coincidence,” I found that passage after passage tied into the train of thought triggered by Dr. Charlton’s post.

Here is section 4 of Beyond Good and Evil, which states the dilemma in the clearest possible terms:

The falseness of a given judgment does not constitute an objection against it, so far as we are concerned. It is perhaps in this respect that our new language sounds strangest. The real question is how far a judgment furthers and maintains life, preserves a given type, possibly cultivates and trains a given type. We are, in fact, fundamentally inclined to maintain that the falsest judgments (to which belong the synthetic a priori judgments) are the most indispensable to us, that man cannot live without accepting the logical fictions as valid, without measuring reality against the purely invented world of the absolute, the immutable, without constantly falsifying the world by means of numeration. That getting along without false judgments would amount to getting along without life, negating life. To admit untruth as a necessary condition of life: this implies, to be sure, a perilous resistance against customary value-feelings. A philosophy that risks it nonetheless, if it did nothing else, would by this alone have taken its stand beyond good and evil.

People will call this nihilism, but of course it is not. Nietzsche is not saying that nothing matters; he is saying that life matters — that it matters more than truth itself, and that any judgment, be it never so “true,” which stands in the way of life must be sacrificed. I myself have already taken a step down the Nietzschean path by choosing to accept the doctrine of free will — despite the fact that I know it to be logically self-contradictory — because it seems pragmatically necessary for life. Nietzsche forces us to face the uncomfortable fact that to think this way — to accept untrue or probably-untrue beliefs because they “further life” — is to “take a stand beyond good and evil.”

Essentially all modern Christians do this, and will generally admit to doing it if pressed. In the faith even of one who professes to “know beyond a shadow of a doubt” there lurks an element of Pascal’s Wager, of freely choosing beliefs which seem expedient rather than being compelled by adequate evidence. No Christian thinks of this as a Nietzschean move, or as being “beyond good and evil.” (Christians generally dislike Nietzsche, perhaps because he shines too bright a light on them.)

But this choosing to accept false beliefs is not a uniquely religious phenomenon. As Nietzsche says, everyone does it — because it is literally necessary for life — but some are more honest than others about it. Atheists are generally the least honest, Christians a great deal more so — but they still fall short of the unblinking, spade-calling candor of Nietzsche himself.

*

But perhaps one of our necessary, life-furthering delusions is the belief that no delusion is necessary or life-furthering. There is an obvious element of paradox in being so honest about our need for self-deception, in insisting on the important truth that truth is not the most important thing. Nietzsche’s paradoxical insistence that, while truth is of secondary importance, honesty is essential, is perhaps best understood in light of the above quotation. “The real question” is not only “how far a judgment furthers and maintains life,” but also how far it “preserves a given type.” Nietzsche is not — though he seems at first glance to be — advocating a philosophy of “better a live dog than a dead lion.” “Type” — dog or lion — matters just as much as life, and as becomes clear later in Nietzsche’s book, the human type he wishes to preserve is one characterized by courage, and by the candor which comes with courage.

What tempts us to look at all philosophers half suspiciously and half mockingly is not so much that we recognize again and again how innocent they are, how often and how easily they make mistakes and lose their way, in short their childishness and childlike-ness — but rather that they are not sufficiently candid, though they make a great virtuous noisy to-do as soon as the problem of truthfulness is even remotely touched upon. Every one of them pretends that he has discovered and reached his opinions through the self-development of cold, pure, divinely untroubled dialectic (in distinction to the mystics of every rank who, more honest and fatuous, talk about “inspiration”), whereas, at bottom, . . . a heart’s desire, made abstract and refined, is defended by them with arguments sought after the fact. They are all of them lawyers (though wanting to be called anything but that), and for the most part quite sly defenders of their prejudices, which they christen “truths” — very far removed they are from the courageous conscience which admits precisely this; very remote from the courageous good taste which makes sure that others understand. (from Beyond Good and Evil, Section 5)

The problem is that the stance Nietzsche is advocating — embracing “life-furthering” beliefs rather than true ones, expedience rather than principle — is hardly one that we would normally associate with courage. The courageous stance is the one expressed by Arthur Hugh Clough: “It fortifies my soul to know / That, though I perish, Truth is so” — compared with which Nietzsche’s own position seems more like a craven selling-out.

Truth, however, is not the only principle for which one can courageously take a stand. As becomes clear in the next (i.e., the sixth) section of Beyond Good and Evil, Nietzsche’s courageous man exhibits fealty not to the impersonal “truth” but to his own personal “moral intentions.”

Gradually I have come to realize what every great philosophy up to now has been: the personal confession of its originator, a type of involuntary and unaware memoirs; also that the moral (or amoral) intentions of each philosophy constitute the protoplasm from which each entire plant has grown. Indeed, one will do well (and wisely), if one wishes to explain to himself how on earth the more remote metaphysical assertions of a philosopher ever arose, to ask each time: What sort of morality is this (is he) aiming at? . . . there is nothing impersonal whatever in a philosopher. And particularly his morality testifies decidedly and decisively as to who he is — that is, what order of rank the innermost desires of his nature occupy.

The courageous man, then, is one who wishes to live a particular kind of life and who orders his beliefs so as to further that goal — both in terms of staying alive and in terms of living by that particular morality.

*

This is ultimately unsatisfying, though. If there is no bedrock of objective truth — or if there is, but we choose to ignore it as irrelevant — then none of these supposedly “heroic” choices people are making really mean anything. A man’s chosen morality “testifies decidedly and decisively as to who he is,” says Nietzsche, making it sound terribly momentous — but without some fixed standard of real morality grounded in actual truth, “who he is” is just a bit of meaningless trivia; preferring morality A to morality B is no more significant than preferring chocolate over strawberry ice cream. There can be no real courage or heroism without something objective in which to ground it.

Even Nietzsche seems to see this at times. Much later in Beyond Good and Evil (section 39) he appears to backtrack from his earlier position and to stress the importance of truth — truth at all costs, even if knowing the truth should result in vice, misery, and death.

No one very easily takes a doctrine as true because it makes one happy or virtuous. . . . Happiness and virtue are not arguments. But we like to forget — even sensible thinkers do — that things making for unhappiness or for evil are not counter-arguments, either. Something might be true, even though it is harmful and dangerous in the greatest degree; it might in fact belong to the basic make-up of things that one should perish from its full recognition. Then the strength of a given thinker would be measured by the amount of “the truth” that he could stand.

*

Ultimately, the only humanly acceptable state of affairs is one in which we don’t need to make such trade-offs — one in which truth, life, virtue, and happiness are all mutually compatible. The only acceptable way in which to live is in the faith that that is indeed true: that the Good is a unitary thing which can be pursued in its entirety, without the need to permanently sacrifice one aspect of it to another.

Even that faith cannot obviate the need to make tough choices between truth and life, though, since they often seem to be incompatible. Do we embrace beliefs that seem true, in the faith that they will ultimately turn out to be life-sustaining as well; or do we choose beliefs that seem expedient, in the faith that they will turn out to be true?

2 Comments

Filed under Ethics, Philosophy

Bruce Charlton’s case that atheism is incoherent

In a recent blog post, Bruce Charlton makes the case that Atheism is always incoherent, incompetent or unserious; coherent thinkers *must be* theists. Now I am no longer the atheist I once was — I am willing to entertain theism as a working hypothesis (which is of course still a long way from actually believing it). However, I do think that atheism is a reasonably coherent point of view — or, at any rate, that its inherent problems as a philosophy are no worse than the problems inherent in theism. I therefore want to go through Dr. Charlton’s points one by one and analyze them.

In what follows, italicized paragraphs represent summaries or paraphrases of points made by Dr. Charlton. Paragraphs in roman type present my own ideas.

*

1. The terms of the debate

Theism and atheism are metaphysical assumptions, not empirical conclusions. They should be judged not by comparing the evidence for and against each view, but by comparing the positive and negative consequences of believing them.

I think it is probably true that there can be no empirical evidence for or against theism simply as such, because it is such a vague proposition. However, the more specific theological claims of individual religions often do have implications which are subject to empirical testing and/or logical disproof.

Where empirical evidence is unavailable or inadequate, it is indeed appropriate to evaluate competing beliefs by their probable consequences — i.e., by criteria of expediency as opposed to truth. This is what lies behind the principle of presumption of innocence without proof of guilt; lacking conclusive evidence, we judge it more expedient to risk one kind of error than the other. My assumption that I have free will (see You should believe in free will) is also based on expediency rather than evidence (since there can be no empirical evidence regarding the ontological status of things that don’t happen). Pascal’s wager is yet another example of this kind of reasoning.

Bare theism, though, is a very vague proposition indeed, and just as there can probably be no real evidence for or against theism-as-such, it’s not clear that theism-as-such has any particular consequences, either. Specific religions have specific consequences and can thus be judged as expedient or inexpedient belief systems, but I confess to being at a loss to think of any specific practical consequences of “mere theism.” Rather than passing judgment on theism first and only afterwards (should the judgment be positive) considering which brand of theism is the best, perhaps it makes more sense to consider specific religions right from the start.

*

2. The pathology of sub-replacement fertility

One of the negative consequences of atheism is “sub-replacement fertility under modern conditions (where there is access to a range of fertility regulating technologies).” This is objectively pathological, and seriously so. Dr. Charlton admits that most religions also lead to sub-replacement fertility; however, there are a few religious exceptions to this rule (e.g., Mormons, Orthodox Jews) but no known non-religious exceptions. (Some individual atheists may be exceptions, of course, but no predominantly secular society is.)

Well, the fact that atheists and the vast majority of theists suffer from this pathology is a strong indication that belief in God is not the determining factor. That every member of this tiny group of élite cultures — those which reproduce themselves under modern conditions — should be theistic is hardly a surprise, since virtually all cultures are theistic. To understand the secret of their immunity to the otherwise universal plague of Malignant Modernity, we should be looking at what they have in common which makes them different from other cultures — not at the near cultural universal of theism.

*

When I look at the modern pathology of voluntary infertility — and, as Dr. Charlton says, it is very definitely a pathology and a serious one — I see a pathology of motives, not beliefs. It’s not that our fundamental motives have changed, but that our technology has changed the world in such a way that the old, once-serviceable motives are no longer productive of fitness. (See my discussion of this in The Genie scenario.)

Consider the situation with food: We still have the same old food-motives as before — a desire for sugar and fat and salt and so on — but those motives, which once kept us alive, are now fitness-reducing in a world where technology has made these things too readily available, and in refined form.

A similar pathology of outdated motives seems to be in play vis-à-vis reproduction. Most people do have a natural desire to have children — but compared to our other natural desires, it’s not a very strong one. Other desires — for sex, status, comfort, security, pleasure — are much stronger and more immediate, and when they are pitted against the desire for children, the latter tends to lose out. In pre-modern times, those stronger desires tended naturally to lead people to have children — either as a side-effect of pursuing sex, or as a means of acquiring wealth, status, and security. Under modern conditions, these indirect inducements to reproduction no longer work properly. It is quite easy to have plenty of sex without ever having children, and children tend to be a net negative in economic terms. As for security, the modern welfare state makes it unnecessary to have children to provide for one in one’s old age; and easy divorce means that women cannot feel secure without a “career” — which generally entails a ridiculously protracted period of education, with predictable consequences for fertility. Without the assistance of these ancillary motives, modern people are inadequately motivated to reproduce.

In all of this there is no indication that people’s incorrect beliefs (about the existence of God or about anything else) are at the root of the pathology — just as the obesity epidemic probably cannot be attributed to incorrect beliefs about nutrition. In both cases, once-effective motives are wreaking havoc in an environment which no longer resembles the one in which they evolved.

Certain beliefs may turn out to be effective antidotes to these motivational pathologies — but these need not (indeed, probably will not) be factually correct beliefs. Wrong beliefs can be tailored to fit wrong motives so as to produce the desired result — throwing Br’er Rabbit into the briar patch, as it were. To use a hypothetical example, a firm belief that eating refined grains results in eternal damnation would probably lead to better health consequences than true beliefs (coupled with woefully inadequate motives) would. Those few religions which succeed in motivating their adherents to choose above-replacement fertility may be not-so-hypothetical examples of the same thing.

*

3. Justifying norms

Another consequence of atheism is that laws and other norms have nothing to back them up. They are either confessedly arbitrary — enforced by bare, unjustified power — or else they are justified by utilitarian criteria (maximizing pleasure and minimizing pain “overall”). However, there is no intelligible calculus for summing up individual pains and pleasures and deriving the overall hedonic value of any particular state of affairs, so in practice utilitarianism is used as a post hoc justification for whatever those in power find expedient.

Yes, but the “will of God” is no more provable than the “greatest good for the greatest number,” and both of these principles have been used to justify all sorts of different norms — including ones which strike most people as grotesquely evil.

In theory, theists humbly submit to the will of God. In practice, they simply assume that God agrees with their own conscience, or their own culture’s norms, or whatever happens to be expedient at the moment — which is pretty much the same thing that utilitarians do. (See my old post Arrogance and humility, which was also written in response to Dr. Charlton.)

*

4. Objective meaning and purpose

Under atheism, there is no objective purpose or meaning of life. Atheists respond that they can create their own meaning and purpose. However, if this is true, it means no particular meaning or purpose can be objectively right or wrong. This implies either solipsism or nihilism — but nihilism is self-contradictory “because it is a non-arbitrary metaphysical belief which claims that beliefs are arbitrary.”

Actually, this form of nihilism is not technically self-contradictory. It states merely that all “meanings” and “purposes” — not all beliefs — are arbitrary. But that’s of little importance; I think most people will agree that solipsism and (any form of) nihilism are things to be avoided, and that atheism has serious problems if it entails either of the two.

However, it’s not clear to me how theism saves us from this species of nihilism. Various intelligent beings have various goals and purposes, and if God exists then he has goals and purposes as well — but why should God’s purposes be considered the purposes, inherently valid in a way that others are not? Is it because he is so powerful? (Might makes right?) Or because he is good and wise? (See the Euthyphro dilemma.) Or because he created us? (But if we had been created by a mad scientist instead, would his mad purposes therefore be automatically and uniquely valid?)

In fact, non-theistic Darwinism also proposes that there is an objective “purpose of life” — namely, to maximize our inclusive fitness, i.e., to keep copies of our genes in existence for as long as possible — but that is obviously an inadequate reason for any human to accept that as his own purpose in life. Theists accept the purposes attributed to God, not because they are the purposes for which life was created and as such necessarily valid, but because they are purposes which humans already find attractive for other reasons. Whatever “objectivity” those purposes may have is derived from their status as human psychological universals, a status which is unaffected by the existence or nonexistence of God.

7 Comments

Filed under God, Philosophy

Why is choosing beliefs more problematic than choosing actions?

There are those who scoff at the schoolboy, calling him frivolous and shallow: Yet it was the schoolboy who said “Faith is believing what you know ain’t so.”

— Pudd’nhead Wilson’s New Calendar

“Believing what you know ain’t so” — if it is true that we choose our beliefs and are morally responsible for them, then it ought to be possible to do just that.

We are certainly morally responsible for our actions because, knowing (or thinking that we know) what is right, we are nevertheless capable of choosing to do otherwise. We are able to do what we know is wrong — to judge a particular course of action to be wrong and to do it anyway. This is possible because judging a deed to be right is one thing, and actually doing it is another. Without this distinction, the idea of sin would be incoherent.

When it comes to belief, though, no such distinction is possible. To judge a belief to be right just is to believe it. “Believing what you know ain’t so” is, pace the schoolboy, meaningless. To believe something just is to think it is so; if you don’t think it’s so, you don’t believe it. Thus, the idea of sin is incoherent when applied to beliefs. We cannot be held morally responsible for our beliefs because there is no internal standard against which to judge them. Of course, there is the external standard of what is objectively true, but that’s not good enough. A man may do something which, as a matter of fact, is wrong — but if he doesn’t know it’s wrong, he is still innocent. Likewise, a man who believes something false is innocent unless he knows it’s false — but if he knew it was false, he would eo ipso not believe it.

*

And yet, and yet — that can’t be the whole story. At some level, everyone understands exactly what Twain’s schoolboy is talking about, which is why his definition of faith makes us smile knowingly rather than scratching our heads. “Believing what you know ain’t so” is not simply meaningless, or it would not even register as a witticism. Somehow, despite the contradictions it seems to involve, it is possible to willfully — culpably — believe something you know is false. As for what exactly that means, though, I confess that I’m still at a loss. Further research is, as they say, indicated.

6 Comments

Filed under Ethics, Philosophy, Psychology

Is freely choosing our beliefs desirable?

I have always had trouble with the idea — which I think of as typically Christian, though many non-Christians also subscribe to it — that our beliefs are freely chosen in the same way that our actions are, and that they are therefore something for which we are morally responsible.

One problem is that the “choices to believe” which I am supposed to be making all the time are, for this observer at least, invisible to the eye of introspection. When it comes to actions, I have a very strong subjective feeling of choosing my actions, and of being able to choose otherwise than I do. However metaphysically problematic that idea may be, it is an unshakable subjective conviction. Either humans really do have free agency, or else they are subject to a powerful and inescapable illusion that they have free agency. (After some vacillation, I have decided on the former, as anyone who has been reading my recent posts will know.) When it comes to beliefs, on the other hand, I have no such subjective experience. I believe what I believe, and if I have freely chosen to believe as I do, these are choices of which I have no direct knowledge. The idea of free choices which are made without the chooser’s knowledge is obviously paradoxical.

A second troubling question — the one which this post will be addressing — is why anyone would want the freedom to choose his beliefs — what the point of such freedom might be. Freedom of action is desirable because there are many different good things any given person can do, and it is not possible for him to do all of them. Therefore, there is no one right answer to the question “What should I do?” It is necessary to choose. When it comes to beliefs, on the other hand, all possible true propositions are mutually consistent in a way that all possible good actions are not. Therefore, there is one right answer to the question “What should I believe?” — and freedom to believe otherwise is nothing but the freedom to be wrong. It is a worthless freedom which can do only harm. Furthermore, the freedom to believe as you choose — that is, the “freedom” to be ignorant and to have incorrect beliefs — is detrimental to the freedom that really matters, the freedom to act. If I have an accurate map — i.e., one whose content is forced on me by reality — I am free to go wherever I want. If, on the other hand, I am “free” to draw my own map, unconstrained by the actual layout of the territory I wish to navigate, I lose the freedom to choose a destination and go there, gaining only the “freedom” to get lost.

*

Perhaps the best way to approach this is through the map metaphor just introduced — one which I have been using for years as an argument against the desirability of freely choosing one’s beliefs. While it seems obvious that a map dictated by reality is more useful than one we freely make up, it is equally obvious that in fact we do want to choose what kind of map to use, and that there is no One True Map which is objectively better than any other.

Every map is necessarily incomplete. For one thing, it has only two spatial dimensions and, while it persists in time, it does not change; the territory it represents is extended in three dimensions and changes through time. The map also has fewer “dimensions” than the territory in the sense that each point on the map has only one distinguishing characteristic (namely, color) but represents a place in the territory which has any number of characteristics (temperature, altitude, soil type, population density, etc.); therefore only one — or, with some ingenuity, a few — of these characteristics can be portrayed on any given map. Every map is also physically smaller than the territory it represents, which necessitates many omissions.

In addition to being incomplete, every map necessarily contains distortions and inaccuracies. The most inevitable of these are those which result from using a flat surface to represent the surface of a sphere — resulting in the distortion of directions and/or proportions. Other distortions may be necessary depending on what is being mapped; all information has to be “translated” into the language of colors on a two-dimensional surface, and some information doesn’t translate very well.

Translation itself offers another useful metaphor. All translations are also necessarily incomplete and distorted. A terza rima translation of Dante necessitates a great deal of semantic distortion; but a “literal” translation in prose distorts the work’s fundamental character as a poem. And of course any conceivable translation will involve the near-complete loss of the phonetic content of the original. There is no one true translation of Dante any more than there is one true map of the world. Translations and maps cannot even be objectively ranked according to how closely they approximate this unrealizable ideal of perfection. It’s not a quantitative question of how much accurate information a given map or translation conveys, but a qualitative question of what information. Which map or translation is “best” for me depends entirely on my purposes and on what kinds of information I consider most important for those purposes.

*

Any representation of the world in a finite mind is going to be incomplete, as inevitably as any map or translation. Perhaps our situation regarding possible beliefs is not really all that different from our situation regarding possible actions. While it may be true in principle that all truth may be circumscribed into one great whole, in practice we mortals are no more able to assent to all possible true propositions than to carry out all possible good actions. Thought, no less than action, requires time and effort, of which we have but a finite supply, and so knowing the truth in one area entails remaining ignorant in another — or believing something false because it “works” well enough for our purposes and because the truth is more complicated. It also seems highly likely that there are some aspects of reality which our minds simply cannot model correctly — just as a flat map simply cannot accurately portray the surface of a sphere.

*

In the realm of action, in order to do good in the only way that limited beings are capable of doing good, we have to be able to do bad. Anything good we can do will involve failing to do some other good, and often even doing something positively bad. If we are to make omelettes, we need the freedom to break eggs. Given that freedom, though, we also become free to go around breaking eggs just for the hell of it, without making omelettes. The freedom to do evil as such is undesirable, but for limited beings it is a necessary side effect of having the freedom to do good.

Something similar may be true in the realm of belief. In order to have any “true” beliefs — that is, workable approximations of truth, such as finite minds are capable of — we need the freedom to ignore and distort certain truths. (If a cartographer is strictly forbidden to depict anything untrue, he cannot draw a map at all. A translator who cannot lie cannot translate.) With that freedom, though, necessarily comes the freedom to ignore and distort even the most vital of truths — i.e., the freedom to be wrong, even disastrously wrong.

What specific truths ought we to ignore or distort, and which ones are non-negotiable? There is no one best answer to that question, since it all depends on the individual’s interests and goals. Hence the desirability of freedom.

*

So choosing our beliefs is, after all, desirable — for essentially the same reasons that choosing our actions is desirable. There remains the question of whether and how it is possible to choose one’s beliefs. The true seems, almost by definition, to be that which, when properly understood, compels belief. If one really believed P (a given proposition) to be true, it seems that it would be impossible to consider it an option to believe not-P instead. I have recently made some headway on this question, too. (Again, choosing beliefs turns out to be a lot more similar to choosing actions than I had realized.) But that is a subject for another post.

4 Comments

Filed under Philosophy

You should believe in free will

If you believe in free will, and you’re right — well, then you’re right. That’s a good thing.

But if you believe in free will, and you’re wrong — well, that means you were fated to have incorrect beliefs, and there’s nothing you could possibly have done to change that. It would be meaningless to say that you “ought to” believe differently.

*

So if you believe in free will, don’t worry about the possibility of being wrong. If you were wrong, there wouldn’t be anything you could do about it anyway.

7 Comments

Filed under Philosophy

Syllogisms, free will, and the role of attention

The ideas in this post grew out of my reflections on a recent exchange with Agellius in the comments to this post. Agellius brought up the idea of the practical syllogism, which I dismissed as “an algorithm which is no less mechanical than the laws of physics” (since any mindless computer program can derive a conclusion from premises) and therefore of no use in constructing the non-deterministic model of causation which agency seems to require. Further thought has convinced me that I was wrong in this assessment. While any given practical syllogism is indeed a deterministic algorithm, the process of making choices via practical syllogisms is not deterministic and may indeed be relevant to the question of free will.

*

A syllogism is a deductive argument deriving a conclusion from two premises, for example:

  • Major premise: All men are mortal.
  • Minor premise: Samuel L. Jackson is a man.
  • Conclusion: Samuel L. Jackson is mortal.

In this kind of syllogism — the theoretical syllogism, or syllogism properly so called — all three components are propositions, and any rational person who believes the first two propositions must believe the third also. We could make this explicit by writing our example as follows:

  • I believe that all men are mortal.
  • I believe that Samuel L. Jackson is a man.
  • Therefore, I believe that Samuel L. Jackson is mortal.
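
The mechanical character of this kind of inference is easy to make vivid in code. The little Python sketch below is entirely my own toy illustration (the function name barbara and the tuple encoding are invented for the occasion); it derives the conclusion of such a syllogism by blind pattern-matching, which is all that the “mindless computer program” I mentioned above needs to do.

    # Toy illustration: deriving the conclusion of an AAA-1 ("Barbara")
    # syllogism is pure pattern-matching; no mind need be involved.

    def barbara(major, minor):
        """major: ("all", middle, predicate), e.g. ("all", "man", "mortal")
        minor: (subject, middle), e.g. ("Samuel L. Jackson", "man")"""
        _, middle, predicate = major
        subject, kind = minor
        if kind != middle:
            raise ValueError("terms do not match; nothing follows")
        return subject + " is " + predicate

    print(barbara(("all", "man", "mortal"), ("Samuel L. Jackson", "man")))
    # -> Samuel L. Jackson is mortal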

Aristotle also introduced the idea of the practical syllogism — that is, a syllogism which concludes not in a belief but in an action. Unfortunately, he did not develop this idea very clearly, as can be seen in his example of a supposed practical syllogism:

  • Major premise: [I believe that] everything sweet ought to be tasted.
  • Minor premise: [I believe that] this particular thing is sweet.
  • Conclusion: [Therefore, I believe that] this particular thing ought to be tasted.

This is a poor example — not because the major premise is completely bizarre (the logical form of an argument is independent of the sanity of its premises), but because, as my bracketed additions make explicit, it is actually just another theoretical syllogism — a set of three propositions, belief in the first two of which necessitates belief in the third — and not “practical” at all; a person could assent to all three propositions in the syllogism without actually doing anything. So here I must part company with Aristotle and insist that in a true practical syllogism, the major premise should be the desire for a particular end; the minor premise, the belief that a particular course of action will effect that end; and the conclusion, the execution of that course of action. For example:

  • Major premise: I am hungry.
  • Minor premise: I believe that cheeseburgers satisfy hunger.
  • Conclusion: Therefore, I eat a cheeseburger.

*

Now there is a sense in which it is obviously true that something like a practical syllogism lies behind each of our conscious decisions (as opposed to “autopilot” decisions, which probably account for the majority of human behavior and which are matters of habit rather than of reason). If you wanted to explain why you chose to take a particular course of action, you would probably do so in terms corresponding to the major and minor premises of such a syllogism.

However, it is also clear that the conclusion of a truly practical syllogism (like my cheeseburger example, as opposed to Aristotle’s pseudo-practical sweet tooth example) does not really follow from the premises the way it would in a theoretical syllogism. It would be manifestly irrational to affirm the two premises of my Samuel L. Jackson syllogism while at the same time denying that Mr. Jackson is mortal. However, there’s nothing at all irrational in affirming the premises of the cheeseburger syllogism while at the same time refraining from eating a cheeseburger. Why might a person be hungry, admit that cheeseburgers satisfy hunger, and yet choose not to eat a cheeseburger? Well, one reason is the existence of countless competing syllogisms, differing from the cheeseburger syllogism only in the identity of the minor term. (“Cheeseburger” is the minor term in the original.) Cheeseburgers do satisfy hunger, but so do schnitzels and burritos and apple pies and as many other things as you care to think of.

To be sure, this is also true of our theoretical syllogism; Mr. Jackson is a man, but so are John Travolta and Jacquizz Rodgers and Takeru Kobayashi and a few billion other people. However, these other syllogisms are not in competition with the original Jackson syllogism because I’m free to believe as many things as I please. Realizing the conclusion of one of these syllogisms — say, that Mr. Travolta is mortal — doesn’t prevent me from realizing that Messrs. Jackson, Rodgers, Kobayashi, and any number of other individuals are mortal as well. In the case of the practical syllogism, though, the competition is real. Realizing the conclusion of one of the syllogisms can preclude the realization of its competitors. It’s physically impossible for me to eat all of the things that are capable of satisfying hunger. Even eating two of them is problematic; if I eat, say, a schnitzel, then I will no longer be hungry, and the cheeseburger syllogism and its other competitors will no longer be sound.

Even worse than these competing syllogisms which are practically incompatible with the original one, there may be syllogisms which are just as sound as the cheeseburger syllogism but which are logically incompatible with it — that is, which terminate in the conclusion “I don’t eat a cheeseburger.” For example:

  • Major: I want to maintain my health.
  • Minor: I believe that cheeseburgers are detrimental to health.
  • Conclusion: Therefore, I don’t eat a cheeseburger.

Nothing like this exists in the world of the theoretical syllogism. If you have two valid syllogisms, the respective conclusions of which are “Samuel L. Jackson is mortal” and “Samuel L. Jackson is not mortal,” you can be sure that at least one of your premises is false. It is logically impossible for both syllogisms to be sound (that is, valid and with true premises). However, it is possible (and quite common, actually) for two perfectly sound practical syllogisms to have contradictory conclusions.
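
The asymmetry can be made vivid with a toy sketch in Python. The representation is entirely my own invention: call a practical syllogism “sound” when its desire is genuinely felt and its means-end belief is genuinely true. Two such syllogisms can both pass the test while demanding incompatible actions:

    # Toy soundness check for practical syllogisms: both of the
    # cheeseburger syllogisms above come out sound, even though their
    # conclusions (the actions) exclude one another.

    felt_desires = {"satisfy hunger", "maintain health"}
    true_beliefs = {
        ("eat a cheeseburger", "satisfy hunger"),
        ("don't eat a cheeseburger", "maintain health"),
    }

    def sound(desire, action):
        return desire in felt_desires and (action, desire) in true_beliefs

    assert sound("satisfy hunger", "eat a cheeseburger")
    assert sound("maintain health", "don't eat a cheeseburger")
    # With theoretical syllogisms this cannot happen: contradictory
    # conclusions prove that at least one premise was false.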

*

Theoretical syllogisms are therefore definitive and self-sufficient in a way that a practical syllogism can never be. So long as I know that all men are mortal and that Mr. Jackson is a man, I can safely ignore all other considerations and conclude with confidence that Mr. Jackson is mortal. I know that, so long as this one syllogism is indeed sound, no other sound syllogism can possibly contradict it. Once I am satisfied that the premises are true and the syllogism is valid, the case is closed.

In practical reason, though, the case is never closed — or, rather, logic will never dictate when the case ought to be considered closed. Be I never so convinced of a particular syllogism’s soundness, I can still never be sure that there isn’t some other equally sound syllogism out there which contradicts it. Nevertheless, at some point I do have to stop thinking and act — declaring the case closed by a free exercise of will. This is what makes the practical syllogism a possible vehicle of free will despite its superficially deterministic nature.

*

I can exercise my will in two main ways. The first, as mentioned above, is by deciding how long to think about a possible course of action. The longer I hold a contemplated action in my mind and dwell on it, the greater the number of relevant desires and beliefs that will appear and arrange themselves into syllogisms. At first I think only of hunger and the tastiness of cheeseburgers. Then health comes to mind. As I continue to think, any number of other relevant concerns may turn up: monetary cost, convenience, the morality of killing animals for meat, the social implications of eating prole food, the question of the extent to which I should maintain “American” eating habits as opposed to going native, and so on for as long as I care to think. If I decide quickly, I will tend to follow the path of least psychic resistance — little better than just going on autopilot without engaging consciousness at all. If I dwell on the possible action for a long time, complications will proliferate — giving me more options when I finally do make my choice, but also possibly leading to paralysis.

Assuming I have thought long enough to have come up with at least two syllogisms whose conclusions are mutually incompatible (either practically incompatible or logically incompatible, as discussed above), then I have a further opportunity to exercise my will by deciding which syllogism (or which set of mutually compatible syllogisms) will “win” — that is, which of the mutually incompatible conclusions will actually be realized in action.

How is this decided? Common sense has it that the “strongest” desire wins out — that if in the end I actually eat the cheeseburger, that goes to show that I wanted the pleasure of eating it “more than” I wanted the benefits of good health. The relative strength of the minor premises is also relevant, of course; perhaps my belief that cheeseburgers satisfy hunger is a near-certainty, while I am much less certain about their long-term effects on health. This is true enough of “autopilot” decisions in which consciousness and will do not play a part. Once consciousness is engaged, though, the relative strength of various desires and beliefs turns out to be a very plastic thing, highly susceptible to the influence of attention.

Attention is the instrument of will, just as reason is the instrument of thought. To say that will just is attention wouldn’t be too far off the mark. Almost any belief or desire can be made stronger or weaker — or can be made to change its character in other ways — by the attention we choose to give it. In James Hogg’s novel The Private Memoirs and Confessions of a Justified Sinner, the narrator, after admitting to a “longing desire to kill my brother,” writes,

Should any man ever read this scroll, he will wonder at this confession, and deem it savage and unnatural. So it appeared to me at first, but a constant thinking of an event changes every one of its features.

Attention, in turn, just is imagination. (William James makes a compelling case for this identification in his Principles of Psychology.) The longer I dwell on the idea of eating a cheeseburger, calling up a mental image of the taste and the smell and the feeling of it in my mouth, the stronger those premises become, and the more likely their associated syllogisms are to prevail over the competition. (In fact, the process of writing this post required me to dwell on cheeseburger-eating at some length, and sure enough, I went out and ate a cheeseburger afterwards. My apologies if I should happen to induce a similar reaction in any of my readers.) If, on the other hand, I am persistent in my refusal to entertain such images, choosing instead to dwell on an image of myself in excellent health, the relative strength of the competing syllogisms will change, and the other decision will be made.
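
For what it is worth, the role I am assigning to attention can be caricatured in a few lines of Python. Everything here is an invented toy (the numbers, the multiplier, the names); the genuinely free part, namely what to attend to and for how long, enters only through the attention argument, and once that is fixed, the “strongest syllogism wins” step is as mechanical as you please:

    # Caricature of attention-weighted practical syllogisms: dwelling on
    # an action (imagining it vividly) multiplies the strength of its
    # premises, and the strongest syllogism issues in action.

    from dataclasses import dataclass

    @dataclass
    class Syllogism:
        desire: str            # major premise: the end
        belief: str            # minor premise: the means
        action: str            # conclusion: what gets done
        desire_strength: float
        belief_confidence: float

    def chosen_action(options, attention):
        """attention maps actions to multipliers supplied by the will."""
        def strength(s):
            return (s.desire_strength * s.belief_confidence
                    * attention.get(s.action, 1.0))
        return max(options, key=strength).action

    options = [
        Syllogism("satisfy hunger", "cheeseburgers satisfy hunger",
                  "eat a cheeseburger", 0.8, 0.95),
        Syllogism("maintain health", "cheeseburgers harm health",
                  "refrain", 0.6, 0.5),
    ]

    print(chosen_action(options, attention={}))                # eat a cheeseburger
    print(chosen_action(options, attention={"refrain": 3.0}))  # refrain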

*

In my previous post on agency and motive, I proposed an analogy in which we are slaves with many masters — slaves who can do nothing but what we are commanded to do, but who can choose which of various competing commands to obey.

The “masters” can be identified with competing practical syllogisms — i.e., with imperatives derived from beliefs and desires. The slave’s freedom lies in his ability to choose whom to listen to. Some of the masters have louder voices than the others, but it is still within his power to tune out the shouting of one premise and attend to the whispering of another — to “hearken and hear and obey.”

Leave a comment

Filed under Philosophy