Category Archives: Psychology

How crazy do you have to be to think you’re God?

C. S. Lewis once wrote that if Christ was a mere man who believed he was God, he would be “on a level with the man who says he is a poached egg” — that is, a complete lunatic who could not possibly be considered a great moral teacher or anything of that nature. In this essay, Peter Kreeft takes Lewis’s point a step further and claims that mistakenly believing oneself to be God is even crazier than believing oneself to be a poached egg — that Christ, if wrong, was literally as crazy as it is possible for a human being to be.

A measure of your insanity is the size of the gap between what you think you are and what you really are. If I think I am the greatest philosopher in America, I am only an arrogant fool; if I think I am Napoleon, I am probably over the edge; if I think I am a butterfly, I am fully embarked from the sunny shores of sanity. But if I think I am God, I am even more insane because the gap between anything finite and the infinite God is even greater than the gap between any two finite things, even a man and a butterfly.

Is that really a fair measure of insanity, though? The gap between your beliefs (about yourself or anything else) and reality is a measure of how wrong you are, but being very wrong isn’t the same as being insane. To be insane, you have to be obviously wrong; your beliefs have to be inconsistent with, or at least completely unsupported by, the data directly available to you. Ontologically speaking, a man may have far less in common with God than with a butterfly or even a poached egg — but the fact that he is not a butterfly is still far more immediately obvious than the fact that he is not God.

Consider the following three (hypothetical) people and their beliefs about themselves.

  1. Anthony believes that he is entirely composed of matter operating according to deterministic laws of physics, and that his “soul” (if that word is even appropriate) is “made of lots of tiny robots.” (The phrase is from Daniel Dennett’s translation of an Italian newspaper headline about his philosophy.)
  2. Brian believes that he is an immortal, non-physical spirit temporarily inhabiting a physical body, and that his spiritual part is supernatural and not subject to the laws of physics.
  3. Christopher is completely normal physically. However, he is firmly convinced that he has no hands and that his arms terminate in horse’s hooves. He believes this even when he is using his hands, which he can do just as well as anyone else. When other people insist that he does not have hooves and that his hands are perfectly normal, he thinks they are just trying to avoid hurting his feelings.

Whatever the truth may be about the soul and its relation to the body, it’s clear that either Anthony or Brian (or, most likely, both of them) must be deeply and fundamentally wrong about his own most basic nature, whereas Christopher’s error concerns only some relatively trivial anatomical details. Nevertheless, we probably all know people who hold views like Anthony’s and Brian’s and consider them perfectly sane — or at any rate far saner than Christopher, who is clearly barking mad.

Now some people may believe — or think they believe — that Anthony’s denial of his own metaphysical free will (which, in their view, he uses every day) is every bit as insane as Christopher’s insistence that he has no hands. It is therefore important to keep in mind that the question under consideration is not whether a particular belief is a “crazy” one, but whether a person holding that belief can be assumed to be so severely mentally ill that none of his teachings on any subject could be of any value to us. If Anthony or Brian (whichever one seems crazier to you) had written a book about, say, biology or economics or parenting — or even about moral philosophy or religion — would you feel justified in dismissing it as the ravings of a lunatic? (The question is supposed to be a rhetorical one, and I hope you got the right answer.)


Let us take it as axiomatic that Christians are not (as such) literally insane. Even if we assume for the sake of argument that the Christian creed is false, it is obvious that such people as Newton, Dante, and St. Thomas have much of great value to teach us. (See my essay about that here.)

Christians believe that Christ is the Eternal and Omnipotent God. They believe this in spite of the fact that he started his career as a baby, increased gradually in wisdom and stature, and needed to eat and drink like ordinary mortals — in spite of the fact that he died like an ordinary mortal, his last words being “My God, my God, why hast thou forsaken me?” — in spite of the fact that, after promising to return within the lifetime of his first-century disciples, he disappeared for 2,000 years and counting. Many Christians believe even “crazier” things about Christ — for example, that he and his Father are both one and not one, or that bread and wine can literally be his body and blood.

Christians believe all this, and yet, even if we assume it all to be false, they are still sane and perfectly capable of being great moral teachers. Is it really so different if someone falsely believes such things about himself? It seems different — it seems that any sane person would know the truth about himself in a way that he could not know it about another person — but I’m not so sure that it is.

At first glance, the Catholic’s belief that, despite his lying eyes, the bread and wine in front of him are actually the body and blood of Christ, seems to be on the same level as Christopher’s insistence that his hands are actually horse’s hooves. It’s not, though, because the Catholic’s belief is qualified in a way that makes it consistent with what he experiences: the bread is supposed to be flesh only in essence, while its “accidents” remain those of ordinary bread. Christopher’s belief about his hands has no such asterisk, which is what makes it more truly mad.

Similarly, no sane person is ever going to believe that he is simply God; what he might believe is that he is God in human form. If Christ believed that he was God, but a God who had condescended to live and die as a mortal, would it really be so obvious that he was wrong? So obvious that the belief would mark him as a raving lunatic and disqualify him as a great moral teacher? What aspects of his experience would be inconsistent with that belief? It would be an unusual belief, to be sure, an eccentric belief, but nowhere near the poached-egg level of madness. And if we assume that Christ was in fact a rather extraordinary mortal with seemingly “supernatural” abilities, and that he had been told by his mother that he had no biological father — well, then his belief that he was God hardly even seems all that eccentric anymore.


Actually, this whole discussion is less hypothetical than I have been making it sound. The fact is that I am personally acquainted with a man who believes himself to be Jehovah incarnate, and he’s a very intelligent, creative, and insightful person with a keen if somewhat unconventional moral sense. (In fact, in his moral discourse I often find the same combination of astute insight, earnest benevolence, and biting sarcasm that is so characteristic of Christ himself.) I wouldn’t call him a great moral teacher, but it’s quite easy for me to believe that someone like that could be such a teacher. I haven’t bothered myself too much over the question of whether he should be considered “insane,” but in a way it doesn’t really matter. I’m forced to conclude, either that you can believe you’re God without being insane, or that you can be insane and still be an insightful moralist. Either way, the “Lord, liar, or lunatic” trilemma crumbles.


Filed under Christianity, God, Psychology

Seeing what you expect to see

Just yesterday I was looking at the cover of one of my books and noticed something funny. It was a volume of English translations of Euripides, edited by David Grene and Richard Lattimore — only they had written his name as Richmond Lattimore, right there on the front cover! Then I looked at the back cover, and the title page, and a Sophocles book by the same editors — and I found that, by golly, the guy’s name actually was Richmond.

I read a lot of Greek literature in translation, and I must have seen Mr. Lattimore’s name hundreds or even thousands of times before without ever once noticing that it wasn’t Richard. They say the brain recognizes words mainly by how they begin and end (wcihh is why Esilgnh is slitl pltcefrey lbilege wehn you wtrie it lkie tihs), and I suppose the first time I encountered this particular name, my brain said something like, “R-I-C-something, ends with D — okay, I know this one.” After that, the more times I saw the name, and the more familiar it became, the more likely my brain would be to recognize it as a unit rather than actually reading it letter-by-letter and recognizing its mistake.
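The first-and-last-letter trick is easy to play with in code. Here is a minimal Python sketch that scrambles only the interior letters of each word (the function names are my own, and real word recognition is of course messier than this):

```python
import random

def scramble_interior(word, rng):
    """Shuffle a word's interior letters while keeping its first and
    last letters (the cues the brain reportedly relies on) in place."""
    if len(word) <= 3 or not word.isalpha():
        return word
    interior = list(word[1:-1])
    rng.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble_text(text, seed=0):
    """Scramble every word in a sentence, reproducibly via a seed."""
    rng = random.Random(seed)
    return " ".join(scramble_interior(w, rng) for w in text.split())

print(scramble_text("English is still perfectly legible when you write it like this"))
```

Run it on any sentence and the output stays surprisingly readable, which is exactly the effect that lets “Richard” and “Richmond” pass for one another: same beginning, same end, and the brain fills in the middle.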


This isn’t the first time this has happened to me. It was only last year that I discovered, much to my surprise, that Euripides himself was not called Euripedes — this after reading about a dozen of his plays and writing extensively about him in a notebook.

When I was a child, I was once discussing the characters in a Tintin book with my sister, and she mentioned the name Spalding. I said, “Don’t you mean Spadling?” She said she was pretty sure the character’s name was Spalding, but I insisted: “No, it’s Spadling — you know, like the basketball brand!” — at which point she went and got our Spalding basketball and showed it to me. You don’t forget an embarrassing experience like that. (Years later I tried to correct the same sister, then a grad student in philosophy, for saying Leibniz instead of Liebniz. I should have learned my lesson the first time.)


I’m sure I’m not the only person who does this. Another childhood memory is of my father reading to us from The Lord of the Rings — and always pronouncing Rohirrim as “Rohimmir” (though I can’t be sure he thought it was spelled that way, I suppose). And I can’t count how many times I’ve seen people list “Jane Austin” as a favorite author — that is, an author whose name they must have seen written innumerable times and should be able to spell.


Some of these mistakes are pretty easy to understand. There are 200 Austins for every Austen in the most recent U.S. Census, and Richmond is so unusual as a Christian name that I can’t even calculate how much less frequent it is than Richard, the seventh-most popular name for men in my country.

“Spadling,” of course, is not a normal name at all, but the -ling ending is fairly common in English, and I suppose that’s what my brain thought it recognized. It made the same mistake when I read Tolkien, reading Eorlingas as Eorlings. (I was really quite shocked to discover much later that the a had been there all along.) My father’s own Rohan-related misreading is harder to understand, though, since -im as a suffix for the name of a people should seem quite natural to a Bible-reader, much more so than -ir.

“Euripedes” and “Liebniz” are also hard to understand. I guess a lot of Greek names end in -edes, like Archimedes and — well, that’s the only one that comes to mind. I think I have a reasonable guess for “Liebniz,” though. My pre-teen philosophical education consisted of (1) reading everything Plato ever wrote, (2) reading everything Nietzsche ever wrote, and (3) nothing else. When I first encountered another German philosopher with a prominent ni-z in his name, my brain must have decided that ie was a more appropriate vowel than ei.


The strange thing about errors of this kind is how confident we are in them. I wasn’t unsure about the names Spalding and Leibniz; I was confidently correcting people who pronounced them correctly! It’s not that I was unsure of Mr. Lattimore’s Christian name. If you had asked me two days ago, I would have said without hesitation, “Richard.” And if you’d said, “Are you sure it isn’t Richmond?” — well, as they say, I could have sworn his name was Richard. Why? Because I’d seen his name so very many times, and every single time I saw it as Richard.


Filed under Language, Psychology

Willpower: Exercise or conserve?

After all the more or less fruitless posts on free will as a metaphysical problem, here’s something a little more practical. The following is from a recent article by John Tierney discussing some of Roy F. Baumeister’s research on what they are calling “ego depletion” or “decision fatigue.”

[Baumeister’s] experiments demonstrated that there is a finite store of mental energy for exerting self-control. When people fended off the temptation to scarf down M&M’s or freshly baked chocolate-chip cookies, they were then less able to resist other temptations. When they forced themselves to remain stoic during a tearjerker movie, afterward they gave up more quickly on lab tasks requiring self-discipline, like working on a geometry puzzle or squeezing a hand-grip exerciser. Willpower turned out to be more than a folk concept or a metaphor. It really was a form of mental energy that could be exhausted. The experiments confirmed the 19th-century notion of willpower being like a muscle that was fatigued with use, a force that could be conserved by avoiding temptation.

According to this view, the best way to maintain a high level of willpower is to conserve it by not using it too much! This can be contrasted with the view that sees willpower as a muscle to be built up by constant exercise — what we might call the Hamlet theory of self-control (“Refrain to-night, and that shall lend a kind of easiness to the next abstinence”).

As the muscle metaphor suggests, the two views are not necessarily incompatible. Other things being equal, someone who has just run a mile will be weaker than someone who has not — but someone who runs a mile every day will be stronger. Baumeister’s experiments (at least the ones mentioned in the article) only measure the short-term effects of decision fatigue, so they do not rule out the possibility that willpower works the same way. It would be interesting to see the results of a study on the effects of a long-term regimen of willpower training.


There can be little doubt that, when it comes to any one specific behavior, Hamlet is right that each abstinence makes the next easier — but this probably has more to do with establishing or disrupting habits than with building up willpower. Once something becomes a habit, it no longer requires much in the way of decision-making or willpower. The Spartans didn’t have to force themselves every evening to have nothing but black broth for dinner; this habit was probably so entrenched that nothing else even seemed like a live option. Once your brain has got the idea that this is just what we do, that no decision-making is required, willpower ceases to be an issue. When I was a Mormon missionary, I had an enormous number of rules to follow — get up at 6:00 every morning, never put your hands in your pockets, refrain from using the word “guy,” etc. — but after a few months none of them were very difficult to follow. This was not because my willpower had increased, but because the behaviors in question had shifted out of the realm of conscious decision and into the realm of habit. This is what Hamlet means when he says that use almost can change the stamp of nature.


I’m interested in a different question, though: whether exercising one’s willpower can make it stronger in general, aside from the effect habituation may have on any one specific behavior.

Mormons have a practice of fasting for 24 hours (a complete fast: no food, no water) on the first Sunday of every month. Though there are other purposes for this (for example, the money saved by not eating is supposed to be given as alms), one rationale which I often heard was that by practicing self-control in this arbitrary matter, one built up one’s ability to control oneself in general, resulting in an increased capacity to resist temptation. I suppose similar thinking underlies other forms of asceticism and “mortification of the flesh.” Baumeister would probably say that fasting is bad for willpower in the short term (low glucose levels were found to negatively affect willpower), but could regular fasting really build up willpower in the long run?

One thing that makes this difficult to test (or to practice, for that matter) is that, whatever regimen of willpower training one decides to use, it is itself in danger of becoming a habit and thus ceasing to be a meaningful exercise in self-control. The Mormon program of fasting addresses this issue to some extent; because the fasts only occur once a month, they always represent a break in one’s routine and never become fully habitual. Still, though, one becomes accustomed to fasting and it ceases to be difficult. As a Mormon, I was virtually never seriously tempted to break my fast early, and it’s not clear that I was actually exercising self-control in any meaningful sense. Of course I felt hungry and thirsty, but mere desire does not always constitute a real temptation which must be resisted by force of will. Walking down the street on a hot summer’s day, you may feel uncomfortably warm, but are you ever seriously tempted to take off all your clothes? Does it really take any self-control to keep them on? When you see something in a shop which you want but can’t afford, is it really willpower that keeps you from stealing it? Our habits, and our idea of which actions are thinkable and which are not, determine whether or not willpower even comes into play.


I suppose a regimen of real (non-habitual) willpower training would look something like a kung fu movie, where the master trains his student by making a series of unpredictable and often whimsical demands.


Filed under Ethics, Psychology

Some notes on the dark arts of rhetoric

The most effective put-down is one that employs — and deftly eviscerates — the very same terms which would ordinarily be used for praise. This is roughly a million times more effective than name-calling. Witness Byron’s masterful deflation of pretensions of immortality:

Pride! bend thine eye from heaven to thine estate;
See how the Mighty shrink into a song!

The power of these lines hinges at least in part on the choice of the word “song” — put at the end of a line for extra punch. This is the same word usually used to refer to fame as a kind of apotheosis (as in “to be immortalized in song”), but Byron makes it sound rather paltry — not by actually saying it is paltry, but by casting his verse in such a way that the reader is forced to presuppose it is paltry. The addition of that little word “a” is also a slick touch. How much less glorious it sounds to be immortalized in a song!

Another good example of this is in the film The Aviator, when Howard Hughes (Leonardo DiCaprio) says to Katharine Hepburn (Cate Blanchett), “Don’t you ever talk down to me! You are a movie star — nothing more.” By simply using the (usually positive) term “movie star” as an insult, he presupposes that both he and Hepburn already know that movie stars are contemptible — and presupposing your point can be much more effective than making it directly.


Walter Winchell mocked Nazis by calling them “Ratzis” (Rational Socialists?) and “swastinkers”. Now “Nazi” itself is enough of an insult. Likewise for liberals, feminists, and fundamentalists. If you can ridicule or denounce something whilst using the very same name that its supporters use, it’s far more effective than making up some derogatory term.

Likewise, it’s usually better to embrace the common — even if hostile — terminology for what you support rather than insisting on something else. Groups that insist on politically correct euphemisms for themselves imply that they need euphemizing.

Insisting on special terminology for oneself or for one’s enemies is a sign of weakness. The best way is to use common neutral language, pushing it very slightly in the direction of sarcastically imitating the terminology used by your enemies — but not too much, or you’ll sound like you have a chip on your shoulder.


When you compare the president to a Nazi, your scorn for the president sounds shrill, but your scorn for Nazis sounds reasonable. Again, this is because your comparison takes it for granted that everyone knows Nazis are bad. If X is the real target of your scorn, don’t compare X to something worse; instead, find excuses to compare other things to X in a way that presupposes a negative opinion of X.

I once saw this comment on a blog: “You sound like a goddamn Christian with all that ‘People hate me because I’m awesome’ bullshit.” This may have been an effective put-down of its ostensible target (an atheist who would presumably object to being compared to a Christian), but it’s a far more effective put-down of Christians. (Corollary: Pro-religion commentators who compare outspoken atheists to religious fundamentalists are shooting themselves in the foot.)


These techniques are forms of sarcasm, which Studies Have Shown is more effective than direct criticism.

The psychologist Ellen Winner and her colleagues have shown that people have a better impression of speakers who express a criticism with sarcasm (“What a great game you just played!”) than with direct language (“What a lousy game you just played!”). The sarcastic speakers, compared with the blunt ones, are seen as less angry, less critical, and more in control. This may be cold comfort to the target of the sarcasm, of course, since criticism is more damaging when it is seen to come from a judicious critic than from a dyspeptic one (Steven Pinker, The Stuff of Thought, pp. 380-81).

Part of the power of sarcasm is that, to some extent, it only works if you’re right. “What a great game you just played!” will be understood as a sarcastic put-down only if the listener already knows that he didn’t just play a great game, or at least has some doubts.

Sarcasm disarms its target. There is no safe reply. If you say, “What a great game you just played!” and I respond defensively (“Come on, it wasn’t so bad!”), I’m implicitly admitting that you are right. I understand your comment to be sarcasm, which means I know you couldn’t have meant it sincerely, which means I know I played badly. If, on the other hand, I don’t get the sarcasm (or pretend not to get it) and respond with “Thanks!”, you can answer with a withering “I was being sarcastic.”


Filed under Psychology, Rhetoric

Bootstrapping the placebo effect

These are sugar pills I’m giving you. Nothing in these pills will have any direct effect on your illness or its symptoms; they have no active ingredient. In biological terms, neither if you eat are you the better; neither if you eat not are you the worse.

However, clinical trials have shown that this illness responds to placebos. A patient’s condition often improves significantly after taking sugar pills — provided that he has been lied to by the doctor and believes that the pills are actual medicine. But “actual medicine” just means something which significantly improves a patient’s condition — so these pills are real medicine, as real as any medicine can be, if and only if the patient believes they are real medicine.

So, what do you believe? Well, if you’re logical, you believe that the pills are effective iff you believe that the pills are effective. They’re the pharmacological equivalent of a Henkin sentence. If you can somehow bootstrap your belief in their effectiveness, that belief will immediately become self-justifying, saving you both the cost of prescription medication and the indignity of being deceived by your doctor.
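For the logically inclined, the situation can be written out explicitly (the notation is mine, not anything from the clinical literature):

```latex
% E = "the pills will be effective for me"
% B = "I believe the pills will be effective for me"
\text{What the trials show:}\quad E \leftrightarrow B
% A patient who accepts this finding and reasons consistently has
% exactly two stable positions: B and E both true, or both false.
% "Bootstrapping" means jumping from the second fixed point to the first.
```

The biconditional has two self-consistent solutions, and nothing in logic alone tells you which one you land in — hence the closing question.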

But can you do it?


Filed under Psychology

High and low

The Boston Globe has a summary of recent psychological research indicating that some metaphors are so fundamental that our minds conflate their literal and metaphorical senses, such that manipulating the one can influence how people think about the other.

Researchers have sought to determine whether the temperature of an object in someone’s hands affects how “warm” or “cold” he considers a person he meets, whether the heft of a held object affects how “weighty” people consider the topics they are presented with, and whether people think of the powerful as physically more elevated than the less powerful. In each case, it turns out, the answer is yes.

The article discusses the following metaphors:

  • Warm/cold: People holding a cup of hot coffee rate a person as happier and friendlier than those holding a cup of iced coffee. When people recall an episode of social ostracism, the room feels physically colder to them.
  • Weighty/light: People answer questions more carefully (as if judging them to be weightier) when writing on a heavier clipboard.
  • High/low: People unconsciously look up when they think about power. People who tell a story while moving marbles to a higher position tell happier stories than those who are moving them to a lower position.
  • Rough/smooth: Handling sandpaper makes people less likely to think a social situation went smoothly.
  • Clean/dirty: Guilt makes people feel physically dirty. Washing their hands makes them feel less guilty.
  • Hard/soft: Sitting on a hard chair makes people think of tasks as harder.

One fundamental metaphor that the article doesn’t mention is the use of “high” and “low” to describe the frequency of sounds — a metaphor that is used in every language and culture with which I am familiar. It seems a strange one to me, given the general rule that large objects produce “lower” sounds than small ones. What makes it natural for us to think of the voice of a grown man or a buffalo as “low” and that of a child or a mouse as “high”? The only explanation I can think of is that you lower something in your throat in order to speak or sing in a “low” voice and raise it for a “high” one.
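The size rule invoked here is textbook physics, not anything from the article; for a vibrating string it can be made explicit:

```latex
f = \frac{1}{2L}\sqrt{\frac{T}{\mu}}
% fundamental frequency of a string: the longer the string (L) and the
% heavier it is per unit length (\mu), the lower the pitch f --
% which is why big creatures and big instruments sound "low"
```

So the physics runs exactly opposite to the metaphor: more mass and more length, i.e. more physical "height," means a "lower" sound.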

In any case, the acoustic sense of “high” doesn’t seem to mesh well with the other metaphorical meanings of that word. We may unconsciously look upwards when we think of power, but we certainly don’t associate power with a high-pitched voice. And the expectation that the Most High God have a most deep voice is so automatic that giving him a high-pitched one (as in this video) seems blasphemous. It seems that we expect everything about God to be high (“God is in heaven and thou art on earth”) except his voice.

So why is it that we so universally describe acoustic pitch in terms that clash with our other habitual metaphors? “High,” like “white” (as so exhaustively detailed by Melville), seems to be a concept in conflict with itself.


Filed under Language, Perception, Psychology

The Argument from Desire

I’ve recently read two discussions — one by philologist Edward M. Cook (of Ralph the Sacred River), and one by Christian apologist Peter Kreeft — of what is being called the Argument from Desire. Then, by a strange coincidence, John C. Wright also came out with a post about it while I was in the process of composing this one.

The argument, though not the name, comes from C. S. Lewis, who summarizes it as follows in the tenth chapter of Mere Christianity:

Creatures are not born with desires unless satisfaction for these desires exists. A baby feels hunger; well, there is such a thing as food. A duckling wants to swim; well, there is such a thing as water. Men feel sexual desire; well, there is such a thing as sex. If I find in myself a desire which no experience in this world can satisfy, the most probable explanation is that I was made for another world.

This is not strictly speaking an argument for the existence of God, but for an undefined something which is beyond all known human experience. As Kreeft puts it, “What it proves is an unknown X, but an unknown whose direction, so to speak, is known. This X is more: more beauty, more desirability, more awesomeness, more joy.” Still, if even this much can be proved — if we have reason to believe in something beyond this world which is nevertheless intimately connected with human desires and interests — it gives us at least a starting point from which to theologize.

Of course no one would argue that every human desire — including my desire for an ansible and a cloak of invisibility — implies the existence of an object that would satisfy it, only that we are not born with vain desires. Lewis’s argument only applies to natural, innate, instinctive desires, so the first question that arises is how to distinguish these from artificial ones. Kreeft proposes the following criteria:

  1. We generally “recognize corresponding states of deprivation” for natural desires, but not for artificial ones. “There is no word like ‘Ozlessness’ parallel to ‘sleeplessness.’”
  2. Because natural desires come from our shared human nature, they “are found in all of us, but the artificial ones vary from person to person.”

Kreeft’s first point seems not to favor Lewis, who was so far from seeing his unsatisfied desire as a state of deprivation analogous to sleeplessness that he actually dubbed it “Joy” — not the desire for Joy, mind you, but Joy itself. As far as Lewis was concerned, his desire was not for Joy; it was Joy. The desire was itself intensely desirable. In that respect it seems more like an artificial, fanciful desire than a natural, biological one. Are intense hunger, loneliness, sleep deprivation, and so on ever joyous experiences? Wouldn’t it be odd if they were? Fantasizing about the land of Oz, on the other hand, can be rather pleasant.

The second point is also problematic, since so many obviously fanciful desires are nevertheless near-universal. As Wright (who, despite his Lewisian sympathies, finds this particular argument weak) puts it, “Who has not longed to fly to the stars . . . to speak to the trees and rivers and hills, . . . or peer into the thoughts of another, or live his life?” And who has not felt Lewisian Joy, the “desire which no experience in this world can satisfy,” a persistent longing which is no less intense for being vague? All of these must be in some sense “natural,” since they come so naturally to us, but it hardly follows that there must exist something which can satisfy them.

Desires, after all, do not exist to be satisfied; they exist to motivate behavior. Often the behavior elicited by a desire will result in its satisfaction (e.g., hunger motivates eating, and eating satisfies hunger) but this need not always be the case. Take for example the proverbial method of motivating a donkey to move by dangling a carrot in front of it, where the donkey’s desire serves its purpose (making the donkey move) even if it is never satisfied. In fact, the minute you actually let the donkey eat the carrot, it will stop walking and the purpose of the desire will be frustrated. You should only let it eat the carrot after you have reached your destination and no longer want the donkey to move; if you want it to keep moving indefinitely, you should never let it eat the carrot. Creating a desire serves to make the donkey move; satisfying the desire serves to make it stop. (Of course this is a highly artificial example, but in principle there’s nothing to stop nature from doing something similar.) So in thinking about desire and satisfaction, we need to keep in mind two important points — important enough to be bulleted:

  • To understand why a given natural desire exists, the correct question to ask is not what would satisfy it, but what evolutionarily useful behavior it serves to motivate.
  • Other things being equal, we should expect a desire to be satisfied only when, and only for so long as, the behavior it serves to motivate is no longer useful.

If there were some behavior which it were evolutionarily beneficial for us to perform only once, or only a specific finite number of times, then we could expect to find a natural desire which could be satisfied in the fullest sense of that word — we reach the intended goal, the desire is completely and permanently quenched, and we move on to other things. Mission accomplished. It’s hard to think of any clear examples of this in the real world, though, which is perhaps only to be expected. The evolutionary project — ensuring that copies of as many of our genes as possible continue to exist for as long as possible — is inherently open-ended, a race with no finish line, and we might expect a similar open-endedness in the desires which were created to serve it.

More typically we find that our natural desires can be satisfied, but only for a time. The satisfaction is temporary, and the desire is quenched and rekindled, quenched and rekindled, in a cycle that can continue indefinitely. We eat, we drink, we sleep — but hunger, thirst, and fatigue are never banished for long. All the rivers run into the sea, yet the sea is not full. This is a confusing state of affairs if we see satisfaction as being the purpose of desire, but it makes perfect sense if we keep in mind that desires exist to trigger behavior and satisfaction exists to turn it off. When the body needs fuel, the desire to eat is turned on; when it has enough, and eating more would actually be detrimental, the desire is turned off — satisfied — but only until fuel supplies begin to run low again.

The on-again off-again nature of hunger is explained by the fact that eating regularly is evolutionarily useful but eating until you burst is not. But what if there were a behavior which, unlike eating, was always useful and never needed to be turned off? Well, in that case we would expect that behavior to be motivated by a desire which could never be satisfied. The most obvious example of this in nature is our desire for life itself. Nature has given most of us an insatiable desire to go on living indefinitely, not because immortality is actually on offer, but to motivate us to extend our finite lives for as long as we possibly can. Other ways of coping with our unacceptable mortality — having children, trying to bequeath something of lasting value to posterity, and so on — also tend to serve evolution’s ends. So long as we keep chasing the carrot of eternal life, pulling our wagonload of selfish genes behind us, the desire serves its purpose, even if satisfaction remains forever out of reach.

Lewisian Joy isn’t as straightforward as a desire for immortality — it’s a vague desire for a certain je ne sais quoi — and so the behavior it serves to motivate is less easily characterized. However, I suspect that it still does serve to motivate broadly predictable patterns of behavior. Someone who is motivated by Joy is likely to seek, as Kreeft puts it, “more awesomeness” — where our idea of awesomeness will tend to be drawn from our other, more straightforward (and more clearly evolutionarily useful) desires. The inchoate longing for “something more” is not as open-ended as it might seem, since our human nature will predictably direct it towards certain goals (such as power, wisdom, and beauty) rather than others (such as trying to ensure that the number of turnips in the world is prime). Given how clever our species is, and how good we are at finding ways to cheat evolution by satisfying our desires without reaching the goals for which those desires were created (see my post on the Genie scenario) — Joy may be a broadly effective way of keeping us from resting on unearned laurels.

I’m getting into just-so-story territory here, but all that’s really necessary to counter Lewis is to come up with an explanation for vague unsatisfiable desires which, however hypothetical and ad hoc it might be, is at least less far-fetched than his own “most probable explanation” — namely, that there must exist some “other world” than the known universe and that it was for this hypothetical world that we were “made.” And that, I think, is a pretty easy standard to meet.


Filed under Evolution, God, Philosophy, Psychology

Language and numeracy

Daniel Tammet’s Embracing the Wide Sky mentions some interesting research — though he unfortunately neglects to mention who performed this research or where I can read more about it.

[R]esearch suggests that the counting words we use in English (and many other European languages) can have a negative side effect on some young children’s numeracy and arithmetic skills. Studies consistently show that Asian children learn to count earlier and higher than their Western counterparts and can do simple addition and subtraction sooner. The reason is that the teen and ten numbers in English and other languages are irregular and difficult for children to learn. In contrast, the number words in most Asian languages are much more consistent; in Chinese, the word for eleven is ‘ten one’, twelve is ‘ten two’, thirteen is ‘ten three’ and so on. . . . The language helps rather than hinders early understanding of the base 10 system. (pp. 134-35)

Knowing how the current intellectual climate systematically deemphasizes racial differences in everything except skin color, I tend to overcompensate, assuming when I read something like this that of course boring genetic differences are probably the real explanation. Just because it’s crimethink, though, doesn’t necessarily mean it’s true, and the language theory is an interesting possibility. Here’s how you could test it:

  • Limit the sample to a single ethnic group. For example, see if Chinese-speaking Chinese people are better at math than ethnically Chinese Americans who speak only English.
  • Look at non-Asian languages with regular number terms. While I don’t know of any examples off the top of my head, it seems unlikely that this feature would be exclusive to East Asian languages. Are there any African or European languages, for example, that express numbers in a regular way? Do speakers of those languages excel at math in the same way that Asians do?

I haven’t been able to track down the research Tammet alludes to, so I don’t know whether such tests have already been performed. It would be fascinating to discover that language can have such a strong influence on thinking.

There’s no question, though, that the Chinese way of expressing numbers is much better designed than the English one and matches the decimal system more closely. In English, if I say “three hundred seven–,” you have no idea what the second digit is going to be; the number could turn out to be 307, 317, or 370, among others. If someone is dictating the number to you, you have to wait until they’ve said the whole thing before you can write the second digit. Chinese is much clearer: 307 is “three hundred zero seven,” 317 is “three hundred ten seven,” and 370 is “three hundred seven (ten)” (the final “ten” is optional). The Chinese assumption — that in the short form “three hundred seven” the seven is the next digit (370) rather than the final one (307) — is a convenient one, too. We deal with numbers with zeroes at the end (like 6,500) more often than those with zeroes in the middle (like 6,005), so it makes sense to reserve the short form “six thousand five” for the former.
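The Chinese reading scheme is regular enough to capture in a few lines of code. Here is a small illustrative sketch (my own, not from Tammet’s book) that spells out numbers up to 999 in the Chinese style, using English words for clarity — with an explicit “zero” for an empty middle place and the optional trailing “ten” omitted after “hundred”:

```python
def chinese_style(n: int) -> str:
    """Read a number from 1 to 999 in the Chinese style, in English words."""
    if not 1 <= n <= 999:
        raise ValueError("this sketch handles 1-999 only")
    digits = "zero one two three four five six seven eight nine".split()
    hundreds, rem = divmod(n, 100)
    tens, ones = divmod(rem, 10)
    parts = []
    if hundreds:
        parts += [digits[hundreds], "hundred"]
        if rem and tens == 0:
            parts.append("zero")  # internal zero is read aloud: 307 -> "... zero seven"
    if tens == 1:
        parts.append("ten")       # 317 -> "three hundred ten seven"
    elif tens >= 2:
        parts.append(digits[tens])
        if ones or not hundreds:  # trailing "ten" is optional after "hundred"; omit it
            parts.append("ten")   # 370 -> "three hundred seven", 25 -> "two ten five"
    if ones:
        parts.append(digits[ones])
    return " ".join(parts)
```

Note that every word is determined as soon as it is spoken: by the time you hear “three hundred zero,” you can already write the first two digits of 307, which is exactly the dictation advantage described above.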

Leave a comment

Filed under Language, Psychology

Reading: Embracing the Wide Sky, by Daniel Tammet

I finished Daniel Tammet’s book Embracing the Wide Sky: A Tour Across the Horizons of the Mind on 9 Aug 2009.

Embracing the Wide Sky is a popular science book, an overview of various topics related to the mind. Tammet, an autistic savant with extraordinary mathematical and linguistic gifts, is interesting primarily as the owner of a remarkable mind, not as an expert on the mind in general, and this book is thus less compelling than his memoir Born on a Blue Day. Much of it is interesting, but one is still left with the sense that there was no need for this particular person to write this particular book, that any reasonably competent science journalist could have done as good a job of it.

The best chapters are those on memory and perception, which summarize some very interesting research. The chapter on statistics and logical thinking is one of the weakest, contenting itself with defining mean, median, and mode; explaining that the chance of winning the lottery is very low indeed; and listing various familiar logical fallacies. Perhaps because Tammet is used to other people struggling to follow ways of thinking that come naturally to his own mind, he sometimes fails to realize that some things are elementary even to us dear Watsons.

Some of the fallacies enumerated in the logic chapter are on display in the chapter on IQ. Tammet discusses various theories of intelligence, usually in a scrupulously evenhanded way; he seems always to appreciate both sides — but when it comes to the politically radioactive research of Herrnstein and Murray, Tammet suddenly opts for a black-and-white approach (no pun intended), asserting without further argument that they “misinterpreted data on intelligence to lead to some racist conclusions.” He also disregards his own admonitions about statistical thinking in citing the single case of Van Gogh (talented but financially unsuccessful) as evidence against the claim that there is a statistical correlation between IQ and financial success.

One final oddity is the bibliography, in which the majority of the entries look like this: “A Beautiful Mind New York Simon & Schuster” — just the title, city, and publisher, with no punctuation and no mention of the author. These are interspersed with a handful of ordinary entries which include authors, years, and punctuation. So much for the fabled proofreading skills of autistic people.

Overall it’s an interesting enough read, but people who are expecting another Born on a Blue Day are likely to be disappointed.

Leave a comment

Filed under Psychology