
The Argument from Cognitive Biases


(2018)

I. Introduction

A common family of theistic arguments, which I will group for the sake of simplicity under the title “arguments from reason,” appeals to the ability of human beings to reason as evidence for theism over metaphysical naturalism. Arguments of this variety have been made by philosophers such as C. S. Lewis[1], Victor Reppert[2], and Alvin Plantinga.[3] These arguments come in many forms, some better than others. Consider a version advanced by Reppert:

  1. If naturalism is true, then we should expect our faculties not to be reliable indicators of the nonapparent character of the world.
  2. But our faculties do reliably reveal the nonapparent character of the world.
  3. Therefore, naturalism is false.[4]

This version of the argument is flawed for four reasons. First, Reppert can only conclude, at best, that naturalism is probably false, not that it is in fact false. Second, in order to conclude that naturalism is probably false, Reppert must consider the intrinsic probability of naturalism and its rival hypotheses. Third, in order to conclude that naturalism is probably false, Reppert must also consider all of the relevant evidence, not just evidence related to reasoning faculties. Fourth, even if the reliability of our rational faculties is unexpected on naturalism, this would only count against naturalism if the argument identified a hypothesis that predicts the reliability of our rational faculties to a higher degree, which, as stated, the argument does not do.

To repair these defects, consider an improved version of the argument:

  1. Our ability to reason (R)[5] is unexpected given naturalism (N).[6]
  2. Our ability to reason is expected given theism (T).
  3. Therefore, our ability to reason favors theism over naturalism.

Stated more formally in Bayesian terms:

  1. P(R|N) is low.
  2. P(R|T) is high.
  3. Therefore, R favors T over N.

This argument does not claim that T is probably true, or even that T is more probable than N. It merely says that learning R should make us more confident in T relative to N than we previously were. Premise 1 is typically defended with the argument that unguided evolution favors traits that aid in survival and reproduction, rather than traits conducive to discovering the truth. Thus, it is argued that evolutionary naturalism provides us with no reason to expect our cognitive faculties to be accurate truth detectors. Defenders of this line of reasoning are fond of using Charles Darwin’s own words against him, when he wrote in 1881:

With me the horrid doubt always arises whether the convictions of man’s mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy. Would any one trust in the convictions of a monkey’s mind, if there are any convictions in such a mind?[7]

In response, several naturalists have argued that there is in fact significant survival and reproductive value in having cognitive faculties that accurately discover the truth.[8] However, for the sake of this paper, I will grant premise 1. Premise 2 is typically defended on the grounds that, if God created us, he would have designed our cognitive faculties to be aimed at discovering the truth. As such, theism leads us to believe that human beings would have the ability to reason and discover the truth. Again, for the sake of this paper, I will grant premise 2.

However, even if it is true that the general fact R favors T over N, there may still be more specific facts about human reasoning that favor N over T. The purpose of this paper is to explore whether such facts exist.

II. The Fallacy of Understated Evidence

Philosopher Paul Draper has identified a fallacy he calls “the fallacy of understated evidence”: “This fallacy (i.e., mistake in reasoning) is committed when one uses some relatively general known fact about X to support a hypothesis when a more specific fact about X (that is also known to obtain) fails to support that hypothesis.”[9] Draper provides the example of a prosecutor who tells the jury that the defendant purchased a knife right before the victim was stabbed. On its face, this might be seen as evidence that the defendant was the murderer. However, the prosecutor neglects to mention the more specific fact that the knife purchased by the defendant was a harmless butter knife. This more specific fact obviously does not suggest that the defendant is guilty. Therefore, the prosecutor has committed the fallacy of understated evidence.[10]

This makes intuitive sense, and our intuitions can be modeled formally in probabilistic terms. Let “H” be the hypothesis under consideration, and “G” be a general fact that supports H over ~H because the likelihood of G given ~H is less than the likelihood of G given H. That is, P(G|~H) < P(G|H). Now let “S” be a specific fact which is conjoined to G such that the conjunction G&S does not support H over ~H because the likelihood of G&S given ~H is not less than the likelihood of G&S given H. That is, P(G&S|~H) ≥ P(G&S|H). A person therefore commits the fallacy of understated evidence when he argues that G supports H because P(G|~H) < P(G|H), but fails to acknowledge that there is an additional fact S such that P(G&S|~H) ≥ P(G&S|H). In other words, the fallacy of understated evidence arises when two facts, one general and one specific, pull in opposite directions, and the arguer focuses only on the general fact.

Consider another example where the addition of more specific information changes likelihoods. Suppose that the student standing in front of you in the bookstore checkout line is purchasing a copy of The Critique of Pure Reason. You infer that this is evidence that he is taking a philosophy course. You reason that P(purchases The Critique of Pure Reason | taking philosophy course) > P(purchases The Critique of Pure Reason | not taking philosophy course). Suppose, though, that when you later overhear the student chatting with the cashier, he says that he hates philosophy and is buying the book for his girlfriend, who is taking a philosophy course. This more specific information is less likely if he is taking a philosophy course than if he is not. In this case, the fact that he is purchasing a copy of The Critique of Pure Reason is not evidence that he is taking a philosophy course.[11]

Given that we have these two opposing facts, one general and one specific, how do we balance their conflicting evidential thrusts? How do we know if the specific fact is strong enough to neutralize the general fact? Here, the odds form of Bayes’ theorem comes in handy:

P(H|G&S) / P(~H|G&S) = [P(H) / P(~H)] × [P(G|H) / P(G|~H)] × [P(S|H&G) / P(S|~H&G)]

The fallacy of understated evidence is committed when someone focuses on the fact that the ratio P(G|H)/P(G|~H) is greater than 1, but ignores the fact that the ratio P(S|H&G)/P(S|~H&G) is less than 1. Depending on its value, the second ratio can partially offset, exactly neutralize, or even outweigh the evidential support provided by the first ratio. For example, if G is twice as likely on H as it is on ~H, then the first ratio is 2/1. However, if S is twice as likely on ~H as it is on H, then S completely neutralizes the evidential support provided to H by G. And if S were more than 2 times as likely on ~H as it is on H, then S would override the evidential support provided by G.
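To make the arithmetic concrete, here is a minimal sketch in Python of how the two likelihood ratios combine in the odds form of Bayes’ theorem. The numbers are illustrative choices of my own, not values from the text:

    def posterior_odds(prior_odds, general_ratio, specific_ratio):
        # Odds form of Bayes' theorem:
        # P(H|G&S)/P(~H|G&S) = [P(H)/P(~H)] * [P(G|H)/P(G|~H)] * [P(S|H&G)/P(S|~H&G)]
        return prior_odds * general_ratio * specific_ratio

    # Illustrative numbers only: start with even prior odds of 1, and let G be
    # twice as likely on H as on ~H (ratio 2), as in the example above.
    print(posterior_odds(1, 2, 1/2))   # 1.0 -> S exactly neutralizes G
    print(posterior_odds(1, 2, 1/4))   # 0.5 -> S outweighs G; the total evidence now favors ~H
    print(posterior_odds(1, 2, 1))     # 2.0 -> ignoring S (the fallacy) overstates the support for H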

Consider a final example. I tell you that on the other side of a locked door is an object with four legs. This general piece of evidence favors the hypothesis that the object is a dog, because P(4 legs|dog) > P(4 legs|~dog). But now imagine I tell you the more specific fact that the object’s legs are made of wood. Given that the object has four legs, it is much more likely that these legs would be made of wood on the hypothesis that it is not a dog than on the hypothesis that it is a dog. Thus, P(wood|dog & 4 legs) < P(wood|~dog & 4 legs). The specific fact that the object’s legs are wooden is so unlikely on the hypothesis that the object is a dog that it overrides the evidential support provided to the dog hypothesis by the general fact that the object has four legs. Anyone who attempted to focus on the general fact, while ignoring the specific fact, would be guilty of the fallacy of understated evidence.

In the following sections, I will outline various ways in which human reasoning is fallible, and show that, given that human beings can reason, these facts are more likely on naturalism than on theism. In light of these specific facts about human reasoning, we will be able to see that arguments from reason are guilty of the fallacy of understated evidence.

III. Human Cognition and Cognitive Biases

The fact represented by R, that human beings have generally reliable cognitive faculties, severely understates what we know about human cognition. In addition to R, we also know the more specific fact about human cognition that human beings are plagued by a variety of cognitive biases that lead us to systematically make errors in judgment. Let us represent this more specific fact with the letters “CB” for cognitive biases. In this section, I will briefly outline some well-documented examples of CB.

One such example is the so-called “anchoring effect.” In the words of the Nobel Prize-winning psychologist Daniel Kahneman, this “occurs when people consider a particular value for an unknown quantity before estimating the quantity … [and] the estimates stay close to the number that people considered—hence the image of an anchor.”[12] For example, if one group of people is asked whether the population of Iraq is greater than 10 million, and a second group is asked if it is greater than 100 million, the second group will estimate a population size far greater than the first group. This is, of course, entirely irrational. Nothing in the questions posed to these groups suggests that the population size offered is close to the actual population size. Nonetheless, the mere mention of a number prevents our estimates from straying too far from it.

This can happen even when the anchoring number is totally unrelated to the number we are being asked to estimate. For example, in an experiment conducted by Kahneman and his colleague Amos Tversky, a wheel of fortune was spun in front of two groups. The wheel was rigged so that it landed on 10 for one group and 65 for the other. Both groups were then asked to estimate the percentage of African nations in the UN. Despite the fact that the wheel could not possibly have provided useful information for estimating these numbers, the group who saw the wheel land on 65 estimated far higher percentages than the group who saw the wheel land on 10.[13] The pernicious influence of anchoring can be seen in all facets of life. For instance, this cognitive bias has long been exploited by negotiators who make unrealistically high settlement offers in the hopes that future counteroffers will be anchored by their original offer.

Another example identified by Kahneman and Tversky is our tendency to commit the conjunction fallacy, which occurs when someone estimates the probability of a conjunction of two events to be higher than the probability of one of those events alone. In a now-famous series of experiments, the researchers asked large groups of people to consider a fictional character named Linda, whom they described as follows:

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.[14]

Kahneman and Tversky then asked the participants which alternative was more probable:

  1. Linda is a bank teller, or
  2. Linda is a bank teller and is active in the feminist movement.

According to the axioms of probability theory, (2) cannot be more probable than (1), because (2) is the conjunction of (1) and another claim not logically entailed by (1). However, roughly 85-90% of the participants in the study chose (2) as the more probable alternative.[15]
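A quick sketch in Python makes the point mechanical. The probabilities are made-up numbers chosen purely for illustration:

    # Made-up probabilities for the Linda case, for illustration only.
    p_teller = 0.05                     # P(Linda is a bank teller)
    p_feminist_given_teller = 0.5       # P(active in the feminist movement | bank teller)

    # Conjunction rule: P(A & B) = P(A) * P(B|A). Since P(B|A) <= 1,
    # the conjunction can never be more probable than either conjunct alone.
    p_teller_and_feminist = p_teller * p_feminist_given_teller
    assert p_teller_and_feminist <= p_teller
    print(p_teller, p_teller_and_feminist)   # 0.05 0.025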

Another bias identified by Kahneman and Tversky is our tendency to commit the base-rate fallacy. In another experiment, they asked participants to consider a scenario in which a taxi cab was involved in a hit-and-run accident. Participants were told that there are two cab companies in the city: the Green Company, which owns 85% of the cabs, and the Blue Company, which owns 15% of the cabs. A witness comes forward and testifies that the cab involved was blue. Testing then reveals that this witness identifies cab colors correctly 80% of the time and incorrectly 20% of the time. Participants were then asked to estimate the probability that the cab involved in the accident was in fact blue.[16]

According to Bayes’ theorem, this probability is to be estimated in the following way:

P(blue|testimony) = P(blue) × P(testimony|blue) / [P(blue) × P(testimony|blue) + P(green) × P(testimony|green)]

Plugging in the values provided by Kahneman and Tversky, we find that the correct answer to their question is (0.15)×(0.8)/[(0.15)×(0.8)+(0.85)×(0.2)] ≈ 0.41, or 41%. However, the most common answer to their question was 80%.[17] This appears to be the result of people irrationally ignoring the statistical information about the base rates of cabs in the city, and implicitly assuming that they are equal.
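The same arithmetic can be written out directly; this brief Python sketch simply reproduces the calculation with Kahneman and Tversky’s numbers:

    # Kahneman and Tversky's taxi-cab problem.
    p_blue, p_green = 0.15, 0.85          # base rates of the two cab companies
    p_testimony_given_blue = 0.80         # witness identifies colors correctly 80% of the time
    p_testimony_given_green = 0.20        # and incorrectly 20% of the time

    # Bayes' theorem: P(blue | witness says "blue")
    numerator = p_blue * p_testimony_given_blue
    posterior = numerator / (numerator + p_green * p_testimony_given_green)
    print(round(posterior, 2))            # 0.41, not the intuitive 80%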

To provide one final example, there is what Michael Shermer has dubbed “the mother of all cognitive biases”: confirmation bias.[18] In Shermer’s words, this is the “tendency to seek and find confirmatory evidence in support of already existing beliefs and ignore or reinterpret disconfirming evidence.”[19] Several studies have confirmed the ubiquity of this tendency. For example, in a study conducted by Mark Snyder, participants were asked to interview someone and assess his personality. Half of the participants were given a profile of the person in advance describing him as an extrovert, and the other half were given a profile describing him as an introvert. During the ensuing interviews, the participants who were given the extrovert description asked questions that tended to confirm that conclusion. Likewise, the participants who were given the introvert description asked questions that tended to confirm that conclusion.[20]

These examples are just four of nearly one hundred documented cognitive biases[21], and the full extent of human fallibility does not end there. Not only are we literally hardwired to make errors in judgment, but the very ways in which we take in sensory information and recall memories are terribly fallible as well, as is demonstrated by a wealth of research on the inaccuracy of eyewitness testimony.[22]

IV. Cognitive Biases Are More Likely on Naturalism than Theism

Even if we grant that R is more likely on T than N, it is still the case that, given R, CB is more likely on N than T. Given the conjunction of naturalism and the fact that we can reason, it should come as no surprise that human reasoning nonetheless suffers from a variety of cognitive biases. As the result of an unguided and imperfect process, we would expect human reasoning to be filled with kinks and bugs, particularly when it comes to reasoning about things that were of little importance to our evolutionary ancestors, such as logic and probability theory. As philosophers Branden Fitelson and Elliott Sober note: “It would be no surprise, from an evolutionary point of view, if human beings had highly reliable devices for forming beliefs about practical issues that affect survival and reproduction, but are rather less gifted when it comes to matters of philosophy, theology, and theoretical science.”[23]

In fact, many evolutionary theorists have argued that unguided evolution would lead us to expect many of the precise cognitive biases that we observe. For example, there is a great deal of evidence to suggest that human beings possess what cognitive psychologist Justin Barrett calls a “hyperactive agency detection device.”[24] When confronted with ambiguous appearances, we tend to “see” agents, even when they are not there. That is, we are prone to false positives when it comes to believing that there are perhaps hostile agents in our vicinity. From an evolutionary perspective, it is easy to see why. You see a suspicious movement in the grass, and you think “Lion!” and scamper up a tree. If there is no lion, you only lose the effort of an unnecessary climb. But if you hang around to see whether or not it really is a lion, then, if it is, you lose everything. Better safe than sorry. There are similar evolutionary explanations to be offered for many of the other cognitive biases as well.[25] Thus, not only does naturalism predict that human reasoning would be fallible, but it actually predicts that it would be fallible in many of the precise ways that we observe. P(CB|N&R) is therefore modest, if not high.

P(CB|T&R), on the other hand, must be quite low. For the very same reasons that theists say R is likely on theism, they should think that CB is unlikely on theism. Defenders of arguments from reason repeatedly tell us that God created us in his image as rational beings and has equipped us with truth-aimed cognitive faculties. Thomas Aquinas wrote, “since human beings are said to be in the image of God in virtue of their having a nature that includes an intellect, such a nature is most in the image of God in virtue of being most able to imitate God.”[26] René Descartes wrote in his Fourth Meditation:

I am conscious that I possess a certain faculty of judging [or discerning truth from error], which I doubtless received from God, along with whatever else is mine; and since it is impossible that he should will to deceive me, it is likewise certain that he has not given me a faculty that will ever lead me into error, provided I use it aright.[27]

More recently, Christian philosopher Victor Reppert has written that:

Given that God creates creatures, it is at least possible that God might wish to provide those creatures with some measure of the rationality which God himself possesses. And human beings reflect God’s rational character by having the capacity to think logically. If we make the further supposition that God has created human beings in such a way that they consist of both a soul and a body, we might be able to say that while the body’s activities are determined by the laws of physics, it is possible for human beings, through their souls, to perceive not merely physical activities in the environment, but logical and mathematical truths that apply throughout all that God has created.[28]

Christian philosopher Mark Linville adds:

[T]he theist also has an account of the development of human moral faculties … we have the basic moral beliefs we do because they are true, and this is because the mechanisms responsible for those moral beliefs are truth-aimed … because human moral faculties are designed [by God] to guide human conduct in light of moral truth.[29]

All of the above-quoted theists stress the point that God, if He exists, would want human beings to be rational beings who are well-equipped to discover the truth about reality. This is why they believe that theism predicts that our cognitive faculties would be reliable. For this reason, it should come as a great surprise to theists that our reasoning abilities are so deeply flawed. If God wanted us to know the truth, why would he go out of his way to purposefully install so many bugs and glitches in our reasoning faculties? Would this not be directly contrary to his goal of ensuring that humanity is able to accurately reason about the world? If his purpose for instilling us with reasoning faculties was to enable us to discover the truth, it would be positively irrational to also design these faculties in such a way that they lead us to go wrong so often. Moreover, it would be a tremendously odd coincidence if God hardwired us with many of the precise types of cognitive biases we would expect to observe if evolutionary naturalism were true.

Furthermore, not only would purposefully making human reasoning so fallible be irrational on God’s part, it would also introduce a great deal of suffering into the world. One could fill entire libraries with descriptions of the harms caused by human fallibility. These cognitive biases have stifled the progress of science, technology, and medicine. They have led responsible, well-meaning people to make destructive decisions. They make us sitting ducks for charlatans, demagogues, and con artists. They have caused wrongfully convicted people to waste away in prison for crimes that they didn’t commit. They have contributed to the downfall of relationships, economies, and even civilizations.[30] So much misery and suffering could have been avoided if our cognitive faculties weren’t so terribly flawed.

V. Objections and Replies

a. The Free Will Objection

In his Fourth Meditation, Descartes attempted to explain why human beings are so prone to error if a good God would never deceive us. His answer is that it is our free will, rather than God, that leads us into error. Descartes argued that although our intellect is vastly inferior to God’s, we are similar to God in that we both have perfect freedom of the will. He then went on to argue:

[Because] the will is much wider in its range and compass than the understanding, I do not restrain it within the same bounds, but extend it also to things which I do not understand: and as the will is of itself indifferent to these, it easily falls into error and sin, and chooses the evil for the good, or the false for the true.[31]

In other words, we freely choose to lead ourselves into error. A theist may similarly argue that P(CB|T&R) is high because God would want us to have free will, and given that we have free will, we will inevitably lead ourselves into cognitive error.

Although this may have been an adequate response in Descartes’ time, in light of the evidence that we have regarding the nature and extent of cognitive biases, it cannot reasonably be argued that the errors in judgment caused by CB are the result of free choices. First, why would anyone “choose” to think a conjunction is more probable than its individual conjuncts, or choose to neglect base rates? There is nothing emotionally desirable or comforting about many of these biases, and therefore there is no basis for saying that someone would choose to make these errors.

Second, Descartes’ theory makes empirical predictions that have not been borne out. If Descartes is correct that human error is the result of the gap between the scope of our will and our intellect, then we would expect people with greater intellects to be less prone to cognitive biases. However, research shows that this is not the case. A number of studies suggest that intelligence and various cognitive biases are relatively independent.[32] In fact, some studies suggest that increased intelligence predicts a greater tendency towards some cognitive biases.[33]

Third, and most importantly, we have every reason to believe that these biases are not the result of choices. Rather, they are built-in features of our cognition that lead to systematic errors across widely divergent populations. Our tendency to make mistakes is literally a design feature, a built-in handicap. If anything, it takes a deliberate exercise of free will to avoid these errors in judgment. For this reason, cognitive scientist Massimo Piattelli-Palmarini calls these biases “inevitable illusions.” He writes:

We are all prey to cognitive illusions. And we are deluded in complete innocence, in good faith, not even realizing that we are so misled…. The current term for this is “cognitive” illusions, to indicate that they exist quite apart from any subjective, emotional illusions and from any other such habitual, classical, irrational distortions.[34]

Cognitive biases similar to those of human beings can even be found in animals as simple as pigs[35], dogs[36], rats[37], and honeybees.[38] Few would argue that these animals have libertarian free will. This lends further support to the conclusion that our cognitive biases do not stem from free choices, but in fact have a biological basis.

b. The Skeptical Theism Objection

According to the skeptical theism objection, human beings are severely limited in their knowledge of good and evil. Michael Bergmann identifies three separate theses of skeptical theism:

ST1: We have no good reason for thinking that the possible goods that we know of are representative of the possible goods that there are.

ST2: We have no good reason for thinking that the possible evils that we know of are representative of the possible evils that there are.

ST3: We have no good reason for thinking that the entailment relations that we know of between possible goods and the permission of possible evils are representative of the entailment relations that there are between possible goods and the permission of possible evils.[39]

In light of these theses, skeptical theists argue that we are in no position to assess the probability that any given instance of suffering is not necessary for bringing about some greater good, or preventing some greater evil. From the perspective of a finite human being, an instance of suffering may seem pointless, but given our finite knowledge, we are not in a position to make this judgment. For all we know, an omniscient being with complete knowledge of good and evil would understand that such instances of suffering are necessary for a greater purpose.

Skeptical theism was originally developed to undercut the evidential argument from evil. However, theists may be tempted to co-opt this logic against the argument presented in this paper. Perhaps they could argue that, in light of our limited knowledge of good and evil, we are in no position to assess whether God would or would not have good reasons to create human beings with these cognitive biases. Likewise, while it may seem irrational for God to design cognitive biases into our reasoning faculties if he wanted us to be able to discover the truth, perhaps this is simply the result of our shortsightedness. From the perspective of an omniscient God, this could in fact be the best way of achieving His plan. We are in no position to know. For these reasons, P(CB|T&R) is entirely inscrutable, or at least that is how the skeptical theist might argue.

Unfortunately, the skeptical theism objection causes problems that are far worse than the one that it aims to solve. As many have previously argued, skeptical theism opens the floodgates for complete moral skepticism.[40] If we are so ignorant of good and evil that we are in no position to assess whether an instance of suffering is pointless, then much of our ordinary moral reasoning goes out the window. This should lead us to experience what philosopher Scott Sehon calls “moral paralysis.”[41] Ordinarily, if we saw something terrible about to happen, such as an infant crawling into traffic, we would immediately come to the child’s rescue. This is because we implicitly make the judgment that the child’s death would be, on the whole, a bad outcome. However, skeptical theism tells us that we are in no position to make such a judgment. Thus, if we endorse the theses of skeptical theism, we should always be in doubt about whether we have an obligation to prevent seemingly bad outcomes.

This “moral paralysis” response to the skeptical theism objection to the problem of evil can be similarly employed against the skeptical theism objection to the argument from cognitive biases. If we really are so ignorant of the goods and evils that we are in no position to assess whether God could have good reasons for giving us so many cognitive biases, then we should lose any degree of trust in our cognitive faculties in general. Taken to its logical conclusion, skeptical theism prevents us from saying that God wouldn’t have good reasons for allowing our cognitive faculties to be wildly inaccurate and deceptive.[42] For all we know, God has very good reasons for instilling us with false memories, or making us brains in vats. Skeptical theism thereby gives us reasons to be skeptical of all of the cognitive faculties described by R—memory, perception, a priori reasoning, etc.[43] Thus, if skeptical theism allows theists to deny that P(CB|T&R) is low, it does so at the cost of preventing them from affirming that P(R|T) is high. To use Plantinga’s terminology, skeptical theism is thereby a “defeater”[44] for the belief that our cognitive faculties are generally reliable. Thus, the skeptical theism objection proves far too much for its own good.

Additionally, when skeptical theism is conjoined with the principle of indifference, it actually entails that P(CB|T&R) is low. The principle of indifference (POI) says that when we are faced with multiple hypotheses and no reason to favor one over the other, we should assign them all an equal epistemic probability. If we can’t know whether God has reasons to bring about any given cognitive bias, then we equally cannot be sure whether he lacks such reasons. In accordance with the POI, then, the probability that God would have reasons for bringing about any given cognitive bias is 0.5. Thus, P(CB1|T&R) = 0.5, P(CB2|T&R) = 0.5,… P(CBn|T&R) = 0.5. Given the sheer number of cognitive biases, and given that skeptical theism provides no grounds for treating them as strongly correlated with one another on theism, the probability that God would have reasons for creating the conjunction of them all is well below 0.01.
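As a rough illustration, and on the simplifying assumption that the biases are treated as probabilistically independent given T&R, the probability of the conjunction falls off exponentially with the number of biases:

    # Under the POI, each individual bias gets probability 0.5 given T&R.
    # Treating the biases as independent (a simplifying assumption made
    # only for illustration), the conjunction of n biases has probability 0.5**n.
    for n in (5, 7, 10, 50, 100):
        print(n, 0.5 ** n)
    # Already at n = 7 the conjunction drops below 0.01, and with roughly
    # one hundred documented biases it is astronomically small.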

A skeptical theist may object to this line of reasoning by saying that we should apply the POI to the partition of CB or ~CB, rather than to each of the individual conjuncts of CB and their denials. This would make P(CB|T&R) = 0.5. However, this cannot be accomplished without either abandoning the core theses of skeptical theism or violating the axioms of probability theory. On one hand, if P(CB|T&R) = 0.5, this would mean that for any given cognitive bias, P(CBi|T&R) would necessarily have to be much greater than 0.5. However, if P(CBi|T&R) is much greater than 0.5, this would mean that we are in a position to know that, given all of the possible goods and evils and the entailment relationship between them, God would likely create this particular cognitive bias. But this is in direct contradiction with skeptical theism, which says that we are not in a position to say that any particular cognitive bias is likely or not, given all of the possible goods and evils and the entailment relationship between them.

On the other hand, if we retain the core theses of skeptical theism and assign a probability of 0.5 to every individual cognitive bias conditional on theism, then we violate the axioms of the probability calculus if we also assign a probability of 0.5 to the conjunction of all cognitive biases conditional on theism. Unless the truth of one cognitive bias deductively entails all of the rest, which is not the case, the probability of the conjunction CB must be less than the probability of any particular bias. Thus, a theist who endorses both skeptical theism and the POI cannot say that P(CB|T&R) = 0.5. Indeed, such a theist must say that it is much lower.
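Conversely, a quick calculation (again under the simplifying independence assumption used above) shows how close to 1 each individual probability would have to be for the conjunction to come out at 0.5:

    # If the conjunction of n independent biases is to have probability 0.5,
    # each individual bias must have probability 0.5**(1/n) given T&R.
    for n in (10, 50, 100):
        print(n, round(0.5 ** (1 / n), 4))
    # With 100 biases, each must be about 0.993, far higher than the 0.5
    # that skeptical theism, via the POI, permits us to assign.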

To be clear, I am not arguing that the POI is an appropriate method for assigning probabilities. Rather, I am simply arguing that if one affirms both the POI and skeptical theism, this leads to the inescapable conclusion that P(CB|T&R) is low. And this should be a concern for theists, many of whom do in fact affirm both the POI and skeptical theism. Plantinga, for example, explicitly makes use of the POI in his defense of the premise that P(R|N) is low[45], and has also defended a version of skeptical theism.

c. The Sin Objection

One final way a theist might respond to the argument from cognitive biases is to conjoin theism with an auxiliary hypothesis to help increase theism’s predictive power with respect to our cognitive biases. One auxiliary hypothesis that Christian theists commonly appeal to is the doctrine of the Fall of Man. According to this doctrine, Adam and Eve’s disobedience in the Garden of Eden brought sin into the world and corrupted human nature. Let “S” represent the auxiliary hypothesis that although human nature was originally created perfect, it was corrupted by sin. Can S rescue P(CB|T&R) from being low? According to the theorem of total probability:

P(CB|T&R) = P(S|T&R) × P(CB|T&R&S) + P(~S|T&R) × P(CB|T&R&~S)

From this we can see that even if the addition of S can help T&R predict CB, this won’t be of much help to theists unless T&R predicts S to a significant degree. Otherwise, the gain in explanatory power provided by S will be offset by a loss in prior probability. Thus, we must ask: is there any reason to think that P(S|T&R) is high? One could argue that this probability is very low because the historicity of the story of the Fall of Man in the Garden of Eden as described by the Book of Genesis has been seriously criticized on scientific, historical, philosophical, and moral grounds. However, one can still affirm S without affirming a literal interpretation of Genesis, because all that S requires is the general hypothesis that humanity’s cognitive faculties be in a state of corruption resulting from sin.[46] Nonetheless, even this stripped down version of S is ad hoc because there is no direct independent evidence for S, and therefore no serious antecedent reasons to expect it on T. This alone should render P(S|T&R) below 0.5.
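To see how a low P(S|T&R) offsets whatever explanatory boost S provides, consider a brief numerical sketch in Python. The values are my own illustrative choices, not figures from the text:

    # Theorem of total probability:
    # P(CB|T&R) = P(S|T&R) * P(CB|T&R&S) + P(~S|T&R) * P(CB|T&R&~S)
    def p_cb_given_tr(p_s, p_cb_given_s, p_cb_given_not_s):
        return p_s * p_cb_given_s + (1 - p_s) * p_cb_given_not_s

    # Illustrative values only: even if sin (S) would make cognitive biases
    # nearly certain, a low probability of S given T&R keeps the total low.
    print(p_cb_given_tr(p_s=0.1, p_cb_given_s=0.9, p_cb_given_not_s=0.05))   # 0.135

On these made-up numbers, the weighted average remains low even though S itself strongly predicts CB, which is precisely the tradeoff described above.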

Moreover, while the idea that sin can corrupt our reasoning faculties may seem sensible to Christians, things quickly get murky once we ask for the details. How, exactly, did this process of corruption work? Did sin alter the structure of our brains? Did it change our DNA? Is “sin” some kind of immaterial substance that infects our bodies? If so, how did sin interact with the physical matter? Or, instead, did sin corrupt our reasoning faculties by interacting with our immaterial souls? If so, how do the mechanics of immaterial interactions work? Or, on the other hand, is “sin” just the name for certain behavior? If so, how and why would a behavior alter our physical makeup? Questions abound. Without any clear answers to these questions, the auxiliary hypothesis S has too little content to assess whether such a thing is even possible. It is easy to understand how things like drugs and alcohol can damage our cognitive faculties, because there is a mechanistic account of how the cause brings about the effect. Not so with sin. The explanation seems to immediately bottom out at “it just does.” Lest I be misunderstood, this is not merely a matter of preferring naturalistic explanations over supernaturalistic ones. My complaint is not that the explanation for how sin damages our reasoning is supernatural instead of natural; rather, my complaint is that the explanation is nonexistent.

There are still other problems with the auxiliary hypothesis S. If we suppose that sin has corrupted our cognitive faculties, then theists lose their ability to say that P(R|T&S) is any higher than P(R|N). Theism was originally supposed to have a leg up on naturalism with respect to the reliability of our cognitive faculties because they were designed by God to be reliable, but if the theist wants to say that they have nonetheless been corrupted by sin, he can no longer say that R is any more expected on theism than it is on naturalism.

The late philosopher Richard Gale made this criticism in a 2004 oral debate with Alvin Plantinga, where he stated: “According to this story, because of original sin, our cognitive faculties … [were] seriously impaired…. So given that story, I think it’s highly improbable that our cognitive faculties are functioning properly and therefore are reliable.”[47] In response to Gale’s claim that P(R|T&S) is low, Plantinga insisted that sin only impairs our cognitive faculties in a very specific and limited way that does not affect the general reliability of our cognitive facilities. More specifically, he said: “Christians have always thought of this [the damage done by sin] as concentrated with respect to knowledge of God, and perhaps also knowledge of one’s own inner condition.”[48]

In Warranted Christian Belief, Plantinga elaborates on this point, writing that sin has two major effects: cognitive and affective. The cognitive effect is that sin “prevents its victim from proper knowledge of God and his beauty, glory, and love; it also prevents him from seeing what is worth loving and what worth hating, what should be sought and what eschewed. It therefore compromises both knowledge of fact and knowledge of value.”[49] The affective effect of sin is that “we love and hate the wrong things. Instead of seeking first the kingdom of God, I am inclined to seek first my own personal glorification and aggrandizement, bending all my efforts toward making myself look good. Instead of loving God above all and my neighbor as myself, I am inclined to love myself above all and, indeed, to hate God and my neighbor. Much of this hatred and hostility springs from pride.”[50]

While Plantinga’s account of the cognitive effects of sin may help him rescue the reliability of our cognitive faculties given theism, it does not do much to predict that human beings would suffer from cognitive biases. Perhaps sin can explain why we are susceptible to confirmation bias and overconfidence. But other biases, such as base-rate neglect or other logical fallacies, have absolutely nothing to do with pride or lacking proper knowledge of God. Simply put, sin does not predict that we would be so fallible in the realm of mathematics, science, logic, memory, etc. It only predicts that we would be fallible in the realm of knowing God and values. Plantinga himself writes that “the noetic effects of sin are concentrated with respect to our knowledge of other people, of ourselves, and of God; they are less relevant … to our knowledge of nature and the world.”[51]

The theist now has two choices: (1) admit that the addition of S does not help T predict CB, or (2) disagree with Plantinga’s response to Gale and insist that sin actually does corrupt our reason generally. If they choose option (1), then theists must admit that CB disconfirms theism, even when theism is conjoined with S. On the other hand, if the theist chooses option (2), then he loses his claim that P(R|T) is high unless he arbitrarily imposes ad hoc limits on how much S degrades our reasoning abilities. If sin corrupts our cognitive faculties too much, then T doesn’t predict R. Too little, and then T doesn’t predict CB. The corrupting effects of sin must be very precisely “fine-tuned” to predict both R and CB. Of course, it is always possible for a theist to retroactively say that sin corrupts our reasoning faculties exactly to the degree that we observe. But no one would say that sin would predict this exact balance of reliability and fallibility prior to our observation of CB. The corrupting effect of “sin” is therefore a vacuous auxiliary hypothesis, because the predicted effects of sin are vague enough to accommodate any degree of human fallibility that we observe. If S doesn’t pin us down to anything, the combination of T&S can accommodate any level of reliability or unreliability that the theist wants.

Thus, the addition of S as an auxiliary hypothesis cannot help the theist deny that CB disconfirms theism. Either sin does nothing to help T predict CB, or sin undercuts any advantage T had over N in predicting R.

VI. Conclusion

In this paper I have argued that even if the general fact R (“human beings can reason”) favors theism over naturalism, the more specific fact CB (“human reasoning suffers from a variety of cognitive biases”) favors naturalism over theism. If this is correct, then theistic arguments known as “arguments from reason” can only be deemed successful by understating the full extent of our knowledge concerning human reasoning. Once we fully state the available data concerning human reasoning, it is far from clear that the data favors theism over naturalism. I have argued that the evidential force of CB overpowers the evidential significance of R and tips the scales in favor of naturalism (all else being equal). At the very least, CB neutralizes the evidential effect of R, and therefore the data on human reason ultimately has no effect on the relative probabilities of naturalism and theism.[52]

Notes

[1] C. S. Lewis, Miracles: A Preliminary Study (New York, NY: Macmillan Paperbacks Edition, 1978).

[2] Victor Reppert, C. S. Lewis’ Dangerous Idea: In Defense of the Argument from Reason (Downers Grove, IL: InterVarsity Press, 2003); Victor Reppert, “The Argument from Reason” in The Blackwell Companion to Natural Theology ed. William Lane Craig and J. P. Moreland (Malden, MA: Blackwell, 2009): 344-390.

[3] Alvin Plantinga, Where the Conflict Really Lies: Science, Religion, and Naturalism (New York, NY: Oxford University Press, 2011); Alvin Plantinga, Warrant and Proper Function (New York, NY: Oxford University Press, 1993).

[4] Reppert, C. S. Lewis’ Dangerous Idea, p. 85.

[5] Although I am using R to stand for the ability of human beings to reason, R can also be used to represent a wide variety of cognitive faculties. In Plantinga’s version of the argument, R represents the general reliability of memory, perception, a priori intuitions, sympathy, introspection, testimony, induction, and moral sense. See Plantinga, Where the Conflict Really Lies, pp. 311-312.

[6] For the purposes of this paper, N is actually meant to represent the conjunction (N&E), where “E” stands for the theory of evolution. Naturalism may be defined, per Plantinga, as simply the view that theism is false. Naturalism may also be defined more specifically as the view that a physical reality exists, and that any mental features of reality are explained by the physical. Conversely, theism may be defined as the view that an eternal immaterial agent who is omnipotent, omniscient, and perfectly good exists and created the universe.

[7] Charles Darwin, in Plantinga, Where the Conflict Really Lies, p. 316.

[8] For example, see: James Beilby (Ed.), Naturalism Defeated? Essays on Plantinga’s Evolutionary Argument Against Naturalism (Ithaca, NY: Cornell University Press, 2002) and Paul Draper, “In Defense of Sensible Naturalism” (2007) in God or Blind Nature? Philosophers Debate the Evidence ed. Paul Draper on the Secular Web. <https://infidels.org/library/modern/paul_draper/naturalism.html>.

[9] Paul Draper, “Collins’ Case for Cosmic Design” (2008) in God or Blind Nature? Philosophers Debate the Evidence ed. Paul Draper on the Secular Web. <https://infidels.org/library/modern/paul_draper/no-design.html>.

[10] Ibid.

[11] I owe this example to Keith Parsons.

[12] Daniel Kahneman, Thinking Fast and Slow (New York, NY: Farrar, Straus and Giroux, 2011), p. 119.

[13] Ibid.

[14] Ibid., p. 156.

[15] Ibid., p. 158.

[16] Ibid., pp. 165-169.

[17] Ibid., p. 167.

[18] Michael Shermer, The Believing Brain: From Ghosts and Gods to Politics and Conspiracies—How We Construct Ideas and Reinforce Them as Truths (New York, NY: Times Books, 2011), p. 259.

[19] Ibid.

[20] Ibid. For a summary of this and numerous other studies on confirmation bias, see Raymond S. Nickerson, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises.” Review of General Psychology Vol. 2, No. 2 (June 1998): 175-220.

[21] See Wikipedia. (2018). “List of Cognitive Biases.” <https://en.wikipedia.org/wiki/List_of_cognitive_biases>.

[22] For example, see: Christian A. Meissner & John C. Brigham, “Thirty Years of Investigating the Own-Race Bias in Memory for Faces: A Meta-Analytic Review.” Psychology, Public Policy, and Law Vol. 7, No. 1 (March 2001): 3-35; Nancy K. Steblay, Gary L. Wells, & Amy Bradfield Douglass, “The Eyewitness Post Identification Feedback Effect 15 Years Later: Theoretical and Policy Implications.” Psychology, Public Policy, and Law Vol. 20, No. 1 (February 2014): 1-18; Roy S. Malpass, Stephen J. Ross, Christian A. Meissner, & Jessica L. Marcon, “The Need for Expert Psychological Testimony on Eyewitness Identification” in Expert Testimony on the Psychology of Eyewitness Identification ed. Brian L. Cutler (Oxford, UK: Oxford University Press, 2009): 3-27; Richard A. Wise, Clifford S. Fishman, & Martin A. Safer, “How to Analyze the Accuracy of Eyewitness Testimony in a Criminal Case.” Connecticut Law Review Vol. 42, No. 2 (January 2009): 435-513; Sandra Guerra Thompson, “Beyond a Reasonable Doubt? Reconsidering Uncorroborated Eyewitness Identification Testimony,” U.C. Davis Law Review Vol. 41 (2008): 1487-1545; Kenneth Deffenbacher, Brian H. Bornstein, Steven D. Penrod, & E. Kiernan McGorty, “A Meta-Analytic Review of the Effects of High Stress on Eyewitness Memory.” Law and Human Behavior Vol. 28, No. 6 (December 2004): 687-706; Charles A. Morgan III, Gary Hazlett, Anthony Doran, Stephan Garrett, Gary Hoyt, Paul Thomas, Madelon Baranoski, & Steven M. Southwick, “Accuracy of Eyewitness Memory for Persons Encountered During Exposure to Highly Intense Stress.” International Journal of Law and Psychiatry Vol. 27 (2004): 265-279; Elizabeth Loftus, “Our Changeable Memories: Legal and Practical Implications.” Nature Reviews Neuroscience Vol. 4, No. 3 (May 2003): 422; Bruce W. Behrman & Sherrie L. Davey, “Eyewitness Identification in Actual Criminal Cases: An Archival Analysis.” Law and Human Behavior Vol. 25, No. 5 (October 2001): 475-491; Elizabeth F. Loftus & Jacqueline E. Pickrell, “The Formation of False Memories.” Psychiatric Annals Vol. 25, No. 12 (1995): 720-725; Saul M. Kassin & Kimberly A. Barndollar, “The Psychology of Eyewitness Testimony: A Comparison of Experts and Prospective Jurors.” Journal of Applied Social Psychology Vol. 22, No. 16 (August 1992): 1241-1249; Elizabeth F. Loftus, Jonathan W. Schooler, Stanley M. Boone, & Donald Kline, “Time Went By So Slowly: Overestimation of Event Duration by Males and Females.” Applied Cognitive Psychology Vol. 1, No. 1 (January 1987): 3-13; and Elizabeth Loftus, Eyewitness Testimony (Cambridge, MA: Harvard University Press, 1979).

[23] Branden Fitelson & Elliott Sober, “Plantinga’s Probability Arguments Against Evolutionary Naturalism” in Intelligent Design Creationism and Its Critics: Philosophical, Theological, and Scientific Perspectives ed. Robert T. Pennock (Cambridge, MA: MIT Press, 2001): 411-427.

[24] Justin L. Barrett, Why Would Anyone Believe in God? (Lanham, MD: AltaMira Press, 2004), p. 32.

[25] Martie G. Haselton, Daniel Nettle, & Paul W. Andrews, “The Evolution of Cognitive Bias” in The Handbook of Evolutionary Psychology ed. David M. Buss (Hoboken, NJ: John Wiley & Sons, 2005): 724-746.

[26] Thomas Aquinas, Summa Theologiae Ia q.93 a.4, translation in Eleonore Stump, Aquinas (New York, NY: Routledge, 2003), p. 232.

[27] René Descartes, Meditations (New York, NY: Cosimo, 2008), pp. 98-99.

[28] Victor Reppert, “The Argument from Reason” (1998). The Secular Web. <https://infidels.org/library/modern/victor_reppert/reason.html>.

[29] Mark Linville, “The Moral Argument” in The Blackwell Companion to Natural Theology ed. William Lane Craig and J. P. Moreland (Malden, MA: Blackwell, 2009): 391-448, p. 414.

[30] The 2015 film The Big Short humorously demonstrates how the human tendency to commit the “gambler’s fallacy” contributed to the United States’ financial crisis of 2007-2008. (The gambler’s fallacy is the mistake of thinking that statistically independent events are dependent, and therefore erroneously concluding that a certain outcome is more or less likely given previous outcomes.)

[31] Descartes, Meditations.

[32] For example, see: Keith E. Stanovich, Richard F. West, & Maggie E. Toplak, “Myside Bias, Rational Thinking, and Intelligence.” Current Directions in Psychological Science Vol. 22, No. 4 (2013): 259-264; Keith E. Stanovich & Richard F. West, “On the Relative Independence of Thinking Biases and Cognitive Ability.” Journal of Personality and Social Psychology Vol. 94, No. 4 (April 2008): 672-695; and Keith E. Stanovich, What Intelligence Tests Miss: The Psychology of Rational Thought (New Haven, CT: Yale University Press, 2010).

[33] For example, see Richard West, Russell Meserve, & Keith Stanovich, “Cognitive Sophistication does not Attenuate the Bias Blind Spot.” Journal of Personality and Social Psychology Vol. 103, No. 3 (September 2012): 506-519.

[34] Massimo Piattelli-Palmarini, Inevitable Illusions: How Mistakes of Reason Rule Our Minds (New York, NY: Wiley, 1996), p. x.

[35] Annalisa Scolloa, Flaviana Gottardo, Barbara Contiero, & Sandra A. Edwards, “Does Stocking Density Modify Affective State in Pigs as Assessed by Cognitive Bias, Behavioural and Physiological Parameters?” Applied Animal Behaviour Science Vol. 153 (April 2014): 26-35; Catherine Douglas, Melissa Bateson, Clare Walsh, Anaïs Bédué, & Sandra A. Edwards, “Environmental Enrichment Induces Optimistic Cognitive Biases in Pigs.” Applied Animal Behavior Science Vol. 139, No. 1-2 (June 2012): 65-73.

[36] Michael Mendl, Julie Brooks, Christine Basse, Oliver Burman, Elizabeth Paul, Emily Blackwell, & Rachel Casey, “Dogs Showing Separation-Related Behaviour Exhibit a ‘Pessimistic’ Cognitive Bias.” Current Biology Vol. 20, No. 19 (October 12, 2010): R839-R840.

[37] Emma J. Harding, Elizabeth S. Paul, & Michael Mendl, “Animal Behaviour: Cognitive Bias and Affective State.” Nature Vol. 427, Issue 6972 (January 22, 2004): 427.

[38] Melissa Bateson, Suzanne Desire, Sarah E. Gartside, & Geraldine A. Wright, “Agitated Honeybees Exhibit Pessimistic Cognitive Biases.” Current Biology Vol. 21, No. 12 (June 21, 2011): 1070-1073.

[39] Michael Bergmann, “Skeptical Theism and Rowe’s New Evidential Argument from Evil.” Noûs Vol. 35, No. 2 (2001): 278-296.

[40] Stephen Maitzen, “The Moral Skepticism Objection to Skeptical Theism” in The Blackwell Companion to the Problem of Evil ed. Justin P. McBrayer and Daniel Howard-Snyder (West Sussex, UK: Wiley-Blackwell, 2013): 444-457; Scott Sehon, “The Problem of Evil: Skeptical Theism Leads to Moral Paralysis.” International Journal for Philosophy of Religion Vol. 67, No. 2 (April 2010): 67-80; Erik J. Wielenberg, “Skeptical Theism and Divine Lies.” Religious Studies Vol. 46, No. 4 (2010): 509-523; Graham Oppy, Arguing About Gods (New York, NY: Cambridge University Press, 2006), pp. 289-314.

[41] Sehon, “The Problem of Evil: Skeptical Theism Leads to Moral Paralysis.”

[42] This argument is defended in Stephen Law, “The Pandora’s Box Objection to Skeptical Theism.” International Journal for Philosophy of Religion Vol. 78, No. 3 (December 2015): 285-299.

[43] Ibid.

[44] Alvin Plantinga, “Probability and Defeaters.” Pacific Philosophical Quarterly Vol. 84, No. 4 (September 2003): 291-298.

[45] Plantinga, Where the Conflict Really Lies, p. 332.

[46] Alvin Plantinga, Warranted Christian Belief (New York, NY: Oxford University Press, 2000), p. 207.

[47] William Lane Craig, Alvin Plantinga, Quentin Smith, & Richard Gale (January 30, 2004). “Science and Religion.” Panel discussion presented at California Polytechnic State University, San Luis Obispo, California. <https://www.youtube.com/view_play_list?p=2C1FABF5591B6C21>.

[48] Ibid.

[49] Plantinga, Warranted Christian Belief, p. 207.

[50] Ibid., p. 208.

[51] Ibid., p. 213.

[52] I would like to thank Keith Parsons for his help developing the ideas in this paper.


Copyright ©2018 Aron Lucas. The electronic version is copyright ©2018 by Internet Infidels, Inc. with the written permission of Aron Lucas. All rights reserved.