Critical Review of Victor Reppert’s Defense of the Argument from Reason (2004)
Table of Contents
1. Primary Discussion
1. Argument from Intentionality (AfI)
2. Argument from Truth (AfT)
3. Argument from Mental Causation (AfMC)
4. Argument from Psychological Relevance of Logical Laws (AfPR)
5. Argument from Unity of Consciousness in Rational Inference (AfUC)
6. Argument from Reliability of Rational Faculties (AfRF)
2. Secondary Discussion
In C.S. Lewis’s Dangerous Idea: In Defense of the Argument from Reason (InterVarsity: 2003), Victor Reppert has contributed what is surely the most extensive defense of the so-called “Argument from Reason” yet to appear in print, despite its many (and serious) failings. His first two chapters concern preliminary issues with which I have little disagreement, so I will not address them here—except to point out that Reppert contradicts himself by attacking at great length the Argument from Motive when applied to C.S. Lewis (in chapter 1), yet then applying that same argument against Naturalists (on p. 127). But that aside, my present concern is only with the various arguments from reason, and whether any of them achieves what Reppert claims, namely:
Arguments from reason are good arguments in the sense that they provide substantial grounds for rejecting naturalism about the universe and materialism about the mind, and hence a reason for preferring theism about the universe and dualism or some other nonmaterialist view about the mind. (44)
In a nutshell, an argument from reason (hereafter AfR) argues from “the existence of rational thought” to the necessity of theism and the nonphysicality of the human mind, such that “our very thinking” can “provide evidence that theism is true” (45). Reppert traces the AfR back even beyond C.S. Lewis, but concedes that even Lewis needs improvement, providing which is the function of Reppert’s book.
In this critique I will not address every scientific and philosophic objection one could raise against Reppert’s case. Rather, I will point out what I believe are the most important conceptual flaws in his arguments, and explain in detail how his arguments are ineffective against my own personal worldview. The organization of this critique is in two parts: the first is all one needs to read to get a gist of what’s wrong with Reppert’s defense of the Argument from Reason; the second part elaborates with more details and other issues not central to the main points made in the first part.
Naturalism vs. the AfR
Reppert first must establish his opponent: metaphysical naturalism. This is essentially his task in chapter three. He defines naturalism correctly as “the view that the natural world is all there is and that there are no supernatural beings,” in particular naturalism holds that “whatever takes place in the universe takes place through natural processes and not as the result of supernatural causation” (46-47). He also rightly notes that “the most popular kind of naturalism is known as materialism or physicalism” (47), and it is this particular variety of naturalism that is his main target throughout the book, though he claims his arguments work against all varieties of naturalism.
The feature of naturalism most relevant to the AfR is that “consistent naturalists,” as Reppert puts it, “must hold that in the final analysis, events take place by natural necessity and pure chance,” meaning either the one or the other, or perhaps a combination of both, but never anything else (87). Therefore, “if some purposive or intentional explanation can be given” for any phenomenon “and no further analysis can be given in non-purposive and nonrational terms” (51) then purpose or intention are somehow basic properties of the universe. But “purposive basic explanations” cannot “be admitted into a naturalist’s worldview” (53). Therefore, if one such phenomenon should turn out to be the formation of logical inferences, then “reason must be viewed as a fundamental cause in the universe,” which cannot be true on naturalism (51). This is the basic approach of an AfR. And it is valid: if purposive or intentional explanations actually are fundamental for any phenomena, meaning prior to and not reducible to any other mechanical or otherwise nonpurposive or nonintentional explanation, then naturalism must be false.
Therefore, an “argument from reason” must demonstrate the causal irreducibility of some purposive or intentional event, and by definition, the events it focuses on in that category are rational events, namely logical inferences (deductive and inductive).
Formulating the Basic AfR
Paraphrased in the simplest sense, the AfR works like this: naturalism is a belief, but no belief is rational if ultimately nonrational, and naturalism entails that all beliefs are ultimately nonrational, therefore naturalism is not a rational belief. From this it follows that, since there are rational beliefs, but rational beliefs cannot exist if naturalism is true, therefore naturalism is false (and some worldview that allows rational beliefs must be true). Reppert believes he can establish the key premises in these two arguments in many different ways, and this gives rise to several different “formulations” of the AfR.
The key premise (hereafter the Basic AfR Premise) on which all AfR’s rest, and which I find the most fault with, is this:
No belief is rationally inferred if it can be fully explained in terms of nonrational causes. (57)
It is this premise that all of the AfR’s Reppert presents attempt to establish. The structure of the basic AfR that he presents is valid, in that it does lead to the conclusion that naturalism should be abandoned, if this one key premise is true. For naturalism is certainly a belief, and it is certainly true that naturalism entails that all beliefs are ultimately nonrational, in the sense that belief-formation analyzed to its simplest causal components is a mindless, mechanical process under naturalism.
It should be noted, though, that there is one technical flaw in his argument’s extended form, which does not invalidate the negative conclusion, but does invalidate his further positive conclusion that some antinaturalist worldview must therefore be adopted, or that naturalism is definitely false (as opposed to unknowable). The problem is a standard false dilemma, the fallacy of the excluded middle. It is created by Premise 6 on page 54 and Premise 4 on page 58, where he asserts (in the latter formulation, which Reppert regards as the most correct): “If any thesis entails the conclusion that no belief is rationally inferred, then it should be rejected and its denial accepted.” This premise is unsupported: Reppert has not established it as true, or even as probably true. For the rejection of a thesis does not entail its denial. There is often a middle position, and there is one here: Pyrrhonic Skepticism (PS). That is, one need not adopt any worldview, naturalism or theism; one can instead maintain skepticism toward all worldview conclusions (and thus toward all or most substantive metaphysical theses). Though such a position is often regarded as self-defeating for a variety of reasons, this is not so on a careful formulation. Even Sextus Empiricus, in his treatise Outlines of Pyrrhonism, written nearly two millennia ago, established a careful formulation that directly addresses and overcomes the usual objections (which haven’t really changed in two thousand years), and the formal positivism of Ayer and the positivistic pragmatism of Quine both constitute modern variants of the same view. Of course, though I accept its formal validity as an internally consistent position, I reject PS for other reasons, and it may be that Reppert does as well.
But in formal terms, since Reppert has not shown this middle position invalid, or less acceptable than any alternative, he has not, strictly speaking, proven his case for antinaturalism, even if everything else he argues is entirely correct. This is a particular problem for Reppert, since without such a case the premise in question entails its own defeat: if it is true that we should deny any premise that is not rationally inferred, and his premise is not shown to be rationally inferred, then his premise must be denied as well. And Reppert does not do a very good job of establishing his premise that “there are rational beliefs.”
To be fair, Reppert does declare that he will not advance the AfR as a Skeptical Threat argument, based on a brief discussion of the problems such a form of argument presents, and this could count as an attempt to argue “against” PS, though indirectly and incompletely. At any rate, Reppert declares that his AfR will not call “into question the validity of human reasoning” but rather “assum[es] that validity as an established fact” (70), although the argument he presents does not formally vindicate this assertion. We can thus only assume that Reppert rejects PS as an alternative based on some vaguely defined or defended “established facts” that remove it from reasonable consideration. In that I think he is correct, but he has not done the work that would be needed to secure such a position for someone who actually trusts the AfR as he has defined it.
For if the AfR as Reppert formally presents it is correct, it sooner destroys reason than saves it, and therefore to redeem reason he has far more work cut out for him than the average bear. This is only more the case since his “rescue” theory is theism, which, unlike naturalism, requires positing as its fundamental entity something unobserved—yet unobserved entities can only be accessed through reason, the very faculty the AfR destroys. In contrast, though naturalism certainly appeals to many unobserved entities, its fundamental entity (nature) is observable, and therefore if nature alone can offer a sufficient basis for the reliability of reason, then the AfR doesn’t even get off the ground, and there is nothing that need be done to restore the reliability of reason—for the evidence of the senses can then be trusted when it confirms reason’s reliability. Reppert cannot rely on this route for theism: if the AfR is correct, then he must establish reason as sound before he can use it to establish God as existent. Otherwise, his defense of reason becomes circular.
But this problem only flaws his attempt to argue from the AfR to theism, and I suspect that given time and effort Reppert probably could establish at least an inductive case against PS, and thus establish his more questionable premise as at least probably true, if everything else he argues already is correct. And even without that, his argument, if correct, still defeats naturalism, by leaving only two options: PS or some form of antinaturalism. The AfR is therefore a respectable argument that deserves a naturalist’s attention. Any naturalist worldview that does not entail, strongly imply, or successfully argue for the denial of the Basic AfR Premise, cannot be maintained as credible. It is therefore necessary to show how Reppert’s arguments for this premise fail to establish it, and how at least one formulation of naturalism can justify its denial.
Three Underlying Problems
Reppert’s discussion of the AfR flirts with disaster at several points, where conceptual mistakes threaten to undermine his project, and we should get these issues out of the way first, before giving his arguments their best possible shake.
The first and most fundamental problem is what I shall call the Possibility Fallacy: assuming that having no explanation is equivalent to not being able to have one (e.g. 69-71). The Basic AfR Premise is a global assertion: no belief is rationally inferred if it is explained with nonrational causes. It is thus necessary to prove that it is not even possible for this to happen (a conceptual argument that is pretty hard to carry off), or at the very least that it probably does not actually happen in human brains (an argument that would require a survey of neurophysical data, which Reppert never conducts).
Since Reppert never develops the latter type of argument, he can only ever mean the former. For example, Lewis’s “best explanation argument maintains that the necessary conditions for rationality cannot exist in a naturalist universe” (70: emphasis added), by which he must mean any naturalist universe, in other words every naturalist universe that is conceptually possible. That’s a pretty tall order. This underlying premise is essential to the Basic AfR Premise. Yet you cannot establish this underlying premise by showing that naturalists have no explanation for the existence of rational inference. For that does not establish the impossibility of such an explanation. But without establishing impossibility, the AfR fails—unless it is reformulated probabilistically, which Reppert never attempts.
There are two ways the AfR could be so reformulated: Reppert could attempt to show that existing brain science data indicate that a naturalist explanation is improbable, even if it is conceptually possible (for example, it would be true only on a set of data materially different from the data we actually have); or Reppert could attempt to show that a naturalist explanation is conceptually impossible in a certain subset of naturalist universes, and then present physical evidence that the universe must belong to that subset of naturalist universes (while, of course, still leaving room for that evidence to be compatible with the actual universe conforming at the same time to no naturalist scheme at all, but to some supernatural scheme instead, like theism). The latter comes closest to characterizing what Reppert actually (though unknowingly and insufficiently) attempts.
Neither argument would be easy to achieve, and I doubt either could be achieved with the presently limited data available. And Reppert certainly accomplishes neither. So that leaves him with the target of a global conceptual absolute, which is extraordinarily hard to prove, especially given the vast complexity and diversity of naturalisms available. To be fair, Reppert is aware of this problem (71) and does attempt to hit such a target with his various AfR’s later in the book. But at times he seems to forget this, and acts like proving that naturalists have no explanation is the same thing as proving they can never have one. And that is a fatal mistake. We will revisit this problem when it arises.
Next is the deployment of what I shall call the Causation Fallacy. Reppert endorses Lewis’s argument that “the presence of a cause and effect account of a belief is often used to show the absence or irrelevance of a ground and consequent relationship” (63). The point being that, if rational belief can be given a physical cause-and-effect account, then that would show that a ground-and-consequent relationship was actually irrelevant to that belief being formed. From this it would follow that it is conceptually impossible for a naturalist to explain rational inference (causally) without a self-defeating argument.
However, it would be fallacious to argue that “lunchmeat is often baloney, therefore all lunchmeat is baloney.” It is obvious what is wrong with such an argument, and this ties right back into my discussion of the Possibility Fallacy above: simply because it just so happens that all false beliefs are formed causally, it does not follow that all causally formed beliefs are false. This is a fallacy of Affirming the Consequent, to which Lewis and Reppert were apparently led by Hasty Generalization. The fact of a belief being causally formed is, after all, a Red Herring: what makes a belief dubious is not that it was causally formed, but that it was formed by a process that was not significantly truth-finding. This is the crucial issue: whether a causal process is significantly truth-finding. This cannot be ascertained by examining whether it is a causal process. Thus, it cannot be said that rational inference cannot be a causal process simply because nonrational inferences are. To be fair again, Reppert tries to avoid this fallacy when he gets to more careful variants of the AfR later on, but we shall see that he doesn’t always dodge the ball.
Finally, the most devastating fault of Reppert’s book is the painting of straw men and a bad case of Armchair Science: in other words, his failure to interact at several crucial points with the extensive philosophical literature—and even worse, his failure to get acquainted with the latest scientific findings of cognitive science. I will point these failings out as we go. Quite often he simply ignores the naturalist and scientific literature on a subject and declares an issue unresolved that in fact has been resolved many times in many different ways—though Reppert’s readers would never know this, because Reppert ignores all these solutions. The solutions may all be failures, but Reppert has to show that they are. At the very least, he is obligated to tell his readers that the proposed solutions exist. Yet often he doesn’t even do that. So when he declares that naturalists “invariably fail to explain reason naturalistically” (119) he is asserting what he hasn’t even come close to proving—since he hardly interacts with naturalist philosophies at all, much less all of them (as the word “invariably” entails).
Likewise, cognitive science has produced many findings that lead to many plausible solutions to the problems he presents, yet Reppert’s discourse often betrays no awareness of this data—for example, the fact that brains compute virtual simulations of their environment, that the experience of a unified consciousness is a post hoc event, that many forms of ordinary computation are employed in the brains even of lower animals (for example, the way visual data is computed into a visual field) and that these systems are evolutionarily related to those computational systems associated now with language and reason, and so on.
There are many works discussing the physiology of intelligence, such as Raymond Cattell’s Intelligence: Its Structure, Growth and Action (1987). Though now dated, it already covers details Reppert never seems aware of. For example, in “The Physiological and Neurological Bases of Intelligence” (pp. 213-54), Cattell discusses early studies of failures in different aspects of logical reasoning in relation to known brain damage, which is the basis for concluding that reason is a physical function. Indeed, any theory of a reasoning brain (including Reppert’s) must account for the peculiar facts at hand linking different rational functions with different brain centers. The way the brain fails tells us a lot about how it works. Reppert does not address this. He doesn’t even seem aware of it. That does not mean he can’t deal with it. But he certainly must deal with it before he can claim victory, or anything close to it. To the same end, Cattell also discusses evidence that the more abstract a concept or logical relation, the more brain resources it consumes when we entertain it—and, conversely, losses in brain matter degrade that capacity—which again conforms more to physicalism than to Reppert’s supernatural dualism. And that was over fifteen years ago. A huge amount of progress has been made in these areas since. Reppert discusses none of it.
An example of the most recent work can be found in Relational Frame Theory: A Post-Skinnerian Account of Human Language and Cognition (2001). This outlines a neo-behaviorist theory of mind and rational cognition, deeply rooted in scientific evidence, which Reppert seems largely unaware of and unequipped to address. Therein: Steven Hayes, et al., “Relational Frame Theory: A Précis” (pp. 141-54) outlines a physicalist theory of rational cognition; Ian Stewart, et al., “Relations among Relations: Analogies, Metaphors, and Stories” (pp. 73-86) discusses the scientific evidence regarding the neurophysiology of rational cognition; Steven Hayes, et al., “Thinking, Problem-Solving, and Pragmatic Verbal Analysis” (pp. 87-101) discusses scientific evidence regarding the role of language and natural reasoning, and their evolutionary advantages; Dermot Barnes-Holmes, et al., “Understanding and Verbal Regulation” (pp. 103-17) discusses logic as computational rule-following, and the production of intentionality from relational frame perception; and ibid., “Self and Self-Directed Rules” (pp. 119-39) discusses consciousness as a perceptual construct of a self. All issues Reppert tackles. Yet he never even mentions much less addresses any of this scientific evidence or the theories developed to account for it. And relational frame theory isn’t the only theoretical treatment of the evidence out there. Many more exist. Reppert deals with none of them.
I had already warned Reppert about both problems—namely, the lack of interaction with printed sources regarding both the science and the naturalist philosophy of mind—before his book went to print, but he seems to have made no effort to rectify the failing. This makes his book quite deficient as a contribution to philosophy, much less to cognitive science—which his book pretends to be: for Reppert is formulating a theory regarding brain function that is well within the purview of existing scientific research and analysis. Just because Reppert posits supernatural entities does not make his theory any less a scientific hypothesis, open to test against the data like any other, which means he cannot ignore the existing data. But his book is worse than merely deficient: since each AfR depends on his proving that naturalists can have no response, for him to ignore all the responses proposed (and all the relevant scientific data), and thus effectively pretend there are none, is quite fatal to his entire enterprise.
The Basics of Reason under Carrier Naturalism
Reppert sets out nine propositions that must be true (and, at least potentially, explicably true) on Naturalism, in order for Reason to be trusted as a reliable faculty (73). I agree with them all, except one, though even that I don’t reject completely. So I will show how (at least my own version of) naturalism can explain every single one of these propositions. Though many other naturalists have their own responses to the same issues, I will only draw on my own variant of Metaphysical Naturalism which is thoroughly described in my book Sense and Goodness without God: A Defense of Metaphysical Naturalism. The theories I advance here will demonstrate that all nine propositions have the potential of being true on naturalism, and therefore naturalism is not unable to account for Reason. This entails that the Basic AfR Premise is false, and hence so is every version of the AfR that Reppert presents.
Here I will only outline how my theory explains Reppert’s nine propositions, and then I will refer back to this summary later when I get to actually defending my theory on each point. Here are his nine propositions:
1. States of mind have a relation to the world we call intentionality, or about-ness.
Cognitive science has established that the brain is a computer that constructs and runs virtual models. All conscious states of mind consist of or connect with one or more virtual models. The relation these virtual models have to the world is that of corresponding or not corresponding to actual systems in the world. Intentionality is an assignment (verbal or attentional) of a relation between the virtual models and the (hypothesized) real systems. Assignment of relation is a decision (conscious or not), and such decisions, as well as virtual models and actual systems, and patterns of correspondence between them, all can and do exist on naturalism, yet these four things are all that are needed for Proposition 1 to be true.
2. Thoughts and beliefs can be either true or false.
From an analysis of data a brain computes varying degrees of confidence that a virtual model does or does not correspond to a real system. If there is such a correspondence, then having confidence in this is a true belief, while having confidence that there isn’t such a correspondence would then be a false belief. If there is no such correspondence between the virtual model and reality, then having confidence that there is such a correspondence is a false belief, but having confidence that there isn’t such a correspondence would be a true belief. Thus, Proposition 2 only requires the existence of correspondence and confidence, both of which can and do exist on naturalism.
3. Human beings can be in the condition of accepting, rejecting or suspending belief about propositions.
Every meaningful proposition is the content or output of a virtual model (or rather: actual propositions, of actual models; potential propositions, of potential models). Propositions are formulated in a language as an aid to computation, but when they are not formulated, they merely define the content of a nonlinguistic computation of a virtual model. In either case, a brain computes degrees of confidence in any given proposition, by running its corresponding virtual model and comparing it and its output with observational data, or the output of other computations. Thus, when I say I “accept” Proposition A this means that my brain computes a high level of confidence that Virtual Model A corresponds to a system in the real world (or another system in our own or another’s brain, as the case may be); while if I “reject” A, then I have a high level of confidence that A does not so correspond; but if I “suspend judgment,” then I have a low level of confidence either way. By simply defining “proposition” as I have here, Proposition 3 follows necessarily from Propositions 1 and 2. Therefore naturalism can account for this as well.
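The confidence-threshold picture sketched in the last two sections can be made concrete with a toy model. This is purely illustrative: the function name, the numeric thresholds, and the idea that confidence reduces to a single number are my own simplifying assumptions, not claims about how brains actually compute.

```python
# Toy sketch: the three doxastic attitudes of Proposition 3 as regions
# of a single confidence scale. Thresholds (0.8, 0.2) are arbitrary
# illustrative choices, not empirical values.

def doxastic_attitude(confidence: float) -> str:
    """Map a confidence level (0.0 to 1.0) that a virtual model
    corresponds to a real system onto accept / reject / suspend."""
    if confidence >= 0.8:
        return "accept"      # high confidence the model corresponds
    if confidence <= 0.2:
        return "reject"      # high confidence the model does not correspond
    return "suspend"         # low confidence either way

print(doxastic_attitude(0.95))  # accept
print(doxastic_attitude(0.50))  # suspend
print(doxastic_attitude(0.05))  # reject
```

The point of the sketch is only that nothing beyond graded confidence is needed for Proposition 3 to hold: all three attitudes fall out of one computed quantity.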
4. Logical laws exist.
Logical “laws” are simply rules of computation. Like the laws of physics, which simply describe how the physical universe actually, physically behaves, the laws of logic do nothing more than describe certain kinds of computational procedure. These “laws” are also no different than, for example, the “laws” of agriculture—the rules of conduct that dictate successful cultivation of crops vs. unsuccessful cultivation (when accurately stated in detail and not summarized into rules of thumb). But in both cases, the “laws” are “discovered” as being the best way to take advantage of the way the universe works. Thus, all you need for the laws to exist is a universe that works a certain way—it will automatically follow from the existence of any such universe that there will be a best way to describe and manipulate it, and that “best way” is something humans can discover and then describe. We then name these descriptions “rules” or “laws.”
Logical laws are essentially the laws of linguistic communication—whether communicating with oneself (for purposes of computation) or communicating with others (for purposes of transmitting data or conclusions to other “computers”). The moment any language exists, logic exists, for logic is the procedure required for language to succeed. Since it is a procedure, it requires no special supernatural “entities.” The laws, when written down, merely describe the behavior of natural entities that successfully communicate (with themselves or others). In other words, they prescribe certain behaviors that are useful, just as agricultural laws describe and prescribe certain behaviors that are useful. Therefore, any universe in which computation and communication exist will also automatically contain logical laws. It is physically impossible for it to be otherwise. And since nothing else is needed, just computation and communication, and both exist on naturalism, naturalism therefore inherently entails the existence of logical laws, in just the same way that it entails physical laws and agricultural laws and so on.
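The claim that a logical “law” is just a rule of computational procedure can be illustrated with a toy sketch. Here modus ponens is implemented as a purely mechanical operation on symbol strings; the naive string format (`"P -> Q"`) and the function name are my own illustrative assumptions, of course, not a serious theory of inference.

```python
# Toy sketch: modus ponens as a mechanical rule applied to symbol
# strings, with no "apprehension" anywhere in the machinery.
# Premise format is the illustrative convention "P -> Q".

def modus_ponens_closure(premises: set[str]) -> set[str]:
    """Repeatedly apply 'from P and (P -> Q), infer Q' until
    no new conclusions appear, and return everything derived."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for sentence in list(derived):
            if "->" in sentence:
                p, q = (part.strip() for part in sentence.split("->", 1))
                if p in derived and q not in derived:
                    derived.add(q)   # the rule fires mechanically
                    changed = True
    return derived

result = modus_ponens_closure({"A", "A -> B", "B -> C"})
print(sorted(result))  # A and the conditionals, plus derived B and C
```

Nothing in the loop consults a supernatural entity; the “law” is exhausted by the procedure, which is the point of the paragraph above.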
5. Human beings are capable of apprehending logical laws.
Human brains evolved the computational ability to discover better ways of doing things, including better ways to think and communicate, but also better ways to grow cabbage or traverse distances or kill, and so on—it’s all the same ability: to explore, experiment, compute, and invent better procedures to achieve any goal. Once you have that ability, you automatically have the ability to discover logical laws, just as one can discover agricultural laws. It is just a matter of time and circumstance. And since one of our most potent evolutionary advantages was language, and the procedure entailed by any successful use of language literally is logic, the discovery of the laws of logic was all but inevitable for humans. We thus “apprehend” the procedures required by language and computation the same way we “apprehend” the procedures for cultivating cabbage or manufacturing a spear. It is the same act of perception engaged when we observe that a pattern of marks is a face, or a pattern of sound is a song. If we can perceive these things, then we can perceive any other kind of pattern, including patterns called “procedures.” So if perception exists on naturalism, then we can perceive “logical laws” on naturalism.
6. The state of accepting the truth of a proposition plays a crucial causal role in the production of other beliefs, and the propositional content of mental states is relevant to the playing of this causal role.
Brains are computers. As such, the output of one computation (including the output of confidence level) is often physically the input of another computation, and it thereby has a causal effect on that other computation’s output. Every conscious computation in the brain is the computation of either a virtual model or data physically connected to or computed from a virtual model (such as a confidence level). Since a proposition literally is the content or output of a virtual model, propositional content therefore literally has a physical-causal effect on further computation that relies on that virtual-model computation (which literally is the “proposition” in question). All this follows from everything I have declared about the other Propositions above. Therefore naturalism can account for Proposition 6, too.
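The causal chaining described above—one computation’s output serving physically as another’s input—can be sketched in miniature. The scenario (rain, wet streets, slow traffic) and the numeric weights are entirely illustrative assumptions of mine, chosen only to show confidence propagating causally through a chain.

```python
# Toy sketch of Proposition 6: the output of one computation (a
# confidence level) is literally the input of the next, so propositional
# content has a physical-causal effect on downstream computation.
# All weights are arbitrary illustrative values.

def confidence_streets_wet(rain_confidence: float) -> float:
    """Confidence that the streets are wet, computed FROM the
    upstream confidence that it is raining."""
    return 0.9 * rain_confidence + 0.05

def confidence_traffic_slow(wet_confidence: float) -> float:
    """Confidence that traffic is slow, computed FROM the wet-streets
    confidence, two causal steps removed from the original percept."""
    return 0.7 * wet_confidence

rain = 0.8                              # output of some earlier perceptual computation
wet = confidence_streets_wet(rain)      # that output is causally this input
slow = confidence_traffic_slow(wet)     # and so on down the chain
print(wet, slow)
```

Change the upstream confidence and every downstream value changes with it: the content of the earlier “belief” is doing causal work, which is all Proposition 6 requires.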
7. The apprehension of logical laws plays a causal role in the acceptance of the conclusion of the argument as true.
Since “logical laws,” like “agricultural laws” or “physical laws,” are descriptions of things that happen (or that we can make happen, if we follow the procedure involved), and these descriptions are visible as patterns in data, and brains as computers can perceive patterns in data, it follows that human brains can perceive logical laws, as I’ve noted already. Just as this perception in the case of agriculture or physics eventually results in our having confidence that following the perceived procedure will improve our cultivation or manipulation of nature, so also this perception in the case of logic eventually results in our having confidence that following the perceived procedure will improve our ability to think and communicate. In both cases this is obviously a causal effect: our brains causally compute the perception, the output of which causally affects all future computations that draw on this output as input. Since physical causation is all that is needed, and is easily accounted for here, naturalism can account for Proposition 7.
8. The same individual entertains thoughts of the premises and then draws the conclusion.
Singular consciousness is itself a virtual-model computation of certain brain activities. We know for a fact that it is a post hoc construction—the neurophysical data is conclusive on this point. Consciousness follows brain action, by a measurable lapse of time. This suggests that consciousness is a perceptual computation just like all others. In other words, although different parts of the brain do different things, even in the same train of reasoning, one part of the brain (the cerebral cortex) observes and computes relations among the results, and other parts of the brain (such as the hippocampus) coordinate all outputs together into a coherent state of apprehension, and this stitched-and-perceived output has its own causal effect back down upon the individual systems. To put it another way, just as one section of the brain generates a model of what exists in our visual field, by drawing together all kinds of disparate data and computations performed on that data (like color, shape, speed, relation, etc.), another section of our brain generates a model of a “self” to which the brain can physically relate various other outputs of its computational systems.
This organ (the cerebral cortex), with the aid of global synchronizing signals (such as originate in the hippocampus), thereby generates what we call a “unified consciousness.” This is both an illusion and a real thing. It is an illusion in the same sense that colors are an illusion—fictions created by our brains to represent photon frequency data. But this is also causally and physically real in the same sense that colors are produced by real things (photons with frequencies of vibration) and have a causal effect on future computation (the photons cause our brains to generate color-perceptions, but it is perception of the colors that then causes the brain to compute other conclusions, without reference again to the original photons or their frequencies of vibration).
Unifying consciousness involves many systems (for example, synchronic “stitching” of brain events is brought about physically by different organs than those that build a self-perception—but the synchrony of brain systems, and its regulation, is still a physically observable phenomenon, and proven necessary for consciousness), but the result is the same as the unifying of the visual field, the unifying of the sound field, and so on: it is a computation of a virtual model of something really going on. It’s just that this time the thing that is “going on” is what is going on in the brain, rather than outside of it. But there is no practical difference here. In other words, the cerebral cortex is a sensory organ little different than an eye or an ear (and their attached brain systems): except that just as the eye senses light instead of sound, and an ear sound instead of light, so a cerebral cortex senses patterns of brain activity instead of patterns of light or sound in themselves (which patterns are “sensed” by other sensory organs in the brain, such as the primary visual cortex, and so on).
Thus, setting Proposition 8 as a condition of Reason is both correct and incorrect. It is obviously not true that the same logic gate must perform every step of a computation in a computer. Obviously computation involves many disparate logic gates doing their own thing, all causally linked in a complex way. Yet still, the same “complex of circuits” is what takes an input and produces an output (this much Reppert concedes). But this means it remains true that the “same brain” takes the inputs and produces the outputs, following a logical procedure, which is just a category of computational procedure. The existence of a unified consciousness isn’t even necessary, as we see that mindless computers can perform every kind of logical procedure humans can. Unified consciousness is useful, however, and, more importantly, it is how we happen to do things: how our brains developed to process data. This obviously affects how Reppert, for example, “sees” logical computation, since he is a computer that employs (the perception of) a unified consciousness to do that. But it does not follow that his (or our) way is the only way reasoning can be performed, and as we have seen, we know for a fact that it is not. Truly, there are great advantages to our way, advantages which “mindless” computers lack. But that has to do with the nature of consciousness, not reason, a distinction I will revisit later.
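The claim that a unified consciousness is not necessary for logical procedure can be illustrated with a toy example. Below is a minimal sketch (all names hypothetical) of a “mindless” forward-chaining engine that performs modus ponens purely mechanically: many disparate steps, causally linked, with the same complex of circuitry taking premises as input and yielding the conclusion as output.

```python
# A "mindless" inference engine: no consciousness involved, yet the same
# physical procedure takes premises as input and produces conclusions as
# output. (All names here are hypothetical, for illustration only.)

def forward_chain(facts, rules):
    """Repeatedly apply rules of the form (antecedent, consequent)
    until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)  # modus ponens: P, P->Q, therefore Q
                changed = True
    return derived

facts = {"Socrates is a man"}
rules = [("Socrates is a man", "Socrates is mortal")]
print(forward_chain(facts, rules))
```

The engine follows a logical procedure without any awareness that it is doing so, which is all the argument above requires.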
9. Our processes of reasoning provide us with a systematically reliable way of understanding the world around us.
…unless the AfR is correct in concluding that our processes of reasoning cannot do this given the actual observational data at hand (i.e. the observed natural world). Whether he knows it or not, Reppert’s argument formally seeks to prove this Proposition 9 false, in order to introduce Theism as a way of making it true again. I have already discussed above the problem he faces with such an approach (“Formulating the Basic AfR”). Of course, I will show Proposition 9 is not false on naturalism, therefore we have no need of a theistic hypothesis. But the central distinction required here derives from what Reppert himself says: he posited as a given that the “validity” of reason is “an established fact” (70), so the only question is not whether reason is reliable—for Reppert concedes that it is from the start—but whether a purely natural universe can produce (evolve) the “reliable reason” that we observe and that Reppert accepts as an established fact. We will show that it can. Therefore, if we are correct, then the AfR fails.
Such is the approach that I will use here. In subsequent sections I will elaborate and defend all the statements I have made above.
Six Arguments from Reason
Reppert deploys six different versions of the AfR (72-85; cf. 86 & 87), each one aimed at establishing the Basic AfR Premise, by focusing each time on a different but equally essential feature of reason. So I will now address in detail each of these arguments in turn.
1. The Argument from Intentionality (AfI)
The Basic AfI Premise is “If naturalism is true, then there is no fact of the matter as to what someone’s thought or statement is about” (75). Reppert tries to establish this premise by claiming that intentionality cannot be reduced to physical things or causes. He never really demonstrates this so much as asserts it. He quotes C.S. Lewis, for example, musing that “To talk of one bit of matter being true about another bit of matter seems to me to be nonsense” (74), which I shall call the Argument from Lack of Imagination. The problem is that something is only nonsense when it doesn’t mean anything. But the propositions belonging to the class in question do mean something. Per my analysis of Propositions 1 and 2 above, “This bit of matter is true about that bit of matter” literally translates as “This system contains a pattern corresponding to a pattern in that system, in such a way that computations performed on this system are believed to match and predict the behavior of that system.” This is not nonsense. So Lewis’s Argument from Lack of Imagination is easily refuted.
Surely Reppert would agree that for one theory to be “about” something in the real world (our uncle, the universe) it does not have to be true. All that is required is that the person formulating the theory assign that theory to the hypothesized reality. So “X is about Y” translates literally as “the pattern of X hypothetically corresponds to the pattern of Y.” In other words, the “aboutness” of a thought is by definition the hypothesis of “correspondingness” between a thought and its object, which is something we choose (consciously or not) to assign to a thought. It is not something that exists apart from that choice.
So when Reppert complains that “if reality is fundamentally physical” then “the state of the world does not uniquely determine what meaning a word has” and therefore “the word just has no determinate meaning” (74) he really doesn’t get how language works. Language is a tool—it is a convention invented by humans. Reality does not tell us what a word means. We decide what aspects of reality a word will refer to. Emphasis here: we decide. We create the meaning for words however we want. The universe has nothing to do with it—except in the trivial sense that we (as computational machines) are a part of the universe.
Reppert complains later that such a decision to name something is always itself an intentional act (90), but that is neither true nor relevant. A decision is just a decision—all computers make them, even those with no intentional states. Decisions can thus be adapted to intentional behaviors as easily as any others. Meanwhile, the core engine of intentionality derives from the attentional centers of the brain, possessed and employed by all animals. That’s why cats can keep track of their prey, for example—by the same means, we can track the image or thought of, say, our uncle, by attending to it cognitively, a process well understood in neurophysical terms.
Thus, I am certain many words were derived without any deliberate intention behind them, but simply from unthinking practices of sound emulation (think of the words bang, boom, zap, plunk, etc.), originating from attention to sensory data. But even for those words that were assigned by deliberation, the intentional relationship originated with the pre-verbal thought (the hypothesization that X matches Y), and so the buck stops there. Assigning names is merely secondary. The assignment of a relationship (such as “the behavior of model X will predict the behavior of reality Y”) comes first. It should come as no surprise that many intentional states are constructed from other intentional states, just as many colors are constructed in our visual field from other colors. But in any such construct, all of the most underlying intentional states will themselves be intentionally basic, and not constructed from other intentional states. They are constructed, instead, from underlying nonintentional physical phenomena, just as colors are.
In short, we create virtual models in our minds of how we think the universe works, then we choose what names to give to each part or element of that virtual model, in order to suit our needs (chief among them communication, which in turn leads to semantic computation, as we learn to communicate with ourselves). Once we choose to assign the word “white” to “element A of model B” that assignment remains in our computational register: the word evokes (and translates as) that element of that virtual model. That’s how communication works: I choose “white” to refer to a certain color pattern, you learn the assignment, and then I can evoke the experience of that color in you by speaking the word “white.” The “about-ness” here is purely internal: a computer assigns a label to a repeatable experience. That’s it. There is no mystery here, and nothing supernatural going on. It is straightforward computational physics.
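The codebook picture of communication just described can be sketched in a few lines. This is only an illustrative toy (the class and all names are hypothetical): a speaker assigns the label “white” to an element of an internal model, a hearer learns the same assignment, and thereafter the word evokes that element in both.

```python
# Communication as shared label assignment: each speaker holds a codebook
# mapping words to elements of an internal model. (Hypothetical toy names.)

class Speaker:
    def __init__(self):
        self.codebook = {}  # word -> element of the speaker's virtual model

    def assign(self, word, model_element):
        self.codebook[word] = model_element  # we decide what a word refers to

    def evoke(self, word):
        return self.codebook.get(word)  # the word evokes the assigned element

alice, bob = Speaker(), Speaker()
alice.assign("white", ("color-pattern", 255, 255, 255))
# Bob "learns the assignment," i.e. copies the convention:
bob.assign("white", alice.evoke("white"))
print(alice.evoke("white") == bob.evoke("white"))
```

Once both codebooks agree, speaking the word reliably evokes the same model element in each party, which is all that successful reference requires on this account.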
Reppert, like Quine, is disturbed by the fact that this all just goes on in our heads. We never know for sure whether “element A of model B” corresponds to any real thing in the universe, much less the very same thing. But that is not a problem for intentionality, only for epistemology—it’s the good old-fashioned “problem of knowledge.” And I am sure Reppert would agree we can still say from time to time that the proposition “element A of model B corresponds to a real thing in the universe, even the very same thing” is probably true, and really mean it—even estimate how probable this is—all based on available data and additional exploration and testing. Moreover, “element A of model B corresponds to a real thing in the universe, even the very same thing” can be a false proposition and still be about the same thing. So the fact that we can’t know whether such a proposition is true presents no challenge to the issue of establishing intentionality.
Returning to my earlier definition of aboutness, as long as we can know that “element A of model B is hypothesized to correspond to real item C in the universe” we have intentionality, we have a thought that is about a thing. The thing doesn’t even have to exist—the same statement stands even if it is rephrased as “hypothesized real item C in the hypothesized universe.” And it is obvious how the proposition “element A of model B is hypothesized to correspond to item C which is hypothesized to be real in the hypothesized universe” can be identified as a true proposition even by a dumb computer. Because the verbal link that alone completely establishes aboutness—the fact of being “hypothesized”—is something that many purely mechanical computers do (a point I will elaborate on later, e.g. “The Argument from the Psychological Relevance of Logical Laws” and “Introducing the Question of Computers“). So there is no question about whether such a proposition can be true in a physical universe, much less any question about how we can know it is true: since we create the relation, we have first-hand knowledge, even knowledge-by-direct-acquaintance, that the proposition is true.
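The point that even a “dumb” computer can verify a proposition of the form “element A of model B is hypothesized to correspond to item C” can be made concrete. The sketch below (all names hypothetical) stores aboutness as a record of hypothesized correspondence. Whether the hypothesis is true of the world is a separate epistemological question; that the hypothesis exists is a plain physical fact the machine can check first-hand, since it created the relation itself.

```python
# Aboutness as a stored hypothesis of correspondence between a model
# element and a (possibly nonexistent) world item. (Hypothetical names.)

hypotheses = set()

def hypothesize(model_element, world_item):
    hypotheses.add((model_element, world_item))  # we create the relation

def is_about(model_element, world_item):
    # True iff the correspondence has been hypothesized. Checkable by any
    # "dumb" computer: the machine made the assignment, so it has direct
    # access to the fact of the assignment, regardless of whether the
    # world item really exists.
    return (model_element, world_item) in hypotheses

hypothesize("element A of model B", "item C")
print(is_about("element A of model B", "item C"))
```

Note that `is_about` never consults the world; the intentional relation is internal to the machine, exactly as argued above.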
Reppert pulls out Quine’s disastrous gavagai analogy in an attempt to prove his point from an alleged ambiguity in language. “There is no fact of the matter,” Reppert claims, “as to whether the native is referring to ‘rabbit’ or ‘undetached rabbit parts’” (74-75). But this is not really true. Insofar as ‘rabbit’ and ‘undetached rabbit parts’ even differ in meaning (and if they mean the same thing, then Reppert’s and Quine’s concerns are utterly pointless), there are many possibilities, none of which was openly considered by Quine—and thus not by Reppert, either. For example: (a) the native doesn’t care what the word means or how he may be misusing it; (b) the word (for the native) means only one or the other, and the native knows this; or (c) the word (for the native) means both things (it defines a whole class of objects and not just one object), and the native knows this; or (d) the native isn’t sure what the word is supposed to mean, but knows who does (such as his elders). Quine’s argument covertly depends on (a), which is rather ridiculous, or (d), which only pushes the issue back to (b) or (c) anyway.
The reality, in most actual, genuine cases of communication, is either (b) or (c)—or (d), which ultimately entails either (b) or (c)—neither of which supports Quine’s point. To the contrary, they refute it. And how can you know? Just ask the native “Do you mean by ‘gavagai’ a ‘rabbit’ or ‘undetached rabbit parts’ or can it mean either?” He will tell you. Case closed. How will the native know? Because he learned the language and knows how it is used (and might even be able to point you to a dictionary or living authority on the matter) and, more importantly, he pictured exactly what he meant in his mind before he searched his verbal index for a word corresponding to that picture in his mind, and so now, upon your query, he need only run the procedure in reverse to compute the answer.
The connection between a pattern—let’s say, always ‘rabbit’ and never just ‘undetached rabbit parts’—and a word (gavagai) is physically created and strengthened in the brain of the native speaker. And his cerebral cortex can detect this physical connection and use it (to transmit or receive data using its corresponding codeword, or teach the connection to someone else). How did the native’s brain learn this precise a connection? By paying attention. Imagine one day he saw (or asked his elder to imagine) a rabbit all chopped up but all its parts sewn back together at random. “Papa, is that still a gavagai?” “No, son. That’s just a mangled mess of gavagai parts.” Or maybe even: “Yes, son. The word means both.” Or perhaps: “We’ve never encountered that before, so let’s make up a rule to cover it right now.” So no matter what, there is a fact of the matter after all. But the Argument from Lack of Imagination strikes again, and both Quine and Reppert miss the obvious.
Since the meaning of a word is tied physically in our brains to certain repeatable patterns or assemblies of patterns in experience, there is never any question as to what that word means to us. We can simply crunch the data in our heads and out comes the answer, just by processing the virtual model physically linked to the word (in this case, the model of a gavagai, which is computed from another model: the model of our past observations of, and queries about, the use of the word gavagai).

There remains a question as to what the word means to other people, and it is true that it might mean slightly different things to different people. But the more expert and precise a person is, the more their meaning will converge on that of other expert, precise people, and at all events, the set of all the meanings shared by all the people who speak the same language is by definition the conventional meaning known to all. The rest is just a question of rule enforcement: if the experts want the meaning-set that they share to be the convention, they must educate the public accordingly. This is the role served, for example, by dictionaries: to provide an authoritative codebook that educates everyone into sharing the same conventions of meaning. And it is only by seeking to maintain some such enforcement of conventions of meaning that people can communicate at all. Where a word is broad in scope, either officially or by default (e.g. gavagai means both ‘rabbit’ and ‘undetached rabbit parts’, which one depending on the context), further explication can generate the needed precision. So even the natural ambiguity in language is overcome by additional communication.
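The claim that the conventional meaning of a word is the set of meanings shared by all its speakers can be sketched as a simple set intersection. This is a toy illustration only, with hypothetical data:

```python
# Conventional meaning as the intersection of the meaning-sets that
# individual speakers attach to a word. (All data hypothetical.)

def conventional_meaning(speaker_meanings):
    """Return the meanings shared by every speaker of the word."""
    sets = list(speaker_meanings.values())
    shared = set(sets[0])
    for s in sets[1:]:
        shared &= set(s)  # keep only what all speakers share
    return shared

gavagai_speakers = {
    "native": {"rabbit", "undetached rabbit parts"},
    "elder":  {"rabbit", "undetached rabbit parts"},
    "novice": {"rabbit"},
}
print(conventional_meaning(gavagai_speakers))  # the shared core: {'rabbit'}
```

On this picture, education and dictionaries work by enlarging each speaker's individual set toward the expert set, which enlarges the shared core.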
Such is my solution to the Problem of Intentionality. Now, one might play fair and suppose Reppert didn’t know my view—though I am pretty sure he did, from email exchanges before the publication of his book. Set that aside. What about other naturalists on the public record? Reppert presents only one philosopher (W. V. Quine) as doubting the reducibility of intentionality, but Reppert hardly even presents Quine’s philosophy or his position on issues related to intentionality, much less discusses any rebuttals by other philosophers in print. He merely rests uncritically on one detached example. Thus, Reppert doesn’t really demonstrate what he needs to: namely, that no naturalists have a solution to the Problem of Intentionality. In fact, even that would not be enough, since to avoid the Possibility Fallacy he must show that it is impossible for any Naturalist to have a solution (or at the very least that this is improbable), but Reppert makes no case for such a conclusion.
Indeed, Reppert is addressing metaphysical naturalism, yet Quine is not a metaphysical naturalist—to the contrary, he rejects all metaphysics, and instead advances a kind of pragmatic positivism, referring to himself as a naturalist only in the obscure epistemic sense. So where are the metaphysical naturalists? Where is Dennett? Field? Millikan? It does not seem Reppert checked the writings of any such philosophers, except the bizarre fringe group known as “eliminative materialists,” yet even then he does not name (here) a single philosopher in that group, much less interact with their philosophies. Though he refers readers to some articles that do this, he does not refer readers to any articles that present, much less rebut, the arguments of any non-eliminative materialists on the issue of intentionality (or related subjects).
Yet the Stanford Encyclopedia of Philosophy tells us that “among philosophers attracted to a physicalist ontology, few have accepted the outright eliminativist materialist denial of the reality of beliefs and desires” and that in fact “a significant number of physicalist philosophers subscribe to the task of reconciling the existence of intentionality with a physicalist ontology.” So why don’t we hear from them? Why doesn’t Reppert discuss them? He certainly must refute them for his AfI to stand any chance. Yet he ignores them completely. Therefore, we can conclude that Reppert has failed to establish the Basic AfI Premise and therefore the AfI remains unproved. And since I have presented a coherent and plausible naturalist account of intentionality, we can go even further and conclude that the Basic AfI Premise is false, and therefore so is the conclusion of the AfI. [For more discussion, see in the secondary part of this critique: “Intentionality” and “Mindreading“]
2. The Argument from Truth (AfT)
The Basic AfT Premise is “If naturalism is true, then no states of the person can be either true or false” (77). As with the AfI, Reppert only argues for this premise by interacting with eliminative materialists—though at least now he actually does interact with and name them (or two of them: the Churchlands). But he still does not even mention much less interact with non-eliminative materialists. Indeed, his entire AfT rests on an unproven conditional: “some theorists…are telling us that we must be prepared to find nothing in the brain that can be true or false, and if such an alarming occurrence take[s] place, the reasonable thing to do would be to deny the existence of truth” (77: emphasis added). That’s it. “Some” say this, and “if” they are right, then the conclusion of the AfT is true. That is not adequate to establish that the Basic AfT Premise is true, and therefore Reppert has failed to demonstrate the credibility of his own argument.
We could walk away right now and feel no threat—for no threat has been established. Reppert never mentions much less addresses or considers those naturalists who disagree with the only two people that Reppert cites. And since most metaphysical naturalists disagree with (at least Paul) Churchland on this one point, it is really rather silly of Reppert to claim that metaphysical naturalists have no answer here. And yet he must not only show that (which he has not done), but that all metaphysical naturalists cannot have an answer (which, again, he doesn’t do). But I suppose Reppert intended his quotations of the Churchlands to “represent” an attack on those metaphysical naturalists, even though if that is what he intended he was obligated to discuss at least some of the responses to the Churchlands that must be in print, yet he never even mentions any.
Thus, with the AfI and AfT, Reppert really has only attempted two arguments so far against eliminative materialism, not naturalism. And even then, in both cases Reppert’s argument consists of little more than begging the question against them: “they” say X does not exist (= Reppert’s basic premise in each argument), “I” say it does (= Reppert’s second premise in each argument), therefore “they” are wrong. I say “little more” (rather than not at all) because Reppert does not merely assert the second premise, but does try to provide a couple of paragraphs supporting his contention. But his arguments are hardly extensive enough to persuade anyone who might disagree, least of all the eliminative materialists who, far from writing a few paragraphs, have composed entire books arguing their case. And as for the rest of us, who aren’t eliminative materialists, it is all side show. Since Reppert only ever addresses them, neither his AfI nor his AfT concerns us.
But I can go one better still. Not only has Reppert not presented any actual AfT against any form of non-eliminative materialism, but my metaphysical naturalism presents a refutation of his Basic AfT Premise, and therefore settles the matter in our favor, not his. And I believe Reppert was aware of my view before he completed his book. As I described above (per my analysis of Propositions 2 and 3), “truth” is the degree to which the pattern of a virtual model computed by a brain corresponds to the pattern of an actual system in the real world. Of course, per the issue of intentionality, not just “any” system, but the particular system we have chosen to assign our virtual model to (as a hypothesized description of it). The real system we reference by accessing the relevant data (looking and pointing) and/or mutual communicated agreement (communicating to each other the fact that we are discussing the same thing—and we can, if necessary, explain how we are to know that we are discussing the same thing and what conditions would have to obtain for us to conclude that we weren’t discussing the same thing). But once we have resolved the Problem of Intentionality, the Problem of Truth is easily solved.
If “truth” is the degree to which the pattern of a virtual model computed by a brain corresponds to the pattern of an actual system in the real world, and true “knowledge” is the possession of a belief (“having confidence”) that such a correspondence exists when in fact it does, then “truth” can obviously exist in a purely physical world-system. For that correspondence, on which the reality of truth depends, is a physical fact, as is the confidence, on which the reality of knowledge depends. Reppert can only argue against this by (1) presenting physical evidence against the factual elements of my definition (e.g. that the brain computes virtual models, etc.) or by (2) somehow proving that truth cannot mean what I say it means.
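The definition of truth used here, a degree of correspondence between a virtual model and an actual system, can be sketched as a feature-by-feature comparison of two patterns. This is an illustrative toy only (all names and values hypothetical); the point is simply that the correspondence score is a plain physical fact about two patterns:

```python
# Truth as degree of correspondence: compare a brain's virtual model of a
# system against the actual system's pattern, feature by feature.
# (All names and values hypothetical.)

def degree_of_truth(model, world):
    """Fraction of the model's asserted features matched by the world."""
    if not model:
        return 1.0  # an empty model asserts nothing false
    matches = sum(1 for key, value in model.items() if world.get(key) == value)
    return matches / len(model)

world = {"shape": "sphere", "color": "blue", "orbits": "sun"}
model = {"shape": "sphere", "color": "blue", "orbits": "earth"}
print(degree_of_truth(model, world))  # 2 of 3 asserted features correspond
```

Nothing nonphysical is needed for this score to exist: both patterns, and the comparison between them, are physical facts.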
Option (2) seems like a fool’s errand, since my definition is coherent and explains all relevant uses of the word “truth,” and how words are defined is a human invention, and thus humans can simply change the definition of truth to be mine, if in fact any of Reppert’s arguments showed they must do so in order to rescue their preferred realist ontology of truth. So that leaves option (1), which is the sort of argument Reppert never attempts. He does not seem interested in the actual details of the leading findings of neurophysiology or cognitive science. Indeed, just about the only times he even mentions such things it is in the quotations of persons who actually agree with all of my facts, the very facts Reppert would need to refute—such as the Churchlands, who lay out the evidence extensively in their books (some of which Reppert even quotes): A Neurocomputational Perspective: The Nature of Mind and the Structure of Science (1989), The Engine of Reason, The Seat of the Soul: A Philosophical Journey into the Brain (1995), Neurophilosophy: Toward a Unified Science of the Mind-Brain (1986), and The Computational Brain (1992). Though I do not agree with the Churchlands on every single point, especially in their interpretations of certain facts, I pretty much agree that they have the actual facts right. And my view is based on those facts.
One example of an interpretation of the facts that the Churchlands (as Reppert represents them) get wrong is precisely what Reppert rests his own case on, which is the view that the interests of survival-advantage would place the interests of truth-finding in the backseat. Such a view is true only so far—it quickly becomes untrue the moment truth-finding itself is hit upon as a survival advantage, in just the same way wings or peacock feathers are hit upon for the very same end. The issue of evolved truth-finding mechanisms is something I will take up in more detail later. For now, observe what Reppert says:
One can pursue effective manipulation of the world, or reproductive fitness, or fame, or glory, or sainthood, but these goals are distinct from the purely epistemic goal of gathering as many (or as high of quality) truths as possible. (77)
Distinct, yes. Unconnected, no. What Reppert seems to miss is that every single one of these goals can be better achieved—faster, more thoroughly, and more efficiently—if one has on hand a generic truth-finding tool to aid him. Therefore, such a tool would definitely confer a survival advantage on anyone who had it—in fact, it would confer an advantage for the achievement of almost every conceivable goal. Imagine two animals: one that has a tool that helps him in almost every conceivable endeavor, and one that lacks such a tool. Who has the advantage? That’s a no-brainer (joke intended).
So there can be no doubt that a truth-finding engine is valuable and, if hit upon, would not likely fail to survive and develop. Thus, when Reppert, like Plantinga, supposes that an animal can survive just as well without Reason, he supposes wrong. Yes, an animal can do well without Reason. But they cannot do anywhere near as well as animals who have Reason. That is why humans are the only species in the history of this planet who have been able to sustain themselves at a population many times beyond the natural ecological capacity of their environment, and who have the means to escape nearly every catastrophe that has driven other species extinct (we now have the ability, even if not yet the will, to avoid destruction at the hands of diseases and asteroids, for example—and have long had the ability to escape primitive catastrophes such as fire, flood, famine, and plague).
Thus, Patricia Churchland is right that, as Reppert quotes her, “boiled down to essentials, a nervous system enables the organism to succeed in…feeding, fleeing, fighting, and reproducing” but she is wrong when she (according to Reppert) concludes from this fact that “truth, whatever that is, takes the hindmost” place in importance (76-77). For this is missing the forest for the trees: a truth-finding organ aids “feeding, fleeing, fighting, and reproducing” in a way no other advancement or even array of advancements can ever come close to matching. Thus, it cannot be said that truth takes the hindmost. That is only true for animals who have not developed a truth-finding organ. For animals who have developed such an organ, truth is as vital as the opposable thumb. For it is precisely by being able to construct virtual models of the world and “play them out” in our brains that we can come to understand and predict that world, and make use of that understanding and prediction to benefit, for example, our quest for “differential reproductive success.” But this advantage is only gained if we are able to construct, more often than chance, models that match the real world or that approach such a correspondence. And that is what “truth” physically is. Indeed, as I explain in the latter half of this critique, that is pretty much the very same thing the Churchlands argue, contrary to Reppert’s mischaracterization (cf. Giving the Churchlands a Fairer Shake).
I must also add before continuing that Churchland’s comment, taken by itself (as she probably did not intend it to be), also ignores the fact that it is not all about genetics. It is no longer the case that the “differential reproductive success” of human genomes drives advances in science or reason, or even that it matters much at all anymore. Genetic evolution accounts for the development of a crude truth-finding organ in the human brain. But Reason, as Reppert intends the term, encompasses the formal rules and procedures of various logics, including the logic of the scientific method. And these have not arisen from genetics. They do describe and refine computational processes actually employed by the human brain, but they greatly improve the accuracy and greatly reduce the errors of that mechanism, by restructuring the brain memetically, i.e. through environmental learning, not natural selection.
Most of what humans “are” is not genetic but memetic—for example, consciousness (meaning our actual identity, not the capacity to develop one) is constructed, and is therefore an assembly of memes, not an assembly of genes. Genes do define and limit and create tendencies in how the brain can respond to and assimilate memes, but the mind itself, our “identities” as persons, is largely a memetic system. We are made, not born. It is thus the case now that the “differential reproductive success” of memetic systems (ideologies, ideas, techniques, technologies, etc.) matters far more than that of genetic systems, and any account of the role of evolution in the development of human reason must address the role of the memetic ecology even more than the biological. This is a fact fatally overlooked by Plantinga, for example. It is also missed by Reppert, who should have known better than to conflate the natural reason of the human animal with the formal reason established by the Greeks and perfected by subsequent inheriting cultures. The one is the function of a biological organ. The other is a technology. The distinction is an important one, as we shall see.
3. The Argument from Mental Causation (AfMC)
The Basic AfMC Premise is “If naturalism is true, then no event can cause another event in virtue of its propositional content” (78). How Reppert gets to this premise takes a little explaining. Here he only discusses deductive reasoning, which is strange since inductive reasoning is more important (e.g. inductive reasoning could explain the “discovery” of deductive reasoning through experience). But we will follow Reppert and assume that deductive reasoning is the important issue here. Appealing to standard Aristotelian syllogistic logic, Reppert says that for “rational inference” to be “possible” then we must come to believe a syllogism’s conclusion is true by “being in the state of entertaining and accepting” the major premise and the minor premise, and this state of being must somehow “cause” us to entertain and accept the conclusion. The key move here is that for this to be true, mental events must cause other mental events “in virtue of the propositional content of those events.” That much we agree on.
Where we disagree is when Reppert declares that it “might…be the case that the propositional content of these brain states is irrelevant to the way in which they [causally] succeed one another in the brain” (78). Of course, yet again, saying “might” destroys his own argument. If its central premise only “might” be true, then its conclusion only “might” be true, and that is a useless result, since we want to know whether the conclusion is true. Yet even his tentative approach is undermined by the fact that on my theory his central premise is false, and therefore he is wrong even to claim that it might be true for all naturalisms, as he must claim in order for his AfR to say much of interest to naturalists.
Reppert notes that “if all causation is physical” then “it might be asked how the content of a mental state could possibly be relevant to what causes what in the world” (79). True, it might be asked. But that doesn’t mean there isn’t an answer. And it doesn’t seem that Reppert bothered looking for any answers. He discusses only one, the “anomalous monism” of Donald Davidson, which belongs to the category of “nonreductive materialism.” But what about reductive materialism? What about solutions that do not involve anomalous monism? In other words, what about all other metaphysical naturalists? And, indeed, what about cognitive scientists? A great deal of work has been done on brain-state causation, especially in regard to the production, recognition, and recording of the very pattern-data that (we shall see) figure so centrally here. Until Reppert addresses all this, his AfMC is at best unfinished and unproven.
Worse, on my naturalism, it is false (see my discussion of Propositions 3 and 6). On my theory, every meaningful proposition is the content of a virtual model or the content of an output of such a model. With regard to the question of possible but not actual propositions, I qualify my definition to say that actual propositions obtain from actual models, while potential propositions obtain from potential models. For I do not believe propositions “exist” anywhere besides human minds or as the products thereof. Yet there is an infinity of “potential” propositions that are not now and probably never will be actualized, just as there is an infinity of “constellations” that exist in the terrestrial star field. Even though we will never name all possible shapes that could be found by connecting the dots in the sky, nevertheless those shapes really are there, since they physically exist as a direct consequence of the physical existence and physical arrangement of the stars. There doesn’t have to be some nonphysical Platonic “realm” where those shapes all exist. The physical facts alone are sufficient to establish the existence of that infinity of shapes—whether any mind notices them or not.
This relates to the issue of so-called Abstract Entities. Reppert rightly notes that “if physics is a closed system, then it seems impossible for abstract entities, even if they exist, to make any difference in how beliefs are caused” (54). I agree. But I follow Aristotle: I do not believe there are any such things as abstract “entities,” only “abstractions,” which are essentially just human labels for repeating or repeatable patterns in sensory or conceptual experience (a concept I develop in my book Sense and Goodness without God: A Defense of Metaphysical Naturalism). For example, we can say “triangularity” exists because of the physical fact that a shape with three sides is physically possible and physically manifest in many places, and the pattern of arrangement in question (the having of three sides) is the same in every one of those cases. All that is required for that to be true is the existence of places and sides, which are physical facts, not Platonic or supernatural ones. And the human brain (in fact, the brain of every higher animal) is physically wired to “detect” repeating patterns like that, and remember them, so it can detect where the pattern repeats itself, and take advantage of that special information. Humans, of course, can assign codewords for those repeating (or repeatable) patterns that their brains detect, and those words are called “abstracts.” Reifying them, as Plato did, is a fallacy. But at the same time, my view is not antirealist, either, since I believe abstract words really do refer to real things: repeatable patterns. Whether these physical patterns are repeated in the physical universe outside our minds or only in our minds does not matter, since on my view our minds are a part of the physical universe.
With that background in place, my answer to the question Reppert poses is that the content of a mental state is literally and physically the content of a virtual model computation, which in turn produces a computational output that physically causes a subsequent computation to produce a certain output by providing the physical input for it, and so on. This much should be clear already. I can illuminate my position further by explaining how I disagree with Davidson (or at least Reppert’s characterization—which I am inclined not to trust, given his mistreatment of the Churchlands):
(1) Reppert says Davidson holds that “mental states can have contents that do not correspond to anything in the material world (e.g. false beliefs)” (79). I suspect Davidson had in mind fictions here, not false beliefs, but I will address both. First, mental states are in the material world, so a mental state that has content that corresponds to another mental state corresponds to something in the material world. Unless we are talking about something literally just invented out of the blue by one single brain, it is probable that any mental state corresponds to at least one other. This is the case in fiction, for example: the lifespan of Yoda is a fictional characteristic of a fictional person. Though there is no Yoda and thus no Yoda lifespan, there is a fiction of a Yoda and that fiction has been related to a Yoda lifespan. The fiction was invented by one or more persons at some point, but since then has become a part of millions of minds and remains recorded in documents and physical records of all varieties. Thus there is a fact of the matter as to whether Yoda lived to around 800 years of age. It is not that there really is a Yoda who lived 800 years, but that there is a fiction of a Yoda who lived 800 years, and there is an accepted authority on the matter (those who invented and/or control the copyright to the character named “Yoda,” and/or official “guidebooks” to the Star Wars universe, which were most likely written and/or approved by the same). To disagree about whether Yoda lived so long is a debate over whose authority to follow in establishing the “official” Yoda lore. It is not a debate over the actual age of any real person.
This is the sort of thing philosophers usually have in mind when they discuss intentionality (as the context provided by Reppert suggests Davidson was doing). What is a thought about Yoda really about if there is no Yoda? Well, it is about a physically existing lore about a fictional character. And the existence of a fiction of a character is itself a physical reality. The intention of a Yoda proposition can and often does encompass certain assumptions or choices regarding which physically-existing authority “counts” as far as establishing what should be accepted as “true” about Yoda. For example, it is a physical fact that there are pornographic sex-stories involving Yoda—but one can still claim Yoda never had sex by appealing to an accepted authority who is mutually accepted as having the right to establish what shall be true about the fiction of Yoda. And so on. In every case, even with fiction, there is something that physically exists that thoughts of Yoda are “about,” and per my analysis of intention, we choose the connection, and in so doing, physically create such a connection in our brain.
On the other hand, the question of false propositions is another matter altogether, since those are not fictions, but actual claims to reality. I already analyzed earlier how intention relates (or rather, does not relate) to the truth or falsehood of a proposition. In either case, a proposition is still about what we declare it to be about, regardless of whether it states something true or false about that thing, whatever it is. So it is not correct that false propositions “do not correspond to anything in the material world.” It is correct that the entire content of such propositions does not so correspond, but that is not what is referred to by intention. What is referred to by “intention” is whether a proposition is hypothesized about a particular set of data or a particular system in the real world (which is hypothesized from that data). And that can be true even if the hypothesis itself is false. And even when the intentional object is a hypothesized reality that in fact doesn’t exist, the hypothesized reality does exist as a model of the world in at least one person’s brain, and therefore physically exists, albeit in a different form than the agent believes. Thus, no matter how you cut the cake, propositions are always about something that physically exists.
(2) Reppert says Davidson holds that “mental states” that possess intentionality “cannot be fitted into lawlike statements and therefore cannot be predicted or explained by causal laws” (79). I am not sure what Reppert expects here—many physical phenomena are too complex to be predictable or neatly defined by a fixed set of “lawlike statements,” but that does not mean they cannot be explained by causal laws or events. On my view, a mental state cannot have propositional content without the physical presence in the brain of a model of the very pattern that proposition defines. Without that physical pattern in the computer of the brain, the brain would never produce any corresponding thought. Conversely, the fact that that physical pattern, rather than some other pattern, is in the computer of the brain, is precisely what causes the brain to run its computation in one direction rather than another. Since the pattern must physically exist for the thought (the “proposition”) to exist, and since a physical pattern of brain activity obviously has a causal effect on the future course of that brain’s physical activity—an effect that differs from that of another pattern in precisely the respect that matters here—it simply makes no sense to claim that naturalism entails “no event can cause another event in virtue of its propositional content.” Even on my thoroughly physicalist view (on which I will have more to say below), that is the only way brain events (in the conscious sphere at least) can causally operate at all. So Reppert’s Basic AfMC Premise is false, and hence so is the AfMC. [For more discussion, see in the secondary half of this critique: “Propositional Content” and “Zombies“]
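The causal picture just described can be illustrated with a toy computational sketch. This is my own illustration, not anything from Reppert’s book or any particular cognitive model: a store of physically encoded propositions plus a single inference rule, where which conclusions get produced is caused entirely by which propositional patterns are physically present in the store.

```python
# A minimal sketch of content-driven causation (hypothetical illustration):
# a "brain" is modeled as a store of encoded propositions plus one rule of
# inference (modus ponens). The next state of the machine is caused by
# precisely WHICH patterns are physically present in the store.

def forward_chain(facts, rules):
    """Apply modus ponens until no new facts appear.

    facts: set of atomic propositions (strings)
    rules: list of (antecedent, consequent) pairs, i.e. "if A then B"
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            # The physical presence of the antecedent pattern is what
            # causes the consequent pattern to be written next.
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

facts = {"Socrates is a man"}
rules = [("Socrates is a man", "Socrates is mortal")]
print(forward_chain(facts, rules))
```

Swap out the stored pattern and the machine’s causal trajectory changes accordingly, which is just what it means for one event to cause another “in virtue of its propositional content.”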
4. The Argument from the Psychological Relevance of Logical Laws (AfPR)
The Basic AfPR Premise is “If naturalism is true, then logical laws do not exist or are irrelevant to the formation of beliefs” (82). As I have already explained (see my discussion of Propositions 4, 5, and 7), “logical laws” are propositions that describe truth-finding procedures, and the procedure by which we form beliefs is certainly not irrelevant (or nonphysical for that matter). Truth-finding procedures are obviously to be preferred to others, just as I noted in my discussion of the Causation Fallacy earlier. And clearly they must be carried through physically by any computer, whether a human brain or not. So, just as “rules of farming corn” exist, in the sense that there are procedures that result in successful corn farming and those procedures can be discovered and described, so also “rules of reason” exist, in the same sense that there are procedures that result in successful reasoning (i.e. reasoning that leads to true conclusions—or in the case of induction, reasoning that leads more often than not to true conclusions), and those procedures can in turn be discovered and described. What else needs to be explained?
There is no intelligible sense in which there can’t be truth-finding procedures in a purely physical universe, yet that is what Reppert must show—he doesn’t. Likewise, there is no reason to believe brains of a certain complexity can’t discover those procedures (memetically, by learning, or genetically, by natural selection), yet that is what Reppert must also show—again, he doesn’t. So Reppert does not establish the Basic AfPR Premise. Worse, on my naturalism, it is outright false. Therefore, the AfPR is false. Of course, one might try to argue it is improbable that, say, natural selection would have led to a brain developing such truth-finding procedures (or the ability to discover them through exploration of its environment), but we have already seen such an argument is implausible, and at any rate that would no longer be an AfR but an Argument from Design (AfD), and Reppert’s book is not about the latter, but the former. Still, Reppert returns to the subject anyway in his last AfR, so I will have more to say about it then (see AfRF below).
Reppert does not seem to be aware of any of the computational theories of reasoning developed by cognitive scientists and followed up by naturalist philosophers—which is strange considering his fondness for quoting the Churchlands, among the most popular advocates of just such a thing. Thus, when he says logical laws “are not physical laws” (81) he is really missing the boat, betraying the fact that he hasn’t done his homework. For logical laws are just like physical laws, because physical laws describe the way the universe works, and logical laws describe the way reason works—or, to avoid the appearance of begging the question, logical laws describe the way a truth-finding machine works, in the very same way the laws of aerodynamics describe the way a flying machine works, or the laws of ballistics describe the way guns shoot their targets. The only difference between logical laws and physical laws is the fact that physical laws describe physics and logical laws describe logic. But that is a difference both trivial and obvious.
Reppert thinks there are other strange differences, but he doesn’t think them through. For example, he says logical laws are unlike physical laws in that the former “pertain across possible worlds, including worlds with no physical objects whatsoever” (81; cf. 94). But that is again just the same trivial difference, that physical laws, not logical laws, describe physics. The reason logical laws pertain to “possible worlds” is the very fact that the sphere of possibility entailed by such a phrase is the sphere of imagination capable of being explored by the virtual model computations of our brain (the assembly and reassembly of the elements of experience). For a world (or system or object within a world) to be “possible” is literally to be capable of simulation in our brain (or in any adjunct to our brain that extends its mental power—like a computer, hypothetical or real). Since reason is a function performed on computed data, especially but not only that category of computed data defined by virtual models, obviously the rules of reason will by definition describe what is applicable to everything that can potentially be computed. It is no accident that Gödel’s Theorem connected with work in computation: he proved that the realm of the possible excluded certain things precisely because those things could never be computed. The limits of computation literally are the limits of logic, because computation itself literally is logic (or “a” logic—as there are many types of computation, there are many logics).
Missing all this, Reppert goes on to declare that “if one accepts the laws of logic, as one must if one claims to have rationally inferred one belief from another” (emphasis added) “then one must accept some nonphysical, nonspatial and nontemporal reality” rather like Plato suggested (81). Note how close Reppert is to getting it—but just when you think he has it, he puts the cart before the horse, then observes that the whole caboose won’t go, and from that concludes it can’t go, without some wizard to cast a spell to levitate the cart so it can drag the horse along where it needs to go. If you think that is a silly way to respond to an inverted horse-and-cart, then you will agree Reppert’s approach to logical laws is silly, too. The reason one must accept the laws of logic to rationally infer anything is the very same reason one must accept the laws of aerodynamics to fly. Surely Reppert would not conclude that we need some sort of supernatural powers and beings to explain why we need to follow the laws of aerodynamics to fly. The reason we need them is that it is physically impossible to fly any other way, and the only way flight is physically possible is exactly the way described by those laws. All you need for that to be true is a physical universe that is a certain way. Of Plato’s hypothesis we have no need.
Yet one could just as easily say that the laws of aerodynamics are “nonphysical, nonspatial and nontemporal.” Or rather, one can just as wrongly say so. For the laws of aerodynamics do not exist but for the physical objects and relations that physically interact as they do. Thus, there is no sense in which the behavior of physical objects, or their existence, or their interaction or relation, is “nonphysical.” The same goes for space and time: without space and time, there would be no aerodynamics, and yet we can use the laws of aerodynamics to describe possible flying machines that never have and never will exist. For the laws of aerodynamics apply to (they describe) all physical worlds that are relevantly similar (i.e. that have the same physical attributes that entail, and are entailed by, those laws), including worlds that have flying machines in them that do not exist in this (we suppose actual) world. The same is true of logic, though even fewer physical attributes are necessary for its laws (e.g. the mere physical fact of the existence of distinctions and causal and ontological consistency is sufficient to establish the law of non-contradiction—see below, as well as my later discussion of the Ontology of Logic).
Now, it is true that we can compute simulated worlds that lack space, time, and physical objects. But the laws of logic concern only what computation can and cannot do, regardless of how computation is manifested or brought about. Maybe it is possible for computation itself to exist without space, time, and physical objects—I doubt it, but I don’t trouble myself with trying to answer such a question, because it is of no relevance to me, who literally am a physical computer in space-time. Even if computers can exist in other nonphysical worlds, that does not entail that computers can’t exist in purely physical worlds. And the question at issue is solely whether computation can exist in a purely physical world. Thus, when I say computation only exists in this world as a consequence of complex physical machines functioning in space-time, I am not saying this is the only way it can be done, but simply that this is the only way we have ever seen it done, which means it is probably the only way it can be done (at least “around here”).
It is here that we must avoid the Causation Fallacy I described earlier. Just because computation can be, and in our world actually is, carried out by physical machines operating in space-time, it does not follow that such computers cannot perform truth-finding operations, or that there are no particular procedures of computation that are more truth-finding than others, or that there are none that are most capable of such an output. To the contrary, the very existence of computers is precisely what makes it possible for them to perform truth-finding procedures rather than others, and it is precisely because of this that it is possible to describe such procedures propositionally. And that is what “logical laws” are. There is thus no need of anything nonphysical or outside space and time. Computers, whose truth-finding operations logical laws describe, are physical and exist in space and time. Yet the existence of computers that perform truth-finding operations is all that is necessary for logical laws to exist. Exactly the same as for the laws of aerodynamics.
As to how we can come to discover the laws of logic, that is no different than how we came to discover the laws of aerodynamics, or the laws of ballistics. There is no mystery at all in any of these cases. The latter is a particularly close analogy: our brains naturally evolved an intuitive albeit imperfect understanding of the laws of ballistics. When we throw or catch a ball, our brains compute its trajectory with amazing accuracy. Our brains can also improve this computational ability through experience, by trial and error. But even before we do that, we are already born with a ballistics computer in our head (as are many other animals). It isn’t a superb computer—but that’s because it developed blindly by natural selection, and not by design. It needs perfecting, and each individual can perfect it through experience. But all this goes on without any conscious awareness of what the computer is doing.
Then we came to discover how to precisely describe the operation of such a computer, when we discovered the laws of ballistics themselves (by observing and testing and so on), and defined them (using languages, like mathematics, specially adapted to such purposes). Later, with this knowledge in hand, we were able to build nearly flawless ballistics computers superior to our own. But even without that technology we could use our own general-purpose computer—the cerebral cortex—to run a nearly flawless ballistics computation by running a program called “mathematics.” Our cerebral cortex wasn’t “built” for that, so it is very inefficient when used that way—it takes a lot of time and can make mistakes every now and then—but it can still be used that way, because the computational architecture required is exactly the same as that which our brains developed to communicate, and to reason their way out of an infinite variety of problems, and into an infinite variety of advantageous solutions.
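The ballistics analogy can be made concrete with a small sketch of my own (not an example from the book): the trajectory our built-in “ballistics computer” estimates when we throw a ball can be computed explicitly by running the program called “mathematics,” here using the idealized no-drag range formula.

```python
import math

# Hypothetical illustration: the same trajectory our intuitive "ballistics
# computer" estimates when throwing a ball, computed explicitly via the
# idealized no-drag range formula R = v^2 * sin(2*theta) / g.

def projectile_range(speed_m_s, angle_deg, g=9.81):
    """Horizontal range of a projectile launched over flat ground."""
    theta = math.radians(angle_deg)
    return speed_m_s ** 2 * math.sin(2 * theta) / g

# A 45-degree launch maximizes range for a given speed.
print(projectile_range(20.0, 45.0))  # ~40.77 m
```

The cerebral cortex, running the “mathematics” program, arrives at the same answer the innate throwing-and-catching computer only approximates—slowly and laboriously, but with far greater precision.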
So simple the solution is. It is perhaps not surprising, then, that the only form of naturalism Reppert attacks on this point is that which adopts logical antirealism. We are left in the dark whether the AfPR even applies at all to logical realists in the naturalist community. So yet again, Reppert claims his argument applies to all kinds of naturalism, when in fact he only gives evidence against one variety—and a relatively unpopular one at that. He never says a word about any of the naturalists who advocate logical realism—like, say, me. And he can’t claim to have not known I was a logical realist—I’ve made my views quite clear to him in correspondence. But never mind that. For he never even names a single soul. Or rather, he names one, but curiously omits to mention how that famous fellow solved the problem.
That man is Aristotle. Reppert cites Metaphysics 4.4 (= 1005b-1006d) against the plausibility of logical antirealism (81). But it is strange that Reppert didn’t do what he ought to have done, and present Aristotle’s explanation of logical laws, for Reppert at the very least must refute that before he can claim physicalism is defeated by the AfPR—since Aristotle’s solution posits nothing not already accepted by all physicalists. But that would only be the tip of the iceberg. There are dozens of modern naturalists who defend some form of logical realism, and Reppert must refute them all to establish the Basic AfPR Premise. But as I said, he doesn’t even name them, much less refute them.
Here is what Aristotle has to say about the fundamental principles of logic (those from which all other principles derive), especially the Law of Non-Contradiction:
The starting-point for all such discussions is not the claim that one should state that something is or is not so (because this might be supposed to be a begging of the question), but that he should say [i.e. be able to say] something significant both to himself and to another (this is essential if any argument is to follow; for otherwise such a person cannot reason either with himself or with another); and if this is granted, demonstration will be possible, for there will be something already defined. (Metaphysics 1006a)
He goes on to explain that words have definite meanings assigned by human convention, and for that very reason words cannot also mean what they by definition deny (ibid. 1006a-1007a). Thus, for Aristotle, logical laws derive necessarily and automatically from the existence of communication (defining terms and reasoning with others) and computation (reasoning with oneself). The moment you have those, in any possible universe, you will always have logical laws. It can never be any other way. This is exactly what I argue above and elsewhere. And since one does not need anything more than physics to have communication and computation, you do not need anything more to have logical laws.
Extending the point to physical reality, Aristotle argues:
Again, if all contradictory predications of the same subject at the same time are true, clearly all things will be one. For if it is equally possible either to affirm or deny anything of anything, the same thing will be a trireme and a wall and a man, which is what necessarily follows for those who hold the theory of Protagoras. For if anyone thinks that a man is not a trireme, he is clearly not a trireme [i.e. in their conception], but he also is a trireme if the contradictory statement is true. So the result is the dictum of Anaxagoras, “all things are mixed together,” so that nothing truly exists. (Metaphysics 1007b)
Aristotle goes on to explain that a posit asserts that something exists, while a negation asserts that it does not, so that to assert both is to declare, literally, nothing (ibid. 1007b-1008a). That is, a self-contradiction communicates nothing, and represents nothing even in the mind of one who wishes to declare it. Thus, it cannot correspond to anything real except the null set. This follows as a consequence of language alone. But as Aristotle observes, this also follows physically: for if everything is true, then there is only one thing—in other words, there is no difference between one object and another. In fact, there are no objects (exactly the consequence Aristotle draws from the theory of Protagoras). For to be an object requires distinction from something else, and distinction literally is a non-contradiction. That is, a distinction is the literal and physical opposite of a contradiction. It therefore follows that in any universe where distinct objects and properties exist, no self-contradictory propositions will be true for that universe (see ibid. 1008b for an example of Aristotle’s extension of the point to practical physical behavior).
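Aristotle’s point that true contradictions would make “all things one” has a familiar formal face: in classical logic a contradiction entails everything (the principle of “explosion”), so no claim remains distinguished from any other. A brute-force truth-table check, my own illustration rather than anything in Aristotle’s text, demonstrates this:

```python
from itertools import product

# Hypothetical illustration of "explosion": in classical logic a
# contradiction entails any proposition whatsoever, which is the formal
# analogue of Aristotle's point that if contradictions were true,
# "all things will be one."

def entails(premises, conclusion, atoms):
    """Check semantic entailment by brute-force truth tables.

    premises, conclusion: functions from a dict of atom truth-values to bool
    atoms: the atomic proposition names involved
    """
    for values in product([True, False], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

P = lambda env: env["P"]
not_P = lambda env: not env["P"]
Q = lambda env: env["Q"]

# From "P" and "not P", any unrelated Q follows -- vacuously, since no
# row of the truth table makes both premises true.
print(entails([P, not_P], Q, ["P", "Q"]))  # True: explosion
print(entails([P], Q, ["P", "Q"]))         # False: P alone says nothing about Q
```

Once both P and not-P are admitted, every distinction among claims collapses, just as every distinction among objects would.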
This is also the case (for essentially the same reason) for the observation that in any universe where three spatial dimensions exist without curvature, no parallel lines will ever meet. The axiom (in this case the Euclidean axiom of parallels) describes a physical fact of the universe proposed. It is not some Platonic form or some “law” beyond space and time—it is a physical property of that universe, such that it would be impossible for that universe ever to exist and for the axiom of parallels not to be true (as part of a description of that universe). No mysterious logical relation is needed for this to be true: just the physical facts themselves. So also for the law of non-contradiction—that “law” is just like any other physical law: it describes any universe that contains distinctions (and causal and ontological consistency), which is (I suspect) every universe except the null set, since we cannot conceive of (compute) any other universe without positing at least one consistent distinction. This, again, is not because of some mysterious logical superlaw, but because the physical facts of any universe we care to compute are just so.
In short, the laws of logic literally are the (potential or actual) physical properties of every (potential or actual) universe that they describe. If there is any possible universe that the laws of logic do not physically describe, then, obviously, logic would not apply to it. So far man has not been able to imagine (simulate) any such universe, except a nullverse. True, this could be because of some limitation inherent in all computation (a la Gödel). But that would be irrelevant really, since we obviously don’t live in such a universe, and the discovery of such a possibility would be no more devastating to logic than the discovery of non-Euclidean geometry was to geometry. After all, in our physical universe, parallel lines can meet. Yet geometry is not overthrown.
If only Reppert had read past the short little section of Aristotle that he apparently glanced at. Had he done so, he would have realized there is a powerful naturalist, realist theory of logic that he doesn’t even mention, much less rebut. And lest he think Aristotle too vague and unclear, his arguments have been much refined and advanced upon by numerous philosophers since. Reppert need only do the research required of him to find it. Until he does so, and actually refutes it all, he cannot claim the AfPR has any merit against all forms of naturalism. And if he wants to be lazy, he can just stick with mine: that logical laws are simply propositions that describe truth-finding computational procedures.
As Reppert himself says, “Part of what it means to say anything is to imply that the contradictory is false” (82). Indeed, that is too wishy-washy: the fact is that what it means to say anything is literally at the same time to say (not imply, but assert) that the contradictory is false. Reppert sees this. So why doesn’t he connect the dots? If, as even he admits, “language simply does not function” (82) without following logic, isn’t it then obvious where logic comes from? Why do we need something more than language, if language already comes with logic included? And though he worries that if logic, like language, is merely a “convention,” then we could have different conventions, he misses the obvious: that logic, like language, has a specific goal—to get at the truth. Thus, we can’t adopt “just any” conventions for logic—we must discover those conventions that best get at the truth, just as we must discover the best ways to grow corn or hit targets with guns. By the same token, the aim of language is communication. And though any language is possible, all languages must follow the same basic rules for communication to be possible—and that is simply a physical fact of the universe. Nothing else need be the case for that to be so. And since reason is essentially a form of computation by communicating (speaking) with oneself, and has the goal of truth-finding, and those procedures that are most truth-finding are most desirable for obtaining that goal, we need look no further for where the procedures of reasoning come from, or what causal role they play in our brain’s physical computations.
5. The Argument from the Unity of Consciousness in Rational Inference (AfUC)
The Basic AfUC Premise is “If naturalism is true, then there is no single metaphysically unified entity that accepts the premises, perceives the logical connection between them and draws the conclusion” (84). Now, this isn’t really an AfR. As I will show, it is actually just a disguised Argument from Consciousness (AfC), and Reppert does not present any of the evidence or argument or interaction with leading naturalist philosophers and cognitive scientists that would be required to carry off an AfC. As I noted earlier, unified consciousness is not essential to any truth-finding computation (see my discussion of Proposition 8). In other words, it is consciousness itself that Reppert presents as a problem here, not the carrying out of truth-finding procedures, and nothing about this problem is unique to reason or derives from it: it attaches to reason no more than to every other form of perception and conscious thought. Still, I will address this issue anyway, to correct many of Reppert’s misunderstandings—or, maybe, his ignorance of what is, after all, standard introductory textbook stuff as far as cognitive science goes. But note that much further detail (and qualification) will be provided in the secondary half of this critique (cf. Theory of Mind).
Reppert complains that “if physicalism is true, then each” moment of awareness in a process of reasoning “is a different brain process,” so he asks “what ties them all together in an inference?” (83). Well, for starters, what ties together the synchronization of visual and auditory phenomena? We know how to fool the brain systems responsible for this, and can identify exactly where in the brain synchronic stitching of disparate sensory phenomena is controlled. We have also identified synchronizing rhythms and brain systems responsible for generating coherence across the whole spectrum of conscious experience, and can greatly interfere with the experience of consciousness by interfering with these systems or the synchronizing electric signals they generate and regulate. A great deal more could be said about this subject, and one can get a decent background in it by reading any introductory college textbook on neurophysiology or cognitive science. So why does Reppert act like he has never heard of any of this work? Can it really be that Reppert is willing to make global assertions about brain function without having read anything on actual contemporary brain science? I certainly hope not. But I cannot otherwise account for his failure to address the neurophysical systems responsible for synchronic brain function, or any of the other facts we know that pertain directly to this and many other issues Reppert raises.
This is not to say that the problem of synchronic stitching has been solved. There is a reason William Hasker is skeptical of our success here, but he is the only author whom Reppert cites on theories of consciousness—despite the fact that he is not a naturalist, but a Christian (author of the Christian treatise Providence, Evil and the Openness of God (2004)). If Reppert had given naturalism a fairer shake, he would see that Hasker’s analysis disguises the fact that a great deal of progress has been made toward understanding synchrony, unification, and the peculiarities of human experience. Instead, Reppert’s open question (“what ties it all together?”) betrays a willful or inexcusable ignorance of this progress or its relevance to answering that very question.
For example, when Reppert says that “it makes no sense to ‘parcel out’ a complex awareness to parts that lack a comprehensive awareness” (83) he really isn’t taking brain science seriously. For the exact opposite is true: it only makes sense to parcel out a complex function to multiple parts. Indeed, I cannot personally conceive of any other way any complex function could be performed, outside of magic. For example, imagine Reppert saying “it makes no sense to ‘parcel out’ a complex digestive system to parts that lack a comprehensive digestive capability.” That really would be a silly thing to say. Obviously a complex digestive system can only exist by doing exactly that: parceling out different functions to different organs. This in no way prevents food from being digested. By the same token, that a brain produces a function called “consciousness” by parceling out all the various needed functions to different organs (brain systems) in no way prevents consciousness from being produced. Indeed, I cannot imagine any other way such a complex function could be produced—except, again, by magic. Which is basically, of course, what Reppert wants to conclude.
But bad news for Reppert: we have empirical proof of the fact that consciousness is a post hoc product of disparate functions working in synchrony. It has long been well-known, and has been demonstrated in numerous experiments, that what we call “conscious awareness” always follows by a few hundred milliseconds every single operation involved in that awareness. For example, we have proven that your brain has already made a decision—say, to raise your hand—well before you yourself become aware of making the choice to raise your hand. Now, many use this data to argue against the existence of free will, but they jump the gun a bit, mistakenly supposing that being aware of making a choice is what defines a will. But a will is not defined by awareness, but actually willing, and thus the human will consists in the actual decisions made, regardless of when we become aware of them (and regardless of the fact that this awareness, like any other, can sometimes be mistaken). And throughout all this the “we” in question is not the “awareness” of a “we” but the actual entity (the brain and its functions) of which we are aware, since it is the actual thing detected that is really us, not the awareness of it. Thus, that we only become aware of what we have thought and done well after we have already thought and done it does not mean we didn’t think and do it. But it does mean that conscious awareness is itself a perceptual process that follows all the other processes of which it is aware (like reasoning) and therefore it is not essential to reasoning or any of those other processes.
Conscious awareness is, however, very useful, because it can provide feedback to the rest of the brain through top-down causation. Though military units (or, to draw on Reppert’s own analogy, a class full of students) can function rationally on their own, put a machine at the top that can pool all the information from all the units (or students), and send that information back down to every individual, each one extracting from it the information it needs (or is designed to handle), and you will have a far more successful team. This is all the more so when the machine at the top can not only redistribute data globally, but can itself identify patterns in that data that no single unit (or student) could see on its own (like, for example, the connection between a desire, a means, and a goal) and transmit that (entirely new) data back down as well. It is this latter function, performed by the cerebral cortex (the “machine at the top”), that is especially to credit for the things we most prize about consciousness: the ability to carry off long-term planning, to construct complex systems of ideas, to construct a coherent identity, and, most of all, to predict the thoughts and behaviors of others through the ability to model (and also, as a result, empathize with) their consciousness, which is doing all the same things. Now, this machine would be useless if not for the separate units feeding it information, and receiving its data and commands in turn.
So it is thus obvious that you cannot have consciousness without the parceling out of functions across several distinct specialized organs. An eye cannot hear, nor an ear see. And so, too, the primary visual cortex is not very adept at emulating the function of the primary auditory cortex. Human brains are flexible, and can reprogram themselves. But they function best when divided into specialized systems, each a master of its own domain. Hence we have found that entirely different parts of the brain recognize faces, assess distances, store names of objects, store names of people, compute color information, compute shape information, and so on. This is indisputable scientific fact. So there is no way Reppert can pretend that this isn’t how the brain works. It clearly does. Likewise, though the cerebral cortex is the brain system that produces consciousness, it can only produce it by stitching together and developing a series of “perception” events from the data of all the other systems (visual, auditory, etc.). Take away those systems, and you take away all the data needed to generate conscious awareness.
Hence came my answer to Reppert earlier: Yes, we already know in outline what goes into physically unifying consciousness, so this is no great problem for physicalists; but No, that doesn’t really have anything to do with reason as a truth-finding computational procedure. Just as sight and sound are both processed in different parts of the brain, yet the data is organized and “perceived” as a coherent field of awareness by the cerebral cortex (with the aid of several other pieces of the brain), so also any process of “reasoning” can be carried out by different parts of the brain, yet the data can still be organized and perceived, and thus related, by the cerebral cortex, which in turn can cause, in a top-down way, those same systems to move on to the next step of reasoning, and so on, back and forth.
So an analogy more apt than Reppert’s would be: the students all pass their test answers into a machine that identifies all the correct answers and spits out only the correct ones back down to all the students, who can then all see they aced the test, even though each one only worked on a part of it. Of course, the analogy breaks down precisely because it is using conscious agents in the place of unconscious ones. But the point is the same: just as a collection of cells can organize and cooperate into a body that can walk on two feet—even though no one of those cells can walk at all or even has legs, much less the other needed organs, like hearts and lungs—so also can a collection of brain systems organize and cooperate into a mind that can think. And it does this by producing the virtual appearance of a singularity of consciousness, just as it produces the mere appearance that unified patches of color exist—when in fact only streams of various distinct particles exist.
So Reppert’s assumption that a collection cannot produce a unified function simply does not hold up in the light of the facts of cognitive science. But his other assumption, that the brain must parcel out the steps of reasoning, is also ungrounded in any actual science. Though it is true that different parts of the brain contribute different things to any actual stream of thought, rational or otherwise, the synchronic stitching and higher perception of internal brain function that takes place in the cerebral cortex actually generates simultaneity, not sequential steps of reasoning. Though we can walk through a syllogism step by step if we want to, this isn’t really what is going on in our brain when, for example, we “see” that a conclusion follows from a major and minor premise. To the contrary, we see the connection immediately. For example, when we see a stop sign, several systems in the brain cooperate to generate a perceptual field—one system recognizes its shape, another its color, and so on, and the synchronizing signal produced by another system relates these facts to other areas of the brain at the same time (for every part of the brain is physically connected to every other), which identify that the combined pattern of these two things indicates a stop sign, and so on. But this all happens at the same time.
Logical deduction is a form of analysis. When we see a stop sign, color and shape unified, we see it all at once. We perceive not disparate elements in sequence—first a color, then a shape, etc.—but our brain physically regulates synchronization and perception of relational patterns (such as the pattern of a color spanning a space within a shape—and we know where in the visual cortex this unification of visual data occurs). We can then pick apart what we see, and examine the shape apart from the color, then the color apart from the shape, and so on. But all the while, we have it all in view: the perception of both is simultaneous. This is exactly what we do when we perform acts of formal deduction: we see a picture all at once, the picture described (and thus physically caused to be constructed and “perceived” in our brain) by the major and minor premises, and then, seeing it all at once, pick apart the relevant elements one by one. In a simple syllogism, this means only one element, though most human reasoning is more complex than that. But in the easy case, when we see that a conclusion follows from the premises, this is exactly the same thing as seeing that a stop sign is both red and octagonal and “therefore” a stop sign: everything (the two premises and the conclusion) is perceived all at once. Thus, the brain does not reason “step by step” as Reppert claims. It reasons holistically. It draws a picture and then examines, analyzes, that picture, which stays in view the whole while (as long as the brain continues to devote attentional resources to it).
Stepwise reasoning takes place on a larger scale: when we move, for example, from one observation, one syllogism, to another. The conclusion of one syllogism then becomes the premise in another. We then assemble a new picture, depending on what the other, fourth premise is, and again see all at once what follows from “running” that virtual model as instructed (whether the instructions come from ourselves—reasoning with ourselves—or others). Thus, even when stepwise reasoning is engaged, it is still not what Reppert claims—it is not the brain moving sequentially from step to step within a single syllogism, but from one syllogism to another, every one linked into the chain by another simultaneous experience.
All that Reppert can do is deny that a physical system could possibly produce the simultaneity of a coherent field of awareness definitive of consciousness (and this is what he does, by quoting a book review (!) by Goetz). But doing so would no longer have any relation to reason. For we already showed that such simultaneity is not a necessary feature of any reasoning process. It is how humans do it. But it can be done by a stupid Turing Machine, too. Rather, such an argument (from the oddness of our experience of a unified consciousness) would be an AfC, not an AfR, because such a claim would impugn all conscious perception, not just that related to reason. Otherwise, if you accept a physicalist account of conscious perception at all, then you can have no ground to object to the extension of this to reasoning. [For more discussion, see in the secondary half of this critique: Introducing the Question of Computers and Computation and Perception, and then Computation, Perception, and Propositional Content]
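The point that truth-preserving inference requires no unified consciousness at all can be illustrated with a toy example: a purely mechanical forward-chaining engine that derives valid conclusions by blind symbol manipulation. This is only an illustrative sketch of the kind of “stupid” mechanism meant above; the rule encoding and function name are my own assumptions, not anything from Reppert’s book or from cognitive science.

```python
# A minimal forward-chaining inference engine: blind, mechanical symbol
# manipulation that nevertheless reliably draws valid conclusions.
# (Illustrative sketch only; the rule format here is an assumption.)

def forward_chain(facts, rules):
    """Repeatedly apply modus ponens until no new facts are derivable.

    facts: a set of atomic propositions (strings).
    rules: a list of (antecedents, consequent) pairs, read as
           "if all antecedents hold, then the consequent holds".
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in facts and all(a in facts for a in antecedents):
                facts.add(consequent)  # "drawing the conclusion" is just a state change
                changed = True
    return facts

# "Socrates is a man; if Socrates is a man, he is mortal; therefore..."
rules = [(["Socrates is a man"], "Socrates is mortal")]
derived = forward_chain({"Socrates is a man"}, rules)
print("Socrates is mortal" in derived)  # prints: True
```

Nothing in this procedure “perceives” anything, yet it never affirms a conclusion its premises and rules do not warrant, which is all that truth-finding computation requires.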
6. The Argument from the Reliability of Our Rational Faculties (AfRF)
The Basic AfRF Premise is “If naturalism is true, then we should expect our faculties not to be reliable indicators of the nonapparent character of the world” (85). As I noted earlier, the question here is whether a purely natural universe can produce the “reliable reason” that we observe and that Reppert rightly accepts as an established fact (see my discussion of Proposition 9). He is essentially adapting Plantinga’s Evolutionary Argument Against Naturalism into the form of an AfR. I have already covered the issue in my discussions above of the AfT and the AfPR (Reppert’s arguments #2 and #4), and elsewhere. I will also revisit the issue later (see Reliabilism and We Should Attack Rocks?).
It must also be noted that Reppert doesn’t try very hard to establish the Basic AfRF Premise as true. He presents no actual evidence in its favor, just a single assertion resting on a single uncritical supposition. He does not interact with any of the literature criticizing Plantinga’s deployment of the very same argument; indeed, Reppert gives the impression that there is none, which is certainly unfair to his readers—and makes this section of his book rather useless to anyone who wants to critically examine the issue. As I pointed out already, that is a common failing in this book. Reppert ought to have done his homework here, but didn’t. Yet such background would have made his book a hundred times more useful. And if the premises of his six AfR’s are true, addressing such background would have made his book a hundred times more persuasive as well.
The single assertion on which Reppert rests the entire foundation of his AfRF is that “we could effectively go through our daily life without knowing, or needing to know, that physical reality has a molecular and an atomic structure” (85), which is true, but not relevant to the question at hand, which is, as Reppert himself puts it, “would naturalistic evolution give us mostly true beliefs, or merely just those falsehoods that are useful for survival?” (84-85). I have answered that question above already. His assertion is as fallacious a basis for the Basic AfRF Premise as the assertion that we (as animals) could survive and flourish without an opposable thumb (which is true), therefore nature would never have selected one (which is not true). Or to draw an even closer analogy: we (as animals) could survive and flourish without being able to play complex musical instruments (which is true), the ability to play complex musical instruments follows necessarily from the possession of opposable thumbs (which is also true), therefore nature would never have selected opposable thumbs (which is not true). My point is: Reppert needs to fill in a missing premise here. Otherwise, his assertion does not support his argument.
The “uncritical supposition” that Reppert uses to try and fill in that blank is again just a mere assertion, backed by no evidence: that “natural selection would favor the development of reliable cognitive and rational abilities only insofar as those aptitudes helped protohumans cope with the challenges of their environment” but “there is no reason to believe that we should trust our reasoning abilities beyond that original ‘coping’ function” (85), or in other words “what is required for survival is effective response to the environment, not accurate knowledge of that environment” (100). This is faulty reasoning in two crucial respects:
First, one should immediately ask: “How does he know?” By his same reasoning, peacocks should not have colorful and burdensome feathers. But they do. And we can explain why they do on purely natural evolutionary principles. Thus, there is more to evolution than merely “coping,” which means Reppert’s supporting supposition is uncritically adopted. It is in fact scientifically false. What gets selected by evolutionary process is whatever aids differential reproductive success. Advanced abilities to “cope” with an environment do serve that end. But so do many other advances, far beyond mere coping functions.
Second, one should also ask what “environment” did humans evolve to “cope” with? The answer is: first, a stable environment in which we relied on the strategies of social interaction and planning (1-2 million years ago), then a radically and unpredictably changing environment (the last 1 million years)—as humans expanded into entirely new climates and ecosystems, more rapidly and boldly than most animals ever have. Both phenomena are scientifically established facts. What “ability” would be most useful in helping an organism cope with its environment using the strategies of socialization and planning? And after that, what “ability” would be most useful in helping an organism cope with a radically and unpredictably changing environment? Why, reason of course. In both cases. So Reppert isn’t even correct that reason would not have “helped protohumans cope with the challenges of their environment.” Therefore, the Basic AfRF Premise is not only not established, it is false. And thus so is the AfRF itself.
The details supporting both points I have related already in previous sections of this critique. But just by way of example, observe the following as well:
- (1) Logic follows necessarily from language, and language is a profoundly unbeatable advantage for coping with any environment (not to mention for advancing differential reproductive success even without the coping advantages). Since language is a universally advantageous adaptation, and since language comes with logic built-in, there will certainly be evolutionary pressure toward the development of logical reasoning—as long as the underlying structure is achieved (e.g. large brain, vocalization, etc.).
- (2) Human evolution is typified by what is called “evolved generalization of functions,” the most notable examples being our teeth (which combine a diverse selection of biting tactics to suit omnivorism—indeed, with its little canines and molars, our mouth is a jack of all trades, master of none) and our fleshy, delicate fingers with weak, stubby nails. This development abandons the advantages most animals reap through the specialization of limb function (think of claws or fangs, or fins or wings), in favor of the advantages of generalized function. This produces “innate adaptability,” i.e. the ability to adapt to changing environments almost immediately, without the prolonged (and haphazard) process of reproduction, mutation, and selection. Literally any organ that provides an increase in innate adaptability will be advantageous—provided it does not come at too great a cost. For instance, some extinct species of hominid evolved brains so large that they probably resulted in enough birth complications to overwhelm all the advantages such brains would have afforded—the species that survives today has a medium brain size in comparison with all other hominids, so it obviously found the ideal balance between fatal brain size and the advantages of large-brain function. But the basic ability to reason provides an obvious increase in innate adaptability (think of fire, the wheel, shaped weapons—or even farther back in evolutionary history: deception, strategic hunting, planning), and so there will certainly be evolutionary pressure toward the development of reasoning—especially since there are few disadvantages to evolving a relatively simple reasoning organ, and those that there are can (in suitable circumstances) pale in comparison to the advantages.
- (3) The actual biological functions we now use to discover the structure of matter are identical to those that allowed the discovery of vital technologies (like fire, weapons, clothes—or canteens, like hollowed gourds for carrying water) and the development of advantageous behaviors (like out-thinking stalked prey, deceiving enemy humans, remembering to store up food or carry water with you), both of which prevented our early extinction and led to our planetary supremacy even before we discovered, for example, the principles of deductive logic or the scientific method. And since the early uses of natural reason certainly advanced our ability to cope and increased our differential reproductive success in other ways, there was again obvious evolutionary pressure to develop, or certainly select, natural reason.
In all three cases, it is not possible to “pick and choose” one kind of “natural reason” for figuring out some things, and a different “reason” for figuring out other things. It’s the same reason. There is no other kind. The basic biological functions are the same in every case, however much one might tweak them one way or another. At the same time, and as I have already explained, the identification, perfection, and employment of enhanced procedures (logic, mathematics, etc.) for employing natural reason is not a biological function, but a memetic one. It is learned—invented and passed on as culture. Thus, it is folly to look for a genetically evolved “reason,” except the very same innate reasoning that led to the discovery of, say, sewing and agriculture. Rather, the reason that Reppert seems to have in mind, Reason with a capital R, is not natural reason, but a technology. Just like sewing or agriculture, it was invented, not evolved.
Natural reason entails the functional ability to perceive better procedures for accomplishing almost any task (the very advantage provided by generalization of function: to cope with a boundless array of unexpected possibilities, rather than a small set of predefined goals), to identify and correct errors (error-correction is one of the most important features of any computational technology—even genes themselves have evolved error-correcting structures, and so we can expect that brains could evolve similar structures, having exactly the same needs in this respect), to counteract (see through) and employ (anticipate methods of) deception (some theorists regard this advantage—an obvious one indeed—as the very function a personal consciousness was selected for, since it allows us to “model” how someone else thinks, which entails modeling the very procedures of thought themselves), and to enhance efficiency by identifying and eliminating wasteful behaviors and identifying and adopting beneficial behaviors (the basic engine of culture, obviously greatly aided by, but not necessarily requiring, the evolution of language).
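The error-correction point above has a simple computational illustration: the most elementary form of “identify and correct errors” is redundancy plus majority voting, as in a triple-repetition code. This is a minimal sketch only, not a model of any actual neural or genetic mechanism; the function names are my own.

```python
# Minimal error correction by redundancy: a triple-repetition code with
# majority voting. Illustrative sketch only -- the simplest computational
# form of "identify and correct errors", not a biological model.

def encode(bits):
    """Transmit each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Recover each bit by majority vote over its three copies."""
    out = []
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)
    return out

msg = [1, 0, 1, 1]
sent = encode(msg)
sent[4] = 1 - sent[4]       # corrupt one copy in transit
assert decode(sent) == msg  # the single error is absorbed and corrected
```

The design point is that error correction is a structural property of how information is organized, not a mysterious extra ingredient, which is why it is unsurprising that both genes and brains could evolve analogous structures.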
Now, I challenge Reppert to come up with any way of “thinking” that can accomplish all of those functions and yet not be “reason” in the natural sense (i.e. as the biological function of thought, not the developed technology of logics). Yet all of those functions advance “coping” as well as success, and so cannot be said to be fatally improbable outcomes of evolution, any more than thumbs or peacock feathers. The last function, especially—efficiency modulation—is the most important requirement of any species that is ever to develop culture as an evolutionary advantage. So it is quite notable that all of Plantinga’s “counter-examples” against evolved reasoning ability entail the absence of this otherwise essential function, thus nullifying the applicability of his examples to any real theory of human evolution. So I doubt the AfRF could ever be given a leg to stand on. Reppert certainly provides none.
Indeed, the only kind of supporting evidence Reppert refers to is “falsehoods that are useful” (84; cf. 89), which is all he can offer, since the only way “effective response to the environment” would ever be possible without “accurate knowledge of that environment” is if false knowledge would do as well as true (otherwise, true knowledge would always provide some advantage over false, producing evolutionary pressure in the direction of truth-finding). Of course, Reppert presents no actual instances of such evidence, and thus he doesn’t give his assumption any empirical basis. Like Plantinga, all he presents is an evolutionarily improbable scenario (which I will address later: see “We Should Attack Rocks?” below) born solely from his own mind. The Argument from Lack of Imagination strikes again.
What is important here is that none of the basic functions of natural reason allow much room for the “useful falsehoods” that Reppert (and Plantinga) require. There are extraordinarily few “useful falsehoods” that could be produced by any organ capable of perceiving better procedures for accomplishing novel tasks, identifying and correcting errors, counteracting and employing deception, and enhancing efficiency. Now, an organ capable of these things can certainly generate falsehoods, and there is no doubt ours does—nor should we expect a flawless organ from natural selection, although (undermining Reppert’s and Plantinga’s dream of theism) we should expect a flawless organ, or at least a far superior one to what we actually have, if our reasoning ability came instead from a flawless engineer who wanted us to reason as well as physically possible (as any benevolent engineer would want). But the Basic AfRF Premise requires that our reasoning organ generate not only errors, but errors that are as useful as accurate results, and not only that (for any organ that makes errors will produce some such errors by chance alone), but so many such errors that an organ capable of identifying its own errors would provide no (repeat: no) advantage to differential reproductive success. I cannot conceive of any such organ, nor do I believe Reppert or Plantinga will ever be able to describe one—although they should be able to do so in the mathematics of computational physics, and since they are making a scientific assertion, they are obligated to present scientific evidence, not idle musing. Until they do that, we can dismiss the AfRF as little more than wishful thinking. [For more discussion, see “Reliabilism” below]
That concludes my critical survey of Reppert’s six AfR’s. All of them are wanting. In every single case, the basic premise is either groundless (i.e. no actual evidence supports it as true) or actually false. And in no case does Reppert even mention, much less refute, all naturalist views (to say nothing of the actual science) on the relevant subject in question. Indeed, just about the only forms of naturalism Reppert ever specifically attacks are eliminative materialism, nonreductive monism, and logical antirealism, which represent only a small fringe minority among naturalists generally. My own naturalism, for instance, sits well secure and not even touched by Reppert’s case.
Reppert attempts to generalize his arguments to all forms of naturalism only in a very vague and haphazard way when he comes to his defense of “explanatory dualism” as his alternative. For example, he deploys what I earlier described as the Causation Fallacy again when he argues that naturalism’s reliance on only two categories of fundamental explanation—necessity and accident—eliminates reason (87), which is teleological (a third category). But this is a non sequitur. Just because our basic explanations are limited to accident and necessity it does not follow that this exhausts all explanations available to us—for not all explanations are basic. Reppert knows very well that naturalism allows teleological causation as a category of explanation (human behavior, for example), and that we explain the emergence of this type of cause as an effect of a complex system of more fundamental nonteleological causes.
Thus, Reppert wants to argue that a system composed of nonteleological causes cannot manifest teleological causation. This much he openly concedes. But he never presents any evidence supporting this requisite claim. I told him in email before he finished his manuscript that his approach is like trying to argue that bricks, being just bricks, can never create a house. Obviously, a house can be reduced to mere bricks, none of which has doors or windows or a living-space inside. Yet those bricks can be organized so as to produce such a thing—a thing that can exist in no other way except as such an assembly of simpler things that are not themselves a house. After all, must a wheel be composed of parts that are themselves “in the last analysis” round? Obviously not. Yet the wheel can roll, even when its parts cannot. Causal properties thus arise from the organization of material, not just from the material itself. A gold ring will roll down an incline, but a gold block will not—despite these objects being made of nothing whatsoever but the very same gold. In the same way, a teleological system can arise from the organization of simpler nonteleological systems.
Thus the Causation Fallacy lies at the very heart of Reppert’s entire book, and undermines the whole. For instance, he complains, “the fact that one statement entails another statement seemingly must be irrelevant to how events are caused in the physical world” (87-88). “Seemingly”? Does Reppert’s entire book rest on the dubious argument that “Naturalism is false, because to me it seems to be”? I would hope not. But I can’t see him establishing anything beyond that, a purely subjective argument, essentially demonstrating nothing but his own lack of imagination and understanding. Think about it. It is not necessarily the case that “the fact that one statement entails another statement” is “irrelevant to how events are caused in the physical world.” Reppert certainly has not claimed that this statement is a fact of logic. So the only way Reppert can establish such a claim is on empirical evidence. But where is that evidence? I couldn’t find any in his book.
At most Reppert offers that Naturalists at present have no accepted explanation of certain mental phenomena, but that commits what I described earlier as the Possibility Fallacy. It is not evidence for any of the basic premises of his six AfR’s. In contrast, I have presented plausible explanations for every single issue raised by his six AfR’s, which, if true, falsify all six basic premises. He could perhaps counter that my explanations are not all (or completely) scientifically proven, but that is moot. It is Reppert who claims such explanations impossible, so all I need show is that they are possible—for which I only need conceptual evidence, not scientific. The AfR only succeeds if such explanations are either impossible or false. Since I have proven they are not impossible, Reppert must prove they are false. He has not done so. Therefore, his AfR fails.
The one thing I can say in Reppert’s favor, however, is what I said in the beginning: his arguments remain important in that they constrain the range of naturalist worldviews, and therefore his arguments must be addressed by any would-be naturalist. In short, he usefully defines several “problems” that any viable worldview must be capable of solving. The fact that it is easy to solve them does not detract from the point: we all need to take a look at them and actually solve them, if we are to believe our worldview viable.
This concludes the primary discussion section of this critique. The remaining second half goes into even further detail and addresses new peripheral issues for those who are interested in going into more depth.
Giving the Churchlands a Fairer Shake
I was already familiar with the Churchlands’ work in this field and found Reppert’s portrayals of their views bizarre, to say the least (cf. AfT). So the following is a report on what the Churchlands really believe, contrary to what Reppert claims:
First, Paul Churchland says point blank “I am a scientific realist” (A Neurocomputational Perspective, p. 139). Indeed, he even declares himself to be a moral realist (The Engine of Reason, The Seat of the Soul, p. 286). So clearly he believes that some theories are true and others false. Second, Churchland says his theory of eliminative materialism only redefines “beliefs” and truth and so forth in computational-operational terms, and he openly concedes his theory might even be wrong—that, for example, beliefs in the traditional sense “may turn out to be a small part of our cognitive activity, but a real part” (“Neural Networks and Commonsense,” Speaking Minds: Interviews with Twenty Eminent Cognitive Scientists, 1995: p. 42). In the same interview he explains how his theory does not eliminate qualia, for example, but rather he believes qualia “can be explained and therefore kept” and “in the case of beliefs and desires, this may happen, too, and I will be happy with it.”
So, in contrast to Reppert’s mischaracterization, Churchland argues that “belief” may be the wrong theoretical conceptualization of what is really going on, or maybe not. But either way, there is still something going on, something physically different between, for example, what we call a belief and what we do not. This means the causal role of that other something is functionally equivalent to what Reppert calls “beliefs” in his AfR. Therefore, Reppert is wrong to argue, as he does, that Churchland’s eliminative materialism destroys reason as Reppert defines it.
If Reppert had bothered to study Churchland’s actual theory he would understand that Paul rejects propositionalism (276), the view that propositions and language are the primary engine of cognition—and I agree with him: these are products of, and tools for taking advantage of, what is really the engine of cognition, which is cognitive modeling. This is exactly what Churchland argues. Hence he says: “we may have to reconceive our basic unit of cognition as something other than the sentence or proposition, and reconceive its virtue as something other than truth” (150), a view Reppert attacks, but takes out of context—for in context Churchland outright says he means to replace “truth” with “some epistemic goal even more worthy than truth” (150), which he relates in particular to the ability of the human brain to “construct representations” that successfully describe the real world (in which Churchland certainly believes). So Churchland isn’t really denying truth so much as redefining it, and in fact redefining it almost exactly as I do. The confusion arises from the fact that he thinks truth is an attribute only of “sentences” (153), not cognitions, but that is clearly incorrect, as I explain throughout this critique.
Churchland surveys and raises questions about views like mine (e.g. 157-58), but his concerns mistake the nature of such theories and thus do not apply to them. Churchland’s view is that a theory has genuine epistemic virtue (a virtue superior to “truth”) precisely to that degree which it conforms an organism’s behavior to the real world. In essence, Churchland’s epistemology is a kind of radical pragmatism. This does explain science, though (contrary to Reppert’s claim), since scientific method, and the knowledge it produces, clearly conforms our behavior to the real world better than any other mental procedure (otherwise we could never have cured polio or landed men on the moon), and that is the very “epistemic virtue” which Churchland wants in place of “truth” (which to him is only a syntactic property of language).
Indeed, this is my view, too—but Churchland confuses epistemology with metaphysics. How we know a theory is true is not the same thing as what it means for a theory to be true—these two things are only related operationally. The meaning of “true” is model-to-world pattern correspondence. The “how” of truth is our observation of feedback in response to our behavior. Churchland seizes on the latter as the meaning of truth (or rather, as what matters more than truth), but that is to replace an ontology with a methodology, ignoring the question of why it is that one theory conforms our behavior to the world better than another. Thus, contra Churchland, the “why” here must be “truth.” But this is just a semantic dispute. On a fair reading of Churchland’s theory, there is no appreciable difference between his definition of “genuine epistemic virtue” and my definition of “truth.” And anyone who reads Reppert’s book can easily see how this completely destroys Reppert’s case against Churchland’s epistemology.
So also in Churchland’s Engine of Reason. There he even asks how it is “that humans manage to ‘reach beyond the appearances’ to gain command of the hidden reality behind?” (p. 271). Didn’t Reppert bother to find out what Churchland’s answer to this question was? After all, it is absolutely crucial to Reppert’s case against him. Churchland’s answer, “to a first approximation, is that we learn all of our prototypes solely within the domain of observable things,” prototypes meaning models and the elements thereof, and “that process of concept formation takes place relatively slowly as one’s global pattern of synaptic weights is gradually reconfigured in response to one’s ongoing sensory experience,” hence child development takes fifteen years or so (a fact Reppert’s anticipated god-centered theory of mind would have a lot of trouble explaining), “but once those prototypes are in place, a human is in a position to find new and surprising applications of those prototypes, even in perceptually inaccessible domains, by virtue of our built-in capacity for vector completion or filling in the gaps” (pp. 279-80, emphasis his).
This “vector completion” is the centerpiece of Churchland’s theory of rational cognition, so it is astonishing (or perhaps shameful) that it gets nary a mention in Reppert’s attack. Indeed, vector completion is a computational property unique to neural nets, and lo and behold, it turns out the human brain is a neural net. Therefore Churchland’s theory predicts a physical fact about the brain, which Reppert’s theory (whatever that would be) probably would not anticipate. More importantly, the computational science of vector completion provides an excellent physicalist explanation for making “connections” between two facts or observations, the very engine of rational thought Reppert falsely claims is missing from Churchland’s worldview.
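The computational idea is easy to demonstrate. Below is a minimal sketch (my own toy illustration, not code from Churchland) of a Hopfield-style network: after a single pattern is stored by Hebbian weight updates, the network reconstructs the whole pattern from a degraded fragment by nothing but repeated multiplication and thresholding, which is vector completion by purely physical causation.

```python
import numpy as np

def store(patterns):
    """Hebbian learning: each stored pattern strengthens the weights
    between units that are active together (a crude synaptic analogue)."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.array(p)
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def complete(W, partial, steps=5):
    """Vector completion: repeatedly update every unit from its weighted
    inputs until the network settles on the nearest stored pattern."""
    v = np.array(partial, dtype=float)
    for _ in range(steps):
        v = np.sign(W @ v)
        v[v == 0] = 1  # break ties toward +1
    return v.astype(int)

# One stored "prototype" (+1/-1 coding), then a cue with half the
# pattern missing (zeroed out).
prototype = [1, -1, 1, 1, -1, -1, 1, -1]
W = store([prototype])
cue = [1, -1, 0, 0, -1, 0, 1, 0]
print(complete(W, cue).tolist())  # the full prototype is recovered
```

Scaled up to millions of units, this same "filling in the gaps" from a partial cue is what Churchland means by applying learned prototypes to new and even perceptually inaccessible cases.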
Okay, so Reppert has painted an awful straw man of Paul Churchland. What about Patricia? In Patricia Churchland’s most recent book on the subject, Brain-Wise: Studies in Neurophilosophy (2002), she advances a theory of “neurosemantics” that accounts for the very things Reppert falsely claims she denies: intentionality, semantics, and truth (pp. 302-08). So clearly we have another awful straw man (or woman, as the case happens to be). Patricia agrees with Paul and me that language is a secondary adaptation: it is produced by, depends upon, and derives from primary nonlinguistic cognition. Thus Reppert’s problems of reason are, in actual fact, unrelated to “propositions” in the lay sense of statements formulated in a language, since these are merely a device for getting at what is really going on in our brains.
As Patricia puts it (and I completely concur) the right approach is called “cognitive semantics” and entails that “formal logic and formal semantics are atypical artifacts of natural language, not its heart and soul” (p. 304). In other words, formal logic, just like formal language, is a technology of reason, not the biological underpinning of it, and it was discovered or invented like any other tool, not born into us. Instead, she argues, language primarily evolved as a “tool for communication, and only secondarily a tool for representation, not the other way around,” and in fact the ability to compute linguistically was perhaps an “accidental” but inevitable byproduct of what was really the useful adaptation at the time, which was communication.
In contrast, she argues, cognition is really a question of modeling—exactly the same conclusion as my own. As she breaks it down: “mental representation has fundamentally to do with categorization, prediction, and action-in-the-real-world; with parameter spaces, and points and paths within parameter spaces” (which would include Paul Churchland’s vector completion theory). She does not deny the existence of truth in the sense Reppert intends. To the contrary, she asserts that “we know the models are increasingly adequate…by their relative success in predicting and explaining” (p. 368; and the brain, she argues, is one of the things being modeled). Likewise, she does not deny the existence of intentionality either, but argues that it has its basis in synaptic mapping and overlap and the production of connections in a physical concept-space (p. 305), while the ontology of intentionality has to do with the correspondence between patterns mapped in the brain and patterns in the real world (which are identified by learned signs in the sensory domain: p. 307).
Why do we learn none of this from Reppert? Why does his attack on the Churchlands proceed as if they said none of the things (which I have revealed here) that they really did say? He will have to answer for himself. But I suspect his was just an armchair attack on someone whose theories he didn’t really study.
Introducing the Question of Computers
Computational physics is a major component of the Churchlands’ theory of mind, and I have referred to computation myself many times already (cf. e.g. AfUC). Reppert does devote one whole paragraph to the question, but he still does not seem to understand the point—despite my having attempted at length to explain it to him in our previous correspondence. Reppert notes that “sometimes naturalists advert to the compatibility of mechanism and purpose” but, quoting Hasker, “computers function as they do because human beings have constructed them endowed with rational insight” (88) so the existence of computers, Reppert alleges, does not solve the problem.
Now, Hasker’s grammar is a bit awkward. Did he mean that computers are endowed with rational insight, or their makers? The former seems to be what he meant, though grammatically the sentence could still mean the latter. But assuming he meant the former, his statement is false. For computers do not possess rational insight. That is a quality unique to consciousness. Computers do make rational decisions and arrive at rational conclusions. But that is not the same thing. And this is the central issue that Reppert doesn’t get. Computers are not cited by naturalists as proof that machines can have rational insight. The prospect of genuine AI is sometimes referenced, but since it has not yet been achieved, such arguments are hypothetical and conceptual, not evidential. Rather, when existing computers are cited by naturalists they are not presenting evidence of machines with rational insight, but of machines that reason in the simple sense: they can deduce and induce conclusions from premises.
The reason this is an important distinction is that if Reppert’s only beef is with the prospect of a machine producing consciousness, then he is no longer making an AfR but, as I said earlier, an AfC (see my discussion above of the AfUC = AfR #5). But his book is about the AfR, so we are only concerned with that here, not whether the AfC is correct—for Reppert has not even touched the tip of the iceberg of that discussion and all the work that has been done on it. Insofar as the discussion is about the AfR, as obviously it is supposed to be, then the question of whether machines can produce consciousness must be bracketed. Because he does not address the Problem of Consciousness in anything like the requisite detail, Reppert must concede for the sake of argument that such a thing (at least sans reason) is possible, or else he begs an important question.
And yet, the Problem of Computers that Reppert faces is not the Problem of Consciousness, so attacking the latter does nothing to save him from the former anyway. Reppert cannot dispute the established fact that machines exercise reason (simpliciter, i.e. sans consciousness). But it is precisely that fact that undermines his entire case. Reppert argues that because humans cause computers to come into existence, therefore humans (or rather: teleological causes) are necessary for computers to come into existence. But that just begs the question: can something else cause a computer to come into existence? Naturalists argue that evolution by natural (i.e. nonteleological) selection can do this. And Reppert presents no actual evidence against this possibility.
Now, one of Reppert’s AfR’s (#6 = AfRF) does try to undermine the evolutionary origins of reason, so I do not claim that Reppert has ignored it—although, as I explain above, the AfRF actually fails, and Reppert offers no actual evidence in support of it anyway. Rather, my point is that several of Reppert’s other arguments are undermined if the AfRF fails. His six arguments are therefore not independent of each other. But more importantly, Reppert does not acknowledge the fact that such arguments are challenged by the evidence of computers. For example, the Basic AfMC Premise (AfR #3) is deeply challenged by the existence of machines that exhibit conceptual causation, and that solely on an underlying basis of purely physical causation. As long as such machines exist, Reppert must concede that naturalists can account for their existence. All he can try to do is argue that naturalism cannot explain how any such machine can arise nonteleologically, but that is the AfRF (which is really just an Argument to Intelligent Design, and not an AfR as such), which means if the AfRF is false, so is the AfMC.
Reppert thinks he has avoided this problem by arguing that “if the computer makes a correct inference, it is not a correct inference in the computer’s perception, but in ours” (83; cf. 119), but that is not true in the one context that matters: the natural evolution of computers (brains). Nature selected for computers (brains) in animals precisely because the making of “correct inferences” aids survival. Thus, such inferences are not true because they are true in anyone’s “perception” but because they are simply and literally true, whether any consciousness is ever around to “notice” or not. If the process did not produce true conclusions (at least conclusions more true, and more often true, than mere chance would produce), brains would not produce the correct responses, and, more importantly, would not be able to learn. Therefore, brains have to produce a greater balance of true inferences (in the sense of “true” that I defined earlier: see Proposition 2) for them to be of any use to an animal. I have already discussed this issue above.
Now, this all refers to reason simpliciter, the basic processes of deduction and induction, without reference to consciousness. And Reppert does mean Reason, i.e. reason in a sense that is dependent on consciousness, because he includes not just natural reason, but the technology of reason. But I have already addressed the problem of conflating those two things. Naturalists have no problem accounting for the natural selection of natural reason. Likewise, the existence of purely physical machines that can engage in natural reason proves that there is no metaphysical problem for naturalists in inferring the same sort of thing for human reason. And Naturalists have no problem accounting for how an animal with a highly developed natural reason could have discovered and refined a technology of reason. So there is nothing left to be explained—except perhaps how consciousness is produced mechanically, since that appears to be necessary for the exceptional performance required by any animal that relies on culture, even if consciousness is not required for a less impressive performance of all the same functions of natural reason (which computers even today can accomplish). But again, that is not relevant to the AfR as such, but an AfC.
Reppert will certainly take issue with some of what I have said above, so more detail is required to explain just what I mean. The following sections seek to provide that detail.
Computation and Perception
I agree with Reppert that “the question of whether a person is rational cannot be answered in a manner that leaves entirely out of account the question of how his or her beliefs are produced and sustained” (65). But I disagree with him in the sense that I believe naturalism can easily account for these things. Hence I am completely on board when Reppert says “if a materialist says that she believes in materialism because she perceives the reasons for believing it, then…she is committed to the existence of reasons, as well as the existence of the perceptions of reasons” (68) and so I agree naturalism must therefore account for both, and that account “of rational inference” must be “compatible with the limitations placed on causal explanations by materialism” (69). Quite true. The solution comes from two converging directions: the existence of mechanical inference-making and the existence of perception. When these two facts are combined, every mystery that Reppert claims confounds naturalists is solved.
Christian philosopher William Hasker, on whom Reppert relies, claims that rational experience can only be rescued “by abandoning causal closure and acknowledging that micro processes in the brain go differently in the presence of conscious experience than they would without it.” But we should ask “Why?” Why must causal closure be broken? First, what if conscious experience itself is causally closed, conceptually and with regard to its underlying physical causes? Hasker does not address this possibility. Second, since rational decision is by definition deterministic (there is always and only one right answer that follows necessarily from two or more fixed inputs), it must necessarily itself be causally closed. So why can’t a deterministic physical system do the same thing, i.e. derive a fixed output from two fixed inputs? Hasker does not address this possibility either. So Reppert’s reliance on Hasker is a mistake (see also my discussion of Madell below).
If a machine can perform acts of rational decision (and machines certainly can, both deductive and inductive), and if another machine can “perceive” that activity and construct a corresponding virtual model of it (and machines can certainly do this, too), then where is Hasker’s violation of causal closure? What need do we have of that hypothesis? Just as a machine accepts an input of photons and constructs from that data a visual phenomenal experience (a “virtual model” of a visual field), so can a machine accept an input of brain processes and construct from that data a personal phenomenal experience (a “virtual model” of a reasoning self). Thus, it is theoretically possible for consciousness to have a purely physicalist explanation, and the evidence is tending to support just that. But even without consciousness, a computer can perceive (construct and manipulate virtual models of) all the elements of any simple reasoning process. And that is all that is going on in us, too, when we use Reason.
So what is there left to explain? Here is Reppert’s dilemma. Either one can dispute that a machine can perform the input-output functions that together define all logic, or one can dispute that a machine can construct a phenomenal experience of any kind. That is the only way an AfR can be carried off. But the first cannot be challenged. It is easily demonstrated that machines can perform all logical procedures of any kind whatever, even perfectly stupid machines—in fact, even machines that come about solely and entirely by accident without any intelligent design. Such machines can be produced, for example, by simulating natural causes, both natural and accidental, on other computers, or even in test tubes (as we are starting to see from research at the cutting edge of biochemical computation). So we know not only that such machines exist, but that natural evolutionary forces can, in the right circumstances, produce them.
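The claim that blind variation and selection can produce a logic-performing machine can itself be illustrated in miniature. The following is a hedged sketch (my own toy model; the encoding as a truth table is an arbitrary illustrative choice): it evolves a correct circuit for material implication from a random starting point by mutation and survival-based selection alone, with no designer specifying the answer.

```python
import random

random.seed(0)  # fixed seed for reproducibility

# Target behavior: a simple logical operation (material implication,
# "if A then B"), represented as a 4-entry truth table.
TARGET = {(0, 0): 1, (0, 1): 1, (1, 0): 0, (1, 1): 1}
CASES = list(TARGET)

def fitness(table):
    """An organism 'survives' in proportion to how often its circuit
    yields the logically correct output."""
    return sum(table[c] == TARGET[c] for c in CASES)

def mutate(table):
    """Blind variation: flip one random entry of the truth table."""
    child = dict(table)
    c = random.choice(CASES)
    child[c] = 1 - child[c]
    return child

# Start from a random, typically illogical, circuit.
organism = {c: random.randint(0, 1) for c in CASES}
for generation in range(1000):
    child = mutate(organism)
    if fitness(child) >= fitness(organism):  # nonteleological selection
        organism = child
    if fitness(organism) == len(CASES):
        break

print(fitness(organism) == len(CASES))
```

Nothing in the loop "knows" any logic; correctness is simply what survives, which is the point at issue against Reppert.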
So only the second prong is left: the causal foundations of perception. But to challenge that is to challenge consciousness itself. For there is no significant difference between a veridical visual experience and a veridical rational experience. If any machine can create the former, it can create the latter, just as it can create one for sound, touch, emotion, taste, smell, pain, pleasure, panic, and so on, whatever chance and advantage should produce a perceptual system for (the human sensory array is by no means exhaustive: bats see sound, lobsters feel magnetic fields, and so on). So Reppert cannot single out reason here. An attack on rational perception is automatically an attack on all perception of whatever kind. But as I have said, this reduces the AfR to an AfC, since there is no longer anything peculiar about reason in the argument.
There are, of course, two obvious lines of renewed attack here. But it is not enough to attack them by claiming our position is not scientifically proven, since that commits the Possibility Fallacy. Remember, Reppert must prove our position either impossible or false. So, drawing on what I said before, Reppert must prove his case either conceptually or evidentially in either of two ways:
- (1) Either: (a) there is no way (literally no way) an accidental combination of replicating strings can produce, through natural (nonintelligent) selection, simple computers that perform logical operations which benefit survival in a way that illogical operations do not; or (b) such a process can occur (no matter how rarely), but in our case (the genetic evolution of humans) no such process actually has occurred. Reppert has not demonstrated either.
- (2) Either: (a) there is no way (literally no way) that self-consciousness (a perceptual model of a self) can have a purely physical explanation; or (b) such an entity is possible, but the actual facts presently known about the human brain and its function rule out the existence of any such process in us. Reppert has not demonstrated either.
As for (1), arguing (as Plantinga and Reppert do) that such a process is “too rare” to believe that it has actually happened to us constitutes a (relatively weak) form of (1)(b). A more convincing argument would present actual evidence that it didn’t happen, not that it was improbable, since evidence of improbability itself requires evidence (not hunches, but actual evidence) on which to establish actual values for the probability of any such process occurring (not just the actual one that led to us) as well as a credible benchmark for impossibility (i.e. the probability below which we can reasonably expect an event never to happen anywhere in the entire universe).
As for (2), I will reiterate the point that we don’t need to have such an explanation, only the theoretical prospect of one. A defense of reductive physicalism is not based on having an explanation for everything, but on an inference from past cases, i.e. the fact that (i) everything so far that has been fully explained has had a physical reductive explanation, with no exceptions of any kind, and that (ii) those things that remain at least partly unexplained are things we do not yet have the technical ability to explore in the requisite detail (so we can explain our lack of explanation), and that (iii) a vast quantity of seemingly incorrigible phenomena of the mind (such as most of the details of the experience of colors) have been proven to have physical reductive explanations, and the number of such things has steadily increased without abatement or any sign of stalling. When we put (i), (ii), and (iii) together we have all the evidence we could possibly have that an explanation of X is type Y, without actually having an explanation yet—whereas we lack all three (i, ii, and iii) for nonreductive explanations, including those involving explanatory dualism. In other words, reductive physicalism has evidential assets that are completely lacking for explanatory dualism. Therefore, lacking any conceptual refutation or evidential falsification, reductive physicalism is to be preferred as the most probable explanation of any as-yet-unexplained phenomenon.
So now let’s discuss each avenue separately: first computation, then perception.
Reppert claims a computer, to have Reason, must be able to “see” a “logical connection” (64), in other words such a function “must be physically realizable” (97). So two things are needed here: there must “be” a “logical connection” in some objective sense (i.e. some sense independent of human will); and we must be able to “see” it. We will address the first issue here.
What is a “logical connection”? There are at least two kinds: deduction and induction. In deduction, “logical connection” is simple and direct computation: information contained within one data set is extracted. For example, all things labeled “man” are assigned the attribute “mortal,” so a machine can “deduce” that Socrates is mortal simply by searching and finding that the element “Socrates” belongs to the set of all things labeled “man.” In induction, “logical connection” is computation of a virtual model, a model which we often “describe” as a hypothesis. Broadly speaking: a hypothetical structure is “run” in our brain until the results are observable, and then these virtual results are matched against actual observation; if they fit, then we believe (to some degree of confidence) the model matches reality; if not, then not. Narrowly speaking: if X is always observed in situation Z, then probably X always occurs in situation Z. Obviously there are nuances, but they are simply computational adjuncts refining the same basic procedure. For such computational procedures to work (i.e. produce true knowledge) all that is required is causal consistency (in both the universe and the computational system observing it); if either lacked such consistency, then we would not exist, nor would we have evolved an inductive-deductive brain. But we evolved such a brain because it works, and it works because there is such causal consistency.
Deduction does not create new information. All it does is extract a small set of data from a larger one on hand. By analyzing information we already have, we can extract knowledge from it—knowledge about the content of that information set, the patterns it exhibits, and so on. And this is a straightforward, purely mechanical computation. Any Turing Machine can do it, for example. All that is needed is that the computer be organized in the right fashion. For a Turing Machine, that means software; for a human brain, that means synaptic organization. For example, our brain is capable of “deducing” that a particular pattern of light indicates the presence of our uncle’s face, all without our even being conscious of the computation involved: all visual patterns of arrangement U are “uncle’s face”; current visual pattern D is a visual pattern of arrangement U; therefore current visual pattern D is “uncle’s face.” What we would describe as the major premise (hence the content of that premise: the perception of a pattern and a name and the link between them) is coded directly in the synaptic structure of our brain, where the label “uncle’s face” is physically linked to the pattern-recognizing circuit that identifies U. Then our visual system physically activates that circuit link when stimulated by D (this event constitutes the content of the minor premise). And finally, the system produces an output (D is “uncle’s face”) by direct physical causation. In short: D stimulates U, which stimulates the labeling of D accordingly, through U’s physical connection to the corresponding labeling circuit.
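The purely mechanical character of such a deduction is easy to exhibit. Here is a minimal sketch (the names and data structures are my own illustrative choices): the major premise is stored structurally as an association, the minor premise as a membership link, and the conclusion falls out by search and extraction alone, with no new information created.

```python
# Major premise, coded directly in "synaptic" structure:
# all things labeled "man" carry the attribute "mortal".
attributes = {"man": {"mortal"}}

# Set membership, also stored structurally: Socrates is a man.
members = {"Socrates": {"man"}}

def deduce(individual, attribute):
    """Conclude individual-has-attribute by pure search and extraction,
    exactly as a labeling circuit fires when its pattern is stimulated."""
    return any(attribute in attributes.get(kind, ())
               for kind in members.get(individual, ()))

print(deduce("Socrates", "mortal"))  # the classic syllogism, mechanically
```

The "uncle's face" case has the same shape: a pattern-recognizing circuit stands in for the membership table, and the label it is wired to stands in for the attribute.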
To emulate this process in our conscious reasoning therefore does not require anything more than the same kind of circuitry, only connected to different sources of data: namely, linguistic and proto-linguistic data, brain stimuli instead of visual stimuli. For us, the biological pathway to basic deductive computation of the sort Reppert is interested in was probably the evolution of language. All formal logics are languages, and all languages contain a logic. Therefore, the moment we began to evolve the ability to communicate in a language, we also began to evolve the ability to think deductively, since the latter is required for and thus entailed by the former. The basic deductive circuit, originally a subconscious perceptual organ like any other, was simply retasked.
So much for the origin of natural deduction. As for the technology of deduction, that is easily discovered: all we had to do was study (observe, practice, describe, and learn from) our use of an inborn brain function, the function of thinking and communicating in a language. And the study of language is exactly what led the Greeks to discover logic. In summary: we know machines can perform deductive reasoning (all computers can do this), we have good reason to believe such machines exist in our brain, we have a plausible explanation for how they got there without any intelligent planning, and we know how the procedures employed by such machinery could be discovered, described, and refined by any conscious entity.
Induction is the functional opposite of deduction: it attempts to “guess” at the content of a large set of data, which is not available to us, by analyzing a small subset of that data, which is. So induction does create new information, by hypothesizing it (and then, in proto-scientific method, testing the consequences of that hypothesis in order to correct or refine it). To do this one merely need run a deduction machine in reverse: we imagine (construct) a pattern in our mind (perceptual field) which contains the pattern of data we do have (which can include a lot more than just the particular kind of data being sought). We then have two possible alternatives: either the real pattern “out there” is as we have imagined it to be, or it is some other pattern. We can then “deduce” from our hypothesis other data that should be out there, and still other data that should not be out there, if our hypothesized pattern is true. And then we can go and look and see what is really there. That is, you might notice, the core structure of the scientific method. And it is, again, something easily replicated by a purely mechanical computer. In essence, induction is a computation using virtual models in the brain, much like what engineers do when they use a computer to predict how an aircraft will react to various aerodynamic situations. Only, the virtual models our brain composes are themselves inductively (and deductively) constructed from data, very often unconsciously.
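That hypothesize-deduce-test loop can be mechanized in a few lines. The sketch below (the candidate hypotheses and observations are invented for illustration) keeps whichever virtual model reproduces the observed pattern, then deduces a new prediction from it, which is the core structure of proto-scientific induction.

```python
# Induction as reverse deduction: guess a pattern covering the data in
# hand, deduce what else it implies, then check further observations.

observations = [(1, 2), (2, 4), (3, 6)]  # (situation, outcome) pairs

# A small space of candidate patterns (hypotheses) the "brain" can run.
hypotheses = {
    "outcome = situation + 1": lambda x: x + 1,
    "outcome = 2 * situation": lambda x: 2 * x,
    "outcome = situation ** 2": lambda x: x ** 2,
}

def best_hypothesis(data):
    """Keep whichever virtual model reproduces the observed pattern."""
    for name, model in hypotheses.items():
        if all(model(x) == y for x, y in data):
            return name, model
    return None, None

name, model = best_hypothesis(observations)
prediction = model(10)  # deduce new data that "should be out there"
print(name, prediction)
```

A real brain does not enumerate hypotheses from a fixed menu, of course; it constructs them from prior models. But the logical procedure being run is the same.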
Although induction is something we learn from experience slowly through childhood, and we have built machines that can “learn” it, too, it is also a capacity naturally latent in the brain. Even cats have it: their brains can “guess” that a rabbit that runs behind a tree is still there, by simply completing the pattern of the rabbit’s movement in three-dimensional space—induction, plain and simple. On the one hand, the more times we engage in an inductive procedure and get rewarding results (kids eventually figure out that an object moved behind another still exists, that water poured from a tall glass into a short one is still the same quantity of water, and so on), the more our brain becomes accustomed to the utility of that pattern of behavior, and starts using it more widely. The fact that induction is a procedure that actually is universal, and generalizable to everything, ensures that we begin using it successfully on more and more problems until we realize there is almost no problem it can’t be applied to, at least to some effect. On the other hand, machines we have made, and machines we have discovered operating within the brains of animals, also engage in simple inductive procedures—as behavioral science has more than established for the pigeon, among other animals, and as the development of learning and self-innovating computers has established even more clearly. The innate tendency to deploy such a procedure is evolved genetically in the brain, and this leads to its being used from a very early age, eventually producing the discovery of its related technology.
It is again strange that Reppert does not even acknowledge the existence of, much less address, the vast scientific evidence on both fronts here. How, for example, would he explain the ability of pigeons to figure out that a particular symbol indicates the presence of food, unless their brains were engaging in a simple inductive procedure? Yet pigeons are not conscious, nor were they invented by men. Reppert might claim (despite having no evidence to support him, either for his position or against ours) that pigeons are somehow tapping into the Platonic realm that he says alone makes Reason possible, and that their inductive behaviors cannot be reduced to physical-causal systems in their brain. But then looms the existence of man-made inductive machines, which have been analyzed down to every single one of their basic mindless parts. Saying that men made these machines is moot (that is just another kind of Causation Fallacy). What is important is that these machines engage in induction, and do so according to a purely physical-causal system; therefore such machines do exist, therefore they can exist in the pigeon, and therefore they are capable of being produced by the same process of natural selection that created a pigeon’s feathers, eyes, legs, and brain.
Consider, as but one example, and already an old one, Daniel Dennett’s discussion of an actual robot named Shakey that could, all on its own, figure out from incomplete data whether an object before it was a box or a ramp, all by purely mechanical, inductive procedures. The point is that no human, no intelligence, told it whether any objects it encountered were boxes or ramps—it figured it out on its own, using basic induction. Now, Reppert would say that humans built Shakey to perform inductive reasoning, but that is moot: the question is not how the ability was caused, but what the ontology of the ability itself is. And the ontology here is completely physical. This can even be proven mathematically by a computer scientist: nothing more is needed for these procedures to be engaged than the mindless physical parts arranged just so. So Reppert cannot claim that completely physical machines cannot perform induction. All he can claim is that evolution cannot produce such a machine (or that, though we have such a machine in us, the particular machine we physically have was not produced by evolution). But he has no evidence to support such a claim.
The key point is this: a computer that performs deduction (or induction) and a computer that does not are always physically different. Their output is always observably, physically different given the same inputs, and if you physically arrange a computer’s components so that it produces the same outputs as a proper deducer or inducer, then you have a computer that performs deduction or induction. Period. There is nothing more to it. No Platonic realm or any kind of explanatory dualism need be appealed to at all.
This should not be surprising. Had Reppert actually endeavored to study and address the science of logic, reasoning, and computation, he would know that all logical operations, of any kind whatsoever, can be performed by organized assemblies of just three simple physical systems called “logic gates.” Both inductive and deductive logic and all mathematics, including probability theory, can be represented and carried out by an appropriately organized assembly of such gates, and therefore all such procedures are mechanically reducible to them (or their functional equivalents—i.e. the functional equivalents of the gates themselves, or of their various assemblies). All mathematics can be entirely reduced in form to an assembly of only nine propositions, and though most of those “axioms” are empirical postulates (i.e. they are discovered either actively, by memetic rearrangement of brain structures through exploration and learning, or passively, by genetic selection of brain structures), all mathematical computations can still be carried out using a system that itself reduces to a collection of only three simple physical components—logic gates—because each of the nine axioms can be operationally defined by a complex arrangement of those gates, or is functionally entailed by such an arrangement. That is why computers can perform calculations even in set theory.
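To make the reduction concrete, here is a hedged sketch: the three primitive gates modeled as functions, and an exclusive-or and a half-adder (binary arithmetic) assembled entirely from them. The function names and arrangement are mine, chosen for illustration, not drawn from any cited source.

```python
# Sketch of the point about logic gates: three primitive physical operations
# (modeled here as the functions AND, OR, NOT over the bits 0 and 1) suffice
# to assemble higher logical and arithmetic procedures from mindless parts.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # Exclusive-or assembled purely from the three primitive gates.
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    # Binary addition of two bits: (sum, carry), all from gate assemblies.
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
# 1 + 1 yields sum 0 with carry 1: arithmetic from mindless components.
```

Chain enough half-adders together and you get full binary arithmetic, which is exactly how physical computers carry out the calculations discussed above.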
Since the basic procedures entailed by Reason exist in any mechanically consistent universe, and since they can be produced by organized functional systems of nonteleological components, and are useful in such a universe in precisely the way other procedures are not, naturalism is not challenged by the existence of Reason in this sense at all. Reason itself is a mechanically consistent procedure, computing from fixed inputs a mechanically inevitable output. Therefore, all that is required for a deduction machine or an induction machine is matter arranged a certain way. Nothing prevents matter from being arranged that way, and we have ample reason to believe natural selection could and did arrange matter that way in us. Moreover, since the procedures of deduction and induction are inherently more truth-finding than any other procedures (at least, it is up to Reppert to demonstrate otherwise), and since the functions performed by deduction and induction are always beneficial to differential reproductive success (even for grunting primates—indeed, even for pigeons), it follows that the development of a truth-finding organ of reason in humans is not at all surprising on naturalism.
Now, Reppert would, I hope, concede such a point, and agree that reason in a basic sense is just computation, and computation requires nothing more than machines to exist. He would then have to retreat into an attack on the reducibility of perception, even though that is to retreat from the AfR altogether, into the thorny maze of an AfC. But I will nevertheless address the small niche in this last-ditch defense that relates to reason: my contention that “seeing” logical connections is no different than “seeing” with eyes, ears, nose, etc., or more analogously, “seeing” other internal phenomena of the brain and body, such as pain, emotion, memory, introspection, etc.
In contrast to Reppert’s vaguer analysis of Lewis’s concept of an “irrational” belief (55-56), a more careful analysis, which avoids the Causation Fallacy, is this: any belief is discredited if it was produced by a process that is not significantly truth-finding, and any belief is credible if it was produced by a process that is significantly truth-finding. Since deduction and induction are significantly truth-finding processes, any mechanical operation that follows the process of deduction or induction will be significantly truth-finding, and therefore Reppert cannot condemn all machines as irrational simply because they are machines. But now the question arises whether we know which kind of process produced any particular belief, which necessarily entails yet another belief disjunction that must be resolved. Either this leads to an infinite regress, which would render belief-justification impossible, or some beliefs must be necessarily basic and unquestionable, such that it is impossible that they were produced by a process that is not significantly truth-finding. And to be “self-evident,” the grounding belief (that the basic belief was so produced) must be based entirely on the content of that basic belief, or be actually identical to it, thus ending the regress.
Elsewhere I have outlined such a system, called “basic empiricism.” There I discuss building credible belief-systems from a basic ground of undeniable experience-events in not only the sensory but also the emotive and cognitive domains. That is, I believe that we “see” inferential relations in exactly the same way we “see” colors or hear sounds or feel heat or sadness, and so on. Just as it is impossible to disbelieve that we are seeing red when we are seeing red—to say “I am not seeing red when I am seeing red” is not only a self-contradiction, but is the same thing as telling a lie, and a lie is never, by definition, true—so it is impossible to disbelieve that we are seeing a connection between two concepts when we are seeing a connection between two concepts. Of course, often we can doubt whether there really is a connection, but not in every case. For example, when we see a red patch touching a blue patch, that there is a connection between the two patches is undeniable, even if we can doubt whether that connection corresponds to any actual objects touching each other in the real world. Likewise, in the case of comparable inferences (at least the simple inferences of which all complex inferences are composed), the connection between two concepts is directly and immediately observable, and therefore undeniable.
Consider a simple deductive inference:
Major Premise: Every triangle has three sides.
Minor Premise: A Tolnedran “Danger” sign is a triangle.
Conclusion: A Tolnedran “Danger” sign has three sides.
If we start with skepticism regarding the truth-finding ability of this Simple Inference Procedure, we can “test” it thus: when we picture a triangle in our minds, we literally “see” a three-sided object; when we try to picture a Tolnedran “Danger” sign in our minds, we will follow the instructions provided and “map onto” our imagined sign the same shape as the triangle we are also picturing at the same time, because that is what the word “is” instructs us to do. Thus we will literally “see” a three-sided Tolnedran “Danger” sign. This conclusion is irrefutable. It is literally and observably impossible to map the triangle onto the sign and not get a three-sided sign. We can see this fact directly, even visually. And therefore we can never deny it, not without lying. For this is a simple uninterpreted experience-event. We cannot claim we are not experiencing it when in fact we are. It is therefore undeniable.
The same holds for non-visual transformations. For example, some patterns are not in space (like triangles) but in time (like motion vs. immobility), some patterns are olfactory, others auditory, and, finally, still others are conceptual, i.e. the pattern exists in a physical connection (in time or space) between two other patterns represented in a brain. Just as an organ can detect patterns in electrical signals stimulated by photons and thus produce vision, so can another organ detect patterns in electrical signals stimulated by brain processes themselves, and thus produce thought, even self-reflective thought. So while a pigeon can engage in induction without seeing its own thinking, we can “see” the procedure our brain engages in when it engages in induction, and thereby model and reproduce that procedure in other contexts, even play around with it and improve upon it by learning from experience. Just as there is nothing that stops a machine from “seeing” patterns in light, or sound, or magnetic fields, or physiological response, so there should be nothing that stops a machine from “seeing” patterns of brain activity, such as computation, concept-pattern connections, and so on.
The Basic AfR Premise effectively denies that there are any beliefs produced by undeniable experience-events that are also true rational inferences. The latter would be, according to my analysis above, any inference, from one set of propositions to another, corresponding to any process that is significantly truth-finding. But by now it should be clear that the Basic AfR Premise has not been supported by anything Reppert presents. He offers no conceptual or empirical proof that “undeniable experience-events” (like seeing a patch of red touching a patch of blue) cannot have as their content “true rational inferences.” He offers no conceptual or empirical proof that such “undeniable experience-events” do not participate in causing the formation of rational human beliefs. He offers no conceptual or empirical proof that beliefs caused in this way cannot be distinguished from beliefs caused in other ways. And finally, he offers no conceptual or empirical proof that distinguishing beliefs in this way (in favor of the truth-finding causes) would not improve an organism’s differential reproductive success, nor does he offer any conceptual or empirical proof that the species Homo sapiens does not possess a physical computer in its brain capable of all of this (detecting patterns of brain activity corresponding to true rational inferences, distinguishing the latter from other patterns, and using that distinction to causally affect the generation of confidence levels). Nor does he prove that such a computer is present but the actual physical form of that computer could not have been developed by natural selection. So, in the end, Reppert fails to establish the Basic AfR Premise.
Theory of Mind
As I explained in the primary half of my critique, as well as several times again above, Reppert should not confuse the AfR with the AfC (cf. e.g. AfUC). But a lot should be said about my naturalist theory of mind because the issue is complex and important, especially as background to my theory of rational cognition. So I will provide that detail here.
In general, consciousness derives from two functions working in concert:
(1) Synchrony: It is a known fact that consciousness is generated in the cerebral cortex, which also participates (along with other brain centers) in generating synchronizing signals coordinating all areas of the brain. However, I do not mean what Goetz means, namely that all perceptions are drawn in there and stitched together there. Rather, stitching is carried out globally throughout the brain. Because the entire brain is interconnected, there is no need for a “place” for the stitching to occur: the entire brain is as good a place as any one part of it, and it is much more energy efficient to just have the one whole brain doing it all. Rather, what is needed is synchronization of functions. There are a few areas of the brain responsible for generating the signals that produce synchronization of brain activity (which is directly observable by EEG). But the fact that the signal is produced is enough: all the systems of the brain, as a result, function in lockstep. This is how, for example, we are able to relate visual and auditory information together (and thus why we can hear voices at the same time we see lips move—even though the systems involved do not always take the same amount of time to perform their functions). This much we know. Exactly why this makes our particular experience of binding possible is not yet known, but there are many good theories, and we do know that this synchronization function will be essential to any successful theory (hence it is significant that this required causal factor is a physical operation, not a supernatural one).
(2) Eye of the Mind: What exactly is it, then, that the cerebral cortex does? My theory, which unites the common features of the theories of Dennett, Churchland, and LeDoux, is that the cerebral cortex (CC) is an organ of perception that perceives certain things unique to consciousness that none of the other brain systems are geared to perceive (because they are not complex enough for the task). For example, the primary visual cortex (PVC) is what assembles coherent visual perceptions from the data produced by all the disparate visual processors in the brain. Thus the CC does not produce the visual component of Goetz’s “Augustinian first-person experience of ourselves.” That is done already by the PVC. What the CC does is perceive patterns of relationship between, for example, what the PVC sees, and what, for example, the primary auditory cortex (PAC) sees, and what is stored in memory, and so on. That is not to say that the CC draws in visual and auditory data and stitches it together. That is already done by the whole brain, as the PVC and PAC are physically connected and work together in synchrony already. Rather, the CC perceives relationships between the salient characteristics of the perceptions generated by the PVC and PAC, and all other areas of the brain (like personal memory). It is this perception (which we might call “insight”) that elevates what is alone just visual and auditory consciousness to the level of personal consciousness.
In just the same way that the PVC takes care of visual consciousness, without handling the auditory, so the CC takes care of personal consciousness, without handling the visual. Think of the entire visual system: the eye’s function is to gather light, and through its attached brain systems detect patterns in that data to produce a perception (as, for example, of a shape). Now adapt this to the entire cerebral cortex: it is a sensory organ just like the eye, gathering data from the rest of the brain, and through its computational systems it detects patterns in that data to produce a perception (as, for example, of a “self” that is seeing, rather than just a seeing—though in both cases the “seeing” is going on elsewhere, while physically and causally interacting in sync with every other system in the brain, including the CC).
The reason, therefore, that consciousness “appears” unified to us is the very same reason a certain concentration of photons within a certain frequency range looks like a patch of green to us: the brain constructs that perception as a useful way to efficiently handle and analyze the data. This is, of course, just a hypothesis. Science is still looking at the problem. But this is a plausible hypothesis endorsed by most leading cognitive scientists. All of the above is based on sound science. It has already been proven, for example, that human sensory systems are heavily, physically integrated (and act in concert) not only globally, but even in concentrated form within, for example, the superior colliculus and other areas of the brain.
Joseph LeDoux also discusses the relevant evidence in Synaptic Self: How Our Brains Become Who We Are (2002). The summary of his theory of consciousness (which Reppert’s arguments fail to address) appears on pp. 310-12. It consists of seven principles, none of which Reppert takes adequately into account:
- (1) parallel processing (which is how we have proven our brain does its work: many systems processing the same data simultaneously, but in different ways and with different outcomes).
- (2) neural synchrony (the physically proven fact that brain systems operate in concert and activation is synchronized by global timing signals).
- (3) neurochemical modulation (we have proven that the brain’s systems all communicate with and have causal effects on each other not only due to the brain’s global synaptic structure, but also due to global neurochemical response: one synapse can affect all the others in the brain by triggering the release of a particular chemical, just as one synapse can affect all the others by sending an electrical signal that circulates throughout).
- (4) convergence zones (we have proven both physiologically and anatomically that the brain’s systems are heavily integrated in particular regions of the brain, where the circuitry of many systems converges: in fact, these particular physical convergence zones are known only in animals with a cortex, and only such animals engage in behaviors that require the integration of disparate senses and thought processes, so we can infer that this is indeed the biological function of a cortex, to provide such integration, and to engage in higher-order oversight of the brain’s subsystems).
- (5) causal organization is both bottom-up and top-down (we have physically proven that the brain builds complex computations from a conglomeration of simpler ones, but then these complex computations provide feedback which causally affects the simpler modules in turn).
- (6) emotion and cognition are fundamentally integrated (from both anatomy and observations of brain activity, and comparisons of physiology across species, we have proven that emotional perception and reasoning constitute the most primitive computational system, on top of which the human cognitive system has been built ad hoc, which means emotion is as important to cognition as all the other senses, especially in terms of causal effect on cognition, and that emotional reasoning was the evolutionary precursor to rational reasoning—indeed, we can hypothesize that the latter was produced by the gradual refinement of the former in the direction of increased predictive success, i.e. making more and more correct anticipations of feedback when interacting with the environment).
- (7) subconscious computation (only some brain function is perceived and modeled in our consciousness because the organ that detects a “self” is new, and like a new eye, is relatively simple and limited in what it can “see” and “process,” but also because some brain functions are actually a distraction to higher-order reasoning and thus, by drawing resources from it needlessly, would actually be a detriment if apprehended and tracked by the organ of consciousness).
What is significant here is that LeDoux’s theory of consciousness (which converges very closely with mine) is thoroughly based on scientific evidence, whereas Reppert’s theory is not. Indeed, there is no evident way that Reppert’s theory of mind (whatever he thinks that would be in the final analysis) would predict the seven observations on which LeDoux bases his theory, whereas LeDoux’s physicalist theory accounts for all of them. LeDoux’s physicalism therefore has a vast explanatory advantage over Reppert’s (anticipated) alternative. That is one good reason to trust LeDoux is on the right track, and Reppert is not.
Even so, I must remind all readers of the Possibility Fallacy: just because physicists haven’t yet explained something does not mean they cannot do so. And Reppert needs to prove the latter. It is not enough to prove the former—especially given the fact that all evidence supports the Inference to Physicalism, and no (positive) evidence contradicts it, even in the field of cognitive science. But even without the scientific evidence supporting theories like LeDoux’s, and without the Inference to Physicalism, we would still only have agnosticism, not supernaturalism. Hence the demands Reppert has set for himself are far higher than he meets.
This provides a good lead-in to what follows. The structure of Reppert’s argument shrinks as the book goes on. He begins with nine “problems” (see “The Basics of Reason under Carrier Naturalism” above), but then builds from them six “arguments” (see “Six Arguments from Reason” above), and then ends up organizing this all into three broad categories (in his chapter five): “intentionality and the efficacy of mental states,” “laws of logic,” and “the reliability of our rational faculties.” I have already addressed much of what he says here in various sections above, so that material will not be repeated. Instead, in the following sections I will address any remaining issues.
1. Propositional Content
Reppert begins by elaborating his attack on a naturalist theory of mind (cf. earlier: AfMC and AfUC). But when Reppert says things like “the physical realm…will go on its merry way regardless of what mental states exist” (89), he fails to take seriously the most common forms of naturalism, which hold that mental states literally are, or necessarily correspond to, physical states unique to those mental states (as Reppert acknowledges: 52). On such a view, the word “regardless” has no place in such a sentence. Reppert tries to argue that mental content is irrelevant to the corresponding physical content of brain states, but that is simply not true on physicalist or emergentist psychology—indeed, it is not true by definition. That is, on such naturalist theories of mind, it is actually an inherent element of their explanatory hypothesis that no mental event can exist without a uniquely corresponding physical event. Now, that theory might be false, but to argue so, Reppert would have to present physical evidence from brain science refuting it. Which he doesn’t do. Therefore, since he does not challenge its factuality, he must accept the theory as tenable. But once he accepts the theory as tenable, a statement like “the physical realm will go on its merry way regardless of what mental states exist” is necessarily and literally false for any universe in which that theory is fact. Therefore he cannot advance that statement as an argument against any naturalism that adopts that theory—which is by far most versions of naturalism defended today.
Still, Reppert believes “that a consistent physicalism leads to the conclusion that there are no mental states with propositional content” (89), and since all other naturalist theories of mind exclude epiphenomenal causation, naturalist theories of mind must be incoherent. And he makes a valuable point here, with which I have always agreed: even if epiphenomenalism is true, though that would rescue propositional content from Reppert’s objection, it would deny any causal role to that content, and therefore either such a theory of mind must deny that role, or else become self-refuting. Either way Reppert wins, since propositional content must be essential to any causal process underlying Reason. Thus, if naturalists are to defend themselves against Reppert’s objection, they will have to accept some sort of physicalist theory of mind. But that is no problem for me.
Now, it is most curious that Reppert never really “defines” what he means by “propositional content,” nor does he present any evidence against such a thing existing within a strict physicalist theory of mind. He pretty much just rests on yet another Argument from Lack of Imagination, a kind of “Dunno how it could work, so it must be that it can’t” attitude, a classic Argument from Ignorance. In contrast, I have defined this phrase already above, and in several places accounted for how such a thing can exist within a strict physicalist theory of mind. That is sufficient to negate Reppert’s objection as unfounded, or at least undemonstrated.
Related to this groundless attack is Reppert’s attitude toward the concept of intentionality, which I have already discussed at length earlier (cf. e.g. AfI). But now he quotes Geoffrey Madell complaining that “it is quite unclear what an analysis of a thought’s directedness could possibly amount to” (90) and that our feeling of “aboutness” seems inherently irreducible. There are two erroneous assumptions here.
First, an experience of irreducibility is useless as a guide to the underlying physics. The experience of the color green, for example, is also irreducible—in experience. This does not mean the experience itself is not reducible physically. Obviously we do not experience the world directly (though if a deity created us we ought to ask why not), but by a complex process of experience-construction. It is in the very nature of such a process that the elements of the constructing mechanism cannot be experienced themselves. That would require an infinite regress of computational mechanisms (and would not be useful anyway). But just because you cannot see the electrons changing patterns in a computer’s microchip does not mean there is any great mystery to the computer’s beating us at chess. So, too, the fact that we cannot directly apprehend how apprehension occurs in ourselves does not mean it is not brought about by a complex of physical causes. So, too, for our sense of “aboutness.” Like the color green, we cannot look behind the curtain and see how it is generated—at least not first-hand. But that is an epistemological defect, not a metaphysical one. Just because we can’t look behind the curtain doesn’t mean there isn’t anything there. And we know something is there—it’s called a brain. We can see that second-hand, even as it functions for someone else first-hand (see next section).
Second, it is certainly clear to me what an analysis of “a thought’s directedness” could amount to, so what we have is still one more Argument from Lack of Imagination. The fact that one thought is about another thought (or thing) reduces to this (summarizing what I have argued several times above already): (a) there is a physical pattern in our brain of synaptic connections physically binding together every datum that we say is “about” an object of thought (let’s say, Madell’s “Uncle George”), (b) including a whole array of sensory memories, desires, emotions, other thoughts, and so on, and (c) our brain has calculated (by various computational strategies) that this physically connected set of data describes elements of that object (Uncle George), (d) which of course means a hypothesized object (we will never really know directly that there even is an Uncle George, so we only “predict” his existence based on an analysis, conscious and subconscious, of a large array of data, which means our brain-circuits include a physical measure of confidence in our brain’s calculation that there is an Uncle George), and (e) when our cerebral cortex detects this physical pattern as obtaining between two pieces of data (like the synaptic region that identifies Uncle George’s face and that which generates our confidence that someone with that face lives down the street), we “feel” the connection as an “aboutness” (just as when certain photons hit our eyes and electrical signals are sent to our brain we “feel” the impact as a “greenness”).
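Points (a) through (e) can be caricatured in code (a deliberately crude toy under my own assumptions, with every name, including “uncle_george,” a hypothetical illustration): “aboutness” modeled as the detection of a stored connection between two data, carrying a confidence value, which vanishes when the connection is severed.

```python
# A crude caricature of points (a)-(e): aboutness as detected connections
# in a network of data, each link carrying a confidence value (point d).
# All names here are hypothetical illustrations, not claims about neuroanatomy.

class ConceptNet:
    def __init__(self):
        self.links = {}  # (datum_a, datum_b) -> confidence in the connection

    def bind(self, a, b, confidence):
        """Physically connect two data (points a-c), recording a measure of
        confidence in the hypothesized object they jointly describe."""
        self.links[(a, b)] = confidence

    def about(self, a, b):
        """Detecting the connection (point e): the 'feeling' of aboutness is
        just the report that a link exists, and how strong it is."""
        return self.links.get((a, b))

net = ConceptNet()
net.bind("face-memory", "uncle_george", confidence=0.97)
print(net.about("face-memory", "uncle_george"))  # 0.97
net.links.clear()  # physically sever the connection
print(net.about("face-memory", "uncle_george"))  # None: the aboutness is gone
```

The design choice worth noting is that nothing over and above the links themselves is needed: destroy the physical connection and the aboutness disappears with it, which is exactly the pattern the brain-damage cases exhibit.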
So, when we hold the image of George’s face in our minds, we “feel” the physical connection this face-datum has to other data, such as the datum regarding our high level of confidence in George’s having a body outside our brain and residing down the street. And notably, we know this connection can be physically broken or destroyed: there are people who, after physical damage to the brain, can no longer recognize faces, even though they see every detail perfectly well, and can otherwise remember everything else about the person in question. Indeed, some people who have had their brain’s hemispheres surgically separated can recognize a face presented in one half of their visual field but not the other! Thus, for them, the face data is there, it has not been destroyed, but since the other hemisphere’s connection to it has been physically severed, its half of the cerebral cortex can no longer “feel” that such visual data is “about” anything, much less their uncle. That seems to be pretty strong evidence that “aboutness” is the perception of a physical connection between synaptic patterns in the brain, and not some mysterious Platonic power implanted by God.
When we take in all of the above (and discussion under AfI) we can now see what’s amiss when Reppert makes the following argument:
Consider the following two scenarios.
1. I sit in a chair, have an idle thought about my cousin Warren, forget the thought completely and go on with my day.
2. I sit in a chair, have an idle thought about my cousin Don, forget the thought completely and go on with my day.
In these two scenarios my thoughts are about different people, yet they produce precisely the same effects, namely, none…it causes no effects on my behavior. The difference is simply a difference in my first-person world; it cannot be accounted for from the outside…[an outside observer] will never capture what my thoughts are about. (91)
This is, quite simply, all wrong. First, the two scenarios do not “produce precisely the same effects” but in fact track completely different physical events in the brain. Reppert is engaging in a fallacious definition of “behavior” when he says these different thoughts have no different effects on our behavior, for thoughts themselves are a behavior—and the behavior of thinking about Warren, which activates one area of synapses in the brain, is physically different from the behavior of thinking about Don, which activates a completely different area of synapses in the brain. Thus, the two thoughts are physically different and track physically different behaviors in the thinker. Second, it is not the case that this difference is “simply” a difference in our “first-person world,” since the difference is also a difference in brain activity, which is indeed observable to a third party, despite Reppert’s assertion to the contrary.
Now, as of yet, our ability to analyze brain activity has not reached the level of precision to distinguish different face-recognitions. But that is simply a technological limitation, not a metaphysical one. For indeed we have reached the ability to distinguish visual recognitions across many categories. For instance, we know where the brain stores pictures of tools (like hammers and scissors), and it is physically located in a different place from where the brain stores faces (like the faces of Warren and Don), whose location we also know. And we can observe a brain’s activity while someone is thinking, and a third party can in fact tell when an observed subject is thinking about a tool and when he is thinking about a face: simply by observing which areas of the brain activate at the time. The same goes for all aspects of thought (at least theoretically): names, sounds, mailing addresses, presumably even hypotheses and degrees of confidence therein, and so on—every single datum we possess in our minds about our two cousins Warren and Don.
There is no theoretical barrier to how far technology can take us in this direction. I will bet good money that in a hundred years we will be able to track the location and activation of every synaptic connection in the brain, and at that point we will even be able to tell when Reppert is thinking about Warren and when he is thinking about Don—because the face-recognition circuit for each is located in a physically different place, even though we already know they are very near each other. Indeed, with that kind of resolution we will be able to trace synaptic connections all the way back to, for example, cells in the eye, and to other organs, and other areas of the brain where data labeled “Warren” or “Don” are stored (since, by being associated with Warren or Don, those circuits are physically tied to each other—that is how our brain knows they are connected, and that is what we “see” when we see these data are “about” these guys).
By being able to see the links to the sensory cells that originated and reinforced the data that is synaptically stored, we will be able to determine exactly what Warren’s face looks like, in every particular the observed subject can remember, without ever seeing Warren or even communicating with the observed subject. Simply by observing the pattern of arrangement and interconnection of synapses and other nerve cells, a third party will be able to reconstruct the entire content of any human thought. This is not to say that the third party will then be able to “picture” Warren or Don in their own head—for to do that, they would have to arrange the synapses in their own brain in roughly the same way, and our brains have evolved no capacity to simply “upload” a program like they did in the movie The Matrix.
The only way to “program” our brains to have a thought about, say, someone’s face, is to activate the entire visual system in the normal way. That is, we would have to draw on paper or animate on a computer screen what the synaptic data in the observed subject tells us the face looks like—its colors, patterns of arrangement, and so on—then look at that, and by that means we would burn the same synaptic pattern into our brain as the subject had done by viewing his cousins. Or we could skip the intervening mechanism of our eyes, dispense with drawings or animation, and instead use some device to directly rearrange our brain’s synapses into the requisite pattern. Either way, only then could we activate that circuit and thereby imagine and “see” Warren or Don’s face. But all of this will still be achievable without ever speaking to the subject or seeing his cousins.
Reppert would know this if he would bother to study contemporary brain science, and not rely on the musings of Lewis (whom in fact he cites for his point here, on page 91). Lewis had no conception of the facts of brain science now available to us, and for that reason his thoughts on the subject are as archaic and obsolete as the fantastical (and hopelessly false) theories of mind developed by Freud or Jung. What Reppert needs to do is get up to date, and address what we now know, and have the prospect of soon knowing, about the brain and its behavior. Maybe after a thorough survey of that evidence he will be able to find something that science has literally no prospect of explaining—but I doubt it. At any rate, he doesn’t do this in his book, and that failure is fatal to his enterprise.
Reppert allows that maybe naturalists like me are right, but then claims “there are still difficulties here concerning how one thought can cause another thought in virtue of its content” (91) because “materialism maintains that causal relationships are governed by physical laws, not by mental content and not by logical laws” (91-92). But materialism does not maintain such a thing (cf. AfMC). Reppert has falsely characterized the physicalist hypothesis, stumbling into the Causation Fallacy yet again. It is true that physicalism holds that all causal relationships are governed by physical laws. But physicalism does not entail that physical causal relationships cannot also be mental or logical.
Reppert should know better—in his analysis of Anscombe earlier in the book he came to agree that being physically caused does not exclude the possibility that there are different categories (patterns) of physical causation, like, say, rational physical systems vs. nonrational physical systems. But despite seeming to understand the Causation Fallacy, Reppert keeps on committing that very fallacy, as he does again here. Insofar as (on mind-brain physicalism) mental content is physical pattern-data in the brain, mental content does indeed constitute a legitimate category of purely physical causation. And since, for example, Reppert could never have a thought of Warren’s face without synapses in his brain physically possessing the necessary arrangement (since if that circuit were arranged in any functionally different way, it would not generate an image of Warren’s face, but some other), it is impossible that Reppert’s thought of Warren’s face could ever be caused, or ever cause anything, except through a physical causal relationship involving this synaptic circuit. By the same token, “logical laws,” by definition simply truth-finding procedures, a form of mechanical computation, are physically represented in the brain—such that if the brain’s circuitry were organized in any functionally different way, it would be incapable of logical reasoning, and it is only because this physical arrangement of logic circuits can causally affect further brain activity that humans can reason at all.
Reppert tries to argue against this not as he should—by discussing the vast array of physical evidence available for brain function—but by ignoring all the facts and touting the famous (and dubious) thought-experiment of the “zombie,” as if it were evidence against physicalist theories of mind. But his discussion is too brief and completely uninformed by the vast literature on the subject. In actual fact, the “zombie” (in a thoroughly complete sense) is a conceptual impossibility and therefore is not evidence against any theory of mind. It is simply impossible for any entity to engage in all the behaviors of a conscious being without being conscious. For example, if you asked one of Reppert’s “zombies” whether the color of this rose here was unusually brilliant and whether a little sparkle just here was gleaming off the dew on its left petal, to satisfy his definition of what a “zombie” is, it would have to truthfully answer “yes” if in fact it was making that observation as we are (since lying would constitute a different behavior). But how could the zombie truthfully say any of this, if it was not observing every one of those details exactly as described? Indeed, to see these things is literally to experience them—you cannot “see” a red flower and at the same time not be having any mental experience whatever. That is a contradiction in terms.
Moreover, a self-contradiction in Reppert’s particular version of this thought-experiment arises as follows:

1. He stipulates that in his zombie world all physical states are the same as in our world—which must then include behavior that results from rational decision-making and coherent conversation.
2. But he draws from this stipulated world the conclusion that no causation in virtue of propositional content would exist in that world.
3. But Reppert has already committed to the stipulation that rational decision-making cannot exist without causation in virtue of propositional content (this is in fact the very point of his AfMC, i.e. AfR #3).
4. It follows necessarily that if there can be no rational decision-making in a world without causation in virtue of propositional content, and his proposed zombie world is such a world (as he asserts it is: 92-93), then there can be no rational decision-making in that world, which entails that that world will be physically different from this one (e.g. zombies will behave differently in it), which contradicts his original stipulation.

Thus, Reppert’s zombie world is self-refuting. Because of his own commitments to certain stipulations, Reppert has constructed an incoherent system of propositions. Obviously, that proves nothing.
Finally, Reppert concludes this argument by repeating the Possibility Fallacy when he tries to respond to naturalists who reject his assumptions (though he names only Horgan). In particular, Reppert argues that mind-brain supervenience “stands in need of explanation” (which is obvious—and not anything any naturalist has ever denied) but we do not have such an explanation yet (which is not entirely true: though it is true in the scientific sense of a confirmed explanation, it is not true in the philosophical or analytical sense, since there are a number of viable and well-developed hypotheses in contention, none of which is even mentioned, much less addressed, by Reppert), and “one cannot simply presume that physicalism is true” (94).
Even if this closing argument were valid and its conclusion true, it is not sufficient to establish any Basic AfR Premise. For by the same token, one cannot simply presume that physicalism is false either. As I have said time and again, it is only by demonstrating that physicalism is (at least probably) false that any AfR can succeed. Indeed, Reppert’s conclusion here, besides being useless to his agenda, is itself false. We do not “simply presume” physicalism is true: we have good arguments and evidence supporting it (as I explained earlier). Reppert has nothing even remotely comparable supporting his explanatory dualism. And since he has also not addressed any actual physicalist theories of mind, he cannot claim that such theories are conceptually impossible or factually false. Yet that means he cannot claim his AfR is a success.
Ontology of Logic
Reppert eventually returns again to the ontology of “logical laws” (94-98), which I have already addressed in detail several places above (cf. e.g. AfPR). I will not repeat my earlier points but address only what is new in Reppert’s presentation here.
Reppert says a rational mind “obeys a radically different set of laws from the laws of physics, namely, the laws of logic,” which he claims “do not result from the laws governing the physical order” (94). Reppert again offers no evidence that the laws of logic don’t actually derive from the laws of physics. As I have already explained, obviously they do: the logical operations realized in physical computation are certainly an inevitable emergent consequence of the laws of physics. Because the laws of physics are as they are, the laws of logical operations are as they are; if the physical laws were relevantly different, so would the logical laws be.
Of course, to get the physical laws to be relevantly different would require changing the physical universe in an absolutely radical way, which may even make the existence of minds in such a universe physically impossible. For instance, I can imagine a physical universe wherein the logical law, which I shall call modus primoris, holds: whatever is first in any series of alternatives is always true. Such a universe would be physically bizarre, but I don’t think it would be incoherent. For instance, we could simulate such a world on a computer and—perhaps—go and live in it as one would live in The Matrix. On the other hand, in order to physically change the universe so that the law of non-contradiction would no longer apply in it, we would have to eliminate all distinction—such a universe could have only one physical object (actually, not even that, insofar as being “physical” is itself a distinction), in which resided all properties whatever, which in turn can have no distinctions whatever between any of them (which seems to me to describe a literal nothingness). Such a universe could never manifest a mind, since a mind requires the existence of distinctions (distinct objects and distinct properties—not to mention causal and ontological consistency), and any universe in which distinctions exist (along with causal and ontological consistency) is a universe that will be described by (among other things) the law of non-contradiction.
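The claim that such a world could be simulated is easy to make concrete. Here is a minimal sketch (my own toy construction; the function name merely labels the imagined law, and the sample “alternatives” are invented for the example) of a rule-evaluator for the modus primoris world, in which the first of any series of alternatives is simply stipulated true:

```python
def modus_primoris(alternatives):
    """In the imagined world, whatever comes first in any series of
    alternatives is always true; everything later in the series is false."""
    return {alt: (i == 0) for i, alt in enumerate(alternatives)}

# Ask the simulated world "is it raining, sunny, or snowing?" and the law
# itself settles the question: the first alternative wins.
verdict = modus_primoris(["raining", "sunny", "snowing"])
print(verdict)  # {'raining': True, 'sunny': False, 'snowing': False}
```

Physically bizarre, as I said, but perfectly computable: nothing prevents a machine from consistently enforcing this rule, and that consistency is all “simulating such a world” requires.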
Reppert also seems to lack a grasp of just what logical relations are. For now he asks “How can it be true…that one thought causes another thought not by actually being its ground, but by being seen to be the ground for it?” (94-95). One wonders what Reppert thinks it means to have a logical “ground.” He clearly thinks it is some kind of extra property, some “third entity” that exists apart from P and Q, when P entails Q. But that is hocus pocus, rather like thinking there must be invisible angels that are pushing the planets in their orbits, or that a car’s engine would not go but for a magical car-demon jerking its pistons. As I explained in detail earlier, when P entails Q this means that the “set” or pattern P contains the set or pattern Q. Thus, all that is needed for “P entails Q” to be true is that Q be contained within the meaning (the mental content) of P. No “third” entity mysteriously called a “ground” needs to exist. The “ground” is the fact that P includes Q in its content, which requires nothing but P and Q. Likewise, no mysterious power is needed to detect the fact that Q is contained within P: all you have to do is look at P, and search around to see if Q is in there—and whether Q is wholly in there or extends outside P, and so on, but in every case, it is just a question of detecting different patterns, which any appropriate mechanical process can do.
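On this analysis, detecting entailment is just detecting containment, and containment-detection is a mechanical operation. A toy sketch of my own (modeling a proposition’s content as a bare set of atomic facts, which is of course a drastic simplification of real mental content):

```python
# Model the "content" of each proposition as the set of atomic facts it asserts.
P = {"George exists", "George has a face", "George lives down the street"}
Q = {"George exists"}

def entails(p, q):
    """P entails Q just in case Q's content is contained within P's content.
    Plain set containment: a purely mechanical pattern check, requiring
    nothing but P and Q -- no third 'ground' entity."""
    return q <= p

print(entails(P, Q))  # True: Q is wholly "in there" when we look inside P
print(entails(Q, P))  # False: P extends outside Q
```

The point of the sketch is that the “ground” of the entailment is exhausted by the two patterns themselves plus a mechanical check, which is exactly what I argued above.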
Likewise, Reppert senselessly asks “How could there possibly be states of something that not only do not exist in any particular place or time, but are true in all possible worlds?” (95), referring to the state of “P entailing Q.” But the state of “P entailing Q” does exist somewhere: at the very least, it exists in our brain, but (when referring to an actual fact outside the brain) it also exists in the physical world, wherever there is a P and a Q. But it also exists as a fact of the universe as a whole, just as the law of gravity exists as a fact of the universe as a whole—for there is no particular “place” or “time” where the law of gravity “is,” it “is” in every place and time where the physical conditions that manifest gravity exist. By the same token, the same physical conditions (attributes of the universe) that permit P to entail Q exist in every place and time where the relevant laws of logic also manifest. For example, the law of non-contradiction “exists,” i.e. applies, wherever distinctions, along with causal and ontological consistency, physically exist.
So not only is Reppert’s question “How could there possibly be states of something that…do not exist in any particular place or time?” based on groundless suppositions about the physicalist ontology of logic, but his second question, “How could [such states be] true in all possible worlds?” has the whole thing turned completely backwards. The fact is not that logical relations are true in all possible worlds, but the other way around: the set of “all possible worlds” is the set of all worlds that can be constructed from (i.e. are described by and thus consistent with) the laws of logic.
There is absolutely no mystery to the fact that any given set of rules can describe features of (i.e. be consistent with) a large array of possible constructions. It just so happens that the array of possible constructions that conform to the “laws of logic” is what we choose to call the array of all possible worlds. But that is a misnomer. For I can imagine worlds in which some of the laws of logic do not apply (for example, a world in which distinctions are impossible is a world that is not described by the law of non-contradiction). Why can I imagine that world? Because to “imagine a world” is to simulate it, to create a model, from a set of descriptive rules. We can start with any descriptive rules we want—not just the laws of logic, but anything (though they must be consistent for computation to be physically possible). That is why the set of all worlds described by Euclidean Geometry is smaller than the set of all worlds described by Non-Euclidean Geometry: change the rules, and you change what is “possible,” i.e. imaginable (which means “computable”).
Reppert insists there is still a mystery because the recognition of logical relations “require[s] that we know something about thoughts that we have not yet thought” (96), i.e. that logical laws apply to future thoughts, hence thoughts we have not yet had. But this is yet another pointless complaint. First, as I already explained (in Note 15), knowledge is not the same thing as information. We can have all the letters in the right sequence (information) and still not know that they spell “stop” (knowledge). To extract knowledge from information requires analysis (computation), and that entails a procedure. Second, and more directly relevant to Reppert’s worry, the reason we can “know” that a procedure that applies to a thought today will apply to a thought tomorrow is the same reason we can know that a procedure that applies to growing corn today, or building an airplane today, or sharpening a spear today, or starting a fire today, will also apply to the same activity tomorrow. There is no mystery here at all. The reason we know a procedure applies at all is that we have learned from observation and practice that given a certain set of relevant observational conditions (like having some sticks of dry wood and kindling, in habitable atmospheric conditions, with low wind, and so on), the procedure will work (i.e. produce the result intended: like starting a fire). Therefore, we “know” (inductively, hence never with absolute certainty) that whenever those conditions obtain, tomorrow or a hundred years from now, the procedure will still work.
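The distinction between information and knowledge, and the role a procedure plays in bridging them, can be made concrete with a trivial sketch (my own example; the vocabulary set is invented for illustration):

```python
VOCAB = {"stop", "go", "pots"}        # patterns the analyzer has learned

information = ["s", "t", "o", "p"]    # the letters, in the right sequence

def recognize(letters):
    """A procedure (a computation) that extracts knowledge from information:
    only by running it do we come to know what the letters spell."""
    word = "".join(letters)
    return word if word in VOCAB else None

print(recognize(information))  # stop
```

The information (the ordered letters) is present whether or not the procedure ever runs; the knowledge that they spell “stop” arises only from the analysis. And the same procedure will extract the same knowledge from the same conditions tomorrow, for the same inductive reasons any learned procedure keeps working.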
So it is with logic, only instead of sticks and favorable wind conditions, the relevant observational conditions are those contained within the laws of logic themselves—such as the existence of distinctions and consistency. We know that all thoughts to which logic will pertain will meet those conditions, since that is true by definition: rational thought literally is any thought that meets the conditions fixed by logical laws. There is no mystery in our coming to discover that a set of laws will apply to all thoughts to which those laws apply. That is a tautology, and every tautology is a directly observable fact of experience.
Of course, it is also true that we know the certainty of the laws of logic only inductively. We can never be absolutely certain of anything, not even that the laws of logic are certain. Children don’t know it, for example, so such knowledge is clearly not innate. Which means we must learn it. And when you think about it, the only reason you “know” it is “certain” is that you yourself infer it from past experience, including countless thought-experiments beginning in early childhood. So “logic is certain” is probably true. But we could all be amazingly deceived in all this. That’s possible. However, we have so much experience with thinking, and comparing the products of thought against results and observations, that for us to be wrong about this is extremely improbable—so improbable we often take it for granted and euphemistically call it indubitably certain.
In sum, Reppert has done nothing to show that physicalism is incapable of producing a credible ontology of logic. He doesn’t even mention or discuss, much less actively refute, any actual naturalist theories of logical realism. So his AfR fails on any such challenge.
The strange questions continue when Reppert returns at last to the Evolution of Reason, which I have already discussed in detail (cf. AfRF, etc.), so I will only address new issues here. For example, Reppert quotes Robert Adams asking “What survival value was there in recognizing necessary truths as necessary?” (96) but by now it should be clear how silly this sort of question is. First, what has survival value is the ability to make use of certain computational operations (as I have explained in detail several times already). The fact that those procedures eventually result in a recognition of the existence of necessary truths is simply a byproduct of an otherwise useful truth-finding function, just as the ability to play complex musical instruments is an inevitable byproduct of opposable thumbs. Indeed, the actual recognition is not a result that was hit upon genetically, as Adams erroneously supposes, but memetically. Children have no conception of a necessary truth. They have to learn it. Indeed, the idea of a “necessary truth” did not really exist in human conception until the Greeks discovered it. Thus here we have another Argument from Lack of Imagination, and what I might call a Fallacy of Conflation—confusing the technology with the biology of reason.
Likewise, the fact that formal logic is learned (see Note 37) pretty much destroys the thesis of Adams that (as Reppert quotes him) God “constructed us in such a way that we would at least commonly recognize necessary truths as necessary” (98). If God made us that way, why don’t children recognize this? Though the capacity to develop such a faculty is probably indeed innate, that is not the same thing. For why would God give us a capacity that needed developing, instead of just giving us the ability outright? Moreover, why would God only give us the ability to “commonly” recognize necessary truths (as Adams must concede, since he realizes we often make mistakes)? Why not give us the ability to always recognize them? So there are explanatory deficiencies in the Adams hypothesis.
The truth is that, as I said of our innate ballistics computer earlier, the imperfection and inefficiency of human cognition is exactly what we should expect from evolution, and exactly what we shouldn’t expect from intelligent design. Evolution predicts an imperfect but serviceable reasoning faculty that depends upon memetic assistance to develop and hone it. Lo and behold, that is exactly what we get. Theism predicts a flawless reasoning faculty innate from birth. But that is not what we get. One can invent all kinds of “Ptolemaic epicycles” to make theism fit the same evidence entailed by evolution, but that neither proves theism better than evolution as an explanation, nor does it bode well for theism as a scientific hypothesis of cognitive development. For to incorporate a plausible array of excuses, any creationist theory of human reason quickly becomes hopelessly complex, loaded down with tons of unproven ad hoc assumptions for why the parsimonious theory implies different facts than we observe, while the elaborated theory instead produces facts exactly identical to those expected on natural evolution!
Reppert then implicitly changes the subject to inductive logic, bringing up Hume’s attack against its principles—as if nothing else had been written on the subject of the foundations of induction in the past two hundred years. He argues that (quoting Lewis, again as if nothing had been written on this subject in the past fifty years) we have an “innate sense of the fitness of things” (99), i.e. we innately feel that the universe is consistent and thus that the principle of past-future uniformity (upon which many forms of induction are based) is trustworthy. Unfortunately, Reppert commits the same fallacy here of simply not doing his homework. On the one hand, it has been conclusively proven that we do not possess any such innate sense (i.e. an operational belief in uniformity)—to the contrary, children must take several years to learn all the principles of consistency (see Note 37). On the other hand, what makes developing this sense possible is the very nature of neural net computation, and the principles of vector completion (as defined by Paul Churchland). And that is a computational ability every animal benefits from. So not only is it not true that our belief in the principle is “inexplicably innate” (because it isn’t even innate), but it is also not true that following such a principle has no survival value. Obviously an organism that relies on the consistency of a consistent world will have an advantage over one that does not rely on it (unless it lives in an inconsistent world—but Reppert nowhere argues that the universe might be inconsistent, nor would that be an AfR). So Reppert’s brief (and apparently rare) foray into inductive logic also fails to support his case.
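Vector completion itself is easy to illustrate. The sketch below is my own bare-bones stand-in (the stored patterns and their names are invented for the example, and real neural nets do this with learned weights, not a lookup table): a degraded input is completed to the nearest stored pattern, which is exactly the kind of operation that lets an organism exploit the consistency of a consistent world.

```python
# Hypothetical learned feature vectors (invented for illustration).
STORED = {
    "fire":  (1, 1, 0, 0),
    "water": (0, 0, 1, 1),
}

def complete(partial):
    """Complete a noisy or partial input to the nearest stored pattern
    (nearest by Hamming distance), in the spirit of vector completion."""
    def dist(v):
        return sum(a != b for a, b in zip(partial, v))
    return min(STORED, key=lambda name: dist(STORED[name]))

print(complete((1, 0, 0, 0)))  # fire: a degraded cue still completes to "fire"
```

An animal whose brain completes today’s partial cues to yesterday’s learned patterns is, in effect, betting on the uniformity of nature, and in a uniform world that bet pays off, which is all the “survival value” this capacity needs.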
Five Axioms of Science?
So much for induction. Next Reppert turns to an implicit attack on what I can only suppose he imagines as some sort of special “scientific” logic, though he never explains or defines his target here. All Reppert does is list (drawing on Lewis, who in turn drew on Balfour) five “preinvestigatory convictions” which, supposedly, “were not…based on experimental evidence” (99) or “physical evidence” (100), and then asks “if all of this is just the product of naturalistic evolution, then why are we so sure, and why have we always been so sure” (emphasis added) “that the world ‘out there’ corresponds to these convictions?” (100). As you might expect, Reppert concludes that theism would make more sense of the fact that we were innately attracted to these axiomatic beliefs. But Reppert is wrong twice over here. Most of these axioms are derived from “physical evidence”—they are not completely biologically innate—and we have not “always been so sure” of them. And those that have any biological basis at all are actually shared by dumb animals, and make perfect sense as products of natural selection. Let’s take them one at a time:
1. Our belief in an objective, independently existing external world.
But Greek philosophy from its earliest days included both those who advocated and those who denied this axiom (Parmenides and the Eleatics especially). Hinduism and Buddhism, which between them have claimed half of the world’s population across much of the history of civilization, explicitly reject it. And the science of child development has shown that children before the age of seven do not yet behave in accordance with this axiom. So how can anyone say this axiom has always been believed by all humans? Clearly it is an axiom that only started to see organized defenses under the Greeks—and dominated in the West because it was far more empirically successful. It is thus part of the technology of reason, not the biology of reason.
However, there is some biological underpinning here: brains only succeed in producing intelligent behavior because they model an environment that actually exists. This is true for most animals. If brains adopted a different computational strategy, such as imagining that all experience was just a subjective construct, they would far more often fail to benefit the organism. Even so, the system is by no means perfect. Even in humans, it takes years of interactive brain development after birth before we have fully grasped the tactic of modeling an external environment (rather than acting egoistically as if we were gods), and even then, as history shows, humans can reject this and deploy a different tactic altogether—to their detriment (not only the sane, but the insane demonstrate this: a schizophrenic has very little prospect of survival without the aid of a non-schizoid community). In simpler animals, who have no ego construct—like, say, cats—there is never any question of subjective constructs, since cats have no conception of a distinction between subjective and objective constructs, but their brains, as a matter of practical fact, still construct models of an actual, not an imaginary, environment. That’s why they can get around without bumping into things.
2. The uniformity of nature and governance by the laws of nature.
Ancient scientists did not employ the idea of a “law of nature,” at least in the sense intended here (see below). So that leaves only the “uniformity of nature.” But this is observed, by everyone, from the moment they open their eyes as babies. So how can we pretend to know that it is not arrived at by empirical observation? Even so, this one belief (in uniformity) would serve an obvious biological advantage, so even insofar as there is any innate biological basis for a belief in uniformity, it is easily explained in evolutionary terms. As I’ve already said, if the universe is uniform, then an organism that operates on the assumption that it is uniform will have an advantage over an organism that does not.
Even so, if there is any such innate sense, it is imperfect. Otherwise, how does Reppert explain the widespread belief in miracles and other supernatural deviations from uniformity? Buddhists and Hindus believe they can defy the “laws of physics” once they achieve a sufficient degree of enlightenment. Many Christians (laymen far outnumbering theologians) believe the “laws of physics” not only can be broken, but often are. Postmodern metaphysicians deny that there even are “laws of physics” and assert the mind’s supremacy over nature.
So clearly it is not the case that humans all innately believe in the uniformity of nature—otherwise humans would all have been scientists from day one. But it took three thousand years of civilization before we hit upon the idea of a rational scientific method, and another two thousand before it became popular, and even then, it is only really understood, and accepted, by a minority of the world’s population. But they accept it because it works. This is therefore another example of the technology of reason, not the biology of reason—and if there is anything else to it biologically, natural selection would fully explain that.
3. An intuitive, nonmathematical conception of probability.
This has been demonstrated even in pigeons. And its survival advantage is obvious, since the ability to grasp the probability of an outcome is of inestimable value in many different respects. But it has also been proven that this human ability is flawed and sometimes leads to error. It is a serviceable ability, in that it helps us more than it hurts us (just as it helped pigeons and rats and other animals cope with their environment), but it is nowhere near the sort of intuition we would expect from an intelligent designer. In contrast, an imperfect but useful organ is exactly what natural evolution predicts. On the other hand, the system of probability actually employed in science is not our innate capacity, but a highly refined technology. For scientists have found that our intuition here is far too flawed for the needs of scientific reason. Hence anecdotal evidence is rejected, and controlled, double-blind experiments followed by mathematically precise analyses are preferred instead.
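The gap between intuitive and computed probability can be illustrated with a standard base-rate example (the numbers here are hypothetical, chosen purely for illustration):

```python
# Toy illustration (hypothetical numbers): why intuitive probability misleads.
# A condition affects 1% of a population; a test is 95% sensitive and 95%
# specific. Intuition often says a positive test means ~95% chance of having
# the condition; Bayes' theorem says otherwise.

def posterior(prior, sensitivity, specificity):
    """P(condition | positive test) by Bayes' theorem."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

p = posterior(prior=0.01, sensitivity=0.95, specificity=0.95)
print(round(p, 3))  # → 0.161, far below the intuitive ~0.95
```

Most people's untutored estimate here is wildly wrong, which is exactly why controlled methods and explicit mathematics replaced raw intuition in scientific practice.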
4. Our belief in the conservation of matter and energy.
This is another belief that arises from experience—children do not grasp conservation principles until their middle years (between seven and twelve). It has to be learned. And it is learned from experience. But conservation principles are a generally useful discovery—any kind of long-term planning, such as regarding food and water supplies, depends upon them. Thus, the ability to learn them is an obvious survival advantage to any intelligent organism that relies upon planning as a key strategy.
Yet, once again, humans did not connect the obvious everyday experience of the fact of conservation to the more subtle conceptions of matter and energy, until, again, the Greeks (in fact, there was barely any concept of “energy” at all until the Greeks started toying with the idea, and it would not get a useful definition until Newton). The principle of nihil ex nihilo was developed by the Presocratic philosophers, and was regarded as something novel and distinct to their science of nature (for the ancients continually remarked upon this). Mythophiles, diviners, and traditional religionists rejected the idea (as, again, the ancients continually observed). Philosophers can still be found who believe that energy can be created by a libertarian human will (even in the West—but many Buddhist and Hindu sages also believe in conjuration, for example), and obviously thousands of theologians think a god can pull it off. So it is not true that everyone has always believed in this axiom.
To the contrary, scientists have not always been comfortable “assuming” this was true, but have repeatedly throughout history tested it and established it from empirical evidence. From as early as the first distinctions were being made between chemistry and alchemy, to the first two laws of thermodynamics (established from experiments on steam engines) or Maxwell’s hunch (leading to an empirical demonstration) of the conservation of electrical charge, and beyond (Planck, Einstein, Bohr), the laws of conservation have been doubted and then empirically proved.
So, once again, insofar as this axiom has a biological basis, its evolutionary advantage is beyond doubt, whereas in most respects relevant to scientific method, this axiom belongs to the technology, not the biology of reason.
5. Our belief that the world must have an atomic structure.
This is the most amusing, since this belief only became commonplace after atomist philosophy was rediscovered in the West, launching the Scientific Revolution—though even then there were scientists who tried to get along without adopting it, but their relative failure to produce the same impressive results as their colleagues was clear. Even in antiquity, when atomism first appeared as an idea in which anyone could be said to “believe” (notice how all this stuff comes from the pagan Greeks?), it was a minority view, rejected even by most scientists. Atomism never developed independently outside the West. So there is no sense whatever in which humans innately, much less “always have,” believed in this axiom. This is indeed one of the clearest examples of a technology of reason. And to understand how someone could come to “discover” it even before the technologies and techniques of the 19th and 20th centuries became available, all one need do is read the De Rerum Natura of Lucretius. And since his day, atomism has been given an enormous and sound empirical basis. It is no longer any sort of basic “axiom.”
It is clear that Reppert cannot maintain that there is any mystery here. These five principles are easily accounted for by naturalism.
Only Theists Can Invent Science?
Around this point Reppert pulls out a bit of popular pseudohistory in defense of his claim that evolution would not produce scientific reasoning: the baloney idea that only theists can invent science. I have already refuted the claim that evolution could not lead to scientific reasoning, noting how the Fallacy of Conflation, the Argument from Lack of Imagination, and a failure to take natural selection theory seriously, conspire to undermine all of Reppert’s claims and assumptions. So here I will focus briefly on only this little bit of pseudohistory. A lot more could be said than this, but that will have to wait for another time.
As Lewis put it (in Reppert’s quotation): “Men became scientific because they expected Law in Nature, and they expected Law in Nature because they believed in a Legislator” (99). Lewis, of course, did not study the history of science, much less the history of ancient science—which happens to be my professional field—so his pronouncements on the matter can be safely dismissed as uninformed. This pronouncement in particular is multiply erroneous.
To begin with, ancient scientists did not employ a concept of natural “law.” A few poets and philosophers said something like it on occasion, but as Robert Grant concludes, it was “Jewish and Christian theologians” who “accepted and developed the idea of physical laws of nature” but “the laws they accept[ed] have philosophical rather than scientific meaning, and were taken by theologians from philosophers” and so in antiquity “there was considerable distance from ‘laws of nature’ to scientific study of nature.” Hence the idea of “physical laws” only rose to prominence among Christians and contemporary theologians, not scientists. And it barely had any significant role to play in sparking the Scientific Revolution, which began more with new dogma-destroying observations (especially in anatomy, astronomy, geology, zoology, and botany), not laws. Taken as a whole, very little science has to do with discovering laws—then or now (though discovering laws tends to get better press—then as now). Instead, the significant causes of the Scientific Revolution are much more mundane. Chief among them was the rediscovery of ancient science during the Renaissance, but other major factors include the discovery of the New World, and the breaking of the Church’s stranglehold by the Lutheran Reformation and Anglican Schism, which eliminated its thousand-year-old power to control ideology. The Church had exercised that power both by suppressing most dissent and unpalatable inquiry, and by choosing the ends toward which resources (including time, education, research, and personnel) would be directed, ends that rarely included curiosity, or the solving of natural mysteries that would necessarily diminish the glory of God.
But it is the rediscovery of ancient science that was and still is most important: for the ancient scientists were not in any great sense different from the new scientists of the early Scientific Revolution. Galileo was Ptolemy’s equal (both studied light refraction in much the same way), just as Harvey was Galen’s equal (both sought to establish physiological theory on anatomical observation in much the same way). Ancient scientists sought and established mathematical principles describing and predicting natural phenomena (they invented mechanics, optics, hydrostatics, etc.), engaged in experimentation and inductive reasoning, and held doubt, curiosity, and observation in high esteem: in short, they exhibited almost all the features that would come to constitute the Scientific Revolution and distinguish its thinkers from those of the middle ages. The greatest difference was that the new thinkers abhorred philosophical systems and held empiricism supreme, because they had suffered under the former at the hands of the Churchmen for centuries, and had found that the latter was revealing things those same Churchmen had never noticed before.
Conclusion: the first “men” who “became scientific” were the Greeks and Romans. But Greek and Roman scientists did not employ an idea of “Laws of Nature,” certainly not in the sense Lewis imagined. So that refutes Lewis’ claim that it was belief in “Laws of Nature” that made men scientific. Thus, Reppert cannot claim theism has any role to play in the discovery of scientific reason. But could theism still take some sort of credit? After all, it is true that almost all the first scientists were theists. Indeed, Aristotle, Ptolemy, and Galen all argued for intelligent design and employed the presumption of ID in their scientific hypothesis-formation. However, this presumption led them to error as often as to truth. Ptolemy’s epicycles represent the most notorious example, but lesser known is Galen’s rejection of the Epicurean theory of natural selection, which led him to a few errors in physiological reasoning. Aristotle committed similar errors in both fields.
But not all of the first scientists were theists. Even those who were, were not always creationists in the sense Lewis intended (Soranus, Posidonius, Eratosthenes, etc.). But there were atheists and agnostics (and deistic pantheists) as well. The Church only bothered to preserve the writings of scientists who were believers—indeed, not only believers, but believers with palatable views (mainly Creationist Aristotelians). But there were quasi-Epicurean scientists (who denied Creationism, and thus Lewis’ ‘legislativism’, altogether) like Strato (the first to study the principles of air pressure and vacuum), Aristarchus (the first scientist to propose heliocentrism as a hypothesis) and Erasistratus (who made tremendous advances in the sciences of anatomy and physiology)—yet almost all of their works were destroyed or left to rot over the course of the middle ages. And there were Pyrrhonian scientists (who, as Skeptics, did not have any confidence in the existence of or creative activity of any gods but suspended judgment on that subject) like the many members of the Empiricist school of medical research (who made many advances in pharmacology and therapeutic method). Sextus Empiricus is believed to have been a member of the latter, though his scientific writings do not survive, and his one treatise on skeptical epistemology was forgotten in the West—until it would surface again, recovered from the East, alongside Lucretius’ nontheistic De Rerum Natura—and both these works (along with a third: the collection of Diogenes Laertius) played an especially large role in inspiring the Scientific Revolution.
Therefore, it cannot be argued that scientific reason originated from or depends upon the presumption of theism, much less legislative creationism.
We Should Attack Rocks?
Another absurdism comes when, following Plantinga pretty closely, Reppert offers a single imaginary “example” of how we could evolve a “useful false belief.” I already refuted the concept behind such arguments earlier (cf. e.g. AfRF). But here I want to focus on how Reppert dooms himself by his own Lack of Imagination—in other words, by a failure to think his own scenario through. His example goes like this (please resist laughing):
If the chief enemy of a creature is a foot-long snake, perhaps some inner programming to attack everything a foot long would be more effective from the point of view of survival than the complicated ability to distinguish reptiles from mammals or amphibians.
That’s right. That’s exactly what he says. He follows with the additional (and correct) observation that a complex brain presents disadvantages (which I discussed earlier). But he never demonstrates, empirically, logically, or mathematically, that the disadvantages of a large brain would outweigh the advantages of complex discriminating functions. He merely assumes it. Indeed, he doesn’t even dare assert it—“perhaps” he says—so even Reppert himself doesn’t know whether his own conclusion is true!
But it is worse than that. For his example is absurd. Follow me here. I have “some inner programming to attack everything a foot long.” Okay. Stay with me. What sorts of things are a foot long? Hmmm. Well, sections of my arms and legs. So I’ll be busy hacking off my own limbs the moment I have the strength to act on my insane “inner programming” proposed by Reppert. Good grief, don’t let that man design robots! And what about snakes that are only six inches long—or two feet long? I guess they get to kill us.
Okay, suppose for some zany reason all snakes are only exactly one foot long, and we also evolved the ability to discriminate our own limbs from all those other foot-long objects we are supposed to attack. How would we ever get out of the house? Or pass a tree? It is a clichéd horror to break rocks at Leavenworth all your life, but Reppert apparently thinks it would be a greater survival advantage than human reason to actually like breaking foot-long rocks at Leavenworth all our lives—indeed, to do nothing else but! I hardly need go on. The absurdity of his scenario is palpable. There is no way evolution could ever produce (especially in a population) such fatal “inner programming.” Therefore, Reppert’s example is defunct. His “perhaps” is really a “definitely no.”
Can we give Reppert a hand here and maybe come up with some example that isn’t ridiculous? I doubt it. It will always be a more efficient use of resources (energy, time, risk, and tools) to avoid attacking all non-threats and to attack all actual threats—including entirely new and unanticipated threats. And the only means by which an organism can maximize efficiency in this respect is to optimize its ability to categorize and discriminate objects and events. There is literally no other way. So how can Reppert advance any argument to the contrary?
I’ll tell him how he could do it. For it is true that there is a threshold beyond which discriminatory abilities become detrimental or needless (i.e. the gain in efficiency is negligible or even reverses, into a loss in efficiency). That is why when we look at a leaf we see a patch of green and not a trillion-billion individual photon-cell reactions. That is why our eyes can’t see bacteria and why our brains always devote most of their limited attentional resources to a relatively confined region of our visual field (otherwise we would never need eyes that move in their sockets). So there is a sense in which the basic idea behind Reppert’s argument is true: at some point the efficiency-gains of discriminatory ability diminish, and then cease to overcome the concomitant increase in physiological detriment (from an ever-larger and more-energy-consuming brain, for example).
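The diminishing-returns point can be made concrete with a toy model (the functional forms and numbers are purely illustrative assumptions, not biology: benefit is assumed to saturate logarithmically with discriminatory resolution while metabolic cost grows linearly):

```python
# Toy model (illustrative assumptions only): net fitness payoff of ever-finer
# perceptual discrimination. Benefit saturates (log curve) while the cost of
# supporting a bigger, hungrier brain grows linearly, so the optimum is
# finite: past some threshold, extra discrimination hurts more than it helps.
import math

def net_benefit(resolution, gain=10.0, cost_per_unit=0.5):
    """Net payoff = saturating benefit minus linear metabolic cost."""
    return gain * math.log(1 + resolution) - cost_per_unit * resolution

# Find the discrimination level with the best net payoff.
best = max(range(1, 101), key=net_benefit)
print(best)  # → 19: beyond this, more discrimination stops paying
```

Where that threshold actually lies for real brains is, as argued above, an empirical question; the model only shows that the existence of such a threshold is perfectly compatible with evolved reason rather than a refutation of it.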
The question to ask then is: When? At what point does this happen? At what threshold between discrimination and efficiency does reason become impossible? These are scientific questions, and can only be given scientific answers. I await Reppert’s peer-reviewed scientific research paper on the subject. Until then, this is just special pleading. Indeed, when Reppert concludes that “it is far from clear that a general ability to learn what is true will be helpful from an evolutionary standpoint” (101), I can only suppose he has some kind of mental cataracts obscuring his view. Learning isn’t useful? Of course it is. Far from being “far from clear,” this is not only clear to the rest of us, but blindingly obvious.
Reppert argues that the Problem of Reason is for naturalists comparable to the Problem of Evil for Theists (128). I agree in one respect: there are facts that need explaining here, and if naturalism has difficulty explaining them, then that difficulty is comparable in kind to the difficulty Theism has in explaining Evil. But I do not believe the difficulty is remotely the same in degree.
First, Naturalism’s solutions to the problems Reppert proposes are all heavily backed by empirical findings from the sciences. Theism has absolutely no empirical evidence backing its solutions to the Problem of Evil—they are all ad hoc, because God isn’t around for us to ask him questions or observe his behavior. Second, the evidence we do find regarding the Problems of Reason is just what we should expect on naturalism (like the abundance of imperfections, physiological and behavioral analogies in animal brain functions, the necessary role of slow cultural-intellectual development over thousands of years, and so on), and not what we should expect on any comparably simple version of theism (perfect reasoning faculties being essential to personal moral development, the lack of need for reasoning faculties in animals much less the same ones and same physical brain structures, the lack of need for thousands of years of hit-and-miss discovery—much less more than a decade of child development for every individual—before Reason can be hit upon, and so on). The same disconnect between theory and observation also holds true for the Problem of Evil: so what works against Theism on that issue actually works for Naturalism on the Problem of Reason. Therefore, they are not at all comparable.
I have more than amply shown that the abilities identified in the six AfR’s as essential to Reason are not “impossible for beings whose actions are governed entirely by the laws of physics” (87) as Reppert’s argument requires. And since this is not impossible, the conclusion of the AfR is not true. When we add to this finding the fact that all the reliable evidence we have points to naturalism, and none of it points to anything else, then there seems no reason not to be a naturalist. There is no better theory of the world going. Think about that. Every (I repeat: every) phenomenon we have been able to explore in the requisite detail has turned up naturalistic, without a single exception so far, in over three hundred years of concerted scientific investigation, by millions of experts, engaging the best methods imaginable, and with nearly unlimited resources. Does theism, or any form of supernaturalism, have anything even remotely like that behind it? No.
Appendix on Philosophia Christi Feature
An extended discussion arose concerning Reppert’s book in the pages of Philosophia Christi (5.1, 2003: pp. 9-184). Reppert summarized his book’s argument, two skeptics and one sympathizer then responded, and Reppert was allowed to end the debate with a rebuttal. Since my present critique is of his book, I won’t comment on his summary here. But I will comment on the other essays, and Reppert’s reply to them, insofar as this adds anything significant to what my critique already argues above (I composed my critique before reading any of this material, and it is surprising how similar the arguments sometimes are between them).
Theodore Drange: Even though Drange himself is “not even convinced” that naturalism is true (p. 40), he does not see in Reppert’s AfR any threat to naturalism. Drange’s response adds valuable detail to my arguments regarding intentionality, the ontology of logic, and the evolution of reason. And though I disagree with Drange’s philosophy of propositions (pp. 40-42), I agree it is a different but still coherent position that other naturalists can take and thus which Reppert must answer. Of course, it is a separate question whether two people are contemplating a “numerically identical” proposition and whether they can know they are. And on my view (which I explicated in detail already), people are doing so when their brains share the same relevant patterns of arrangement and activity (the relevant pattern details being precisely those which, by corresponding to the pattern of a real-world system, distinguish true beliefs from false). Obviously in most cases people are not contemplating the same proposition, but propositions sufficiently similar to be treated as the same in practice. As to how they can know when they are contemplating the same proposition, or how closely their contemplation corresponds to someone else’s, that is always a probabilistic task of inference like any other involving access to the thoughts of another—it is, after all, frequently the case that two people think they understand a proposition the same way but in fact they do not. A sufficient degree of communication can (and often does) resolve such an error, and the rules of language (lexical and grammatical) serve as a tool for facilitating exactly this. But there are other ways of approaching the issue, and Drange explains his.
Drange also argues convincingly (adding a lot to my own arguments) that “on the whole, a strong case could be made that there is survival value in being aware of as many truths as possible, whether they are apparently trivial or not” (p. 47). His arguments on this point (pp. 46-48) lend themselves well to supporting the following elaborations to my own arguments above (which expand on what I have already said):
- (1) Any organ that can efficiently increase the store of options available to an organism for invention and problem-solving will provide a benefit to differential reproductive success, and therefore one cannot argue that the ability to reason about the trivial is of no evolutionary value, since what is trivial today might not be tomorrow, and nature is not intelligent enough to know beforehand what will be trivial (e.g. that rubbing two sticks together will start a fire seems to be trivial information—until someone finds a use for this fact; and all complex facts are simply constructions from simpler facts like this one, and Reppert has not identified what facts would be “too complex” for an evolved organ to construct, nor has he presented any evidence for the existence of such a functional threshold).
- (2) Related to the first point is the fact that what is beneficial to an organism in one environment will be useless in another, and even harmful in yet another, such that one cannot argue about the utility of an organ without knowing the environment in which it allegedly benefited an organism. But humans evolved in a large variety of environments (even crossing between them in a single lifetime) and thus we should expect organ utility to tend toward generalized function. Also, humans evolved to take advantage of their own portable environment called “culture,” so organ utility should also be expected to exploit this adaptation as well (which means language, hence deductive logic; and technology, hence memetic discovery and refinement of the tools and techniques of reason).
- (3) A tool that works on important functions will always be usable on an even larger array of unimportant functions (e.g. you can use the blade of scissors to drive screws, even though scissors were never designed for that function), as there is literally no way to prevent this (i.e. no tool is incapable of being put to unexpected uses), and therefore an organ capable of natural reason will automatically be capable of using a technology of reason, even if it serves no evolutionary function, simply because it is physically impossible to have the one without the other (just as it is physically impossible to have scissors that can’t also drive screws).
William Hasker: Even though he is a theist sympathetic to Reppert’s approach, Hasker surprised me by making several points identical to mine, such as the fact that Reppert has yet to address a maximally “sensible naturalism” rather than straw men (like eliminative materialists, etc.), the fact that the AfC is not an AfR (see his footnote 6, p. 54), the fact that even Aristotle had responses to Reppert’s arguments that Reppert did not consider (p. 57), and the fact that Reppert commits what I call the Possibility Fallacy (p. 61). On the other hand, the one point on which Hasker continues to err is in continuing the same Causation Fallacy that Reppert commits. Hasker declares, for example, that “if the physical laws determine all the laws as well as the events that take place…then it is never true that we accept a conclusion because it is supported by good reasons” (p. 58, his emphasis). This statement is untrue, since the process of “accepting a conclusion because it is supported by good reasons” may be one among many types of purely physical process, as for example the process of “brewing beer” or “extracting termites from a log” or “winning at chess.” Thus, Hasker is wrong to insist that naturalists must find special “emergent” laws of causation to explain reason. For we could just as well be able to find out how to describe “accepting a conclusion because it is supported by good reasons” using nothing but ordinary physical laws. Reppert, of course, builds his six AfR’s in an attempt to show how we can’t ever do this, but I have already explained how his arguments don’t even succeed in making such a discovery improbable much less impossible, and Hasker adds nothing to the issue.
Keith Parsons: Parsons reiterates, in several different ways throughout his article, my own point that Reppert has not overcome the Pyrrhonic objection, and in that respect Parsons’ paper is quite useful. He also adds to my defense of the point that Reppert commits what I call the Causation Fallacy. But the bulk of Parsons’ response to Reppert does not really address Reppert’s arguments, at least as he presents them in his book. For example, the process of “adducing reasons” Parsons refers to (p. 67) is exactly the very process Reppert is asking naturalists to explain—for example, to adduce those reasons, we have to be capable of seeing that they are reasons (and not just any “reasons” but reasons that, being true, make a belief more probably true than not), and it is that capability of “seeing” that Reppert claims naturalists can’t account for. Therefore, as useful as his points are, Parsons falls short of developing a complete response to Reppert.
Victor Reppert: The bulk of Reppert’s rebuttal says nothing that is not already refuted by what I have argued throughout my own critique above, so I will only address now whatever is novel. Reppert begins by incorrectly assessing Hasker’s criticisms as “quibbles” (p. 77) and thus ignores them, which renders his rebuttal quite ineffective, since Hasker’s criticisms, though presented with a false air of merely casual disagreement, are actually quite devastating. On the other hand, Reppert’s response to Parsons does not address those points made by Parsons with which I agree, but only continues their routine of talking past each other. Parsons misses Reppert’s argument, so Reppert simply rebuts Parsons by repeating it. No progress is made. That leaves Drange. Reppert’s reply to him adds nothing not already addressed in my critique above. But the following observations can be added.
In Reppert’s reply to Drange he declares that any mentalistic worldview would be immune to the AfR (p. 77; a point I believe he also makes in his book). Since he reintroduces such a point, it might be worth asking here how Reppert can claim the AfR leads us to theism rather than Taoism. That is a mentalistic worldview that is nevertheless godless, and which fits the evidence of the universe and human experience far more closely and simply than any credible version of theism. Reppert falls victim to an ad populum fallacy in concluding that if naturalism is false, then only theism is left as a major contender—a fact that is only true in the West, and only because of Western cultural bias, tradition, and the incidentals of political history. In effect, Reppert is myopically ignoring half of the world’s population, and half of the history of man, and mistaking popularity for rational and evidential merit.
Reppert also introduces a Moral Argument (pp. 80-81), which of course is not an AfR, so I won’t address it here, except in the one respect that is analogous to the AfR: his claim that there could be no physical facts that correlate with a proposition being true, including, obviously, a moral proposition. To this I shall say that I can prove otherwise: on my naturalism (though not on every variety of naturalism), ethics is a branch of science, and physical facts do determine right from wrong. I establish this in my book Sense and Goodness without God: A Defense of Metaphysical Naturalism. But the analogy Reppert has in mind is revealed, for example, in that Reppert still professes an inability to imagine how a proposition can exist in a physical brain (e.g. pp. 81-82). But he is ignoring (as I have argued at length earlier) the relevance of computers. If it were true that a computer could not physically contain (and correctly and causally respond to) any proposition, then computers would be physically impossible. To beat me at chess, for example, a computer must not only be capable of containing the rules of the game (which are in fact propositions), but it must be capable of being caused by those rules to play the game correctly—which means that any change to those propositions would physically cause the computer to behave differently, the very thing Reppert claims to be impossible (e.g. in his AfMC).
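The point about rules-as-propositions can be sketched in a few lines (a toy illustration, not a real chess program; the one-dimensional "board" and movement rules are invented for the example):

```python
# Minimal sketch: a machine whose behavior is causally governed by rules
# stored as data. The "rules" here are invented stand-ins for propositions
# like "this piece may move one square either way" on a tiny 8-square board.
# Changing the stored proposition physically changes what the machine does.

def legal_moves(position, rule):
    """Return the moves the stored rule licenses from an integer position."""
    return [position + step for step in rule["steps"]
            if 0 <= position + step <= 7]

rook_like = {"steps": [-1, 1]}    # rule A: move one square either way
knight_like = {"steps": [-2, 2]}  # rule B: jump two squares either way

print(legal_moves(3, rook_like))    # → [2, 4]
print(legal_moves(3, knight_like))  # → [1, 5]: same machine, new rule, new behavior
```

The physical state encoding the rule is among the causes of the machine's output, which is all the causal relevance of propositional content requires here.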
Of course, Reppert does argue that “if” the AfMC is false then he has other arguments still, but it remains an evident folly to propose that the AfMC has any merit at all when it entails the non-existence of what plainly does exist: computers (that, for example, play chess). Or think of a computer that calculates how to hit a target with a piece of robotic artillery: to do so it must be capable of physically containing all the necessary propositions defining the facts of ballistics, and it must be physically capable of being physically caused by those propositions to arrive at the correct firing solution. Reppert responds that computers can do this because teleological agents made them, but, as I’ve already explained, that misses the point. The AfMC is blind to how a physical system came into existence—because it claims, instead, that such a system can’t come into existence, by any means. But computers prove it can. So even if Reppert were right that computers could arise only by intelligent causation, the AfMC is still false. It is by not seeing problems like this that Reppert does not understand the points that Drange and Parsons are making.
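The artillery case can be sketched the same way (a hedged toy example using textbook vacuum ballistics with invented numbers; a real fire-control computer would also model drag, wind, and elevation):

```python
# Illustrative sketch: a firing computer whose behavior is caused by stored
# physical propositions. Here the proposition is the textbook vacuum-ballistics
# relation R = v^2 * sin(2*theta) / g, solved for the low launch angle.
# Numbers below are invented for the example.
import math

def firing_angle(target_range, muzzle_speed, g=9.81):
    """Low launch angle (radians) to hit target_range on flat ground, no drag."""
    x = g * target_range / muzzle_speed**2
    if x > 1:
        raise ValueError("target out of range at this muzzle speed")
    return 0.5 * math.asin(x)

angle = firing_angle(target_range=1000.0, muzzle_speed=120.0)
print(round(math.degrees(angle), 1))  # → 21.5 degrees
```

Alter the stored relation (the proposition) and the computed solution, and hence the gun's physical behavior, alters with it, which is precisely the propositional causation the AfMC declares impossible.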
In the end, Drange and Parsons are right: everything just ends up in the same place, the AfRF, which is just Plantinga’s Evolutionary Argument against Naturalism, to which Reppert adds nothing of any significance. Indeed, in his reply here, he doesn’t even add anything not already in his book. He completely ignores Drange’s very powerful arguments on this matter, for example, and continues to advance his own uncritical, unscientific “folklore” of evolution. For instance, Reppert repeats his argument that “creatures with large brains” suffer all kinds of problems. That is true—but why does he not think to ask, then, why we have large brains? If all these problems come with it (and they do), but our enormous brain does us no good (and according to Reppert, it doesn’t—for it gives us no powers not possessed by other animals), why do we have such an enormous brain? Reppert provides no answer. It doesn’t even occur to him that his argument makes no sense without such an answer. But any answer he might conjure up now would intrude squarely on the domain of science. That is, his answer would be subject to scientific observation and test—and would only be credible when it had scientific evidence to back it up. Offering some armchair speculation, like, say, Aristotle’s, that we have (by proportion) the largest brains in nature simply to cool our blood, would pose no threat to naturalism, which rests on science, not fantasy.
 Reppert’s conception of the natural-supernatural distinction, here and (as we shall see) throughout his book (cf. 102), agrees very well with what I establish in “Defending Naturalism as a Worldview: A Rebuttal to Michael Rea’s World Without Design” (2003).
 Reppert mischaracterizes physicalism in one respect that I will mention only to correct it, since the effect of this error on his later arguments is not significant. He says, “materialism maintains that the basic substances of the physical world are pieces of matter, and physicalism maintains that those pieces of matter are properly understood by the discipline of physics” (47). This statement is wrong in two important respects: (a) Both materialism since its inception and certainly physicalism today incorporate several entities besides matter, including space-time and all “non-material” forms of energy (e.g. electromagnetic and kinetic), all of which do appear to be just as concrete and quantized as matter—in fact, matter is simply one form of what is actually the most basic “substance” of physics: energy; (b) Precisely because physicalists (almost) by definition defer to the findings of physics, all physicalists today essentially agree that matter constitutes less than 5% of all the mass-energy in the universe, and therefore it can only be called a “basic” substance in some figurative sense like “most familiar and relevant to ordinary human life.” But that is a fair assumption for Reppert to make. Even if we add the necessary precision to his characterization of physicalism, his arguments remain unaffected.
The evidence establishes a breakdown, according to mass, roughly as follows: 0.5% normal non-baryonic particles (e.g. light, neutrinos, etc.); 4.5% normal (baryonic) matter (the material Reppert has in mind); 22% dark matter; 73% dark energy (see Gérard Nollez, “Direct Detection of Nonbaryonic Dark Matter,” Europhysics News 34:4, 2003). It is not yet known what dark matter or dark energy “are,” only that they have mass, and no other observable properties, except that through some force they exert a physical effect on the universe: either a gravitational effect (dark matter) or an antigravitational effect (dark energy). Intuitively, the existence of such perfectly opposite and otherwise propertyless and undetectable entities suggests some theoretical error rather than any real substance, and it is possible these substances don’t exist, and that the effects are produced by normal physics in a way not yet recognized. I am intrigued, for example, by Dr. Ian McKay’s suggestion for eliminating dark energy as an explanatory entity (“A Simple Interpretation of Mach’s Principle Implies that an Open Universe would Undergo an Accelerating Expansion“) and Dr. Moti Milgrom’s suggestion for eliminating dark matter as an explanatory entity (“The MOND Pages“). But at present the consensus does not support these alternatives, and the evidence for them is not yet sufficient to prefer them.
 Reppert also says here that “some people” think those who believe that “propositions” exist can be naturalists, but not physicalists, because propositions (and similar things) have no “spatio-temporal location” (47). It is true some people think that, though they are only right with regard to certain conceptions of what a “proposition” is. Reppert generates a lot of muddle here, which will become a problem for him later, but since he is too unclear at this point to interpret, I will reserve criticism for when he says something more substantial later on. For now it should suffice to say that I agree with Reppert that naturalists are not necessarily committed to the physical as their only available ontological category, but the physical is their only available category of causal entity. It is this distinction that carries the application of Reppert’s argument from physicalism to all naturalism generally.
 Related to this is a minor issue of clarity worth digressing on: Reppert often says things like a “voodoo curse introduces a type of explanation that does not fit into a materialist account of how events are caused in the world” (68). He should perhaps qualify such statements, since an additional premise is needed to sustain them as true. For example, I can easily give a purely mechanical, even physicalist account of the operation of voodoo, just as many parapsychologists attempt for psychic phenomena. Such an account would entail that traditional voodoo practitioners were at least somewhat misinformed about the actual reasons their magic works, but it would still work (if it were true). For example, if voodoo works via a system of natural magic, it could be exploiting heretofore unknown laws of physics. Science has already produced magnetically levitated trains and fire that burns under water, and these are no less “voodoo” to the uninformed mind. Alternatively, conforming even more closely to the traditional theory, if voodoo works via a system of demonic magic, it could be that demons are physical, naturally-evolved animals, who achieved invisibility and other powers by exploiting ordinary laws of physics, and who carry out the commands of practitioners the same way any cooperative and intelligent human can, only using special skills and physical abilities. Neither system would contradict physicalism or naturalism. What Reppert has in mind is the narrower interpretation of phenomena like voodoo, the theory that actually matches what, for example, practitioners really think, which is that demons are supernatural entities, beyond the natural both in nature and origin, and that they can be affected, and can affect the world, by direct mental causation, bypassing physical-causal mechanisms. This brings us back to the question of how one draws the natural-supernatural distinction, which I discuss in my review of Rea (see Note 1 above).
I have endnoted this point because I do not think it is a fatal omission on Reppert’s part—I am certain he always means, for example, a supernatural theory of voodoo.
 Note that proponents of mind-brain physicalism should heed Reppert’s point in his final chapter (cf. 115-16) that his argument does not necessarily go against the evidence adduced for mind-brain physicalism. Rather, Reppert concedes that most of what the mind does might derive or depend in some way on the brain. Thus, though he does not state it, one could infer Reppert would agree that even if the AfR is true, disembodied survival might still be impossible. Christianity is perfectly capable of accommodating such a view, merely by adopting its original notion of sleep (unconsciousness) until a general resurrection of the flesh, rather than (what seems most popular today) an immediate and eternal disembodied residence in heaven (or hell), though the problem would remain of why our minds depend upon a brain but God’s does not. So to properly characterize Reppert’s position: he believes there is some evidence (observed phenomena) that is still not explicable by mind-brain physicalism, and therefore, even if mind-brain physicalism accounts for all the other stuff, something else is still needed to make a mind work (cf. 87, 113).
 And I mean circular in a way naturalism is not. Trust in reason is also circular for naturalism insofar as a Cartesian Demon can be conceived that would fool us into believing naturalism is true, and there is no way to prove no such Demon exists. We reject the Demon theory from various applications of reason anyway, though the Demon could be fooling us even in that. But since this challenge cannot be escaped by any theory (not even theism), and since it is fundamentally unfruitful (even if we believe there is a Demon, nothing changes with regard to the nature of ourselves or the universe: everything proceeds exactly the same), there is no available motive to adopt it as an explanation for human experience. In other words, there is no meaningful difference between a real world and a Cartesian Demon’s world, and since there is no evidence of a Demon, we may as well operate as if the world is real and not a Demon’s construct—at least until we actually discover otherwise—for there is no reason to believe otherwise. Once we remove the Cartesian Demon from consideration, as both theists and naturalists rightly do, all we have left is the world we observe, and what we can actually infer therefrom.
In that position, naturalists posit that because the world that we observe can explain Reason, and we have no evidence contradicting that explanation, and a body of evidence lending support to it, therefore the proposition “Reason is Reliable” is most compatible with the evidence of the senses and human experience. But theists are not in such a position if the AfR is correct. For then, it is not the case that “the world that we observe can explain Reason.” The immediate conclusion of the AfR is then that Reason cannot be trusted. It then becomes suspect to reason that theism is true in such a situation—for then you are entering a vicious circle that is not shared by naturalism (if the AfR is false).
In other words, a naturalist does not have to assume naturalism is true to trust Reason (since the evidence a naturalist appeals to is compatible with almost any worldview, including theism). But on the AfR, a theist must assume theism (or some kind of supernaturalism) is true to trust Reason. The Pyrrhonic Skeptic is then left on even better ground than he was on when faced with naturalism, because theism (and hence the reliability of Reason) is then a mere presupposition. So the AfR puts Reppert into an additional vicious circle that naturalism avoids. I suspect Reppert could escape that vicious circle. But he does not actually do so in this book.
 But just to survey the obvious:
- (1) How can a book about the science of reason not even mention leading works on that very subject? Where is William Calvin, How Brains Think (1996)? Robert Moss, Brain Waves Through Time (1999)? Guy Claxton, Hare Brain, Tortoise Mind (1999)? Lesley Rogers, Minds of Their Own (1998)? Eduard Hugo Strauch, How Nature Taught Man to Know, Imagine, and Reason: How Language and Literature Recreate Nature’s Lessons (1995)? Valerie Walkerdine, The Mastery of Reason: Cognitive Development and the Production of Rationality (1990)? Reppert cites none of these, nor anything remotely comparable.
- (2) And what about naturalist philosophy on the subject? Where is Robert Nozick, The Nature of Rationality (1993)? Nicholas Rescher, Rationality: A Philosophical Inquiry into the Nature and the Rationale of Reason (1988)? Newton Garver & Peter Hare, eds., Naturalism and Rationality (1986)? Or the works of Ruth Millikan on the philosophy and evolution of language and reason, such as Language, Thought, and Other Biological Categories: New Foundations for Realism (1984) and Clear and Confused Ideas: An Essay about Substance Concepts (2000)? And that’s just to name the most prominent examples in the area of books. Philosophical papers on the subject are legion.
- (3) As to the subject of the science of consciousness and its relationship to the functions and abilities relevant to Reppert’s case, we should expect a reference to the following (which detail numerous facts I shall be bringing up throughout this critique), or comparable works, but we get none of any sort: Joseph LeDoux, Synaptic Self: How Our Brains Become Who We Are (2002); John Ratey, A User’s Guide to the Brain: Perception, Attention and the Four Theaters of the Brain (2001); Bernard Baars and James Newman, eds., Essential Sources in the Scientific Study of Consciousness (2001); Sandro Nannini & Hans Sandkühler, eds., Naturalism in the Cognitive Sciences and the Philosophy of Mind (2000); Nicholas Humphrey, A History of the Mind: Evolution and the Birth of Consciousness (1999); Daniel Dennett, Kinds of Minds: Towards an Understanding of Consciousness (1997) [which updates Consciousness Explained (1991)]; A. G. Cairns-Smith, Evolving the Mind: On the Nature of Matter and the Origin of Consciousness (1996); etc.
- (4) As for other specific findings of brain science that are relevant to the subject, one should expect to see references to the following (which detail numerous facts I shall be bringing up throughout this critique), or comparable works, but we get none of any sort: Stephen Palmer, Vision Science: Photons to Phenomenology (1999); Oliver Sacks, The Man Who Mistook His Wife for a Hat, and Other Clinical Tales (1998); Frederick Schiffer, Of Two Minds: The Revolutionary Science of Dual-Brain Psychology (1998); Brian Butterworth, What Counts: How Every Brain is Hardwired for Math (1999); etc.
 See Richard Carrier, “Test Your Scientific Literacy!” (2001), especially Section 6: Scientific Laws are Descriptions of Nature’s Behavior and Section 5: Scientific Theories are Explanations of Scientific Facts.
 I discuss the historical circumstances that led to the discovery and formulation of the laws of logic (and other disciplines of reason, like mathematics) in The Origins of Greek Philosophy (2000).
 One of my favorite examples is the Greek word (and proto-Indo-European root) for pig, sus / suos, which is a sound pigs make. American pig farmers still summon pigs with the famed “suieeee!” which is clearly the same root sound. Another gem is the Greek word for bad, evil, ugly things: kaka, one of the most common sounds first made by babies, who do a lot of pooing. Though the Greek word for poop is kopros, Americans still call baby poop “kaka,” and it is conceivable the Greek kakos is similarly derived. Every language is rich with onomatopoeic vocabulary. Analogous to the Greek word for pig, think of the English words cuckoo and cock—both sounds those birds make when they crow. Far more language may have derived from such quasi-intentional imitation-reference than is now obvious, since language has evolved a great deal from its earliest variants, disguising the original forms of words.
 Which is not to say that words aren’t ambiguous. Only in mathematics, logical notation, and similar languages is ambiguity completely removed from all words and constructions (in fact, that is what defines those languages as distinct from ordinary languages). But ambiguity in plain language is neither the result nor the cause of the impossibility of metaphysics (as Quine believed) and neither the result nor the cause of the impossibility of the reduction of intentionality (as Reppert believes). Ambiguity results only from the limitations of human contrivance (language, remember, is an invention, and always an unfinished one), the imperfect process of individuals acquiring and then using their codebooks, and the need to enhance efficiency by conveying more information with less data (by relying on inflection, context, qualification, etc.). It does not stem from any feature of the universe beyond human limitations and fallibility. This is easily demonstrated from the fact that unambiguous languages exist—such as mathematics, which can be used to completely define a physical system and thus could completely describe the entire universe down to the last microparticle…if humans were flawless immortals with infinite time to waste. The Problem of Ambiguity is something I discuss in greater detail in my book Sense and Goodness without God: A Defense of Metaphysical Naturalism.
 Cf. “Intentionality” (observe the excellent and extensive bibliography, as well as the detailed discussion, none of which gets into Reppert’s book), and also “Consciousness and Intentionality,” both in the Stanford Encyclopedia of Philosophy Online (2003). But beyond reference works, consider, e.g., that Dennett presents his physicalist theory of intentionality in Kinds of Minds (1997), pp. 19-56, 81-118. Reppert does not demonstrate that his arguments refute Dennett’s theory, and Dennett’s isn’t the only theory on the market. Consider relational frame theory, for example, as summarized in Relational Frame Theory: A Post-Skinnerian Account of Human Language and Cognition (2001).
 For example, Reppert concludes (emphasis mine): “If naturalists like the Churchlands are right in supposing that (1) is true, but (2) must be accepted in order for the scientific enterprise to be possible” then the AfT succeeds. The hopelessly conditional form of this argument is truly a major problem. But what immediately came to my mind when I read those words was: Why doesn’t Reppert then explain how it is that Paul Churchland can believe (1) and (2) without feeling any hint of self-contradiction? Isn’t Reppert obligated to show us where Churchland’s error lies? Or indeed to at least tell us that Churchland does not see a problem where Reppert does? After all, Paul Churchland is famous for his books defending scientific realism and the efficacy of scientific method, all founded on the very eliminative materialism that Reppert claims undermines the very things Churchland defends—for instance, in Scientific Realism and the Plasticity of Mind (1979) and A Neurocomputational Perspective: The Nature of Mind and the Structure of Science (1989). So, Victor, what’s up? Well, I can take a guess: I suspect you didn’t even try to understand them, and as a result, built a straw man of their worldview, as I will explain in a subsequent section of this critique: Giving the Churchlands a Fairer Shake.
 A clarification is in order here. “Knowledge” is the product of an act of perception. We can see marks on a page, but only when our brain (subconsciously) deduces that those marks indicate a pattern (such as a human face, or indeed a specific human face, like that of our uncle) does prospective knowledge come to exist, and only when that prospective knowledge actually corresponds to a real fact does actual knowledge exist. Thus, data is not enough. Perception must occur, and we know that perception is a computational process. Only then comes the issue of whether we can ever have reason to believe that something we perceive really exists. But that is a completely separate problem. To argue that we cannot know we know is not an AfR, for it would not succeed in establishing the Basic AfR Premise. An argument that “we cannot know we know” could be formulated into a completely different argument against naturalism, but we will not concern ourselves with that here. It is enough for now to say that we do have reason to believe that there are things we really know, and that “naturalism is probably true” is one of those things. I will develop this argument in my book Sense and Goodness without God: A Defense of Metaphysical Naturalism. But for the present issue, which is only the AfR, it remains a fact that perception exists, all perception can be (and probably is) a mechanical process, and reason is a form of perception. Therefore, the Basic AfR Premise is false. One can respond to this defeat only by challenging the central premise “perception can be (and probably is) a mechanical process,” but that would not be an AfR, but an AfC (Argument from Consciousness), a point I will explain later. And we are, again, only concerned with the AfR here. Nevertheless, I will address all these issues.
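The step from marks to prospective knowledge can itself be modeled as pure computation. Here is a toy sketch of a mechanical “perceiver” (entirely my own illustration; the pattern, data, and function names are hypothetical): raw marks become a detected pattern only when a physical procedure infers one from the data:

```python
# A toy "perceiver": marks on a page (raw data) yield prospective knowledge
# only after a computation infers a pattern. The pattern here is a plus
# sign in a tiny binary image.
PLUS = [(0, 1), (1, 0), (1, 1), (1, 2), (2, 1)]  # cells forming a '+'

def find_plus(image):
    """Return (row, col) of the top-left corner of a plus sign, or None."""
    rows, cols = len(image), len(image[0])
    for r in range(rows - 2):
        for c in range(cols - 2):
            if all(image[r + dr][c + dc] == 1 for dr, dc in PLUS):
                return (r, c)
    return None

image = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
hit = find_plus(image)  # prospective knowledge: "there is a plus at (1, 1)"
```

Whether that prospective knowledge is actual knowledge then depends, as the note says, on whether the detected pattern corresponds to a real fact; but the perceiving itself is exhausted by the computation.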
 In just a century or so this will be absolutely true: when we acquire both the means and the will to genetically engineer ourselves, genetic “differential reproductive success” will no longer matter at all. From then on, our existence will be entirely built on a memetic substrate, to which even our genetic scaffolding will be entirely subservient. That is assuming the alarmist neoluddites among the Christian Right and Radical Left don’t take over the world.
 Reppert references back to previous discussions of the views of Anscombe and Parsons (80, cf. 60-71), but I do not believe he has correctly understood their respective positions, and at any rate, even if he has, Reppert believes their position is essentially the same as Davidson’s, so citing them gets Reppert no closer to addressing all versions of naturalism defended today. For example, I do not believe propositional content is irrelevant in the causal system of human reasoning. Yet Reppert has very little to say against a naturalist who affirms that (and, as we shall see, what he does say doesn’t take physicalism seriously anyway). Therefore, his AfMC does not refute such a naturalism. So what good is it?
 As to the issue that a unified consciousness is not necessary to perform truth-finding computations, I will discuss the analogy of computers later on. As to the issue of what Goetz calls the “binding problem,” which drives the quest for the exact areas of the brain that stitch all consciousness together, see my discussion in the secondary half of this critique under Theory of Mind.
 I have discussed this problem online twice before, and those discussions add substantially to what I say here: (1) See my brief discussion of the “Evolutionary Argument against Atheism” in my Secular Web article “Defending Naturalism as a Worldview: A Rebuttal to Michael Rea’s World Without Design” (2003), and the related endnote there. Unlike Reppert, Rea at least cites two papers critical of the argument, though he still does not address them. (2) See the section entitled Must an Accidental Sensory Organ be Untrustworthy? in my critique “Nash on Naturalism v. Christian Theism” (Section 3b of my Review of In Defense of Miracles).
 Note: though I informed Reppert’s editors after examining the galley print that the text mistakenly states that “seven” arguments were presented when in fact there were only six (85), they still did not correct the text in the official print run.
 William Hasker, “How Not To Be A Reductivist” [PDF] (PCID 2.3.5, October 2003), due to appear in Alexander Batthyany, Dimitri Constant, and Avshalom Elitzur, eds., Mind—Its Place in the World: Non-Reductionist Approaches to the Ontology of Consciousness.
 I discuss this requirement, which holds for all improbability arguments, in the context of biogenesis in “The Argument from Biogenesis: Probabilities against a Natural Origin of Life,” Biology & Philosophy 19.5 (November 2004), pp. 739-64. Note that the argument that computers cannot arise without intelligent design because they are too complex (or irreducibly complex) would be an entirely different argument altogether, and would no longer fall under the category of AfR—it would instead be just another variant of the Argument to Intelligent Design, since there is no practical difference in this regard between the development of a computer and that of, say, an eye or a digestive system. At any rate, Reppert makes no attempt to argue against the evolution of reason from the irreducible complexity of the relevant organ, though he does draw on an argument from excessive complexity later on, in a different way (when he references brain size as a disadvantage).
At any rate, it is clear that human reason was developed in many stages over a very long period of evolution, consistent with natural selection, not intelligent design. Hominids had already become smart enough to manufacture and carry tools well over a million years ago, which entails the beginnings of natural reason at least by that time, yet it still took them about a million more years to invent fire. That was 400,000 years ago, and human intelligence has only accelerated since: pictographic art: over 20,000 years ago; agriculture: over 8,000 years ago; writing: over 4,000 years ago; science: over 2,000 years ago; mechanized industry: about 300 years ago; computers and nuclear power: about 50 years ago. Despite the acceleration, it still took humans two million years to get to where we are now. That does not suggest an innate Reasoning Faculty implanted at some specific time by God.
 Daniel Dennett, Consciousness Explained (1991), pp. 85-95. Just recently this evidence has been greatly updated: scientists have actually built a computer that can independently engage the scientific method and discover new knowledge by inventing its own hypotheses and testing them. See John Roach, “‘Robot Scientist’ Said to Equal Humans at Some Tasks,” National Geographic News Online (14 January 2004) and Kimberly Patch, “Robot Automates Science,” Technology Research News Online (28 January/4 February 2004), regarding the report by Ross King et al., “Functional Genomics Hypothesis Generation and Experimentation by a Robot Scientist,” Nature 427 (15 January 2004): pp. 247-52. For a book on the subject (which covers pretty much all the same terrain as my entire critique), which Reppert absolutely must read (even though he must also qualify it with more recent work): Aaron Sloman, The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind (1978).
 I hardly need reiterate what I have discussed at length already: that his other argument, that only intelligent humans can tell whether an induction machine is giving true answers, is obviously false on any credible evolutionary theory of brain development. Survival can tell. As can the immediate effects of feedback (we can see whether our process of reasoning has worked by looking at the world we were reasoning about).
 See Wale Sangosanya and David Belton, “Basic Gates and Functions” (1998), part of the Digital Logic Tutorial Guide of the University of Surrey. Also relevant: Josh Dever, “A Brief Introduction to Sets” part of the WebLogic tutorial; Keith Calkins, “Logical Reasoning” (2003); and (for the axiomatic and mathematical foundations of inductive logic) Harri Lappalainen, “Bayesian Probability Theory” and “Theory of Modeling” (1998). Finally, see the entry for “Turing Machine” in the Stanford Encyclopedia of Philosophy, and Nikola K. Kasabov’s Evolving Connectionist Systems: Methods and Applications in Bioinformatics, Brain Study and Intelligent Machines (2002).
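To give a flavor of what those tutorials cover, here is a minimal sketch (my own, in Python rather than hardware) of the fact that every truth-functional operation of propositional logic can be mechanically composed from a single primitive, the NAND gate:

```python
def nand(a, b):
    """The one physical primitive: realizable as a two-transistor circuit."""
    return not (a and b)

# every other truth function is a mechanical composition of NANDs
def not_(a):       return nand(a, a)
def and_(a, b):    return not_(nand(a, b))
def or_(a, b):     return nand(not_(a), not_(b))
def implies(a, b): return or_(not_(a), b)  # the material conditional

# exhaustively verify the composed gates against the defining truth tables
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert implies(a, b) == ((not a) or b)
```

Since NAND gates are physically realizable as transistor circuits, a purely physical system can exhaustively implement the truth tables of logic, with no appeal to anything nonphysical.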
 In: Richard Carrier, “Defending Naturalism as a Worldview: A Rebuttal to Michael Rea’s World Without Design” (2003).
 True, an extreme skeptic might wonder whether this is just an accident, whether if we tried the same experiment again we might get different results—actually see a sign with more or less than three sides. But such a radical skeptic is just like someone who suspects they might be able to win at tic-tac-toe: after enough trials, eventually they would give up their vain hope; or someone who thinks that they can put a red and blue patch together in their mind and somehow, after enough trials, see them together without touching. Sure. Maybe. Even the conclusions of logical proofs are never certain—for we could always have missed something or misunderstood something or made a mistake, etc. But it would be fatally irrational to reject confidence in a proposition from such reasoning as “maybe, therefore probably.” As a method, a procedure, this is no good at all. Not only is it not truth-finding, but it is substantially error-prone—that is, it will find truth even less often than chance alone, as can be easily demonstrated empirically, again by direct observation. Note that this process of learning what is impossible is no more circular on naturalism than on theism (indeed, even God himself cannot escape this vicious circle any better than we can). On the circularity inherent in all thought, see Note 6 above.
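The tic-tac-toe example is itself mechanically checkable. A short minimax sketch (my own illustration) establishes by exhaustive search what the hopeful skeptic learns only after many trials: against perfect play the game cannot be won:

```python
from functools import lru_cache

# the eight winning lines of a 3x3 board, with cells indexed 0..8
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6),
         (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Value of `board` with `player` to move, under perfect play:
    +1 means X forces a win, -1 means O forces a win, 0 means a draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    moves = [value(board[:i] + player + board[i+1:], nxt)
             for i, c in enumerate(board) if c == ' ']
    return max(moves) if player == 'X' else min(moves)

result = value(' ' * 9, 'X')  # exhaustive search of the whole game tree
```

Running `value` on the empty board returns 0, a forced draw, which is exactly the impossibility the note describes being learned empirically through repeated failure.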
 On the evolutionary utility of representational systems, their physical basis, and their role as the core engine of rational cognition, see: Jesse Prinz and Lawrence Barsalou, “Steering a Course for Embodied Representation,” Eric Dietrich and Arthur Markman, eds., Cognitive Dynamics: Conceptual and Representational Change in Humans and Machines (2000), as well as the relevant works in Note 7 above.
 See, for example: John Jefferys, “Brain Waves (40Hz) Research“; and Yoko Yamaguchi, “Synchronization in the Hippocampus as a Neural Principle Representing Contextual Information” [PDF].
 See: Barry Stein, Mark T. Wallace, and Terrence R. Stanford, “Brain Mechanisms for Synthesizing Information from Different Sensory Modalities” (pp. 709-36) in E. Bruce Goldstein, ed., Blackwell Handbook of Perception (2001), which includes excellent summaries of the science, and extensive bibliographies.
 In the domain of causal explanation, that is. Epiphenomenalism remains a useful prospect for the explanation of the ontology of qualia, though I am not convinced it is needed even for that. At any rate, I am a strict physicalist, though I do not have any strong reasons to exclude a simple ontological epiphenomenalism as a possible variation of my current worldview, since no causal role is needed for it.
 See the excellent guide by David Chalmers: Zombies on the Web, and his book The Conscious Mind: In Search of a Fundamental Theory (1996), for all different views on this subject. From his online essay on “Dancing Qualia” I believe we can make a solid case for the impossibility of a “zombie” in the sense Reppert intends (as opposed to a zombie who only appears the same but is actually physically different and engages in different “secret” behaviors, like lying rather than telling the truth).
 See the preceding note. I believe all animals that have any kind of perceptual brain system (rather than a mere sensory reflex system) have experiences just as we do—complete with qualia and so forth—as do all computers who have functionally comparable systems (like Shakey the Robot), since I believe it is physically impossible for the physical function to occur without at the same time causing the phenomena of experience. I believe they are literally identical—to compute a perception literally is to have an experience of perceiving—but I will concede it is possible this might be an epiphenomenal relationship (as Chalmers believes). I believe the difference between animal and human consciousness is that while, for example, human and animal visual consciousness (and auditory consciousness, and primal emotional-physiological consciousness, and so on) are basically identical (except, of course, in respect to physiological differences in their sensory-perceptual systems), humans have one sensory-perceptual brain system all (or most) other animals lack: namely, an organ that can “see” thought processes and construct a perception of a “self” (etc.).
 I should note that here especially, but very often throughout the book, Reppert considers only simple deduction, and completely ignores inductive logic, or any other kind (like fuzzy or Boolean logic). So when my remarks, here as elsewhere, make the same assumption, this is simply because I am responding to Reppert—even though I know full well there is a lot more to logic than the Aristotelian syllogism.
 Reppert strangely cites Ayer on the ontology of logic here (95), even though Ayer was not a physicalist, but a positivist, who rejected all metaphysical doctrines. But more importantly, Reppert doesn’t even get the context right: the quote in question pertains to the issue of synthetic a priori propositions, not the question of logical relations, which constitute analytic a priori propositions, whose ontological reality Ayer never disputes. Thus, his quotation of Ayer has nothing to do with the point Reppert is using that quote to support.
 It is not correct that logic pertains to all thoughts. Logic would not pertain, for example, to emotional and other kinds of nonrational reasoning—internally speaking, that is; externally, one could still accuse such reasoning of being illogical, but that is precisely the point. In contrast, by definition logic will apply to all linguistic thought of any sort, since all languages by nature possess a logical structure, and thus entail logical rules. As I explained earlier: language can only have meaning, and thus can only be computationally useful, insofar as it obeys logic.
 Reppert strangely asserts that “these things are not learned from experience,” namely the fact that the rules of logic apply to all thoughts, which is something he would never say if he had actually studied the science of child intellectual and cognitive development. Has Reppert never heard of Piaget? And that’s just the beginning—a lot of work has been done since. But for what Piaget discovered regarding how children “learn” logic in a complex way over many years, see W. Huitt and J. Hummel, “Cognitive Development” (2003) at Educational Psychology Interactive; the entry for “Cognitive Development in Children” at AllPsych Online; and a survey of “Piaget’s Preoperational Stage of Cognitive Development” at the University of Houston College of Technology.
 Since genetic structure is too limited and difficult to bring into line, and harder to change, dependence on it makes the organism less flexible than dependence on memetic assistance does. The first point relates to the difficulty of achieving a perfectly rational computer by natural selection—that is not impossible, but a much easier route is to achieve a memetically reasonable computer, i.e. one that possesses a serviceable variety of natural reason, as I defined it earlier. But the latter innately permits the discovery of the former as just another useful technology. The second point relates to what I have already said about the distinctive route taken by the hominids toward generalization of function, which greatly enhances adaptability far beyond what genetics can ever achieve by itself. An organism that has the faculties sufficient to discover a technology of reason is much more adaptable than an organism that is hardwired with that technology. Many excellent science fiction tales about the fatally excessive rationality of computers explore this point well enough. The advantages of innate adaptability in a memetic organism are so great that it is unlikely a thoroughly hardwired technology of reason will ever naturally exceed it in producing differential reproductive success (though it could perhaps exceed it if intelligently designed—as of yet, no such organism exists to test the theory, but again science fiction has explored the consequences of humans constructing cognitively superhuman androids with unlimited reproductive capabilities).
 Of course, even following Hume and Reppert, inductive reasoning is fundamentally circular for everyone, even for God. Everyone is in the same boat, theist or atheist. All one can do is posit a hypothesis (e.g. “God exists” or “Nature is consistently uniform”) to explain the evidence (consistent uniformity) and then constantly test that hypothesis (so far, it has never been falsified, and has been abundantly verified). To accept this line of reasoning, all one need do is reject the groundless and self-defeating methodological principle “Maybe, therefore probably.” And the theist is in no better position than an atheist here.
Still, I am skeptical of the common view that there can be no non-circular solution to Hume’s Problem of Induction. But since mine is not a view supported by scholarly consensus, I will only propose my solution here as an untested theory (or perhaps merely a research program), and not as an established fact:
Induction is not simply a past-future method of analysis. Induction can also be applied to static (atemporal) set analysis. For example, we can “induce” what the odds are that I own a car by taking a snapshot (i.e. examining a random subset) of the entire population that shares my relevant characteristics (income level, etc.) at the same point in time. The inference that what holds for the subset will hold for the whole set is induction, even though here it is not inferring from past to future, but from some present facts to other present facts. But in this case, if the subset is in fact a truly random sample, we can be deductively certain of the probability that I own a car, even if there are an infinite number of people and the sample only contains one thousand members. This is because the validity of statistical reasoning (that a conclusion regarding the probable content of an infinite set of data can be drawn from a very small and finite but random sample of that data) has been deductively established. Such proofs form the core of the science of statistical mathematics. So no presumption of uniformity is necessary here.
Formal proofs of the validity of statistical reasoning took place after Hume’s day (see Theodore Porter, The Rise of Statistical Thinking: 1820–1900, 1986), so he can be excused for not having anticipated it. On the deductive foundations of statistical reasoning, see, for example, I. C. McKay and P. K. McKay, “Plugging the Gap in the Logic of Classical Statistical Inference” (2002). The most important element is, of course, what can be validly deduced from the procedure of random sampling, though statistical systems are still possible that derive from nonrandom sampling (as the McKays point out). But limiting the issue to randomization, one can deductively prove that a random sample will represent the statistical properties of the whole to within a deductively quantifiable range of error (for example, one can deductively prove, say, that there is a 99% chance that it is between 88% and 92% probable that I own a car). No induction is needed to arrive at this conclusion.
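The car-ownership example can be made numerically concrete. What follows is only an illustrative sketch (the normal-approximation interval is a standard modern formula, not the McKays’ own method, and the sample figures are hypothetical): given a truly random sample, the confidence interval follows from the sampling model by deduction alone, with no appeal to the uniformity of nature.

```python
import math

def proportion_ci(k, n, z=2.576):
    """99% normal-approximation confidence interval for a population
    proportion, given k 'successes' in a random sample of size n.
    (z = 2.576 is the standard normal quantile for 99% confidence.)
    The interval is deduced from the random-sampling model itself;
    no inductive premise about uniformity is invoked."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# E.g. if 900 of 1000 randomly sampled people sharing the relevant
# characteristics own cars (hypothetical figures), we can be 99%
# confident the true rate lies roughly between 88% and 92%:
low, high = proportion_ci(900, 1000)
print(f"{low:.3f} to {high:.3f}")
```

This matches the form of the example in the text: the deduction licenses a statement like “there is a 99% chance that the rate is between 88% and 92%,” even if the whole population is vastly larger than the sample.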
One can challenge the data, but the point is that our theoretical model (the hypothesis that the data are correct) leads to our conclusion by deduction, not induction, and this is the rational basis for confidence in the application of our model to (relatively) uncertain data in the real world. The buck stops not with induction regarding the reliability of our data, but deduction regarding the formal validity of that induction as a statistical conclusion from the data. In the final analysis, this means the undeniable data of uninterpreted sensory content, from which we derive the conclusion that this data implies that, for example, people exist who own cars, etc. The only circularity here lies in the presumption against a Cartesian Demon (the only theoretical model that can fit all the data perfectly and thus can never be distinguished from any other model by a statistical comparison of the raw sensory data), but that circularity exists even for deductive logic (see Note 6).
I do not mean to say this is how we actually deploy inductive logic in practice. In practice we employ short-cuts, heuristic procedures that approximate to the conclusions of a thoroughly rigorous statistical inductive process. That is why humans make so many mistakes, especially in the domain of estimating probability. But we bought that inaccuracy in trade for an efficient organ, since an organ that engaged in a flawless deductive analysis of everything would be technologically impractical (on naturalism—though it should not be on theism). But the theoretical underpinning of these natural inductive procedures and learned short-cuts is not built upon a Humean circle. There is (theoretically) a deductive justification available. Hence, for example, the famous “grue” paradox can be bracketed as an improbable phenomenon without presupposing the efficacy of induction, simply by demonstrating how a random sample of all phenomena that exist produces no instances of a comparable category (of a fundamental universal property changing at a specific chronological time), and therefore the probability is very high (say, 99%) that properties like “grue” are very likely (say, 98-99%) not to exist in the set of all things that exist. Therefore, there probably is no grue. And that conclusion would be deductively certain.
Applying this analysis to the problem of past-future inductions, one must begin by realizing that we have a vast database of past events which were once future events, so we have seen the principle of uniformity demonstrated countless times. Reppert, and Hume, erroneously treat all past events as if they were always and only past events, and thus conclude that, as Reppert puts it, “any extrapolation of past experience to the future will invariably appeal to the resemblance principle being defended” (98). But that is not what we do. The principle is a hypothesis (“future events will probably resemble past events”) which requires as evidence instances of future events matching past ones. We have endless instances of exactly that sort: past “future” events matching past (prior) events. So there is nothing circular about concluding that, because we have a million examples of X matching Y and no counter-instances to speak of, every other X will probably match Y. Or in more precise terms: a random sample of all past-future comparisons produces a finite but very large set which contains nothing but past-future correlations (i.e. no past-future nonuniformities are found in the sample), which deductively entails that there is a very high probability (approaching 100%) that past-future nonuniformities are very likely (approaching 100%) not to exist in the set of all past-future comparisons.
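The closing inference here can likewise be put in concrete numerical form. This is only an illustrative sketch (the sample size, the 99% confidence level, and the assumption of a truly random sample are mine, not anything in the source): from zero nonuniformities observed among n randomly sampled past-future comparisons, an upper bound on their true frequency follows by pure deduction from the sampling model.

```python
def upper_bound_rate(n, confidence=0.99):
    """Largest frequency of past-future nonuniformities consistent, at
    the given confidence level, with observing zero of them in a random
    sample of n past-future comparisons. If the true rate were p, the
    chance of seeing none in n independent draws is (1 - p)**n; solving
    (1 - p)**n = 1 - confidence for p yields the bound deductively."""
    return 1 - (1 - confidence) ** (1 / n)

# With a million sampled comparisons and no exceptions found, we can be
# 99% confident the rate of nonuniformities is below about 4.6 per million:
print(upper_bound_rate(10**6))
```

The larger the sample, the tighter the bound, which is the formal counterpart of the phrase “approaching 100%” in the text.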
That doesn’t mean we can’t be wrong. We can never be certain the universe is and always will be uniform. The universe might spontaneously boil away tomorrow, or the color grue might really exist. But we can be deductively certain that these things are improbable. There are many details that would need fleshing out to make my theory work. For instance, the gaps identified by the McKays must be closed, and a comparably sound theory of nonrandom sampling must be applied (since our selection of past-future comparisons is not perfectly random, but it does approximate to it, especially given our ability to peer deep into the past of the universe and interrogate other persons on earth, but a model strictly defining this approximation would have to be developed). But there is at least a prospect for establishing a deductive statistical justification of induction.
 See, as just one example: J. E. Mazur’s research in “Choice with Probabilistic Reinforcement: Effects of Delay and Conditioned Reinforcers,” Journal of the Experimental Analysis of Behavior 55 (1991): 63-77, and “Theories of Probabilistic Reinforcement,” ibid., 51 (1989): 87-99. The literature on the subject is vast, however, and dates back over forty years. As Nir Vulkan puts it in “An Economist’s Perspective on Probability Matching” (Journal of Economic Surveys, February 2000; available online as a PDF document on his webpage):
During the 1950s and 1960s a large data set was collected about the behaviour of humans, rats and pigeons in repeated choice experiments…[involving] a random device operating with fixed probabilities which are independent of the history of outcomes and of the behaviour of the subject….In some experiments subjects received (small) monetary rewards for making the correct prediction, and in a few of those experiments, had to pay a small penalty for making the wrong prediction. A striking feature of this data is that subjects match the underlying probabilities of the two outcomes.
 See the discussion of the flaws in human probabilistic intuition, for example, in Stuart Vyse, Believing in Magic: The Psychology of Superstition (1997), esp. pp. 94-138; and also: Massimo Piattelli-Palmarini, Inevitable Illusions: How Mistakes of Reason Rule Our Minds (1994); Thomas Gilovich, How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life (1993); and Thomas Gilovich, Dale Griffin, and Daniel Kahneman, eds., Heuristics and Biases: The Psychology of Intuitive Judgment (2002). All of these works include discussion of the evolutionary function provided by our innate sense of probability.
 Robert Grant, “Laws of Nature,” Miracle and Natural Law in Graeco-Roman and Early Christian Thought (1952: pp. 19-28; quotation from p. 28). His survey shows that the idea of a natural law was occasionally flirted with in ancient philosophy, but very rarely, and it was certainly never anything like an axiom employed in science. Most often it was meant only in a moral sense (a “moral” law, rather than a physical law), because the ancients were much more fond of a strong dichotomy between law and nature—law was usually whatever nature wasn’t, and nature was whatever law wasn’t. Instead, in science and physics, the standard idiom was to speak of “necessity of nature” or “natural necessity” or just “the nature of things” and similar expressions.
I am aware of a few exceptions that Grant also acknowledges: e.g. Lucretius refers to the fixed nature of atoms and the things they compose (like animals and their sperm) as “laws” in De Rerum Natura 2.718-29 and 5.58 (though more often as the “pacts” or “established agreements” of nature: 1.586, 2.302, 5.57, 5.310, 5.924, 6.906). But since Lucretius did not believe in creationism, he could not have been imagining a divine law-giver. His gods do not bring nature into existence or produce order in it or assign things their properties, but are in fact subject to the “laws” of nature, which precede the gods, chronologically and ontologically. Hence Lucretius instead argues that things have an inherent nature as a brute, uncaused, undesigned fact (more specifically, what he calls “laws” he says derive entirely from the physical structure of atoms and nothing else, cf. 2.725-29, 2.1090-1104, 5.156-234, 5.419-31). And, more importantly, Lucretius was not a scientist, and wrote long after ancient science had reached its full stature, and the text in question was a poem, and therefore poetic allegory is at play here, not a driving or fundamental conceptualization of nature. Likewise, the same poetic idiom occasionally comes up in other Latin authors after Lucretius (e.g. Cicero, Seneca, Pliny), but none of them are physicists or physiologists, either, and their use of the term is clearly rare and poetic (in contrast with their regular use of the more common phrases for what we call natural laws, like “necessity” or “the nature of things”).
 Reppert’s defense against objections to theism as a solution misses the point of three of those objections (122-24), nos. 2, 5, and 6 (among many others one could add—Reppert’s list is by no means complete): the point is not that those facts contradict theism, but that those facts are difficult to explain on the assumption of theism—without making theism far more complex than naturalism by adding numerous ad hoc hypotheses for which there is no evidence. So the reason those objections are a problem is that naturalism more easily explains the same facts, meaning it is a “better explanation” than theism. Reppert does not adequately address the Best Explanation case for naturalism against his alternative.