Book Review: Michael J. Denton. 1998. Nature’s Destiny: How the Laws of Biology Reveal Purpose in the Universe. New York: The Free Press. xix+454 pages.
Mark I. Vuletic
Michael J. Denton argues for many things in his most recent book, but the thesis that occupies center stage is that
the cosmos is a specially designed whole with life and mankind as its fundamental goal and purpose, a whole in which all facets of reality, from the size of galaxies to the thermal capacity of water, have their meaning and explanation in this central fact.
Denton argues for this bio/anthropocentric teleological view by appealing to the notion that “the cosmos is uniquely fit for life as it exists on earth and for organisms of design and biology very similar to our own species.” What does Denton mean by “uniquely fit”? Although he does not provide the reader with a succinct definition, apparently thinking the term is self-explanatory, it appears from gathering bits and pieces here and there that his unique fitness thesis consists of three subtheses:
(i) that the laws of nature and boundary conditions governing our universe guarantee the emergence and persistence of familiar and anthropomorphic life (a subthesis I will call the fitness subthesis–FS),
(ii) that the laws of nature and boundary conditions governing our universe guarantee the failure to emerge and persist of any form of life radically different from familiar and anthropomorphic life (a subthesis I will call the uniqueness subthesis–US), and
(iii) that the laws of nature and boundary conditions governing our universe are ideally suited for the emergence and persistence of familiar and anthropomorphic life; that is, that every property of the laws of nature contributes optimally to the emergence and persistence of familiar and anthropomorphic life (a subthesis I will call the perfection subthesis–PS), 
which are coupled with a crucial auxiliary hypothesis:
(iv) that FS, US, and PS jointly embody an extraordinary and surprising collection of meaningful coincidences (a subthesis I will call the extraordinary coincidence subthesis–ECS).
Part 1 (Chapters 1-11) of Nature’s Destiny is the primary source of argument for ECS and the “persistence” half of FS, US, and PS, while Part 2 (Chapters 12-15) is the primary source of argument for the “emergence” half of FS, US, and PS, combined with a critique of gradualistic evolution. If nothing else, Part 1 is at least a veritable horn-o’-plenty of interesting tidbits about biology and chemistry, and was my favorite part of the book. Part 2 is more dubious and speculative, as Denton himself admits, but I still enjoyed it, and found in it plenty of technical references I will want to check over the next couple of years to see whether I will draw from them the same morals Denton does. Although I do not agree with Denton’s conclusions, I do not regret having read Nature’s Destiny, would not discourage anyone else from reading it, and am impressed with the amount of work Denton has put into it.
For the purposes of this review, I will concede FS and US, and all of the specific examples Denton offers in support of unique fitness. It is worth pointing out that most people who offer arguments in the same category as Denton’s (i.e. so-called “fine-tuning” arguments) do not buy into FS and US. However, I do not feel qualified to assess FS and US myself, and granting them to Denton will not, in any case, affect my views on his main thesis. I will raise a few questions (fatal ones, I think) about PS in the third section of this review, and spend a great deal of time on ECS (but concede PS for the sake of argument) in the fourth.
Before continuing my discussion, I think it is worth quoting Denton at length about the distance between his project and the standard creationist project (including that of the “intelligent design” movement):
[I]t is important to emphasize at the outset that the argument presented here is entirely consistent with the basic naturalistic assumption of modern science–that the cosmos is a seamless unity which can be comprehended in its entirety by human reason and in which all phenomena, including life and evolution and the origin of man, are ultimately explicable in terms of natural processes. This is an assumption which is entirely opposed to that of the so-called “special creationist school.” According to special creationism, living organisms are not natural forms, whose origin and design were built into the laws of nature from the beginning, but rather contingent forms analogous in essence to human artifacts, the result of a series of supernatural acts, involving God’s direct intervention in the course of nature, each of which involved the suspension of natural law. Contrary to the creationist position, the whole argument presented here is critically dependent on the presumption of the unbroken continuity of the organic world–that is, on the reality of organic evolution and on the presumption that all living organisms on earth are natural forms in the profoundest sense of the word, no less natural than salt crystals, atoms, waterfalls, or galaxies.
In large measure, therefore, the teleological argument presented here and the special creationist worldview are mutually exclusive accounts of the world. In the last analysis, evidence for one is evidence against the other. Put simply, the more convincing is the evidence for believing that the world is prefabricated to the end of life, that the design is built into the laws of nature, the less credible becomes the special creationist worldview.
So the theological view Denton argues for is at most like deism or pantheism, where divine agency is required to get the universe going, but any miraculous deviation from lawlike and mechanical behavior is prohibited once the universe is in motion. It is, however, distinct from pantheism and some varieties of deism in that it takes the production of familiar and anthropomorphic life as the raison d’être of the universe.
2. Establishing a bridge from unique fitness to the bio/anthropocentric teleological view.
One of my primary concerns with Nature’s Destiny is its failure to provide adequate argument about how to get from point A to point B. In order to move from the unique fitness thesis to the bio/anthropocentric teleological view, one needs two bridging principles in addition to ECS. First one needs to establish that
(v) the collection of meaningful coincidences embodied by FS, US, and PS is best explained, and correctly explained, by positing that the collection has actually been consciously designed to have the form it does (a thesis I will call the coincidence-to-design thesis–CTD).
Justifying CTD would allow one to infer the existence of a designer from unique fitness. However, CTD alone would still not be enough to demonstrate the bio/anthropocentric teleological view–i.e. that the designer made the universe the way it is in order to produce ordinary and anthropomorphic life. To take that extra step, one must demonstrate further that
(vi) the correct or most probably correct explanation as to why the collection of meaningful coincidences embodied by FS, US, and PS would be designed to have the form it does is that the designer’s primary goal was to produce familiar and anthropomorphic life (a thesis I will not name since I discuss it only in the paragraph below).
Denton does not even attempt to provide a justification for (vi). This leaves him wide open to the objection implicit in J. B. S. Haldane’s legendary remark about the creator having an inordinate fondness for beetles–for all we can tell, a designer set up things the way they are for the purpose of producing beetles (for instance), with humans merely being an interesting side-effect. After all, there are thousands of species of beetles and only one human species, and beetles have been around for a much longer time, so arguably the universe is more ideally fit for beetles than for man. What’s worse, if our kind of universe is the only kind that can support supernovas, for instance, then it may be that all life–including both beetles and humans–is an inconsequential by-product of a firecracker universe, a universe which was designed to act like a gigantic Roman candle for the pyrotechnic delight of the creator. Pre-Copernican instincts, of course, run deep in the blood of virtually everyone raised in Western civilization, so we reflexively balk at the notion of a cosmic designer who makes something other than us the showcase of his creation, but these impulses should be recognized for what they are–intuitions in need of rational justification. Thus, Denton has already fallen short of demonstrating the bio/anthropocentric teleological view.
However, we should return to CTD, because even if Denton has not provided a rigorous demonstration of the bio/anthropocentric view, a rigorous demonstration of CTD would at least confer upon him the nontrivial ability to infer from unique fitness a designer of one or another of the things for which the universe is ideally fit–a something-or-other which could conceivably turn out to be familiar and anthropomorphic life, after all.
3. Minor attempts at justifying CTD.
One of the most disappointing aspects of Nature’s Destiny is its comparative lack of argument for CTD. In the more than four hundred pages of the book, Denton spends only three (pp. 383-385) attempting to justify the thesis, and does so by attempting to answer only two objections–an objection based upon the weak anthropic principle (viz. the idea that the laws of nature must be fit for life, because otherwise we would not be here to discuss them), and an objection based upon chance over time (viz. that “given an infinite period of time [chance] will generate even the most improbable result” and that hence “the order of the cosmos is ultimately a matter of chance and we need seek no further explanation”). He nowhere even considers the idea that the laws of nature have the form they do by necessity–that they just could not have any other form. But setting this serious omission aside for the moment, let’s look at the objections he does respond to.
Denton responds to the weak anthropic principle by noting that it renders only FS unsurprising, since our existence does not necessitate PS, and that it therefore does not adequately explain the kinds of things he discusses throughout his book. My disagreement with Denton here may be a bit surprising: I think he concedes too much to the weak anthropic principle. I wholeheartedly agree with Denton that the weak anthropic principle fails to explain PS, but I would submit–in agreement with standard critical discussions of the principle–that it does not even explain FS. It renders unsurprising the fact that we exist given that we exist (which is unsurprising anyway, since it is a tautology), but it does not render unsurprising the fact that we exist simpliciter.
Denton responds to the chance hypothesis by claiming that (presumably in contrast to his design hypothesis) it is unfalsifiable and counterintuitive. Although I am no great fan of the particular chance hypothesis Denton discusses, I believe both of his objections miss the mark. As far as being counterintuitive goes, different people find different things intuitive, so there is no rigorous argument against chance there. Denton himself points out that Lucretius, David Hume, and P. W. Atkins–none of whom are intellectual lightweights–buy into the type of chance hypothesis he attacks. Denton appears to insinuate (although perhaps, hopefully, he just chose his words poorly) that their motives are intellectually dishonest, that they reason the way they do “to avoid concluding to design”; but as far as demonstrating one’s case goes, ad hominems are even more worthless than appeals to personal intuition.
As for the claim of lack of falsifiability, the chance hypothesis Denton attacks certainly is falsifiable, since it requires an infinite amount of time in the past, and the existence of such a span of time is subject to refutation by the development of an accepted theory of quantum gravity which posits a singularity or an otherwise finite time in the universe’s past. Even the more attenuated chance hypothesis, which requires only a sufficiently long period of time in the past to make what we observe around us probable, is subject to refutation according to the normal canons of science by establishing a sufficiently short age for the universe.
I agree with Denton that his design hypothesis is falsifiable, given the testing conditions he has stipulated for it. Denton claims that it takes just “one clear case where a constituent of life or a law of nature is evidently not uniquely or ideally adapted for life, and [my] design hypothesis collapses”. This seems fair enough, but given such testing conditions, the design hypothesis is not just falsifiable, but falsified.
For instance, I think the following constitute refutations of PS: there are, first of all, aspects of natural law that are counter to the proliferation of life, such as (a) the fact that the laws of nature do not necessitate the existence of life on every planet and in the vacuum of space, (b) the fact that the laws of nature do not cause appropriate collections of molecules to spontaneously coalesce into adult organisms, (c) the fact that conditions on Earth have caused 99.9% of all species that have ever existed to go extinct, (d) the fact that asteroids and comets are able to smash into inhabited planets, with the possibility of killing everything on them, and (e) the fact that the universe is doomed to either a collapse that will eventually kill everything, or a heat death which will eventually kill everything. There are also aspects and products of natural law that are simply inconsequential to the proliferation of life, such as (a) biological structures like male nipples and embryonic whale teeth, and (b) the fact that the laws of physics allow humans to artificially generate element number 112 for a mere 280 microseconds, or nickel-48 for a mere fraction of a second–both elements which do not occur outside of the laboratory.
What would Denton say to such examples? Curiously enough, Denton goes so far as to explicitly state that the deduction of the perfection of Homo sapiens from the fitness of the universe for its existence is “unwarranted and in fact absurd. The human body is a wonderfully crafted machine, but its design is not perfect in any absolutist sense.” Isn’t this just the denial of PS?
Perhaps Denton has been too cavalier in stating the conditions for falsifying his design hypothesis, and would prefer to replace PS with a more attenuated notion of fitness, whereby the laws of nature are alleged to be highly fit, rather than ideally fit, for ordinary and anthropomorphic life (Denton, in fact, sometimes talks about the laws being “supremely” fit as a mild contrast to “ideally” fit–this is one of the frustrating aspects of the book that makes it difficult to understand exactly what unique fitness is supposed to be). But if this is so, it becomes unclear what exactly would qualify as a falsification of his design hypothesis. He will at least need to rigorously quantify what a “high” degree of fitness in the laws of nature is. Perhaps this is not an insurmountable task, since biologists regularly talk about degrees of adaptation, but Denton needs to do some work here. How many properties or consequences of the laws of nature that are irrelevant or destructive to life would we need to find in order to falsify the attenuated thesis?
Clearly, there are some problems here that Denton needs to work out. However, I do not believe that solving them would permit Denton to make a case for design. Even if Denton were to conclusively demonstrate that the original unique fitness thesis were true (including PS, which I will simply grant him for the sake of argument from this point on), I do not see how this would raise the probability of a designer by even a fraction of a percent. This is because of the enormous complications involved in establishing ECS, which Denton does not even acknowledge, much less solve.
4. Chance, necessity, possibility, and probability.
Denton’s argument depends critically on the notion that the properties of the laws of nature are surprising in that they presumably form a coherent whole where everything works together optimally to generate and support familiar and anthropomorphic life (this is the sentiment embodied in ECS). But in considering whether any aspect of the laws of nature is surprising, we need to ask whether we may take their form, minus any flexible fundamental constants, as given. The standard fine-tuning argument answers the preceding question affirmatively–it appeals only to the allegedly improbable values of presumably flexible constants that appear in the laws of nature as needing explanation. If, in the end, science were to produce a theory of everything (TOE) which eliminated the flexibility of those constants, then the standard fine-tuning argument would fail entirely. Now, if I understand Denton’s argument correctly, he is asserting that even if such a TOE were eventually developed, it would still be genuinely surprising that all of the laws work together to create an ideal situation for life. For this sentiment to make any sense, it must be the case that the laws could be other than what they are.
If the laws of nature could not be other than they are, then, given Denton’s assumption that all of the other aspects of the universe follow inevitably from the laws of nature, there would be nothing left to explain. Perhaps the way the laws work together to produce and sustain life (to whatever extent they actually do) might still feel puzzling to some people, just as the way the axioms of Euclidean geometry work together to produce the very elegant Pythagorean theorem feels puzzling to some people (myself included) who accept Euclid’s axioms, but such a feeling would be just as groundless in the former case as it is in the latter. If A and B are all that there is, and A is a given, and B is a consequence of A, then B requires no explanation, even if A feels as mundane as “2+2=4” and B feels as surprising (to the uninitiated) as “You may already have won ten million dollars.” So if Denton took the laws of nature (minus the fundamental constants at most) as inflexible, his entire book would collapse into just the first chapter, where he gives a brief overview of the fine-tuning of the fundamental constants. Therefore, I presume that Denton is rejecting the assumptions of the standard fine-tuning argument, and declaring instead that the form of the laws of nature, in excess of the values of the fundamental constants, could be other than it is, and hence requires explanation for its current, presumably surprising form–an explanation he takes to be best provided by the bio/anthropocentric teleological view.
As a side note, I should point out one more thing that Denton is not doing. There are fine-tuning proponents who believe that there are flexible parameters other than just the fundamental constants or the laws of nature as a whole–these proponents believe that even if one takes both the laws of nature and the current values of the fundamental constants as given (although they are typically inclined to do neither), these givens would severely underdetermine certain properties within the universe required for life, such as the Earth’s distance from the sun. Such underdetermined extra parameters would have to be set by the intervention of some supernatural agent to permit life to exist. However, unless Denton has thoroughly tripped himself up, this can’t be the kind of argument he is setting forth, since he has explicitly stated the belief that the laws of nature are sufficient to produce and sustain life without the benefit of supernatural agency in the universe. In fact, much of Nature’s Destiny reads like a naturalist’s refutation of such extra underdetermined parameters.
So, given that Denton’s argument depends crucially upon the flexibility of the laws of nature in excess of the fundamental constants, and that he requires that the flexibility be sufficient to make the properties of our universe surprising, there are two very serious problems:
(Problem 1) How does one justify the assertion that the laws of nature could have a different form than they actually have, even in the presence of a TOE which fixed all of the fundamental constants? There seems to be a logical possibility that they could be different, but is ultimate reality constrained only by the laws of logic, or are there–or could there even be–more constraints than just the laws of logic? Personally, I have no idea how anyone would go about providing a rigorous answer to these two questions. I do not even find one position more intuitive than the other (not that my intuitions, or anyone else’s, would be worth anything even if they did express a preference). Denton, in any case, has not even acknowledged the question, much less provided a rigorous answer. But without a rigorous answer, we can’t know whether unique fitness is surprising. As I pointed out above, once the laws of nature are taken as given, no consequence of them requires any additional explanation, as mysterious as one might erroneously feel those consequences are.
(Problem 2) Even if one were to come up with a rigorous demonstration that the laws of nature are ultimately flexible, precisely how flexible are they: what possibilities are there, and what probabilities should be associated with these possibilities? To get any legitimate sense of how surprising the properties of our universe are, we need to establish with rigor a domain of possible sets of laws and a probability measure over that domain. But Denton has not done this, and personally, once again, I have no idea how anyone would go about establishing with rigor either one.
The only possible line of pursuit that occurs to me is that we take the null hypothesis of “everything goes, constrained only by the laws of logic, with equal probability for each possibility” as a default (i.e. that we employ the Principle of Indifference over the domain of all logically possible sets of laws). But there are two problems with doing this:
(Problem 2a) Even if we presume that everything goes, we need to figure out how to partition the sets of laws if we are to assign equal probabilities to each set. But it is not at all clear how to make such partitions, since it is not entirely clear which properties of the universe can vary independently of one another. For instance, if we suppose that minds and brains are not identical, then we get a different number of possible laws of nature than if we presume that minds and brains are in fact identical (because the former has two independent parameters–minds and brains can vary independently of one another–where the latter has only one–minds and brains vary together).
(Problem 2b) Mathematical puzzles developed by Bertrand and von Kries raise difficulties with the very notion of equiprobability. Since a thorough discussion of Bertrand’s paradox would take the average reader into a technical no-man’s land, and since Poincaré and Jaynes may have solved that exact problem, I will not deal with it here. Von Kries’ problem, however, is both easier to understand and more problematic. Bas van Fraassen describes the essence of the von Kries problem as follows:
Von Kries posed a problem [in which] several parameters are related by a simple logical transformation. Consider volume and density of a liquid. If mass is set equal to 1, then these parameters are related by:
density = 1/volume; volume = 1/density.
But a uniform distribution on parameter x is automatically non-uniform on y = (1/x). For example,
x is between 1 and 2 exactly if y is between 1/2 and 1
x is between 2 and 3 exactly if y is between 1/3 and 1/2.
Here the two intervals for x are equal in length, but the corresponding ones for y are not. Thus Indifference appears to give us two conflicting probability assignments again.
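Von Kries’ point is simple enough to check with a few lines of arithmetic. The following sketch is my own illustration, not part of the review: it applies the Principle of Indifference first to x (volume, taken uniform on [1, 3]) and then directly to y = 1/x (density, which then ranges over [1/3, 1]), and shows that the two applications assign conflicting probabilities to the very same events.

```python
# A minimal numerical sketch of the von Kries problem (my own
# illustration).  Let x stand for volume, uniform on [1, 3], so that
# y = 1/x (density) ranges over [1/3, 1].

def uniform_prob(lo, hi, a, b):
    """Probability that a variable uniform on [lo, hi] falls in [a, b]."""
    return (min(b, hi) - max(a, lo)) / (hi - lo)

# Indifference applied to x: the equal-length intervals [1, 2] and
# [2, 3] each get probability 1/2.
p_x_12 = uniform_prob(1, 3, 1, 2)   # x in [1, 2]  <=>  y in [1/2, 1]
p_x_23 = uniform_prob(1, 3, 2, 3)   # x in [2, 3]  <=>  y in [1/3, 1/2]

# Indifference applied to y: the very same two events, measured on the
# y-scale, now get unequal probabilities.
p_y_upper = uniform_prob(1/3, 1, 1/2, 1)    # y in [1/2, 1]
p_y_lower = uniform_prob(1/3, 1, 1/3, 1/2)  # y in [1/3, 1/2]

print(p_x_12, p_x_23)        # 0.5 and 0.5
print(p_y_upper, p_y_lower)  # roughly 0.75 and 0.25
```

The event “x is between 1 and 2” is identical to the event “y is between 1/2 and 1,” yet Indifference over x gives it probability 1/2 while Indifference over y gives it roughly 3/4: the conflicting assignments van Fraassen describes.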
It is presumably because of the von Kries problem that van Fraassen states earlier that
It is true that the historical controversy extended into our century, but I regard it as settled now that probability is not uniquely assignable on the basis of a Principle of Indifference, or any other logical grounds.
Similarly, while Roy Weatherford believes that the Principle of Indifference may be justified when one is dealing with a finite number of elements, the von Kries problem compels him to the following conclusion:
We may say, then, that the Principle of Indifference just will not work reliably on problems involving continuums or an infinity of alternatives.
Since the domain under consideration in the fine-tuning argument always involves a continuum (a “non-denumerably” or “uncountably” infinite number of elements) whether we consider all logical possibilities or just a continuous range of possible values for the fundamental constants, von Kries’ problem apparently undercuts the fine-tuning argument as a whole.
But let’s set the partitioning and von Kries problem aside and assume both that we know which parameters are independent, and that the notion of equiprobability is unproblematic. One must still establish with rigor that the correct equiprobability distribution makes the family of all sets of laws that are uniquely fit for life less probable than the family of sets which are not. Ordinarily, when we impose equiprobability on a finite domain of discrete elements, we figure out the probability of a set of elements within that domain by counting the number of elements in the set and dividing it by the total number of elements in the domain. For instance, if you have a bag containing 90 red marbles and 10 blue ones, you conclude that the probability of randomly drawing a red marble from the bag is 90 divided by 100, or 9/10, and the probability of randomly drawing a blue one is 10 divided by 100, or 1/10. But we can’t use this kind of method with families of sets relevant to the fine-tuning argument, because both the family of sets of laws ideal for life, and the complement of that family, contain a continuum of sets–an uncountably infinite number of sets. This means that there are the same number of sets in both families. So counting and dividing cannot do the trick.
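The count-and-divide rule for a finite, discrete domain is simple enough to write down, and doing so makes the disanalogy with the continuum case vivid. The following is my own sketch of the marble example above, not something from the review:

```python
# Equiprobability over a finite, discrete domain: count and divide
# (my own sketch of the marble example in the text).
bag = ["red"] * 90 + ["blue"] * 10

p_red = bag.count("red") / len(bag)    # 90 / 100
p_blue = bag.count("blue") / len(bag)  # 10 / 100

print(p_red, p_blue)  # 0.9 0.1

# The rule has no analogue for the families of law-sets at issue: both
# the "uniquely fit" family and its complement have the cardinality of
# the continuum, so there is no count to take and nothing to divide.
```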
What if we assign each equally-sized set of alternatives the same probability? After all, we are able to assign equally-sized regions of dartboards the same probability despite their containing a non-denumerably infinite number of positions. But how are we to justify coming to any conclusion about the relative sizes of distinct regions of possible universes? Let’s assume that the von Kries problem can be overcome–that we are able to settle on a specific parameter which tells us exactly how to divide up the regions of possible universes. It still remains to be shown that the relevant parameter will actually make the region of uniquely fit universes improbable. Proponents of fine-tuning arguments assume not only that there is a unique way of assigning probabilities to regions of possible universes, but that that unique way is one which will make the regions with life-permitting conditions (or, for people like Denton, conditions ideal for life) highly improbable. Even if the von Kries problem could be overcome, for all we know the region in question could be highly probable. Proponents of fine-tuning arguments speak as though the space of possible universes is laid out in some space like a gigantic dartboard, with the interesting universes clustered in a pin-point center. But there is no magical dartboard in the sky which pre-divides everything for us all nicely. Intuitions about the probabilities ultimately appear to come down to just that–intuitions, with no discernible foundation.
Aside from counting and comparing sizes, there is only one other way of establishing a probability measure, actually a very mundane way: we select possibilities randomly, and then assign probabilities based upon the frequency of each element as the number of random selections approaches infinity. This is in fact why we are able to conclude that the center of a dartboard is much less likely to be hit by a random dart throw (which for me personally, happens to be identical with a throw aimed at the center) than the rest of the board–because even though both the center and the rest of the board contain a non-denumerably infinite number of points, the expected frequencies are experimentally different. Of course, once we have established expected frequencies for such situations, we can through ordinary scientific means hypothesize physical mechanisms that will make all relevantly similar situations function in the same way (so we can extrapolate from the way dartboards are divided up into equiprobable regions to the way whole continents are divided up into equiprobable regions, in the absence of disconfirming data). However, such extrapolation can only be justified after experiment (not necessarily in the laboratory–no doubt dartboard-type probability assessments were ingrained into most organisms by natural selection long before they even developed the capacity for theorizing about underlying mechanisms), and only to situations that we have reason to believe involve similar physical mechanisms.
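The frequency route can be made concrete with a small simulation. This sketch is my own illustration (the bullseye radius of 0.1 on a unit board is an arbitrary choice): although the bullseye and the rest of the board both contain uncountably many points, random throws yield stable and very different frequencies.

```python
import random

# Establishing region probabilities by frequency (my own sketch, not
# from the review).  Points are drawn uniformly from a unit disk by
# rejection sampling; the bullseye radius is an arbitrary 0.1.
random.seed(0)  # fixed seed so the run is reproducible

def random_throw(radius=1.0):
    """A point uniform on a disk of the given radius (rejection sampling)."""
    while True:
        x = random.uniform(-radius, radius)
        y = random.uniform(-radius, radius)
        if x * x + y * y <= radius * radius:
            return x, y

throws = 100_000
bullseye_radius = 0.1

hits = 0
for _ in range(throws):
    x, y = random_throw()
    if x * x + y * y <= bullseye_radius ** 2:
        hits += 1

freq = hits / throws
print(freq)  # hovers near the area ratio 0.1 ** 2 = 0.01
```

The catch, as the text goes on to argue, is that this method presupposes a repeatable selection process; with only one universe in the sample, there are no frequencies to consult.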
But as Hume pointed out long ago, we have access to only one universe, so there is no way we can establish anything about it by appealing to relative frequencies among actually observed universes, and there is moreover no reason at all to believe that whatever process generates universes (if there is such a process) functions like familiar physical mechanisms. In fact, familiar probability measures have been shown to break down even in some situations that seem quite mundane at first; for instance, in the two-slit experiment, one of the classic examples of quantum-mechanical weirdness that infects the world. Even our intuitions about the equiprobability of dartboard divisions fail in ordinary situations due to the slight variation of the earth’s gravitational field between the top and bottom of earthbound dartboards. The necessity of a backdrop of understood natural law in non-experimental assessments of probabilities raises severe problems, as is made clear by Robin Le Poidevin:
What determines the probability of [a] lamp’s coming on is a conjunction of various states of affairs obtaining and the laws of physics. Altering any of these will alter the probability. But if the probability of events is determined in part by the laws of physics, what can it mean to talk of the probability of the laws of physics themselves? If we judge that it was extremely improbable that the charge on the proton should have been 1.602 × 10⁻¹⁹ coulomb, against what background are we making this judgement? What do we suppose is determining the probability of this value?
Ultimately, we could establish an empirical or empirical-extrapolation account of the probability of our universe only if we had access to whatever process generates universes (if, once again, there is such a process), and deduced from that process the relative frequencies of universes it produces. But now this threatens to take us back to the kingdom of the TOE, in which our universe exists by necessity.
In the face of concerns like Le Poidevin’s, the mind reels backwards and grasps once again at logical consistency as the only constraint on the form of universes, declaring all processes irrelevant. But with processes declared irrelevant, and no way to count universes or tape-measure the size of families of universes, we are left with no rigorous way at all to establish any probability measure over any set of possible universes.
So, in the absence of a solid argument to the contrary (which Denton does not provide, since he does not recognize that one is needed), we simply cannot know whether it would be surprising for our universe to be uniquely fit for ordinary or anthropomorphic life. I would imagine that this feels disconcerting to many readers, and indeed, it jars with intuitions I readily admit I share with such readers, but I have found no way to avoid it. Intuition is no substitute for, much less a viable contender against, rational argument.
I must therefore conclude that the status of Denton’s main argument is pathologically indeterminate. All that we can know for certain is that the ultimate nature of reality allows for the possibility of life such as ours (since we are, after all, here), but we cannot (yet?) draw any conclusion with any degree of confidence about whether any aspect of the form of the laws of nature is surprising enough to require a teleological explanation. Although it may seem at first that “in science the cosmos has called us home”, apparently in epistemology the cosmos has told us to drop dead.
4. Has Denton recanted his earlier position on evolution?
Denton distinguished himself earlier in his career by becoming one of the very few scientists to oppose evolution without already having a conservative religious axe to grind against Darwin. Therefore, I feel compelled to say at least a few words about the views he expresses towards evolution in his new book, even though they do not add anything to the critique of the main thesis of the book above.
It seems to have become conventional lore that Denton has recanted his earlier book. In reality, I see only one major change between his position on evolution in the earlier book and the newer one: his position on functionless intermediates. Denton still believes gradualistic mechanisms are not sufficient for macroevolution, and that the origin of life is a complete mystery; now, however, he is deeply impressed by the “closeness of all life in DNA space”:
One of the most surprising discoveries which has arisen from DNA sequencing has been the remarkable finding that the genomes of all organisms are clustered very close together in a tiny region of DNA sequence space forming a tree of related sequences that can all be interconverted via a series of tiny incremental natural steps. So the sharp discontinuities…between different organs and adaptations and different types of organisms, which have been the bedrock of antievolutionary arguments for the past century, have now greatly been diminished at the DNA level.
Because of this closeness of even morphologically dissimilar organisms in DNA space, Denton now believes that “functional DNA sequences can be derived via functionless intermediates [and therefore that] a new phenotype or organ system can be generated by saltation”. Note that “saltation” is probably the wrong word to use, since it is generally associated with the notion of a single mutation causing huge morphological changes, whereas Denton is actually talking about huge morphological changes being built gradually but neutrally (i.e. not by selection).
In addition to the closeness of all life in DNA space, Denton is impressed by the possibilities of self-organization and directed mutation–“directed” in the sense that the structures and processes guaranteed by the laws of nature constrain mutations to occur in a specific direction, not in the sense that God’s finger pokes out of the clouds every now and then to zap a thymine into an adenine. I will not, unfortunately, be in any position to assess the evidence he offers for this altered evolutionary view until I have read all of the articles he cites. Denton also finally gestures to the possibility of an “inherent emergent inventive capacity possessed by all living things”, a “limited degree of genuine autonomous creativity [in which] the world of life might reflect and mirror in some small measure the creativity of God”–an idea which he admits “seems to beckon toward some mystical obscurantist cul-de-sac” but nevertheless does not want to discount. I must confess that I do not at all understand what Denton is trying to describe with this last suggestion, so I am unable to comment on it.
It is clear, in any case, that Denton has not disavowed the bulk of what he wrote in his earlier book. He ended that book claiming that the proper account of the development of life would probably be as radical in comparison to the idea of evolution as quantum mechanics was to classical physics. Although Denton appears to have settled on far more prosaic mechanisms than anticipated (with the possible exception of the mystical obscurantist cul-de-sac), Nature’s Destiny can be viewed as a natural extension of Denton’s earlier project of seeking an adequate non-Darwinian account of the development of life.
5. Closing remarks
I would expect many people whose intuitions are sympathetic to Denton’s to be upset at the esoteric line of argument I have taken against Denton’s main thesis. Be assured that I am sympathetic–how many times have I thrown up my hands in despair when some seemingly unshakeable intuition of mine has foundered on an obscure point of epistemology which I would much rather have ignored? It has at times almost been enough to throw me from the whole enterprise of philosophy. Unfortunately, however, there is no easy road to truth. Anyone who is not prepared to face the problems of philosophy in all of their intricate complexity, anyone who is ultimately willing to sacrifice the pursuit of truth on the altar of personal comfort, has no business pretending to care about the kind of project Denton has started to engage in. He has asked difficult questions, and pursued their answers single-mindedly. Now I hope he will ask the even more difficult questions I have pointed to, and give them the same admirable level of attention.
 See, for instance, the second list of parameters in H. Ross. n.d. “Design Evidences: Evidence for the Design of the Cosmos.” URL:http://www.reasons.org/resources/apologetics/designevidence.html. Spotted June 15, 2000.
 So that there is no misunderstanding, let me stress that Denton does not necessarily believe that every exact aspect of the universe was written into the laws of nature, but rather that the laws render it inevitable that all of the conditions required for life will arise somewhere in the universe. Denton’s designer may not have been able to know in advance that the planet we are on right now would come into existence, but he would have known that a planet with all of the relevant properties would have to come into existence somewhere. The laws of nature may underdetermine certain trivial specific aspects of the universe, but they guarantee that life will arise somewhere without the benefit of supernatural tinkering.
 I am indebted to Jon Jarrett for bringing Bertrand’s paradox to my attention and discussing it with me. Bas van Fraassen provides an excellent discussion of Bertrand’s paradox and the von Kries problem in Chapter 12 of his book Laws and Symmetry (Oxford: Clarendon, 1989).
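Bertrand’s paradox can be seen concretely in a short simulation. The following Python sketch (my own illustration, not from the source) estimates the probability that a “random” chord of a circle is longer than the side of the inscribed equilateral triangle, under three equally natural randomization methods; the three methods yield three different answers, which is the point of the paradox.

```python
import math
import random

# Unit circle; side of the inscribed equilateral triangle.
SIDE = math.sqrt(3)

def method_endpoints():
    # Chord defined by two uniformly random points on the circle.
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return math.dist((math.cos(a), math.sin(a)), (math.cos(b), math.sin(b)))

def method_radial():
    # Chord perpendicular to a radius, at a uniformly random distance
    # d from the centre.
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)

def method_midpoint():
    # Chord whose midpoint is uniformly random inside the disc
    # (rejection sampling from the bounding square).
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - (x * x + y * y))

N = 100_000
for name, f in [("endpoints", method_endpoints),
                ("radial", method_radial),
                ("midpoint", method_midpoint)]:
    p = sum(f() > SIDE for _ in range(N)) / N
    # Theory: roughly 1/3, 1/2, and 1/4 respectively.
    print(f"{name}: P(chord > side) = {p:.3f}")
```

All three methods are "uniform" under some description of the chord, yet they disagree; without a privileged description, "the" probability is undefined.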
 Note that the “sets” here are analogous to the “elements” in the marble example, while the “families” here are analogous to the “sets” in the marble example — in the fine-tuning argument, the elements we are worried about are “sets of logically consistent laws.” The language being used here is like that in the drawing of bags (sets that are themselves elements) of red and blue marbles out of a larger bag (the domain), and trying to establish in advance the probability of drawing a bag of a certain type (family)–for instance, the type containing greater than a certain proportion of red marbles. There’s a straightforward way to carry out such calculations if we know how many marbles are in the bag, but only because this problem involves a finite number of marbles which can vary only in a finite and discrete number of ways.
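For contrast, the finite version of the marble calculation really is straightforward. A toy Python calculation (hypothetical numbers of my own choosing, purely to illustrate why finiteness matters): if each bag holds 10 marbles and each marble is independently red or blue with probability 1/2, the chance of drawing a bag with more red than blue marbles is a simple binomial sum.

```python
from math import comb

# Hypothetical finite setup: bags of 10 marbles, each marble
# independently red with probability 1/2.
n, p = 10, 0.5

# Probability that a bag contains more red than blue marbles,
# i.e. at least 6 reds out of 10.
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(6, n + 1))
print(prob)  # 386/1024, i.e. about 0.377
```

Nothing like this computation is available once the "marbles" are possible sets of laws, because there is no finite count to divide by.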
 (Technical) Consider, for instance, a family F of sets of laws, where each set contains the law “an intelligent organism exists at position x in absolute space,” where x varies continuously from one law to the next. Then there is a non-denumerably infinite number of life-permitting sets of possible laws within F. Therefore, since F is a subset of the family of all possible sets of laws, there is a non-denumerably infinite number of life-permitting sets within the family of all possible sets of laws. Note: Remember that I am proceeding under the assumption that anything within the laws of logic goes. If there are appropriate restrictions on what possible laws can look like, then there may not be a non-denumerably infinite number of life-permitting sets of laws. However, such restrictions would require justification; moreover, one would need to show that they do in fact lead to the intended conclusion.
There is a literature in philosophy of science on what can count as a law, but it would take me too far afield to survey it. However, for those who feel that there is something illegitimate about F above, let me offer another example, which even proponents of the standard fine-tuning argument would accept: let K be the family of all sets of laws taking the form of the current laws of nature, except that one of the fundamental constants varies over the real numbers from one set to the next. Then, there is some continuous interval J of variation in that constant which will be so “small” that it will not make a practical difference to the fitness of the universe for ordinary and anthropomorphic life (for instance, let the constant be the ratio of the electromagnetic force to the gravitational force between two electrons, and take the interval to be [10^39, 10^39 + 10^-1,000,000,000,000]). Since the interval is continuous, the subset of K whose constants lie in J consists of a non-denumerably infinite number of sets of laws identically (for all practical purposes) suited for life. (The reader should, of course, remember from the discussion of von Kries that the “smallness” of such intervals under one description has nothing to do with their probability. The material following this footnote will amplify the point.)
 Jon Jarrett has pointed out to me that, of course, one does not need a non-denumerably infinite number of elements in each family for this problem to arise–a denumerably (“countably”) infinite number of discrete elements in each family would also make both families equinumerous, which would prevent one from assigning probabilities to each through the counting/dividing means.
 (Technical) The situation is, in fact, considerably worse than this, although things are already bad enough that I can confine the extra problems to this footnote. The problem is that if anything goes, then there is no reason why our default hypothesis should regard all possible sets of laws as equiprobable, rather than holding that more than one might obtain at the same time (where there is one universe for each set that obtains). Generally when we assume equiprobability as a null hypothesis, it is because we have reason to believe that only one of a set of alternatives can be the case, and we don’t know which. But if anything goes, we can have no idea how many alternatives might be the case at the same time. Therefore, the domain we should be considering is not the family of all possible sets of laws, but a vastly more complicated entity which includes, for instance, the possibility that sets 1 and 563,586,232,132 are realized, the possibility that all sets except set 819,294,281,490,102,475 are realized, the possibility that all sets are realized, and so on–namely, the power set (the set of all subsets) of the family of all possible sets of laws. But power sets have a higher “cardinality” than the sets they are power sets of–so the power set of the family of all possible sets of laws contains a larger infinity of elements than the family of all possible sets of laws, which is itself already uncountable. How are we supposed to count possible sets of laws when there are more such sets than real numbers?
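The cardinality claim invoked here is Cantor’s theorem; for the reader’s reference, the standard one-paragraph proof runs as follows.

```latex
\text{For any set } S \text{ and any function } f : S \to \mathcal{P}(S),
\text{ let } D = \{\, x \in S : x \notin f(x) \,\}.
\text{ If } D = f(y) \text{ for some } y \in S, \text{ then }
y \in D \iff y \notin f(y) = D, \text{ a contradiction.}
\text{ Hence no } f \text{ maps } S \text{ onto } \mathcal{P}(S),
\text{ so } |\mathcal{P}(S)| > |S|.
```

Applied to the (already uncountable) family of all possible sets of laws, this yields the still larger infinity the footnote describes.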
(In fact, perhaps we should be considering not just the situation where laws 12 and 226,212 obtain at the same time, but all of the situations which attach distinct simultaneous probabilities to the two. If I am not mistaken, though, these probabilities would average out on the hypothesis of equiprobability of all situations, leaving us with the slightly less complicated power set I describe above.)
 Any book on quantum mechanics will describe the two-slit experiment. For an especially clear presentation of the experiment and how it has caused all hell to break loose in metaphysics, consult the fine anthology, The Ghost in the Atom (Cambridge: Cambridge University Press, 1993), edited by J. R. Brown and Paul Davies.
 R. LePoidevin. 1996. Arguing for Atheism: An Introduction to the Philosophy of Religion. London: Routledge. pp. 49-50. LePoidevin’s entire book, by the way, is outstanding. LePoidevin shows like no one else how questions about the existence of God are entangled in deep metaphysical questions which most people outside of academia rarely ask.
 I would like to extend special thanks to Jon Jarrett for extended discussion about this review, and his special fortitude in checking the technical parts. I am also very grateful to David Hilbert for reviewing and commenting upon this paper, and in particular for correcting a serious error I made with respect to power sets. Last, but not least, a warm “thank you” to Jeffery Jay Lowder and Michael S. Valle for their reviewing efforts.
Copyright © 2000, Society of Humanist Philosophers. Reprinted with permission.