The Failure of Mathematical Formulations of Hume’s Maxim (2021)

Stephen Nygaard

Abstract: Several commentators have attempted to reduce Hume’s maxim about miracles to a formula in the language of probability theory. This paper examines two such attempts, one of which is based on the probability of the alleged miracle conditioned by the testimony for it, and the other on its unconditional probability. The conditional probability leads to a formula that is valid—though only when qualified—but not useful, while the unconditional probability results in an invalid formula. The utility of expressing Hume’s maxim in terms of probability theory is shown to be questionable, and an alternative approach is presented.

Introduction

In December of 2019, the Secular Web published Keith Parsons’ “Hume’s Beautiful Argument,” an encomium and analysis of the argument against miracles in Section X of David Hume’s An Enquiry Concerning Human Understanding. The title alone was enough to remind me of John Earman’s book from nearly two decades earlier, Hume’s Abject Failure: The Argument against Miracles, which takes a very different view, as the two titles make clear. Indeed, Earman “contend[s] that Hume’s argument is largely derivative, almost wholly without merit where it is original, and worst of all, reveals the impoverishment of his treatment of inductive reasoning. Hume scholars will no doubt be enraged by this charge. Good! There has been much too much genuflecting at Hume’s altar” (Earman, 2000, p. 1).

While I agree with nearly everything that Parsons says, and with very little of Earman’s argument[1], I am troubled by one issue in Earman’s book that Parsons largely—but not entirely—avoids (and perhaps wisely so). This is Earman’s assertion that “Hume’s Maxim begs to be made precise by translating it into the language of probability theory” (Earman, 2000, p. 39). The maxim, which appears at the end of Part I of Hume’s argument, is this:

The plain consequence is (and it is a general maxim worthy of our attention), “That no testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous, than the fact, which it endeavours to establish: And even in that case, there is a mutual destruction of arguments, and the superior only gives us an assurance suitable to that degree of force, which remains, after deducting the inferior.” (Hume, 1748/1977, pp. 756-757)

If I had never read any of the attempts to translate the maxim into a formula, I might have thought that its meaning was perfectly clear. But I have since been disabused of that notion by the diversity of its interpretations. After studying various efforts to produce a formula, I’ve come to think that the attempt itself is misguided. Instead, I offer a paraphrase not based on probability theory.

But first I begin by reviewing two proposed formulas, namely Earman’s and one proposed by Peter Millican as an answer to Earman. This is hardly exhaustive, but I believe it is representative of the problems found in all such efforts. I chose Millican’s formula as the best attempt that I’ve seen (which turns out to be less of a compliment than it might seem), and Earman’s formula as the starkest contrast to Millican’s. Moreover, they both consider and reject several other formulas (Earman, 2000, pp. 39-40; Millican, 2003, pp. 3-7) because they consider them to be incorrect translations of the maxim, an opinion that I do not dispute. However, my concern here is not how accurately a formula translates the maxim, but whether a formula is useful for judging miracle claims. I believe that anyone who carefully examines these other formulas will find that they suffer from the same defects that, as I explain below, are found in the ones proposed by Earman and Millican. Accordingly, I consider it sufficient to examine just these two formulas.

The question of what Hume himself had in mind when writing his maxim cannot be answered without historical, biographical, and textual information that is beyond the scope of this paper. There is no shortage of commentary on the proper interpretation of Hume’s maxim, and it is not my intention to add to it. My thesis here is not that the correct interpretation is nonmathematical, although I’m strongly inclined to believe that it is, and the interested reader can find that position defended in the literature (e.g., Vanderburgh, 2019). But even if a formula misrepresents Hume’s meaning, it doesn’t follow that it can’t be useful for judging miracle claims. My aim is to show that regardless of whether it’s consistent with Hume’s intent, no mathematical interpretation has resulted in a useful formula, and there is little reason to expect that to change. I use the following variables to facilitate the discussion:



E An arbitrary event, which may or may not be a miracle.
T Testimony asserting that E has occurred.
m The probability that E has occurred.
f The probability that T is false.

Earman’s Formula

The maxim can be split in half at the colon, and both Earman and Millican treat each half separately. For the first half, Earman “starts from the fact that Hume describes a situation in which it is known that the witness has testified to the occurrence of a miraculous event. Thus, we should be working with probabilities conditioned on [the testimony], as well as on the evidence of experience and the other background knowledge” (Earman, 2000, p. 41). Accordingly, Earman sets f = P(~E|T) and m = P(E|T).[2] He then declares that “[t]o say that the falsehood of the testimony is more miraculous than the event it endeavors to establish is just to say that the former probability is smaller than the latter” (2000, p. 41). The formula for this is P(~E|T) < P(E|T), or equivalently, f < m.

Now P(E|T) is the same as the probability that the testimony is true, which means that m = 1 – f. This in turn means that f < m is equivalent to m > 0.5, as well as f < 0.5. Thus, the first part of the maxim says nothing more than that we should believe testimony when (and only when) it’s more likely to be true than not to be true. Earman regards this as “unexceptionable” and “conclude[s] that those commentators who have been impressed by the first half of Hume’s Maxim have been impressed not by content but by the nice ring of the language of Hume’s formulation” (Earman, 2000, p. 42).

As for the second half of the maxim, Earman says that it “appears to be nonsensical … [suggesting that] there is still a further destruction of arguments. Such talk appears to involve an illicit double counting: the weighing up of the countervailing factors … has already been done” (Earman, 2000, p. 43).[3] This accusation is not at all clear to me. Earman seems to be saying that Hume wants to calculate the relevant probabilities and then somehow adjust them by counting the “countervailing factors” a second time, although he doesn’t explain how Hume supposedly proposes to do this.

Earman considers the first half of the maxim to be an “unhelpful tautology” (Earman, 2000, p. 41), and I would certainly agree if I thought that his formula translated it correctly. His formula appears to assume that belief is binary because it says that we should believe the testimony if f < 0.5, otherwise we shouldn’t. To allow degrees of belief, it might be argued that implicit in the concept of probability is what I call the degree of confidence principle (DCP), which states (in this context) that there is a gray area around f = 0.5 where there is no warrant to believe either that T is true or that it’s false, and that we can be more confident that T is true as f gets closer to 0, or that it’s false as f gets closer to 1. Note that vagueness is an essential feature of DCP. It doesn’t say how big the gray area is, nor does it assert any specific mathematical relation between the value of f and the degree of confidence that is warranted by that value, only the vague notion that warrant increases as f gets farther from the gray area. We can be more confident that T is true when f = 0.05 than when f = 0.25, but DCP doesn’t say how much more confident.

Now if DCP is implicit in the concept of probability, then Earman’s formula can be construed as allowing degrees of belief. But in that case, the formula is superfluous because it says nothing that’s not already in DCP. Although I said that I would not address the question of what Hume was thinking, that doesn’t mean that I won’t weigh in on what he was not thinking. It’s often unnecessary to know what’s true in order to recognize what is very likely to be false. Someone with only a cursory knowledge of Joan of Arc can still be confident that she was not the wife of Noah from the Bible. Similarly, even a rudimentary understanding of Hume’s maxim is enough to be confident that Earman’s formula fails to capture it. Part I of Hume’s argument is devoted to showing that the unconditional probability of a miracle is extremely low. But if m is the conditional probability, then the unconditional probability is completely irrelevant, as is the claim that the event is a miracle, because the resulting maxim is a general principle whose truth is independent of these factors. It strains credulity to suppose that Hume intended to conclude Part I with a maxim for which the argument leading up to it is irrelevant, especially since he declares the maxim to be “[t]he plain consequence” of what precedes it.

Millican’s Formula

Earman issues this challenge: “Commentators who wish to credit Hume with some deep insight must point to some thesis which is both philosophically interesting and which Hume has made plausible. I don’t think that they will succeed” (Earman, 2000, p. 48). Millican proposes to meet this challenge with his formula, which, for the first part of the maxim, is mathematically quite simple. If testimony asserts that some event has occurred, then either the testimony is false and the event did not occur, which has a probability of f(1 – m), or the event did occur and the testimony is true, which has a probability of m(1 – f). We should believe the testimony if and only if (which I hereafter abbreviate as “iff”) the former probability is less than the latter, that is, iff f(1 – m) < m(1 – f), which simplifies to f < m (Millican, 2003, pp. 12-13).
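Millican’s simplification is easy to verify numerically. A minimal Python sketch (the grid of values is my own and purely illustrative) checks that the two inequalities agree for all probabilities strictly between 0 and 1:

```python
# Verify that f(1 - m) < m(1 - f) holds exactly when f < m,
# for probabilities strictly between 0 and 1.
grid = [i / 100 for i in range(1, 100)]  # 0.01, 0.02, ..., 0.99

for f in grid:
    for m in grid:
        assert (f * (1 - m) < m * (1 - f)) == (f < m), (f, m)

print("f(1 - m) < m(1 - f) iff f < m across the whole grid")
```

The algebra behind the check is one step: subtracting fm from both sides of f(1 – m) < m(1 – f) leaves f < m.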

This may appear to be the same as Earman’s formula, since both justify belief iff f < m. But, in fact, the two are completely different. Earman uses the conditional probability m = P(E|T), whereas Millican uses the unconditional (or prior) probability m = P(E), with m and f treated as independent, from which it would seem to follow that sometimes we should doubt testimony even when it’s very likely to be true, that is, when f is low. He gives two examples of this, one of which hypothesizes a debilitating genetic condition afflicting one person in a million (m = 0.000001) and a test for this condition that is 99.9% accurate (f = 0.001). Given a population of 55 million people (the estimated population of Britain at the time), only 55 would have the condition, all of whom would very likely get a positive test result.[4] If the test were given to the entire population, everyone else should get a negative result, but because one test in a thousand is wrong, about 55,000 would get a positive result. Thus, only 55 out of 55,055 people with a positive result have the condition.[5] Despite the test being 99.9% accurate, someone who gets a positive result has only one chance in 1,002 of having the condition (Millican, 2003, pp. 8-9).[6] Millican concludes that although f is very low, meaning that the testimony is very likely to be true, because f > m, we should not believe that someone with a positive result is likely to have the condition.
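Millican’s screening example can be reproduced in a few lines of Python. This is only a sketch using the paper’s variables m and f; the variable names and the use of expected (average) counts are my own framing:

```python
m = 1e-6   # unconditional probability of the condition (one in a million)
f = 0.001  # probability that a single test result is wrong (99.9% accurate)

p = m * (1 - f)  # proportion of the population who are true positives
q = f * (1 - m)  # proportion who are false positives

ppv = p / (p + q)  # proportion of positive results that are correct
print(round(1 / ppv))  # 1002: one chance in 1,002 that a positive is right

# The same ratio via expected counts in a population of 55 million:
population = 55_000_000
true_pos = population * p    # about 55 people (54.945 on average)
false_pos = population * q   # about 55,000 people (54,999.945 on average)
print(round(false_pos / true_pos))  # 1001 false positives per true positive
```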

Millican says that the second half of the maxim “does not involve any explicit calculation of the overall conditional probability … but only a comparison between f and m” (Millican, 2003, p. 16). In other words, there is no double counting of countervailing factors. Instead, Millican proposes a more complex explanation based on the assumption “that Hume has in mind a non-standard theory of probability” (Millican, 2003, p. 17). Even if this is mathematically valid, I maintain that the alternative given below is much simpler and is part of a maxim that is genuinely useful.

There are several problems with Millican’s formula. To begin with, his example appears to support his thesis, but only because he chose values to give that appearance. As a counterexample, suppose that 2% of the population is infected by a virus (m = 0.02), and that there is a test for this virus with 90% sensitivity and 99% specificity.[7] From this, the probability that a positive result is wrong is f ≈ 0.35.[8] Should we believe that someone with a positive result is unlikely to have the virus because f > m?[9] No, because although our confidence in the result should be tempered by the fact that there’s more than a one-third chance that it’s wrong, a positive test is still almost twice as likely to be right as wrong.
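The value f ≈ 0.35 in this counterexample follows directly from the stated prevalence, sensitivity, and specificity. A sketch of the calculation (the variable names are mine):

```python
m = 0.02            # prevalence: 2% of the population carries the virus
sensitivity = 0.90  # P(positive result | infected)
specificity = 0.99  # P(negative result | not infected)

true_pos = m * sensitivity               # 0.018 of the population
false_pos = (1 - m) * (1 - specificity)  # 0.0098 of the population

f = false_pos / (true_pos + false_pos)   # chance a positive result is wrong
print(round(f, 4))  # 0.3525, so f > m even though positives are mostly right
```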

But even with cherry-picked values, Millican’s example still fails to support his thesis, a fact that he appears to concede when he admits that “the probability of a positive test’s falsehood … is not one in a thousand but rather 1000/1001, a figure that comes from calculation in the light of everything known about the present case and its background circumstances (including the rarity of the disease) rather than just from the statistical characteristics of the test itself” (Millican, 2003, p. 9). What he doesn’t quite say is that it’s valid to set f = 0.001 only if we don’t know the result of the test. When the result is unknown, there’s a 99.9% chance that it’s correct despite the unreliability of positive results because nearly all results will be negative and an incorrect negative result is exceedingly rare.[10] We would be sorely mistaken to think that because f > m, therefore the result is likely to be wrong. It’s only when we learn that the result is positive that we know that there’s only one chance in 1,002 that it’s correct, but this knowledge means that now we must set f = 1 – 1/1002 ≈ 0.999002. But now the value of m is irrelevant because f alone tells us the result is very likely wrong. Consequently, this example does not give any reason to doubt testimony that’s very likely to be true. This point is especially important because Hume’s maxim is concerned only with false positives[11], which is why T is defined to be testimony that E has occurred and does not include testimony that E has not occurred.
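The shift in f described above can be made concrete by computing both probabilities for the genetic-test numbers. The sketch below (my own framing) contrasts the error probability before and after the result is known to be positive:

```python
m = 1e-6   # unconditional probability of the condition
f = 0.001  # error rate of a single test result

# With the result still unknown, the probability that it is wrong is just
# 0.001: nearly all results are negative, and false negatives are rare.
p_wrong_unknown = f

# Once the result is known to be positive, we must condition on that fact:
ppv = m * (1 - f) / (m * (1 - f) + f * (1 - m))
p_wrong_given_positive = 1 - ppv

print(p_wrong_unknown)                   # 0.001
print(round(p_wrong_given_positive, 6))  # 0.999002
```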

An equally serious problem with Millican’s example is that the testimony is not from human witnesses, but from a physical test. But as Parsons makes clear, Hume’s “argument concerns only evidence from one or more witnesses who claim to have observed miraculous occurrences” (Parsons, 2019). Part II of Hume’s argument is concerned with cognitive blind spots that make human testimony fallible (and especially untrustworthy in the case of miracle claims), not with the sort of random errors that occur with a physical test for a medical condition. In fact, this example is largely irrelevant to the issue of miracles because the existence of true positives or false negatives is not in question in the example, whereas it’s the central question for miracles.

Millican’s formula assumes that m and f are independent, yet he admits that “the apparent universal presumption of probabilistic independence is highly implausible” (Millican, 2003, p. 14). Earman is at the opposite extreme because he makes m the conditional probability, and thus the inverse of f, so that any value assigned to one automatically determines the other. The truth is somewhere in the middle, where f is partially dependent on m, but not completely determined by it.

How m Affects f

If we believe T, it’s usually because we believe that E is the best explanation for T. For example, suppose Sophia tells you, “I just saw your mother at the supermarket.” If you believe her, it’s because you believe that the best explanation for why she said this is that she really did see your mother at the supermarket. If you don’t believe her, then you must think that there’s some other reason for her statement, whether as an honest mistake or as a lie.

In this example, the two relevant probabilities are m, the unconditional probability that Sophia would see your mother at the supermarket, and f, the probability that she would falsely claim to have seen your mother. If you consider m to be reasonably high and f to be low, then presumably you believe her. However, if your mother lives a thousand miles away, then m is significantly lower. You may well suspect that Sophia didn’t really see your mother, but rather saw someone who looked like her. Even if you consider Sophia to be generally honest, the lower value of m still increases f in this case, and without more information you may well hesitate to accept her claim.

Now if your mother died ten years ago, then m is very nearly 0. No matter how honest or reliable you consider Sophia to be, unless you believe in ghosts or resurrections, or are somehow unaware of or doubt your mother’s death, it’s virtually certain that you would insist that Sophia is mistaken. In other words, you would surely consider f to be very nearly 1.

As m decreases in these variations of the scenario, f increases, but f is not determined solely by m, which would be the case if m were the conditional probability. In contrast, the unconditional probability of the encounter is just one factor to consider in deciding the truth of the testimony. Sophia’s general reliability as a witness independently of this one event is likely a more important factor when m is large. But m increases in importance as it gets smaller until finally, when it gets small enough, it far outweighs all other factors.

Although the independence of f and m is implausible, a more fundamental problem is that it’s untenable to base belief on whether f < m because f alone suffices to guide belief, and then only in accordance with DCP. This is not to say that m is unimportant. It is a factor in determining f, and decreasing m means increasing f, but the size of f relative to m is irrelevant for justifying belief.[12]

Parsons cites Hume’s admission “that there may possibly be miracles, or violations of the usual course of nature, of such a kind as to admit of proof from human testimony” (Hume, 1748/1977, p. 765). Hume gives a hypothetical example in which consistent testimony from around the globe tells of eight days of darkness, and he concludes that we should accept such “extensive and uniform” testimony as true. But he immediately follows this with another example in which Queen Elizabeth dies, and after being interred for a month, returns and reigns for another three years. Hume says that he “should not have the least inclination to believe so miraculous an event” (Hume, 1748/1977, p. 766). Why believe the testimony from the first example, but not from the second one?

While the darkness in the first example may be a “violation of the usual course of nature,” it’s not a miracle in Hume’s sense of “a transgression of a law of nature” (Hume, 1748/1977, p. 756). After all, it’s not hard to imagine some natural phenomenon that would explain eight days of darkness around the world.[13] Now would we accept the testimony in this example because (and only because) f < m? Even if we could calculate both probabilities accurately enough to know whether f < m, if we believe f is small even after accounting for a small value for m, isn’t that reason enough to accept the testimony, regardless of how it compares to m?

Now the second example does clearly involve a violation of natural law. In this case, m (the probability that the event occurred) is very small, much smaller than in the first example, and f (the probability that the testimony for the event is false) is very large. And though there’s no possibility that f < m, the large value of f is enough by itself to make us doubt the testimony. Perhaps the real point is that in the first example, m is not small enough to prevent f from being small as well, and it doesn’t matter which is smaller, only that f is small. But in the second example, m is so small that whatever reasons there are for believing the testimony are vastly outweighed by the extreme improbability of the event, and this forces f to be large.

An Alternative Approach

All this leads me to think that it’s a mistake to try to translate Hume’s maxim into the language of probability theory. I understand the allure of this idea to anyone versed in that language, but the only real justification that I can find for it is that it seems reasonable to interpret the word “miraculous” in the maxim to mean “improbable.” It’s only a small leap from this to probability theory.

I am unaware of any attempt to reduce the first part of Hume’s maxim to a formula in the language of probability theory that does not reduce to accepting the testimony iff f < m. Of course, there is a remarkable diversity of opinions on how f and m are to be calculated, and even what exactly they represent. Interestingly, they all seem to take it for granted that f is not sufficient on its own and must be compared to m.[14] And I would agree that if the phrase “more miraculous” does mean “more improbable” in a mathematical sense, then this comparison seems unavoidable.

Rather than trying to reduce the maxim to a formula, a better approach is to think of a scale made with two plates hanging equally distant from a fulcrum. (See the illustration in Millican, 2011, p. 159.) The reasons for believing the testimony are put on one plate, and the reasons against believing it are put on the other. Which side we believe will depend on which set of reasons outweighs the other. Millican himself lends support for this when he says, “Hume’s own route to this result was not, of course, so mathematical: he seems to have viewed the situation as involving a relatively simple trial of strength between the inductive evidence for the testimony and the inductive evidence for the relevant ‘law of nature'” (Millican, 2013, p. 6).

Now what are the reasons against believing the testimony? If a miracle violates a natural law, and “a firm and unalterable experience has established these laws” (Hume, 1748/1977, p. 756), then this firm and unalterable experience counts against the testimony. Advancements in science since Hume’s time have vastly increased the strength of this argument. Natural laws are established by much more than consistent experience. A natural law does not stand alone. Rather, science rigorously binds it together with other laws into a coherent whole where each law reinforces the others. For example, we now know that the laws governing the subatomic components of atoms underlie the laws governing the chemical properties of the elements, which in turn underlie the laws governing biological processes. Furthermore, natural laws are more than just vague assertions. They allow us to make mathematical predictions, often with significant precision. And these predictions are consistently verified by physical evidence, which makes it possible to develop extraordinarily complex technologies. The monumental and ever-expanding edifice of science rests on one side of the scale, while human testimony sits on the other. And it would be “miraculous” (although only in a metaphorical sense) if the latter were to outweigh the former.

As for the second half of the maxim, if there could be a case where the testimony was the greater weight (and this seems extremely unlikely), then our confidence in it depends on how much greater its weight is. If there are a thousand units (whatever a unit of persuasion might be) in favor of the testimony and 999 against, then 99.9% of the reasons for believing the testimony are counterbalanced by the reasons against, and only a single unit remains as the deciding factor. Our confidence in the testimony should be tempered by the relatively small size of this deciding factor. Hume uses the phrase “mutual destruction of arguments” rather than “counterbalancing,” but both strike me as equivalent corollaries to DCP. The size of the deciding factor is determined by deducting the lesser weight from the greater one. Of course, it’s not the absolute value of the size of the deciding factor that matters, but rather its size relative to the entire weight. The smaller its relative size, the less confident we should be.
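The arithmetic of this deduction can be spelled out in a toy calculation (the unit of persuasion is, as the text says, left unspecified; the numbers 1,000 and 999 are the text’s own illustration):

```python
units_for = 1000     # units of persuasion in favor of the testimony
units_against = 999  # units against it

deciding_factor = units_for - units_against  # 1 unit decides the matter
counterbalanced = units_against / units_for  # 0.999: 99.9% cancelled out
relative_size = deciding_factor / units_for  # 0.001 of the total weight

print(deciding_factor, counterbalanced, relative_size)  # 1 0.999 0.001
```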

Of course, in most—if not all—cases, the greater weight will be against the testimony. Moreover, the weight of the scientific evidence will dwarf whatever plausibility the testimony might have such that the deciding factor itself will be enormous. The “mutual destruction of arguments” will demolish the reasons for believing the testimony, while barely making a dent in the reasons against believing it. Consequently, we can be highly confident that in such cases no miracle has occurred.

I present the following paraphrase of Hume’s maxim based on this approach:

No testimony is sufficient to establish a miracle unless the reasons for believing it are so compelling as to outweigh the extreme improbability of the miracle, but because a miracle is a violation of natural law, the testimony must be more compelling than the science that has established the laws that the miracle has violated. Moreover, our confidence one way or the other depends on the degree to which the reasons on the one side outweigh those on the other.

I claim only that, unlike the attempts to translate the maxim into a mathematical formula, this paraphrase is a useful standard for judging miracle claims. I leave it to others to decide how accurately it reflects Hume’s intent.

There are two types of situation where we expect something improbable to happen. Both are illustrated by lotteries. One type of situation occurs when the number of events is large enough to offset the improbability of a single event. For example, although the probability that a lottery ticket is a winner is low, it’s not improbable that one ticket out of a sufficiently large number of tickets is a winner. A miracle, being a singular event, is clearly not in this category. The second type occurs when an event has a great many possible outcomes, each of which is equally or similarly improbable, but one of which must occur. Thus, whatever winning numbers are drawn for a lottery, it was improbable that they would be drawn, yet some set of numbers must be drawn. Again, miracles do not fall into this category because the possible outcomes where no violation of natural law has occurred are enormously more probable than those outcomes with such a violation. Consequently, there is no reason to believe that despite their improbability, miracles might still be expected to occur.
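Both lottery situations can be illustrated with a quick calculation (the odds and ticket counts below are invented for illustration):

```python
from math import comb

# Type 1: enough independent trials make one success likely.
p_win = 1 / 1_000_000  # chance a single ticket wins (invented odds)
n_tickets = 5_000_000  # tickets sold
p_some_winner = 1 - (1 - p_win) ** n_tickets
print(round(p_some_winner, 3))  # 0.993: a winner somewhere is very likely

# Type 2: every specific outcome is improbable, yet one must occur.
p_any_draw = 1 / comb(49, 6)  # any particular 6-of-49 lottery draw
print(comb(49, 6))            # 13983816 equally improbable outcomes
```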

Although William Vanderburgh’s primary concern is the proper interpretation of Hume, he also considers the question of whether a nonmathematical approach is the best way to handle the question of miracles. He presents several arguments to support an affirmative answer (Vanderburgh, 2019, pp. 113-121), which, apart from two fallacious reasons taken from Susan Haack[15] (on p. 120), are persuasive individually, and collectively they constitute a cogent case that we should expect mathematical formulas to fail. I would like to think that this essay adds to that by showing that they actually do fail.

It is worth asking what it would take to make a mathematical formula useful. Obviously, a useful formula would have to be descriptive of reality, unlike Millican’s formula. Moreover, it would have to provide information beyond what we already know, unlike Earman’s formula. In addition, we would have to be able to obtain reasonably accurate values for the variables in the formula. Although it would be arrogant to say that this can never be done, I think it’s fair to say that no one has yet proposed a formula without the defects found in those proposed by Earman and Millican, nor has anyone demonstrated a way to assign anything close to a precise numerical value for the probability that testimony is true, except when testimony asserts something so extraordinarily implausible that the probability of the testimony being true is very close to zero. But in that case, what need is there for a formula?

It might be argued that the defining property of a miracle is not that it violates natural law, but that it has a supernatural agent as its cause. People who believe in the supernatural may well deny that miracles are improbable at all, at least not the miracles that they believe are caused by a powerful deity. But if science is our guide, as it must be if we are to be among the “wise and learned,” then ipso facto a miracle violates natural law. The total failure of supernatural explanations to gain any traction in science means that such explanations are inconsistent with natural law, and thus inherently improbable.

Conclusion

Earman’s formula for the first part of Hume’s maxim represents a valid principle if DCP is assumed, namely that we should believe testimony iff it’s sufficiently more likely to be true than not to be true. This is a simple consequence of using the conditional probability, which is the same as the probability that the testimony is true. However, this principle has no specific connection to miracles or the argument preceding the maxim. Even worse, it is ultimately vacuous because it’s invalid without DCP, yet adds nothing to it. Earman’s contention that the second part of the maxim is nonsense is also troubling. It strikes me as a complete misreading of the text, and seems little more than nonsense itself.

Millican’s formula for the first part matches the wording of the maxim, but only if “miraculous” is interpreted to mean “improbable” in a mathematical sense. However, the principle it represents isn’t valid because it would mean that there are times when we should not believe testimony that is very likely true, which is not reasonable. Millican’s explanation for the second part of the maxim strikes me as unnecessarily complex, and if it’s useful at all, it’s much less so than a nonmathematical alternative.

A consequence of Hume’s phrasing is that any attempt to translate the maxim into a formula seems doomed to fall into the trap of basing belief on whether f < m. It’s surprising how seductively plausible this formula seems. I admit that when I first read Millican, it seemed eminently reasonable. It wasn’t until I tried to apply it to real cases that I discovered that it doesn’t work. It is perhaps understandable that someone like Earman, who wants to denigrate Hume, might believe that his maxim is invalid, or at best useless. But it’s disconcerting that so many who look favorably on Hume have so easily accepted an invalid or useless formula as equivalent to the maxim. The irony is that if Hume had never written his maxim, it’s doubtful that anyone would have ever thought that a formula would be useful for judging miracle claims.

I propose that a more useful version of the maxim ignores the language of probability theory. Rather than comparing the probability of the falsehood of the testimony to the probability of the miracle, the persuasiveness of the testimony is compared with the persuasiveness of the science that has established the laws that the miracle has violated. The first part of the maxim then says that we should believe that the miracle occurred only when the witnesses testifying to its occurrence are so compelling that we should believe that the laws established over years, decades, or even centuries through the concerted efforts of numerous scientists following rigorous procedures, and which have hitherto been confirmed countless times without exception, must now be viewed as being in error. The second part is simply the observation that the more evenly matched the opposing reasons are, the less confident we can be in whichever side is stronger, so that even if the testimony were somehow persuasive enough to make us doubt the science, our confidence in that testimony would still be diminished by the strength of the scientific evidence against it.

Notes

[1] A major part of Earman’s thesis is the claim that Hume’s argument is largely derived from his contemporaries. I take no position on this and leave it to Hume scholars to debate. But Earman has failed to persuade me that those parts of Hume’s argument that he deems original are “almost wholly without merit” (Earman, 2000, p. 1).

[2] Earman uses an entirely different set of symbols in his formulas, as shown in the following table:

Earman               This Paper   Meaning
M                    E            The event, which may or may not be a miracle
t(M)                 T            The testimony asserting the event has occurred
E                    —            The evidence of experience
K                    —            Background knowledge
Pr                   P            The probability function
¬                    ~            Negation, read as "not"
/                    |            The symbol preceding the condition, read as "given"
Pr(M/t(M)&E&K)       P(E|T)       The probability that the event has occurred, given the testimony (m)
Pr(¬M/t(M)&E&K)      P(~E|T)      The probability that the event did not occur, given the testimony (f)

The probabilities discussed in this paper are implicitly conditioned on experience and background knowledge wherever this is appropriate, so these are not explicitly given in the formulas. I trust that the reader, after seeing how Earman’s formulas in the last two rows are translated for this paper, will agree that this enhances readability.

[3] Emphasis here, as in any quotation in this paper, is in the original.

[4] The probability that all 55 people would get a positive result is 0.999^55 ≈ 0.946.

[5] Because of random variation, it’s very unlikely that, of the people without the condition, exactly 55,000 would get a positive result. But it’s very likely that the number would be close to 55,000. Similarly, it’s unlikely that there will be exactly 55 people with the condition, but the actual number is likely to be close to that number.

[6] Millican gives an incorrect value here of one chance in 1,001, but this is a rounding error resulting from his choice of population size. He gives the correct value in subsequent papers (Millican, 2011, p. 166; Millican, 2013, p. 7) by assuming a population of a billion, which eliminates rounding. The correct value can be obtained without assuming a population size by substituting the values of m and f to calculate the proportion of true positives p = m(1 – f), the proportion of false positives q = f(1 – m), and then the ratio p / (p + q). This ratio, the proportion of positive results that are true positives, is known as the positive predictive value.
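The note's calculation of the positive predictive value can be sketched as a small Python function. The numeric values in the usage line are purely illustrative assumptions, not Millican's figures.

```python
def positive_predictive_value(m, f):
    """Proportion of positive results that are true positives.

    m: probability that the condition is present (the prior)
    f: probability that an individual result is false
    """
    p = m * (1 - f)  # proportion of true positives
    q = f * (1 - m)  # proportion of false positives
    return p / (p + q)

# Hypothetical values for illustration only: m = 0.001, f = 0.01
print(positive_predictive_value(0.001, 0.01))
```

As the note observes, this ratio depends only on m and f, so no population size need be assumed and no rounding error is introduced.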

[7] The sensitivity is defined to be the proportion of people with the condition who test positive, which is the ratio of true positives to the sum of true positives plus false negatives. The specificity is the proportion of people without the condition who test negative, which is the ratio of true negatives to the sum of true negatives plus false positives. The accuracy, which is the ratio of correct results (true positives plus true negatives) to all results, is the average of the sensitivity and the specificity, weighted by the prevalence and its complement, respectively. If any two of these are equal, as in Millican's example, then so is the third. However, it's highly unlikely that the sensitivity and specificity would be equal, as Millican acknowledges (Millican, 2011, p. 184).

[8] If the test is given to 10,000 people, there would be 10,000 × 0.02 × 0.9 = 180 true positives and 10,000 × (1 – 0.02) × (1 – 0.99) = 98 false positives, giving a false discovery rate of f = 98 ÷ (180 + 98) ≈ 0.35. The false discovery rate is the proportion of positive results that are incorrect. It can be calculated either by subtracting the positive predictive value from 1, or (as done here) by computing the ratio of false positives to the sum of true positives plus false positives.
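The arithmetic in this note can be checked directly. A minimal Python sketch using the note's own figures (10,000 people tested, prevalence 0.02, sensitivity 0.9, specificity 0.99):

```python
# Figures from the note: 10,000 people tested for a condition with
# prevalence 0.02, using a test with sensitivity 0.9 and specificity 0.99.
population = 10_000
prevalence, sensitivity, specificity = 0.02, 0.9, 0.99

true_positives = population * prevalence * sensitivity                # 180
false_positives = population * (1 - prevalence) * (1 - specificity)   # 98
fdr = false_positives / (true_positives + false_positives)

print(round(true_positives), round(false_positives), round(fdr, 2))   # 180 98 0.35
```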

[9] If, like Millican, we were to calculate f by subtracting the accuracy from 1, we would get f = 0.0118. This would mean that f < m, which might seem to support Millican’s thesis. However, this merely makes f the “inaccuracy”—that is, the probability of an incorrect result when it’s not known whether the result is positive or negative. In other words, regardless of how it compares to m, f tells how unreliable the test is. There does not appear to be any value in knowing whether f < m. Moreover, what we really want f to be is the false discovery rate.

[10] There is a 99.8999002% chance of a negative result with barely more than one chance in a billion that a negative result is wrong.

[11] Consider, for example, Matthew 28:11-15, which tells how the Roman guards said to have witnessed the resurrection of Jesus were bribed to say the disciples stole the body from the tomb. If a miracle really occurred, then the guards’ denial of the Resurrection is a false negative. But we can know this only if we know that the Gospel account is not a false positive. Testimony against a miracle cannot be known to be a false negative unless it’s known that miracles occur, but this is the very thing in question.

[12] Earlier I presented the example of a virus to show how Millican picked values designed to make his formula appear to work. Using that example, suppose that the sensitivity and specificity are held constant, while m is decreased to 0.01, then 0.003, and then 0.0001; then the value of f increases to approximately 0.52, 0.79, and 0.99, respectively. In all three cases, a positive result is more likely to be wrong than right. But our confidence in that judgment should go from little, to moderate, to high as m decreases. Note that f alone, and not its size relative to m, is what justifies this. Because the specificity is very unlikely to be as close to 1 as the prevalence (m) is close to 0, low values of m are very likely to result in high values for f.
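The three values of f cited here can be reproduced with a short Python sketch, holding the virus example's sensitivity (0.9) and specificity (0.99) fixed while the prevalence m shrinks:

```python
# Sensitivity and specificity held constant, as in the virus example,
# while the prevalence m decreases. f is the false discovery rate:
# the proportion of positive results that are incorrect.
sensitivity, specificity = 0.9, 0.99

for m in (0.01, 0.003, 0.0001):
    true_pos = m * sensitivity
    false_pos = (1 - m) * (1 - specificity)
    f = false_pos / (true_pos + false_pos)
    print(m, round(f, 2))
# Prints:
# 0.01 0.52
# 0.003 0.79
# 0.0001 0.99
```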

[13] For example, if a gaseous cloud passed through the solar system such that some part of it came between the Earth and the Sun, and that part was dense enough to block the Sun’s light and large enough to take eight days to go by, that would cause eight days of darkness everywhere on Earth.

[14] If m is assumed to be the conditional probability, then f < 0.5 iff f < m. Thus, the comparison is implicit under this assumption even though f alone is sufficient.

[15] Haack’s two reasons are part of an argument that mathematical probability is insufficient for a theory of warrant. I do not dispute that thesis, only these two reasons, which try (but fail) to show some deficiency in mathematical probability. Her first reason states that the probability of a proposition plus the probability of its negation must add up to 1, but when there is little or no evidence either way, neither the proposition nor its negation is warranted to any degree. The fallacy here is that warrant is at its lowest when the two probabilities are both 0.5, which does add up to 1. Her second reason states that the probability of having multiple pieces of evidence cannot exceed the probability of the individual pieces and is very likely lower, but the combined evidence may provide greater warrant than any component alone. The fallacy here is that this confuses the probability of the evidence, P(e), with the probability of a hypothesis given the evidence, P(h|e). After all, evidence doesn’t provide warrant for itself, but for (or against) some hypothesis. For example, suppose I toss three dice—one red, one blue, and one green—and they roll under the sofa. Let H be the hypothesis that I’ve rolled a number greater than 15. Then P(H) = 5/108. Let R be “the number showing on the red die is 6,” and let B be “the number showing on the blue die is 6.” Then P(R) = P(B) = 1/6 and P(R&B) = 1/36. If I find the red or the blue die and see that 6 is showing, that constitutes evidence for H because P(H|R) = P(H|B) = 1/6 > P(H). But if I find both dice with 6 showing, then P(H|R&B) = 1/2. In other words, the probability of having both pieces of evidence is less than the probability of having either one alone, but the probability of the hypothesis is greater with both pieces than with either one alone.

References

Earman, John. (2000). Hume’s Abject Failure: The Argument against Miracles. Oxford, UK: Oxford University Press.

Hume, David. (1977). An Enquiry Concerning Human Understanding, ed. Steven M. Cahn. Indianapolis, IN: Hackett Publishing Company. (Originally published 1748).

Millican, Peter. (2003, July-August). “Hume, Miracles, and Probabilities: Meeting Earman’s Challenge.” Talk presented at the Hume Conference at the University of Nevada, Las Vegas, Nevada. <https://davidhume.org/scholarship/papers/millican/2003%20Hume%20Miracles%20Probabilities.pdf>.

Millican, Peter. (2011). “Twenty Questions about Hume’s ‘Of Miracles’.” In Philosophy and Religion (pp. 151-192), ed. Antony O’Hear (Cambridge, UK: Cambridge University Press).

Millican, Peter. (2013). “Earman on Hume on Miracles.” In Debates in Modern Philosophy (pp. 271-283), ed. Stewart Duncan and Antonia Lolordo (London, UK: Routledge).

Parsons, Keith. (2019). “Hume’s Beautiful Argument.” The Secular Web. <https://infidels.org/library/modern/keith_parsons/hume-on-miracles.html>.

Vanderburgh, William L. (2019). David Hume on Miracles, Evidence, and Probability. Lanham, MD: Lexington Books.

Copyright ©2021 Stephen Nygaard. The electronic version is copyright ©2021 by Internet Infidels, Inc. with the written permission of Stephen Nygaard. All rights reserved.