
Knowledge, Expert Opinion and the Public Sphere: Why We Teach the Science We Do

I am not a competent mathematician. I have forgotten all of my university-level calculus, and I barely remember my algebra from high school. Still, I am comfortable in saying, for example, that I know the value of pi rounded to four decimal places is 3.1416. (I admit it, though–I had to look it up just to be sure I remembered it right.) I am comfortable saying this even though I never sat down to calculate it.

I also feel that I know this fact well enough to deny a claim that, say, the value of pi is 3. Were someone to make that claim, I do not believe I would have to study the issue to dispose of it; it would be enough toward a justified rejection of that claim simply to say that the consensus view within the mathematics community is that pi, so rounded, is 3.1416.

Almost every scientific “fact” that we learned in our lives was the result of observation, experiment, and analysis by some expert or group of experts. Tacitly or not, we decide to rely on what we have found through experience to be reliable, truth-discovering and truth-conveying procedures: procedures that flow from relevant expert praxis, to mainstream expert consensus, to peer-reviewed publishing, to mainstream purveyors of knowledge such as textbooks, periodicals, and popular science tomes, and thence on to the end users. Instead of subjecting each fact adduced in these sources to a battery of our own tests, we look to indicia of reliable expertise and competence–credentials, peer review, publicized professional reputation, and such–and then decide on that basis whether the purported fact counts as a piece of knowledge.

This sort of reliance on expert opinion is what is called a heuristic–a shortcut we use to build our base of reliable knowledge without ourselves having to replicate, regenerate, reverify, or reanalyze all of the possibly relevant data.

Heuristics are an epistemic compromise between efficiency and reliability. Take the area of science. Suppose there are two alternative options in acquiring scientific knowledge:

(1) relying only on the “expert” heuristic; and

(2) relying solely on independent verification.

Start with the expert heuristic. Assume the published “facts” flowing from expert consensus are wrong 1% of the time. Assume also that a competent scientist could learn 1000 facts per year. At the end of one year, then, the scientist will know 990 true facts and believe 10 false claims.

Now consider the alternative–rejecting the expert heuristic and relying entirely on independent verification. In so rejecting the expert heuristic, our hypothetical scientist commits herself, for each fact claimed, to going out into the field or laboratory, conducting the relevant observation or experiment, doing an independent analysis, and then drawing her own conclusions. As you might imagine, this would drastically impede the rate of that scientist’s fact-gathering: for every fact that would take, perhaps, a minute to read, understand, and commit to memory under the expert heuristic, she now has to take additional time to imagine and design a test/observation protocol, and then implement that protocol. Let’s say that, when you work it out, the rate of fact acquisition is now something like 100 facts per year (probably an unrealistically high rate, all things considered). So rejecting the expert heuristic clearly impedes the rate of acquisition.

What about the rate of error? Would our independent scientist here have made any gains by her diligent efforts at independent verification? We should doubt it. For one thing, expert consensus is generated over a much longer period of time than a year; for another, it is honed by a multiplicity of experts checking, cross-checking, reanalyzing and arguing back and forth–thus affording rich opportunities to uproot error. It therefore seems likely that our scientist

(1) would introduce error at the average rate or greater (having less time to generate relevant data), and

(2) would conserve most of those errors (because there are no external checks on her findings).

If anything, then, the lone scientist’s error rate should be higher than that of the expert community at large.

But let’s be charitable and nonetheless assume our independent scientist somehow manages to match the error rate enjoyed by the consensus. On our assumptions, then, the independent scientist knows 99 true facts, and believes 1 false one. The score, then:

Expert heuristic: true beliefs, 990; false beliefs, 10.
Independent verification: true beliefs, 99; false beliefs, 1.

So our lone wolf made fewer errors in absolute numbers, but only at the expense of being far less productive. More might be said about epistemologies of scale, here, but let’s leave it at that.
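For readers who want the arithmetic spelled out, the scoreboard above can be sketched in a few lines of Python. The rates here are the essay’s illustrative assumptions (a 1% error rate in published consensus; 1000 vs. 100 facts per year), not measured figures:

```python
# Illustrative assumptions from the text, not empirical figures.
ERROR_RATE = 0.01  # assume published expert consensus is wrong 1% of the time


def scoreboard(facts_per_year, error_rate=ERROR_RATE):
    """Split a year's worth of acquired 'facts' into true and false beliefs."""
    false_beliefs = round(facts_per_year * error_rate)
    true_beliefs = facts_per_year - false_beliefs
    return true_beliefs, false_beliefs


# Expert heuristic: ~1000 facts/year. Independent verification: ~100/year,
# charitably assumed to match the consensus error rate.
print(scoreboard(1000))  # (990, 10)
print(scoreboard(100))   # (99, 1)
```

The point of the sketch is that, at equal error *rates*, the independent verifier’s lower absolute error count is bought entirely with a tenfold drop in productivity.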

So much for a competent, independent-minded scientist, then. What about a layperson? As to error rate, it seems clear enough that a layperson who ignores the expert consensus risks an error rate of chance. (Without an understanding of the competing arguments, what else could be expected?) Whatever a layperson’s rate of fact acquisition, then, she is much better off using the expert heuristic. If a competent scientist cannot do better than expert consensus, a layperson certainly will not.

Much like Newton, then, we learn as much as we do only by standing on the shoulders of experts.

Heuristics create only a presumption, of course. Because they are only rules of thumb, they do not foreclose warranted, alternative, epistemic choices–that is, they do not prevent us from adopting alternative beliefs that are based on competent knowledge. If we know enough about the relevant domain of knowledge, we could rationally reject the expert consensus. This is often what scientists, historians, and others in the advance guard must do. A sea change comes only after turbulent dissent bubbles up high enough to breach the levees that channel consensus. (Forgive the mixed fresh- and saltwater metaphors.)

Those on the vanguard of scientific discovery, however, have some qualifications you and I do not. That being the case, the discussion about error rates a few paragraphs ago should make clear why your average layperson’s selective skepticism is misbegotten: beliefs that differ from the heuristic “output” (i.e., the decision one comes to in applying the rule) can only be justifiably accepted by appeal to a comprehending comparison of the competing arguments–otherwise you get only a chance rate of knowing the truth; but then such comprehension typically requires significant expertise, and so is inaccessible to the layperson by definition.

That is not to say there is anything perverse in being attracted to novel theories. For my part, I appreciate–and even believe–many theories that do not enjoy contemporary expert consensus. But it is one thing to take a fancy to a new theory, or even to believe that it is true, or even to proclaim that belief to the world; it is another thing to insist that one’s favored theory (which, again, does not enjoy expert consensus) should have a claim on the minds of others.

Which brings us to the issue of expert consensus and public schooling. In public schools, we teach–and expect kids to learn–the consensus view in economics, the consensus view in mathematics, the consensus view in physics, in chemistry, engineering, etc. Why? For the same reasons that underwrite our general commitment to the expert heuristic: variance from expert consensus is fraught with error.

To see how deference to the expert heuristic works out in the practice of designing a curriculum, consider a question of history: Did Jesus exist? The clear, overwhelming consensus among contemporary, mainstream historians is yes. Among historians, Jesus’ existence is so well accepted that it almost has the status of a presupposition. Yet there is a legitimate, competing historical theory; call it “Jesus skepticism.” On this theory, Jesus was a mythical character in a mythical text; he was not a historical figure. Indeed, he was not even based on a historical figure. [1] I myself am “taken” with this historical theory. So too are many others. But who cares? Jesus skepticism is a fringe theory; therefore, it has no place, at present, in history class. As a polity, we are pedagogically committed to the expert heuristic; thus, in history class, we teach the deliverances of the consensus of contemporary, mainstream historians. Given that commitment, there is no reason that Jesus skepticism ought to be entertained. [2] And so it is not.

Take another example, recalling our shared knowledge (or shared ignorance) of mathematics, discussed earlier. The reasons stated above also explain why we should reject the teaching of the “Biblical-mathematical” view that pi = 3. [3] Very simply, because it is not the consensus view within mathematics. Full stop. Granted, a brilliant mathematician might be able to argue ingeniously and legitimately that, in fact, pi = 3. But even then, by our prior pedagogical commitment, we still would not be justified in adopting the brilliant mathematician’s finding until that finding emerges as an accepted view within the consensus of mathematical experts. This is something I do not think even a significant minority of Christians would deny.

Yet for those same Christians, and for perhaps a fair majority of Americans, intelligent design theory (a theory that even its own proponents admit lies outside current scientific consensus) should have a place in science class. This opinion trades on the perhaps appealing idea that it would be fair to give all scientific theories time in the classroom. But think about it. Does fairness really require that the theory ‘pi = 3’ get a “fair” hearing in math class? Or that the theory ‘Jesus was a mythical character’ get a “fair” hearing in history class? If not, then fairness surely cannot require us to spend precious time in science class discussing intelligent design; that theory is, quite simply, every bit as much at variance with the relevant expert consensus as the other two. And that is all that need be said. Again, laypeople, by definition, lack the expertise required to adjudge the competing arguments, and they therefore must view the scientific consensus as more probably true than the fringe view in any given case; “fairness” to fringe epistemic communities is simply not a relevant consideration in designing a sound science curriculum. (To paraphrase Clint Eastwood’s William Munny in Unforgiven, fairness ain’t got nothin’ to do with it.)

It might be easy to draw a stronger conclusion from all of this than is warranted. So don’t! The expert heuristic does not “prove” anything about evolution (or about pi, or about Jesus). It is simply a tool enabling those lacking the relevant scientific expertise to make reasonably sound judgments as to what counts as scientific knowledge. As such, it is the tool that we use when we need to decide what should be taught in school. It is a humble tool, but one that works well–and for reasons we can all understand.


[1] What I call “Jesus skepticism” should not be confused with a standard liberal view that Jesus was a real person subsequently mythologized, analogous to King Arthur. “Jesus skepticism” is the much more radical view that Jesus was a mythical character who was later historicized (to coin a term). The best online source of argument that I have read for this position is Earl Doherty’s The Jesus Puzzle.

[2] With an objective error rate of chance, I have no claim on the minds of my fellow citizens in making such a request. (A fortiori, I have no claim on the minds of their children.)

[3] The “biblical value of pi” is derived from I Kings 7:23:

And he made a molten sea, ten cubits from the one brim to the other: it was round all about, and his height was five cubits: and a line of thirty cubits did compass it about. (KJV.)

The relevant calculation (circumference divided by diameter: 30 cubits ÷ 10 cubits), then, gives us that pi = 3. See, e.g., “A History of Pi.”