2024-11-16

Jesus Mythicism and Historical Knowledge, Part 1: Historical Facts and Probability


by Neil Godfrey

It’s been a long while since I wrote about Jesus mythicism. I hope what I write now will present a slightly different and useful perspective.

Should not Christian apologists be thrilled with Richard Carrier’s widely known conclusion and welcome it:

In my estimation the odds Jesus existed are less than 1 in 12,000. . . .

There is only about a 0% to 33% chance Jesus existed.

(On the Historicity of Jesus, 600, 607)

Doesn’t that indicate that Jesus was a truly exceptional figure according to the best conclusions of the atheist scholar? Don’t believing Christians want Jesus to be unique, to be different from anyone else, to bring about an unlikely event by normal human standards? A 1 in 12,000 figure is surely bringing Jesus down too close to normality, isn’t it? Shouldn’t Jesus be a unique figure in history? So if historical tools as understood and used by Richard Carrier conclude that Jesus is not to be expected in the annals of normal human history and left no record comparable to the records of other mortals for historians to ponder, should not apologists take comfort from such findings?

I want to address what appears to me to be a widespread misconception about historical knowledge across various social media platforms and in some published works where this question is discussed.

Too often I hear that historians can never be absolutely certain about anything in the past and that they always, of necessity, can only speak of “what probably happened”. (When I speak of historians I have in mind the main body of the historical guild in history departments around the world. I am not talking about biblical scholars and theologians because their methods are very often quite different.)

So let’s begin with Part 1 of the question of probability in historical research. Richard Carrier is widely known for reducing the entire question of Jesus’ existence to a matter of probabilities. I agree with much of Carrier’s approach but I also disagree on some major points. A fundamental point on which I disagree with Carrier is the claim that the most a historian can say about any historical event is that it is “probably” true. Carrier writes:

All claims have a nonzero epistemic probability of being true, no matter how absurd they may be (unless they’re logically impossible or unintelligible), because we can always be wrong about anything. And that entails there is always a nonzero probability that we are wrong, no matter how small that probability is. And therefore there is always a converse of that probability, which is the probability that we are right (or would be right) to believe that claim. This holds even for many claims that are supposedly certain, such as the conclusions of logical or mathematical proofs. For there is always a nonzero probability that there is an error in that proof that we missed. Even if a thousand experts check the proof, there is still a nonzero probability that they all missed the same error. The probability of this is vanishingly small, but still never zero. Likewise, there is always a nonzero probability that we ourselves are mistaken about what those thousand experts concluded. And so on. The only exception would be immediate experiences that at their most basic level are undeniable (e.g., that you see words in front of you at this very moment, or that “Caesar was immortal and Brutus killed him” is logically impossible). But no substantial claim about history can ever be that basic. History is in the past and thus never in our immediate experience. And knowing what logically could or couldn’t have happened is not even close to knowing what did. Therefore, all empirical claims about history, no matter how certain, have a nonzero probability of being false, and no matter how absurd, have a nonzero probability of being true.

(Proving History, 24f – my bolding in all quotations)

A little further on Carrier raises again the exception of a “trivial” event like an “uninterpreted [direct personal] experience”:

The only exceptions I noted are claims about our direct uninterpreted experience (which are not historical facts) and the logically necessary and the logically impossible (which are not empirical facts).17 Everything else has some epistemic probability of being true or false. 

17. Of course “historical facts” do include direct uninterpreted experience, because all observations of data and of logical and mathematical relations reduce to that, but no fact of history consists solely of that; and “the logically necessary and the logically impossible” are empirical facts in the trivial sense that they can be empirically observed, and empirical propositions depend on them, and logical facts are ultimately facts of the universe (in some fashion or other), but these are not empirical facts in the same sense as historical facts, because we cannot ascertain what happened in the past solely by ruminating on logical necessities or impossibilities. Logical facts are thus traditionally called analytical facts, in contrast to empirical facts. Some propositions might combine elements of both, but insofar as a proposition is at all empirical, it is not solely analytical (and thus has some nonzero epistemic probability of being true or false), and insofar as it is solely analytical, it is not relevantly empirical (and thus cannot affirm what happened in the past, but only what could or couldn’t have).

(Proving History, 62, 302)

And again, in pointing out that historians can never be absolutely certain about any “substantive claim”,

Such certainty for us is logically impossible (at least for all substantive claims about history . . . )

(Proving History, 329)

Not even God can avoid reducing all knowledge of the past to “what probably happened”:

A confidence level of 100% is mathematically and logically impossible, as we never have access to 100% of all information, i.e., we’re not omniscient, and as Gödel proved, no one can be, for it’s logically necessary that there will always be things we won’t know, even if we’re God . . . 

(Proving History, 331)


I have to disagree. We don’t need “100% of all information” or to be “omniscient” in order to be absolutely certain about certain facts of the past. Historians are indeed certain about basic facts. We know for a fact that the U.S. dropped atomic bombs on Japan in 1945, that Japan attacked Pearl Harbor a few years before that event, that Europeans migrated to and settled in the Americas, Africa, and Australasia in the sixteenth to the nineteenth centuries, that King John signed the Magna Carta in 1215, that Rome once ruled the Mediterranean, that the Jerusalem temple was destroyed in 70 CE.

Historical events are unique and unrepeatable and our knowledge of many of them can often be absolutely certain. Witness the “History Wars” around the world — the Americas, India, Australia. In Australia, for instance, the arguments over the killing of Aborigines and the removal of children from their families are not about what “probably” happened but about what the evidence tells us did actually happen — with no room for any doubt at all. The Holocaust-denial libel trial of David Irving in 2000 was not about what probably happened but about what can be known as an indisputable fact to have happened.

To be certain about such events does not require us to possess 100% of all the related information. Further, being certain about such events does not mean we are certain about all the details. There are grey areas where probability does enter the picture but the core events themselves cannot be legitimately doubted.

* The quoted phrases are from Hindess, Barry, and Paul Q. Hirst. Pre-Capitalist Modes of Production. London: Routledge & Kegan Paul, 1975, page 2, in reference to Willer & Willer’s book, Systematic Empiricism: Critique of a Pseudo-Science.

A “brilliant and devastating critique”* of the probability approach to historical facts (in fact of the entire area of theoretical empiricism that once typically “characterised the academic social sciences and history”) was published in the 1973 book Systematic Empiricism: Critique of a Pseudo-Science by David and Judith Willer. The chapter that specifically addresses probability in this context was written by the sociologist Dr Cesar Hernandez-Cela. Here is what he says about probability in the context being discussed in this post:

A relative frequency is a probability only if the number of events taken into account is infinite. But when the number of instances is finite . . . the ratio is a relative frequency but not a probability. . . . . A relative frequency is a description, but a probability is a calculation. Although we may calculate a theoretical probability value of 1/2 for a universe in which A and B are equally represented when the number of instances approaches infinity, the most that can be said about the number of heads that will turn up when tossing a coin twenty times is that there will be a particular frequency which is unknown until we toss the coin. In other words, the assignment of a value of 1/2 simply because the coin has two sides is an error because we do not know that each side will be equally represented in any empirical case. Equal representation in probability is a mathematical assumption which is violated in finite empirical cases. . . . We may instead find that tossing a die results in a successive run of fives . . . .

The theory of probability . . . can be used in scientific theories, but it cannot be used to associate observables. Sociological statistical procedures are concerned with observables and therefore violate the conditions under which probability calculations may be legitimately used. But they are so often used that they are frequently accepted (in spite of their obvious absurdity) without question. We are told that the probability of rain tomorrow is 60 percent when, in fact, it will either rain or it will not. Such statements are unjustified, wrong, and misleading.

(Systematic Empiricism, 97f – italics in the original)
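Hernandez-Cela’s coin illustration is easy to check for oneself. Here is a minimal simulation sketch (mine, not the book’s) showing that the calculated value of 1/2 is a poor description of any particular finite run of twenty tosses:

    import random

    # Toss a fair coin twenty times, over many runs, and record the
    # relative frequency of heads observed in each run.
    runs = 10_000
    frequencies = []
    for _ in range(runs):
        heads = sum(random.random() < 0.5 for _ in range(20))
        frequencies.append(heads / 20)

    # The theoretical probability is 0.5, yet individual runs scatter widely.
    exactly_half = sum(f == 0.5 for f in frequencies) / runs
    print(f"runs with exactly 10 heads: {exactly_half:.1%}")  # typically near 18%
    print(f"spread of observed frequencies: {min(frequencies)} to {max(frequencies)}")

In more than four runs out of five the observed frequency is not 0.5: the probability is a calculation about an idealised infinite series, while each empirical run yields only a frequency.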

One is reminded here of Richard Carrier’s discussion of the “Rank-Raglan hero class”, a category of ancient figures — most of whom are mythical — who share certain mythical attributes.

This is a hero-type found repeated across at least fifteen known mythic heroes (including Jesus) — if we count only those who clearly meet more than half of the designated parallels (which means twelve or more matches out of twenty-two elements), which requirement eliminates many historical persons, such as Alexander the Great or Caesar Augustus, who accumulated many elements of this hero-type in the tales told of them, yet not that many.

The twenty-two features distinctive of this hero-type are:

1. The hero’s mother is a virgin.
2. His father is a king or the heir of a king.
3. The circumstances of his conception are unusual.
4. He is reputed to be the son of a god.
5. An attempt is made to kill him when he is a baby.
6. To escape which he is spirited away from those trying to kill him.
7. He is reared in a foreign country by one or more foster parents.
8. We are told nothing of his childhood.
9. On reaching manhood he returns to his future kingdom.
10. He is crowned, hailed or becomes king.
11. He reigns uneventfully (i.e., without wars or national catastrophes).
12. He prescribes laws.
13. He then loses favor with the gods or his subjects.
14. He is driven from the throne or city.
15. He meets with a mysterious death.
16. He dies atop a hill or high place.
17. His children, if any, do not succeed him.
18. His body turns up missing.
19. Yet he still has one or more holy sepulchers (in fact or fiction).
20. Before taking a throne or a wife, he battles and defeats a great adversary (such as a king, giant, dragon or wild beast).

and

21. His parents are related to each other.
22. He marries a queen or princess related to his predecessor.

Many of the heroes who fulfill this type also either (a) performed miracles (in life or as a deity after death) or were (b) preexistent beings who became incarnated as men or (c) subsequently worshiped as savior gods, any one of which honestly should be counted as a twenty-third attribute. . . . 

1. Oedipus (21)
2. Moses (20)
3. Jesus (20)
4. Theseus (19)
5. Dionysus (19)
6. Romulus (18)
7. Perseus (17)
8. Hercules (17)
9. Zeus (15)
10. Bellerophon (14)
11. Jason (14)
12. Osiris (14)
13. Pelops (13)
14. Asclepius (12)
15. Joseph [i.e., the son of Jacob] (12)

This is a useful discovery, because with so many matching persons it doesn’t matter what the probability is of scoring more than half on the Rank-Raglan scale by chance coincidence. Because even if it can happen often by chance coincidence, then the percentage of persons who score that high should match the ratio of real persons to mythical persons. In other words, if a real person can have the same elements associated with him, and in particular so many elements (and for this purpose it doesn’t matter whether they actually occurred), then there should be many real persons on the list—as surely there are far more real persons than mythical ones. . . . 

So there is no getting around the fact that if the ratio of conveniently named mythical godmen to conveniently named historical godmen is 2 to 1 or greater, then the prior probability that Jesus is historical is 33% or less.

(On the Historicity of Jesus, 229-231, 241 – italics original)
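For readers who want to see the arithmetic behind that final sentence, here is a minimal sketch (the ratio is Carrier’s; the code and comments are mine):

    # Carrier's stated ratio: at least two mythical figures for every one
    # historical figure among those scoring more than half on the
    # Rank-Raglan scale.
    mythical_per_historical = 2
    prior_historicity = 1 / (1 + mythical_per_historical)
    print(f"prior probability Jesus is historical: {prior_historicity:.0%}")  # 33%

    # The critique that follows: this figure rests on a class of only
    # about fifteen known members, so it is a relative frequency of a
    # small finite sample, not a probability in the mathematical sense.
    class_size = 15
    print(f"size of the reference class behind that figure: {class_size}")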

First, we have fewer than a quarter of a hundred instances in our group, so a percentage figure is misleading. The total number of heroes Raglan studied was twenty.

Second, on what basis can we validly decide to count only those figures who score more than half of the listed attributes? Carrier identifies ten of the twenty-two listed features as applicable to Alexander the Great and acknowledges (though disputes) the possibility of assigning him thirteen. Half seems to be an arbitrary cut-off point (or at least a tendentious one, insofar as it excludes the exceptions: historical persons who would spoil the point being made), especially when we know that Raglan himself said that his list of twenty-two was an arbitrary number. Other scholars of mythical “types” produced different lists:

Von Hahn had sixteen incidents, Rank did not divide his pattern into incidents as such, and Raglan had twenty-two incidents. Raglan himself admitted that his choice of twenty-two incidents (as opposed to some other number of incidents) was arbitrary (Raglan 1956:186).

(In Quest of the Hero, 189. Raglan’s words were: “I have taken twenty-two, but it would be easy to take more.” Would a more complete list reduce the other figures to matching fewer than half? So we begin to see the arbitrariness of Carrier’s decision to focus only on those with more than half of the attributes in Raglan’s list of twenty-two.)

Alexander the Great and Mithridates are not the only ancient figures to whom “hero attributes” were attached in the literature. Sargon and Cyrus were also studied in the same context by other scholars:

Raglan wrote in complete ignorance of earlier scholarship devoted to the hero, and he was therefore unaware of the previous studies of von Hahn and Rank, for example. Raglan was parochial in other ways too. For one thing, the vast majority of his heroes came exclusively from classical (mostly Greek) sources. The first twelve heroes he treats are: Oedipus, Theseus, Romulus, Heracles, Perseus, Jason, Bellerophon, Pelops, Asclepios, Dionysos, Apollo, and Zeus. Raglan could have strengthened his case had he used some of the same heroes used by von Hahn and Rank and other scholars, e.g., such heroes as Sargon and Cyrus.

(In Quest of the Hero, 187 – my bolding)

One might even argue that the further east one went from Greece the more likely it was that historical persons matched the mythical hero reference class! Much fun can be had with statistics.

Let’s continue with Hernandez-Cela’s discussion of probability as it applies to the social sciences and history:

Social empiricists, when presenting numerical values such as the “probability” of churchgoers giving alms to the poor, might state that only in 5 percent of cases would an association as large as 60 percent or larger not obtain when instances are randomly selected. But, observing individuals, we may only say that they either do or do not give alms. In the first observation we may find that 60 percent of the total sample gave alms, but in succeeding observations this value may differ. We cannot, in fact, have any expectations of probability of giving alms to the poor, no matter how many samples we take. If, on the other hand, the sample approaches or is equal to the total population of churchgoers, then the figure represents a simple proportion, a frequency, not a probability. On the other hand, specification that only 5 percent of samples will not result in the .60 or more is meaningless. If we chose several samples all of the same size, and found that in only 5 percent of them the figure was under .60, then we still can draw no conclusions, for we know nothing about the empirical conditions prevailing in future samples. Such a claim has no basis either in theory or in observation. What the claim means is that if there were an infinite number of cases whose composition was on the average like that of the sample, then in only 5 percent of them would the percentage be smaller than .60. But, we cannot assume that any other empirical cases are on the average like the sample studied, and we cannot assume that they are infinite in number. Theoretical cases can be infinite in number, but empirical ones cannot. Such statistical claims, of course, cannot be violated empirically because they are not probability statements at all but disguised frequencies obtained by observation. Future observations cannot verify or falsify frequencies but only slightly modify their numerical value in the light of new cases. Furthermore, the statistical procedures themselves are not open to any kind of empirical verification or falsification . . .

(Systematic Empiricism, 99)

So a sample of a score of mythical heroes cannot be the basis for predicting the likelihood of any particular figure being historical or not.

The statement, “All As are Bs,” . . . . really means no more than “As have been observed with Bs.” But this statement is not a universal statement, but limited to a population. . . . Consequently no empirical generalization can act as a major premise in a deductive explanation, and empirical generalizations can never be used deductively to explain or predict.

(Systematic Empiricism, 130 — no longer from Hernandez-Cela’s chapter; italics original)

An illustration of the fallacy is set out thus:

Premise A: The probability of recovery from a streptococcus infection when treated by penicillin is close to 1.

Premise B: John Jones was treated with large doses of penicillin.

Conclusion: The probability that John Jones will recover from his streptococcus infection is close to 1.

(Systematic Empiricism, 130)

One might rephrase this as:

Premise A: The probability of a figure in the hero-class being non-historical is close to 0.

Premise B: Jesus is a figure in the hero-class.

Conclusion: The probability that Jesus is non-historical is close to 0.

But as D. and J. Willer observe,

Predictions and explanations cannot be made from [such a statement]. John Jones either does or does not recover. If he does recover the probability value of statement A is slightly increased by his case, and if he does not the probability value decreases. . . . [T]he event itself cannot be predicted with any certainty. Furthermore, if John Jones either recovers or does not, he does not recover with a probability of close to 1.

Individual facts either occur or they do not. Certain facts cannot be explained by uncertain statements. Even in ordinary everyday practical empiricism we do not make that error.

(Systematic Empiricism, 131, 135)
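The Willers’ penicillin point can be restated computationally: John Jones’s outcome is binary, and his case does nothing but revise the recorded frequency for the next tally. A minimal sketch with invented numbers:

    # Recorded recoveries among penicillin-treated patients so far.
    recovered, treated = 95, 100
    print(f"frequency to date: {recovered / treated:.3f}")  # 0.950, a description

    # John Jones is treated. He either recovers or he does not;
    # he does not "recover with a probability close to 1".
    john_recovers = False  # suppose he does not

    recovered += john_recovers  # True adds 1, False adds 0
    treated += 1
    print(f"revised frequency: {recovered / treated:.3f}")  # 0.941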

No two historical events are ever exactly alike. People and societies are not like that. There are always variables that make each historical event unique. Of course there are common experiences such as war or economic depression, but no two wars or depressions are the same. Human events are not governed by laws in the way geological forces or the weather are governed by scientific laws. Historians do not observe the results of “laws” in the historical data. They cannot make predictions about a unique historical event or person — and all historical events and persons are unique in some respect — on the basis of limited samples with variable (“arbitrary”) attributes. Generalizations can be made about, say, the impacts of technologies on various kinds of social groups, but particular historical events are each unique in some way, and generalizations cannot predict what a historian will find in the sources.

The most that probability (in the context of Richard Carrier’s discussion) can tell us about the likelihood of Jesus having existed is that Jesus was one of a few historical exceptions (or even the only exception) to general notions about mythical persons.

In the next post I’ll show what historians say about the certainty or otherwise of “their basic facts”.


Carrier, Richard. On the Historicity of Jesus: Why We Might Have Reason for Doubt. Sheffield: Sheffield Phoenix Press Ltd, 2014.

Carrier, Richard. Proving History: Bayes’s Theorem and the Quest for the Historical Jesus. Amherst, N.Y.: Prometheus Books, 2012.

Hindess, Barry, and Paul Q. Hirst. Pre-Capitalist Modes of Production. London: Routledge & Kegan Paul, 1975.

Raglan, Lord. The Hero: A Study in Tradition, Myth and Drama. Mineola, N.Y.: Dover Publications, 2011.

Rank, Otto, Lord Raglan, and Alan Dundes. In Quest of the Hero. Mythos Series. Princeton, N.J.: Princeton University Press, 1990.

Willer, David, and Judith Willer. Systematic Empiricism: Critique of a Pseudoscience. Englewood Cliffs: Prentice-Hall, 1973.



2021-01-17

Applying Bayesian Reasoning to Trump’s Claims of Election Fraud


by Neil Godfrey

From the transcript of Sean Carroll’s Mindscape program, episode 129 | Solo: Democracy in America [h/t Bob Moore]

0:15:13.8 SC: . . . . the idea that the election was stolen was made by a whole bunch of partisan actors, but it was also, I think, importantly, taken up as something worth considering, even if not necessarily true, by various contrarian, centrist pundits, right?

0:16:32.1 SC: . . . . So the answer I would have put forward is, “No. [chuckle] It was never worth taking that kind of claim seriously.” . . . . We like to talk here about being Bayesian, and in fact, it’s almost a cliche in certain corners of the internet talking about being good Bayesians, and what is meant by that is, for a set of propositions like the election was stolen, the election was not stolen. Okay, two propositions mutually exclusive, so you assign prior probabilities or prior credences to these propositions being true. So you might say,

      • “Well, elections are not usually stolen, so the credence I would put on that claim my prior is very, very small.
      • And the credence I would put on it not being stolen is very large.”

So we collect the data that will help us assess which proposition is the more likely. If the data is not what we would expect if X were true, then we revise down our estimate that X really did happen. But if the data is exactly what we would expect to find whether or not X were true, then the data gives us no reason to change our credences at all.

0:18:51.0 SC: . . . . So in a case like this where a bunch of people are saying, “Oh, there was election fraud, irregularities, the counting was off by this way or that way. It all seems suspicious.” You should ask yourself, “Did I expect that to happen?” The point is that if you expected exactly those claims to be made, even if the underlying proposition that the election was stolen is completely false, then seeing those claims being made provides zero evidence for you to change your credences whatsoever. Okay? So to make that abstract statement a little more down to earth, in the case of the elections being stolen, how likely was it that if Donald Trump did not win the election, that he and his allies would claim the election was stolen independent of whether it was, okay? What was the probability that he was going to say that there were irregularities and it was stolen?

0:20:19.6 SC: Well, a 100%, roughly speaking, 99.999, if you wanna be little bit more meta-physically careful, but they announced ahead of time that they were going to make those claims, right? He had been saying for months that the very idea of voting by mail is irregular and was going to lead to fraud, and they worked very hard to make the process difficult, both to cast votes and then to count them, different states had different ways of counting, certain states were prohibited from counting mail in ballots ahead of time. The Democrats were much more likely to vote by mail than the Republicans were, they slowed down the postal service, trying to make it take longer for mail-in votes to get there. There’s it’s a whole bunch of things going on in prior elections in the primaries, Trump had accused his opponents of rigging the election and stealing votes without any evidence.

0:21:15.3 SC: So your likelihood to function, that you would see these claims rise up even if the underlying proposition was not true, is basically, 100%. And therefore, as a good Bayesian, the fact that people were raising questions about the integrity of the election means nothing. It’s just what you expect to happen.
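Carroll’s reasoning can be written out as a one-line Bayes update. The following is a minimal sketch; the near-100% likelihoods are his estimates, while the prior is an arbitrary number chosen purely for illustration:

    def posterior(prior, p_data_if_true, p_data_if_false):
        """Bayes' theorem for a hypothesis and its negation."""
        numerator = prior * p_data_if_true
        return numerator / (numerator + (1 - prior) * p_data_if_false)

    prior_stolen = 0.01  # illustrative low prior that a US election is stolen

    # The "data": loud claims of fraud. Carroll's estimate is that such
    # claims were all but certain to be made whether or not the election
    # was actually stolen.
    p_claims_if_stolen = 0.999
    p_claims_if_not_stolen = 0.999

    print(posterior(prior_stolen, p_claims_if_stolen, p_claims_if_not_stolen))
    # 0.01 -- identical to the prior: the claims carry zero evidential weight

When the data is equally expected on both hypotheses, the posterior equals the prior, which is exactly Carroll’s point.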


The data we need to see in order to take the claims of fraud seriously:

If you really want to spend any effort at all taking a claim like this seriously, you have to go beyond that simple thing, “Oh someone claimed that something’s going on, therefore it’s my job to evaluate it and wait for more evidence to come in.”

You should ask further questions: “What else should I expect to be true if this claim was correct?” For example, if the Democrats had somehow been able to get a lot of false ballots and rig elections, you would expect to see certain patterns, like Democrats winning a lot of elections they had been predicted to lose; and the cities, or locations more broadly, where the frauds were purported to happen would be ones where anomalously large percentages of people were voting for Biden rather than Trump.

0:22:28.3 SC: In both cases (Democrats winning elections they had been predicted to lose, and places where fraud was alleged being anomalously pro-Biden) it was the opposite. And you could instantly see that it was the opposite, right after election day.

      • The Democrats lost elections for the House of Representatives and the Senate that they were favored to win.

So they were very bad at packing the ballots, if that’s really what they were trying to do.

      • In cities like Philadelphia where it was alleged that a great voter fraud was taking place, Trump did better in 2020 than he did in 2016.

So right away, without working very hard, you know this is egregious bullshit, there is no duty to think, to take seriously, to spend your time worrying about the likely truth of this outrageous claim, all of which is completely compatible with every evidence, the falsity of which is completely compatible with all the evidence we have.

0:23:32.2 SC: So just to make it dramatic, let me spend a little bit of time here… Let me give you an aside, which is my favorite example of what I mean by this kind of attitude because it is very tricky. You should never, and I’m very happy to admit, you should never assign zero credence to essentially any crazy claim. That would be bad practice as a good Bayesian because if you assigned zero credence to any claim, then no amount of new evidence would ever change your mind. Okay? You’re taking the prior probability and multiplying it by the likelihood, but if the prior probability is zero, then it doesn’t matter what the likelihood is, you’re always gonna get zero at the end. And you should be open to the idea that evidence could come in that this outrageous claim is true, that the election was stolen; it’s certainly plausible that such evidence would come in.

0:24:21.9 SC: Now it didn’t, right, when actually they did have their day in court, they were laughed … out of court because they had zero evidence, even all the way up to January 6th when people in Congress were raising a stink about the election not being fair, they still had no evidence. The only claim they could make was that people were upset and people had suspicions, right? Even months later, so there was never any evidence that it was worth taking seriously. But nevertheless, even without that, I do think you should give some credence and therefore you have to do the hard work of saying, “Well, I’m giving it some non-zero credence, but so little that it’s not really worth spending even a minute worrying about it.” That’s a very crucial distinction to draw, and it’s very hard to do.
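The distinction Carroll draws between zero credence and tiny credence is also mechanical. A minimal sketch (my numbers, for illustration) using the same update formula as above:

    def posterior(prior, p_data_if_true, p_data_if_false):
        numerator = prior * p_data_if_true
        return numerator / (numerator + (1 - prior) * p_data_if_false)

    # A prior of exactly zero can never move, however strong the evidence:
    print(posterior(0.0, 0.999, 0.001))   # 0.0, always
    # A tiny nonzero prior can still respond to strong evidence:
    print(posterior(1e-9, 0.999, 0.001))  # ~1e-6, and rising with more evidence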


2020-08-11

“When everyone is agreed on something, it is probably wrong” — Thompson’s Rule


by Neil Godfrey

Another Thompson aphorism: ‘When everyone is agreed on something, it is probably wrong’. In other words, as Thompson has also put it, ‘in our fields, if all are in agreement, it signifies that no one is trying to falsify the theory: an essential step in any scientific argument’. — Doudna 2020

That’s not being perverse. It’s about pausing when “things seem too good to be true” and taking time out to ask if “there has probably been a mistake”. (Gunn, @ 2 mins)

[U]ntil the Romans ultimately removed the right of the Sanhedrin to confer death sentences, a defendant unanimously condemned by the judges would be acquitted [14, Sanhedrin 17a], the Talmud stating ‘If the Sanhedrin unanimously find guilty, he is acquitted. Why? — Because we have learned by tradition that sentence must be postponed till the morrow in hope of finding new points in favour of the defence’.

That practice could be interpreted as the Jewish judges being intuitively aware that suspicions about the process should be raised if the final result appears too perfect . . .

[I]f too many judges agree, the system has failed and should not be considered reliable. (Gunn et al 2016)

Or even more simply,

They intuitively reasoned that when something seems too good to be true, most likely a mistake was made. (Zyga, 2016)

Sanhedrin deciding the death penalty . . . but . . . https://arthive.com/vasilypolenov/works/493225~Guilty_of_death
See Interview 1 and Interview 2 with Thomas L. Thompson. All Vridar blog posts on Thompson’s work are archived here. I expect to begin posting my thoughts on Biblical Narratives, Archaeology & Historicity: Essays in Honour of Thomas L. Thompson fairly soon.

The opening quotation above is from a footnote to a chapter by Gregory Doudna in a newly published volume in honour of Thomas L. Thompson, Biblical Narratives, Archaeology & Historicity: Essays in Honour of Thomas L. Thompson. Doudna’s footnote continues:

I thought of what I have come to call Thompson’s Rule when I encountered this scientific study showing that, as counterintuitive as it sounds, unanimous agreement actually does reduce confidence of correctness in conclusions in a wide variety of disciplines (Gunn et al. 2016).

The paper by Gunn and others is Too good to be true: when overwhelming evidence fails to convince. The argument of the paper (with my bolding in all quotations):

Is it possible for a large sequence of measurements or observations, which support a hypothesis, to counterintuitively decrease our confidence? Can unanimous support be too good to be true? The assumption of independence is often made in good faith; however, rarely is consideration given to whether a systemic failure has occurred. Taking this into account can cause certainty in a hypothesis to decrease as the evidence for it becomes apparently stronger. We perform a probabilistic Bayesian analysis of this effect with examples based on (i) archaeological evidence, (ii) weighing of legal evidence and (iii) cryptographic primality testing. In this paper, we investigate the effects of small error rates in a set of measurements or observations. We find that even with very low systemic failure rates, high confidence is surprisingly difficult to achieve . . . . 

Sometimes, as we find more and more agreement, we can begin to lose confidence in the results. Gunn begins with a simple example in a presentation he gave in 2016 (the link is to a YouTube video). The gist of his key slide:

With a noisy voltmeter attempting to measure a very small voltage (nanovoltage) one would expect some variation in each attempted measurement. Without the variation, we can conclude something is wrong rather than that we have a precise measurement.
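The effect the paper describes can be reproduced with a few lines of arithmetic. The model below is my own minimal sketch, not Gunn’s: each independent test confirms a true hypothesis 90% of the time, but with a 1% chance the whole procedure is systemically broken and confirms everything regardless of the truth.

    def confidence(n, p_correct=0.9, eps=0.01, prior=0.5):
        """P(hypothesis is true | n unanimous confirmations), allowing a
        probability eps that the procedure fails systemically and
        confirms no matter what the truth is."""
        p, q = p_correct, 1 - p_correct
        like_true = (1 - eps) * p**n + eps
        like_false = (1 - eps) * q**n + eps
        return prior * like_true / (prior * like_true + (1 - prior) * like_false)

    for n in (1, 3, 5, 20, 50, 100):
        print(n, round(confidence(n), 3))
    # 1 0.892, 3 0.985, 5 0.983, 20 0.929, 50 0.602, 100 0.501:
    # confidence peaks at a handful of agreeing results, then unanimity
    # itself becomes evidence of systemic failure and confidence falls

Past a certain point, every additional unanimous confirmation makes systemic failure the better explanation of the agreement.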

Another example:

The recent Volkswagen scandal is a good example. The company fraudulently programmed a computer chip to run the engine in a mode that minimized diesel fuel emissions during emission tests. But in reality, the emissions did not meet standards when the cars were running on the road. The low emissions were too consistent and ‘too good to be true.’ The emissions team that outed Volkswagen initially got suspicious when they found that emissions were almost at the same level whether a car was new or five years old! The consistency betrayed the systemic bias introduced by the nefarious computer chip. (Zyga 2016)

From https://www.cagle.com/arend-van-dam/2015/09/smart-vw-cars

Then there was the Phantom of Heilbronn or the serial killer “Woman Without a Face“. Police spent eight to fifteen years searching for a woman whom DNA connected to 40 crime scenes (murders to burglaries) in France, Germany and Austria. Her DNA was identified at six murder scenes. A three million euro reward was offered. It turned out that the swabs used to collect the DNA from the crime scenes had been inadvertently contaminated at their production point by the same woman.

Consider, also, election results. What do we normally suspect when we hear of a dictator receiving over 90% of the vote?

We have all encountered someone who argues that “all the evidence” supports their new pet hypothesis to explain, say, Christianity’s origins. I have never been able to persuade them, as far as I know, that they are reading “all the evidence” with a bias they either cannot see or think is entirely valid.

Ironically, scholars like Bart Ehrman who deny that there is or ever has been any serious or even slightly significant “Jesus myth” view among scholars are doing their case a disservice. By insisting that no valid or reasonable contrary view has ever been raised, such scholars are undermining confidence in the case for the historicity of Jesus. If they could accept the challenges from serious thinkers over nearly the past two centuries, and acknowledge the ideological pressure inherent in “biblical studies” for academics to conform within certain parameters of orthodox faith, then they would begin to look rather less like those politicians who claim 90% of the vote, or those police chasing a phantom woman serial killer for eight years across Europe, or the dishonest VW executives . . . . Continue reading ““When everyone is agreed on something, it is probably wrong” — Thompson’s Rule”


2019-10-28

What’s the Difference Between Frequentism and Bayesianism? (Part 3)


by Tim Widowfield

Note: I wrote this post a few years back and left it lying in the draft pile, unable to come up with a satisfactory conclusion until earlier this year. Our forecast calls for snow tomorrow (something those of us who live in RVs would rather not see), so a post about precipitation and weather prediction might be apt. –TAW

Yellow umbrella in bad weather (Photo credit: Wikipedia)

[This post begins our hard look at Chapter 6, “The Hard Stuff,” in Carrier’s Proving History — specifically, the section entitled “Bayesianism as Epistemic Frequentism.”]

In the 1980s, the history department building on the University of Maryland’s College Park campus had famous quotations painted on its hallway walls. Perhaps it still does.

The only quote I can actually still remember is this one:

“The American people never carry an umbrella. They prepare to walk in eternal sunshine.” — Alfred E. Smith

I used to enjoy lying to myself and saying, “That’s me!” But the real reason I never carry an umbrella is not that I’m a naive Yankee optimist, but rather that I know if I do, I will leave it somewhere. In this universe, there are umbrella receivers and umbrella donors. I am a donor.

Eternal sunshine

So to be honest, the reason I check the weather report is to see if I should take a jacket. I’ve donated far fewer jackets to the universe than umbrellas. But then the question becomes, what does it actually mean when a weather forecaster says we have a 20% chance of rain in our area this afternoon? And what are we supposed to think or do when we hear that?

Ideally, when an expert shares his or her evaluation of the evidence, we ought to be able to apply it to the situation at hand without much effort. But what about here? What is our risk of getting rained on? In Proving History, Richard Carrier writes:

When weathermen tell us there is a 20% chance of rain during the coming daylight hours, they mean either that it will rain over one-fifth of the region for which the prediction was made (i.e., if that region contains a thousand acres, rain will fall on a total of two hundred of those acres before nightfall) or that when comparing all past days for which the same meteorological indicators were present as are present for this current day we would find that rain occurred on one out of five of those days (i.e., if we find one hundred such days in the record books, twenty of them were days on which it rained). (Carrier 2012, p. 197)

These sound like two plausible explanations. The first sounds pretty “sciency,” while the second reminds us of the frequentist definition of probability, namely “the number of desired outcomes over the total number of events.” They’re certainly plausible, but do they have anything to do with what real weather forecasters do?
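Before looking at what forecasters actually do, it is worth pinning down Carrier’s two candidate readings; they give the same number while asserting quite different things. A minimal sketch (the record-book figures are hypothetical, taken from Carrier’s own illustration):

    # Reading 1: areal coverage. Rain will fall on one fifth of the region.
    region_acres, rainy_acres = 1000, 200
    coverage = rainy_acres / region_acres           # 0.2

    # Reading 2: historical frequency. Of all past days with the same
    # meteorological indicators, one in five saw rain.
    similar_days, rainy_days = 100, 20
    frequency = rainy_days / similar_days           # 0.2

    print(coverage, frequency)  # 0.2 0.2 -- same number, different claims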

Recently, I came across an article on this subject by a meteorologist in Jacksonville, Florida, written back in 2013. He even happened to use the same percentage. In “What does 20% chance of rain really mean?” Blake Matthews writes: Continue reading “What’s the Difference Between Frequentism and Bayesianism? (Part 3)”


2019-05-12

The Questions We Permit Ourselves to Ask


by Neil Godfrey

In historical research, we evaluate the plausibility of hypotheses that aim to explain the occurrence of a specific event. The explanations we develop for this purpose have to be considered in light of the historical evidence that is available to us. Data functions as evidence that supports or contradicts a hypothesis in two different ways, corresponding to two different questions that need to be answered with regard to a hypothesis:

1. How well does the event fit into the explanation given for its occurrence?

2. How plausible are the basic parameters presupposed by the hypothesis?

. . . . .

[A]lthough this basic structure of historical arguments is so immensely important and its disregard inevitably leads to wrong, or at least insufficiently reasoned, conclusions, it is not a sufficient condition for valid inferences. Historical data does not come with tags attached to it, informing us about (a) how – or whether at all – it relates to one of the two categories we have mentioned and (b) how much plausibility it contributes to the overall picture. The historian will never be replaced by the mathematician.23

23 This becomes painfully clear when one considers that one of the few adaptations of Bayes’s theorem in biblical studies, namely Richard Carrier, On the Historicity of Jesus: Why We Might Have Reason for Doubt (Sheffield: Sheffield Phoenix, 2014), aims to demonstrate that Jesus was not a historical figure.

Heilig, Christoph. 2015. Hidden Criticism?: The Methodology and Plausibility of the Search for a Counter-Imperial Subtext in Paul. Tübingen: Mohr Siebeck. pp. 26f
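Heilig’s two questions map onto the two factors a Bayesian multiplies: question 1 is the likelihood (how well the evidence fits the hypothesis) and question 2 is the prior (how plausible the hypothesis’s presupposed parameters are). A minimal sketch with invented numbers:

    # Two rival explanations of the same evidence.
    hypotheses = {
        "H1": {"prior": 0.6, "likelihood": 0.3},  # plausible parameters, poor fit
        "H2": {"prior": 0.4, "likelihood": 0.7},  # less plausible, better fit
    }

    total = sum(h["prior"] * h["likelihood"] for h in hypotheses.values())
    for name, h in hypotheses.items():
        print(name, round(h["prior"] * h["likelihood"] / total, 2))
    # H1 0.39, H2 0.61 -- both factors matter, and neither alone decides

Heilig’s caveat remains, of course: nothing in the machinery tells the historian what numbers to enter, since the data “does not come with tags attached”.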


2018-09-16

Bayes’ theorem explained by Lily Serna


by Neil Godfrey

Last night I chanced to turn on the TV halfway through a program trying to show viewers how interesting maths is. Yeh, okay. But I watched a little as they demonstrated how they do searches at sea for missing persons. Then it suddenly got interesting. Bayes’ theorem was introduced as their way of handling new information that came to them as they conducted their search. And the presenter, Lily Serna, a maths wiz (I have seen her magical maths brain at work on another show), explained it all without the maths. Move the red button forward to the 44:54 mark:

A more truncated version is also on YouTube.

Another simple introduction on The Conversation:

Bayes’ Theorem: the maths tool we probably use every day, but what is it?
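The search-at-sea application the program describes is classic Bayesian search theory: divide the area into cells, give each a prior probability, and after an unsuccessful sweep of one cell update them all. A minimal sketch with made-up numbers:

    # Prior probability the missing person is in each search cell.
    priors = {"A": 0.5, "B": 0.3, "C": 0.2}
    p_detect = 0.8  # chance a sweep finds them if they are in the swept cell

    # Cell A is swept and nothing is found; update every cell.
    p_no_find = priors["A"] * (1 - p_detect) + (1 - priors["A"])
    posteriors = {
        cell: (p * (1 - p_detect) if cell == "A" else p) / p_no_find
        for cell, p in priors.items()
    }
    print(posteriors)  # A falls to ~0.17; B rises to 0.50, C to ~0.33

The failed sweep does not rule cell A out (the sweep itself can miss), but it shifts probability toward the unsearched cells, which is the updating the program illustrated.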


2018-07-29

Even a Bayesian Historian Can Slip Up! (once)


by Neil Godfrey

I argue that the interpretation of Bayesianism that I present here is the best explanation of the actual practices of historians.

— Tucker, Aviezer. 2009. Our Knowledge of the Past: A Philosophy of Historiography. Reissue edition. Cambridge University Press. p. 134

Aviezer Tucker

I have posted aspects of Aviezer Tucker’s discussion of how Bayesian reasoning best represents the way historians conduct their research but here I want to post a few details in Tucker’s chapter that I have not covered so far.

(Interjection: it is not strictly fair to call Aviezer Tucker a “Bayesian historian” because, as is clear from the opening quote, what he argues is that all historians, at least at their best and overall, employ Bayesian logic without perhaps realizing it.)

Tucker includes discussion of biblical criticism in his book but in his chapter on Bayesian methods he unfortunately contradicts himself. The contradiction can best be explained, I think, by appealing to the power of the Christian story to implant unquestioned assumptions into even the best of scholars. I could call that my hypothesis and suggest that the prior probability for it being so in many historians is quite high.

No doubt readers will recall my recent quotation from Tucker:

There have been attempts to use the full Bayesian formula to evaluate hypotheses about the past, for example, whether miracles happened or not (Earman, 2000, pp. 53–9). Despite Earman’s correct criticism of Hume (1988), both ask the same full Bayesian question:

“What is the probability that a certain miracle happened, given the testimonies to that effect and our scientific background knowledge?”

But this is not the kind of question biblical critics and historians ask. They ask,

“What is the best explanation of this set of documents that tells of a miracle of a certain kind?”

The center of research is the explanation of the evidence, not whether or not a literal interpretation of the evidence corresponds with what took place.

(Tucker, p. 99)

One explanation for the documents relating the miracles is that the miracles happened and were recorded. Other explanations can also come to mind.

No doubt because the question focused on miracles it was very easy for Tucker and countless others before and since to think of alternative hypotheses to explain the stories of miracles that have survived for our reading entertainment today.

The Slip Up

But look what happened to Tucker’s argument when he was faced with something that sounded more “historically plausible”: Continue reading “Even a Bayesian Historian Can Slip Up! (once)”


2018-07-11

Analysis of the McGrath and Carrier debate on a Bayesian approach to history


by Neil Godfrey

The latest contest started when James McGrath made a mockery of his understanding of Carrier’s method: Jesus Mythicism: Two Truths and a Lie

I have run the to and fro posts through a Linguistic Inquiry and Word Count (LIWC) analysis. Here are the interesting results:

VARIABLE             MCGRATH 1       CARRIER          MCGRATH 2
                     Two Truths      Wrong Again      Mythicist Math
                     (449 words)     (2485 words)     (680 words)

Analytic thinking    82.22%          32.85%           55.17%
(the degree to which people use words that suggest formal, logical, and hierarchical thinking patterns)

Authenticity         49.57%          34.39%           39.55%
(when people reveal themselves in an authentic or honest way)

Clout                38.59%          47.75%           48.82%
(the relative social status, confidence, or leadership that people display through their writing)

Tone                 92.86%          16.55%           13.75%
(the higher the number, the more positive the tone)

Anger                0.22%           0.56%            0.88%

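For the curious, percentages of this kind are simple dictionary counts over total words. A toy sketch (the word list below is invented for illustration; LIWC’s actual dictionaries are proprietary and far larger):

    anger_words = {"lying", "liar", "lied", "lies", "damned", "insults"}  # toy list

    def category_percent(text, category):
        words = text.lower().split()
        hits = sum(w.strip(".,!?") in category for w in words)
        return 100 * hits / len(words)

    sample = "He is lying, and his lies are damned insults."
    print(f"{category_percent(sample, anger_words):.2f}%")  # 44.44%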

Tone

Unfortunately when one reads McGrath’s Two Truths post one soon sees that his very positive tone (over 92% positive) is in fact an indication of overconfidence with the straw-man take-down.

But but but….. Please, Richard, please, please, please! Don’t fall into McGrath’s trap. Sure he sets up a straw man and says all sorts of fallacious things but he also surely loves it when he riles you. It puts him on the moral high ground (at least with respect to appearances, and in the real world, despite all our wishes it were otherwise, appearances do seriously count).

But see how McGrath then followed with a lower tone — and that’s how it so easily can go in any debate on mythicism with a scholar who has more than an academic interest in the question.

Anger

Ditto for anger.

This variable was measured by the following words:

MCGRATH 1: lying
CARRIER:   destroyed, argued, argument, liar, arguments, argues, lied, lies, damned, insults
MCGRATH 2: criticized, argument
Clearly a more thorough and serious analysis would need to sort words like “argument” between their hostile and academic uses.

Analytic thinking style

James McGrath began the discussion in a style that conveyed a serious analytical treatment of Carrier’s argument. Of course anyone who has read Carrier’s works knows McGrath’s target was a straw man and not the actual argument Carrier makes at all. (Interestingly, when Carrier pointed out that it appeared McGrath had not read his actual arguments, McGrath at best implied that he had read Carrier’s books but fell short of saying that he had actually read them, or any of the pages where Carrier in fact argued the very opposite of what McGrath believed he had.) Nonetheless, McGrath’s opening gambit conveyed a positive approach to anyone unfamiliar with Carrier’s arguments.

But look what happened to McGrath’s analytical style after meeting Carrier’s less analytical style: he followed Carrier’s lead.

Carrier has chosen to write in a natural language style, which is fine for informal conversation, but the first impression of an outsider unfamiliar with Carrier’s arguments would probably be that McGrath was the more serious analyst of the question. (I understand why Carrier writes this way, but an overly casual style, I suspect, appeals more to the friendly converted, who are happy to listen rather than actively share the reasoning process, than to an outsider being introduced to the ideas.)

In actual fact, Carrier uses far more words that do indeed point to analytic thinking than does McGrath. Carrier uses cognitive process words significantly more frequently than does McGrath (24% to 16%/19%). But his sentences are far less complex and shorter.

Other

There are many other little datasets that a full LIWC analysis reveals. One is the comparative use of the personal singular pronoun. A frequent use of “I” can indicate self-awareness as one speaks, and this can sometimes be a measure of some lack of confidence. Certainly the avoidance of “I” is often a measure of the opposite: of strong confidence and serious engagement in the task at hand. Carrier’s use of “I” is significantly less frequent than McGrath’s.

Another progression one sees is the use of “he”. As the debate progressed it became increasingly focused on what “he” said: e.g. McGrath1: 0.45%; Carrier 1.65%; McGrath2 2.06%.

McGrath sometimes complains about the length of Carrier’s posts. But a higher word count is itself linked to cognitive complexity and honesty.

—o—

Of course I could not resist comparing my own side-line contribution:

VARIABLE             NEIL
                     Reply
                     (1077 words)

Analytic thinking    86.42%
Authenticity         44.36%
Clout                56.27%
Tone                 32.13%
Anger                0.93%
(Anger here was measured by my use of “criticism”, “argument” and “critical”)



Pennebaker, James W. 2013. The Secret Life of Pronouns: What Our Words Say About Us. Reprint edition. New York: Bloomsbury Press.



2018-07-10

How Historical Research Works (and does not work) — even with Bayes’


by Neil Godfrey

A Roman Catholic historian who thinks he’s a Bayesian walks into the secret Vatican archives. There he discovers a document that might have significance for rewriting the origins of Christianity. I have reproduced a facsimile:

The historian is stunned. His faith has taught him that James was only a cousin or half-brother. If he was wrong about that, he wonders, how can he even be sure Jesus existed at all?

Reeling in doubts, the historian is nonetheless conscientious and no fool. He knows he has to test this document for its authenticity. So he snips off a corner of it and sends it to the laboratory to determine the age and provenance of the material. As an extra check he sends a high definition copy to a paleographer.

The results come back. The material is dated between 40 AD and 60 AD and the paleographic analysis confirms that the style conforms to what was typical of the year 50 AD.

Next, he asks if the letter is genuinely by Paul. His colleagues tell him it sounds just like the Paul they know so that is confirmed.

Since this is evidently an autograph, questions of the contents of the letter being altered during the process of copying do not arise.

But how reliable are its contents as historical evidence? Our historian asks if we can verify that this particular James really was known to be the literal brother of Jesus.

He consults the latest scholarship on the book of Acts and discovers that it is now established “beyond doubt” that the first chapters, 1-15, were written in the year 45 AD and that the original text said that James was not only the head of the church but was also the junior brother of Jesus, one year younger to be precise. The contents of Paul’s letter are confirmed!

But our historian is more thorough still. Did anyone else in the early church know anything of this letter and its contents? He pores through Tertullian’s writings and sees that Tertullian quotes the passage about meeting James to refute Marcion’s heresy that Jesus was not really a flesh and blood human being born of a woman on earth.

That clinched it! The letter and its contents sure seemed to be genuine and known to be genuine by the venerable Fathers.

But our historian is a Bayesian. At least he thinks he is. He read half of a blurb on the back cover of a book that had Bayes written on its front cover and is confident that he got the hang of it from that.

If he was wrong about Jesus having brothers how can he be sure Jesus even existed? The historian pauses to think of all the unbelievable stories about Jesus. Could such a person really have existed in the first place? So he puts on what he thinks is his Bayesian cap that looks very much like one of those conical dunce caps and sets to work.

He weighed the evidence. He took all the stories that were mythical and set them against the evidence for the reality of Jesus and here’s what he found:

The weight of numbers proved it. Jesus did not exist after all. He was entirely mythical. The claims of the letter were all bogus. Continue reading “How Historical Research Works (and does not work) — even with Bayes’”


2018-07-07

Clarification needed for my reply to McGrath’s criticism of the use of Bayesian reasoning


by Neil Godfrey

McGrath does not tell his readers in the post we are addressing what he has in mind as the “clear-cut” evidence for the historicity of Jesus but from previous posts and comments I am convinced that it is the “brother of the Lord” passage in Galatians 1:19 that he has in mind. If I am wrong then someone will no doubt inform me.

I ought to have made that point clearer in my original post.

If someone can direct me to where McGrath recently made the point about that Galatians passage (was it in response to the reddit discussion about Vridar?) I would much appreciate it.

 

 


Reply to James McGrath’s Criticism of Bayes’s Theorem in the Jesus Mythicism Debate


by Neil Godfrey

Aviezer Tucker

James McGrath in a recent post, Jesus Mythicism: Two Truths and a Lie, made the following criticism of the use of Bayes’s theorem in the Jesus Mythicism debate:

. . . . as I was reminded of the problematic case that Richard Carrier has made for incorporating mathematical probability (and more specifically a Bayesian approach) into historical methods. . . .

If one followed Carrier’s logic, each bit of evidence of untruth would diminish the evidence for truth, and each bit of evidence that is compatible with the non-historicity of Jesus diminishes the case for his historicity.

The logic of this argument is based on a misunderstanding of the nature of historical inquiry and how a historian is expected to apply Bayesian logic. (It also misconstrues Carrier’s argument but that is another question. I want only to focus on a correct understanding of how a historian validly applies Bayesian reasoning.)

In support of my assertion that James McGrath’s criticism is misinformed I turn to a historian and philosopher of history, Aviezer Tucker (see also here and here), author of Our Knowledge of the Past: A Philosophy of Historiography. He treats Bayesian reasoning by historical researchers in depth in chapter three. I quote a section from that chapter (with my own formatting):

There have been attempts to use the full Bayesian formula to evaluate hypotheses about the past, for example, whether miracles happened or not (Earman, 2000, pp. 53–9).

We may compare McGrath’s criticism. He is of the impression that the Bayesian formula is used to evaluate the hypothesis that Jesus did exist. This is a common misunderstanding. If you are confused, continue to read.

Despite Earman’s correct criticism of Hume (1988), both ask the same full Bayesian question:

“What is the probability that a certain miracle happened, given the testimonies to that effect and our scientific background knowledge?”

We may compare McGrath’s criticism again. He is of the impression that the historian using Bayesian logic is asking what is the probability that Jesus existed, given the testimonies to that effect and our background knowledge. If you are still confused then you share McGrath’s misunderstanding of the nature of historical inquiry. So continue with Tucker:

But this is not the kind of question biblical critics and historians ask. They ask,

“What is the best explanation of this set of documents that tells of a miracle of a certain kind?”

The center of research is the explanation of the evidence, not whether or not a literal interpretation of the evidence corresponds with what took place.

(Tucker, p. 99)

In other words, biblical critics and historians ask (Tucker is assuming the biblical critic and historian is using Bayesian logic validly and with a correct understanding of the true nature of historical research) what is the best explanation for a document that, say, purports to be by Paul saying he met James, “the brother of the Lord”.

I use that particular example because — and someone correct me if I am mistaken — James McGrath and others believe that passage (Galatians 1:19) makes any questioning of the historicity of Jesus an act of “denialism”. (McGrath does not tell his readers in the post we are addressing what he has in mind as the “clear-cut” evidence for the historicity of Jesus, but from previous posts and comments I am convinced it is the “brother of the Lord” passage in Galatians 1:19. If I am wrong then someone will no doubt inform me.)

No one, I am sure, would mean to imply that the late and highly respected Philip R. Davies was guilty of denialism when he suggested that the historical methods he applied to the Old Testament should also be applied to the New — a method I have sought to apply to the study of Christian origins ever since I read Davies’ groundbreaking book.

Back to the question. What is the best explanation for the passage in our version of Galatians? That is the question I have attempted to address several times now.

That is also the question the historian needs to ask. Every decent book I have read for students about to undertake advanced historical studies has stressed, among many other duties, the researcher’s obligation to question the provenance and authenticity of the documents he or she is using, and to be familiar, through a thorough investigation of the entire field, with all the issues those questions raise. My several posts have attempted to introduce such questions, which should be basic to any historical study.
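To put Tucker’s point in concrete terms, the historian’s Bayesian question is which candidate explanation best accounts for the document we actually possess. A toy sketch, with made-up priors and likelihoods standing in for three commonly discussed explanations of the Galatians passage (none of these numbers comes from Tucker or Carrier):

```python
# P(H | E) is proportional to P(E | H) * P(H), where E is the document itself.
# The hypotheses and all numbers below are placeholders for illustration only.

hypotheses = {
    "genuine report of a biological brother": (0.4, 0.7),  # (prior, P(E|H))
    "later interpolation into the letter":    (0.3, 0.4),
    "'brother' as a community title":         (0.3, 0.5),
}

weights = {h: prior * lik for h, (prior, lik) in hypotheses.items()}
total = sum(weights.values())
for h, w in weights.items():
    print(f"{h}: {w / total:.2f}")  # posterior share of each explanation
```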

Tucker, from my reading of his book, would not consider such an exercise to be “denialism”, but sound and fundamental historical method — and even sound biblical criticism. Continue reading “Reply to James McGrath’s Criticism of Bayes’s Theorem in the Jesus Mythicism Debate”


2017-12-15

How Bayes’ Theorem Proves the Resurrection (Gullotta on Carrier once more)

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

by Neil Godfrey

Yet I cannot help but compare Carrier’s approach to the work of Richard Swinburne, who likewise uses Bayes’ theorem to demonstrate the high probability of Jesus’ resurrection, and wonder if it is not fatally telling that Bayes’ theorem can be used to both prove the reality of Jesus’ physical resurrection and prove that he had no existence as a historical person.49

49 Richard Swinburne, The Resurrection of God Incarnate (Oxford: Oxford University Press, 2003).

The above quotation is from page 16 of Daniel Gullotta’s 37-page review of Richard Carrier’s On the Historicity of Jesus [OHJ].

To make such a comparison one would expect the author to be familiar with how Bayes’ rule is used by both Carrier and Swinburne. Unfortunately Gullotta nowhere indicates that he consulted the prequel to OHJ, Proving History, in which Carrier explained Bayes on a “for dummies” level and which was referenced repeatedly in OHJ. Gullotta moreover indicated that he found all of the Bayesian references in OHJ way over his head — even though the numbers used were nothing more than statements of probability that the evidence leans one way or the other, such as when we say there is a 50-50 chance of rain or we are 90% sure we know who’s been pinching our coffee at work. Robert M. Price has expressed similar mathemaphobia, so Gullotta is not alone.

Anyway, we have a right to expect a reviewer to be familiar with the way Bayes is used in at least one of the works he is comparing, and since he skipped the Bayesian discussion in OHJ, he is presumably aware of how Swinburne used Bayes to prove, in effect, the resurrection of Jesus.

Bayes’ theorem is about bringing to bear all background knowledge and evidence for a particular hypothesis, assessing it against alternative hypotheses, and updating one’s assessments in the light of new information as it comes along.

If that sounds no different from the common sense way we ought to approach any problem, that’s because it is no different from common sense methods. That’s why Bayes “cracked the enigma code, hunted down Russian submarines and emerged triumphant [even in historical studies!] from two centuries of controversy“.
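For readers who want to see the mechanics, here is a single Bayesian update in miniature, using the coffee-pinching scenario from above. The numbers are invented for the sake of the example:

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: return P(H | E) given P(H) and the two likelihoods."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# We start out 90% sure we know who has been pinching the office coffee...
belief = 0.90

# ...then new information arrives: our suspect was away on a day the coffee
# still went missing. That observation is much likelier if the suspect is
# innocent, so the belief must be revised downward.
belief = update(belief, p_e_given_h=0.05, p_e_given_not_h=0.60)
print(round(belief, 2))  # ~0.43
```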

Anyway, scholars who ought to know better have indicated that they can safely dismiss Bayes because Richard Swinburne used it to prove the resurrection of Jesus. Never mind that that’s like saying we should never use syllogisms in argument because some joker used one to prove everything with two legs was a man.

Richard Swinburne

So let’s see exactly how Swinburne used a tool that some of the smartest people in the world use for all sorts of good things in order to prove Jesus is alive today and about to come and judge us all.

Bayes works with facts. Hard data. Real evidence. Stuff.

For Swinburne, anything in the Bible is a definite fact that we are required to believe unless something else in the Bible contradicts it. That’s the “hard data” that Swinburne feeds into his Bayesian equation!
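It is easy to see how this guarantees his conclusion. Once every biblical report is entered as near-certain data, the likelihoods do all the work before the calculation even begins. A hedged sketch of the garbage-in, garbage-out problem (the numbers below are mine, chosen to illustrate the effect, and are not Swinburne’s actual figures):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem for a hypothesis H and a body of evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 1e-6  # even a very sceptical prior for a resurrection...

# ...is overwhelmed if the testimony is stipulated to be almost impossible
# unless the event really happened:
print(posterior(prior, p_e_given_h=0.99, p_e_given_not_h=1e-9))  # ~0.999

# Allow that religious movements produce such testimony anyway, and the
# "proof" evaporates:
print(posterior(prior, p_e_given_h=0.99, p_e_given_not_h=0.5))   # ~2e-6
```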

Notice some of Swinburne’s gems found in The Resurrection of God Incarnate (with my own bolding as usual):

Most of St Paul’s epistles are totally reliable historical sources. The synoptic gospels are basically historical works, though they do sometimes seek to make theological points (especially in the Infancy narratives) by adding details to the historical account. St John’s Gospel is also basically reliable, at any rate on the later events of the story of Jesus . . . . (p 69)

I argued earlier that, in the absence of counter-evidence, apparent testimony should be taken as real testimony and so apparent historical claims as real historical claims. (p. 70)

It seems fairly clear that the main body of the Acts of the Apostles is intended to be taken as literal history. It reads like any other contemporary work of history, and the later parts (which contain no reports of anything miraculous) are so detailed and matter-of-fact as to have a diary-like quality to them. (p. 71)

Hence there is no justification for thinking that Mark is trying to do anything else than record history when he writes about these events . . . (p. 73)

I conclude that the three synoptic Gospels purport to be history (history of cosmic significance, but I repeat, history all the same). (p. 74)

Just as apparent testimony must be read as real testimony, so real testimony must be believed, in the absence of counter-evidence.  (p. 76) Continue reading “How Bayes’ Theorem Proves the Resurrection (Gullotta on Carrier once more)”


2016-12-06

What’s the Difference Between Frequentism and Bayesianism? (Part 2)

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

by Tim Widowfield

Witch of Endor by Nikolay Ge

In the previous post we began to discuss the fundamental difference between the Bayesian and frequentist approaches to probability. A Bayesian defines probability as a subjective belief about the world, often expressed as a wagering proposition. “How much am I willing to bet that the next card will give me a flush?”

To a frequentist, however, probability exists in the physical world. It doesn’t change, and it isn’t subjective. Probability is the hard reality that over the long haul, if you flip a fair coin it will land heads up half the time and tails up the other half. We call them “frequentists” because they maintain that this fixed, objective parameter can be demonstrated by measuring the frequency of the same event over many repeated runs.
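The two definitions can be contrasted in a few lines of code. The frequentist reads the parameter off the long-run frequency of repeated trials; the Bayesian starts with a prior belief about the coin’s bias and updates it with the same flips. A small sketch (the fair coin and the uniform prior are assumptions of the example):

```python
import random

random.seed(1)
flips = [random.random() < 0.5 for _ in range(10_000)]  # repeated fair-coin trials

# Frequentist: probability IS the long-run relative frequency of heads.
print(sum(flips) / len(flips))  # converges toward the fixed, objective 0.5

# Bayesian: probability is a degree of belief -- here a Beta(a, b) distribution
# over the coin's bias, updated one flip at a time.
a, b = 1, 1  # uniform prior: no opinion about the bias yet
for heads in flips:
    if heads:
        a += 1
    else:
        b += 1
print(a / (a + b))  # posterior mean of the belief, also near 0.5
```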

Fairies and witches

But does objective probability really exist? After reading several books focused on subjective probability published in the past few decades, I couldn’t help noticing that Bruno de Finetti‘s Theory of Probability stands as a kind of watershed. In the preface, he says that objective probability, the very foundation of frequentism, is a superstition. If he’s correct, that means it isn’t just bad science; it’s anti-science. He writes: Continue reading “What’s the Difference Between Frequentism and Bayesianism? (Part 2)”


2016-11-22

What’s the Difference Between Frequentism and Bayesianism? (Part 1)

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

by Tim Widowfield

Picturing 50 realisations of a 95%-confidence interval (Photo credit: Wikipedia)

As my thesis partner and I gathered up the evidence we had collected, it began to dawn on us — as well as on our thesis advisers — that we didn’t have enough for ordinary, “normal” statistics. Our chief adviser, an Air Force colonel, and his captain assistant were on the faculty at the Air Force Institute of Technology (AFIT), where my partner and I were both seeking a master’s degree in logistics management.

We had traveled to the Warner Robins Air Logistics Center in Georgia to talk with a group of supply-chain managers and to administer a survey. We were trying to find out if they adapted their behavior based on what the Air Force expected of them. Our problem, we later came to understand, was a paucity of data. Not a problem, said our advisers. We could instead use non-parametric statistics; we just had to take care in how we framed our conclusions and to state clearly our level of confidence in the results.

Shopping for Stats

In the end, I think our thesis held up pretty well. Most of the conclusions we reached rang true and matched both common sense and the emerging consensus in logistics management based on Goldratt’s Theory of Constraints. But the work we did to prove our claims mathematically, with page after page of computer output, sometimes felt like voodoo. To be sure, we were careful not to put too much faith in those results, not to “put too much weight on the saw,” but in some ways it seemed as though we were shopping for equations that proved our point.

I bring up this story from the previous century only to let you know that I am in no way a mathematician or a statistician. However, I still use statistics in my work. Oddly enough, when I left AFIT I simultaneously left the military (because of the “draw-down” of the early ’90s) and never worked in the logistics field again. I spent the next 24 years working in information technology. Still, my statistical background from AFIT has come in handy in things like data correlation, troubleshooting, reporting, data mining, etc.

We spent little, if any, time at AFIT learning about Bayes’ Theorem (BT). Looking back on it, I think we might have done better in our thesis by chucking our esoteric non-parametric voodoo and replacing it with Bayesian statistics. I first encountered BT back around the turn of the century, when I was spending a great deal of time both managing a mail server and maintaining an email interface program written in the most hideous dialect of C the world has ever produced. Continue reading “What’s the Difference Between Frequentism and Bayesianism? (Part 1)”