2016-06-21

Hermann Detering confronts Richard Carrier—Part 3

This work is licensed under a Creative Commons Attribution 4.0 International License.

by Neil Godfrey

Let us call a spade a spade: Carrier may be an expert on the natural philosophers of the Early Roman Empire, but he is certainly not an expert on Paul. — H.D.

48 Comments

  • John MacDonald
    2016-06-21 19:23:17 GMT+0000 - 19:23 | Permalink

    Detering is correct. Carrier’s training no more makes him an expert in New Testament studies than does Detering’s training make him an expert in the natural philosophers of the Early Roman Empire. On the other hand, Carrier has published a peer-reviewed book on the topic, as well as scholarly articles …

    • Doston Jones
      2016-06-26 22:33:25 GMT+0000 - 22:33 | Permalink

      Carrier’s peer-reviewed book was on the historicity of Jesus, not on the historicity of Paul the Apostle or the authorship of the Pauline corpus.

      • John MacDonald
        2016-06-26 23:20:43 GMT+0000 - 23:20 | Permalink

        You don’t need to be an “expert” about something to have an opinion about it. And you can be an “expert” and still have a crackpot theory about the area you are an expert in: Eisenman comes to mind, and Robert M. Price. And none of Detering’s reputable peers find his musings about Paul to be persuasive.

  • R Pence
    2016-06-22 07:21:16 GMT+0000 - 07:21 | Permalink

    Be careful if commenting on Salm’s site. If there’s a pesky paragraph he doesn’t like (or doesn’t get), he’s liable to ‘abridge’ your comment. It rather takes some gall to rework internet comments.

    I made two points re: Carrier and using Bayes, which is perfectly fair to do since Carrier is known for Bayes and Detering mentions it in his piece.

    1. Using Bayes in historical work has about the same usefulness and limitations as clarifying an argument using symbolic logic. If you have a line of reasoning, sure, it could benefit by having all its premises and implications formalized. You can then spot your own errors and others (appropriately trained) can examine and assess your argument according to objective standards. Carrier’s exhortation to use Bayes could equally apply to something like symbolic logic, but there’s a reason why symbolic logic isn’t used all the time (not just because it’s ‘hard’): it’s a tool with limitations.

    2. Using Bayes runs into one interesting problem, which is the ‘meaning’ of a given probability. Say you took a poll in which you asked people to assign values to “very likely” and “likely”. Where would “likely” end and “very likely” begin? Some people might say that “very likely” begins at 80%, 90%, 95%, or 75%. Where does “likely” even begin? Some might say 50%, 60%, etc. The point is not that semantic terms such as “likely” figure into a computation, strictly speaking. But at the beginning of a Bayes process (assigning initial values) and at the end (interpreting the results) there is the basic problem of the “meaning” of a given probability. I would go so far as to say that there should be psychological research done on how people *perceive* probability. Just as with everything else (perception of time, perception of magnitude, etc.) you’d probably find a subjective slipperiness baked into the cake, so to speak.

    Using Bayes in historical research has benefit insofar as you render assumptions explicit and are forced to reason your way through probabilities in a rules-based way. Just as with symbolic logic, it’s great for clarification as a kind of shorthand (for those trained to use it). But it would in no way eliminate confusion or stupidity in the field; it wouldn’t even greatly diminish it. If Jesus historicists all started using Bayes tomorrow, you’d have fifty different historical Jesuses with Bayesian justifications in the footnotes.

    • HoosierPoli
      2016-06-22 10:46:22 GMT+0000 - 10:46 | Permalink

      ” I would go so far as to say that there should be psychological research done on how people *perceive* probability.”

      There has been a considerable amount of work done, including by the intelligence agencies of the US, on just this question. The problem is, if (hypothetically) the CIA sends a memo to the president saying “It is likely that Saddam Hussein may have weapons of mass destruction”, the reader could understand that to mean anywhere from a 20 to 80 percent probability, and certainly different readers will have different interpretations and therefore make different decisions. This has been and continues to be a huge problem, especially since vague statements of likelihood are mostly just ass-covering (“We didn’t say HAS, we said MAY have”). In other words, weaseling out of predictions is a problem in more than just academic disputes.

      • Neil Godfrey
        2016-06-23 05:39:52 GMT+0000 - 05:39 | Permalink

        But that’s not how Bayesian reasoning works. It is not a single shot guess like that. Assign a probability, then assess and revise that probability against some piece of evidence or background knowledge; then repeat that process, and so on.
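
The loop Neil describes (assign a probability, assess it against a piece of evidence, revise, repeat) is just repeated application of Bayes’ rule. A minimal sketch; the likelihood numbers are invented purely for illustration:

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayesian revision of P(H) against a piece of evidence E.

    p_e_given_h     = P(E | H)
    p_e_given_not_h = P(E | not H)
    """
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# Assign a starting probability, then revise it against each piece of
# evidence in turn -- the "repeat that process" part of the loop.
p = 0.5
for lh, lnh in [(0.8, 0.4), (0.6, 0.3), (0.9, 0.5)]:
    p = update(p, lh, lnh)

print(round(p, 3))
```

No single-shot guess is involved; each pass moves the estimate in the light of one more consideration.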

        • R Pence
          2016-06-24 11:37:42 GMT+0000 - 11:37 | Permalink

          I think I was pretty clear that using Bayes is not simply a question of a single subjective assessment of a given probability. My point goes to the problem of assigning ‘meaning’ to a given probability. I’m not sure that, strictly speaking, any probability *can* be meaningful except in relation to other probabilities. For example, I can say that something with a 55% probability is *more probable* than not. (55% is more than 45%.) But to say that 55% is likely, very likely, sort of likely, etc. gets me into trouble. Some might even say in certain contexts that 55% is unlikely.

          Moreover, philosophically speaking, the relation between physical evidence and quantities (in this case probabilities) is tenuous – another potential problem. To say that you can set an assigned probability ‘against’ a piece of evidence involves a lot of assumptions.

          In the end, as I said above, using Bayes for things like historical research is like using formal logic. In formal logic, you can construct a perfectly *valid* syllogism that follows all the logical rules. But your syllogism may not be *sound*. (For example: Socrates is a cow; all cows have horns; Socrates has horns.) Likewise, with Bayes if my assumptions are flawed or the probabilities I assign skewed, I can produce something that mathematically holds together but has no truth-value. Thus, using Bayes is great for comparing notes. But it would not be the end-all, be-all grand scientific basis for historical research that Carrier makes it out to be.

          • Neil Godfrey
            2016-06-29 04:36:56 GMT+0000 - 04:36 | Permalink

            I’m not sure what the problem is. Even if one assigns an initially bizarre and unrealistic probability to X, then the follow-up process of adjustment in the light of each piece of data will soon move that probability to a more realistic figure.
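
A quick way to see why a bizarre starting value gets washed out: two observers with extreme priors, updating on the same stream of evidence, converge. The figures below are invented for illustration:

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule for a single piece of evidence."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# Fifteen pieces of evidence, each twice as expected under H as under not-H.
evidence = [(0.8, 0.4)] * 15

skeptic, believer = 0.01, 0.99   # deliberately "bizarre" starting points
for lh, lnh in evidence:
    skeptic = update(skeptic, lh, lnh)
    believer = update(believer, lh, lnh)

# Both estimates end up close together despite the extreme priors.
print(round(skeptic, 3), round(believer, 3))
```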

    • Sili
      2016-06-22 12:55:39 GMT+0000 - 12:55 | Permalink

      If Jesus historicists all started using Bayes tomorrow, you’d have fifty different historical Jesuses with Bayesian justifications in the footnotes.

      And this would be bad, why?

      • R Pence
        2016-06-22 13:56:19 GMT+0000 - 13:56 | Permalink

        Never said it would be bad. Said that widespread use of Bayesian justifications wouldn’t greatly reduce the level of confusion in the field or in academic factional disputes generally.

      • 2016-06-22 14:39:07 GMT+0000 - 14:39 | Permalink

        I don’t think that scenario would be bad (one could skip the footnotes); the question is whether it would be advantageous.
        Bayes theorem has the potential to clarify an argument, however whenever one uses mathematical notation there is also the potential to obfuscate an argument deliberately or accidentally. I think that is realistic because the Bayesian argument can never be carried out in full details as Bayes theorem would otherwise suggest we should.
        These two questions should be weighed against each other when one decides if one should use Bayes theorem.

        • R Pence
          2016-06-24 11:39:51 GMT+0000 - 11:39 | Permalink

          My point is that there is still enough subjectivity and wiggle-room ‘baked in’ to using Bayes that there wouldn’t be any great difference in the field.

        • MrHorse
          2016-06-26 02:44:57 GMT+0000 - 02:44 | Permalink

          Tim, “the Bayesian argument can never be carried out in full details as Bayes theorem would otherwise suggest we should” is not clear.

          Do you mean ‘the Bayesian argument can never be carried out</i? as fully as Bayes theorem would otherwise suggest we could’ ?

          • MrHorse
            2016-06-26 03:09:01 GMT+0000 - 03:09 | Permalink

            That’s supposed to be

            Do you mean ‘the Bayesian argument can never be carried out as fully as Bayes theorem would otherwise suggest we could’ ?

            • 2016-06-26 15:55:35 GMT+0000 - 15:55 | Permalink

              Hi,

              could/should; I guess it could be either. I simply mean that if you want to take every piece of information into account you would end up with 10’000 pages (I don’t blame Carrier!).

    • 2016-06-22 14:45:10 GMT+0000 - 14:45 | Permalink

      I agree with your two points but I wish to add a few thoughts regarding the first point. An additional problem with Bayes theorem is that a logical (symbolic) argument is “crisp”, that is, it is either true (in which case the conclusion follows) or false, and (hopefully!) we can figure this out.
      A Bayesian argument can also be true or false logically, but it has the additional problem that the probabilities are most often guessed or approximated, and so we introduce uncertainty and bias with no real way to detect this. Additionally, the bias will be magnified by combining several probabilities as Bayes theorem suggests we do.

      • Neil Godfrey
        2016-06-23 05:36:45 GMT+0000 - 05:36 | Permalink

        But if the values were not “guesses or approximations” they would not be probabilities.

        • 2016-06-23 09:08:54 GMT+0000 - 09:08 | Permalink

          Hi,

          The common way to obtain probabilities initially is by symmetry arguments (a simple case: The chance a die comes up 3 is 1/6 because the die has 6 sides).

          • Neil Godfrey
            2016-06-29 04:40:46 GMT+0000 - 04:40 | Permalink

            Of course. But uncertainty and bias in other types of probability assessments are a given and the reason for reiterated assessments according to each piece of new data.

            • 2016-07-01 09:56:43 GMT+0000 - 09:56 | Permalink

              I don’t disagree with that at all; I just think it is worth keeping in mind that when we combine several numbers (each with some uncertainty) we would normally expect the uncertainty in the final result to be larger than the individual uncertainties. It’s a very basic statement and taken alone it does not mean that much. For instance, if we combine 5 pieces of evidence, and we know all 5 provide evidence for one conclusion over another (but we are uncertain as to how much), we can still say that combining the 5 pieces of evidence makes the conclusion most likely true. It’s a bit more of a problem when you’ve got 2 pieces for and 3 against (and perhaps other ways of dividing up the evidence), where uncertainty and bias become more important.
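
The two situations Tim distinguishes can be sketched in odds form, where each independent piece of evidence contributes a likelihood ratio and the ratios multiply. All the ratios here are invented for illustration:

```python
import math

def posterior_odds(prior_odds, likelihood_ratios):
    """Posterior odds = prior odds times the product of the likelihood
    ratios of independent pieces of evidence."""
    return prior_odds * math.prod(likelihood_ratios)

# Five pieces of evidence all favouring H (every ratio > 1): even if each
# ratio is uncertain, the combined result still favours H.
all_for = posterior_odds(1.0, [1.5, 2.0, 1.3, 1.8, 1.4])

# Two pieces for H and three against: the result sits near even odds,
# so errors in the individual guesses matter much more.
mixed = posterior_odds(1.0, [1.5, 2.0, 0.6, 0.8, 0.5])

print(all_for > 1.0)
print(round(mixed, 2))
```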

      • Zbykow
        2016-06-23 13:32:53 GMT+0000 - 13:32 | Permalink

        BT does not introduce any additional uncertainty, it only deals with uncertain matters, just a tool to quantify uncertainty.
        Sure it doesn’t give simple yes/no answers, you can’t blame it for not being something it’s not supposed to be.

        • 2016-06-23 14:08:15 GMT+0000 - 14:08 | Permalink

          I don’t disagree with anything you wrote in your post, but I think you might be missing my point. I have tried to illustrate my point in ThatDocumentIHaveQuoted117Times (click my name) in Figures 1-3 and related text. Put briefly, I agree with you that BT does not introduce uncertainty, but our guessing/approximating the probabilities and using BT does.

          • Zbykow
            2016-06-23 14:59:01 GMT+0000 - 14:59 | Permalink

            “I agree with you that BT does not introduce uncertainty, but our guessing/approximating the probabilities and using BT does.”

            No, because we’re guessing anyway.
            We can make a quick single guess right off the bat, or take our time and use Bayes to break things down, which tends to give better result because we’re less likely to miss something important, and it makes explicit how component beliefs interact.

            • 2016-06-24 08:45:56 GMT+0000 - 08:45 | Permalink

              I still don’t think you get my point. Let’s say I have to guess the height of a person in front of me. I can either make one guess at his height (1.85m), or I can make individual guesses at the length of his legs (90cm), the length of his torso (80cm) and the length of his head and neck (35cm) and then combine these to get a height of 2.05m.

              All things being equal, the second way of guessing his height will be more prone to error than the first because I add several numbers, each of which is estimated with some error.
              With BT something similar happens: we guess individual probabilities and combine them, and that increases (all things being equal) the error. Do we agree on this, and do you see my point?

              Now, you can argue that’s acceptable because we can be more accurate in our guesses at the individual numbers (i.e. the individual lengths) than in the combined guess — perhaps that’s true but it is a bit hard to know.
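
Tim’s height example is the textbook behaviour of independent errors: variances add, so the standard error of a sum exceeds that of its parts. A Monte Carlo sketch, with a 5 cm standard error assumed for every guess:

```python
import random
import statistics

random.seed(0)
N = 100_000

# One direct guess at the full height (true value 185 cm, 5 cm std error).
direct = [185 + random.gauss(0, 5) for _ in range(N)]

# Three part-guesses (legs 90, torso 80, head and neck 35), each with the
# same 5 cm std error, summed into a height estimate.
parts = [(90 + random.gauss(0, 5)) + (80 + random.gauss(0, 5))
         + (35 + random.gauss(0, 5)) for _ in range(N)]

# Independent errors add in variance: the sum's std error is
# sqrt(3) * 5 cm, about 8.7 cm, versus 5 cm for the single guess.
print(round(statistics.stdev(direct), 1))
print(round(statistics.stdev(parts), 1))
```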

              • Zbykow
                2016-06-24 10:47:13 GMT+0000 - 10:47 | Permalink

                “Let’s say I have to guess the height of a person in front of me. I can either make one guess at his height (1.85m), or I can make individual guesses at the length of his legs (90cm)…”

                This is not how it works.
                In this case you’ve got only one single piece of evidence – visual input, there’s nothing to break down.

                Btw you’re talking about measurement error; that’s quite a different matter, and it works completely differently from Bayesian probability. No analogy.

              • 2016-06-24 10:59:50 GMT+0000 - 10:59 | Permalink

                Well Zbykow, I think you found a way to miss the point of the analogy :-).

                When we apply Bayes theorem we express one probability in terms of 3 others. If we assume some uncertainty in our ability to accurately guess these probabilities, the uncertainty of the final guess will as a rule be larger than the uncertainty in each of the guesses. Can we perhaps agree so far?

                If not, I am not sure where our disagreement actually lies. Perhaps it would be helpful if you looked at figure 2-3 and subsequent text (in particular, the bullet points where I summarize my conclusions) in my review and explained which (if any) you believe are wrong and what the right view is?
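
Tim’s point here (Bayes’ theorem computes one probability from three guessed inputs, so guessing error in the inputs widens the spread of the output) can be checked by simulation. The baseline values and the 0.05 error size below are invented for illustration:

```python
import random
import statistics

def bayes(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) from a prior and two likelihoods."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

random.seed(1)

def guess(true_value, error_sd=0.05):
    """A guessed probability: the true value plus estimation error."""
    return min(max(true_value + random.gauss(0, error_sd), 0.01), 0.99)

# Push noisy guesses at the three inputs through Bayes' theorem many times
# and look at the spread of the resulting posterior.
posteriors = [bayes(guess(0.3), guess(0.7), guess(0.4)) for _ in range(50_000)]

# With exact inputs the posterior would be 0.21 / 0.49, about 0.43; with
# 0.05 error on each input the output spread comes out larger than 0.05.
print(round(statistics.mean(posteriors), 2))
print(round(statistics.stdev(posteriors), 2))
```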

              • Zbykow
                2016-06-24 14:42:47 GMT+0000 - 14:42 | Permalink

                Absolutely no agreement here.
                You’re confusing measured quantities with probabilities; they work differently: probabilities don’t add up like weights or lengths do.
                Second, there’s no such thing as a “right” subjective probability; it’s right as long as one is honest about their beliefs.
                Third, if there are separate pieces of evidence, you have no choice but to consider them separately, either explicitly (e.g. with BT) or by some obscure process in the back of the head.

                “Perhaps it would be helpful if you looked at figure 2-3 and subsequent text”

                That about guessing weights of an apple, teddy bear and crayons?

                Yes. it’s silly and irrelevant, but the funniest thing is you made a major goof there.
                If you’re guessing the weight of a teddy, then crayons, then total, in that order, you’re still guessing the weight of an apple! The results should still be about the same.

                How did you get those fine bell curves? Did you actually conduct many experiments involving different people guessing weights of teddy bears,
                or did you guess?

              • 2016-06-24 17:05:47 GMT+0000 - 17:05 | Permalink

                Well, let’s avoid the analogy as it seems to be confusing you. You said that a subjective probability is “right” as long as we are “honest”. However, what matters here is if it is accurate and how accurate it is. After all, if we could just estimate probabilities accurately we could just estimate the probability Jesus existed directly and there would be no need for Bayes. It is this uncertainty/inaccuracy I am concerned about. This variability is something Carrier himself discusses in PH.

                Figure 2-3 (it would probably be helpful for you to read the text before determining it is wrong :-)) does not relate to the apple example. You are correct that the result should be “about” the same, but it is the “about” I am concerned about.

                The normal distributions are just there for illustration’s sake. You can choose another distribution if you like and the results will be qualitatively the same.

              • Zbykow
                2016-06-24 20:51:59 GMT+0000 - 20:51 | Permalink

                “You said that a subjective probability is “right” as long as we are “honest”. However what matters here is if it is accurate and how accurate it is”

                Neither “right” nor “accurate” make any sense in this context.
                It’s about subjective beliefs, which change with time and vary from person to person.
                What accuracy are we talking about? How do you tell objectively which beliefs are more accurate than others?

                “Well, let’s avoid the analogy as it seems to be confusing you.”

                There’s nothing confusing about your analogy, it’s just wrong.

                In your analogy no hypotheses are tested, only guesses are made about values chosen from an infinite set. In this scenario accuracy indeed, does apply, but it’s only because it’s just a measurement made with an inaccurate tool, and has nothing to do with bayesian probability – probably the source of your accuracy misconception.

                But your analogy can be made better (that is similar to our case).
                The hypothesis becomes, say: “the apple is heavier than 100g”
                one guy says: “I give it 60% it’s true”
                another: “70% on true”

                Now, given you’ve got no idea how much the apple weighs, tell me, which one is more accurate?

              • 2016-06-24 21:30:30 GMT+0000 - 21:30 | Permalink

                It’s about subjective beliefs, which change with time and vary from person to person.
                What accuracy are we talking about? How do you tell objectively which beliefs are more accurate than others?

                Bingo! People assign different probabilities to the same evidence when the situation is complex; I know myself that I change my probability estimates from day to day. The problem is exactly how you tell which is more accurate, and what effect this variability will have on the computations. I discuss this concretely in the text associated with figures 2 and 3 in my review.

                In the analogy with height, the inaccuracy in our ability to estimate (visually, not with a tool) a person’s height (two different people may give different answers) is considered equivalent by analogy to our ability to estimate probabilities. I think I have exhausted my ability to explain the analogy so I suggest we leave it at that.

                But your analogy can be made better (that is similar to our case).
                The hypothesis becomes, say: “the apple is heavier than 100g”
                one guy says: “I give it 60% it’s true”
                another: “70% on true”

                Now, given you’ve got no idea how much the apple weighs, tell me, which one is more accurate?

                I can’t! My entire point is that our ability to estimate probabilities is limited and so comes with some inaccuracy! So can we, after all this time, agree that our ability to estimate probabilities is somewhat imprecise, i.e. comes with an error? Then we can go on to consider the discussion of figures 2 and 3; otherwise we should discuss this issue more.

              • Zbykow
                2016-06-25 14:42:52 GMT+0000 - 14:42 | Permalink

                “a person’s height (two different people may give different answers) is considered equivalent by analogy to our ability to estimate probabilities.”

                The mistake is, they’re not equivalent.
                In the length example accuracy pertains to reality, which can be objectively measured. That’s not the case with Bayesian probability. You have every right to hold a false belief, if in your subjective judgement the evidence points to it.
                And most important, you can’t stack pieces of evidence and measure them with a tape. Unlike in the length example, you don’t have a whole thing; you must judge every piece separately anyway, so your error-accumulation argument is moot.

                “So can we after all this time agree that our ability to estimate probabilities is somewhat imprecise, i.e. comes with an error?”

                In the sense you are using in your examples (that error is a deviation from some true value), no, because no true, objective value of a subjective probability can possibly be known, if such a thing even exists.

                Only errors that can be detected are not inaccuracies, but faulty reasoning, failures of logic, lack of knowledge and the like – and BT approach is excellent at weeding out those.

              • 2016-06-25 15:05:55 GMT+0000 - 15:05 | Permalink

                Only errors that can be detected are not inaccuracies, but faulty reasoning, failures of logic, lack of knowledge and the like – and BT approach is excellent at weeding out those.
                Okay it seems that we agree that we, us, everyone are prone to making errors due to the above factors you mention in our estimates of the probabilities? That’s what I have claimed all along. I have written about what I see as the implications of this elsewhere which you can read if you like. Or not :-). Quite frankly I think this discussion is a bit tedious.

              • Zbykow
                2016-06-25 17:57:48 GMT+0000 - 17:57 | Permalink

                “Okay it seems that we agree that we, us, everyone are prone to making errors due to the above factors you mention in our estimates of the probabilities? That’s what I have claimed all along.”

                You also claimed, that BT introduces additional uncertainty.
                That’s not true. Sure it can make one more uncertain, but only because it helps quantify uncertainty better.

                You also claimed it’s silly like breaking objects to pieces to measure them.
                That’s not true, because it deals with stuff already in pieces, and the process is not even close to measuring physical quantities.

                You claimed it causes additional cumulative error, as in measuring.
                Not the case. It streamlines reasoning so errors can be avoided or revealed.

                Your criticism is based on a false analogy.

              • 2016-06-25 20:59:10 GMT+0000 - 20:59 | Permalink

                You also claimed, that BT introduces additional uncertainty.

                Yes, and under the conditions I consider I believe it does. If you wish to object to the claim you must read the argument and object to the specifics.

                You also claimed it’s silly like breaking objects to pieces to measure them.

                That’s not true, because it deals with stuff already in pieces, and the process is not even close to measuring physical quantities.

                No, I don’t claim that the use of BT is silly, I use it every day in my professional work. If you believe my argument is wrong I will be happy to hear the specifics you object to. Right now you are simply stating opinions that are so general as to be impossible to even evaluate.

                You claimed it causes additional cumulative error, as in measuring.
                Not the case. It streamlines reasoning so errors can be avoided or revealed.

                Combining several measurements into one will in general imply an accumulation of errors. Similarly, combining several probabilities, each obtained with some imprecision as we appear to agree, into one will in general imply an increase in the uncertainty. If you disagree with me, could you please state the specifics of my argument (figures 2-3) you disagree with?

                Your criticism is based on a false analogy

                Then discard the analogy as this abstraction appears to cause undue confusion and consider my actual argument and the actual assertions I do make and support. Feel free to email me, timhendrix@gmx.com, if you think it is OT to this thread.

              • Zbykow
                2016-06-27 21:20:42 GMT+0000 - 21:20 | Permalink

                You might just take it as a feedback on your critical reviews.
                Weak analogies and made up charts are not terribly convincing.

              • 2016-06-28 10:39:37 GMT+0000 - 10:39 | Permalink

                Weak analogies and made up charts are not terribly convincing.

                Well, first of all, the analogy serves as an illustration of a simple principle: when a measurement is made by combining several numbers, the error in the final measurement will typically be larger than the error in the individual numbers.

                You can ignore the analogy/illustration and simply consider the actual argument; I have repeatedly asked you what is wrong in the actual argument I make and which of the conclusions I draw you think are false, and you have so far avoided this question.

                Secondly, if you are implying an allegation of fraud or error in my graphics I do hope you will substantiate this.

            • Zbykow
              2016-07-01 16:38:51 GMT+0000 - 16:38 | Permalink

              No, I believe you being purposely dishonest is unlikely.

              “You can ignore the analogy/illustration and simply consider the actual argument;”

              I can’t because you don’t provide any arguments other than analogy/illustration. You don’t mistake your conclusions for arguments, do you?

              If you think BT introduces more uncertainty, the first question is, compared to what method? An educated guess?
              Then you should try to prove it somehow, and quantify the difference.

              • 2016-07-01 21:00:21 GMT+0000 - 21:00 | Permalink

                Well, Zbykow, I believe I do provide arguments to support the claims I have made elsewhere and which I have repeated on this thread. If you don’t believe this to be the case, well, I am not sure that is my fault.

                If you think BT introduces more uncertainty, the first question is, compared to what method? An educated guess? Then you should try to prove it somehow, and quantify the difference.

                I think you need to read what I write a bit more closely. BT itself does not introduce uncertainty; however, the uncertainty in the result obtained by BT will as a rule be larger than the uncertainty in the individual numbers*. You are right to ask “compared to what”. Presumably, an educated guess could have more uncertainty than the BT result. So how do we know if that is the case? How do we know that is not the case? These are very difficult questions to answer since we are guessing probabilities left and right; however, I think a person who proposes the use of BT must at least keep them in mind. That’s one of my points.

                * We need to nail down what we mean by “larger” in this context but I don’t think we should get too distracted by that here.

        • R Pence
          2016-06-24 11:50:54 GMT+0000 - 11:50 | Permalink

          I agree with this. I would add the nuance that using Bayes allows us to render uncertainties more explicit by way of an objective method. But this should not be confused with the stronger and less justified position that we are saying something absolute or final about the uncertainties involved. In other words, my uncertainties will not match your uncertainties if we compare our Bayesian arguments.

          But the sense I get from Carrier is that he sometimes veers into the ‘strong’ Bayesian position according to which we are able to determine ‘actual’ uncertainties in a more absolute sense. But I don’t think this is the case – that we are saying anything unequivocal about the uncertainties involved in the same way that I can solve an equation for a particular value.

          Bayes is great, but strictly within its limits: a) for internal mathematical consistency; b) for comparison to other subjective applications using the same rules.

      • R Pence
        2016-06-24 12:05:55 GMT+0000 - 12:05 | Permalink

        I think this way also. Which is why I might go so far as to say that the use of Bayes in historical research is objective without being empirical. It seems to me it’s just a rigorous way of mathematically relating a set of subjectively assigned probabilities. The use of Bayes might have a certain explanatory power, but it’s possible to overstate its efficacy.

        • Neil Godfrey
          2016-06-29 04:51:58 GMT+0000 - 04:51 | Permalink

          I suspect some of the differences of views on the question of Bayes in historical questions derives from confusion about the nature of history itself.

          What Carrier is doing in OHJ is a quite narrow form of historical inquiry. It could even be called something quite different from history. It’s more akin to detective work that attempts to arrive at the answer to who committed a crime by examining all the evidence and testimonies. It’s a form of fact-finding.

          (History as understood by a good number of historians is more than fact-finding. That process is only the first step towards writing a history.)

          I suspect in detective work there would be less criticism of the Bayesian process, yet I think that’s exactly what Carrier is doing — working with evidence pure and simple, not “history” or “historical questions” etc. Those are all things to be brought into the mix much, much later. But it’s easy to let them confuse the picture at the earliest fact-finding stage.

          Biblical HJ scholars are partly to blame for this. They themselves are working on the basis of a pre-twentieth century view of “history” — where certain events, persons, situations, etc are all pictured as really existing, in reality, and needing consideration whenever they think about specific questions. Yet in reality there is no really existing history. Everything is in the historian’s mind. And everything, everything, is subject to radical review because of this. It all comes back to understanding that evidence has to be interpreted and a picture created in our minds on the basis of this.

    • Mark
      2016-06-22 15:20:59 GMT+0000 - 15:20 | Permalink

      It’s entirely his right to abridge your comment.

      • R Pence
        2016-06-24 11:56:43 GMT+0000 - 11:56 | Permalink

        It’s within his power, but I wouldn’t say it’s his right. In my experience, a comment is either approved or denied. This is fairly universal. ‘Editing’ a comment for reasons other than use of profanity rather treats the comment as if it doesn’t belong at all to the person who took the time to compose it.

        In any case, if you take the time to write a comment – and thereby indirectly lend support to a site by contributing to the discussion, adding page hits, etc. – it’s entirely within your right not to visit the site again if your words are going to be altered. I exercise that right.

    • Neil Godfrey
      2016-06-23 05:35:47 GMT+0000 - 05:35 | Permalink

      There is no problem with each of us assigning different subjective values for “likely”, “very likely”, etc — all that is needed is for one to be consistent if working alone, and to nut out a common agreement if working with others.

      • R Pence
        2016-06-24 12:00:41 GMT+0000 - 12:00 | Permalink

        Again, I don’t disagree, though with Bayes you are clearly doing computation with quantities and not intuitively juggling qualities, i.e. terms like ‘likely’, etc. Bayes is great for internal rigor and consistency *and* comparison with others using the same method. I simply wouldn’t overstate its usefulness in historical research, or more precisely, suggest its use would dramatically transform the quality of work on Jesus historicity, for example, as Carrier does.

  • Mark Erickson
    2016-06-26 03:58:15 GMT+0000 - 03:58 | Permalink

    Meh. Does anyone think these three posts were an effective rebuttal to Carrier’s arguments?

    • MrHorse
      2016-06-26 08:49:13 GMT+0000 - 08:49 | Permalink

      I think each issue or argument would need to be addressed on its own.

      Carrier seems to want to avoid arguing over aspects of Paul’s historicity, and that may be because he has decided to address methodology as well as throwing only ‘one cat among the pigeons’.
