Mythicists have been told so often (oh how so very often) that they should publish in peer-review journals to be taken seriously. Peer-review, the public has been repeatedly told, is the guarantee of true scholarship. In the recent scholarly outrage over Joseph Atwill’s thesis gaining more public attention than the works of the academy, Larry Hurtado reminded us that “peer-review” and “reputable” go together like carrots and peas and Tom Verenna was once again extolling peer-review as the magic gateway guaranteed to keep intellectuals worthy and honest.
Now I do understand the reasons for peer-review. But what these reminders from among biblical scholars are overlooking is the research that demonstrates that peer-review in the various ways it is practiced today is also a deeply flawed process. Or maybe all those studies demonstrating this have no relevance for theologians.
You see, there is a conflict between what I read on the web by Bible scholars about how effective the peer-review process is in their profession on the one hand, and what I read about the flaws in the peer-review process in my professional capacity (coordinator of a research data management project and of a research publications archiving and access project) in an academic institution on the other.
While I read of the virtues of peer-review for maintaining the pure standards of biblical scholars after hours, during my work time I am reading published research findings that are not so sanguine about peer-review. Why the difference? Could it be that the research is focused mostly on the sciences? No doubt the nature of that sort of material makes objective analysis easier. Does that mean the demonstrated failings of peer-review could never apply to the field of biblical studies?
Given that scientists are increasingly being exposed to an understanding of the flaws in the peer-review process, are we to assume that biblical scholars are immune from these flaws and that their peer-review mechanisms really are guarantors of quality work?
One article that referenced several studies on the peer-review process is Richard Smith’s “Classical peer review: an empty gun” in Breast Cancer Research 2010, 12 (Suppl 4):S13 doi:10.1186/bcr2742 (a peer-reviewed journal).
If peer review was a drug it would never be allowed onto the market
This is how the article begins. It is a quotation from the deputy editor of a leading medical journal and “intellectual father of the international congresses of peer review that have been held every four years since 1989”, Drummond Rennie.
Later the article makes this claim:
If peer review is to be thought of primarily as a quality assurance method, then sadly we have lots of evidence of its failures. The pretentiously named medical literature is shot through with poor studies.
One would think that in a field like medical research, peer-review would ensure that only accurate information is published. Certainly we would not expect the peer-review process to let through anything that would cause public harm.
But the facts prove otherwise.
There is much that is published that is downright false. The editors of the ACP Journal Club find that less than 1% of studies in most journals are “both scientifically sound and important for clinicians”. There are also documented instances of bad studies being published that have led to patient heart attacks and measles epidemics.
Note the following and ask if we have the same types of human nature producing and reviewing articles in biblical studies (with my bolding and formatting):
Doug Altman, perhaps the leading expert on statistics in medical journals, sums it up thus: ‘What should we think about researchers who
- use the wrong techniques (either wilfully or in ignorance),
- use the right techniques wrongly,
- misinterpret their results,
- report their results selectively,
- cite the literature selectively,
- and draw unjustified conclusions?
We should be appalled. Yet numerous studies of the medical literature have shown that all of the above phenomena are common. This is surely a scandal.’
Back to Drummond Rennie:
Drummond Rennie writes in what might be the greatest sentence ever published in a medical journal:
‘There seems to be no study too fragmented, no hypothesis too trivial, no literature citation too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.’
Are biblical scholars more professional as a whole than doctors? Are their arguments and publications more rigorous? According to everything I read by biblical scholars themselves I must think they really are. No doubt our souls are worth much more care than our physical bodies.
Richard Smith continues:
We have little or no evidence that peer review ‘works,’ but we have lots of evidence of its downside.
What are the downsides?
Among those listed are:
- Bias: the ‘sexier’ articles end up in the ‘top’ journals — thus actually conveying a distorted view of science. (I’ll return to this point.)
- “Peer review is largely a lottery.” Numerous studies have shown that when several reviewers are asked to assess the same paper, the level of agreement among them on its worthiness for publication is little higher than would be expected by chance.
- Peer-review does not detect errors. Again numerous studies have demonstrated this: errors (a mix of major and minor) are deliberately inserted into papers, which are then sent out for peer review, and the rate at which reviewers detect those errors is very low indeed.
- Bias again, and again confirmed by many studies. The most famous study:
- The authors took 12 studies that came from prestigious institutions that had already been published in psychology journals. They retyped the papers, made minor changes to the titles, abstracts, and introductions but changed the authors’ names and institutions. They invented institutions with names like the Tri-Valley Center for Human Potential. The papers were then resubmitted to the journals that had first published them. In only three cases did the journals realise that they had already published the paper, and eight of the remaining nine were rejected – not because of lack of originality but because of poor quality. The authors concluded that this was evidence of bias against authors from less prestigious institutions. Most authors from less prestigious institutions, particularly those in the developing world, believe that peer review is biased against them.
- Another bias is the bias against new ideas, or the truly original. How often do we read a biblical scholar insisting that academics are all very keen to overturn the status quo and come up with radical new theses, so that mythicism should be in all the peer-review journals if it had any merit? Apparently, scientists have a different DNA (my bolding and underline):
- Peer review might be described as a process where the ‘establishment’ decides what is important. Unsurprisingly, the establishment is poor at recognizing new ideas that overturn the old ideas. It is the same in the arts where Beethoven’s late string quartets were declared to be nothing but noise and Van Gogh managed to sell only one painting in his lifetime. David Horrobin, a strong critic of peer review, has collected examples of peer review turning down hugely important work, including Hans Krebs’s description of the citric acid cycle, which won him the Nobel prize, Solomon Berson’s discovery of radioimmunoassay, which led to a Nobel prize, and Bruce Glick’s identification of B lymphocytes.
- One last form of weakness is apparently completely unknown among the scholarly publication world of biblical studies:
- Reviewers can steal ideas and present them as their own or produce an unjustly harsh review to block or at least slow down the publication of the ideas of a competitor. These have all happened.
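A note on what “little higher than would be expected by chance” means in the agreement studies mentioned above: inter-reviewer agreement is commonly quantified with a statistic such as Cohen’s kappa, where 0 indicates chance-level agreement and 1 perfect agreement. A minimal sketch, using invented reviewer verdicts rather than data from any study cited here:

```python
# Toy illustration (invented numbers) of chance-corrected agreement
# between two reviewers, using Cohen's kappa. Kappa near 0 means the
# reviewers agree no more often than random accept/reject verdicts would.

def cohens_kappa(a, b):
    """Cohen's kappa for two raters giving binary accept/reject verdicts."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(1 for x, y in zip(a, b) if x == y) / n
    # Chance agreement: probability both say "accept" plus both say "reject"
    p_a1, p_b1 = sum(a) / n, sum(b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Two hypothetical reviewers' verdicts on ten submissions (1 = accept)
rev1 = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
rev2 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
print(round(cohens_kappa(rev1, rev2), 2))  # prints 0.2: barely above chance
```

The reviewers here agree on 6 of 10 papers, which sounds respectable until corrected for the 5 agreements pure chance would produce; that is the kind of result the studies Smith summarizes report.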
I said I’d return to the point about the ‘sexier’ articles ending up in the ‘top’ journals.
One of the conclusions of a recent study by Brembs, Button and Munafo, “Deep impact: unintended consequences of journal rank”, was that journal rank is “a moderate to strong predictor of both intentional and unintentional scientific unreliability”.
The same article contains these alarming findings:
As journal rank [meaning here the most reputable journals are the most at fault] is also predictive of the incidence of fraud and misconduct in retracted publications, as opposed to other reasons for retraction (Steen, 2011a), it is not surprising that higher ranking journals are also more likely to publish fraudulent work than lower ranking journals (Fang et al., 2012).
There are thus several converging lines of evidence which indicate that publications in high ranking journals are not only more likely to be fraudulent than articles in lower ranking journals, but also more likely to present discoveries which are less reliable (i.e., are inflated, or cannot subsequently be replicated). Some of the sociological mechanisms behind these correlations have been documented, such as pressure to publish (preferably positive results in high-ranking journals), leading to the potential for decreased ethical standards (Anderson et al., 2007) and increased publication bias in highly competitive fields (Fanelli, 2010). The general increase in competitiveness, and the precariousness of scientific careers (Shapin, 2008), may also lead to an increased publication bias across the sciences (Fanelli, 2011). This evidence supports earlier propositions about social pressure being a major factor driving misconduct and publication bias (Giles, 2007), eventually culminating in retractions in the most extreme cases.
Mention was made in that last paragraph of publication bias. Another such bias favours positive outcomes. This was demonstrated in the article “Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial” (see the link for the full article and citation). Its main conclusion:
A fabricated manuscript with a positive outcome was more likely to be recommended for publication than was an otherwise identical no-difference manuscript.
But when biblical scholars were finally able to come up with arguments that the Testimonium Flavianum in Josephus really did contain more evidence for the historical Jesus after all, no doubt nothing but purely objective criteria were the foundation of this overturning of the old consensus. Nothing so base as Positive Outcome Bias would have influenced Biblical Scholars and Theologians!
So what is it about scientists and doctors that make them more morally frail than biblical scholars and their peer guarantors of quality work?
The authors of that paper attempt an answer in their final paragraph (bolding added):
It has been proposed that bias of the sort observed by Mahoney and herein is not just a part of evidence-based medicine or peer review but is part of human cognitive behavior (finding what one seeks); indeed, Mahoney pointed this out and suggested that Francis Bacon identified the phenomenon nearly 400 years ago. Previous studies have found that the “newsworthy” (defined as a positive finding) is more likely to draw a favorable response from peer reviewers and, indeed, that work with positive outcomes is more likely to be submitted to peer review in the first place.
Aren’t we aficionados of biblical and Christian origins studies lucky we are regaled with only sound and sober scholarship guaranteed to be of the highest quality by peer-review gatekeepers who are indeed closest to the word of God! Evidence indeed that our souls, and their intellectual keepers, are deemed more precious than our mortal coils and those who care for them.
22 thoughts on “If Peer-Review Does Not Work for Science Why Does It Work for Biblical Studies?”
I submitted a paper to a theology journal which critiqued a paper by an earlier author. It was rejected with the comment that the underlying idea presented by the first author was not convincing. Why did they publish it, then? Different reviewer, different editor.
Another problem with pre-publication peer-review discussed in the literature is that it is not nearly as important as the post-publication peer-review. Most articles that pass the pre-pub process are soon forgotten, never or rarely cited, fated to irrelevance. On the other hand, some of the most ground-breaking work has had a hard time being accepted by the pre-pub process.
By turning down your paper with that excuse they are, consciously or not, hiding the failure of the system that your paper would have brought to a wider attention.
Here is an interesting, very factual article on peer review, showing that it does not weed out bogus claims, that it has occasionally held back innovative good ideas, and that only one of Albert Einstein’s many articles was peer reviewed.
It appears you intended to include a link that has dropped out. You may have been referring us to Einstein Versus the Physical Review by Daniel Kennefick or Three myths about scientific peer review by Michael Nielsen.
Other important articles are The philosophical basis of peer review and the suppression of innovation by David Horrobin; What errors do peer reviewers detect, and does training improve their ability to detect them? by Sara Schroter and others; as well as the three I linked in the post — and each of those links to many other worthwhile studies in addition.
And the most recent one of all, the one that kicked me into finally getting around to writing this post: The Assessment of Science: The Relative Merits of Post-Publication Review, the Impact Factor, and the Number of Citations by Adam Eyre-Walker and Nina Stoletzki.
I’ve been on both sides of the peer review business. (In Philosophy) One or two of my papers have been greatly improved by suggestions and criticisms of the peer reviewers, and I hope my comments have been useful to those whose papers I have reviewed.
That said, reviewers are mostly checking that the paper is not total blithering nonsense (unless it is a journal of postmodern theory, where that is required), which means that they are most comfortable passing papers that fit the current views and less likely to admit genuine innovation.
It’s always bothered me that peer-review is often regarded as being just as fundamental to science as methodology. No matter what background you have or what environment you’re a part of you are not immune to a baseline feature of human cognition. Academics are no less prone to group dynamics than anyone else. Although the conclusions of bias studies are typically understood and acknowledged the broader implications are usually lost on people. To most, bias only fetters the minds of those who don’t think like they do; through “othering” they caricature outsiders in order to reinforce a positive group image for the purpose of dissonance reduction. Once this image is cross-checked with the caricature and fails to match it becomes evidence that their group’s ideas are safely insulated from the vices of outsiders. For example, a certain theologian couldn’t have been egregiously misrepresenting and unfairly attacking a mythicist because he’s actually a very nice chap once you get to know him – he doesn’t fit the profile of, say, a paranoid, Holocaust-denying conspiracy myther that despises Christians.
To appeal to mere peer-review or consensus isn’t to appeal to reason or logic, but harmony.
Frank Morgan was both the Gatekeeper and the Wizard — the best of methods to protect lies.
Great non-peer reviewed review of peer review. (almost a palindrome!)
Having played the game in “the academy” (the pretentious jargon) for 12 years, I can agree that pretensions of safeguards on quality are pretensions.
It is like the Red Queen Principle of informatics — each time some check is devised to improve the quality of knowledge, it is bypassed by other clever mechanisms.
OK, I am depressed — signing off.
“Peer-review does not detect errors.” What is your process for checking and refining papers, then? It seems you’ve criticized a process, without enacting an equally successful, testable alternative process. Your title should be, “If X Works For Science, It Will Work for Biblical studies”
I think you are right Jack, that sentence is an exaggeration, it should say, “Peer-review can leave all sorts of errors undetected.” It does detect errors but the process has many downsides. In epidemiology, each level of study is known to have its own unique pitfalls. So with academia, they should be more honest about each of the pitfalls of their methods. The pitfalls of Peer Review need to be shouted loudly — as Vridar is doing here (albeit with this one [accidental?] exaggeration).
Okay, very good point. Thanks
The assertion that peer-review does not detect errors certainly is far too generalized a claim out of context. In my defence I would like to plead that I did link to the online source of the claim, and I will quote the original passage in context here:
And the footnote is as follows:
What is meant is that, according to certain studies, it does not necessarily detect all or even significant errors. Tim, more than I have been doing recently, has been alerting readers here to blatant errors of understanding of fundamental arguments and hypotheses that are allowed to be published in the literature of Biblical Studies and Theology. And when these same errors are repeated in blog posts by some of these scholars we can see that there is a lot of ignorance being perpetuated among people who should know better. If the standards of the academy are so slipshod then the public is being very ill-served indeed. Hence we ought to be dismayed when these scholars appeal to peer-review as the golden gateway that sets them apart from the ignorant rabble.
Here is a great story about a student who reads a rather influential paper in American Psychologist that he instantly recognizes as worthless, and his subsequent struggles to publish an article debunking it. Of course this has nothing to do with mythicism or religion, but a lot to do with gatekeepers.
Good old Larry Hurtado has happily confirmed my conclusion in this post that Biblical Scholars are indeed immune from the failings besetting scholars subjected to the peer-review process in the sciences. His post is an obvious response to this one but he finds no need to mention that or link to it, of course. No need to acknowledge the source of the question he is addressing if it comes from Vridar.
Larry explains that in the sciences the problem is with the ability to fabricate data, etc. In the Humanities, however, the data is there, unchanging, immutable, like the word of God. All that peer-reviewers need to do is decide how adequately each article addresses the data that has been there for the past however many years.
Therefore — and quite oblivious to the many points I made in this post about the nature of pressure on scholars, and the different types of problems with peer-review and what all this reveals about the dark side of the scholarly enterprise itself — Larry writes as if he had no need to read more than a couple of lines of what I wrote any more than Jesus had any need to reflect on the laws of physics before walking on water.
Larry’s work is actually some of the worst when it comes to standing up to post-publication peer-review, and he goes bananas with all sorts of grossly inaccurate nonsense when addressing the critics who are grinding his work to powder over its gross inadequacies in addressing data — more often than not he simply ignores data that does not suit his theory or dismisses it with a gruff, “How could anyone think that is so important?!”
Gosh, not even James McGrath went as far as Larry here — even McGrath acknowledged how bad the peer review process is. His unique contribution to the discussion was to say that a method that keeps out good and innovative work systemically does not allow mythicist arguments because they are not good and innovative enough. Er, yes. Or was he arguing that the peer-review process lets in bad publications and the mythicist arguments can’t make it because they are not bad enough? Or is it because they are judged to be so very, very bad because that’s how so many of the most innovative ideas have been judged in the past and thus never passed peer-review? I’m sure he thinks he knows what he meant to say.
Traveling at the moment — look forward to getting back to more time on posts and responses to some of the comments in a week or two.
My favorite part is how detachedly he acknowledges “a certain concern” with “inadequate peer-reviewing” “in at least some fields in the Sciences” then avoids the issue completely by speculating about preparation and work ethic.
Our comments bypassed each other on the way to here.
Larry Hurtado is a master of what is known in some quarters as the “mealy-mouthed”.
You might find this post a little interesting, Trusting the Expert Consensus at LessWrong.
It is of relevance here. It’s a topic that needs its own book.
Just for the record, let me also point out that what Larry Hurtado is refusing to acknowledge is that far from the problem in sciences being the ability to fabricate raw data, the problems addressed in my post — taken from the scholarly literature itself — are:
* use the wrong techniques (either wilfully or in ignorance),
* use the right techniques wrongly,
* misinterpret their results,
* report their results selectively,
* cite the literature selectively,
* and draw unjustified conclusions?
* “Peer review is largely a lottery.”
* Peer-review does not detect errors.
* Bias again, and again confirmed by many studies. The most famous study –
The authors took 12 studies that came from prestigious institutions that had already been published in psychology journals. . . .
* Another bias is the bias against new ideas, or the truly original.
Peer review might be described as a process where the ‘establishment’ decides what is important. Unsurprisingly, the establishment is poor at recognizing new ideas that overturn the old ideas. It is the same in the arts where Beethoven’s late string quartets were declared to be nothing but noise and Van Gogh managed to sell only one painting in his lifetime. . . . [Notice Case studies from the Humanities!]
* Reviewers can steal ideas and present them as their own or produce an unjustly harsh review to block or at least slow down the publication of the ideas of a competitor. These have all happened.
So in this context it is worth recalling what Larry Hurtado confuses for actual “data”. I pointed this out in my earlier post, Who’s the Scholarly Scoundrel? . . . Scholars Bound by Bias, Immured in Myth.
Larry does not even know what “raw data” actually is (as demonstrated from his own words in the above-linked post), or at least does not know the difference between a document or author making an inference about X and the literal historicity of X itself. It is his post-publication peer-reviewers who are exposing him as a hopeless apologist for something akin to a fundamentalist Christian dogma.
I’d point out these errors myself on his blog but he has a habit of deleting key bits of posts he does not like so he can misrepresent his critics.
As far as Larry has framed the issue there can be no evidence of flaws in the process of peer review. If deliberately fabricated papers make it through peer review this only proves that certain individuals weren’t conscientious enough in screening them. If ideas which are initially rejected later come to be accepted this only proves the success of peer review because the only way they could come to be accepted is through peer review. Apparently the circularity of this is completely lost on him, not to mention the fact that this renders the success of peer review entirely unfalsifiable.