Mythicists have been told so often (oh, so very often) that they should publish in peer-reviewed journals to be taken seriously. Peer review, the public is repeatedly assured, is the guarantee of true scholarship. In the recent scholarly outrage over Joseph Atwill’s thesis gaining more public attention than the works of the academy, Larry Hurtado reminded us that “peer-review” and “reputable” go together like carrots and peas, and Tom Verenna was once again extolling peer review as the magic gateway guaranteed to keep intellectuals worthy and honest.
Now I do understand the reasons for peer review. But what these reminders from biblical scholars overlook is the research demonstrating that peer review, in the various ways it is practiced today, is a deeply flawed process. Or maybe all those studies have no relevance for theologians.
You see, there is a conflict between what I read on the web from Bible scholars about how effective the peer-review process is in their profession on the one hand, and what I read about the flaws in that same process in my professional capacity (as coordinator of a research data management project and of a research publications archiving and access project at an academic institution) on the other.
While I read of the virtues of peer-review for maintaining the pure standards of biblical scholars after hours, during my work time I am reading published research findings that are not so sanguine about peer-review. Why the difference? Could it be that the research is focused mostly in the areas of the sciences? No doubt the nature of that sort of material makes objective analysis easier. Does that mean the demonstrated failings of peer-review could never apply to the field of biblical studies?
Given that scientists are increasingly being exposed to an understanding of the flaws in the peer-review process, are we to assume that biblical scholars are immune from these flaws and that their peer-review mechanisms really are guarantors of quality work?
One article that referenced several studies on the peer-review process is Richard Smith’s “Classical peer review: an empty gun” in Breast Cancer Research 2010, 12 (Suppl 4):S13 doi:10.1186/bcr2742 (a peer-reviewed journal).
If peer review was a drug it would never be allowed onto the market
This is how the article begins. It is a quotation from the deputy editor of a leading medical journal and “intellectual father of the international congresses of peer review that have been held every four years since 1989”, Drummond Rennie.
Later the article makes this claim:
If peer review is to be thought of primarily as a quality assurance method, then sadly we have lots of evidence of its failures. The pretentiously named medical literature is shot through with poor studies.
One would think that in a field like medicine, peer review would ensure that only accurate information is published. Certainly we would not think that the peer-review process would let through anything that would cause public harm.
But the facts prove otherwise.
There is much that is published that is downright false. The editors of the ACP Journal Club find that less than 1% of studies in most journals are “both scientifically sound and important for clinicians”. There are also documented instances of bad studies being published that have led to patient heart attacks and measles epidemics.
Note the following and ask if we have the same types of human nature producing and reviewing articles in biblical studies (with my bolding and formatting):
Doug Altman, perhaps the leading expert on statistics in medical journals, sums it up thus: ‘What should we think about researchers who
- use the wrong techniques (either wilfully or in ignorance),
- use the right techniques wrongly,
- misinterpret their results,
- report their results selectively,
- cite the literature selectively,
- and draw unjustified conclusions?
We should be appalled. Yet numerous studies of the medical literature have shown that all of the above phenomena are common. This is surely a scandal.’
Back to Drummond Rennie:
Drummond Rennie writes in what might be the greatest sentence ever published in a medical journal:
‘There seems to be no study too fragmented, no hypothesis too trivial, no literature citation too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.’
Are biblical scholars more professional as a whole than doctors? Are their arguments and publications more rigorous? According to everything I read by biblical scholars themselves I must think they really are. No doubt our souls are worth much more care than our physical bodies.
Richard Smith continues:
We have little or no evidence that peer review ‘works,’ but we have lots of evidence of its downside.
What are the downsides?
Among those listed are:
- Bias: the ‘sexier’ articles end up in the ‘top’ journals — thus actually conveying a distorted view of science. (I’ll return to this point.)
- “Peer review is largely a lottery.” Numerous studies have shown that when several authors are asked to review a paper the amount of agreement among them on its worthiness to be published is little higher than would be expected by chance.
- Peer review does not detect errors. Again, numerous studies have demonstrated this: papers with a mix of major and minor errors deliberately inserted are sent out for review, and the rate at which those errors are detected is consistently very low.
- Bias again, and again confirmed by many studies. In the most famous study, the authors took 12 studies that came from prestigious institutions and had already been published in psychology journals. They retyped the papers, made minor changes to the titles, abstracts, and introductions, but changed the authors’ names and institutions, inventing institutions with names like the Tri-Valley Center for Human Potential. The papers were then resubmitted to the journals that had first published them. In only three cases did the journals realise that they had already published the paper, and eight of the remaining nine were rejected – not because of lack of originality but because of poor quality. The authors concluded that this was evidence of bias against authors from less prestigious institutions. Most authors from less prestigious institutions, particularly those in the developing world, believe that peer review is biased against them.
- Another bias is the bias against new ideas, the truly original. How often do we read a biblical scholar insisting that academics are all eager to overturn the status quo with radical new theses, and that mythicism would therefore appear in all the peer-reviewed journals if it had any merit? Apparently, scientists have a different DNA (my bolding and underline):
- Peer review might be described as a process where the ‘establishment’ decides what is important. Unsurprisingly, the establishment is poor at recognizing new ideas that overturn the old ideas. It is the same in the arts where Beethoven’s late string quartets were declared to be nothing but noise and Van Gogh managed to sell only one painting in his lifetime. David Horrobin, a strong critic of peer review, has collected examples of peer review turning down hugely important work, including Hans Krebs’s description of the citric acid cycle, which won him the Nobel prize, Solomon Berson’s discovery of radioimmunoassay, which led to a Nobel prize, and Bruce Glick’s identification of B lymphocytes.
- One last form of weakness is apparently completely unknown among the scholarly publication world of biblical studies:
- Reviewers can steal ideas and present them as their own or produce an unjustly harsh review to block or at least slow down the publication of the ideas of a competitor. These have all happened.
I said I’d return to the point about ‘sexier’ articles ending up in the ‘top’ journals.
One of the conclusions of a recent study by Brembs, Button and Munafò, “Deep impact: unintended consequences of journal rank”, was that journal rank is “a moderate to strong predictor of both intentional and unintentional scientific unreliability”.
The same article contains these alarming findings:
As journal rank [meaning here the most reputable journals are the most at fault] is also predictive of the incidence of fraud and misconduct in retracted publications, as opposed to other reasons for retraction (Steen, 2011a), it is not surprising that higher ranking journals are also more likely to publish fraudulent work than lower ranking journals (Fang et al., 2012).
There are thus several converging lines of evidence which indicate that publications in high ranking journals are not only more likely to be fraudulent than articles in lower ranking journals, but also more likely to present discoveries which are less reliable (i.e., are inflated, or cannot subsequently be replicated). Some of the sociological mechanisms behind these correlations have been documented, such as pressure to publish (preferably positive results in high-ranking journals), leading to the potential for decreased ethical standards (Anderson et al., 2007) and increased publication bias in highly competitive fields (Fanelli, 2010). The general increase in competitiveness, and the precariousness of scientific careers (Shapin, 2008), may also lead to an increased publication bias across the sciences (Fanelli, 2011). This evidence supports earlier propositions about social pressure being a major factor driving misconduct and publication bias (Giles, 2007), eventually culminating in retractions in the most extreme cases.
Mention was made in that last paragraph of publication bias. Another such bias is the bias toward positive outcomes. This was demonstrated in the article “Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial” (see the link for the full article and citation). Its main conclusion:
A fabricated manuscript with a positive outcome was more likely to be recommended for publication than was an otherwise identical no-difference manuscript.
But when biblical scholars were finally able to come up with arguments that the Testimonium Flavianum in Josephus really did contain more evidence for the historical Jesus after all, no doubt nothing but purely objective criteria were the foundation of this overturning of the old consensus. Nothing so base as Positive Outcome Bias would have influenced Biblical Scholars and Theologians!
So what is it about scientists and doctors that makes them more morally frail than biblical scholars and their peer guarantors of quality work?
The authors of that paper attempt an answer in their final paragraph (bolding added):
It has been proposed that bias of the sort observed by Mahoney and herein is not just a part of evidence-based medicine or peer review but is part of human cognitive behavior (finding what one seeks); indeed, Mahoney pointed this out and suggested that Francis Bacon identified the phenomenon nearly 400 years ago. Previous studies have found that the “newsworthy” (defined as a positive finding) is more likely to draw a favorable response from peer reviewers and, indeed, that work with positive outcomes is more likely to be submitted to peer review in the first place.
Aren’t we aficionados of biblical and Christian origins studies lucky we are regaled with only sound and sober scholarship guaranteed to be of the highest quality by peer-review gatekeepers who are indeed closest to the word of God! Evidence indeed that our souls, and their intellectual keepers, are deemed more precious than our mortal coils and those who care for them.
Neil Godfrey