The open access dragnet
Mon 07 Oct 2013 11:27 AM
In "Who's Afraid of Peer Review?", recently published in Science, John Bohannon reports on an experiment he ran on open access science journals. He sent them a spoof paper that had the form of a serious article but was chock full of horrible errors. Only about 38% of the journals rejected it. Bohannon also discussed the paper in a recent NPR interview.
The blog Games with Words points out that this fails as a test of open access journals, because it is not clear what the acceptance rate would have been at traditional, closed-access journals. It doesn't help to reply (as Bohannon does) that the spoof paper was probably accepted whenever an open access journal simply didn't bother with peer review, even granting that closed journals all genuinely conduct peer review. The study has no way of distinguishing the absence of peer review from crap peer review, and some closed journals may conduct crap peer review.
The criticism is all the sharper because failing to include a control group was one of the howlers in Bohannon's own spoof paper. He says in the NPR interview:
It looked like a real paper, not a joke. But if you peer-reviewed it, you would within five minutes see that it was so flawed that it could never be published. ... if you're claiming to have evidence that some chemical is a promising new drug, well, you better have tested it at least on healthy cells. Because even if you show that it hurts cancer cells, how do you know what you have there isn't just a poison? So that's one thing that's just awful about the paper, is that it doesn't compare cancer cells to healthy cells at all.
So Games with Words retorts: "Science -- which is not open access -- published an obviously flawed article about open access journals publishing obviously flawed articles."
Nevertheless, it seems that Bohannon wasn't even trying to test open access as such. He says, "Open access is great and everyone believes that."
Instead, he says that his results could be used to separate sham open-access journals (which will print anything in order to scam author fees) from genuine ones (which conduct authentic peer review and try to publish good science). That doesn't require a control group: if the goal is to sort rotten open-access journals from good ones, then we don't need to consider anything but open-access journals.
Yet if that's the goal, the percentage of acceptances really doesn't matter. What matters is using a reliable sorting process and publicizing the results. Journals from the Cairo-based publisher Hindawi rejected the paper; journals from Elsevier, Wolters Kluwer, and Sage accepted it.
And because the results are presented as a percentage and a series of anecdotes, many readers will take them instead as a poor score for open access as such. Brian Leiter, for example, links to the NPR interview just by writing "Open access journals in science: This story is a bit worrisome!"
Note, too, that Bohannon's experiment tested only journals that charge author fees; he explicitly excluded journals that don't. As I've noted before, it's the author-pays model that gives publishers like Elsevier, Kluwer, and Sage the incentive to run bogus, review-free journals.