MPs want to look at ‘fixing’ peer review. Now, don’t get me wrong – peer review has a number of significant flaws and is far from a consistent process. Different journals in different fields apply different standards and processes.
Let me first outline, for those of you who are not familiar with the process, what generally happens.
After carrying out a suite of experiments, or after making a finding of some kind, you and your co-authors draft a paper. That paper will have a formal structure, presenting the results and then investigating the potential implications and important conclusions. This paper will then be submitted to a journal for consideration for publication.
Upon receipt, the journal will send the paper out to two or three reviewers. These are other academics who are experts in the field. These academics will then look over the paper and assess it on a number of criteria (which will vary according to the journal in question). Broadly, however, these can be summarised as:
- Is the method valid?
- Are the results real?
- Are the results statistically significant?
- Is there anything new in these findings?
The reviewers will then respond to the Associate Editor of the journal dealing with the paper, listing any corrections or amendments they feel should be made to the paper, along with a recommendation for whether the paper should be taken forward for publication or not. This feedback is then assessed by the Associate Editor, who returns the relevant feedback and approval/refusal notification to the authors. This process can take anything up to 3 or 4 months.
If the paper is rejected, the authors can choose to resubmit the article to another journal and start again on the process (although usually making the recommended amendments from the reviewers first). If the paper is accepted by the journal, then the authors have a period of time (usually a couple of months or so) to respond to the comments made by the reviewers – either making the changes or not, but with an explanation in each case. Eventually the paper ends up published, and all is well with the world.
The reason the government are getting their knickers in a twist is that there have been a number of high-profile medical cases in which peer review appeared not to work (such as MMR and the case of Hwang Woo-suk, who published two papers in the highly respected journal Science on cloned embryonic human stem cells).
The scientific community as a whole has known there have been issues with peer review for a long time. It is relatively common for reviewers to pressurise authors to reference their work (citations are the lifeblood of academia), and, more significantly, in small academic communities of specialists, personal rivalries, friendships or disagreements can get in the way of otherwise good (or bad) science.
So what do the government want to do to help? They want to provide training to researchers in how to review articles. That’s all very well, but who is to provide this training and what form is it to take? Generic training is of no use – we all know what bad science and good science look like (and those who ignore these are not going to be changed by a government training scheme). Certainly there are many academics who would benefit from training in improving their writing style, but this is not the issue the government are concerned with. Any useful training would involve looking at how to interrogate particular types of results, but this is so specialised that it will vary enormously from field to field. Training isn’t their only solution either.
“Innovative approaches – such as the use of pre-print servers, open peer-review, increased transparency and online repository-style journals – should be explored by publishers”.
By pre-print servers they mean the online hosting of articles before they are officially published. This in itself actually worsens the problem of scientific trust, as it will blur the line between papers which have been submitted and those which are published – between which there are often very significant corrections and changes to the interpretation of data.
Open peer review is a very strange idea indeed. There is, in some cases, a tendency for journals to favour certain reviewers. Open peer review suggests that it will be possible for anyone to go and review papers. It takes long enough to go through the review process with 3 invited reviewers – how an open review process would work I do not know. Most academics have too much to do already without crawling around numerous journal websites looking for new articles to review every month or so. In most cases the only type of reviewer an open system will attract is the obsessive, wiki-editor type of person.
If journals really want to take a leap forward, then they should fully endorse the online system. In the past authors have only been able to publish restricted amounts of data. As the journals go online (and in many cases the online version is becoming the official reference version of the paper, rather than its paper counterpart) there is huge scope for papers to be published alongside vast tracts of data which would go a long way to supporting or refuting arguments. The problem which arises here is a very human one, however. As more and more data becomes available, the chances of people thoroughly checking every case become smaller and smaller. Is it easier to hide bad science behind a few select results, or behind gigabytes of data no-one will look at?
Peer review certainly needs looking at, and I’d be interested to see whether anything follows on from this report. However, trying to apply crowd-sourcing to a field in which you need 4+ years of training to even begin to grasp the subject matter properly seems to me to be more of a ‘hey, look how cool we are’ idea from the government than a practical solution to a genuine problem. Let alone how an individual government can possibly envisage dealing with a global publishing base.