Revolutionise the publication process

Enough of this political squabble and twitter war (twar?) and back to the “real” world of neuroneuroticism. Last year I sadly agreed (for various reasons) to act as corresponding author on one of our studies. I also have this statistics pet project that I want to try to publish as a single-author paper. Both of these experiences reminded me of something I have long known:

I seriously hate submitting manuscripts and the whole peer review and publication process.

The way publishing currently works, authors are encouraged to climb down the rungs of the impact factor ladder, starting at whatever journal they feel is sufficiently general interest and high impact to take their manuscript and then gradually working their way down through editorial and/or scientific rejections until it is eventually accepted by, as the rejection letters from high impact journals put it, a “more specialised journal.” At each step you battle an online submission system that competes for the least user-friendly webpage of the year award, and you repeat the same actions: uploading your manuscript files, suggesting reviewers, and checking that the PDF conversion worked. Before you can do any of this, you of course need to format your manuscript in the style the journal expects, with the right kind of citations and the various sections in the correct places. You also modify the cover letter to the editors, in which you hype up the importance of the work rather than letting the research speak for itself, to tailor it to the particular journal you are submitting to. All of this takes precious time and has very little to do with research.

Because I absolutely loathe this sort of mindless work, I have long tried to outsource it to my postdocs and students as much as I can. I don’t need to be corresponding author on all my research. Of course, this doesn’t absolve me from being involved in the more important decisions, such as rewriting the manuscripts and drafting the cover letters. More importantly, while this may help my own peace of mind, it just makes somebody else suffer. It is not a real solution.

The truth is that this wasted time and effort would be far better spent doing science and ensuring that the study is of the best possible quality. I have long felt that the entire publication process should be remodelled so that these things are no longer a drain on researchers’ time and sanity. I am far from the first person to suggest a publication model like this. For instance, Micah Allen mentioned very similar ideas on his blog and, more recently, Dorothy Bishop made a passionate proposal to get rid of journals altogether. Both touched on many of the same points and partly inspired my own thoughts on this.

Centralised review platform

Some people think that all peer review could be post-publication. I don’t believe this is a good idea – depending on what you regard as publication. I think we need some fundamental vetting procedure before a scientific study is indexed and regarded as part of the scientific record. I fear that without some expert scrutiny we will become swamped with poor quality outputs that make it impossible to separate the wheat from the chaff. Post-publication peer review alone is not enough to find the needles in the haystack. If there is so much material out there that most studies never get read, let alone reviewed or even commented on, this isn’t going to work. By having some traditional review prior to “acceptance”, in which experts are invited to review the manuscript – and reminded to do so – we can at least ensure that every manuscript will be read by someone. Nobody is stopping you from turning blog posts into actual publications: Daniël Lakens has a whole series of blog posts that have turned into peer reviewed publications.

A key feature of this pre-publication peer review, though, should be that it all takes place in a central place completely divorced from any of the traditional journals. Judging the scientific quality of a study requires expert reviewers and editors, but there should be no evaluation of the novelty or “impact” of the research. It should be all about the scientific details, to ensure that the work is robust. The manuscript should be as detailed as necessary to replicate the study (and the hypotheses and protocols can be pre-registered – peer review of pre-registered protocols is certainly an option in this system).

Ideally this review should involve access to the data and materials so that reviewers can try to replicate the findings presented in the study. In practice, expert reviewers rarely reanalyse data even when they are available; most people simply do not have the time to get that deeply involved in a review. An interesting possible solution to this dilemma was suggested to me recently by Lee de-Wit: there could be reviewers whose primary role is to check the data and try to reproduce the analysed results based on the documentation. These data reviewers would likely be junior researchers – PhD students and junior postdocs, perhaps. It would provide an opportunity to learn about reviewing and also to become known to editors. There is presently huge variability in the career stage at which researchers start reviewing manuscripts: while some people begin reviewing even as graduate students, others still don’t seem to review regularly after several years of postdoc experience. This idea could help close that gap.

Transparent reviewing

Another aspect that is, to my mind, essential is that reviews should be transparent. That is, all the review comments should be public and the various revisions of the manuscript should be accessible. Ideally, the platform would allow easy navigation between versions, so that it is straightforward to look only at the current/final product with the tracked changes filtered out – but equally easy to blend the comments back in.

Whether reviewers’ names should also be public remains a very controversial and polarising issue. I haven’t come to a final conclusion on that; there are certainly arguments for both sides. One of the reasons many people dislike the idea of mandatory signed reviews is that it could put junior researchers at a disadvantage: it may discourage them from writing critical reviews of the work of senior colleagues, the very people who make hiring and funding decisions. Reviewer anonymity can protect against that, but it can also lead to biased, overly harsh, and sometimes outright nasty reviews. It also has the odd effect of creating a reviewer guessing game. People often display a surprising level of confidence in who they “know” their anonymous reviewers were – and I would bet they are often wrong. In fact, I know of at least one case where this sort of false belief resulted in years of animosity directed at the presumed reviewer and even their students. Publishing reviewer names would put an end to this sort of nonsense. It also encourages people to be more polite. Editors at F1000Research (a journal with a completely transparent review process) told me that they frequently ask reviewers to check whether they are prepared to publish the review in the state in which they submitted it, because it will be associated with their name – and many then decide to edit their comments to tone down the hostility.

However, I think even anonymous reviews could take us a long way, provided that reviewer comments are public. Since the content of the review is subject to public scrutiny, it is in the reviewer’s, and even more so the editor’s, interest to ensure reviews are fair and of suitable quality. Reviews of poor quality or with potential political motivation could easily be flagged up and result in public discussion. I believe it was Chris Chambers who recently suggested a compromise in which tenured scientists must sign their reviews while junior researchers, who still live at the mercy of senior colleagues, have the option to remain anonymous. I think this idea has merit, although even tenured researchers can still suffer from political and personal biases, so I am not sure it really protects against those problems.

One argument sometimes made against anonymous reviews is that they prevent people from taking credit for their reviewing work. I don’t think this is true. Anonymous reviews are nevertheless associated with a reviewer’s digital account, and ratings of review quality, reliability, and so on could easily be quantified that way. (In fact, this is precisely what websites like Publons are already doing.)

Novelty, impact, and traditional journals

So what happens next? Let’s assume a manuscript passes this initial peer review. It then enters the official scientific record and is indexed on PubMed and Google Scholar. Perhaps it could follow the example of F1000Research, where the title of the study itself indicates that it has been accepted/approved by peer review.

This is where it gets complicated. A lot of the ideas I discussed are already implemented to some extent by journals like F1000Research, PeerJ, or the Frontiers brand. The one aspect these implementations lack is a single, centralised platform for reviews. And although I think a single platform would be ideal to avoid confusion and splintering, even a handful of venues for scientific review could probably work.

However, what these systems do not provide is the role still played by the high impact, traditional publishers: filtering the enormous volume of scientific work to select ground-breaking, timely, and important research findings. There is a lot of hostility towards this aspect of scientific publishing: it often seems completely arbitrary, obsessed with temporary fads and shallow buzzwords. I think for many researchers the implicit or even explicit pressure to publish as much “high impact” work as possible to sustain their careers contributes to this. It isn’t entirely clear to me how much of this pressure is real and how much is an illusion. Certainly, some grant applications still require you to list impact factors and citation numbers (which are directly linked to impact factors) to support your case.

Whatever you may think about this (and I personally agree that it has many negative effects and can be extremely annoying), the filtering and sorting done by high impact journals does have its benefits. The short format publications, brief communications, and perspective articles in these journals make work much more accessible to wider audiences, and I think there is some point in highlighting new, creative, surprising, and/or controversial findings over incremental follow-up research. While published research should provide detailed methods and well-scrutinised results, there are different audiences. When I read about findings in astronomy or particle physics, or even many studies from the biological sciences outside my own area, I don’t typically read all the in-depth methods (nor would I understand them). An easily accessible article that appeals to a general scientific audience is certainly a nice way to communicate scientific findings. In the present system this is typically achieved by separating a general main text from Supplementary/Online sections that contain methods, additional results, and possibly even in-depth discussion.

This is where I think we should implement an explicit tier system. The initial research is published, after scientific peer review as discussed above, in the centralised repository of new manuscripts. These publications are written as traditional journal articles, complete with detailed methods and results. Novelty and impact play no role up to this stage. Now, however, the more conventional publishers come into play. Authors may want to write cover letters competing for the attention of higher impact journals. Conversely, journal editors may want to contact authors of particularly interesting studies and ask them to submit a short-form article to their journal. There are several mechanisms by which new publications could come to the attention of journal editors. They could simply generate a strong social media buzz and lots of views, downloads, and citations. This in fact seems to be the basis of the Frontiers tier system. I think this is far from optimal because it doesn’t necessarily highlight the scientifically most valuable studies but the most sensational ones – which they can be for all sorts of reasons, such as making extraordinary claims or having titles that contain curse words. Rather, it would be ideal to highlight research that attracts a lot of post-publication review and discussion – but of course this still poses the question of how to encourage that.

In either case, the decision as to what constitutes novel, general interest research remains up to editorial discretion, making it easier for traditional journals to accept this change. How these articles are accepted is still up to each journal. Some may not require any further peer review and simply ask for a copy-edited summary article. Others may want some additional peer review to keep the interpretation of these summaries in check. These high impact articles would likely be heavy on implications and wider interpretation, while the original scientific publication has only a brief discussion section detailing the basic interpretation and elaborating on the limitations. Some peer review may help keep the authors honest at this stage. Importantly, instead of endless online methods sections and (sometimes barely reviewed) supplementary materials, the full scientific detail of any study would be available within its original publication. The high impact short-form article simply contains a link to that detailed publication.

One important aim this system would achieve is to ensure that research published as high impact will typically meet high thresholds of scientific quality. Our current publishing model still incentivises shoddy research because it emphasises novelty and speed of publication over quality. In the new system, every study would first have to pass a quality threshold; novelty judgements would be entirely secondary to that.

How can we make this happen?

The biggest problem with all of these grand ideas we are kicking around is that it remains mostly unclear to me how we can actually effect this change. The notion that we can do away with traditional journals altogether sounds like a pipe dream to me, as it is diametrically opposed to the self-interest of traditional publishers and our current funding structures. While some great upheavals have already happened in scientific publishing, such as the now widespread availability of open access papers, I feel that a lot of these changes have occurred simply because traditional publishers realised that they can make considerable profit from open access charges.

I do hope that eventually the kinds of journals publishing short-form, general interest articles to filter the ground-breaking research from incremental, specialised work will not be for-profit publishers. There are already a few examples of traditional journals that are more community driven, such as the Journal of Neuroscience, the Journal of Vision, and also eLife (not so much community-driven as driven by a research funder rather than a for-profit publishing house). I hope to see more of that in the future. Since many scientists seem to be quite idealistic at heart, I think there is hope for that.

But in the meantime it seems necessary to work together with traditional publishing houses rather than antagonising them. I would think it shouldn’t be that difficult to convince some publishers that what now forms the supplementary materials and online methods in many high impact journals could be proper publications in their own right. Journals that already have a system like the one I envision, e.g. F1000Research or PeerJ, could perhaps negotiate such deals with traditional journals. This need not be mutually exclusive; it could simply apply to some of the articles published in these journals.

The main obstacle to do away with here is the – to my mind obsolete – notion that none of the results can have been published elsewhere. This is already no longer true in most cases anyway: much research will have appeared in conference proceedings prior to journal publication, and many traditional journals nowadays tolerate manuscripts uploaded to pre-print servers. The new aspect of the system I describe is that there would actually be an assurance that the pre-published work has been properly peer reviewed, thus guaranteeing a certain level of quality.

I know there are probably many issues still to resolve with these ideas and I would love to hear them. However, I think this vision is not a dream but a distinct possibility. Let’s make it come true.

4 thoughts on “Revolutionise the publication process”

  1. I like it. As to getting it going, what’s that framework where, as a reviewer, your reviews for journal X can be forwarded to journal Y if a manuscript is rejected by X and ends up at Y? It seems like an extended treaty across many more journals would be the centralized platform you seek. It can be distributed; all that needs to happen is an agreement to forward reviews.

    As for making it happen, once such a treaty exists then both authors and reviewers can exert pressure by choosing to submit to/review for journals within the treaty framework. Membership of the centralized review platform treaty could be viewed as being as important as open access and other initiatives. A treaty, suitably policed, would also be able to exclude the fly-by-night publishing houses.

    I like the idea of transparent reviews. I’d like the reviews published along with the accepted manuscript. I can see that some aspects might get tricky, e.g. if parts are added/removed to satisfy the style of journal Y, but one would think that producing an appropriate summary of the review(s) ought to be possible with a little bit of effort.

    Finally, if centralized reviews do happen then, along the lines of your “data reviewers,” I’d like to see the opportunity to have partial peer reviews rather than the all-or-nothing reviewer choice we are given now. (See for my take on it.) I am unqualified to review the vast majority of fMRI papers in their entirety, but it is quite possible for me to review their acquisition methods where mistakes abound.


    1. Hi practiCal (may I call you practiCal?),

      Thanks for your comment. I very much like the idea of partial reviews. In some journals (I think Frontiers) the review form asks the reviewers specifically whether a statistician should look over the analysis. That’s not quite the same as what you describe, of course, but it goes in this direction. I don’t think all manuscripts need partial reviews, but some would certainly benefit a lot from them.

      So this could work in the same way as the data reviewers I mention in the post. Some people are recruited to review specific aspects of the study. Data reviewers try to reproduce the results using the available data and code. In MRI studies at least one reviewer checks over the MRI methods. The more interdisciplinary the study, the more such partial reviewers should probably be involved. Of course, the role isn’t strict. Nobody stops the data reviewer from commenting on more general aspects as well if they spot something worth commenting on – but they aren’t required to do so.

      Regarding the Review Consortium thing you mention (I also can’t remember the name, but I know what you mean), for me this doesn’t go far enough. I think the review process should happen independently of the journal, in a separate place – and in public. Only final papers that have actually passed this level of initial review would then even be considered by traditional journals.

      One thought I had that I didn’t discuss in the post is that the DOI of the article in the traditional journal should be linked directly with the original paper. It should be possible to count both the short-form article in the journal and the full-length detailed article as the same publication. As I hinted at in the post, the full-length version is essentially what the supplementary materials are now – except that it actually contains all the information in itself and is fully reviewed.


    1. Yeah, I think that could be very useful. All too often there are specialist sections of a paper that one may not be an expert on even though one can judge the rest of it. And looking at the data integrity is just a good idea in general, but most typical reviewers probably wouldn’t bother with it.

