Scholarly Publishing: Unnecessarily Slow in the Modern Era

The Authorea Team

Scholarly publishing is slow, really slow. The time from submission to publication averages one year, and that is likely an underestimate, since many authors are forced to submit the same work to multiple publishers in succession.
Arguably, the publishing process is necessarily slow because work must make its way through a rigorous peer review system. This would probably be acceptable if the system were in fact effective. However, research on the effectiveness of peer review shows that most major errors go unnoticed by reviewers (Schroter 2008, Godlee 1998, Baxt 1998, Smith 2010).

So-called "stings" have shown that:

  1. computer-generated papers can pass peer review (Labbé 2012),
  2. made-up work can pass peer review (Bohannon 2013),
  3. and journals can reject studies that they themselves had previously accepted and published (Peters 1982, Ceci 2014).

That is not to say that peer review is without benefits, just that it is not stopping major errors from being published. An alternative is to post/publish/preprint work without review--hint: Authorea--and then coordinate review post-publication, either through traditional routes or through open post-publication peer review platforms such as F1000Research and The Winnower. Such a process affords seamless communication among scientists without unnecessary delay, eliminates editorial bias, and makes the entire process transparent. In short, it makes the most sense.

Disagree? Leave an annotation anywhere on this document or write up a counterpoint. We believe in making scholarly communication more open, fluid, and collaborative, and we hope you'll join us.

Write your next paper today on Authorea. Send it wherever you'd like. #OpenScience


References

  1. S. Schroter, N. Black, S. Evans, F. Godlee, L. Osorio, R. Smith. What errors do peer reviewers detect and does training improve their ability to detect them? JRSM 101, 507–514. SAGE Publications, 2008.

  2. Fiona Godlee, Catharine R. Gale, Christopher N. Martyn. Effect on the Quality of Peer Review of Blinding Reviewers and Asking Them to Sign Their Reports. JAMA 280, 237. American Medical Association (AMA), 1998.

  3. William G. Baxt, Joseph F. Waeckerle, Jesse A. Berlin, Michael L. Callaham. Who Reviews the Reviewers? Feasibility of Using a Fictitious Manuscript to Evaluate Peer Reviewer Performance. Annals of Emergency Medicine 32, 310–317. Elsevier BV, 1998.

  4. Richard Smith. Classical peer review: an empty gun. Breast Cancer Research 12, S13. Springer Nature, 2010.

  5. Cyril Labbé, Dominique Labbé. Duplicate and fake publications in the scientific literature: how many SCIgen papers in computer science? Scientometrics 94, 379–396. Springer Science + Business Media, 2012.

  6. J. Bohannon. Who's Afraid of Peer Review? Science 342, 60–65. American Association for the Advancement of Science (AAAS), 2013.

  7. Douglas P. Peters, Stephen J. Ceci. Peer-review practices of psychological journals: The fate of published articles submitted again. Behavioral and Brain Sciences 5, 187. Cambridge University Press (CUP), 1982.

  8. Stephen Ceci, Douglas Peters. The Peters & Ceci Study of Journal Publications. The Winnower. The Winnower LLC, 2014.
