Scholarly peer review

Impartial review, especially of work in less narrowly defined or interdisciplinary fields, may be difficult to accomplish, and the significance (good or bad) of an idea may never be widely appreciated among its contemporaries.

[9][10][11] Editors of scientific journals at that time made publication decisions without seeking outside input, i.e. without an external panel of reviewers, and established authors were given considerable journalistic discretion.

On a much later occasion, Einstein was severely critical of the external review process, saying that he had not authorized the editor in chief to show his manuscript "to specialists before it is printed", and informing him that he would "publish the paper elsewhere" – which he did, with substantial modifications.

This process encourages authors to meet the accepted standards of their discipline and reduces the dissemination of irrelevant findings, unwarranted claims, unacceptable interpretations, and personal views.

[27] In the case of proposed publications, the publisher (editor-in-chief or the editorial board, often with the assistance of corresponding or associate editors) sends advance copies of an author's work or ideas to researchers or scholars who are experts in the field (known as "referees" or "reviewers").

Desk rejection is intended to be a streamlined process so that editors may move past nonviable manuscripts quickly and provide authors with the opportunity to pursue a more suitable journal.

For example, the European Accounting Review editors apply three questions to decide whether a manuscript moves forward to referees: (1) does the article fit the journal's aims and scope, (2) is the paper's content (e.g. literature review, methods, conclusions) sufficient and does the paper make a worthwhile contribution to the larger body of literature, and (3) does it follow format and technical specifications?
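Viewed abstractly, this kind of desk screening amounts to a conjunctive checklist: a manuscript proceeds to referees only if every question is answered "yes". The following minimal sketch (with hypothetical function and field names, not any journal's actual editorial system) illustrates that logic.

```python
# Minimal sketch of desk-rejection triage as a conjunctive checklist.
# Function and field names are hypothetical illustrations only.
from dataclasses import dataclass


@dataclass
class Submission:
    fits_aims_and_scope: bool   # question 1: within the journal's aims and scope?
    content_sufficient: bool    # question 2: adequate literature review, methods,
                                #             conclusions, and a worthwhile contribution?
    meets_format_specs: bool    # question 3: follows format and technical specifications?


def passes_desk_screening(paper: Submission) -> bool:
    """A manuscript moves forward to referees only if all three answers are 'yes'."""
    return (paper.fits_aims_and_scope
            and paper.content_sufficient
            and paper.meets_format_specs)


# Example: a paper that is in scope and well developed but ignores the journal's
# formatting requirements is returned to the author rather than sent to referees.
print(passes_desk_screening(Submission(True, True, False)))  # False -> desk rejection
```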

Authors are sometimes also given the opportunity to name natural candidates who should nonetheless be disqualified as reviewers, in which case they may be asked to provide justification (typically expressed in terms of a conflict of interest).

[59][60] In many fields of "big science", the publicly available operating schedules of major equipment, such as telescopes or synchrotrons, would make the authors' names obvious to anyone who cared to look them up.

Proponents of double-blind review argue that it performs no worse than single-blind, and that it generates a perception of fairness and equality in academic funding and publishing.

[67][68] Eugene Koonin, a senior investigator at the National Center for Biotechnology Information, asserts that the system has "well-known ills" and advocates "open peer review".

The practice of uploading to preprint servers, and the amount of discussion that follows, depend heavily on the field,[74][75] but doing so allows open pre-publication peer review.

[76] The journal Behavioral and Brain Sciences, published by Cambridge University Press, was founded by Stevan Harnad in 1978[77] and modeled on Current Anthropology's open peer commentary feature.

[86][87] Recent research has called attention to the use of social media technologies and science blogs as a means of informal, post-publication peer review, as in the case of the #arseniclife (or GFAJ-1) controversy.

[88] In December 2010, an article published in Science Express (the ahead-of-print version of Science) generated both excitement and skepticism, as its authors, led by NASA astrobiologist Felisa Wolfe-Simon, claimed to have discovered and cultured a bacterium that could replace phosphorus with arsenic in its physiological building blocks.

At the time of the article's publication, NASA issued press statements suggesting that the finding would impact the search for extraterrestrial life, sparking excitement on Twitter under the hashtag #arseniclife, as well as criticism from fellow experts who voiced skepticism via their personal blogs.

[89] Ultimately, the controversy surrounding the article attracted media attention,[90] and one of the most vocal scientific critics, Rosemary Redfield, formally published in July 2012[91] the results of her and her colleagues' unsuccessful attempt to replicate the NASA scientists' original findings.

[133] Richard Smith, MD, former editor of the British Medical Journal, has claimed that peer review is "ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant". He adds that several studies have shown peer review to be biased against the provincial and against those from low- and middle-income countries, and that many journals take months and even years to publish, wasting researchers' time.

In 2011, University of British Columbia assistant law professor Lorna McCue argued that the emphasis on peer-reviewed publication was culturally inappropriate, as it did not recognize the importance of Indigenous oral traditions.

[144] Occasionally, however, peer review approves studies that are later found to be wrong, and only rarely are deceptive or fraudulent results discovered prior to publication.

Multiple examples across several areas of science show that scientists have elevated the importance of peer review for research that was questionable or corrupted.

The New York Times gained access to confidential peer review documents for studies sponsored by the National Football League (NFL) that were cited as scientific evidence that brain injuries do not cause long-term harm to its players.

Furthermore, The Times noted that the NFL sought to legitimize the studies' methods and conclusions by citing a "rigorous, confidential peer-review process", despite evidence that some peer reviewers seemed "desperate" to stop their publication.

[154] Another problem that peer review fails to catch is ghostwriting, a process by which companies draft articles for academics who then publish them in journals, sometimes with little or no changes.

In 2010, the US Senate Finance Committee released a report finding that this practice was widespread, that it corrupted the scientific literature, and that it increased prescription rates.

Instead, the credibility conferred by the "peer-reviewed" label could diminish what Feynman calls the culture of doubt necessary for science to operate as a self-correcting, truth-seeking process.

[177] Here again, more oversight only adds to the impression that peer review ensures quality, thereby further diminishing the culture of doubt and counteracting the spirit of scientific inquiry.

Jon Tennant also argues that the outcry over the inefficiencies of traditional journals centers on their inability to provide rigorous enough scrutiny and on the outsourcing of critical thinking to a concealed and poorly understood process.

A counterargument is that the conventional model of peer review diminishes the healthy skepticism that is a hallmark of scientific inquiry, and thus confers credibility upon subversive attempts to infiltrate the literature.

[206] In 2020, the Journal of Nanoparticle Research fell victim to an "organized rogue editor network" that impersonated respected academics, got a themed issue created, and had 19 substandard articles published (out of 80 submitted).

A display of open science principles, including open peer review, open source, open data, open methodology, open educational resources, and open access.