Scholarly peer review


Scholarly peer review is the process of subjecting an author's scholarly work, research, or ideas to the scrutiny of others who are experts in the same field, before a paper describing this work is published in a journal, conference proceedings, or as a book. Peer review helps the publisher decide whether the work should be accepted, considered acceptable with revisions, or rejected.
Peer review requires a community of experts in a given field who are qualified and able to perform reasonably impartial review. Impartial review, especially of work in less narrowly defined or interdisciplinary fields, may be difficult to accomplish, and the significance of an idea may never be widely appreciated among its contemporaries. Peer review is generally considered necessary to academic quality and is used in most major scholarly journals. However, peer review does not prevent publication of invalid research, and there is little evidence that peer review improves the quality of published papers.
There are attempts to reform the peer review process, including from the fields of metascience and journalology. Reformers seek to increase the reliability and efficiency of the peer review process and to provide it with a scientific foundation.
Alternatives to common peer review practices have been put to the test,
in particular open peer review, where the comments are visible to readers, generally with the identities of the peer reviewers disclosed as well, e.g., F1000, eLife, BMJ, and BioMed Central.

History

The first record of an editorial pre-publication peer review dates from 1665, when Henry Oldenburg, the founding editor of Philosophical Transactions of the Royal Society, instituted the practice at the Royal Society of London.
The first peer-reviewed publication might have been the Medical Essays and Observations published by the Royal Society of Edinburgh in 1731. The present-day peer-review system evolved from this 18th-century process, began to involve external reviewers in the mid-19th-century, and did not become commonplace until the mid-20th-century.
Peer review became a touchstone of the scientific method, but until the end of the 19th century was often performed directly by an editor-in-chief or editorial committee.
Editors of scientific journals at that time made publication decisions without seeking outside input, i.e. an external panel of reviewers, giving established authors latitude in their journalistic discretion. For example, Albert Einstein's four revolutionary Annus Mirabilis papers in the 1905 issue of Annalen der Physik were peer-reviewed by the journal's editor-in-chief, Max Planck, and its co-editor, Wilhelm Wien, both future Nobel prize winners and together experts on the topics of these papers. On a much later occasion, Einstein was severely critical of the external review process, saying that he had not authorized the editor-in-chief to show his manuscript "to specialists before it is printed", and informing him that he would "publish the paper elsewhere", which he did; in fact, he later had to withdraw the publication.
While some medical journals started to systematically appoint external reviewers, it is only since the middle of the 20th century that this practice has spread widely and that external reviewers have been given some visibility within academic journals, including being thanked by authors and editors. A 2003 editorial in Nature stated that, in the early 20th century, "the burden of proof was generally on the opponents rather than the proponents of new ideas." Nature itself instituted formal peer review only in 1967. Journals such as Science and the American Journal of Medicine increasingly relied on external reviewers in the 1950s and 1960s, in part to reduce the editorial workload. In the 20th century, peer review also became common for science funding allocations. This process appears to have developed independently from that of editorial peer review.
Gaudet provides a social science view of the history of peer review, tending carefully to what is under investigation, here peer review, rather than looking only at superficial or self-evident commonalities among inquisition, censorship, and journal peer review. The account builds on historical research by Gould, Biagioli, Spier, and Rip. The first Peer Review Congress met in 1989. Over time, the fraction of papers devoted to peer review has steadily declined, suggesting that, as a field of sociological study, it has been replaced by more systematic studies of bias and errors. In parallel with "common experience" definitions based on the study of peer review as a "pre-constructed process", some social scientists have looked at peer review without considering it as pre-constructed. Hirschauer proposed that journal peer review can be understood as reciprocal accountability of judgements among peers. Gaudet proposed that journal peer review could be understood as a social form of boundary judgement – determining what can be considered as scientific set against an overarching knowledge system, and following predecessor forms of inquisition and censorship.
Pragmatically, peer review refers to the work done during the screening of submitted manuscripts. This process encourages authors to meet the accepted standards of their discipline and reduces the dissemination of irrelevant findings, unwarranted claims, unacceptable interpretations, and personal views. Publications that have not undergone peer review are likely to be regarded with suspicion by academic scholars and professionals. Non-peer-reviewed work does not contribute, or contributes less, to a scholar's academic credit, such as the h-index, although this heavily depends on the field.

Justification

It is difficult for authors and researchers, whether individually or in a team, to spot every mistake or flaw in a complicated piece of work. This is not necessarily a reflection on those concerned, but because with a new and perhaps eclectic subject, an opportunity for improvement may be more obvious to someone with special expertise or who simply looks at it with a fresh eye. Therefore, showing work to others increases the probability that weaknesses will be identified and improved. For both grant-funding and publication in a scholarly journal, it is also normally a requirement that the subject is both novel and substantial.
The decision whether or not to publish a scholarly article, or what should be modified before publication, ultimately lies with the publisher to which the manuscript has been submitted. Similarly, the decision whether or not to fund a proposed project rests with an official of the funding agency. These individuals usually refer to the opinion of one or more reviewers in making their decision. This is primarily for three reasons:
Reviewers are often anonymous and independent. However, some reviewers may choose to waive their anonymity, and in other limited circumstances, such as the examination of a formal complaint against the referee, or a court order, the reviewer's identity may have to be disclosed. Anonymity may be unilateral or reciprocal.
Since reviewers are normally selected from experts in the fields discussed in the article, the process of peer review helps to keep some invalid or unsubstantiated claims out of the body of published research and knowledge. Scholars will read published articles outside their limited area of detailed expertise, and then rely, to some degree, on the peer-review process to have provided reliable and credible research that they can build upon for subsequent or related research. Significant scandal ensues when an author is found to have falsified the research included in an article, as other scholars, and the field of study itself, may have relied upon the invalid research.
For US universities, peer reviewing of books before publication is a requirement for full membership of the Association of American University Presses.

Procedure

In the case of proposed publications, the publisher sends advance copies of an author's work or ideas to researchers or scholars who are experts in the field. Communication is normally by e-mail or through a web-based manuscript processing system such as ScholarOne, Scholastica, or Open Journal Systems. Depending on the field of study and on the specific journal, there are usually one to three referees for a given article. For example, Springer states that there are two or three reviewers per article.
The peer-review process involves three steps:
Step 1: Desk evaluation. An editor evaluates the manuscript to judge whether the paper will be passed on to journal referees. At this phase many articles receive a “desk reject,” that is, the editor chooses not to pass along the article. The authors may or may not receive a letter of explanation.
Desk rejection is intended to be a streamlined process so that editors may move past nonviable manuscripts quickly and provide authors with the opportunity to pursue a more suitable journal. For example, the European Accounting Review editors subject each manuscript to three questions to decide whether it moves forward to referees: 1) is the article a fit for the journal's aims and scope, 2) is the paper's content sufficient to merit review, and 3) does it follow format and technical specifications? If “no” to any of these, the manuscript receives a desk rejection.
Desk rejection rates vary by journal. For example, in 2017 researchers at the World Bank compiled rejection rates of several global economics journals; the desk rejection rate ranged from 21% to 66%. The American Psychological Association publishes rejection rates for several major publications in the field, and although they do not specify whether the rejection occurs pre- or post-desk evaluation, their figures in 2016 ranged from a low of 49% to a high of 90%.
Step 2: Blind review. If the paper is not desk rejected, the editors send the manuscript to the referees, who are chosen for their expertise and distance from the authors. At this point, referees may reject the manuscript, accept it without changes, or instruct the authors to revise and resubmit.
Reasons vary for acceptance of an article by editors, but Elsevier published an article where three editors weigh in on factors that drive article acceptance. These factors include whether the manuscript: delivers “new insight into an important issue,” will be useful to practitioners, advances or proposes a new theory, raises new questions, has appropriate methods and conclusion, presents a good argument based on the literature, and tells a good story. One editor notes that he likes papers that he “wished he’d done” himself.
These referees each return an evaluation of the work to the editor, noting weaknesses or problems along with suggestions for improvement. Typically, most of the referees' comments are eventually seen by the author, though a referee can also send 'for your eyes only' comments to the publisher; scientific journals observe this convention almost universally. The editor then weighs the referees' comments together with her or his own opinion of the manuscript before passing a decision back to the author, usually with the referees' comments.
Referees' evaluations usually include an explicit recommendation of what to do with the manuscript or proposal, often chosen from options provided by the journal or funding agency. Nature, for example, provides its referees with four recommended courses of action to choose from.
During this process, the role of the referees is advisory. The editor is typically under no obligation to accept the opinions of the referees, though he or she will most often do so. Furthermore, the referees in scientific publication do not act as a group, do not communicate with each other, and typically are not aware of each other's identities or evaluations. Proponents argue that if the reviewers of a paper are unknown to each other, the editor can more easily verify the objectivity of the reviews. There is usually no requirement that the referees achieve consensus, with the decision instead often made by the editor based on his or her best judgement of the arguments.
In situations where multiple referees disagree substantially about the quality of a work, there are a number of strategies for reaching a decision. The paper may be rejected outright, or the editor may choose which reviewer's point the authors should address. When a publisher receives very positive and very negative reviews for the same manuscript, the editor will often solicit one or more additional reviews as a tie-breaker. As another strategy in the case of ties, the publisher may invite authors to reply to a referee's criticisms and permit a compelling rebuttal to break the tie. If a publisher does not feel confident to weigh the persuasiveness of a rebuttal, the publisher may solicit a response from the referee who made the original criticism. An editor may convey communications back and forth between authors and a referee, in effect allowing them to debate a point.
Even in these cases, however, publishers do not allow multiple referees to confer with each other, though each reviewer may often see earlier comments submitted by other reviewers. The goal of the process is explicitly not to reach consensus or to persuade anyone to change their opinions, but instead to provide material for an informed editorial decision. One early study regarding referee disagreement found that agreement was greater than chance, if not much greater than chance, on six of seven article attributes, but this study was small and it was conducted on only one journal. At least one study has found that reviewer disagreement is not common, but this study is also small and on only one journal.
Traditionally, reviewers would often remain anonymous to the authors, but this standard varies both with time and with academic field. In some academic fields, most journals offer the reviewer the option of remaining anonymous or not, or a referee may opt to sign a review, thereby relinquishing anonymity. Published papers sometimes contain, in the acknowledgments section, thanks to anonymous or named referees who helped improve the paper. For example, Nature journals provide this option.
Sometimes authors may exclude certain reviewers: one study conducted on the Journal of Investigative Dermatology found that excluding reviewers doubled the chances of article acceptance. Some scholars are uncomfortable with this idea, arguing that it distorts the scientific process. Others argue that it protects against referees who are biased in some manner. In some cases, authors can choose referees for their manuscripts. mSphere, an open-access journal in microbial science, has moved to this model. Editor-in-Chief Mike Imperiale says this process is designed to reduce the time it takes to review papers and permit the authors to choose the most appropriate reviewers. But a scandal in 2015 showed how allowing authors to choose reviewers can encourage fraudulent reviews: fake reviews were submitted to the Journal of the Renin-Angiotensin-Aldosterone System in the names of author-recommended reviewers, causing the journal to eliminate this option.
Step 3: Revisions. If the manuscript has not been rejected during peer review, it returns to the authors for revisions. During this phase, the authors address the concerns raised by reviewers. Dr. William Stafford Noble offers ten rules for responding to reviewers. His rules include:
  1. "Provide an overview, then quote the full set of reviews”
  2. “Be polite and respectful of all reviewers”
  3. “Accept the blame”
  4. “Make the response self-contained”
  5. “Respond to every point raised by the reviewer”
  6. “Use typography to help the reviewer navigate your response”
  7. “Whenever possible, begin your response to each comment with a direct answer to the point being raised”
  8. “When possible, do what the reviewer asks”
  9. “Be clear about what changed relative to the previous version”
  10. “If necessary, write the response twice”

Recruiting referees

At a journal or book publisher, the task of picking reviewers typically falls to an editor. When a manuscript arrives, an editor solicits reviews from scholars or other experts who may or may not have already expressed a willingness to referee for that journal or book division. Granting agencies typically recruit a panel or committee of reviewers in advance of the arrival of applications.
Referees are supposed to inform the editor of any conflict of interests that might arise. Journals or individual editors may invite a manuscript's authors to name people whom they consider qualified to referee their work. For some journals this is a requirement of submission. Authors are sometimes also given the opportunity to name natural candidates who should be disqualified, in which case they may be asked to provide justification.
Editors solicit author input in selecting referees because academic writing typically is very specialized. Editors often oversee many specialties, and can not be experts in all of them. But after an editor selects referees from the pool of candidates, the editor typically is obliged not to disclose the referees' identities to the authors, and in scientific journals, to each other. Policies on such matters differ among academic disciplines.
One difficulty with respect to some manuscripts is that there may be few scholars who truly qualify as experts, people who have themselves done work similar to that under review. This can frustrate the goals of reviewer anonymity and avoidance of conflicts of interest. Low-prestige or local journals and granting agencies that award little money are especially handicapped with regard to recruiting experts.
A potential hindrance in recruiting referees is that they are usually not paid, largely because doing so would itself create a conflict of interest. Also, reviewing takes time away from their main activities, such as their own research. To the would-be recruiter's advantage, most potential referees are authors themselves, or at least readers, who know that the publication system requires that experts donate their time. Serving as a referee can even be a condition of a grant or of professional association membership.
Referees have the opportunity to prevent work that does not meet the standards of the field from being published, which is a position of some responsibility. Editors are at a special advantage in recruiting a scholar when they have overseen the publication of his or her work, or if the scholar is one who hopes to submit manuscripts to that editor's publishing entity in the future. Granting agencies, similarly, tend to seek referees among their present or former grantees.
Peerage of Science is an independent service and a community where reviewer recruitment happens via Open Engagement: authors submit their manuscript to the service where it is made accessible for any non-affiliated scientist, and 'validated users' choose themselves what they want to review. The motivation to participate as a peer reviewer comes from a reputation system where the quality of the reviewing work is judged and scored by other users, and contributes to user profiles. Peerage of Science does not charge any fees to scientists, and does not pay peer reviewers. Participating publishers however pay to use the service, gaining access to all ongoing processes and the opportunity to make publishing offers to the authors.
With independent peer review services the author usually retains the right to the work throughout the peer review process, and may choose the most appropriate journal to submit the work to. Peer review services may also provide advice or recommendations on most suitable journals for the work. Journals may still want to perform an independent peer review, without the potential conflict of interest that financial reimbursement may cause, or the risk that an author has contracted multiple peer review services but only presents the most favorable one.
An alternative or complementary system of performing peer review is for the author to pay for having it performed. An example of such a service provider is Rubriq, which assigns to each work peer reviewers who are financially compensated for their efforts.

Different styles

Anonymous and attributed

For most scholarly publications, the identities of the reviewers are kept anonymous. The alternative, attributed peer review, involves revealing the identities of the reviewers. Some reviewers choose to waive their right to anonymity, even when the journal's default format is blind peer review.
In anonymous peer review, reviewers are known to the journal editor or conference organiser but their names are not given to the article's author. In some cases, the author's identity can also be anonymised for the review process, with identifying information stripped from the document before review. The system is intended to reduce or eliminate bias.
Some experts proposed blind review procedures for reviewing controversial research topics.
In "double-blind" review, which was fashioned by sociology journals in the 1950s and remains more common in the social sciences and humanities than in the natural sciences, the identity of the authors is concealed from the reviewers, and vice versa, lest the knowledge of authorship or concern about disapprobation from the author bias their review. Critics of the double-blind review process point out that, despite any editorial effort to ensure anonymity, the process often fails to do so, since certain approaches, methods, writing styles, notations, etc., point to a certain group of people in a research stream, and even to a particular person.
In many fields of "big science", the publicly available operation schedules of major equipment, such as telescopes or synchrotrons, would make the authors' names obvious to anyone who cared to look them up. Proponents of double-blind review argue that it performs no worse than single-blind, and that it generates a perception of fairness and equality in academic funding and publishing. Single-blind review is strongly dependent upon the goodwill of the participants, but no more so than double-blind review with easily identified authors.
As an alternative to single-blind and double-blind review, authors and reviewers are encouraged to declare their conflicts of interest when the names of authors and sometimes reviewers are known to the other. When conflicts are reported, the conflicting reviewer can be prohibited from reviewing and discussing the manuscript, or his or her review can instead be interpreted with the reported conflict in mind; the latter option is more often adopted when the conflict of interest is mild, such as a previous professional connection or a distant family relation. The incentive for reviewers to declare their conflicts of interest is a matter of professional ethics and individual integrity. Even when the reviews are not public, they are still a matter of record and the reviewer's credibility depends upon how they represent themselves among their peers. Some software engineering journals, such as the IEEE Transactions on Software Engineering, use non-blind reviews with reporting to editors of conflicts of interest by both authors and reviewers.
A more rigorous standard of accountability is known as an audit. Because reviewers are not paid, they cannot be expected to put as much time and effort into a review as an audit requires. Therefore, academic journals such as Science, organizations such as the American Geophysical Union, and agencies such as the National Institutes of Health and the National Science Foundation maintain and archive scientific data and methods in the event another researcher wishes to replicate or audit the research after publication.
The traditional anonymous peer review has been criticized for its lack of accountability, the possibility of abuse by reviewers or by those who manage the peer review process, its possible bias, and its inconsistency, alongside other flaws. Eugene Koonin, a senior investigator at the National Center for Biotechnology Information, asserts that the system has "well-known ills" and advocates "open peer review".

Open peer review

In 1999, the open access journal Journal of Medical Internet Research was launched, which from its inception decided to publish the names of the reviewers at the bottom of each published article. Also in 1999, the British Medical Journal moved to an open peer review system, revealing reviewers' identities to the authors but not the readers, and in 2000, the medical journals in the open access BMC series, published by BioMed Central, launched using open peer review. As with the BMJ, the reviewers' names are included on the peer review reports. In addition, if the article is published, the reports are made available online as part of the "pre-publication history".
Several other journals published by the BMJ Group allow optional open peer review, as does PLoS Medicine, published by the Public Library of Science. The BMJ's Rapid Responses allows ongoing debate and criticism following publication.
In June 2006, Nature launched an experiment in parallel open peer review: some articles that had been submitted to the regular anonymous process were also available online for open, identified public comment. The results were less than encouraging – only 5% of authors agreed to participate in the experiment, and only 54% of those articles received comments. The editors have suggested that researchers may have been too busy to take part and were reluctant to make their names public. The knowledge that articles were simultaneously being subjected to anonymous peer review may also have affected the uptake.
In February 2006, the journal Biology Direct was launched by BioMed Central, adding another alternative to the traditional model of peer review. If authors can find three members of the Editorial Board who will each return a report or will themselves solicit an external review, the article will be published. As with Philica, reviewers cannot suppress publication, but in contrast to Philica, no reviews are anonymous and no article is published without being reviewed. Authors have the opportunity to withdraw their article, to revise it in response to the reviews, or to publish it without revision. If the authors proceed with publication of their article despite critical comments, readers can clearly see any negative comments along with the names of the reviewers.
In the social sciences, there have been experiments with wiki-style, signed peer reviews, for example in an issue of the Shakespeare Quarterly.
In 2010, the British Medical Journal began publishing signed reviewer's reports alongside accepted papers, after determining that telling reviewers that their signed reviews might be posted publicly did not significantly affect the quality of the reviews.
In 2011, Peerage of Science, an independent peer review service, was launched with several non-traditional approaches to academic peer review. Most prominently, these include the judging and scoring of the accuracy and justifiability of peer reviews, and concurrent usage of a single peer review round by several participating journals.
Starting in 2013 with the launch of F1000Research, some publishers have combined open peer review with postpublication peer review by using a versioned article system. At F1000Research, articles are published before review, and invited peer review reports are published with the article as they come in. Author-revised versions of the article are then linked to the original. A similar postpublication review system with versioned articles is used by ScienceOpen, launched in 2014.
In 2014, eLife implemented an open peer review system, under which the peer-review reports and authors' responses are published as an integral part of the final version of each article.
Since 2016, Synlett has been experimenting with closed crowd peer review, in which the article under review is sent to a pool of more than 80 expert reviewers who then collaboratively comment on the manuscript.
In an effort to address issues with the reproducibility of research results, some scholars are asking that authors agree to share their raw data as part of the peer review process. As far back as 1962, for example, a number of psychologists have attempted to obtain raw data sets from other researchers, with mixed results, in order to reanalyze them. A recent attempt resulted in only seven data sets out of fifty requests. The notion of obtaining, let alone requiring, open data as a condition of peer review remains controversial. In 2020, the lack of reviewer access to raw data led to article retractions in the prestigious New England Journal of Medicine and The Lancet. Many journals now require access to raw data to be included in peer review.

Pre- and post-publication peer review

The process of peer review is not restricted to the publication process managed by academic journals. In particular, some forms of peer review can occur before an article is submitted to a journal and/or after it is published by the journal.

Pre-publication peer review

Manuscripts are typically reviewed by colleagues before submission, and if the manuscript is uploaded to a preprint server, such as ArXiv, BioRxiv or SSRN, researchers can read and comment on it. The practice of uploading to preprint servers, and the level of discussion that follows, depend heavily on the field, but doing so allows an open pre-publication peer review. The advantages of this method are speed and transparency of the review process. Anyone can give feedback, typically in the form of comments, and typically not anonymously. These comments are also public and can be responded to, so author-reviewer communication is not restricted to the typical two to four rounds of exchanges in traditional publishing. The authors can incorporate comments from a wide range of people instead of feedback from the typical three to four reviewers. The disadvantage is that a far larger number of papers are presented to the community without any guarantee of quality.

Post-publication peer review

After a manuscript is published, the process of peer review continues as publications are read. Readers will often send letters to the editor of a journal, or correspond with the editor via an on-line journal club. In this way, all 'peers' may offer review and critique of published literature. A variation on this theme is open peer commentary; journals using this process solicit and publish non-anonymous commentaries on the "target paper" together with the paper, and with original authors' reply as a matter of course. The introduction of the "epub ahead of print" practice in many journals has made possible the simultaneous publication of unsolicited letters to the editor together with the original paper in the print issue.
An extension of peer review after publication is open peer commentary, in which commentaries from specialists are solicited on published articles and the authors are invited to respond. It was first implemented by the anthropologist Sol Tax, who founded the journal Current Anthropology in 1957.
The journal Behavioral and Brain Sciences, published by Cambridge University Press, was founded by Stevan Harnad in 1978 and modeled on Current Anthropology's open peer commentary feature. Psycoloquy was founded in 1990 on the basis of the same feature, but this time implemented online.
In addition to journals hosting their own articles' reviews, there are also external, independent websites dedicated to post-publication peer review, such as PubPeer, which allows anonymous commenting on published literature and pushes authors to answer these comments. It has been suggested that post-publication reviews from these sites should be editorially considered as well. The megajournals F1000Research and ScienceOpen openly publish both the identity of the reviewers and the reviewers' reports alongside the article.
Some journals use postpublication peer review as their formal review method, instead of prepublication review. This was first introduced in 2001 by Atmospheric Chemistry and Physics. More recently, F1000Research and ScienceOpen were launched as megajournals with postpublication review as their formal review method. At both ACP and F1000Research, peer reviewers are formally invited, much as at prepublication review journals. Articles that pass peer review at those two journals are included in external scholarly databases.
In 2006, a small group of UK academic psychologists launched Philica, billed as an instant online "journal of everything", to redress many of what they saw as the problems of traditional peer review. All submitted articles are published immediately and may be reviewed afterwards. Any researcher who wishes to review an article can do so, and reviews are anonymous. Reviews are displayed at the end of each article and are used to give the reader criticism of, or guidance about, the work, rather than to decide whether it is published. This means that reviewers cannot suppress ideas they disagree with. Readers use reviews to guide their reading, and particularly popular or unpopular work is easy to identify.

Social media and informal peer review

Recent research has called attention to the use of social media technologies and science blogs as a means of informal, post-publication peer review, as in the case of the #arseniclife controversy. In December 2010, an article published online ahead of print in Science generated both excitement and skepticism, as its authors, led by NASA astrobiologist Felisa Wolfe-Simon, claimed to have discovered and cultured a bacterium that could substitute arsenic for phosphorus in its physiological building blocks. At the time of the article's publication, NASA issued press statements suggesting that the finding would affect the search for extraterrestrial life, sparking excitement on Twitter under the hashtag #arseniclife, as well as criticism from fellow experts who voiced skepticism via their personal blogs. Ultimately, the controversy surrounding the article attracted media attention, and one of the most vocal scientific critics, Rosemary Redfield, formally published in July 2012 on her and her colleagues' unsuccessful attempt to replicate the NASA scientists' original findings.
Researchers following the impact of the #arseniclife case on social media discussions and peer review processes concluded the following:
Our results indicate that interactive online communication technologies can enable members in the broader scientific community to perform the role of journal reviewers to legitimize scientific information after it has advanced through formal review channels. In addition, a variety of audiences can attend to scientific controversies through these technologies and observe an informal process of post-publication peer review.

Result-blind peer review

Studies that report a positive or statistically significant result are far more likely to be published than those that do not. A counter-measure to this publication bias is to withhold the results from reviewers, making journal acceptance more like the way scientific grant agencies review research proposals. Versions include:
  1. Result-blind peer review or "results blind peer review", first proposed in 1966: Reviewers receive an edited version of the submitted paper which omits the results and conclusion sections. In a two-stage version, first proposed in 1977, a second round of review or editorial judgment is based on the full version of the paper.
  2. Conclusion-blind review, proposed by Robin Hanson in 2007: extends this further by asking all authors to submit both a positive and a negative version of the paper; only after the journal has accepted the article do the authors reveal which is the real version.
  3. Pre-accepted articles or "outcome-unbiased journals"/"early acceptance"/"advance publication review"/"registered reports"/"prior to results submission": extends study pre-registration to the point that journals accept or reject papers based on a version written before the results or conclusions are known, which instead describes the theoretical justification, experimental design, and planned statistical analysis. Only once the proposed hypothesis and methodology have been accepted by reviewers do the authors collect the data or analyze previously collected data. A limited variant was The Lancet's study protocol review, which from 1997 to 2015 reviewed and published randomized trial protocols with a guarantee that the eventual paper would at least be sent out for peer review rather than rejected outright. Nature Human Behaviour, for example, has adopted the registered report format, as such reports "shift the emphasis from the results of research to the questions that guide the research and the methods used to answer them". The European Journal of Personality defines this format: "In a registered report, authors create a study proposal that includes theoretical and empirical background, research questions/hypotheses, and pilot data. Upon submission, this proposal will then be reviewed prior to data collection, and if accepted, the paper resulting from this peer-reviewed procedure will be published, regardless of the study outcomes."
Several journals have used result-blind peer review or pre-accepted articles.

Criticism

Various editors have expressed criticism of peer review. In addition, a Cochrane review found little empirical evidence that peer review ensures quality in biomedical research, while a second systematic review and meta-analysis found a need for evidence-based peer review in biomedicine, given the paucity of assessment of the interventions designed to improve the process.

Allegations of bias and suppression

The interposition of editors and reviewers between authors and readers may enable these intermediaries to act as gatekeepers. Some sociologists of science argue that peer review makes the ability to publish susceptible to control by elites and to personal jealousy.
The peer review process may sometimes impede progress and may be biased against novelty. A linguistic analysis of review reports suggests that reviewers focus on rejecting applications by searching for weak points, rather than on finding the high-risk/high-gain groundbreaking ideas that may be in a proposal. Reviewers tend to be especially critical of conclusions that contradict their own views, and lenient towards those that match them. At the same time, established scientists are more likely than others to be sought out as referees, particularly by high-prestige journals and publishers. There are also signs of gender bias favouring men as authors. As a result, ideas that harmonize with the established experts' views are more likely to see print, and to appear in premier journals, than are iconoclastic or revolutionary ones. This accords with Thomas Kuhn's well-known observations regarding scientific revolutions. A theoretical model has been established whose simulations imply that peer review and over-competitive research funding drive mainstream opinion towards a monopoly.
Criticisms of traditional anonymous peer review allege that it lacks accountability, can lead to abuse by reviewers, and may be biased and inconsistent.

Failures

Peer review fails when a peer-reviewed article contains fundamental errors that undermine at least one of its main conclusions and that could have been identified by more careful reviewers. Many journals have no procedure to deal with peer review failures beyond publishing letters to the editor.
Peer review in scientific journals assumes that the article reviewed has been honestly prepared. The process occasionally detects fraud, but is not designed to do so. When peer review fails and a paper is published with fraudulent or otherwise irreproducible data, the paper may be retracted.
A 1998 experiment on peer review with a fictitious manuscript found that reviewers failed to detect some of the manuscript's errors, and that the majority of reviewers may not notice when a paper's conclusions are unsupported by its results.

Fake

There have been instances where peer review was claimed to be performed but in fact was not; this has been documented in some predatory open access journals or in the case of sponsored Elsevier journals.
In November 2014, an article in Nature exposed that some academics were submitting fake contact details for recommended reviewers to journals, so that when the publisher contacted the recommended reviewer, it was actually the original author reviewing their own work under a false name. The Committee on Publication Ethics issued a statement warning of the fraudulent practice. In March 2015, BioMed Central retracted 43 articles, and in August 2015 Springer retracted 64 papers in 10 journals. The journal Tumor Biology is another example of peer review fraud.

Plagiarism

Reviewers generally lack access to raw data, but do see the full text of the manuscript, and are typically familiar with recent publications in the area. Thus, they are in a better position to detect plagiarism of prose than fraudulent data. A few cases of such textual plagiarism by historians, for instance, have been widely publicized.
On the scientific side, a poll of 3,247 scientists funded by the U.S. National Institutes of Health found that 0.3% admitted to faking data and 1.4% admitted to plagiarism. Additionally, 4.7% of the same poll admitted to self-plagiarism or autoplagiarism, in which an author republishes the same material, data, or text without citing their earlier work.

Open access journals and peer review

Some critics of open access journals have argued that, compared to traditional subscription journals, open access journals might utilize substandard or less formal peer review practices, and, as a consequence, the quality of scientific work in such journals will suffer. In a study published in 2012, this hypothesis was tested by evaluating the relative "impact" of articles published in open access and subscription journals, on the grounds that members of the scientific community would presumably be less likely to cite substandard work, and that citation counts could therefore act as one indicator of whether or not the journal format indeed impacted peer review and the quality of published scholarship. This study ultimately concluded that "OA journals indexed in Web of Science and/or Scopus are approaching the same scientific impact and quality as subscription journals, particularly in biomedicine and for journals funded by article processing charges," and the authors consequently argue that "there is no reason for authors not to choose to publish in OA journals just because of the ‘OA’ label."

Examples

In 2017, the Higher School of Economics in Moscow unveiled a "Monument to an Anonymous Peer Reviewer". It takes the form of a large concrete cube, or die, with "Accept", "Minor Changes", "Major Changes", "Revise and Resubmit" and "Reject" on its five visible sides. Sociologist Igor Chirikov, who devised the monument, said that while researchers have a love-hate relationship with peer review, peer reviewers nonetheless do valuable but mostly invisible work, and the monument is a tribute to them.