Recent analyses indicate that the proportion of retracted claims in the scientific literature is steadily increasing.
[1] The number of retractions has grown tenfold over the past decade, but they still make up approximately 0.2% of the 1.4m papers published annually in scholarly journals.
[3] A separate study analyzed 432 claimed genetic links to health risks that differ between men and women; only one of the claims proved consistently replicable.
Another meta-review found that of the 49 most-cited clinical research studies published between 1990 and 2003, more than 40 percent were later shown to be either completely wrong or significantly incorrect.
Tyrannosaurus rex was thought to have grown by more than 700 kg a year, until Nathan Myhrvold showed that this estimate was too large by a factor of two.
[7] Torcetrapib was originally hyped as a drug that could block a protein that converts HDL cholesterol into LDL with the potential to "redefine cardiovascular treatment".
Two days after Pfizer announced its plans for the drug, the company ended the Phase III clinical trial due to higher rates of chest pain and heart failure and a 60 percent increase in overall mortality.
[5] An in-depth review of the most highly cited biomarkers (whose presence is used to infer illness and measure treatment effects) claimed that 83 percent of supposed correlations became significantly weaker in subsequent studies.
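The shrinkage of highly cited effects follows naturally from selection: the literature highlights the most striking initial results, which tend to owe part of their size to sampling noise that does not recur on replication. A minimal "winner's curse" simulation, with illustrative assumed parameters (every biomarker given the same modest true effect and Gaussian study noise):

```python
# Toy "winner's curse" simulation: effects selected for being striking
# shrink on replication. All parameter values are illustrative assumptions.
import random

random.seed(0)

N_STUDIES = 1000
TRUE_EFFECT = 0.2   # every biomarker has the same modest real effect
NOISE = 0.5         # sampling error in any single study

# Initial studies: observed effect = true effect + noise.
initial = [TRUE_EFFECT + random.gauss(0, NOISE) for _ in range(N_STUDIES)]

# The literature highlights the most striking results (top 5%).
threshold = sorted(initial, reverse=True)[N_STUDIES // 20]
winners = [e for e in initial if e >= threshold]

# Follow-up studies of the winners draw fresh, independent noise.
replications = [TRUE_EFFECT + random.gauss(0, NOISE) for _ in winners]

print(f"mean highlighted effect: {sum(winners)/len(winners):.2f}")
print(f"mean replicated effect:  {sum(replications)/len(replications):.2f}")
```

The highlighted effects look several times larger than the true effect, while the replications cluster back around it, with no fraud or error anywhere in the process.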
[5] Priming studies claim that decisions can be influenced by apparently irrelevant events that a subject witnesses just before making a choice.
A paper in PLoS ONE[8] reported that nine separate experiments could not reproduce a study purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan.
One test of the peer-review process supplied reviewers with research containing deliberately induced errors and found that most reviewers failed to spot the mistakes, even after being told that they were being tested.
[1] A fabricated paper on the effects of a chemical derived from lichen on cancer cells, submitted under a pseudonym, was sent to 304 journals for peer review.
[2] Peer reviewers typically do not re-analyse data from scratch, checking only that the authors’ analysis is properly conceived.
[2] In 2005 John Ioannidis, an epidemiologist at Stanford, showed that the common assumption that only one paper in 20 reports a false-positive result is incorrect.
He identified three categories of problems: insufficient statistical power (the ability to avoid type II errors, or false negatives); the low prior likelihood of the hypothesis being tested; and publication bias favoring novel claims.
In exploratory disciplines such as genomics, which rely on mining voluminous data about genes and proteins, perhaps only one hypothesis in a thousand is expected to prove correct.
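Ioannidis's argument can be reduced to a short calculation: the chance that a statistically significant finding is real (its positive predictive value) depends on the prior plausibility of the hypothesis and the study's power, not just the 5% significance threshold. A sketch, with the parameter values below chosen purely for illustration:

```python
# Positive predictive value (PPV) of a "significant" finding,
# following Ioannidis-style reasoning. Parameter values are illustrative.

def ppv(prior, power, alpha):
    """Probability that a significant result reflects a true effect."""
    true_positives = power * prior          # real effects correctly detected
    false_positives = alpha * (1 - prior)   # null effects wrongly flagged
    return true_positives / (true_positives + false_positives)

# Well-powered study of a fairly plausible hypothesis: most positives real.
print(round(ppv(prior=0.10, power=0.80, alpha=0.05), 2))   # 0.64

# Exploratory genomics, where few tested hypotheses are true:
# the vast majority of significant hits are false positives.
print(round(ppv(prior=0.001, power=0.80, alpha=0.05), 3))  # 0.016
```

Even with a well-run study and an honest 5% threshold, a one-in-a-thousand prior means fewer than 2% of the "discoveries" are real.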
[5] While correlations track the relationship between truly independent measurements, such as smoking and cancer, they are much less effective when variables cannot be isolated, a common circumstance in biological systems.
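The point about non-isolated variables is the classic confounding problem: a hidden common cause can make two quantities that never influence each other look strongly correlated. A minimal sketch, assuming a single Gaussian confounder (all variable names and noise levels here are invented for illustration):

```python
# Confounding demo: x and y never influence each other, yet correlate
# strongly because both are driven by a hidden common cause z.
# All distributions and noise levels are illustrative assumptions.
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# z is the hidden common cause; x and y each add independent noise to it.
z = [random.gauss(0, 1) for _ in range(5000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

print(f"corr(x, y) = {corr(x, y):.2f}")  # high, despite no causal link
```

In a biological system, where such shared drivers can rarely be isolated away, a strong observed correlation is correspondingly weak evidence of a direct relationship.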
[1] In 21 surveys of academics (mostly in the biomedical sciences but also in civil engineering, chemistry and economics) carried out between 1987 and 2008, 2% admitted fabricating data, but 28% claimed to know of colleagues who engaged in questionable research practices.
A campaign to persuade pharmaceutical firms to make all trial data available won its first convert in February 2013, when GlaxoSmithKline agreed to do so.
[2] Even well-written papers may omit the detail and tacit knowledge (subtle skills and extemporisations not considered notable) that a replication needs to succeed.
[1] Replacing peer review with post-publication evaluations can encourage researchers to think more about the long-term consequences of excessive or unsubstantiated claims.
More than half of 238 biomedical papers published in 84 journals failed to identify all the resources (such as chemical reagents) necessary to reproduce the results.
Journals have begun to demand that at least some raw data be made available, although only 143 of 351 randomly selected papers subject to a data-sharing policy actually complied.
[2] The Reproducibility Initiative is a service allowing life scientists to pay to have their work validated by an independent lab.
Blog Syn is a website run by graduate students that is dedicated to reproducing chemical reactions reported in papers.
In May 2013 Nature and its related publications introduced an 18-point checklist for life-science authors,[10] in an effort to ensure that published research can be reproduced.