Scientific integrity

A series of publicized scandals in the United States led to heightened debate on the ethical norms of science and the limitations of the self-regulation processes implemented by scientific communities and institutions.[1]

Following the development of codes of conduct, taxonomies of unethical practices have been significantly expanded beyond the long-established forms of scientific fraud (plagiarism, falsification and fabrication of results).

Definitions of "questionable research practices" and the debate over reproducibility also target a grey area of dubious scientific results, which may not be the outcome of deliberate manipulation.

Several case studies have highlighted that, while the principles of typical codes of conduct adhere to common scientific ideals, they are seen as remote from actual work practices and their effectiveness is criticized.

International codes of conduct and national legislation on research integrity have officially endorsed open sharing of scientific output (publications, data, and the code used to perform statistical analyses) as a way to limit questionable research practices and to enhance reproducibility.

In contrast with other forms of ethical misconduct, the debate over research integrity focuses on "victimless offences" that only hurt "the robustness of scientific record and public trust in science".[5]

In Reflections on the Decline of Science in England, and on Some of its Causes, first published in 1830, Babbage identified four classes of scientific fraud,[6] from outright forgery to varied degrees of arrangement and cooking of the data or the methods.

At the time, the "scientific community responded to reports of 'scientific fraud' (as it was often called) by asserting that such cases are rare and that neither errors nor deception can be hidden for long because of science's self-correcting nature".[11]

For Patricia Wolff, along with a few obvious manipulations, there was a wide range of grey areas due to the complexity of fundamental research: "the boundaries between egregious self-deception, culpable carelessness, fraud, and just plain error, can be very blurred indeed".[13]

By the end of the 1980s, the amplification of misconduct scandals and the heightened political and public scrutiny put scientists in a difficult position in the United States and elsewhere: "The tone of the 1988 US congressional oversight hearings, chaired by Rep. John Dingell (D-MI), that investigated how research institutions were responding to misconduct allegations reinforced many scientists' view that both they and scientific research itself were under siege."[14]

The main answer was procedural: research integrity has "been codified into numerous codes of conduct, field specific, national, and international alike".[20]

While these normative texts may frequently share a core of common principles, there has been growing concern that "fragmentation, lack of interoperability and varying understandings of central terms can be sensed".[22]

In 1830, Charles Babbage introduced the first taxonomy of scientific frauds, which already covers some forms of questionable research practices: hoaxing (a voluntary fraud "far from justifiable"[6]), forging ("whereas the forger is one who, wishing to acquire a reputation for science, records observations which he has never made"[23]), trimming (which "consists in clipping off little bits here and there from those observations which differ most in excess from the mean"[24]), and cooking.

Cooking is Babbage's main focus, an "art of various forms, the object of which is to give to ordinary observations the appearance and character of those of the highest degree of accuracy".

The scope of scientific misconduct is expansive: along with data fabrication, falsification and plagiarism, it includes "other serious deviations" that are demonstrably done in bad faith.

A 2012 survey of 2,000 psychologists found that "the percentage of respondents who have engaged in questionable practices was surprisingly high",[35] especially in regard to selective reporting.[38]

In 2004, Caroline Whitbeck underlined that the enforcement of a few formal rules has overall failed to address a structural "erosion or neglect" of scientific trust.[39]

In 2009, Schuurbiers, Osseweijer and Kinderlerer led a series of interviews following the introduction of the Dutch code of conduct on research integrity in 2005.[40]

While the principles "were seen to reflect the norms and values within science rather well", they seemed to be isolated from the actual work practices, which "may lead to morally complex situations".[41]

Respondents were also critical of the underlying individualist philosophy of the code, which shifted the entire blame to individual researchers without taking into account institutional or community-wide issues.[42]

In 2015, a survey of "64 faculty members at a large southwestern university" in the United States "yielded similar results":[38] many of the respondents were not aware of the existing ethical guidelines, and the communication process remained poor.

This norm "was far from universally accepted" in the early development of scientific communities and has remained "one of the many ambivalent precepts contained in the institution of science".[51]

Access is no longer the main dimension of open science, as it has been extended by more recent commitments toward transparency, collaborative work and social impact.[54]

The translation of the ethical values of open science into applied recommendations was mostly undertaken by institutional and community initiatives until the 2010s.[55]

In 2018, Heidi Laine attempted to establish a nearly exhaustive list of "ethical principles associated with open science".[57] This categorization has to contend with the diversity of approaches and values associated with the open science movement and their ongoing evolution, as the "term will likely remain as fluid as any other attempt to coin a complex system of practices, values and ideologies in one term".

Open scientific productions can in theory be universally shared: their dissemination is not constrained by the classic membership model of the "knowledge club".

"Inclusivity, transparency, and protection from inappropriate influence are hallmarks of scientific integrity." – HHS. To promote a culture of scientific integrity at HHS, the agency has outlined its policy in seven specific areas.[66] Through these areas, open science practices can be promoted to protect against bias, plagiarism, and data fabrication and falsification, as well as inappropriate influence, political interference, and censorship.

The NIH acts as the nation's medical research agency, focusing on making important discoveries that improve health and save lives.
