Hey Rypke, oh no…. it's someone else… Who on earth is this, then? It's Daniele Fanelli. Living proof that Italy produces not only culinary but also intellectual excellence. On 13 February he published a remarkable piece in Nature. I discovered it via an article in Newsweek, which fortunately you can read online in full:

‘Awash in False Findings’ – Is most scientific research factually distorted?

The opening paragraph reads:

There is, writes Daniele Fanelli in a recent issue of Nature, something rotten in the state of scientific research—“an epidemic of false, biased, and falsified findings” where “only the most egregious cases of misconduct are discovered and punished.” A research fellow at the Institute for the Study of Science, Technology and Innovation at the University of Edinburgh, Fanelli is a leading thinker in an increasingly alarming field of scientific research: one that seeks to find out why it is that so much scientific research turns out to be wrong.

Are we interested? We certainly are. Especially when you see that the article goes on to reference Diederik Stapel, and that the murkier concealment of inconvenient facts is far worse for society than one such rotten vegetarian apple.

Read that? Then straight on to the original article in Nature, which opens with a cracker of a summarizing intro:

To make misconduct more difficult, the scientific community should ensure that it is impossible to lie by omission, argues Daniele Fanelli.

Below, the full Nature article for our blog review:

Redefine misconduct as distorted reporting

To make misconduct more difficult, the scientific community should ensure that it is impossible to lie by omission, argues Daniele Fanelli. 

13 February 2013

Against an epidemic of false, biased and falsified findings, the scientific community’s defences are weak. Only the most egregious cases of misconduct are discovered and punished. Subtler forms slip through the net, and there is no protection from publication bias.

Delegates from around the world will discuss solutions to these problems at the 3rd World Conference on Research Integrity (wcri2013.org) in Montreal, Canada, on 5–8 May. Common proposals, debated in Nature and elsewhere, include improving mentorship and training, publishing negative results, reducing the pressure to publish, pre-registering studies, teaching ethics and ensuring harsh punishments. These are important but they overestimate the benefits of correcting scientists’ minds. We often forget that scientific knowledge is reliable not because scientists are more clever, objective or honest than other people, but because their claims are exposed to criticism and replication.

The key to protecting science, therefore, is to strengthen self-correction. Publication, peer-review and misconduct investigations should focus less on what scientists do, and more on what they communicate.

What is wrong with current approaches? By defining misconduct in terms of behaviours, as all countries do at present, we have to rely on whistle-blowers to discover it, unless the fabrication is so obvious as to be apparent from papers. It is rare for misconduct to have witnesses; and surveys suggest that when people do know about a colleague’s misbehaviour, they rarely report it. Investigators, then, face the arduous task of reconstructing what a scientist did, establishing that the behaviour deviated from accepted practices and determining whether such deviation expressed an intention to deceive. Only the most clear-cut cases are ever exposed.

Take the scandal of Diederik Stapel, the Dutch star psychologist who last year was revealed to have been fabricating papers for almost 20 years. How was this possible? First, Stapel insisted on collecting data by himself, which kept away potential whistle-blowers. Second, researchers had no incentive to replicate his experiments, and when they did, they lacked sufficient information to explain discrepancies. This was mainly because, third, Stapel was free to omit from papers details that would have revealed lies and statistical flaws.

In tackling these issues, a good start would be to redefine misconduct as distorted reporting: ‘any omission or misrepresentation of the information necessary and sufficient to evaluate the validity and significance of research, at the level appropriate to the context in which the research is communicated’.

“Focus less on what scientists do and more on what they communicate.”

Some might consider this too broad. But it is no more so than the definition of falsification used by the US Office of Science and Technology Policy: “manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record”. Unlike this definition, however, mine points unambiguously to misconduct whenever there is a mismatch between what was reported and what was done.

Authors should be held accountable for what they write, and for recording what they did. But who decides what information is necessary and sufficient? That would be experts in each field, who should prepare and update guidelines. This might seem daunting, but such guidelines are already being published for many biomedical techniques, thanks to initiatives such as the EQUATOR Network (equator-network.org) or Minimum Information for Biological and Biomedical Investigations (mibbi.sourceforge.net).

The main task of journal editors and referees would then be to ensure that researchers comply with reporting requirements. They would point authors to the appropriate guidelines, perhaps before the study had started, and make sure that all the requisite details were included. If authors refused or were unable to comply, their paper (or grant application or talk) would be rejected. The publication would indicate which set or sets of guidelines were followed.

By focusing on reporting practices, the community would respect scientific autonomy but impose fairness. A scientist should be free to decide, for example, that ‘fishing’ for statistical significance is necessary. However, guidelines would require a list of every test used, allowing others to infer the risk of false positives.
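Why would a full list of tests let readers infer the risk of false positives? A small simulation makes the arithmetic concrete (this is our own illustrative sketch, not part of Fanelli's article; the numbers of tests and studies are arbitrary assumptions). A researcher who runs 20 independent tests on pure noise and reports only "a" significant result will find one most of the time; a reader who sees all 20 tests can apply a standard Bonferroni correction and restore the nominal error rate:

```python
import random

random.seed(42)

ALPHA = 0.05        # nominal significance threshold
N_TESTS = 20        # tests "fished" per study (assumed for illustration)
N_STUDIES = 10_000  # simulated studies, none with a real effect

def run_study(n_tests):
    # Under the null hypothesis, each test's p-value is uniform on (0, 1).
    return [random.random() for _ in range(n_tests)]

# Selective reporting: the study "succeeds" if ANY test crosses alpha.
naive_hits = sum(
    any(p < ALPHA for p in run_study(N_TESTS)) for _ in range(N_STUDIES)
)

# Full disclosure: readers see all 20 tests and apply a Bonferroni
# correction, comparing each p-value against alpha / n_tests.
corrected_hits = sum(
    any(p < ALPHA / N_TESTS for p in run_study(N_TESTS))
    for _ in range(N_STUDIES)
)

print(f"naive false-positive rate:     {naive_hits / N_STUDIES:.2f}")
print(f"corrected false-positive rate: {corrected_hits / N_STUDIES:.2f}")
```

With 20 tests the family-wise false-positive rate is 1 − 0.95²⁰ ≈ 0.64, not 0.05 — which is exactly why omitting the list of tests misrepresents the evidence, even though no single number in the paper is false.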

Carefully crafted guidelines could make fabrication and plagiarism more difficult, by requiring the publication of verifiable details. And they could help to uncover questionable practices such as ghost authorship, exploiting subordinates, post hoc hypotheses or dropping outliers.

Graduate students could, in addition to learning the guidelines, train by replicating published studies. Special research funds could be reserved for independent replications of unchallenged claims.

The current defence against misconduct is prepared for the wrong sort of attack: the community tries to regulate research like any other profession, but it is different. The reliability of scientific ‘products’ is ensured not by individual practice, but by collective dialogue.

Source: Nature.com