The Question Of How To Prevent Irreproducible Data From Being Published



#1 Dr. Joseph Lorenzo

    Advanced Member

  • Administrators
  • 94 posts

Posted 30 April 2015 - 09:18 AM

The reproducibility of published data remains a critical determinant of the quality of any research. Unfortunately, some scientific articles are published and their results are subsequently found to be difficult or impossible to reproduce. When this occurs, it affects all scientists because it erodes the public’s trust in science and scientists. The majority of recently disputed articles have concerned preclinical or basic science topics. In response to this issue, Francis Collins, the Director, and Lawrence Tabak, the Principal Deputy Director, of the US National Institutes of Health wrote an article in Nature (1) in which they proposed a number of steps to mitigate this problem. First, they stated that irreproducibility is rarely caused by scientific misconduct. Rather, they said that “a complex array of other factors seems to have contributed to the lack of reproducibility. Factors include poor training of researchers in experimental design; increased emphasis on making provocative statements rather than presenting technical details; and publications that do not report basic elements of experimental design”. In their excellent article last fall in the JBMR (2), Stavros Manolagas from the University of Arkansas for Medical Sciences and Henry Kronenberg from the Massachusetts General Hospital and Harvard Medical School went over this issue in detail as it relates to the bone field and suggested several potential solutions to this problem.

In response to this controversy, the NIH released its Principles and Guidelines for Reporting Preclinical Research in November of last year (3). These were accepted by a number of journals, including Nature, Science and Cell, as requirements before articles could be published in those journals. The Guidelines contained a large number of recommendations. These included: 1) encouraging the use of standards within each scientific discipline for nomenclature and reporting; and 2) requiring that: a) each study list the number of times experiments were repeated; b) statistics be fully reported; c) a statement be included about whether samples were randomized and how this was accomplished; d) there be clarity about whether experimenters were blinded to the treatments and outcome assessments; e) authors state how the sample size for each group in an experiment was determined; and f) the criteria used to exclude data be clear.

The Guidelines also stipulate that, at a minimum, all datasets on which the conclusions of a paper are based be made available, where ethically appropriate, to the editors and journal reviewers upon request, and upon reasonable request immediately after publication. Finally, the Guidelines obligate journals to consider for publication articles that refute their published papers, using the same standards for acceptance that were applied to the original publication.

However, the Principles and Guidelines have not received a universal endorsement. John Haywood, the President of the Federation of American Societies for Experimental Biology (FASEB, of which ASBMR is a member), recently wrote to Lawrence Tabak at NIH about concerns that FASEB had with the Guidelines (4). Haywood agreed that “Guidelines to encourage uniform reporting of data and experimental methods are valuable to the scientific community and the public”. However, FASEB’s concerns were: 1) that the NIH Guidelines are “a ‘one size fits all’ list” that “could become burdensome, overwhelming, and ultimately ineffective if it requires everyone to report every factor regardless of its relevance to a particular kind of research”; and 2) that the NIH Guidelines are too rigid. Haywood stated: “Biomedical research is a vast enterprise with substantial variety in statistical methods, data types, and best practices within and between disciplines. To achieve the stated goals of enhancing rigor, reproducibility, robustness, and transparency, guidelines must allow flexibility and discretion by journal editors and reviewers”. Finally, FASEB was concerned that the Guidelines represent an “increased administrative burden for researchers and reviewers” that may weaken scientific peer review.

So where do we go from here? Clearly, there is a need to diminish the likelihood that irreproducible data will be published, which means that some version of the Guidelines is inevitable. Scientists need to engage in a dialogue with funding agencies and journals to produce a set of standards for our profession that are fair, effective and as minimally burdensome as possible. Achieving such standards should be a two-way street that ultimately improves published science and the public’s faith in scientists.

Joe Lorenzo,
Farmington, CT, USA



