Irreproducibility in Preclinical Research

Feb 20, 2017, 09:19 AM by Sara Kruper

Abstract

Irreproducibility is a significant issue in preclinical research, and it carries a substantial economic cost.  Clinical researchers rely heavily on results from preclinical studies as a starting point for their own work.  When results are difficult to reproduce, clinical researchers spend excessive time and money attempting to do so (6).  Many scientists argue that even if a study cannot be replicated, science still advances because the results will be corrected over time as others attempt similar studies.  The question is whether the return on investment is acceptable.

It is widely held that a lack of regulation on preclinical studies has led to the prevalence of irreproducibility, as much is left to the judgment of the individual researcher.  Clinical research, which is highly regulated by FDA standards, does not see the same level of irreproducibility, suggesting that comparable oversight could alleviate the issue in preclinical work.

Increased regulation will directly impact DSI customers.  Fortunately, DSI offers products and support from knowledgeable service teams that can reduce irreproducibility and lessen the burden of increased regulation.

Introduction

The reproducibility of a preclinical study has a direct impact on the success and timeline of clinical trials and the production of potentially life-saving medical advances.  Results published by preclinical researchers often serve as a starting point for clinical studies.  A conservative estimate that 50% of preclinical studies are irreproducible implies $28 billion spent annually in the United States on studies that cannot be duplicated (6).  This figure does not include the additional time and money expended at the clinical level when a study cannot easily be reproduced.

Amgen, a California biotechnology company, sought to demonstrate the prevalence of the problem by attempting to reproduce 53 well-known studies (1).  Its scientists were able to replicate only 6 of them, even with assistance from the original researchers (10).

This post outlines the definition, causes, and impact of irreproducibility in preclinical research.  In addition, it explores proposed solutions and the steps already taken.  Finally, it shows how DSI can support researchers in reducing irreproducibility and the impact of increased regulation.

Background

Definition

Leonard Freedman, Iain Cockburn, and Timothy Simcoe, authors of the PLOS Biology article “The Economics of Reproducibility in Preclinical Research”, define reproducibility as “the ability to replicate the same results demonstrated in a particular study using precisely the same methods and materials” (6).  Irreproducibility, therefore, is the inability to replicate results. 

The objective and unbiased nature of science is predicated on the ability of any researcher to replicate a study and reach the same end result (11).  The question is whether irreproducibility means the initial study was flawed, and if so, whether it was worth the investment.  It can be argued that science still advances with every study completed, reproducible or not, because others will use it as a starting point and potentially replicate or correct previous results.

Causes

Irreproducibility can occur for a number of reasons.  Freedman et al. identify the main causes as differences in study design, biological reagents and reference materials, laboratory protocols, and data analysis and reporting (6).  Others point to the peer review process and the pressure on academic researchers to publish.

Differences in study design can involve variation in animal strains, protocols, or reporting, a lack of appropriate controls, or improper calibration of instruments (3) (7).  National Institute of Neurological Disorders and Stroke (NINDS) program director Shai Silberberg contends that even if the animals used in a study are of identical age, sex, and type, differences can still occur, causing inconsistency in the results (10).  For example, some animals may be more sensitive than others to changes in environment, handling, or other external stressors, even if they are the same type.  Incorrectly calibrated equipment can introduce inaccuracies into the data collected, also leading to varying results.

Biological reagents vary significantly, and misidentification and cross-contamination are common (6) (7).  Proper training of laboratory staff, clear labeling, and detailed documentation are required to prevent these incidents.

Laboratory protocols depend on the lab environment (7).  The way labs operate, their cleanliness, and their staff vary significantly, all of which can play a role in reproducibility (4).  Researchers should clearly explain study requirements to all staff and enforce the necessary procedures.

Without standard practices and proper training, the way data are evaluated may also differ between labs.  Variances in techniques, statistical models, interpretation, and unintentional bias can occur (7).  Results can sometimes reach statistical significance unexpectedly, even when the researcher cannot pinpoint why (3).  When this occurs, the finding should be retested and repeated before it is published.  However, because much is left to the judgment of the individual researcher, who is under great pressure to publish, this does not always happen (2).  The sketch below illustrates how analytic choices alone can move a result across the significance threshold.
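To make that concrete, here is a minimal sketch of such analytic flexibility, using synthetic data and arbitrarily chosen tests (nothing here is drawn from any cited study):

```python
import numpy as np
from scipy import stats

# Synthetic example only: two groups drawn from the SAME distribution,
# so any "significant" difference between them is a false positive.
rng = np.random.default_rng(seed=7)
control = rng.normal(loc=100.0, scale=15.0, size=12)
treated = rng.normal(loc=100.0, scale=15.0, size=12)

# Pipeline 1: parametric t-test on all data points.
p1 = stats.ttest_ind(control, treated).pvalue

# Pipeline 2: nonparametric test after a post hoc rule that drops
# "outliers" beyond 1.5 SD, a rule another lab might not apply.
keep = lambda x: x[np.abs(x - x.mean()) < 1.5 * x.std()]
p2 = stats.mannwhitneyu(keep(control), keep(treated)).pvalue

print(f"t-test: p = {p1:.3f}")
print(f"Mann-Whitney after exclusions: p = {p2:.3f}")
# Different pipelines can land on opposite sides of p = 0.05 for the
# same raw numbers, which is why analysis settings should be fixed
# and documented before the data are seen.
```

Running both pipelines over many simulated datasets would show how often exclusion rules and test choices alone manufacture "significance"; the point is that undocumented analytic judgment is itself a source of irreproducibility.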

Impact

Of the $114.8 billion spent on life science research in the United States, almost half goes to preclinical research (6).  Freedman et al. conservatively estimate that 50% of these studies are irreproducible, resulting in $28 billion spent annually on studies that cannot be trusted (6).  The arithmetic behind that figure is sketched below.
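As a rough check on the $28 billion figure (reading Freedman et al.'s "almost half" as an approximately 49% preclinical share):

```latex
\$114.8\,\text{B} \times \underbrace{0.49}_{\text{preclinical share}} \approx \$56\,\text{B},
\qquad
\$56\,\text{B} \times \underbrace{0.50}_{\text{irreproducible rate}} \approx \$28\,\text{B per year}
```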

Clinical researchers depend on the findings of preclinical studies, especially from academia, as a starting point for their applications.  Clinical studies operate under tighter regulation, so their findings are more likely to survive rigorous examination (7).  Pharmaceutical and biotechnology companies spend 3-24 months and $500,000-$2 million attempting to reproduce preclinical studies, which postpones the development of medical advances (6).

Solutions

Proposed

One of the most commonly proposed solutions is to enforce increased regulation, or best practices, governing how studies are designed and documented, to ease the transition from preclinical to clinical work.  Some organizations have already taken steps toward this goal.  The National Institutes of Health (NIH), a large financial contributor to preclinical research, released more stringent guidelines on the documentation of studies and provided a checklist of items a researcher must complete to ensure a study is prepared properly (3).  Ideally, the more thoroughly experimental conditions are documented, the easier a study will be to reproduce.  For example, the Federation of American Societies for Experimental Biology (FASEB) recommends that researchers clearly communicate the reasoning behind the animal model used, as well as its ability to simulate conditions and healing in humans (4).  FASEB also suggests that recording animal care procedures will increase reproducibility (4).

Implementing best practices could increase costs by 15-25%, and enforcing them across the entire international research community would require cooperation from numerous parties (6).  Although costs would rise, the amount of money spent on irreproducible research could fall, potentially resulting in a higher return on investment (6).

The NIH is also reviewing the grant proposal and peer review processes for points at which irreproducibility issues could be identified (3).  If funding agencies and journals enforce best practices, researchers will have to comply.  In addition to guidelines and checklists, the NIH has added required training for postdoctoral fellows on improving reproducibility and on proper experimental planning and execution (3).  FASEB advocates making training readily available so researchers can complete it on their own time (4).  Requirements would include improved training for all levels of laboratory staff and would be enforced by funding agencies (6).

The NIH also sponsored a seminar with Nature Publishing Group and Science in which many preclinical and basic science journals came together to determine what actions could be taken to reduce irreproducibility (9).  The participants created a list of policies journals should outline and enforce for publication.  These include policies on statistical analysis and transparency, as well as the sharing of results and resources.  The policies also outline journals' responsibility to consider publishing studies that contradict a previously published paper, and they suggest guidelines on best practices for the use and documentation of antibodies, cell lines, and animals (9).

Another organization taking action to enact new standards for research is the University of California, San Diego (UCSD).  UCSD worked with the Global Biological Standards Institute (GBSI), a non-profit organization, to create a consistent approach to selecting and documenting antibodies and reagents in hopes of increasing reproducibility (5).  According to Tiffany Fox of UCSD, “A scoring and tracking system will both provide a way to validate an antibody’s performance for different kinds of experiments (since not all experiments use antibodies in the same way) and improve researchers’ confidence that a particular antibody will work as expected” (5).  Thermo Fisher Scientific, an international biotechnology organization, supports these efforts.  Matt Baker, Director of R&D and Business Development, RUO Antibodies, said, “Labeling and identification standards of research materials promotes transparency and reproducibility of important scientific research.  We believe the goal of Research Identification Initiative is one very important part of what the International Working Group on Antibody Validation is striving to accomplish – standardized guidelines for antibody specificity, functionality and reproducibility” (5).  Freedman et al. are aligned with these efforts: they name study design and biological reagents and reference materials as the areas to improve first, since those will provide the greatest return on investment (6).

Best practices could also be enforced through the peer review process.  Freedman discusses “the appropriate role of journals as gatekeepers of information” (7).  A standard for the number of times a study must be reproduced before it can be published could be established, but some would argue this slows the progression of science and diminishes the amount of information available from preclinical studies as scientists build on one another’s work.  Academia.edu advocates publishing papers prior to peer review, allowing the same number of published studies while maintaining control over peer-reviewed content (2).

DSI can help

Reproducibility is an important issue to DSI, as all of the proposed solutions would impact its customers.  However, DSI has solutions that reduce irreproducibility and the pressure of increased regulation.

Products

DSI’s implantable telemetry offering allows in vivo physiologic monitoring of freely moving animals.  Implantable telemetry is superior to tethered or restrained options for both reproducibility and animal welfare.  Any interference in an animal’s natural habitat or routine can affect physiologic signals and, consequently, the data collected.  The form of that interference and its effect on the experimental animal are difficult to reproduce: they would need to be documented precisely, and some animals are more sensitive to them than others.

Supporting the “three R’s” of animal welfare (replacement, reduction, and refinement), implantable telemetry allows researchers to use fewer animals per study and to reduce pain and stress for those they do use, because the animals are handled less than with other recording methodologies (8).  Using fewer animals and reducing their stress decreases variability and increases the quality of the data collected.  Implantable telemetry also allows continuous monitoring, rather than periodic collection of manual data points, providing more data to analyze and increasing the likelihood of detecting adverse events; brief events can fall entirely between periodic samples, as the sketch below illustrates.
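Here is a hypothetical illustration of that point, using synthetic data rather than DSI output: a simulated 24-hour heart-rate trace containing one brief adverse event, examined both continuously and by scheduled spot checks.

```python
import numpy as np

# Synthetic 24 h heart-rate trace with one brief adverse event.
rng = np.random.default_rng(seed=1)
t = np.arange(0, 24 * 3600)                 # one sample per second for 24 h
hr = 350 + rng.normal(0, 5, t.size)         # baseline rat heart rate, bpm
event = slice(13 * 3600, 13 * 3600 + 120)   # a 2-minute tachycardic episode
hr[event] += 150

# Continuous telemetry sees every sample; a manual spot check every
# 4 hours sees only six points and straddles the whole episode here.
spot_checks = hr[:: 4 * 3600]               # samples at 0, 4, 8, ... 20 h
print(f"continuous max: {hr.max():.0f} bpm")       # ~500 bpm, event caught
print(f"4-hourly spot checks: {np.round(spot_checks)}")  # event missed
```

With real telemetry, the timing and shape of an event are unknown in advance, which is exactly why continuous recording outperforms scheduled spot checks for adverse-event detection.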

Additionally, DSI offers software packages to collect and analyze data.  One tool that helps reduce variability in data analysis, and therefore irreproducibility, is Data Insights™.  Users can set standard analysis settings in the software to be applied to all subjects within a study, and across multiple studies, to maintain consistent analysis.  When settings are applied without accounting for inter-animal variability, however, an algorithm may exclude good physiologic data and/or mismark reported data.  Data Insights can apply quality-assessment-based searches to ensure the data are reported accurately and to minimize the exclusion of valid data.
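The underlying idea, a single documented configuration applied identically everywhere, can be sketched generically (hypothetical code, not the Data Insights API; the field names and validity range are invented for illustration):

```python
from dataclasses import dataclass

# Hypothetical sketch, NOT the Data Insights API.  It only illustrates
# the concept: one frozen, documented set of analysis settings applied
# identically to every subject and every study.
@dataclass(frozen=True)
class AnalysisConfig:
    hr_valid_bpm: tuple = (150, 600)   # assumed plausible validity range

STUDY_CONFIG = AnalysisConfig()        # defined once, reused everywhere

def mean_heart_rate(trace, cfg=STUDY_CONFIG):
    """Apply the same validity limits to any subject's trace."""
    lo, hi = cfg.hr_valid_bpm
    valid = [v for v in trace if lo <= v <= hi]
    return sum(valid) / len(valid) if valid else None

# Every subject goes through the same settings, so the analysis itself
# cannot drift between animals, analysts, or sites.
subjects = {"rat_01": [352, 360, 9999, 348], "rat_02": [341, 355, 20, 350]}
print({sid: mean_heart_rate(tr) for sid, tr in subjects.items()})
```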

Services

DSI’s Data Services team can assist with study design through free consultations to help researchers develop designs with high internal and external validity, optimized for the species and system under investigation.  Mid-study consultation and data review are also helpful to catch data quality, calibration, or other issues early on.  Mid-study evaluation is critical to ensure any necessary procedural, design, or scheduling changes can be made in a timely fashion.  An example occurred when Data Services supported a customer performing respiratory research whose baseline calibrations were incorrect.  Fortunately, Data Services received these data just after collection and were quickly able to identify the problem.  Although the researcher did have to repeat that portion of the study, they found the problem and collected a proper baseline before dosing the animals with a test compound.  The result was a study with an appropriate baseline for analysis of test compound results.  Had the issue not been identified, they would have had to try to salvage results after the fact using a less appropriate baseline, or repeat the entire study.

Once data are collected, Data Services can assist with analysis and reporting.  Composed of highly trained experts, the team provides accurate and consistent analysis.  They employ rigorous documentation of methodology and procedures, as well as systems for checking data quality and validity.  DSI also has in-house expertise across various physiological systems, which is crucial for identifying questionable or non-physiologic values.  Data Services offers detailed, customizable reporting options and has access to advanced tools, such as Data Insights, which allow them to quantify data that might otherwise have been excluded as noise.

DSI can also help improve laboratory protocols.  The consultation and training offered by Data Services and Technical Support help researchers ensure their protocols and procedures align well with the products they use (e.g., using appropriate sampling rates for quality data, as sketched below, and teaching procedures that minimize scoring variability when analyzing in NeuroScore™).
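To show why sampling rate matters, here is a small sketch with a synthetic signal and illustrative rates only (the 10 Hz component and the rates are assumptions for the example, not DSI recommendations):

```python
import numpy as np

# Synthetic illustration: a 10 Hz sinusoid standing in for a physiologic
# component, sampled above and below the Nyquist rate (2 x 10 Hz).
f_signal = 10.0                          # Hz, component of interest

def apparent_freq(fs, duration=2.0):
    """Estimate frequency from zero crossings of the sampled signal."""
    t = np.arange(0, duration, 1 / fs)
    x = np.sin(2 * np.pi * f_signal * t + 0.1)   # phase avoids exact zeros
    crossings = np.sum(np.diff(np.sign(x)) != 0)
    return crossings / (2 * duration)    # each cycle has two crossings

print(apparent_freq(100.0))  # ~10 Hz: adequately sampled
print(apparent_freq(12.0))   # ~2 Hz: aliased copy at |10 - 12| Hz
# Sampling below twice the highest frequency of interest silently turns
# the signal into a different one, so two labs using different rates can
# record different "data" from identical physiology.
```

The appropriate rate depends on the signal and the species; the sketch only shows why the choice needs to be made, and documented, deliberately.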

In addition, DSI’s Surgical Services team can help researchers increase reproducibility.  Instead of developing surgical protocols from scratch, researchers can take advantage of Surgical Services in a variety of ways.  First, DSI’s expert surgeons can provide pre-implanted animals, with surgery already completed, ensuring accurate implant placement, optimized telemetry signals, and consistent animal care.  Correct catheter placement is crucial to successful data collection because it avoids stress on the catheter that could cause discrepancies.  The surgeons carefully select the animals sent to customers, shipping only those that are healthy and properly healed.  Telemetry signals are verified by Data Services post-surgery and pre-shipment to ensure optimal quality.

Alternatively, researchers can take advantage of surgical training to perform surgeries successfully at their own facility.  Surgical training covers best practices for animal care throughout the process, implant-specific surgical techniques, and how to identify and achieve optimized telemetry signals.

DSI’s complete solution reduces variability in the data collected and reported.  Its service teams can support researchers throughout their studies, reducing stress and helping ensure a quality result.

Conclusion

Research suggests that the lack of best practices and regulation in the preclinical arena has led to a significant irreproducibility problem with substantial economic impact.  Approximately $28 billion is spent annually in the United States alone on irreproducible research, with an uncertain return on investment.  Irreproducibility arguably delays potentially life-changing medical advances.

The most commonly proposed solution for the issue is increased regulation on best practices and documentation of studies.  Increased regulation would directly affect DSI customers by increasing their workload.  However, DSI’s products reduce the risk of irreproducibility by automating much of the process and reducing animal stress.  Its service teams can alleviate the impact of increased regulation by assisting with initial study setup and design, consultation throughout the study, and complete analysis after data collection.

Works Cited

  1. Baker, M. (2016, February 11). Biotech giant publishes failures to confirm high-profile science. Nature, 530(7589), 141. doi:10.1038/nature.2016.19269, from http://www.nature.com/news/biotech-giant-publishes-failures-to-confirm-high-profile-science-1.19269
  2. Begley, C. G., & Ioannidis, J. P. (2015, January 2). Reproducibility in Science: Improving the Standard for Basic and Preclinical Research. Circulation Research, 116(1), 116-126. doi:10.1161/circresaha.114.303819, from https://www.ncbi.nlm.nih.gov/pubmed/25552691
  3. Collins, F. S., & Tabak, L. A. (2014, January 27). Policy: NIH plans to enhance reproducibility. Nature, 505(7485), 612-613. doi:10.1038/505612a, from http://www.nature.com/news/policy-nih-plans-to-enhance-reproducibility-1.14586
  4. Enhancing Research Reproducibility: Recommendations from the Federation of American Societies for Experimental Biology (Publication). (2016). Bethesda, MD.
  5. Fox, B. T. (2016, October 18). UC San Diego Researchers Make Strides in Research Reproducibility at Global Biological Standards Meeting. Retrieved October 19, 2016, from http://www.calit2.net/newsroom/release.php?id=2767
  6. Freedman, L. P., Cockburn, I. M., & Simcoe, T. S. (2015). The Economics of Reproducibility in Preclinical Research. PLoS Biology, 13(6), e1002165. doi:10.1371/journal.pbio.1002165
  7. Freedman, L. P., & Inglese, J. (2014, August 1). The Increasing Urgency for Standards in Basic Biologic Research. Cancer Research, 74(15), 4024-4029. doi:10.1158/0008-5472.can-14-0925, from http://cancerres.aacrjournals.org/content/early/2014/07/17/0008-5472.CAN-14-0925
  8. Kramer, K., & Kinter, L. (2003, May 13). Evaluation and applications of radiotelemetry in small laboratory animals. Physiological Genomics, 13(3), 197-205. doi:10.1152/physiolgenomics.00164.2002, from http://physiolgenomics.physiology.org/content/13/3/197
  9. Principles and Guidelines for Reporting Preclinical Research | National Institutes of Health (NIH). (2016, February 5). Retrieved October 19, 2016, from https://www.nih.gov/research-training/rigor-reproducibility/principles-guidelines-reporting-preclinical-research
  10. Schmidt, C. W. (2014, July). Research Wranglers: Initiatives to Improve Reproducibility of Study Findings. Retrieved October 17, 2016, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4080539/
  11. Trouble at the Lab; Unreliable Research. (2013, October 19). The Economist (US). Retrieved October 11, 2016, from http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble