Reproducibility remains one of the most hotly debated topics in preclinical research, with countless potential causes and solutions explored over the last few years. A 2016 Nature survey assessed the extent of the reproducibility crisis and asked researchers to evaluate several proposed solutions. The results emphasized both how widespread the issue is and how divided researchers are about it. For example, the widely endorsed practice of standardizing all experimental conditions has recently been challenged by research published in PLOS Biology.
In 2016, Nature surveyed over 1500 researchers and reported “more than 70 percent of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments”.1 Respondents also rated a number of proposed ways to improve reproducibility. The five most popular responses are shown below.2
- Better understanding of statistics
- Better mentoring/supervision
- More robust design
- Better teaching
- More within-lab validation
Challenging the standard
Hanno Würbel and his team at the University of Bern recently conducted a study to evaluate the widely accepted practice of standardizing all experimental conditions. The team emulated single- and multi-site experiments using data from 440 preclinical studies and found that multi-site experiments, even those involving just two labs, produced more reproducible results than single-site experiments. Despite attempts to standardize all conditions, there will always be differences in lab environment, staff, animals, and so on. Animal phenotypes, in particular, are influenced by environmental factors such as noise, odors, microbiota, or personnel.3 Würbel and team posit, “instead of indicating that a study was biased or underpowered, a failure to reproduce its results might rather indicate that the replication study was testing animals of a different phenotype”.3 This would explain why multi-site experiments are more reproducible: they account for variation in animal phenotype. The team further supported this idea by showing that increasing sample size within a single lab simply produces results that are even more lab-specific and difficult to reproduce.3
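The intuition behind this finding can be illustrated with a small simulation. This is a hypothetical sketch, not data or code from the cited study: we assume each lab shifts the animals’ baseline phenotype and also shifts the treatment effect itself (a phenotype-by-environment interaction), then compare how much single-site and multi-site estimates scatter across replications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (illustrative) variance structure, not from the cited study:
# each lab has its own baseline offset, and the treatment effect itself
# varies by lab (a phenotype-by-environment interaction).
n_labs = 50
lab_offset = rng.normal(0.0, 1.0, n_labs)        # lab-specific baseline shift
lab_interaction = rng.normal(0.0, 0.5, n_labs)   # lab-specific effect shift
true_effect = 0.5                                # population-average effect

def estimate_effect(labs, n_per_lab=10):
    """Average treated-minus-control difference over the chosen labs."""
    diffs = []
    for lab in labs:
        control = rng.normal(lab_offset[lab], 1.0, n_per_lab)
        treated = rng.normal(lab_offset[lab] + true_effect + lab_interaction[lab],
                             1.0, n_per_lab)
        diffs.append(treated.mean() - control.mean())
    return float(np.mean(diffs))

# Replicate each design many times; reproducibility shows up as a
# smaller spread of effect estimates across replications.
single_site = [estimate_effect(rng.choice(n_labs, size=1, replace=False))
               for _ in range(1000)]
multi_site = [estimate_effect(rng.choice(n_labs, size=3, replace=False))
              for _ in range(1000)]

print(f"single-site spread: {np.std(single_site):.3f}")
print(f"multi-site spread:  {np.std(multi_site):.3f}")
```

Averaging over several labs dilutes each lab’s idiosyncratic interaction term, so the multi-site estimates cluster more tightly around the population-average effect, mirroring the pattern the Bern team reported.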
DSI can help
DSI continues to monitor this issue and look for additional ways to help our clients maximize reproducibility through products and services. The following paragraphs discuss how DSI contributes to the solutions stated above and assists clients in obtaining high-quality, reproducible data.
DSI’s implantable telemetry offering allows in vivo physiologic monitoring of freely moving animals and provides a consistent way of collecting high-quality, trustworthy data. Supporting the “three R’s” of animal welfare (replacement, reduction, and refinement), implantable telemetry allows researchers to use fewer animals per study and to reduce pain and stress in those used, as the animals are handled less than with other recording methodologies.4 Using fewer animals and reducing their stress improves the quality of the data collected. Implantable telemetry also allows continuous monitoring rather than periodic manual data collection, providing more data to analyze and increasing the likelihood of detecting adverse events. Continuous data collection thus provides a more accurate picture, since important information can be missed when data points are collected only periodically.
Telemetry cannot compensate for the lack of variation in animal phenotype discussed above, but it does provide a consistent way of analyzing animals across multiple sites, removing any potential variability between data collection methods. The use of telemetry has been cited in thousands of publications and is a standard way of collecting physiologic data in a wide range of research disciplines. A sampling of citations for studies using DSI technology can be found in our bibliography.
DSI offers software packages to collect and analyze data. One tool that helps reduce variability and bias in data analysis, and therefore increases reproducibility, is Data Insights™. Users can define standard analysis settings in the software and apply them to all subjects within a study, and across multiple studies, to maintain consistent analysis. When standard settings are applied without accounting for inter-animal variability, however, an algorithm may exclude good physiologic data and/or mismark reported data. Data Insights™ can apply quality assessment-based searches to ensure that data are reported accurately and that exclusion of data is minimized.
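The general idea of rule-based quality screening can be sketched in a few lines of Python. This is purely illustrative (the function name, threshold values, and physiologic range are assumptions, not Data Insights™ internals): the same criteria are applied to every subject, so exclusions are consistent and auditable rather than ad hoc, and flagged values are kept for review instead of silently dropped.

```python
# Illustrative sketch of rule-based quality screening; the range below is a
# hypothetical physiologic window, not a value taken from DSI software.
HEART_RATE_RANGE = (250, 500)   # assumed plausible rat heart rate, bpm

def screen(samples, valid_range=HEART_RATE_RANGE):
    """Split samples into reported values and flagged out-of-range values.

    Applying one fixed rule to every subject keeps exclusions consistent
    across a study instead of varying with each analyst's judgment.
    """
    lo, hi = valid_range
    reported = [s for s in samples if lo <= s <= hi]
    flagged = [s for s in samples if not (lo <= s <= hi)]
    return reported, flagged

reported, flagged = screen([310, 295, 1200, 402, 15])
print(reported)  # [310, 295, 402]
print(flagged)   # [1200, 15]
```

Because flagged values are retained rather than deleted, a reviewer can still decide whether an outlier is artifact or genuine physiology, which keeps data exclusion to a minimum.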
DSI also offers solutions for monitoring respiratory endpoints. One tool that greatly contributes to reproducibility in the respiratory line is the Allay™ restraint. Traditionally, animals have been restrained using a plunger method: a seal is placed around the animal’s neck, and the animal is secured in place by a plunger-like shaft pushed up behind it. This causes stress, as the animal is held tightly in place, and compromises breathing, as the thorax is compressed in the seal. DSI’s Allay™ restraint is a collar design that simply slides around the top and sides of the animal’s neck without compressing the thorax. Studies have demonstrated that this method induces less stress than traditional methods once the animal has been properly acclimated.5 The Allay™ also ensures consistent positioning of all animals, supporting high-quality results. For example, when using DSI’s inhalation tower, consistent animal positioning ensures consistent exposure to the inhalant for every animal connected to the tower.
DSI’s respiratory solutions can also perform automatic calibration and system diagnostic tests to ensure all equipment is functioning and calibrated properly, reducing the possibility of human error. Traditionally, calibration was performed manually by injecting air into chambers, which introduced significant variability, such as how steadily the air was injected, and provided no verification that calibration was successful. Without proper calibration, data quality and study reproducibility can suffer. Within the Buxco system calibration protocol, all transducers are calibrated, valves are tested for functionality and leaks, reservoirs are tested for leaks, the bias flow is calibrated and tested for accuracy, and the temperature and humidity sensors are tested and calibrated. Once this process is complete, the system advises the user whether it is functioning properly or whether there are issues, providing assurance before a study begins.
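A diagnostic sequence of this kind can be sketched generically. Everything here is hypothetical (the check names are paraphrased from the paragraph above, and the functions are stubs, not the Buxco protocol): the point is that each check produces a recorded pass/fail result, and the system only declares itself ready when every check passes.

```python
# Generic sketch of an automated pre-study diagnostic sequence.
# Illustrative only; these are stubs, not the Buxco calibration protocol.

def run_diagnostics(checks):
    """Run each named check, record pass/fail, and report overall readiness."""
    results = {name: bool(check()) for name, check in checks.items()}
    return results, all(results.values())

# In practice each check would drive instrument I/O; stubbed here, with one
# simulated failure to show how a problem surfaces before a study begins.
checks = {
    "transducer calibration": lambda: True,
    "valve function and leak test": lambda: True,
    "reservoir leak test": lambda: False,   # simulated failure
    "bias flow accuracy": lambda: True,
    "temperature/humidity sensors": lambda: True,
}

results, system_ok = run_diagnostics(checks)
for name, ok in results.items():
    print(f"{name}: {'PASS' if ok else 'FAIL'}")
print("System ready" if system_ok else "Resolve failures before starting the study")
```

Recording every result, rather than a single overall verdict, tells the user exactly which subsystem needs attention, which is the assurance the automated protocol provides.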
As with telemetry, these solutions cannot directly address animal phenotype variability. However, they do offer a consistent means of collecting high quality data, providing confidence within one lab or across many.
DSI’s Data Services team can assist with study design through free consultations to help researchers develop designs with high internal and external validity, optimized for the species and system under investigation. Mid-study consultation and data review are also helpful to catch data quality, calibration, or other issues early on. Mid-study evaluation is critical to ensure any necessary procedural, design, or scheduling changes can be made in a timely fashion. An example occurred when Data Services supported a customer performing respiratory research whose baseline calibrations were incorrect. Fortunately, Data Services received these data just after collection and were quickly able to identify the problem. Although the researcher did have to repeat that portion of the study, they found the problem and collected a proper baseline before dosing the animals with a test compound. The result was a study with an appropriate baseline for analysis of test compound results. Had the issue not been identified, they would have had to try to salvage results after the fact using a less appropriate baseline, or repeat the entire study.
Once data are collected, Data Services can assist with analysis and reporting. The team of highly trained experts provides accurate and consistent analysis, employing rigorous documentation of methodology and procedures as well as systems for checking data quality and validity. DSI also has in-house expertise across various physiologic systems, which is crucial for identifying questionable or non-physiologic values. Data Services offers detailed, customizable reporting options and has access to advanced tools, such as Data Insights™, which allow them to quantify data that might otherwise have been excluded as noise.
DSI can also help with improving laboratory protocols. The consultation and training offered by Data Services and Technical Support help researchers ensure their protocols and procedures align well with the products they use (e.g. using appropriate sampling rates for quality data and teaching researchers procedures to minimize scoring variability when analyzing in NeuroScore™).
In addition, DSI’s Surgical Services team can support researchers in increasing reproducibility. Instead of developing appropriate surgical protocols from scratch, researchers can take advantage of Surgical Services in a variety of ways. First, DSI’s expert surgeons can provide pre-implanted animals where surgery has already been completed, ensuring accurate implant placement, optimized telemetry signals, and consistent animal care. Correct catheter placement is crucial to successful data collection, as it avoids strain on the catheter that could cause signal discrepancies. The surgeons carefully select the animals sent to customers, shipping only those that are healthy and properly healed. Telemetry signals are verified by Data Services to ensure optimal signals post-surgery and pre-shipment.
Alternatively, researchers can take advantage of surgical training to successfully perform surgeries at their facility. Surgical training provides knowledge of best practices for animal care throughout the process, implant-specific surgical techniques, as well as how to identify and achieve optimized telemetry signals.
DSI is committed to collaborating with our clients to ensure high quality data is collected and reproducibility is maximized.
We’d love to hear your thoughts on reproducibility!
Use #FutureOfReproducibility on social media to tell us what you think is causing this issue or what measures have worked for you in improving reproducibility!
Don’t forget to follow us on Facebook, LinkedIn, and Twitter to get the latest updates from DSI!
1 Linney Y. (2018). Cloud Computing May be Key to Data Reproducibility. Laboratory Equipment. https://www.laboratoryequipment.com/article/2018/02/cloud-computing-may-be-key-data-reproducibility
2 Baker M. (2016). 1,500 scientists lift the lid on reproducibility. Nature. https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970
3 Voelkl B., Vogt L., Sena E.S., Würbel H. (2018). Reproducibility of preclinical animal research improves with heterogeneity of study samples. PLOS Biology. https://doi.org/10.1371/journal.pbio.2003693
4 Kramer K., Kinter L. (2003). Evaluation and applications of radiotelemetry in small laboratory animals. Physiological Genomics. 13(3), 197-205. doi: 10.1152/physiolgenomics.00164.2002, from http://physiolgenomics.physiology.org/content/13/3/197
5 Kearney K., Pittman R., Appleby C., Roche B. (2017). Are All Respiratory Chambers Created Equal? A Comparative Assessment Between Allay Restraint and Standard Head Out Plethysmography. Journal of Pharmacological and Toxicological Methods. 88 part 2: 234. https://doi.org/10.1016/j.vascn.2017.09.215