The irreproducibility of published science

Yes, there are real problems with reproducibility in scientific studies, and they need to be addressed; at the same time, there are reasons this area is challenging, and no reason to denigrate science overall because of it.

Peter Kissinger
For the several years since the age of untruth began, science has not escaped the dismay many feel for this phenomenon in political discourse. In this space, I’ve previously discussed selecting data to fit a narrative, whether intentionally or through the confirmation bias that none of us can avoid. Making things up is more egregious, but has the same result. There are so many fine examples to debate, such as the role of genetically modified organisms, the purported impact of vaccines on autism, the conversion of climate change into a religious order and the difficulty of reproducing published science.
 
Why do we publish?
 
It is said that science not published is science not done. At a fundamental level, we advance science incrementally by sharing our observations with others. Gradually, the old dogma dies away in the face of new observations enabled by instruments our predecessors did not have. Our ability to resolve time, space, lower concentrations and molecular specificity gets better and better. We see what others could not see. The funders of natural science see a return on their investment as quality published work with data of archival value. There is a purity to that purpose.
 
On the other hand, we also publish to be promoted, to polish our CVs, to facilitate renewal of our funding, to get invitations to speak and, in our youth, to impress our parents. The same applies to patents, which, like most collected stamps or butterflies, are not worth a whole lot. We are counters. So, we count. “Professor Jones has published 300 peer-reviewed papers and has 56 issued patents.” That’s a friendly introduction which, by itself, says nothing about impact. A better introduction would be that “Prof. Jones and her team first synthesized the drug that has now made Alzheimer’s disease a historic curiosity.” In the current jargon, publications are at best a potential surrogate marker of effort.
 
Why do we not publish?
 
For most of history we did not see a benefit in publishing negative results that refuted our hypotheses. Recently this habit has been questioned, for example in the case of clinical trials that did not meet their endpoints. There are ethical issues. Volunteers who participated with the best of intentions have performed a service; publishing what was learned helps later trials avoid repeating the same mistake. Perhaps there is some signal in the subpopulation noise that is not recognized by the averages reported in a press release. The details may hold hints of omission or commission that suggest next steps. It seems unwise to leave them out.
 
There are times we do not publish (or otherwise present) because progress may entail intellectual property of great value. We might also not want to reveal a business strategy. Many are in biopharma positions where disclosure must be “cleared by legal.” The easiest thing for legal to do to avoid an error is “just say no.” Thought costs more.
 
It is also a common habit not to publish because there is pain and little gain. Most of the gain is in academia, some is in government, but much less is in commercial firms. The overwhelming majority of quality data are obtained in regulated industry, not academia. That’s where we have training, longer-term technical personnel, validated software, calibrated instruments and regulatory inspections, and we have a lot at stake. We have lot release testing, clinical samples, in-process measurements, stability testing, a safety program, raw materials to test, and the river out back of the plant is to be protected. There are real consequences for lives, property and stock price. On the other hand, little of this data is interesting. It doesn’t provide a stimulating narrative to know that X meets specifications. Besides, it’s all there in the server farm in the cloud.
 
Some will argue that this is not science at all. It is certainly not science of interest to the natural philosophers of the 18th and 19th centuries. It’s technical data. It’s important, and it looks like science, but it is no more science than an inventory of widgets.
 
What do we publish?
 
Keep in mind that academics are rewarded for innovation, not consistency. Significance may be imagined early, but not achieved for a decade. Professors are at the fuzzy front end of the creative process. Many academic papers are pedestrian and even a generation or two behind the state of the art.
 
This is especially true in developing economies, where the infrastructure is not yet there. On the other hand, something very important is happening: education. That education leads to economic development. Life gets better. In academia, we have no QA, no GLP, no GMP, no CE, no ICH or ISO and no product sales to cover instrument service contracts. We have no regulatory inspections beyond OSHA and environmental management. We pay no taxes and ignore depreciation. Research funding comes in fits and starts. It’s three years from Armageddon if the grant is not renewed. We have no resources for long-term technical support to make measurements or to care for rats.
 
The strategic goal is to convert raw HR into capable scientists and engineers in five to six years. Many publications are about methods, perhaps because methods are much easier than advancing natural science. Determining X in Y using Z bores me, even more so when X, Y and Z are old news. Such papers can’t be fully validated for future measurement challenges at another time and place. Now, when Z defines a new measurement strategy evolving from underlying science, that gets my attention. But such papers need not be validated to regulatory standards. That is not needed when the purpose is to stimulate other imaginations directed at other problems. Synergies come from the strangest places, once the innovation is set free in the literature.
 
Where do we publish?
 
The system of scientific/medical publishing that expanded after WWII is badly broken. There are a hundredfold more channels than at the turn of the millennium. Everyone can publish anything somewhere. Search engines make nearly everything accessible, no matter how obscure the journal or website. The noise has gone up much faster than the signals. If journal A rejects your work and the reviewers’ comments annoy you, it will be published in journal B or C. There is rarely time to go back into the lab. There is a bubble in publishing that seems irreversible now that we obsess over counting and speed. Too few obsess over reproducibility, and the experimental sections of technical papers are generally inadequate.
 
What’s the problem?
 
Expectations are high. Modern measurement tools are required, but often are not understood by those using them. We won WWII with military personnel from the farms who could fix whatever floated or flew. Likewise, designing, taking apart, recalibrating, rebuilding and repurposing instruments was standard in many post-WWII labs. Later, writing software was required and became addictive.
 
As the scientific instrument industry evolved, we lost many basic skills, in a perfect analogy with advancing automotive technologies. Autonomous cars are not likely to lead to autonomous instruments producing validated results. Why not? It’s because the measurements we make require samples, and their variety is not finite. My Amazon Alexa is not about to operate a high-resolution mass spectrometer, and my smartphone is not going to make quantitative chemical measurements beyond estimating glucose.
 
The problem of irreproducibility is real. We do have some scientific fraud. We have much more confirmation bias. We are in too much of a hurry and have gotten a bit sloppy.
 
We do have “test to compliance,” where we repeat experiments until we get the desired result. Biological tools (antibodies, cell cultures, mice) don’t match the precision of resistors and capacitors. We often don’t have the funding or time for large sample sets. Funding stressors take us away from our labs. Quality has gone down. As experiments have become more complex, the experimental sections of papers have become less complete and peer review more challenging.
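To make the “test to compliance” failure mode concrete, here is a minimal simulation sketch (my illustration, not from the column; the attempt count and significance threshold are hypothetical choices). Under a true null hypothesis, each independent experiment’s p-value is uniformly distributed on [0, 1], so simply re-running an experiment until a nominal p < 0.05 appears inflates the false-positive rate from 5% to roughly 23% after five attempts:

```python
import random

random.seed(1)

ALPHA = 0.05         # nominal significance threshold (hypothetical)
MAX_ATTEMPTS = 5     # times the experiment is re-run "until it works" (hypothetical)
N_SIMULATIONS = 100_000

false_positives = 0
for _ in range(N_SIMULATIONS):
    # Under a true null hypothesis, each independent experiment's
    # p-value is uniform on [0, 1].
    for _ in range(MAX_ATTEMPTS):
        p = random.random()
        if p < ALPHA:            # "desired result" reached; stop and report
            false_positives += 1
            break

rate = false_positives / N_SIMULATIONS
print(f"Empirical false-positive rate: {rate:.3f}")
print(f"Theoretical rate 1 - (1 - alpha)^k: {1 - (1 - ALPHA) ** MAX_ATTEMPTS:.3f}")
```

The point is not the exact numbers but the compounding: every extra, unreported attempt makes a published “positive” result less likely to reproduce.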
 
Colleagues in industry have argued that academia should adopt the formalities of regulated science. I disagree. That raises cost, creates bureaucracy and stifles innovation. We do not want an administrative state running our labs. Quality, like safety, is an attitude more than a set of rules.
 
Academic science is now working more diligently on safety. Awareness mattered. Can the same now happen with quality? The leading journals are taking up the cause in response to retractions and irreproducibility. The Nature journals are especially to be commended. The concept of supplementary material is helping. We must have teams with a variety of deep skills. Few life-sciences investigators can effectively work alone. Academic career pathways inherited from the last millennium are deficient; they are not working. Old faculty will need to fade away before this can be fixed.
 
Given that no two scientists are programmed alike and no two tissue samples can be exactly alike, we should expect to be disappointed, but let’s not go negative on the whole enterprise of academic research. Great work is being published by great students and their faculty in great journals. Don’t let the published noise impede the strong signals. Reproducibility matters. Let’s go!

Peter T. Kissinger (who can be reached at kissinger@ddn-news.com) is professor of chemistry at Purdue University, chairman emeritus of BASi and a director of Chembio Diagnostics, Phlebotics and Prosolia.

