The world of the unseen, unlearned, and unknown has forever entranced human minds. Imagine the delight of the first creature to wield fire on the end of a stick, suddenly able to illuminate features in the darkness on command. Scientists enjoy a similar feeling when, for a brief moment at the microscope or analyzing fresh data, they are the only souls in the world to know about a new discovery. I’ve felt it myself! In research, we are motivated by the dreamlike, addictive thrill of pioneering through the uncharted phenomena that govern our being and our universe. But we also understand that absolute truth about nature is fundamentally out of reach. As the brilliant physicist Werner Heisenberg eloquently explained, “We have to remember that what we observe is not nature in itself, but nature exposed to our method of questioning.”

At a moment when the representation of truth and fact by both the government and the media is in the national spotlight, people are criticized for changing their minds, and it’s easy to earn the brand of “fake news”. But in science, truth is, by definition, a malleable and perpetually revisable thing. This is because scientists compile data into models of how natural systems work. As time passes, new data and perspectives are assimilated into the consensus and the models are adjusted accordingly. It’s the best we can do. After all, we did not build the grand machines we study, and there are no blueprints to compare our models against. The strongest scientific evidence is compiled into a consensus, which we take as our empirical truth.

The transformation of piles of primary literature into an evolving consensus is not often made clear to the general public. This problem is plainly illustrated when activists for a politicized science issue point to individual, cherry-picked papers to defend their position. While a paper may have passed the tests of peer review to become published, the requirements for publication are far from the only mechanism that ensures accuracy in a scientific consensus.

Researchers dedicate years of work to applying the scientific method in their experiments before submitting results to a journal. During this time, a great deal of internal quality control happens within a research institution. In academia, in-progress work is constantly shared with the lab’s principal investigator (PI), coworkers, and the broader community. If there is a hole to be poked in a project or a loose end being neglected, the researcher will be made aware.

Once compiled into a manuscript, results are then submitted to a journal, where an editor will send them to expert PIs for peer review. Peer reviewers ensure the appropriateness of methods, clarity of data presentation, and validity of claims that are made while taking into account the current knowledge base.

Peer review is not a perfect process. On the side of authors, willfully deceptive or accidentally erroneous data occasionally makes it into print. To address these problems, a framework is in place for journals to formally correct or retract published work altogether. The scientific community and dedicated watchdogs such as Retraction Watch and PubPeer play a major role in discovering and tracking mistakes in publications and keeping tabs on research groups who are repeat offenders. On the side of publishers, there is a growing number of “predatory” journals that do not subject their articles to appropriate (or any) peer review while pocketing scientists’ publishing fees. The scientific community takes activist steps against such publishers, maintaining lists and warning others to steer clear. In addition, the Federal Trade Commission is set to challenge one of these publishers in court with the hope of setting a precedent for punishing deceptive practices by journals.

The data that ends up in this traditional pipeline is a tiny fraction of all the data generated by researchers. The rest is a mixture of interpretable but “ugly” data, unexplainable one-off results, the byproducts of troubleshooting, and experiments that don’t have a compelling conclusion. But researchers still value these results and have quite a few ways to share them. If generated by a graduate student, some of this extraneous data will end up published within a thesis. But perhaps more importantly, unpublished results are shared with colleagues at conferences, in invited talks, and in good old-fashioned conversations. Sharing unpublished data pushes fields forward by temporarily sidestepping the traditional gate of publishing and allowing researchers to more nimbly explore new ideas.

Many now also argue in favor of publishing so-called “negative data”: data leading to the conclusion that the results from the experimental group and the control group are the same. With the expansion of digital publishing, the idea of a vast, searchable collection of negative and inconclusive data is becoming a reality.

All of our data, in both published and unpublished form, is distilled into empirical truth by consensus. The strength of this massive amount of data is that it overlaps considerably, and many questions have been asked repeatedly by different groups in different ways. When all the studies on a given topic are examined together and methodological differences are accounted for, a picture emerges of which results are reproducible by many research groups from many angles, and which are anomalies that cannot be replicated.

The early history of the immunoediting hypothesis—the idea that the adaptive immune system can control cancer—is a wonderful demonstration of this consensus-building in action. The notion dates back to the turn of the 20th century, but the tools to even begin to investigate such a hypothesis did not exist until the 1950s. Some of these first experiments were promising, and the hypothesis was revived in 1957 by two scientists: Sir Frank Macfarlane Burnet and Lewis Thomas. In the 1970s, however, experiments in nude mice, which were believed at the time to lack adaptive immunity, showed that these mice developed cancer at the same rate as normal mice, and the hypothesis largely fell out of favor.

In the 1990s, far better models for studying adaptive immunity were developed and the immunoediting hypothesis was again revisited. A tidal wave of data was produced during this decade, building a stronger case that the immune system was in fact shaping tumor development and eliminating early cancer cells until they mutated so much that they became uncontrollable and formed tumors. Research during these years also revealed that nude mice (the mice used in the earlier experiments) have a more functional immune system than was originally believed, which could explain why they had developed cancer like normal mice. In retrospect, the results from the 1970s could be explained by this and other unforeseen caveats in those older experiments. After enduring a dramatic history of swinging between fact and fantasy, immunoediting is now widely accepted as true and has led to some of today’s revolutionary cancer immunotherapies.

In his book What Mad Pursuit, Francis Crick gracefully describes another example of the scientific process of truth-finding in action: his experience with his colleagues as they struggled with a misleading data point and ultimately failed to identify a structural element. “The failure…made a deep impression on Jim Watson and me. Because of it I argued that it was important not to place too much reliance on any single piece of experimental evidence. It might turn out to be misleading….” Jim was a little more brash, stating that “no good model ever accounted for all the facts, since some data is bound to be misleading if not plain wrong.” We’re bound to be misled from time to time, but that doesn’t mean the scientific method has failed or that it’s not worth discussing those results. In the immunoediting case, the studies that did not support the hypothesis are still sound and the papers haven’t been retracted just because the conclusions were incorrect. They are a fundamental part of the history of this particular scientific truth, which includes missteps and misinterpretations.

The search for truth in science calls to mind the parable of blind monks examining an elephant. Most versions end with the monks comparing their findings, only to discover that they all have different conclusions about the nature of the beast. Individual researchers are like the monks, each working on a tiny sliver of their field with their personal set of tools. But unlike the monks, we know that with enough people working and pushing boundaries, our observations are bound to overlap and allow smaller pieces to be correctly placed in the context of something larger.

The drive of the scientific community to craft models that most closely represent the absolute truth keeps us honest. Competing hypotheses are turned over and over with intense scrutiny, and the empirical truth emerges from the most convincing evidence. Too often, the evolving nature of scientific consensus is used to discredit it. I would argue instead that truth in science is trustworthy precisely because it can change as directed by evidence. As the very nature of truth is under assault in our current political climate, it is more important now than ever that we take a lesson from science and demand that decisions that affect all of us are based firmly on evidence.

Featured image credit: Nicole Repina, BSR Blog Design Team.