“To know inauthenticity is not the same as to be authentic.” —Paul de Man
Lately, the public media has been expressing increased skepticism toward scientific practices. Consider the recent cover articles in The Economist: “How Science Goes Wrong” and “Trouble in the Lab.” In the latter, Jason Ford artfully depicts scientists in labs that would make EH&S shiver, sweeping poorly conducted experiments and data under the rug. These articles resurface problems shaped by incentive systems such as publishing models. They also point out how the disavowal of null results by high-impact journals may ultimately promote unethical practices among those who wish to stay in the publication pipeline.
Scientists as people do not live up to Robert Merton’s ideals of science—disinterestedness, communism, universalism, and organized skepticism—though they are expected to when conducting experiments. What The Economist fails to acknowledge is that although scientists are people with self-interests to preserve (just as any other professional), science itself constantly destroys and reconstructs hypotheses across fields (in line with Thomas Kuhn’s paradigm shifts). Science has self-revision at the core of its process. If an experiment was conducted poorly yet its results reported as significant, other scientists have the right, and are expected, to challenge it by conducting their own experiments.
In contrast, The Economist articles make it seem as though the mistakes within science are a problem with science itself. Although the problems identified are important to rectify, and doing so should indeed be a part of science, it would be a great exaggeration to treat science as broken. What must be acknowledged, in spite of identifiable problems in scientific practices, is “how science goes right” for the most part; the problems should not provoke skepticism of the entire practice.
If the practices enforced by science correct for cheaters, then should we be concerned by these articles from The Economist? If their goal is to stir public distrust of science, then it is time to reconsider all of the technology (arising from these “dirty” practices) upon which society depends. These authentic accomplishments are undermined under the guise of inauthentic scientific practices (consider the Paul de Man quote that introduces this article).
There is an important ideological question raised by these articles. Is the underlying message of, or inevitable reaction to, the articles one that convinces people to give greater support to science and to help develop better scientific practices? Or is the message that the public should indulge politically motivated negative attitudes toward science itself, and divest (metaphorically or literally) from scientific progress? This latter possibility is particularly unsettling given some of the popular politicized positions taken toward issues such as climate change, vaccinations, and the environment.
Whatever one’s view on science may be, it is important not to let cultural views (e.g. politics, religion) shape how one interprets the numbers in the data, assuming the experiment is conducted ethically. As part of a colloquium on Law and Psychology, Dan Kahan presented findings from his paper “Motivated Numeracy and Enlightened Self-Government,” with the question “Why care?” underpinning the talk.
Is academic discourse supported by empirical evidence (or data) just a “grab bag” from which you can grab any story you want in order to better reinforce your cultural identity? How do we understand our world and the evidence that reinforces this world-view, Weltanschauung?
The debate over the “public” (mis)understanding of science is shaped by two ideas. Either:
- the public is not properly educated in discriminating dependable sources and interpreting scientific data (i.e. low numeracy, elaborated by the Science Comprehension Thesis), or
- the public is influenced by cultural cognitive worldviews (e.g. political, religious, cultural) and interprets information (both qualitative and quantitative) through a biased lens that reinforces those worldviews (i.e. confirmation bias, elaborated by the Identity-protective Cognition Thesis).
Kahan performs an empirical investigation to disentangle these (mis)understandings.
How can we overcome this barrier to scientific understanding?
This barrier is in part imposed, along philosophical dimensions, as a consequence of social constructionism. When scientists were formerly conceived as the beholders of absolute truth, similar findings observed across research labs could more easily be generalized as universal law; this was the view of materialism. However, social constructionism implies there is no universality (no absolute truth), and our reality is merely layers of abstraction put forth by subjective scientists conducting relative research. Some opponents of research on controversial topics (e.g. global warming) discredit it by weakly applying the social constructionist argument, claiming the researchers are biased in their approach. These insights were inspired by a talk by Luigi Pellizzoni at the 4S Conference.
Does being biased skew our interpretation of robust statistical analyses? Perhaps you were not expecting the question of “scientific understanding” to be turned toward scientists. Dan Kahan does not specifically study scientists, but he shows that individuals with higher numeracy (a category into which most scientists are expected to fall) were more likely to fall victim to identity-protective cognition (i.e. preserving cultural identity at the sacrifice of numeracy).
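Kahan’s point can be made concrete with a covariance-detection problem of the kind his study used: decide from a 2x2 table of outcomes whether a treatment worked. The figures below are illustrative stand-ins in the style of the study’s skin-treatment task, not its actual stimuli; the trap is comparing raw counts instead of proportions.

```python
# Illustrative 2x2 covariance problem (numbers are made up for this
# sketch): did patients who used the treatment fare better than
# those who did not?
improved_with, worsened_with = 223, 75        # used the treatment
improved_without, worsened_without = 107, 21  # did not use it

# The intuitive (fast) answer compares raw counts: 223 > 107.
# The correct (slow) answer compares improvement rates.
rate_with = improved_with / (improved_with + worsened_with)
rate_without = improved_without / (improved_without + worsened_without)

print(f"improved with treatment:    {rate_with:.0%}")     # ~75%
print(f"improved without treatment: {rate_without:.0%}")  # ~84%
# Despite the larger raw count, patients did better without it.
```

Kahan’s finding is that high-numeracy subjects solved such tables correctly more often only when the correct answer did not threaten their cultural identity; when the same numbers were relabeled as a politically charged question, the most numerate partisans were the most polarized.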
We are making important political decisions with topics that extend outside our field of expertise. Do we look as critically at research in these areas as we look at research in our fields, or do we save the time and energy needed to deliberate on such issues by relying on anecdotal evidence for the scientific topics outside of our fields?
Approaches to Consider: Slow vs. Fast and Altercentric vs. Egocentric
At the conference Being Human, hosted in San Francisco this year, a few related ideas emerged under different names: “Slow vs. Fast” and “Altercentric vs. Egocentric” styles of thinking.
Joshua Greene of Harvard’s Moral Cognition Lab gives a talk (see here) on “Slow vs. Fast” thinking. It builds on the moral problem from Garrett Hardin’s 1968 essay The Tragedy of the Commons: the conflict between individual and collective interests, illustrated by animal herders who must negotiate how many animals to graze on a common pasture without adding so many that the pasture is destroyed for everyone (overview).
Greene’s paper on The Cognitive Neuroscience of Moral Judgement states:
“A range of studies using diverse methods support a dual-process theory of moral judgment according to which utilitarian moral judgments (favoring the “greater good” over individual rights) are enabled by **controlled cognitive processes**, while deontological judgments (favoring individual rights) are driven by **intuitive emotional responses**.”
*bolded statements for emphasis are not in the original
Greene describes utilitarian (“for us”) and deontological (“for me”) judgments as something more complicated than this dichotomy suggests. What if two separate groups, each solving utilitarian problems differently (e.g. one group communist, another individualist) and cooperating under different answers to ethical questions, are suddenly confronted with one another?
- System 1: Fast, automatic, frequent, emotional, stereotypic, subconscious
- System 2: Slow, effortful, infrequent, logical, calculating, conscious
In Greene’s camera analogy, the “automatic” mode is fast and efficient while the “manual” mode is slow and flexible. “Think fast” for problems of “Me vs. Us” (individual vs. utilitarian) and “think slow” for problems of “Us vs. Them” (local utilitarian vs. meta-utilitarian), notably assigning slow thinking to the problems of meta-morality described in the “meta-utilitarian” conflict.
Egocentrism involves trusting yourself as an expert (“You are like me.”), e.g. telling others what they can do around Berkeley when they visit. Altercentrism is trusting others more than yourself (“I am like you.”), e.g. using Yelp to look for business reviews and recommendations.
Laurie Santos describes an experiment by Kenneth Savitsky at Williams College showing increased egocentrism among friends versus strangers. We are more considerate of the differing perspectives of strangers, but our egocentric bias misleads us into making wrong assumptions about our friends.
Two boxes, one opaque and one transparent, each contain a treat. The experimenter goes through ritualistic tapping and prodding and removes the treat from the box. Both children and chimps will repeat the ritual when the box is opaque; however, the clear box reveals that the treat is just behind a clear door and that the ritualistic movements are not necessary to get it. Chimps will ignore the ritual and grab the treat, but children will still go through the ritual (note the video does not say whether the children were given verbal directions by the experimenter).
The opaque box elicits altercentric (“I am like you”) responses in both children and chimps. Since the box is a novel contraption and you cannot see exactly how it works, it makes sense to trust the person showing you how to use it. What happens with the clear box? The chimps were more egocentric in their approach, trusting their own judgement of how to get the treat over the ritual shown by the experimenter. Children, due to social structures or cultural constraints, still take the altercentric approach and go through the useless ritual.
Santos shares a passage from Mark Twain’s Own Autobiography that is worth quoting:
“In the matter of slavish imitation, man is the monkey’s superior all the time. The average man is destitute of independence of opinion of his own, by study and reflection, but is only anxious to find out what his neighbor’s opinion is and slavishly adopt it.”
There is value in being valued. Mimicking our neighbors may show some element of trust in their judgement. If that’s what we care about, then there is no immediate need to condemn these external forces in our decision making; however, should data speak louder than anecdote? Do some data conceal subjectivity?
There is an interesting parallel with gut reaction (instilled by cultural-cognitive identity) vs. data deliberation (determined by scientific numeracy) in Dan Kahan’s previously described study.
Some important takeaways may be summarized as “smarter is more stubborn,” describing a “Why think harder? I got the answer I want” epidemic. Know your triggers. If you are emotionally passionate about certain issues, so much so that you take measures to make political changes, then I would suggest at least considering your opponents’ views first, with slow-thinking, altercentric thoughtfulness.
de Man, Paul. Blindness and Insight: Essays in the Rhetoric of Contemporary Criticism.
Allow me to emphasize the definition of “public” as anyone outside his/her field of expertise. We are all “public” in some contexts and try-fail experts in others.
Maney, Ethel S. “Literal and critical reading in science.” The Journal of Experimental Education 27.1 (1958): 57–64.
Kahan, Dan M., et al. “Culture and identity-protective cognition: Explaining the white-male effect in risk perception.” Journal of Empirical Legal Studies 4.3 (2007): 465–505.
Zimmer, Carl. “Children Learn by Monkey See, Monkey Do. Chimps Don’t.” The New York Times, 13 Dec. 2005. Accessed 20 Oct. 2013 <http://www.nytimes.com/2005/12/13/science/13essa.html>
I anthropomorphize chimps with concepts of “trust” or “judgement” for descriptive purposes, and do not intend to imply such abstractions are concluded from these experiments.