Published On: Tue, Sep 18th, 2018

The Danger of Judging Scientists by What They Discover

Earlier this year, a research team led by Dr. Sven Karlsson published the largest-scale study to date on the causes of human intelligence. They found an intriguing pattern of results: focusing on arithmetic and linguistic tests, genetics predicted over 26% of people’s responses. Specifically, individuals with a long allele of the 4-GTTLR gene got more right answers on the arithmetic, mental rotation, and semantic memory tasks than did individuals with the short version of the gene. In contrast, education explained only 4% of people’s responses. Describing the work, Karlsson wrote: “We believe this is an interesting result! Our findings indicate that, contrary to certain previous assumptions, basic cognitive capabilities—mental rotation, math and language—really have a strong heritable component. Intelligence in adulthood seems to be predicted by genes early in life… things like education and effort play a small role once you take into account the role of genetics.”

How did you react to the description above? Hopefully you haven’t already tweeted about it: it’s completely made up.

A genetic basis for intelligence is a politically fraught scientific idea about which you had likely formed an opinion before reading about the fictitious Dr. Karlsson. You might think it obvious that genes play an important role in shaping all traits, including intelligence. Or you might think that genes play a trivial role in comparison to socialization and learning. The ease with which you accepted the brief synopsis of research above as true likely depends on these existing beliefs. If the findings are consistent with your beliefs, you might have quickly accepted them as true. If inconsistent, you might have been tempted either to dismiss the finding out of hand or to dig deeper into the article for some disqualifying error in method or analysis. These are reactions that psychologists have known about for decades: motivated reasoning, confirmation bias, selective attention. We are equipped with a range of psychological processes that inoculate us against information that pushes up against our worldviews and beliefs, and attract us to information consistent with those beliefs.

But here’s a question yet to be asked: how does hearing about controversial research findings influence your beliefs about the researchers? In other words, if you had to guess the ideological leanings of Dr. Karlsson would you have an inkling one way or the other? What about the ideological leanings of a scientist who reports a strong gender difference in sexual appetite? Or a scientist who reports a rapidly rising average global temperature? Do we infer scientists’ motivations and ideologies on the basis of their results?

This is precisely what Dr. Ivar Hannikainen from the Pontifical Catholic University of Rio de Janeiro set out to test in a new paper. He conducted three studies presenting participants with a series of made-up scientific experiments involving politically charged research questions. The opening description of Dr. Karlsson’s research was one of the invented studies.

The studies shared an identical experimental structure: two scientists conduct the same study but find opposite results. One group of participants read a description of the study in which the results point to the importance of intrinsic causes (e.g., genetics, hormones, neurochemistry) in determining behavior, and another group read about an identical study in which the results point to the importance of extrinsic causes (e.g., education, socialization, nutrition). All participants were then asked questions about the values and ideology of the scientist. For example, does Dr. Karlsson agree that “people and social groups should be treated equally, independently of ability” or that “some people should be treated as superior to others, given their hard-wired talent”?

If participants view science as an objective process of inquiry, then the results of a study should not influence their beliefs about the researcher’s values at all. An experimental question can lead to results one way or the other, independently of the scientist’s expectations. But if we view science as a biased pursuit of evidence in favor of one’s ideological beliefs, then we might infer a researcher’s values from their results: Dr. Karlsson doesn’t believe in social equality! Just look at his data!

In all three studies, participants attributed more egalitarian views to a scientist if the evidence indicated that extrinsic factors shape behavior more than intrinsic factors. This result held across the domains of intelligence research, gender differences in mating strategy, and aggression. One possible explanation is that participants think a scientist’s results change the scientist’s own normative beliefs: Dr. Karlsson might come to believe that there are innately superior beings by virtue of finding a genetic basis for intelligence. But this is not what the data show. Instead, the results supported a belief in “value-driven science,” wherein a researcher’s desire to prove a particular position shapes the nature of their results. We tend to believe that scientific results provide a window into the ideology of the scientist.

This is a dim and dangerous view of science, and one that no doubt contributes to the increasing politicization of the field. If results discordant with our preferred worldviews can be cast as the workings of an ideologue in a lab coat, then reactions to research will increasingly polarize. The possibility for empirical truths to accurately inform our views erodes, and a shared understanding of the means by which we discover truth is undermined. If we turn to the results of studies to determine our level of credence, as opposed to the methods by which those results were achieved, then science becomes a breeding ground for our biases as opposed to their antidote.

There are legitimate concerns to be had about the degree to which science can be value-driven. Evidence has shown that scientists can embed their values in the kinds of hypotheses they test, as well as the methods they use to test them. More worryingly, scientists might be reluctant to publish controversial findings if they believe the public will judge their character on the basis of their data. Some have pointed to these kinds of concerns to argue for greater ideological heterodoxy in the academy, as if what science needs to regain public trust is to balance ideological distortion of one sort with ideological distortion of all sorts. But Dr. Hannikainen’s work can also be interpreted as a warning against this kind of approach: mixing science with ideology promotes a general mistrust not only of the objectivity of scientific results but of scientists. Solutions should focus on promoting advancements that allow for greater objectivity and transparency in methods. Shifting the locus of critical attention away from results and towards methods would stifle the opportunities for value-driven research, creating an intellectual culture in which all kinds of results are possible and all kinds of scientists are welcome.


Scientific American Content: Global
