Isn't all reasoning (outside mathematics and formal logic) motivated reasoning?

Tianyi Jia, Princeton High School, United States

Winner of the 2020 Psychology Prize | 8 min read

Introduction

 

When voters vehemently defend a candidate after his or her weaknesses have been exposed, or smokers convince themselves that cigarettes are not as bad for their health as they appear, these instances highlight how personal preferences can influence beliefs. People have a tendency to reason their way to favorable conclusions, with their proclivities guiding how evidence is gathered, arguments are evaluated, and memories are recollected. These acts of reasoning are all driven by underlying motivations, leading to beliefs tinged with bias that can nonetheless seem objective to the individual (Gilovich and Ross, 2016). Motivated reasoning, a phenomenon studied in social psychology, can be defined as the “tendency to find arguments in favor of conclusions we want to believe to be stronger than arguments for conclusions we do not want to believe” (Kunda, 1990). This concept contrasts with critical thinking, which is generally viewed as the rational, unbiased analysis of facts to form a judgment of the highest quality (Paul and Elder, 2009). In this essay, I will champion the case for motivated reasoning and, in turn, argue that there is no such thing as “good” or “accurate” critical thinking. Instead, all reasoning outside mathematics and formal logic is essentially motivated reasoning – justifications for the conclusions we most desire, rather than impartial reflections of the evidence.

 

An Evolutionary Perspective

 

Motivated reasoning has been a pervasive tendency of human cognition since the beginning of time, as it is ingrained in our basic survival instincts. Evolutionarily, people have been shown to use motivated reasoning to confront threats to the self. Research shows that people weigh facts differently when those facts prove to be life-threatening. In 1992, Ditto and Lopez compared study participants who had received either positive or negative medical test results. Those who were told they had tested positive for an enzyme associated with pancreatic disorders were more likely to believe the test was inaccurate and to discredit the results (Ditto and Lopez, 1992). When it comes to our health and quality of life especially, we tend to delude ourselves. Although we may prefer that human decision-making be a thoughtful and deliberative process, in reality our motivations tip the scales, making us less likely to believe something is true if we do not wish to believe it. For instance, a study by Reed and Aspinwall found that women who were caffeine drinkers engaged in motivated reasoning when they dismissed scientific evidence that caffeine consumption was linked to fibrocystic breast disease (Reed and Aspinwall, 1998).

 

In addition to protecting their health, humans evolutionarily use motivated reasoning to bolster their self-esteem and protect their self-worth. A common example of this is the self-serving bias, “the tendency to attribute our successes to ourselves, and our failures to others and the situation” (Stangor, 2015). For instance, students might attribute good test results to their own capabilities, but engage in motivated reasoning and make a situational attribution to explain bad test results, all the while upholding the idea that they are intelligent beings. The self-serving bias is widely considered essential to people’s mental health and adaptive functioning (Taylor and Brown, 1994). It reflects what is thought to be a universal, fundamental need of individuals for positive self-regard (Heine et al., 1999). That is, people are motivated to possess and maintain positive self-views, and in turn to minimize the negativity of those self-views – glorifying their virtues and downplaying their weaknesses relative to objective criteria. This raises the question of whether humans are ever truly able to process information in an unbiased fashion.

 

A Fight for Personal Beliefs

 

People not only interpret facts in a self-serving way when it comes to their health and well-being; research also demonstrates that we engage in motivated reasoning when facts challenge our personal beliefs and, essentially, our moral valuations and present understanding of the world. For example, Ditto and Liu showed a link between people’s assessment of facts and their moral convictions; they found that individuals who had moral qualms about condom education were less likely to believe that condoms were an effective form of contraception (Ditto and Liu, 2016). Oftentimes, the line between factual and moral judgments becomes blurred in this way.

 

In the context of identity, powerful social incentives drive people’s thought processes. People strive for consistency among their attitudes and self-images. Festinger’s cognitive dissonance theory highlights this tendency: he found that members of a group who believed the world would end on a predicted date became even more extreme in their views after that date had passed, in order to mitigate their cognitive dissonance (Festinger, 1962). Moreover, when it comes to voting, new negative information about a preferred candidate should, normatively, cause downward adjustment of an existing evaluation. However, recent studies show that the exact opposite takes place: voters become even more supportive of a preferred candidate when faced with negatively valenced information, with motivated reasoning as the explanation for this behavior (Redlawsk et al., 2010). A 2015 analysis of 41 experimental studies of partisan bias, reported by the APA, found that self-identified liberals and conservatives showed a robust partisan bias when assessing empirical evidence, to an almost equal degree (Weir, 2017). Additionally, neuroscience research suggests that “reasoning away contradictions is psychologically easier than revising feelings” (Redlawsk, 2011). Given the context of groupthink and one’s group identity, the prevalence of this bias is powerful and persistent. Ultimately, people are psychologically motivated to support and maintain existing evaluations, even when confronted with disconfirming information, since taking a viewpoint opposed to one’s group would damage one’s reputation and challenge one’s existing social identity.

 

The Illusion of Objectivity

 

With the exception of mathematics and formal logic, all reasoning is essentially motivated reasoning. When it comes to decision-making and critical thinking, a totally unbiased analysis or evaluation of factual evidence is largely illusory. In reality, we act on an incomplete vision, perceived through filters constructed by our individual histories and personal preferences. Operating at a truly objective level cannot be achieved. Every second, we receive and process thousands of bits of information from our environment. Consciously analyzing all of these sensory stimuli would be overwhelming; thus, our brain uses pre-existing knowledge and memory to filter, categorize and interpret the data we receive. The brain extrapolates information it believes to be missing and eliminates details it deems extraneous, in order to form a reasonably coherent image (Thornton, 2015). Each person has unique filters, developed to cope with life’s complexity, that prevent them from being unbiased even at a granular level. Whether we are aware of our biases or not, affective contagion occurs – a phenomenon in which “conscious deliberation is heavily influenced by earlier, unconscious information processing” (Strickland et al., 2011).

 

Even in scientific journals, statistical analysis is used to provide a stamp of objectivity to conclusions. However, people tend to use statistical information in a motivated way, further perpetuating the illusion of objectivity. Berger and Berry argue that although objective data from an experiment can be obtained, “reaching sensible conclusions from the statistical analysis of this data requires subjective input,” and the subjectivity inherent in the interpretation of data should be more widely acknowledged (Berger and Berry, 1988). Similarly, in law, advocates for both the prosecution and the defense use motivated reasoning to argue guilt or innocence. The judge’s job, on the other hand, is to eliminate motivational bias from his or her own assessment of the evidence when reaching a conclusion. However, the interpretation of the law can be skewed; sometimes preferred outcomes, based on legally irrelevant factors, drive the reasoning of judges too, without their full awareness. Redding and Reppucci examined whether the sociopolitical views of state court judges motivated their judgments about the dispositive weight of evidence in death penalty cases; they found that judges’ personal views on the death penalty did indeed influence their decisions (Sood, 2013).

 

In the modern day, one of the greatest promises of artificial intelligence and machine learning is a world free of human biases. Scientists believed that operating by algorithm would create gender equality in the workplace or sidestep racial prejudice in policing. But studies have shown that computers can be biased as well, especially when they learn from humans, adopting stereotypes and schemas analogous to our own. Biases can creep into algorithms; ProPublica found that a criminal justice algorithm used in Florida mislabeled African-American defendants as “high risk” at approximately twice the rate at which it mislabeled white defendants (Larson and Angwin, 2016).
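To make concrete what a disparity in mislabeling rates means, the sketch below computes group-wise false-positive rates – the share of people who did not reoffend but were nonetheless labeled “high risk” – on a small invented dataset. The records, group names, and the `false_positive_rate` helper are all hypothetical illustrations; none of these numbers come from the ProPublica analysis.

```python
# Hypothetical illustration of measuring disparate error rates in a
# risk-scoring algorithm. The records below are invented for this example;
# they are NOT data from the ProPublica/COMPAS analysis.
records = [
    # (group, labeled_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  False), ("A", True,  True),
    ("A", False, False), ("A", False, True),
    ("B", True,  False), ("B", False, False), ("B", False, False),
    ("B", False, True),  ("B", True,  True),
]

def false_positive_rate(group):
    """Share of non-reoffenders in `group` who were labeled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 2))
```

Under these invented numbers, group A’s false-positive rate (2 of 3 non-reoffenders flagged) is twice group B’s (1 of 3) – the shape of the disparity ProPublica reported, even though both groups could see the same overall accuracy.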

 

Conclusion

 

In this essay, I have demonstrated that all reasoning, aside from the purely logic-based, is essentially motivated. Ultimately, to support preferred conclusions, people unknowingly display bias in the cognitive processes that underlie reasoning. Even though we can never fully rid ourselves of motivated reasoning, consistently striving toward an unbiased evaluation of facts remains key to achieving rigorous standards for decision-making. Today’s media landscape and the internet have amplified deviations from purely fact-based evaluation; it is now easier than ever to operate in an echo chamber and choose the sources of information that fit one’s preferred reality. A report by Stanford’s Graduate School of Education found that students ranging from middle school to college were poor at evaluating the quality of online information (Donald, 2016). The fake-news websites and spread of misinformation that have proliferated in the past decade compound the problem. Mistrust of the media has grown into a powerful tool for motivated reasoning. To restore our faith in facts, media literacy must be cultivated. I champion improving existing channels of communication so that they help us identify the roots of our biases and then encourage us to adjust our beliefs accordingly. Becoming aware of our deeply rooted tendencies and thinking mechanisms is valuable, as it enables us to make decisions with more lucidity and transparency – and, hopefully, for the betterment of our world.

Bibliography

 

Berger, J. O., & Berry, D. A. (1988). Statistical analysis and the illusion of objectivity. Infectious Diseases Newsletter, 7(8), 62. doi:10.1016/0278-2316(88)90057-6

 

Ditto, P. H., & Liu, B. S. (2016). Moral Coherence and Political Conflict. Social Psychology of Political Polarization, 102-122. doi:10.4324/9781315644387-6

 

Ditto, P. H., & Lopez, D. F. (1992). Motivated skepticism: Use of differential decision criteria for preferred and nonpreferred conclusions. Journal of Personality and Social Psychology, 63(4), 568-584. doi:10.1037/0022-3514.63.4.568

 

Donald, B. (2016, December 15). Stanford researchers find students have trouble judging the credibility of information online. Retrieved 2020, from https://ed.stanford.edu/news/stanford-researchers-find-students-have-trouble-judging-credibility-information-online

 

Festinger, L. (1962). Cognitive Dissonance. Scientific American, 207(4), 93-106. doi:10.1038/scientificamerican1062-93

 

Gilovich, T., & Ross, L. (2016). The wisest one in the room: How you can benefit from social psychology's most powerful insights. New York, New York: Free Press.

 

Heine, S. J., Lehman, D. R., Markus, H. R., & Kitayama, S. (1999). Is there a universal need for positive self-regard? Psychological Review, 106(4), 766-794. doi:10.1037/0033-295x.106.4.766

 

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480-498. doi:10.1037/0033-2909.108.3.480

 

Larson, J., & Angwin, J. (2016, May 23). Machine Bias. Retrieved 2020, from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

 

Paul, R., & Elder, L. (2009). Critical Thinking, Creativity, Ethical Reasoning: A Unity of Opposites. Morality, Ethics, and Gifted Minds, 117-131. doi:10.1007/978-0-387-89368-6_8

 

Redlawsk, D. P. (2011, April 22). The Psychology of the 'Birther' Myth. Retrieved from https://www.nytimes.com/roomfordebate/2011/04/21/barack-obama-and-the-psychology-of-the-birther-myth/a-matter-of-motivated-reasoning

 

Redlawsk, D. P., Civettini, A. J., & Emmerson, K. M. (2010). The Affective Tipping Point: Do Motivated Reasoners Ever “Get It”? Political Psychology, 31(4), 563-593. doi:10.1111/j.1467-9221.2010.00772.x

 

Reed, M. B., & Aspinwall, L. G. (1998). Self-Affirmation Reduces Biased Processing of Health-Risk Information. Motivation and Emotion, 22, 99-132. doi:10.1023/A:1021463221281

 

Sood, A. M. (2013). Motivated Cognition in Legal Judgments—An Analytic Review. Annual Review of Law and Social Science, 9(1), 307-325. doi:10.1146/annurev-lawsocsci-102612-134023

 

Stangor, C. (2015). Principles of Social Psychology. Minneapolis, Minnesota: Open Textbook Library.

 

Strickland, A. A., Taber, C. S., & Lodge, M. (2011). Motivated Reasoning and Public Opinion. Journal of Health Politics, Policy and Law, 36(6), 935-944. doi:10.1215/03616878-1460524

 

Taylor, S. E., & Brown, J. D. (1994). Positive illusions and well-being revisited: Separating fact from fiction. Psychological Bulletin, 116(1), 21-27.

 

Thornton, E. (2015). The objective leader: How to leverage the power of seeing things as they are. New York, New York: Palgrave Macmillan.

 

Weir, K. (2017). Why we believe alternative facts: How motivation, identity and ideology combine to undermine human judgment. Monitor on Psychology, 48(5), 24.
