The Plight of Evaluating Facts Through a Moral Lens
Were the great Bard alive today and putting pen to paper for a 2021 version of As You Like It, he could conceivably replace the famous line "All the world's a stage..." with "All the world's a morality play..."
With nearly all of the major political, social, and cultural questions of the day polarised along moral value lines, what is the implication of presenting people with facts that don't align with their morality construct?
In a series of experiments published in 2015 by Friesen, Campbell, and Kay, the researchers presented participants who were either in favour of or opposed to same-sex marriage with concocted scientific "facts" that either supported or contradicted their position on the issue. When either group was presented with concocted facts that contradicted its belief, both groups reframed the issue as one not about facts but a question of moral opinion: what the researchers termed a "defensive function" of holding unfalsifiable beliefs.
To push this further, the researchers hypothesised that when people believe that their belief is unfalsifiable, their position becomes more entrenched: what the researchers termed an "offensive function" of holding unfalsifiable beliefs. When they told highly religious participants that the existence of God would always be unfalsifiable, the participants reported stronger religious attitudes afterward.
People hold beliefs to protect their self-construct, their values. Because we are irrational creatures who want to see ourselves as rational creatures, we find ways to support our beliefs and to reconcile available information with our preconceived ideas about the world. But while unfalsifiability provides people with both an offensive and a defensive means of morally reasoning against evidence, how do people balance evidential and non-evidential considerations in evaluating what they believe, and what they think others ought to believe?
In another series of elegant studies, Corey Cusimano and Tania Lombrozo demonstrated that people will knowingly endorse non-evidential norms to support their own beliefs, and will prescribe motivated reasoning to others. Participants were given a series of mock scenarios, each containing a factually accurate interpretation of the scenario, a belief-based interpretation, and a moral justification for the belief-based interpretation. The participants were asked which of these interpretations the main character in the scenario should believe. In the first study, participants judged that the main character ought to prefer the belief-based interpretation over the facts, on moral grounds.
In the second study, participants evaluated similar scenarios, but this time they were assessing beliefs the characters had already formed: one an evidence-based belief, the other a morally based, optimistic belief. The researchers found that morality provided participants with a justification for motivated reasoning (where people evaluate arguments in a biased fashion in order to arrive at a conclusion they prefer).
The final study asked participants to make a prospective judgment about what the main characters in the scenarios should believe, before revealing what the characters had actually decided. Participants again affirmed that a moral evaluation of the facts justified a more belief-based view of the scenario than the objective facts presented. Participants also ascribed a high degree of control to the characters over their beliefs; that is, they did not consider that the characters' circumstances left them with little control over what they believed. This was an important finding, because it is often assumed that where people have little control over their beliefs (e.g., for religious reasons), they are thereby excused for believing as they do. Cusimano and Lombrozo found no evidence to support this.
The main finding from this research was that participants judged what others ought to believe, and considered them justified in so believing, based on what was morally beneficial to believe in the circumstances, even where that moral benefit conflicted with factual accuracy. In effect, motivated reasoning provided participants with a cognitive tool to reframe the evaluation of facts in light of the moral belief, such that whatever little evidence supports the belief constitutes sufficient evidence because of the moral quality of the belief.
For example, someone could be made aware that homeopathy has little to no supporting evidence, yet hold a strong moral view of nature and of 'natural' remedies, and thus whatever implausible possibility remains that homeopathy does something, anything, is considered sufficient, having regard to the moral benefit the belief confers on the individual. The same morality-justified motivated reasoning is at play in nearly every conversation you could have with a vegan about nutrition research. And it is at play in conversations around issues like religion, gun control (often linked), politics, race, gender, sex, the lot.
Yet this is intractable. As David Hume highlighted, you cannot derive an ought from an is. Much of the dialectic in society consists of the moral value system, the ought, of one group being directed, often forcefully, at the reality, the is, of another group.
One of the foundational principles in the emergence of the scientific method in the 17th century was the separation of science from theology: a separation without which, it was realised, the development and advancement of new ideas through science would be hampered by the dogmas of theology and restrained by the morality those dogmas upheld.
The dilemma is whether beliefs should only, and always, be grounded in evidence, or whether moral considerations can, or should, apply. The intractability of this problem stems from the fact that the former process of evidential evaluation should (in theory) allow us to make decisions and form new ideas without the hindrance of moral value beliefs. The latter process is, more often than not, a crippling impediment to a fair evaluation of the evidence in the first instance, engendering biased interpretations or outright misinterpretations. And even where the evidence has been considered, moral values still act as a further hurdle to changing a stance when, on the objective face of the evidence, a change is warranted. This is motivated reasoning: faced with new information, we can either appraise that information openly and alter our beliefs in the face of better evidence, or we can reconcile the new information against our preconceived beliefs in a way that validates rejecting the information and affirming our beliefs. And the latter is justified if it is deemed morally preferable.
In an era where people are more entrenched, more belief-driven, more dogmatic, more aggressive, and more inflexible on even the most rudimentary of issues (e.g., wearing a mask), morally justified motivated reasoning about what others ought to be, do, or think may explain why so many commentators seem capable of portraying "the other side" only in pejorative, strawman terms. In much of the wider societal dialectic, particularly in the print and online media, what people state is often already what they believe to be the settled result: the "fact" as they see it and as it accords with their beliefs.
None of this is to argue that an evidence-based belief should take precedence if it would result in outcomes that are morally unjustifiable. For example, it is a fact that women may become pregnant, yet that fact does not warrant discrimination in pay or employment. But such morally unambiguous circumstances are clear-cut, and the use of moral value as a justification for non-evidential belief formation more often has the potential to be a road to hell paved with good intentions.
Hume, then, was correct when he stated:
“A wise man proportions his belief to the evidence.”