There is a paradox at the heart of the human condition: at our core we are irrational, emotive creatures who also, through our unique capacities for self-awareness and consciousness, want to be perceived as rational actors. Our default mode of decision making is a roadmap of cognitive shortcuts: heuristics and gut feelings, conditioned beliefs and moulded frames of reference. This is why the scientific method has been, by orders of magnitude, the most successful system of thinking humans have ever devised, or likely ever will: the entire basis of the method is our innate fallibility and finite understanding of the world we inhabit. By placing our inherent cognitive blindspots at the centre of the endeavour, it systematises a means of externalising, testing, and overcoming our predisposed limitations.
It is the method that forces us to question gut feelings, test beliefs, and adjust our frames of reference in light of better information and facts. I use the term scientific method deliberately to distinguish the cognitive system from other aspects of the word “science”: the individuals who participate in and produce science, or the institutions (universities, research centres, journals, etc.) that provide the support structure for the conduct and dissemination of science. There are real issues with these aspects of “science”. But the scientific method itself is distinct; it is antecedent to anything else that may be called “science”, and is the only true definition of the word. A core pillar of the method is falsifiability: a theory must be testable, and therefore open to refutation by acquired evidence. The Popperian classic “all swans are white” is a testable belief; the sighting of a single black swan suffices to falsify the statement.
Falsifiability is integral to the integrity of the method because, in attempting to externalise and systematise our cognitive shortcomings, it provides a mechanism for challenging our biases and perceptions. And this can be particularly difficult, because it pits evidence and facts, which are not the substance guiding most decisions people make, against an individual’s preconceived beliefs, values, and moral framework. Cognitive dissonance is a powerful blindspot: it allows us to reconcile evidence against a belief we hold either by dismissing the inconsistency between the evidence and the belief, or by rationalising the belief so that it appears congruent with the evidence. With nearly all of the major political, social, and cultural questions of the day polarised along moral value lines, there is a pernicious tendency which may influence this dissonance: morality-justified motivated reasoning, where people evaluate arguments in a biased fashion in order to arrive at a conclusion they prefer.
A series of experiments published in 2015 by Friesen, Campbell, and Kay presented participants who were either in favour of, or opposed to, same-sex marriage with concocted scientific “facts” that either supported or contradicted their position on the issue. When presented with concocted facts that purported to show the opposite of their belief, both groups reframed the issue: it wasn’t about what the facts indicated, it was a question of moral opinion. This was characterised as a “defensive function” of holding unfalsifiable beliefs, allowing the belief to be defended in the face of contradictory evidence. But an “offensive function” was also evident: when participants were told their beliefs could not be falsified by facts, their position became more entrenched.
This capacity for morality-justified motivated reasoning extends not only to what a person believes themselves, but to what they think others ought to believe. Another series of experiments, published in 2021 by Cusimano and Lombrozo, presented participants with mock scenarios based on fictional characters. Each scenario contained a factually accurate interpretation of the scenario, a belief-based interpretation, and a moral justification for the belief-based interpretation. When asked which of these interpretations the main character in the scenario should believe, participants used motivated reasoning to justify that the character ought to prefer the belief-based interpretation of the facts. Thus, the participants knowingly endorsed non-evidential norms to support the belief.
The study then asked participants to make a prospective judgment about what the main characters in the scenarios should believe, before revealing what the characters had actually decided. Again, participants affirmed that a moral evaluation of the facts justified a more belief-based view of the scenario than the objective facts presented. Thus, participants decided what others ought to believe, and held themselves justified in so deciding, based on what was morally beneficial to believe in the circumstance, even where that moral benefit conflicted with factual accuracy. Morality-justified motivated reasoning gave participants a cognitive tool for reframing the evaluation of facts in light of the moral belief, such that whatever little evidence supports the belief constitutes sufficient evidence because of the moral quality of the belief.
Yet this is intractable. To echo Hume, you cannot derive an ought from an is. Much of the dialectic in society is predicated upon the moral value system of the ought for one group being directed, often forcefully, at the reality of what is for another group. The same morality-justified motivated reasoning is at play in nearly every conversation around issues like religion, gun control (often linked), politics, race, gender, and sex. And yet these are important issues; the dilemma is whether beliefs should only, and always, be grounded in evidence, or whether moral considerations can, or should, apply. One of the core foundational principles in the emergence of the scientific method in the 17th century was the segregation of science and theology: it was realised that, without this separation, the development and advancement of new ideas through science would be hampered by the dogmas of theology, and restrained by the morality those dogmas upheld.
The intractability of the problem stems from the fact that the process of evidential evaluation should, in theory, allow us to make decisions and form new ideas without the hindrance of moral value beliefs. Moral value beliefs are, more often than not, a crippling impediment to a fair evaluation of the evidence in the first instance, engendering biased interpretations or outright misinterpretations. Even where the evidence has been considered, moral values act as a further hurdle to changing a stance when, on the objective face of the evidence, a change is warranted. This is motivated reasoning: faced with new information, we can either appraise that information openly and alter our beliefs in the face of better evidence, or we can reconcile the new information against our preconceived beliefs in a way that validates rejecting the information and affirming the beliefs. We are living in a time where the latter is justified whenever it is deemed morally preferable to an individual or group.
None of this is to argue that an evidence-based belief should take precedence if it would result in outcomes that are morally unjustifiable. For example, while it is a fact that women may become pregnant, that fact does not warrant discrimination in pay or employment due to maternity leave (an ongoing reality that is morally incompatible with equality). But such circumstances are the morally unambiguous, clear-cut cases.
The use of moral value as a justification for non-evidential belief formation is more problematic when we are asking falsifiable scientific questions, such as whether wearing a mask is an effective prophylactic against Covid-19 transmission, or whether the “affirmative care” model of gender medicine is supported by evidence. Increasingly, these questions are being addressed purely through the lens of morality-justified motivated reasoning with the name “science” slapped over it. In America, for example, the answers to both of these questions are predicted by which political party someone votes for; hardly a genuine consideration of evidence.
In an era where people are more entrenched, more belief-driven, more dogmatic, more aggressive, and more inflexible, even on the most rudimentary of issues, morality-justified motivated reasoning is weaponised against what others ought to be, do, or think. This explains not only the continued erosion of the credibility of “science”, but also the relativist marketplace of competing “truths”, where anything is a sufficient “truth” merely because of the moral quality the belief holds for an individual or group.
If there's one thing I'd like to share with everyone (even indoctrinate, why not?), it's the difference between OUGHT and IS. Once we leave the terra firma of IS and venture into the highly subjective and often fantastical OUGHT, we move from an ability to find shared premises into the realm of arguing abstractions (my utopia is bigger than yours!). And, as mentioned here, OUGHT is just a synonym for Morality (and its evil twin, Moralism), which I'll let Wilde define: "Morality is simply the attitude we adopt towards people whom we personally dislike."
The Taoists do away with morality entirely (and replace it with simple benevolence), because once any Morality is posited it just becomes another way for people to compete with and wound each other. This seems almost superhuman; "I judge you, therefore I am (better than you)" is more of a Western mantra. But it is a piece of wisdom that should be sprinkled onto every intellectual meal.
Cheers!