Why the Genetic Fallacy Is Not a Fallacy

Trivia question: which country was the first to pass a public smoking ban? Answer: Nazi Germany. Indeed, German scientists were among the first to establish a causal link between smoking and cancer. Adolf Hitler himself intensely disliked smoking and personally donated money to the Wissenschaftliches Institut zur Erforschung der Tabakgefahren (Scientific Institute for Tobacco Hazards Research). The term Passivrauchen (passive smoking) itself was coined in Nazi Germany, when German researchers discovered that smoking harms bystanders too.

When, much later, Western countries also considered passing smoking bans, the tobacco industry was quick to highlight the historical origins of smoking bans: do you really want to follow in the Nazis’ footsteps? This type of reasoning is an instance of the so-called ‘genetic fallacy’ (as well, incidentally, as the reductio ad Hitlerum), in which you judge a belief or practice merely on the basis of how it originated. The term ‘genetic fallacy’ was coined by the philosophers Morris Cohen and Ernest Nagel (not that this matters to the validity of the analysis… or does it?).

It’s easy to see why the argument about Nazi Germany doesn’t hold water: we have overwhelming independent evidence that smoking is harmful, evidence that stands regardless of anyone’s ideological convictions. Even if the Nazis had never implemented a smoking ban, or indeed even if the Nazi party had never existed, we would still have figured this out. The historical origin of smoking bans is therefore irrelevant. After all, even a raving fanatic like Hitler, who believed a lot of crazy stuff, was bound to be right about some things.

In philosophy, arguments that aim to undermine a belief or practice by exposing its historical or psychological origins are known as ‘debunking arguments’. Nietzsche was arguably the most famous proponent of this type of reasoning, which he called ‘genealogical analysis’. Later on, thinkers like Michel Foucault enthusiastically embraced Nietzsche’s method, applying it to everything from sexual morality and the prison system to medical science and virtually every other aspect of modernity. For Nietzsche and Foucault, the rhetorical purpose of such genealogical explanations was not merely to unravel the origins of something, but to expose or debunk it in the same breath. Not just to explain, but to explain away. And indeed, even when this debunking intent is not made explicit, it often has that effect on the reader. To trace the origins of something, especially something lofty or cherished, is often to devalue or demean it. When we read such aetiologies, wrote William James in The Varieties of Religious Experience, we feel afraid that “the same breath which should succeed in explaining their origin would simultaneously explain away their significance”. For Nietzsche, this was exactly the point: “How [belief in God] originated can at the present stage of comparative ethnology no longer admit of doubt, and with the insight into that origin the belief falls away.”

Well, does it? From a strictly logical point of view, knowing the psychological or historical origin of your belief is irrelevant to its truth value, provided that you have an independent first-order justification for your belief. It doesn’t matter how you arrived at your belief that there is no largest prime number: you may have been drunk or stoned when you acquired it, your math teacher may have been a Nazi, or you may have a strong desire to believe it because you once placed a bet on its truth. As long as you have the mathematical proof at hand, it’s all the same.
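For concreteness, here is the kind of first-order justification that does the work in this example: a compressed sketch of Euclid’s classic proof, stated here purely for illustration.

```latex
% Euclid's argument, sketched. Suppose p_1, p_2, ..., p_n were all the primes,
% and consider
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
% Some prime q divides N, since N > 1. But each p_i leaves remainder 1 when
% divided into N, so q cannot be any of the p_i. The list was therefore
% incomplete: there is no largest prime.
```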

But this doesn’t apply to many of our beliefs, including religious beliefs, for which we don’t have such incontrovertible first-order evidence. Take Freud’s historical aetiology of religious belief. According to Freud, religious belief is a symbolic distortion of real events of the distant past, in which a ‘primal horde’ of sons ganged up to kill their father. The collective guilt of this terrible crime was subsequently repressed and then transmitted (epigenetically?) through the generations. Belief in ‘God’ then emerged as a projection of this mythical father figure and somehow resolved humanity’s collective guilt.

Naturally, we have to distinguish between the empirical validity of the explanation itself and its consequent debunking power. Few scientists today would take seriously Freud’s fanciful story about the ‘primal horde’. But Freud was quite right that, if true, his account would indeed have a corrosive effect on religious belief. This is because repressed guilt is obviously an “off-track process”, as the philosopher Guy Kahane calls it, in the sense that it doesn’t “track” empirical truths in the world. In many cases, psychological aetiologies of religious belief are also in direct competition with the causal stories that are part and parcel of the religion in question. According to Calvin, for example, people believe in God because God himself has endowed us with a sensus divinitatis, a special faculty enabling us to detect the divine presence.

Psychology has come a long way since Freud and Nietzsche. According to researchers working in the Cognitive Science of Religion (CSR), one important reason why people are predisposed to believe in gods is that our brains have evolved to over-detect ‘agents’ in our environment. Identifying agents, and then inferring their intentions, was obviously very important for the survival of our ancestors. If someone is out to get you, you’d better watch your back. Because failure to detect real agents (false negatives) can have very serious fitness consequences, our agency detection module is slightly over-sensitive and occasionally misfires, detecting agents where none are present (false positives). Sometimes we hear something rustling through the leaves, or we have an eerie feeling of someone else being present in the room, but then it turns out there’s nobody there. Could it be… an invisible agent? On this view, religious belief is a by-product of our biased agency detection modules.
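The cost asymmetry behind this over-sensitivity can be made precise with a back-of-the-envelope decision rule. The notation below is mine, not taken from the CSR literature; it is only meant to show why an over-sensitive detector is the optimal design:

```latex
% Let C_{FN} be the fitness cost of missing a real agent (a false negative)
% and C_{FP} the cost of a false alarm (a false positive). Given evidence e,
% expected cost is minimized by signalling "agent" whenever
\[
  \frac{P(\text{agent} \mid e)}{P(\text{no agent} \mid e)} > \frac{C_{FP}}{C_{FN}} .
\]
% Because being ambushed is vastly costlier than being startled, the ratio
% C_{FP}/C_{FN} is close to zero, so the optimal detector fires even on very
% weak evidence: a rustle in the leaves is enough.
```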

Some philosophers have objected that this sort of reasoning commits the genetic fallacy. Religious beliefs are not rendered any less plausible, they argue, just because you have shown how they originate from certain psychological mechanisms. Indeed, researchers in CSR sometimes soothingly reassure their readers that scientific explanations of religions are completely agnostic with respect to the truth value of religious beliefs. They are only interested in explaining how belief in God originates, not whether God exists.

But I think they’re just being polite. If we had incontrovertible first-order evidence for the existence of God, the psychological origins of belief would indeed be irrelevant. But we don’t. Religious belief is usually justified on the basis of spiritual experiences, or intuitions about the perceived design of nature, or even just the universality of religious belief across human cultures. In the absence of any independent evidence for the existence of God, it is very revealing indeed to understand where religious belief comes from. In fact, CSR is much more corrosive to religious belief than old Freudian explanations about murderous primal hordes, because it also carefully explains the origin of the intuitions undergirding religious belief. For instance, the intuition of design in nature is so strong because of our hyperactive agency detection, combined with our evolved disposition for teleological reasoning. Modern cognitive science can explain not just where belief in God comes from, but why it seems so compelling despite its falsity. For the same reason, CSR casts doubt on many intellectual justifications for religious faith, notably the traditional arguments for the existence of God that theologians have come up with. Knowing the cognitive underpinnings of these arguments licenses the strong suspicion that they are merely elaborate post hoc justifications for evolved intuitions.

If this whole line of reasoning is an instance of the genetic fallacy, this just shows that there’s something wrong with the concept of the genetic fallacy. By way of analogy, consider a more specific form of supernatural belief. Some people believe that it is possible to communicate with the spirits of the dead using Ouija boards. In Ouija séances, people place their hands on a glass or cup, located in the centre of a circle marking the letters of the alphabet, and then summon some dead spirit. Amazingly, the cup starts moving back and forth, spelling out coherent words and sentences, even though the participants protest that they are not deliberately steering it. So, are ghosts real? Enter science and its discovery of the ‘ideomotor response’, a set of tiny muscle movements that we all make but of which we’re unaware. In the setting of the Ouija séance, the ideomotor response allows for unconscious coordination between the participants. Nobody feels like they are individually stringing together the messages, but that’s exactly what they’re doing collectively.
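To see how movements that nobody individually intends can nevertheless add up to directed motion, here is a toy simulation (a purely illustrative sketch under invented parameters, not a model drawn from the ideomotor literature):

```python
import random

# Toy sketch, purely illustrative: not a model from the ideomotor literature,
# and all numbers are invented. Each participant adds a tiny, individually
# imperceptible nudge toward the letter they half-expect; summed across hands,
# the nudges dominate the motor noise and produce a coherent drift.

def seance_step(n_participants: int, expected_direction: float) -> float:
    """Net displacement of the planchette for one time step."""
    total = 0.0
    for _ in range(n_participants):
        noise = random.gauss(0.0, 0.05)    # unconscious motor tremor
        nudge = 0.02 * expected_direction  # tiny expectation-driven bias
        total += noise + nudge
    return total

# Averaged over many steps, the shared bias swamps the individual noise:
steps = [seance_step(5, expected_direction=1.0) for _ in range(1000)]
print(sum(steps) / len(steps))  # ~0.10: nobody steers, yet the cup moves
```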

Now, would we really maintain that this aetiology of the participants’ experience and attendant belief does not make their belief in ghosts any less likely, and that to think otherwise is to commit a fallacy? In Joseph Heller’s Catch-22, the protagonist Yossarian remarks at some point: “Just because you’re paranoid doesn’t mean they aren’t after you”. From a logician’s perspective, Yossarian’s point is unassailable. But the remark is funny precisely because it is absurd, and logic be damned. If your friend claims that the CIA is wiretapping his phone, and you then learn that he suffers from a clinical case of persecutory delusions, you will no longer take his suspicions seriously (even though it is still logically possible that the CIA are listening in on him!).

So, when are psychological or historical debunking arguments really guilty of the genetic fallacy? The problem lies with the concept itself. Obviously, there are stronger and weaker forms of aetiological debunking arguments. However, the concept of the genetic fallacy suggests that there is something wrong with the formal structure of such arguments; it presumes that there is a principled distinction between first-order reasons for a belief and causal explanations of a belief. But there isn’t. In real-life argumentation, knowing how a belief or practice originated is often very informative for assessing its credibility. Only if you have a rock-solid independent justification for believing something (e.g., that smoking causes cancer) can you truly say that the causal history of that belief is irrelevant. Of course, aetiological debunking arguments are never deductively valid, and if anyone claimed that they were, that would be fallacious. But this fixation on deductive validity is exactly one of the problems with the traditional canon of logical fallacies, with their portentous Latin names (ad hominem, ad ignorantiam, post hoc ergo propter hoc). In real life, deduction doesn’t get you very far. After all, even the ideomotor explanation of the Ouija board doesn’t deductively rule out the possibility that one really can communicate with dead spirits through Ouija boards. But who would take that possibility seriously?

(Published on the Blog of the American Philosophical Association, June 25, 2021)