How is it possible to live a moral life if we don’t believe in a god?
Without belief in God, some believers shriek hysterically, we would have no moral code. We would be free to kill and steal and do all sorts of other horrible things.
Christians, Muslims, and Jews claim that their God gave humanity its moral laws or codes. Other believers attribute moral interests to their respective deities, too. Gods are so interested in the morality of our actions, we are told, that they will even punish or reward people according to whether they have been good or bad.
What follows is from a book first published a decade and a half ago, so others more in the know may be able to contribute more current insights or simply alternative explanations. Pending those updates, here is Pascal Boyer’s explanation, set out in Religion Explained: The Evolutionary Origins of Religious Thought, for why people connect moral interests to gods or spirits or ancestors.
[W]e know that religious codes and exemplars cannot literally be the origin of people’s moral thoughts. These thoughts are remarkably similar in people with different religious concepts or no such concepts. Also, these thoughts naturally come to children, who would never link them to supernatural agency. Finally, even religious people’s thoughts about moral matters are constrained by intuitions they share with other human beings, more than by codes and models. (p. 191)
Boyer begins by addressing the many cross-cultural studies that demonstrate beyond all doubt what all parents have always known: that even young children have moral intuitions. They don’t need to be taught by a thunderous voice from heaven that it is wrong to intentionally deceive someone else with misleading information. No-one taught my infant that lying is wrong, yet he showed clear signs of guilt when he told his first lie. Further, young children know the difference between “moral principles” and “conventional rules”. In a classroom, for example, they know the difference between shouting out in class and stealing someone’s pencil case. They also know that stealing an eraser is not as serious as hitting others.
Most significantly, they know that
social consequences are specific to moral violations. (p. 179)
If they forget or disregard an instruction not to leave their notebook beside the fireplace, they will not expect social ostracism or condemnation for it in the way they would expect those consequences for being caught stealing.
So experimental studies show that there is an early-developed specific inference system, a specialized moral sense underlying ethical intuitions. Notions of morality are distinct from those used to evaluate other aspects of social interaction (this is why social conventions and moral imperatives are easily distinguished by very young children). (p. 179)
There is something remarkable about such moral intuitions in the story of our development to maturity. Certain actions are seen as immoral for their own sake, no matter what, and that understanding does not change into adulthood. Stealing an eraser is wrong, period. Now there might be circumstances where you, the thief, think stealing it is justified — the owner doesn’t care, or the owner stole something from you earlier so stealing the eraser is rationalised as just deserts — but the fact remains that we know stealing the eraser is nonetheless a moral breach.
So it is all the more interesting that no such change is observed in the domain of moral intuitions. For the three-year-old as well as for the ten-year-old and indeed for most adults, the fact that a behavior is right or wrong is not a function of one’s viewpoint. It is only seen as a function of the actual behavior and the actual situation. (p. 180, my highlighting in all quotations)
Humans are social animals, so we would expect to have “evolved moral dispositions that are beneficial to social groups.” But we know that not everyone is genetically predisposed to refrain from stealing. People do steal. Societies are not perfect models of cooperation.
But genes always vary and always have varied. This is what makes evolution possible in the first place. Some variants give their bearers better chances to pass on their genes, so these variants spread in the gene pool. Other variants reduce these chances and therefore tend to disappear. If we had dispositions for socially acceptable behavior, these should vary too. (p. 181)
We know that many animal species demonstrate unselfish acts: birds putting themselves at risk by pretending to be injured to distract a predator from their young; lookout animals that sound alarms to their group when a predator approaches and so themselves draw the attention of the predator; sharing food, and so forth. Kin selection and reciprocal altruism may help explain many such behaviours but they are not the only explanations.
People behave in altruistic ways in many circumstances where no common genes are involved and no reciprocation is expected. They refrain from extracting all the possible benefits from many situations. It would be trivially easy to steal from friends, mug old ladies or leave restaurants without tipping. Also, this restraint does not stem from rational calculation—for instance, from the fear of possible sanctions—for it persists when there is clearly no chance of getting caught; people just say that they would feel awful if they did such things. Powerful emotions and moral feelings seem to be driving behavior in a way that does not maximize individuals’ benefits. (p. 184)
Moral behaviour is not simply about rational choices. If everybody in our community were moral only on the basis of rational calculation — that is, if they would always steal when they had the opportunity and were sure they would not be caught — then society would barely survive. We would trust nobody.
But how would we feel towards somebody who was irrationally committed to honesty — even when it was not in their best personal interest?
Would you be more likely to trust someone who voluntarily put themselves in a position where they would be severely punished if they cheated you? An example: a person becomes a financial advisor and joins an association that will financially penalize and expel him if he commits fraud. (I am being slightly ironic with this example given current scandals in Australia concerning major insurance companies and banks.)
First, we would need to know that the person has put themselves in such a position, so we would look for signs such as certificates on their office wall, statements on their business cards, a forthright manner.
Such a person has a better chance of winning the trust of others and so gaining the rewards of social cooperation.
But I have oversimplified the scenario that makes for a generally cooperative society. There is no point embracing a strategy that will make others more likely to cooperate with you if no-one knows you follow that strategy. We need to know that if a financial advisor were to be dishonest and defraud us, there really would be dire consequences. We want to know that someone or some group will share our outrage and pay real costs to punish the dishonest person. If we know that our community is made up of persons who would be very emotionally hostile towards anyone who takes cruel advantage of another — so angry that they really would go to work to punish the criminal severely — we would feel more secure.
It is not difficult from this point to understand how we would come to attribute such moral choices — the ones that promote social cooperation for our benefit — to the will of the gods or spirits.
So certain values are prized in all cultures. Peace is good. So is treating a guest as sacred. No-one argues against those values in principle. The reason we don’t live in a perfect world, however, is that we need to weigh the benefits of cooperation against the risks of extending it. We want peace and have much to gain from cooperating with others, but should we really trust our plundering neighbours? The potential rewards of safety that come with cooperation need to be balanced against the risks of cooperating with an unknown entity. Our behaviours are indeed rational. My own extrapolation of Boyer’s argument is that we accuse code-breakers of hypocrisy (“you worship a god of peace but wage war”) when we fail to understand the evolutionary function of our moral intuitions and impute our moral codes to gods.
There is more yet, but this is meant to be a blog post of readable length.
But why bring in the gods?
Moral intuitions are a part of our mental makeup that promotes a cooperative society, and we intuit that behaviours are right or wrong in themselves, regardless of points of view. A child knows that stealing an eraser is wrong regardless of motive or excuse.
Gods, spirits, ancestors know what’s going on. They know what we do and why we do it. That’s their nature. They can see us and they watch what is happening in our life. Or if they have been busy elsewhere for a time we can always talk to them to bring them up to date.
Now take a “moral situation” that involves conflict between you and another. Here is Boyer’s illustration:
Imagine this situation: You know (a) that there is a banknote in your pocket and remember that you stole it from your friend’s wallet. This situation may produce a specific emotion (guilt).
Let me change the context. You took the banknote from your friend’s wallet but also remember (b) that he stole money from you in the first place. This new context will probably result in a rather different emotional reaction, perhaps a mixture of reduced guilt, outrage at his behavior and partly quenched resentment.
So your emotions are very much a function of the information you represent about the situation at hand. But that is the crucial point: in either case you assume that the emotion you feel is the only possible one given the situation.
A disinterested third party who knew the facts about (a) would agree that stealing the money was shameful; whoever knew about (a) and (b) would share your outrage and your sense of justice done. This at least is what we assume and why we invariably think that the best way to explain our behavior is to explain the actual facts. That is why, were your friend (in situation [b]) to complain about your behavior, you would certainly explain to him that it was only a just retribution for his own misdemeanor. Most family rows are extensive and generally futile attempts to get the other party to “see the facts as they really are”—that is, how you see them—and by virtue of that to share your moral judgements. This rarely works in practice, but we do have this expectation. (p. 189, formatting and bolding mine)
Intuitively we assume that anyone who had access to all the information we have would make the same moral judgement we make, and would share the same emotional response to the situation. We believe that anyone who could see the whole of any situation (“just as we see it”) would know as surely as we do where the right and wrong lie, and would feel the same way about each party — sympathy or outrage.
And gods and spirits are merely extended concepts of persons with special attributes, such as being invisible. So if you believe in such a god, spirit, or ancestor, it follows that that god or spirit would share the same moral judgement you make and would understand your outrage against the offending party.
Enter guilt feelings
Most of us feel some guilt when doing something wrong, even if we convince ourselves that we have a right to do it (e.g. “They deserved their payback!”). Guilt feelings are difficult to analyze or understand. It is also difficult for us to understand why we think certain actions are wrong even when we feel we have a right to perform them. Consequently they are easy to project onto an outsider like that god or spirit who watches over all. That is, the guilt feelings tell the believer what the god or spirit thinks about what he or she is doing. Recall that we intuit that certain actions, such as stealing the eraser, are in themselves right or wrong regardless of how we rationalise them.
To sum up, then: Our evolution as a species of cooperators is sufficient to explain the actual psychology of moral reasoning, the way children and adults represent moral dimensions of action. But then this requires no special concept of religious agent, no special code, no models to follow. However, once you have concepts of supernatural agents with strategic information, these are made more salient and relevant by the fact that you can easily insert them in moral reasoning that would be there in any case. To some extent religious concepts are parasitic upon moral intuitions. (p. 191)
Boyer, P. (2001). Religion Explained: The Evolutionary Origins of Religious Thought. New York: Basic Books.