What’s the point of remembering historical traumas? Has remembering the Holocaust prevented genocides? What of 9/11? Why do we remember these things? To what purposes do we put our memories? Are they always for good?
But most times they are not memories at all, not really. They are political stories we have chosen to latch on to for specific reasons. No-one in Ireland “remembers” the Irish Easter Rising. No-one in Australia “remembers” Gallipoli. Why do we sacralize certain political stories we call memories? And why do we even call them memories? To what use do we put these “memories”?
Remembrance as a species of morality has become one of the more unassailable pieties of the age. Today, most societies all but venerate the imperative to remember. We have been taught to believe that the remembering of the past (and its corollary, the memorialising of collective historical memory) has become one of humanity's highest moral obligations.
But what if this is wrong, if not always, then at least part of the time? What if collective historical memory, as it is actually employed by communities and nations, has led far too often to war rather than peace, to rancour and resentment rather than reconciliation, and the determination to exact revenge for injuries both real and imagined, rather than to commit to the hard work of forgiveness?
The questions I opened with are based on an interview with David Rieff on the Late Night Live program on Australia's Radio National, with interviewer Phillip Adams: In praise of forgetting. That is the link to this most excellent interview. Promise to listen to it before you go any further. (I have not yet fully read the Guardian article I quoted from above, but this post is inspired by the interview.)
Esteemed American journalist David Rieff argues against our passion for the past. He looks at how memory serves nationalistic history every ANZAC Day and in the annual pilgrimage to Gallipoli, and how the memory of past horrors inflames deep-seated ethnic hatreds, violence and wars.

". . . Today, the consensus that it is moral to remember, immoral to forget, is nearly absolute. And yet is this right? David Rieff . . . poses hard questions about whether remembrance ever truly has, or indeed ever could, "inoculate" the present against repeating the crimes of the past. He argues that rubbing raw historical wounds—whether self-inflicted or imposed by outside forces—neither remedies injustice nor confers reconciliation. If he is right, then historical memory is not a moral imperative but rather a moral option—sometimes called for, sometimes not. Collective remembrance can be toxic. Sometimes, Rieff concludes, it may be more moral to forget. . . ."
A landmark in national life has just been passed. For the first time in recorded history, those declaring themselves to have no religion have exceeded the number of Christians in Britain. Some 44 per cent of us regard ourselves as Christian, 8 per cent follow another religion and 48 per cent follow none. . . . We can more accurately be described now as a secular nation with fading Christian institutions. . . . .
Christians, for their part, should not automatically associate a decline in religiosity with a rise in immorality. On the contrary, Britons are midway through an extraordinary period of social repair: a decline in teenage pregnancies, divorce and drug abuse, and a rise in civic-mindedness.
A few excerpts from the interview . . . . First, on evolutionary psychology itself:
Robin Lindley: . . . . What did you learn from neuroscientists and others about why our brain tends to work this way?
Rick Shenkman: Whatever you make of Evolutionary Psychology, and many people hold it in dim regard, its main assumption seems very compelling to me, and that is that our brain evolved to address the problems we faced during the Pleistocene, a two-and-a-half-million-year-long period. See a leopard in the jungle and you jump. That's your automatic brain at work. Your instincts. You don't have to think about jumping, you just do. We jump out of the way because people who jumped when danger approached were more likely to survive and pass along their genes than those who didn't.
A scientific consensus now exists that the brain works by using either System 1 or System 2, as Daniel Kahneman explains in his book, Thinking, Fast and Slow. System 1 is automatic thinking; System 2 is reflective. I found this fascinating. It helped explain how we respond to politics. Eventually, I came to the conclusion that we respond to politics most of the time using System 1. This insight wasn't my own. I first encountered it watching a video lecture by the Cornell social scientist David Pizarro. It made a deep impression. Fortunately, I came across it early on in my research.
We are more Mulder than Scully . . . despite the strongest wishes of us sceptics that it be otherwise.
Robin Lindley: Our trust in leaders is often misplaced. You're an expert in presidential history and you recount numerous examples of when presidents lied but there was little public reaction, such as when Grover "Jumbo" Cleveland failed to disclose he had cancer, Lyndon Johnson lied about the Gulf of Tonkin incident, and Richard Nixon lied about Watergate. The public response was muted and you attribute that response to an innate credulity. How do you explain that?
Rick Shenkman: Human beings are basically believers, as Harvard's Daniel Gilbert has demonstrated. To borrow a line from another social psychologist, we're more like Mulder than Scully from the "X Files." The reason is fairly straightforward. We couldn't accomplish much if we went around skeptical of everything. Once we decide on a matter we are inclined to consider it settled unless a good reason comes along to make us question it. That gives our brain a chance to focus on threats and opportunities around us. Experiments with sea slugs that I cite in the book show this is a feature of the animal brain. It has to do with our habituation to information. Once we become accustomed to something we stop thinking about it. We grow bored by it. That's our brain helping us keep focus on what's new. It's a survival instinct and it shows up, as I say, even in snails, as the scientist Eric Kandel proved half a century ago.
Another factor comes into play. We want to believe in our leaders. So it takes us quite a bit of time to become convinced that they aren’t all they’re cracked up to be. And once we cast a vote in favor of a leader we tend to come to their defense when attacked. That’s our partisan brain at work. We like being consistent. So if we decided that someone is a good leader we tend to dismiss any evidence to the contrary. Our brain literally shuts off the flow of electricity to neurons telling us something we don’t want to hear that might make us doubt our beliefs.
There are other factors, to be sure. I spend several chapters addressing these.
When I wrote Do You Understand What You Argue Against I had only just finished reading Richard Shenkman's Political Animals: How Our Stone-Age Brain Gets in the Way of Smart Politics and was sharing some reflections arising out of that book. In particular, I had been thinking about how Shenkman's overview of recent findings in psychology and related studies helps us understand why we so often find people with very strong opinions about certain things (evolution, mythicism, Zionism, refugees, Muslims, terrorists, politicians, national history, poverty . . . ) even though they are incapable of explaining the viewpoint they oppose. I opened with something written by PZ Myers:
I’ve talked to creationists one-on-one about this before, and they can’t tell me what I’m thinking at all accurately — it’s usually some nonsense about hating God or loving Satan, and it’s not at all true. But at the same time, I’m able to explain to them why they’re promoting creationism in a way they can agree with.
I discussed this problem in the context of System 1 and System 2 types of thinking. It didn't take very many comments on that post to send me looking for the main source for that model of System 1 and 2 thinking raised by Shenkman. Shenkman's discussion was only second-hand information. So I have since started reading the primary source: Thinking, Fast and Slow by Daniel Kahneman (2011). When I first heard of that book I impulsively dismissed it (a System 1 reaction) because I thought the title and someone's comment about it meant it was just another pop psychology book. I have since learned I could not have been more wrong. Daniel Kahneman is not a pop psychologist. See The Guardian's article Daniel Kahneman changed the way we think about thinking. But what do other thinkers think of him?
Excuse me if I copy and paste some paragraphs from the conclusion of Kahneman's book. Work pressures and bouts of illness have kept me from posting anything more demanding at this stage. There will be some slight shift of understanding of the nature of System 2 thinking in what follows. (Always check the primary sources before repeating what you think you understand from a secondary source!) Bolding is my own.
I’ve talked to creationists one-on-one about this before, and they can’t tell me what I’m thinking at all accurately — it’s usually some nonsense about hating God or loving Satan, and it’s not at all true. But at the same time, I’m able to explain to them why they’re promoting creationism in a way they can agree with. — PZ Myers
PZ’s quandary reminds me of my own attempts to discuss political topics (terrorism, Islam, Israel and Palestine) and “religious” ones (methods used by Christian origins scholars, mythicism) with both academics and lay folk. Yesterday I read Jerry Coyne’s complete failure to explain the meaning of Zionism. Coyne has very strong views about Israel but he does not know what Zionism is or why some people oppose it. I have found the same ignorance when it comes to Islamist terrorism and Islam itself in a number of discussions here on this blog. Ironically that ignorance sometimes expresses itself in response to posts where I have cited or directly quoted serious research into the questions. Some people appear to ignore the explanations of the ideas they are supposedly responding to.
I once spent many, many exchanges with a then associate professor at Butler University comparing the evidence for Socrates and Jesus. I could not understand why he appeared to keep repeating arguments that I thought I had so clearly demonstrated were false, so I asked him to tell me what he understood my argument to be. It took quite a while but eventually he did respond, and he stymied me by responding with a nonsensical idea that completely missed my point. I can only assume he was sincere and really was not registering what I was writing in my exchanges with him. We have seen the same travesty in his inability to explain the most fundamental arguments of Earl Doherty and Richard Carrier even after supposedly reading sections of their books. But that's no surprise, because we saw the same distortions in Ehrman's and Casey's claims to have read and responded to mythicist arguments.
They — people like Coyne and McGrath — are not really engaging with the arguments of their opponents. They really do not know what their opponents are arguing.
But then I have to confess that I have sometimes rushed to conclusions about political and religious claims and other situations on some sort of instinct, or certainly with knee-jerk reactions. I do know that there was a time when I was like PZ Myers's creationists. I was confident that I knew the fallacies at the heart of evolution and the thinking of the scientists who wrote about it. And I know I have a tendency to form instant judgments when I listen to certain politicians speak.
[E]volution teaches us to think quickly. In the life-and-death setting common in the world of hunter-gatherers, speed was of the essence in sizing up both people and situations. We couldn't let anything get in the way of our making up our minds, not even an absence of facts. In circumstances where we lacked facts — a common occurrence in the real world — we found other bases upon which to make a decision. The point was to act. Dillydallying could kill you.
The legacy of this evolutionary inheritance is that today we leap to make decisions even when we don’t need to. Instead of waiting for facts we rush to judgment. Though in the modern world we are seldom called on to render a lightning-fast, life-or-death judgment involving a politician, that’s what we do. We can’t help ourselves. We are hardwired to think fast rather than to reflect at length.
Fast thinking (also known as System 1), as the pioneering psychologist Daniel Kahneman points out, is easy. It doesn’t require us to dwell. It really doesn’t require us to think at all, at least as most people define thinking. That’s because it mostly happens in the unconscious, where most of our brain functioning actually takes place. As psychologist Michael Gazzaniga informs us, “98 percent of what the brain does is outside of conscious awareness.” When Michael Jordan dunks a ball he doesn’t think through all the steps he needs to take to gain lift, angle his arms, and provide thrust. He performs these tasks automatically. If he suddenly tried to think about what he’s doing when dunking a ball he’d probably stumble. Reflection gets in the way of the performance of tasks that are usually left to the unconscious. Why is that? Reflection takes time. It’s slow thinking (System 2). Literally slow. Operations in the brain involving the unconscious are five times faster than those involving consciousness.
How do we arrive at a quick decision? We use shortcuts, what social scientists refer to as heuristics. Quick — which of these capital cities in Africa has the most people?
1. Libreville
2. Asmara
3. Cape Town
The answer is Cape Town. How do you know this? Because you have heard of Cape Town (pop. 3.74 million), and you probably haven’t heard of Libreville (pop. 797,000) or Asmara (pop. 649,000). Your brain concluded that since you haven’t heard of either city, chances are they aren’t very big. This is an example of the recognition heuristic. If we recognize something, our brain automatically assumes it must be because it’s important. Why do we vote for people whose names we recognize on the ballot even if we know nothing about them? It’s because we recognize their names. Our mere recognition of them must mean that they are known for something, and in the absence of a strong negative cue, we naturally believe it must be something positive. In fact, social scientists have discovered that familiarity seems to have the same effect on us as happiness. We get a charge in the reward center of our brain when we experience the familiar. And when do we become less analytical? When we are engaged in System 1 thinking.
Shenkman, Rick (2016-01-05). Political Animals: How Our Stone-Age Brain Gets in the Way of Smart Politics (pp. 53-54). Basic Books. Kindle Edition.
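The recognition heuristic Shenkman describes is mechanical enough to sketch in a few lines of code. What follows is a minimal illustrative sketch, not anything from the book: the city names and population figures come from the passage above, while the function and the "recognized" set are hypothetical constructions of my own.

```python
# A minimal sketch of the recognition heuristic: given two options, if
# exactly one is recognized, infer that it ranks higher on the criterion
# (here, population). Populations are the figures quoted by Shenkman;
# the function and the "recognized" set are illustrative assumptions.

def recognition_heuristic(option_a, option_b, recognized):
    """Return the option judged larger, or None if the shortcut cannot decide."""
    a_known = option_a in recognized
    b_known = option_b in recognized
    if a_known and not b_known:
        return option_a
    if b_known and not a_known:
        return option_b
    return None  # both or neither recognized: no basis for the shortcut

populations = {"Libreville": 797_000, "Asmara": 649_000, "Cape Town": 3_740_000}
recognized = {"Cape Town"}  # what a typical reader has heard of

for a, b in [("Libreville", "Cape Town"), ("Asmara", "Cape Town"), ("Libreville", "Asmara")]:
    guess = recognition_heuristic(a, b, recognized)
    actual = max((a, b), key=populations.get)
    print(f"{a} vs {b}: heuristic says {guess}, larger city is {actual}")
```

The third comparison shows the limit of the shortcut: when neither option is recognized (or both are), the heuristic simply has nothing to say.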
I was beginning to think that I no longer had any idea whether we have free will or not, and after reading the ensuing discussion I felt I could firmly conclude that I really am undecided — though lately I have begun to lean a little towards the "yes, we do have it" side of the fence. For now.
Connections between beliefs and behaviour are not routine and when they happen they require explanation. Personal experience and a passing acquaintance with a thing we call the subconscious both tell us that.
I am sometimes a little taken aback by the forcefulness of some people’s claims that “of course beliefs determine what people do”. The context in which this is dogmatically asserted is discussion relating to Islam. I really can’t imagine the same dogmatism surfacing if almost any other mainstream religion or non-religious belief system were being addressed.
If a terrorist shouts "God is Great" before opening fire or blowing himself up in a crowded place, then, bizarrely, that one phrase is taken to represent the entire motivation of the act. Pointing to videotapes and other remnants of far more wide-ranging conversations and arguments in the lead-up to that murder will not change some minds.
So I quote here a piece that has long been waiting to be included in my next post on Clark McCauley and Sophia Moskalenko's book on the causes of radicalization and terrorist acts, Friction: How Radicalization Happens to Them and Us. It won't hurt to use it now and repeat it later:
Opinions and attitudes are not always good predictors of action. Of all those who might say they want to help starving children, how many would actually donate to UNICEF or work in a local soup kitchen? But for the Russian students of the 1870s, radicalization in opinion was often associated with radicalization in action. How are we to understand this unusually high consistency between opinion and behavior?
One possibility is the degree to which the era was swept up in a culture of change. Tectonic plates of Russian society were shifting, and the young generation who grew up amidst this change, themselves beneficiaries and victims of new hopes and new norms, felt that it was their job to rewrite history.
Social psychologist Robert Abelson advanced a similar perspective in relation to student activism in the United States. Abelson reviewed evidence that beliefs are not automatically translated into feelings, and feelings are not automatically translated into behavior. He then identified three kinds of encouragement for acting on beliefs: seeing a model perform the behavior; seeing oneself as a “doer,” the kind of person who translates feelings into action; and unusual emotional investment that overcomes uncertainties about what to do and fear of looking foolish. Abelson brought these ideas to focus on 1970s student activism in the United States:
. . . it is interesting to note that certain forms of activism, for example, campus activism, combine all three of the above types of encouragement cues. Typically, the campus activist has at least a vague ideology that pictures the student as aggrieved, and provides both social support and self-images as doers to the participants in the group. A great deal of the zest and excitement accompanying the activities of student radicals, whether or not such activities are misplaced, thus may be due to the satisfaction provided the participants in uniting a set of attitudes with a set of behaviors.3

As U.S. students of the 1970s discussed, dared, and modeled their way to the excitement linking new ideas with new behaviors . . . , so too did Russian students of the 1870s. [Friction, Kindle version, bolded emphasis mine]

3. Abelson, R. (1972). Are attitudes necessary? In B. T. King and E. McGinnies (Eds), Attitudes, conflict, and social change, pp. 19-32. New York: Academic Press.
It happens in reverse, too, as we well know (except when some of us have Islam on our minds). Most of us have heard of the Milgram experiment where an unexpectedly high number of people behaved contrary to their beliefs about how they should treat others and suffered emotional stress for a time as a consequence.
James Cook witnessing human sacrifice in Tahiti c. 1773 — Wikipedia
Egalitarian societies are a good thing.
Don't we see, even in class societies that no longer practice human sacrifice, the upper classes expending the blood of the lower classes in other ways — all buttressed by noble and praiseworthy ideologies, of course?
The following article or letter has just appeared in Nature: "Moralistic gods, supernatural punishment and the expansion of human sociality". (Or try this link.) Warning, however: high-profile journals such as Nature are known to experience the highest retraction rates among scientific publications. Presumably this is because they attract readers (i.e. payers) by publishing articles sure to be popular even when their claims have not been properly tested, or when the peer review process behind final editorial decisions has been inadequate.
Since the origins of agriculture, the scale of human cooperation and societal complexity has dramatically expanded. This fact challenges standard evolutionary explanations of prosociality because well-studied mechanisms of cooperation based on genetic relatedness, reciprocity and partner choice falter as people increasingly engage in fleeting transactions with genetically unrelated strangers in large anonymous groups.
To explain this rapid expansion of prosociality, researchers have proposed several mechanisms. Here we focus on one key hypothesis: cognitive representations of gods as increasingly knowledgeable and punitive, and who sanction violators of interpersonal social norms, foster and sustain the expansion of cooperation, trust and fairness towards co-religionist strangers.
We tested this hypothesis using extensive ethnographic interviews and two behavioural games designed to measure impartial rule-following among people (n = 591, observations = 35,400) from eight diverse communities from around the world: (1) inland Tanna, Vanuatu; (2) coastal Tanna, Vanuatu; (3) Yasawa, Fiji; (4) Lovu, Fiji; (5) Pesqueiro, Brazil; (6) Pointe aux Piments, Mauritius; (7) the Tyva Republic (Siberia), Russia; and (8) Hadzaland, Tanzania. Participants reported adherence to a wide array of world religious traditions including Christianity, Hinduism and Buddhism, as well as notably diverse local traditions, including animism and ancestor worship.
Holding a range of relevant variables constant, the higher participants rated their moralistic gods as punitive and knowledgeable about human thoughts and actions, the more coins they allocated to geographically distant co-religionist strangers relative to both themselves and local co-religionists.
Our results support the hypothesis that beliefs in moralistic, punitive and knowing gods increase impartial behaviour towards distant co-religionists, and therefore can contribute to the expansion of prosociality.
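For readers who want to see concretely what a finding like this looks like in data, here is a toy sketch. It is emphatically not the authors' analysis (the letter's statistical models are far more elaborate); it simply simulates made-up data in the shape described (ratings of a god's punitiveness and knowledge, one stand-in control variable, and coin allocations in a game assumed here to use 30 coins) and fits an ordinary least squares regression, so you can see where a positive coefficient on punitiveness would come from. Every number and variable name below is hypothetical.

```python
# A toy sketch (not the study's actual analysis): regress coins allocated to a
# distant co-religionist on ratings of a god's punitiveness and knowledge,
# "holding another variable constant". All data are simulated; the 30-coin
# game size and the variable names are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)
n = 591  # sample size reported in the letter

punitive = rng.integers(0, 5, n)   # hypothetical 0-4 rating scale
knowing = rng.integers(0, 5, n)    # hypothetical 0-4 rating scale
wealth = rng.normal(0, 1, n)       # stand-in for "other relevant variables"

# Simulated outcome: coins (out of an assumed 30) allocated to the distant
# co-religionist, built so that punitiveness and knowledge nudge allocations up.
coins = np.clip(
    np.round(12 + 0.6 * punitive + 0.4 * knowing + 0.3 * wealth + rng.normal(0, 2, n)),
    0, 30,
)

# Ordinary least squares: coins ~ punitive + knowing + wealth
X = np.column_stack([np.ones(n), punitive, knowing, wealth])
beta, *_ = np.linalg.lstsq(X, coins, rcond=None)
for name, b in zip(["intercept", "punitive", "knowing", "wealth"], beta):
    print(f"{name:>9}: {b:+.2f}")
```

In the real study the controls and model structure matter a great deal; the sketch only illustrates the direction of the reported relationship.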
This post introduces the general idea of the fundamental principles of morality being universal and innate in human beings, yet being tweaked and expressed in different ways according to culture, much the same way different languages derive from the same basic principles of grammar that are part of our unconscious makeup.
According to this theory, we rationalise moral judgments and respond emotionally to them. That is, our moral judgments of the acts we witness come first (intuitively, unconsciously); we then react emotionally to them and may attempt to explain our judgments rationally. But reason and emotion are not the origins of our moral judgments, as Kant and Hume respectively thought.
Kant
Immanuel Kant: It is through our reason and rationality that we determine what is right and wrong. Emotions incite us to act selfishly and foolishly, so true morality ought to be guided by reason alone. We should use our reasoning faculties to determine general moral obligations that would apply universally. Hence his "categorical imperative":
I ought never to act except in such a way that I could also will that my maxim should become a universal law.
This principle meant that we should never treat people merely as a means to an end, but should respect others as having their own desires and goals.
Hume
David Hume: Our moral judgements come to us through our emotions. Just as we recognise immediately a beautiful painting or an ugly one, so our emotions tell us immediately when an act we witness is virtuous or immoral. Some personality traits, Hume said, are innate, while others are acquired through our culture. An innately generous person who gives to charity is recognised as doing a morally good thing. One who has learned from society the importance of acting fairly, and who resolves to act fairly even against self-interest, is also recognised as a morally good person.
It is our emotional response to some action that is the basis of our judgment on whether or not the act is moral.
Rawls
John Rawls: Not emotions, nor reason, but unconscious principles drive our moral judgements. We accordingly cannot always explain why a certain action is right or wrong — it just “is”.
We possess an innate moral grammar akin to the Chomskyan notion of an innate and universal linguistic grammar.* Just as we have a faculty for language, one that is hidden beneath our conscious awareness, so we also have a faculty for moral judgments.
This post offers some explanation of why some societies are monogamous, as a "footnote" to my previous post on the statistical benefits of monogamy for men. Robert Wright (The Moral Animal) points out that most societies have not been strictly monogamous, so it's not as though we've evolved to be monogamous by nature:
A huge majority — 980 of the 1,154 past or present societies for which anthropologists have data — have permitted a man to have more than one wife. And that number includes most of the world’s hunter-gatherer societies, societies that are the closest thing we have to a living example of the context of human evolution. (p. 90)
It’s been a mixed bag:
Actually, there is a sense in which polygynous marriage has not been the historical norm. For 43 percent of the 980 polygynous cultures, polygyny is classified as “occasional.” And even where it is “common,” multiple wives are generally reserved for a relatively few men who can afford them or qualify for them via formal rank. For eons and eons, most marriages have been monogamous, even though most societies haven’t been.
Still, the anthropological record suggests that polygyny is natural in the sense that men given the opportunity to have more than one wife are strongly inclined to seize it. (p. 91)
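Wright's point that most marriages have been monogamous even though most societies have not can be made concrete with a little arithmetic. The sketch below first works through the figures quoted above, then runs a toy example whose per-society numbers (1,000 married men, a five per cent elite with two wives each) are entirely invented for illustration.

```python
# First, the figures quoted from Wright; then an invented toy society showing
# why most marriages can be monogamous even where polygyny is permitted.

societies_total = 1154
societies_polygynous = 980
occasional_share = 0.43

print(f"Societies permitting polygyny: {societies_polygynous / societies_total:.0%}")
print(f"Polygynous societies where it is only 'occasional': "
      f"{occasional_share * societies_polygynous:.0f}")

# Hypothetical polygynous society: 1,000 married men, 5% of whom have two wives.
men = 1000
polygynists = 50              # invented elite
wives_per_polygynist = 2
monogamous_marriages = men - polygynists
polygynous_marriages = polygynists * wives_per_polygynist
share = monogamous_marriages / (monogamous_marriages + polygynous_marriages)
print(f"Share of marriages that are monogamous in this society: {share:.0%}")
```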
Another work I'm finally catching up with is Robert Wright's The Moral Animal: The New Science of Evolutionary Psychology (1994). We all know the usual narrative about men being shaped by their genes to want to reproduce with everything in sight while women are always on the shrewd lookout for the best candidate to protect and provide for their children.
To cut to the chase, and speaking in broad evolutionary/social-psychological terms, Wright raises an interesting question (at least for someone like me, shamefully twenty years late in reading his book!):
[W]hereas a polygynous society is often depicted as something men would love and women would hate, there is really no natural consensus on the matter within either sex. Obviously, women who are married to a poor man and would rather have half of a rich one aren’t well served by the institution of monogamy. And, obviously, the poor husband they would gladly desert wouldn’t be well served by polygyny. (p. 96, my formatting and bolding in all quotations)
Wright adds that the males who are advantaged by monogamy are not only those at the bottom of the income scale.
Consider a crude and offensive but analytically useful model of the marital marketplace. One thousand men and one thousand women are ranked in terms of their desirability as mates. Okay, okay: there isn’t, in real life, full agreement on such things. But there are clear patterns. Few women would prefer an unemployed and rudderless man to an ambitious and successful one, all other things being even roughly equal; and few men would choose an obese, unattractive, and dull woman over a shapely, beautiful, sharp one. For the sake of intellectual progress, let’s simplemindedly collapse these and other aspects of attraction into a single dimension.
Suppose these 2,000 people live in a monogamous society and each woman is engaged to marry the man who shares her ranking. She’d like to marry a higher-ranking man, but they’re all taken by competitors who outrank her. The men too would like to marry up, but for the same reason can’t.
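The excerpt breaks off there, but Wright's "crude and offensive but analytically useful" model is easy to put into code. The first half of the sketch below simply follows the rank-matching rule as quoted; the second half is my own extrapolation, with invented numbers, of the point made in my earlier post on the statistical benefits of monogamy for men: if a slice of top-ranked men each take two wives, an equal number of men at the bottom are left with no wife at all.

```python
# Wright's marriage marketplace: 1,000 men and 1,000 women ranked 1..1000
# by desirability. Under monogamy each woman marries the man of her rank;
# nobody can "marry up" because every higher-ranked partner is taken.

N = 1000
men = list(range(1, N + 1))    # 1 = most desirable
women = list(range(1, N + 1))

monogamous_pairs = list(zip(men, women))  # pairing by shared rank
print("Under monogamy, men left without a wife:", N - len(monogamous_pairs))

# Extrapolation (numbers invented): suppose the top 100 men take two wives each
# and women prefer a share of a high-ranked man to all of a low-ranked one.
elite = 100
wives_demanded = 2 * elite + (N - elite)   # 1,100 wives wanted, 1,000 women exist
print("Under that polygynous scenario, men left without a wife:", wives_demanded - N)
```

That shortfall of one hundred falls on the lowest-ranked men, which is, presumably, the statistical sense in which monogamy benefits men.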
Plasmodium falciparum life cycle — image from http://www.nature.com/scitable/content/plasmodium-falciparum-life-cycle-14465535
Sam Harris and Jerry Coyne have both explained, in a recent YouTube discussion and in print, how they studied religion and read lots of theology before undertaking their anti-theistic critiques. Harris begins by informing us that in his twenties he read widely across religious traditions; Coyne tells readers he read much theology as he "dug deeper" into the questions that troubled him, and as he did so he "realized that there were intractable incompatibilities between science and religion" that accommodationists "glossed over". Heather Hastie has taken exception to a recent post of mine and pointed out that in her own research into terrorism she has downloaded a dozen issues of an online terrorist recruiting journal for study.
It is one thing to read the religious beliefs and claims made by converts. It is quite another to study why they have embraced those beliefs, why those beliefs have the hold over believers that they do, and the relationship between those beliefs and claims and the extremist behaviours of adherents.
Gullible and weak-minded?
There is a widespread perception among people who have never had much or anything to do with religious cults that those who join them are somehow more gullible or weak-willed than average. Such popular perceptions are problematic. Some cult members demonstrate superior intelligence and knowledge in other areas of their lives, and many of them are exceptionally strong-willed, to the point of undergoing extreme sacrifices and hardship, even giving up their own lives, and the lives of loved ones, when tested on their faith. That certain cults can generate a public presence beyond their actual numbers often shows they must have extraordinary skill and determination to maximise the impact of their meagre human resources. The nineteen men who planned and carried out the 9/11 attacks were far from being of lesser intelligence or weak-willed.
Recently I have been sharing snippets from anthropologist Pascal Boyer's Religion Explained and in follow-up comments have added a few more quotations addressing common views that people embrace religion because they are seeking explanations to big or ultimate questions, or because they are gullible. In the past I have posted some explanations of how religious thinking differs from other types of thinking. (Understanding extremist religion; Religious credence part 1 and part 2; Science and religion; Fantasy and religion)
If we want to find a way to counter potentially dangerous extremist acts in the name of religion, or simply to wind back the daily oppressive practices of some religions (e.g. choosing death over medical care, child abuse, denial of women's rights), we will need to do more than simply present rational arguments. The converts do not shield themselves from opposing arguments; they are prepared for them and know how to counter them. Religious thinking does not work the same way as other kinds of thinking – we need to understand that.
I must thank Dan Jones for revitalising my interest in this question and providing me with new readings to follow up. I have already studied a few works on “how religion works” but need to do much more.
In the meantime, here is an alternative approach to what is required to understand how people acquire the religious mind. (Alternative, that is, to simply reading the theologies and ramblings of the religious texts themselves and thinking, “how bizarre!”, “how frightening!”).
The author, an anthropologist, would compare the methods of Harris, Coyne and Hastie to studying the malaria pathogen in depth — such a study alone will never explain how malaria spreads among some people and not others, or the symptoms it produces.