Origin of Belief in a Supreme Moralizing God

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

by Neil Godfrey

What could have prompted individuals of our highly successful species to obey an unseen being who told them what they should and should not do?
— Peoples & Marlowe, 253

In my Origins of Religion post I focussed on anthropological evidence for the most fundamental of religious concepts that we can reasonably infer were found among our earliest Homo sapiens ancestors. We saw there that the idea of a high god who is actively interested in humans, imposing on them moral codes and threatening to punish them for disobedience, was alien to our forebears just as it is generally alien to today’s hunter-gatherer societies.

So what is the evidence that informs us when and why the belief in a supreme moralizing god emerged?

The TL;DR version of this post:

In short, the explanation offered by Peoples and Marlowe is that belief in a supreme deity that actively rewards and punishes people according to their moral conduct arises in societies whose large populations rely on strong cooperation to maintain and protect the facilities and methods on which their survival depends: societies in need of controlled, complex cooperation.

This time I rely heavily on another article by two of the authors (Peoples and Marlowe) who informed my earlier post. They ask two questions:

What circumstances could have triggered belief in a single creator, or gods that affect the lives of humans, or one all-powerful god of morality? 

What characteristics of High Gods ensured that they would be culturally sustained, and why?

(255 — my formatting and bolding in all quotations)

The authors examined four types of societies across a sample of 168 in all: foragers, pastoralists, horticulturalists, and agriculturalists.


Who are the foragers? How do they live?

Foragers are the equivalent of the hunter-gatherer societies of the previous post, those among whom the idea of a moralizing high god was found to be an anomaly. (The article on which I drew for that post was published some years later than the one I am discussing now.)

High Gods are present in all types of societies, but far less often among foragers and horticulturalists. Foragers are far more likely to have inactive High Gods or no High Gods at all….

Among our sample of forager societies, 58% had no High God and 88% had either no High God or one that was inactive.

(258, 260)

The explanation given for this absence of belief:

Most foragers are self-sufficient and usually require less cooperative labor…. They are able to survive on a variety of resources and occupy a wide range of habitats…. Most foraging societies are egalitarian and by nature resistant to others dictating what they should do. They would resist the concept of a demanding High God, and thus be less susceptible to evangelism.



Who/what are pastoralists?

Pastoralists are more mobile than foragers but also more stratified. They are often on the move in small groups sparsely scattered throughout vast areas of land. Their most important and often main source of subsistence exists in the form of large, divisible amounts of energy and wealth which can be stolen: their herd animals.


And the likelihood that they will believe in an active moralizing deity?

According to the sample studied, pastoralists were highly likely to believe in a moralizing high god.

In contrast [to foragers], pastoralists are in constant and direct contact with their main source of subsistence and livelihood in a landscape filled with moment-to-moment contingencies. Situations often become quickly unstable (herds scattering) or dangerous (attacks by marauders or wild animals). Blood feuds within and among groups are not uncommon …. Pastoralists have the highest frequency of warfare across the four modes of subsistence …, which increases the need for collective action. Recurring environmental and ecological threats take on enhanced importance because of the self-generating wealth embodied in herded animals. When drought devastates pasture, disease decimates herds, and constant violence over grazing rights becomes unrelenting, a bond of cooperation within one group or tribe must provide a survival advantage when challenged by other feuding groups.


But how might the above influence a tendency to believe in such a god?

The conceptual seed of a paternalistic High God that meddles in human morality and promises social order probably originated several times in many societies. This type of monotheistic god found fertile ground in the threatening landscape of pastoralism. A broader contextual view, based on Whiting’s theory of psychocultural evolution (Worthman 2010), might suggest that belief in a High God is the projection of the pastoralists’ sense of insecurity in the face of unmitigated ecological threat.



Who/what are horticulturalists?

Horticulturalists often combine cultivation with some foraging. They are close to foragers on the productivity-subsistence continuum but less mobile and less autonomous with respect to acquiring food. We might expect to see a pattern of High Gods among horticulturalists similar to that of foragers. But increased food production leads to larger villages and related social problems (crime, disease) that reduced mobility exacerbates. The beginnings of stratification appear among horticulturalists (and complex foragers) in the form of charismatic, entrepreneurial community leaders (“big men”) who begin to establish conventions that institutionalize social controls (Johnson and Earle 2000). These leaders would gain personal power, prestige, and enhanced reproductive success from their association with active High Gods who lend support to their initiatives. Even if they were not the originators of the concept of a High God they would likely have been promoters of it.


That was the prediction. What were the findings?

High Gods are present in all types of societies, but far less often among foragers and horticulturalists.



Who/what are agriculturalists?

Agriculturalists reside at the far end of the continuum, excelling in production of resources while being sedentary. Emergence of early agricultural societies benefited from the efficiency and success of cooperative labor. The demands of public works such as construction of communal storage facilities, defensive perimeters, and irrigation networks overcame limits to growth and made possible the population booms that led to big city problems…. It is now clear that agriculture began in a range of habitats from dry to wet. But the high productivity of a managed irrigation system was fundamental to the formation of pristine states in the Mexican highlands, coastal Peru, Egypt, the Indus Valley, Middle East, and possibly China…. These societies were highly stratified and their leaders would gain the most from moral conventions that reduce chances of fissioning and also ensure high levels of cooperation. But the population as a whole would eventually benefit as the society expands at the expense of other competing groups.


And the results of the survey….

High Gods were present in 73% of agricultural societies, and of those where a High God was present, 62% were active or moral.



Peoples and Marlowe note that group size increases significantly among successful agriculturalists. This leads to a higher level of social stratification or inequality:

Group size will grow with increased food production, which may depend on cooperative efforts. Drought, disease, and social-action problems constantly threaten famine. If the costs of cooperation outweigh the benefits, groups will fission as individuals leave for less-competitive resource environments. When leaving is not a good alternative, a population may remain intact even though some individuals are at a disadvantage. The result is inequality (stratification) and exploitation of others by certain individuals or kin groups (Boone 1992).


Maintaining workable cooperation among ever larger populations presents new challenges for the successful functioning of the entire society: storage facilities must be maintained, marauders defended against, all while increasing numbers of the community want to go their own way. The chances of an authority figure emerging in such a situation are high.

In one scenario for achieving the levels of cooperation and prosociality needed to stabilize large or growing populations, Peoples and Marlowe suggest, an authority figure emerges and finds the concept of an overpowering high god appealing as a means of leveraging his authority.

Proposed Explanation for Belief in a Supreme Moralizing God

We propose that belief in active or moral High Gods stemmed from challenges encountered by individuals employing modes of subsistence that demanded the effective manipulation and cooperation of others in order to produce, manage, and defend vital resources. Constant threats to subsistence and survival engendered, for pastoralists and some agriculturalists, the practical idea of promoting belief in a powerful spiritual force that could promise deliverance from the enemy, and punish those who did not follow the rules of cooperation and moral constraint. The coercive power of religion was used to facilitate cooperation for the benefit of higher-status individuals, which in turn benefitted the whole group. The success of this strategy was copied, and it led to the transformation of human societies into higher levels of collective, economic organization that sustained larger populations….

This research has shown the importance of High Gods to achieving cooperation in growing populations or those under environmental stress.


Peoples and Marlowe, 259

Peoples, Hervey C., and Frank W. Marlowe. “Subsistence and the Evolution of Religion.” Human Nature: An Interdisciplinary Biosocial Perspective 23, no. 3 (September 2012): 253–69. https://doi.org/10.1007/s12110-012-9148-6.


Origins of Religion



The universality of religion across human society points to a deep evolutionary past. However, specific traits of nascent religiosity, and the sequence in which they emerged, have remained unknown. Here we reconstruct the evolution of religious beliefs and behaviors in early modern humans . . . . (Peoples, Duda, Marlowe, 261)

Of interest to most of us is the question of God. Belief in a high god, we will see, does not “come naturally”. We can conclude (and as I will further show in future posts) that belief in God is associated with the development of complex societies. (More specifically this post addresses “traits of religion” since “religion” as an abstract concept was alien to many ancient cultures, with what we would consider to be religious beliefs and practices being, to the ancients, a natural part of everyday life.)

We began our evolutionary journey with a study of hunter-gatherers since it is reasonable to infer they represent our earliest way of living.

Locations of the 33 hunter-gatherer societies studied (p. 265)

There are many traits of religion; seven were singled out for study:

Animism is inevitably with us. “The most striking animist is … Charles Dickens. Both as narrator and through his leading characters, Dickens finds animal form, motion, and volition everywhere. Among artifacts, he most animates houses, furniture, clothing, and portraits…” (Guthrie, 58) Surely, since Dickens, Walt Disney would be a serious contender to be “the most striking animist”. Or should that title go to the modern advertising industry?

1. Animism

We define animism as the belief that all “natural” things, such as plants, animals, and even such phenomena as thunder, have intentionality (or a vital force) and can have influence on human lives. (266 — some will baulk at classifying animism as a religion but it is undeniably a “trait of religion”, “the oldest trait of religion” and “fundamental to religion”.)

2. Belief in an afterlife

Belief in an afterlife is defined as belief in survival of the individual personality beyond death …. (266)

3. Shamanism

We define shamanism as the presence in a society of a “shaman” (male or female), a socially recognized part-time ritual intercessor, healer, and problem solver …. Shamans often use their power over spirit helpers during performances involving altered states of consciousness … to benefit individuals and the group as a whole …. We view shamans as a general category of individuals often found in hunter-gatherer societies who mediate between the earthly and spirit worlds to promote cohesion and physical and mental well-being in the society …. (266)

4. Ancestor worship

Ancestor worship is defined as belief that the spirits of dead kin remain active in another realm where they may influence the living, and can be influenced by the living …. (p. 266)

There are various types of ancestor worship:

  • spirits can be present but not necessarily active in human affairs;
  • they may or may not be influenced by humans through prayer and offerings.

5. High gods

… single, all-powerful creator deities who may be active in human affairs and supportive of human morality. (p. 266)

As with ancestor worship there are variants:

  • some high gods can be present but not interested in humans;
  • or they can be active in human affairs but not interested in moral conduct;
  • or they can be active in judging behaviour and punishing immorality.

6. Worship of active ancestors

7. Worship of active high gods

And here is their result. The graph illustrates how often the respective types of religious belief were found across the 33 societies:



Our results reflect [the] belief that animism was the earliest and most basic trait of religion because it enables humans to think in terms of supernatural beings or spirits. Animism is not a religion or philosophy, but a feature of human mentality, a byproduct of cognitive processes that enable social intelligence, among other capabilities. It is a widespread way of thinking among hunter-gatherers ….  This innate cognitive trait allows us to attribute a vital force to animate and inanimate elements in the environment …. Once that vital force is assumed, attribution of other human characteristics will follow…. Animistic thinking would have been present in early hominins, certainly earlier than language (Coward 2015; Dunbar 2003).

It can be inferred from the analyses, or indeed from the universality of animism, that the presence of animistic belief predates the emergence of belief in an afterlife.

(Peoples, Duda, Marlowe, 274 — my bolding in all quotations)

Our basic psychology that makes it so easy for us to believe in spirit forces has been discussed in earlier posts:

In brief,

The fundamental building blocks of religious thinking and behaviour are found in all human societies around the world – from small-scale indigenous communities to industrial conurbations and inner cities. For example, people everywhere imagine that bodies and minds can be separated, that features of the natural world have hidden essences and purposes and that we ought to please and placate the spirits of the dead. People everywhere are inclined to spread stories about miraculous events and special beings that contradict our intuitive expectations about the way the world works, giving rise to a plethora of beliefs in supernatural beings and forces of various kinds. People everywhere – even babies who have not yet learned to speak – experience feelings of awe and reverence when they encounter those who can channel otherworldly powers. (Whitehouse, 42f)

Animism would appear to be the fundamental human universal.

Our results indicate that the oldest trait of religion, shared by the most recent common ancestor of present-day hunter-gatherers, was animism. This supports longstanding beliefs about the antiquity and fundamental role of this component of human mentality, which enables people to attribute intent and lifelike qualities to inanimate objects and would have prompted belief in beings or forces in an unseen realm of spirits. (Peoples, Duda, Marlowe, 277)

The remaining traits of religion are contingent: why some hunter-gatherers embraced them is a follow-up question. But animism is fundamental: animism is the precondition for all the other traits.


Many but not all groups acquire a belief in an afterlife or embrace shamanism.

Once animistic thought is prevalent in a society, interest in the whereabouts of spirits of the dead could reasonably lead to the concept of an unseen realm where the individual personality of the deceased lives on. The afterlife might be a rewarding continuation of life on earth, or a realm of eternal punishment for those who break social norms. Belief in an afterlife may have generated a sense of “being watched” by the spirits of the dead, prompting archaic forms of social norms … actualized in the role of the shaman. (Peoples, Duda, Marlowe, 274)

On the “naturalness” of believing in an afterlife:

While most wild religious beliefs were not selected for in the human evolutionary journey, they came about as a side effect of other, more adaptive psychological characteristics: for example, our intuitive psychology (including the way we anticipate how others will behave), our intuitive biology (such as the way we create taxonomies of the natural world), our tool-making brains (which make us think everything has a purpose) and our tendency to overdetect agency in the environment (particularly our early warning systems for detecting predators). These intuitions helped our ancestors to survive and pass on their genes, but they also make us naturally susceptible to believing in an afterlife; they make us treat certain objects or places as sacred; they lead us to think that the natural world was intelligently designed; and they convince us that invisible and dangerous spirits lurk in caves and forests.

. . . . 

This ability to reason about the mental states of others [“theory of mind”] has profound consequences for our ideas about spirits and the afterlife. A good example of this is … an attempt to explain beliefs in the afterlife as a side effect of the way we naturally reason about minds. … The idea of a ghost or spirit of the dead results from the impossibility of imagining the elimination of certain mental states….

In other words, we imagine the spirits of the dead to remember things that happened during their lives, to have feelings, and to form judgements even if we know that their bodies (including their eyes and their ears) no longer function and may be incinerated or rotting in the ground. This theory is consistent with the very early development of such assumptions in children.

(Whitehouse, 53f)

Belief in an afterlife is a precondition for shamanism and ancestor worship.


Shamanism significantly correlates with belief in an afterlife, which emerged first….

Although shamanism has been described as the universal religion of Paleolithic hunter-gatherers … it is not a religion per se, but a complex of beliefs and behaviors that focus on communication with the ancestral spirits, as well as the general world of spirits in the realm of the afterlife. Shamans are healers, ritual leaders, and influential members of society …. Communication with omniscient and perhaps judgmental spirits of known deceased, including ancestors, would have been a useful tool in the work of the shaman….

As humans migrated out of Africa more than 60 kya … the shaman’s curing skills and group rituals would have enhanced survival through physical and emotional healing, enforcement of group norms, and resource management. (Peoples, Duda, Marlowe, 274f)

Ancestor worship

Ancestor worship is not commonly found among hunter-gatherer societies. In future posts I will address research that points to a very strong association between ancestor worship and early agricultural societies.

Worship of dead kin is neither widespread among hunter-gatherers nor the oldest trait of religion. Fewer than half of the societies in our sample believe that dead kin can influence the living…. Greater likelihood of the presence of active ancestor worship [i.e. worship of ancestors actively interested in human affairs] has been linked to societies with unilineal descent where important decisions are made by the kin group…. Ancestor worship is an important source of social control that strengthens cohesion among kin and maintains lineal control of power and property … particularly in the more complex hunter-gatherer societies. In contrast, immediate-return hunter-gatherer societies … seldom recognize dead ancestors who may intervene in their lives. (Peoples, Duda, Marlowe, 275)

The latecomer in the evolution of human societies is “God” . . . .

High gods

Belief in a high god is an outlier in the study. Where it is found, the researchers suspected the belief was borrowed from elsewhere rather than having evolved organically within the group. (See the original article for a detailed explanation of the ways and extent to which the above religious traits are associated with one another, and the evidence that the belief in high gods is not associated with any other early attribute.)

Belief in high gods appears to be a rather “stand-alone” phenomenon in the evolution of hunter-gatherer religion. Prior studies have shown that among the four modes of subsistence (hunter-gatherers, pastoralists, horticulturalists, and agriculturalists) hunter-gatherers are least likely to adopt morally punishing active high gods, if any high gods at all (Botero et al. 2014; Norenzayan 2013; Peoples and Marlowe 2012; Swanson 1960)…. Early egalitarian hunter-gatherers would rarely have acknowledged an active high god (Norenzayan 2013; Peoples and Marlowe 2012) and would be the least likely to accept or benefit from the supernatural meddling and social constraints of deities who would be seen as “high rulers” (Peoples and Marlowe 2012). The leaders of complex hunter-gatherer societies whose subsistence relies on collective effort should be more likely to benefit from the coercive power of a punishing high god. Our analysis does not support the prevalence of either type of high god among ancestral hunter-gatherers, and the evolution of high gods does not correlate with any of the other traits of hunter-gatherer religion, including ancestor worship. (Peoples, Duda, Marlowe, 276)

God, we will see in more detail in coming posts, appears in certain kinds of socially complex societies:

If a society acquires belief in an omniscient and potentially morally punishing creator deity, it does so regardless of other aspects of its religion but more as a reflection of its social and political structure. (Peoples, Duda, Marlowe, 278)

Guthrie, Stewart. Faces in the Clouds: A New Theory of Religion. New York: Oxford University Press, 1993.

Peoples, Hervey C., Pavel Duda, and Frank W. Marlowe. “Hunter-Gatherers and the Origins of Religion.” Human Nature: An Interdisciplinary Biosocial Perspective 27, no. 3 (September 2016): 261–82. https://doi.org/10.1007/s12110-016-9260-0.

Whitehouse, Harvey. Inheritance (pp. 42-43). Cornerstone. Kindle Edition.


Universals of Morality (without God)


It’s all about cooperation, being the uber-social mammals that we are:

Studies led by my colleague Oliver Scott Curry have shown that much of human morality is rooted in a single preoccupation: cooperation. More specifically, seven principles of cooperation are judged to be morally good everywhere and form the bedrock of a universal moral compass. Those seven principles are:

  1. help your kin,
  2. be loyal to your group,
  3. reciprocate favours,
  4. be courageous,
  5. defer to superiors,
  6. share things fairly,
  7. and respect other people’s property.

This new idea was quite a big deal because up until then it seemed quite reasonable to assert – as cultural relativists have always done – that there are no moral universals, and each society has therefore had to come up with its own unique moral compass. As I will explain, this is not the case. Moreover, the same seven principles of cooperation on which these moral ideas are based are found in a wide range of social species and are not unique to human beings.

These moral intuitions evolved because of their benefits for survival and reproduction. Genetic mutations favouring cooperative behaviours in the ancestors of social species, such as humans, conferred a reproductive advantage on the organisms adopting them, with the result that more copies of those genes survived and spread in ensuing generations.

Take the principle that we should care for (and avoid harm to) members of our family. This moral imperative likely evolved via the mechanism of ‘kin selection’, which ensures that we behave in ways that increase the chances of our genes being passed on by endeavouring to help our close genetic relatives to stay alive and produce offspring. Loyalty to group, on the other hand, evolves in social species that do better when acting in a coordinated way rather than independently. Reciprocity (the idea that I’ll scratch your back if you scratch mine) leads to benefits that selfish action alone cannot accomplish. And deference to superiors is another way of staying alive, in this case by allocating positions of dominance or submission in a coordinated fashion rather than both parties fighting to the death.

The theory of ‘morality as cooperation’ proposes that these seven principles of cooperation together comprise the essence of moral thinking everywhere. Ultimately, every human action that prompts a moral judgement can be directly traced to a transgression against one or more of these cooperative principles.

Whitehouse, Harvey. Inheritance (pp. 66-67). Cornerstone. Kindle Edition. (my formatting)

Universal Morality

That’s the theory. What follows is a description of an “unprecedented study” to test the hypothesis that these seven principles are indeed universal. Harvey Whitehouse and his colleagues took sixty societies that had been extensively studied by anthropologists:

To qualify for inclusion, each society had to have been the subject of at least 1,200 pages of descriptive data pertaining to its cultural system. It must also have been studied by at least one professionally trained anthropologist based on at least one year of immersive fieldwork utilizing a working knowledge of the language used locally. The sample of societies was selected to maximize diversity and minimize the likelihood that cultural groups had adopted their moral beliefs from one another. They were drawn from six major world regions: Sub-Saharan Africa, Circum-Mediterranean, East Eurasia, Insular Pacific, North America, and South America.

Whitehouse, Harvey. Inheritance (p. 67). Cornerstone. Kindle Edition.

In 3,460 paragraphs from 400 documents they identified every instance in which one of the seven cooperative behaviours was the subject of a society’s moral judgement.

This produced 962 observed moral judgements of the seven types of cooperative behaviour. In 961 of those instances (99.9 per cent of all cases), the cooperative behaviour was judged morally good. The only exception was on a remote island in Micronesia where stealing openly (rather than covertly) from others was morally endorsed. In this unusual case, however, it seemed to be because this type of stealing involved the (courageous) assertion of social dominance. So, even though this one instance seemed to contradict the rule that you should respect other people’s property, it did so by prioritizing the alternative cooperative principle of bravery.

The main take-home here is that the seven cooperative principles appear to be judged morally good everywhere.

Whitehouse, Harvey. Inheritance (p. 68). Cornerstone. Kindle Edition.



Hardwired to Venerate the Supernatural


Natural intuitions feed into our social systems in strange and unexpected ways. To take just one example, our intuitions about supernatural beings are also associated with intuitions about social dominance in ways that are consistent across cultures. My colleagues and I have shown in lab experiments that when babies observe an agent capable of floating around like a ghost or a flying witch, they expect the levitator to win out in a confrontation with a rival who lacks such powers.7 To put it more pithily, we naturally look up to supernatural beings. This could help to explain not only why stories about superheroes – from Santa to Superman – are so popular with children but also why magical beings and their earthly embodiments are so often venerated in human societies.

7 Meng, Xianwei, Yo Nakawake, Kazuhide Hashiya, Emily Burdett, Jonathan Jong, and Harvey Whitehouse. “Preverbal Infants Expect Agents Exhibiting Counterintuitive Capacities to Gain Access to Contested Resources.” Scientific Reports 11, no. 1 (May 25, 2021): 10884. https://doi.org/10.1038/s41598-021-89821-0.

Whitehouse, Harvey. Inheritance (p. 7). Cornerstone. Kindle Edition.


When the Genocide Stops


Before we begin, let’s be clear about what the word genocide means. The following is from the United Nations Office on Genocide Prevention and the Responsibility to Protect:



I have superimposed on to the UN page’s image the cover of the book on which this post is based.

The word “genocide” was first coined by Polish lawyer Raphael Lemkin in 1944 in his book Axis Rule in Occupied Europe. It consists of the Greek prefix genos, meaning race or tribe, and the Latin suffix cide, meaning killing. Lemkin developed the term partly in response to the Nazi policies of systematic murder of Jewish people during the Holocaust, but also in response to previous instances in history of targeted actions aimed at the destruction of particular groups of people. Later on, Raphael Lemkin led the campaign to have genocide recognised and codified as an international crime.

Genocide was first recognised as a crime under international law in 1946 by the United Nations General Assembly . . . .


Convention on the Prevention and Punishment of the Crime of Genocide

Article II

In the present Convention, genocide means any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such:

  1. Killing members of the group;
  2. Causing serious bodily or mental harm to members of the group;
  3. Deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part;
  4. Imposing measures intended to prevent births within the group;
  5. Forcibly transferring children of the group to another group.

Elements of the crime

The Genocide Convention establishes in Article I that the crime of genocide may take place in the context of an armed conflict, international or non-international, but also in the context of a peaceful situation. The latter is less common but still possible. The same article establishes the obligation of the contracting parties to prevent and to punish the crime of genocide.

The popular understanding of what constitutes genocide tends to be broader than the content of the norm under international law. Article II of the Genocide Convention contains a narrow definition of the crime of genocide, which includes two main elements:

  1. A mental element: the “intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such”; and
  2. A physical element, which includes the following five acts, enumerated exhaustively:
    • Killing members of the group
    • Causing serious bodily or mental harm to members of the group
    • Deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part
    • Imposing measures intended to prevent births within the group
    • Forcibly transferring children of the group to another group

So the actual killing may have its limits. Genocide does not necessarily mean that every single person of a group has to be killed. There can come a time when the targeted group is so reduced that the group doing the killing starts to feel actual pity for the remaining few.

Case in point: Australia, 1860s.

Uhr and his men set up camp at Pelican Lake in Valley of Lagoons in about September 1865. In theory, he was responsible for a vast territory stretching from the Pacific to the Gulf of Carpentaria and miles north towards Cape York. But his essential task was to police the country seized by Herbert, Dalrymple and the Scotts. That meant tangling with the Gugu Badhun, whose country covered about 3500 square miles running west from the Seaview Range. In Valley of Lagoons, they hunted kangaroo, trapped fish and harvested water-lily seeds in streams that never ran dry. It is thought that over a thousand Gugu Badhun were on country when the invaders came. The Scotts derided them:

I am sure they have not as keen senses as humans higher in the scale of humanity. “Like beasts, with lower pleasures; like beasts, with lower pains”. They have not the slightest sense of gratitude, in any kind of way; far less than a dog, or horse. Of course they know where they are well-treated, and well-fed. I believe fish, even, learn that.

The Scotts set about getting rid of them. In Gugu Badhun: People of the Valley of Lagoons, it is written: “After the establishment of the pastoral stations, any Gugu Badhun person who ventured into those areas risked being shot and killed.” But their resistance was strong. The country favoured them. “Their lands… included a good deal of rough, basalt country unsuitable for grazing sheep or cattle, but still holding water and food resources.” In that broken landscape, horses could not give chase to the Gugu Badhun, who could hide in caves, biding their time until they emerged to attack again.

Arthur Scott, back in England to become a fellow of All Souls, was deeply worried about the run. At that point they had ninety white men on the payroll. He thought perhaps it was time to stop driving the blacks away and start putting them to work. He remarked that the Gugu Badhun had already been given a “dressing” and believed that was enough to keep them in line.

I am rather sorry about those blacks; I think the time has now come to try & be friendly with them, we are strong enough now to defend ourselves & they would do a lot of work in washing… Certainly the best way will be to bring in some gins and boys & we shall soon make the others understand what we want. I am convinced that with our scrub & lava it is far more dangerous to keep them out than to let them in.

And so it was. Not quite as simply as might be implied by the above extract. Those habituated to killing needed more time to be persuaded. But the voices that had long been protesting the killing of the blacks throughout the early and middle decades of the nineteenth century did win out — but only after there were so few left that further killing seemed pointless; better to use the remaining few to do the menial work on the outback properties. Much cheaper than white labour, too, of course. When orphaned black children, helpless elderly, and struggling mothers dominated the remaining few, it was easy to have feelings of pity for them. So humanity “triumphed” and further killing was steadily, albeit slowly, forbidden in reality, not only in empty words of protest.

The quotation, and the specific notion that the genocide of Australian aborigines only stopped, at least in the state of Queensland, when the numbers of blacks were so few that enough whites began to feel sorry for them, are from:

  • Marr, David. Killing for Country: A Family Story. Black Inc, 2023. pp 287f


It was coincidence that I happened to be reading Killing for Country at the time when Israel began its bombing of Gaza. The idea for this post, however, was initiated through reflection on current events in the State of Palestine.


The Evolution of Free Will

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

by Neil Godfrey

After having posted sympathetically about the possibility of our lacking free will (so much so that I am not even sure I know what “thinking” entails at the most fundamental level) — I’m pleased to imagine that I am freely choosing to post a link to an argument for us having free will:

by Kevin Mitchell, a scholar of genetics and neuroscience at Trinity College Dublin.

The processes of cognition are thus mediated by the activities of neurons in the brain, but are not reducible to those activities or driven by them in a mechanical way. What matters in settling how things go is what the patterns mean – the low-level details are often arbitrary and incidental. Organisms with these capacities are thereby doing things for reasons – reasons of the whole organism, not their parts.


A common claim of free will skeptics is that we, ourselves, had no hand in determining what that configuration is. It is simply a product of our evolved human nature, our individual genetic make-up and neurodevelopmental history, and the accumulated effects of all our experiences. Note, however, that this views our experiences as events that have happened to us. It thus assumes the point it is trying to make – that we have no agency because we never have had any.

If, instead, we take a more active view of the way we interact with the world, we can see that many of our experiences were either directly chosen by us or indirectly result from the actions we ourselves have taken. Not only do we make choices about what to do at any moment, we manage our behaviour in sustained ways through time. We adopt long-term plans and commitments – goals that require sustained effort to attain and that thereby constrain behavior in the moment. We develop habits and heuristics based on past experience – efficiently offloading to subconscious processes decisions we’ve made dozens or hundreds of times before. And we devise policies and meta-policies – overarching principles that can guide behaviour in new situations. We thus absolutely do play an active role in the accumulation of the attitudes, dispositions, habits, projects, and policies that collectively comprise our character.

More at his blog — http://www.wiringthebrain.com/

He also has a book titled Free Agents, subtitled How Evolution Gave Us Free Will.

The experience of one friend of mine many years ago still haunts me. He had enormous emotional, mental and behavioural problems, having come from a brutal family upbringing. There was one period when he seemed to have completely changed, to have become “whole” even, and positive. It turned out that he had had a good sleep and a healthy meal for once. I was religious at the time and could not help wondering how God would judge someone whose behaviour depended so critically on a healthy salad sandwich and 8 hours sleep.



Knowledge, Belief — and How Humans Work

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

by Neil Godfrey

The third post on Black Box Site addresses the difference between Knowledge and Belief. That’s a question I have sprinkled across various posts here so again I read Blackmun’s thoughts with particular interest.

Some key takeaways (at least for me):

. . . those who held their beliefs with greater persistence also tended to have more activity in the brain’s amygdala, which is involved in threat perception and anxiety, as well as in the insular cortex, which deals with emotions. In other words, a threat to belief tended to be perceived as an emotionally charged personal threat. As one of the researchers involved, Jonas Kaplan, put it in a press release for the experiment: “Political beliefs are like religious beliefs in the respect that both are part of who you are and important for the social circle to which you belong. To consider an alternative view, you would have to consider an alternative version of yourself.”

. . . .

Further, unlike knowledge, a belief can be held with what seems to be absolute certainty, though that certainty is in itself a belief.

. . . .

However, knowledge is also adaptive, with mistaken conclusions (which must be tentative in any case) always subject to correction by new evidence or better understanding; while a particular belief, whether true or false, is essentially static, a thing that is assumed to be true, but never truly examined (though it might eventually be replaced by another belief – also unexamined). In other words, knowledge can steadily advance and improve, while belief, due to its very nature, cannot.

I think back to the time I was moving out of mainstream religion into the thought-world of what at the time I was beginning to “believe” was a “true church” that held “the truth”. There was one moment when I innocently asked the question: “To what extent does God expect me, a student with a heavy work load, to keep Sunday holy?” I opened the package they sent me by way of an answer. I read the title of the main booklet: Which Day is the Christian Sabbath? Why was the cover title alone enough to make my heart sink? I “knew” that if I opened the booklet and read it that I would find “incontrovertible arguments” that the seventh day, Saturday (not Sunday), was the “true sabbath”. I “believed” before I even turned to the first page that what was contained in it “was true”. I felt ill because I did not want to be joining some sect or cult, and that’s what keeping Saturday would look like to others. In retrospect I can see something that I surely rationalized at the time — that I believed before I even read the arguments. The notion that I ought to step back and genuinely, objectively be open to opposing arguments or an analysis of the booklet’s rhetoric that demonstrated its psychological manipulations was simply non-existent. By the time I did read opposing arguments I was already more than capable of “shooting them down in flames”.

Even less did I think to seriously reflect on the emotional and mental processes that had led me to that point where I “believed” this particular church was “true”.

There were times when I did struggle to find the evidence I was looking for. So when I read from the same source “A True History of the True Church” I was a little disappointed that it lacked the detailed evidence I would have liked. But I was a history student and had a vast library at hand so I did my own research to complement what I had read. That led to more frustration, sadly. It looked to me like the Waldensians were not really the same sort of sabbath-keepers as we were, and I found no evidence for the Cathars keeping the sabbath but I did find details that they observed teachings we opposed. But my fundamental beliefs were not thrown overboard. I was not at a church-run college so I did not have all the resources that they stocked in their library.

I could not deny that some details of the church’s teachings were wrong, and some forms of practice and behaviour were not what I would have expected in a “biblical church”. But I convinced myself that those details were not fundamental, or would change in time, and I maintained my respectable standing by embracing a “good attitude”. A “good attitude”, I learned not so many years ago, was the same expression used in the 1930s and 40s by Nazi youth and party members who questioned aspects of Nazi practice and doctrines: as long as they asked their questions with a “good attitude” they remained lovingly embraced by the party. A “good attitude” meant that one submits to the authorities and does not cause dissension among one’s peers. In other words, one learns to be very discreet about sharing one’s doubts and questions.

I see now how I was immersed in a pattern common to so many who enter counter-culture type groups:

and many more.

The problem with belief is that a believer just “KNOWS” that one’s beliefs are true. Belief thereby binds a mind more tightly than knowledge. Enter the arrogance of belief. Or if one is so totally confident then it follows that there is no need for arrogance: one can be humble about “knowing” the truth. The arrogance of humility.

That last listed post above, The Brainwashing Myth, concludes with this line:

I reject the idea of brainwashing for three reasons: It is pseudoscientific, ignores research-based explanations for human behavior and dehumanizes people by denying their free will.

Given that this post comes on the heels of a post expressing openness to the possibility that free will is an illusion, that last reason should be modified in some way. The first two reasons draw upon “knowledge” — which is necessarily tentative pending new evidence. The third reason rests on a “belief”, even if a necessary one to maintain our dignity and humanity. And if it’s a false belief, then we have no choice but to call on our reserves of compassion and understanding to uphold our humanity.



The Cradle Rocks Above an Abyss

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

by Neil Godfrey

There is much in Costica Bradatan’s In Praise of Failure that I would like over time to address but let’s begin with the portion of Prologue that I quoted yesterday:

. . . . human existence is something that happens, briefly, between two instantiations of nothingness. Nothing first—dense, impenetrable nothingness. Then a flickering. Then nothing again, endlessly. “A brief crack of light between two eternities of darkness,” as Vladimir Nabokov would have it. . . . 

That remark followed from his description of the feeling that we imagine would follow from surviving what at the time felt like plummeting to a certain death. I was reminded of that feeling from a real-life experience when I listened to the news last night about the death of the commander of 108 men who had fought off an attack by a 2,500-strong enemy at the battle of Long Tan in 1966. He was quoted as having said that it was only a short time after the three-hour battle that the full realization that he was “still alive” fell upon him.

I turned to the source of Bradatan’s quote. Here it is in (disturbingly colourful) context:

The cradle rocks above an abyss, and common sense tells us that our existence is but a brief crack of light between two eternities of darkness. Although the two are identical twins, man, as a rule, views the prenatal abyss with more calm than the one he is heading for (at some forty-five hundred heartbeats an hour). I know, however, of a young chronophobiac who experienced something like panic when looking for the first time at homemade movies that had been taken a few weeks before his birth. He saw a world that was practically unchanged— the same house, the same people—and then realized that he did not exist there at all and that nobody mourned his absence. He caught a glimpse of his mother waving from an upstairs window, and that unfamiliar gesture disturbed him, as if it were some mysterious farewell. But what particularly frightened him was the sight of a brand-new baby carriage standing there on the porch, with the smug, encroaching air of a coffin; even that was empty, as if, in the reverse course of events, his very bones had disintegrated.

As I quoted yesterday, Bradatan sees our “myths, religion, spirituality, philosophy, science, works of art and literature” as products of our efforts to make an “unbearable fact a little more bearable.” Continuing that thought, he writes,

One way to get around this is to deny the predicament altogether. It’s the optimistic, closed-eye way. Our condition, this line goes, is not that precarious after all. In some mythical narratives, we live elsewhere before we are born here, and we will reincarnate again after we die. Some religions go one step further and promise us life eternal. It’s good business, apparently, as takers have never been in short supply. More recently, something called transhumanism has entered this crowded market. The priests of the new cult swear that, with the right gadgets and technical adjustments (and the right bank accounts), human life will be prolonged indefinitely. Other immortality projects are likely to do just as well, for our mortality problem is unlikely to be resolved.

But this approach is not for everyone…

No matter how many of us buy into religion’s promise of life eternal, however, there will always be some who remain unpersuaded. As for the transhumanists, they may know the future, but they seem largely ignorant of the past: “human enhancement” products have, under different labels, been on the market at least since the passing of Enkidu of Gilgamesh fame. Compared with what the medieval alchemists had to offer, the transhumanists’ wares seem rather bland. Yet thousands of years of life prolongation efforts haven’t put death out of business. We may live longer lives today, but we still die eventually.

Simone Weil (Wikipedia photo)

The Bullfighting way?

Bradatan finds himself siding with the views of Simone Weil:

Another way to deal with our next-to-nothingness is to confront it head-on, the bullfighting way: no escape routes, no safety nets, no sugarcoating. You just plow ahead, eyes wide open, always aware of what’s there: nothing. Remember the naked facts of our condition: nothing ahead and nothing behind. If you happen to obsess over your next-to-nothingness and cannot buy into the life eternal promised by religion or afford a biotechnologically prolonged life, this may be right for you. Certainly, the bullfighting way is neither easy nor gentle—particularly for the bull. For that’s what we are, after all: the bull, waiting to be done in, not the bullfighter, who does the crushing and then goes on his way.

Hardly a higher form of human knowledge . . .

“Human beings are so made,” writes Simone Weil, “that the ones who do the crushing feel nothing; it is the person crushed who feels what is happening.”* Pessimistic as this may sound, there is hardly a higher form of human knowledge than the one that allows us to understand what is happening—to see things as they are, as opposed to how we would like them to be. Besides, an uncompromising pessimism is superbly feasible. Given the first commandment of the pessimist (“Whenever in doubt, assume the worst!”), you will never be taken by surprise. Whatever happens on the way, however bad, will not put you off balance. For this reason, those who approach their next-to-nothingness with open eyes manage to live lives of composure and equanimity, and rarely complain. The worst thing that could befall them is exactly what they have expected.

* Quoted in David McLellan, Utopian Pessimist: The Life and Thought of Simone Weil (New York: Poseidon Press, 1990), 93.

Above all, the eyes-wide-open approach allows us to extricate ourselves, with some dignity, from the entanglement that is human existence. Life is a chronic, addictive sickness, and we are in bad need of a cure.

The bolding is my own. I think that is worth taking in …. “there is hardly a higher form of human knowledge than the one that allows us to understand what is happening – to see things as they are”.

Somewhere in there we can find the real gift that can come from failure, or from the humility that it brings.

Bradatan, Costica. In Praise of Failure: Four Lessons in Humility. Cambridge, Massachusetts: Harvard University Press, 2023.


In Praise of Failure

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

by Neil Godfrey

Two interviews:

With Philip Adams: A new approach to failure
With David Rutledge: The lessons of failure

The opening words of the book — “a brief crack of light between two eternities of darkness”

Picture yourself on a plane, at high altitude. One of the engines has just caught fire, the other doesn’t look very promising, and the pilot has to make an emergency landing. Finding yourself in such a situation is no doubt shattering, but also illuminating. At first, amid the wailing and gnashing of teeth, you cannot think in any detached, rational fashion. You have to admit it, you are paralyzed by fear and scared to death, just like everyone else. Eventually, the plane lands safely, and everybody gets off unharmed. Once you’ve had a chance to pull yourself together, you can think a bit more clearly about what just happened. And you start learning from it.

You learn, for instance, that human existence is something that happens, briefly, between two instantiations of nothingness. Nothing first—dense, impenetrable nothingness. Then a flickering. Then nothing again, endlessly. “A brief crack of light between two eternities of darkness,” as Vladimir Nabokov would have it. These are the brutal facts of the human condition—the rest is embellishment. No matter how we choose to reframe or retell the facts, when we consider what precedes us and what follows us, we are not much to talk about. We are next to nothing, in fact. And much of what we do in life, whether we know it or not, is an effort to address the sickness that comes from the realization of this next-to-nothingness. Myths, religion, spirituality, philosophy, science, works of art and literature—they seek to make this unbearable fact a little more bearable.

A little further on in the Prologue — “how we relate to failure defines us”

The failure-based therapy that I offer in this book may seem surprising. After so much worshipping of success, failure’s reputation is in tatters. There seems to be nothing worse in our world than to fail—illness, misfortune, even congenital stupidity are nothing by comparison. But failure deserves better. There is, in fact, much to praise about it.

Failing is essential to what we are as human beings. How we relate to failure defines us, while success is auxiliary and fleeting and does not reveal much. We can live without success, but we would live for nothing if we didn’t come to terms with our imperfection, precariousness, and mortality, which are all epiphanies of failure.

“Only humility”:

In Praise of Failure is not about failure for its own sake, then, but about the humility that failure engenders, and the healing process that it triggers. Only humility, a “selfless respect for reality,” as Iris Murdoch defines it, will allow us to grasp what is happening. When we achieve humility, we will know that we are on the way to recovery, for we will have started extricating ourselves from the entanglement of existence.

So, if you are after success sans humility, you can safely ignore this book. It will not help you—it will only lead you astray.

We come to the Epilogue:

Every morning, when we wake up, there is a moment—the briefest of moments—when our memory hasn’t come back to us. We are not yet ourselves because we don’t have a story to tell. We can be anyone at this stage, but right now we are no one. We are a blank sheet of paper waiting to be written on. As our memory gradually returns, we start recalling things: where we are, what happened before we fell asleep, what we need to do next, the tasks of the day ahead. We start becoming ourselves again as the memory of these things comes back and slowly forms a story. When everything has fallen into place, and the story is complete, we can be said to have come back to life. We now have a self. The sheet is covered with our story—we are our story.

This is the most significant moment of every day, and philosophically the most gripping: the process through which we come into existence, and our self comes back to us, every time we wake up. If, for some reason, things failed to fall into place and form a coherent narrative, we would never find ourselves. The sheet would remain blank. We would miss ourselves in the same way we would miss someone who didn’t show up for a meeting.

Human beings are fundamentally narrative-driven creatures. Our lives take the shape of the stories we tell; they move this way or that as we change the plot. These stories are what gives our existence consistency, direction, and a unique physiognomy. We are irreducible individuals not because of, say, our DNA, but because no story can be told in exactly the same way twice. Even the slightest change of rhythm and diction produces a different story. Another person.

At our most intimate, then, we are what we tell ourselves we are. The German philosopher Wilhelm Dilthey called this process the “coming together of a life”—Zusammenhang des Lebens. The stories we tell about our life are sometimes more important than life itself. They are what brings that life together and makes it what it is: our life. Without them, we would remain only some insignificant occurrence in the planet’s biosphere.


As storytelling animals, we need stories not just for coming into existence every morning, but for pretty much everything—for things big and small, important and trivial, ennobling and shameful. We need a good story to live by and to die for, to fall in love with someone and out of love with her, to help us fight for a cause or betray it.

We likewise need a story to cure ourselves of the umbilicus mundi syndrome. To achieve true humility it is not enough just to be humble. We also need to weave a story that structures our self-effacing efforts and gives them sustenance, continuity, and meaning. We have to narrate our way into humility. And that’s what renders humility one of the most difficult stories to tell. For the self that narrates is the same one that longs for self-effacement and seeks to be lowered and subdued. The narrator’s voice, so vital to storytelling, has to be silenced. But how are we going to tell a story with silence? How can we narrate ourselves and reduce ourselves to dust at the same time? Dust has never had any stories to tell. That puts humility and storytelling seriously at odds with each other.

Costica Bradatan reminds readers here of the stories he has told in the previous pages: of Simone Weil, of Mahatma Gandhi, of E. M. Cioran, of Osamu Dazai, of Seneca, of Yukio Mishima.

The final words:

At any given moment, we may find our life to be empty and our existence meaningless, but we know, at some deeper level, that we are not done yet. Our story is just not over, and it’s frustrating—profoundly, viscerally so—to quit a story before the end, whether it’s a book, a film, or your own life. Once we have reached that point, we may decide that there is nothing left to tell, but quitting the story while it is still being told is a violation not just of narrative but of nature. The longed-for meaning may be revealed at the very end, and we will no longer be there to receive the revelation. It is written, after all, that the “pearl” we are supposed to retrieve can only be found at the story’s end.

Can a story save my life, then? Yes, it can. The truth is, only a story can redeem our lives. And not just our lives, but life itself. That’s the reason why, in case you’ve wondered, there are so many stories in this book, from beginning to end. Without stories, we would be nothing.

There is enough to think about in the above to make it superfluous to add any of my own commentary at this point.


Nice Racism

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

by Neil Godfrey

I do not fully understand “racism”.

I grew up in a time when aboriginal children were sometimes being taken from their families “for their own good”. Everything “we”, the white rulers of the land, were doing in relation to aborigines was “for their good”. Today, on the contrary, one is often confronted with the story of an aboriginal person who was one of the thousands of what is now termed “the stolen generation”. Many white Australians only became aware of the impact of that practice on the indigenous people in 2002 with the release of the film Rabbit-Proof Fence.

My first experience as a target of racism was when I was touring central China. It was innocuous enough and I laughed along with it. But at the same time I could not deny that there was a little gurgle deep down in my gut that felt a little unpleasant. I asked my Chinese companion why some people seemed to be so curious and smiling among themselves as they looked across at me in a community meal hall. I was wearing shorts, and it was explained to me that someone had said I looked like a monkey because of my hairy legs and arms.

My second experience was soon after I was employed at the Singapore National Library. I don’t believe any of the local citizens and employees there would think they had a racist bone in their bodies. But on an institutional level, when statements were made at a “high level” of conceptualization — NOT at a personal one-on-one level — I was made to feel that my place as a white westerner was somehow tolerated only on sufferance. I was needed for my specialist skills and experience and the sooner my tenure was over the happier they would all be. Australians, I very quickly learned, were reflexively viewed through negative stereotypes, and my own personality and habits that defied those stereotypes made no difference to those perceptions. (I had been asked what things I found problematic with my work environment and I said that Singaporeans “work too hard” — they would almost as a rule work way past the official “knock off” time and seem to give their lives for the corporation and only go home to their families when absolutely necessary, usually quite late at night. The response indicated that I was a “typical” lazy Australian who loved to go on strike at the drop of a hat, gamble, drink and be generally work-shy. My immediate impulse was to argue the point but the environment at the time made that inappropriate. Everyone laughed at “the Australian” and “the virtue” that he saw as “a problem”.)

So as a white Westerner — and as nothing more than a tourist or temporary worker — I have experienced very mild forms of what have felt to me to be some kind of racial prejudice.

My point is that in neither of the above experiences would I have suspected any of the commenters as having the slightest awareness of any racist undertone in their remarks. Had I challenged them on their views I am convinced that they would have denied outright having any racist attitude at all. They were only joking, after all. They liked me personally. So why did I have that little unpleasant gut feeling each time? I smiled and responded as a friend and suppressed my gut gurgling so they would have no reason to notice it.

Robin DiAngelo (Wikipedia)

Today I listened to an Australian national radio podcast talk by Robin DiAngelo. I do not know if I can agree with every statement she made about “nice racism” — The ‘nice racism’ of progressive white people — but I don’t know yet if that’s because I haven’t thought through my own ideas thoroughly enough or if some of her views really are missing the mark by just a fraction of an inch or millimetre. She has her critics and these are candidly addressed in the podcast. But I am still left thinking.

But there is one comment of hers that I certainly could relate to:

“You’re going to have to educate yourself. 

If the thought leaders in this field, for example, are using the term “white supremacy”, and you think that’s a really harsh term, and a terrible term, and you don’t understand why they’re using it, then rather than ask us not to use it, see it as, “Well, I need to get up to speed because I must be missing something. They’re using this with comfort, and they’re talking about something that’s different from what I think this is about”, and so, we’re back to the humility that I necessarily am missing something, because this is arguably the most complex, nuanced, sociopolitical dynamic of the last several hundred years.”

Around 23 mins of https://www.abc.net.au/radionational/programs/bigideas/the-nice-racism-of-progressive-white-people/14087776

In her most recent book she writes:

Our racism avoids the blatant and obvious, such as saying the N-word or telling people to go back to where they came from. We employ more subtle methods: racial insensitivity, ignorance, and arrogance. These have a racist impact and contribute to an overall racist experience for BIPOC people, an experience that may be all the more maddening precisely because it is easy to deny and hard to prove. I am constantly asked for examples, so here are a few: . . . . 

• Not understanding why something on this list is problematic, and rather than seeking to educate yourself further, dismiss it as invalid.

Excerpt From: Dr. Robin DiAngelo. “Nice Racism.” Apple Books.

In case you are wondering what the other examples are, I copy and paste them here from DiAngelo’s book, Nice Racism:

• Confusing one person for another of the same racial group
• Not taking the effort to learn someone’s name; always mispronouncing it, calling them something that’s easier to pronounce; making a show of saying it, or avoiding the person altogether
• Repeating/rewording/explaining what a BIPOC person just said
• Touching, commenting on, marveling at, and asking questions about a Black person’s hair
• Expecting BIPOC people to be interested in and skilled at doing any work related to race
• Using one BIPOC person who didn’t mind what you did to invalidate another who did
• Calling a Black person articulate; expressing surprise at their intelligence, credentials, or class status
• Speaking over/interrupting a BIPOC person
• Lecturing BIPOC people on the answer to racism (“People just need to . . .”)
• Bringing up an unrelated racial topic while talking to a BIPOC person (and only when talking to a BIPOC person)
• Blackface/cultural appropriation in costumes or roles
• Denying/being defensive/explaining away/seeking absolution when confronted with having enacted racism
• Only naming the race of people who are not white when telling a story
• Slipping into a southern accent or other caricature when talking to or about Black people
• Asking for more evidence or offering an alternate explanation when a BIPOC person shares their lived experience of racism
• Making a point of letting people know that you are married to a BIPOC person or have BIPOC people in your family
• Not being aware that the evidence you use to establish that you are “not racist” is not convincing
• Equating an oppression that you experience with racism
• Changing the channel to another form of oppression whenever race comes up
• Insisting that your equity team address every other possible form of oppression, resulting in racism not getting addressed in depth or at all (“It’s really about class”)
• Including “intellectual diversity,” “learning styles,” “neurodiversity,” and personality traits such as introversion/extroversion in your diversity work so that everyone in your majority-white organization feels included
• Gossiping about the racism of other white people to BIPOC people to distinguish yourself as the good white person
• Using an experience as the only white person in a group or community to say that you’ve experienced racism (which you call reverse racism)
• Telling a BIPOC person that you witnessed the racism perpetrated toward them but doing nothing further
• Equating your experience as a white immigrant or the child of white immigrants to the experiences of African Americans (“The Irish were discriminated against just as bad”)
• Using your experience with service learning or missionary work in BIPOC communities to present yourself as an expert on how to address the issues experienced by those communities
• Loving and recommending films about racism that feature white saviors
• Deciding for yourself how to support a BIPOC person without asking them what they want or need
• Claiming to have a friendship with a Black colleague who has never been to your home
• Being involved in your workplace equity team without continually working on your own racism
• Attending your first talk or workshop on racism and complaining that the speaker did not provide you with the “answer”
• Asking how to start a diversity consulting business because you attended a talk and found it interesting
• Focusing your diversity work on “increasing your numbers” with no structural changes and equating increased numbers with racial justice
• Blocking racial justice efforts by continually raising a concern that your organization is “not ready” and needs to “go slow” to protect white people’s delicate racial sensibilities
• Not understanding why something on this list is problematic, and rather than seeking to educate yourself further, dismiss it as invalid

Excerpt From: Dr. Robin DiAngelo. “Nice Racism.” Apple Books.

It’s that last one that got to me and made me pause and wonder. We all know others who fall into that category and have probably been there ourselves at some time. So I am forced to rethink the other points I find myself disagreeing with. That doesn’t mean Robin DiAngelo is right all the time, but the questions she raises are of concern to some people, so maybe I need to think them through more fully.

One comparison that came up in that radio interview was the response of a husband who says he cannot be racist because he married a black woman, and the converse for the wife. DiAngelo pointed out that a man marrying a woman does not prove that he is free from sexist or patriarchal (and anti-feminist) biases.

It’s a topic I keep returning to and wondering if I have really understood all its complexity. Individually we may not be racist but we are part of a community and perhaps that’s where we have to wonder about our unconscious biases and how they influence systemic words and actions.

Other reading that surfaced from listening to the above:

  • Anderson, Carol. White Rage: The Unspoken Truth of Our Racial Divide. Bloomsbury, 2016.
  • DiAngelo, Robin. White Fragility: Why It’s So Hard for White People to Talk About Racism. Penguin, 2019.
  • Eddo-Lodge, Reni. Why I’m No Longer Talking to White People About Race. Bloomsbury, 2017.
  • Hamad, Ruby. White Tears/Brown Scars. Melbourne University Press, 2019.


Sidetracked through misadventure: Time to reflect

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

by Neil Godfrey

Over the past week I have been met with a series of costly misadventures (I blame it all on Australian dentists charging outrageous prices) that have led me to Indonesia, and last night was the first night in a week in which I have been able to truly relax. The restaurant where I ate displayed this thought-provoking picture:


The Necessity of Scepticism Contra Cynicism

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

by Neil Godfrey

Mention scholarly research into a matter that someone feels deeply about while holding contrarian or conspiracy theory views, and one is likely to be told that the academies are filled with institutional bias that compels them to ignore or hide or tell untruths about “the real facts” of a matter. Sometimes the criticism can go deeper than certain perceptions of institutional bias and target the scientific method itself:

The scientific method is what is wrong with the world

I mentioned once to Harold, the friend mentioned earlier who falls constantly for quack medical remedies and diagnostic methods, that the scientific method is based on a demand for evidence and that he should think about being a little more scientific (i.e., skeptical) about unverified claims. Harold replied that “the scientific method is what is wrong with the world and with people like you.” I suppose what he was saying is that by being skeptical I am cutting myself off from various mystical and creative aspects of the human experience. That is certainly a possibility, but I think that Harold may have been overreacting to the word “skepticism” (and to my implicit criticism of him) and assuming that I meant something other than what I actually meant.

His cynicism, ironically, contributes to his gullibility

I believe that Harold is confusing skepticism with cynicism and thinks that I am saying that one should never believe in anyone or anything. In fact, Harold is probably much more cynical than I am (he sees the evil hand of conspirators everywhere). His cynicism, ironically, contributes to his gullibility, in that he champions alternative medicine in part as a way of saying “screw you” to the more mainstream medical establishment. To my mind, a knee-jerk reaction of “no” (the cynical stance) is no more justified than a knee-jerk reaction of “yes” (the gullible stance) and the essence of skepticism is nothing more than saying “I will maintain an open mind but I want to get a better understanding of the truth of the matter before I commit myself.” However, the evidence is often ambiguous and incomplete, and the skeptic is someone who holds out for a higher and more rigorous standard of proof (and of one’s own understanding) than the fact that you like someone or want what he says to be true.

Our author is zeroing in specifically on quack remedies and “spiritual forces” and “mysterious agents” in the world but the same reasoning applies to anyone seeking “confirmation” that governments of the western world have come under the powers of conspiratorial elites who are using the fear of Covid-19 as a pretext to introduce controls that will eventually lead to new totalitarian tyrannies. The result of that kind of thinking is either a complete disengagement from the political processes or attempts to sabotage them: either way, positive action to defend and deepen democratic processes is the loser.

Perhaps what Harold was really saying is “I pity people who do not believe in magic.” Skeptics are people who believe that the laws of nature and probability underlie all phenomena, and are dubious about claims that there are realms of functioning that are immune from such laws. Harold definitely believes in magic (the real, not the conjuring, kind) and his evidence for this belief is likely to take such a form as “I was thinking about an old friend and the next thing I knew I got a phone call from her.” Harold is not likely to be persuaded by the argument that “you thought of 500 other things or people during the same day without such a congruence, but you are focusing on a single congruence that confirms your belief in magic and then holding it up as proof that it couldn’t have been a coincidence.”

The passage addresses belief in magic, but the same “intellectual laziness” applies to beliefs in conspiracy theories and any other belief that avoids the drier texts based on the scientific methods of research. Conspiracy-theory believers may consider themselves more astute, sceptical and informed than others, but in fact the opposite is true.

one can absolve oneself of any obligation to master complex reality by passing it off to forces that are unknowable

Belief in magic is an intellectually lazy stance to take, as one can absolve oneself of any obligation to master complex reality by passing it off to forces that are unknowable. Belief in magic contributes to gullibility as one can be too easily influenced by misleading external realities or superficial explanations. Just as Harold pities me for not being more intuitive, I pity him for not being more rational. People who eschew skepticism are people at the mercy of charlatans, bogus experts, and false claims. Although I generally admire and like people who are trusting more than people who are distrusting, I think trust needs to be tempered with an understanding that it can be misplaced. Blind trust can be a formula for disaster, as countless stories in this book illustrate. One needs to be alert to warning signs of untruth, whether or not it emanates from conscious deception. To refuse to heed those signs, and to refuse to ask for proof when proof is needed, may be to put oneself in harm’s way. 

On reading the above passage I am reminded of little questions that arose along the way of my journey into a religious cult many years ago. Little errors of fact seemed to be only minor and unintentional glitches in the literature I was reading. I should have stopped, stood my ground, and demanded to find out why and how such “little errors” were there in the first place. It might have led me to discover that behind all that glossy, free literature was an institution geared towards psychological manipulation.

The following depicts the end result of what I have come to call “ideological” or “principled” thinking. Now there are good ideologies and there are good principles to live by. The problem arises when people espouse “ideological” or “principled” ideas that at some level treat those principles as more important than the full lives of real people (either by thinking of most people as stupid or as faceless numbers to be manipulated in a game of power and control). Then we have the seedbed of vicious totalitarianism.


An example of how cynicism is compatible with gullibility can be found in the life of Joseph Stalin, the ruthless dictator who ruled the Soviet Union through periodic murder sprees for several decades. In a review of Simon Sebag Montefiore’s (2005) Stalin: The Court of the Red Tsar, Ian Buruma (2004) described Stalin, and the equally murderous ruler of China, Mao Zedong, as extraordinarily cynical, that is motivated by power rather than Communist ideology, and generally distrusting of everyone, including family members. There was one area, however, in which both Stalin and Mao were overly trusting, and that had to do with agriculture policy. Driven by an emotional commitment to the notion that Marxism–Leninism had made possible a “creative Darwinism,” both men “appear to have been completely taken in by the crackpot science of Trofim Lysenko” (p. 5). Lysenko’s experiments in high-yield wheat proved to be a complete disaster in both countries, “but these failures were blamed on ‘saboteurs’ and ‘bourgeois scientists’, many of whom were killed, even as people were dying of hunger in far greater numbers than ever. Such things might not have happened if Stalin and Mao had been complete skeptics. But they were gullible as well as cynical, and that is why millions had to die” (Buruma, 2004, p. 6).

Greenspan, Stephen. Annals of Gullibility: Why We Get Duped and How to Avoid It. Westport, Conn.: Praeger Publishers, 2009. http://archive.org/details/annalsofgullibil0000gree. pp. 181–182.


The Intelligence Trap: Why Smart People Make Dumb Mistakes

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

by Neil Godfrey

From ABC website

We need to discard the widely held belief that a high IQ and rationality go hand-in-hand, argues David Robson, author of The Intelligence Trap: Why Smart People Make Dumb Mistakes.

He says while intelligence quotient (IQ) tests have become the benchmark for smarts, they’re highly selective and only measure one’s ability for certain kinds of abstract reasoning.

Worst of all, they say nothing about a person’s common sense.

“When it comes to things like analysing evidence and thinking about it in a fair, even-handed way, or looking at the news and being able to work out what’s true and what’s false, actually IQ is really bad at predicting whether people can do that kind of thing,” Mr Robson tells ABC RN’s Future Tense.

And individuals with a high IQ score are just as vulnerable to cognitive biases as anyone else.

“The most important one for me is this idea of motivated reasoning,” he says.

“If you have a hunch or an intuition that something is right and it fits with your overall worldview, then you will only look for the information that supports that point of view.”

And, according to Mr Robson, when it comes to motivated reasoning, the crucial difference between highly intelligent people and the rest of us is that so-called smart people are simply better at it.

“They have that mental agility that lets them rationalise their points of view in a more convincing way.

“So, what you find is that on certain polarised issues, more intelligent people become even more polarised.”

Extracted from Are some of us destined to be dumb and is there anything we can do about it? by Antony Funnell for Future Tense on ABC Radio National.


On Charity and Tyrants

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

by Neil Godfrey

When reading about Herod the Great recently I was reminded at one point of my recent post about The Dawn of Everything. Herod prided himself on showering the poor with free food — following the “bread and circuses” custom of aspiring and established political leaders in Rome.


Northwest Coast societies, in contrast, became notorious among outside observers for the delight they took in displays of excess. They were best known to European ethnologists for the festivals called potlatch, usually held by aristocrats acceding to some new noble title (nobles would often accumulate many of these over the course of a lifetime). In these feasts they sought to display their grandeur and contempt for ordinary worldly possessions by performing magnificent feats of generosity, overwhelming their rivals with gallons of candlefish oil, berries and quantities of fatty and greasy fish. Such feasts were scenes of dramatic contests, sometimes culminating in the ostentatious destruction of heirloom copper shields and other treasures, just as in the early period of colonial contact, around the turn of the nineteenth century, they sometimes culminated in the sacrificial killing of slaves. Each treasure was unique; there was nothing that resembled money. Potlatch was an occasion for gluttony and indulgence, ‘grease feasts’ designed to leave the body shiny and fat. Nobles often compared themselves to mountains, with the gifts they bestowed rolling off them like boulders, to flatten and crush their rivals.

So far so good. The authors towards the end of the book describe the archaeological evidence of the first Mesopotamian city, Uruk, and the evidence for the temple complex there accommodating the poor:

It is often hard to determine exactly who these temple labourers were, or even what sort of people were being organized in this way, allotted meals and having their outputs inventoried – were they permanently attached to the temple, or just ordinary citizens fulfilling their annual corvée duty? – but the presence of children in the lists suggests at least some may have lived there. If so, then this was most likely because they had nowhere else to go. If later Sumerian temples are anything to go by, this workforce will have comprised a whole assortment of the urban needy: widows, orphans and others rendered vulnerable by debt, crime, conflict, poverty, disease or disability, who found in the temple a place of refuge and support.

One wonders, of course, about the possibility that these people were slaves. Graeber and Wengrow note,

There is a possibility some were already slaves or war captives at this time (Englund 2009), and as we’ll see, this becomes much more commonplace later; indeed, it is possible that what was originally a charitable organization gradually transformed as captives were added to the mix.

From here, we move on to the question of how authoritarian societies arose and to quote a portion of what I covered in an earlier post,

The shorter version of Steiner’s doctoral work . . . focuses on what he calls ‘pre-servile institutions’. Poignantly, given his own life story, it is a study of what happens in different cultural and historical situations to people who become unmoored: those expelled from their clans for some debt or fault; castaways, criminals, runaways. It can be read as a history of how refugees such as himself were first welcomed, treated as almost sacred beings, then gradually degraded and exploited, again much like the women working in the Sumerian temple factories. In essence, the story told by Steiner appears to be precisely about the collapse of what we would term the first basic freedom (to move away or relocate), and how this paved the way for the loss of the second (the freedom to disobey). . .

What happens, Steiner asked, when expectations that make freedom of movement possible – the norms of hospitality and asylum, civility and shelter – erode? Why does this so often appear to be a catalyst for situations where some people can exert arbitrary power over others? Steiner worked his way in careful detail through cases . . .  Along the journey he suggested one possible answer to the question . . . : if stateless societies do regularly organize themselves in such a way that chiefs have no coercive power, then how did top-down forms of organization ever come into the world to begin with?

You’ll recall how both Lowie and Clastres were driven to the same conclusion: that they must have been the product of religious revelation. Steiner provided an alternative route. Perhaps, he suggested, it all goes back to charity. In Amazonian societies, not only orphans but also widows, the mad, disabled or deformed – if they had no one else to look after them – were allowed to take refuge in the chief’s residence, where they received a share of communal meals.

Bread and circuses, charity . . . and the rise of tyrants. The message to me is the importance of allowing everyone to participate in the distribution of their needs.

Graeber, David, and David Wengrow. The Dawn of Everything: A New History of Humanity. New York: Farrar, Straus and Giroux, 2021.