The Science of Discworld IV: Judgement Day

TWENTY



* * *



DISBELIEF SYSTEM





Roundworld has its own home-grown Omnians. We’re not referring to the great majority of religious believers, who are entirely normal people who happen to have been brought up in a culture that has its own distinctive set of beliefs in things that lack objective evidence. Neither are we referring to Roundworld’s equivalent of mainstream Omnians, who since the overthrow of the extremist Vorbis and his rerun of the Inquisition (see Small Gods) have been decent-enough sorts and kept themselves to themselves.

No, it is the Vorbises of Roundworld who cause all the trouble. Believers with a capital B. These are the people who not only know that their worldview is The Truth – the sole truth, the only truth, the truth revealed from the mouth of God himself – but are intent on forcing it onto everyone else, whether they want it or not, at any cost.

Most sane, rational human beings learn quite early on that you feel just as certain even when you’re wrong: the strength of your belief is not a valid measure of its relation to reality. If you have scientific training, you may even learn the value of doubt. You can certainly have religious beliefs and still be a good scientist; you can also be a good person and understand that people who disagree with your beliefs need not necessarily be evil, or even misguided. After all, most of the world’s people – even the religious ones – probably think your beliefs are nonsense. They have a different set of beliefs, which you think are nonsense.

But religious extremists seem unaware of the human tendency towards self-delusion, and decline to take even the simplest steps to counteract it. When the British Humanist Association hired a bus to tour the UK with the advert ‘There’s probably no God. Now stop worrying and enjoy your life’ on its side, the immediate response from some religious authorities was: ‘They don’t seem terribly confident about it.’ No, the point of the ‘probably’ was to stop opponents scoring an easy point by criticising the Humanists for being dogmatic, for being too confident of their view. More practically, they were also worried about potentially breaching the Advertising Code. Another response from some of those of a religious persuasion was synthetic outrage and claims of persecution.

But Humanists are just as entitled to put their views on the side of a bus as tens of thousands of churches worldwide are to stick ‘The wages of sin is death’ on their walls. That’s why the Humanists hired the bus – one small voice crying out against the multitudes, many of whom were clearly intolerant.

Belief is a very odd word, and it is used in several ways. ‘Belief that’ differs greatly from ‘belief in’, which is again different from ‘belief about’. Our belief about science, for example, is that it’s simply our best defence against believing (in) what we want to. But we may also have, to some extent, a belief in science, as distinct from belief in a religion or a cult: we believe that science can find ways out of humankind’s present difficulties, ways that are not available to politics, philosophy or religion.

There is also a different usage of ‘belief’ altogether, one that we suspect is not always appreciated. Suppose that a scientist says ‘I believe that humans evolved’, and a religious person counters with ‘I believe humans were created by God’. On the surface, these are similar statements, and it’s easy to conclude that science is just another kind of religion. However, in religion, once you believe something, then you consider it to be an immutable truth. In science, the same word means ‘I’m not very sure about this’. As we might say ‘I believe I left my credit card in the pub’, when we haven’t a clue where it’s gone.

Ponder Stibbons believes that Roundworld is a construction whose genesis was events on Discworld. We, and you, believe the converse: that Discworld is a construct, created by Terry Pratchett in Roundworld. It’s just possible for both of these beliefs to be true – for a given value of truth. We all have beliefs of one kind or another. Let’s look at how we get them, and how we might judge them.

Do newborn babies have beliefs? Surprisingly, the answer seems to be ‘yes’. They are very primitive, ill-formed beliefs, and they are considerably refined even in the first six months of life, but a few behaviours, even of newborns, suggest that a lot of wiring-up of the brain has gone on in the womb. The baby is far from being a blank slate on which anything can be written – a stance that Pinker argues persuasively in his book The Blank Slate. The baby is especially responsive to the sight of its mother, and can become very disturbed if she simply disappears from view. It responds to music similar to what it heard in the womb during the later stages of its development; it can distinguish jazz from Beethoven or folksong by attentively ‘listening’ for familiar sounds. It has a whole suite of beliefs about suckling, about breasts and what they’re for. These things are beliefs in the sense that the baby’s brain already holds some model of mother, and of music, and it prefers things that fit this model.

Soon, the baby begins to smile in response to a smile; even to a drawing of a smile. Is that a belief too? The answer depends on, but also illuminates, what we mean by a belief. The baby acts in particular ways – smiles, or suckles – because of the way its brain is wired up, because of programmes in its brain that could be otherwise, and, in occasional babies, are otherwise. Mostly, these are pathologies; apart from different musical preferences, there are few normal differences between baby brains. But very soon, because of a mother’s behaviour, whether the baby is swaddled or carried on a bare back into the fields, or left out on a mountainside, or has its feet bound, babies diverge. And very soon, they are inducted into the Make-a-Human-Being kit that is characteristic of, and specific to, each human culture.

There are several ways to look at how a baby interacts with its surroundings. When the baby throws out toys from its pram, for example, this can be read in at least two ways. On the one hand, we might simply assume that it cannot retain a good grasp of the toy, which falls. However, observing the radiant smile with which it welcomes the return of the toy, we might conclude that the baby is teaching its mother to fetch. Such apparently minor interactions have a strong effect on the baby’s future, and they complicate it in ways that often reinforce the culture concerned. They include little songs and stories; learning to walk, to talk, and to play. We say ‘learning’ here, but these processes are like birds learning to fly. Many features of the ability are already wired into the brain, but now they have to be adjusted in a kind of dialogue with the real world. ‘If I stretch this bit out, and pull it back, what happens?’ So these abilities mature: they are not learned from scratch.

In Unweaving the Rainbow, Dawkins likens juvenile humans to caterpillars, voracious in their uptake of information, especially from parents: Father Christmas, Heaven, fairies, what food to eat at festivals. He points out how credulous we must be as juveniles, to avoid obstacles to learning; but also how we should become more sceptical as adults, and that too many adults fail to do so, hence, alas, astrologers, mediums, priests and the like.

We can see just how indiscriminately juveniles pick up information through something that happened to Jack. He ran an extramural class in animal-handling for about thirty years, and became very impressed by the distribution of animal phobias (although he did realise that this was a very peculiar group of students in that respect). About a quarter of the students had a spider phobia, rather fewer had snake phobia (which, if bad, included worms). Some had a phobia of rats and mice. A few reacted badly to birds, feathers or bats. It seems likely (but we can’t document it in this instance) that these phobias came about by cultural infection: Mother screamed when she found a spider in the bath, or a television series depicted snakes as poisonous. (Less than 3% actually are, but it might be wise to assume lethality as a default, for solid evolutionary reasons.) Rats are often depicted as being dirty, and the same goes for mice. Jack never worked out what gave rise to phobias about birds and feathers, but it is certainly passed on in families, and it’s much more likely to be learned than genetic. It might be a great example of how beliefs can pass from brain to brain like a computer virus, in this case not transmitted verbally. But we can see how useful these phobias would have been when we were much nearer to nature. They let us learn what creatures to avoid, instantly. And while it didn’t much matter if we occasionally avoided an animal that was actually harmless, the same mistake the other way round could be disastrous.

Beliefs are formed through interactions between an individual’s brain and his or her environment, especially other people but also the natural world (spiders!). So it’s worth taking a general look at interactions.

If A acts on B, we call this an action; but if B also (re)acts on A, we say that A and B are interacting. A baby and its mother are like that. But most interactions are more than just some sort of exchange: they have a deeper effect. A and B are, to a greater or lesser extent, changed by the interaction. They then become A' and B'; then they interact again, and again, and are changed still more. After several changes of this kind, A and B have become quite different systems.

For example, the actor walks out onto the stage, and the audience reacts; the actor reacts to this, and the audience in turn reacts to the actor’s new persona … and so on. In The Collapse of Chaos we called this deeper kind of interaction ‘complicity’, giving a familiar word a technical meaning that is not too far removed from the usual one, but also hinting at a mix of complexity and simplicity. The complicity between child and mother, later between child and teachers, then with sports teams, then with the whole adult world, is the Make-a-Human-Being kit we talked of earlier. We also need a word for this cultural interaction, and have suggested ‘extelligence’. Individuals are intelligent; there are useful ideas and abilities somehow represented, remembered and readied for use, inside their brains. But most of a culture’s collective knowledge is outside any given individual, forming a body of information that is not in any one brain, but outside; hence extelligence. Before the invention of writing, most of a culture’s extelligence resided in the entire collective of brains, but when writing came along, some of it – often the most important to the culture – didn’t need a brain to contain it; only to extract and interpret it. Printing boosted the role of this type of extelligence, and modern technology has led to its dominance.

Where do our beliefs come from? From complicity between our intelligence and the extelligence that surrounds it. This process continues into adulthood, but its greatest effect occurs when we are children. St Francis Xavier, co-founder of the Jesuits and a missionary, is quoted as saying ‘Give me the child until he is seven and I’ll give you the man’. A trawl of today’s premier extelligence, the internet, will haul up an almost endless range of interpretations of that phrase, from benign to malign, but their common ingredient is the malleability of human intelligence at an early age, and its fixity thereafter.

Until fairly recently, almost all people were religious believers. The majority still are, but the proportions depend on culture in a dramatic way. In the United Kingdom, about 40% say they have no religion, 30% align themselves with one but do not consider themselves in any way religious, and only 30% say they have significant religious beliefs. An even smaller proportion attends some kind of place of worship regularly. In the United States, over 80% identify with a specific religious denomination, 40% say they attend services weekly, and 58% say that they pray most weeks. It’s an intriguing difference between cultures that have such a lot in common.

Most religious activity, for the last few thousand years, has been based on belief in a god or gods that acted to create the world, human beings, the beasts of the field, plants – everything. We discussed some of these creator gods in chapter 4; they used to resemble human beings or animals, but nowadays they are often abstract and ineffable; either way, they have supernatural powers. They are believed to be in daily contact with the world, making thunderstorms, providing good and bad luck for individual people, acting as a source of wisdom and authority through oral tradition (maintained by a shaman, a priest, or a priesthood). And, in the last few thousand years, Holy Books. Such theist beliefs contrast with deist beliefs, in which there is no overt anthropomorphic god, but some entity, or process, looks after the whole caboodle in deep background.

Such beliefs can be very powerful, and they form the basis of most people’s views of the world and of our lives. In the seventeenth and eighteenth centuries there was a strong movement among intellectuals to reform the structure of society, by basing it on reason, rather than on faith and tradition. This movement, known as the Enlightenment or the Age of Reason, was highly influential throughout Europe and America. It played a role in the formulation of constitutional declarations of human rights, among them the American Declaration of Independence and the French Declaration of the Rights of Man.

Since then, the proportion of non-believers has increased throughout the Western world, especially among those who are well educated and well heeled financially (as a survey in the United States has clearly shown, for example). Such people, among whom we count ourselves, agree with Dawkins, though perhaps not so publicly: they maintain that there is no god, or God, out there: it’s all done by laws of nature, sometimes ‘transcended’ by changing the context for those laws. Good and bad ‘luck’ come from our own actions and the general cussedness of nature; there’s no supernatural entity that consciously affects our lives.

Why do so many people believe in a god? Dennett’s Breaking the Spell is an attempt to examine that question, for Christian fundamentalists, Islamic teachers, Buddhist monks, atheists, and others. He begins by pointing to the commonality of pre-scientific answers in groups of people: ‘How do thunderstorms happen?’ answered by ‘It must be someone up there with a gigantic hammer’ (our example, not his). Then, probably after a minimum of discussion, a name such as ‘Thor’ becomes agreed. Having successfully sorted out thunderstorms, in the sense that you now have an agreed answer to why they happen, other forces of nature are similarly identified and named. Soon you have a pantheon, a community of gods to blame everything on. It’s very satisfying when everyone around you agrees, so the pantheon soon becomes the accepted wisdom, and few question it. In some cultures, few dare to question it, because there are penalties if you do.

J. Anderson Thomson Jr’s book Why We Believe in God(s) devotes each chapter to a different reason for the existence of beliefs. It makes a good case for a Dennett-style system, and is persuasive enough that we’d expect aliens, if they have anything like the kind of social life we have, to have believed in god(s) during at least the early growth of their culture. The aliens would have to have had nurturing parent(s), tribes with a big alien as boss, and so on, but that’s a reasonable expectation if they are extelligent.

People in all cultures grow up and acquire a set of beliefs. One way of looking at this is to call the beliefs that are inherited ‘memes’. Just as ‘genes’ code for hereditary traits, so memes are intended to capture the inheritance of individual items of culture, rather than a whole belief system. A tune like ‘Happy Birthday’, a concept like Father Christmas, atom, bicycle or fairy – all are memes. A whole slew of memes that forms an interacting whole is called a memeplex, and religions are the best examples: at various times and in various cultures they have had, or still do have, many linked-up memes like ‘There is Heaven and there is Hell …’ and ‘Unless you pray to this God you’ll go to Hell’ and ‘You must teach this to your children …’ and ‘You must kill those who don’t believe in this …’ and so on. You will have some familiarity with other religions, and you will appreciate that we’re not saying that your religion is like that. It’s all the others, the mistaken ones …

We should look at a few belief systems, to see how they worked and whence they got their authority. We’ll choose some relatively unfamiliar ones, where it’s easier (for most of us) to set aside our own beliefs. If you’re a Jewish Cathar Scientologist, skip this bit.

The Cathars were an odd group of Christians, existing from about 1100 until they were massacred around the period 1220 to 1250, initially by barons of Northern France empowered by the Pope, but then by the Inquisition. They believed that the material world was essentially evil, and that only the spiritual world was good. They deplored sex in general; indeed their bonhommes, or perfecti, wouldn’t eat meat because it was the result of sexuality. Fish was all right: they didn’t know about underwater sex – or plant sex, for that matter. They were totally celibate, and deplored sex even in marriage. There was a single sacrament, the consolamentum or consolation, prescribed for attainment of the perfectus state: a brief spiritual ceremony that removed all sin from the credente, or believer, and inducted them into the next higher level as a perfectus. It was commonly performed as death approached, so that the believer was not condemned. Belief in its effectiveness, however, was by no means universal.

Presumably their anti-sex views would weigh against having children, so that any such belief system would be likely to lose its adherents as time passed, but that seems not to have happened. They were remarkably successful in Languedoc, perhaps mostly through conversion. In this they were the cultivated roses of religion, propagated not through sex but by taking cuttings. Considering the practices of Catholic priests, whose behaviour at that time was a distinct contrast, it’s not surprising there were many conversions. That is probably why they had to be annihilated.

The Jews of Poland in the late Middle Ages were mostly confined to ghettos, and restricted to a few trades including usury – money-lending. Their beliefs were complicated. Males learned Torah (the Five Books of Moses, the opening books of the Old Testament) from a very young age, and then graduated to Talmud, a compilation of commentaries on the Torah by mostly-Babylonian rabbis. After the Bar Mitzvah ceremony at about age thirteen, which included reciting, and usually singing, a piece from Torah and commenting on it, they continued to study Jewish texts, especially the Talmud and the Gemara (additional rabbinical comments).

Boys who continued to study were frequently maintained by general ghetto funds, such as they were (even today in Israel, boys of Orthodox clans are allowed not to do national service). Females had to learn to keep a kosher household, which involved a whole complex of issues, not simply having kosher meat, but also separating milk dishes from meat dishes, keeping separate cloths and cutlery as well as dishes, and cleaning house, particularly for the Passover, which required a different set of menus. The reward system was not, basically, Heaven or Hell; it was simply that doing these things led to a good life, consonant with what God (Jehovah, but his name must not be said) wanted for man, and to some extent woman.

In the 1550s the rules were collected into a great composition, the Shulchan Aruch, by a Sephardic rabbi in Israel, or possibly Damascus. It became the greatest compendium of Jewish law, especially for the Ashkenazi communities of middle-Europe (Sephardi and Ashkenazi are two separate streams of Jewish culture). This belief system has continued, with much evolution, to the present day. Jack’s rabbi has said that he’s the best atheist in her congregation.

Scientology evolved from L. Ron Hubbard’s earlier invention, Dianetics. L. Ron (‘Elron’) was a fairly successful science fiction author, but his entry into belief systems was distinctly more successful. Few scientists would agree with his claim that Dianetics was a science, but it sold a lot of books; he had audiences of thousands, and after the editor John W. Campbell described it in Astounding Science Fiction it really took off. Martin Gardner’s claim that science fiction fans were very gullible seems to have been true. However, in the longer term Dianetics failed, and Hubbard produced Scientology, which has gone from strength to strength on the basis of a set of beliefs not very different from those of Dianetics.

Basically, the idea is that a set of ‘engrams’ is induced in people by their experiences (including when they were embryos, before the nervous system develops). Engrams are records of bad experiences, especially very bad ones, which have to be erased for people to become clears – a step upwards on the evolutionary ladder from ordinary humans. People have souls, thetans, that have jumped from alien to alien over billions of years. The important issue for questions about belief is that this system derived from the imagination of one man, who failed to sell Dianetics. It now has tens of thousands of adherents, at least; it claims millions.

These are just three examples. Here are some others to consider, since people seem to pick up sets of beliefs terribly easily.

Rosicrucians, for instance, believe that a set of mystical instructions will enable them to achieve telepathy, success in their jobs and instantaneous travel anywhere, including other planets. The cost of this instruction is considerable, but eventually it gets you into the central core of the sect, where anything is possible. Atlanteans believe that every so often the Earth tilts, flooding all the present continents and exposing new ones; if you find an Atlantean, note where he buys his next house. There are hundreds of such belief systems, and the people who subscribe to them – often paying large sums of money – get all kinds of benefits, especially being privy to the real truth about life, the universe, and everything.

Other belief systems are not so wild. We have in mind systems like Count Alfred Korzybski’s general semantics, which produced wise little gems like ‘the map is not the territory’, Ludwig von Bertalanffy’s general system theory, and the many systems of mind training such as Esalen, with which Gregory Bateson was associated. There are thousands of ‘mind training’ hits on Google, most of them based in California. It is easy to understand the feelings, the beliefs, that send people into these systems of self-improvement. We subscribe to some ourselves – devotion to explanations involving ‘complexity’, promoted by the Santa Fe Institute and the New England Complex Systems Institute (whose acronym, NECSI, has enabled Jack to promote himself as a necsialist, if not quite a nexialistfn1).

However, the variety of these beliefs – most of which seem very strange to outsiders – is amazing. How can so many belief systems, differing so radically from the common experience of humanity, be accepted by so many people? For each individual belief system, the majority of us consider at least some of the beliefs to be absurd. So why is the absurdity not apparent to everyone? Can it be that people in general are so ignorant of reality nowadays that they will buy into anything that promises a better or more interesting life?

There was also a system advertised not that long ago which forecast that 2012 would be a year of financial collapse and the beginning of World War III – which wouldn’t of itself have been a great surprise given some of the conflicts. However, the forecast was based on rather strange reasoning: not as a result of the antics of greedy bankers and the armaments industry, but because the ancient Mayan calendar ran out in 2012.fn2 The Mayans themselves mostly ran out in the 1600s, because of the diseases which the Spaniards brought, not because of Spanish military prowess. So it’s difficult to see what their calendar had to do with us. The calendars on many kitchen walls this year – and most years – run out on 31 December … Hallelujah! It’s the apocalypse!

In 2012 Scientific Americanfn3 reported a psychological study carried out by Will Gervais and Ara Norenzayan, under the title ‘How critical thinkers lose their faith in God’. It was a follow-up to a 2011 investigation by Harvard researchers, who concluded that what we believe is closely linked to how we usually think. Intuitive thinkers, who come to conclusions instinctively, tend to have religious beliefs. Analytic thinkers tend not to. Encouraging people to use intuition rather than logical analysis increased their belief in God.

Gervais and Norenzayan wondered whether the underlying distinction could be understood in a slightly different manner, as a difference between two ways of thinking that are both useful in suitable circumstances. System 1 thinking is ‘quick and dirty’, relying on simple rules of thumb to make decisions rapidly. If an early human on the savannah spots a patch of orange behind a bush, it makes good sense to assume that it might be a lion, and take avoiding action. A more analytical System 2 assessment might subsequently discover that the orange patch was a bunch of dried leaves, but the processes involved would be slower, and involve more work. In this case, System 1 thinking does little harm if it later turns out to be mistaken, but System 2 could kill you if there really is a lion and you waste time trying to decide.

On the other hand, there are many occasions on which System 2 saves lives, but System 1 does not. Thinking about past forest fires, and deciding not to build your village in an area surrounded by dry vegetation, trumps an intuitive assessment that the location has ample building materials. Avoiding floodplains, even though it is easy to build houses on them and they are currently unoccupied, can prevent complete destruction of your property when the river rises. There is a reason why they are currently unoccupied.

Teasing out the workings of the human brain is tricky, but psychologists have developed techniques that help. In this case, participants were first interviewed to determine the extent of their religious beliefs. Sometime later, the main experiment was carried out, in two different ways. In the first, participants were given a randomly rearranged five-word phrase – such as ‘speak than louder words actions’ – and were asked to rearrange the words to make sense. Some of them were given scrambled phrases containing many words related to analytical thinking; the rest were not. After this exercise, they were asked whether they agreed that God exists. The group whose training period involved words related to analytical thinking were more likely to disagree. Moreover, this tendency remained, even when their prior beliefs were taken into account. The second version of the experiment relied on previous research showing that asking people to read something printed in a hard-to-read font promoted analytical thinking, perhaps because they have to proceed more slowly and puzzle out the meaning of the letters. Subjects who completed a survey printed in a semi-illegible font were less likely to agree that God exists than those given the same material in a legible one.

The magazine article summed up the study: ‘It may help to explain why the vast majority of Americans tend to believe in God. Because System 2 thinking requires effort, most of us tend to rely on System 1 thinking processes whenever possible.’

There is a loose relationship between System 1/System 2 and Benford’s distinction between human-centred and universe-centred thinking. Intuitive thinking mainly takes a human-scale view of the world, and often places emphasis on quick decisions based on little more than hunches. Many people, finding it difficult to weigh up electoral candidates’ manifestos because political issues are often complicated, rely on instant judgements – System 1. ‘His eyes look too close together.’ ‘I like that smart suit he’s wearing.’ ‘Anyone who’s for/against a free market gets my vote.’ Universe-centred thinking is necessarily analytical, System 2. Humans have to train themselves to think inhuman thoughts. It takes conscious effort, and education, to reject a human-centred view.

Of course, there is no reason to suppose that these two ways of distinguishing thought processes have to match up, and they probably don’t, not in detail. Moreover, the psychological experiments only scratch the surface of human motivations and beliefs. Even if the conclusions are correct – and it is relatively easy to raise objections – they demonstrate an association, not a cause. But the results correspond to other observations of religious belief, for example that it is much rarer among scientists and well-educated people than it is among the poorly educated. And it is the common experience of atheists and rationalists that people who embrace extreme versions of religion tend not to be good at critical thinking. Especially about their own beliefs.

Psychologists study the whole human brain; neuroscientists look at the brain’s detailed workings, in particular how it controls the movements of the body. Many think that this is why the brain evolved to begin with, and that sensory information-processing came later, along with all of the other subtler functions of the brain. Engineers, aiming to build better robots, are borrowing tricks from the brain. One of the fundamental features of the brain is how it deals with uncertainty.

Our senses are imprecise, and their inputs to the brain are subject to ‘noise’ – random mistakes. The workings of the brain, being evolved wetware (the organic material of the nervous system) rather than carefully engineered hardware or software, are also subject to errors. The signals that the brain sends to the body suffer from unavoidable variability. Try to sink a golf ball with a ten-metre putt, a hundred times. You won’t get it in the hole every time. Sometimes you may succeed, sometimes you’ll miss by a small amount, but occasionally you’ll miss by more. Professional golfers are paid a lot of money because they are marginally better at reducing this kind of variability than the rest of us.

The same variability comes into play, usually in a more exaggerated form, when it comes to social and political judgements. Here the noise-to-signal ratio is even higher. Not only do we need to take into account all of the information that is being provided: we have to decide which of it is sensible and which is rubbish. How does the brain juggle all of these conflicting factors and come to some kind of decision? A theory that currently explains a great deal, and has a lot of experimental support, is that the brain can be well modelled as a Bayesian decision machine.

It’s a mistake to say that any natural phenomenon is the same as some formal mathematical model, if only because mathematics is a system of human thought, and nature isn’t. Bayesian decision theory is a branch of mathematics, a way of formulating probabilities and statistics. The brain is an interconnected network of nerve cells, whose dynamics depend on chemistry and electrical currents. Bearing this in mind, it seems that over the megayears our brains have evolved networks that mimic the mathematical features of Bayesian decision theory. We can test whether such networks exist, but as yet we have little idea of how they actually work.

In the 1700s, the Reverend Thomas Bayes unwittingly started a revolution in statistics when he suggested a new interpretation of probability. At the time this was a hazy concept anyway, but there was broad agreement that the probability of some event can be defined as the proportion of trials on which that event happens, in the long run. Pick a card at random from a pack, billions of times, and you will get the ace of spades about one time in 52. The same goes for any other specific card, and the reason is that there are 52 cards, and it’s hard to see why any particular one should turn up more frequently than any other.
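
That long-run-frequency picture is easy to make concrete. Here is a minimal simulation sketch (ours, purely for illustration; Bayes had no computers):

```python
import random

# Frequency interpretation: the probability of an event is the long-run
# proportion of trials on which it happens.
deck = list(range(52))        # represent the ace of spades as card 0
trials = 1_000_000
hits = sum(random.choice(deck) == 0 for _ in range(trials))

print(hits / trials)          # drifts ever closer to 1/52, about 0.0192
```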

Bayes had a different idea. There are many circumstances in which it is not possible to repeat a trial many times. What, for example, is the probability that God exists? Whatever our views, we can’t generate billions of universes and count how many of them have a deity. One way to handle such problems is to decide that such probabilities have no meaning. But Bayes argued that in many contexts, you could assign a probability to a one-off event: it was the degree of belief in the occurrence of that event. More strongly, if there was some genuine evidence, it was the degree of confidence in the evidence. We make this kind of snap judgement all the time, for example when thinking that Spain’s football team has roughly a 75% chance of winning the UEFA European football championship, or that the chances of rain today are low.

What Bayes did, sometime in the mid-1700s, was to find a mathematical formula, allowing these ‘prior probabilities’ to modify solid information obtained by other means. A friend of his published the formula in 1763, two years after his death. Suppose you know that Spain’s record of winning big football tournaments is only 60% (a figure we pluck from a hat for illustrative purposes), but you also have a hunch that this year they are playing a lot better than usual. Put the two together, and you will assess their chances as being higher.
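
The formula itself is compact. In modern notation (our presentation, not Bayes’s original), for a hypothesis H and evidence E:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

Here $P(H)$ is the prior probability of the hypothesis, $P(E \mid H)$ is the probability of seeing the evidence if the hypothesis is true, and $P(H \mid E)$ is the posterior: the updated degree of belief once the evidence is in.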

Bayesian inference can put numbers to all of this, and provide a rational system for calculating the probabilities concerned – except for prior probabilities, which are plugged into the formulas but are not consequences of them. So the method is a ‘worlds of if’ approach: if the prior probability is such and such, then the consequences of new data will be so and so. The formula does not justify any particular prior probability; however, its consequences may let us test the accuracy of the prior probability, by comparison with observations. Bayesian inference often outperforms more ‘rational’ methods. Although we may not be certain that we’ve assessed the prior probabilities correctly, it may still be better to make a guess, rather than ignoring such influences altogether.

In conventional statistics, a statement being tested – a hypothesis – should be accepted (or at least not rejected) if the evidence agrees with it. In the Bayesian approach, however, the hypothesis should be rejected, despite the evidence, if its prior probability is very low. Indeed, it may be reasonable to reject the alleged evidence, on the same grounds.

For example, suppose the hypothesis is the existence of UFOs, and the evidence is a photograph of one. The photo supports the hypothesis, but if you believe that the chance of UFOs existing is extremely small, then the evidence is not convincing. The photo might be a fake, for example; but even if you don’t know whether it is genuine, you are justified in rejecting the hypothesis … unless, of course, it turns out that your prior probability is wrong. So Bayesian inference does not disprove the existence of UFOs: instead, it quantifies the view that ‘extraordinary claims require extraordinary evidence’. And a photo isn’t extraordinary enough.
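
To see the numbers at work, here is a minimal sketch of the calculation; the probabilities are invented purely for illustration, only the formula is Bayes’s:

```python
# Bayesian update for the UFO photograph. All numbers are invented
# for illustration.
prior = 1e-6             # assumed prior probability that UFOs exist
p_photo_if_ufo = 0.5     # chance of such a photo turning up if they do
p_photo_if_not = 0.01    # chance of a fake or mistaken photo if they don't

# Bayes' formula: posterior = likelihood * prior / total evidence
evidence = p_photo_if_ufo * prior + p_photo_if_not * (1 - prior)
posterior = p_photo_if_ufo * prior / evidence

print(posterior)         # about 0.00005: bigger than the prior, still tiny
```

On these figures the photo raises the probability fifty-fold, yet leaves it minute. Only evidence that is vastly more likely if UFOs exist than if they don’t, something genuinely extraordinary, could drag such a small prior up to respectability.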

Anyway, the neuroscience theory holds that the brain operates by generating beliefs about the world. Here a belief is defined to be what the brain decides about some event or phenomenon, so it is hard to deny that the brain operates by generating such things. The theory says something less tautologous, however: it asserts that the brain combines two distinct sources of information: memory and data. It does not just assess incoming sensory data as such; it compares them to what’s already stored in memory.

Experiments carried out by Daniel Wolpert and his team support the view that the results of these comparisons correspond very closely to Bayes’s formula. The brain seems to have evolved an effective and fairly accurate way to combine its existing knowledge with new information, thereby modifying what it holds in its memory. The experiments look at how we move our limbs to perform some task. Suppose we want to pick up a cup of coffee. There are many ways to do this, and most end in disaster. If we tip the cup too far, for example, the coffee will spill. The response of our muscles is affected by inherent random fluctuations in the motor system, and some strategies for picking up the cup are less error-prone than others. Optimal choices, determined by Bayesian decision theory, generally agree with the actual motions observed.
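
The kind of computation this implies can be sketched in a few lines. Under the usual assumption of Gaussian noise, the Bayes-optimal estimate is a weighted average of memory and data, each weighted by its reliability (the inverse of its variance). This is our illustrative sketch, not the analysis actually used in Wolpert’s experiments:

```python
def bayes_combine(prior_mean, prior_var, data_mean, data_var):
    """Fuse a Gaussian prior (memory) with Gaussian data (the senses).
    Each source is weighted by its reliability, i.e. inverse variance."""
    w_prior = 1.0 / prior_var
    w_data = 1.0 / data_var
    mean = (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)
    var = 1.0 / (w_prior + w_data)   # fused estimate beats either source alone
    return mean, var

# Hypothetical numbers: memory puts the cup handle at 10 cm (fuzzy),
# vision says 12 cm (sharper); the estimate lands nearer the better source.
print(bayes_combine(10.0, 4.0, 12.0, 1.0))   # -> (11.6, 0.8)
```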

We repeat, this doesn’t imply that the brain carries out Bayesian calculations the way a mathematician would consciously do using pencil and paper. On the contrary, the brain has evolved neural networks that produce the same general results. The choices indicated by Bayesian decision theory are the choices that best fit reality, assuming that memory and data are being combined. This fit provides an evolutionary advantage – on the whole, those choices work better. So the neural networks that control how we walk, run, hold or throw objects, have been selected to mimic the results of Bayesian decision theory – our way to formalise mathematical rules that describe whatever nature is actually doing.

More generally, we can speculate that similar neural networks control our snap judgements about social or political matters. Again, there are two ingredients: prior beliefs already in memory, and new data. Crucially, the Bayesian model shows why beliefs may override data. If you are certain that global warming is a hoax – for whatever reasons, good or bad – the Bayesian decision machine in your head will reject new evidence that global warming exists, whatever that may be, and stick to your existing beliefs. It may even lead you to reject all such evidence on the grounds that it has to be part of the hoax. If you don’t have strong beliefs either way, new evidence may cause you to modify your views. If you are already convinced about global warming, you may accept new evidence even if it is questionable.
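
Bayes’ formula makes the freezing effect of total certainty explicit. A prior of exactly zero (or exactly one) cannot be moved by any evidence E whatsoever:

$$P(H) = 0 \;\Rightarrow\; P(H \mid E) = \frac{P(E \mid H) \times 0}{P(E)} = 0.$$

Statisticians call the advice to avoid such all-or-nothing priors Cromwell’s rule: always leave a little room for the possibility that you are mistaken.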

The same goes for religious beliefs. What we might call the epidemiology of religion shows that most people get their beliefs from their parents, close relatives, teachers (if of that persuasion), and priests. By the time they reach an age where they are capable of questioning what they have been taught, they may have built up such a strong system of beliefs that it is proof against any contrary evidence.

So we use two ways of thinking, Systems 1 and 2. That’s suspiciously like Benford’s distinction. Are human-centred and universe-centred thinking related to the two components of Bayesian decisions – memory and data? It’s always tempting to line up dichotomies, assuming they carve things up in the same way, but in this case they don’t. Both memory and data are part of a quick-and-dirty intuitive decision process; they are different components that together drive System 1 thinking. System 2 is different, a much more conscious, deliberative analysis, assessing the evidence and trying – not always successfully – to ignore inbuilt prejudices. It’s not Bayesian.

What does this tell us about belief? First, it explains why people have beliefs at all. They are a vital part of System 1 thinking, which has evolutionary survival value when snap judgements are essential. On the other hand, it also shows that this type of thinking may have deep flaws, whereby our beliefs override important data. If a snap judgement is not needed, it is better not to make one. Instead, we can employ System 2 thinking – often described as ‘rational’ or ‘analytical’ – and allow the data to change our beliefs if they fail to match reality.

There is also the knotty question of belief versus disbelief. A UFO believer, for example, may argue that not believing in UFOs is merely another kind of belief. Namely, a belief that UFOs don’t exist. However, when virtually all of the alleged ‘evidence’ for UFOs turns out to be mistaken, or false, the contrary position isn’t a matter of belief at all. Zero belief in UFOs is not the same as 100% belief in the nonexistence of UFOs. Zero belief is an absence of belief, not an opposed form of belief. Similarly, science sets up a framework in which human beings consciously try to override their innate tendency to use System 1 thinking, because they know it can often be misleading. Scientists actively try to disprove the things they would like to be true.

That’s not a belief system. It’s a disbelief system.

fn1 Science fiction author A.E. van Vogt coined the term in The Voyage of the Space Beagle. He defined a nexialist to be someone who is good at joining together, in an orderly fashion, the knowledge of several fields of learning.

fn2 It didn’t, anyway. The period concerned was just the first of an even vaster series of calendar cycles.

fn3 Daisy Grewal, How critical thinkers lose their faith in God, Scientific American 307 No. 1 (July 2012) 26.




