
One of the nastiest experiments in the history of the social sciences was conducted in December 1970 on a group of students at the Princeton Theological Seminary, who were training to become ministers in the Presbyterian Church. Each student was asked to hurry to a distant lecture hall, and there give a talk on the Good Samaritan parable, which tells how a Jew travelling from Jerusalem to Jericho was robbed and beaten by criminals, who then left him to die by the side of the road. After some time a priest and a Levite passed nearby, but both ignored the man. In contrast, a Samaritan – a member of a sect much despised by the Jews – stopped when he saw the victim, took care of him, and saved his life. The moral of the parable is that people’s merit should be judged by their actual behaviour, rather than by their religious affiliation.

The eager young seminarians rushed to the lecture hall, contemplating on the way how best to explain the moral of the Good Samaritan parable. But the experimenters planted in their path a shabbily dressed person, who was sitting slumped in a doorway with his head down and his eyes closed. As each unsuspecting seminarian was hurrying past, the ‘victim’ coughed and groaned pitifully. Most seminarians did not even stop to enquire what was wrong with the man, let alone offer any help. The emotional stress created by the need to hurry to the lecture hall trumped their moral obligation to help strangers in distress.18

Human emotions trump philosophical theories in countless other situations. This makes the ethical and philosophical history of the world a rather depressing tale of wonderful ideals and less than ideal behaviour. How many Christians actually turn the other cheek, how many Buddhists actually rise above egoistic obsessions, and how many Jews actually love their neighbours as themselves? That’s just the way natural selection has shaped Homo sapiens. Like all mammals, Homo sapiens uses emotions to quickly make life-and-death decisions. We have inherited our anger, our fear and our lust from millions of ancestors, all of whom passed the most rigorous quality control tests of natural selection.

Unfortunately, what was good for survival and reproduction in the African savannah a million years ago does not necessarily make for responsible behaviour on twenty-first-century motorways. Distracted, angry and anxious human drivers kill more than a million people in traffic accidents every year. We can send all our philosophers, prophets and priests to preach ethics to these drivers – but on the road, mammalian emotions and savannah instincts will still take over. Consequently, seminarians in a rush will ignore people in distress, and drivers in a crisis will run over hapless pedestrians.

This disjunction between the seminary and the road is one of the biggest practical problems in ethics. Immanuel Kant, John Stuart Mill and John Rawls can sit in some cosy university hall and discuss theoretical problems in ethics for days – but would their conclusions actually be implemented by stressed-out drivers caught in a split-second emergency? Perhaps Michael Schumacher – the Formula One champion who is sometimes hailed as the best driver in history – had the ability to think about philosophy while racing a car; but most of us aren’t Schumacher.

Computer algorithms, however, have not been shaped by natural selection, and they have neither emotions nor gut instincts. Hence in moments of crisis they could follow ethical guidelines much better than humans – provided we find a way to code ethics in precise numbers and statistics. If we teach Kant, Mill and Rawls to write code, they can carefully program the self-driving car in their cosy laboratory, and be certain that the car will follow their commandments on the highway. In effect, every car will be driven by Michael Schumacher and Immanuel Kant rolled into one.

Thus if you program a self-driving car to stop and help strangers in distress, it will do so come hell or high water (unless, of course, you insert an exception clause for infernal or high-water scenarios). Similarly, if your self-driving car is programmed to swerve to the opposite lane in order to save the two kids in its path, you can bet your life this is exactly what it will do. Which means that when designing their self-driving car, Toyota or Tesla will be transforming a theoretical problem in the philosophy of ethics into a practical problem of engineering.
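To make the idea concrete, here is a minimal sketch of what such a commandment might look like in code. Everything in it is hypothetical: the names, the rule and the scenario are purely illustrative, and no real manufacturer’s software is being described.

```python
# Hypothetical sketch: an ethical rule encoded as deterministic logic.
# None of these names belong to any real self-driving system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Obstacle:
    kind: str    # e.g. 'pedestrian', 'vehicle', 'debris'
    count: int   # how many people are at risk

def choose_action(own_lane: Optional[Obstacle],
                  opposite_lane: Optional[Obstacle]) -> str:
    """Pick the manoeuvre that endangers the fewest people."""
    def people_at_risk(obstacle: Optional[Obstacle]) -> int:
        if obstacle is not None and obstacle.kind == 'pedestrian':
            return obstacle.count
        return 0

    # Unlike a stressed human driver, this rule never panics and
    # never makes an exception: fewer endangered people wins.
    if people_at_risk(own_lane) > people_at_risk(opposite_lane):
        return 'swerve'
    return 'stay'

# Two children ahead, an empty opposite lane: the car swerves, every time.
print(choose_action(Obstacle('pedestrian', 2), None))  # -> swerve
```

The point is not this particular rule but its character: once written down, it is applied identically in every crisis, with no fear, no distraction and no exceptions.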

Granted, the philosophical algorithms will never be perfect. Mistakes will still happen, resulting in injuries, deaths and extremely complicated lawsuits. (For the first time in history, you might be able to sue a philosopher for the unfortunate results of his or her theories, because for the first time in history you could prove a direct causal link between philosophical ideas and real-life events.) However, in order to take over from human drivers, the algorithms won’t have to be perfect. They will just have to be better than the humans. Given that human drivers kill more than a million people each year, that isn’t such a tall order. When all is said and done, would you rather the car next to you was driven by a drunk teenager, or by the Schumacher–Kant team?19

The same logic is true not just of driving, but of many other situations. Take for example job applications. In the twenty-first century, the decision whether to hire somebody for a job will increasingly be made by algorithms. We cannot rely on the machine to set the relevant ethical standards – humans will still need to do that. But once we decide on an ethical standard in the job market – that it is wrong to discriminate against black people or against women, for example – we can rely on machines to implement and maintain this standard better than humans.20
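Again, a minimal and purely hypothetical sketch of what ‘implementing the standard’ might mean. Here the rule that protected attributes must not influence the score is enforced by construction, because the scoring function never reads them; the field names and weights are invented for illustration.

```python
# Hypothetical sketch: a hiring rule that cannot see protected attributes.
# Field names and weights are invented for illustration.

PROTECTED = {'race', 'gender'}

def score_candidate(candidate: dict) -> float:
    """Score an applicant on job-relevant fields only; protected
    attributes are stripped before scoring, so by construction they
    cannot influence the result."""
    relevant = {k: v for k, v in candidate.items() if k not in PROTECTED}
    return 2.0 * relevant.get('years_experience', 0) + relevant.get('test_score', 0)

applicant = {'years_experience': 4, 'test_score': 85, 'gender': 'female'}
print(score_candidate(applicant))  # -> 93.0; 'gender' is never read
```

A real system would have to do far more, since seemingly neutral fields can act as proxies for protected ones; that is exactly why humans must still set and audit the standard.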
