A human manager may know and even agree that it is unethical to discriminate against black people and women, but then, when a black woman applies for a job, the manager subconsciously discriminates against her, and decides not to hire her. If we allow a computer to evaluate job applications, and program the computer to completely ignore race and gender, we can be certain that the computer will indeed ignore these factors, because computers don’t have a subconscious. Of course, it won’t be easy to write code for evaluating job applications, and there is always a danger that the engineers will somehow program their own subconscious biases into the software.21 Yet once we discover such mistakes, it would probably be far easier to debug the software than to rid humans of their racist and misogynist biases.

We saw that the rise of artificial intelligence might push most humans out of the job market – including drivers and traffic police (when rowdy humans are replaced by obedient algorithms, traffic police will be redundant). However, there might be some new openings for philosophers, because their skills – hitherto devoid of much market value – will suddenly be in very high demand. So if you want to study something that will guarantee a good job in the future, maybe philosophy is not such a bad gamble.

Of course, philosophers seldom agree on the right course of action. Few ‘trolley problems’ have been solved to the satisfaction of all philosophers, and consequentialist thinkers such as John Stuart Mill (who judge actions by consequences) hold quite different opinions to deontologists such as Immanuel Kant (who judge actions by absolute rules). Would Tesla have to actually take a stance on such knotty matters in order to produce a car?

Well, maybe Tesla will just leave it to the market. Tesla will produce two models of the self-driving car: the Tesla Altruist and the Tesla Egoist. In an emergency, the Altruist sacrifices its owner to the greater good, whereas the Egoist does everything in its power to save its owner, even if it means killing the two kids. Customers will then be able to buy the car that best fits their favourite philosophical view. If more people buy the Tesla Egoist, you won’t be able to blame Tesla for that. After all, the customer is always right.

This is not a joke. In a pioneering 2015 study, people were presented with a hypothetical scenario of a self-driving car about to run over several pedestrians. Most said that in such a case the car should save the pedestrians even at the price of killing its owner. Yet when they were then asked whether they personally would buy a car programmed to sacrifice its owner for the greater good, most said no. For themselves, they would prefer the Tesla Egoist.22

Imagine the situation: you have bought a new car, but before you can start using it, you must open the settings menu and tick one of several boxes. In case of an accident, do you want the car to sacrifice your life – or to kill the family in the other vehicle? Is this a choice you even want to make? Just think of the arguments you are going to have with your husband about which box to tick.

So maybe the state should intervene to regulate the market, and lay down an ethical code binding all self-driving cars? Some lawmakers will doubtless be thrilled by the opportunity to finally make laws that are always followed to the letter. Other lawmakers may be alarmed by such unprecedented and totalitarian responsibility. After all, throughout history the limitations of law enforcement provided a welcome check on the biases, mistakes and excesses of lawmakers. It was an extremely lucky thing that laws against homosexuality and against blasphemy were only partially enforced. Do we really want a system in which the decisions of fallible politicians become as inexorable as gravity?





Digital dictatorships


AI often frightens people because they don’t trust the AI to remain obedient. We have seen too many science-fiction movies about robots rebelling against their human masters, running amok in the streets and slaughtering everyone. Yet the real problem with robots is exactly the opposite. We should fear them because they will probably always obey their masters and never rebel.

There is nothing wrong with blind obedience, of course, as long as the robots happen to serve benign masters. Even in warfare, reliance on killer robots could ensure that for the first time in history, the laws of war would actually be obeyed on the battlefield. Human soldiers are sometimes driven by their emotions to murder, pillage and rape in violation of the laws of war. We usually associate emotions with compassion, love and empathy, but in wartime, the emotions that take control are all too often fear, hatred and cruelty. Since robots have no emotions, they could be trusted to always adhere to the dry letter of the military code, and never be swayed by personal fears and hatreds.23

On 16 March 1968 a company of American soldiers went berserk in the South Vietnamese village of My Lai, and massacred about 400 civilians. This war crime resulted from the local initiative of men who had been involved in jungle guerrilla warfare for several months. It did not serve any strategic purpose, and contravened both the legal code and the military policy of the USA. It was the fault of human emotions.24 If the USA had deployed killer robots in Vietnam, the massacre of My Lai would never have occurred.

Nevertheless, before we rush to develop and deploy killer robots, we need to remind ourselves that the robots always reflect and amplify the qualities of their code. If the code is restrained and benign – the robots will probably be a huge improvement over the average human soldier. Yet if the code is ruthless and cruel – the results will be catastrophic. The real problem with robots is not their own artificial intelligence, but rather the natural stupidity and cruelty of their human masters.