On 7 May 2016, Joshua Brown put his Tesla Model S into Autopilot mode. The car drove at full speed under a trailer. He died in the crash. (1) While the company suggested that statistically Tesla was still doing quite well – this was the first death in 130 million miles, a much better rate than the general one in the U.S. – the accident raises questions about automated cars. Will drivers trust their autopilot too much? Can autopilot systems in cars become smart enough to avoid these kinds of accidents? Can humans stay attentive when the steering of the car is taken over by the autopilot? Is a death caused by automation worse, i.e. less acceptable, than a death caused by the mistake of a human driver? Are we prepared to accept the risks, and if not, why are we prepared to accept them in the case of airplanes? Should we have self-driving cars at all?
These problems are not unique to self-driving cars. There are plans to automate many technologies in many domains of life. Developments in the areas of robotics and artificial intelligence seem to be accelerating. A lot of research money is put into these domains by governments and by big companies such as Google. There are plans for automation in the military (drones, for instance), and even health care – a domain traditionally seen as requiring humans – is seeing the introduction of “care robots”. Automation is no longer confined to the factories; the robots are here. But what can be automated, and what should be automated? And are we prepared to take the risks? What are the ethical and legal consequences?
While there are increasingly intelligent machines that can do tasks previously held to be impossible – consider for instance IBM’s Watson (2) or Google’s DeepMind (3) – not everything can be automated. This is because humans have a different kind of intelligence. They are embodied, they have emotions, and they are not machines. They are biological and cultural beings. They can be highly creative and give meaning to their world. They are able to care – not only to care for, but also to care about others. They cannot and should not be reduced to information-processing things. They are not things at all; they experience and exist. Humans can never be entirely replaced by machines.
That being said, the days when automation was reserved for very simple tasks are past. Increasingly, algorithms can take over more complex tasks and therefore also more complex jobs, or so it seems. Driving is one of them. Waiters can be replaced. It is also said that journalists and medical doctors will become replaceable. Researchers at Oxford University have predicted that in the next two decades 47% of all U.S. jobs could be automated. (4) If this were to become reality, it seems that many people would have to fear for their jobs. This has happened before, in the industrial past, and it could happen again in what Erik Brynjolfsson and Andrew McAfee call ‘the second machine age’. (5)
Yet even if it turns out that fewer jobs will be automated, and if, as is more likely, many humans remain involved in many tasks and jobs alongside robots, it is important to think now about the potential psychological, ethical, and social consequences of automation. How will automation re-shape our human experience of certain activities, and indeed the activities themselves, such as writing or communication? Should automation be limited in some domains of human practice, such as health care? Which tasks should be delegated to computers and robots, and which tasks should remain human? How can humans and machines work together in particular areas? What kind of society are we heading towards if the current automation trend continues? Is automation good for everyone? Who will be the winners and who will be the losers? How should we politically deal with the risks and challenges of automation?
Even if there is no robot Armageddon, even if the machines do not fully take over, it is crucial that we address these ethical and social questions now, at the stage when these technologies are still being designed and tested. Otherwise we might blindly and automatically walk into a future no one wants.