The driverless car: a bright or dystopian future ahead?
Autonomous cars are already being tested on public roads in the US. For the sick, infirm and incompetent, it’s a boon; for others, it’s a sign our reliance on computers is out of control
The age of the petrolhead car-obsessive is, hopefully, about to come to an end. If Google has its way, the cars of the not-too-distant future will be able to drive themselves, freeing up the occupants to kick back and relax while they’re ferried to their destinations. But if you listened to the howls of those boy-racers who are aghast at the thought of losing control of their precious motorised toys, you’d think people were about to be plunged into a joyless future where machines will send them to their deaths in motorised metal coffins.
The news that self-driving (or ‘autonomous’) cars are set to roll onto public streets in the UK next year caused a stir, with many drivers wondering how safe it will be to share the road with a computer-controlled vehicle.
Driving enthusiasts aren’t exactly the most level-headed people. If they’re not hurtling down a motorway over the speed limit, they can usually be found clogging up the already cramped city streets, spewing noxious CO2 emissions into the atmosphere, and bellowing about how it’s their right to have their own chariot take them to work. Ludicrously, their main argument against autonomous cars seems to be about safety, something they have little regard for when bending almost all rules of the road.
While some of the protests suggest autonomous cars will be set loose on our streets without any thought for safety, the reality is quite different. The UK introduction will be limited to trials in selected cities, lasting between 18 and 36 months and involving only a small number of cars. In the US, states including California and Nevada have signed up to test driverless cars, while Japan and Sweden have also undertaken extensive testing of the technology.
Google’s tests have so far seen more than 700,000 miles driven – 1,000 of which were on difficult terrain and awkward roads – and only one reported accident has occurred. Unsurprisingly, that happened when a human was driving.
The reality is that millions have been invested by some of the world’s leading technology companies in developing a safe, reliable and efficient way of transporting people around that takes the primary cause of accidents – human error – out of the equation: the UK experiences five road deaths related to driver error every day.
With autonomous cars, the problems of inexperienced and careless drivers, speeding and drink driving will all be eliminated. Safety will be determined by the ability of a highly advanced computer to judge its surroundings. The driver will be free to put their feet up, read a book, watch a film, or even do some work, turning all those wasted hours of the day into time spent driving the economy instead.
Technology has come a long way over the last few decades. We can safely assume the scientists who devised a way of using satellites to pinpoint where you are on a map are more than capable of developing cars that are fully aware of their surroundings. While the joy of driving slightly over the speed limit will be prised from the gripped hands of petrolheads and handed over to an emotionless computer, the lives saved as a result are surely worth the sacrifice.
There is one last reason self-driving cars cannot hit the roads soon enough: there are many in the world incapable of driving, be it for health reasons, age or incompetence – myself included. We should not be excluded from the outside world. Finally, the stigma of not being able to drive to the shops will be a thing of the past.
Consumers are right to relish the thought of turning their daily commute into a real head start on the day. Imagine those many, many mornings and nights spent with both hands on the wheel, staring into the all-too-familiar abyss of rush hour traffic: now imagine using that time instead to do whatever you want. Except – and this is key – your safety rests on the ability of a computer program to prevent your self-driving car from running a red light, speeding along school roads, or plunging you headlong off the side of a cliff.
The prospect of freeing up time and eliminating traffic casualties obviously makes for attractive reading, but it appears that, amid the haze of early development excitement, many have lost sight of what could just as easily make the technology so problematic, if not life-threatening.
The single most frequently cited advantage of autonomous vehicles is reduced road traffic casualties: that unwitting pedestrians would, in theory, no longer fall foul of drink drivers, tiredness, or any number of human errors. However, the theory is not without its limitations, and the moment a self-driving car runs a red light or clocks a couple of mph over the limit, the industry will find itself forced to contend with wave upon wave of legal questions, all of them asking: ‘Who is to blame?’
Without a designated driver to speak of, the issue of who will be held responsible for an accident looks likely to throw the industry into disarray. Whether the owner, manufacturer or algorithm is to blame for any wrongdoing is a question few people – if any – are qualified to answer. What’s even more disconcerting is that blame cannot simply be pinned on a computer once lives are at stake. The reassurance that no accidents have been caused by automated cars so far does not change the fact that no technology is immune to flaws.
Most worrying of all is that California passed a bill last year paving the way for autonomous cars without addressing the issue of liability in any depth whatsoever. When pressed on the issue of who would get the blame in the event of any wrongdoing, California Governor Jerry Brown retorted: “I don’t know – whoever owns the car, I would think,” before following up with: “That will be the easiest thing to work out.” Google’s Sergey Brin, who was also in attendance, was similarly flippant about the potential risks. “Self-driving cars don’t run red lights,” he said quite simply.
Aside from the issue of accountability, there’s also the question of ethics, and whether a computer is capable of weighing up what’s more important when it comes to making potentially life-threatening decisions. The main problem is that a situation could arise in which a computer-driven car is forced to choose between two dangerous actions, and, in that moment, must decide which is the less costly of the two. Whereas a human being is likely to factor ethical considerations into the mix, a computer can surely only opt for the more logical of the two. Without a human to guide it, an autonomous car is essentially a machine, programmed either to protect the driver’s best interests at all costs or else to limit damage as much as possible – even if that means putting the driver at risk.
The technology is still in its infancy, but what is already clear is that the industry faces far greater challenges than cost and technological complexity if it is to avert what increasingly looks like an inevitable disaster.