That figure alone demonstrates the serious opportunity presented by this huge change in transport infrastructure. However, large hurdles remain in the levels of trust shown by the public – the passengers of this proposed autonomous future – when asked whether they would be happy to be driven by a robot. The number one concern is safety and the risk of accidents.
The 2018 Global Automotive Consumer Study, released last month by Deloitte, found that the public are gradually getting used to the idea of being transported by autonomous vehicles. What’s more, people are starting to trust artificially intelligent systems: 47% of US respondents currently feel that autonomous cars won’t be safe, a marked improvement on the previous year’s 74%.
But this is still a sizeable percentage who remain unconvinced. The automotive sector might invest huge sums in developing exceptionally elegant driverless cars, but if passengers don't feel safe enough to use them, the industry is not going anywhere very fast.
So how can industry help scale the hurdle of trust?
The safety argument for self-driving vehicles is certainly compelling on paper. Statistics and early research suggest that autonomous cars will be far safer than human drivers on the roads, as they can’t get distracted, check phone messages or fall asleep. An estimated 90% of car crashes are caused by human error, typically a lapse in concentration. The Huffington Post reports that autonomous car development company Waymo has logged over two million miles on US streets and has been at fault in only one accident. Even the safest demographic of human drivers – 60-69-year-olds – are 10 times more likely than Waymo’s cars to be at fault in an accident. New drivers? They’re 40 times more likely.
However, we must recognise that simply replacing human drivers with autonomous ones will not prevent every accident. There are many miles of research still to cover: driverless cars must learn to process the unpredictable nature of the roads effectively, while remaining predictable to other vehicles. There have, of course, been a number of serious accidents, widely reported in the media recently, involving autonomous cars from Tesla and Uber. These require more analysis before we can draw conclusions, but they have drawn attention to the need for clarity between the human and the autonomous car about who is in control at any given moment.
A crucial point is understanding the level of automation being applied in any particular case. To date, all self-driving cars carrying passengers on public roads operate at ‘Level 2 autonomy’, meaning computers take over multiple functions from the driver, but the driver is strictly required to stay actively engaged with the journey and take back control immediately on request. Some new models are ‘Level 3 ready’, in which the car takes over all controls but the driver must still stay involved as a fallback, while Level 4 – predicted by 2023 – will see fully driverless cars used in geofenced urban areas. The ultimate prediction is that manufacturers will offer exceptionally advanced vehicles at Level 5 autonomy – where the car becomes a ‘cocoon’ in which to sit passively and be transported – on the roads by 2035.
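For readers who think in code, the ladder of autonomy levels described above can be sketched as a simple classification. This is an illustrative sketch only, not an official SAE definition; the enum and helper names are our own invention:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Driving automation levels as summarised in the article (illustrative)."""
    LEVEL_2 = 2  # Partial automation: computer handles multiple functions,
                 # but the driver must stay engaged and retake control on request
    LEVEL_3 = 3  # Conditional automation: car takes over all controls,
                 # driver must still stay involved as a fallback
    LEVEL_4 = 4  # High automation: fully driverless within geofenced urban areas
    LEVEL_5 = 5  # Full automation: the 'cocoon' -- no driver involvement at all

def driver_must_stay_engaged(level: AutonomyLevel) -> bool:
    # Per the summary above, Levels 2 and 3 still rely on a human fallback;
    # only at Levels 4 and 5 does responsibility pass fully to the vehicle.
    return level <= AutonomyLevel.LEVEL_3
```

The key design point the sketch captures is that the boundary of responsibility sits between Levels 3 and 4 – below it, the human is always part of the control loop.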