Earning the right amount of trust is the biggest challenge facing autonomous cars, says NRMA Director and author Rachel Botsman.
The first time elevators went driverless, many people took one look at these tiny metal boxes, plummeting freely through space, and had the same reaction: ‘There’s no way I’m getting into that thing. Not without a driver!’
The idea of an elevator without a driver was initially dismissed as risky – or even crazy. Before people would step inside one, they had to be persuaded to take a ‘trust leap’. And it’s the very same leap people will need to take to get inside a self-driving car.
A trust leap happens when we take a risk and do something in a new or fundamentally different way. This could be something monumental, like the first time you entered your credit card details online; or more incremental, like switching from paper to online invoicing.
Let’s not pretend otherwise: a great deal of persuasion will be needed before autonomous vehicles achieve widespread adoption – and the real hurdle is psychological. As with elevators, these are big moving things that could kill us.
Trust-centred design in new technology has two common features: first, it gives some control back to the user. In the case of the elevator, it was the big red ‘stop’ button. Self-driving car manufacturers are building in ways for humans to switch off or take over the machine.
Second, it personifies the technology to make it seem more human or harmless. In the case of elevators, there was the calming bland music, the soft female voice announcing floors and an ad campaign depicting children pressing elevator buttons with their granny. A self-driving car can do this in several ways: its shape can be round and cute; its design can make you feel warm and fuzzy, not intimidated; and it can include elements that are redundant but increase the feeling of familiarity, such as a rear-view mirror.
Personifying the vehicle and often gendering it female also increases trust. Recent studies by Northwestern University’s Kellogg School of Management showed that when participants were put inside an automated car called Iris, who spoke to them, their trust in the vehicle was significantly higher than that of participants who rode her identical but anonymous, un-gendered counterpart.
Australians already seem surprisingly keen to trust a self-driving car. Our attitudes appear quite advanced compared with those of American drivers. Three out of four drivers in the US said they’d feel afraid to ride in self-driving cars, and 84 per cent said this was because they trusted their own driving skills more than the technology, according to a survey conducted by the American Automobile Association (AAA) in 2016.
Compare this with Australian keenness. Seven in 10 Australians want a self-driving car to take over when they feel tired or bored. Just under half already believe autonomous vehicles will be safer than a human driver, according to a study conducted by the Australian Driverless Vehicle Initiative.
But herein lurks an overlooked challenge: not whether people will step into the cars in the first place, but how to stop them trusting the cars too quickly and too easily. For my book, Who Can You Trust?, I interviewed Dr Brian Lathrop, who works in VW’s Electronics Research Lab. He told me that, after about 20 minutes of shock and awe at the automation of self-driving cars, the experience feels normal – even boring. He worries that people will give away their trust too easily – and nod off. “When it comes to autonomous cars, it’s a system. It’s a machine. It’s not aware of everything,” Lathrop says.
Last year in Florida, a Tesla Model S on Autopilot crashed into the side of a truck, killing its driver, 40-year-old Joshua Brown. The truck driver reported he could hear a Harry Potter film playing when he approached the car. Investigators found both a portable DVD player and a laptop inside. In a blog post, Tesla stated, “Neither the autopilot nor the driver noticed the white side of the tractor trailer against the brightly lit sky, so the brake was not applied.”
For the machine, it was a rare error. Brown’s is the only death so far in more than three billion miles driven using Tesla’s Autopilot feature. For the human, it was an avoidable error. If the trust leap had been made with slightly more caution and alertness, the driver might have avoided the crash.
We may be focusing on the wrong question when we ask how to persuade people to trust the self-driving car. The more pressing concern could, ironically, be whether people will give away their trust too easily.
This article was originally published in Open Road magazine.