Not another article, you ask, on the ethical challenges of autonomous vehicles (AVs) choosing between saving the baby in the back seat and killing the cyclist in front of the vehicle? Not quite. But in 'only in America' moments, Tesla enthusiasts have taken AV testing into their own hands by putting their children in front of their cars to test the vehicles' safety features. (In the interest of responsible writing, no links: as Adam Bandt would say, 'Google it, mate'.)

On the basis that we are now past the 'live testing' phase for AVs, the time has come to get into the detailed design of the legal and regulatory framework for AVs. A recent policy paper published by the UK's Centre for Data Ethics and Innovation (the 'Policy Paper') provides some interesting insights into the direction of laws and regulations for autonomous vehicles.

Is ‘safe enough’ good enough?

The Policy Paper suggests that AVs need to be ‘safe enough’:

'Road safety is a key consideration for AVs. If the technology is not seen as "safe enough", it is unlikely to be accepted by the public. However, there is no empirically verifiable answer to the question of "how safe is safe enough?"'

This may be unsettling for some consumers because it implies that a certain amount of risk needs to be accepted in deploying AVs. Surveys show that most consumers are still apprehensive about AVs, and attitudes have not shifted in the last five years:

  • 58% of respondents said they would be uncomfortable using self-driving vehicles, while 55% said they would be comfortable sharing the road with them; and
  • In a digital echo of the person who walked with a red flag in front of the earliest motor cars, 86% of respondents agreed or strongly agreed that 'it must be clear when a vehicle is driving itself', both to the driver and to people around the vehicle.

Even if AVs objectively cause fewer crashes than human-controlled vehicles, consumers are less willing to accept crashes caused by factors outside their control unless there is an exponential improvement in safety. Perhaps this is because most drivers have an (unwarranted) high level of confidence in their own driving ability.

So a 'safe enough' standard for AVs, while realistic, could be a 'hard sell'. Recognising this, the Policy Paper supports a managed introduction of AVs over time, during which hazardous behaviours of AV software are identified and mitigated. The Policy Paper frankly acknowledges that autonomous vehicle manufacturers will need to learn on the job, and that consumers will need to accept this level of risk because there will never be complete foresight. Trust is built by having this learning happen within a public, transparent and accountable process.

How will this be achieved?

Just as human drivers are subject to standardised tests and laws holding them to account on the roads, we have a unique opportunity to set the tests and laws that will shape the decision-making process of an AV. How?

  • Data-sharing amongst competitors: Data about crashes and near-misses needs to be shared amongst competing AV suppliers in order for them to learn. This is particularly important in new industries, as it provides stakeholders with a larger pool of information with which to develop their products. The UK is considering establishing a 'no-blame safety culture' that promotes the idea that 'anybody's accident is everybody's accident', akin to the black box on an aircraft, which is shared with investigators.
  • Promoting explainability: 'Explainability' is about 'being able to understand why AV systems do what they do, both in real-time and in hindsight'. Consumers need to be comforted that AVs make decisions that prioritise their safety, and transparency about how those decisions are made will encourage faster adoption of AVs.

Achieving this level of openness between competitors will be tricky: start-up electric (and automated) car manufacturers hold their IP close to their chests, the major legacy manufacturers feel under assault from those start-ups, and overlaying all of this is the dynamic of the West and China vying to lead the manufacture of electric vehicles.

This also raises a competition/market structure question. As the Policy Paper recognises, competition based on safety can be an important dimension of a dynamic market – consider, for example, the ‘boxy’ Volvos sold in the 1970s known for their higher safety rating.

This approach of 'competing on safety' contrasts with the medical and airline industries, where learning is prioritised over legal liability in incident investigations. In the airline industry, for example, the US National Transportation Safety Board has sought to encourage the idea that 'anybody's accident is everybody's accident'. Until recently, it was possible to assert that airlines and aircraft manufacturers did not compete on safety, although the Policy Paper suggests that the recent Boeing 737 Max experience may show that this approach has its own downside: it can accommodate complacency.

However, the Policy Paper, echoing the views of the UK Law Commissions, says that a ‘safety first’ approach should be taken and that a ‘no-blame safety culture’ would allow learning between competitors.

AVs motoring into other areas of law

Some of the most interesting aspects of the Policy Paper are about the legal framework to address issues arising from AVs which lie well beyond road safety (and the traditional ambit of vehicle-related regulation).

The privacy of AV users is a critical issue. AVs will collect rich information about their occupants, which may include timestamped locations, details of passengers, and the condition of the driver (e.g. alertness).

But less understood is that the 'surveillance' technology built into autonomous vehicles can be turned outwards to see everything around the vehicle as it moves through the landscape. AVs will use cameras to 'see' the road, collecting and storing video footage of the private and public spaces the vehicle travels through, as well as of pedestrians and other road users. The footage may also be coupled with advancements like 'gaze detection' technology, which tracks the eyes of people outside the vehicle and assesses their intentions (e.g. has the pedestrian seen the AV approaching the pedestrian crossing, or were their eyes downcast on their phone as they stepped out onto the road?).

While primarily for safety, this technology embedded in AVs can also be useful for other applications such as insurance claims, traffic management and infrastructure planning. However, it can also be used for surveillance, including potentially by law enforcement, which raises questions about the privacy of road users and the public generally. AVs may also incidentally collect data unrelated to the driving experience as they pass by: a robbery being committed on a sidewalk, for example. Should the AV dial the police?

At some level, these datapoints are already being monitored, collected and used by law enforcement.

However, the potential ubiquity and granularity of the data that AVs would collect will raise eyebrows for consumers, especially those who are privacy-conscious.

The Policy Paper does not say that privacy must be sacrificed when AVs are introduced, but it does acknowledge that AVs raise privacy concerns and recommends that clear boundaries be set.

In Australia, the National Transport Commission is drafting the AVSL and is considering whether specific privacy protection provisions should be included to govern the actions of an AV regulator.

Read more: Responsible Innovation in Self-Driving Vehicles