
What are the Risks and Rewards of Autonomous Vehicles?


Autonomous vehicles (AVs) aren't commonplace on our roads yet, and when that future vision will become a reality is still unknown.

Meanwhile, we continue to grapple with defining the benefits and detriments of sharing our highways and byways with driverless cars. Once thought to be the stuff of science fiction, these vehicles combine sensors, cameras, radar and artificial intelligence to navigate and make decisions on the road.

In recent years, the technology has brought both successes and failures, prompting calls for more research, more testing and more fine-tuning.

So, what safety, economic, moral and social consequences do AVs pose? How will they affect transportation and the public? What implementation strategies best minimize risks and maximize rewards? How do we navigate the road ahead?

To help us understand these and other questions, we spoke with three scholars from our Department of Psychology and one from our Department of Philosophy and Religious Studies. They offer insights into the technology’s societal ramifications, opportunities for enhancing accessibility, safety and trust concerns, the need for stronger ethical programming, the importance of human supervision and more.

What are the moral decision-making challenges of AVs?

Veljko Dubljevic
Associate Professor of Philosophy and Science, Technology and Society

The development and growth of artificial intelligence (AI) have spurred important challenges for humanity. A crucial issue is how to ensure that AI systems benefit society while helping realize human values.

This question, called the alignment problem, has been extensively discussed in recent years. 

A particularly important subject of discussion concerns which values or principles AI should align with and, before that, which procedure AI ethicists should implement to ascertain the relevant values. 

The second important issue pertains to conflicts of values. While we (usually) value following rules, certain situations may be better resolved not by strictly following rules but by focusing on other aspects of the moral domain.

For instance, while it was tempting to believe that autonomous vehicles could circumvent ethical problems by simply stopping and pulling to the side of the road, recent incidents involving pedestrian injuries attest to the need for a more nuanced and robust approach to ethical guidance functions.

The work of the NeuroComputational Ethics Research Group, and especially its testing of the Agent-Deed-Consequence (ADC) model, shows promise.
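
For readers curious how such a model might be operationalized, here is a minimal, hypothetical sketch of the ADC model's core idea: a moral judgment combines separate evaluations of the agent's intentions, the deed itself and its consequences. The scoring scale, weights and example values below are invented for illustration and are not the research group's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """Toy evaluations on a -1 (bad) to +1 (good) scale; values are illustrative."""
    agent: float        # the agent's character and intentions
    deed: float         # the action itself, judged against rules
    consequence: float  # the outcome produced

def adc_judgment(s: Scenario, weights=(1.0, 1.0, 1.0)) -> float:
    """Combine the three evaluations into a single moral-acceptability score."""
    wa, wd, wc = weights
    return (wa * s.agent + wd * s.deed + wc * s.consequence) / (wa + wd + wc)

# Example: an AV crosses a double line (a rule violation) to avoid
# hitting a pedestrian (good intention, good outcome).
swerve = Scenario(agent=1.0, deed=-0.5, consequence=1.0)
print(adc_judgment(swerve))  # 0.5: acceptable overall despite the rule violation
```

A rule-only system would reject the swerve outright; weighing the deed against the agent's intentions and the consequences is what allows the kind of nuance Dubljevic describes.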

An image from a recent paper by Dubljevic's research team shows a virtual traffic scenario.

Should we embrace or resist this technology?

Jing Feng
Associate Professor of Psychology

There is still a very long way to go before full automation. Until then, human supervision will remain essential as we gradually adopt vehicle automation technology.

Research has identified human factors problems as drivers transition from controlling the vehicle to supervising it. With increasing automation, drivers may become disengaged: hands drift off the wheel, eyes leave the road, and minds wander away from driving.

Solutions include technologies to monitor drivers and periodically re-engage them with the driving task. Additionally, pairing advanced vehicle automation with telecommunication technologies may enable a remote operator to supervise the vehicle.

This would turn everyone inside the vehicle into a passenger, potentially enhancing the mobility of individuals who cannot supervise a vehicle, such as people who are blind. Remote operation, however, introduces new human factors challenges of its own that need to be solved.

There are several proactive steps we can take to use vehicle automation more effectively. It’s important to continually educate ourselves about this technology. Research suggests a better understanding of it could lead to more appropriate trust and usage. 

Never assume a car is self-driving or can perform without your supervision. Approach automated features with caution, stay vigilant and observant, and let experience guide your understanding of the automation's reliability and effectiveness.

What opportunities and challenges do AVs present in terms of accessibility and inclusion?

Yingchen He 
Assistant Professor of Psychology

The emergence of AVs presents great opportunities for enhancing accessibility and inclusion. AVs free up the time people spend driving and, with well-trained driving behavior, could improve transportation safety.

They also provide greater freedom and independence for those who cannot drive and may need to rely on friends, caregivers, public transportation or scheduled ride services. This could enhance equity in employment and education opportunities, especially for individuals living in distant locations.

However, accessibility challenges remain. For example, how can blind and visually impaired individuals safely navigate the first and last mile between the AV and their destination, considering the complex environment and unpredictable human behavior? 

Furthermore, from a pedestrian's perspective, can AVs reliably detect white canes and guide dogs, which signal visual impairment, and act accordingly?

We must also ensure that AVs do not unintentionally widen existing equity gaps. For instance, if some groups feel unsafe and less trusting inside AVs and avoid using them, they could be disadvantaged as AVs become more prevalent.

We also need to be mindful of unforeseen side effects, akin to the safety concerns that quiet electric cars raised for visually impaired pedestrians. Only by fully addressing accessibility can AVs truly unlock their potential to enhance transportation equity.

How do our perceptions of AVs influence whether we choose to use them or not?

Colleen Patton
Assistant Professor of Psychology

Our perceptions of automation generally track its actual abilities: people can correctly recognize when an automated system is more or less reliable, and they tend to calibrate their trust accordingly.

Self-driving cars, however, pose a unique challenge because they introduce automation in high-risk situations where operator confidence is high. In other words, the consequences of poor driving are severe (crashes and deaths), and drivers tend to believe they are quite good at driving.

Even though autonomous vehicles are highly reliable, these other salient influences on a driver's perception of the situation make drivers less likely to trust the car. Without a reasonable baseline of trust, drivers will not choose to turn on the self-driving features, even if those features really are safe and reliable, or better than the driver alone.

So, when we couple people's general distrust of self-driving cars with their overconfidence in their own driving abilities, we end up with less use of self-driving features.

Whether this is a good or bad thing remains to be seen. However, we are certainly far from a real-life version of The Jetsons, or even from most cars driving themselves, even if their capabilities exceed those of the average human driver.

This post was originally published in College of Humanities and Social Sciences.