
Psychological roadblocks to the adoption of self-driving vehicles

Abstract: Self-driving cars offer a bright future, but only if the public can overcome the psychological challenges that stand in the way of widespread adoption. We discuss three: ethical dilemmas, overreactions to accidents, and the opacity of the cars’ decision-making algorithms — and propose steps towards addressing them.

Summary (2 min read)

Standfirst

  • The authors discuss three psychological challenges (ethical dilemmas, overreactions to accidents, and the opacity of the cars’ decision-making algorithms) and propose steps towards addressing them.
  • Manufacturers are speeding past the remaining technical challenges to the cars’ readiness.
  • For AVs, which will need to navigate their complex urban environment with the power of life and death, trust will determine how widely they are adopted by consumers, and how tolerated they are by everyone else.
  • Here the authors diagnose three factors underlying this resistance and offer a plan of action (see Table).

The Dilemmas of Autonomous Ethics

  • The necessity for AVs to make ethical decisions leads to a series of dilemmas for their designers, regulators, and the public at large [3].
  • In handling these situations, the cars may operate as utilitarians, minimizing total risk to people regardless of who they are, or as self-protective, placing extra weight on the safety of their own passengers.
  • The existence of this ethical dilemma in turn produces a social dilemma.
  • Communication about the overall safety benefits of AVs could be further leveraged to appeal to potential consumers’ concerns about self-image and reputation.
  • Virtue signalling is a powerful motivation for buying ethical products, but only when the ethicality is conspicuous [4].

Risk Heuristics and Algorithm Aversion

  • When the first traffic fatality involving Tesla’s Autopilot occurred in May 2016, it was covered by every major news organization—a feat unmatched by any of the other 40,200 US traffic fatalities that year.
  • AV spokespeople should prepare the public for the inevitability of accidents—not overpromising infallibility, but still emphasizing AVs' safety advantages over human drivers.
  • Though human themselves, and ultimately answerable to the public, legislators should resist capitulating to the public’s fears of low-probability risks [8].
  • But even if a detailed account of the computer’s decisions were available, it would only offer the end-user an incomprehensible deluge of information.
  • For AVs, whereas some transparency can improve trust, too much transparency into the explanations for the car’s actions can overwhelm the passenger, increasing anxiety [10].

A new social contract

  • Automobiles began their transformational integration into our lives over a century ago.
  • A system of laws regulating the behaviour of drivers and pedestrians, and the designs and practices of manufacturers, has been introduced and continuously refined.
  • In that time, society will need a new social contract that provides clear guidelines about who is responsible for different kinds of accidents, how monitoring and enforcement will be performed, and how trust among all stakeholders can be engendered.
  • The authors have identified several here, but more work remains.
  • Every day the adoption of autonomous cars is delayed is another day that people will continue to lose their lives to the non-autonomous human drivers of yesterday.

Challenges and suggested actions

  • The novelty and nature of AVs will result in outsized reactions in the face of inevitable accidents.
  • Manage public overreaction with “fear placebos” and information about actual risk levels.
  • Research the type of information required to form trustable mental models of AVs.



Psychological roadblocks to the
adoption of self-driving vehicles
Azim Shariff*, Jean-François Bonnefon*, and Iyad Rahwan*
Correspondence: azim.shariff@uci.edu; jean-francois.bonnefon@tse-fr.eu; irahwan@mit.edu
Standfirst
Self-driving cars offer a bright future, but only if the public can overcome the
psychological challenges that stand in the way of widespread adoption. We discuss three
(ethical dilemmas, overreactions to accidents, and the opacity of the cars’ decision-making
algorithms) and propose steps towards addressing them.

The widespread adoption of autonomous vehicles (AVs) promises to make us happier,
safer, and more efficient. Manufacturers are speeding past the remaining technical
challenges to the cars’ readiness. But the biggest roadblocks standing in the path of
mass adoption may be psychological, not technological: 78% of Americans report
fearing riding in an AV, with only 19% indicating they would trust the car [1].
Trust, the comfort in making oneself vulnerable to another entity in the pursuit of some
benefit, has long been recognized as critical to the adoption of automation, and it
becomes even more important as both the complexity of the automation and the
vulnerability of the users increase [2]. For AVs, which will need to navigate our complex
urban environment with the power of life and death, trust will determine how widely they
are adopted by consumers, and how tolerated they are by everyone else. Achieving the
bright future promised by AVs will require overcoming the psychological barriers to
trust. Here we diagnose three factors underlying this resistance and offer a plan of
action (see Table).
The Dilemmas of Autonomous Ethics
The necessity for AVs to make ethical decisions leads to a series of dilemmas for their
designers, regulators, and the public at large [3]. These begin with the need for an AV to
decide how it will operate in situations where its actions could decrease the risk of
harming its own passengers by increasing the risk to a potentially larger number of
non-passengers (e.g. pedestrians, other drivers). While these decisions will most often
involve probabilistic tradeoffs in small-risk manoeuvres, at its extreme the decision
could involve an AV determining whether to harm its passenger to spare the lives of
two or more pedestrians, or vice versa (see Figure).
<Figure about here>
In handling these situations, the cars may operate as utilitarians, minimizing total risk to
people regardless of who they are, or as self-protective, placing extra weight on the
safety of their own passengers. Human drivers make such decisions instinctively in a
split second, and thus cannot be expected to abide by whatever ethical principle they
formulated in the comfort of their armchair. But AV manufacturers have the luxury of
moral deliberation, and thus the responsibility of deliberation.
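To make the contrast concrete, both policies can be viewed as the same risk-minimization
rule with different weights on the car's own passengers. The following Python sketch is
purely illustrative: the weighting scheme, names, and probabilities are hypothetical
assumptions for this example, not an algorithm described in this article.

```python
# Illustrative sketch: utilitarian vs. self-protective risk weighting.
# All names and numbers are hypothetical, invented for this example.

def expected_harm(manoeuvre, passenger_weight=1.0):
    """Weighted expected number of people harmed by a candidate manoeuvre.

    passenger_weight = 1.0 treats everyone equally (utilitarian);
    passenger_weight > 1.0 places extra weight on the AV's own
    passengers (self-protective).
    """
    return (passenger_weight * manoeuvre["p_passengers"] * manoeuvre["n_passengers"]
            + manoeuvre["p_others"] * manoeuvre["n_others"])

def choose(manoeuvres, passenger_weight=1.0):
    """Pick the manoeuvre with the lowest weighted expected harm."""
    return min(manoeuvres, key=lambda m: expected_harm(m, passenger_weight))

# A forced trade-off: swerving endangers the single passenger,
# staying the course endangers two pedestrians.
swerve = {"name": "swerve", "p_passengers": 0.8, "n_passengers": 1,
          "p_others": 0.0, "n_others": 2}
stay   = {"name": "stay",   "p_passengers": 0.0, "n_passengers": 1,
          "p_others": 0.8, "n_others": 2}

print(choose([swerve, stay], passenger_weight=1.0)["name"])  # 'swerve' (utilitarian)
print(choose([swerve, stay], passenger_weight=5.0)["name"])  # 'stay' (self-protective)
```

Framing the difference as a single passenger weight also makes the deliberation concrete:
choosing a policy amounts to choosing, and being able to defend, a value for that weight.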
The existence of this ethical dilemma in turn produces a social dilemma. People are
inconsistent about what principles they want AVs to follow: as citizens, they recognize
the utilitarian approach to be the most ethical and want cars to save the greater
number, but as consumers, they want self-protective cars [3]. As a result, adopting either
strategy brings its own risks for manufacturers: a self-protective strategy risks public
outrage, whereas a utilitarian strategy may scare consumers away.
Both the ethical and social dilemmas will need to be addressed to earn the trust of the
public. And because it seems unlikely that regulators will adopt the strictest
self-protective solution (in which AVs would never harm their passengers, however small
the danger to passengers and however large the risk to others), we will have to grapple
with consumers' fear that their car might someday decide to harm them.

To overcome that fear, we need to make people feel both safe and virtuous about
owning an AV. To make people feel safe, we must understand how to most effectively
convey the absolute reduction in risk to passengers due to overall accident reduction,
so that it is not irrationally overshadowed by a potentially small increase in relative risk
that passengers face in relation to other road users.
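A toy calculation illustrates why the absolute framing matters. All of the numbers below
are invented for the example, not estimates from this article.

```python
# Hypothetical numbers: absolute vs. relative passenger risk.
human_total = 100.0           # serious accidents per billion km, human-driven (invented)
human_passenger_share = 0.50  # fraction of that harm borne by the car's own occupants

av_total = 20.0               # same metric after an assumed 80% overall reduction
av_passenger_share = 0.55     # utilitarian weighting shifts slightly more risk inward

print(human_total * human_passenger_share)  # 50.0  passenger risk, human-driven
print(av_total * av_passenger_share)        # 11.0  passenger risk, AV
# The passengers' *relative* share of risk rises (50% -> 55%), yet their
# *absolute* risk falls by almost a factor of five.
```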
Communication about the overall safety benefits of AVs could be further leveraged to
appeal to potential consumers’ concerns about self-image and reputation. Virtue
signalling is a powerful motivation for buying ethical products, but only when the
ethicality is conspicuous [4]. Allowing the altruistic benefits of AVs to reflect on the
consumer can change the conversation about AV ethics and prove itself to be a
marketing asset. The most relevant example of successful virtue consumerism is that
of the Toyota Prius, a hybrid-electric automobile whose distinctive shape has allowed
owners to signal their environmental commitment. However, whereas "green"
marketing can backfire for those politically unaligned with the environmental
movement [5], the package of virtues connected with AVs (safety, but also reductions in
traffic and parking congestion) contains uncontroversial values that allow consumers to
advertise themselves as safe, smart, and prosocial.
<Table about here>
Risk Heuristics and Algorithm Aversion
When the first traffic fatality involving Tesla’s Autopilot occurred in May 2016, it was
covered by every major news organization, a feat unmatched by any of the other
40,200 US traffic fatalities that year. We can expect an even larger reaction the first
time an AV kills a pedestrian, or kills a child, or two AVs crash into each other.
Outsized media coverage of crashes involving AVs may feed and amplify people's
fears by tapping into the availability heuristic (risks are subjectively higher when they
come to mind easily) and the affect heuristic (risks are perceived to be higher when they
evoke a vivid emotional reaction). As with airplane crashes, the more disproportionate,
and disproportionately sensational, the coverage that AV accidents receive, the more
exaggerated people will perceive the risks and dangers of these cars in comparison to
those of traditional human-driven ones. Worse, for AVs these reactions may be
compounded by algorithm aversion [6], the tendency for people to more rapidly lose
faith in an erring decision-making algorithm than in humans making comparable errors.
These reactions could derail the adoption of AVs through numerous paths: they could
directly deter consumers, provoke politicians to enact suffocating restrictions, or
create outsized liability issues (fuelled by court and jury overreactions) that
compromise the financial feasibility of AVs. Each path could slow or even stall
widespread adoption.
Countering these powerful psychological effects may prove especially difficult.
Nevertheless, there are opportunities. AV spokespeople should prepare the public for
the inevitability of accidents, not overpromising infallibility, but still emphasizing AVs'
safety advantages over human drivers. One barrier that prevents people from adopting
(superior) algorithms over human judgment is overconfidence in one's own
performance [7], something famously prevalent in driving. Manufacturers should also be
open about algorithmic improvements. AVs are better portrayed as being perfected, not
as perfect.
Politicians and regulators can also play a role in managing overreaction. Though human
themselves, and ultimately answerable to the public, legislators should resist
capitulating to the public’s fears of low-probability risks [8]. Instead they should educate
the public about the actual risks and, if moved to act, do so in a calculated way, perhaps
by offering the public “fear placebos” [8]: high-visibility, low-cost gestures that do the
most to assuage the public’s fears without undermining the real benefits that AVs might
bring.
Asymmetric Information and the Theory of the Machine Mind
The dubious reputation of the CIA is sometimes blamed on the asymmetry between
the secrecy of their successes and the broad awareness of their failures. AVs will face
a similar challenge. Passengers will be acutely aware of the car’s rare failures,
leading to the issues described above, but may be blissfully unaware of all the car's
small successes and optimizations.
This asymmetry of information is part of a larger psychological barrier to trust in
AVs: the opacity of the decision-making occurring under the hood. If trust is
characterized by the willingness to yield vulnerability to another entity, it is critical that
people can comfortably predict and understand the behaviour of the other entity.
Indeed, the European Union General Data Protection Regulation recently established
citizens’ "right to [...] obtain an explanation of the decision reached [...] and to
challenge the decision" made by algorithms [9].
However, full transparency may be neither possible nor optimal. AV intelligence is
driven in part by machine learning, in which computers learn increasingly sophisticated
patterns without being explicitly taught. This leaves underlying decision-making
processes opaque even to the programmer (let alone the passenger). But even if a
detailed account of the computer’s decisions were available, it would only offer the
end-user an incomprehensible deluge of information. The trend in many lower-stakes
computer interfaces (e.g. web browsers) has thus been in the opposite direction:
hiding the complex decision-making of the machine in order to present a simple,
minimalistic user experience. For AVs, whereas some transparency can improve trust,
too much transparency into the explanations for the car’s actions can overwhelm the
passenger, increasing anxiety [10].
Thus, what is most important for generating trust and comfort is not full transparency but
communication of the right amount and kind of information to allow people to develop
mental models (an abstract representation of the entity’s perceptions and decision
rules) of the cars [5]: a sort of theory of the machine mind. There is already a robust
literature investigating what information is most crucial to communicate; however, most
of this research has been conducted on AI in industrial, residential, or software-interface
settings. Not all of it will be perfectly transferable to AVs, so researchers need to
investigate what information best fosters predictability, trust, and comfort in this new and
specific setting. Moreover, AVs will need to communicate not just with their passengers,
but with pedestrians, fellow drivers, and the other stakeholders on the road. Currently,
people decipher the intentions of other drivers through explicit signals (blinkers, horns,
gestures) and through assumptions based on the mental models formed of drivers (why
is she slowing down here? why is he positioning himself like that?). Everyone on the
road will need to adjust their human models to those of AVs, and the more research
there is delineating what information people find crucial and comforting, the more
seamless and less panicky this transition will be.
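As a small illustration of calibrating the kind and amount of information, the study cited
as reference [10] compared messages describing what the car is doing ("how"), why it is
doing it ("why"), or both. The sketch below mirrors that distinction in code; the Event
type, function names, and message wording are hypothetical, not an interface from the
study.

```python
# Illustrative sketch of tiered passenger-facing explanations, loosely
# following the "how"/"why" message distinction of reference [10].
# The Event type and wording are hypothetical, not a real AV interface.

from dataclasses import dataclass

@dataclass
class Event:
    action: str  # what the car is doing, e.g. "braking"
    reason: str  # why it is doing it, e.g. "obstacle ahead"

def explain(event: Event, level: str) -> str:
    """Render an event at a chosen transparency level."""
    if level == "how":       # action only, e.g. "The car is braking."
        return f"The car is {event.action}."
    if level == "why":       # reason only, e.g. "Obstacle ahead."
        return f"{event.reason.capitalize()}."
    if level == "how+why":   # most informative, but risks overwhelming
        return f"{event.reason.capitalize()}: the car is {event.action}."
    raise ValueError(f"unknown level: {level}")

e = Event(action="braking", reason="obstacle ahead")
for level in ("how", "why", "how+why"):
    print(f"{level}: {explain(e, level)}")
```

In that study, "why" messages were preferred by drivers and improved driving
performance, while combining both produced the safest driving but increased drivers'
negative feelings, consistent with the point above that too much explanation can raise
anxiety.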
A new social contract
Automobiles began their transformational integration into our lives over a century ago.
In this time, a system of laws regulating the behaviour of drivers and pedestrians, and
the designs and practices of manufacturers, has been introduced and continuously
refined. Today, the technologies that mediate these regulations, and the norms, fines
and other punishments that enforce them, maintain just enough trust in the traffic
system to keep it tolerable. Tomorrow, the integration of autonomous cars will be
similarly transformational, but will occur over a much shorter timescale. In that time, we
will need a new social contract that provides clear guidelines about who is responsible
for different kinds of accidents, how monitoring and enforcement will be performed, and
how trust among all stakeholders can be engendered. Many challenges remain
(hacking, liability, and labour displacement issues, most significantly), but this social
contract will be bound as much by psychological realities as by technological and legal
ones. We have identified several here, but more work remains. We believe it is morally
imperative for behavioural scientists of all disciplines to weigh in on this contract. Every
day the adoption of autonomous cars is delayed is another day that people will continue
to lose their lives to the non-autonomous human drivers of yesterday.


References

Griskevicius, V., Tybur, J. M. & Van den Bergh, B. Going green to be seen: status, reputation, and conspicuous conservation. Journal of Personality and Social Psychology (2010).
TL;DR: Supporting the notion that altruism signals one's willingness and ability to incur costs for others' benefit, status motives increased desire for green products when shopping in public and when green products cost more (but not less) than nongreen products.

Bonnefon, J.-F., Shariff, A. & Rahwan, I. The social dilemma of autonomous vehicles. Science (24 Jun 2016).
TL;DR: Even though participants approve of autonomous vehicles that might sacrifice passengers to save others, respondents would prefer not to ride in such vehicles, and regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.

Dietvorst, B. J., Simmons, J. P. & Massey, C. Algorithm aversion: people erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General (2015).
TL;DR: People are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster, because they more quickly lose confidence in algorithmic than in human forecasters after seeing them make the same mistake.

Koo, J. et al. Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. International Journal on Interactive Design and Manufacturing (2015).
TL;DR: Messages providing only "how" information describing actions (e.g., "The car is braking") led to poor driving performance, whereas "why" information describing reasoning for actions (e.g., "Obstacle ahead") was preferred by drivers and led to better driving performance; providing both resulted in the safest driving but increased negative feelings in drivers.
