Bio: Brenda Leong is an academic researcher from the Future of Privacy Forum. The author has contributed to research on topics including information privacy and public engagement. The author has an h-index of 1, having co-authored 3 publications receiving 19 citations.
29 Jan 2019
TL;DR: A new taxonomy is created that identifies fundamental types of dishonest anthropomorphism and pinpoints the harms they can cause, and a representative series of ethical issues, proposals, and questions is critically considered concerning whether the principle of honest anthropomorphism has been violated.
Abstract: The goal of this paper is to advance design, policy, and ethics scholarship on how engineers and regulators can protect consumers from deceptive robots and artificial intelligences that exhibit the problem of dishonest anthropomorphism. The analysis expands upon ideas surrounding the principle of honest anthropomorphism originally formulated by Margot Kaminski, Matthew Rueben, William D. Smart, and Cindy M. Grimm in their groundbreaking Maryland Law Review article, "Averting Robot Eyes." Applying boundary management theory and philosophical insights into prediction and perception, we create a new taxonomy that identifies fundamental types of dishonest anthropomorphism and pinpoints harms that they can cause. To demonstrate how the taxonomy can be applied, as well as clarify the scope of the problems that it can cover, we critically consider a representative series of ethical issues, proposals, and questions concerning whether the principle of honest anthropomorphism has been violated.
TL;DR: This article comprehensively presents the leading ethical issues in debates about facial recognition technology, including standards, measures, and disproportionately distributed harms; erosions of trust; ethical harms associated with perfect facial surveillance; and alienation, dehumanization, and loss of control.
Abstract: This is a comprehensive presentation of leading ethical issues in debates about facial recognition technology. After defining basic terms (facial detection, facial characterization, facial verification, and facial identification), the following issues are discussed: standards, measures, and disproportionately distributed harms; erosions of trust; ethical harms associated with perfect facial surveillance; alienation, dehumanization, and loss of control; and the slippery slope debate.
21 Jun 2021
Abstract: The 'AI in My Life' project will engage 500 Dublin teenagers from disadvantaged backgrounds in a 15-week (20-hour) co-created, interactive workshop series encouraging them to reflect on their experiences in a world shaped by Artificial Intelligence (AI), personal data processing and digital transformation. Students will be empowered to evaluate the ethical and privacy implications of AI in their lives, to protect their digital privacy, and to build awareness of STEM careers and university pathways. It extends the 'DCU TY' programme for innovative educational opportunities for Transition Year students from communities underrepresented in higher education. Privacy and cybersecurity researchers and public engagement professionals from the SFI Centres ADAPT and Lero will join experts from the Future of Privacy Forum and the INTEGRITY H2020 project to deliver the programme to the DCU Access 22-school network. DCU Access has a mission of creating equality of access to third-level education for students from groups currently underrepresented in higher education. Each partner brings proven training activities in AI, ethics and privacy. A novel blending of material into a youth-driven narrative will be the subject of initial co-creation workshops and supported by pilot material delivery by undergraduate DCU Student Ambassadors. Train-the-trainer workshops and a toolkit for teachers will enable delivery. The material will use a blended approach (in person and online) for delivery during COVID-19, which will also enable wider use of the material developed.
An external study of programme effectiveness will report on participants': enhanced understanding of AI and its impact; improved data literacy skills in terms of their understanding of data privacy and security; empowerment to protect privacy; growth in confidence in participating in public discourse about STEM; increased propensity to consider STEM subjects at all levels; and greater capacity of teachers to facilitate STEM interventions. This paper introduces the project, presents more details about the co-creation workshops, a particular step in the proposed methodology, and reports some preliminary results.
TL;DR: In this article, the authors developed a comprehensive model to investigate relationships between anthropomorphism and its antecedents and consequences, and found that the impact depends on robot type (i.e., robot gender) and service type (i.e., possession-processing service, mental stimulus-processing service).
Abstract: An increasing number of firms introduce service robots, such as physical robots and virtual chatbots, to provide services to customers. While some firms use robots that resemble human beings by looking and acting humanlike to increase customers’ use intention of this technology, others employ machinelike robots to avoid uncanny valley effects, assuming that very humanlike robots may induce feelings of eeriness. There is no consensus in the service literature regarding whether customers’ anthropomorphism of robots facilitates or constrains their use intention. The present meta-analysis synthesizes data from 11,053 individuals interacting with service robots reported in 108 independent samples. The study synthesizes previous research to clarify this issue and enhance understanding of the construct. We develop a comprehensive model to investigate relationships between anthropomorphism and its antecedents and consequences. Customer traits and predispositions (e.g., computer anxiety), sociodemographics (e.g., gender), and robot design features (e.g., physical, nonphysical) are identified as triggers of anthropomorphism. Robot characteristics (e.g., intelligence) and functional characteristics (e.g., usefulness) are identified as important mediators, although relational characteristics (e.g., rapport) receive less support as mediators. The findings clarify contextual circumstances in which anthropomorphism impacts customer intention to use a robot. The moderator analysis indicates that the impact depends on robot type (i.e., robot gender) and service type (i.e., possession-processing service, mental stimulus-processing service). Based on these findings, we develop a comprehensive agenda for future research on service robots in marketing.
TL;DR: It is argued that the performative threshold robots need to cross in order to be afforded significant moral status may not be that high, that they may soon cross it, and that we may need to take seriously a duty of 'procreative beneficence' towards robots.
Abstract: Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory, 'ethical behaviourism', which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position onboard, it is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high and that they may soon cross it (if they haven't done so already). Finally, the implications of this for our procreative duties to robots are considered, and it is argued that we may need to take seriously a duty of 'procreative beneficence' towards robots.
TL;DR: In this article, the authors argue that over the next five to ten years society will see a shift in the nature of the Web as consumers, firms and regulators become increasingly concerned about privacy. They predict that various information sharing and protection practices currently found on the Dark Web will be increasingly adapted across the overall Web, and that, in the process, firms will lose much of their ability to fuel a modern marketing machinery that relies on abundant, rich, and timely consumer data.
Abstract: The Web is a constantly evolving, complex system, with important implications for both marketers and consumers. In this paper, we contend that over the next five to ten years society will see a shift in the nature of the Web, as consumers, firms and regulators become increasingly concerned about privacy. In particular, we predict that, as a result of this privacy focus, various information sharing and protection practices currently found on the Dark Web will be increasingly adapted in the overall Web, and in the process, firms will lose much of their ability to fuel a modern marketing machinery that relies on abundant, rich, and timely consumer data. In this type of controlled information-sharing environment, we foresee the emergence of two distinct types of consumers: (1) those generally willing to share their information with marketers (Buffs), and (2) those who generally deny access to their personal information (Ghosts). We argue that one way marketers can navigate this new environment is by effectively designing and deploying conversational agents (CAs), often referred to as "chatbots." In particular, we propose that CAs may be used to understand and engage both types of consumers, while providing personalization, and serving both as a form of differentiation and as an important strategic asset for the firm, one capable of eliciting self-disclosure of otherwise private consumer information.
TL;DR: Some potential epistemological and ethical consequences of the use of anthropomorphic language and discourse within the AI research community are identified, reinforcing the need to complement practical analysis with conceptual analysis.
Abstract: AI research is growing rapidly, raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that in order to adequately address those is ...
TL;DR: It is suggested that consumers follow four paths to trust in smart technology: one in which consumers relate their trust to the perceived personality of the technology's voice interface, and three non-anthropomorphism-based trust paths.
Abstract: Trust is considered a prerequisite for consumer interaction with smart voice-interaction technologies such as smart speakers, although how exactly this develops remains unclear. Adopting th...