
Showing papers in "AI & Society in 2021"


Journal ArticleDOI
TL;DR: In this article, the authors focus on the socio-political background and policy debates that are shaping China's AI strategy, analysing the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use.
Abstract: In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence (AI), entitled ‘New Generation Artificial Intelligence Development Plan’ (新一代人工智能发展规划). This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan (ca. 150 billion dollars) industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China’s AI policies or have assessed the country’s technical capabilities. Instead, in this article, we focus on the socio-political background and policy debates that are shaping China’s AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. By focusing on the policy backdrop, we seek to provide a more comprehensive and critical understanding of China’s AI policy by bringing together debates and analyses of a wide array of policy documents.

140 citations


Journal ArticleDOI
TL;DR: The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
Abstract: Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. Big Data Soc 3(2), 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.

137 citations


Journal ArticleDOI
TL;DR: By looking at the politics of classification within machine learning systems, this article demonstrates why the automated interpretation of images is an inherently social and political project.
Abstract: By looking at the politics of classification within machine learning systems, this article demonstrates why the automated interpretation of images is an inherently social and political project. We begin by asking what work images do in computer vision systems, and what is meant by the claim that computers can "recognize" an image. Next, we look at the method for introducing images into computer systems and at how taxonomies order the foundational concepts that will determine how a system interprets the world. Then we turn to the question of labeling: how humans tell computers which words will relate to a given image. What is at stake in the way AI systems use these labels to classify humans, including by race, gender, emotions, ability, sexuality, and personality? Finally, we turn to the purposes that computer vision is meant to serve in our society—the judgments, choices, and consequences of providing computers with these capacities. Methodologically, we call this an archeology of datasets: studying the material layers of training images and labels, cataloguing the principles and values by which taxonomies are constructed, and analyzing how these taxonomies create the parameters of intelligibility for an AI system. By doing this, we can critically engage with the underlying politics and values of a system, and analyze which normative patterns of life are assumed, supported, and reproduced.

101 citations


Journal ArticleDOI
TL;DR: This paper discusses AIEd’s purported capacities, including the abilities to simulate teachers, provide robust student differentiation, and even foster socio-emotional engagement, and contrasts sociotechnical possibilities and risks through two idealized futures.
Abstract: Like previous educational technologies, artificial intelligence in education (AIEd) threatens to disrupt the status quo, with proponents highlighting the potential for efficiency and democratization, and skeptics warning of industrialization and alienation. However, unlike frequently discussed applications of AI in autonomous vehicles, military and cybersecurity concerns, and healthcare, AI's impacts on education policy and practice have not yet captured the public's attention. This paper, therefore, evaluates the status of AIEd, with special attention to intelligent tutoring systems and anthropomorphized artificial educational agents. I discuss AIEd's purported capacities, including the abilities to simulate teachers, provide robust student differentiation, and even foster socio-emotional engagement. Next, to situate developmental pathways for AIEd going forward, I contrast sociotechnical possibilities and risks through two idealized futures. Finally, I consider a recent proposal to use peer review as a gatekeeping strategy to prevent harmful research. This proposal serves as a jumping-off point for recommendations to AIEd stakeholders for improving their engagement with socially responsible research and implementation of AI in educational systems.

63 citations


Journal ArticleDOI
TL;DR: A new integrated and comprehensive research design framework that addresses all aspects of the above three perspectives, and includes principles that support developers in reflecting on and anticipating potential effects of AI in society.
Abstract: Within current debates about the future impact of Artificial Intelligence (AI) on human society, roughly three different perspectives can be recognised: (1) the technology-centric perspective, claiming that AI will soon outperform humankind in all areas, and that the primary threat for humankind is superintelligence; (2) the human-centric perspective, claiming that humans will always remain superior to AI when it comes to social and societal aspects, and that the main threat of AI is that humankind's social nature is overlooked in technological designs; and (3) the collective intelligence-centric perspective, claiming that true intelligence lies in the collective of intelligent agents, both human and artificial, and that the main threat for humankind is that technological designs create problems at the collective, systemic level that are hard to oversee and control. The current paper offers the following contributions: (a) a clear description of each of the three perspectives, along with their history and background; (b) an analysis and interpretation of current applications of AI in human society according to each of the three perspectives, thereby disentangling miscommunication in the debate concerning threats of AI; and (c) a new integrated and comprehensive research design framework that addresses all aspects of the above three perspectives and includes principles that support developers in reflecting on and anticipating potential effects of AI in society.

55 citations


Journal ArticleDOI
TL;DR: This review found that there are multiple concerns about opacity, accountability, responsibility and liability when considering the stakeholders of technologists and clinicians in the creation and use of AIS in clinical decision making.
Abstract: The aim of this literature review was to compose a narrative review, supported by a systematic approach, to critically identify and examine concerns about accountability and the allocation of responsibility and legal liability as they apply to the clinician and the technologist in the use of opaque AI-powered systems in clinical decision making. This review asks (a) whether it is permissible for a clinician to use an opaque AI system (AIS) in clinical decision making and (b) if a patient were harmed as a result of a clinician using an AIS's suggestion, how responsibility and legal liability would be allocated. Literature was systematically searched, retrieved, and reviewed from nine databases, which also included items from three clinical professional regulators, as well as relevant grey literature from governmental and non-governmental organisations. This literature was subjected to inclusion/exclusion criteria; those items found relevant to this review underwent data extraction. This review found that there are multiple concerns about opacity, accountability, responsibility and liability among the stakeholders of technologists and clinicians in the creation and use of AIS in clinical decision making. Accountability is challenged when the AIS used is opaque, and the allocation of responsibility is somewhat unclear. Legal analysis would help stakeholders to understand their obligations and prepare should an undesirable scenario of patient harm eventuate when an AIS was used.

41 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present empirical findings collected using a set of ten case studies and provide an account of the cross-case analysis, showing that organisations are highly aware of the AI ethics debate and keen to engage with ethical issues proactively.
Abstract: The ethics of artificial intelligence (AI) is a widely discussed topic. There are numerous initiatives that aim to develop the principles and guidance to ensure that the development, deployment and use of AI are ethically acceptable. What is generally unclear is how organisations that make use of AI understand and address these ethical issues in practice. While there is an abundance of conceptual work on AI ethics, empirical insights are rare and often anecdotal. This paper fills the gap in our current understanding of how organisations deal with AI ethics by presenting empirical findings collected using a set of ten case studies and providing an account of the cross-case analysis. The paper reviews the discussion of ethical issues of AI as well as mitigation strategies that have been proposed in the literature. Using this background, the cross-case analysis categorises the organisational responses that were observed in practice. The discussion shows that organisations are highly aware of the AI ethics debate and keen to engage with ethical issues proactively. However, they make use of only a relatively small subsection of the mitigation strategies proposed in the literature. These insights are of importance to organisations deploying or using AI and to the academic AI ethics debate, but are perhaps most valuable to policymakers involved in the current debate about suitable policy developments to address the ethical issues raised by AI.

40 citations


Journal ArticleDOI
Dong-Hee Shin
TL;DR: Examination of how literacy and user trust influence perceptions of chatbot information credibility confirms that algorithmic literacy and users' trust play a pivotal role in how users form perceptions of the credibility of chatbot messages and recommendations.
Abstract: The exponential growth of algorithms has made establishing a trusted relationship between human and artificial intelligence increasingly important. Algorithm systems such as chatbots can play an important role in how users assess the credibility of algorithms. Unless users believe the chatbot's information is credible, they are not likely to be willing to act on its recommendations. This study examines how literacy and user trust influence perceptions of chatbot information credibility. Results confirm that algorithmic literacy and users' trust play a pivotal role in how users form perceptions of the credibility of chatbot messages and recommendations. Insights into how user trust is related to credibility provide a useful perspective on the conceptualization of algorithmic credibility. The algorithmic information processing identified here provides a better foundation for algorithm design and development and a stronger basis for the design of sense-making chatbot journalism.

37 citations


Journal ArticleDOI
TL;DR: It is contended that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience, and algorithms are incoherent as moral agents because they lack the necessary moral understanding to be morally responsible.
Abstract: In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent of sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the necessary moral understanding to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, 'values' will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.

32 citations


Journal ArticleDOI
TL;DR: It is argued that the media performances of the Sophia robot were choreographed to advance specific political interests and put the discussions about the robot’s rights or citizenship in the context of AI politics and economics.
Abstract: A humanoid robot named 'Sophia' has sparked controversy since it has been given citizenship and has given media performances all over the world. The company that made the robot, Hanson Robotics, has touted Sophia as the future of artificial intelligence (AI). Robot scientists and philosophers have been more pessimistic about its capabilities, describing Sophia as a sophisticated puppet or chatbot. Looking behind the rhetoric about Sophia's citizenship and intelligence and going beyond recent discussions on the moral status or legal personhood of AI robots, we analyse the performativity of Sophia from the perspective of what we call 'political choreography', drawing on phenomenological approaches to performance-oriented philosophy of technology. This paper proposes to interpret and discuss the world tour of Sophia as a political choreography that boosts the rise of the social robot market, rather than as a statement about robot citizenship or artificial intelligence. We argue that the media performances of the Sophia robot were choreographed to advance specific political interests. We illustrate our philosophical discussion with media material of the Sophia performance, which helps us to explore the mechanisms through which the media spectacle functions hand in hand with advancing the economic interests of technology industries and their governmental promoters. Using a phenomenological approach and attending to the movement of robots, we also criticize the notion of 'embodied intelligence' used in the context of social robotics and AI. In this way, we put the discussions about the robot's rights or citizenship in the context of AI politics and economics.

31 citations


Journal ArticleDOI
TL;DR: This paper focuses on the use of ‘black box’ AI in medicine and asks whether the physician needs to disclose to patients that even the best AI comes with the risks of cyberattacks, systematic bias, and a particular type of mismatch between AI’s implicit assumptions and an individual patient's background situation.
Abstract: This paper focuses on the use of 'black box' AI in medicine and asks whether the physician needs to disclose to patients that even the best AI comes with the risks of cyberattacks, systematic bias, and a particular type of mismatch between AI's implicit assumptions and an individual patient's background situation. Pace current clinical practice, I argue that, under certain circumstances, these risks do need to be disclosed. Otherwise, the physician either vitiates a patient's informed consent or violates a more general obligation to warn him about potentially harmful consequences. To support this view, I argue, first, that the already widely accepted conditions in the evaluation of risks, i.e. the 'nature' and 'likelihood' of risks, speak in favour of disclosure and, second, that principled objections against the disclosure of these risks do not withstand scrutiny. Moreover, I also explain that these risks are exacerbated by pandemics like the COVID-19 crisis, which further emphasises their significance.

Journal ArticleDOI
TL;DR: The authors conducted a mixed-methods qualitative analysis to answer the following four questions: what do AI practitioners understand about the need to translate ethical principles into practice? What motivates AI practitioners to embed AI ethics principles into design practices? What barriers do AI professionals face when attempting to translate AI principles into practical use? And finally, what assistance do practitioners want and need when translating ethical principles into practice?
Abstract: By mid-2019 there were more than 80 AI ethics guides available in the public domain. Despite this, 2020 saw numerous news stories break related to ethically questionable uses of AI. In part, this is because AI ethics theory remains highly abstract, and of limited practical applicability to those actually responsible for designing algorithms and AI systems. Our previous research sought to start closing this gap between the ‘what’ and the ‘how’ of AI ethics through the creation of a searchable typology of tools and methods designed to translate between the five most common AI ethics principles and implementable design practices. Whilst a useful starting point, that research rested on the assumption that all AI practitioners are aware of the ethical implications of AI, understand their importance, and are actively seeking to respond to them. In reality, it is unclear whether this is the case. It is this limitation that we seek to overcome here by conducting a mixed-methods qualitative analysis to answer the following four questions: what do AI practitioners understand about the need to translate ethical principles into practice? What motivates AI practitioners to embed ethical principles into design practices? What barriers do AI practitioners face when attempting to translate ethical principles into practice? And finally, what assistance do AI practitioners want and need when translating ethical principles into practice?

Journal ArticleDOI
TL;DR: It is argued that contestability may be a possible, acceptable, and useful alternative so that even if the authors cannot understand how a system came up with a particular output, they at least have the means to challenge it.
Abstract: Some recent developments in Artificial Intelligence—especially the use of machine learning systems, trained on big data sets and deployed in socially significant and ethically weighty contexts—have led to a number of calls for “transparency”. This paper explores the epistemological and ethical dimensions of that concept, as well as surveying and taxonomising the variety of ways in which it has been invoked in recent discussions. Whilst “outward” forms of transparency (concerning the relationship between an AI system, its developers, users and the media) may be straightforwardly achieved, what I call “functional” transparency about the inner workings of a system is, in many cases, much harder to attain. In those situations, I argue that contestability may be a possible, acceptable, and useful alternative so that even if we cannot understand how a system came up with a particular output, we at least have the means to challenge it.

Journal ArticleDOI
TL;DR: A threefold critique of robotics in healthcare is derived, which calls attention to the politics, historicity, and social situatedness of care robotics in elderly care.
Abstract: When the social relevance of robotic applications is addressed today, the use of assistive technology in care settings is almost always the first example. So-called care robots are presented as a solution to the nursing crisis, despite doubts about their technological readiness and the lack of concrete usage scenarios in everyday nursing practice. We inquire into this interconnection of social robotics and care. We show how both are made available for each other in three arenas: innovation policy, care organization, and robotic engineering. First, we analyze the discursive “logics” of care robotics within European innovation policy, second, we disclose how care robotics is encountering a historically grown conflict within health care organization, and third we show how care scenarios are being used in robotic engineering. From this diagnosis, we derive a threefold critique of robotics in healthcare, which calls attention to the politics, historicity, and social situatedness of care robotics in elderly care.

Journal ArticleDOI
TL;DR: This study integrates case-based reasoning (CBR) as an artificial intelligence technique to develop a recommender system to promote smart city planning; results suggest that the developed system is applicable in supporting smart city adoption.
Abstract: With the deployment of information and communication technologies (ICTs) and the need for data and information sharing within cities, the smart city aims to provide value-added services that improve citizens' quality of life. Currently, however, city planners and developers are faced with inadequate contextual information on the smart city dimensions required to achieve a sustainable society. Achieving a sustainable society therefore requires stakeholders to make strategic decisions on how to implement smart city initiatives, and the smart city dimensions to be adopted in making cities smarter must be specified. To date, only a few methods, such as big data, the internet of things, and cloud computing, have been employed to support smart city attainment. This study therefore integrates case-based reasoning (CBR), an artificial intelligence technique, to develop a recommender system for promoting smart city planning. CBR provides suggestions on the smart city dimensions to be adopted by city planners and decision-makers in making cities smarter and more sustainable. Accordingly, survey data were collected from 115 respondents to evaluate the applicability of the implemented CBR recommender system with respect to how well the system provides best-practice recommendations and retains smart city initiatives. Results from descriptive and exploratory factor analyses suggest that the developed system is applicable in supporting smart city adoption. The findings are also expected to provide valuable insights for practitioners developing more practical strategies and for researchers seeking to better understand smart city dimensions.
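The retrieval-and-reuse core of a CBR recommender can be sketched in a few lines: a new city profile is matched against stored cases by similarity, and the solutions attached to the nearest cases are returned as recommendations. This is a minimal illustration only; the feature dimensions, case values, and recommendation labels below are hypothetical, not taken from the system described in the article.

```python
import numpy as np

# Hypothetical case base: each row scores a past initiative on four
# illustrative smart-city dimensions (the article's actual dimensions
# and data are not reproduced here).
case_base = np.array([
    [0.9, 0.4, 0.7, 0.5],   # case 0: transport-led initiative
    [0.3, 0.9, 0.5, 0.6],   # case 1: e-government-led initiative
    [0.5, 0.5, 0.9, 0.8],   # case 2: sustainability-led initiative
])
solutions = ["smart mobility", "smart governance", "smart environment"]

def recommend(query, k=1):
    """Retrieve the k most similar past cases and reuse their solutions."""
    dists = np.linalg.norm(case_base - query, axis=1)  # smaller = more similar
    nearest = np.argsort(dists)[:k]
    return [solutions[i] for i in nearest]

# A new city profile that emphasises environmental goals retrieves case 2.
print(recommend(np.array([0.4, 0.5, 0.8, 0.9])))  # -> ['smart environment']
```

A full CBR cycle would add revise and retain steps, updating the case base with validated new cases, which matches the abstract's mention of retaining smart city initiatives.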

Journal ArticleDOI
TL;DR: This paper presents how the attention mechanism can be incorporated effectively and efficiently in analyzing Bangla sentiment and opinion.
Abstract: With the accelerated evolution of the internet in the form of websites, social networks, microblogs, and online portals, a large number of reviews, opinions, recommendations, ratings, and items of feedback are generated by writers or users. This user-generated sentiment content can concern books, people, hotels, products, research, events, and more, and is very beneficial for businesses, governments, and individuals. While the content is meant to be useful, making sense of the bulk of it requires text mining techniques and sentiment analysis. However, the sentiment analysis and evaluation process faces several challenges, which become obstacles to analyzing the accurate meaning of sentiments and detecting suitable sentiment polarity, specifically in the Bangla language. Sentiment analysis is the practice of applying natural language processing and text analysis techniques to identify and extract subjective information from text. This paper presents how the attention mechanism can be incorporated effectively and efficiently in analyzing Bangla sentiment and opinion.
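As a generic illustration of the mechanism (the paper's actual architecture is not reproduced here), attention in sentiment classification typically scores each token's hidden state against a learned query vector and pools the sequence into a single sentence representation using those weights:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(H, w):
    """Collapse hidden states H (T x d) into one sentence vector by
    weighting each token position with a learned query vector w (d,)."""
    scores = H @ w / np.sqrt(H.shape[1])  # relevance score per token
    alpha = softmax(scores)               # attention weights sum to 1
    return alpha @ H, alpha               # weighted sum and the weights

# Toy example: 5 token embeddings of dimension 8 standing in for the
# encoder outputs of a Bangla sentence (random, purely illustrative).
rng = np.random.default_rng(0)
H, w = rng.normal(size=(5, 8)), rng.normal(size=8)
sentence_vec, weights = attention_pool(H, w)
print(weights.round(3))  # which tokens the classifier would attend to
```

The pooled sentence vector would then feed a conventional classifier head; the attention weights also give a rough view of which words drove the predicted polarity.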

Journal ArticleDOI
TL;DR: The paper concludes that one should not ascribe moral and legal personhood to currently existing robots, given their technological limitations, but that oneshould do so once they have achieved a certain level at which they would become comparable to human beings.
Abstract: This paper considers the hotly debated issue of whether one should grant moral and legal personhood to intelligent robots once they have achieved a certain standard of sophistication based on such criteria as rationality, autonomy, and social relations. The starting point for the analysis is the European Parliament’s resolution on Civil Law Rules on Robotics (2017) and its recommendation that robots be granted legal status and electronic personhood. The resolution is discussed against the background of the so-called Robotics Open Letter, which is critical of the Civil Law Rules on Robotics (and particularly of §59 f.). The paper reviews issues related to the moral and legal status of intelligent robots and the notion of legal personhood, including an analysis of the relation between moral and legal personhood in general and with respect to robots in particular. It examines two analogies, to corporations (which are treated as legal persons) and animals, that have been proposed to elucidate the moral and legal status of robots. The paper concludes that one should not ascribe moral and legal personhood to currently existing robots, given their technological limitations, but that one should do so once they have achieved a certain level at which they would become comparable to human beings.

Journal ArticleDOI
TL;DR: In this article, the authors analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change and identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively.
Abstract: In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI’s greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based, and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combatting climate change, while reducing its impact on the environment.
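The kind of estimate at issue can be illustrated with the commonly used back-of-the-envelope formula: energy drawn by the hardware, scaled by datacentre overhead, multiplied by the carbon intensity of the local grid. Every number below is an illustrative assumption, not a figure from the article.

```python
# Illustrative GHG estimate for a single training run (all values assumed).
gpu_power_kw = 0.3          # average draw per accelerator, in kW
num_gpus = 64               # size of the training cluster
hours = 120                 # duration of the run
pue = 1.5                   # datacentre power usage effectiveness overhead
grid_kgco2_per_kwh = 0.4    # carbon intensity of the local electricity grid

energy_kwh = gpu_power_kw * num_gpus * hours * pue
emissions_kg = energy_kwh * grid_kgco2_per_kwh
print(f"{energy_kwh:.0f} kWh -> {emissions_kg:.0f} kg CO2e")  # 3456 kWh -> 1382 kg
```

The spread across grids, hardware, and datacentres in these factors is one reason the article calls for more evidence on the trade-off between the emissions AI research generates and the efficiency gains it can offer.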

Journal ArticleDOI
Jon Dron
TL;DR: This theoretical paper elucidates the nature of educational technology and sheds light on a number of phenomena in educational systems, from the no-significant-difference phenomenon to the singular lack of replication in studies of educational technologies.
Abstract: This theoretical paper elucidates the nature of educational technology and, in the process, sheds light on a number of phenomena in educational systems, from the no-significant-difference phenomenon to the singular lack of replication in studies of educational technologies. Its central thesis is that we are not just users of technologies but coparticipants in them. Our participant roles may range from pressing power switches to designing digital learning systems to performing calculations in our heads. Some technologies may demand our participation only to enact fixed, predesigned orchestrations correctly. Other technologies leave gaps that we can or must fill with novel orchestrations, which we may perform more or less well. Most are a mix of the two, and the mix varies according to context, participant, and use. This participative orchestration is highly distributed: in educational systems, coparticipants include the learner, the teacher, and many others, from textbook authors to LMS programmers, as well as the tools and methods they use and create. From this perspective, all learners and teachers are educational technologists. The technologies of education are seen to be deeply, fundamentally, and irreducibly human, complex, situated and social in their constitution, their form, and their purpose, and as ungeneralizable in their effects as the choice of paintbrush is to the production of great art.

Journal ArticleDOI
TL;DR: This article will present a foundation for a continued discussion on these issues and an argument for why discussions about safety should be prioritized over ethical concerns related to crashing.
Abstract: The philosophical-ethical literature and the public debate on autonomous vehicles have been obsessed with ethical issues related to crashing. In this article, these discussions, including more empi ...

Journal ArticleDOI
TL;DR: In this article, an ethical analysis is presented of the potential conflicts in relation to this information technology that arise between security, on the one hand, and individual privacy and autonomy, and democratic accountability, on the other.
Abstract: Biometric facial recognition is an artificial intelligence technology involving the automated comparison of facial features, used by law enforcement to identify unknown suspects from photographs and closed circuit television. Its capability is expanding rapidly in association with artificial intelligence and has great potential to solve crime. However, it also carries significant privacy and other ethical implications that require law and regulation. This article examines the rise of biometric facial recognition, current applications and legal developments, and conducts an ethical analysis of the issues that arise. Ethical principles are applied to mediate the potential conflicts in relation to this information technology that arise between security, on the one hand, and individual privacy and autonomy, and democratic accountability, on the other. These can be used to support appropriate law and regulation for the technology as it continues to develop.

Journal ArticleDOI
TL;DR: A literature review of current and future challenges in the use of artificial intelligence (AI) in cyber physical systems identifies a conceptual framework for increasing resilience with AI through automation, supporting both a technical and a human level.
Abstract: This article conducts a literature review of current and future challenges in the use of artificial intelligence (AI) in cyber physical systems. The literature review is focused on identifying a conceptual framework for increasing resilience with AI through automation, supporting both a technical and a human level. The methodology applied combined a literature review with a taxonomic analysis of complex, interconnected and coupled internet of things (IoT) cyber physical systems. There is increased attention on propositions of models, infrastructures and frameworks of IoT in both academic and technical papers. These reports and publications frequently represent a juxtaposition of other related systems and technologies (e.g. Industrial Internet of Things, Cyber Physical Systems, Industry 4.0 etc.). We review academic and industry papers published between 2010 and 2020. The results determine a new hierarchical cascading conceptual framework for analysing the evolution of AI decision-making in cyber physical systems. We argue that such evolution is inevitable and autonomous because of the increased integration of connected devices (IoT) in cyber physical systems. To support this argument, the taxonomic methodology is adapted and applied for transparency and justification of concept selection decisions, building summary maps that are then applied in designing the hierarchical cascading conceptual framework.

Journal ArticleDOI
TL;DR: Results support that simulated data can help configure algorithms to a degree of performance when real labeled data are not available and that this type of learning might be especially helpful in initial deployment of a system without prior data.
Abstract: The application of machine learning algorithms to healthcare data can enhance patient care while also reducing healthcare worker cognitive load. These algorithms can be used to detect anomalous physiological readings, potentially leading to expedited emergency response or new knowledge about the development of a health condition. However, while there has been much research conducted in assessing the performance of anomaly detection algorithms on well-known public datasets, there is less conceptual comparison across unsupervised and supervised performance on physiological data. Moreover, while heart rate data are both ubiquitous and noninvasive, there has been little research specifically for anomaly detection of this type of data. Considering that heart rate data are indicative of both potential health complications and an individual's physical activity, this is a rich source of largely overlooked data. To this end, we employed and evaluated five machine learning algorithms, two of which are unsupervised and the remaining three supervised, in their ability to detect anomalies in heart rate data. These algorithms were then evaluated on real heart rate data. Findings supported the effectiveness of the local outlier factor and random forest algorithms in the task of heart rate anomaly detection, as each model generalized well from its training on simulated heart rate data to real world heart rate data. Furthermore, results support that simulated data can help configure algorithms to a degree of performance when real labeled data are not available and that this type of learning might be especially helpful in initial deployment of a system without prior data.
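A minimal sketch of the two families of methods, assuming scikit-learn and one-dimensional simulated heart rate readings; the features, parameters, and data below are illustrative, not the study's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)

# Simulated training data: resting heart rates near 70 bpm plus a small
# set of injected anomalies near 130 bpm (values purely illustrative).
X_train = np.vstack([rng.normal(70, 5, (950, 1)), rng.normal(130, 10, (50, 1))])
y_train = np.array([0] * 950 + [1] * 50)  # labels exist only because we simulated

# Unsupervised: LOF flags points whose local density is unusually low.
lof_flags = LocalOutlierFactor(n_neighbors=20).fit_predict(X_train) == -1

# Supervised: a random forest trained on simulated labels, then applied to
# new "real" readings for which no labels are available.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
X_real = np.array([[68.0], [72.0], [141.0]])
print(rf.predict(X_real))  # the 141 bpm reading should be flagged as anomalous
```

This mirrors the study's train-on-simulated, test-on-real setup in miniature: the supervised model never sees real labels, so its usefulness depends on how well the simulation matches real physiology.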

Journal ArticleDOI
TL;DR: It is concluded that social robots should not be regarded as proper objects of moral concern unless and until they become capable of having conscious experience, which would suggest that humans do not owe direct moral duties to them.
Abstract: While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with human beings. In recent years, some approaches to moral consideration have been proposed that would include social robots as proper objects of moral concern, even though it seems unlikely that these machines are conscious beings. In the present paper, I argue against these approaches by advocating the “consciousness criterion,” which proposes phenomenal consciousness as a necessary condition for accrediting moral status. First, I explain why it is generally supposed that consciousness underlies the morally relevant properties (such as sentience) and then, I respond to some of the common objections against this view. Then, I examine three inclusive alternative approaches to moral consideration that could accommodate social robots and point out why they are ultimately implausible. Finally, I conclude that social robots should not be regarded as proper objects of moral concern unless and until they become capable of having conscious experience. While that does not entail that they should be excluded from our moral reasoning and decision-making altogether, it does suggest that humans do not owe direct moral duties to them.

Journal ArticleDOI
TL;DR: In this paper, a content analysis of frames is performed to reveal how the German government uses its AI future vision to uphold the status quo and how the media largely adapt the government's frames and do not integrate alternative future narratives into the public debate.
Abstract: In this article, we shed light on the emergence, diffusion, and use of socio-technological future visions. The artificial intelligence (AI) future vision of the German federal government is examined and juxtaposed with the respective news media coverage of the German media. By means of a content analysis of frames, it is demonstrated how the German government strategically uses its AI future vision to uphold the status quo. The German media largely adapt the government's frames and do not integrate alternative future narratives into the public debate. These findings are substantiated in the framing of AI futures in policy documents of the German government and articles of four different German newspapers. It is shown how the German past is mirrored in the German AI future envisioned by the government, safeguarding the present power constellation that is marked by a close unity of politics and industry. The German media partly expose the government's frames and call for future visions that include fundamentally different political designs less influenced by the power structures of the past and present.

Journal ArticleDOI
TL;DR: In this paper, the authors examine whether the use of artificial intelligence and automated decision-making (ADM) aggravates issues of discrimination as has been argued by several authors, and they argue that AI/ADM can, in fact, increase the issue of discrimination, but in a different way than most critics assume.
Abstract: In this paper, I examine whether the use of artificial intelligence (AI) and automated decision-making (ADM) aggravates issues of discrimination as has been argued by several authors. For this purpose, I first take up the lively philosophical debate on discrimination and present my own definition of the concept. Equipped with this account, I subsequently review some of the recent literature on the use of AI/ADM and discrimination. I explain how my account of discrimination helps to understand that the general claim that AI/ADM aggravates discrimination is unwarranted. Finally, I argue that the use of AI/ADM can, in fact, increase issues of discrimination, but in a different way than most critics assume: it is due to its epistemic opacity that AI/ADM threatens to undermine our moral deliberation, which is essential for reaching a common understanding of what should count as discrimination. As a consequence, it turns out that algorithms may actually help to detect hidden forms of discrimination.

Journal ArticleDOI
TL;DR: A taxonomy that enables identification of the very nature of human–machine interaction and five levels of automation and technical autonomy are introduced, based on the assumption that both automation and autonomy are gradual.
Abstract: Due to the ongoing advancements in technology, socio-technical collaboration has become increasingly prevalent. This poses challenges in terms of governance and accountability, as well as issues in various other fields. Therefore, it is crucial to familiarize decision-makers and researchers with the core of human–machine collaboration. This study introduces a taxonomy that enables identification of the very nature of human–machine interaction. A literature review has revealed that automation and technical autonomy are the main parameters for describing and understanding such interaction. Both aspects must be carefully evaluated, as their increase has potentially far-reaching consequences. Hence, these two concepts comprise the taxonomy's axes. Five levels of automation and five levels of technical autonomy are introduced below, based on the assumption that both automation and autonomy are gradual. The levels of automation were developed from existing approaches; those of autonomy were carefully derived from a review of the literature. The taxonomy's use is also explained, as are its limitations and avenues for further research.
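As an illustration of the two-axis idea only (the article's level definitions are not reproduced here), any human–machine system can be located by a pair of graded levels, one per axis:

```python
from dataclasses import dataclass

# Hypothetical placeholders: the paper defines five graded levels per axis,
# but the numeric levels below carry no official names or definitions.
AUTOMATION_LEVELS = range(5)  # 0 = fully manual ... 4 = fully automated
AUTONOMY_LEVELS = range(5)    # 0 = no technical autonomy ... 4 = fully autonomous

@dataclass(frozen=True)
class HumanMachineSystem:
    name: str
    automation: int  # position on the automation axis
    autonomy: int    # position on the technical-autonomy axis

    def __post_init__(self):
        if self.automation not in AUTOMATION_LEVELS or self.autonomy not in AUTONOMY_LEVELS:
            raise ValueError("levels must lie on the 5x5 taxonomy grid")

# Two hypothetical classifications: highly automated but non-autonomous
# versus moderately automated and substantially autonomous.
print(HumanMachineSystem("conveyor line", automation=4, autonomy=0))
print(HumanMachineSystem("adaptive trading agent", automation=3, autonomy=3))
```

Keeping the two axes separate makes the taxonomy's point concrete: a system can be highly automated while having little autonomy, and vice versa.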

Journal ArticleDOI
TL;DR: Ethical aspects of the alleged right to explanation, privacy, and informed consent when applying artificial intelligence in medical diagnostic consultations are analyzed, together with a legal analysis of the limits and requirements for the explainability of artificial intelligence.
Abstract: This paper inquires into the complex issue of informed consent when applying artificial intelligence in medical diagnostic consultations. The aim is to expose the main ethical and legal concerns of the New Health phenomenon, powered by intelligent machines. To achieve this objective, the first part of the paper analyzes ethical aspects of the alleged right to explanation, privacy, and informed consent when applying artificial intelligence in medical diagnostic consultations. This analysis is followed by a legal analysis of the limits and requirements for the explainability of artificial intelligence. Following this, recommendations for action are given in the concluding remarks of the paper.

Journal ArticleDOI
TL;DR: In this article, the attitude and moral perception of 228 college students towards artificial intelligence (AI) in an international university in Japan was examined, and the students were asked to select a single most significant ethical issue associated with AI in the future from a list of nine ethical issues suggested by the World Economic Forum, and to explain why they believed that their chosen issues were most important.
Abstract: We have examined the attitude and moral perception of 228 college students (63 Japanese and 165 non-Japanese) towards artificial intelligence (AI) in an international university in Japan. The students were asked to select a single most significant ethical issue associated with AI in the future from a list of nine ethical issues suggested by the World Economic Forum, and to explain why they believed that their chosen issues were most important. The majority of students (n = 149, 65%) chose unemployment as the major ethical issue related to AI. The second largest group of students (n = 29, 13%) were concerned with ethical issues related to emotional AI, including the impact of AI on human behavior and emotion. The paper discusses the results in detail and concludes that, while policymakers must consider how to ameliorate the impact of AI on employment, AI engineers need to consider the emotional aspects of AI in research and development, as well.

Journal ArticleDOI
TL;DR: This essay argues that the appellation ‘machine ethics’ does not sufficiently capture the entire project of embedding ethics into AI/S, and hence the need for computational ethics, and offers a philosophical analysis of the subject of computational ethics that is not found in the literature.
Abstract: Research into the ethics of artificial intelligence is often categorized into two subareas—robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I infer that the term 'machine ethics' is too broad and glosses over issues that the term 'computational ethics' best describes. I show that the subject of inquiry of computational ethics is of great value and indeed is an important frontier in developing ethical artificial intelligence systems (AIS). I also show that computational ethics is a distinct, often neglected field in the ethics of AI. In contrast to much of the literature, I argue that the appellation 'machine ethics' does not sufficiently capture the entire project of embedding ethics into AI/S, and hence the need for computational ethics. This essay is unique for two reasons: first, it offers a philosophical analysis of the subject of computational ethics that is not found in the literature; second, it offers a finely grained analysis that shows the thematic distinction among robot ethics, machine ethics and computational ethics.