
Showing papers in "Science and Engineering Ethics" (2020)


Journal Article
TL;DR: In this paper, the authors present a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and signal to researchers where further work is needed.
Abstract: The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741-742, 1960. https://doi.org/10.1126/science.132.3429.741 ; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased AI's potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles, the 'what' of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability), rather than on practices, the 'how.' Awareness of the potential issues is increasing at a fast rate, but the AI community's ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs.

251 citations


Journal Article
TL;DR: Seven ethical factors that are essential for future AI4SG initiatives are identified and corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
Abstract: The idea of artificial intelligence for social good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.

143 citations


Journal Article
TL;DR: Inspired by a relational approach, responsibility as answerability offers an important additional, if not primary, justification for explainability based, not on agency, but on patiency.
Abstract: This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence (AI) technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws attention to the issues of transparency and explainability. In contrast to standard discussions, however, it is then argued that this knowledge problem regarding agents of responsibility is linked to the other side of the responsibility relation: the addressees or “patients” of responsibility, who may demand reasons for actions and decisions made by using AI. Inspired by a relational approach, responsibility as answerability thus offers an important additional, if not primary, justification for explainability based, not on agency, but on patiency.

134 citations


Journal Article
TL;DR: This paper will show that AI cannot be something that has the capacity to be trusted according to the most prevalent definitions of trust, because it neither possesses emotive states nor can be held responsible for its actions, the requirements of the affective and normative accounts of trust.
Abstract: One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission's High-Level Expert Group on AI (HLEG) has adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in human relationships, so proposing that AI should be trusted is a very serious claim. This paper will show that AI cannot be something that has the capacity to be trusted according to the most prevalent definitions of trust, because it neither possesses emotive states nor can be held responsible for its actions, the requirements of the affective and normative accounts of trust respectively. While AI meets all of the requirements of the rational account of trust, it will be shown that this is not actually a type of trust at all, but is instead a form of reliance. Ultimately, even complex machines such as AI should not be viewed as trustworthy, as this undermines the value of interpersonal trust, anthropomorphises AI, and diverts responsibility from those developing and using them.

125 citations


Journal Article
TL;DR: The review argues that three broader themes will be central to ongoing discussions and research, and shows how they can be used to identify open questions related to the ethics of digital well-being.
Abstract: This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human–computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research, and shows how they can be used to identify open questions related to the ethics of digital well-being.

105 citations


Journal Article
TL;DR: This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing ethicists, policy-makers, and law enforcement organisations with a synthesis of the current problems, and a possible solution space.
Abstract: Artificial intelligence (AI) research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area, spanning socio-legal studies to formal science, there is little certainty of what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing ethicists, policy-makers, and law enforcement organisations with a synthesis of the current problems, and a possible solution space.

98 citations


Journal Article
TL;DR: It is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high, that robots may soon cross it, and that we may need to take seriously a duty of ‘procreative beneficence’ towards robots.
Abstract: Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory, 'ethical behaviourism', which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position on board, it is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high and that they may soon cross it (if they haven't done so already). Finally, the implications of this for our procreative duties to robots are considered, and it is argued that we may need to take seriously a duty of 'procreative beneficence' towards robots.

97 citations


Journal Article
TL;DR: Transparency by Design is a model that helps organizations design transparent AI systems by integrating its nine principles in a step-by-step manner and as an ex-ante value, not as an afterthought.
Abstract: In this article, we develop the concept of Transparency by Design that serves as practical guidance in helping promote the beneficial functions of transparency while mitigating its challenges in automated-decision making (ADM) environments. With the rise of artificial intelligence (AI) and the ability of AI systems to make automated and self-learned decisions, a call for transparency of how such systems reach decisions has echoed within academic and policy circles. The term transparency, however, relates to multiple concepts, fulfills many functions, and holds different promises that struggle to be realized in concrete applications. Indeed, the complexity of transparency for ADM shows tension between transparency as a normative ideal and its translation to practical application. To address this tension, we first conduct a review of transparency, analyzing its challenges and limitations concerning automated decision-making practices. We then look at the lessons learned from the development of Privacy by Design, as a basis for developing the Transparency by Design principles. Finally, we propose a set of nine principles to cover relevant contextual, technical, informational, and stakeholder-sensitive considerations. Transparency by Design is a model that helps organizations design transparent AI systems, by integrating these principles in a step-by-step manner and as an ex-ante value, not as an afterthought.

68 citations


Journal Article
TL;DR: A taxonomy to classify Artificial Moral Agents according to the strategies and criteria used to deal with ethical problems is proposed and it is illustrated that there is a long way to go before this type of artificial agent can replace human judgment in difficult, surprising or ambiguous moral situations.
Abstract: One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as unmanned vehicles, intelligent houses, and humanoid robots capable of caring for people. In this context, research in the field of machine ethics has become more than a hot topic. Machine ethics focuses on developing ethical mechanisms for artificial agents to be capable of engaging in moral behavior. However, there are still crucial challenges in the development of truly Artificial Moral Agents. This paper aims to show the current status of Artificial Moral Agents by analyzing models proposed over the past two decades. As a result of this review, a taxonomy to classify Artificial Moral Agents according to the strategies and criteria used to deal with ethical problems is proposed. The presented review aims to illustrate (1) the complexity of designing and developing ethical mechanisms for this type of agent, and (2) that there is a long way to go (from a technological perspective) before this type of artificial agent can replace human judgment in difficult, surprising or ambiguous moral situations.

65 citations


Journal Article
Mark Ryan
TL;DR: This paper synthesises the wide range of ethical, legal, social and economic impacts that may result from SDV use and implementation by 2025, such as issues of autonomy, privacy, liability, security, data protection, and safety, and provides steps that we need to take to avoid these pitfalls while ensuring we reap the benefits that SDVs bring.
Abstract: Self-driving vehicles (SDVs) offer great potential to improve efficiency on roads, reduce traffic accidents, increase productivity, and minimise our environmental impact in the process. However, they have also seen resistance from different groups claiming that they are unsafe, pose a risk of being hacked, will threaten jobs, and increase environmental pollution from increased driving as a result of their convenience. In order to reap the benefits of SDVs, while avoiding some of the many pitfalls, it is important to effectively determine what challenges we will face in the future and what steps need to be taken now to avoid them. The approach taken in this paper is the construction of a likely future (the year 2025), through a policy scenario methodology, on the assumption that we continue certain trajectories over the coming years. The purpose of this is to articulate issues we currently face and to construct a foresight analysis of how these may develop in the next six years. The paper highlights many of the key facilitators and inhibitors behind this change and the societal impacts caused as a result. It synthesises the wide range of ethical, legal, social and economic impacts that may result from SDV use and implementation by 2025, such as issues of autonomy, privacy, liability, security, data protection, and safety. It concludes by providing steps that we need to take to avoid these pitfalls, while ensuring we reap the benefits that SDVs bring.

64 citations


Journal Article
TL;DR: In this paper, the authors provide an updated meta-analysis that calculates pooled prevalence estimates of research misconduct (RM) and questionable research practices (QRPs), and explores the factors associated with the prevalence of these issues.
Abstract: Irresponsible research practices that damage the value of science have been an increasing concern among researchers, but previous work failed to estimate the prevalence of all forms of irresponsible research behavior. Additionally, these analyses have not included articles published in the last decade, from 2011 to 2020. This study provides an updated meta-analysis that calculates pooled prevalence estimates of research misconduct (RM) and questionable research practices (QRPs), and explores the factors associated with the prevalence of these issues. The pooled estimates of researchers admitting to at least one act of RM among FFP (falsification, fabrication, plagiarism) and to one or more (unspecified) QRPs were 2.9% (95% CI 2.1-3.8%) and 12.5% (95% CI 10.5-14.7%), respectively. In addition, 15.5% (95% CI 12.4-19.2%) of researchers had witnessed others commit at least one act of RM, while 39.7% (95% CI 35.6-44.0%) were aware of others who had used at least one QRP. The results document that response proportion, limited recall period, career level, disciplinary background and location all significantly affect the reported prevalence of these issues. This meta-analysis addresses a gap in existing meta-analyses by estimating the prevalence of all forms of RM and QRPs, thus providing a better understanding of irresponsible research behaviors.
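
The pooled percentages and confidence intervals quoted above are the standard outputs of a proportion meta-analysis. As a rough illustration of how such figures are computed, here is a minimal sketch that pools survey proportions on the logit scale under a DerSimonian-Laird random-effects model; the study counts are invented for illustration, and the paper's actual data and estimation software are not specified here.

```python
# Minimal sketch of a random-effects pooled prevalence estimate of the kind
# reported above. The per-study data are hypothetical, not the paper's data.
import math

# (number admitting the behaviour, number surveyed) per study -- invented
studies = [(12, 400), (30, 950), (8, 260), (21, 700)]

# Logit-transformed proportions and approximate within-study variances
y = [math.log(k / (n - k)) for k, n in studies]
v = [1 / k + 1 / (n - k) for k, n in studies]

# DerSimonian-Laird estimate of the between-study variance tau^2
w = [1 / vi for vi in v]
y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects pooled logit with a 95% CI, back-transformed to a proportion
w_re = [1 / (vi + tau2) for vi in v]
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
expit = lambda x: 1 / (1 + math.exp(-x))
print(f"pooled prevalence: {expit(pooled):.1%} "
      f"(95% CI {expit(pooled - 1.96 * se):.1%}-{expit(pooled + 1.96 * se):.1%})")
```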

Journal Article
TL;DR: It is argued that attention should be paid to (1) the vocabulary currently used when discussing the governance of data initiatives, (2) the internal tension between current data initiatives and environmental policies, and (3) issues of fair distribution, and that taking these aspects into account would allow for more responsible behaviour in the context of data storage and production.
Abstract: This paper addresses a problem that has so far been neglected by scholars investigating the ethics of Big Data and by policy makers: the ethical implications of Big Data initiatives' environmental impact. Building on literature in environmental studies, cultural studies and Science and Technology Studies, the article draws attention to the physical presence of data, the material configuration of digital services, and the space occupied by data. It then explains how this material and situated character of data raises questions concerning the ethics of the increasingly fashionable Big Data discourses. It argues that attention should be paid to (1) the vocabulary currently used when discussing the governance of data initiatives; (2) the internal tension between current data initiatives and environmental policies; (3) issues of fair distribution. The article explains how taking these aspects into account would allow for more responsible behaviour in the context of data storage and production.

Journal Article
TL;DR: Four arguments for the view that trolley cases are of little or no relevance to the ethics of automated vehicles are outlined and rejected, and a positive account is developed of how trolley cases can inform the ethics of automated vehicles.
Abstract: This paper argues against the view that trolley cases are of little or no relevance to the ethics of automated vehicles. Four arguments for this view are outlined and rejected: the Not Going to Happen Argument, the Moral Difference Argument, the Impossible Deliberation Argument and the Wrong Question Argument. In making clear where these arguments go wrong, a positive account is developed of how trolley cases can inform the ethics of automated vehicles.

Journal Article
TL;DR: Ethical issues concerning brain–computer interfaces (BCIs) have already received a considerable amount of attention, but one particular form of BCI has not received the attention that it deserves: Affective BCIs that allow for the detection and stimulation of affective states.
Abstract: Ethical issues concerning brain–computer interfaces (BCIs) have already received a considerable amount of attention. However, one particular form of BCI has not received the attention that it deserves: affective BCIs, which allow for the detection and stimulation of affective states. This paper brings the ethical issues of affective BCIs into sharper focus. The paper briefly reviews recent applications of affective BCIs and considers ethical issues that arise from these applications. Ethical issues that affective BCIs share with other neurotechnologies are presented, and ethical concerns that are specific to affective BCIs are identified and discussed.

Journal Article
TL;DR: The crash of two 737 MAX passenger aircraft in late 2018 and early 2019, and the subsequent grounding of the entire fleet of 737 MAX jets, turned a global spotlight on Boeing's practices and culture.
Abstract: The crash of two 737 MAX passenger aircraft in late 2018 and early 2019, and subsequent grounding of the entire fleet of 737 MAX jets, turned a global spotlight on Boeing's practices and culture. Explanations for the crashes include: design flaws within the MAX's new flight control software system designed to prevent stalls; internal pressure to keep pace with Boeing's chief competitor, Airbus; Boeing's lack of transparency about the new software; and the lack of adequate monitoring of Boeing by the FAA, especially during the certification of the MAX and following the first crash. While these and other factors have been the subject of numerous government reports and investigative journalism articles, little to date has been written on the ethical significance of the accidents, in particular the ethical responsibilities of the engineers at Boeing and the FAA involved in designing and certifying the MAX. Lessons learned from this case include the need to strengthen the voice of engineers within large organizations. There is also the need for greater involvement of professional engineering societies in ethics-related activities and for broader focus on moral courage in engineering ethics education.

Journal Article
TL;DR: It is argued that mHealth technologies should instead be framed as digital companions, and that reframing the narrative cannot be the only means for avoiding harm caused to the NHS as a healthcare system by the introduction of mHealth tools.
Abstract: This article highlights the limitations of the tendency to frame health- and wellbeing-related digital tools (mHealth technologies) as empowering devices, especially as they play an increasingly important role in the National Health Service (NHS) in the UK. It argues that mHealth technologies should instead be framed as digital companions. This shift from empowerment to companionship is advocated by showing the conceptual, ethical, and methodological issues challenging the narrative of empowerment, and by arguing that such challenges, as well as the risk of medical paternalism, can be overcome by focusing on the potential for mHealth tools to mediate the relationship between recipients of clinical advice and givers of clinical advice, in ways that allow for contextual flexibility in the balance between patiency and agency. The article concludes by stressing that reframing the narrative cannot be the only means for avoiding harm caused to the NHS as a healthcare system by the introduction of mHealth tools. Future discussion will be needed on the overarching role of responsible design.

Journal Article
TL;DR: Research institutions have the duty to empower their research staff to steer away from QRPs and to explain how they realize that in a Research Integrity Promotion Plan.
Abstract: In many countries, attention to fostering research integrity started with a misconduct case that got a lot of media exposure. But there is an emerging consensus that questionable research practices (QRPs) are more harmful due to their high prevalence. QRPs have in common that they can help to make study results more exciting, more positive and more statistically significant. That makes them tempting to engage in. Research institutions have the duty to empower their research staff to steer away from QRPs and to explain how they realize that in a Research Integrity Promotion Plan. Avoiding perverse incentives in assessing researchers for career advancement is an important element in that plan. Research institutions, funding agencies and journals should make their research integrity policies as evidence-based as possible. The dilemmas and distractions researchers face are real and universal. We owe it to society to collaborate and to do our utmost to prevent QRPs and to foster research integrity.

Journal Article
TL;DR: Root cause identification of corruption risks, namely the noticeable impact of authoritarianism on project selection in Iran over the criteria of economic benefit or social good, is a significant outcome of this study.
Abstract: The construction industry consistently ranks amongst the highest contributors to global gross domestic product, as well as amongst the most corrupt. Corruption therefore inflicts significant risk on construction activities, and overall economic development. These facts are widely known, but the various sources and nature of corruption risks endemic to the Iranian construction industry, along with the degree to which such risks manifest, and the strength of their impact, remain undescribed. To address the gap, a mixed methods approach was used: a questionnaire received 103 responses, and these were followed up with semi-structured interviews. Results were processed using social network analysis. Four major corruption risks were identified: (1) procedural violations in awarding contracts, (2) misuse of contractual arrangements, (3) neglect of project management principles, and (4) irrational decision making. While corruption risks in Iran align with those found in other countries, with funds being misappropriated for financial gain, Iran also shows a strong inclination to champion projects that serve the government’s political agenda. Root cause identification of corruption risks, namely the noticeable impact of authoritarianism on project selection in Iran over the criteria of economic benefit or social good, is a significant outcome of this study.

Journal Article
TL;DR: It is suggested that authorship disputes may contribute to an unhealthy competitive dynamic that can undermine researchers’ wellbeing, team cohesion, and scientific integrity.
Abstract: Scientific authorship serves to identify and acknowledge individuals who “contribute significantly” to published research. However, specific authorship norms and practices often differ within and across disciplines, labs, and cultures. As a consequence, authorship disagreements are commonplace in team research. This study aims to better understand the prevalence of authorship disagreements, the factors that may lead to disagreements, as well as the extent and nature of resulting misbehavior. Methods include an international online survey of researchers who had published from 2011 to 2015 (8364 respondents). Of the 6673 who completed the main questions pertaining to authorship disagreement and misbehavior, nearly half (46.6%) reported disagreements regarding authorship naming; and discipline, rank, and gender had significant effects on disagreement rates. Paradoxically, researchers in multidisciplinary teams, which typically reflect a range of norms and values, were less likely to have faced disagreements regarding authorship. Respondents reported having witnessed a wide range of misbehavior including: instances of hostility (24.6%), undermining of a colleague’s work during meetings/talks (16.4%), cutting corners on research (8.3%), sabotaging a colleague’s research (6.4%), or producing fraudulent work to be more competitive (3.3%). These findings suggest that authorship disputes may contribute to an unhealthy competitive dynamic that can undermine researchers’ wellbeing, team cohesion, and scientific integrity.

Journal Article
TL;DR: Arguments supporting the claim that an approval procedure for genome-edited organisms for food or feed should include a broad assessment of societal, ethical and environmental concerns (so-called non-safety assessment) are presented and evaluated.
Abstract: This article presents and evaluates arguments supporting the claim that an approval procedure for genome-edited organisms for food or feed should include a broad assessment of societal, ethical and environmental concerns; so-called non-safety assessment. The core of the analysis is the requirement of the Norwegian Gene Technology Act that the sustainability, ethical and societal impacts of a genetically modified organism should be assessed prior to regulatory approval of the novel products. The article gives an overview of how this requirement has been implemented in regulatory practice, demonstrating that such assessment is feasible and justified. Even in situations where genome-edited organisms are considered comparable to non-modified organisms in terms of risk, the technology may have, in addition to social benefits, negative impacts that warrant assessments of the kind required in the Act. The main reason is the disruptive character of the genome editing technologies due to their potential for novel, ground-breaking solutions in agriculture and aquaculture, combined with the economic framework shaped by the patent system. Food is fundamental for a good life, biologically and culturally, which warrants stricter assessment procedures than what is required for other industries, at least in countries like Norway with a strong tradition of national control over agricultural markets and breeding programs.

Journal Article
TL;DR: The best worst method is proposed as a possible method to determine the weights of values, and is used in an evaluative fashion to examine the importance of values for three dimensions of acceptance, namely sociopolitical, market, and household acceptance.
Abstract: Proactively including the ethical and societal issues of new technologies could have a positive effect on their acceptance. These issues could be captured in terms of values. In the literature, the values stakeholders deem important for the development of technology have often been identified. However, the relative ranking of these values in relation to each other has rarely been studied. The best worst method is proposed as a possible method to determine the weights of values, and is used here in an evaluative fashion. The applicability of the method is tested by applying it to the case of smart meters, one of the main components of the smart grid. The importance of values is examined for three dimensions of acceptance, namely sociopolitical, market, and household acceptance.
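
For context, the best worst method reduces to a small optimization problem: a respondent names the best and worst value, compares them to all others on a 1-9 scale, and weights are chosen to minimize the largest violation of those comparisons. The sketch below implements the common linear variant of the method; the value names and comparison vectors are invented for illustration and are not the elicitation data from the smart meter study.

```python
# A minimal sketch of the linear best worst method for deriving value weights.
# Criteria names and comparison scores below are hypothetical.
import numpy as np
from scipy.optimize import linprog

values = ["privacy", "security", "affordability", "environment"]
best, worst = 1, 2                 # "security" judged best, "affordability" worst
a_bo = np.array([2, 1, 8, 4])      # Best-to-Others comparisons (1-9 scale)
a_ow = np.array([4, 8, 1, 2])      # Others-to-Worst comparisons

n = len(values)
# Decision vector [w_0, ..., w_{n-1}, xi]: minimise the consistency slack xi.
c = np.zeros(n + 1)
c[-1] = 1.0

def abs_le_xi(expr):
    # Encode |expr . w| <= xi as two linear inequalities: +/-(expr . w) - xi <= 0
    return [np.append(expr, -1.0), np.append(-expr, -1.0)]

rows = []
for j in range(n):
    e = np.zeros(n); e[best] += 1.0; e[j] -= a_bo[j]   # |w_best - a_bo[j]*w_j| <= xi
    rows += abs_le_xi(e)
    e = np.zeros(n); e[j] += 1.0; e[worst] -= a_ow[j]  # |w_j - a_ow[j]*w_worst| <= xi
    rows += abs_le_xi(e)

res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
              A_eq=np.array([[1.0] * n + [0.0]]), b_eq=[1.0],
              bounds=[(0, None)] * (n + 1))
weights, xi = res.x[:n], res.x[-1]
for name, wt in sorted(zip(values, weights), key=lambda t: -t[1]):
    print(f"{name}: {wt:.3f}")
print(f"consistency slack xi = {xi:.3f}")
```

The optimal slack xi doubles as a consistency indicator: the closer it is to zero, the more coherent the respondent's pairwise comparisons.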

Journal Article
TL;DR: The Ethical Valence Theory is proposed, which paints AV decision-making as a type of claim mitigation: different road users hold different moral claims on the vehicle’s behavior, and the vehicle must mitigate these claims as it makes decisions about its environment.
Abstract: The ethics of autonomous vehicles (AV) has received a great amount of attention in recent years, specifically in regard to their decisional policies in accident situations in which human harm is a likely consequence. Starting from the assumption that human harm is unavoidable, many authors have developed differing accounts of what morality requires in these situations. In this article, a strategy for AV decision-making is proposed, the Ethical Valence Theory, which paints AV decision-making as a type of claim mitigation: different road users hold different moral claims on the vehicle's behavior, and the vehicle must mitigate these claims as it makes decisions about its environment. Using the context of autonomous vehicles, the harm produced by an action and the uncertainties connected to it are quantified and accounted for through deliberation, resulting in an ethical implementation coherent with reality. The goal of this approach is not to define how moral theory requires vehicles to behave, but rather to provide a computational approach that is flexible enough to accommodate a number of 'moral positions' concerning what morality demands and what road users may expect, offering an evaluation tool for the social acceptability of an autonomous vehicle's ethical decision making.
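
To make the notion of claim mitigation concrete, the following is a deliberately simplified sketch of how a vehicle might score candidate manoeuvres against road users' moral claims. The claim strengths, harm probabilities, and the maximin-style aggregation rule are all illustrative assumptions, not the deliberation rules specified by the Ethical Valence Theory.

```python
# Hedged sketch of claim-mitigation-style deliberation: all numbers invented.
from dataclasses import dataclass

@dataclass
class RoadUser:
    name: str
    claim: float   # strength of this user's moral claim on the vehicle (0-1)

users = [RoadUser("pedestrian", 0.9), RoadUser("cyclist", 0.8),
         RoadUser("passenger", 0.6)]

# Candidate manoeuvres with per-user probabilities of serious harm (invented)
actions = {
    "brake_straight": {"pedestrian": 0.40, "cyclist": 0.05, "passenger": 0.10},
    "swerve_left":    {"pedestrian": 0.05, "cyclist": 0.50, "passenger": 0.15},
    "swerve_right":   {"pedestrian": 0.10, "cyclist": 0.05, "passenger": 0.45},
}

def worst_claim_violation(harms):
    # Score an action by the worst claim-weighted harm it imposes; picking the
    # minimum is one possible 'moral position' (a maximin-style rule).
    return max(u.claim * harms[u.name] for u in users)

for a, harms in actions.items():
    print(f"{a}: worst claim-weighted harm = {worst_claim_violation(harms):.2f}")
print("chosen:", min(actions, key=lambda a: worst_claim_violation(actions[a])))
```

Swapping in a different aggregation rule, for example expected claim-weighted harm rather than the worst case, is one way such a framework could accommodate the different 'moral positions' the authors mention.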

Journal Article
TL;DR: This research provides new insight into contextual specificities related to fair authorship distribution that can be instrumental in developing applicable training tools to identify, prevent, and mitigate authorship disagreement.
Abstract: Authorship is commonly used as the basis for the measurement of research productivity. It influences career progression and rewards, making it a valued commodity in a competitive scientific environment. To better understand authorship practices amongst collaborative teams, this study surveyed authors on collaborative journal articles published between 2011 and 2015. Of the 8364 respondents, 1408 responded to the final open-ended question, which solicited additional comments or remarks regarding the fair distribution of authorship in research teams. This paper presents the analysis of these comments, categorized into four main themes: (1) disagreements, (2) questionable behavior, (3) external influences regarding authorship, and (4) values promoted by researchers. Results suggest that some respondents find ways to effectively manage disagreements in a collegial fashion. Conversely, others explain how distribution of authorship can become a "blood sport" or a "horror story" which can negatively affect researchers' wellbeing, scientific productivity and integrity. Researchers fear authorship discussions and often try to avoid openly discussing the situation which can strain team interactions. Unethical conduct is more likely to result from deceit, favoritism, and questionable mentorship and may become more egregious when there is constant bullying and discrimination. Although values of collegiality, transparency and fairness were promoted by researchers, rank and need for success often overpowered ethical decision-making. This research provides new insight into contextual specificities related to fair authorship distribution that can be instrumental in developing applicable training tools to identify, prevent, and mitigate authorship disagreement.

Journal Article
TL;DR: Results of a systematic literature review of RRI practices show that practices already reflect the rich variety of values, dimensions and characteristics provided in the main definitions in use, although not all are addressed yet.
Abstract: This paper presents results of a systematic literature review of RRI practices which aimed to gather insights to further both the theoretical and practical development of RRI. Analysing practices of RRI and mapping out the main approaches, as well as the values, dimensions or characteristics pursued with those practices, can add to understanding of the more conceptual discussions of RRI and enhance the academic debate. The results, based on a corpus of 52 articles, show that practices already reflect the rich variety of values, dimensions and characteristics provided in the main definitions in use, although not all are addressed yet. Articles dealing with the uptake of RRI practices could be improved by including more methodological information. RRI practices may further the conceptual debate by including more reflection, and may foster mutual responsiveness between theory and practice by anticipating impacts early.

Journal Article
TL;DR: It is argued that a truly socially integrated and morally competent robot must be willing to communicate its objection to humans’ proposed violations of shared norms by using strategies such as blame-laden rebukes, even if doing so may violate other standing norms, such as politeness.
Abstract: Empirical studies have suggested that language-capable robots have the persuasive power to shape the shared moral norms based on how they respond to human norm violations. This persuasive power presents cause for concern, but also the opportunity to persuade humans to cultivate their own moral development. We argue that a truly socially integrated and morally competent robot must be willing to communicate its objection to humans’ proposed violations of shared norms by using strategies such as blame-laden rebukes, even if doing so may violate other standing norms, such as politeness. By drawing on Confucian ethics, we argue that a robot’s ability to employ blame-laden moral rebukes to respond to unethical human requests is crucial for cultivating a flourishing “moral ecology” of human–robot interaction. Such a positive moral ecology allows human teammates to develop their own moral reflection skills and grow their own virtues. Furthermore, this ability can and should be considered as one criterion for assessing artificial moral agency. Finally, this paper discusses potential implications of the Confucian theories for designing socially integrated and morally competent robots.

Journal Article
TL;DR: The VSD methodology, particularly as applied to nano-bio-info-cogno technologies, has an insufficient grounding for the determination of moral values, and the value investigations of VSD are deconstructed to illustrate both its strengths and weaknesses.
Abstract: Safe-by-design (SBD) frameworks for the development of emerging technologies have become an ever more popular means by which scholars argue that transformative emerging technologies can safely incorporate human values. One such popular SBD methodology is called value sensitive design (VSD). A central tenet of this design methodology is to investigate stakeholder values and design those values into technologies during early stage research and development. To accomplish this, the VSD framework mandates that designers consult the philosophical and ethical literature to best determine how to weigh moral trade-offs. However, the VSD framework also concedes the universalism of moral values, particularly the values of freedom, autonomy, equality, trust, privacy and justice. This paper argues that the VSD methodology, particularly as applied to nano-bio-info-cogno technologies, has an insufficient grounding for the determination of moral values. As such, the value investigations of VSD are deconstructed to illustrate both its strengths and weaknesses. This paper also provides possible modalities for strengthening the VSD methodology, particularly through the application of moral imagination, and argues that moral imagination exceeds the boundaries of moral intuitions in the development of novel technologies.

Journal Article
TL;DR: It is concluded that ethical decisions regarding moral robots should be based on avoiding what is immoral in combination with a pluralistic ethical method of solving moral problems, rather than relying on a particular ethical approach, so as to avoid a normative bias.
Abstract: This paper examines the ethical pitfalls and challenges that non-ethicists, such as researchers and programmers in the fields of computer science, artificial intelligence and robotics, face when building moral machines. Whether ethics is "computable" depends on how programmers understand ethics in the first place and on the adequacy of their understanding of the ethical problems and methodological challenges in these fields. Researchers and programmers face at least two types of problems due to their general lack of ethical knowledge or expertise. The first type is so-called rookie mistakes, which could be addressed by providing these people with the necessary ethical knowledge. The second, more difficult methodological issue concerns areas of peer disagreement in ethics, where no easy solutions are currently available. This paper examines several existing approaches to highlight the ethical pitfalls and challenges involved. Familiarity with these and similar problems can help programmers to avoid pitfalls and build better moral machines. The paper concludes that ethical decisions regarding moral robots should be based on avoiding what is immoral (i.e. prohibiting certain immoral actions) in combination with a pluralistic ethical method of solving moral problems, rather than relying on a particular ethical approach, so as to avoid a normative bias.

Journal Article
TL;DR: This paper explores the ethical desirability of geoengineering through an overall review, conducted with a standard methodology, of the existing literature on the ethics of geoengineering, and observes the debate's semantic diversity and ethical ambiguity.
Abstract: Geoengineering as a technological intervention to avert dangerous climate change has been on the table at least since 2006. The global outreach of the technology exercised in a non-encapsulated system, the concerns with unprecedented levels and scales of impact, and the overarching interdisciplinarity of the project make the geoengineering debate ethically quite relevant and complex. This paper explores the ethical desirability of geoengineering through an overall review of the existing literature on the ethics of geoengineering. It identifies the relevant literature on the ethics of geoengineering by employing a standard methodology. Based on various framings of the major ethical arguments and their subsets, the results section presents the opportunities and challenges at stake in geoengineering from an ethical point of view. The discussion section takes a keen interest in identifying the evolving dynamics of the debate and its grey areas, with underdeveloped arguments being brought to the foreground, and in highlighting the arguments that are likely to emerge in the future as key contenders. It observes the semantic diversity and ethical ambiguity, the academic lop-sidedness of the debate, missing contextual setting, and the need for interdisciplinary approaches, public engagement, and region-specific assessment of ethical issues. Recommendations are made to provide a useful platform for the second generation of geoengineering ethicists to help advance the debate to more decisive domains with the required clarity and caution.

Journal Article
TL;DR: It is shown that a reduction in light pollution, and more boldly a better balance of lighting and darkness, can be achieved via the design of future autonomous vehicles, while simultaneously introducing questions of autonomous vehicles into debates about light pollution.
Abstract: This paper proposes that autonomous vehicles should be designed to reduce light pollution. In support of this specific proposal, a moral assessment of autonomous vehicles more comprehensive than the dilemmatic life-and-death questions of trolley problem-style situations is presented. The paper therefore consists of two interrelated arguments. The first is that autonomous vehicles are currently still a technology in development, and not one that has acquired its definitive shape, meaning the design of both the vehicles and the surrounding infrastructure is open-ended. Design for values is utilized to articulate a path forward, by which engineering ethics should strive to incorporate values into a technology during its development phase. Second, it is argued that nighttime lighting, a critical supporting infrastructure, should be a prima facie consideration for autonomous vehicles during their development phase. It is shown that a reduction in light pollution, and more boldly a better balance of lighting and darkness, can be achieved via the design of future autonomous vehicles. Two case studies are examined (parking lots and highways) through which autonomous vehicles may be designed for “driving in the dark.” Nighttime lighting issues are thus inserted into a broader ethics of autonomous vehicles, while simultaneously introducing questions of autonomous vehicles into debates about light pollution.

Journal Article
TL;DR: Split by extreme polar forces, and for reasons still unknown to the public, Beall deliberately shut down his blog, causing academic chaos among global scholars, including disruption to the open access movement.
Abstract: A very important event took place on January 15, 2017. On that day, the Jeffrey Beall blog (www.scholarlyoa.com) was silently, and suddenly, shut down by Beall himself. A profoundly divisive and controversial site, the Beall blog represented an existential threat to those journals and publishers that were listed there. On the other hand, the Beall blog gave critics of bad publishing practices hope that a culture of public shaming was perhaps the only way to root out those journals, editors, and publishers who did not respect basic publishing ethical principles and intrinsic academic values. While members of the former group vilified Beall and his blog, members of the latter camp tried to elevate it to the level of policy. Split by extreme polar forces, and for reasons still unknown to the public, Beall deliberately shut down his blog, causing academic chaos among global scholars, including disruption to the open access movement.