
Showing papers in "Ethics and Information Technology in 2010"


Journal ArticleDOI
TL;DR: This paper articulates a set of ethical concerns that must be addressed before embarking on future research in social networking sites, including the nature of consent, properly identifying and respecting expectations of privacy on social network sites, strategies for data anonymization prior to public release, and the relative expertise of institutional review boards when confronted with research projects based on data gleaned from social media.
Abstract: In 2008, a group of researchers publicly released profile data collected from the Facebook accounts of an entire cohort of college students from a US university. While good-faith attempts were made to hide the identity of the institution and protect the privacy of the data subjects, the source of the data was quickly identified, placing the privacy of the students at risk. Using this incident as a case study, this paper articulates a set of ethical concerns that must be addressed before embarking on future research in social networking sites, including the nature of consent, properly identifying and respecting expectations of privacy on social network sites, strategies for data anonymization prior to public release, and the relative expertise of institutional review boards when confronted with research projects based on data gleaned from social media.

727 citations


Journal ArticleDOI
TL;DR: In this article, a novel argument for moral consideration based on social relations is presented, which can assist us in shaping our relations to intelligent robots and, by extension, to all artificial and biological entities that appear to us as more than instruments for our human purposes.
Abstract: Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to be able to grant some degree of moral consideration to some intelligent social robots: it sketches a novel argument for moral consideration based on social relations. It is shown that to further develop this argument we need to revise our existing ontological and social-political frameworks. It is suggested that we need a social ecology, which may be developed by engaging with Western ecology and Eastern worldviews. Although this relational turn raises many difficult issues and requires more work, this paper provides a rough outline of an alternative approach to moral consideration that can assist us in shaping our relations to intelligent robots and, by extension, to all artificial and biological entities that appear to us as more than instruments for our human purposes.

201 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that quasi-moral robots can be drawn into the social-moral world by learning to create the appearance of emotions and the appearance of being fully moral, and they also argue that this way of drawing robots into our social-moral world is less problematic than it might first seem.
Abstract: Can we build 'moral robots'? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such 'psychopathic' robots would be dangerous since they would lack full moral agency. However, I will argue that in the future we might nevertheless be able to build quasi-moral robots that can learn to create the appearance of emotions and the appearance of being fully moral. I will also argue that this way of drawing robots into our social-moral world is less problematic than it might first seem, since human morality also relies on such appearances.

93 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that a virtue-based perspective is needed to correct for a strong utilitarian bias in the research methodologies of existing empirical studies on the social and ethical impact of IT.
Abstract: This paper argues in favor of more widespread and systematic applications of a virtue-based normative framework to questions about the ethical impact of information technologies, and social networking technologies in particular. The first stage of the argument identifies several distinctive features of virtue ethics that make it uniquely suited to the domain of IT ethics, while remaining complementary to other normative approaches. I also note its potential to reconcile a number of significant methodological conflicts and debates in the existing literature, including tensions between phenomenological and constructivist perspectives. Finally, I claim that a virtue-based perspective is needed to correct for a strong utilitarian bias in the research methodologies of existing empirical studies on the social and ethical impact of IT. The second part of the paper offers an abbreviated demonstration of the merits of virtue ethics by showing how it might usefully illuminate the moral dimension of emerging social networking technologies. I focus here on the potential impact of such technologies on three virtues typically honed in communicative practices: patience, honesty and empathy.

89 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that decision support systems must be subject to active and continuous assessment and regulation because of the ways in which they are likely to contribute to economic and social inequality.
Abstract: In the future, systems of ambient intelligence will include decision support systems that automate the process of discrimination among people who seek entry into environments and who search for the opportunities available there. This article argues that these systems must be subject to active and continuous assessment and regulation because of the ways in which they are likely to contribute to economic and social inequality. This regulatory constraint must involve limitations on the collection and use of information about individuals and groups. The article explores a variety of rationales or justifications for establishing these limits. It emphasizes the unintended consequences that flow from the use of these systems as the most compelling rationale.

86 citations


Journal ArticleDOI
Wendell Wallach
TL;DR: Building artificial moral agents (AMAs) is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the "ought" of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology as discussed by the authors.
Abstract: Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the "ought" of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, in the making of moral decisions. However, assembling a system from the bottom-up which is capable of accommodating moral considerations draws attention to the importance of a much wider array of mechanisms in honing moral intelligence. Moral machines need not emulate human cognitive faculties in order to function satisfactorily in responding to morally significant situations. But working through methods for building AMAs will have a profound effect in deepening an appreciation for the many mechanisms that contribute to a moral acumen, and the manner in which these mechanisms work together. Building AMAs highlights the need for a comprehensive model of how humans arrive at satisfactory moral judgments.

76 citations


Journal ArticleDOI
TL;DR: The authors think the use of the capabilities approach will be especially valuable for improving the ability of impaired persons to interface more effectively with their physical and social environments.
Abstract: As we near a time when robots may serve a vital function by becoming caregivers, it is important to examine the ethical implications of this development. By applying the capabilities approach as a guide to both the design and use of robot caregivers, we hope that this will maximize opportunities to preserve or expand freedom for care recipients. We think the use of the capabilities approach will be especially valuable for improving the ability of impaired persons to interface more effectively with their physical and social environments.

75 citations


Journal ArticleDOI
TL;DR: In this article, the authors explore the philosophical critique of this claim and also look at how the robots of today are impacting our ability to fight wars in a just manner, and compare those findings with the claims made by robotics researchers that their machines are able to behave more ethically on the battlefield than human soldiers.
Abstract: Telerobotically operated and semiautonomous machines have become a major component in the arsenals of industrial nations around the world. By the year 2015 the United States military plans to have one-third of their combat aircraft and ground vehicles robotically controlled. Although there are many reasons for the use of robots on the battlefield, perhaps one of the most interesting assertions is that these machines, if properly designed and used, will result in a more just and ethical implementation of warfare. This paper will focus on these claims by looking at what has been discovered about the capability of humans to behave ethically on the battlefield, and then comparing those findings with the claims made by robotics researchers that their machines are able to behave more ethically on the battlefield than human soldiers. Throughout the paper we will explore the philosophical critique of this claim and also look at how the robots of today are impacting our ability to fight wars in a just manner.

57 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that the socio-technical system conditions the cubicle warrior to dehumanize the enemy, and as a result, the warrior is morally disengaged from his destructive and lethal actions.
Abstract: In the last decade we have entered the era of remote-controlled military technology. The excitement about this new technology should not mask the ethical questions that it raises. A fundamental ethical question is who may be held responsible for civilian deaths. In this paper we will discuss the role of the human operator or so-called 'cubicle warrior', who remotely controls the military robots behind visual interfaces. We will argue that the socio-technical system conditions the cubicle warrior to dehumanize the enemy. As a result the cubicle warrior is morally disengaged from his destructive and lethal actions. This challenges what he should know to make responsible decisions (the so-called knowledge condition). Now and in the near future, three factors may increase this moral disengagement even further by weakening the operator's locus of control orientation: (1) photoshopping the war; (2) the moralization of technology; (3) the speed of decision-making. As a result, cubicle warriors can no longer reasonably be held responsible for the decisions they make.

55 citations


Journal ArticleDOI
TL;DR: This paper uses philosophical accounts on the relationship between trust and knowledge in science to apprehend this relationship on the Web and draws conclusions about the forms of transparency needed in such systems to support epistemically vigilant behaviour, which empowers users to become responsible and accountable knowers.
Abstract: In this paper I use philosophical accounts on the relationship between trust and knowledge in science to apprehend this relationship on the Web. I argue that trust and knowledge are fundamentally entangled in our epistemic practices. Yet despite this fundamental entanglement, we do not trust blindly. Instead we make use of knowledge to rationally place or withdraw trust. We use knowledge about the sources of epistemic content as well as general background knowledge to assess epistemic claims. Hence, although we may have a default to trust, we remain and should remain epistemically vigilant; we look out and need to look out for signs of insincerity and dishonesty in our attempts to know. A fundamental requirement for such vigilance is transparency: in order to critically assess epistemic agents, content and processes, we need to be able to access and address them. On the Web, this request for transparency becomes particularly pressing if (a) trust is placed in unknown human epistemic agents and (b) if it is placed in non-human agents, such as algorithms. I give examples of the entanglement between knowledge and trust on the Web and draw conclusions about the forms of transparency needed in such systems to support epistemically vigilant behaviour, which empowers users to become responsible and accountable knowers.

51 citations


Journal ArticleDOI
TL;DR: This article argued that video games are defensible from the perspective of Kantian, Aristotelian, and utilitarian moral theories, and argued that theoretical and empirical arguments against violent video games often suffer from a number of significant shortcomings that make them ineffective.
Abstract: The effect of violent video games is among the most widely discussed topics in media studies, and for good reason. These games are immensely popular, but many seem morally objectionable. Critics attack them for a number of reasons ranging from their capacity to teach players weapons skills to their ability to directly cause violent actions. This essay shows that many of these criticisms are misguided. Theoretical and empirical arguments against violent video games often suffer from a number of significant shortcomings that make them ineffective. This essay argues that video games are defensible from the perspective of Kantian, Aristotelian, and utilitarian moral theories.

Journal ArticleDOI
TL;DR: The experience of one's identity may affect whether cases of unwarranted discrimination resulting from ubiquitous differentiations and identifications within an Ambient Intelligent environment will become a matter of societal concern.
Abstract: The tendency towards an increasing integration of the informational web into our daily physical world (in particular in so-called Ambient Intelligent technologies, which combine ideas derived from the fields of Ubiquitous Computing, Intelligent User Interfaces and Ubiquitous Communication) is likely to make the development of successful profiling and personalization algorithms, like the ones currently used by internet companies such as Amazon, even more important than it is today. I argue that the way in which we experience ourselves necessarily goes through a moment of technical mediation. Because of this, algorithmic profiling that thrives on continuous reconfiguration of identification should not be understood as a supplementary process which maps a pre-established identity that exists independently from the profiling practice. In order to clarify how the experience of one's identity can become affected by such machine-profiling, a theoretical exploration of identity is made (including Agamben's understanding of an apparatus, Ricoeur's distinction between idem- and ipse-identity, and Stiegler's notion of a conjunctive-disjunctive relationship towards retentional apparatuses). Although it is clear that no specific predictions about the impact of Ambient Intelligent technologies can be made without taking more particulars into account, the theoretical concepts are used to describe three general scenarios about the way in which the experience of identity might become affected. To conclude, I argue that the experience of one's identity may affect whether cases of unwarranted discrimination resulting from ubiquitous differentiations and identifications within an Ambient Intelligent environment will become a matter of societal concern.

Journal ArticleDOI
TL;DR: This paper takes a look at the relatively new area of culturing neural tissue and embodying it in a mobile robot platform, essentially giving a robot a biological brain.
Abstract: In this paper a look is taken at the relatively new area of culturing neural tissue and embodying it in a mobile robot platform, essentially giving a robot a biological brain. Present technology and practice are discussed. New trends in this area and their potential effects are also indicated. This has a potential major impact with regard to society and ethical issues, and hence some initial observations are made. Some initial issues are also considered with regard to the potential consciousness of such a brain.

Journal ArticleDOI
TL;DR: The results show that public information and discussion about surveillance and social networking platforms are important for activating critical information behaviour; in the case of studiVZ, public discussions influenced students' knowledge and information behaviour.
Abstract: This paper presents some results of a case study of the usage of the social networking platform studiVZ by students in Salzburg, Austria. The topic is framed by the context of electronic surveillance. An online survey, based on a questionnaire consisting of 35 (single and multiple) choice questions, 3 open-ended questions, and 5 interval-scaled questions, was carried out (N = 674). Students' general knowledge about surveillance was assessed by calculating a surveillance knowledge index, and their critical awareness of surveillance by calculating a surveillance critique index. Knowledge about studiVZ as well as information behaviour on the platform were analyzed and related to the surveillance parameters. The results show that public information and discussion about surveillance and social networking platforms are important for activating critical information behaviour. In the case of studiVZ, a change of the terms of use in 2008, which brought about the possibility of targeted personalized advertising, was the subject of public discussions that influenced students' knowledge and information behaviour.

Journal ArticleDOI
TL;DR: It is found that open-source designs for software and encyclopaedias are likely to converge in the future towards a mid-level of discretion, in which the anonymous user is no longer invested with unquestioning trust.
Abstract: Open-source communities that focus on content rely squarely on the contributions of invisible strangers in cyberspace. How do such communities handle the problem of trusting that strangers have good intentions and adequate competence? This question is explored in relation to communities in which such trust is a vital issue: peer production of software (FreeBSD and Mozilla in particular) and encyclopaedia entries (Wikipedia in particular). In the context of open-source software, it is argued that trust was inferred from an underlying 'hacker ethic', which already existed. The Wikipedian project, by contrast, had to create an appropriate ethic along the way. In the interim, the assumption simply had to be that potential contributors were trustworthy; they were granted 'substantial trust'. Subsequently, projects from both communities introduced rules and regulations which partly substituted for the need to perceive contributors as trustworthy. They faced a design choice in the continuum between a high-discretion design (granting a large amount of trust to contributors) and a low-discretion design (leaving only a small amount of trust to contributors). It is found that open-source designs for software and encyclopaedias are likely to converge in the future towards a mid-level of discretion. In such a design the anonymous user is no longer invested with unquestioning trust.

Journal ArticleDOI
TL;DR: In this article, the authors explore the relation between the management of our (moral) identities and identity management as conceptualized in IT discourse and explore the relationship between identity management and the need to manage our moral identities and their related information.
Abstract: Over the past decade Identity Management has become a central theme in information technology, policy, and administration in the public and private sectors. In these contexts the term 'Identity Management' is used primarily to refer to ways and methods of dealing with registration and authorization issues regarding persons in organizational and service-oriented domains. Especially due to the growing range of choices and options for, and the enhanced autonomy and rights of, employees, citizens, and customers, there is a growing demand for systems that enable the regulation of rights, duties, responsibilities, entitlements and access of innumerable people simultaneously. 'Identity Management' or 'Identity Management Systems' have become important headings under which such systems are designed and implemented. But there is another meaning of the term 'identity management', which is clearly related and which has gained currency. This second construal refers to the need to manage our moral identities and our identity-related information. This paper explores the relation between the management of our (moral) identities and 'Identity Management' as conceptualized in IT discourse.

Journal ArticleDOI
TL;DR: The alleged right to privacy has increasingly come to be about informational privacy, especially in light of persistent technological breakthroughs, but the precise nature of this threat remains elusive.
Abstract: The alleged right to privacy has increasingly come to be about informational privacy, especially in light of persistent technological breakthroughs. The amount of information capable of being known about each of us is by now familiar, yet remains staggering. Whenever we use our computers, show a discount card at the supermarket, order a pizza, apply for a loan or job, use a bank or credit card, or engage in a host of other activities, multiple bits of our personal data are collected, collated, distributed, and stored. This state of affairs strikes many as cause for concern, even alarm. But getting clear on the nature of this threat is quite elusive. That there is a zone of informational privacy, in some sense and of some sort, seems uncontroversial. But of what sense and of what sort? Further, why think we might have a right to informational privacy, such that a breach of that zone would be wrongful? Several have tried to make the case that a threat to informational privacy is a threat to our personal identity. It is difficult to articulate the precise nature of this connection, however. Here is one quite literal attempt: [I]nformational privacy requires [a] ... radical reinterpretation..., achieved by considering each person as constituted by his or her information, and hence by understanding a breach of one’s informational privacy as a form of aggression towards one’s personal identity.

Journal ArticleDOI
TL;DR: In this paper, the authors examined various historical events involving social networking sites through the lens of the PAPA framework to highlight select ethical issues regarding the sharing of information in the social-networking age.
Abstract: The advent of social networking sites has changed the face of the information society Mason wrote of 23 years ago, necessitating a reevaluation of the social contracts designed to protect the members of that society. Despite the technological and societal changes that have happened over the years, the information society is still based on the exchange of information. This paper examines various historical events involving social networking sites through the lens of the PAPA framework (Mason 1986) to highlight select ethical issues regarding the sharing of information in the social-networking age. Four preliminary principles are developed to guide the ethical use of social networking sites (SNS).

Journal ArticleDOI
TL;DR: The benefits of mainstreaming the excluded are numerous as discussed by the authors, however, e-inclusion does raise ethical issues, and a few of the key ones are discussed in this paper.
Abstract: E-inclusion is getting a lot of attention in Europe these days. The European Commission and EU Member States have initiated e-inclusion strategies aimed at reaching out to the e-excluded and bringing them into the mainstream of society and the economy. The benefits of mainstreaming the excluded are numerous. Good practices play an important role in the strategies, and examples can be found in e-health, e-learning, e-government, e-inclusion and other e-domains. So laudable seems the rationale for e-inclusion that few have questioned its benefits. In fact, e-inclusion does raise ethical issues, and this paper discusses a few of the key ones. The paper draws several conclusions, principally regarding the need for some empirical research on what happens to the e-excluded once they have access to information and communications technologies, notably the Internet.

Journal ArticleDOI
TL;DR: In this article, the authors argue that there is value in thinking about ethical issues related to information technologies, especially, though not exclusively, issues concerning identity and identity management, explicitly in terms of respect for persons understood as a core value of IT ethics.
Abstract: There is surprisingly little attention in Information Technology ethics to respect for persons, either as an ethical issue or as a core value of IT ethics or as a conceptual tool for discussing ethical issues of IT. In this, IT ethics is very different from another field of applied ethics, bioethics, where respect is a core value and conceptual tool. This paper argues that there is value in thinking about ethical issues related to information technologies, especially, though not exclusively, issues concerning identity and identity management, explicitly in terms of respect for persons understood as a core value of IT ethics. After explicating respect for persons, the paper identifies a number of ways in which putting the concept of respect for persons explicitly at the center of both IT practice and IT ethics could be valuable, then examines some of the implicit and problematic assumptions about persons, their identities, and respect that are built into the design, implementation, and use of information technologies and are taken for granted in discussions in IT ethics. The discussion concludes by asking how better conceptions of respect for persons might be better employed in IT contexts or brought better to bear on specific issues concerning identity in IT contexts.

Journal ArticleDOI
TL;DR: The tendency in intellectual property law to commodify information embedded in software and profiles could counteract this shift to transparency and control in the profiling process by which knowledge is produced from these data.
Abstract: Profiling technologies are the facilitating force behind the vision of Ambient Intelligence in which everyday devices are connected and embedded with all kinds of smart characteristics enabling them to take decisions in order to serve our preferences without us being aware of it. These technological practices have considerable impact on the process by which our personhood takes shape and pose threats like discrimination and normalisation. The legal response to these developments should move away from a focus on entitlements to personal data, towards making transparent and controlling the profiling process by which knowledge is produced from these data. The tendency in intellectual property law to commodify information embedded in software and profiles could counteract this shift to transparency and control. These rights obstruct the access and contestation of the design of the code that impacts one's personhood. This triggers a political discussion about the public nature of this code and forces us to rethink the relations between property, privacy and personhood in the digital age.

Journal ArticleDOI
TL;DR: This paper discusses the design of a working machine for making ethical decisions, the N-Reasons platform, applied to the ethics of robots, and sketches experimental results showing that the platform is a success, as well as pointing to ways it can be improved.
Abstract: We can learn about human ethics from machines. We discuss the design of a working machine for making ethical decisions, the N-Reasons platform, applied to the ethics of robots. The N-Reasons platform builds on web-based surveys and experiments to enable participants to make better ethical decisions. Their decisions are better than those elicited by our existing surveys in three ways. First, they are social decisions supported by reasons. Second, the results rest on weaker premises, as no exogenous expertise (aside from that provided by the participants) is needed to seed the survey. Third, N-Reasons is designed to support experiments so we can learn how to improve the platform. We sketch experimental results that show the platform is a success, as well as pointing to ways it can be improved.

Journal ArticleDOI
TL;DR: It is suggested that the use of ‘inaccurate’ data can potentially play a useful role to preserve the informational autonomy of the individual, and that any understandings of privacy or personal data protection that would tend to unduly limit such potential should be critically questioned.
Abstract: The accuracy principle is one of the key standards of informational privacy. It epitomises the obligation for those processing personal data to keep their records accurate and up-to-date, with the aim of protecting individuals from unfair decisions. Currently, however, different practices being put in place in order to enhance the protection of individuals appear to deliberately rely on the use of 'inaccurate' personal information. This article explores such practices and tries to assess their potential for privacy protection, giving particular attention to their legal implications and to related ethical issues. Ultimately, it suggests that the use of 'inaccurate' data can potentially play a useful role to preserve the informational autonomy of the individual, and that any understandings of privacy or personal data protection that would tend to unduly limit such potential should be critically questioned.

Journal ArticleDOI
TL;DR: The authors found that online users attribute perceptions of moral qualities to Web sites and, further, that differential perceptions of morality affected the extent of persuasion, and that the web sites' perceived morality and participants' worldview predicted credibility, persuasiveness, and attitudes toward Web sites.
Abstract: This study extended the scope of previous findings in human-computer interaction research within the computers-are-social-actors paradigm by showing that online users attribute perceptions of moral qualities to Web sites and, further, that differential perceptions of morality affected the extent of persuasion. In an experiment (N = 138) that manipulated four morality conditions (universalist, relativist, egotistic, control) across worldview, a measured independent variable, users were asked to evaluate a Web site designed to aid them in making ethical decisions. Web sites offered four different types of ethical advice as participants contemplated cases involving ethical quandaries. Perceptions of the Web sites' moral qualities varied depending on the type of advice given. Further, the Web sites' perceived morality and participants' worldview predicted credibility, persuasiveness, and attitudes toward the Web sites.

Journal ArticleDOI
TL;DR: In this paper, a pragmatist approach to social networking sites based on the work of Richard Rorty is developed, with the aim of finding a middle ground, and the argument proceeds in three steps.
Abstract: What do Social Networking Sites (SNS) 'do to us': are they a damning threat or an emancipating force? Recent publications on the impact of "Web 2.0" proclaim very opposite evaluative positions. With the aim of finding a middle ground, this paper develops a pragmatist approach to SNS based on the work of Richard Rorty. The argument proceeds in three steps. First, we analyze SNS as conversational practices. Second, we outline, in the form of an imaginary conversation between Rorty and Heidegger, a positive and a negative 'conversational' view on SNS. Third, we deploy a reflection, again using Rortian notions, on that evaluation, starting from the concept of 'self-reflectivity.' Finally, the relations between these three steps are investigated in more detail. By way of the sketched technique, we can interrelate the two opposing sides of the recent debates (hope and threat) and judge SNS in all their ambiguity.

Journal ArticleDOI
TL;DR: In this article, the authors argue that the development and convergence of information and communication technologies (ICT) is creating a global network of surveillance capabilities which affect the traveler, and as such the emerging global surveillance network has been referred to as the travel panopticon.
Abstract: I argue in this paper that the development and convergence of information and communication technologies (ICT) is creating a global network of surveillance capabilities which affect the traveler. These surveillance capabilities are reminiscent of the 18th-century philosopher Jeremy Bentham's panopticon, and as such the emerging global surveillance network has been referred to as the travel panopticon. I argue that the travel panopticon is corrosive of personal autonomy, and in doing so I describe and analyse various philosophical approaches to personal autonomy.

Journal ArticleDOI
TL;DR: In this article, the authors argue that the problem of 'moral luck' is an unjustly neglected topic within Computer Ethics, which is unfortunate given that the very nature of computer technology, its 'logical malleability', leads to ever greater levels of complexity, unreliability and uncertainty.
Abstract: I argue that the problem of 'moral luck' is an unjustly neglected topic within Computer Ethics. This is unfortunate given that the very nature of computer technology, its 'logical malleability', leads to ever greater levels of complexity, unreliability and uncertainty. The ever-widening contexts of application in turn lead to greater scope for the operation of chance and the phenomenon of moral luck. Moral luck bears down most heavily on notions of professional responsibility, and on the identification and attribution of responsibility. It is immunity from luck that conventionally marks out moral value from other kinds of values such as instrumental, technical, and use value. The paper describes the nature of moral luck and its erosion of the scope of responsibility and agency. Moral luck poses a challenge to the kinds of theoretical approaches often deployed in Computer Ethics when analyzing moral questions arising from the design and implementation of information and communication technologies. The paper considers the impact on consequentialism, virtue ethics, and duty ethics. In addressing cases of moral luck within Computer Ethics, I argue that it is important to recognise the ways in which different types of moral systems are vulnerable, or resistant, to moral luck. Different resolutions are possible depending on the moral framework adopted. Equally, resolution of cases will depend on fundamental moral assumptions. The problem of moral luck in Computer Ethics should prompt us to new ways of looking at risk, accountability and responsibility.

Journal ArticleDOI
TL;DR: In this review, Wiegel finds that the book provides a good overview of most of the current research in the field, nicely setting the stage in Chap. 5 for a discussion of the relationship between engineer and philosopher; the cooperation between the two raises various issues that occupy the remainder of the book, including questions such as: Who or what is leading?
Abstract: Robots and smart software have an increasing impact on our lives, and they make decisions that might have a profound effect on our welfare. Some of these decisions have a moral dimension. Hence, we need to consider whether (a) we want them making such decisions, and (b) if so, how we proceed in equipping machines with "moral sensitivity" or even with "moral decision-making abilities." Wallach and Allen make an eloquent and forceful case that we should seriously consider granting machines such decision-making power in their book, Moral Machines: Teaching Robots Right from Wrong. Their argument (in Chaps. 1 and 2) is that machines are deployed in situations in which they make decisions that have a moral impact. Hence we should extend them with sensitivity to the moral dimensions of the situations in which the increasingly autonomous machines will inevitably find themselves. This may lead to machines making moral decisions. The machines they refer to may be anything from software and softbots to robots, and in particular combinations of these. Through interconnected and open systems, situations might arise that are neither desirable nor foreseeable when the systems were designed. Whether we can actually build such systems (Chap. 3) is still an open question. If we were to engineer artificially moral systems, would they count as truly moral systems? Wallach and Allen conclude (Chap. 4) by noting that human and artificial morality will be different, but that there is no reason a priori to rule out the notion of artificial morality. Moreover, they argue that the very attempt to construct artificial morality will prove worthwhile for all involved. Raising these points is the first, and possibly the greatest, strength of their book. It puts the theme squarely on the agenda. Yet, theirs is also a book of open and unanswered questions.
On virtually all topics, the jury is still out: no common opinions have been established, no approaches proven, and no answers found. The book also serves to illustrate how young this field of research still is, though at times it is a little disconcerting to find, yet again, that the answer to one of these open questions might be A, but then again, it might not. Writing a book that touches on several research domains (in this case moral philosophy, robotics, software development, and neuroscience) is always a hazardous enterprise. There is a real risk of not providing enough depth and thus losing the attention of specialists in each domain; the specialist will be lost unless there is enough to be learned from the other domains to provide a fresh perspective on the research in his own. Providing an overview of the research on artificial morality, spanning moral philosophy and machine decision-making, is a tall order. Though the field is relatively new, there is already much and widely varied research being conducted, ranging from moral learning algorithms and various logics for modeling moral decision-making to neural nets and nano-technology. Overall, the book provides a good overview of most of the current research in the field, nicely setting the stage in Chap. 5 for a discussion of the relationship between engineer and philosopher; the cooperation between the two raises various issues that occupy the remainder of the book, including questions such as: Who or what is leading? How can philosophers formulate their theories such that engineers …

Journal ArticleDOI
TL;DR: Moral Machines is an introduction to this newly emerging area of machine ethics, written primarily to stimulate further inquiry by both ethicists and engineers; as such, it does not get bogged down in dense philosophical prose or technical specification.
Abstract: Can a machine be a genuine cause of harm? The obvious answer is affirmative. The toaster that flames up and burns down a house is said to be the cause of the fire, and in some weak sense, we might even say that the toaster was responsible for it; but the toaster is broken or defective, not immoral and irresponsible, though possibly the engineer who designed it is. But what about machines that decide things before they act, that determine their own course of action? Somewhere between digital thermostats and the murderous HAL of 2001: A Space Odyssey, autonomous machines are quickly gaining in complexity, and most certainly a day is coming when we will want to blame them for genuinely causing harm, even if philosophical issues concerning their moral status have not been fully settled. When will that be? Without lapsing into futurology or science fiction, Wallach and Allen predict that within the next few years, "there will be a catastrophic incident brought about by a computer system making a decision independent of human oversight" (p. 4). In this light, philosophers and engineers should not wait for a threat of robot domination before determining how to keep the behavior of machines within the scope of morality. The practical concerns to motivate such an inquiry, and indeed this book, are already here. Moral Machines is an introduction to this newly emerging area of machine ethics. It is written primarily to stimulate further inquiry by both ethicists and engineers, and as such, it does not get bogged down in dense philosophical prose or technical specification. It is, in other words, comprehensible by the general reader, who will walk away informed about why machine morality is already necessary, where we are with various attempts to implement it, and the authors' recommendations of where we need to be. Chapter One notes the inevitable arrival of autonomous machines and the possible harm that can come from them.
Some automated agents that are quickly integrating into modern life do things like regulate the power grid in the United States, monitor financial transactions, make medical diagnoses and fight on the battlefield. A failure of these systems to behave within moral parameters could have devastating consequences. As they become more and more autonomous, Wallach and Allen argue, it becomes more and more necessary that they employ "ethical subroutines" to evaluate their possible actions before they are executed. Chapter Two notes that machine morality should unfold in the dynamic interplay between ethical sensitivity and increasingly complex autonomy, and several candidate models for automated moral agents, or AMAs, are presented. Borrowing from Moor, the authors indicate that machines can be implicitly ethical in that their behavior conforms to moral standards. Moor marks a three-fold division among kinds of ethical agents: such agents are either "implicit," "explicit" or "full". The first are constrained to emulate ethical behavior, whereas the second engage in ethical decision-making and the third are, like human beings, conscious and have free will. Robots, Wallach and Allen argue, are capable of being the first, while setting aside the question of whether they can be explicit or full ethical agents. After a brief digression in Chapter Three to address whether we really want machines making moral decisions, the issue of agency reappears in Chapter Four, where the ingredients of full moral agency (free will, understanding and consciousness) are addressed. This review is a slightly revised version of one that originally appeared in the January/February 2009 issue of Philosophy Now.

Journal ArticleDOI
TL;DR: This casebook, whose final chapter on Intercultural Information Ethics by Rafael Capurro adds 25 more case studies, grew out of teaching in professional education; it opens with a Foreword by Robert Hauptman, a pioneer in the field of ethics and librarianship, on the high stakes involved in making and keeping information accessible to all people.
Abstract: The final chapter on Intercultural Information Ethics by Rafael Capurro adds 25 more case studies. Each case is followed by "Questions to Consider." The questions encourage reflection and sharing rather than leading to a specific answer. Recognizing that decision-making may differ according to the history and the values of both institutions and individuals, the authors leave room for the analytical process to move in different directions. Like many case books, this one grew out of teaching; it also evolved as issues and players changed faster than available resources. For those of us who believe in the pedagogical power of case studies, particularly in professional education, the case study is praised for being "smooth and simple on the outside, juicy and deeply layered on the inside." The authors acknowledge the inspiration and guidance offered by Robert Hauptman, who has been a pioneer in the field of ethics and librarianship since the mid-1970s. Hauptman, whose Foreword opens the book, reminds readers of the high stakes involved in making and keeping information accessible to all people. Hauptman's Foreword echoes Article 19 of the Universal Declaration of Human Rights: