
Showing papers by Mariarosaria Taddeo published in 2019


Journal Article
TL;DR: In this paper, the authors argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of "reliable" AI is necessary.
Abstract: Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to a US$34.8 billion net worth by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but it can also facilitate new forms of attack on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of ‘reliable AI’ for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development and deployment of AI for cybersecurity. Current national cybersecurity and defence strategies of several governments explicitly mention the use of AI. However, it will be important to develop standards and certification procedures, which will involve continuous monitoring and assessment of threats. The focus should be on the reliability of AI-based systems, rather than on eliciting users’ trust in AI.

66 citations


Journal Article
TL;DR: In this paper, the authors extrapolate seven ethical factors that are essential for future AI4SG initiatives from the analysis of 27 case studies of AI-based projects and formulate corresponding best practices to ensure that well-designed AI is more likely to serve the social good.
Abstract: The idea of Artificial Intelligence for Social Good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to address social problems effectively through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies (Cath et al. 2018). This article addresses this gap by extrapolating seven ethical factors that are essential for future AI4SG initiatives from the analysis of 27 case studies of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.

39 citations


Journal Article
TL;DR: The authors stress the importance of tackling the ethical challenges raised by implementing AI in healthcare settings proactively rather than reactively, and map the key considerations for policymakers to each of the ethical concerns highlighted.
Abstract: Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic counterfactual definition of AI as an umbrella term for a range of techniques that can be used to make machines complete tasks in a way that would be considered intelligent were they to be completed by a human. Automation of this nature could offer great opportunities for the improvement of healthcare services and ultimately patients’ health by significantly improving human clinical capabilities in diagnosis, drug discovery, epidemiology, personalised medicine, and operational efficiency. However, if these AI solutions are to be embedded in clinical practice, then at least three issues need to be considered: the technical possibilities and limitations; the ethical, regulatory and legal framework; and the governance framework. In this article, we report on the results of a systematic analysis designed to provide a clear overview of the second of these elements: the ethical, regulatory and legal framework. We find that ethical issues arise at six levels of abstraction (individual, interpersonal, group, institutional, sectoral, and societal) and can be categorised as epistemic, normative, or overarching. We conclude by stressing how important it is that the ethical challenges raised by implementing AI in healthcare settings are tackled proactively rather than reactively and map the key considerations for policymakers to each of the ethical concerns highlighted.

32 citations


Journal Article
TL;DR: Artificial intelligence (AI) could help to improve defences and reduce the impact of cyber attacks, which is why initiatives to develop AI applications for cybersecurity are attracting increasing attention in both the private and public sectors.
Abstract: In 2017, WannaCry and NotPetya showed that attacks targeting the cyber component of infrastructures (e.g. attacks on power plants), services (e.g. attacks on bank or hospital servers), and endpoint devices (e.g. attacks on mobiles and personal computers) have a great disruptive potential and could cause serious damage to our information societies. WannaCry crippled hundreds of IT systems. And NotPetya cost pharmaceutical giant Merck, shipping firm Maersk and logistics company FedEx around US$300 million each. At a global level, cyber crime causes multibillion-dollar losses to businesses, with average losses per organization running from US$3.8 to US$16.8 million in the smallest and largest quartiles respectively (Accenture 2017). The picture did not improve in 2018. Data show that over the year 2.6 million people encountered newly discovered malware on a daily basis. Attacks ranged over 1.7 million different forms of malware, and 60% of the attacks lasted less than 1 h. Cyber attacks are escalating in frequency, impact, and sophistication. The escalation is due to several factors: for example, attacking in cyberspace is easier than defending, and most attacks remain unattributed and, therefore, unpunished. Moreover, as defences are porous, cyber attacks are more likely to succeed than not (Taddeo 2017b). Artificial intelligence (AI) could help to improve defences and reduce the impact of cyber attacks. This is why initiatives to develop applications of AI for cybersecurity are attracting increasing attention within both the private and public sectors (The 2019 Official Annual Cybercrime Report 2019).

26 citations


Journal Article
TL;DR: The analysis concludes by stressing the need to develop an ethical code for data donation to minimise the risks, and offers five foundational principles for ethical medical data donation as the basis of a draft code.
Abstract: This article argues that personal medical data should be made available for scientific research, by enabling and encouraging individuals to donate their medical records once deceased, in a way similar to how they can already donate organs or bodies. This research is part of a project on posthumous medical data donation (PMDD) developed by the Digital Ethics Lab at the Oxford Internet Institute. Ten arguments are provided to support the need to foster posthumous medical data donation. Two major risks are also identified—harm to others, and lack of control over the use of data—which could follow from unregulated donation of medical data. The argument that record-based medical research should proceed without the need to ask for informed consent is rejected; it is argued instead that a voluntary and participatory approach to using personal medical data should be followed. The analysis concludes by stressing the need to develop an ethical code for data donation to minimise the risks, providing five foundational principles for ethical medical data donation, and suggesting a draft for such a code.

19 citations


Journal Article
TL;DR: It is argued that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of ‘reliable AI’ for security is necessary.
Abstract: Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to a US$34.8 billion net worth by 2025. The latest national cybersecurity and defense strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but it can also facilitate new forms of attack on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of ‘reliable AI’ for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development and deployment of AI for cybersecurity.

18 citations



Journal Article
TL;DR: In this article, the authors present the first systematic analysis of the ethical challenges posed by recommender systems and identify six areas of concern, and map them onto a proposed taxonomy of different kinds of ethical impact.
Abstract: This article presents the first systematic analysis of the ethical challenges posed by recommender systems. Through a literature review, the article identifies six areas of concern and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: current user-centred approaches do not consider the interests of stakeholders other than the receivers of a recommendation when assessing the ethical impacts of a recommender system.

14 citations


Journal Article
TL;DR: Year after year, data about cyber attacks and their impact continue to grow, indicating that cyber attacks pose an ever-growing threat to information societies; two lessons can be drawn from these data.
Abstract: The World Economic Forum’s Global Risks Report 2019 ranked cyber attacks among the top-ten most impactful global risks. A report published in 2019 by the Ponemon Institute shows that 90% of companies supporting national critical infrastructures (energy, health, industrial and manufacturing, and transport) experienced at least one cyber attack between 2017 and 2019 that led to data breaches or significant disruption of operations (Ponemon Institute LLC 2019). These reports are two of a long series of studies conducted over the past decade on the status of cybersecurity. Year after year, data about cyber attacks and their impact continue to grow, indicating that cyber attacks pose an ever-growing threat to information societies. There are two lessons to be learned from these data. The first lesson is not controversial: digital infrastructures are porous. We should think of them as agile, flexible, but brittle systems. This brittleness, as I argued elsewhere (Taddeo 2016, 2017a), favours offence over defence, explaining in part the continued growth of cyber threats and the escalation of their impact. The more pervasive digital technologies become, the wider the attack surface becomes, and with it the number of successful attacks grows. Think, for example, of the spread of the Internet of Things (IoT). In 2018, a Symantec study reported an average of 5,200 attacks per month on IoT devices, up from the 3,650 attacks counted in 2016. The second lesson may be harder to learn, for it is about the inadequacy of the ways in which we have framed and governed cybersecurity. This is clear when considering that the number and impact of cyber attacks keep escalating despite the growing value of the cybersecurity market and the increasing efforts of companies and state actors to improve the security of information systems and infrastructures (Technavio 2018). The lack of effective cybersecurity measures has a potential knock-on effect on the information revolution, and on the development of information societies around the world.

9 citations


Journal Article
TL;DR: In this article, the authors focus on the socio-political background and policy debates that are shaping China's AI strategy, and analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use.
Abstract: In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence (AI), entitled ‘New Generation Artificial Intelligence Development Plan’ (新一代人工智能发展规划). This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan (ca. 150 billion dollars) industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China’s AI policies or have assessed the country’s technical capabilities. Instead, in this article, we focus on the socio-political background and policy debates that are shaping China’s AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. By focusing on the policy backdrop, we seek to provide a more comprehensive and critical understanding of China’s AI policy by bringing together debates and analyses of a wide array of policy documents.

8 citations


Journal Article
TL;DR: In this paper, the authors present the first thematic review of the literature on the ethical issues concerning digital well-being and highlight three broader themes: positive computing, personalised human-computer interaction, and autonomy and self-determination.
Abstract: This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ refers to the impact of digital technologies on what it means to live a life that is good for a human being. The article reviews the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies key issues related to four social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human-computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.

Book Chapter
01 Jan 2019
TL;DR: The extensive use of ever more data, the growing reliance on algorithms to analyse them in order to shape choices and to make decisions, and the gradual reduction of human involvement or oversight over many automatic processes pose pressing questions about fairness, responsibility, and respect of human rights.
Abstract: The digital revolution provides huge opportunities to improve private and public life, and our environments, from health care to smart cities and global warming. Unfortunately, such opportunities come with significant ethical challenges. In particular, the extensive use of ever more data, often personal if not sensitive (Big Data); the growing reliance on algorithms to analyse them in order to shape choices and to make decisions (including machine learning, AI, and robotics); and the gradual reduction of human involvement or oversight over many automatic processes pose pressing questions about fairness, responsibility, and respect of human rights.

Journal Article
TL;DR: In a recent report, the UK Digital, Culture, Media and Sport Committee focused on the role and responsibilities of online service providers (OSPs) with respect to the circulation of fake news and their impact on democratic processes, like public debate and political elections.
Abstract: In a recent report, the UK Digital, Culture, Media and Sport (DCMS) Committee focused on the role and responsibilities of online service providers (OSPs) with respect to the circulation of fake news and their impact on democratic processes, like public debate and political elections. The first recommendation offered in the report calls for a “compulsory Code of Ethics for tech companies overseen by independent regulator”. The recommendation is sensible and should be adopted. Scholars working in the area of digital ethics have often stressed the need for a code of ethics shaping the conduct of OSPs (Taddeo and Floridi 2017). And an authority with teeth enforcing the code would be a measure welcomed by the public sector, civil society, and OSPs themselves. On the one hand, the authority would ensure that OSPs respect essential values and principles safeguarding users’ rights and fundamental processes of our societies. On the other hand, an authority endorsing a code of ethics and recognising (when appropriate) compliance with it would help OSPs to improve their reputation, build their trustworthiness, and hence breed users’ trust (Taddeo 2017). The content of the code by which OSPs should abide remains to be specified. This is a central question in the debate on the moral responsibilities of OSPs. The debate dates back to the early 2000s (Taddeo and Floridi 2015) and ranges from the moral responsibility of OSPs with respect to correcting biases in information indexing (Introna and Nissenbaum 2006; Granka 2010), to protecting users’ privacy (Zhang et al. 2010) and security (Cerf 2011; Taddeo 2013, 2014), safeguarding democratic processes (Pariser 2012; Sunstein 2001; Floridi 2016), and respecting human rights, particularly freedom of information and freedom of the internet (Broeders and Taylor 2017). Three questions are central to this debate: (1) what role OSPs have, and should have, in mature information societies; (2) what are the moral responsibilities

Journal Article
TL;DR: In this paper, the authors analyze the ethical aspects of multi-stakeholder recommendation systems and conclude that the MRS approach offers the resources to understand the normative social dimension of RSs.
Abstract: This article analyses the ethical aspects of multi-stakeholder recommendation systems (RSs). Following the most common approach in the literature, we assume a consequentialist framework to introduce the main concepts of multi-stakeholder recommendation. We then consider three research questions: who are the stakeholders in a RS? How are their interests taken into account when formulating a recommendation? And, what is the scientific paradigm underlying RSs? Our main finding is that multi-stakeholder RSs (MRSs) are designed and theorised, methodologically, according to neoclassical welfare economics. We consider and reply to some methodological objections to MRSs on this basis, concluding that the multi-stakeholder approach offers the resources to understand the normative social dimension of RSs.

Book Chapter
01 Jan 2019
TL;DR: In this paper, the authors identify the limits of deterrence theory in cyberspace, clear the ground of inadequate approaches to cyber deterrence and define the conceptual space for a domain-specific theory of cyber deterrence, still to be developed.
Abstract: In this chapter, I analyse deterrence theory and argue that its applicability to cyberspace is limited and that these limits are not trivial. They are the consequence of fundamental differences between deterrence theory and the nature of cyber conflicts and cyberspace. The goals of this analysis are to identify the limits of deterrence theory in cyberspace, clear the ground of inadequate approaches to cyber deterrence, and define the conceptual space for a domain-specific theory of cyber deterrence, still to be developed.

Book Chapter
01 Jan 2019
TL;DR: This chapter presents the first ethical code for posthumous medical data donation (PMDD), based on five foundational principles, which seeks to inform and guide the implementation of an effective and ethical PMDD scheme by addressing the key risks associated with the utilisation of personal health data for the promotion of the common good.
Abstract: This chapter follows the argument that personal medical data should be made available for scientific research by enabling and encouraging individuals to donate their medical records after death, provided that this can be done safely and ethically. While medical donation schemes with dedicated regulatory and ethical frameworks for blood, organ or tissue donations are already in place, no such ethical guidance currently exists with regard to personal medical data. In addressing this gap, this chapter presents the first ethical code for posthumous medical data donation (PMDD). It is based on five foundational principles and seeks to inform and guide the implementation of an effective and ethical PMDD scheme by addressing the key risks associated with the utilisation of personal health data for the promotion of the common good.

Posted Content
TL;DR: This study investigates the relationship between search engines' approach to privacy and the scientific quality of the information they return, and suggests that designing a search engine that is privacy-savvy and avoids the filter-bubble issues that can result from user tracking is necessary but insufficient.
Abstract: The fact that internet companies may record our personal data and track our online behavior for commercial or political purposes has heightened concerns about online privacy. This has also led to the development of search engines that promise no tracking and privacy. Search engines also have a major role in spreading low-quality health information, such as that of anti-vaccine websites. This study investigates the relationship between search engines' approach to privacy and the scientific quality of the information they return. We analyzed the first 30 webpages returned when searching 'vaccines autism' in English, Spanish, Italian and French. The results show that alternative search engines (Duckduckgo, Ecosia, Qwant, Swisscows and Mojeek) may return more anti-vaccine pages (10 to 53 percent) than Google.com (zero). Some localized versions of Google, however, returned more anti-vaccine webpages (up to 10 percent) than Google.com. Our study suggests that designing a search engine that is privacy-savvy and avoids the filter-bubble issues that can result from user tracking is necessary but insufficient; instead, mechanisms should be developed to test search engines from the perspective of information quality (particularly for health-related webpages), before they can be deemed trustworthy providers of public health information.
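
To make the tallying behind the study's comparison concrete, here is a minimal sketch of how one might compute the share of anti-vaccine pages among the first results returned by each engine. This is an illustrative reconstruction under stated assumptions, not the study's actual code: the engine names come from the abstract, while the URL lists, the hand-labelled set, and the `anti_vaccine_share` helper are hypothetical placeholders.

```python
# Minimal sketch: share of anti-vaccine pages per search engine.
# Engine names come from the abstract; all URLs and labels below are
# hypothetical placeholders, not the study's data.

from typing import Dict, List, Set

def anti_vaccine_share(results: List[str], anti_vaccine_pages: Set[str]) -> float:
    """Return the percentage of result URLs hand-labelled as anti-vaccine."""
    if not results:
        return 0.0
    hits = sum(1 for url in results if url in anti_vaccine_pages)
    return 100.0 * hits / len(results)

# First-N results per engine for one query (the study used the first 30;
# truncated to three per engine here for brevity).
serps: Dict[str, List[str]] = {
    "duckduckgo.com": ["site-a.example/p1", "site-b.example/p2", "site-c.example/p3"],
    "google.com":     ["site-d.example/p1", "site-e.example/p2", "site-f.example/p3"],
}

# URLs that human raters labelled as anti-vaccine (hypothetical).
labelled_anti_vaccine: Set[str] = {"site-b.example/p2"}

for engine, urls in serps.items():
    print(f"{engine}: {anti_vaccine_share(urls, labelled_anti_vaccine):.0f}% anti-vaccine pages")
```

In the actual study the labelling step (deciding which of the returned pages are anti-vaccine) was done by human assessment of page content, and the percentages reported in the abstract correspond to the per-engine shares this kind of tally produces.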