
Showing papers in "AI & Society in 2020"


Journal ArticleDOI
TL;DR: It is concluded that AI has not yet been impactful against COVID-19; its use is hampered both by a lack of data and by too much data.
Abstract: This paper provides an early evaluation of Artificial Intelligence (AI) against COVID-19. The main areas where AI can contribute to the fight against COVID-19 are discussed. It is concluded that AI has not yet been impactful against COVID-19. Its use is hampered by a lack of data, and by too much data. Overcoming these constraints will require a careful balance between data privacy and public health, and rigorous human-AI interaction. It is unlikely that these will be addressed in time to be of much help during the present pandemic. In the meantime, extensive gathering of diagnostic data on who is infectious will be essential to save lives, train AI, and limit economic damage.

254 citations


Journal ArticleDOI
TL;DR: Data from a scenario-based survey experiment with a national sample show that people are by and large concerned about risks and have mixed opinions about fairness and usefulness of automated decision-making at a societal level, with general attitudes influenced by individual characteristics.
Abstract: Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing from social science theories and from the emerging body of research about algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, and the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, (public) health, and judicial contexts. Data from a scenario-based survey experiment with a national sample (N = 958) show that people are by and large concerned about risks and have mixed opinions about the fairness and usefulness of automated decision-making at a societal level, with general attitudes influenced by individual characteristics. Interestingly, decisions taken automatically by AI were often evaluated as on a par with or even better than those of human experts for specific decisions. Theoretical and societal implications of these findings are discussed.

234 citations


Journal ArticleDOI
TL;DR: This article presents the first systematic analysis of the ethical challenges posed by recommender systems through a literature review, identifies six areas of concern, and maps them onto a proposed taxonomy of different kinds of ethical impact.
Abstract: This article presents the first systematic analysis of the ethical challenges posed by recommender systems through a literature review. The article identifies six areas of concern, and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: current user-centred approaches do not consider the interests of the variety of other stakeholders, beyond just the receivers of a recommendation, in assessing the ethical impacts of a recommender system.

173 citations


Journal ArticleDOI
TL;DR: It is argued that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient ground for perceived legitimacy without producing the harms full transparency would bring.
Abstract: The increasing use of Artificial Intelligence (AI) for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objectively taken decisions to fears of the destruction of mankind. To prevent negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency in how the general public comes to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision-making can affect the public’s perception of the legitimacy of decisions and decision-makers, and develops a framework for analyzing these questions. We argue that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient ground for perceived legitimacy without producing the harms full transparency would bring.

75 citations


Journal ArticleDOI
TL;DR: The normative basis of AI social choice ethics is weak because there is no single aggregate ethical view of society, and the design of social choice AI faces three sets of decisions.
Abstract: A major approach to the ethics of artificial intelligence (AI) is to use social choice, in which the AI is designed to act according to the aggregate views of society. This is found in the AI ethics of “coherent extrapolated volition” and “bottom–up ethics”. This paper shows that the normative basis of AI social choice ethics is weak because there is no single aggregate ethical view of society. Instead, the design of social choice AI faces three sets of decisions: standing, concerning whose ethical views are included; measurement, concerning how their views are identified; and aggregation, concerning how individual views are combined into a single view that will guide AI behavior. These decisions must be made up front in the initial AI design—designers cannot “let the AI figure it out”. Each set of decisions poses difficult ethical dilemmas with major consequences for AI behavior, with some decision options yielding pathological or even catastrophic results. Furthermore, non-social choice ethics face similar issues, such as whether to count future generations or the AI itself. These issues can be more important than the question of whether or not to use social choice ethics. Attention should focus on these issues, not on social choice.

74 citations
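The aggregation decision is the easiest of the three to see concretely. The following Python toy (with an invented preference profile, not an example from the paper) shows two standard aggregation rules electing different candidates from the same individual rankings:

    # Two aggregation rules, one profile of individual rankings (invented).
    from collections import Counter

    # Each ranking lists candidates from most to least preferred.
    profile = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2 + [("C", "B", "A")] * 2

    # Plurality: count only first choices.
    plurality = Counter(ranking[0] for ranking in profile)

    # Borda: 2 points for first place, 1 for second, 0 for third.
    borda = Counter()
    for ranking in profile:
        for points, candidate in zip((2, 1, 0), ranking):
            borda[candidate] += points

    print(plurality.most_common(1))  # [('A', 3)] -- plurality picks A
    print(borda.most_common(1))      # [('B', 9)] -- Borda picks B

Which rule an AI designer adopts is itself an ethically loaded choice with behavioral consequences, which is the paper's point about aggregation.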


Journal ArticleDOI
TL;DR: The results suggest that the media has a fairly realistic and practical focus in its coverage of the ethics of AI, but that the coverage is still shallow.
Abstract: As artificial intelligence (AI) technologies become increasingly prominent in our daily lives, media coverage of the ethical considerations of these technologies has followed suit. Since previous research has shown that media coverage can drive public discourse about novel technologies, studying how the ethical issues of AI are portrayed in the media may lead to greater insight into the potential ramifications of this public discourse, particularly with regard to development and regulation of AI. This paper expands upon previous research by systematically analyzing and categorizing the media portrayal of the ethical issues of AI to better understand how media coverage of these issues may shape public debate about AI. Our results suggest that the media has a fairly realistic and practical focus in its coverage of the ethics of AI, but that the coverage is still shallow. A multifaceted approach to handling the social, ethical and policy issues of AI technology is needed, including increasing the accessibility of correct information to the public in the form of fact sheets and ethical value statements on trusted webpages (e.g., government agencies), collaboration and inclusion of ethics and AI experts in both research and public debate, and consistent government policies or regulatory frameworks for AI technology.

65 citations


Journal ArticleDOI
TL;DR: The study revealed that emotion detection is predominantly carried out through four major methods, namely, facial expression recognition, physiological signals recognition, speech signals variation and text semantics on standard databases such as JAFFE, CK+, Berlin Emotional Database, SAVEE, etc.
Abstract: Human emotion recognition through artificial intelligence is one of the most active research fields today. The fields of Human Computer Interaction (HCI) and Affective Computing are being extensively used to sense human emotions. Humans generally use many indirect and non-verbal means to convey their emotions. This exposition aims to provide an overview and analysis of all the noteworthy emotion detection methods in a single location. To the best of our knowledge, this is the first attempt to outline all the emotion recognition models developed in the last decade. The review draws on more than a hundred papers; a detailed analysis of the methodologies along with the datasets is carried out. The study revealed that emotion detection is predominantly carried out through four major methods, namely, facial expression recognition, physiological signals recognition, speech signal variation, and text semantics, on standard databases such as JAFFE, CK+, the Berlin Emotional Database, SAVEE, etc., as well as self-generated databases. Generally, seven basic emotions are recognized through these methods. Further, we have compared the different methods employed for emotion detection in humans. The best results were obtained using the Stationary Wavelet Transform for facial emotion recognition, Particle Swarm Optimization assisted Biogeography Based Optimization for emotion recognition through speech, statistical features coupled with different methods for physiological signals, and rough set theory coupled with SVM for text semantics, with respective accuracies of 98.83%, 99.47%, 87.15%, and 87.02%. Overall, Particle Swarm Optimization assisted Biogeography Based Optimization, with an accuracy of 99.47% on the BES dataset, gave the best results.

56 citations
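As a concrete anchor for the simplest end of the surveyed pipelines, the sketch below classifies utterances into seven emotion classes with an SVM over pre-extracted speech features. The feature matrix and labels are random placeholders, and the survey's best-performing methods (Stationary Wavelet Transform, PSO-assisted BBO) are considerably more elaborate than this baseline:

    # Minimal speech-emotion baseline: SVM over per-utterance features.
    # X and y are synthetic stand-ins for real corpus features/labels.
    import numpy as np
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 34))    # e.g., MFCC/pitch/energy statistics
    y = rng.integers(0, 7, size=200)  # the seven basic emotion classes

    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))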


Journal ArticleDOI
TL;DR: It is shown that at each level of AI’s intelligence power, separate types of possible catastrophes dominate, and that AI safety theory is complex and must be customized for each AI development level.
Abstract: A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of AI development, namely, (1) before it starts self-improvement, (2) during its takeoff, when it uses various instruments to escape its initial confinement, or (3) after it successfully takes over the world and starts to implement its goal system, which could be plainly unaligned or could feature flawed friendliness. AI could also halt at later stages of its development either due to technical glitches or ontological problems. Overall, we identified several dozen scenarios of AI-driven global catastrophe. The extent of this list illustrates that there is no one simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.

49 citations


Journal ArticleDOI
Jesper Aagaard
TL;DR: The article develops the notion of digital akrasia, which can be defined as a tendency to become swept up by one’s digital devices in spite of better intentions, and proposes that this phenomenon may be the result of bad technohabits.
Abstract: The present article focuses on the issue of ignoring conversational partners in favor of one’s phone, or what has also become known as phubbing. Prior research has shown that this behavior is associated with a host of negative interpersonal consequences. Since phubbing by definition entails adverse effects, however, it is interesting to explore why people continue to engage in this hurtful behavior: Are they unaware that phubbing is hurtful to others? Or do they simply not care? Building on interviews with students in a Danish business college, the article reveals a pronounced discrepancy in young people’s relationship to phubbing: While they emphatically denounce phubbing as both annoying and disrespectful, they readily admit to phubbing others. In other words, they often act against their own moral convictions. Importantly, participants describe this discrepancy as the result of an unintentional inclination to divert attentional engagement. On the basis of these results, the article develops the notion of digital akrasia, which can be defined as a tendency to become swept up by one’s digital devices in spite of better intentions. It is proposed that this phenomenon may be the result of bad technohabits. Further implications are discussed.

48 citations


Journal ArticleDOI
TL;DR: The paper provides an overview of the current challenges that research on, and development of, applications of artificial intelligence and machine learning have to face, and all three areas mentioned are explored further in the paper.
Abstract: The current “AI Summer” is marked by scientific breakthroughs and economic successes in the fields of research, development, and application of systems with artificial intelligence. But, aside from the great hopes and promises associated with artificial intelligence, there are a number of challenges, shortcomings and even limitations of the technology. For one, these challenges arise from methodological and epistemological misconceptions about the capabilities of artificial intelligence. Secondly, they result from restrictions of the social context in which the development of applications of machine learning is embedded. And third, they are a consequence of current technical limitations in the development and use of artificial intelligence. The paper provides an overview of the current challenges that research on, and development of, applications of artificial intelligence and machine learning have to face; all three areas mentioned are explored further in the paper.

45 citations


Journal ArticleDOI
TL;DR: The proposed approach outperformed deep neural models and other traditional machine learning algorithms in terms of accuracy, with low dependency on extensive hyper-parameter tuning and on dataset size, in contrast to other deep learning models based on neural networks.
Abstract: Apart from being relied upon for feeding the entire world, the agricultural sector is also responsible for a third of the global Gross-Domestic-Product (GDP). Additionally, a majority of developing nations depend on their agricultural produce as it provides employment opportunities for a significant fraction of the poor. This calls for methods to ensure the accurate and efficient diagnosis of plant disease, to minimize any adverse effects on the produce. This paper proposes the recognition and classification of maize plant leaf diseases by application of the Deep Forest algorithm. This automated, novel approach and its accurate classification using the Deep Forest technique are a significant step up from existing manual classification and other techniques with lower accuracy. The proposed approach has outperformed deep neural models and other traditional machine learning algorithms in terms of accuracy, and it shows low dependency on extensive hyper-parameter tuning and on the size of the dataset, in contrast to other deep learning models based on neural networks.
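For orientation, the core idea of the Deep Forest (gcForest) family is a cascade of forest ensembles in which each level's class-probability outputs are appended to the raw features fed to the next level. The sketch below is a simplified rendition under stated assumptions (synthetic data standing in for maize-leaf feature vectors; in-sample probabilities instead of gcForest's out-of-fold estimates), not the paper's implementation:

    # Simplified cascade-forest ("Deep Forest" / gcForest-style) classifier.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=600, n_features=40, n_classes=4,
                               n_informative=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    def cascade_predict(X_tr, y_tr, X_te, n_levels=3):
        aug_tr, aug_te = X_tr, X_te
        for level in range(n_levels):
            forests = [RandomForestClassifier(n_estimators=200, random_state=level),
                       ExtraTreesClassifier(n_estimators=200, random_state=level)]
            probs_tr, probs_te = [], []
            for forest in forests:
                forest.fit(aug_tr, y_tr)
                # Real gcForest uses out-of-fold probabilities to curb overfitting.
                probs_tr.append(forest.predict_proba(aug_tr))
                probs_te.append(forest.predict_proba(aug_te))
            # The next level sees raw features plus this level's class probabilities.
            aug_tr = np.hstack([X_tr] + probs_tr)
            aug_te = np.hstack([X_te] + probs_te)
        final = RandomForestClassifier(n_estimators=200, random_state=0)
        final.fit(aug_tr, y_tr)
        return final.predict(aug_te)

    print("accuracy:", (cascade_predict(X_tr, y_tr, X_te) == y_te).mean())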

Journal ArticleDOI
TL;DR: The assembly line of machine learning: data, algorithm, model; the training dataset and the social origins of machine intelligence.
Abstract: Some enlightenment regarding the project to mechanise reason. The assembly line of machine learning: data, algorithm, model. The training dataset: the social origins of machine intelligence. The history of AI as the automation of perception. The learning algorithm: compressing the world into a statistical model. All models are wrong, but some are useful. World to vector: the society of classification and prediction bots. Faults of a statistical instrument: the undetection of the new. Adversarial intelligence vs. statistical intelligence: labour in the age of AI.

Journal ArticleDOI
TL;DR: It is recommended that the danger of an unfriendly AGI be reduced by taxing AI and using public procurement, which would reduce the pay-off of contestants, raise the amount of R&D needed to compete, and coordinate and incentivize co-operation.
Abstract: An arms race for an artificial general intelligence (AGI) would be detrimental to, and even pose an existential threat to, humanity if it results in an unfriendly AGI. In this paper, an all-pay contest model is developed to derive implications for public policy to avoid such an outcome. It is established that, in a winner-takes-all race, where players must invest in R&D, only the most competitive teams will participate. Thus, given the difficulty of AGI, the number of competing teams is unlikely ever to be very large. It is also established that the intentions of teams competing in an AGI race, as well as the possibility of an intermediate outcome (prize), are important. The possibility of an intermediate prize will raise the probability of finding the dominant AGI application and, hence, will make public control more urgent. It is recommended that the danger of an unfriendly AGI be reduced by taxing AI and using public procurement. This would reduce the pay-off of contestants, raise the amount of R&D needed to compete, and coordinate and incentivize co-operation. This will help to alleviate the control and political problems in AI. Future research is needed to elaborate the design of systems of public procurement of AI innovation and to appropriately adjust the legal frameworks underpinning high-tech innovation, in particular dealing with patenting by AI.
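For readers unfamiliar with contest models, a generic all-pay payoff (a textbook Tullock form under assumed symmetric teams, not necessarily the paper's exact specification) makes the policy levers visible:

    \[
      \pi_i(x_i, x_{-i}) = \frac{x_i^{\,r}}{\sum_{j=1}^{n} x_j^{\,r}}\, V - x_i ,
    \]

where team i sinks its R&D investment x_i whether or not it wins, V is the value of capturing the dominant AGI application, and r governs how sharply effort translates into winning probability. A tax that shrinks the effective prize to (1 - t)V, or procurement rules that redistribute part of V, lowers every team's marginal return and hence equilibrium R&D effort, consistent with the paper's policy recommendation.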

Journal ArticleDOI
TL;DR: An epistemological examination of AI’s opacity in light of the latest techniques to remedy it is carried out, and the rationality of delegating tasks to opaque agents is evaluated.
Abstract: The artificial intelligence models with machine learning that exhibit the best predictive accuracy, and are therefore the most powerful, are, paradoxically, those with the most opaque black-box architectures. At the same time, the unstoppable computerization of advanced industrial societies demands the use of these machines in a growing number of domains. The conjunction of both phenomena gives rise to a control problem on AI that in this paper we analyze by dividing the issue in two. First, we carry out an epistemological examination of AI’s opacity in light of the latest techniques to remedy it. Second, we evaluate the rationality of delegating tasks to opaque agents.

Journal ArticleDOI
TL;DR: A conceptual regulatory framework for small autonomous agricultural robots is developed, from a practical, self-contained engineering-guide perspective, sufficient to get research and commercial agricultural roboticists quickly and easily up and running within the law.
Abstract: Legal structures may form barriers to, or enablers of, the adoption of precision agriculture management with small autonomous agricultural robots. This article develops a conceptual regulatory framework for small autonomous agricultural robots, from a practical, self-contained engineering-guide perspective, sufficient to get research and commercial agricultural roboticists quickly and easily up and running within the law. The article examines the liability framework, or rather the lack of it, for agricultural robotics in the EU, and its transposition into UK law, as a case study illustrating general international legal concepts and issues. It examines how the law may provide mitigating effects on the liability regime, and how contracts can be developed between agents within it to enable smooth operation. It covers other legal aspects of operation, such as the use of shared communications resources and privacy in the reuse of robot-collected data. Where there are grey areas in current law, it argues that new proposals could be developed to reform these to promote further innovation and investment in agricultural robots.

Journal ArticleDOI
TL;DR: In this study, the particle swarm optimization algorithm, one of the swarm-based optimization techniques, was used with two classifiers, SVM and Boosted Tree, and the eight most frequently selected channels were determined to improve system performance in terms of speed and accuracy.
Abstract: Recently, state-supported projects have sought to increase the social participation of people with disabilities. However, in neuromuscular diseases such as Motor Neurone Disease (MND), which can progress to total paralysis, even the communication abilities of individuals are disrupted. Brain-Computer Interfaces (BCIs), which have a few decades of history and an exponentially growing number of studies, are being developed to enable individuals with such disorders to communicate with their environment. Speller systems are BCIs that detect the letters a person focuses on within a matrix of letters and numbers on a screen and convert them into text. As the letters on the screen flash randomly, the system aims to detect the electrical changes occurring in the brain in response to the stimulus presented to the person. Research reveals that such a stimulus elicits a deflection in the EEG signal, called the P300, between 250 and 500 ms after stimulus onset. Brain-computer interfaces thus use EEG signals to provide environmental interaction for individuals whose movement is restricted by stroke or neurodegenerative disease. The multi-channel structure of EEG recordings both increases system cost and reduces processing speed. Reducing the system cost by identifying the most active electrodes therefore makes such systems more accessible. In this context, optimization techniques are applied to electrode selection to determine the most effective channels. In this study, the particle swarm optimization algorithm, one of the swarm-based optimization techniques, was used with two classifiers, SVM and Boosted Tree, and the eight most frequently selected channels were determined to improve system performance in terms of speed and accuracy.
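To make the channel-selection idea tangible, the sketch below runs a binary particle swarm over subsets of EEG channels, scoring each subset by cross-validated SVM accuracy. Everything here is an assumption for illustration (synthetic data in place of P300 speller recordings, eight features per channel, SVM only rather than SVM plus Boosted Tree), not the study's exact setup:

    # Binary PSO for EEG channel selection, scored by SVM cross-validation.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_trials, n_channels, n_feats = 200, 32, 8   # assumed dimensions
    X = rng.normal(size=(n_trials, n_channels, n_feats))
    y = rng.integers(0, 2, size=n_trials)        # target vs non-target flash

    def fitness(mask):
        """Cross-validated accuracy using only the channels where mask == 1."""
        if not mask.any():
            return 0.0
        X_sel = X[:, mask.astype(bool), :].reshape(n_trials, -1)
        return cross_val_score(SVC(), X_sel, y, cv=3).mean()

    n_particles, n_iters = 8, 10
    pos = (rng.random((n_particles, n_channels)) < 0.5).astype(int)
    vel = rng.normal(scale=0.1, size=(n_particles, n_channels))
    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, n_channels))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # Sigmoid-transformed velocities give per-channel inclusion probabilities.
        pos = (rng.random((n_particles, n_channels)) < 1 / (1 + np.exp(-vel))).astype(int)
        fit = np.array([fitness(p) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmax()].copy()

    print("selected channels:", np.flatnonzero(gbest))

Channels selected most frequently across repeated runs would then be retained, mirroring the study's choice of the eight most frequently selected channels.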

Journal ArticleDOI
TL;DR: It is argued that the authors are obligated to grant moral rights to artificially intelligent robots once they have become full ethical agents, i.e., subjects of morality.
Abstract: Great technological advances in such areas as computer science, artificial intelligence, and robotics have brought the advent of artificially intelligent robots within our reach within the next century. Against this background, the interdisciplinary field of machine ethics is concerned with the vital issue of making robots “ethical” and examining the moral status of autonomous robots that are capable of moral reasoning and decision-making. The existence of such robots will deeply reshape our socio-political life. This paper focuses on whether such highly advanced yet artificially intelligent beings will deserve moral protection (in the form of being granted moral rights) once they become capable of moral reasoning and decision-making. I argue that we are obligated to grant them moral rights once they have become full ethical agents, i.e., subjects of morality. I present four related arguments in support of this claim and thereafter examine four main objections to the idea of ascribing moral rights to artificially intelligent robots.

Journal ArticleDOI
TL;DR: This work investigates the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence, and concludes that virtuous machines would indeed be included on “relational” views of membership in the moral community, even if the attributions to them are weakened.
Abstract: Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? An experiment describes situations where either human or AI agents engage in virtuous or vicious behavior, and experiment participants then judge their level of virtue or vice. The scenarios represent different virtue ethics domains of truth, justice, fear, wealth, and honor. Quantitative and qualitative analyses show that moral attributions are weakened for AIs compared to humans, and the reasoning and explanations for the attributions are varied and more complex. On “relational” views of membership in the moral community, virtuous machines would indeed be included, even if the attributions to them are weakened. Hence, while our moral relationships with artificial agents may be of the same types, they may yet remain substantively different from our relationships to human beings.

Journal ArticleDOI
TL;DR: This paper focuses on constraining AI and those machines powered by AI within microenvironments—both physical and virtual—which allow these machines to realize their function whilst preventing harm to humans.
Abstract: With Artificial Intelligence (AI) entering our lives in novel ways—both known and unknown to us—there is both the enhancement of existing ethical issues associated with AI as well as the rise of new ethical issues. There is much focus on opening up the ‘black box’ of modern machine-learning algorithms to understand the reasoning behind their decisions—especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be transparent we should focus on constraining AI and those machines powered by AI within microenvironments—both physical and virtual—which allow these machines to realize their function whilst preventing harm to humans. In the field of robotics this is called ‘envelopment’. However, to put an ‘envelope’ around AI-powered machines we need to know some basic things about them which we are often in the dark about. The properties we need to know are the: training data, inputs, functions, outputs, and boundaries. This knowledge is a necessary first step towards the envelopment of AI-powered machines. It is only with this knowledge that we can responsibly regulate, use, and live in a world populated by these machines.

Journal ArticleDOI
TL;DR: It is found that values are sensitive to disvalue if algorithms are designed, implemented or deployed inappropriately or without sufficient consideration for their value impacts, potentially resulting in problems including discrimination and constrained autonomy.
Abstract: This article presents a conceptual investigation into the value impacts and relations of algorithms in the domain of justice and security. As a conceptual investigation, it represents one step in a value sensitive design based methodology (not incorporated here are empirical and technical investigations). Here, we explicate and analyse the expression of values of accuracy, privacy, fairness and equality, property and ownership, and accountability and transparency in this context. We find that values are sensitive to disvalue if algorithms are designed, implemented or deployed inappropriately or without sufficient consideration for their value impacts, potentially resulting in problems including discrimination and constrained autonomy. Furthermore, we outline a framework of conceptual relations of values indicated by our analysis, and potential value tensions in their implementation and deployment with a view towards supporting future research, and supporting the value sensitive design of algorithms in justice and security.

Journal ArticleDOI
TL;DR: Similar to the human nervous system, AI systems in finance/treasury must manage data quickly and accurately, including the capture and classification of data and its integration into larger datasets.
Abstract: Artificial intelligence poses a particular challenge in its application to finance/treasury management because most treasury functions are no longer physical processes, but rather virtual processes that are increasingly highly automated. Most finance/treasury teams are knowledge workers who make decisions and conduct analytics within often dynamic frameworks that must incorporate environmental considerations (foreign exchange rates, GDP forecasts), internal considerations (growth needs, business trends), as well as the impact of any actions on related corporate decisions which are also highly complex (e.g., hedging, investing, capital structure, liquidity levels). Artificial intelligence in finance and treasury is thus most analogous to the complexity of a human nervous system, as it encompasses far more than the automation of tasks. Similar to the human nervous system, AI systems in finance/treasury must manage data quickly and accurately, including the capture and classification of data and its integration into larger datasets. At present, AI neural network systems have gradually improved and are widely used in many areas of treasury management, such as early warning of potential financial crises, diagnosis of financial risk, control of financial data quality, and mining of hidden financial data and information.

Journal ArticleDOI
TL;DR: The attempt in this paper is to discuss, first, whether ethics is the sort of thing that can be computed, and second, whether ascribing mind to machines gives rise to ethical issues regarding machines.
Abstract: The advent of the intelligent robot has occupied a significant position in society over the past decades and has given rise to new issues in society. As we know, the primary aim of artificial intelligence or robotic research is not only to develop advanced programs to solve our problems but also to reproduce mental qualities in machines. The critical claim of artificial intelligence (AI) advocates is that there is no distinction between mind and machines, and thus they argue that there are possibilities for machine ethics, just as for human ethics. Unlike computer ethics, which has traditionally focused on ethical issues surrounding human use of machines, AI or machine ethics is concerned with the behaviour of machines towards human users and perhaps other machines as well, and the ethicality of these interactions. The ultimate goal of machine ethics, according to AI scientists, is to create a machine that itself follows an ideal ethical principle or a set of principles; that is to say, it is guided by this principle or these principles in the decisions it makes about possible courses of action it could take. Machine ethics is thus the task of ensuring the ethical behaviour of an artificial agent. Although there are many philosophical issues related to artificial intelligence, our attempt in this paper is to discuss, first, whether ethics is the sort of thing that can be computed. Second, if we ascribe mind to machines, this gives rise to ethical issues regarding machines. And if we do not draw a distinction between mind and machines, we are not only redefining the specifically human mind but also society as a whole. Having a mind is, among other things, having the capacity to make voluntary decisions and actions. The notion of mind is central to our ethical thinking, because the human mind is self-conscious, and this is a property that machines lack, as yet.

Journal ArticleDOI
TL;DR: This paper argues that a rule-based utilitarian approach (in contrast to a strict act utilitarian approach) is superior, because it can capture the most important features of the virtue-theoretic approach while realizing additional significant benefits of machine ethics.
Abstract: Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an artificial moral agent based on virtue theory. While the virtuous artificial moral agent has various strengths, this paper argues that a rule-based utilitarian approach (in contrast to a strict act utilitarian approach) is superior, because it can capture the most important features of the virtue-theoretic approach while realizing additional significant benefits. Specifically, a two-level utilitarian artificial moral agent incorporating both established moral rules and a utility calculator is especially well suited for machine ethics.

Journal ArticleDOI
TL;DR: It will be assumed that future artificial entities, such as Sophia the Robot, will be granted citizenship on an international scale, and an analysis of rights will be made with respect to the needs of a non-biological intelligence possessing legal and civic duties akin to those possessed by humanity today.
Abstract: The concept of artificial intelligence is not new, nor is the notion that it should be granted legal protections given its influence on human activity. What is new, on a relative scale, is the notion that artificial intelligence can possess citizenship—a concept reserved only for humans, as it presupposes the idea of possessing civil duties and protections. Given that there are several decades’ worth of writing on the legal status of computational artificial artefacts in the USA and elsewhere, it is surprising that lawmakers internationally have come to a standstill in protecting our silicon brainchildren. In this essay, it will be assumed that future artificial entities, such as Sophia the Robot, will be granted citizenship on an international scale. With this assumption, an analysis of rights will be made with respect to the needs of a non-biological intelligence possessing legal and civic duties akin to those possessed by humanity today. This essay does not present a full set of rights for artificial intelligence—instead, it aims to provide international jurisprudence evidence aliunde ab extra de lege lata for any future measures made to protect non-biological intelligence.

Journal ArticleDOI
TL;DR: This study compares Bayesian networks with artificial neural networks (ANNs) for predicting recovered value in a credit operation and finds that ANNs are a more efficient tool for predicting credit risk than the naïve Bayes (NB) approach.
Abstract: Credit risk threatens financial institutions and may result in irrecoverable consequences. Tools for risk prediction can be used to reduce bank insolvency. This study compares Bayesian networks with artificial neural networks (ANNs) for predicting recovered value in a credit operation. The credit scoring problem has typically been approached as a supervised classification problem in machine learning. The present study explores this problem and finds that ANNs are a more efficient tool for predicting credit risk than the naïve Bayes (NB) approach. The most crucial point relates to lending decisions, and a significant credit operation is associated with a set of factors, to the degree that probabilities are used to classify new applicants based on their characteristics. The optimum result was obtained when the linear regression was equivalent to 0.2, with a mean accuracy of 85%. For the naïve Bayes approach, the algorithm was applied to four datasets in a single process before the entire dataset was used to create a confusion matrix.
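A minimal sketch of the comparison, assuming a synthetic stand-in for the credit dataset and a small multilayer perceptron in place of the paper's ANN architecture (which the abstract does not specify):

    # Naive Bayes vs a small neural network on synthetic credit-scoring data.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Imbalanced classes: most credit operations are recovered.
    X, y = make_classification(n_samples=1000, n_features=20,
                               weights=[0.8], random_state=0)

    models = {
        "naive Bayes": GaussianNB(),
        "ANN": make_pipeline(StandardScaler(),
                             MLPClassifier(hidden_layer_sizes=(32,),
                                           max_iter=1000, random_state=0)),
    }
    for name, model in models.items():
        print(name, cross_val_score(model, X, y, cv=5).mean())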

Journal ArticleDOI
TL;DR: The primary goal of this paper is to map and synthesize the different existing perspectives to pave the way for an open discussion on the topic of digital hermeneutics.
Abstract: Today, there is an emerging interest in the potential role of hermeneutics in reflecting on the practices related to digital technologies and their consequences. Nonetheless, such an interest has given rise neither to a unitary approach nor to a shared debate. The primary goal of this paper is to map and synthesize the different existing perspectives to pave the way for an open discussion on the topic. The article is developed in two steps. In the first section, the authors analyze digital hermeneutics “in theory” by confronting and systematizing the existing literature. In particular, they stress three main distinctions among the approaches: (1) between “methodological” and “ontological” digital hermeneutics; (2) between data- and text-oriented digital hermeneutics; and (3) between “quantitative” and “qualitative” credos in digital hermeneutics. In the second section, they consider digital hermeneutics “in action” by critically analyzing the uses of digital data (notably tweets) for studying a classical object such as political opinion. In the conclusion, the authors pave the way for an ontological turn in digital hermeneutics. Most of the article is devoted to the methodological issue of interpreting with digital machines. The main task of an ontological digital hermeneutics would consist instead in asking whether it is legitimate, and eventually to what extent, to speak of digital technologies, or at least some of them, as interpretational machines.

Journal ArticleDOI
TL;DR: It is argued that digital modernity has two dimensions, of progression through time and progression through space, and these two dimensions can be in contradiction.
Abstract: This paper explores the concept of digital modernity, the extension of narratives of modernity with the special affordances of digital networked technology. Digital modernity produces a new narrative which can be taken in many ways: to be descriptive of reality; a teleological account of an inexorable process; or a normative account of an ideal sociotechnical state. However, it is understood that narratives of digital modernity help shape reality via commercial and political decision-makers, and examples are given from the politics and society of the United Kingdom. The paper argues that digital modernity has two dimensions, of progression through time and progression through space, and these two dimensions can be in contradiction. Contradictions can also be found between ideas of digital modernity and modernity itself, and also between digital modernity and some of the basic pre-modern concepts that underlie the whole technology industry. Therefore, digital modernity may not be a sustainable goal for technology development.

Journal ArticleDOI
TL;DR: A significant finding is that the image of the risk of AI is mostly associated with existential risks that became popular after the fourth quarter of 2014.
Abstract: The goal of this paper is to describe the mechanism of the public perception of risk of artificial intelligence. To that end, we apply the social amplification of risk framework to the public perception of artificial intelligence, using data collected from Twitter from 2007 to 2018. We analyzed when and how a significant association between risk and artificial intelligence appeared in public awareness. A significant finding is that the image of the risk of AI is mostly associated with existential risks, which became popular after the fourth quarter of 2014. The source of this association was the public positioning of experts, who have so far been the real movers of AI risk perception, rather than actual disasters. We analyze how this kind of risk was amplified, its secondary effects, the varieties of risk unrelated to existential risk, and the dynamics of how experts address their concerns to a lay audience.
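The measurement step can be pictured with a small pandas sketch; the three-row DataFrame and the keyword proxy for risk-themed content are invented stand-ins, not the paper's corpus or coding scheme:

    # Count risk-themed AI tweets per quarter to locate the late-2014 surge.
    import pandas as pd

    tweets = pd.DataFrame({
        "created_at": pd.to_datetime(["2014-05-02", "2014-11-10", "2015-01-20"]),
        "text": [
            "AI beats humans at board games",
            "expert warns AI is an existential risk",
            "the robots will take our jobs",
        ],
    })

    risk_terms = "risk|threat|existential|danger"  # assumed keyword proxy
    tweets["risky"] = tweets["text"].str.contains(risk_terms, case=False)

    quarterly = tweets.groupby(tweets["created_at"].dt.to_period("Q"))["risky"].sum()
    print(quarterly)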


Journal ArticleDOI
TL;DR: The results indicated that participants with higher social anxiety tended to feel less “anticipatory anxiety” and tension when they knew that they would be interacting with robots compared with humans, and that interaction with a robot elicited less tension compared with interaction with a person regardless of the level of social anxiety.
Abstract: To investigate whether people with social anxiety have less actual and “anticipatory” anxiety when interacting with a robot compared to interacting with a person, we conducted a 2 × 2 psychological experiment with two factors: social anxiety and interaction partner (a human confederate and a robot). The experiment was conducted in a counseling setting where a participant played the role of a client and the robot or the confederate played the role of a counselor. First, we measured the participants’ social anxiety using the Social Avoidance and Distress Scale, after which, we measured their anxiety at two specific moments: “anticipatory anxiety” was measured after they knew that they would be interacting with a robot or a human confederate, and actual anxiety was measured after they actually interacted with the robot or confederate. Measurements were performed using the Profile of Mood States and the State–Trait Anxiety Inventory. The results indicated that participants with higher social anxiety tended to feel less “anticipatory anxiety” and tension when they knew that they would be interacting with robots compared with humans. Moreover, we found that interaction with a robot elicited less tension compared with interaction with a person regardless of the level of social anxiety.