
Showing papers in "Philosophy & Technology in 2015"


Journal ArticleDOI
TL;DR: In this article, the authors explore the ambiguous impact of new information and communications technologies (ICTs) on the cultivation of moral skills in human beings, and conclude that, since moral skills are essential prerequisites for the effective development of practical wisdom and virtuous character, and since market and cultural forces are not presently aligned to bring about the more salutary of these ambiguous potentials, the future shape of these developments warrants close attention.
Abstract: This paper explores the ambiguous impact of new information and communications technologies (ICTs) on the cultivation of moral skills in human beings. Just as twentieth century advances in machine automation resulted in the economic devaluation of practical knowledge and skillsets historically cultivated by machinists, artisans, and other highly trained workers (Braverman 1974), while also driving the cultivation of new skills in a variety of engineering and white collar occupations, ICTs are also recognized as potential causes of a complex pattern of economic deskilling, reskilling, and upskilling. In this paper, I adapt the conceptual apparatus of sociological debates over economic deskilling to illuminate a different potential for technological deskilling/upskilling, namely the ability of ICTs to contribute to the moral deskilling of human users, a potential that exists alongside rich but currently underrealized possibilities for moral reskilling and/or upskilling. I flesh out this general hypothesis by means of examples involving automated weapons technology, new media practices, and social robotics. I conclude that since moral skills are essential prerequisites for the effective development of practical wisdom and virtuous character, and since market and cultural forces are not presently aligned to bring about the more salutary of the ambiguous potentials presented here, the future shape of these developments warrants our close attention—and perhaps active intervention.

113 citations


Journal ArticleDOI
TL;DR: This investigation is structured around four related factors of the new technology: Totality, Practical Incorporability, Autonomy and Entanglement, which the authors use to inquire into the implications of this cloud-based memory technology for our minds and our sense of self.
Abstract: Technologies and artefacts have long played a role in the structure of human memory and our cognitive lives more generally. Recent years have seen an explosion in the production and use of a new regime of information technologies that might have powerful implications for our minds. Electronic-Memory (E-Memory), powerful, portable and wearable digital gadgetry and “the cloud” of ever-present data services allow us to record, store and access an ever-expanding range of information both about and of relevance to our lives. Already, for a decade we have been carrying around expansive gadgetry which allows us to collect, store and use what would have been almost unimaginable amounts of digital information only a short time ago. Now, thanks to the wireless internet adding vast processing and storage potential to the powerful portable devices which many of us carry constantly or wear, this information can be accessed and customised in an ever-greater variety of ways. How should we assess the implications of the new portable and pervasive cognitive technologies on offer? Do E-Memory and the wider panoply of cloud-enabled cognitive technologies really promise (as some see it), or threaten (as others do), a radical change to human cognitive abilities and perhaps the very nature of our minds? If so, how are we to assess the possibilities and attempt to understand whether they offer a hopeful or dangerous turn in the human condition? This investigation is structured around four related factors of the new technology: Totality, Practical Incorporability, Autonomy and Entanglement. We use these factors to inquire into the implications of this cloud-based memory technology for our minds and our sense of self.

60 citations


Journal ArticleDOI
TL;DR: In this article, the authors argue that at the core of any such framework must be the human right to science, and stress an almost entirely neglected dimension of this right, the entitlement it confers on all human beings to participate in the scientific process in all of its aspects.
Abstract: The flourishing of citizen science is an exciting phenomenon with the potential to contribute significantly to scientific progress. However, we lack a framework for addressing in a principled and effective manner the pressing ethical questions it raises. We argue that at the core of any such framework must be the human right to science. Moreover, we stress an almost entirely neglected dimension of this right—the entitlement it confers on all human beings to participate in the scientific process in all of its aspects. We then explore three of its key implications for the ethical regulation of citizen science: (a) the positive obligations imposed by the right on the state and other agents to recognize and promote citizen science, (b) the convective nature of the participation in science facilitated by the right and (c) the potential to mobilize the right in rolling back the unprecedented expansion of intellectual property regimes.

50 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose that a person of sufficiently high standing could accept responsibility for the actions of autonomous robotic devices, even if that person could not be causally linked to those actions except through this prior agreement.
Abstract: Sparrow (J Appl Philos 24:62–77, 2007) argues that military robots capable of making their own decisions would be independent enough to allow us to deny responsibility for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias (Ethics Inf Technol 6:175–183, 2004) has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently high standing could accept responsibility for the actions of autonomous robotic devices—even if that person could not be causally linked to those actions except through this prior agreement. The basic intuition behind our proposal is that humans can impute relations even when no other form of contact can be established. The missed alternative we want to highlight, then, would consist in an exchange: Social prestige in the occupation of a given office would come at the price of signing away part of one's freedoms to a contingent and unpredictable future guided by another (in this case, artificial) agency.

47 citations


Journal ArticleDOI
TL;DR: In this article, the authors compare the autopoietic theory and the enactive approach on the nature of living beings and evaluate their respective degrees of internal coherence, focusing on certain key notions such as autonomy and organizational closure.
Abstract: The autopoietic theory and the enactive approach are two theoretical streams that, in spite of their historical link and conceptual affinities, offer very different views on the nature of living beings. In this paper, we compare these views and evaluate, in an exploratory way, their respective degrees of internal coherence. Focusing the analyses on certain key notions such as autonomy and organizational closure, we argue that while the autopoietic theory manages to elaborate an internally consistent conception of living beings, the enactive approach presents an internal tension regarding its characterization of living beings as intentional systems directed at the environment.

43 citations


Journal ArticleDOI
TL;DR: This paper addresses two key questions at the intersection of philosophy and technology: What is deception and when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development?
Abstract: As software developers design artificial agents (AAs), they often have to wrestle with complex issues, issues that have philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship of deception and trust. Are developers using deception to gain our trust? Is trust generated through technological “enchantment” warranted? Next, we investigate more complex questions of how deception that involves AAs differs from deception that only involves humans. Finally, we analyze the role and responsibility of developers in trust situations that involve both humans and AAs.

29 citations


Journal ArticleDOI
TL;DR: In this article, the authors argue that virtual rape in a future virtual reality environment involving a haptic device or robotics should in principle count as the crime of rape; for it corresponds to rape as it is viewed under the liberal theories that currently dominate the law.
Abstract: This paper is about the question of whether or not virtual rape should be considered a crime under current law. A virtual rape is the rape of an avatar (a person’s virtual representation) in a virtual world. In the future, possibilities for virtual rape of a person him- or herself will arise in virtual reality environments involving a haptic device or robotics. As the title indicates, I will study both these present and future instances of virtual rape in light of three categories of legal philosophical theories on rape in order to answer the aforementioned question. I will argue that a virtual rape in a future virtual reality environment involving a haptic device or robotics should in principle count as the crime of rape; for it corresponds to rape as it is viewed under the liberal theories that currently dominate the law. A surprising finding will be that a virtual rape in a virtual world re-actualizes the conservative view of rape that used to dominate the law in the Middle Ages and resembles rape as it is viewed under the feminist theories that criticize current law. Virtual rape in a virtual world, however, cannot count as rape under current law; at the end of this paper, I will therefore suggest qualifying it as sexual harassment instead.

22 citations


Journal ArticleDOI
TL;DR: The article concludes by proposing the notion of Relational Law to summarize the ethical dimension of SWRM, and compares existing principles in privacy by design, linked open data, legal information institutes, and online dispute resolution.
Abstract: The notion of validity fulfils a crucial role in legal theory. In the emerging Web 3.0, Semantic Web languages, legal ontologies, and normative multi-agent systems (nMAS) are designed to cover new regulatory needs. Conceptual models for complex regulatory systems shape the characteristic features of rules, norms, and principles in different ways. This article outlines one of such multilayered governance models, designed for the CAPER platform, and offers a definition of Semantic Web Regulatory Models (SWRM). It distinguishes between normative-SWRM and institutional-SWRM. It also compares existing principles in privacy by design, linked open data (LOD), legal information institutes (LII), and online dispute resolution (ODR). The article concludes by proposing the notion of Relational Law to summarize the ethical dimension of SWRM. Ethics are the only regulatory way to constitute a global space, out of the jurisdictional public domain set by national, international, or transnational law, and opposed to the private one.

21 citations
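
The abstract above describes encoding regulatory needs in Semantic Web languages and legal ontologies. As a purely illustrative sketch of what that can look like in practice, here is one norm expressed as RDF triples; the namespace and property names are hypothetical inventions for this example, not the CAPER platform's actual vocabulary.

```python
# Illustrative only: a toy "norm as linked data" in the spirit of Semantic Web
# Regulatory Models. The vocabulary is invented for this sketch, not CAPER's.
# Requires: pip install rdflib
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/regulatory#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# One norm modeled as machine-readable triples rather than statutory prose:
# "personal data may be shared across agencies only for criminal investigation".
g.add((EX.Norm1, RDF.type, EX.Norm))
g.add((EX.Norm1, EX.regulates, EX.CrossAgencyDataSharing))
g.add((EX.Norm1, EX.permittedPurpose, Literal("criminal investigation")))

print(g.serialize(format="turtle"))
```

Once norms are data in this sense, agents in a normative multi-agent system can query and reason over them, which is what distinguishes such regulatory models from plain legal text.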


Journal ArticleDOI
TL;DR: In this paper, a provisional autonomist philosophy of technology is developed using Foucauldian dispositifs of biopower in contrast to the hermeneutic and dialectical approach.
Abstract: This article focuses on the power of technological mediation from the point of view of autonomist Marxism (Hardt, Negri, Virno, Berardi, Lazzarato). The first part of the article discusses the theories developed on technological mediation in postphenomenology (Ihde, Verbeek) and critical theory of technology (Feenberg) with regard to their respective power perspectives and ways of coping with relations of power embedded in technical artifacts and systems. Rather than focusing on the clashes between the hermeneutic postphenomenological approach and the dialectics of critical theory, it is argued that in both approaches the category of resistance amidst power relations is similar in at least one regard: resistance to the power of technology is conceptualized as a reactive force. The second part of the article reads technological mediation through the lens of the antagonistic power-perspective on class struggle developed in autonomist Marxism. The outline of a provisional autonomist philosophy of technology is developed using Foucauldian dispositifs of biopower in contrast to the hermeneutic and dialectical approach. It is thus argued that resistance should here be understood in terms of practice that subverts the technically mediated circuit of production itself.

21 citations


Journal ArticleDOI
TL;DR: The author argues that the term ‘constitution’ tends towards different senses in the contexts of Husserl’s phenomenology and of Ihde and Verbeek’s postphenomenology, and that this renders its sense more problematic than the work of Ihde and Verbeek makes it appear.
Abstract: This paper builds a three-part argument in favour of a more transcendentally focused form of ‘postphenomenology’ than is currently practised in philosophy of technology. It does so by problematising two key terms, ‘constitution’ and ‘postphenomenology’, then by arguing in favour of a ‘transcendental empiricist’ approach that draws on the work of Foucault, Derrida, and, in particular, Deleuze. Part one examines ‘constitution’, as it moves from the context of Husserl’s phenomenology to Ihde and Verbeek’s ‘postphenomenology’. I argue that the term tends towards different senses in these contexts, and that this renders its sense more problematic than the work of Ihde and Verbeek makes it appear. Part two examines ‘postphenomenology’. I argue that putatively ‘poststructuralist’ thinkers such as Derrida, Foucault, and Deleuze may be better characterised as ‘postphenomenologists’, and that approaching them in this way may allow better access to their work from a philosophy of technology perspective. Part three argues for a ‘transcendental empiricist’ approach to philosophy of technology. In doing so, it argues for a rewriting of contemporary philosophy of technology’s political constitution: since an ‘empirical turn’ in the 1990s, I argue, philosophy of technology has been too narrowly focused on ‘empirical’ issues of fact, and not focused enough on ‘transcendental’ issues concerning conditions for these facts.

17 citations


Journal ArticleDOI
TL;DR: In this article, the authors argue that social actors can upgrade to political actors once they become real interlocutors, namely political actors that can participate in the formation of the political discourse (which underlies political decisions).
Abstract: This paper criticizes the tendency to view the extension of the class of social actors, which stems from the process of democratization of data, as also implying the extension of the class of the political actors involved in the process of governance of the Information Society. The paper argues that social actors can upgrade to political actors once they become real interlocutors, namely political actors that can participate in the formation of the political discourse (which underlies political decisions) and that this can happen only once they are able to combine, to a greater or lesser degree, the reduction of information asymmetries with the reduction of power differentials.

Journal ArticleDOI
TL;DR: In this article, Ryle's view is questioned and criticised by those who claim that there is only one type of knowledge: for instance, Jason Stanley and Timothy Williamson, who claim that knowing how is really a form of knowing that, and Stephen Hetherington, who claims that knowing that is knowing how.
Abstract: A wide variety of skills, abilities and knowledge are used in technological activities such as engineering design. Together, they enable problem solving and artefact creation. Gilbert Ryle’s division of knowledge into knowing how and knowing that is often referred to when discussing this technological knowledge. Ryle’s view has been questioned and criticised by those who claim that there is only one type: for instance, Jason Stanley and Timothy Williamson, who claim that knowing how is really a form of knowing that, and Stephen Hetherington, who claims that knowing that is knowing how. Neither Ryle himself nor any of his critics have discussed technological knowledge. Exposing both Ryle’s and his critics’ ideas to technological knowledge shows that there are strong reasons to keep the knowing how–knowing that dichotomy in technological contexts. The main reasons are that they are justified in different ways, that Stanley’s and Williamson’s ideas have great difficulty accounting for the learning of technological knowing how through training, and that knowing that is susceptible to Gettier problems, which technological knowing how is not.

Journal ArticleDOI
TL;DR: This paper discusses the potential of computer-supported argument visualization tools for coping with the complexity of philosophical arguments and introduces the term synergetic logosymphysis (defined as a process in which an argumentative structure grows in a collaborative effort) to describe a practice that combines these two dimensions of collaborative- and web-based argument mapping.
Abstract: Technology is not only an object of philosophical reflection but also something that can change this reflection. This paper discusses the potential of computer-supported argument visualization tools for coping with the complexity of philosophical arguments. I will show, in particular, how the interactive and web-based argument mapping software “AGORA-net” can change the practice of philosophical reflection, communication, and collaboration. AGORA-net allows the graphical representation of complex argumentations in logical form and the synchronous and asynchronous collaboration on those “argument maps” on the internet. Web-based argument mapping can overcome limits of space, time, and access, and it can empower users from all over the world to clarify their reasoning and to participate in deliberation and debate. Collaborative and web-based argument mapping tools such as AGORA-net can change the practice of arguing in two dimensions. First, arguing on web-based argument maps in both collaborative and adversarial form can lead to a fundamental shift in the way arguments are produced and debated. It can replace the traditional four-step process of writing, publishing, debating, and responding in new writing, with its clear distinction between individual and social activities, by a process in which these four steps happen virtually simultaneously and individual and social activities become more closely intertwined. Second, by replacing the linear form of arguments with graphical representations of networks of inferential relations which can grow over time in an infinite space, these tools not only allow a clear visualization of structures and relations, but also forms of collaboration in which, for example, participants work on different “construction zones” of larger argument maps, or debates are performed at specific points of disagreement on those maps. I introduce the term synergetic logosymphysis (defined as a process in which an argumentative structure grows in a collaborative effort) to describe a practice that combines these two dimensions of collaborative and web-based argument mapping.
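
To make the underlying representation concrete, here is a minimal sketch of an argument map as a graph data structure; the class names and tiny API are hypothetical illustrations, not AGORA-net's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    author: str  # who contributed the claim: the collaborative dimension

@dataclass
class ArgumentMap:
    """Arguments as a network of inferential relations rather than linear text."""
    claims: list[Claim] = field(default_factory=list)
    # (premise_id, conclusion_id, relation), with relation "supports" or "objects"
    edges: list[tuple[int, int, str]] = field(default_factory=list)

    def add_claim(self, text: str, author: str) -> int:
        self.claims.append(Claim(text, author))
        return len(self.claims) - 1

    def link(self, premise: int, conclusion: int, relation: str = "supports") -> None:
        self.edges.append((premise, conclusion, relation))

# Three participants working on different "construction zones" of one shared map:
m = ArgumentMap()
thesis = m.add_claim("Web-based argument mapping changes philosophical practice.", "A")
reason = m.add_claim("It overcomes limits of space, time, and access.", "B")
objection = m.add_claim("Linear prose conveys nuance that diagrams flatten.", "C")
m.link(reason, thesis, "supports")
m.link(objection, thesis, "objects")
```

Because the map is a graph rather than a linear document, support and objection edges can be attached at any node by different users at once, which is the near-simultaneity of the four steps that the abstract describes.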

Journal ArticleDOI
Marco Nørskov
TL;DR: In this paper, a modification of Ihde's theory of technology-based relationships is presented in which alterity and background relations are ontologically reduced to ratios between the mediated relationships; the paper concludes with a discussion of the usefulness of applying static categorization to complex technology and of various challenges and limitations.
Abstract: The question of how we relate to the world via technology is fundamental to the philosophy of technology. One of the leading experts, the contemporary philosopher Don Ihde, has addressed this core issue in many of his works and introduced a fourfold classification of technology-based relationships. The conceptual paper at hand offers a modification of Ihde’s theory, but unlike previous research, it explores the functional compositions of Ihde’s categories instead of complementing them with additional relational categories. The result is a simplification and reduction of the analytical categories of Ihde’s theory, where alterity and background relations are ontologically reduced to ratios between the mediated relationships. The paper uses cutting-edge robotics as a hermeneutic tool in order to present this point and concludes with a discussion of the usefulness of applying static categorization to complex technology and of various challenges and limitations.

Journal ArticleDOI
TL;DR: Employing a model of trust advanced by Buechner and Tavani, the author argues that the “short answer” to this question is yes, but also that a more complete and nuanced answer will require articulating the various levels of trust that are possible in environments comprising both human agents and AAs.
Abstract: Are trust relationships involving humans and artificial agents (AAs) possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (Ethics and Information Technology 13(1):39–51, 2011), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents (HAs) and AAs. In defending this view, I show how James Moor’s model for distinguishing four levels of ethical agents in the context of machine ethics (Moor, IEEE Intelligent Systems 21(4):18–21, 2006) can help us to develop a framework that differentiates four (loosely corresponding) levels of trust. Via a series of hypothetical scenarios, I illustrate each level of trust involved in HA–AA relationships. Finally, I argue that these levels of trust reflect three key factors or variables: (i) the level of autonomy of the individual AAs involved, (ii) the degree of risk/vulnerability on the part of the HAs who place their trust in the AAs, and (iii) the kind of interactions (direct vs. indirect) that occur between the HAs and AAs in the trust environments.
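
As a purely illustrative sketch, the three variables listed at the end of the abstract can be pictured as inputs to a rough level classifier; the thresholds, labels, and field names below are my own hypothetical rendering, not Tavani's formal definitions.

```python
from dataclasses import dataclass

@dataclass
class TrustContext:
    # The three factors the abstract identifies (ranges are assumptions):
    aa_autonomy: int          # 1 (low) .. 4 (high); loosely echoes Moor's four levels
    ha_vulnerability: float   # 0.0 .. 1.0; degree of risk the human agent accepts
    direct_interaction: bool  # direct vs. indirect HA-AA interaction

def trust_level(ctx: TrustContext) -> str:
    """Map the three factors onto four loosely corresponding levels of trust."""
    if ctx.aa_autonomy <= 1:
        return "level 1: mere reliance on a tool-like AA"
    if not ctx.direct_interaction:
        return "level 2: indirect, mediated trust"
    if ctx.ha_vulnerability < 0.5:
        return "level 3: direct trust with limited exposure"
    return "level 4: direct trust in a highly autonomous AA with high stakes"

# Example: a highly autonomous AA, a lot at stake, direct interaction.
print(trust_level(TrustContext(aa_autonomy=3, ha_vulnerability=0.8,
                               direct_interaction=True)))
```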

Journal ArticleDOI
TL;DR: The concept of proxy has political, not religious, roots: it is a late Middle English contraction of “procuracy”, which means ‘legitimate action taken in the place of, or on behalf of, another’ in the context of government or some kind of socio-political structure, as discussed in this paper.
Abstract: We do not check out a hotel personally; we rely on TripAdvisor. We may have never met a person, yet we are ‘friends’ on Facebook. We may press ‘like’ but engage only in some kind of slacktivism. It does not matter whether we haven’t got a clue about how to reach a place downtown, as long as we have access to Google Maps and follow the instructions. Five stars on Amazon may be sufficient to convince us of the quality of a product, even if we have never tried it ourselves. Being a ‘best seller’ in The New York Times Best Seller list is often a self-fulfilling prophecy. In all these cases, something (the signifier) signifies something else (the signified). Such ‘signifying’ is at the heart of every semantic and semiotic process. It is the immensely important relation of ‘standing for’. There is no sense, reference, or meaning without it. So, we have always helped ourselves to different kinds of signifying means, in order to interact with each other and the world, and make sense of both. We are the symbolic species after all, and twentieth-century philosophy—whether hermeneutically oriented or based on a philosophy of language—can easily be read in terms of a theory of signification. All this is clear, if complicated. The point here is that only our own culture, the culture that characterises mature information societies, is now evolving from being a culture of signs and signification into a culture of proxies and interaction. What is the difference? Why is this happening today? And what are the implications of such a major transformation? In order to answer these questions, one needs to understand better what a proxy is and what ‘degenerate’ proxies may be. Let me start from the concept of proxy. In the Roman Catholic Church, a vicar is a representative or deputy of a bishop. This role, and its long familiarity, led to the idea of something being ‘vicarious’ as something ‘acting or done for another’ and, hence, ‘vicariously’. The idea of ‘proxy’ is similar. The main difference is that its roots are political, not religious, for it is a late Middle English contraction of ‘procuracy’, which means ‘legitimate action taken in the place of, or on behalf of, another’, in the context of government or some kind of socio-political structures (e.g. one could get married by proxy). Today, in a vocabulary more deeply affected by information technology than by …

Journal ArticleDOI
TL;DR: The role of the body–brain dynamics in the processes that give rise to genuine understanding of the world is discussed, in line with recent proposals from enactive cognitive science.
Abstract: John Searle’s Chinese Room Argument (CRA) purports to demonstrate that syntax is not sufficient for semantics, and, hence, because computation cannot yield understanding, the computational theory of mind, which equates the mind to an information processing system based on formal computations, fails. In this paper, we use the CRA, and the debate that emerged from it, to develop a philosophical critique of recent advances in robotics and neuroscience. We describe results from a body of work that contributes to blurring the divide between biological and artificial systems: so-called animats, autonomous robots that are controlled by biological neural tissue, and what may be described as remote-controlled rodents, living animals endowed with augmented abilities provided by external controllers. We argue that, even though at first sight these chimeric systems may seem to escape the CRA, on closer analysis they do not. We conclude by discussing the role of the body–brain dynamics in the processes that give rise to genuine understanding of the world, in line with recent proposals from enactive cognitive science.

Journal ArticleDOI
TL;DR: In this paper, a deeper understanding of what cyber war is requires us to adopt an informational approach, which may enable us to account for the two-dimensional nature of cyber war (destruction and exploitation), to revise the notion of violence on which war is premised and to understand to what extent the traditional ideas of "just war" may apply to the scenario of cyber warfare.
Abstract: Cyber warfare has changed the scenario of war from an empirical and a theoretical viewpoint. Cyber war is no longer based on physical violence only, but on military, political, economic and ideological strategies meant to exploit a state’s informational resources. This means that a deeper understanding of what cyber war is requires us to adopt an informational approach. This approach may enable us to account for the two-dimensional nature of cyber war (destruction and exploitation), to revise the notion of violence on which war is premised and to understand to what extent the traditional ideas of ‘just war’ may apply to the scenario of cyber warfare. This point is crucial, since it concerns whether a cyber war is meant to restore a broken international political and legal order or to participate in its construction.

Journal ArticleDOI
TL;DR: In this paper, the authors define semantic or factual information as the combination of a question plus the relevant, correct answer, and if one has only the question but not the answer, then one is uncertain.
Abstract: What is uncertainty? There are of course several possible definitions, offered by different fields, from epistemology to statistics, but, in the background, one usually finds some kind of relation with the lack of information, in the following sense. Suppose we define semantic or factual information as the combination of a question plus the relevant, correct answer. If one has both the question and the correct answer, one is informed: ‘Was Berlin the capital of Germany in 2010? Yes’. If one has the question but the incorrect answer, one is insipient. If one has neither, one is ignorant. And if one has only the question but not the answer, then one is uncertain. Uncertainty is what a correct answer to a relevant question erases. This is why, in information theory, the value of information is often discussed in terms of the amount of uncertainty that it decreases. And this is also why there are many things in life that we value, but uncertainty is not usually one of them. At first sight, this may seem to be unproblematic, indeed obvious. What we actually value is information, understandable now as the appropriate combination of relevant questions and correct answers, the Qs and the As. We value information because it is power: power to understand what happened, forecast what will happen and, hence, choose now among the things that could happen between the past and the future. Marx and the past two centuries thought that power, understood as the sociopolitical ability to control or influence people’s behaviour, was exercised through the creation or control of (the means of production of) things, i.e. goods and services. But it is equally clear that power is also exercised through the creation or control of (the means of production of) information about things, e.g. laws, statistics, news or technoscience. To use a trivial example, if you wish to buy a secondhand car, you value information about its past (was it involved in any accident? yes), its future (is it expensive to run? yes) and its present (should I haggle over the price? yes). The more information you have, the better you may shape your environment and control its development and the more advantage you may enjoy against competitors who lack such a resource. This applies even to stand-alone contexts: Robinson Crusoe wishes to have information about the island, even if there is nobody else. But it applies even more strongly to socio-political contexts: once Robinson Crusoe is joined by Friday, the native …
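
The editorial's link between information and uncertainty has a standard formal rendering in information theory; the following gloss is a textbook Shannon formulation, not Floridi's own notation.

```latex
% Uncertainty about a question is the entropy of its answer variable X;
% the value of a correct answer A is the uncertainty it erases.
\[
  H(X) = -\sum_{x} p(x)\,\log_2 p(x),
  \qquad
  \mathrm{value}(A) = H(X) - H(X \mid A).
\]
% For a yes/no question with equal priors, H(X) = 1 bit; a definitive correct
% answer drives H(X | A) to 0, erasing exactly one bit of uncertainty.
```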


Journal ArticleDOI
TL;DR: In this article, the authors provide an underlying theory of argument by disanalogy, in order to employ it to show that cyberwarfare is fundamentally new (relative to traditional kinetic warfare, and espionage).
Abstract: We provide an underlying theory of argument by disanalogy, in order to employ it to show that cyberwarfare is fundamentally new (relative to traditional kinetic warfare, and espionage). Once this general case is made, the battle is won: we are well on our way to establishing our main thesis: that Just War Theory itself must be modernized. Augustine and Aquinas (and their predecessors) had a stunningly long run, but today’s world, based as it is on digital information and increasingly intelligent information-processing, points the way to a beast so big and so radically different, that the core of this duo’s insights needs to be radically extended.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that the traditional jus ad bellum and jus in bello criteria are fully capable of providing the ethical guidance needed to legitimately conduct military cyber operations.
Abstract: This article argues that the traditional jus ad bellum and jus in bello criteria are fully capable of providing the ethical guidance needed to legitimately conduct military cyber operations. The first part examines the criteria’s foundations by focusing on the notion of liability to defensive harm worked out by revisionist just war thinkers. The second part critiques the necessity of alternative frameworks, which its proponents assert are required to at least supplement the traditional just war criteria. Using the latter, the third part evaluates ethical issues germane to responding to cyber force, including casus belli, moral aspects of “the attribution problem,” and respective rights and duties when attacks involve innocent third-party states. The fourth part addresses in bello issues, including compliance with discrimination, necessity, and civilian due care imperatives; whether civilians may be targeted with sub-“use of force” cyber-attacks and the permissibility of using civilian contractors to conduct cyber-attacks. Throughout these analyses, conclusions are brought into conversation with those of The Tallinn Manual on the International Law Applicable to Cyber Warfare.

Journal ArticleDOI
TL;DR: In this paper, a special issue gathers together a selection of papers presented by international experts during a workshop entitled "Ethics of Cyber-Conflicts" which was devoted to fostering interdisciplinary debate on the ethical and legal problems and the regulatory gap concerning cyber conflicts.
Abstract: This special issue gathers together a selection of papers presented by international experts during a workshop entitled ‘Ethics of Cyber-Conflicts’, which was devoted to fostering interdisciplinary debate on the ethical and legal problems and the regulatory gap concerning cyber conflicts. The workshop was held in 2013 at the Centro Alti Studi Difesa in Rome under the auspices of the NATO Cooperative Cyber Defence Centre of Excellence (NATO CCD COE). This NATO-accredited international military organisation has always placed a high value on an interdisciplinary approach to cyber defence, uniting as it does perspectives from technical, policy, legal, and strategic domains. The Centre’s mission is to enhance capability, cooperation, and information-sharing between NATO, its member states, and partner countries in the area of cyber defence by virtue of research, education, and consultation. The workshop was one of the projects supported by the Centre to achieve this mission. Readers may already be familiar with the term ‘cyber conflict’, which is understood as any use of information and communication technologies (ICTs) that may have disruptive or destructive consequences. Cyber conflicts are an umbrella phenomenon encompassing several instances ranging from cyber warfare and hacktivism to cyber crime and cyber terrorism. As contemporary societies grow dependent on ICTs, any form of conflict that uses these technologies, both as a means and as a target, poses serious threats to their stability, security, and welfare. As recently reported by the Financial Times, ‘the ultimate impact [of cyber-conflicts] could be as much as $3 trillion in lost productivity and growth’. Furthermore, should cyber conflicts propagate without an adequate response, then contemporary societies may risk a cyber backlash in the shape of a deceleration to the digitization process imposed by governments and international institutions in order to prevent this …

Journal ArticleDOI
Ugo Pagallo
TL;DR: In this paper, the author stresses that cyber attacks can be carried out by non-state actors, and that identifying the party responsible for such a use of force, whether non-state actors or national sovereign states, is often impossible.
Abstract: The use of cyber force can be as severe and disruptive as traditional armed attacks are. Cyber attacks may neither provoke physical injuries nor cause property damages and still, they can affect essential functions of today’s societies, such as governmental services, business processes or communication systems that progressively depend on information as a vital resource. Whereas several scholars claim that an international treaty, as well as new forms of international cooperation, is necessary, a further challenge should be stressed: authors of cyber attacks can be non-state actors, and identifying the party responsible for such a use of force, whether non-state actors or national sovereign states, is often impossible. Accordingly, several programmes on online security and national defence have been developed by sovereign states to tackle this menace and yet, the endurance of Western democracies and their aim to protect basic rights have already been tested by such programmes over the past years. The new scenarios of cyber force concern not only the field of international law, since they may represent the main threat in the fields of national and constitutional law as well.

Journal ArticleDOI
TL;DR: In this paper, the authors focus on the conception of technologies as problem-solving physical instruments and study the debate between the humanist and nonhumanist ways of understanding technologies.
Abstract: A form of metaphysical humanism in the field of philosophy of technology can be defined as the claim that besides technologies’ physical aspects, purely human attributes are sufficient to conceptualize technologies. Metaphysical nonhumanism, on the other hand, would be the claim that the meanings of the operative words in any acceptable conception of technologies refer to the states of affairs or events which are in a way or another shaped by technologies. In this paper, I focus on the conception of technologies as problem-solving physical instruments in order to study the debate between the humanist and the nonhumanist ways of understanding technologies. I argue that this conception commits us to a hybrid understanding of technologies, one which is partly humanist and partly nonhumanist.

Journal ArticleDOI
Dingmar van Eck
TL;DR: This paper argues that the presented approach is useful for validating function-based design methods with respect to their explanatory elements and that it supports assessment of the explanatory and design utility of “function”, and the different conceptualizations thereof, as used in such engineering design methods.
Abstract: Analysis of the adequacy of engineering design methods, as well as analysis of the utility of concepts of function often invoked in these methods, is a neglected topic in both philosophy of technology and in engineering proper. In this paper, I present an approach—dubbed an explanationist perspective—for assessing the adequacy of function-based design methods. Engineering design is often intertwined with explanation, for instance, in reverse engineering and subsequent redesign, knowledge base-assisted designing, and diagnostic reasoning. I argue that the presented approach is useful for validating function-based design methods with respect to their explanatory elements and that it supports assessment of the explanatory and design utility of “function”, and the different conceptualizations thereof, as used in such engineering design methods. I deploy two key desiderata from the explanation literature to assess the viability of function-based design methods: explanatorily relevant difference-making factors and counterfactual understanding defined in terms of what-if-things-had-been-different questions. I explicate the approach and its merits in terms of two case studies drawn from the engineering functional modeling literature: reverse engineering and redesign and malfunction analysis. I close the paper by discussing ramifications of the presented approach for the philosophy of design and the philosophy of explanation.

Journal ArticleDOI
TL;DR: In this article, the authors analyze different categories of non-state participation in cyber operations and undertake to show under what conditions such actions, though illegal, might be morally defensible.
Abstract: A great deal of attention has been paid in recent years to the legality of the actions of states and state agents in international and non-international cyber conflicts. Less attention has been paid to ethical considerations in these situations, and very little has been written regarding the ethics of the participation of non-state actors in such conflicts. In this article, I analyze different categories of non-state participation in cyber operations and undertake to show under what conditions such actions, though illegal, might be morally defensible.

Journal ArticleDOI
Ugo Pagallo
TL;DR: In this article, the authors examine the realignment of the legal sources in an information society, by considering first of all the differences with the previous system of sources, dubbed as the Westphalian model.
Abstract: The paper examines the realignment of the legal sources in an information society, by considering first of all the differences with the previous system of sources, dubbed as the “Westphalian model”. The current system is tripartite, rather than bipartite, for the sources of transnational law should be added to the traditional dichotomy between national and international law. In addition, the system is dualistic, rather than monistic, because the tools of legal constructivism, such as codes or statutes, have to be complemented with the rules of customary law and contracts. In light of the canonical version of the law as a set of commands enforced through the threat of physical sanctions, however, two further novelties must be stressed, namely the soft law-tools of governance as a source of the system and the aim to embed normative constraints into technology, in order to enforce the law through the use of filtering systems, self-enforcing technologies, etc. This latter approach impacts on functions and requirements of the system, by transferring the normative side of the law from the traditional “ought to” of legal commands to what actually is on the basis of automatic techniques, thus affecting the very notion of source. The overall aim of the paper is to show that rearrangements of the legal sources are intertwined with redistributions of power and hence, a normative standpoint is needed, so as to determine whether scholars can obtain the solution that best justifies the integrity of the law before its hard cases, or the answer is a reasonable compromise between many conflicting interests.

Journal ArticleDOI
TL;DR: This issue presents the reader with six articles dwelling upon ethical problems characterizing contemporary information societies: The Democratic Governance of Information Societies: A Critique to the Theory of Stakeholders; Semantic Web Regulatory Models: Why Ethics Matter; The Realignment of the Sources of the Law and their Meaning in an Information Society; Levels of Trust in the Context of Machine Ethics; Developing Automated Deceptions and the Impact on Trust; and Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character.
Abstract: The special issue collects a selection of papers presented during the Computer Ethics: Philosophical Enquiries (CEPE) 2013 conference. This is a series of conferences organized by the International Association for Ethics and Information Technology (INSEIT) (http://inseit.net/), a professional organization formed in 2001 which gathers experts in information and computer ethics, prompting interdisciplinary research and discussions on ethical problems related to the design and deployment of information and communication technologies (ICTs). During the past two decades, CEPE conferences have been a focal point for research concerning crucial topics (Buchanan 1999, 2011), such as privacy (Hildebrandt 2008), online trust (Taddeo 2010; Taddeo and Floridi 2011), online identity (Ess 2012), value-sensitive design (Friedman, Kahn, and Borning 2006), and cyber-warfare (Floridi and Taddeo 2014; Taddeo 2014), along with education and professional ethics (Buchanan and Ocholla 2011). In this special issue, we present the reader with six articles dwelling upon ethical problems characterizing contemporary information societies: The Democratic Governance of Information Societies: A Critique to the Theory of Stakeholders; Semantic Web Regulatory Models: Why Ethics Matter; The Realignment of the Sources of the Law and their Meaning in an Information Society; Levels of Trust in the Context of Machine Ethics; Developing Automated Deceptions and the Impact on Trust; and Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character. In addition, this issue also includes a commentary describing the Online Manifesto Initiative; more on this presently. Media, academic articles, policy debates, and everyday discussions increasingly focus on the informational, technology-driven turn—the information revolution—that characterizes this historical moment, in which widely disseminated and radical changes simultaneously affect both individuals and societies. Over the past two decades, these …

Journal ArticleDOI
TL;DR: In this article, practicalism (the conceptual reduction of knowledge-that in general to knowledge-how) is applied to the case of technological knowledge-that, and it is argued that Norström has not shown that such knowledge cannot be reduced conceptually to a form of knowledge-how.
Abstract: Norström has argued that contemporary epistemological debates about the conceptual relations between knowledge-that and knowledge-how need to be supplemented by a concept of technological knowledge—with this being a further kind of knowledge. But this paper argues that Norström has not shown why technological knowledge-that is distinctive, because he has not shown that such knowledge cannot be reduced conceptually to a form of knowledge-how. The paper thus applies practicalism (the conceptual reduction of knowledge-that in general to knowledge-how) to the case of technological knowledge-that. Indeed, the paper shows why Norström’s conception of technological knowledge unintentionally strengthens this proposed form of reduction.