Journal ISSN: 2210-5433

Philosophy & Technology 

Springer Nature (Netherlands)
About: Philosophy & Technology is an academic journal published by Springer Nature (Netherlands). The journal publishes mainly in the areas of philosophy of technology and computer science. Its ISSN is 2210-5433. Over its lifetime, the journal has published 637 papers, which have received 14,263 citations. The journal is also listed under the subject headings philosophy (general) and Philosophy.


Papers
Journal Article (DOI)
TL;DR: The 20-volume Oxford English Dictionary (OED) is the accepted authority on the evolution of the English language over the last millennium, tracing the usage of words through 2.4 million quotations drawn from a wide range of international English-language sources.
Abstract: The 20-volume Oxford English Dictionary is the accepted authority on the evolution of the English language over the last millennium. It traces the usage of words through 2.4 million quotations from a wide range of international English language sources. The OED has a unique historical focus. Accompanying each definition is a chronologically arranged group of quotations that trace the usage of words, and show the contexts in which they can be used. The quotations are drawn from a huge variety of sources worldwide - literary, scholarly, technical, and popular - and represent authors as disparate as Geoffrey Chaucer and Erica Jong, William Shakespeare, Charles Darwin and Isabella Beeton. Other features distinguishing the entries in the Dictionary are authoritative definitions; detailed information on pronunciation using the International Phonetic Alphabet; listings of variant spellings used throughout each word's history; extensive treatment of etymology; and details of area of usage and of any regional characteristics. Alongside the print edition is the Oxford English Dictionary Online (www.oed.com). Updated quarterly, this award-winning online resource allows the Dictionary to evolve with the English language while the print edition remains as a historical record. Subscriptions are available to OED online on an individual or institutional basis. Visit www.oup.com/online/oed/ for details.

2,389 citations

Journal Article (DOI)
TL;DR: The paper takes up the well-honed distinction between personal identity and self-conception, that is, between who we are (our ontological self) and who we think we are (our epistemological self), and argues that, like a Wittgensteinian ladder, the distinction works best once you drop it.
Abstract: Some time ago, I met a very bright and lively graduate who registered with Facebook during the academic year 2003–2004, when she was a student at Harvard. Her Facebook ID number was 246. Impressive. A bit like being the 246th person to land on a new continent. Such Facebook ID numbers no longer exist. In a few years, that continent has become rather crowded, as she has been joined by several hundred million users worldwide. Half a billion was reached in July 2010. It is a good reminder of how more and more people spend an increasing amount of time ‘onlife’, interacting with and within an infosphere that is neither entirely virtual nor only physical. It is also a good reminder of how influential information and communication technologies are becoming in shaping our personal identities, as technologies of the self. In the philosophy of mind, there is a well-honed distinction between personal identity and self-conception, or, more simply, between who we are (call it our ontological self) and who we think we are (call it our epistemological self). Like many other handy distinctions, this too seems to work at its best once you drop it. Like a Wittgensteinian ladder, it helps you to reach a better perspective, as long as you do not get stuck on it. Of course, there is a difference between being and believing to be. However, it is equally obvious that, in healthy individuals, the ontological and the epistemological selves flourish only if they support each other in a symbiotic relationship. Not only should our self-conceptions be close to who we really are. Our ontological selves are also sufficiently malleable to be significantly influenced by who we think we are or would like to be. And such epistemological selves in turn are sufficiently ductile to be shaped by who we are told to be. Enter the social self:

1,675 citations

Journal Article (DOI)
TL;DR: An overview of available technical solutions to enhance fairness, accountability, and transparency in algorithmic decision-making is provided, and the Open Algorithms (OPAL) project is described as a step towards realizing the vision of a world where data and algorithms are used as lenses and levers in support of democracy and development.
Abstract: The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective and thus potentially fairer decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this paper, we provide an overview of available technical solutions to enhance fairness, accountability, and transparency in algorithmic decision-making. We also highlight the critical urgency of engaging multi-disciplinary teams of researchers, practitioners, policy-makers, and citizens to co-develop, deploy, and evaluate in the real world algorithmic decision-making processes designed to maximize fairness and transparency. In doing so, we describe the Open Algorithms (OPAL) project as a step towards realizing the vision of a world where data and algorithms are used as lenses and levers in support of democracy and development.

330 citations

Journal Article (DOI)
TL;DR: It is suggested that a practice-based approach, which studies how values are enacted in specific practices, can open the way for a new set of theoretical questions about self-tracking for health, and the paper shows how this can work by describing various enactments of autonomy, solidarity, and authenticity among self-trackers in the Quantified Self community.
Abstract: Self-tracking devices point to a future in which individuals will be more involved in the management of their health and will generate data that will benefit clinical decision making and research. They have thus attracted enthusiasm from medical and public health professionals as key players in the move toward participatory and personalized healthcare. Critics, however, have begun to articulate a number of broader societal and ethical concerns regarding self-tracking, foregrounding their disciplining and disempowering effects. This paper has two aims: first, to analyze some of the key promises and concerns that inform this polarized debate. I argue that far from being solely about health outcomes, this debate is very much about fundamental values that are at stake in the move toward personalized healthcare, namely, the values of autonomy, solidarity, and authenticity. The second aim is to provide a framework within which an alternative approach to self-tracking for health can be developed. I suggest that a practice-based approach, which studies how values are enacted in specific practices, can open the way for a new set of theoretical questions. In the last part of the paper, I sketch out how this can work by describing various enactments of autonomy, solidarity, and authenticity among self-trackers in the Quantified Self community. These examples show that shifting attention to practices can render visible alternative and sometimes unexpected enactments of values. Insofar as these may challenge both the promises and concerns in the debate on self-tracking for health, they can lay the groundwork for new conceptual interventions in future research.

260 citations

Journal Article (DOI)
TL;DR: In this article, the author argues that algorithmic governance poses a significant threat to the legitimacy of public decision-making processes (bureaucratic, legislative and legal) and considers two possible solutions, resistance and accommodation, arguing that neither is likely to succeed.
Abstract: One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes (bureaucratic, legislative and legal) on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Setting aside common concerns with data protection and privacy, I argue that algorithmic governance does pose a significant threat to the legitimacy of such processes. Modelling my argument on Estlund’s threat of epistocracy, I call this the ‘threat of algocracy’. The article clarifies the nature of this threat and addresses two possible solutions (named, respectively, ‘resistance’ and ‘accommodation’). It is argued that neither solution is likely to be successful, at least not without risking many other things we value about social decision-making. The result is a somewhat pessimistic conclusion in which we confront the possibility that we are creating decision-making processes that constrain and limit opportunities for human participation.

225 citations

Performance Metrics
Number of papers published by the journal in previous years:
Year    Papers
2023    48
2022    128
2021    88
2020    55
2019    40
2018    42