SciSpace (formerly Typeset)
Author

David Wright

Bio: David Wright is an academic researcher from Inmarsat. The author has contributed to research on topics including the Data Protection Act 1998 and information privacy. The author has an h-index of 27 and has co-authored 75 publications receiving 2,223 citations.


Papers
Journal ArticleDOI
TL;DR: It is found that current regulatory mechanisms do not adequately address privacy and civil liberties concerns because UASs are complex, multimodal surveillance systems that integrate a range of technologies and capabilities.

307 citations

Book ChapterDOI
01 Jan 2013
TL;DR: It is argued that there are seven different types of privacy and that an imprecise conceptualisation of privacy may be necessary to maintain a fluidity that enables new dimensions of privacy to be identified, understood and addressed in order to respond effectively to rapid technological evolution.
Abstract: As technologies develop, conceptualisations of privacy have developed alongside them, from a “right to be let alone” to attempts to capture the complexity of privacy issues within frameworks that highlight the legal, social-psychological, economic or political concerns that technologies present. However, this reactive highlighting of concerns or intrusions does not provide an adequate framework through which to understand the ways in which privacy should be proactively protected. Rights to privacy, such as those enshrined in the European Charter of Fundamental Rights, require a forward-looking privacy framework that positively outlines the parameters of privacy in order to prevent intrusions, infringements and problems. This paper makes a contribution to a forward-looking privacy framework by examining the privacy impacts of six new and emerging technologies. It analyses the privacy issues that each of these technologies presents and argues that there are seven different types of privacy. We also use this case study information to suggest that an imprecise conceptualisation of privacy may be necessary to maintain a fluidity that enables new dimensions of privacy to be identified, understood and addressed in order to effectively respond to rapid technological evolution.

195 citations

Journal ArticleDOI
TL;DR: In this paper, the authors propose a framework for an ethical impact assessment which can be performed in regard to any policy, service, project or programme involving information technology. The framework is structured on the four principles posited by Beauchamp and Childress, together with a separate section on privacy and data protection.
Abstract: This paper proposes a framework for an ethical impact assessment which can be performed in regard to any policy, service, project or programme involving information technology. The framework is structured on the four principles posited by Beauchamp and Childress together with a separate section on privacy and data protection. The framework identifies key social values and ethical issues, provides some brief explanatory contextual information which is then followed by a set of questions aimed at the technology developer or policy-maker to facilitate consideration of ethical issues, in consultation with stakeholders, which may arise in their undertaking. In addition, the framework includes a set of ethical tools and procedural practices which can be employed as part of the ethical impact assessment. Although the framework has been developed within a European context, it could be applied equally well beyond European borders.
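To make the structure of such an assessment concrete, the following is a minimal sketch of a principle-based checklist in Python. The five headings follow the abstract (Beauchamp and Childress's four principles plus a separate privacy and data protection section), but the individual questions and the checklist mechanics are hypothetical illustrations, not the published framework.

```python
# Illustrative sketch only: the principle headings follow Beauchamp and Childress
# (plus the separate privacy/data-protection section the paper describes), but the
# concrete questions and this simple checklist structure are hypothetical.

EIA_CHECKLIST = {
    "autonomy": [
        "Can affected individuals give informed consent to the technology?",
        "Does the system allow users to opt out without penalty?",
    ],
    "non-maleficence": [
        "Could the system cause physical, psychological or financial harm?",
    ],
    "beneficence": [
        "What concrete benefits does the project deliver, and to whom?",
    ],
    "justice": [
        "Are costs and benefits distributed fairly across groups?",
    ],
    "privacy and data protection": [
        "What personal data are collected, and is collection minimised?",
        "How long are data retained and who can access them?",
    ],
}


def open_questions(answers: dict[str, dict[str, str]]) -> list[str]:
    """Return the checklist questions that stakeholders have not yet answered."""
    pending = []
    for principle, questions in EIA_CHECKLIST.items():
        for question in questions:
            if not answers.get(principle, {}).get(question):
                pending.append(f"[{principle}] {question}")
    return pending


if __name__ == "__main__":
    # An assessment that has addressed only one question so far.
    partial = {"autonomy": {EIA_CHECKLIST["autonomy"][0]: "Yes, via an opt-in form."}}
    for item in open_questions(partial):
        print(item)
```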

141 citations

Journal ArticleDOI
25 Jun 2018
TL;DR: It is suggested that the concept of responsible research and innovation (RRI) can provide the framing required to act with a view to ensuring that the technologies are socially acceptable, desirable, and sustainable.
Abstract: Emerging combinations of artificial intelligence, big data, and the applications these enable are receiving significant media and policy attention. Much of the attention concerns privacy and other ethical issues. In our article, we suggest that what is needed now is a way to comprehensively understand these issues and find mechanisms of addressing them that involve stakeholders, including civil society, to ensure that these technologies’ benefits outweigh their disadvantages. We suggest that the concept of responsible research and innovation (RRI) can provide the framing required to act with a view to ensuring that the technologies are socially acceptable, desirable, and sustainable. We draw from our work on the Human Brain Project, one potential driver for the next generation of these technologies, to discuss how RRI can be put in practice.

124 citations

Proceedings ArticleDOI
21 May 2015
TL;DR: This paper reviews existing best practices in the analysis and design stages of the system development lifecycle, introduces a systematic methodology for privacy engineering that merges and integrates them, leveraging their best features whilst addressing their weak points, and describes its alignment with current standardization efforts.
Abstract: Data protection authorities worldwide have agreed on the value of considering privacy-by-design principles when developing privacy-friendly systems and software. However, on the technical plane, a profusion of privacy-oriented guidelines and approaches coexists; these provide partial solutions to the overall problem and aid engineers during different stages of the system development lifecycle. As a result, engineers find it difficult to understand what they should do to make their systems abide by privacy by design, which hinders the adoption of privacy engineering practices. This paper reviews existing best practices in the analysis and design stages of the system development lifecycle, introduces a systematic methodology for privacy engineering that merges and integrates them, leveraging their best features whilst addressing their weak points, and describes its alignment with current standardization efforts.
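As a rough illustration of the kind of analysis-stage check a systematic privacy engineering methodology might automate, the sketch below flags data flows that carry personal data without a recorded purpose or minimisation measure. It is an assumption-laden toy, not the methodology the paper introduces; all names and rules are invented.

```python
# Hypothetical sketch of an analysis-stage privacy check, not the paper's
# methodology: it flags data flows that carry personal data but have no recorded
# purpose or minimisation measure, the kind of gap a systematic privacy
# engineering method would surface before design decisions are fixed.

from dataclasses import dataclass, field


@dataclass
class DataFlow:
    name: str
    personal_data: bool
    purpose: str = ""
    minimisation: list[str] = field(default_factory=list)


def privacy_gaps(flows: list[DataFlow]) -> list[str]:
    """Return human-readable findings for flows that need privacy attention."""
    findings = []
    for flow in flows:
        if not flow.personal_data:
            continue  # Flows without personal data are out of scope for this check.
        if not flow.purpose:
            findings.append(f"{flow.name}: personal data without a stated purpose")
        if not flow.minimisation:
            findings.append(f"{flow.name}: no minimisation measure recorded")
    return findings


if __name__ == "__main__":
    flows = [
        DataFlow("location upload", personal_data=True, purpose="navigation"),
        DataFlow("crash diagnostics", personal_data=False),
    ]
    print(privacy_gaps(flows))
    # ['location upload: no minimisation measure recorded']
```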

93 citations


Cited by
01 Jan 1982
Abstract (table of contents): Introduction; 1. Woman's Place in Man's Life Cycle; 2. Images of Relationship; 3. Concepts of Self and Morality; 4. Crisis and Transition; 5. Women's Rights and Women's Judgment; 6. Visions of Maturity; References; Index of Study Participants; General Index

7,539 citations

Journal Article
TL;DR: This research examines the state of the art in automatic transport systems and, within the CityMobil project, the interaction between demand and socioeconomic attributes through Mixed Logit models.
Abstract (table of contents):
1 The innovative transport systems and the CityMobil project
1.1 The research questions
2 The state of the art in the field of automatic transport systems
2.1 Case studies and demand studies for innovative transport systems
3 The design and implementation of surveys
3.1 Definition of experimental design
3.2 Questionnaire design and delivery
3.3 First analyses on the collected sample
4 Calibration of Multinomial Logit demand models
4.1 Methodology
4.2 Calibration of the “full” model
4.3 Calibration of the “final” model
4.4 The demand analysis through the final Multinomial Logit model
5 The analysis of interaction between the demand and socioeconomic attributes
5.1 Methodology
5.2 Application of Mixed Logit models to the demand
5.3 Analysis of the interactions between demand and socioeconomic attributes through Mixed Logit models
5.4 Mixed Logit model and interaction between age and the demand for the CTS
5.5 Demand analysis with Mixed Logit model
6 Final analyses and conclusions
6.1 Comparison between the results of the analyses
6.2 Conclusions
6.3 Answers to the research questions and future developments
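For readers unfamiliar with the models named in this outline, the sketch below illustrates the core of a Multinomial Logit demand model: the probability of choosing alternative i is P(i) = exp(V_i) / Σ_j exp(V_j), where V_i is the systematic utility of alternative i. The alternatives, attributes and coefficient values are invented for illustration and are not taken from the CityMobil study.

```python
# Minimal illustration of the Multinomial Logit choice probability used in demand
# modelling: P(i) = exp(V_i) / sum_j exp(V_j), where V_i is the systematic utility
# of alternative i. Alternatives, attributes and coefficients are invented.

import math


def mnl_probabilities(utilities: dict[str, float]) -> dict[str, float]:
    """Return choice probabilities from systematic utilities via the MNL formula."""
    # Subtract the maximum utility before exponentiating for numerical stability.
    v_max = max(utilities.values())
    exp_v = {alt: math.exp(v - v_max) for alt, v in utilities.items()}
    total = sum(exp_v.values())
    return {alt: val / total for alt, val in exp_v.items()}


if __name__ == "__main__":
    # Hypothetical utilities: V = beta_time * travel_time + beta_cost * cost (+ constant)
    utilities = {
        "automated shuttle": -0.05 * 12 - 0.3 * 1.5 + 0.4,
        "conventional bus": -0.05 * 18 - 0.3 * 1.2,
        "car": -0.05 * 10 - 0.3 * 3.0,
    }
    for alt, p in mnl_probabilities(utilities).items():
        print(f"{alt}: {p:.2f}")
```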

4,784 citations

Journal ArticleDOI
TL;DR: In this paper, a taxonomy of recent contributions related to explainability of different machine learning models, including those aimed at explaining Deep Learning methods, is presented, and a second dedicated taxonomy is built and examined in detail.

2,827 citations

Journal ArticleDOI
TL;DR: The results suggested that TAM was able to provide a reasonable depiction of physicians' intention to use telemedicine technology, and suggested both the limitations of the parsimonious model and the need for incorporating additional factors or integrating with other IT acceptance models in order to improve its specificity and explanatory utility in a health-care context.
Abstract: The rapid growth of investment in information technology (IT) by organizations worldwide has made user acceptance an increasingly critical technology implementation and management issue. While such acceptance has received fairly extensive attention from previous research, additional efforts are needed to examine or validate existing research results, particularly those involving different technologies, user populations, and/or organizational contexts. In response, this paper reports research that examined the applicability of the Technology Acceptance Model (TAM) in explaining physicians' decisions to accept telemedicine technology in the health-care context. The technology, the user group, and the organizational context are all new to IT acceptance/adoption research. The study also addressed a pragmatic technology management need resulting from the millions of dollars invested by health-care organizations in developing and implementing telemedicine programs in recent years. The model's overall fit, explanatory power, and the individual causal links that it postulates were evaluated by examining the acceptance of telemedicine technology among physicians practicing at public tertiary hospitals in Hong Kong. Our results suggested that TAM was able to provide a reasonable depiction of physicians' intention to use telemedicine technology. Perceived usefulness was found to be a significant determinant of attitude and intention, but perceived ease of use was not. The relatively low R-square of the model suggests both the limitations of the parsimonious model and the need for incorporating additional factors or integrating with other IT acceptance models in order to improve its specificity and explanatory utility in a health-care context. Based on the study findings, implications for user technology acceptance research and telemedicine management are discussed.
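As a simplified illustration of the causal structure TAM postulates (perceived ease of use and perceived usefulness feeding attitude, which together with usefulness feeds intention), the sketch below approximates each path with ordinary least squares on synthetic data. The paper evaluates a full structural model on physician survey data; the variables, coefficients and data here are purely illustrative.

```python
# Simplified sketch of the core TAM relationships the abstract tests:
# perceived usefulness (PU) and perceived ease of use (PEOU) -> attitude (ATT),
# and ATT + PU -> behavioural intention (BI). Each path is approximated with
# ordinary least squares on synthetic data, not the paper's structural model.

import numpy as np


def ols(y: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Return least-squares coefficients for y ~ X (intercept included first)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef


rng = np.random.default_rng(0)
n = 500
peou = rng.normal(size=n)
pu = 0.4 * peou + rng.normal(scale=0.8, size=n)                # PEOU -> PU
att = 0.6 * pu + 0.1 * peou + rng.normal(scale=0.8, size=n)    # PU, PEOU -> ATT
bi = 0.5 * att + 0.3 * pu + rng.normal(scale=0.8, size=n)      # ATT, PU -> BI

print("ATT ~ PU + PEOU:", ols(att, np.column_stack([pu, peou])))
print("BI  ~ ATT + PU :", ols(bi, np.column_stack([att, pu])))
```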

1,924 citations

Posted Content
TL;DR: Previous efforts to define explainability in Machine Learning are summarized, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought, and a taxonomy of recent contributions related to the explainability of different Machine Learning models are proposed.
Abstract: In the last years, Artificial Intelligence (AI) has achieved notable momentum that may deliver the best of expectations over many application sectors across the field. For this to occur, the entire community stands in front of the barrier of explainability, an inherent problem of AI techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI. Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as a crucial feature for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, including a prospect toward what is yet to be reached. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at Deep Learning methods, for which a second taxonomy is built. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.
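As one concrete instance of the post-hoc, model-agnostic techniques such a survey catalogues, the sketch below computes permutation feature importance for a black-box classifier with scikit-learn. The technique, dataset and library are illustrative choices of this example, not ones singled out by the paper.

```python
# Example of a post-hoc, model-agnostic explanation technique: permutation feature
# importance, which measures how much a model's held-out score drops when each
# feature is shuffled. Dataset, model and library choice are illustrative only.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
# Rank features by how much shuffling them degrades accuracy on held-out data.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```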

1,602 citations