Author

Christina Tikkinen-Piri

Bio: Christina Tikkinen-Piri is an academic researcher from the University of Oulu. The author has contributed to research in topics: European Union & General Data Protection Regulation. The author has an h-index of 1 and has co-authored 1 publication receiving 156 citations.

Papers
Journal ArticleDOI
TL;DR: The purposes of this study were to compare the current Data Protection Directive 95/46/EC with the GDPR by systematically analysing their differences and to identify the GDPR's practical implications, specifically for companies that provide services based on personal data.

244 citations


Cited by
Posted Content
TL;DR: A framework with step-by-step design guidelines paired with evaluation methods is developed to close the iterative design and evaluation cycles in multidisciplinary XAI teams, and ready-to-use summary tables of evaluation methods and recommendations for different goals in XAI research are provided.
Abstract: The need for interpretable and accountable intelligent systems grows along with the prevalence of artificial intelligence applications used in everyday life. Explainable intelligent systems are designed to self-explain the reasoning behind system decisions and predictions, and researchers from different disciplines work together to define, design, and evaluate interpretable systems. However, scholars from different disciplines focus on different objectives and fairly independent topics of interpretable machine learning research, which poses challenges for identifying appropriate design and evaluation methodology and consolidating knowledge across efforts. To this end, this paper presents a survey and framework intended to share knowledge and experiences of XAI design and evaluation methods across multiple disciplines. Aiming to support diverse design goals and evaluation methods in XAI research, after a thorough review of XAI related papers in the fields of machine learning, visualization, and human-computer interaction, we present a categorization of interpretable machine learning design goals and evaluation methods to show a mapping between design goals for different XAI user groups and their evaluation methods. From our findings, we develop a framework with step-by-step design guidelines paired with evaluation methods to close the iterative design and evaluation cycles in multidisciplinary XAI teams. Further, we provide summarized ready-to-use tables of evaluation methods and recommendations for different goals in XAI research.

291 citations
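
The mapping this abstract describes between design goals for different XAI user groups and their evaluation methods can be pictured as a simple lookup structure. The Python sketch below is only an illustration of that idea; the user groups, goals, and method names are invented placeholders, not the categories defined in the paper.

```python
# Illustrative only: group names, goals, and methods are placeholders,
# not the categories from the paper's framework.
from dataclasses import dataclass, field


@dataclass
class DesignGoal:
    """A design goal for an XAI user group, paired with candidate evaluation methods."""
    user_group: str                     # e.g. model developers, domain experts, lay end users
    goal: str                           # what the explanation should achieve for that group
    evaluation_methods: list[str] = field(default_factory=list)


# A toy mapping in the spirit of the paper's goal-to-method tables.
framework = [
    DesignGoal("model developers", "debug and improve the model",
               ["ablation studies", "simulatability tests"]),
    DesignGoal("domain experts", "calibrate trust in predictions",
               ["trust questionnaires", "task-performance studies"]),
    DesignGoal("lay end users", "understand why a decision affects them",
               ["comprehension quizzes", "satisfaction surveys"]),
]


def methods_for(user_group: str) -> list[str]:
    """Look up candidate evaluation methods for a given user group."""
    return [m for g in framework if g.user_group == user_group
            for m in g.evaluation_methods]


if __name__ == "__main__":
    print(methods_for("domain experts"))
```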

Journal ArticleDOI
31 Aug 2021
TL;DR: The need for interpretable and accountable intelligent systems grows along with the prevalence of artificial intelligence (AI) applications used in everyday life as discussed by the authors, and explainable AI (XAI) systems are i...
Abstract: The need for interpretable and accountable intelligent systems grows along with the prevalence of artificial intelligence (AI) applications used in everyday life. Explainable AI (XAI) systems are i...

102 citations

Posted Content
TL;DR: The author assumes that the new provisions of Article 17 of the EU Proposal for a General Data Protection Regulation do not seem to represent a revolutionary change to the existing rules with regard to the right granted to the individual, but instead have an impact on the extension of the protection of the information disseminated on-line.
Abstract: The EU Proposal for a General Data Protection Regulation has caused a wide debate between lawyers and legal scholars and many opinions have been voiced on the issue of the right to be forgotten. In order to analyse the relevance of the new rule provided by Article 17 of the Proposal, this paper considers the original idea of the right to be forgotten, pre-existing in both European and U.S. legal frameworks. This article focuses on the new provisions of Article 17 of the EU Proposal for a General Data Protection Regulation and evaluates its effects on court decisions. The author assumes that the new provisions do not seem to represent a revolutionary change to the existing rules with regard to the right granted to the individual, but instead have an impact on the extension of the protection of the information disseminated on-line.

100 citations

Posted Content
28 Nov 2018
TL;DR: This work supports the different evaluation goals in interpretable machine learning research by a thorough review of evaluation methodologies used in machine-explanation research across the fields of human-computer interaction, visual analytics, and machine learning.
Abstract: The need for interpretable and accountable intelligent systems grows as artificial intelligence plays a greater role in human life. Explainable artificial intelligence systems can be a solution by self-explaining the reasoning behind the decisions and predictions of the intelligent system. Researchers from different disciplines work together to define, design, and evaluate interpretable intelligent systems for the user. Our work supports the different evaluation goals in interpretable machine learning research through a thorough review of evaluation methodologies used in machine-explanation research across the fields of human-computer interaction, visual analytics, and machine learning. We present a 2D categorization of interpretable machine learning evaluation methods and show a mapping between user groups and evaluation measures. Further, we address the essential factors and steps for a sound evaluation plan by proposing a nested model for the design and evaluation of explainable artificial intelligence systems.

90 citations

Journal ArticleDOI
23 Jan 2020
TL;DR: The use of blockchain technology and biometrics as a means to ensure the “unicity” and “singularity” of identities, and the associated challenges pertaining to the security and confidentiality of personal information are explored.
Abstract: After introducing key concepts and definitions in the field of digital identity, this paper will investigate the benefits and drawbacks of existing identity systems on the road toward achieving self-sovereign identity. It will explore, in particular, the use of blockchain technology and biometrics as a means to ensure the "unicity" and "singularity" of identities, and the associated challenges pertaining to the security and confidentiality of personal information. The paper will then describe an alternative approach to self-sovereign identity based on a system of blockchain-based attestations, claims, credentials, and permissions, which are globally portable across the life of an individual. While not dependent on any particular government or organization for administration or legitimacy, credentials and attestations might nonetheless include government-issued identification and biometrics as one of many indicia of identity. Such a solution, based on a recorded and signed digital history of attributes and activities, best approximates the fluidity and granularity of identity, enabling individuals to express only specific facets of their identity, depending on the parties with whom they wish to interact. To illustrate the difficulties inherent in the implementation of a self-sovereign identity system in the real world, the paper will focus on two blockchain-based identity solutions as case studies: (1) Kiva's identity protocol for building credit history in Sierra Leone, and (2) World Food Programme's Building Blocks program for delivering cash aid to refugees in Jordan. Finally, the paper will explore how the combination of blockchain-based cryptocurrencies and self-sovereign identity may contribute to promoting greater economic inclusion. With digital transactions functioning as identity claims within an ecosystem based on self-sovereign identity, new business models might emerge, such as identity insurance schemes, along with the emergence of value-stable cryptocurrencies ("stablecoins") functioning as local currencies.

87 citations
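
The abstract above frames identity as a recorded and signed digital history of attestations and claims. The sketch below illustrates one possible shape of such a record: each attestation is hash-linked to the previous entry and signed by its issuer. It is a toy illustration under stated assumptions, not the Kiva or World Food Programme protocols; it uses Ed25519 primitives from the third-party cryptography package, and all field names are invented.

```python
# Toy sketch of a hash-linked, issuer-signed attestation history.
# Assumes the third-party "cryptography" package; record fields are illustrative.
import hashlib
import json
from dataclasses import asdict, dataclass

from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


@dataclass
class Attestation:
    subject: str          # identifier of the person the claim is about
    claim: str            # e.g. "repaid_loan", "over_18", "received_cash_aid"
    issuer: str           # who vouches for the claim (government, NGO, lender, ...)
    prev_hash: str        # hash of the previous attestation, forming an append-only chain
    signature: str = ""   # issuer's Ed25519 signature over the other fields (hex)

    def payload(self) -> bytes:
        """Canonical bytes covered by the issuer's signature."""
        body = {k: v for k, v in asdict(self).items() if k != "signature"}
        return json.dumps(body, sort_keys=True).encode()

    def digest(self) -> str:
        """Hash that the next attestation records as prev_hash."""
        return hashlib.sha256(self.payload() + self.signature.encode()).hexdigest()


def issue(subject: str, claim: str, issuer: str, prev_hash: str,
          key: Ed25519PrivateKey) -> Attestation:
    """Create an attestation and sign it with the issuer's private key."""
    att = Attestation(subject, claim, issuer, prev_hash)
    att.signature = key.sign(att.payload()).hex()
    return att


def verify(att: Attestation, pub: Ed25519PublicKey) -> bool:
    """Check the issuer's signature over the attestation payload."""
    try:
        pub.verify(bytes.fromhex(att.signature), att.payload())
        return True
    except Exception:
        return False


if __name__ == "__main__":
    issuer_key = Ed25519PrivateKey.generate()
    first = issue("did:example:alice", "repaid_loan", "lender-coop", "0" * 64, issuer_key)
    second = issue("did:example:alice", "over_18", "civil-registry", first.digest(), issuer_key)
    print(verify(second, issuer_key.public_key()))  # True
```

In this sketch, selectively disclosing "only specific facets of their identity" would amount to presenting individual attestations (and their verification chain) rather than the whole history, which is the design choice the abstract emphasizes.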