BookDOI

The EU General Data Protection Regulation (GDPR)

TL;DR: This Privacy Statement provides the information CBR is required to give under EU law about its processing of personal data, i.e. information relating to an individual.
Abstract: When providing our services, CBR acts as “data controller”. We collect and process information mainly on companies and businesses. In doing so, we also process data that may qualify as “personal data” under European Union (“EU”) law, as it is information relating to an individual. In this Privacy Statement we provide the information we are required to give in relation to the processing of personal data under EU law.
Citations
Journal ArticleDOI
TL;DR: It is shown that federated learning among 10 institutions results in models reaching 99% of the model quality achieved with centralized data, and the effects of data distribution across collaborating institutions on model quality and learning patterns are investigated.
Abstract: Several studies underscore the potential of deep learning in identifying complex patterns, leading to diagnostic and prognostic biomarkers. Identifying sufficiently large and diverse datasets, required for training, is a significant challenge in medicine; such datasets can rarely be found in individual institutions. Multi-institutional collaborations based on centrally shared patient data face privacy and ownership challenges. Federated learning is a novel paradigm for data-private multi-institutional collaborations, where model learning leverages all available data without sharing data between institutions, by distributing the model training to the data owners and aggregating their results. We show that federated learning among 10 institutions results in models reaching 99% of the model quality achieved with centralized data, and evaluate generalizability on data from institutions outside the federation. We further investigate the effects of data distribution across collaborating institutions on model quality and learning patterns, indicating that increased access to data through data-private multi-institutional collaborations can benefit model quality more than the errors introduced by the collaborative method degrade it. Finally, we compare with other collaborative-learning approaches, demonstrate the superiority of federated learning, and discuss practical implementation considerations. Clinical adoption of federated learning is expected to lead to models trained on datasets of unprecedented size, and hence to have a catalytic impact on precision/personalized medicine.
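The aggregation step the abstract alludes to can be made concrete with a short sketch, assuming a FedAvg-style weighted average of institution-trained parameters; the function names and weighting by dataset size are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch of server-side federated averaging: each institution
# trains locally and shares only parameters; raw patient data never leaves
# the institution. Names and the weighting scheme are assumptions.
from typing import Dict, List
import numpy as np

def aggregate(updates: List[Dict[str, np.ndarray]],
              num_examples: List[int]) -> Dict[str, np.ndarray]:
    """Weighted average of per-institution parameter updates."""
    total = float(sum(num_examples))
    return {
        name: sum((n / total) * u[name] for u, n in zip(updates, num_examples))
        for name in updates[0]
    }

# One federation round (orchestration shown as pseudocode):
#   updates = [train_locally(global_params, inst_data) for inst_data in institutions]
#   global_params = aggregate(updates, [len(inst_data) for inst_data in institutions])
```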

526 citations

Journal ArticleDOI
01 Mar 2021
TL;DR: In this article, the authors provide a review of federated learning in the biomedical space, summarize the general solutions to the statistical challenges, system challenges, and privacy issues in federated learning, and point out its implications and potential in healthcare.
Abstract: With the rapid development of computer software and hardware technologies, more and more healthcare data are becoming readily available from clinical institutions, patients, insurance companies, and pharmaceutical industries, among others. This access provides an unprecedented opportunity for data science technologies to derive data-driven insights and improve the quality of care delivery. Healthcare data, however, are usually fragmented and private, making it difficult to generate robust results across populations. For example, different hospitals own the electronic health records (EHR) of different patient populations, and these records are difficult to share across hospitals because of their sensitive nature. This creates a major barrier to developing effective analytical approaches that are generalizable, which require diverse, "big data." Federated learning, a mechanism for training a shared global model with a central server while keeping all the sensitive data in the local institutions where the data belong, holds great promise for connecting fragmented healthcare data sources with privacy preservation. The goal of this survey is to provide a review of federated learning technologies, particularly within the biomedical space. In particular, we summarize the general solutions to the statistical challenges, system challenges, and privacy issues in federated learning, and point out the implications and potentials in healthcare.
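The client-side half of the scheme the survey describes, where each institution fits the shared model on its own records and returns only parameters, can be sketched as follows; the logistic-regression objective and the class/function names are illustrative assumptions.

```python
# Sketch of a hospital-side client: sensitive records stay on-premises and
# only updated model parameters are returned to the central server.
# The simple logistic-regression update is an illustrative assumption.
import numpy as np

class HospitalClient:
    def __init__(self, features: np.ndarray, labels: np.ndarray):
        self.X, self.y = features, labels          # never shared

    def local_update(self, global_w: np.ndarray,
                     lr: float = 0.1, epochs: int = 5) -> np.ndarray:
        w = global_w.copy()
        for _ in range(epochs):                    # plain gradient descent
            preds = 1.0 / (1.0 + np.exp(-self.X @ w))
            grad = self.X.T @ (preds - self.y) / len(self.y)
            w -= lr * grad
        return w                                   # only parameters leave the site

# Server side: average the returned parameters across participating hospitals, e.g.
#   w_global = np.mean([c.local_update(w_global) for c in clients], axis=0)
```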

396 citations

Posted Content
TL;DR: A comprehensive review of federated learning systems can be found in this paper, where the authors provide a thorough categorization of the existing systems according to six different aspects, including data distribution, machine learning model, privacy mechanism, communication architecture, scale of federation and motivation of federation.
Abstract: Federated learning has been a hot research topic, enabling the collaborative training of machine learning models among different organizations under privacy restrictions. As researchers try to support more machine learning models with different privacy-preserving approaches, there is a need for systems and infrastructures that ease the development of various federated learning algorithms. Similar to deep learning systems such as PyTorch and TensorFlow, which boost the development of deep learning, federated learning systems (FLSs) are equally important and face challenges from various aspects such as effectiveness, efficiency, and privacy. In this survey, we conduct a comprehensive review of federated learning systems. To achieve a smooth flow and guide future research, we introduce the definition of federated learning systems and analyze the system components. Moreover, we provide a thorough categorization of federated learning systems according to six different aspects: data distribution, machine learning model, privacy mechanism, communication architecture, scale of federation, and motivation of federation. The categorization can help the design of federated learning systems, as shown in our case studies. By systematically summarizing the existing federated learning systems, we present the design factors, case studies, and future research opportunities.
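The six categorization aspects named in the abstract can be captured as a small record type; the aspect names follow the survey's list, while the field values in the example profile are illustrative assumptions.

```python
# The survey's six categorization aspects as a simple profile record.
# Field names follow the abstract; the example values are assumptions.
from dataclasses import dataclass

@dataclass
class FLSystemProfile:
    data_distribution: str            # e.g. horizontal vs. vertical partitioning
    machine_learning_model: str       # e.g. neural network, tree, linear model
    privacy_mechanism: str            # e.g. differential privacy, secure aggregation
    communication_architecture: str   # e.g. centralized server vs. decentralized
    scale_of_federation: str          # e.g. cross-silo vs. cross-device
    motivation_of_federation: str     # e.g. regulation, shared incentive

# Hypothetical profile of a cross-silo healthcare FLS:
example = FLSystemProfile(
    data_distribution="horizontal",
    machine_learning_model="neural network",
    privacy_mechanism="secure aggregation",
    communication_architecture="centralized",
    scale_of_federation="cross-silo",
    motivation_of_federation="regulation (e.g. GDPR)",
)
```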

305 citations

Proceedings ArticleDOI
01 Jun 2021
TL;DR: In this paper, the authors propose model-contrastive federated learning (MOON), which corrects the local training of individual parties by conducting contrastive learning at the model level.
Abstract: Federated learning enables multiple parties to collaboratively train a machine learning model without communicating their local data. A key challenge in federated learning is handling the heterogeneity of local data distributions across parties. Although many studies have been proposed to address this challenge, we find that they fail to achieve high performance on image datasets with deep learning models. In this paper, we propose MOON: model-contrastive federated learning. MOON is a simple and effective federated learning framework. The key idea of MOON is to utilize the similarity between model representations to correct the local training of individual parties, i.e., to conduct contrastive learning at the model level. Our extensive experiments show that MOON significantly outperforms the other state-of-the-art federated learning algorithms on various image classification tasks.
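The "contrastive learning at the model level" that the abstract describes is commonly presented as a term that pulls the local model's representation toward the global model's and pushes it away from the previous local model's; the sketch below follows that recipe, but the exact loss wiring, the use of cosine similarity, and the tau/mu hyperparameters are assumptions rather than a verbatim reproduction of the paper.

```python
# Sketch of a model-contrastive local objective in the spirit of MOON:
# supervised loss plus a contrastive term over representations from the
# current local, global, and previous-round local models. Hyperparameters
# and the exact formulation are illustrative assumptions.
import torch
import torch.nn.functional as F

def model_contrastive_loss(z_local: torch.Tensor,   # current local model features
                           z_global: torch.Tensor,  # global model features
                           z_prev: torch.Tensor,    # previous-round local features
                           logits: torch.Tensor,
                           labels: torch.Tensor,
                           tau: float = 0.5,
                           mu: float = 1.0) -> torch.Tensor:
    sim_glob = F.cosine_similarity(z_local, z_global, dim=-1) / tau
    sim_prev = F.cosine_similarity(z_local, z_prev, dim=-1) / tau
    # pull toward the global model, push away from the stale local model
    con = -torch.log(torch.exp(sim_glob) / (torch.exp(sim_glob) + torch.exp(sim_prev)))
    return F.cross_entropy(logits, labels) + mu * con.mean()
```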

292 citations

Proceedings Article
08 May 2019
TL;DR: A Systematic Literature Review of eXplainable Artificial Intelligence (XAI), finding that almost all of the studied papers deal with robots/agents explaining their behaviors to human users, and that very few works address inter-robot (inter-agent) explainability.
Abstract: Humans are increasingly relying on complex systems that heavily adopt Artificial Intelligence (AI) techniques. Such systems are employed in a growing number of domains, and making them explainable is a pressing priority. Recently, the domain of eXplainable Artificial Intelligence (XAI) emerged with the aim of fostering transparency and trustworthiness. Several reviews have been conducted; nevertheless, most of them deal with data-driven XAI to overcome the opaqueness of black-box algorithms. Contributions addressing goal-driven XAI (e.g., explainable agency for robots and agents) are still missing. This paper aims at filling this gap by proposing a Systematic Literature Review. The main findings are that (i) a considerable portion of the papers propose conceptual studies, lack evaluations, or tackle relatively simple scenarios; (ii) almost all of the studied papers deal with robots/agents explaining their behaviors to human users, and very few works address inter-robot (inter-agent) explainability; and (iii) while providing explanations to non-expert users has been outlined as a necessity, only a few works address the issues of personalization and context-awareness.

212 citations


Additional excerpts

  • ...This might be due to the effect of general emphasis on explainable AI and the “right to explanation” by the GDPR [101] and similar initiatives [12]....


  • ...However, in the near future, the European research on this subject might increase with the promulgation of the GDPR. Figure 4 presents the research domains working on EA....


  • ...This direction is confirmed by the ratification of recent General Data Protection Regulation (GDPR) law which underlines the right to explanations [12]....
