Author

Dominic King

Bio: Dominic King is an academic researcher from the University of Birmingham. The author has contributed to research in the topics of Medicine & Health care. The author has an h-index of 35 and has co-authored 109 publications receiving 6,465 citations. Previous affiliations of Dominic King include Imperial College London & Imperial College Healthcare.


Papers
Journal ArticleDOI
TL;DR: A novel deep learning architecture performs device-independent tissue segmentation of clinical 3D retinal images, followed by a separate diagnostic classification stage that meets or exceeds human expert diagnosis of retinal disease.
Abstract: The volume and complexity of diagnostic imaging is increasing at a pace faster than the availability of human expertise to interpret it. Artificial intelligence has shown great promise in classifying two-dimensional photographs of some common diseases and typically relies on databases of millions of annotated images. Until now, the challenge of reaching the performance of expert clinicians in a real-world clinical pathway with three-dimensional diagnostic scans has remained unsolved. Here, we apply a novel deep learning architecture to a clinically heterogeneous set of three-dimensional optical coherence tomography scans from patients referred to a major eye hospital. We demonstrate performance in making a referral recommendation that reaches or exceeds that of experts on a range of sight-threatening retinal diseases after training on only 14,884 scans. Moreover, we demonstrate that the tissue segmentations produced by our architecture act as a device-independent representation; referral accuracy is maintained when using tissue segmentations from a different type of device. Our work removes previous barriers to wider clinical use without prohibitive training data requirements across multiple pathologies in a real-world setting.
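
The two-stage design summarised above lends itself to a rough sketch: one network produces a device-independent tissue segmentation, and a second, separately trained network maps that segmentation to a referral decision. The PyTorch sketch below is a minimal illustration of this decoupling only; the module internals, class counts, and tensor shapes are all assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the two-stage design described in the abstract:
# a segmentation network maps a raw OCT volume to a device-independent
# tissue map, and a separate classifier maps that tissue map to a referral
# recommendation. Class counts and shapes are assumptions for illustration.

NUM_TISSUE_CLASSES = 15   # assumed number of tissue/artefact classes
NUM_REFERRAL_CLASSES = 4  # e.g. urgent / semi-urgent / routine / observation

class SegmentationNet(nn.Module):
    """Stage 1: raw 3D scan -> per-voxel tissue probabilities."""
    def __init__(self):
        super().__init__()
        # Stand-in for a 3D U-Net-style encoder-decoder.
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, NUM_TISSUE_CLASSES, 1),
        )
    def forward(self, scan):
        return self.net(scan).softmax(dim=1)

class ReferralNet(nn.Module):
    """Stage 2: tissue map -> referral recommendation."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(NUM_TISSUE_CLASSES, 32, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, NUM_REFERRAL_CLASSES)
    def forward(self, tissue_map):
        return self.head(self.features(tissue_map))

# Because stage 2 sees only the segmentation map, a new scanner type only
# requires adapting stage 1; the trained classifier is reused unchanged,
# which is the device-independence property the abstract describes.
scan = torch.randn(1, 1, 32, 64, 64)   # (batch, channel, depth, H, W)
tissue_map = SegmentationNet()(scan)
referral_logits = ReferralNet()(tissue_map)
```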

1,665 citations

Journal ArticleDOI
01 Jan 2020 - Nature
TL;DR: A robust assessment of the AI system paves the way for clinical trials to improve the accuracy and efficiency of breast cancer screening and using a combination of AI and human inputs could help to improve screening efficiency.
Abstract: Screening mammography aims to identify breast cancer at earlier stages of the disease, when treatment can be more successful [1]. Despite the existence of screening programmes worldwide, the interpretation of mammograms is affected by high rates of false positives and false negatives [2]. Here we present an artificial intelligence (AI) system that is capable of surpassing human experts in breast cancer prediction. To assess its performance in the clinical setting, we curated a large representative dataset from the UK and a large enriched dataset from the USA. We show an absolute reduction of 5.7% and 1.2% (USA and UK) in false positives and 9.4% and 2.7% in false negatives. We provide evidence of the ability of the system to generalize from the UK to the USA. In an independent study of six radiologists, the AI system outperformed all of the human readers: the area under the receiver operating characteristic curve (AUC-ROC) for the AI system was greater than the AUC-ROC for the average radiologist by an absolute margin of 11.5%. We ran a simulation in which the AI system participated in the double-reading process that is used in the UK, and found that the AI system maintained non-inferior performance and reduced the workload of the second reader by 88%. This robust assessment of the AI system paves the way for clinical trials to improve the accuracy and efficiency of breast cancer screening. An artificial intelligence (AI) system performs as well as or better than radiologists at detecting breast cancer from mammograms, and using a combination of AI and human inputs could help to improve screening efficiency.
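
The double-reading simulation can be illustrated with a toy calculation in which the AI replaces the second human reader and human effort is spent only on disagreements. Everything below (the recall rates, the independence of the two reads, the escalation rule) is an invented simplification, not the study's protocol:

```python
import random

# Toy sketch of the double-reading simulation described in the abstract:
# the AI system stands in for the second human reader, and a case needs
# extra human effort only when the first reader and the AI disagree.
# The rates and the independence assumption are invented for illustration.

random.seed(0)
cases = [{"reader1": random.random() < 0.10,   # first reader recalls ~10%
          "ai":      random.random() < 0.10}   # AI recalls ~10%
         for _ in range(10_000)]

# Standard UK double reading: a human second reader reads every case.
baseline_second_reads = len(cases)

# AI-assisted pathway: human effort is needed only on disagreement.
disagreements = sum(c["reader1"] != c["ai"] for c in cases)

reduction = 1 - disagreements / baseline_second_reads
print(f"Second-reader workload reduction: {reduction:.1%}")  # ~82% here
```

With real reader behaviour the paper reports an 88% reduction; the synthetic rates above land in a broadly similar range only by construction.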

1,413 citations

Journal ArticleDOI
TL;DR: The safe and timely translation of AI research into clinically validated and appropriately regulated systems that can benefit everyone is challenging, and robust clinical evaluation, using metrics that are intuitive to clinicians and ideally go beyond measures of technical accuracy, is essential.
Abstract: Artificial intelligence (AI) research in healthcare is accelerating rapidly, with potential applications being demonstrated across various domains of medicine. However, there are currently limited examples of such techniques being successfully deployed into clinical practice. This article explores the main challenges and limitations of AI in healthcare, and considers the steps required to translate these potentially transformative technologies from research to clinical practice. Key challenges for the translation of AI systems in healthcare include those intrinsic to the science of machine learning, logistical difficulties in implementation, and consideration of the barriers to adoption as well as of the necessary sociocultural or pathway changes. Robust peer-reviewed clinical evaluation as part of randomised controlled trials should be viewed as the gold standard for evidence generation, but conducting these in practice may not always be appropriate or feasible. Performance metrics should aim to capture real clinical applicability and be understandable to intended users. Regulation that balances the pace of innovation with the potential for harm, alongside thoughtful post-market surveillance, is required to ensure that patients are not exposed to dangerous interventions nor deprived of access to beneficial innovations. Mechanisms to enable direct comparisons of AI systems must be developed, including the use of independent, local and representative test sets. Developers of AI algorithms must be vigilant to potential dangers, including dataset shift, accidental fitting of confounders, unintended discriminatory bias, the challenges of generalisation to new populations, and the unintended negative consequences of new algorithms on health outcomes. The safe and timely translation of AI research into clinically validated and appropriately regulated systems that can benefit everyone is challenging. Robust clinical evaluation, using metrics that are intuitive to clinicians and ideally go beyond measures of technical accuracy to include quality of care and patient outcomes, is essential. Further work is required (1) to identify themes of algorithmic bias and unfairness while developing mitigations to address these, (2) to reduce brittleness and improve generalisability, and (3) to develop methods for improved interpretability of machine learning predictions. If these goals can be achieved, the benefits for patients are likely to be transformational.

855 citations

Journal ArticleDOI
01 Aug 2019 - Nature
TL;DR: A deep learning approach that predicts the risk of acute kidney injury and provides confidence assessments and a list of the clinical features that are most salient to each prediction, alongside predicted future trajectories for clinically relevant blood tests are developed.
Abstract: The early prediction of deterioration could have an important role in supporting healthcare professionals, as an estimated 11% of deaths in hospital follow a failure to promptly recognize and treat deteriorating patients [1]. To achieve this goal requires predictions of patient risk that are continuously updated and accurate, and delivered at an individual level with sufficient context and enough time to act. Here we develop a deep learning approach for the continuous risk prediction of future deterioration in patients, building on recent work that models adverse events from electronic health records [2–17] and using acute kidney injury—a common and potentially life-threatening condition [18]—as an exemplar. Our model was developed on a large, longitudinal dataset of electronic health records that cover diverse clinical environments, comprising 703,782 adult patients across 172 inpatient and 1,062 outpatient sites. Our model predicts 55.8% of all inpatient episodes of acute kidney injury, and 90.2% of all acute kidney injuries that required subsequent administration of dialysis, with a lead time of up to 48 h and a ratio of 2 false alerts for every true alert. In addition to predicting future acute kidney injury, our model provides confidence assessments and a list of the clinical features that are most salient to each prediction, alongside predicted future trajectories for clinically relevant blood tests [9]. Although the recognition and prompt treatment of acute kidney injury is known to be challenging, our approach may offer opportunities for identifying patients at risk within a time window that enables early treatment. A deep learning approach that predicts the risk of acute kidney injury may help to identify patients at risk of health deterioration within a time window that enables early treatment.
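
The quoted alert ratio maps directly onto a precision figure: 2 false alerts per true alert is a positive predictive value of one third. A short worked example, using only numbers from the abstract:

```python
# Worked example of the operating point quoted in the abstract: 2 false
# alerts for every true alert means a positive predictive value (PPV) of
# 1 / (1 + 2). The sensitivity figures are taken from the abstract; only
# the scenario framing here is illustrative.
false_per_true = 2.0
ppv = 1.0 / (1.0 + false_per_true)

sens_all_aki = 0.558        # inpatient AKI episodes predicted in advance
sens_dialysis_aki = 0.902   # AKI episodes later requiring dialysis

print(f"PPV at the reported threshold:       {ppv:.1%}")   # 33.3%
print(f"Sensitivity, all inpatient AKI:      {sens_all_aki:.1%}")
print(f"Sensitivity, dialysis-requiring AKI: {sens_dialysis_aki:.1%}")
```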

617 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present MINDSPACE, a mnemonic that gathers together the nine most robust effects that influence our behaviour in mostly automatic (rather than deliberate) ways.

571 citations


Cited by
Journal ArticleDOI
Haidong Wang, Mohsen Naghavi, Christine Allen, Ryan M Barber and 841 more authors (293 institutions)
TL;DR: The Global Burden of Disease 2015 Study provides a comprehensive assessment of all-cause and cause-specific mortality for 249 causes in 195 countries and territories from 1980 to 2015, finding that several countries in sub-Saharan Africa had very large gains in life expectancy, rebounding from an era of exceedingly high loss of life due to HIV/AIDS.

4,804 citations

Journal Article
TL;DR: Prospect Theory led cognitive psychology in a new direction that began to uncover other human biases in thinking that are probably not learned but are part of our brain's wiring.
Abstract: In 1974 an article appeared in Science magazine with the dry-sounding title “Judgment Under Uncertainty: Heuristics and Biases” by a pair of psychologists who were not well known outside their discipline of decision theory. In it Amos Tversky and Daniel Kahneman introduced the world to Prospect Theory, which mapped out how humans actually behave when faced with decisions about gains and losses, in contrast to how economists assumed that people behave. Prospect Theory turned Economics on its head by demonstrating through a series of ingenious experiments that people are much more concerned with losses than they are with gains, and that framing a choice from one perspective or the other will result in decisions that are exactly the opposite of each other, even if the outcomes are monetarily the same. Prospect Theory led cognitive psychology in a new direction that began to uncover other human biases in thinking that are probably not learned but are part of our brain’s wiring.
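
The asymmetry the review describes is usually written as a piecewise value function that is concave for gains, convex for losses, and steeper for losses. A minimal sketch, using the median parameter estimates from Tversky and Kahneman's 1992 follow-up paper rather than any figure from the review above:

```python
# Prospect Theory's value function: losses loom larger than equal gains.
#   v(x) = x**a            for x >= 0  (gains)
#   v(x) = -lam * (-x)**b  for x <  0  (losses)
# a = b = 0.88 and lam = 2.25 are the median estimates from Tversky &
# Kahneman's 1992 cumulative prospect theory paper; the demo is illustrative.

def value(x, a=0.88, b=0.88, lam=2.25):
    return x ** a if x >= 0 else -lam * ((-x) ** b)

print(f"v(+100) = {value(100):+.1f}")    # about +57.5
print(f"v(-100) = {value(-100):+.1f}")   # about -129.4

# Framing matters because two descriptions of the same monetary outcome sit
# on different sides of this asymmetric curve: "keep 100 of your 200" is
# scored as a gain, "lose 100 of your 200" as a much more painful loss.
```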

4,351 citations

Journal Article
TL;DR: Thaler and Sunstein's book is best described as a general explanation of, and advocacy for, libertarian paternalism, a term the authors coined in earlier publications: an approach to how leaders, systems, organizations, and governments can nudge people to do the things the nudgers want and need done for the betterment of the nudgees, or of society.
Abstract: NUDGE: IMPROVING DECISIONS ABOUT HEALTH, WEALTH, AND HAPPINESS by Richard H. Thaler and Cass R. Sunstein. Penguin Books, 2009, 312 pp, ISBN 978-0-14-311526-7. This book is best described formally as a general explanation of and advocacy for libertarian paternalism, a term coined by the authors in earlier publications. Informally, it is about how leaders, systems, organizations, and governments can nudge people to do the things the nudgers want and need done for the betterment of the nudgees, or of society. It is paternalism in the sense that "it is legitimate for choice architects to try to influence people's behavior in order to make their lives longer, healthier, and better" (p. 5). It is libertarian in that "people should be free to do what they like - and to opt out of undesirable arrangements if they want to do so" (p. 5). The built-in possibility of opting out or making a different choice preserves freedom of choice even though people's behavior has been influenced by the nature of the presentation of the information or by the structure of the decision-making system. I had never heard of libertarian paternalism before reading this book, and I now find it fascinating. Written for a general audience, this book contains mostly social and behavioral science theory and models, but there is considerable discussion of structure and process that has roots in mathematical and quantitative modeling. One of the main applications of this social system is economic choice in investing, selecting and purchasing products and services, systems of taxes, banking (mortgages, borrowing, savings), and retirement systems. Other quantitative social choice systems discussed include environmental effects, health care plans, gambling, and organ donations. Softer issues that are also subject to a nudge-based approach are marriage, education, eating, drinking, smoking, influence, spread of information, and politics. There is something in this book for everyone. The basis for this libertarian paternalism concept is the social theory called the "science of choice", the study of the design and implementation of influence systems on various kinds of people. The terms Econs and Humans are used to refer to people with either considerable or little rational decision-making talent, respectively. The various libertarian paternalism concepts and systems presented are tested and compared in light of these two types of people. Two foundational issues that this book has in common with another book reviewed for this issue of the Journal, Network of Echoes: Imitation, Innovation and Invisible Leaders, are that (1) there are two modes of thinking (or components of the brain), an automatic (intuitive) process and a reflective (rational) process, and (2) the need for conformity and the desire for imitation are powerful forces in human behavior. …

3,435 citations

Journal ArticleDOI
Eric J. Topol
TL;DR: Over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but whether that will be used to improve the patient–doctor relationship or facilitate its erosion remains to be seen.
Abstract: The use of artificial intelligence, and the deep-learning subtype in particular, has been enabled by the use of labeled big data, along with markedly enhanced computing power and cloud storage, across all sectors. In medicine, this is beginning to have an impact at three levels: for clinicians, predominantly via rapid, accurate image interpretation; for health systems, by improving workflow and the potential for reducing medical errors; and for patients, by enabling them to process their own data to promote health. The current limitations, including bias, privacy and security, and lack of transparency, along with the future directions of these applications will be discussed in this article. Over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but whether that will be used to improve the patient-doctor relationship or facilitate its erosion remains to be seen.

2,574 citations

Journal ArticleDOI
TL;DR: nnU-Net is a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training, and post-processing, for any new task.
Abstract: Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.
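
The abstract's formulation of "fixed parameters, interdependent rules and empirical decisions" can be made concrete with a toy rule system that turns a dataset fingerprint into a training configuration with no manual tuning. The rules and constants below are invented for illustration and are not nnU-Net's actual heuristics (see https://github.com/MIC-DKFZ/nnUNet for those):

```python
# Toy illustration of the self-configuring idea described in the abstract:
# a dataset "fingerprint" is passed through interdependent rules to produce
# a training configuration. All rules and numbers here are invented for
# illustration; they are NOT nnU-Net's real heuristics.

def configure(fingerprint, gpu_mem_gb=8):
    median_shape = fingerprint["median_shape"]    # e.g. (64, 256, 256)
    spacing = fingerprint["median_spacing"]       # e.g. (3.0, 0.7, 0.7)

    # Rule: strongly anisotropic voxels suggest a slice-wise 2D U-Net.
    arch = "2d" if max(spacing) / min(spacing) > 3 else "3d_fullres"

    # Rule: patch size is capped by the image size (and, implicitly, memory).
    patch = tuple(min(s, 128) for s in median_shape)

    # Rule: spend leftover GPU memory on batch size (crude proxy formula).
    voxels = patch[0] * patch[1] * patch[2]
    batch = max(2, int(gpu_mem_gb * 1e8 // (voxels * 50)))

    return {"architecture": arch, "patch_size": patch, "batch_size": batch}

print(configure({"median_shape": (64, 256, 256),
                 "median_spacing": (3.0, 0.7, 0.7)}))
# -> {'architecture': '2d', 'patch_size': (64, 128, 128), 'batch_size': 15}
```

The point of the sketch is the dependency structure: each decision feeds the next, so a single fingerprint deterministically yields a full configuration, which is what lets such a method run out of the box on a new dataset.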

2,040 citations