
Showing papers by "Philips published in 2019"


Journal ArticleDOI
TL;DR: A review of recent advances in medical imaging using the adversarial training scheme with the hope of benefiting researchers interested in this technique.

1,053 citations


Journal ArticleDOI
TL;DR: A powerful network architecture, the ResNet-50, is investigated in detail for chest X-ray classification, considering transfer learning with and without fine-tuning as well as the training of a dedicated X-ray network from scratch.
Abstract: The increased availability of labeled X-ray image archives (e.g. ChestX-ray14 dataset) has triggered a growing interest in deep learning techniques. To provide better insight into the different approaches, and their applications to chest X-ray classification, we investigate a powerful network architecture in detail: the ResNet-50. Building on prior work in this domain, we consider transfer learning with and without fine-tuning as well as the training of a dedicated X-ray network from scratch. To leverage the high spatial resolution of X-ray data, we also include an extended ResNet-50 architecture, and a network integrating non-image data (patient age, gender and acquisition type) in the classification process. In a concluding experiment, we also investigate multiple ResNet depths (i.e. ResNet-38 and ResNet-101). In a systematic evaluation, using 5-fold re-sampling and a multi-label loss function, we compare the performance of the different approaches for pathology classification by ROC statistics and analyze differences between the classifiers using rank correlation. Overall, we observe a considerable spread in the achieved performance and conclude that the X-ray-specific ResNet-38, integrating non-image data, yields the best overall results. Furthermore, class activation maps are used to understand the classification process, and a detailed analysis of the impact of non-image features is provided.
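The late fusion of image and non-image data described above can be sketched as follows. All dimensions, feature values, and weights below are illustrative stand-ins (the paper's network is a trained ResNet variant); the sketch only shows the concatenate-then-classify idea with one independent sigmoid per pathology label:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 2048-d pooled CNN features, 3 non-image
# inputs (age, gender, acquisition type), 14 ChestX-ray14 pathology labels.
img_feat = rng.standard_normal((4, 2048))     # batch of pooled image features
meta = rng.standard_normal((4, 3))            # standardized non-image data
x = np.concatenate([img_feat, meta], axis=1)  # late fusion by concatenation

W = rng.standard_normal((2051, 14)) * 0.01    # toy classifier weights
b = np.zeros(14)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Multi-label output: one independent sigmoid per pathology (not a softmax),
# matching the multi-label loss mentioned in the abstract.
probs = sigmoid(x @ W + b)

# Per-label binary cross-entropy against toy multi-hot targets.
y = (rng.random((4, 14)) < 0.2).astype(float)
bce = -np.mean(y * np.log(probs + 1e-9) + (1 - y) * np.log(1 - probs + 1e-9))
```

In practice the image branch would be the pretrained ResNet backbone and the whole model would be trained end to end; only the fusion and output layout are illustrated here.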

289 citations


Journal ArticleDOI
TL;DR: The neglected issue of suboptimal validation of tissue tracking techniques is addressed in this review, in order to advocate for this matter.
Abstract: Myocardial tissue tracking imaging techniques have been developed for a more accurate evaluation of myocardial deformation (i.e. strain), with the potential to overcome the limitations of ejection fraction (EF) and to contribute, incremental to EF, to the diagnosis and prognosis in cardiac diseases. While most of the deformation imaging techniques are based on the similar principles of detecting and tracking specific patterns within an image, there are intra- and inter-imaging modality inconsistencies limiting the wide clinical applicability of strain. In this review, we aimed to describe the particularities of the echocardiographic and cardiac magnetic resonance deformation techniques, in order to understand the discrepancies in strain measurement, focusing on the potential sources of variation: related to the software used to analyse the data, to the different physics of image acquisition and the different principles of 2D vs. 3D approaches. As strain measurements are not interchangeable, it is highly desirable to work with validated strain assessment tools, in order to derive information from evidence-based data. There is, however, a lack of solid validation of the current tissue tracking techniques, as only a few of the commercial deformation imaging softwares have been properly investigated. We have, therefore, addressed in this review the neglected issue of suboptimal validation of tissue tracking techniques, in order to advocate for this matter.

278 citations


Journal ArticleDOI
TL;DR: A consensus is presented on deficiencies in widely available MRS methodology and validated improvements that are currently in routine use at several clinical research institutions, and use of the semi‐adiabatic localization by adiabatic selective refocusing sequence is a recommended solution.
Abstract: Proton MRS (1 H MRS) provides noninvasive, quantitative metabolite profiles of tissue and has been shown to aid the clinical management of several brain diseases. Although most modern clinical MR scanners support MRS capabilities, routine use is largely restricted to specialized centers with good access to MR research support. Widespread adoption has been slow for several reasons, and technical challenges toward obtaining reliable good-quality results have been identified as a contributing factor. Considerable progress has been made by the research community to address many of these challenges, and in this paper a consensus is presented on deficiencies in widely available MRS methodology and validated improvements that are currently in routine use at several clinical research institutions. In particular, the localization error for the PRESS localization sequence was found to be unacceptably high at 3 T, and use of the semi-adiabatic localization by adiabatic selective refocusing sequence is a recommended solution. Incorporation of simulated metabolite basis sets into analysis routines is recommended for reliably capturing the full spectral detail available from short TE acquisitions. In addition, the importance of achieving a highly homogenous static magnetic field (B0 ) in the acquisition region is emphasized, and the limitations of current methods and hardware are discussed. Most recommendations require only software improvements, greatly enhancing the capabilities of clinical MRS on existing hardware. Implementation of these recommendations should strengthen current clinical applications and advance progress toward developing and validating new MRS biomarkers for clinical use.

237 citations


Journal ArticleDOI
TL;DR: Both the opportunities and challenges posed to biomedical research by the increasing ability to tackle large datasets are discussed, including the need for standardization of data content, format, and clinical definitions.
Abstract: For over a decade the term "Big data" has been used to describe the rapid increase in volume, variety and velocity of information available, not just in medical research but in almost every aspect of our lives. As scientists, we now have the capacity to rapidly generate, store and analyse data that, only a few years ago, would have taken many years to compile. However, "Big data" no longer means what it once did. The term has expanded and now refers not to just large data volume, but to our increasing ability to analyse and interpret those data. Tautologies such as "data analytics" and "data science" have emerged to describe approaches to the volume of available information as it grows ever larger. New methods dedicated to improving data collection, storage, cleaning, processing and interpretation continue to be developed, although not always by, or for, medical researchers. Exploiting new tools to extract meaning from large volume information has the potential to drive real change in clinical practice, from personalized therapy and intelligent drug design to population screening and electronic health record mining. As ever, where new technology promises "Big Advances," significant challenges remain. Here we discuss both the opportunities and challenges posed to biomedical research by our increasing ability to tackle large datasets. Important challenges include the need for standardization of data content, format, and clinical definitions, a heightened need for collaborative networks with sharing of both data and expertise and, perhaps most importantly, a need to reconsider how and when analytic methodology is taught to medical researchers. We also set "Big data" analytics in context: recent advances may appear to promise a revolution, sweeping away conventional approaches to medical science. However, their real promise lies in their synergy with, not replacement of, classical hypothesis-driven methods. 
The generation of novel, data-driven hypotheses based on interpretable models will always require stringent validation and experimental testing. Thus, hypothesis-generating research founded on large datasets adds to, rather than replaces, traditional hypothesis driven science. Each can benefit from the other and it is through using both that we can improve clinical practice.

211 citations


Journal ArticleDOI
TL;DR: The main objective of this scientific expert panel consensus document is to provide recommendations for CMR endpoint selection in experimental and clinical trials based on pathophysiology and its association with hard outcomes.

200 citations


Journal ArticleDOI
TL;DR: The different approaches to deep learning in pathology, the public grand challenges that have driven this innovation and a range of emerging applications in pathology are reviewed.
Abstract: There has been an exponential growth in the application of AI in health and in pathology. This is resulting in the innovation of deep learning technologies that are specifically aimed at cellular imaging and practical applications that could transform diagnostic pathology. This paper reviews the different approaches to deep learning in pathology, the public grand challenges that have driven this innovation and a range of emerging applications in pathology. The translation of AI into clinical practice will require applications to be embedded seamlessly within digital pathology workflows, driving an integrated approach to diagnostics and providing pathologists with new tools that accelerate workflow and improve diagnostic consistency and reduce errors. The clearance of digital pathology for primary diagnosis in the US by some manufacturers provides the platform on which to deliver practical AI. AI and computational pathology will continue to mature as researchers, clinicians, industry, regulatory organizations and patient advocacy groups work together to innovate and deliver new technologies to health care providers: technologies which are better, faster, cheaper, more precise, and safe.

153 citations


Journal ArticleDOI
TL;DR: A glucose-binding compound has been prepared that, despite its symmetry and simplicity, can match all but the strongest glucose-binding proteins, and the high binding affinity and outstanding selectivity of this receptor may translate into biomedical applications.
Abstract: Specific molecular recognition is routine for biology, but has proved difficult to achieve in synthetic systems. Carbohydrate substrates are especially challenging, because of their diversity and similarity to water, the biological solvent. Here we report a synthetic receptor for glucose, which is biomimetic in both design and capabilities. The core structure is simple and symmetrical, yet provides a cavity which almost perfectly complements the all-equatorial β-pyranoside substrate. The receptor’s affinity for glucose, at Ka ~ 18,000 M−1, compares well with natural receptor systems. Selectivities also reach biological levels. Most other saccharides are bound approximately 100 times more weakly, while non-carbohydrate substrates are ignored. Glucose-binding molecules are required for initiatives in diabetes treatment, such as continuous glucose monitoring and glucose-responsive insulin. The performance and tunability of this system augur well for such applications. Synthetic receptors can be used to help understand biological systems, but rarely compete in terms of affinity or selectivity. Now, a glucose-binding compound has been prepared that, despite its symmetry and simplicity, can match all but the strongest glucose-binding proteins. The high binding affinity and outstanding selectivity of this receptor may translate into biomedical applications.
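The reported affinity can be put in context with a simple equilibrium-occupancy calculation. The glucose concentration used below is an assumed typical blood glucose level, not a figure from the paper:

```python
# Fraction of receptor occupied at equilibrium: theta = Ka*C / (1 + Ka*C)
Ka_glucose = 18_000.0         # M^-1, affinity reported in the abstract
Ka_other = Ka_glucose / 100   # other saccharides bound ~100x more weakly

def occupancy(Ka, conc_M):
    """Single-site binding isotherm (Langmuir form)."""
    return Ka * conc_M / (1.0 + Ka * conc_M)

c = 5e-3  # ~5 mM, an assumed typical blood glucose concentration
theta_glc = occupancy(Ka_glucose, c)
theta_oth = occupancy(Ka_other, c)
```

At roughly 5 mM glucose the receptor would be nearly saturated (about 99% occupied), while a 100-fold weaker binder at the same concentration is only about half occupied, which illustrates the selectivity margin the abstract describes.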

149 citations


Journal ArticleDOI
TL;DR: A human-sized MPI device with low technical requirements designed for detection of brain ischemia is presented, which opens up a variety of medical applications and would allow monitoring of stroke on intensive care units.
Abstract: Determining the brain perfusion is an important task for diagnosis of vascular diseases such as occlusions and intracerebral haemorrhage. Even after successful diagnosis, there is a high risk of restenosis or rebleeding such that patients need intense attention in the days after treatment. Within this work, we present a diagnostic tomographic imager that allows access to brain perfusion quantitatively in short intervals. The device is based on the magnetic particle imaging technology and is designed for human scale. It is highly sensitive and allows the detection of an iron concentration of 263 pmolFe ml−1, which is one of the lowest iron concentrations imaged by MPI so far. The imager is self-shielded and can be used in unshielded environments such as intensive care units. In combination with the low technical requirements this opens up a variety of medical applications and would allow monitoring of stroke on intensive care units. Magnetic particle imaging (MPI) has been applied to various pre-clinical settings, including detection of ischemic stroke in mice. Translation of MPI to a clinical setting has been obstacled by the lack of a device with sufficient bore size and, at the same time, reasonable technical requirements. Here the authors present a human-sized MPI device with low technical requirements designed for detection of brain ischemia.

149 citations


Journal ArticleDOI
TL;DR: FLAIR delivered the most robust substrate for radiomic analyses, demonstrating excellent intraobserver and interobserver reproducibility (intraclass correlation coefficient ≥ 0.75); care must be taken in the interpretation of clinical studies using nonrobust features.
Abstract: Objectives: The aim of this study was to investigate the robustness and reproducibility of radiomic features in different magnetic resonance imaging sequences. Materials and Methods: A phantom was scanned on a clinical 3 T system using fluid-attenuated inversion recovery (FLAIR), T1-weighted (T1w), and T
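The reproducibility criterion quoted above (intraclass correlation coefficient ≥ 0.75) can be illustrated with a small sketch. The formula below is the standard ICC(2,1) (two-way random effects, absolute agreement, single rater), which may differ from the exact ICC variant used in the study, and the scores are invented:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    x is an (n_subjects, k_raters) array of measurements.
    """
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Two hypothetical observers scoring the same radiomic feature on 6 scans.
scores = np.array([[1.0, 1.1], [2.0, 1.9], [3.0, 3.2],
                   [4.0, 3.9], [5.0, 5.1], [6.0, 6.0]])
icc = icc_2_1(scores)  # high agreement, so the ICC is close to 1
```

A feature would count as robust under the quoted threshold whenever this value is at least 0.75.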

147 citations


Journal ArticleDOI
TL;DR: T2 mapping during treatment identifies intracardiomyocyte edema as the earliest marker of anthracycline-induced cardiotoxicity, appearing before abnormalities in T1 mapping, ECV, or LV motion and demonstrating that early T2 prolongation occurs at a reversible disease stage.

Journal ArticleDOI
TL;DR: Cutting-edge deep learning methods for information extraction from medical imaging free-text reports are investigated at a multi-institutional scale and compared with a state-of-the-art domain-specific rule-based system (PEFinder) and traditional machine learning methods (SVM and AdaBoost); the results suggest that neural network models are feasible for broader use in automated classification of multi-institutional imaging text reports.

Journal ArticleDOI
TL;DR: Attitudes towards immersive virtual reality changed from neutral to positive after a first exposure to immersive virtual reality, but not after exposure to time-lapse videos, implying that the contribution of VR applications to health in older adults will neither be hindered by negative attitudes nor by cybersickness.
Abstract: Immersive virtual reality has become increasingly popular to improve the assessment and treatment of health problems. This rising popularity is likely to be facilitated by the availability of affordable headsets that deliver high quality immersive experiences. As many health problems are more prevalent in older adults, who are less technology experienced, it is important to know whether they are willing to use immersive virtual reality. In this study, we assessed the initial attitude towards head-mounted immersive virtual reality in 76 older adults who had never used virtual reality before. Furthermore, we assessed changes in attitude as well as self-reported cybersickness after a first exposure to immersive virtual reality relative to exposure to time-lapse videos. Attitudes towards immersive virtual reality changed from neutral to positive after a first exposure to immersive virtual reality, but not after exposure to time-lapse videos. Moreover, self-reported cybersickness was minimal and had no association with exposure to immersive virtual reality. These results imply that the contribution of VR applications to health in older adults will neither be hindered by negative attitudes nor by cybersickness.

Journal ArticleDOI
01 Apr 2019-Spine
TL;DR: ARSN can be clinically used to place thoracic and lumbosacral pedicle screws with high accuracy and with acceptable navigation time, and the risk for revision surgery and complications could be minimized.
Abstract: Study design Prospective observational study. Objective The aim of this study was to evaluate the accuracy of pedicle screw placement using augmented reality surgical navigation (ARSN) in a clinical trial. Summary of background data Recent cadaveric studies have shown improved accuracy for pedicle screw placement in the thoracic spine using ARSN with intraoperative 3D imaging, without the need for periprocedural x-ray. In this clinical study, we used the same system to place pedicle screws in the thoracic and lumbosacral spine of 20 patients. Methods The study was performed in a hybrid operating room with an integrated ARSN system encompassing a surgical table, a motorized flat detector C-arm with intraoperative 2D/3D capabilities, integrated optical cameras for augmented reality navigation, and noninvasive patient motion tracking. Three independent reviewers assessed screw placement accuracy using the Gertzbein grading on 3D scans obtained before wound closure. In addition, the navigation time per screw placement was measured. Results One orthopedic spinal surgeon placed 253 lumbosacral and thoracic pedicle screws on 20 consenting patients scheduled for spinal fixation surgery. An overall accuracy of 94.1% of primarily thoracic pedicle screws was achieved. No screws were deemed severely misplaced (Gertzbein grade 3). Fifteen (5.9%) screws had 2 to 4 mm breach (Gertzbein grade 2), occurring in scoliosis patients only. Thirteen of those 15 screws were larger than the pedicle in which they were placed. Two medial breaches were observed and 13 were lateral. Thirteen of the grade 2 breaches were in the thoracic spine. The average screw placement time was 5.2 ± 4.1 minutes. During the study, no device-related adverse event occurred. Conclusion ARSN can be clinically used to place thoracic and lumbosacral pedicle screws with high accuracy and with acceptable navigation time. Consequently, the risk for revision surgery and complications could be minimized. 
Level of evidence 3.

Journal ArticleDOI
TL;DR: The DLM yielded accurate automated detection and segmentation of meningioma tissue despite diverse scanner data and thereby may improve and facilitate therapy planning as well as monitoring of this highly frequent tumour entity.
Abstract: Magnetic resonance imaging (MRI) is the method of choice for imaging meningiomas. Volumetric assessment of meningiomas is highly relevant for therapy planning and monitoring. We used a multiparametric deep-learning model (DLM) on routine MRI data including images from diverse referring institutions to investigate DLM performance in automated detection and segmentation of meningiomas in comparison to manual segmentations. We included 56 of 136 consecutive preoperative MRI datasets [T1/T2-weighted, T1-weighted contrast-enhanced (T1CE), FLAIR] of meningiomas that were treated surgically at the University Hospital Cologne and graded histologically as tumour grade I (n = 38) or grade II (n = 18). The DLM was trained on an independent dataset of 249 glioma cases and segmented different tumour classes as defined in the brain tumour image segmentation benchmark (BRATS benchmark). The DLM was based on the DeepMedic architecture. Results were compared to manual segmentations by two radiologists in a consensus reading in FLAIR and T1CE. The DLM detected meningiomas in 55 of 56 cases. Further, automated segmentations correlated strongly with manual segmentations: average Dice coefficients were 0.81 ± 0.10 (range, 0.46-0.93) for the total tumour volume (union of tumour volume in FLAIR and T1CE) and 0.78 ± 0.19 (range, 0.27-0.95) for contrast-enhancing tumour volume in T1CE. The DLM yielded accurate automated detection and segmentation of meningioma tissue despite diverse scanner data and thereby may improve and facilitate therapy planning as well as monitoring of this highly frequent tumour entity. • Deep learning allows for accurate meningioma detection and segmentation • Deep learning helps clinicians to assess patients with meningiomas • Meningioma monitoring and treatment planning can be improved
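The Dice coefficients reported above quantify the overlap between automated and manual masks. A minimal implementation of the metric on binary masks looks like this (toy arrays, not study data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # Convention: two empty masks are considered a perfect match.
    return 2.0 * inter / denom if denom else 1.0

# Toy example: 1 overlapping voxel out of 2 per mask -> Dice = 0.5
auto = np.array([1, 1, 0, 0])
manual = np.array([1, 0, 1, 0])
score = dice(auto, manual)
```

Identical masks give 1.0 and disjoint masks give 0.0, which frames the study's reported averages of 0.81 (total tumour volume) and 0.78 (contrast-enhancing volume).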

Journal ArticleDOI
TL;DR: This paper proposes the use of a deep learning model that takes specific image features into account in the loss function to denoise low-dose PET image slices and estimate their full-dose image quality equivalent.
Abstract: Positron emission tomography (PET) imaging is an effective tool used in determining disease stage and lesion malignancy; however, radiation exposure to patients and technicians during PET scans continues to draw concern. One way to minimize radiation exposure is to reduce the dose of radioactive tracer administered in order to obtain the scan. Yet, low-dose images are inherently noisy and have poor image quality making them difficult to read. This paper proposes the use of a deep learning model that takes specific image features into account in the loss function to denoise low-dose PET image slices and estimate their full-dose image quality equivalent. Testing on low-dose image slices indicates a significant improvement in image quality that is comparable to the ground truth full-dose image slices. Additionally, this approach can lower the cost of conducting a PET scan since less radioactive material is required per scan, which may promote the usage of PET scans for medical diagnosis.
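The idea of a loss that "takes specific image features into account" can be sketched by adding a feature term to the usual pixel-wise MSE. The finite-difference gradient term below is an illustrative hand-crafted stand-in for whatever image features the paper's model actually uses, and the weighting is arbitrary:

```python
import numpy as np

def feature_aware_loss(pred, target, w_feat=0.5):
    """Pixel MSE plus a feature-matching term on image gradients (sketch)."""
    # Pixel fidelity term.
    mse = np.mean((pred - target) ** 2)
    # "Feature" term: finite-difference gradients, a simple proxy for
    # edge/texture features that plain MSE tends to over-smooth.
    gx = lambda im: np.diff(im, axis=1)
    gy = lambda im: np.diff(im, axis=0)
    feat = (np.mean((gx(pred) - gx(target)) ** 2)
            + np.mean((gy(pred) - gy(target)) ** 2))
    return mse + w_feat * feat

target = np.zeros((8, 8))
pred = target.copy()
pred[4, 4] = 1.0  # a spurious bright voxel in the denoised output
loss = feature_aware_loss(pred, target)
```

Because the gradient term penalizes edge mismatches on top of intensity mismatches, the combined loss is strictly larger than plain MSE whenever edges disagree, which is the property such feature-aware losses exploit.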

Journal ArticleDOI
TL;DR: The main aim of the revised guideline is to summarize current evidence and also expert based- knowledge on the topic of "prolonged weaning" and, based on the evidence and the experience of experts, make recommendations with regard to "Prolonged Weaning" not only in the field of acute medicine but also for chronic critical care.
Abstract: Mechanical ventilation (MV) is an essential part of modern intensive care medicine. MV is performed in patients with severe respiratory failure caused by insufficiency of respiratory muscles and/or lung parenchymal disease when/after other treatments (i.e. medication, oxygen, secretion management, continuous positive airway pressure or nasal high-flow) have failed. MV is required to maintain gas exchange and to buy time for curative therapy of the underlying cause of respiratory failure. In the majority of patients weaning from MV is routine and causes no special problems. However, about 20 % of patients need ongoing MV despite resolution of the conditions which precipitated the need for MV. Approximately 40 - 50 % of time spent on MV is required to liberate the patient from the ventilator, a process called "weaning". There are numerous factors besides the acute respiratory failure that have an impact on duration and success rate of the weaning process, such as age, comorbidities, and conditions and complications acquired in the ICU. According to an international consensus conference, "prolonged weaning" is defined as the weaning process of patients who have failed at least three weaning attempts or require more than 7 days of weaning after the first spontaneous breathing trial (SBT). Prolonged weaning is a challenge; therefore, an inter- and multidisciplinary approach is essential for weaning success. In specialised weaning centers about 50 % of patients with initial weaning failure can be liberated from MV after prolonged weaning. However, heterogeneity of patients with prolonged weaning precludes direct comparisons of individual centers.
Patients with persistent weaning failure either die during the weaning process or are discharged home or to a long-term care facility with ongoing MV. Urged by the growing importance of prolonged weaning, this S2k guideline was first published in 2014 on the initiative of the German Respiratory Society (DGP) together with other scientific societies involved in prolonged weaning. Current research and study results, registry data and experience in daily practice made the revision of this guideline necessary. The following topics are dealt with in the guideline: definitions, epidemiology, weaning categories, the underlying pathophysiology, prevention of prolonged weaning, treatment strategies in prolonged weaning, the weaning unit, discharge from hospital on MV, and recommendations for end-of-life decisions. Special emphasis in the revision of the guideline was laid on the following topics:
- A new classification of subgroups of patients in prolonged weaning
- Important aspects of pneumological rehabilitation and neurorehabilitation in prolonged weaning
- Infrastructure and process organization in the care of patients in prolonged weaning in the sense of a continuous treatment concept
- Therapeutic goal change and communication with relatives
Aspects of pediatric weaning are given separately within the individual chapters. The main aim of the revised guideline is to summarize current evidence as well as expert-based knowledge on the topic of "prolonged weaning" and, based on the evidence and the experience of experts, to make recommendations with regard to "prolonged weaning" not only in the field of acute medicine but also in chronic critical care. Important addressees of this guideline are intensivists, pneumologists, anesthesiologists, internists, cardiologists, surgeons, neurologists, pediatricians, geriatricians, palliative care clinicians, rehabilitation physicians, nurses in intensive and chronic care, physiotherapists, respiratory therapists, speech therapists, the medical service of health insurance, and associated ventilator manufacturers.

Journal ArticleDOI
TL;DR: The importance of certified pathologists is stressed: having learned from the experience of previous revolutions, they should be willing to accept such disruptive technologies, ready to innovate, actively engage in the creation, application and validation of technologies, and oversee the safe introduction of AI into diagnostic practice.
Abstract: Histopathology has undergone major changes, firstly with the introduction of immunohistochemistry and latterly with genomic medicine. We argue that a third revolution is underway: Artificial Intelligence (AI). Coming on the back of Digital Pathology (DP), the introduction of AI has the potential to both challenge traditional practice and provide a totally new realm for pathology diagnostics. Hereby we stress the importance of certified pathologists, who have learned from the experience of previous revolutions, being willing to accept such disruptive technologies, ready to innovate and actively engage in the creation, application and validation of technologies, and to oversee the safe introduction of AI into diagnostic practice.

Journal ArticleDOI
TL;DR: A similarity metric that is learned using a deep neural network can be used to assess the quality of any given image registration and can be used in conjunction with the aforementioned optimization framework to perform automatic registration that is robust to poor initialization.
Abstract: The fusion of transrectal ultrasound (TRUS) and magnetic resonance (MR) images for guiding targeted prostate biopsy has significantly improved the biopsy yield of aggressive cancers. A key component of MR–TRUS fusion is image registration. However, it is very challenging to obtain a robust automatic MR–TRUS registration due to the large appearance difference between the two imaging modalities. The work presented in this paper aims to tackle this problem by addressing two challenges: (i) the definition of a suitable similarity metric and (ii) the determination of a suitable optimization strategy. This work proposes the use of a deep convolutional neural network to learn a similarity metric for MR–TRUS registration. We also use a composite optimization strategy that explores the solution space in order to search for a suitable initialization for the second-order optimization of the learned metric. Further, a multi-pass approach is used in order to smooth the metric for optimization. The learned similarity metric outperforms the classical mutual information and also the state-of-the-art MIND feature-based methods. The results indicate that the overall registration framework has a large capture range. The proposed deep similarity metric-based approach obtained a mean TRE of 3.86 mm (with an initial TRE of 16 mm) for this challenging problem. A similarity metric that is learned using a deep neural network can be used to assess the quality of any given image registration and can be used in conjunction with the aforementioned optimization framework to perform automatic registration that is robust to poor initialization.
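The core idea of a learned similarity metric, a network that maps an MR/TRUS patch pair to a scalar alignment score, can be sketched with a toy MLP. The weights here are random and the patch size is invented; in the paper the metric is a deep CNN trained on registered and misregistered pairs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the learned metric: a tiny MLP scoring how well a
# 2-channel (MR, TRUS) patch pair is aligned. Untrained random weights;
# only the input/output structure of the approach is illustrated.
W1 = rng.standard_normal((2 * 16 * 16, 32)) * 0.05
W2 = rng.standard_normal((32, 1)) * 0.05

def similarity(mr_patch, trus_patch):
    """Map a pair of 16x16 patches to an alignment score in (0, 1)."""
    x = np.concatenate([mr_patch.ravel(), trus_patch.ravel()])
    h = np.maximum(x @ W1, 0.0)                        # ReLU hidden layer
    return float(1.0 / (1.0 + np.exp(-(h @ W2)[0])))   # sigmoid score

mr = rng.standard_normal((16, 16))
trus = rng.standard_normal((16, 16))
score = similarity(mr, trus)
```

A registration optimizer would then search over transform parameters to maximize this score, which is where the paper's composite initialization-then-second-order strategy comes in.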

Journal ArticleDOI
TL;DR: A long short-term memory (LSTM) network is proposed as a solution to model long-term cardiac sleep architecture information and validated on a comprehensive data set, demonstrating the merit of deep temporal modelling using a diverse data set and advancing the state-of-the-art for HRV-based sleep stage classification.
Abstract: Automated sleep stage classification using heart rate variability (HRV) may provide an ergonomic and low-cost alternative to gold standard polysomnography, creating possibilities for unobtrusive home-based sleep monitoring. Current methods however are limited in their ability to take into account long-term sleep architectural patterns. A long short-term memory (LSTM) network is proposed as a solution to model long-term cardiac sleep architecture information and validated on a comprehensive data set (292 participants, 584 nights, 541,214 annotated 30 s sleep segments) comprising a wide range of ages and pathological profiles, annotated according to the Rechtschaffen and Kales (R&K) annotation standard. It is shown that the model outperforms state-of-the-art approaches which were often limited to non-temporal or short-term recurrent classifiers. The model achieves a Cohen's kappa of 0.61 ± 0.15 and accuracy of 77.00 ± 8.90% across the entire database. Further analysis revealed that the performance for individuals aged 50 years and older may decline. These results demonstrate the merit of deep temporal modelling using a diverse data set and advance the state-of-the-art for HRV-based sleep stage classification. Further research is warranted into individuals over the age of 50, as performance tends to worsen in this sub-population.
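The long-term temporal modelling can be illustrated with a minimal numpy LSTM cell unrolled over a night of 30 s epochs. The feature and hidden dimensions are invented, the inputs are random stand-ins for per-epoch HRV features, and a real model would be trained and followed by a per-epoch sleep-stage classification layer:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 8, 16  # e.g. 8 HRV features per 30 s epoch (assumed)

# One parameter matrix covering the four gates (input, forget, cell, output).
W = rng.standard_normal((n_in + n_hid, 4 * n_hid)) * 0.1
b = np.zeros(4 * n_hid)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    """One LSTM cell update for a single 30 s epoch's feature vector."""
    z = np.concatenate([x, h]) @ W + b
    i, f, g, o = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

# Unroll over ~8 hours of 30 s epochs; the hidden state carries sleep
# architecture context across the whole night.
h = np.zeros(n_hid)
c = np.zeros(n_hid)
for epoch_feats in rng.standard_normal((960, n_in)):
    h, c = lstm_step(epoch_feats, h, c)
```

This carried-forward state is exactly what distinguishes the approach from the non-temporal or short-term recurrent classifiers it is compared against.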

Journal ArticleDOI
TL;DR: This study shows that telomere shortening in livers of telomerase knockout mice leads to a p53-dependent repression of all seven sirtuins, establishing sirtuins as downstream targets of dysfunctional telomeres and suggesting that increasing Sirt1 activity alone or in combination with other sirtuins stabilizes telomeres and mitigates telomere-dependent disorders.

Journal ArticleDOI
21 Jan 2019
TL;DR: The ligation of the intersphincteric fistula tract (LIFT) procedure preserves anal sphincter function and is an alternative to the endorectal advancement flap (AF) in patients with cryptoglandular and Crohn's perianal fistulas.
Abstract: Background High perianal fistulas require sphincter-preserving surgery because of the risk of faecal incontinence. The ligation of the intersphincteric fistula tract (LIFT) procedure preserves anal sphincter function and is an alternative to the endorectal advancement flap (AF). The aim of this study was to evaluate outcomes of these procedures in patients with cryptoglandular and Crohn's perianal fistulas. Methods A systematic literature search was performed using MEDLINE, Embase and the Cochrane Library. All RCTs, cohort studies and case series (more than 5 patients) describing one or both techniques were included. Main outcomes were overall success rate, recurrence and incontinence following either technique. A proportional meta-analysis was performed using a random-effects model. Results Some 30 studies comprising 1295 patients were included (AF, 797; LIFT, 498). For cryptoglandular fistula (1098 patients), there was no significant difference between AF and LIFT for weighted overall success (74·6 (95 per cent c.i. 65·6 to 83·7) versus 69·1 (53·9 to 84·3) per cent respectively) and recurrence (25·6 (4·7 to 46·4) versus 21·9 (14·8 to 29·0) per cent) rates. For Crohn's perianal fistula (64 patients), no significant differences were observed between AF and LIFT for overall success rate (61 (45 to 76) versus 53 per cent respectively), but data on recurrence were limited. Incontinence rates were significantly higher after AF compared with LIFT (7·8 (3·3 to 12·4) versus 1·6 (0·4 to 2·8) per cent). Conclusion Overall success and recurrence rates were not significantly different between the AF and LIFT procedure, but continence was better preserved after LIFT.
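The "proportional meta-analysis using a random-effects model" can be sketched with the DerSimonian-Laird estimator applied to raw proportions. The study may have pooled transformed proportions instead, and the per-study counts below are invented:

```python
import numpy as np

def pooled_proportion_dl(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions (sketch)."""
    p = events / totals
    v = p * (1 - p) / totals             # within-study variance
    w = 1.0 / v                          # fixed-effect weights
    p_fixed = (w * p).sum() / w.sum()
    q = (w * (p - p_fixed) ** 2).sum()   # Cochran's Q heterogeneity statistic
    k = len(p)
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_star = 1.0 / (v + tau2)            # random-effects weights
    return (w_star * p).sum() / w_star.sum()

# Hypothetical per-study success counts for one surgical technique.
events = np.array([30.0, 45.0, 60.0])
totals = np.array([40.0, 60.0, 100.0])
pooled = pooled_proportion_dl(events, totals)
```

The between-study variance tau2 inflates the weights' denominators, so heterogeneous studies are down-weighted less aggressively than under a fixed-effect model; the pooled estimate always lands between the smallest and largest study proportions.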

Journal ArticleDOI
TL;DR: This study shows an easy way to classify hyperspectral images with state-of-the-art convolutional neural networks pre-trained on RGB image data; the approach can easily be extended to other applications.

Journal ArticleDOI
TL;DR: In this work, the recent advancement of deep learning methods for automatic arrhythmia detection is reviewed from five aspects: utilized dataset, application, type of input data, model architecture, and performance evaluation.

Journal ArticleDOI
TL;DR: Extending the database or using 3D data could help further improve performance, especially for atypical cases of extensively damaged menisci or multiple tears.
Abstract: Purpose This work presents our contribution to a data challenge organized by the French Radiology Society during the Journees Francophones de Radiologie in October 2018. The challenge consisted in classifying MR images of the knee with respect to the presence of meniscal tears, tear location, and tear orientation. Materials and methods We trained a mask region-based convolutional neural network (R-CNN) to explicitly localize normal and torn menisci, made it more robust with ensemble aggregation, and cascaded it into a shallow ConvNet to classify the orientation of the tear. Results Our approach accurately predicted tears in the database provided for the challenge. This strategy yielded a weighted AUC score of 0.906 across all three tasks, ranking first in the challenge. Conclusion Extending the database or using 3D data could help further improve performance, especially for atypical cases of extensively damaged menisci or multiple tears.
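The abstract states that the mask R-CNN detector was made more robust with ensemble aggregation, without describing the exact scheme. A common greedy variant, which groups overlapping boxes from several models and keeps only groups confirmed by enough members, can be sketched as follows; all names, box conventions, and thresholds here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def aggregate_detections(model_outputs, iou_thr=0.5, min_votes=2):
    """Greedy ensemble aggregation of detections.

    model_outputs: one list of (box, score) detections per ensemble member.
    Overlapping detections are grouped; a group survives only if at least
    `min_votes` distinct models contributed, and its box/score are averaged.
    """
    flat = [(np.asarray(b, float), s, m)
            for m, dets in enumerate(model_outputs) for b, s in dets]
    flat.sort(key=lambda t: -t[1])               # highest score first
    used, results = [False] * len(flat), []
    for i, (box, score, _) in enumerate(flat):
        if used[i]:
            continue
        group = [i]
        for j in range(i + 1, len(flat)):
            if not used[j] and iou(box, flat[j][0]) >= iou_thr:
                group.append(j)
                used[j] = True
        models = {flat[k][2] for k in group}
        if len(models) >= min_votes:
            boxes = np.stack([flat[k][0] for k in group])
            scores = [flat[k][1] for k in group]
            results.append((boxes.mean(axis=0), float(np.mean(scores))))
    return results
```

The `min_votes` filter is what adds robustness: a spurious box produced by a single model is discarded rather than propagated to the downstream tear-orientation classifier.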

Journal ArticleDOI
TL;DR: An overview of the research area around the design and development of digital technologies for health behavior change and to explore trends and patterns is provided, showing a clear and emerging trend after 2001 in technology-based behavior change.
Abstract: Background: Research on digital technology to change health behavior has increased enormously in recent decades. Due to the interdisciplinary nature of this topic, knowledge and technologies from different research areas are required. Up to now, it has not been clear how knowledge from those fields is combined in actual applications. A comprehensive analysis that systematically maps and explores the use of knowledge within this emerging interdisciplinary field is required. Objective: This study aims to provide an overview of the research area around the design and development of digital technologies for health behavior change and to explore trends and patterns. Methods: A bibliometric analysis is used to provide an overview of the field, and a scoping review is presented to identify trends and possible gaps. The study is based on publications related to persuasive technologies and health behavior change in the last 18 years, as indexed by the Web of Science and Scopus (317 and 314 articles, respectively). In the first part, regional and time-based publishing trends; research fields and keyword co-occurrence networks; influential journals; and collaboration networks between influential authors, countries, and institutions are examined. In the second part, the behavioral domains, technological means, and theoretical foundations are investigated via a scoping review. Results: The literature reviewed shows a clear and emerging trend after 2001 in technology-based behavior change, which grew exponentially after the introduction of the smartphone around 2009. Authors from the United States, Europe, and Australia have the highest number of publications in the field. The three most active research areas are computer science, public and occupational health, and psychology. The keyword "mhealth" was the dominant term, used predominantly together with the terms "physical activity" and "ehealth". Three strong clusters of coauthors were found.
Nearly half of the reported papers were published in three journals. The United States, the United Kingdom, and the Netherlands have the highest degree of author collaboration and a strong institutional network. Mobile phones were most often used as the technology platform, regardless of the targeted behavioral domain. Physical activity and healthy eating were the most frequently targeted behavioral domains. Most articles did not report the behavior change techniques that were applied. Among the reported techniques, goal setting and self-management were the most frequent. Conclusions: Closer cooperation and interaction between the behavioral sciences and technological areas are needed, so that theoretical knowledge and new technological advancements are better connected in actual applications. Eventually, this could result in a larger societal impact, an increase in the effectiveness of digital technologies for health behavior change, and more insight into the relationship between behavior change strategies and the effectiveness of persuasive technologies.
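The keyword co-occurrence networks mentioned in the methods are built by counting how often two keywords are assigned to the same publication; the pair counts then become edge weights in the network. A minimal sketch (the function name and example keywords are illustrative, not taken from the study's data):

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(papers):
    """Count how often each keyword pair appears together on one paper.

    papers: list of per-paper keyword lists.
    Returns a Counter mapping alphabetically sorted keyword pairs to counts,
    which can be read directly as weighted edges of a co-occurrence network.
    """
    edges = Counter()
    for keywords in papers:
        # normalise case and deduplicate within one paper
        unique = sorted(set(k.lower() for k in keywords))
        for a, b in combinations(unique, 2):
            edges[(a, b)] += 1
    return edges
```

Terms that dominate the field (such as "mhealth" in this review) then show up as high-degree, high-weight nodes in the resulting network.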

Journal ArticleDOI
Orson L. Sydora1
TL;DR: In this article, the development arc of ethylene oligomerization research and the ongoing technological shift from full-range (C4-C30) to selective (C6, C8) catalytic systems are discussed.

Journal ArticleDOI
TL;DR: An overview of the progress in AI and AmI interconnected with ICT through information-society laws, superintelligence, and several related disciplines, such as multi-agent systems and the Semantic Web, ambient assisted living and e-healthcare, AmI for assisting medical diagnosis, ambient intelligence for e-learning and ambient Intelligence for smart cities is given.
Abstract: Ambient intelligence (AmI) is intrinsically and thoroughly connected with artificial intelligence (AI). Some even say that it is, in essence, AI in the environment. AI, in turn, owes its success to the phenomenal development of information and communication technologies (ICTs), driven by principles such as Moore's law. In this paper we give an overview of the progress in AI and AmI, interconnected with ICT through information-society laws, superintelligence, and several related disciplines: multi-agent systems and the Semantic Web, ambient assisted living and e-healthcare, AmI for assisting medical diagnosis, AmI for e-learning, and AmI for smart cities. Besides a short history and a description of the current state, the paper also considers the frontiers and the future of AmI and AI.

Journal ArticleDOI
TL;DR: Exhaled molecular phenotypes of severe asthma were identified and followed up; they were associated with a changing inflammatory profile and oral steroid use, suggesting that breath analysis can contribute to the management of severe asthma.
Abstract: Background: Severe asthma is a heterogeneous condition, as shown by independent cluster analyses based on demographic, clinical, and inflammatory characteristics. A next step is to identify molecularly driven phenotypes using "omics" technologies. Molecular fingerprints of exhaled breath are associated with inflammation and can qualify as noninvasive assessment of severe asthma phenotypes. Objectives: We aimed (1) to identify severe asthma phenotypes using exhaled metabolomic fingerprints obtained from a composite of electronic noses (eNoses) and (2) to assess the stability of eNose-derived phenotypes in relation to within-patient clinical and inflammatory changes. Methods: In this longitudinal multicenter study exhaled breath samples were taken from an unselected subset of adults with severe asthma from the U-BIOPRED cohort. Exhaled metabolites were analyzed centrally by using an assembly of eNoses. Unsupervised Ward clustering enhanced by similarity profile analysis together with K-means clustering was performed. For internal validation, partitioning around medoids and topological data analysis were applied. Samples at 12 to 18 months of prospective follow-up were used to assess longitudinal within-patient stability. Results: Data were available for 78 subjects (age, 55 years [interquartile range, 45-64 years]; 41% male). Three eNose-driven clusters (n = 26/33/19) were revealed, showing differences in circulating eosinophil (P = .045) and neutrophil (P = .017) percentages and in the proportion of patients using oral corticosteroids (P = .035). Longitudinal within-patient cluster stability was associated with changes in sputum eosinophil percentages (P = .045). Conclusions: We have identified and followed up exhaled molecular phenotypes of severe asthma, which were associated with changing inflammatory profile and oral steroid use. This suggests that breath analysis can contribute to the management of severe asthma.
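The study combined Ward hierarchical clustering with K-means to derive the three breathprint clusters. As an illustration of the K-means step only (not the study's pipeline, which the abstract describes at a high level), a plain implementation might look like this; the deterministic farthest-point initialisation is an assumption for reproducibility:

```python
import numpy as np

def kmeans(x, k, n_iter=100):
    """Plain K-means with deterministic farthest-point initialisation.

    x: (n_samples, n_features) feature matrix (e.g. eNose breathprints).
    Returns per-sample cluster labels and the final centroids.
    """
    # initialisation: start from the first sample, then repeatedly add
    # the sample farthest from all centroids chosen so far
    centroids = [x[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(x - c, axis=1) for c in centroids], axis=0)
        centroids.append(x[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(n_iter):
        # assignment step: nearest centroid per sample
        dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: recompute each centroid as the mean of its members
        new = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

In practice the hierarchical (Ward) step is typically used to choose the number of clusters k, which K-means then refines, with the medoid-based and topological analyses serving as internal validation.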

Journal ArticleDOI
TL;DR: The presented classifier combining 3D texture features and regional vBMD including the complete thoracolumbar spine showed high discriminatory power to identify patients with vertebral fractures and had a better diagnostic performance than v BMD alone.
Abstract: Our study proposed an automatic pipeline for opportunistic osteoporosis screening using 3D texture features and regional vBMD from multi-detector CT images. A combination of different local and global texture features outperformed global vBMD and showed high discriminative power to identify patients with vertebral fractures. Many patients at risk for osteoporosis undergo computed tomography (CT) scans, usable for opportunistic (non-dedicated) screening. We compared the performance of global volumetric bone mineral density (vBMD) with a random forest classifier based on regional vBMD and 3D texture features to separate patients with and without osteoporotic fractures. In total, 154 patients (mean age 64 ± 8.5 years; male, n = 103) were included in this retrospective single-center analysis; all underwent contrast-enhanced CT for reasons other than osteoporosis screening. Patients were dichotomized regarding prevalent osteoporotic vertebral fractures (noFX, n = 101; FX, n = 53). Vertebral bodies were automatically segmented, and trabecular vBMD was calculated with a dedicated phantom. For 3D texture analysis, we extracted gray-level co-occurrence matrix Haralick features (HAR), histograms of gradients (HoG), local binary patterns (LBP), and wavelets (WL). Fractured vertebrae were excluded from texture-feature and vBMD data extraction. The performance in identifying patients with prevalent osteoporotic vertebral fractures was evaluated in a fourfold cross-validation. The random forest classifier showed high discriminatory power (AUC = 0.88). Parameters of all vertebral levels contributed significantly to this classification. Importantly, the AUC of the proposed algorithm was significantly higher than that of global vBMD alone (AUC = 0.64).
The presented classifier combining 3D texture features and regional vBMD including the complete thoracolumbar spine showed high discriminatory power to identify patients with vertebral fractures and had a better diagnostic performance than vBMD alone.
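The headline comparison (AUC 0.88 for the texture-based classifier versus 0.64 for vBMD alone) rests on the ROC AUC, which can be computed directly from classifier scores via the Mann-Whitney U statistic: the probability that a randomly chosen fracture case scores higher than a randomly chosen non-fracture case. A minimal sketch, not the study's evaluation code:

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic.

    scores: classifier outputs (e.g. random-forest fracture probability,
    or a single feature such as global vBMD used as a score).
    labels: 1/True for positive (fracture) cases, 0/False otherwise.
    Ties between a positive and a negative score count half.
    """
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    # pairwise comparison of every positive against every negative case
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

The same function applied once to the combined classifier's scores and once to raw vBMD values makes the kind of head-to-head AUC comparison reported above.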