Showing papers by "University of Trento" published in 2019
••
Northern Arizona University; National Institutes of Health; University of Minnesota; Woods Hole Oceanographic Institution; University of California, Davis; Massachusetts Institute of Technology; University of Copenhagen; University of Trento; Chinese Academy of Sciences; University of California, San Francisco; University of Pennsylvania; Pacific Northwest National Laboratory; North Carolina State University; University of California, San Diego; Institute for Systems Biology; Dalhousie University; University of British Columbia; Statens Serum Institut; Anschutz Medical Campus; University of Washington; Michigan State University; Stanford University; Harvard University; Broad Institute; Australian National University; University of Düsseldorf; University of New South Wales; Sookmyung Women's University; San Diego State University; Howard Hughes Medical Institute; Cornell University; Max Planck Society; Colorado State University; Google; Syracuse University; Webster University; United States Department of Agriculture; University of Arkansas for Medical Sciences; Colorado School of Mines; University of Southern Mississippi; National Oceanic and Atmospheric Administration; University of California, Merced; Wageningen University and Research Centre; University of Arizona; Environment Agency; University of Florida; Merck & Co.
TL;DR: QIIME 2 development was primarily funded by NSF Awards 1565100 to J.G.C. and 1565057 to R.K.; partial support was also provided by grants NIH U54CA143925 and U54MD012388.
Abstract: QIIME 2 development was primarily funded by NSF Awards 1565100 to J.G.C. and 1565057 to R.K. Partial support was also provided by the following: grants NIH U54CA143925 (J.G.C. and T.P.) and U54MD012388 (J.G.C. and T.P.); grants from the Alfred P. Sloan Foundation (J.G.C. and R.K.); ERC-STG project MetaPG (N.S.); the Strategic Priority Research Program of the Chinese Academy of Sciences QYZDB-SSW-SMC021 (Y.B.); the Australian National Health and Medical Research Council APP1085372 (G.A.H., J.G.C., Von Bing Yap and R.K.); the Natural Sciences and Engineering Research Council (NSERC) to D.L.G.; and the State of Arizona Technology and Research Initiative Fund (TRIF), administered by the Arizona Board of Regents, through Northern Arizona University. All NCI coauthors were supported by the Intramural Research Program of the National Cancer Institute. S.M.G. and C. Diener were supported by the Washington Research Foundation Distinguished Investigator Award.
8,821 citations
••
TL;DR: Thousands of microbial genomes from yet-to-be-named species are identified, the pangenomes of human-associated microbes are expanded, and better exploitation of metagenomic technologies is enabled.
947 citations
••
TL;DR: This large analysis integrating mCRPC genomics with histology and clinical outcomes identifies RB1 genomic alteration as a potent predictor of poor outcome, and is a community resource for further interrogation of clinical and molecular associations.
Abstract: Heterogeneity in the genomic landscape of metastatic prostate cancer has become apparent through several comprehensive profiling efforts, but little is known about the impact of this heterogeneity on clinical outcome. Here, we report comprehensive genomic and transcriptomic analysis of 429 patients with metastatic castration-resistant prostate cancer (mCRPC) linked with longitudinal clinical outcomes, integrating findings from whole-exome, transcriptome, and histologic analysis. For 128 patients treated with a first-line next-generation androgen receptor signaling inhibitor (ARSI; abiraterone or enzalutamide), we examined the association of 18 recurrent DNA- and RNA-based genomic alterations, including androgen receptor (AR) variant expression, AR transcriptional output, and neuroendocrine expression signatures, with clinical outcomes. Of these, only RB1 alteration was significantly associated with poor survival, whereas alterations in RB1, AR, and TP53 were associated with shorter time on treatment with an ARSI. This large analysis integrating mCRPC genomics with histology and clinical outcomes identifies RB1 genomic alteration as a potent predictor of poor outcome, and is a community resource for further interrogation of clinical and molecular associations.
712 citations
••
University of Copenhagen; Lund University; Molecular Medicine Partnership Unit; ETH Zurich; Fudan University; German Cancer Research Center; University of São Paulo; University of Trento; European Institute of Oncology; Tokyo Institute of Technology; Japan Society for the Promotion of Science; University of Tokyo; Osaka University; National Presto Industries; City University of New York; National Institutes of Health; Huntsman Cancer Institute; University of Southern Denmark
TL;DR: A meta-analysis of eight geographically and technically diverse fecal shotgun metagenomic studies of colorectal cancer identified a core set of 29 species significantly enriched in CRC metagenomes, establishing globally generalizable, predictive taxonomic and functional microbiome CRC signatures as a basis for future diagnostics.
Abstract: Association studies have linked microbiome alterations with many human diseases. However, they have not always reported consistent results, thereby necessitating cross-study comparisons. Here, a meta-analysis of eight geographically and technically diverse fecal shotgun metagenomic studies of colorectal cancer (CRC, n = 768), which was controlled for several confounders, identified a core set of 29 species significantly enriched in CRC metagenomes (false discovery rate (FDR) < 1 × 10−5). CRC signatures derived from single studies maintained their accuracy in other studies. By training on multiple studies, we improved detection accuracy and disease specificity for CRC. Functional analysis of CRC metagenomes revealed enriched protein and mucin catabolism genes and depleted carbohydrate degradation genes. Moreover, we inferred elevated production of secondary bile acids from CRC metagenomes, suggesting a metabolic link between cancer-associated gut microbes and a fat- and meat-rich diet. Through extensive validations, this meta-analysis firmly establishes globally generalizable, predictive taxonomic and functional microbiome CRC signatures as a basis for future diagnostics. Cross-study analysis defines fecal microbial species associated with colorectal cancer.
615 citations
••
University of Trento; University of São Paulo; European Institute of Oncology; University of Turin; Japan Society for the Promotion of Science; Tokyo Institute of Technology; University of Tokyo; Osaka University; National Presto Industries; German Cancer Research Center; Huntsman Cancer Institute; University of Southern Denmark; University of Copenhagen; Virginia Bioinformatics Institute; City University of New York
TL;DR: The combined analysis of heterogeneous CRC cohorts identified reproducible microbiome biomarkers and accurate disease-predictive models that can form the basis for clinical prognostic tests and hypothesis-driven mechanistic studies.
Abstract: Several studies have investigated links between the gut microbiome and colorectal cancer (CRC), but questions remain about the replicability of biomarkers across cohorts and populations. We performed a meta-analysis of five publicly available datasets and two new cohorts and validated the findings on two additional cohorts, considering in total 969 fecal metagenomes. Unlike microbiome shifts associated with gastrointestinal syndromes, the gut microbiome in CRC showed reproducibly higher richness than controls (P < 0.01), partially due to expansions of species typically derived from the oral cavity. Meta-analysis of the microbiome functional potential identified gluconeogenesis and the putrefaction and fermentation pathways as being associated with CRC, whereas the stachyose and starch degradation pathways were associated with controls. Predictive microbiome signatures for CRC trained on multiple datasets showed consistently high accuracy in datasets not considered for model training and independent validation cohorts (average area under the curve, 0.84). Pooled analysis of raw metagenomes showed that the choline trimethylamine-lyase gene was overabundant in CRC (P = 0.001), identifying a relationship between microbiome choline metabolism and CRC. The combined analysis of heterogeneous CRC cohorts thus identified reproducible microbiome biomarkers and accurate disease-predictive models that can form the basis for clinical prognostic tests and hypothesis-driven mechanistic studies. Multicohort analysis identifies microbial signatures of colorectal cancer in fecal microbiomes.
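The cross-study validation scheme described above (training predictive signatures on pooled cohorts and testing on a held-out dataset) can be sketched as follows. Everything below is a synthetic stand-in: the toy cohorts, the nearest-centroid scorer, and the rank-based AUC are illustrative, not the authors' pipeline.

```python
# Leave-one-dataset-out (LODO) validation sketch on synthetic "metagenomes".
import numpy as np

rng = np.random.default_rng(0)

def make_cohort(n, n_species=20, effect=0.3):
    """Toy stand-in for a cohort: rows are samples, columns are species
    abundances; y = 1 for CRC, 0 for control (effect added to cases)."""
    y = rng.integers(0, 2, n)
    X = rng.random((n, n_species)) + effect * y[:, None]
    return X, y

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

cohorts = {name: make_cohort(80) for name in ["A", "B", "C"]}

aucs = {}
for held_out in cohorts:
    # Train on the pooled remaining cohorts, test on the held-out one.
    X_tr = np.vstack([cohorts[c][0] for c in cohorts if c != held_out])
    y_tr = np.concatenate([cohorts[c][1] for c in cohorts if c != held_out])
    X_te, y_te = cohorts[held_out]
    # Difference of class means as a minimal linear classifier direction.
    w = X_tr[y_tr == 1].mean(0) - X_tr[y_tr == 0].mean(0)
    aucs[held_out] = auc(X_te @ w, y_te)

print({k: round(v, 2) for k, v in aucs.items()})
```

A real analysis would use a proper classifier (e.g. random forests, as in the paper) and relative-abundance profiles, but the LODO loop itself has exactly this shape.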
478 citations
••
Günter Blöschl, Marc F. P. Bierkens, António Chambel, Christophe Cudennec +209 more • Institutions (124)
TL;DR: In this article, a community initiative to identify major unsolved scientific problems in hydrology, motivated by a need for stronger harmonisation of research efforts, is described. Despite the diversity of the participants (230 scientists in total), the process revealed much about community priorities and the state of our science: a preference for continuity in research questions rather than radical departures or redirections from past and current work.
Abstract: This paper is the outcome of a community initiative to identify major unsolved scientific problems in hydrology motivated by a need for stronger harmonisation of research efforts. The procedure involved a public consultation through online media, followed by two workshops through which a large number of potential science questions were collated, prioritised, and synthesised. In spite of the diversity of the participants (230 scientists in total), the process revealed much about community priorities and the state of our science: a preference for continuity in research questions rather than radical departures or redirections from past and current work. Questions remain focused on the process-based understanding of hydrological variability and causality at all space and time scales. Increased attention to environmental change drives a new emphasis on understanding how change propagates across interfaces within the hydrological system and across disciplinary boundaries. In particular, the expansion of the human footprint raises a new set of questions related to human interactions with nature and water cycle feedbacks in the context of complex water management problems. We hope that this reflection and synthesis of the 23 unsolved problems in hydrology will help guide research efforts for some years to come.
469 citations
••
TL;DR: In this paper, the mass, spin, and redshift distributions of binary black hole (BBH) mergers were analyzed using phenomenological population models and observations from Advanced LIGO and Advanced Virgo.
Abstract: We present results on the mass, spin, and redshift distributions of binary black holes (BBHs), obtained with phenomenological population models using the 10 BBH mergers detected in the first and second observing runs completed by Advanced LIGO and Advanced Virgo. We constrain properties of the BBH mass spectrum using models with a range of parameterizations of the BBH mass and spin distributions. We find that the mass distribution of the more massive BH in such binaries is well approximated by models with no more than 1% of BHs more massive than 45 M⊙ and a power-law index of (90% credibility). We also show that BBHs are unlikely to be composed of BHs with large spins aligned to the orbital angular momentum. Modeling the evolution of the BBH merger rate with redshift, we show that it is flat or increasing with redshift with 93% probability. Marginalizing over uncertainties in the BBH population, we find robust estimates of the BBH merger rate density of R= (90% credibility). As the BBH catalog grows in future observing runs, we expect that uncertainties in the population model parameters will shrink, potentially providing insights into the formation of BHs via supernovae, binary interactions of massive stars, stellar cluster dynamics, and the formation history of BHs across cosmic time.
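The phenomenological mass models referenced above are, in their simplest variant, truncated power laws in the primary mass. The form below sketches this model family only; the symbols and cutoffs illustrate the approach and are not the paper's exact parameterization or fitted values.

```latex
% Truncated power-law model for the primary BH mass (sketch of the model
% family, not the paper's fitted parameters):
p(m_1 \mid \alpha, m_{\min}, m_{\max}) \;\propto\; m_1^{-\alpha},
\qquad m_{\min} \le m_1 \le m_{\max},
```

with the population inference then constraining the slope $\alpha$ and the cutoff masses from the observed mergers.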
464 citations
••
F. Kyle Satterstrom, Jack A. Kosmicki, Jiebiao Wang, Michael S. Breen +150 more • Institutions (45)
TL;DR: Using an enhanced Bayesian framework to integrate de novo and case-control rare variation, 102 risk genes are identified at a false discovery rate of ≤ 0.1, consistent with multiple paths to an excitatory/inhibitory imbalance underlying ASD.
Abstract: We present the largest exome sequencing study of autism spectrum disorder (ASD) to date (n=35,584 total samples, 11,986 with ASD). Using an enhanced Bayesian framework to integrate de novo and case-control rare variation, we identify 102 risk genes at a false discovery rate ≤ 0.1. Of these genes, 49 show higher frequencies of disruptive de novo variants in individuals ascertained for severe neurodevelopmental delay, while 53 show higher frequencies in individuals ascertained for ASD; comparing ASD cases with mutations in these groups reveals phenotypic differences. Expressed early in brain development, most of the risk genes have roles in regulation of gene expression or neuronal communication (i.e., mutations effect neurodevelopmental and neurophysiological changes), and 13 fall within loci recurrently hit by copy number variants. In human cortex single-cell gene expression data, expression of risk genes is enriched in both excitatory and inhibitory neuronal lineages, consistent with multiple paths to an excitatory/inhibitory imbalance underlying ASD.
461 citations
•
01 Jan 2019
TL;DR: This framework decouples appearance and motion information using a self-supervised formulation and, to support complex motions, uses a representation consisting of a set of learned keypoints along with their local affine transformations.
Abstract: Image animation consists of generating a video sequence so that an object in a source image is animated according to the motion of a driving video. Our framework addresses this problem without using any annotation or prior information about the specific object to animate. Once trained on a set of videos depicting objects of the same category (e.g. faces, human bodies), our method can be applied to any object of this class. To achieve this, we decouple appearance and motion information using a self-supervised formulation. To support complex motions, we use a representation consisting of a set of learned keypoints along with their local affine transformations. A generator network models occlusions arising during target motions and combines the appearance extracted from the source image and the motion derived from the driving video. Our framework scores best on diverse benchmarks and on a variety of object categories.
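The keypoint-plus-local-affine motion representation described above can be illustrated with a minimal first-order (Taylor) warp around a single keypoint. The keypoint positions and Jacobians below are made-up values, and the formula is a sketch of the general idea rather than the paper's exact implementation.

```python
# Minimal sketch of the keypoint + local-affine motion idea: near each
# keypoint, the driving-to-source mapping is approximated to first order as
#   T(z) ≈ p_src + J_src @ inv(J_drv) @ (z - p_drv).
# All keypoint values here are illustrative; the real model learns them.
import numpy as np

def local_affine_warp(z, p_src, J_src, p_drv, J_drv):
    """Map a driving-frame coordinate z to the corresponding source-frame
    coordinate via the first-order approximation around one keypoint."""
    return p_src + J_src @ np.linalg.inv(J_drv) @ (z - p_drv)

# One keypoint seen in the source and driving frames, with its 2x2 local
# affine Jacobians (illustrative values).
p_src, J_src = np.array([0.2, 0.5]), np.array([[1.0, 0.0], [0.0, 1.0]])
p_drv, J_drv = np.array([0.4, 0.1]), np.array([[0.5, 0.0], [0.0, 2.0]])

# A driving-frame point near the keypoint maps to a source-frame point.
z = np.array([0.5, 0.2])
print(local_affine_warp(z, p_src, J_src, p_drv, J_drv))
```

In the full method, per-keypoint warps like this are blended into a dense motion field by a generator network that also models occlusions.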
441 citations
••
TL;DR: In this paper, the authors place constraints on the dipole radiation and possible deviations from GR in the post-Newtonian coefficients that govern the inspiral regime of a binary neutron star inspiral.
Abstract: The recent discovery by Advanced LIGO and Advanced Virgo of a gravitational wave signal from a binary neutron star inspiral has enabled tests of general relativity (GR) with this new type of source. This source, for the first time, permits tests of strong-field dynamics of compact binaries in the presence of matter. In this Letter, we place constraints on the dipole radiation and possible deviations from GR in the post-Newtonian coefficients that govern the inspiral regime. Bounds on modified dispersion of gravitational waves are obtained; in combination with information from the observed electromagnetic counterpart we can also constrain effects due to large extra dimensions. Finally, the polarization content of the gravitational wave signal is studied. The results of all tests performed here show good agreement with GR.
430 citations
••
TL;DR: In this article, a search for invisible decays of a Higgs boson via vector boson fusion is performed using proton-proton collision data collected with the CMS detector at the LHC in 2016 at a center-of-mass energy √s = 13 TeV, corresponding to an integrated luminosity of 35.9 fb^{-1}.
••
01 Jun 2019
TL;DR: This work creates MuST-C, a multilingual speech translation corpus whose size and quality will facilitate the training of end-to-end systems for SLT from English into 8 languages, and provides an empirical verification of its quality together with SLT results computed with a state-of-the-art approach on each language direction.
Abstract: Current research on spoken language translation (SLT) must confront the scarcity of sizeable and publicly available training corpora. This problem hinders the adoption of neural end-to-end approaches, which represent the state of the art in the two parent tasks of SLT: automatic speech recognition and machine translation. To fill this gap, we created MuST-C, a multilingual speech translation corpus whose size and quality will facilitate the training of end-to-end systems for SLT from English into 8 languages. For each target language, MuST-C comprises at least 385 hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations. Together with a description of the corpus creation methodology (scalable to add new data and cover new languages), we provide an empirical verification of its quality and SLT results computed with a state-of-the-art approach on each language direction.
••
TL;DR: A novel unsupervised context-sensitive framework—deep change vector analysis (DCVA)—for CD in multitemporal VHR images that exploits convolutional neural network (CNN) features is proposed, and experimental results on multitemporal data sets of Worldview-2, Pleiades, and Quickbird images confirm the effectiveness of the proposed method.
Abstract: Change detection (CD) in multitemporal images is an important application of remote sensing. Recent technological evolution provided very high spatial resolution (VHR) multitemporal optical satellite images showing high spatial correlation among pixels and requiring an effective modeling of spatial context to accurately capture change information. Here, we propose a novel unsupervised context-sensitive framework—deep change vector analysis (DCVA)—for CD in multitemporal VHR images that exploits convolutional neural network (CNN) features. To have an unsupervised system, DCVA starts from a suboptimal pretrained multilayered CNN for obtaining deep features that can model spatial relationships among neighboring pixels and thus complex objects. An automatic feature selection strategy is employed layerwise to select features emphasizing both high and low prior probability change information. Selected features from multiple layers are combined into a deep feature hypervector providing a multiscale scene representation. The use of the same pretrained CNN for semantic segmentation of single images enables us to obtain coherent multitemporal deep feature hypervectors that can be compared pixelwise to obtain deep change vectors that also model spatial context information. Deep change vectors are analyzed based on their magnitude to identify changed pixels. Then, deep change vectors corresponding to identified changed pixels are binarized to obtain compressed binary deep change vectors that preserve information about the direction (kind) of change. Changed pixels are analyzed for multiple CD based on the binary features, thus implicitly using the spatial information. Experimental results on multitemporal data sets of Worldview-2, Pleiades, and Quickbird images confirm the effectiveness of the proposed method.
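The core of the deep-change-vector idea (same feature extractor applied to both dates, pixelwise feature difference, magnitude thresholding) can be sketched with a toy example. The hand-made smoothing-filter "features" and the mean-plus-2-sigma threshold below are illustrative stand-ins for the pretrained CNN features and the automatic analysis of the actual method.

```python
# Toy sketch of deep change vector analysis: identical feature extraction on
# both acquisition dates, then thresholding the change-vector magnitude.
import numpy as np

def features(img, ksize=3):
    """Stand-in feature extractor: raw intensity plus a local mean, giving
    each pixel a small feature vector with some spatial context."""
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")
    local_mean = np.zeros(img.shape)
    for dy in range(ksize):
        for dx in range(ksize):
            local_mean += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    local_mean /= ksize * ksize
    return np.stack([img.astype(float), local_mean], axis=-1)

def dcva_change_map(img_t1, img_t2):
    # Deep change vector = pixelwise feature difference; its magnitude is
    # thresholded (here at mean + 2 std) to flag changed pixels.
    g = features(img_t2) - features(img_t1)
    mag = np.linalg.norm(g, axis=-1)
    return mag > mag.mean() + 2 * mag.std()

t1 = np.zeros((16, 16))
t2 = t1.copy()
t2[4:8, 4:8] = 10.0          # a small "new object" appears between dates
change = dcva_change_map(t1, t2)
print(change.sum(), "pixels flagged as changed")
```

The real framework additionally selects informative CNN layers, builds multiscale feature hypervectors, and binarizes the change vectors to distinguish kinds of change.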
••
Northern Arizona University; National Institutes of Health; University of Minnesota; Woods Hole Oceanographic Institution; University of California, Davis; Massachusetts Institute of Technology; University of Copenhagen; University of Trento; Chinese Academy of Sciences; University of California, San Francisco; University of Pennsylvania; Pacific Northwest National Laboratory; North Carolina State University; University of Montana; Institute for Systems Biology; Dalhousie University; University of British Columbia; Statens Serum Institut; Anschutz Medical Campus; University of Washington; University of California, San Diego; Michigan State University; Stanford University; Harvard University; Broad Institute; Australian National University; University of Düsseldorf; University of New South Wales; Sookmyung Women's University; San Diego State University; Howard Hughes Medical Institute; Max Planck Society; Cornell University; Colorado State University; Google; Syracuse University; Webster University; United States Department of Agriculture; University of Arkansas for Medical Sciences; Colorado School of Mines; University of Southern Mississippi; National Oceanic and Atmospheric Administration; University of California, Merced; Wageningen University and Research Centre; University of Arizona; Environment Agency; University of Florida; Merck & Co.
TL;DR: An amendment to this paper has been published and can be accessed via a link at the top of the paper.
Abstract: In the version of this article initially published, some reference citations were incorrect. The three references to Jupyter Notebooks should have cited Kluyver et al. instead of Gonzalez et al. The reference to Qiita should have cited Gonzalez et al. instead of Schloss et al. The reference to mothur should have cited Schloss et al. instead of McMurdie & Holmes. The reference to phyloseq should have cited McMurdie & Holmes instead of Huber et al. The reference to Bioconductor should have cited Huber et al. instead of Franzosa et al. And the reference to the biobakery suite should have cited Franzosa et al. instead of Kluyver et al. The errors have been corrected in the HTML and PDF versions of the article.
••
15 Jun 2019
TL;DR: In this article, a deep learning framework for image animation and video generation is proposed, consisting of a keypoint detector trained in an unsupervised manner to extract object keypoints, a dense motion prediction network for generating dense heatmaps from sparse keypoints, and a motion transfer network for synthesizing the output frames.
Abstract: This paper introduces a novel deep learning framework for image animation. Given an input image with a target object and a driving video sequence depicting a moving object, our framework generates a video in which the target object is animated according to the driving sequence. This is achieved through a deep architecture that decouples appearance and motion information. Our framework consists of three main modules: (i) a Keypoint Detector trained in an unsupervised manner to extract object keypoints, (ii) a Dense Motion prediction network for generating dense heatmaps from sparse keypoints, in order to better encode motion information and (iii) a Motion Transfer Network, which uses the motion heatmaps and appearance information extracted from the input image to synthesize the output frames. We demonstrate the effectiveness of our method on several benchmark datasets, spanning a wide variety of object appearances, and show that our approach outperforms state-of-the-art image animation and video generation methods.
••
TL;DR: The squeezing injection was fully automated, and over the first 5 months of the third joint LIGO-Virgo observing run (O3), squeezing was applied for more than 99% of the science time, during which several gravitational-wave candidates were recorded.
Abstract: Current interferometric gravitational-wave detectors are limited by quantum noise over a wide range of their measurement bandwidth. One method to overcome the quantum limit is the injection of squeezed vacuum states of light into the interferometer’s dark port. Here, we report on the successful application of this quantum technology to improve the shot noise limited sensitivity of the Advanced Virgo gravitational-wave detector. A sensitivity enhancement of up to 3.2±0.1 dB beyond the shot noise limit is achieved. This nonclassical improvement corresponds to a 5%–8% increase of the binary neutron star horizon. The squeezing injection was fully automated, and over the first 5 months of the third joint LIGO-Virgo observing run (O3), squeezing was applied for more than 99% of the science time. During this period, several gravitational-wave candidates were recorded.
••
TL;DR: In this article, the most significant and appealing mechanisms proposed for explaining the flash sintering event are analyzed and discussed, with the aim of pointing out the level of knowledge reached so far and identifying possible shared theories useful for proposing future scientific activities and potential technological implementations.
Abstract: Flash sintering is a novel densification technology for ceramics, which allows a dramatic reduction of processing time and temperature. It represents a promising sintering route for reducing the economic, energetic and environmental costs associated with firing. Moreover, it enables the development of peculiar, out-of-equilibrium microstructures. The flash process is complex and unusual, involving different simultaneous physical and chemical phenomena; its understanding, explanation and implementation require an interdisciplinary approach spanning physics, chemistry and engineering. In spite of the intensive work of several researchers, there is still a wide debate about the predominant mechanisms responsible for the flash sintering process. In the present review, the most significant and appealing mechanisms proposed for explaining the “flash” event are analyzed and discussed, with the aim of pointing out the level of knowledge reached so far and identifying possible shared theories useful for proposing future scientific activities and potential technological implementations.
••
University of Leeds; Pennsylvania State University; University of North Carolina at Chapel Hill; Agency for Science, Technology and Research; Weizmann Institute of Science; Johns Hopkins University; University of Pittsburgh; Fox Chase Cancer Center; University of Freiburg; University of Zurich; University of Strasbourg; Collège de France; University of Amsterdam; University of Trento; École Polytechnique Fédérale de Lausanne; Kavli Institute for Theoretical Physics; Universidade Federal do Rio Grande do Sul; École normale supérieure de Lyon; University of California, San Diego; University of California, Riverside; University of Warsaw; Interdisciplinary Center for Scientific Computing; Heidelberg Institute for Theoretical Studies; Technische Universität München; Hebrew University of Jerusalem; Tel Aviv University; Stony Brook University; University of York
TL;DR: An overview of the progress and remaining limitations in the understanding of the mechanistic foundations of allostery gained from computational and experimental analyses of real protein systems and model systems is provided.
••
University of Trento; New York University; University of Naples Federico II; Broad Institute; National Institute for Medical Research; Kwame Nkrumah University of Science and Technology; University of Nebraska–Lincoln; University of Vienna; Karolinska Institutet; Spanish National Research Council; Bernhard Nocht Institute for Tropical Medicine; Harvard University
TL;DR: Genomic analysis showed substantial functional diversity in the P. copri complex with notable differences in carbohydrate metabolism, suggesting that multi-generational dietary modifications may be driving reduced prevalence in Westernized populations.
••
TL;DR: An exclusion limit on the H→invisible branching ratio of 0.26(0.17_{-0.05}^{+0.07}) at 95% confidence level is observed (expected) in combination with the results at sqrt[s]=7 and 8 TeV.
Abstract: Dark matter particles, if sufficiently light, may be produced in decays of the Higgs boson. This Letter presents a statistical combination of searches for H→invisible decays where H is produced according to the standard model via vector boson fusion, Z(ll)H, and W/Z(had)H, all performed with the ATLAS detector using 36.1 fb^{-1} of pp collisions at a center-of-mass energy of sqrt[s]=13 TeV at the LHC. In combination with the results at sqrt[s]=7 and 8 TeV, an exclusion limit on the H→invisible branching ratio of 0.26(0.17_{-0.05}^{+0.07}) at 95% confidence level is observed (expected).
••
TL;DR: In this article, an improved energy clustering algorithm is introduced, and its implications for the measurement and identification of prompt electrons and photons are discussed in detail, together with the corrections and calibrations that affect performance, including energy calibration and identification and isolation efficiencies.
Abstract: This paper describes the reconstruction of electrons and photons with the ATLAS detector, employed for measurements and searches exploiting the complete LHC Run 2 dataset. An improved energy clustering algorithm is introduced, and its implications for the measurement and identification of prompt electrons and photons are discussed in detail. Corrections and calibrations that affect performance, including energy calibration, identification and isolation efficiencies, and the measurement of the charge of reconstructed electron candidates are determined using up to 81 fb−1 of proton-proton collision data collected at √s=13 TeV between 2015 and 2017.
••
TL;DR: An increase in remote sensing and ancillary data sets opens up the possibility of utilizing multimodal data sets in a joint manner to further improve the performance of the processing approaches with respect to applications at hand.
Abstract: The recent, sharp increase in the availability of data captured by different sensors, combined with their considerable heterogeneity, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary data sets, however, opens up the possibility of utilizing multimodal data sets in a joint manner to further improve the performance of the processing approaches with respect to applications at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several
••
TL;DR: A comprehensive overview of the development and uptake of NLP methods applied to free-text clinical notes related to chronic diseases is provided, including the investigation of challenges faced by NLP methodologies in understanding clinical narratives.
Abstract: Background: Novel approaches that complement and go beyond evidence-based medicine are required in the domain of chronic diseases, given the growing incidence of such conditions on the worldwide population. A promising avenue is the secondary use of electronic health records (EHRs), where patient data are analyzed to conduct clinical and translational research. Methods based on machine learning to process EHRs are resulting in improved understanding of patient clinical trajectories and chronic disease risk prediction, creating a unique opportunity to derive previously unknown clinical insights. However, a wealth of clinical histories remains locked behind clinical narratives in free-form text. Consequently, unlocking the full potential of EHR data is contingent on the development of natural language processing (NLP) methods to automatically transform clinical text into structured clinical data that can guide clinical decisions and potentially delay or prevent disease onset. Objective: The goal of the research was to provide a comprehensive overview of the development and uptake of NLP methods applied to free-text clinical notes related to chronic diseases, including the investigation of challenges faced by NLP methodologies in understanding clinical narratives. Methods: Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed and searches were conducted in 5 databases using “clinical notes,” “natural language processing,” and “chronic disease” and their variations as keywords to maximize coverage of the articles. Results: Of the 2652 articles considered, 106 met the inclusion criteria. Review of the included papers resulted in identification of 43 chronic diseases, which were then further classified into 10 disease categories using the International Classification of Diseases, 10th Revision. 
The majority of studies focused on diseases of the circulatory system (n=38), while endocrine and metabolic diseases were the fewest (n=14). This reflects the structure of clinical records for metabolic diseases, which typically contain much more structured data than records for diseases of the circulatory system; the latter rely more heavily on unstructured data and have consequently attracted a stronger NLP focus. The review has shown a significant increase in the use of machine learning methods compared with rule-based approaches; however, deep learning methods remain emergent (n=3). Consequently, the majority of works focus on classification of disease phenotype, with only a handful of papers addressing extraction of comorbidities from free text or integration of clinical notes with structured data. There is notable use of relatively simple methods, such as shallow classifiers (or their combination with rule-based methods), owing to the interpretability of their predictions, which remains a significant issue for more complex methods. Finally, the scarcity of publicly available data may also have contributed to the insufficient development of more advanced methods, such as the extraction of word embeddings from clinical notes. Conclusions: Efforts are still required to improve (1) progression of clinical NLP methods from extraction toward understanding; (2) recognition of relations among entities rather than entities in isolation; (3) temporal extraction to understand past, current, and future clinical events; (4) exploitation of alternative sources of clinical knowledge; and (5) availability of large-scale, de-identified clinical corpora.
••
TL;DR: To fully exploit the available multitemporal HS images and their rich information content in change detection (CD), it is necessary to develop advanced automatic techniques that can address the complexity of the extraction of change information in an HS space.
Abstract: The expected increasing availability of remote sensing satellite hyperspectral (HS) images provides an important and unique data source for Earth observation (EO). HS images are characterized by a detailed spectral sampling (i.e., very high spectral resolution) over a wide spectral wavelength range, which makes it possible to monitor land-cover dynamics at a fine spectral scale. This is due to their capability of detecting subtle spectral variations in multitemporal images associated with land-cover changes that are not detectable in traditional multispectral (MS) images because of their limited spectral resolution (i.e., sufficient for representing only abrupt, strong changes in the spectral signature, as a rule). To fully exploit the available multitemporal HS images and their rich information content in change detection (CD), it is necessary to develop advanced automatic techniques that can address the complexity of extracting change information in an HS space. This article provides a comprehensive overview of the CD problem in HS images, as well as a survey of the main CD techniques available for multitemporal HS images. We review both widely used methods and new techniques proposed in the recent literature. The basic concepts, categories, open issues, and challenges related to CD in HS images are discussed and analyzed in detail. Experimental results obtained using state-of-the-art approaches are shown to illustrate relevant concepts and problems.
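As a concrete illustration of the simplest family of techniques such a survey covers, change vector analysis (CVA) thresholds the per-pixel magnitude of the spectral difference between two co-registered acquisitions. The sketch below uses illustrative image shapes, synthetic data, and an arbitrary threshold; it is a generic CVA baseline, not a method from the article itself.

```python
import numpy as np

def cva_change_map(img_t1, img_t2, threshold):
    """Change vector analysis over two (rows, cols, bands) HS images.

    Computes the spectral change magnitude per pixel and thresholds it,
    returning a boolean change map."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    magnitude = np.linalg.norm(diff, axis=-1)  # length of the change vector
    return magnitude > threshold

rng = np.random.default_rng(1)
t1 = rng.random((4, 4, 50))          # toy scene with 50 spectral bands
t2 = t1.copy()
t2[0, 0] += 1.0                      # inject a strong change in one pixel
change = cva_change_map(t1, t2, threshold=3.0)
print(change.sum())                  # only the altered pixel is flagged
```

CVA is purely unsupervised and per-pixel, which is why surveys use it as the baseline that more advanced subspace- and learning-based HS CD techniques are measured against.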
••
05 Apr 2019
TL;DR: Dynamic Generative Memory (DGM), as proposed in this paper, relies on conditional generative adversarial networks with learnable connection plasticity realized through neural masking, and adds a dynamic network expansion mechanism that ensures sufficient model capacity to accommodate continually incoming tasks.
Abstract: Models trained in the context of continual learning (CL) should be able to learn from a stream of data over an undefined period of time. The main challenges herein are: 1) maintaining old knowledge while simultaneously benefiting from it when learning new tasks, and 2) guaranteeing model scalability with a growing amount of data to learn from. In order to tackle these challenges, we introduce Dynamic Generative Memory (DGM), a synaptic-plasticity-driven framework for continual learning. DGM relies on conditional generative adversarial networks with learnable connection plasticity realized through neural masking. Specifically, we evaluate two variants of neural masking: applied to (i) layer activations and (ii) connection weights directly. Furthermore, we propose a dynamic network expansion mechanism that ensures sufficient model capacity to accommodate continually incoming tasks. The amount of added capacity is determined dynamically from the learned binary mask. We evaluate DGM in the continual class-incremental setup on visual classification tasks.
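The masking-and-expansion idea described above can be sketched in a few lines. This is a deliberately hypothetical simplification (plain NumPy, one dense layer, a fixed growth rule), not the paper's conditional-GAN implementation; the layer sizes and reserve ratio are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_forward(x, W, mask):
    """Forward pass through one layer whose connection weights are gated
    by a binary mask (variant (ii) in the abstract)."""
    return np.maximum(x @ (W * mask), 0.0)  # ReLU activation

def expand_capacity(W, mask, reserve_ratio=0.25):
    """Grow the layer when too few free (unmasked) weights remain.

    The amount of added capacity is read off the learned binary mask:
    if less than `reserve_ratio` of the weights is still free for future
    tasks, fresh output units are appended with an all-zero mask."""
    free = 1.0 - mask.mean()              # fraction not yet claimed by tasks
    if free < reserve_ratio:
        n_new = max(1, W.shape[1] // 4)
        W = np.hstack([W, 0.01 * rng.standard_normal((W.shape[0], n_new))])
        mask = np.hstack([mask, np.zeros((mask.shape[0], n_new))])
    return W, mask

W = 0.01 * rng.standard_normal((8, 4))
mask = np.ones((8, 4))                    # task 1 claimed every weight
W, mask = expand_capacity(W, mask)        # no free weights left -> expand
print(W.shape)                            # layer grew from 4 to 5 units
```

The point of reading capacity off the mask, rather than growing on a schedule, is that a task which reuses many already-masked weights triggers little or no expansion.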
••
TL;DR: In this article, the authors presented precision results on cosmic-ray electrons in the energy range from 0.5 GeV to 1.4 TeV, based on 28.1×10⁶ electrons collected by the Alpha Magnetic Spectrometer on the International Space Station.
Abstract: Precision results on cosmic-ray electrons are presented in the energy range from 0.5 GeV to 1.4 TeV based on 28.1×10⁶ electrons collected by the Alpha Magnetic Spectrometer on the International Space Station. In the entire energy range the electron and positron spectra have distinctly different magnitudes and energy dependences. The electron flux exhibits a significant excess starting from 42.1 (+5.4/−5.2) GeV compared to the lower energy trends, but the nature of this excess is different from the positron flux excess above 25.2±1.8 GeV. Contrary to the positron flux, which has an exponential energy cutoff of 810 (+310/−180) GeV at the 5σ level, the electron flux does not have an energy cutoff below 1.9 TeV. In the entire energy range the electron flux is well described by the sum of two power law components. The different behavior of the cosmic-ray electrons and positrons measured by the Alpha Magnetic Spectrometer is clear evidence that most high energy electrons originate from different sources than high energy positrons.
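The "sum of two power law components" corresponds to a flux model of the form Φ(E) = C_a·E^(−γ_a) + C_b·E^(−γ_b). The sketch below uses made-up normalizations and spectral indices purely to illustrate how the harder (smaller-index) component comes to dominate at high energy; these are not the AMS-02 fit values.

```python
def two_power_law_flux(E, C_a=1.0, gamma_a=3.6, C_b=5e-3, gamma_b=2.6):
    """Flux as the sum of a softer component (dominant at low energy)
    and a harder component (dominant at high energy); E in GeV.
    All parameter values here are illustrative placeholders."""
    return C_a * E ** -gamma_a + C_b * E ** -gamma_b

# Fraction of the total flux carried by the softer component:
soft_frac_low = 1.0 * 10.0 ** -3.6 / two_power_law_flux(10.0)
soft_frac_high = 1.0 * 1000.0 ** -3.6 / two_power_law_flux(1000.0)
print(soft_frac_low, soft_frac_high)  # soft term dominates only at low E
```

With these placeholder indices, the softer component carries most of the flux at 10 GeV but only a minor share at 1 TeV, which is the qualitative behavior a two-component fit captures.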
••
TL;DR: In this article, the algorithms used by the ATLAS Collaboration during Run 2 of the Large Hadron Collider (LHC) to identify jets containing b-hadrons are presented, and the performance of the algorithms is evaluated in the s ...
Abstract: The algorithms used by the ATLAS Collaboration during Run 2 of the Large Hadron Collider to identify jets containing b-hadrons are presented. The performance of the algorithms is evaluated in the s ...
••
TL;DR: The study describes a theoretical Circular Economy model for big developing cities in low- and middle-income countries, enabling an effective comparison of the opportunities these countries have regarding municipal solid waste exploitation.
••
TL;DR: The gut metagenomes of Italians with varying dietary habits were dissected, providing evidence of distinct gene repertoires characterizing different P. copri populations, with drug metabolism and complex carbohydrate degradation significantly associated with Western and non-Western individuals, respectively.
••
01 Jan 2019
TL;DR: Generative Adversarial Nets (GANs) trained to generate only the normal distribution of the data are proposed; the resulting method outperforms previous state-of-the-art methods in both the frame-level and the pixel-level evaluation.
Abstract: Abnormal crowd behaviour detection attracts large interest due to its importance in video surveillance scenarios. However, the ambiguity and the lack of sufficient abnormal ground truth data make end-to-end training of large deep networks hard in this domain. In this paper we propose to use Generative Adversarial Nets (GANs), which are trained to generate only the normal distribution of the data. During the adversarial GAN training, a discriminator (D) is used as a supervisor for the generator network (G) and vice versa. At testing time we use D to solve our discriminative task (abnormality detection), where D has been trained without the need of manually-annotated abnormal data. Moreover, in order to prevent G from learning a trivial identity function, we use a cross-channel approach, forcing G to transform raw-pixel data into motion information and vice versa. The quantitative results on standard benchmarks show that our method outperforms previous state-of-the-art methods in both the frame-level and the pixel-level evaluation.
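At test time, the trained discriminator alone scores abnormality: since D has only ever seen normal data, low D scores flag suspicious regions. A minimal sketch of that decision rule follows; the patch scores, threshold, and helper names are hypothetical placeholders, not the paper's networks or evaluation protocol.

```python
import numpy as np

def frame_abnormality(d_scores, threshold=0.5):
    """Frame-level decision: a frame is abnormal if the minimum
    patch-level discriminator score falls below the threshold."""
    return float(np.min(d_scores)) < threshold

def pixel_abnormality_map(d_score_map, threshold=0.5):
    """Pixel/patch-level localization: mark the regions whose
    discriminator score D finds implausible (i.e., non-normal)."""
    return np.asarray(d_score_map) < threshold

normal_frame = [0.9, 0.8, 0.85, 0.7]    # every patch looks "normal" to D
abnormal_frame = [0.9, 0.2, 0.85, 0.7]  # one patch scores low
print(frame_abnormality(normal_frame), frame_abnormality(abnormal_frame))
# → False True
```

This separation mirrors the two metrics reported on the standard benchmarks: frame-level evaluation only asks whether an abnormal event occurs, while pixel-level evaluation also requires localizing it.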