Showing papers by "University of California, San Diego" published in 2018
••
Clotilde Théry, Kenneth W. Witwer, Elena Aikawa, María José Alcaraz, +414 more • Institutions (209)
TL;DR: The MISEV2018 guidelines include tables and outlines of suggested protocols and steps to follow to document specific EV-associated functional activities, and a checklist is provided with summaries of key points.
Abstract: The last decade has seen a sharp increase in the number of scientific publications describing physiological and pathological functions of extracellular vesicles (EVs), a collective term covering various subtypes of cell-released, membranous structures, called exosomes, microvesicles, microparticles, ectosomes, oncosomes, apoptotic bodies, and many other names. However, specific issues arise when working with these entities, whose size and amount often make them difficult to obtain as relatively pure preparations, and to characterize properly. The International Society for Extracellular Vesicles (ISEV) proposed Minimal Information for Studies of Extracellular Vesicles (“MISEV”) guidelines for the field in 2014. We now update these “MISEV2014” guidelines based on evolution of the collective knowledge in the last four years. An important point to consider is that ascribing a specific function to EVs in general, or to subtypes of EVs, requires reporting of specific information beyond mere description of function in a crude, potentially contaminated, and heterogeneous preparation. For example, claims that exosomes are endowed with exquisite and specific activities remain difficult to support experimentally, given our still limited knowledge of their specific molecular machineries of biogenesis and release, as compared with other biophysically similar EVs. The MISEV2018 guidelines include tables and outlines of suggested protocols and steps to follow to document specific EV-associated functional activities. Finally, a checklist is provided with summaries of key points.
5,988 citations
••
Gregory A. Roth, Degu Abate, Kalkidan Hassen Abate, +1025 more • Institutions (333)
TL;DR: Non-communicable diseases comprised the greatest fraction of deaths, contributing to 73·4% (95% uncertainty interval [UI] 72·5–74·1) of total deaths in 2017, while communicable, maternal, neonatal, and nutritional causes accounted for 18·6% (17·9–19·6), and injuries 8·0% (7·7–8·2).
5,211 citations
••
18 Jun 2018
TL;DR: Cascade R-CNN, proposed in this paper, is a multi-stage object detection architecture consisting of a sequence of detectors trained with increasing IoU thresholds, making each stage sequentially more selective against close false positives.
Abstract: In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector trained with a low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade with increasing IoU thresholds. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher-quality detector. The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code is available at https://github.com/zhaoweicai/cascade-rcnn.
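The cascade's core idea, relabeling proposals with a stricter IoU threshold at each stage, can be illustrated with a small self-contained sketch (this is not the paper's code; the boxes and thresholds below are invented for illustration):

```python
def iou(a, b):
    # boxes as (x1, y1, x2, y2); intersection-over-union of two axis-aligned boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

# one hypothetical ground-truth box and a few proposals of decreasing quality
gt = (0, 0, 10, 10)
proposals = [(0, 0, 10, 10), (1, 1, 11, 11), (3, 3, 13, 13), (6, 6, 16, 16)]

# cascade: each stage defines positives with a stricter IoU threshold,
# so later stages train against progressively harder (closer) false positives
for stage, thr in enumerate([0.5, 0.6, 0.7], start=1):
    positives = [p for p in proposals if iou(p, gt) >= thr]
    print(f"stage {stage} (IoU >= {thr}): {len(positives)} positives")
```

In the real detector each stage also regresses the boxes, so the proposals fed to the next stage improve rather than shrink; this sketch only shows the relabeling step.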
3,663 citations
••
Lorenzo Galluzzi, Ilio Vitale, Stuart A. Aaronson, +183 more • Institutions (111)
TL;DR: The Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives.
Abstract: Over the past decade, the Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives. Since the field continues to expand and novel mechanisms that orchestrate multiple cell death pathways are unveiled, we propose an updated classification of cell death subroutines focusing on mechanistic and essential (as opposed to correlative and dispensable) aspects of the process. As we provide molecularly oriented definitions of terms including intrinsic apoptosis, extrinsic apoptosis, mitochondrial permeability transition (MPT)-driven necrosis, necroptosis, ferroptosis, pyroptosis, parthanatos, entotic cell death, NETotic cell death, lysosome-dependent cell death, autophagy-dependent cell death, immunogenic cell death, cellular senescence, and mitotic catastrophe, we discuss the utility of neologisms that refer to highly specialized instances of these processes. The mission of the NCCD is to provide a widely accepted nomenclature on cell death in support of the continued development of the field.
3,301 citations
••
TL;DR: This paper presents the cosmological parameter results from the final full-mission Planck measurements of the CMB anisotropies, finding good consistency with the standard spatially-flat 6-parameter ΛCDM cosmology having a power-law spectrum of adiabatic scalar perturbations, from polarization, temperature, and lensing data separately and in combination.
Abstract: We present cosmological parameter results from the final full-mission Planck measurements of the CMB anisotropies. We find good consistency with the standard spatially-flat 6-parameter $\Lambda$CDM cosmology having a power-law spectrum of adiabatic scalar perturbations (denoted "base $\Lambda$CDM" in this paper), from polarization, temperature, and lensing, separately and in combination. A combined analysis gives dark matter density $\Omega_c h^2 = 0.120\pm 0.001$, baryon density $\Omega_b h^2 = 0.0224\pm 0.0001$, scalar spectral index $n_s = 0.965\pm 0.004$, and optical depth $\tau = 0.054\pm 0.007$ (in this abstract we quote $68\,\%$ confidence regions on measured parameters and $95\,\%$ on upper limits). The angular acoustic scale is measured to $0.03\,\%$ precision, with $100\theta_*=1.0411\pm 0.0003$. These results are only weakly dependent on the cosmological model and remain stable, with somewhat increased errors, in many commonly considered extensions. Assuming the base-$\Lambda$CDM cosmology, the inferred late-Universe parameters are: Hubble constant $H_0 = (67.4\pm 0.5)$ km/s/Mpc; matter density parameter $\Omega_m = 0.315\pm 0.007$; and matter fluctuation amplitude $\sigma_8 = 0.811\pm 0.006$. We find no compelling evidence for extensions to the base-$\Lambda$CDM model. Combining with BAO we constrain the effective extra relativistic degrees of freedom to be $N_{\rm eff} = 2.99\pm 0.17$, and the neutrino mass is tightly constrained to $\sum m_\nu < 0.12$ eV. The CMB spectra continue to prefer higher lensing amplitudes than predicted in base-$\Lambda$CDM at over $2\,\sigma$, which pulls some parameters that affect the lensing amplitude away from the base-$\Lambda$CDM model; however, this is not supported by the lensing reconstruction or (in models that also change the background geometry) BAO data. (Abridged)
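As a quick sanity check (not from the paper), the quoted physical densities can be combined to recover the reported matter density parameter, assuming the Planck baseline of a single 0.06 eV neutrino:

```python
# hedged numeric check of the Planck 2018 base-ΛCDM parameters quoted above
h = 0.674                    # H0 = 67.4 km/s/Mpc, so h = H0 / 100
omega_c_h2 = 0.120           # cold dark matter density Ωc h²
omega_b_h2 = 0.0224          # baryon density Ωb h²
omega_nu_h2 = 0.06 / 93.14   # one 0.06 eV neutrino (baseline assumption; Ων h² ≈ Σmν / 93.14 eV)

# Ωm = (Ωc h² + Ωb h² + Ων h²) / h²
omega_m = (omega_c_h2 + omega_b_h2 + omega_nu_h2) / h**2
print(f"Omega_m ≈ {omega_m:.3f}")   # agrees with the reported 0.315
```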
3,077 citations
••
Jeffrey D. Stanaway, Ashkan Afshin, Emmanuela Gakidou, Stephen S Lim, +1050 more • Institutions (346)
TL;DR: This study estimated levels and trends in exposure, attributable deaths, and attributable disability-adjusted life-years (DALYs) by age group, sex, year, and location for 84 behavioural, environmental and occupational, and metabolic risks or groups of risks from 1990 to 2017 and explored the relationship between development and risk exposure.
2,910 citations
••
TL;DR: A diagnostic tool based on a deep-learning framework is established for screening patients with common treatable blinding retinal diseases, demonstrating performance comparable to that of human experts in classifying age-related macular degeneration and diabetic macular edema.
2,750 citations
••
TL;DR: This paper aims to demonstrate the efforts towards in-situ applicability of EMMARM, as to provide real-time information about concrete mechanical properties such as E-modulus and compressive strength.
2,734 citations
••
TL;DR: The results illustrate the importance of parameter tuning for optimizing classifier performance, and recommendations are made regarding parameter choices for these classifiers under a range of standard operating conditions.
Abstract: Taxonomic classification of marker-gene sequences is an important step in microbiome analysis. We present q2-feature-classifier (https://github.com/qiime2/q2-feature-classifier), a QIIME 2 plugin containing several novel machine-learning and alignment-based methods for taxonomy classification. We evaluated and optimized several commonly used classification methods implemented in QIIME 1 (RDP, BLAST, UCLUST, and SortMeRNA) and several new methods implemented in QIIME 2 (a scikit-learn naive Bayes machine-learning classifier, and alignment-based taxonomy consensus methods based on VSEARCH and BLAST+) for classification of bacterial 16S rRNA and fungal ITS marker-gene amplicon sequence data. The naive Bayes, BLAST+-based, and VSEARCH-based classifiers implemented in QIIME 2 meet or exceed the species-level accuracy of other commonly used methods designed for classification of marker gene sequences that were evaluated in this work. These evaluations, based on 19 mock communities and error-free sequence simulations, including classification of simulated "novel" marker-gene sequences, are available in our extensible benchmarking framework, tax-credit (https://github.com/caporaso-lab/tax-credit-data). Our results illustrate the importance of parameter tuning for optimizing classifier performance, and we make recommendations regarding parameter choices for these classifiers under a range of standard operating conditions. q2-feature-classifier and tax-credit are both free, open-source, BSD-licensed packages available on GitHub.
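The naive Bayes approach behind the scikit-learn classifier can be sketched in a few lines of stdlib Python (a toy illustration, not q2-feature-classifier's implementation; the sequences and taxon names below are invented):

```python
from collections import Counter
from math import log

def kmers(seq, k=4):
    # decompose a sequence into overlapping k-mers (the classifier's features)
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# tiny made-up reference set: taxon label -> example sequences
train = {
    "TaxonA": ["ACGTACGTACGT", "ACGTACGAACGT"],
    "TaxonB": ["TTGGCCTTGGCC", "TTGGCCATGGCC"],
}

# fit per-taxon k-mer count models; vocab is used for add-one smoothing
models, vocab = {}, set()
for label, seqs in train.items():
    counts = Counter(km for s in seqs for km in kmers(s))
    models[label] = counts
    vocab |= set(counts)

def classify(seq):
    # score each taxon by the smoothed log-likelihood of the query's k-mers
    scores = {}
    for label, counts in models.items():
        total = sum(counts.values()) + len(vocab)
        scores[label] = sum(log((counts[km] + 1) / total) for km in kmers(seq))
    return max(scores, key=scores.get)

print(classify("ACGTACGTAC"))   # prints "TaxonA"
```

The real plugin trains on full reference databases (e.g. Greengenes or UNITE) and exposes the fitted classifier through QIIME 2 actions; the smoothing and feature extraction above are simplified stand-ins.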
2,475 citations
••
2,416 citations
••
18 Jun 2018
TL;DR: This work operates directly on raw point clouds by popping up RGB-D scans and leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall even for small objects.
Abstract: In this work, we study 3D object detection from RGB-D data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall even for small objects. Benefiting from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on the KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability.
••
Columbia University, University of Amsterdam, University of Bologna, University of Mainz, University of Münster, University of Coimbra, New York University Abu Dhabi, University of Zurich, Stockholm University, Rensselaer Polytechnic Institute, Max Planck Society, Weizmann Institute of Science, University of Freiburg, University of Nantes, University of California, San Diego, University of Chicago, Purdue University, Rice University, Pierre-and-Marie-Curie University, University of California, Los Angeles
TL;DR: In this article, a search for weakly interacting massive particles (WIMPs) using 278.8 days of data collected with the XENON1T experiment at LNGS is reported.
Abstract: We report on a search for weakly interacting massive particles (WIMPs) using 278.8 days of data collected with the XENON1T experiment at LNGS. XENON1T utilizes a liquid xenon time projection chamber with a fiducial mass of (1.30±0.01) ton, resulting in a 1.0 ton yr exposure. The energy region of interest, [1.4, 10.6] keVee ([4.9, 40.9] keVnr), exhibits an ultralow electron recoil background rate of [82 +5/−3 (syst) ± 3 (stat)] events/(ton yr keVee). No significant excess over background is found, and a profile likelihood analysis parametrized in spatial and energy dimensions excludes new parameter space for the WIMP-nucleon spin-independent elastic scatter cross section for WIMP masses above 6 GeV/c², with a minimum of 4.1×10⁻⁴⁷ cm² at 30 GeV/c² at the 90% confidence level.
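A quick arithmetic check (not from the paper) confirms that the quoted fiducial mass and live time reproduce the stated ~1.0 ton yr exposure:

```python
# exposure = fiducial mass × live time, converted from days to years
fiducial_mass_ton = 1.30
live_days = 278.8
exposure_ton_yr = fiducial_mass_ton * live_days / 365.25
print(f"{exposure_ton_yr:.2f} ton yr")   # prints "0.99 ton yr", i.e. the quoted 1.0 ton yr
```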
••
TL;DR: It is suggested that deep learning approaches could be the vehicle for translating big biomedical data into improved human health, and that holistic and meaningfully interpretable architectures are needed to bridge deep learning models and human interpretability.
Abstract: Gaining knowledge and actionable insights from complex, high-dimensional and heterogeneous biomedical data remains a key challenge in transforming health care. Various types of data have been emerging in modern biomedical research, including electronic health records, imaging, -omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured. Traditional data mining and statistical learning approaches typically need to first perform feature engineering to obtain effective and more robust features from those data, and then build prediction or clustering models on top of them. Both steps are challenging when the data are complicated and sufficient domain knowledge is lacking. The latest advances in deep learning technologies provide new effective paradigms to obtain end-to-end learning models from complex data. In this article, we review the recent literature on applying deep learning technologies to advance the health care domain. Based on the analyzed work, we suggest that deep learning approaches could be the vehicle for translating big biomedical data into improved human health. However, we also note limitations and needs for improved methods development and applications, especially in terms of ease-of-understanding for domain experts and citizen scientists. We discuss such challenges and suggest developing holistic and meaningful interpretable architectures to bridge deep learning models and human interpretability.
••
TL;DR: How the initial discovery of a role for NF-κB in linking inflammation and cancer led to an improved understanding of tumour-elicited inflammation and its effects on anticancer immunity is discussed.
Abstract: Fourteen years have passed since nuclear factor-κB (NF-κB) was first shown to serve as a molecular lynchpin that links persistent infections and chronic inflammation to increased cancer risk. The young field of inflammation and cancer has now come of age, and inflammation has been recognized by the broad cancer research community as a hallmark and cause of cancer. Here, we discuss how the initial discovery of a role for NF-κB in linking inflammation and cancer led to an improved understanding of tumour-elicited inflammation and its effects on anticancer immunity.
••
18 Jun 2018
TL;DR: In this paper, a multi-level adversarial network is proposed to perform output-space domain adaptation at different feature levels, evaluated under settings including synthetic-to-real and cross-city scenarios.
Abstract: Convolutional neural network-based approaches for semantic segmentation rely on supervision with pixel-level ground truth, but may not generalize well to unseen image domains. As the labeling process is tedious and labor intensive, developing algorithms that can adapt source ground truth labels to the target domain is of great interest. In this paper, we propose an adversarial learning method for domain adaptation in the context of semantic segmentation. Considering semantic segmentations as structured outputs that contain spatial similarities between the source and target domains, we adopt adversarial learning in the output space. To further enhance the adapted model, we construct a multi-level adversarial network to effectively perform output space domain adaptation at different feature levels. Extensive experiments and ablation study are conducted under various domain adaptation settings, including synthetic-to-real and cross-city scenarios. We show that the proposed method performs favorably against the state-of-the-art methods in terms of accuracy and visual quality.
••
Smithsonian Environmental Research Center, University of California, San Diego, Leibniz Institute of Marine Sciences, University of Liège, Monterey Bay Aquarium Research Institute, Lund University, Centre national de la recherche scientifique, Fisheries and Oceans Canada, Cayetano Heredia University, University of the Philippines Diliman, State University of New York College of Environmental Science and Forestry, Kuwait Institute for Scientific Research, University of Cape Town, Department of Agriculture, Forestry and Fisheries, Louisiana State University, University of Maryland Center for Environmental Science, University of South Florida St. Petersburg, Polish Academy of Sciences, University of Hong Kong, East China Normal University
TL;DR: Improved numerical models of oceanographic processes that control oxygen depletion and the large-scale influence of altered biogeochemical cycles are needed to better predict the magnitude and spatial patterns of deoxygenation in the open ocean, as well as feedbacks to climate.
Abstract:

BACKGROUND: Oxygen concentrations in both the open ocean and coastal waters have been declining since at least the middle of the 20th century. This oxygen loss, or deoxygenation, is one of the most important changes occurring in an ocean increasingly modified by human activities that have raised temperatures, CO2 levels, and nutrient inputs and have altered the abundances and distributions of marine species. Oxygen is fundamental to biological and biogeochemical processes in the ocean. Its decline can cause major changes in ocean productivity, biodiversity, and biogeochemical cycles. Analyses of direct measurements at sites around the world indicate that oxygen-minimum zones in the open ocean have expanded by several million square kilometers and that hundreds of coastal sites now have oxygen concentrations low enough to limit the distribution and abundance of animal populations and alter the cycling of important nutrients.

ADVANCES: In the open ocean, global warming, which is primarily caused by increased greenhouse gas emissions, is considered the primary cause of ongoing deoxygenation. Numerical models project further oxygen declines during the 21st century, even with ambitious emission reductions. Rising global temperatures decrease oxygen solubility in water, increase the rate of oxygen consumption via respiration, and are predicted to reduce the introduction of oxygen from the atmosphere and surface waters into the ocean interior by increasing stratification and weakening ocean overturning circulation. In estuaries and other coastal systems strongly influenced by their watershed, oxygen declines have been caused by increased loadings of nutrients (nitrogen and phosphorus) and organic matter, primarily from agriculture, sewage, and the combustion of fossil fuels. In many regions, further increases in nitrogen discharges to coastal waters are projected as human populations and agricultural production rise. Climate change exacerbates oxygen decline in coastal systems through similar mechanisms as those in the open ocean, as well as by increasing nutrient delivery from watersheds that will experience increased precipitation. Expansion of low-oxygen zones can increase production of N2O, a potent greenhouse gas; reduce eukaryote biodiversity; alter the structure of food webs; and negatively affect food security and livelihoods. Both acidification and increasing temperature are mechanistically linked with the process of deoxygenation and combine with low-oxygen conditions to affect biogeochemical, physiological, and ecological processes. However, an important paradox to consider in predicting large-scale effects of future deoxygenation is that high levels of productivity in nutrient-enriched coastal systems and upwelling areas associated with oxygen-minimum zones also support some of the world's most prolific fisheries.

OUTLOOK: Major advances have been made toward understanding patterns, drivers, and consequences of ocean deoxygenation, but there is a need to improve predictions at large spatial and temporal scales important to ecosystem services provided by the ocean. Improved numerical models of oceanographic processes that control oxygen depletion and the large-scale influence of altered biogeochemical cycles are needed to better predict the magnitude and spatial patterns of deoxygenation in the open ocean, as well as feedbacks to climate. Developing and verifying the next generation of these models will require increased in situ observations and improved mechanistic understanding on a variety of scales. Models useful for managing nutrient loads can simulate oxygen loss in coastal waters with some skill, but their ability to project future oxygen loss is often hampered by insufficient data and climate model projections on drivers at appropriate temporal and spatial scales. Predicting deoxygenation-induced changes in ecosystem services and human welfare requires scaling effects that are measured on individual organisms to populations, food webs, and fisheries stocks; considering combined effects of deoxygenation and other ocean stressors; and placing an increased research emphasis on developing nations. Reducing the impacts of other stressors may provide some protection to species negatively affected by low-oxygen conditions. Ultimately, though, limiting deoxygenation and its negative effects will necessitate a substantial global decrease in greenhouse gas emissions, as well as reductions in nutrient discharges to coastal waters.
••
University of East Anglia, University of Exeter, Alfred Wegener Institute for Polar and Marine Research, Max Planck Society, Ludwig Maximilian University of Munich, Commonwealth Scientific and Industrial Research Organisation, Karlsruhe Institute of Technology, Atlantic Oceanographic and Meteorological Laboratory, Cooperative Institute for Marine and Atmospheric Studies, École Normale Supérieure, Centre national de la recherche scientifique, University of Maryland, College Park, University of Virginia, Flanders Marine Institute, Oak Ridge National Laboratory, Woods Hole Research Center, University of Illinois at Urbana–Champaign, Geophysical Institute, University of Bergen, Met Office, University of California, San Diego, Netherlands Environmental Assessment Agency, Utrecht University, University of Paris, Oeschger Centre for Climate Change Research, Tsinghua University, National Center for Atmospheric Research, Institute of Arctic and Alpine Research, National Institute for Environmental Studies, Cooperative Research Centre, Hobart Corporation, Japan Agency for Marine-Earth Science and Technology, University of Groningen, Wageningen University and Research Centre, Bjerknes Centre for Climate Research, Goddard Space Flight Center, Leibniz Institute for Baltic Sea Research, Princeton University, Leibniz Institute of Marine Sciences, National Oceanic and Atmospheric Administration, Auburn University, Food and Agriculture Organization, VU University Amsterdam
TL;DR: In this article, the authors describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties, including emissions from land use and land-use change data and bookkeeping models.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the "global carbon budget" – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties. Fossil CO2 emissions (EFF) are based on energy statistics and cement production data, while emissions from land use and land-use change (ELUC), mainly deforestation, are based on land use and land-use change data and bookkeeping models. Atmospheric CO2 concentration is measured directly and its growth rate (GATM) is computed from the annual changes in concentration. The ocean CO2 sink (SOCEAN) and terrestrial CO2 sink (SLAND) are estimated with global process models constrained by observations. The resulting carbon budget imbalance (BIM), the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere, is a measure of imperfect data and understanding of the contemporary carbon cycle. All uncertainties are reported as ±1σ. For the last decade available (2008–2017), EFF was 9.4±0.5 GtC yr⁻¹, ELUC 1.5±0.7 GtC yr⁻¹, GATM 4.7±0.02 GtC yr⁻¹, SOCEAN 2.4±0.5 GtC yr⁻¹, and SLAND 3.2±0.8 GtC yr⁻¹, with a budget imbalance BIM of 0.5 GtC yr⁻¹ indicating overestimated emissions and/or underestimated sinks. For the year 2017 alone, the growth in EFF was about 1.6% and emissions increased to 9.9±0.5 GtC yr⁻¹. Also for 2017, ELUC was 1.4±0.7 GtC yr⁻¹, GATM was 4.6±0.2 GtC yr⁻¹, SOCEAN was 2.5±0.5 GtC yr⁻¹, and SLAND was 3.8±0.8 GtC yr⁻¹, with a BIM of 0.3 GtC. The global atmospheric CO2 concentration reached 405.0±0.1 ppm averaged over 2017. For 2018, preliminary data for the first 6–9 months indicate a renewed growth in EFF of +2.7% (range of 1.8% to 3.7%) based on national emission projections for China, the US, the EU, and India and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. The analysis presented here shows that the mean and trend in the five components of the global carbon budget are consistently estimated over the period 1959–2017, but discrepancies of up to 1 GtC yr⁻¹ persist for the representation of semi-decadal variability in CO2 fluxes. A detailed comparison among individual estimates and the introduction of a broad range of observations show (1) no consensus in the mean and trend in land-use change emissions, (2) a persistent low agreement among the different methods on the magnitude of the land CO2 flux in the northern extra-tropics, and (3) an apparent underestimation of the CO2 variability by ocean models, originating outside the tropics. This living data update documents changes in the methods and data sets used in this new global carbon budget and the progress in understanding the global carbon cycle compared with previous publications of this data set (Le Quéré et al., 2018, 2016, 2015a, b, 2014, 2013). All results presented here can be downloaded from https://doi.org/10.18160/GCP-2018.
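The budget imbalance is defined as total estimated emissions minus the estimated changes in the three reservoirs. A hedged check with the rounded 2008–2017 means quoted above (rounding yields 0.6 rather than the reported 0.5 GtC/yr, which is computed from unrounded components):

```python
# budget identity: B_IM = (E_FF + E_LUC) - (G_ATM + S_OCEAN + S_LAND), in GtC/yr
E_FF, E_LUC = 9.4, 1.5                    # fossil and land-use-change emissions
G_ATM, S_OCEAN, S_LAND = 4.7, 2.4, 3.2    # atmospheric growth, ocean sink, land sink

B_IM = (E_FF + E_LUC) - (G_ATM + S_OCEAN + S_LAND)
print(f"B_IM ≈ {B_IM:.1f} GtC/yr")   # rounded inputs give 0.6; the paper reports 0.5
```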
••
TL;DR: This review focuses on studies in humans to describe challenges and propose strategies that leverage existing knowledge to move rapidly from correlation to causation and ultimately to translation into therapies.
Abstract: Our understanding of the link between the human microbiome and disease, including obesity, inflammatory bowel disease, arthritis and autism, is rapidly expanding. Improvements in the throughput and accuracy of DNA sequencing of the genomes of microbial communities that are associated with human samples, complemented by analysis of transcriptomes, proteomes, metabolomes and immunomes and by mechanistic experiments in model systems, have vastly improved our ability to understand the structure and function of the microbiome in both diseased and healthy states. However, many challenges remain. In this review, we focus on studies in humans to describe these challenges and propose strategies that leverage existing knowledge to move rapidly from correlation to causation and ultimately to translation into therapies.
••
12 Mar 2018
TL;DR: Dense upsampling convolution (DUC) is designed to generate pixel-level predictions, capturing and decoding detailed information that is generally missing in bilinear upsampling, and a hybrid dilated convolution (HDC) framework is proposed for the encoding phase.
Abstract: Recent advances in deep learning, especially deep convolutional neural networks (CNNs), have led to significant improvement over previous semantic segmentation systems. Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are of both theoretical and practical value. First, we design dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields (RF) of the network to aggregate global information; 2) alleviates what we call the "gridding issue" caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a state-of-the-art result of 80.1% mIOU on the test set at the time of submission. We also achieved state-of-the-art results overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Our source code can be found at https://github.com/TuSimple/TuSimple-DUC.
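The "gridding issue" can be demonstrated with a small 1-D sketch (not the paper's code; the specific dilation rates are illustrative): stacking 3-tap dilated convolutions with one repeated rate reaches only a sparse grid of input offsets, while HDC-style mixed rates cover the receptive field densely.

```python
from itertools import product

def coverage(dilations):
    # input offsets reachable by stacking 3-tap (kernel size 3) dilated convs:
    # each layer contributes an offset of -d, 0, or +d, and offsets add up
    taps = [(-d, 0, d) for d in dilations]
    return {sum(combo) for combo in product(*taps)}

span = set(range(-6, 7))              # full receptive field of these stacks
gridded = coverage([2, 2, 2])         # repeated rate: only even offsets reachable
hybrid = coverage([1, 2, 3])          # HDC-style mixed rates: dense coverage

print(sorted(span - gridded))   # odd offsets are missed -> gridding artifacts
print(sorted(span - hybrid))    # prints [] -> no holes in the receptive field
```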
••
Verneri Anttila1, Verneri Anttila2, Brendan Bulik-Sullivan2, Brendan Bulik-Sullivan1 +717 more•Institutions (270)
TL;DR: It is demonstrated that, in the general population, the personality trait neuroticism is significantly correlated with almost every psychiatric disorder and migraine, and it is shown that both psychiatric and neurological disorders have robust correlations with cognitive and personality measures.
Abstract: Disorders of the brain can exhibit considerable epidemiological comorbidity and often share symptoms, provoking debate about their etiologic overlap. We quantified the genetic sharing of 25 brain disorders from genome-wide association studies of 265,218 patients and 784,643 control participants and assessed their relationship to 17 phenotypes from 1,191,588 individuals. Psychiatric disorders share common variant risk, whereas neurological disorders appear more distinct from one another and from the psychiatric disorders. We also identified significant sharing between disorders and a number of brain phenotypes, including cognitive measures. Further, we conducted simulations to explore how statistical power, diagnostic misclassification, and phenotypic heterogeneity affect genetic correlations. These results highlight the importance of common genetic variation as a risk factor for brain disorders and the value of heritability-based methods in understanding their etiology.
••
TL;DR: With continued high rates of adult obesity and DM along with an aging population, NAFLD‐related liver disease and mortality will increase in the United States, and strategies to slow the growth of NAFLD cases and therapeutic options are necessary to mitigate disease burden.
••
TL;DR: ASTRAL-III is a faster version of the ASTRAL method for phylogenetic reconstruction that can scale to 10,000 species; contracting low-support branches in the input gene trees further improves accuracy.
Abstract: Evolutionary histories can be discordant across the genome, and such discordances need to be considered in reconstructing the species phylogeny. ASTRAL is one of the leading methods for inferring species trees from gene trees while accounting for gene tree discordance. ASTRAL uses dynamic programming to search for the tree that shares the maximum number of quartet topologies with the input gene trees, restricting itself to a predefined set of bipartitions. We introduce ASTRAL-III, which substantially improves the running time of ASTRAL-II and guarantees polynomial running time as a function of both the number of species (n) and the number of genes (k). ASTRAL-III limits the bipartition constraint set (X) to grow at most linearly with n and k. Moreover, it handles polytomies more efficiently than ASTRAL-II, better exploits similarities between gene trees, and uses several techniques to avoid searching parts of the search space that are mathematically guaranteed not to include the optimal tree. The asymptotic running time of ASTRAL-III in the presence of polytomies is $O\left((nk)^{1.726} D\right)$, where $D = O(nk)$ is the sum of degrees of all unique nodes in the input trees. These running time improvements enable us to test whether contracting low-support branches in gene trees improves accuracy by reducing noise. In extensive simulations, we show that removing branches with very low support (e.g., below 10%) improves accuracy, while overly aggressive filtering is harmful. On a biological avian phylogenomic dataset of 14K genes, we observe that contracting low-support branches greatly improves results. ASTRAL-III is a faster version of the ASTRAL method for phylogenetic reconstruction and can scale up to 10,000 species. With ASTRAL-III, low-support branches can be removed, resulting in improved accuracy.
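To make the objective concrete, here is a hedged brute-force sketch of the quartet score that ASTRAL maximises: the number of four-taxon topologies a candidate species tree shares with the input gene trees. The helper names are hypothetical and trees are represented simply as collections of bipartitions; the real ASTRAL implementation instead uses dynamic programming over a constrained bipartition set to avoid this O(n^4) enumeration.

```python
from itertools import combinations

def quartet_topology(biparts, q):
    """Induced topology of quartet q (4 taxa) in a tree given as a
    collection of bipartitions, each a frozenset of the taxa on one side.
    Returns a pair like {{a, b}, {c, d}}, or None if unresolved."""
    for side in biparts:
        inside = [t for t in q if t in side]
        if len(inside) == 2:
            pair = frozenset(inside)
            return frozenset({pair, frozenset(set(q) - pair)})
    return None

def quartet_score(species_biparts, gene_biparts_list, taxa):
    """Number of quartet topologies the species tree shares with the
    input gene trees -- the quantity ASTRAL searches to maximise."""
    score = 0
    for q in combinations(sorted(taxa), 4):
        st = quartet_topology(species_biparts, q)
        for gb in gene_biparts_list:
            if st is not None and quartet_topology(gb, q) == st:
                score += 1
    return score
```

For example, with taxa A-D, a species tree containing the bipartition AB|CD scores 1 against a gene tree with the same bipartition and 0 against one with AC|BD.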
••
01 Nov 2018
TL;DR: In this article, a self-attention based sequential model (SASRec) is proposed, which uses an attention mechanism to identify which items are 'relevant' from a user's action history and uses them to predict the next item.
Abstract: Sequential dynamics are a key feature of many modern recommender systems, which seek to capture the 'context' of users' activities on the basis of actions they have performed recently. To capture such patterns, two approaches have proliferated: Markov Chains (MCs) and Recurrent Neural Networks (RNNs). Markov Chains assume that a user's next action can be predicted on the basis of just their last (or last few) actions, while RNNs in principle allow longer-term semantics to be uncovered. Generally speaking, MC-based methods perform best on extremely sparse datasets, where model parsimony is critical, while RNNs perform better on denser datasets, where higher model complexity is affordable. The goal of our work is to balance these two approaches, by proposing a self-attention based sequential model (SASRec) that allows us to capture long-term semantics (like an RNN) but, using an attention mechanism, makes its predictions based on relatively few actions (like an MC). At each time step, SASRec seeks to identify which items are 'relevant' from a user's action history, and uses them to predict the next item. Extensive empirical studies show that our method outperforms various state-of-the-art sequential models (including MC/CNN/RNN-based approaches) on both sparse and dense datasets. Moreover, the model is an order of magnitude more efficient than comparable CNN/RNN-based models. Visualizations of attention weights also show how our model adaptively handles datasets of varying density and uncovers meaningful patterns in activity sequences.
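A minimal numpy sketch of the core idea, assuming a single causal self-attention layer over item embeddings (real SASRec additionally learns query/key/value projections and position embeddings, and stacks several attention blocks with feed-forward layers):

```python
import numpy as np

def self_attention_next_item(seq, item_emb):
    """Score candidate next items for one user from their action history
    using one causal self-attention layer (a toy SASRec-style sketch)."""
    E = item_emb[seq]                      # (t, d) embeddings of past items
    d = E.shape[1]
    scores = E @ E.T / np.sqrt(d)          # (t, t) attention logits
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -np.inf                 # causal: attend only to the past
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    context = weights @ E                  # (t, d) attended representations
    return item_emb @ context[-1]          # one score per candidate item

rng = np.random.default_rng(0)
emb = rng.normal(size=(50, 8))             # 50 items, 8-dim embeddings
print(self_attention_next_item([3, 17, 42], emb).shape)  # (50,)
```

The causal mask is what lets one forward pass produce a prediction at every position of the sequence during training, while inference only needs the final position's scores.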
••
Joint Genome Institute1, Bigelow Laboratory For Ocean Sciences2, United States Department of Agriculture3, University of California, Merced4, Broad Institute5, Oak Ridge National Laboratory6, Michigan State University7, California State University, San Bernardino8, J. Craig Venter Institute9, Max Planck Society10, Argonne National Laboratory11, Pacific Northwest National Laboratory12, University of British Columbia13, University of Southern California14, Science for Life Laboratory15, University of Vermont16, Georgia Institute of Technology17, University of Illinois at Urbana–Champaign18, University of Texas at Austin19, University of Vienna20, University of California, Davis21, University of Nevada, Las Vegas22, University of Wisconsin-Madison23, Cooperative Institute for Research in Environmental Sciences24, University of California, San Diego25, European Bioinformatics Institute26, National Institutes of Health27, University of Queensland28, Saint Petersburg State University29, University of California, Berkeley30
TL;DR: Two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences are presented: the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), both including estimates of genome completeness and contamination.
Abstract: We present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS) standard. The new standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG); both require reporting of information including, but not limited to, assembly quality and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.
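Completeness and contamination are typically estimated from single-copy marker genes; the sketch below shows the basic arithmetic, assuming a hypothetical `found_counts` table of marker-gene copy numbers (tools such as CheckM refine this with lineage-specific, collocated marker sets):

```python
def completeness_contamination(found_counts, marker_set):
    """Marker-gene based completeness/contamination estimate, simplified.
    found_counts: dict mapping marker gene -> copies observed in the
    assembled genome; marker_set: the single-copy markers expected."""
    n = len(marker_set)
    present = sum(1 for m in marker_set if found_counts.get(m, 0) >= 1)
    extra = sum(found_counts.get(m, 0) - 1 for m in marker_set
                if found_counts.get(m, 0) > 1)
    completeness = 100.0 * present / n    # % of expected markers found
    contamination = 100.0 * extra / n     # % of markers found in surplus
    return completeness, contamination
```

Under MIMAG, for example, a high-quality draft genome requires >90% completeness and <5% contamination by such estimates.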
••
Cornell University1, Yale University2, Washington University in St. Louis3, University of Michigan4, University of Vermont5, University of Colorado Boulder6, Florida International University7, Virginia Commonwealth University8, University of Minnesota9, University of California, San Diego10, Harvard University11
TL;DR: This paper provides an overview of the imaging procedures of the ABCD study, the rationale for their selection, and preliminary quality-assurance results that support the feasibility and age-appropriateness of the procedures and the generalizability of findings to the existing literature.
••
King's College London1, University of Nottingham2, University of Naples Federico II3, Royal College of Surgeons in Ireland4, Swansea University5, University of Calgary6, Scripps Research Institute7, University of Melbourne8, University of California, San Diego9, Nanjing Medical University10, University of Liverpool11, La Trobe University12, University College London13, Universidade Federal de Minas Gerais14, University of Bath15, Queen Mary University of London16
TL;DR: The guidelines have been simplified for ease of understanding by authors, to make it more straightforward for peer reviewers to check compliance and to facilitate the curation of the journal's efforts to improve standards.
Abstract: This article updates the guidance published in 2015 for authors submitting papers to British Journal of Pharmacology (Curtis et al., 2015) and is intended to provide the rubric for peer review. Thus, it is directed towards authors, reviewers and editors. Explanations for many of the requirements were outlined previously and are not restated here. The new guidelines are intended to replace those published previously. The guidelines have been simplified for ease of understanding by authors, to make it more straightforward for peer reviewers to check compliance and to facilitate the curation of the journal's efforts to improve standards.
••
TL;DR: HUMAnN2 is developed: a tiered search strategy that enables fast, accurate, and species-resolved functional profiling of host-associated and environmental communities; 'contributional diversity' is introduced to explain patterns of ecological assembly across different microbial community types.
Abstract: Functional profiles of microbial communities are typically generated using comprehensive metagenomic or metatranscriptomic sequence read searches, which are time-consuming, prone to spurious mapping, and often limited to community-level quantification. We developed HUMAnN2, a tiered search strategy that enables fast, accurate, and species-resolved functional profiling of host-associated and environmental communities. HUMAnN2 identifies a community's known species, aligns reads to their pangenomes, performs translated search on unclassified reads, and finally quantifies gene families and pathways. Relative to pure translated search, HUMAnN2 is faster and produces more accurate gene family profiles. We applied HUMAnN2 to study clinal variation in marine metabolism, ecological contribution patterns among human microbiome pathways, variation in species' genomic versus transcriptional contributions, and strain profiling. Further, we introduce 'contributional diversity' to explain patterns of ecological assembly across different microbial community types.
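The tiered strategy can be sketched in a few lines (a toy illustration with dict lookups standing in for the real nucleotide and translated aligners, not the HUMAnN2 code): only reads that fail the fast per-species nucleotide tier fall through to the slower protein-level tier.

```python
from collections import Counter

def tiered_profile(reads, pangenome_index, protein_index):
    """Toy HUMAnN2-style tiered search: map each read to a gene family by
    a fast nucleotide lookup against detected species' pangenomes first,
    falling back to translated (protein) search only for the leftovers."""
    profile = Counter()
    unclassified = []
    for read in reads:
        family = pangenome_index.get(read)     # tier 1: nucleotide search
        if family is not None:
            profile[family] += 1
        else:
            unclassified.append(read)
    for read in unclassified:                  # tier 2: translated search
        family = protein_index.get(read)
        profile[family or "UNMAPPED"] += 1
    return profile
```

Because most reads resolve in the cheap first tier, the expensive translated search runs on only a small residue, which is the source of the speedup over pure translated search.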
••
TL;DR: This Review focuses on recent findings suggesting that operational taxonomic unit-based analyses should be replaced with methods based on exact sequence variants, on methods for integrating metagenomic and metabolomic data, and on issues surrounding compositional data analysis.
Abstract: Complex microbial communities shape the dynamics of various environments, ranging from the mammalian gastrointestinal tract to the soil. Advances in DNA sequencing technologies and data analysis have provided drastic improvements in microbiome analyses, for example, in taxonomic resolution, false discovery rate control and other properties, over earlier methods. In this Review, we discuss the best practices for performing a microbiome study, including experimental design, choice of molecular analysis technology, methods for data analysis and the integration of multiple omics data sets. We focus on recent findings that suggest that operational taxonomic unit-based analyses should be replaced with new methods that are based on exact sequence variants, methods for integrating metagenomic and metabolomic data, and issues surrounding compositional data analysis, where advances have been particularly rapid. We note that although some of these approaches are new, it is important to keep sight of the classic issues that arise during experimental design and relate to research reproducibility. We describe how keeping these issues in mind allows researchers to obtain more insight from their microbiome data sets.
••
Newcastle University1, Hannover Medical School2, University of Palermo3, Saga University4, University of Würzburg5, Istituto Superiore di Sanità6, RWTH Aachen University7, University of Barcelona8, University of California, San Diego9, Yokohama City University10, French Institute of Health and Medical Research11, VCU Medical Center12, University of Mainz13, Hiroshima University14, Peking University15, Goethe University Frankfurt16
TL;DR: NAFLD and NASH represent a large and growing public health problem, and efforts to understand this epidemic and to mitigate the disease burden are needed if obesity and DM continue to increase at current and historical rates.
••
Bela Abolfathi1, D. S. Aguado2, Gabriela Aguilar3, Carlos Allende Prieto2 +361 more•Institutions (94)
TL;DR: SDSS-IV is the fourth generation of the Sloan Digital Sky Survey and has been in operation since 2014 July. This paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14).
Abstract: The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since 2014 July. This paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14). This release makes the data taken by SDSS-IV in its first two years of operation (2014-2016 July) public. Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey; the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data-driven machine-learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from the SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS web site (www.sdss.org) has been updated for this release and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020 and will be followed by SDSS-V.