
Journal ArticleDOI
Wei Zhao1, Zheng Zhong, Xingzhi Xie1, Qizhi Yu, Jun Liu1 
TL;DR: Patients with confirmed COVID-19 pneumonia have typical imaging features that can be helpful in early screening of highly suspected cases and in evaluation of the severity and extent of disease.
Abstract: OBJECTIVE. The increasing number of cases of confirmed coronavirus disease (COVID-19) in China is striking. The purpose of this study was to investigate the relation between chest CT findings and the clinical conditions of COVID-19 pneumonia. MATERIALS AND METHODS. Data on 101 cases of COVID-19 pneumonia were retrospectively collected from four institutions in Hunan, China. Basic clinical characteristics and detailed imaging features were evaluated and compared between two groups on the basis of clinical status: nonemergency (mild or common disease) and emergency (severe or fatal disease). RESULTS. Patients 21-50 years old accounted for most (70.2%) of the cohort, and five (5.0%) patients had disease associated with a family outbreak. Most patients (78.2%) had fever as the onset symptom. Most patients with COVID-19 pneumonia had typical imaging features, such as ground-glass opacities (GGO) (87 [86.1%]) or mixed GGO and consolidation (65 [64.4%]), vascular enlargement in the lesion (72 [71.3%]), and traction bronchiectasis (53 [52.5%]). Lesions present on CT images were more likely to have a peripheral distribution (88 [87.1%]) and bilateral involvement (83 [82.2%]) and be lower lung predominant (55 [54.5%]) and multifocal (55 [54.5%]). Patients in the emergency group were older than those in the non-emergency group. Architectural distortion, traction bronchiectasis, and CT involvement score aided in evaluation of the severity and extent of the disease. CONCLUSION. Patients with confirmed COVID-19 pneumonia have typical imaging features that can be helpful in early screening of highly suspected cases and in evaluation of the severity and extent of disease. Most patients with COVID-19 pneumonia have GGO or mixed GGO and consolidation and vascular enlargement in the lesion. Lesions are more likely to have peripheral distribution and bilateral involvement and be lower lung predominant and multifocal. CT involvement score can help in evaluation of the severity and extent of the disease.

983 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the anisotropic mechanical properties of a Ti-6Al-4V three-dimensional cruciform component fabricated using a directed energy deposition additive manufacturing (AM) process.

983 citations


Posted Content
TL;DR: This paper proposed bilinear models, which consist of two feature extractors whose outputs are multiplied using the outer product at each location of the image and pooled to obtain an image descriptor; the architecture can model local pairwise feature interactions in a translationally invariant manner.
Abstract: We propose bilinear models, a recognition architecture that consists of two feature extractors whose outputs are multiplied using the outer product at each location of the image and pooled to obtain an image descriptor. This architecture can model local pairwise feature interactions in a translationally invariant manner, which is particularly useful for fine-grained categorization. It also generalizes various orderless texture descriptors such as the Fisher vector, VLAD and O2P. We present experiments with bilinear models where the feature extractors are based on convolutional neural networks. The bilinear form simplifies gradient computation and allows end-to-end training of both networks using image labels only. Using networks initialized from the ImageNet dataset followed by domain-specific fine-tuning, we obtain 84.1% accuracy on the CUB-200-2011 dataset, requiring only category labels at training time. We present experiments and visualizations that analyze the effects of fine-tuning and the choice of the two networks on the speed and accuracy of the models. Results show that the architecture compares favorably to the existing state of the art on a number of fine-grained datasets while being substantially simpler and easier to train. Moreover, our most accurate model is fairly efficient, running at 8 frames/sec on an NVIDIA Tesla K40 GPU. The source code for the complete system will be made available at this http URL
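As an illustration of the pooling step the abstract describes, here is a minimal Python sketch of outer-product (bilinear) pooling over two generic feature maps; the feature-map sizes and the signed-square-root/l2 normalization step are illustrative assumptions, not the authors' released code.

import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Pool two feature maps (H, W, C1) and (H, W, C2) into one descriptor.

    At every spatial location the outer product of the two feature vectors is
    taken; summing over locations gives an orderless (translationally
    invariant) image descriptor of size C1 * C2.
    """
    h, w, c1 = feat_a.shape
    _, _, c2 = feat_b.shape
    a = feat_a.reshape(h * w, c1)
    b = feat_b.reshape(h * w, c2)
    desc = (a.T @ b).flatten()          # sum of per-location outer products
    # Signed square root and l2 normalization, a common post-processing step
    # for this kind of descriptor (an assumption, not spelled out above).
    desc = np.sign(desc) * np.sqrt(np.abs(desc))
    return desc / (np.linalg.norm(desc) + 1e-12)

# Toy usage with random stand-ins for CNN feature maps.
fa = np.random.rand(14, 14, 64).astype(np.float32)
fb = np.random.rand(14, 14, 32).astype(np.float32)
print(bilinear_pool(fa, fb).shape)      # (2048,)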

983 citations


Posted Content
TL;DR: A new generative model is introduced in which samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching; the framework allows flexible model architectures, requires no sampling during training or the use of adversarial methods, and provides a learning objective that can be used for principled model comparisons.
Abstract: We introduce a new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching. Because gradients can be ill-defined and hard to estimate when the data resides on low-dimensional manifolds, we perturb the data with different levels of Gaussian noise, and jointly estimate the corresponding scores, i.e., the vector fields of gradients of the perturbed data distribution for all noise levels. For sampling, we propose an annealed Langevin dynamics where we use gradients corresponding to gradually decreasing noise levels as the sampling process gets closer to the data manifold. Our framework allows flexible model architectures, requires no sampling during training or the use of adversarial methods, and provides a learning objective that can be used for principled model comparisons. Our models produce samples comparable to GANs on MNIST, CelebA and CIFAR-10 datasets, achieving a new state-of-the-art inception score of 8.87 on CIFAR-10. Additionally, we demonstrate that our models learn effective representations via image inpainting experiments.
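A minimal Python sketch of the annealed Langevin dynamics sampler described above, assuming a pretrained noise-conditional score function score_fn(x, sigma) is available; the step-size rule and noise schedule here are illustrative.

import numpy as np

def annealed_langevin_sample(score_fn, shape, sigmas, steps_per_level=100,
                             base_step=2e-5, rng=None):
    """Sample by running Langevin dynamics at gradually decreasing noise levels.

    score_fn(x, sigma) is assumed to estimate the gradient of the log-density
    of the data perturbed with Gaussian noise of scale sigma (the jointly
    trained scores described above).
    """
    rng = rng or np.random.default_rng()
    x = rng.uniform(size=shape)                       # arbitrary initialization
    for sigma in sigmas:                              # largest noise level first
        step = base_step * (sigma / sigmas[-1]) ** 2  # shrink steps with the noise
        for _ in range(steps_per_level):
            noise = rng.standard_normal(shape)
            x = x + 0.5 * step * score_fn(x, sigma) + np.sqrt(step) * noise
    return x

# Toy usage: a roughly standard-normal "data" distribution whose score is
# approximated by -x at every noise level (purely for illustration).
sigmas = np.geomspace(1.0, 0.01, num=10)
print(annealed_langevin_sample(lambda x, s: -x, shape=(2,), sigmas=sigmas))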

983 citations


Journal ArticleDOI
TL;DR: A method is developed to identify pairs of traits that have multiple genetic causes in common and show evidence of a causal relationship; for example, the analysis shows evidence that increased body mass index causally increases triglyceride levels.
Abstract: Joseph Pickrell and colleagues analyze genome-wide association data for 42 human phenotypes or diseases and identify several hundred loci influencing multiple traits. They also find several traits with overlapping genetic architectures as well as pairs of traits showing evidence of a causal relationship.

983 citations


Proceedings ArticleDOI
07 Jun 2015
TL;DR: This paper proposes a multi-context deep learning framework for salient object detection that employs deep Convolutional Neural Networks to model saliency of objects in images and investigates different pre-training strategies to provide a better initialization for training the deep neural networks.
Abstract: Low-level saliency cues or priors do not produce good enough saliency detection results, especially when the salient object appears in a low-contrast background with a confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model the saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection is investigated. Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.

983 citations


Journal ArticleDOI
TL;DR: The Landsat 8 Operational Land Imager (OLI) atmospheric correction algorithm is developed using the Second Simulation of the Satellite Signal in the Solar Spectrum Vectorial (6SV) model, refined to take advantage of the narrow OLI spectral bands, improved radiometric resolution, and signal-to-noise ratio.

983 citations


Journal ArticleDOI
William J. Astle, Heather Elding1, Heather Elding2, Tao Jiang3, Dave Allen4, Dace Ruklisa3, Dace Ruklisa4, Alice L. Mann2, Daniel Mead2, Heleen J. Bouman2, Fernando Riveros-Mckay2, Myrto Kostadima3, Myrto Kostadima4, Myrto Kostadima5, John J. Lambourne4, John J. Lambourne3, Suthesh Sivapalaratnam6, Suthesh Sivapalaratnam3, Kate Downes3, Kate Downes4, Kousik Kundu3, Kousik Kundu2, Lorenzo Bomba2, Kim Berentsen7, John Bradley1, John Bradley3, Louise C. Daugherty3, Louise C. Daugherty4, Olivier Delaneau8, Kathleen Freson9, Stephen F. Garner3, Stephen F. Garner4, Luigi Grassi3, Luigi Grassi4, Jose A. Guerrero3, Jose A. Guerrero4, Matthias Haimel3, Eva M. Janssen-Megens7, Anita Kaan7, Mihir A Kamat3, Bowon Kim7, Amit Mandoli7, Jonathan Marchini10, Jonathan Marchini11, Joost H.A. Martens7, Stuart Meacham4, Stuart Meacham3, Karyn Megy3, Karyn Megy4, Jared O'Connell10, Jared O'Connell11, Romina Petersen3, Romina Petersen4, Nilofar Sharifi7, S.M. Sheard, James R Staley3, Salih Tuna3, Martijn van der Ent7, Klaudia Walter2, Shuang-Yin Wang7, Eleanor Wheeler2, Steven P. Wilder5, Valentina Iotchkova2, Valentina Iotchkova5, Carmel Moore3, Jennifer G. Sambrook3, Jennifer G. Sambrook4, Hendrik G. Stunnenberg7, Emanuele Di Angelantonio1, Emanuele Di Angelantonio3, Emanuele Di Angelantonio12, Stephen Kaptoge3, Stephen Kaptoge1, Taco W. Kuijpers13, Enrique Carrillo-de-Santa-Pau, David Juan, Daniel Rico14, Alfonso Valencia, Lu Chen2, Lu Chen3, Bing Ge15, Louella Vasquez2, Tony Kwan15, Diego Garrido-Martín16, Stephen Watt2, Ying Yang2, Roderic Guigó16, Stephan Beck17, Dirk S. Paul3, Dirk S. Paul17, Tomi Pastinen15, David Bujold15, Guillaume Bourque15, Mattia Frontini3, Mattia Frontini12, Mattia Frontini4, John Danesh, David J. Roberts18, David J. Roberts19, Willem H. Ouwehand, Adam S. Butterworth12, Adam S. Butterworth1, Adam S. Butterworth3, Nicole Soranzo 
17 Nov 2016-Cell
TL;DR: A genome-wide association analysis in the UK Biobank and INTERVAL studies is performed, providing evidence of shared genetic pathways linking blood cell indices with complex pathologies, including autoimmune diseases, schizophrenia, and coronary heart disease, and evidence suggesting previously reported population associations between blood cell indices and cardiovascular disease may be non-causal.

982 citations


Posted Content
TL;DR: Stochastic depth as discussed by the authors randomly drops a subset of layers during training and bypasses them with the identity function, which can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error.
Abstract: Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91% on CIFAR-10).
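A minimal PyTorch sketch of the training rule described above, assuming a generic two-convolution residual branch: with some survival probability the branch is kept for a mini-batch, otherwise it is bypassed by the identity, and at test time its output is scaled by that probability. This illustrates the idea, not the authors' exact architecture.

import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    """Residual block that is randomly skipped during training."""

    def __init__(self, channels, survival_prob=0.8):
        super().__init__()
        self.survival_prob = survival_prob
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        if self.training:
            if torch.rand(1).item() < self.survival_prob:
                out = x + self.branch(x)   # block survives this mini-batch
            else:
                out = x                    # block dropped: identity bypass
        else:
            # Scale by the survival probability so expectations match training.
            out = x + self.survival_prob * self.branch(x)
        return self.relu(out)

block = StochasticDepthBlock(16)
print(block(torch.randn(4, 16, 32, 32)).shape)   # torch.Size([4, 16, 32, 32])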

982 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this paper, a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision is presented, which operates on full-sized images and jointly computes pixel-level interest point locations and associated descriptors in one forward pass.
Abstract: This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision. As opposed to patch-based neural networks, our fully-convolutional model operates on full-sized images and jointly computes pixel-level interest point locations and associated descriptors in one forward pass. We introduce Homographic Adaptation, a multi-scale, multi-homography approach for boosting interest point detection repeatability and performing cross-domain adaptation (e.g., synthetic-to-real). Our model, when trained on the MS-COCO generic image dataset using Homographic Adaptation, is able to repeatedly detect a much richer set of interest points than the initial pre-adapted deep model and any other traditional corner detector. The final system gives rise to state-of-the-art homography estimation results on HPatches when compared to LIFT, SIFT and ORB.
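A rough Python/OpenCV sketch of the Homographic Adaptation step described above, assuming some base detector detect_heatmap(image) that returns a per-pixel interest-point map: detections are aggregated over random homographic warps of the input and mapped back to the original frame. The homography sampler and the toy gradient-based "detector" are stand-ins, not the paper's network.

import numpy as np
import cv2

def random_homography(h, w, max_shift=0.15, rng=None):
    """Sample a homography by jittering the image corners (illustrative only)."""
    rng = rng or np.random.default_rng()
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = rng.uniform(-max_shift, max_shift, size=(4, 2)) * [w, h]
    dst = (src + jitter).astype(np.float32)
    return cv2.getPerspectiveTransform(src, dst)

def homographic_adaptation(image, detect_heatmap, num_homographies=50):
    """Aggregate detections over random warps to boost repeatability."""
    h, w = image.shape[:2]
    acc = detect_heatmap(image).astype(np.float32)        # identity warp
    count = np.ones_like(acc, dtype=np.float32)
    for _ in range(num_homographies):
        H = random_homography(h, w)
        warped = cv2.warpPerspective(image, H, (w, h))
        heat = detect_heatmap(warped).astype(np.float32)
        Hinv = np.linalg.inv(H)
        # Map the heatmap (and a validity mask) back to the original frame.
        acc += cv2.warpPerspective(heat, Hinv, (w, h))
        count += cv2.warpPerspective(np.ones_like(heat), Hinv, (w, h))
    return acc / np.maximum(count, 1e-6)

# Toy usage with a gradient-magnitude "detector" standing in for the network.
def toy_detector(img):
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1)
    return np.sqrt(gx ** 2 + gy ** 2)

img = np.random.rand(120, 160).astype(np.float32)
print(homographic_adaptation(img, toy_detector, num_homographies=10).shape)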

982 citations


Proceedings ArticleDOI
Ilya Mironov1
01 Aug 2017
TL;DR: This work argues that the Rényi divergence can be used as a privacy definition, compactly and accurately representing guarantees on the tails of the privacy loss, and demonstrates that the new definition shares many important properties with the standard definition of differential privacy.
Abstract: We propose a natural relaxation of differential privacy based on the Rényi divergence. Closely related notions have appeared in several recent papers that analyzed composition of differentially private mechanisms. We argue that this analytical tool can be used as a privacy definition, compactly and accurately representing guarantees on the tails of the privacy loss. We demonstrate that the new definition shares many important properties with the standard definition of differential privacy, while additionally allowing tighter analysis of composite heterogeneous mechanisms.
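For reference, the relaxation described above is usually stated as follows; this is the standard formulation of Rényi differential privacy, written in my own notation rather than quoted from the paper.

\[
D_{\alpha}\!\left(P \,\Vert\, Q\right) \;=\; \frac{1}{\alpha-1}\,
\log \mathbb{E}_{x\sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right],
\qquad \alpha > 1,
\]
and a randomized mechanism \(M\) satisfies \((\alpha,\varepsilon)\)-Rényi differential privacy if
\[
D_{\alpha}\!\bigl(M(D)\,\Vert\, M(D')\bigr) \;\le\; \varepsilon
\]
for every pair of adjacent datasets \(D\) and \(D'\).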

Journal ArticleDOI
TL;DR: The results show that, in order to mitigate interference, the altitude of the UAVs must be properly adjusted based on the beamwidth of the directional antenna as well as coverage requirements.
Abstract: In this letter, the efficient deployment of multiple unmanned aerial vehicles (UAVs) acting as wireless base stations that provide coverage for ground users is analyzed. First, the downlink coverage probability for UAVs as a function of the altitude and the antenna gain is derived. Next, using circle packing theory, the 3-D locations of the UAVs are determined in a way that maximizes the total coverage area while also maximizing the coverage lifetime of the UAVs. Our results show that, in order to mitigate interference, the altitude of the UAVs must be properly adjusted based on the beamwidth of the directional antenna as well as coverage requirements. Furthermore, the minimum number of UAVs needed to guarantee a target coverage probability for a given geographical area is determined. Numerical results evaluate various tradeoffs.
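As a rough geometric illustration of the altitude/beamwidth trade-off mentioned in the results (my simplification, not the letter's coverage-probability analysis): a downward-pointing directional antenna with beamwidth θB at altitude h illuminates a ground disk of radius about h·tan(θB/2), so raising the altitude or widening the beam enlarges each UAV's footprint and, without adjustment, increases overlap and interference between neighboring UAVs.

import math

def footprint_radius(altitude_m, beamwidth_deg):
    """Radius of the ground disk covered by a downward-pointing directional
    antenna, from simple cone geometry (an illustration, not the paper's model)."""
    return altitude_m * math.tan(math.radians(beamwidth_deg) / 2.0)

for h in (100, 200, 500):
    print(h, "m altitude ->", round(footprint_radius(h, beamwidth_deg=80), 1), "m radius")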

Journal ArticleDOI
TL;DR: This publication marks a historical moment: the first inclusion of qualitative research in APA Style, which is the basis of both the Publication Manual of the American Psychological Association (APA, 2010) and APA Style CENTRAL, an online program to support APA Style.
Abstract: The American Psychological Association Publications and Communications Board Working Group on Journal Article Reporting Standards for Qualitative Research (JARS-Qual Working Group) was charged with examining the state of journal article reporting standards as they applied to qualitative research and with generating recommendations for standards that would be appropriate for a wide range of methods within the discipline of psychology. These standards describe what should be included in a research report to enable and facilitate the review process. This publication marks a historical moment: the first inclusion of qualitative research in APA Style, which is the basis of both the Publication Manual of the American Psychological Association (APA, 2010) and APA Style CENTRAL, an online program to support APA Style. In addition to the general JARS-Qual guidelines, the Working Group has developed standards for both qualitative meta-analysis and mixed methods research. The reporting standards were developed for psychological qualitative research but may hold utility for a broad range of social sciences. They honor a range of qualitative traditions, methods, and reporting styles. The Working Group was composed of a group of researchers with backgrounds in varying methods, research topics, and approaches to inquiry. In this article, they present these standards and their rationale, and they detail the ways that the standards differ from the quantitative research reporting standards. They describe how the standards can be used by authors in the process of writing qualitative research for submission as well as by reviewers and editors in the process of reviewing research.

Proceedings Article
01 Jan 2017
TL;DR: In this paper, the authors describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning.
Abstract: Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamical physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Thus, by simply augmenting convolutions, LSTMs, and MLPs with RNs, we can remove computational burden from network components that are not well-suited to handle relational reasoning, reduce overall network complexity, and gain a general ability to reason about the relations between entities and their properties.
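A minimal PyTorch sketch of a Relation Network as a plug-and-play module, computing RN(O) = f_phi(sum over ordered object pairs of g_theta(o_i, o_j)); layer sizes and the toy usage are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    """Relational module: a shared per-pair MLP g, summed over all pairs, then f."""

    def __init__(self, obj_dim, hidden=128, out_dim=10):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, out_dim))

    def forward(self, objects):                 # objects: (batch, n, obj_dim)
        b, n, d = objects.shape
        oi = objects.unsqueeze(2).expand(b, n, n, d)
        oj = objects.unsqueeze(1).expand(b, n, n, d)
        pairs = torch.cat([oi, oj], dim=-1)     # all ordered object pairs
        rel = self.g(pairs.reshape(b, n * n, 2 * d)).sum(dim=1)
        return self.f(rel)

rn = RelationNetwork(obj_dim=32)
print(rn(torch.randn(4, 8, 32)).shape)          # torch.Size([4, 10])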

Journal ArticleDOI
Pierre Friedlingstein1, Pierre Friedlingstein2, Matthew W. Jones3, Michael O'Sullivan2, Robbie M. Andrew, Judith Hauck4, Glen P. Peters, Wouter Peters5, Wouter Peters6, Julia Pongratz7, Julia Pongratz8, Stephen Sitch2, Corinne Le Quéré3, Dorothee C. E. Bakker3, Josep G. Canadell9, Philippe Ciais10, Robert B. Jackson11, Peter Anthoni12, Leticia Barbero13, Leticia Barbero14, Ana Bastos7, Vladislav Bastrikov10, Meike Becker15, Meike Becker16, Laurent Bopp1, Erik T. Buitenhuis3, Naveen Chandra17, Frédéric Chevallier10, Louise Chini18, Kim I. Currie19, Richard A. Feely20, Marion Gehlen10, Dennis Gilfillan21, Thanos Gkritzalis22, Daniel S. Goll23, Nicolas Gruber24, Sören B. Gutekunst25, Ian Harris26, Vanessa Haverd9, Richard A. Houghton27, George C. Hurtt18, Tatiana Ilyina8, Atul K. Jain28, Emilie Joetzjer10, Jed O. Kaplan29, Etsushi Kato, Kees Klein Goldewijk30, Kees Klein Goldewijk31, Jan Ivar Korsbakken, Peter Landschützer8, Siv K. Lauvset15, Nathalie Lefèvre32, Andrew Lenton33, Andrew Lenton34, Sebastian Lienert35, Danica Lombardozzi36, Gregg Marland21, Patrick C. McGuire37, Joe R. Melton, Nicolas Metzl32, David R. Munro38, Julia E. M. S. Nabel8, Shin-Ichiro Nakaoka39, Craig Neill33, Abdirahman M Omar33, Abdirahman M Omar15, Tsuneo Ono, Anna Peregon10, Anna Peregon40, Denis Pierrot13, Denis Pierrot14, Benjamin Poulter41, Gregor Rehder42, Laure Resplandy43, Eddy Robertson44, Christian Rödenbeck8, Roland Séférian10, Jörg Schwinger15, Jörg Schwinger30, Naomi E. Smith5, Naomi E. Smith45, Pieter P. Tans20, Hanqin Tian46, Bronte Tilbrook33, Bronte Tilbrook34, Francesco N. Tubiello47, Guido R. van der Werf48, Andy Wiltshire44, Sönke Zaehle8 
École Normale Supérieure1, University of Exeter2, Norwich Research Park3, Alfred Wegener Institute for Polar and Marine Research4, Wageningen University and Research Centre5, University of Groningen6, Ludwig Maximilian University of Munich7, Max Planck Society8, Commonwealth Scientific and Industrial Research Organisation9, Centre national de la recherche scientifique10, Stanford University11, Karlsruhe Institute of Technology12, Atlantic Oceanographic and Meteorological Laboratory13, Cooperative Institute for Marine and Atmospheric Studies14, Bjerknes Centre for Climate Research15, Geophysical Institute, University of Bergen16, Japan Agency for Marine-Earth Science and Technology17, University of Maryland, College Park18, National Institute of Water and Atmospheric Research19, National Oceanic and Atmospheric Administration20, Appalachian State University21, Flanders Marine Institute22, Augsburg College23, ETH Zurich24, Leibniz Institute of Marine Sciences25, University of East Anglia26, Woods Hole Research Center27, University of Illinois at Urbana–Champaign28, University of Hong Kong29, Netherlands Environmental Assessment Agency30, Utrecht University31, University of Paris32, Hobart Corporation33, University of Tasmania34, University of Bern35, National Center for Atmospheric Research36, University of Reading37, Cooperative Institute for Research in Environmental Sciences38, National Institute for Environmental Studies39, Russian Academy of Sciences40, Goddard Space Flight Center41, Leibniz Institute for Baltic Sea Research42, Princeton University43, Met Office44, Lund University45, Auburn University46, Food and Agriculture Organization47, VU University Amsterdam48
TL;DR: In this article, the authors describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties, including emissions from land use and land use change, and show that the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere is a measure of imperfect data and understanding of the contemporary carbon cycle.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the “global carbon budget” – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties. Fossil CO2 emissions (EFF) are based on energy statistics and cement production data, while emissions from land use change (ELUC), mainly deforestation, are based on land use and land use change data and bookkeeping models. Atmospheric CO2 concentration is measured directly and its growth rate (GATM) is computed from the annual changes in concentration. The ocean CO2 sink (SOCEAN) and terrestrial CO2 sink (SLAND) are estimated with global process models constrained by observations. The resulting carbon budget imbalance (BIM), the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere, is a measure of imperfect data and understanding of the contemporary carbon cycle. All uncertainties are reported as ±1σ. For the last decade available (2009–2018), EFF was 9.5±0.5 GtC yr−1, ELUC 1.5±0.7 GtC yr−1, GATM 4.9±0.02 GtC yr−1 (2.3±0.01 ppm yr−1), SOCEAN 2.5±0.6 GtC yr−1, and SLAND 3.2±0.6 GtC yr−1, with a budget imbalance BIM of 0.4 GtC yr−1 indicating overestimated emissions and/or underestimated sinks. For the year 2018 alone, the growth in EFF was about 2.1% and fossil emissions increased to 10.0±0.5 GtC yr−1, reaching 10 GtC yr−1 for the first time in history, ELUC was 1.5±0.7 GtC yr−1, for total anthropogenic CO2 emissions of 11.5±0.9 GtC yr−1 (42.5±3.3 GtCO2). Also for 2018, GATM was 5.1±0.2 GtC yr−1 (2.4±0.1 ppm yr−1), SOCEAN was 2.6±0.6 GtC yr−1, and SLAND was 3.5±0.7 GtC yr−1, with a BIM of 0.3 GtC. The global atmospheric CO2 concentration reached 407.38±0.1 ppm averaged over 2018. For 2019, preliminary data for the first 6–10 months indicate a reduced growth in EFF of +0.6% (range of −0.2% to 1.5%) based on national emissions projections for China, the USA, the EU, and India and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. Overall, the mean and trend in the five components of the global carbon budget are consistently estimated over the period 1959–2018, but discrepancies of up to 1 GtC yr−1 persist for the representation of semi-decadal variability in CO2 fluxes. A detailed comparison among individual estimates and the introduction of a broad range of observations shows (1) no consensus in the mean and trend in land use change emissions over the last decade, (2) a persistent low agreement between the different methods on the magnitude of the land CO2 flux in the northern extra-tropics, and (3) an apparent underestimation of the CO2 variability by ocean models outside the tropics. This living data update documents changes in the methods and data sets used in this new global carbon budget and the progress in understanding of the global carbon cycle compared with previous publications of this data set (Le Quéré et al., 2018a, b, 2016, 2015a, b, 2014, 2013). The data generated by this work are available at https://doi.org/10.18160/gcp-2019 (Friedlingstein et al., 2019).
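The budget imbalance quoted above is the residual of the budget accounting identity; written out with the 2009–2018 decadal means given in the abstract (a restatement of the quoted numbers, not additional data):

\[
B_{\mathrm{IM}} \;=\; \bigl(E_{\mathrm{FF}} + E_{\mathrm{LUC}}\bigr)
\;-\; \bigl(G_{\mathrm{ATM}} + S_{\mathrm{OCEAN}} + S_{\mathrm{LAND}}\bigr)
\;=\; (9.5 + 1.5) - (4.9 + 2.5 + 3.2)
\;=\; 0.4\ \mathrm{GtC\,yr^{-1}}.
\]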

Proceedings Article
15 Feb 2018
TL;DR: A Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection, which significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14% improvement based on the standard F1 score.
Abstract: Unsupervised anomaly detection on multi- or high-dimensional data is of great importance in both fundamental machine learning research and industrial applications, for which density estimation lies at the core. Although previous approaches based on dimensionality reduction followed by density estimation have made fruitful progress, they mainly suffer from decoupled model learning with inconsistent optimization goals and incapability of preserving essential information in the low-dimensional space. In this paper, we present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection. Our model utilizes a deep autoencoder to generate a low-dimensional representation and reconstruction error for each input data point, which is further fed into a Gaussian Mixture Model (GMM). Instead of using decoupled two-stage training and the standard Expectation-Maximization (EM) algorithm, DAGMM jointly optimizes the parameters of the deep autoencoder and the mixture model simultaneously in an end-to-end fashion, leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The joint optimization, which well balances autoencoding reconstruction, density estimation of latent representation, and regularization, helps the autoencoder escape from less attractive local optima and further reduce reconstruction errors, avoiding the need of pre-training. Experimental results on several public benchmark datasets show that DAGMM significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14% improvement based on the standard F1 score.
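A compact PyTorch sketch of the architecture described above, assuming tabular inputs and illustrative layer sizes: a deep autoencoder supplies a low-dimensional code plus a reconstruction-error feature, and a separate estimation network predicts soft GMM membership for each sample. The GMM parameter updates and the sample-energy computation used to flag anomalies are omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DAGMMSketch(nn.Module):
    """Compression network (autoencoder) plus estimation network."""

    def __init__(self, in_dim, code_dim=2, n_components=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 16), nn.Tanh(),
                                     nn.Linear(16, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 16), nn.Tanh(),
                                     nn.Linear(16, in_dim))
        self.estimator = nn.Sequential(nn.Linear(code_dim + 1, 10), nn.Tanh(),
                                       nn.Linear(10, n_components))

    def forward(self, x):
        code = self.encoder(x)
        recon = self.decoder(code)
        # Relative reconstruction error as an extra feature for the estimator.
        rec_err = (x - recon).pow(2).sum(dim=1, keepdim=True) \
                  / (x.pow(2).sum(dim=1, keepdim=True) + 1e-12)
        z = torch.cat([code, rec_err], dim=1)
        gamma = F.softmax(self.estimator(z), dim=1)   # soft mixture membership
        return recon, z, gamma

model = DAGMMSketch(in_dim=20)
recon, z, gamma = model(torch.randn(8, 20))
print(z.shape, gamma.shape)                           # (8, 3) (8, 4)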

Journal ArticleDOI
TL;DR: The promise of targeting the inflammation pathway in the management of this challenging condition is today somewhat weaker, but this might not be the last word on the potential role of anti-inflammatory drugs in the treatment of bipolar depression.

Journal ArticleDOI
TL;DR: The Apache Point Observatory Galactic Evolution Experiment (APOGEE) as discussed by the authors collected a half million high resolution (R~22,500), high S/N (>100), infrared (1.51-1.70 microns) spectra for 146,000 stars, with time series information via repeat visits to most of these stars.
Abstract: The Apache Point Observatory Galactic Evolution Experiment (APOGEE), one of the programs in the Sloan Digital Sky Survey III (SDSS-III), has now completed its systematic, homogeneous spectroscopic survey sampling all major populations of the Milky Way. After a three year observing campaign on the Sloan 2.5-m Telescope, APOGEE has collected a half million high resolution (R~22,500), high S/N (>100), infrared (1.51-1.70 microns) spectra for 146,000 stars, with time series information via repeat visits to most of these stars. This paper describes the motivations for the survey and its overall design---hardware, field placement, target selection, operations---and gives an overview of these aspects as well as the data reduction, analysis and products. An index is also given to the complement of technical papers that describe various critical survey components in detail. Finally, we discuss the achieved survey performance and illustrate the variety of potential uses of the data products by way of a number of science demonstrations, which span from time series analysis of stellar spectral variations and radial velocity variations from stellar companions, to spatial maps of kinematics, metallicity and abundance patterns across the Galaxy and as a function of age, to new views of the interstellar medium, the chemistry of star clusters, and the discovery of rare stellar species. As part of SDSS-III Data Release 12, all of the APOGEE data products are now publicly available.

Journal ArticleDOI
TL;DR: The observation of stable magnetic skyrmions at room temperature in ultrathin transition metal ferromagnets with magnetic transmission soft X-ray microscopy is reported, providing experimental evidence of recent predictions and opening the door to room-temperature skyrmion spintronics in robust thin-film heterostructures.
Abstract: Magnetic skyrmions are topologically-protected spin textures that exhibit fascinating physical behaviors and large potential in highly energy efficient spintronic device applications. The main obstacles so far are that skyrmions have been observed in only a few exotic materials and at low temperatures, and manipulation of individual skyrmions has not yet been achieved. Here, we report the observation of stable magnetic skyrmions at room temperature in ultrathin transition metal ferromagnets with magnetic transmission soft x-ray microscopy. We demonstrate the ability to generate stable skyrmion lattices and drive trains of individual skyrmions by short current pulses along a magnetic racetrack. Our findings provide experimental evidence of recent predictions and open the door to room-temperature skyrmion spintronics in robust thin-film heterostructures.

Journal ArticleDOI
TL;DR: In this article, a modified synthetic method is reported for producing high-quality monolayer 2D transition metal carbide Ti3C2Tx flakes, and their electronic properties are measured.
Abstract: 2D transition metal carbide Ti3C2Tx (T stands for surface termination), the most widely studied MXene, has shown outstanding electrochemical properties and promise for a number of bulk applications. However, electronic properties of individual MXene flakes, which are important for understanding the potential of these materials, remain largely unexplored. Herein, a modified synthetic method is reported for producing high-quality monolayer Ti3C2Tx flakes. Field-effect transistors (FETs) based on monolayer Ti3C2Tx flakes are fabricated and their electronic properties are measured. Individual Ti3C2Tx flakes exhibit a high conductivity of 4600 ± 1100 S cm−1 and field-effect electron mobility of 2.6 ± 0.7 cm2 V−1 s−1. The resistivity of multilayer Ti3C2Tx films is only one order of magnitude higher than the resistivity of individual flakes, which indicates a surprisingly good electron transport through the surface terminations of different flakes, unlike in many other 2D materials. Finally, the fabricated FETs are used to investigate the environmental stability and kinetics of oxidation of Ti3C2Tx flakes in humid air. The high-quality Ti3C2Tx flakes are reasonably stable and remain highly conductive even after their exposure to air for more than 24 h. It is demonstrated that after the initial exponential decay the conductivity of Ti3C2Tx flakes linearly decreases with time, which is consistent with their edge oxidation.

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This paper decomposes the task of tracking into translation and scale estimation of objects and shows that the correlation between temporal context considerably improves the accuracy and reliability for translation estimation, and it is effective to learn discriminative correlation filters from the most confident frames to estimate the scale change.
Abstract: In this paper, we address the problem of long-term visual tracking where the target objects undergo significant appearance variation due to deformation, abrupt motion, heavy occlusion and out-of-view. In this setting, we decompose the task of tracking into translation and scale estimation of objects. We show that the correlation between temporal context considerably improves the accuracy and reliability for translation estimation, and it is effective to learn discriminative correlation filters from the most confident frames to estimate the scale change. In addition, we train an online random fern classifier to re-detect objects in case of tracking failure. Extensive experimental results on large-scale benchmark datasets show that the proposed algorithm performs favorably against state-of-the-art methods in terms of efficiency, accuracy, and robustness.
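A minimal NumPy sketch of the correlation-filter building block used for translation estimation (a MOSSE-style single-channel filter learned and applied in the Fourier domain); the paper's full tracker, including the separate scale filter and the online random fern re-detector, is not reproduced here.

import numpy as np

def train_filter(patch, target_response, lam=1e-2):
    """Learn a correlation filter so that correlating it with patch gives
    target_response; closed-form ridge-regression solution in the Fourier
    domain (a sketch, not the paper's exact multi-channel model)."""
    F_ = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    return (G * np.conj(F_)) / (F_ * np.conj(F_) + lam)

def track_translation(H, new_patch):
    """Correlate the learned filter with a new patch and take the response peak."""
    response = np.real(np.fft.ifft2(H * np.fft.fft2(new_patch)))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return dy, dx, response

# Toy usage: the desired response is a Gaussian centred on the target.
size = 64
yy, xx = np.mgrid[0:size, 0:size]
g = np.exp(-((yy - size // 2) ** 2 + (xx - size // 2) ** 2) / (2 * 3.0 ** 2))
patch = np.random.rand(size, size)
H = train_filter(patch, g)
print(track_translation(H, np.roll(patch, (5, 3), axis=(0, 1)))[:2])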

Journal ArticleDOI
TL;DR: This paper found that precipitation deficits in California were more than twice as likely to yield drought years if they occurred when conditions were warm and that anthropogenic warming is increasing the probability of co-occurring warm-dry conditions like those that have created the acute human and ecosystem impacts associated with the 2012-2014 drought in California.
Abstract: California is currently in the midst of a record-setting drought. The drought began in 2012 and now includes the lowest calendar-year and 12-mo precipitation, the highest annual temperature, and the most extreme drought indicators on record. The extremely warm and dry conditions have led to acute water shortages, groundwater overdraft, critically low streamflow, and enhanced wildfire risk. Analyzing historical climate observations from California, we find that precipitation deficits in California were more than twice as likely to yield drought years if they occurred when conditions were warm. We find that although there has not been a substantial change in the probability of either negative or moderately negative precipitation anomalies in recent decades, the occurrence of drought years has been greater in the past two decades than in the preceding century. In addition, the probability that precipitation deficits co-occur with warm conditions and the probability that precipitation deficits produce drought have both increased. Climate model experiments with and without anthropogenic forcings reveal that human activities have increased the probability that dry precipitation years are also warm. Further, a large ensemble of climate model realizations reveals that additional global warming over the next few decades is very likely to create ∼100% probability that any annual-scale dry period is also extremely warm. We therefore conclude that anthropogenic warming is increasing the probability of co-occurring warm–dry conditions like those that have created the acute human and ecosystem impacts associated with the “exceptional” 2012–2014 drought in California.

Posted ContentDOI
22 Dec 2020-medRxiv
TL;DR: In this paper, the authors describe a new SARS-CoV-2 lineage (501Y.V2) characterised by eight lineage-defining mutations in the spike protein, including three at important residues in the receptor-binding domain (K417N, E484K and N501Y).
Abstract: Summary Continued uncontrolled transmission of the severe acute respiratory syndrome-related coronavirus 2 (SARS-CoV-2) in many parts of the world is creating the conditions for significant virus evolution. Here, we describe a new SARS-CoV-2 lineage (501Y.V2) characterised by eight lineage-defining mutations in the spike protein, including three at important residues in the receptor-binding domain (K417N, E484K and N501Y) that may have functional significance. This lineage emerged in South Africa after the first epidemic wave in a severely affected metropolitan area, Nelson Mandela Bay, located on the coast of the Eastern Cape Province. This lineage spread rapidly, becoming within weeks the dominant lineage in the Eastern Cape and Western Cape Provinces. Whilst the full significance of the mutations is yet to be determined, the genomic data, showing the rapid displacement of other lineages, suggest that this lineage may be associated with increased transmissibility.

Proceedings Article
03 Dec 2018
TL;DR: A novel γ-decaying heuristic theory is developed that unifies a wide range of heuristics in a single framework and proves that all these heuristics can be well approximated from local subgraphs.
Abstract: Link prediction is a key problem for network-structured data. Link prediction heuristics use some score functions, such as common neighbors and Katz index, to measure the likelihood of links. They have obtained wide practical uses due to their simplicity, interpretability, and for some of them, scalability. However, every heuristic has a strong assumption on when two nodes are likely to link, which limits their effectiveness on networks where these assumptions fail. In this regard, a more reasonable way should be learning a suitable heuristic from a given network instead of using predefined ones. By extracting a local subgraph around each target link, we aim to learn a function mapping the subgraph patterns to link existence, thus automatically learning a "heuristic" that suits the current network. In this paper, we study this heuristic learning paradigm for link prediction. First, we develop a novel γ-decaying heuristic theory. The theory unifies a wide range of heuristics in a single framework, and proves that all these heuristics can be well approximated from local subgraphs. Our results show that local subgraphs reserve rich information related to link existence. Second, based on the γ-decaying theory, we propose a new method to learn heuristics from local subgraphs using a graph neural network (GNN). Its experimental results show unprecedented performance, working consistently well on a wide range of problems.
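A minimal Python sketch of the subgraph-extraction step described above, using networkx: the h-hop enclosing subgraph around a candidate link is the union of the two endpoints' h-hop neighborhoods. The GNN that learns the "heuristic" from these subgraphs is not included, and the graph and hop count are illustrative.

import networkx as nx

def enclosing_subgraph(G, u, v, num_hops=2):
    """Return the h-hop enclosing subgraph around a candidate link (u, v).

    Nodes within num_hops of either endpoint are kept; a GNN would then be
    trained on such subgraphs (with the target edge removed for positive
    examples) to predict link existence. Extraction step only.
    """
    nodes = set()
    for root in (u, v):
        lengths = nx.single_source_shortest_path_length(G, root, cutoff=num_hops)
        nodes.update(lengths)
    return G.subgraph(nodes).copy()

# Toy usage on a small random graph.
G = nx.erdos_renyi_graph(50, 0.08, seed=0)
sub = enclosing_subgraph(G, 0, 1, num_hops=2)
print(sub.number_of_nodes(), sub.number_of_edges())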

Proceedings Article
23 May 2018
TL;DR: This work identifies that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms, and proposes and empirically tests a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space.
Abstract: Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet. We confirm these results on another few-shot dataset that we introduce in this paper based on CIFAR100.
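A minimal NumPy sketch of the metric-scaling idea: class probabilities come from a softmax over negative distances to class prototypes multiplied by a temperature alpha (learned end to end in the paper, fixed here); the prototypes and dimensions below are illustrative.

import numpy as np

def scaled_metric_probs(query, prototypes, alpha=1.0):
    """Class probabilities from a softmax over scaled negative distances.

    query: (d,) embedding; prototypes: (k, d) per-class centroids built from
    the support set; alpha is the metric-scaling temperature.
    """
    d2 = np.sum((prototypes - query) ** 2, axis=1)   # squared Euclidean distances
    logits = -alpha * d2
    logits -= logits.max()                           # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Toy 5-way example: the query sits nearest prototype 2.
protos = np.random.rand(5, 16)
q = protos[2] + 0.05 * np.random.randn(16)
print(np.round(scaled_metric_probs(q, protos, alpha=10.0), 3))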

Journal ArticleDOI
TL;DR: This review discusses recent studies on early microbial colonization and the factors influencing this process that have an impact on health; an adequate establishment of the microbiota and its maintenance throughout life would reduce the risk of disease in early and late life.
Abstract: The intestinal microbiota has become a relevant aspect of human health. Microbial colonization runs in parallel with immune system maturation and plays a role in intestinal physiology and regulation. Increasing evidence on early microbial contact suggests that the human intestinal microbiota is seeded before birth. Maternal microbiota forms the first microbial inoculum, and from birth, the microbial diversity increases and converges toward an adult-like microbiota by the end of the first 3-5 years of life. Perinatal factors such as mode of delivery, diet, genetics, and intestinal mucin glycosylation all contribute to influence microbial colonization. Once established, the composition of the gut microbiota is relatively stable throughout adult life, but can be altered as a result of bacterial infections, antibiotic treatment, lifestyle, surgery, and a long-term change in diet. Shifts in this complex microbial system have been reported to increase the risk of disease. Therefore, an adequate establishment of the microbiota and its maintenance throughout life would reduce the risk of disease in early and late life. This review discusses recent studies on early colonization and the factors influencing this process which have an impact on health.

Journal ArticleDOI
13 Mar 2018-JAMA
TL;DR: The United States spent approximately twice as much as other high-income countries on medical care, yet utilization rates in the United States were largely similar to those in other nations, and prices of labor and goods, including pharmaceuticals, and administrative costs appeared to be the major drivers of the difference in overall cost.
Abstract: Importance Health care spending in the United States is a major concern and is higher than in other high-income countries, but there is little evidence that efforts to reform US health care delivery have had a meaningful influence on controlling health care spending and costs. Objective To compare potential drivers of spending, such as structural capacity and utilization, in the United States with those of 10 of the highest-income countries (United Kingdom, Canada, Germany, Australia, Japan, Sweden, France, the Netherlands, Switzerland, and Denmark) to gain insight into what the United States can learn from these nations. Evidence Analysis of data primarily from 2013-2016 from key international organizations including the Organisation for Economic Co-operation and Development (OECD), comparing underlying differences in structural features, types of health care and social spending, and performance between the United States and 10 high-income countries. When data were not available for a given country or more accurate country-level estimates were available from sources other than the OECD, country-specific data sources were used. Findings In 2016, the US spent 17.8% of its gross domestic product on health care, and spending in the other countries ranged from 9.6% (Australia) to 12.4% (Switzerland). The proportion of the population with health insurance was 90% in the US, lower than the other countries (range, 99%-100%), and the US had the highest proportion of private health insurance (55.3%). For some determinants of health such as smoking, the US ranked second lowest of the countries (11.4% of the US population ≥15 years smokes daily; mean of all 11 countries, 16.6%), but the US had the highest percentage of adults who were overweight or obese at 70.1% (range for other countries, 23.8%-63.4%; mean of all 11 countries, 55.6%). Life expectancy in the US was the lowest of the 11 countries at 78.8 years (range for other countries, 80.7-83.9 years; mean of all 11 countries, 81.7 years), and infant mortality was the highest (5.8 deaths per 1000 live births in the US; 3.6 per 1000 for all 11 countries). The US did not differ substantially from the other countries in physician workforce (2.6 physicians per 1000; 43% primary care physicians), or nursing workforce (11.1 nurses per 1000). The US had comparable numbers of hospital beds (2.8 per 1000) but higher utilization of magnetic resonance imaging (118 per 1000) and computed tomography (245 per 1000) vs other countries. The US had similar rates of utilization (US discharges per 100 000 were 192 for acute myocardial infarction, 365 for pneumonia, 230 for chronic obstructive pulmonary disease; procedures per 100 000 were 204 for hip replacement, 226 for knee replacement, and 79 for coronary artery bypass graft surgery). Administrative costs of care (activities relating to planning, regulating, and managing health systems and services) accounted for 8% in the US vs a range of 1% to 3% in the other countries. For pharmaceutical costs, spending per capita was $1443 in the US vs a range of $466 to $939 in other countries. Salaries of physicians and nurses were higher in the US; for example, generalist physicians salaries were $218 173 in the US compared with a range of $86 607 to $154 126 in the other countries. Conclusions and Relevance The United States spent approximately twice as much as other high-income countries on medical care, yet utilization rates in the United States were largely similar to those in other nations. 
Prices of labor and goods, including pharmaceuticals, and administrative costs appeared to be the major drivers of the difference in overall cost between the United States and other high-income countries. As patients, physicians, policy makers, and legislators actively debate the future of the US health system, data such as these are needed to inform policy decisions.

Journal ArticleDOI
10 Sep 2015-Cell
TL;DR: New metabolic checkpoints for T cell activity are uncovered and it is demonstrated that metabolic reprogramming of tumor-reactive T cells can enhance anti-tumor T cell responses, illuminating new forms of immunotherapy.

Proceedings ArticleDOI
Tong He1, Zhi Zhang1, Hang Zhang1, Zhongyue Zhang1, Junyuan Xie1, Mu Li1 
01 Jun 2019
TL;DR: This article examined a collection of such refinements and empirically evaluated their impact on the final model accuracy through ablation study, and showed that by combining these refinements together, they are able to improve various CNN models significantly.
Abstract: Much of the recent progress made in image classification research can be credited to training procedure refinements, such as changes in data augmentations and optimization methods. In the literature, however, most refinements are either briefly mentioned as implementation details or only visible in source code. In this paper, we will examine a collection of such refinements and empirically evaluate their impact on the final model accuracy through ablation study. We will show that, by combining these refinements together, we are able to improve various CNN models significantly. For example, we raise ResNet-50's top-1 validation accuracy from 75.3% to 79.29% on ImageNet. We will also demonstrate that improvement on image classification accuracy leads to better transfer learning performance in other application domains such as object detection and semantic segmentation.
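The abstract does not enumerate the individual refinements; as a concrete example of the kind of training-procedure tweak evaluated in this line of work (cosine learning-rate decay and label smoothing are, to my knowledge, among them), here is a short illustrative Python sketch.

import numpy as np

def cosine_lr(step, total_steps, base_lr=0.1, warmup_steps=500):
    """Learning-rate schedule with linear warmup followed by cosine decay."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + np.cos(np.pi * t))

def label_smoothing(one_hot, eps=0.1):
    """Soften hard targets: the true class gets 1 - eps, the rest share eps."""
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

print(round(cosine_lr(250, 10000), 4), round(cosine_lr(10000, 10000), 6))
print(label_smoothing(np.eye(5)[2]))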

Journal ArticleDOI
TL;DR: Human-to-Human Coronavirus Transmission in Vietnam The authors describe transmission of 2019-nCoV from a father, who had flown with his wife from Wuhan to Hanoi, to the son, who met his father and ...
Abstract: Human-to-Human Coronavirus Transmission in Vietnam The authors describe transmission of 2019-nCoV from a father, who had flown with his wife from Wuhan to Hanoi, to the son, who met his father and ...