
Journal ArticleDOI
TL;DR: A greater extent of weight loss, induced by lifestyle changes, is associated with the level of improvement in histologic features of NASH.

1,459 citations


Journal ArticleDOI
TL;DR: The presence of a lymphatic vessel network in the dura mater of the mouse brain is discovered and it is shown that these dural lymphatic vessels are important for the clearance of macromolecules from the brain.
Abstract: The central nervous system (CNS) is considered an organ devoid of lymphatic vasculature. Yet, part of the cerebrospinal fluid (CSF) drains into the cervical lymph nodes (LNs). The mechanism of CSF entry into the LNs has been unclear. Here we report the surprising finding of a lymphatic vessel network in the dura mater of the mouse brain. We show that dural lymphatic vessels absorb CSF from the adjacent subarachnoid space and brain interstitial fluid (ISF) via the glymphatic system. Dural lymphatic vessels transport fluid into deep cervical LNs (dcLNs) via foramina at the base of the skull. In a transgenic mouse model expressing a VEGF-C/D trap and displaying complete aplasia of the dural lymphatic vessels, macromolecule clearance from the brain was attenuated and transport from the subarachnoid space into dcLNs was abrogated. Surprisingly, brain ISF pressure and water content were unaffected. Overall, these findings indicate that the mechanism of CSF flow into the dcLNs is directly via an adjacent dural lymphatic network, which may be important for the clearance of macromolecules from the brain. Importantly, these results call for a reexamination of the role of the lymphatic system in CNS physiology and disease.

1,458 citations


Journal ArticleDOI
Corinne Le Quéré1, Robbie M. Andrew, Pierre Friedlingstein2, Stephen Sitch2, Judith Hauck3, Julia Pongratz4, Julia Pongratz5, Penelope A. Pickers1, Jan Ivar Korsbakken, Glen P. Peters, Josep G. Canadell6, Almut Arneth7, Vivek K. Arora, Leticia Barbero8, Leticia Barbero9, Ana Bastos4, Laurent Bopp10, Frédéric Chevallier11, Louise Chini12, Philippe Ciais11, Scott C. Doney13, Thanos Gkritzalis14, Daniel S. Goll11, Ian Harris1, Vanessa Haverd6, Forrest M. Hoffman15, Mario Hoppema3, Richard A. Houghton16, George C. Hurtt12, Tatiana Ilyina5, Atul K. Jain17, Truls Johannessen18, Chris D. Jones19, Etsushi Kato, Ralph F. Keeling20, Kees Klein Goldewijk21, Kees Klein Goldewijk22, Peter Landschützer5, Nathalie Lefèvre23, Sebastian Lienert24, Zhu Liu25, Zhu Liu1, Danica Lombardozzi26, Nicolas Metzl23, David R. Munro27, Julia E. M. S. Nabel5, Shin-Ichiro Nakaoka28, Craig Neill29, Craig Neill30, Are Olsen18, T. Ono, Prabir K. Patra31, Anna Peregon11, Wouter Peters32, Wouter Peters33, Philippe Peylin11, Benjamin Pfeil18, Benjamin Pfeil34, Denis Pierrot9, Denis Pierrot8, Benjamin Poulter35, Gregor Rehder36, Laure Resplandy37, Eddy Robertson19, Matthias Rocher11, Christian Rödenbeck5, Ute Schuster2, Jörg Schwinger34, Roland Séférian11, Ingunn Skjelvan34, Tobias Steinhoff38, Adrienne J. Sutton39, Pieter P. Tans39, Hanqin Tian40, Bronte Tilbrook29, Bronte Tilbrook30, Francesco N. Tubiello41, Ingrid T. van der Laan-Luijkx32, Guido R. van der Werf42, Nicolas Viovy11, Anthony P. Walker15, Andy Wiltshire19, Rebecca Wright1, Sönke Zaehle5, Bo Zheng11 
University of East Anglia1, University of Exeter2, Alfred Wegener Institute for Polar and Marine Research3, Ludwig Maximilian University of Munich4, Max Planck Society5, Commonwealth Scientific and Industrial Research Organisation6, Karlsruhe Institute of Technology7, Cooperative Institute for Marine and Atmospheric Studies8, Atlantic Oceanographic and Meteorological Laboratory9, École Normale Supérieure10, Centre national de la recherche scientifique11, University of Maryland, College Park12, University of Virginia13, Flanders Marine Institute14, Oak Ridge National Laboratory15, Woods Hole Research Center16, University of Illinois at Urbana–Champaign17, Geophysical Institute, University of Bergen18, Met Office19, University of California, San Diego20, Utrecht University21, Netherlands Environmental Assessment Agency22, University of Paris23, Oeschger Centre for Climate Change Research24, Tsinghua University25, National Center for Atmospheric Research26, Institute of Arctic and Alpine Research27, National Institute for Environmental Studies28, Cooperative Research Centre29, Hobart Corporation30, Japan Agency for Marine-Earth Science and Technology31, Wageningen University and Research Centre32, University of Groningen33, Bjerknes Centre for Climate Research34, Goddard Space Flight Center35, Leibniz Institute for Baltic Sea Research36, Princeton University37, Leibniz Institute of Marine Sciences38, National Oceanic and Atmospheric Administration39, Auburn University40, Food and Agriculture Organization41, VU University Amsterdam42
TL;DR: In this article, the authors describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties, including emissions from land use and land-use change data and bookkeeping models.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the “global carbon budget” – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties. Fossil CO2 emissions (EFF) are based on energy statistics and cement production data, while emissions from land use and land-use change (ELUC), mainly deforestation, are based on land use and land-use change data and bookkeeping models. Atmospheric CO2 concentration is measured directly and its growth rate (GATM) is computed from the annual changes in concentration. The ocean CO2 sink (SOCEAN) and terrestrial CO2 sink (SLAND) are estimated with global process models constrained by observations. The resulting carbon budget imbalance (BIM), the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere, is a measure of imperfect data and understanding of the contemporary carbon cycle. All uncertainties are reported as ±1σ. For the last decade available (2008–2017), EFF was 9.4±0.5 GtC yr−1, ELUC 1.5±0.7 GtC yr−1, GATM 4.7±0.02 GtC yr−1, SOCEAN 2.4±0.5 GtC yr−1, and SLAND 3.2±0.8 GtC yr−1, with a budget imbalance BIM of 0.5 GtC yr−1 indicating overestimated emissions and/or underestimated sinks. For the year 2017 alone, the growth in EFF was about 1.6 % and emissions increased to 9.9±0.5 GtC yr−1. Also for 2017, ELUC was 1.4±0.7 GtC yr−1, GATM was 4.6±0.2 GtC yr−1, SOCEAN was 2.5±0.5 GtC yr−1, and SLAND was 3.8±0.8 GtC yr−1, with a BIM of 0.3 GtC. The global atmospheric CO2 concentration reached 405.0±0.1 ppm averaged over 2017. For 2018, preliminary data for the first 6–9 months indicate a renewed growth in EFF of +2.7 % (range of 1.8 % to 3.7 %) based on national emission projections for China, the US, the EU, and India and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. The analysis presented here shows that the mean and trend in the five components of the global carbon budget are consistently estimated over the period 1959–2017, but discrepancies of up to 1 GtC yr−1 persist for the representation of semi-decadal variability in CO2 fluxes. A detailed comparison among individual estimates and the introduction of a broad range of observations shows (1) no consensus in the mean and trend in land-use change emissions, (2) a persistent low agreement among the different methods on the magnitude of the land CO2 flux in the northern extra-tropics, and (3) an apparent underestimation of the CO2 variability by ocean models, originating outside the tropics. This living data update documents changes in the methods and data sets used in this new global carbon budget and the progress in understanding the global carbon cycle compared with previous publications of this data set (Le Quéré et al., 2018, 2016, 2015a, b, 2014, 2013). All results presented here can be downloaded from https://doi.org/10.18160/GCP-2018.
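The five components quoted above are tied together by a single budget identity; as a quick arithmetic check against the decadal means (a worked example using the rounded values, so the residual differs slightly from the reported 0.5 GtC yr−1):

```latex
% Budget identity (all terms in GtC yr^{-1}):
E_{\mathrm{FF}} + E_{\mathrm{LUC}} = G_{\mathrm{ATM}} + S_{\mathrm{OCEAN}} + S_{\mathrm{LAND}} + B_{\mathrm{IM}}
% 2008--2017 means quoted above:
%   9.4 + 1.5 = 4.7 + 2.4 + 3.2 + B_{IM}
%   => B_{IM} = 10.9 - 10.3 = 0.6, consistent with the reported 0.5
%      once unrounded component estimates are used.
```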

1,458 citations


Proceedings ArticleDOI
27 Jun 2016
TL;DR: There is a gap between current face detection performance and real-world requirements; the WIDER FACE dataset, 10 times larger than existing datasets, is introduced to help close it and contains rich annotations, including occlusions, poses, event categories, and face bounding boxes.
Abstract: Face detection is one of the most studied topics in the computer vision community. Much of this progress has been driven by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and the real-world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categories, and face bounding boxes. Faces in the proposed dataset are extremely challenging due to large variations in scale, pose and occlusion, as shown in Fig. 1. Furthermore, we show that the WIDER FACE dataset is an effective training source for face detection. We benchmark several representative detection systems, providing an overview of state-of-the-art performance, and propose a solution to deal with large scale variation. Finally, we discuss common failure cases that are worth further investigation.

1,458 citations


Journal ArticleDOI
TL;DR: A massive quantitative analysis of Facebook shows that information related to distinct narratives (conspiracy theories and scientific news) generates homogeneous and polarized communities with similar information consumption patterns, and derives a data-driven percolation model of rumor spreading demonstrating that homogeneity and polarization are the main determinants for predicting cascade size.
Abstract: The wide availability of user-provided content in online social media facilitates the aggregation of people around common interests, worldviews, and narratives. However, the World Wide Web (WWW) also allows for the rapid dissemination of unsubstantiated rumors and conspiracy theories that often elicit rapid, large, but naive social responses, such as the recent case of Jade Helm 15, where a simple military exercise came to be perceived as the beginning of a new civil war in the United States. In this work, we address the determinants governing misinformation spreading through a thorough quantitative analysis. In particular, we focus on how Facebook users consume information related to two distinct narratives: scientific and conspiracy news. We find that, although consumers of scientific and conspiracy stories present similar consumption patterns with respect to content, cascade dynamics differ. Selective exposure to content is the primary driver of content diffusion and generates the formation of homogeneous clusters, i.e., “echo chambers.” Indeed, homogeneity appears to be the primary driver for the diffusion of content, and each echo chamber has its own cascade dynamics. Finally, we introduce a data-driven percolation model mimicking rumor spreading and show that homogeneity and polarization are the main determinants for predicting cascade size.
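To make the percolation idea concrete, here is a minimal, illustrative simulation (not the authors' fitted model) in which a rumor crosses an edge only between like-minded users, so cascade size grows with neighborhood homogeneity; the graph construction, the `opinion` node attribute, and the transmission rule are assumptions of this sketch:

```python
import random
import networkx as nx

def simulate_cascade(G, seed, threshold=0.5):
    """Toy percolation-style cascade: a rumor passes along an edge only
    when the endpoints' opinions are similar, so homogeneous (echo-chamber)
    neighborhoods yield larger cascades than mixed ones."""
    infected = {seed}
    frontier = [seed]
    while frontier:
        node = frontier.pop()
        for nbr in G.neighbors(node):
            if nbr in infected:
                continue
            similarity = 1.0 - abs(G.nodes[node]["opinion"] - G.nodes[nbr]["opinion"])
            if similarity > threshold and random.random() < similarity:
                infected.add(nbr)
                frontier.append(nbr)
    return len(infected)

# Random graph with node "opinions" in [0, 1]; clustering the opinions
# (stronger homogeneity) inflates the mean cascade size.
G = nx.erdos_renyi_graph(1000, 0.01, seed=42)
for n in G.nodes:
    G.nodes[n]["opinion"] = random.random()
sizes = [simulate_cascade(G, s) for s in random.sample(list(G.nodes), 20)]
print(sum(sizes) / len(sizes))
```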

1,457 citations


Proceedings Article
05 Jul 2017
TL;DR: A novel technique is presented that allows sample-efficient learning from rewards that are sparse and binary, avoiding the need for complicated reward engineering; it may be seen as a form of implicit curriculum.
Abstract: Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary, thereby avoiding the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task. The video presenting our experiments is available at https://goo.gl/SMrQnI.
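A minimal sketch of the goal-relabeling idea at the core of Hindsight Experience Replay, using the paper's "future" strategy; the tuple layout, `reward_fn`, and the identity achieved-goal mapping are simplifying assumptions, not the authors' code:

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Hindsight relabeling: besides the original goal, store each
    transition again with a goal that was actually achieved later in the
    episode, so sparse binary rewards still produce learning signal.
    `episode` is a list of (state, action, next_state, goal) tuples and
    `reward_fn(next_state, goal)` returns the sparse binary reward; the
    goal space is assumed equal to the state space (identity mapping)."""
    transitions = []
    for t, (s, a, s_next, goal) in enumerate(episode):
        transitions.append((s, a, reward_fn(s_next, goal), s_next, goal))
        # 'future' strategy: sample k goals achieved at or after step t
        future = episode[t:]
        for _ in range(k):
            _, _, achieved, _ = random.choice(future)
            transitions.append((s, a, reward_fn(s_next, achieved), s_next, achieved))
    return transitions
```

The relabeled transitions are simply appended to the replay buffer of any off-policy learner, which is why the technique composes with an arbitrary off-policy RL algorithm as the abstract states.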

1,457 citations


Journal ArticleDOI
28 May 2015-Nature
TL;DR: This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.
Abstract: How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.
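As a tiny worked instance of the framework described above (represent uncertainty as a distribution, then update it with data), here is a conjugate Beta-Binomial update; the example is ours, not from the Review:

```python
# Beta-Binomial conjugate update: the prior Beta(alpha, beta) over an
# unknown success probability becomes Beta(alpha+s, beta+f) after
# observing s successes and f failures.
def update_beta(alpha, beta, successes, failures):
    return alpha + successes, beta + failures

alpha, beta = 1.0, 1.0          # uniform prior
alpha, beta = update_beta(alpha, beta, successes=7, failures=3)
print(alpha / (alpha + beta))   # posterior mean: 8/12 = 0.667
```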

1,457 citations


Journal ArticleDOI
TL;DR: The 14th St Gallen International Breast Cancer Conference (2015) reviewed new evidence on locoregional and systemic therapies for early breast cancer; this report summarizes its treatment-oriented classification of subgroups and its treatment recommendations.

1,457 citations


Journal ArticleDOI
29 Sep 2017-Science
TL;DR: The mechanisms and strategies for improving thermoelectric efficiency are reviewed and how to report material performance is discussed, as well as how to develop high-performance materials out of nontoxic and earth-abundant elements.
Abstract: BACKGROUND Heat and electricity are two forms of energy that are at opposite ends of a spectrum. Heat is ubiquitous, but with low quality, whereas electricity is versatile, but its production is demanding. Thermoelectrics offers a simple and environmentally friendly solution for direct heat-to-electricity conversion. A thermoelectric (TE) device can directly convert heat emanating from the Sun, radioisotopes, automobiles, industrial sectors, or even the human body to electricity. Electricity also can drive a TE device to work as a solid-state heat pump for distributed spot-size refrigeration. TE devices are free of moving parts and feasible for miniaturization, run quietly, and do not emit greenhouse gases. The full potential of TE devices may be unleashed by working in tandem with other energy-conversion technologies. Thermoelectrics found niche applications in the 20th century, especially where efficiency was of a lower priority than energy availability and reliability. Broader (beyond niche) application of thermoelectrics in the 21st century requires developing higher-performance materials. The figure of merit, ZT, is the primary measure of material performance. Enhancing the ZT requires optimizing the adversely interdependent electrical resistivity, Seebeck coefficient, and thermal conductivity, as a group. On the microscopic level, high material performance stems from a delicate concert among trade-offs between phase stability and instability, structural order and disorder, bond covalency and ionicity, band convergence and splitting, itinerant and localized electronic states, and carrier mobility and effective mass.
ADVANCES Innovative transport mechanisms are the fountain of youth of TE materials research. In the past two decades, many potentially paradigm-changing mechanisms were identified, e.g., resonant levels, modulation doping, band convergence, classical and quantum size effects, anharmonicity, the Rashba effect, the spin Seebeck effect, and topological states. These mechanisms embody the current states of understanding and manipulating the interplay among the charge, lattice, orbital, and spin degrees of freedom in TE materials. Many strategies were successfully implemented in a wide range of materials, e.g., V2VI3 compounds, VVI compounds, filled skutterudites and clathrates, half-Heusler alloys, diamond-like structured compounds, Zintl phases, oxides and mixed-anion oxides, silicides, transition metal chalcogenides, and organic materials. In addition, advanced material synthesis and processing techniques, for example, melt spinning, self-sustaining heating synthesis, and field-assisted sintering, helped reach a much broader phase space where traditional metallurgy and melt-growth recipes fell short. Given the ubiquity of heat and the modular aspects of TE devices, these advances ensure that thermoelectrics plays an important role as part of a solutions package to address our global energy needs.
OUTLOOK The emerging roles of spin and orbital states, new breakthroughs in multiscale defect engineering, and controlled anharmonicity may hold the key to developing next-generation TE materials. To accelerate exploring the broad phase space of higher multinary compounds, we need a synergy of theory, machine learning, three-dimensional printing, and fast experimental characterizations. We expect this synergy to help refine current materials selection and make TE materials research more data driven. We also expect increasing efforts to develop high-performance materials out of nontoxic and earth-abundant elements. The desire to move away from Freon and other refrigerant-based cooling should shift TE materials research from power generation to solid-state refrigeration. International round-robin measurements to cross-check the high ZT values of emerging materials will help identify those that hold the most promise. We hope the renewable energy landscape will be reshaped if the recent trend of progress continues into the foreseeable future.
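For reference, the figure of merit ZT discussed throughout has the standard definition (background knowledge; the abstract itself does not spell it out):

```latex
ZT = \frac{S^{2}\sigma}{\kappa}\,T
% S: Seebeck coefficient; \sigma: electrical conductivity (the inverse of
% the resistivity named above); \kappa: total thermal conductivity;
% T: absolute temperature. The optimization problem is hard because S,
% \sigma, and \kappa are adversely interdependent.
```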

1,457 citations


Journal ArticleDOI
TL;DR: The authors quantified China's anthropogenic emission trends from 2010 to 2017 and identified the major driving forces of these trends by using a combination of bottom-up emission inventory and index decomposition analysis (IDA) approaches.
Abstract: To tackle the problem of severe air pollution, China has implemented active clean air policies in recent years. As a consequence, the emissions of major air pollutants have decreased and the air quality has substantially improved. Here, we quantified China's anthropogenic emission trends from 2010 to 2017 and identified the major driving forces of these trends by using a combination of bottom-up emission inventory and index decomposition analysis (IDA) approaches. The relative change rates of China's anthropogenic emissions during 2010–2017 are estimated as follows: −62 % for SO2, −17 % for NOx, +11 % for nonmethane volatile organic compounds (NMVOCs), +1 % for NH3, −27 % for CO, −38 % for PM10, −35 % for PM2.5, −27 % for BC, −35 % for OC, and +16 % for CO2. The IDA results suggest that emission control measures are the main drivers of this reduction, in which the pollution controls on power plants and industries are the most effective mitigation measures. The emission reduction rates markedly accelerated after the year 2013, confirming the effectiveness of China's Clean Air Action that has been implemented since 2013. We estimated that during 2013–2017, China's anthropogenic emissions decreased by 59 % for SO2, 21 % for NOx, 23 % for CO, 36 % for PM10, 33 % for PM2.5, 28 % for BC, and 32 % for OC. NMVOC emissions increased and NH3 emissions remained stable during 2010–2017, reflecting the absence of effective mitigation measures for NMVOCs and NH3 in current policies. The relative contributions of different sectors to emissions have changed significantly after several years' implementation of clean air policies, indicating that it is paramount to introduce new policies to enable further emission reductions in the future.
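For readers unfamiliar with index decomposition analysis, the sketch below shows the generic LMDI form such a decomposition takes: the change in total emissions is split into additive contributions of individual drivers via a logarithmic mean. This is a textbook single-driver sketch with made-up numbers, not the paper's actual multi-factor decomposition:

```python
import math

def logmean(a, b):
    """Logarithmic mean L(a, b), the weighting function of LMDI."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_effect(E0, E1, x0, x1):
    """Additive contribution of one driver x (e.g., activity level, energy
    intensity, emission factor) to the emission change between a base
    year (0) and an end year (1), summed over sectors."""
    return sum(logmean(e0, e1) * math.log(b / a)
               for e0, e1, a, b in zip(E0, E1, x0, x1))

# Two sectors, one driver: with a single driver the effect equals the
# observed emission change exactly (the defining property of LMDI-I).
print(lmdi_effect(E0=[100, 50], E1=[80, 55], x0=[1.0, 1.0], x1=[0.8, 1.1]))
# -> -15.0, equal to (80 + 55) - (100 + 50)
```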

1,456 citations


Posted Content
TL;DR: This work proposes BERTScore, an automatic evaluation metric for text generation that correlates better with human judgments and provides stronger model selection performance than existing metrics.
Abstract: We propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTScore correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task to show that BERTScore is more robust to challenging examples when compared to existing metrics.
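A stripped-down sketch of the greedy-matching computation described above, assuming token embeddings are already available as NumPy arrays; the released bert-score package additionally handles tokenization, baseline rescaling, and importance weighting:

```python
import numpy as np

def bertscore_f1(cand_emb, ref_emb):
    """Greedy-matching similarity in the style of BERTScore: cosine
    similarity between every candidate/reference token pair, precision
    from the best reference match per candidate token, recall from the
    best candidate match per reference token."""
    c = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = c @ r.T                       # (num_cand, num_ref) cosine matrix
    precision = sim.max(axis=1).mean()  # per candidate token
    recall = sim.max(axis=0).mean()     # per reference token
    return 2 * precision * recall / (precision + recall)
```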

Journal ArticleDOI
TL;DR: Selective laser melting (SLM) is a rapid prototyping, 3D printing, or additive manufacturing (AM) technique that uses a high-power-density laser to melt and fuse metallic powders.
Abstract: Selective Laser Melting (SLM) is a particular rapid prototyping, 3D printing, or Additive Manufacturing (AM) technique designed to use a high-power-density laser to melt and fuse metallic powders. A component is built by selectively melting and fusing powders within and between layers. The SLM technique is also commonly known as direct selective laser sintering, LaserCusing, and direct metal laser sintering, and this technique has been proven to produce near net-shape parts up to 99.9% relative density. This enables the process to build near full density functional parts and has viable economic benefits. Recent developments of fibre optics and high-power lasers have also enabled SLM to process different metallic materials, such as copper, aluminium, and tungsten. Similarly, this has also opened up research opportunities in SLM of ceramic and composite materials. The review presents the SLM process and some of the common physical phenomena associated with this AM technology. It then focuses on the following a...

Journal ArticleDOI
TL;DR: The cross-platform software tool, TempEst (formerly known as Path-O-Gen), is introduced, for the visualization and analysis of temporally sampled sequence data and can be used to assess whether there is sufficient temporal signal in the data to proceed with phylogenetic molecular clock analysis, and identify sequences whose genetic divergence and sampling date are incongruent.
Abstract: Gene sequences sampled at different points in time can be used to infer molecular phylogenies on a natural timescale of months or years, provided that the sequences in question undergo measurable amounts of evolutionary change between sampling times. Data sets with this property are termed heterochronous and have become increasingly common in several fields of biology, most notably the molecular epidemiology of rapidly evolving viruses. Here we introduce the cross-platform software tool, TempEst (formerly known as Path-O-Gen), for the visualization and analysis of temporally sampled sequence data. Given a molecular phylogeny and the dates of sampling for each sequence, TempEst uses an interactive regression approach to explore the association between genetic divergence through time and sampling dates. TempEst can be used to (1) assess whether there is sufficient temporal signal in the data to proceed with phylogenetic molecular clock analysis, and (2) identify sequences whose genetic divergence and sampling date are incongruent. Examination of the latter can help identify data quality problems, including errors in data annotation, sample contamination, sequence recombination, or alignment error. We recommend that all users of the molecular clock models implemented in BEAST first check their data using TempEst prior to analysis.
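The regression TempEst explores is an ordinary root-to-tip fit of genetic divergence against sampling date; a minimal sketch with made-up numbers (the real tool operates on an input phylogeny, not a table):

```python
import numpy as np

# Root-to-tip regression: divergence (substitutions/site from the root)
# versus sampling date. A positive, well-fitting slope suggests enough
# temporal signal for molecular-clock analysis; outliers flag sequences
# whose divergence and sampling date are incongruent. Toy data only.
dates = np.array([2000.1, 2003.4, 2007.9, 2012.2, 2015.7])
divergence = np.array([0.001, 0.004, 0.009, 0.013, 0.017])
slope, intercept = np.polyfit(dates, divergence, 1)
print(f"rate ~ {slope:.2e} subs/site/year, "
      f"root age ~ {-intercept / slope:.1f}")  # x-intercept of the fit
```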

Journal ArticleDOI
21 Aug 2015-Science
TL;DR: This experiment experimentally observed this nonergodic evolution for interacting fermions in a one-dimensional quasirandom optical lattice and identified the MBL transition through the relaxation dynamics of an initially prepared charge density wave.
Abstract: Many-body localization (MBL), the disorder-induced localization of interacting particles, signals a breakdown of conventional thermodynamics because MBL systems do not thermalize and show nonergodic time evolution. We experimentally observed this nonergodic evolution for interacting fermions in a one-dimensional quasirandom optical lattice and identified the MBL transition through the relaxation dynamics of an initially prepared charge density wave. For sufficiently weak disorder, the time evolution appears ergodic and thermalizing, erasing all initial ordering, whereas above a critical disorder strength, a substantial portion of the initial ordering persists. The critical disorder value shows a distinctive dependence on the interaction strength, which is in agreement with numerical simulations. Our experiment paves the way to further detailed studies of MBL, such as in noncorrelated disorder or higher dimensions.

Proceedings Article
05 Dec 2016
TL;DR: This paper proposes a new metric learning objective called multi-class N-pair loss, which generalizes triplet loss by allowing joint comparison among more than one negative examples and reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy using only N pairs of examples.
Abstract: Deep metric learning has gained much popularity in recent years, following the success of deep learning. However, existing frameworks of deep metric learning based on contrastive loss and triplet loss often suffer from slow convergence, partially because they employ only one negative example and do not interact with the other negative classes in each update. In this paper, we propose to address this problem with a new metric learning objective called multi-class N-pair loss. The proposed objective function firstly generalizes triplet loss by allowing joint comparison among more than one negative example (more specifically, N−1 negative examples) and secondly reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy using only N pairs of examples, instead of (N+1) × N. We demonstrate the superiority of our proposed loss over the triplet loss as well as other competing loss functions for a variety of tasks on several visual recognition benchmarks, including fine-grained object recognition and verification, image clustering and retrieval, and face verification and identification.
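Under the batch construction described above, the (N, N) matrix of anchor-positive similarities reduces the loss to softmax cross-entropy with the diagonal as targets; a minimal NumPy sketch, assuming L2-normalized embeddings:

```python
import numpy as np

def n_pair_loss(anchors, positives):
    """Multi-class N-pair loss: given embeddings for N anchors and their
    N positives (one per class), every other positive in the batch serves
    as a negative. Equivalent to softmax cross-entropy over the (N, N)
    similarity matrix with the diagonal as the correct class."""
    logits = anchors @ positives.T                  # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))             # -log p(correct positive)
```

The same softmax-over-similarities form also underlies the multiview contrastive objective described in the next entry.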

Posted Content
TL;DR: Key properties of the multiview contrastive learning approach are analyzed, finding that the contrastive loss outperforms a popular alternative based on cross-view prediction, and that the more views the authors learn from, the better the resulting representation captures underlying scene semantics.
Abstract: Humans view the world through many sensory channels, e.g., the long-wavelength light channel, viewed by the left eye, or the high-frequency vibrations channel, heard by the right ear. Each view is noisy and incomplete, but important factors, such as physics, geometry, and semantics, tend to be shared between all views (e.g., a "dog" can be seen, heard, and felt). We investigate the classic hypothesis that a powerful representation is one that models view-invariant factors. We study this hypothesis under the framework of multiview contrastive learning, where we learn a representation that aims to maximize mutual information between different views of the same scene but is otherwise compact. Our approach scales to any number of views, and is view-agnostic. We analyze key properties of the approach that make it work, finding that the contrastive loss outperforms a popular alternative based on cross-view prediction, and that the more views we learn from, the better the resulting representation captures underlying scene semantics. Our approach achieves state-of-the-art results on image and video unsupervised learning benchmarks. Code is released at: this http URL.

Proceedings ArticleDOI
19 Aug 2018
TL;DR: SentencePiece, a language-independent subword tokenizer and detokenizer designed for neural text processing, is presented; validation experiments find that training subword models directly from raw sentences achieves accuracy comparable to conventional pre-tokenized pipelines.
Abstract: This paper describes SentencePiece, a language-independent subword tokenizer and detokenizer designed for Neural-based text processing, including Neural Machine Translation. It provides open-source C++ and Python implementations for subword units. While existing subword segmentation tools assume that the input is pre-tokenized into word sequences, SentencePiece can train subword models directly from raw sentences, which allows us to make a purely end-to-end and language independent system. We perform a validation experiment of NMT on English-Japanese machine translation, and find that it is possible to achieve comparable accuracy to direct subword training from raw sentences. We also compare the performance of subword training and segmentation with various configurations. SentencePiece is available under the Apache 2 license at https://github.com/google/sentencepiece.
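Typical usage of the released Python bindings looks like the following; the corpus path, vocabulary size, and printed pieces are illustrative:

```python
import sentencepiece as spm

# Train a subword model directly on raw text (no pre-tokenization needed)
# and round-trip a sentence through it.
spm.SentencePieceTrainer.train(
    input="corpus.txt", model_prefix="m", vocab_size=8000)

sp = spm.SentencePieceProcessor(model_file="m.model")
pieces = sp.encode("Hello world.", out_type=str)
print(pieces)             # e.g. ['▁He', 'llo', '▁world', '.']
print(sp.decode(pieces))  # lossless detokenization: 'Hello world.'
```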

Journal ArticleDOI
TL;DR: In this paper, the authors presented new models for low-mass stars down to the hydrogen-burning limit that consistently couple atmosphere and interior structures, thereby superseding the widely used BCAH98 models.
Abstract: We present new models for low-mass stars down to the hydrogen-burning limit that consistently couple atmosphere and interior structures, thereby superseding the widely used BCAH98 models. The new models include updated molecular linelists and solar abundances, as well as atmospheric convection parameters calibrated on 2D/3D radiative hydrodynamics simulations. Comparison of these models with observations in various colour-magnitude diagrams for various ages shows significant improvement over previous generations of models. The new models can solve flaws that are present in the previous ones, such as the prediction of optical colours that are too blue compared to M dwarf observations. They can also reproduce the four components of the young quadruple system LkCa 3 in a colour-magnitude diagram with one single isochrone, in contrast to any presently existing model. In this paper we also highlight the need for consistency when comparing models and observations, with the necessity of using evolutionary models and colours based on the same atmospheric structures.

Journal ArticleDOI
01 Jan 2016
TL;DR: This paper provides a review of how statistical models can be “trained” on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph) and how such statistical models of graphs can be combined with text-based information extraction methods for automatically constructing knowledge graphs from the Web.
Abstract: Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be “trained” on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph). In particular, we discuss two fundamentally different kinds of statistical relational models, both of which can scale to massive data sets. The first is based on latent feature models such as tensor factorization and multiway neural networks. The second is based on mining observable patterns in the graph. We also show how to combine these latent and observable models to get improved modeling power at decreased computational cost. Finally, we discuss how such statistical models of graphs can be combined with text-based information extraction methods for automatically constructing knowledge graphs from the Web. To this end, we also discuss Google's knowledge vault project as an example of such combination.
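As a concrete instance of the latent feature models discussed above, the sketch below scores candidate edges with a bilinear-diagonal factorization (DistMult-style); the entities, relation, and random vectors are illustrative stand-ins for learned embeddings:

```python
import numpy as np

def bilinear_diag_score(h, r, t):
    """Score of a candidate triple (head, relation, tail) under a
    bilinear-diagonal latent feature model: each entity and relation is
    a vector, and plausible edges should receive high scores after the
    vectors are trained against observed edges."""
    return float(np.sum(h * r * t))

rng = np.random.default_rng(0)
entity = {e: rng.normal(size=50) for e in ["Obama", "USA", "Hawaii"]}
relation = {"born_in": rng.normal(size=50)}

# Rank candidate tails for (Obama, born_in, ?). With untrained random
# vectors the scores are meaningless; training fits them to the graph.
for tail in ["USA", "Hawaii"]:
    print(tail, bilinear_diag_score(entity["Obama"],
                                    relation["born_in"], entity[tail]))
```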

Journal ArticleDOI
Amit Agrawal
TL;DR: The global diabetes prevalence among 20–79-year-olds in 2021 was estimated at 10.5% (536.6 million people), rising to 12.2% (783.2 million) by 2045.

Book
30 Jun 2020
TL;DR: Fausto-Sterling argues that even the most fundamental knowledge about sex is shaped by the culture in which scientific knowledge is produced, and that individuals born as mixtures of male and female exist as one of five natural human variants and should not be forced to compromise their differences to fit a flawed societal definition of normality.
Abstract: Why do some people prefer heterosexual love while others fancy the same sex? Is sexual identity biologically determined or a product of convention? In this brilliant and provocative book, the acclaimed author of Myths of Gender argues that even the most fundamental knowledge about sex is shaped by the culture in which scientific knowledge is produced.Drawing on astonishing real-life cases and a probing analysis of centuries of scientific research, Fausto-Sterling demonstrates how scientists have historically politicized the body. In lively and impassioned prose, she breaks down three key dualisms - sex/gender, nature/nurture, and real/constructed - and asserts that individuals born as mixtures of male and female exist as one of five natural human variants and, as such, should not be forced to compromise their differences to fit a flawed societal definition of normality.

Proceedings ArticleDOI
12 Mar 2018
TL;DR: This paper proposes Grad-CAM++, which uses a weighted combination of the positive partial derivatives of the last convolutional layer feature maps with respect to a specific class score as weights to generate a visual explanation for the class label under consideration, to provide better visual explanations of CNN model predictions.
Abstract: Over the last decade, Convolutional Neural Network (CNN) models have been highly successful in solving complex vision based problems. However, deep models are perceived as "black box" methods considering the lack of understanding of their internal functioning. There has been a significant recent interest to develop explainable deep learning models, and this paper is an effort in this direction. Building on a recently proposed method called Grad-CAM, we propose Grad-CAM++ to provide better visual explanations of CNN model predictions (when compared to Grad-CAM), in terms of better localization of objects as well as explaining occurrences of multiple objects of a class in a single image. We provide a mathematical explanation for the proposed method, Grad-CAM++, which uses a weighted combination of the positive partial derivatives of the last convolutional layer feature maps with respect to a specific class score as weights to generate a visual explanation for the class label under consideration. Our extensive experiments and evaluations, both subjective and objective, on standard datasets showed that Grad-CAM++ indeed provides better visual explanations for a given CNN architecture when compared to Grad-CAM.
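A compact sketch of the Grad-CAM++ weighting given one convolutional layer's activations and class-score gradients (which the caller would obtain via framework hooks); it follows the paper's closed-form coefficients under its exponential score assumption, but the NumPy setting and function names are ours:

```python
import numpy as np

def grad_cam_pp(activations, grads):
    """Grad-CAM++ saliency map from a conv layer's forward activations
    A (K, H, W) and the gradients of the class score w.r.t. them (same
    shape). Channel weights are alpha-weighted sums of the *positive*
    partial derivatives, as described in the abstract above."""
    g2, g3 = grads ** 2, grads ** 3
    # alpha_ij^k: spatial weighting of each gradient within channel k
    denom = 2.0 * g2 + activations.sum(axis=(1, 2), keepdims=True) * g3
    denom = np.where(denom != 0.0, denom, 1e-8)
    alpha = g2 / denom
    weights = (alpha * np.maximum(grads, 0.0)).sum(axis=(1, 2))  # per channel
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    return cam / (cam.max() + 1e-8)   # normalized (H, W) heat map
```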

Journal ArticleDOI
TL;DR: In this paper, the large-scale synthesis, crystal structure, and optical characterization of the 2D (CH3(CH2)3NH3)2(CH3NH3)n−1PbnI3n+1 (n = 1, 2, 3, 4, ∞) perovskites are presented.
Abstract: The hybrid two-dimensional (2D) halide perovskites have recently drawn significant interest because they can serve as excellent photoabsorbers in perovskite solar cells. Here we present the large scale synthesis, crystal structure, and optical characterization of the 2D (CH3(CH2)3NH3)2(CH3NH3)n−1PbnI3n+1 (n = 1, 2, 3, 4, ∞) perovskites, a family of layered compounds with tunable semiconductor characteristics. These materials consist of well-defined inorganic perovskite layers intercalated with bulky butylammonium cations that act as spacers between these fragments, adopting the crystal structure of the Ruddlesden–Popper type. We find that the perovskite thickness (n) can be synthetically controlled by adjusting the ratio between the spacer cation and the small organic cation, thus allowing the isolation of compounds in pure form and large scale. The orthorhombic crystal structures of (CH3(CH2)3NH3)2(CH3NH3)Pb2I7 (n = 2, Cc2m; a = 8.9470(4) Å, b = 39.347(2) Å, c = 8.8589(6) Å), (CH3(CH2)3NH3)2(CH3NH3)2Pb3I10 (...

Journal ArticleDOI
16 May 2017
TL;DR: OncoKB, a comprehensive and curated precision oncology knowledge base, offers oncologists detailed, evidence-based information about individual somatic mutations and structural alterations present in patient tumors with the goal of supporting optimal treatment decisions.
Abstract: PurposeWith prospective clinical sequencing of tumors emerging as a mainstay in cancer care, an urgent need exists for a clinical support tool that distills the clinical implications associated with specific mutation events into a standardized and easily interpretable format. To this end, we developed OncoKB, an expert-guided precision oncology knowledge base.MethodsOncoKB annotates the biologic and oncogenic effects and prognostic and predictive significance of somatic molecular alterations. Potential treatment implications are stratified by the level of evidence that a specific molecular alteration is predictive of drug response on the basis of US Food and Drug Administration labeling, National Comprehensive Cancer Network guidelines, disease-focused expert group recommendations, and scientific literature.ResultsTo date, > 3,000 unique mutations, fusions, and copy number alterations in 418 cancer-associated genes have been annotated. To test the utility of OncoKB, we annotated all genomic events in 5,98...

Journal ArticleDOI
TL;DR: New persistent opioid use after surgery is common and is not significantly different between minor and major surgical procedures but rather associated with behavioral and pain disorders, which suggests its use is not due to surgical pain but addressable patient-level predictors.
Abstract: Importance Despite increased focus on reducing opioid prescribing for long-term pain, little is known regarding the incidence and risk factors for persistent opioid use after surgery. Objective To determine the incidence of new persistent opioid use after minor and major surgical procedures. Design, Setting, and Participants Using a nationwide insurance claims data set from 2013 to 2014, we identified US adults aged 18 to 64 years without opioid use in the year prior to surgery (ie, no opioid prescription fulfillments from 12 months to 1 month prior to the procedure). For patients filling a perioperative opioid prescription, we calculated the incidence of persistent opioid use for more than 90 days among opioid-naive patients after both minor surgical procedures (ie, varicose vein removal, laparoscopic cholecystectomy, laparoscopic appendectomy, hemorrhoidectomy, thyroidectomy, transurethral prostate surgery, parathyroidectomy, and carpal tunnel) and major surgical procedures (ie, ventral incisional hernia repair, colectomy, reflux surgery, bariatric surgery, and hysterectomy). We then assessed data for patient-level predictors of persistent opioid use. Main Outcomes and Measures The primary outcome was defined a priori prior to data extraction. The primary outcome was new persistent opioid use, which was defined as an opioid prescription fulfillment between 90 and 180 days after the surgical procedure. Results A total of 36 177 patients met the inclusion criteria, with 29 068 (80.3%) receiving minor surgical procedures and 7109 (19.7%) receiving major procedures. The cohort had a mean (SD) age of 44.6 (11.9) years and was predominately female (23 913 [66.1%]) and white (26 091 [72.1%]). The rates of new persistent opioid use were similar between the 2 groups, ranging from 5.9% to 6.5%. By comparison, the incidence in the nonoperative control cohort was only 0.4%. Risk factors independently associated with new persistent opioid use included preoperative tobacco use (adjusted odds ratio [aOR], 1.35; 95% CI, 1.21-1.49), alcohol and substance abuse disorders (aOR, 1.34; 95% CI, 1.05-1.72), mood disorders (aOR, 1.15; 95% CI, 1.01-1.30), anxiety (aOR, 1.25; 95% CI, 1.10-1.42), and preoperative pain disorders (back pain: aOR, 1.57; 95% CI, 1.42-1.75; neck pain: aOR, 1.22; 95% CI, 1.07-1.39; arthritis: aOR, 1.56; 95% CI, 1.40-1.73; and centralized pain: aOR, 1.39; 95% CI, 1.26-1.54). Conclusions and Relevance New persistent opioid use after surgery is common and is not significantly different between minor and major surgical procedures but rather associated with behavioral and pain disorders. This suggests its use is not due to surgical pain but addressable patient-level predictors. New persistent opioid use represents a common but previously underappreciated surgical complication that warrants increased awareness.

Proceedings Article
25 Apr 2018
TL;DR: This work introduces a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, “sufficient” conditions for predictions, and proposes an algorithm to efficiently compute these explanations for any black-box model with high probability guarantees.
Abstract: We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, "sufficient" conditions for predictions. We propose an algorithm to efficiently compute these explanations for any black-box model with high-probability guarantees. We demonstrate the flexibility of anchors by explaining a myriad of different models for different domains and tasks. In a user study, we show that anchors enable users to predict how a model would behave on unseen instances with less effort and higher precision, as compared to existing linear explanations or no explanations.
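The quantity the anchor search optimizes is the rule's precision under perturbation; a minimal sketch of estimating it for one candidate rule, where `model.predict` (sklearn-style) and `sample_fn` are assumed helpers and the paper's bandit-based search over rules is omitted:

```python
import numpy as np

def rule_precision(model, instance, rule_dims, sample_fn, n=1000):
    """Estimate the precision of a candidate anchor: the fraction of
    perturbed inputs that keep the rule's features fixed at the
    instance's values yet still receive the same prediction as the
    instance itself. `sample_fn(n)` is an assumed helper drawing n
    perturbations (rows) from the data distribution."""
    target = model.predict(instance[None, :])[0]
    samples = sample_fn(n)
    samples[:, rule_dims] = instance[rule_dims]  # hold anchor features fixed
    return float(np.mean(model.predict(samples) == target))
```

A rule is accepted as an anchor once this precision is high with high probability, which is exactly the "sufficient condition" reading in the abstract.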

Book
01 Oct 2015
TL;DR: The Critique of Practical Reason as mentioned in this paper criticizes the entire practical faculty of reason, including the pure faculty itself, in order to see whether reason in making such a claim does not presumptuously overstep itself (as is the case with speculative reason).
Abstract: This work is called the Critique of Practical Reason, not of the pure practical reason, although its parallelism with the speculative critique would seem to require the latter term. The reason of this appears sufficiently from the treatise itself. Its business is to show that there is pure practical reason, and for this purpose it criticizes the entire practical faculty of reason. If it succeeds in this, it has no need to criticize the pure faculty itself in order to see whether reason in making such a claim does not presumptuously overstep itself (as is the case with the speculative reason). For if, as pure reason, it is actually practical, it proves its own reality and that of its concepts by fact, and all disputation against the possibility of its being real is futile.

Journal ArticleDOI
21 Jul 2020-JAMA
TL;DR: There was a wide spectrum of presenting signs and symptoms and disease severity, ranging from fever and inflammation to myocardial injury, shock, and development of coronary artery aneurysms, and comparison with the characteristics of other pediatric inflammatory disorders.
Abstract: Importance In communities with high rates of coronavirus disease 2019, reports have emerged of children with an unusual syndrome of fever and inflammation. Objectives To describe the clinical and laboratory characteristics of hospitalized children who met criteria for the pediatric inflammatory multisystem syndrome temporally associated with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (PIMS-TS) and compare these characteristics with other pediatric inflammatory disorders. Design, Setting, and Participants Case series of 58 children from 8 hospitals in England admitted between March 23 and May 16, 2020, with persistent fever and laboratory evidence of inflammation meeting published definitions for PIMS-TS. The final date of follow-up was May 22, 2020. Clinical and laboratory characteristics were abstracted by medical record review, and were compared with clinical characteristics of patients with Kawasaki disease (KD) (n = 1132), KD shock syndrome (n = 45), and toxic shock syndrome (n = 37) who had been admitted to hospitals in Europe and the US from 2002 to 2019. Exposures Signs and symptoms and laboratory and imaging findings of children who met definitional criteria for PIMS-TS from the UK, the US, and World Health Organization. Main Outcomes and Measures Clinical, laboratory, and imaging characteristics of children meeting definitional criteria for PIMS-TS, and comparison with the characteristics of other pediatric inflammatory disorders. Results Fifty-eight children (median age, 9 years [interquartile range {IQR}, 5.7-14]; 20 girls [34%]) were identified who met the criteria for PIMS-TS. Results from SARS-CoV-2 polymerase chain reaction tests were positive in 15 of 58 patients (26%) and SARS-CoV-2 IgG test results were positive in 40 of 46 (87%). In total, 45 of 58 patients (78%) had evidence of current or prior SARS-CoV-2 infection. All children presented with fever and nonspecific symptoms, including vomiting (26/58 [45%]), abdominal pain (31/58 [53%]), and diarrhea (30/58 [52%]). Rash was present in 30 of 58 (52%), and conjunctival injection in 26 of 58 (45%) cases. Laboratory evaluation was consistent with marked inflammation, for example, C-reactive protein (229 mg/L [IQR, 156-338], assessed in 58 of 58) and ferritin (610 μg/L [IQR, 359-1280], assessed in 53 of 58). Of the 58 children, 29 developed shock (with biochemical evidence of myocardial dysfunction) and required inotropic support and fluid resuscitation (including 23/29 [79%] who received mechanical ventilation); 13 met the American Heart Association definition of KD, and 23 had fever and inflammation without features of shock or KD. Eight patients (14%) developed coronary artery dilatation or aneurysm. Comparison of PIMS-TS with KD and with KD shock syndrome showed differences in clinical and laboratory features, including older age (median age, 9 years [IQR, 5.7-14] vs 2.7 years [IQR, 1.4-4.7] and 3.8 years [IQR, 0.2-18], respectively), and greater elevation of inflammatory markers such as C-reactive protein (median, 229 mg/L [IQR 156-338] vs 67 mg/L [IQR, 40-150 mg/L] and 193 mg/L [IQR, 83-237], respectively). Conclusions and Relevance In this case series of hospitalized children who met criteria for PIMS-TS, there was a wide spectrum of presenting signs and symptoms and disease severity, ranging from fever and inflammation to myocardial injury, shock, and development of coronary artery aneurysms. 
The comparison with patients with KD and KD shock syndrome provides insights into this syndrome, and suggests this disorder differs from other pediatric inflammatory entities.

Journal ArticleDOI
TL;DR: It is hypothesized that Fe is the most-active site in the catalyst, while CoOOH primarily provides a conductive, high-surface area, chemically stabilizing host.
Abstract: Cobalt oxides and (oxy)hydroxides have been widely studied as electrocatalysts for the oxygen evolution reaction (OER). For related Ni-based materials, the addition of Fe dramatically enhances OER activity. The role of Fe in Co-based materials is not well-documented. We show that the intrinsic OER activity of Co1–xFex(OOH) is ∼100-fold higher for x ≈ 0.6–0.7 than for x = 0 on a per-metal turnover frequency basis. Fe-free CoOOH absorbs Fe from electrolyte impurities if the electrolyte is not rigorously purified. Fe incorporation and increased activity correlate with an anodic shift in the nominally Co2+/3+ redox wave, indicating strong electronic interactions between the two elements and likely substitutional doping of Fe for Co. In situ electrical measurements show that Co1–xFex(OOH) is conductive under OER conditions (∼0.7–4 mS cm–1 at ∼300 mV overpotential), but that FeOOH is an insulator with measurable conductivity (2.2 × 10–2 mS cm–1) only at high overpotentials >400 mV. The apparent OER activity of ...

Posted Content
TL;DR: In this paper, a large-scale dataset for RGB+D human action recognition was introduced with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects.
Abstract: Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art hand-crafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis.