
Showing papers by "University of Washington" published in 2018


Proceedings ArticleDOI
15 Feb 2018
TL;DR: This paper introduced a new type of deep contextualized word representation that models both complex characteristics of word use (e.g., syntax and semantics), and how these uses vary across linguistic contexts (i.e., to model polysemy).
Abstract: We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
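
ELMo's word vectors are a task-weighted combination of the biLM's layer activations. Below is a minimal PyTorch sketch of that scalar layer-mixing step (the function name and toy shapes are illustrative assumptions; the released implementation differs):

```python
import torch

def elmo_combine(bilm_layers: torch.Tensor,
                 scalar_weights: torch.Tensor,
                 gamma: torch.Tensor) -> torch.Tensor:
    """Collapse frozen biLM layer activations into one vector per token:
    ELMo_k = gamma * sum_j softmax(w)_j * h_{k,j}.

    bilm_layers:    (num_layers, seq_len, dim) hidden states
    scalar_weights: (num_layers,) task-learned logits
    gamma:          scalar task-learned scale
    """
    s = torch.softmax(scalar_weights, dim=0)         # normalized layer weights
    mixed = (s[:, None, None] * bilm_layers).sum(0)  # weighted sum over layers
    return gamma * mixed                             # (seq_len, dim)

# Toy usage: 3 biLM layers, 5 tokens, 8-dimensional states.
layers = torch.randn(3, 5, 8)
w = torch.zeros(3, requires_grad=True)  # learned jointly with the downstream task
g = torch.ones(1, requires_grad=True)
vectors = elmo_combine(layers, w, g)
```

Letting each downstream task learn its own mixing weights is what "exposing the deep internals" means in practice: tasks can weight lower (more syntactic) and higher (more semantic) layers differently.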

7,412 citations


Journal ArticleDOI
Gregory A. Roth1, Gregory A. Roth2, Degu Abate3, Kalkidan Hassen Abate4, +1025 more · Institutions (333)
TL;DR: Non-communicable diseases comprised the greatest fraction of deaths, contributing to 73·4% (95% uncertainty interval [UI] 72·5–74·1) of total deaths in 2017, while communicable, maternal, neonatal, and nutritional causes accounted for 18·6% (17·9–19·6), and injuries 8·0% (7·7–8·2).

5,211 citations


Journal ArticleDOI
TL;DR: In this paper, the authors assess the burden of 29 cancer groups over time to provide a framework for policy discussion, resource allocation, and research focus, and evaluate cancer incidence, mortality, years lived with disability, years of life lost, and disability-adjusted life-years (DALYs) for 195 countries and territories by age and sex using the Global Burden of Disease study estimation methods.
Abstract: Importance The increasing burden due to cancer and other noncommunicable diseases poses a threat to human development, which has resulted in global political commitments reflected in the Sustainable Development Goals as well as the World Health Organization (WHO) Global Action Plan on Non-Communicable Diseases. To determine if these commitments have resulted in improved cancer control, quantitative assessments of the cancer burden are required. Objective To assess the burden for 29 cancer groups over time to provide a framework for policy discussion, resource allocation, and research focus. Evidence Review Cancer incidence, mortality, years lived with disability, years of life lost, and disability-adjusted life-years (DALYs) were evaluated for 195 countries and territories by age and sex using the Global Burden of Disease study estimation methods. Levels and trends were analyzed over time, as well as by the Sociodemographic Index (SDI). Changes in incident cases were categorized by changes due to epidemiological vs demographic transition. Findings In 2016, there were 17.2 million cancer cases worldwide and 8.9 million deaths. Cancer cases increased by 28% between 2006 and 2016. The smallest increase was seen in high SDI countries. Globally, population aging contributed 17%; population growth, 12%; and changes in age-specific rates, −1% to this change. The most common incident cancer globally for men was prostate cancer (1.4 million cases). The leading cause of cancer deaths and DALYs was tracheal, bronchus, and lung cancer (1.2 million deaths and 25.4 million DALYs). For women, the most common incident cancer and the leading cause of cancer deaths and DALYs was breast cancer (1.7 million incident cases, 535 000 deaths, and 14.9 million DALYs). In 2016, cancer caused 213.2 million DALYs globally for both sexes combined. Between 2006 and 2016, the average annual age-standardized incidence rates for all cancers combined increased in 130 of 195 countries or territories, and the average annual age-standardized death rates decreased within that timeframe in 143 of 195 countries or territories. Conclusions and Relevance Large disparities exist between countries in cancer incidence, deaths, and associated disability. Scaling up cancer prevention and ensuring universal access to cancer care are required for health equity and to fulfill the global commitments for noncommunicable disease and cancer control.
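
As a quick arithmetic check on the decomposition reported above, the three drivers of change in incident cases sum to the overall increase:

```latex
\Delta_{\text{total}} = \Delta_{\text{aging}} + \Delta_{\text{growth}} + \Delta_{\text{rates}}
                      = 17\% + 12\% + (-1\%) = 28\%
```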

4,621 citations


Journal ArticleDOI
Lorenzo Galluzzi1, Lorenzo Galluzzi2, Ilio Vitale3, Stuart A. Aaronson4, +183 more · Institutions (111)
TL;DR: The Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives.
Abstract: Over the past decade, the Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives. Since the field continues to expand and novel mechanisms that orchestrate multiple cell death pathways are unveiled, we propose an updated classification of cell death subroutines focusing on mechanistic and essential (as opposed to correlative and dispensable) aspects of the process. As we provide molecularly oriented definitions of terms including intrinsic apoptosis, extrinsic apoptosis, mitochondrial permeability transition (MPT)-driven necrosis, necroptosis, ferroptosis, pyroptosis, parthanatos, entotic cell death, NETotic cell death, lysosome-dependent cell death, autophagy-dependent cell death, immunogenic cell death, cellular senescence, and mitotic catastrophe, we discuss the utility of neologisms that refer to highly specialized instances of these processes. The mission of the NCCD is to provide a widely accepted nomenclature on cell death in support of the continued development of the field.

3,301 citations


Proceedings ArticleDOI
01 Nov 2018
TL;DR: GLUE, as introduced in this paper, is a benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models' understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models.
Abstract: Human ability to understand language is general, flexible, and robust. In contrast, most NLU models above the word level are designed for a specific task and struggle with out-of-domain data. If we aspire to develop models with understanding beyond the detection of superficial correspondences between inputs and outputs, then it is critical to develop a unified model that can execute a range of linguistic tasks across different domains. To facilitate research in this direction, we present the General Language Understanding Evaluation (GLUE, gluebenchmark.com): a benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models for understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models. For some benchmark tasks, training data is plentiful, but for others it is limited or does not match the genre of the test set. GLUE thus favors models that can represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge-transfer across tasks. While none of the datasets in GLUE were created from scratch for the benchmark, four of them feature privately-held test data, which is used to ensure that the benchmark is used fairly. We evaluate baselines that use ELMo (Peters et al., 2018), a powerful transfer learning technique, as well as state-of-the-art sentence representation models. The best models still achieve fairly low absolute scores. Analysis with our diagnostic dataset yields similarly weak performance over all phenomena tested, with some exceptions.
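
As a hedged illustration of how a GLUE-style leaderboard summarizes performance, the headline number is a macro-average over the nine per-task scores (the task names are GLUE's; the scores below are made up):

```python
# Illustrative per-task scores (percentages). GLUE's official metrics vary
# by task (e.g., Matthews correlation for CoLA, F1/accuracy for QQP and MRPC).
task_scores = {
    "CoLA": 35.0, "SST-2": 90.2, "MRPC": 84.4, "STS-B": 78.0, "QQP": 64.8,
    "MNLI": 76.4, "QNLI": 79.9, "RTE": 56.8, "WNLI": 65.1,
}

# Macro-average: every task counts equally regardless of dataset size,
# which rewards broad transfer rather than strength on one large task.
glue_score = sum(task_scores.values()) / len(task_scores)
print(f"GLUE (macro-average): {glue_score:.1f}")
```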

3,225 citations


Journal ArticleDOI
Jeffrey D. Stanaway1, Ashkan Afshin1, Emmanuela Gakidou1, Stephen S Lim1, +1050 more · Institutions (346)
TL;DR: This study estimated levels and trends in exposure, attributable deaths, and attributable disability-adjusted life-years (DALYs) by age group, sex, year, and location for 84 behavioural, environmental and occupational, and metabolic risks or groups of risks from 1990 to 2017 and explored the relationship between development and risk exposure.

2,910 citations


Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo sampler (The Joker) is used to perform a search for companions to 96,231 red-giant stars observed in the APOGEE survey (DR14) with ≥3 spectroscopic epochs.
Abstract: Multi-epoch radial velocity measurements of stars can be used to identify stellar, sub-stellar, and planetary-mass companions. Even a small number of observation epochs can be informative about companions, though there can be multiple qualitatively different orbital solutions that fit the data. We have custom-built a Monte Carlo sampler (The Joker) that delivers reliable (and often highly multi-modal) posterior samplings for companion orbital parameters given sparse radial-velocity data. Here we use The Joker to perform a search for companions to 96,231 red-giant stars observed in the APOGEE survey (DR14) with $\geq 3$ spectroscopic epochs. We select stars with probable companions by making a cut on our posterior belief about the amplitude of the stellar radial-velocity variation induced by the orbit. We provide (1) a catalog of 320 companions for which the stellar companion properties can be confidently determined, (2) a catalog of 4,898 stars that likely have companions, but would require more observations to uniquely determine the orbital properties, and (3) posterior samplings for the full orbital parameters for all stars in the parent sample. We show the characteristics of systems with confidently determined companion properties and highlight interesting systems with candidate compact object companions.
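
A toy sketch of The Joker's core idea, prior sampling followed by likelihood-based rejection, restricted here to circular orbits (the real sampler handles eccentric orbits and marginalizes linear parameters analytically; all numbers below are made up):

```python
import numpy as np

def rejection_sample_orbits(t, rv, rv_err, n_prior=100_000, rng=None):
    """Draw orbital parameters from a broad prior, keep draws in
    proportion to their likelihood given sparse RV data."""
    rng = rng or np.random.default_rng(0)
    P   = np.exp(rng.uniform(np.log(2.0), np.log(3000.0), n_prior))  # period [d]
    phi = rng.uniform(0, 2 * np.pi, n_prior)                          # phase
    K   = np.exp(rng.uniform(np.log(0.01), np.log(100.0), n_prior))   # semi-amplitude [km/s]

    model = K[:, None] * np.sin(2 * np.pi * t[None, :] / P[:, None] + phi[:, None])
    chi2 = (((rv[None, :] - model) / rv_err[None, :]) ** 2).sum(axis=1)
    loglike = -0.5 * chi2
    accept = np.log(rng.uniform(size=n_prior)) < (loglike - loglike.max())
    return P[accept], phi[accept], K[accept]

# With only 3 epochs, many disjoint period "modes" typically survive,
# which is the multi-modality the abstract refers to.
t = np.array([0.0, 37.2, 81.5])    # observation times [days]
rv = np.array([5.1, -3.4, 4.0])    # radial velocities [km/s]
rv_err = np.full(3, 0.5)
P, phi, K = rejection_sample_orbits(t, rv, rv_err)
```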

2,564 citations


Journal ArticleDOI
TL;DR: Recommendations for specific organ system-based toxicity diagnosis and management are presented and, in general, permanent discontinuation of ICPis is recommended with grade 4 toxicities, with the exception of endocrinopathies that have been controlled by hormone replacement.
Abstract: Purpose To increase awareness, outline strategies, and offer guidance on the recommended management of immune-related adverse events in patients treated with immune checkpoint inhibitor (ICPi) therapy. Methods A multidisciplinary, multi-organizational panel of experts in medical oncology, dermatology, gastroenterology, rheumatology, pulmonology, endocrinology, urology, neurology, hematology, emergency medicine, nursing, trialist, and advocacy was convened to develop the clinical practice guideline. Guideline development involved a systematic review of the literature and an informal consensus process. The systematic review focused on guidelines, systematic reviews and meta-analyses, randomized controlled trials, and case series published from 2000 through 2017. Results The systematic review identified 204 eligible publications. Much of the evidence consisted of systematic reviews of observational data, consensus guidelines, case series, and case reports. Due to the paucity of high-quality evidence on management…

2,386 citations


Journal ArticleDOI
TL;DR: Nextstrain consists of a database of viral genomes, a bioinformatics pipeline for phylodynamics analysis, and an interactive visualization platform that presents a real-time view into the evolution and spread of a range of viral pathogens of high public health importance.
Abstract: Summary Understanding the spread and evolution of pathogens is important for effective public health measures and surveillance. Nextstrain consists of a database of viral genomes, a bioinformatics pipeline for phylodynamics analysis, and an interactive visualization platform. Together these present a real-time view into the evolution and spread of a range of viral pathogens of high public health importance. The visualization integrates sequence data with other data types such as geographic information, serology, or host species. Nextstrain compiles our current understanding into a single accessible location, open to health professionals, epidemiologists, virologists and the public alike. Availability and implementation All code (predominantly JavaScript and Python) is freely available from github.com/nextstrain and the web-application is available at nextstrain.org.
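
Nextstrain's own pipeline (augur/auspice) is more involved; as a hedged, generic illustration of the kind of step a phylodynamics pipeline performs, here is a minimal distance-based tree built with Biopython (an assumed stand-in, not the tool the paper uses):

```python
from io import StringIO
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Tiny toy alignment standing in for curated viral genomes.
fasta = """>strainA
ACGTACGTAC
>strainB
ACGTACGTTC
>strainC
ACGAACGTTC
"""
aln = AlignIO.read(StringIO(fasta), "fasta")

# Identity-based pairwise distances, then a neighbour-joining tree:
# a crude stand-in for the maximum-likelihood phylogenetics and time
# calibration a real phylodynamic pipeline performs.
dm = DistanceCalculator("identity").get_distance(aln)
tree = DistanceTreeConstructor().nj(dm)
Phylo.draw_ascii(tree)
```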

2,305 citations


Proceedings Article
20 Apr 2018
TL;DR: A benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models for understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models, which favors models that can represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge-transfer across tasks.
Abstract: Human ability to understand language is general, flexible, and robust. In contrast, most NLU models above the word level are designed for a specific task and struggle with out-of-domain data. If we aspire to develop models with understanding beyond the detection of superficial correspondences between inputs and outputs, then it is critical to develop a unified model that can execute a range of linguistic tasks across different domains. To facilitate research in this direction, we present the General Language Understanding Evaluation (GLUE, gluebenchmark.com): a benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models for understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models. For some benchmark tasks, training data is plentiful, but for others it is limited or does not match the genre of the test set. GLUE thus favors models that can represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge-transfer across tasks. While none of the datasets in GLUE were created from scratch for the benchmark, four of them feature privately-held test data, which is used to ensure that the benchmark is used fairly. We evaluate baselines that use ELMo (Peters et al., 2018), a powerful transfer learning technique, as well as state-of-the-art sentence representation models. The best models still achieve fairly low absolute scores. Analysis with our diagnostic dataset yields similarly weak performance over all phenomena tested, with some exceptions.

2,167 citations


Journal ArticleDOI
TL;DR: Substantial agreement was found among a large, interdisciplinary cohort of international experts regarding evidence supporting recommendations, and the remaining literature gaps in the assessment, prevention, and treatment of Pain, Agitation/sedation, Delirium, Immobility (mobilization/rehabilitation), and Sleep (disruption) in critically ill adults.
Abstract: Objective: To update and expand the 2013 Clinical Practice Guidelines for the Management of Pain, Agitation, and Delirium in Adult Patients in the ICU. Design: Thirty-two international experts, four methodologists, and four critical illness survivors met virtually at least monthly. All section groups g…

Journal ArticleDOI
Max Griswold1, Nancy Fullman1, Caitlin Hawley1, Nicholas Arian1, +515 more · Institutions (37)
TL;DR: It is found that the risk of all-cause mortality, and of cancers specifically, rises with increasing levels of consumption, and the level of consumption that minimises health loss is zero.

Journal ArticleDOI
TL;DR: Larotrectinib had marked and durable antitumor activity in patients with TRK fusion–positive cancer, regardless of the age of the patient or of the tumor type.
Abstract: Background Fusions involving one of three tropomyosin receptor kinases (TRK) occur in diverse cancers in children and adults. We evaluated the efficacy and safety of larotrectinib, a highly selective TRK inhibitor, in adults and children who had tumors with these fusions. Methods We enrolled patients with consecutively and prospectively identified TRK fusion–positive cancers, detected by molecular profiling as routinely performed at each site, into one of three protocols: a phase 1 study involving adults, a phase 1–2 study involving children, or a phase 2 study involving adolescents and adults. The primary end point for the combined analysis was the overall response rate according to independent review. Secondary end points included duration of response, progression-free survival, and safety. Results A total of 55 patients, ranging in age from 4 months to 76 years, were enrolled and treated. Patients had 17 unique TRK fusion–positive tumor types. The overall response rate was 75% (95% confidence ...

Journal ArticleDOI
TL;DR: It is found that the antibiotic consumption rate in low- and middle-income countries (LMICs) has been converging to (and in some countries surpassing) levels typically observed in high-income countries, and projected total global antibiotic consumption through 2030 was up to 200% higher than the 42 billion DDDs estimated in 2015.
Abstract: Tracking antibiotic consumption patterns over time and across countries could inform policies to optimize antibiotic prescribing and minimize antibiotic resistance, such as setting and enforcing per capita consumption targets or aiding investments in alternatives to antibiotics. In this study, we analyzed the trends and drivers of antibiotic consumption from 2000 to 2015 in 76 countries and projected total global antibiotic consumption through 2030. Between 2000 and 2015, antibiotic consumption, expressed in defined daily doses (DDD), increased 65% (21.1–34.8 billion DDDs), and the antibiotic consumption rate increased 39% (11.3–15.7 DDDs per 1,000 inhabitants per day). The increase was driven by low- and middle-income countries (LMICs), where rising consumption was correlated with gross domestic product per capita (GDPPC) growth (P = 0.004). In high-income countries (HICs), although overall consumption increased modestly, DDDs per 1,000 inhabitants per day fell 4%, and there was no correlation with GDPPC. Of particular concern was the rapid increase in the use of last-resort compounds, both in HICs and LMICs, such as glycylcyclines, oxazolidinones, carbapenems, and polymyxins. Projections of global antibiotic consumption in 2030, assuming no policy changes, were up to 200% higher than the 42 billion DDDs estimated in 2015. Although antibiotic consumption rates in most LMICs remain lower than in HICs despite higher bacterial disease burden, consumption in LMICs is rapidly converging to rates similar to HICs. Reducing global consumption is critical for reducing the threat of antibiotic resistance, but reduction efforts must balance access limitations in LMICs and take account of local and global resistance patterns.
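
A quick self-consistency check on the 2015 headline figures takes a few lines: dividing total consumption by the reported per-capita rate recovers the population covered by the 76-country sample (an inference for illustration, not a number stated in the paper):

```python
# Paper's 2015 figures: 34.8 billion defined daily doses (DDDs) total,
# at a rate of 15.7 DDDs per 1,000 inhabitants per day.
total_ddd_2015 = 34.8e9
rate_per_1000_per_day = 15.7

ddd_per_day = total_ddd_2015 / 365
implied_population = ddd_per_day * 1000 / rate_per_1000_per_day
print(f"implied population covered: {implied_population / 1e9:.1f} billion")
# ~6.1 billion, consistent with the 76 countries analyzed rather than
# the full world population. The growth figure also checks out:
print(f"growth 2000-2015: {(34.8 - 21.1) / 21.1:.0%}")   # ~65%
```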

Posted Content
TL;DR: This article introduced a new type of deep contextualized word representation that models both complex characteristics of word use (e.g., syntax and semantics), and how these uses vary across linguistic contexts (i.e., to model polysemy).
Abstract: We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.

Proceedings ArticleDOI
18 Jun 2018
TL;DR: This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints.
Abstract: Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier.
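
A hedged PyTorch sketch of the core optimization idea: minimize the targeted loss in expectation over sampled physical transformations, with a mask restricting the perturbation to sticker regions. Variable names, the regularizer, and the transformation set are illustrative, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def rp2_attack(model, image, mask, target_class, transforms,
               steps=300, lr=0.1, lam=1e-3):
    """Toy Robust-Physical-Perturbations-style optimization loop.

    image:      (1, 3, H, W) clean sign photo in [0, 1]
    mask:       (1, 1, H, W) binary region where stickers may go
    transforms: callables modeling physical variation
                (e.g., random crop, rotation, brightness)
    """
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for t in transforms:  # empirical expectation over physical conditions
            adv = torch.clamp(image + mask * delta, 0, 1)
            loss = loss + F.cross_entropy(model(t(adv)), target)
        loss = loss / len(transforms)
        loss = loss + lam * (mask * delta).abs().sum()  # keep stickers small/printable
        loss.backward()
        opt.step()
    return torch.clamp(image + mask * delta, 0, 1).detach()
```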

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Fausto Acernese3, +1235 more · Institutions (132)
TL;DR: This analysis expands upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars.
Abstract: On 17 August 2017, the LIGO and Virgo observatories made the first direct detection of gravitational waves from the coalescence of a neutron star binary system. The detection of this gravitational-wave signal, GW170817, offers a novel opportunity to directly probe the properties of matter at the extreme conditions found in the interior of these stars. The initial, minimal-assumption analysis of the LIGO and Virgo data placed constraints on the tidal effects of the coalescing bodies, which were then translated to constraints on neutron star radii. Here, we expand upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars. Our analysis employs two methods: the use of equation-of-state-insensitive relations between various macroscopic properties of the neutron stars and the use of an efficient parametrization of the defining function $p(\rho)$ of the equation of state itself. From the LIGO and Virgo data alone and the first method, we measure the two neutron star radii as $R_1 = 10.8^{+2.0}_{-1.7}$ km for the heavier star and $R_2 = 10.7^{+2.1}_{-1.5}$ km for the lighter star at the 90% credible level. If we additionally require that the equation of state supports neutron stars with masses larger than 1.97 M⊙ as required from electromagnetic observations and employ the equation-of-state parametrization, we further constrain $R_1 = 11.9^{+1.4}_{-1.4}$ km and $R_2 = 11.9^{+1.4}_{-1.4}$ km at the 90% credible level. Finally, we obtain constraints on $p(\rho)$ at supranuclear densities, with the pressure at twice nuclear saturation density measured at $3.5^{+2.7}_{-1.7} \times 10^{34}$ dyn cm$^{-2}$ at the 90% level.

Journal ArticleDOI
Daniel J. Benjamin1, James O. Berger2, Magnus Johannesson3, Magnus Johannesson1, Brian A. Nosek4, Brian A. Nosek5, Eric-Jan Wagenmakers6, Richard A. Berk7, Kenneth A. Bollen8, Björn Brembs9, Lawrence D. Brown7, Colin F. Camerer10, David Cesarini11, David Cesarini12, Christopher D. Chambers13, Merlise A. Clyde2, Thomas D. Cook14, Thomas D. Cook15, Paul De Boeck16, Zoltan Dienes17, Anna Dreber3, Kenny Easwaran18, Charles Efferson19, Ernst Fehr20, Fiona Fidler21, Andy P. Field17, Malcolm R. Forster22, Edward I. George7, Richard Gonzalez23, Steven N. Goodman24, Edwin J. Green25, Donald P. Green26, Anthony G. Greenwald27, Jarrod D. Hadfield28, Larry V. Hedges15, Leonhard Held20, Teck-Hua Ho29, Herbert Hoijtink30, Daniel J. Hruschka31, Kosuke Imai32, Guido W. Imbens24, John P. A. Ioannidis24, Minjeong Jeon33, James Holland Jones34, Michael Kirchler35, David Laibson36, John A. List37, Roderick J. A. Little23, Arthur Lupia23, Edouard Machery38, Scott E. Maxwell39, Michael A. McCarthy21, Don A. Moore40, Stephen L. Morgan41, Marcus R. Munafò42, Shinichi Nakagawa43, Brendan Nyhan44, Timothy H. Parker45, Luis R. Pericchi46, Marco Perugini47, Jeffrey N. Rouder48, Judith Rousseau49, Victoria Savalei50, Felix D. Schönbrodt51, Thomas Sellke52, Betsy Sinclair53, Dustin Tingley36, Trisha Van Zandt16, Simine Vazire54, Duncan J. Watts55, Christopher Winship36, Robert L. Wolpert2, Yu Xie32, Cristobal Young24, Jonathan Zinman44, Valen E. Johnson1, Valen E. Johnson18 
University of Southern California1, Duke University2, Stockholm School of Economics3, Center for Open Science4, University of Virginia5, University of Amsterdam6, University of Pennsylvania7, University of North Carolina at Chapel Hill8, University of Regensburg9, California Institute of Technology10, New York University11, Research Institute of Industrial Economics12, Cardiff University13, Mathematica Policy Research14, Northwestern University15, Ohio State University16, University of Sussex17, Texas A&M University18, Royal Holloway, University of London19, University of Zurich20, University of Melbourne21, University of Wisconsin-Madison22, University of Michigan23, Stanford University24, Rutgers University25, Columbia University26, University of Washington27, University of Edinburgh28, National University of Singapore29, Utrecht University30, Arizona State University31, Princeton University32, University of California, Los Angeles33, Imperial College London34, University of Innsbruck35, Harvard University36, University of Chicago37, University of Pittsburgh38, University of Notre Dame39, University of California, Berkeley40, Johns Hopkins University41, University of Bristol42, University of New South Wales43, Dartmouth College44, Whitman College45, University of Puerto Rico46, University of Milan47, University of California, Irvine48, Paris Dauphine University49, University of British Columbia50, Ludwig Maximilian University of Munich51, Purdue University52, Washington University in St. Louis53, University of California, Davis54, Microsoft55
TL;DR: The default P-value threshold for statistical significance is proposed to be changed from 0.05 to 0.005 for claims of new discoveries, in order to reduce the rate of false-positive findings.
Abstract: We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.
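
The paper motivates the stricter threshold partly through the false positive rate among claimed discoveries under modest prior odds. A worked version of that calculation, with the prior odds and power below chosen purely for illustration:

```python
def false_positive_report_prob(alpha, power, prior_odds):
    """P(H0 true | significant result) in a simple two-hypothesis setup."""
    p_true = prior_odds / (1 + prior_odds)
    false_pos = alpha * (1 - p_true)   # significant results under H0
    true_pos = power * p_true          # significant results under H1
    return false_pos / (false_pos + true_pos)

# With 1:10 prior odds that a tested effect is real and 80% power:
for alpha in (0.05, 0.005):
    fprp = false_positive_report_prob(alpha, power=0.80, prior_odds=1 / 10)
    print(f"alpha = {alpha}: ~{fprp:.0%} of 'discoveries' are false")
# Roughly 38% at 0.05 versus 6% at 0.005 under these assumptions.
```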

Journal ArticleDOI
TL;DR: A door‐to‐intervention time of <90 minutes is suggested, based on a framework of 30‐30‐30 minutes, for the management of the patient with a ruptured aneurysm, and the Vascular Quality Initiative mortality risk score is suggested for mutual decision‐making with patients considering aneurYSm repair.

Proceedings Article
25 Apr 2018
TL;DR: This work introduces a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, “sufficient” conditions for predictions, and proposes an algorithm to efficiently compute these explanations for any black-box model with high probability guarantees.
Abstract: We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, "sufficient" conditions for predictions. We propose an algorithm to efficiently compute these explanations for any black-box model with high-probability guarantees. We demonstrate the flexibility of anchors by explaining a myriad of different models for different domains and tasks. In a user study, we show that anchors enable users to predict how a model would behave on unseen instances with less effort and higher precision, as compared to existing linear explanations or no explanations.
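
A hedged sketch of how a candidate anchor rule can be scored: estimate its precision (how often the model's prediction is preserved on perturbed inputs that satisfy the rule) and its coverage. The perturbation scheme and names are illustrative assumptions; the real algorithm uses a bandit-style search with statistical guarantees:

```python
import numpy as np

def anchor_precision_coverage(model, x, rule_idx, sample_pool, n=1000, rng=None):
    """Score an anchor: features in `rule_idx` stay fixed at x's values
    while the remaining features are resampled from the data pool.

    precision = P(model(z) == model(x) | z satisfies the rule)
    coverage  = fraction of the pool that satisfies the rule
    """
    rng = rng or np.random.default_rng(0)
    pred_x = model(x[None, :])[0]

    # Perturb: copy x, then overwrite non-anchored features with values
    # drawn from random rows of the pool.
    z = x[None, :].repeat(n, axis=0)
    donors = sample_pool[rng.integers(len(sample_pool), size=n)]
    free = np.setdiff1d(np.arange(x.shape[0]), rule_idx)
    z[:, free] = donors[:, free]

    precision = float(np.mean(model(z) == pred_x))
    coverage = float(np.mean(
        (sample_pool[:, rule_idx] == x[rule_idx]).all(axis=1)))
    return precision, coverage
```

An anchor is reported when its estimated precision clears a high threshold, which is what makes the explanation "sufficient" locally.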

Journal ArticleDOI
24 Sep 2018-Nature
TL;DR: Monolithically integrated lithium niobate electro-optic modulators that feature a CMOS-compatible drive voltage, support data rates up to 210 gigabits per second and show an on-chip optical loss of less than 0.5 decibels are demonstrated.
Abstract: Electro-optic modulators translate high-speed electronic signals into the optical domain and are critical components in modern telecommunication networks1,2 and microwave-photonic systems3,4. They are also expected to be building blocks for emerging applications such as quantum photonics5,6 and non-reciprocal optics7,8. All of these applications require chip-scale electro-optic modulators that operate at voltages compatible with complementary metal–oxide–semiconductor (CMOS) technology, have ultra-high electro-optic bandwidths and feature very low optical losses. Integrated modulator platforms based on materials such as silicon, indium phosphide or polymers have not yet been able to meet these requirements simultaneously because of the intrinsic limitations of the materials used. On the other hand, lithium niobate electro-optic modulators, the workhorse of the optoelectronic industry for decades9, have been challenging to integrate on-chip because of difficulties in microstructuring lithium niobate. The current generation of lithium niobate modulators are bulky, expensive, limited in bandwidth and require high drive voltages, and thus are unable to reach the full potential of the material. Here we overcome these limitations and demonstrate monolithically integrated lithium niobate electro-optic modulators that feature a CMOS-compatible drive voltage, support data rates up to 210 gigabits per second and show an on-chip optical loss of less than 0.5 decibels. We achieve this by engineering the microwave and photonic circuits to achieve high electro-optical efficiencies, ultra-low optical losses and group-velocity matching simultaneously. Our scalable modulator devices could provide cost-effective, low-power and ultra-high-speed solutions for next-generation optical communication networks and microwave photonic systems. Furthermore, our approach could lead to large-scale ultra-low-loss photonic circuits that are reconfigurable on a picosecond timescale, enabling a wide range of quantum and classical applications5,10,11 including feed-forward photonic quantum computation. Chip-scale lithium niobate electro-optic modulators that rapidly convert electrical to optical signals and use CMOS-compatible voltages could prove useful in optical communication networks, microwave photonic systems and photonic computation.
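
For context on why the drive voltage matters, the textbook transfer function of a Mach-Zehnder electro-optic modulator (a standard relation, not taken from the paper) links optical output to the applied voltage $V$ through the half-wave voltage $V_\pi$:

```latex
\frac{P_{\text{out}}}{P_{\text{in}}} = \cos^{2}\!\left(\frac{\pi V}{2 V_\pi}\right)
```

A CMOS-compatible device needs $V_\pi$ on the order of a volt so that logic-level signals can swing the modulator through its full transmission range, which bulk lithium niobate modulators cannot achieve at these bandwidths.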

Journal ArticleDOI
22 Jun 2018-Science
TL;DR: It is demonstrated that, in the general population, the personality trait neuroticism is significantly correlated with almost every psychiatric disorder and migraine, and it is shown that both psychiatric and neurological disorders have robust correlations with cognitive and personality measures.
Abstract: Disorders of the brain can exhibit considerable epidemiological comorbidity and often share symptoms, provoking debate about their etiologic overlap. We quantified the genetic sharing of 25 brain disorders from genome-wide association studies of 265,218 patients and 784,643 control participants and assessed their relationship to 17 phenotypes from 1,191,588 individuals. Psychiatric disorders share common variant risk, whereas neurological disorders appear more distinct from one another and from the psychiatric disorders. We also identified significant sharing between disorders and a number of brain phenotypes, including cognitive measures. Further, we conducted simulations to explore how statistical power, diagnostic misclassification, and phenotypic heterogeneity affect genetic correlations. These results highlight the importance of common genetic variation as a risk factor for brain disorders and the value of heritability-based methods in understanding their etiology.


Journal ArticleDOI
TL;DR: The COSMIN guideline for systematic reviews of PROMs includes methodology to combine the methodological quality of studies on measurement properties with the quality of the PROM itself (i.e., its measurement properties).
Abstract: Systematic reviews of patient-reported outcome measures (PROMs) differ from reviews of interventions and diagnostic test accuracy studies and are complex. In fact, conducting a review of one or more PROMs comprises multiple reviews (i.e., one review for each measurement property of each PROM). In the absence of guidance specifically designed for reviews on measurement properties, our aim was to develop a guideline for conducting systematic reviews of PROMs. Based on literature reviews and expert opinions, and in concordance with existing guidelines, the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) steering committee developed a guideline for systematic reviews of PROMs. A consecutive ten-step procedure for conducting a systematic review of PROMs is proposed. Steps 1–4 concern preparing and performing the literature search, and selecting relevant studies. Steps 5–8 concern the evaluation of the quality of the eligible studies, the measurement properties, and the interpretability and feasibility aspects. Steps 9 and 10 concern formulating recommendations and reporting the systematic review. The COSMIN guideline for systematic reviews of PROMs includes methodology to combine the methodological quality of studies on measurement properties with the quality of the PROM itself (i.e., its measurement properties). This enables reviewers to draw transparent conclusions and make evidence-based recommendations on the quality of PROMs, and it supports the evidence-based selection of PROMs for use in research and in clinical practice.

Journal ArticleDOI
TL;DR: PM2.5 exposure may be related to more causes of death than the five considered by the GBD, and incorporation of risk information from other, non-outdoor particle sources leads to underestimation of disease burden, especially at higher concentrations.
Abstract: Exposure to ambient fine particulate matter (PM2.5) is a major global health concern. Quantitative estimates of attributable mortality are based on disease-specific hazard ratio models that incorporate risk information from multiple PM2.5 sources (outdoor and indoor air pollution from use of solid fuels and secondhand and active smoking), requiring assumptions about equivalent exposure and toxicity. We relax these contentious assumptions by constructing a PM2.5-mortality hazard ratio function based only on cohort studies of outdoor air pollution that covers the global exposure range. We modeled the shape of the association between PM2.5 and nonaccidental mortality using data from 41 cohorts from 16 countries-the Global Exposure Mortality Model (GEMM). We then constructed GEMMs for five specific causes of death examined by the global burden of disease (GBD). The GEMM predicts 8.9 million [95% confidence interval (CI): 7.5-10.3] deaths in 2015, a figure 30% larger than that predicted by the sum of deaths among the five specific causes (6.9; 95% CI: 4.9-8.5) and 120% larger than the risk function used in the GBD (4.0; 95% CI: 3.3-4.8). Differences between the GEMM and GBD risk functions are larger for a 20% reduction in concentrations, with the GEMM predicting 220% higher excess deaths. These results suggest that PM2.5 exposure may be related to additional causes of death than the five considered by the GBD and that incorporation of risk information from other, nonoutdoor, particle sources leads to underestimation of disease burden, especially at higher concentrations.
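
A hedged Python sketch of how a concentration-response hazard ratio becomes attributable deaths. The logistic-weighted logarithmic shape follows the GEMM's published form, but the parameter values and inputs here are illustrative placeholders, not the fitted cause-specific estimates:

```python
import numpy as np

def gemm_hazard_ratio(z, theta, alpha, mu, nu):
    """GEMM-shaped hazard ratio at PM2.5 concentration z (ug/m3),
    relative to an assumed low counterfactual (2.4 ug/m3 here)."""
    x = np.maximum(z - 2.4, 0.0)
    weight = 1.0 / (1.0 + np.exp(-(x - mu) / nu))        # logistic weighting
    return np.exp(theta * np.log(1.0 + x / alpha) * weight)

def attributable_deaths(baseline_deaths, z, **params):
    """Attributable fraction (1 - 1/HR) times baseline deaths."""
    hr = gemm_hazard_ratio(z, **params)
    return baseline_deaths * (1.0 - 1.0 / hr)

# Placeholder parameters and exposure, for illustration only:
# at z = 35 ug/m3 this yields HR ~ 1.3, so ~23% of the baseline
# deaths would be attributed to PM2.5 under these assumptions.
print(attributable_deaths(1_000_000, z=35.0,
                          theta=0.14, alpha=1.6, mu=15.5, nu=36.8))
```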

Journal ArticleDOI
Mary F. Feitosa1, Aldi T. Kraja1, Daniel I. Chasman2, Yun J. Sung1, +296 more · Institutions (86)
18 Jun 2018-PLOS ONE
TL;DR: To gain insight into the role of alcohol consumption in the genetic architecture of hypertension, the authors conducted a large two-stage investigation incorporating joint testing of main genetic effects and single nucleotide variant (SNV)-alcohol consumption interactions.
Abstract: Heavy alcohol consumption is an established risk factor for hypertension; the mechanism by which alcohol consumption impacts blood pressure (BP) regulation remains unknown. We hypothesized that a genome-wide association study accounting for gene-alcohol consumption interaction for BP might identify additional BP loci and contribute to the understanding of alcohol-related BP regulation. We conducted a large two-stage investigation incorporating joint testing of main genetic effects and single nucleotide variant (SNV)-alcohol consumption interactions. In Stage 1, genome-wide discovery meta-analyses in ≈131K individuals across several ancestry groups yielded 3,514 SNVs (245 loci) with suggestive evidence of association (P < 1.0 x 10-5). In Stage 2, these SNVs were tested for independent external replication in ≈440K individuals across multiple ancestries. We identified and replicated (at Bonferroni correction threshold) five novel BP loci (380 SNVs in 21 genes) and 49 previously reported BP loci (2,159 SNVs in 109 genes) in European ancestry, and in multi-ancestry meta-analyses (P < 5.0 x 10-8). For African ancestry samples, we detected 18 potentially novel BP loci (P < 5.0 x 10-8) in Stage 1 that warrant further replication. Additionally, correlated meta-analysis identified eight novel BP loci (11 genes). Several genes in these loci (e.g., PINX1, GATA4, BLK, FTO and GABBR2) have been previously reported to be associated with alcohol consumption. These findings provide insights into the role of alcohol consumption in the genetic architecture of hypertension.
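
A hedged sketch of the joint main-effect-plus-interaction test at a single variant, written as an ordinary least squares fit with a 2-degree-of-freedom F test. The column layout and names are illustrative; real analyses additionally model ancestry, relatedness, and study-specific covariates:

```python
import numpy as np
import statsmodels.api as sm

def joint_snv_alcohol_test(bp, geno, alcohol, covars):
    """2-df test of H0: beta_SNV = beta_SNVxAlcohol = 0.

    bp:      (n,) blood pressure phenotype
    geno:    (n,) genotype dosage (0/1/2)
    alcohol: (n,) alcohol consumption measure
    covars:  (n, k) adjustment covariates
    """
    X = np.column_stack([geno, geno * alcohol, alcohol, covars])
    X = sm.add_constant(X)  # column order: const, SNV, SNVxAlc, Alc, covars
    fit = sm.OLS(bp, X).fit()

    # Restriction matrix picking out the SNV main effect and interaction.
    r = np.zeros((2, X.shape[1]))
    r[0, 1] = 1.0
    r[1, 2] = 1.0
    return fit.f_test(r).pvalue
```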


Posted ContentDOI
Spyridon Bakas1, Mauricio Reyes, Andras Jakab2, Stefan Bauer3, +435 more · Institutions (111)
TL;DR: This study assesses the state-of-the-art machine learning methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018, and investigates the challenge of identifying the best ML algorithms for each of these tasks.
Abstract: Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.
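
Segmentation entries in BraTS are ranked largely by overlap metrics. A minimal sketch of the Dice coefficient for a predicted tumor sub-region (a standard metric; the sketch assumes binary NumPy masks):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# In BraTS, Dice is typically reported separately per sub-region,
# e.g. whole tumor, tumor core, and enhancing tumor masks.
```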

Journal ArticleDOI
TL;DR: The findings show substantial progress in the reduction of lower respiratory infection burden, but this progress has not been equal across locations, has been driven by decreases in several primary risk factors, and might require more effort among elderly adults.
Abstract: Summary Background Lower respiratory infections are a leading cause of morbidity and mortality around the world. The Global Burden of Diseases, Injuries, and Risk Factors (GBD) Study 2016 provides an up-to-date analysis of the burden of lower respiratory infections in 195 countries. This study assesses cases, deaths, and aetiologies spanning the past 26 years and shows how the burden of lower respiratory infection has changed in people of all ages. Methods We used three separate modelling strategies for lower respiratory infections in GBD 2016: a Bayesian hierarchical ensemble modelling platform (Cause of Death Ensemble model), which uses vital registration, verbal autopsy data, and surveillance system data to predict mortality due to lower respiratory infections; a compartmental meta-regression tool (DisMod-MR), which uses scientific literature, population representative surveys, and health-care data to predict incidence, prevalence, and mortality; and modelling of counterfactual estimates of the population attributable fraction of lower respiratory infection episodes due to Streptococcus pneumoniae, Haemophilus influenzae type b, influenza, and respiratory syncytial virus. We calculated each modelled estimate for each age, sex, year, and location. We modelled the exposure level in a population for a given risk factor using DisMod-MR and a spatio-temporal Gaussian process regression, and assessed the effectiveness of targeted interventions for each risk factor in children younger than 5 years. We also did a decomposition analysis of the change in LRI deaths from 2000–16 using the risk factors associated with LRI in GBD 2016. Findings In 2016, lower respiratory infections caused 652 572 deaths (95% uncertainty interval [UI] 586 475–720 612) in children younger than 5 years (under-5s), 1 080 958 deaths (943 749–1 170 638) in adults older than 70 years, and 2 377 697 deaths (2 145 584–2 512 809) in people of all ages, worldwide. Streptococcus pneumoniae was the leading cause of lower respiratory infection morbidity and mortality globally, contributing to more deaths than all other aetiologies combined in 2016 (1 189 937 deaths, 95% UI 690 445–1 770 660). Childhood wasting remains the leading risk factor for lower respiratory infection mortality among children younger than 5 years, responsible for 61·4% of lower respiratory infection deaths in 2016 (95% UI 45·7–69·6). Interventions to improve wasting, household air pollution, ambient particulate matter pollution, and expanded antibiotic use could avert one under-5 death due to lower respiratory infection for every 4000 children treated in the countries with the highest lower respiratory infection burden. Interpretation Our findings show substantial progress in the reduction of lower respiratory infection burden, but this progress has not been equal across locations, has been driven by decreases in several primary risk factors, and might require more effort among elderly adults. By highlighting regions and populations with the highest burden, and the risk factors that could have the greatest effect, funders, policy makers, and programme implementers can more effectively reduce lower respiratory infections among the world's most susceptible populations. Funding Bill & Melinda Gates Foundation
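
As a hedged pointer to the counterfactual machinery mentioned in the Methods, the standard population attributable fraction used in GBD-style comparative risk assessment is (general formula, not this study's exact implementation):

```latex
\mathrm{PAF} =
\frac{\displaystyle\int P(x)\,\mathrm{RR}(x)\,dx \;-\; \int P^{*}(x)\,\mathrm{RR}(x)\,dx}
     {\displaystyle\int P(x)\,\mathrm{RR}(x)\,dx}
```

where $P$ is the observed exposure distribution, $P^{*}$ the counterfactual (e.g., theoretical minimum-risk) distribution, and $\mathrm{RR}(x)$ the relative risk at exposure level $x$; attributable deaths are then the PAF multiplied by total cause-specific deaths.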