
Showing papers by "University of Wisconsin-Madison" published in 2018


Journal ArticleDOI
27 Apr 2018
TL;DR: This report provides updated ASD prevalence estimates for children aged 8 years during the 2014 surveillance year, on the basis of DSM-IV-TR criteria, and describes characteristics of the population of children with ASD.
Abstract: Problem/condition Autism spectrum disorder (ASD). Period covered 2014. Description of system The Autism and Developmental Disabilities Monitoring (ADDM) Network is an active surveillance system that provides estimates of the prevalence of autism spectrum disorder (ASD) among children aged 8 years whose parents or guardians reside within 11 ADDM sites in the United States (Arizona, Arkansas, Colorado, Georgia, Maryland, Minnesota, Missouri, New Jersey, North Carolina, Tennessee, and Wisconsin). ADDM surveillance is conducted in two phases. The first phase involves review and abstraction of comprehensive evaluations that were completed by professional service providers in the community. Staff completing record review and abstraction receive extensive training and supervision and are evaluated according to strict reliability standards to certify effective initial training, identify ongoing training needs, and ensure adherence to the prescribed methodology. Record review and abstraction occurs in a variety of data sources ranging from general pediatric health clinics to specialized programs serving children with developmental disabilities. In addition, most of the ADDM sites also review records for children who have received special education services in public schools. In the second phase of the study, all abstracted information is reviewed systematically by experienced clinicians to determine ASD case status. A child is considered to meet the surveillance case definition for ASD if he or she displays behaviors, as described on one or more comprehensive evaluations completed by community-based professional providers, consistent with the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR) diagnostic criteria for autistic disorder; pervasive developmental disorder-not otherwise specified (PDD-NOS, including atypical autism); or Asperger disorder. This report provides updated ASD prevalence estimates for children aged 8 years during the 2014 surveillance year, on the basis of DSM-IV-TR criteria, and describes characteristics of the population of children with ASD. In 2013, the American Psychiatric Association published the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), which made considerable changes to ASD diagnostic criteria. The change in ASD diagnostic criteria might influence ADDM ASD prevalence estimates; therefore, most (85%) of the records used to determine prevalence estimates based on DSM-IV-TR criteria underwent additional review under a newly operationalized surveillance case definition for ASD consistent with the DSM-5 diagnostic criteria. Children meeting this new surveillance case definition could qualify on the basis of one or both of the following criteria, as documented in abstracted comprehensive evaluations: 1) behaviors consistent with the DSM-5 diagnostic features; and/or 2) an ASD diagnosis, whether based on DSM-IV-TR or DSM-5 diagnostic criteria. Stratified comparisons of the number of children meeting either of these two case definitions also are reported. Results For 2014, the overall prevalence of ASD among the 11 ADDM sites was 16.8 per 1,000 (one in 59) children aged 8 years. Overall ASD prevalence estimates varied among sites, from 13.1 to 29.3 per 1,000 children aged 8 years. ASD prevalence estimates also varied by sex and race/ethnicity. Males were four times more likely than females to be identified with ASD.
Prevalence estimates were higher for non-Hispanic white (henceforth, white) children compared with non-Hispanic black (henceforth, black) children, and both groups were more likely to be identified with ASD compared with Hispanic children. Among the nine sites with sufficient data on intellectual ability, 31% of children with ASD were classified in the range of intellectual disability (intelligence quotient [IQ] ≤70). The distribution of intellectual ability varied by sex and race/ethnicity. Although mention of developmental concerns by age 36 months was documented for 85% of children with ASD, only 42% had a comprehensive evaluation on record by age 36 months. The median age of earliest known ASD diagnosis was 52 months and did not differ significantly by sex or race/ethnicity. For the targeted comparison of DSM-IV-TR and DSM-5 results, the number and characteristics of children meeting the newly operationalized DSM-5 case definition for ASD were similar to those meeting the DSM-IV-TR case definition, with DSM-IV-TR case counts exceeding DSM-5 counts by less than 5% and approximately 86% overlap between the two case definitions (kappa = 0.85). Interpretation Findings from the ADDM Network, on the basis of 2014 data reported from 11 sites, provide updated population-based estimates of the prevalence of ASD among children aged 8 years in multiple communities in the United States. The overall ASD prevalence estimate of 16.8 per 1,000 children aged 8 years in 2014 is higher than previously reported estimates from the ADDM Network. Because the ADDM sites do not provide a representative sample of the entire United States, the combined prevalence estimates presented in this report cannot be generalized to all children aged 8 years in the United States. Consistent with reports from previous ADDM surveillance years, findings from 2014 were marked by variation in ASD prevalence when stratified by geographic area, sex, and level of intellectual ability. Differences in prevalence estimates between black and white children have diminished in most sites, but remained notable for Hispanic children. For 2014, results from application of the DSM-IV-TR and DSM-5 case definitions were similar, overall and when stratified by sex, race/ethnicity, DSM-IV-TR diagnostic subtype, or level of intellectual ability. Public health action Beginning with surveillance year 2016, the DSM-5 case definition will serve as the basis for ADDM estimates of ASD prevalence in future surveillance reports. Although the DSM-IV-TR case definition will eventually be phased out, it will be applied in a limited geographic area to offer additional data for comparison. Future analyses will examine trends in the continued use of DSM-IV-TR diagnoses, such as autistic disorder, PDD-NOS, and Asperger disorder in health and education records, documentation of symptoms consistent with DSM-5 terminology, and how these trends might influence estimates of ASD prevalence over time. The latest findings from the ADDM Network provide evidence that the prevalence of ASD is higher than previously reported estimates and continues to vary among certain racial/ethnic groups and communities. With prevalence of ASD ranging from 13.1 to 29.3 per 1,000 children aged 8 years in different communities throughout the United States, the need for behavioral, educational, residential, and occupational services remains high, as does the need for increased research on both genetic and nongenetic risk factors for ASD.
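
The approximately 86% overlap between the two case definitions is summarized above by Cohen's kappa (0.85). As a quick illustration of how that agreement statistic is computed, here is a minimal Python sketch using hypothetical counts, not the ADDM data:

```python
import numpy as np

# Hypothetical 2x2 agreement table between two case definitions
# (rows: DSM-IV-TR yes/no; columns: DSM-5 yes/no). Not the ADDM counts.
table = np.array([[800, 60],
                  [40, 9100]])
n = table.sum()
po = np.trace(table) / n                          # observed agreement
pe = (table.sum(0) * table.sum(1)).sum() / n**2   # agreement expected by chance
kappa = (po - pe) / (1 - pe)
print(round(kappa, 2))
```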

3,967 citations


Journal ArticleDOI
James J. Lee1, Robbee Wedow2, Aysu Okbay3, Edward Kong4, Omeed Maghzian4, Meghan Zacher4, Tuan Anh Nguyen-Viet5, Peter Bowers4, Julia Sidorenko6, Julia Sidorenko7, Richard Karlsson Linnér3, Richard Karlsson Linnér8, Mark Alan Fontana9, Mark Alan Fontana5, Tushar Kundu5, Chanwook Lee4, Hui Li4, Ruoxi Li5, Rebecca Royer5, Pascal Timshel10, Pascal Timshel11, Raymond K. Walters12, Raymond K. Walters4, Emily A. Willoughby1, Loic Yengo6, Maris Alver7, Yanchun Bao13, David W. Clark14, Felix R. Day15, Nicholas A. Furlotte, Peter K. Joshi16, Peter K. Joshi14, Kathryn E. Kemper6, Aaron Kleinman, Claudia Langenberg15, Reedik Mägi7, Joey W. Trampush5, Shefali S. Verma17, Yang Wu6, Max Lam, Jing Hua Zhao15, Zhili Zheng6, Zhili Zheng18, Jason D. Boardman2, Harry Campbell14, Jeremy Freese19, Kathleen Mullan Harris20, Caroline Hayward14, Pamela Herd21, Pamela Herd13, Meena Kumari13, Todd Lencz22, Todd Lencz23, Jian'an Luan15, Anil K. Malhotra23, Anil K. Malhotra22, Andres Metspalu7, Lili Milani7, Ken K. Ong15, John R. B. Perry15, David J. Porteous14, Marylyn D. Ritchie17, Melissa C. Smart14, Blair H. Smith24, Joyce Y. Tung, Nicholas J. Wareham15, James F. Wilson14, Jonathan P. Beauchamp25, Dalton Conley26, Tõnu Esko7, Steven F. Lehrer27, Steven F. Lehrer28, Steven F. Lehrer29, Patrik K. E. Magnusson30, Sven Oskarsson31, Tune H. Pers10, Tune H. Pers11, Matthew R. Robinson32, Matthew R. Robinson6, Kevin Thom33, Chelsea Watson5, Christopher F. Chabris17, Michelle N. Meyer17, David Laibson4, Jian Yang6, Magnus Johannesson34, Philipp Koellinger3, Philipp Koellinger8, Patrick Turley12, Patrick Turley4, Peter M. Visscher6, Daniel J. Benjamin29, Daniel J. Benjamin5, David Cesarini33, David Cesarini29 
TL;DR: A joint (multi-phenotype) analysis of educational attainment and three related cognitive phenotypes generates polygenic scores that explain 11–13% of the variance in educational attainment and 7–10% of the variance in cognitive performance, which substantially increases the utility of polygenic scores as tools in research.
Abstract: Here we conduct a large-scale genetic association analysis of educational attainment in a sample of approximately 1.1 million individuals and identify 1,271 independent genome-wide-significant SNPs. For the SNPs taken together, we find evidence of heterogeneous effects across environments. The SNPs implicate genes involved in brain-development processes and neuron-to-neuron communication. In a separate analysis of the X chromosome, we identify 10 independent genome-wide-significant SNPs and estimate a SNP heritability of around 0.3% in both men and women, consistent with partial dosage compensation. A joint (multi-phenotype) analysis of educational attainment and three related cognitive phenotypes generates polygenic scores that explain 11-13% of the variance in educational attainment and 7-10% of the variance in cognitive performance. This prediction accuracy substantially increases the utility of polygenic scores as tools in research.
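
For readers unfamiliar with the headline metric, "variance explained" by a polygenic score is the squared correlation between the score and the outcome. A minimal sketch with synthetic data (not the study's data), tuned so the score explains roughly 12% of phenotypic variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
pgs = rng.standard_normal(n)  # standardized polygenic score
# Simulate a phenotype in which the score accounts for ~12% of the variance.
phenotype = np.sqrt(0.12) * pgs + np.sqrt(0.88) * rng.standard_normal(n)

r = np.corrcoef(pgs, phenotype)[0, 1]
print(f"variance explained (R^2): {r**2:.3f}")  # close to 0.12
```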

1,658 citations


Journal ArticleDOI
Daniel J. Benjamin1, James O. Berger2, Magnus Johannesson1, Magnus Johannesson3, Brian A. Nosek4, Brian A. Nosek5, Eric-Jan Wagenmakers6, Richard A. Berk7, Kenneth A. Bollen8, Björn Brembs9, Lawrence D. Brown7, Colin F. Camerer10, David Cesarini11, David Cesarini12, Christopher D. Chambers13, Merlise A. Clyde2, Thomas D. Cook14, Thomas D. Cook15, Paul De Boeck16, Zoltan Dienes17, Anna Dreber3, Kenny Easwaran18, Charles Efferson19, Ernst Fehr20, Fiona Fidler21, Andy P. Field17, Malcolm R. Forster22, Edward I. George7, Richard Gonzalez23, Steven N. Goodman24, Edwin J. Green25, Donald P. Green26, Anthony G. Greenwald27, Jarrod D. Hadfield28, Larry V. Hedges15, Leonhard Held20, Teck-Hua Ho29, Herbert Hoijtink30, Daniel J. Hruschka31, Kosuke Imai32, Guido W. Imbens24, John P. A. Ioannidis24, Minjeong Jeon33, James Holland Jones34, Michael Kirchler35, David Laibson36, John A. List37, Roderick J. A. Little23, Arthur Lupia23, Edouard Machery38, Scott E. Maxwell39, Michael A. McCarthy21, Don A. Moore40, Stephen L. Morgan41, Marcus R. Munafò42, Shinichi Nakagawa43, Brendan Nyhan44, Timothy H. Parker45, Luis R. Pericchi46, Marco Perugini47, Jeffrey N. Rouder48, Judith Rousseau49, Victoria Savalei50, Felix D. Schönbrodt51, Thomas Sellke52, Betsy Sinclair53, Dustin Tingley36, Trisha Van Zandt16, Simine Vazire54, Duncan J. Watts55, Christopher Winship36, Robert L. Wolpert2, Yu Xie32, Cristobal Young24, Jonathan Zinman44, Valen E. Johnson1, Valen E. Johnson18 
University of Southern California1, Duke University2, Stockholm School of Economics3, Center for Open Science4, University of Virginia5, University of Amsterdam6, University of Pennsylvania7, University of North Carolina at Chapel Hill8, University of Regensburg9, California Institute of Technology10, Research Institute of Industrial Economics11, New York University12, Cardiff University13, Mathematica Policy Research14, Northwestern University15, Ohio State University16, University of Sussex17, Texas A&M University18, Royal Holloway, University of London19, University of Zurich20, University of Melbourne21, University of Wisconsin-Madison22, University of Michigan23, Stanford University24, Rutgers University25, Columbia University26, University of Washington27, University of Edinburgh28, National University of Singapore29, Utrecht University30, Arizona State University31, Princeton University32, University of California, Los Angeles33, Imperial College London34, University of Innsbruck35, Harvard University36, University of Chicago37, University of Pittsburgh38, University of Notre Dame39, University of California, Berkeley40, Johns Hopkins University41, University of Bristol42, University of New South Wales43, Dartmouth College44, Whitman College45, University of Puerto Rico46, University of Milan47, University of California, Irvine48, Paris Dauphine University49, University of British Columbia50, Ludwig Maximilian University of Munich51, Purdue University52, Washington University in St. Louis53, University of California, Davis54, Microsoft55
TL;DR: The default P-value threshold for statistical significance is proposed to be changed from 0.05 to 0.005 for claims of new discoveries in order to reduce the rate of false-positive findings.
Abstract: We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.
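
A back-of-the-envelope version of the motivating argument: if only a minority of tested hypotheses are true, many "significant" results at alpha = 0.05 are false positives. Under illustrative assumptions (prior odds of a true effect of 1:10 and 80% power, values chosen here purely for demonstration), the false positive risk falls sharply at the stricter threshold:

```python
def false_positive_risk(alpha, power, prior_odds):
    """P(null is true | result is significant), for prior odds H1:H0."""
    return alpha / (alpha + power * prior_odds)

for alpha in (0.05, 0.005):
    risk = false_positive_risk(alpha, power=0.8, prior_odds=1 / 10)
    print(f"alpha = {alpha}: false positive risk ~ {risk:.2f}")
# alpha = 0.05  -> ~0.38
# alpha = 0.005 -> ~0.06
```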

1,586 citations


Journal ArticleDOI
TL;DR: It is found that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art.
Abstract: Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. Hence, deep learning techniques may be particularly well suited to solve problems in these fields. We examine applications of deep learning to a variety of biomedical problems (patient classification, fundamental biological processes, and treatment of patients) and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside with the potential to transform several areas of biology and medicine.

1,491 citations


Journal ArticleDOI
TL;DR: Two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences are presented: the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), including estimates of genome completeness and contamination.
Abstract: We present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), including, but not limited to, assembly quality and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.
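
The MIMAG/MISAG draft-quality tiers turn on completeness and contamination thresholds (a high-quality draft additionally requires the 23S, 16S, and 5S rRNA genes plus at least 18 tRNAs). A small illustrative helper encoding those tiers; the function itself is a sketch, not part of the standard:

```python
def mimag_quality(completeness, contamination, has_rrna_trna=False):
    """Rough MIMAG draft-quality tier (completeness/contamination in percent)."""
    # High quality also requires 23S, 16S, and 5S rRNA genes plus >= 18 tRNAs.
    if completeness > 90 and contamination < 5 and has_rrna_trna:
        return "high-quality draft"
    if completeness >= 50 and contamination < 10:
        return "medium-quality draft"
    if contamination < 10:
        return "low-quality draft"
    return "fails MIMAG draft criteria"

print(mimag_quality(95.2, 1.3, has_rrna_trna=True))  # high-quality draft
print(mimag_quality(72.0, 4.0))                      # medium-quality draft
```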

1,171 citations


Journal ArticleDOI
TL;DR: Patients who received dupilumab had significantly lower rates of severe asthma exacerbation than those who received placebo, as well as better lung function and asthma control.
Abstract: Background Dupilumab is a fully human anti–interleukin-4 receptor α monoclonal antibody that blocks both interleukin-4 and interleukin-13 signaling. We assessed its efficacy and safety in patients with uncontrolled asthma. Methods We randomly assigned 1902 patients 12 years of age or older with uncontrolled asthma in a 2:2:1:1 ratio to receive add-on subcutaneous dupilumab at a dose of 200 or 300 mg every 2 weeks or matched-volume placebos for 52 weeks. The primary end points were the annualized rate of severe asthma exacerbations and the absolute change from baseline to week 12 in the forced expiratory volume in 1 second (FEV1) before bronchodilator use in the overall trial population. Secondary end points included the exacerbation rate and FEV1 in patients with a blood eosinophil count of 300 or more per cubic millimeter. Asthma control and dupilumab safety were also assessed. Results The annualized rate of severe asthma exacerbations was 0.46 (95% confidence interval [CI], 0.39 to 0.53) among ...

1,115 citations


Journal ArticleDOI
09 Aug 2018-Nature
TL;DR: It is shown that bacterial, but not fungal, genetic diversity is highest in temperate habitats and that microbial gene composition varies more strongly with environmental variables than with geographic distance, and that the relative contributions of these microorganisms to global nutrient cycling vary spatially.
Abstract: Soils harbour some of the most diverse microbiomes on Earth and are essential for both nutrient cycling and carbon storage. To understand soil functioning, it is necessary to model the global distribution patterns and functional gene repertoires of soil microorganisms, as well as the biotic and environmental associations between the diversity and structure of both bacterial and fungal soil communities1–4. Here we show, by leveraging metagenomics and metabarcoding of global topsoil samples (189 sites, 7,560 subsamples), that bacterial, but not fungal, genetic diversity is highest in temperate habitats and that microbial gene composition varies more strongly with environmental variables than with geographic distance. We demonstrate that fungi and bacteria show global niche differentiation that is associated with contrasting diversity responses to precipitation and soil pH. Furthermore, we provide evidence for strong bacterial–fungal antagonism, inferred from antibiotic-resistance genes, in topsoil and ocean habitats, indicating the substantial role of biotic interactions in shaping microbial communities. Our results suggest that both competition and environmental filtering affect the abundance, composition and encoded gene functions of bacterial and fungal communities, indicating that the relative contributions of these microorganisms to global nutrient cycling vary spatially.

1,108 citations


Journal ArticleDOI
TL;DR: In patients with MCI, exercise training (6 months) is likely to improve cognitive measures, cognitive training may improve cognitive measures, and no high-quality evidence exists to support pharmacologic treatments for MCI.
Abstract: Objective To update the 2001 American Academy of Neurology (AAN) guideline on mild cognitive impairment (MCI). Methods The guideline panel systematically reviewed MCI prevalence, prognosis, and treatment articles according to AAN evidence classification criteria, and based recommendations on evidence and modified Delphi consensus. Results MCI prevalence was 6.7% for ages 60–64, 8.4% for 65–69, 10.1% for 70–74, 14.8% for 75–79, and 25.2% for 80–84. Cumulative dementia incidence was 14.9% in individuals with MCI older than age 65 years followed for 2 years. No high-quality evidence exists to support pharmacologic treatments for MCI. In patients with MCI, exercise training (6 months) is likely to improve cognitive measures and cognitive training may improve cognitive measures. Major recommendations Clinicians should assess for MCI with validated tools in appropriate scenarios (Level B). Clinicians should evaluate patients with MCI for modifiable risk factors, assess for functional impairment, and assess for and treat behavioral/neuropsychiatric symptoms (Level B). Clinicians should monitor cognitive status of patients with MCI over time (Level B). Cognitively impairing medications should be discontinued where possible and behavioral symptoms treated (Level B). Clinicians may choose not to offer cholinesterase inhibitors (Level B); if offering, they must first discuss lack of evidence (Level A). Clinicians should recommend regular exercise (Level B). Clinicians may recommend cognitive training (Level C). Clinicians should discuss diagnosis, prognosis, long-term planning, and the lack of effective medicine options (Level B), and may discuss biomarker research with patients with MCI and families (Level C).

1,064 citations


Journal ArticleDOI
TL;DR: Important biological and pharmacological disparities in pre-clinical research and human translational studies are highlighted, and analyses of clinical trial failures and recent successes provide a rational pathway to MSC regulatory approval and deployment for disorders with unmet medical needs.

1,009 citations


Journal ArticleDOI
Bela Abolfathi1, D. S. Aguado2, Gabriela Aguilar3, Carlos Allende Prieto2 +361 more (94 institutions)
TL;DR: SDSS-IV is the fourth generation of the Sloan Digital Sky Survey and has been in operation since 2014 July. This paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14).
Abstract: The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since 2014 July. This paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14). This release makes the data taken by SDSS-IV in its first two years of operation (2014-2016 July) public. Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey; the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data-driven machine-learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from the SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS web site (www.sdss.org) has been updated for this release and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020 and will be followed by SDSS-V.
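
For readers who want to try the data, DR14 can also be queried programmatically. A minimal sketch assuming the astroquery package's SDSS interface (the table and column names come from the SDSS catalog schema; check the astroquery docs for the exact keyword arguments in your version):

```python
from astroquery.sdss import SDSS

# SpecObj is the main spectroscopic table in the SDSS catalog database.
query = """
SELECT TOP 10 specObjID, ra, dec, z, class
FROM SpecObj
WHERE class = 'GALAXY' AND z BETWEEN 0.1 AND 0.2
"""
result = SDSS.query_sql(query, data_release=14)  # data_release kwarg assumed
print(result)
```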

965 citations


Journal ArticleDOI
TL;DR: The Modules for Experiments in Stellar Astrophysics (MESA) software instrument has been updated with the capability to handle floating point exceptions and stellar model optimization, as well as four new software tools.
Abstract: We update the capabilities of the software instrument Modules for Experiments in Stellar Astrophysics (MESA) and enhance its ease of use and availability. Our new approach to locating convective boundaries is consistent with the physics of convection, and yields reliable values of the convective-core mass during both hydrogen- and helium-burning phases. Sufficiently low-mass stars become white dwarfs and cool to the point where the electrons are degenerate and the ions are strongly coupled, a realm now available to study with MESA due to improved treatments of element diffusion, latent heat release, and blending of equations of state. Studies of the final fates of massive stars are extended in MESA by our addition of an approximate Riemann solver that captures shocks and conserves energy to high accuracy during dynamic epochs. We also introduce a 1D capability for modeling the effects of Rayleigh–Taylor instabilities that, in combination with the coupling to a public version of the STELLA radiation transfer instrument, creates new avenues for exploring Type II supernova properties. These capabilities are exhibited with exploratory models of pair-instability supernovae, pulsational pair-instability supernovae, and the formation of stellar-mass black holes. The applicability of MESA is now widened by the capability to import multidimensional hydrodynamic models into MESA. We close by introducing software modules for handling floating point exceptions and stellar model optimization, as well as four new software tools (MESA-Web, MESA-Docker, pyMesa, and mesastar.org) to enhance MESA's education and research impact.

Journal ArticleDOI
TL;DR: In this paper, a comprehensive review of negative emissions technologies (NETs) is presented, focusing on seven technologies: bioenergy with carbon capture and storage (BECCS), afforestation and reforestation, direct air carbon capture and storage (DACCS), enhanced weathering, ocean fertilisation, biochar, and soil carbon sequestration.
Abstract: The most recent IPCC assessment has shown an important role for negative emissions technologies (NETs) in limiting global warming to 2 °C cost-effectively. However, a bottom-up, systematic, reproducible, and transparent literature assessment of the different options to remove CO₂ from the atmosphere is currently missing. In part 1 of this three-part review on NETs, we assemble a comprehensive set of the relevant literature so far published, focusing on seven technologies: bioenergy with carbon capture and storage (BECCS), afforestation and reforestation, direct air carbon capture and storage (DACCS), enhanced weathering, ocean fertilisation, biochar, and soil carbon sequestration. In this part, part 2 of the review, we present estimates of costs, potentials, and side-effects for these technologies, and qualify them with the authors' assessment. Part 3 reviews the innovation and scaling challenges that must be addressed to realise NETs deployment as a viable climate mitigation strategy. Based on a systematic review of the literature, our best estimates for sustainable global NET potentials in 2050 are 0.5–3.6 GtCO₂ yr⁻¹ for afforestation and reforestation, 0.5–5 GtCO₂ yr⁻¹ for BECCS, 0.5–2 GtCO₂ yr⁻¹ for biochar, 2–4 GtCO₂ yr⁻¹ for enhanced weathering, 0.5–5 GtCO₂ yr⁻¹ for DACCS, and up to 5 GtCO₂ yr⁻¹ for soil carbon sequestration. Costs vary widely across the technologies, as do their permanency and cumulative potentials beyond 2050. It is unlikely that a single NET will be able to sustainably meet the rates of carbon uptake described in integrated assessment pathways consistent with 1.5 °C of global warming.

Journal ArticleDOI
TL;DR: Making Neighborhood-Disadvantage Metrics Accessible: Better understanding of variations in neighborhood disadvantage could lead to improved insight into the sociobiologic mechanisms that underlie health disparities, which could facilitate the development of improved therapeutics and interventions.
Abstract: Making Neighborhood-Disadvantage Metrics Accessible: Better understanding of variations in neighborhood disadvantage could lead to improved insight into the sociobiologic mechanisms that underlie health disparities, which could, in turn, facilitate the development of improved therapeutics and interventions.

Journal ArticleDOI
29 Mar 2018-Nature
TL;DR: A two-qubit quantum processor in a silicon device is demonstrated in this paper, which can perform the Deutsch–Jozsa algorithm and the Grover search algorithm on demand.
Abstract: A two-qubit quantum processor in a silicon device is demonstrated, which can perform the Deutsch–Jozsa algorithm and the Grover search algorithm. The development of platforms for spin-based quantum computing continues apace. The individual components of such a system have been the subject of much investigation, and they have been assembled to implement specific quantum-computational algorithms. Thomas Watson and colleagues have now taken such component integration and control to the next level. Using two single-electron-spin qubits in a silicon-based double quantum dot, they realize a system that can be simply programmed to perform different quantum algorithms on demand. Now that it is possible to achieve measurement and control fidelities for individual quantum bits (qubits) above the threshold for fault tolerance, attention is moving towards the difficult task of scaling up the number of physical qubits to the large numbers that are needed for fault-tolerant quantum computing1,2. In this context, quantum-dot-based spin qubits could have substantial advantages over other types of qubit owing to their potential for all-electrical operation and ability to be integrated at high density onto an industrial platform3,4,5. Initialization, readout and single- and two-qubit gates have been demonstrated in various quantum-dot-based qubit representations6,7,8,9. However, as seen with small-scale demonstrations of quantum computers using other types of qubit10,11,12,13, combining these elements leads to challenges related to qubit crosstalk, state leakage, calibration and control hardware. Here we overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch–Jozsa algorithm and the Grover search algorithm—canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85–89 per cent and concurrences of 73–82 per cent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots.
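
The Deutsch–Jozsa algorithm itself is easy to simulate classically at this scale, which is what makes it a convenient benchmark. A minimal statevector sketch of the single-input case (two qubits: input plus ancilla), purely illustrative and unrelated to the silicon hardware:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)

def oracle(f):
    # U_f |x>|y> = |x>|y XOR f(x)>, basis ordered |x y>: 00, 01, 10, 11
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch_jozsa(f):
    state = np.kron([1.0, 0.0], [0.0, 1.0])  # |0>|1>
    state = np.kron(H, H) @ state            # uniform superposition
    state = oracle(f) @ state                # phase kickback encodes f
    state = np.kron(H, I) @ state            # interfere on the input qubit
    p0 = state[0] ** 2 + state[1] ** 2       # probability input qubit reads 0
    return "constant" if np.isclose(p0, 1.0) else "balanced"

print(deutsch_jozsa(lambda x: 0))  # constant function -> "constant"
print(deutsch_jozsa(lambda x: x))  # balanced function -> "balanced"
```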

Journal ArticleDOI
TL;DR: In this paper, the authors provide theoretical insights on how coded solutions can achieve significant gains compared with uncoded ones for matrix multiplication and data shuffling in large-scale distributed systems.
Abstract: Codes are widely used in many engineering applications to offer robustness against noise. In large-scale systems, there are several types of noise that can affect the performance of distributed machine learning algorithms—straggler nodes, system failures, or communication bottlenecks—but there has been little interaction cutting across codes, machine learning, and distributed systems. In this paper, we provide theoretical insights on how coded solutions can achieve significant gains compared with uncoded ones. We focus on two of the most basic building blocks of distributed learning algorithms: matrix multiplication and data shuffling. For matrix multiplication, we use codes to alleviate the effect of stragglers and show that if the number of homogeneous workers is $n$, and the runtime of each subtask has an exponential tail, coded computation can speed up distributed matrix multiplication by a factor of $\log n$. For data shuffling, we use codes to reduce communication bottlenecks, exploiting the excess in storage. We show that when a constant fraction $\alpha$ of the data matrix can be cached at each worker, and $n$ is the number of workers, coded shuffling reduces the communication cost by a factor of $\left(\alpha + \frac{1}{n}\right)\gamma(n)$ compared with uncoded shuffling, where $\gamma(n)$ is the ratio of the cost of unicasting $n$ messages to $n$ users to multicasting a common message (of the same size) to $n$ users. For instance, $\gamma(n) \simeq n$ if multicasting a message to $n$ users is as cheap as unicasting a message to one user. We also provide experimental results, corroborating our theoretical gains of the coded algorithms.
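
Plugging illustrative numbers into the two headline results gives a feel for the magnitudes (the values of n, alpha, and gamma(n) below are hypothetical):

```python
import math

n = 100        # homogeneous workers
alpha = 0.10   # fraction of the data matrix cached at each worker

# Matrix multiplication: coding recovers a ~log(n) straggler speedup.
print("straggler speedup ~", round(math.log(n), 1))  # ~4.6x

# Shuffling: communication drops by a factor (alpha + 1/n) * gamma(n);
# when multicast is as cheap as one unicast, gamma(n) ~ n.
gamma = n
print("shuffling cost reduction ~", (alpha + 1 / n) * gamma)  # 11.0x
```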

Journal ArticleDOI
TL;DR: The NCCN Clinical Practice Guidelines in Oncology for Rectal Cancer address diagnosis, staging, surgical management, perioperative treatment, management of recurrent and metastatic disease, disease surveillance, and survivorship in patients with rectal cancer.
Abstract: The NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines) for Rectal Cancer address diagnosis, staging, surgical management, perioperative treatment, management of recurrent and metastatic disease, disease surveillance, and survivorship in patients with rectal cancer. This portion of the guidelines focuses on the management of localized disease, which involves careful patient selection for curative-intent treatment options that sequence multimodality therapy usually comprised of chemotherapy, radiation, and surgical resection.

Journal ArticleDOI
TL;DR: The NCCN Colon Cancer Panel discussions for the 2018 update of the guidelines regarding risk stratification and adjuvant treatment for patients with stage III colon cancer, and treatment of BRAF V600E mutation-positive metastatic colorectal cancer with regimens containing vemurafenib are summarized.
Abstract: The NCCN Guidelines for Colon Cancer provide recommendations regarding diagnosis, pathologic staging, surgical management, perioperative treatment, surveillance, management of recurrent and metastatic disease, and survivorship. These NCCN Guidelines Insights summarize the NCCN Colon Cancer Panel discussions for the 2018 update of the guidelines regarding risk stratification and adjuvant treatment for patients with stage III colon cancer, and treatment of BRAF V600E mutation-positive metastatic colorectal cancer with regimens containing vemurafenib.

Journal ArticleDOI
TL;DR: A tandem neural network architecture is demonstrated that tolerates inconsistent training instances in the inverse design of nanophotonic devices and provides a way to train large neural networks for the inverse design of complex photonic structures.
Abstract: Data inconsistency leads to a slow training process when deep neural networks are used for the inverse design of photonic devices, an issue that arises from the fundamental property of nonuniqueness in all inverse scattering problems. Here we show that by combining forward modeling and inverse design in a tandem architecture, one can overcome this fundamental issue, allowing deep neural networks to be effectively trained by data sets that contain nonunique electromagnetic scattering instances. This paves the way for using deep neural networks to design complex photonic structures that require large training data sets.
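
A minimal sketch of the tandem idea (toy dimensions, random stand-in data, and a plain PyTorch MLP; not the authors' code): a forward network, already trained to map geometry to spectrum, is frozen, and the inverse network is trained so that the forward model maps its proposed geometry back to the target spectrum. Because the loss is measured in spectrum space, two different geometries with the same response no longer produce conflicting gradients:

```python
import torch
import torch.nn as nn

GEOM, SPEC = 8, 64  # hypothetical: 8 geometry parameters, 64-point spectrum

def mlp(din, dout):
    return nn.Sequential(nn.Linear(din, 128), nn.ReLU(),
                         nn.Linear(128, 128), nn.ReLU(),
                         nn.Linear(128, dout))

forward_model = mlp(GEOM, SPEC)  # stands in for a pretrained forward model
inverse_model = mlp(SPEC, GEOM)

for p in forward_model.parameters():  # freeze the forward network
    p.requires_grad_(False)

opt = torch.optim.Adam(inverse_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
target_spectra = torch.randn(32, SPEC)  # stand-in for real training spectra

for step in range(200):
    pred_geometry = inverse_model(target_spectra)
    reconstructed = forward_model(pred_geometry)
    loss = loss_fn(reconstructed, target_spectra)  # error in spectrum space
    opt.zero_grad()
    loss.backward()
    opt.step()
```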

Journal ArticleDOI
TL;DR: The clinical benefit from chemohormonal therapy in prolonging OS was confirmed for patients with high-volume disease; however, for patients with low-volume disease, no OS benefit was discerned.
Abstract: Purpose Docetaxel added to androgen-deprivation therapy (ADT) significantly increases the longevity of some patients with metastatic hormone-sensitive prostate cancer. Herein, we present the outcomes of the CHAARTED (Chemohormonal Therapy Versus Androgen Ablation Randomized Trial for Extensive Disease in Prostate Cancer) trial with more mature follow-up and focus on tumor volume. Patients and Methods In this phase III study, 790 patients with metastatic hormone-sensitive prostate cancer were equally randomly assigned to receive either ADT in combination with docetaxel 75 mg/m2 for up to six cycles or ADT alone. The primary end point of the study was overall survival (OS). Additional analyses of the prospectively defined low- and high-volume disease subgroups were performed. High-volume disease was defined as presence of visceral metastases and/or ≥ four bone metastases with at least one outside of the vertebral column and pelvis. Results At a median follow-up of 53.7 months, the median OS was 57.6 months for the chemohormonal therapy arm versus 47.2 months for ADT alone (hazard ratio [HR], 0.72; 95% CI, 0.59 to 0.89; P = .0018). For patients with high-volume disease (n = 513), the median OS was 51.2 months with chemohormonal therapy versus 34.4 months with ADT alone (HR, 0.63; 95% CI, 0.50 to 0.79; P < .001). For those with low-volume disease (n = 277), no OS benefit was observed (HR, 1.04; 95% CI, 0.70 to 1.55; P = .86). Conclusion The clinical benefit from chemohormonal therapy in prolonging OS was confirmed for patients with high-volume disease; however, for patients with low-volume disease, no OS benefit was discerned.

Journal ArticleDOI
TL;DR: Results support the notion that mindfulness-based interventions hold promise as evidence-based treatments; effects on specific disorder subgroups showed the most consistent evidence in support of mindfulness for depression, pain conditions, smoking, and addictive disorders.

Journal ArticleDOI
TL;DR: The wildland-urban interface (WUI) is the area where houses and wildland vegetation meet or intermingle, and where wildfire problems are most pronounced; it grew rapidly from 1990 to 2010, making it the fastest-growing land use type in the conterminous United States.
Abstract: The wildland-urban interface (WUI) is the area where houses and wildland vegetation meet or intermingle, and where wildfire problems are most pronounced. Here we report that the WUI in the United States grew rapidly from 1990 to 2010 in terms of both number of new houses (from 30.8 to 43.4 million; 41% growth) and land area (from 581,000 to 770,000 km2; 33% growth), making it the fastest-growing land use type in the conterminous United States. The vast majority of new WUI areas were the result of new housing (97%), not related to an increase in wildland vegetation. Within the perimeter of recent wildfires (1990–2015), there were 286,000 houses in 2010, compared with 177,000 in 1990. Furthermore, WUI growth often results in more wildfire ignitions, putting more lives and houses at risk. Wildfire problems will not abate if recent housing growth trends continue.


Journal ArticleDOI
TL;DR: The interpretation of common fermentation end products, microbial populations, organoleptic properties, and changes in nutritive aspects of silages during storage is discussed, with emphasis on a North American perspective.

Journal ArticleDOI
TL;DR: This review provides a comprehensive survey of the electrochemical properties and electrocatalytic applications of aminoxyls, imidoxyls, and related reagents, of which the two prototypical and widely used examples are 2,2,6,6-tetramethylpiperidine N-oxyl (TEMPO) and phthalimide N-oxyl (PINO).
Abstract: N-Oxyl compounds represent a diverse group of reagents that find widespread use as catalysts for the selective oxidation of organic molecules in both laboratory and industrial applications. While turnover of N-oxyl catalysts in oxidation reactions may be accomplished with a variety of stoichiometric oxidants, N-oxyl reagents have also been extensively used as catalysts under electrochemical conditions in the absence of chemical oxidants. Several classes of N-oxyl compounds undergo facile redox reactions at electrode surfaces, enabling them to mediate a wide range of electrosynthetic reactions. Electrochemical studies also provide insights into the structural properties and mechanisms of chemical and electrochemical catalysis by N-oxyl compounds. This review provides a comprehensive survey of the electrochemical properties and electrocatalytic applications of aminoxyls, imidoxyls, and related reagents, of which the two prototypical and widely used examples are 2,2,6,6-tetramethylpiperidine N-oxyl (TEMPO) and phthalimide N-oxyl (PINO).

Journal ArticleDOI
14 Sep 2018-Science
TL;DR: It is shown that glutamate is a wound signal in plants; ion channels of the GLUTAMATE RECEPTOR-LIKE family act as sensors that convert this signal into an increase in intracellular calcium ion concentration that propagates to distant organs, where defense responses are then induced.
Abstract: Animals require rapid, long-range molecular signaling networks to integrate sensing and response throughout their bodies. The amino acid glutamate acts as an excitatory neurotransmitter in the vertebrate central nervous system, facilitating long-range information exchange via activation of glutamate receptor channels. Similarly, plants sense local signals, such as herbivore attack, and transmit this information throughout the plant body to rapidly activate defense responses in undamaged parts. Here we show that glutamate is a wound signal in plants. Ion channels of the GLUTAMATE RECEPTOR-LIKE family act as sensors that convert this signal into an increase in intracellular calcium ion concentration that propagates to distant organs, where defense responses are then induced.

Journal ArticleDOI
20 Jun 2018-Joule
TL;DR: In light of the targets set out by the Paris Climate Agreement and the global energy sector's ongoing transition from fossil fuels to renewables, the chemical industry is searching for innovative ways of reducing greenhouse gas emissions associated with the production of ammonia.

Journal ArticleDOI
TL;DR: The motivation of this perspective paper is to summarize the state-of-the-art topology optimization methods for a variety of AM topics; the hope is to inspire both researchers and engineers to meet the challenges with innovative solutions.
Abstract: Manufacturing-oriented topology optimization has been extensively studied over the past two decades, in particular for the conventional manufacturing methods, for example, machining and injection molding or casting. Both design and manufacturing engineers have benefited from these efforts because of the close-to-optimal and friendly-to-manufacture design solutions. Recently, additive manufacturing (AM) has received significant attention from both academia and industry. AM is characterized by producing geometrically complex components layer-by-layer, and greatly reduces the geometric complexity restrictions imposed on topology optimization by conventional manufacturing. In other words, AM can make near-full use of the freeform structural evolution of topology optimization. Even so, new rules and restrictions emerge due to the diverse and intricate AM processes, which should be carefully addressed when developing the AM-specific topology optimization algorithms. Therefore, the motivation of this perspective paper is to summarize the state-of-the-art topology optimization methods for a variety of AM topics. At the same time, this paper also expresses the authors' perspectives on the challenges and opportunities in these topics. The hope is to inspire both researchers and engineers to meet these challenges with innovative solutions.

Journal ArticleDOI
TL;DR: This work frames central issues regarding determination of protein-level variation and PTMs, including some paradoxes present in the field today, and uses this framework to assess existing data and ask the question, "How many distinct primary structures of proteins (proteoforms) are created from the 20,300 human genes?"
Abstract: Despite decades of accumulated knowledge about proteins and their post-translational modifications (PTMs), numerous questions remain regarding their molecular composition and biological function ...

Journal ArticleDOI
TL;DR: The combination of information gained from mass spectrometry (MS) and visualization of spatial distributions in thin sample sections makes mass spectrometry imaging a valuable chemical analysis tool for biological specimen characterization.
Abstract: Mass spectrometry imaging (MSI) is a powerful tool that enables untargeted investigations into the spatial distribution of molecular species in a variety of samples. It has the capability to image thousands of molecules, such as metabolites, lipids, peptides, proteins, and glycans, in a single experiment without labeling. The combination of information gained from mass spectrometry (MS) and visualization of spatial distributions in thin sample sections makes this a valuable chemical analysis tool useful for biological specimen characterization. After minimal but careful sample preparation, the general setup of an MSI experiment involves defining an (x, y) grid over the surface of the sample, with the grid area chosen by the user. The mass spectrometer then ionizes the molecules on the surface of the sample and collects a mass spectrum at each pixel on the section, with the resulting spatial resolution defined by the pixel size. After collecting the spectra, computational software can be used to select an ...
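
Conceptually, the output of such an experiment is a datacube with one full mass spectrum per (x, y) pixel; an "ion image" is a slice of that cube around a chosen m/z value. A schematic sketch with synthetic data (array sizes and the m/z axis are hypothetical):

```python
import numpy as np

# Hypothetical datacube: one spectrum per pixel on a shared m/z axis.
ny, nx, n_mz = 50, 60, 4000
mz_axis = np.linspace(100, 1100, n_mz)
cube = np.random.rand(ny, nx, n_mz)  # stand-in for measured intensities

def ion_image(cube, mz_axis, target_mz, tol=0.25):
    """Sum intensities within +/- tol of target_mz at every pixel."""
    sel = np.abs(mz_axis - target_mz) <= tol
    return cube[:, :, sel].sum(axis=2)  # (ny, nx) image for that ion

img = ion_image(cube, mz_axis, target_mz=760.5)
print(img.shape)  # (50, 60)
```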

Journal ArticleDOI
TL;DR: Meta-analytic data are presented revealing distinct subregions within the vmPFC that correspond to each of these three functions, as well as the associations between these subregions and specific psychiatric disorders (depression, posttraumatic stress disorder, addiction, social anxiety disorder, bipolar disorder, schizophrenia, and attention-deficit/hyperactivity disorder).