
Showing papers by "University of Colorado Boulder published in 2015"


Journal ArticleDOI
Theo Vos, Ryan M Barber, Brad Bell, Amelia Bertozzi-Villa, +686 more (287 institutions)
TL;DR: The Global Burden of Disease Study 2013 (GBD 2013) estimated incidence, prevalence, and years lived with disability for acute and chronic diseases and injuries in 188 countries between 1990 and 2013.

4,510 citations


Journal ArticleDOI
TL;DR: Because the most appropriate systolic blood-pressure targets for reducing cardiovascular morbidity and mortality among persons without diabetes remain uncertain, the authors compared an intensive target of less than 120 mm Hg with a standard target of less than 140 mm Hg.
Abstract: BACKGROUND The most appropriate targets for systolic blood pressure to reduce cardiovascular morbidity and mortality among persons without diabetes remain uncertain. METHODS We randomly assigned 9361 persons with a systolic blood pressure of 130 mm Hg or higher and an increased cardiovascular risk, but without diabetes, to a systolic blood-pressure target of less than 120 mm Hg (intensive treatment) or a target of less than 140 mm Hg (standard treatment). The primary composite outcome was myocardial infarction, other acute coronary syndromes, stroke, heart failure, or death from cardiovascular causes. RESULTS At 1 year, the mean systolic blood pressure was 121.4 mm Hg in the intensive-treatment group and 136.2 mm Hg in the standard-treatment group. The intervention was stopped early after a median follow-up of 3.26 years owing to a significantly lower rate of the primary composite outcome in the intensive-treatment group than in the standard-treatment group (1.65% per year vs. 2.19% per year; hazard ratio with intensive treatment, 0.75; 95% confidence interval [CI], 0.64 to 0.89; P<0.001). All-cause mortality was also significantly lower in the intensive-treatment group (hazard ratio, 0.73; 95% CI, 0.60 to 0.90; P=0.003). Rates of serious adverse events of hypotension, syncope, electrolyte abnormalities, and acute kidney injury or failure, but not of injurious falls, were higher in the intensive-treatment group than in the standard-treatment group. CONCLUSIONS Among patients at high risk for cardiovascular events but without diabetes, targeting a systolic blood pressure of less than 120 mm Hg, as compared with less than 140 mm Hg, resulted in lower rates of fatal and nonfatal major cardiovascular events and death from any cause, although significantly higher rates of some adverse events were observed in the intensive-treatment group. (Funded by the National Institutes of Health; ClinicalTrials.gov number, NCT01206062.).
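The reported hazard ratio can be sanity-checked against the annualized event rates quoted in the abstract. The sketch below is a crude rate-ratio calculation, not the trial's Cox proportional-hazards analysis, and the number-needed-to-treat figure is an illustrative derivation rather than a trial result:

```python
# Crude check of the SPRINT primary-outcome rates reported above.
# Not the trial's Cox proportional-hazards analysis; it only shows that
# the ratio of annualized event rates is close to the reported HR of 0.75.

intensive_rate = 0.0165   # 1.65% per year, as a proportion
standard_rate = 0.0219    # 2.19% per year

rate_ratio = intensive_rate / standard_rate
print(f"crude rate ratio: {rate_ratio:.3f}")   # ~0.753, near the reported HR 0.75

# Absolute risk reduction per year and an approximate number needed to treat
arr = standard_rate - intensive_rate
nnt_per_year = 1 / arr
print(f"absolute risk reduction: {arr:.4f}/yr; NNT ~= {nnt_per_year:.0f}")
```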

4,125 citations



Proceedings ArticleDOI
28 Feb 2015
TL;DR: The authors introduced the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies, which outperformed all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
Abstract: A Long Short-Term Memory (LSTM) network is a type of recurrent neural network architecture that has recently obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words into phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
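The composition step the abstract describes can be sketched for the Child-Sum variant of the Tree-LSTM: each node sums its children's hidden states and applies a separate forget gate to each child's memory cell. The sketch below is a minimal NumPy illustration with made-up dimensions and random weights, not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ChildSumTreeLSTMCell:
    """Minimal sketch of one Child-Sum Tree-LSTM composition step.
    Dimensions and random initialization are illustrative assumptions."""

    def __init__(self, in_dim, mem_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.mem_dim = mem_dim
        # One (W, U, b) triple per gate: input (i), forget (f), output (o), update (u).
        self.params = {
            g: (rng.standard_normal((mem_dim, in_dim)) * 0.1,
                rng.standard_normal((mem_dim, mem_dim)) * 0.1,
                np.zeros(mem_dim))
            for g in "ifou"
        }

    def node_forward(self, x, child_states):
        """x: input vector at this node; child_states: list of (h, c) pairs."""
        hs = [h for h, _ in child_states]
        # Sum (rather than concatenate) the children's hidden states,
        # so the cell handles any branching factor.
        h_tilde = np.sum(hs, axis=0) if hs else np.zeros(self.mem_dim)

        def gate(name, h, act):
            W, U, b = self.params[name]
            return act(W @ x + U @ h + b)

        i = gate("i", h_tilde, sigmoid)
        o = gate("o", h_tilde, sigmoid)
        u = gate("u", h_tilde, np.tanh)
        c = i * u
        # A separate forget gate per child lets the node selectively
        # discard information from individual subtrees.
        for h_k, c_k in child_states:
            c = c + gate("f", h_k, sigmoid) * c_k
        h = o * np.tanh(c)
        return h, c

# Toy usage: two leaves composed into a root (in_dim=4, mem_dim=5).
cell = ChildSumTreeLSTMCell(in_dim=4, mem_dim=5)
leaf_a = cell.node_forward(np.ones(4), [])
leaf_b = cell.node_forward(-np.ones(4), [])
root_h, root_c = cell.node_forward(np.zeros(4), [leaf_a, leaf_b])
print(root_h.shape)  # (5,)
```

The per-child forget gates are the key difference from a chain LSTM, which has a single forget gate over one predecessor state.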

2,702 citations


Journal ArticleDOI
TL;DR: An analysis of global forest cover is conducted to reveal that 70% of remaining forest is within 1 km of the forest’s edge, subject to the degrading effects of fragmentation, indicating an urgent need for conservation and restoration measures to improve landscape connectivity.
Abstract: We conducted an analysis of global forest cover to reveal that 70% of remaining forest is within 1 km of the forest’s edge, subject to the degrading effects of fragmentation. A synthesis of fragmentation experiments spanning multiple biomes and scales, five continents, and 35 years demonstrates that habitat fragmentation reduces biodiversity by 13 to 75% and impairs key ecosystem functions by decreasing biomass and altering nutrient cycles. Effects are greatest in the smallest and most isolated fragments, and they magnify with the passage of time. These findings indicate an urgent need for conservation and restoration measures to improve landscape connectivity, which will reduce extinction rates and help maintain ecosystem services.

2,201 citations


Journal ArticleDOI
TL;DR: The Community Earth System Model (CESM) community designed the CESM Large Ensemble with the explicit goal of enabling assessment of climate change in the presence of internal climate variability as discussed by the authors.
Abstract: While internal climate variability is known to affect climate projections, its influence is often underappreciated and confused with model error. Why? In general, modeling centers contribute a small number of realizations to international climate model assessments [e.g., phase 5 of the Coupled Model Intercomparison Project (CMIP5)]. As a result, model error and internal climate variability are difficult, and at times impossible, to disentangle. In response, the Community Earth System Model (CESM) community designed the CESM Large Ensemble (CESM-LE) with the explicit goal of enabling assessment of climate change in the presence of internal climate variability. All CESM-LE simulations use a single CMIP5 model (CESM with the Community Atmosphere Model, version 5). The core simulations replay the twentieth to twenty-first century (1920–2100) 30 times under historical and representative concentration pathway 8.5 external forcing with small initial condition differences. Two companion 1000+-yr-long preindustrial …

1,869 citations


Journal ArticleDOI
TL;DR: T-VEC is the first oncolytic immunotherapy to demonstrate therapeutic benefit against melanoma in a phase III clinical trial and represents a novel potential therapy for patients with metastatic melanoma.
Abstract: Purpose Talimogene laherparepvec (T-VEC) is a herpes simplex virus type 1‐derived oncolytic immunotherapy designed to selectively replicate within tumors and produce granulocyte macrophage colony-stimulating factor (GM-CSF) to enhance systemic antitumor immune responses. T-VEC was compared with GM-CSF in patients with unresected stage IIIB to IV melanoma in a randomized open-label phase III trial. Patients and Methods Patients with injectable melanoma that was not surgically resectable were randomly assigned at a two-to-one ratio to intralesional T-VEC or subcutaneous GM-CSF. The primary end point was durable response rate (DRR; objective response lasting continuously for ≥ 6 months) per independent assessment. Key secondary end points included overall survival (OS) and overall response rate. Results Among 436 patients randomly assigned, DRR was significantly higher with T-VEC (16.3%; 95% CI, 12.1% to 20.5%) than GM-CSF (2.1%; 95% CI, 0% to 4.5%; odds ratio, 8.9; P < .001). Overall response rate was also higher in the T-VEC arm (26.4%; 95% CI, 21.4% to 31.5% v 5.7%; 95% CI, 1.9% to 9.5%). Median OS was 23.3 months (95% CI, 19.5 to 29.6 months) with T-VEC and 18.9 months (95% CI, 16.0 to 23.7 months) with GM-CSF (hazard ratio, 0.79; 95% CI, 0.62 to 1.00; P = .051). T-VEC efficacy was most pronounced in patients with stage IIIB, IIIC, or IVM1a disease and in patients with treatment-naive disease. The most common adverse events (AEs) with T-VEC were fatigue, chills, and pyrexia. The only grade 3 or 4 AE occurring in ≥ 2% of T-VEC‐treated patients was cellulitis (2.1%). No fatal treatment-related AEs occurred.

1,815 citations


Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, Ovsat Abdinov, +5117 more (314 institutions)
TL;DR: A measurement of the Higgs boson mass is presented based on the combined data samples of the ATLAS and CMS experiments at the CERN LHC in the H→γγ and H→ZZ→4ℓ decay channels.
Abstract: A measurement of the Higgs boson mass is presented based on the combined data samples of the ATLAS and CMS experiments at the CERN LHC in the H→γγ and H→ZZ→4l decay channels. The results are obtained from a simultaneous fit to the reconstructed invariant mass peaks in the two channels and for the two experiments. The measured masses from the individual channels and the two experiments are found to be consistent among themselves. The combined measured mass of the Higgs boson is mH=125.09±0.21 (stat)±0.11 (syst) GeV.
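Assuming the quoted statistical and systematic uncertainties are independent, they add in quadrature, giving a total uncertainty of roughly ±0.24 GeV on the combined mass. A one-line check:

```python
from math import hypot

# Statistical and systematic uncertainties on the combined Higgs boson mass,
# taken from the abstract above (GeV). Assuming independence, they combine
# in quadrature to a single total uncertainty.
m_h = 125.09
stat, syst = 0.21, 0.11

total = hypot(stat, syst)  # sqrt(stat**2 + syst**2)
print(f"m_H = {m_h} +/- {total:.2f} GeV")  # +/- 0.24 GeV
```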

1,567 citations


Journal ArticleDOI
TL;DR: A meta-analysis of rsFC studies provides an empirical foundation for a neurocognitive model in which network dysfunction underlies core cognitive and affective abnormalities in depression.
Abstract: Importance Major depressive disorder (MDD) has been linked to imbalanced communication among large-scale brain networks, as reflected by abnormal resting-state functional connectivity (rsFC). However, given variable methods and results across studies, identifying consistent patterns of network dysfunction in MDD has been elusive. Objective To investigate network dysfunction in MDD through a meta-analysis of rsFC studies. Data Sources Seed-based voxelwise rsFC studies comparing individuals with MDD with healthy controls (published before June 30, 2014) were retrieved from electronic databases (PubMed, Web of Science, and EMBASE) and authors contacted for additional data. Study Selection Twenty-seven seed-based voxel-wise rsFC data sets from 25 publications (556 individuals with MDD and 518 healthy controls) were included in the meta-analysis. Data Extraction and Synthesis Coordinates of seed regions of interest and between-group effects were extracted. Seeds were categorized into seed-networks by their location within a priori functional networks. Multilevel kernel density analysis of between-group effects identified brain systems in which MDD was associated with hyperconnectivity (increased positive or reduced negative connectivity) or hypoconnectivity (increased negative or reduced positive connectivity) with each seed-network. Results Major depressive disorder was characterized by hypoconnectivity within the frontoparietal network, a set of regions involved in cognitive control of attention and emotion regulation, and hypoconnectivity between frontoparietal systems and parietal regions of the dorsal attention network involved in attending to the external environment. Major depressive disorder was also associated with hyperconnectivity within the default network, a network believed to support internally oriented and self-referential thought, and hyperconnectivity between frontoparietal control systems and regions of the default network. 
Finally, the MDD groups exhibited hypoconnectivity between neural systems involved in processing emotion or salience and midline cortical regions that may mediate top-down regulation of such functions. Conclusions and Relevance Reduced connectivity within frontoparietal control systems and imbalanced connectivity between control systems and networks involved in internal or external attention may reflect depressive biases toward internal thoughts at the cost of engaging with the external world. Meanwhile, altered connectivity between neural systems involved in cognitive control and those that support salience or emotion processing may relate to deficits regulating mood. These findings provide an empirical foundation for a neurocognitive model in which network dysfunction underlies core cognitive and affective abnormalities in depression.

1,385 citations


Journal ArticleDOI
TL;DR: Experimental results based on several hyperspectral image data sets demonstrate that the proposed method can achieve better classification performance than some traditional methods, such as support vector machines and the conventional deep learning-based methods.
Abstract: Recently, convolutional neural networks have demonstrated excellent performance on various visual tasks, including the classification of common two-dimensional images. In this paper, deep convolutional neural networks are employed to classify hyperspectral images directly in the spectral domain. More specifically, the architecture of the proposed classifier contains five layers with weights: the input layer, the convolutional layer, the max-pooling layer, the fully connected layer, and the output layer. These five layers are applied to each spectral signature to discriminate it from the others. Experimental results based on several hyperspectral image data sets demonstrate that the proposed method can achieve better classification performance than some traditional methods, such as support vector machines and conventional deep learning-based methods.
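The five-layer pipeline described above (input, convolution, max pooling, full connection, output), applied to a single pixel's spectral signature, can be sketched with NumPy. All sizes below (103 bands, 20 kernels of width 11, 100 hidden units, 9 classes) are illustrative assumptions, and the weights are random rather than trained:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D convolution of a spectral signature with each kernel."""
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k)
    return np.tanh(windows @ kernels.T)  # (len(x)-k+1, n_kernels)

def max_pool(feature_maps, pool=2):
    """Non-overlapping max pooling along the spectral axis."""
    n = (feature_maps.shape[0] // pool) * pool
    return feature_maps[:n].reshape(-1, pool, feature_maps.shape[1]).max(axis=1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes, not the paper's exact hyperparameters.
bands, n_kernels, k, hidden, classes = 103, 20, 11, 100, 9
kernels = rng.standard_normal((n_kernels, k)) * 0.1
pooled_len = ((bands - k + 1) // 2) * n_kernels
W1 = rng.standard_normal((hidden, pooled_len)) * 0.05
W2 = rng.standard_normal((classes, hidden)) * 0.05

x = rng.standard_normal(bands)        # input layer: one pixel's spectrum
fm = conv1d(x, kernels)               # convolutional layer
p = max_pool(fm).ravel()              # max-pooling layer
h = np.tanh(W1 @ p)                   # fully connected layer
probs = softmax(W2 @ h)               # output layer: class probabilities
print(probs.shape)                    # (9,)
```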

1,316 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the disordered regions of key RNP granule components and the full-length granule protein hnRNPA1 can phase separate in vitro, producing dynamic liquid droplets.

Journal ArticleDOI
24 Dec 2015-Nature
TL;DR: This demonstration could represent the beginning of an era of chip-scale electronic–photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.
Abstract: An electronic–photonic microprocessor chip manufactured using a conventional microelectronics foundry process is demonstrated; the chip contains 70 million transistors and 850 photonic components and directly uses light to communicate to other chips. The rapid transfer of data between chips in computer systems and data centres has become one of the bottlenecks in modern information processing. One way of increasing speeds is to use optical connections rather than electrical wires and the past decade has seen significant efforts to develop silicon-based nanophotonic approaches to integrate such links within silicon chips, but incompatibility between the manufacturing processes used in electronics and photonics has proved a hindrance. Now Chen Sun et al. describe a 'system on a chip' microprocessor that successfully integrates electronics and photonics yet is produced using standard microelectronic chip fabrication techniques. The resulting microprocessor combines 70 million transistors and 850 photonic components and can communicate optically with the outside world. This result promises a way forward for new fast, low-power computing systems architectures. Data transport across short electrical wires is limited by both bandwidth and power density, which creates a performance bottleneck for semiconductor microchips in modern computer systems—from mobile phones to large-scale data centres. These limitations can be overcome1,2,3 by using optical communications based on chip-scale electronic–photonic systems4,5,6,7 enabled by silicon-based nanophotonic devices8. However, combining electronics and photonics on the same chip has proved challenging, owing to microchip manufacturing conflicts between electronics and photonics. Consequently, current electronic–photonic chips9,10,11 are limited to niche manufacturing processes and include only a few optical devices alongside simple circuits. 
Here we report an electronic–photonic system on a single chip integrating over 70 million transistors and 850 photonic components that work together to provide logic, memory, and interconnect functions. This system is a realization of a microprocessor that uses on-chip photonic devices to directly communicate with other chips using light. To integrate electronics and photonics at the scale of a microprocessor chip, we adopt a ‘zero-change’ approach to the integration of photonics. Instead of developing a custom process to enable the fabrication of photonics12, which would complicate or eliminate the possibility of integration with state-of-the-art transistors at large scale and at high yield, we design optical devices using a standard microelectronics foundry process that is used for modern microprocessors13,14,15,16. This demonstration could represent the beginning of an era of chip-scale electronic–photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.

Journal ArticleDOI
TL;DR: A mechanism whereby host-microbe interactions augment barrier function in the distal gut is highlighted: the influences of butyrate are lost in cells lacking HIF, linking butyrate metabolism to stabilized HIF and barrier function.

Journal ArticleDOI
TL;DR: The Chicago Face Database is introduced, a free resource consisting of 158 high-resolution, standardized photographs of Black and White males and females between the ages of 18 and 40 years and extensive data about these targets and factors associated with researchers’ judgments of suitability.
Abstract: Researchers studying a range of psychological phenomena (e.g., theory of mind, emotion, stereotyping and prejudice, interpersonal attraction, etc.) sometimes employ photographs of people as stimuli. In this paper, we introduce the Chicago Face Database, a free resource consisting of 158 high-resolution, standardized photographs of Black and White males and females between the ages of 18 and 40 years and extensive data about these targets. In Study 1, we report pre-testing of these faces, which includes both subjective norming data and objective physical measurements of the images included in the database. In Study 2 we surveyed psychology researchers to assess the suitability of these targets for research purposes and explored factors that were associated with researchers' judgments of suitability. Instructions are outlined for those interested in obtaining access to the stimulus set and accompanying ratings and measures.

Journal ArticleDOI
TL;DR: The results suggest that elevated N and P inputs lead to predictable shifts in the taxonomic and functional traits of soil microbial communities, including increases in the relative abundances of faster-growing, copiotrophic bacterial taxa, with these shifts likely to impact belowground ecosystems worldwide.
Abstract: Soil microorganisms are critical to ecosystem functioning and the maintenance of soil fertility. However, despite global increases in the inputs of nitrogen (N) and phosphorus (P) to ecosystems due to human activities, we lack a predictive understanding of how microbial communities respond to elevated nutrient inputs across environmental gradients. Here we used high-throughput sequencing of marker genes to elucidate the responses of soil fungal, archaeal, and bacterial communities using an N and P addition experiment replicated at 25 globally distributed grassland sites. We also sequenced metagenomes from a subset of the sites to determine how the functional attributes of bacterial communities change in response to elevated nutrients. Despite strong compositional differences across sites, microbial communities shifted in a consistent manner with N or P additions, and the magnitude of these shifts was related to the magnitude of plant community responses to nutrient inputs. Mycorrhizal fungi and methanogenic archaea decreased in relative abundance with nutrient additions, as did the relative abundances of oligotrophic bacterial taxa. The metagenomic data provided additional evidence for this shift in bacterial life history strategies because nutrient additions decreased the average genome sizes of the bacterial community members and elicited changes in the relative abundances of representative functional genes. Our results suggest that elevated N and P inputs lead to predictable shifts in the taxonomic and functional traits of soil microbial communities, including increases in the relative abundances of faster-growing, copiotrophic bacterial taxa, with these shifts likely to impact belowground ecosystems worldwide.

Proceedings ArticleDOI
01 Jul 2015
TL;DR: This work presents a simple deep neural network that competes with and, in some cases, outperforms such models on sentiment analysis and factoid question answering tasks while taking only a fraction of the training time.
Abstract: Many existing deep learning models for natural language processing tasks focus on learning the compositionality of their inputs, which requires many expensive computations. We present a simple deep neural network that competes with and, in some cases, outperforms such models on sentiment analysis and factoid question answering tasks while taking only a fraction of the training time. While our model is syntactically-ignorant, we show significant improvements over previous bag-of-words models by deepening our network and applying a novel variant of dropout. Moreover, our model performs better than syntactic models on datasets with high syntactic variance. We show that our model makes similar errors to syntactically-aware models, indicating that for the tasks we consider, nonlinearly transforming the input is more important than tailoring a network to incorporate word order and syntax.
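The model described above, a deep averaging network, composes its input by simply averaging word embeddings, then deepens the network with a few nonlinear layers and applies word-level dropout during training. The sketch below uses a toy vocabulary, dimensions, and random weights of my own choosing, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy vocabulary and sizes; illustrative only.
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3, "terrible": 4}
emb_dim, hidden, classes = 8, 16, 2
E = rng.standard_normal((len(vocab), emb_dim)) * 0.1
W1 = rng.standard_normal((hidden, emb_dim)) * 0.1
W2 = rng.standard_normal((classes, hidden)) * 0.1

def dan_forward(tokens, word_dropout=0.3, train=False):
    ids = [vocab[t] for t in tokens]
    if train:
        # Word dropout: drop whole tokens before averaging (keep all
        # tokens if the random draw would drop every one of them).
        ids = [i for i in ids if rng.random() > word_dropout] or ids
    avg = E[ids].mean(axis=0)         # composition is just an average
    h = np.tanh(W1 @ avg)             # deepening adds nonlinear capacity
    z = W2 @ h
    e = np.exp(z - z.max())
    return e / e.sum()                # class probabilities

probs = dan_forward(["the", "movie", "was", "great"])
print(probs)  # a 2-class distribution; weights are untrained, so near-uniform
```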

Posted Content
TL;DR: The Tree-LSTM is introduced, a generalization of LSTMs to tree-structured network topologies that outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences and sentiment classification.
Abstract: Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).

Journal ArticleDOI
TL;DR: It is demonstrated using simulations based on whole-genome sequencing data that ∼97% and ∼68% of variation at common and rare variants, respectively, can be captured by imputation, and evidence that height- and BMI-associated variants have been under natural selection is found.
Abstract: We propose a method (GREML-LDMS) to estimate heritability for human complex traits in unrelated individuals using whole-genome sequencing data. We demonstrate using simulations based on whole-genome sequencing data that ∼97% and ∼68% of variation at common and rare variants, respectively, can be captured by imputation. Using the GREML-LDMS method, we estimate from 44,126 unrelated individuals that all ∼17 million imputed variants explain 56% (standard error (s.e.) = 2.3%) of variance for height and 27% (s.e. = 2.5%) of variance for body mass index (BMI), and we find evidence that height- and BMI-associated variants have been under natural selection. Considering the imperfect tagging of imputation and potential overestimation of heritability from previous family-based studies, heritability is likely to be 60-70% for height and 30-40% for BMI. Therefore, the missing heritability is small for both traits. For further discovery of genes associated with complex traits, a study design with SNP arrays followed by imputation is more cost-effective than whole-genome sequencing at current prices.

Journal ArticleDOI
TL;DR: Large area, flexible thin-film black gold membranes are demonstrated, which have multiscale structures of varying metallic nanoscale gaps (0–200 nm) as well as microscale funnel structures that allow heat localization within the few micrometre-thick layer and continuous water provision through micropores.
Abstract: Efficient steam generation under solar irradiation is of interest for energy harvesting applications. Here, Bae et al. develop a plasmonic nanofocusing film consisting of metal-coated alumina nanowires to efficiently generate solar vapour with an efficiency of up to 57% at 20 kW m⁻².

Journal ArticleDOI
TL;DR: A typology of sampling designs for qualitative researchers is provided, which represents a body of sampling strategies that facilitate credible comparisons of two or more different subgroups that are extracted from the same levels of study.
Abstract: The purpose of this paper is to provide a typology of sampling designs for qualitative researchers. We introduce the following sampling strategies: (a) parallel sampling designs, which represent a body of sampling strategies that facilitate credible comparisons of two or more different subgroups that are extracted from the same levels of study; (b) nested sampling designs, which are sampling strategies that facilitate credible comparisons of two or more members of the same subgroup, wherein one or more members of the subgroup represent a sub-sample of the full sample; and (c) multilevel sampling designs, which represent sampling strategies that facilitate credible comparisons of two or more subgroups that are extracted from different levels of study. Key Words: Qualitative Research, Sampling Designs, Random Sampling, Purposive Sampling, and Sample Size

Journal ArticleDOI
TL;DR: This study evaluates measurements of OA elemental composition made with the Aerodyne high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS), including the elemental analysis method introduced by Aiken et al. (2008).
Abstract: Elemental compositions of organic aerosol (OA) particles provide useful constraints on OA sources, chemical evolution, and effects. The Aerodyne high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) is widely used to measure OA elemental composition. This study evaluates AMS measurements of atomic oxygen-to-carbon (O : C), hydrogen-to-carbon (H : C), and organic mass-to-organic carbon (OM : OC) ratios, and of carbon oxidation state (OSC) for a vastly expanded laboratory data set of multifunctional oxidized OA standards. For the expanded standard data set, the method introduced by Aiken et al. (2008), which uses experimentally measured ion intensities at all ions to determine elemental ratios (referred to here as "Aiken-Explicit"), reproduces known O : C and H : C ratio values within 20% (average absolute value of relative errors) and 12%, respectively. The more commonly used method, which uses empirically estimated H2O+ and CO+ ion intensities to avoid gas-phase air interferences at these ions (referred to here as "Aiken-Ambient"), reproduces O : C and H : C of multifunctional oxidized species within 28 and 14% of known values. The values from the latter method are systematically biased low, however, with larger biases observed for alcohols and simple diacids. A detailed examination of the H2O+, CO+, and CO2+ fragments in the high-resolution mass spectra of the standard compounds indicates that the Aiken-Ambient method underestimates the CO+ and especially H2O+ produced from many oxidized species. Combined AMS–vacuum ultraviolet (VUV) ionization measurements indicate that these ions are produced by dehydration and decarboxylation on the AMS vaporizer (usually operated at 600 °C). Thermal decomposition is observed to be efficient at vaporizer temperatures down to 200 °C. These results are used together to develop an "Improved-Ambient" elemental analysis method for AMS spectra measured in air.
The Improved-Ambient method uses specific ion fragments as markers to correct for molecular functionality-dependent systematic biases and reproduces known O : C (H : C) ratios of individual oxidized standards within 28% (13%) of the known molecular values. The error in Improved-Ambient O : C (H : C) values is smaller for theoretical standard mixtures of the oxidized organic standards, which are more representative of the complex mix of species present in ambient OA. For ambient OA, the Improved-Ambient method produces O : C (H : C) values that are 27% (11%) larger than previously published Aiken-Ambient values; a corresponding increase of 9% is observed for OM : OC values. These results imply that ambient OA has a higher relative oxygen content than previously estimated. The OSC values calculated for ambient OA by the two methods agree well, however (average relative difference of 0.06 OSC units). This indicates that OSC is a more robust metric of oxidation than O : C, likely since OSC is not affected by hydration or dehydration, either in the atmosphere or during analysis.
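The carbon oxidation state used here can be approximated from the elemental ratios as OSC ≈ 2·(O : C) − (H : C), and OM : OC follows directly from atomic masses. The example compound below (citric acid) is an arbitrary illustration, not one of the study's standards:

```python
# Mean carbon oxidation state: OSc ~= 2*(O:C) - (H:C); OM:OC is the ratio
# of organic mass to organic-carbon mass. Citric acid (C6H8O7) is an
# arbitrary example molecule, not one of the study's standard compounds.

MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # atomic masses, g/mol

def elemental_metrics(n_c, n_h, n_o):
    o_to_c = n_o / n_c
    h_to_c = n_h / n_c
    os_c = 2 * o_to_c - h_to_c
    om_oc = (n_c * MASS["C"] + n_h * MASS["H"] + n_o * MASS["O"]) / (n_c * MASS["C"])
    return o_to_c, h_to_c, os_c, om_oc

o_c, h_c, os_c, om_oc = elemental_metrics(6, 8, 7)   # citric acid, C6H8O7
print(f"O:C={o_c:.2f}  H:C={h_c:.2f}  OSc={os_c:.2f}  OM:OC={om_oc:.2f}")
```

A systematic bias in the measured O : C and H : C propagates directly into OSc through this linear relation, which is why the two methods' OSc values can still agree when their individual ratios differ.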

Journal ArticleDOI
TL;DR: Although Mediator exists in all eukaryotes, a variety of Mediator functions seem to be specific to metazoans, which is indicative of more diverse regulatory requirements.
Abstract: The RNA polymerase II (Pol II) enzyme transcribes all protein-coding and most non-coding RNA genes and is globally regulated by Mediator, a large, conformationally flexible protein complex with a variable subunit composition (for example, a four-subunit cyclin-dependent kinase 8 module can reversibly associate with it). These biochemical characteristics are fundamentally important for Mediator's ability to control various processes that are important for transcription, including the organization of chromatin architecture and the regulation of Pol II pre-initiation, initiation, re-initiation, pausing and elongation. Although Mediator exists in all eukaryotes, a variety of Mediator functions seem to be specific to metazoans, which is indicative of more diverse regulatory requirements.

Journal ArticleDOI
TL;DR: This work performs a new accuracy evaluation of the JILA Sr clock, reducing many systematic uncertainties that limited previous measurements, such as those in the lattice ac Stark shift, the atoms' thermal environment and the atomic response to room-temperature blackbody radiation.
Abstract: Atomic clocks are increasingly important for many applications in scientific research and technology. Here, Nicholson et al. present a series of developments allowing them to achieve a new record in atomic clock performance, with a systematic uncertainty of just 2.1 × 10⁻¹⁸ for their 87Sr atomic clock.

Journal ArticleDOI
Vardan Khachatryan, Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, +2134 more (142 institutions)
TL;DR: The couplings of the Higgs boson are probed for deviations in magnitude from the standard model predictions in multiple ways, including searches for invisible and undetected decays, and no significant deviations are found.
Abstract: Properties of the Higgs boson with mass near 125 GeV are measured in proton-proton collisions with the CMS experiment at the LHC. Comprehensive sets of production and decay measurements are combined. The decay channels include gamma gamma, ZZ, WW, tau tau, bb, and mu mu pairs. The data samples were collected in 2011 and 2012 and correspond to integrated luminosities of up to 5.1 inverse femtobarns at 7 TeV and up to 19.7 inverse femtobarns at 8 TeV. From the high-resolution gamma gamma and ZZ channels, the mass of the Higgs boson is determined to be 125.02 +0.26 -0.27 (stat) +0.14 -0.15 (syst) GeV. For this mass value, the event yields obtained in the different analyses tagging specific decay channels and production mechanisms are consistent with those expected for the standard model Higgs boson. The combined best-fit signal relative to the standard model expectation is 1.00 +/- 0.09 (stat) +0.08 -0.07 (theo) +/- 0.07 (syst) at the measured mass. The couplings of the Higgs boson are probed for deviations in magnitude from the standard model predictions in multiple ways, including searches for invisible and undetected decays. No significant deviations are found.

Journal ArticleDOI
TL;DR: It is suggested that westernization significantly affects human microbiome diversity and that functional AR genes appear to be a feature of the human microbiome even in the absence of exposure to commercial antibiotics.
Abstract: Most studies of the human microbiome have focused on westernized people with life-style practices that decrease microbial survival and transmission, or on traditional societies that are currently in transition to westernization. We characterize the fecal, oral, and skin bacterial microbiome and resistome of members of an isolated Yanomami Amerindian village with no documented previous contact with Western people. These Yanomami harbor a microbiome with the highest diversity of bacteria and genetic functions ever reported in a human group. Despite their isolation, presumably for >11,000 years since their ancestors arrived in South America, and no known exposure to antibiotics, they harbor bacteria that carry functional antibiotic resistance (AR) genes, including those that confer resistance to synthetic antibiotics and are syntenic with mobilization elements. These results suggest that westernization significantly affects human microbiome diversity and that functional AR genes appear to be a feature of the human microbiome even in the absence of exposure to commercial antibiotics. AR genes are likely poised for mobilization and enrichment upon exposure to pharmacological levels of antibiotics. Our findings emphasize the need for extensive characterization of the function of the microbiome and resistome in remote nonwesternized populations before globalization of modern practices affects potentially beneficial bacteria harbored in the human body.

Journal ArticleDOI
TL;DR: Four central research questions, now tractable through advances in models, concepts and observations, are proposed to accelerate future progress in understanding the interactions between clouds, circulation and climate.
Abstract: Our understanding of the interactions between clouds, circulation and climate is limited. Four central research questions — now tractable through advances in models, concepts and observations — are proposed to accelerate future progress.

Journal ArticleDOI
TL;DR: Adoption of this staging classification provides a standardized taxonomy for type 1 diabetes and will aid the development of therapies and the design of clinical trials to prevent symptomatic disease, promote precision medicine, and provide a framework for an optimized benefit/risk ratio.
Abstract: Insights from prospective, longitudinal studies of individuals at risk for developing type 1 diabetes have demonstrated that the disease is a continuum that progresses sequentially at variable but predictable rates through distinct identifiable stages prior to the onset of symptoms. Stage 1 is defined as the presence of β-cell autoimmunity as evidenced by the presence of two or more islet autoantibodies with normoglycemia and is presymptomatic, stage 2 as the presence of β-cell autoimmunity with dysglycemia and is presymptomatic, and stage 3 as onset of symptomatic disease. Adoption of this staging classification provides a standardized taxonomy for type 1 diabetes and will aid the development of therapies and the design of clinical trials to prevent symptomatic disease, promote precision medicine, and provide a framework for an optimized benefit/risk ratio that will impact regulatory approval, reimbursement, and adoption of interventions in the early stages of type 1 diabetes to prevent symptomatic disease.

Journal ArticleDOI
Colm O'Dushlaine1, Lizzy Rossin1, Phil Lee2, Laramie E. Duncan2  +401 moreInstitutions (115)
TL;DR: It is indicated that risk variants for psychiatric disorders aggregate in particular biological pathways and that these pathways are frequently shared between disorders.
Abstract: Genome-wide association studies (GWAS) of psychiatric disorders have identified multiple genetic associations with such disorders, but better methods are needed to derive the underlying biological mechanisms that these signals indicate. We sought to identify biological pathways in GWAS data from over 60,000 participants from the Psychiatric Genomics Consortium. We developed an analysis framework to rank pathways that requires only summary statistics. We combined this score across disorders to find common pathways across three adult psychiatric disorders: schizophrenia, major depression and bipolar disorder. Histone methylation processes showed the strongest association, and we also found statistically significant evidence for associations with multiple immune and neuronal signaling pathways and with the postsynaptic density. Our study indicates that risk variants for psychiatric disorders aggregate in particular biological pathways and that these pathways are frequently shared between disorders. Our results confirm known mechanisms and suggest several novel insights into the etiology of psychiatric disorders.

Journal ArticleDOI
TL;DR: The MAVEN spacecraft has eight science instruments (with nine sensors) that measure the energy and particle input from the Sun into the Mars upper atmosphere, the response of the upper atmosphere to that input, and the resulting escape of gas to space as mentioned in this paper.
Abstract: The MAVEN spacecraft launched in November 2013, arrived at Mars in September 2014, and completed commissioning and began its one-Earth-year primary science mission in November 2014. The orbiter's science objectives are to explore the interactions of the Sun and the solar wind with the Mars magnetosphere and upper atmosphere, to determine the structure of the upper atmosphere and ionosphere and the processes controlling it, to determine the escape rates from the upper atmosphere to space at the present epoch, and to measure properties that allow us to extrapolate these escape rates into the past to determine the total loss of atmospheric gas to space through time. These results will allow us to determine the importance of loss to space in changing the Mars climate and atmosphere through time, thereby providing important boundary conditions on the history of the habitability of Mars. The MAVEN spacecraft contains eight science instruments (with nine sensors) that measure the energy and particle input from the Sun into the Mars upper atmosphere, the response of the upper atmosphere to that input, and the resulting escape of gas to space. In addition, it contains an Electra relay that will allow it to relay commands and data between spacecraft on the surface and Earth.

Journal ArticleDOI
TL;DR: Rociletinib was active in patients with EGFR-mutated NSCLC associated with the T790M resistance mutation and the only common dose-limiting adverse event was hyperglycemia.
Abstract: Background: Non–small-cell lung cancer (NSCLC) with a mutation in the gene encoding epidermal growth factor receptor (EGFR) is sensitive to approved EGFR inhibitors, but resistance develops, mediated by the T790M EGFR mutation in most cases. Rociletinib (CO-1686) is an EGFR inhibitor active in preclinical models of EGFR-mutated NSCLC with or without T790M. Methods: In this phase 1–2 study, we administered rociletinib to patients with EGFR-mutated NSCLC who had disease progression during previous treatment with an existing EGFR inhibitor. In the expansion (phase 2) part of the study, patients with T790M-positive disease received rociletinib at a dose of 500 mg twice daily, 625 mg twice daily, or 750 mg twice daily. Key objectives were assessment of safety, side-effect profile, pharmacokinetics, and preliminary antitumor activity of rociletinib. Tumor biopsies to identify T790M were performed during screening. Treatment was administered in continuous 21-day cycles. Results: A total of 130 patients were enrolled. ...