Showing papers by "University of California, Irvine" published in 2013
••
University of Illinois at Urbana–Champaign, Joint Institute for the Study of the Atmosphere and Ocean, Cooperative Institute for Research in Environmental Sciences, University of Leeds, University of Oslo, United States Environmental Protection Agency, University of Michigan, Pacific Northwest National Laboratory, German Aerospace Center, United States Department of Energy, Max Planck Society, University of Tokyo, National Oceanic and Atmospheric Administration, Forschungszentrum Jülich, Norwegian Meteorological Institute, Indian Institute of Technology Bombay, China Meteorological Administration, Peking University, Met Office, Desert Research Institute, Clarkson University, Stanford University, European Centre for Medium-Range Weather Forecasts, International Institute of Minnesota, Goddard Institute for Space Studies, Yale University, University of Washington, University of California, Irvine
TL;DR: In this paper, the authors provided an assessment of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed phase, and ice clouds; and deposition on snow and ice.
Abstract: Black carbon aerosol plays a unique and important role in Earth's climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed phase, and ice clouds; and deposition on snow and ice. These effects are calculated with climate models, but when possible, they are evaluated with both microphysical measurements and field observations. Predominant sources are combustion related, namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg yr⁻¹ in the year 2000 with an uncertainty range of 2000 to 29000 Gg yr⁻¹. However, global atmospheric absorption attributable to black carbon is too low in many models and should be increased by a factor of almost 3. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W m⁻² with 90% uncertainty bounds of (+0.08, +1.27) W m⁻². Total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W m⁻². Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings, including rapid adjustments. The best estimate of industrial-era climate forcing of black carbon through all forcing mechanisms, including clouds and cryosphere forcing, is +1.1 W m⁻² with 90% uncertainty bounds of +0.17 to +2.1 W m⁻².
Thus, there is a very high probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. We estimate that black carbon, with a total climate forcing of +1.1 W m⁻², is the second most important human emission in terms of its climate forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of short-lived co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have an industrial-era climate forcing of +0.22 (−0.50 to +1.08) W m⁻² during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all short-lived emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all short-lived species from black-carbon-rich sources becomes slightly negative (−0.06 W m⁻² with 90% uncertainty bounds of −1.45 to +1.29 W m⁻²). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon. In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility play important roles. The major sources of black carbon are presently in different stages with regard to the feasibility for near-term mitigation.
This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.
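The headline forcing numbers in this abstract can be cross-checked with simple bookkeeping. A minimal sketch (the decomposition labels below are my own arithmetic on the abstract's values, not a calculation from the assessment itself):

```python
# Back-of-envelope bookkeeping for the black-carbon forcing terms
# quoted in the abstract (all values in W m^-2).

direct_industrial = 0.71   # industrial-era (1750-2005) direct forcing
direct_all_sources = 0.88  # direct forcing without subtracting preindustrial background
total_industrial = 1.1     # all mechanisms, including clouds and cryosphere

# Preindustrial-background direct forcing implied by the two direct estimates:
preindustrial_direct = direct_all_sources - direct_industrial

# Forcing attributed to mechanisms beyond direct absorption
# (cloud and cryosphere effects plus rapid adjustments):
indirect_industrial = total_industrial - direct_industrial

print(round(preindustrial_direct, 2))  # 0.17
print(round(indirect_industrial, 2))   # 0.39
```

The implied preindustrial background of 0.17 W m⁻² matches the lower bound quoted for the all-source direct forcing, a small consistency check on the abstract's numbers.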
4,591 citations
••
TL;DR: In this paper, the authors performed an integrated genomic, transcriptomic and proteomic characterization of 373 endometrial carcinomas using array- and sequencing-based technologies, and classified them into four categories: POLE ultramutated, microsatellite instability hypermutated, copy-number low, and copy-number high.
Abstract: We performed an integrated genomic, transcriptomic and proteomic characterization of 373 endometrial carcinomas using array- and sequencing-based technologies. Uterine serous tumours and ∼25% of high-grade endometrioid tumours had extensive copy number alterations, few DNA methylation changes, low oestrogen receptor/progesterone receptor levels, and frequent TP53 mutations. Most endometrioid tumours had few copy number alterations or TP53 mutations, but frequent mutations in PTEN, CTNNB1, PIK3CA, ARID1A and KRAS and novel mutations in the SWI/SNF chromatin remodelling complex gene ARID5B. A subset of endometrioid tumours that we identified had a markedly increased transversion mutation frequency and newly identified hotspot mutations in POLE. Our results classified endometrial cancers into four categories: POLE ultramutated, microsatellite instability hypermutated, copy-number low, and copy-number high. Uterine serous carcinomas share genomic features with ovarian serous and basal-like breast carcinomas. We demonstrated that the genomic features of endometrial carcinomas permit a reclassification that may affect post-surgical adjuvant treatment for women with aggressive tumours.
3,719 citations
••
TL;DR: In this paper, the authors present the first comprehensive, multidisciplinary review of consumer rebates that includes federal regulations, state laws, and academic research, and identify federal guidelines for rebates by reviewing the 18 Federal Trade Commission rebate-related complaints and the associated consent decrees.
Abstract: The authors present the first comprehensive, multidisciplinary review of consumer rebates that includes federal regulations, state laws, and academic research. They discuss four topics that have been the foci of consumer concerns and policy reform: rebate advertising, rebate redemption disclosures, rebate redemption processes, and rebate payment processes. With respect to each of these four topics, the authors identify federal guidelines for rebates by reviewing the 18 Federal Trade Commission rebate-related complaints and the 18 associated consent decrees. Furthermore, they discuss 15 rebate laws from 11 U.S. states, 7 of which were enacted since 2007. In addition, they review academic research related to rebates from diverse literatures including marketing, consumer behavior, psychology, and economics and identify research gaps. This information should help policy makers evaluate rebate policies to assess whether the policies are evidence based, and it should help academics identify unanswered research questions.
2,266 citations
••
TL;DR: Empirical evidence of shared genetic etiology for psychiatric disorders can inform nosology and encourages the investigation of common pathophysiologies for related disorders.
Abstract: Most psychiatric disorders are moderately to highly heritable. The degree to which genetic variation is unique to individual disorders or shared across disorders is unclear. To examine shared genetic etiology, we use genome-wide genotype data from the Psychiatric Genomics Consortium (PGC) for cases and controls in schizophrenia, bipolar disorder, major depressive disorder, autism spectrum disorders (ASD) and attention-deficit/hyperactivity disorder (ADHD). We apply univariate and bivariate methods for the estimation of genetic variation within and covariation between disorders. SNPs explained 17-29% of the variance in liability. The genetic correlation calculated using common SNPs was high between schizophrenia and bipolar disorder (0.68 ± 0.04 s.e.), moderate between schizophrenia and major depressive disorder (0.43 ± 0.06 s.e.), bipolar disorder and major depressive disorder (0.47 ± 0.06 s.e.), and ADHD and major depressive disorder (0.32 ± 0.07 s.e.), low between schizophrenia and ASD (0.16 ± 0.06 s.e.) and non-significant for other pairs of disorders as well as between psychiatric disorders and the negative control of Crohn's disease. This empirical evidence of shared genetic etiology for psychiatric disorders can inform nosology and encourages the investigation of common pathophysiologies for related disorders.
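The quoted genetic correlations and standard errors map directly onto a simple z-test. A sketch of that arithmetic (a standard Wald test on the abstract's estimates, not necessarily the authors' exact procedure):

```python
import math

def two_sided_p(estimate, se):
    """Two-sided normal (Wald) p-value for an estimate with standard error se."""
    z = estimate / se
    return math.erfc(abs(z) / math.sqrt(2.0))

# Schizophrenia-bipolar genetic correlation from the abstract: 0.68 +/- 0.04.
p_scz_bip = two_sided_p(0.68, 0.04)   # z = 17: overwhelmingly significant

# Schizophrenia-ASD: 0.16 +/- 0.06, described in the abstract as "low".
p_scz_asd = two_sided_p(0.16, 0.06)   # z ~ 2.7, p below 0.01 but far weaker

print(p_scz_bip < 1e-10, 0.005 < p_scz_asd < 0.01)
```

This illustrates why the 0.68 correlation is described as high and the 0.16 one as low: the former is seventeen standard errors from zero, the latter fewer than three.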
2,058 citations
••
Kyle S. Dawson, David J. Schlegel, Christopher P. Ahn, Scott F. Anderson, and 181 more authors (51 institutions)
TL;DR: The Baryon Oscillation Spectroscopic Survey (BOSS) as discussed by the authors was designed to measure the scale of baryon acoustic oscillations (BAO) in the clustering of matter over a larger volume than the combined efforts of all previous spectroscopic surveys of large-scale structure.
Abstract: The Baryon Oscillation Spectroscopic Survey (BOSS) is designed to measure the scale of baryon acoustic oscillations (BAO) in the clustering of matter over a larger volume than the combined efforts of all previous spectroscopic surveys of large-scale structure. BOSS uses 1.5 million luminous galaxies as faint as i = 19.9 over 10,000 deg² to measure BAO to redshifts z < 0.7. Observations of neutral hydrogen in the Lyα forest in more than 150,000 quasar spectra (g < 22) will constrain BAO over the redshift range 2.15 < z < 3.5. Early results from BOSS include the first detection of the large-scale three-dimensional clustering of the Lyα forest and a strong detection from the Data Release 9 data set of the BAO in the clustering of massive galaxies at an effective redshift z = 0.57. We project that BOSS will yield measurements of the angular diameter distance D_A to an accuracy of 1.0% at redshifts z = 0.3 and z = 0.57 and measurements of H(z) to 1.8% and 1.7% at the same redshifts. Forecasts for Lyα forest constraints predict a measurement of an overall dilation factor that scales the highly degenerate D_A(z) and H⁻¹(z) parameters to an accuracy of 1.9% at z ~ 2.5 when the survey is complete. Here, we provide an overview of the selection of spectroscopic targets, planning of observations, and analysis of data and data quality of BOSS.
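The "overall dilation factor" the abstract mentions rescales the standard volume-averaged BAO distance, which combines the degenerate transverse and line-of-sight information. A sketch of that textbook quantity (the D_A and H values below are illustrative placeholders, not BOSS measurements):

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def d_v(z, d_a_mpc, h_kms_mpc):
    """Volume-averaged BAO distance D_V(z) = [(1+z)^2 D_A^2 c z / H(z)]^(1/3), in Mpc.

    This is the standard combination that an overall dilation factor rescales
    when D_A(z) and H(z) cannot be separated individually.
    """
    return ((1 + z) ** 2 * d_a_mpc ** 2 * C_KM_S * z / h_kms_mpc) ** (1.0 / 3.0)

# Illustrative (assumed, not measured) values near the galaxy sample's
# effective redshift z = 0.57:
dv = d_v(0.57, d_a_mpc=1380.0, h_kms_mpc=93.0)
print(round(dv))  # roughly 2000 Mpc for these assumed inputs
```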
1,938 citations
••
British Antarctic Survey, University of Bristol, Columbia University, National Institute of Geophysics and Volcanology, University of Aberdeen, University of Texas at Austin, Centro de Estudios Científicos, Université libre de Bruxelles, University of Washington, Swansea University, Institute for Geosciences and Natural Resources, Technical University of Denmark, National Institute of Polar Research, California Institute of Technology, University of Kansas, Stockholm University, St. Olaf College, Norwegian Polar Institute, Wallops Flight Facility, University of Canterbury, University of Oslo, University of California, Santa Barbara, University of California, Irvine, University of York, Australian Antarctic Division, Newcastle University, Goddard Space Flight Center, Polar Research Institute of China
TL;DR: Bedmap2 as discussed by the authors is a suite of gridded products describing surface elevation, ice-thickness and the seafloor and subglacial bed elevation of the Antarctic south of 60° S. In particular, the Bedmap2 ice thickness grid is made from 25 million measurements, over two orders of magnitude more than were used in Bedmap1.
Abstract: We present Bedmap2, a new suite of gridded products describing surface elevation, ice-thickness and the seafloor and subglacial bed elevation of the Antarctic south of 60° S. We derived these products using data from a variety of sources, including many substantial surveys completed since the original Bedmap compilation (Bedmap1) in 2001. In particular, the Bedmap2 ice thickness grid is made from 25 million measurements, over two orders of magnitude more than were used in Bedmap1. In most parts of Antarctica the subglacial landscape is visible in much greater detail than was previously available and the improved data-coverage has in many areas revealed the full scale of mountain ranges, valleys, basins and troughs, only fragments of which were previously indicated in local surveys. The derived statistics for Bedmap2 show that the volume of ice contained in the Antarctic ice sheet (27 million km3) and its potential contribution to sea-level rise (58 m) are similar to those of Bedmap1, but the mean thickness of the ice sheet is 4.6% greater, the mean depth of the bed beneath the grounded ice sheet is 72 m lower and the area of ice sheet grounded on bed below sea level is increased by 10%. The Bedmap2 compilation highlights several areas beneath the ice sheet where the bed elevation is substantially lower than the deepest bed indicated by Bedmap1. These products, along with grids of data coverage and uncertainty, provide new opportunities for detailed modelling of the past and future evolution of the Antarctic ice sheets.
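A naive conversion of the Bedmap2 ice volume to sea-level equivalent shows why the careful 58 m figure is lower than a back-of-envelope estimate: ice already floating or grounded below sea level displaces ocean water and contributes less. A sketch (ocean area and density ratio are assumed round numbers, not Bedmap2 values):

```python
# Back-of-envelope sea-level equivalent of the Antarctic ice sheet, using
# the Bedmap2 ice volume from the abstract. The ocean area and density
# ratio below are assumed round numbers, not values from Bedmap2.

ICE_VOLUME_KM3 = 27e6      # Bedmap2 ice volume (abstract)
OCEAN_AREA_KM2 = 3.61e8    # assumed global ocean area
DENSITY_RATIO = 0.917      # assumed ice density / fresh-water density

naive_sle_m = ICE_VOLUME_KM3 * DENSITY_RATIO / OCEAN_AREA_KM2 * 1000.0  # km -> m
print(round(naive_sle_m, 1))  # ~68.6 m: larger than the quoted 58 m because
                              # ice below flotation does not all add to sea level
```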
1,678 citations
••
Centre national de la recherche scientifique, Commonwealth Scientific and Industrial Research Organisation, National Oceanic and Atmospheric Administration, United States Department of Energy, University of California, Irvine, Seconda Università degli Studi di Napoli, Central Maine Community College, Max Planck Society, Swiss Federal Institute for Forest, Snow and Landscape Research, Utrecht University, Carma, National Center for Atmospheric Research, University of East Anglia, Massachusetts Institute of Technology, VU University Amsterdam, Goddard Space Flight Center, University of Bern, Nagoya University, Imperial College London, Royal Netherlands Meteorological Institute, University of California, San Diego, National Institute of Water and Atmospheric Research
TL;DR: In this paper, the authors construct decadal budgets for methane sources and sinks between 1980 and 2010, using a combination of atmospheric measurements and results from chemical transport models, ecosystem models, climate chemistry models and inventories of anthropogenic emissions.
Abstract: Methane is an important greenhouse gas, responsible for about 20% of the warming induced by long-lived greenhouse gases since pre-industrial times. By reacting with hydroxyl radicals, methane reduces the oxidizing capacity of the atmosphere and generates ozone in the troposphere. Although most sources and sinks of methane have been identified, their relative contributions to atmospheric methane levels are highly uncertain. As such, the factors responsible for the observed stabilization of atmospheric methane levels in the early 2000s, and the renewed rise after 2006, remain unclear. Here, we construct decadal budgets for methane sources and sinks between 1980 and 2010, using a combination of atmospheric measurements and results from chemical transport models, ecosystem models, climate chemistry models and inventories of anthropogenic emissions. The resultant budgets suggest that data-driven approaches and ecosystem models overestimate total natural emissions. We build three contrasting emission scenarios, which differ in fossil fuel and microbial emissions, to explain the decadal variability in atmospheric methane levels detected, here and in previous studies, since 1985. Although uncertainties in emission trends do not allow definitive conclusions to be drawn, we show that the observed stabilization of methane levels between 1999 and 2006 can potentially be explained by decreasing-to-stable fossil fuel emissions, combined with stable-to-increasing microbial emissions. We show that a rise in natural wetland emissions and fossil fuel emissions probably accounts for the renewed increase in global methane levels after 2006, although the relative contribution of these two sources remains uncertain.
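The stabilization-then-rise behavior described above can be framed with a one-box budget, dB/dt = E − B/τ: levels stabilize when emissions balance the sink. A sketch (the burden and lifetime below are illustrative round numbers, not values from this study):

```python
# One-box methane budget: dB/dt = E - B/tau. At steady state (the
# stabilization of the early 2000s), emissions balance the sink.
# Burden and lifetime are assumed illustrative values.

BURDEN_TG = 4850.0   # assumed atmospheric CH4 burden, Tg
LIFETIME_YR = 9.0    # assumed whole-atmosphere CH4 lifetime, yr

steady_state_emissions = BURDEN_TG / LIFETIME_YR  # Tg/yr that holds B constant
print(round(steady_state_emissions))  # ~539 Tg/yr for these assumed inputs

def burden_after(years, emissions_tg_yr, burden0=BURDEN_TG, tau=LIFETIME_YR, dt=0.1):
    """Euler-step the box model to see how the burden responds to an emission change."""
    b = burden0
    for _ in range(int(years / dt)):
        b += (emissions_tg_yr - b / tau) * dt
    return b

# Raising emissions above the steady-state value makes the burden rise again,
# qualitatively like the renewed increase after 2006:
assert burden_after(5, steady_state_emissions + 20) > BURDEN_TG
```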
1,668 citations
••
University College London, University of Texas at Austin, Goethe University Frankfurt, Goddard Space Flight Center, Utrecht University, University of Rennes, James Cook University, University of California, Irvine, University of Oxford, United States Geological Survey, United States Department of Agriculture, Sun Yat-sen University, British Geological Survey, Rutgers University, Colorado School of Mines, San Francisco State University, Simon Fraser University, University of East Anglia, Cranfield University
TL;DR: In this paper, the authors critically review recent research assessing the impacts of climate on groundwater through natural and human-induced processes as well as through groundwater-driven feedbacks on the climate system, and highlight the possible opportunities and challenges of using and sustaining groundwater resources in climate adaptation strategies.
Abstract: As the world's largest distributed store of fresh water, groundwater plays a central part in sustaining ecosystems and enabling human adaptation to climate variability and change. The strategic importance of groundwater for global water and food security will probably intensify under climate change as more frequent and intense climate extremes (droughts and floods) increase variability in precipitation, soil moisture and surface water. Here we critically review recent research assessing the impacts of climate on groundwater through natural and human-induced processes as well as through groundwater-driven feedbacks on the climate system. Furthermore, we examine the possible opportunities and challenges of using and sustaining groundwater resources in climate adaptation strategies, and highlight the lack of groundwater observations, which, at present, limits our understanding of the dynamic relationship between groundwater and climate.
1,536 citations
••
National Oceanography Centre, Southampton, Stanford University, Bar-Ilan University, Centre national de la recherche scientifique, University of Tasmania, University of Otago, McGill University, University of Essex, Pierre-and-Marie-Curie University, ETH Zurich, University of East Anglia, University of Exeter, Cornell University, University of Vigo, University of Pennsylvania, University of California, Irvine, Nagoya University, Leibniz Institute of Marine Sciences, Woods Hole Oceanographic Institution, University of Bergen, University of Tokyo, University of Concepción
TL;DR: In this paper, the authors reveal two broad regimes of phytoplankton nutrient limitation in the modern upper ocean: nitrogen availability tends to limit productivity throughout much of the surface low-latitude ocean, where the supply of nutrients from the subsurface is relatively slow, whereas iron often limits productivity where subsurface nutrient supply is enhanced.
Abstract: Microbial activity is a fundamental component of oceanic nutrient cycles. Photosynthetic microbes, collectively termed phytoplankton, are responsible for the vast majority of primary production in marine waters. The availability of nutrients in the upper ocean frequently limits the activity and abundance of these organisms. Experimental data have revealed two broad regimes of phytoplankton nutrient limitation in the modern upper ocean. Nitrogen availability tends to limit productivity throughout much of the surface low-latitude ocean, where the supply of nutrients from the subsurface is relatively slow. In contrast, iron often limits productivity where subsurface nutrient supply is enhanced, including within the main oceanic upwelling regions of the Southern Ocean and the eastern equatorial Pacific. Phosphorus, vitamins and micronutrients other than iron may also (co-)limit marine phytoplankton. The spatial patterns and importance of co-limitation, however, remain unclear. Variability in the stoichiometries of nutrient supply and biological demand are key determinants of oceanic nutrient limitation. Deciphering the mechanisms that underpin this variability, and the consequences for marine microbes, will be a challenge. But such knowledge will be crucial for accurately predicting the consequences of ongoing anthropogenic perturbations to oceanic nutrient biogeochemistry.
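The role of supply-versus-demand stoichiometry described above follows the law of the minimum: the limiting nutrient is the one scarcest relative to cellular demand. A cartoon sketch (the Redfield-like demand ratios and supply numbers are illustrative, not data from the paper):

```python
def limiting_nutrient(supply, demand):
    """Law-of-the-minimum cartoon: the limiting nutrient is the one whose
    supply is smallest relative to the stoichiometric demand.

    supply, demand: dicts of nutrient -> amount (same arbitrary units).
    """
    return min(supply, key=lambda n: supply[n] / demand[n])

# Redfield-like demand (N:P = 16:1) plus a trace iron requirement (illustrative):
demand = {"N": 16.0, "P": 1.0, "Fe": 0.001}

# Low-latitude gyre cartoon: slow subsurface supply leaves N scarcest.
assert limiting_nutrient({"N": 8.0, "P": 1.0, "Fe": 0.001}, demand) == "N"

# Upwelling-region cartoon: macronutrients plentiful, iron scarce.
assert limiting_nutrient({"N": 32.0, "P": 2.0, "Fe": 0.0005}, demand) == "Fe"
```

The same function also hints at co-limitation: when two supply/demand ratios are nearly equal, small changes in supply stoichiometry flip which nutrient limits.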
1,516 citations
••
University of Adelaide, University of Wisconsin-Madison, Ghent University, University of Canterbury, University of Geneva, Humboldt University of Berlin, University of California, Irvine, University of Mainz, University of California, Berkeley, Ohio State University, Université libre de Bruxelles, Ruhr University Bochum, University of Wuppertal, University of Maryland, College Park, University of Kansas, Lawrence Berkeley National Laboratory, RWTH Aachen University, Uppsala University, University of Alberta, Stockholm University, Vrije Universiteit Brussel, University of Bonn, École Polytechnique Fédérale de Lausanne, Georgia Institute of Technology, Pennsylvania State University, Technical University of Dortmund, Southern University and A&M College
TL;DR: The search reveals a high-energy neutrino flux containing the most energetic neutrinos ever observed, including 28 events at energies between 30 and 1200 TeV; although the origin of this flux is unknown, the findings are consistent with expectations for a neutrino population with origins outside the solar system.
Abstract: We report on results of an all-sky search for high-energy neutrino events interacting within the IceCube neutrino detector conducted between May 2010 and May 2012. The search follows up on the previous detection of two PeV neutrino events, with improved sensitivity and extended energy coverage down to about 30 TeV. Twenty-six additional events were observed, substantially more than expected from atmospheric backgrounds. Combined, both searches reject a purely atmospheric origin for the 28 events at the 4 sigma level. These 28 events, which include the highest energy neutrinos ever observed, have flavors, directions, and energies inconsistent with those expected from the atmospheric muon and neutrino backgrounds. These properties are, however, consistent with generic predictions for an additional component of extraterrestrial origin.
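Rejecting "a purely atmospheric origin at the 4 sigma level" is, at its core, a Poisson excess-over-background test. A generic sketch of that ingredient (the worked numbers are a toy case; the actual IceCube analysis uses its own background model and folds in its uncertainties):

```python
import math

def poisson_tail(n_obs, background):
    """P(N >= n_obs) for a Poisson-distributed background expectation --
    the basic ingredient behind significance statements for event-count
    excesses. (Real analyses also account for background uncertainties.)"""
    cdf = sum(math.exp(-background) * background ** k / math.factorial(k)
              for k in range(n_obs))
    return 1.0 - cdf

# Toy worked case: observing 5 events on an expected background of 2.
p = poisson_tail(5, 2.0)
print(round(p, 4))  # ~0.0527: already a marginal excess at these small numbers
```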
1,490 citations
••
University of Pennsylvania, Colorado State University, Mayo Clinic, Harvard University, National Institutes of Health, King's College London, University of Freiburg, Washington University in St. Louis, Stanford University, University of Miami, Boston Biomedical Research Institute, University of California, Irvine
TL;DR: Dysregulated polymerization caused by a potent mutant steric zipper motif in a PrLD can initiate degenerative disease, and related proteins with PrLDs should be considered candidates for initiating and perhaps propagating proteinopathies of muscle, brain, motor neuron and bone.
Abstract: Algorithms designed to identify canonical yeast prions predict that around 250 human proteins, including several RNA-binding proteins associated with neurodegenerative disease, harbour a distinctive prion-like domain (PrLD) enriched in uncharged polar amino acids and glycine. PrLDs in RNA-binding proteins are essential for the assembly of ribonucleoprotein granules. However, the interplay between human PrLD function and disease is not understood. Here we define pathogenic mutations in PrLDs of heterogeneous nuclear ribonucleoproteins (hnRNPs) A2B1 and A1 in families with inherited degeneration affecting muscle, brain, motor neuron and bone, and in one case of familial amyotrophic lateral sclerosis. Wild-type hnRNPA2 (the most abundant isoform of hnRNPA2B1) and hnRNPA1 show an intrinsic tendency to assemble into self-seeding fibrils, which is exacerbated by the disease mutations. Indeed, the pathogenic mutations strengthen a 'steric zipper' motif in the PrLD, which accelerates the formation of self-seeding fibrils that cross-seed polymerization of wild-type hnRNP. Notably, the disease mutations promote excess incorporation of hnRNPA2 and hnRNPA1 into stress granules and drive the formation of cytoplasmic inclusions in animal models that recapitulate the human pathology. Thus, dysregulated polymerization caused by a potent mutant steric zipper motif in a PrLD can initiate degenerative disease. Related proteins with PrLDs should therefore be considered candidates for initiating and perhaps propagating proteinopathies of muscle, brain, motor neuron and bone.
••
Ashley Beecham, Nikolaos A. Patsopoulos, Dionysia K. Xifara, and 203 more authors (73 institutions)
TL;DR: This study enhances the catalog of multiple sclerosis risk variants and illustrates the value of fine mapping in the resolution of GWAS signals.
Abstract: Using the ImmunoChip custom genotyping array, we analyzed 14,498 subjects with multiple sclerosis and 24,091 healthy controls for 161,311 autosomal variants and identified 135 potentially associated regions (P < 1.0 × 10⁻⁴). In a replication phase, we combined these data with previous genome-wide association study (GWAS) data from an independent 14,802 subjects with multiple sclerosis and 26,703 healthy controls. In these 80,094 individuals of European ancestry, we identified 48 new susceptibility variants (P < 5.0 × 10⁻⁸), 3 of which we found after conditioning on previously identified variants. Thus, there are now 110 established multiple sclerosis risk variants at 103 discrete loci outside of the major histocompatibility complex. With high-resolution Bayesian fine mapping, we identified five regions where one variant accounted for more than 50% of the posterior probability of association. This study enhances the catalog of multiple sclerosis risk variants and illustrates the value of fine mapping in the resolution of GWAS signals.
••
TL;DR: The Global Fire Emissions Database (GFED4) as discussed by the authors provides global monthly burned area at 0.25° spatial resolution from mid-1995 through the present and daily burned area for the time series extending back to August 2000.
Abstract: We describe the fourth generation of the Global Fire Emissions Database (GFED4) burned area data set, which provides global monthly burned area at 0.25° spatial resolution from mid-1995 through the present and daily burned area for the time series extending back to August 2000. We produced the full data set by combining 500 m MODIS burned area maps with active fire data from the Tropical Rainfall Measuring Mission (TRMM) Visible and Infrared Scanner (VIRS) and the Along-Track Scanning Radiometer (ATSR) family of sensors. We found that the global annual area burned for the years 1997 through 2011 varied from 301 to 377 Mha, with an average of 348 Mha. We assessed the interannual variability and trends in burned area on the basis of a region-specific definition of fire years. With respect to trends, we found a gradual decrease of 1.7 Mha yr⁻¹ (−1.4% yr⁻¹) in Northern Hemisphere Africa since 2000, a gradual increase of 2.3 Mha yr⁻¹ (+1.8% yr⁻¹) in Southern Hemisphere Africa also since 2000, a slight increase of 0.2 Mha yr⁻¹ (+2.5% yr⁻¹) in Southeast Asia since 1997, and a rapid decrease of approximately 5.5 Mha yr⁻¹ (−10.7% yr⁻¹) from 2001 through 2011 in Australia, followed by a major upsurge in 2011 that exceeded the annual area burned in at least the previous 14 years. The net trend in global burned area from 2000 to 2012 was a modest decrease of 4.3 Mha yr⁻¹ (−1.2% yr⁻¹). We also performed a spectral analysis of the daily burned area time series and found no vestiges of the 16 day MODIS repeat cycle.
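The paired absolute and relative trends quoted above imply baseline burned areas, which gives a quick internal consistency check. A back-of-envelope sketch (the rates are quoted to limited precision, so treat the results as rough):

```python
# Consistency check on the GFED4 trends: an absolute trend (Mha/yr) divided
# by the matching relative trend (fraction/yr) recovers the implied baseline
# burned area for each region. Rates are quoted to limited precision.

implied_baseline_nh_africa = 1.7 / 0.014   # Mha
implied_baseline_sh_africa = 2.3 / 0.018   # Mha
implied_baseline_global = 4.3 / 0.012      # Mha

print(round(implied_baseline_global))  # ~358 Mha, close to the 348 Mha
                                       # global average quoted in the abstract
```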
••
TL;DR: A meta-analysis of 9 genome-wide association studies, including 10,052 breast cancer cases and 12,575 controls of European ancestry, and identified 29,807 SNPs for further genotyping suggests that more than 1,000 additional loci are involved in breast cancer susceptibility.
Abstract: Breast cancer is the most common cancer among women. Common variants at 27 loci have been identified as associated with susceptibility to breast cancer, and these account for ∼9% of the familial risk of the disease. We report here a meta-analysis of 9 genome-wide association studies, including 10,052 breast cancer cases and 12,575 controls of European ancestry, from which we selected 29,807 SNPs for further genotyping. These SNPs were genotyped in 45,290 cases and 41,880 controls of European ancestry from 41 studies in the Breast Cancer Association Consortium (BCAC). The SNPs were genotyped as part of a collaborative genotyping experiment involving four consortia (Collaborative Oncological Gene-environment Study, COGS) and used a custom Illumina iSelect genotyping array, iCOGS, comprising more than 200,000 SNPs. We identified SNPs at 41 new breast cancer susceptibility loci at genome-wide significance (P < 5 × 10⁻⁸). Further analyses suggest that more than 1,000 additional loci are involved in breast cancer susceptibility.
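The genome-wide significance threshold quoted above (P < 5 × 10⁻⁸) is conventionally derived as a Bonferroni correction of a 0.05 family-wise error rate for roughly one million independent common variants:

```python
# Conventional derivation of the GWAS genome-wide significance threshold:
# a Bonferroni correction of a 0.05 family-wise error rate for roughly
# one million effectively independent common variants.

ALPHA = 0.05
EFFECTIVE_TESTS = 1_000_000

threshold = ALPHA / EFFECTIVE_TESTS
print(threshold)  # 5e-08
```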
••
TL;DR: Detailed glaciological estimates of ice-shelf melting around the entire continent of Antarctica show that basal melting accounts for as much mass loss as does calving, making ice-shelf melting the largest ablation process in Antarctica.
Abstract: We compare the volume flux divergence of Antarctic ice shelves in 2007 and 2008 with 1979 to 2010 surface accumulation and 2003 to 2008 thinning to determine their rates of melting and mass balance. Basal melt of 1325 ± 235 gigatons per year (Gt/year) exceeds a calving flux of 1089 ± 139 Gt/year, making ice-shelf melting the largest ablation process in Antarctica. The giant cold-cavity Ross, Filchner, and Ronne ice shelves covering two-thirds of the total ice-shelf area account for only 15% of net melting. Half of the meltwater comes from 10 small, warm-cavity Southeast Pacific ice shelves occupying 8% of the area. A similar high melt/area ratio is found for six East Antarctic ice shelves, implying undocumented strong ocean thermal forcing on their deep grounding lines.
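The abstract's mass-budget claim can be checked with simple arithmetic. A sketch (adding the quoted uncertainties in quadrature assumes the two terms are independent, which is a simplification):

```python
import math

# Mass-budget arithmetic from the abstract (all values in Gt/yr).

basal_melt, basal_err = 1325.0, 235.0
calving, calving_err = 1089.0, 139.0

total_ablation = basal_melt + calving
basal_fraction = basal_melt / total_ablation
combined_err = math.hypot(basal_err, calving_err)  # quadrature; assumes independence

print(round(basal_fraction, 2))  # 0.55: basal melt slightly exceeds calving
print(round(combined_err))       # ~273 Gt/yr on the 2414 Gt/yr total
```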
••
TL;DR: It is demonstrated here that elongation itself, without exogenous cytokines, leads to the expression of M2 phenotype markers and reduces the secretion of inflammatory cytokines, suggesting an important role for cell shape in regulating macrophage function.
Abstract: Phenotypic polarization of macrophages is regulated by a milieu of cues in the local tissue microenvironment. Although much is known about how soluble factors influence macrophage polarization, relatively little is known about how physical cues present in the extracellular environment might modulate proinflammatory (M1) vs. prohealing (M2) activation. Specifically, the role of cell shape has not been explored, even though it has been observed that macrophages adopt different geometries in vivo. We and others observed that macrophages polarized toward different phenotypes in vitro exhibit dramatic changes in cell shape: M2 cells exhibit an elongated shape compared with M1 cells. Using a micropatterning approach to control macrophage cell shape directly, we demonstrate here that elongation itself, without exogenous cytokines, leads to the expression of M2 phenotype markers and reduces the secretion of inflammatory cytokines. Moreover, elongation enhances the effects of M2-inducing cytokines IL-4 and IL-13 and protects cells from M1-inducing stimuli LPS and IFN-γ. In addition shape- but not cytokine-induced polarization is abrogated when actin and actin/myosin contractility are inhibited by pharmacological agents, suggesting a role for the cytoskeleton in the control of macrophage polarization by cell geometry. Our studies demonstrate that alterations in cell shape associated with changes in ECM architecture may provide integral cues to modulate macrophage phenotype polarization.
••
Max Planck Society1, Centre national de la recherche scientifique2, University of Maryland, College Park3, University of California, Irvine4, Ludwig Maximilian University of Munich5, University of California, Berkeley6, ASTRON7, Grumman Aircraft Corporation8, University of California, Los Angeles9, Tel Aviv University10, University of Arizona11
TL;DR: In this paper, the IRAM Plateau de Bure high-z blue sequence CO 3-2 survey of the molecular gas properties in massive, main-sequence star-forming galaxies (SFGs) near the cosmic star formation peak is presented.
Abstract: We present PHIBSS, the IRAM Plateau de Bure high-z blue sequence CO 3-2 survey of the molecular gas properties in massive, main-sequence star-forming galaxies (SFGs) near the cosmic star formation peak. PHIBSS provides 52 CO detections in two redshift slices at z ~ 1.2 and 2.2, with log(M *(M ☉)) ≥ 10.4 and log(SFR(M ☉/yr)) ≥ 1.5. Including a correction for the incomplete coverage of the M* -SFR plane, and adopting a "Galactic" value for the CO-H2 conversion factor, we infer average gas fractions of ~0.33 at z ~ 1.2 and ~0.47 at z ~ 2.2. Gas fractions drop with stellar mass, in agreement with cosmological simulations including strong star formation feedback. Most of the z ~ 1-3 SFGs are rotationally supported turbulent disks. The sizes of CO and UV/optical emission are comparable. The molecular-gas-star-formation relation for the z = 1-3 SFGs is near-linear, with a ~0.7 Gyr gas depletion timescale; changes in depletion time are only a secondary effect. Since this timescale is much less than the Hubble time in all SFGs between z ~ 0 and 2, fresh gas must be supplied with a fairly high duty cycle over several billion years. At given z and M *, gas fractions correlate strongly with the specific star formation rate (sSFR). The variation of sSFR between z ~ 0 and 3 is mainly controlled by the fraction of baryonic mass that resides in cold gas.
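The two derived quantities the survey reports, gas fraction and depletion time, follow from simple definitions: f_gas = M_gas / (M_gas + M_*) and t_dep = M_gas / SFR. A sketch using the survey's quoted selection limits and the z ~ 1.2 average gas fraction (the specific galaxy values are illustrative, not PHIBSS measurements; the ~0.7 Gyr in the abstract is a sample average):

```python
# Survey lower limits from the abstract, used as an example galaxy.
M_star = 10**10.4   # stellar mass in solar masses
SFR = 10**1.5       # star formation rate in M_sun/yr
f_gas = 0.33        # average gas fraction quoted at z ~ 1.2

# Invert the gas-fraction definition to get the implied gas mass,
# then form the depletion timescale t_dep = M_gas / SFR.
M_gas = f_gas / (1.0 - f_gas) * M_star
t_dep_gyr = M_gas / SFR / 1e9
print(f"M_gas ~ {M_gas:.2e} M_sun, t_dep ~ {t_dep_gyr:.2f} Gyr")
```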
••
TL;DR: The global niche models suggest that oceanic microbial communities will experience complex changes as a result of projected future climate conditions, and these changes may have large impacts on ocean ecosystems and biogeochemical cycles.
Abstract: The Cyanobacteria Prochlorococcus and Synechococcus account for a substantial fraction of marine primary production. Here, we present quantitative niche models for these lineages that assess present and future global abundances and distributions. These niche models are the result of neural network, nonparametric, and parametric analyses, and they rely on >35,000 discrete observations from all major ocean regions. The models assess cell abundance based on temperature and photosynthetically active radiation, but the individual responses to these environmental variables differ for each lineage. The models estimate global biogeographic patterns and seasonal variability of cell abundance, with maxima in the warm oligotrophic gyres of the Indian and the western Pacific Oceans and minima at higher latitudes. The annual mean global abundances of Prochlorococcus and Synechococcus are 2.9 ± 0.1 × 10^27 and 7.0 ± 0.3 × 10^26 cells, respectively. Using projections of sea surface temperature as a result of increased concentration of greenhouse gases at the end of the 21st century, our niche models projected increases in cell numbers of 29% and 14% for Prochlorococcus and Synechococcus, respectively. The changes are geographically uneven but include an increase in area. Thus, our global niche models suggest that oceanic microbial communities will experience complex changes as a result of projected future climate conditions. Because of the high abundances and contributions to primary production of Prochlorococcus and Synechococcus, these changes may have large impacts on ocean ecosystems and biogeochemical cycles.
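A toy parametric niche model in the spirit of the abstract: abundance as a smooth response to temperature and photosynthetically active radiation. The functional form (Gaussian thermal niche, saturating light response) and every parameter below are invented for illustration; they are not the paper's fitted model.

```python
import math

def abundance(temp_c, par, t_opt=26.0, t_width=6.0, par_half=50.0):
    """Hypothetical cell abundance (arbitrary units) vs. environment."""
    thermal = math.exp(-((temp_c - t_opt) / t_width) ** 2)  # Gaussian thermal niche
    light = par / (par + par_half)                          # saturating light response
    return 1e5 * thermal * light

warm_gyre = abundance(26.0, 200.0)   # warm, well-lit gyre conditions
high_lat = abundance(8.0, 100.0)     # cold, dimmer high-latitude conditions
print(f"{warm_gyre:.0f} vs {high_lat:.0f}")
```

Even this crude sketch reproduces the qualitative pattern the models estimate: maxima in warm oligotrophic gyres and minima at higher latitudes.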
••
TL;DR: A general, flexible mixture model that jointly captures spatial relations between part locations and co-occurrence Relations between part mixtures, augmenting standard pictorial structure models that encode just spatial relations.
Abstract: We describe a method for articulated human detection and human pose estimation in static images based on a new representation of deformable part models. Rather than modeling articulation using a family of warped (rotated and foreshortened) templates, we use a mixture of small, nonoriented parts. We describe a general, flexible mixture model that jointly captures spatial relations between part locations and co-occurrence relations between part mixtures, augmenting standard pictorial structure models that encode just spatial relations. Our models have several notable properties: 1) They efficiently model articulation by sharing computation across similar warps, 2) they efficiently model an exponentially large set of global mixtures through composition of local mixtures, and 3) they capture the dependency of global geometry on local appearance (parts look different at different locations). When relations are tree structured, our models can be efficiently optimized with dynamic programming. We learn all parameters, including local appearances, spatial relations, and co-occurrence relations (which encode local rigidity) with a structured SVM solver. Because our model is efficient enough to be used as a detector that searches over scales and image locations, we introduce novel criteria for evaluating pose estimation and human detection, both separately and jointly. We show that currently used evaluation criteria may conflate these two issues. Unlike most previous approaches, which model limbs with rigid, articulated templates trained independently of each other, we present an extensive diagnostic evaluation suggesting that flexible structure and joint training are crucial for strong performance. We present experimental results on standard benchmarks that suggest our approach is the state-of-the-art system for pose estimation, improving past work on the challenging Parse and Buffy datasets while being orders of magnitude faster.
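The tree-structured dynamic program mentioned above is standard max-sum message passing on the part tree. A minimal two-part sketch (torso root, head leaf, four candidate locations each): every score below is hypothetical, chosen only to make the mechanics concrete, not taken from the paper.

```python
locations = [0, 1, 2, 3]                  # candidate positions per part
unary = {                                  # appearance score per (part, location)
    "torso": [1.0, 2.0, 0.5, 0.0],
    "head":  [0.0, 1.5, 2.5, 0.2],
}

def spring(li, lj, rest=1):
    """Quadratic deformation cost; favors head one step from torso."""
    return -((lj - li) - rest) ** 2

# Leaf-to-root message: for each torso location, the best head
# placement (appearance plus deformation).
msg = [max(unary["head"][lj] + spring(li, lj) for lj in locations)
       for li in locations]

# Root combines its own appearance term with the incoming message;
# the max over root locations is the best configuration score.
root_scores = [unary["torso"][li] + msg[li] for li in locations]
best = max(root_scores)
print("best configuration score:", best)
```

With more parts this generalizes to one upward message per tree edge, which is why inference stays linear in the number of parts times quadratic (or less, with distance transforms) in locations.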
••
National Institutes of Health1, Center for Drug Evaluation and Research2, Silver Spring Networks3, Johns Hopkins University4, Carolinas Medical Center5, Cornell University6, National Development and Research Institutes7, University of Maryland, Baltimore8, Veterans Health Administration9, University of California, Irvine10
TL;DR: RNA sequencing in primary human hepatocytes activated with synthetic double-stranded RNA to mimic HCV infection provides new insights into the genetic regulation of HCV clearance and its clinical management.
Abstract: Chronic infection with hepatitis C virus (HCV) is a common cause of liver cirrhosis and cancer. We performed RNA sequencing in primary human hepatocytes activated with synthetic double-stranded RNA to mimic HCV infection. Upstream of IFNL3 (IL28B) on chromosome 19q13.13, we discovered a new transiently induced region that harbors a dinucleotide variant ss469415590 (TT or ΔG), which is in high linkage disequilibrium with rs12979860, a genetic marker strongly associated with HCV clearance. ss469415590[ΔG] is a frameshift variant that creates a novel gene, designated IFNL4, encoding the interferon-λ4 protein (IFNL4), which is moderately similar to IFNL3. Compared to rs12979860, ss469415590 is more strongly associated with HCV clearance in individuals of African ancestry, although it provides comparable information in Europeans and Asians. Transient overexpression of IFNL4 in a hepatoma cell line induced STAT1 and STAT2 phosphorylation and the expression of interferon-stimulated genes. Our findings provide new insights into the genetic regulation of HCV clearance and its clinical management.
••
TL;DR: Uremia profoundly alters the composition of the gut microbiome; the biological impact of this phenomenon is unknown and awaits further investigation.
••
University of Groningen1, Showa University2, University of Chicago3, AbbVie4, University of Glasgow5, University of Copenhagen6, Mario Negri Institute for Pharmacological Research7, University of Texas Southwestern Medical Center8, University of California, Irvine9, University of Würzburg10, Stanford University11
TL;DR: Among patients with type 2 diabetes mellitus and stage 4 chronic kidney disease, bardoxolone methyl did not reduce the risk of end-stage renal disease (ESRD) or death from cardiovascular causes and was terminated on the recommendation of the independent data and safety monitoring committee.
Abstract: BACKGROUND: Although inhibitors of the renin-angiotensin-aldosterone system can slow the progression of diabetic kidney disease, the residual risk is high. Whether nuclear factor (erythroid-derived 2)-related factor 2 activators further reduce this risk is unknown. METHODS: We randomly assigned 2185 patients with type 2 diabetes mellitus and stage 4 chronic kidney disease (estimated glomerular filtration rate [GFR], 15 to <30 ml per minute per 1.73 m(2) of body-surface area) to bardoxolone methyl, at a daily dose of 20 mg, or placebo. The primary composite outcome was end-stage renal disease (ESRD) or death from cardiovascular causes. RESULTS: The sponsor and the steering committee terminated the trial on the recommendation of the independent data and safety monitoring committee; the median follow-up was 9 months. A total of 69 of 1088 patients (6%) randomly assigned to bardoxolone methyl and 69 of 1097 (6%) randomly assigned to placebo had a primary composite outcome (hazard ratio in the bardoxolone methyl group vs. the placebo group, 0.98; 95% confidence interval [CI], 0.70 to 1.37; P=0.92). In the bardoxolone methyl group, ESRD developed in 43 patients, and 27 patients died from cardiovascular causes; in the placebo group, ESRD developed in 51 patients, and 19 patients died from cardiovascular causes. A total of 96 patients in the bardoxolone methyl group were hospitalized for heart failure or died from heart failure, as compared with 55 in the placebo group (hazard ratio, 1.83; 95% CI, 1.32 to 2.55; P<0.001). Estimated GFR, blood pressure, and the urinary albumin-to-creatinine ratio increased significantly and body weight decreased significantly in the bardoxolone methyl group, as compared with the placebo group. CONCLUSIONS: Among patients with type 2 diabetes mellitus and stage 4 chronic kidney disease, bardoxolone methyl did not reduce the risk of ESRD or death from cardiovascular causes.
A higher rate of cardiovascular events with bardoxolone methyl than with placebo prompted termination of the trial. (Funded by Reata Pharmaceuticals; BEACON ClinicalTrials.gov number, NCT01351675.).
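The crude event proportions can be checked directly from the counts in the abstract. Note the published hazard ratios come from a time-to-event analysis, so this simple risk ratio only approximates them:

```python
# Event counts quoted in the abstract.
primary_bardo = 69 / 1088     # primary composite outcome, bardoxolone arm
primary_placebo = 69 / 1097   # primary composite outcome, placebo arm
hf_bardo = 96 / 1088          # heart-failure hospitalization or death
hf_placebo = 55 / 1097

# Crude risk ratio for the heart-failure endpoint; lands near the
# reported time-to-event hazard ratio of 1.83 but is not identical.
crude_rr_hf = hf_bardo / hf_placebo
print(f"primary: {primary_bardo:.1%} vs {primary_placebo:.1%}; "
      f"crude heart-failure risk ratio ~ {crude_rr_hf:.2f}")
```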
••
Georgia State University1, Niels Bohr Institute2, Ohio State University3, University of California, Irvine4, University of Arizona5, California Polytechnic State University6, University of California, Riverside7, University of California, Berkeley8, Princeton University9, University of California, Los Angeles10, California Institute of Technology11, University of California, Santa Barbara12, Seoul National University13
TL;DR: In this article, the authors present an updated and revised analysis of the relationship between the Hβ broad-line region (BLR) radius and the luminosity of the active galactic nucleus (AGN).
Abstract: We present an updated and revised analysis of the relationship between the Hβ broad-line region (BLR) radius and the luminosity of the active galactic nucleus (AGN). Specifically, we have carried out two-dimensional surface brightness decompositions of the host galaxies of nine new AGNs imaged with the Hubble Space Telescope Wide Field Camera 3. The surface brightness decompositions allow us to create "AGN-free" images of the galaxies, from which we measure the starlight contribution to the optical luminosity measured through the ground-based spectroscopic aperture. We also incorporate 20 new reverberation-mapping measurements of the Hβ time lag, which is assumed to yield the average Hβ BLR radius. The final sample includes 41 AGNs covering four orders of magnitude in luminosity. The additions and updates incorporated here primarily affect the low-luminosity end of the R_BLR-L relationship. The best fit to the relationship using a Bayesian analysis finds a slope of α = 0.533^(+0.035)_(−0.033), consistent with previous work and with simple photoionization arguments. Only two AGNs appear to be outliers from the relationship, but both of them have monitoring light curves that raise doubt regarding the accuracy of their reported time lags. The scatter around the relationship is found to be 0.19 ± 0.02 dex, but would be decreased to 0.13 dex by the removal of these two suspect measurements. A large fraction of the remaining scatter in the relationship is likely due to the inaccurate distances to the AGN host galaxies. Our results help support the possibility that the R_BLR-L relationship could potentially be used to turn the BLRs of AGNs into standardizable candles. This would allow the cosmological expansion of the universe to be probed by a separate population of objects, and over a larger range of redshifts.
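Two pieces of arithmetic underlie the abstract: the reverberation-mapped BLR radius is the light-crossing distance for the measured time lag, R = c·τ, and the radius-luminosity relation is a power law with slope α ≈ 0.533. A sketch with an illustrative (not measured) 20-day lag:

```python
# Physical constants for the light-crossing conversion.
C_LIGHT_KM_S = 299792.458
SECONDS_PER_DAY = 86400.0
KM_PER_LIGHT_DAY = C_LIGHT_KM_S * SECONDS_PER_DAY

tau_days = 20.0                        # hypothetical H-beta lag
radius_km = tau_days * KM_PER_LIGHT_DAY  # R = c * tau; 20 light-days in km

# The R_BLR-L relation: R proportional to L**alpha with the fitted
# slope, so doubling the AGN luminosity scales the radius by 2**alpha.
alpha = 0.533
scale = 2.0 ** alpha
print(f"R = {radius_km:.3e} km; doubling L scales R by {scale:.3f}")
```

A slope of 0.5 would follow from the simplest photoionization argument (constant ionizing flux at the BLR), which is why the fitted value is described as consistent with it.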
••
TL;DR: These two neutrino-induced events could be a first indication of an astrophysical neutrino flux; the moderate significance, however, does not permit a definitive conclusion at this time.
Abstract: We report on the observation of two neutrino-induced events which have an estimated deposited energy in the IceCube detector of 1.04 ± 0.16 and 1.14 ± 0.17 PeV, respectively, the highest neutrino energies observed so far. These events are consistent with fully contained particle showers induced by neutral-current ν_e,μ,τ (ν̄_e,μ,τ) or charged-current ν_e (ν̄_e) interactions within the IceCube detector. The events were discovered in a search for ultrahigh energy neutrinos using data corresponding to 615.9 days of effective live time. The expected number of atmospheric background events is 0.082 ± 0.004 (stat) +0.041/−0.057 (syst). The probability of observing two or more candidate events under the atmospheric background-only hypothesis is 2.9 × 10^(−3) (2.8σ), taking into account the uncertainty on the expected number of background events. These two events could be a first indication of an astrophysical neutrino flux; the moderate significance, however, does not permit a definitive conclusion at this time.
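The quoted p-value is close to a plain Poisson tail probability: with an expected background of λ ≈ 0.082 events, the chance of seeing two or more is 1 − P(0) − P(1). The published 2.9 × 10^(−3) additionally folds in the background uncertainty, so this simple estimate lands slightly higher:

```python
import math

lam = 0.082  # expected atmospheric background events (central value)

# Poisson tail: P(N >= 2) = 1 - P(0) - P(1) = 1 - exp(-lam)*(1 + lam).
p_ge_2 = 1.0 - math.exp(-lam) * (1.0 + lam)
print(f"P(>=2 | background only) ~ {p_ge_2:.2e}")
```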
••
University of Pittsburgh1, University of California, Irvine2, University of California, Berkeley3, University of California, Santa Cruz4, Alfred P. Sloan Foundation5, University of California, San Diego6, Harvard University7, Max Planck Society8, Lawrence Berkeley National Laboratory9, University of Arizona10, University of Kentucky11, California Institute of Technology12, Goddard Space Flight Center13, Space Telescope Science Institute14, University of Southern California15, University of Washington16, Academia Sinica17, Sonoma State University18, Liverpool John Moores University19
TL;DR: The DEEP2 Galaxy Redshift Survey (DEEP2) as discussed by the authors is the largest high-precision redshift survey of galaxies at z ~ 1 completed to date, covering an area of 2.8 deg^2 divided into four separate fields observed to a limiting apparent magnitude of R_(AB) = 24.1.
Abstract: We describe the design and data analysis of the DEEP2 Galaxy Redshift Survey, the densest and largest high-precision redshift survey of galaxies at z ~ 1 completed to date. The survey was designed to conduct a comprehensive census of massive galaxies, their properties, environments, and large-scale structure down to absolute magnitude M_B = −20 at z ~ 1 via ~90 nights of observation on the Keck telescope. The survey covers an area of 2.8 deg^2 divided into four separate fields observed to a limiting apparent magnitude of R_(AB) = 24.1. Objects with z ≲ 0.7 are readily identifiable using BRI photometry and rejected in three of the four DEEP2 fields, allowing galaxies with z > 0.7 to be targeted ~2.5 times more efficiently than in a purely magnitude-limited sample. Approximately 60% of eligible targets are chosen for spectroscopy, yielding nearly 53,000 spectra and more than 38,000 reliable redshift measurements. Most of the targets that fail to yield secure redshifts are blue objects that lie beyond z ~ 1.45, where the [O ii] 3727 Å doublet lies in the infrared. The DEIMOS 1200 line mm^(−1) grating used for the survey delivers high spectral resolution (R ~ 6000), accurate and secure redshifts, and unique internal kinematic information. Extensive ancillary data are available in the DEEP2 fields, particularly in the Extended Groth Strip, which has evolved into one of the richest multiwavelength regions on the sky. This paper is intended as a handbook for users of the DEEP2 Data Release 4, which includes all DEEP2 spectra and redshifts, as well as for the DEEP2 DEIMOS data reduction pipelines.
Extensive details are provided on object selection, mask design, biases in target selection and redshift measurements, the spec2d two-dimensional data-reduction pipeline, the spec1d automated redshift pipeline, and the zspec visual redshift verification process, along with examples of instrumental signatures or other artifacts that in some cases remain after data reduction. Redshift errors and catastrophic failure rates are assessed through more than 2000 objects with duplicate observations. Sky subtraction is essentially photon-limited even under bright OH sky lines; we describe the strategies that permitted this, based on high image stability, accurate wavelength solutions, and powerful B-spline modeling methods. We also investigate the impact of targets that appear to be single objects in ground-based targeting imaging but prove to be composite in Hubble Space Telescope data; they constitute several percent of targets at z ~ 1, approaching ~5%–10% at z > 1.5. Summary data are given that demonstrate the superiority of DEEP2 over other deep high-precision redshift surveys at z ~ 1 in terms of redshift accuracy, sample number density, and amount of spectral information. We also provide an overview of the scientific highlights of the DEEP2 survey thus far.
••
TL;DR: Functional studies indicate that this mutation leads to a simultaneous decrease in cytochrome oxidation, increase in reactive oxygen, and increase in reactive nitrogen, which suggests that mitochondrial DNA mutations resulting in increased reactive oxygen and reactive nitrogen generation may be involved in prostate cancer biology.
Abstract: Mitochondrial DNA (mtDNA) mutations have been found in many cancers but the physiological derangements caused by such mutations have remained elusive. Prostate cancer is associated with both inherited and somatic mutations in the cytochrome c oxidase (COI) gene. We present a prostate cancer patient-derived rare heteroplasmic mutation of this gene, part of mitochondrial respiratory complex IV. Functional studies indicate that this mutation leads to the simultaneous decrease in cytochrome oxidation, increase in reactive oxygen, and increased reactive nitrogen. These data suggest that mitochondrial DNA mutations resulting in increased reactive oxygen and reactive nitrogen generation may be involved in prostate cancer biology.
••
TL;DR: In this paper, an Earth System Model (ESM) that explicitly represents microbial soil carbon cycling mechanisms is used to simulate carbon pools that closely match observations and produce a much wider range of soil carbon responses to climate change over the twenty-first century.
Abstract: Earth system models (ESMs) generally have crude representations of the response of soil carbon to changing climate. Now an ESM that explicitly represents microbial soil carbon cycling mechanisms is able to simulate carbon pools that closely match observations. Projections from this model produce a much wider range of soil carbon responses to climate change over the twenty-first century than conventional ESMs.
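A minimal sketch of the general kind of explicit-microbe scheme the paper advocates, in which decomposition follows Michaelis-Menten kinetics in microbial biomass B rather than a fixed first-order turnover rate. All parameter values here are illustrative, not the published model's:

```python
# Governing equations of the toy two-pool scheme:
#   dC/dt = INPUT + DEATH*B - V_MAX*B*C/(K_M + C)
#   dB/dt = CUE*V_MAX*B*C/(K_M + C) - DEATH*B
V_MAX, K_M = 0.8, 500.0   # max decomposition rate (1/yr), half-saturation (gC/m^2)
CUE = 0.4                 # microbial carbon-use efficiency
DEATH = 0.2               # microbial turnover (1/yr)
INPUT = 100.0             # litter input (gC/m^2/yr)

# Steady state: dB/dt = 0 gives decomp = DEATH*B/CUE, and dC/dt = 0
# gives INPUT + DEATH*B = decomp; solving for the equilibrium pools:
B_eq = INPUT * CUE / (DEATH * (1.0 - CUE))
decomp_eq = DEATH * B_eq / CUE
C_eq = K_M * decomp_eq / (V_MAX * B_eq - decomp_eq)
print(f"equilibrium: C ~ {C_eq:.0f} gC/m^2, B ~ {B_eq:.0f} gC/m^2")
```

Unlike first-order models, equilibrium soil carbon here depends on microbial traits such as carbon-use efficiency and turnover, which is one reason explicit-microbe models project a wider range of responses.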
••
TL;DR: In this article, the effects of self-interacting dark matter (SIDM) on the density profiles and substructure counts of dark matter halos from the scales of spiral galaxies to galaxy clusters are studied.
Abstract: We use cosmological simulations to study the effects of self-interacting dark matter (SIDM) on the density profiles and substructure counts of dark matter halos from the scales of spiral galaxies to galaxy clusters, focusing explicitly on models with cross sections over dark matter particle mass σ/m = 1 and 0.1 cm^2/g. Our simulations rely on a new SIDM N-body algorithm that is derived self-consistently from the Boltzmann equation and that reproduces analytic expectations in controlled numerical experiments. We find that well-resolved SIDM halos have constant-density cores, with significantly lower central densities than their CDM counterparts. In contrast, the subhalo content of SIDM halos is only modestly reduced compared to CDM, with the suppression greatest for large hosts and small halo-centric distances. Moreover, the large-scale clustering and halo circular velocity functions in SIDM are effectively identical to CDM, meaning that all of the large-scale successes of CDM are equally well matched by SIDM. From our largest cross section runs we are able to extract scaling relations for core sizes and central densities over a range of halo sizes and find a strong correlation between the core radius of an SIDM halo and the NFW scale radius of its CDM counterpart. We construct a simple analytic model, based on CDM scaling relations, that captures all aspects of the scaling relations for SIDM halos. Our results show that halo core densities in σ/m = 1 cm^2/g models are too low to match observations of galaxy clusters, low surface brightness spirals (LSBs), and dwarf spheroidal galaxies. However, SIDM with σ/m ≃ 0.1 cm^2/g appears capable of reproducing reported core sizes and central densities of dwarfs, LSBs, and galaxy clusters without the need for velocity dependence. Higher resolution simulations over a wider range of masses will be required to confirm this expectation.
We discuss constraints arising from the Bullet cluster observations, measurements of dark matter density on small scales, and subhalo survival requirements, and show that SIDM models with σ/m ≃ 0.1 cm^2/g ≃ 0.2 barn/GeV are consistent with all observational constraints.
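The unit conversion quoted in the abstract (σ/m ≃ 0.1 cm^2/g ≃ 0.2 barn/GeV) can be checked directly, since 1 barn = 10^(−24) cm^2 and 1 GeV/c^2 ≈ 1.783 × 10^(−24) g:

```python
BARN_IN_CM2 = 1.0e-24    # definition of the barn
GEV_IN_G = 1.7827e-24    # mass of 1 GeV/c^2 in grams

sigma_over_m = 0.1       # cm^2 / g, the paper's smaller cross section

# Multiply by grams-per-GeV, divide by cm^2-per-barn to convert.
in_barn_per_gev = sigma_over_m * GEV_IN_G / BARN_IN_CM2
print(f"{sigma_over_m} cm^2/g = {in_barn_per_gev:.2f} barn/GeV")
```

The result, about 0.18 barn/GeV, rounds to the abstract's quoted 0.2 barn/GeV.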
••
TL;DR: In this paper, spectroscopic metallicities of individual stars in seven gas-rich dwarf irregular galaxies (dIrrs) were analyzed and it was shown that dIrrs obey the same mass-metallicity relation as the dwarf spheroidal (dSph) satellites of both the Milky Way and M31: Z_* ∝ M_*^(0.30±0.02).
Abstract: We present spectroscopic metallicities of individual stars in seven gas-rich dwarf irregular galaxies (dIrrs), and we show that dIrrs obey the same mass-metallicity relation as the dwarf spheroidal (dSph) satellites of both the Milky Way and M31: Z_* ∝ M_*^(0.30±0.02). The uniformity of the relation is in contradiction to previous estimates of metallicity based on photometry. This relationship is roughly continuous with the stellar mass-stellar metallicity relation for galaxies as massive as M_* = 10^(12) M_☉. Although the average metallicities of dwarf galaxies depend only on stellar mass, the shapes of their metallicity distributions depend on galaxy type. The metallicity distributions of dIrrs resemble simple, leaky box chemical evolution models, whereas dSphs require an additional parameter, such as gas accretion, to explain the shapes of their metallicity distributions. Furthermore, the metallicity distributions of the more luminous dSphs have sharp, metal-rich cut-offs that are consistent with the sudden truncation of star formation due to ram pressure stripping.
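The simple/leaky-box metallicity distribution function invoked above has a textbook exponential form: dN/dZ ∝ exp(−Z/p_eff), where p_eff is the effective yield (lowered by outflows in a leaky box). A sketch with an illustrative yield, plus the quoted mass-metallicity slope:

```python
import math

p_eff = 0.01  # effective yield; illustrative value only

def mdf(z):
    """Normalized exponential metallicity distribution over z >= 0."""
    return math.exp(-z / p_eff) / p_eff

# The distribution peaks at Z = 0 and declines monotonically, which
# is the shape the dIrr metallicity distributions resemble.
assert mdf(0.0) > mdf(0.01) > mdf(0.02)

# Mass-metallicity relation quoted above, Z* ∝ M*^0.30: two galaxies
# separated by 1 dex in stellar mass differ by 0.30 dex in Z*.
mass_diff_dex = 1.0
z_diff_dex = 0.30 * mass_diff_dex
print(f"Delta log Z* = {z_diff_dex:.2f} dex per dex in M*")
```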
••
TL;DR: Moral Foundations Theory (MFT), as discussed by the authors, was created to answer questions such as: Where does morality come from? Why are moral judgments often so similar across cultures, yet sometimes so variable? Is morality one thing, or many?
Abstract: Where does morality come from? Why are moral judgments often so similar across cultures, yet sometimes so variable? Is morality one thing, or many? Moral Foundations Theory (MFT) was created to answer these questions. In this chapter, we describe the origins, assumptions, and current conceptualization of the theory and detail the empirical findings that MFT has made possible, both within social psychology and beyond. Looking toward the future, we embrace several critiques of the theory and specify five criteria for determining what should be considered a foundation of human morality. Finally, we suggest a variety of future directions for MFT and moral psychology.