Showing papers by "University of Sussex" published in 2018
••
TL;DR: In this paper, the cosmological parameter results from the final full-mission Planck measurements of the CMB anisotropies are presented, showing good consistency with the standard spatially-flat 6-parameter $\Lambda$CDM cosmology having a power-law spectrum of adiabatic scalar perturbations, from polarization, temperature, and lensing, separately and in combination.
Abstract: We present cosmological parameter results from the final full-mission Planck measurements of the CMB anisotropies. We find good consistency with the standard spatially-flat 6-parameter $\Lambda$CDM cosmology having a power-law spectrum of adiabatic scalar perturbations (denoted "base $\Lambda$CDM" in this paper), from polarization, temperature, and lensing, separately and in combination. A combined analysis gives dark matter density $\Omega_c h^2 = 0.120\pm 0.001$, baryon density $\Omega_b h^2 = 0.0224\pm 0.0001$, scalar spectral index $n_s = 0.965\pm 0.004$, and optical depth $\tau = 0.054\pm 0.007$ (in this abstract we quote $68\,\%$ confidence regions on measured parameters and $95\,\%$ on upper limits). The angular acoustic scale is measured to $0.03\,\%$ precision, with $100\theta_*=1.0411\pm 0.0003$. These results are only weakly dependent on the cosmological model and remain stable, with somewhat increased errors, in many commonly considered extensions. Assuming the base-$\Lambda$CDM cosmology, the inferred late-Universe parameters are: Hubble constant $H_0 = (67.4\pm 0.5)$ km/s/Mpc; matter density parameter $\Omega_m = 0.315\pm 0.007$; and matter fluctuation amplitude $\sigma_8 = 0.811\pm 0.006$. We find no compelling evidence for extensions to the base-$\Lambda$CDM model. Combining with BAO we constrain the effective extra relativistic degrees of freedom to be $N_{\rm eff} = 2.99\pm 0.17$, and the neutrino mass is tightly constrained to $\sum m_\nu < 0.12$ eV. The CMB spectra continue to prefer higher lensing amplitudes than predicted in base-$\Lambda$CDM at over $2\,\sigma$, which pulls some parameters that affect the lensing amplitude away from the base-$\Lambda$CDM model; however, this is not supported by the lensing reconstruction or (in models that also change the background geometry) BAO data. (Abridged)
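As a quick sanity check on the quoted numbers, the matter density parameter should approximately equal the sum of the cold dark matter and baryon physical densities divided by $h^2$, with the small remainder contributed by massive neutrinos. A minimal sketch in Python (the input values are those quoted in the abstract; the check itself is an illustration, not part of the paper):

```python
# Planck 2018 base-LambdaCDM values quoted in the abstract
omega_c_h2 = 0.120   # cold dark matter density, Omega_c h^2
omega_b_h2 = 0.0224  # baryon density, Omega_b h^2
H0 = 67.4            # Hubble constant in km/s/Mpc
h = H0 / 100.0       # dimensionless Hubble parameter

# CDM + baryons alone give ~0.313; the quoted Omega_m = 0.315 also
# includes the small contribution from massive neutrinos.
omega_m_cb = (omega_c_h2 + omega_b_h2) / h**2
print(f"Omega_m (CDM + baryons) = {omega_m_cb:.3f}")  # Omega_m (CDM + baryons) = 0.313
```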
3,077 citations
••
University of Southern California1, Duke University2, Stockholm School of Economics3, Center for Open Science4, University of Virginia5, University of Amsterdam6, University of Pennsylvania7, University of North Carolina at Chapel Hill8, University of Regensburg9, California Institute of Technology10, Research Institute of Industrial Economics11, New York University12, Cardiff University13, Mathematica Policy Research14, Northwestern University15, Ohio State University16, University of Sussex17, Texas A&M University18, Royal Holloway, University of London19, University of Zurich20, University of Melbourne21, University of Wisconsin-Madison22, University of Michigan23, Stanford University24, Rutgers University25, Columbia University26, University of Washington27, University of Edinburgh28, National University of Singapore29, Utrecht University30, Arizona State University31, Princeton University32, University of California, Los Angeles33, Imperial College London34, University of Innsbruck35, Harvard University36, University of Chicago37, University of Pittsburgh38, University of Notre Dame39, University of California, Berkeley40, Johns Hopkins University41, University of Bristol42, University of New South Wales43, Dartmouth College44, Whitman College45, University of Puerto Rico46, University of Milan47, University of California, Irvine48, Paris Dauphine University49, University of British Columbia50, Ludwig Maximilian University of Munich51, Purdue University52, Washington University in St. Louis53, University of California, Davis54, Microsoft55
TL;DR: The default P-value threshold for statistical significance is proposed to be changed from 0.05 to 0.005 for claims of new discoveries, in order to reduce the rate of false positives among reported findings.
Abstract: We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.
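For intuition, lowering the two-sided threshold from 0.05 to 0.005 corresponds, under a normal approximation, to raising the required test statistic from about 1.96 to about 2.81 standard errors. A minimal sketch using only the Python standard library (generic statistics, not code from the paper):

```python
from statistics import NormalDist

def z_threshold(alpha: float) -> float:
    """Two-sided z critical value for significance level alpha."""
    return NormalDist().inv_cdf(1 - alpha / 2)

z_conventional = z_threshold(0.05)   # current default threshold
z_proposed = z_threshold(0.005)      # proposed stricter threshold

print(f"p < 0.05  requires |z| > {z_conventional:.3f}")  # |z| > 1.960
print(f"p < 0.005 requires |z| > {z_proposed:.3f}")      # |z| > 2.807
```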
1,586 citations
••
TL;DR: A dysprosium compound is reported that manifests magnetic hysteresis at temperatures up to 80 kelvin, which overcomes an essential barrier toward the development of nanomagnet devices that function at practical temperatures.
Abstract: Single-molecule magnets (SMMs) containing only one metal center may represent the lower size limit for molecule-based magnetic information storage materials. Their current drawback is that all SMMs require liquid-helium cooling to show magnetic memory effects. We now report a chemical strategy to access the dysprosium metallocene cation [(CpiPr5)Dy(Cp*)]+ (CpiPr5 = penta-iso-propylcyclopentadienyl, Cp* = pentamethylcyclopentadienyl), which displays magnetic hysteresis above liquid-nitrogen temperatures. An effective energy barrier to reversal of the magnetization of Ueff = 1,541 cm–1 is also measured. The magnetic blocking temperature of TB = 80 K for this cation overcomes an essential barrier towards the development of nanomagnet devices that function at practical temperatures.
1,198 citations
••
TL;DR: In this paper, the authors identified two established frames as coexisting and dominant in contemporary innovation policy discussions, and proposed a third frame oriented towards transformative change. They argued that all three frames are relevant for policymaking, but that exploring options for transformative innovation policy should be a priority.
733 citations
••
TL;DR: The authors hope that this Review will inspire more interesting, robust, multi-method, comparative, interdisciplinary and impactful research that will accelerate the contribution that energy social science can make to both theory and practice.
Abstract: A series of weaknesses in creativity, research design, and quality of writing continue to handicap energy social science. Many studies ask uninteresting research questions, make only marginal contributions, and lack innovative methods or application to theory. Many studies also have no explicit research design, lack rigor, or suffer from mangled structure and poor quality of writing. To help remedy these shortcomings, this Review offers suggestions for how to construct research questions; thoughtfully engage with concepts; state objectives; and appropriately select research methods. Then, the Review offers suggestions for enhancing theoretical, methodological, and empirical novelty. In terms of rigor, codes of practice are presented across seven method categories: experiments, literature reviews, data collection, data analysis, quantitative energy modeling, qualitative analysis, and case studies. We also recommend that researchers beware of hierarchies of evidence utilized in some disciplines, and that researchers place more emphasis on balance and appropriateness in research design. In terms of style, we offer tips regarding macro and microstructure and analysis, as well as coherent writing. Our hope is that this Review will inspire more interesting, robust, multi-method, comparative, interdisciplinary and impactful research that will accelerate the contribution that energy social science can make to both theory and practice.
670 citations
••
University College London1, International Institute for Applied Systems Analysis2, University of Reading3, University of London4, University of Sydney5, World Bank6, Cooperative Institute for Research in Environmental Sciences7, Umeå University8, Tsinghua University9, University of Geneva10, University of New England (United States)11, University of Birmingham12, Paris-Sorbonne University13, University of Washington14, Heidelberg University15, International Livestock Research Institute16, University of York17, Cayetano Heredia University18, University of Sussex19, Nelson Marlborough Institute of Technology20, University of North Texas21, Centre for Environment, Fisheries and Aquaculture Science22, University of Colorado Boulder23, University of Essex24, Iran University of Medical Sciences25, University of Exeter26, Imperial College London27, Atlantic Oceanographic and Meteorological Laboratory28
TL;DR: The Lancet Countdown tracks 41 indicators across five domains: climate change impacts, exposures, and vulnerability; adaptation, planning, and resilience for health; mitigation actions and health co-benefits; finance and economics; and public and political engagement.
582 citations
••
Institut national de la recherche scientifique1, University of Sussex2, Swinburne University of Technology3, University of Auckland4, Centre national de la recherche scientifique5, Georgia Institute of Technology6, University of Brescia7, National Physical Laboratory8, Tsinghua University9, Purdue University10, University of Electronic Science and Technology of China11
TL;DR: In this paper, a review of optical frequency combs with a broad spectrum is presented: sources whose line frequencies and phases do not vary and are completely determined by the physical parameters of the source.
543 citations
••
TL;DR: The DarkSide-20k detector, as discussed by the authors, is a direct WIMP search detector using a two-phase Liquid Argon Time Projection Chamber (LAr TPC) with an active (fiducial) mass of 23 t (20 t).
Abstract: Building on the successful experience in operating the DarkSide-50 detector, the DarkSide Collaboration is going to construct DarkSide-20k, a direct WIMP search detector using a two-phase Liquid Argon Time Projection Chamber (LAr TPC) with an active (fiducial) mass of 23 t (20 t). This paper describes a preliminary design for the experiment, in which the DarkSide-20k LAr TPC is deployed within a shield/veto with a spherical Liquid Scintillator Veto (LSV) inside a cylindrical Water Cherenkov Veto (WCV). This preliminary design provides a baseline for the experiment to achieve its physics goals, while further development work will lead to the final optimization of the detector parameters and an eventual technical design. Operation of DarkSide-50 demonstrated a major reduction in the dominant 39Ar background when using argon extracted from an underground source, before applying pulse shape analysis. Data from DarkSide-50, in combination with MC simulation and analytical modeling, show that a rejection factor for discrimination between electron and nuclear recoils of $>3 \times 10^{9}$ is achievable. This, along with the use of the veto system and utilizing silicon photomultipliers in the LAr TPC, are the keys to unlocking the path to large LAr TPC detector masses, while maintaining an experiment in which fewer than 0.1 events (other than $\nu$-induced nuclear recoils) are expected to occur within the WIMP search region during the planned exposure. DarkSide-20k will have ultra-low backgrounds that can be measured in situ, giving sensitivity to WIMP-nucleon cross sections of $1.2 \times 10^{-47}$ cm2 ($1.1 \times 10^{-46}$ cm2) for WIMPs of 1 TeV/c2 (10 TeV/c2) mass, to be achieved during a 5 yr run producing an exposure of 100 t yr free from any instrumental background.
534 citations
••
TL;DR: In this paper, the authors focus on the eco-innovation pathway towards a circular economy, and seek to consolidate available but fragmented findings on how "transformative innovation" can foster this transition while removing obstacles to sustainability.
513 citations
••
TL;DR: The first public data release of the Dark Energy Survey, DES DR1, is described in this paper, consisting of reduced single-epoch images, co-added images, co-added source catalogs, and associated products and services.
Abstract: We describe the first public data release of the Dark Energy Survey, DES DR1, consisting of reduced single-epoch images, co-added images, co-added source catalogs, and associated products and services assembled over the first 3 yr of DES science operations. DES DR1 is based on optical/near-infrared imaging from 345 distinct nights (2013 August to 2016 February) by the Dark Energy Camera mounted on the 4 m Blanco telescope at the Cerro Tololo Inter-American Observatory in Chile. We release data from the DES wide-area survey covering ~5000 deg2 of the southern Galactic cap in five broad photometric bands, grizY. DES DR1 has a median delivered point-spread function FWHM of g = 1.12, r = 0.96, i = 0.88, z = 0.84, and Y = 0.90 arcsec, a photometric precision of <1% in all bands, and an astrometric precision of 151 mas. The median co-added catalog depth for a 1.95 arcsec diameter aperture at signal-to-noise ratio (S/N) = 10 is g = 24.33, r = 24.08, i = 23.44, z = 22.69, and Y = 21.44 mag. DES DR1 includes nearly 400 million distinct astronomical objects detected in ~10,000 co-add tiles of size 0.534 deg2 produced from ~39,000 individual exposures. Benchmark galaxy and stellar samples contain ~310 million and ~80 million objects, respectively, following a basic object quality selection. These data are accessible through a range of interfaces, including query web clients, image cutout servers, Jupyter notebooks, and an interactive co-add image visualization tool. DES DR1 constitutes the largest photometric data set to date at the achieved depth and photometric precision.
506 citations
••
Richard A. Klein1, Michelangelo Vianello2, Fred Hasselman3, Byron G. Adams4, +187 more • Institutions (118)
TL;DR: The authors conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings, and found that very little heterogeneity was attributable to the order in which the tasks were performed or to whether the tasks were administered in lab versus online.
Abstract: We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings. Each protocol was administered to approximately half of 125 samples that comprised 15,305 participants from 36 countries and territories. Using the conventional criterion of statistical significance (p < .05), we found that 15 (54%) of the replications provided evidence of a statistically significant effect in the same direction as the original finding. With a strict significance criterion (p < .0001), 14 (50%) of the replications still provided such evidence, a reflection of the extremely high-powered design. Seven (25%) of the replications yielded effect sizes larger than the original ones, and 21 (75%) yielded effect sizes smaller than the original ones. The median comparable Cohen’s ds were 0.60 for the original findings and 0.15 for the replications. The effect sizes were small (< 0.20) in 16 of the replications (57%), and 9 effects (32%) were in the direction opposite the direction of the original effect. Across settings, the Q statistic indicated significant heterogeneity in 11 (39%) of the replication effects, and most of those were among the findings with the largest overall effect sizes; only 1 effect that was near zero in the aggregate showed significant heterogeneity according to this measure. Only 1 effect had a tau value greater than .20, an indication of moderate heterogeneity. Eight others had tau values near or slightly above .10, an indication of slight heterogeneity. Moderation tests indicated that very little heterogeneity was attributable to the order in which the tasks were performed or whether the tasks were administered in lab versus online. 
Exploratory comparisons revealed little heterogeneity between Western, educated, industrialized, rich, and democratic (WEIRD) cultures and less WEIRD cultures (i.e., cultures with relatively high and low WEIRDness scores, respectively). Cumulatively, variability in the observed effect sizes was attributable more to the effect being studied than to the sample or setting in which it was studied.
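The heterogeneity measures quoted above (Cochran's Q and tau) can be computed from per-sample effect estimates and their sampling variances. A minimal sketch with invented numbers (not data from the study), using the standard fixed-effect Q statistic and the DerSimonian-Laird estimator of the between-sample variance tau-squared:

```python
import math

def heterogeneity(effects, variances):
    """Cochran's Q and DerSimonian-Laird tau^2 for k effect estimates."""
    w = [1.0 / v for v in variances]                            # inverse-variance weights
    y_bar = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)  # weighted mean effect
    q = sum(wi * (yi - y_bar) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                           # truncated at zero
    return q, tau2

# Hypothetical effect sizes (Cohen's d) with equal sampling variances
q, tau2 = heterogeneity([0.1, 0.3, 0.5], [0.01, 0.01, 0.01])
print(f"Q = {q:.2f}, tau = {math.sqrt(tau2):.3f}")  # Q = 8.00, tau = 0.173
```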
••
University of Sussex1, INSEAD2, University of Virginia3, University of Padua4, University of Cologne5, University of Cincinnati6, University of Economics, Prague7, Hong Kong Polytechnic University8, University of Liverpool9, Stockholm School of Economics10, Linnaeus University11, University of Hong Kong12, University of California, Berkeley13, City University of New York14, New York University15, University of Manchester16, Westat17, Temple University18, Northwestern University19, University of Zurich20, University of Sheffield21, Stockholm University22, Ludwig Maximilian University of Munich23, University of Minnesota24, Xiamen University25, Oregon State University26, Universidade Federal de Santa Catarina27, University of Washington28, Queen Mary University of London29, University of Nottingham30, Cardiff University31, University of Maryland, College Park32, Brigham Young University33, Loyola University Maryland34, University of Toronto35, University of Giessen36, United States Military Academy37, State University of New York at Oswego38, Concordia University39, University of Bamberg40, University of Amsterdam41, Center for Open Science42
TL;DR: In this paper, 29 teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players.
Abstract: Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and 9 teams (31%) did not observe a significant relationship. Overall, the 29 different analyses used 21 unique combinations of covariates. Neither analysts’ prior beliefs about the effect of interest nor their level of expertise readily explained the variation in the outcomes of the analyses. Peer ratings of the quality of the analyses also did not account for the variability. These findings suggest that significant variation in the results of analyses of complex data may be difficult to avoid, even by experts with honest intentions. Crowdsourcing data analysis, a strategy in which numerous research teams are recruited to simultaneously investigate the same research question, makes transparent how defensible, yet subjective, analytic choices influence research results.
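The effect sizes above are reported in odds-ratio units. The teams' actual analyses were far more varied (regression models with different covariates), but the basic unit can be illustrated from a 2x2 contingency table; the counts below are invented for illustration, not from the dataset:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
       a = group 1 with outcome,  b = group 1 without outcome
       c = group 2 with outcome,  d = group 2 without outcome"""
    return (a * d) / (b * c)

# Hypothetical counts: red card vs. no red card, by skin-tone group
or_est = odds_ratio(30, 970, 20, 980)
print(f"odds ratio = {or_est:.2f}")  # odds ratio = 1.52
```

An odds ratio of 1 means no association; values above 1 mean the outcome is relatively more likely in the first group.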
••
University of Essex1, University of Leeds2, Anglia Ruskin University3, University of East Anglia4, Iowa State University5, University of Oxford6, University of Sussex7, University of York8, Newbury College9, University of Nottingham10, Kansas State University11, Ohio State University12, Washington State University13, Potsdam Institute for Climate Impact Research14, Stockholm Resilience Centre15, University of Aberdeen16, International Livestock Research Institute17, Lincoln University (New Zealand)18
TL;DR: In this article, sustainable intensification of agricultural systems offers synergistic opportunities for the co-production of agricultural and natural capital outcomes, but system redesign is essential to deliver optimum outcomes as ecological and economic conditions change.
Abstract: The sustainable intensification of agricultural systems offers synergistic opportunities for the co-production of agricultural and natural capital outcomes. Efficiency and substitution are steps towards sustainable intensification, but system redesign is essential to deliver optimum outcomes as ecological and economic conditions change. We show global progress towards sustainable intensification by farms and hectares, using seven sustainable intensification sub-types: integrated pest management, conservation agriculture, integrated crop and biodiversity, pasture and forage, trees, irrigation management and small or patch systems. From 47 sustainable intensification initiatives at scale (each >104 farms or hectares), we estimate 163 million farms (29% of all worldwide) have crossed a redesign threshold, practising forms of sustainable intensification on 453 Mha of agricultural land (9% of worldwide total). Key challenges include investment to integrate more forms of sustainable intensification in farming systems, creating agricultural knowledge economies and establishing policy measures to scale sustainable intensification further. We conclude that sustainable intensification may be approaching a tipping point where it could be transformative.
••
TL;DR: In this paper, a search for new phenomena in final states with an energetic jet and large missing transverse momentum is reported, and the results are translated into exclusion limits in models with pair-produced weakly interacting dark-matter candidates, large extra spatial dimensions, and supersymmetric particles in several compressed scenarios.
Abstract: Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses proton-proton collision data corresponding to an integrated luminosity of 36.1 fb−1 at a centre-of-mass energy of 13 TeV collected in 2015 and 2016 with the ATLAS detector at the Large Hadron Collider. Events are required to have at least one jet with a transverse momentum above 250 GeV and no leptons (e or μ). Several signal regions are considered with increasing requirements on the missing transverse momentum above 250 GeV. Good agreement is observed between the number of events in data and Standard Model predictions. The results are translated into exclusion limits in models with pair-produced weakly interacting dark-matter candidates, large extra spatial dimensions, and supersymmetric particles in several compressed scenarios.
••
TL;DR: The authors reviewed the economic impacts of climate change and the policy implications of the results, and concluded that climate change will likely have a limited impact on the economy and human welfare.
Abstract: This article reviews the economic impacts of climate change and the policy implications of the results. Current estimates indicate that climate change will likely have a limited impact on t...
••
TL;DR: In this article, the authors propose a meta-theoretical framework for analyzing national energy transitions by considering three types of systems: energy flows and markets, energy technologies, and energy-related policies.
Abstract: Economic development, technological innovation, and policy change are especially prominent factors shaping energy transitions. Therefore explaining energy transitions requires combining insights from disciplines investigating these factors. The existing literature is not consistent in identifying these disciplines nor proposing how they can be combined. We conceptualize national energy transitions as a co-evolution of three types of systems: energy flows and markets, energy technologies, and energy-related policies. The focus on the three types of systems gives rise to three perspectives on national energy transitions: techno-economic with its roots in energy systems analysis and various domains of economics; socio-technical with its roots in sociology of technology, STS, and evolutionary economics; and political with its roots in political science. We use the three perspectives as an organizing principle to propose a meta-theoretical framework for analyzing national energy transitions. Following Elinor Ostrom's approach, the proposed framework explains national energy transitions through a nested conceptual map of variables and theories. In comparison with the existing meta-theoretical literature, the three perspectives framework elevates the role of political science since policies are likely to be increasingly prominent in shaping 21st century energy transitions.
••
TL;DR: In this article, the observed significance is 5.8 standard deviations, compared to an expectation of 4.9 standard deviations; in combination with earlier data, the observed (expected) significance is 6.3 (5.1) standard deviations.
••
TL;DR: In this article, the authors examined when and how organizations create agility, adaptability, and alignment as distinct supply chain properties to gain sustainable competitive advantage, and provided a holistic study of the antecedents of agility, adaptability, and alignment.
Abstract: Purpose: To examine when and how organizations create agility, adaptability, and alignment as distinct supply chain properties to gain sustainable competitive advantage.
Design/methodology/approach: The current study utilizes the resource-based view (RBV) under the moderating effect of top management commitment. To test our research hypotheses, we gathered 351 usable responses using a pre-tested questionnaire.
Findings: Our statistical analyses suggest that information sharing and supply chain connectivity resources influence supply chain visibility capability, which, under the moderating effect of top management commitment, enhance supply chain agility, adaptability and alignment.
Originality/value: Our contribution lies in: (i) providing a holistic study of the antecedents of agility, adaptability and alignment; (ii) investigating the moderating role of top management commitment on supply chain agility, adaptability and alignment; (iii) following the RBV and addressing calls for investigating the role of resources in supply chain management, and for empirical studies with implications for supply chain design.
••
TL;DR: Future decisions on neonicotinoid use will benefit from weighing crop yield benefits versus environmental impacts to nontarget organisms and considering whether there are more environmentally benign alternatives.
Abstract: Neonicotinoid use has increased rapidly in recent years, with a global shift toward insecticide applications as seed coatings rather than aerial spraying. While the use of seed coatings can lessen the amount of overspray and drift, the near universal and prophylactic use of neonicotinoid seed coatings on major agricultural crops has led to widespread detections in the environment (pollen, soil, water, honey). Pollinators and aquatic insects appear to be especially susceptible to the effects of neonicotinoids with current research suggesting that chronic sublethal effects are more prevalent than acute toxicity. Meanwhile, evidence of clear and consistent yield benefits from the use of neonicotinoids remains elusive for most crops. Future decisions on neonicotinoid use will benefit from weighing crop yield benefits versus environmental impacts to nontarget organisms and considering whether there are more environmentally benign alternatives.
••
TL;DR: Data suggest that CST–Polα-mediated fill-in helps to control the repair of double-strand breaks by 53BP1, RIF1 and shieldin.
Abstract: In DNA repair, the resection of double-strand breaks dictates the choice between homology-directed repair—which requires a 3′ overhang—and classical non-homologous end joining, which can join unresected ends1,2. BRCA1-mutant cancers show minimal resection of double-strand breaks, which renders them deficient in homology-directed repair and sensitive to inhibitors of poly(ADP-ribose) polymerase 1 (PARP1)3–8. When BRCA1 is absent, the resection of double-strand breaks is thought to be prevented by 53BP1, RIF1 and the REV7–SHLD1–SHLD2–SHLD3 (shieldin) complex, and loss of these factors diminishes sensitivity to PARP1 inhibitors4,6–9. Here we address the mechanism by which 53BP1–RIF1–shieldin regulates the generation of recombinogenic 3′ overhangs. We report that CTC1–STN1–TEN1 (CST)10, a complex similar to replication protein A that functions as an accessory factor of polymerase-α (Polα)–primase11, is a downstream effector in the 53BP1 pathway. CST interacts with shieldin and localizes with Polα to sites of DNA damage in a 53BP1- and shieldin-dependent manner. As with loss of 53BP1, RIF1 or shieldin, the depletion of CST leads to increased resection. In BRCA1-deficient cells, CST blocks RAD51 loading and promotes the efficacy of PARP1 inhibitors. In addition, Polα inhibition diminishes the effect of PARP1 inhibitors. These data suggest that CST–Polα-mediated fill-in helps to control the repair of double-strand breaks by 53BP1, RIF1 and shieldin.
••
TL;DR: In contrast to the business case logic, a paradox perspective does not emphasize business considerations over concerns for environmental protection and social well-being at the societal level, as discussed by the authors, and a framework to delineate its descriptive, instrumental, and normative aspects is proposed.
Abstract: The last decade has witnessed the emergence of a paradox perspective on corporate sustainability. By explicitly acknowledging tensions between different desirable, yet interdependent and conflicting sustainability objectives, a paradox perspective enables decision makers to achieve competing sustainability objectives simultaneously and creates leeway for superior business contributions to sustainable development. In stark contrast to the business case logic, a paradox perspective does not emphasize business considerations over concerns for environmental protection and social well-being at the societal level. In order to contribute to the consolidation of this emergent field of research, we offer a definition of the paradox perspective on corporate sustainability and a framework to delineate its descriptive, instrumental, and normative aspects. This framework clarifies the paradox perspective’s contents and its implications for research and practice. We use the framework to map the contributions to this thematic symposium on paradoxes in sustainability and to propose questions for future research.
••
TL;DR: In this article, the authors perform an updated global fit to precision electroweak data, W + W − measurements at LEP, and Higgs and diboson data from Runs 1 and 2 of the LHC in the framework of the Standard Model Effective Field Theory (SMEFT), allowing all coefficients to vary across the combined dataset, and present the results in both the Warsaw and SILH operator bases.
Abstract: The ATLAS and CMS collaborations have recently released significant new data on Higgs and diboson production in LHC Run 2. Measurements of Higgs properties have improved in many channels, while kinematic information for h→γγ and h→ZZ can now be more accurately incorporated in fits using the STXS method, and W + W − diboson production at high p T gives new sensitivity to deviations from the Standard Model. We have performed an updated global fit to precision electroweak data, W + W − measurements at LEP, and Higgs and diboson data from Runs 1 and 2 of the LHC in the framework of the Standard Model Effective Field Theory (SMEFT), allowing all coefficients to vary across the combined dataset, and present the results in both the Warsaw and SILH operator bases. We exhibit the improvement in the constraints on operator coefficients provided by the LHC Run 2 data, and discuss the correlations between them. We also explore the constraints our fit results impose on several models of physics beyond the Standard Model, including models that contribute to the operator coefficients at the tree level and stops in the MSSM that contribute via loops.
••
Washington University in St. Louis1, Australian National University2, Heidelberg University3, University of Michigan4, Flinders University5, Paris Diderot University6, King's College London7, Office of Population Research8, University of Montpellier9, French Institute of Health and Medical Research10, University of Otago11, Hungarian Academy of Sciences12, University of Sussex13, University of Bologna14, VU University Amsterdam15, University of Adelaide16, University of London17, University of Oxford18, University of Queensland19, University of New England (Australia)20, University of Pennsylvania21, Georgetown University22, National and Kapodistrian University of Athens23, Roy J. and Lucille A. Carver College of Medicine24, University of Basel25, Greifswald University Hospital26, University Medical Center Groningen27, Semmelweis University28, VU University Medical Center29, Michigan State University30, University of Cambridge31, Icahn School of Medicine at Mount Sinai32, University of Manchester33, University of Münster34, Uppsala University35, Research Triangle Park36, QIMR Berghofer Medical Research Institute37, University of Bristol38, Deakin University39, Charité40, University of Melbourne41, University of Molise42, University of California, San Francisco43, Indiana University44
TL;DR: If an interaction exists in which the S allele of 5-HTTLPR increases risk of depression only in stressed individuals, then it is not broadly generalisable, but must be of modest effect size and only observable in limited situations.
Abstract: The hypothesis that the S allele of the 5-HTTLPR serotonin transporter promoter region is associated with increased risk of depression, but only in individuals exposed to stressful situations, has generated much interest, research and controversy since first proposed in 2003. Multiple meta-analyses combining results from heterogeneous analyses have not settled the issue. To determine the magnitude of the interaction and the conditions under which it might be observed, we performed new analyses on 31 data sets containing 38 802 European ancestry subjects genotyped for 5-HTTLPR and assessed for depression and childhood maltreatment or other stressful life events, and meta-analysed the results. Analyses targeted two stressors (narrow, broad) and two depression outcomes (current, lifetime). All groups that published on this topic prior to the initiation of our study and met the assessment and sample size criteria were invited to participate. Additional groups, identified by consortium members or self-identified in response to our protocol (published prior to the start of analysis) with qualifying unpublished data, were also invited to participate. A uniform data analysis script implementing the protocol was executed by each of the consortium members. Our findings do not support the interaction hypothesis. We found no subgroups or variable definitions for which an interaction between stress and 5-HTTLPR genotype was statistically significant. In contrast, our findings for the main effects of life stressors (strong risk factor) and 5-HTTLPR genotype (no impact on risk) are strikingly consistent across our contributing studies, the original study reporting the interaction and subsequent meta-analyses. Our conclusion is that if an interaction exists in which the S allele of 5-HTTLPR increases risk of depression only in stressed individuals, then it is not broadly generalisable, but must be of modest effect size and only observable in limited situations.
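The stratified test described above can be sketched as a standard check for gene-by-environment interaction on the log-odds scale (a Woolf-type comparison of the stress effect across genotype strata; this is an illustrative sketch, not the consortium's actual analysis script, and the counts below are purely hypothetical):

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log odds ratio for a 2x2 table:
    exposed: a depressed / b not; unexposed: c depressed / d not."""
    return math.log((a * d) / (b * c))

def se_log_odds_ratio(a, b, c, d):
    # Woolf's standard error for a log odds ratio
    return math.sqrt(1/a + 1/b + 1/c + 1/d)

# Hypothetical counts (depressed / not depressed, by stress exposure),
# stratified by 5-HTTLPR genotype -- for illustration only.
s_carriers     = (120, 380, 60, 440)   # stressed: 120/380, unstressed: 60/440
ll_homozygotes = (80, 270, 40, 310)

lor_s  = log_odds_ratio(*s_carriers)       # stress effect in S-allele carriers
lor_ll = log_odds_ratio(*ll_homozygotes)   # stress effect in LL homozygotes

# Interaction = difference in the stress effect between genotype strata
interaction = lor_s - lor_ll
se_int = math.sqrt(se_log_odds_ratio(*s_carriers)**2 +
                   se_log_odds_ratio(*ll_homozygotes)**2)
z = interaction / se_int   # compare to N(0,1); |z| < 1.96 => no interaction
```

With these (made-up) counts the stress main effect is strong in both strata while the interaction term is indistinguishable from zero, which mirrors the pattern the consortium reports: a robust stressor effect, no genotype effect, and no detectable interaction.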
••
Aix-Marseille University1, University of Oklahoma2, Azerbaijan National Academy of Sciences3, Université Paris-Saclay4, University of Toronto5, Santa Cruz Institute for Particle Physics6, University of Sussex7, Tel Aviv University8, Technion – Israel Institute of Technology9, University of Bergen10, Northern Illinois University11, University of Oslo12, University of Oregon13, Stockholm University14, King's College London15, International Centre for Theoretical Physics16, University of Tokyo17, AGH University of Science and Technology18, Ludwig Maximilian University of Munich19, Rutherford Appleton Laboratory20, University of Belgrade21, Alexandru Ioan Cuza University22
TL;DR: In this article, a search for heavy neutral Higgs bosons and Z' bosons was performed using a data sample corresponding to an integrated luminosity of 36.1 fb-1 from proton-proton collisions at √s = 13 TeV reco ...
Abstract: A search for heavy neutral Higgs bosons and Z' bosons is performed using a data sample corresponding to an integrated luminosity of 36.1 fb-1 from proton-proton collisions at √s = 13 TeV reco ...
••
TL;DR: In this paper, the properties of the Higgs boson were measured in the two-photon final state using 36.1 fb-1 of proton-proton collision data recorded at √s = 13 TeV by the ATLAS experiment at the Large Hadron Collider.
Abstract: Properties of the Higgs boson are measured in the two-photon final state using 36.1 fb-1 of proton-proton collision data recorded at √s = 13 TeV by the ATLAS experiment at the Large Hadron Collider. Cross-section measurements for the production of a Higgs boson through gluon-gluon fusion, vector-boson fusion, and in association with a vector boson or a top-quark pair are reported. The signal strength, defined as the ratio of the observed to the expected signal yield, is measured for each of these production processes as well as inclusively. The global signal strength measurement of 0.99 ± 0.14 improves on the precision of the ATLAS measurement at √s = 7 and 8 TeV by a factor of two. Measurements of gluon-gluon fusion and vector-boson fusion productions yield signal strengths compatible with the Standard Model prediction. Measurements of simplified template cross sections, designed to quantify the different Higgs boson production processes in specific regions of phase space, are reported. The cross section for the production of the Higgs boson decaying to two isolated photons in a fiducial region closely matching the experimental selection of the photons is measured to be 55 ± 10 fb, which is in good agreement with the Standard Model prediction of 64 ± 2 fb. Furthermore, cross sections in fiducial regions enriched in Higgs boson production in vector-boson fusion or in association with large missing transverse momentum, leptons or top-quark pairs are reported. Differential and double-differential measurements are performed for several variables related to the diphoton kinematics as well as the kinematics and multiplicity of the jets produced in association with a Higgs boson. These differential cross sections are sensitive to higher-order QCD corrections and properties of the Higgs boson, such as its spin and CP quantum numbers. No significant deviations from a wide array of Standard Model predictions are observed.
Finally, the strength and tensor structure of the Higgs boson interactions are investigated using an effective Lagrangian, which introduces additional CP-even and CP-odd interactions. No significant new physics contributions are observed.
••
TL;DR: It is argued that appropriate conclusions match the Bayesian inferences, but not those based on significance testing, where they disagree; it is shown that a high-powered non-significant result is consistent with no evidence for H0 over H1 worth mentioning, which a Bayes factor can show.
Abstract: Inference using significance testing and Bayes factors is compared and contrasted in five case studies based on real research. The first study illustrates that the methods will often agree, both in motivating researchers to conclude that H1 is supported better than H0, and the other way round, that H0 is better supported than H1. The next four, however, show that the methods will also often disagree. In these cases, the aim of the paper will be to motivate the sensible evidential conclusion, and then see which approach matches those intuitions. Specifically, it is shown that a high-powered non-significant result is consistent with no evidence for H0 over H1 worth mentioning, which a Bayes factor can show, and, conversely, that a low-powered non-significant result is consistent with substantial evidence for H0 over H1, again indicated by Bayesian analyses. The fourth study illustrates that a high-powered significant result may not amount to any evidence for H1 over H0, matching the Bayesian conclusion. Finally, the fifth study illustrates that different theories can be evidentially supported to different degrees by the same data; a fact that P-values cannot reflect but Bayes factors can. It is argued that appropriate conclusions match the Bayesian inferences, but not those based on significance testing, where they disagree.
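The contrast between significance tests and Bayes factors can be made concrete with a minimal normal-normal Bayes factor calculator (a sketch in the spirit of the paper's approach; modelling H1 as a zero-centred normal with a theory-motivated scale is an assumption of this example, not the paper's only model of H1):

```python
from math import sqrt, exp, pi

def normal_pdf(x, mu, sd):
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

def bayes_factor_10(obs_effect, obs_se, h1_scale):
    """BF10 for an observed effect estimate (with standard error) under
    H0: effect = 0 versus H1: effect ~ Normal(0, h1_scale).
    Under H1 the marginal for the estimate is Normal(0, sqrt(se^2 + scale^2))."""
    like_h1 = normal_pdf(obs_effect, 0.0, sqrt(obs_se**2 + h1_scale**2))
    like_h0 = normal_pdf(obs_effect, 0.0, obs_se)
    return like_h1 / like_h0

# A precisely estimated null result: BF10 well below 1, evidence for H0.
bf_tight_null = bayes_factor_10(0.0, 0.2, 1.0)
# The same point estimate with a large standard error: BF10 near 1,
# i.e. no evidence worth mentioning either way -- something a P-value
# alone cannot distinguish from the first case.
bf_noisy_null = bayes_factor_10(0.0, 2.0, 1.0)
```

By the conventional thresholds, BF10 > 3 or < 1/3 counts as evidence worth mentioning for H1 or H0 respectively; both non-significant results above have P ≈ 1, yet only the tightly estimated one carries evidence for H0.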
••
TL;DR: The impact of strategies to improve health-care provider practices varied substantially, although some approaches were more consistently effective than others.
••
TL;DR: It is shown that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network, which applies to any situation in which spatially localized sensors are unitarily encoded with independent parameters.
Abstract: We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.
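The dichotomy in the final sentence can be illustrated with the standard phase-estimation variance bounds (a toy numerical sketch with made-up network sizes, not the paper's general theorems): for d sensors each imprinting an independent phase, separable probes are already optimal for the local parameters, but for a global parameter such as the average phase a GHZ-type entangled state buys an extra factor of d.

```python
# Quantum Cramer-Rao variance bounds for repeated single-qubit phase
# measurements (toy numbers for illustration only).
d = 4       # number of networked sensors
nu = 1000   # measurement repetitions per strategy

# Local parameters theta_i: separable probes give variance 1/nu per phase,
# which already saturates the limit for independent local parameters.
var_local = 1.0 / nu

# Global parameter theta_bar = mean(theta_i):
# separable strategy -- average d independent estimates:
var_avg_separable = var_local / d            # = 1/(d*nu)
# GHZ state across all d sensors accumulates phase d*theta_bar,
# giving variance 1/(d^2 * nu): a factor-d entanglement enhancement.
var_avg_ghz = 1.0 / (d**2 * nu)
```

The ratio of the two global-parameter variances equals d, the network size, which is the sense in which entanglement helps only when the quantity of interest is a property of the whole network.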
••
National Center for Supercomputing Applications1, University of Illinois at Urbana–Champaign2, Stanford University3, Fermilab4, SLAC National Accelerator Laboratory5, Brookhaven National Laboratory6, Institut d'Astrophysique de Paris7, University of Pennsylvania8, IFAE9, University College London10, ETH Zurich11, Max Planck Society12, Austin Peay State University13, Rhodes University14, New York University15, Texas A&M University16, Indian Institute of Technology, Hyderabad17, Ludwig Maximilian University of Munich18, Ohio State University19, Autonomous University of Madrid20, University of Michigan21, University of Cambridge22, University of Washington23, Santa Cruz Institute for Particle Physics24, Australian Astronomical Observatory25, Argonne National Laboratory26, University of São Paulo27, Catalan Institution for Research and Advanced Studies28, Institut de Ciències de l'Espai29, University of Southampton30, State University of Campinas31, Princeton University32, California Institute of Technology33, University of Sussex34, Oak Ridge National Laboratory35
TL;DR: The Dark Energy Survey (DES) photometric data set Y3 GOLD as discussed by the authors contains nearly 5000 deg2 of grizY imaging in the south Galactic cap including nearly 390 million objects, with depth reaching a signal-to-noise ratio ∼10 for extended objects up to i AB ∼ 23.0, top-of-the-atmosphere photometric uniformity, and object classification with efficiency >98% and purity >99% for galaxies with 19 < i AB < 22.5.
Abstract: We describe the Dark Energy Survey (DES) photometric data set assembled from the first three years of science operations to support DES Year 3 cosmological analyses, and provide usage notes aimed at the broad astrophysics community. Y3 GOLD improves on previous releases from DES, Y1 GOLD, and Data Release 1 (DES DR1), presenting an expanded and curated data set that incorporates algorithmic developments in image detrending and processing, photometric calibration, and object classification. Y3 GOLD comprises nearly 5000 deg2 of grizY imaging in the south Galactic cap, including nearly 390 million objects, with depth reaching a signal-to-noise ratio ∼10 for extended objects up to i AB ∼ 23.0, top-of-the-atmosphere photometric uniformity, and object classification with efficiency >98% and purity >99% for galaxies with 19 < i AB < 22.5. Additionally, it includes per-object quality information, and accompanying maps of the footprint coverage, masked regions, imaging depth, survey conditions, and astrophysical foregrounds that are used to select the cosmological analysis samples.
••
TL;DR: A search for the decay of the Standard Model Higgs boson into a bb¯ pair when produced in association with a W or Z boson is performed with the ATLAS detector as mentioned in this paper.