Showing papers by "University of Sussex", published in 2020
••
TL;DR: In this article, the authors present cosmological parameter results from the full-mission Planck measurements of the cosmic microwave background (CMB) anisotropies, combining information from the temperature and polarization maps and the lensing reconstruction.
Abstract: We present cosmological parameter results from the final full-mission Planck measurements of the cosmic microwave background (CMB) anisotropies, combining information from the temperature and polarization maps and the lensing reconstruction. Compared to the 2015 results, improved measurements of large-scale polarization allow the reionization optical depth to be measured with higher precision, leading to significant gains in the precision of other correlated parameters. Improved modelling of the small-scale polarization leads to more robust constraints on many parameters, with residual modelling uncertainties estimated to affect them only at the 0.5σ level. We find good consistency with the standard spatially-flat 6-parameter ΛCDM cosmology having a power-law spectrum of adiabatic scalar perturbations (denoted "base ΛCDM" in this paper), from polarization, temperature, and lensing, separately and in combination. A combined analysis gives dark matter density Ωch² = 0.120 ± 0.001, baryon density Ωbh² = 0.0224 ± 0.0001, scalar spectral index ns = 0.965 ± 0.004, and optical depth τ = 0.054 ± 0.007 (in this abstract we quote 68% confidence regions on measured parameters and 95% on upper limits). The angular acoustic scale is measured to 0.03% precision, with 100θ∗ = 1.0411 ± 0.0003. These results are only weakly dependent on the cosmological model and remain stable, with somewhat increased errors, in many commonly considered extensions. Assuming the base-ΛCDM cosmology, the inferred (model-dependent) late-Universe parameters are: Hubble constant H0 = (67.4 ± 0.5) km s⁻¹ Mpc⁻¹; matter density parameter Ωm = 0.315 ± 0.007; and matter fluctuation amplitude σ8 = 0.811 ± 0.006. We find no compelling evidence for extensions to the base-ΛCDM model. Combining with baryon acoustic oscillation (BAO) measurements (and considering single-parameter extensions) we constrain the effective extra relativistic degrees of freedom to be Neff = 2.99 ± 0.17, in agreement with the Standard Model prediction Neff = 3.046, and find that the neutrino mass is tightly constrained to Σmν < 0.12 eV. The CMB spectra continue to prefer higher lensing amplitudes than predicted in base ΛCDM at over 2σ, which pulls some parameters that affect the lensing amplitude away from the ΛCDM model; however, this is not supported by the lensing reconstruction or (in models that also change the background geometry) BAO data. The joint constraint with BAO measurements on spatial curvature is consistent with a flat universe, ΩK = 0.001 ± 0.002. Also combining with Type Ia supernovae (SNe), the dark-energy equation of state parameter is measured to be w0 = −1.03 ± 0.03, consistent with a cosmological constant. We find no evidence for deviations from a purely power-law primordial spectrum, and combining with data from BAO, BICEP2, and Keck Array data, we place a limit on the tensor-to-scalar ratio r0.002 < 0.06. Standard big-bang nucleosynthesis predictions for the helium and deuterium abundances for the base-ΛCDM cosmology are in excellent agreement with observations. The Planck base-ΛCDM results are in good agreement with BAO, SNe, and some galaxy lensing observations, but in slight tension with the Dark Energy Survey's combined-probe results including galaxy clustering (which prefers lower fluctuation amplitudes or matter density parameters), and in significant, 3.6σ, tension with local measurements of the Hubble constant (which prefer a higher value). Simple model extensions that can partially resolve these tensions are not favoured by the Planck data.
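The 3.6σ Hubble-constant tension quoted in the abstract follows from a simple Gaussian discrepancy measure between two independent measurements. A minimal sketch is below; the Planck value (67.4 ± 0.5) is taken from the abstract, while the local distance-ladder value used here (73.5 ± 1.6 km s⁻¹ Mpc⁻¹) is an assumed illustrative input, not stated in this text.

```python
import math

def tension_sigma(m1, s1, m2, s2):
    """Gaussian tension between two independent measurements, in units of sigma."""
    return abs(m1 - m2) / math.hypot(s1, s2)

# Planck base-LambdaCDM H0 (from the abstract) vs. an assumed local value.
planck_h0, planck_err = 67.4, 0.5
local_h0, local_err = 73.5, 1.6   # illustrative assumption, not from this abstract

print(f"H0 tension: {tension_sigma(planck_h0, planck_err, local_h0, local_err):.1f} sigma")
```

With these inputs the measure reproduces the roughly 3.6σ discrepancy quoted above; the exact figure depends on which local measurement is used.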
4,688 citations
••
New York University1, University of Chicago2, Mackenzie Presbyterian University3, Middlesex University4, University of Kent5, Nicolaus Copernicus University in Toruń6, Harvard University7, Yale University8, Stanford University9, Northwestern University10, University of Sussex11, Utrecht University12, University of California, San Diego13, University of Maryland, College Park14, McGovern Institute for Brain Research15, University of Queensland16, University of Michigan17, California Institute of Technology18, Lehigh University19, University of Regina20, University of Oregon21, Ohio State University22, Massachusetts Institute of Technology23, University of St Andrews24, University of Cambridge25, University of British Columbia26, University of Illinois at Chicago27, University of California, Berkeley28, Carleton University29, VU University Amsterdam30, Cornell University31
TL;DR: Evidence from a selection of research topics relevant to pandemics is discussed, including work on navigating threats, social and cultural influences on behaviour, science communication, moral decision-making, leadership, and stress and coping.
Abstract: The COVID-19 pandemic represents a massive global health crisis. Because the crisis requires large-scale behaviour change and places significant psychological burdens on individuals, insights from the social and behavioural sciences can be used to help align human behaviour with the recommendations of epidemiologists and public health experts. Here we discuss evidence from a selection of research topics relevant to pandemics, including work on navigating threats, social and cultural influences on behaviour, science communication, moral decision-making, leadership, and stress and coping. In each section, we note the nature and quality of prior research, including uncertainty and unsettled issues. We identify several insights for effective response to the COVID-19 pandemic and highlight important gaps researchers should move quickly to fill in the coming weeks and months.
3,223 citations
••
TL;DR: In this paper, a power-law fit to the angular power spectra of dust polarization at 353 GHz for six nested sky regions covering from 24 to 71 % of the sky is presented.
Abstract: The study of polarized dust emission has become entwined with the analysis of the cosmic microwave background (CMB) polarization. We use new Planck maps to characterize Galactic dust emission as a foreground to the CMB polarization. We present Planck EE, BB, and TE power spectra of dust polarization at 353 GHz for six nested sky regions covering from 24 to 71 % of the sky. We present power-law fits to the angular power spectra, yielding evidence for statistically significant variations of the exponents over sky regions and a difference between the values for the EE and BB spectra. The TE correlation and E/B power asymmetry extend to low multipoles that were not included in earlier Planck polarization papers. We also report evidence for a positive TB dust signal. Combining data from Planck and WMAP, we determine the amplitudes and spectral energy distributions (SEDs) of polarized foregrounds, including the correlation between dust and synchrotron polarized emission, for the six sky regions as a function of multipole. This quantifies the challenge of the component separation procedure required for detecting the reionization and recombination peaks of primordial CMB B modes. The SED of polarized dust emission is fit well by a single-temperature modified blackbody emission law from 353 GHz to below 70 GHz. For a dust temperature of 19.6 K, the mean spectral index for dust polarization is $\beta_{\rm d}^{P} = 1.53\pm0.02 $. By fitting multi-frequency cross-spectra, we examine the correlation of the dust polarization maps across frequency. We find no evidence for decorrelation. If the Planck limit for the largest sky region applies to the smaller sky regions observed by sub-orbital experiments, then decorrelation might not be a problem for CMB experiments aiming at a primordial B-mode detection limit on the tensor-to-scalar ratio $r\simeq0.01$ at the recombination peak.
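The single-temperature modified-blackbody law the abstract refers to can be sketched numerically: the dust intensity scales as ν^β times the Planck function B_ν(T). The snippet below uses the abstract's fitted values (β = 1.53, T = 19.6 K) and normalises at 353 GHz; the function names and the normalisation convention are illustrative choices, not the paper's pipeline.

```python
import math

H = 6.62607015e-34  # Planck constant [J s]
K = 1.380649e-23    # Boltzmann constant [J/K]
C = 2.99792458e8    # speed of light [m/s]

def planck_bnu(nu_hz, t_k):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    x = H * nu_hz / (K * t_k)
    return 2.0 * H * nu_hz**3 / C**2 / math.expm1(x)

def dust_sed(nu_ghz, beta=1.53, t_dust=19.6, nu_ref_ghz=353.0):
    """Modified-blackbody dust SED, normalised to 1 at the reference frequency."""
    nu, nu0 = nu_ghz * 1e9, nu_ref_ghz * 1e9
    return (nu / nu0) ** beta * planck_bnu(nu, t_dust) / planck_bnu(nu0, t_dust)

print(dust_sed(353.0))  # 1.0 by construction at the reference frequency
print(dust_sed(70.0))   # dust is much fainter at CMB-dominated frequencies
```

This illustrates why the fit extends "from 353 GHz to below 70 GHz": with β ≈ 1.5 the dust signal falls steeply toward the frequencies where the CMB dominates.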
1,749 citations
••
TL;DR: The extent of the trait data compiled in TRY is evaluated and emerging patterns of data coverage and representativeness are analyzed to conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements.
Abstract: Plant traits-the morphological, anatomical, physiological, biochemical and phenological characteristics of plants-determine how plants respond to environmental factors, affect other trophic levels, and influence ecosystem properties and their benefits and detriments to people. Plant trait data thus represent the basis for a vast area of research spanning from evolutionary biology, community and functional ecology, to biodiversity conservation, ecosystem and landscape management, restoration, biogeography and earth system modelling. Since its foundation in 2007, the TRY database of plant traits has grown continuously. It now provides unprecedented data coverage under an open access data policy and is the main plant trait database used by the research community worldwide. Increasingly, the TRY database also supports new frontiers of trait-based plant research, including the identification of data gaps and the subsequent mobilization or measurement of new data. To support this development, in this article we evaluate the extent of the trait data compiled in TRY and analyse emerging patterns of data coverage and representativeness. Best species coverage is achieved for categorical traits-almost complete coverage for 'plant growth form'. However, most traits relevant for ecology and vegetation modelling are characterized by continuous intraspecific variation and trait-environmental relationships. These traits have to be measured on individual plants in their respective environment. Despite unprecedented data coverage, we observe a humbling lack of completeness and representativeness of these continuous traits in many aspects. We, therefore, conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements. This can only be achieved in collaboration with other initiatives.
882 citations
••
TL;DR: In this paper, the authors present the cosmological legacy of the Planck satellite, which provides the strongest constraints on the parameters of the standard cosmology model and some of the tightest limits available on deviations from that model.
Abstract: The European Space Agency’s Planck satellite, which was dedicated to studying the early Universe and its subsequent evolution, was launched on 14 May 2009. It scanned the microwave and submillimetre sky continuously between 12 August 2009 and 23 October 2013, producing deep, high-resolution, all-sky maps in nine frequency bands from 30 to 857 GHz. This paper presents the cosmological legacy of Planck, which currently provides our strongest constraints on the parameters of the standard cosmological model and some of the tightest limits available on deviations from that model. The 6-parameter ΛCDM model continues to provide an excellent fit to the cosmic microwave background data at high and low redshift, describing the cosmological information in over a billion map pixels with just six parameters. With 18 peaks in the temperature and polarization angular power spectra constrained well, Planck measures five of the six parameters to better than 1% (simultaneously), with the best-determined parameter (θ*) now known to 0.03%. We describe the multi-component sky as seen by Planck, the success of the ΛCDM model, and the connection to lower-redshift probes of structure formation. We also give a comprehensive summary of the major changes introduced in this 2018 release. The Planck data, alone and in combination with other probes, provide stringent constraints on our models of the early Universe and the large-scale structure within which all astrophysical objects form and evolve. We discuss some lessons learned from the Planck mission, and highlight areas ripe for further experimental advances.
879 citations
••
Stockholm Resilience Centre1, University of Tasmania2, Australian National University3, Stockholm University4, Charles Darwin University5, University of Montana6, International Union for Conservation of Nature and Natural Resources7, National Autonomous University of Mexico8, The Pew Charitable Trusts9, McGill University10, Stellenbosch University11, University of Maryland, College Park12, University of Bern13, International Center for Tropical Agriculture14, Commonwealth Scientific and Industrial Research Organisation15, University of Wisconsin-Madison16, Royal Swedish Academy of Sciences17, Hobart Corporation18, Potsdam Institute for Climate Impact Research19, Pontifical Catholic University of Chile20, University of Sussex21, University College Cork22, Lüneburg University23, University of Arizona24, Azim Premji University25, University of the Witwatersrand26, Radboud University Nijmegen27, Utrecht University28
TL;DR: In this article, the authors propose a set of four general principles that underlie high-quality knowledge co-production for sustainability research, and offer practical guidance on how to engage in meaningful co-productive practices, and how to evaluate their quality and success.
Abstract: Research practice, funding agencies and global science organizations suggest that research aimed at addressing sustainability challenges is most effective when ‘co-produced’ by academics and non-academics. Co-production promises to address the complex nature of contemporary sustainability challenges better than more traditional scientific approaches. But definitions of knowledge co-production are diverse and often contradictory. We propose a set of four general principles that underlie high-quality knowledge co-production for sustainability research. Using these principles, we offer practical guidance on how to engage in meaningful co-productive practices, and how to evaluate their quality and success.
607 citations
••
TL;DR: In this article, the legacy Planck cosmic microwave background (CMB) likelihoods derived from the 2018 data release are described, with a hybrid method using different approximations at low (ℓ < 30) and high (ℓ ≥ 30) multipoles, implementing several methodological and data-analysis refinements compared to previous releases.
Abstract: We describe the legacy Planck cosmic microwave background (CMB) likelihoods derived from the 2018 data release. The overall approach is similar in spirit to the one retained for the 2013 and 2015 data releases, with a hybrid method using different approximations at low (ℓ < 30) and high (ℓ ≥ 30) multipoles, implementing several methodological and data-analysis refinements compared to previous releases. With more realistic simulations, and better correction and modelling of systematic effects, we can now make full use of the CMB polarization observed in the High Frequency Instrument (HFI) channels. The low-multipole EE cross-spectra from the 100 GHz and 143 GHz data give a constraint on the ΛCDM reionization optical-depth parameter τ to better than 15% (in combination with the TT low-ℓ data and the high-ℓ temperature and polarization data), tightening constraints on all parameters with posterior distributions correlated with τ. We also update the weaker constraint on τ from the joint TEB likelihood using the Low Frequency Instrument (LFI) channels, which was used in 2015 as part of our baseline analysis. At higher multipoles, the CMB temperature spectrum and likelihood are very similar to previous releases. A better model of the temperature-to-polarization leakage and corrections for the effective calibrations of the polarization channels (i.e., the polarization efficiencies) allow us to make full use of the polarization spectra, improving the ΛCDM constraints on the parameters θMC, ωc, ωb, and H0 by more than 30%, and on ns by more than 20%, compared to TT-only constraints. Extensive tests on the robustness of the modelling of the polarization data demonstrate good consistency, with some residual modelling uncertainties. At high multipoles, we are now limited mainly by the accuracy of the polarization-efficiency modelling.
Using our various tests, simulations, and comparisons between different high-multipole likelihood implementations, we estimate the consistency of the results to be better than the 0.5σ level on the ΛCDM parameters, as well as for classical single-parameter extensions for the joint likelihood (to be compared to the 0.3σ levels we achieved in 2015 for the temperature data alone on ΛCDM only). Minor curiosities already present in the previous releases remain, such as the differences between the best-fit ΛCDM parameters for the ℓ < 800 and ℓ > 800 ranges of the power spectrum, or the preference for more smoothing of the power-spectrum peaks than predicted in ΛCDM fits. These are shown to be driven by the temperature power spectrum and are not significantly modified by the inclusion of the polarization data. Overall, the legacy Planck CMB likelihoods provide a robust tool for constraining the cosmological model and represent a reference for future CMB observations.
523 citations
••
Institute of Cancer Research1, Keele University2, Royal Cornwall Hospital3, Beatson West of Scotland Cancer Centre4, University of Sussex5, Worcestershire Acute Hospitals NHS Trust6, University of Cambridge7, Torbay Hospital8, Norfolk and Norwich University Hospital9, The Royal Marsden NHS Foundation Trust10, University of Manchester11, Northwood University12, King's College London13, Clatterbridge Cancer Centre NHS Foundation Trust14
TL;DR: 26 Gy in five fractions over 1 week is non-inferior to the standard of 40 Gy in 15 fractions over 3 weeks for local tumour control, and is as safe in terms of normal tissue effects up to 5 years for patients prescribed adjuvant local radiotherapy after primary surgery for early-stage breast cancer.
519 citations
••
TL;DR: In this article, the authors presented an extensive set of tests of the robustness of the lensing-potential power spectrum, and constructed a minimum-variance estimator likelihood over lensing multipoles 8 ≤ L ≤ 400.
Abstract: We present measurements of the cosmic microwave background (CMB) lensing potential using the final Planck 2018 temperature and polarization data. Using polarization maps filtered to account for the noise anisotropy, we increase the significance of the detection of lensing in the polarization maps from 5σ to 9σ. Combined with temperature, lensing is detected at 40σ. We present an extensive set of tests of the robustness of the lensing-potential power spectrum, and construct a minimum-variance estimator likelihood over lensing multipoles 8 ≤ L ≤ 400 (extending the range to lower L compared to 2015), which we use to constrain cosmological parameters. We find good consistency between lensing constraints and the results from the Planck CMB power spectra within the ΛCDM model. Combined with baryon density and other weak priors, the lensing analysis alone constrains the amplitude of matter fluctuations (1σ errors). Also combining with baryon acoustic oscillation data, we find tight individual parameter constraints, including σ8 = 0.811 ± 0.019. Combining with Planck CMB power spectrum data, we measure σ8 to better than 1% precision, finding σ8 = 0.811 ± 0.006. CMB lensing reconstruction data are complementary to galaxy lensing data at lower redshift, having a different degeneracy direction in σ8–Ωm space; we find consistency with the lensing results from the Dark Energy Survey, and give combined lensing-only parameter constraints that are tighter than joint results using galaxy clustering. Using the Planck cosmic infrared background (CIB) maps as an additional tracer of high-redshift matter, we make a combined Planck-only estimate of the lensing potential over 60% of the sky with considerably more small-scale signal. We additionally demonstrate delensing of the Planck power spectra using the joint and individual lensing potential estimates, detecting a maximum removal of 40% of the lensing-induced power in all spectra.
The improvement in the sharpening of the acoustic peaks by including both CIB and the quadratic lensing reconstruction is detected at high significance.
464 citations
••
TL;DR: In this paper, the Planck full-mission cosmic microwave background (CMB) temperature and E-mode polarization maps were used to obtain constraints on primordial non-Gaussianity.
Abstract: We analyse the Planck full-mission cosmic microwave background (CMB) temperature and E-mode polarization maps to obtain constraints on primordial non-Gaussianity (NG). We compare estimates obtained from separable template-fitting, binned, and optimal modal bispectrum estimators, finding consistent values for the local, equilateral, and orthogonal bispectrum amplitudes. Our combined temperature and polarization analysis produces the following final results: $f_{NL}^{local}$ = −0.9 ± 5.1; $f_{NL}^{equil}$ = −26 ± 47; and $f_{NL}^{ortho}$ = −38 ± 24 (68% CL, statistical). These results include low-multipole (4 ≤ l < 40) polarization data that are not included in our previous analysis. The results also pass an extensive battery of tests (with additional tests regarding foreground residuals compared to 2015), and they are stable with respect to our 2015 measurements (with small fluctuations, at the level of a fraction of a standard deviation, which is consistent with changes in data processing). Polarization-only bispectra display a significant improvement in robustness; they can now be used independently to set primordial NG constraints with a sensitivity comparable to WMAP temperature-based results and they give excellent agreement. In addition to the analysis of the standard local, equilateral, and orthogonal bispectrum shapes, we consider a large number of additional cases, such as scale-dependent feature and resonance bispectra, isocurvature primordial NG, and parity-breaking models, where we also place tight constraints but do not detect any signal. The non-primordial lensing bispectrum is, however, detected with an improved significance compared to 2015, excluding the null hypothesis at 3.5σ. Beyond estimates of individual shape amplitudes, we also present model-independent reconstructions and analyses of the Planck CMB bispectrum. 
Our final constraint on the local primordial trispectrum shape is $g_{NL}^{local}$ = (−5.8 ± 6.5) × 10$^4$ (68% CL, statistical), while constraints for other trispectrum shapes are also determined. Exploiting the tight limits on various bispectrum and trispectrum shapes, we constrain the parameter space of different early-Universe scenarios that generate primordial NG, including general single-field models of inflation, multi-field models (e.g. curvaton models), models of inflation with axion fields producing parity-violation bispectra in the tensor sector, and inflationary models involving vector-like fields with directionally-dependent bispectra. Our results provide a high-precision test for structure-formation scenarios, showing complete agreement with the basic picture of the ΛCDM cosmology regarding the statistics of the initial conditions, with cosmic structures arising from adiabatic, passive, Gaussian, and primordial seed perturbations.
441 citations
••
Paris Diderot University1, University of Granada2, Durham University3, Universidade Federal do Espírito Santo4, Universidade Federal de Minas Gerais5, University of Sussex6, University of Helsinki7, University of California, San Diego8, University of Illinois at Urbana–Champaign9, University of Massachusetts Amherst10, University of Stavanger11, Spanish National Research Council12, Autonomous University of Madrid13, University of Mainz14, University of Hamburg15, University of Nottingham16
TL;DR: In this paper, the potential for observing gravitational waves from cosmological phase transitions with LISA was investigated, based on current state-of-the-art simulations of sound waves in the cosmic fluid after the phase transition completes.
Abstract: We investigate the potential for observing gravitational waves from cosmological phase transitions with LISA in light of recent theoretical and experimental developments. Our analysis is based on current state-of-the-art simulations of sound waves in the cosmic fluid after the phase transition completes. We discuss the various sources of gravitational radiation, the underlying parameters describing the phase transition and a variety of viable particle physics models in this context, clarifying common misconceptions that appear in the literature and identifying open questions requiring future study. We also present a web-based tool, PTPlot, that allows users to obtain up-to-date detection prospects for a given set of phase transition parameters at LISA.
••
TL;DR: In this article, a search for the electroweak production of charginos and sleptons decaying into final states with two electrons or muons is presented, based on 139 fb$^{-1}$ of proton–proton collisions recorded by the ATLAS detector at the Large Hadron Collider at $\sqrt{s}=13$ TeV.
Abstract: A search for the electroweak production of charginos and sleptons decaying into final states with two electrons or muons is presented. The analysis is based on 139 fb$^{-1}$ of proton–proton collisions recorded by the ATLAS detector at the Large Hadron Collider at $\sqrt{s}=13$ $\text {TeV}$. Three R-parity-conserving scenarios where the lightest neutralino is the lightest supersymmetric particle are considered: the production of chargino pairs with decays via either W bosons or sleptons, and the direct production of slepton pairs. The analysis is optimised for the first of these scenarios, but the results are also interpreted in the others. No significant deviations from the Standard Model expectations are observed and limits at 95% confidence level are set on the masses of relevant supersymmetric particles in each of the scenarios. For a massless lightest neutralino, masses up to 420 $\text {Ge}\text {V}$ are excluded for the production of the lightest-chargino pairs assuming W-boson-mediated decays and up to 1 $\text {TeV}$ for slepton-mediated decays, whereas for slepton-pair production masses up to 700 $\text {Ge}\text {V}$ are excluded assuming three generations of mass-degenerate sleptons.
••
University of Sussex1, Paul Scherrer Institute2, Rutherford Appleton Laboratory3, University of Caen Lower Normandy4, Jagiellonian University5, German National Metrology Institute6, University of Bern7, University of Grenoble8, University of Kentucky9, ETH Zurich10, University of Fribourg11, Katholieke Universiteit Leuven12, University of Mainz13
TL;DR: In this article, the authors present the result of an experiment to measure the electric dipole moment (EDM) of the neutron at the Paul Scherrer Institute using Ramsey's method of separated oscillating magnetic fields with ultracold neutrons (UCN).
Abstract: We present the result of an experiment to measure the electric dipole moment (EDM) of the neutron at the Paul Scherrer Institute, using Ramsey's method of separated oscillating magnetic fields with ultracold neutrons (UCN). Our measurement stands in the long history of EDM experiments probing physics violating time-reversal invariance. The salient features of this experiment were the use of a Hg-199 co-magnetometer and an array of optically pumped cesium vapour magnetometers to cancel and correct for magnetic field changes. The statistical analysis was performed on blinded datasets by two separate groups, while the estimation of systematic effects profited from an unprecedented knowledge of the magnetic field. The measured value of the neutron EDM is $d_{\rm n} = (0.0\pm1.1_{\rm stat}\pm0.2_{\rm sys})\times10^{-26}\,e\,{\rm cm}$.
••
TL;DR: In revascularisation of left main coronary artery disease, PCI was associated with an inferior clinical outcome at 5 years compared with CABG, and CABG was found to be superior to PCI for the primary composite endpoint.
••
Harvard University1, University of Washington2, Humboldt University of Berlin3, Imperial College London4, University of Belgrade5, Istituto Nazionale di Fisica Nucleare6, Technical University of Berlin7, University of Bordeaux8, University of Oxford9, University of Valencia10, University of Strathclyde11, Rutherford Appleton Laboratory12, King's College London13, Foundation for Research & Technology – Hellas14, University of Birmingham15, University College London16, University of Liverpool17, National Physical Laboratory18, University of Nottingham19, University of Sussex20, Northern Illinois University21, Fermilab22, Peking University23, University of Pisa24, University of California, Riverside25, University of Nevada, Reno26, CERN27, University of Niš28, National Institute of Chemical Physics and Biophysics29, British University in Egypt30, Beni-Suef University31, Leibniz University of Hanover32, Paul Sabatier University33, University of Paris34, University of Cambridge35, Wayne State University36, Stanford University37, University of Bergen38, University of Amsterdam39, Northwestern University40, University of Bristol41, University of Warsaw42, University of Illinois at Urbana–Champaign43, Fayoum University44, University of Crete45, Queen's University Belfast46, Brandeis University47, University of Bologna48, Cochin University of Science and Technology49, German Aerospace Center50, University of Manchester51, University of Copenhagen52, University of Düsseldorf53, University of Vienna54, Florida State University55, University of Florence56, University of Illinois at Chicago57, University of Bremen58, University of Mainz59, Chinese Academy of Sciences60, University of Cincinnati61
TL;DR: The Atomic Experiment for Dark Matter and Gravity Exploration (AEDGE) as mentioned in this paper is a space experiment using cold atoms to search for ultra-light dark matter, and to detect gravitational waves in the frequency range between the most sensitive ranges of LISA and the terrestrial LIGO/Virgo/KAGRA/INDIGO experiments.
Abstract: We propose in this White Paper a concept for a space experiment using cold atoms to search for ultra-light dark matter, and to detect gravitational waves in the frequency range between the most sensitive ranges of LISA and the terrestrial LIGO/Virgo/KAGRA/INDIGO experiments. This interdisciplinary experiment, called Atomic Experiment for Dark Matter and Gravity Exploration (AEDGE), will also complement other planned searches for dark matter, and exploit synergies with other gravitational wave detectors. We give examples of the extended range of sensitivity to ultra-light dark matter offered by AEDGE, and how its gravitational-wave measurements could explore the assembly of super-massive black holes, first-order phase transitions in the early universe and cosmic strings. AEDGE will be based upon technologies now being developed for terrestrial experiments using cold atoms, and will benefit from the space experience obtained with, e.g., LISA and cold atom experiments in microgravity.
••
Monash University1, Fudan University2, Zhejiang University3, Columbia University4, University of Sussex5, ETH Zurich6, University of Cambridge7, École Polytechnique Fédérale de Lausanne8, Tel Aviv University9, University of Reading10, University of Queensland11, University of Ulm12, Aarhus University13, University of Michigan14, Howard Hughes Medical Institute15
TL;DR: This review commemorates the conclusion of a half century since Eanes and Glenner's seminal study of amyloids in humans by documenting the major milestones in amyloid research to date, from the perspectives of structural biology, biophysics, medicine, microbiology, engineering and nanotechnology.
Abstract: Amyloid diseases are global epidemics with profound health, social and economic implications and yet remain without a cure. This dire situation calls for research into the origin and pathological manifestations of amyloidosis to stimulate continued development of new therapeutics. In basic science and engineering, the cross-β architecture has been a constant thread underlying the structural characteristics of pathological and functional amyloids, and realizing that amyloid structures can be both pathological and functional in nature has fuelled innovations in artificial amyloids, whose use today ranges from water purification to 3D printing. At the conclusion of a half century since Eanes and Glenner's seminal study of amyloids in humans, this review commemorates the occasion by documenting the major milestones in amyloid research to date, from the perspectives of structural biology, biophysics, medicine, microbiology, engineering and nanotechnology. We also discuss new challenges and opportunities to drive this interdisciplinary field moving forward.
••
TL;DR: This work identifies key sustainability challenges with practices used in industries that will supply the metals and minerals needed for technologies such as solar photovoltaics, batteries, electric vehicle (EV) motors, wind turbines, fuel cells, and nuclear reactors and proposes four holistic recommendations to make mining and metal processing more sustainable and just.
Abstract: Climate change mitigation will create new natural resource and supply chain opportunities and dilemmas, as substantial amounts of raw materials will be required to build new low carbon energy devices and infrastructure (1). Between 2015 and 2050, the global electric vehicle stock needs to jump from 1.2 million light-duty passenger cars to 965 million passenger cars; battery storage capacity needs to climb from 0.5 gigawatt-hours (GWh) to 12,380 GWh; and the amount of installed solar PV capacity must rise from 223 gigawatts (GW) to over 7,100 GW (2). The materials and metals demanded by a low-carbon economy will be immense (3).
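As a rough sense of scale, the growth factors implied by the 2015-to-2050 projections quoted above can be checked with a few lines of arithmetic (a worked illustration, not part of the paper):

```python
# Growth factors implied by the 2015 -> 2050 projections quoted above.
ev_growth = 965e6 / 1.2e6      # electric-vehicle stock: 1.2 M -> 965 M cars
battery_growth = 12_380 / 0.5  # battery storage: 0.5 GWh -> 12,380 GWh
solar_growth = 7_100 / 223     # installed solar PV: 223 GW -> 7,100 GW

print(f"EV stock: ~{ev_growth:.0f}x, battery storage: ~{battery_growth:.0f}x, "
      f"solar PV: ~{solar_growth:.0f}x")
```

That is roughly an 800-fold increase in the EV stock, a 25,000-fold increase in battery storage and a 30-fold increase in solar PV, which underscores the materials demand the authors highlight.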
••
Wageningen University and Research Centre1, University of California, Davis2, University of Georgia3, Swedish University of Agricultural Sciences4, University of Maine5, University of Koblenz and Landau6, University of Bern7, University of Sussex8, Cornell University9, Michigan State University10, University of British Columbia11, University of Padua12, University of Worcester13, University of Reading14, University of California, Berkeley15, University of Göttingen16, Estonian University of Life Sciences17, Lancaster University18, Lincoln University (New Zealand)19
TL;DR: This synthesis identifies several important drivers of variability in effectiveness of plantings: pollination services declined exponentially with distance from plantings, and perennial and older flower strips with higher flowering plant diversity enhanced pollination more effectively.
Abstract: Floral plantings are promoted to foster ecological intensification of agriculture through provisioning of ecosystem services. However, a comprehensive assessment of the effectiveness of different floral plantings, their characteristics and consequences for crop yield is lacking. Here we quantified the impacts of flower strips and hedgerows on pest control (18 studies) and pollination services (17 studies) in adjacent crops in North America, Europe and New Zealand. Flower strips, but not hedgerows, enhanced pest control services in adjacent fields by 16% on average. However, effects on crop pollination and yield were more variable. Our synthesis identifies several important drivers of variability in effectiveness of plantings: pollination services declined exponentially with distance from plantings, and perennial and older flower strips with higher flowering plant diversity enhanced pollination more effectively. These findings provide promising pathways to optimise floral plantings to more effectively contribute to ecosystem service delivery and ecological intensification of agriculture in the future.
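The exponential distance decay reported above for pollination services can be sketched with a minimal model, S(d) = S0·exp(−d/d0). Note that the decay length d0 below is a hypothetical value chosen for illustration, not a parameter estimated in the synthesis:

```python
import math

def pollination_service(d_m, s0=1.0, d0=50.0):
    """Illustrative exponential distance decay, S(d) = S0 * exp(-d / d0).

    d0 (decay length in metres) is a hypothetical value, not one
    estimated in the synthesis.
    """
    return s0 * math.exp(-d_m / d0)

for d in (0, 25, 50, 100):
    print(f"{d:>3} m from planting: {pollination_service(d):.2f} of edge-of-strip service")
```

Under this toy parameterisation, service falls to about a third of its edge value one decay length (50 m) into the field, which is the qualitative pattern the synthesis reports.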
••
TL;DR: This paper outlines different conceptualisations of transformation and a set of practical principles for effective research and action towards sustainability, noting that these approaches are not mutually exclusive.
••
University of Western Australia1, Commonwealth Scientific and Industrial Research Organisation2, Imperial College London3, Harper Adams University4, Scotland's Rural College5, University of Würzburg6, Bavarian Forest National Park7, York University8, University of New England (Australia)9, University of Sussex10, Smithsonian Institution11, University of Louisville12, University of Leeds13
TL;DR: In this paper, the authors identify seven key challenges in drawing robust inference about insect population declines: establishment of the historical baseline, representativeness of site selection, robustness of time series trend estimation, mitigation of detection bias effects, and ability to account for potential artefacts of density dependence, phenological shifts and scale-dependence in extrapolation from sample abundance to population level inference.
Abstract: 1. Many insect species are under threat from the anthropogenic drivers of global change. There have been numerous well‐documented examples of insect population declines and extinctions in the scientific literature, but recent weaker studies making extreme claims of a global crisis have drawn widespread media coverage and brought unprecedented public attention. This spotlight might be a double‐edged sword if the veracity of alarmist insect decline statements does not stand up to close scrutiny.
2. We identify seven key challenges in drawing robust inference about insect population declines: establishment of the historical baseline, representativeness of site selection, robustness of time series trend estimation, mitigation of detection bias effects, and ability to account for potential artefacts of density dependence, phenological shifts and scale‐dependence in extrapolation from sample abundance to population‐level inference.
3. Insect population fluctuations are complex. Greater care is needed when evaluating evidence for population trends and in identifying drivers of those trends. We present guidelines for best‐practice approaches that avoid methodological errors, mitigate potential biases and produce more robust analyses of time series trends.
4. Despite many existing challenges and pitfalls, we present a forward‐looking prospectus for the future of insect population monitoring, highlighting opportunities for more creative exploitation of existing baseline data, technological advances in sampling and novel computational approaches. Entomologists cannot tackle these challenges alone, and it is only through collaboration with citizen scientists, other research scientists in many disciplines, and data analysts that the next generation of researchers will bridge the gap between little bugs and big data.
••
Swedish University of Agricultural Sciences1, Norwegian University of Life Sciences2, Queen's University3, London School of Economics and Political Science4, King's College London5, SOAS, University of London6, Lund University7, University of Exeter8, Overseas Development Institute9, Cornell University10, Philippine Institute for Development Studies11, University of Sussex12, University of Leeds13
TL;DR: This article revisited important insights from the social sciences and humanities on the co-production of political economies, cultures, societies and biophysical relations and showed how ontological pluralism can open up new imaginations.
Abstract: Climate change research is at an impasse. The transformation of economies and everyday practices is more urgent, and yet appears ever more daunting as attempts at behaviour change, regulations, and global agreements confront material and social-political infrastructures that support the status quo. Effective action requires new ways of conceptualizing society, climate and environment and yet current research struggles to break free of established categories. In response, this contribution revisits important insights from the social sciences and humanities on the co-production of political economies, cultures, societies and biophysical relations and shows how ontological pluralism can open up new imaginations. Its intention is to help generate a different framing of socionatural change that goes beyond the current science-policy-behavioural change pathway. It puts forward several moments of inadvertent concealment in contemporary debates that stem directly from the way issues are framed and imagined in contemporary discourses. By placing values, normative commitments, and experiential and plural ways of knowing from around the world at the centre of climate knowledge, we confront climate change with contested politics and the everyday foundations of action rather than just data.
••
TL;DR: In this article, the authors present an extensive analysis of systematic effects, including the use of end-to-end simulations to facilitate their removal and characterize the residuals, for the Planck 2018 HFI data.
Abstract: This paper presents the High Frequency Instrument (HFI) data processing procedures for the Planck 2018 release. Major improvements in mapmaking have been achieved since the previous Planck 2015 release, many of which were used and described already in an intermediate paper dedicated to the Planck polarized data at low multipoles. These improvements enabled the first significant measurement of the reionization optical depth parameter using Planck-HFI data. This paper presents an extensive analysis of systematic effects, including the use of end-to-end simulations to facilitate their removal and characterize the residuals. The polarized data, which presented a number of known problems in the 2015 Planck release, are very significantly improved, especially the leakage from intensity to polarization. Calibration, based on the cosmic microwave background (CMB) dipole, is now extremely accurate and in the frequency range 100–353 GHz reduces intensity-to-polarization leakage caused by calibration mismatch. The Solar dipole direction has been determined in the three lowest HFI frequency channels to within one arc minute, and its amplitude has an absolute uncertainty smaller than 0.35 μK, an accuracy of order 10⁻⁴. This is a major legacy from the Planck HFI for future CMB experiments. The removal of bandpass leakage has been improved for the main high-frequency foregrounds by extracting the bandpass-mismatch coefficients for each detector as part of the mapmaking process; these values in turn improve the intensity maps. This is a major change in the philosophy of “frequency maps”, which are now computed from single detector data, all adjusted to the same average bandpass response for the main foregrounds. End-to-end simulations have been shown to reproduce very well the relative gain calibration of detectors, as well as drifts within a frequency induced by the residuals of the main systematic effect (analogue-to-digital convertor non-linearity residuals).
Using these simulations, we have been able to measure and correct the small frequency calibration bias induced by this systematic effect at the 10⁻⁴ level. There is no detectable sign of a residual calibration bias between the first and second acoustic peaks in the CMB channels, at the 10⁻³ level.
•
17 Aug 2020
TL;DR: The table of contents of a multi-chapter volume on computational neuroscience, spanning dynamical-systems theory, ion-channel simulation, single-neuron and network modelling, synaptic plasticity, population coding, and models of vision, motor control and attention.
Abstract: Contents:
A Theoretical Overview: Introduction; Deterministic Dynamical Systems; Stochastic Dynamical Systems; Information Theory; Optimal Control.
Atomistic Simulations of Ion Channels: Introduction; Simulation Methods; Selected Applications; Outlook.
Modeling Neuronal Calcium Dynamics: Introduction; Basic Principles; Special Calcium Signaling for Neurons; Conclusions.
Structure-Based Models of NO Diffusion in the Nervous System: Introduction; Methods; Results; Exploring Functional Roles with More Abstract Models; Conclusions.
Stochastic Modeling of Single Ion Channels: Introduction; Some Basic Probability; Single Channel Models; Transition Probabilities, Macroscopic Currents and Noise; Behaviour of Single Channels under Equilibrium Conditions; Time Interval Omission; Some Miscellaneous Topics.
The Biophysical Basis of Firing Variability in Cortical Neurons: Introduction; Typical Input is Correlated and Irregular; Synaptic Unreliability; Postsynaptic Ion Channel Noise; Integration of a Transient Input by Cortical Neurons; Noisy Spike Generation Dynamics; Dynamics of NMDA Receptors; Class 1 and Class 2 Neurons Show Different Noise Sensitivities; Cortical Cell Dynamical Classes; Implications for Synchronous Firing; Conclusions.
Generating Quantitatively Accurate, but Computationally Concise, Models of Single Neurons: Introduction; The Hypothalamo-Hypophysial System; Statistical Methods to Investigate the Intrinsic Mechanisms Underlying Spike Patterning; Summary and Conclusions.
Bursting Activity in Weakly Electric Fish: Introduction; Overview of the Electrosensory System; Feature Extraction by Spike Bursts; Factors Shaping Burst Firing In Vivo; Conditional Action Potential Back Propagation Controls Burst Firing In Vitro; Comparison with Other Bursting Neurons; Conclusions.
Likelihood Methods for Neural Spike Train Data Analysis: Introduction; Theory; Applications; Conclusion; Appendix.
Biologically-Detailed Network Modeling: Introduction; Cells; Synapses; Connections; Inputs; Implementation; Validation; Conclusions.
Hebbian Learning and Spike-Timing-Dependent Plasticity: Hebbian Models of Plasticity; Spike-Timing-Dependent Plasticity; Role of Constraints in Hebbian Learning; Competitive Hebbian Learning Through STDP; Temporal Aspects of STDP; STDP in a Network; Conclusion.
Correlated Neuronal Activity: High- and Low-Level Views: Introduction: the Timing Game; Functional Roles for Spike Timing; Correlations Arising from Common Input; Correlations Arising from Local Network Interactions; When Are Neurons Sensitive to Correlated Input?; A Simple, Quantitative Model; Correlations and Neuronal Variability; Conclusion; Appendix.
A Case Study of Population Coding: Stimulus Localization in the Barrel Cortex: Introduction; Series Expansion Method; The Whisker System; Coding in the Whisker System; Discussion; Conclusions.
Modeling Fly Motion Vision: The Fly Motion Vision System: An Overview; Mechanisms of Local Motion Detection: The Correlation Detector; Spatial Processing of Local Motion Signals by Lobula Plate Tangential Cells; Conclusions.
Mean-Field Theory of Irregularly Spiking Neuronal Populations and Working Memory in Recurrent Cortical Networks: Introduction; Firing Rate and Variability of a Spiking Neuron with Noisy Input; Self-Consistent Theory of Recurrent Cortical Circuits.
The Operation of Memory Systems in the Brain: Introduction; Functions of the Hippocampus in Long-Term Memory; Short-Term Memory Systems; Invariant Visual Object Recognition; Visual Stimulus-Reward Association, Emotion, and Motivation; Effects of Mood on Memory and Visual Processing.
Modeling Motor Control Paradigms: Introduction: The Ecological Nature of Motor Control; The Robotic Perspective; The Biological Perspective; The Role of Cerebellum in the Coordination of Multiple Joints; Controlling Unstable Plants; Motor Learning Paradigms.
Computational Models for Generic Cortical Microcircuits: Introduction; A Conceptual Framework for Real-Time Neural Computation; The Generic Neural Microcircuit Model; Towards a Non-Turing Theory for Real-Time Neural Computation; A Generic Neural Microcircuit on the Computational Test Stand; Temporal Integration and Kernel Function of Neural Microcircuit Models; Software for Evaluating the Computational Capabilities of Neural Microcircuit Models; Discussion.
Modeling Primate Visual Attention: Introduction; Brain Areas; Bottom-Up Control; Top-Down Modulation of Early Vision; Top-Down Deployment of Attention; Attention and Scene Understanding; Discussion.
••
TL;DR: In this article, the authors examine the promise and peril of smart home technologies and suggest three areas of future research on the demographics and behavior of actual smart home adopters, rethinking the duality of "control,” and looking beyond "homes" towards socio-technical systems, practices, and justice.
Abstract: Smart home technologies refer to devices that provide some degree of digitally connected, automated, or enhanced services to building occupants. Smart homes have become central in recent technology and policy discussions about energy efficiency, climate change, and the sustainability of buildings. Nevertheless, do they truly promote sustainability goals? In addition, what sorts of benefits, risks, and policies do they entail? Based on an extensive original dataset involving expert interviews, site visits to retailers, and a comprehensive review of the literature, this study critically examines the promise and peril of smart home technologies. Drawing on original data collected in the United Kingdom, which has access to European markets, the study first examines definitions of smart homes before offering a new classification involving 13 categories of smart technology covering 267 specific options commercially available from 113 companies. It situates these different technology classes alongside six degrees or levels of smartness, from the basic or traditional home to the fully automated and sentient home. It then elaborates on the 13 distinct benefits smart homes may offer alongside 17 potential risks and barriers, before introducing seven policy recommendations from the material. It lastly suggests three areas of future research on the demographics and behavior of actual smart home adopters, rethinking the duality of “control,” and looking beyond “homes” towards socio-technical systems, practices, and justice.
••
TL;DR: The Deep Underground Neutrino Experiment (DUNE) as discussed by the authors is an international world-class experiment dedicated to addressing these questions as it searches for leptonic charge-parity symmetry violation, stands ready to capture supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model.
Abstract: The preponderance of matter over antimatter in the early universe, the dynamics of the supernovae that produced the heavy elements necessary for life, and whether protons eventually decay—these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our universe, its current state, and its eventual fate. The Deep Underground Neutrino Experiment (DUNE) is an international world-class experiment dedicated to addressing these questions as it searches for leptonic charge-parity symmetry violation, stands ready to capture supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model. The DUNE far detector technical design report (TDR) describes the DUNE physics program and the technical designs of the single- and dual-phase DUNE liquid argon TPC far detector modules. This TDR is intended to justify the technical choices for the far detector that flow down from the high-level physics goals through requirements at all levels of the Project. Volume I contains an executive summary that introduces the DUNE science program, the far detector and the strategy for its modular designs, and the organization and management of the Project. The remainder of Volume I provides more detail on the science program that drives the choice of detector technologies and on the technologies themselves. It also introduces the designs for the DUNE near detector and the DUNE computing model, for which DUNE is planning design reports. Volume II of this TDR describes DUNE's physics program in detail. Volume III describes the technical coordination required for the far detector design, construction, installation, and integration, and its organizational structure. Volume IV describes the single-phase far detector technology. A planned Volume V will describe the dual-phase technology.
••
TL;DR: This review highlights particularly important recent antibacterial results for these NCs, which are used to increase the mechanical and antibacterial properties of wound-healing tissue scaffolds.
••
TL;DR: In this paper, trigger algorithms and selections at the ATLAS experiment were optimised to control rates while retaining high efficiency for physics analyses, in order to cope with a fourfold increase of peak LHC luminosity from 2015 to 2018 (Run 2) and a similar increase in the number of interactions per beam-crossing to about 60.
Abstract: Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for the ATLAS experiment to record signals for a wide variety of physics: from Standard Model processes to searches for new phenomena in both proton–proton and heavy-ion collisions. To cope with a fourfold increase of peak LHC luminosity from 2015 to 2018 (Run 2), to 2.1 × 10³⁴ cm⁻² s⁻¹, and a similar increase in the number of interactions per beam-crossing to about 60, trigger algorithms and selections were optimised to control the rates while retaining a high efficiency for physics analyses. For proton–proton collisions, the single-electron trigger efficiency relative to a single-electron offline selection is at least 75% for an offline electron of 31 GeV, and rises to 96% at 60 GeV; the trigger efficiency of a 25 GeV leg of the primary diphoton trigger relative to a tight offline photon selection is more than 96% for an offline photon of 30 GeV. For heavy-ion collisions, the primary electron and photon trigger efficiencies relative to the corresponding standard offline selections are at least 84% and 95%, respectively, at 5 GeV above the corresponding trigger threshold.
••
TL;DR: A search for heavy neutral Higgs bosons is performed using the LHC Run 2 data, corresponding to an integrated luminosity of 139 fb⁻¹ of proton-proton collisions at √s = 13 TeV recorded with the ATLAS detector.
Abstract: A search for heavy neutral Higgs bosons is performed using the LHC Run 2 data, corresponding to an integrated luminosity of 139 fb⁻¹ of proton-proton collisions at √s = 13 TeV recorded with the ATLAS detector. The search for heavy resonances is performed over the mass range 0.2–2.5 TeV for the τ⁺τ⁻ decay with at least one τ-lepton decaying into final states with hadrons. The data are in good agreement with the background prediction of the standard model. In the M_h^125 scenario of the minimal supersymmetric standard model, values of tanβ > 8 and tanβ > 21 are excluded at the 95% confidence level for neutral Higgs boson masses of 1.0 and 1.5 TeV, respectively, where tanβ is the ratio of the vacuum expectation values of the two Higgs doublets.
••
University of California, Los Angeles1, University of Copenhagen2, Fermilab3, École Polytechnique Fédérale de Lausanne4, University of Chicago5, University of Liège6, European Southern Observatory7, University of California, Davis8, Inter-University Centre for Astronomy and Astrophysics9, Andrés Bello National University10, University of Cambridge11, University of Tokyo12, University of Wisconsin-Madison13, University College London14, University of Pennsylvania15, SLAC National Accelerator Laboratory16, University of Illinois at Urbana–Champaign17, IFAE18, Spanish National Research Council19, INAF20, Indian Institute of Technology, Hyderabad21, Ludwig Maximilian University of Munich22, University of Michigan23, Autonomous University of Madrid24, Santa Cruz Institute for Particle Physics25, Ohio State University26, Smithsonian Institution27, University of Arizona28, University of São Paulo29, Texas A&M University30, Princeton University31, University of Sussex32, Universidade Federal do Rio Grande do Sul33, Duke University34, University of Southampton35, Brandeis University36, Oak Ridge National Laboratory37
TL;DR: In this paper, a blind time-delay cosmographic analysis for the lens system DES J0408−5354 is presented, which combines the measured time delays, line-of-sight central velocity dispersion of the deflector, and statistically constrained external convergence with the lens models to estimate two cosmological distances.
Abstract: We present a blind time-delay cosmographic analysis for the lens system DES J0408−5354. This system is extraordinary for the presence of two sets of multiple images at different redshifts, which provide the opportunity to obtain more information at the cost of increased modelling complexity with respect to previously analysed systems. We perform detailed modelling of the mass distribution for this lens system using three band Hubble Space Telescope imaging. We combine the measured time delays, line-of-sight central velocity dispersion of the deflector, and statistically constrained external convergence with our lens models to estimate two cosmological distances. We measure the ‘effective’ time-delay distance corresponding to the redshifts of the deflector and the lensed quasar, D^eff_Δt = 3382(+146/−115) Mpc, and the angular diameter distance to the deflector, D_d = 1711(+376/−280) Mpc, with covariance between the two distances. From these constraints on the cosmological distances, we infer the Hubble constant H0 = 74.2(+2.7/−3.0) km s⁻¹ Mpc⁻¹ assuming a flat ΛCDM cosmology and a uniform prior for Ωm as Ωm ~ U(0.05, 0.5). This measurement gives the most precise constraint on H0 to date from a single lens. Our measurement is consistent with that obtained from the previous sample of six lenses analysed by the H0 Lenses in COSMOGRAIL’s Wellspring (H0LiCOW) collaboration. It is also consistent with measurements of H0 based on the local distance ladder, reinforcing the tension with the inference from early Universe probes, for example, with 2.2σ discrepancy from the cosmic microwave background measurement.
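For readers who want to see how a time-delay distance maps onto H0, a minimal flat-ΛCDM sketch is below. It is not the paper's analysis: the redshifts (z_d ≈ 0.597, z_s ≈ 2.375 for DES J0408−5354) are approximate values assumed here, and the calculation omits the external-convergence and mass-model machinery the authors use.

```python
import math

C_KMS = 299_792.458  # speed of light, km/s

def comoving_distance(z, h0=74.2, omega_m=0.3, n=2000):
    """Line-of-sight comoving distance in Mpc for flat LCDM,
    via composite Simpson integration of dz / E(z)."""
    e = lambda zz: math.sqrt(omega_m * (1 + zz) ** 3 + (1 - omega_m))
    h = z / n  # n must be even
    s = 1 / e(0) + 1 / e(z)
    for i in range(1, n):
        s += (4 if i % 2 else 2) / e(i * h)
    return (C_KMS / h0) * (h / 3) * s

def time_delay_distance(z_d, z_s, **kw):
    """D_dt = (1 + z_d) * D_d * D_s / D_ds (flat universe)."""
    dc_d, dc_s = comoving_distance(z_d, **kw), comoving_distance(z_s, **kw)
    d_d = dc_d / (1 + z_d)                # angular diameter distance to deflector
    d_s = dc_s / (1 + z_s)                # ... to source
    d_ds = (dc_s - dc_d) / (1 + z_s)      # ... between deflector and source
    return (1 + z_d) * d_d * d_s / d_ds

# Assumed approximate redshifts for DES J0408-5354
print(f"D_dt ~ {time_delay_distance(0.597, 2.375):.0f} Mpc")
```

Because D_Δt scales as 1/H0 at fixed redshifts and Ωm, matching the measured D^eff_Δt ≈ 3382 Mpc is what pins down H0 near 74 km s⁻¹ Mpc⁻¹ in the inference.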
••
University of São Paulo1, Spanish National Research Council2, Fermilab3, Stanford University4, Autonomous University of Madrid5, University of Portsmouth6, University of Wisconsin-Madison7, University of Sussex8, University of Pennsylvania9, Pierre-and-Marie-Curie University10, Institut d'Astrophysique de Paris11, Argonne National Laboratory12, Ludwig Maximilian University of Munich13, University College London14, University of Illinois at Urbana–Champaign15, University of Chicago16, University of Michigan17, Ohio State University18, University of Queensland19, Indian Institute of Technology, Hyderabad20, Carnegie Mellon University21, University of Arizona22, California Institute of Technology23, University of California, Santa Cruz24, University of Oslo25, University of Cambridge26, ETH Zurich27, Max Planck Society28, Harvard University29, Macquarie University30, Lowell Observatory31, Carnegie Institution for Science32, Princeton University33, Australian National University34, Texas A&M University35, University of Trieste36, Duke University37, Brookhaven National Laboratory38, Austin Peay State University39, University of Southampton40, Oak Ridge National Laboratory41, Stony Brook University42, University of Edinburgh43
TL;DR: In this paper, a joint analysis of the counts and weak lensing signal of redMaPPer clusters selected from the DES Year 1 dataset was performed using the same shear and source photometric redshift estimates as were used in the DES combined probes analysis.
Abstract: We perform a joint analysis of the counts and weak lensing signal of redMaPPer clusters selected from the Dark Energy Survey (DES) Year 1 dataset. Our analysis uses the same shear and source photometric redshift estimates as were used in the DES combined probes analysis. Our analysis results in surprisingly low values for S8 = σ8(Ωm/0.3)^0.5 = 0.65 ± 0.04, driven by a low matter density parameter, Ωm = 0.179(+0.031/−0.038), with σ8–Ωm posteriors in 2.4σ tension with the DES Y1 3x2pt results, and in 5.6σ tension with the Planck CMB analysis. These results include the impact of post-unblinding changes to the analysis, which did not improve the level of consistency with other data sets compared to the results obtained at the unblinding. The fact that multiple cosmological probes (supernovae, baryon acoustic oscillations, cosmic shear, galaxy clustering and CMB anisotropies), and other galaxy cluster analyses all favor significantly higher matter densities suggests the presence of systematic errors in the data or an incomplete modeling of the relevant physics. Cross-checks with X-ray and microwave data, as well as independent constraints on the observable–mass relation from Sunyaev-Zeldovich selected clusters, suggest that the discrepancy resides in our modeling of the weak lensing signal rather than the cluster abundance. Repeating our analysis using a higher richness threshold (λ ≥ 30) significantly reduces the tension with other probes, and points to one or more richness-dependent effects not captured by our model.
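The S8 definition quoted above is easy to evaluate directly; the σ8 value used below is illustrative only, since the abstract quotes S8 and Ωm but not σ8 itself:

```python
def s8(sigma8, omega_m):
    """S8 = sigma8 * (Omega_m / 0.3)**0.5, the combination that cluster
    counts and weak lensing constrain most directly."""
    return sigma8 * (omega_m / 0.3) ** 0.5

# Illustrative: with the low Omega_m = 0.179 preferred by this analysis,
# even a typical sigma8 ~ 0.81 already gives a low S8 (~0.63), close to
# the quoted 0.65 +/- 0.04.
print(f"S8 = {s8(0.81, 0.179):.2f}")
```

This makes concrete why the low S8 is "driven by a low matter density parameter": the (Ωm/0.3)^0.5 factor alone pulls S8 well below σ8.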