Showing papers by "Stockholm University", published in 2016
••
TL;DR: In this article, the authors present a cosmological analysis based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation.
Abstract: This paper presents cosmological results based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation. Our results are in very good agreement with the 2013 analysis of the Planck nominal-mission temperature data, but with increased precision. The temperature and polarization power spectra are consistent with the standard spatially-flat 6-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations (denoted “base ΛCDM” in this paper). From the Planck temperature data combined with Planck lensing, for this cosmology we find a Hubble constant, H_0 = (67.8 ± 0.9) km s^-1 Mpc^-1, a matter density parameter Ω_m = 0.308 ± 0.012, and a tilted scalar spectral index with n_s = 0.968 ± 0.006, consistent with the 2013 analysis. Note that in this abstract we quote 68% confidence limits on measured parameters and 95% upper limits on other parameters. We present the first results of polarization measurements with the Low Frequency Instrument at large angular scales. Combined with the Planck temperature and lensing data, these measurements give a reionization optical depth of τ = 0.066 ± 0.016, corresponding to a reionization redshift of . These results are consistent with those from WMAP polarization measurements cleaned for dust emission using 353-GHz polarization maps from the High Frequency Instrument. We find no evidence for any departure from base ΛCDM in the neutrino sector of the theory; for example, combining Planck observations with other astrophysical data we find N_eff = 3.15 ± 0.23 for the effective number of relativistic degrees of freedom, consistent with the value N_eff = 3.046 of the Standard Model of particle physics. The sum of neutrino masses is constrained to ∑m_ν < 0.23 eV. The spatial curvature of our Universe is found to be very close to zero, with |Ω_K| < 0.005.
Adding a tensor component as a single-parameter extension to base ΛCDM we find an upper limit on the tensor-to-scalar ratio of r_0.002 < 0.11, consistent with the Planck 2013 results and consistent with the B-mode polarization constraints from a joint analysis of BICEP2, Keck Array, and Planck (BKP) data. Adding the BKP B-mode data to our analysis leads to a tighter constraint of r_0.002 < 0.09 and disfavours inflationary models with a V(φ) ∝ φ^2 potential. The addition of Planck polarization data leads to strong constraints on deviations from a purely adiabatic spectrum of fluctuations. We find no evidence for any contribution from isocurvature perturbations or from cosmic defects. Combining Planck data with other astrophysical data, including Type Ia supernovae, the equation of state of dark energy is constrained to w = −1.006 ± 0.045, consistent with the expected value for a cosmological constant. The standard big bang nucleosynthesis predictions for the helium and deuterium abundances for the best-fit Planck base ΛCDM cosmology are in excellent agreement with observations. We also present constraints on annihilating dark matter and on possible deviations from the standard recombination history. In neither case do we find evidence for new physics. The Planck results for base ΛCDM are in good agreement with baryon acoustic oscillation data and with the JLA sample of Type Ia supernovae. However, as in the 2013 analysis, the amplitude of the fluctuation spectrum is found to be higher than inferred from some analyses of rich cluster counts and weak gravitational lensing. We show that these tensions cannot easily be resolved with simple modifications of the base ΛCDM cosmology. Apart from these tensions, the base ΛCDM cosmology provides an excellent description of the Planck CMB observations and many other astrophysical data sets.
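For orientation, the disfavouring of the quadratic potential follows from standard single-field slow-roll predictions (textbook results, not stated in the abstract): for N e-folds of inflation,

```latex
V(\phi) \propto \phi^2 \quad\Rightarrow\quad n_s \simeq 1 - \frac{2}{N}, \qquad r \simeq \frac{8}{N}
```

so N = 50–60 gives r ≈ 0.13–0.16, which lies above the quoted bound r_0.002 < 0.09.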
10,728 citations
••
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macro-autophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes.
For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy.
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.
5,187 citations
••
TL;DR: A basic standard acoustic parameter set for various areas of automatic voice analysis, such as paralinguistic or clinical speech analysis, is proposed and intended to provide a common baseline for evaluation of future research and eliminate differences caused by varying parameter sets or even different implementations of the same parameters.
Abstract: Work on voice sciences over recent decades has led to a proliferation of acoustic parameters that are used quite selectively and are not always extracted in a similar fashion. With many independent teams working in different research areas, shared standards become an essential safeguard to ensure compliance with state-of-the-art methods allowing appropriate comparison of results across studies and potential integration and combination of extraction and recognition systems. In this paper we propose a basic standard acoustic parameter set for various areas of automatic voice analysis, such as paralinguistic or clinical speech analysis. In contrast to a large brute-force parameter set, we present a minimalistic set of voice parameters here. These were selected based on a) their potential to index affective physiological changes in voice production, b) their proven value in former studies as well as their automatic extractability, and c) their theoretical significance. The set is intended to provide a common baseline for evaluation of future research and eliminate differences caused by varying parameter sets or even different implementations of the same parameters. Our implementation is publicly available with the openSMILE toolkit. Comparative evaluations of the proposed feature set and large baseline feature sets of INTERSPEECH challenges show a high performance of the proposed set in relation to its size.
1,158 citations
••
TL;DR: It is demonstrated that large drug and dye molecules can be encapsulated in zeolitic imidazolate framework (ZIF) crystals, and it is shown that ZIF-8 crystals loaded with the anticancer drug doxorubicin (DOX) are efficient drug delivery vehicles in cancer therapy using pH-responsive release.
Abstract: Many medical and chemical applications require target molecules to be delivered in a controlled manner at precise locations. Metal-organic frameworks (MOFs) have high porosity, large surface area, and tunable functionality and are promising carriers for such purposes. Current approaches for incorporating target molecules are based on multistep postfunctionalization. Here, we report a novel approach that combines MOF synthesis and molecule encapsulation in a one-pot process. We demonstrate that large drug and dye molecules can be encapsulated in zeolitic imidazolate framework (ZIF) crystals. The molecules are homogeneously distributed within the crystals, and their loadings can be tuned. We show that ZIF-8 crystals loaded with the anticancer drug doxorubicin (DOX) are efficient drug delivery vehicles in cancer therapy using pH-responsive release. Their efficacy on breast cancer cell lines is higher than that of free DOX. Our one-pot process opens new possibilities to construct multifunctional delivery systems for a wide range of applications.
947 citations
••
TL;DR: In this article, the authors present the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties.
Abstract: This paper presents the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties, both instrumental and astrophysical in nature. They are based on the same hybrid approach used for the previous release, i.e., a pixel-based likelihood at low multipoles (l < 30) and a Gaussian approximation to the distribution of cross-power spectra at higher multipoles. The main improvements are the use of more and better processed data and of Planck polarization information, along with more detailed models of foregrounds and instrumental uncertainties. The increased redundancy brought by more than doubling the amount of data analysed enables further consistency checks and enhanced immunity to systematic effects. It also improves the constraining power of Planck, in particular with regard to small-scale foreground properties. Progress in the modelling of foreground emission enables the retention of a larger fraction of the sky to determine the properties of the CMB, which also contributes to the enhanced precision of the spectra. Improvements in data processing and instrumental modelling further reduce uncertainties. Extensive tests establish the robustness and accuracy of the likelihood results, from temperature alone, from polarization alone, and from their combination. For temperature, we also perform a full likelihood analysis of realistic end-to-end simulations of the instrumental response to the sky, which were fed into the actual data processing pipeline; this does not reveal biases from residual low-level instrumental systematics. Even with the increase in precision and robustness, the ΛCDM cosmological model continues to offer a very good fit to the Planck data. The slope of the primordial scalar fluctuations, n_s, is confirmed smaller than unity at more than 5σ from Planck alone.
We further validate the robustness of the likelihood results against specific extensions to the baseline cosmology, which are particularly sensitive to data at high multipoles. For instance, the effective number of neutrino species remains compatible with the canonical value of 3.046. For this first detailed analysis of Planck polarization spectra, we concentrate at high multipoles on the E modes, leaving the analysis of the weaker B modes to future work. At low multipoles we use temperature maps at all Planck frequencies along with a subset of polarization data. These data take advantage of Planck’s wide frequency coverage to improve the separation of CMB and foreground emission. Within the baseline ΛCDM cosmology this requires τ = 0.078 ± 0.019 for the reionization optical depth, which is significantly lower than estimates without the use of high-frequency data for explicit monitoring of dust emission. At high multipoles we detect residual systematic errors in E polarization, typically at the μK^2 level; we therefore choose to retain temperature information alone for high multipoles as the recommended baseline, in particular for testing non-minimal models. Nevertheless, the high-multipole polarization spectra from Planck are already good enough to enable a separate high-precision determination of the parameters of the ΛCDM model, showing consistency with those established independently from temperature information alone.
932 citations
••
TL;DR: The possibility that the dark matter comprises primordial black holes (PBHs) is considered in this paper, with particular emphasis on the currently allowed mass windows at 10^16-10^17 g, 10^20-10^24 g and 1- ...
Abstract: The possibility that the dark matter comprises primordial black holes (PBHs) is considered, with particular emphasis on the currently allowed mass windows at 10^16-10^17 g, 10^20-10^24 g and 1- ...
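For readers more used to astrophysical units, the two fully quoted mass windows can be converted from grams to solar masses; this is a simple illustration using the standard solar-mass value, which is an assumption here rather than a number from the paper:

```python
# Convert the PBH mass windows quoted in grams to solar masses.
M_SUN_G = 1.989e33  # solar mass in grams (standard value; assumption, not from the paper)

def grams_to_msun(mass_g):
    """Return a mass given in grams expressed in solar masses."""
    return mass_g / M_SUN_G

# The two fully quoted windows from the abstract, in grams.
windows_g = [(1e16, 1e17), (1e20, 1e24)]
windows_msun = [(grams_to_msun(lo), grams_to_msun(hi)) for lo, hi in windows_g]
# 10^16-10^17 g corresponds to roughly 5e-18 to 5e-17 solar masses.
```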
915 citations
••
TL;DR: In this paper, the implications of Planck data for models of dark energy (DE) and modified gravity (MG) beyond the standard cosmological constant scenario were studied, and it was shown that the density of DE at early times has to be below 2% of the critical density, even when forced to play a role for z < 50.
Abstract: We study the implications of Planck data for models of dark energy (DE) and modified gravity (MG) beyond the standard cosmological constant scenario. We start with cases where the DE only directly affects the background evolution, considering Taylor expansions of the equation of state w(a), as well as principal component analysis and parameterizations related to the potential of a minimally coupled DE scalar field. When estimating the density of DE at early times, we significantly improve present constraints and find that it has to be below ~2% (at 95% confidence) of the critical density, even when forced to play a role for z < 50 only. We then move to general parameterizations of the DE or MG perturbations that encompass both effective field theories and the phenomenology of gravitational potentials in MG models. Lastly, we test a range of specific models, such as k-essence, f(R) theories, and coupled DE. In addition to the latest Planck data, for our main analyses, we use background constraints from baryonic acoustic oscillations, type-Ia supernovae, and local measurements of the Hubble constant. We further show the impact of measurements of the cosmological perturbations, such as redshift-space distortions and weak gravitational lensing. These additional probes are important tools for testing MG models and for breaking degeneracies that are still present in the combination of Planck and background data sets. All results that include only background parameterizations (expansion of the equation of state, early DE, general potentials in minimally-coupled scalar fields or principal component analysis) are in agreement with ΛCDM. 
When testing models that also change perturbations (even when the background is fixed to ΛCDM), some tensions appear in a few scenarios: the maximum one found is ~2σ for Planck TT+lowP when parameterizing observables related to the gravitational potentials with a chosen time dependence; the tension increases to, at most, 3σ when external data sets are included. It however disappears when including CMB lensing.
816 citations
••
TL;DR: In this paper, the authors present a new time-slice reconstruction of the Eurasian ice sheets (British-Irish, Svalbard-Barents-Kara Seas and Scandinavian) documenting the spatial evolution of these interconnected ice sheets every 1000 years from 25 to 10 ka, and at four selected time periods back to 40 ka.
Abstract: We present a new time-slice reconstruction of the Eurasian ice sheets (British–Irish, Svalbard–Barents–Kara Seas and Scandinavian) documenting the spatial evolution of these interconnected ice sheets every 1000 years from 25 to 10 ka, and at four selected time periods back to 40 ka. The time-slice maps of ice-sheet extent are based on a new Geographical Information System (GIS) database, where we have collected published numerical dates constraining the timing of ice-sheet advance and retreat, and additionally geomorphological and geological evidence contained within the existing literature. We integrate all uncertainty estimates into three ice-margin lines for each time-slice; a most-credible line, derived from our assessment of all available evidence, with bounding maximum and minimum limits allowed by existing data. This approach was motivated by the demands of glaciological, isostatic and climate modelling and to clearly display limitations in knowledge. The timing of advance and retreat were both remarkably spatially variable across the ice-sheet area. According to our compilation the westernmost limit along the British–Irish and Norwegian continental shelf was reached up to 7000 years earlier (at c. 27–26 ka) than the eastern limit on the Russian Plain (at c. 20–19 ka). The Eurasian ice sheet complex as a whole attained its maximum extent (5.5 Mkm2) and volume (~24 m Sea Level Equivalent) at c. 21 ka. Our continental-scale approach highlights instances of conflicting evidence and gaps in the ice-sheet chronology where uncertainties remain large and should be a focus for future research. Largest uncertainties coincide with locations presently below sea level and where contradicting evidence exists. This first version of the database and time-slices (DATED-1) has a census date of 1 January 2013 and both are available to download via the Bjerknes Climate Data Centre and PANGAEA (www.bcdc.no; http://doi.pangaea.de/10.1594/PANGAEA.848117).
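As a rough plausibility check on the quoted maximum extent (5.5 Mkm^2) and volume (~24 m sea-level equivalent), one can back out a mean ice thickness; the ocean area and densities below are standard assumed values, not figures from the paper:

```python
# Back out a mean ice thickness from the quoted extent and sea-level-equivalent volume.
OCEAN_AREA_KM2 = 3.61e8             # global ocean area in km^2 (assumed standard value)
RHO_ICE, RHO_WATER = 917.0, 1000.0  # densities in kg/m^3 (assumed standard values)

sle_m = 24.0        # quoted volume, metres of sea-level equivalent
extent_km2 = 5.5e6  # quoted maximum extent, km^2

water_volume_km3 = (sle_m / 1000.0) * OCEAN_AREA_KM2     # melted-water volume
ice_volume_km3 = water_volume_km3 * RHO_WATER / RHO_ICE  # corresponding ice volume
mean_thickness_km = ice_volume_km3 / extent_km2
# Gives a mean thickness of roughly 1.7 km, a plausible ice-sheet scale.
```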
757 citations
••
University of Graz1, Medical University of Graz2, Charité3, Hannover Medical School4, Rutgers University5, Joanneum Research6, University of Fribourg7, University of Freiburg8, Stockholm University9, University of Padua10, University of Kent11, Technische Universität München12, Ludwig Maximilian University of Munich13, King's College London14, University of Cambridge15, Innsbruck Medical University16, Ruhr University Bochum17, Free University of Berlin18
TL;DR: It is shown that oral supplementation of the natural polyamine spermidine extends the lifespan of mice and exerts cardioprotective effects, reducing cardiac hypertrophy and preserving diastolic function in old mice, and suggests a new and feasible strategy for protection against cardiovascular disease.
Abstract: Aging is associated with an increased risk of cardiovascular disease and death. Here we show that oral supplementation of the natural polyamine spermidine extends the lifespan of mice and exerts cardioprotective effects, reducing cardiac hypertrophy and preserving diastolic function in old mice. Spermidine feeding enhanced cardiac autophagy, mitophagy and mitochondrial respiration, and it also improved the mechano-elastical properties of cardiomyocytes in vivo, coinciding with increased titin phosphorylation and suppressed subclinical inflammation. Spermidine feeding failed to provide cardioprotection in mice that lack the autophagy-related protein Atg5 in cardiomyocytes. In Dahl salt-sensitive rats that were fed a high-salt diet, a model for hypertension-induced congestive heart failure, spermidine feeding reduced systemic blood pressure, increased titin phosphorylation and prevented cardiac hypertrophy and a decline in diastolic function, thus delaying the progression to heart failure. In humans, high levels of dietary spermidine, as assessed from food questionnaires, correlated with reduced blood pressure and a lower incidence of cardiovascular disease. Our results suggest a new and feasible strategy for protection against cardiovascular disease.
721 citations
••
TL;DR: In this article, the Planck full mission cosmic microwave background (CMB) temperature and E-mode polarization maps are analysed to obtain constraints on primordial non-Gaussianity (NG).
Abstract: The Planck full mission cosmic microwave background (CMB) temperature and E-mode polarization maps are analysed to obtain constraints on primordial non-Gaussianity (NG). Using three classes of optimal bispectrum estimators – separable template-fitting (KSW), binned, and modal – we obtain consistent values for the primordial local, equilateral, and orthogonal bispectrum amplitudes, quoting as our final result from temperature alone f_NL^local = 2.5 ± 5.7, f_NL^equil = −16 ± 70, and f_NL^ortho = −34 ± 32 (68% CL, statistical). Combining temperature and polarization data we obtain f_NL^local = 0.8 ± 5.0, f_NL^equil = −4 ± 43, and f_NL^ortho = −26 ± 21 (68% CL, statistical). The results are based on comprehensive cross-validation of these estimators on Gaussian and non-Gaussian simulations, are stable across component separation techniques, pass an extensive suite of tests, and are consistent with estimators based on measuring the Minkowski functionals of the CMB. The effect of time-domain de-glitching systematics on the bispectrum is negligible. In spite of these test outcomes we conservatively label the results including polarization data as preliminary, owing to a known mismatch of the noise model in simulations and the data. Beyond estimates of individual shape amplitudes, we present model-independent, three-dimensional reconstructions of the Planck CMB bispectrum and derive constraints on early universe scenarios that generate primordial NG, including general single-field models of inflation, axion inflation, initial state modifications, models producing parity-violating tensor bispectra, and directionally dependent vector models. We present a wide survey of scale-dependent feature and resonance models, accounting for the “look elsewhere” effect in estimating the statistical significance of features. We also look for isocurvature NG, and find no signal, but we obtain constraints that improve significantly with the inclusion of polarization.
The primordial trispectrum amplitude in the local model is constrained to be
652 citations
••
Roma Tre University1, Stockholm University2, Arizona State University3, University of Maryland, College Park4, Institut Universitaire de France5, Indian Institute of Technology Delhi6, Boston University7, University of Innsbruck8, Princeton University9, University of Tokyo10, Royal Institute of Technology11, Complutense University of Madrid12, Peking University13
TL;DR: The behavior of water in the regime from ambient conditions to the deeply supercooled region is described and some of the possible experimental lines of research that are essential to complete a global picture that still needs to be completed.
Abstract: Water is the most abundant liquid on earth and also the substance with the largest number of anomalies in its properties. It is a prerequisite for life and as such a most important subject of current research in chemical physics and physical chemistry. In spite of its simplicity as a liquid, it has an enormously rich phase diagram where different types of ices, amorphous phases, and anomalies disclose a path that points to unique thermodynamics of its supercooled liquid state that still hides many unraveled secrets. In this review we describe the behavior of water in the regime from ambient conditions to the deeply supercooled region. The review describes simulations and experiments on this anomalous liquid. Several scenarios have been proposed to explain the anomalous properties that become strongly enhanced in the supercooled region. Among those, the second critical-point scenario has been investigated extensively, and at present most experimental evidence points to this scenario. Starting from very low ...
••
Aix-Marseille University1, University of Oklahoma2, University of Iowa3, Azerbaijan National Academy of Sciences4, Université Paris-Saclay5, University of Amsterdam6, University of California, Santa Cruz7, University of Sussex8, Tel Aviv University9, Technion – Israel Institute of Technology10, University of Oregon11, Stockholm University12, King's College London13, International Centre for Theoretical Physics14, AGH University of Science and Technology15, Brookhaven National Laboratory16, Northern Illinois University17, Ludwig Maximilian University of Munich18, Rutherford Appleton Laboratory19, University of Liverpool20, University of Belgrade21, University of Göttingen22, University of Granada23, Boston University24, Joint Institute for Nuclear Research25, University of Rome Tor Vergata26, Lund University27, University of Bologna28, University of Victoria29, University of Grenoble30, National University of La Plata31, CERN32, National Technical University of Athens33, University of Salento34, University of Chicago35, Columbia University36, University of Birmingham37, University of Naples Federico II38, University of Copenhagen39, University of Washington40, University of Valencia41, Lawrence Berkeley National Laboratory42, Federal University of Rio de Janeiro43, Brandeis University44, University of Michigan45, University of Coimbra46, University of Lisbon47, University of Sheffield48, University of Geneva49, University of Texas at Austin50, Heidelberg University51, University of Milan52, National and Kapodistrian University of Athens53, Dresden University of Technology54, Novosibirsk State University55, IFAE56
TL;DR: In this article, a combined ATLAS and CMS measurements of the Higgs boson production and decay rates, as well as constraints on its couplings to vector bosons and fermions, are presented.
Abstract: Combined ATLAS and CMS measurements of the Higgs boson production and decay rates, as well as constraints on its couplings to vector bosons and fermions, are presented. The combination is based on the analysis of five production processes, namely gluon fusion, vector boson fusion, and associated production with a $W$ or a $Z$ boson or a pair of top quarks, and of the six decay modes $H \to ZZ, WW$, $\gamma\gamma, \tau\tau, bb$, and $\mu\mu$. All results are reported assuming a value of 125.09 GeV for the Higgs boson mass, the result of the combined measurement by the ATLAS and CMS experiments. The analysis uses the CERN LHC proton--proton collision data recorded by the ATLAS and CMS experiments in 2011 and 2012, corresponding to integrated luminosities per experiment of approximately 5 fb$^{-1}$ at $\sqrt{s}=7$ TeV and 20 fb$^{-1}$ at $\sqrt{s} = 8$ TeV. The Higgs boson production and decay rates measured by the two experiments are combined within the context of three generic parameterisations: two based on cross sections and branching fractions, and one on ratios of coupling modifiers. Several interpretations of the measurements with more model-dependent parameterisations are also given. The combined signal yield relative to the Standard Model prediction is measured to be 1.09 $\pm$ 0.11. The combined measurements lead to observed significances for the vector boson fusion production process and for the $H \to \tau\tau$ decay of $5.4$ and $5.5$ standard deviations, respectively. The data are consistent with the Standard Model predictions for all parameterisations considered.
••
TL;DR: Evidence shows that the effect of shift work on sleep mainly concerns acute sleep loss in connection with night shifts and early morning shifts, and Laboratory studies indicate that cardiometabolic stress and cognitive impairments are increased by shift work, as well as by sleep loss.
Abstract: This review summarises the literature on shift work and its relation to insufficient sleep, chronic diseases, and accidents. It is based on 38 meta-analyses and 24 systematic reviews, with additional narrative reviews and articles used for outlining possible mechanisms by which shift work may cause accidents and adverse health. Evidence shows that the effect of shift work on sleep mainly concerns acute sleep loss in connection with night shifts and early morning shifts. A link also exists between shift work and accidents, type 2 diabetes (relative risk range 1.09-1.40), weight gain, coronary heart disease (relative risk 1.23), stroke (relative risk 1.05), and cancer (relative risk range 1.01-1.32), although the original studies showed mixed results. The relations of shift work to cardiometabolic diseases and accidents mimic those with insufficient sleep. Laboratory studies indicate that cardiometabolic stress and cognitive impairments are increased by shift work, as well as by sleep loss. Given that the health and safety consequences of shift work and insufficient sleep are very similar, they are likely to share common mechanisms. However, additional research is needed to determine whether insufficient sleep is a causal pathway for the adverse health effects associated with shift work.
••
Columbia University1, University of Amsterdam2, University of Bologna3, University of Mainz4, University of Coimbra5, Weizmann Institute of Science6, New York University Abu Dhabi7, University of Zurich8, Stockholm University9, Rensselaer Polytechnic Institute10, Max Planck Society11, University of Münster12, University of Bern13, Purdue University14, École des mines de Nantes15, University of California, Los Angeles16, Rice University17
TL;DR: In this article, the expected sensitivity of the XENON1T experiment to the spin-independent WIMP-nucleon interaction cross section was investigated based on Monte Carlo predictions of the electronic and nuclear recoil backgrounds.
Abstract: The XENON1T experiment is currently in the commissioning phase at the Laboratori Nazionali del Gran Sasso, Italy. In this article we study the experiment's expected sensitivity to the spin-independent WIMP-nucleon interaction cross section, based on Monte Carlo predictions of the electronic and nuclear recoil backgrounds. The total electronic recoil background in 1 tonne fiducial volume and (1, 12) keV electronic recoil equivalent energy region, before applying any selection to discriminate between electronic and nuclear recoils, is (1.80 ± 0.15) · 10^−4 (kg·day·keV)^−1, mainly due to the decay of ^222Rn daughters inside the xenon target. The nuclear recoil background in the corresponding nuclear recoil equivalent energy region (4, 50) keV, is composed of (0.6 ± 0.1) (t·y)^−1 from radiogenic neutrons, (1.8 ± 0.3) · 10^−2 (t·y)^−1 from coherent scattering of neutrinos, and less than 0.01 (t·y)^−1 from muon-induced neutrons. The sensitivity of XENON1T is calculated with the Profile Likelihood Ratio method, after converting the deposited energy of electronic and nuclear recoils into the scintillation and ionization signals seen in the detector. We take into account the systematic uncertainties on the photon and electron emission model, and on the estimation of the backgrounds, treated as nuisance parameters. The main contribution comes from the relative scintillation efficiency L_eff, which affects both the signal from WIMPs and the nuclear recoil backgrounds. After a 2 y measurement in 1 t fiducial volume, the sensitivity reaches a minimum cross section of 1.6 · 10^−47 cm^2 at m_χ = 50 GeV/c^2.
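The quoted electronic-recoil background rate can be turned into an expected event count for the stated exposure; this back-of-envelope sketch uses only numbers from the abstract and deliberately ignores the electronic/nuclear recoil discrimination cut:

```python
# Expected electronic-recoil (ER) background events before discrimination,
# from the rate, fiducial mass, exposure time, and energy window in the abstract.
rate = 1.80e-4      # events / (kg * day * keV), quoted total ER background rate
mass_kg = 1000.0    # 1 tonne fiducial volume
days = 2 * 365      # 2-year exposure used for the sensitivity projection
window_keV = 12 - 1 # width of the (1, 12) keV ER-equivalent energy region
expected_er_events = rate * mass_kg * days * window_keV
# On the order of 1.4e3 ER events, which is why ER/NR discrimination is essential.
```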
••
University of Amsterdam1, University of Bologna2, University of Mainz3, University of Coimbra4, University of Bern5, Columbia University6, Weizmann Institute of Science7, New York University Abu Dhabi8, University of Zurich9, Rensselaer Polytechnic Institute10, Max Planck Society11, Stockholm University12, University of Nantes13, Karlsruhe Institute of Technology14, University of Münster15, University of Chicago16, Arizona State University17, Purdue University18, Rice University19, University of California, San Diego20, University of Freiburg21, Dresden University of Technology22, Imperial College London23, University of California, Los Angeles24
TL;DR: DARk matter WImp search with liquid xenoN (DARWIN) as mentioned in this paper is an experiment for the direct detection of dark matter using a multi-ton liquid xenon time projection chamber at its core.
Abstract: DARk matter WImp search with liquid xenoN (DARWIN) will be an experiment for the direct detection of dark matter using a multi-ton liquid xenon time projection chamber at its core. Its primary g ...
••
Oeschger Centre for Climate Change Research1, Siberian Federal University2, Stockholm University3, Harvard University4, Princeton University5, Paul Scherrer Institute6, Max Planck Society7, University of Mainz8, University of Lausanne9, University UCINF10, University of Giessen11, ETH Zurich12, Sukachev Institute of Forest13
TL;DR: In this paper, the authors used tree-ring chronologies from the Russian Altai and European Alps to reconstruct summer temperatures over the past two millennia and found an unprecedented, long-lasting and spatially synchronized cooling following a cluster of large volcanic eruptions in 536, 540 and 547 AD.
Abstract: Societal upheaval occurred across Eurasia in the sixth and seventh centuries. Tree-ring reconstructions suggest a period of pronounced cooling during this time associated with several volcanic eruptions. Climatic changes during the first half of the Common Era have been suggested to play a role in societal reorganizations in Europe1,2 and Asia3,4. In particular, the sixth century coincides with rising and falling civilizations1,2,3,4,5,6, pandemics7,8, human migration and political turmoil8,9,10,11,12,13. Our understanding of the magnitude and spatial extent as well as the possible causes and concurrences of climate change during this period is, however, still limited. Here we use tree-ring chronologies from the Russian Altai and European Alps to reconstruct summer temperatures over the past two millennia. We find an unprecedented, long-lasting and spatially synchronized cooling following a cluster of large volcanic eruptions in 536, 540 and 547 AD (ref. 14), which was probably sustained by ocean and sea-ice feedbacks15,16, as well as a solar minimum17. We thus identify the interval from 536 to about 660 AD as the Late Antique Little Ice Age. Spanning most of the Northern Hemisphere, we suggest that this cold phase be considered as an additional environmental factor contributing to the establishment of the Justinian plague7,8, transformation of the eastern Roman Empire and collapse of the Sasanian Empire1,2,5, movements out of the Asian steppe and Arabian Peninsula8,11,12, spread of Slavic-speaking peoples9,10 and political upheavals in China13.
••
TL;DR: In this paper, the authors consider the problem of diffuse astrophysical component separation, and process these maps within a Bayesian framework to derive an internally consistent set of full-sky astrophysical component maps.
Abstract: Planck has mapped the microwave sky in temperature over nine frequency bands between 30 and 857 GHz and in polarization over seven frequency bands between 30 and 353 GHz. In this paper we consider the problem of diffuse astrophysical component separation, and process these maps within a Bayesian framework to derive an internally consistent set of full-sky astrophysical component maps. Component separation dedicated to cosmic microwave background (CMB) reconstruction is described in a companion paper. For the temperature analysis, we combine the Planck observations with the 9-yr Wilkinson Microwave Anisotropy Probe (WMAP) sky maps and the Haslam et al. 408 MHz map, to derive a joint model of CMB, synchrotron, free-free, spinning dust, CO, line emission in the 94 and 100 GHz channels, and thermal dust emission. Full-sky maps are provided for each component, with an angular resolution varying between 7.5 arcmin and 1°. Global parameters (monopoles, dipoles, relative calibration, and bandpass errors) are fitted jointly with the sky model, and best-fit values are tabulated. For polarization, the model includes CMB, synchrotron, and thermal dust emission. These models provide excellent fits to the observed data, with rms temperature residuals smaller than 4μK over 93% of the sky for all Planck frequencies up to 353 GHz, and fractional errors smaller than 1% in the remaining 7% of the sky. The main limitations of the temperature model at the lower frequencies are internal degeneracies among the spinning dust, free-free, and synchrotron components; additional observations from external low-frequency experiments will be essential to break these degeneracies. The main limitations of the temperature model at the higher frequencies are uncertainties in the 545 and 857 GHz calibration and zero-points.
For polarization, the main outstanding issues are instrumental systematics in the 100–353 GHz bands on large angular scales in the form of temperature-to-polarization leakage, uncertainties in the analogue-to-digital conversion, and corrections for the very long time constant of the bolometer detectors, all of which are expected to improve in the near future.
••
TL;DR: In this paper, a feasibility study, supported by Fusion for Energy (F4E), was performed on fabricating ITER In-Vessel components by Selective Laser Melting (SLM); almost fully dense 316L stainless steel (SS316L) components were prepared from gas-atomized powder with optimized SLM processing parameters.
••
TL;DR: The most significant measurement of the cosmic microwave background (CMB) lensing potential at a level of 40σ using temperature and polarization data from the Planck 2015 full-mission release was presented in this article.
Abstract: We present the most significant measurement of the cosmic microwave background (CMB) lensing potential to date (at a level of 40σ), using temperature and polarization data from the Planck 2015 full-mission release. Using a polarization-only estimator, we detect lensing at a significance of 5σ. We cross-check the accuracy of our measurement using the wide frequency coverage and complementarity of the temperature and polarization measurements. Public products based on this measurement include an estimate of the lensing potential over approximately 70% of the sky, an estimate of the lensing potential power spectrum in bandpowers for the multipole range 40 ≤ L ≤ 400, and an associated likelihood for cosmological parameter constraints. We find good agreement between our measurement of the lensing potential power spectrum and that found in the ΛCDM model that best fits the Planck temperature and polarization power spectra. Using the lensing likelihood alone we obtain a percent-level measurement of the parameter combination σ8Ωm^0.25 = 0.591 ± 0.021. We combine our determination of the lensing potential with the E-mode polarization, also measured by Planck, to generate an estimate of the lensing B-mode. We show that this lensing B-mode estimate is correlated with the B-modes observed directly by Planck at the expected level and with a statistical significance of 10σ, confirming Planck’s sensitivity to this known sky signal. We also correlate our lensing potential estimate with the large-scale temperature anisotropies, detecting a cross-correlation at the 3σ level, as expected because of dark energy in the concordance ΛCDM model.
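A quick back-of-the-envelope use of the lensing-only constraint: pairing σ8Ωm^0.25 = 0.591 ± 0.021 with a fixed Ωm = 0.308 (the TT+lensing value quoted at the top of this collection) isolates σ8. The pairing and the fixed-Ωm error propagation are an illustration, not a fit from the paper:

```python
# Lensing-only constraint: sigma8 * Omega_m**0.25 = 0.591 +/- 0.021.
# Solve for sigma8 at an assumed, fixed Omega_m = 0.308 (so the error
# on the combination propagates directly to sigma8).
omega_m = 0.308
combo, sigma_combo = 0.591, 0.021

sigma8 = combo / omega_m**0.25
sigma8_err = sigma_combo / omega_m**0.25

print(f"sigma8 ≈ {sigma8:.3f} ± {sigma8_err:.3f}")
```

This gives σ8 ≈ 0.79, somewhat below but compatible with the value preferred by the primary CMB spectra at this precision.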
••
Paul Scherrer Institute1, Carnegie Mellon University2, CERN3, Goethe University Frankfurt4, University of Helsinki5, Stockholm University6, ETH Zurich7, Earth System Research Laboratory8, Cooperative Institute for Research in Environmental Sciences9, California Institute of Technology10, Helsinki Institute of Physics11, University of Innsbruck12, University of Eastern Finland13, Finnish Meteorological Institute14, National Center for Atmospheric Research15, Karlsruhe Institute of Technology16, University of Leeds17, University of California, Irvine18, University of Vienna19, University of Beira Interior20
TL;DR: It is shown that organic vapours alone can drive nucleation; a particle growth model is presented that quantitatively reproduces the measurements, and a parameterization of the first steps of growth is implemented in a global aerosol model, in which concentrations of atmospheric cloud condensation nuclei can change substantially in response.
Abstract: About half of present-day cloud condensation nuclei originate from atmospheric nucleation, frequently appearing as a burst of new particles near midday. Atmospheric observations show that the growth rate of new particles often accelerates when the diameter of the particles is between one and ten nanometres. In this critical size range, new particles are most likely to be lost by coagulation with pre-existing particles, thereby failing to form new cloud condensation nuclei that are typically 50 to 100 nanometres across. Sulfuric acid vapour is often involved in nucleation but is too scarce to explain most subsequent growth, leaving organic vapours as the most plausible alternative, at least in the planetary boundary layer. Although recent studies predict that low-volatility organic vapours contribute during initial growth, direct evidence has been lacking. The accelerating growth may result from increased photolytic production of condensable organic species in the afternoon, and the presence of a possible Kelvin (curvature) effect, which inhibits organic vapour condensation on the smallest particles (the nano-Köhler theory), has so far remained ambiguous. Here we present experiments performed in a large chamber under atmospheric conditions that investigate the role of organic vapours in the initial growth of nucleated organic particles in the absence of inorganic acids and bases such as sulfuric acid or ammonia and amines, respectively. Using data from the same set of experiments, it has been shown that organic vapours alone can drive nucleation. We focus on the growth of nucleated particles and find that the organic vapours that drive initial growth have extremely low volatilities (saturation concentration less than 10^-4.5 micrograms per cubic metre).
As the particles increase in size and the Kelvin barrier falls, subsequent growth is primarily due to more abundant organic vapours of slightly higher volatility (saturation concentrations of 10^-4.5 to 10^-0.5 micrograms per cubic metre). We present a particle growth model that quantitatively reproduces our measurements. Furthermore, we implement a parameterization of the first steps of growth in a global aerosol model and find that concentrations of atmospheric cloud condensation nuclei can change substantially in response, that is, by up to 50 per cent in comparison with previously assumed growth rate parameterizations.
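The Kelvin barrier invoked above can be made concrete with the Kelvin equation for the equilibrium vapour pressure over a curved surface. The surface tension, molar mass, density, and temperature below are rough assumed values for oxidized organic vapours, not numbers from the paper:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def kelvin_factor(d_m, sigma=0.03, M=0.30, rho=1200.0, T=278.0):
    """Factor by which the equilibrium vapour pressure over a droplet of
    diameter d_m (metres) exceeds that over a flat surface.
    sigma [N/m], M [kg/mol], rho [kg/m^3], T [K] are assumed values."""
    return math.exp(4.0 * sigma * M / (rho * R * T * d_m))

# Over a ~2 nm nucleated cluster the equilibrium vapour pressure is
# enhanced by orders of magnitude, so only extremely low-volatility
# vapours can condense; by ~50 nm the enhancement is nearly gone.
for d_nm in (2, 10, 50):
    print(f"{d_nm:3d} nm: Kelvin factor = {kelvin_factor(d_nm * 1e-9):.1f}")
```

With these assumed properties the factor is of order several hundred at 2 nm but close to unity at 50 nm, which is the qualitative behaviour behind the volatility thresholds quoted in the abstract.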
••
TL;DR: In this paper, the astrophysical muon neutrino data are found to be well described by an isotropic, unbroken power-law flux with a normalization at 100 TeV neutrino energy of (0.90 +0.30 −0.27) × 10⁻¹⁸ GeV⁻¹ cm⁻² s⁻¹ sr⁻¹ and a hard spectral index of γ = 2.13 ± 0.13.
Abstract: The IceCube Collaboration has previously discovered a high-energy astrophysical neutrino flux using neutrino events with interaction vertices contained within the instrumented volume of the IceCube detector. We present a complementary measurement using charged current muon neutrino events where the interaction vertex can be outside this volume. As a consequence of the large muon range the effective area is significantly larger, but the field of view is restricted to the Northern Hemisphere. IceCube data from 2009 through 2015 have been analyzed using a likelihood approach based on the reconstructed muon energy and zenith angle. At the highest neutrino energies, between 194 TeV and 7.8 PeV, a significant astrophysical contribution is observed, excluding a purely atmospheric origin of these events at 5.6σ significance. The data are well described by an isotropic, unbroken power-law flux with a normalization at 100 TeV neutrino energy of (0.90 +0.30 −0.27) × 10⁻¹⁸ GeV⁻¹ cm⁻² s⁻¹ sr⁻¹ and a hard spectral index of γ = 2.13 ± 0.13. The observed spectrum is harder in comparison to previous IceCube analyses with lower energy thresholds, which may indicate a break in the astrophysical neutrino spectrum of unknown origin. The highest-energy event observed has a reconstructed muon energy of (4.5 ± 1.2) PeV, which implies a probability of less than 0.005% for this event to be of atmospheric origin. Analyzing the arrival directions of all events with reconstructed muon energies above 200 TeV, no correlation with known γ-ray sources was found. Using the high statistics of atmospheric neutrinos, we report the current best constraints on a prompt atmospheric muon neutrino flux originating from charmed meson decays, which is below 1.06 in units of the flux normalization of the model in Enberg et al.
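The best-fit spectrum above is a single unbroken power law; as a small worked example (treating the quoted value as the muon-neutrino flux normalization at 100 TeV and ignoring the asymmetric uncertainties), it can be evaluated directly:

```python
def astro_flux(E_GeV, phi0=0.90e-18, gamma=2.13, E0_GeV=1e5):
    """Best-fit diffuse power-law flux in GeV^-1 cm^-2 s^-1 sr^-1,
    with normalization phi0 quoted at E0 = 100 TeV (= 1e5 GeV)."""
    return phi0 * (E_GeV / E0_GeV) ** (-gamma)

# E^2 dN/dE is the conventional way to plot such spectra, since it is
# roughly flat for gamma near 2:
for E in (1e5, 1e6, 1e7):  # 100 TeV, 1 PeV, 10 PeV
    print(f"E = {E:.0e} GeV: E^2 dN/dE = {E**2 * astro_flux(E):.2e} GeV cm^-2 s^-1 sr^-1")
```

For γ = 2.13, E² dN/dE falls by a factor of 10^0.13 ≈ 1.35 per decade in energy.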
••
Goethe University Frankfurt1, CERN2, Helsinki Institute of Physics3, University of Helsinki4, University of Leeds5, Paul Scherrer Institute6, University of Washington7, University of Innsbruck8, University of Lisbon9, ETH Zurich10, California Institute of Technology11, University of Eastern Finland12, Finnish Meteorological Institute13, Lebedev Physical Institute14, Stockholm University15, University of Vienna16, Leibniz Association17, University of Beira Interior18, Carnegie Mellon University19
TL;DR: Ion-induced nucleation of pure organic particles constitutes a potentially widespread source of aerosol particles in terrestrial environments with low sulfuric acid pollution.
Abstract: Atmospheric aerosols and their effect on clouds are thought to be important for anthropogenic radiative forcing of the climate, yet remain poorly understood. Globally, around half of cloud condensation nuclei originate from nucleation of atmospheric vapours. It is thought that sulfuric acid is essential to initiate most particle formation in the atmosphere, and that ions have a relatively minor role. Some laboratory studies, however, have reported organic particle formation without the intentional addition of sulfuric acid, although contamination could not be excluded. Here we present evidence for the formation of aerosol particles from highly oxidized biogenic vapours in the absence of sulfuric acid in a large chamber under atmospheric conditions. The highly oxygenated molecules (HOMs) are produced by ozonolysis of α-pinene. We find that ions from Galactic cosmic rays increase the nucleation rate by one to two orders of magnitude compared with neutral nucleation. Our experimental findings are supported by quantum chemical calculations of the cluster binding energies of representative HOMs. Ion-induced nucleation of pure organic particles constitutes a potentially widespread source of aerosol particles in terrestrial environments with low sulfuric acid pollution.
••
TL;DR: In this article, the authors present an international consensus on how urban ecology can advance along multiple research directions and suggest pathways for advancing urban ecology research to support the goals of improving urban sustainability and resilience, conserving urban biodiversity, and promoting human well-being on an urbanizing planet.
Abstract: Urban ecology is a field encompassing multiple disciplines and practical applications and has grown rapidly. However, the field is heterogeneous as a global inquiry with multiple theoretical and conceptual frameworks, variable research approaches, and a lack of coordination among multiple schools of thought and research foci. Here, we present an international consensus on how urban ecology can advance along multiple research directions. There is potential for the field to mature as a holistic, integrated science of urban systems. Such an integrated science could better inform decision-makers who need increased understanding of complex relationships among social, ecological, economic, and built infrastructure systems. To advance the field requires conceptual synthesis, knowledge and data sharing, cross-city comparative research, new intellectual networks, and engagement with additional disciplines. We consider challenges and opportunities for understanding dynamics of urban systems. We suggest pathways for advancing urban ecology research to support the goals of improving urban sustainability and resilience, conserving urban biodiversity, and promoting human well-being on an urbanizing planet.
••
TL;DR: In this article, the authors provide a new reconstruction of the deglaciation of the Fennoscandian Ice Sheet, in the form of calendar-year time-slices, which are particularly useful for ice sheet modelling.
••
TL;DR: In this paper, the authors investigate constraints on cosmic reionization extracted from the Planck cosmic microwave background (CMB) data and find that the Universe is ionized at less than the 10% level at redshifts above z ≃ 10.
Abstract: We investigate constraints on cosmic reionization extracted from the Planck cosmic microwave background (CMB) data. We combine the Planck CMB anisotropy data in temperature with the low-multipole polarization data to fit ΛCDM models with various parameterizations of the reionization history. We obtain a Thomson optical depth τ = 0.058 ± 0.012 for the commonly adopted instantaneous reionization model. This confirms, with data solely from CMB anisotropies, the low value suggested by combining Planck 2015 results with other data sets, and also reduces the uncertainties. We reconstruct the history of the ionization fraction using either a symmetric or an asymmetric model for the transition between the neutral and ionized phases. To determine better constraints on the duration of the reionization process, we also make use of measurements of the amplitude of the kinetic Sunyaev-Zeldovich (kSZ) effect using additional information from the high-resolution Atacama Cosmology Telescope and South Pole Telescope experiments. The average redshift at which reionization occurs is found to lie between z = 7.8 and 8.8, depending on the model of reionization adopted. Using kSZ constraints and a redshift-symmetric reionization model, we find an upper limit to the width of the reionization period of Δz < 2.8. In all cases, we find that the Universe is ionized at less than the 10% level at redshifts above z ≃ 10. This suggests that an early onset of reionization is strongly disfavoured by the Planck data. We show that this result also reduces the tension between CMB-based analyses and constraints from other astrophysical sources.
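The "instantaneous reionization" model behind τ = 0.058 is conventionally the redshift-symmetric tanh history used in Boltzmann codes such as CAMB. A sketch of that parameterization, assuming z_re = 8.3 (the midpoint of the quoted 7.8–8.8 range) and the conventional width Δz = 0.5, and ignoring the helium contribution:

```python
import math

def xe_tanh(z, z_re=8.3, delta_z=0.5):
    """Redshift-symmetric tanh ionization history (CAMB-style convention):
    x_e transitions around z_re, with the width defined in the variable
    y = (1+z)^1.5 so that delta_y = 1.5 * sqrt(1+z_re) * delta_z."""
    y = (1.0 + z) ** 1.5
    y_re = (1.0 + z_re) ** 1.5
    delta_y = 1.5 * math.sqrt(1.0 + z_re) * delta_z
    return 0.5 * (1.0 + math.tanh((y_re - y) / delta_y))

# The Universe is essentially neutral at z = 10 and ionized by z = 6:
for z in (6, 8.3, 10):
    print(f"z = {z}: x_e = {xe_tanh(z):.4f}")
```

Integrating n_e σ_T against this history along the line of sight is what maps a given z_re onto the optical depth τ quoted in the abstract.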
••
TL;DR: The Fermi Large Area Telescope (LAT) has provided the most detailed view to date of the emission toward the Galactic center (GC) in high-energy gamma-rays as mentioned in this paper.
Abstract: The Fermi Large Area Telescope (LAT) has provided the most detailed view to date of the emission toward the Galactic center (GC) in high-energy gamma-rays. This paper describes the analysis of data ...
••
TL;DR: In this article, the performance of the ATLAS muon identification and reconstruction using the first LHC dataset recorded at √s = 13 TeV in 2015 was evaluated and compared to Monte Carlo simulations.
Abstract: This article documents the performance of the ATLAS muon identification and reconstruction using the first LHC dataset recorded at √s = 13 TeV in 2015. Using a large sample of J/ψ→μμ and Z→μμ decays from 3.2 fb−1 of pp collision data, measurements of the reconstruction efficiency, as well as of the momentum scale and resolution, are presented and compared to Monte Carlo simulations. The reconstruction efficiency is measured to be close to 99% over most of the covered phase space (|η| < 2.7 and 5 < pT < 100 GeV). For |η| < 2.2, the pT resolution for muons from Z→μμ decays is 2.9% while the precision of the momentum scale for low-pT muons from J/ψ→μμ decays is about 0.2%.
••
TL;DR: In this article, the authors describe the development of the Galactic Interstellar Emission Model (GIEM) which is the standard adopted by the LAT Collaboration and is publicly available, based on a linear combination of maps for interstellar gas column density in Galactocentric annuli and for the inverse-Compton emission produced in the Galaxy.
Abstract: Most of the celestial γ rays detected by the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope originate from the interstellar medium when energetic cosmic rays interact with interstellar nucleons and photons. Conventional point-source and extended-source studies rely on the modeling of this diffuse emission for accurate characterization. Here, we describe the development of the Galactic Interstellar Emission Model (GIEM), which is the standard adopted by the LAT Collaboration and is publicly available. This model is based on a linear combination of maps for interstellar gas column density in Galactocentric annuli and for the inverse-Compton emission produced in the Galaxy. In the GIEM, we also include large-scale structures like Loop I and the Fermi bubbles. The measured gas emissivity spectra confirm that the cosmic-ray proton density decreases with Galactocentric distance beyond 5 kpc from the Galactic Center. The measurements also suggest a softening of the proton spectrum with Galactocentric distance. We observe that the Fermi bubbles have boundaries with a shape similar to a catenary at latitudes below 20° and we observe an enhanced emission toward their base extending in the north and south Galactic directions and located within ∼4° of the Galactic Center.
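The GIEM's central construction, a linear combination of gas-column-density maps in Galactocentric annuli plus an inverse-Compton template, can be sketched with a toy least-squares fit. The templates and coefficients below are random stand-ins, not LAT data, and the real fit is a binned Poisson likelihood per energy band:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "templates": gas column density in 3 Galactocentric annuli plus an
# inverse-Compton map, flattened over sky pixels (stand-ins for real maps).
n_pix = 1000
templates = rng.random((4, n_pix))             # rows: annulus 1-3, IC
true_coeffs = np.array([2.0, 1.2, 0.7, 0.5])   # "emissivities" (arbitrary)

# Simulated sky = linear combination of templates plus Gaussian noise.
observed = true_coeffs @ templates + rng.normal(0.0, 0.05, n_pix)

# Linear least-squares recovers the per-template coefficients; in the
# GIEM the annulus coefficients become gas emissivity spectra, whose
# fall-off with Galactocentric radius is the result quoted above.
fitted, *_ = np.linalg.lstsq(templates.T, observed, rcond=None)
print("fitted coefficients:", np.round(fitted, 2))
```

Fitting the annuli separately is what lets the model trace how the cosmic-ray proton density varies with Galactocentric distance.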
••
TL;DR: This review covers an alternative approach, in which lignin valorization is performed in concert with the pulping process, enabling the fractionation of all components of the lignocellulosic biomass into valorizable streams.
Abstract: Current processes for the fractionation of lignocellulosic biomass focus on the production of high-quality cellulosic fibers for paper, board, and viscose production. The other fractions that constitute a major part of lignocellulose are treated as waste or used for energy production. The transformation of lignocellulose beyond paper pulp to a commodity (e.g., fine chemicals, polymer precursors, and fuels) is the only feasible alternative to current refining of fossil fuels as a carbon feedstock. Inspired by this challenge, scientists and engineers have developed a plethora of methods for the valorization of biomass. However, most studies have focused on using one single purified component from lignocellulose that is not currently generated by the existing biomass fractionation processes. A lot of effort has been made to develop efficient methods for lignin depolymerization. The step to take this fundamental research to industrial applications is still a major challenge. This review covers an alternative approach, in which the lignin valorization is performed in concert with the pulping process. This enables the fractionation of all components of the lignocellulosic biomass into valorizable streams. Lignocellulose fractions obtained this way (e.g., lignin oil and glucose) can be utilized in a number of existing procedures. The review covers historic, current, and future perspectives, with respect to catalytic lignocellulose fractionation processes.
••
A. Abramowski1, Felix Aharonian2, Faical Ait Benkhali2, A. G. Akhperjanian, and 226 more (28 institutions)
TL;DR: Deep γ-ray observations with arcminute angular resolution of the region surrounding the Galactic Centre are reported, which show the expected tracer of the presence of petaelectronvolt protons within the central 10 parsecs of the Galaxy, and it is proposed that the supermassive black hole Sagittarius A* is linked to this PeVatron.
Abstract: Galactic cosmic rays reach energies of at least a few petaelectronvolts(1) (of the order of 10¹⁵ electronvolts). This implies that our Galaxy contains petaelectronvolt accelerators ('PeVatrons'), b ...