
Showing papers by "Centre national de la recherche scientifique", published in 2016


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud3, M. Ashdown4 +334 more · Institutions (82)
TL;DR: In this article, the authors present a cosmological analysis based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation.
Abstract: This paper presents cosmological results based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation. Our results are in very good agreement with the 2013 analysis of the Planck nominal-mission temperature data, but with increased precision. The temperature and polarization power spectra are consistent with the standard spatially-flat 6-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations (denoted “base ΛCDM” in this paper). From the Planck temperature data combined with Planck lensing, for this cosmology we find a Hubble constant, H0 = (67.8 ± 0.9) km s^−1 Mpc^−1, a matter density parameter Ωm = 0.308 ± 0.012, and a tilted scalar spectral index with ns = 0.968 ± 0.006, consistent with the 2013 analysis. Note that in this abstract we quote 68% confidence limits on measured parameters and 95% upper limits on other parameters. We present the first results of polarization measurements with the Low Frequency Instrument at large angular scales. Combined with the Planck temperature and lensing data, these measurements give a reionization optical depth of τ = 0.066 ± 0.016, corresponding to a reionization redshift of z_re ≈ 8.8. These results are consistent with those from WMAP polarization measurements cleaned for dust emission using 353-GHz polarization maps from the High Frequency Instrument. We find no evidence for any departure from base ΛCDM in the neutrino sector of the theory; for example, combining Planck observations with other astrophysical data we find Neff = 3.15 ± 0.23 for the effective number of relativistic degrees of freedom, consistent with the value Neff = 3.046 of the Standard Model of particle physics. The sum of neutrino masses is constrained to ∑ mν < 0.23 eV. The spatial curvature of our Universe is found to be very close to zero, with |ΩK| < 0.005. Adding a tensor component as a single-parameter extension to base ΛCDM we find an upper limit on the tensor-to-scalar ratio of r_0.002 < 0.11, consistent with the Planck 2013 results and consistent with the B-mode polarization constraints from a joint analysis of BICEP2, Keck Array, and Planck (BKP) data. Adding the BKP B-mode data to our analysis leads to a tighter constraint of r_0.002 < 0.09 and disfavours inflationary models with a V(φ) ∝ φ^2 potential. The addition of Planck polarization data leads to strong constraints on deviations from a purely adiabatic spectrum of fluctuations. We find no evidence for any contribution from isocurvature perturbations or from cosmic defects. Combining Planck data with other astrophysical data, including Type Ia supernovae, the equation of state of dark energy is constrained to w = −1.006 ± 0.045, consistent with the expected value for a cosmological constant. The standard big bang nucleosynthesis predictions for the helium and deuterium abundances for the best-fit Planck base ΛCDM cosmology are in excellent agreement with observations. We also place constraints on annihilating dark matter and on possible deviations from the standard recombination history. In neither case do we find evidence for new physics. The Planck results for base ΛCDM are in good agreement with baryon acoustic oscillation data and with the JLA sample of Type Ia supernovae. However, as in the 2013 analysis, the amplitude of the fluctuation spectrum is found to be higher than inferred from some analyses of rich cluster counts and weak gravitational lensing.
We show that these tensions cannot easily be resolved with simple modifications of the base ΛCDM cosmology. Apart from these tensions, the base ΛCDM cosmology provides an excellent description of the Planck CMB observations and many other astrophysical data sets.
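A quick consistency check that uses only the numbers quoted above: because the abstract reports a nearly flat universe (|ΩK| < 0.005), the dark-energy density follows directly from the matter density under the base ΛCDM assumption, $\Omega_\Lambda \simeq 1 - \Omega_m \approx 1 - 0.308 = 0.692$, i.e. roughly 69% of the present-day energy budget is attributed to the cosmological constant. This is an illustrative back-of-the-envelope step, not an additional result of the paper.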

10,728 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy1 +1008 more · Institutions (96)
TL;DR: This is the first direct detection of gravitational waves and the first observation of a binary black hole merger, and these observations demonstrate the existence of binary stellar-mass black hole systems.
Abstract: On September 14, 2015 at 09:50:45 UTC the two detectors of the Laser Interferometer Gravitational-Wave Observatory simultaneously observed a transient gravitational-wave signal. The signal sweeps upwards in frequency from 35 to 250 Hz with a peak gravitational-wave strain of $1.0 \times 10^{-21}$. It matches the waveform predicted by general relativity for the inspiral and merger of a pair of black holes and the ringdown of the resulting single black hole. The signal was observed with a matched-filter signal-to-noise ratio of 24 and a false alarm rate estimated to be less than 1 event per 203 000 years, equivalent to a significance greater than 5.1σ. The source lies at a luminosity distance of $410^{+160}_{-180}$ Mpc corresponding to a redshift $z = 0.09^{+0.03}_{-0.04}$. In the source frame, the initial black hole masses are $36^{+5}_{-4} M_\odot$ and $29^{+4}_{-4} M_\odot$, and the final black hole mass is $62^{+4}_{-4} M_\odot$, with $3.0^{+0.5}_{-0.5} M_\odot c^2$ radiated in gravitational waves. All uncertainties define 90% credible intervals. These observations demonstrate the existence of binary stellar-mass black hole systems. This is the first direct detection of gravitational waves and the first observation of a binary black hole merger.
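The radiated energy can be recovered from the quoted source-frame masses alone; as a worked check (the conversion factor $1\,M_\odot c^2 \approx 1.8 \times 10^{47}$ J is a standard physical constant, not a number from the abstract): $E_{\rm rad} \simeq (36 + 29 - 62)\,M_\odot c^2 = 3\,M_\odot c^2 \approx 5 \times 10^{47}$ J, consistent with the $3.0^{+0.5}_{-0.5}\,M_\odot c^2$ quoted above.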

9,596 citations


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4 +2519 more · Institutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macro-autophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
TL;DR: Gaia as discussed by the authors is a cornerstone mission in the science programme of the European Space Agency (ESA). The spacecraft construction was approved in 2006, following a study in which the original interferometric concept was changed to a direct-imaging approach.
Abstract: Gaia is a cornerstone mission in the science programme of the European Space Agency (ESA). The spacecraft construction was approved in 2006, following a study in which the original interferometric concept was changed to a direct-imaging approach. Both the spacecraft and the payload were built by European industry. The involvement of the scientific community focusses on data processing for which the international Gaia Data Processing and Analysis Consortium (DPAC) was selected in 2007. Gaia was launched on 19 December 2013 and arrived at its operating point, the second Lagrange point of the Sun-Earth-Moon system, a few weeks later. The commissioning of the spacecraft and payload was completed on 19 July 2014. The nominal five-year mission started with four weeks of special, ecliptic-pole scanning and subsequently transferred into full-sky scanning mode. We recall the scientific goals of Gaia and give a description of the as-built spacecraft that is currently (mid-2016) being operated to achieve these goals. We pay special attention to the payload module, the performance of which is closely related to the scientific performance of the mission. We provide a summary of the commissioning activities and findings, followed by a description of the routine operational mode. We summarise scientific performance estimates on the basis of in-orbit operations. Several intermediate Gaia data releases are planned and the data can be retrieved from the Gaia Archive, which is available through the Gaia home page.

5,164 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, M. R. Abernathy3 +970 more · Institutions (114)
TL;DR: This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.
Abstract: We report the observation of a gravitational-wave signal produced by the coalescence of two stellar-mass black holes. The signal, GW151226, was observed by the twin detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) on December 26, 2015 at 03:38:53 UTC. The signal was initially identified within 70 s by an online matched-filter search targeting binary coalescences. Subsequent off-line analyses recovered GW151226 with a network signal-to-noise ratio of 13 and a significance greater than 5σ. The signal persisted in the LIGO frequency band for approximately 1 s, increasing in frequency and amplitude over about 55 cycles from 35 to 450 Hz, and reached a peak gravitational strain of 3.4^{+0.7}_{-0.9} × 10^{−22}. The inferred source-frame initial black hole masses are 14.2^{+8.3}_{-3.7} M⊙ and 7.5^{+2.3}_{-2.3} M⊙, and the final black hole mass is 20.8^{+6.1}_{-1.7} M⊙. We find that at least one of the component black holes has spin greater than 0.2. This source is located at a luminosity distance of 440^{+180}_{-190} Mpc, corresponding to a redshift of 0.09^{+0.03}_{-0.04}. All uncertainties define a 90% credible interval. This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.

3,448 citations


Journal ArticleDOI
John Allison1, K. Amako2, John Apostolakis3, Pedro Arce4, Makoto Asai5, Tsukasa Aso6, Enrico Bagli, Alexander Bagulya7, Sw. Banerjee8, G. Barrand9, B. R. Beck10, Alexey Bogdanov11, D. Brandt, Jeremy M. C. Brown12, Helmut Burkhardt3, Ph Canal8, D. Cano-Ott4, Stephane Chauvie, Kyung-Suk Cho13, G.A.P. Cirrone14, Gene Cooperman15, M. A. Cortés-Giraldo16, G. Cosmo3, Giacomo Cuttone14, G.O. Depaola17, Laurent Desorgher, X. Dong15, Andrea Dotti5, Victor Daniel Elvira8, Gunter Folger3, Ziad Francis18, A. Galoyan19, L. Garnier9, M. Gayer3, K. Genser8, Vladimir Grichine3, Vladimir Grichine7, Susanna Guatelli20, Susanna Guatelli21, Paul Gueye22, P. Gumplinger23, Alexander Howard24, Ivana Hřivnáčová9, S. Hwang13, Sebastien Incerti25, Sebastien Incerti26, A. Ivanchenko3, Vladimir Ivanchenko3, F.W. Jones23, S. Y. Jun8, Pekka Kaitaniemi27, Nicolas A. Karakatsanis28, Nicolas A. Karakatsanis29, M. Karamitrosi30, M.H. Kelsey5, Akinori Kimura31, Tatsumi Koi5, Hisaya Kurashige32, A. Lechner3, S. B. Lee33, Francesco Longo34, M. Maire, Davide Mancusi, A. Mantero, E. Mendoza4, B. Morgan35, K. Murakami2, T. Nikitina3, Luciano Pandola14, P. Paprocki3, J Perl5, Ivan Petrović36, Maria Grazia Pia, W. Pokorski3, J. M. Quesada16, M. Raine, Maria A.M. Reis37, Alberto Ribon3, A. Ristic Fira36, Francesco Romano14, Giorgio Ivan Russo14, Giovanni Santin38, Takashi Sasaki2, D. Sawkey39, J. I. Shin33, Igor Strakovsky40, A. Taborda37, Satoshi Tanaka41, B. Tome, Toshiyuki Toshito, H.N. Tran42, Pete Truscott, L. Urbán, V. V. Uzhinsky19, Jerome Verbeke10, M. Verderi43, B. Wendt44, H. Wenzel8, D. H. Wright5, Douglas Wright10, T. Yamashita, J. Yarba8, H. Yoshida45 
TL;DR: Geant4 as discussed by the authors is a software toolkit for the simulation of the passage of particles through matter, which is used by a large number of experiments and projects in a variety of application domains, including high energy physics, astrophysics and space science, medical physics and radiation protection.
Abstract: Geant4 is a software toolkit for the simulation of the passage of particles through matter. It is used by a large number of experiments and projects in a variety of application domains, including high energy physics, astrophysics and space science, medical physics and radiation protection. Over the past several years, major changes have been made to the toolkit in order to accommodate the needs of these user communities, and to efficiently exploit the growth of computing power made available by advances in technology. The adaptation of Geant4 to multithreading, advances in physics, detector modeling and visualization, extensions to the toolkit, including biasing and reverse Monte Carlo, and tools for physics and release validation are discussed here.

2,260 citations


Journal ArticleDOI
TL;DR: The first Gaia data release, Gaia DR1 as discussed by the authors, consists of three components: a primary astrometric data set which contains the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the Hipparcos and Tycho-2 catalogues.
Abstract: Context. At about 1000 days after the launch of Gaia we present the first Gaia data release, Gaia DR1, consisting of astrometry and photometry for over 1 billion sources brighter than magnitude 20.7. Aims: A summary of Gaia DR1 is presented along with illustrations of the scientific quality of the data, followed by a discussion of the limitations due to the preliminary nature of this release. Methods: The raw data collected by Gaia during the first 14 months of the mission have been processed by the Gaia Data Processing and Analysis Consortium (DPAC) and turned into an astrometric and photometric catalogue. Results: Gaia DR1 consists of three components: a primary astrometric data set which contains the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the Hipparcos and Tycho-2 catalogues - a realisation of the Tycho-Gaia Astrometric Solution (TGAS) - and a secondary astrometric data set containing the positions for an additional 1.1 billion sources. The second component is the photometric data set, consisting of mean G-band magnitudes for all sources. The G-band light curves and the characteristics of 3000 Cepheid and RR Lyrae stars, observed at high cadence around the south ecliptic pole, form the third component. For the primary astrometric data set the typical uncertainty is about 0.3 mas for the positions and parallaxes, and about 1 mas yr^−1 for the proper motions. A systematic component of 0.3 mas should be added to the parallax uncertainties. For the subset of 94 000 Hipparcos stars in the primary data set, the proper motions are much more precise at about 0.06 mas yr^−1. For the secondary astrometric data set, the typical uncertainty of the positions is 10 mas. The median uncertainties on the mean G-band magnitudes range from the mmag level to 0.03 mag over the magnitude range 5 to 20.7. Conclusions: Gaia DR1 is an important milestone ahead of the next Gaia data release, which will feature five-parameter astrometry for all sources. Extensive validation shows that Gaia DR1 represents a major advance in the mapping of the heavens and the availability of basic stellar data that underpin observational astrophysics. Nevertheless, the very preliminary nature of this first Gaia data release does lead to a number of important limitations to the data quality which should be carefully considered before drawing conclusions from the data.
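The abstract notes that a systematic component of 0.3 mas should be added to the TGAS parallax uncertainties. A minimal sketch of one way to apply that recommendation in practice, assuming the systematic floor is combined in quadrature with the catalogue uncertainty (the quadrature choice and the function name are illustrative assumptions, not the DPAC prescription):

import numpy as np

def total_parallax_error(sigma_stat_mas, sigma_sys_mas=0.3):
    """Combine the catalogue (statistical) parallax uncertainty with a global
    systematic floor, assuming the two add in quadrature."""
    return np.hypot(np.asarray(sigma_stat_mas, dtype=float), sigma_sys_mas)

# A typical TGAS star with the quoted ~0.3 mas statistical uncertainty:
print(total_parallax_error(0.3))   # ~0.42 mas once the systematic floor is included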

2,174 citations


Journal ArticleDOI
TL;DR: The updated version 2.2.2 of the HADDOCK portal is presented, which offers new features such as support for mixed molecule types, additional experimental restraints and improved protocols, all of this in a user-friendly interface.

1,762 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review recent progress, from both in situ experiments and advanced simulation techniques, in understanding the charge storage mechanism in carbon- and oxide-based supercapacitors.
Abstract: Supercapacitors are electrochemical energy storage devices that operate on the simple mechanism of adsorption of ions from an electrolyte on a high-surface-area electrode. Over the past decade, the performance of supercapacitors has greatly improved, as electrode materials have been tuned at the nanoscale and electrolytes have gained an active role, enabling more efficient storage mechanisms. In porous carbon materials with subnanometre pores, the desolvation of the ions leads to surprisingly high capacitances. Oxide materials store charge by surface redox reactions, leading to the pseudocapacitive effect. Understanding the physical mechanisms underlying charge storage in these materials is important for further development of supercapacitors. Here we review recent progress, from both in situ experiments and advanced simulation techniques, in understanding the charge storage mechanism in carbon- and oxide-based supercapacitors. We also discuss the challenges that still need to be addressed for building better supercapacitors.

1,565 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used three long-term satellite leaf area index (LAI) records and ten global ecosystem models to investigate four key drivers of LAI trends during 1982-2009.
Abstract: Global environmental change is rapidly altering the dynamics of terrestrial vegetation, with consequences for the functioning of the Earth system and provision of ecosystem services(1,2). Yet how global vegetation is responding to the changing environment is not well established. Here we use three long-term satellite leaf area index (LAI) records and ten global ecosystem models to investigate four key drivers of LAI trends during 1982-2009. We show a persistent and widespread increase of growing season integrated LAI (greening) over 25% to 50% of the global vegetated area, whereas less than 4% of the globe shows decreasing LAI (browning). Factorial simulations with multiple global ecosystem models suggest that CO2 fertilization effects explain 70% of the observed greening trend, followed by nitrogen deposition (9%), climate change (8%) and land cover change (LCC) (4%). CO2 fertilization effects explain most of the greening trends in the tropics, whereas climate change resulted in greening of the high latitudes and the Tibetan Plateau. LCC contributed most to the regional greening observed in southeast China and the eastern United States. The regional effects of unexplained factors suggest that the next generation of ecosystem models will need to explore the impacts of forest demography, differences in regional management intensities for cropland and pastures, and other emerging productivity constraints such as phosphorus availability.

1,534 citations


Journal ArticleDOI
08 Jan 2016-Science
TL;DR: Climatic, biological, and geochemical signatures of human activity in sediments and ice cores, combined with deposits of new materials and radionuclides, as well as human-caused modification of sedimentary processes, set the Anthropocene apart stratigraphically as a new epoch beginning sometime in the mid-20th century.
Abstract: Human activity is leaving a pervasive and persistent signature on Earth. Vigorous debate continues about whether this warrants recognition as a new geologic time unit known as the Anthropocene. We review anthropogenic markers of functional changes in the Earth system through the stratigraphic record. The appearance of manufactured materials in sediments, including aluminum, plastics, and concrete, coincides with global spikes in fallout radionuclides and particulates from fossil fuel combustion. Carbon, nitrogen, and phosphorus cycles have been substantially modified over the past century. Rates of sea-level rise and the extent of human perturbation of the climate system exceed Late Holocene changes. Biotic changes include species invasions worldwide and accelerating rates of extinction. These combined signals render the Anthropocene stratigraphically distinct from the Holocene and earlier epochs.

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, M. R. Abernathy1 +976 more · Institutions (107)
TL;DR: It is found that the final remnant's mass and spin, as determined from the low-frequency and high-frequency phases of the signal, are mutually consistent with the binary black-hole solution in general relativity.
Abstract: The LIGO detection of GW150914 provides an unprecedented opportunity to study the two-body motion of a compact-object binary in the large-velocity, highly nonlinear regime, and to witness the final merger of the binary and the excitation of uniquely relativistic modes of the gravitational field. We carry out several investigations to determine whether GW150914 is consistent with a binary black-hole merger in general relativity. We find that the final remnant’s mass and spin, as determined from the low-frequency (inspiral) and high-frequency (postinspiral) phases of the signal, are mutually consistent with the binary black-hole solution in general relativity. Furthermore, the data following the peak of GW150914 are consistent with the least-damped quasinormal mode inferred from the mass and spin of the remnant black hole. By using waveform models that allow for parametrized general-relativity violations during the inspiral and merger phases, we perform quantitative tests on the gravitational-wave phase in the dynamical regime and we determine the first empirical bounds on several high-order post-Newtonian coefficients. We constrain the graviton Compton wavelength, assuming that gravitons are dispersed in vacuum in the same way as particles with mass, obtaining a 90%-confidence lower bound of 10^13 km. In conclusion, within our statistical uncertainties, we find no evidence for violations of general relativity in the genuinely strong-field regime of gravity.
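The quoted Compton-wavelength bound maps directly onto a graviton-mass bound through the standard relation $m_g c^2 = hc/\lambda_g$; as a worked conversion using only the $10^{13}$ km figure above (the value $hc \approx 1.24 \times 10^{-6}$ eV m is a physical constant, not a number from the abstract): $m_g c^2 \lesssim (1.24 \times 10^{-6}\ \mathrm{eV\,m}) / (10^{16}\ \mathrm{m}) \approx 1.2 \times 10^{-22}\ \mathrm{eV}$.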

Journal ArticleDOI
TL;DR: The report includes the description of a computational machinery for nonlinear optical spectroscopy through an interface to the QM/MM package Cobramm.
Abstract: In this report, we summarize and describe the recent unique updates and additions to the Molcas quantum chemistry program suite as contained in release version 8. These updates include natural and spin orbitals for studies of magnetic properties, local and linear scaling methods for the Douglas-Kroll-Hess transformation, the generalized active space concept in MCSCF methods, a combination of multiconfigurational wave functions with density functional theory in the MC-PDFT method, additional methods for computation of magnetic properties, methods for diabatization, analytical gradients of state average complete active space SCF in association with density fitting, methods for constrained fragment optimization, large-scale parallel multireference configuration interaction including analytic gradients via the interface to the Columbus package, and approximations of the CASPT2 method to be used for computations of large systems. In addition, the report includes the description of a computational machinery for nonlinear optical spectroscopy through an interface to the QM/MM package Cobramm. Further, a module to run molecular dynamics simulations is added, two surface hopping algorithms are included to enable nonadiabatic calculations, and the DQ method for diabatization is added. Finally, we report on improvements with respect to alternative file options and parallelization.

Journal ArticleDOI
TL;DR: In this paper, the state-of-the-art technologies on photonics-based terahertz communications are compared with competing technologies based on electronics and free-space optical communications.
Abstract: This Review covers the state-of-the-art technologies on photonics-based terahertz communications, which are compared with competing technologies based on electronics and free-space optical communications. Future prospects and challenges are also discussed. Almost 15 years have passed since the initial demonstrations of terahertz (THz) wireless communications were made using both pulsed and continuous waves. THz technologies are attracting great interest and are expected to meet the ever-increasing demand for high-capacity wireless communications. Here, we review the latest trends in THz communications research, focusing on how photonics technologies have played a key role in the development of first-age THz communication systems. We also provide a comparison with other competitive technologies, such as THz transceivers enabled by electronic devices as well as free-space lightwave communications.

Journal ArticleDOI
University of East Anglia1, University of Oslo2, Commonwealth Scientific and Industrial Research Organisation3, University of Exeter4, Oak Ridge National Laboratory5, National Oceanic and Atmospheric Administration6, Woods Hole Research Center7, University of California, San Diego8, Karlsruhe Institute of Technology9, Cooperative Institute for Marine and Atmospheric Studies10, Centre national de la recherche scientifique11, University of Maryland, College Park12, National Institute of Water and Atmospheric Research13, Woods Hole Oceanographic Institution14, Flanders Marine Institute15, Alfred Wegener Institute for Polar and Marine Research16, Netherlands Environmental Assessment Agency17, University of Illinois at Urbana–Champaign18, Leibniz Institute of Marine Sciences19, Max Planck Society20, University of Paris21, Hobart Corporation22, Oeschger Centre for Climate Change Research23, University of Bern24, National Center for Atmospheric Research25, University of Miami26, Council of Scientific and Industrial Research27, University of Colorado Boulder28, National Institute for Environmental Studies29, Joint Institute for the Study of the Atmosphere and Ocean30, Geophysical Institute, University of Bergen31, Montana State University32, Goddard Space Flight Center33, University of New Hampshire34, Bjerknes Centre for Climate Research35, Imperial College London36, Lamont–Doherty Earth Observatory37, Auburn University38, Wageningen University and Research Centre39, VU University Amsterdam40, Met Office41
TL;DR: In this article, the authors quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community.
Abstract: . Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the “global carbon budget” – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates and consistency within and among components, alongside methodology and data limitations. CO2 emissions from fossil fuels and industry (EFF) are based on energy statistics and cement production data, respectively, while emissions from land-use change (ELUC), mainly deforestation, are based on combined evidence from land-cover change data, fire activity associated with deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the annual changes in concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. The variability in SOCEAN is evaluated with data products based on surveys of ocean CO2 measurements. The global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms of the global carbon budget and compared to results of independent dynamic global vegetation models. We compare the mean land and ocean fluxes and their variability to estimates from three atmospheric inverse methods for three broad latitude bands. All uncertainties are reported as ±1σ, reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. For the last decade available (2006–2015), EFF was 9.3 ± 0.5 GtC yr−1, ELUC 1.0 ± 0.5 GtC yr−1, GATM 4.5 ± 0.1 GtC yr−1, SOCEAN 2.6 ± 0.5 GtC yr−1, and SLAND 3.1 ± 0.9 GtC yr−1. For year 2015 alone, the growth in EFF was approximately zero and emissions remained at 9.9 ± 0.5 GtC yr−1, showing a slowdown in growth of these emissions compared to the average growth of 1.8 % yr−1 that took place during 2006–2015. Also, for 2015, ELUC was 1.3 ± 0.5 GtC yr−1, GATM was 6.3 ± 0.2 GtC yr−1, SOCEAN was 3.0 ± 0.5 GtC yr−1, and SLAND was 1.9 ± 0.9 GtC yr−1. GATM was higher in 2015 compared to the past decade (2006–2015), reflecting a smaller SLAND for that year. The global atmospheric CO2 concentration reached 399.4 ± 0.1 ppm averaged over 2015. For 2016, preliminary data indicate the continuation of low growth in EFF with +0.2 % (range of −1.0 to +1.8 %) based on national emissions projections for China and USA, and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. In spite of the low growth of EFF in 2016, the growth rate in atmospheric CO2 concentration is expected to be relatively high because of the persistence of the smaller residual terrestrial sink (SLAND) in response to El Nino conditions of 2015–2016. From this projection of EFF and assumed constant ELUC for 2016, cumulative emissions of CO2 will reach 565 ± 55 GtC (2075 ± 205 GtCO2) for 1870–2016, about 75 % from EFF and 25 % from ELUC. 
This living data update documents changes in the methods and data sets used in this new carbon budget compared with previous publications of this data set (Le Quere et al., 2015b, a, 2014, 2013). All observations presented here can be downloaded from the Carbon Dioxide Information Analysis Center ( doi:10.3334/CDIAC/GCP_2016 ).
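The abstract defines the residual terrestrial sink SLAND as the term that closes the budget EFF + ELUC = GATM + SOCEAN + SLAND. A minimal sketch reproducing that closure with the quoted 2006-2015 decadal means (the small mismatch with the quoted 3.1 GtC yr−1 simply reflects rounding of the published values):

# Decadal means for 2006-2015, in GtC per year, as quoted in the abstract.
E_FF, E_LUC = 9.3, 1.0        # fossil-fuel/industry and land-use-change emissions
G_ATM, S_OCEAN = 4.5, 2.6     # atmospheric growth and ocean sink

# SLAND is estimated as the residual of the other budget terms.
S_LAND = (E_FF + E_LUC) - (G_ATM + S_OCEAN)
print(f"Residual land sink: {S_LAND:.1f} GtC/yr (abstract: 3.1 +/- 0.9)")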

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy3 +978 more · Institutions (112)
TL;DR: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers as discussed by the authors.
Abstract: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers. In this paper we present full results from a search for binary black hole merger signals with total masses up to 100 M⊙ and detailed implications from our observations of these systems. Our search, based on general-relativistic models of gravitational wave signals from binary black hole systems, unambiguously identified two signals, GW150914 and GW151226, with a significance of greater than 5σ over the observing period. It also identified a third possible signal, LVT151012, with substantially lower significance, which has an 87% probability of being of astrophysical origin. We provide detailed estimates of the parameters of the observed systems. Both GW150914 and GW151226 provide an unprecedented opportunity to study the two-body motion of a compact-object binary in the large velocity, highly nonlinear regime. We do not observe any deviations from general relativity, and place improved empirical bounds on several high-order post-Newtonian coefficients. From our observations we infer stellar-mass binary black hole merger rates lying in the range 9–240 Gpc^−3 yr^−1. These observations are beginning to inform astrophysical predictions of binary black hole formation rates, and indicate that future observing runs of the Advanced detector network will yield many more gravitational wave detections.

Journal ArticleDOI
TL;DR: Evidence is provided that micro-PS cause feeding modifications and reproductive disruption in oysters, with significant impacts on offspring, providing ground-breaking data on microplastic impacts in an invertebrate model, helping to predict ecological impact in marine ecosystems.
Abstract: Plastics are persistent synthetic polymers that accumulate as waste in the marine environment. Microplastic (MP) particles are derived from the breakdown of larger debris or can enter the environment as microscopic fragments. Because filter-feeder organisms ingest MP while feeding, they are likely to be impacted by MP pollution. To assess the impact of polystyrene microspheres (micro-PS) on the physiology of the Pacific oyster, adult oysters were experimentally exposed to virgin micro-PS (2 and 6 µm in diameter; 0.023 mg·L−1) for 2 mo during a reproductive cycle. Effects were investigated on ecophysiological parameters; cellular, transcriptomic, and proteomic responses; fecundity; and offspring development. Oysters preferentially ingested the 6-µm micro-PS over the 2-µm-diameter particles. Consumption of microalgae and absorption efficiency were significantly higher in exposed oysters, suggesting compensatory and physical effects on both digestive parameters. After 2 mo, exposed oysters had significant decreases in oocyte number (−38%), diameter (−5%), and sperm velocity (−23%). The D-larval yield and larval development of offspring derived from exposed parents decreased by 41% and 18%, respectively, compared with control offspring. Dynamic energy budget modeling, supported by transcriptomic profiles, suggested a significant shift of energy allocation from reproduction to structural growth, and elevated maintenance costs in exposed oysters, which is thought to be caused by interference with energy uptake. Molecular signatures of endocrine disruption were also revealed, but no endocrine disruptors were found in the biological samples. This study provides evidence that micro-PS cause feeding modifications and reproductive disruption in oysters, with significant impacts on offspring.

Journal ArticleDOI
TL;DR: A different method is proposed for analyzing experimental results and is employed here to re-examine experimental data taken from the literature; the analysis indicates that the method generally used is flawed and that it unfairly favors pseudo-second-order kinetics.
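For context, the pseudo-second-order model referred to here is usually written as dq/dt = k(qe − q)^2, with integrated form q(t) = qe^2 k t / (1 + qe k t), and the common practice is to fit its linearized form t/q = 1/(k qe^2) + t/qe by ordinary least squares. A minimal sketch contrasting that linearized fit with a direct nonlinear fit on synthetic data (the synthetic data, parameter values and noise level are illustrative assumptions, not taken from the paper):

import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k):
    """Integrated pseudo-second-order uptake curve q(t)."""
    return qe**2 * k * t / (1.0 + qe * k * t)

# Illustrative synthetic data with noise (values chosen for the demo only).
rng = np.random.default_rng(0)
t = np.linspace(5, 300, 30)                    # time, arbitrary units
q = pso(t, qe=2.0, k=0.01) + rng.normal(0.0, 0.03, t.size)

# (a) Commonly used linearized fit: regress t/q against t.
slope, intercept = np.polyfit(t, t / q, 1)
qe_lin, k_lin = 1.0 / slope, slope**2 / intercept

# (b) Direct nonlinear least-squares fit of q(t).
(qe_nl, k_nl), _ = curve_fit(pso, t, q, p0=[1.0, 0.005])

print(f"linearized fit: qe = {qe_lin:.2f}, k = {k_lin:.4f}")
print(f"nonlinear fit : qe = {qe_nl:.2f}, k = {k_nl:.4f}")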

Journal ArticleDOI
Kurt Lejaeghere1, Gustav Bihlmayer2, Torbjörn Björkman3, Torbjörn Björkman4, Peter Blaha5, Stefan Blügel2, Volker Blum6, Damien Caliste7, Ivano E. Castelli8, Stewart J. Clark9, Andrea Dal Corso10, Stefano de Gironcoli10, Thierry Deutsch7, J. K. Dewhurst11, Igor Di Marco12, Claudia Draxl13, Claudia Draxl14, Marcin Dulak15, Olle Eriksson12, José A. Flores-Livas11, Kevin F. Garrity16, Luigi Genovese7, Paolo Giannozzi17, Matteo Giantomassi18, Stefan Goedecker19, Xavier Gonze18, Oscar Grånäs20, Oscar Grånäs12, E. K. U. Gross11, Andris Gulans13, Andris Gulans14, Francois Gygi21, D. R. Hamann22, P. J. Hasnip23, Natalie Holzwarth24, Diana Iusan12, Dominik B. Jochym25, F. Jollet, Daniel M. Jones26, Georg Kresse27, Klaus Koepernik28, Klaus Koepernik29, Emine Kucukbenli10, Emine Kucukbenli8, Yaroslav Kvashnin12, Inka L. M. Locht30, Inka L. M. Locht12, Sven Lubeck13, Martijn Marsman27, Nicola Marzari8, Ulrike Nitzsche29, Lars Nordström12, Taisuke Ozaki31, Lorenzo Paulatto32, Chris J. Pickard33, Ward Poelmans1, Matt Probert23, Keith Refson34, Keith Refson25, Manuel Richter29, Manuel Richter28, Gian-Marco Rignanese18, Santanu Saha19, Matthias Scheffler35, Matthias Scheffler14, Martin Schlipf21, Karlheinz Schwarz5, Sangeeta Sharma11, Francesca Tavazza16, Patrik Thunström5, Alexandre Tkatchenko36, Alexandre Tkatchenko14, Marc Torrent, David Vanderbilt22, Michiel van Setten18, Veronique Van Speybroeck1, John M. Wills37, Jonathan R. Yates26, Guo-Xu Zhang38, Stefaan Cottenier1 
25 Mar 2016-Science
TL;DR: A procedure to assess the precision of DFT methods was devised and used to demonstrate reproducibility among many of the most widely used DFT codes, showing that the precision of DFT implementations can be determined even in the absence of one absolute reference code.
Abstract: The widespread popularity of density functional theory has given rise to an extensive range of dedicated codes for predicting molecular and crystalline properties. However, each code implements the formalism in a different way, raising questions about the reproducibility of such predictions. We report the results of a community-wide effort that compared 15 solid-state codes, using 40 different potentials or basis set types, to assess the quality of the Perdew-Burke-Ernzerhof equations of state for 71 elemental crystals. We conclude that predictions from recent codes and pseudopotentials agree very well, with pairwise differences that are comparable to those between different high-precision experiments. Older methods, however, have less precise agreement. Our benchmark provides a framework for users and developers to document the precision of new applications and methodological improvements.
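As a rough illustration of the kind of pairwise equation-of-state comparison such a benchmark rests on (a generic schematic under assumed inputs, not the paper's exact Δ-gauge protocol), one can compute the RMS energy difference between two codes' E(V) curves over a shared volume window:

import numpy as np

def eos_difference(volumes, e_a, e_b):
    """Schematic Delta-like metric: RMS difference between two codes' E(V)
    curves on a common volume grid, after removing the arbitrary energy
    offset between the codes."""
    d = np.asarray(e_a, dtype=float) - np.asarray(e_b, dtype=float)
    d -= d.mean()
    return np.sqrt(np.mean(d**2))

# Illustrative inputs (assumed numbers, not benchmark data): volumes in A^3/atom,
# energies in meV/atom relative to each code's own reference.
V = np.linspace(9.0, 11.0, 41)
E_code_a = 120.0 * (V / 10.00 - 1.0) ** 2
E_code_b = 126.0 * (V / 10.02 - 1.0) ** 2
print(f"pairwise difference ~ {eos_difference(V, E_code_a, E_code_b):.2f} meV/atom")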

Journal ArticleDOI
TL;DR: In this article, the authors study the semi-arid Loess Plateau in China, where the "Grain to Green" large-scale revegetation programme has been in operation since 1999.
Abstract: Revegetation of degraded ecosystems provides opportunities for carbon sequestration and bioenergy production(1,2). However, vegetation expansion in water-limited areas creates potentially conflicting demands for water between the ecosystem and humans(3). Current understanding of these competing demands is still limited(4). Here, we study the semi-arid Loess Plateau in China, where the 'Grain to Green' large-scale revegetation programme has been in operation since 1999. As expected, we found that the new planting has caused both net primary productivity (NPP) and evapotranspiration (ET) to increase. Also, the increase of ET has induced a significant (p < 0.001) decrease in the ratio of river runoff to annual precipitation across hydrological catchments. From currently revegetated areas and human water demand, we estimate a threshold NPP of 400 ± 5 g C m^−2 yr^−1 above which the population will suffer water shortages. NPP in this region is found to be already close to this limit. The threshold NPP could change by −36% in the worst case of climate drying and high human withdrawals, to +43% in the best case. Our results develop a new conceptual framework to determine the critical carbon sequestration that is sustainable in terms of both ecological and socio-economic resource demands in a coupled anthropogenic-biological system.

Journal ArticleDOI
TL;DR: The results support the hypothesis that rare coding variants can pinpoint causal genes within known genetic loci and illustrate that applying the approach systematically to detect new loci requires extremely large sample sizes.
Abstract: Advanced age-related macular degeneration (AMD) is the leading cause of blindness in the elderly, with limited therapeutic options. Here we report on a study of >12 million variants, including 163,714 directly genotyped, mostly rare, protein-altering variants. Analyzing 16,144 patients and 17,832 controls, we identify 52 independently associated common and rare variants (P < 5 × 10^−8) distributed across 34 loci. Although wet and dry AMD subtypes exhibit predominantly shared genetics, we identify the first genetic association signal specific to wet AMD, near MMP9 (difference P value = 4.1 × 10^−10). Very rare coding variants (frequency <0.1%) in CFH, CFI and TIMP3 suggest causal roles for these genes, as does a splice variant in SLC16A8. Our results support the hypothesis that rare coding variants can pinpoint causal genes within known genetic loci and illustrate that applying the approach systematically to detect new loci requires extremely large sample sizes.

Journal ArticleDOI
TL;DR: In this article, a single photon with near-unity indistinguishability was generated from quantum dots in electrically controlled cavity structures, which allowed for efficient photon collection while application of an electrical bias cancels charge noise effects.
Abstract: A single photon with near-unity indistinguishability is generated from quantum dots in electrically controlled cavity structures. The cavity allows for efficient photon collection while application of an electrical bias cancels charge noise effects.

Journal ArticleDOI
TL;DR: The COSMOS2015 catalog as mentioned in this paper contains precise photometric redshifts and stellar masses for more than half a million objects over the 2 deg^2 COSMOS field, which is highly optimized for the study of galaxy evolution and environments in the early universe.
Abstract: We present the COSMOS2015 catalog, which contains precise photometric redshifts and stellar masses for more than half a million objects over the 2 deg^2 COSMOS field. Including new YJHKs images from the UltraVISTA-DR2 survey, Y-band images from Subaru/Hyper-Suprime-Cam, and infrared data from the Spitzer Large Area Survey with the Hyper-Suprime-Cam Spitzer legacy program, this near-infrared-selected catalog is highly optimized for the study of galaxy evolution and environments in the early universe. To maximize catalog completeness for bluer objects and at higher redshifts, objects have been detected on a χ^2 sum of the YJHKs and z++ images. The catalog contains ~6 × 10^5 objects in the 1.5 deg^2 UltraVISTA-DR2 region and ~1.5 × 10^5 objects detected in the “ultra-deep stripes” (0.62 deg^2) at Ks ≤ 24.7 (3σ, 3″, AB magnitude). Through a comparison with the zCOSMOS-bright spectroscopic redshifts, we measure a photometric redshift precision of σ_{Δz/(1+z_s)} = 0.007 and a catastrophic failure fraction of η = 0.5%. At 3 < z < 6, using the unique database of spectroscopic redshifts in COSMOS, we find σ_{Δz/(1+z_s)} = 0.021 and η = 13.2%. The deepest regions reach a 90% completeness limit of 10^10 M⊙ to z = 4. Detailed comparisons of the color distributions, number counts, and clustering show excellent agreement with the literature in the same mass ranges. COSMOS2015 represents a unique, publicly available, valuable resource with which to investigate the evolution of galaxies within their environment back to the earliest stages of the history of the universe. The COSMOS2015 catalog is distributed via anonymous ftp and through the usual astronomical archive systems (CDS, ESO Phase 3, IRSA).
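The precision and catastrophic-failure figures quoted above follow standard photo-z conventions; a minimal sketch of how such numbers are typically computed from matched photometric and spectroscopic redshifts (the NMAD estimator and the |Δz|/(1 + z_s) > 0.15 outlier cut are common conventions assumed here, not necessarily the authors' exact choices, and the sample below is made up):

import numpy as np

def photoz_metrics(z_phot, z_spec, outlier_cut=0.15):
    """Normalized-median-absolute-deviation scatter and outlier fraction
    for a photo-z sample, following common survey conventions."""
    dz = (np.asarray(z_phot) - np.asarray(z_spec)) / (1.0 + np.asarray(z_spec))
    sigma_nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
    eta = np.mean(np.abs(dz) > outlier_cut)
    return sigma_nmad, eta

# Hypothetical matched sample (illustrative values, not COSMOS2015 data).
z_spec = np.array([0.35, 0.80, 1.20, 2.10, 0.55])
z_phot = np.array([0.34, 0.82, 1.15, 2.40, 0.56])
sigma, eta = photoz_metrics(z_phot, z_spec)
print(f"sigma = {sigma:.3f}, eta = {eta:.1%}")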

Journal ArticleDOI
TL;DR: In this article, the authors quantify potential global impacts of different negative emissions technologies on various factors (such as land, greenhouse gas emissions, water, albedo, nutrients and energy) to determine the biophysical limits to, and economic costs of, their widespread application.
Abstract: To have a >50% chance of limiting warming below 2 °C, most recent scenarios from integrated assessment models (IAMs) require large-scale deployment of negative emissions technologies (NETs). These are technologies that result in the net removal of greenhouse gases from the atmosphere. We quantify potential global impacts of the different NETs on various factors (such as land, greenhouse gas emissions, water, albedo, nutrients and energy) to determine the biophysical limits to, and economic costs of, their widespread application. Resource implications vary between technologies and need to be satisfactorily addressed if NETs are to have a significant role in achieving climate goals.

Journal ArticleDOI
07 Jan 2016-Nature
TL;DR: The difference between the planetary radius measured at optical and infrared wavelengths is an effective metric for distinguishing different atmosphere types, so that strong water absorption lines are seen in clear-atmosphere planets and the weakest features are associated with clouds and hazes.
Abstract: Thousands of transiting exoplanets have been discovered, but spectral analysis of their atmospheres has so far been dominated by a small number of exoplanets and data spanning relatively narrow wavelength ranges (such as 1.1-1.7 micrometres). Recent studies show that some hot-Jupiter exoplanets have much weaker water absorption features in their near-infrared spectra than predicted. The low amplitude of water signatures could be explained by very low water abundances, which may be a sign that water was depleted in the protoplanetary disk at the planet's formation location, but it is unclear whether this level of depletion can actually occur. Alternatively, these weak signals could be the result of obscuration by clouds or hazes, as found in some optical spectra. Here we report results from a comparative study of ten hot Jupiters covering the wavelength range 0.3-5 micrometres, which allows us to resolve both the optical scattering and infrared molecular absorption spectroscopically. Our results reveal a diverse group of hot Jupiters that exhibit a continuum from clear to cloudy atmospheres. We find that the difference between the planetary radius measured at optical and infrared wavelengths is an effective metric for distinguishing different atmosphere types. The difference correlates with the spectral strength of water, so that strong water absorption lines are seen in clear-atmosphere planets and the weakest features are associated with clouds and hazes. This result strongly suggests that primordial water depletion during formation is unlikely and that clouds and hazes are the cause of weaker spectral signatures.
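To interpret an optical-to-infrared radius difference physically, such comparisons are usually expressed in units of the atmospheric pressure scale height H = k_B T / (μ m_H g). A minimal sketch of that normalization (the scale-height formula is standard, but treating it as the authors' exact metric is an assumption, and the planet parameters below are purely illustrative):

K_B = 1.380649e-23    # Boltzmann constant, J/K
M_H = 1.6726e-27      # atomic mass unit ~ proton mass, kg

def scale_height(T_eq, mu, g):
    """Atmospheric pressure scale height H = k_B * T / (mu * m_H * g), in metres."""
    return K_B * T_eq / (mu * M_H * g)

# Illustrative hot-Jupiter values (assumptions for the demo, not from the paper).
T_eq, mu, g = 1400.0, 2.3, 10.0     # equilibrium temperature (K), mean molecular weight, gravity (m/s^2)
delta_R = 1.5e6                     # optical-minus-infrared radius difference, m (made up)
H = scale_height(T_eq, mu, g)
print(f"H = {H / 1e3:.0f} km; radius difference = {delta_R / H:.1f} scale heights")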

Journal ArticleDOI
Nabila Aghanim1, Monique Arnaud2, M. Ashdown3, J. Aumont1 +291 more · Institutions (73)
TL;DR: In this article, the authors present the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties.
Abstract: This paper presents the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties, both instrumental and astrophysical in nature. They are based on the same hybrid approach used for the previous release, i.e., a pixel-based likelihood at low multipoles (l < 30) and a Gaussian approximation to the distribution of cross-power spectra at higher multipoles. The main improvements are the use of more and better processed data and of Planck polarization information, along with more detailed models of foregrounds and instrumental uncertainties. The increased redundancy brought by more than doubling the amount of data analysed enables further consistency checks and enhanced immunity to systematic effects. It also improves the constraining power of Planck, in particular with regard to small-scale foreground properties. Progress in the modelling of foreground emission enables the retention of a larger fraction of the sky to determine the properties of the CMB, which also contributes to the enhanced precision of the spectra. Improvements in data processing and instrumental modelling further reduce uncertainties. Extensive tests establish the robustness and accuracy of the likelihood results, from temperature alone, from polarization alone, and from their combination. For temperature, we also perform a full likelihood analysis of realistic end-to-end simulations of the instrumental response to the sky, which were fed into the actual data processing pipeline; this does not reveal biases from residual low-level instrumental systematics. Even with the increase in precision and robustness, the ΛCDM cosmological model continues to offer a very good fit to the Planck data. The slope of the primordial scalar fluctuations, n_s, is confirmed smaller than unity at more than 5σ from Planck alone. We further validate the robustness of the likelihood results against specific extensions to the baseline cosmology, which are particularly sensitive to data at high multipoles. For instance, the effective number of neutrino species remains compatible with the canonical value of 3.046. For this first detailed analysis of Planck polarization spectra, we concentrate at high multipoles on the E modes, leaving the analysis of the weaker B modes to future work. At low multipoles we use temperature maps at all Planck frequencies along with a subset of polarization data. These data take advantage of Planck’s wide frequency coverage to improve the separation of CMB and foreground emission. Within the baseline ΛCDM cosmology this requires τ = 0.078 ± 0.019 for the reionization optical depth, which is significantly lower than estimates without the use of high-frequency data for explicit monitoring of dust emission. At high multipoles we detect residual systematic errors in E polarization, typically at the μK^2 level; we therefore choose to retain temperature information alone for high multipoles as the recommended baseline, in particular for testing non-minimal models. Nevertheless, the high-multipole polarization spectra from Planck are already good enough to enable a separate high-precision determination of the parameters of the ΛCDM model, showing consistency with those established independently from temperature information alone.

Journal ArticleDOI
Jingjing Liang1, Thomas W. Crowther2, Nicolas Picard3, Susan K. Wiser4, Mo Zhou1, Giorgio Alberti5, Ernst Detlef Schulze6, A. David McGuire7, Fabio Bozzato, Hans Pretzsch8, Sergio de-Miguel, Alain Paquette9, Bruno Hérault10, Michael Scherer-Lorenzen11, Christopher B. Barrett12, Henry B. Glick2, Geerten M. Hengeveld13, Gert-Jan Nabuurs13, Sebastian Pfautsch14, Helder Viana15, Helder Viana16, Alexander Christian Vibrans, Christian Ammer17, Peter Schall17, David David Verbyla7, N. M. Tchebakova18, Markus Fischer19, James V. Watson1, Han Y. H. Chen20, Xiangdong Lei, Mart-Jan Schelhaas13, Huicui Lu13, Damiano Gianelle, Elena I. Parfenova18, Christian Salas21, Eungul Lee1, Boknam Lee22, Hyun-Seok Kim, Helge Bruelheide23, David A. Coomes24, Daniel Piotto, Terry Sunderland25, Terry Sunderland26, Bernhard Schmid27, Sylvie Gourlet-Fleury, Bonaventure Sonké28, Rebecca Tavani3, Jun Zhu29, Susanne Brandl8, Jordi Vayreda, Fumiaki Kitahara, Eric B. Searle20, Victor J. Neldner30, Michael R. Ngugi30, Christopher Baraloto31, Christopher Baraloto32, Lorenzo Frizzera, Radomir Bałazy33, Jacek Oleksyn34, Jacek Oleksyn35, Tomasz Zawiła-Niedźwiecki36, Olivier Bouriaud37, Filippo Bussotti38, Leena Finér, Bogdan Jaroszewicz39, Tommaso Jucker24, Fernando Valladares40, Fernando Valladares41, Andrzej M. Jagodziński34, Pablo Luis Peri42, Pablo Luis Peri43, Pablo Luis Peri44, Christelle Gonmadje28, William Marthy45, Timothy G. O'Brien45, Emanuel H. Martin46, Andrew R. Marshall47, Francesco Rovero, Robert Bitariho, Pascal A. Niklaus27, Patricia Alvarez-Loayza48, Nurdin Chamuya49, Renato Valencia50, Frédéric Mortier, Verginia Wortel, Nestor L. Engone-Obiang51, Leandro Valle Ferreira52, David E. Odeke, R. Vásquez, Simon L. Lewis53, Simon L. Lewis54, Peter B. Reich14, Peter B. Reich35 
West Virginia University1, Yale University2, Food and Agriculture Organization3, Landcare Research4, University of Udine5, Max Planck Society6, University of Alaska Fairbanks7, Technische Universität München8, Université du Québec à Montréal9, University of the French West Indies and Guiana10, University of Freiburg Faculty of Biology11, Cornell University12, Wageningen University and Research Centre13, University of Sydney14, Polytechnic Institute of Viseu15, University of Trás-os-Montes and Alto Douro16, University of Göttingen17, Russian Academy of Sciences18, Oeschger Centre for Climate Change Research19, Lakehead University20, University of La Frontera21, Seoul National University22, Martin Luther University of Halle-Wittenberg23, University of Cambridge24, Center for International Forestry Research25, James Cook University26, University of Zurich27, University of Yaoundé I28, University of Wisconsin-Madison29, Queensland Government30, Florida International University31, Institut national de la recherche agronomique32, Forest Research Institute33, Polish Academy of Sciences34, University of Minnesota35, Warsaw University of Life Sciences36, Ştefan cel Mare University of Suceava37, University of Florence38, University of Warsaw39, Spanish National Research Council40, King Juan Carlos University41, National University of Austral Patagonia42, National Scientific and Technical Research Council43, International Trademark Association44, Wildlife Conservation Society45, College of African Wildlife Management46, University of York47, Durham University48, Ontario Ministry of Natural Resources49, Pontificia Universidad Católica del Ecuador50, Centre national de la recherche scientifique51, Museu Paraense Emílio Goeldi52, University College London53, University of Leeds54
14 Oct 2016-Science
TL;DR: A consistent positive concave-down effect of biodiversity on forest productivity across the world is revealed, showing that a continued biodiversity loss would result in an accelerating decline in forest productivity worldwide.
Abstract: The biodiversity-productivity relationship (BPR) is foundational to our understanding of the global extinction crisis and its impacts on ecosystem functioning. Understanding BPR is critical for the accurate valuation and effective conservation of biodiversity. Using ground-sourced data from 777,126 permanent plots, spanning 44 countries and most terrestrial biomes, we reveal a globally consistent positive concave-down BPR, showing that continued biodiversity loss would result in an accelerating decline in forest productivity worldwide. The value of biodiversity in maintaining commercial forest productivity alone (US$166 billion to 490 billion per year according to our estimation) is more than twice what it would cost to implement effective global conservation. This highlights the need for a worldwide reassessment of biodiversity values, forest management strategies, and conservation priorities.

Journal ArticleDOI
TL;DR: This work reports on the observation of stable skyrmions in sputtered ultrathin Pt/Co/MgO nanostructures at room temperature and zero external magnetic field, substantiated by micromagnetic simulations and numerical models.
Abstract: Magnetic skyrmions are chiral spin structures with a whirling configuration. Their topological properties, nanometre size and the fact that they can be moved by small current densities have opened a new paradigm for the manipulation of magnetization at the nanoscale. Chiral skyrmion structures have so far been experimentally demonstrated only in bulk materials and in epitaxial ultrathin films, and under an external magnetic field or at low temperature. Here, we report on the observation of stable skyrmions in sputtered ultrathin Pt/Co/MgO nanostructures at room temperature and zero external magnetic field. We use high lateral resolution X-ray magnetic circular dichroism microscopy to image their chiral Néel internal structure, which we explain as due to the large strength of the Dzyaloshinskii–Moriya interaction as revealed by spin wave spectroscopy measurements. Our results are substantiated by micromagnetic simulations and numerical models, which allow the identification of the physical mechanisms governing the size and stability of the skyrmions.

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy1 +984 more · Institutions (116)
TL;DR: The data around the time of the event were analyzed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity.
Abstract: On September 14, 2015, the Laser Interferometer Gravitational-wave Observatory (LIGO) detected a gravitational-wave transient (GW150914); we characterise the properties of the source and its parameters. The data around the time of the event were analysed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity. GW150914 was produced by a nearly equal mass binary black hole of $36^{+5}_{-4} M_\odot$ and $29^{+4}_{-4} M_\odot$ (for each parameter we report the median value and the range of the 90% credible interval). The dimensionless spin magnitude of the more massive black hole is bound to be less than $0.7$ (at 90% probability). The luminosity distance to the source is $410^{+160}_{-180}$ Mpc, corresponding to a redshift $0.09^{+0.03}_{-0.04}$ assuming standard cosmology. The source location is constrained to an annulus section of $590$ deg$^2$, primarily in the southern hemisphere. The binary merges into a black hole of $62^{+4}_{-4} M_\odot$ and spin $0.67^{+0.05}_{-0.07}$. This black hole is significantly more massive than any other known in the stellar-mass regime.

Journal ArticleDOI
TL;DR: The authors used an attribution approach to analyse 60 years of runoff and sediment load observations from the traverse of the Yellow River over China's Loess Plateau -the source of nearly 90% of its sediment load.
Abstract: The erosion, transport and redeposition of sediments shape the Earth's surface, and affect the structure and function of ecosystems and society(1,2). The Yellow River was once the world's largest carrier of fluvial sediment, but its sediment load has decreased by approximately 90% over the past 60 years(3). The decline in sediment load is due to changes in water discharge and sediment concentration, which are both influenced by regional climate change and human activities. Here we use an attribution approach to analyse 60 years of runoff and sediment load observations from the traverse of the Yellow River over China's Loess Plateau, the source of nearly 90% of its sediment load. We find that landscape engineering, terracing and the construction of check dams and reservoirs were the primary factors driving the reduction in sediment load from the 1970s to 1990s, but large-scale vegetation restoration projects have also reduced soil erosion from the 1990s onwards. We suggest that, as the ability of existing dams and reservoirs to trap sediments declines in the future, erosion rates on the Loess Plateau will increasingly control the Yellow River's sediment load.