
Showing papers by "University of Maryland, College Park" published in 2016


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy1 +1008 more · Institutions (96)
TL;DR: This is the first direct detection of gravitational waves and the first observation of a binary black hole merger, and these observations demonstrate the existence of binary stellar-mass black hole systems.
Abstract: On September 14, 2015 at 09:50:45 UTC the two detectors of the Laser Interferometer Gravitational-Wave Observatory simultaneously observed a transient gravitational-wave signal. The signal sweeps upwards in frequency from 35 to 250 Hz with a peak gravitational-wave strain of $1.0 \times 10^{-21}$. It matches the waveform predicted by general relativity for the inspiral and merger of a pair of black holes and the ringdown of the resulting single black hole. The signal was observed with a matched-filter signal-to-noise ratio of 24 and a false alarm rate estimated to be less than 1 event per 203 000 years, equivalent to a significance greater than $5.1\sigma$. The source lies at a luminosity distance of $410^{+160}_{-180}$ Mpc corresponding to a redshift $z = 0.09^{+0.03}_{-0.04}$. In the source frame, the initial black hole masses are $36^{+5}_{-4} M_\odot$ and $29^{+4}_{-4} M_\odot$, and the final black hole mass is $62^{+4}_{-4} M_\odot$, with $3.0^{+0.5}_{-0.5} M_\odot c^2$ radiated in gravitational waves. All uncertainties define 90% credible intervals. These observations demonstrate the existence of binary stellar-mass black hole systems. This is the first direct detection of gravitational waves and the first observation of a binary black hole merger.
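As a point of orientation (arithmetic added here from the quoted figures, not an additional result of the paper), the radiated energy of $3.0\,M_\odot c^2$ corresponds to

$$ E \approx 3.0 \times \left(1.989\times10^{30}\,\mathrm{kg}\right) \times \left(2.998\times10^{8}\,\mathrm{m\,s^{-1}}\right)^{2} \approx 5.4\times10^{47}\,\mathrm{J}, $$

which is simply the mass deficit between the initial masses ($36 + 29 = 65\,M_\odot$) and the final $62\,M_\odot$ black hole, within the stated 90% credible intervals.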

9,596 citations


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4 +2519 more · Institutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macro-autophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, M. R. Abernathy3 +970 more · Institutions (114)
TL;DR: This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.
Abstract: We report the observation of a gravitational-wave signal produced by the coalescence of two stellar-mass black holes. The signal, GW151226, was observed by the twin detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) on December 26, 2015 at 03:38:53 UTC. The signal was initially identified within 70 s by an online matched-filter search targeting binary coalescences. Subsequent off-line analyses recovered GW151226 with a network signal-to-noise ratio of 13 and a significance greater than $5\sigma$. The signal persisted in the LIGO frequency band for approximately 1 s, increasing in frequency and amplitude over about 55 cycles from 35 to 450 Hz, and reached a peak gravitational strain of $3.4^{+0.7}_{-0.9} \times 10^{-22}$. The inferred source-frame initial black hole masses are $14.2^{+8.3}_{-3.7} M_\odot$ and $7.5^{+2.3}_{-2.3} M_\odot$, and the final black hole mass is $20.8^{+6.1}_{-1.7} M_\odot$. We find that at least one of the component black holes has spin greater than 0.2. This source is located at a luminosity distance of $440^{+180}_{-190}$ Mpc corresponding to a redshift $0.09^{+0.03}_{-0.04}$. All uncertainties define a 90% credible interval. This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.

3,448 citations


Journal ArticleDOI
TL;DR: The optimal simulation protocol for each program has been implemented in CHARMM-GUI and is expected to be applicable to the remainder of the additive C36 FF including the proteins, nucleic acids, carbohydrates, and small molecules.
Abstract: Proper treatment of nonbonded interactions is essential for the accuracy of molecular dynamics (MD) simulations, especially in studies of lipid bilayers. The use of the CHARMM36 force field (C36 FF) in different MD simulation programs can result in disagreements with published simulations performed with CHARMM due to differences in the protocols used to treat the long-range and 1-4 nonbonded interactions. In this study, we systematically test the use of the C36 lipid FF in NAMD, GROMACS, AMBER, OpenMM, and CHARMM/OpenMM. A wide range of Lennard-Jones (LJ) cutoff schemes and integrator algorithms were tested to find the optimal simulation protocol to best match bilayer properties of six lipids with varying acyl chain saturation and head groups. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) bilayer were used to obtain the optimal protocol for each program. MD simulations with all programs were found to reasonably match the DPPC bilayer properties (surface area per lipid, chain order para...
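For readers reproducing this kind of comparison, the surface area per lipid mentioned above is usually computed from the lateral box dimensions of the simulation; a minimal Python sketch (the box size and lipid count below are illustrative values, not results from this study):

```python
# Minimal sketch: surface area per lipid from the lateral (x-y) box dimensions
# of a planar bilayer. The numbers below are illustrative, not data from the study.
def area_per_lipid(box_x_nm: float, box_y_nm: float, lipids_per_leaflet: int) -> float:
    """Return the surface area per lipid in nm^2."""
    return (box_x_nm * box_y_nm) / lipids_per_leaflet

# Example: a hypothetical 72-lipid-per-leaflet DPPC patch in a ~6.8 x 6.8 nm box
print(area_per_lipid(6.8, 6.8, 72))  # ~0.64 nm^2, in the typical range for fluid-phase DPPC
```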

2,182 citations


Journal ArticleDOI
TL;DR: The Scenario Model Intercomparison Project (ScenarioMIP) as discussed by the authors is the primary activity within Phase 6 of the Coupled Model Intercomparison Project (CMIP6) that will provide multi-model climate projections based on alternative scenarios of future emissions and land use changes produced with integrated assessment models.
Abstract: . Projections of future climate change play a fundamental role in improving understanding of the climate system as well as characterizing societal risks and response options. The Scenario Model Intercomparison Project (ScenarioMIP) is the primary activity within Phase 6 of the Coupled Model Intercomparison Project (CMIP6) that will provide multi-model climate projections based on alternative scenarios of future emissions and land use changes produced with integrated assessment models. In this paper, we describe ScenarioMIP's objectives, experimental design, and its relation to other activities within CMIP6. The ScenarioMIP design is one component of a larger scenario process that aims to facilitate a wide range of integrated studies across the climate science, integrated assessment modeling, and impacts, adaptation, and vulnerability communities, and will form an important part of the evidence base in the forthcoming Intergovernmental Panel on Climate Change (IPCC) assessments. At the same time, it will provide the basis for investigating a number of targeted science and policy questions that are especially relevant to scenario-based analysis, including the role of specific forcings such as land use and aerosols, the effect of a peak and decline in forcing, the consequences of scenarios that limit warming to below 2 °C, the relative contributions to uncertainty from scenarios, climate models, and internal variability, and long-term climate system outcomes beyond the 21st century. To serve this wide range of scientific communities and address these questions, a design has been identified consisting of eight alternative 21st century scenarios plus one large initial condition ensemble and a set of long-term extensions, divided into two tiers defined by relative priority. Some of these scenarios will also provide a basis for variants planned to be run in other CMIP6-Endorsed MIPs to investigate questions related to specific forcings. Harmonized, spatially explicit emissions and land use scenarios generated with integrated assessment models will be provided to participating climate modeling groups by late 2016, with the climate model simulations run within the 2017–2018 time frame, and output from the climate model projections made available and analyses performed over the 2018–2020 period.

1,758 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used three long-term satellite leaf area index (LAI) records and ten global ecosystem models to investigate four key drivers of LAI trends during 1982-2009.
Abstract: Global environmental change is rapidly altering the dynamics of terrestrial vegetation, with consequences for the functioning of the Earth system and provision of ecosystem services(1,2). Yet how global vegetation is responding to the changing environment is not well established. Here we use three long-term satellite leaf area index (LAI) records and ten global ecosystem models to investigate four key drivers of LAI trends during 1982-2009. We show a persistent and widespread increase of growing season integrated LAI (greening) over 25% to 50% of the global vegetated area, whereas less than 4% of the globe shows decreasing LAI (browning). Factorial simulations with multiple global ecosystem models suggest that CO2 fertilization effects explain 70% of the observed greening trend, followed by nitrogen deposition (9%), climate change (8%) and land cover change (LCC) (4%). CO2 fertilization effects explain most of the greening trends in the tropics, whereas climate change resulted in greening of the high latitudes and the Tibetan Plateau. LCC contributed most to the regional greening observed in southeast China and the eastern United States. The regional effects of unexplained factors suggest that the next generation of ecosystem models will need to explore the impacts of forest demography, differences in regional management intensities for cropland and pastures, and other emerging productivity constraints such as phosphorus availability.

1,534 citations


Proceedings ArticleDOI
22 May 2016
TL;DR: In this article, the authors present Hawk, a decentralized smart contract system that does not store financial transactions in the clear on the blockchain, thus retaining transactional privacy from the public's view.
Abstract: Emerging smart contract systems over decentralized cryptocurrencies allow mutually distrustful parties to transact safely without trusted third parties. In the event of contractual breaches or aborts, the decentralized blockchain ensures that honest parties obtain commensurate compensation. Existing systems, however, lack transactional privacy. All transactions, including flow of money between pseudonyms and amount transacted, are exposed on the blockchain. We present Hawk, a decentralized smart contract system that does not store financial transactions in the clear on the blockchain, thus retaining transactional privacy from the public's view. A Hawk programmer can write a private smart contract in an intuitive manner without having to implement cryptography, and our compiler automatically generates an efficient cryptographic protocol where contractual parties interact with the blockchain, using cryptographic primitives such as zero-knowledge proofs. To formally define and reason about the security of our protocols, we are the first to formalize the blockchain model of cryptography. The formal modeling is of independent interest. We advocate the community to adopt such a formal model when designing applications atop decentralized blockchains.

1,523 citations


Posted Content
TL;DR: The authors prune filters from CNNs that are identified as having a small effect on the output accuracy; by removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly.
Abstract: The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.
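The ranking step can be illustrated with a short PyTorch sketch. One common magnitude-based criterion in this line of work is the per-filter L1 norm (the sum of absolute kernel weights); the sketch below uses that criterion for illustration, with made-up layer sizes and a hypothetical helper name, and it omits the handling of downstream layers and the retraining step:

```python
import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float = 0.66) -> nn.Conv2d:
    """Illustrative sketch: keep the filters with the largest L1 norms and drop the rest
    (adjusting the next layer's input channels and retraining are not handled here)."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    # L1 norm of each output filter: sum of |weights| over (in_channels, kH, kW)
    l1 = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    keep = torch.argsort(l1, descending=True)[:n_keep]
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

# Example: keep roughly two thirds of the filters in a 64-filter layer
layer = nn.Conv2d(3, 64, 3, padding=1)
print(prune_conv_filters(layer).out_channels)  # 42
```

In the setting described by the abstract, the corresponding input channels of the next layer (the "connecting feature maps") are removed as well, and the network is then retrained to recover accuracy.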

1,435 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, M. R. Abernathy1 +976 more · Institutions (107)
TL;DR: It is found that the final remnant's mass and spin, as determined from the low-frequency and high-frequency phases of the signal, are mutually consistent with the binary black-hole solution in general relativity.
Abstract: The LIGO detection of GW150914 provides an unprecedented opportunity to study the two-body motion of a compact-object binary in the large-velocity, highly nonlinear regime, and to witness the final merger of the binary and the excitation of uniquely relativistic modes of the gravitational field. We carry out several investigations to determine whether GW150914 is consistent with a binary black-hole merger in general relativity. We find that the final remnant’s mass and spin, as determined from the low-frequency (inspiral) and high-frequency (postinspiral) phases of the signal, are mutually consistent with the binary black-hole solution in general relativity. Furthermore, the data following the peak of GW150914 are consistent with the least-damped quasinormal mode inferred from the mass and spin of the remnant black hole. By using waveform models that allow for parametrized general-relativity violations during the inspiral and merger phases, we perform quantitative tests on the gravitational-wave phase in the dynamical regime and we determine the first empirical bounds on several high-order post-Newtonian coefficients. We constrain the graviton Compton wavelength, assuming that gravitons are dispersed in vacuum in the same way as particles with mass, obtaining a 90%-confidence lower bound of $10^{13}$ km. In conclusion, within our statistical uncertainties, we find no evidence for violations of general relativity in the genuinely strong-field regime of gravity.
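For orientation, the Compton-wavelength bound quoted above converts to a graviton mass bound through the standard relation (the conversion is added here for convenience; only the wavelength bound is the paper's):

$$ m_g c^{2} = \frac{hc}{\lambda_g} \lesssim \frac{1.24\times10^{-6}\ \mathrm{eV\,m}}{10^{16}\ \mathrm{m}} \approx 1.2\times10^{-22}\ \mathrm{eV}, \qquad \lambda_g \gtrsim 10^{13}\ \mathrm{km} = 10^{16}\ \mathrm{m}. $$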

1,421 citations


Book ChapterDOI
08 Oct 2016
TL;DR: In this article, the authors estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image by fitting a statistical body shape model to the 2D joints.
Abstract: We describe the first method to automatically estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image. We estimate a full 3D mesh and show that 2D joints alone carry a surprising amount of information about body shape. The problem is challenging because of the complexity of the human body, articulation, occlusion, clothing, lighting, and the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D body joint locations. We then fit (top-down) a recently published statistical body shape model, called SMPL, to the 2D joints. We do so by minimizing an objective function that penalizes the error between the projected 3D model joints and detected 2D joints. Because SMPL captures correlations in human shape across the population, we are able to robustly fit it to very little data. We further leverage the 3D model to prevent solutions that cause interpenetration. We evaluate our method, SMPLify, on the Leeds Sports, HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect to the state of the art.
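The data term at the core of this fitting procedure is a robust reprojection error between projected model joints and detected 2D joints. The schematic sketch below uses a simplified weak-perspective camera and hypothetical array names, and it omits the pose, shape, and interpenetration priors described in the paper:

```python
import numpy as np

def reprojection_error(joints3d, joints2d, conf, scale, trans):
    """Schematic SMPLify-style data term (not the authors' code):
    robust distance between detected 2D joints and weak-perspective projections
    of the model's 3D joints. joints3d: (J, 3); joints2d: (J, 2); conf: (J,)."""
    projected = scale * joints3d[:, :2] + trans            # weak-perspective projection
    residual = np.linalg.norm(projected - joints2d, axis=1)
    rho = residual**2 / (residual**2 + 100.0**2)           # Geman-McClure-style robustifier (100 px scale)
    return float(np.sum(conf * rho))

# Toy usage with random values; real use would minimize this over SMPL pose/shape parameters
rng = np.random.default_rng(0)
print(reprojection_error(rng.normal(size=(14, 3)), rng.normal(size=(14, 2)),
                         np.ones(14), scale=1.0, trans=np.zeros(2)))
```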

1,366 citations


Journal ArticleDOI
University of East Anglia1, University of Oslo2, Commonwealth Scientific and Industrial Research Organisation3, University of Exeter4, Oak Ridge National Laboratory5, National Oceanic and Atmospheric Administration6, Woods Hole Research Center7, University of California, San Diego8, Karlsruhe Institute of Technology9, Cooperative Institute for Marine and Atmospheric Studies10, Centre national de la recherche scientifique11, University of Maryland, College Park12, National Institute of Water and Atmospheric Research13, Woods Hole Oceanographic Institution14, Flanders Marine Institute15, Alfred Wegener Institute for Polar and Marine Research16, Netherlands Environmental Assessment Agency17, University of Illinois at Urbana–Champaign18, Leibniz Institute of Marine Sciences19, Max Planck Society20, University of Paris21, Hobart Corporation22, University of Bern23, Oeschger Centre for Climate Change Research24, National Center for Atmospheric Research25, University of Miami26, Council of Scientific and Industrial Research27, University of Colorado Boulder28, National Institute for Environmental Studies29, Joint Institute for the Study of the Atmosphere and Ocean30, Geophysical Institute, University of Bergen31, Montana State University32, Goddard Space Flight Center33, University of New Hampshire34, Bjerknes Centre for Climate Research35, Imperial College London36, Lamont–Doherty Earth Observatory37, Auburn University38, Wageningen University and Research Centre39, VU University Amsterdam40, Met Office41
TL;DR: In this article, the authors quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the “global carbon budget” – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates and consistency within and among components, alongside methodology and data limitations. CO2 emissions from fossil fuels and industry (EFF) are based on energy statistics and cement production data, respectively, while emissions from land-use change (ELUC), mainly deforestation, are based on combined evidence from land-cover change data, fire activity associated with deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the annual changes in concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. The variability in SOCEAN is evaluated with data products based on surveys of ocean CO2 measurements. The global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms of the global carbon budget and compared to results of independent dynamic global vegetation models. We compare the mean land and ocean fluxes and their variability to estimates from three atmospheric inverse methods for three broad latitude bands. All uncertainties are reported as ±1σ, reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. For the last decade available (2006–2015), EFF was 9.3 ± 0.5 GtC yr⁻¹, ELUC 1.0 ± 0.5 GtC yr⁻¹, GATM 4.5 ± 0.1 GtC yr⁻¹, SOCEAN 2.6 ± 0.5 GtC yr⁻¹, and SLAND 3.1 ± 0.9 GtC yr⁻¹. For year 2015 alone, the growth in EFF was approximately zero and emissions remained at 9.9 ± 0.5 GtC yr⁻¹, showing a slowdown in growth of these emissions compared to the average growth of 1.8 % yr⁻¹ that took place during 2006–2015. Also, for 2015, ELUC was 1.3 ± 0.5 GtC yr⁻¹, GATM was 6.3 ± 0.2 GtC yr⁻¹, SOCEAN was 3.0 ± 0.5 GtC yr⁻¹, and SLAND was 1.9 ± 0.9 GtC yr⁻¹. GATM was higher in 2015 compared to the past decade (2006–2015), reflecting a smaller SLAND for that year. The global atmospheric CO2 concentration reached 399.4 ± 0.1 ppm averaged over 2015. For 2016, preliminary data indicate the continuation of low growth in EFF with +0.2 % (range of −1.0 to +1.8 %) based on national emissions projections for China and USA, and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. In spite of the low growth of EFF in 2016, the growth rate in atmospheric CO2 concentration is expected to be relatively high because of the persistence of the smaller residual terrestrial sink (SLAND) in response to El Niño conditions of 2015–2016. From this projection of EFF and assumed constant ELUC for 2016, cumulative emissions of CO2 will reach 565 ± 55 GtC (2075 ± 205 GtCO2) for 1870–2016, about 75 % from EFF and 25 % from ELUC.
This living data update documents changes in the methods and data sets used in this new carbon budget compared with previous publications of this data set (Le Quéré et al., 2015b, a, 2014, 2013). All observations presented here can be downloaded from the Carbon Dioxide Information Analysis Center (doi:10.3334/CDIAC/GCP_2016).
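Since SLAND is defined as the budget residual, the decadal means quoted above can be checked against the budget identity (written here with the abstract's symbols EFF, ELUC, GATM, SOCEAN, and SLAND; the arithmetic is added for clarity and is not part of the paper):

$$ E_{\mathrm{FF}} + E_{\mathrm{LUC}} = G_{\mathrm{ATM}} + S_{\mathrm{OCEAN}} + S_{\mathrm{LAND}} \quad\Rightarrow\quad 9.3 + 1.0 \;\approx\; 4.5 + 2.6 + 3.1 \ \ \mathrm{GtC\,yr^{-1}}, $$

i.e. total sources of about 10.3 GtC yr⁻¹ against total sinks of about 10.2 GtC yr⁻¹ for 2006–2015, consistent within the stated ±1σ uncertainties.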

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy3 +978 more · Institutions (112)
TL;DR: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers as discussed by the authors.
Abstract: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers. In this paper we present full results from a search for binary black hole merger signals with total masses up to $100 M_\odot$ and detailed implications from our observations of these systems. Our search, based on general-relativistic models of gravitational wave signals from binary black hole systems, unambiguously identified two signals, GW150914 and GW151226, with a significance of greater than $5\sigma$ over the observing period. It also identified a third possible signal, LVT151012, with substantially lower significance, which has an 87% probability of being of astrophysical origin. We provide detailed estimates of the parameters of the observed systems. Both GW150914 and GW151226 provide an unprecedented opportunity to study the two-body motion of a compact-object binary in the large velocity, highly nonlinear regime. We do not observe any deviations from general relativity, and place improved empirical bounds on several high-order post-Newtonian coefficients. From our observations we infer stellar-mass binary black hole merger rates lying in the range 9–240 $\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}$. These observations are beginning to inform astrophysical predictions of binary black hole formation rates, and indicate that future observing runs of the Advanced detector network will yield many more gravitational wave detections.

Journal ArticleDOI
Craig J. Pollock1, T. E. Moore1, A. D. Jacques1, James L. Burch2, U. Gliese1, Yoshifumi Saito, T. Omoto, Levon A. Avanov1, Levon A. Avanov3, A. C. Barrie1, Victoria N. Coffey4, John C. Dorelli1, Daniel J. Gershman1, Daniel J. Gershman5, Daniel J. Gershman3, Barbara L. Giles1, T. Rosnack1, C. Salo1, Shoichiro Yokota, M. L. Adrian1, C. Aoustin, C. Auletti1, S. Aung1, V. Bigio1, N. Cao1, Michael O. Chandler4, Dennis J. Chornay1, Dennis J. Chornay3, K. Christian1, George Clark6, George Clark1, George Clark7, Glyn Collinson1, Glyn Collinson6, T. Corris1, A. De Los Santos2, R. Devlin1, T. Diaz2, T. Dickerson1, C. Dickson1, A. Diekmann4, F. Diggs1, C. Duncan1, A. Figueroa-Vinas1, C. Firman1, M. Freeman2, N. Galassi1, K. Garcia1, G. Goodhart2, D. Guererro2, J. Hageman1, Jennifer Hanley2, E. Hemminger1, Matthew Holland1, M. Hutchins2, T. James1, W. Jones1, S. Kreisler1, Joseph Kujawski8, Joseph Kujawski1, V. Lavu1, J. V. Lobell1, E. LeCompte, A. Lukemire, Elizabeth MacDonald1, Al. Mariano1, Toshifumi Mukai, K. Narayanan1, Q. Nguyan1, M. Onizuka1, William R. Paterson1, S. Persyn2, Benjamin M. Piepgrass2, F. Cheney1, A. C. Rager1, A. C. Rager6, T. Raghuram1, A. Ramil1, L. S. Reichenthal1, H. Rodriguez2, Jean-Noël Rouzaud, A. Rucker1, Marilia Samara1, Jean-André Sauvaud, D. Schuster1, M. Shappirio1, K. Shelton1, D. Sher1, David Smith1, Kerrington D. Smith2, S. E. Smith6, S. E. Smith1, D. Steinfeld1, R. Szymkiewicz1, K. Tanimoto, J. Taylor2, Compton J. Tucker1, K. Tull1, A. Uhl1, J. Vloet2, P. Walpole1, P. Walpole2, S. Weidner2, D. White2, G. E. Winkert1, P.-S. Yeh1, M. Zeuch1 
TL;DR: The Fast Plasma Investigation (FPI) was developed for flight on the Magnetospheric Multiscale (MMS) mission to measure the differential directional flux of magnetospheric electrons and ions with unprecedented time resolution to resolve kinetic-scale plasma dynamics as mentioned in this paper.
Abstract: The Fast Plasma Investigation (FPI) was developed for flight on the Magnetospheric Multiscale (MMS) mission to measure the differential directional flux of magnetospheric electrons and ions with unprecedented time resolution to resolve kinetic-scale plasma dynamics. This increased resolution has been accomplished by placing four dual 180-degree top hat spectrometers for electrons and four dual 180-degree top hat spectrometers for ions around the periphery of each of four MMS spacecraft. Using electrostatic field-of-view deflection, the eight spectrometers for each species together provide 4π-sr field-of-view with, at worst, 11.25-degree sample spacing. Energy/charge sampling is provided by swept electrostatic energy/charge selection over the range from 10 eV/q to 30000 eV/q. The eight dual spectrometers on each spacecraft are controlled and interrogated by a single block redundant Instrument Data Processing Unit, which in turn interfaces to the observatory’s Instrument Suite Central Instrument Data Processor. This paper describes the design of FPI, its ground and in-flight calibration, its operational concept, and its data products.

Journal ArticleDOI
TL;DR: The goal of this study is to review the fundamental structures and chemistries of wood and wood-derived materials, which are essential for a wide range of existing and new enabling technologies.
Abstract: With the arising of global climate change and resource shortage, in recent years, increased attention has been paid to environmentally friendly materials. Trees are sustainable and renewable materials, which give us shelter and oxygen and remove carbon dioxide from the atmosphere. Trees are a primary resource that human society depends upon every day, for example, homes, heating, furniture, and aircraft. Wood from trees gives us paper, cardboard, and medical supplies, thus impacting our homes, school, work, and play. All of the above-mentioned applications have been well developed over the past thousands of years. However, trees and wood have much more to offer us as advanced materials, impacting emerging high-tech fields, such as bioengineering, flexible electronics, and clean energy. Wood naturally has a hierarchical structure, composed of well-oriented microfibers and tracheids for water, ion, and oxygen transportation during metabolism. At higher magnification, the walls of fiber cells have an interes...

Journal ArticleDOI
TL;DR: The Landsat 8 Operational Land Imager (OLI) atmospheric correction algorithm is developed using the Second Simulation of the Satellite Signal in the Solar Spectrum Vectorial (6SV) model, refined to take advantage of the narrow OLI spectral bands, improved radiometric resolution and signal-to-noise.

Journal ArticleDOI
TL;DR: In this article, the authors quantify potential global impacts of different negative emissions technologies on various factors (such as land, greenhouse gas emissions, water, albedo, nutrients and energy) to determine the biophysical limits to, and economic costs of, their widespread application.
Abstract: To have a >50% chance of limiting warming below 2 °C, most recent scenarios from integrated assessment models (IAMs) require large-scale deployment of negative emissions technologies (NETs). These are technologies that result in the net removal of greenhouse gases from the atmosphere. We quantify potential global impacts of the different NETs on various factors (such as land, greenhouse gas emissions, water, albedo, nutrients and energy) to determine the biophysical limits to, and economic costs of, their widespread application. Resource implications vary between technologies and need to be satisfactorily addressed if NETs are to have a significant role in achieving climate goals.

Journal ArticleDOI
07 Jan 2016-Nature
TL;DR: The difference between the planetary radius measured at optical and infrared wavelengths is an effective metric for distinguishing different atmosphere types; it correlates with the spectral strength of water, such that strong water absorption lines are seen in clear-atmosphere planets and the weakest features are associated with clouds and hazes.
Abstract: Thousands of transiting exoplanets have been discovered, but spectral analysis of their atmospheres has so far been dominated by a small number of exoplanets and data spanning relatively narrow wavelength ranges (such as 1.1-1.7 micrometres). Recent studies show that some hot-Jupiter exoplanets have much weaker water absorption features in their near-infrared spectra than predicted. The low amplitude of water signatures could be explained by very low water abundances, which may be a sign that water was depleted in the protoplanetary disk at the planet's formation location, but it is unclear whether this level of depletion can actually occur. Alternatively, these weak signals could be the result of obscuration by clouds or hazes, as found in some optical spectra. Here we report results from a comparative study of ten hot Jupiters covering the wavelength range 0.3-5 micrometres, which allows us to resolve both the optical scattering and infrared molecular absorption spectroscopically. Our results reveal a diverse group of hot Jupiters that exhibit a continuum from clear to cloudy atmospheres. We find that the difference between the planetary radius measured at optical and infrared wavelengths is an effective metric for distinguishing different atmosphere types. The difference correlates with the spectral strength of water, so that strong water absorption lines are seen in clear-atmosphere planets and the weakest features are associated with clouds and hazes. This result strongly suggests that primordial water depletion during formation is unlikely and that clouds and hazes are the cause of weaker spectral signatures.
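To see why an optical–infrared radius difference is a natural metric, it helps to compare it against the atmospheric scale height; the quick estimate below uses representative hot-Jupiter values chosen for illustration, not parameters from the paper:

```python
# Order-of-magnitude atmospheric scale height H = k_B * T / (mu * m_u * g)
# for a representative hot Jupiter (illustrative values, not from the paper).
k_B, m_u = 1.381e-23, 1.661e-27      # Boltzmann constant (J/K), atomic mass unit (kg)
T, mu, g = 1500.0, 2.3, 10.0         # temperature (K), mean molecular weight, gravity (m/s^2)
H = k_B * T / (mu * m_u * g)
print(f"Scale height ~ {H/1e3:.0f} km")  # ~540 km, a small fraction of the planetary radius
```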

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy1 +984 more · Institutions (116)
TL;DR: The data around the time of the event were analyzed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity.
Abstract: On September 14, 2015, the Laser Interferometer Gravitational-wave Observatory (LIGO) detected a gravitational-wave transient (GW150914); we characterise the properties of the source and its parameters. The data around the time of the event were analysed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity. GW150914 was produced by a nearly equal mass binary black hole of $36^{+5}_{-4} M_\odot$ and $29^{+4}_{-4} M_\odot$ (for each parameter we report the median value and the range of the 90% credible interval). The dimensionless spin magnitude of the more massive black hole is bound to be $<0.7$ (at 90% probability). The luminosity distance to the source is $410^{+160}_{-180}$ Mpc, corresponding to a redshift $0.09^{+0.03}_{-0.04}$ assuming standard cosmology. The source location is constrained to an annulus section of $590$ deg$^2$, primarily in the southern hemisphere. The binary merges into a black hole of $62^{+4}_{-4} M_\odot$ and spin $0.67^{+0.05}_{-0.07}$. This black hole is significantly more massive than any other known in the stellar-mass regime.

Journal ArticleDOI
12 May 2016-Nature
TL;DR: It is found that genes that were retained as duplicates after the teleost-specific whole-genome duplication 320 million years ago were not more likely to be retained after the Ss4R, and that the duplicate retention was not influenced to a great extent by the nature of the predicted protein interactions of the gene products.
Abstract: The whole-genome duplication 80 million years ago of the common ancestor of salmonids (salmonid-specific fourth vertebrate whole-genome duplication, Ss4R) provides unique opportunities to learn about the evolutionary fate of a duplicated vertebrate genome in 70 extant lineages. Here we present a high-quality genome assembly for Atlantic salmon (Salmo salar), and show that large genomic reorganizations, coinciding with bursts of transposon-mediated repeat expansions, were crucial for the post-Ss4R rediploidization process. Comparisons of duplicate gene expression patterns across a wide range of tissues with orthologous genes from a pre-Ss4R outgroup unexpectedly demonstrate far more instances of neofunctionalization than subfunctionalization. Surprisingly, we find that genes that were retained as duplicates after the teleost-specific whole-genome duplication 320 million years ago were not more likely to be retained after the Ss4R, and that the duplicate retention was not influenced to a great extent by the nature of the predicted protein interactions of the gene products. Finally, we demonstrate that the Atlantic salmon assembly can serve as a reference sequence for the study of other salmonids for a range of purposes.

Journal ArticleDOI
Sergey Alekhin, Wolfgang Altmannshofer1, Takehiko Asaka2, Brian Batell3, Fedor Bezrukov4, Kyrylo Bondarenko5, Alexey Boyarsky5, Ki-Young Choi6, Cristóbal Corral7, Nathaniel Craig8, David Curtin9, Sacha Davidson10, Sacha Davidson11, André de Gouvêa12, Stefano Dell'Oro, Patrick deNiverville13, P. S. Bhupal Dev14, Herbi K. Dreiner15, Marco Drewes16, Shintaro Eijima17, Rouven Essig18, Anthony Fradette13, Björn Garbrecht16, Belen Gavela19, Gian F. Giudice3, Mark D. Goodsell20, Mark D. Goodsell21, Dmitry Gorbunov22, Stefania Gori1, Christophe Grojean23, Alberto Guffanti24, Thomas Hambye25, Steen Honoré Hansen24, Juan Carlos Helo7, Juan Carlos Helo26, Pilar Hernández27, Alejandro Ibarra16, Artem Ivashko5, Artem Ivashko28, Eder Izaguirre1, Joerg Jaeckel29, Yu Seon Jeong30, Felix Kahlhoefer, Yonatan Kahn31, Andrey Katz3, Andrey Katz32, Andrey Katz33, Choong Sun Kim30, Sergey Kovalenko7, Gordan Krnjaic1, Valery E. Lyubovitskij34, Valery E. Lyubovitskij35, Valery E. Lyubovitskij36, Simone Marcocci, Matthew McCullough3, David McKeen37, Guenakh Mitselmakher38, Sven Moch39, Rabindra N. Mohapatra9, David E. Morrissey40, Maksym Ovchynnikov28, Emmanuel A. Paschos, Apostolos Pilaftsis14, Maxim Pospelov1, Maxim Pospelov13, Mary Hall Reno41, Andreas Ringwald, Adam Ritz13, Leszek Roszkowski, Valery Rubakov, Oleg Ruchayskiy17, Oleg Ruchayskiy24, Ingo Schienbein42, Daniel Schmeier15, Kai Schmidt-Hoberg, Pedro Schwaller3, Goran Senjanovic43, Osamu Seto44, Mikhail Shaposhnikov17, Lesya Shchutska38, J. Shelton45, Robert Shrock18, Brian Shuve1, Michael Spannowsky46, Andrew Spray47, Florian Staub3, Daniel Stolarski3, Matt Strassler33, Vladimir Tello, Francesco Tramontano48, Anurag Tripathi, Sean Tulin49, Francesco Vissani, Martin Wolfgang Winkler15, Kathryn M. Zurek50, Kathryn M. Zurek51 
Perimeter Institute for Theoretical Physics1, Niigata University2, CERN3, University of Connecticut4, Leiden University5, Korea Astronomy and Space Science Institute6, Federico Santa María Technical University7, University of California, Santa Barbara8, University of Maryland, College Park9, Claude Bernard University Lyon 110, University of Lyon11, Northwestern University12, University of Victoria13, University of Manchester14, University of Bonn15, Technische Universität München16, École Polytechnique Fédérale de Lausanne17, Stony Brook University18, Autonomous University of Madrid19, University of Paris20, Centre national de la recherche scientifique21, Moscow Institute of Physics and Technology22, Autonomous University of Barcelona23, University of Copenhagen24, Université libre de Bruxelles25, University of La Serena26, University of Valencia27, Taras Shevchenko National University of Kyiv28, Heidelberg University29, Yonsei University30, Princeton University31, University of Geneva32, Harvard University33, Tomsk Polytechnic University34, University of Tübingen35, Tomsk State University36, University of Washington37, University of Florida38, University of Hamburg39, TRIUMF40, University of Iowa41, University of Grenoble42, International Centre for Theoretical Physics43, Hokkai Gakuen University44, University of Illinois at Urbana–Champaign45, Durham University46, University of Melbourne47, University of Naples Federico II48, York University49, University of California, Berkeley50, Lawrence Berkeley National Laboratory51
TL;DR: It is demonstrated that the SHiP experiment has a unique potential to discover new physics and can directly probe a number of solutions of beyond the standard model puzzles, such as neutrino masses, baryon asymmetry of the Universe, dark matter, and inflation.
Abstract: This paper describes the physics case for a new fixed target facility at CERN SPS. The SHiP (search for hidden particles) experiment is intended to hunt for new physics in the largely unexplored domain of very weakly interacting particles with masses below the Fermi scale, inaccessible to the LHC experiments, and to study tau neutrino physics. The same proton beam setup can be used later to look for decays of tau-leptons with lepton flavour number non-conservation, $\tau \to 3\mu $ and to search for weakly-interacting sub-GeV dark matter candidates. We discuss the evidence for physics beyond the standard model and describe interactions between new particles and four different portals—scalars, vectors, fermions or axion-like particles. We discuss motivations for different models, manifesting themselves via these interactions, and how they can be probed with the SHiP experiment and present several case studies. The prospects to search for relatively light SUSY and composite particles at SHiP are also discussed. We demonstrate that the SHiP experiment has a unique potential to discover new physics and can directly probe a number of solutions of beyond the standard model puzzles, such as neutrino masses, baryon asymmetry of the Universe, dark matter, and inflation.

Book ChapterDOI
22 Feb 2016
TL;DR: In this article, the authors analyze how fundamental and circumstantial bottlenecks in Bitcoin limit the ability of its current peer-to-peer overlay network to support substantially higher throughputs and lower latencies.
Abstract: The increasing popularity of blockchain-based cryptocurrencies has made scalability a primary and urgent concern. We analyze how fundamental and circumstantial bottlenecks in Bitcoin limit the ability of its current peer-to-peer overlay network to support substantially higher throughputs and lower latencies. Our results suggest that reparameterization of block size and intervals should be viewed only as a first increment toward achieving next-generation, high-load blockchain protocols, and major advances will additionally require a basic rethinking of technical approaches. We offer a structured perspective on the design space for such approaches. Within this perspective, we enumerate and briefly discuss a number of recently proposed protocol ideas and offer several new ideas and open challenges.
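The reparameterization point can be made concrete with the usual back-of-the-envelope throughput estimate (a standard approximation with illustrative round numbers, not figures taken from the paper):

```python
# Rough throughput of a Bitcoin-like chain: block_size / (avg_tx_size * block_interval).
# The transaction size and parameters below are illustrative round numbers.
def tx_per_second(block_size_bytes: int, avg_tx_bytes: int, block_interval_s: int) -> float:
    return block_size_bytes / (avg_tx_bytes * block_interval_s)

print(tx_per_second(1_000_000, 250, 600))   # ~6.7 tx/s with 1 MB blocks every 10 minutes
print(tx_per_second(8_000_000, 250, 600))   # ~53 tx/s if only the block size is scaled up 8x
```

Raising these parameters also stresses block propagation across the peer-to-peer overlay, which is one reason the authors view reparameterization only as a first increment.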

Journal ArticleDOI
TL;DR: Improvements to the fire detection algorithm and swath-level product, implemented as part of the Collection 6 land-product reprocessing that commenced in May 2015, yield better performance of the Collection 6 active fire detection algorithm compared to Collection 5, with reduced omission errors over large fires and reduced false alarm rates in tropical ecosystems.

Journal ArticleDOI
Fengpeng An1, Guangpeng An, Qi An2, Vito Antonelli3 +226 more · Institutions (55)
TL;DR: The Jiangmen Underground Neutrino Observatory (JUNO) as mentioned in this paper is a 20 kton multi-purpose underground liquid scintillator detector with the determination of the neutrino mass hierarchy (MH) as a primary physics goal.
Abstract: The Jiangmen Underground Neutrino Observatory (JUNO), a 20 kton multi-purpose underground liquid scintillator detector, was proposed with the determination of the neutrino mass hierarchy (MH) as a primary physics goal. The excellent energy resolution and the large fiducial volume anticipated for the JUNO detector offer exciting opportunities for addressing many important topics in neutrino and astro-particle physics. In this document, we present the physics motivations and the anticipated performance of the JUNO detector for various proposed measurements. Following an introduction summarizing the current status and open issues in neutrino physics, we discuss how the detection of antineutrinos generated by a cluster of nuclear power plants allows the determination of the neutrino MH at a 3–4σ significance with six years of running of JUNO. The measurement of antineutrino spectrum with excellent energy resolution will also lead to the precise determination of the neutrino oscillation parameters $\sin^2\theta_{12}$, $\Delta m_{21}^{2}$, and $|\Delta m_{ee}^{2}|$ to an accuracy of better than 1%, which will play a crucial role in the future unitarity test of the MNSP matrix. The JUNO detector is capable of observing not only antineutrinos from the power plants, but also neutrinos/antineutrinos from terrestrial and extra-terrestrial sources, including supernova burst neutrinos, diffuse supernova neutrino background, geoneutrinos, atmospheric neutrinos, and solar neutrinos. As a result of JUNO's large size, excellent energy resolution, and vertex reconstruction capability, interesting new data on these topics can be collected. For example, a neutrino burst from a typical core-collapse supernova at a distance of 10 kpc would lead to ∼5000 inverse-beta-decay events and ∼2000 all-flavor neutrino–proton ES events in JUNO, which are of crucial importance for understanding the mechanism of supernova explosion and for exploring novel phenomena such as collective neutrino oscillations. Detection of neutrinos from all past core-collapse supernova explosions in the visible universe with JUNO would further provide valuable information on the cosmic star-formation rate and the average core-collapse neutrino energy spectrum. Antineutrinos originating from the radioactive decay of uranium and thorium in the Earth can be detected in JUNO with a rate of ∼400 events per year, significantly improving the statistics of existing geoneutrino event samples. Atmospheric neutrino events collected in JUNO can provide independent inputs for determining the MH and the octant of the $\theta_{23}$ mixing angle. Detection of the $^{7}$Be and $^{8}$B solar neutrino events at JUNO would shed new light on the solar metallicity problem and examine the transition region between the vacuum and matter dominated neutrino oscillations. Regarding light sterile neutrino topics, sterile neutrinos with $10^{-5}\,\mathrm{eV}^{2} < \Delta m_{41}^{2} < 10^{-2}\,\mathrm{eV}^{2}$ and a sufficiently large mixing angle $\theta_{14}$ could be identified through a precise measurement of the reactor antineutrino energy spectrum. Meanwhile, JUNO can also provide us excellent opportunities to test the eV-scale sterile neutrino hypothesis, using either the radioactive neutrino sources or a cyclotron-produced neutrino beam. The JUNO detector is also sensitive to several other beyond-the-standard-model physics.
Examples include the search for proton decay via the $p \to K^{+} + \bar{\nu}$ decay channel, search for neutrinos resulting from dark-matter annihilation in the Sun, search for violation of Lorentz invariance via the sidereal modulation of the reactor neutrino event rate, and search for the effects of non-standard interactions. The proposed construction of the JUNO detector will provide a unique facility to address many outstanding crucial questions in particle and astrophysics in a timely and cost-effective fashion. It holds great potential for further advancing our quest to understand the fundamental properties of neutrinos, one of the building blocks of our Universe.

Proceedings ArticleDOI
01 Jun 2016
TL;DR: In this article, a generative model for regular motion patterns (termed as regularity) using multiple sources with very limited supervision is proposed, and two methods are built upon the autoencoders for their ability to work with little to no supervision.
Abstract: Perceiving meaningful activities in a long video sequence is a challenging problem due to ambiguous definition of 'meaningfulness' as well as clutters in the scene. We approach this problem by learning a generative model for regular motion patterns (termed as regularity) using multiple sources with very limited supervision. Specifically, we propose two methods that are built upon the autoencoders for their ability to work with little to no supervision. We first leverage the conventional handcrafted spatio-temporal local features and learn a fully connected autoencoder on them. Second, we build a fully convolutional feed-forward autoencoder to learn both the local features and the classifiers as an end-to-end learning framework. Our model can capture the regularities from multiple datasets. We evaluate our methods in both qualitative and quantitative ways - showing the learned regularity of videos in various aspects and demonstrating competitive performance on anomaly detection datasets as an application.
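A toy sketch of the second, fully convolutional approach, together with the usual way a reconstruction error is turned into a per-clip regularity score (the architecture, sizes, and names below are illustrative stand-ins, not the paper's network):

```python
import torch
import torch.nn as nn

class TinyConvAE(nn.Module):
    """Toy fully convolutional autoencoder over stacks of grayscale frames
    (an illustrative stand-in for the paper's deeper spatio-temporal model)."""
    def __init__(self, in_frames: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_frames, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, in_frames, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def regularity_scores(errors: torch.Tensor) -> torch.Tensor:
    """Regularity score: 1 minus the min-max normalized reconstruction error."""
    e = (errors - errors.min()) / (errors.max() - errors.min() + 1e-8)
    return 1.0 - e

# Toy usage: per-clip reconstruction error turned into regularity scores
model = TinyConvAE()
clips = torch.rand(4, 10, 64, 64)                   # a batch of 10-frame, 64x64 clips
err = ((model(clips) - clips) ** 2).mean(dim=(1, 2, 3))
print(regularity_scores(err))                       # low scores flag irregular (anomalous) clips
```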

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy1 +961 more · Institutions (100)
TL;DR: The discovery of GW150914 with the Advanced LIGO detectors provides the first observational evidence for the existence of binary black-hole systems that inspiral and merge within the age of the Universe as discussed by the authors.
Abstract: The discovery of the gravitational-wave source GW150914 with the Advanced LIGO detectors provides the first observational evidence for the existence of binary black-hole systems that inspiral and merge within the age of the Universe. Such black-hole mergers have been predicted in two main types of formation models, involving isolated binaries in galactic fields or dynamical interactions in young and old dense stellar environments. The measured masses robustly demonstrate that relatively "heavy" black holes (≳25 M⊙) can form in nature. This discovery implies relatively weak massive-star winds and thus the formation of GW150914 in an environment with metallicity lower than ∼1/2 of the solar value. The rate of binary black-hole mergers inferred from the observation of GW150914 is consistent with the higher end of rate predictions (≳1 Gpc⁻³ yr⁻¹) from both types of formation models. The low measured redshift (z ∼ 0.1) of GW150914 and the low inferred metallicity of the stellar progenitor imply either binary black-hole formation in a low-mass galaxy in the local Universe and a prompt merger, or formation at high redshift with a time delay between formation and merger of several Gyr. This discovery motivates further studies of binary-black-hole formation astrophysics. It also has implications for future detections and studies by Advanced LIGO and Advanced Virgo, and gravitational-wave detectors in space.

Journal ArticleDOI
TL;DR: The authors study how firms differ from their competitors using new time-varying measures of product similarity based on text-based analysis of firm 10-K product descriptions and find evidence that firm R&D and advertising are associated with subsequent differentiation from competitors.
Abstract: We study how firms differ from their competitors using new time-varying measures of product similarity based on text-based analysis of firm 10-K product descriptions. This year-by-year set of product similarity measures allows us to generate a new set of industries in which firms can have their own distinct set of competitors. Our new sets of competitors explain specific discussion of high competition, rivals identified by managers as peer firms, and changes to industry competitors following exogenous industry shocks. We also find evidence that firm R&D and advertising are associated with subsequent differentiation from competitors, consistent with theories of endogenous product differentiation.
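As a rough illustration of text-based pairwise similarity, the generic TF-IDF cosine-similarity sketch below captures the flavor of comparing product descriptions; the authors' measure is built from their own vocabulary construction and normalization of 10-K text, and the firm names and descriptions here are made up:

```python
# Generic sketch: pairwise similarity of product descriptions via TF-IDF cosine similarity.
# This is an illustration, not the authors' construction; the firms below are fictional.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

descriptions = {
    "FirmA": "enterprise cloud software for payroll and human resources",
    "FirmB": "cloud based human resources and payroll software platform",
    "FirmC": "offshore oil drilling equipment and services",
}
vectors = TfidfVectorizer(stop_words="english").fit_transform(list(descriptions.values()))
similarity = cosine_similarity(vectors)        # symmetric firm-by-firm similarity matrix
print(dict(zip(descriptions, similarity[0])))  # FirmA is close to FirmB, far from FirmC
```

Thresholding such a year-by-year similarity matrix is what allows each firm to have its own, time-varying set of competitors, as described in the abstract.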

Journal ArticleDOI
Lourens Poorter1, Frans Bongers1, T. Mitchell Aide2, Angelica M. Almeyda Zambrano3, Patricia Balvanera4, Justin M. Becknell5, Vanessa K. Boukili6, Pedro H. S. Brancalion7, Eben N. Broadbent3, Robin L. Chazdon6, Dylan Craven8, Dylan Craven9, Jarcilene S. Almeida-Cortez10, George A. L. Cabral10, Ben H. J. de Jong, Julie S. Denslow11, Daisy H. Dent12, Daisy H. Dent8, Saara J. DeWalt13, Juan Manuel Dupuy, Sandra M. Durán14, Mário M. Espírito-Santo, María C. Fandiño, Ricardo Gomes César7, Jefferson S. Hall8, José Luis Hernández-Stefanoni, Catarina C. Jakovac15, Catarina C. Jakovac1, André Braga Junqueira15, André Braga Junqueira1, Deborah K. Kennard16, Susan G. Letcher17, Juan Carlos Licona, Madelon Lohbeck18, Madelon Lohbeck1, Erika Marin-Spiotta19, Miguel Martínez-Ramos4, Paulo Eduardo dos Santos Massoca15, Jorge A. Meave4, Rita C. G. Mesquita15, Francisco Mora4, Rodrigo Muñoz4, Robert Muscarella20, Robert Muscarella21, Yule Roberta Ferreira Nunes, Susana Ochoa-Gaona, Alexandre Adalardo de Oliveira7, Edith Orihuela-Belmonte, Marielos Peña-Claros1, Eduardo A. Pérez-García4, Daniel Piotto, Jennifer S. Powers22, Jorge Rodríguez-Velázquez4, I. Eunice Romero-Pérez4, Jorge Ruiz23, Jorge Ruiz24, Juan Saldarriaga, Arturo Sanchez-Azofeifa14, Naomi B. Schwartz20, Marc K. Steininger, Nathan G. Swenson25, Marisol Toledo, María Uriarte20, Michiel van Breugel8, Michiel van Breugel26, Michiel van Breugel27, Hans van der Wal28, Maria das Dores Magalhães Veloso, Hans F. M. Vester29, Alberto Vicentini15, Ima Célia Guimarães Vieira30, Tony Vizcarra Bentos15, G. Bruce Williamson31, G. Bruce Williamson15, Danaë M. A. Rozendaal1, Danaë M. A. Rozendaal6, Danaë M. A. Rozendaal32 
11 Feb 2016-Nature
TL;DR: A biomass recovery map of Latin America is presented, which illustrates geographical and climatic variation in carbon sequestration potential during forest regrowth and will support policies to minimize forest loss in areas where biomass resilience is naturally low and promote forest regeneration and restoration in humid tropical lowland areas with high biomass resilience.
Abstract: Land-use change occurs nowhere more rapidly than in the tropics, where the imbalance between deforestation and forest regrowth has large consequences for the global carbon cycle. However, considerable uncertainty remains about the rate of biomass recovery in secondary forests, and how these rates are influenced by climate, landscape, and prior land use. Here we analyse aboveground biomass recovery during secondary succession in 45 forest sites and about 1,500 forest plots covering the major environmental gradients in the Neotropics. The studied secondary forests are highly productive and resilient. Aboveground biomass recovery after 20 years was on average 122 megagrams per hectare (Mg ha⁻¹), corresponding to a net carbon uptake of 3.05 Mg C ha⁻¹ yr⁻¹, 11 times the uptake rate of old-growth forests. Aboveground biomass stocks took a median time of 66 years to recover to 90% of old-growth values. Aboveground biomass recovery after 20 years varied 11.3-fold (from 20 to 225 Mg ha⁻¹) across sites, and this recovery increased with water availability (higher local rainfall and lower climatic water deficit). We present a biomass recovery map of Latin America, which illustrates geographical and climatic variation in carbon sequestration potential during forest regrowth. The map will support policies to minimize forest loss in areas where biomass resilience is naturally low (such as seasonally dry forest regions) and promote forest regeneration and restoration in humid tropical lowland areas with high biomass resilience.
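The quoted uptake rate follows from the biomass figure under the common convention that roughly half of dry biomass is carbon (the 0.5 carbon fraction is a standard assumption added here, not restated in the abstract):

$$ \frac{122\ \mathrm{Mg\,ha^{-1}}}{20\ \mathrm{yr}} \times 0.5\ \frac{\mathrm{Mg\,C}}{\mathrm{Mg\ biomass}} \approx 3.05\ \mathrm{Mg\,C\,ha^{-1}\,yr^{-1}}. $$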

Journal ArticleDOI
04 Aug 2016-Nature
TL;DR: A five-qubit trapped-ion quantum computer is demonstrated that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates, providing the flexibility to implement a variety of algorithms without altering the hardware.
Abstract: Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.
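For reference, the coherent quantum Fourier transform executed on the five-qubit register ($N = 2^{5} = 32$) is the standard unitary

$$ \mathrm{QFT}\,|x\rangle = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2\pi i\,x y / N}\,|y\rangle , $$

which underlies the phase-estimation and period-finding demonstrations mentioned above.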

Journal ArticleDOI
Vardan Khachatryan1, Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam +2283 more · Institutions (141)
TL;DR: Combined fits to CMS UE proton–proton data at 7 TeV and to UE proton–antiproton data from the CDF experiment at lower √s are used to study the UE models and constrain their parameters, thereby providing improved predictions for proton–proton collisions at 13 TeV.
Abstract: New sets of parameters ("tunes") for the underlying-event (UE) modeling of the PYTHIA8, PYTHIA6 and HERWIG++ Monte Carlo event generators are constructed using different parton distribution functions. Combined fits to CMS UE data at sqrt(s) = 7 TeV and to UE data from the CDF experiment at lower sqrt(s), are used to study the UE models and constrain their parameters, providing thereby improved predictions for proton-proton collisions at 13 TeV. In addition, it is investigated whether the values of the parameters obtained from fits to UE observables are consistent with the values determined from fitting observables sensitive to double-parton scattering processes. Finally, comparisons of the UE tunes to "minimum bias" (MB) events, multijet, and Drell-Yan (q q-bar to Z / gamma* to lepton-antilepton + jets) observables at 7 and 8 TeV are presented, as well as predictions of MB and UE observables at 13 TeV.