
Showing papers by "Jet Propulsion Laboratory" published in 2015


Journal ArticleDOI
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2013 (GBD 2013) as discussed by the authors provides a timely opportunity to update the comparative risk assessment with new data for exposure, relative risks, and evidence on the appropriate counterfactual risk distribution.

5,668 citations


Journal ArticleDOI
TL;DR: Patterns of the epidemiological transition were quantified with a composite indicator of sociodemographic status, constructed from income per person, average years of schooling after age 15 years, the total fertility rate, and the mean age of the population.

1,609 citations


Journal ArticleDOI
TL;DR: The MAVEN spacecraft has eight science instruments (with nine sensors) that measure the energy and particle input from the Sun into the Mars upper atmosphere, the response of the upper atmosphere to that input, and the resulting escape of gas to space as mentioned in this paper.
Abstract: The MAVEN spacecraft launched in November 2013, arrived at Mars in September 2014, and completed commissioning and began its one-Earth-year primary science mission in November 2014. The orbiter’s science objectives are to explore the interactions of the Sun and the solar wind with the Mars magnetosphere and upper atmosphere, to determine the structure of the upper atmosphere and ionosphere and the processes controlling it, to determine the escape rates from the upper atmosphere to space at the present epoch, and to measure properties that allow us to extrapolate these escape rates into the past to determine the total loss of atmospheric gas to space through time. These results will allow us to determine the importance of loss to space in changing the Mars climate and atmosphere through time, thereby providing important boundary conditions on the history of the habitability of Mars. The MAVEN spacecraft contains eight science instruments (with nine sensors) that measure the energy and particle input from the Sun into the Mars upper atmosphere, the response of the upper atmosphere to that input, and the resulting escape of gas to space. In addition, it contains an Electra relay that will allow it to relay commands and data between spacecraft on the surface and Earth.

628 citations


Journal ArticleDOI
TL;DR: In this article, the authors presented new limits on an isotropic stochastic gravitational wave background (GWB) using a six pulsar dataset spanning 18 yr of observations from the 2015 European Pulsar Timing Array data release.
Abstract: We present new limits on an isotropic stochastic gravitational-wave background (GWB) using a six pulsar dataset spanning 18 yr of observations from the 2015 European Pulsar Timing Array data release. Performing a Bayesian analysis, we fit simultaneously for the intrinsic noise parameters for each pulsar, along with common correlated signals including clock, and Solar System ephemeris errors, obtaining a robust 95$\%$ upper limit on the dimensionless strain amplitude $A$ of the background of $A<3.0\times 10^{-15}$ at a reference frequency of $1\mathrm{yr^{-1}}$ and a spectral index of $13/3$, corresponding to a background from inspiralling super-massive black hole binaries, constraining the GW energy density to $\Omega_\mathrm{gw}(f)h^2 < 1.1\times10^{-9}$ at 2.8 nHz. We also present limits on the correlated power spectrum at a series of discrete frequencies, and show that our sensitivity to a fiducial isotropic GWB is highest at a frequency of $\sim 5\times10^{-9}$~Hz. Finally we discuss the implications of our analysis for the astrophysics of supermassive black hole binaries, and present 95$\%$ upper limits on the string tension, $G\mu/c^2$, characterising a background produced by a cosmic string network for a set of possible scenarios, and for a stochastic relic GWB. For a Nambu-Goto field theory cosmic string network, we set a limit $G\mu/c^2<1.3\times10^{-7}$, identical to that set by the {\it Planck} Collaboration, when combining {\it Planck} and high-$\ell$ Cosmic Microwave Background data from other experiments. For a stochastic relic background we set a limit of $\Omega^\mathrm{relic}_\mathrm{gw}(f)h^2<1.2 \times10^{-9}$, a factor of 9 improvement over the most stringent limits previously set by a pulsar timing array.
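To see how a strain-amplitude limit quoted at $f = 1\,\mathrm{yr^{-1}}$ maps onto the quoted energy-density limit, the sketch below applies the textbook power-law conversion $h_c(f) = A\,(f/\mathrm{yr^{-1}})^{-2/3}$ and $\Omega_\mathrm{gw}(f)h^2 = (2\pi^2/3H_{100}^2)\,f^2 h_c^2(f)$; the $-2/3$ strain slope is the one equivalent to the residual spectral index $13/3$ quoted above. This is an independent consistency check with standard constants, not code from the paper.

```python
import numpy as np

# Consistency check (not code from the paper): convert a characteristic-strain
# amplitude A, quoted at f = 1/yr with h_c(f) = A * (f*yr)**(-2/3), into the
# GW energy density Omega_gw(f) * h^2 = (2*pi^2 / (3*H100^2)) * f^2 * h_c(f)^2.
YEAR = 3.15576e7              # seconds per year
H100 = 100.0e3 / 3.0857e22    # 100 km/s/Mpc expressed in s^-1

def omega_gw_h2(A, f_hz, alpha=-2.0 / 3.0):
    h_c = A * (f_hz * YEAR) ** alpha          # f_hz * YEAR = f / (1 yr^-1)
    return (2.0 * np.pi**2 / (3.0 * H100**2)) * f_hz**2 * h_c**2

print(omega_gw_h2(3.0e-15, 2.8e-9))           # ~1.1e-9, matching the quoted limit
```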

526 citations


Journal ArticleDOI
S. A. Stern1, Fran Bagenal2, Kimberly Ennico3, G. R. Gladstone1  +147 moreInstitutions (26)
16 Oct 2015-Science
TL;DR: The New Horizons encounter revealed that Pluto displays a surprisingly wide variety of geological landforms, including those resulting from glaciological and surface-atmosphere interactions as well as impact, tectonic, possible cryovolcanic, and mass-wasting processes.
Abstract: The Pluto system was recently explored by NASA's New Horizons spacecraft, making closest approach on 14 July 2015. Pluto's surface displays diverse landforms, terrain ages, albedos, colors, and composition gradients. Evidence is found for a water-ice crust, geologically young surface units, surface ice convection, wind streaks, volatile transport, and glacial flow. Pluto's atmosphere is highly extended, with trace hydrocarbons, a global haze layer, and a surface pressure near 10 microbars. Pluto's diverse surface geology and long-term activity raise fundamental questions about how small planets remain active many billions of years after formation. Pluto's large moon Charon displays tectonics and evidence for a heterogeneous crustal composition; its north pole displays puzzling dark terrain. Small satellites Hydra and Nix have higher albedos than expected.

411 citations


Journal ArticleDOI
Alessandra Rotundi1, Alessandra Rotundi2, Holger Sierks3, Vincenzo Della Corte2, Marco Fulle2, Pedro J. Gutiérrez4, Luisa Lara4, Cesare Barbieri, Philippe Lamy5, Rafael Rodrigo4, Rafael Rodrigo6, Detlef Koschny7, Hans Rickman8, Hans Rickman9, H. U. Keller10, José Juan López-Moreno4, Mario Accolla2, Mario Accolla1, Jessica Agarwal3, Michael F. A'Hearn11, Nicolas Altobelli7, Francesco Angrilli12, M. Antonietta Barucci13, Jean-Loup Bertaux14, Ivano Bertini12, Dennis Bodewits11, E. Bussoletti1, Luigi Colangeli15, M. Cosi16, Gabriele Cremonese2, Jean-François Crifo14, Vania Da Deppo, Björn Davidsson9, Stefano Debei12, Mariolino De Cecco17, Francesca Esposito2, M. Ferrari2, M. Ferrari1, Sonia Fornasier13, F. Giovane18, Bo Å. S. Gustafson19, Simon F. Green20, Olivier Groussin5, Eberhard Grün3, Carsten Güttler3, M. Herranz4, Stubbe F. Hviid21, Wing Ip22, Stavro Ivanovski2, José M. Jerónimo4, Laurent Jorda5, J. Knollenberg21, R. Kramm3, Ekkehard Kührt21, Michael Küppers7, Monica Lazzarin, Mark Leese20, Antonio C. López-Jiménez4, F. Lucarelli1, Stephen C. Lowry23, Francesco Marzari12, Elena Mazzotta Epifani2, J. Anthony M. McDonnell20, J. Anthony M. McDonnell23, Vito Mennella2, Harald Michalik, A. Molina24, R. Morales4, Fernando Moreno4, Stefano Mottola21, Giampiero Naletto, Nilda Oklay3, Jose Luis Ortiz4, Ernesto Palomba2, Pasquale Palumbo1, Pasquale Palumbo2, Jean-Marie Perrin14, Jean-Marie Perrin25, J. E. Rodriguez4, L. Sabau26, Colin Snodgrass3, Colin Snodgrass20, Roberto Sordini2, Nicolas Thomas27, Cecilia Tubiana3, Jean-Baptiste Vincent3, Paul R. Weissman28, K. P. Wenzel7, Vladimir Zakharov13, John C. Zarnecki20, John C. Zarnecki6 
23 Jan 2015-Science
TL;DR: In this article, the GIADA (Grain Impact Analyser and Dust Accumulator) experiment on the European Space Agency's Rosetta spacecraft orbiting comet 67P/Churyumov-Gerasimenko was used to detect 35 outflowing grains of mass 10−10 to 10−7 kilograms.
Abstract: Critical measurements for understanding accretion and the dust/gas ratio in the solar nebula, where planets were forming 4.5 billion years ago, are being obtained by the GIADA (Grain Impact Analyser and Dust Accumulator) experiment on the European Space Agency’s Rosetta spacecraft orbiting comet 67P/Churyumov-Gerasimenko. Between 3.6 and 3.4 astronomical units inbound, GIADA and OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) detected 35 outflowing grains of mass 10−10 to 10−7 kilograms, and 48 grains of mass 10−5 to 10−2 kilograms, respectively. Combined with gas data from the MIRO (Microwave Instrument for the Rosetta Orbiter) and ROSINA (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis) instruments, we find a dust/gas mass ratio of 4 ± 2 averaged over the sunlit nucleus surface. A cloud of larger grains also encircles the nucleus in bound orbits from the previous perihelion. The largest orbiting clumps are meter-sized, confirming the dust/gas ratio of 3 inferred at perihelion from models of dust comae and trails.

373 citations


Journal ArticleDOI
23 Jan 2015-Science
TL;DR: The VIRTIS instrument on board the Rosetta spacecraft has provided evidence of carbon-bearing compounds on the nucleus of the comet 67P/Churyumov-Gerasimenko, and no ice-rich patches are observed, indicating a generally dehydrated nature for the surface currently illuminated by the Sun.
Abstract: The VIRTIS (Visible, Infrared and Thermal Imaging Spectrometer) instrument on board the Rosetta spacecraft has provided evidence of carbon-bearing compounds on the nucleus of the comet 67P/Churyumov-Gerasimenko. The very low reflectance of the nucleus (normal albedo of 0.060 ± 0.003 at 0.55 micrometers), the spectral slopes in visible and infrared ranges (5 to 25 and 1.5 to 5% kÅ−1), and the broad absorption feature in the 2.9-to-3.6–micrometer range present across the entire illuminated surface are compatible with opaque minerals associated with nonvolatile organic macromolecular materials: a complex mixture of various types of carbon-hydrogen and/or oxygen-hydrogen chemical groups, with little contribution of nitrogen-hydrogen groups. In active areas, the changes in spectral slope and absorption feature width may suggest small amounts of water-ice. However, no ice-rich patches are observed, indicating a generally dehydrated nature for the surface currently illuminated by the Sun.

350 citations


Journal ArticleDOI
04 Sep 2015-Science
TL;DR: GPS and interferometric synthetic aperture radar data are used to model the earthquake rupture as a slip pulse ~20 kilometers in width, ~6 seconds in duration, and with a peak sliding velocity of 1.1 meters per second, which propagated toward the Kathmandu basin at 3.3 kilometers per second over ~140 kilometers.
Abstract: Detailed geodetic imaging of earthquake ruptures enhances our understanding of earthquake physics and associated ground shaking. The 25 April 2015 moment magnitude 7.8 earthquake in Gorkha, Nepal was the first large continental megathrust rupture to have occurred beneath a high-rate (5-hertz) Global Positioning System (GPS) network. We used GPS and interferometric synthetic aperture radar data to model the earthquake rupture as a slip pulse ~20 kilometers in width, ~6 seconds in duration, and with a peak sliding velocity of 1.1 meters per second, which propagated toward the Kathmandu basin at ~3.3 kilometers per second over ~140 kilometers. The smooth slip onset, indicating a large (~5-meter) slip-weakening distance, caused moderate ground shaking at high frequencies (>1 hertz; peak ground acceleration, ~16% of Earth’s gravity) and minimized damage to vernacular dwellings. Whole-basin resonance at a period of 4 to 5 seconds caused the collapse of tall structures, including cultural artifacts.

312 citations


Journal ArticleDOI
TL;DR: It is concluded that a cooperation model is critical for safe and efficient robot navigation in dense human crowds, and that the non-cooperative and reactive baseline planners capture the salient characteristics of nearly any dynamic navigation algorithm.
Abstract: We consider the problem of navigating a mobile robot through dense human crowds. We begin by exploring a fundamental impediment to classical motion planning algorithms called the “freezing robot problem”: once the environment surpasses a certain level of dynamic complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place or performs unnecessary maneuvers to avoid collisions. We argue that this problem can be avoided if the robot anticipates human cooperation, and accordingly we develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a “multiple goal” extension that models the goal-driven nature of human decision making. We validate this model with an empirical study of robot navigation in dense human crowds (488 runs), specifically testing how cooperation models affect navigation performance. The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities nearing 0.8 humans/m2, while a state-of-the-art non-cooperative planner exhibits unsafe behavior more than three times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m2. We also show that our non-cooperative planner or our reactive planner capture the salient characteristics of nearly any dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.
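The interacting-Gaussian-process idea described above can be sketched compactly: draw candidate future paths for the robot and each pedestrian from independent GP posteriors, reweight the joint samples with an interaction potential that discourages close approaches, and execute the best-scoring robot path. The 1-D toy below follows that recipe with invented kernels, potential, and observations; it is a minimal sketch, not the authors' implementation or their multiple-goal extension.

```python
import numpy as np

# 1-D toy sketch of "interacting Gaussian processes": sample candidate paths
# for the robot and a pedestrian from independent GP posteriors, reweight the
# joint samples with an interaction potential that penalizes close approaches,
# and execute the best-scoring robot path. All numbers are illustrative.

def rbf(t1, t2, ell=2.0, sig=1.0):
    d = t1[:, None] - t2[None, :]
    return sig**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior_samples(t_obs, y_obs, t_pred, n, noise=0.05, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    K = rbf(t_obs, t_obs) + noise**2 * np.eye(len(t_obs))
    Ks, Kss = rbf(t_pred, t_obs), rbf(t_pred, t_pred)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_obs
    cov = Kss - Ks @ Kinv @ Ks.T + 1e-6 * np.eye(len(t_pred))
    return rng.multivariate_normal(mu, cov, size=n)

def interaction_logweight(path_a, path_b, d_safe=0.8):
    d = np.abs(path_a - path_b)                       # separation over time
    return np.sum(np.log(1.0 - np.exp(-(d / d_safe) ** 2) + 1e-12))

rng = np.random.default_rng(0)
t_obs = np.array([0.0, 0.5, 1.0])
t_pred = np.linspace(1.2, 4.0, 15)
robot = gp_posterior_samples(t_obs, np.array([0.0, 0.1, 0.2]), t_pred, 200, rng=rng)
human = gp_posterior_samples(t_obs, np.array([3.0, 2.8, 2.6]), t_pred, 200, rng=rng)

scores = [interaction_logweight(robot[k], human[k]) for k in range(200)]
best = int(np.argmax(scores))
print("planned robot path:", np.round(robot[best], 2))
```

Sampling and reweighting is only one way to approximate the joint posterior; the point of the sketch is that the robot and human paths are coupled through the potential rather than predicted independently.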

258 citations


Journal ArticleDOI
TL;DR: It is found that plant phenological and physiological properties can be integrated in a robust index—the product of the length of CO2 uptake period and the seasonal maximal photosynthesis—to explain the GPP variability over space and time in response to climate extremes and during recovery after disturbance.
Abstract: Terrestrial gross primary productivity (GPP) varies greatly over time and space. A better understanding of this variability is necessary for more accurate predictions of the future climate–carbon cycle feedback. Recent studies have suggested that variability in GPP is driven by a broad range of biotic and abiotic factors operating mainly through changes in vegetation phenology and physiological processes. However, it is still unclear how plant phenology and physiology can be integrated to explain the spatiotemporal variability of terrestrial GPP. Based on analyses of eddy–covariance and satellite-derived data, we decomposed annual terrestrial GPP into the length of the CO2 uptake period (CUP) and the seasonal maximal capacity of CO2 uptake (GPPmax). The product of CUP and GPPmax explained >90% of the temporal GPP variability in most areas of North America during 2000–2010 and the spatial GPP variation among globally distributed eddy flux tower sites. It also explained GPP response to the European heatwave in 2003 (r2 = 0.90) and GPP recovery after a fire disturbance in South Dakota (r2 = 0.88). Additional analysis of the eddy–covariance flux data shows that the interbiome variation in annual GPP is better explained by that in GPPmax than CUP. These findings indicate that terrestrial GPP is jointly controlled by ecosystem-level plant phenology and photosynthetic capacity, and greater understanding of GPPmax and CUP responses to environmental and biological variations will, thus, improve predictions of GPP over time and space.
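A toy numerical check of the decomposition described above (annual GPP summarized by the CO2-uptake period, CUP, and the seasonal maximum, GPPmax) is sketched below; the synthetic seasonal cycles and the 10%-of-maximum threshold used to define CUP are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

# Toy check of the decomposition above: annual GPP tracks CUP * GPPmax.
rng = np.random.default_rng(1)
doy = np.arange(365)

annual, product = [], []
for _ in range(20):                                   # 20 synthetic site-years
    amp = rng.uniform(5.0, 15.0)                      # peak uptake, gC m-2 d-1
    season = rng.uniform(120.0, 240.0)                # uptake-season length, days
    x = doy - 80.0
    gpp = np.where((x >= 0) & (x <= season), amp * np.sin(np.pi * x / season), 0.0)
    cup = np.sum(gpp > 0.1 * gpp.max())               # days of appreciable uptake
    annual.append(gpp.sum())
    product.append(gpp.max() * cup)

r = np.corrcoef(annual, product)[0, 1]
print(f"r^2 between annual GPP and CUP*GPPmax: {r**2:.2f}")   # close to 1
```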

254 citations


18 Dec 2015
TL;DR: In this paper, the authors analyzed the spatial distribution of in situ data for carbon fluxes, stocks and plant traits globally and also evaluated the potential of remote sensing to observe these quantities.
Abstract: Terrestrial ecosystem and carbon cycle feedbacks will significantly impact future climate, but their responses are highly uncertain. Models and tipping point analyses suggest the tropics and arctic/boreal zone carbon-climate feedbacks could be disproportionately large. In situ observations in those regions are sparse, resulting in high uncertainties in carbon fluxes and stocks. Key parameters controlling ecosystem carbon responses, such as plant traits, are also sparsely observed in the tropics, with the most diverse biome on the planet treated as a single type in models. We analyzed the spatial distribution of in situ data for carbon fluxes, stocks and plant traits globally and also evaluated the potential of remote sensing to observe these quantities. New satellite data products go beyond indices of greenness and can address spatial sampling gaps for specific ecosystem properties and parameters. Because environmental conditions and access limit in situ observations in tropical and arctic/boreal environments, use of space-based techniques can reduce sampling bias and uncertainty about tipping point feedbacks to climate. To reliably detect change and develop the understanding of ecosystems needed for prediction, significantly more data are required in critical regions. This need can best be met with a strategic combination of remote and in situ data, with satellite observations providing the dense sampling in space and time required to characterize the heterogeneity of ecosystem structure and function.

Journal ArticleDOI
TL;DR: A brief system overview is presented, detailing Valkyrie's mechatronic subsystems, followed by a summarization of the inverse kinematics-based walking algorithm employed at the Trials, and some closing remarks are given about the competition.
Abstract: In December 2013, 16 teams from around the world gathered at Homestead Speedway near Miami, FL to participate in the DARPA Robotics Challenge (DRC) Trials, an aggressive robotics competition partly inspired by the aftermath of the Fukushima Daiichi reactor incident. While the focus of the DRC Trials is to advance robotics for use in austere and inhospitable environments, the objectives of the DRC are to progress the areas of supervised autonomy and mobile manipulation for everyday robotics. NASA's Johnson Space Center led a team comprised of numerous partners to develop Valkyrie, NASA's first bipedal humanoid robot. Valkyrie is a 44 degree-of-freedom, series elastic actuator-based robot that draws upon over 18 years of humanoid robotics design heritage. Valkyrie's application intent is aimed at not only responding to events like Fukushima, but also advancing human spaceflight endeavors in extraterrestrial planetary settings. This paper presents a brief system overview, detailing Valkyrie's mechatronic subsystems, followed by a summarization of the inverse kinematics-based walking algorithm employed at the Trials. Next, the software and control architectures are highlighted along with a description of the operator interface tools. Finally, some closing remarks are given about the competition, and a vision of future work is provided.

Journal ArticleDOI
TL;DR: In this paper, the authors quantified mean annual and monthly fluxes of Earth's water cycle over continents and ocean basins during the first decade of the millennium, using satellite measurements first and data-integrating models second.
Abstract: This study quantifies mean annual and monthly fluxes of Earth's water cycle over continents and ocean basins during the first decade of the millennium. To the extent possible, the flux estimates are based on satellite measurements first and data-integrating models second. A careful accounting of uncertainty in the estimates is included. It is applied within a routine that enforces multiple water and energy budget constraints simultaneously in a variational framework in order to produce objectively determined optimized flux estimates. In the majority of cases, the observed annual surface and atmospheric water budgets over the continents and oceans close with much less than 10% residual. Observed residuals and optimized uncertainty estimates are considerably larger for monthly surface and atmospheric water budget closure, often nearing or exceeding 20% in North America, Eurasia, Australia and neighboring islands, and the Arctic and South Atlantic Oceans. The residuals in South America and Africa tend to be smaller, possibly because cold land processes are negligible. Fluxes were poorly observed over the Arctic Ocean, certain seas, Antarctica, and the Australasian and Indonesian islands, leading to reliance on atmospheric analysis estimates. Many of the satellite systems that contributed data have been or will soon be lost or replaced. Models that integrate ground-based and remote observations will be critical for ameliorating gaps and discontinuities in the data records caused by these transitions. Continued development of such models is essential for maximizing the value of the observations. Next-generation observing systems are the best hope for significantly improving global water budget accounting.
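The "variational framework" mentioned above can be illustrated with a minimal example: given independent flux estimates and their uncertainties, the smallest uncertainty-weighted adjustment that closes a linear budget has a closed-form solution via Lagrange multipliers. The sketch below applies this to a made-up annual land water budget (P − ET − R − dS/dt = 0); the numbers and the single-constraint setup are illustrative assumptions, not the study's multi-constraint system.

```python
import numpy as np

# Minimal variational budget closure: nudge flux estimates F (uncertainties
# sigma) by the smallest weighted adjustment that satisfies A @ F = b.
def close_budget(F, sigma, A, b):
    """Minimize (F* - F)' S^-1 (F* - F) subject to A F* = b, S = diag(sigma^2)."""
    S = np.diag(sigma**2)
    lam = np.linalg.solve(A @ S @ A.T, A @ F - b)     # Lagrange multipliers
    return F - S @ A.T @ lam

# Toy annual land water budget: P - ET - R - dS/dt = 0 (mm/yr)
F = np.array([800.0, 500.0, 250.0, 20.0])             # P, ET, R, dS/dt
sigma = np.array([80.0, 60.0, 25.0, 10.0])
A = np.array([[1.0, -1.0, -1.0, -1.0]])
F_closed = close_budget(F, sigma, A, b=np.zeros(1))
print(F_closed, A @ F_closed)                          # residual ~0
```

The same machinery extends to many simultaneous water and energy constraints by stacking rows into A, which is the spirit of the multi-budget optimization the abstract describes.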

Journal ArticleDOI
TL;DR: In this paper, a new Mars Global Ionosphere-Thermosphere Model (M-GITM) is presented that combines the terrestrial GITM framework with Mars fundamental physical parameters, ion-neutral chemistry, and key radiative processes in order to capture the basic observed features of the thermal, compositional, and dynamical structure of the Mars atmosphere from the ground to the exosphere (0-250 km).
Abstract: A new Mars Global Ionosphere-Thermosphere Model (M-GITM) is presented that combines the terrestrial GITM framework with Mars fundamental physical parameters, ion-neutral chemistry, and key radiative processes in order to capture the basic observed features of the thermal, compositional, and dynamical structure of the Mars atmosphere from the ground to the exosphere (0–250 km). Lower, middle, and upper atmosphere processes are included, based in part upon formulations used in previous lower and upper atmosphere Mars GCMs. This enables the M-GITM code to be run for various seasonal, solar cycle, and dust conditions. M-GITM validation studies have focused upon simulations for a range of solar and seasonal conditions. Key upper atmosphere measurements are selected for comparison to corresponding M-GITM neutral temperatures and neutral-ion densities. In addition, simulated lower atmosphere temperatures are compared with observations in order to provide a first-order confirmation of a realistic lower atmosphere. M-GITM captures solar cycle and seasonal trends in the upper atmosphere that are consistent with observations, yielding significant periodic changes in the temperature structure, the species density distributions, and the large-scale global wind system. For instance, mid-afternoon temperatures near ∼200 km are predicted to vary from ∼210 to 350 K (equinox) and ∼190 to 390 K (aphelion to perihelion) over the solar cycle. These simulations will serve as a benchmark against which to compare episodic variations (e.g., due to solar flares and dust storms) in future M-GITM studies. Additionally, M-GITM will be used to support MAVEN mission activities (2014–2016).

Journal ArticleDOI
31 Jul 2015-Science
TL;DR: Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) measurements of the interior of Comet 67P/Churyumov-Gerasimenko suggest that the upper part of the “head” of 67P is fairly homogeneous on a spatial scale of tens of meters, and that the dust component may be comparable to that of carbonaceous chondrites.
Abstract: The Philae lander provides a unique opportunity to investigate the internal structure of a comet nucleus, providing information about its formation and evolution in the early solar system. We present Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) measurements of the interior of Comet 67P/Churyumov-Gerasimenko. From the propagation time and form of the signals, the upper part of the “head” of 67P is fairly homogeneous on a spatial scale of tens of meters. CONSERT also reduced the uncertainty in Philae’s final landing site to an area of approximately 21 meters by 34 meters. The average permittivity is about 1.27, suggesting that this region has a volumetric dust/ice ratio of 0.4 to 2.6 and a porosity of 75 to 85%. The dust component may be comparable to that of carbonaceous chondrites.
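To illustrate how a bulk permittivity near 1.27 can translate into a very high porosity, the toy calculation below inverts a simple volumetric (Looyenga-type) mixing rule for an ice/dust/vacuum mixture; the choice of mixing rule and the assumed component permittivities (about 3.1 for water ice, about 5 for dust) are illustrative assumptions, not the CONSERT team's inversion.

```python
import numpy as np

# Toy inversion: for an assumed dust/ice volume ratio, find the porosity that
# reproduces a bulk permittivity of 1.27 under a Looyenga (cube-root) mixing
# rule. Component permittivities are assumed values, not CONSERT results.
EPS_BULK, EPS_ICE, EPS_DUST = 1.27, 3.1, 5.0

def porosity(dust_to_ice):
    f_ice = 1.0 / (1.0 + dust_to_ice)          # ice fraction of the solid part
    f_dust = 1.0 - f_ice
    solid_term = f_ice * EPS_ICE ** (1 / 3) + f_dust * EPS_DUST ** (1 / 3)
    # eps_bulk^(1/3) = phi * 1 + (1 - phi) * solid_term  ->  solve for phi
    return (solid_term - EPS_BULK ** (1 / 3)) / (solid_term - 1.0)

for r in (0.4, 1.0, 2.6):
    print(f"dust/ice = {r:3.1f}  ->  porosity ~ {porosity(r):.2f}")
```

Different mixing rules and component permittivities shift the inferred porosity by several percent, which is one reason the paper quotes a range rather than a single value.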

Journal ArticleDOI
TL;DR: A group of remote sensing scientists affiliated with government and academic institutions and conservation organizations identified 10 questions in conservation for which the potential to be answered would be greatly increased by use of remotely sensed data and analyses of those data.
Abstract: In an effort to increase conservation effectiveness through the use of Earth observation technologies, a group of remote sensing scientists affiliated with government and academic institutions and conservation organizations identified 10 questions in conservation for which the potential to be answered would be greatly increased by use of remotely sensed data and analyses of those data. Our goals were to increase conservation practitioners’ use of remote sensing to support their work, increase collaboration between the conservation science and remote sensing communities, identify and develop new and innovative uses of remote sensing for advancing conservation science, provide guidance to space agencies on how future satellite missions can support conservation science, and generate support from the public and private sector in the use of remote sensing data to address the 10 conservation questions. We identified a broad initial list of questions on the basis of an email chain-referral survey. We then used a workshop-based iterative and collaborative approach to whittle the list down to these final questions (which represent 10 major themes in conservation): How can global Earth observation data be used to model species distributions and abundances? How can remote sensing improve the understanding of animal movements? How can remotely sensed ecosystem variables be used to understand, monitor, and predict ecosystem response and resilience to multiple stressors? How can remote sensing be used to monitor the effects of climate on ecosystems? How can near real-time ecosystem monitoring catalyze threat reduction, governance and regulation compliance, and resource management decisions? How can remote sensing inform configuration of protected area networks at spatial extents relevant to populations of target species and ecosystem services? How can remote sensing-derived products be used to value and monitor changes in ecosystem services? How can remote sensing be used to monitor and evaluate the effectiveness of conservation efforts? How does the expansion and intensification of agriculture and aquaculture alter ecosystems and the services they provide? How can remote sensing be used to determine the degree to which ecosystems are being disturbed or degraded and the effects of these changes on species and ecosystem functions?

Journal ArticleDOI
TL;DR: In this article, the results of an extensive campaign to determine the physical, geological, and dynamical properties of asteroid (101955) Bennu were presented, and the results were used to develop a hypothetical timeline for Bennu's formation and evolution.
Abstract: We review the results of an extensive campaign to determine the physical, geological, and dynamical properties of asteroid (101955) Bennu. This investigation provides information on the orbit, shape, mass, rotation state, radar response, photometric, spectroscopic, thermal, regolith, and environmental properties of Bennu. We combine these data with cosmochemical and dynamical models to develop a hypothetical timeline for Bennu's formation and evolution. We infer that Bennu is an ancient object that has witnessed over 4.5 Gyr of solar system history. Its chemistry and mineralogy were established within the first 10 Myr of the solar system. It likely originated as a discrete asteroid in the inner Main Belt approximately 0.7–2 Gyr ago as a fragment from the catastrophic disruption of a large (approximately 100-km), carbonaceous asteroid. It was delivered to near-Earth space via a combination of Yarkovsky-induced drift and interaction with giant-planet resonances. During its journey, YORP processes and planetary close encounters modified Bennu's spin state, potentially reshaping and resurfacing the asteroid. We also review work on Bennu's future dynamical evolution and constrain its ultimate fate. It is one of the most Potentially Hazardous Asteroids with an approximately 1-in-2700 chance of impacting the Earth in the late 22nd century. It will most likely end its dynamical life by falling into the Sun. The highest probability for a planetary impact is with Venus, followed by the Earth. There is a chance that Bennu will be ejected from the inner solar system after a close encounter with Jupiter. OSIRIS-REx will return samples from the surface of this intriguing asteroid in September 2023.

Journal ArticleDOI
TL;DR: Satellite and in situ observations of surface drainage on the southwestern ablation zone after an extreme 2012 melting event lead the authors to conclude that the ice sheet surface is efficiently drained under optimal conditions, that digital elevation models alone cannot fully describe supraglacial drainage and its connection to subglacial systems, and that predicting outflow from climate models alone, without recognition of subglacial processes, may overestimate true meltwater release from the ice sheet.
Abstract: Thermally incised meltwater channels that flow each summer across melt-prone surfaces of the Greenland ice sheet have received little direct study. We use high-resolution WorldView-1/2 satellite mapping and in situ measurements to characterize supraglacial water storage, drainage pattern, and discharge across 6,812 km(2) of southwest Greenland in July 2012, after a record melt event. Efficient surface drainage was routed through 523 high-order stream/river channel networks, all of which terminated in moulins before reaching the ice edge. Low surface water storage (3.6 ± 0.9 cm), negligible impoundment by supraglacial lakes or topographic depressions, and high discharge to moulins (2.54-2.81 cm⋅d(-1)) indicate that the surface drainage system conveyed its own storage volume every <2 d to the bed. Moulin discharges mapped inside ∼52% of the source ice watershed for Isortoq, a major proglacial river, totaled ∼41-98% of observed proglacial discharge, highlighting the importance of supraglacial river drainage to true outflow from the ice edge. However, Isortoq discharges tended lower than runoff simulations from the Modèle Atmosphérique Régional (MAR) regional climate model (0.056-0.112 km(3)⋅d(-1) vs. ∼0.103 km(3)⋅d(-1)), and when integrated over the melt season, totaled just 37-75% of MAR, suggesting nontrivial subglacial water storage even in this melt-prone region of the ice sheet. We conclude that (i) the interior surface of the ice sheet can be efficiently drained under optimal conditions, (ii) that digital elevation models alone cannot fully describe supraglacial drainage and its connection to subglacial systems, and (iii) that predicting outflow from climate models alone, without recognition of subglacial processes, may overestimate true meltwater export from the ice sheet to the ocean.

Journal ArticleDOI
TL;DR: In this article, the effect of a partially inhomogeneous wind that imprints variability on to the X-ray emission via two distinct methods is considered, and the model is heavily dependent on both inclination to the line of sight and mass accretion rate, resulting in a series of qualitative and semiquantitative predictions.
Abstract: Ultraluminous X-ray sources (ULXs) with luminosities lying between ∼3 × 1039 and 2 × 1040 erg s−1 represent a contentious sample of objects as their brightness, together with a lack of unambiguous mass estimates for the vast majority of the central objects, leads to a degenerate scenario where the accretor could be a stellar remnant (black hole or neutron star) or intermediate-mass black hole (IMBH). Recent, high-quality observations imply that the presence of IMBHs in the majority of these objects is unlikely unless the accretion flow somehow deviates strongly from expectation based on objects with known masses. On the other hand, physically motivated models for supercritical inflows can re-create the observed X-ray spectra and their evolution, although have been lacking a robust explanation for their variability properties. In this paper, we include the effect of a partially inhomogeneous wind that imprints variability on to the X-ray emission via two distinct methods. The model is heavily dependent on both inclination to the line of sight and mass accretion rate, resulting in a series of qualitative and semiquantitative predictions. We study the time-averaged spectra and variability of a sample of well-observed ULXs, finding that the source behaviours can be explained by our model in both individual cases as well as across the entire sample, specifically in the trend of hardness-variability power. We present the covariance spectra for these sources for the first time, which shed light on the correlated variability and issues associated with modelling broad ULX spectra.

Journal ArticleDOI
TL;DR: In this article, the authors study the evolution of the radio spectral index and the far-infrared/radio correlation across the star-formation rate–stellar mass (SFR–M∗) plane out to z = 2.3.
Abstract: We study the evolution of the radio spectral index and far-infrared/radio correlation (FRC) across the star-formation rate – stellar mass (i.e. SFR–M∗) plane up to z ~ 2. We start from a stellar-mass-selected sample of galaxies with reliable SFR and redshift estimates. We then grid the SFR–M∗ plane in several redshift ranges and measure the infrared luminosity, radio luminosity, radio spectral index, and ultimately the FRC index (i.e. qFIR) of each SFR–M∗–z bin. The infrared luminosities of our SFR–M∗–z bins are estimated using their stacked far-infrared flux densities inferred from observations obtained with the Herschel Space Observatory. Their radio luminosities and radio spectral indices (i.e. α, where Sν ∝ ν−α) are estimated using their stacked 1.4 GHz and 610 MHz flux densities from the Very Large Array and Giant Metre-wave Radio Telescope, respectively. Our far-infrared and radio observations include the most widely studied blank extragalactic fields – GOODS-N, GOODS-S, ECDFS, and COSMOS – covering a total sky area of ~2.0 deg2. Using this methodology, we constrain the radio spectral index and FRC index of star-forming galaxies with M∗ > 10^10 M⊙ and 0
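For reference, the FRC index qFIR mentioned above is conventionally defined as the logarithmic ratio of the far-infrared flux (normalized by 3.75 × 10^12 Hz) to the 1.4 GHz flux density. The snippet below implements that standard, Helou-style definition with made-up fluxes; it illustrates the quantity being measured, not the paper's stacking pipeline or its exact convention.

```python
import numpy as np

# Conventional far-infrared/radio correlation index:
#   q_FIR = log10[ (S_FIR / 3.75e12 Hz) / S_1.4GHz ]
# with S_FIR in W m^-2 and S_1.4GHz in W m^-2 Hz^-1. Example values are invented.
def q_fir(S_fir_W_m2, S_1p4GHz_W_m2_Hz):
    return np.log10((S_fir_W_m2 / 3.75e12) / S_1p4GHz_W_m2_Hz)

S_fir = 8.0e-15                      # W m^-2, integrated far-infrared flux
S_radio = 1.0e-3 * 1e-26             # 1 mJy at 1.4 GHz -> W m^-2 Hz^-1
print(f"q_FIR = {q_fir(S_fir, S_radio):.2f}")   # ~2.3, typical of star-forming galaxies
```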

Journal ArticleDOI
TL;DR: In this article, objectively balanced observation-based reconstructions of global and continental energy budgets and their seasonal variability are presented that span the golden decade of Earth-observing satellites at the start of the twenty-first century.
Abstract: New objectively balanced observation-based reconstructions of global and continental energy budgets and their seasonal variability are presented that span the golden decade of Earth-observing satellites at the start of the twenty-first century. In the absence of balance constraints, various combinations of modern flux datasets reveal that current estimates of net radiation into Earth’s surface exceed corresponding turbulent heat fluxes by 13–24 W m−2. The largest imbalances occur over oceanic regions where the component algorithms operate independent of closure constraints. Recent uncertainty assessments suggest that these imbalances fall within anticipated error bounds for each dataset, but the systematic nature of required adjustments across different regions confirm the existence of biases in the component fluxes. To reintroduce energy and water cycle closure information lost in the development of independent flux datasets, a variational method is introduced that explicitly accounts for the relative accuracies in all component fluxes. Applying the technique to a 10-yr record of satellite observations yields new energy budget estimates that simultaneously satisfy all energy and water cycle balance constraints. Globally, 180 W m−2 of atmospheric longwave cooling is balanced by 74 W m−2 of shortwave absorption and 106 W m−2 of latent and sensible heat release. At the surface, 106 W m−2 of downwelling radiation is balanced by turbulent heat transfer to within a residual heat flux into the oceans of 0.45 W m−2, consistent with recent observations of changes in ocean heat content. Annual mean energy budgets and their seasonal cycles for each of seven continents and nine ocean basins are also presented.

Journal ArticleDOI
TL;DR: A semiempirical nitrous oxide lifetime is calculated from Microwave Limb Sounder satellite measurements of stratospheric profiles of nitrous oxide, ozone, and temperature; laboratory cross-section data for ozone and molecular oxygen plus kinetics for O(1D); the observed solar spectrum; and a simple radiative transfer model.
Abstract: Nitrous oxide lifetime is computed empirically from MLS satellite data. Empirical N2O lifetimes are compared with models, including interannual variability. Results improve values for present anthropogenic and preindustrial emissions.
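As a back-of-envelope illustration of what a "lifetime" means in this context, the sketch below divides an atmospheric burden by a globally integrated loss rate; the mixing ratio and loss value are rough, assumed textbook-scale numbers chosen only to show the arithmetic, not results from the MLS analysis.

```python
# Back-of-envelope "burden / loss" lifetime, the quantity the study constrains.
# All numbers below are rough illustrative values, not MLS-derived results.
M_ATM = 5.15e18                 # kg, mass of the atmosphere
M_AIR, M_N2O = 28.97, 44.01     # g/mol
X_N2O = 325e-9                  # mol/mol, approximate tropospheric mixing ratio

burden_kg = M_ATM * X_N2O * (M_N2O / M_AIR)     # ~2.5e12 kg of N2O
loss_kg_per_yr = 2.0e10                         # ~20 Tg/yr photochemical loss (assumed)
print(f"lifetime ~ {burden_kg / loss_kg_per_yr:.0f} yr")   # ~125 yr
```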

Journal ArticleDOI
01 Oct 2015-Nature
TL;DR: It is found that basin-averaged erosion rates vary by three orders of magnitude over this latitudinal transect, implying that climate and the glacier thermal regime control erosion rates more than do extent of ice cover, ice flux or sliding speeds.
Abstract: Erosion and velocity data from 15 outlet glaciers covering temperate to polar glacier thermal regimes from Patagonia to the Antarctic Peninsula reveal that over the past century the basin-averaged erosion rates vary by three orders of magnitude as a function of climate across this latitudinal transect. Glacial erosion plays an important role in shaping the Earth's landscape, but attempts to quantify the long-term effect of the erosion of glaciers have proven inconclusive and contradictory. Glacial erosion rates are expected to decrease towards the poles, where lower temperatures limit meltwater production, thereby reducing glacial sliding, erosion and sediment transfer. This study presents erosion and velocity data from 15 outlet glaciers covering temperate to polar glacier thermal regimes from Patagonia to the Antarctic Peninsula. The dataset reveals that during the past century the basin-averaged erosion rates vary by three orders of magnitude as a function of climate across this latitudinal transect. The authors suggest that climate and the glacier thermal regime exert more control over erosion rates than the extent of ice cover, ice flux or sliding speeds. Glacial erosion is fundamental to our understanding of the role of Cenozoic-era climate change in the development of topography worldwide, yet the factors that control the rate of erosion by ice remain poorly understood. In many tectonically active mountain ranges, glaciers have been inferred to be highly erosive, and conditions of glaciation are used to explain both the marked relief typical of alpine settings and the limit on mountain heights above the snowline, that is, the glacial buzzsaw1. In other high-latitude regions, glacial erosion is presumed to be minimal, where a mantle of cold ice effectively protects landscapes from erosion2,3,4. Glacial erosion rates are expected to increase with decreasing latitude, owing to the climatic control on basal temperature and the production of meltwater, which promotes glacial sliding, erosion and sediment transfer. This relationship between climate, glacier dynamics and erosion rate is the focus of recent numerical modelling5,6,7,8, yet it is qualitative and lacks an empirical database. Here we present a comprehensive data set that permits explicit examination of the factors controlling glacier erosion across climatic regimes. We report contemporary ice fluxes, sliding speeds and erosion rates inferred from sediment yields from 15 outlet glaciers spanning 19 degrees of latitude from Patagonia to the Antarctic Peninsula. Although this broad region has a relatively uniform tectonic and geologic history, the thermal regimes of its glaciers range from temperate to polar. We find that basin-averaged erosion rates vary by three orders of magnitude over this latitudinal transect. Our findings imply that climate and the glacier thermal regime control erosion rates more than do extent of ice cover, ice flux or sliding speeds.

Journal ArticleDOI
TL;DR: The Smithsonian Astrophysical Observatory (SAO) formaldehyde (H2CO) retrieval algorithm for the Ozone Monitoring Instrument (OMI) is the operational retrieval for NASA OMI H2CO.
Abstract: We present and discuss the Smithsonian Astrophysical Observatory (SAO) formaldehyde (H2CO) retrieval algorithm for the Ozone Monitoring Instrument (OMI), which is the operational retrieval for NASA OMI H2CO. The version of the algorithm described here includes relevant changes with respect to the operational one, including differences in the reference spectra for H2CO, the fit of O2–O2 collisional complex, updates in the high-resolution solar reference spectrum, the use of a model reference sector over the remote Pacific Ocean to normalize the retrievals, an updated air mass factor (AMF) calculation scheme, and the inclusion of scattering weights and vertical H2CO profile in the level 2 products. The setup of the retrieval is discussed in detail. We compare the results of the updated retrieval with the results from the previous SAO H2CO retrieval. The improvement in the slant column fit increases the temporal stability of the retrieval and slightly reduces the noise. The change in the AMF calculation has increased the AMFs by 20%, mainly due to the consideration of the radiative cloud fraction. Typical values for retrieved vertical columns are between 4 × 10^15 and 4 × 10^16 molecules cm−2, with typical fitting uncertainties ranging between 45 and 100%. In high-concentration regions the errors are usually reduced to 30%. The detection limit is estimated at 1 × 10^16 molecules cm−2.
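As context for the AMF discussion above, the standard DOAS-type relationship is vertical column = slant column / AMF, with the AMF built from altitude-resolved scattering weights and a normalized a priori profile shape. The snippet below is a generic implementation of that textbook formula with invented layer values; it is not the SAO operational code.

```python
import numpy as np

# Generic air-mass-factor (AMF) construction from scattering weights w(z) and
# a normalized a priori profile shape S(z):  AMF = sum_z w(z) * S(z).
w = np.array([0.4, 0.6, 0.8, 1.0, 1.2])                      # per-layer scattering weights
partial_cols = np.array([4.0, 3.0, 1.5, 1.0, 0.5]) * 1e15    # molecules cm^-2 per layer (assumed)

S = partial_cols / partial_cols.sum()          # normalized profile shape
amf = np.sum(w * S)

slant_column = 8.0e15                          # molecules cm^-2 (fitted, illustrative)
vertical_column = slant_column / amf
print(f"AMF = {amf:.2f}, vertical column = {vertical_column:.2e} molecules cm^-2")
```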


Journal ArticleDOI
TL;DR: In this paper, a 64-pixel free-space-coupled array of superconducting nanowire single photon detectors optimized for high detection efficiency in the near-infrared range is presented.
Abstract: We demonstrate a 64-pixel free-space-coupled array of superconducting nanowire single photon detectors optimized for high detection efficiency in the near-infrared range. An integrated, readily scalable, multiplexed readout scheme is employed to reduce the number of readout lines to 16. The cryogenic, optical, and electronic packaging to read out the array as well as characterization measurements are discussed.
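To make the "64 pixels on 16 readout lines" statement concrete, the sketch below shows one common way such a reduction can work: a row-column scheme in which each pixel of an 8 × 8 grid is identified by the coincidence of one row line and one column line (8 + 8 = 16 lines). This is a generic illustration of the multiplexing arithmetic; the details of the paper's readout circuit are not reproduced here.

```python
# Generic row-column multiplexing arithmetic for an 8x8 = 64-pixel array read
# out on 8 row lines + 8 column lines = 16 lines. Purely illustrative.
N_ROWS = N_COLS = 8

def pixel_to_lines(pixel):
    """Map a pixel index (0..63) to its (row line, column line) pair."""
    return divmod(pixel, N_COLS)

def lines_to_pixel(row, col):
    """Decode a coincident (row, col) pulse pair back to the pixel index."""
    return row * N_COLS + col

assert all(lines_to_pixel(*pixel_to_lines(p)) == p for p in range(64))
print(pixel_to_lines(27))   # pixel 27 -> row line 3, column line 3
```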

Journal ArticleDOI
TL;DR: In this paper, the authors report results from the 2012 and 2013 observing seasons, during which the Keck Array consisted of five receivers all operating in the same (150 GHz) frequency band and observing field as BICEP2.
Abstract: The Keck Array is a system of cosmic microwave background (CMB) polarimeters, each similar to the BICEP2 experiment. In this paper we report results from the 2012 and 2013 observing seasons, during which the Keck Array consisted of five receivers all operating in the same (150 GHz) frequency band and observing field as BICEP2. We again find an excess of B-mode power over the lensed-$\Lambda$CDM expectation of $> 5\sigma$ in the range $30 < \ell < 150$, and combining the Keck Array maps with those of BICEP2 yields a detection of excess B-mode power exceeding $6\sigma$.

Journal ArticleDOI
10 Dec 2015-Nature
TL;DR: It is concluded that Ceres must have accreted material from beyond the ‘snow line’, which is the distance from the Sun at which water molecules condense; the localized bright areas are consistent with hydrated magnesium sulfates mixed with dark background material.
Abstract: The dwarf planet (1) Ceres, the largest object in the main asteroid belt with a mean diameter of about 950 kilometres, is located at a mean distance from the Sun of about 2.8 astronomical units (one astronomical unit is the Earth-Sun distance). Thermal evolution models suggest that it is a differentiated body with potential geological activity. Unlike on the icy satellites of Jupiter and Saturn, where tidal forces are responsible for spewing briny water into space, no tidal forces are acting on Ceres. In the absence of such forces, most objects in the main asteroid belt are expected to be geologically inert. The recent discovery of water vapour absorption near Ceres and previous detection of bound water and OH near and on Ceres (refs 5-7) have raised interest in the possible presence of surface ice. Here we report the presence of localized bright areas on Ceres from an orbiting imager. These unusual areas are consistent with hydrated magnesium sulfates mixed with dark background material, although other compositions are possible. Of particular interest is a bright pit on the floor of crater Occator that exhibits probable sublimation of water ice, producing haze clouds inside the crater that appear and disappear with a diurnal rhythm. Slow-moving condensed-ice or dust particles may explain this haze. We conclude that Ceres must have accreted material from beyond the 'snow line', which is the distance from the Sun at which water molecules condense.

Journal ArticleDOI
TL;DR: In this paper, the authors detected a late-time broad H$\alpha$ emission feature with luminosity of ~2x$10^{41}$erg/s, which has never been detected before among other H-poor superluminous supernova events.
Abstract: iPTF13ehe is a hydrogen-poor superluminous supernova (SLSN) at z=0.3434, with a slow-evolving light curve and spectral features similar to SN2007bi. It rises within (83-148)days (rest-frame) to reach a peak bolometric luminosity of 1.3x$10^{44}$erg/s, then decays very slowly at 0.015mag. per day. The measured ejecta velocity is 13000km/s. The inferred explosion characteristics, such as the ejecta mass (67-220$M_\odot$) and the total radiative and kinetic energy ($10^{51}$ & 2x$10^{53}$erg respectively), are typical of a slow-evolving H-poor SLSN event. However, the late-time spectrum taken at +251days reveals a Balmer H$\alpha$ emission feature with broad and narrow components, which has never been detected before among other H-poor SLSNe. The broad component has a velocity width of ~4500km/s and has a ~300km/s blue-ward shift relative to the narrow component. We interpret this broad H$\alpha$ emission with luminosity of ~2x$10^{41}$erg/s as resulting from the interaction between the supernova ejecta and a discrete H-rich shell, located at a distance of ~4x$10^{16}$cm from the explosion site. This ejecta-CSM interaction causes the rest-frame r-band LC to brighten at late times. The fact that the late-time spectra are not completely absorbed by the shock ionized CSM shell implies that its Thomson scattering optical depth is likely <1, thus setting upper limits on the CSM mass <30$M_\odot$ and the volume number density <4x$10^8cm^{-3}$. Of the existing models, a Pulsational Pair Instability Supernova model can naturally explain the observed 30$M_\odot$ H-shell, ejected from a progenitor star with an initial mass of (95-150)$M_\odot$ about 40 years ago. We estimate that at least ~15% of all SLSNe-I may have late-time Balmer emission lines.
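The CSM mass limit quoted above is consistent with simple Thomson-depth arithmetic: requiring $\tau_T = \kappa_{es} M / (4\pi R^2) < 1$ for a thin shell at R ~ 4x$10^{16}$cm gives M of roughly 30$M_\odot$. The snippet below reproduces that estimate; the electron-scattering opacity (~0.34 cm^2/g for hydrogen-rich gas) and the thin-shell geometry are standard assumptions, not details taken from the paper.

```python
import numpy as np

# Thin-shell Thomson-depth estimate: tau = kappa_es * M / (4*pi*R^2) < 1
# => M_max = 4*pi*R^2 / kappa_es. Opacity and geometry are assumed.
KAPPA_ES = 0.34          # cm^2/g, electron scattering in H-rich gas
M_SUN = 1.989e33         # g
R_SHELL = 4.0e16         # cm, distance of the H-rich shell from the explosion

m_max = 4.0 * np.pi * R_SHELL**2 / KAPPA_ES
print(f"CSM mass limit ~ {m_max / M_SUN:.0f} Msun")   # ~30 Msun, as quoted above
```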

Journal ArticleDOI
TL;DR: According to a backward trajectory analysis of dust days on the Arabian Peninsula, increased dust lifting and atmospheric dust concentrations in the Fertile Crescent during this recent, prolonged drought episode supported a greater frequency of dust events across the peninsula with associated northerly trajectories and led to the dust regime shift.
Abstract: The Arabian Peninsula has experienced pronounced interannual to decadal variability in dust activity, including an abrupt regime shift around 2006 from an inactive dust period during 1998–2005 to an active period during 2007–2013. Corresponding in time to the onset of this regime shift, the climate state transitioned into a combined La Niña and negative phase of the Pacific Decadal Oscillation, which incited a hiatus in global warming in the 2000s. Superimposed upon a long-term regional drying trend, synergistic interactions between these teleconnection modes triggered the establishment of a devastating and prolonged drought, which engulfed the Fertile Crescent, namely, Iraq and Syria, and led to crop failure and civil unrest. Dried soils and diminished vegetation cover in the Fertile Crescent, as evident through remotely sensed enhanced vegetation indices, supported greater dust generation and transport to the Arabian Peninsula in 2007–2013, as identified both in increased dust days observed at weather stations and enhanced remotely sensed aerosol optical depth. According to backward trajectory analysis of dust days on the Arabian Peninsula, increased dust lifting and atmospheric dust concentration in the Fertile Crescent during this recent, prolonged drought episode supported a greater frequency of dust events across the peninsula with associated northerly trajectories and led to the dust regime shift. These findings are particularly concerning, considering projections of warming and drying for the eastern Mediterranean region and potential collapse of the Fertile Crescent during this century.