Showing papers by "Goddard Space Flight Center" published in 2018
••
TL;DR: This analysis expands upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars.
Abstract: On 17 August 2017, the LIGO and Virgo observatories made the first direct detection of gravitational waves from the coalescence of a neutron star binary system. The detection of this gravitational-wave signal, GW170817, offers a novel opportunity to directly probe the properties of matter at the extreme conditions found in the interior of these stars. The initial, minimal-assumption analysis of the LIGO and Virgo data placed constraints on the tidal effects of the coalescing bodies, which were then translated to constraints on neutron star radii. Here, we expand upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars. Our analysis employs two methods: the use of equation-of-state-insensitive relations between various macroscopic properties of the neutron stars and the use of an efficient parametrization of the defining function p(ρ) of the equation of state itself. From the LIGO and Virgo data alone and the first method, we measure the two neutron star radii as R1 = 10.8 +2.0/−1.7 km for the heavier star and R2 = 10.7 +2.1/−1.5 km for the lighter star at the 90% credible level. If we additionally require that the equation of state supports neutron stars with masses larger than 1.97 M⊙ as required from electromagnetic observations and employ the equation-of-state parametrization, we further constrain R1 = 11.9 +1.4/−1.4 km and R2 = 11.9 +1.4/−1.4 km at the 90% credible level. Finally, we obtain constraints on p(ρ) at supranuclear densities, with the pressure at twice nuclear saturation density measured at 3.5 +2.7/−1.7 × 10^34 dyn cm^−2 at the 90% level.
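The abstract above refers to an efficient parametrization of the equation-of-state function p(ρ). As a rough illustration of what such a parametrization can look like, here is a generic piecewise-polytrope sketch in Python; the segment boundaries, adiabatic indices and anchor pressure are hypothetical placeholders, not the parametrization or the values used in the GW170817 analysis.

    # Illustrative sketch only: a generic piecewise-polytrope parametrization of the
    # equation of state p(rho). The segment boundaries and adiabatic indices below
    # are hypothetical placeholders, not the specific parametrization fitted in the
    # GW170817 analysis.
    RHO_NUC = 2.7e14  # nuclear saturation density in g/cm^3

    def pressure(rho, p_anchor, gammas=(2.5, 3.0, 2.8), dividers=(1.85, 3.7)):
        """Return pressure (dyn/cm^2) at mass density rho (g/cm^3).

        p_anchor -- pressure at the first dividing density (anchors the curve)
        gammas   -- adiabatic index in each of the three density segments
        dividers -- segment boundaries in units of RHO_NUC
        """
        bounds = [d * RHO_NUC for d in dividers]
        # Enforce continuity of p(rho) across segment boundaries.
        K = [p_anchor / bounds[0] ** gammas[0]]
        for i in range(1, len(gammas)):
            K.append(K[i - 1] * bounds[i - 1] ** (gammas[i - 1] - gammas[i]))
        for i, b in enumerate(bounds):
            if rho <= b:
                return K[i] * rho ** gammas[i]
        return K[-1] * rho ** gammas[-1]

    # e.g. pressure at twice nuclear saturation density for an arbitrary anchor value:
    print(f"{pressure(2 * RHO_NUC, p_anchor=4.0e34):.2e} dyn/cm^2")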
1,595 citations
••
University of East Anglia1, University of Exeter2, Alfred Wegener Institute for Polar and Marine Research3, Ludwig Maximilian University of Munich4, Max Planck Society5, Commonwealth Scientific and Industrial Research Organisation6, Karlsruhe Institute of Technology7, Cooperative Institute for Marine and Atmospheric Studies8, Atlantic Oceanographic and Meteorological Laboratory9, École Normale Supérieure10, Centre national de la recherche scientifique11, University of Maryland, College Park12, University of Virginia13, Flanders Marine Institute14, Oak Ridge National Laboratory15, Woods Hole Research Center16, University of Illinois at Urbana–Champaign17, Geophysical Institute, University of Bergen18, Met Office19, University of California, San Diego20, Utrecht University21, Netherlands Environmental Assessment Agency22, University of Paris23, Oeschger Centre for Climate Change Research24, Tsinghua University25, National Center for Atmospheric Research26, Institute of Arctic and Alpine Research27, National Institute for Environmental Studies28, Hobart Corporation29, Cooperative Research Centre30, Japan Agency for Marine-Earth Science and Technology31, Wageningen University and Research Centre32, University of Groningen33, Bjerknes Centre for Climate Research34, Goddard Space Flight Center35, Leibniz Institute for Baltic Sea Research36, Princeton University37, Leibniz Institute of Marine Sciences38, National Oceanic and Atmospheric Administration39, Auburn University40, Food and Agriculture Organization41, VU University Amsterdam42
TL;DR: In this article, the authors describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties, including emissions from land use and land-use change data and bookkeeping models.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the "global carbon budget" – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties. Fossil CO2 emissions (EFF) are based on energy statistics and cement production data, while emissions from land use and land-use change (ELUC), mainly deforestation, are based on land use and land-use change data and bookkeeping models. Atmospheric CO2 concentration is measured directly and its growth rate (GATM) is computed from the annual changes in concentration. The ocean CO2 sink (SOCEAN) and terrestrial CO2 sink (SLAND) are estimated with global process models constrained by observations. The resulting carbon budget imbalance (BIM), the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere, is a measure of imperfect data and understanding of the contemporary carbon cycle. All uncertainties are reported as ±1σ. For the last decade available (2008–2017), EFF was 9.4±0.5 GtC yr−1, ELUC 1.5±0.7 GtC yr−1, GATM 4.7±0.02 GtC yr−1, SOCEAN 2.4±0.5 GtC yr−1, and SLAND 3.2±0.8 GtC yr−1, with a budget imbalance BIM of 0.5 GtC yr−1 indicating overestimated emissions and/or underestimated sinks. For the year 2017 alone, the growth in EFF was about 1.6% and emissions increased to 9.9±0.5 GtC yr−1. Also for 2017, ELUC was 1.4±0.7 GtC yr−1, GATM was 4.6±0.2 GtC yr−1, SOCEAN was 2.5±0.5 GtC yr−1, and SLAND was 3.8±0.8 GtC yr−1, with a BIM of 0.3 GtC yr−1. The global atmospheric CO2 concentration reached 405.0±0.1 ppm averaged over 2017. For 2018, preliminary data for the first 6–9 months indicate a renewed growth in EFF of +2.7% (range of 1.8% to 3.7%) based on national emission projections for China, the US, the EU, and India and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. The analysis presented here shows that the mean and trend in the five components of the global carbon budget are consistently estimated over the period 1959–2017, but discrepancies of up to 1 GtC yr−1 persist for the representation of semi-decadal variability in CO2 fluxes. A detailed comparison among individual estimates and the introduction of a broad range of observations show (1) no consensus in the mean and trend in land-use change emissions, (2) a persistent low agreement among the different methods on the magnitude of the land CO2 flux in the northern extra-tropics, and (3) an apparent underestimation of the CO2 variability by ocean models, originating outside the tropics. This living data update documents changes in the methods and data sets used in this new global carbon budget and the progress in understanding the global carbon cycle compared with previous publications of this data set (Le Quéré et al., 2018, 2016, 2015a, b, 2014, 2013). All results presented here can be downloaded from https://doi.org/10.18160/GCP-2018.
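The budget imbalance BIM quoted above is simply the residual of the five components. A quick arithmetic check with the 2008–2017 decadal means from the abstract (the small difference from the quoted 0.5 GtC yr−1 comes from rounding of the individual terms):

    # Sketch: the carbon budget imbalance as the residual of the five components
    # quoted in the abstract for 2008-2017 (all values in GtC per year).
    E_FF, E_LUC = 9.4, 1.5                   # fossil and land-use-change emissions
    G_ATM, S_OCEAN, S_LAND = 4.7, 2.4, 3.2   # atmospheric growth, ocean sink, land sink

    B_IM = (E_FF + E_LUC) - (G_ATM + S_OCEAN + S_LAND)
    print(f"B_IM = {B_IM:+.1f} GtC/yr")      # ~ +0.6 from rounded inputs; abstract quotes 0.5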
1,458 citations
••
TL;DR: Satellite data for the period 1982–2016 reveal changes in land use and land cover at global and regional scales that reflect patterns of land change indicative of a human-dominated Earth system.
Abstract: Land change is a cause and consequence of global environmental change1,2. Changes in land use and land cover considerably alter the Earth’s energy balance and biogeochemical cycles, which contributes to climate change and—in turn—affects land surface properties and the provision of ecosystem services1–4. However, quantification of global land change is lacking. Here we analyse 35 years’ worth of satellite data and provide a comprehensive record of global land-change dynamics during the period 1982–2016. We show that—contrary to the prevailing view that forest area has declined globally5—tree cover has increased by 2.24 million km2 (+7.1% relative to the 1982 level). This overall net gain is the result of a net loss in the tropics being outweighed by a net gain in the extratropics. Global bare ground cover has decreased by 1.16 million km2 (−3.1%), most notably in agricultural regions in Asia. Of all land changes, 60% are associated with direct human activities and 40% with indirect drivers such as climate change. Land-use change exhibits regional dominance, including tropical deforestation and agricultural expansion, temperate reforestation or afforestation, cropland intensification and urbanization. Consistently across all climate domains, montane systems have gained tree cover and many arid and semi-arid ecosystems have lost vegetation cover. The mapped land changes and the driver attributions reflect a human-dominated Earth system. The dataset we developed may be used to improve the modelling of land-use changes, biogeochemical cycles and vegetation–climate interactions to advance our understanding of global environmental change1–4,6. Satellite data for the period 1982–2016 reveal changes in land use and land cover at global and regional scales that reflect patterns of land change indicative of a human-dominated Earth system.
1,096 citations
••
TL;DR: Analysis of 2002–2016 GRACE satellite observations of terrestrial water storage reveals substantial changes in freshwater resources globally, which are driven by natural and anthropogenic climate variability and human activities.
Abstract: Freshwater availability is changing worldwide. Here we quantify 34 trends in terrestrial water storage observed by the Gravity Recovery and Climate Experiment (GRACE) satellites during 2002–2016 and categorize their drivers as natural interannual variability, unsustainable groundwater consumption, climate change or combinations thereof. Several of these trends had been lacking thorough investigation and attribution, including massive changes in northwestern China and the Okavango Delta. Others are consistent with climate model predictions. This observation-based assessment of how the world’s water landscape is responding to human impacts and climate variations provides a blueprint for evaluating and predicting emerging threats to water and food security.
966 citations
••
TL;DR: In this article, the authors present possible observing scenarios for the Advanced LIGO, Advanced Virgo and KAGRA gravitational-wave detectors over the next decade, with the intention of providing information to the astronomy community to facilitate planning for multi-messenger astronomy with gravitational waves.
Abstract: We present possible observing scenarios for the Advanced LIGO, Advanced Virgo and KAGRA gravitational-wave detectors over the next decade, with the intention of providing information to the astronomy community to facilitate planning for multi-messenger astronomy with gravitational waves. We estimate the sensitivity of the network to transient gravitational-wave signals, and study the capability of the network to determine the sky location of the source. We report our findings for gravitational-wave transients, with particular focus on gravitational-wave signals from the inspiral of binary neutron star systems, which are the most promising targets for multi-messenger astronomy. The ability to localize the sources of the detected signals depends on the geographical distribution of the detectors and their relative sensitivity, and 90% credible regions can be as large as thousands of square degrees when only two sensitive detectors are operational. Determining the sky position of a significant fraction of detected signals to areas of 5–20 deg2 requires at least three detectors of sensitivity within a factor of ∼2 of each other and with a broad frequency bandwidth. When all detectors, including KAGRA and the third LIGO detector in India, reach design sensitivity, a significant fraction of gravitational-wave signals will be localized to a few square degrees by gravitational-wave observations alone.
804 citations
••
University of Leeds1, California Institute of Technology2, University of California, Irvine3, University of Washington4, Durham University5, University of Grenoble6, Goddard Space Flight Center7, University of Bristol8, University of Colorado Boulder9, Geological Survey of Denmark and Greenland10, University at Buffalo11, National Space Institute12, University of South Florida13, University of Texas at Austin14, University College London15, Dresden University of Technology16, Georgia Institute of Technology17, University of Lincoln18, University of Arizona19, Alfred Wegener Institute for Polar and Marine Research20, Technische Universität München21, Danish Meteorological Institute22, Memorial University of Newfoundland23, National Institute of Geophysics and Volcanology24, Bergen University College25, University of Magallanes26, Remote Sensing Center27, Newcastle University28, University of Toronto29, University of Bonn30, Delft University of Technology31, Seoul National University32, University of Urbino33, University of Stuttgart34
TL;DR: This work combines satellite observations of its changing volume, flow and gravitational attraction with modelling of its surface mass balance to show that the Antarctic Ice Sheet lost 2,720 ± 1,390 billion tonnes of ice between 1992 and 2017, which corresponds to an increase in mean sea level of 7.6 ± 3.9 millimetres.
Abstract: The Antarctic Ice Sheet is an important indicator of climate change and driver of sea-level rise. Here we combine satellite observations of its changing volume, flow and gravitational attraction with modelling of its surface mass balance to show that it lost 2,720 ± 1,390 billion tonnes of ice between 1992 and 2017, which corresponds to an increase in mean sea level of 7.6 ± 3.9 millimetres (errors are one standard deviation). Over this period, ocean-driven melting has caused rates of ice loss from West Antarctica to increase from 53 ± 29 billion to 159 ± 26 billion tonnes per year; ice-shelf collapse has increased the rate of ice loss from the Antarctic Peninsula from 7 ± 13 billion to 33 ± 16 billion tonnes per year. We find large variations in and among model estimates of surface mass balance and glacial isostatic adjustment for East Antarctica, with its average rate of mass gain over the period 1992–2017 (5 ± 46 billion tonnes per year) being the least certain.
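The conversion from the quoted ice-mass loss to its sea-level contribution can be checked with the common approximation that roughly 360 Gt of ice corresponds to about 1 mm of global mean sea level; the sketch below uses that rule of thumb rather than the paper's exact ocean-area value.

    # Sketch: convert the quoted Antarctic mass loss to a sea-level equivalent using
    # the common approximation that ~360 Gt of ice corresponds to ~1 mm of global
    # mean sea level.
    mass_loss_gt = 2720          # Gt lost 1992-2017 (abstract value)
    GT_PER_MM = 360.0            # approximate conversion factor

    print(f"~{mass_loss_gt / GT_PER_MM:.1f} mm of sea-level rise")  # ~7.6 mm, as quoted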
725 citations
••
TL;DR: Simple extrapolation of the quadratic implies global mean sea level could rise 65 ± 12 cm by 2100 compared with 2005, roughly in agreement with the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5) model projections.
Abstract: Using a 25-y time series of precision satellite altimeter data from TOPEX/Poseidon, Jason-1, Jason-2, and Jason-3, we estimate the climate-change-driven acceleration of global mean sea level over the last 25 y to be 0.084 ± 0.025 mm/y2. Coupled with the average climate-change-driven rate of sea level rise over these same 25 y of 2.9 mm/y, simple extrapolation of the quadratic implies global mean sea level could rise 65 ± 12 cm by 2100 compared with 2005, roughly in agreement with the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5) model projections.
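The 65 cm figure follows from quadratic extrapolation of the fitted rate and acceleration from 2005 to 2100; a short worked sketch (assuming a 95-year horizon, consistent with the abstract's 2005 baseline):

    # Sketch: quadratic extrapolation of global mean sea level (GMSL) using the
    # rate and acceleration quoted in the abstract, measured relative to 2005.
    rate = 2.9           # mm/yr, climate-change-driven rate over the altimeter era
    accel = 0.084        # mm/yr^2, estimated acceleration
    years = 2100 - 2005  # extrapolation horizon in years

    rise_mm = rate * years + 0.5 * accel * years ** 2
    print(f"GMSL rise by 2100: ~{rise_mm / 10:.0f} cm")   # ~65 cm, as quoted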
671 citations
••
629 citations
••
Goddard Space Flight Center1, Cornell University2, West Virginia University3, Pennsylvania State University4, Notre Dame of Maryland University5, Montana State University6, Franklin & Marshall College7, University of Virginia8, University of British Columbia9, Lafayette College10, National Radio Astronomy Observatory11, Hillsdale College12, Norwich University13, McGill University14, University of Illinois at Urbana–Champaign15, The University of Texas Rio Grande Valley16, Columbia University17, University of Wisconsin–Milwaukee18, California Institute of Technology19, Haverford College20, First Green Bank21, United States Naval Research Laboratory22, Oberlin College23, Chinese Academy of Sciences24
TL;DR: In this article, the authors presented high-precision timing data over time spans of up to 11 years for 45 millisecond pulsars observed as part of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) project, aimed at detecting and characterizing low-frequency gravitational waves.
Abstract: We present high-precision timing data over time spans of up to 11 years for 45 millisecond pulsars observed as part of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) project, aimed at detecting and characterizing low-frequency gravitational waves. The pulsars were observed with the Arecibo Observatory and/or the Green Bank Telescope at frequencies ranging from 327 MHz to 2.3 GHz. Most pulsars were observed with approximately monthly cadence, and six high-timing-precision pulsars were observed weekly. All were observed at widely separated frequencies at each observing epoch in order to fit for time-variable dispersion delays. We describe our methods for data processing, time-of-arrival (TOA) calculation, and the implementation of a new, automated method for removing outlier TOAs. We fit a timing model for each pulsar that includes spin, astrometric, and (for binary pulsars) orbital parameters; time-variable dispersion delays; and parameters that quantify pulse-profile evolution with frequency. The timing solutions provide three new parallax measurements, two new Shapiro delay measurements, and two new measurements of significant orbital-period variations. We fit models that characterize sources of noise for each pulsar. We find that 11 pulsars show significant red noise, with generally smaller spectral indices than typically measured for non-recycled pulsars, possibly suggesting a different origin. A companion paper uses these data to constrain the strength of the gravitational-wave background.
481 citations
••
TL;DR: The new Version 2.3 of the GPCP Monthly analysis is described in terms of changes made to improve the homogeneity of the product, especially after 2002, and the general La Niña pattern for 2017 is noted and the evolution from the early 2016 El Niño pattern is described.
Abstract: The new Version 2.3 of the Global Precipitation Climatology Project (GPCP) Monthly analysis is described in terms of changes made to improve the homogeneity of the product, especially after 2002. These changes include corrections to cross-calibration of satellite data inputs and updates to the gauge analysis. Over-ocean changes starting in 2003 resulted in an overall precipitation increase of 1.8% after 2009. Updating the gauge analysis to its final, high-quality version increases the global land total by 1.8% for the post-2002 period. These changes correct a small, incorrect dip in the estimated global precipitation over the last decade given by the earlier Version 2.2. The GPCP analysis is also used to describe global precipitation in 2017. The general La Niña pattern for 2017 is noted and the evolution from the early 2016 El Niño pattern is described. The 2017 global value is one of the highest for the 1979–2017 period, exceeded only by 2016 and 1998 (both El Niño years), and reinforces the small positive trend. Results for 2017 also reinforce significant trends in precipitation intensity (on a monthly scale) in the tropics. These results for 2017 indicate the value of the GPCP analysis, in addition to research, for climate monitoring.
478 citations
••
TL;DR: In this paper, the authors describe the latest version of the algorithm MAIAC used for processing the MODIS Collection 6 data record, which has changed considerably to adapt to global processing and improve cloud/snow detection, aerosol retrievals and atmospheric correction of MODIS data.
Abstract: This paper describes the latest version of the algorithm MAIAC used for processing the MODIS Collection 6 data record. Since initial publication in 2011–2012, MAIAC has changed considerably to adapt to global processing and improve cloud/snow detection, aerosol retrievals and atmospheric correction of MODIS data. The main changes include (1) transition from a 25 km to a 1 km scale for retrieval of the spectral regression coefficient (SRC), which helped to remove occasional blockiness at the 25 km scale in the aerosol optical depth (AOD) and in the surface reflectance, (2) continuous improvements of cloud detection, (3) introduction of smoke and dust tests to discriminate absorbing fine- and coarse-mode aerosols, (4) adding over-water processing, (5) general optimization of the LUT-based radiative transfer for the global processing, and others. MAIAC provides an interdisciplinary suite of atmospheric and land products, including cloud mask (CM), column water vapor (CWV), AOD at 0.47 and 0.55 µm, aerosol type (background, smoke or dust) and fine-mode fraction over water; spectral bidirectional reflectance factors (BRF), parameters of the Ross-thick Li-sparse (RTLS) bidirectional reflectance distribution function (BRDF) model and instantaneous albedo. For snow-covered surfaces, we provide subpixel snow fraction and snow grain size. All products come in standard HDF4 format at 1 km resolution, except for BRF, which is also provided at 500 m resolution on a sinusoidal grid adopted by the MODIS Land team. All products are provided on a per-observation basis in daily files, except for the BRDF/Albedo product, which is reported every 8 days. Because MAIAC uses a time series approach, BRDF/Albedo is naturally gap-filled over land, where missing values are filled in with results from the previous retrieval. While the BRDF model is reported for MODIS Land bands 1–7 and ocean band 8, BRF is reported for both land and ocean bands 1–12. This paper focuses on MAIAC cloud detection, aerosol retrievals and atmospheric correction and describes MCD19 data products and quality assurance (QA) flags.
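The Ross-thick Li-sparse (RTLS) BRDF model mentioned above expresses surface reflectance as a linear combination of an isotropic term and two angular kernels. The sketch below computes the standard Ross-thick volumetric kernel and combines the kernels with illustrative weights; the Li-sparse geometric kernel is more involved and is passed in as an assumed precomputed value, and the weights are hypothetical, not MAIAC retrievals.

    import math

    # Sketch of the kernel-based RTLS BRDF model: reflectance is a linear combination
    #   R = f_iso + f_vol * K_vol + f_geo * K_geo.
    # The Ross-thick volumetric kernel is computed below; the Li-sparse geometric
    # kernel is more involved and is supplied here as a precomputed value.

    def ross_thick_kernel(theta_s, theta_v, rel_az):
        """Ross-thick volumetric scattering kernel (angles in radians)."""
        cos_xi = (math.cos(theta_s) * math.cos(theta_v)
                  + math.sin(theta_s) * math.sin(theta_v) * math.cos(rel_az))
        xi = math.acos(max(-1.0, min(1.0, cos_xi)))   # scattering phase angle
        return (((math.pi / 2 - xi) * math.cos(xi) + math.sin(xi))
                / (math.cos(theta_s) + math.cos(theta_v)) - math.pi / 4)

    def rtls_brf(f_iso, f_vol, f_geo, k_vol, k_geo):
        """Bidirectional reflectance factor from RTLS kernel weights."""
        return f_iso + f_vol * k_vol + f_geo * k_geo

    # Example with illustrative (hypothetical) kernel weights for one band:
    k_vol = ross_thick_kernel(math.radians(30), math.radians(10), math.radians(120))
    print(rtls_brf(f_iso=0.05, f_vol=0.02, f_geo=0.01, k_vol=k_vol, k_geo=-1.2))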
••
Goddard Space Flight Center1, West Virginia University2, Cornell University3, Pennsylvania State University4, Notre Dame of Maryland University5, Montana State University6, Franklin & Marshall College7, University of Virginia8, University of British Columbia9, Lafayette College10, National Radio Astronomy Observatory11, Hillsdale College12, Norwich University13, California Institute of Technology14, McGill University15, University of Illinois at Urbana–Champaign16, University of Washington17, University of Wisconsin–Milwaukee18, Columbia University19, Haverford College20, The University of Texas Rio Grande Valley21, First Green Bank22, United States Naval Research Laboratory23, Eötvös Loránd University24, Oberlin College25, Chinese Academy of Sciences26
TL;DR: In this paper, an isotropic stochastic GWB in the newly released 11-year data set from the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) was searched for and the first pulsar-timing array (PTA) constraints that are robust against SSE errors were obtained.
Abstract: We search for an isotropic stochastic gravitational-wave background (GWB) in the newly released 11 year data set from the North American Nanohertz Observatory for Gravitational Waves (NANOGrav). While we find no evidence for a GWB, we place constraints on a population of inspiraling supermassive black hole (SMBH) binaries, a network of decaying cosmic strings, and a primordial GWB. For the first time, we find that the GWB constraints are sensitive to the solar system ephemeris (SSE) model used and that SSE errors can mimic a GWB signal. We developed an approach that bridges systematic SSE differences, producing the first pulsar-timing array (PTA) constraints that are robust against SSE errors. We thus place a 95% upper limit on the GW-strain amplitude of A_(GWB) < 1.45 × 10^(−15) at a frequency of f = 1 yr^(−1) for a fiducial f^(−2/3) power-law spectrum and with interpulsar correlations modeled. This is a factor of ~2 improvement over the NANOGrav nine-year limit calculated using the same procedure. Previous PTA upper limits on the GWB (as well as their astrophysical and cosmological interpretations) will need revision in light of SSE systematic errors. We use our constraints to characterize the combined influence on the GWB of the stellar mass density in galactic cores, the eccentricity of SMBH binaries, and SMBH–galactic-bulge scaling relationships. We constrain the cosmic-string tension using recent simulations, yielding an SSE-marginalized 95% upper limit of Gμ < 5.3 × 10^(−11)—a factor of ~2 better than the published NANOGrav nine-year constraints. Our SSE-marginalized 95% upper limit on the energy density of a primordial GWB (for a radiation-dominated post-inflation universe) is Ω_(GWB)(f) h^2 < 3.4 × 10^(−10).
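The quoted upper limit applies to a power-law characteristic-strain spectrum h_c(f) = A_GWB (f / 1 yr^−1)^(−2/3). The sketch below evaluates that spectrum at the limit and converts it to a fractional energy density with the standard relation Ω_GW(f) = (2π²/3H₀²) f² h_c(f)²; the Hubble constant is an assumed value, since the paper quotes Ω h² limits instead.

    import math

    # Sketch: evaluate the power-law GWB characteristic strain at the quoted 95%
    # upper limit, h_c(f) = A_GWB * (f / f_yr)^(-2/3), and convert to an energy
    # density via Omega_GW(f) = (2*pi^2 / (3*H0^2)) * f^2 * h_c(f)^2.
    A_GWB = 1.45e-15                       # strain-amplitude upper limit at f = 1/yr
    F_YR = 1.0 / (365.25 * 24 * 3600.0)    # 1/yr in Hz
    H0 = 67.0 * 1000 / 3.086e22            # assumed Hubble constant, in 1/s

    def h_c(f_hz, amp=A_GWB, alpha=-2.0 / 3.0):
        return amp * (f_hz / F_YR) ** alpha

    def omega_gw(f_hz):
        return (2 * math.pi ** 2 / (3 * H0 ** 2)) * f_hz ** 2 * h_c(f_hz) ** 2

    print(f"h_c(1/yr) = {h_c(F_YR):.2e},  Omega_GW(1/yr) = {omega_gw(F_YR):.2e}")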
••
National Radio Astronomy Observatory1, California Institute of Technology2, University of Oxford3, Tel Aviv University4, Princeton University5, Texas Tech University6, Hebrew University of Jerusalem7, University of Sydney8, University of Wisconsin–Milwaukee9, Commonwealth Scientific and Industrial Research Organisation10, Stockholm University11, Tata Institute of Fundamental Research12, Swinburne University of Technology13, Radboud University Nijmegen14, Indian Institute of Technology Bombay15, Chalmers University of Technology16, Goddard Space Flight Center17
TL;DR: The cocoon model explains the radio light curve of GW170817, as well as the γ-ray and X-ray emission (and possibly also the ultraviolet and optical emission), and is the model that is most consistent with the observational data.
Abstract: GW170817 was the first gravitational-wave detection of a binary neutron-star merger. It was accompanied by radiation across the electromagnetic spectrum and localized to the galaxy NGC 4993 at a distance of 40 megaparsecs. It has been proposed that the observed γ-ray, X-ray and radio emission is due to an ultra-relativistic jet being launched during the merger (and successfully breaking out of the surrounding material), directed away from our line of sight (off-axis). The presence of such a jet is predicted from models that posit neutron-star mergers as the drivers of short hard-γ-ray bursts. Here we report that the radio light curve of GW170817 has no direct signature of the afterglow of an off-axis jet. Although we cannot completely rule out the existence of a jet directed away from the line of sight, the observed γ-ray emission could not have originated from such a jet. Instead, the radio data require the existence of a mildly relativistic wide-angle outflow moving towards us. This outflow could be the high-velocity tail of the neutron-rich material that was ejected dynamically during the merger, or a cocoon of material that breaks out when a jet launched during the merger transfers its energy to the dynamical ejecta. Because the cocoon model explains the radio light curve of GW170817, as well as the γ-ray and X-ray emission (and possibly also the ultraviolet and optical emission), it is the model that is most consistent with the observational data. Cocoons may be a ubiquitous phenomenon produced in neutron-star mergers, giving rise to a hitherto unidentified population of radio, ultraviolet, X-ray and γ-ray transients in the local Universe.
••
TL;DR: This study presents a new global baseline of mangrove extent for 2010, released as the first output of the Global Mangrove Watch (GMW) initiative; it is the first study to apply a globally consistent and automated method for mapping mangroves.
Abstract: This study presents a new global baseline of mangrove extent for 2010 and has been released as the first output of the Global Mangrove Watch (GMW) initiative. This is the first study to apply a globally consistent and automated method for mapping mangroves, identifying a global extent of 137,600 km2. The overall accuracy for mangrove extent was 94.0% with a 99% likelihood that the true value is between 93.6% and 94.5%, using 53,878 accuracy points across 20 sites distributed globally. Using the geographic regions of the Ramsar Convention on Wetlands, Asia has the highest proportion of mangroves with 38.7% of the global total, while Latin America and the Caribbean have 20.3%, Africa has 20.0%, Oceania has 11.9%, North America has 8.4% and the European Overseas Territories have 0.7%. The methodology developed is primarily based on the classification of ALOS PALSAR and Landsat sensor data, where a habitat mask was first generated, within which the classification of mangrove was undertaken using the Extremely Randomized Trees classifier. This new globally consistent baseline will also form the basis of a mangrove monitoring system using JAXA JERS-1 SAR, ALOS PALSAR and ALOS-2 PALSAR-2 radar data to assess mangrove change from 1996 to the present. However, when using the product, users should note that a minimum mapping unit of 1 ha is recommended and that the error increases in regions of disturbance and where narrow strips or smaller fragmented areas of mangroves are present. Artefacts due to cloud cover and the Landsat-7 SLC-off error are also present in some areas, particularly regions of West Africa, due to the lack of Landsat-5 data and persistent cloud cover. In the future, consideration will be given to the production of a new global baseline based on 10 m Sentinel-2 composites.
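The Extremely Randomized Trees classifier named above is available in common libraries; the following is a minimal sketch of a per-pixel mangrove/non-mangrove classification with scikit-learn. The feature matrix and labels are random stand-ins for stacked PALSAR/Landsat pixel values, not the Global Mangrove Watch training data.

    # Minimal sketch: per-pixel mangrove / non-mangrove classification with an
    # Extremely Randomized Trees classifier (scikit-learn). X and y below are
    # hypothetical stand-ins, not the Global Mangrove Watch training set.
    import numpy as np
    from sklearn.ensemble import ExtraTreesClassifier

    rng = np.random.default_rng(0)
    X = rng.random((1000, 8))                    # 1000 pixels x 8 backscatter/reflectance features
    y = (X[:, 0] + X[:, 3] > 1.0).astype(int)    # toy labels: 1 = mangrove, 0 = other

    clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X[:800], y[:800])
    print("held-out accuracy:", clf.score(X[800:], y[800:]))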
••
Goddard Space Flight Center1, Massachusetts Institute of Technology2, Carnegie Institution for Science3, Pierre-and-Marie-Curie University4, National Autonomous University of Mexico5, Jacobs Engineering Group6, Stony Brook University7, California Institute of Technology8, Imperial College London9, University of California, Davis10, Supélec11, Paris Diderot University12
TL;DR: In situ detection of organic matter preserved in lacustrine mudstones at the base of the ~3.5-billion-year-old Murray formation at Pahrump Hills, Gale crater, by the Sample Analysis at Mars instrument suite onboard the Curiosity rover is reported.
Abstract: Establishing the presence and state of organic matter, including its possible biosignatures, in martian materials has been an elusive quest, despite limited reports of the existence of organic matter on Mars. We report the in situ detection of organic matter preserved in lacustrine mudstones at the base of the ~3.5-billion-year-old Murray formation at Pahrump Hills, Gale crater, by the Sample Analysis at Mars instrument suite onboard the Curiosity rover. Diverse pyrolysis products, including thiophenic, aromatic, and aliphatic compounds released at high temperatures (500° to 820°C), were directly detected by evolved gas analysis. Thiophenes were also observed by gas chromatography–mass spectrometry. Their presence suggests that sulfurization aided organic matter preservation. At least 50 nanomoles of organic carbon persists, probably as macromolecules containing 5% carbon as organic sulfur molecules.
••
Space Telescope Science Institute1, Ames Research Center2, Search for extraterrestrial intelligence3, California Institute of Technology4, Massachusetts Institute of Technology5, Bishop's University6, Harvard University7, Space Science Institute8, University of Texas at Austin9, Villanova University10, Princeton University11, Goddard Space Flight Center12, University of Birmingham13, Aarhus University14, Principia College15, Lowell Observatory16, University of California, Berkeley17, Brigham Young University18, University of Nevada, Las Vegas19, San Diego State University20, Pennsylvania State University21
TL;DR: The Robovetter and the metrics it uses to decide which TCEs are called planet candidates in the DR25 KOI catalog are discussed, along with a disposition score that provides an easy way to select a more reliable, albeit less complete, sample of candidates.
Abstract: We present the Kepler Object of Interest (KOI) catalog of transiting exoplanets based on searching 4 yr of Kepler time series photometry (Data Release 25, Q1–Q17). The catalog contains 8054 KOIs, of which 4034 are planet candidates with periods between 0.25 and 632 days. Of these candidates, 219 are new, including two in multiplanet systems (KOI-82.06 and KOI-2926.05) and 10 high-reliability, terrestrial-size, habitable zone candidates. This catalog was created using a tool called the Robovetter, which automatically vets the DR25 threshold crossing events (TCEs). The Robovetter also vetted simulated data sets and measured how well it was able to separate TCEs caused by noise from those caused by low signal-to-noise transits. We discuss the Robovetter and the metrics it uses to sort TCEs. For orbital periods less than 100 days the Robovetter completeness (the fraction of simulated transits that are determined to be planet candidates) across all observed stars is greater than 85%. For the same period range, the catalog reliability (the fraction of candidates that are not due to instrumental or stellar noise) is greater than 98%. However, for low signal-to-noise candidates between 200 and 500 days around FGK-dwarf stars, the Robovetter is 76.7% complete and the catalog is 50.5% reliable. The KOI catalog, the transit fits, and all of the simulated data used to characterize this catalog are available at the NASA Exoplanet Archive.
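The two headline metrics of the catalog are defined in the abstract as simple fractions; as a toy sketch, they can be computed from counts like the ones below (all counts are made up for illustration and are not DR25 numbers).

    # Sketch: completeness = fraction of injected (simulated) transits the Robovetter
    # passes as planet candidates; reliability = fraction of observed candidates that
    # are not instrumental/stellar false alarms. Counts below are hypothetical.
    injected, injected_recovered = 2000, 1720      # hypothetical injection test
    observed_pc, estimated_false_alarms = 400, 8   # hypothetical catalog counts

    completeness = injected_recovered / injected
    reliability = 1.0 - estimated_false_alarms / observed_pc
    print(f"completeness = {completeness:.1%}, reliability = {reliability:.1%}")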
••
European Centre for Medium-Range Weather Forecasts1, University of Bristol2, National Space Institute3, Goddard Space Flight Center4, European Space Agency5, National Oceanic and Atmospheric Administration6, Goethe University Frankfurt7, University of South Florida8, University of Bremen9, Academia Sinica10, University of Texas at Austin11, Chinese Academy of Sciences12, University of New South Wales13, Trent University14, University of Siegen15, IFREMER16, Commonwealth Scientific and Industrial Research Organisation17, California Institute of Technology18, University of Bonn19, University of Urbino20, Dresden University of Technology21, Old Dominion University22, University of Leeds23, ETH Zurich24, University of Grenoble25, University of Bern26, Northern Oklahoma College27, Australian National University28, University of Oslo29, University of Rennes30, University of the Balearic Islands31, University of Reading32, University of California, San Diego33, University of Ottawa34, University of California, Irvine35, University of Colorado Boulder36, University of Zurich37, Woods Hole Oceanographic Institution38, Delft University of Technology39, Alfred Wegener Institute for Polar and Marine Research40, Ohio State University41, University of Hamburg42, Utrecht University43, University of California44, Bjerknes Centre for Climate Research45, University of Tasmania46, University of La Rochelle47
TL;DR: In this paper, the authors present estimates of the altimetry-based global mean sea level (average rate of 3.1 +/- 0.3 mm/yr and acceleration of 0.1 mm/yr2 over 1993-present), as well as of the different components of the sea level budget over 2005-present, using GRACE-based ocean mass estimates.
Abstract: Global mean sea level is an integral of changes occurring in the climate system in response to unforced climate variability as well as natural and anthropogenic forcing factors. Its temporal evolution allows detecting changes (e.g., acceleration) in one or more components. Study of the sea level budget provides constraints on missing or poorly known contributions, such as the unsurveyed deep ocean or the still uncertain land water component. In the context of the World Climate Research Programme Grand Challenge entitled "Regional Sea Level and Coastal Impacts", an international effort involving the sea level community worldwide has recently been initiated with the objective of assessing the various data sets used to estimate components of the sea level budget during the altimetry era (1993 to present). These data sets are based on the combination of a broad range of space-based and in situ observations, model estimates and algorithms. Evaluating their quality, quantifying uncertainties and identifying sources of discrepancies between component estimates is extremely useful for various applications in climate research. This effort involves several tens of scientists from about fifty research teams/institutions worldwide (www.wcrp-climate.org/grand-challenges/gc-sea-level). The results presented in this paper are a synthesis of the first assessment performed during 2017-2018. We present estimates of the altimetry-based global mean sea level (average rate of 3.1 +/- 0.3 mm/yr and acceleration of 0.1 mm/yr2 over 1993-present), as well as of the different components of the sea level budget (http://doi.org/10.17882/54854). We further examine closure of the sea level budget, comparing the observed global mean sea level with the sum of components. Ocean thermal expansion, glaciers, Greenland and Antarctica contribute 42%, 21%, 15% and 8%, respectively, to the global mean sea level over 1993-present. We also study the sea level budget over 2005-present, using GRACE-based ocean mass estimates instead of the sum of individual mass components. Results show closure of the sea level budget within 0.3 mm/yr. Substantial uncertainty remains for the land water storage component, as shown when examining individual mass contributions to sea level.
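The budget-closure exercise described above amounts to comparing the altimetry rate with the sum of the component rates. A small sketch using the percentage contributions quoted in the abstract (the residual is largely the land-water term plus uncertainties):

    # Sketch: sea-level budget closure using the numbers quoted in the abstract.
    # Component contributions are given as fractions of the altimetry-based rate.
    gmsl_rate = 3.1   # mm/yr, altimetry-based global mean sea level rate, 1993-present
    fractions = {"thermal expansion": 0.42, "glaciers": 0.21,
                 "Greenland": 0.15, "Antarctica": 0.08}

    explained = sum(fractions.values()) * gmsl_rate
    print(f"sum of listed components: {explained:.2f} mm/yr "
          f"(residual {gmsl_rate - explained:.2f} mm/yr, largely land water + uncertainty)")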
••
University of California, Riverside1, Virtual Planetary Laboratory2, Ames Research Center3, Goddard Institute for Space Studies4, Columbia University5, University of Maryland, Baltimore6, Arizona State University7, Goddard Space Flight Center8, NASA Astrobiology Institute9, University of Washington10, University of Edinburgh11, German Aerospace Center12, Cornell University13, University of St Andrews14, California Institute of Technology15
TL;DR: A comprehensive overview of the current understanding of potential exoplanet biosignatures, including gaseous, surface, and temporal signatures, can be found in this article, with a focus on recent advances in assessing biosignature plausibility.
Abstract: In the coming years and decades, advanced space- and ground-based observatories will allow an unprecedented opportunity to probe the atmospheres and surfaces of potentially habitable exoplanets for signatures of life. Life on Earth, through its gaseous products and reflectance and scattering properties, has left its fingerprint on the spectrum of our planet. Aided by the universality of the laws of physics and chemistry, we turn to Earth's biosphere, both in the present and through geologic time, for analog signatures that will aid in the search for life elsewhere. Considering the insights gained from modern and ancient Earth, and the broader array of hypothetical exoplanet possibilities, we have compiled a comprehensive overview of our current understanding of potential exoplanet biosignatures, including gaseous, surface, and temporal biosignatures. We additionally survey biogenic spectral features that are well known in the specialist literature but have not yet been robustly vetted in the context of exoplanet biosignatures. We briefly review advances in assessing biosignature plausibility, including novel methods for determining chemical disequilibrium from remotely obtainable data and assessment tools for determining the minimum biomass required to maintain short-lived biogenic gases as atmospheric signatures. We focus particularly on advances made since the seminal review by Des Marais et al. The purpose of this work is not to propose new biosignature strategies, a goal left to companion articles in this series, but to review the current literature, draw meaningful connections between seemingly disparate areas, and clear the way for a path forward.
••
University of Maryland, College Park1, Grinnell College2, University of Chicago3, Massachusetts Institute of Technology4, NASA Exoplanet Science Institute5, Harvard University6, California Institute of Technology7, University of Maryland, Baltimore County8, Goddard Space Flight Center9, University College London10, Space Telescope Science Institute11, Pennsylvania State University12, University of Colorado Boulder13, University of Amsterdam14, Technical University of Denmark15, McGill University16, The Catholic University of America17, Université de Montréal18, University of Bern19, University of California, Riverside20, Leibniz Institute for Astrophysics Potsdam21, University of Padua22, University of La Laguna23, Spanish National Research Council24, University of Michigan25, Arizona State University26, University of Exeter27, INAF28, Vanderbilt University29, Aarhus University30
TL;DR: In this article, the authors present a set of analytic metrics, quantifying the expected signal-to-noise in transmission and thermal emission spectroscopy for a given planet, that will allow the top atmospheric characterization targets to be readily identified among the TESS planet candidates.
Abstract: A key legacy of the recently launched Transiting Exoplanet Survey Satellite (TESS) mission will be to provide the astronomical community with many of the best transiting exoplanet targets for atmospheric characterization. However, time is of the essence to take full advantage of this opportunity. The James Webb Space Telescope (JWST), although delayed, will still complete its nominal five-year mission on a timeline that motivates rapid identification, confirmation, and mass measurement of the top atmospheric characterization targets from TESS. Beyond JWST, future dedicated missions for atmospheric studies such as the Atmospheric Remote-sensing Infrared Exoplanet Large-survey (ARIEL) require the discovery and confirmation of several hundred additional sub-Jovian size planets (Rp < 10 R⊕) orbiting bright stars, beyond those known today, to ensure a successful statistical census of exoplanet atmospheres. Ground-based extremely large telescopes (ELTs) will also contribute to surveying the atmospheres of the transiting planets discovered by TESS. Here we present a set of two straightforward analytic metrics, quantifying the expected signal-to-noise in transmission and thermal emission spectroscopy for a given planet, that will allow the top atmospheric characterization targets to be readily identified among the TESS planet candidates. Targets that meet our proposed threshold values for these metrics would be encouraged for rapid follow-up and confirmation via radial velocity mass measurements. Based on the catalog of simulated TESS detections by Sullivan et al., we determine appropriate cutoff values of the metrics, such that the TESS mission will ultimately yield a sample of ~300 high-quality atmospheric characterization targets across a range of planet size bins, extending down to Earth-size, potentially habitable worlds.
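One of the two analytic metrics is commonly quoted in the form TSM ∝ Rp³ Teq / (Mp Rs²) · 10^(−mJ/5). The sketch below evaluates a metric of that form; the radius-bin-dependent normalization from the paper is omitted, so the output is only proportional to the published metric, and the example planet parameters are hypothetical.

    # Sketch of a transmission-spectroscopy metric of the form used in this paper:
    # TSM ∝ (Rp^3 * Teq) / (Mp * Rs^2) * 10^(-mJ/5), with Rp and Mp in Earth units,
    # Rs in solar radii, Teq in kelvin and mJ the host star's apparent J magnitude.
    # The radius-bin-dependent scale factor from the paper is omitted here.
    def tsm_like(rp_earth, mp_earth, rs_sun, teq_k, j_mag):
        return (rp_earth ** 3 * teq_k) / (mp_earth * rs_sun ** 2) * 10 ** (-j_mag / 5)

    # Hypothetical warm sub-Neptune around a bright M dwarf:
    print(f"relative TSM ~ {tsm_like(2.5, 6.0, 0.4, 600, 9.0):.1f}")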
••
TL;DR: In this article, the authors present estimates of how many exoplanets the Transiting Exoplanet Survey Satellite (TESS) will detect, the physical properties of the detected planets, and the properties of the stars that those planets orbit.
Abstract: The Transiting Exoplanet Survey Satellite (TESS) has a goal of detecting small planets orbiting stars bright enough for mass determination via ground-based radial velocity observations. Here, we present estimates of how many exoplanets the TESS mission will detect, the physical properties of the detected planets, and the properties of the stars that those planets orbit. This work uses stars drawn from the TESS Input Catalog Candidate Target List and revises yields from prior studies that were based on Galactic models. We modeled the TESS observing strategy to select approximately 200,000 stars at 2-minute cadence, while the remaining stars are observed at 30-minute cadence in full-frame image data. We placed zero or more planets in orbit around each star, with physical properties following measured exoplanet occurrence rates, and used the TESS noise model to predict the derived properties of the detected exoplanets. In the TESS 2-minute cadence mode we estimate that TESS will find 1250 ± 70 exoplanets (90% confidence), including 250 smaller than 2 R⊕. Furthermore, we predict that an additional 3100 planets will be found in full-frame image data orbiting bright dwarf stars and more than 10,000 around fainter stars. We predict that TESS will find 500 planets orbiting M dwarfs, but the majority of planets will orbit stars larger than the Sun. Our simulated sample of planets contains hundreds of small planets amenable to radial velocity follow-up, potentially more than tripling the number of planets smaller than 4 R⊕ with mass measurements. This sample of simulated planets is available for use in planning follow-up observations and analyses.
••
TL;DR: The Orbiting Carbon Observatory-2 (OCO-2) is capable of measuring solar-induced chlorophyll fluorescence (SIF), a functional proxy for terrestrial gross primary productivity (GPP).
••
University of California, Berkeley1, Imperial College London2, University of Delaware3, University of Maryland, College Park4, Dartmouth College5, West Virginia University6, Southwest Research Institute7, University of New Hampshire8, The Catholic University of America9, Goddard Space Flight Center10, Swedish Institute of Space Physics11, University of Toulouse12, University of Colorado Boulder13, École Polytechnique14, University of California, Los Angeles15, Royal Institute of Technology16, Austrian Academy of Sciences17
TL;DR: Observations of electron-scale current sheets in Earth’s turbulent magnetosheath reveal electron reconnection without ion coupling, contrary to expectations from the standard model of magnetic reconnection.
Abstract: Magnetic reconnection in current sheets is a magnetic-to-particle energy conversion process that is fundamental to many space and laboratory plasma systems. In the standard model of reconnection, this process occurs in a minuscule electron-scale diffusion region1,2. On larger scales, ions couple to the newly reconnected magnetic-field lines and are ejected away from the diffusion region in the form of bi-directional ion jets at the ion Alfvén speed3-5. Much of the energy conversion occurs in spatially extended ion exhausts downstream of the diffusion region6. In turbulent plasmas, which contain a large number of small-scale current sheets, reconnection has long been suggested to have a major role in the dissipation of turbulent energy at kinetic scales7-11. However, evidence for reconnection plasma jetting in small-scale turbulent plasmas has so far been lacking. Here we report observations made in Earth's turbulent magnetosheath region (downstream of the bow shock) of an electron-scale current sheet in which diverging bi-directional super-ion-Alfvénic electron jets, parallel electric fields and enhanced magnetic-to-particle energy conversion were detected. Contrary to the standard model of reconnection, the thin reconnecting current sheet was not embedded in a wider ion-scale current layer and no ion jets were detected. Observations of this and other similar, but unidirectional, electron jet events without signatures of ion reconnection reveal a form of reconnection that can drive turbulent energy transfer and dissipation in electron-scale current sheets without ion coupling.
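The ion Alfvén speed referenced above, which sets the expected ion-jet speed in standard reconnection, is v_A = B / sqrt(μ0 n_i m_i). A quick sketch with illustrative magnetosheath-like values (not the values measured in this event):

    import math

    # Sketch: the ion Alfven speed v_A = B / sqrt(mu0 * n_i * m_i), the speed at
    # which ion jets are ejected in standard reconnection. Values below are typical
    # magnetosheath-like numbers chosen for illustration.
    MU0 = 4e-7 * math.pi          # vacuum permeability, H/m
    M_PROTON = 1.6726e-27         # kg

    def alfven_speed(b_tesla, n_per_m3, m_ion=M_PROTON):
        return b_tesla / math.sqrt(MU0 * n_per_m3 * m_ion)

    v_a = alfven_speed(b_tesla=20e-9, n_per_m3=20e6)   # 20 nT, 20 cm^-3
    print(f"v_A ~ {v_a / 1e3:.0f} km/s")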
••
12 May 2018
TL;DR: A review of recent advances in understanding of drought dynamics, drawing from studies of paleoclimate, the historical record, and model simulations of the past and future, can be found in this paper.
Abstract: Drought is a complex and multivariate phenomenon influenced by diverse physical and biological processes. Such complexity precludes simplistic explanations of cause and effect, making investigations of climate change and drought a challenging task. Here, we review important recent advances in our understanding of drought dynamics, drawing from studies of paleoclimate, the historical record, and model simulations of the past and future. Paleoclimate studies of drought variability over the last two millennia have progressed considerably through the development of new reconstructions and analyses combining reconstructions with process-based models. This work has generated new evidence for tropical Pacific forcing of megadroughts in Southwest North America, provided additional constraints for interpreting climate change projections in poorly characterized regions like East Africa, and demonstrated the exceptional magnitude of many modern era droughts. Development of high resolution proxy networks has lagged in many regions (e.g., South America, Africa), however, and quantitative comparisons between the paleoclimate record, models, and observations remain challenging. Fingerprints of anthropogenic climate change consistent with long-term warming projections have been identified for droughts in California, the Pacific Northwest, Western North America, and the Mediterranean. In other regions (e.g., Southwest North America, Australia, Africa), however, the degree to which climate change has affected recent droughts is more uncertain. While climate change-forced declines in precipitation have been detected for the Mediterranean, in most regions, the climate change signal has manifested through warmer temperatures that have increased evaporative losses and reduced snowfall and snowpack levels, amplifying deficits in soil moisture and runoff despite uncertain precipitation changes. Over the next century, projections indicate that warming will increase drought risk and severity across much of the subtropics and mid-latitudes in both hemispheres, a consequence of regional precipitation declines and widespread warming. For many regions, however, the magnitude, robustness, and even direction of climate change-forced trends in drought depends on how drought is defined, with often large differences across indicators of precipitation, soil moisture, runoff, and vegetation health. Increasing confidence in climate change projections of drought and the associated impacts will likely depend on resolving uncertainties in processes that are currently poorly constrained (e.g., land-atmosphere interactions, terrestrial vegetation) and improved consideration of the role for human policies and management in ameliorating and adapting to changes in drought risk.
••
TL;DR: In this article, a spherical harmonic model of the magnetic field of Jupiter was obtained from vector magnetic field observations acquired by the Juno spacecraft during its first nine polar orbits about the planet, which provided the first truly global coverage of Jupiter's magnetic field with a coarse longitudinal separation of ~45 deg between perijoves.
Abstract: A spherical harmonic model of the magnetic field of Jupiter is obtained from vector magnetic field observations acquired by the Juno spacecraft during its first nine polar orbits about the planet. Observations acquired during eight of these orbits provide the first truly global coverage of Jupiter's magnetic field with a coarse longitudinal separation of ~45 deg between perijoves. The magnetic field is represented with a degree 20 spherical harmonic model for the planetary ("internal") field, combined with a simple model of the magnetodisc for the field ("external") due to distributed magnetospheric currents. Partial solution of the underdetermined inverse problem using generalized inverse techniques yields a model ("Juno Reference Model through Perijove 9") of the planetary magnetic field with spherical harmonic coefficients well determined through degree and order 10, providing the first detailed view of a planetary dynamo beyond Earth.
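In such a model the internal field derives from a scalar potential expanded in spherical harmonics; for example, the radial component of just the degree-1 (dipole) terms is B_r = 2 (a/r)³ [g10 cosθ + (g11 cosφ + h11 sinφ) sinθ]. The sketch below evaluates only this lowest-order piece with illustrative, roughly Jupiter-scale coefficients; they are not the JRM09 values, which extend to degree and order 10.

    import math

    # Sketch: radial magnetic field of only the dipole (n = 1) terms of a spherical
    # harmonic expansion. The model described above carries the expansion to
    # degree/order 10 within a degree-20 parametrization; coefficients below are
    # illustrative round numbers, not the JRM09 values.
    def dipole_br(r_over_a, theta, phi, g10, g11, h11):
        """Radial field (same units as the coefficients) at colatitude theta, east longitude phi."""
        return 2.0 * r_over_a ** -3 * (g10 * math.cos(theta)
                                       + (g11 * math.cos(phi) + h11 * math.sin(phi)) * math.sin(theta))

    # Example near the pole at r = a (coefficients in gauss):
    print(f"B_r ~ {dipole_br(1.0, math.radians(10), 0.0, g10=4.1, g11=-0.7, h11=0.2):.2f} G")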
••
Cooperative Institute for Research in Environmental Sciences1, Centre national de la recherche scientifique2, University of Toulouse3, University of Bremen4, Université libre de Bruxelles5, Paris Diderot University6, University of the Littoral Opal Coast7, Agencia Estatal de Meteorología8, Polytechnic University of Catalonia9, National Center for Atmospheric Research10, Karlsruhe Institute of Technology11, Harvard University12, University of Washington13, University of Wollongong14, Academy of Athens15, Rutherford Appleton Laboratory16, California Institute of Technology17, China Meteorological Administration18, University of Toronto19, University of Liège20, Cooperative Institute for Mesoscale Meteorological Studies21, National Institute of Water and Atmospheric Research22, Forschungszentrum Jülich23, State University of New York System24, Swiss Federal Laboratories for Materials Science and Technology25, National Institute for Environmental Studies26, Goddard Space Flight Center27, Belgian Institute for Space Aeronomy28, Morgan State University29
TL;DR: The Tropospheric Ozone Assessment Report (TOAR) is an activity of the International Global Atmospheric Chemistry Project as mentioned in this paper, which provides a detailed view of ozone in the lower troposphere across East Asia and Europe.
Abstract: The Tropospheric Ozone Assessment Report (TOAR) is an activity of the International Global Atmospheric Chemistry Project. This paper is a component of the report, focusing on the present-day distribution and trends of tropospheric ozone relevant to climate and global atmospheric chemistry model evaluation. Utilizing the TOAR surface ozone database, several figures present the global distribution and trends of daytime average ozone at 2702 non-urban monitoring sites, highlighting the regions and seasons of the world with the greatest ozone levels. Similarly, ozonesonde and commercial aircraft observations reveal ozone’s distribution throughout the depth of the free troposphere. Long-term surface observations are limited in their global spatial coverage, but data from remote locations indicate that ozone in the 21st century is greater than during the 1970s and 1980s. While some remote sites and many sites in the heavily polluted regions of East Asia show ozone increases since 2000, many others show decreases and there is no clear global pattern for surface ozone changes since 2000. Two new satellite products provide detailed views of ozone in the lower troposphere across East Asia and Europe, revealing the full spatial extent of the spring and summer ozone enhancements across eastern China that cannot be assessed from limited surface observations. Sufficient data are now available (ozonesondes, satellite, aircraft) across the tropics from South America eastwards to the western Pacific Ocean, to indicate a likely tropospheric column ozone increase since the 1990s. The 2014–2016 mean tropospheric ozone burden (TOB) between 60˚N–60˚S from five satellite products is 300 Tg ± 4%. While this agreement is excellent, the products differ in their quantification of TOB trends and further work is required to reconcile the differences. Satellites can now estimate ozone’s global long-wave radiative effect, but evaluation is difficult due to limited in situ observations where the radiative effect is greatest.
••
European Space Agency1, Leibniz University of Hanover2, Imperial College London3, Paris Diderot University4, University of Trento5, fondazione bruno kessler6, University of Urbino7, University of Birmingham8, ETH Zurich9, UK Astronomy Technology Centre10, Institut de Ciències de l'Espai11, European Space Operations Centre12, University of Zurich13, University of Glasgow14, Polytechnic University of Catalonia15, Goddard Space Flight Center16, University of Florida17
TL;DR: This performance provides an experimental benchmark demonstrating the ability to realize the low-frequency science potential of the LISA mission, recently selected by the European Space Agency.
Abstract: In the months since the publication of the first results, the noise performance of LISA Pathfinder has improved because of reduced Brownian noise due to the continued decrease in pressure around the test masses, from a better correction of noninertial effects, and from a better calibration of the electrostatic force actuation. In addition, the availability of numerous long noise measurement runs, during which no perturbation is purposely applied to the test masses, has allowed the measurement of noise with good statistics down to 20 μHz. The Letter presents the measured differential acceleration noise figure, which is at (1.74 ± 0.01) fm s−2/√Hz above 2 mHz and (6 ± 1) × 10 fm s−2/√Hz at 20 μHz, and discusses the physical sources for the measured noise. This performance provides an experimental benchmark demonstrating the ability to realize the low-frequency science potential of the LISA mission, recently selected by the European Space Agency.
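The quoted figures are amplitude spectral densities (ASDs) of differential test-mass acceleration. As a rough sketch of how such a number is estimated from a time series, here is Welch's method applied to synthetic white noise generated at about the quoted >2 mHz level; the real LISA Pathfinder analysis is considerably more involved.

    # Sketch: estimating an amplitude spectral density (ASD) from a time series of
    # differential acceleration using Welch's method. The data are synthetic white
    # noise at roughly the quoted >2 mHz level (~1.74 fm s^-2/sqrt(Hz)).
    import numpy as np
    from scipy.signal import welch

    fs = 10.0                                  # sample rate, Hz
    n = int(fs * 3 * 24 * 3600)                # ~3 days of samples
    target_asd = 1.74e-15                      # m s^-2 / sqrt(Hz)
    rng = np.random.default_rng(1)
    accel = rng.normal(0.0, target_asd * np.sqrt(fs / 2), n)   # white-noise time series

    f, psd = welch(accel, fs=fs, nperseg=2 ** 18)
    asd = np.sqrt(psd)
    band = (f > 2e-3) & (f < 30e-3)
    print(f"median ASD, 2-30 mHz: {np.median(asd[band]):.2e} m s^-2/Hz^0.5")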
•
01 Jan 2018
TL;DR: This paper explores poisoning attacks on neural nets using "clean-labels", an optimization-based method for crafting poisons, and shows that just one single poison image can control classifier behavior when transfer learning is used.
Abstract: Data poisoning is an attack on machine learning models wherein the attacker adds examples to the training set to manipulate the behavior of the model at test time. This paper explores poisoning attacks on neural nets. The proposed attacks use "clean-labels"; they don't require the attacker to have any control over the labeling of training data. They are also targeted; they control the behavior of the classifier on a specific test instance without degrading overall classifier performance. For example, an attacker could add a seemingly innocuous image (that is properly labeled) to a training set for a face recognition engine, and control the identity of a chosen person at test time. Because the attacker does not need to control the labeling function, poisons could be entered into the training set simply by putting them online and waiting for them to be scraped by a data collection bot. We present an optimization-based method for crafting poisons, and show that just one single poison image can control classifier behavior when transfer learning is used. For full end-to-end training, we present a "watermarking" strategy that makes poisoning reliable using multiple (approx. 50) poisoned training instances. We demonstrate our method by generating poisoned frog images from the CIFAR dataset and using them to manipulate image classifiers.
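The optimization described above crafts a poison that stays close to a properly labeled base image while colliding with a target in feature space, i.e. it minimizes ||f(x) − f(t)||² + β||x − b||². The sketch below implements that objective with plain gradient descent in PyTorch; the feature extractor, images and optimizer here are stand-ins (the paper uses the victim network's penultimate layer and a forward-backward splitting procedure rather than Adam).

    # Sketch of the feature-collision poisoning objective: craft a poison image that
    # stays visually close to a base image b while its deep features match a target t,
    # i.e. minimize ||f(x) - f(t)||^2 + beta * ||x - b||^2. All components below are
    # placeholders, not the paper's exact networks or data.
    import torch

    feature_net = torch.nn.Sequential(          # placeholder feature extractor f(.)
        torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128), torch.nn.ReLU(),
        torch.nn.Linear(128, 64))
    feature_net.eval()

    base = torch.rand(1, 3, 32, 32)              # properly-labeled base image (stand-in)
    target = torch.rand(1, 3, 32, 32)            # test instance the attacker wants to control
    beta = 0.1

    poison = base.clone().requires_grad_(True)
    opt = torch.optim.Adam([poison], lr=0.01)
    with torch.no_grad():
        target_feat = feature_net(target)

    for step in range(200):
        opt.zero_grad()
        loss = ((feature_net(poison) - target_feat) ** 2).sum() + beta * ((poison - base) ** 2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            poison.clamp_(0.0, 1.0)              # keep the poison a valid image

    print("final feature-collision loss:", float(loss))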
••
California Institute of Technology1, Goddard Space Flight Center2, United States Department of Agriculture3, Massachusetts Institute of Technology4, University of Texas at Austin5, Monash University6, University of Guelph7, Comisión Nacional de Actividades Espaciales8, University of Salamanca9, Technical University of Denmark10, University of Twente11, University of Tsukuba12, National Oceanic and Atmospheric Administration13, University of Colorado Boulder14, University of Arizona15
TL;DR: This article covers the development and assessment of the SMAP Level 2 Enhanced Passive Soil Moisture Product (L2_SM_P_E) and affirms that the Single Channel Algorithm using the V-polarized TB channel (SCA-V) delivered the best retrieval performance among the various algorithms implemented for L2_SM_P, a result similar to a previous assessment.
••
TL;DR: In this paper, a broad-band study of GW170817 from radio to hard X-rays, including NuSTAR and Chandra observations up to 165 d after the merger, and a multimessenger analysis including LIGO constraints is presented, providing the first detailed comparison between non-trivial cocoon and jet models.
Abstract: We present our broad-band study of GW170817 from radio to hard X-rays, including NuSTAR and Chandra observations up to 165 d after the merger, and a multimessenger analysis including LIGO constraints. The data are compared with predictions from a wide range of models, providing the first detailed comparison between non-trivial cocoon and jet models. Homogeneous and power-law shaped jets, as well as simple cocoon models are ruled out by the data, while both a Gaussian shaped jet and a cocoon with energy injection can describe the current data set for a reasonable range of physical parameters, consistent with the typical values derived from short GRB afterglows. We propose that these models can be unambiguously discriminated by future observations measuring the post-peak behaviour, with F_ν ∝ t^(∼−1.0) for the cocoon and F_ν∝ t^(∼−2.5) for the jet model.