
Showing papers by "University of Maryland, College Park" published in 2019


Journal ArticleDOI
TL;DR: HyperFace as discussed by the authors combines face detection, landmarks localization, pose estimation and gender recognition using deep convolutional neural networks (CNNs) and achieves significant improvement in performance by fusing intermediate layers of a deep CNN using a separate CNN followed by a multi-task learning algorithm that operates on the fused features.
Abstract: We present an algorithm for simultaneous face detection, landmarks localization, pose estimation and gender recognition using deep convolutional neural networks (CNNs). The proposed method, called HyperFace, fuses the intermediate layers of a deep CNN using a separate CNN followed by a multi-task learning algorithm that operates on the fused features. It exploits the synergy among the tasks, which boosts their individual performances. Additionally, we propose two variants of HyperFace: (1) HyperFace-ResNet, which builds on the ResNet-101 model and achieves significant improvement in performance, and (2) Fast-HyperFace, which uses a high-recall fast face detector for generating region proposals to improve the speed of the algorithm. Extensive experiments show that the proposed models are able to capture both global and local information in faces and perform significantly better than many competitive algorithms for each of these four tasks.
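As a rough illustration of the fusion-plus-multi-task idea summarized above, the sketch below builds a small CNN whose intermediate feature maps are projected, pooled, and mixed by a separate 1×1-convolution network feeding four task heads. This is a minimal sketch, not the authors' architecture (their variants build on deeper backbones such as ResNet-101); every layer size and name here is an illustrative assumption.

```python
# Hypothetical HyperFace-style sketch: fuse intermediate CNN layers with a
# separate small CNN, then run four task-specific heads on the fused features.
import torch
import torch.nn as nn

class HyperFaceSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU())
        # "Fusion CNN": project each intermediate map to a common width,
        # pool to a fixed grid, and mix with a 1x1 convolution.
        self.proj = nn.ModuleList([nn.Conv2d(c, 32, 1) for c in (32, 64, 128)])
        self.pool = nn.AdaptiveAvgPool2d(8)
        self.fuse = nn.Sequential(nn.Conv2d(96, 64, 1), nn.ReLU(), nn.Flatten())
        feat = 64 * 8 * 8
        self.detect = nn.Linear(feat, 2)      # face / non-face
        self.landmarks = nn.Linear(feat, 42)  # 21 (x, y) points, an assumption
        self.pose = nn.Linear(feat, 3)        # roll, pitch, yaw
        self.gender = nn.Linear(feat, 2)

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        fused = self.fuse(torch.cat(
            [self.pool(p(f)) for p, f in zip(self.proj, (f1, f2, f3))], dim=1))
        return (self.detect(fused), self.landmarks(fused),
                self.pose(fused), self.gender(fused))

model = HyperFaceSketch()
outs = model(torch.randn(2, 3, 227, 227))
print([tuple(o.shape) for o in outs])  # [(2, 2), (2, 42), (2, 3), (2, 2)]
```

During training, the four task losses would simply be summed (optionally weighted), which is one way the multi-task synergy the abstract mentions can be exploited.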

1,218 citations


Journal ArticleDOI
Eric C. Bellm1, Shrinivas R. Kulkarni2, Matthew J. Graham2, Richard Dekany2, Roger M. H. Smith2, Reed Riddle2, Frank J. Masci2, George Helou2, Thomas A. Prince2, Scott M. Adams2, Cristina Barbarino3, Tom A. Barlow2, James Bauer4, Ron Beck2, Justin Belicki2, Rahul Biswas3, Nadejda Blagorodnova2, Dennis Bodewits4, Bryce Bolin1, V. Brinnel5, Tim Brooke2, Brian D. Bue2, Mattia Bulla3, Rick Burruss2, S. Bradley Cenko4, S. Bradley Cenko6, Chan-Kao Chang7, Andrew J. Connolly1, Michael W. Coughlin2, John Cromer2, Virginia Cunningham4, Kaushik De2, Alex Delacroix2, Vandana Desai2, Dmitry A. Duev2, Gwendolyn Eadie1, Tony L. Farnham4, Michael Feeney2, Ulrich Feindt3, David Flynn2, Anna Franckowiak, Sara Frederick4, Christoffer Fremling2, Avishay Gal-Yam8, Suvi Gezari4, Matteo Giomi5, Daniel A. Goldstein2, V. Zach Golkhou1, Ariel Goobar3, Steven Groom2, Eugean Hacopians2, David Hale2, John Henning2, Anna Y. Q. Ho2, David Hover2, Justin Howell2, Tiara Hung4, Daniela Huppenkothen1, David Imel2, Wing-Huen Ip7, Wing-Huen Ip9, Željko Ivezić1, Edward Jackson2, Lynne Jones1, Mario Juric1, Mansi M. Kasliwal2, Shai Kaspi10, Stephen Kaye2, Michael S. P. Kelley4, Marek Kowalski5, Emily Kramer2, Thomas Kupfer11, Thomas Kupfer2, Walter Landry2, Russ R. Laher2, Chien De Lee7, Hsing Wen Lin12, Hsing Wen Lin7, Zhong-Yi Lin7, Ragnhild Lunnan3, Ashish Mahabal2, Peter H. Mao2, Adam A. Miller13, Adam A. Miller14, Serge Monkewitz2, Patrick J. Murphy2, Chow-Choong Ngeow7, Jakob Nordin5, Peter Nugent15, Peter Nugent16, Eran O. Ofek8, Maria T. Patterson1, Bryan E. Penprase17, Michael Porter2, L. Rauch, Umaa Rebbapragada2, Daniel J. Reiley2, Mickael Rigault18, Hector P. Rodriguez2, Jan van Roestel19, Ben Rusholme2, J. V. Santen, Steve Schulze8, David L. Shupe2, Leo Singer6, Leo Singer4, Maayane T. Soumagnac8, Robert Stein, Jason Surace2, Jesper Sollerman3, Paula Szkody1, Francesco Taddia3, Scott Terek2, Angela Van Sistine20, Sjoert van Velzen4, W. Thomas Vestrand21, Richard Walters2, Charlotte Ward4, Quanzhi Ye2, Po-Chieh Yu7, Lin Yan2, Jeffry Zolkower2 
TL;DR: The Zwicky Transient Facility (ZTF) as mentioned in this paper is a new optical time-domain survey that uses the Palomar 48 inch Schmidt telescope, which provides a 47 deg^2 field of view and 8 s readout time, yielding more than an order of magnitude improvement in survey speed relative to its predecessor survey.
Abstract: The Zwicky Transient Facility (ZTF) is a new optical time-domain survey that uses the Palomar 48 inch Schmidt telescope. A custom-built wide-field camera provides a 47 deg^2 field of view and 8 s readout time, yielding more than an order of magnitude improvement in survey speed relative to its predecessor survey, the Palomar Transient Factory. We describe the design and implementation of the camera and observing system. The ZTF data system at the Infrared Processing and Analysis Center provides near-real-time reduction to identify moving and varying objects. We outline the analysis pipelines, data products, and associated archive. Finally, we present on-sky performance analysis and first scientific results from commissioning and the early survey. ZTF's public alert stream will serve as a useful precursor for that of the Large Synoptic Survey Telescope.

1,009 citations


Journal ArticleDOI
Pierre Friedlingstein1, Pierre Friedlingstein2, Matthew W. Jones3, Michael O'Sullivan2, Robbie M. Andrew, Judith Hauck4, Glen P. Peters, Wouter Peters5, Wouter Peters6, Julia Pongratz7, Julia Pongratz8, Stephen Sitch2, Corinne Le Quéré3, Dorothee C. E. Bakker3, Josep G. Canadell9, Philippe Ciais10, Robert B. Jackson11, Peter Anthoni12, Leticia Barbero13, Leticia Barbero14, Ana Bastos8, Vladislav Bastrikov10, Meike Becker15, Meike Becker16, Laurent Bopp1, Erik T. Buitenhuis3, Naveen Chandra17, Frédéric Chevallier10, Louise Chini18, Kim I. Currie19, Richard A. Feely20, Marion Gehlen10, Dennis Gilfillan21, Thanos Gkritzalis22, Daniel S. Goll23, Nicolas Gruber24, Sören B. Gutekunst25, Ian Harris26, Vanessa Haverd9, Richard A. Houghton27, George C. Hurtt18, Tatiana Ilyina7, Atul K. Jain28, Emilie Joetzjer10, Jed O. Kaplan29, Etsushi Kato, Kees Klein Goldewijk30, Kees Klein Goldewijk31, Jan Ivar Korsbakken, Peter Landschützer7, Siv K. Lauvset16, Nathalie Lefèvre32, Andrew Lenton33, Andrew Lenton34, Sebastian Lienert35, Danica Lombardozzi36, Gregg Marland21, Patrick C. McGuire37, Joe R. Melton, Nicolas Metzl32, David R. Munro38, Julia E. M. S. Nabel7, Shin-Ichiro Nakaoka39, Craig Neill33, Abdirahman M Omar33, Abdirahman M Omar16, Tsuneo Ono, Anna Peregon40, Anna Peregon10, Denis Pierrot13, Denis Pierrot14, Benjamin Poulter41, Gregor Rehder42, Laure Resplandy43, Eddy Robertson44, Christian Rödenbeck7, Roland Séférian10, Jörg Schwinger16, Jörg Schwinger31, Naomi E. Smith45, Naomi E. Smith6, Pieter P. Tans20, Hanqin Tian46, Bronte Tilbrook34, Bronte Tilbrook33, Francesco N. Tubiello47, Guido R. van der Werf48, Andy Wiltshire44, Sönke Zaehle7 
École Normale Supérieure1, University of Exeter2, Norwich Research Park3, Alfred Wegener Institute for Polar and Marine Research4, University of Groningen5, Wageningen University and Research Centre6, Max Planck Society7, Ludwig Maximilian University of Munich8, Commonwealth Scientific and Industrial Research Organisation9, Centre national de la recherche scientifique10, Stanford University11, Karlsruhe Institute of Technology12, Cooperative Institute for Marine and Atmospheric Studies13, Atlantic Oceanographic and Meteorological Laboratory14, Geophysical Institute, University of Bergen15, Bjerknes Centre for Climate Research16, Japan Agency for Marine-Earth Science and Technology17, University of Maryland, College Park18, National Institute of Water and Atmospheric Research19, National Oceanic and Atmospheric Administration20, Appalachian State University21, Flanders Marine Institute22, Augsburg College23, ETH Zurich24, Leibniz Institute of Marine Sciences25, University of East Anglia26, Woods Hole Research Center27, University of Illinois at Urbana–Champaign28, University of Hong Kong29, Utrecht University30, Netherlands Environmental Assessment Agency31, University of Paris32, Hobart Corporation33, University of Tasmania34, University of Bern35, National Center for Atmospheric Research36, University of Reading37, Cooperative Institute for Research in Environmental Sciences38, National Institute for Environmental Studies39, Russian Academy of Sciences40, Goddard Space Flight Center41, Leibniz Institute for Baltic Sea Research42, Princeton University43, Met Office44, Lund University45, Auburn University46, Food and Agriculture Organization47, VU University Amsterdam48
TL;DR: In this article, the authors describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties, including emissions from land use and land use change, and show that the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere is a measure of imperfect data and understanding of the contemporary carbon cycle.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the “global carbon budget” – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties. Fossil CO2 emissions (EFF) are based on energy statistics and cement production data, while emissions from land use change (ELUC), mainly deforestation, are based on land use and land use change data and bookkeeping models. Atmospheric CO2 concentration is measured directly and its growth rate (GATM) is computed from the annual changes in concentration. The ocean CO2 sink (SOCEAN) and terrestrial CO2 sink (SLAND) are estimated with global process models constrained by observations. The resulting carbon budget imbalance (BIM), the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere, is a measure of imperfect data and understanding of the contemporary carbon cycle. All uncertainties are reported as ±1σ. For the last decade available (2009–2018), EFF was 9.5 ± 0.5 GtC yr⁻¹, ELUC 1.5 ± 0.7 GtC yr⁻¹, GATM 4.9 ± 0.02 GtC yr⁻¹ (2.3 ± 0.01 ppm yr⁻¹), SOCEAN 2.5 ± 0.6 GtC yr⁻¹, and SLAND 3.2 ± 0.6 GtC yr⁻¹, with a budget imbalance BIM of 0.4 GtC yr⁻¹ indicating overestimated emissions and/or underestimated sinks. For the year 2018 alone, the growth in EFF was about 2.1% and fossil emissions increased to 10.0 ± 0.5 GtC yr⁻¹, reaching 10 GtC yr⁻¹ for the first time in history; ELUC was 1.5 ± 0.7 GtC yr⁻¹, for total anthropogenic CO2 emissions of 11.5 ± 0.9 GtC yr⁻¹ (42.5 ± 3.3 GtCO2). Also for 2018, GATM was 5.1 ± 0.2 GtC yr⁻¹ (2.4 ± 0.1 ppm yr⁻¹), SOCEAN was 2.6 ± 0.6 GtC yr⁻¹, and SLAND was 3.5 ± 0.7 GtC yr⁻¹, with a BIM of 0.3 GtC. The global atmospheric CO2 concentration reached 407.38 ± 0.1 ppm averaged over 2018. For 2019, preliminary data for the first 6–10 months indicate a reduced growth in EFF of +0.6% (range of −0.2% to 1.5%) based on national emissions projections for China, the USA, the EU, and India and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. Overall, the mean and trend in the five components of the global carbon budget are consistently estimated over the period 1959–2018, but discrepancies of up to 1 GtC yr⁻¹ persist for the representation of semi-decadal variability in CO2 fluxes. A detailed comparison among individual estimates and the introduction of a broad range of observations shows (1) no consensus in the mean and trend in land use change emissions over the last decade, (2) a persistent low agreement between the different methods on the magnitude of the land CO2 flux in the northern extra-tropics, and (3) an apparent underestimation of the CO2 variability by ocean models outside the tropics. This living data update documents changes in the methods and data sets used in this new global carbon budget and the progress in understanding of the global carbon cycle compared with previous publications of this data set (Le Quéré et al., 2018a, b, 2016, 2015a, b, 2014, 2013). The data generated by this work are available at https://doi.org/10.18160/gcp-2019 (Friedlingstein et al., 2019).
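For clarity, the budget imbalance quoted above follows from the bookkeeping identity defined in the abstract; plugging in the 2009–2018 decadal means reproduces the stated value:

$$ B_{\mathrm{IM}} = E_{\mathrm{FF}} + E_{\mathrm{LUC}} - \left( G_{\mathrm{ATM}} + S_{\mathrm{OCEAN}} + S_{\mathrm{LAND}} \right) = 9.5 + 1.5 - (4.9 + 2.5 + 3.2) = 0.4\ \mathrm{GtC\,yr^{-1}} $$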

981 citations


Journal ArticleDOI
TL;DR: The authors characterize the operational environment of asteroid Bennu, validate its photometric phase function, and demonstrate its accelerating rotation rate due to the YORP effect, using data acquired during the approach phase of the OSIRIS-REx mission.
Abstract: During its approach to asteroid (101955) Bennu, NASA’s Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer (OSIRIS-REx) spacecraft surveyed Bennu’s immediate environment, photometric properties, and rotation state. Discovery of a dusty environment, a natural satellite, or unexpected asteroid characteristics would have had consequences for the mission’s safety and observation strategy. Here we show that spacecraft observations during this period were highly sensitive to satellites (sub-meter scale) but reveal none, although later navigational images indicate that further investigation is needed. We constrain average dust production in September 2018 from Bennu’s surface to an upper limit of 150 g s⁻¹ averaged over 34 min. Bennu’s disk-integrated photometric phase function validates measurements from the pre-encounter astronomical campaign. We demonstrate that Bennu’s rotation rate is accelerating continuously at (3.63 ± 0.52) × 10⁻⁶ degrees day⁻², likely due to the Yarkovsky–O’Keefe–Radzievskii–Paddack (YORP) effect, with evolutionary implications.

905 citations


Journal ArticleDOI
TL;DR: Li et al. as mentioned in this paper studied three representative solid electrolytes with neutron depth profiling and identified high electronic conductivity as the root cause of lithium dendrite formation.
Abstract: Solid electrolytes (SEs) are widely considered as an ‘enabler’ of lithium anodes for high-energy batteries. However, recent reports demonstrate that the Li dendrite formation in Li7La3Zr2O12 (LLZO) and Li2S–P2S5 is actually much easier than that in liquid electrolytes of lithium batteries, by mechanisms that remain elusive. Here we illustrate the origin of the dendrite formation by monitoring the dynamic evolution of Li concentration profiles in three popular but representative SEs (LiPON, LLZO and amorphous Li3PS4) during lithium plating using time-resolved operando neutron depth profiling. Although no apparent changes in the lithium concentration in LiPON can be observed, we visualize the direct deposition of Li inside the bulk LLZO and Li3PS4. Our findings suggest the high electronic conductivity of LLZO and Li3PS4 is mostly responsible for dendrite formation in these SEs. Lowering the electronic conductivity, rather than further increasing the ionic conductivity of SEs, is therefore critical for the success of all-solid-state Li batteries. Despite its importance in lithium batteries, the mechanism of Li dendrite growth is not well understood. Here the authors study three representative solid electrolytes with neutron depth profiling and identify high electronic conductivity as the root cause for the dendrite issue.

901 citations


Proceedings Article
01 Jan 2019
TL;DR: In this paper, the authors eliminate the overhead cost of generating adversarial examples by recycling the gradient information already computed when updating model parameters, achieving robustness comparable to PGD adversarial training.
Abstract: Adversarial training, in which a network is trained on adversarial examples, is one of the few defenses against adversarial attacks that withstands strong attacks. Unfortunately, the high cost of generating strong adversarial examples makes standard adversarial training impractical on large-scale problems like ImageNet. We present an algorithm that eliminates the overhead cost of generating adversarial examples by recycling the gradient information computed when updating model parameters. Our "free" adversarial training algorithm achieves comparable robustness to PGD adversarial training on the CIFAR-10 and CIFAR-100 datasets at negligible additional cost compared to natural training, and can be 7 to 30 times faster than other strong adversarial training methods. Using a single workstation with 4 P100 GPUs and 2 days of runtime, we can train a robust model for the large-scale ImageNet classification task that maintains 40% accuracy against PGD attacks.
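The following is a minimal PyTorch sketch of the minibatch-replay ("free") scheme the abstract describes: each minibatch is replayed m times, and the single backward pass per replay supplies both the weight gradient and the input gradient used to update a persistent perturbation. The replay count m, step size, and ε here are illustrative assumptions, not the paper's settings.

```python
# Sketch of "free" adversarial training via minibatch replay.
import torch
import torch.nn as nn

def free_adversarial_train(model, loader, epochs=40, m=4, eps=8 / 255, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    delta = None  # perturbation persists across replays and minibatches
    for _ in range(epochs // m):  # replays keep the total pass count fixed
        for x, y in loader:
            if delta is None or delta.shape != x.shape:
                delta = torch.zeros_like(x)
            for _ in range(m):  # replay the same minibatch m times
                delta.requires_grad_(True)
                # assumes inputs are normalized to [0, 1]
                loss = loss_fn(model((x + delta).clamp(0, 1)), y)
                opt.zero_grad()
                loss.backward()          # one backward pass yields both...
                opt.step()               # ...the weight update (descent), and
                g = delta.grad.detach()  # ...the input gradient, recycled here
                delta = (delta.detach() + eps * g.sign()).clamp(-eps, eps)
    return model
```

Because the perturbation warm-starts from the previous replay, the attack strengthens over the m replays at essentially no cost beyond that of natural training.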

772 citations


Journal ArticleDOI
TL;DR: In this paper, the mass and radius of the isolated 205.53 Hz millisecond pulsar PSR J0030+0451 were estimated using a Bayesian inference approach to analyze its energy-dependent thermal X-ray waveform, which was observed using the Neutron Star Interior Composition Explorer (NICER).
Abstract: Neutron stars are not only of astrophysical interest, but are also of great interest to nuclear physicists because their attributes can be used to determine the properties of the dense matter in their cores. One of the most informative approaches for determining the equation of state (EoS) of this dense matter is to measure both a star’s equatorial circumferential radius R_e and its gravitational mass M. Here we report estimates of the mass and radius of the isolated 205.53 Hz millisecond pulsar PSR J0030+0451 obtained using a Bayesian inference approach to analyze its energy-dependent thermal X-ray waveform, which was observed using the Neutron Star Interior Composition Explorer (NICER). This approach is thought to be less subject to systematic errors than other approaches for estimating neutron star radii. We explored a variety of emission patterns on the stellar surface. Our best-fit model has three oval, uniform-temperature emitting spots and provides an excellent description of the pulse waveform observed using NICER. The radius and mass estimates given by this model are R_e = 13.02 (+1.24, −1.06) km and M = 1.44 (+0.15, −0.14) M⊙ (68%). The independent analysis reported in the companion paper by Riley et al. explores different emitting spot models, but finds spot shapes and locations and estimates of R_e and M that are consistent with those found in this work. We show that our measurements of R_e and M for PSR J0030+0451 improve the astrophysical constraints on the EoS of cold, catalyzed matter above nuclear saturation density.

758 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott2, T. D. Abbott, Fausto Acernese3, +1157 more; 70 institutions
TL;DR: In this paper, the authors improved initial estimates of the binary's properties, including component masses, spins, and tidal parameters, using the known source location, improved modeling, and recalibrated Virgo data.
Abstract: On August 17, 2017, the Advanced LIGO and Advanced Virgo gravitational-wave detectors observed a low-mass compact binary inspiral. The initial sky localization of the source of the gravitational-wave signal, GW170817, allowed electromagnetic observatories to identify NGC 4993 as the host galaxy. In this work, we improve initial estimates of the binary's properties, including component masses, spins, and tidal parameters, using the known source location, improved modeling, and recalibrated Virgo data. We extend the range of gravitational-wave frequencies considered down to 23 Hz, compared to 30 Hz in the initial analysis. We also compare results inferred using several signal models, which are more accurate and incorporate additional physical effects as compared to the initial analysis. We improve the localization of the gravitational-wave source to a 90% credible region of 16 deg^2. We find tighter constraints on the masses, spins, and tidal parameters, and continue to find no evidence for nonzero component spins. The component masses are inferred to lie between 1.00 and 1.89 M⊙ when allowing for large component spins, and to lie between 1.16 and 1.60 M⊙ (with a total mass 2.73 (+0.04, −0.01) M⊙) when the spins are restricted to be within the range observed in Galactic binary neutron stars. Using a precessing model and allowing for large component spins, we constrain the dimensionless spins of the components to be less than 0.50 for the primary and 0.61 for the secondary. Under minimal assumptions about the nature of the compact objects, our constraints for the tidal deformability parameter Λ are (0, 630) when we allow for large component spins, and 300 (+420, −230) (using a 90% highest posterior density interval) when restricting the magnitude of the component spins, ruling out several equation-of-state models at the 90% credible level. Finally, with LIGO and GEO600 data, we use a Bayesian analysis to place upper limits on the amplitude and spectral energy density of a possible postmerger signal.

715 citations


Journal ArticleDOI
24 May 2019-Science
TL;DR: By a process of complete delignification and densification of wood, a structural material with a mechanical strength of 404.3 megapascals is developed, more than eight times that of natural wood; its cellulose nanofibers backscatter solar radiation and emit strongly in the mid-infrared, resulting in continuous subambient cooling during both day and night.
Abstract: Reducing human reliance on energy-inefficient cooling methods such as air conditioning would have a large impact on the global energy landscape. By a process of complete delignification and densification of wood, we developed a structural material with a mechanical strength of 404.3 megapascals, more than eight times that of natural wood. The cellulose nanofibers in our engineered material backscatter solar radiation and emit strongly in mid-infrared wavelengths, resulting in continuous subambient cooling during both day and night. We model the potential impact of our cooling wood and find energy savings between 20 and 60%, which is most pronounced in hot and dry climates.

710 citations


Journal ArticleDOI
20 Mar 2019-Joule
TL;DR: In this article, a review of recent developments in photothermal materials, with a focus on their photothermal conversion mechanisms as light absorbers, is presented, and the potential applications of this attractive technology in a variety of energy and environmental fields are described.

690 citations


Journal ArticleDOI
29 Mar 2019-Science
TL;DR: A global, quantitative assessment of the amphibian chytridiomycosis panzootic demonstrates its role in the decline of at least 501 amphibian species over the past half-century; the panzootic represents the greatest recorded loss of biodiversity attributable to a disease.
Abstract: Anthropogenic trade and development have broken down dispersal barriers, facilitating the spread of diseases that threaten Earth's biodiversity. We present a global, quantitative assessment of the amphibian chytridiomycosis panzootic, one of the most impactful examples of disease spread, and demonstrate its role in the decline of at least 501 amphibian species over the past half-century, including 90 presumed extinctions. The effects of chytridiomycosis have been greatest in large-bodied, range-restricted anurans in wet climates in the Americas and Australia. Declines peaked in the 1980s, and only 12% of declined species show signs of recovery, whereas 39% are experiencing ongoing decline. There is risk of further chytridiomycosis outbreaks in new areas. The chytridiomycosis panzootic represents the greatest recorded loss of biodiversity attributable to a disease.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a comprehensive review of the thermal runaway phenomenon and related fire dynamics in single- and multi-cell battery packs, as well as potential fire prevention measures.

Journal ArticleDOI
TL;DR: The estimated US national MS prevalence for 2010 is the highest reported to date and provides evidence that the north-south gradient persists; the algorithm-based approach has the potential to be used for other chronic neurologic conditions.
Abstract: Objective To generate a national multiple sclerosis (MS) prevalence estimate for the United States by applying a validated algorithm to multiple administrative health claims (AHC) datasets. Methods A validated algorithm was applied to private, military, and public AHC datasets to identify adult cases of MS between 2008 and 2010. In each dataset, we determined the 3-year cumulative prevalence overall and stratified by age, sex, and census region. We applied insurance-specific and stratum-specific estimates to the 2010 US Census data and pooled the findings to calculate the 2010 prevalence of MS in the United States cumulated over 3 years. We also estimated the 2010 prevalence cumulated over 10 years using 2 models and extrapolated our estimate to 2017. Results The estimated 2010 prevalence of MS in the US adult population cumulated over 10 years was 309.2 per 100,000 (95% confidence interval [CI] 308.1–310.1), representing 727,344 cases. During the same time period, the MS prevalence was 450.1 per 100,000 (95% CI 448.1–451.6) for women and 159.7 (95% CI 158.7–160.6) for men (female:male ratio 2.8). The estimated 2010 prevalence of MS was highest in the 55- to 64-year age group. A US north-south decreasing prevalence gradient was identified. The estimated MS prevalence is also presented for 2017. Conclusion The estimated US national MS prevalence for 2010 is the highest reported to date and provides evidence that the north-south gradient persists. Our rigorous algorithm-based approach to estimating prevalence is efficient and has the potential to be used for other chronic neurologic conditions.
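As a consistency check (our back-calculation, not a figure reported by the authors), the quoted 10-year prevalence and case count imply the size of the underlying 2010 US adult population:

$$ N_{\mathrm{adults}} \approx 727{,}344 \times \frac{100{,}000}{309.2} \approx 2.35 \times 10^{8} $$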

Journal ArticleDOI
29 Apr 2019-eLife
TL;DR: The goal is to facilitate a more accurate use of the stop-signal task and provide user-friendly open-source resources intended to inform statistical-power considerations, facilitate the correct implementation of the task, and assist in proper data analysis.
Abstract: Response inhibition is essential for navigating everyday life. Its derailment is considered integral to numerous neurological and psychiatric disorders, and more generally, to a wide range of behavioral and health problems. Response-inhibition efficiency furthermore correlates with treatment outcome in some of these conditions. The stop-signal task is an essential tool to determine how quickly response inhibition is implemented. Despite its apparent simplicity, there are many features (ranging from task design to data analysis) that vary across studies in ways that can easily compromise the validity of the obtained results. Our goal is to facilitate a more accurate use of the stop-signal task. To this end, we provide 12 easy-to-implement consensus recommendations and point out the problems that can arise when they are not followed. Furthermore, we provide user-friendly open-source resources intended to inform statistical-power considerations, facilitate the correct implementation of the task, and assist in proper data analysis.
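To make the analysis concrete, here is a hedged NumPy sketch of one widely used stop-signal reaction time (SSRT) estimator, the integration method with replacement of go omissions; it is an illustration under our own variable names, not the authors' released open-source toolbox.

```python
# Sketch: SSRT via the integration method with replacement of go omissions.
import numpy as np

def ssrt_integration(go_rt, n_go_omissions, stop_ssd, stop_responded):
    """go_rt: RTs (ms) on go trials with a response; n_go_omissions: count of
    go trials without a response; stop_ssd: stop-signal delay per stop trial;
    stop_responded: bool per stop trial (True = failed to inhibit)."""
    p_respond = float(np.mean(stop_responded))
    # Replace omitted go trials with the maximum observed RT, then take the
    # p_respond quantile (nth RT) of the full go-RT distribution.
    rts = np.sort(np.concatenate(
        [go_rt, np.full(n_go_omissions, np.max(go_rt))]))
    idx = int(np.ceil(p_respond * len(rts))) - 1
    nth_rt = rts[min(max(idx, 0), len(rts) - 1)]
    return nth_rt - float(np.mean(stop_ssd))

rng = np.random.default_rng(0)
go = rng.normal(500, 100, 200).clip(150)     # simulated go RTs in ms
ssd = rng.choice([150, 200, 250, 300], 60)   # staircase-like SSDs
resp = rng.random(60) < 0.5                  # ~50% stop failures
print(round(ssrt_integration(go, 5, ssd, resp), 1), "ms (illustrative)")
```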

Journal ArticleDOI
TL;DR: In this article, the mass and radius of the isolated 205.53 Hz millisecond pulsar PSR J0030+0451 were estimated using a Bayesian inference approach to analyze its energy-dependent thermal X-ray waveform.
Abstract: Neutron stars are not only of astrophysical interest, but are also of great interest to nuclear physicists, because their attributes can be used to determine the properties of the dense matter in their cores. One of the most informative approaches for determining the equation of state of this dense matter is to measure both a star's equatorial circumferential radius $R_e$ and its gravitational mass $M$. Here we report estimates of the mass and radius of the isolated 205.53 Hz millisecond pulsar PSR J0030+0451 obtained using a Bayesian inference approach to analyze its energy-dependent thermal X-ray waveform, which was observed using the Neutron Star Interior Composition Explorer (NICER). This approach is thought to be less subject to systematic errors than other approaches for estimating neutron star radii. We explored a variety of emission patterns on the stellar surface. Our best-fit model has three oval, uniform-temperature emitting spots and provides an excellent description of the pulse waveform observed using NICER. The radius and mass estimates given by this model are $R_e = 13.02^{+1.24}_{-1.06}$ km and $M = 1.44^{+0.15}_{-0.14}\ M_\odot$ (68%). The independent analysis reported in the companion paper by Riley et al. (2019) explores different emitting spot models, but finds spot shapes and locations and estimates of $R_e$ and $M$ that are consistent with those found in this work. We show that our measurements of $R_e$ and $M$ for PSR J0030$+$0451 improve the astrophysical constraints on the equation of state of cold, catalyzed matter above nuclear saturation density.

Proceedings ArticleDOI
01 Jan 2019
TL;DR: This article proposes a gradient-guided search over tokens which finds short trigger sequences (e.g., one word for classification and four words for language modeling) that successfully trigger the target prediction.
Abstract: Adversarial examples highlight model vulnerabilities and are useful for evaluation and interpretation. We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset. We propose a gradient-guided search over tokens which finds short trigger sequences (e.g., one word for classification and four words for language modeling) that successfully trigger the target prediction. For example, triggers cause SNLI entailment accuracy to drop from 89.94% to 0.55%, 72% of “why” questions in SQuAD to be answered “to kill american people”, and the GPT-2 language model to spew racist output even when conditioned on non-racial contexts. Furthermore, although the triggers are optimized using white-box access to a specific model, they transfer to other models for all tasks we consider. Finally, since triggers are input-agnostic, they provide an analysis of global model behavior. For instance, they confirm that SNLI models exploit dataset biases and help to diagnose heuristics learned by reading comprehension models.
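A minimal sketch of the first-order, gradient-guided token scoring that drives the search described above (a HotFlip-style linear approximation): candidate replacements at each trigger position are ranked by how much swapping them in is estimated to change the loss, and a real attack would then re-check the top candidates with forward passes. The tensor names and the surrounding training loop are assumptions.

```python
# Sketch: rank replacement tokens by the first-order loss-change estimate
# (e_candidate - e_current) . d(loss)/d(embedding).
import torch

def propose_trigger_tokens(embedding_matrix, trigger_embeds, trigger_grads, k=5):
    """embedding_matrix: [V, d]; trigger_embeds and trigger_grads: [T, d],
    where T is the trigger length and grads come from one backward pass."""
    scores = trigger_grads @ embedding_matrix.T \
        - (trigger_grads * trigger_embeds).sum(-1, keepdim=True)  # [T, V]
    return scores.topk(k, dim=-1).indices  # top-k candidate ids per position

V, d, T = 1000, 64, 3                    # toy vocabulary and trigger length
E = torch.randn(V, d)                    # stand-in embedding table
current_ids = torch.randint(0, V, (T,))
grads = torch.randn(T, d)                # stand-in gradients w.r.t. embeddings
print(propose_trigger_tokens(E, E[current_ids], grads).shape)  # (3, 5)
```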

Journal ArticleDOI
TL;DR: A self-regenerating solar evaporator featuring excellent antifouling properties using a rationally designed artificial channel-array in a natural wood substrate is reported, exhibiting the highest efficiency in a highly concentrated salt solution under 1 sun irradiation, as well as long-term stability.
Abstract: Emerging solar desalination by interfacial evaporation shows great potential in response to global water scarcity because of its high solar-to-vapor efficiency, low environmental impact, and off-grid capability. However, solute accumulation at the heating interface has severely impacted the performance and long-term stability of current solar evaporation systems. Here, a self-regenerating solar evaporator featuring excellent antifouling properties using a rationally designed artificial channel-array in a natural wood substrate is reported. Upon solar evaporation, salt concentration gradients are formed between the millimeter-sized drilled channels (with a low salt concentration) and the microsized natural wood channels (with a high salt concentration) due to their different hydraulic conductivities. The concentration gradients allow spontaneous interchannel salt exchange through the 1-2 µm pits, leading to the dilution of salt in the microsized wood channels. The drilled channels with high hydraulic conductivities thus function as salt-rejection pathways, which can rapidly exchange the salt with the bulk solution, enabling the real-time self-regeneration of the evaporator. Compared to other salt-rejection designs, the solar evaporator exhibits the highest efficiency (≈75%) in a highly concentrated salt solution (20 wt% NaCl) under 1 sun irradiation, as well as long-term stability (over 100 h of continuous operation).

Journal ArticleDOI
TL;DR: The Third Pole (TP) is experiencing rapid warming and is currently in its warmest period in the past 2,000 years as mentioned in this paper, which reviews the latest developments in multidisciplinary TP research.
Abstract: The Third Pole (TP) is experiencing rapid warming and is currently in its warmest period in the past 2,000 years. This paper reviews the latest development in multidisciplinary TP research ...

Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4, +1491 more; 239 institutions
TL;DR: In this article, the authors present the second volume of the Future Circular Collider Conceptual Design Report, devoted to the electron-positron collider FCC-ee, and present the accelerator design, performance reach, a staged operation scenario, the underlying technologies, civil engineering, technical infrastructure, and an implementation plan.
Abstract: In response to the 2013 Update of the European Strategy for Particle Physics, the Future Circular Collider (FCC) study was launched, as an international collaboration hosted by CERN. This study covers a highest-luminosity high-energy lepton collider (FCC-ee) and an energy-frontier hadron collider (FCC-hh), which could, successively, be installed in the same 100 km tunnel. The scientific capabilities of the integrated FCC programme would serve the worldwide community throughout the 21st century. The FCC study also investigates an LHC energy upgrade, using FCC-hh technology. This document constitutes the second volume of the FCC Conceptual Design Report, devoted to the electron-positron collider FCC-ee. After summarizing the physics discovery opportunities, it presents the accelerator design, performance reach, a staged operation scenario, the underlying technologies, civil engineering, technical infrastructure, and an implementation plan. FCC-ee can be built with today’s technology. Most of the FCC-ee infrastructure could be reused for FCC-hh. Combining concepts from past and present lepton colliders and adding a few novel elements, the FCC-ee design promises outstandingly high luminosity. This will make the FCC-ee a unique precision instrument to study the heaviest known particles (Z, W and H bosons and the top quark), offering great direct and indirect sensitivity to new physics.

Journal ArticleDOI
TL;DR: In this paper, the main existing safety and reliability challenges in hydrogen systems are reviewed, the current state-of-the-art in safety analysis for hydrogen storage and delivery technologies is discussed, and recommendations are offered to provide a foundation for future risk and reliability analyses that support safe, reliable operation.

Journal ArticleDOI
TL;DR: The results suggest endemicity of monkeypox virus in Nigeria, with some evidence of human-to-human transmission, and further studies are necessary to explore animal reservoirs and risk factors for transmission of the virus.
Abstract: Summary Background In September, 2017, human monkeypox re-emerged in Nigeria, 39 years after the last reported case. We aimed to describe the clinical and epidemiological features of the 2017–18 human monkeypox outbreak in Nigeria. Methods We reviewed the epidemiological and clinical characteristics of cases of human monkeypox that occurred between Sept 22, 2017, and Sept 16, 2018. Data were collected with a standardised case investigation form, with a case definition of human monkeypox that was based on previously established guidelines. Diagnosis was confirmed by viral identification with real-time PCR and by detection of positive anti-orthopoxvirus IgM antibodies. Whole-genome sequencing was done for seven cases. Haplotype analysis results, genetic distance data, and epidemiological data were used to infer a likely series of events for potential human-to-human transmission of the west African clade of monkeypox virus. Findings 122 confirmed or probable cases of human monkeypox were recorded in 17 states, including seven deaths (case fatality rate 6%). People infected with monkeypox virus were aged between 2 days and 50 years (median 29 years [IQR 14]), and 84 (69%) were male. All 122 patients had vesiculopustular rash, and fever, pruritus, headache, and lymphadenopathy were also common. The rash affected all parts of the body, with the face being most affected. The distribution of cases and contacts suggested both primary zoonotic and secondary human-to-human transmission. Two cases of health-care-associated infection were recorded. Genomic analysis suggested multiple introductions of the virus and a single introduction along with human-to-human transmission in a prison facility. Interpretation This study describes the largest documented human outbreak of the west African clade of the monkeypox virus. Our results suggest endemicity of monkeypox virus in Nigeria, with some evidence of human-to-human transmission. Further studies are necessary to explore animal reservoirs and risk factors for transmission of the virus in Nigeria. Funding None.

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Sheelu Abraham3, +1215 more; 134 institutions
TL;DR: In this paper, the mass, spin, and redshift distributions of binary black hole (BBH) mergers observed by Advanced LIGO and Advanced Virgo were analyzed using phenomenological population models.
Abstract: We present results on the mass, spin, and redshift distributions with phenomenological population models using the 10 binary black hole (BBH) mergers detected in the first and second observing runs completed by Advanced LIGO and Advanced Virgo. We constrain properties of the BBH mass spectrum using models with a range of parameterizations of the BBH mass and spin distributions. We find that the mass distribution of the more massive BH in such binaries is well approximated by models with no more than 1% of BHs more massive than 45 M⊙ and a power-law index of (90% credibility). We also show that BBHs are unlikely to be composed of BHs with large spins aligned to the orbital angular momentum. Modeling the evolution of the BBH merger rate with redshift, we show that it is flat or increasing with redshift with 93% probability. Marginalizing over uncertainties in the BBH population, we find robust estimates of the BBH merger rate density of R = (90% credibility). As the BBH catalog grows in future observing runs, we expect that uncertainties in the population model parameters will shrink, potentially providing insights into the formation of BHs via supernovae, binary interactions of massive stars, stellar cluster dynamics, and the formation history of BHs across cosmic time.

Journal ArticleDOI
TL;DR: The Multi-Source Weighted Ensemble Precipitation (MSWEP) dataset as discussed by the authors is a gridded precipitation P dataset spanning 1979-2017, which is unique in several aspects: i) full global co...
Abstract: We present Multi-Source Weighted-Ensemble Precipitation, version 2 (MSWEP V2), a gridded precipitation P dataset spanning 1979–2017. MSWEP V2 is unique in several aspects: i) full global co...

Journal ArticleDOI
TL;DR: The Zwicky Transient Facility (ZTF) as mentioned in this paper is a robotic time-domain survey currently in progress using the Palomar 48-inch Schmidt Telescope, which uses a 600 megapixel camera to scan the entire northern visible sky at rates of ~3760 square degrees/hour.
Abstract: The Zwicky Transient Facility (ZTF) is a new robotic time-domain survey currently in progress using the Palomar 48-inch Schmidt Telescope. ZTF uses a 47 square degree field with a 600 megapixel camera to scan the entire northern visible sky at rates of ~3760 square degrees/hour to median depths of g ~ 20.8 and r ~ 20.6 mag (AB, 5σ in 30 sec). We describe the Science Data System that is housed at IPAC, Caltech. This comprises the data-processing pipelines, alert production system, data archive, and user interfaces for accessing and analyzing the products. The real-time pipeline employs a novel image-differencing algorithm, optimized for the detection of point-source transient events. These events are vetted for reliability using a machine-learned classifier and combined with contextual information to generate data-rich alert packets. The packets become available for distribution typically within 13 minutes (95th percentile) of observation. Detected events are also linked to generate candidate moving-object tracks using a novel algorithm. Objects that move fast enough to streak in the individual exposures are also extracted and vetted. We present some preliminary results of the calibration performance delivered by the real-time pipeline. The reconstructed astrometric accuracy per science image with respect to Gaia DR1 is typically 45 to 85 milliarcsec. This is the RMS per-axis on the sky for sources extracted with photometric S/N ≥ 10 and hence corresponds to the typical astrometric uncertainty down to this limit. The derived photometric precision (repeatability) at bright unsaturated fluxes varies between 8 and 25 millimag. The high end of these ranges corresponds to an airmass approaching ~2—the limit of the public survey. Photometric calibration accuracy with respect to Pan-STARRS1 is generally better than 2%. The products support a broad range of scientific applications: fast and young supernovae; rare flux transients; variable stars; eclipsing binaries; variability from active galactic nuclei; counterparts to gravitational wave sources; a more complete census of Type Ia supernovae; and solar-system objects.
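The quoted survey speed is consistent with the numbers above: a 47 deg² field visited roughly every 45 s (a 30 s exposure plus the 8 s readout reported in the instrument paper earlier in this listing, and an assumed ~7 s of slew and settling) gives

$$ \frac{47\ \mathrm{deg^2}}{45\ \mathrm{s}} \times 3600\ \mathrm{s\,hr^{-1}} \approx 3760\ \mathrm{deg^2\,hr^{-1}}, $$

matching the ~3760 square degrees/hour stated above.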

Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam1, Federico Ambrogi1, +2265 more; 153 institutions
TL;DR: Combined measurements of the production and decay rates of the Higgs boson, as well as its couplings to vector bosons and fermions, are presented and constraints are placed on various two Higgs doublet models.
Abstract: Combined measurements of the production and decay rates of the Higgs boson, as well as its couplings to vector bosons and fermions, are presented. The analysis uses the LHC proton–proton collision data set recorded with the CMS detector in 2016 at $\sqrt{s}=13\,\text{TeV}$, corresponding to an integrated luminosity of $35.9\,\text{fb}^{-1}$. The combination is based on analyses targeting the five main Higgs boson production mechanisms (gluon fusion, vector boson fusion, and associated production with a $\mathrm{W}$ or $\mathrm{Z}$ boson, or a top quark-antiquark pair) and the following decay modes: $\mathrm{H}\rightarrow\gamma\gamma$, $\mathrm{Z}\mathrm{Z}$, $\mathrm{W}\mathrm{W}$, $\tau\tau$, $\mathrm{b}\bar{\mathrm{b}}$, and $\mu\mu$. Searches for invisible Higgs boson decays are also considered. The best-fit ratio of the signal yield to the standard model expectation is measured to be $\mu = 1.17 \pm 0.10$, assuming a Higgs boson mass of $125.09\,\text{GeV}$. Additional results are given for various assumptions on the scaling behavior of the production and decay modes, including generic parametrizations based on ratios of cross sections and branching fractions or couplings. The results are compatible with the standard model predictions in all parametrizations considered. In addition, constraints are placed on various two Higgs doublet models.

Journal ArticleDOI
Roel Aaij, C. Abellán Beteta1, Bernardo Adeva2, Marco Adinolfi3, +858 more; 57 institutions
TL;DR: This is the most precise measurement of R_K to date and is compatible with the standard model at the level of 2.5 standard deviations.
Abstract: A measurement of the ratio of branching fractions of the decays B⁺ → K⁺μ⁺μ⁻ and B⁺ → K⁺e⁺e⁻ is presented. The proton-proton collision data used correspond to an integrated luminosity of 5.0 fb⁻¹ recorded with the LHCb experiment at center-of-mass energies of 7, 8, and 13 TeV. For the dilepton mass-squared range 1.1 < q² < 6.0 GeV²/c⁴ the ratio of branching fractions is measured to be R_K = 0.846 (+0.060, −0.054) (+0.016, −0.014), where the first uncertainty is statistical and the second systematic. This is the most precise measurement of R_K to date and is compatible with the standard model at the level of 2.5 standard deviations.
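For reference, the observable measured above is the ratio of branching fractions

$$ R_K = \frac{\mathcal{B}(B^+ \to K^+ \mu^+ \mu^-)}{\mathcal{B}(B^+ \to K^+ e^+ e^-)}, $$

whose standard model prediction is very close to unity, which is why the result is quoted in standard deviations from the standard model.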

Journal ArticleDOI
TL;DR: In this paper, the authors tame the affinity between solvents and Li ions by dissolving fluorinated electrolytes into highly fluorinated non-polar solvents, enabling batteries that can operate over a wide temperature range (−125 to +70 °C).
Abstract: Carbonate electrolytes are commonly used in commercial non-aqueous Li-ion batteries. However, the high affinity between the solvents and the ions and the high flammability of the carbonate electrolytes limit the battery operation temperature window to −20 to +50 °C and the voltage window to 0.0 to 4.3 V. Here, we tame the affinity between solvents and Li ions by dissolving fluorinated electrolytes into highly fluorinated non-polar solvents. In addition to their non-flammable characteristic, our electrolytes enable high electrochemical stability in a wide voltage window of 0.0 to 5.6 V, and high ionic conductivities in a wide temperature range from −125 to +70 °C. We show that between −95 and +70 °C, the electrolytes enable LiNi0.8Co0.15Al0.05O2 cathodes to achieve high Coulombic efficiencies of >99.9%, and the aggressive Li anodes and the high-voltage (5.4 V) LiCoMnO4 to achieve Coulombic efficiencies of >99.4% and 99%, respectively. Even at −85 °C, the LiNi0.8Co0.15Al0.05O2 || Li battery can still deliver ~50% of its room-temperature capacity. Batteries generally do not perform well at extreme temperatures, and electrolytes are mainly to blame. Here, the authors dissolve fluorinated electrolytes in highly fluorinated non-polar solvents, enabling batteries that can operate at a wide temperature range (−125 to +70 °C).


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Fausto Acernese3, +1237 more; 131 institutions
TL;DR: In this paper, the authors place constraints on the dipole radiation and possible deviations from GR in the post-Newtonian coefficients that govern the inspiral regime of a binary neutron star inspiral.
Abstract: The recent discovery by Advanced LIGO and Advanced Virgo of a gravitational wave signal from a binary neutron star inspiral has enabled tests of general relativity (GR) with this new type of source. This source, for the first time, permits tests of strong-field dynamics of compact binaries in the presence of matter. In this Letter, we place constraints on the dipole radiation and possible deviations from GR in the post-Newtonian coefficients that govern the inspiral regime. Bounds on modified dispersion of gravitational waves are obtained; in combination with information from the observed electromagnetic counterpart we can also constrain effects due to large extra dimensions. Finally, the polarization content of the gravitational wave signal is studied. The results of all tests performed here show good agreement with GR.

Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4, +1496 more; 238 institutions
TL;DR: In this paper, the authors describe the detailed design and preparation of a construction project for a post-LHC circular energy frontier collider in collaboration with national institutes, laboratories and universities worldwide, and enhanced by a strong participation of industrial partners.
Abstract: Particle physics has arrived at an important moment of its history. The discovery of the Higgs boson, with a mass of 125 GeV, completes the matrix of particles and interactions that has constituted the “Standard Model” for several decades. This model is a consistent and predictive theory, which has so far proven successful at describing all phenomena accessible to collider experiments. However, several experimental facts do require the extension of the Standard Model and explanations are needed for observations such as the abundance of matter over antimatter, the striking evidence for dark matter and the non-zero neutrino masses. Theoretical issues such as the hierarchy problem, and, more in general, the dynamical origin of the Higgs mechanism, do likewise point to the existence of physics beyond the Standard Model. This report contains the description of a novel research infrastructure based on a highest-energy hadron collider with a centre-of-mass collision energy of 100 TeV and an integrated luminosity of at least a factor of 5 larger than the HL-LHC. It will extend the current energy frontier by almost an order of magnitude. The mass reach for direct discovery will reach several tens of TeV, and allow, for example, to produce new particles whose existence could be indirectly exposed by precision measurements during the earlier preceding e+e– collider phase. This collider will also precisely measure the Higgs self-coupling and thoroughly explore the dynamics of electroweak symmetry breaking at the TeV scale, to elucidate the nature of the electroweak phase transition. WIMPs as thermal dark matter candidates will be discovered, or ruled out. As a single project, this particle collider infrastructure will serve the world-wide physics community for about 25 years and, in combination with a lepton collider (see FCC conceptual design report volume 2), will provide a research tool until the end of the 21st century. Collision energies beyond 100 TeV can be considered when using high-temperature superconductors. The European Strategy for Particle Physics (ESPP) update 2013 stated “To stay at the forefront of particle physics, Europe needs to be in a position to propose an ambitious post-LHC accelerator project at CERN by the time of the next Strategy update”. The FCC study has implemented the ESPP recommendation by developing a long-term vision for an “accelerator project in a global context”. This document describes the detailed design and preparation of a construction project for a post-LHC circular energy frontier collider “in collaboration with national institutes, laboratories and universities worldwide”, and enhanced by a strong participation of industrial partners. Now, a coordinated preparation effort can be based on a core of an ever-growing consortium of already more than 135 institutes worldwide. The technology for constructing a high-energy circular hadron collider can be brought to the technology readiness level required for constructing within the coming ten years through a focused R&D programme. 
The FCC-hh concept comprises in the baseline scenario a power-saving, low-temperature superconducting magnet system based on an evolution of the Nb3Sn technology pioneered at the HL-LHC, an energy-efficient cryogenic refrigeration infrastructure based on a neon-helium (Nelium) light gas mixture, a high-reliability and low loss cryogen distribution infrastructure based on Invar, high-power distributed beam transfer using superconducting elements and local magnet energy recovery and re-use technologies that are already gradually introduced at other CERN accelerators. On a longer timescale, high-temperature superconductors can be developed together with industrial partners to achieve an even more energy efficient particle collider or to reach even higher collision energies.The re-use of the LHC and its injector chain, which also serve for a concurrently running physics programme, is an essential lever to come to an overall sustainable research infrastructure at the energy frontier. Strategic R&D for FCC-hh aims at minimising construction cost and energy consumption, while maximising the socio-economic impact. It will mitigate technology-related risks and ensure that industry can benefit from an acceptable utility. Concerning the implementation, a preparatory phase of about eight years is both necessary and adequate to establish the project governance and organisation structures, to build the international machine and experiment consortia, to develop a territorial implantation plan in agreement with the host-states’ requirements, to optimise the disposal of land and underground volumes, and to prepare the civil engineering project. Such a large-scale, international fundamental research infrastructure, tightly involving industrial partners and providing training at all education levels, will be a strong motor of economic and societal development in all participating nations. The FCC study has implemented a set of actions towards a coherent vision for the world-wide high-energy and particle physics community, providing a collaborative framework for topically complementary and geographically well-balanced contributions. This conceptual design report lays the foundation for a subsequent infrastructure preparatory and technical design phase.