scispace - formally typeset

Showing papers from "University of California, Santa Barbara" published in 2017


Journal ArticleDOI
TL;DR: By identifying and synthesizing dispersed data on production, use, and end-of-life management of polymer resins, synthetic fibers, and additives, this work presents the first global analysis of all mass-produced plastics ever manufactured.
Abstract: Plastics have outgrown most man-made materials and have long been under environmental scrutiny. However, robust global information, particularly about their end-of-life fate, is lacking. By identifying and synthesizing dispersed data on production, use, and end-of-life management of polymer resins, synthetic fibers, and additives, we present the first global analysis of all mass-produced plastics ever manufactured. We estimate that 8300 million metric tons (Mt) of virgin plastics have been produced to date. As of 2015, approximately 6300 Mt of plastic waste had been generated, of which around 9% had been recycled, 12% incinerated, and 79% accumulated in landfills or the natural environment. If current production and waste management trends continue, roughly 12,000 Mt of plastic waste will be in landfills or in the natural environment by 2050.
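The abstract's mass balance can be checked with a few lines of arithmetic. The sketch below simply redistributes the reported 6300 Mt of cumulative waste across the three end-of-life fates; the total and the percentages come from the abstract itself, and everything else is illustrative.

```python
# Mass-balance check of the cumulative plastic-waste figures quoted above.
# The 6300 Mt total and the three percentages are taken from the abstract.

waste_total_mt = 6300  # cumulative plastic waste generated as of 2015, in Mt

fractions = {"recycled": 0.09, "incinerated": 0.12, "discarded": 0.79}
assert abs(sum(fractions.values()) - 1.0) < 1e-9  # the three fates are exhaustive

fate_mt = {fate: waste_total_mt * share for fate, share in fractions.items()}
print(fate_mt)  # {'recycled': 567.0, 'incinerated': 756.0, 'discarded': 4977.0}
```

The ~4977 Mt "discarded" figure is what the abstract rounds to "79% was accumulated in landfills or the natural environment."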

7,707 citations


Journal ArticleDOI
TL;DR: A binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors.
Abstract: On 2017 August 17 a binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors. The Fermi Gamma-ray Burst Monitor independently detected a gamma-ray burst (GRB 170817A) with a time delay of ~1.7 s with respect to the merger time. From the gravitational-wave signal, the source was initially localized to a sky region of 31 deg² at a luminosity distance of 40±8 Mpc and with component masses consistent with neutron stars. The component masses were later measured to be in the range 0.86 to 2.26 M⊙. An extensive observing campaign was launched across the electromagnetic spectrum, leading to the discovery of a bright optical transient (SSS17a, now with the IAU identification of AT 2017gfo) in NGC 4993 (at ~40 Mpc) less than 11 hours after the merger by the One-Meter, Two Hemisphere (1M2H) team using the 1 m Swope Telescope. The optical transient was independently detected by multiple teams within an hour. Subsequent observations targeted the object and its environment. Early ultraviolet observations revealed a blue transient that faded within 48 hours. Optical and infrared observations showed a redward evolution over ~10 days. Following early non-detections, X-ray and radio emission were discovered at the transient's position ~9 and ~16 days, respectively, after the merger. Both the X-ray and radio emission likely arise from a physical process that is distinct from the one that generates the UV/optical/near-infrared emission. No ultra-high-energy gamma-rays and no neutrino candidates consistent with the source were found in follow-up searches.
These observations support the hypothesis that GW170817 was produced by the merger of two neutron stars in NGC 4993 followed by a short gamma-ray burst (GRB 170817A) and a kilonova/macronova powered by the radioactive decay of r-process nuclei synthesized in the ejecta.

2,746 citations


Journal ArticleDOI
D. S. Akerib1, S. Alsum2, Henrique Araujo3, X. Bai4, A. J. Bailey3, J. Balajthy5, P. Beltrame, Ethan Bernard6, A. Bernstein7, T. P. Biesiadzinski1, E. M. Boulton6, R. Bramante1, P. Brás8, D. Byram9, Sidney Cahn10, M. C. Carmona-Benitez11, C. Chan12, A.A. Chiller9, C. Chiller9, A. Currie3, J. E. Cutter13, T. J. R. Davison, A. Dobi14, J. E. Y. Dobson15, E. Druszkiewicz16, B. N. Edwards10, C. H. Faham14, S. Fiorucci12, R. J. Gaitskell12, V. M. Gehman14, C. Ghag15, K.R. Gibson1, M. G. D. Gilchriese14, C. R. Hall5, M. Hanhardt4, S. J. Haselschwardt11, S. A. Hertel6, D. P. Hogan6, M. Horn6, D. Q. Huang12, C. M. Ignarra17, M. Ihm6, R.G. Jacobsen6, W. Ji1, K. Kamdin6, K. Kazkaz7, D. Khaitan16, R. Knoche5, N.A. Larsen10, C. Lee1, B. G. Lenardo7, K. T. Lesko14, A. Lindote8, M.I. Lopes8, A. Manalaysay13, R. L. Mannino18, M. F. Marzioni, Daniel McKinsey6, D. M. Mei9, J. Mock19, M. Moongweluwan16, J. A. Morad13, A. St. J. Murphy20, C. Nehrkorn11, H. N. Nelson11, F. Neves8, K. O’Sullivan6, K. C. Oliver-Mallory6, K. J. Palladino17, E. K. Pease6, P. Phelps1, L. Reichhart15, C. Rhyne12, S. Shaw15, T. A. Shutt1, C. Silva8, M. Solmaz11, V. N. Solovov8, P. Sorensen14, S. Stephenson13, T. J. Sumner3, Matthew Szydagis19, D. J. Taylor, W. C. Taylor12, B. P. Tennyson10, P. A. Terman18, D. R. Tiedt4, W. H. To1, Mani Tripathi13, L. Tvrznikova6, S. Uvarov13, J.R. Verbus12, R. C. Webb18, J. T. White18, T. J. Whitis1, M. S. Witherell14, F.L.H. Wolfs16, Jilei Xu7, K. Yazdani3, Sarah Young19, Chao Zhang9 
TL;DR: Constraints on spin-independent weakly interacting massive particle (WIMP)-nucleon scattering using a 3.35×10^4 kg·day exposure of the Large Underground Xenon experiment are reported; the search yields no evidence of WIMP nuclear recoils.
Abstract: We report constraints on spin-independent weakly interacting massive particle (WIMP)-nucleon scattering using a 3.35×10^4 kg·day exposure of the Large Underground Xenon (LUX) experiment. A dual-phase xenon time projection chamber with 250 kg of active mass is operated at the Sanford Underground Research Facility in Lead, South Dakota (USA). With roughly fourfold improvement in sensitivity for high WIMP masses relative to our previous results, this search yields no evidence of WIMP nuclear recoils. At a WIMP mass of 50 GeV c^(-2), WIMP-nucleon spin-independent cross sections above 2.2×10^(-46) cm^2 are excluded at the 90% confidence level. When combined with the previously reported LUX exposure, this exclusion strengthens to 1.1×10^(-46) cm^2 at 50 GeV c^(-2).

1,844 citations


Journal ArticleDOI
20 Sep 2017-Nature
TL;DR: The approach to metal-based additive manufacturing is applicable to a wide range of alloys and can be implemented using a range of additive machines, and provides a foundation for broad industrial applicability, including where electron-beam melting or directed-energy-deposition techniques are used instead of selective laser melting.
Abstract: Metal-based additive manufacturing, or three-dimensional (3D) printing, is a potentially disruptive technology across multiple industries, including the aerospace, biomedical and automotive industries. Building up metal components layer by layer increases design freedom and manufacturing flexibility, thereby enabling complex geometries, increased product customization and shorter time to market, while eliminating traditional economy-of-scale constraints. However, currently only a few alloys, the most relevant being AlSi10Mg, TiAl6V4, CoCr and Inconel 718, can be reliably printed; the vast majority of the more than 5,500 alloys in use today cannot be additively manufactured because the melting and solidification dynamics during the printing process lead to intolerable microstructures with large columnar grains and periodic cracks. Here we demonstrate that these issues can be resolved by introducing nanoparticles of nucleants that control solidification during additive manufacturing. We selected the nucleants on the basis of crystallographic information and assembled them onto 7075 and 6061 series aluminium alloy powders. After functionalization with the nucleants, we found that these high-strength aluminium alloys, which were previously incompatible with additive manufacturing, could be processed successfully using selective laser melting. Crack-free, equiaxed (that is, with grains roughly equal in length, width and height), fine-grained microstructures were achieved, resulting in material strengths comparable to that of wrought material. Our approach to metal-based additive manufacturing is applicable to a wide range of alloys and can be implemented using a range of additive machines. 
It thus provides a foundation for broad industrial applicability, including where electron-beam melting or directed-energy-deposition techniques are used instead of selective laser melting, and will enable additive manufacturing of other alloy systems, such as non-weldable nickel superalloys and intermetallics. Furthermore, this technology could be used in conventional processing such as in joining, casting and injection moulding, in which solidification cracking and hot tearing are also common issues.

1,670 citations


Journal ArticleDOI
TL;DR: This Perspective looks at how microbial anabolism and the soil microbial carbon pump control microbial necromass accumulation and stabilization; the ‘entombing effect’.
Abstract: This Perspective looks at how microbial anabolism and the soil microbial carbon pump control microbial necromass accumulation and stabilization; the ‘entombing effect’.

1,042 citations


Journal ArticleDOI
TL;DR: This review summarizes the development of theories over the past four decades pertinent to SERS, especially those contributing to the current understanding of surface-plasmon (SP) resonances in the nanostructured conductor.
Abstract: Surface-enhanced Raman spectroscopy (SERS) and related spectroscopies are powered primarily by the concentration of the electromagnetic (EM) fields associated with light in or near appropriately nanostructured electrically-conducting materials, most prominently, but not exclusively, high-conductivity metals such as silver and gold. This field concentration takes place on account of the excitation of surface-plasmon (SP) resonances in the nanostructured conductor. Optimizing nanostructures for SERS, therefore, implies optimizing the ability of plasmonic nanostructures to concentrate EM optical fields at locations where molecules of interest reside, and to enhance the radiation efficiency of the oscillating dipoles associated with these molecules and nanostructures. This review summarizes the development of theories over the past four decades pertinent to SERS, especially those contributing to our current understanding of SP-related SERS. Special emphasis is given to the salient strategies and theoretical approaches for optimizing nanostructures with hotspots as efficient EM near-field concentrating and far-field radiating substrates for SERS. A simple model is described in terms of which the upper limit of the SERS enhancement can be estimated. Several experimental strategies that may allow one to approach, or possibly exceed, this limit are discussed, such as cascading the enhancement of the local and radiated EM field by the multiscale EM coupling of hierarchical structures, and generating hotspots by hybridizing an antenna mode with a plasmonic waveguide cavity mode, which would result in an increased local field enhancement. Aiming to significantly broaden the application of SERS to other fields, and especially to materials science, we consider hybrid structures of plasmonic nanostructures and other material phases and strategies for producing strong local EM fields at desired locations in such hybrid structures.
In this vein, we consider some of the numerical strategies for simulating the optical properties and consequential SERS performance of particle-on-substrate systems that might guide the design of SERS-active systems. Finally, some current theoretical attempts are briefly discussed for unifying EM and non-EM contribution to SERS.

878 citations


Proceedings ArticleDOI
01 May 2017
TL;DR: A hybrid convolutional neural network is designed to integrate meta-data with text, and this hybrid approach is shown to improve a text-only deep learning model.
Abstract: Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news have been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present LIAR: a new, publicly available dataset for fake news detection. We collected a decade's worth of 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides a detailed analysis report and links to source documents for each case. This dataset can also be used for fact-checking research. Notably, this new dataset is an order of magnitude larger than the previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We designed a novel, hybrid convolutional neural network to integrate meta-data with text, and show that this hybrid approach can improve a text-only deep learning model.
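The hybrid architecture described here (convolutional features extracted from the statement text, concatenated with meta-data features before classification) can be sketched in miniature. The toy below is an assumption-laden illustration in plain NumPy with random weights and invented dimensions, not the authors' implementation; only the six-way label space matches LIAR's six truthfulness classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (invented for illustration; not the paper's settings).
embed_dim, n_filters, kernel, meta_dim, n_classes = 16, 8, 3, 5, 6

def text_cnn_features(word_vecs, W_conv):
    """1-D convolution over a sequence of word vectors, ReLU, then max-pool."""
    seq_len = word_vecs.shape[0]
    windows = np.stack([word_vecs[i:i + kernel].ravel()
                        for i in range(seq_len - kernel + 1)])
    conv = np.maximum(windows @ W_conv, 0.0)  # (positions, n_filters)
    return conv.max(axis=0)                   # max-over-time pooling

# Random parameters stand in for trained weights.
W_conv = rng.normal(size=(kernel * embed_dim, n_filters))
W_out = rng.normal(size=(n_filters + meta_dim, n_classes))

statement = rng.normal(size=(20, embed_dim))  # 20 embedded tokens
metadata = rng.normal(size=meta_dim)          # e.g. speaker, party, venue

# The "hybrid" step: concatenate text features with meta-data features
# before the final classification layer.
features = np.concatenate([text_cnn_features(statement, W_conv), metadata])
logits = features @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.argmax())  # index of the predicted truthfulness class
```

The design choice being illustrated is only the fusion point: text and meta-data are encoded separately and joined by concatenation, so either branch can be ablated to recover a text-only model.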

869 citations


Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, Ece Aşılar +2212 more (157 institutions)
TL;DR: A fully fledged particle-flow reconstruction algorithm tuned to the CMS detector was developed and has been used consistently in physics analyses, for the first time at a hadron collider.
Abstract: The CMS apparatus was identified, a few years before the start of the LHC operation at CERN, to feature properties well suited to particle-flow (PF) reconstruction: a highly-segmented tracker, a fine-grained electromagnetic calorimeter, a hermetic hadron calorimeter, a strong magnetic field, and an excellent muon spectrometer. A fully-fledged PF reconstruction algorithm tuned to the CMS detector was therefore developed and has been consistently used in physics analyses for the first time at a hadron collider. For each collision, the comprehensive list of final-state particles identified and reconstructed by the algorithm provides a global event description that leads to unprecedented CMS performance for jet and hadronic τ decay reconstruction, missing transverse momentum determination, and electron and muon identification. This approach also allows particles from pileup interactions to be identified and enables efficient pileup mitigation methods. The data collected by CMS at a centre-of-mass energy of 8 TeV show excellent agreement with the simulation and confirm the superior PF performance at least up to an average of 20 pileup interactions.

719 citations


Journal ArticleDOI
31 May 2017-Nature
TL;DR: Proactive international efforts to increase crop yields, minimize land clearing and habitat fragmentation, and protect natural lands could increase food security in developing nations and preserve much of Earth's remaining biodiversity.
Abstract: Tens of thousands of species are threatened with extinction as a result of human activities. Here we explore how the extinction risks of terrestrial mammals and birds might change in the next 50 years. Future population growth and economic development are forecasted to impose unprecedented levels of extinction risk on many more species worldwide, especially the large mammals of tropical Africa, Asia and South America. Yet these threats are not inevitable. Proactive international efforts to increase crop yields, minimize land clearing and habitat fragmentation, and protect natural lands could increase food security in developing nations and preserve much of Earth's remaining biodiversity.

647 citations


Journal ArticleDOI
TL;DR: It was hypothesized that litter quality would increase with latitude (despite variation within regions) and that traits would be correlated to produce ‘syndromes’ resulting from phylogeny and environmental variation; lower litter quality and higher nitrogen:phosphorus ratios were found in the tropics.
Abstract: Plant litter represents a major basal resource in streams, where its decomposition is partly regulated by litter traits. Litter-trait variation may determine the latitudinal gradient in decomposition in streams, which is mainly microbial in the tropics and detritivore-mediated at high latitudes. However, this hypothesis remains untested, as we lack information on large-scale trait variation for riparian litter. Variation cannot easily be inferred from existing leaf-trait databases, since nutrient resorption can cause traits of litter and green leaves to diverge. Here we present the first global-scale assessment of riparian litter quality by determining latitudinal variation (spanning 107°) in litter traits (nutrient concentrations; physical and chemical defences) of 151 species from 24 regions and their relationships with environmental factors and phylogeny. We hypothesized that litter quality would increase with latitude (despite variation within regions) and traits would be correlated to produce ‘syndromes’ resulting from phylogeny and environmental variation. We found lower litter quality and higher nitrogen:phosphorus ratios in the tropics. Traits were linked but showed no phylogenetic signal, suggesting that syndromes were environmentally determined. Poorer litter quality and greater phosphorus limitation towards the equator may restrict detritivore-mediated decomposition, contributing to the predominance of microbial decomposers in tropical streams.

616 citations


Journal ArticleDOI
TL;DR: In spite of its motivating properties, learning science in VR may overload and distract the learner, resulting in less opportunity to build learning outcomes (as reflected in poorer performance on learning-outcome tests), according to EEG measures of cognitive load.

Journal ArticleDOI
Abstract: Global agriculture feeds over 7 billion people, but is also a leading cause of environmental degradation. Understanding how alternative agricultural production systems, agricultural input efficiency, and food choice drive environmental degradation is necessary for reducing agriculture's environmental impacts. A meta-analysis of life cycle assessments that includes 742 agricultural systems and over 90 unique foods produced primarily in high-input systems shows that, per unit of food, organic systems require more land, cause more eutrophication, use less energy, but emit similar greenhouse gas emissions (GHGs) as conventional systems; that grass-fed beef requires more land and emits similar GHG emissions as grain-fed beef; and that low-input aquaculture and non-trawling fisheries have much lower GHG emissions than trawling fisheries. In addition, our analyses show that increasing agricultural input efficiency (the amount of food produced per input of fertilizer or feed) would have environmental benefits for both crop and livestock systems. Further, for all environmental indicators and nutritional units examined, plant-based foods have the lowest environmental impacts; eggs, dairy, pork, poultry, non-trawling fisheries, and non-recirculating aquaculture have intermediate impacts; and ruminant meat has impacts ~100 times those of plant-based foods. Our analyses show that dietary shifts towards low-impact foods and increases in agricultural input use efficiency would offer larger environmental benefits than would switches from conventional agricultural systems to alternatives such as organic agriculture or grass-fed beef.

Journal ArticleDOI
19 Jul 2017
TL;DR: This study reports a class of soft pneumatic robot capable of a basic form of this behavior, growing substantially in length from the tip while actively controlling direction using onboard sensing of environmental stimuli, and demonstrates the abilities to lengthen through constrained environments by exploiting passive deformations and form three-dimensional structures by lengthening the body of the robot along a path.
Abstract: Across kingdoms and length scales, certain cells and organisms navigate their environments not through locomotion but through growth. This pattern of movement is found in fungal hyphae, developing neurons, and trailing plants, and is characterized by extension from the tip of the body, length change of hundreds of percent, and active control of growth direction. This results in the abilities to move through tightly constrained environments and form useful three-dimensional structures from the body. We report a class of soft pneumatic robot that is capable of a basic form of this behavior, growing substantially in length from the tip while actively controlling direction using onboard sensing of environmental stimuli; further, the peak rate of lengthening is comparable to rates of animal and robot locomotion. This is enabled by two principles: Pressurization of an inverted thin-walled vessel allows rapid and substantial lengthening of the tip of the robot body, and controlled asymmetric lengthening of the tip allows directional control. Further, we demonstrate the abilities to lengthen through constrained environments by exploiting passive deformations and form three-dimensional structures by lengthening the body of the robot along a path. Our study helps lay the foundation for engineered systems that grow to navigate the environment.

Journal ArticleDOI
TL;DR: In this article, the authors proposed scalable quantum computers composed of qubits encoded in aggregates of four or more Majorana zero modes, realized at the ends of topological superconducting wire segments that are assembled into super-conducting islands with significant charging energy.
Abstract: We present designs for scalable quantum computers composed of qubits encoded in aggregates of four or more Majorana zero modes, realized at the ends of topological superconducting wire segments that are assembled into superconducting islands with significant charging energy. Quantum information can be manipulated according to a measurement-only protocol, which is facilitated by tunable couplings between Majorana zero modes and nearby semiconductor quantum dots. Our proposed architecture designs have the following principal virtues: (1) the magnetic field can be aligned in the direction of all of the topological superconducting wires since they are all parallel; (2) topological T junctions are not used, obviating possible difficulties in their fabrication and utilization; (3) quasiparticle poisoning is abated by the charging energy; (4) Clifford operations are executed by a relatively standard measurement: detection of corrections to quantum dot energy, charge, or differential capacitance induced by quantum fluctuations; (5) it is compatible with strategies for producing good approximate magic states.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that circular economy activities can increase overall production, which can partially or fully offset their benefits, and they have termed this effect "circular economy rebound".
Abstract: The so-called circular economy—the concept of closing material loops to preserve products, parts, and materials in the industrial system and extract their maximum utility—has recently started gaining momentum. The idea of substituting lower-impact secondary production for environmentally intensive primary production gives the circular economy a strong intuitive environmental appeal. However, proponents of the circular economy have tended to look at the world purely as an engineering system and have overlooked the economic part of the circular economy. Recent research has started to question the core of the circular economy—namely, whether closing material and product loops does, in fact, prevent primary production. In this article, we argue that circular economy activities can increase overall production, which can partially or fully offset their benefits. Because there is a strong parallel in this respect to energy efficiency rebound, we have termed this effect “circular economy rebound.” Circular economy rebound occurs when circular economy activities, which have lower per-unit-production impacts, also cause increased levels of production, reducing their benefit. We describe the mechanisms that cause circular economy rebound, which include the limited ability of secondary products to substitute for primary products, and price effects. We then offer some potential strategies for avoiding circular economy rebound. However, these strategies are unlikely to be attractive to for-profit firms, so we caution that simply encouraging private firms to find profitable opportunities in the circular economy is likely to cause rebound and lower or eliminate the potential environmental benefits.
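The rebound mechanism lends itself to a back-of-the-envelope illustration. In the sketch below every quantity is invented for illustration: secondary production has a lower per-unit impact, but each secondary unit displaces less than one primary unit, so total impact falls by less than idealized one-for-one substitution would suggest.

```python
# Hypothetical illustration of "circular economy rebound" (all numbers invented).

primary_impact = 1.0      # environmental impact per unit, primary production
secondary_impact = 0.4    # impact per unit, secondary (recycled) production
baseline_units = 100.0    # demand met entirely by primary production

displacement = 0.6        # each secondary unit displaces only 0.6 primary units
secondary_units = 50.0    # secondary output added to the market

# Actual outcome: imperfect substitution leaves more primary production
# (and more total production) than the idealized loop-closing story assumes.
primary_units = baseline_units - displacement * secondary_units
total_impact = primary_units * primary_impact + secondary_units * secondary_impact

# Idealized outcome: every secondary unit displaces one primary unit.
ideal_impact = ((baseline_units - secondary_units) * primary_impact
                + secondary_units * secondary_impact)

print(total_impact, ideal_impact)  # ≈ 90.0 vs ≈ 70.0: rebound erodes the benefit
```

Relative to the 100-unit baseline, the idealized substitution saves 30 impact units but the rebound case saves only 10; with stronger price effects (displacement closer to 0) the benefit can vanish entirely, which is the article's caution.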

Journal ArticleDOI
TL;DR: In this paper, the authors show that the late time behavior of horizon fluctuations in large anti-de Sitter (AdS) black holes is governed by the random matrix dynamics characteristic of quantum chaotic systems.
Abstract: We argue that the late time behavior of horizon fluctuations in large anti-de Sitter (AdS) black holes is governed by the random matrix dynamics characteristic of quantum chaotic systems. Our main tool is the Sachdev-Ye-Kitaev (SYK) model, which we use as a simple model of a black hole. We use an analytically continued partition function |Z(β + it)|² as well as correlation functions as diagnostics. Using numerical techniques we establish random matrix behavior at late times. We determine the early time behavior exactly in a double scaling limit, giving us a plausible estimate for the crossover time to random matrix behavior. We use these ideas to formulate a conjecture about general large AdS black holes, like those dual to 4D super-Yang-Mills theory, giving a provisional estimate of the crossover time. We make some preliminary comments about challenges to understanding the late time dynamics from a bulk point of view.
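The diagnostic |Z(β + it)|² can be computed directly for a pure random-matrix ensemble. The sketch below is a toy stand-in (a small GUE ensemble instead of the SYK model, with arbitrary matrix size, temperature, and time grid) that exhibits the qualitative early-time decay toward a late-time random-matrix value that the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(1)
N, beta = 64, 1.0                      # toy matrix size and inverse temperature
times = np.linspace(0.0, 50.0, 200)

def gue_eigenvalues(n):
    """Eigenvalues of one GUE sample (Hermitian matrix with Gaussian entries)."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h = (a + a.conj().T) / 2
    return np.linalg.eigvalsh(h)

def form_factor(eigs, t):
    """|Z(beta + i t)|^2 for Z = sum_n exp(-(beta + i t) E_n)."""
    z = np.exp(-(beta + 1j * t) * eigs).sum()
    return (z * z.conjugate()).real

# Ensemble-average the form factor, as the paper does over SYK samples.
samples = [gue_eigenvalues(N) for _ in range(20)]
sff = np.array([np.mean([form_factor(e, t) for e in samples]) for t in times])

# Early times: a steep decay of |Z|^2; late times: a much smaller
# random-matrix value (the dip/ramp/plateau structure in the paper).
print(sff[0] > sff[-1])
```

In the paper the early-time "slope" is controlled by the SYK spectral density while the late-time ramp and plateau are the random-matrix signature; this toy only reproduces the random-matrix side.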

Journal ArticleDOI
TL;DR: In this article, the authors parse the vast literature to examine the forefront of the field of block polymers and identify exciting themes and challenging opportunities that portend a bracing future trajectory.
Abstract: Block polymers have undergone extraordinary evolution since their inception more than 60 years ago, maturing from simple surfactants to an expansive class of macromolecules encoded with exquisite attributes. Contemporary synthetic accessibility coupled with facile characterization and rigorous theoretical advances have conspired to continuously generate fundamental insights and enabling concepts that target applications spanning chemistry, biology, physics, and engineering. Here, we parse the vast literature to examine the forefront of the field and identify exciting themes and challenging opportunities that portend a bracing future trajectory. This Perspective celebrates the visionary role played by Macromolecules in advancing our understanding of this remarkable class of materials.

Journal ArticleDOI
TL;DR: The CMS trigger system and its performance during Run 1 of the LHC are described; the system consists of two levels designed to select events of potential physics interest from a GHz (MHz) interaction rate of proton-proton (heavy ion) collisions.
Abstract: This paper describes the CMS trigger system and its performance during Run 1 of the LHC. The trigger system consists of two levels designed to select events of potential physics interest from a GHz (MHz) interaction rate of proton-proton (heavy ion) collisions. The first level of the trigger is implemented in hardware, and selects events containing detector signals consistent with an electron, photon, muon, tau lepton, jet, or missing transverse energy. A programmable menu of up to 128 object-based algorithms is used to select events for subsequent processing. The trigger thresholds are adjusted to the LHC instantaneous luminosity during data taking in order to restrict the output rate to 100 kHz, the upper limit imposed by the CMS readout electronics. The second level, implemented in software, further refines the purity of the output stream, selecting an average rate of 400 Hz for offline event storage. The objectives, strategy and performance of the trigger system during the LHC Run 1 are described.

Proceedings ArticleDOI
20 Jul 2017
TL;DR: A novel reinforcement learning framework for learning multi-hop relational paths is described, which uses a policy-based agent with continuous states based on knowledge graph embeddings, which reasons in a KG vector-space by sampling the most promising relation to extend its path.
Abstract: We study the problem of learning to reason in large scale knowledge graphs (KGs). More specifically, we describe a novel reinforcement learning framework for learning multi-hop relational paths: we use a policy-based agent with continuous states based on knowledge graph embeddings, which reasons in a KG vector-space by sampling the most promising relation to extend its path. In contrast to prior work, our approach includes a reward function that takes the accuracy, diversity, and efficiency into consideration. Experimentally, we show that our proposed method outperforms a path-ranking based algorithm and knowledge graph embedding methods on Freebase and Never-Ending Language Learning datasets.

Journal ArticleDOI
16 Oct 2017-Nature
TL;DR: Optical to near-infrared observations of a transient coincident with the detection of the gravitational-wave signature of a binary neutron-star merger and with a low-luminosity short-duration γ-ray burst are reported.
Abstract: The merger of two neutron stars has been predicted to produce an optical–infrared transient (lasting a few days) known as a ‘kilonova’, powered by the radioactive decay of neutron-rich species synthesized in the merger. Evidence that short γ-ray bursts also arise from neutron-star mergers has been accumulating. In models of such mergers, a small amount of mass (10^(−4)–10^(−2) solar masses) with a low electron fraction is ejected at high velocities (0.1–0.3 times light speed) or carried out by winds from an accretion disk formed around the newly merged object. This mass is expected to undergo rapid neutron capture (r-process) nucleosynthesis, leading to the formation of radioactive elements that release energy as they decay, powering an electromagnetic transient. A large uncertainty in the composition of the newly synthesized material leads to various expected colours, durations and luminosities for such transients. Observational evidence for kilonovae has so far been inconclusive because it was based on cases of moderate excess emission detected in the afterglows of γ-ray bursts. Here we report optical to near-infrared observations of a transient coincident with the detection of the gravitational-wave signature of a binary neutron-star merger and with a low-luminosity short-duration γ-ray burst. Our observations, taken roughly every eight hours over a few days following the gravitational-wave trigger, reveal an initial blue excess, with fast optical fading and reddening. Using numerical models, we conclude that our data are broadly consistent with a light curve powered by a few hundredths of a solar mass of low-opacity material corresponding to lanthanide-poor (a fraction of 10^(−4.5) by mass) ejecta.

Journal ArticleDOI
Khachatryan, Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam +2285 more (147 institutions)
TL;DR: Improved jet energy scale corrections, based on a data sample corresponding to an integrated luminosity of 19.7 fb^(-1) collected by the CMS experiment in proton-proton collisions at a center-of-mass energy of 8 TeV, are presented.
Abstract: Improved jet energy scale corrections, based on a data sample corresponding to an integrated luminosity of 19.7 fb^(-1) collected by the CMS experiment in proton-proton collisions at a center-of-mass energy of 8 TeV, are presented. The corrections as a function of pseudorapidity η and transverse momentum p_T are extracted from data and simulated events combining several channels and methods. They account successively for the effects of pileup, uniformity of the detector response, and residual data-simulation jet energy scale differences. Further corrections, depending on the jet flavor and distance parameter (jet size) R, are also presented. The jet energy resolution is measured in data and simulated events and is studied as a function of pileup, jet size, and jet flavor. Typical jet energy resolutions at the central rapidities are 15–20% at 30 GeV, about 10% at 100 GeV, and 5% at 1 TeV. The studies exploit events with dijet topology, as well as photon+jet, Z+jet and multijet events. Several new techniques are used to account for the various sources of jet energy scale corrections, and a full set of uncertainties, and their correlations, are provided. The final uncertainties on the jet energy scale are below 3% across the phase space considered by most analyses (p_T > 30 GeV and |η| < 5.0). In the barrel region, an uncertainty below 1% for p_T > 30 GeV is reached when excluding the jet flavor uncertainties, which are provided separately for different jet flavors. A new benchmark for jet energy scale determination at hadron colliders is achieved with 0.32% uncertainty for jets with p_T of the order of 165–330 GeV, and |η| < 0.8.

Journal ArticleDOI
14 Jun 2017-Nature
TL;DR: A Dam Environmental Vulnerability Index is introduced to quantify the current and potential impacts of dams in the Amazon basin and suggest institutional innovations to assess and avoid the likely impoverishment of Amazon rivers.
Abstract: More than a hundred hydropower dams have already been built in the Amazon basin and numerous proposals for further dam constructions are under consideration. The accumulated negative environmental effects of existing dams and proposed dams, if constructed, will trigger massive hydrophysical and biotic disturbances that will affect the Amazon basin's floodplains, estuary and sediment plume. We introduce a Dam Environmental Vulnerability Index to quantify the current and potential impacts of dams in the basin. The scale of foreseeable environmental degradation indicates the need for collective action among nations and states to avoid cumulative, far-reaching impacts. We suggest institutional innovations to assess and avoid the likely impoverishment of Amazon rivers.

Journal ArticleDOI
TL;DR: A country-specific version of Kaya’s identity is used to develop a statistically-based probabilistic forecast of CO2 emissions and temperature change to 2100, and a joint Bayesian hierarchical model for GDP per capita and carbon intensity is developed.
Abstract: The recently published Intergovernmental Panel on Climate Change (IPCC) projections to 2100 give likely ranges of global temperature increase in four scenarios for population, economic growth and carbon use [1]. However, these projections are not based on a fully statistical approach. Here we use a country-specific version of Kaya's identity to develop a statistically-based probabilistic forecast of CO2 emissions and temperature change to 2100. Using data for 1960-2010, including the UN's probabilistic population projections for all countries [2-4], we develop a joint Bayesian hierarchical model for GDP per capita and carbon intensity. We find that the 90% interval for cumulative CO2 emissions includes the IPCC's two middle scenarios but not the extreme ones. The likely range of global temperature increase is 2.0-4.9°C, with median 3.2°C and a 5% (1%) chance that it will be less than 2°C (1.5°C). Population growth is not a major contributing factor. Our model is not a "business as usual" scenario, but rather is based on data which already show the effect of emission mitigation policies. Achieving the goal of less than 1.5°C warming will require carbon intensity to decline much faster than in the recent past.
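Kaya's identity, the scaffold of the paper's forecast, decomposes a country's emissions into multiplicative factors: population × GDP per capita × carbon intensity of GDP. A minimal sketch with made-up numbers (not the paper's data):

```python
# Sketch of the country-level Kaya identity:
#   CO2 = population x (GDP / population) x (CO2 / GDP).
# All input numbers below are made up for illustration.

def kaya_emissions(population, gdp_per_capita, carbon_intensity):
    """Annual CO2 emissions = population * GDP/capita * CO2/GDP."""
    return population * gdp_per_capita * carbon_intensity

# Hypothetical country: 50 million people, $20k GDP per capita,
# 0.3 kg CO2 per dollar of GDP -> emissions in kg CO2 per year.
e = kaya_emissions(50e6, 20_000, 0.3)
print(f"{e / 1e12:.2f} Gt CO2 / yr")
```

The decomposition is what lets the paper forecast each factor separately: probabilistic population projections for the first term, and a joint Bayesian hierarchical model for GDP per capita and carbon intensity.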

Journal ArticleDOI
23 Mar 2017-Nature
TL;DR: This work identifies a material geometry that achieves the Hashin–Shtrikman upper bounds on isotropic elastic stiffness, and finds that stiff but well distributed networks of plates are required to transfer loads efficiently between neighbouring members.
Abstract: A wide variety of high-performance applications require materials for which shape control is maintained under substantial stress, and that have minimal density. Bio-inspired hexagonal and square honeycomb structures and lattice materials based on repeating unit cells composed of webs or trusses, when made from materials of high elastic stiffness and low density, represent some of the lightest, stiffest and strongest materials available today. Recent advances in 3D printing and automated assembly have enabled such complicated material geometries to be fabricated at low (and declining) cost. These mechanical metamaterials have properties that are a function of their mesoscale geometry as well as their constituents, leading to combinations of properties that are unobtainable in solid materials; however, a material geometry that achieves the theoretical upper bounds for isotropic elasticity and strain energy storage (the Hashin-Shtrikman upper bounds) has yet to be identified. Here we evaluate the manner in which strain energy distributes under load in a representative selection of material geometries, to identify the morphological features associated with high elastic performance. Using finite-element models, supported by analytical methods, and a heuristic optimization scheme, we identify a material geometry that achieves the Hashin-Shtrikman upper bounds on isotropic elastic stiffness. Previous work has focused on truss networks and anisotropic honeycombs, neither of which can achieve this theoretical limit. We find that stiff but well distributed networks of plates are required to transfer loads efficiently between neighbouring members. The resulting low-density mechanical metamaterials have many advantageous properties: their mesoscale geometry can facilitate large crushing strains with high energy absorption, optical bandgaps and mechanically tunable acoustic bandgaps, high thermal insulation, buoyancy, and fluid storage and transport. Our relatively simple design can be manufactured using origami-like sheet folding and bonding methods.
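For reference, the Hashin–Shtrikman upper bounds on the effective isotropic bulk and shear moduli of a two-phase composite (Hashin & Shtrikman, 1963) specialize, for a solid-plus-void material, to the form sketched below. The solid moduli used are illustrative assumptions, roughly those of a stiff polymer, not the paper's materials:

```python
# Hashin-Shtrikman upper bounds on the isotropic effective bulk (K) and
# shear (G) moduli of a two-phase composite, specialized to solid + void
# (the void phase has K = G = 0). Solid properties are illustrative.

def hs_upper_bounds(k_s, g_s, phi_solid):
    """Return (K_HS+, G_HS+) for a solid of moduli (k_s, g_s) at solid
    volume fraction phi_solid, with the second phase being void."""
    f_v = 1.0 - phi_solid  # void volume fraction
    k_hs = k_s + f_v / (1.0 / (0.0 - k_s)
                        + 3.0 * phi_solid / (3.0 * k_s + 4.0 * g_s))
    g_hs = g_s + f_v / (1.0 / (0.0 - g_s)
                        + 6.0 * phi_solid * (k_s + 2.0 * g_s)
                          / (5.0 * g_s * (3.0 * k_s + 4.0 * g_s)))
    return k_hs, g_hs

K, G = hs_upper_bounds(k_s=4.0, g_s=1.5, phi_solid=0.2)  # moduli in GPa
print(f"K_HS+ = {K:.3f} GPa, G_HS+ = {G:.3f} GPa")
```

These bounds are the targets the plate-based geometry in the paper is shown to reach; truss lattices fall well below them at low relative density.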

Posted Content
Yonit Hochberg1, Yonit Hochberg2, A. N. Villano3, Andrei Afanasev4  +238 moreInstitutions (98)
TL;DR: The white paper summarizes the workshop "U.S. Cosmic Visions: New Ideas in Dark Matter" held at University of Maryland on March 23-25, 2017.
Abstract: This white paper summarizes the workshop "U.S. Cosmic Visions: New Ideas in Dark Matter" held at University of Maryland on March 23-25, 2017.

Journal ArticleDOI
TL;DR: This Review discusses how active matter concepts are important for understanding cell biology, and how the use of biochemical components enables the creation of new inherently non-equilibrium materials with unique properties that have so far been mostly restricted to living organisms.
Abstract: The remarkable processes that characterize living organisms, such as motility, self-healing and reproduction, are fuelled by a continuous injection of energy at the microscale. The field of active matter focuses on understanding how the collective behaviours of internally driven components can give rise to these biological phenomena, while also striving to produce synthetic materials composed of active energy-consuming components. The synergistic approach of studying active matter in both living cells and reconstituted systems assembled from biochemical building blocks has the potential to transform our understanding of both cell biology and materials science. This methodology can provide insight into the fundamental principles that govern the dynamical behaviours of self-organizing subcellular structures, and can lead to the design of artificial materials and machines that operate away from equilibrium and can thus attain life-like properties. In this Review, we focus on active materials made of cytoskeletal components, highlighting the role of active stresses and how they drive self-organization of both cellular structures and macroscale materials, which are machines powered by nanomachines.

Posted Content
TL;DR: LIAR, as presented in this paper, is a new, publicly available dataset of 12.8K manually labeled short statements in various contexts from this http URL, which provides a detailed analysis report and links to source documents for each case.
Abstract: Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news have been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present LIAR: a new, publicly available dataset for fake news detection. We collected 12.8K manually labeled short statements, spanning a decade and covering various contexts, from this http URL, which provides a detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than the previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
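The paper's hybrid idea, combining the statement text with speaker meta-data in a single input representation, can be illustrated at its simplest with concatenated bag-of-words and one-hot metadata features. The vocabulary, party list and statement below are made up, and this plain feature vector is a stand-in for the paper's actual convolutional architecture:

```python
# Sketch of combining surface-level text features with speaker metadata
# in one feature vector. VOCAB, PARTIES and the example statement are
# hypothetical; the paper's real model is a hybrid CNN, not this.

VOCAB = ["economy", "taxes", "healthcare", "jobs"]
PARTIES = ["democrat", "republican", "independent"]

def featurize(statement, party):
    """Bag-of-words text features concatenated with one-hot metadata."""
    tokens = statement.lower().split()
    text_feats = [tokens.count(w) for w in VOCAB]
    meta_feats = [1 if party == p else 0 for p in PARTIES]
    return text_feats + meta_feats

vec = featurize("Taxes hurt jobs and the economy", "republican")
print(vec)  # [1, 1, 0, 1, 0, 1, 0]
```

A classifier trained on such joint vectors sees both what was said and who said it, which is the intuition behind the reported gain over the text-only model.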

Journal ArticleDOI
27 Oct 2017-Science
TL;DR: Inspired by the cuticles of marine mussel byssi, incorporating sacrificial, reversible iron-catechol cross-links into a dry, loosely cross-linked epoxy network yields two to three orders of magnitude increases in stiffness, tensile strength, and tensile toughness compared to the iron-free precursor, while gaining recoverable hysteretic energy dissipation and maintaining the original extensibility.
Abstract: Materials often exhibit a trade-off between stiffness and extensibility; for example, strengthening elastomers by increasing their cross-link density leads to embrittlement and decreased toughness. Inspired by cuticles of marine mussel byssi, we circumvent this inherent trade-off by incorporating sacrificial, reversible iron-catechol cross-links into a dry, loosely cross-linked epoxy network. The iron-containing network exhibits two to three orders of magnitude increases in stiffness, tensile strength, and tensile toughness compared to its iron-free precursor while gaining recoverable hysteretic energy dissipation and maintaining its original extensibility. Compared to previous realizations of this chemistry in hydrogels, the dry nature of the network enables larger property enhancement owing to the cooperative effects of both the increased cross-link density given by the reversible iron-catecholate complexes and the chain-restricting ionomeric nanodomains that they form.
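Tensile toughness, as compared in the abstract, is the energy absorbed per unit volume up to failure, i.e. the area under the stress-strain curve. A trapezoid-rule sketch with made-up stress-strain data (not the paper's measurements):

```python
# Tensile toughness = area under the stress-strain curve up to failure.
# Trapezoid-rule sketch; the data points are made up for illustration
# (stress in MPa, strain dimensionless -> toughness in MJ/m^3).

def toughness(strain, stress):
    """Integrate stress over strain with the trapezoid rule."""
    area = 0.0
    for i in range(1, len(strain)):
        area += 0.5 * (stress[i] + stress[i - 1]) * (strain[i] - strain[i - 1])
    return area

# A stiffer, stronger curve over the SAME extensibility absorbs far more
# energy -- the kind of simultaneous gain the iron-catechol network shows.
soft = toughness([0.0, 0.5, 1.0], [0.0, 1.0, 1.5])
stiff = toughness([0.0, 0.5, 1.0], [0.0, 30.0, 45.0])
print(f"soft: {soft} MJ/m^3, stiff: {stiff} MJ/m^3")
```

This makes the abstract's point concrete: because extensibility is preserved while stress levels rise by orders of magnitude, the integrated energy (toughness) rises by the same orders of magnitude.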

Journal ArticleDOI
TL;DR: This article reviews the fundamental operating mechanisms and challenges of Li intercalation in layered oxides, contrasts how these challenges play out for different materials (with emphasis on Ni–Co–Al (NCA) and Ni–Mn–Co (NMC) alloys), and summarizes the extensive corpus of modifications and extensions to the layered lithium oxides.
Abstract: Although layered lithium oxides have become the cathode of choice for state-of-the-art Li-ion batteries, substantial gaps remain between the practical and theoretical energy densities. With the aim of supporting efforts to close this gap, this work reviews the fundamental operating mechanisms and challenges of Li intercalation in layered oxides, contrasts how these challenges play out differently for different materials (with emphasis on Ni–Co–Al (NCA) and Ni–Mn–Co (NMC) alloys), and summarizes the extensive corpus of modifications and extensions to the layered lithium oxides. Particular emphasis is given to the fundamental mechanisms behind the operation and degradation of layered intercalation electrode materials as well as novel modifications and extensions, including Na-ion and cation-disordered materials.
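The theoretical capacity against which the practical gap is measured follows from Q = x·F/(3.6·M) in mAh/g, with F the Faraday constant, M the molar mass, and x the lithium exchanged per formula unit. A quick sketch for LiCoO2, the archetypal layered oxide:

```python
# Theoretical specific capacity of an intercalation cathode:
#   Q [mAh/g] = x * F / (3.6 * M)
# with F the Faraday constant (C/mol), M the molar mass (g/mol) and
# x the Li per formula unit exchanged (x = 1 assumes full delithiation).

F = 96485.0  # Faraday constant, C/mol

def theoretical_capacity_mah_g(molar_mass_g_mol, x_li=1.0):
    """Theoretical specific capacity in mAh/g."""
    return x_li * F / (3.6 * molar_mass_g_mol)

# LiCoO2: M = 6.94 (Li) + 58.93 (Co) + 2 * 16.00 (O) = 97.87 g/mol
q = theoretical_capacity_mah_g(97.87)
print(f"LiCoO2: {q:.0f} mAh/g")
```

In practice only a fraction of this (roughly half for classic LiCoO2 cycling, x ≈ 0.5) is reversibly accessible without structural degradation, which is exactly the practical-versus-theoretical gap the review addresses.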

Journal ArticleDOI
11 Oct 2017-Nature
TL;DR: This work curated an extensive set of ADAR1 and ADAR2 targets and showed that many editing sites display distinct tissue-specific regulation by the ADAR enzymes in vivo, suggesting stronger cis-directed regulation of RNA editing for most sites, although the small set of conserved coding sites is under stronger trans-regulation.
Abstract: Adenosine-to-inosine (A-to-I) RNA editing is a conserved post-transcriptional mechanism mediated by ADAR enzymes that diversifies the transcriptome by altering selected nucleotides in RNA molecules. Although many editing sites have recently been discovered, the extent to which most sites are edited and how the editing is regulated in different biological contexts are not fully understood. Here we report dynamic spatiotemporal patterns and new regulators of RNA editing, discovered through an extensive profiling of A-to-I RNA editing in 8,551 human samples (representing 53 body sites from 552 individuals) from the Genotype-Tissue Expression (GTEx) project and in hundreds of other primate and mouse samples. We show that editing levels in non-repetitive coding regions vary more between tissues than editing levels in repetitive regions. Globally, ADAR1 is the primary editor of repetitive sites and ADAR2 is the primary editor of non-repetitive coding sites, whereas the catalytically inactive ADAR3 predominantly acts as an inhibitor of editing. Cross-species analysis of RNA editing in several tissues revealed that species, rather than tissue type, is the primary determinant of editing levels, suggesting stronger cis-directed regulation of RNA editing for most sites, although the small set of conserved coding sites is under stronger trans-regulation. In addition, we curated an extensive set of ADAR1 and ADAR2 targets and showed that many editing sites display distinct tissue-specific regulation by the ADAR enzymes in vivo. Further analysis of the GTEx data revealed several potential regulators of editing, such as AIMP2, which reduces editing in muscles by enhancing the degradation of the ADAR proteins. Collectively, our work provides insights into the complex cis- and trans-regulation of A-to-I editing.
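The "editing level" quantified across these samples is conventionally the fraction of RNA-seq reads carrying G at a given adenosine site (inosine is reverse-transcribed and sequenced as guanosine) among the A-plus-G reads. A minimal sketch with made-up read counts:

```python
# A-to-I editing level at a site = G reads / (A reads + G reads),
# since inosine is read as guanosine in RNA-seq. Counts are made up.

def editing_level(a_reads, g_reads):
    """Fraction of edited reads at an A-to-I editing site."""
    total = a_reads + g_reads
    return g_reads / total if total else 0.0

# 70 unedited (A) and 30 edited (G) reads -> 30% editing level.
print(editing_level(70, 30))
```

Comparing such per-site fractions across tissues and species is the basic operation behind the paper's finding that species, rather than tissue type, is the primary determinant of editing levels.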