
Showing papers by "California Institute of Technology published in 2018"


Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Yashar Akrami4  +229 moreInstitutions (70)
TL;DR: In this paper, the authors present cosmological parameter results from the final full-mission Planck measurements of the CMB anisotropies, finding good consistency with the standard spatially-flat 6-parameter $\Lambda$CDM cosmology having a power-law spectrum of adiabatic scalar perturbations, from polarization, temperature, and lensing, separately and in combination.
Abstract: We present cosmological parameter results from the final full-mission Planck measurements of the CMB anisotropies. We find good consistency with the standard spatially-flat 6-parameter $\Lambda$CDM cosmology having a power-law spectrum of adiabatic scalar perturbations (denoted "base $\Lambda$CDM" in this paper), from polarization, temperature, and lensing, separately and in combination. A combined analysis gives dark matter density $\Omega_c h^2 = 0.120\pm 0.001$, baryon density $\Omega_b h^2 = 0.0224\pm 0.0001$, scalar spectral index $n_s = 0.965\pm 0.004$, and optical depth $\tau = 0.054\pm 0.007$ (in this abstract we quote $68\,\%$ confidence regions on measured parameters and $95\,\%$ on upper limits). The angular acoustic scale is measured to $0.03\,\%$ precision, with $100\theta_*=1.0411\pm 0.0003$. These results are only weakly dependent on the cosmological model and remain stable, with somewhat increased errors, in many commonly considered extensions. Assuming the base-$\Lambda$CDM cosmology, the inferred late-Universe parameters are: Hubble constant $H_0 = (67.4\pm 0.5)$ km/s/Mpc; matter density parameter $\Omega_m = 0.315\pm 0.007$; and matter fluctuation amplitude $\sigma_8 = 0.811\pm 0.006$. We find no compelling evidence for extensions to the base-$\Lambda$CDM model. Combining with BAO we constrain the effective extra relativistic degrees of freedom to be $N_{\rm eff} = 2.99\pm 0.17$, and the neutrino mass is tightly constrained to $\sum m_\nu < 0.12$ eV. The CMB spectra continue to prefer higher lensing amplitudes than predicted in base-$\Lambda$CDM at over $2\,\sigma$, which pulls some parameters that affect the lensing amplitude away from the base-$\Lambda$CDM model; however, this is not supported by the lensing reconstruction or (in models that also change the background geometry) BAO data. (Abridged)
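
As a quick consistency check on these numbers (an illustrative calculation, not part of the paper), the quoted matter density parameter follows from the physical densities and the Hubble constant via $\Omega_m = (\Omega_c h^2 + \Omega_b h^2 + \Omega_\nu h^2)/h^2$:

```python
# Illustrative check: recover Omega_m from the quoted Planck 2018 values.
omega_c_h2 = 0.120      # physical cold-dark-matter density
omega_b_h2 = 0.0224     # physical baryon density
h = 67.4 / 100.0        # dimensionless Hubble parameter from H0

# Minimal-mass neutrinos (sum m_nu ~ 0.06 eV) contribute via the standard
# relation Omega_nu h^2 = sum(m_nu) / 93.14 eV.
omega_nu_h2 = 0.06 / 93.14

omega_m = (omega_c_h2 + omega_b_h2 + omega_nu_h2) / h**2
print(f"Omega_m = {omega_m:.3f}")   # -> 0.315, matching the quoted value
```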

3,077 citations


Journal ArticleDOI
06 Aug 2018
TL;DR: Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future; the 100-qubit quantum computer will not change the world right away, but it should be regarded as a significant step toward the more powerful quantum technologies of the future.
Abstract: Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future. Quantum computers with 50-100 qubits may be able to perform tasks which surpass the capabilities of today's classical digital computers, but noise in quantum gates will limit the size of quantum circuits that can be executed reliably. NISQ devices will be useful tools for exploring many-body quantum physics, and may have other useful applications, but the 100-qubit quantum computer will not change the world right away --- we should regard it as a significant step toward the more powerful quantum technologies of the future. Quantum technologists should continue to strive for more accurate quantum gates and, eventually, fully fault-tolerant quantum computing.

2,598 citations


Journal ArticleDOI
TL;DR: The overall biomass composition of the biosphere is assembled, establishing a census of the ≈550 gigatons of carbon (Gt C) of biomass distributed among all of the kingdoms of life; the census shows that terrestrial biomass is about two orders of magnitude higher than marine biomass, with a total of ≈6 Gt C of marine biota, double the previous estimate.
Abstract: A census of the biomass on Earth is key for understanding the structure and dynamics of the biosphere. However, a global, quantitative view of how the biomass of different taxa compare with one another is still lacking. Here, we assemble the overall biomass composition of the biosphere, establishing a census of the ≈550 gigatons of carbon (Gt C) of biomass distributed among all of the kingdoms of life. We find that the kingdoms of life concentrate at different locations on the planet; plants (≈450 Gt C, the dominant kingdom) are primarily terrestrial, whereas animals (≈2 Gt C) are mainly marine, and bacteria (≈70 Gt C) and archaea (≈7 Gt C) are predominantly located in deep subsurface environments. We show that terrestrial biomass is about two orders of magnitude higher than marine biomass and estimate a total of ≈6 Gt C of marine biota, doubling the previous estimated quantity. Our analysis reveals that the global marine biomass pyramid contains more consumers than producers, thus increasing the scope of previous observations on inverse food pyramids. Finally, we highlight that the mass of humans is an order of magnitude higher than that of all wild mammals combined and report the historical impact of humanity on the global biomass of prominent taxa, including mammals, fish, and plants.

1,714 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Fausto Acernese3  +1235 moreInstitutions (132)
TL;DR: This analysis expands upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars.
Abstract: On 17 August 2017, the LIGO and Virgo observatories made the first direct detection of gravitational waves from the coalescence of a neutron star binary system. The detection of this gravitational-wave signal, GW170817, offers a novel opportunity to directly probe the properties of matter at the extreme conditions found in the interior of these stars. The initial, minimal-assumption analysis of the LIGO and Virgo data placed constraints on the tidal effects of the coalescing bodies, which were then translated to constraints on neutron star radii. Here, we expand upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars. Our analysis employs two methods: the use of equation-of-state-insensitive relations between various macroscopic properties of the neutron stars and the use of an efficient parametrization of the defining function $p(\rho)$ of the equation of state itself. From the LIGO and Virgo data alone and the first method, we measure the two neutron star radii as $R_1 = 10.8^{+2.0}_{-1.7}$ km for the heavier star and $R_2 = 10.7^{+2.1}_{-1.5}$ km for the lighter star at the 90% credible level. If we additionally require that the equation of state supports neutron stars with masses larger than $1.97\,M_\odot$ as required from electromagnetic observations and employ the equation-of-state parametrization, we further constrain $R_1 = 11.9^{+1.4}_{-1.4}$ km and $R_2 = 11.9^{+1.4}_{-1.4}$ km at the 90% credible level. Finally, we obtain constraints on $p(\rho)$ at supranuclear densities, with the pressure at twice nuclear saturation density measured to be $3.5^{+2.7}_{-1.7} \times 10^{34}$ dyn cm$^{-2}$ at the 90% level.

1,595 citations


Journal ArticleDOI
Daniel J. Benjamin1, James O. Berger2, Magnus Johannesson1, Magnus Johannesson3, Brian A. Nosek4, Brian A. Nosek5, Eric-Jan Wagenmakers6, Richard A. Berk7, Kenneth A. Bollen8, Björn Brembs9, Lawrence D. Brown7, Colin F. Camerer10, David Cesarini11, David Cesarini12, Christopher D. Chambers13, Merlise A. Clyde2, Thomas D. Cook14, Thomas D. Cook15, Paul De Boeck16, Zoltan Dienes17, Anna Dreber3, Kenny Easwaran18, Charles Efferson19, Ernst Fehr20, Fiona Fidler21, Andy P. Field17, Malcolm R. Forster22, Edward I. George7, Richard Gonzalez23, Steven N. Goodman24, Edwin J. Green25, Donald P. Green26, Anthony G. Greenwald27, Jarrod D. Hadfield28, Larry V. Hedges14, Leonhard Held20, Teck-Hua Ho29, Herbert Hoijtink30, Daniel J. Hruschka31, Kosuke Imai32, Guido W. Imbens24, John P. A. Ioannidis24, Minjeong Jeon33, James Holland Jones34, Michael Kirchler35, David Laibson36, John A. List37, Roderick J. A. Little23, Arthur Lupia23, Edouard Machery38, Scott E. Maxwell39, Michael A. McCarthy21, Don A. Moore40, Stephen L. Morgan41, Marcus R. Munafò42, Shinichi Nakagawa43, Brendan Nyhan44, Timothy H. Parker45, Luis R. Pericchi46, Marco Perugini47, Jeffrey N. Rouder48, Judith Rousseau49, Victoria Savalei50, Felix D. Schönbrodt51, Thomas Sellke52, Betsy Sinclair53, Dustin Tingley36, Trisha Van Zandt16, Simine Vazire54, Duncan J. Watts55, Christopher Winship36, Robert L. Wolpert2, Yu Xie32, Cristobal Young24, Jonathan Zinman44, Valen E. Johnson1, Valen E. Johnson18 
University of Southern California1, Duke University2, Stockholm School of Economics3, University of Virginia4, Center for Open Science5, University of Amsterdam6, University of Pennsylvania7, University of North Carolina at Chapel Hill8, University of Regensburg9, California Institute of Technology10, Research Institute of Industrial Economics11, New York University12, Cardiff University13, Northwestern University14, Mathematica Policy Research15, Ohio State University16, University of Sussex17, Texas A&M University18, Royal Holloway, University of London19, University of Zurich20, University of Melbourne21, University of Wisconsin-Madison22, University of Michigan23, Stanford University24, Rutgers University25, Columbia University26, University of Washington27, University of Edinburgh28, National University of Singapore29, Utrecht University30, Arizona State University31, Princeton University32, University of California, Los Angeles33, Imperial College London34, University of Innsbruck35, Harvard University36, University of Chicago37, University of Pittsburgh38, University of Notre Dame39, University of California, Berkeley40, Johns Hopkins University41, University of Bristol42, University of New South Wales43, Dartmouth College44, Whitman College45, University of Puerto Rico46, University of Milan47, University of California, Irvine48, Paris Dauphine University49, University of British Columbia50, Ludwig Maximilian University of Munich51, Purdue University52, Washington University in St. Louis53, University of California, Davis54, Microsoft55
TL;DR: The default P-value threshold for statistical significance is proposed to be changed from 0.05 to 0.005 for claims of new discoveries, in order to reduce the rate of false positives among claimed discoveries.
Abstract: We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.
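
To make the proposal concrete, the snippet below (an illustrative calculation, not from the paper) shows how the two-sided critical z-score moves when the default threshold drops from 0.05 to 0.005:

```python
from scipy.stats import norm

# Two-sided critical z-values at the old and proposed significance levels.
for alpha in (0.05, 0.005):
    z_crit = norm.ppf(1 - alpha / 2)
    print(f"alpha = {alpha}: reject when |z| > {z_crit:.2f}")
# alpha = 0.05:  reject when |z| > 1.96
# alpha = 0.005: reject when |z| > 2.81
```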

1,586 citations


Journal ArticleDOI
TL;DR: The capabilities and design philosophy of the current version of the PySCF package are documented; the package is as efficient as the best existing C- or Fortran-based quantum chemistry programs.
Abstract: Python-based simulations of chemistry framework (PySCF) is a general-purpose electronic structure platform designed from the ground up to emphasize code simplicity, so as to facilitate new method development and enable flexible computational workflows. The package provides a wide range of tools to support simulations of finite-size systems, extended systems with periodic boundary conditions, low-dimensional periodic systems, and custom Hamiltonians, using mean-field and post-mean-field methods with standard Gaussian basis functions. To ensure ease of extensibility, PySCF uses the Python language to implement almost all of its features, while computationally critical paths are implemented with heavily optimized C routines. Using this combined Python/C implementation, the package is as efficient as the best existing C or Fortran-based quantum chemistry programs. In this paper, we document the capabilities and design philosophy of the current version of the PySCF package. WIREs Comput Mol Sci 2018, 8:e1340. doi: 10.1002/wcms.1340 This article is categorized under: Structure and Mechanism > Computational Materials Science Electronic Structure Theory > Ab Initio Electronic Structure Methods Software > Quantum Chemistry
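
For flavour, a minimal PySCF workflow looks like the following (a standard usage sketch; the molecule and basis set are arbitrary choices for illustration, not taken from the paper):

```python
from pyscf import gto, scf, cc

# Build a water molecule with a standard Gaussian basis set.
mol = gto.M(
    atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
    basis="cc-pvdz",
)

# Mean-field reference (restricted Hartree-Fock), then a post-mean-field
# correlated method (CCSD) built on top of it.
mf = scf.RHF(mol).run()
mycc = cc.CCSD(mf).run()

print("RHF  total energy:", mf.e_tot)
print("CCSD total energy:", mycc.e_tot)
```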

1,042 citations


Journal ArticleDOI
16 May 2018-Nature
TL;DR: Analysis of 2002–2016 GRACE satellite observations of terrestrial water storage reveals substantial changes in freshwater resources globally, which are driven by natural and anthropogenic climate variability and human activities.
Abstract: Freshwater availability is changing worldwide. Here we quantify 34 trends in terrestrial water storage observed by the Gravity Recovery and Climate Experiment (GRACE) satellites during 2002–2016 and categorize their drivers as natural interannual variability, unsustainable groundwater consumption, climate change or combinations thereof. Several of these trends had been lacking thorough investigation and attribution, including massive changes in northwestern China and the Okavango Delta. Others are consistent with climate model predictions. This observation-based assessment of how the world’s water landscape is responding to human impacts and climate variations provides a blueprint for evaluating and predicting emerging threats to water and food security.

966 citations


Journal ArticleDOI
Bela Abolfathi1, D. S. Aguado2, Gabriela Aguilar3, Carlos Allende Prieto2  +361 moreInstitutions (94)
TL;DR: SDSS-IV is the fourth generation of the Sloan Digital Sky Survey and has been in operation since July 2014; as discussed by the authors, this paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen, or DR14).
Abstract: The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since July 2014. This paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14). This release makes the data taken by SDSS-IV in its first two years of operation (July 2014 to July 2016) public. Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey; the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data-driven machine-learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from the SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS web site (www.sdss.org) has been updated for this release and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020 and will be followed by SDSS-V.

965 citations


Journal ArticleDOI
29 Jun 2018-Science
TL;DR: In this paper, the authors examine barriers and opportunities associated with difficult-to-decarbonize energy services and industrial processes, including possible technological solutions and research and development priorities, and assess how existing technologies could meet future demand for these services without net addition of CO2 to the atmosphere.
Abstract: Some energy services and industrial processes-such as long-distance freight transport, air travel, highly reliable electricity, and steel and cement manufacturing-are particularly difficult to provide without adding carbon dioxide (CO2) to the atmosphere. Rapidly growing demand for these services, combined with long lead times for technology development and long lifetimes of energy infrastructure, make decarbonization of these services both essential and urgent. We examine barriers and opportunities associated with these difficult-to-decarbonize services and processes, including possible technological solutions and research and development priorities. A range of existing technologies could meet future demands for these services and processes without net addition of CO2 to the atmosphere, but their use may depend on a combination of cost reductions via research and innovation, as well as coordinated deployment and integration of operations across currently discrete energy industries.

951 citations


Journal ArticleDOI
25 Jan 2018-Nature
TL;DR: The quasar's spectrum shows strong absorption redwards of the Lyman α emission line (the Gunn–Peterson damping wing), as would be expected if a significant amount of the hydrogen in the intergalactic medium surrounding J1342 + 0928 is neutral; the authors derive a significant neutral-hydrogen fraction, although the exact fraction depends on the modelling.
Abstract: Observations of a quasar at redshift 7.54, when the Universe was just five per cent of its current age, suggest that the Universe was significantly neutral at this epoch. Despite extensive searches, only one quasar has been known at redshifts greater than 7, at 7.09. Eduardo Banados and colleagues report observations of a quasar at a redshift of 7.54, when the Universe was just 690 million years old, with a black-hole mass 800 million times the mass of the Sun. The spectrum shows that the quasar's Lyman α emission is being substantially absorbed by an intergalactic medium containing significantly neutral hydrogen, indicating that reionization was not complete at that epoch. Quasars are the most luminous non-transient objects known and as a result they enable studies of the Universe at the earliest cosmic epochs. Despite extensive efforts, however, the quasar ULAS J1120 + 0641 at redshift z = 7.09 has remained the only one known at z > 7 for more than half a decade [1]. Here we report observations of the quasar ULAS J134208.10 + 092838.61 (hereafter J1342 + 0928) at redshift z = 7.54. This quasar has a bolometric luminosity of 4 × 10^13 times the luminosity of the Sun and a black-hole mass of 8 × 10^8 solar masses. The existence of this supermassive black hole when the Universe was only 690 million years old—just five per cent of its current age—reinforces models of early black-hole growth that allow black holes with initial masses of more than about 10^4 solar masses [2,3] or episodic hyper-Eddington accretion [4,5]. We see strong evidence of absorption of the spectrum of the quasar redwards of the Lyman α emission line (the Gunn–Peterson damping wing), as would be expected if a significant amount (more than 10 per cent) of the hydrogen in the intergalactic medium surrounding J1342 + 0928 is neutral. We derive such a significant fraction of neutral hydrogen, although the exact fraction depends on the modelling. However, even in our most conservative analysis we find a fraction of more than 0.33 (0.11) at 68 per cent (95 per cent) probability, indicating that we are probing well within the reionization epoch of the Universe.

857 citations


Proceedings Article
15 Feb 2018
TL;DR: In this paper, the authors propose to model the traffic flow as a diffusion process on a directed graph and introduce Diffusion Convolutional Recurrent Neural Network (DCRNN), a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flows.
Abstract: Spatiotemporal forecasting has various applications in the neuroscience, climate and transportation domains. Traffic forecasting is one canonical example of such a learning task. The task is challenging due to (1) complex spatial dependency on road networks, (2) non-linear temporal dynamics with changing road conditions and (3) the inherent difficulty of long-term forecasting. To address these challenges, we propose to model the traffic flow as a diffusion process on a directed graph and introduce the Diffusion Convolutional Recurrent Neural Network (DCRNN), a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flow. Specifically, DCRNN captures the spatial dependency using bidirectional random walks on the graph, and the temporal dependency using the encoder-decoder architecture with scheduled sampling. We evaluate the framework on two real-world large-scale road network traffic datasets and observe consistent improvements of 12-15% over state-of-the-art baselines.
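
The core spatial operation described here, diffusion convolution, sums filtered powers of the forward and backward random-walk transition matrices of the road graph. Below is a minimal NumPy sketch under our reading of the abstract (the walk order and filter weights are free parameters; the full model wraps this operation inside a recurrent encoder-decoder):

```python
import numpy as np

def diffusion_conv(X, W, theta_fwd, theta_bwd):
    """Bidirectional diffusion convolution on a directed graph.

    X: (num_nodes, num_features) node signal.
    W: (num_nodes, num_nodes) nonnegative weighted adjacency matrix;
       assumes every node has at least one in- and out-going edge.
    theta_fwd, theta_bwd: length-K filter coefficients.
    """
    # Row-normalized transition matrices for forward/backward random walks.
    P_fwd = W / W.sum(axis=1, keepdims=True)
    P_bwd = W.T / W.T.sum(axis=1, keepdims=True)

    out = np.zeros_like(X)
    acc_f, acc_b = X.copy(), X.copy()
    for k in range(len(theta_fwd)):
        out += theta_fwd[k] * acc_f + theta_bwd[k] * acc_b
        acc_f = P_fwd @ acc_f   # diffuse one more step downstream
        acc_b = P_bwd @ acc_b   # diffuse one more step upstream
    return out
```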

Journal ArticleDOI
15 Mar 2018-Nature
TL;DR: Measurements of a phononic quadrupole topological insulator are reported, revealing topological corner states that are an important stepping stone to the experimental realization of topologically protected wave guides in higher dimensions, thereby opening up a new path for the design of metamaterials.
Abstract: The modern theory of charge polarization in solids is based on a generalization of Berry’s phase. The possibility of the quantization of this phase arising from parallel transport in momentum space is essential to our understanding of systems with topological band structures. Although based on the concept of charge polarization, this same theory can also be used to characterize the Bloch bands of neutral bosonic systems such as photonic or phononic crystals. The theory of this quantized polarization has recently been extended from the dipole moment to higher multipole moments. In particular, a two-dimensional quantized quadrupole insulator is predicted to have gapped yet topological one-dimensional edge modes, which stabilize zero-dimensional in-gap corner states. However, such a state of matter has not previously been observed experimentally. Here we report measurements of a phononic quadrupole topological insulator. We experimentally characterize the bulk, edge and corner physics of a mechanical metamaterial (a material with tailored mechanical properties) and find the predicted gapped edge and in-gap corner states. We corroborate our findings by comparing the mechanical properties of a topologically non-trivial system to samples in other phases that are predicted by the quadrupole theory. These topological corner states are an important stepping stone to the experimental realization of topologically protected wave guides in higher dimensions, and thereby open up a new path for the design of metamaterials.

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, M. R. Abernathy3  +1135 moreInstitutions (139)
TL;DR: In this article, the authors present possible observing scenarios for the Advanced LIGO, Advanced Virgo and KAGRA gravitational-wave detectors over the next decade, with the intention of providing information to the astronomy community to facilitate planning for multi-messenger astronomy with gravitational waves.
Abstract: We present possible observing scenarios for the Advanced LIGO, Advanced Virgo and KAGRA gravitational-wave detectors over the next decade, with the intention of providing information to the astronomy community to facilitate planning for multi-messenger astronomy with gravitational waves. We estimate the sensitivity of the network to transient gravitational-wave signals, and study the capability of the network to determine the sky location of the source. We report our findings for gravitational-wave transients, with particular focus on gravitational-wave signals from the inspiral of binary neutron star systems, which are the most promising targets for multi-messenger astronomy. The ability to localize the sources of the detected signals depends on the geographical distribution of the detectors and their relative sensitivity, and 90% credible regions can be as large as thousands of square degrees when only two sensitive detectors are operational. Determining the sky position of a significant fraction of detected signals to areas of 5–20 deg^2 requires at least three detectors of sensitivity within a factor of ∼2 of each other and with a broad frequency bandwidth. When all detectors, including KAGRA and the third LIGO detector in India, reach design sensitivity, a significant fraction of gravitational-wave signals will be localized to a few square degrees by gravitational-wave observations alone.

Posted Content
TL;DR: A surprisingly simple model and associated design choices are shown to lead to superior predictions, resulting in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods.
Abstract: Per-pixel ground-truth depth data is challenging to acquire at scale. To overcome this limitation, self-supervised learning has emerged as a promising alternative for training models to perform monocular depth estimation. In this paper, we propose a set of improvements, which together result in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods. Research on self-supervised monocular training usually explores increasingly complex architectures, loss functions, and image formation models, all of which have recently helped to close the gap with fully-supervised methods. We show that a surprisingly simple model, and associated design choices, lead to superior predictions. In particular, we propose (i) a minimum reprojection loss, designed to robustly handle occlusions, (ii) a full-resolution multi-scale sampling method that reduces visual artifacts, and (iii) an auto-masking loss to ignore training pixels that violate camera motion assumptions. We demonstrate the effectiveness of each component in isolation, and show high quality, state-of-the-art results on the KITTI benchmark.
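
The first proposed component, the minimum reprojection loss, is simple to state: at each pixel, take the minimum photometric error over the warped source frames instead of the average, so views in which the pixel is occluded do not pollute the loss. A NumPy sketch under that reading (plain L1 error for brevity; the paper combines SSIM with L1):

```python
import numpy as np

def min_reprojection_loss(target, warped_sources):
    """Per-pixel minimum photometric error over reprojected source frames.

    target: (H, W, 3) target image.
    warped_sources: (S, H, W, 3) source frames warped into the target view
    using the predicted depth and camera pose.
    """
    # Photometric error per source frame and pixel (L1 here for brevity).
    errors = np.abs(warped_sources - target[None]).mean(axis=-1)  # (S, H, W)
    # The min over sources discounts pixels occluded in some frames.
    return errors.min(axis=0).mean()
```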

Journal ArticleDOI
TL;DR: A hybrid catalyst constructed from iron and dinickel phosphides on nickel foam is reported that drives both the hydrogen and oxygen evolution reactions well in base, and thus substantially expedites overall water splitting, outperforming the integrated iridium(IV) oxide and platinum couple (1.57 V).
Abstract: Water electrolysis is an advanced energy conversion technology to produce hydrogen as a clean and sustainable chemical fuel, potentially storing the abundant but intermittent renewable energy sources at scale. Since overall water splitting is an uphill reaction that proceeds with low efficiency, innovative breakthroughs are desirable to greatly improve the efficiency by rationally designing non-precious-metal-based robust bifunctional catalysts that promote both the cathodic hydrogen evolution and anodic oxygen evolution reactions. We report a hybrid catalyst constructed from iron and dinickel phosphides on nickel foams that drives both the hydrogen and oxygen evolution reactions well in base, and thus substantially expedites overall water splitting, delivering 10 mA cm−2 at 1.42 V; this outperforms the integrated iridium(IV) oxide and platinum couple (1.57 V) and is among the best activities reported to date. In particular, it delivers 500 mA cm−2 at 1.72 V without decay even after a 40-hour durability test, showing great potential for large-scale applications.
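
For context (a standard electrochemical reference point, not stated in the abstract): the thermodynamic minimum cell voltage for water electrolysis is 1.23 V, so driving 10 mA cm−2 at 1.42 V corresponds to a full-cell overpotential of 1.42 − 1.23 = 0.19 V, versus 1.57 − 1.23 = 0.34 V for the benchmark IrO2/Pt couple quoted.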

Journal ArticleDOI
TL;DR: It is found that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.
Abstract: Being able to replicate scientific findings is crucial for scientific progress. We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015. The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications. The replications are high powered, with sample sizes on average about five times higher than in the original studies. We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size. Replicability varies between 12 (57%) and 14 (67%) studies for complementary replicability indicators. Consistent with these results, the estimated true-positive rate is 67% in a Bayesian analysis. The relative effect size of true positives is estimated to be 71%, suggesting that both false positives and inflated effect sizes of true positives contribute to imperfect reproducibility. Furthermore, we find that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.

Journal ArticleDOI
TL;DR: In this article, spectral proper orthogonal decomposition (SPOD) is studied theoretically and demonstrated in the analysis of the complex Ginzburg–Landau equation and a turbulent jet.
Abstract: We consider the frequency domain form of proper orthogonal decomposition (POD), called spectral proper orthogonal decomposition (SPOD). Spectral POD is derived from a space–time POD problem for statistically stationary flows and leads to modes that each oscillate at a single frequency. This form of POD goes back to the original work of Lumley (Stochastic Tools in Turbulence, Academic Press, 1970), but has been overshadowed by a space-only form of POD since the 1990s. We clarify the relationship between these two forms of POD and show that SPOD modes represent structures that evolve coherently in space and time, while space-only POD modes in general do not. We also establish a relationship between SPOD and dynamic mode decomposition (DMD); we show that SPOD modes are in fact optimally averaged DMD modes obtained from an ensemble DMD problem for stationary flows. Accordingly, SPOD modes represent structures that are dynamic in the same sense as DMD modes but also optimally account for the statistical variability of turbulent flows. Finally, we establish a connection between SPOD and resolvent analysis. The key observation is that the resolvent-mode expansion coefficients must be regarded as statistical quantities to ensure convergent approximations of the flow statistics. When the expansion coefficients are uncorrelated, we show that SPOD and resolvent modes are identical. Our theoretical results and the overall utility of SPOD are demonstrated using two example problems: the complex Ginzburg–Landau equation and a turbulent jet.
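
As described, SPOD reduces in practice to an eigendecomposition of the cross-spectral density matrix estimated from blocked FFTs of the snapshot data. A minimal NumPy sketch under standard Welch-style estimation (no windowing or overlap, uniform weights; all simplifications for brevity):

```python
import numpy as np

def spod(Q, nblocks, nfft):
    """Minimal SPOD of snapshot data Q with shape (space, time).

    Returns, for each frequency, the SPOD eigenvalues (modal energies)
    and modes from a Welch-style cross-spectral density estimate.
    """
    # Split the time series into non-overlapping blocks and FFT in time.
    blocks = [Q[:, i * nfft:(i + 1) * nfft] for i in range(nblocks)]
    Q_hat = np.stack([np.fft.fft(b, axis=1) for b in blocks])  # (B, N, F)

    evals, modes = [], []
    for f in range(nfft):
        Qf = Q_hat[:, :, f].T / np.sqrt(nblocks)        # (space, blocks)
        # Method of snapshots: eigendecompose the small (B x B) matrix.
        lam, theta = np.linalg.eigh(Qf.conj().T @ Qf)
        order = np.argsort(lam)[::-1]
        evals.append(lam[order])
        modes.append(Qf @ theta[:, order] / np.sqrt(np.abs(lam[order]) + 1e-30))
    return np.array(evals), modes
```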

Journal ArticleDOI
TL;DR: The Feedback In Realistic Environments (FIRE) project explores feedback in cosmological galaxy formation simulations; this paper introduces FIRE-2, an updated numerical implementation of FIRE physics motivated by more accurate numerics and the exploration of new physics (e.g. magnetic fields).
Abstract: The Feedback In Realistic Environments (FIRE) project explores feedback in cosmological galaxy formation simulations. Previous FIRE simulations used an identical source code (“FIRE-1”) for consistency. Motivated by the development of more accurate numerics – including hydrodynamic solvers, gravitational softening, and supernova coupling algorithms – and exploration of new physics (e.g. magnetic fields), we introduce “FIRE-2”, an updated numerical implementation of FIRE physics for the GIZMO code. We run a suite of simulations and compare against FIRE-1: overall, FIRE-2 improvements do not qualitatively change galaxy-scale properties. We pursue an extensive study of numerics versus physics. Details of the star-formation algorithm, cooling physics, and chemistry have weak effects, provided that we include metal-line cooling and star formation occurs at higher-than-mean densities. We present new resolution criteria for high-resolution galaxy simulations. Most galaxy-scale properties are robust to numerics we test, provided: (1) Toomre masses are resolved; (2) feedback coupling ensures conservation, and (3) individual supernovae are time-resolved. Stellar masses and profiles are most robust to resolution, followed by metal abundances and morphologies, followed by properties of winds and circum-galactic media (CGM). Central (∼kpc) mass concentrations in massive (>L*) galaxies are sensitive to numerics (via trapping/recycling of winds in hot halos). Multiple feedback mechanisms play key roles: supernovae regulate stellar masses/winds; stellar mass-loss fuels late star formation; radiative feedback suppresses accretion onto dwarfs and instantaneous star formation in disks. We provide all initial conditions and numerical algorithms used.

Journal ArticleDOI
Andrew Shepherd1, Erik R. Ivins2, Eric Rignot3, Ben Smith4, Michiel R. van den Broeke, Isabella Velicogna3, Pippa L. Whitehouse5, Kate Briggs1, Ian Joughin4, Gerhard Krinner6, Sophie Nowicki7, Tony Payne8, Ted Scambos9, Nicole Schlegel2, Geruo A3, Cécile Agosta, Andreas P. Ahlstrøm10, Greg Babonis11, Valentina R. Barletta12, Alejandro Blazquez, Jennifer Bonin13, Beata Csatho11, Richard I. Cullather7, Denis Felikson14, Xavier Fettweis, René Forsberg12, Hubert Gallée6, Alex S. Gardner2, Lin Gilbert15, Andreas Groh16, Brian Gunter17, Edward Hanna18, Christopher Harig19, Veit Helm20, Alexander Horvath21, Martin Horwath16, Shfaqat Abbas Khan12, Kristian K. Kjeldsen10, Hannes Konrad1, Peter L. Langen22, Benoit S. Lecavalier23, Bryant D. Loomis7, Scott B. Luthcke7, Malcolm McMillan1, Daniele Melini24, Sebastian H. Mernild25, Sebastian H. Mernild26, Sebastian H. Mernild27, Yara Mohajerani3, Philip Moore28, Jeremie Mouginot6, Jeremie Mouginot3, Gorka Moyano, Alan Muir15, Thomas Nagler, Grace A. Nield5, Johan Nilsson2, Brice Noël, Ines Otosaka1, Mark E. Pattle, W. Richard Peltier29, Nadege Pie14, Roelof Rietbroek30, Helmut Rott, Louise Sandberg-Sørensen12, Ingo Sasgen20, Himanshu Save14, Bernd Scheuchl3, Ernst Schrama31, Ludwig Schröder16, Ki-Weon Seo32, Sebastian B. Simonsen12, Thomas Slater1, Giorgio Spada33, T. C. Sutterley3, Matthieu Talpe9, Lev Tarasov23, Willem Jan van de Berg, Wouter van der Wal31, Melchior van Wessem, Bramha Dutt Vishwakarma34, David N. Wiese2, Bert Wouters 
14 Jun 2018-Nature
TL;DR: This work combines satellite observations of the Antarctic Ice Sheet's changing volume, flow and gravitational attraction with modelling of its surface mass balance to show that the ice sheet lost 2,720 ± 1,390 billion tonnes of ice between 1992 and 2017, corresponding to an increase in mean sea level of 7.6 ± 3.9 millimetres.
Abstract: The Antarctic Ice Sheet is an important indicator of climate change and driver of sea-level rise. Here we combine satellite observations of its changing volume, flow and gravitational attraction with modelling of its surface mass balance to show that it lost 2,720 ± 1,390 billion tonnes of ice between 1992 and 2017, which corresponds to an increase in mean sea level of 7.6 ± 3.9 millimetres (errors are one standard deviation). Over this period, ocean-driven melting has caused rates of ice loss from West Antarctica to increase from 53 ± 29 billion to 159 ± 26 billion tonnes per year; ice-shelf collapse has increased the rate of ice loss from the Antarctic Peninsula from 7 ± 13 billion to 33 ± 16 billion tonnes per year. We find large variations in and among model estimates of surface mass balance and glacial isostatic adjustment for East Antarctica, with its average rate of mass gain over the period 1992–2017 (5 ± 46 billion tonnes per year) being the least certain.

Journal ArticleDOI
29 Jun 2018-Science
TL;DR: The development and validation of dLight1 is reported, a novel suite of intensity-based genetically encoded dopamine indicators that enables ultrafast optical recording of neuronal dopamine dynamics in behaving mice and permits robust detection of physiologically and behaviorally relevant dopamine transients.
Abstract: Neuromodulatory systems exert profound influences on brain function. Understanding how these systems modify the operating mode of target circuits requires measuring spatiotemporally precise neuromodulator release. We developed dLight1, an intensity-based genetically encoded dopamine indicator, to enable optical recording of dopamine dynamics with high spatiotemporal resolution in behaving mice. We demonstrated the utility of dLight1 by imaging dopamine dynamics simultaneously with pharmacological manipulation, electrophysiological or optogenetic stimulation, and calcium imaging of local neuronal activity. dLight1 enabled chronic tracking of learning-induced changes in millisecond dopamine transients in striatum. Further, we used dLight1 to image spatially distinct, functionally heterogeneous dopamine transients relevant to learning and motor control in cortex. We also validated our sensor design platform for developing norepinephrine, serotonin, melatonin, and opioid neuropeptide indicators.

Journal ArticleDOI
10 Jan 2018-Nature
TL;DR: In this article, it was shown that the lowest exciton in caesium lead halide perovskites (CsPbX_3, with X = Cl, Br or I) involves a highly emissive triplet state.
Abstract: Nanostructured semiconductors emit light from electronic states known as excitons. For organic materials, Hund’s rules state that the lowest-energy exciton is a poorly emitting triplet state. For inorganic semiconductors, similar rules predict an analogue of this triplet state known as the ‘dark exciton’. Because dark excitons release photons slowly, hindering emission from inorganic nanostructures, materials that disobey these rules have been sought. However, despite considerable experimental and theoretical efforts, no inorganic semiconductors have been identified in which the lowest exciton is bright. Here we show that the lowest exciton in caesium lead halide perovskites (CsPbX_3, with X = Cl, Br or I) involves a highly emissive triplet state. We first use an effective-mass model and group theory to demonstrate the possibility of such a state existing, which can occur when the strong spin–orbit coupling in the conduction band of a perovskite is combined with the Rashba effect. We then apply our model to CsPbX_3 nanocrystals, and measure size- and composition-dependent fluorescence at the single-nanocrystal level. The bright triplet character of the lowest exciton explains the anomalous photon-emission rates of these materials, which emit about 20 and 1,000 times faster than any other semiconductor nanocrystal at room and cryogenic temperatures, respectively. The existence of this bright triplet exciton is further confirmed by analysis of the fine structure in low-temperature fluorescence spectra. For semiconductor nanocrystals, which are already used in lighting, lasers and displays, these excitons could lead to materials with brighter emission. More generally, our results provide criteria for identifying other semiconductors that exhibit bright excitons, with potential implications for optoelectronic devices.

Journal ArticleDOI
06 Sep 2018-Cell
TL;DR: This work demonstrates organization in the tumor-immune microenvironment that is structured in cellular composition, spatial arrangement, and regulatory-protein expression, and provides a framework for applying multiplexed imaging to immune oncology.

Journal ArticleDOI
TL;DR: The evolution of nature's enzymes can lead to the discovery of new reactivity, transformations not known in biology, and reactivity inaccessible by small‐molecule catalysts.
Abstract: Tailor-made: Discussed herein is the ability to adapt biology's mechanisms for innovation and optimization to solving problems in chemistry and engineering. The evolution of nature's enzymes can lead to the discovery of new reactivity, transformations not known in biology, and reactivity inaccessible by small-molecule catalysts.

Journal ArticleDOI
25 Apr 2018-Nature
TL;DR: Any application of an optical-frequency source could benefit from the high-precision optical synthesis presented here, and leveraging high-volume semiconductor processing built around advanced materials could allow such low-cost, low-power and compact integrated-photonics devices to be widely used.
Abstract: Optical-frequency synthesizers, which generate frequency-stable light from a single microwave-frequency reference, are revolutionizing ultrafast science and metrology, but their size, power requirement and cost need to be reduced if they are to be more widely used. Integrated-photonics microchips can be used in high-coherence applications, such as data transmission1, highly optimized physical sensors2 and harnessing quantum states3, to lower cost and increase efficiency and portability. Here we describe a method for synthesizing the absolute frequency of a lightwave signal, using integrated photonics to create a phase-coherent microwave-to-optical link. We use a heterogeneously integrated III–V/silicon tunable laser, which is guided by nonlinear frequency combs fabricated on separate silicon chips and pumped by off-chip lasers. The laser frequency output of our optical-frequency synthesizer can be programmed by a microwave clock across 4 terahertz near 1,550 nanometres (the telecommunications C-band) with 1 hertz resolution. Our measurements verify that the output of the synthesizer is exceptionally stable across this region (synthesis error of 7.7 × 10−15 or below). Any application of an optical-frequency source could benefit from the high-precision optical synthesis presented here. Leveraging high-volume semiconductor processing built around advanced materials could allow such low-cost, low-power and compact integrated-photonics devices to be widely used. An optical-frequency synthesizer based on stabilized frequency combs has been developed utilizing chip-scale devices as key components, in a move towards using integrated photonics technology for ultrafast science and metrology.

Journal ArticleDOI
TL;DR: Third-generation in situ HCR v3.0 exploits automatic background suppression to enable multiplexed quantitative mRNA imaging and flow cytometry with dramatically enhanced performance and ease of use.
Abstract: In situ hybridization based on the mechanism of the hybridization chain reaction (HCR) has addressed multi-decade challenges that impeded imaging of mRNA expression in diverse organisms, offering a unique combination of multiplexing, quantitation, sensitivity, resolution and versatility. Here, with third-generation in situ HCR, we augment these capabilities using probes and amplifiers that combine to provide automatic background suppression throughout the protocol, ensuring that reagents will not generate amplified background even if they bind non-specifically within the sample. Automatic background suppression dramatically enhances performance and robustness, combining the benefits of a higher signal-to-background ratio with the convenience of using unoptimized probe sets for new targets and organisms. In situ HCR v3.0 enables three multiplexed quantitative analysis modes: (1) qHCR imaging – analog mRNA relative quantitation with subcellular resolution in the anatomical context of whole-mount vertebrate embryos; (2) qHCR flow cytometry – analog mRNA relative quantitation for high-throughput expression profiling of mammalian and bacterial cells; and (3) dHCR imaging – digital mRNA absolute quantitation via single-molecule imaging in thick autofluorescent samples.

Journal ArticleDOI
26 Jul 2018-Cell
TL;DR: This work develops split-pool recognition of interactions by tag extension (SPRITE), a method that enables genome-wide detection of higher-order interactions within the nucleus and generates a global model whereby nuclear bodies act as inter-chromosomal hubs that shape the overall packaging of DNA in the nucleus.

Proceedings ArticleDOI
18 Jun 2018
TL;DR: The iNaturalist species classification and detection dataset contains 859,000 images from over 5,000 different species of plants and animals, captured in a wide variety of situations from all over the world.
Abstract: Existing image classification datasets used in computer vision tend to have a uniform distribution of images across object categories. In contrast, the natural world is heavily imbalanced, as some species are more abundant and easier to photograph than others. To encourage further progress in challenging real world conditions we present the iNaturalist species classification and detection dataset, consisting of 859,000 images from over 5,000 different species of plants and animals. It features visually similar species, captured in a wide variety of situations, from all over the world. Images were collected with different camera types, have varying image quality, feature a large class imbalance, and have been verified by multiple citizen scientists. We discuss the collection of the dataset and present extensive baseline experiments using state-of-the-art computer vision classification and detection models. Results show that current non-ensemble-based methods achieve only 67% top-1 classification accuracy, illustrating the difficulty of the dataset. Specifically, we observe poor results for classes with small numbers of training examples, suggesting more attention is needed in low-shot learning.
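
The low-shot weakness noted above shows up directly in a per-class accuracy breakdown; the helper below is a generic evaluation sketch (not the paper's code):

```python
import numpy as np

def per_class_top1(y_true, y_pred, n_classes):
    """Top-1 accuracy per class from integer label arrays.

    On heavily imbalanced data such as iNaturalist, classes with few
    training images typically score far below the overall accuracy.
    """
    acc = np.full(n_classes, np.nan)
    for c in range(n_classes):
        mask = y_true == c
        if mask.any():
            acc[c] = (y_pred[mask] == c).mean()
    return acc
```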

Journal ArticleDOI
30 Aug 2018-Nature
TL;DR: How optical metamaterials are expected to enhance the performance of the next generation of integrated photonic devices is reviewed, and some of the challenges encountered in the transition from concept demonstration to viable technology are explored.
Abstract: In the late nineteenth century, Heinrich Hertz demonstrated that the electromagnetic properties of materials are intimately related to their structure at the subwavelength scale by using wire grids with centimetre spacing to manipulate metre-long radio waves. More recently, the availability of nanometre-scale fabrication techniques has inspired scientists to investigate subwavelength-structured metamaterials with engineered optical properties at much shorter wavelengths, in the infrared and visible regions of the spectrum. Here we review how optical metamaterials are expected to enhance the performance of the next generation of integrated photonic devices, and explore some of the challenges encountered in the transition from concept demonstration to viable technology.

Journal ArticleDOI
TL;DR: In this paper, a non-local correction to the Schwarzian effective action is found by integrating out the bulk degrees of freedom in a certain variant of dilaton gravity, and general properties of out-of-time-order correlators are discussed.
Abstract: We give an exposition of the SYK model with several new results. A non-local correction to the Schwarzian effective action is found. The same action is obtained by integrating out the bulk degrees of freedom in a certain variant of dilaton gravity. We also discuss general properties of out-of-time-order correlators.
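
For reference (standard SYK background, not a result unique to this paper), the Schwarzian effective action that the non-local term corrects is $S[f] = -C \int dt\, \mathrm{Sch}(f, t)$, where $\mathrm{Sch}(f, t) = f'''/f' - \tfrac{3}{2}\,(f''/f')^2$ is the Schwarzian derivative of the time-reparametrization mode $f(t)$.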

Journal ArticleDOI
01 Apr 2018
TL;DR: In this paper, the authors report the fabrication of large-area, high-quality 2D tellurium (tellurene) using a substrate-free solution process, overcoming drawbacks of current synthesis methods, including limitations in crystal size and stability.
Abstract: The reliable production of two-dimensional (2D) crystals is essential for the development of new technologies based on 2D materials. However, current synthesis methods suffer from a variety of drawbacks, including limitations in crystal size and stability. Here, we report the fabrication of large-area, high-quality 2D tellurium (tellurene) using a substrate-free solution process. Our approach can create crystals with process-tunable thickness, from a monolayer to tens of nanometres, and with lateral sizes of up to 100 µm. The chiral-chain van der Waals structure of tellurene gives rise to strong in-plane anisotropic properties and large thickness-dependent shifts in Raman vibrational modes, which is not observed in other 2D layered materials. We also fabricate tellurene field-effect transistors, which exhibit air-stable performance at room temperature for over two months, on/off ratios on the order of 10^6, and field-effect mobilities of about 700 cm^2 V^(−1) s^(−1). Furthermore, by scaling down the channel length and integrating with high-k dielectrics, transistors with a significant on-state current density of 1 A mm^(−1) are demonstrated.
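
For context (a textbook relation, not specific to this paper), field-effect mobilities like the ≈700 cm^2 V^(−1) s^(−1) quoted are conventionally extracted from the linear-regime transconductance as μ_FE = (L / (W C_ox V_DS)) · (dI_D/dV_G), with channel length L, channel width W, gate capacitance per unit area C_ox, and drain bias V_DS.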