
Showing papers by "California Institute of Technology" published in 2015


Journal ArticleDOI
TL;DR: A standard protocol is used as a primary screen for evaluating the activity, short-term (2 h) stability, and electrochemically active surface area (ECSA) of 18 electrocatalysts for the hydrogen evolution reaction (HER) and 26 electrocatalysts for the oxygen evolution reaction (OER) under conditions relevant to an integrated solar water-splitting device in aqueous acidic or alkaline solution.
Abstract: Objective comparisons of electrocatalyst activity and stability using standard methods under identical conditions are necessary to evaluate the viability of existing electrocatalysts for integration into solar-fuel devices as well as to help inform the development of new catalytic systems. Herein, we use a standard protocol as a primary screen for evaluating the activity, short-term (2 h) stability, and electrochemically active surface area (ECSA) of 18 electrocatalysts for the hydrogen evolution reaction (HER) and 26 electrocatalysts for the oxygen evolution reaction (OER) under conditions relevant to an integrated solar water-splitting device in aqueous acidic or alkaline solution. Our primary figure of merit is the overpotential necessary to achieve a magnitude current density of 10 mA cm^−2 per geometric area, the approximate current density expected for a 10% efficient solar-to-fuels conversion device under 1 sun illumination. The specific activity per ECSA of each material is also reported. Among HER...
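The 10 mA cm^−2 benchmark follows from simple arithmetic. The back-of-the-envelope check below is only a sketch; the ~100 mW cm^−2 value for 1 sun (AM1.5) insolation and the 1.23 V thermodynamic potential for water splitting are standard assumed values, not numbers stated in the abstract.

```python
# Back-of-the-envelope check of the 10 mA cm^-2 figure of merit (assumed standard values).
P_SUN = 100.0       # mW/cm^2, approximate AM1.5 solar irradiance (assumption)
ETA_STF = 0.10      # 10% solar-to-fuel efficiency, as in the abstract
E_WATER = 1.23      # V, thermodynamic potential for water splitting (assumption)

# Power stored in fuel per unit area, divided by the energy stored per unit charge,
# gives the required operating current density.
j_op = ETA_STF * P_SUN / E_WATER   # mA/cm^2, since mW/V = mA
print(f"Operating current density ~ {j_op:.1f} mA/cm^2, i.e. of order 10 mA/cm^2")
```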

2,877 citations


Journal ArticleDOI
TL;DR: The third generation of the Sloan Digital Sky Survey (SDSS-III) took data from 2008 to 2014 using the original SDSS wide-field imager, the original and an upgraded multi-object fiber-fed optical spectrograph, a new near-infrared high-resolution spectrograph, and a novel optical interferometer.
Abstract: The third generation of the Sloan Digital Sky Survey (SDSS-III) took data from 2008 to 2014 using the original SDSS wide-field imager, the original and an upgraded multi-object fiber-fed optical spectrograph, a new near-infrared high-resolution spectrograph, and a novel optical interferometer. All the data from SDSS-III are now made public. In particular, this paper describes Data Release 11 (DR11) including all data acquired through 2013 July, and Data Release 12 (DR12) adding data acquired through 2014 July (including all data included in previous data releases), marking the end of SDSS-III observing. Relative to our previous public release (DR10), DR12 adds one million new spectra of galaxies and quasars from the Baryon Oscillation Spectroscopic Survey (BOSS) over an additional 3000 sq. deg of sky, more than triples the number of H-band spectra of stars as part of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE), and includes repeated accurate radial velocity measurements of 5500 stars from the Multi-Object APO Radial Velocity Exoplanet Large-area Survey (MARVELS). The APOGEE outputs now include measured abundances of 15 different elements for each star. In total, SDSS-III added 2350 sq. deg of ugriz imaging; 155,520 spectra of 138,099 stars as part of the Sloan Exploration of Galactic Understanding and Evolution 2 (SEGUE-2) survey; 2,497,484 BOSS spectra of 1,372,737 galaxies, 294,512 quasars, and 247,216 stars over 9376 sq. deg; 618,080 APOGEE spectra of 156,593 stars; and 197,040 MARVELS spectra of 5,513 stars. Since its first light in 1998, SDSS has imaged over 1/3 of the Celestial sphere in five bands and obtained over five million astronomical spectra.

2,471 citations


Journal ArticleDOI
TL;DR: A summary of the technical advances that are incorporated in the fourth major release of the Q-Chem quantum chemistry program is provided in this paper, covering approximately the last seven years, including developments in density functional theory and algorithms, nuclear magnetic resonance (NMR) property evaluation, coupled cluster and perturbation theories, methods for electronically excited and open-shell species, tools for treating extended environments, algorithms for walking on potential surfaces, analysis tools, energy and electron transfer modelling, parallel computing capabilities, and graphical user interfaces.
Abstract: A summary of the technical advances that are incorporated in the fourth major release of the Q-Chem quantum chemistry program is provided, covering approximately the last seven years. These include developments in density functional theory methods and algorithms, nuclear magnetic resonance (NMR) property evaluation, coupled cluster and perturbation theories, methods for electronically excited and open-shell species, tools for treating extended environments, algorithms for walking on potential surfaces, analysis tools, energy and electron transfer modelling, parallel computing capabilities, and graphical user interfaces. In addition, a selection of example case studies that illustrate these capabilities is given. These include extensive benchmarks of the comparative accuracy of modern density functionals for bonded and non-bonded interactions, tests of attenuated second order Moller–Plesset (MP2) methods for intermolecular interactions, a variety of parallel performance benchmarks, and tests of the accuracy of implicit solvation models. Some specific chemical examples include calculations on the strongly correlated Cr_2 dimer, exploring zeolite-catalysed ethane dehydrogenation, energy decomposition analysis of a charged ter-molecular complex arising from glycerol photoionisation, and natural transition orbitals for a Frenkel exciton state in a nine-unit model of a self-assembling nanotube.

2,396 citations


Journal ArticleDOI
TL;DR: A metasurface platform based on high-contrast dielectric elliptical nanoposts is presented that provides complete control of polarization and phase with subwavelength spatial resolution and an experimentally measured efficiency ranging from 72% to 97%, depending on the exact design.
Abstract: Metasurfaces are planar structures that locally modify the polarization, phase and amplitude of light in reflection or transmission, thus enabling lithographically patterned flat optical components with functionalities controlled by design. Transmissive metasurfaces are especially important, as most optical systems used in practice operate in transmission. Several types of transmissive metasurface have been realized, but with either low transmission efficiencies or limited control over polarization and phase. Here, we show a metasurface platform based on high-contrast dielectric elliptical nanoposts that provides complete control of polarization and phase with subwavelength spatial resolution and an experimentally measured efficiency ranging from 72% to 97%, depending on the exact design. Such complete control enables the realization of most free-space transmissive optical elements such as lenses, phase plates, wave plates, polarizers, beamsplitters, as well as polarization-switchable phase holograms and arbitrary vector beam generators using the same metamaterial platform.
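To first order, a birefringent elliptical post of this kind can be modeled as a waveplate-like element whose Jones matrix is set by its two principal-axis phase delays and its in-plane rotation. The sketch below illustrates that textbook model only; it is not the paper's design procedure, and the phases, angle, and input polarization are hypothetical.

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix for an in-plane rotation by theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def nanopost_jones(phi_x, phi_y, theta):
    """Jones matrix of an ideal birefringent element (e.g. an elliptical post)
    with phase delays phi_x, phi_y along its principal axes, rotated by theta."""
    D = np.diag([np.exp(1j * phi_x), np.exp(1j * phi_y)])
    return rot(theta) @ D @ rot(-theta)

# Hypothetical example: a half-wave-like element (pi phase difference) at 45 degrees
# converts x-polarized light to y-polarized light while adding an overall phase.
J = nanopost_jones(phi_x=np.pi, phi_y=0.0, theta=np.pi / 4)
E_in = np.array([1.0, 0.0])          # x-polarized input
E_out = J @ E_in
print("Output Jones vector:", np.round(E_out, 3))   # ~ [0, e^{i*pi}] -> y-polarized
```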

2,126 citations


Journal ArticleDOI
09 Apr 2015-Cell
TL;DR: It is demonstrated that indigenous spore-forming bacteria from the mouse and human microbiota promote 5-HT biosynthesis from colonic enterochromaffin cells (ECs), which supply 5-HT to the mucosa, lumen, and circulating platelets, and that elevating luminal concentrations of particular microbial metabolites increases colonic and blood 5-HT in germ-free mice.

2,047 citations


01 Oct 2015
TL;DR: This is the eighteenth in a series of evaluated sets of rate constants, photochemical cross sections, heterogeneous parameters, and thermochemical parameters compiled by the NASA Panel for Data Evaluation.
Abstract: This is the eighteenth in a series of evaluated sets of rate constants, photochemical cross sections, heterogeneous parameters, and thermochemical parameters compiled by the NASA Panel for Data Evaluation. The data are used primarily to model stratospheric and upper tropospheric processes, with particular emphasis on the ozone layer and its possible perturbation by anthropogenic and natural phenomena. The evaluation is available in electronic form from the following Internet URL: http://jpldataeval.jpl.nasa.gov/

1,830 citations


Journal ArticleDOI
19 Feb 2015-Nature
TL;DR: A fine-mapping algorithm is developed to identify candidate causal variants for 21 autoimmune diseases from genotyping data, and it is found that most non-coding risk variants, including those that alter gene expression, affect non-canonical sequence determinants not well-explained by current gene regulatory models.
Abstract: Genome-wide association studies have identified loci underlying human diseases, but the causal nucleotide changes and mechanisms remain largely unknown. Here we developed a fine-mapping algorithm to identify candidate causal variants for 21 autoimmune diseases from genotyping data. We integrated these predictions with transcription and cis-regulatory element annotations, derived by mapping RNA and chromatin in primary immune cells, including resting and stimulated CD4(+) T-cell subsets, regulatory T cells, CD8(+) T cells, B cells, and monocytes. We find that ∼90% of causal variants are non-coding, with ∼60% mapping to immune-cell enhancers, many of which gain histone acetylation and transcribe enhancer-associated RNA upon immune stimulation. Causal variants tend to occur near binding sites for master regulators of immune differentiation and stimulus-dependent gene activation, but only 10-20% directly alter recognizable transcription factor binding motifs. Rather, most non-coding risk variants, including those that alter gene expression, affect non-canonical sequence determinants not well-explained by current gene regulatory models.
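Statistical fine-mapping of this kind converts per-variant association statistics into posterior probabilities of causality within a locus. The sketch below is a generic single-causal-variant calculation using Wakefield's approximate Bayes factors, shown only to illustrate the idea; it is not the algorithm developed in this paper, and the effect sizes, standard errors, and prior variance are hypothetical.

```python
import numpy as np

def wakefield_abf(beta, se, w=0.04):
    """Approximate Bayes factor (alternative vs. null) for one variant,
    given its effect estimate, standard error, and prior effect variance w."""
    v = se ** 2
    z2 = (beta / se) ** 2
    r = w / (v + w)
    return np.sqrt(1.0 - r) * np.exp(0.5 * r * z2)

# Hypothetical summary statistics for five variants in one associated locus.
beta = np.array([0.12, 0.30, 0.28, 0.05, 0.10])
se   = np.array([0.05, 0.05, 0.06, 0.05, 0.05])

abf = wakefield_abf(beta, se)
# Assuming exactly one causal variant and equal priors across variants,
# posterior probabilities are proportional to the Bayes factors.
posterior = abf / abf.sum()
print(np.round(posterior, 3))
```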

1,622 citations


Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, Ovsat Abdinov +5117 more · Institutions (314)
TL;DR: A measurement of the Higgs boson mass is presented based on the combined data samples of the ATLAS and CMS experiments at the CERN LHC in the H→γγ and H→ZZ→4ℓ decay channels.
Abstract: A measurement of the Higgs boson mass is presented based on the combined data samples of the ATLAS and CMS experiments at the CERN LHC in the H→γγ and H→ZZ→4l decay channels. The results are obtained from a simultaneous fit to the reconstructed invariant mass peaks in the two channels and for the two experiments. The measured masses from the individual channels and the two experiments are found to be consistent among themselves. The combined measured mass of the Higgs boson is mH=125.09±0.21 (stat)±0.11 (syst) GeV.
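As a quick consistency check, when the statistical and systematic uncertainties are treated as independent they combine in quadrature, which recovers the single total uncertainty usually quoted for this measurement from the two components given in the abstract. A minimal sketch:

```python
import math

m_H = 125.09        # GeV, combined ATLAS+CMS Higgs boson mass
stat = 0.21         # GeV, statistical uncertainty
syst = 0.11         # GeV, systematic uncertainty

# Independent uncertainties add in quadrature.
total = math.hypot(stat, syst)
print(f"m_H = {m_H:.2f} ± {total:.2f} GeV")   # ≈ 125.09 ± 0.24 GeV
```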

1,567 citations


Journal ArticleDOI
03 Apr 2015-Science
TL;DR: A dramatic improvement of thermoelectric efficiency is shown in bismuth antimony telluride samples by quickly squeezing out excess liquid during compaction, which presents an attractive path forward for thermoelectrics.
Abstract: The widespread use of thermoelectric technology is constrained by a relatively low conversion efficiency of the bulk alloys, which is evaluated in terms of a dimensionless figure of merit ( zT ). The zT of bulk alloys can be improved by reducing lattice thermal conductivity through grain boundary and point-defect scattering, which target low- and high-frequency phonons. Dense dislocation arrays formed at low-energy grain boundaries by liquid-phase compaction in Bi 0.5 Sb 1.5 Te 3 (bismuth antimony telluride) effectively scatter midfrequency phonons, leading to a substantially lower lattice thermal conductivity. Full-spectrum phonon scattering with minimal charge-carrier scattering dramatically improved the zT to 1.86 ± 0.15 at 320 kelvin (K). Further, a thermoelectric cooler confirmed the performance with a maximum temperature difference of 81 K, which is much higher than current commercial Peltier cooling devices.
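For a single-stage Peltier cooler, the textbook no-load relation ΔT_max = ½ Z T_c² links the maximum temperature difference to an effective device figure of merit Z. The sketch below inverts that relation for the 81 K reported above; the 300 K hot-side temperature is an assumption, and the estimate ignores contact resistance and the temperature dependence of zT, so it gives only a rough device-level figure.

```python
# Rough device-level check from the reported maximum temperature difference.
DT_MAX = 81.0    # K, maximum temperature difference reported in the abstract
T_HOT = 300.0    # K, assumed hot-side temperature (not given in the abstract)

# No-load single-stage Peltier relation: DT_max = 0.5 * Z * T_cold^2,
# with T_cold = T_hot - DT_max at the maximum temperature difference.
t_cold = T_HOT - DT_MAX
Z = 2.0 * DT_MAX / t_cold ** 2          # effective device figure of merit, 1/K
print(f"Z ≈ {Z:.2e} 1/K, Z*T_hot ≈ {Z * T_HOT:.2f}")
```

A device-level ZT near 1 is lower than the peak material zT of 1.86, as expected once parasitic losses and the temperature dependence of zT are included, and it is still well above what typical commercial modules achieve.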

1,429 citations


Journal ArticleDOI
21 Jan 2015-Neuron
TL;DR: The data suggest that BBB breakdown is an early event in the aging human brain that begins in the hippocampus and may contribute to cognitive impairment.

1,347 citations


Journal ArticleDOI
J. Aasi, J. Abadie, B. P. Abbott, Richard J. Abbott +884 more · Institutions (98)
TL;DR: In this paper, the authors review the performance of the LIGO instruments during this epoch, the work done to characterize the detectors and their data, and the effect that transient and continuous noise artefacts have on the sensitivity of the detectors to a variety of astrophysical sources.
Abstract: In 2009–2010, the Laser Interferometer Gravitational-Wave Observatory (LIGO) operated together with international partners Virgo and GEO600 as a network to search for gravitational waves (GWs) of astrophysical origin. The sensitivity of these detectors was limited by a combination of noise sources inherent to the instrumental design and its environment, often localized in time or frequency, that couple into the GW readout. Here we review the performance of the LIGO instruments during this epoch, the work done to characterize the detectors and their data, and the effect that transient and continuous noise artefacts have on the sensitivity of LIGO to a variety of astrophysical sources.

Journal ArticleDOI
Peter A. R. Ade, Nabila Aghanim, Zeeshan Ahmed, Randol W. Aikin +354 more · Institutions (75)
TL;DR: Strong evidence for dust and no statistically significant evidence for tensor modes are found; various model variations and extensions are probed, including adding a synchrotron component in combination with lower-frequency data, and these make little difference to the r constraint.
Abstract: We report the results of a joint analysis of data from BICEP2/Keck Array and Planck. BICEP2 and Keck Array have observed the same approximately 400 deg^2 patch of sky centered on RA 0h, Dec. −57.5 deg. The combined maps reach a depth of 57 nK deg in Stokes Q and U in a band centered at 150 GHz. Planck has observed the full sky in polarization at seven frequencies from 30 to 353 GHz, but much less deeply in any given region (1.2 μK deg in Q and U at 143 GHz). We detect 150×353 cross-correlation in B-modes at high significance. We fit the single- and cross-frequency power spectra at frequencies above 150 GHz to a lensed-ΛCDM model that includes dust and a possible contribution from inflationary gravitational waves (as parameterized by the tensor-to-scalar ratio r). We probe various model variations and extensions, including adding a synchrotron component in combination with lower frequency data, and find that these make little difference to the r constraint. Finally we present an alternative analysis which is similar to a map-based cleaning of the dust contribution, and show that this gives similar constraints. The final result is expressed as a likelihood curve for r, and yields an upper limit r_0.05 < 0.12 at 95% confidence. Marginalizing over dust and r, lensing B-modes are detected at 7.0σ significance.

Journal ArticleDOI
TL;DR: In this paper, the authors present a loophole-free violation of local realism using entangled photon pairs, ensuring that all relevant events in their Bell test are spacelike separated by placing the parties far enough apart and by using fast random number generators and high-speed polarization measurements.
Abstract: We present a loophole-free violation of local realism using entangled photon pairs. We ensure that all relevant events in our Bell test are spacelike separated by placing the parties far enough apart and by using fast random number generators and high-speed polarization measurements. A high-quality polarization-entangled source of photons, combined with high-efficiency, low-noise, single-photon detectors, allows us to make measurements without requiring any fair-sampling assumptions. Using a hypothesis test, we compute p values as small as 5.9×10^{-9} for our Bell violation while maintaining the spacelike separation of our events. We estimate the degree to which a local realistic system could predict our measurement choices. Accounting for this predictability, our smallest adjusted p value is 2.3×10^{-7}. We therefore reject the hypothesis that local realism governs our experiment.

Journal ArticleDOI
TL;DR: In this article, a first order correction to the degenerate limit of L can be found based on the measured thermopower, |S|, independent of temperature or doping.
Abstract: In analyzing zT improvements due to lattice thermal conductivity (κ_L) reduction, electrical conductivity (σ) and total thermal conductivity (κ_(Total)) are often used to estimate the electronic component of the thermal conductivity (κ_E) and in turn κ_L from κ_L ≈ κ_(Total) − LσT. The Wiedemann-Franz law, κ_E = LσT, where L is Lorenz number, is widely used to estimate κ_E from σ measurements. It is a common practice to treat L as a universal factor with 2.44 × 10^(−8) WΩK^(−2) (degenerate limit). However, significant deviations from the degenerate limit (approximately 40% or more for Kane bands) are known to occur for non-degenerate semiconductors where L converges to 1.5 × 10^(−8) WΩK^(−2) for acoustic phonon scattering. The decrease in L is correlated with an increase in thermopower (absolute value of Seebeck coefficient (S)). Thus, a first order correction to the degenerate limit of L can be based on the measured thermopower, |S|, independent of temperature or doping. We propose the equation: L = 1.5 + exp[−|S|/116] (where L is in 10^(−8) WΩK^(−2) and S in μV/K) as a satisfactory approximation for L. This equation is accurate within 5% for single parabolic band/acoustic phonon scattering assumption and within 20% for PbSe, PbS, PbTe, Si_(0.8)Ge_(0.2) where more complexity is introduced, such as non-parabolic Kane bands, multiple bands, and/or alternate scattering mechanisms. The use of this equation for L rather than a constant value (when detailed band structure and scattering mechanism is not known) will significantly improve the estimation of lattice thermal conductivity.
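The equation above is straightforward to apply to measured transport data. The sketch below implements it as written (L in 10^(−8) WΩK^(−2), S in μV/K); the Seebeck coefficient, conductivity, temperature, and total thermal conductivity in the example are hypothetical values chosen only to show the unit handling.

```python
import math

def lorenz_number(seebeck_uV_per_K):
    """Lorenz number from the single-equation approximation L = 1.5 + exp(-|S|/116),
    with S in microvolts per kelvin; returns L in W*Ohm/K^2."""
    L_e8 = 1.5 + math.exp(-abs(seebeck_uV_per_K) / 116.0)
    return L_e8 * 1e-8

# Hypothetical measurement at 300 K (values chosen for illustration only).
S = 150.0          # μV/K, Seebeck coefficient
sigma = 1.0e5      # S/m, electrical conductivity
T = 300.0          # K
kappa_total = 1.5  # W/(m*K), measured total thermal conductivity

L = lorenz_number(S)
kappa_e = L * sigma * T                 # Wiedemann-Franz electronic contribution
kappa_lattice = kappa_total - kappa_e   # κ_L ≈ κ_Total − LσT
print(f"L = {L:.2e} WΩ/K^2, κ_E = {kappa_e:.2f}, κ_L ≈ {kappa_lattice:.2f} W/(m*K)")
```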

Journal ArticleDOI
TL;DR: In this article, a catalog of modified theories of gravity for which strong-field predictions have been computed and contrasted to Einstein's theory is presented, and the current understanding of the structure and dynamics of compact objects in these theories is summarized.
Abstract: One century after its formulation, Einstein's general relativity (GR) has made remarkable predictions and turned out to be compatible with all experimental tests. Most of these tests probe the theory in the weak-field regime, and there are theoretical and experimental reasons to believe that GR should be modified when gravitational fields are strong and spacetime curvature is large. The best astrophysical laboratories to probe strong-field gravity are black holes and neutron stars, whether isolated or in binary systems. We review the motivations to consider extensions of GR. We present a (necessarily incomplete) catalog of modified theories of gravity for which strong-field predictions have been computed and contrasted to Einstein's theory, and we summarize our current understanding of the structure and dynamics of compact objects in these theories. We discuss current bounds on modified gravity from binary pulsar and cosmological observations, and we highlight the potential of future gravitational wave measurements to inform us on the behavior of gravity in the strong-field regime.

Journal ArticleDOI
TL;DR: Air pollutants consist of a complex combination of gases and particulate matter, which is either emitted directly into the atmosphere (primary) or formed in the atmosphere through gas-to-particle conversion (secondary).
Abstract: Urban air pollution represents one of the greatest environmental challenges facing mankind in the 21st century. Noticeably, many developing countries, such as China and India, have experienced severe air pollution because of their fast-developing economy and urbanization. Globally, the urbanization trend is projected to continue: 70% of the world population will reside in urban centers by 2050, and there will exist 41 megacities (with more than 10 million inhabitants) by 2030. Air pollutants consist of a complex combination of gases and particulate matter (PM). In particular, fine PM (particles with the aerodynamic diameter smaller than 2.5 μm or PM_(2.5)) profoundly impacts human health, visibility, the ecosystem, the weather, and the climate, and these PM effects are largely dependent on the aerosol properties, including the number concentration, size, and chemical composition. PM is emitted directly into the atmosphere (primary) or formed in the atmosphere through gas-to-particle conversion (secondary) (Figure 1). Also, primary and secondary PM undergoes chemical and physical transformations and is subjected to transport, cloud processing, and removal from the atmosphere.

Journal ArticleDOI
TL;DR: Polarization-insensitive, micron-thick, high-contrast transmitarray micro-lenses with focal spots as small as 0.57 λ are reported, enabling widespread adoption; a rigorous method for ultrathin lens design is also discussed.
Abstract: Flat optical devices thinner than a wavelength promise to replace conventional free-space components for wavefront and polarization control. Transmissive flat lenses are particularly interesting for applications in imaging and on-chip optoelectronic integration. Several designs based on plasmonic metasurfaces, high-contrast transmitarrays and gratings have been recently implemented but have not provided a performance comparable to conventional curved lenses. Here we report polarization-insensitive, micron-thick, high-contrast transmitarray micro-lenses with focal spots as small as 0.57 λ. The measured focusing efficiency is up to 82%. A rigorous method for ultrathin lens design, and the trade-off between high efficiency and small spot size (or large numerical aperture) are discussed. The micro-lenses, composed of silicon nano-posts on glass, are fabricated in one lithographic step that could be performed with high-throughput photo or nanoimprint lithography, thus enabling widespread adoption.

Journal ArticleDOI
06 Mar 2015-eLife
TL;DR: Optogenetic manipulations indicate that the hypothalamus plays an integral role to instantiate emotion states, and is not simply a passive effector of upstream emotion centers.
Abstract: Defensive behaviors reflect underlying emotion states, such as fear. The hypothalamus plays a role in such behaviors, but prevailing textbook views depict it as an effector of upstream emotion centers, such as the amygdala, rather than as an emotion center itself. We used optogenetic manipulations to probe the function of a specific hypothalamic cell type that mediates innate defensive responses. These neurons are sufficient to drive multiple defensive actions, and required for defensive behaviors in diverse contexts. The behavioral consequences of activating these neurons, moreover, exhibit properties characteristic of emotion states in general, including scalability, (negative) valence, generalization and persistence. Importantly, these neurons can also condition learned defensive behavior, further refuting long-standing claims that the hypothalamus is unable to support emotional learning and therefore is not an emotion center. These data indicate that the hypothalamus plays an integral role to instantiate emotion states, and is not simply a passive effector of upstream emotion centers.

Journal ArticleDOI
TL;DR: In this paper, a new model of the last deglaciation event of the Late Quaternary ice age is described and denoted as ICE-6G_C (VM5a), which has been explicitly refined by applying all of the available Global Positioning System (GPS) measurements of vertical motion of the crust that may be brought to bear to constrain the thickness of local ice cover as well as the timing of its removal.
Abstract: A new model of the last deglaciation event of the Late Quaternary ice age is here described and denoted as ICE-6G_C (VM5a). It differs from previously published models in this sequence in that it has been explicitly refined by applying all of the available Global Positioning System (GPS) measurements of vertical motion of the crust that may be brought to bear to constrain the thickness of local ice cover as well as the timing of its removal. Additional space geodetic constraints have also been applied to specify the reference frame within which the GPS data are described. The focus of the paper is upon the three main regions of Last Glacial Maximum ice cover, namely, North America, Northwestern Europe/Eurasia, and Antarctica, although Greenland and the British Isles will also be included, if peripherally, in the discussion. In each of the three major regions, the model predictions of the time rate of change of the gravitational field are also compared to that being measured by the Gravity Recovery and Climate Experiment satellites as an independent means of verifying the improvement of the model achieved by applying the GPS constraints. Several aspects of the global characteristics of this new model are also discussed, including the nature of relative sea level history predictions at far-field locations, in particular the Caribbean island of Barbados, from which especially high-quality records of postglacial sea level change are available but which records were not employed in the development of the model. Although ICE-6G_C (VM5a) is a significant improvement insofar as the most recently available GPS observations are concerned, comparison of model predictions with such far-field relative sea level histories enables us to identify a series of additional improvements that should follow from a further stage of model iteration.

Journal ArticleDOI
14 May 2015-Nature
TL;DR: A method to purify a lncRNA from cells and identify proteins interacting with it directly using quantitative mass spectrometry is developed and it is shown that SHARP, which interacts with the SMRT co-repressor that activates HDAC3, is not only essential for silencing, but is also required for the exclusion of RNA polymerase II from the inactive X.
Abstract: Many long non-coding RNAs (lncRNAs) affect gene expression, but the mechanisms by which they act are still largely unknown. One of the best-studied lncRNAs is Xist, which is required for transcriptional silencing of one X chromosome during development in female mammals. Despite extensive efforts to define the mechanism of Xist-mediated transcriptional silencing, we still do not know any proteins required for this role. The main challenge is that there are currently no methods to comprehensively define the proteins that directly interact with a lncRNA in the cell. Here we develop a method to purify a lncRNA from cells and identify proteins interacting with it directly using quantitative mass spectrometry. We identify ten proteins that specifically associate with Xist, three of these proteins--SHARP, SAF-A and LBR--are required for Xist-mediated transcriptional silencing. We show that SHARP, which interacts with the SMRT co-repressor that activates HDAC3, is not only essential for silencing, but is also required for the exclusion of RNA polymerase II (Pol II) from the inactive X. Both SMRT and HDAC3 are also required for silencing and Pol II exclusion. In addition to silencing transcription, SHARP and HDAC3 are required for Xist-mediated recruitment of the polycomb repressive complex 2 (PRC2) across the X chromosome. Our results suggest that Xist silences transcription by directly interacting with SHARP, recruiting SMRT, activating HDAC3, and deacetylating histones to exclude Pol II across the X chromosome.

Journal ArticleDOI
B. Flaugher, H. T. Diehl, K. Honscheid, T. M. C. Abbott, O. Alvarez, R. Angstadt, J. Annis, M. Antonik, O. Ballester, L. Beaufore, Gary Bernstein, R. A. Bernstein, B. Bigelow, Marco Bonati, D. Boprie, David J. Brooks, E. Buckley-Geer, J. Campa, L. Cardiel-Sas, Francisco J. Castander, Javier Castilla, H. Cease, J. M. Cela-Ruiz, S. Chappa, Edward C. Chi, C. Cooper, L. N. da Costa, E. Dede, G. Derylo, Darren L. DePoy, J. De Vicente, Peter Doel, Alex Drlica-Wagner, J. Eiting, Ann Elliott, J. Emes, Juan Estrada, A. Fausti Neto, D. A. Finley, R. Flores, Josh Frieman, D. W. Gerdes, Michael D. Gladders, B. Gregory, G. Gutierrez, Jiangang Hao, S. E. Holland, Scott Holm, D. Huffman, Cheryl Jackson, David J. James, M. Jonas, Armin Karcher, I. Karliner, Steve Kent, Richard Kessler, Mark Kozlovsky, Richard G. Kron, Donna Kubik, Kyler Kuehn, S. E. Kuhlmann, K. Kuk, Ofer Lahav, A. Lathrop, J. Lee, Michael Levi, P. Lewis, Tianjun Li, I. Mandrichenko, Jennifer L. Marshall, G. Martinez, K. W. Merritt, Ramon Miquel, F. Munoz, Eric H. Neilsen, Robert C. Nichol, Brian Nord, Ricardo L. C. Ogando, Jamieson Olsen, N. Palaio, K. Patton, John Peoples, A. A. Plazas, J. Rauch, Kevin Reil, J.-P. Rheault, Natalie A. Roe, H. Rogers, A. Roodman, E. J. Sanchez, V. Scarpine, Rafe Schindler, Ricardo Schmidt, R. Schmitt, Michael Schubnell, Katherine Schultz, P. Schurter, L. Scott, S. Serrano, Terri Shaw, Robert Connon Smith, Marcelle Soares-Santos, A. Stefanik, W. Stuermer, E. Suchyta, A. Sypniewski, G. Tarle, Jon J. Thaler, R. Tighe, C. Tran, Douglas L. Tucker, Alistair R. Walker, G. Wang, M. Watson, Curtis Weaverdyck, W. C. Wester, Robert J. Woods, Brian Yanny
TL;DR: The Dark Energy Camera, described in this paper, was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it.
Abstract: The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250 micron thick fully-depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k×4k CCDs for imaging and 12 2k×2k CCDs for guiding and focus. The CCDs have 15 micron × 15 micron pixels with a plate scale of 0.263 arc sec per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6–9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
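The headline numbers above are internally consistent, which the short arithmetic check below reproduces using only the values quoted in the abstract (CCD counts and formats, the 0.263 arcsec per pixel plate scale, and the 2.2-degree field diameter); nothing else is assumed.

```python
import math

# CCD complement quoted in the abstract.
n_imaging, imaging_fmt = 62, (2048, 4096)   # 2k x 4k science CCDs
n_guide, guide_fmt = 12, (2048, 2048)       # 2k x 2k guide/focus CCDs
plate_scale = 0.263                         # arcsec per pixel
fov_diameter_deg = 2.2                      # degrees

total_pixels = (n_imaging * imaging_fmt[0] * imaging_fmt[1]
                + n_guide * guide_fmt[0] * guide_fmt[1])
print(f"Total pixels ≈ {total_pixels / 1e6:.0f} Mpix")   # ≈ 570 Mpix

# Active imaging area from the science CCDs alone, in square degrees,
# compared with the geometric area of the 2.2-degree-diameter corrector field.
ccd_area_deg2 = (imaging_fmt[0] * plate_scale / 3600.0) * (imaging_fmt[1] * plate_scale / 3600.0)
active_area = n_imaging * ccd_area_deg2
corrector_area = math.pi * (fov_diameter_deg / 2.0) ** 2
print(f"Science CCD area ≈ {active_area:.1f} deg^2 inside a {corrector_area:.1f} deg^2 field")
```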

Journal ArticleDOI
TL;DR: In the first worldwide synthesis of in situ and satellite-derived lake data, this paper found that lake summer surface water temperatures rose rapidly (global mean = 0.34°C decade^−1) between 1985 and 2009.
Abstract: In this first worldwide synthesis of in situ and satellite-derived lake data, we find that lake summer surface water temperatures rose rapidly (global mean = 0.34°C decade^−1) between 1985 and 2009. Our analyses show that surface water warming rates are dependent on combinations of climate and local characteristics, rather than just lake location, leading to the counterintuitive result that regional consistency in lake warming is the exception, rather than the rule. The most rapidly warming lakes are widely geographically distributed, and their warming is associated with interactions among different climatic factors—from seasonally ice-covered lakes in areas where temperature and solar radiation are increasing while cloud cover is diminishing (0.72°C decade^−1) to ice-free lakes experiencing increases in air temperature and solar radiation (0.53°C decade^−1). The pervasive and rapid warming observed here signals the urgent need to incorporate climate impacts into vulnerability assessments and adaptation efforts for lakes.
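The warming rates quoted above are linear trends expressed per decade. The sketch below shows the standard least-squares calculation on a synthetic annual series; the temperatures are invented purely to illustrate how a value like 0.34 °C decade^−1 is obtained.

```python
import numpy as np

# Synthetic annual summer surface temperatures for one lake, 1985-2009 (illustrative only):
# a 0.034 °C/yr underlying trend plus interannual noise.
rng = np.random.default_rng(0)
years = np.arange(1985, 2010)
temps = 20.0 + 0.034 * (years - 1985) + rng.normal(0.0, 0.3, years.size)

# Least-squares linear trend in °C per year, converted to °C per decade.
slope_per_year = np.polyfit(years, temps, 1)[0]
print(f"Warming trend ≈ {10.0 * slope_per_year:.2f} °C per decade")
```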

Journal ArticleDOI
TL;DR: In this paper, two new Lagrangian methods for hydrodynamics are proposed, designed to simultaneously capture the advantages of both SPH and grid-based/adaptive mesh refinement (AMR) schemes.
Abstract: We present two new Lagrangian methods for hydrodynamics, in a systematic comparison with moving-mesh, smoothed particle hydrodynamics (SPH), and stationary (non-moving) grid methods. The new methods are designed to simultaneously capture advantages of both SPH and grid-based/adaptive mesh refinement (AMR) schemes. They are based on a kernel discretization of the volume coupled to a high-order matrix gradient estimator and a Riemann solver acting over the volume ‘overlap’. We implement and test a parallel, second-order version of the method with self-gravity and cosmological integration, in the code gizmo; this maintains exact mass, energy and momentum conservation; exhibits superior angular momentum conservation compared to all other methods we study; does not require ‘artificial diffusion’ terms; and allows the fluid elements to move with the flow, so resolution is automatically adaptive. We consider a large suite of test problems, and find that on all problems the new methods appear competitive with moving-mesh schemes, with some advantages (particularly in angular momentum conservation), at the cost of enhanced noise. The new methods have many advantages versus SPH: proper convergence, good capturing of fluid-mixing instabilities, dramatically reduced ‘particle noise’ and numerical viscosity, more accurate sub-sonic flow evolution, and sharp shock-capturing. Advantages versus non-moving meshes include: automatic adaptivity, dramatically reduced advection errors and numerical overmixing, velocity-independent errors, accurate coupling to gravity, good angular momentum conservation and elimination of ‘grid alignment’ effects. We can, for example, follow hundreds of orbits of gaseous discs, while AMR and SPH methods break down in a few orbits. However, fixed meshes minimize ‘grid noise’. These differences are important for a range of astrophysical problems.
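The ‘kernel discretization of the volume’ mentioned above assigns each fluid element an effective volume from a kernel-weighted number density of its neighbours. The sketch below shows only that idea, in one dimension with a cubic-spline kernel on a periodic domain; it omits the matrix gradient estimator and Riemann solver described in the abstract, and the particle layout is hypothetical.

```python
import numpy as np

def cubic_spline_1d(r, h):
    """Standard M4 cubic-spline kernel in 1D, normalized so that the integral of W over x is 1."""
    q = np.abs(r) / h
    w = np.zeros_like(q)
    inner = q < 1.0
    outer = (q >= 1.0) & (q < 2.0)
    w[inner] = 1.0 - 1.5 * q[inner] ** 2 + 0.75 * q[inner] ** 3
    w[outer] = 0.25 * (2.0 - q[outer]) ** 3
    return (2.0 / (3.0 * h)) * w

# Hypothetical particle positions on a periodic domain of length L (slightly jittered lattice).
L, n = 1.0, 64
rng = np.random.default_rng(1)
x = (np.arange(n) + 0.3 * rng.standard_normal(n)) * (L / n) % L
h = 2.0 * L / n                      # smoothing length ~ 2 mean spacings

# Kernel-weighted number density n_i = sum_j W(x_i - x_j, h); effective volume V_i = 1 / n_i.
dx = x[:, None] - x[None, :]
dx -= L * np.round(dx / L)           # minimum-image convention for periodicity
number_density = cubic_spline_1d(dx, h).sum(axis=1)
volumes = 1.0 / number_density
print(f"Sum of effective volumes ≈ {volumes.sum():.3f} (domain length = {L})")
```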

Journal ArticleDOI
TL;DR: Major improvements to the proteolysis targeting chimeras (PROTACs) method are described, a chemical knockdown strategy in which a heterobifunctional molecule recruits a specific protein target to an E3 ubiquitin ligase, resulting in the target's ubiquitination and degradation.
Abstract: The current predominant therapeutic paradigm is based on maximizing drug-receptor occupancy to achieve clinical benefit. This strategy, however, generally requires excessive drug concentrations to ensure sufficient occupancy, often leading to adverse side effects. Here, we describe major improvements to the proteolysis targeting chimeras (PROTACs) method, a chemical knockdown strategy in which a heterobifunctional molecule recruits a specific protein target to an E3 ubiquitin ligase, resulting in the target's ubiquitination and degradation. These compounds behave catalytically in their ability to induce the ubiquitination of super-stoichiometric quantities of proteins, providing efficacy that is not limited by equilibrium occupancy. We present two PROTACs that are capable of specifically reducing protein levels by >90% at nanomolar concentrations. In addition, mouse studies indicate that they provide broad tissue distribution and knockdown of the targeted protein in tumor xenografts. Together, these data demonstrate a protein knockdown system combining many of the favorable properties of small-molecule agents with the potent protein knockdown of RNAi and CRISPR.

Journal ArticleDOI
TL;DR: That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in [1].
Abstract: We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in [1].

Journal ArticleDOI
TL;DR: This work describes the LALInference software library for Bayesian parameter estimation of compact binary signals, which builds on several previous methods to provide a well-tested toolkit which has already been used for several studies.
Abstract: The Advanced LIGO and Advanced Virgo gravitational-wave (GW) detectors will begin operation in the coming years, with compact binary coalescence events a likely source for the first detections. The gravitational waveforms emitted directly encode information about the sources, including the masses and spins of the compact objects. Recovering the physical parameters of the sources from the GW observations is a key analysis task. This work describes the LALInference software library for Bayesian parameter estimation of compact binary signals, which builds on several previous methods to provide a well-tested toolkit which has already been used for several studies. We show that our implementation is able to correctly recover the parameters of compact binary signals from simulated data from the advanced GW detectors. We demonstrate this with a detailed comparison on three compact binary systems: a binary neutron star, a neutron star–black hole binary and a binary black hole, where we show a cross comparison of results obtained using three independent sampling algorithms. These systems were analyzed with nonspinning, aligned spin and generic spin configurations respectively, showing that consistent results can be obtained even with the full 15-dimensional parameter space of the generic spin configurations. We also demonstrate statistically that the Bayesian credible intervals we recover correspond to frequentist confidence intervals under correct prior assumptions by analyzing a set of 100 signals drawn from the prior. We discuss the computational cost of these algorithms, and describe the general and problem-specific sampling techniques we have used to improve the efficiency of sampling the compact binary coalescence parameter space.
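The statistical check described above, that Bayesian credible intervals behave like frequentist confidence intervals when the analyzed signals are drawn from the analysis prior, can be illustrated with a toy Monte Carlo. The sketch below uses a one-parameter Gaussian model with a conjugate prior in place of the 15-dimensional compact-binary parameter space; the prior width, noise level, and number of trials are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_obs = 1000, 20
prior_sigma, noise_sigma = 1.0, 2.0   # illustrative prior and noise widths
cred_level = 0.90

covered = 0
for _ in range(n_trials):
    # Draw the "true" parameter from the prior, then simulate noisy observations.
    mu_true = rng.normal(0.0, prior_sigma)
    data = rng.normal(mu_true, noise_sigma, size=n_obs)

    # Conjugate Gaussian posterior for the mean (known noise, zero-mean Gaussian prior).
    post_var = 1.0 / (1.0 / prior_sigma**2 + n_obs / noise_sigma**2)
    post_mean = post_var * data.sum() / noise_sigma**2
    half_width = 1.6449 * np.sqrt(post_var)   # 90% central interval of a Gaussian

    covered += abs(mu_true - post_mean) < half_width

print(f"Empirical coverage of the {cred_level:.0%} credible interval: {covered / n_trials:.3f}")
```

Under a correct prior the empirical coverage should match the nominal credible level, which is the behaviour the abstract reports for the 100-signal injection study.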

Journal ArticleDOI
TL;DR: In this paper, the authors discuss new constraints on the epoch of cosmic reionization and test the assumption that most of the ionizing photons responsible arose from high-redshift star-forming galaxies.
Abstract: We discuss new constraints on the epoch of cosmic reionization and test the assumption that most of the ionizing photons responsible arose from high-redshift star-forming galaxies. Good progress has been made in charting the end of reionization through spectroscopic studies of z ≃ 6–8 QSOs, gamma-ray bursts, and galaxies expected to host Lyα emission. However, the most stringent constraints on its duration have come from the integrated optical depth, τ, of Thomson scattering to the cosmic microwave background. Using the latest data on the abundance and luminosity distribution of distant galaxies from Hubble Space Telescope imaging, we simultaneously match the reduced value τ = 0.066 ± 0.012 recently reported by the Planck collaboration and the evolving neutrality of the intergalactic medium with a reionization history within 6 ≲ z ≲ 10, thereby reducing the requirement for a significant population of very high redshift (z ≫ 10) galaxies. Our analysis strengthens the conclusion that star-forming galaxies dominated the reionization process and has important implications for upcoming 21 cm experiments and searches for early galaxies with the James Webb Space Telescope.
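The Thomson optical depth that anchors this argument is an integral of the free-electron density along the line of sight, τ = σ_T ∫ n_e c dt. The sketch below evaluates it for the simplest case of instantaneous, complete reionization at redshift z_re and scans for the value that reproduces τ ≈ 0.066; the approximate Planck-era cosmological parameters, the neglect of helium double reionization, and the sharp ionization history are all simplifying assumptions, not the paper's model.

```python
import numpy as np

# Assumed flat-ΛCDM parameters (approximate Planck 2015 values) and physical constants (cgs).
h, Om, OL, Ob_h2 = 0.677, 0.31, 0.69, 0.0222
sigma_T, c, m_H = 6.6524e-25, 2.9979e10, 1.6726e-24      # cm^2, cm/s, g
Mpc = 3.0857e24                                          # cm
H0 = 100.0 * h * 1.0e5 / Mpc                             # 1/s
X_H = 0.76                                               # hydrogen mass fraction

# Present-day hydrogen number density; electrons include singly ionized helium (~8% extra).
n_H0 = X_H * Ob_h2 * 1.8785e-29 / m_H                    # cm^-3
n_e0 = 1.08 * n_H0

def tau(z_re):
    """Thomson optical depth for instantaneous full reionization at z_re."""
    z = np.linspace(0.0, z_re, 2000)
    E = np.sqrt(Om * (1.0 + z) ** 3 + OL)
    integrand = sigma_T * c * n_e0 * (1.0 + z) ** 2 / (H0 * E)
    # Trapezoidal integration over redshift.
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)))

for z_re in (6.0, 8.0, 9.0, 10.0):
    print(f"z_re = {z_re:4.1f}  ->  tau ≈ {tau(z_re):.3f}")
```

With these assumptions, τ ≈ 0.066 corresponds to z_re ≈ 9, consistent with the 6 ≲ z ≲ 10 reionization window quoted above.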

Journal ArticleDOI
TL;DR: The authors survey the current state of knowledge of ENSO diversity, identify key gaps in understanding, and outline some promising future research directions.
Abstract: El Nino–Southern Oscillation (ENSO) is a naturally occurring mode of tropical Pacific variability, with global impacts on society and natural ecosystems. While it has long been known that El Nino events display a diverse range of amplitudes, triggers, spatial patterns, and life cycles, the realization that ENSO’s impacts can be highly sensitive to this event-to-event diversity is driving a renewed interest in the subject. This paper surveys our current state of knowledge of ENSO diversity, identifies key gaps in understanding, and outlines some promising future research directions.

Journal ArticleDOI
TL;DR: Emerging evidence that the microbiome extends its influence to the brain via various pathways connecting the gut to the central nervous system is highlighted.

Journal ArticleDOI
TL;DR: The mascon basis functions allow for convenient application of a priori information derived from near-global geophysical models to prevent striping in the solutions, and do not necessitate empirical filters to remove north-south stripes, lowering the dependence on using scale factors.
Abstract: We discuss several classes of improvements to gravity solutions from the Gravity Recovery and Climate Experiment (GRACE) mission. These include both improvements in background geophysical models and orbital parameterization leading to the unconstrained spherical harmonic solution JPL RL05, and an alternate JPL RL05M mass concentration (mascon) solution benefitting from those same improvements but derived in surface spherical cap mascons. The mascon basis functions allow for convenient application of a priori information derived from near-global geophysical models to prevent striping in the solutions. The resulting mass flux solutions are shown to suffer less from leakage errors than harmonic solutions, and do not necessitate empirical filters to remove north-south stripes, lowering the dependence on using scale factors (the global mean scale factor decreases by 0.17) to gain accurate mass estimates. Ocean bottom pressure (OBP) time series derived from the mascon solutions are shown to have greater correlation with in situ data than do spherical harmonic solutions (increase in correlation coefficient of 0.08 globally), particularly in low-latitude regions with small signal power (increase in correlation coefficient of 0.35 regionally), in addition to reducing the error RMS with respect to the in situ data (reduction of 0.37 cm globally, and as much as 1 cm regionally). Greenland and Antarctica mass balance estimates derived from the mascon solutions agree within formal uncertainties with previously published results. Computing basin averages for hydrology applications shows general agreement between harmonic and mascon solutions for large basins; however, mascon solutions typically have greater resolution for smaller spatial regions, in particular when studying secular signals.
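The validation metrics quoted above, correlation with in situ ocean bottom pressure records and the RMS of the residuals, are simple to compute once the two time series share a common time grid. The sketch below shows the calculation on synthetic series; the data are invented solely to illustrate the metrics and are not taken from GRACE or any in situ record.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(120)                          # 120 hypothetical monthly samples

# Synthetic in situ OBP anomaly (cm) and a satellite-derived estimate of the same signal.
in_situ = 2.0 * np.sin(2.0 * np.pi * t / 12.0) + rng.normal(0.0, 0.8, t.size)
derived = in_situ + rng.normal(0.0, 1.0, t.size)

corr = np.corrcoef(in_situ, derived)[0, 1]        # Pearson correlation coefficient
rms = np.sqrt(np.mean((derived - in_situ) ** 2))  # RMS of the residuals, cm
print(f"correlation = {corr:.2f}, residual RMS = {rms:.2f} cm")
```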