Showing papers by "California Institute of Technology published in 2012"
••
TL;DR: In this article, a search for the Standard Model Higgs boson in proton-proton collisions with the ATLAS detector at the LHC is presented; the observed excess has a significance of 5.9 standard deviations, corresponding to a background fluctuation probability of 1.7×10−9.
9,282 citations
••
TL;DR: In this paper, results from searches for the standard model Higgs boson in proton-proton collisions at 7 and 8 TeV in the CMS experiment at the LHC are presented, using data samples corresponding to integrated luminosities of up to 5.1 fb−1 at 7 TeV and 5.3 fb−1 at 8 TeV; an excess of events is observed with a local significance of 5.0 standard deviations.
8,857 citations
••
Tohoku University1, University of Zurich2, Lawrence Berkeley National Laboratory3, Stanford University4, College of William & Mary5, University of Genoa6, University of Urbino7, CERN8, Budker Institute of Nuclear Physics9, University of California, Irvine10, Cornell University11, Argonne National Laboratory12, ETH Zurich13, Tata Institute of Fundamental Research14, Hillsdale College15, Spanish National Research Council16, Ohio State University17, University of Notre Dame18, Kent State University19, University of California, San Diego20, University of California, Berkeley21, University of Minnesota22, University of Alabama23, University of Helsinki24, Los Alamos National Laboratory25, California Institute of Technology26, George Washington University27, Syracuse University28, Lawrence Livermore National Laboratory29, Oklahoma State University–Stillwater30, University of Washington31, Max Planck Society32, Boston University33, University of California, Los Angeles34, Royal Holloway, University of London35, Université Paris-Saclay36, Fermilab37, University of Pennsylvania38, University of Illinois at Urbana–Champaign39, University of Bristol40, University of Tokyo41, University of Delaware42, Carnegie Mellon University43, University of California, Santa Cruz44, Karlsruhe Institute of Technology45, Heidelberg University46, Florida State University47, Carleton University48, University of Mainz49, University of Edinburgh50, Brookhaven National Laboratory51, Durham University52, University of Lausanne53, Massachusetts Institute of Technology54, University of Southampton55, Nagoya University56, University of Oxford57, Northwestern University58, University of British Columbia59, Columbia University60, Lund University61, University of Sheffield62, University of California, Santa Barbara63, Iowa State University64, University of Alberta65, University of Cambridge66
TL;DR: The Particle Data Group's biennial review as mentioned in this paper summarizes much of particle physics, using data from previous editions, plus 2658 new measurements from 644 papers, and lists, evaluates, and averages measured properties of gauge bosons, leptons, quarks, mesons, and baryons.
Abstract: This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2658 new measurements from 644 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 112 reviews are many that are new or heavily revised including those on Heavy-Quark and Soft-Collinear Effective Theory, Neutrino Cross Section Measurements, Monte Carlo Event Generators, Lattice QCD, Heavy Quarkonium Spectroscopy, Top Quark, Dark Matter, V-cb & V-ub, Quantum Chromodynamics, High-Energy Collider Parameters, Astrophysical Constants, Cosmological Parameters, and Dark Matter. A booklet is available containing the Summary Tables and abbreviated versions of some of the other sections of this full Review. All tables, listings, and reviews (and errata) are also available on the Particle Data Group website: http://pdg.lbl.gov.
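The "average" step described above is, at its core, an error-weighted mean whose combined uncertainty is inflated by a scale factor when the input measurements disagree. A minimal Python sketch of that idea (the measurement values below are invented for illustration; the full PDG procedure adds ideograms and measurement-selection rules):

```python
import math

def pdg_average(values, errors):
    """Error-weighted average with a PDG-style scale factor.

    Simplified sketch: weight each measurement by 1/sigma^2, then
    inflate the combined error by S = sqrt(chi2 / (N - 1)) when S > 1,
    signalling that the inputs scatter more than their quoted errors.
    """
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = math.sqrt(1.0 / sum(weights))
    chi2 = sum(((v - mean) / e) ** 2 for v, e in zip(values, errors))
    scale = math.sqrt(chi2 / (len(values) - 1)) if len(values) > 1 else 1.0
    return mean, err * max(scale, 1.0), scale

# Three invented, mutually inconsistent measurements of one quantity:
mean, err, S = pdg_average([1.02, 0.98, 1.10], [0.02, 0.03, 0.02])
```

A scale factor well above 1, as here, is the procedure's flag that the quoted errors of the inputs are likely underestimated.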
4,465 citations
••
Cold Spring Harbor Laboratory1, University of California, Irvine2, California Institute of Technology3, Florida State University College of Arts and Sciences4, Yale University5, Wellcome Trust Sanger Institute6, Norwegian University of Science and Technology7, Affymetrix8, University of North Carolina at Chapel Hill9, University of Lausanne10, University of Geneva11, Genome Institute of Singapore12, Stanford University13, Pompeu Fabra University14
TL;DR: Evidence that three-quarters of the human genome is capable of being transcribed is reported, as well as observations about the range and levels of expression, localization, processing fates, regulatory regions and modifications of almost all currently annotated and thousands of previously unannotated RNAs that prompt a redefinition of the concept of a gene.
Abstract: Eukaryotic cells make many types of primary and processed RNAs that are found either in specific subcellular compartments or throughout the cells. A complete catalogue of these RNAs is not yet available and their characteristic subcellular localizations are also poorly understood. Because RNA represents the direct output of the genetic information encoded by genomes and a significant proportion of a cell's regulatory capabilities are focused on its synthesis, processing, transport, modification and translation, the generation of such a catalogue is crucial for understanding genome function. Here we report evidence that three-quarters of the human genome is capable of being transcribed, as well as observations about the range and levels of expression, localization, processing fates, regulatory regions and modifications of almost all currently annotated and thousands of previously unannotated RNAs. These observations, taken together, prompt a redefinition of the concept of a gene.
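The headline "three-quarters of the human genome" figure is, computationally, an interval-union calculation: merge the transcribed regions and divide the covered bases by the genome length. A toy sketch with invented coordinates:

```python
def covered_fraction(intervals, genome_length):
    """Fraction of a genome covered by the union of (start, end) intervals.

    Sketch of the bookkeeping behind a 'fraction transcribed' estimate;
    intervals are half-open and the coordinates are purely illustrative.
    """
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlapping or adjacent region: extend the previous interval
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    covered = sum(end - start for start, end in merged)
    return covered / genome_length

# Toy example: overlapping 'transcribed' regions on a 100-bp genome
frac = covered_fraction([(0, 40), (30, 60), (80, 95)], 100)  # -> 0.75
```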
4,450 citations
••
Daniel J. Klionsky1, Fábio Camargo Abdalla2, Hagai Abeliovich3, Robert T. Abraham4 +1284 more•Institutions (463)
TL;DR: These guidelines are presented for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process vs. those that measure flux through the autophagy pathway (i.e., the complete process); thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from stimuli that result in increased autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. 
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
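The flux-versus-count distinction the guidelines emphasize can be reduced to simple arithmetic: flux is inferred from the difference in a marker such as LC3-II measured with and without a lysosomal inhibitor, not from the raw marker level alone. A schematic sketch with invented numbers (real assays require normalization, controls, and replicates, as the guidelines stress):

```python
def autophagic_flux(marker_with_inhibitor, marker_without_inhibitor):
    """Estimate flux as the marker build-up when lysosomal degradation
    is blocked (e.g., LC3-II with vs. without bafilomycin A1).

    A positive value indicates material was being delivered to, and
    degraded within, lysosomes. Illustrative arithmetic only.
    """
    return marker_with_inhibitor - marker_without_inhibitor

# Two invented scenarios with the same steady-state LC3-II level (2.0):
induced_flux = autophagic_flux(5.0, 2.0)   # high turnover: flux = 3.0
blocked_flux = autophagic_flux(2.1, 2.0)   # trafficking block: flux near 0
```

Both scenarios show identical autophagosome marker levels at steady state, yet only the first reflects genuine autophagic activity, which is exactly the pitfall described above.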
4,316 citations
••
TL;DR: High-density recordings of field activity in animals and subdural grid recordings in humans can provide insight into the cooperative behaviour of neurons, their average synaptic input and their spiking output, and can increase the understanding of how these processes contribute to the extracellular signal.
Abstract: Neuronal activity in the brain gives rise to transmembrane currents that can be measured in the extracellular medium. Although the major contributor of the extracellular signal is the synaptic transmembrane current, other sources — including Na+ and Ca2+ spikes, ionic fluxes through voltage- and ligand-gated channels, and intrinsic membrane oscillations — can substantially shape the extracellular field. High-density recordings of field activity in animals and subdural grid recordings in humans, combined with recently developed data processing tools and computational modelling, can provide insight into the cooperative behaviour of neurons, their average synaptic input and their spiking output, and can increase our understanding of how these processes contribute to the extracellular signal.
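The computational modelling mentioned here commonly rests on volume-conductor theory, in which each transmembrane point current I contributes V = I/(4πσr) at distance r in a homogeneous medium of conductivity σ. A minimal forward-model sketch under that assumption (the source positions and currents below are invented):

```python
import math

def extracellular_potential(sources, point, sigma=0.3):
    """Potential (volts) at `point` from point current sources.

    Homogeneous volume-conductor model: V = sum_i I_i / (4*pi*sigma*r_i),
    with sigma in S/m, currents in amperes, and coordinates in metres.
    A standard textbook forward model; the review discusses additional
    contributors that this sketch omits.
    """
    total = 0.0
    for (sx, sy, sz), current in sources:
        r = math.dist((sx, sy, sz), point)
        total += current / (4.0 * math.pi * sigma * r)
    return total

# A current sink and its return source 100 um apart (a crude dipole):
sources = [((0.0, 0.0, 0.0), -1e-9), ((0.0, 0.0, 100e-6), 1e-9)]
v_near_sink = extracellular_potential(sources, (50e-6, 0.0, 0.0))
v_midplane = extracellular_potential(sources, (50e-6, 0.0, 50e-6))
```

The potential is negative near the sink and cancels on the plane equidistant from both poles, the familiar dipolar geometry of extracellular fields.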
3,366 citations
••
TL;DR: An extensive evaluation of the state of the art in monocular pedestrian detection is performed in a unified framework, benchmarking sixteen pretrained state-of-the-art detectors across six data sets, and a refined per-frame evaluation methodology is proposed.
Abstract: Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.
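Per-frame evaluation of the kind the authors refine reduces to matching detections to ground-truth boxes by intersection-over-union and counting unmatched boxes. A minimal PASCAL-style sketch (the boxes and the 0.5 threshold are illustrative; the paper's protocol adds refinements such as ignore regions and scale filtering):

```python
def iou(a, b):
    """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def evaluate_frame(detections, ground_truth, thresh=0.5):
    """Greedily match confidence-sorted detections to ground truth.

    Returns (misses, false_positives) for one frame: unmatched ground
    truth boxes are misses, unmatched detections false positives.
    """
    unmatched = list(ground_truth)
    false_positives = 0
    for box, _score in sorted(detections, key=lambda d: -d[1]):
        best = max(unmatched, key=lambda g: iou(box, g), default=None)
        if best is not None and iou(box, best) >= thresh:
            unmatched.remove(best)
        else:
            false_positives += 1
    return len(unmatched), false_positives

# Invented example: one good detection, one stray, one missed pedestrian
dets = [((10, 10, 50, 90), 0.9), ((200, 40, 240, 120), 0.6)]
gt = [(12, 12, 52, 92), (400, 40, 440, 120)]
misses, fps = evaluate_frame(dets, gt)   # -> (1, 1)
```

Sweeping the detector's confidence threshold and plotting miss rate against false positives per image, aggregated over frames, yields the comparison curves used in such benchmarks.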
3,170 citations
••
Allen Institute for Brain Science1, University of Edinburgh2, Radboud University Nijmegen3, University of California, Irvine4, Centre for Addiction and Mental Health5, University of Maryland, Baltimore6, University of Washington7, University of California, Los Angeles8, Georgetown University9, Icahn School of Medicine at Mount Sinai10, University of Oxford11, California Institute of Technology12
TL;DR: A transcriptional atlas of the adult human brain is described, comprising extensive histological analysis and comprehensive microarray profiling of ∼900 neuroanatomically precise subdivisions in two individuals, to form a high-resolution transcriptional baseline for neurogenetic studies of normal and abnormal human brain function.
Abstract: Neuroanatomically precise, genome-wide maps of transcript distributions are critical resources to complement genomic sequence data and to correlate functional and genetic brain architecture. Here we describe the generation and analysis of a transcriptional atlas of the adult human brain, comprising extensive histological analysis and comprehensive microarray profiling of ~900 neuroanatomically precise subdivisions in two individuals. Transcriptional regulation varies enormously by anatomical location, with different regions and their constituent cell types displaying robust molecular signatures that are highly conserved between individuals. Analysis of differential gene expression and gene co-expression relationships demonstrates that brain-wide variation strongly reflects the distributions of major cell classes such as neurons, oligodendrocytes, astrocytes and microglia. Local neighbourhood relationships between fine anatomical subdivisions are associated with discrete neuronal subtypes and genes involved with synaptic transmission. The neocortex displays a relatively homogeneous transcriptional pattern, but with distinct features associated selectively with primary sensorimotor cortices and with enriched frontal lobe expression. Notably, the spatial topography of the neocortex is strongly reflected in its molecular topography—the closer two cortical regions, the more similar their transcriptomes. This freely accessible online data resource forms a high-resolution transcriptional baseline for neurogenetic studies of normal and abnormal human brain function.
2,204 citations
••
TL;DR: The Daya Bay Reactor Neutrino Experiment has measured a nonzero value for the neutrino mixing angle θ_13 with a significance of 5.2 standard deviations.
Abstract: The Daya Bay Reactor Neutrino Experiment has measured a nonzero value for the neutrino mixing angle θ13 with a significance of 5.2 standard deviations. Antineutrinos from six 2.9 GW_(th) reactors were detected in six antineutrino detectors deployed in two near (flux-weighted baseline 470 m and 576 m) and one far (1648 m) underground experimental halls. With a 43 000 ton–GW_(th)–day live-time exposure in 55 days, 10 416 (80 376) electron-antineutrino candidates were detected at the far hall (near halls). The ratio of the observed to expected number of antineutrinos at the far hall is R=0.940± 0.011(stat.)±0.004(syst.). A rate-only analysis finds sin^22θ_(13)=0.092±0.016(stat.)±0.005(syst.) in a three-neutrino framework.
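The far-hall deficit behind this result can be reproduced roughly from the two-flavour survival probability P ≈ 1 − sin²2θ13 · sin²(1.267 Δm² L/E). A sketch under illustrative assumptions: Δm² ≈ 2.5×10−3 eV² and a representative antineutrino energy of 4 MeV are our inputs here, not the experiment's fitted values:

```python
import math

def survival_probability(sin2_2theta13, L_m, E_MeV, dm2_eV2=2.5e-3):
    """Short-baseline electron-antineutrino survival probability.

    P = 1 - sin^2(2*theta13) * sin^2(1.267 * dm2 * L / E), the usual
    two-flavour approximation with L in metres and E in MeV. The
    mass splitting and energy are illustrative assumptions.
    """
    phase = 1.267 * dm2_eV2 * L_m / E_MeV
    return 1.0 - sin2_2theta13 * math.sin(phase) ** 2

# Far-hall baseline (1648 m) with the measured sin^2(2*theta13):
p_far = survival_probability(0.092, 1648.0, 4.0)
```

The result (about 0.91 at this single energy) is not identical to the reported ratio R = 0.940, which is spectrum-averaged and normalized against the near-hall measurement and flux model.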
2,163 citations
••
Stanford University1, California Institute of Technology2, Massachusetts Institute of Technology3, Broad Institute4, University of California, Berkeley5, Harvard University6, Yale University7, Duke University8, University of Washington9, University of Texas at Austin10, University of Chicago11, Pennsylvania State University12, Baylor College of Medicine13, National Institutes of Health14, Ontario Institute for Cancer Research15, University of Massachusetts Medical School16, University of Southern California17, University of North Carolina at Chapel Hill18
TL;DR: This work presents a routinely updated set of working standards and guidelines for ChIP experiments, developed by the ENCODE and modENCODE consortia, and discusses how ChIP quality, assessed against these standards, affects different uses of ChIP-seq data.
Abstract: Chromatin immunoprecipitation (ChIP) followed by high-throughput DNA sequencing (ChIP-seq) has become a valuable and widely used approach for mapping the genomic location of transcription-factor binding and histone modifications in living cells. Despite its widespread use, there are considerable differences in how these experiments are conducted, how the results are scored and evaluated for quality, and how the data and metadata are archived for public use. These practices affect the quality and utility of any global ChIP experiment. Through our experience in performing ChIP-seq experiments, the ENCODE and modENCODE consortia have developed a set of working standards and guidelines for ChIP experiments that are updated routinely. The current guidelines address antibody validation, experimental replication, sequencing depth, data and metadata reporting, and data quality assessment. We discuss how ChIP quality, assessed in these ways, affects different uses of ChIP-seq data. All data sets used in the analysis have been deposited for public viewing and downloading at the ENCODE (http://encodeproject.org/ENCODE/) and modENCODE (http://www.modencode.org/) portals.
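One of the quality-assessment metrics defined in these guidelines is FRiP, the fraction of mapped reads falling within called peak regions. A single-base-position sketch with invented reads and peaks (real pipelines operate on BAM files and peak calls, not integer lists):

```python
def frip(read_positions, peaks):
    """Fraction of Reads in Peaks: reads whose position falls inside
    any called peak interval, divided by all mapped reads.

    Minimal sketch of the ENCODE quality metric; peaks are half-open
    (start, end) intervals and all coordinates are invented.
    """
    in_peaks = sum(
        any(start <= pos < end for start, end in peaks)
        for pos in read_positions
    )
    return in_peaks / len(read_positions)

reads = [105, 110, 250, 999, 1010, 5000]
peaks = [(100, 200), (1000, 1100)]
score = frip(reads, peaks)   # 3 of 6 reads in peaks -> 0.5
```

A low FRiP can indicate a weak enrichment or a failed immunoprecipitation, which is why the guidelines pair it with other metrics such as cross-correlation scores.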
1,801 citations
••
Lawrence Berkeley National Laboratory1, University of California, Berkeley2, Australian Astronomical Observatory3, Pontifical Catholic University of Chile4, Harvard University5, Hamilton College6, University of Utah7, University of Tokyo8, Michigan State University9, Space Telescope Science Institute10, California Institute of Technology11, University of Colorado Boulder12, University of California, Santa Cruz13, University of Waterloo14, University of Chicago15, University of Florida16, Stockholm University17, University of Minnesota18, National Institutes of Natural Sciences, Japan19, Leiden University20, Northwestern University21, University of Bonn22, University of California, Davis23, University of Washington24, Kyoto University25, Pennsylvania State University26, European Southern Observatory27, Lawrence Livermore National Laboratory28, University of Lisbon29, Texas A&M University30, University of Toronto31
TL;DR: In this article, Advanced Camera for Surveys, NICMOS and Keck adaptive-optics-assisted photometry of 20 Type Ia supernovae (SNe Ia) from the Hubble Space Telescope (HST) Cluster Supernova Survey was presented.
Abstract: We present Advanced Camera for Surveys, NICMOS, and Keck adaptive-optics-assisted photometry of 20 Type Ia supernovae (SNe Ia) from the Hubble Space Telescope (HST) Cluster Supernova Survey. The SNe Ia were discovered over the redshift interval 0.623 < z < 1.415. […] z > 1 SNe Ia. We describe how such a sample could be efficiently obtained by targeting cluster fields with WFC3 on board HST. The updated supernova Union2.1 compilation of 580 SNe is available at http://supernova.lbl.gov/Union.
••
TL;DR: This paper presents new probability inequalities for sums of independent, random, self-adjoint matrices and provides noncommutative generalizations of the classical bounds associated with the names Azuma, Bennett, Bernstein, Chernoff, Hoeffding, and McDiarmid.
Abstract: This paper presents new probability inequalities for sums of independent, random, self-adjoint matrices. These results place simple and easily verifiable hypotheses on the summands, and they deliver strong conclusions about the large-deviation behavior of the maximum eigenvalue of the sum. Tail bounds for the norm of a sum of random rectangular matrices follow as an immediate corollary. The proof techniques also yield some information about matrix-valued martingales.
In other words, this paper provides noncommutative generalizations of the classical bounds associated with the names Azuma, Bennett, Bernstein, Chernoff, Hoeffding, and McDiarmid. The matrix inequalities promise the same diversity of application, ease of use, and strength of conclusion that have made the scalar inequalities so valuable.
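As a sanity check, the matrix Bernstein inequality, P(λmax(ΣX_k) ≥ t) ≤ d·exp(−(t²/2)/(σ² + Rt/3)) for independent zero-mean symmetric d×d summands with ‖X_k‖ ≤ R and σ² = ‖Σ E[X_k²]‖, can be compared against a Monte Carlo simulation. A self-contained sketch using Rademacher-weighted copies of a fixed 2×2 matrix (the parameters are chosen purely for illustration):

```python
import math, random

def lam_max_2x2(a, b, c):
    """Largest eigenvalue of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    return (a + c) / 2.0 + math.sqrt(((a - c) / 2.0) ** 2 + b * b)

def bernstein_bound(t, d, sigma2, R):
    """Matrix Bernstein tail: P(lam_max >= t) <= d*exp(-(t^2/2)/(sigma2 + R*t/3))."""
    return d * math.exp(-(t * t / 2.0) / (sigma2 + R * t / 3.0))

# Summands X_k = eps_k * A with A = [[1, 0], [0, -1]] and Rademacher
# signs eps_k: each has norm R = 1, and E[X_k^2] = A^2 = I, so sigma2 = n.
random.seed(0)
n, t, trials = 100, 25.0, 2000
exceed = 0
for _ in range(trials):
    s = sum(random.choice((-1, 1)) for _ in range(n))
    if lam_max_2x2(s, 0.0, -s) >= t:   # lam_max(s * A) = |s|
        exceed += 1
empirical = exceed / trials
bound = bernstein_bound(t, d=2, sigma2=float(n), R=1.0)
```

As the theorem guarantees, the empirical exceedance frequency sits below the bound; the gap reflects the bound's worst-case nature.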
••
Christopher P. Ahn1, Rachael Alexandroff2, Carlos Allende Prieto3, Scott F. Anderson4 +256 more•Institutions (65)
TL;DR: In this paper, the authors presented the first spectroscopic data from the Baryon Oscillation Spectroscopic Survey (BOSS) for the Sloan Digital Sky Survey III (SDSS-III) dataset.
Abstract: The Sloan Digital Sky Survey III (SDSS-III) presents the first spectroscopic data from the Baryon Oscillation Spectroscopic Survey (BOSS). This ninth data release (DR9) of the SDSS project includes 535,995 new galaxy spectra (median z ~ 0.52), 102,100 new quasar spectra (median z ~ 2.32), and 90,897 new stellar spectra, along with the data presented in previous data releases. These spectra were obtained with the new BOSS spectrograph and were taken between 2009 December and 2011 July. In addition, the stellar parameters pipeline, which determines radial velocities, surface temperatures, surface gravities, and metallicities of stars, has been updated and refined, with improvements in temperature estimates for stars with T_eff < 5000 K and in metallicity estimates for stars with [Fe/H] > −0.5. DR9 includes new stellar parameters for all stars presented in DR8, including stars from SDSS-I and II, as well as those observed as part of SEGUE-2. The astrometry error introduced in the DR8 imaging catalogs has been corrected in the DR9 data products. The next data release for SDSS-III will be in Summer 2013, which will present the first data from APOGEE along with another year of data from BOSS, followed by the final SDSS-III data release in 2014 December.
••
TL;DR: The results indicate a new strategy and direction for high-efficiency thermoelectric materials by exploring systems where there exists a crystalline sublattice for electronic conduction surrounded by liquid-like ions.
Abstract: Advanced thermoelectric technology offers a potential for converting waste industrial heat into useful electricity, and an emission-free method for solid state cooling. Worldwide efforts to find materials with thermoelectric figure of merit, zT values significantly above unity, are frequently focused on crystalline semiconductors with low thermal conductivity. Here we report on Cu_(2−x)Se, which reaches a zT of 1.5 at 1,000 K, among the highest values for any bulk materials. Whereas the Se atoms in Cu_(2−x)Se form a rigid face-centred cubic lattice, providing a crystalline pathway for semiconducting electrons (or more precisely holes), the copper ions are highly disordered around the Se sublattice and are superionic with liquid-like mobility. This extraordinary ‘liquid-like’ behaviour of copper ions around a crystalline sublattice of Se in Cu_(2−x)Se results in an intrinsically very low lattice thermal conductivity which enables high zT in this otherwise simple semiconductor. This unusual combination of properties leads to an ideal thermoelectric material. The results indicate a new strategy and direction for high-efficiency thermoelectric materials by exploring systems where there exists a crystalline sublattice for electronic conduction surrounded by liquid-like ions.
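The figure of merit quoted here combines the Seebeck coefficient S, electrical conductivity σ, thermal conductivity κ, and absolute temperature T as zT = S²σT/κ. A sketch with invented but order-of-magnitude-plausible values for a good thermoelectric at 1000 K (these are not measured Cu_(2−x)Se data):

```python
def figure_of_merit(seebeck_V_per_K, conductivity_S_per_m,
                    thermal_k_W_per_mK, T_K):
    """Dimensionless thermoelectric figure of merit zT = S^2 * sigma * T / kappa."""
    return seebeck_V_per_K ** 2 * conductivity_S_per_m * T_K / thermal_k_W_per_mK

# Invented inputs: S = 240 uV/K, sigma = 3.3e4 S/m, kappa = 1.3 W/(m K)
zT = figure_of_merit(240e-6, 3.3e4, 1.3, 1000.0)   # roughly 1.46
```

The formula makes the design tension explicit: a liquid-like ion sublattice suppresses κ in the denominator while the crystalline electronic pathway preserves S²σ in the numerator.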
••
TL;DR: In this paper, a new type of global plate motion model consisting of a set of continuously-closing topological plate polygons with associated plate boundaries and plate velocities since the break-up of the supercontinent Pangea is presented.
••
TL;DR: This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems.
Abstract: In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered includes those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases from many technical fields such as sparse vectors (signal processing, statistics) and low-rank matrices (control, statistics), as well as several others including sums of a few permutation matrices (ranked elections, multiobject tracking), low-rank tensors (computer vision, neuroscience), orthogonal matrices (machine learning), and atomic measures (system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. 
The quality of these approximations affects the number of measurements required for recovery, and this tradeoff is characterized via some examples. Thus this work extends the catalog of simple models (beyond sparse vectors and low-rank matrices) that can be recovered from limited linear information via tractable convex programming.
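For the sparse-vector instance of this framework, the atomic norm reduces to the ℓ1 norm and the recovery program can be solved by iterative soft-thresholding (ISTA). A tiny hand-built underdetermined system in pure Python; the step size and regularization weight are illustrative, not tuned:

```python
def ista_l1(A, y, lam=0.01, step=0.5, iters=3000):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.

    The l1 ball is the atomic-norm ball generated by signed unit basis
    vectors, so this is the simplest instance of the framework. The
    step size assumes step <= 1/||A^T A||; all parameters here are
    illustrative.
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        resid = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        grad = [sum(A[i][j] * resid[i] for i in range(m)) for j in range(n)]
        for j in range(n):
            v = x[j] - step * grad[j]
            # Soft-thresholding: the proximal operator of the l1 norm
            x[j] = max(abs(v) - step * lam, 0.0) * (1.0 if v > 0 else -1.0)
    return x

# Recover a sparse vector from 2 linear measurements in 4 dimensions.
# Column 2 points the same way as column 0 but is half as efficient,
# so the l1 objective concentrates the signal on column 0.
A = [[1.0, 0.0, 0.5, 0.0],
     [0.0, 1.0, 0.0, 0.5]]
y = [3.0, 0.0]           # generated by x_true = [3, 0, 0, 0]
x_hat = ista_l1(A, y)    # approximately [2.99, 0, 0, 0] (lasso bias ~ lam)
```

The small shrinkage of the recovered coefficient (2.99 rather than 3) is the familiar lasso bias; the support, which is what atomic-norm recovery guarantees concern, is identified exactly.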
••
TL;DR: This Review focuses on manipulation of the electronic and atomic structural features which make up the thermoelectric quality factor, and the principles used are equally applicable to most good thermoelectric materials, potentially enabling improvement of thermoelectric devices from niche applications into the mainstream of energy technologies.
Abstract: Lead chalcogenides have long been used for space-based and thermoelectric remote power generation applications, but recent discoveries have revealed a much greater potential for these materials. This renaissance of interest combined with the need for increased energy efficiency has led to active consideration of thermoelectrics for practical waste heat recovery systems—such as the conversion of car exhaust heat into electricity. The simple high symmetry NaCl-type cubic structure, leads to several properties desirable for thermoelectricity, such as high valley degeneracy for high electrical conductivity and phonon anharmonicity for low thermal conductivity. The rich capabilities for both band structure and microstructure engineering enable a variety of approaches for achieving high thermoelectric performance in lead chalcogenides. This Review focuses on manipulation of the electronic and atomic structural features which makes up the thermoelectric quality factor. While these strategies are well demonstrated in lead chalcogenides, the principles used are equally applicable to most good thermoelectric materials that could enable improvement of thermoelectric devices from niche applications into the mainstream of energy technologies.
••
TL;DR: In this article, a necessary and sufficient condition is provided to guarantee the existence of no duality gap for the optimal power flow problem, which is the dual of an equivalent form of the OPF problem.
Abstract: The optimal power flow (OPF) problem is nonconvex and generally hard to solve. In this paper, we propose a semidefinite programming (SDP) optimization, which is the dual of an equivalent form of the OPF problem. A global optimum solution to the OPF problem can be retrieved from a solution of this convex dual problem whenever the duality gap is zero. A necessary and sufficient condition is provided in this paper to guarantee the existence of no duality gap for the OPF problem. This condition is satisfied by the standard IEEE benchmark systems with 14, 30, 57, 118, and 300 buses as well as several randomly generated systems. Since this condition is hard to study, a sufficient zero-duality-gap condition is also derived. This sufficient condition holds for IEEE systems after small resistance (10-5 per unit) is added to every transformer that originally assumes zero resistance. We investigate this sufficient condition and justify that it holds widely in practice. The main underlying reason for the successful convexification of the OPF problem can be traced back to the modeling of transformers and transmission lines as well as the non-negativity of physical quantities such as resistance and inductance.
••
TL;DR: In conclusion, this study provides insights into transcription regulation by three-dimensional chromatin interactions for both housekeeping and cell-specific genes in human cells through widespread promoter-centered intragenic, extragenic, and intergenic interactions.
••
San Jose State University1, Ames Research Center2, Las Cumbres Observatory Global Telescope Network3, Harvard University4, University of California, Berkeley5, University of Florida6, Pennsylvania State University7, Georgia State University8, NASA Exoplanet Science Institute9, California Institute of Technology10, Carnegie Institution for Science11, University of Copenhagen12, Aarhus University13, University of Texas at Austin14, Massachusetts Institute of Technology15, Search for extraterrestrial intelligence16, Lawrence Hall of Science17, University of Hertfordshire18, Villanova University19, Fermilab20, Princeton University21, San Diego State University22
TL;DR: In this paper, the authors used the noise-weighted robust averaging of multi-quarter photo-center offsets derived from difference image analysis, which identifies likely background eclipsing binaries.
Abstract: New transiting planet candidates are identified in sixteen months (May 2009 - September 2010) of data from the Kepler spacecraft. Nearly five thousand periodic transit-like signals are vetted against astrophysical and instrumental false positives yielding 1,091 viable new planet candidates, bringing the total count up to over 2,300. Improved vetting metrics are employed, contributing to higher catalog reliability. Most notable is the noise-weighted robust averaging of multi-quarter photo-center offsets derived from difference image analysis which identifies likely background eclipsing binaries. Twenty-two months of photometry are used for the purpose of characterizing each of the new candidates. Ephemerides (transit epoch, T_0, and orbital period, P) are tabulated as well as the products of light curve modeling: reduced radius (Rp/R*), reduced semi-major axis (d/R*), and impact parameter (b). The largest fractional increases are seen for the smallest planet candidates (197% for candidates smaller than 2 R_⊕ compared to 52% for candidates larger than 2 R_⊕) and those at longer orbital periods (123% for candidates outside of 50-day orbits versus 85% for candidates inside of 50-day orbits). The gains are larger than expected from increasing the observing window from thirteen months (Quarters 1-5) to sixteen months (Quarters 1-6). This demonstrates the benefit of continued development of pipeline analysis software. The fraction of all host stars with multiple candidates has grown from 17% to 20%, and the paucity of short-period giant planets in multiple systems is still evident. The progression toward smaller planets at longer orbital periods with each new catalog release suggests that Earth-size planets in the Habitable Zone are forthcoming if, indeed, such planets are abundant.
••
University of California, Berkeley1, Ames Research Center2, San Jose State University3, Lowell Observatory4, Jet Propulsion Laboratory5, University of Texas at Austin6, Harvard University7, Las Cumbres Observatory Global Telescope Network8, Space Telescope Science Institute9, Niels Bohr Institute10, Aarhus University11, National Center for Atmospheric Research12, NASA Exoplanet Science Institute13, Massachusetts Institute of Technology14, Fermilab15, University of California, Santa Cruz16, Yale University17, University of Florida18, California Institute of Technology19, University of California, Santa Barbara20, University of Hertfordshire21, San Diego State University22, Carnegie Institution for Science23, Lawrence Hall of Science24, Villanova University25
TL;DR: In this paper, the authors report the distribution of planets as a function of planet radius, orbital period, and stellar effective temperature for orbital periods less than 50 days around solar-type (GK) stars.
Abstract: We report the distribution of planets as a function of planet radius, orbital period, and stellar effective temperature for orbital periods less than 50 days around solar-type (GK) stars. These results are based on the 1235 planets (formally "planet candidates") from the Kepler mission that include a nearly complete set of detected planets as small as 2 R_⊕. For each of the 156,000 target stars, we assess the detectability of planets as a function of planet radius, R_p, and orbital period, P, using a measure of the detection efficiency for each star. We also correct for the geometric probability of transit, R_*/a. We consider first Kepler target stars within the "solar subset" having T_eff = 4100-6100 K, log g = 4.0-4.9, and Kepler magnitude K_p < 15 mag. Planets with orbital periods shorter than 2 days are extremely rare; for planets larger than 2 R_⊕ we measure an occurrence of less than 0.001 planets per star. For all planets with orbital periods less than 50 days, we measure occurrences of 0.130 ± 0.008, 0.023 ± 0.003, and 0.013 ± 0.002 planets per star for planets with radii 2-4, 4-8, and 8-32 R_⊕, respectively, in agreement with Doppler surveys. We fit occurrence as a function of P to a power-law model with an exponential cutoff below a critical period P_0. For smaller planets, P_0 has larger values, suggesting that the "parking distance" for migrating planets moves outward with decreasing planet size. We also measure planet occurrence over a broader stellar T_eff range of 3600-7100 K, spanning M0 to F2 dwarfs. Over this range, the occurrence of 2-4 R_⊕ planets in the Kepler field increases with decreasing T_eff, with these small planets being seven times more abundant around cool stars (3600-4100 K) than around the hottest stars in our sample (6600-7100 K).
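Fitting occurrence versus period to a power law with an exponential cutoff can be sketched with `scipy.optimize.curve_fit`. The functional form and every number below are illustrative assumptions for the demonstration, not the paper's actual parameterization or data:

```python
import numpy as np
from scipy.optimize import curve_fit

def occurrence(P, k, beta, P0, gamma):
    """Power law in period with an exponential cutoff below critical period P0."""
    return k * P**beta * (1.0 - np.exp(-(P / P0)**gamma))

# Synthetic occurrence "measurements" over 0.5-50 day periods (hypothetical values)
rng = np.random.default_rng(0)
P = np.logspace(-0.3, 1.7, 20)
true = occurrence(P, 0.01, 0.3, 7.0, 2.0)
obs = true * (1.0 + 0.05 * rng.standard_normal(P.size))   # 5% multiplicative noise

popt, pcov = curve_fit(occurrence, P, obs, p0=[0.01, 0.3, 5.0, 2.0])
print("fitted P0 = %.2f days" % popt[2])
```

The cutoff period P_0 is constrained mainly by the short-period points, while the normalization and slope come from periods well beyond the cutoff.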
••
TL;DR: In this paper, the accuracy of global-gridded terrestrial water storage (TWS) estimates derived from temporal gravity field variations observed by the Gravity Recovery and Climate Experiment (GRACE) satellites is assessed.
Abstract: We assess the accuracy of global-gridded terrestrial water storage (TWS) estimates derived from temporal gravity field variations observed by the Gravity Recovery and Climate Experiment (GRACE) satellites. The TWS data set has been corrected for signal modification due to filtering and truncation. Simulations of terrestrial water storage variations from land-hydrology models are used to infer relationships between regional time series representing different spatial scales. These relationships, which are independent of the actual GRACE data, are used to extrapolate the GRACE TWS estimates from their effective spatial resolution (length scales of a few hundred kilometers) to finer spatial scales (∼100 km). Gridded, scaled data like these enable users who lack expertise in processing and filtering the standard GRACE spherical harmonic geopotential coefficients to estimate the time series of TWS over arbitrarily shaped regions. In addition, we provide gridded fields of leakage and GRACE measurement errors that allow users to rigorously estimate the associated regional TWS uncertainties. These fields are available for download from the GRACE project website (available at http://grace.jpl.nasa.gov). Three scaling relationships are examined: a single gain factor based on regionally averaged time series, spatially distributed (i.e., gridded) gain factors based on time series at each grid point, and gridded-gain factors estimated as a function of temporal frequency. While regional gain factors have typically been used in previously published studies, we find that comparable accuracies can be obtained from scaled time series based on gridded gain factors. In regions where different temporal modes of TWS variability have significantly different spatial scales, gain factors based on the first two methods may reduce the accuracy of the scaled time series. In these cases, gain factors estimated separately as a function of frequency may be necessary to achieve accurate results.
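A single gain factor of the kind described is just the least-squares scaling between a filtered model time series and its unfiltered counterpart: the k minimizing ||u − k·f||². A minimal sketch on a synthetic seasonal signal (the 0.6 damping factor is an illustrative assumption, not a GRACE value):

```python
import numpy as np

def gain_factor(filtered, unfiltered):
    """Least-squares scaling k minimizing ||unfiltered - k * filtered||^2."""
    return np.dot(filtered, unfiltered) / np.dot(filtered, filtered)

# Hypothetical model time series: filtering/truncation damps the amplitude by 40%
t = np.arange(120)                                  # 120 monthly samples
unfiltered = 10.0 * np.sin(2 * np.pi * t / 12.0)    # seasonal TWS signal (cm)
filtered = 0.6 * unfiltered                         # smoothed, truncated version

k = gain_factor(filtered, unfiltered)
restored = k * filtered
print(k)   # 1/0.6, i.e. about 1.667
```

Multiplying the filtered series by k restores the model-predicted amplitude; applying the same k to the GRACE series assumes the real signal is damped the same way as the model signal.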
••
Stockholm University1, University of New Hampshire2, University of Alaska Fairbanks3, Scott Polar Research Institute4, Canadian Hydrographic Service5, Norwegian Mapping Authority6, University Centre in Svalbard7, Alfred Wegener Institute for Polar and Marine Research8, Science Applications International Corporation9, Johns Hopkins University Applied Physics Laboratory10, University of Barcelona11, University of New Brunswick12, University of Hawaii at Manoa13, University of Bergen14, Geological Survey of Denmark and Greenland15, Geological Survey of Canada16, California Institute of Technology17, British Oceanographic Data Centre18
TL;DR: The International Bathymetric Chart of the Arctic Ocean (IBCAO) released its first gridded bathymetric compilation in 1999 as discussed by the authors, which has since supported a wide range of Arctic science activities.
Abstract: The International Bathymetric Chart of the Arctic Ocean (IBCAO) released its first gridded bathymetric compilation in 1999. The IBCAO bathymetric portrayals have since supported a wide range of Arctic science activities.
••
TL;DR: By enabling content mixing between mitochondria, fusion and fission maintain a homogeneous and healthy mitochondrial population; a better understanding of these processes will likely lead to improvements in human health.
Abstract: Mitochondria are dynamic organelles that continually undergo fusion and fission. These opposing processes work in concert to maintain the shape, size, and number of mitochondria and their physiological function. Some of the major molecules mediating mitochondrial fusion and fission in mammals have been discovered, but the underlying molecular mechanisms are only partially unraveled. In particular, the cast of characters involved in mitochondrial fission needs to be clarified. By enabling content mixing between mitochondria, fusion and fission serve to maintain a homogeneous and healthy mitochondrial population. Mitochondrial dynamics has been linked to multiple mitochondrial functions, including mitochondrial DNA stability, respiratory capacity, apoptosis, response to cellular stress, and mitophagy. Because of these important functions, mitochondrial fusion and fission are essential in mammals, and even mild defects in mitochondrial dynamics are associated with disease. A better understanding of these processes likely will ultimately lead to improvements in human health.
••
TL;DR: It is demonstrated with a change blindness data set that the distance between images induced by the image signature is closer to human perceptual distance than can be achieved using other saliency algorithms, pixel-wise, or GIST descriptor methods.
Abstract: We introduce a simple image descriptor referred to as the image signature. We show, within the theoretical framework of sparse signal mixing, that this quantity spatially approximates the foreground of an image. We experimentally investigate whether this approximate foreground overlaps with visually conspicuous image locations by developing a saliency algorithm based on the image signature. This saliency algorithm predicts human fixation points best among competitors on the Bruce and Tsotsos [1] benchmark data set and does so in much shorter running time. In a related experiment, we demonstrate with a change blindness data set that the distance between images induced by the image signature is closer to human perceptual distance than can be achieved using other saliency algorithms, pixel-wise, or GIST [2] descriptor methods.
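The image signature itself is compact enough to sketch: the sign of the DCT of the image, with the saliency map taken as a smoothed, squared inverse transform. A minimal version following that recipe (the smoothing width and toy image are illustrative choices, not the paper's settings):

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def saliency_map(img, sigma=3.0):
    """Image-signature saliency: smooth the squared inverse DCT of sign(DCT(img))."""
    signature = np.sign(dctn(img, norm='ortho'))   # the image signature
    recon = idctn(signature, norm='ortho')         # approximates the sparse foreground
    return gaussian_filter(recon * recon, sigma)

# Toy image: a small bright "foreground" patch on a flat background
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
sal = saliency_map(img)

# The saliency peak should fall on or near the patch
peak = np.unravel_index(np.argmax(sal), sal.shape)
print(peak)
```

For a sparse foreground like this, the reconstruction concentrates its energy on the patch, which is the theoretical result the abstract refers to.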
••
Space Telescope Science Institute1, Spanish National Research Council2, University of the Basque Country3, Michigan State University4, Johns Hopkins University5, Tel Aviv University6, University of California, Berkeley7, California Institute of Technology8, European Southern Observatory9, Academia Sinica10, Leiden University11, University College London12, Pontifical Catholic University of Chile13, Rutgers University14, Carnegie Institution for Science15, Ohio State University16, INAF17, University of California, San Diego18, CERN19, Max Planck Society20
TL;DR: The Cluster Lensing And Supernova Survey with Hubble (CLASH) as mentioned in this paper is a 524-orbit Multi-Cycle Treasury Program to use the gravitational lensing properties of 25 galaxy clusters to accurately constrain their mass distributions.
Abstract: The Cluster Lensing And Supernova survey with Hubble (CLASH) is a 524-orbit Multi-Cycle Treasury Program to use the gravitational lensing properties of 25 galaxy clusters to accurately constrain their mass distributions. The survey, described in detail in this paper, will definitively establish the degree of concentration of dark matter in the cluster cores, a key prediction of structure formation models. The CLASH cluster sample is larger and less biased than current samples of space-based imaging studies of clusters to similar depth, as we have minimized lensing-based selection that favors systems with overly dense cores. Specifically, 20 CLASH clusters are solely X-ray selected. The X-ray-selected clusters are massive (kT > 5 keV) and, in most cases, dynamically relaxed. Five additional clusters are included for their lensing strength (θ_Ein > 35" at z_s = 2) to optimize the likelihood of finding highly magnified high-z (z > 7) galaxies. A total of 16 broadband filters, spanning the near-UV to near-IR, are employed for each 20-orbit campaign on each cluster. These data are used to measure precise (σ_z ~ 0.02(1 + z)) photometric redshifts for newly discovered arcs. Observations of each cluster are spread over eight epochs to enable a search for Type Ia supernovae at z > 1 to improve constraints on the time dependence of the dark energy equation of state and the evolution of supernovae. We present newly re-derived X-ray luminosities, temperatures, and Fe abundances for the CLASH clusters as well as a representative source list for MACS1149.6+2223 (z = 0.544).
••
University of Sussex1, Jet Propulsion Laboratory2, California Institute of Technology3, European Space Agency4, Ames Research Center5, University of Edinburgh6, Paris Diderot University7, Imperial College London8, University of Paris-Sud9, Aix-Marseille University10, Cornell University11, Spanish National Research Council12, University of La Laguna13, Complutense University of Madrid14, UK Astronomy Technology Centre15, University of Colorado Boulder16, University of California, Irvine17, Goddard Space Flight Center18, University of Nottingham19, Cardiff University20, University of Padua21, Institut d'Astrophysique de Paris22, University of Cambridge23, University of British Columbia24, European Space Research and Technology Centre25, University of Manchester26, University College London27, Rutherford Appleton Laboratory28, University of Lethbridge29, University of Oxford30, Commonwealth Scientific and Industrial Research Organisation31, University of Hertfordshire32, Harvard University33
TL;DR: The Herschel Multi-tiered Extragalactic Survey (HerMES) is a legacy programme designed to map a set of nested fields totalling ∼380 deg^2 as mentioned in this paper.
Abstract: The Herschel Multi-tiered Extragalactic Survey (HerMES) is a legacy programme designed to map a set of nested fields totalling ∼380 deg^2. Fields range in size from 0.01 to ∼20 deg^2, using the Herschel-Spectral and Photometric Imaging Receiver (SPIRE) (at 250, 350 and 500 μm) and the Herschel-Photodetector Array Camera and Spectrometer (PACS) (at 100 and 160 μm), with an additional wider component of 270 deg^2 with SPIRE alone. These bands cover the peak of the redshifted thermal spectral energy distribution from interstellar dust and thus capture the reprocessed optical and ultraviolet radiation from star formation that has been absorbed by dust, and are critical for forming a complete multiwavelength understanding of galaxy formation and evolution.
The survey will detect of the order of 100 000 galaxies at 5σ in some of the best-studied fields in the sky. Additionally, HerMES is closely coordinated with the PACS Evolutionary Probe survey. Making maximum use of the full spectrum of ancillary data, from radio to X-ray wavelengths, it is designed to facilitate redshift determination, rapidly identify unusual objects and understand the relationships between thermal emission from dust and other processes. Scientific questions HerMES will be used to answer include the total infrared emission of galaxies, the evolution of the luminosity function, the clustering properties of dusty galaxies and the properties of populations of galaxies which lie below the confusion limit through lensing and statistical techniques.
This paper defines the survey observations and data products, outlines the primary scientific goals of the HerMES team, and reviews some of the early results.
••
TL;DR: This work analyzes an intuitive Gaussian process upper confidence bound algorithm and bounds its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design and obtaining explicit sublinear regret bounds for many commonly used covariance functions.
Abstract: Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multiarmed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low norm in a reproducing kernel Hilbert space. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze an intuitive Gaussian process upper confidence bound (GP-UCB) algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristic GP optimization approaches.
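The GP-UCB rule itself is short: at each round, evaluate the point maximizing μ(x) + √β·σ(x) under the current GP posterior. A self-contained 1-D sketch with an RBF kernel; the kernel length scale, β, noise level, and test function are all illustrative choices, not values from the paper:

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential (RBF) kernel between 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_ucb(f, grid, rounds=30, beta=4.0, noise=1e-3):
    """GP-UCB on a grid: repeatedly evaluate argmax of mean + sqrt(beta) * std."""
    X, y = [], []
    for _ in range(rounds):
        if not X:
            x = grid[len(grid) // 2]           # arbitrary first query
        else:
            Xa, ya = np.array(X), np.array(y)
            K = rbf(Xa, Xa) + noise * np.eye(len(X))
            Kinv = np.linalg.inv(K)
            k_star = rbf(grid, Xa)
            mu = k_star @ Kinv @ ya            # posterior mean on the grid
            var = 1.0 - np.sum((k_star @ Kinv) * k_star, axis=1)
            ucb = mu + np.sqrt(beta) * np.sqrt(np.maximum(var, 0.0))
            x = grid[np.argmax(ucb)]           # optimism in the face of uncertainty
        X.append(x)
        y.append(f(x))
    return max(y)

grid = np.linspace(0.0, 1.0, 201)
f = lambda x: -(x - 0.7) ** 2                  # unknown payoff, maximum 0 at x = 0.7
best = gp_ucb(f, grid)
print(best)
```

Unexplored regions keep high posterior variance and hence high UCB, forcing exploration until the confidence band tightens around the best observed payoff.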
••
TL;DR: New approaches to light management that systematically minimize thermodynamic losses will enable ultrahigh solar-cell efficiencies previously considered impossible.
Abstract: For decades, solar-cell efficiencies have remained below the thermodynamic limits. However, new approaches to light management that systematically minimize thermodynamic losses will enable ultrahigh efficiencies previously considered impossible.
••
Swinburne University of Technology1, Australian Astronomical Observatory2, University of Sydney3, University of Queensland4, California Institute of Technology5, University of Chicago6, Australia Telescope National Facility7, Carnegie Institution for Science8, Monash University, Clayton campus9, Australian National University10, University of British Columbia11, University of Toronto12
TL;DR: In this paper, the authors performed a joint determination of the distance-redshift relation and cosmic expansion rate at redshifts z = 0.44, 0.6 and 0.73 by combining measurements of the baryon acoustic peak and Alcock-Paczynski distortion from galaxy clustering in the WiggleZ Dark Energy Survey, using a large ensemble of mock catalogues to calculate the covariance between the measurements.
Abstract: We perform a joint determination of the distance–redshift relation and cosmic expansion rate at redshifts z = 0.44, 0.6 and 0.73 by combining measurements of the baryon acoustic peak and Alcock–Paczynski distortion from galaxy clustering in the WiggleZ Dark Energy Survey, using a large ensemble of mock catalogues to calculate the covariance between the measurements. We find that D_A(z) = (1205 ± 114, 1380 ± 95, 1534 ± 107) Mpc and H(z) = (82.6 ± 7.8, 87.9 ± 6.1, 97.3 ± 7.0) km s^(−1) Mpc^(−1) at these three redshifts. Further combining our results with other baryon acoustic oscillation and distant supernovae data sets, we use a Monte Carlo Markov Chain technique to determine the evolution of the Hubble parameter H(z) as a stepwise function in nine redshift bins of width Δz = 0.1, also marginalizing over the spatial curvature. Our measurements of H(z), which have precision better than 7 per cent in most redshift bins, are consistent with the expansion history predicted by a cosmological constant dark energy model, in which the expansion rate accelerates at redshift z < 0.7.
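As a quick consistency check of the kind the abstract describes, the quoted H(z) values can be compared against a flat ΛCDM expansion history, H(z) = H_0·√(Ω_m(1+z)³ + Ω_Λ). The H_0 and Ω_m below are illustrative fiducial values, not the paper's fitted parameters:

```python
import numpy as np

def hubble(z, H0=70.0, Om=0.27):
    """Flat LambdaCDM expansion rate in km/s/Mpc (illustrative fiducial parameters)."""
    return H0 * np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

z = np.array([0.44, 0.60, 0.73])
Hz = np.array([82.6, 87.9, 97.3])      # WiggleZ measurements, km/s/Mpc
err = np.array([7.8, 6.1, 7.0])        # quoted 1-sigma uncertainties

model = hubble(z)
pulls = (Hz - model) / err             # deviation in units of sigma
print(np.round(model, 1), np.round(pulls, 2))
```

All three measurements sit within about one standard deviation of this fiducial model, consistent with the cosmological-constant conclusion.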