
Showing papers by "California Institute of Technology" published in 2012


Journal ArticleDOI
Georges Aad1, T. Abajyan2, Brad Abbott3, Jalal Abdallah4 +2964 more · Institutions (200)
TL;DR: In this article, a search for the Standard Model Higgs boson in proton-proton collisions with the ATLAS detector at the LHC is presented; the observed excess has a significance of 5.9 standard deviations, corresponding to a background fluctuation probability of 1.7×10^−9.
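As a quick sanity check (not part of the paper itself), the quoted significance and background-fluctuation probability are related by the one-sided Gaussian tail; a minimal Python sketch using scipy:

```python
from scipy.stats import norm

# One-sided Gaussian tail probability for a 5.9 sigma excess.
p = norm.sf(5.9)                                     # survival function, 1 - CDF
print(f"p-value at 5.9 sigma: {p:.2e}")              # ~1.8e-9

# Inverse direction: significance implied by the quoted probability.
z = norm.isf(1.7e-9)
print(f"significance for p = 1.7e-9: {z:.2f} sigma") # ~5.9
```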

9,282 citations


Journal ArticleDOI
TL;DR: In this paper, results are presented from searches for the standard model Higgs boson in proton-proton collisions at 7 and 8 TeV in the CMS experiment at the LHC, using data samples corresponding to integrated luminosities of up to 5.1 fb^−1 at 7 TeV and 5.3 fb^−1 at 8 TeV; an excess of events is observed with a local significance of 5.0 standard deviations.

8,857 citations


Journal ArticleDOI
Kaoru Hagiwara, Ken Ichi Hikasa1, Koji Nakamura, Masaharu Tanabashi1, M. Aguilar-Benitez, Claude Amsler2, R. M. Barnett3, P. R. Burchat4, C. D. Carone5, C. Caso6, G. Conforto7, Olav Dahl3, Michael Doser8, Semen Eidelman9, Jonathan L. Feng10, L. K. Gibbons11, M. C. Goodman12, Christoph Grab13, D. E. Groom3, Atul Gurtu14, Atul Gurtu8, K. G. Hayes15, J.J. Hernández-Rey16, K. Honscheid17, Christopher Kolda18, Michelangelo L. Mangano8, D. M. Manley19, Aneesh V. Manohar20, John March-Russell8, Alberto Masoni, Ramon Miquel3, Klaus Mönig, Hitoshi Murayama21, Hitoshi Murayama3, S. Sánchez Navas13, Keith A. Olive22, Luc Pape8, C. Patrignani6, A. Piepke23, Matts Roos24, John Terning25, Nils A. Tornqvist24, T. G. Trippe3, Petr Vogel26, C. G. Wohl3, Ron L. Workman27, W-M. Yao3, B. Armstrong3, P. S. Gee3, K. S. Lugovsky, S. B. Lugovsky, V. S. Lugovsky, Marina Artuso28, D. Asner29, K. S. Babu30, E. L. Barberio8, Marco Battaglia8, H. Bichsel31, O. Biebel32, P. Bloch8, Robert N. Cahn3, Ariella Cattai8, R.S. Chivukula33, R. Cousins34, G. A. Cowan35, Thibault Damour36, K. Desler, R. J. Donahue3, D. A. Edwards, Victor Daniel Elvira37, Jens Erler38, V. V. Ezhela, A Fassò8, W. Fetscher13, Brian D. Fields39, B. Foster40, Daniel Froidevaux8, Masataka Fukugita41, Thomas K. Gaisser42, L. A. Garren37, H J Gerber13, Frederick J. Gilman43, Howard E. Haber44, C. A. Hagmann29, J.L. Hewett4, Ian Hinchliffe3, Craig J. Hogan31, G. Höhler45, P. Igo-Kemenes46, John David Jackson3, Kurtis F Johnson47, D. Karlen48, B. Kayser37, S. R. Klein3, Konrad Kleinknecht49, I.G. Knowles50, P. Kreitz4, Yu V. Kuyanov, R. Landua8, Paul Langacker38, L. S. Littenberg51, Alan D. Martin52, Tatsuya Nakada53, Tatsuya Nakada8, Meenakshi Narain33, Paolo Nason, John A. Peacock54, H. R. Quinn55, Stuart Raby17, Georg G. Raffelt32, E. A. Razuvaev, B. Renk49, L. Rolandi8, Michael T Ronan3, L.J. Rosenberg54, C.T. Sachrajda55, A. I. Sanda56, Subir Sarkar57, Michael Schmitt58, O. Schneider53, Douglas Scott59, W. G. Seligman60, M. H. Shaevitz60, Torbjörn Sjöstrand61, George F. Smoot3, Stefan M Spanier4, H. Spieler3, N. J. C. Spooner62, Mark Srednicki63, Achim Stahl, Todor Stanev42, M. Suzuki3, N. P. Tkachenko, German Valencia64, K. van Bibber29, Manuella Vincter65, D. R. Ward66, Bryan R. Webber66, M R Whalley52, Lincoln Wolfenstein43, J. Womersley37, C. L. Woody51, Oleg Zenin 
Tohoku University1, University of Zurich2, Lawrence Berkeley National Laboratory3, Stanford University4, College of William & Mary5, University of Genoa6, University of Urbino7, CERN8, Budker Institute of Nuclear Physics9, University of California, Irvine10, Cornell University11, Argonne National Laboratory12, ETH Zurich13, Tata Institute of Fundamental Research14, Hillsdale College15, Spanish National Research Council16, Ohio State University17, University of Notre Dame18, Kent State University19, University of California, San Diego20, University of California, Berkeley21, University of Minnesota22, University of Alabama23, University of Helsinki24, Los Alamos National Laboratory25, California Institute of Technology26, George Washington University27, Syracuse University28, Lawrence Livermore National Laboratory29, Oklahoma State University–Stillwater30, University of Washington31, Max Planck Society32, Boston University33, University of California, Los Angeles34, Royal Holloway, University of London35, Université Paris-Saclay36, Fermilab37, University of Pennsylvania38, University of Illinois at Urbana–Champaign39, University of Bristol40, University of Tokyo41, University of Delaware42, Carnegie Mellon University43, University of California, Santa Cruz44, Karlsruhe Institute of Technology45, Heidelberg University46, Florida State University47, Carleton University48, University of Mainz49, University of Edinburgh50, Brookhaven National Laboratory51, Durham University52, University of Lausanne53, Massachusetts Institute of Technology54, University of Southampton55, Nagoya University56, University of Oxford57, Northwestern University58, University of British Columbia59, Columbia University60, Lund University61, University of Sheffield62, University of California, Santa Barbara63, Iowa State University64, University of Alberta65, University of Cambridge66
TL;DR: The Particle Data Group's biennial review as mentioned in this paper summarizes much of particle physics, using data from previous editions plus 2658 new measurements from 644 papers, and lists, evaluates, and averages measured properties of gauge bosons, leptons, quarks, mesons, and baryons.
Abstract: This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2658 new measurements from 644 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 112 reviews are many that are new or heavily revised including those on Heavy-Quark and Soft-Collinear Effective Theory, Neutrino Cross Section Measurements, Monte Carlo Event Generators, Lattice QCD, Heavy Quarkonium Spectroscopy, Top Quark, Dark Matter, V-cb & V-ub, Quantum Chromodynamics, High-Energy Collider Parameters, Astrophysical Constants, Cosmological Parameters, and Dark Matter. A booklet is available containing the Summary Tables and abbreviated versions of some of the other sections of this full Review. All tables, listings, and reviews (and errata) are also available on the Particle Data Group website: http://pdg.lbl.gov.

4,465 citations


Journal ArticleDOI
Sarah Djebali, Carrie A. Davis1, Angelika Merkel, Alexander Dobin1, Timo Lassmann, Ali Mortazavi2, Ali Mortazavi3, Andrea Tanzer, Julien Lagarde, Wei Lin1, Felix Schlesinger1, Chenghai Xue1, Georgi K. Marinov3, Jainab Khatun4, Brian A. Williams3, Chris Zaleski1, Joel Rozowsky5, Marion S. Röder, Felix Kokocinski6, Rehab F. Abdelhamid, Tyler Alioto, Igor Antoshechkin3, Michael T. Baer1, Nadav Bar7, Philippe Batut1, Kimberly Bell1, Ian Bell8, Sudipto K. Chakrabortty1, Xian Chen9, Jacqueline Chrast10, Joao Curado, Thomas Derrien, Jorg Drenkow1, Erica Dumais8, Jacqueline Dumais8, Radha Duttagupta8, Emilie Falconnet11, Meagan Fastuca1, Kata Fejes-Toth1, Pedro G. Ferreira, Sylvain Foissac8, Melissa J. Fullwood12, Hui Gao8, David Gonzalez, Assaf Gordon1, Harsha P. Gunawardena9, Cédric Howald10, Sonali Jha1, Rory Johnson, Philipp Kapranov8, Brandon King3, Colin Kingswood, Oscar Junhong Luo12, Eddie Park2, Kimberly Persaud1, Jonathan B. Preall1, Paolo Ribeca, Brian A. Risk4, Daniel Robyr11, Michael Sammeth, Lorian Schaffer3, Lei-Hoon See1, Atif Shahab12, Jørgen Skancke7, Ana Maria Suzuki, Hazuki Takahashi, Hagen Tilgner13, Diane Trout3, Nathalie Walters10, Huaien Wang1, John A. Wrobel4, Yanbao Yu9, Xiaoan Ruan12, Yoshihide Hayashizaki, Jennifer Harrow6, Mark Gerstein5, Tim Hubbard6, Alexandre Reymond10, Stylianos E. Antonarakis11, Gregory J. Hannon1, Morgan C. Giddings4, Morgan C. Giddings9, Yijun Ruan12, Barbara J. Wold3, Piero Carninci, Roderic Guigó14, Thomas R. Gingeras8, Thomas R. Gingeras1 
06 Sep 2012-Nature
TL;DR: Evidence that three-quarters of the human genome is capable of being transcribed is reported, as well as observations about the range and levels of expression, localization, processing fates, regulatory regions and modifications of almost all currently annotated and thousands of previously unannotated RNAs that prompt a redefinition of the concept of a gene.
Abstract: Eukaryotic cells make many types of primary and processed RNAs that are found either in specific subcellular compartments or throughout the cells. A complete catalogue of these RNAs is not yet available and their characteristic subcellular localizations are also poorly understood. Because RNA represents the direct output of the genetic information encoded by genomes and a significant proportion of a cell's regulatory capabilities are focused on its synthesis, processing, transport, modification and translation, the generation of such a catalogue is crucial for understanding genome function. Here we report evidence that three-quarters of the human genome is capable of being transcribed, as well as observations about the range and levels of expression, localization, processing fates, regulatory regions and modifications of almost all currently annotated and thousands of previously unannotated RNAs. These observations, taken together, prompt a redefinition of the concept of a gene.

4,450 citations


Journal ArticleDOI
TL;DR: These guidelines are presented for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process vs. those that measure flux through the autophagy pathway (i.e., the complete process); thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from stimuli that result in increased autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

4,316 citations


Journal ArticleDOI
TL;DR: High-density recordings of field activity in animals and subdural grid recordings in humans can provide insight into the cooperative behaviour of neurons, their average synaptic input and their spiking output, and can increase the understanding of how these processes contribute to the extracellular signal.
Abstract: Neuronal activity in the brain gives rise to transmembrane currents that can be measured in the extracellular medium. Although the major contributor of the extracellular signal is the synaptic transmembrane current, other sources — including Na+ and Ca2+ spikes, ionic fluxes through voltage- and ligand-gated channels, and intrinsic membrane oscillations — can substantially shape the extracellular field. High-density recordings of field activity in animals and subdural grid recordings in humans, combined with recently developed data processing tools and computational modelling, can provide insight into the cooperative behaviour of neurons, their average synaptic input and their spiking output, and can increase our understanding of how these processes contribute to the extracellular signal.
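As background to the forward modelling mentioned above (a textbook approximation, not taken from the review itself), the extracellular medium is often treated as an infinite homogeneous volume conductor, in which the potential from point current sources is V(r) = Σ_n I_n / (4πσ|r − r_n|). A minimal sketch with illustrative parameter values:

```python
import numpy as np

def extracellular_potential(r, src_pos, src_I, sigma=0.3):
    """Potential (V) at point r from point current sources in an infinite
    homogeneous volume conductor of conductivity sigma (S/m).
    src_pos: (N, 3) positions in m; src_I: (N,) transmembrane currents in A."""
    d = np.linalg.norm(src_pos - r, axis=1)
    return np.sum(src_I / (4.0 * np.pi * sigma * d))

# Toy example: a current sink/source pair 100 um apart (a crude dipole).
sources = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 100e-6]])
currents = np.array([-1e-9, 1e-9])             # -1 nA sink, +1 nA source
electrode = np.array([50e-6, 0.0, 0.0])        # recording site 50 um away
v = extracellular_potential(electrode, sources, currents)
print(f"extracellular potential: {1e6 * v:.2f} uV")  # a few microvolts
```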

3,366 citations


Journal ArticleDOI
TL;DR: An extensive evaluation of the state of the art in monocular pedestrian detection is performed in a unified framework, benchmarking sixteen pretrained state-of-the-art detectors across six data sets, and a refined per-frame evaluation methodology is proposed.
Abstract: Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.
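The per-frame methodology is commonly summarized by the log-average miss rate: the geometric mean of the miss rate sampled at nine false-positives-per-image (FPPI) values log-spaced in [10^−2, 10^0]. A hedged sketch of that summary statistic (the detector curve below is invented for illustration):

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate):
    """Geometric mean of miss rate at 9 FPPI points log-spaced in [1e-2, 1]."""
    refs = np.logspace(-2, 0, 9)
    samples = []
    for ref in refs:
        idx = np.where(fppi <= ref)[0]
        # Use the miss rate at the largest achieved FPPI <= ref;
        # if the curve never reaches that FPPI, count a miss rate of 1.
        samples.append(miss_rate[idx[-1]] if len(idx) else 1.0)
    return np.exp(np.mean(np.log(np.clip(samples, 1e-12, None))))

# Toy detector curve: FPPI rises as the score threshold is lowered.
fppi = np.array([0.005, 0.01, 0.05, 0.1, 0.5, 1.0])
mr = np.array([0.95, 0.90, 0.70, 0.55, 0.35, 0.25])
print(f"log-average miss rate: {log_average_miss_rate(fppi, mr):.3f}")
```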

3,170 citations


Journal ArticleDOI
20 Sep 2012-Nature
TL;DR: A transcriptional atlas of the adult human brain is described, comprising extensive histological analysis and comprehensive microarray profiling of ∼900 neuroanatomically precise subdivisions in two individuals, to form a high-resolution transcriptional baseline for neurogenetic studies of normal and abnormal human brain function.
Abstract: Neuroanatomically precise, genome-wide maps of transcript distributions are critical resources to complement genomic sequence data and to correlate functional and genetic brain architecture. Here we describe the generation and analysis of a transcriptional atlas of the adult human brain, comprising extensive histological analysis and comprehensive microarray profiling of ~900 neuroanatomically precise subdivisions in two individuals. Transcriptional regulation varies enormously by anatomical location, with different regions and their constituent cell types displaying robust molecular signatures that are highly conserved between individuals. Analysis of differential gene expression and gene co-expression relationships demonstrates that brain-wide variation strongly reflects the distributions of major cell classes such as neurons, oligodendrocytes, astrocytes and microglia. Local neighbourhood relationships between fine anatomical subdivisions are associated with discrete neuronal subtypes and genes involved with synaptic transmission. The neocortex displays a relatively homogeneous transcriptional pattern, but with distinct features associated selectively with primary sensorimotor cortices and with enriched frontal lobe expression. Notably, the spatial topography of the neocortex is strongly reflected in its molecular topography—the closer two cortical regions, the more similar their transcriptomes. This freely accessible online data resource forms a high-resolution transcriptional baseline for neurogenetic studies of normal and abnormal human brain function.

2,204 citations


Journal ArticleDOI
F. P. An, J. Z. Bai, A. B. Balantekin1, H. R. Band1 +271 more · Institutions (34)
TL;DR: The Daya Bay Reactor Neutrino Experiment has measured a nonzero value for the neutrino mixing angle θ_13 with a significance of 5.2 standard deviations.
Abstract: The Daya Bay Reactor Neutrino Experiment has measured a nonzero value for the neutrino mixing angle θ13 with a significance of 5.2 standard deviations. Antineutrinos from six 2.9 GW_(th) reactors were detected in six antineutrino detectors deployed in two near (flux-weighted baseline 470 m and 576 m) and one far (1648 m) underground experimental halls. With a 43 000 ton–GW_(th)–day live-time exposure in 55 days, 10 416 (80 376) electron-antineutrino candidates were detected at the far hall (near halls). The ratio of the observed to expected number of antineutrinos at the far hall is R=0.940± 0.011(stat.)±0.004(syst.). A rate-only analysis finds sin^22θ_(13)=0.092±0.016(stat.)±0.005(syst.) in a three-neutrino framework.
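For intuition only: in a rate-only comparison, the far-hall deficit is tied to sin^2 2θ_13 through the survival probability P ≈ 1 − sin^2 2θ_13 · sin^2(1.267 Δm^2 L/E). The toy estimate below ignores near-hall oscillation and averaging over the reactor spectrum, which is why it comes out below the published 0.092; the energy and Δm^2 values are assumed, not taken from the paper:

```python
import numpy as np

L_far = 1648.0   # m, far-hall flux-weighted baseline (from the abstract)
E_nu = 4.0       # MeV, a typical detected antineutrino energy (assumed)
dm2 = 2.5e-3     # eV^2, mass splitting driving the oscillation (assumed)
R = 0.940        # observed/expected far-hall ratio (from the abstract)

# Deficit: 1 - R ~ sin^2(2 theta_13) * sin^2(1.267 * dm2 * L / E)
osc = np.sin(1.267 * dm2 * L_far / E_nu) ** 2
print(f"naive sin^2(2 theta_13) ~ {(1 - R) / osc:.3f}")  # right order of magnitude
```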

2,163 citations


Journal ArticleDOI
TL;DR: This work presents the working standards and guidelines for ChIP experiments developed by the ENCODE and modENCODE consortia, which are updated routinely, and discusses how ChIP quality, assessed in these ways, affects different uses of ChIP-seq data.
Abstract: Chromatin immunoprecipitation (ChIP) followed by high-throughput DNA sequencing (ChIP-seq) has become a valuable and widely used approach for mapping the genomic location of transcription-factor binding and histone modifications in living cells. Despite its widespread use, there are considerable differences in how these experiments are conducted, how the results are scored and evaluated for quality, and how the data and metadata are archived for public use. These practices affect the quality and utility of any global ChIP experiment. Through our experience in performing ChIP-seq experiments, the ENCODE and modENCODE consortia have developed a set of working standards and guidelines for ChIP experiments that are updated routinely. The current guidelines address antibody validation, experimental replication, sequencing depth, data and metadata reporting, and data quality assessment. We discuss how ChIP quality, assessed in these ways, affects different uses of ChIP-seq data. All data sets used in the analysis have been deposited for public viewing and downloading at the ENCODE (http://encodeproject.org/ENCODE/) and modENCODE (http://www.modencode.org/) portals.

1,801 citations


Journal ArticleDOI
TL;DR: In this article, Advanced Camera for Surveys, NICMOS and Keck adaptive-optics-assisted photometry of 20 Type Ia supernovae (SNe Ia) from the Hubble Space Telescope (HST) Cluster Supernova Survey was presented.
Abstract: We present Advanced Camera for Surveys, NICMOS, and Keck adaptive-optics-assisted photometry of 20 Type Ia supernovae (SNe Ia) from the Hubble Space Telescope (HST) Cluster Supernova Survey. The SNe Ia were discovered over the redshift interval 0.623 < z < 1.415. We describe how a larger sample of z > 1 SNe Ia could be efficiently obtained by targeting cluster fields with WFC3 on board HST. The updated supernova Union2.1 compilation of 580 SNe is available at http://supernova.lbl.gov/Union.

Journal ArticleDOI
TL;DR: This paper presents new probability inequalities for sums of independent, random, self-adjoint matrices and provides noncommutative generalizations of the classical bounds associated with the names Azuma, Bennett, Bernstein, Chernoff, Hoeffding, and McDiarmid.
Abstract: This paper presents new probability inequalities for sums of independent, random, self-adjoint matrices. These results place simple and easily verifiable hypotheses on the summands, and they deliver strong conclusions about the large-deviation behavior of the maximum eigenvalue of the sum. Tail bounds for the norm of a sum of random rectangular matrices follow as an immediate corollary. The proof techniques also yield some information about matrix-valued martingales. In other words, this paper provides noncommutative generalizations of the classical bounds associated with the names Azuma, Bennett, Bernstein, Chernoff, Hoeffding, and McDiarmid. The matrix inequalities promise the same diversity of application, ease of use, and strength of conclusion that have made the scalar inequalities so valuable.
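For example, the paper's matrix Bernstein inequality bounds the tail of the maximum eigenvalue: for independent, centered, self-adjoint d×d summands with ‖X_k‖ ≤ R almost surely and σ² = ‖Σ_k E[X_k²]‖, it gives P(λ_max(Σ_k X_k) ≥ t) ≤ d·exp(−(t²/2)/(σ² + Rt/3)). A numerical sanity check under a Rademacher construction of my own (not an experiment from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 10, 50, 2000

# Fixed symmetric matrices with unit spectral norm; the summands are
# X_k = eps_k * A_k with Rademacher signs, so E[X_k] = 0 and E[X_k^2] = A_k^2.
A = rng.standard_normal((n, d, d))
A = (A + A.transpose(0, 2, 1)) / 2
A /= np.linalg.norm(A, ord=2, axis=(1, 2), keepdims=True)
R = 1.0
sigma2 = np.linalg.norm(sum(a @ a for a in A), ord=2)  # ||sum_k E[X_k^2]||

lam_max = np.empty(trials)
for i in range(trials):
    eps = rng.choice([-1.0, 1.0], size=n)
    S = np.tensordot(eps, A, axes=1)                   # sum_k eps_k A_k
    lam_max[i] = np.linalg.eigvalsh(S).max()

t = 3.0 * np.sqrt(sigma2)
empirical = np.mean(lam_max >= t)
bound = d * np.exp(-(t**2 / 2) / (sigma2 + R * t / 3))
print(f"P[lambda_max >= t]: empirical {empirical:.4f} <= bound {bound:.4f}")
```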

Journal ArticleDOI
TL;DR: In this paper, the authors present the first spectroscopic data from the Baryon Oscillation Spectroscopic Survey (BOSS), part of the ninth data release (DR9) of the Sloan Digital Sky Survey III (SDSS-III).
Abstract: The Sloan Digital Sky Survey III (SDSS-III) presents the first spectroscopic data from the Baryon Oscillation Spectroscopic Survey (BOSS). This ninth data release (DR9) of the SDSS project includes 535,995 new galaxy spectra (median z ~ 0.52), 102,100 new quasar spectra (median z ~ 2.32), and 90,897 new stellar spectra, along with the data presented in previous data releases. These spectra were obtained with the new BOSS spectrograph and were taken between 2009 December and 2011 July. In addition, the stellar parameters pipeline, which determines radial velocities, surface temperatures, surface gravities, and metallicities of stars, has been updated and refined with improvements in temperature estimates for stars with T_eff < 5000 K and in metallicity estimates for stars with [Fe/H] > −0.5. DR9 includes new stellar parameters for all stars presented in DR8, including stars from SDSS-I and II, as well as those observed as part of SEGUE-2. The astrometry error introduced in the DR8 imaging catalogs has been corrected in the DR9 data products. The next data release for SDSS-III will be in Summer 2013, which will present the first data from APOGEE along with another year of data from BOSS, followed by the final SDSS-III data release in 2014 December.

Journal ArticleDOI
TL;DR: The results indicate a new strategy and direction for high-efficiency thermoelectric materials by exploring systems where there exists a crystalline sublattice for electronic conduction surrounded by liquid-like ions.
Abstract: Advanced thermoelectric technology offers a potential for converting waste industrial heat into useful electricity, and an emission-free method for solid state cooling. Worldwide efforts to find materials with thermoelectric figure of merit, zT values significantly above unity, are frequently focused on crystalline semiconductors with low thermal conductivity. Here we report on Cu_(2−x)Se, which reaches a zT of 1.5 at 1,000 K, among the highest values for any bulk materials. Whereas the Se atoms in Cu_(2−x)Se form a rigid face-centred cubic lattice, providing a crystalline pathway for semiconducting electrons (or more precisely holes), the copper ions are highly disordered around the Se sublattice and are superionic with liquid-like mobility. This extraordinary ‘liquid-like’ behaviour of copper ions around a crystalline sublattice of Se in Cu_(2−x)Se results in an intrinsically very low lattice thermal conductivity which enables high zT in this otherwise simple semiconductor. This unusual combination of properties leads to an ideal thermoelectric material. The results indicate a new strategy and direction for high-efficiency thermoelectric materials by exploring systems where there exists a crystalline sublattice for electronic conduction surrounded by liquid-like ions.
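For reference, the figure of merit quoted here is zT = S²σT/κ, with Seebeck coefficient S, electrical conductivity σ, thermal conductivity κ, and absolute temperature T. A one-line check with illustrative values of the right order for a good high-temperature thermoelectric (assumed numbers, not measurements from the paper):

```python
def figure_of_merit(S, sigma, kappa, T):
    """zT = S^2 * sigma * T / kappa; S in V/K, sigma in S/m,
    kappa in W/(m*K), T in K."""
    return S**2 * sigma * T / kappa

# Illustrative inputs: S = 250 uV/K, sigma = 4e4 S/m, kappa = 1.7 W/(m*K),
# T = 1000 K give zT near 1.5, the scale reported in the abstract.
print(f"zT ~ {figure_of_merit(S=250e-6, sigma=4.0e4, kappa=1.7, T=1000.0):.2f}")
```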

Journal ArticleDOI
TL;DR: In this paper, a new type of global plate motion model is presented, consisting of a set of continuously closing topological plate polygons with associated plate boundaries and plate velocities since the break-up of the supercontinent Pangea.

Journal ArticleDOI
TL;DR: This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems.
Abstract: In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered includes those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases from many technical fields such as sparse vectors (signal processing, statistics) and low-rank matrices (control, statistics), as well as several others including sums of a few permutation matrices (ranked elections, multiobject tracking), low-rank tensors (computer vision, neuroscience), orthogonal matrices (machine learning), and atomic measures (system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. The quality of these approximations affects the number of measurements required for recovery, and this tradeoff is characterized via some examples. Thus this work extends the catalog of simple models (beyond sparse vectors and low-rank matrices) that can be recovered from limited linear information via tractable convex programming.
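In the simplest instance of this framework the atoms are the signed standard basis vectors, the atomic norm reduces to the ℓ1 norm, and the recovery program is basis pursuit. A minimal sketch of that special case using the off-the-shelf cvxpy solver (the problem sizes are arbitrary illustrative choices, not from the paper):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m, k = 100, 40, 5               # ambient dim, measurements, sparsity

# Ground-truth sparse model and random Gaussian measurements.
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# Atomic-norm minimization: for the atomic set {+/- e_i} the atomic norm
# is the l1 norm, so the program below is basis pursuit.
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y])
prob.solve()
print(f"recovery error: {np.linalg.norm(x.value - x_true):.2e}")
```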

Journal ArticleDOI
TL;DR: This Review focuses on manipulation of the electronic and atomic structural features which make up the thermoelectric quality factor; the principles used are equally applicable to most good thermoelectric materials and could enable improvement of thermoelectric devices from niche applications into the mainstream of energy technologies.
Abstract: Lead chalcogenides have long been used for space-based and thermoelectric remote power generation applications, but recent discoveries have revealed a much greater potential for these materials. This renaissance of interest combined with the need for increased energy efficiency has led to active consideration of thermoelectrics for practical waste heat recovery systems—such as the conversion of car exhaust heat into electricity. The simple high-symmetry NaCl-type cubic structure leads to several properties desirable for thermoelectricity, such as high valley degeneracy for high electrical conductivity and phonon anharmonicity for low thermal conductivity. The rich capabilities for both band structure and microstructure engineering enable a variety of approaches for achieving high thermoelectric performance in lead chalcogenides. This Review focuses on manipulation of the electronic and atomic structural features which make up the thermoelectric quality factor. While these strategies are well demonstrated in lead chalcogenides, the principles used are equally applicable to most good thermoelectric materials and could enable improvement of thermoelectric devices from niche applications into the mainstream of energy technologies.

Journal ArticleDOI
TL;DR: In this article, a necessary and sufficient condition is provided to guarantee the existence of no duality gap for the optimal power flow problem, which is the dual of an equivalent form of the OPF problem.
Abstract: The optimal power flow (OPF) problem is nonconvex and generally hard to solve. In this paper, we propose a semidefinite programming (SDP) optimization, which is the dual of an equivalent form of the OPF problem. A global optimum solution to the OPF problem can be retrieved from a solution of this convex dual problem whenever the duality gap is zero. A necessary and sufficient condition is provided in this paper to guarantee the existence of no duality gap for the OPF problem. This condition is satisfied by the standard IEEE benchmark systems with 14, 30, 57, 118, and 300 buses as well as several randomly generated systems. Since this condition is hard to study, a sufficient zero-duality-gap condition is also derived. This sufficient condition holds for IEEE systems after small resistance (10^−5 per unit) is added to every transformer that originally assumes zero resistance. We investigate this sufficient condition and justify that it holds widely in practice. The main underlying reason for the successful convexification of the OPF problem can be traced back to the modeling of transformers and transmission lines as well as the non-negativity of physical quantities such as resistance and inductance.
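The mechanics of the zero-duality-gap certificate can be illustrated on a generic quadratically constrained problem (a toy stand-in, not the paper's OPF model): lift x to W = xxᵀ, drop the rank constraint to obtain an SDP, and declare the relaxation exact when the optimal W is numerically rank one. A hedged cvxpy sketch; for a single constraint the relaxation is exact by the S-lemma, so the rank-1 test passes:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n = 4

# Toy QCQP: minimize x^T C x subject to x^T A x >= 1 (nonconvex as stated).
M = rng.standard_normal((n, n)); C = M @ M.T + np.eye(n)  # positive definite cost
N = rng.standard_normal((n, n)); A = N @ N.T              # PSD constraint matrix

# Lift: W = x x^T; relaxing rank(W) = 1 to W >= 0 gives an SDP.
W = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(C @ W)), [cp.trace(A @ W) >= 1])
prob.solve()

# Zero duality gap corresponds to a rank-1 optimal W; x is recovered from
# the top eigenvector when the remaining eigenvalues are ~0.
eigvals = np.linalg.eigvalsh(W.value)
print("eigenvalues of W:", np.round(eigvals, 6))  # one dominant => exact
```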


Journal ArticleDOI
TL;DR: In this paper, the authors use noise-weighted robust averaging of multi-quarter photo-center offsets derived from difference image analysis, which identifies likely background eclipsing binaries.
Abstract: New transiting planet candidates are identified in sixteen months (May 2009 – September 2010) of data from the Kepler spacecraft. Nearly five thousand periodic transit-like signals are vetted against astrophysical and instrumental false positives yielding 1,091 viable new planet candidates, bringing the total count up to over 2,300. Improved vetting metrics are employed, contributing to higher catalog reliability. Most notable is the noise-weighted robust averaging of multi-quarter photo-center offsets derived from difference image analysis which identifies likely background eclipsing binaries. Twenty-two months of photometry are used for the purpose of characterizing each of the new candidates. Ephemerides (transit epoch, T_0, and orbital period, P) are tabulated as well as the products of light curve modeling: reduced radius (Rp/R*), reduced semi-major axis (d/R*), and impact parameter (b). The largest fractional increases are seen for the smallest planet candidates (197% for candidates smaller than 2 R_⊕ compared to 52% for candidates larger than 2 R_⊕) and those at longer orbital periods (123% for candidates outside of 50-day orbits versus 85% for candidates inside of 50-day orbits). The gains are larger than expected from increasing the observing window from thirteen months (Quarter 1 – Quarter 5) to sixteen months (Quarter 1 – Quarter 6). This demonstrates the benefit of continued development of pipeline analysis software. The fraction of all host stars with multiple candidates has grown from 17% to 20%, and the paucity of short-period giant planets in multiple systems is still evident. The progression toward smaller planets at longer orbital periods with each new catalog release suggests that Earth-size planets in the Habitable Zone are forthcoming if, indeed, such planets are abundant.

Journal ArticleDOI
TL;DR: In this paper, the authors report the distribution of planets as a function of planet radius, orbital period, and stellar effective temperature for orbital periods less than 50 days around solar-type (GK) stars.
Abstract: We report the distribution of planets as a function of planet radius, orbital period, and stellar effective temperature for orbital periods less than 50 days around solar-type (GK) stars. These results are based on the 1235 planets (formally "planet candidates") from the Kepler mission that include a nearly complete set of detected planets as small as 2 R_⊕. For each of the 156,000 target stars, we assess the detectability of planets as a function of planet radius, R_p, and orbital period, P, using a measure of the detection efficiency for each star. We also correct for the geometric probability of transit, R_*/a. We consider first Kepler target stars within the "solar subset" having T_eff = 4100-6100 K, log g = 4.0-4.9, and Kepler magnitude K_p < 15 mag. Planets with orbital periods shorter than 2 days are extremely rare; for planets larger than 2 R_⊕ we measure an occurrence of less than 0.001 planets per star. For all planets with orbital periods less than 50 days, we measure occurrence of 0.130 ± 0.008, 0.023 ± 0.003, and 0.013 ± 0.002 planets per star for planets with radii 2-4, 4-8, and 8-32 R_⊕, in agreement with Doppler surveys. We fit occurrence as a function of P to a power-law model with an exponential cutoff below a critical period P_0. For smaller planets, P_0 has larger values, suggesting that the "parking distance" for migrating planets moves outward with decreasing planet size. We also measured planet occurrence over a broader stellar T_eff range of 3600-7100 K, spanning M0 to F2 dwarfs. Over this range, the occurrence of 2-4 R_⊕ planets in the Kepler field increases with decreasing T_eff, with these small planets being seven times more abundant around cool stars (3600-4100 K) than the hottest stars in our sample (6600-7100 K).
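The heart of the occurrence estimate is inverse-probability weighting: each detected planet contributes 1/(p_transit × n_eff), where p_transit = R_*/a is the geometric transit probability and n_eff is the number of target stars around which that planet was detectable. A toy version of the estimator (all catalog numbers below are invented for illustration):

```python
import numpy as np

def occurrence(p_transit, detect_frac, n_stars):
    """Planets per star: sum over detected planets of
    1 / (p_transit * detect_frac * n_stars)."""
    p_transit = np.asarray(p_transit)
    detect_frac = np.asarray(detect_frac)
    return np.sum(1.0 / (p_transit * detect_frac * n_stars))

# Toy catalog of 3 detected planets around 156,000 target stars.
p_tr = [0.05, 0.02, 0.01]   # geometric transit probabilities R_*/a
frac = [0.9, 0.6, 0.3]      # fraction of stars with sufficient SNR to detect each
print(f"occurrence: {occurrence(p_tr, frac, 156_000):.4f} planets per star")
```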

Journal ArticleDOI
TL;DR: In this paper, the accuracy of global-gridded terrestrial water storage (TWS) estimates derived from temporal gravity field variations observed by the Gravity Recovery and Climate Experiment (GRACE) satellites is assessed.
Abstract: We assess the accuracy of global-gridded terrestrial water storage (TWS) estimates derived from temporal gravity field variations observed by the Gravity Recovery and Climate Experiment (GRACE) satellites. The TWS data set has been corrected for signal modification due to filtering and truncation. Simulations of terrestrial water storage variations from land-hydrology models are used to infer relationships between regional time series representing different spatial scales. These relationships, which are independent of the actual GRACE data, are used to extrapolate the GRACE TWS estimates from their effective spatial resolution (length scales of a few hundred kilometers) to finer spatial scales (∼100 km). Gridded, scaled data like these enable users who lack expertise in processing and filtering the standard GRACE spherical harmonic geopotential coefficients to estimate the time series of TWS over arbitrarily shaped regions. In addition, we provide gridded fields of leakage and GRACE measurement errors that allow users to rigorously estimate the associated regional TWS uncertainties. These fields are available for download from the GRACE project website (available at http://grace.jpl.nasa.gov). Three scaling relationships are examined: a single gain factor based on regionally averaged time series, spatially distributed (i.e., gridded) gain factors based on time series at each grid point, and gridded-gain factors estimated as a function of temporal frequency. While regional gain factors have typically been used in previously published studies, we find that comparable accuracies can be obtained from scaled time series based on gridded gain factors. In regions where different temporal modes of TWS variability have significantly different spatial scales, gain factors based on the first two methods may reduce the accuracy of the scaled time series. In these cases, gain factors estimated separately as a function of frequency may be necessary to achieve accurate results.
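The single-gain-factor variant is a one-parameter least-squares fit between a model's unfiltered regional time series and the same series after GRACE-like filtering, with closed form k = Σ(truth·filtered)/Σ(filtered²); the factor is then applied to the observed GRACE series. A sketch with synthetic series standing in for the land-hydrology model output:

```python
import numpy as np

def gain_factor(truth, filtered):
    """Least-squares k minimizing ||truth - k * filtered||^2."""
    truth, filtered = np.asarray(truth), np.asarray(filtered)
    return np.dot(truth, filtered) / np.dot(filtered, filtered)

# Synthetic seasonal TWS cycle; "filtering" damps the amplitude and adds
# noise, mimicking the smoothing/truncation applied to GRACE fields.
t = np.arange(120)                              # months
truth = 10.0 * np.sin(2 * np.pi * t / 12.0)     # cm water equivalent
rng = np.random.default_rng(3)
filtered = 0.6 * truth + rng.normal(0.0, 0.5, t.size)

k = gain_factor(truth, filtered)
print(f"gain factor: {k:.2f}")                  # ~1/0.6 ~ 1.67
restored = k * filtered                         # rescaled TWS time series
```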

Journal ArticleDOI
TL;DR: The International Bathymetric Chart of the Arctic Ocean (IBCAO) released its first gridded bathymetric compilation in 1999 as discussed by the authors, which has since supported a wide range of Arctic ...
Abstract: The International Bathymetric Chart of the Arctic Ocean (IBCAO) released its first gridded bathymetric compilation in 1999. The IBCAO bathymetric portrayals have since supported a wide range of Arctic ...

Journal ArticleDOI
TL;DR: By enabling content mixing between mitochondria, fusion and fission serve to maintain a homogeneous and healthy mitochondrial population; a better understanding of these processes will likely lead to improvements in human health.
Abstract: Mitochondria are dynamic organelles that continually undergo fusion and fission. These opposing processes work in concert to maintain the shape, size, and number of mitochondria and their physiological function. Some of the major molecules mediating mitochondrial fusion and fission in mammals have been discovered, but the underlying molecular mechanisms are only partially unraveled. In particular, the cast of characters involved in mitochondrial fission needs to be clarified. By enabling content mixing between mitochondria, fusion and fission serve to maintain a homogeneous and healthy mitochondrial population. Mitochondrial dynamics has been linked to multiple mitochondrial functions, including mitochondrial DNA stability, respiratory capacity, apoptosis, response to cellular stress, and mitophagy. Because of these important functions, mitochondrial fusion and fission are essential in mammals, and even mild defects in mitochondrial dynamics are associated with disease. A better understanding of these processes likely will ultimately lead to improvements in human health.

Journal ArticleDOI
TL;DR: It is demonstrated with a change blindness data set that the distance between images induced by the image signature is closer to human perceptual distance than can be achieved using other saliency algorithms, pixel-wise, or GIST descriptor methods.
Abstract: We introduce a simple image descriptor referred to as the image signature. We show, within the theoretical framework of sparse signal mixing, that this quantity spatially approximates the foreground of an image. We experimentally investigate whether this approximate foreground overlaps with visually conspicuous image locations by developing a saliency algorithm based on the image signature. This saliency algorithm predicts human fixation points best among competitors on the Bruce and Tsotsos [1] benchmark data set and does so in much shorter running time. In a related experiment, we demonstrate with a change blindness data set that the distance between images induced by the image signature is closer to human perceptual distance than can be achieved using other saliency algorithms, pixel-wise, or GIST [2] descriptor methods.
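Concretely, the image signature is sign(DCT(x)); reconstructing with the inverse DCT, squaring, and blurring yields the saliency map. A compact re-implementation sketch for a single-channel image (my own minimal version using scipy, not the authors' code):

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def saliency_map(img, blur_sigma=3.0):
    """Image-signature saliency for a 2-D (single-channel) image:
    IDCT of the DCT sign pattern, squared, then Gaussian-blurred."""
    signature = np.sign(dctn(img, norm='ortho'))   # the image signature
    recon = idctn(signature, norm='ortho')         # approximates the foreground
    return gaussian_filter(recon * recon, blur_sigma)

# Toy example: a small bright blob (sparse foreground) on a flat background.
img = np.zeros((64, 64))
img[30:34, 40:44] = 1.0
sal = saliency_map(img)
print("peak saliency at:", np.unravel_index(sal.argmax(), sal.shape))  # near the blob
```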

Journal ArticleDOI
TL;DR: The Cluster Lensing And Supernova Survey with Hubble (CLASH) as mentioned in this paper is a 524-orbit Multi-Cycle Treasury Program to use the gravitational lensing properties of 25 galaxy clusters to accurately constrain their mass distributions.
Abstract: The Cluster Lensing And Supernova survey with Hubble (CLASH) is a 524-orbit Multi-Cycle Treasury Program to use the gravitational lensing properties of 25 galaxy clusters to accurately constrain their mass distributions. The survey, described in detail in this paper, will definitively establish the degree of concentration of dark matter in the cluster cores, a key prediction of structure formation models. The CLASH cluster sample is larger and less biased than current samples of space-based imaging studies of clusters to similar depth, as we have minimized lensing-based selection that favors systems with overly dense cores. Specifically, 20 CLASH clusters are solely X-ray selected. The X-ray-selected clusters are massive (kT > 5 keV) and, in most cases, dynamically relaxed. Five additional clusters are included for their lensing strength (θ_Ein > 35" at z_s = 2) to optimize the likelihood of finding highly magnified high-z (z > 7) galaxies. A total of 16 broadband filters, spanning the near-UV to near-IR, are employed for each 20-orbit campaign on each cluster. These data are used to measure precise (σ_z ~ 0.02(1 + z)) photometric redshifts for newly discovered arcs. Observations of each cluster are spread over eight epochs to enable a search for Type Ia supernovae at z > 1 to improve constraints on the time dependence of the dark energy equation of state and the evolution of supernovae. We present newly re-derived X-ray luminosities, temperatures, and Fe abundances for the CLASH clusters as well as a representative source list for MACS1149.6+2223 (z = 0.544).

Journal ArticleDOI
Seb Oliver1, James J. Bock2, James J. Bock3, Bruno Altieri4, Alexandre Amblard5, V. Arumugam6, Herve Aussel7, Tom Babbedge8, Alexandre Beelen9, Matthieu Béthermin9, Matthieu Béthermin7, Andrew Blain3, Alessandro Boselli10, C. Bridge3, Drew Brisbin11, V. Buat10, Denis Burgarella10, N. Castro-Rodríguez12, N. Castro-Rodríguez13, Antonio Cava14, P. Chanial7, Michele Cirasuolo15, David L. Clements8, A. Conley16, L. Conversi4, Asantha Cooray3, Asantha Cooray17, C. D. Dowell3, C. D. Dowell2, Elizabeth Dubois1, Eli Dwek18, Simon Dye19, Stephen Anthony Eales20, David Elbaz7, Duncan Farrah1, A. Feltre21, P. Ferrero13, P. Ferrero12, N. Fiolet22, N. Fiolet9, M. Fox8, Alberto Franceschini21, Walter Kieran Gear20, E. Giovannoli10, Jason Glenn16, Yan Gong17, E. A. González Solares23, Matthew Joseph Griffin20, Mark Halpern24, Martin Harwit, Evanthia Hatziminaoglou, Sebastien Heinis10, Peter Hurley1, Ho Seong Hwang7, A. Hyde8, Edo Ibar15, O. Ilbert10, K. G. Isaak25, Rob Ivison15, Rob Ivison6, Guilaine Lagache9, E. Le Floc'h7, L. R. Levenson2, L. R. Levenson3, B. Lo Faro21, Nanyao Y. Lu3, S. C. Madden7, Bruno Maffei26, Georgios E. Magdis7, G. Mainetti21, Lucia Marchetti21, G. Marsden24, J. Marshall3, J. Marshall2, A. M. J. Mortier8, Hien Nguyen3, Hien Nguyen2, B. O'Halloran8, Alain Omont22, Mat Page27, P. Panuzzo7, Andreas Papageorgiou20, H. Patel8, Chris Pearson28, Chris Pearson29, Ismael Perez-Fournon13, Ismael Perez-Fournon12, Michael Pohlen20, Jonathan Rawlings27, Gwenifer Raymond20, Dimitra Rigopoulou30, Dimitra Rigopoulou28, L. Riguccini7, D. Rizzo8, Giulia Rodighiero21, Isaac Roseboom1, Isaac Roseboom6, Michael Rowan-Robinson8, M. Sanchez Portal4, Benjamin L. Schulz3, Douglas Scott24, Nick Seymour27, Nick Seymour31, D. L. Shupe3, A. J. Smith1, Jamie Stevens32, M. Symeonidis27, Markos Trichas33, K. E. Tugwell27, Mattia Vaccari21, Ivan Valtchanov4, Joaquin Vieira3, Marco P. Viero3, L. Vigroux22, Lifan Wang1, Robyn L. Ward1, Julie Wardlow17, G. Wright15, C. K. Xu3, Michael Zemcov3, Michael Zemcov2 
TL;DR: The Herschel Multi-tiered Extragalactic Survey (HerMES) is a legacy programme designed to map a set of nested fields totalling ∼380 deg^2 as mentioned in this paper.
Abstract: The Herschel Multi-tiered Extragalactic Survey (HerMES) is a legacy programme designed to map a set of nested fields totalling ∼380 deg^2. Fields range in size from 0.01 to ∼20 deg^2, using the Herschel-Spectral and Photometric Imaging Receiver (SPIRE) (at 250, 350 and 500 μm) and the Herschel-Photodetector Array Camera and Spectrometer (PACS) (at 100 and 160 μm), with an additional wider component of 270 deg^2 with SPIRE alone. These bands cover the peak of the redshifted thermal spectral energy distribution from interstellar dust and thus capture the reprocessed optical and ultraviolet radiation from star formation that has been absorbed by dust, and are critical for forming a complete multiwavelength understanding of galaxy formation and evolution. The survey will detect of the order of 100 000 galaxies at 5σ in some of the best-studied fields in the sky. Additionally, HerMES is closely coordinated with the PACS Evolutionary Probe survey. Making maximum use of the full spectrum of ancillary data, from radio to X-ray wavelengths, it is designed to facilitate redshift determination, rapidly identify unusual objects and understand the relationships between thermal emission from dust and other processes. Scientific questions HerMES will be used to answer include the total infrared emission of galaxies, the evolution of the luminosity function, the clustering properties of dusty galaxies and the properties of populations of galaxies which lie below the confusion limit through lensing and statistical techniques. This paper defines the survey observations and data products, outlines the primary scientific goals of the HerMES team, and reviews some of the early results.

Journal ArticleDOI
TL;DR: This work analyzes an intuitive Gaussian process upper confidence bound algorithm and bounds its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design and obtaining explicit sublinear regret bounds for many commonly used covariance functions.
Abstract: Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multiarmed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low norm in a reproducing kernel Hilbert space. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze an intuitive Gaussian process upper confidence bound (GP-UCB) algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristical GP optimization approaches.
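The GP-UCB rule itself is one line: at round t, query the point maximizing μ_{t−1}(x) + √β_t · σ_{t−1}(x). A hedged sketch with scikit-learn's GP on a toy 1-D objective; the β_t schedule and kernel below are simplified stand-ins, not the paper's exact constants:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * np.cos(5 * x)   # unknown objective (toy)
rng = np.random.default_rng(4)
grid = np.linspace(0.0, 2.0, 200).reshape(-1, 1)    # candidate points

X, y = [[0.5]], [f(0.5) + 0.1 * rng.standard_normal()]
for t in range(1, 21):
    gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=0.01).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    beta = 2.0 * np.log(grid.size * t**2)           # simplified beta_t schedule
    x_next = grid[np.argmax(mu + np.sqrt(beta) * sd)]   # the UCB rule
    X.append([x_next[0]])
    y.append(f(x_next[0]) + 0.1 * rng.standard_normal())

best = int(np.argmax(y))
print(f"best observed value {max(y):.3f} at x = {X[best][0]:.3f}")
```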

Journal ArticleDOI
TL;DR: New approaches to light management that systematically minimize thermodynamic losses will enable ultrahigh solar-cell efficiencies previously considered impossible.
Abstract: For decades, solar-cell efficiencies have remained below the thermodynamic limits. However, new approaches to light management that systematically minimize thermodynamic losses will enable ultrahigh efficiencies previously considered impossible.

Journal ArticleDOI
TL;DR: In this paper, the authors performed a joint determination of the distance-redshift relation and cosmic expansion rate at redshifts z = 0.44, 0.6 and 0.73 by combining measurements of the baryon acoustic peak and Alcock-Paczynski distortion from galaxy clustering in the WiggleZ Dark Energy Survey, using a large ensemble of mock catalogues to calculate the covariance between the measurements.
Abstract: We perform a joint determination of the distance–redshift relation and cosmic expansion rate at redshifts z = 0.44, 0.6 and 0.73 by combining measurements of the baryon acoustic peak and Alcock–Paczynski distortion from galaxy clustering in the WiggleZ Dark Energy Survey, using a large ensemble of mock catalogues to calculate the covariance between the measurements. We find that D_A(z) = (1205 ± 114, 1380 ± 95, 1534 ± 107) Mpc and H(z) = (82.6 ± 7.8, 87.9 ± 6.1, 97.3 ± 7.0) km s^(−1) Mpc^(−1) at these three redshifts. Further combining our results with other baryon acoustic oscillation and distant supernovae data sets, we use a Monte Carlo Markov Chain technique to determine the evolution of the Hubble parameter H(z) as a stepwise function in nine redshift bins of width Δz = 0.1, also marginalizing over the spatial curvature. Our measurements of H(z), which have precision better than 7 per cent in most redshift bins, are consistent with the expansion history predicted by a cosmological constant dark energy model, in which the expansion rate accelerates at redshift z < 0.7.
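As a consistency check, these measurements can be compared with a flat ΛCDM prediction, H(z) = H_0 √(Ω_m(1+z)³ + 1 − Ω_m) and D_A(z) = (1+z)^−1 ∫_0^z c dz′/H(z′). The sketch below uses assumed fiducial parameters (H_0 = 70 km s^−1 Mpc^−1, Ω_m = 0.3, my choice rather than the paper's fit), so agreement with the quoted values is only approximate:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light, km/s

def hubble(z, H0=70.0, Om=0.3):
    """H(z) in flat LCDM, km/s/Mpc (fiducial parameters assumed)."""
    return H0 * np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

def angular_diameter_distance(z, **cosmo):
    """D_A = (1/(1+z)) * integral_0^z c dz'/H(z'), flat universe, Mpc."""
    zs = np.linspace(0.0, z, 2000)
    integrand = C_KM_S / hubble(zs, **cosmo)
    comoving = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zs))
    return comoving / (1.0 + z)

# Compare with the WiggleZ values quoted in the abstract.
for z, DA_meas, H_meas in [(0.44, 1205, 82.6), (0.60, 1380, 87.9), (0.73, 1534, 97.3)]:
    print(f"z={z}: D_A model {angular_diameter_distance(z):.0f} Mpc "
          f"(measured {DA_meas}), H model {hubble(z):.1f} km/s/Mpc (measured {H_meas})")
```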