scispace - formally typeset

Showing papers by "University of Geneva" published in 2014


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, C. Armitage-Caplan3, Monique Arnaud4  +324 moreInstitutions (70)
TL;DR: In this paper, the authors present the first cosmological results based on Planck measurements of the cosmic microwave background (CMB) temperature and lensing-potential power spectra, which at high multipoles are extremely well described by the standard spatially-flat six-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations.
Abstract: This paper presents the first cosmological results based on Planck measurements of the cosmic microwave background (CMB) temperature and lensing-potential power spectra. We find that the Planck spectra at high multipoles (l ≳ 40) are extremely well described by the standard spatially-flat six-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations. Within the context of this cosmology, the Planck data determine the cosmological parameters to high precision: the angular size of the sound horizon at recombination, the physical densities of baryons and cold dark matter, and the scalar spectral index are estimated to be θ∗ = (1.04147 ± 0.00062) × 10^-2, Ωb h^2 = 0.02205 ± 0.00028, Ωc h^2 = 0.1199 ± 0.0027, and n_s = 0.9603 ± 0.0073, respectively (note that in this abstract we quote 68% errors on measured parameters and 95% upper limits on other parameters). For this cosmology, we find a low value of the Hubble constant, H0 = (67.3 ± 1.2) km s^-1 Mpc^-1, and a high value of the matter density parameter, Ωm = 0.315 ± 0.017. These values are in tension with recent direct measurements of H0 and the magnitude-redshift relation for Type Ia supernovae, but are in excellent agreement with geometrical constraints from baryon acoustic oscillation (BAO) surveys. Including curvature, we find that the Universe is consistent with spatial flatness to percent-level precision using Planck CMB data alone. We use high-resolution CMB data together with Planck to provide greater control of extragalactic foreground components in an investigation of extensions to the six-parameter ΛCDM model. We present selected results from a large grid of cosmological models, using a range of additional astrophysical data sets in addition to Planck and high-resolution CMB data. None of these models are favoured over the standard six-parameter ΛCDM cosmology.
The deviation of the scalar spectral index from unity is insensitive to the addition of tensor modes and to changes in the matter content of the Universe. We find an upper limit of r_0.002 < 0.11 on the tensor-to-scalar ratio. There is no evidence for additional neutrino-like relativistic particles beyond the three families of neutrinos in the standard model. Using BAO and CMB data, we find Neff = 3.30 ± 0.27 for the effective number of relativistic degrees of freedom, and an upper limit of 0.23 eV for the sum of neutrino masses. Our results are in excellent agreement with big bang nucleosynthesis and the standard value of Neff = 3.046. We find no evidence for dynamical dark energy; using BAO and CMB data, the dark energy equation of state parameter is constrained to be w = -1.13 +0.13/-0.10. We also use the Planck data to set limits on a possible variation of the fine-structure constant, dark matter annihilation and primordial magnetic fields. Despite the success of the six-parameter ΛCDM model in describing the Planck data at high multipoles, we note that this cosmology does not provide a good fit to the temperature power spectrum at low multipoles. The unusual shape of the spectrum in the multipole range 20 ≲ l ≲ 40 was seen previously in the WMAP data and is a real feature of the primordial CMB anisotropies. The poor fit to the spectrum at low multipoles is not of decisive significance, but is an “anomaly” in an otherwise self-consistent analysis of the Planck temperature data.
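As a quick sanity check (an illustrative calculation, not part of the paper), the quoted matter density follows from the baryon and cold-dark-matter densities and the Hubble constant above, plus the small baseline massive-neutrino contribution:

```python
# Illustrative consistency check of the quoted Planck 2013 parameters.
# Omega_m ≈ (Omega_b h^2 + Omega_c h^2 + Omega_nu h^2) / h^2,
# where h = H0 / (100 km/s/Mpc). The neutrino term assumes the standard
# baseline sum of masses, 0.06 eV (Omega_nu h^2 = sum(m_nu) / 93.14 eV).
omega_b_h2 = 0.02205
omega_c_h2 = 0.1199
h = 0.673                    # from H0 = 67.3 km/s/Mpc
omega_nu_h2 = 0.06 / 93.14   # assumed baseline neutrino contribution

omega_m = (omega_b_h2 + omega_c_h2 + omega_nu_h2) / h**2
print(f"Omega_m = {omega_m:.3f}")  # prints Omega_m = 0.315
```

The result reproduces the quoted Ωm = 0.315 ± 0.017 to well within its uncertainty, which is why the abstract can describe the low H0 and high Ωm as two faces of the same fit.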

7,060 citations


Journal ArticleDOI
TL;DR: It is proposed that downstream topographical biomarkers of the disease, such as volumetric MRI and fluorodeoxyglucose PET, might better serve in the measurement and monitoring of the course of disease.
Abstract: In the past 8 years, both the International Working Group (IWG) and the US National Institute on Aging-Alzheimer's Association have contributed criteria for the diagnosis of Alzheimer's disease (AD) that better define clinical phenotypes and integrate biomarkers into the diagnostic process, covering the full staging of the disease. This Position Paper considers the strengths and limitations of the IWG research diagnostic criteria and proposes advances to improve the diagnostic framework. On the basis of these refinements, the diagnosis of AD can be simplified, requiring the presence of an appropriate clinical AD phenotype (typical or atypical) and a pathophysiological biomarker consistent with the presence of Alzheimer's pathology. We propose that downstream topographical biomarkers of the disease, such as volumetric MRI and fluorodeoxyglucose PET, might better serve in the measurement and monitoring of the course of disease. This paper also elaborates on the specific diagnostic criteria for atypical forms of AD, for mixed AD, and for the preclinical states of AD.

2,581 citations


Journal ArticleDOI
Andrew R. Wood1, Tõnu Esko2, Jian Yang3, Sailaja Vedantam4  +441 moreInstitutions (132)
TL;DR: This article identified 697 variants at genome-wide significance that together explained one-fifth of the heritability for adult height, and all common variants together captured 60% of heritability.
Abstract: Using genome-wide data from 253,288 individuals, we identified 697 variants at genome-wide significance that together explained one-fifth of the heritability for adult height. By testing different numbers of variants in independent studies, we show that the most strongly associated ∼2,000, ∼3,700 and ∼9,500 SNPs explained ∼21%, ∼24% and ∼29% of phenotypic variance. Furthermore, all common variants together captured 60% of heritability. The 697 variants clustered in 423 loci were enriched for genes, pathways and tissue types known to be involved in growth and together implicated genes and pathways not highlighted in earlier efforts, such as signaling by fibroblast growth factors, WNT/β-catenin and chondroitin sulfate-related genes. We identified several genes and pathways not previously connected with human skeletal growth, including mTOR, osteoglycin and binding of hyaluronic acid. Our results indicate a genetic architecture for human height that is characterized by a very large but finite number (thousands) of causal variants.

1,872 citations


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, M. I. R. Alves2, C. Armitage-Caplan3  +469 moreInstitutions (89)
TL;DR: The European Space Agency's Planck satellite, dedicated to studying the early Universe and its subsequent evolution, was launched 14 May 2009 and has been scanning the microwave and submillimetre sky continuously since 12 August 2009 as discussed by the authors.
Abstract: The European Space Agency’s Planck satellite, dedicated to studying the early Universe and its subsequent evolution, was launched 14 May 2009 and has been scanning the microwave and submillimetre sky continuously since 12 August 2009. In March 2013, ESA and the Planck Collaboration released the initial cosmology products based on the first 15.5 months of Planck data, along with a set of scientific and technical papers and a web-based explanatory supplement. This paper gives an overview of the mission and its performance, the processing, analysis, and characteristics of the data, the scientific results, and the science data products and papers in the release. The science products include maps of the cosmic microwave background (CMB) and diffuse extragalactic foregrounds, a catalogue of compact Galactic and extragalactic sources, and a list of sources detected through the Sunyaev-Zeldovich effect. The likelihood code used to assess cosmological models against the Planck data and a lensing likelihood are described. Scientific results include robust support for the standard six-parameter ΛCDM model of cosmology and improved measurements of its parameters, including a highly significant deviation from scale invariance of the primordial power spectrum. The Planck values for these parameters and others derived from them are significantly different from those previously determined. Several large-scale anomalies in the temperature distribution of the CMB, first detected by WMAP, are confirmed with higher confidence. Planck sets new limits on the number and mass of neutrinos, and has measured gravitational lensing of CMB anisotropies at greater than 25σ. Planck finds no evidence for non-Gaussianity in the CMB. Planck’s results agree well with results from the measurements of baryon acoustic oscillations. Planck finds a lower Hubble constant than is found in some more direct local measurements.
Some tension is also present between the amplitude of matter fluctuations (σ8) derived from CMB data and that derived from Sunyaev-Zeldovich data. The Planck and WMAP power spectra are offset from each other by an average level of about 2% around the first acoustic peak. Analysis of Planck polarization data is not yet mature, therefore polarization results are not released, although the robust detection of E-mode polarization around CMB hot and cold spots is shown graphically.

1,719 citations


Journal ArticleDOI
TL;DR: All data manually curated by the MINT curators have been moved into the IntAct database at EMBL-EBI and are merged with the existing IntAct dataset.
Abstract: IntAct (freely available at http://www.ebi.ac.uk/intact) is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. IntAct has developed a sophisticated web-based curation tool, capable of supporting both IMEx- and MIMIx-level curation. This tool is now utilized by multiple additional curation teams, all of whom annotate data directly into the IntAct database. Members of the IntAct team supply appropriate levels of training, perform quality control on entries and take responsibility for long-term data maintenance. Recently, the MINT and IntAct databases decided to merge their separate efforts to make optimal use of limited developer resources and maximize the curation output. All data manually curated by the MINT curators have been moved into the IntAct database at EMBL-EBI and are merged with the existing IntAct dataset. Both IntAct and MINT are active contributors to the IMEx consortium (http://www.imexconsortium.org).

1,602 citations


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud3, Frederico Arroja4  +321 moreInstitutions (79)
TL;DR: In this article, the authors present the implications for cosmic inflation of the Planck measurements of the cosmic microwave background (CMB) anisotropies in both temperature and polarization based on the full Planck survey.
Abstract: We present the implications for cosmic inflation of the Planck measurements of the cosmic microwave background (CMB) anisotropies in both temperature and polarization based on the full Planck survey, which includes more than twice the integration time of the nominal survey used for the 2013 release papers. The Planck full mission temperature data and a first release of polarization data on large angular scales measure the spectral index of curvature perturbations to be n_s = 0.968 ± 0.006 and tightly constrain its scale dependence to dn_s/dln k = −0.003 ± 0.007 when combined with the Planck lensing likelihood. When the Planck high-l polarization data are included, the results are consistent and uncertainties are further reduced. The upper bound on the tensor-to-scalar ratio is r_0.002 < 0.11 (95% CL). This upper limit is consistent with the B-mode polarization constraint r < 0.12 (95% CL) obtained from a joint analysis of the BICEP2/Keck Array and Planck data. These results imply that V(φ) ∝ φ2 and natural inflation are now disfavoured compared to models predicting a smaller tensor-to-scalar ratio, such as R2 inflation. We search for several physically motivated deviations from a simple power-law spectrum of curvature perturbations, including those motivated by a reconstruction of the inflaton potential not relying on the slow-roll approximation. We find that such models are not preferred, either according to a Bayesian model comparison or according to a frequentist simulation-based analysis. Three independent methods reconstructing the primordial power spectrum consistently recover a featureless and smooth spectrum over the range of scales 0.008 Mpc^-1 ≲ k ≲ 0.1 Mpc^-1. At large scales, each method finds deviations from a power law, connected to a deficit at multipoles l ≈ 20−40 in the temperature power spectrum, but at an uncompelling statistical significance owing to the large cosmic variance present at these multipoles.
By combining power spectrum and non-Gaussianity bounds, we constrain models with generalized Lagrangians, including Galileon models and axion monodromy models. The Planck data are consistent with adiabatic primordial perturbations, and the estimated values for the parameters of the base Λ cold dark matter (ΛCDM) model are not significantly altered when more general initial conditions are admitted. In correlated mixed adiabatic and isocurvature models, the 95% CL upper bound for the non-adiabatic contribution to the observed CMB temperature variance is |α_non-adi| < 1.9%, 4.0%, and 2.9% for CDM, neutrino density, and neutrino velocity isocurvature modes, respectively. We have tested inflationary models producing an anisotropic modulation of the primordial curvature power spectrum, finding that the dipolar modulation in the CMB temperature field induced by a CDM isocurvature perturbation is not preferred at a statistically significant level. We also establish tight constraints on a possible quadrupolar modulation of the curvature perturbation. These results are consistent with the Planck 2013 analysis based on the nominal mission data and further constrain slow-roll single-field inflationary models, as expected from the increased precision of Planck data using the full set of observations.

1,401 citations


Journal ArticleDOI
M. G. Aartsen1, Markus Ackermann, Jenni Adams2, Juanan Aguilar3  +299 moreInstitutions (41)
TL;DR: Results from an analysis with a third year of data from the complete IceCube detector are consistent with the previously reported astrophysical flux in the 100 TeV-PeV range at the level of 10^-8 GeV cm^-2 s^-1 sr^-1 per flavor and reject a purely atmospheric explanation for the combined three-year data at 5.7σ.
Abstract: A search for high-energy neutrinos interacting within the IceCube detector between 2010 and 2012 provided the first evidence for a high-energy neutrino flux of extraterrestrial origin. Results from an analysis using the same methods with a third year (2012-2013) of data from the complete IceCube detector are consistent with the previously reported astrophysical flux in the 100 TeV-PeV range at the level of 10^-8 GeV cm^-2 s^-1 sr^-1 per flavor and reject a purely atmospheric explanation for the combined three-year data at 5.7σ. The data are consistent with expectations for equal fluxes of all three neutrino flavors and with isotropic arrival directions, suggesting either numerous or spatially extended sources. The three-year data set, with a live time of 988 days, contains a total of 37 neutrino candidate events with deposited energies ranging from 30 to 2000 TeV. The 2000-TeV event is the highest-energy neutrino interaction ever observed.

1,183 citations


Journal ArticleDOI
24 Apr 2014-Nature
TL;DR: The key challenges of assessing sequence variants in human disease are discussed, integrating both gene-level and variant-level support for causality, and guidelines for summarizing confidence in variant pathogenicity are proposed.
Abstract: The discovery of rare genetic variants is accelerating, and clear guidelines for distinguishing disease-causing sequence variants from the many potentially functional variants present in any human genome are urgently needed. Without rigorous standards we risk an acceleration of false-positive reports of causality, which would impede the translation of genomic research findings into the clinical diagnostic setting and hinder biological understanding of disease. Here we discuss the key challenges of assessing sequence variants in human disease, integrating both gene-level and variant-level support for causality. We propose guidelines for summarizing confidence in variant pathogenicity and highlight several areas that require further resource development.

1,165 citations


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, C. Armitage-Caplan3, Monique Arnaud4  +273 moreInstitutions (59)
TL;DR: In this article, the authors characterized the effective beams, the effective beam window functions and the associated errors for the Planck High Frequency Instrument (HFI) detectors, including the effect of the optics, detectors, data processing and the scan strategy.
Abstract: This paper characterizes the effective beams, the effective beam window functions and the associated errors for the Planck High Frequency Instrument (HFI) detectors. The effective beam is the angular response including the effect of the optics, detectors, data processing and the scan strategy. The window function is the representation of this beam in the harmonic domain which is required to recover an unbiased measurement of the cosmic microwave background angular power spectrum. The HFI is a scanning instrument and its effective beams are the convolution of: a) the optical response of the telescope and feeds; b) the processing of the time-ordered data and deconvolution of the bolometric and electronic transfer function; and c) the merging of several surveys to produce maps. The time response transfer functions are measured using observations of Jupiter and Saturn and by minimizing survey difference residuals. The scanning beam is the post-deconvolution angular response of the instrument, and is characterized with observations of Mars. The main beam solid angles are determined to better than 0.5% at each HFI frequency band. Observations of Jupiter and Saturn limit near sidelobes (within 5 degrees) to about 0.1% of the total solid angle. Time response residuals remain as long tails in the scanning beams, but contribute less than 0.1% of the total solid angle. The bias and uncertainty in the beam products are estimated using ensembles of simulated planet observations that include the impact of instrumental noise and known systematic effects. The correlation structure of these ensembles is well described by five error eigenmodes that are sub-dominant to sample variance and instrumental noise in the harmonic domain. A suite of consistency tests provides confidence that the error model represents a sufficient description of the data.
The total error in the effective beam window functions is below 1% at 100 GHz up to multipole l ≈ 1500, and below 0.5% at 143 and 217 GHz up to l ≈ 2000.

1,124 citations


Journal ArticleDOI
TL;DR: Photoluminescence, transient absorption, time-resolved terahertz and microwave conductivity measurements are applied to determine the time scales of generation and recombination of charge carriers as well as their transport properties in solution-processed CH3NH3PbI3 perovskite materials to unravel the remarkable intrinsic properties of the material.
Abstract: Organometal halide perovskite-based solar cells have recently been reported to be highly efficient, giving an overall power conversion efficiency of up to 15%. However, much of the fundamental photophysical properties underlying this performance has remained unknown. Here, we apply photoluminescence, transient absorption, time-resolved terahertz and microwave conductivity measurements to determine the time scales of generation and recombination of charge carriers as well as their transport properties in solution-processed CH3NH3PbI3 perovskite materials. We found that electron–hole pairs are generated almost instantaneously after photoexcitation and dissociate in 2 ps forming highly mobile charges (25 cm2 V–1 s–1) in the neat perovskite and in perovskite/alumina blends; almost balanced electron and hole mobilities remain very high up to the microsecond time scale. When the perovskite is introduced into a TiO2 mesoporous structure, electron injection from perovskite to the metal oxide is efficient in less ...
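To see why these numbers imply efficient charge collection, one can combine the quoted mobility with the microsecond carrier lifetime via the Einstein relation to estimate a diffusion length. This is a back-of-the-envelope sketch under room-temperature assumptions, not a figure reported in the paper:

```python
# Order-of-magnitude estimate (illustrative, not from the paper): the
# diffusion length implied by the quoted carrier mobility and lifetime,
# using the Einstein relation D = mu * (kT/q) at ~300 K.
mu = 25.0           # cm^2 V^-1 s^-1, mobility quoted in the abstract
kT_over_q = 0.0259  # V, thermal voltage at room temperature (assumed)
tau = 1e-6          # s, carriers persist up to the microsecond scale

D = mu * kT_over_q            # diffusion coefficient, cm^2/s
L_diff = (D * tau) ** 0.5     # diffusion length, cm
print(f"L_diff ~ {L_diff * 1e4:.0f} um")  # prints L_diff ~ 8 um
```

A micron-scale diffusion length comfortably exceeds the film thicknesses typical of these devices, which is consistent with the abstract's conclusion that the material's intrinsic transport properties underpin its high power conversion efficiency.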

1,093 citations


Journal ArticleDOI
01 Aug 2014-Allergy
TL;DR: The current understanding of the manifestations of food allergy, the role of diagnostic tests, and the effective management of patients of all ages with food allergy is presented.
Abstract: Food allergy can result in considerable morbidity, impact negatively on quality of life, and prove costly in terms of medical care. These guidelines have been prepared by the European Academy of Allergy and Clinical Immunology's (EAACI) Guidelines for Food Allergy and Anaphylaxis Group, building on previous EAACI position papers on adverse reaction to foods and three recent systematic reviews on the epidemiology, diagnosis, and management of food allergy, and provide evidence-based recommendations for the diagnosis and management of food allergy. While the primary audience is allergists, this document is relevant for all other healthcare professionals, including primary care physicians, and pediatric and adult specialists, dieticians, pharmacists and paramedics. Our current understanding of the manifestations of food allergy, the role of diagnostic tests, and the effective management of patients of all ages with food allergy is presented. The acute management of non-life-threatening reactions is covered in these guidelines, but for guidance on the emergency management of anaphylaxis, readers are referred to the related EAACI Anaphylaxis Guidelines.

Journal ArticleDOI
Anubha Mahajan1, Min Jin Go, Weihua Zhang2, Jennifer E. Below3  +392 moreInstitutions (104)
TL;DR: In this paper, the authors aggregated published meta-analyses of genome-wide association studies (GWAS), including 26,488 cases and 83,964 controls of European, east Asian, south Asian and Mexican and Mexican American ancestry.
Abstract: To further understanding of the genetic basis of type 2 diabetes (T2D) susceptibility, we aggregated published meta-analyses of genome-wide association studies (GWAS), including 26,488 cases and 83,964 controls of European, east Asian, south Asian and Mexican and Mexican American ancestry. We observed a significant excess in the directional consistency of T2D risk alleles across ancestry groups, even at SNPs demonstrating only weak evidence of association. By following up the strongest signals of association from the trans-ethnic meta-analysis in an additional 21,491 cases and 55,647 controls of European ancestry, we identified seven new T2D susceptibility loci. Furthermore, we observed considerable improvements in the fine-mapping resolution of common variant association signals at several T2D susceptibility loci. These observations highlight the benefits of trans-ethnic GWAS for the discovery and characterization of complex trait loci and emphasize an exciting opportunity to extend insight into the genetic architecture and pathogenesis of human diseases across populations of diverse ancestry.

Journal ArticleDOI
Patrick J. Keeling1, Patrick J. Keeling2, Fabien Burki1, Heather M. Wilcox3, Bassem Allam4, Eric E. Allen5, Linda A. Amaral-Zettler6, Linda A. Amaral-Zettler7, E. Virginia Armbrust8, John M. Archibald2, John M. Archibald9, Arvind K. Bharti10, Callum J. Bell10, Bank Beszteri11, Kay D. Bidle12, Connor Cameron10, Lisa Campbell13, David A. Caron14, Rose Ann Cattolico8, Jackie L. Collier4, Kathryn J. Coyne15, Simon K. Davy16, Phillipe Deschamps17, Sonya T. Dyhrman18, Bente Edvardsen19, Ruth D. Gates20, Christopher J. Gobler4, Spencer J. Greenwood21, Stephanie Guida10, Jennifer L. Jacobi10, Kjetill S. Jakobsen19, Erick R. James1, Bethany D. Jenkins22, Uwe John11, Matthew D. Johnson23, Andrew R. Juhl18, Anja Kamp24, Anja Kamp25, Laura A. Katz26, Ronald P. Kiene27, Alexander Kudryavtsev28, Alexander Kudryavtsev29, Brian S. Leander1, Senjie Lin30, Connie Lovejoy31, Denis H. Lynn1, Denis H. Lynn32, Adrian Marchetti33, George B. McManus30, Aurora M. Nedelcu34, Susanne Menden-Deuer22, Cristina Miceli35, Thomas Mock36, Marina Montresor37, Mary Ann Moran38, Shauna A. Murray39, Govind Nadathur40, Satoshi Nagai, Peter B. Ngam10, Brian Palenik5, Jan Pawlowski28, Giulio Petroni41, Gwenael Piganeau42, Matthew C. Posewitz43, Karin Rengefors44, Giovanna Romano37, Mary E. Rumpho30, Tatiana A. Rynearson22, Kelly B. Schilling10, Declan C. Schroeder, Alastair G. B. Simpson2, Alastair G. B. Simpson9, Claudio H. Slamovits2, Claudio H. Slamovits9, David Roy Smith45, G. Jason Smith46, Sarah R. Smith5, Heidi M. Sosik23, Peter Stief25, Edward C. Theriot47, Scott N. Twary48, Pooja E. Umale10, Daniel Vaulot49, Boris Wawrik50, Glen L. Wheeler51, William H. Wilson52, Yan Xu53, Adriana Zingone37, Alexandra Z. Worden3, Alexandra Z. Worden2 
University of British Columbia1, Canadian Institute for Advanced Research2, Monterey Bay Aquarium Research Institute3, Stony Brook University4, University of California, San Diego5, Marine Biological Laboratory6, Brown University7, University of Washington8, Dalhousie University9, National Center for Genome Resources10, Alfred Wegener Institute for Polar and Marine Research11, Rutgers University12, Texas A&M University13, University of Southern California14, University of Delaware15, Victoria University of Wellington16, University of Paris-Sud17, Columbia University18, University of Oslo19, University of Hawaii at Manoa20, University of Prince Edward Island21, University of Rhode Island22, Woods Hole Oceanographic Institution23, Jacobs University Bremen24, Max Planck Society25, Smith College26, University of South Alabama27, University of Geneva28, Saint Petersburg State University29, University of Connecticut30, Laval University31, University of Guelph32, University of North Carolina at Chapel Hill33, University of New Brunswick34, University of Camerino35, University of East Anglia36, Stazione Zoologica Anton Dohrn37, University of Georgia38, University of Technology, Sydney39, University of Puerto Rico40, University of Pisa41, Centre national de la recherche scientifique42, Colorado School of Mines43, Lund University44, University of Western Ontario45, California State University46, University of Texas at Austin47, Los Alamos National Laboratory48, Pierre-and-Marie-Curie University49, University of Oklahoma50, Plymouth Marine Laboratory51, Bigelow Laboratory For Ocean Sciences52, Princeton University53
TL;DR: In this paper, the authors describe a resource of 700 transcriptomes from marine microbial eukaryotes to help understand their role in the world's oceans and their biology, evolution, and ecology.
Abstract: Current sampling of genomic sequence data from eukaryotes is relatively poor, biased, and inadequate to address important questions about their biology, evolution, and ecology; this Community Page describes a resource of 700 transcriptomes from marine microbial eukaryotes to help understand their role in the world's oceans.

Journal ArticleDOI
01 Aug 2014-Allergy
TL;DR: These guidelines aim to provide evidence‐based recommendations for the recognition, risk factor assessment, and the management of patients who are at risk of, are experiencing, or have experienced anaphylaxis, and to prevent future episodes by developing personalized risk reduction strategies including, where possible, commencing allergen immunotherapy.
Abstract: Anaphylaxis is a clinical emergency, and all healthcare professionals should be familiar with its recognition and acute and ongoing management. These guidelines have been prepared by the European Academy of Allergy and Clinical Immunology (EAACI) Taskforce on Anaphylaxis. They aim to provide evidence-based recommendations for the recognition, risk factor assessment, and the management of patients who are at risk of, are experiencing, or have experienced anaphylaxis. While the primary audience is allergists, these guidelines are also relevant to all other healthcare professionals. The development of these guidelines has been underpinned by two systematic reviews of the literature, both on the epidemiology and on clinical management of anaphylaxis. Anaphylaxis is a potentially life-threatening condition whose clinical diagnosis is based on recognition of a constellation of presenting features. First-line treatment for anaphylaxis is intramuscular adrenaline. Useful second-line interventions may include removing the trigger where possible, calling for help, correct positioning of the patient, high-flow oxygen, intravenous fluids, inhaled short-acting bronchodilators, and nebulized adrenaline. Discharge arrangements should involve an assessment of the risk of further reactions, a management plan with an anaphylaxis emergency action plan, and, where appropriate, prescribing an adrenaline auto-injector. If an adrenaline auto-injector is prescribed, education on when and how to use the device should be provided. Specialist follow-up is essential to investigate possible triggers, to perform a comprehensive risk assessment, and to prevent future episodes by developing personalized risk reduction strategies including, where possible, commencing allergen immunotherapy. Training for the patient and all caregivers is essential. There are still many gaps in the evidence base for anaphylaxis.

Journal ArticleDOI
M. Aguilar, D. Aisa1, Behcet Alpat, A. Alvino  +291 moreInstitutions (33)
TL;DR: In this paper, a precise measurement of the proton flux in primary cosmic rays with rigidity (momentum/charge) from 1 GV to 1.8 TV is presented based on 300 million events.
Abstract: A precise measurement of the proton flux in primary cosmic rays with rigidity (momentum/charge) from 1 GV to 1.8 TV is presented based on 300 million events. Knowledge of the rigidity dependence of the proton flux is important in understanding the origin, acceleration, and propagation of cosmic rays. We present the detailed variation with rigidity of the flux spectral index for the first time. The spectral index progressively hardens at high rigidities.

Journal ArticleDOI
Alain Abergel1, Peter A. R. Ade2, Nabila Aghanim1, M. I. R. Alves1  +307 moreInstitutions (66)
TL;DR: In this article, the authors presented an all-sky model of dust emission from the Planck 857, 545 and 353 GHz, and IRAS 100 micron data.
Abstract: This paper presents an all-sky model of dust emission from the Planck 857, 545 and 353 GHz, and IRAS 100 micron data. Using a modified black-body fit to the data we present all-sky maps of the dust optical depth, temperature, and spectral index over the 353-3000 GHz range. This model is a tight representation of the data at 5 arcmin. It shows variations of the order of 30% compared with the widely-used model of Finkbeiner, Davis, and Schlegel. The Planck data allow us to estimate the dust temperature uniformly over the whole sky, providing an improved estimate of the dust optical depth compared to previous all-sky dust models, especially in high-contrast molecular regions. An increase of the dust opacity at 353 GHz, tau_353/N_H, from the diffuse to the denser interstellar medium (ISM) is reported. It is associated with a decrease in the observed dust temperature, T_obs, that could be due at least in part to the increased dust opacity. We also report an excess of dust emission at HI column densities lower than 10^20 cm^-2 that could be the signature of dust in the warm ionized medium. In the diffuse ISM at high Galactic latitude, we report an anti-correlation between tau_353/N_H and T_obs while the dust specific luminosity, i.e., the total dust emission integrated over frequency (the radiance) per hydrogen atom, stays about constant. The implication is that in the diffuse high-latitude ISM tau_353 is not as reliable a tracer of dust column density as we conclude it is in molecular clouds, where the correlation of tau_353 with dust extinction estimated using colour excess measurements on stars is strong. To estimate Galactic E(B-V) in extragalactic fields at high latitude we develop a new method based on the thermal dust radiance, instead of the dust optical depth, calibrated to E(B-V) using reddening measurements of quasars deduced from Sloan Digital Sky Survey data.

Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Yashar Akrami3, Yashar Akrami4  +310 moreInstitutions (70)
TL;DR: In this article, the statistical isotropy and Gaussianity of the cosmic microwave background (CMB) anisotropies using observations made by the Planck satellite were investigated.
Abstract: We test the statistical isotropy and Gaussianity of the cosmic microwave background (CMB) anisotropies using observations made by the Planck satellite. Our results are based mainly on the full Planck mission for temperature, but also include some polarization measurements. In particular, we consider the CMB anisotropy maps derived from the multi-frequency Planck data by several component-separation methods. For the temperature anisotropies, we find excellent agreement between results based on these sky maps over both a very large fraction of the sky and a broad range of angular scales, establishing that potential foreground residuals do not affect our studies. Tests of skewness, kurtosis, multi-normality, N-point functions, and Minkowski functionals indicate consistency with Gaussianity, while a power deficit at large angular scales is manifested in several ways, for example low map variance. The results of a peak statistics analysis are consistent with the expectations of a Gaussian random field. The “Cold Spot” is detected with several methods, including map kurtosis, peak statistics, and mean temperature profile. We thoroughly probe the large-scale dipolar power asymmetry, detecting it with several independent tests, and address the subject of a posteriori correction. Tests of directionality suggest the presence of angular clustering from large to small scales, but at a significance that is dependent on the details of the approach. We perform the first examination of polarization data, finding the morphology of stacked peaks to be consistent with the expectations of statistically isotropic simulations. Where they overlap, these results are consistent with the Planck 2013 analysis based on the nominal mission data and provide our most thorough view of the statistics of the CMB fluctuations to date.

Journal ArticleDOI
TL;DR: In this paper, a large-scale hydrodynamical cosmological simulation, Horizon-AGN, is used to investigate the alignment between the spin of galaxies and the large-scale cosmic filaments above redshift one.
Abstract: A large-scale hydrodynamical cosmological simulation, Horizon-AGN, is used to investigate the alignment between the spin of galaxies and the large-scale cosmic filaments above redshift one. The analysis of more than 150 000 galaxies with morphological diversity in a 100 h^-1 Mpc comoving box shows that the spin of low-mass, rotation-dominated, blue, star-forming galaxies is preferentially aligned with their neighbouring filaments. High-mass, dispersion-dominated, red, quiescent galaxies tend to have a spin perpendicular to nearby filaments. The reorientation of the spin of massive galaxies is provided by galaxy mergers, which are significant in the mass build-up of high-mass galaxies. We find that the stellar-mass transition from alignment to misalignment happens around 3 × 10^10 M⊙. This is consistent with earlier findings of a dark matter mass transition for the orientation of the spin of halos (5 × 10^11 M⊙ at the same redshift, from Codis et al. 2012). With this numerical evidence, we advocate a scenario in which galaxies form in the vorticity-rich neighbourhood of filaments and migrate towards the nodes of the cosmic web as they convert their orbital angular momentum into spin. The signature of this process can be traced in the physical and morphological properties of galaxies, as measured relative to the cosmic web. We argue that a strong source of feedback, such as Active Galactic Nuclei, is mandatory to quench in situ star formation in massive galaxies. It allows mergers to play their key role by reducing post-merger gas inflows and, therefore, keeping galaxy spins misaligned with cosmic filaments. It also promotes diversity amongst galaxy properties.

Journal ArticleDOI
TL;DR: In this article, the state of the art of 21st century climate change in the Alps is reviewed, based on existing literature and additional analyses, explicitly considering the reliability and uncertainty of climate projections.

Journal ArticleDOI
TL;DR: It is demonstrated that the full four-helical bundle domain (4HBD) in the N-terminal region of MLKL is both required and sufficient to induce its oligomerization and trigger cell death, and that a patch of positively charged amino acids on the surface of the 4HBD binds phosphatidylinositol phosphates (PIPs), allowing recruitment of MLKL to the plasma membrane.

Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud3, M. Ashdown  +282 moreInstitutions (70)
TL;DR: In this article, the authors presented cluster counts and corresponding cosmological constraints from the Planck full mission data set and extended their analysis to the two-dimensional distribution in redshift and signal-to-noise.
Abstract: We present cluster counts and corresponding cosmological constraints from the Planck full mission data set. Our catalogue consists of 439 clusters detected via their Sunyaev-Zeldovich (SZ) signal down to a signal-to-noise ratio of 6, and is more than a factor of 2 larger than the 2013 Planck cluster cosmology sample. The counts are consistent with those from 2013 and yield compatible constraints under the same modelling assumptions. Taking advantage of the larger catalogue, we extend our analysis to the two-dimensional distribution in redshift and signal-to-noise. We use mass estimates from two recent studies of gravitational lensing of background galaxies by Planck clusters to provide priors on the hydrostatic bias parameter, (1−b). In addition, we use lensing of cosmic microwave background (CMB) temperature fluctuations by Planck clusters as an independent constraint on this parameter. These various calibrations imply constraints on the present-day amplitude of matter fluctuations in varying degrees of tension with those from the Planck analysis of primary fluctuations in the CMB; for the lowest estimated values of (1−b) the tension is mild, only a little over one standard deviation, while it remains substantial (3.7σ) for the largest estimated value. We also examine constraints on extensions to the base flat ΛCDM model by combining the cluster and CMB constraints. The combination appears to favour non-minimal neutrino masses, but this possibility does little to relieve the overall tension because it simultaneously lowers the implied value of the Hubble parameter, thereby exacerbating the discrepancy with most current astrophysical estimates. Improving the precision of cluster mass calibrations from the current 10%-level to 1% would significantly strengthen these combined analyses and provide a stringent test of the base ΛCDM model.

Journal ArticleDOI
S. Chatrchyan, Khachatryan1, Albert M. Sirunyan, Armen Tumasyan  +2384 moreInstitutions (207)
26 May 2014
TL;DR: In this paper, a description of the software algorithms developed for the CMS tracker both for reconstructing charged-particle trajectories in proton-proton interactions and for using the resulting tracks to estimate the positions of the LHC luminous region and individual primary-interaction vertices is provided.
Abstract: A description is provided of the software algorithms developed for the CMS tracker, both for reconstructing charged-particle trajectories in proton-proton interactions and for using the resulting tracks to estimate the positions of the LHC luminous region and individual primary-interaction vertices. Despite the very hostile environment at the LHC, the performance obtained with these algorithms is found to be excellent. For tt̄ events under typical 2011 pileup conditions, the average track-reconstruction efficiency for promptly-produced charged particles with transverse momenta of p_T > 0.9 GeV is 94% for pseudorapidities of |η| < 0.9 and 85% for 0.9 < |η| < 2.5. The inefficiency is caused mainly by hadrons that undergo nuclear interactions in the tracker material. For isolated muons, the corresponding efficiencies are essentially 100%. For isolated muons of p_T = 100 GeV emitted at |η| < 1.4, the resolutions are approximately 2.8% in p_T and, respectively, 10 μm and 30 μm in the transverse and longitudinal impact parameters. The position resolution achieved for reconstructed primary vertices that correspond to interesting pp collisions is 10–12 μm in each of the three spatial dimensions. The tracking and vertexing software is fast and flexible, and easily adaptable to other functions, such as fast tracking for the trigger, or dedicated tracking for electrons that takes into account bremsstrahlung.

Journal ArticleDOI
TL;DR: The development of treatments for Alzheimer's disease during the past 30 years is reviewed, considering the drugs, potential targets, late‐stage clinical trials, development methods, emerging use of biomarkers and evolution of regulatory considerations to summarize advances and anticipate future developments.
Abstract: The modern era of drug development for Alzheimer's disease began with the proposal of the cholinergic hypothesis of memory impairment and the 1984 research criteria for Alzheimer's disease. Since then, despite the evaluation of numerous potential treatments in clinical trials, only four cholinesterase inhibitors and memantine have shown sufficient safety and efficacy to allow marketing approval at an international level. Although this is probably because the other drugs tested were ineffective, inadequate clinical development methods have also been blamed for the failures. Here, we review the development of treatments for Alzheimer's disease during the past 30 years, considering the drugs, potential targets, late-stage clinical trials, development methods, emerging use of biomarkers and evolution of regulatory considerations in order to summarize advances and anticipate future developments. We have considered late-stage Alzheimer's disease drug development from 1984 to 2013, including individual clinical trials, systematic and qualitative reviews, meta-analyses, methods, commentaries, position papers and guidelines. We then review the evolution of drugs in late clinical development, methods, biomarkers and regulatory issues. Although a range of small molecules and biological products against many targets have been investigated in clinical trials, the predominant drug targets have been the cholinergic system and the amyloid cascade. Trial methods have evolved incrementally: inclusion criteria have largely remained focused on mild-to-moderate Alzheimer's disease criteria, recently extending to early or prodromal Alzheimer's disease or 'mild cognitive impairment due to Alzheimer's disease', for drugs considered to be disease modifying. The duration of trials has remained at 6-12 months for drugs intended to improve symptoms; 18- to 24-month trials have been established for drugs expected to attenuate clinical course.
Cognitive performance, activities of daily living, global change and severity ratings have persisted as the primary clinically relevant outcomes. Regulatory guidance and oversight have evolved to allow for enrichment of early-stage Alzheimer's disease trial samples using biomarkers and phase-specific outcomes. In conclusion, validated drug targets for Alzheimer's disease remain to be developed. Only drugs that affect an aspect of cholinergic function have shown consistent, but modest, clinical effects in late-phase trials. There is opportunity for substantial improvements in drug discovery and clinical development methods.

Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, C. Armitage-Caplan3, Monique Arnaud2  +326 moreInstitutions (66)
TL;DR: In this article, the authors describe the all-sky Planck catalogue of clusters and cluster candidates derived from Sunyaev-Zeldovich (SZ) effect detections using the first 15.5 months of Planck satellite observations.
Abstract: We describe the all-sky Planck catalogue of clusters and cluster candidates derived from Sunyaev-Zeldovich (SZ) effect detections using the first 15.5 months of Planck satellite observations. The catalogue contains 1227 entries, making it over six times the size of the Planck Early SZ (ESZ) sample and the largest SZ-selected catalogue to date. It contains 861 confirmed clusters, of which 178 have been confirmed as clusters, mostly through follow-up observations, and a further 683 are previously-known clusters. The remaining 366 have the status of cluster candidates, and we divide them into three classes according to the quality of evidence that they are likely to be true clusters. The Planck SZ catalogue is the deepest all-sky cluster catalogue, with redshifts up to about one, and spans the broadest cluster mass range, from (0.1 to 1.6) × 10^15 M⊙. Confirmation of cluster candidates through comparison with existing surveys or cluster catalogues is extensively described, as is the statistical characterization of the catalogue in terms of completeness and statistical reliability. The outputs of the validation process are provided as additional information. This gives, in particular, an ensemble of 813 cluster redshifts, and for all these Planck clusters we also include a mass estimated from a newly-proposed SZ-mass proxy. A refined measure of the SZ Compton parameter for the clusters with X-ray counterparts is provided, as is an X-ray flux for all the Planck clusters not previously detected in X-ray surveys.

Journal ArticleDOI
TL;DR: It is found, analytically and numerically, that decorrelation does not imply an increase in information, and the effect of differential correlations on information can be detected with relatively simple decoders.
Abstract: Computational strategies used by the brain strongly depend on the amount of information that can be stored in population activity, which in turn strongly depends on the pattern of noise correlations. In vivo, noise correlations tend to be positive and proportional to the similarity in tuning properties. Such correlations are thought to limit information, which has led to the suggestion that decorrelation increases information. In contrast, we found, analytically and numerically, that decorrelation does not imply an increase in information. Instead, the only information-limiting correlations are what we refer to as differential correlations: correlations proportional to the product of the derivatives of the tuning curves. Unfortunately, differential correlations are likely to be very small and buried under correlations that do not limit information, making them particularly difficult to detect. We found, however, that the effect of differential correlations on information can be detected with relatively simple decoders.
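The information-limiting structure described above can be illustrated numerically: adding to the noise covariance a component proportional to the outer product of the tuning-curve derivatives, Σ = Σ₀ + ε f′f′ᵀ, caps the linear Fisher information at 1/ε no matter how many neurons are added. A sketch under simplified assumptions (independent baseline noise, random tuning derivatives; all names and values are illustrative):

```python
import numpy as np

def linear_fisher_info(fprime, cov):
    """Linear Fisher information I = f'(s)^T Sigma^{-1} f'(s)."""
    return float(fprime @ np.linalg.solve(cov, fprime))

rng = np.random.default_rng(0)
n = 200                       # population size (illustrative)
fprime = rng.normal(size=n)   # tuning-curve derivatives at stimulus s
sigma0 = np.eye(n)            # independent noise: no info-limiting part
eps = 0.01                    # strength of differential correlations

info_indep = linear_fisher_info(fprime, sigma0)

# Differential correlations: Sigma = Sigma0 + eps * f' f'^T
sigma = sigma0 + eps * np.outer(fprime, fprime)
info_diff = linear_fisher_info(fprime, sigma)
```

By the Sherman-Morrison identity, I = I₀ / (1 + ε I₀) < 1/ε, so enlarging the population cannot recover the lost information, while correlations lacking this f′f′ᵀ component leave the bound unaffected.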

Journal ArticleDOI
K. Abe1, J. Adam2, Hiroaki Aihara3, T. Akiri4  +335 moreInstitutions (52)
TL;DR: The T2K experiment has observed electron neutrino appearance in a muon neutrino beam produced 295 km from the Super-Kamiokande detector with a peak energy of 0.6 GeV, corresponding to a significance of 7.3σ.
Abstract: The T2K experiment has observed electron neutrino appearance in a muon neutrino beam produced 295 km from the Super-Kamiokande detector with a peak energy of 0.6 GeV. A total of 28 electron neutrino events were detected with an energy distribution consistent with an appearance signal, corresponding to a significance of 7.3σ when compared to 4.92 ± 0.55 expected background events. In the Pontecorvo-Maki-Nakagawa-Sakata mixing model, the electron neutrino appearance signal depends on several parameters, including the three mixing angles θ₁₂, θ₂₃, θ₁₃, a mass difference |Δm²₃₂|, and a CP-violating phase δCP. In this neutrino oscillation scenario, assuming |Δm²₃₂| = 2.4 × 10⁻³ eV², sin²θ₂₃ = 0.5, and Δm²₃₂ > 0 (Δm²₃₂ < 0), a best-fit value of sin²2θ₁₃ = 0.140 +0.038/−0.032 (0.170 +0.045/−0.037) is obtained at δCP = 0. When combining the result with the current best knowledge of oscillation parameters, including the world-average value of θ₁₃ from reactor experiments, some values of δCP are disfavored at the 90% C.L.
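The dependence of the appearance signal on these parameters can be sketched with the leading-order two-flavour vacuum formula P(νμ → νe) ≈ sin²θ₂₃ · sin²2θ₁₃ · sin²(1.27 Δm² L/E). This is an approximation only, neglecting matter effects and the δCP-dependent terms of the full T2K oscillation fit:

```python
import math

def p_mue_two_flavor(l_km, e_gev, sin2_2theta13=0.140, sin2_theta23=0.5,
                     dm2_ev2=2.4e-3):
    """Leading-order vacuum nu_mu -> nu_e appearance probability:
    P ~ sin^2(theta_23) * sin^2(2 theta_13) * sin^2(1.27 * dm^2 * L / E),
    with L in km, E in GeV, dm^2 in eV^2."""
    osc = math.sin(1.27 * dm2_ev2 * l_km / e_gev) ** 2
    return sin2_theta23 * sin2_2theta13 * osc

# Baseline and peak energy quoted in the abstract: L = 295 km, E = 0.6 GeV
p = p_mue_two_flavor(295.0, 0.6)
```

At this baseline and energy the phase 1.27 Δm² L/E ≈ 1.5 sits close to the first oscillation maximum (π/2), which is what makes the experiment sensitive to θ₁₃.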

Journal ArticleDOI
L. Accardo1, M. Aguilar, D. Aisa1, D. Aisa2  +308 moreInstitutions (28)
TL;DR: The new results show, for the first time, that above ∼200 GeV the positron fraction no longer exhibits an increase with energy.
Abstract: A precision measurement by AMS of the positron fraction in primary cosmic rays in the energy range from 0.5 to 500 GeV based on 10.9 million positron and electron events is presented. This measurement extends the energy range of our previous observation and increases its precision. The new results show, for the first time, that above ∼200 GeV the positron fraction no longer exhibits an increase with energy.

Journal ArticleDOI
TL;DR: PhenomP is presented, a frequency-domain model that captures the basic phenomenology of the seven-dimensional parameter space of binary configurations with only three key physical parameters, and that can be used to develop GW searches, to study the implications for astrophysical measurements, and as a simple conceptual framework to form the basis of generic-binary waveform modeling in the advanced-detector era.
Abstract: The construction of a model of the gravitational-wave (GW) signal from generic configurations of spinning-black-hole binaries, through inspiral, merger, and ringdown, is one of the most pressing theoretical problems in the buildup to the era of GW astronomy. We present the first such model in the frequency domain, PhenomP, which captures the basic phenomenology of the seven-dimensional parameter space of binary configurations with only three key physical parameters. Two of these (the binary’s mass ratio and an effective total spin parallel to the orbital angular momentum, which determines the inspiral rate) define an underlying nonprecessing-binary model. The nonprecessing-binary waveforms are then twisted up with approximate expressions for the precessional motion, which require only one additional physical parameter, an effective precession spin, χp. All other parameters (total mass, sky location, orientation and polarization, and initial phase) can be specified trivially. The model is constructed in the frequency domain, which will be essential for efficient GW searches and source measurements. We have tested the model’s fidelity for GW applications by comparison against hybrid post-Newtonian-numerical-relativity waveforms at a variety of configurations—although we did not use these numerical simulations in the construction of the model. Our model can be used to develop GW searches, to study the implications for astrophysical measurements, and as a simple conceptual framework to form the basis of generic-binary waveform modeling in the advanced-detector era.
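The two effective spin parameters referred to above are, in one common convention, a mass-weighted aligned spin and an effective precession spin χp built from the in-plane spin components. A hedged sketch of that convention (the weighting factors follow the usual PhenomP-style definition as understood here, and the binary values are illustrative, not from the paper):

```python
def chi_eff(m1, m2, chi1z, chi2z):
    """Mass-weighted effective aligned spin: components of the dimensionless
    spins along the orbital angular momentum, weighted by mass."""
    return (m1 * chi1z + m2 * chi2z) / (m1 + m2)

def chi_p(m1, m2, chi1_perp, chi2_perp):
    """Effective precession spin chi_p (one common definition; assumes
    m1 >= m2 and dimensionless in-plane spin magnitudes chi_i_perp)."""
    a1 = 2.0 + 1.5 * m2 / m1
    a2 = 2.0 + 1.5 * m1 / m2
    return max(a1 * m1**2 * chi1_perp, a2 * m2**2 * chi2_perp) / (a1 * m1**2)

# Illustrative 30 + 10 solar-mass binary (values assumed for the example)
ce = chi_eff(30.0, 10.0, 0.5, -0.2)   # (15 - 2) / 40 = 0.325
cp = chi_p(30.0, 10.0, 0.3, 0.8)      # primary's in-plane spin dominates
```

Collapsing the four in-plane spin components into the single parameter χp is what reduces the seven-dimensional spin-configuration space to the three key parameters quoted in the abstract.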

Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, C. Armitage-Caplan3, Monique Arnaud4  +325 moreInstitutions (68)
TL;DR: The Planck 2013 likelihood as mentioned in this paper is a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature.
Abstract: This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best estimate of the CMB angular power spectrum from Planck over three decades in multipole moment, l, covering 2 ≤ l ≤ 2500. The main source of uncertainty at l ≲ 1500 is cosmic variance. Uncertainties in small-scale foreground modelling and instrumental noise dominate the error budget at higher multipoles. For l < 50, our likelihood exploits all Planck frequency channels from 30 to 353 GHz, separating the cosmological CMB signal from diffuse Galactic foregrounds through a physically motivated Bayesian component separation technique. At l ≥ 50, we employ a correlated Gaussian likelihood approximation based on a fine-grained set of angular cross-spectra derived from multiple detector combinations between the 100, 143, and 217 GHz frequency channels, marginalising over power spectrum foreground templates. We validate our likelihood through an extensive suite of consistency tests, and assess the impact of residual foreground and instrumental uncertainties on the final cosmological parameters. We find good internal agreement among the high-l cross-spectra with residuals below a few μK² at l ≲ 1000, in agreement with estimated calibration uncertainties. We compare our results with foreground-cleaned CMB maps derived from all Planck frequencies, as well as with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. We further show that the best-fit ΛCDM cosmology is in excellent agreement with preliminary Planck EE and TE polarisation spectra. We find that the standard ΛCDM cosmology is well constrained by Planck from the measurements at l ≲ 1500.
One specific example is the spectral index of scalar perturbations, for which we report a 5.4σ deviation from scale invariance, ns = 1. Increasing the multipole range beyond l ≃ 1500 does not increase our accuracy for the ΛCDM parameters, but instead allows us to study extensions beyond the standard model. We find no indication of significant departures from the ΛCDM framework. Finally, we report a tension between the Planck best-fit ΛCDM model and the low-l spectrum in the form of a power deficit of 5–10% at l ≲ 40, with a statistical significance of 2.5–3σ. Without a theoretically motivated model for this power deficit, we do not elaborate further on its cosmological implications, but note that this is our most puzzling finding in an otherwise remarkably consistent data set.

Journal ArticleDOI
TL;DR: Allowing EC to compete with cigarettes in the market-place might decrease smoking-related morbidity and mortality, and health professionals may consider advising smokers unable or unwilling to quit through other routes to switch to EC as a safer alternative to smoking and a possible pathway to complete cessation of nicotine use.
Abstract: Aims We reviewed available research on the use, content and safety of electronic cigarettes (EC), and on their effects on users, to assess their potential for harm or benefit and to extract evidence that can guide future policy. Methods Studies were identified by systematic database searches and screening references to February 2014. Results EC aerosol can contain some of the toxicants present in tobacco smoke, but at levels which are much lower. Long-term health effects of EC use are unknown but compared with cigarettes, EC are likely to be much less, if at all, harmful to users or bystanders. EC are increasingly popular among smokers, but to date there is no evidence of regular use by never-smokers or by non-smoking children. EC enable some users to reduce or quit smoking. Conclusions Allowing EC to compete with cigarettes in the market-place might decrease smoking-related morbidity and mortality. Regulating EC as strictly as cigarettes, or even more strictly as some regulators propose, is not warranted on current evidence. Health professionals may consider advising smokers unable or unwilling to quit through other routes to switch to EC as a safer alternative to smoking and a possible pathway to complete cessation of nicotine use.