
Showing papers by Imperial College London, published in 2014


Journal ArticleDOI
Keith A. Olive1, Kaustubh Agashe2, Claude Amsler3, Mario Antonelli  +222 moreInstitutions (107)
TL;DR: The review as discussed by the authors summarizes much of particle physics and cosmology using data from previous editions, plus 3,283 new measurements from 899 papers, covering the recently discovered Higgs boson, leptons, quarks, mesons and baryons.
Abstract: The Review summarizes much of particle physics and cosmology. Using data from previous editions, plus 3,283 new measurements from 899 papers, we list, evaluate, and average measured properties of gauge bosons and the recently discovered Higgs boson, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as heavy neutrinos, supersymmetric and technicolor particles, axions, dark photons, etc. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as Supersymmetry, Extra Dimensions, Particle Detectors, Probability, and Statistics. Among the 112 reviews are many that are new or heavily revised including those on: Dark Energy, Higgs Boson Physics, Electroweak Model, Neutrino Cross Section Measurements, Monte Carlo Neutrino Generators, Top Quark, Dark Matter, Dynamical Electroweak Symmetry Breaking, Accelerator Physics of Colliders, High-Energy Collider Parameters, Big Bang Nucleosynthesis, Astrophysical Constants and Cosmological Parameters.

7,337 citations


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, C. Armitage-Caplan3, Monique Arnaud4  +324 moreInstitutions (70)
TL;DR: In this paper, the authors present the first cosmological results based on Planck measurements of the cosmic microwave background (CMB) temperature and lensing-potential power spectra, which are extremely well described by the standard spatially-flat six-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations.
Abstract: This paper presents the first cosmological results based on Planck measurements of the cosmic microwave background (CMB) temperature and lensing-potential power spectra. We find that the Planck spectra at high multipoles (l ≳ 40) are extremely well described by the standard spatially-flat six-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations. Within the context of this cosmology, the Planck data determine the cosmological parameters to high precision: the angular size of the sound horizon at recombination, the physical densities of baryons and cold dark matter, and the scalar spectral index are estimated to be θ∗ = (1.04147 ± 0.00062) × 10^-2, Ωbh^2 = 0.02205 ± 0.00028, Ωch^2 = 0.1199 ± 0.0027, and ns = 0.9603 ± 0.0073, respectively (note that in this abstract we quote 68% errors on measured parameters and 95% upper limits on other parameters). For this cosmology, we find a low value of the Hubble constant, H0 = (67.3 ± 1.2) km s^-1 Mpc^-1, and a high value of the matter density parameter, Ωm = 0.315 ± 0.017. These values are in tension with recent direct measurements of H0 and the magnitude-redshift relation for Type Ia supernovae, but are in excellent agreement with geometrical constraints from baryon acoustic oscillation (BAO) surveys. Including curvature, we find that the Universe is consistent with spatial flatness to percent level precision using Planck CMB data alone. We use high-resolution CMB data together with Planck to provide greater control on extragalactic foreground components in an investigation of extensions to the six-parameter ΛCDM model. We present selected results from a large grid of cosmological models, using a range of additional astrophysical data sets in addition to Planck and high-resolution CMB data. None of these models are favoured over the standard six-parameter ΛCDM cosmology. The deviation of the scalar spectral index from unity is insensitive to the addition of tensor modes and to changes in the matter content of the Universe. We find an upper limit of r_0.002 < 0.11 on the tensor-to-scalar ratio. There is no evidence for additional neutrino-like relativistic particles beyond the three families of neutrinos in the standard model. Using BAO and CMB data, we find Neff = 3.30 ± 0.27 for the effective number of relativistic degrees of freedom, and an upper limit of 0.23 eV for the sum of neutrino masses. Our results are in excellent agreement with big bang nucleosynthesis and the standard value of Neff = 3.046. We find no evidence for dynamical dark energy; using BAO and CMB data, the dark energy equation of state parameter is constrained to be w = -1.13 (+0.13, -0.10). We also use the Planck data to set limits on a possible variation of the fine-structure constant, dark matter annihilation and primordial magnetic fields. Despite the success of the six-parameter ΛCDM model in describing the Planck data at high multipoles, we note that this cosmology does not provide a good fit to the temperature power spectrum at low multipoles. The unusual shape of the spectrum in the multipole range 20 ≲ l ≲ 40 was seen previously in the WMAP data and is a real feature of the primordial CMB anisotropies. The poor fit to the spectrum at low multipoles is not of decisive significance, but is an “anomaly” in an otherwise self-consistent analysis of the Planck temperature data.
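
As a quick arithmetic cross-check of the quoted parameters (a minimal sketch; the small residual is expected, since the quoted Ωm also reflects rounding and a small neutrino contribution not included here):

```python
# Cross-check: Omega_m should roughly equal (Omega_b*h^2 + Omega_c*h^2) / h^2,
# using only the numbers quoted in the abstract above.
omega_b_h2 = 0.02205          # physical baryon density
omega_c_h2 = 0.1199           # physical cold dark matter density
h = 67.3 / 100.0              # dimensionless Hubble parameter from H0

omega_m_est = (omega_b_h2 + omega_c_h2) / h**2
print(f"Omega_m from baryons + CDM alone: {omega_m_est:.3f}")   # ~0.313
# The abstract quotes Omega_m = 0.315 +/- 0.017, so the values agree within
# the stated uncertainty.
```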

7,060 citations


Journal ArticleDOI
TL;DR: LCZ696 was superior to enalapril in reducing the risks of death and of hospitalization for heart failure and decreased the symptoms and physical limitations of heart failure.
Abstract: Background We compared the angiotensin receptor–neprilysin inhibitor LCZ696 with enalapril in patients who had heart failure with a reduced ejection fraction. In previous studies, enalapril improved survival in such patients. Methods In this double-blind trial, we randomly assigned 8442 patients with class II, III, or IV heart failure and an ejection fraction of 40% or less to receive either LCZ696 (at a dose of 200 mg twice daily) or enalapril (at a dose of 10 mg twice daily), in addition to recommended therapy. The primary outcome was a composite of death from cardiovascular causes or hospitalization for heart failure, but the trial was designed to detect a difference in the rates of death from cardiovascular causes. Results The trial was stopped early, according to prespecified rules, after a median follow-up of 27 months, because the boundary for an overwhelming benefit with LCZ696 had been crossed. At the time of study closure, the primary outcome had occurred in 914 patients (21.8%) in the LCZ696 group and 1117 patients (26.5%) in the enalapril group (hazard ratio in the LCZ696 group, 0.80; 95% confidence interval [CI], 0.73 to 0.87; P<0.001). A total of 711 patients (17.0%) receiving LCZ696 and 835 patients (19.8%) receiving enalapril died (hazard ratio for death from any cause, 0.84; 95% CI, 0.76 to 0.93; P<0.001); of these patients, 558 (13.3%) and 693 (16.5%), respectively, died from cardiovascular causes (hazard ratio, 0.80; 95% CI, 0.71 to 0.89; P<0.001). As compared with enalapril, LCZ696 also reduced the risk of hospitalization for heart failure by 21% (P<0.001) and decreased the symptoms and physical limitations of heart failure (P = 0.001). The LCZ696 group had higher proportions of patients with hypotension and nonserious angioedema but lower proportions with renal impairment, hyperkalemia, and cough than the enalapril group. Conclusions LCZ696 was superior to enalapril in reducing the risks of death and of hospitalization for heart failure. (Funded by Novartis; PARADIGM-HF ClinicalTrials.gov number, NCT01035255.)
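
For orientation, the primary-outcome proportions quoted above imply the following crude absolute risk reduction and number needed to treat (a minimal sketch; these derived figures are not reported in the abstract and ignore time-to-event censoring):

```python
# Crude arithmetic from the primary-outcome proportions quoted above
# (21.8% with LCZ696 vs. 26.5% with enalapril over a median 27-month follow-up).
p_lcz696 = 0.218
p_enalapril = 0.265

absolute_risk_reduction = p_enalapril - p_lcz696
number_needed_to_treat = 1.0 / absolute_risk_reduction

print(f"Absolute risk reduction: {absolute_risk_reduction:.3f}")   # ~0.047
print(f"Crude NNT: ~{number_needed_to_treat:.0f}")                 # ~21
```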

4,727 citations



Journal ArticleDOI
TL;DR: In patients with idiopathic pulmonary fibrosis, nintedanib reduced the decline in FVC, which is consistent with a slowing of disease progression; nintedanib was frequently associated with diarrhea, which led to discontinuation of the study medication in less than 5% of patients.
Abstract: Background Nintedanib (formerly known as BIBF 1120) is an intracellular inhibitor that targets multiple tyrosine kinases. A phase 2 trial suggested that treatment with 150 mg of nintedanib twice daily reduced lung-function decline and acute exacerbations in patients with idiopathic pulmonary fibrosis. Methods We conducted two replicate 52-week, randomized, double-blind, phase 3 trials (INPULSIS-1 and INPULSIS-2) to evaluate the efficacy and safety of 150 mg of nintedanib twice daily as compared with placebo in patients with idiopathic pulmonary fibrosis. The primary end point was the annual rate of decline in forced vital capacity (FVC). Key secondary end points were the time to the first acute exacerbation and the change from baseline in the total score on the St. George’s Respiratory Questionnaire, both assessed over a 52-week period. Results A total of 1066 patients were randomly assigned in a 3:2 ratio to receive nintedanib or placebo. The adjusted annual rate of change in FVC was −114.7 ml with nintedanib versus −239.9 ml with placebo (difference, 125.3 ml; 95% confidence interval [CI], 77.7 to 172.8; P<0.001) in INPULSIS-1 and −113.6 ml with nintedanib versus −207.3 ml with placebo (difference, 93.7 ml; 95% CI, 44.8 to 142.7; P<0.001) in INPULSIS-2. In INPULSIS-1, there was no significant difference between the nintedanib and placebo groups in the time to the first acute exacerbation (hazard ratio with nintedanib, 1.15; 95% CI, 0.54 to 2.42; P = 0.67); in INPULSIS-2, there was a significant benefit with nintedanib versus placebo (hazard ratio, 0.38; 95% CI, 0.19 to 0.77; P = 0.005). The most frequent adverse event in the nintedanib groups was diarrhea, with rates of 61.5% and 18.6% in the nintedanib and placebo groups, respectively, in INPULSIS-1 and 63.2% and 18.3% in the two groups, respectively, in INPULSIS-2. Conclusions In patients with idiopathic pulmonary fibrosis, nintedanib reduced the decline in FVC, which is consistent with a slowing of disease progression; nintedanib was frequently associated with diarrhea, which led to discontinuation of the study medication in less than 5% of patients. (Funded by Boehringer Ingelheim; INPULSIS-1 and INPULSIS-2 ClinicalTrials.gov numbers, NCT01335464 and NCT01335477.)

2,936 citations


Journal ArticleDOI
29 May 2014-Nature
TL;DR: A draft map of the human proteome is presented using high-resolution Fourier-transform mass spectrometry to discover a number of novel protein-coding regions, which include translated pseudogenes, non-coding RNAs and upstream open reading frames.
Abstract: The availability of human genome sequence has transformed biomedical research over the past decade. However, an equivalent map for the human proteome with direct measurements of proteins and peptides does not exist yet. Here we present a draft map of the human proteome using high-resolution Fourier-transform mass spectrometry. In-depth proteomic profiling of 30 histologically normal human samples, including 17 adult tissues, 7 fetal tissues and 6 purified primary haematopoietic cells, resulted in identification of proteins encoded by 17,294 genes accounting for approximately 84% of the total annotated protein-coding genes in humans. A unique and comprehensive strategy for proteogenomic analysis enabled us to discover a number of novel protein-coding regions, which includes translated pseudogenes, non-coding RNAs and upstream open reading frames. This large human proteome catalogue (available as an interactive web-based resource at http://www.humanproteomemap.org) will complement available human genome and transcriptome data to accelerate biomedical research in health and disease.

1,965 citations


Journal ArticleDOI
D. S. Akerib1, Henrique Araujo2, X. Bai3, A. J. Bailey2, J. Balajthy4, S. Bedikian5, Ethan Bernard5, A. Bernstein6, Alexander Bolozdynya1, A. W. Bradley1, D. Byram7, Sidney Cahn5, M. C. Carmona-Benitez8, C. Chan9, J.J. Chapman9, A. A. Chiller7, C. Chiller7, K. Clark1, T. Coffey1, A. Currie2, A. Curioni5, Steven Dazeley6, L. de Viveiros10, A. Dobi4, J. E. Y. Dobson11, E. M. Dragowsky1, E. Druszkiewicz12, B. N. Edwards5, C. H. Faham13, S. Fiorucci9, C. E. Flores14, R. J. Gaitskell9, V. M. Gehman13, C. Ghag15, K.R. Gibson1, Murdock Gilchriese13, C. R. Hall4, M. Hanhardt3, S. A. Hertel5, M. Horn5, D. Q. Huang9, M. Ihm16, R. G. Jacobsen16, L. Kastens5, K. Kazkaz6, R. Knoche4, S. Kyre8, R. L. Lander14, N. A. Larsen5, C. Lee1, David Leonard4, K. T. Lesko13, A. Lindote10, M.I. Lopes10, A. Lyashenko5, D.C. Malling9, R. L. Mannino17, Daniel McKinsey5, Dongming Mei7, J. Mock14, M. Moongweluwan12, J. A. Morad14, M. Morii18, A. St. J. Murphy11, C. Nehrkorn8, H. N. Nelson8, F. Neves10, James Nikkel5, R. A. Ott14, M. Pangilinan9, P. D. Parker5, E. K. Pease5, K. Pech1, P. Phelps1, L. Reichhart15, T. A. Shutt1, C. Silva10, W. Skulski12, C. Sofka17, V. N. Solovov10, P. Sorensen6, T.M. Stiegler17, K. O'Sullivan5, T. J. Sumner2, Robert Svoboda14, M. Sweany14, Matthew Szydagis14, D. J. Taylor, B. P. Tennyson5, D. R. Tiedt3, Mani Tripathi14, S. Uvarov14, J.R. Verbus9, N. Walsh14, R. C. Webb17, J. T. White17, D. White8, M. S. Witherell8, M. Wlasenko18, F.L.H. Wolfs12, M. Woods14, Chao Zhang7 
TL;DR: The first WIMP search data set is reported, taken during the period from April to August 2013, presenting the analysis of 85.3 live days of data, finding that the LUX data are in disagreement with low-mass WIMP signal interpretations of the results from several recent direct detection experiments.
Abstract: The Large Underground Xenon (LUX) experiment is a dual-phase xenon time-projection chamber operating at the Sanford Underground Research Facility (Lead, South Dakota). The LUX cryostat was filled for the first time in the underground laboratory in February 2013. We report results of the first WIMP search data set, taken during the period from April to August 2013, presenting the analysis of 85.3 live days of data with a fiducial volume of 118 kg. A profile-likelihood analysis technique shows our data to be consistent with the background-only hypothesis, allowing 90% confidence limits to be set on spin-independent WIMP-nucleon elastic scattering with a minimum upper limit on the cross section of 7.6 × 10^-46 cm^2 at a WIMP mass of 33 GeV/c^2. We find that the LUX data are in disagreement with low-mass WIMP signal interpretations of the results from several recent direct detection experiments.

1,962 citations


Journal ArticleDOI
Andrew R. Wood1, Tõnu Esko2, Jian Yang3, Sailaja Vedantam4  +441 moreInstitutions (132)
TL;DR: This article identified 697 variants at genome-wide significance that together explained one-fifth of the heritability for adult height, and all common variants together captured 60% of heritability.
Abstract: Using genome-wide data from 253,288 individuals, we identified 697 variants at genome-wide significance that together explained one-fifth of the heritability for adult height. By testing different numbers of variants in independent studies, we show that the most strongly associated ∼2,000, ∼3,700 and ∼9,500 SNPs explained ∼21%, ∼24% and ∼29% of phenotypic variance. Furthermore, all common variants together captured 60% of heritability. The 697 variants clustered in 423 loci were enriched for genes, pathways and tissue types known to be involved in growth and together implicated genes and pathways not highlighted in earlier efforts, such as signaling by fibroblast growth factors, WNT/β-catenin and chondroitin sulfate-related genes. We identified several genes and pathways not previously connected with human skeletal growth, including mTOR, osteoglycin and binding of hyaluronic acid. Our results indicate a genetic architecture for human height that is characterized by a very large but finite number (thousands) of causal variants.

1,872 citations


Journal ArticleDOI
TL;DR: In this article, the authors review the leading CO2 capture technologies, available in the short and long term, and their technological maturity, before discussing CO2 transport and storage, as well as the economic and legal aspects of CCS.
Abstract: In recent years, Carbon Capture and Storage (Sequestration) (CCS) has been proposed as a potential method to allow the continued use of fossil-fuelled power stations whilst preventing emissions of CO2 from reaching the atmosphere. Gas, coal (and biomass)-fired power stations can respond to changes in demand more readily than many other sources of electricity production, hence the importance of retaining them as an option in the energy mix. Here, we review the leading CO2 capture technologies, available in the short and long term, and their technological maturity, before discussing CO2 transport and storage. Current pilot plants and demonstrations are highlighted, as is the importance of optimising the CCS system as a whole. Other topics briefly discussed include the viability of both the capture of CO2 from the air and CO2 reutilisation as climate change mitigation strategies. Finally, we discuss the economic and legal aspects of CCS.

1,752 citations


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, M. I. R. Alves2, C. Armitage-Caplan3  +469 moreInstitutions (89)
TL;DR: The European Space Agency's Planck satellite, dedicated to studying the early Universe and its subsequent evolution, was launched 14 May 2009 and has been scanning the microwave and submillimetre sky continuously since 12 August 2009 as discussed by the authors.
Abstract: The European Space Agency’s Planck satellite, dedicated to studying the early Universe and its subsequent evolution, was launched 14 May 2009 and has been scanning the microwave and submillimetre sky continuously since 12 August 2009. In March 2013, ESA and the Planck Collaboration released the initial cosmology products based on the first 15.5 months of Planck data, along with a set of scientific and technical papers and a web-based explanatory supplement. This paper gives an overview of the mission and its performance, the processing, analysis, and characteristics of the data, the scientific results, and the science data products and papers in the release. The science products include maps of the cosmic microwave background (CMB) and diffuse extragalactic foregrounds, a catalogue of compact Galactic and extragalactic sources, and a list of sources detected through the Sunyaev-Zeldovich effect. The likelihood code used to assess cosmological models against the Planck data and a lensing likelihood are described. Scientific results include robust support for the standard six-parameter ΛCDM model of cosmology and improved measurements of its parameters, including a highly significant deviation from scale invariance of the primordial power spectrum. The Planck values for these parameters and others derived from them are significantly different from those previously determined. Several large-scale anomalies in the temperature distribution of the CMB, first detected by WMAP, are confirmed with higher confidence. Planck sets new limits on the number and mass of neutrinos, and has measured gravitational lensing of CMB anisotropies at greater than 25σ. Planck finds no evidence for non-Gaussianity in the CMB. Planck’s results agree well with results from the measurements of baryon acoustic oscillations. Planck finds a lower Hubble constant than found in some more local measures. Some tension is also present between the amplitude of matter fluctuations (σ8) derived from CMB data and that derived from Sunyaev-Zeldovich data. The Planck and WMAP power spectra are offset from each other by an average level of about 2% around the first acoustic peak. Analysis of Planck polarization data is not yet mature, therefore polarization results are not released, although the robust detection of E-mode polarization around CMB hot and cold spots is shown graphically.

1,719 citations


Journal ArticleDOI
27 Mar 2014-Nature
TL;DR: For example, the authors mapped transcription start sites (TSSs) and their usage in human and mouse primary cells, cell lines and tissues to produce a comprehensive overview of mammalian gene expression across the human body.
Abstract: Regulated transcription controls the diversity, developmental pathways and spatial organization of the hundreds of cell types that make up a mammal. Using single-molecule cDNA sequencing, we mapped transcription start sites (TSSs) and their usage in human and mouse primary cells, cell lines and tissues to produce a comprehensive overview of mammalian gene expression across the human body. We find that few genes are truly 'housekeeping', whereas many mammalian promoters are composite entities composed of several closely separated TSSs, with independent cell-type-specific expression profiles. TSSs specific to different cell types evolve at different rates, whereas promoters of broadly expressed genes are the most conserved. Promoter-based expression analysis reveals key transcription factors defining cell states and links them to binding-site motifs. The functions of identified novel transcripts can be predicted by coexpression and sample ontology enrichment analyses. The functional annotation of the mammalian genome 5 (FANTOM5) project provides comprehensive expression profiles and functional annotation of mammalian cell-type-specific transcriptomes with wide applications in biomedical research.

Journal ArticleDOI
01 Jan 2014-Brain
TL;DR: Findings on the posterior cingulate cortex are synthesized into a novel model of the region's function (the 'Arousal, Balance and Breadth of Attention' model), in which the dorsal posterior cingulate cortex influences attentional focus by 'tuning' whole-brain metastability and so adjusts how stable brain network activity is over time; the model's predictions can be tested within the framework of complex dynamic systems theory.
Abstract: The posterior cingulate cortex is a highly connected and metabolically active brain region. Recent studies suggest it has an important cognitive role, although there is no consensus about what this is. The region is typically discussed as having a unitary function because of a common pattern of relative deactivation observed during attentionally demanding tasks. One influential hypothesis is that the posterior cingulate cortex has a central role in supporting internally-directed cognition. It is a key node in the default mode network and shows increased activity when individuals retrieve autobiographical memories or plan for the future, as well as during unconstrained ‘rest’ when activity in the brain is ‘free-wheeling’. However, other evidence suggests that the region is highly heterogeneous and may play a direct role in regulating the focus of attention. In addition, its activity varies with arousal state and its interactions with other brain networks may be important for conscious awareness. Understanding posterior cingulate cortex function is likely to be of clinical importance. It is well protected against ischaemic stroke, and so there is relatively little neuropsychological data about the consequences of focal lesions. However, in other conditions abnormalities in the region are clearly linked to disease. For example, amyloid deposition and reduced metabolism is seen early in Alzheimer’s disease. Functional neuroimaging studies show abnormalities in a range of neurological and psychiatric disorders including Alzheimer’s disease, schizophrenia, autism, depression and attention deficit hyperactivity disorder, as well as ageing. Our own work has consistently shown abnormal posterior cingulate cortex function following traumatic brain injury, which predicts attentional impairments. Here we review the anatomy and physiology of the region and how it is affected in a range of clinical conditions, before discussing its proposed functions. We synthesize key findings into a novel model of the region’s function (the ‘Arousal, Balance and Breadth of Attention’ model). Dorsal and ventral subcomponents are functionally separated and differences in regional activity are explained by considering: (i) arousal state; (ii) whether attention is focused internally or externally; and (iii) the breadth of attentional focus. The predictions of the model can be tested within the framework of complex dynamic systems theory, and we propose that the dorsal posterior cingulate cortex influences attentional focus by ‘tuning’ whole-brain metastability and so adjusts how stable brain network activity is over time.

Journal ArticleDOI
TL;DR: An updated version of wannier90 is presented, wannier90 2.0, including minor bug fixes and parallel (MPI) execution for band-structure interpolation and the calculation of properties such as density of states, Berry curvature and orbital magnetisation.

Journal ArticleDOI
TL;DR: Dual antiplatelet therapy beyond 1 year after placement of a drug-eluting stent, as compared with aspirin therapy alone, significantly reduced the risks of stent thrombosis and major adverse cardiovascular and cerebrovascular events but was associated with an increased risk of bleeding.
Abstract: 0.29 [95% confidence interval {CI}, 0.17 to 0.48]; P<0.001) and major adverse cardiovascular and cerebrovascular events (4.3% vs. 5.9%; hazard ratio, 0.71 [95% CI, 0.59 to 0.85]; P<0.001). The rate of myocardial infarction was lower with thienopyridine treatment than with placebo (2.1% vs. 4.1%; hazard ratio, 0.47; P<0.001). The rate of death from any cause was 2.0% in the group that continued thienopyridine therapy and 1.5% in the placebo group (hazard ratio, 1.36 [95% CI, 1.00 to 1.85]; P = 0.05). The rate of moderate or severe bleeding was increased with continued thienopyridine treatment (2.5% vs. 1.6%, P = 0.001). An elevated risk of stent thrombosis and myocardial infarction was observed in both groups during the 3 months after discontinuation of thienopyridine treatment. Conclusions Dual antiplatelet therapy beyond 1 year after placement of a drug-eluting stent, as compared with aspirin therapy alone, significantly reduced the risks of stent thrombosis and major adverse cardiovascular and cerebrovascular events but was associated with an increased risk of bleeding. (Funded by a consortium of eight device and drug manufacturers and others; DAPT ClinicalTrials.gov number, NCT00977938.)

Journal ArticleDOI
TL;DR: In this paper, an up-to-date perspective on the use of anion-exchange membranes in fuel cells, electrolysers, redox flow batteries, reverse electrodialysis cells, and bioelectrochemical systems (e.g. microbial fuel cells).
Abstract: This article provides an up-to-date perspective on the use of anion-exchange membranes in fuel cells, electrolysers, redox flow batteries, reverse electrodialysis cells, and bioelectrochemical systems (e.g. microbial fuel cells). The aim is to highlight key concepts, misconceptions, the current state-of-the-art, technological and scientific limitations, and the future challenges (research priorities) related to the use of anion-exchange membranes in these energy technologies. All the references that the authors deemed relevant, and were available on the web by the manuscript submission date (30th April 2014), are included.

Journal ArticleDOI
12 Jun 2014-PLOS ONE
TL;DR: The additional genes identified in this study have an array of functions previously implicated in Alzheimer's disease, including aspects of energy metabolism, protein degradation and the immune system, and add further weight to these pathways as potential therapeutic targets in Alzheimer's disease.
Abstract: Background: Alzheimer's disease is a common debilitating dementia with known heritability, for which 20 late onset susceptibility loci have been identified, but more remain to be discovered. This s ...

Journal ArticleDOI
TL;DR: A fine particulate mass-based relative risk (RR) model is developed that covers the global range of exposure by integrating RR information from different combustion types that generate emissions of particulate matter.
Abstract: Background: Estimating the burden of disease attributable to long-term exposure to fine particulate matter (PM2.5) in ambient air requires knowledge of both the shape and magnitude of the relative ...
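
The abstract is truncated here, but the kind of integrated exposure-response (RR) model described is commonly written as a supralinear parametric curve that is flat below a counterfactual concentration. The sketch below illustrates one such functional form; the form and every parameter value are assumptions for illustration only, not coefficients from the paper.

```python
import numpy as np

def integrated_exposure_response(z, z_cf, alpha, gamma, delta):
    """Illustrative integrated exposure-response shape (assumed, not from the paper):
    RR(z) = 1                                              for z <  z_cf
    RR(z) = 1 + alpha * (1 - exp(-gamma * (z - z_cf)**delta))  for z >= z_cf
    """
    z = np.asarray(z, dtype=float)
    rr = np.ones_like(z)
    above = z >= z_cf
    rr[above] = 1.0 + alpha * (1.0 - np.exp(-gamma * (z[above] - z_cf) ** delta))
    return rr

# Hypothetical parameter values and PM2.5 concentrations (ug/m^3):
pm25 = np.array([5.0, 15.0, 35.0, 100.0, 300.0])
print(integrated_exposure_response(pm25, z_cf=5.8, alpha=1.6, gamma=0.01, delta=0.7))
```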

Journal ArticleDOI
TL;DR: It is illustrated how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps and its application to neuroimaging data provides a versatile tool to study the brain.
Abstract: Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g. multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g. resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain.
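
As a minimal sketch of the kind of supervised decoding analysis described (the data below are synthetic stand-ins for masked activation maps, not anything from the paper):

```python
# Decoding-style analysis with scikit-learn: predict a binary behavioural label
# from high-dimensional "brain images" (here, random synthetic data with the
# shape n_samples x n_voxels).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 5000))      # 80 images, 5000 voxels (synthetic)
y = rng.integers(0, 2, size=80)          # binary label (synthetic)

decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(decoder, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```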

Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud3, Frederico Arroja4  +321 moreInstitutions (79)
TL;DR: In this article, the authors present the implications for cosmic inflation of the Planck measurements of the cosmic microwave background (CMB) anisotropies in both temperature and polarization based on the full Planck survey.
Abstract: We present the implications for cosmic inflation of the Planck measurements of the cosmic microwave background (CMB) anisotropies in both temperature and polarization based on the full Planck survey, which includes more than twice the integration time of the nominal survey used for the 2013 release papers. The Planck full mission temperature data and a first release of polarization data on large angular scales measure the spectral index of curvature perturbations to be ns = 0.968 ± 0.006 and tightly constrain its scale dependence to dns/dlnk = −0.003 ± 0.007 when combined with the Planck lensing likelihood. When the Planck high-l polarization data are included, the results are consistent and uncertainties are further reduced. The upper bound on the tensor-to-scalar ratio is r_0.002 < 0.11 (95% CL). This upper limit is consistent with the B-mode polarization constraint r < 0.12 (95% CL) obtained from a joint analysis of the BICEP2/Keck Array and Planck data. These results imply that V(φ) ∝ φ^2 and natural inflation are now disfavoured compared to models predicting a smaller tensor-to-scalar ratio, such as R^2 inflation. We search for several physically motivated deviations from a simple power-law spectrum of curvature perturbations, including those motivated by a reconstruction of the inflaton potential not relying on the slow-roll approximation. We find that such models are not preferred, either according to a Bayesian model comparison or according to a frequentist simulation-based analysis. Three independent methods reconstructing the primordial power spectrum consistently recover a featureless, smooth spectrum over the range of scales 0.008 Mpc^-1 ≲ k ≲ 0.1 Mpc^-1. At large scales, each method finds deviations from a power law, connected to a deficit at multipoles l ≈ 20−40 in the temperature power spectrum, but at an uncompelling statistical significance owing to the large cosmic variance present at these multipoles. By combining power spectrum and non-Gaussianity bounds, we constrain models with generalized Lagrangians, including Galileon models and axion monodromy models. The Planck data are consistent with adiabatic primordial perturbations, and the estimated values for the parameters of the base Λ cold dark matter (ΛCDM) model are not significantly altered when more general initial conditions are admitted. In correlated mixed adiabatic and isocurvature models, the 95% CL upper bound for the non-adiabatic contribution to the observed CMB temperature variance is |α_non-adi| < 1.9%, 4.0%, and 2.9% for CDM, neutrino density, and neutrino velocity isocurvature modes, respectively. We have tested inflationary models producing an anisotropic modulation of the primordial curvature power spectrum, finding that the dipolar modulation in the CMB temperature field induced by a CDM isocurvature perturbation is not preferred at a statistically significant level. We also establish tight constraints on a possible quadrupolar modulation of the curvature perturbation. These results are consistent with the Planck 2013 analysis based on the nominal mission data and further constrain slow-roll single-field inflationary models, as expected from the increased precision of Planck data using the full set of observations.
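
To make the statement about V(φ) ∝ φ^2 concrete: for monomial potentials V ∝ φ^p, the standard single-field slow-roll predictions are ns ≈ 1 − (p+2)/(2N) and r ≈ 4p/N. The sketch below evaluates these for p = 2 with an assumed N = 60 e-folds (the choice of N is an assumption, not something quoted in the abstract):

```python
# Standard slow-roll predictions for monomial inflation V ~ phi**p.
# N (number of e-folds before the end of inflation) is an assumed value.
def monomial_slow_roll(p, N=60):
    n_s = 1.0 - (p + 2.0) / (2.0 * N)
    r = 4.0 * p / N
    return n_s, r

n_s, r = monomial_slow_roll(p=2, N=60)
print(f"phi^2 inflation: n_s ~ {n_s:.3f}, r ~ {r:.3f}")
# n_s ~ 0.967 is close to the measured 0.968 +/- 0.006, but r ~ 0.133 exceeds
# the quoted bound r_0.002 < 0.11, which is why phi^2 is disfavoured relative
# to models predicting smaller r, such as R^2 inflation.
```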

Journal ArticleDOI
TL;DR: In this article, the authors bring together the recent literature on industry platforms and show how it relates to managing innovation within and outside the firm as well as to dealing with technological and market disruptions and change over time.

Journal ArticleDOI
TL;DR: A theoretical framework for assessing the quality of application of PDSA cycles is proposed and the consistency with which the method has been applied in peer-reviewed literature against this framework is explored.
Abstract: Background Plan–do–study–act (PDSA) cycles provide a structure for iterative testing of changes to improve quality of systems. The method is widely accepted in healthcare improvement; however there is little overarching evaluation of how the method is applied. This paper proposes a theoretical framework for assessing the quality of application of PDSA cycles and explores the consistency with which the method has been applied in peer-reviewed literature against this framework. Methods NHS Evidence and Cochrane databases were searched by three independent reviewers. Empirical studies were included that reported application of the PDSA method in healthcare. Application of PDSA cycles was assessed against key features of the method, including documentation characteristics, use of iterative cycles, prediction-based testing of change, initial small-scale testing and use of data


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, C. Armitage-Caplan3, Monique Arnaud4  +273 moreInstitutions (59)
TL;DR: In this article, the authors characterized the effective beams, the effective beam window functions and the associated errors for the Planck High Frequency Instrument (HFI) detectors, including the effect of the optics, detectors, data processing and the scan strategy.
Abstract: This paper characterizes the effective beams, the effective beam window functions and the associated errors for the Planck High Frequency Instrument (HFI) detectors. The effective beam is the angular response including the effect of the optics, detectors, data processing and the scan strategy. The window function is the representation of this beam in the harmonic domain which is required to recover an unbiased measurement of the cosmic microwave background angular power spectrum. The HFI is a scanning instrument and its effective beams are the convolution of: a) the optical response of the telescope and feeds; b) the processing of the time-ordered data and deconvolution of the bolometric and electronic transfer function; and c) the merging of several surveys to produce maps. The time response transfer functions are measured using observations of Jupiter and Saturn and by minimizing survey difference residuals. The scanning beam is the post-deconvolution angular response of the instrument, and is characterized with observations of Mars. The main beam solid angles are determined to better than 0.5% at each HFI frequency band. Observations of Jupiter and Saturn limit near sidelobes (within 5 degrees) to about 0.1% of the total solid angle. Time response residuals remain as long tails in the scanning beams, but contribute less than 0.1% of the total solid angle. The bias and uncertainty in the beam products are estimated using ensembles of simulated planet observations that include the impact of instrumental noise and known systematic effects. The correlation structure of these ensembles is well-described by five error eigenmodes that are sub-dominant to sample variance and instrumental noise in the harmonic domain. A suite of consistency tests provide confidence that the error model represents a sufficient description of the data. The total error in the effective beam window functions is below 1% at 100 GHz up to multipole l ≈ 1500, and below 0.5% at 143 and 217 GHz up to l ≈ 2000.
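
For intuition, a beam window function acts multiplicatively on the angular power spectrum (C_l^map ≈ B_l^2 C_l^sky). The sketch below uses a symmetric Gaussian beam as a stand-in; the actual HFI effective beams described above are non-Gaussian and scan-dependent, and the 7-arcmin width is an assumed, roughly 100 GHz-like value:

```python
import numpy as np

def gaussian_beam_window(ell, fwhm_arcmin):
    """B_ell for a symmetric Gaussian beam (an approximation only; the real
    effective beams include optics, time response and scan-strategy effects)."""
    fwhm_rad = np.radians(fwhm_arcmin / 60.0)
    sigma = fwhm_rad / np.sqrt(8.0 * np.log(2.0))
    return np.exp(-0.5 * ell * (ell + 1.0) * sigma**2)

ell = np.arange(2, 2001)
B_ell = gaussian_beam_window(ell, fwhm_arcmin=7.0)   # assumed beam width
# Debeaming an observed spectrum: C_ell_sky ~ C_ell_map / B_ell**2
```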

Journal ArticleDOI
01 Jul 2014-Ecology
TL;DR: A global species-level compilation of key attributes for all 9993 extant bird and 5400 extant mammal species, derived from key literature sources, enables a much finer distinction of species' foraging ecology than typical categorical guild assignments allow.
Abstract: Species are characterized by physiological, behavioral, and ecological attributes that are all subject to varying evolutionary and ecological constraints and jointly determine species' role and function in ecosystems. Attributes such as diet, foraging strata, foraging time, and body size, in particular, characterize a large portion of the “Eltonian” niches of species. Here we present a global species-level compilation of these key attributes for all 9993 and 5400 extant bird and mammal species derived from key literature sources. Global handbooks and monographs allowed the consistent sourcing of attributes for most species. For diet and foraging stratum we followed a defined protocol to translate the verbal descriptions into standardized, semiquantitative information about relative importance of different categories. Together with body size (continuous) and activity time (categorical) this enables a much finer distinction of species' foraging ecology than typical categorical guild assignments allow. Attri...
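
A hedged sketch of what a semiquantitative, species-level record of this kind might look like; the field names and values below are invented placeholders, not the compilation's actual schema or data:

```python
# Hypothetical trait record: relative importance of diet categories expressed
# as percentages summing to 100, alongside categorical and continuous traits.
example_record = {
    "scientific_name": "Example species",          # placeholder
    "diet_pct": {"invertebrates": 60, "fruit": 30, "seeds": 10},
    "foraging_stratum": "understory",              # placeholder category
    "activity_time": "diurnal",                    # placeholder category
    "body_mass_g": 25.0,                           # placeholder value
}
assert sum(example_record["diet_pct"].values()) == 100
```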

Journal ArticleDOI
TL;DR: MRI can often identify changes in iron homoeostasis, thus providing a potential diagnostic biomarker of neurodegenerative diseases. An important avenue to reduce iron accumulation is the use of iron chelators that are able to cross the blood-brain barrier, penetrate cells, and reduce excessive iron accumulation, thereby affording neuroprotection.
Abstract: Summary In the CNS, iron in several proteins is involved in many important processes such as oxygen transportation, oxidative phosphorylation, myelin production, and the synthesis and metabolism of neurotransmitters. Abnormal iron homoeostasis can induce cellular damage through hydroxyl radical production, which can cause the oxidation and modification of lipids, proteins, carbohydrates, and DNA. During ageing, different iron complexes accumulate in brain regions associated with motor and cognitive impairment. In various neurodegenerative diseases, such as Alzheimer's disease and Parkinson's disease, changes in iron homoeostasis result in altered cellular iron distribution and accumulation. MRI can often identify these changes, thus providing a potential diagnostic biomarker of neurodegenerative diseases. An important avenue to reduce iron accumulation is the use of iron chelators that are able to cross the blood–brain barrier, penetrate cells, and reduce excessive iron accumulation, thereby affording neuroprotection.

Journal ArticleDOI
TL;DR: One of the advantages of RTILs as compared to their high-temperature molten salt (HTMS) “sister-systems” is that the dissolved molecules are not imbedded in a harsh high temperature environment which could be destructive for many classes of fragile (organic) molecules.
Abstract: Until recently, “room-temperature” (<100–150 °C) liquid-state electrochemistry was mostly electrochemistry of diluted electrolytes(1)–(4) where dissolved salt ions were surrounded by a considerable amount of solvent molecules. Highly concentrated liquid electrolytes were mostly considered in the narrow (albeit important) niche of high-temperature electrochemistry of molten inorganic salts(5-9) and in the even narrower niche of “first-generation” room temperature ionic liquids, RTILs (such as chloro-aluminates and alkylammonium nitrates).(10-14) The situation has changed dramatically in the 2000s after the discovery of new moisture- and temperature-stable RTILs.(15, 16) These days, the “later generation” RTILs attracted wide attention within the electrochemical community.(17-31) Indeed, RTILs, as a class of compounds, possess a unique combination of properties (high charge density, electrochemical stability, low/negligible volatility, tunable polarity, etc.) that make them very attractive substances from fundamental and application points of view.(32-38) Most importantly, they can mix with each other in “cocktails” of one’s choice to acquire the desired properties (e.g., wider temperature range of the liquid phase(39, 40)) and can serve as almost “universal” solvents.(37, 41, 42) It is worth noting here one of the advantages of RTILs as compared to their high-temperature molten salt (HTMS)(43) “sister-systems”.(44) In RTILs the dissolved molecules are not imbedded in a harsh high temperature environment which could be destructive for many classes of fragile (organic) molecules.

Journal ArticleDOI
TL;DR: In this article, an integrative framework is proposed to advance management research on technological platforms, bridging two theoretical perspectives: economics, which sees platforms as double-sided markets, and engineering design, which see platforms as technological architectures.

Journal ArticleDOI
TL;DR: In this article, a Bayesian framework was developed for model comparison and parameter estimation of X-ray spectra of active galactic nuclei (AGN) in the 4 Ms Chandra Deep Field South.
Abstract: Aims. Active galactic nuclei are known to have complex X-ray spectra that depend on both the properties of the accreting super-massive black hole (e.g. mass, accretion rate) and the distribution of obscuring material in its vicinity (i.e. the “torus”). Often however, simple and even unphysical models are adopted to represent the X-ray spectra of AGN, which do not capture the complexity and diversity of the observations. In the case of blank field surveys in particular, this should have an impact on e.g. the determination of the AGN luminosity function, the inferred accretion history of the Universe and also on our understanding of the relation between AGN and their host galaxies. Methods. We develop a Bayesian framework for model comparison and parameter estimation of X-ray spectra. We take into account uncertainties associated with both the Poisson nature of X-ray data and the determination of source redshift using photometric methods. We also demonstrate how Bayesian model comparison can be used to select among ten different physically motivated X-ray spectral models the one that provides a better representation of the observations. This methodology is applied to X-ray AGN in the 4 Ms Chandra Deep Field South. Results. For the ~350 AGN in that field, our analysis identifies four components needed to represent the diversity of the observed X-ray spectra: (1) an intrinsic power law; (2) a cold obscurer which reprocesses the radiation due to photo-electric absorption, Compton scattering and Fe-K fluorescence; (3) an unabsorbed power law associated with Thomson scattering off ionised clouds; and (4) Compton reflection, most noticeable from a stronger-than-expected Fe-K line. Simpler models, such as a photo-electrically absorbed power law with a Thomson scattering component, are ruled out with decisive evidence (B > 100). We also find that ignoring the Thomson scattering component results in underestimation of the inferred column density, N_H, of the obscurer. Regarding the geometry of the obscurer, there is strong evidence against both a completely closed (e.g. sphere), or entirely open (e.g. blob of material along the line of sight), toroidal geometry in favour of an intermediate case. Conclusions. Despite the use of low-count spectra, our methodology is able to draw strong inferences on the geometry of the torus. Simpler models are ruled out in favour of a geometrically extended structure with significant Compton scattering. We confirm the presence of a soft component, possibly associated with Thomson scattering off ionised clouds in the opening angle of the torus. The additional Compton reflection required by data over that predicted by toroidal geometry models, may be a sign of a density gradient in the torus or reflection off the accretion disk. Finally, we release a catalogue of AGN in the CDFS with estimated parameters such as the accretion luminosity in the 2−10 keV band and the column density, N_H, of the obscurer.
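
For orientation on the "decisive evidence (B > 100)" criterion: the Bayes factor is the ratio of the two models' marginal likelihoods (evidences). A minimal sketch with placeholder log-evidence values (not numbers from the paper):

```python
import math

# Bayes factor from two hypothetical log-evidences (log marginal likelihoods),
# e.g. as returned by a nested-sampling fit of each spectral model.
log_Z_complex = -1042.3   # placeholder value
log_Z_simple = -1049.1    # placeholder value

B = math.exp(log_Z_complex - log_Z_simple)
print(f"Bayes factor B = {B:.0f}")
# On the scale used above, B > 100 counts as decisive evidence in favour of
# the more complex model; here B ~ 900, so the simpler model would be ruled out.
```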

Journal ArticleDOI
TL;DR: It is demonstrated through 13C high-resolution magic-angle-spinning that 13C acetate from fermentation of 13C-labelled carbohydrate in the colon increases hypothalamic 13C acetate above baseline levels, suggesting that acetate has a direct role in central appetite regulation.
Abstract: Increased intake of dietary carbohydrate that is fermented in the colon by the microbiota has been reported to decrease body weight, although the mechanism remains unclear. Here we use in vivo 11C-acetate and PET-CT scanning to show that colonic acetate crosses the blood–brain barrier and is taken up by the brain. Intraperitoneal acetate results in appetite suppression and hypothalamic neuronal activation patterning. We also show that acetate administration is associated with activation of acetyl-CoA carboxylase and changes in the expression profiles of regulatory neuropeptides that favour appetite suppression. Furthermore, we demonstrate through 13C high-resolution magic-angle-spinning that 13C acetate from fermentation of 13C-labelled carbohydrate in the colon increases hypothalamic 13C acetate above baseline levels. Hypothalamic 13C acetate regionally increases the 13C labelling of the glutamate–glutamine and GABA neuroglial cycles, with hypothalamic 13C lactate reaching higher levels than the ‘remaining brain’. These observations suggest that acetate has a direct role in central appetite regulation.

Journal ArticleDOI
Rob Beelen1, Ole Raaschou-Nielsen, Massimo Stafoggia, Zorana Jovanovic Andersen2, Gudrun Weinmayr3, Gudrun Weinmayr4, Barbara Hoffmann3, Kathrin Wolf, Evangelia Samoli5, Paul Fischer, Mark J. Nieuwenhuijsen, Paolo Vineis6, Wei W. Xun6, Wei W. Xun7, Klea Katsouyanni5, Konstantina Dimakopoulou5, Anna Oudin8, Bertil Forsberg8, Lars Modig8, Aki S. Havulinna9, Timo Lanki9, Anu W. Turunen9, Bente Oftedal10, Wenche Nystad10, Per Nafstad10, Per Nafstad11, Ulf de Faire12, Nancy L. Pedersen12, Claes-Göran Östenson12, Laura Fratiglioni12, Johanna Penell12, Michal Korek12, Göran Pershagen12, Kirsten Thorup Eriksen, Kim Overvad13, Thomas Ellermann13, Marloes Eeftens1, Petra H.M. Peeters14, Petra H.M. Peeters6, Kees Meliefste1, Meng Wang1, Bas Bueno-de-Mesquita, Dorothea Sugiri3, Ursula Krämer3, Joachim Heinrich, Kees de Hoogh6, Timothy J. Key15, Annette Peters, Regina Hampel, Hans Concin, Gabriele Nagel4, Alex Ineichen16, Alex Ineichen17, Emmanuel Schaffner17, Emmanuel Schaffner16, Nicole Probst-Hensch16, Nicole Probst-Hensch17, Nino Künzli17, Nino Künzli16, Christian Schindler17, Christian Schindler16, Tamara Schikowski16, Tamara Schikowski17, Martin Adam16, Martin Adam17, Harish C. Phuleria17, Harish C. Phuleria16, Alice Vilier18, Alice Vilier19, Françoise Clavel-Chapelon19, Françoise Clavel-Chapelon18, Christophe Declercq, Sara Grioni, Vittorio Krogh, Ming-Yi Tsai16, Ming-Yi Tsai17, Ming-Yi Tsai20, Fulvio Ricceri, Carlotta Sacerdote21, C Galassi21, Enrica Migliore21, Andrea Ranzi, Giulia Cesaroni, Chiara Badaloni, Francesco Forastiere, Ibon Tamayo22, Pilar Amiano22, Miren Dorronsoro22, Michail Katsoulis, Antonia Trichopoulou, Bert Brunekreef1, Bert Brunekreef14, Gerard Hoek1 
TL;DR: In this article, the authors investigated the association between natural-cause mortality and long-term exposure to several air pollutants, such as fine particulate matter (PM2.5) and nitrogen oxides (NOx).