
Showing papers by "University of Manchester" published in 2016


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud3, M. Ashdown4  +334 moreInstitutions (82)
TL;DR: In this article, the authors present a cosmological analysis based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation.
Abstract: This paper presents cosmological results based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation. Our results are in very good agreement with the 2013 analysis of the Planck nominal-mission temperature data, but with increased precision. The temperature and polarization power spectra are consistent with the standard spatially-flat 6-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations (denoted “base ΛCDM” in this paper). From the Planck temperature data combined with Planck lensing, for this cosmology we find a Hubble constant, H0 = (67.8 ± 0.9) km s^-1 Mpc^-1, a matter density parameter Ωm = 0.308 ± 0.012, and a tilted scalar spectral index with ns = 0.968 ± 0.006, consistent with the 2013 analysis. Note that in this abstract we quote 68% confidence limits on measured parameters and 95% upper limits on other parameters. We present the first results of polarization measurements with the Low Frequency Instrument at large angular scales. Combined with the Planck temperature and lensing data, these measurements give a reionization optical depth of τ = 0.066 ± 0.016, corresponding to a reionization redshift of . These results are consistent with those from WMAP polarization measurements cleaned for dust emission using 353-GHz polarization maps from the High Frequency Instrument. We find no evidence for any departure from base ΛCDM in the neutrino sector of the theory; for example, combining Planck observations with other astrophysical data we find Neff = 3.15 ± 0.23 for the effective number of relativistic degrees of freedom, consistent with the value Neff = 3.046 of the Standard Model of particle physics. The sum of neutrino masses is constrained to ∑ mν < 0.23 eV. The spatial curvature of our Universe is found to be very close to zero, with | ΩK | < 0.005. Adding a tensor component as a single-parameter extension to base ΛCDM we find an upper limit on the tensor-to-scalar ratio of r_0.002 < 0.11, consistent with the Planck 2013 results and consistent with the B-mode polarization constraints from a joint analysis of BICEP2, Keck Array, and Planck (BKP) data. Adding the BKP B-mode data to our analysis leads to a tighter constraint of r_0.002 < 0.09 and disfavours inflationary models with a V(φ) ∝ φ^2 potential. The addition of Planck polarization data leads to strong constraints on deviations from a purely adiabatic spectrum of fluctuations. We find no evidence for any contribution from isocurvature perturbations or from cosmic defects. Combining Planck data with other astrophysical data, including Type Ia supernovae, the equation of state of dark energy is constrained to w = −1.006 ± 0.045, consistent with the expected value for a cosmological constant. The standard big bang nucleosynthesis predictions for the helium and deuterium abundances for the best-fit Planck base ΛCDM cosmology are in excellent agreement with observations. We also present constraints on annihilating dark matter and on possible deviations from the standard recombination history. In neither case do we find evidence for new physics. The Planck results for base ΛCDM are in good agreement with baryon acoustic oscillation data and with the JLA sample of Type Ia supernovae. However, as in the 2013 analysis, the amplitude of the fluctuation spectrum is found to be higher than inferred from some analyses of rich cluster counts and weak gravitational lensing.
We show that these tensions cannot easily be resolved with simple modifications of the base ΛCDM cosmology. Apart from these tensions, the base ΛCDM cosmology provides an excellent description of the Planck CMB observations and many other astrophysical data sets.
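The headline numbers above combine through the standard Friedmann relation ρ_crit = 3H0²/(8πG). As a quick, illustrative check in plain Python (the helper name and constants below are ours, not from the paper), the quoted H0 and Ωm imply a present-day mean matter density of roughly 2.7 × 10⁻²⁷ kg m⁻³:

    import math

    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    MPC_IN_M = 3.0857e22     # one megaparsec in metres

    def critical_density(h0_km_s_mpc: float) -> float:
        """Critical density rho_crit = 3 H0^2 / (8 pi G), in kg m^-3."""
        h0_si = h0_km_s_mpc * 1e3 / MPC_IN_M   # convert km/s/Mpc to 1/s
        return 3.0 * h0_si**2 / (8.0 * math.pi * G)

    # Central values quoted in the Planck 2015 abstract above
    H0 = 67.8          # km s^-1 Mpc^-1
    OMEGA_M = 0.308

    rho_crit = critical_density(H0)
    rho_matter = OMEGA_M * rho_crit
    print(f"rho_crit   ~ {rho_crit:.2e} kg m^-3")    # ~ 8.6e-27 kg m^-3
    print(f"rho_matter ~ {rho_matter:.2e} kg m^-3")  # ~ 2.7e-27 kg m^-3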

10,728 citations


Journal ArticleDOI
TL;DR: The FAIR Data Principles as mentioned in this paper are a set of data reuse principles that focus on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals.
Abstract: There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them, and some exemplar implementations in the community.
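To make "machine-actionable" concrete, a data record published in this spirit typically carries a persistent identifier, an explicit licence, provenance, and links to shared vocabularies. The sketch below is purely illustrative: the field names and the checking helper are our own shorthand, not anything mandated by the FAIR Principles.

    # Illustrative only: field names are hypothetical, not prescribed by the FAIR Principles.
    dataset_record = {
        "identifier": "https://doi.org/10.xxxx/example-dataset",  # globally unique, persistent
        "title": "Example measurement series",
        "license": "https://creativecommons.org/licenses/by/4.0/",
        "provenance": {"derived_from": ["https://doi.org/10.yyyy/raw-data"]},
        "vocabulary": "http://purl.org/dc/terms/",                # shared, resolvable vocabulary
        "access_protocol": "https",                               # open, standard retrieval protocol
        "formats": ["text/csv"],
    }

    REQUIRED = ("identifier", "license", "provenance", "vocabulary", "access_protocol")

    def missing_fair_fields(record: dict) -> list[str]:
        """Return the minimal machine-checkable fields absent from a record."""
        return [field for field in REQUIRED if field not in record]

    print(missing_fair_fields(dataset_record))  # [] -> nothing obviously missing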

7,602 citations


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4  +2519 moreInstitutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macro-autophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.
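The distinction the guidelines draw between counting autophagosomes and measuring flux can be illustrated with the common turnover logic: compare a marker of autophagosomes with and without a block on lysosomal degradation. The sketch below is a conceptual illustration with hypothetical numbers, not a protocol taken from the guidelines.

    def apparent_flux(marker_with_inhibitor: float, marker_without: float) -> float:
        """
        Turnover-style estimate: the marker that accumulates once lysosomal
        degradation is blocked approximates what was being degraded (the flux).
        Units are arbitrary densitometry values; all numbers below are hypothetical.
        """
        return marker_with_inhibitor - marker_without

    # Scenario A: more autophagosomes because induction (and degradation) increased
    print(apparent_flux(marker_with_inhibitor=8.0, marker_without=3.0))  # 5.0 -> high flux

    # Scenario B: more autophagosomes because trafficking to lysosomes is blocked
    print(apparent_flux(marker_with_inhibitor=3.2, marker_without=3.0))  # 0.2 -> little flux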

5,187 citations


Journal ArticleDOI
29 Jul 2016-Science
TL;DR: Two-dimensional heterostructures with an extended range of functionalities yield a range of possible applications; for example, spectrum reconstruction in graphene interacting with hBN has allowed several groups to study the Hofstadter butterfly effect and topological currents in such systems.
Abstract: BACKGROUND Materials by design is an appealing idea that is very hard to realize in practice. Combining the best of different ingredients in one ultimate material is a task for which we currently have no general solution. However, we do have some successful examples to draw upon: Composite materials and III-V heterostructures have revolutionized many aspects of our lives. Still, we need a general strategy to solve the problem of mixing and matching crystals with different properties, creating combinations with predetermined attributes and functionalities. ADVANCES Two-dimensional (2D) materials offer a platform that allows creation of heterostructures with a variety of properties. One-atom-thick crystals now comprise a large family of these materials, collectively covering a very broad range of properties. The first material to be included was graphene, a zero-overlap semimetal. The family of 2D crystals has grown to include metals (e.g., NbSe2), semiconductors (e.g., MoS2), and insulators [e.g., hexagonal boron nitride (hBN)]. Many of these materials are stable at ambient conditions, and we have come up with strategies for handling those that are not. Surprisingly, the properties of such 2D materials are often very different from those of their 3D counterparts. Furthermore, even the study of familiar phenomena (like superconductivity or ferromagnetism) in the 2D case, where there is no long-range order, raises many thought-provoking questions. A plethora of opportunities appear when we start to combine several 2D crystals in one vertical stack. Held together by van der Waals forces (the same forces that hold layered materials together), such heterostructures allow a far greater number of combinations than any traditional growth method. As the family of 2D crystals is expanding day by day, so too is the complexity of the heterostructures that could be created with atomic precision. When stacking different crystals together, the synergetic effects become very important. In the first-order approximation, charge redistribution might occur between the neighboring (and even more distant) crystals in the stack. Neighboring crystals can also induce structural changes in each other. Furthermore, such changes can be controlled by adjusting the relative orientation between the individual elements. Such heterostructures have already led to the observation of numerous exciting physical phenomena. Thus, spectrum reconstruction in graphene interacting with hBN allowed several groups to study the Hofstadter butterfly effect and topological currents in such a system. The possibility of positioning crystals in very close (but controlled) proximity to one another allows for the study of tunneling and drag effects. The use of semiconducting monolayers leads to the creation of optically active heterostructures. The extended range of functionalities of such heterostructures yields a range of possible applications. Now the highest-mobility graphene transistors are achieved by encapsulating graphene with hBN. Photovoltaic and light-emitting devices have been demonstrated by combining optically active semiconducting layers and graphene as transparent electrodes. OUTLOOK Currently, most 2D heterostructures are composed by direct stacking of individual monolayer flakes of different materials. Although this method allows ultimate flexibility, it is slow and cumbersome.
Thus, techniques involving transfer of large-area crystals grown by chemical vapor deposition (CVD), direct growth of heterostructures by CVD or physical epitaxy, or one-step growth in solution are being developed. Currently, we are at the same level as we were with graphene 10 years ago: plenty of interesting science and unclear prospects for mass production. Given the fast progress of graphene technology over the past few years, we can expect similar advances in the production of the heterostructures, making the science and applications more achievable.

4,851 citations


Journal ArticleDOI
Bin Zhou1, Yuan Lu2, Kaveh Hajifathalian2, James Bentham1  +494 moreInstitutions (170)
TL;DR: In this article, the authors used a Bayesian hierarchical model to estimate trends in diabetes prevalence, defined as fasting plasma glucose of 7.0 mmol/L or higher, or history of diagnosis with diabetes, or use of insulin or oral hypoglycaemic drugs in 200 countries and territories in 21 regions, by sex and from 1980 to 2014.
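The study's Bayesian hierarchical model is far more elaborate than can be shown here, but its central idea, borrowing strength across countries within a region, can be sketched as precision-weighted partial pooling. The function and numbers below are illustrative toys, not the authors' model or data.

    import numpy as np

    def partial_pool(country_est, country_se, region_mean, region_sd):
        """
        Shrink noisy country-level prevalence estimates toward the regional mean,
        weighting by precision (1/variance). A toy stand-in for a full
        Bayesian hierarchical model.
        """
        w_country = 1.0 / np.square(country_se)
        w_region = 1.0 / region_sd**2
        return (w_country * country_est + w_region * region_mean) / (w_country + w_region)

    # Hypothetical diabetes prevalence estimates (proportion of adults) in one region
    country_est = np.array([0.06, 0.12, 0.09])
    country_se  = np.array([0.01, 0.05, 0.02])   # the 0.12 estimate is very noisy
    pooled = partial_pool(country_est, country_se, region_mean=0.08, region_sd=0.02)
    print(np.round(pooled, 3))   # noisy estimates move most toward the regional mean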

2,782 citations


Journal ArticleDOI
TL;DR: In this article, a systematic review and meta-analysis of large-scale blood pressure lowering trials, published between Jan 1, 1966, and July 7, 2015, was performed.
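For orientation, the core pooling step in such a meta-analysis is an inverse-variance weighted average of per-trial effect estimates. A minimal sketch with made-up numbers (not data from the review):

    import math

    def fixed_effect_pool(effects, std_errors):
        """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
        weights = [1.0 / se**2 for se in std_errors]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        pooled_se = math.sqrt(1.0 / sum(weights))
        return pooled, pooled_se

    # Hypothetical log relative risks of major cardiovascular events from three trials
    effects = [-0.22, -0.15, -0.30]
    ses     = [0.08, 0.05, 0.12]
    pooled, pooled_se = fixed_effect_pool(effects, ses)
    print(f"pooled log-RR = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")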

2,296 citations


Journal ArticleDOI
John Allison1, K. Amako2, John Apostolakis3, Pedro Arce4, Makoto Asai5, Tsukasa Aso6, Enrico Bagli, Alexander Bagulya7, Sw. Banerjee8, G. Barrand9, B. R. Beck10, Alexey Bogdanov11, D. Brandt, Jeremy M. C. Brown12, Helmut Burkhardt3, Ph Canal8, D. Cano-Ott4, Stephane Chauvie, Kyung-Suk Cho13, G.A.P. Cirrone14, Gene Cooperman15, M. A. Cortés-Giraldo16, G. Cosmo3, Giacomo Cuttone14, G.O. Depaola17, Laurent Desorgher, X. Dong15, Andrea Dotti5, Victor Daniel Elvira8, Gunter Folger3, Ziad Francis18, A. Galoyan19, L. Garnier9, M. Gayer3, K. Genser8, Vladimir Grichine3, Vladimir Grichine7, Susanna Guatelli20, Susanna Guatelli21, Paul Gueye22, P. Gumplinger23, Alexander Howard24, Ivana Hřivnáčová9, S. Hwang13, Sebastien Incerti25, Sebastien Incerti26, A. Ivanchenko3, Vladimir Ivanchenko3, F.W. Jones23, S. Y. Jun8, Pekka Kaitaniemi27, Nicolas A. Karakatsanis28, Nicolas A. Karakatsanis29, M. Karamitrosi30, M.H. Kelsey5, Akinori Kimura31, Tatsumi Koi5, Hisaya Kurashige32, A. Lechner3, S. B. Lee33, Francesco Longo34, M. Maire, Davide Mancusi, A. Mantero, E. Mendoza4, B. Morgan35, K. Murakami2, T. Nikitina3, Luciano Pandola14, P. Paprocki3, J Perl5, Ivan Petrović36, Maria Grazia Pia, W. Pokorski3, J. M. Quesada16, M. Raine, Maria A.M. Reis37, Alberto Ribon3, A. Ristic Fira36, Francesco Romano14, Giorgio Ivan Russo14, Giovanni Santin38, Takashi Sasaki2, D. Sawkey39, J. I. Shin33, Igor Strakovsky40, A. Taborda37, Satoshi Tanaka41, B. Tome, Toshiyuki Toshito, H.N. Tran42, Pete Truscott, L. Urbán, V. V. Uzhinsky19, Jerome Verbeke10, M. Verderi43, B. Wendt44, H. Wenzel8, D. H. Wright5, Douglas Wright10, T. Yamashita, J. Yarba8, H. Yoshida45 
TL;DR: Geant4 as discussed by the authors is a software toolkit for the simulation of the passage of particles through matter, which is used by a large number of experiments and projects in a variety of application domains, including high energy physics, astrophysics and space science, medical physics and radiation protection.
Abstract: Geant4 is a software toolkit for the simulation of the passage of particles through matter. It is used by a large number of experiments and projects in a variety of application domains, including high energy physics, astrophysics and space science, medical physics and radiation protection. Over the past several years, major changes have been made to the toolkit in order to accommodate the needs of these user communities, and to efficiently exploit the growth of computing power made available by advances in technology. The adaptation of Geant4 to multithreading, advances in physics, detector modeling and visualization, extensions to the toolkit, including biasing and reverse Monte Carlo, and tools for physics and release validation are discussed here.
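Geant4 itself is a large C++ toolkit; purely as a toy illustration of the kind of Monte Carlo transport it performs, the Python sketch below samples exponentially distributed free paths of photons through a slab (hypothetical attenuation coefficient, no secondaries or geometry) and compares the transmitted fraction with the Beer-Lambert expectation. It is not Geant4 code or its API.

    import math
    import random

    def transmitted_fraction(slab_thickness_cm: float, attenuation_per_cm: float,
                             n_photons: int = 100_000, seed: int = 1) -> float:
        """
        Toy Monte Carlo: each photon travels a free path drawn from an exponential
        distribution with mean 1/attenuation; it escapes if the path exceeds the slab.
        (Real toolkits track many interaction types, secondary particles and geometry.)
        """
        rng = random.Random(seed)
        escaped = 0
        for _ in range(n_photons):
            free_path = rng.expovariate(attenuation_per_cm)
            if free_path > slab_thickness_cm:
                escaped += 1
        return escaped / n_photons

    mu, thickness = 0.2, 5.0               # hypothetical attenuation coefficient and slab depth
    mc = transmitted_fraction(thickness, mu)
    analytic = math.exp(-mu * thickness)   # Beer-Lambert expectation
    print(f"Monte Carlo: {mc:.3f}  analytic: {analytic:.3f}")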

2,260 citations


Journal ArticleDOI
TL;DR: Recent advances in the use of graphene and other 2D materials in catalytic applications are reviewed, focusing in particular on the catalytic activity of heterogeneous systems such as van der Waals heterostructures (stacks of several 2D crystals).
Abstract: Graphene and other 2D atomic crystals are of considerable interest in catalysis because of their unique structural and electronic properties. Over the past decade, the materials have been used in a variety of reactions, including the oxygen reduction reaction, water splitting and CO2 activation, and have been shown to exhibit a range of catalytic mechanisms. Here, we review recent advances in the use of graphene and other 2D materials in catalytic applications, focusing in particular on the catalytic activity of heterogeneous systems such as van der Waals heterostructures (stacks of several 2D crystals). We discuss the advantages of these materials for catalysis and the different routes available to tune their electronic states and active sites. We also explore the future opportunities of these catalytic materials and the challenges they face in terms of both fundamental understanding and the development of industrial applications.

1,683 citations


Journal ArticleDOI
Nicholas J Kassebaum1, Megha Arora1, Ryan M Barber1, Zulfiqar A Bhutta2  +679 moreInstitutions (268)
TL;DR: In this paper, the authors used the Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) for all-cause mortality, cause-specific mortality, and non-fatal disease burden to derive HALE and DALYs by sex for 195 countries and territories from 1990 to 2015.
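The quantities reported by GBD rest on simple accounting identities: DALYs are years of life lost (YLL) plus years lived with disability (YLD). A schematic calculation with hypothetical inputs, omitting the study's age structure, comorbidity corrections and uncertainty propagation:

    def dalys(deaths: float, years_lost_per_death: float,
              prevalent_cases: float, disability_weight: float) -> float:
        """DALY = YLL + YLD for a single cause (schematic, no age-weighting or discounting)."""
        yll = deaths * years_lost_per_death
        yld = prevalent_cases * disability_weight
        return yll + yld

    # Hypothetical cause in a hypothetical population
    print(dalys(deaths=1_000, years_lost_per_death=25,
                prevalent_cases=50_000, disability_weight=0.2))  # 25000 + 10000 = 35000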

1,533 citations


Journal ArticleDOI
TL;DR: Parton distribution functions (PDFs) are crucial ingredients for the calculation of the relevant cross sections for various scattering processes at the Large Hadron Collider (LHC), as mentioned in this paper. Including data from several previous experiments, the authors find new PDFs, which will be important for the data analysis at the LHC Run-2.
Abstract: Parton distribution functions (PDFs) are crucial ingredients for the calculation of the relevant cross sections for various scattering processes at the Large Hadron Collider (LHC). Including data from several previous experiments, the authors find new PDFs, which will be important for the data analysis at the LHC Run-2.
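The role the PDFs play in LHC predictions is captured by the standard collinear factorization formula, quoted here from general QCD usage rather than from this particular paper:

    \sigma_{pp \to X} \;=\; \sum_{a,b} \int_0^1 dx_1\, dx_2\;
      f_a(x_1, \mu_F^2)\, f_b(x_2, \mu_F^2)\;
      \hat{\sigma}_{ab \to X}(x_1 x_2 s, \mu_F^2, \mu_R^2),

where f_a and f_b are the parton distribution functions fitted in such analyses and σ̂ is the perturbatively calculable partonic cross section.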

1,521 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that to combat the threat to human health and biosecurity from antimicrobial resistance, an understanding of its mechanisms and drivers is needed.

Journal ArticleDOI
26 Jul 2016-eLife
TL;DR: The height differential between the tallest and shortest populations was 19-20 cm a century ago, and has remained the same for women and increased for men a century later despite substantial changes in the ranking of countries.
Abstract: Being taller is associated with enhanced longevity, and higher education and earnings. We reanalysed 1472 population-based studies, with measurement of height on more than 18.6 million participants to estimate mean height for people born between 1896 and 1996 in 200 countries. The largest gain in adult height over the past century has occurred in South Korean women and Iranian men, who became 20.2 cm (95% credible interval 17.5–22.7) and 16.5 cm (13.3–19.7) taller, respectively. In contrast, there was little change in adult height in some sub-Saharan African countries and in South Asia over the century of analysis. The tallest people over these 100 years are men born in the Netherlands in the last quarter of 20th century, whose average heights surpassed 182.5 cm, and the shortest were women born in Guatemala in 1896 (140.3 cm; 135.8–144.8). The height differential between the tallest and shortest populations was 19-20 cm a century ago, and has remained the same for women and increased for men a century later despite substantial changes in the ranking of countries.

Journal ArticleDOI
TL;DR: The rationale underlying the iterated racing procedures in irace is described and a number of recent extensions are introduced, including a restart mechanism to avoid premature convergence, the use of truncated sampling distributions to correctly handle parameter bounds, and an elitist racing procedure that ensures the best configurations returned are also those evaluated on the largest number of training instances.
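For context, iterated racing works by sampling candidate configurations from a probability model, "racing" them on a growing set of instances while discarding clearly worse candidates, and then biasing the sampling model toward the survivors. The sketch below is a heavily simplified conceptual illustration (a toy one-dimensional parameter, elimination by mean cost rather than irace's statistical tests), not the irace package's implementation.

    import random

    def toy_cost(x: float, instance_seed: int) -> float:
        """Hypothetical noisy objective: best near x = 0.3, noise varies per instance."""
        rng = random.Random(instance_seed)
        return (x - 0.3) ** 2 + rng.gauss(0, 0.01)

    def iterated_racing(iterations=5, candidates_per_iter=12, instances_per_race=8, keep=3):
        mu, sigma = 0.5, 0.3                      # sampling distribution over the parameter
        rng = random.Random(0)
        elites = []
        for _ in range(iterations):
            # Sample new candidates around the current model (truncated to the [0, 1] bounds)
            pool = elites + [min(1.0, max(0.0, rng.gauss(mu, sigma)))
                             for _ in range(candidates_per_iter - len(elites))]
            # "Race": evaluate on shared instances; here we simply rank by mean cost
            scores = {x: sum(toy_cost(x, seed) for seed in range(instances_per_race)) / instances_per_race
                      for x in pool}
            elites = sorted(scores, key=scores.get)[:keep]
            # Update and tighten the sampling model around the elite configurations
            mu = sum(elites) / len(elites)
            sigma *= 0.7
        return elites[0]

    print(round(iterated_racing(), 3))   # converges near the optimum at 0.3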

Journal ArticleDOI
Aysu Okbay1, Jonathan P. Beauchamp2, Mark Alan Fontana3, James J. Lee4  +293 moreInstitutions (81)
26 May 2016-Nature
TL;DR: In this article, the results of a genome-wide association study (GWAS) for educational attainment were reported, showing that single-nucleotide polymorphisms associated with educational attainment disproportionately occur in genomic regions regulating gene expression in the fetal brain.
Abstract: Educational attainment is strongly influenced by social and other environmental factors, but genetic factors are estimated to account for at least 20% of the variation across individuals. Here we report the results of a genome-wide association study (GWAS) for educational attainment that extends our earlier discovery sample of 101,069 individuals to 293,723 individuals, and a replication study in an independent sample of 111,349 individuals from the UK Biobank. We identify 74 genome-wide significant loci associated with the number of years of schooling completed. Single-nucleotide polymorphisms associated with educational attainment are disproportionately found in genomic regions regulating gene expression in the fetal brain. Candidate genes are preferentially expressed in neural tissue, especially during the prenatal period, and enriched for biological pathways involved in neural development. Our findings demonstrate that, even for a behavioural phenotype that is mostly environmentally determined, a well-powered GWAS identifies replicable associated genetic variants that suggest biologically relevant pathways. Because educational attainment is measured in large numbers of individuals, it will continue to be useful as a proxy phenotype in efforts to characterize the genetic influences of related phenotypes, including cognition and neuropsychiatric diseases.

Journal ArticleDOI
TL;DR: This work designed cobalt-based multilayered thin films in which the cobalt layer is sandwiched between two heavy metals and so provides additive interfacial Dzyaloshinskii-Moriya interactions (DMIs), which reach a value close to 2 mJ m(-2) in the case of the Ir|Co|Pt asymmetric multilayers.
Abstract: Facing the ever-growing demand for data storage will most probably require a new paradigm. Nanoscale magnetic skyrmions are anticipated to solve this issue as they are arguably the smallest spin textures in magnetic thin films in nature. We designed cobalt-based multilayered thin films in which the cobalt layer is sandwiched between two heavy metals and so provides additive interfacial Dzyaloshinskii-Moriya interactions (DMIs), which reach a value close to 2 mJ m(-2) in the case of the Ir|Co|Pt asymmetric multilayers. Using a magnetization-sensitive scanning X-ray transmission microscopy technique, we imaged small magnetic domains at very low fields in these multilayers. The study of their behaviour in a perpendicular magnetic field allows us to conclude that they are actually magnetic skyrmions stabilized by the large DMI. This discovery of stable sub-100 nm individual skyrmions at room temperature in a technologically relevant material opens the way for device applications in the near future.
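For readers unfamiliar with the interaction quoted above, the interfacial Dzyaloshinskii-Moriya energy that stabilizes such skyrmions is conventionally written (standard form, up to sign convention; not a formula taken from this paper) as

    E_{\mathrm{DMI}} = -\sum_{\langle i,j \rangle} \mathbf{D}_{ij} \cdot \left( \mathbf{S}_i \times \mathbf{S}_j \right),

where D_ij is fixed by the broken inversion symmetry at the heavy-metal/cobalt interfaces; the value close to 2 mJ m(-2) quoted above is the effective strength of this term per unit interface area.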

Journal ArticleDOI
07 Apr 2016
TL;DR: In this paper, the authors explore and discuss how soil scientists can help to reach the recently adopted UN Sustainable Development Goals (SDGs) in the most effective manner and recommend the following steps to be taken by the soil science community as a whole: (i) embrace the UN SDGs, as they provide a platform that allows soil science to demonstrate its relevance for realizing a sustainable society by 2030; (ii) show the specific value of soil science: research should explicitly show how using modern soil information can improve the results of inter-and transdisciplinary studies on SDGs related to food security
Abstract: In this forum paper we discuss how soil scientists can help to reach the recently adopted UN Sustainable Development Goals (SDGs) in the most effective manner. Soil science, as a land-related discipline, has important links to several of the SDGs, which are demonstrated through the functions of soils and the ecosystem services that are linked to those functions (see graphical abstract in the Supplement). We explore and discuss how soil scientists can rise to the challenge both internally, in terms of our procedures and practices, and externally, in terms of our relations with colleague scientists in other disciplines, diverse groups of stakeholders and the policy arena. To meet these goals we recommend the following steps to be taken by the soil science community as a whole: (i) embrace the UN SDGs, as they provide a platform that allows soil science to demonstrate its relevance for realizing a sustainable society by 2030; (ii) show the specific value of soil science: research should explicitly show how using modern soil information can improve the results of inter- and transdisciplinary studies on SDGs related to food security, water scarcity, climate change, biodiversity loss and health threats; (iii) take leadership in overarching system analysis of ecosystems, as soils and soil scientists have an integrated nature and this places soil scientists in a unique position; (iv) raise awareness of soil organic matter as a key attribute of soils to illustrate its importance for soil functions and ecosystem services; (v) improve the transfer of knowledge through knowledge brokers with a soil background; (vi) start at the basis: educational programmes are needed at all levels, starting in primary schools, and emphasizing practical, down-to-earth examples; (vii) facilitate communication with the policy arena by framing research in terms that resonate with politicians in terms of the policy cycle or by considering drivers, pressures and responses affecting impacts of land use change; and finally (viii) all this is only possible if researchers, with soil scientists in the front lines, look over the hedge towards other disciplines, to the world at large and to the policy arena, reaching over to listen first, as a basis for genuine collaboration.

Journal ArticleDOI
TL;DR: In this paper, the authors present an analysis of the online sharing economy discourse, identifying that the sharing economy is framed as an economic opportunity; a more sustainable form of consumption; a pathway to a decentralised, equitable and sustainable economy; creating unregulated marketplaces; reinforcing the neoliberal paradigm; and, an incoherent field of innovation.

Journal ArticleDOI
TL;DR: The group agreed on sets of uniform sampling criteria, placental gross descriptors, pathologic terminologies, and diagnostic criteria for placental lesions, which will assist in international comparability of clinicopathologic and scientific studies and assist in refining the significance of lesions associated with adverse pregnancy and later health outcomes.
Abstract: Context.—The value of placental examination in investigations of adverse pregnancy outcomes may be compromised by sampling and definition differences between laboratories. Objective.—To establish an agreed-upon protocol for sampling the placenta, and for diagnostic criteria for placental lesions. Recommendations would cover reporting placentas in tertiary centers as well as in community hospitals and district general hospitals, and are also relevant to the scientific research community. Data Sources.—Areas of controversy or uncertainty were explored prior to a 1-day meeting where placental and perinatal pathologists, and maternal-fetal medicine specialists discussed available evidence and subsequently reached consensus where possible. Conclusions.—The group agreed on sets of uniform sampling criteria, placental gross descriptors, pathologic terminologies, and diagnostic criteria. The terminology and microscopic descriptions for maternal vascular malperfusion, fetal vascular malperfusion, delayed villous m...

Journal ArticleDOI
Nabila Aghanim1, Monique Arnaud2, M. Ashdown3, J. Aumont1  +291 moreInstitutions (73)
TL;DR: In this article, the authors present the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties.
Abstract: This paper presents the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties, both instrumental and astrophysical in nature. They are based on the same hybrid approach used for the previous release, i.e., a pixel-based likelihood at low multipoles (l < 30) and a Gaussian approximation to the distribution of cross-power spectra at higher multipoles. The main improvements are the use of more and better processed data and of Planck polarization information, along with more detailed models of foregrounds and instrumental uncertainties. The increased redundancy brought by more than doubling the amount of data analysed enables further consistency checks and enhanced immunity to systematic effects. It also improves the constraining power of Planck, in particular with regard to small-scale foreground properties. Progress in the modelling of foreground emission enables the retention of a larger fraction of the sky to determine the properties of the CMB, which also contributes to the enhanced precision of the spectra. Improvements in data processing and instrumental modelling further reduce uncertainties. Extensive tests establish the robustness and accuracy of the likelihood results, from temperature alone, from polarization alone, and from their combination. For temperature, we also perform a full likelihood analysis of realistic end-to-end simulations of the instrumental response to the sky, which were fed into the actual data processing pipeline; this does not reveal biases from residual low-level instrumental systematics. Even with the increase in precision and robustness, the ΛCDM cosmological model continues to offer a very good fit to the Planck data. The slope of the primordial scalar fluctuations, n_s, is confirmed smaller than unity at more than 5σ from Planck alone. We further validate the robustness of the likelihood results against specific extensions to the baseline cosmology, which are particularly sensitive to data at high multipoles. For instance, the effective number of neutrino species remains compatible with the canonical value of 3.046. For this first detailed analysis of Planck polarization spectra, we concentrate at high multipoles on the E modes, leaving the analysis of the weaker B modes to future work. At low multipoles we use temperature maps at all Planck frequencies along with a subset of polarization data. These data take advantage of Planck’s wide frequency coverage to improve the separation of CMB and foreground emission. Within the baseline ΛCDM cosmology this requires τ = 0.078 ± 0.019 for the reionization optical depth, which is significantly lower than estimates without the use of high-frequency data for explicit monitoring of dust emission. At high multipoles we detect residual systematic errors in E polarization, typically at the μK^2 level; we therefore choose to retain temperature information alone for high multipoles as the recommended baseline, in particular for testing non-minimal models. Nevertheless, the high-multipole polarization spectra from Planck are already good enough to enable a separate high-precision determination of the parameters of the ΛCDM model, showing consistency with those established independently from temperature information alone.
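The "Gaussian approximation to the distribution of cross-power spectra at higher multipoles" referred to above has the generic schematic form

    -2 \ln \mathcal{L} \;\approx\; \sum_{\ell \ell'} \left( \hat{C}_{\ell} - C_{\ell}^{\mathrm{th}} \right) \left[ \mathsf{M}^{-1} \right]_{\ell \ell'} \left( \hat{C}_{\ell'} - C_{\ell'}^{\mathrm{th}} \right),

where Ĉ_ℓ are the measured cross-spectra, C_ℓ^th the model spectra and M their covariance; the actual Planck likelihood adds detailed foreground and instrumental modelling on top of this skeleton.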

Journal ArticleDOI
TL;DR: The origin and structure of exosomes as well as their biological functions are outlined, with a focus on specific applications of exosomes as drug delivery systems in pharmaceutical drug development.

Journal ArticleDOI
TL;DR: In this paper, the authors extend the literature on how managerial traits relate to corporate choices by documenting that firms run by female CEOs have lower leverage, less volatile earnings, and a higher chance of survival than otherwise similar firms run by male CEOs, and that transitions from male to female CEOs are associated with economically and statistically significant reductions in corporate risk-taking.

Journal ArticleDOI
10 Mar 2016-Nature
TL;DR: These repeat bursts with high dispersion measure and variable spectra specifically seen from the direction of FRB 121102 support an origin in a young, highly magnetized, extragalactic neutron star.
Abstract: Observations of repeated fast radio bursts, having dispersion measures and sky positions consistent with those of FRB 121102, show that the signals do not originate in a single cataclysmic event and may come from a young, highly magnetized, extragalactic neutron star. Fast radio bursts (FRBs) are transient radio pulses that last a few milliseconds. They are thought to be extragalactic, and are of unknown physical origin. Many FRB models have proposed the cause to be one-time-only cataclysmic events. Follow-up monitoring of detected bursts did not reveal repeat bursts, consistent with such models. However, this paper reports ten additional bursts from the direction of FRB 121102, demonstrating that its source survived the energetic events that caused the bursts. Although there may be multiple physical origins for the bursts, the repeating bursts seen from FRB 121102 support an origin in a young, highly magnetized, extragalactic neutron star. Fast radio bursts are millisecond-duration astronomical radio pulses of unknown physical origin that appear to come from extragalactic distances [1-8]. Previous follow-up observations have failed to find additional bursts at the same dispersion measure (that is, the integrated column density of free electrons between source and telescope) and sky position as the original detections [9]. The apparent non-repeating nature of these bursts has led to the suggestion that they originate in cataclysmic events [10]. Here we report observations of ten additional bursts from the direction of the fast radio burst FRB 121102. These bursts have dispersion measures and sky positions consistent with the original burst [4]. This unambiguously identifies FRB 121102 as repeating and demonstrates that its source survives the energetic events that cause the bursts. Additionally, the bursts from FRB 121102 show a wide range of spectral shapes that appear to be predominantly intrinsic to the source and which vary on timescales of minutes or less. Although there may be multiple physical origins for the population of fast radio bursts, these repeat bursts with high dispersion measure and variable spectra specifically seen from the direction of FRB 121102 support an origin in a young, highly magnetized, extragalactic neutron star [11,12].
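The dispersion measure (DM) mentioned above maps directly onto a frequency-dependent arrival delay, Δt ≈ 4.149 ms × (DM / pc cm⁻³) × (ν / GHz)⁻², which is how such bursts are identified in practice. A small sketch using this standard cold-plasma relation with a hypothetical DM value (not the measured DM of FRB 121102):

    def dispersion_delay_ms(dm_pc_cm3: float, freq_ghz: float) -> float:
        """Cold-plasma dispersion delay relative to infinite frequency, in milliseconds."""
        return 4.149 * dm_pc_cm3 / freq_ghz**2

    dm = 500.0  # hypothetical dispersion measure, pc cm^-3
    delay_low, delay_high = dispersion_delay_ms(dm, 1.2), dispersion_delay_ms(dm, 1.5)
    print(f"extra delay across a 1.2-1.5 GHz band: {delay_low - delay_high:.1f} ms")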


Journal ArticleDOI
Sergey Alekhin, Wolfgang Altmannshofer1, Takehiko Asaka2, Brian Batell3, Fedor Bezrukov4, Kyrylo Bondarenko5, Alexey Boyarsky5, Ki-Young Choi6, Cristóbal Corral7, Nathaniel Craig8, David Curtin9, Sacha Davidson10, Sacha Davidson11, André de Gouvêa12, Stefano Dell'Oro, Patrick deNiverville13, P. S. Bhupal Dev14, Herbi K. Dreiner15, Marco Drewes16, Shintaro Eijima17, Rouven Essig18, Anthony Fradette13, Björn Garbrecht16, Belen Gavela19, Gian F. Giudice3, Mark D. Goodsell20, Mark D. Goodsell21, Dmitry Gorbunov22, Stefania Gori1, Christophe Grojean23, Alberto Guffanti24, Thomas Hambye25, Steen Honoré Hansen24, Juan Carlos Helo26, Juan Carlos Helo7, Pilar Hernández27, Alejandro Ibarra16, Artem Ivashko28, Artem Ivashko5, Eder Izaguirre1, Joerg Jaeckel29, Yu Seon Jeong30, Felix Kahlhoefer, Yonatan Kahn31, Andrey Katz32, Andrey Katz3, Andrey Katz33, Choong Sun Kim30, Sergey Kovalenko7, Gordan Krnjaic1, Valery E. Lyubovitskij34, Valery E. Lyubovitskij35, Valery E. Lyubovitskij36, Simone Marcocci, Matthew McCullough3, David McKeen37, Guenakh Mitselmakher38, Sven Moch39, Rabindra N. Mohapatra9, David E. Morrissey40, Maksym Ovchynnikov28, Emmanuel A. Paschos, Apostolos Pilaftsis14, Maxim Pospelov13, Maxim Pospelov1, Mary Hall Reno41, Andreas Ringwald, Adam Ritz13, Leszek Roszkowski, Valery Rubakov, Oleg Ruchayskiy24, Oleg Ruchayskiy17, Ingo Schienbein42, Daniel Schmeier15, Kai Schmidt-Hoberg, Pedro Schwaller3, Goran Senjanovic43, Osamu Seto44, Mikhail Shaposhnikov17, Lesya Shchutska38, J. Shelton45, Robert Shrock18, Brian Shuve1, Michael Spannowsky46, Andrew Spray47, Florian Staub3, Daniel Stolarski3, Matt Strassler32, Vladimir Tello, Francesco Tramontano48, Anurag Tripathi, Sean Tulin49, Francesco Vissani, Martin Wolfgang Winkler15, Kathryn M. Zurek50, Kathryn M. Zurek51 
Perimeter Institute for Theoretical Physics1, Niigata University2, CERN3, University of Connecticut4, Leiden University5, Korea Astronomy and Space Science Institute6, Federico Santa María Technical University7, University of California, Santa Barbara8, University of Maryland, College Park9, Claude Bernard University Lyon 110, University of Lyon11, Northwestern University12, University of Victoria13, University of Manchester14, University of Bonn15, Technische Universität München16, École Polytechnique Fédérale de Lausanne17, Stony Brook University18, Autonomous University of Madrid19, Centre national de la recherche scientifique20, University of Paris21, Moscow Institute of Physics and Technology22, Autonomous University of Barcelona23, University of Copenhagen24, Université libre de Bruxelles25, University of La Serena26, University of Valencia27, Taras Shevchenko National University of Kyiv28, Heidelberg University29, Yonsei University30, Princeton University31, Harvard University32, University of Geneva33, Tomsk Polytechnic University34, University of Tübingen35, Tomsk State University36, University of Washington37, University of Florida38, University of Hamburg39, TRIUMF40, University of Iowa41, University of Grenoble42, International Centre for Theoretical Physics43, Hokkai Gakuen University44, University of Illinois at Urbana–Champaign45, Durham University46, University of Melbourne47, University of Naples Federico II48, York University49, Lawrence Berkeley National Laboratory50, University of California, Berkeley51
TL;DR: It is demonstrated that the SHiP experiment has a unique potential to discover new physics and can directly probe a number of solutions of beyond the standard model puzzles, such as neutrino masses, baryon asymmetry of the Universe, dark matter, and inflation.
Abstract: This paper describes the physics case for a new fixed target facility at CERN SPS. The SHiP (search for hidden particles) experiment is intended to hunt for new physics in the largely unexplored domain of very weakly interacting particles with masses below the Fermi scale, inaccessible to the LHC experiments, and to study tau neutrino physics. The same proton beam setup can be used later to look for decays of tau-leptons with lepton flavour number non-conservation, $\tau \to 3\mu $ and to search for weakly-interacting sub-GeV dark matter candidates. We discuss the evidence for physics beyond the standard model and describe interactions between new particles and four different portals—scalars, vectors, fermions or axion-like particles. We discuss motivations for different models, manifesting themselves via these interactions, and how they can be probed with the SHiP experiment and present several case studies. The prospects to search for relatively light SUSY and composite particles at SHiP are also discussed. We demonstrate that the SHiP experiment has a unique potential to discover new physics and can directly probe a number of solutions of beyond the standard model puzzles, such as neutrino masses, baryon asymmetry of the Universe, dark matter, and inflation.

Journal ArticleDOI
14 Oct 2016-Science
TL;DR: If the Integrated Assessment Models informing policy-makers assume the large-scale use of negative-emission technologies and they are not deployed or are unsuccessful at removing CO2 from the atmosphere at the levels assumed, society will be locked into a high-temperature pathway.
Abstract: In December 2015, member states of the United Nations Framework Convention on Climate Change (UNFCCC) adopted the Paris Agreement, which aims to hold the increase in the global average temperature to below 2°C and to pursue efforts to limit the temperature increase to 1.5°C. The Paris Agreement requires that anthropogenic greenhouse gas emission sources and sinks are balanced by the second half of this century. Because some nonzero sources are unavoidable, this leads to the abstract concept of “negative emissions,” the removal of carbon dioxide (CO2) from the atmosphere through technical means. The Integrated Assessment Models (IAMs) informing policy-makers assume the large-scale use of negative-emission technologies. If we rely on these and they are not deployed or are unsuccessful at removing CO2 from the atmosphere at the levels assumed, society will be locked into a high-temperature pathway.

Journal ArticleDOI
TL;DR: An overview of the NGT and Delphi technique is provided, including the steps involved and the types of research questions best suited to each method, with examples from the pharmacy literature.
Abstract: Introduction The Nominal Group Technique (NGT) and Delphi Technique are consensus methods used in research that is directed at problem-solving, idea-generation, or determining priorities. While consensus methods are commonly used in health services literature, few studies in pharmacy practice use these methods. This paper provides an overview of the NGT and Delphi technique, including the steps involved and the types of research questions best suited to each method, with examples from the pharmacy literature. Methodology The NGT entails face-to-face discussion in small groups, and provides a prompt result for researchers. The classic NGT involves four key stages: silent generation, round robin, clarification and voting (ranking). Variations have occurred in relation to generating ideas, and how 'consensus' is obtained from participants. The Delphi technique uses a multistage self-completed questionnaire with individual feedback, to determine consensus from a larger group of 'experts.' Questionnaires have been mailed, or more recently, e-mailed to participants. When to use The NGT has been used to explore consumer and stakeholder views, while the Delphi technique is commonly used to develop guidelines with health professionals. Method choice is influenced by various factors, including the research question, the perception of consensus required, and associated practicalities such as time and geography. Limitations The NGT requires participants to personally attend a meeting. This may prove difficult to organise and geography may limit attendance. The Delphi technique can take weeks or months to conclude, especially if multiple rounds are required, and may be complex for lay people to complete.
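After the voting (ranking) stage of an NGT meeting, individual ranks have to be aggregated into a group priority order; one common, simple scheme awards points inversely to rank and sums them. The ballots and ideas below are hypothetical, and real studies vary in their aggregation rules:

    from collections import Counter

    # Hypothetical NGT voting: each participant ranks their top three ideas (1 = highest priority)
    votes = [
        {"medication review": 1, "patient counselling": 2, "e-prescribing": 3},
        {"patient counselling": 1, "medication review": 2, "staff training": 3},
        {"medication review": 1, "e-prescribing": 2, "patient counselling": 3},
    ]
    TOP_N = 3  # rank 1 earns 3 points, rank 2 earns 2, rank 3 earns 1

    scores = Counter()
    for ballot in votes:
        for idea, rank in ballot.items():
            scores[idea] += TOP_N + 1 - rank   # ideas no one selects simply score nothing

    for idea, score in scores.most_common():
        print(idea, score)
    # medication review 8, patient counselling 6, e-prescribing 3, staff training 1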

Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud3, M. Ashdown4  +301 moreInstitutions (72)
TL;DR: In this paper, the implications of Planck data for models of dark energy (DE) and modified gravity (MG) beyond the standard cosmological constant scenario were studied, and it was shown that the density of DE at early times has to be below 2% of the critical density, even when forced to play a role for z < 50.
Abstract: We study the implications of Planck data for models of dark energy (DE) and modified gravity (MG) beyond the standard cosmological constant scenario. We start with cases where the DE only directly affects the background evolution, considering Taylor expansions of the equation of state w(a), as well as principal component analysis and parameterizations related to the potential of a minimally coupled DE scalar field. When estimating the density of DE at early times, we significantly improve present constraints and find that it has to be below ~2% (at 95% confidence) of the critical density, even when forced to play a role for z < 50 only. We then move to general parameterizations of the DE or MG perturbations that encompass both effective field theories and the phenomenology of gravitational potentials in MG models. Lastly, we test a range of specific models, such as k-essence, f(R) theories, and coupled DE. In addition to the latest Planck data, for our main analyses, we use background constraints from baryonic acoustic oscillations, type-Ia supernovae, and local measurements of the Hubble constant. We further show the impact of measurements of the cosmological perturbations, such as redshift-space distortions and weak gravitational lensing. These additional probes are important tools for testing MG models and for breaking degeneracies that are still present in the combination of Planck and background data sets. All results that include only background parameterizations (expansion of the equation of state, early DE, general potentials in minimally-coupled scalar fields or principal component analysis) are in agreement with ΛCDM. When testing models that also change perturbations (even when the background is fixed to ΛCDM), some tensions appear in a few scenarios: the maximum one found is ~2σ for Planck TT+lowP when parameterizing observables related to the gravitational potentials with a chosen time dependence; the tension increases to, at most, 3σ when external data sets are included. It however disappears when including CMB lensing.
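The "Taylor expansions of the equation of state w(a)" mentioned above are usually truncated at first order in the scale factor, giving the familiar two-parameter form (a standard parameterization, reproduced for orientation rather than taken from the paper itself)

    w(a) = w_0 + w_a \, (1 - a),

so that a cosmological constant corresponds to w_0 = -1, w_a = 0, and departures in either parameter are what the background data constrain.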

Journal ArticleDOI
TL;DR: The recommendations of the present document represent the best clinical wisdom upon which physicians, nurses and families should base their decisions and should encourage public policy makers to develop a global effort to improve identification and treatment of high blood pressure among children and adolescents.
Abstract: Increasing prevalence of hypertension (HTN) in children and adolescents has become a significant public health issue driving a considerable amount of research. Aspects discussed in this document include advances in the definition of HTN in those 16 years or older, clinical significance of isolated systolic HTN in youth, the importance of out of office and central blood pressure measurement, new risk factors for HTN, methods to assess vascular phenotypes, clustering of cardiovascular risk factors and treatment strategies among others. The recommendations of the present document synthesize a considerable amount of scientific data and clinical experience and represent the best clinical wisdom upon which physicians, nurses and families should base their decisions. In addition, as they call attention to the burden of HTN in children and adolescents, and its contribution to the current epidemic of cardiovascular disease, these guidelines should encourage public policy makers to develop a global effort to improve identification and treatment of high blood pressure among children and adolescents.

Journal ArticleDOI
T. M. C. Abbott, F. B. Abdalla1, Jelena Aleksić2, S. Allam3  +153 moreInstitutions (43)
TL;DR: In this paper, the authors present an overview of results from the Dark Energy Survey (DES).
Abstract: US Department of Energy; US National Science Foundation; Ministry of Science and Education of Spain; Science and Technology Facilities Council of the United Kingdom; Higher Education Funding Council for England; National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign; Kavli Institute of Cosmological Physics at the University of Chicago; Center for Cosmology and Astro-Particle Physics at the Ohio State University; Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University; Financiadora de Estudos e Projetos; Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro; Conselho Nacional de Desenvolvimento Cientifico e Tecnologico and the Ministerio da Ciencia, Tecnologia e Inovacao; Deutsche Forschungsgemeinschaft; Collaborating Institutions in the Dark Energy Survey; National Science Foundation [AST-1138766]; University of California at Santa Cruz; University of Cambridge, Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas-Madrid; University of Chicago, University College London; DES-Brazil Consortium; University of Edinburgh; Eidgenossische Technische Hochschule (ETH) Zurich, Fermi National Accelerator Laboratory; University of Illinois at Urbana-Champaign; Institut de Ciencies de l'Espai (IEEC/CSIC); Institut de Fisica d'Altes Energies, Lawrence Berkeley National Laboratory; Ludwig-Maximilians Universitat Munchen; European Research Council [FP7/291329]; MINECO [AYA2012-39559, ESP2013-48274, FPA2013-47986]; Centro de Excelencia Severo Ochoa [SEV-2012-0234]; European Research Council under the European Union [240672, 291329, 306478]

Journal ArticleDOI
01 Dec 2016-Nature
TL;DR: In this article, the authors present a comprehensive analysis of warming-induced changes in soil carbon stocks by assembling data from 49 field experiments located across North America, Europe and Asia, and provide estimates of soil carbon sensitivity to warming that may help to constrain Earth system model projections.
Abstract: The majority of the Earth's terrestrial carbon is stored in the soil. If anthropogenic warming stimulates the loss of this carbon to the atmosphere, it could drive further planetary warming. Despite evidence that warming enhances carbon fluxes to and from the soil, the net global balance between these responses remains uncertain. Here we present a comprehensive analysis of warming-induced changes in soil carbon stocks by assembling data from 49 field experiments located across North America, Europe and Asia. We find that the effects of warming are contingent on the size of the initial soil carbon stock, with considerable losses occurring in high-latitude areas. By extrapolating this empirical relationship to the global scale, we provide estimates of soil carbon sensitivity to warming that may help to constrain Earth system model projections. Our empirical relationship suggests that global soil carbon stocks in the upper soil horizons will fall by 30 ± 30 petagrams of carbon to 203 ± 161 petagrams of carbon under one degree of warming, depending on the rate at which the effects of warming are realized. Under the conservative assumption that the response of soil carbon to warming occurs within a year, a business-as-usual climate scenario would drive the loss of 55 ± 50 petagrams of carbon from the upper soil horizons by 2050. This value is around 12-17 per cent of the expected anthropogenic emissions over this period. Despite the considerable uncertainty in our estimates, the direction of the global soil carbon response is consistent across all scenarios. This provides strong empirical support for the idea that rising temperatures will stimulate the net loss of soil carbon to the atmosphere, driving a positive land carbon-climate feedback that could accelerate climate change.
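The closing percentage can be sanity-checked directly from the abstract's own central values: a 55 Pg C loss that amounts to roughly 12-17 per cent of expected anthropogenic emissions implies cumulative emissions of roughly 320-460 Pg C over the same period. A one-line check:

    loss_pg_c = 55.0                      # projected loss from upper soil horizons by 2050
    for fraction in (0.12, 0.17):
        print(f"{fraction:.0%} -> implied anthropogenic emissions ~ {loss_pg_c / fraction:.0f} Pg C")
    # 12% -> ~458 Pg C, 17% -> ~324 Pg C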