scispace - formally typeset

Showing papers by "Stony Brook University" published in 2019


Journal ArticleDOI
Peter A. R. Ade1, James E. Aguirre2, Z. Ahmed3, Simone Aiola4  +276 moreInstitutions (53)
TL;DR: The Simons Observatory (SO) is a new cosmic microwave background experiment being built on Cerro Toco in Chile, due to begin observations in the early 2020s.
Abstract: The Simons Observatory (SO) is a new cosmic microwave background experiment being built on Cerro Toco in Chile, due to begin observations in the early 2020s. We describe the scientific goals of the experiment, motivate the design, and forecast its performance. SO will measure the temperature and polarization anisotropy of the cosmic microwave background in six frequency bands centered at: 27, 39, 93, 145, 225 and 280 GHz. The initial configuration of SO will have three small-aperture 0.5-m telescopes and one large-aperture 6-m telescope, with a total of 60,000 cryogenic bolometers. Our key science goals are to characterize the primordial perturbations, measure the number of relativistic species and the mass of neutrinos, test for deviations from a cosmological constant, improve our understanding of galaxy evolution, and constrain the duration of reionization. The small aperture telescopes will target the largest angular scales observable from Chile, mapping ≈ 10% of the sky to a white noise level of 2 μK-arcmin in combined 93 and 145 GHz bands, to measure the primordial tensor-to-scalar ratio, r, at a target level of σ(r)=0.003. The large aperture telescope will map ≈ 40% of the sky at arcminute angular resolution to an expected white noise level of 6 μK-arcmin in combined 93 and 145 GHz bands, overlapping with the majority of the Large Synoptic Survey Telescope sky region and partially with the Dark Energy Spectroscopic Instrument. With up to an order of magnitude lower polarization noise than maps from the Planck satellite, the high-resolution sky maps will constrain cosmological parameters derived from the damping tail, gravitational lensing of the microwave background, the primordial bispectrum, and the thermal and kinematic Sunyaev-Zel'dovich effects, and will aid in delensing the large-angle polarization signal to measure the tensor-to-scalar ratio. 
The survey will also provide a legacy catalog of 16,000 galaxy clusters and more than 20,000 extragalactic sources.
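The quoted combined level of 2 μK-arcmin for the 93 and 145 GHz bands follows from inverse-variance combination of per-band map noise. A minimal sketch of that combination rule, with illustrative per-band levels (the actual SO band-by-band forecasts are not quoted in this abstract):

```python
def combine_noise(levels_uk_arcmin):
    """Inverse-variance combination of per-band white-noise levels.

    Maps with independent white noise combine by adding inverse
    variances, so the combined level is (sum of 1/N_i^2)^(-1/2).
    """
    return sum(1.0 / n**2 for n in levels_uk_arcmin) ** -0.5

# Illustrative: two bands at ~2.8 uK-arcmin each combine to ~2 uK-arcmin,
# consistent with the quoted small-aperture target for 93+145 GHz.
print(round(combine_noise([2.8, 2.8]), 2))  # 1.98
```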

1,027 citations


Journal ArticleDOI
TL;DR: In this paper, a global analysis of the neutrino oscillation data available as of fall 2018 is presented in the framework of three massive mixed neutrinos, with the goal of determining the ranges of allowed values for the six relevant parameters.
Abstract: We present the results of a global analysis of the neutrino oscillation data available as of fall 2018 in the framework of three massive mixed neutrinos, with the goal of determining the ranges of allowed values for the six relevant parameters. We describe the complementarity and quantify the tensions among the results of the different data samples contributing to the determination of each parameter. We also show how these results vary when combining our global likelihood with the χ2 map provided by Super-Kamiokande for their atmospheric neutrino data analysis in the same framework. The best fit of the analysis is for the normal mass ordering, with the inverted ordering disfavoured with Δχ2 = 4.7 (9.3) without (with) SK-atm. We find a preference for the second octant of θ23, disfavouring the first octant with Δχ2 = 4.4 (6.0) without (with) SK-atm. The best fit for the complex phase is δCP = 215° with CP conservation being allowed at Δχ2 = 1.5 (1.8). As a byproduct we quantify the correlated ranges for the laboratory observables sensitive to the absolute neutrino mass scale in beta decay, $m_{\nu_e}$, and neutrino-less double beta decay, mee, and the total mass of the neutrinos, Σ, which is most relevant in cosmology.
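The quoted Δχ2 preferences translate into Gaussian significances via √Δχ2 for a single effective degree of freedom (a standard rule of thumb, applied here as an illustration, not a claim from the paper):

```python
import math

def nsigma(delta_chi2):
    # For one effective degree of freedom, a chi-square difference
    # corresponds to sqrt(delta_chi2) Gaussian standard deviations.
    return math.sqrt(delta_chi2)

# Quoted preferences for normal vs inverted mass ordering:
print(f"{nsigma(4.7):.1f} sigma without SK-atm")  # 2.2 sigma
print(f"{nsigma(9.3):.1f} sigma with SK-atm")     # 3.0 sigma
```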

860 citations


Journal ArticleDOI
01 Nov 2019-Science
TL;DR: Graphene, with a low lattice mismatch for Zn, is shown to be effective in driving deposition of Zn with a locked crystallographic orientation relation, and the resultant epitaxial Zn anodes achieve exceptional reversibility over thousands of cycles at moderate and high rates.
Abstract: The propensity of metals to form irregular and nonplanar electrodeposits at liquid-solid interfaces has emerged as a fundamental barrier to high-energy, rechargeable batteries that use metal anodes. We report an epitaxial mechanism to regulate nucleation, growth, and reversibility of metal anodes. The crystallographic, surface texturing, and electrochemical criteria for reversible epitaxial electrodeposition of metals are defined and their effectiveness demonstrated by using zinc (Zn), a safe, low-cost, and energy-dense battery anode material. Graphene, with a low lattice mismatch for Zn, is shown to be effective in driving deposition of Zn with a locked crystallographic orientation relation. The resultant epitaxial Zn anodes achieve exceptional reversibility over thousands of cycles at moderate and high rates. Reversible electrochemical epitaxy of metals provides a general pathway toward energy-dense batteries with high reversibility.
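The "low lattice mismatch" claim can be made concrete with textbook basal-plane lattice parameters for graphene and hexagonal close-packed zinc (the numerical values below are standard crystallographic figures assumed for illustration, not taken from the paper):

```python
# Rough lattice-mismatch estimate for Zn deposited on graphene,
# using textbook basal-plane lattice parameters in angstroms.
a_graphene = 2.46  # graphene honeycomb lattice constant (assumed)
a_zn = 2.66        # zinc a-axis, hexagonal close-packed (assumed)

mismatch = abs(a_zn - a_graphene) / a_graphene
print(f"mismatch ~ {mismatch:.1%}")  # ~8%, small enough to template epitaxy
```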

855 citations


Book
02 Dec 2019
TL;DR: In this book, the authors present tests for univariate and multivariate normality, including plots, probability plots and regression tests, moment-based tests, and tests with censored data, as well as robust estimation of location and scale.
Abstract: 1. Introduction Part 1: Testing for Univariate Normality 2. Plots, Probability Plots and Regression Tests 3. Test Using Moments 4. Other Tests for Univariate Normality 5. Goodness of Fit Tests 6. Tests for Outliers 7. Power Comparisons for Univariate Tests for Normality 8. Testing for Normality with Censored Data Part 2: Testing for Multivariate Normality 9. Assessing Multivariate Normality 10. Testing for Multivariate Outliers Part 3: Additional Topics 11. Testing for Normal Mixtures 12. Robust Estimation of Location and Scale 13. Computational Issues
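Chapter 3 ("Tests Using Moments") covers statistics built from sample skewness and kurtosis. As a hedged illustration of that family (this is a generic Jarque-Bera-style sketch, not code from the book), a moment-based normality statistic can be written in pure Python:

```python
import random

def jarque_bera(xs):
    """Moment-based normality statistic: JB = n/6 * (S^2 + (K-3)^2 / 4).

    S is the sample skewness and K the sample kurtosis; under
    normality JB is roughly chi-square with 2 degrees of freedom,
    so large values indicate departure from normality.
    """
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

rng = random.Random(0)
normal_sample = [rng.gauss(0, 1) for _ in range(500)]
uniform_sample = [rng.uniform(-1, 1) for _ in range(500)]
# A uniform sample (kurtosis ~1.8) scores far higher than a normal one.
print(jarque_bera(normal_sample) < jarque_bera(uniform_sample))
```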

768 citations


Journal ArticleDOI
TL;DR: In this paper, the mass and radius of the isolated 205.53 Hz millisecond pulsar PSR J0030+0451 were estimated using a Bayesian inference approach to analyze its energy-dependent thermal X-ray waveform, which was observed using the Neutron Star Interior Composition Explorer (NICER).
Abstract: Neutron stars are not only of astrophysical interest, but are also of great interest to nuclear physicists because their attributes can be used to determine the properties of the dense matter in their cores. One of the most informative approaches for determining the equation of state (EoS) of this dense matter is to measure both a star’s equatorial circumferential radius R_e and its gravitational mass M. Here we report estimates of the mass and radius of the isolated 205.53 Hz millisecond pulsar PSR J0030+0451 obtained using a Bayesian inference approach to analyze its energy-dependent thermal X-ray waveform, which was observed using the Neutron Star Interior Composition Explorer (NICER). This approach is thought to be less subject to systematic errors than other approaches for estimating neutron star radii. We explored a variety of emission patterns on the stellar surface. Our best-fit model has three oval, uniform-temperature emitting spots and provides an excellent description of the pulse waveform observed using NICER. The radius and mass estimates given by this model are R_e = 13.02 (+1.24, −1.06) km and M = 1.44 (+0.15, −0.14) M⊙ (68%). The independent analysis reported in the companion paper by Riley et al. explores different emitting spot models, but finds spot shapes and locations and estimates of R_e and M that are consistent with those found in this work. We show that our measurements of R_e and M for PSR J0030+0451 improve the astrophysical constraints on the EoS of cold, catalyzed matter above nuclear saturation density.

758 citations


Journal ArticleDOI
TL;DR: In this paper, the mass and equatorial radius of the millisecond pulsar PSR J0030+0451 were estimated based on a relativistic ray-tracing of thermal emission from hot regions of the pulsar surface.
Abstract: We report on Bayesian parameter estimation of the mass and equatorial radius of the millisecond pulsar PSR J0030+0451, conditional on pulse-profile modeling of Neutron Star Interior Composition Explorer X-ray spectral-timing event data. We perform relativistic ray-tracing of thermal emission from hot regions of the pulsar’s surface. We assume two distinct hot regions based on two clear pulsed components in the phase-folded pulse-profile data; we explore a number of forms (morphologies and topologies) for each hot region, inferring their parameters in addition to the stellar mass and radius. For the family of models considered, the evidence (prior predictive probability of the data) strongly favors a model that permits both hot regions to be located in the same rotational hemisphere. Models wherein both hot regions are assumed to be simply connected circular single-temperature spots, in particular those where the spots are assumed to be reflection-symmetric with respect to the stellar origin, are strongly disfavored. For the inferred configuration, one hot region subtends an angular extent of only a few degrees (in spherical coordinates with origin at the stellar center) and we are insensitive to other structural details; the second hot region is far more azimuthally extended in the form of a narrow arc, thus requiring a larger number of parameters to describe. The inferred mass M and equatorial radius R_eq are, respectively, 1.34 (+0.15, −0.16) M⊙ and 12.71 (+1.14, −1.19) km, while the compactness GM/R_eq c² = 0.156 (+0.008, −0.010) is more tightly constrained; the credible interval bounds reported here are approximately the 16% and 84% quantiles in marginal posterior mass.

737 citations


Journal ArticleDOI
TL;DR: In this article, the mass and radius of the isolated 205.53 Hz millisecond pulsar PSR J0030+0451 were estimated using a Bayesian inference approach to analyze its energy-dependent thermal X-ray waveform.
Abstract: Neutron stars are not only of astrophysical interest, but are also of great interest to nuclear physicists, because their attributes can be used to determine the properties of the dense matter in their cores. One of the most informative approaches for determining the equation of state of this dense matter is to measure both a star's equatorial circumferential radius $R_e$ and its gravitational mass $M$. Here we report estimates of the mass and radius of the isolated 205.53 Hz millisecond pulsar PSR J0030+0451 obtained using a Bayesian inference approach to analyze its energy-dependent thermal X-ray waveform, which was observed using the Neutron Star Interior Composition Explorer (NICER). This approach is thought to be less subject to systematic errors than other approaches for estimating neutron star radii. We explored a variety of emission patterns on the stellar surface. Our best-fit model has three oval, uniform-temperature emitting spots and provides an excellent description of the pulse waveform observed using NICER. The radius and mass estimates given by this model are $R_e = 13.02^{+1.24}_{-1.06}$ km and $M = 1.44^{+0.15}_{-0.14}\ M_\odot$ (68%). The independent analysis reported in the companion paper by Riley et al. (2019) explores different emitting spot models, but finds spot shapes and locations and estimates of $R_e$ and $M$ that are consistent with those found in this work. We show that our measurements of $R_e$ and $M$ for PSR J0030$+$0451 improve the astrophysical constraints on the equation of state of cold, catalyzed matter above nuclear saturation density.

586 citations


Journal ArticleDOI
TL;DR: In this article, the mass and equatorial radius of the millisecond pulsar PSR J0030$+$0451 were estimated from NICER X-ray spectral-timing event data.
Abstract: We report on Bayesian parameter estimation of the mass and equatorial radius of the millisecond pulsar PSR J0030$+$0451, conditional on pulse-profile modeling of Neutron Star Interior Composition Explorer (NICER) X-ray spectral-timing event data. We perform relativistic ray-tracing of thermal emission from hot regions of the pulsar's surface. We assume two distinct hot regions based on two clear pulsed components in the phase-folded pulse-profile data; we explore a number of forms (morphologies and topologies) for each hot region, inferring their parameters in addition to the stellar mass and radius. For the family of models considered, the evidence (prior predictive probability of the data) strongly favors a model that permits both hot regions to be located in the same rotational hemisphere. Models wherein both hot regions are assumed to be simply-connected circular single-temperature spots, in particular those where the spots are assumed to be reflection-symmetric with respect to the stellar origin, are strongly disfavored. For the inferred configuration, one hot region subtends an angular extent of only a few degrees (in spherical coordinates with origin at the stellar center) and we are insensitive to other structural details; the second hot region is far more azimuthally extended in the form of a narrow arc, thus requiring a larger number of parameters to describe. The inferred mass $M$ and equatorial radius $R_\mathrm{eq}$ are, respectively, $1.34_{-0.16}^{+0.15}$ M$_{\odot}$ and $12.71_{-1.19}^{+1.14}$ km, whilst the compactness $GM/R_\mathrm{eq}c^2 = 0.156_{-0.010}^{+0.008}$ is more tightly constrained; the credible interval bounds reported here are approximately the $16\%$ and $84\%$ quantiles in marginal posterior mass.
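As a sanity check, the quoted compactness can be recomputed from the inferred mass and radius using standard physical constants (constant values assumed here, not from the paper):

```python
# Consistency check: compactness GM / (R_eq c^2) from the inferred
# mass and equatorial radius quoted in the abstract.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

M = 1.34 * M_sun    # inferred pulsar mass
R_eq = 12.71e3      # inferred equatorial radius, m

compactness = G * M / (R_eq * c**2)
print(f"{compactness:.3f}")  # 0.156, matching the quoted value
```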

541 citations


Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4  +1491 moreInstitutions (239)
TL;DR: In this article, the authors present the second volume of the Future Circular Collider Conceptual Design Report, devoted to the electron-positron collider FCC-ee, and present the accelerator design, performance reach, a staged operation scenario, the underlying technologies, civil engineering, technical infrastructure, and an implementation plan.
Abstract: In response to the 2013 Update of the European Strategy for Particle Physics, the Future Circular Collider (FCC) study was launched, as an international collaboration hosted by CERN. This study covers a highest-luminosity high-energy lepton collider (FCC-ee) and an energy-frontier hadron collider (FCC-hh), which could, successively, be installed in the same 100 km tunnel. The scientific capabilities of the integrated FCC programme would serve the worldwide community throughout the 21st century. The FCC study also investigates an LHC energy upgrade, using FCC-hh technology. This document constitutes the second volume of the FCC Conceptual Design Report, devoted to the electron-positron collider FCC-ee. After summarizing the physics discovery opportunities, it presents the accelerator design, performance reach, a staged operation scenario, the underlying technologies, civil engineering, technical infrastructure, and an implementation plan. FCC-ee can be built with today’s technology. Most of the FCC-ee infrastructure could be reused for FCC-hh. Combining concepts from past and present lepton colliders and adding a few novel elements, the FCC-ee design promises outstandingly high luminosity. This will make the FCC-ee a unique precision instrument to study the heaviest known particles (Z, W and H bosons and the top quark), offering great direct and indirect sensitivity to new physics.

526 citations


Journal ArticleDOI
TL;DR: A new class of single-atom nanozymes with atomically dispersed enzyme-like active sites in nanomaterials is discovered that significantly enhances catalytic performance, and the underlying mechanism is uncovered.
Abstract: Conventional nanozyme technologies face formidable challenges of intricate size-, composition-, and facet-dependent catalysis and inherently low active site density. We discovered a new class of single-atom nanozymes with atomically dispersed enzyme-like active sites in nanomaterials, which significantly enhanced catalytic performance, and uncovered the underlying mechanism. With oxidase catalysis as a model reaction, experimental studies and theoretical calculations revealed that single-atom nanozymes with carbon nanoframe–confined FeN5 active centers (FeN5 SA/CNF) catalytically behaved like the axial ligand–coordinated heme of cytochrome P450. The definite active moieties and crucial synergistic effects endow FeN5 SA/CNF with a clear electron push-effect mechanism, as well as the highest oxidase-like activity among other nanozymes (the rate constant is 70 times higher than that of commercial Pt/C) and versatile antibacterial applications. These suggest that the single-atom nanozymes have great potential to become the next-generation nanozymes.

486 citations


Journal ArticleDOI
TL;DR: Editors of respiratory, sleep, and critical care journals publish guidance for authors on the control of confounding and the reporting of results in causal inference studies.
Abstract: Control of Confounding and Reporting of Results in Causal Inference Studies: Guidance for Authors from Editors of Respiratory, Sleep, and Critical Care Journals. David J. Lederer, Scott C. Bell, Richard D. Branson, James D. Chalmers, Rachel Marshall, David M. Maslove, David E. Ost, Naresh M. Punjabi, Michael Schatz, Alan R. Smyth, Paul W. Stewart, Samy Suissa, and colleagues, writing from the editorial boards of respiratory, sleep, and critical care journals.

Posted Content
TL;DR: This paper reviews deep SOD algorithms from different perspectives, including network architecture, level of supervision, learning paradigm, and object-/instance-level detection, and looks into the generalization and difficulty of existing SOD datasets.
Abstract: As an essential problem in computer vision, salient object detection (SOD) has attracted an increasing amount of research attention over the years. Recent advances in SOD are predominantly led by deep learning-based solutions (named deep SOD). To enable in-depth understanding of deep SOD, in this paper, we provide a comprehensive survey covering various aspects, ranging from algorithm taxonomy to unsolved issues. In particular, we first review deep SOD algorithms from different perspectives, including network architecture, level of supervision, learning paradigm, and object-/instance-level detection. Following that, we summarize and analyze existing SOD datasets and evaluation metrics. Then, we benchmark a large group of representative SOD models, and provide detailed analyses of the comparison results. Moreover, we study the performance of SOD algorithms under different attribute settings, which has not been thoroughly explored previously, by constructing a novel SOD dataset with rich attribute annotations covering various salient object types, challenging factors, and scene categories. We further analyze, for the first time in the field, the robustness of SOD models to random input perturbations and adversarial attacks. We also look into the generalization and difficulty of existing SOD datasets. Finally, we discuss several open issues of SOD and outline future research directions.
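Surveys like this benchmark SOD models with standard metrics; the simplest is the mean absolute error between a predicted saliency map and the ground-truth mask. A minimal sketch on toy data (illustrative only, not the paper's exact evaluation protocol):

```python
def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and the
    ground-truth mask, both flattened to per-pixel values in [0, 1]."""
    assert len(pred) == len(gt)
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(pred)

# Toy 2x2 example: a confident, mostly correct prediction scores low.
prediction = [0.9, 0.1, 0.8, 0.0]
ground_truth = [1.0, 0.0, 1.0, 0.0]
print(round(mae(prediction, ground_truth), 3))  # 0.1
```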

Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4  +1496 moreInstitutions (238)
TL;DR: In this paper, the authors describe the detailed design and preparation of a construction project for a post-LHC circular energy frontier collider in collaboration with national institutes, laboratories and universities worldwide, and enhanced by a strong participation of industrial partners.
Abstract: Particle physics has arrived at an important moment in its history. The discovery of the Higgs boson, with a mass of 125 GeV, completes the matrix of particles and interactions that has constituted the “Standard Model” for several decades. This model is a consistent and predictive theory, which has so far proven successful at describing all phenomena accessible to collider experiments. However, several experimental facts do require the extension of the Standard Model and explanations are needed for observations such as the abundance of matter over antimatter, the striking evidence for dark matter and the non-zero neutrino masses. Theoretical issues such as the hierarchy problem, and, more generally, the dynamical origin of the Higgs mechanism, do likewise point to the existence of physics beyond the Standard Model. This report contains the description of a novel research infrastructure based on a highest-energy hadron collider with a centre-of-mass collision energy of 100 TeV and an integrated luminosity of at least a factor of 5 larger than the HL-LHC. It will extend the current energy frontier by almost an order of magnitude. The mass reach for direct discovery will extend to several tens of TeV and will allow, for example, the production of new particles whose existence could be indirectly exposed by precision measurements during the preceding e+e– collider phase. This collider will also precisely measure the Higgs self-coupling and thoroughly explore the dynamics of electroweak symmetry breaking at the TeV scale, to elucidate the nature of the electroweak phase transition. WIMPs as thermal dark matter candidates will be discovered or ruled out. As a single project, this particle collider infrastructure will serve the world-wide physics community for about 25 years and, in combination with a lepton collider (see FCC conceptual design report volume 2), will provide a research tool until the end of the 21st century.
Collision energies beyond 100 TeV can be considered when using high-temperature superconductors. The European Strategy for Particle Physics (ESPP) update 2013 stated “To stay at the forefront of particle physics, Europe needs to be in a position to propose an ambitious post-LHC accelerator project at CERN by the time of the next Strategy update”. The FCC study has implemented the ESPP recommendation by developing a long-term vision for an “accelerator project in a global context”. This document describes the detailed design and preparation of a construction project for a post-LHC circular energy frontier collider “in collaboration with national institutes, laboratories and universities worldwide”, and enhanced by a strong participation of industrial partners. Now, a coordinated preparation effort can be based on a core of an ever-growing consortium of already more than 135 institutes worldwide. The technology for constructing a high-energy circular hadron collider can be brought to the technology readiness level required for construction within the coming ten years through a focused R&D programme. The FCC-hh concept comprises, in the baseline scenario, a power-saving, low-temperature superconducting magnet system based on an evolution of the Nb3Sn technology pioneered at the HL-LHC, an energy-efficient cryogenic refrigeration infrastructure based on a neon-helium (Nelium) light gas mixture, a high-reliability and low loss cryogen distribution infrastructure based on Invar, high-power distributed beam transfer using superconducting elements and local magnet energy recovery and re-use technologies that are already gradually introduced at other CERN accelerators.
On a longer timescale, high-temperature superconductors can be developed together with industrial partners to achieve an even more energy efficient particle collider or to reach even higher collision energies.The re-use of the LHC and its injector chain, which also serve for a concurrently running physics programme, is an essential lever to come to an overall sustainable research infrastructure at the energy frontier. Strategic R&D for FCC-hh aims at minimising construction cost and energy consumption, while maximising the socio-economic impact. It will mitigate technology-related risks and ensure that industry can benefit from an acceptable utility. Concerning the implementation, a preparatory phase of about eight years is both necessary and adequate to establish the project governance and organisation structures, to build the international machine and experiment consortia, to develop a territorial implantation plan in agreement with the host-states’ requirements, to optimise the disposal of land and underground volumes, and to prepare the civil engineering project. Such a large-scale, international fundamental research infrastructure, tightly involving industrial partners and providing training at all education levels, will be a strong motor of economic and societal development in all participating nations. The FCC study has implemented a set of actions towards a coherent vision for the world-wide high-energy and particle physics community, providing a collaborative framework for topically complementary and geographically well-balanced contributions. This conceptual design report lays the foundation for a subsequent infrastructure preparatory and technical design phase.
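The headline parameters (100 TeV collisions in a ~100 km tunnel) are linked by the standard magnetic-rigidity rule of thumb p[TeV/c] ≈ 0.3 · B[T] · ρ[km]. A back-of-envelope sketch, assuming the ~16 T Nb3Sn dipole field commonly quoted as the FCC-hh baseline (the 16 T figure is an assumption here, not stated in this abstract):

```python
# Back-of-envelope: the rigidity relation p[TeV/c] ~ 0.3 * B[T] * rho[km]
# ties beam momentum, dipole field, and bending radius together.
def bending_radius_km(p_tev, b_tesla):
    return p_tev / (0.3 * b_tesla)

# 100 TeV centre-of-mass means 50 TeV per beam; ~16 T is the assumed
# Nb3Sn baseline dipole field for FCC-hh.
rho = bending_radius_km(50.0, 16.0)
print(f"rho ~ {rho:.1f} km")  # ~10.4 km of bending, compatible with a 100 km ring
```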

Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4  +1501 moreInstitutions (239)
TL;DR: In this article, the physics opportunities of the Future Circular Collider (FCC) were reviewed, covering its e+e-, pp, ep and heavy ion programmes, and the measurement capabilities of each FCC component, addressing the study of electroweak, Higgs and strong interactions.
Abstract: We review the physics opportunities of the Future Circular Collider, covering its e+e-, pp, ep and heavy ion programmes. We describe the measurement capabilities of each FCC component, addressing the study of electroweak, Higgs and strong interactions, the top quark and flavour, as well as phenomena beyond the Standard Model. We highlight the synergy and complementarity of the different colliders, which will contribute to a uniquely coherent and ambitious research programme, providing an unmatchable combination of precision and sensitivity to new physics.

Journal ArticleDOI
TL;DR: This Review draws together developments in neuroimaging techniques that led to the unanticipated finding that dopaminergic dysfunction in schizophrenia is greatest within nigrostriatal pathways, implicating the dorsal striatum in the pathophysiology and calling into question the mesolimbic theory.

Journal ArticleDOI
TL;DR: It is suggested that stringent genetic validation of the mechanism of action of cancer drugs in the preclinical setting may decrease the number of therapies tested in human patients that fail to provide any clinical benefit.
Abstract: Ninety-seven percent of drug-indication pairs that are tested in clinical trials in oncology never advance to receive U.S. Food and Drug Administration approval. While lack of efficacy and dose-limiting toxicities are the most common causes of trial failure, the reason(s) why so many new drugs encounter these problems is not well understood. Using CRISPR-Cas9 mutagenesis, we investigated a set of cancer drugs and drug targets in various stages of clinical testing. We show that-contrary to previous reports obtained predominantly with RNA interference and small-molecule inhibitors-the proteins ostensibly targeted by these drugs are nonessential for cancer cell proliferation. Moreover, the efficacy of each drug that we tested was unaffected by the loss of its putative target, indicating that these compounds kill cells via off-target effects. By applying a genetic target-deconvolution strategy, we found that the mischaracterized anticancer agent OTS964 is actually a potent inhibitor of the cyclin-dependent kinase CDK11 and that multiple cancer types are addicted to CDK11 expression. We suggest that stringent genetic validation of the mechanism of action of cancer drugs in the preclinical setting may decrease the number of therapies tested in human patients that fail to provide any clinical benefit.

Journal ArticleDOI
TL;DR: Nielsen et al. as discussed by the authors presented a statistical analysis of the first 300 stars observed by the Gemini Planet Imager Exoplanet Survey (GPIES) to infer the underlying distributions of substellar companions with respect to their mass, semimajor axis, and host stellar mass.
Abstract: Author(s): Nielsen, EL; De Rosa, RJ; Macintosh, B; Wang, JJ; Ruffio, JB; Chiang, E; Marley, MS; Saumon, D; Savransky, D; Mark Ammons, S; Bailey, VP; Barman, T; Blain, C; Bulger, J; Burrows, A; Chilcote, J; Cotten, T; Czekala, I; Doyon, R; Duchene, G; Esposito, TM; Fabrycky, D; Fitzgerald, MP; Follette, KB; Fortney, JJ; Gerard, BL; Goodsell, SJ; Graham, JR; Greenbaum, AZ; Hibon, P; Hinkley, S; Hirsch, LA; Hom, J; Hung, LW; Ilene Dawson, R; Ingraham, P; Kalas, P; Konopacky, Q; Larkin, JE; Lee, EJ; Lin, JW; Maire, J; Marchis, F; Marois, C; Metchev, S; Millar-Blanchaer, MA; Morzinski, KM; Oppenheimer, R; Palmer, D; Patience, J; Perrin, M; Poyneer, L; Pueyo, L; Rafikov, RR; Rajan, A; Rameau, J; Rantakyro, FT; Ren, B; Schneider, AC; Sivaramakrishnan, A; Song, I; Soummer, R; Tallis, M; Thomas, S; Ward-Duong, K; Wolff, S | Abstract: We present a statistical analysis of the first 300 stars observed by the Gemini Planet Imager Exoplanet Survey. This subsample includes six detected planets and three brown dwarfs; from these detections and our contrast curves we infer the underlying distributions of substellar companions with respect to their mass, semimajor axis, and host stellar mass. We uncover a strong correlation between planet occurrence rate and host star mass, with stars M∗ ≳ 1.5 M⊙ more likely to host planets with masses between 2 and 13 MJup and semimajor axes of 3-100 au at 99.92% confidence. We fit a double power-law model in planet mass (m) and semimajor axis (a) for planet populations around high-mass stars (M∗ ≳ 1.5 M⊙) of the form d²N/(dm da) ∝ m^α a^β, finding α = -2.4 +0.8 and β = -2.0 +0.5, and an integrated occurrence rate of % between 5-13 MJup and 10-100 au. A significantly lower occurrence rate is obtained for brown dwarfs around all stars, with % of stars hosting a brown dwarf companion between 13-80 MJup and 10-100 au.
Brown dwarfs also appear to be distributed differently in mass and semimajor axis compared to giant planets; whereas giant planets follow a bottom-heavy mass distribution and favor smaller semimajor axes, brown dwarfs exhibit just the opposite behaviors. Comparing to studies of short-period giant planets from the radial velocity method, our results are consistent with a peak in occurrence of giant planets between ∼1 and 10 au. We discuss how these trends, including the preference of giant planets for high-mass host stars, point to formation of giant planets by core/pebble accretion, and formation of brown dwarfs by gravitational instability.

Journal ArticleDOI
TL;DR: It is found that obesity results in the accumulation of senescent glial cells in proximity to the lateral ventricle, a region in which adult neurogenesis occurs, and that senolytics are a potential new therapeutic avenue for treating neuropsychiatric disorders.

Journal ArticleDOI
TL;DR: There is increasing evidence that even brief durations of systolic arterial pressure <100 mm Hg and of low mean arterial pressure are harmful during non-cardiac surgery, and there is insufficient evidence to recommend a general upper limit of arterial pressure at which therapy should be initiated.
Abstract: Background A multidisciplinary international working subgroup of the third Perioperative Quality Initiative consensus meeting appraised the evidence on the influence of preoperative arterial blood pressure and community cardiovascular medications on perioperative risk. Methods A modified Delphi technique was used, evaluating papers published in MEDLINE on associations between preoperative numerical arterial pressure values or cardiovascular medications and perioperative outcomes. The strength of the recommendations was graded by National Institute for Health and Care Excellence guidelines. Results Significant heterogeneity in study design, including arterial pressure measures and perioperative outcomes, hampered the comparison of studies. Nonetheless, consensus recommendations were that (i) preoperative arterial pressure measures may be used to define targets for perioperative management; (ii) elective surgery should not be cancelled based solely upon a preoperative arterial pressure value; (iii) there is insufficient evidence to support lowering arterial pressure in the immediate preoperative period to minimise perioperative risk; and (iv) there is insufficient evidence that any one measure of arterial pressure (systolic, diastolic, mean, or pulse) is better than any other for risk prediction of adverse perioperative events. Conclusions Future research should define which preoperative arterial pressure values best correlate with adverse outcomes, and whether modifying arterial pressure in the preoperative setting will change the perioperative morbidity or mortality. Additional research should define optimum strategies for continuation or discontinuation of preoperative cardiovascular medications.


Journal ArticleDOI
04 Mar 2019
TL;DR: Recent studies of global changes in splicing in cancer, splicing regulation of mitogenic pathways critical in cancer transformation, and efforts to therapeutically target splicing suggest that alterations in splicing are drivers of tumorigenesis.
Abstract: RNA splicing, the enzymatic process of removing segments of premature RNA to produce mature RNA, is a key mediator of proteome diversity and regulator of gene expression. Increased systematic sequencing of the genome and transcriptome of cancers has identified a variety of means by which RNA splicing is altered in cancer relative to normal cells. These findings, in combination with the discovery of recurrent change-of-function mutations in splicing factors in a variety of cancers, suggest that alterations in splicing are drivers of tumorigenesis. Greater characterization of altered splicing in cancer parallels increasing efforts to pharmacologically perturb splicing and early-phase clinical development of small molecules that disrupt splicing in patients with cancer. Here we review recent studies of global changes in splicing in cancer, splicing regulation of mitogenic pathways critical in cancer transformation, and efforts to therapeutically target splicing in cancer.

Journal ArticleDOI
TL;DR: An energy-efficient dynamic offloading and resource scheduling (eDors) policy is provided to reduce energy consumption and shorten application completion time; the eDors algorithm can effectively reduce the energy-efficiency cost (EEC) by optimally adjusting the CPU clock frequency of SMDs in local computing and by adapting the transmission power to wireless channel conditions in cloud computing.
Abstract: Mobile cloud computing (MCC) as an emerging and prospective computing paradigm, can significantly enhance computation capability and save energy for smart mobile devices (SMDs) by offloading computation-intensive tasks from resource-constrained SMDs onto resource-rich cloud. However, how to achieve energy-efficient computation offloading under hard constraint for application completion time remains a challenge. To address such a challenge, in this paper, we provide an energy-efficient dynamic offloading and resource scheduling (eDors) policy to reduce energy consumption and shorten application completion time. We first formulate the eDors problem into an energy-efficiency cost (EEC) minimization problem while satisfying task-dependency requirement and completion time deadline constraint. We then propose a distributed eDors algorithm consisting of three subalgorithms of computation offloading selection, clock frequency control, and transmission power allocation. Next, we show that computation offloading selection depends on not only the computing workload of a task, but also the maximum completion time of its immediate predecessors and the clock frequency and transmission power of the mobile device. Finally, we provide experimental results in a real testbed and demonstrate that the eDors algorithm can effectively reduce EEC by optimally adjusting CPU clock frequency of SMDs in local computing, and adapting the transmission power for wireless channel conditions in cloud computing.
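As a rough illustration of the per-task trade-off that the eDors policy formalizes, the sketch below compares a weighted energy/time cost of local execution against offloading over a wireless link with a Shannon-rate model. This is a deliberately simplified toy, not the paper's algorithm: the constants, equal cost weights, and function names are assumptions, and task dependencies and cloud execution time are ignored:

```python
import math

def local_cost(cycles, f_hz, kappa=1e-26, w_e=0.5, w_t=0.5):
    """Weighted energy-efficiency cost of local execution: dynamic CPU
    energy of kappa * f^2 per cycle, completion time of cycles / f."""
    energy = kappa * cycles * f_hz ** 2
    time = cycles / f_hz
    return w_e * energy + w_t * time

def offload_cost(data_bits, p_tx, bandwidth_hz, gain, noise, w_e=0.5, w_t=0.5):
    """Cost of offloading over a link with Shannon rate
    r = B * log2(1 + p * h / N0); cloud execution time is ignored."""
    rate = bandwidth_hz * math.log2(1.0 + p_tx * gain / noise)
    time = data_bits / rate
    return w_e * p_tx * time + w_t * time

def should_offload(cycles, f_hz, data_bits, p_tx, bandwidth_hz, gain, noise):
    """Greedy per-task choice: offload when the wireless path is cheaper."""
    return offload_cost(data_bits, p_tx, bandwidth_hz, gain, noise) < local_cost(cycles, f_hz)
```

The comparison makes the paper's qualitative point visible: the decision depends jointly on the task's computing workload, the device's clock frequency, and the transmission power and channel quality.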

Journal ArticleDOI
TL;DR: In this paper, a Bayesian framework for incorporating selection effects into population analyses is proposed, which allows for both measurement uncertainty in individual measurements and selection biases on the population of measurements, and shows how to extract the parameters of the underlying distribution based on a set of observations sampled from this distribution.
Abstract: We derive a Bayesian framework for incorporating selection effects into population analyses. We allow for both measurement uncertainty in individual measurements and, crucially, for selection biases on the population of measurements, and show how to extract the parameters of the underlying distribution based on a set of observations sampled from this distribution. We illustrate the performance of this framework with an example from gravitational-wave astrophysics, demonstrating that the mass ratio distribution of merging compact-object binaries can be extracted from Malmquist-biased observations with substantial measurement uncertainty.
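The selection-effect correction in a framework of this kind amounts to dividing each observation's population likelihood by the detectable fraction ξ(λ) of the modeled population. A toy sketch with a Gaussian population, a hard detection threshold, and no individual measurement uncertainty (all simplifications relative to the paper; the names are hypothetical):

```python
import math
import random

random.seed(0)

def detection_prob(x, threshold=1.0):
    """Toy Malmquist-like selection: only sources above threshold are detected."""
    return 1.0 if x > threshold else 0.0

def selection_fraction(mu, sigma, n_mc=100_000):
    """Monte Carlo estimate of xi(mu, sigma) = E[P_det(x)] under the
    Gaussian population model x ~ N(mu, sigma)."""
    hits = sum(detection_prob(random.gauss(mu, sigma)) for _ in range(n_mc))
    return hits / n_mc

def log_likelihood(observations, mu, sigma):
    """Selection-corrected population log-likelihood:
    sum_i log N(x_i | mu, sigma) - N_obs * log xi(mu, sigma)."""
    xi = selection_fraction(mu, sigma)
    ll = -len(observations) * math.log(xi)
    for x in observations:
        ll += (-0.5 * ((x - mu) / sigma) ** 2
               - math.log(sigma * math.sqrt(2.0 * math.pi)))
    return ll
```

Without the -N log ξ term, the inferred population parameters would be biased toward the over-represented (detectable) part of the distribution, which is exactly the failure mode the framework is designed to remove.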

Journal ArticleDOI
TL;DR: Analysis of gravitational-wave data from the first LIGO detection of a binary black-hole merger finds evidence of the fundamental quasinormal mode and at least one overtone associated with the dominant angular mode, and supports the hypothesis that the GW150914 merger produced a Kerr black hole, as predicted by general relativity.
Abstract: We analyze gravitational-wave data from the first LIGO detection of a binary black-hole merger (GW150914) in search of the ringdown of the remnant black hole. Using observations beginning at the peak of the signal, we find evidence of the fundamental quasinormal mode and at least one overtone, both associated with the dominant angular mode (l=m=2), with 3.6σ confidence. A ringdown model including overtones allows us to measure the final mass and spin magnitude of the remnant exclusively from postinspiral data, obtaining an estimate in agreement with the values inferred from the full signal. The mass and spin values we measure from the ringdown agree with those obtained using solely the fundamental mode at a later time, but have smaller uncertainties. Agreement between the postinspiral measurements of mass and spin and those using the full waveform supports the hypothesis that the GW150914 merger produced a Kerr black hole, as predicted by general relativity, and provides a test of the no-hair theorem at the ∼10% level. An independent measurement of the frequency of the first overtone yields agreement with the no-hair hypothesis at the ∼20% level. As the detector sensitivity improves and the detected population of black-hole mergers grows, we can expect that using overtones will provide even stronger tests.
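A ringdown model of the kind fitted here is a sum of exponentially damped sinusoids, one per overtone: h(t) = Σ_n A_n e^(-t/τ_n) cos(2π f_n t + φ_n). The sketch below evaluates such a superposition; the amplitudes, phases, frequencies, and damping times are illustrative order-of-magnitude numbers, not the measured GW150914 values:

```python
import math

def ringdown_waveform(t, modes):
    """h(t) = sum_n A_n * exp(-t / tau_n) * cos(2*pi*f_n*t + phi_n), t >= 0.
    Each mode is a tuple (amplitude, frequency_hz, damping_time_s, phase_rad)."""
    return sum(A * math.exp(-t / tau) * math.cos(2.0 * math.pi * f * t + phi)
               for (A, f, tau, phi) in modes)

# Illustrative numbers of roughly the right scale for a GW150914-like remnant:
# a fundamental l=m=2, n=0 mode near 250 Hz with a few-ms damping time, plus
# a faster-damped first overtone. These are NOT the measured values.
modes = [
    (1.0, 250.0, 4.0e-3, 0.0),   # n = 0 (fundamental)
    (0.8, 240.0, 1.3e-3, 0.5),   # n = 1 (first overtone, decays much faster)
]

h_peak = ringdown_waveform(0.0, modes)
```

Because each overtone decays on its own timescale, including them lets a fit start at the signal peak rather than waiting for the fundamental mode to dominate, which is why the overtone analysis recovers mass and spin from postinspiral data alone with smaller uncertainties.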

Journal ArticleDOI
Georges Aad1, Alexander Kupco2, Samuel Webb3, Timo Dreyer4  +3380 moreInstitutions (206)
TL;DR: In this article, a search for high-mass dielectron and dimuon resonances in the mass range of 250 GeV to 6 TeV was performed at the Large Hadron Collider.

Journal ArticleDOI
TL;DR: An overview of the progress and remaining limitations in the understanding of the mechanistic foundations of allostery gained from computational and experimental analyses of real protein systems and model systems is provided.

Journal ArticleDOI
TL;DR: The first intercomparison project of global storm-resolving models, DYAMOND, is presented by the authors: nine models submitted simulation output for a 40-day (1 August-10 September 2016) intercomparison period, and eight of these employed a tiling of the sphere that was uniformly finer than 5 km.
Abstract: A review of the experimental protocol and motivation for DYAMOND, the first intercomparison project of global storm-resolving models, is presented. Nine models submitted simulation output for a 40-day (1 August–10 September 2016) intercomparison period. Eight of these employed a tiling of the sphere that was uniformly less than 5 km. By resolving the transient dynamics of convective storms in the tropics, global storm-resolving models remove the need to parameterize tropical deep convection, providing a fundamentally more sound representation of the climate system and a more natural link to commensurately high-resolution data from satellite-borne sensors. The models and some basic characteristics of their output are described in more detail, as is the availability and planned use of this output for future scientific study. Tropically and zonally averaged energy budgets, precipitable water distributions, and precipitation from the model ensemble are evaluated, as is their representation of tropical cyclones and the predictability of column water vapor, the latter being important for tropical weather.
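Diagnostics such as the tropically and zonally averaged budgets evaluated above reduce to area-weighted means over the model grids. A minimal sketch for a regular latitude-longitude grid (the function names and the 30° tropical band are assumptions, not DYAMOND's protocol):

```python
import math

def zonal_mean(field):
    """Zonal mean of a 2-D field indexed as field[lat][lon]:
    average over longitude at each latitude."""
    return [sum(row) / len(row) for row in field]

def tropical_mean(field, lats_deg, band=30.0):
    """Cos-latitude (area) weighted mean of the zonal means over the
    tropical band |lat| <= band degrees."""
    num = den = 0.0
    for row, lat in zip(field, lats_deg):
        if abs(lat) <= band:
            w = math.cos(math.radians(lat))
            num += w * sum(row) / len(row)
            den += w
    return num / den
```

The cos-latitude weight accounts for the shrinking area of grid cells toward the poles; omitting it would overweight high latitudes in any meridional average.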

Journal ArticleDOI
Morad Aaboud, Georges Aad1, Brad Abbott2, Dale Charles Abbott3  +2936 moreInstitutions (198)
TL;DR: An exclusion limit on the H→invisible branching ratio of 0.26(0.17_{-0.05}^{+0.07}) at 95% confidence level is observed (expected) in combination with the results at sqrt[s]=7 and 8 TeV.
Abstract: Dark matter particles, if sufficiently light, may be produced in decays of the Higgs boson. This Letter presents a statistical combination of searches for H→invisible decays where H is produced according to the standard model via vector boson fusion, Z(ll)H, and W/Z(had)H, all performed with the ATLAS detector using 36.1 fb^{-1} of pp collisions at a center-of-mass energy of sqrt[s]=13 TeV at the LHC. In combination with the results at sqrt[s]=7 and 8 TeV, an exclusion limit on the H→invisible branching ratio of 0.26(0.17_{-0.05}^{+0.07}) at 95% confidence level is observed (expected).

Journal ArticleDOI
TL;DR: The authors provide an overview of the rapidly developing field of climate change vulnerability assessment (CCVA) and describe key concepts, terms, steps and considerations, and stress the importance of identifying the full range of pressures, impacts and their associated mechanisms that species face and using this as a basis for selecting the appropriate assessment approaches for quantifying vulnerability.
Abstract: Assessing species' vulnerability to climate change is a prerequisite for developing effective strategies to conserve them. The last three decades have seen exponential growth in the number of studies evaluating how, how much, why, when, and where species will be impacted by climate change. We provide an overview of the rapidly developing field of climate change vulnerability assessment (CCVA) and describe key concepts, terms, steps and considerations. We stress the importance of identifying the full range of pressures, impacts and their associated mechanisms that species face and using this as a basis for selecting the appropriate assessment approaches for quantifying vulnerability. We outline four CCVA assessment approaches, namely trait-based, correlative, mechanistic and combined approaches and discuss their use. Since any assessment can deliver unreliable or even misleading results when incorrect data and parameters are applied, we discuss finding, selecting, and applying input data and provide examples of open-access resources. Because rare, small-range, and declining-range species are often of particular conservation concern while also posing significant challenges for CCVA, we describe alternative ways to assess them. We also describe how CCVAs can be used to inform IUCN Red List assessments of extinction risk. Finally, we suggest future directions in this field and propose areas where research efforts may be particularly valuable.

Journal ArticleDOI
Georges Aad1, Alexander Kupco2, Samuel Webb3, Timo Dreyer4  +2962 moreInstitutions (195)
TL;DR: In this article, an improved energy clustering algorithm is introduced, and its implications for the measurement and identification of prompt electrons and photons are discussed in detail, along with the corrections and calibrations that affect performance, such as energy calibration and identification and isolation efficiencies.
Abstract: This paper describes the reconstruction of electrons and photons with the ATLAS detector, employed for measurements and searches exploiting the complete LHC Run 2 dataset. An improved energy clustering algorithm is introduced, and its implications for the measurement and identification of prompt electrons and photons are discussed in detail. Corrections and calibrations that affect performance, including energy calibration, identification and isolation efficiencies, and the measurement of the charge of reconstructed electron candidates are determined using up to 81 fb−1 of proton-proton collision data collected at √s=13 TeV between 2015 and 2017.