
Showing papers by "International School for Advanced Studies published in 2019"


Journal ArticleDOI
Peter A. R. Ade1, James E. Aguirre2, Z. Ahmed3, Simone Aiola4 +276 more · Institutions (53)
TL;DR: The Simons Observatory (SO) is a new cosmic microwave background experiment being built on Cerro Toco in Chile, due to begin observations in the early 2020s as mentioned in this paper.
Abstract: The Simons Observatory (SO) is a new cosmic microwave background experiment being built on Cerro Toco in Chile, due to begin observations in the early 2020s. We describe the scientific goals of the experiment, motivate the design, and forecast its performance. SO will measure the temperature and polarization anisotropy of the cosmic microwave background in six frequency bands centered at: 27, 39, 93, 145, 225 and 280 GHz. The initial configuration of SO will have three small-aperture 0.5-m telescopes and one large-aperture 6-m telescope, with a total of 60,000 cryogenic bolometers. Our key science goals are to characterize the primordial perturbations, measure the number of relativistic species and the mass of neutrinos, test for deviations from a cosmological constant, improve our understanding of galaxy evolution, and constrain the duration of reionization. The small aperture telescopes will target the largest angular scales observable from Chile, mapping ≈ 10% of the sky to a white noise level of 2 μK-arcmin in combined 93 and 145 GHz bands, to measure the primordial tensor-to-scalar ratio, r, at a target level of σ(r)=0.003. The large aperture telescope will map ≈ 40% of the sky at arcminute angular resolution to an expected white noise level of 6 μK-arcmin in combined 93 and 145 GHz bands, overlapping with the majority of the Large Synoptic Survey Telescope sky region and partially with the Dark Energy Spectroscopic Instrument. With up to an order of magnitude lower polarization noise than maps from the Planck satellite, the high-resolution sky maps will constrain cosmological parameters derived from the damping tail, gravitational lensing of the microwave background, the primordial bispectrum, and the thermal and kinematic Sunyaev-Zel'dovich effects, and will aid in delensing the large-angle polarization signal to measure the tensor-to-scalar ratio. 
The survey will also provide a legacy catalog of 16,000 galaxy clusters and more than 20,000 extragalactic sources.

1,027 citations


Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4 +1491 more · Institutions (239)
TL;DR: In this article, the authors present the second volume of the Future Circular Collider Conceptual Design Report, devoted to the electron-positron collider FCC-ee, and present the accelerator design, performance reach, a staged operation scenario, the underlying technologies, civil engineering, technical infrastructure, and an implementation plan.
Abstract: In response to the 2013 Update of the European Strategy for Particle Physics, the Future Circular Collider (FCC) study was launched, as an international collaboration hosted by CERN. This study covers a highest-luminosity high-energy lepton collider (FCC-ee) and an energy-frontier hadron collider (FCC-hh), which could, successively, be installed in the same 100 km tunnel. The scientific capabilities of the integrated FCC programme would serve the worldwide community throughout the 21st century. The FCC study also investigates an LHC energy upgrade, using FCC-hh technology. This document constitutes the second volume of the FCC Conceptual Design Report, devoted to the electron-positron collider FCC-ee. After summarizing the physics discovery opportunities, it presents the accelerator design, performance reach, a staged operation scenario, the underlying technologies, civil engineering, technical infrastructure, and an implementation plan. FCC-ee can be built with today’s technology. Most of the FCC-ee infrastructure could be reused for FCC-hh. Combining concepts from past and present lepton colliders and adding a few novel elements, the FCC-ee design promises outstandingly high luminosity. This will make the FCC-ee a unique precision instrument to study the heaviest known particles (Z, W and H bosons and the top quark), offering great direct and indirect sensitivity to new physics.

526 citations


Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4 +1496 more · Institutions (238)
TL;DR: In this paper, the authors describe the detailed design and preparation of a construction project for a post-LHC circular energy frontier collider in collaboration with national institutes, laboratories and universities worldwide, and enhanced by a strong participation of industrial partners.
Abstract: Particle physics has arrived at an important moment of its history. The discovery of the Higgs boson, with a mass of 125 GeV, completes the matrix of particles and interactions that has constituted the “Standard Model” for several decades. This model is a consistent and predictive theory, which has so far proven successful at describing all phenomena accessible to collider experiments. However, several experimental facts require the extension of the Standard Model, and explanations are needed for observations such as the abundance of matter over antimatter, the striking evidence for dark matter and the non-zero neutrino masses. Theoretical issues such as the hierarchy problem and, more generally, the dynamical origin of the Higgs mechanism likewise point to the existence of physics beyond the Standard Model. This report contains the description of a novel research infrastructure based on a highest-energy hadron collider with a centre-of-mass collision energy of 100 TeV and an integrated luminosity at least a factor of 5 larger than that of the HL-LHC. It will extend the current energy frontier by almost an order of magnitude. The mass reach for direct discovery will extend to several tens of TeV and will allow, for example, the production of new particles whose existence could first be exposed indirectly by precision measurements during an earlier e+e– collider phase. This collider will also precisely measure the Higgs self-coupling and thoroughly explore the dynamics of electroweak symmetry breaking at the TeV scale, to elucidate the nature of the electroweak phase transition. WIMPs as thermal dark matter candidates will be discovered, or ruled out. As a single project, this particle collider infrastructure will serve the world-wide physics community for about 25 years and, in combination with a lepton collider (see FCC conceptual design report volume 2), will provide a research tool until the end of the 21st century.
Collision energies beyond 100 TeV can be considered when using high-temperature superconductors. The European Strategy for Particle Physics (ESPP) update 2013 stated “To stay at the forefront of particle physics, Europe needs to be in a position to propose an ambitious post-LHC accelerator project at CERN by the time of the next Strategy update”. The FCC study has implemented the ESPP recommendation by developing a long-term vision for an “accelerator project in a global context”. This document describes the detailed design and preparation of a construction project for a post-LHC circular energy-frontier collider “in collaboration with national institutes, laboratories and universities worldwide”, enhanced by a strong participation of industrial partners. A coordinated preparation effort can now be based on a core of an ever-growing consortium of already more than 135 institutes worldwide. Through a focused R&D programme, the technology for constructing a high-energy circular hadron collider can be brought, within the coming ten years, to the technology readiness level required for construction. In the baseline scenario, the FCC-hh concept comprises a power-saving, low-temperature superconducting magnet system based on an evolution of the Nb3Sn technology pioneered at the HL-LHC; an energy-efficient cryogenic refrigeration infrastructure based on a neon-helium (Nelium) light gas mixture; a high-reliability, low-loss cryogen distribution infrastructure based on Invar; high-power distributed beam transfer using superconducting elements; and local magnet energy recovery and re-use technologies that are already being gradually introduced at other CERN accelerators.
On a longer timescale, high-temperature superconductors can be developed together with industrial partners to achieve an even more energy-efficient particle collider or to reach even higher collision energies. The re-use of the LHC and its injector chain, which also serve a concurrently running physics programme, is an essential lever for arriving at an overall sustainable research infrastructure at the energy frontier. Strategic R&D for FCC-hh aims at minimising construction cost and energy consumption, while maximising the socio-economic impact. It will mitigate technology-related risks and ensure that industry can benefit from an acceptable utility. Concerning the implementation, a preparatory phase of about eight years is both necessary and adequate to establish the project governance and organisation structures, to build the international machine and experiment consortia, to develop a territorial implantation plan in agreement with the host-states' requirements, to optimise the disposal of land and underground volumes, and to prepare the civil engineering project. Such a large-scale, international fundamental research infrastructure, tightly involving industrial partners and providing training at all education levels, will be a strong motor of economic and societal development in all participating nations. The FCC study has implemented a set of actions towards a coherent vision for the world-wide high-energy and particle physics community, providing a collaborative framework for topically complementary and geographically well-balanced contributions. This conceptual design report lays the foundation for a subsequent infrastructure preparatory and technical design phase.

425 citations


Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4 +1501 more · Institutions (239)
TL;DR: In this article, the physics opportunities of the Future Circular Collider (FCC) were reviewed, covering its e+e-, pp, ep and heavy ion programmes, and the measurement capabilities of each FCC component, addressing the study of electroweak, Higgs and strong interactions.
Abstract: We review the physics opportunities of the Future Circular Collider, covering its e+e-, pp, ep and heavy ion programmes. We describe the measurement capabilities of each FCC component, addressing the study of electroweak, Higgs and strong interactions, the top quark and flavour, as well as phenomena beyond the Standard Model. We highlight the synergy and complementarity of the different colliders, which will contribute to a uniquely coherent and ambitious research programme, providing an unmatchable combination of precision and sensitivity to new physics.

407 citations


Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Yashar Akrami4 +213 more · Institutions (66)
TL;DR: The 2018 Planck CMB likelihoods were presented in this paper, following a hybrid approach similar to the 2015 one, with different approximations at low and high multipoles, and implementing several methodological and analysis refinements.
Abstract: This paper describes the 2018 Planck CMB likelihoods, following a hybrid approach similar to the 2015 one, with different approximations at low and high multipoles, and implementing several methodological and analysis refinements. With more realistic simulations, and better correction and modelling of systematics, we can now make full use of the High Frequency Instrument polarization data. The low-multipole 100x143 GHz EE cross-spectrum constrains the reionization optical-depth parameter $\tau$ to better than 15% (in combination with the other low- and high-$\ell$ likelihoods). We also update the 2015 baseline low-$\ell$ joint TEB likelihood based on the Low Frequency Instrument data, which provides a weaker $\tau$ constraint. At high multipoles, a better model of the temperature-to-polarization leakage and corrections for the effective calibrations of the polarization channels (polarization efficiency or PE) allow us to fully use the polarization spectra, improving the constraints on the $\Lambda$CDM parameters by 20 to 30% compared to TT-only constraints. Tests on the modelling of the polarization demonstrate good consistency, with some residual modelling uncertainties, the accuracy of the PE modelling being the main limitation. Using our various tests, simulations, and comparison between different high-$\ell$ implementations, we estimate the consistency of the results to be better than the 0.5$\sigma$ level. Minor curiosities already present before (differences between $\ell<800$ and $\ell>800$ parameters, or the preference for more smoothing of the $C_\ell$ peaks) are shown to be driven by the TT power spectrum and are not significantly modified by the inclusion of polarization. Overall, the legacy Planck CMB likelihoods provide a robust tool for constraining the cosmological model and represent a reference for future CMB observations. (Abridged)

322 citations


Journal ArticleDOI
Leor Barack1, Vitor Cardoso2, Vitor Cardoso3, Samaya Nissanke4 +228 more · Institutions (101)
TL;DR: This article presents a comprehensive overview of the state of the art in the relevant fields of research, summarizes important open problems, and lays out a roadmap for future progress; it is an initiative taken within the framework of the European Action on 'Black holes, Gravitational waves and Fundamental Physics'.
Abstract: The grand challenges of contemporary fundamental physics-dark matter, dark energy, vacuum energy, inflation and early universe cosmology, singularities and the hierarchy problem-all involve gravity as a key component. And of all gravitational phenomena, black holes stand out in their elegant simplicity, while harbouring some of the most remarkable predictions of General Relativity: event horizons, singularities and ergoregions. The hitherto invisible landscape of the gravitational Universe is being unveiled before our eyes: the historical direct detection of gravitational waves by the LIGO-Virgo collaboration marks the dawn of a new era of scientific exploration. Gravitational-wave astronomy will allow us to test models of black hole formation, growth and evolution, as well as models of gravitational-wave generation and propagation. It will provide evidence for event horizons and ergoregions, test the theory of General Relativity itself, and may reveal the existence of new fundamental fields. The synthesis of these results has the potential to radically reshape our understanding of the cosmos and of the laws of Nature. The purpose of this work is to present a concise, yet comprehensive overview of the state of the art in the relevant fields of research, summarize important open problems, and lay out a roadmap for future progress. This write-up is an initiative taken within the framework of the European Action on 'Black holes, Gravitational waves and Fundamental Physics'. © 2019 IOP Publishing Ltd.

314 citations


Journal ArticleDOI
Masashi Hazumi, Peter A. R. Ade1, Y. Akiba2, David Alonso3 +161 more · Institutions (36)
TL;DR: LiteBIRD as mentioned in this paper is a candidate satellite for a strategic large mission of JAXA, which aims to map the polarization of the cosmic microwave background radiation over the full sky with unprecedented precision.
Abstract: LiteBIRD is a candidate satellite for a strategic large mission of JAXA. With its expected launch in the middle of the 2020s with an H3 rocket, LiteBIRD plans to map the polarization of the cosmic microwave background radiation over the full sky with unprecedented precision. The full success of LiteBIRD is to achieve $\delta r < 0.001$, where $\delta r$ is the total error on the tensor-to-scalar ratio r. The required angular coverage corresponds to $2 \le \ell \le 200$, where $\ell$ is the multipole moment. This allows us to test well-motivated cosmic inflation models. Full-sky surveys will be carried out for 3 years at the Lagrangian point L2, in 15 frequency bands between 34 and 448 GHz, with two telescopes, to achieve a total sensitivity of 2.5 μK-arcmin with a typical angular resolution of 0.5° at 150 GHz. Each telescope is equipped with a half-wave plate system for polarization signal modulation and a focal plane filled with polarization-sensitive TES bolometers. A cryogenic system provides a 100 mK base temperature for the focal planes and 2 K and 5 K stages for optical components.

286 citations


Journal ArticleDOI
TL;DR: In this paper, a simple procedure to determine the 2-group global symmetry of a given QFT is presented, and a classification of the related ’t Hooft anomalies is provided.
Abstract: In general quantum field theories (QFTs), ordinary (0-form) global symmetries and 1-form symmetries can combine into 2-group global symmetries. We describe this phenomenon in detail using the language of symmetry defects. We exhibit a simple procedure to determine the (possible) 2-group global symmetry of a given QFT, and provide a classification of the related ’t Hooft anomalies (for symmetries not acting on spacetime). We also describe how QFTs can be coupled to extrinsic backgrounds for symmetry groups that differ from the intrinsic symmetry acting faithfully on the theory. Finally, we provide a variety of examples, ranging from TQFTs (gapped systems) to gapless QFTs. Along the way, we stress that the “obstruction to symmetry fractionalization” discussed in some condensed matter literature is really an instance of 2-group global symmetry.

229 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider invariance under the finite modular group Γ4 ≃ S4 and focus on the minimal scenario where the expectation value of the modulus is the only source of symmetry breaking, such that no flavons need to be introduced.

215 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a new version of the SEVN population-synthesis code, which integrates stellar evolution by interpolation over a grid of stellar evolution tracks and now includes binary stellar evolution processes, and use it to evolve a large sample of binary systems.
Abstract: Studying the formation and evolution of black hole binaries (BHBs) is essential for the interpretation of current and forthcoming gravitational wave (GW) detections. We investigate the statistics of BHBs that form from isolated binaries, by means of a new version of the SEVN population-synthesis code. SEVN integrates stellar evolution by interpolation over a grid of stellar evolution tracks. We upgraded SEVN to include binary stellar evolution processes and we used it to evolve a sample of $1.5\times{}10^8$ binary systems, with metallicity in the range $\left[10^{-4};4\times 10^{-2}\right]$. From our simulations, we find that the mass distribution of black holes (BHs) in double compact-object binaries is remarkably similar to the one obtained considering only single stellar evolution. The maximum BH mass we obtain is $\sim 30$, $45$ and $55\, \mathrm{M}_\odot$ at metallicity $Z=2\times 10^{-2}$, $6\times 10^{-3}$, and $10^{-4}$, respectively. A few massive single BHs may also form ($\lesssim 0.1\%$ of the total number of BHs), with mass up to $\sim 65$, $90$ and $145\, \mathrm{M}_\odot$ at $Z=2\times 10^{-2}$, $6\times 10^{-3}$, and $10^{-4}$, respectively. These BHs fall in the mass gap predicted from pair-instability supernovae. We also show that the most massive BHBs are unlikely to merge within a Hubble time. In our simulations, merging BHs like GW151226 and GW170608, form at all metallicities, the high-mass systems (like GW150914, GW170814 and GW170104) originate from metal poor ($Z\lesssim{}6\times 10^{-3}$) progenitors, whereas GW170729-like systems are hard to form, even at $Z = 10^{-4}$. The BHB merger rate in the local Universe obtained from our simulations is $\sim 90 \mathrm{Gpc}^{-3}\mathrm{yr}^{-1}$, consistent with the rate inferred from LIGO-Virgo data.
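The statement that the most massive BHBs are unlikely to merge within a Hubble time can be made quantitative with the classic Peters (1964) coalescence time for a circular binary driven purely by GW emission. The sketch below is an order-of-magnitude illustration, not part of the SEVN machinery, and the separations used are purely hypothetical:

```python
# Peters (1964) coalescence time for a circular compact binary:
#   t_gw = (5/256) * c^5 * a^4 / (G^3 * m1 * m2 * (m1 + m2))
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8      # speed of light, m s^-1
MSUN = 1.989e30  # solar mass, kg
AU = 1.496e11    # astronomical unit, m
YR = 3.156e7     # year, s

def peters_time_yr(m1_msun, m2_msun, a_au):
    """GW-driven merger time (in years) of a circular binary."""
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    a = a_au * AU
    t = 5.0 / 256.0 * C**5 * a**4 / (G**3 * m1 * m2 * (m1 + m2))
    return t / YR
```

Because the merger time scales as a^4, a modest widening of the orbit pushes a binary well past the ~1.4×10^10 yr age of the Universe, which is why the heaviest (and typically widest) systems rarely merge.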

215 citations


Journal ArticleDOI
TL;DR: In this article, a simple parametrization of the effect in terms of two parameters (Ξ 0,n) was proposed to test modified GW propagation with standard sirens with LISA.
Abstract: Modifications of General Relativity leave their imprint both on the cosmic expansion history through a non-trivial dark energy equation of state, and on the evolution of cosmological perturbations in the scalar and in the tensor sectors. In particular, the modification in the tensor sector gives rise to a notion of gravitational-wave (GW) luminosity distance, different from the standard electromagnetic luminosity distance, that can be studied with standard sirens at GW detectors such as LISA or third-generation ground based experiments. We discuss the predictions for modified GW propagation from some of the best studied theories of modified gravity, such as Horndeski or the more general degenerate higher order scalar-tensor (DHOST) theories, non-local infrared modifications of gravity, bigravity theories and the corresponding phenomenon of GW oscillation, as well as theories with extra or varying dimensions. We show that modified GW propagation is a completely generic phenomenon in modified gravity. We then use a simple parametrization of the effect in terms of two parameters (Ξ0, n), which is shown to fit well the results from a large class of models, to study the prospects of observing modified GW propagation using supermassive black hole binaries as standard sirens with LISA. We construct mock source catalogs and perform detailed Markov Chain Monte Carlo studies of the likelihood obtained from LISA standard sirens alone, as well as by combining them with CMB, BAO and SNe data to reduce the degeneracies between cosmological parameters. We find that the combination of LISA with the other cosmological datasets allows one to measure the parameter Ξ0 that characterizes modified GW propagation to percent-level accuracy, sufficient to test several modified gravity theories.
LISA standard sirens can also improve constraints on GW oscillations induced by extra field content by about three orders of magnitude relative to the current capability of ground detectors. We also update the forecasts on the accuracy on H0 and on the dark-energy equation of state using more recent estimates for the LISA sensitivity.
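The two-parameter description above is commonly written as a redshift-dependent ratio of the GW to the electromagnetic luminosity distance; a minimal sketch, assuming the form d_L^gw/d_L^em = Ξ0 + (1 − Ξ0)/(1+z)^n (General Relativity corresponds to Ξ0 = 1, where the ratio is identically one):

```python
def gw_to_em_distance_ratio(z, xi0, n):
    """Ratio d_L^gw / d_L^em in the (Xi0, n) parametrization.

    The ratio interpolates smoothly from 1 at z = 0 (distances agree
    locally) to the constant xi0 at large redshift; xi0 = 1 recovers
    the General Relativity result at all redshifts.
    """
    return xi0 + (1.0 - xi0) / (1.0 + z) ** n
```

A measurement of Ξ0 at the percent level, as forecast in the abstract, therefore directly bounds the high-redshift deviation of the GW distance from the electromagnetic one.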

Journal ArticleDOI
TL;DR: In this paper, the authors constructed phenomenologically viable models of lepton masses and mixing based on modular A4 invariance broken to the residual symmetries Z3^T or Z3^ST and Z2^S, respectively.

Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4 +1496 more · Institutions (238)
TL;DR: The third volume of the FCC Conceptual Design Report as discussed by the authors is devoted to the hadron collider FCC-hh; it summarizes the physics discovery opportunities, presents the FCC-hh accelerator design, performance reach, and staged operation plan, discusses the underlying technologies, the civil engineering and technical infrastructure, and also sketches a possible implementation.
Abstract: In response to the 2013 Update of the European Strategy for Particle Physics (EPPSU), the Future Circular Collider (FCC) study was launched as a world-wide international collaboration hosted by CERN. The FCC study covered an energy-frontier hadron collider (FCC-hh), a highest-luminosity high-energy lepton collider (FCC-ee), the corresponding 100 km tunnel infrastructure, as well as the physics opportunities of these two colliders, and a high-energy LHC, based on FCC-hh technology. This document constitutes the third volume of the FCC Conceptual Design Report, devoted to the hadron collider FCC-hh. It summarizes the FCC-hh physics discovery opportunities, presents the FCC-hh accelerator design, performance reach, and staged operation plan, discusses the underlying technologies, the civil engineering and technical infrastructure, and also sketches a possible implementation. Combining ingredients from the Large Hadron Collider (LHC), the high-luminosity LHC upgrade and adding novel technologies and approaches, the FCC-hh design aims at significantly extending the energy frontier to 100 TeV. Its unprecedented centre-of-mass collision energy will make the FCC-hh a unique instrument to explore physics beyond the Standard Model, offering great direct sensitivity to new physics and discoveries.

Journal ArticleDOI
TL;DR: In this paper, the authors considered a class of theories where matter superfields transform in representations of the finite modular group Γ5 ≃ A5 and explicitly constructed a basis for the 11 modular forms of weight 2 and level 5.
Abstract: In the framework of the modular symmetry approach to lepton flavour, we consider a class of theories where matter superfields transform in representations of the finite modular group Γ5 ≃ A5. We explicitly construct a basis for the 11 modular forms of weight 2 and level 5. We show how these forms arrange themselves into two triplets and a quintet of A5. We also present multiplets of modular forms of higher weight. Finally, we provide an example of application of our results, constructing two models of neutrino masses and mixing based on the supersymmetric Weinberg operator.
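For context, the objects being counted here are modular forms of weight $k$ and level $N$: holomorphic functions of the modulus $\tau$ that transform under $\gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ in the principal congruence subgroup $\Gamma(N) \subset SL(2,\mathbb{Z})$ as

```latex
\gamma:\ \tau \;\longmapsto\; \frac{a\tau+b}{c\tau+d},
\qquad
f\!\left(\frac{a\tau+b}{c\tau+d}\right) = (c\tau+d)^{k}\, f(\tau)
\quad \text{for } \gamma \in \Gamma(N).
```

The finite modular group Γ5 ≃ A5 of the abstract arises as the quotient of the modular group by Γ(5), and it is under this quotient that the 11 weight-2, level-5 forms arrange themselves into the two triplets and one quintet described above.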

Journal ArticleDOI
TL;DR: In this paper, the authors report on the current state of LLP searches at the Large Hadron Collider (LHC) at CERN and chart a path for the development of LLP searches into the future, both in the upcoming Run 3 and at the High Luminosity LHC.
Abstract: Particles beyond the Standard Model (SM) can generically have lifetimes that are long compared to SM particles at the weak scale. When produced at experiments such as the Large Hadron Collider (LHC) at CERN, these long-lived particles (LLPs) can decay far from the interaction vertex of the primary proton-proton collision. Such LLP signatures are distinct from those of promptly decaying particles that are targeted by the majority of searches for new physics at the LHC, often requiring customized techniques to identify, for example, significantly displaced decay vertices, tracks with atypical properties, and short track segments. Given their non-standard nature, a comprehensive overview of LLP signatures at the LHC is beneficial to ensure that possible avenues of the discovery of new physics are not overlooked. Here we report on the joint work of a community of theorists and experimentalists with the ATLAS, CMS, and LHCb experiments --- as well as those working on dedicated experiments such as MoEDAL, milliQan, MATHUSLA, CODEX-b, and FASER --- to survey the current state of LLP searches at the LHC, and to chart a path for the development of LLP searches into the future, both in the upcoming Run 3 and at the High-Luminosity LHC. The work is organized around the current and future potential capabilities of LHC experiments to generally discover new LLPs, and takes a signature-based approach to surveying classes of models that give rise to LLPs rather than emphasizing any particular theory motivation. We develop a set of simplified models; assess the coverage of current searches; document known, often unexpected backgrounds; explore the capabilities of proposed detector upgrades; provide recommendations for the presentation of search results; and look towards the newest frontiers, namely high-multiplicity "dark showers", highlighting opportunities for expanding the LHC reach for these signals.

Journal ArticleDOI
TL;DR: In this paper, a model of neutrino masses and lepton mixing based on broken modular symmetry is proposed, where the only source of symmetry breaking is the vacuum expectation value of the modulus field.
Abstract: We investigate models of charged lepton and neutrino masses and lepton mixing based on broken modular symmetry. The matter fields in these models are assumed to transform in irreducible representations of the finite modular group Γ4 ≃ S4. We analyse the minimal scenario in which the only source of symmetry breaking is the vacuum expectation value of the modulus field. In this scenario there is no need to introduce flavon fields. Using the basis for the lowest weight modular forms found earlier, we build minimal phenomenologically viable models in which the neutrino masses are generated via the type I seesaw mechanism. While successfully accommodating charged lepton masses, neutrino mixing angles and mass-squared differences, these models predict the values of the lightest neutrino mass (i.e., the absolute neutrino mass scale), of the Dirac and Majorana CP violation (CPV) phases, as well as specific correlations between the values of the atmospheric neutrino mixing parameter sin²θ23 and i) the Dirac CPV phase δ, ii) the sum of the neutrino masses, and iii) the effective Majorana mass in neutrinoless double beta decay. We consider also the case of residual symmetries ℤ3 and ℤ2 respectively in the charged lepton and neutrino sectors, corresponding to specific vacuum expectation values of the modulus.

Journal ArticleDOI
TL;DR: In this paper, the authors extended the thermally pulsing asymptotic giant branch (TP-AGB) model to the more metal-rich Large Magellanic Cloud (LMC).
Abstract: Reliable models of the thermally pulsing asymptotic giant branch (TP-AGB) phase are of critical importance across astrophysics, including our interpretation of the spectral energy distribution of galaxies, cosmic dust production, and enrichment of the interstellar medium. With the aim of improving sets of stellar isochrones that include a detailed description of the TP-AGB phase, we extend our recent calibration of the AGB population in the Small Magellanic Cloud (SMC) to the more metal-rich Large Magellanic Cloud (LMC). We model the LMC stellar populations with the trilegal code, using the spatially resolved star formation history derived from the VISTA survey. We characterize the efficiency of the third dredge-up by matching the star counts and the Ks-band luminosity functions of the AGB stars identified in the LMC. In line with previous findings, we confirm that, compared to the SMC, the third dredge-up in AGB stars of the LMC is somewhat less efficient, as a consequence of the higher metallicity. The predicted range of initial mass of C-rich stars is between Mi ≈ 1.7 and 3 M⊙ at Zi = 0.008. We show how the inclusion of new opacity data in the carbon star spectra will improve the performance of our models. We discuss the predicted lifetimes, integrated luminosities, and mass-loss rate distributions of the calibrated models. The results of our calibration are included in updated stellar isochrones that are publicly available.

Journal ArticleDOI
TL;DR: The results unveil an unpredicted and evolutionary conserved role of SREBP1 in rewiring cell metabolism in response to mechanical cues and show that a stiff cellular environment causes RhoA lipidation and acto-myosin contraction, which inhibits SRE BP1 and connects the extracellular matrix to lipid metabolism.
Abstract: Sterol regulatory element binding proteins (SREBPs) are a family of transcription factors that regulate lipid biosynthesis and adipogenesis by controlling the expression of several enzymes required for cholesterol, fatty acid, triacylglycerol and phospholipid synthesis. In vertebrates, SREBP activation is mainly controlled by a complex and well-characterized feedback mechanism mediated by cholesterol, a crucial bio-product of the SREBP-activated mevalonate pathway. In this work, we identified acto-myosin contractility and mechanical forces imposed by the extracellular matrix (ECM) as SREBP1 regulators. SREBP1 control by mechanical cues depends on geranylgeranyl pyrophosphate, another key bio-product of the mevalonate pathway, and impacts on stem cell fate in mouse and on fat storage in Drosophila. Mechanistically, we show that activation of AMP-activated protein kinase (AMPK) by ECM stiffening and geranylgeranylated RhoA-dependent acto-myosin contraction inhibits SREBP1 activation. Our results unveil an unpredicted and evolutionary conserved role of SREBP1 in rewiring cell metabolism in response to mechanical cues. SREBP transcription factors activate lipid synthesis and generate raw materials to lipidate various proteins. Here, the authors show that a stiff cellular environment causes RhoA lipidation and acto-myosin contraction, which inhibits SREBP1 and connects the extracellular matrix to lipid metabolism.

Posted Content
TL;DR: The intrinsic dimensionality of data-representations is studied, i.e. the minimal number of parameters needed to describe a representation, and it is found that, in a trained network, the ID is orders of magnitude smaller than the number of units in each layer.
Abstract: Deep neural networks progressively transform their inputs across multiple processing layers. What are the geometrical properties of the representations learned by these networks? Here we study the intrinsic dimensionality (ID) of data-representations, i.e. the minimal number of parameters needed to describe a representation. We find that, in a trained network, the ID is orders of magnitude smaller than the number of units in each layer. Across layers, the ID first increases and then progressively decreases in the final layers. Remarkably, the ID of the last hidden layer predicts classification accuracy on the test set. These results can neither be found by linear dimensionality estimates (e.g., with principal component analysis), nor in representations that had been artificially linearized. They are neither found in untrained networks, nor in networks that are trained on randomized labels. This suggests that neural networks that can generalize are those that transform the data into low-dimensional, but not necessarily flat manifolds.
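The intrinsic dimension described above can be estimated from the ratio of each point's first two nearest-neighbour distances, as in the TwoNN method used by this group. Below is a minimal, self-contained sketch of such an estimator, assuming NumPy; the function name `two_nn_id`, the brute-force distance computation, and the 90% discard fraction are our illustrative choices, not the paper's code:

```python
import numpy as np

def two_nn_id(X, fraction=0.9):
    """TwoNN-style intrinsic dimension estimate: for each point take
    mu = r2/r1, the ratio of second to first nearest-neighbour distances;
    for a d-dimensional manifold, P(mu > x) ~ x**(-d), so the ID is the
    slope of -log(1 - F(mu)) against log(mu)."""
    n = len(X)
    # brute-force pairwise distances (fine for small samples)
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    r = np.sort(d, axis=1)[:, :2]          # first and second NN distances
    mu = np.sort(r[:, 1] / r[:, 0])
    mu = mu[: int(fraction * n)]           # discard the noisy upper tail
    F = np.arange(1, len(mu) + 1) / n      # empirical CDF of mu
    x, y = np.log(mu), -np.log(1.0 - F)
    return float((x @ y) / (x @ x))        # linear fit through the origin
```

Applied to data lying on a low-dimensional manifold embedded in a higher-dimensional space, the estimate recovers the manifold dimension rather than the embedding dimension, which is the quantity the abstract tracks across network layers.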

Journal ArticleDOI
TL;DR: In this paper, the authors present a new black hole solution surrounded by a dark matter halo in the galactic center, using the mass model of M87 and the universal rotation curve (URC) dark matter profile representing the family of spiral galaxies.
Abstract: In this paper we present a new black hole solution surrounded by a dark matter (DM) halo in the galactic center, using the mass model of M87 and that coming from the universal rotation curve (URC) dark matter profile representing the family of spiral galaxies. In both cases the DM halo density is cored, with core size $r_0$ and central density $\rho_0$: $\rho(r) = \rho_0 / [(1 + r/r_0)(1 + (r/r_0)^2)]$. Since $r_0 \rho_0 = 120\,\mathrm{M}_\odot/\mathrm{pc}^2$ [Mon. Not. R. Astron. Soc. 397, 1169 (2009)], by varying the central density one can reproduce the DM profile in any spiral. Using the Newman-Janis method we extend our solution to obtain a rotating black hole surrounded by a dark matter halo. We find that the apparent shape of the shadow depends not only on the black hole spin $a$ but also on the central density $\rho_0$ of the surrounding dark matter. As a specific example we consider the galaxy M87, with a central density $\rho_0 = 6.9 \times 10^6\,\mathrm{M}_\odot/\mathrm{kpc}^3$ and a core radius $r_0 = 91.2\,\mathrm{kpc}$. In the case of M87, our analyses show that the effect of dark matter on the size of the black hole shadow is almost negligible compared to the shadow size of the Kerr vacuum solution; hence the angular diameter of $42\,\mu\mathrm{as}$ remains almost unaltered when the dark matter is considered. For a small, totally dark-matter-dominated spiral such as UGC 7232, we find a similar effect of dark matter on the shadow images as for M87. However, under specific conditions, with a core radius comparable to the black hole mass and a very high dark matter density, we show that the shadow size decreases compared to the Kerr vacuum black hole.
The effect of dark matter on the apparent shadow shape could be exploited in future observations as an indirect way to detect dark matter using shadow images.
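The cored URC density profile quoted in the abstract is straightforward to evaluate numerically. A minimal sketch using the M87-like parameter values quoted above (the function name and the kpc-based units are our choices, not code from the paper):

```python
def urc_density(r, rho0, r0):
    """Cored URC dark-matter density profile from the abstract:
    rho(r) = rho0 / [(1 + r/r0) * (1 + (r/r0)**2)]."""
    x = r / r0
    return rho0 / ((1.0 + x) * (1.0 + x * x))

# M87-like values quoted in the abstract
rho0 = 6.9e6   # Msun / kpc^3
r0 = 91.2      # kpc
print(urc_density(0.0, rho0, r0))   # central density: exactly rho0
print(urc_density(r0, rho0, r0))    # at the core radius: rho0 / 4
```

The profile is flat (cored) for r << r0 and falls off as r^-3 for r >> r0, which is the behaviour that distinguishes it from cuspy profiles such as NFW.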

Journal ArticleDOI
L. Eyer1, L. Rimoldini1, M. Audard2, Richard I. Anderson1  +451 moreInstitutions (75)
TL;DR: In this article, the locations of variable star classes, variable object fractions, and typical variability amplitudes throughout the CaMD and show how variabilityrelated changes in colour and brightness induce "motions".
Abstract: Context. The ESA Gaia mission provides a unique time-domain survey for more than 1.6 billion sources with G ≲ 21 mag. Aims. We showcase stellar variability in the Galactic colour-absolute magnitude diagram (CaMD). We focus on pulsating, eruptive, and cataclysmic variables, as well as on stars that exhibit variability that is due to rotation and eclipses. Methods. We describe the locations of variable star classes, variable object fractions, and typical variability amplitudes throughout the CaMD and show how variability-related changes in colour and brightness induce "motions". To do this, we use 22 months of calibrated photometric, spectro-photometric, and astrometric Gaia data of stars with a significant parallax. To ensure that a large variety of variable star classes populate the CaMD, we crossmatched Gaia sources with known variable stars. We also used the statistics and variability detection modules of the Gaia variability pipeline. Corrections for interstellar extinction are not implemented in this article. Results. Gaia enables the first investigation of Galactic variable star populations in the CaMD on a similar, if not larger, scale as was previously done in the Magellanic Clouds. Although the observed colours are not corrected for reddening, distinct regions are visible in which variable stars occur. We determine variable star fractions to within the current detection thresholds of Gaia. Finally, we report the most complete description of variability-induced motion within the CaMD to date. Conclusions. Gaia enables novel insights into variability phenomena for an unprecedented number of stars, which will benefit the understanding of stellar astrophysics. The CaMD of Galactic variable stars provides crucial information on physical origins of variability in a way that has previously only been accessible for Galactic star clusters or external galaxies.
Future Gaia data releases will enable significant improvements over this preview by providing longer time series, more accurate astrometry, and additional data types (time series BP and RP spectra, RVS spectra, and radial velocities), all for much larger samples of stars.
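Placing a star on the absolute-magnitude axis of the CaMD requires converting its apparent G magnitude and parallax into an absolute magnitude (with no extinction correction, as in the article). A sketch of the standard distance-modulus conversion; the function name is ours:

```python
import math

def abs_mag_g(g_mag, parallax_mas):
    """Absolute magnitude from apparent magnitude and parallax in
    milliarcseconds: M_G = G + 5*log10(parallax[arcsec]) + 5,
    equivalently M_G = G + 5*log10(parallax_mas / 1000) + 5."""
    return g_mag + 5.0 * math.log10(parallax_mas / 1000.0) + 5.0

# A G = 10 star at 100 pc (parallax = 10 mas) has M_G of about 5
print(abs_mag_g(10.0, 10.0))
```

This is also why the article restricts itself to stars with a significant parallax: for low signal-to-noise parallaxes the logarithm amplifies the measurement error and biases the inferred absolute magnitude.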

Journal ArticleDOI
TL;DR: In this article, a formalism of combined finite modular and generalised CP (gCP) symmetries for theories of flavour is developed and the corresponding consistency conditions for the two symmetry transformations acting on the modulus τ and on the matter fields are derived.
Abstract: The formalism of combined finite modular and generalised CP (gCP) symmetries for theories of flavour is developed. The corresponding consistency conditions for the two symmetry transformations acting on the modulus τ and on the matter fields are derived. The implications of gCP symmetry in theories of flavour based on modular invariance described by finite modular groups are illustrated with the example of a modular S4 model of lepton flavour. Due to the addition of the gCP symmetry, viable modular models turn out to be more constrained, with the modulus τ being the only source of CP violation.

Journal ArticleDOI
TL;DR: In this paper, the symmetry resolved Renyi entropies in the one-dimensional tight binding model, equivalent to the spin-1/2 XX chain in a magnetic field, were investigated.
Abstract: We consider the symmetry resolved Renyi entropies in the one dimensional tight binding model, equivalent to the spin-1/2 XX chain in a magnetic field. We exploit the generalised Fisher-Hartwig conjecture to obtain the asymptotic behaviour of the entanglement entropies with a flux charge insertion at leading and subleading orders. The o(1) contributions are found to exhibit a rich structure of oscillatory behaviour. We then use these results to extract the symmetry resolved entanglement, determining exactly all the non-universal constants and logarithmic corrections to the scaling that are not accessible to the field theory approach. We also discuss how our results are generalised to a one-dimensional free fermi gas.
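For the tight-binding chain, entanglement entropies of an interval are obtained from the eigenvalues of the two-point correlation matrix restricted to the interval. A minimal sketch of this standard correlation-matrix method for the plain (non-resolved) Rényi entropies at filling set by the Fermi momentum, assuming NumPy; the symmetry resolution studied in the paper additionally inserts a flux-dependent phase (the "flux charge insertion"), which is not shown here:

```python
import numpy as np

def renyi_entropy(L, n=2, kf=np.pi / 2):
    """Renyi entropy of an interval of L sites in an infinite tight-binding
    chain, from the eigenvalues nu of the restricted correlation matrix
    C_ij = sin(kf*(i-j)) / (pi*(i-j)), with C_ii = kf/pi."""
    idx = np.arange(L)
    diff = idx[:, None] - idx[None, :]
    with np.errstate(invalid="ignore", divide="ignore"):
        C = np.sin(kf * diff) / (np.pi * diff)
    np.fill_diagonal(C, kf / np.pi)
    nu = np.clip(np.linalg.eigvalsh(C), 1e-12, 1 - 1e-12)
    if n == 1:  # von Neumann limit
        return float(-np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)))
    return float(np.sum(np.log(nu ** n + (1 - nu) ** n)) / (1 - n))
```

The von Neumann entropy grows as (1/3) ln L, the CFT prediction for a free fermion (c = 1), and the subleading oscillatory corrections visible in the Rényi entropies are exactly the kind of o(1) terms the paper extracts via the generalised Fisher-Hartwig conjecture.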

Journal ArticleDOI
TL;DR: In this article, an N-body cosmological hydrodynamical code AX-GADGET is proposed to quantify the FDM impact on specific astrophysical observables.
Abstract: Fuzzy Dark Matter (FDM) represents an alternative and intriguing description of the standard Cold Dark Matter (CDM) fluid, able to explain the lack of direct detection of dark matter particles in the GeV sector and to alleviate small-scale tensions in cosmic large-scale structure formation. Cosmological simulations of FDM models in the literature were performed either with very expensive high-resolution grid-based simulations of individual haloes or through N-body simulations encompassing larger cosmic volumes but resorting to significant approximations in the FDM non-linear dynamics to reduce their computational cost. With the use of the new N-body cosmological hydrodynamical code AX-GADGET, we are now able not only to overcome such numerical problems, but also to combine a fully consistent treatment of FDM dynamics with the presence of gas particles and baryonic physical processes, in order to quantify the FDM impact on specific astrophysical observables. In particular, in this paper we perform and analyse several hydrodynamical simulations in order to constrain the FDM mass by quantifying the impact of FDM on Lyman-$\alpha$ forest observations, obtained for the first time in the literature in an N-body setup without approximating the FDM dynamics. We also study the statistical properties of haloes, exploiting the large available sample, to extract information on how FDM affects the abundance, shape, and density profiles of dark matter haloes.

Journal ArticleDOI
01 Mar 2019-Carbon
TL;DR: This paper reviews carbon-based nanomaterials currently under investigation in basic and applied neuroscience and recent developments in this research field, with a special focus on in vitro studies.

Journal ArticleDOI
TL;DR: In this paper, the authors study fixed points of quantum gravity with renormalization group methods and a procedure to remove convergence-limiting poles from the flow, and the setup is tested within the $f(R)$ approximation for gravity by solving exact recursive relations up to order ${R}^{70}$ in the Ricci scalar, combined with resummations and numerical integration.
Abstract: We study fixed points of quantum gravity with renormalization group methods and a procedure to remove convergence-limiting poles from the flow. The setup is tested within the $f(R)$ approximation for gravity by solving exact recursive relations up to order $R^{70}$ in the Ricci scalar, combined with resummations and numerical integration. Results include fixed points, scaling exponents, the gap in the eigenvalue spectrum, the dimensionality of the UV-critical surface, fingerprints for weak coupling, and the quantum equations of motion. Our findings strengthen the view that "most of quantum gravity" is rather weakly coupled. Another novelty is a pair of de Sitter solutions for quantum cosmology, whose occurrence is traced back to the removal of poles. We also address slight disparities of results in the literature and give bounds on the number of fundamentally free parameters of quantum gravity.

Journal ArticleDOI
TL;DR: A statistical test upon a test statistic which measures deviations between two samples, using a Nearest Neighbors approach to estimate the local ratio of the density of points, which is model-independent and non-parametric.
Abstract: We propose a new scientific application of unsupervised learning techniques to boost our ability to search for new phenomena in data, by detecting discrepancies between two datasets. These could be, for example, a simulated standard-model background, and an observed dataset containing a potential hidden signal of New Physics. We build a statistical test upon a test statistic which measures deviations between two samples, using a Nearest Neighbors approach to estimate the local ratio of the density of points. The test is model-independent and non-parametric, requiring no knowledge of the shape of the underlying distributions, and it does not bin the data, thus retaining full information from the multidimensional feature space. As a proof-of-concept, we apply our method to synthetic Gaussian data, and to a simulated dark matter signal at the Large Hadron Collider. Even in the case where the background cannot be simulated accurately enough to claim discovery, the technique is a powerful tool to identify regions of interest for further study.
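The nearest-neighbour density-ratio idea can be sketched concretely: the local density at a point x estimated from a sample of size N scales as k / (N · V · r_k(x)^d), where r_k is the distance to the k-th neighbour, so the log ratio of k-NN radii in the two samples estimates the local log density ratio. Below is a hedged, brute-force sketch of such a statistic, not the authors' implementation; the function names, the choice k = 5, and the use of trial points as evaluation points are our illustrative assumptions:

```python
import numpy as np

def kth_nn_distance(point, sample, k):
    """Distance from `point` to its k-th nearest neighbour in `sample`."""
    d = np.sqrt(((sample - point) ** 2).sum(axis=1))
    return np.sort(d)[k - 1]

def two_sample_statistic(bench, trial, k=5):
    """Mean estimated log density ratio log(p_trial / p_bench) at the trial
    points: close to 0 if both samples come from the same distribution,
    positive where the trial sample is locally denser (a candidate signal)."""
    dim = bench.shape[1]
    logs = []
    for x in trial:
        r_b = kth_nn_distance(x, bench, k)
        r_t = kth_nn_distance(x, trial, k + 1)  # k+1 skips x itself
        logs.append(dim * np.log(r_b / r_t))
    # offset from the k/(N * V * r^d) sample-size factors
    return float(np.mean(logs)) + np.log(len(bench) / (len(trial) - 1))
```

Because only distances enter, the statistic requires no binning and no parametric model of either distribution, which is the property the abstract emphasizes; in practice a KD-tree replaces the brute-force search and the null distribution is calibrated with permutations.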


Journal ArticleDOI
TL;DR: This study provides the proof-of-concept that olfactory mucosa samples collected from patients with PD and MSA possess important seeding activities for α-synuclein, and suggests RT-QuIC analyses of OM and cerebrospinal fluid (CSF) can be combined with the aim of increasing the overall diagnostic accuracy of these diseases.
Abstract: Parkinson’s disease (PD) is a neurodegenerative disorder whose diagnosis is often challenging because symptoms may overlap with neurodegenerative parkinsonisms. PD is characterized by intraneuronal accumulation of abnormal α-synuclein in brainstem while neurodegenerative parkinsonisms might be associated with accumulation of either α-synuclein, as in the case of Multiple System Atrophy (MSA), or tau, as in the case of Corticobasal Degeneration (CBD) and Progressive Supranuclear Palsy (PSP), in other disease-specific brain regions. Definite diagnosis of all these diseases can be formulated only neuropathologically by detection and localization of α-synuclein or tau aggregates in the brain. Compelling evidence suggests that trace amounts of these proteins can appear in peripheral tissues, including receptor neurons of the olfactory mucosa (OM). We have set and standardized the experimental conditions to extend the ultrasensitive Real Time Quaking Induced Conversion (RT-QuIC) assay for OM analysis. In particular, by using human recombinant α-synuclein as substrate of reaction, we have assessed the ability of OM collected from patients with clinical diagnoses of PD and MSA to induce α-synuclein aggregation, and compared their seeding ability to that of OM samples collected from patients with clinical diagnoses of CBD and PSP. Our results showed that a significant percentage of MSA and PD samples induced α-synuclein aggregation with high efficiency, but also a few samples of patients with the clinical diagnosis of CBD and PSP caused the same effect. Notably, the final RT-QuIC aggregates obtained from MSA and PD samples possessed peculiar biochemical and morphological features potentially enabling their discrimination. Our study provides the proof-of-concept that olfactory mucosa samples collected from patients with PD and MSA possess important seeding activities for α-synuclein.
Additional studies are required for (i) estimating sensitivity and specificity of the technique and for (ii) evaluating its application for the diagnosis of PD and neurodegenerative parkinsonisms. RT-QuIC analyses of OM and cerebrospinal fluid (CSF) can be combined with the aim of increasing the overall diagnostic accuracy of these diseases, especially in the early stages.

Journal ArticleDOI
TL;DR: A unified approach accounting for quantum effects is introduced, capable of modeling heat transport ranging from crystals to glasses; it naturally bridges the Boltzmann kinetic approach in crystals and the Allen-Feldman model in glasses.
Abstract: We introduce a novel approach to model heat transport in solids, based on the Green-Kubo theory of linear response. It naturally bridges the Boltzmann kinetic approach in crystals and the Allen-Feldman model in glasses, leveraging interatomic force constants and normal-mode linewidths computed at mechanical equilibrium. At variance with molecular dynamics, our approach naturally and easily accounts for quantum mechanical effects in energy transport. Our methodology is carefully validated against results for crystalline and amorphous silicon from equilibrium molecular dynamics and, in the former case, from the Boltzmann transport equation.
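The Green-Kubo theory of linear response invoked above relates a transport coefficient to the time integral of the equilibrium flux autocorrelation function, κ ∝ ∫₀^∞ ⟨J(0)J(t)⟩ dt. As a generic numerical illustration of that integral (this is not the article's method, which works from force constants and linewidths rather than a molecular-dynamics time series; the function name, the FFT-based autocorrelation, and the lag cutoff are our assumptions):

```python
import numpy as np

def green_kubo_integral(flux, dt, t_max, prefactor=1.0):
    """Trapezoidal integral of the flux autocorrelation <J(0)J(t)> up to
    t_max; `prefactor` carries the physical units (e.g. 1/(V*k_B*T**2)
    for a scalar heat flux)."""
    n = len(flux)
    flux = flux - flux.mean()
    # unbiased autocorrelation estimate via FFT (Wiener-Khinchin theorem)
    f = np.fft.rfft(flux, 2 * n)
    acf = np.fft.irfft(f * np.conj(f))[:n] / (n - np.arange(n))
    m = min(n, int(t_max / dt) + 1)        # truncate at the lag cutoff
    acf = acf[:m]
    return prefactor * dt * (acf.sum() - 0.5 * (acf[0] + acf[-1]))
```

The need for a lag cutoff and long trajectories to tame the noisy tail of the autocorrelation is precisely the practical burden of molecular-dynamics Green-Kubo estimates that the article's equilibrium, normal-mode-based formulation sidesteps, while also restoring quantum statistics that classical trajectories miss.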