
Showing papers by "École Polytechnique published in 2019"


Journal ArticleDOI
TL;DR: Scrublet, a framework for predicting the impact of multiplets in a given analysis and identifying problematic multiplets, avoids the need for expert knowledge or cell clustering by simulating multiplets from the data and building a nearest neighbor classifier.
Abstract: Single-cell RNA-sequencing has become a widely used, powerful approach for studying cell populations. However, these methods often generate multiplet artifacts, where two or more cells receive the same barcode, resulting in a hybrid transcriptome. In most experiments, multiplets account for several percent of transcriptomes and can confound downstream data analysis. Here, we present Single-Cell Remover of Doublets (Scrublet), a framework for predicting the impact of multiplets in a given analysis and identifying problematic multiplets. Scrublet avoids the need for expert knowledge or cell clustering by simulating multiplets from the data and building a nearest neighbor classifier. To demonstrate the utility of this approach, we test Scrublet on several datasets that include independent knowledge of cell multiplets. Scrublet is freely available for download at github.com/AllonKleinLab/scrublet.

1,021 citations
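
The core recipe is compact enough to sketch: simulate doublets by summing random pairs of observed transcriptomes, embed observed and simulated cells together, and score each cell by how many simulated doublets sit among its nearest neighbors. This is an illustrative reimplementation, not the Scrublet package itself; the log transform, PCA dimension, neighbor count, and score definition are simplified assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def doublet_scores(counts, n_sim=None, n_pcs=30, k=20, seed=0):
    """Toy Scrublet-style scoring; counts is a (cells x genes) array."""
    rng = np.random.default_rng(seed)
    n_obs = counts.shape[0]
    n_sim = n_sim or n_obs  # simulate roughly one doublet per observed cell
    # Simulate doublets by summing the counts of two random observed cells.
    pairs = rng.integers(0, n_obs, size=(n_sim, 2))
    doublets = counts[pairs[:, 0]] + counts[pairs[:, 1]]
    # Embed observed and simulated transcriptomes in a common PCA space.
    joint = PCA(n_components=n_pcs).fit_transform(
        np.log1p(np.vstack([counts, doublets])))
    # Score each observed cell by the fraction of simulated doublets
    # among its k nearest neighbors in the joint embedding.
    labels = np.r_[np.zeros(n_obs), np.ones(n_sim)]
    nn = NearestNeighbors(n_neighbors=k).fit(joint)
    _, idx = nn.kneighbors(joint[:n_obs])
    return labels[idx].mean(axis=1)
```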


Journal ArticleDOI
TL;DR: In this paper, the authors provide a comprehensive update of the current status of ultra-high-power lasers, demonstrate how the technology has developed, and examine what technologies are to be deployed to reach these new regimes, along with some critical issues facing their development.
Abstract: In the 2015 review paper 'Petawatt Class Lasers Worldwide' a comprehensive overview of the current status of high-power facilities of >200 TW was presented. This was largely based on facility specifications, with some description of their uses, for instance in fundamental ultra-high-intensity interactions, secondary source generation, and inertial confinement fusion (ICF). With the 2018 Nobel Prize in Physics being awarded to Professors Donna Strickland and Gérard Mourou for the development of the technique of chirped pulse amplification (CPA), which made these lasers possible, we celebrate by providing a comprehensive update of the current status of ultra-high-power lasers and demonstrate how the technology has developed. We are now in the era of multi-petawatt facilities coming online, with 100 PW lasers being proposed and even under construction. In addition to this there is a pull towards development of industrial and multidisciplinary applications, which demands much higher repetition rates, delivering high average powers with higher efficiencies and the use of alternative wavelengths: mid-IR facilities. So apart from a comprehensive update of the current global status, we want to look at what technologies are to be deployed to get to these new regimes, and some of the critical issues facing their development.

559 citations


Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4 +1491 more (239 institutions)
TL;DR: In this article, the authors present the second volume of the Future Circular Collider Conceptual Design Report, devoted to the electron-positron collider FCC-ee, and present the accelerator design, performance reach, a staged operation scenario, the underlying technologies, civil engineering, technical infrastructure, and an implementation plan.
Abstract: In response to the 2013 Update of the European Strategy for Particle Physics, the Future Circular Collider (FCC) study was launched, as an international collaboration hosted by CERN. This study covers a highest-luminosity high-energy lepton collider (FCC-ee) and an energy-frontier hadron collider (FCC-hh), which could, successively, be installed in the same 100 km tunnel. The scientific capabilities of the integrated FCC programme would serve the worldwide community throughout the 21st century. The FCC study also investigates an LHC energy upgrade, using FCC-hh technology. This document constitutes the second volume of the FCC Conceptual Design Report, devoted to the electron-positron collider FCC-ee. After summarizing the physics discovery opportunities, it presents the accelerator design, performance reach, a staged operation scenario, the underlying technologies, civil engineering, technical infrastructure, and an implementation plan. FCC-ee can be built with today’s technology. Most of the FCC-ee infrastructure could be reused for FCC-hh. Combining concepts from past and present lepton colliders and adding a few novel elements, the FCC-ee design promises outstandingly high luminosity. This will make the FCC-ee a unique precision instrument to study the heaviest known particles (Z, W and H bosons and the top quark), offering great direct and indirect sensitivity to new physics.

526 citations


Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4 +1496 more (238 institutions)
TL;DR: In this paper, the authors describe the detailed design and preparation of a construction project for a post-LHC circular energy frontier collider in collaboration with national institutes, laboratories and universities worldwide, and enhanced by a strong participation of industrial partners.
Abstract: Particle physics has arrived at an important moment of its history. The discovery of the Higgs boson, with a mass of 125 GeV, completes the matrix of particles and interactions that has constituted the “Standard Model” for several decades. This model is a consistent and predictive theory, which has so far proven successful at describing all phenomena accessible to collider experiments. However, several experimental facts do require the extension of the Standard Model and explanations are needed for observations such as the abundance of matter over antimatter, the striking evidence for dark matter and the non-zero neutrino masses. Theoretical issues such as the hierarchy problem, and, more generally, the dynamical origin of the Higgs mechanism, likewise point to the existence of physics beyond the Standard Model. This report contains the description of a novel research infrastructure based on a highest-energy hadron collider with a centre-of-mass collision energy of 100 TeV and an integrated luminosity at least a factor of 5 larger than the HL-LHC. It will extend the current energy frontier by almost an order of magnitude. The mass reach for direct discovery will be several tens of TeV, allowing, for example, the production of new particles whose existence could be indirectly exposed by precision measurements during the preceding e+e– collider phase. This collider will also precisely measure the Higgs self-coupling and thoroughly explore the dynamics of electroweak symmetry breaking at the TeV scale, to elucidate the nature of the electroweak phase transition. WIMPs as thermal dark matter candidates will be discovered, or ruled out. As a single project, this particle collider infrastructure will serve the world-wide physics community for about 25 years and, in combination with a lepton collider (see FCC conceptual design report volume 2), will provide a research tool until the end of the 21st century. Collision energies beyond 100 TeV can be considered when using high-temperature superconductors. The European Strategy for Particle Physics (ESPP) update 2013 stated “To stay at the forefront of particle physics, Europe needs to be in a position to propose an ambitious post-LHC accelerator project at CERN by the time of the next Strategy update”. The FCC study has implemented the ESPP recommendation by developing a long-term vision for an “accelerator project in a global context”. This document describes the detailed design and preparation of a construction project for a post-LHC circular energy frontier collider “in collaboration with national institutes, laboratories and universities worldwide”, and enhanced by a strong participation of industrial partners. Now, a coordinated preparation effort can be based on a core of an ever-growing consortium of already more than 135 institutes worldwide. The technology for constructing a high-energy circular hadron collider can be brought to the technology readiness level required for construction within the coming ten years through a focused R&D programme.
The FCC-hh concept comprises, in the baseline scenario, a power-saving, low-temperature superconducting magnet system based on an evolution of the Nb3Sn technology pioneered at the HL-LHC, an energy-efficient cryogenic refrigeration infrastructure based on a neon-helium (Nelium) light gas mixture, a high-reliability and low-loss cryogen distribution infrastructure based on Invar, high-power distributed beam transfer using superconducting elements, and local magnet energy recovery and re-use technologies that are already being gradually introduced at other CERN accelerators. On a longer timescale, high-temperature superconductors can be developed together with industrial partners to achieve an even more energy-efficient particle collider or to reach even higher collision energies. The re-use of the LHC and its injector chain, which also serve a concurrently running physics programme, is an essential lever towards an overall sustainable research infrastructure at the energy frontier. Strategic R&D for FCC-hh aims at minimising construction cost and energy consumption, while maximising the socio-economic impact. It will mitigate technology-related risks and ensure that industry can benefit from an acceptable utility. Concerning the implementation, a preparatory phase of about eight years is both necessary and adequate to establish the project governance and organisation structures, to build the international machine and experiment consortia, to develop a territorial implantation plan in agreement with the host-states' requirements, to optimise the disposal of land and underground volumes, and to prepare the civil engineering project. Such a large-scale, international fundamental research infrastructure, tightly involving industrial partners and providing training at all education levels, will be a strong motor of economic and societal development in all participating nations. The FCC study has implemented a set of actions towards a coherent vision for the world-wide high-energy and particle physics community, providing a collaborative framework for topically complementary and geographically well-balanced contributions. This conceptual design report lays the foundation for a subsequent infrastructure preparatory and technical design phase.

425 citations


Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4 +1501 more (239 institutions)
TL;DR: In this article, the physics opportunities of the Future Circular Collider (FCC) are reviewed, covering its e+e-, pp, ep and heavy-ion programmes and the measurement capabilities of each FCC component, addressing the study of electroweak, Higgs and strong interactions.
Abstract: We review the physics opportunities of the Future Circular Collider, covering its e+e-, pp, ep and heavy ion programmes. We describe the measurement capabilities of each FCC component, addressing the study of electroweak, Higgs and strong interactions, the top quark and flavour, as well as phenomena beyond the Standard Model. We highlight the synergy and complementarity of the different colliders, which will contribute to a uniquely coherent and ambitious research programme, providing an unmatchable combination of precision and sensitivity to new physics.

407 citations


Journal ArticleDOI
TL;DR: Hydrophobicity is proposed as a governing factor in CO2 reduction selectivity and can help explain trends seen on previously reported electrocatalysts.
Abstract: The aqueous electrocatalytic reduction of CO2 into alcohol and hydrocarbon fuels presents a sustainable route towards energy-rich chemical feedstocks. Cu is the only material able to catalyse the substantial formation of multicarbon products (C2/C3), but competing proton reduction to hydrogen is an ever-present drain on selectivity. Here, a superhydrophobic surface was generated by 1-octadecanethiol treatment of hierarchically structured Cu dendrites, inspired by the structure of gas-trapping cuticles on subaquatic spiders. The hydrophobic electrode attained a 56% Faradaic efficiency for ethylene and 17% for ethanol production at neutral pH, compared to 9% and 4% on a hydrophilic, wettable equivalent. These observations are assigned to trapped gases at the hydrophobic Cu surface, which increase the concentration of CO2 at the electrode–solution interface and consequently increase CO2 reduction selectivity. Hydrophobicity is thus proposed as a governing factor in CO2 reduction selectivity and can help explain trends seen on previously reported electrocatalysts. Aqueous electrocatalytic reduction of CO2 into alcohol and hydrocarbon fuels is a sustainable route towards energy-rich chemical feedstocks. A superhydrophobic surface of hierarchically structured Cu dendrites exhibits a significant increase in CO2 reduction selectivity.

396 citations
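
As a reference point for the selectivity figures quoted above, Faradaic efficiency relates moles of product to total charge passed. A minimal worked example, using the fact that reducing CO2 to ethylene or to ethanol consumes 12 electrons per product molecule; the measurement numbers below are invented for illustration:

```python
F = 96485.0  # Faraday constant, C/mol

def faradaic_efficiency(product_mol, electrons_per_molecule, charge_C):
    """Fraction of the total passed charge that formed the given product."""
    return product_mol * electrons_per_molecule * F / charge_C

# Hypothetical measurement: 1.0 C passed, 4.84e-7 mol of ethylene detected.
# 2 CO2 + 12 H+ + 12 e- -> C2H4 + 4 H2O, i.e. 12 electrons per molecule.
print(faradaic_efficiency(4.84e-7, 12, 1.0))  # ~0.56, i.e. 56% FE for C2H4
```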


Proceedings ArticleDOI
20 May 2019
TL;DR: This work proposes to rethink pairwise interactions with a self-attention mechanism and to jointly model Human-Robot as well as Human-Human interactions in the deep reinforcement learning framework, capturing the Human-Human interactions occurring in dense crowds that indirectly affect the robot's anticipation capability.
Abstract: Mobility in an effective and socially-compliant manner is an essential yet challenging task for robots operating in crowded spaces. Recent works have shown the power of deep reinforcement learning techniques to learn socially cooperative policies. However, their cooperation ability deteriorates as the crowd grows since they typically relax the problem as a one-way Human-Robot interaction problem. In this work, we want to go beyond first-order Human-Robot interaction and more explicitly model Crowd-Robot Interaction (CRI). We propose to (i) rethink pairwise interactions with a self-attention mechanism, and (ii) jointly model Human-Robot as well as Human-Human interactions in the deep reinforcement learning framework. Our model captures the Human-Human interactions occurring in dense crowds that indirectly affect the robot's anticipation capability. Our proposed attentive pooling mechanism learns the collective importance of neighboring humans with respect to their future states. Various experiments demonstrate that our model can anticipate human dynamics and navigate in crowds with time efficiency, outperforming state-of-the-art methods.

341 citations
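
A minimal sketch of the attentive pooling step: embed each pairwise human state, convert embeddings into scalar importance scores, and pool a fixed-size crowd representation regardless of crowd size. Dimensions and randomly initialized weights below are toy stand-ins, not the authors' trained networks.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def crowd_representation(human_states, W_embed, w_score):
    """Pool a variable number of human states into one fixed-size vector.

    human_states: (n_humans, state_dim) pairwise human-robot features
    W_embed:      (state_dim, hidden_dim) embedding weights (toy stand-in)
    w_score:      (hidden_dim,) scoring weights, one attention logit per human
    """
    h = np.tanh(human_states @ W_embed)   # per-human embeddings
    alpha = softmax(h @ w_score)          # collective importance weights
    return alpha @ h                      # attention-weighted crowd feature

rng = np.random.default_rng(0)
states = rng.normal(size=(5, 8))          # 5 humans, 8-d joint features
crowd = crowd_representation(states, rng.normal(size=(8, 16)),
                             rng.normal(size=16))
print(crowd.shape)                        # (16,) regardless of crowd size
```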


Journal ArticleDOI
TL;DR: This review distills the historical and current developments spanning the last several decades of SIF heritage and complementarity within the broader field of fluorescence science, the maturation of physiological and radiative transfer modelling, SIF signal retrieval strategies, techniques for field and airborne sensing, advances in satellite-based systems, and applications of these capabilities in evaluation of photosynthesis and stress effects.

313 citations


Journal ArticleDOI
TL;DR: Volumetric bioprinting permits the creation of geometrically complex, centimeter‐scale constructs at an unprecedented printing velocity, opening new avenues for upscaling the production of hydrogel‐based constructs and for their application in tissue engineering, regenerative medicine, and soft robotics.
Abstract: Biofabrication technologies, including stereolithography and extrusion-based printing, are revolutionizing the creation of complex engineered tissues. The current paradigm in bioprinting relies on the additive layer-by-layer deposition and assembly of repetitive building blocks, typically cell-laden hydrogel fibers or voxels, single cells, or cellular aggregates. The scalability of these additive manufacturing technologies is limited by their printing velocity, as lengthy biofabrication processes impair cell functionality. Overcoming such limitations, the volumetric bioprinting of clinically relevant sized, anatomically shaped constructs, in a time frame ranging from seconds to tens of seconds is described. An optical-tomography-inspired printing approach, based on visible light projection, is developed to generate cell-laden tissue constructs with high viability (>85%) from gelatin-based photoresponsive hydrogels. Free-form architectures, difficult to reproduce with conventional printing, are obtained, including anatomically correct trabecular bone models with embedded angiogenic sprouts and meniscal grafts. The latter undergoes maturation in vitro as the bioprinted chondroprogenitor cells synthesize neo-fibrocartilage matrix. Moreover, free-floating structures are generated, as demonstrated by printing functional hydrogel-based ball-and-cage fluidic valves. Volumetric bioprinting permits the creation of geometrically complex, centimeter-scale constructs at an unprecedented printing velocity, opening new avenues for upscaling the production of hydrogel-based constructs and for their application in tissue engineering, regenerative medicine, and soft robotics.

280 citations


Journal ArticleDOI
TL;DR: It is proposed that METTL5–TRMT112 acts by extruding the adenosine to be modified from a double-stranded nucleic acid, supporting that its RNA-binding mode differs distinctly from that of other m6A RNA methyltransferases.
Abstract: N6-methyladenosine (m6A) has recently been found abundantly on messenger RNA and shown to regulate most steps of mRNA metabolism. Several important m6A methyltransferases have been described functionally and structurally, but the enzymes responsible for installing one m6A residue on each subunit of human ribosomes at functionally important sites have eluded identification for over 30 years. Here, we identify METTL5 as the enzyme responsible for 18S rRNA m6A modification and confirm ZCCHC4 as the 28S rRNA modification enzyme. We show that METTL5 must form a heterodimeric complex with TRMT112, a known methyltransferase activator, to gain metabolic stability in cells. We provide the first atomic resolution structure of METTL5-TRMT112, supporting that its RNA-binding mode differs distinctly from that of other m6A RNA methyltransferases. On the basis of similarities with a DNA methyltransferase, we propose that METTL5-TRMT112 acts by extruding the adenosine to be modified from a double-stranded nucleic acid.

270 citations


Journal ArticleDOI
TL;DR: An electrosynthetic method to design HEMG-NPs with up to eight tunable metallic components is presented, and multifunctional electrocatalytic water splitting capabilities are shown.
Abstract: Creative approaches to the design of catalytic nanomaterials are necessary in achieving environmentally sustainable energy sources. Integrating dissimilar metals into a single nanoparticle (NP) offers a unique avenue for customizing catalytic activity and maximizing surface area. Alloys containing five or more equimolar components with a disordered, amorphous microstructure, referred to as High-Entropy Metallic Glasses (HEMGs), provide tunable catalytic performance based on the individual properties of incorporated metals. Here, we present a generalized strategy to electrosynthesize HEMG-NPs with up to eight equimolar components by confining multiple metal salt precursors to water nanodroplets emulsified in dichloroethane. Upon collision with an electrode, alloy NPs are electrodeposited into a disordered microstructure, where dissimilar metal atoms are proximally arranged. We also demonstrate precise control over metal stoichiometry by tuning the concentration of metal salt dissolved in the nanodroplet. The application of HEMG-NPs to energy conversion is highlighted with electrocatalytic water splitting on CoFeLaNiPt HEMG-NPs.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a model of an endowment economy with two competing, but intrinsically worthless currencies (Dollar, Bitcoin) and show that Bitcoin prices form a martingale.
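
For readers outside finance, the martingale property claimed in the TL;DR can be stated in one line. This is the standard definition, not a reproduction of the paper's equilibrium conditions:

```latex
% The Bitcoin price process (B_t) is a martingale: today's price equals
% the conditional expectation of tomorrow's, given current information.
\mathbb{E}\left[\, B_{t+1} \mid \mathcal{F}_t \,\right] = B_t
```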

Journal ArticleDOI
TL;DR: In this paper, the existence and uniqueness of Kähler-Einstein metrics on Q-Fano varieties with log terminal singularities whose Mabuchi functional is proper are proved, together with convergence results for the normalized Kähler-Ricci flow and its discrete version, Ricci iteration.
Abstract: We prove the existence and uniqueness of Kähler-Einstein metrics on Q-Fano varieties with log terminal singularities (and more generally on log Fano pairs) whose Mabuchi functional is proper. We study analogues of the works of Perelman on the convergence of the normalized Kähler-Ricci flow, and of Keller, Rubinstein on its discrete version, Ricci iteration. In the special case of (non-singular) Fano manifolds, our results on Ricci iteration yield smooth convergence without any additional condition, improving on previous results. Our result for the Kähler-Ricci flow provides weak convergence independently of Perelman's celebrated estimates.
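
For orientation, the two dynamics studied here can be written schematically. These are the standard normalizations on a Fano manifold; twists by the log structure and normalization constants are deferred to the paper:

```latex
% Normalized Kähler-Ricci flow on a Fano manifold:
\frac{\partial \omega_t}{\partial t} = -\,\mathrm{Ric}(\omega_t) + \omega_t
% Ricci iteration, its discrete version: given \omega_k, define \omega_{k+1} by
\mathrm{Ric}(\omega_{k+1}) = \omega_k
% Both have Kähler-Einstein metrics, \mathrm{Ric}(\omega) = \omega,
% as fixed points.
```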

Journal ArticleDOI
TL;DR: A systematic review of the last 10 years of the literature on handwritten signatures with respect to the new scenario is reported, focusing on the most promising domains of research and trying to elicit possible future research directions in this subject.
Abstract: Handwritten signatures are biometric traits at the center of debate in the scientific community. Over the last 40 years, the interest in signature studies has grown steadily, having as its main reference the application of automatic signature verification, as previously published reviews in 1989, 2000, and 2008 bear witness. Ever since, and over the last 10 years, the application of handwritten signature technology has strongly evolved and much research has focused on the possibility of applying systems based on handwritten signature analysis and processing to a multitude of new fields. After several years of haphazard growth of this research area, it is time to assess its current developments for their applicability in order to draw a structured way forward. This perspective reports a systematic review of the last 10 years of the literature on handwritten signatures with respect to the new scenario, focusing on the most promising domains of research and trying to elicit possible future research directions in this subject.

Journal ArticleDOI
Marco Ajello1, Makoto Arimoto2, Magnus Axelsson3, Magnus Axelsson4 +149 more (37 institutions)
TL;DR: In this article, the authors present the second catalog of LAT-detected GRBs, covering the first 10 yr of operations, from 2008 to 2018 August 4; a total of 186 GRBs are found, of which 91 show emission in the range 30–100 MeV (17 of which are seen only in this band) and 169 are detected above 100 MeV.
Abstract: The Large Area Telescope (LAT) aboard the Fermi spacecraft routinely observes high-energy emission from gamma-ray bursts (GRBs). Here we present the second catalog of LAT-detected GRBs, covering the first 10 yr of operations, from 2008 to 2018 August 4. A total of 186 GRBs are found; of these, 91 show emission in the range 30–100 MeV (17 of which are seen only in this band) and 169 are detected above 100 MeV. Most of these sources were discovered by other instruments (Fermi/GBM, Swift/BAT, AGILE, INTEGRAL) or reported by the Interplanetary Network (IPN); the LAT has independently triggered on four GRBs. This catalog presents the results for all 186 GRBs. We study onset, duration, and temporal properties of each GRB, as well as spectral characteristics in the 100 MeV–100 GeV energy range. Particular attention is given to the photons with the highest energy. Compared with the first LAT GRB catalog, our rate of detection is significantly improved. The results generally confirm the main findings of the first catalog: the LAT primarily detects the brightest GBM bursts, and the high-energy emission shows delayed onset as well as longer duration. However, in this work we find delays exceeding 1 ks and several GRBs with durations over 10 ks. Furthermore, the larger number of LAT detections shows that these GRBs not only cover the high-fluence range of GBM-detected GRBs but also sample lower fluences. In addition, the greater number of detected GRBs with redshift estimates allows us to study their properties in both the observer and rest frames. Comparison of the observational results with theoretical predictions reveals that no model is currently able to explain all results, highlighting the role of LAT observations in driving theoretical models.

Proceedings Article
08 Dec 2019
TL;DR: This paper combines an encoder based on causal dilated convolutions with a novel triplet loss employing time-based negative sampling, obtaining general-purpose representations for variable length and multivariate time series.
Abstract: Time series constitute a challenging data type for machine learning algorithms, due to their highly variable lengths and sparse labeling in practice. In this paper, we tackle this challenge by proposing an unsupervised method to learn universal embeddings of time series. Unlike previous works, it is scalable with respect to their length and we demonstrate the quality, transferability and practicability of the learned representations with thorough experiments and comparisons. To this end, we combine an encoder based on causal dilated convolutions with a novel triplet loss employing time-based negative sampling, obtaining general-purpose representations for variable length and multivariate time series.
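
A sketch of the time-based triplet loss in the spirit of the abstract: the positive is a subseries of the anchor, negatives are random subseries of other series, and representations are compared through a logistic loss on dot products. The encoder, the sampling ranges, and the `min_len` floor are placeholder assumptions, not the paper's exact choices.

```python
import numpy as np

def log_sigmoid(x):
    # Numerically stable log(sigmoid(x)).
    return -np.logaddexp(0.0, -x)

def triplet_loss(encode, series, rng, n_neg=5, min_len=16):
    """Unsupervised time-based triplet loss for a batch of 1-D series.

    encode: maps a 1-D subseries to a fixed-size representation vector
    series: list of 1-D numpy arrays (each assumed longer than min_len)
    """
    x = series[rng.integers(len(series))]
    # Anchor: a random subseries; positive: a subseries of the anchor.
    a_len = rng.integers(min_len, len(x) + 1)
    a_start = rng.integers(len(x) - a_len + 1)
    anchor = x[a_start:a_start + a_len]
    p_len = rng.integers(min_len, a_len + 1)
    p_start = a_start + rng.integers(a_len - p_len + 1)
    loss = -log_sigmoid(encode(anchor) @ encode(x[p_start:p_start + p_len]))
    # Negatives: random subseries drawn from (possibly) other series.
    for _ in range(n_neg):
        y = series[rng.integers(len(series))]
        n_len = rng.integers(min_len, len(y) + 1)
        n_start = rng.integers(len(y) - n_len + 1)
        loss -= log_sigmoid(-(encode(anchor) @ encode(y[n_start:n_start + n_len])))
    return loss
```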

Journal ArticleDOI
TL;DR: In this article, the authors compute the characteristic function of the log-price in rough Heston models, where the Riccati equation of the classical Heston model is replaced by a fractional Riccati equation.
Abstract: It has been recently shown that rough volatility models, where the volatility is driven by a fractional Brownian motion with small Hurst parameter, provide very relevant dynamics in order to reproduce the behavior of both historical and implied volatilities. However, due to the non‐Markovian nature of the fractional Brownian motion, they raise new issues when it comes to derivatives pricing. Using an original link between nearly unstable Hawkes processes and fractional volatility models, we compute the characteristic function of the log‐price in rough Heston models. In the classical Heston model, the characteristic function is expressed in terms of the solution of a Riccati equation. Here, we show that rough Heston models exhibit quite a similar structure, the Riccati equation being replaced by a fractional Riccati equation.
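
Schematically, the structural result reads as follows. The quadratic F is written in one common parameterization of the Heston drift and vol-of-vol (the symbols λ, ρ, ν and the Hurst parameter H are notation; exact conventions are in the paper):

```latex
% Classical Heston: the log-price characteristic function involves h solving
\partial_t h(a,t) = F\big(a, h(a,t)\big),
\qquad
F(a,x) = \tfrac{1}{2}\,(-a^2 - ia)
       + \lambda\,(i a \rho \nu - 1)\, x
       + \tfrac{(\lambda \nu)^2}{2}\, x^2 .
% Rough Heston: the same F, but the time derivative becomes fractional,
D^{\alpha} h(a,t) = F\big(a, h(a,t)\big),
\qquad
I^{1-\alpha} h(a,0) = 0,
\qquad
\alpha = H + \tfrac{1}{2} .
```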

Journal ArticleDOI
H. Abdalla1, R. Adam2, Felix Aharonian3, Felix Aharonian4 +232 more (40 institutions)
20 Nov 2019-Nature
TL;DR: In this article, the authors report observations of very-high-energy emission from the bright gamma-ray burst GRB 180720B deep in its afterglow, ten hours after the end of the prompt emission phase.
Abstract: Gamma-ray bursts (GRBs) are brief flashes of γ-rays and are considered to be the most energetic explosive phenomena in the Universe [1]. The emission from GRBs comprises a short (typically tens of seconds) and bright prompt emission, followed by a much longer afterglow phase. During the afterglow phase, the shocked outflow—produced by the interaction between the ejected matter and the circumburst medium—slows down, and a gradual decrease in brightness is observed [2]. GRBs typically emit most of their energy via γ-rays with energies in the kiloelectronvolt-to-megaelectronvolt range, but a few photons with energies of tens of gigaelectronvolts have been detected by space-based instruments [3]. However, the origins of such high-energy (above one gigaelectronvolt) photons and the presence of very-high-energy (more than 100 gigaelectronvolts) emission have remained elusive [4]. Here we report observations of very-high-energy emission in the bright GRB 180720B deep in the GRB afterglow—ten hours after the end of the prompt emission phase, when the X-ray flux had already decayed by four orders of magnitude. Two possible explanations exist for the observed radiation: inverse Compton emission and synchrotron emission of ultrarelativistic electrons. Our observations show that the energy fluxes in the X-ray and γ-ray range and their photon indices remain comparable to each other throughout the afterglow. This discovery places distinct constraints on the GRB environment for both emission mechanisms, with the inverse Compton explanation alleviating the particle energy requirements for the emission observed at late times. The late timing of this detection has consequences for the future observations of GRBs at the highest energies.

Journal ArticleDOI
A. Abada1, Marcello Abbrescia2, Marcello Abbrescia3, Shehu S. AbdusSalam4 +1496 more (238 institutions)
TL;DR: The third volume of the FCC Conceptual Design Report is devoted to the hadron collider FCC-hh; it summarizes the physics discovery opportunities, presents the FCC-hh accelerator design, performance reach, and staged operation plan, discusses the underlying technologies, the civil engineering and technical infrastructure, and also sketches a possible implementation.
Abstract: In response to the 2013 Update of the European Strategy for Particle Physics (EPPSU), the Future Circular Collider (FCC) study was launched as a world-wide international collaboration hosted by CERN. The FCC study covered an energy-frontier hadron collider (FCC-hh), a highest-luminosity high-energy lepton collider (FCC-ee), the corresponding 100 km tunnel infrastructure, as well as the physics opportunities of these two colliders, and a high-energy LHC, based on FCC-hh technology. This document constitutes the third volume of the FCC Conceptual Design Report, devoted to the hadron collider FCC-hh. It summarizes the FCC-hh physics discovery opportunities, presents the FCC-hh accelerator design, performance reach, and staged operation plan, discusses the underlying technologies, the civil engineering and technical infrastructure, and also sketches a possible implementation. Combining ingredients from the Large Hadron Collider (LHC), the high-luminosity LHC upgrade and adding novel technologies and approaches, the FCC-hh design aims at significantly extending the energy frontier to 100 TeV. Its unprecedented centre-of-mass collision energy will make the FCC-hh a unique instrument to explore physics beyond the Standard Model, offering great direct sensitivity to new physics and discoveries.

Journal ArticleDOI
TL;DR: In this article, the four-dimensional S-matrix is reconsidered as a correlator on the celestial sphere at null infinity, and the notion of a soft particle whose energy is taken to zero is replaced by that of a conformally soft particle with $h = 0$ or $\overline{h} = 0$.
Abstract: The four-dimensional S-matrix is reconsidered as a correlator on the celestial sphere at null infinity. Asymptotic particle states can be characterized by the point at which they enter or exit the celestial sphere as well as their SL(2, ℂ) Lorentz quantum numbers: namely their conformal scaling dimension and spin $h \pm \overline{h}$ instead of the energy and momentum. This characterization precludes the notion of a soft particle whose energy is taken to zero. We propose it should be replaced by the notion of a conformally soft particle with $h = 0$ or $\overline{h} = 0$. For photons we explicitly construct conformally soft SL(2, ℂ) currents with dimensions (1, 0) and identify them with the generator of a U(1) Kac-Moody symmetry on the celestial sphere. For gravity the generator of celestial conformal symmetry is constructed from a (2, 0) SL(2, ℂ) primary wavefunction. Interestingly, BMS supertranslations are generated by a spin-one weight $(\frac{3}{2}, \frac{1}{2})$ operator, which nevertheless shares holomorphic characteristics of a conformally soft operator. This is because the right hand side of its OPE with a weight $(h, \overline{h})$ operator $\mathcal{O}_{h,\overline{h}}$ involves the shifted operator $\mathcal{O}_{h+\frac{1}{2},\overline{h}+\frac{1}{2}}$. This OPE relation looks quite unusual from the celestial CFT2 perspective but is equivalent to the leading soft graviton theorem and may usefully constrain celestial correlators in quantum gravity.
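
To unpack the weight bookkeeping: conformal dimension and spin are the sum and difference of the SL(2, ℂ) weights, which is why the (3/2, 1/2) supertranslation generator is called spin-one. This is standard 2d CFT arithmetic rather than new content from the paper:

```latex
\Delta = h + \overline{h}, \qquad J = h - \overline{h}.
% Photon current (h,\overline{h}) = (1,0):      \Delta = 1,\ J = 1
% Celestial conformal generator (2,0):          \Delta = 2,\ J = 2
% Supertranslation generator (3/2, 1/2):        \Delta = 2,\ J = 1  (spin one)
```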

Journal ArticleDOI
TL;DR: Evidence of a magnetic Weyl state is reported and the surface Fermi arcs in YbMnBi2 are observed, providing a fundamental link between the two areas of physics and a practical way to design novel materials with exotic properties.
Abstract: Spectroscopic detection of Dirac and Weyl fermions in real materials is vital both for promising applications and as a fundamental bridge between high-energy and condensed-matter physics. While the presence of Dirac and noncentrosymmetric Weyl fermions is well established in many materials, magnetic Weyl semimetals still escape direct experimental detection. In order to find a time-reversal symmetry breaking Weyl state we design two materials and present here experimental and theoretical evidence of the realization of such a state in one of them, YbMnBi2. We model the time-reversal symmetry breaking observed by magnetization and magneto-optical microscopy measurements by canted antiferromagnetism and find a number of Weyl points. Using angle-resolved photoemission, we directly observe two pairs of Weyl points connected by the Fermi arcs. Our results not only provide a fundamental link between the two areas of physics, but also demonstrate a practical way to design novel materials with exotic properties. Candidate materials containing magnetic Weyl fermions remain rare. Here, the authors report evidence of a magnetic Weyl state and observe the surface Fermi arcs in YbMnBi2.

Journal ArticleDOI
TL;DR: Non-asymptotic bounds are obtained for the convergence to stationarity, in Wasserstein and total variation distance, of the sampling method based on the Euler discretization of the Langevin stochastic differential equation, with explicit dependence on the dimension of the state space.
Abstract: We consider in this paper the problem of sampling a high-dimensional probability distribution $\pi$ having a density with respect to the Lebesgue measure on $\mathbb{R}^d$, known up to a normalisation factor $x \mapsto \mathrm{e}^{-U(x)} / \int_{\mathbb{R}^d} \mathrm{e}^{-U(y)}\,\mathrm{d}y$. Such a problem naturally occurs for example in Bayesian inference and machine learning. Under the assumption that $U$ is continuously differentiable, $\nabla U$ is globally Lipschitz and $U$ is strongly convex, we obtain non-asymptotic bounds for the convergence to stationarity in Wasserstein distance of order $2$ and total variation distance of the sampling method based on the Euler discretization of the Langevin stochastic differential equation, for both constant and decreasing step sizes. The dependence on the dimension of the state space of the obtained bounds is studied to demonstrate the applicability of this method. The convergence of an appropriately weighted empirical measure is also investigated and bounds for the mean square error and exponential deviation inequality are reported for functions which are either Lipschitz continuous or measurable and bounded. An illustration to a Bayesian inference for binary regression is presented.
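
The sampler analyzed here is simple to state concretely. Below is a minimal sketch of the constant-step Euler discretization of the Langevin SDE (the unadjusted Langevin algorithm), run on a toy Gaussian potential; the step size is an arbitrary illustrative choice, not one derived from the paper's bounds.

```python
import numpy as np

def ula(grad_U, x0, gamma, n_steps, rng):
    """Unadjusted Langevin algorithm: Euler discretization of
    dX_t = -grad U(X_t) dt + sqrt(2) dB_t, with constant step gamma."""
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        x = x - gamma * grad_U(x) + np.sqrt(2.0 * gamma) * rng.normal(size=x.size)
        samples[k] = x
    return samples

# Toy target: U(x) = ||x||^2 / 2 (standard Gaussian), so grad U(x) = x.
rng = np.random.default_rng(0)
draws = ula(lambda x: x, x0=np.zeros(5), gamma=0.05, n_steps=10_000, rng=rng)
print(draws[2000:].std(axis=0))  # close to 1 in each coordinate, up to bias
```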

Journal ArticleDOI
TL;DR: These results represent a new approach to studying van der Waals materials using microwave photons in coherent quantum circuits and show that this device can be operated as a voltage-tunable transmon qubit that can be controlled coherently.
Abstract: Quantum coherence and control is foundational to the science and engineering of quantum systems1,2. In van der Waals materials, the collective coherent behaviour of carriers has been probed successfully by transport measurements3–6. However, temporal coherence and control, as exemplified by manipulating a single quantum degree of freedom, remains to be verified. Here we demonstrate such coherence and control of a superconducting circuit incorporating graphene-based Josephson junctions. Furthermore, we show that this device can be operated as a voltage-tunable transmon qubit7–9, whose spectrum reflects the electronic properties of massless Dirac fermions travelling ballistically4,5. In addition to the potential for advancing extensible quantum computing technology, our results represent a new approach to studying van der Waals materials using microwave photons in coherent quantum circuits. A graphene-based Josephson junction incorporated in a superconducting circuit forms a voltage-tunable transmon qubit that can be controlled coherently.

Journal ArticleDOI
TL;DR: Incremental learning, online learning, and data stream learning are terms commonly associated with learning algorithms that update their models given a continuous influx of data without performing multiple passes over data.
Abstract: Incremental learning, online learning, and data stream learning are terms commonly associated with learning algorithms that update their models given a continuous influx of data without performing multiple passes over data. Several works have been devoted to this area, either directly or indirectly as characteristics of big data processing, i.e., Velocity and Volume. Given the current industry needs, there are many challenges to be addressed before existing methods can be efficiently applied to real-world problems. In this work, we focus on elucidating the connections among the current state-of-the-art on related fields and clarifying open challenges in both academia and industry. We treat with special care topics that were not thoroughly investigated in past position and survey papers. This work aims to evoke discussion and elucidate the current research opportunities, highlighting the relationship of different subareas and suggesting courses of action when possible.
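
The common protocol behind these terms is the test-then-train (prequential) loop: each arriving example is first used for evaluation and then for a single incremental update. A minimal sketch using scikit-learn's partial_fit API as a stand-in stream learner on a synthetic stream; dedicated stream-learning frameworks follow the same shape.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()
classes = np.array([0, 1])      # must be declared on the first update

correct, seen = 0, 0
for _ in range(5_000):          # stand-in for an unbounded data stream
    x = rng.normal(size=(1, 10))
    y = np.array([int(x[0, 0] + 0.3 * x[0, 1] > 0)])  # synthetic concept
    if seen > 0:                # test-then-train: predict before updating
        correct += int(model.predict(x)[0] == y[0])
    model.partial_fit(x, y, classes=classes if seen == 0 else None)
    seen += 1
print(f"prequential accuracy: {correct / (seen - 1):.3f}")
```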

Journal ArticleDOI
TL;DR: In this article, the experimental work has been supported by the European Research Council (ERC), the Scottish Funding Council, the UK EPSRC and the Swiss National Science Foundation (SNSF).
Abstract: Funding: The experimental work has been supported by the European Research Council (ERC), the Scottish Funding Council, the UK EPSRC and the Swiss National Science Foundation (SNSF). Theoretical work was supported by the ERC grant ERC-319286-QMAC and by the SNSF (NCCR MARVEL).

Journal ArticleDOI
TL;DR: The main novelty of this work is the conformal treatment of the optimal orientation of the microstructure in a plane setting: although the periodicity cell has varying parameters and orientation throughout the computational domain, the angles between its members or bars are conserved.
Abstract: This paper is concerned with the topology optimization of structures made of periodically perforated material, where the microscopic periodic cell can be macroscopically modulated and oriented. The main idea is to optimize the homogenized formulation of this problem, which is an easy task of parametric optimization, then to project the optimal microstructure at a desired lengthscale, which is a delicate issue, albeit computationally cheap. The main novelty of our work is, in a plane setting, the conformal treatment of the optimal orientation of the microstructure. In other words, although the periodicity cell has varying parameters and orientation throughout the computational domain, the angles between its members or bars are conserved. The main application of our work is the optimization of so-called lattice materials which are becoming increasingly popular in the context of additive manufacturing. Several numerical examples are presented for compliance minimization in 2-d.
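
The angle preservation invoked here is planar conformality. As background (this is the standard Cauchy-Riemann characterization, not the paper's projection construction), a map φ = (φ1, φ2) preserves the angles between the cell's bars when:

```latex
% \varphi = (\varphi_1, \varphi_2) is (orientation-preserving) conformal iff
\nabla \varphi_1 \cdot \nabla \varphi_2 = 0,
\qquad
|\nabla \varphi_1| = |\nabla \varphi_2|,
% equivalently, the Cauchy-Riemann equations hold:
\partial_x \varphi_1 = \partial_y \varphi_2,
\qquad
\partial_y \varphi_1 = -\,\partial_x \varphi_2 .
```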

Journal ArticleDOI
TL;DR: It is demonstrated that the nearly perfectly hexagonal Fermi surface of PdCoO2 gives rise to highly directional ballistic transport with enhanced electron self-focusing effects, suggesting a novel class of ballistic electronic devices exploiting the unique transport characteristics of strongly faceted Fermi surfaces.
Abstract: Geometric electron optics may be implemented in solids when electron transport is ballistic on the length scale of a device. Currently, this is realized mainly in 2D materials characterized by circular Fermi surfaces. Here we demonstrate that the nearly perfectly hexagonal Fermi surface of PdCoO2 gives rise to highly directional ballistic transport. We probe this directional ballistic regime in a single crystal of PdCoO2 by use of focused ion beam (FIB) micro-machining, defining crystalline ballistic circuits with features as small as 250 nm. The peculiar hexagonal Fermi surface naturally leads to enhanced electron self-focusing effects in a magnetic field compared to circular Fermi surfaces. This super-geometric focusing can be quantitatively predicted for arbitrary device geometry, based on the hexagonal cyclotron orbits appearing in this material. These results suggest a novel class of ballistic electronic devices exploiting the unique transport characteristics of strongly faceted Fermi surfaces. Ballistic electron beams in clean metals can be focused by passing currents through well designed contraptions, which is mostly done in isotropic materials described by a circular Fermi surface. Here, the authors demonstrate that the almost hexagonal Fermi surface of PdCoO2 gives rise to highly directional ballistic transport with enhanced electron self-focusing effects.
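
The geometry behind the focusing is the textbook semiclassical fact that, in a perpendicular field, a quasiparticle's real-space orbit is its Fermi-surface orbit rotated by 90 degrees and rescaled, so a hexagonal Fermi surface produces hexagonal cyclotron orbits. Schematically, for carriers of charge -e:

```latex
% Semiclassical equations of motion in a field B\hat{z}:
\hbar \dot{\mathbf{k}} = -e\, \mathbf{v} \times \mathbf{B},
\qquad
\mathbf{v} = \tfrac{1}{\hbar}\, \nabla_{\mathbf{k}}\, \varepsilon(\mathbf{k})
% integrate: the real-space orbit is the k-space orbit, rotated and scaled
\mathbf{r}_\perp(t) = -\,\frac{\hbar}{eB}\, \hat{z} \times \mathbf{k}(t)
                      + \text{const}
% For a circular Fermi surface this gives the familiar cyclotron radius
r_c = \hbar k_F / (eB).
```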

Journal ArticleDOI
TL;DR: The Atlantic Meridional Overturning Circulation (AMOC) transports heat northwards throughout the Atlantic and sinks carbon and nutrients into the deep ocean; this paper reviews the trans-basin and boundary observing systems that monitor its variability and suggests how to evaluate a comprehensive, future-proof AMOC observational network.
Abstract: The Atlantic Meridional Overturning Circulation (AMOC) extends from the Southern Ocean to the northern North Atlantic, transporting heat northwards throughout the South and North Atlantic, and sinking carbon and nutrients into the deep ocean. Climate models indicate that changes to the AMOC both herald and drive climate shifts. Intensive trans-basin AMOC observational systems have been put in place to continuously monitor meridional volume transport variability, and in some cases, heat, freshwater and carbon transport. These observational programs have been used to diagnose the magnitude and origins of transport variability, and to investigate impacts of variability on essential climate variables such as sea surface temperature, ocean heat content and coastal sea level. AMOC observing approaches vary between the different systems, ranging from trans-basin arrays (OSNAP, RAPID 26°N, 11°S, SAMBA 34.5°S) to arrays concentrating on western boundaries (e.g., RAPID WAVE, MOVE 16°N). In this paper, we outline the different approaches (aims, strengths and limitations) and summarize the key results to date. We also discuss alternate approaches for capturing AMOC variability including direct estimates (e.g., using sea level, bottom pressure, and hydrography from autonomous profiling floats), indirect estimates applying budgetary approaches, state estimates or ocean reanalyses, and proxies. Based on the existing observations and their results, and the potential of new observational and formal synthesis approaches, we make suggestions as to how to evaluate a comprehensive, future-proof observational network of the AMOC to deepen our understanding of the AMOC and its role in global climate.

Journal ArticleDOI
TL;DR: A general principle is established which states that regularizing an inverse problem with a convex function yields solutions which are convex combinations of a small number of atoms.
Abstract: We establish a general principle which states that regularizing an inverse problem with a convex function yields solutions which are convex combinations of a small number of atoms. These atoms are identified with the extreme points and elements of the extreme rays of the regularizer level sets. An extension to a broader class of quasi-convex regularizers is also discussed. As a side result, we characterize the minimizers of the total gradient variation, which was still an unresolved problem.
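
Schematically, the principle states that when the data-fit term sees the unknown only through finitely many measurements, some solution decomposes into few atoms. The following is a hedged paraphrase; the paper's precise hypotheses on the regularizer and its level sets are omitted:

```latex
% Inverse problem with convex regularizer R and m scalar measurements \Phi:
\min_{x} \; f\big(\Phi x\big) + R(x), \qquad \Phi x \in \mathbb{R}^m .
% There exists a solution that is a convex combination of few atoms,
x^\star = \sum_{i=1}^{p} \theta_i\, a_i, \qquad p \le m, \quad \theta_i \ge 0,
% where each a_i is an extreme point of, or lies on an extreme ray of,
% the level set \{\, x : R(x) \le R(x^\star) \,\}.
```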

Journal ArticleDOI
Zongbo Shi1, Zongbo Shi2, Tuan Vu1, Simone Kotthaus3, Simone Kotthaus4, Roy M. Harrison5, Roy M. Harrison1, Sue Grimmond3, Siyao Yue6, Tong Zhu7, James D. Lee8, Yiqun Han7, Yiqun Han9, Matthias Demuzere10, Rachel Dunmore8, Lujie Ren6, Lujie Ren2, Di Liu, Yuanlin Wang11, Yuanlin Wang6, Oliver Wild11, James Allan12, W. Joe F. Acton11, Janet F. Barlow3, Benjamin Barratt9, David C. S. Beddows1, William J. Bloss1, Giulia Calzolai13, David Carruthers, David C. Carslaw8, Queenie Chan9, Lia Chatzidiakou14, Yang Chen6, Leigh R. Crilley1, Hugh Coe12, Tie Dai6, Ruth M. Doherty15, Fengkui Duan16, Pingqing Fu6, Pingqing Fu2, Baozhu Ge6, Maofa Ge6, Daobo Guan17, Jacqueline F. Hamilton8, Kebin He16, Mathew R. Heal15, Dwayne E. Heard18, C. Nicholas Hewitt11, Michael Hollaway11, Min Hu7, Dongsheng Ji6, Xujiang Jiang16, Rod Jones14, Markus Kalberer14, Frank J. Kelly9, Louisa Kramer1, Ben Langford, Chun Lin15, Alastair C. Lewis8, Jie Li6, Weijun Li19, Huan Liu16, Junfeng Liu7, Miranda Loh, Keding Lu7, Franco Lucarelli13, Graham Mann18, Gordon McFiggans12, Mark R. Miller15, Graham P. Mills17, Paul Monk20, Eiko Nemitz, F. M. O'Connor21, Bin Ouyang11, Bin Ouyang14, Paul I. Palmer15, Carl J. Percival12, Olalekan A.M. Popoola14, Claire E. Reeves17, Andrew R. Rickard8, Longyi Shao22, Guangyu Shi6, Dominick V. Spracklen18, David Stevenson15, Yele Sun6, Zhiwei Sun23, Shu Tao7, Shengrui Tong6, Qingqing Wang6, Wenhua Wang22, Xinming Wang6, Xuejun Wang7, Zifang Wang6, Lianfang Wei6, Lisa K. Whalley18, Xuefang Wu1, Zhijun Wu7, Pinhua Xie6, Fumo Yang24, Qiang Zhang16, Yanli Zhang6, Yuanhang Zhang7, Mei Zheng7 
TL;DR: The Atmospheric Pollution and Human Health in a Chinese Megacity (APHH-Beijing) programme is an international collaborative project focusing on understanding the sources, processes and health effects of air pollution in the Beijing megacity.
Abstract: The Atmospheric Pollution and Human Health in a Chinese Megacity (APHH-Beijing) programme is an international collaborative project focusing on understanding the sources, processes and health effects of air pollution in the Beijing megacity. APHH-Beijing brings together leading China and UK research groups, state-of-the-art infrastructure and air quality models to work on four research themes: (1) sources and emissions of air pollutants; (2) atmospheric processes affecting urban air pollution; (3) air pollution exposure and health impacts; and (4) interventions and solutions. Themes 1 and 2 are closely integrated and support Theme 3, while Themes 1–3 provide scientific data for Theme 4 to develop cost-effective air pollution mitigation solutions. This paper provides an introduction to (i) the rationale of the APHH-Beijing programme and (ii) the measurement and modelling activities performed as part of it. In addition, this paper introduces the meteorology and air quality conditions during two joint intensive field campaigns – a core integration activity in APHH-Beijing. The coordinated campaigns provided observations of the atmospheric chemistry and physics at two sites: (i) the Institute of Atmospheric Physics in central Beijing and (ii) Pinggu in rural Beijing during 10 November–10 December 2016 (winter) and 21 May–22 June 2017 (summer). The campaigns were complemented by numerical modelling and automatic air quality and low-cost sensor observations in the Beijing megacity. In summary, the paper provides background information on the APHH-Beijing programme and sets the scene for more focused papers addressing specific aspects, processes and effects of air pollution in Beijing.