scispace - formally typeset
Author

Georges Aad

Bio: Georges Aad is an academic researcher at Aix-Marseille University. His research focuses on topics including the Large Hadron Collider and the Higgs boson. He has an h-index of 135, has co-authored 1,121 publications, and has received 88,811 citations. His previous affiliations include the Centre national de la recherche scientifique and the University of Udine.


Papers
Journal ArticleDOI
TL;DR: The ATLAS experiment at the Large Hadron Collider at CERN reports a constraint on lepton-flavour-violating effects in weak interactions, searching for Z-boson decays into a τ lepton and another lepton of different flavour with opposite electric charge.
Abstract: Leptons with essentially the same properties apart from their mass are grouped into three families (or flavours). The number of leptons of each flavour is conserved in interactions, but this is not imposed by fundamental principles. Since the formulation of the standard model of particle physics, the observation of flavour oscillations among neutrinos has shown that lepton flavour is not conserved in neutrino weak interactions. So far, there has been no experimental evidence that this also occurs in interactions between charged leptons. Such an observation would be a sign of undiscovered particles or a yet unknown type of interaction. Here the ATLAS experiment at the Large Hadron Collider at CERN reports a constraint on lepton-flavour-violating effects in weak interactions, searching for Z-boson decays into a τ lepton and another lepton of different flavour with opposite electric charge. The branching fractions for these decays are measured to be less than 8.1 × 10⁻⁶ (eτ) and 9.5 × 10⁻⁶ (μτ) at the 95% confidence level using 139 fb⁻¹ of proton–proton collision data at a centre-of-mass energy of √s = 13 TeV and 20.3 fb⁻¹ at √s = 8 TeV. These results supersede the limits from the Large Electron–Positron Collider experiments conducted more than two decades ago.
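The quoted 95% confidence-level bounds can be illustrated with a toy counting-experiment calculation. This is a simplified sketch only: the ATLAS result comes from a full profile-likelihood fit, and the event counts below are hypothetical.

```python
import math

# Toy sketch of a 95% CL upper limit in a counting experiment (not the
# ATLAS statistical machinery): find the largest signal mean s such that
# observing n_obs or fewer events is still at least 5% probable given the
# expected background b.

def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson distribution with mean mu."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def upper_limit_95(n_obs, bkg, step=1e-3):
    """Scan the signal mean upward until P(N <= n_obs) drops to 5%."""
    s = 0.0
    while poisson_cdf(n_obs, s + bkg) > 0.05:
        s += step
    return s

# Hypothetical counts: 3 observed events over an expected background of 1.2.
s_up = upper_limit_95(3, 1.2)
print(round(s_up, 2))  # roughly 6.5 signal events excluded at 95% CL
```

Dividing such a signal limit by the product of efficiency, luminosity and the Z production cross-section is what turns an event count into a branching-fraction bound.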

8 citations

01 Jan 2012
TL;DR: In this paper, a measurement of the ZZ production cross section in proton-proton collisions at 7 TeV using data collected by the ATLAS experiment at the LHC is presented.
Abstract: A measurement of the ZZ production cross section in proton–proton collisions at √s = 7 TeV using data collected by the ATLAS experiment at the LHC is presented. In a data sample corresponding to an integrated luminosity of 1.02 fb⁻¹, 12 events containing two Z boson candidates decaying to electrons and/or muons were observed. The expected background contribution is 0.3 +0.9/−0.3 (stat) +0.4/−0.3 (syst) events. The total cross section for on-shell ZZ production has been determined to be σ_ZZ^tot = 8.4 +2.7/−2.3 (stat) +0.4/−0.7 (syst) ± 0.3 (lumi) pb and is compatible with the Standard Model expectation of 6.5 +0.3/−0.2 pb calculated at next-to-leading order in QCD. Limits on anomalous neutral triple gauge boson couplings are derived.

8 citations

Journal ArticleDOI
02 Jul 2021
TL;DR: In this paper, convolutional and recurrent neural networks were used for energy reconstruction of the liquid-argon (LAr) calorimeter signals during the high-luminosity phase of the LHC at CERN.
Abstract: The ATLAS experiment at the Large Hadron Collider (LHC) is operated at CERN and measures proton–proton collisions at multi-TeV energies with a repetition frequency of 40 MHz. Within the phase-II upgrade of the LHC, the readout electronics of the liquid-argon (LAr) calorimeters of ATLAS are being prepared for high-luminosity operation, with an expected pileup of up to 200 simultaneous proton–proton interactions. Moreover, the calorimeter signals of up to 25 subsequent collisions overlap, which increases the difficulty of energy reconstruction by the calorimeter detector. Real-time processing of digitized pulses sampled at 40 MHz is performed using field-programmable gate arrays (FPGAs). To cope with the signal pileup, new machine learning approaches are explored: convolutional and recurrent neural networks outperform the optimal signal filter currently used, both in assignment of the reconstructed energy to the correct proton bunch crossing and in energy resolution. The improvements concern in particular energies derived from overlapping pulses. Since the implementation of the neural networks targets an FPGA, the number of parameters and the mathematical operations need to be well controlled. The trained neural network structures are converted into FPGA firmware using automated implementations in hardware description language and high-level synthesis tools. Very good agreement between neural network implementations in FPGA and software-based calculations is observed. The prototype implementations on an Intel Stratix-10 FPGA reach maximum operation frequencies of 344–640 MHz. Applying time-division multiplexing allows the processing of 390–576 calorimeter channels by one FPGA for the most resource-efficient networks. Moreover, the latency achieved is about 200 ns. These performance parameters show that a neural-network-based energy reconstruction can be considered for the processing of the ATLAS LAr calorimeter signals during the high-luminosity phase of the LHC.
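The "optimal signal filter" baseline that the neural networks are compared against can be sketched in a few lines. This is a minimal illustration under simplifying assumptions: the five-sample pulse shape and the white-noise model below are hypothetical, not the real LAr electronics response.

```python
import numpy as np

# Minimal sketch of an amplitude-estimating optimal filter: given a known
# normalized pulse shape g and the noise autocorrelation matrix R, the
# coefficients a = R^{-1} g / (g^T R^{-1} g) give an unbiased (a.g = 1),
# minimum-variance estimate of the pulse amplitude E = a . s from the
# digitized samples s.

def optimal_filter_coeffs(g, R):
    Rinv_g = np.linalg.solve(R, g)      # R^{-1} g without forming R^{-1}
    return Rinv_g / (g @ Rinv_g)        # normalize so that a . g = 1

# Hypothetical 5-sample pulse shape, normalized to peak at 1.
g = np.array([0.0, 0.6, 1.0, 0.5, 0.2])
R = np.eye(5)  # white noise for simplicity

a = optimal_filter_coeffs(g, R)

# Sanity check: a noiseless pulse of amplitude E is reconstructed exactly.
E_true = 123.4
E_hat = a @ (E_true * g)
print(round(E_hat, 6))  # 123.4
```

With overlapping pulses from pileup, the linear filter's assumptions break down, which is the regime where the paper's neural networks gain.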

8 citations

01 Jan 2014
Abstract: ATLAS measurements of the azimuthal anisotropy in lead–lead collisions at √s_NN = 2.76 TeV are shown using a dataset of approximately 7 μb⁻¹ collected at the LHC in 2010. The measurements are performed for charged particles with transverse momenta 0.5 < pT < 20 GeV and in the pseudorapidity range |η| < 2.5. The anisotropy is characterized by the Fourier coefficients, vn, of the charged-particle azimuthal angle distribution for n = 2–4. The Fourier coefficients are evaluated using multi-particle cumulants calculated with the generating function method. Results on the transverse momentum, pseudorapidity and centrality dependence of the vn coefficients are presented. The elliptic flow, v2, is obtained from the two-, four-, six- and eight-particle cumulants, while higher-order coefficients, v3 and v4, are determined with two- and four-particle cumulants. Flow harmonics vn measured with four-particle cumulants are significantly reduced compared to the measurement involving two-particle cumulants. A comparison to vn measurements obtained using different analysis methods and previously reported by the LHC experiments is also shown. Results of measurements of flow fluctuations evaluated with multi-particle cumulants are shown as a function of transverse momentum and collision centrality. Models of the initial spatial geometry and its fluctuations fail to describe the flow-fluctuation measurements.
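The two-particle cumulant idea can be illustrated with a toy Monte Carlo. This is a sketch only, not the ATLAS generating-function analysis; the multiplicity, event count and input v2 are invented for the example.

```python
import numpy as np

# Toy estimate of the elliptic-flow coefficient v2 with the two-particle
# Q-cumulant method: sample azimuthal angles from dN/dphi ∝ 1 + 2 v2 cos(2 phi),
# then recover v2 from
#   c2{2} = (|Q_2|^2 - M) / (M (M - 1)),   v2{2} = sqrt(c2{2}),
# where Q_n = sum_j exp(i n phi_j) and M is the event multiplicity.

rng = np.random.default_rng(0)
v2_true = 0.10

def sample_event(mult):
    """Accept-reject sampling of phi from 1 + 2 v2 cos(2 phi)."""
    phis = []
    while len(phis) < mult:
        phi = rng.uniform(0.0, 2.0 * np.pi)
        if rng.uniform(0.0, 1.0 + 2.0 * v2_true) < 1.0 + 2.0 * v2_true * np.cos(2.0 * phi):
            phis.append(phi)
    return np.array(phis)

num, den = 0.0, 0.0
for _ in range(200):            # 200 toy events
    phi = sample_event(500)     # multiplicity M = 500
    M = len(phi)
    Q2 = np.sum(np.exp(2j * phi))
    num += abs(Q2) ** 2 - M     # removes self-correlation pairs (i == j)
    den += M * (M - 1)

v2_est = np.sqrt(num / den)
print(v2_est)  # close to the input v2 of 0.10
```

In real data, two-particle cumulants also pick up non-flow correlations (jets, resonance decays), which is why the four- and higher-particle cumulants quoted in the abstract give systematically lower, cleaner values.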

8 citations

Journal ArticleDOI
Georges Aad, Brad Abbott, Dale Charles Abbott, A. Abed Abud, +3017 more (225 institutions)
TL;DR: This erratum notes that a wrong cross-section was used for the theory prediction in figure 6 because the VHH contamination was not properly taken into account in the rescaling formula for the signal samples.
Abstract: One correction is noted for the paper. A wrong cross-section was used for the theory prediction in figure 6 because the VHH contamination was not properly taken into account in the rescaling formula for the signal samples.

8 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast at first encounter, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
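The mail-filtering example can be made concrete with a toy learner. The naive Bayes classifier below is an illustrative choice, not a method from the article, and the training messages are invented.

```python
from collections import Counter
import math

# Toy spam filter that learns rules from labeled examples instead of being
# hand-programmed: count word frequencies per class, then score new messages
# with Laplace-smoothed log-probabilities.

class NaiveBayesFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        scores = {}
        for label in ("spam", "ham"):
            total = sum(self.word_counts[label].values())
            # Log prior + Laplace-smoothed log likelihood of each word.
            score = math.log(self.msg_counts[label] / sum(self.msg_counts.values()))
            for w in text.lower().split():
                score += math.log((self.word_counts[label][w] + 1) / (total + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

f = NaiveBayesFilter()
f.train("win money now", "spam")
f.train("cheap money offer", "spam")
f.train("meeting agenda attached", "ham")
f.train("lunch next week", "ham")
print(f.classify("free money"))  # spam
```

Each time the user rejects or keeps a message, another `train` call updates the counts, which is exactly the "maintain the filtering rules automatically" behavior the abstract describes.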

13,246 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveal a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These …

9,929 citations

Journal ArticleDOI
Georges Aad, T. Abajyan, Brad Abbott, Jalal Abdallah, +2964 more (200 institutions)
TL;DR: In this article, a search for the Standard Model Higgs boson in proton–proton collisions with the ATLAS detector at the LHC is presented; the observed excess has a significance of 5.9 standard deviations, corresponding to a background fluctuation probability of 1.7 × 10⁻⁹.

9,282 citations