Author

Brad Abbott

Bio: Brad Abbott is an academic researcher at the University of Oklahoma. His research focuses on the Large Hadron Collider and the Higgs boson. He has an h-index of 137 and has co-authored 1,566 publications receiving 98,604 citations. His previous affiliations include Aix-Marseille University and Purdue University.


Papers
Journal Article · DOI
Georges Aad, T. Abajyan, Brad Abbott, J. Abdallah, +2,936 more (203 institutions)
TL;DR: In this article, the distributions of event-by-event harmonic flow coefficients $v_n$ for $n = 2$–$4$ are measured in $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV Pb+Pb collisions using the ATLAS detector at the LHC.
Abstract: The distributions of event-by-event harmonic flow coefficients $v_n$ for $n = 2$–$4$ are measured in $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV Pb+Pb collisions using the ATLAS detector at the LHC. The measurements are performed u ...
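To make the observable concrete: for one event with $M$ particles at azimuthal angles $\varphi_j$, the observed $n$-th harmonic is the magnitude of the per-event flow vector, $v_n^{\mathrm{obs}} = |\sum_j e^{in\varphi_j}|/M$. Below is a minimal Python sketch on toy data; it illustrates the quantity only, not the unfolding of statistical fluctuations that the actual measurement requires.

```python
import numpy as np

def flow_coefficient(phi, n):
    """Observed per-event harmonic flow coefficient |Q_n|/M, computed from
    the azimuthal angles phi (radians) of the particles in one event."""
    return abs(np.exp(1j * n * phi).mean())

# Toy event: accept-reject sampling of dN/dphi ~ 1 + 2*v2*cos(2*phi) with v2 = 0.1
rng = np.random.default_rng(0)
cand = rng.uniform(0.0, 2.0 * np.pi, 50_000)
phi = cand[rng.uniform(0.0, 2.0, cand.size) < 1.0 + 0.2 * np.cos(2.0 * cand)]
print(f"v2 ~ {flow_coefficient(phi, 2):.3f}")  # recovers roughly the input 0.1
```

At finite multiplicity $|Q_n|/M$ is smeared around the true $v_n$, which is why the observed distribution is unfolded before being reported.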

181 citations

Journal Article · DOI
Morad Aaboud, Alexander Kupco, Peter Davison, Samuel Webb, +2,897 more (195 institutions)
TL;DR: A search for the electroweak production of charginos, neutralinos and sleptons decaying into final states involving two or three electrons or muons is presented, and stringent limits at 95% confidence level are placed on the masses of the relevant supersymmetric particles.
Abstract: A search for the electroweak production of charginos, neutralinos and sleptons decaying into final states involving two or three electrons or muons is presented. The analysis is based on 36.1 fb$^{-1}$ of $\sqrt{s}=13$ TeV proton–proton collisions recorded by the ATLAS detector at the Large Hadron Collider. Several scenarios based on simplified models are considered. These include the associated production of the next-to-lightest neutralino and the lightest chargino, followed by their decays into final states with leptons and the lightest neutralino via either sleptons or Standard Model gauge bosons, direct production of chargino pairs, which in turn decay into leptons and the lightest neutralino via intermediate sleptons, and slepton pair production, where each slepton decays directly into the lightest neutralino and a lepton. No significant deviations from the Standard Model expectation are observed and stringent limits at 95% confidence level are placed on the masses of relevant supersymmetric particles in each of these scenarios. For a massless lightest neutralino, masses up to 580 GeV are excluded for the associated production of the next-to-lightest neutralino and the lightest chargino, assuming gauge-boson mediated decays, whereas for slepton-pair production masses up to 500 GeV are excluded assuming three generations of mass-degenerate sleptons.
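As a rough illustration of what "limits at 95% confidence level on the masses" means operationally: for each assumed mass point, one bounds the number of signal events still compatible with the observed data, and the mass is excluded when the predicted signal exceeds that bound. A minimal counting-experiment sketch (a textbook one-sided Poisson limit; the actual analysis uses the CLs method with profile-likelihood fits):

```python
from scipy.optimize import brentq
from scipy.stats import poisson

def upper_limit(n_obs, bkg, cl=0.95):
    """One-sided Poisson upper limit: the signal yield s at which observing
    n_obs or fewer events from mean s + bkg becomes a (1 - cl) tail."""
    return brentq(lambda s: poisson.cdf(n_obs, s + bkg) - (1.0 - cl), 0.0, 1e4)

# e.g. 3 events observed on an expected background of 2.5
print(f"s95 = {upper_limit(3, 2.5):.2f} signal events")  # ~5.3
```

A mass hypothesis is then excluded when its predicted cross-section, times luminosity and selection efficiency, yields more signal events than s95 allows.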

181 citations

Journal Article · DOI
V. M. Abazov, Brad Abbott, M. Abolins, Bobby Samir Acharya, +550 more (82 institutions)
TL;DR: The first measurement of the integrated forward-backward charge asymmetry in top-quark-top-antiquark pair ($t\bar{t}$) production in proton-antiproton ($p\bar{p}$) collisions in the lepton+jets final state was presented in this article.
Abstract: We present the first measurement of the integrated forward-backward charge asymmetry in top-quark-top-antiquark pair ($t\bar{t}$) production in proton-antiproton ($p\bar{p}$) collisions in the lepton+jets final state. Using a b-jet tagging algorithm and kinematic reconstruction assuming $t\bar{t}+X$ production and decay, a sample of 0.9 fb$^{-1}$ of data, collected by the D0 experiment at the Fermilab Tevatron Collider, is used to measure the asymmetry for different jet multiplicities. The result is also used to set upper limits on $t\bar{t}+X$ production via a $Z'$ resonance.
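Once each event's top and antitop rapidities are reconstructed, the integrated asymmetry itself reduces to a counting ratio, $A_{fb} = (N_F - N_B)/(N_F + N_B)$. A minimal sketch (using the common $\Delta y = y_t - y_{\bar{t}}$ sign convention for "forward"; the paper's exact definition may differ in detail):

```python
def forward_backward_asymmetry(delta_y):
    """Integrated asymmetry A_fb = (N_F - N_B) / (N_F + N_B), counting
    events by the sign of delta_y = y_top - y_antitop."""
    n_f = sum(1 for dy in delta_y if dy > 0)
    n_b = sum(1 for dy in delta_y if dy < 0)
    return (n_f - n_b) / (n_f + n_b)

print(forward_backward_asymmetry([0.3, -0.1, 0.8, 0.2, -0.5]))  # (3 - 2) / 5 = 0.2
```

The experimental work is in the inputs: b-tagging, kinematic reconstruction of the $t\bar{t}$ system, and correcting the raw counts for background and acceptance.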

181 citations

Journal Article · DOI
Morad Aaboud, Georges Aad, Brad Abbott, Dale Charles Abbott, +3,001 more (220 institutions)
TL;DR: In this paper, the decays $B^0_s \to \mu^+\mu^-$ and $B^0 \to \mu^+\mu^-$ have been studied using 26.3 fb$^{-1}$ of 13 TeV LHC proton-proton collision data collected with the ATLAS detector in 2015 and 2016.
Abstract: A study of the decays $B^0_s \to \mu^+\mu^-$ and $B^0 \to \mu^+\mu^-$ has been performed using 26.3 fb$^{-1}$ of 13 TeV LHC proton-proton collision data collected with the ATLAS detector in 2015 and 2016. Since the detector resolut ...
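The abstract is truncated here, but in dimuon studies of this kind the branching fraction is typically extracted by normalising the signal yield to a well-measured reference decay, so that luminosity and most detector uncertainties cancel. A generic form of that normalisation relation (a sketch of the standard approach, not necessarily the exact expression used in this paper):

$$\mathcal{B}(B^0_s \to \mu^+\mu^-) = \frac{N_{\mu\mu}}{N_{\mathrm{ref}}}\,\frac{\varepsilon_{\mathrm{ref}}}{\varepsilon_{\mu\mu}}\,\frac{f_u}{f_s}\,\mathcal{B}_{\mathrm{ref}}$$

where the $N$ are fitted yields, the $\varepsilon$ are total efficiencies, $f_u/f_s$ is the ratio of $b$-quark fragmentation fractions, and the reference channel is commonly $B^\pm \to J/\psi K^\pm$ with $J/\psi \to \mu^+\mu^-$.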

180 citations

Journal Article · DOI
Georges Aad, Brad Abbott, Dale Charles Abbott, A. Abed Abud, +2,954 more (198 institutions)
TL;DR: In this paper, the ATLAS trigger algorithms and selections are described; they were optimised to control trigger rates while retaining high efficiency for physics analyses, coping with a fourfold increase in peak LHC luminosity from 2015 to 2018 (Run 2) and a similar increase in the number of interactions per beam-crossing, to about 60.
Abstract: Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for the ATLAS experiment to record signals for a wide variety of physics: from Standard Model processes to searches for new phenomena in both proton–proton and heavy-ion collisions. To cope with a fourfold increase of peak LHC luminosity from 2015 to 2018 (Run 2), to $2.1\times10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, and a similar increase in the number of interactions per beam-crossing to about 60, trigger algorithms and selections were optimised to control the rates while retaining a high efficiency for physics analyses. For proton–proton collisions, the single-electron trigger efficiency relative to a single-electron offline selection is at least 75% for an offline electron of 31 GeV, and rises to 96% at 60 GeV; the trigger efficiency of a 25 GeV leg of the primary diphoton trigger relative to a tight offline photon selection is more than 96% for an offline photon of 30 GeV. For heavy-ion collisions, the primary electron and photon trigger efficiencies relative to the corresponding standard offline selections are at least 84% and 95%, respectively, at 5 GeV above the corresponding trigger threshold.
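The efficiencies quoted above are all of the form "trigger relative to offline": among events passing the offline selection, the fraction in which the trigger also fired. A minimal sketch of that ratio (toy data; in practice ATLAS measures it with tag-and-probe techniques):

```python
def trigger_efficiency(events, fired_trigger, passes_offline):
    """Fraction of offline-selected events in which the trigger also fired."""
    offline = [e for e in events if passes_offline(e)]
    return sum(1 for e in offline if fired_trigger(e)) / len(offline)

# Toy events: (offline electron E_T in GeV, trigger fired?)
events = [(28, False), (31, True), (45, True), (60, True), (33, False)]
eff = trigger_efficiency(events, lambda e: e[1], lambda e: e[0] >= 31)
print(f"efficiency = {eff:.2f}")  # 3 of the 4 offline-selected events fired -> 0.75
```

Binned in offline $E_T$, this same ratio traces out the turn-on curve behind numbers like "75% at 31 GeV rising to 96% at 60 GeV".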

180 citations


Cited by
Journal Article · DOI

[...]

08 Dec 2001 · BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
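The fourth category is easy to make concrete: a personalised mail filter can be learned from a handful of messages the user has kept or rejected. A minimal sketch using scikit-learn's bag-of-words features and Naive Bayes (the library choice and toy data are mine, purely illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training data: messages the user kept (0) or rejected (1)
messages = [
    "meeting moved to 3pm",
    "lunch tomorrow?",
    "WIN a FREE prize now",
    "cheap pills, limited offer",
]
labels = [0, 0, 1, 1]

vec = CountVectorizer()                 # bag-of-words features
clf = MultinomialNB().fit(vec.fit_transform(messages), labels)

print(clf.predict(vec.transform(["free prize offer"])))  # -> [1] (reject)
```

Retraining on each new keep/reject decision is exactly the "maintain the filtering rules automatically" behaviour the abstract describes.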

13,246 citations

Journal Article · DOI
Claude Amsler, Michael Doser, Mario Antonelli, D. M. Asner, +173 more (86 institutions)
TL;DR: This biennial Review summarizes much of particle physics, combining the data of previous editions with new measurements.

12,798 citations

Journal Article · DOI
01 Apr 1988 · Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit the submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known, especially about mud-dominated calciclastic submarine fan systems. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These …

9,929 citations