Author

Marvin Johnson

Other affiliations: GlaxoSmithKline, CERN, Stockholm University
Bio: Marvin Johnson is an academic researcher from Fermilab. The author has contributed to research in topics including the Large Hadron Collider and the Standard Model, has an h-index of 149, and has co-authored 1,827 publications receiving 119,520 citations. Previous affiliations of Marvin Johnson include GlaxoSmithKline and CERN.


Papers
Journal ArticleDOI
TL;DR: In this paper, results are presented from searches for the standard model Higgs boson in proton-proton collisions at 7 and 8 TeV in the CMS experiment at the LHC, using data samples corresponding to integrated luminosities of up to 5.1 fb⁻¹ at 7 TeV and 5.3 fb⁻¹ at 8 TeV; an excess of events above the expected background is observed at a mass near 125 GeV with a local significance of 5.0 standard deviations, where the expected significance for a standard model Higgs boson of that mass is 5.8 standard deviations.
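As an aside on what the significance figure means: a minimal sketch, assuming the standard one-sided Gaussian convention for turning a significance in standard deviations into a local p-value (the convention is common particle-physics practice, not something stated in the TL;DR above):

```python
from scipy.stats import norm

# One-sided tail probability: the chance that background alone
# fluctuates at least this far above its expectation.
z = 5.0
p_local = norm.sf(z)  # survival function, i.e. 1 - CDF
print(f"{z:.0f} sigma -> local p-value ~ {p_local:.1e}")  # ~2.9e-07
```

This is why "5 standard deviations" is treated as the discovery threshold: the background-only explanation sits at the few-in-ten-million level.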

8,857 citations

Journal ArticleDOI
TL;DR: The Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC) at CERN, described in this paper, was designed to study proton-proton (and lead-lead) collisions at a centre-of-mass energy of 14 TeV (5.5 TeV nucleon-nucleon) and at luminosities up to 10³⁴ cm⁻² s⁻¹.
Abstract: The Compact Muon Solenoid (CMS) detector is described. The detector operates at the Large Hadron Collider (LHC) at CERN. It was conceived to study proton-proton (and lead-lead) collisions at a centre-of-mass energy of 14 TeV (5.5 TeV nucleon-nucleon) and at luminosities up to 10³⁴ cm⁻² s⁻¹ (10²⁷ cm⁻² s⁻¹). At the core of the CMS detector sits a high-magnetic-field and large-bore superconducting solenoid surrounding an all-silicon pixel and strip tracker, a lead-tungstate scintillating-crystals electromagnetic calorimeter, and a brass-scintillator sampling hadron calorimeter. The iron yoke of the flux-return is instrumented with four stations of muon detectors covering most of the 4π solid angle. Forward sampling calorimeters extend the pseudorapidity coverage to high values (|η| ≤ 5), assuring very good hermeticity. The overall dimensions of the CMS detector are a length of 21.6 m, a diameter of 14.6 m, and a total weight of 12,500 t.
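To make the |η| ≤ 5 coverage concrete, here is a minimal sketch assuming the textbook definition of pseudorapidity, η = −ln tan(θ/2); the definition is standard collider kinematics rather than something spelled out in the abstract:

```python
import math

def theta_from_eta(eta: float) -> float:
    """Polar angle in degrees from the beam axis for a given pseudorapidity,
    inverting eta = -ln(tan(theta / 2))."""
    return math.degrees(2.0 * math.atan(math.exp(-eta)))

# |eta| <= 5 means the forward calorimeters reach to within about 0.77
# degrees of the beam line, which is what "very good hermeticity" refers to.
for eta in (0.0, 2.5, 5.0):
    print(f"eta = {eta:3.1f} -> theta = {theta_from_eta(eta):6.2f} deg")
```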

5,193 citations

Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, Ovsat Abdinov, and 5,117 more (314 institutions)
TL;DR: A measurement of the Higgs boson mass is presented based on the combined data samples of the ATLAS and CMS experiments at the CERN LHC in the H→γγ and H→ZZ→4ℓ decay channels.
Abstract: A measurement of the Higgs boson mass is presented based on the combined data samples of the ATLAS and CMS experiments at the CERN LHC in the H→γγ and H→ZZ→4ℓ decay channels. The results are obtained from a simultaneous fit to the reconstructed invariant mass peaks in the two channels and for the two experiments. The measured masses from the individual channels and the two experiments are found to be consistent among themselves. The combined measured mass of the Higgs boson is mH = 125.09 ± 0.21 (stat) ± 0.11 (syst) GeV.
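The mass is quoted with separate statistical and systematic uncertainties; a minimal sketch of the conventional in-quadrature combination (the combined ±0.24 GeV is my arithmetic under an uncorrelated-uncertainties assumption, not a number quoted in the abstract):

```python
import math

m_h, stat, syst = 125.09, 0.21, 0.11  # GeV, values from the abstract
total = math.sqrt(stat**2 + syst**2)  # treat stat and syst as uncorrelated
print(f"m_H = {m_h} +/- {total:.2f} GeV (stat and syst combined)")  # ~0.24 GeV
```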

1,567 citations

Journal ArticleDOI
12 Sep 2013-Nature
TL;DR: In this paper, a screen for de novo mutations was performed in patients with two classical epileptic encephalopathies: infantile spasms (n = 149) and Lennox-Gastaut syndrome (n = 115).
Abstract: Epileptic encephalopathies are a devastating group of severe childhood epilepsy disorders for which the cause is often unknown. Here we report a screen for de novo mutations in patients with two classical epileptic encephalopathies: infantile spasms (n = 149) and Lennox-Gastaut syndrome (n = 115). We sequenced the exomes of 264 probands, and their parents, and confirmed 329 de novo mutations. A likelihood analysis showed a significant excess of de novo mutations in the ∼4,000 genes that are the most intolerant to functional genetic variation in the human population (P = 2.9 × 10⁻³). Among these are GABRB3, with de novo mutations in four patients, and ALG13, with the same de novo mutation in two patients; both genes show clear statistical evidence of association with epileptic encephalopathy. Given the relevant site-specific mutation rates, the probabilities of these outcomes occurring by chance are P = 4.1 × 10⁻¹⁰ and P = 7.8 × 10⁻¹², respectively. Other genes with de novo mutations in this cohort include CACNA1A, CHD2, FLNA, GABRA1, GRIN1, GRIN2B, HNRNPU, IQSEC2, MTOR and NEDD4L. Finally, we show that the de novo mutations observed are enriched in specific gene sets including genes regulated by the fragile X protein (P < 10⁻⁸), as has been reported previously for autism spectrum disorders.
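A minimal sketch of the kind of chance-recurrence calculation described above, i.e. asking how likely it is to see several de novo hits in one gene across the cohort given a site-specific mutation rate; the per-gene rate below is an illustrative placeholder, not the paper's estimate:

```python
from scipy.stats import poisson

# Illustrative assumption: expected de novo mutations in one gene across
# the cohort is 2 copies x per-gene rate x number of probands. The rate
# here is a placeholder, not the paper's site-specific value.
per_gene_rate = 1e-5   # assumed de novo rate per gene copy per generation
probands = 264         # cohort size from the abstract
mu = 2 * per_gene_rate * probands

observed = 4           # e.g. four de novo hits in GABRB3
p_chance = poisson.sf(observed - 1, mu)  # P(X >= observed) under the null
print(f"P(>= {observed} hits by chance) = {p_chance:.1e}")
```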

1,254 citations

Journal ArticleDOI
TL;DR: In this paper, the cosmological results from a combined analysis of galaxy clustering and weak gravitational lensing, using 1321 deg² of griz imaging data from the first year of the Dark Energy Survey (DES Y1), were presented.
Abstract: We present cosmological results from a combined analysis of galaxy clustering and weak gravitational lensing, using 1321 deg² of griz imaging data from the first year of the Dark Energy Survey (DES Y1). We combine three two-point functions: (i) the cosmic shear correlation function of 26 million source galaxies in four redshift bins, (ii) the galaxy angular autocorrelation function of 650,000 luminous red galaxies in five redshift bins, and (iii) the galaxy-shear cross-correlation of luminous red galaxy positions and source galaxy shears. To demonstrate the robustness of these results, we use independent pairs of galaxy shape, photometric-redshift estimation and validation, and likelihood analysis pipelines. To prevent confirmation bias, the bulk of the analysis was carried out while "blind" to the true results; we describe an extensive suite of systematics checks performed and passed during this blinded phase. The data are modeled in flat ΛCDM and wCDM cosmologies, marginalizing over 20 nuisance parameters, varying 6 (for ΛCDM) or 7 (for wCDM) cosmological parameters including the neutrino mass density, and including the 457 × 457 element analytic covariance matrix. We find consistent cosmological results from these three two-point functions and from their combination obtain S8 ≡ σ8(Ωm/0.3)^0.5 = 0.773 (+0.026, −0.020) and Ωm = 0.267 (+0.030, −0.017) for ΛCDM; for wCDM, we find S8 = 0.782 (+0.036, −0.024), Ωm = 0.284 (+0.033, −0.030), and w = −0.82 (+0.21, −0.20) at 68% C.L. The precision of these DES Y1 constraints rivals that from the Planck cosmic microwave background measurements, allowing a comparison of structure in the very early and late Universe on equal terms. Although the DES Y1 best-fit values for S8 and Ωm are lower than the central values from Planck for both ΛCDM and wCDM, the Bayes factor indicates that the DES Y1 and Planck data sets are consistent with each other in the context of ΛCDM. Combining DES Y1 with Planck, baryonic acoustic oscillation measurements from SDSS, 6dF, and BOSS, and type Ia supernovae from the Joint Lightcurve Analysis data set, we derive very tight constraints on cosmological parameters: S8 = 0.802 ± 0.012 and Ωm = 0.298 ± 0.007 in ΛCDM, and w = −1.00 (+0.05, −0.04) in wCDM. Upcoming Dark Energy Survey analyses will provide more stringent tests of the ΛCDM model and extensions such as a time-varying equation of state of dark energy or modified gravity.
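Since S8 is defined as σ8(Ωm/0.3)^0.5, constraints can be converted between the two parametrizations directly; a minimal sketch using the combined central values quoted above (the inversion for σ8 is my arithmetic, not a result stated in the abstract):

```python
def s8(sigma8: float, omega_m: float) -> float:
    """S8 = sigma8 * (Omega_m / 0.3)**0.5, the lensing-weighted amplitude."""
    return sigma8 * (omega_m / 0.3) ** 0.5

# Invert the definition using the DES Y1 + external combined central
# values from the abstract: S8 = 0.802, Omega_m = 0.298.
S8, Om = 0.802, 0.298
sigma8 = S8 / (Om / 0.3) ** 0.5
print(f"sigma8 ~ {sigma8:.3f}")          # ~0.805
assert abs(s8(sigma8, Om) - S8) < 1e-12  # round-trip check
```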

1,201 citations


Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed at first an odd beast: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
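As an illustration of the fourth category, a minimal sketch of a per-user mail filter learned from labeled examples; the bag-of-words features, naive Bayes model, and scikit-learn pipeline are my choice of one common approach, not something the article prescribes:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data standing in for one user's kept and rejected mail.
messages = [
    "meeting moved to 3pm, agenda attached",
    "quarterly report draft for your review",
    "win a free cruise, click now",
    "cheap meds no prescription limited offer",
]
labels = ["keep", "keep", "reject", "reject"]

# Bag-of-words counts feeding a multinomial naive Bayes classifier: the
# filter's "rules" are the learned per-word likelihoods, and refitting on
# new labels keeps them up to date without a software engineer.
mail_filter = make_pipeline(CountVectorizer(), MultinomialNB())
mail_filter.fit(messages, labels)

print(mail_filter.predict(["free offer, click to win"]))      # ['reject']
print(mail_filter.predict(["draft agenda for the meeting"]))  # ['keep']
```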

13,246 citations

Journal ArticleDOI
Claude Amsler, Michael Doser, Mario Antonelli, D. M. Asner, and 173 more (86 institutions)
TL;DR: This biennial Review summarizes much of particle physics, combining data from previous editions with new measurements, evaluations, and averages.

12,798 citations