Open Access · Journal Article · DOI

Measurement of the Higgs boson production rate in association with top quarks in final states with electrons, muons, and hadronically decaying tau leptons at √s = 13 TeV

Albert M. Sirunyan, +2410 more
01 Jan 2021 · Vol. 81, Iss. 4, p. 378
TL;DR: The most sensitive measurement of the $\mathrm{t}\overline{\mathrm{t}}\mathrm{H}$ production rate to date was obtained, using machine-learning techniques and matrix-element methods to separate the signals from the backgrounds.
Abstract
The rate for Higgs ($\mathrm{H}$) boson production in association with either one ($\mathrm{t}\mathrm{H}$) or two ($\mathrm{t}\overline{\mathrm{t}}\mathrm{H}$) top quarks is measured in final states containing multiple electrons, muons, or tau leptons decaying to hadrons and a neutrino, using proton–proton collisions recorded at a center-of-mass energy of $13\,\text{TeV}$ by the CMS experiment. The analyzed data correspond to an integrated luminosity of $137\,\text{fb}^{-1}$. The analysis targets events that contain $\mathrm{H}\rightarrow\mathrm{W}\mathrm{W}$, $\mathrm{H}\rightarrow\uptau\uptau$, or $\mathrm{H}\rightarrow\mathrm{Z}\mathrm{Z}$ decays, with each top quark decaying via the lepton+jets or all-jets channel. Sensitivity to the signal is maximized by including ten signatures in the analysis, categorized by lepton multiplicity. The separation among $\mathrm{t}\mathrm{H}$, $\mathrm{t}\overline{\mathrm{t}}\mathrm{H}$, and the backgrounds is enhanced through machine-learning techniques and matrix-element methods. The measured production rates for the $\mathrm{t}\overline{\mathrm{t}}\mathrm{H}$ and $\mathrm{t}\mathrm{H}$ signals correspond to $0.92 \pm 0.19\,\text{(stat)}\,^{+0.17}_{-0.13}\,\text{(syst)}$ and $5.7 \pm 2.7\,\text{(stat)} \pm 3.0\,\text{(syst)}$ times their respective standard model (SM) expectations. The corresponding observed (expected) significance amounts to 4.7 (5.2) standard deviations for $\mathrm{t}\overline{\mathrm{t}}\mathrm{H}$ production, and to 1.4 (0.3) standard deviations for $\mathrm{t}\mathrm{H}$ production.
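As a rough illustration only (not the collaboration's full statistical procedure, which uses a profile-likelihood fit), the quoted statistical and systematic uncertainties on the signal strengths can be combined in quadrature to gauge the overall precision; all names below are illustrative:

```python
import math

def combine(stat, syst_up, syst_down):
    """Combine a statistical uncertainty with (possibly asymmetric)
    systematic uncertainties in quadrature; returns (up, down)."""
    return math.hypot(stat, syst_up), math.hypot(stat, syst_down)

# ttH: mu = 0.92 +/- 0.19 (stat) +0.17/-0.13 (syst)
tth_up, tth_down = combine(0.19, 0.17, 0.13)

# tH: mu = 5.7 +/- 2.7 (stat) +/- 3.0 (syst)
th_up, th_down = combine(2.7, 3.0, 3.0)

print(f"ttH signal strength: 0.92 +{tth_up:.2f} / -{tth_down:.2f}")
print(f"tH  signal strength: 5.7  +{th_up:.1f} / -{th_down:.1f}")
```

Quadrature addition assumes the statistical and systematic components are independent, which is a common first approximation when quoting a single total uncertainty.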
Assuming that the Higgs boson coupling to the tau lepton is equal in strength to its expectation in the SM, the coupling $y_{\mathrm{t}}$ of the Higgs boson to the top quark, divided by its SM expectation, $\kappa_{\mathrm{t}} = y_{\mathrm{t}}/y_{\mathrm{t}}^{\mathrm{SM}}$, is constrained to be within $-0.9 < \kappa_{\mathrm{t}} < -0.7$ or $0.7 < \kappa_{\mathrm{t}} < 1.1$, at 95% confidence level. This result is the most sensitive measurement of the $\mathrm{t}\overline{\mathrm{t}}\mathrm{H}$ production rate to date.
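A minimal sketch, encoding the two allowed 95% CL intervals on the coupling modifier $\kappa_{\mathrm{t}}$ quoted above and checking whether a given value is allowed (the function name is a hypothetical helper, not from the analysis code):

```python
# 95% CL intervals on kappa_t = y_t / y_t^SM, as quoted in the abstract
ALLOWED_INTERVALS = [(-0.9, -0.7), (0.7, 1.1)]

def kappa_t_allowed(kappa):
    """True if kappa lies inside one of the allowed 95% CL intervals."""
    return any(lo < kappa < hi for lo, hi in ALLOWED_INTERVALS)

print(kappa_t_allowed(1.0))   # SM value kappa_t = 1: True
print(kappa_t_allowed(0.0))   # vanishing top coupling: False
```

Note that the disjoint negative interval reflects the quadratic dependence of the measured rates on the coupling: the data alone cannot fully resolve the sign of $\kappa_{\mathrm{t}}$.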

