Open Access · Journal Article

JEDI-net: a jet identification algorithm based on interaction networks

TLDR
In this paper, the performance of a jet identification algorithm based on interaction networks (JEDI-net) is investigated for identifying all-hadronic decays of high-momentum heavy particles produced at the LHC and distinguishing them from ordinary jets originating from the hadronization of quarks and gluons.
Abstract
We investigate the performance of a jet identification algorithm based on interaction networks (JEDI-net) to identify all-hadronic decays of high-momentum heavy particles produced at the LHC and distinguish them from ordinary jets originating from the hadronization of quarks and gluons. The jet dynamics are described as a set of one-to-one interactions between the jet constituents. Based on a representation learned from these interactions, the jet is associated with one of the considered categories. Unlike other architectures, the JEDI-net models achieve their performance without special handling of the sparse input jet representation, extensive pre-processing, particle ordering, or specific assumptions regarding the underlying detector geometry. The presented models give better results with fewer model parameters, offering interesting prospects for LHC applications.
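To make the architecture concrete, below is a minimal PyTorch sketch of the interaction-network pattern the abstract describes: a relational network f_R applied to every ordered pair of constituents, aggregation of the resulting effects, a per-constituent object network f_O, and permutation-invariant pooling into a classifier. The layer sizes, constituent count, and class count here are illustrative assumptions, not the exact JEDI-net configuration.

```python
# Minimal sketch of an interaction-network jet classifier (assumed sizes).
import torch
import torch.nn as nn


class InteractionNetwork(nn.Module):
    """Pairwise effects f_R -> aggregation -> per-object f_O -> classifier."""

    def __init__(self, n_constituents=30, n_features=16,
                 hidden=64, de=8, do=8, n_classes=5):
        super().__init__()
        self.n = n_constituents
        # Receiver/sender indices for every ordered pair (i != j).
        idx = [(r, s) for r in range(self.n) for s in range(self.n) if r != s]
        self.register_buffer("recv", torch.tensor([p[0] for p in idx]))
        self.register_buffer("send", torch.tensor([p[1] for p in idx]))
        # f_R: learns an effect from each ordered pair of constituents.
        self.f_R = nn.Sequential(
            nn.Linear(2 * n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, de), nn.ReLU())
        # f_O: updates each constituent from its features plus received effects.
        self.f_O = nn.Sequential(
            nn.Linear(n_features + de, hidden), nn.ReLU(),
            nn.Linear(hidden, do), nn.ReLU())
        self.classifier = nn.Linear(do, n_classes)

    def forward(self, x):  # x: (batch, n_constituents, n_features)
        pairs = torch.cat([x[:, self.recv], x[:, self.send]], dim=-1)
        effects = self.f_R(pairs)                    # (batch, n*(n-1), de)
        # Sum the effects arriving at each receiving constituent.
        agg = x.new_zeros(x.size(0), self.n, effects.size(-1))
        agg.index_add_(1, self.recv, effects)
        out = self.f_O(torch.cat([x, agg], dim=-1))  # (batch, n, do)
        return self.classifier(out.sum(dim=1))       # order-independent pooling


model = InteractionNetwork()
logits = model(torch.randn(4, 30, 16))  # e.g. a batch of 4 jets
```

Summing effects per receiver and pooling over constituents is what removes any dependence on particle ordering, consistent with the abstract's claim.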

Citations
Journal Article

Graph Neural Networks in Particle Physics

TL;DR: Various applications of graph neural networks in particle physics are reviewed, including different graph constructions, model architectures and learning objectives, as well as key open problems in particle physics for which graph neural networks are promising.
Journal Article

Interaction networks for the identification of boosted H → bb̄ decays

TL;DR: In this article, an interaction network is used to identify high-transverse-momentum Higgs bosons decaying to bottom quark-antiquark pairs and distinguish them from ordinary jets that reflect the configurations of quarks and gluons at short distances.
Journal Article

ABCNet: an attention-based method for particle tagging

TL;DR: A graph neural network enhanced by attention mechanisms, called ABCNet, is proposed; it demonstrates the advantages and flexibility of treating collider data as a point cloud and shows improved performance compared to other available algorithms.
Journal Article

Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors

TL;DR: This paper introduces a method for designing optimally heterogeneously quantized versions of deep neural network models for minimum-energy, high-accuracy, nanosecond inference and fully automated deployment on chip.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
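As a concrete illustration of the update summarized above, here is a short sketch of a single Adam step; the hyperparameter defaults shown (b1 = 0.9, b2 = 0.999, eps = 1e-8) are the values suggested in the paper, while the toy objective in the usage example is arbitrary.

```python
# Hedged sketch of one Adam step: adaptive estimates of first and second moments.
import numpy as np


def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad           # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2      # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias correction (t starts at 1)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v


# Usage: minimise f(theta) = theta^2 from theta = 5, so grad = 2 * theta.
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.05)
print(theta)  # approaches the minimum at 0
```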
Proceedings Article

Rectified Linear Units Improve Restricted Boltzmann Machines

TL;DR: Replacing the binary stochastic hidden units of restricted Boltzmann machines with rectified linear units yields features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset.
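For concreteness, the sketch below shows the noisy rectified linear unit that the paper substitutes for binary stochastic hidden units, sampling max(0, x + noise) with the noise variance given by the sigmoid of the pre-activation; the function names are illustrative.

```python
# Sketch of a noisy ReLU (NReLU) hidden unit for RBM training.
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def nrelu_sample(x, rng=np.random.default_rng()):
    """Stochastic hidden activity: max(0, x + N(0, sigmoid(x)))."""
    return np.maximum(0.0, x + rng.normal(0.0, np.sqrt(sigmoid(x))))


h = nrelu_sample(np.array([-2.0, 0.0, 3.0]))  # stochastic hidden activities
```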
Proceedings Article

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding

TL;DR: Deep Compression reduces the storage requirement of neural networks by 35x to 49x without affecting their accuracy, using a three-stage pipeline: pruning, trained quantization, and Huffman coding.
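To make the pipeline concrete, the sketch below illustrates the first two stages, magnitude pruning and k-means weight sharing, on a flat weight array. The retraining steps and the Huffman-coding stage of the actual method are omitted, and the sparsity and cluster-count values are arbitrary assumptions.

```python
# Illustrative sketch of pruning and trained-quantization stages (no retraining).
import numpy as np


def prune(weights, sparsity=0.9):
    """Stage 1: zero out the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)


def quantize(weights, n_clusters=16):
    """Stage 2: share weights via 1-D k-means (cluster indices + codebook)."""
    nz = weights[weights != 0]
    centroids = np.linspace(nz.min(), nz.max(), n_clusters)  # linear init
    for _ in range(10):                                      # k-means refinement
        assign = np.argmin(np.abs(nz[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(assign == k):
                centroids[k] = nz[assign == k].mean()
    out = weights.copy()
    out[weights != 0] = centroids[assign]
    return out, centroids


w = np.random.randn(1024)
w_compressed, codebook = quantize(prune(w))  # 16 shared values + sparse mask
```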
Posted Content

ADADELTA: An Adaptive Learning Rate Method

Matthew D. Zeiler · 22 Dec 2012
TL;DR: A novel per-dimension learning rate method for gradient descent called ADADELTA that dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent is presented.
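A compact sketch of the per-dimension update summarized above: decaying averages of squared gradients and squared updates replace a hand-tuned global learning rate, using only first-order information. The rho and eps values are commonly quoted defaults and are assumptions here.

```python
# Hedged sketch of one ADADELTA step.
import numpy as np


def adadelta_step(theta, grad, Eg2, Edx2, rho=0.95, eps=1e-6):
    """One ADADELTA step; no learning rate hyperparameter is needed."""
    Eg2 = rho * Eg2 + (1 - rho) * grad ** 2                # running E[g^2]
    dx = -(np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps)) * grad
    Edx2 = rho * Edx2 + (1 - rho) * dx ** 2                # running E[dx^2]
    return theta + dx, Eg2, Edx2


# Usage: minimise f(theta) = theta^2 without choosing a step size.
theta, Eg2, Edx2 = 5.0, 0.0, 0.0
for _ in range(5000):
    theta, Eg2, Edx2 = adadelta_step(theta, 2 * theta, Eg2, Edx2)
print(theta)  # drifts toward the minimum at 0
```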
Journal ArticleDOI

The anti-$k_t$ jet clustering algorithm

TL;DR: The anti-$k_t$ algorithm behaves like an idealised cone algorithm, in that jets with only soft fragmentation are conical, active and passive areas are equal, the area anomalous dimensions are zero, the non-global logarithms are those of a rigid boundary, and the Milan factor is universal.
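The behaviour summarized above follows from the algorithm's distance measures, d_ij = min(1/kt_i^2, 1/kt_j^2) ΔR_ij^2 / R^2 and d_iB = 1/kt_i^2, which make clustering proceed outward from the hardest particles. Below is a naive O(n^3) Python sketch of the sequential recombination; the scalar pt-weighted recombination is a simplified stand-in for the four-momentum E-scheme, and production use should rely on FastJet.

```python
# Naive sketch of anti-kt sequential recombination (not optimised).
import numpy as np


def dij(pt_i, y_i, phi_i, pt_j, y_j, phi_j, R=0.4):
    """Pairwise anti-kt distance, weighted by the harder particle's 1/pt^2."""
    dphi = (phi_i - phi_j + np.pi) % (2 * np.pi) - np.pi
    dR2 = (y_i - y_j) ** 2 + dphi ** 2
    return min(pt_i ** -2, pt_j ** -2) * dR2 / R ** 2


def diB(pt_i):
    """Particle-beam distance."""
    return pt_i ** -2


def cluster(particles, R=0.4):
    """Cluster a list of (pt, y, phi) tuples into anti-kt jets."""
    parts, jets = list(particles), []
    while parts:
        n = len(parts)
        i_b = min(range(n), key=lambda i: diB(parts[i][0]))
        pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
        if pairs:
            i, j = min(pairs, key=lambda p: dij(*parts[p[0]], *parts[p[1]], R=R))
            if dij(*parts[i], *parts[j], R=R) < diB(parts[i_b][0]):
                # Recombine i and j (pt-weighted stand-in for the E-scheme).
                pt = parts[i][0] + parts[j][0]
                w_i, w_j = parts[i][0] / pt, parts[j][0] / pt
                merged = (pt,
                          w_i * parts[i][1] + w_j * parts[j][1],
                          w_i * parts[i][2] + w_j * parts[j][2])
                parts = [p for k, p in enumerate(parts) if k not in (i, j)]
                parts.append(merged)
                continue
        jets.append(parts.pop(i_b))  # smallest distance is to the beam
    return jets


jets = cluster([(100.0, 0.0, 0.0), (50.0, 0.1, 0.05), (5.0, 2.0, 1.0)])
```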