Jet tagging via particle clouds
Huilin Qu, Loukas Gouskos
TL;DR
This work proposes ParticleNet, a customized neural network architecture using Dynamic Graph Convolutional Neural Network for jet tagging, which achieves state-of-the-art performance on two representative jet tagging benchmarks and improves significantly over existing methods.

Abstract
How to represent a jet is at the core of machine learning on jet physics. Inspired by the notion of point clouds, we propose a new approach that considers a jet as an unordered set of its constituent particles, effectively a "particle cloud." Such a particle cloud representation of jets is efficient in incorporating raw information of jets and also explicitly respects the permutation symmetry. Based on the particle cloud representation, we propose ParticleNet, a customized neural network architecture using Dynamic Graph Convolutional Neural Network for jet tagging problems. The ParticleNet architecture achieves state-of-the-art performance on two representative jet tagging benchmarks and improves significantly over existing methods.
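The core operation behind the Dynamic Graph Convolutional Neural Network used by ParticleNet is EdgeConv: build a k-nearest-neighbor graph over the particles, form edge features from each particle and its neighbors, apply a shared transformation, and aggregate with a symmetric (permutation-invariant) function. The following is a minimal NumPy sketch of one such block under simplified assumptions (a single linear layer plus ReLU instead of a full MLP, static coordinates); the actual architecture stacks several blocks and recomputes neighbors dynamically in the learned feature space.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors of each point (excluding itself)."""
    # pairwise squared Euclidean distances, shape (N, N)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-matches
    return np.argsort(d2, axis=1)[:, :k]  # shape (N, k)

def edge_conv(features, coords, k, weight):
    """One EdgeConv-style aggregation: for each particle, build edge
    features (x_i, x_j - x_i) to its k neighbors, apply a shared linear
    map + ReLU, and take a max over neighbors (permutation-invariant)."""
    idx = knn_indices(coords, k)                       # (N, k)
    x_i = np.repeat(features[:, None, :], k, axis=1)   # (N, k, F)
    x_j = features[idx]                                # (N, k, F)
    edges = np.concatenate([x_i, x_j - x_i], axis=-1)  # (N, k, 2F)
    h = np.maximum(edges @ weight, 0.0)                # shared layer + ReLU
    return h.max(axis=1)                               # (N, F_out)

# toy "particle cloud": 10 particles with 2-D coordinates (eta, phi)
# and 4 input features each; all sizes here are illustrative
rng = np.random.default_rng(0)
coords = rng.normal(size=(10, 2))
feats = rng.normal(size=(10, 4))
w = rng.normal(size=(8, 16)) * 0.1  # 2F = 8 inputs -> 16 output channels
out = edge_conv(feats, coords, k=3, weight=w)
print(out.shape)  # (10, 16)
```

Because both the k-NN construction and the max aggregation are insensitive to the ordering of the input rows, the output treats the jet as an unordered set, which is the permutation symmetry the abstract refers to.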
Citations
Journal Article
Graph convolutional networks: a comprehensive review
TL;DR: A comprehensive review of the emerging field of graph convolutional networks, one of the most prominent graph deep learning models, is conducted; several open challenges are presented and potential directions for future research are discussed.
Journal Article
Jet Substructure at the Large Hadron Collider: A Review of Recent Advances in Theory and Machine Learning
TL;DR: A comprehensive review of state-of-the-art theoretical and machine learning developments in jet substructure is provided in this article, which is meant both as a pedagogical introduction and as a comprehensive reference for experts.
Journal Article
Graph Neural Networks in Particle Physics
TL;DR: Various applications of graph neural networks in particle physics are reviewed, including different graph constructions, model architectures and learning objectives, as well as key open problems in particle physics for which graph neural networks are promising.
Journal Article
Learning representations of irregular particle-detector geometry with distance-weighted graph networks
TL;DR: In this paper, the authors explore the use of graph networks to handle detectors with irregular geometry in the context of particle reconstruction; they introduce two distance-weighted graph network architectures, dubbed GarNet and GravNet layers, and apply them to a typical particle reconstruction task.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, winning 1st place in the ILSVRC 2015 classification task.
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
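The "adaptive estimates of lower-order moments" mentioned above are exponential moving averages of the gradient and of its elementwise square, each with a bias correction. A minimal NumPy sketch of one Adam update, using the standard default hyperparameters (the toy objective and step count are illustrative):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moving averages of the gradient (first moment)
    and squared gradient (second moment), with bias correction."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)  # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)  # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# toy problem: minimize f(x) = x^2 starting from x = 3
theta = np.array([3.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 2001):           # t is 1-indexed for bias correction
    grad = 2 * theta               # gradient of x^2
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
print(float(theta[0]))             # close to 0
```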
Journal Article
Dropout: a simple way to prevent neural networks from overfitting
TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
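The mechanism is simple: during training, each unit is zeroed with probability p, and (in the common "inverted dropout" variant sketched below) the survivors are rescaled so the expected activation is unchanged; at test time the layer is the identity. A minimal NumPy illustration:

```python
import numpy as np

def dropout(x, p, rng, training=True):
    """Inverted dropout: zero each unit with probability p during training
    and rescale by 1/(1-p) so the expected activation is unchanged;
    at test time the layer passes inputs through untouched."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p   # keep each unit with probability 1-p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(42)
x = np.ones((4, 8))
y = dropout(x, p=0.5, rng=rng)
# surviving entries are scaled to 2.0, the rest are 0
```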
Proceedings Article
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Proceedings Article
Rethinking the Inception Architecture for Computer Vision
TL;DR: In this article, the authors explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization.