Author

Aneesh Heintz

Bio: Aneesh Heintz is an academic researcher. The author has contributed to research in topics: Deep learning & Artificial intelligence. The author has an h-index of 1 and has co-authored 1 publication, receiving 15 citations.

Papers
Posted Content
TL;DR: A considerable speedup over CPU-based execution is possible, potentially enabling such algorithms to be used effectively in future computing workflows and the FPGA-based Level-1 trigger at the CERN Large Hadron Collider.
Abstract: We develop and study FPGA implementations of algorithms for charged particle tracking based on graph neural networks. The two complementary FPGA designs are based on OpenCL, a framework for writing programs that execute across heterogeneous platforms, and hls4ml, a high-level-synthesis-based compiler for neural network to firmware conversion. We evaluate and compare the resource usage, latency, and tracking performance of our implementations based on a benchmark dataset. We find a considerable speedup over CPU-based execution is possible, potentially enabling such algorithms to be used effectively in future computing workflows and the FPGA-based Level-1 trigger at the CERN Large Hadron Collider.
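The firmware-conversion step mentioned above hinges on mapping network weights and activations onto fixed-point FPGA arithmetic. The following is a minimal illustrative sketch of `ap_fixed<16,6>`-style quantization of the kind high-level-synthesis flows rely on, not code from the paper; the function name and defaults are assumptions:

```python
def to_fixed_point(x, total_bits=16, int_bits=6):
    """Round a value to ap_fixed<total_bits, int_bits>-style precision:
    a fixed number of integer bits, the rest fractional."""
    frac_bits = total_bits - int_bits
    scale = 2 ** frac_bits                # size of one fractional step
    lo = -(2 ** (int_bits - 1))           # most negative representable value
    hi = 2 ** (int_bits - 1) - 1 / scale  # most positive representable value
    q = round(x * scale) / scale          # snap to the nearest step
    return min(max(q, lo), hi)            # saturate out-of-range values

# Weights survive at reduced precision; out-of-range values saturate.
quantized = [to_fixed_point(w) for w in [0.12345, -1.5, 40.0]]
```

Choosing the bit widths trades resource usage against accuracy, which is one reason FPGA designs are evaluated on both resource usage and tracking performance.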

34 citations

Proceedings ArticleDOI
03 Jan 2022
TL;DR: The authors propose multiscale super-resolution using a deep normalizing flow network that provides uncertainty-quantified Monte Carlo estimates, so that image enhancement for spacecraft vision tasks may be more robust and predictable.
Abstract: Many onboard vision tasks for spacecraft navigation require high-quality remote-sensing images with clearly decipherable features. However, design constraints and the operational and environmental conditions limit their quality. Enhancing images through postprocessing is a cost-efficient solution. Current deep learning methods that enhance low-resolution images through super-resolution do not quantify network uncertainty of predictions and are trained at a single scale, which hinders practical integration in image-acquisition pipelines. This work proposes performing multiscale super-resolution using a deep normalizing flow network for uncertainty-quantified and Monte Carlo estimates so that image enhancement for spacecraft vision tasks may be more robust and predictable. The proposed network architecture outperforms state-of-the-art super-resolution models on in-orbit lunar imagery data. Simulations demonstrate its viability on task-based evaluations for landmark identification.
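The Monte Carlo uncertainty estimates described above can be illustrated independently of the paper's network: a stochastic model is sampled many times and a per-pixel mean and spread are computed. The `stochastic_upscale` stand-in below is hypothetical, not the authors' normalizing-flow model:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_upscale(lr_image, scale=2):
    """Stand-in for a stochastic super-resolution model: each call draws a
    fresh latent sample, so repeated calls give different plausible outputs."""
    hr = np.kron(lr_image, np.ones((scale, scale)))  # naive nearest-neighbor upsample
    return hr + rng.normal(0.0, 0.05, hr.shape)      # latent-driven variation

lr = rng.random((8, 8))
samples = np.stack([stochastic_upscale(lr) for _ in range(64)])
sr_mean = samples.mean(axis=0)  # Monte Carlo estimate of the enhanced image
sr_std = samples.std(axis=0)    # per-pixel predictive uncertainty
```

A downstream task such as landmark identification can then discount regions where `sr_std` is high instead of trusting every pixel equally.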

Cited by
Posted Content
TL;DR: This chapter recapitulates the mathematical formalism of GNNs and highlights aspects to consider when designing these networks for HEP data, including graph construction, model architectures, learning objectives, and graph pooling.
Abstract: Machine learning methods have a long history of applications in high energy physics (HEP). Recently, there is a growing interest in exploiting these methods to reconstruct particle signatures from raw detector data. In order to benefit from modern deep learning algorithms that were initially designed for computer vision or natural language processing tasks, it is common practice to transform HEP data into images or sequences. Conversely, graph neural networks (GNNs), which operate on graph data composed of elements with a set of features and their pairwise connections, provide an alternative way of incorporating weight sharing, local connectivity, and specialized domain knowledge. Particle physics data, such as the hits in a tracking detector, can generally be represented as graphs, making the use of GNNs natural. In this chapter, we recapitulate the mathematical formalism of GNNs and highlight aspects to consider when designing these networks for HEP data, including graph construction, model architectures, learning objectives, and graph pooling. We also review promising applications of GNNs for particle tracking and reconstruction in HEP and summarize the outlook for their deployment in current and future experiments.
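As a toy illustration of the formalism the chapter recapitulates (not code from it), one message-passing step with weight sharing across edges might look like the following; the shapes and weight names are assumptions:

```python
import numpy as np

def message_passing_step(node_feats, edges, w_msg, w_node):
    """One GNN step: each node aggregates transformed neighbor features
    over the graph's pairwise connections, then updates its own state."""
    n, d = node_feats.shape
    agg = np.zeros((n, d))
    for src, dst in edges:                     # messages flow src -> dst
        agg[dst] += node_feats[src] @ w_msg    # same weights on every edge
    return np.tanh(node_feats @ w_node + agg)  # node-wise update

# Toy graph: 3 detector hits, 2 candidate segments as directed edges.
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = [(0, 1), (1, 2)]
rng = np.random.default_rng(1)
updated = message_passing_step(feats, edges,
                               rng.normal(size=(2, 2)), rng.normal(size=(2, 2)))
```

Because the update only sums over a node's incident edges, the layer exhibits exactly the local connectivity and weight sharing the chapter contrasts with image- and sequence-based approaches.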

41 citations

Journal ArticleDOI
TL;DR: In this paper, an end-to-end trainable, machine-learned particle-flow algorithm based on a parallelizable, computationally efficient, and scalable graph neural network, optimized using a multi-task objective on simulated events, is presented.
Abstract: In general-purpose particle detectors, the particle-flow algorithm may be used to reconstruct a comprehensive particle-level view of the event by combining information from the calorimeters and the trackers, significantly improving the detector resolution for jets and the missing transverse momentum. In view of the planned high-luminosity upgrade of the CERN Large Hadron Collider (LHC), it is necessary to revisit existing reconstruction algorithms and ensure that both the physics and computational performance are sufficient in an environment with many simultaneous proton–proton interactions (pileup). Machine learning may offer a prospect for computationally efficient event reconstruction that is well-suited to heterogeneous computing platforms, while significantly improving the reconstruction quality over rule-based algorithms for granular detectors. We introduce MLPF, a novel, end-to-end trainable, machine-learned particle-flow algorithm based on a parallelizable, computationally efficient, and scalable graph neural network optimized using a multi-task objective on simulated events. We report the physics and computational performance of the MLPF algorithm on a Monte Carlo dataset of top quark–antiquark pairs produced in proton–proton collisions in conditions similar to those expected for the high-luminosity LHC. The MLPF algorithm improves the physics response with respect to a rule-based benchmark algorithm and demonstrates computationally scalable particle-flow reconstruction in a high-pileup environment.
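The multi-task objective mentioned above can be sketched generically: one term classifies each particle candidate, another regresses its kinematics, and the two are combined with a weight. This is a hypothetical weighted sum for illustration, not MLPF's actual loss:

```python
import numpy as np

def multi_task_loss(cls_logits, cls_target, reg_pred, reg_target, alpha=1.0):
    """Combine a classification term (particle type) with a regression
    term (kinematics) into one trainable multi-task objective."""
    # Softmax cross-entropy over predicted particle classes.
    shifted = cls_logits - cls_logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    ce = -np.log(probs[np.arange(len(cls_target)), cls_target]).mean()
    # Mean-squared error on the regressed quantities.
    mse = ((reg_pred - reg_target) ** 2).mean()
    return ce + alpha * mse
```

Tuning `alpha` balances how strongly the shared network weights are pulled toward the classification versus the regression task.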

31 citations

Posted Content
TL;DR: Preliminary results are demonstrated by training a surrogate machine-learning model on real accelerator data to emulate the Booster environment, then using this surrogate model to train the neural network for its regulation task.
Abstract: We describe a method for precisely regulating the gradient magnet power supply at the Fermilab Booster accelerator complex using a neural network trained via reinforcement learning. We demonstrate preliminary results by training a surrogate machine-learning model on real accelerator data to emulate the Booster environment, and using this surrogate model in turn to train the neural network for its regulation task. We additionally show how the neural networks to be deployed for control purposes may be compiled to execute on field-programmable gate arrays. This capability is important for operational stability in complicated environments such as an accelerator facility.
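The two-stage scheme described above — fit a surrogate on logged accelerator data, then train the controller against the surrogate rather than the live machine — can be miniaturized. Everything below (the linear plant, the proportional regulator, the grid search standing in for reinforcement learning) is an illustrative assumption, not the Booster setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stage 1: fit a surrogate of the plant from logged (state, action) -> next-state data.
states = rng.normal(size=500)
actions = rng.normal(size=500)
next_states = 0.9 * states + 0.5 * actions + rng.normal(0, 0.01, 500)
X = np.column_stack([states, actions])
a, b = np.linalg.lstsq(X, next_states, rcond=None)[0]  # learned dynamics coefficients

# Stage 2: tune a proportional regulator against the surrogate, not the real plant.
def surrogate_cost(gain, horizon=50):
    s, cost = 1.0, 0.0
    for _ in range(horizon):
        u = -gain * s      # regulator pushes the state toward zero
        s = a * s + b * u  # surrogate predicts the next state
        cost += s ** 2
    return cost

best_gain = min(np.linspace(0.0, 3.0, 61), key=surrogate_cost)
```

The appeal is the same as in the paper: the controller can be exercised and improved offline for many simulated episodes without risking operational stability at the facility.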

24 citations

Journal ArticleDOI
TL;DR: The Exa.TrkX project applied geometric learning concepts such as metric learning and graph neural networks to HEP particle tracking, achieving tracking efficiency and purity similar to production tracking algorithms.
Abstract: The Exa.TrkX project has applied geometric learning concepts such as metric learning and graph neural networks to HEP particle tracking. Exa.TrkX’s tracking pipeline groups detector measurements to form track candidates and filters them. The pipeline, originally developed using the TrackML dataset (a simulation of an LHC-inspired tracking detector), has been demonstrated on other detectors, including DUNE Liquid Argon TPC and CMS High-Granularity Calorimeter. This paper documents new developments needed to study the physics and computing performance of the Exa.TrkX pipeline on the full TrackML dataset, a first step towards validating the pipeline using ATLAS and CMS data. The pipeline achieves tracking efficiency and purity similar to production tracking algorithms. Crucially for future HEP applications, the pipeline benefits significantly from GPU acceleration, and its computational requirements scale close to linearly with the number of particles in the event.
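The graph-construction stage of such a pipeline can be caricatured without the learned embedding: hits are mapped into a space where same-track hits land close together, and nearby pairs become candidate edges for the downstream filter. The toy below uses raw 2-D positions as a stand-in for the learned embedding; it is an assumption-laden sketch, not the Exa.TrkX code:

```python
import numpy as np

def build_candidate_edges(embedded_hits, radius):
    """Connect every pair of hits whose embedded distance is below a
    radius: close pairs become candidate track segments for filtering."""
    n = len(embedded_hits)
    diffs = embedded_hits[:, None, :] - embedded_hits[None, :, :]
    dist = np.linalg.norm(diffs, axis=-1)  # all pairwise distances at once
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if dist[i, j] < radius]

# Toy "embedding": two hits from one particle close together, one stray hit far away.
hits = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
edges = build_candidate_edges(hits, radius=0.5)  # -> [(0, 1)]
```

The all-pairs distance matrix here is O(n²); the near-linear scaling reported in the abstract is one reason real pipelines use spatial indexing and GPU acceleration instead.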

17 citations

Journal ArticleDOI
TL;DR: A review of supervised and unsupervised strategies for colloidal material design can be found in this paper, where a collection of computational approaches ranging from quantum chemistry to molecular dynamics and continuum modeling is discussed.

16 citations