-
PQuantML: A Tool for End-to-End Hardware-aware Model Compression
Authors:
Roope Niemi,
Anastasiia Petrovych,
Arghya Ranjan Das,
Enrico Lupi,
Chang Sun,
Dimitrios Danopoulos,
Marlon Joshua Helbing,
Mia Liu,
Sebastian Dittmeier,
Michael Kagan,
Vladimir Loncar,
Maurizio Pierini
Abstract:
PQuantML is a new open-source, hardware-aware neural network model compression library tailored to end-to-end workflows. Motivated by the need to deploy performant models to environments with strict latency constraints, PQuantML simplifies training of compressed models by providing a unified interface to apply pruning and quantization, either jointly or individually. The library implements multiple pruning methods with different granularities, as well as fixed-point quantization with support for High-Granularity Quantization. We evaluate PQuantML on representative tasks such as jet substructure classification (so-called jet tagging), an edge-computing problem related to real-time LHC data processing. Using various pruning methods with fixed-point quantization, PQuantML achieves substantial parameter and bit-width reductions while maintaining accuracy. The resulting compression is further compared against existing tools, such as QKeras and HGQ.
Submitted 27 March, 2026;
originally announced March 2026.
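The PQuantML API itself is not shown in this abstract; purely as a generic illustration of the two operations it unifies, the sketch below applies magnitude-based pruning followed by fixed-point quantization to a weight matrix with plain NumPy. The sparsity level and bit-widths are arbitrary choices, not values from the paper.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

def quantize_fixed_point(w, total_bits, int_bits):
    """Round to a signed fixed-point grid with `total_bits` bits, `int_bits` of them integer."""
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits
    max_val = 2.0 ** (int_bits - 1) - 1.0 / scale   # largest representable value
    min_val = -(2.0 ** (int_bits - 1))              # most negative representable value
    return np.clip(np.round(w * scale) / scale, min_val, max_val)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(16, 8))
w_compressed = quantize_fixed_point(magnitude_prune(w, sparsity=0.75), total_bits=8, int_bits=2)
print("nonzero fraction:", np.count_nonzero(w_compressed) / w_compressed.size)
```

In a quantization- and pruning-aware training loop these two projections would be applied in the forward pass so the network learns around them; applying them post hoc, as here, only illustrates the arithmetic.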
-
Machine Learning on Heterogeneous, Edge, and Quantum Hardware for Particle Physics (ML-HEQUPP)
Authors:
Julia Gonski,
Jenni Ott,
Shiva Abbaszadeh,
Sagar Addepalli,
Matteo Cremonesi,
Jennet Dickinson,
Giuseppe Di Guglielmo,
Erdem Yigit Ertorer,
Lindsey Gray,
Ryan Herbst,
Christian Herwig,
Tae Min Hong,
Benedikt Maier,
Maryam Bayat Makou,
David Miller,
Mark S. Neubauer,
Cristián Peña,
Dylan Rankin,
Seon-Hee Seo,
Giordon Stark,
Alexander Tapper,
Audrey Corbeil Therrien,
Ioannis Xiotidis,
Keisuke Yoshihara
, et al. (98 additional authors not shown)
Abstract:
The next generation of particle physics experiments will face a new era of challenges in data acquisition, due to unprecedented data rates and volumes along with extreme environments and operational constraints. Harnessing this data for scientific discovery demands real-time inference and decision-making, intelligent data reduction, and efficient processing architectures beyond current capabilities. Crucial to the success of this experimental paradigm are several emerging technologies, such as artificial intelligence and machine learning (AI/ML), silicon microelectronics, and the advent of quantum algorithms and processing. Their intersection includes areas of research such as low-power and low-latency devices for edge computing, heterogeneous accelerator systems, reconfigurable hardware, novel codesign and synthesis strategies, readout for cryogenic or high-radiation environments, and analog computing. This white paper presents a community-driven vision to identify and prioritize research and development opportunities in hardware-based ML systems and corresponding physics applications, contributing towards a successful transition to the new data frontier of fundamental science.
Submitted 10 March, 2026; v1 submitted 24 February, 2026;
originally announced February 2026.
-
Enabling Low-Latency Machine Learning on Radiation-Hard FPGAs with hls4ml
Authors:
Katya Govorkova,
Julian Garcia Pardinas,
Vladimir Loncar,
Victoria Nguyen,
Sebastian Schmitt,
Marco Pizzichemi,
Loris Martinazzoli,
Eluned Anne Smith
Abstract:
This paper presents the first demonstration of a viable, ultra-fast, radiation-hard machine learning (ML) application on FPGAs, which could be used in future high-energy physics experiments. We present a three-fold contribution, with the PicoCal calorimeter, planned for the LHCb Upgrade II experiment, used as a test case. First, we develop a lightweight autoencoder to compress a 32-sample timing readout, representative of that of the PicoCal, into a two-dimensional latent space. Second, we introduce a systematic, hardware-aware quantization strategy and show that the model can be reduced to 10-bit weights with minimal performance loss. Third, because a barrier to the adoption of on-detector ML is the lack of support for radiation-hard FPGAs in the High-Energy Physics community's standard ML synthesis tool, hls4ml, we develop a new backend for this library. This new backend enables the automatic translation of ML models into High-Level Synthesis (HLS) projects for the Microchip PolarFire family of FPGAs, one of the few commercially available, radiation-hard FPGA families. We present the synthesis of the autoencoder on a target PolarFire FPGA, which indicates that a latency of 25 ns can be achieved. We show that the resources utilized are low enough that the model can be placed within the inherently protected logic of the FPGA. Our extension to hls4ml is a significant contribution, paving the way for broader adoption of ML on FPGAs in high-radiation environments.
Submitted 17 February, 2026;
originally announced February 2026.
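As a rough sketch of the kind of model the abstract describes (a lightweight autoencoder compressing a 32-sample timing readout into a two-dimensional latent space), the Keras snippet below builds a tiny dense autoencoder. The hidden-layer sizes and activations are assumptions, not the architecture used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

n_samples = 32   # length of the timing readout window
latent_dim = 2   # two-dimensional latent space, as in the abstract

# Encoder: 32 samples -> 2 latent values (hidden size is illustrative).
encoder = models.Sequential([
    layers.Input(shape=(n_samples,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(latent_dim, activation="linear"),
], name="encoder")

# Decoder: 2 latent values -> reconstructed 32-sample waveform.
decoder = models.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(n_samples, activation="linear"),
], name="decoder")

autoencoder = models.Sequential([encoder, decoder], name="autoencoder")
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.summary()
```

A model of this shape is what would then be quantized to low-bit weights and translated to HLS for the PolarFire backend described above.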
-
AIE4ML: An End-to-End Framework for Compiling Neural Networks for the Next Generation of AMD AI Engines
Authors:
Dimitrios Danopoulos,
Enrico Lupi,
Chang Sun,
Sebastian Dittmeier,
Michael Kagan,
Vladimir Loncar,
Maurizio Pierini
Abstract:
Efficient AI inference on AMD's Versal AI Engine (AIE) is challenging due to tightly coupled VLIW execution, explicit datapaths, and local memory management. Prior work focused on first-generation AIE kernel optimizations, without tackling full neural network execution across the 2D array. In this work, we present AIE4ML, the first comprehensive framework for automatically converting AI models into optimized firmware targeting AIE-ML generation devices, with forward compatibility for the newer AIE-MLv2 architecture. At the single-kernel level, we attain performance close to the architectural peak. At the graph and system levels, we provide a structured parallelization method that can scale across the 2D AIE-ML fabric and exploit its dedicated memory tiles to stay entirely on-chip throughout model execution. As a demonstration, we designed a generalized and highly efficient linear-layer implementation with intrinsic support for fused bias addition and ReLU activation. Since our framework necessitates the generation of multi-layer implementations, our approach systematically derives deterministic, compact, and topology-optimized placements tailored to the physical 2D grid of the device through a novel graph placement and search algorithm. Finally, the framework seamlessly accepts quantized models imported from high-level tools such as hls4ml or PyTorch while preserving bit-exactness. In layer scaling benchmarks, we achieve up to 98.6% efficiency relative to the single-kernel baseline, utilizing 296 of the device's 304 AIE tiles (97.4%) with entirely on-chip data movement. With evaluations across real-world model topologies, we demonstrate that AIE4ML delivers GPU-class throughput under microsecond latency constraints, making it a practical companion for ultra-low-latency environments such as trigger systems in particle physics experiments.
Submitted 9 January, 2026; v1 submitted 17 December, 2025;
originally announced December 2025.
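The AIE4ML programming interface is not given in the abstract; as a bit-exact reference model of the fused operation it describes (a linear layer with fused bias addition and ReLU on quantized data), the NumPy snippet below computes the same result in integer arithmetic. The bit-widths and the final right-shift requantization are assumptions used only to make the example concrete.

```python
import numpy as np

def fused_linear_relu_int8(x_q, w_q, bias_q, out_shift):
    """Reference model of a fused linear + bias + ReLU layer on int8 data.

    x_q: (batch, in) int8 activations; w_q: (in, out) int8 weights;
    bias_q: (out,) int32 bias; out_shift: right shift that rescales the
    int32 accumulator back to int8 (a stand-in for real requantization).
    """
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32) + bias_q   # int32 accumulate
    acc = np.maximum(acc, 0)                                     # fused ReLU
    out = np.right_shift(acc, out_shift)                         # rescale
    return np.clip(out, -128, 127).astype(np.int8)

rng = np.random.default_rng(1)
x = rng.integers(-128, 128, size=(4, 64), dtype=np.int8)
w = rng.integers(-128, 128, size=(64, 32), dtype=np.int8)
b = rng.integers(-1000, 1000, size=(32,), dtype=np.int32)
print(fused_linear_relu_int8(x, w, b, out_shift=8).shape)  # (4, 32)
```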
-
SparsePixels: Efficient Convolution for Sparse Data on FPGAs
Authors:
Ho Fung Tsoi,
Dylan Rankin,
Vladimir Loncar,
Philip Harris
Abstract:
Inference of standard convolutional neural networks (CNNs) on FPGAs often incurs high latency and a long initiation interval due to the deep nested loops required to densely convolve every input pixel regardless of its feature value. However, input features can be spatially sparse in some image data, where semantic information may occupy only a small fraction of the pixels and most computation would be wasted on empty regions. In this work, we introduce SparsePixels, a framework that implements sparse convolution on FPGAs by selectively retaining and computing on a small subset of active pixels while ignoring the rest. We show that, for identifying neutrino interactions in naturally sparse LArTPC images with 4k pixels, a standard CNN with a compact size of 4k parameters incurs an inference latency of 48.665 $μ$s on an FPGA, whereas a sparse CNN of the same base architecture, computing on less than 1% of the input pixels, achieves a $73\times$ speedup to 0.665 $μ$s with resource utilization well within on-chip budgets, at the cost of only a percent-level performance loss. This work aims to benefit future algorithm development for efficient data readout in modern experiments with latency requirements of microseconds or below.
Submitted 15 December, 2025; v1 submitted 5 December, 2025;
originally announced December 2025.
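As a plain-Python illustration of the computation pattern behind SparsePixels (convolving only at active pixels instead of looping densely over the full image), the sketch below applies a 3x3 kernel only at nonzero pixel coordinates. It is a reference model of the idea, not the FPGA implementation, and the notion of "active" used here (value != 0) is a simplifying assumption.

```python
import numpy as np

def sparse_conv3x3(image, kernel):
    """Convolve only at active (nonzero) pixels; all other outputs stay zero."""
    H, W = image.shape
    out = np.zeros_like(image, dtype=float)
    active = np.argwhere(image != 0)            # coordinates of active pixels
    for r, c in active:
        acc = 0.0
        for dr in (-1, 0, 1):                   # 3x3 neighbourhood
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < H and 0 <= cc < W:
                    acc += image[rr, cc] * kernel[dr + 1, dc + 1]
        out[r, c] = acc
    return out

img = np.zeros((64, 64))
img[10, 12] = 1.0
img[11, 13] = 2.0                                # a naturally sparse input
k = np.ones((3, 3)) / 9.0
print(np.count_nonzero(sparse_conv3x3(img, k)))  # work scales with active pixels only
```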
-
hls4ml: A Flexible, Open-Source Platform for Deep Learning Acceleration on Reconfigurable Hardware
Authors:
Jan-Frederik Schulte,
Benjamin Ramhorst,
Chang Sun,
Jovan Mitrevski,
Nicolò Ghielmetti,
Enrico Lupi,
Dimitrios Danopoulos,
Vladimir Loncar,
Javier Duarte,
David Burnette,
Lauri Laatu,
Stylianos Tzelepis,
Konstantinos Axiotis,
Quentin Berthet,
Haoyan Wang,
Paul White,
Suleyman Demirsoy,
Marco Colombo,
Thea Aarrestad,
Sioni Summers,
Maurizio Pierini,
Giuseppe Di Guglielmo,
Jennifer Ngadiuba,
Javier Campos,
Ben Hawks
, et al. (28 additional authors not shown)
Abstract:
We present hls4ml, a free and open-source platform that translates machine learning (ML) models from modern deep learning frameworks into high-level synthesis (HLS) code that can be integrated into full designs for field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs). With its flexible and modular design, hls4ml supports a large number of deep learning frameworks and can target HLS compilers from several vendors, including Vitis HLS, Intel oneAPI, and Catapult HLS. Together with a wider ecosystem for software-hardware co-design, hls4ml has enabled the acceleration of ML inference in a wide range of commercial and scientific applications where latency, resource usage, and power consumption are critical. In this paper, we describe the structure and functionality of the hls4ml platform. The overarching design considerations for the generated HLS code are discussed, together with selected performance results.
Submitted 1 December, 2025;
originally announced December 2025.
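For readers unfamiliar with the workflow, a minimal example in the style of the hls4ml Python API is sketched below: a small Keras model is given a per-layer configuration and converted into an HLS project. The model architecture, output directory, and FPGA part number are placeholders, and the exact options available depend on the hls4ml release.

```python
import hls4ml
from tensorflow.keras import layers, models

# A small Keras model to translate (architecture is illustrative).
model = models.Sequential([
    layers.Input(shape=(16,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(5, activation="softmax"),
])

# Generate a per-layer configuration (precision, reuse factor, ...) and convert.
config = hls4ml.utils.config_from_keras_model(model, granularity="name")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    backend="Vitis",              # oneAPI and Catapult backends are also mentioned above
    output_dir="my_hls_project",  # placeholder path
    part="xcvu13p-flga2577-2-e",  # placeholder FPGA part
)

hls_model.compile()               # bit-accurate C emulation for validating the design
# hls_model.build(synth=True)     # runs the vendor HLS synthesis flow (slow)
```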
-
wa-hls4ml: A Benchmark and Surrogate Models for hls4ml Resource and Latency Estimation
Authors:
Benjamin Hawks,
Jason Weitz,
Dmitri Demler,
Karla Tame-Narvaez,
Dennis Plotnikov,
Mohammad Mehdi Rahimifar,
Hamza Ezzaoui Rahali,
Audrey C. Therrien,
Donovan Sproule,
Elham E Khoda,
Keegan A. Smith,
Russell Marroquin,
Giuseppe Di Guglielmo,
Nhan Tran,
Javier Duarte,
Vladimir Loncar
Abstract:
As machine learning (ML) is increasingly implemented in hardware to address real-time challenges in scientific applications, the development of advanced toolchains has significantly reduced the time required to iterate on various designs. These advancements have solved major obstacles, but also exposed new challenges. For example, processes that were not previously considered bottlenecks, such as hardware synthesis, are becoming limiting factors in the rapid iteration of designs. To mitigate these emerging constraints, multiple efforts have been undertaken to develop an ML-based surrogate model that estimates resource usage of ML accelerator architectures. We introduce wa-hls4ml, a benchmark for ML accelerator resource and latency estimation, and its corresponding initial dataset of over 680,000 fully connected and convolutional neural networks, all synthesized using hls4ml and targeting Xilinx FPGAs. The benchmark evaluates the performance of resource and latency predictors on several common ML model architectures, primarily originating from scientific domains, used as exemplar models, as well as their average performance across a subset of the dataset. Additionally, we introduce GNN- and transformer-based surrogate models that predict latency and resources for ML accelerators. We present the architecture and performance of the models and find that they generally predict latency and resources at the 75th percentile to within several percent of the synthesized values on the synthetic test dataset.
Submitted 6 November, 2025;
originally announced November 2025.
-
da4ml: Distributed Arithmetic for Real-time Neural Networks on FPGAs
Authors:
Chang Sun,
Zhiqiang Que,
Vladimir Loncar,
Wayne Luk,
Maria Spiropulu
Abstract:
Neural networks with a latency requirement on the order of microseconds, like the ones used at the CERN Large Hadron Collider, are typically deployed on FPGAs fully unrolled and pipelined. A bottleneck for the deployment of such neural networks is area utilization, which is directly related to the required constant matrix-vector multiplication (CMVM) operations. In this work, we propose an efficient algorithm for implementing CMVM operations with distributed arithmetic (DA) on FPGAs that simultaneously optimizes for area consumption and latency. The algorithm achieves resource reduction similar to state-of-the-art algorithms while being significantly faster to compute. The proposed algorithm is open-sourced and integrated into the hls4ml library, a free and open-source library for running real-time neural network inference on FPGAs. We show that the proposed algorithm can reduce on-chip resources by up to a third for realistic, highly quantized neural networks while simultaneously reducing latency, enabling the implementation of previously infeasible networks.
Submitted 6 July, 2025;
originally announced July 2025.
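For background on the distributed-arithmetic idea, the snippet below shows the textbook trick of replacing a multiplication by a constant with a few shifts and additions, using the canonical signed digit (CSD) form of the constant. da4ml optimizes whole constant matrix-vector products, including sharing of common subexpressions, so this single-coefficient example only illustrates the building block, not the paper's algorithm.

```python
def csd_digits(c):
    """Canonical signed-digit form of a positive integer constant.

    Returns (shift, sign) pairs with c == sum(sign * 2**shift), chosen so
    that no two adjacent digits are nonzero, minimizing add/subtract terms.
    """
    digits = []
    shift = 0
    while c != 0:
        if c & 1:
            d = -1 if (c & 3) == 3 else 1   # pick +/-1 so runs of ones collapse
            digits.append((shift, d))
            c -= d
        c >>= 1
        shift += 1
    return digits

def const_mult(x, c):
    """Multiply x by the constant c using only shifts and adds/subtracts."""
    return sum(sign * (x << shift) for shift, sign in csd_digits(c))

# 14 = 16 - 2, so multiplying by 14 costs one subtraction instead of a full multiplier.
print(csd_digits(14))        # [(1, -1), (4, 1)]
print(const_mult(3, 14))     # 42
```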
-
Building Machine Learning Challenges for Anomaly Detection in Science
Authors:
Elizabeth G. Campolongo,
Yuan-Tang Chou,
Ekaterina Govorkova,
Wahid Bhimji,
Wei-Lun Chao,
Chris Harris,
Shih-Chieh Hsu,
Hilmar Lapp,
Mark S. Neubauer,
Josephine Namayanja,
Aneesh Subramanian,
Philip Harris,
Advaith Anand,
David E. Carlyn,
Subhankar Ghosh,
Christopher Lawrence,
Eric Moreno,
Ryan Raikman,
Jiaman Wu,
Ziheng Zhang,
Bayu Adhi,
Mohammad Ahmadi Gharehtoragh,
Saúl Alonso Monsalve,
Marta Babicz,
Furqan Baig
, et al. (126 additional authors not shown)
Abstract:
Scientific discoveries are often made by finding a pattern or object that was not predicted by the known rules of science. Oftentimes, these anomalous events or objects that do not conform to the norms are an indication that the rules of science governing the data are incomplete, and something new needs to be present to explain these unexpected outliers. The challenge of finding anomalies can be confounding, since it requires codifying a complete knowledge of the known scientific behaviors and then projecting these known behaviors onto the data to look for deviations. When utilizing machine learning, this presents a particular challenge since we require that the model not only understand scientific data perfectly but also recognize when the data is inconsistent and outside the scope of its trained behavior. In this paper, we present three datasets aimed at developing machine learning-based anomaly detection for disparate scientific domains covering astrophysics, genomics, and polar science. We present the different datasets along with a scheme to make machine learning challenges around the three datasets findable, accessible, interoperable, and reusable (FAIR). Furthermore, we present an approach that generalizes to future machine learning challenges, enabling the possibility of larger, more compute-intensive challenges that can ultimately lead to scientific discovery.
Submitted 29 March, 2025; v1 submitted 3 March, 2025;
originally announced March 2025.
-
Comparative Analysis of FPGA and GPU Performance for Machine Learning-Based Track Reconstruction at LHCb
Authors:
Fotis I. Giasemis,
Vladimir Lončar,
Bertrand Granado,
Vladimir Vava Gligorov
Abstract:
In high-energy physics, the increasing luminosity and detector granularity at the Large Hadron Collider are driving the need for more efficient data processing solutions. Machine learning has emerged as a promising tool for reconstructing charged particle tracks, due to its potentially linear computational scaling with detector hits. The recent implementation of a graph neural network-based track reconstruction pipeline in the first-level trigger of the LHCb experiment on GPUs serves as a platform for comparative studies between computational architectures in the context of high-energy physics. This paper presents a novel comparison of the throughput of ML model inference between FPGAs and GPUs, focusing on the first step of the track reconstruction pipeline: an implementation of a multilayer perceptron. Using hls4ml for FPGA deployment, we benchmark its performance against the GPU implementation and demonstrate the potential of FPGAs for high-throughput, low-latency inference without the need for expertise in FPGA development, while consuming significantly less power.
Submitted 30 April, 2025; v1 submitted 4 February, 2025;
originally announced February 2025.
-
Intelligent experiments through real-time AI: Fast Data Processing and Autonomous Detector Control for sPHENIX and future EIC detectors
Authors:
J. Kvapil,
G. Borca-Tasciuc,
H. Bossi,
K. Chen,
Y. Chen,
Y. Corrales Morales,
H. Da Costa,
C. Da Silva,
C. Dean,
J. Durham,
S. Fu,
C. Hao,
P. Harris,
O. Hen,
H. Jheng,
Y. Lee,
P. Li,
X. Li,
Y. Lin,
M. X. Liu,
V. Loncar,
J. P. Mitrevski,
A. Olvera,
M. L. Purschke,
J. S. Renck
, et al. (8 additional authors not shown)
Abstract:
This R&D project, initiated by the DOE Nuclear Physics AI-Machine Learning initiative in 2022, leverages AI to address data processing challenges in high-energy nuclear experiments (RHIC, LHC, and the future EIC). Our focus is on developing a demonstrator for real-time processing of high-rate data streams from the sPHENIX experiment's tracking detectors. The limitations of a 15 kHz maximum trigger rate imposed by the calorimeters can be negated by intelligent use of streaming technology in the tracking system. The approach efficiently identifies low-momentum, rare heavy-flavor events in high-rate p+p collisions (3 MHz), using graph neural networks (GNNs) and High Level Synthesis for Machine Learning (hls4ml). Success at sPHENIX promises immediate benefits, minimizing resources and accelerating the heavy-flavor measurements. The approach is transferable to other fields. For the EIC, we develop a DIS-electron tagger using Artificial Intelligence - Machine Learning (AI-ML) algorithms for real-time identification, showcasing the transformative potential of AI and FPGA technologies in the real-time data processing pipelines of high-energy nuclear and particle experiments.
Submitted 8 January, 2025;
originally announced January 2025.
-
SymbolFit: Automatic Parametric Modeling with Symbolic Regression
Authors:
Ho Fung Tsoi,
Dylan Rankin,
Cecile Caillol,
Miles Cranmer,
Sridhara Dasu,
Javier Duarte,
Philip Harris,
Elliot Lipeles,
Vladimir Loncar
Abstract:
We introduce SymbolFit, a framework that automates parametric modeling by using symbolic regression to perform a machine-driven search for functions that fit the data while simultaneously providing uncertainty estimates in a single run. Traditionally, constructing a parametric model to accurately describe binned data has been a manual and iterative process, requiring an adequate functional form to be determined before the fit can be performed. The main challenge arises when the appropriate functional forms cannot be derived from first principles, especially when there is no underlying true closed-form function for the distribution. In this work, we develop a framework that automates and streamlines the process by utilizing symbolic regression, a machine learning technique that explores a vast space of candidate functions without requiring a predefined functional form, because the functional form itself is treated as a trainable parameter; this makes the process far more efficient and effortless than traditional regression methods. We demonstrate the framework in high-energy physics experiments at the CERN Large Hadron Collider (LHC) using five real proton-proton collision datasets from new physics searches, including background modeling in resonance searches for high-mass dijet, trijet, paired-dijet, diphoton, and dimuon events. We show that our framework can flexibly and efficiently generate a wide range of candidate functions that fit a nontrivial distribution well using a simple fit configuration that varies only by random seed. The same fit configuration, which defines a vast function space, can also be applied to distributions of different shapes, whereas achieving a comparable result with traditional methods would have required extensive manual effort.
Submitted 10 May, 2025; v1 submitted 14 November, 2024;
originally announced November 2024.
-
Low Latency Transformer Inference on FPGAs for Physics Applications with hls4ml
Authors:
Zhixing Jiang,
Dennis Yin,
Yihui Chen,
Elham E Khoda,
Scott Hauck,
Shih-Chieh Hsu,
Ekaterina Govorkova,
Philip Harris,
Vladimir Loncar,
Eric A. Moreno
Abstract:
This study presents an efficient implementation of transformer architectures on Field-Programmable Gate Arrays (FPGAs) using hls4ml. We demonstrate the strategy for implementing the multi-head attention, softmax, and normalization layers and evaluate three distinct models. Their deployment on a VU13P FPGA achieved a latency of less than 2 $μ$s, demonstrating the potential for real-time applications. hls4ml's compatibility with any TensorFlow-built transformer model further enhances the scalability and applicability of this work. Index Terms: FPGAs, machine learning, transformers, high energy physics, LIGO.
Submitted 8 September, 2024;
originally announced September 2024.
-
Reliable edge machine learning hardware for scientific applications
Authors:
Tommaso Baldi,
Javier Campos,
Ben Hawks,
Jennifer Ngadiuba,
Nhan Tran,
Daniel Diaz,
Javier Duarte,
Ryan Kastner,
Andres Meza,
Melissa Quinnan,
Olivia Weng,
Caleb Geniesse,
Amir Gholami,
Michael W. Mahoney,
Vladimir Loncar,
Philip Harris,
Joshua Agar,
Shuyu Qin
Abstract:
Extreme data rate scientific experiments create massive amounts of data that require efficient ML edge processing. This leads to unique validation challenges for VLSI implementations of ML algorithms: enabling bit-accurate functional simulations for performance validation in experimental software frameworks, verifying that those ML models are robust under extreme quantization and pruning, and enabling ultra-fine-grained model inspection for efficient fault tolerance. We discuss approaches to developing and validating reliable algorithms at the scientific edge under such strict latency, resource, power, and area requirements in extreme experimental environments. We study metrics for developing robust algorithms, present preliminary results and mitigation strategies, and conclude with an outlook on these and future directions of research towards the longer-term goal of developing autonomous scientific experimentation methods for accelerated scientific discovery.
Submitted 27 June, 2024;
originally announced June 2024.
-
HGQ: High Granularity Quantization for Real-time Neural Networks on FPGAs
Authors:
Chang Sun,
Zhiqiang Que,
Thea K. Årrestad,
Vladimir Loncar,
Jennifer Ngadiuba,
Wayne Luk,
Maria Spiropulu
Abstract:
Neural networks with sub-microsecond inference latency are required by many critical applications. Targeting such applications deployed on FPGAs, we present High Granularity Quantization (HGQ), a quantization-aware training framework that optimizes parameter bit-widths through gradient descent. Unlike conventional methods, HGQ determines the optimal bit-width for each parameter independently, making it suitable for hardware platforms supporting heterogeneous arbitrary precision arithmetic. In our experiments, HGQ shows superior performance compared to existing network compression methods, achieving orders of magnitude reduction in resource consumption and latency while maintaining the accuracy on several benchmark tasks. These improvements enable the deployment of complex models previously infeasible due to resource or latency constraints. HGQ is open-source and is used for developing next-generation trigger systems at the CERN ATLAS and CMS experiments for particle physics, enabling the use of advanced machine learning models for real-time data selection with sub-microsecond latency.
Submitted 19 December, 2025; v1 submitted 1 May, 2024;
originally announced May 2024.
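The abstract's key point is that every parameter can carry its own bit-width. As a conceptual illustration of that (not the HGQ API, and without the gradient-based bit-width optimization), the snippet below quantizes each weight of a small matrix with its own number of fractional bits and reports the total bit budget, which is the kind of quantity HGQ trades against accuracy during training.

```python
import numpy as np

def quantize_heterogeneous(w, frac_bits, int_bits=1):
    """Quantize each weight with its own number of fractional bits.

    w:         weight matrix
    frac_bits: integer array, same shape as w, giving per-weight fractional bits
    int_bits:  integer (incl. sign) bits, shared here for simplicity
    """
    scale = 2.0 ** frac_bits
    max_val = 2.0 ** (int_bits - 1) - 1.0 / scale
    min_val = -(2.0 ** (int_bits - 1))
    return np.clip(np.round(w * scale) / scale, min_val, max_val)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.3, size=(8, 4))
# In HGQ the per-parameter bit-widths are learned by gradient descent;
# here they are drawn at random purely to show the mechanics.
frac_bits = rng.integers(0, 7, size=w.shape)

w_q = quantize_heterogeneous(w, frac_bits)
total_bits = int(np.sum(frac_bits + 1))   # one integer/sign bit per weight
print("total weight bits:", total_bits, "max abs error:", float(np.max(np.abs(w - w_q))))
```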
-
Ultrafast jet classification on FPGAs for the HL-LHC
Authors:
Patrick Odagiu,
Zhiqiang Que,
Javier Duarte,
Johannes Haller,
Gregor Kasieczka,
Artur Lobanov,
Vladimir Loncar,
Wayne Luk,
Jennifer Ngadiuba,
Maurizio Pierini,
Philipp Rincke,
Arpita Seksaria,
Sioni Summers,
Andre Sznajder,
Alexander Tapper,
Thea K. Aarrestad
Abstract:
Three machine learning models are used to perform jet origin classification. These models are optimized for deployment on a field-programmable gate array device. In this context, we demonstrate how latency and resource consumption scale with the input size and choice of algorithm. Moreover, the models proposed here are designed to work on the type of data and under the foreseen conditions at the CERN LHC during its high-luminosity phase. Through quantization-aware training and efficient synthesis for a specific field-programmable gate array, we show that $O(100)$ ns inference of complex architectures such as Deep Sets and Interaction Networks is feasible at a relatively low computational resource cost.
Submitted 4 July, 2024; v1 submitted 2 February, 2024;
originally announced February 2024.
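The abstract mentions quantization-aware training; a minimal example of that style of training with QKeras, the library commonly paired with hls4ml for this purpose, is sketched below for a small fully connected jet tagger. The layer sizes, bit-widths, and 16-feature input are illustrative and are not the exact models of the paper (which also covers Deep Sets and Interaction Networks).

```python
from tensorflow.keras import layers, models
from qkeras import QDense, QActivation, quantized_bits, quantized_relu

# A small fully connected jet tagger with 6-bit weights, biases, and activations.
model = models.Sequential([
    layers.Input(shape=(16,)),
    QDense(32, kernel_quantizer=quantized_bits(6, 0, alpha=1),
           bias_quantizer=quantized_bits(6, 0, alpha=1)),
    QActivation(quantized_relu(6)),
    QDense(5, kernel_quantizer=quantized_bits(6, 0, alpha=1),
           bias_quantizer=quantized_bits(6, 0, alpha=1)),
    layers.Activation("softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, ...)  # normal Keras training; quantization acts in the forward pass
```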
-
Ultra Fast Transformers on FPGAs for Particle Physics Experiments
Authors:
Zhixing Jiang,
Dennis Yin,
Elham E Khoda,
Vladimir Loncar,
Ekaterina Govorkova,
Eric Moreno,
Philip Harris,
Scott Hauck,
Shih-Chieh Hsu
Abstract:
This work introduces a highly efficient implementation of the transformer architecture on a Field-Programmable Gate Array (FPGA) using the hls4ml tool. Given the demonstrated effectiveness of transformer models in addressing a wide range of problems, their application in experimental triggers within particle physics becomes a subject of significant interest. In this work, we have implemented critical components of a transformer model, such as multi-head attention and softmax layers. To evaluate the effectiveness of our implementation, we have focused on a particle physics jet flavor tagging problem, employing a public dataset. We recorded latency under 2 $μ$s on a Xilinx UltraScale+ FPGA, which is compatible with hardware trigger requirements at the CERN Large Hadron Collider experiments.
Submitted 1 February, 2024;
originally announced February 2024.
-
SymbolNet: Neural Symbolic Regression with Adaptive Dynamic Pruning for Compression
Authors:
Ho Fung Tsoi,
Vladimir Loncar,
Sridhara Dasu,
Philip Harris
Abstract:
Compact symbolic expressions have been shown to be more efficient than neural network models in terms of resource consumption and inference speed when implemented on custom hardware such as FPGAs, while maintaining comparable accuracy [tsoi2023symbolic]. These capabilities are highly valuable in environments with stringent computational resource constraints, such as high-energy physics experiments at the CERN Large Hadron Collider. However, finding compact expressions for high-dimensional datasets remains challenging due to the inherent limitations of genetic programming, the search algorithm of most symbolic regression methods. Contrary to genetic programming, the neural network approach to symbolic regression offers scalability to high-dimensional inputs and leverages gradient methods for faster equation searching. Common ways of constraining expression complexity often involve multistage pruning with fine-tuning, which can result in significant performance loss. In this work, we propose SymbolNet, a neural network approach to symbolic regression specifically designed as a model compression technique, aimed at enabling low-latency inference for high-dimensional inputs on custom hardware such as FPGAs. This framework allows dynamic pruning of model weights, input features, and mathematical operators in a single training process, where both training loss and expression complexity are optimized simultaneously. We introduce a sparsity regularization term for each pruning type, which can adaptively adjust its strength, leading to convergence at a target sparsity ratio. Unlike most existing symbolic regression methods that struggle with datasets containing more than $\mathcal{O}(10)$ inputs, we demonstrate the effectiveness of our model on the LHC jet tagging task (16 inputs), MNIST (784 inputs), and SVHN (3072 inputs).
Submitted 3 January, 2025; v1 submitted 18 January, 2024;
originally announced January 2024.
-
FPGA Resource-aware Structured Pruning for Real-Time Neural Networks
Authors:
Benjamin Ramhorst,
Vladimir Loncar,
George A. Constantinides
Abstract:
Neural networks achieve state-of-the-art performance in image classification, speech recognition, scientific analysis, and many more application areas. Due to the high computational complexity and memory footprint of neural networks, various compression techniques, such as pruning and quantization, have been proposed in the literature. Pruning sparsifies a neural network, reducing the number of multiplications and the memory footprint. However, pruning often fails to capture properties of the underlying hardware, causing unstructured sparsity and load-balance inefficiency, thus bottlenecking resource improvements. We propose a hardware-centric formulation of pruning, formulating it as a knapsack problem with resource-aware tensor structures. Evaluated on a range of tasks, including sub-microsecond particle classification at CERN's Large Hadron Collider and fast image classification, the proposed method achieves reductions ranging between 55% and 92% in DSP utilization and up to 81% in BRAM utilization.
Submitted 12 December, 2023; v1 submitted 9 August, 2023;
originally announced August 2023.
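As a toy illustration of the knapsack view of pruning described above, the snippet below chooses which weight-block structures to keep so that their total DSP cost stays within a budget while the summed importance score is maximized. The costs and scores are made up, and the real method operates on resource-aware tensor structures and hardware cost models rather than this simplified list.

```python
def knapsack_prune(importance, dsp_cost, dsp_budget):
    """0/1 knapsack: choose blocks to KEEP, maximizing importance within a DSP budget."""
    # best[b] = (best total importance, kept indices) achievable with at most b DSPs
    best = [(0.0, frozenset())] * (dsp_budget + 1)
    for i in range(len(importance)):
        new_best = list(best)                          # option: do not keep block i
        for b in range(dsp_cost[i], dsp_budget + 1):   # option: keep block i
            take = best[b - dsp_cost[i]]
            if take[0] + importance[i] > new_best[b][0]:
                new_best[b] = (take[0] + importance[i], take[1] | {i})
        best = new_best
    return max(best, key=lambda entry: entry[0])

# Four candidate structures with an importance score and a DSP cost; budget of 6 DSPs.
importance = [4.0, 3.5, 2.0, 1.0]
dsp_cost   = [4,   3,   2,   1]
print(knapsack_prune(importance, dsp_cost, dsp_budget=6))   # (6.5, frozenset({1, 2, 3}))
```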
-
Symbolic Regression on FPGAs for Fast Machine Learning Inference
Authors:
Ho Fung Tsoi,
Adrian Alan Pol,
Vladimir Loncar,
Ekaterina Govorkova,
Miles Cranmer,
Sridhara Dasu,
Peter Elmer,
Philip Harris,
Isobel Ojalvo,
Maurizio Pierini
Abstract:
The high-energy physics community is investigating the potential of deploying machine-learning-based solutions on Field-Programmable Gate Arrays (FPGAs) to enhance physics sensitivity while still meeting data processing time constraints. In this contribution, we introduce a novel end-to-end procedure that utilizes a machine learning technique called symbolic regression (SR). It searches the equation space to discover algebraic relations approximating a dataset. We use PySR (software that uncovers these expressions with an evolutionary algorithm) and extend the functionality of hls4ml (a package for machine learning inference on FPGAs) to support PySR-generated expressions for resource-constrained production environments. Deep learning models often optimize only the top-level metric at a fixed network size, because the vast hyperparameter space prevents an extensive neural architecture search. Conversely, SR selects a set of models on the Pareto front, which allows for optimizing the performance-resource trade-off directly. By embedding symbolic forms, our implementation can dramatically reduce the computational resources needed to perform critical tasks. We validate our method on a physics benchmark: the multiclass classification of jets produced in simulated proton-proton collisions at the CERN Large Hadron Collider. We show that our approach can approximate a 3-layer neural network using an inference model that achieves up to a 13-fold decrease in execution time, down to 5 ns, while still preserving more than 90% approximation accuracy.
Submitted 17 January, 2024; v1 submitted 6 May, 2023;
originally announced May 2023.
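A minimal PySR usage example in the style the abstract describes is sketched below, recovering a closed-form expression from a toy dataset. The operator set, iteration count, and data are illustrative, not the configuration used for the jet classification study.

```python
import numpy as np
from pysr import PySRRegressor

# Toy dataset: recover y = 2.5 * cos(x0) + x1**2 from noiseless samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
y = 2.5 * np.cos(X[:, 0]) + X[:, 1] ** 2

model = PySRRegressor(
    niterations=40,                      # evolutionary search budget (illustrative)
    binary_operators=["+", "-", "*"],
    unary_operators=["cos", "exp"],
    maxsize=20,                          # cap on expression complexity
)
model.fit(X, y)
print(model)   # prints the Pareto front of accuracy vs. complexity
```

The expressions on the Pareto front are what would then be handed to hls4ml's symbolic-expression support for synthesis, as described above.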
-
Tailor: Altering Skip Connections for Resource-Efficient Inference
Authors:
Olivia Weng,
Gabriel Marcano,
Vladimir Loncar,
Alireza Khodamoradi,
Nojan Sheybani,
Andres Meza,
Farinaz Koushanfar,
Kristof Denolf,
Javier Mauricio Duarte,
Ryan Kastner
Abstract:
Deep neural networks use skip connections to improve training convergence. However, these skip connections are costly in hardware, requiring extra buffers and increasing on- and off-chip memory utilization and bandwidth requirements. In this paper, we show that skip connections can be optimized for hardware when tackled with a hardware-software codesign approach. We argue that while a network's skip connections are needed for the network to learn, they can later be removed or shortened to provide a more hardware efficient implementation with minimal to no accuracy loss. We introduce Tailor, a codesign tool whose hardware-aware training algorithm gradually removes or shortens a fully trained network's skip connections to lower their hardware cost. Tailor improves resource utilization by up to 34% for BRAMs, 13% for FFs, and 16% for LUTs for on-chip, dataflow-style architectures. Tailor increases performance by 30% and reduces memory bandwidth by 45% for a 2D processing element array architecture.
Submitted 15 September, 2023; v1 submitted 17 January, 2023;
originally announced January 2023.
-
Ultra-low latency recurrent neural network inference on FPGAs for physics applications with hls4ml
Authors:
Elham E Khoda,
Dylan Rankin,
Rafael Teixeira de Lima,
Philip Harris,
Scott Hauck,
Shih-Chieh Hsu,
Michael Kagan,
Vladimir Loncar,
Chaitanya Paikara,
Richa Rao,
Sioni Summers,
Caterina Vernieri,
Aaron Wang
Abstract:
Recurrent neural networks have been shown to be effective architectures for many tasks in high energy physics, and thus have been widely adopted. Their use in low-latency environments has, however, been limited as a result of the difficulties of implementing recurrent architectures on field-programmable gate arrays (FPGAs). In this paper we present an implementation of two types of recurrent neural network layers -- long short-term memory and gated recurrent unit -- within the hls4ml framework. We demonstrate that our implementation is capable of producing effective designs for both small and large models, and can be customized to meet specific design requirements for inference latencies and FPGA resources. We show the performance and synthesized designs for multiple neural networks, many of which are trained specifically for jet identification tasks at the CERN Large Hadron Collider.
Submitted 1 July, 2022;
originally announced July 2022.
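As a sketch of how such a recurrent model is converted and tuned (per the abstract's point about customizing designs for latency and resources), the example below converts a small Keras LSTM with hls4ml and raises the reuse factor of the recurrent layer in the per-layer configuration. The model sizes, layer names, paths, and part number are placeholders, and the available configuration keys depend on the hls4ml release.

```python
import hls4ml
from tensorflow.keras import layers, models

# A small sequence classifier with one LSTM layer (sizes are illustrative).
model = models.Sequential([
    layers.Input(shape=(20, 6)),                   # 20 time steps, 6 features
    layers.LSTM(32, name="lstm_block"),
    layers.Dense(5, activation="softmax", name="output"),
])

config = hls4ml.utils.config_from_keras_model(model, granularity="name")
# Trade latency for resources on the recurrent layer by reusing its multipliers.
config["LayerName"]["lstm_block"]["ReuseFactor"] = 4

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="lstm_hls_project",       # placeholder path
    part="xcvu9p-flga2104-2-e",          # placeholder FPGA part
)
hls_model.compile()                       # bit-accurate emulation of the generated design
```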
-
QONNX: Representing Arbitrary-Precision Quantized Neural Networks
Authors:
Alessandro Pappalardo,
Yaman Umuroglu,
Michaela Blott,
Jovan Mitrevski,
Ben Hawks,
Nhan Tran,
Vladimir Loncar,
Sioni Summers,
Hendrik Borras,
Jules Muhizi,
Matthew Trahms,
Shih-Chieh Hsu,
Scott Hauck,
Javier Duarte
Abstract:
We present extensions to the Open Neural Network Exchange (ONNX) intermediate representation format to represent arbitrary-precision quantized neural networks. We first introduce support for low-precision quantization in existing ONNX-based quantization formats by leveraging integer clipping, resulting in two new backward-compatible variants: the quantized operator format with clipping and the quantize-clip-dequantize (QCDQ) format. We then introduce a novel higher-level ONNX format called quantized ONNX (QONNX) that introduces three new operators -- Quant, BipolarQuant, and Trunc -- in order to represent uniform quantization. By keeping the QONNX IR high-level and flexible, we enable targeting a wider variety of platforms. We also present utilities for working with QONNX, as well as examples of its usage in the FINN and hls4ml toolchains. Finally, we introduce the QONNX model zoo to share low-precision quantized neural networks.
Submitted 24 June, 2022; v1 submitted 15 June, 2022;
originally announced June 2022.
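To make the semantics of such a quantization operator concrete, the reference function below implements uniform quantization with a scale, zero point, and bit width in NumPy, following the general quantize-round-clip-dequantize pattern the format encodes. The exact attribute names, rounding modes, and narrow-range options of QONNX's Quant node are defined by its specification, not by this sketch.

```python
import numpy as np

def quant_reference(x, scale, zero_point, bit_width, signed=True):
    """Reference uniform quantization: scale, round, clip, then dequantize."""
    if signed:
        q_min, q_max = -(2 ** (bit_width - 1)), 2 ** (bit_width - 1) - 1
    else:
        q_min, q_max = 0, 2 ** bit_width - 1
    q = np.round(x / scale + zero_point)   # map to the integer grid
    q = np.clip(q, q_min, q_max)           # saturate to the representable range
    return (q - zero_point) * scale        # dequantize back to real values

x = np.linspace(-1.0, 1.0, 9)
print(quant_reference(x, scale=0.25, zero_point=0.0, bit_width=4))
```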
-
Real-time semantic segmentation on FPGAs for autonomous vehicles with hls4ml
Authors:
Nicolò Ghielmetti,
Vladimir Loncar,
Maurizio Pierini,
Marcel Roed,
Sioni Summers,
Thea Aarrestad,
Christoffer Petersson,
Hampus Linander,
Jennifer Ngadiuba,
Kelvin Lin,
Philip Harris
Abstract:
In this paper, we investigate how field-programmable gate arrays can serve as hardware accelerators for real-time semantic segmentation tasks relevant for autonomous driving. Considering compressed versions of the ENet convolutional neural network architecture, we demonstrate a fully-on-chip deployment with a latency of 4.9 ms per image, using less than 30% of the available resources on a Xilinx ZCU102 evaluation board. The latency is reduced to 3 ms per image when increasing the batch size to ten, corresponding to the use case where the autonomous vehicle receives inputs from multiple cameras simultaneously. We show that, through aggressive filter reduction, heterogeneous quantization-aware training, and an optimized implementation of convolutional layers, the power consumption and resource utilization can be significantly reduced while maintaining accuracy on the Cityscapes dataset.
Submitted 16 May, 2022;
originally announced May 2022.
-
Lightweight Jet Reconstruction and Identification as an Object Detection Task
Authors:
Adrian Alan Pol,
Thea Aarrestad,
Ekaterina Govorkova,
Roi Halily,
Anat Klempner,
Tal Kopetz,
Vladimir Loncar,
Jennifer Ngadiuba,
Maurizio Pierini,
Olya Sirkin,
Sioni Summers
Abstract:
We apply object detection techniques based on deep convolutional blocks to end-to-end jet identification and reconstruction tasks encountered at the CERN Large Hadron Collider (LHC). Collision events produced at the LHC and represented as an image composed of calorimeter and tracker cells are given as an input to a Single Shot Detection network. The algorithm, named PFJet-SSD, performs simultaneous localization, classification, and regression tasks to cluster jets and reconstruct their features. This all-in-one single feed-forward pass gives advantages in terms of execution time and improved accuracy with respect to traditional rule-based methods. A further gain is obtained from network slimming, homogeneous quantization, and an optimized runtime for meeting the memory and latency constraints of a typical real-time processing environment. We experiment with 8-bit and ternary quantization, benchmarking their accuracy and inference latency against a single-precision floating-point baseline. We show that the ternary network closely matches the performance of its full-precision equivalent and outperforms the state-of-the-art rule-based algorithm. Finally, we report the inference latency on different hardware platforms and discuss future applications.
Submitted 9 February, 2022;
originally announced February 2022.
-
Accelerating Recurrent Neural Networks for Gravitational Wave Experiments
Authors:
Zhiqiang Que,
Erwei Wang,
Umar Marikar,
Eric Moreno,
Jennifer Ngadiuba,
Hamza Javed,
Bartłomiej Borzyszkowski,
Thea Aarrestad,
Vladimir Loncar,
Sioni Summers,
Maurizio Pierini,
Peter Y Cheung,
Wayne Luk
Abstract:
This paper presents novel reconfigurable architectures for reducing the latency of recurrent neural networks (RNNs) that are used for detecting gravitational waves. Gravitational interferometers such as the LIGO detectors capture cosmic events such as black hole mergers, which happen at unknown times and with varying durations, producing time-series data. We have developed a new architecture capable of accelerating RNN inference for analyzing time-series data from LIGO detectors. This architecture is based on optimizing the initiation intervals (II) in a multi-layer LSTM (Long Short-Term Memory) network, by identifying appropriate reuse factors for each layer. A customizable template for this architecture has been designed, which enables the generation of low-latency FPGA designs with efficient resource utilization using high-level synthesis tools. The proposed approach has been evaluated on two LSTM models, targeting a ZYNQ 7045 FPGA and a U250 FPGA. Experimental results show that with balanced IIs, the number of DSPs can be reduced by up to 42% while achieving the same IIs. When compared to other FPGA-based LSTM designs, our design achieves about 4.92 to 12.4 times lower latency.
Submitted 26 June, 2021;
originally announced June 2021.
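The balancing argument in the abstract can be illustrated with first-order arithmetic: for a pipelined layer, the number of parallel multipliers scales roughly as the number of multiplications divided by the reuse factor, while the layer's initiation interval (II) grows with the reuse factor, and the pipeline II is set by the slowest layer. The toy operation counts below are assumptions used only to show the trade-off, not figures from the paper.

```python
# First-order model: multipliers ~= multiplications / reuse_factor, II ~= reuse_factor cycles.
layer_mults = {
    # layer: number of multiply operations (illustrative counts)
    "lstm_1": 4 * 32 * (16 + 32),   # 4 gates, input dim 16, hidden dim 32
    "lstm_2": 4 * 16 * (32 + 16),   # 4 gates, input dim 32, hidden dim 16
    "dense":  16 * 8,
}

def cost(reuse):
    dsps = sum(-(-m // reuse[name]) for name, m in layer_mults.items())  # ceil division
    ii = max(reuse.values())    # the pipeline II is limited by the largest per-layer II
    return dsps, ii

# Unbalanced: one layer fully parallel while the others already run at II = 8.
print(cost({"lstm_1": 1, "lstm_2": 8, "dense": 8}))   # (6544, 8): many DSPs, II = 8
# Balanced: every layer shares II = 8, so no parallelism is wasted.
print(cost({"lstm_1": 8, "lstm_2": 8, "dense": 8}))   # (1168, 8): far fewer DSPs, same II
```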
-
A reconfigurable neural network ASIC for detector front-end data compression at the HL-LHC
Authors:
Giuseppe Di Guglielmo,
Farah Fahim,
Christian Herwig,
Manuel Blanco Valentin,
Javier Duarte,
Cristian Gingu,
Philip Harris,
James Hirschauer,
Martin Kwok,
Vladimir Loncar,
Yingyi Luo,
Llovizna Miranda,
Jennifer Ngadiuba,
Daniel Noonan,
Seda Ogrenci-Memik,
Maurizio Pierini,
Sioni Summers,
Nhan Tran
Abstract:
Despite advances in the programmable logic capabilities of modern trigger systems, a significant bottleneck remains in the amount of data to be transported from the detector to off-detector logic where trigger decisions are made. We demonstrate that a neural network autoencoder model can be implemented in a radiation-tolerant ASIC to perform lossy data compression, alleviating the data transmission problem while preserving critical information of the detector energy profile. For our application, we consider the high-granularity calorimeter from the CMS experiment at the CERN Large Hadron Collider. The advantage of the machine learning approach lies in the flexibility and configurability of the algorithm. By changing the neural network weights, a unique data compression algorithm can be deployed for each sensor in different detector regions, and for changing detector or collider conditions. To meet area, performance, and power constraints, we perform quantization-aware training to create an optimized neural network hardware implementation. The design is achieved through the use of high-level synthesis tools and the hls4ml framework, and was processed through synthesis and physical layout flows based on an LP CMOS 65 nm technology node. The flow anticipates 200 Mrad of ionizing radiation for select gates; the design reports a total area of 3.6 mm^2 and consumes 95 mW of power. The simulated energy consumption per inference is 2.4 nJ. This is the first radiation-tolerant on-detector ASIC implementation of a neural network that has been designed for particle physics applications.
Submitted 4 May, 2021;
originally announced May 2021.
-
hls4ml: An Open-Source Codesign Workflow to Empower Scientific Low-Power Machine Learning Devices
Authors:
Farah Fahim,
Benjamin Hawks,
Christian Herwig,
James Hirschauer,
Sergo Jindariani,
Nhan Tran,
Luca P. Carloni,
Giuseppe Di Guglielmo,
Philip Harris,
Jeffrey Krupa,
Dylan Rankin,
Manuel Blanco Valentin,
Josiah Hester,
Yingyi Luo,
John Mamish,
Seda Ogrenci-Memik,
Thea Aarrestad,
Hamza Javed,
Vladimir Loncar,
Maurizio Pierini,
Adrian Alan Pol,
Sioni Summers,
Javier Duarte,
Scott Hauck,
Shih-Chieh Hsu
, et al. (5 additional authors not shown)
Abstract:
Accessible machine learning algorithms, software, and diagnostic tools for energy-efficient devices and systems are extremely valuable across a broad range of application domains. In scientific domains, real-time near-sensor processing can drastically improve experimental design and accelerate scientific discoveries. To support domain scientists, we have developed hls4ml, an open-source software-hardware codesign workflow to interpret and translate machine learning algorithms for implementation with both FPGA and ASIC technologies. We expand on previous hls4ml work by extending capabilities and techniques towards low-power implementations and increased usability: new Python APIs, quantization-aware pruning, end-to-end FPGA workflows, long pipeline kernels for low power, and new device backends, including an ASIC workflow. Taken together, these and continued efforts in hls4ml will arm a new generation of domain scientists with accessible, efficient, and powerful tools for machine-learning-accelerated discovery.
Submitted 23 March, 2021; v1 submitted 9 March, 2021;
originally announced March 2021.
-
Fast convolutional neural networks on FPGAs with hls4ml
Authors:
Thea Aarrestad,
Vladimir Loncar,
Nicolò Ghielmetti,
Maurizio Pierini,
Sioni Summers,
Jennifer Ngadiuba,
Christoffer Petersson,
Hampus Linander,
Yutaro Iiyama,
Giuseppe Di Guglielmo,
Javier Duarte,
Philip Harris,
Dylan Rankin,
Sergo Jindariani,
Kevin Pedro,
Nhan Tran,
Mia Liu,
Edward Kreinar,
Zhenbin Wu,
Duc Hoang
Abstract:
We introduce an automated tool for deploying ultra low-latency, low-power deep neural networks with convolutional layers on FPGAs. By extending the hls4ml library, we demonstrate an inference latency of $5\,μ$s using convolutional architectures, targeting microsecond latency applications like those at the CERN Large Hadron Collider. Considering benchmark models trained on the Street View House Numbers Dataset, we demonstrate various methods for model compression in order to fit the computational constraints of a typical FPGA device used in trigger and data acquisition systems of particle detectors. In particular, we discuss pruning and quantization-aware training, and demonstrate how resource utilization can be significantly reduced with little to no loss in model accuracy. We show that the FPGA critical resource consumption can be reduced by 97% with zero loss in model accuracy, and by 99% when tolerating a 6% accuracy degradation.
Submitted 29 April, 2021; v1 submitted 13 January, 2021;
originally announced January 2021.
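As a hedged illustration of the pruning side of the compression discussed above, the sketch below applies magnitude-based pruning to a toy CNN with the TensorFlow Model Optimization toolkit; the sparsity target, schedule, and architecture are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of magnitude-based pruning on a toy CNN.
# Sparsity target, schedule, and model are illustrative only.
import tensorflow_model_optimization as tfmot
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense

base_model = Sequential([
    Input(shape=(32, 32, 3)),             # SVHN-sized inputs
    Conv2D(16, 3, activation="relu"),
    Conv2D(16, 3, activation="relu"),
    Flatten(),
    Dense(10, activation="softmax"),
])

# Zero out low-magnitude weights during training, ramping up to 75% sparsity.
pruning_schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.75, begin_step=0, end_step=1000
)
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
    base_model, pruning_schedule=pruning_schedule
)
pruned_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# The pruning callback updates the sparsity masks at each training step:
callbacks = [tfmot.sparsity.keras.UpdatePruningStep()]
# pruned_model.fit(x_train, y_train, epochs=5, callbacks=callbacks)
```

Quantization-aware training, the other technique mentioned in the abstract, would typically be layered on top by building the model from quantized layers rather than standard Keras ones.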
-
Accelerated Charged Particle Tracking with Graph Neural Networks on FPGAs
Authors:
Aneesh Heintz,
Vesal Razavimaleki,
Javier Duarte,
Gage DeZoort,
Isobel Ojalvo,
Savannah Thais,
Markus Atkinson,
Mark Neubauer,
Lindsey Gray,
Sergo Jindariani,
Nhan Tran,
Philip Harris,
Dylan Rankin,
Thea Aarrestad,
Vladimir Loncar,
Maurizio Pierini,
Sioni Summers,
Jennifer Ngadiuba,
Mia Liu,
Edward Kreinar,
Zhenbin Wu
Abstract:
We develop and study FPGA implementations of algorithms for charged particle tracking based on graph neural networks. The two complementary FPGA designs are based on OpenCL, a framework for writing programs that execute across heterogeneous platforms, and hls4ml, a high-level-synthesis-based compiler for neural-network-to-firmware conversion. We evaluate and compare the resource usage, latency, and tracking performance of our implementations based on a benchmark dataset. We find that a considerable speedup over CPU-based execution is possible, potentially enabling such algorithms to be used effectively in future computing workflows and the FPGA-based Level-1 trigger at the CERN Large Hadron Collider.
Submitted 30 November, 2020;
originally announced December 2020.
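As a hedged illustration of the segment-classification idea behind GNN-based tracking (hits as graph nodes, candidate segments as edges to be scored), the minimal PyTorch sketch below classifies edges; it is not the architecture evaluated in the paper, and all names and sizes are illustrative.

```python
# Hedged sketch of edge classification for tracking: score each candidate
# segment between two hits. Minimal illustration, not the paper's model.
import torch
import torch.nn as nn

class EdgeClassifier(nn.Module):
    def __init__(self, node_dim=3, hidden=32):
        super().__init__()
        self.node_net = nn.Sequential(nn.Linear(node_dim, hidden), nn.ReLU())
        self.edge_net = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x, edge_index):
        # x: (num_hits, node_dim) hit coordinates; edge_index: (2, num_edges)
        h = self.node_net(x)
        src, dst = edge_index
        pairs = torch.cat([h[src], h[dst]], dim=1)
        return torch.sigmoid(self.edge_net(pairs)).squeeze(-1)

# Toy usage: 5 hits with (r, phi, z)-like coordinates and 4 candidate edges.
hits = torch.randn(5, 3)
edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
scores = EdgeClassifier()(hits, edges)   # one probability per candidate edge
```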
-
Distance-Weighted Graph Neural Networks on FPGAs for Real-Time Particle Reconstruction in High Energy Physics
Authors:
Yutaro Iiyama,
Gianluca Cerminara,
Abhijay Gupta,
Jan Kieseler,
Vladimir Loncar,
Maurizio Pierini,
Shah Rukh Qasim,
Marcel Rieger,
Sioni Summers,
Gerrit Van Onsem,
Kinga Wozniak,
Jennifer Ngadiuba,
Giuseppe Di Guglielmo,
Javier Duarte,
Philip Harris,
Dylan Rankin,
Sergo Jindariani,
Mia Liu,
Kevin Pedro,
Nhan Tran,
Edward Kreinar,
Zhenbin Wu
Abstract:
Graph neural networks have been shown to achieve excellent performance for several crucial tasks in particle physics, such as charged particle tracking, jet tagging, and clustering. An important domain for the application of these networks is the FPGA-based first layer of real-time data filtering at the CERN Large Hadron Collider, which has strict latency and resource constraints. We discuss how to design distance-weighted graph networks that can be executed with a latency of less than $1\,\mu\mathrm{s}$ on an FPGA. To do so, we consider a representative task associated with particle reconstruction and identification in a next-generation calorimeter operating at a particle collider. We use a graph network architecture developed for such purposes, and apply additional simplifications to match the computing constraints of Level-1 trigger systems, including weight quantization. Using the $\mathtt{hls4ml}$ library, we convert the compressed models into firmware to be implemented on an FPGA. Performance of the synthesized models is presented both in terms of inference accuracy and resource usage.
Submitted 3 February, 2021; v1 submitted 8 August, 2020;
originally announced August 2020.
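The distance-weighted aggregation at the heart of such graph layers can be illustrated with a minimal NumPy sketch; the Gaussian potential and fixed aggregator positions below are simplifying assumptions, not the learned layer used in the paper.

```python
# Hedged sketch of distance-weighted aggregation: each aggregator collects
# node features weighted by a potential that falls off with distance.
import numpy as np

def distance_weighted_aggregate(node_pos, node_feat, agg_pos):
    # node_pos: (N, d) positions, node_feat: (N, f) features, agg_pos: (S, d)
    d2 = ((node_pos[:, None, :] - agg_pos[None, :, :]) ** 2).sum(-1)  # (N, S)
    w = np.exp(-d2)                      # distance-based potential (assumed Gaussian)
    # Each aggregator returns a weighted mean of the node features.
    return (w[:, :, None] * node_feat[:, None, :]).sum(0) / w.sum(0)[:, None]

# Toy usage: 6 calorimeter hits with 4 features each, 2 aggregator points.
pos = np.random.rand(6, 3)
feat = np.random.rand(6, 4)
aggregated = distance_weighted_aggregate(pos, feat, np.random.rand(2, 3))
print(aggregated.shape)  # (2, 4): one feature vector per aggregator
```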
-
Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors
Authors:
Claudionor N. Coelho Jr.,
Aki Kuusela,
Shan Li,
Hao Zhuang,
Thea Aarrestad,
Vladimir Loncar,
Jennifer Ngadiuba,
Maurizio Pierini,
Adrian Alan Pol,
Sioni Summers
Abstract:
Although the quest for more accurate solutions is pushing deep learning research towards larger and more complex algorithms, edge devices demand efficient inference and therefore reduction in model size, latency and energy consumption. One technique to limit model size is quantization, which implies using fewer bits to represent weights and biases. Such an approach usually results in a decline in performance. Here, we introduce a method for designing optimally heterogeneously quantized versions of deep neural network models for minimum-energy, high-accuracy, nanosecond inference and fully automated deployment on chip. With a per-layer, per-parameter-type automatic quantization procedure, sampling from a wide range of quantizers, model energy consumption and size are minimized while high accuracy is maintained. This is crucial for the event selection procedure in proton-proton collisions at the CERN Large Hadron Collider, where resources are strictly limited and a latency of $\mathcal{O}(1)\,\mu\mathrm{s}$ is required. When implemented on field-programmable gate array hardware, the method achieves nanosecond inference and a resource consumption reduced by a factor of 50.
Submitted 21 June, 2021; v1 submitted 15 June, 2020;
originally announced June 2020.
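The per-layer quantization idea can be sketched with QKeras, the library on which this automatic procedure builds; the specific bit widths below are illustrative choices, whereas the paper selects them automatically per layer and per parameter type.

```python
# Hedged sketch of heterogeneous quantization: each layer gets its own
# low-bit quantizers. Bit widths and layer sizes are illustrative only.
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Activation
from qkeras import QDense, QActivation, quantized_bits, quantized_relu

inputs = Input(shape=(16,))
x = QDense(64,
           kernel_quantizer=quantized_bits(6, 0, alpha=1),
           bias_quantizer=quantized_bits(6, 0))(inputs)
x = QActivation(quantized_relu(6))(x)
x = QDense(32,
           kernel_quantizer=quantized_bits(4, 0, alpha=1),   # narrower hidden layer
           bias_quantizer=quantized_bits(4, 0))(x)
x = QActivation(quantized_relu(4))(x)
x = QDense(5,
           kernel_quantizer=quantized_bits(8, 0, alpha=1),   # wider output layer
           bias_quantizer=quantized_bits(8, 0))(x)
outputs = Activation("softmax")(x)
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```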
-
Compressing deep neural networks on FPGAs to binary and ternary precision with HLS4ML
Authors:
Giuseppe Di Guglielmo,
Javier Duarte,
Philip Harris,
Duc Hoang,
Sergo Jindariani,
Edward Kreinar,
Mia Liu,
Vladimir Loncar,
Jennifer Ngadiuba,
Kevin Pedro,
Maurizio Pierini,
Dylan Rankin,
Sheila Sagear,
Sioni Summers,
Nhan Tran,
Zhenbin Wu
Abstract:
We present the implementation of binary and ternary neural networks in the hls4ml library, designed to automatically convert deep neural network models to digital circuits with FPGA firmware. Starting from benchmark models trained with floating-point precision, we investigate different strategies to reduce the network's resource consumption by reducing the numerical precision of the network parameters to binary or ternary. We discuss the trade-off between model accuracy and resource consumption. In addition, we show how to balance latency and accuracy by retaining full precision on a selected subset of network components. As an example, we consider two multiclass classification tasks: handwritten digit recognition with the MNIST data set and jet identification with simulated proton-proton collisions at the CERN Large Hadron Collider. The binary and ternary implementations have performance similar to the higher-precision implementation while using drastically fewer FPGA resources.
Submitted 29 June, 2020; v1 submitted 11 March, 2020;
originally announced March 2020.
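As a hedged sketch, QKeras-style binary and ternary quantizers give one way to construct such models during training before conversion to firmware; the layer sizes below are illustrative, and the higher-precision output layer mirrors the idea of retaining full precision on selected components.

```python
# Hedged sketch of binary/ternary weight quantization with QKeras quantizers.
# Layer sizes are illustrative; the output layer keeps higher precision.
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Activation
from qkeras import QDense, QActivation, binary, ternary, quantized_bits, quantized_relu

inputs = Input(shape=(16,))
x = QDense(64, kernel_quantizer=binary(alpha=1),
           bias_quantizer=binary(alpha=1))(inputs)           # binary weights
x = QActivation("binary_tanh")(x)
x = QDense(32, kernel_quantizer=ternary(),
           bias_quantizer=ternary())(x)                      # ternary weights
x = QActivation(quantized_relu(4))(x)
x = QDense(10, kernel_quantizer=quantized_bits(8, 0, alpha=1),
           bias_quantizer=quantized_bits(8, 0))(x)           # higher-precision output
outputs = Activation("softmax")(x)
model = Model(inputs, outputs)
```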
-
Fast inference of Boosted Decision Trees in FPGAs for particle physics
Authors:
Sioni Summers,
Giuseppe Di Guglielmo,
Javier Duarte,
Philip Harris,
Duc Hoang,
Sergo Jindariani,
Edward Kreinar,
Vladimir Loncar,
Jennifer Ngadiuba,
Maurizio Pierini,
Dylan Rankin,
Nhan Tran,
Zhenbin Wu
Abstract:
We describe the implementation of Boosted Decision Trees (BDTs) in the hls4ml library, which allows the translation of a trained model into FPGA firmware through an automated conversion process. Thanks to its fully on-chip implementation, hls4ml performs inference of Boosted Decision Tree models with extremely low latency. With a typical latency of less than 100 ns, this solution is suitable for FPGA-based real-time processing, such as in the Level-1 Trigger system of a collider experiment. These developments open up prospects for physicists to deploy BDTs in FPGAs for identifying the origin of jets, improving the reconstruction of muon energies, and selecting rare signal processes more effectively.
Submitted 19 February, 2020; v1 submitted 5 February, 2020;
originally announced February 2020.
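A hedged sketch of the kind of compact BDT such a deployment targets is shown below: shallow trees and a modest ensemble keep a fully on-chip implementation small. The dataset and hyperparameters are illustrative, and the subsequent firmware conversion through the library's automated converter is not reproduced here.

```python
# Hedged sketch: train a small boosted decision tree whose depth and tree
# count are the knobs that drive FPGA latency and resource usage.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Toy dataset standing in for physics features (jet substructure variables, etc.).
X, y = make_classification(n_samples=2000, n_features=16, n_informative=8)

# Shallow trees and a modest ensemble map to a small, fully unrolled design.
bdt = GradientBoostingClassifier(n_estimators=50, max_depth=3, learning_rate=0.1)
bdt.fit(X, y)
print(bdt.score(X, y))
# Firmware generation would follow via the library's automated converter.
```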
-
OpenMP GNU and Intel Fortran programs for solving the time-dependent Gross-Pitaevskii equation
Authors:
Luis E. Young-S.,
Paulsamy Muruganandam,
Sadhan K. Adhikari,
Vladimir Loncar,
Dusan Vudragovic,
Antun Balaz
Abstract:
We present an Open Multi-Processing (OpenMP) version of Fortran 90 programs for solving the Gross-Pitaevskii (GP) equation for a Bose-Einstein condensate in one, two, and three spatial dimensions, optimized for use with GNU and Intel compilers. We use the split-step Crank-Nicolson algorithm for imaginary- and real-time propagation, which enables efficient calculation of stationary and non-stationary solutions, respectively. The present OpenMP programs are designed for computers with multi-core processors and are optimized for compilation with both the commercially licensed Intel Fortran compiler and the popular free, open-source GNU Fortran compiler. The programs are easy to use and include helpful comments for users. All input parameters are listed at the beginning of each program. Different output files provide physical quantities such as the energy, chemical potential, root-mean-square sizes, and densities. We also present speedup test results for the new versions of the programs.
Submitted 13 September, 2017;
originally announced September 2017.
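For context, a standard dimensionless form of the GP equation and the operator splitting behind the split-step Crank-Nicolson scheme mentioned in the abstract can be sketched as follows (the notation is generic and not copied from the published programs):
$$ i\,\partial_t \psi(\mathbf{r},t) = \Big[-\tfrac{1}{2}\nabla^2 + V(\mathbf{r}) + g\,|\psi(\mathbf{r},t)|^2\Big]\psi(\mathbf{r},t) \equiv \big(\hat H_1 + \hat H_2\big)\,\psi , $$
where the potential and nonlinear part $\hat H_1 = V(\mathbf{r}) + g|\psi|^2$ is propagated exactly, pointwise in position space, over each time step $\Delta t$, while the kinetic part $\hat H_2 = -\tfrac{1}{2}\nabla^2$ is advanced with a Crank-Nicolson step, one spatial direction at a time:
$$ \psi^{(1)} = e^{-i\,\hat H_1\,\Delta t}\,\psi^{n}, \qquad \Big(1 + \tfrac{i\,\Delta t}{2}\hat H_2\Big)\psi^{n+1} = \Big(1 - \tfrac{i\,\Delta t}{2}\hat H_2\Big)\psi^{(1)} . $$
Imaginary-time propagation ($t \to -i\tau$) relaxes the wave function towards the stationary ground state, while real-time propagation yields the dynamics.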
-
OpenMP, OpenMP/MPI, and CUDA/MPI C programs for solving the time-dependent dipolar Gross-Pitaevskii equation
Authors:
Vladimir Loncar,
Luis E. Young-S.,
Srdjan Skrbic,
Paulsamy Muruganandam,
Sadhan K. Adhikari,
Antun Balaz
Abstract:
We present new versions of the previously published C and CUDA programs for solving the dipolar Gross-Pitaevskii equation in one, two, and three spatial dimensions, which calculate stationary and non-stationary solutions by propagation in imaginary or real time. The presented programs are improved and parallelized versions of the previous programs, divided into three packages according to the type of parallelization. The first package contains an improved and threaded version of the sequential C programs using OpenMP. The second package additionally parallelizes the three-dimensional variants of the OpenMP programs using MPI, allowing them to be run on distributed-memory systems. Finally, the previous three-dimensional CUDA-parallelized programs are further parallelized using MPI, similarly to the OpenMP programs. We also present speedup test results obtained with the new versions of the programs in comparison with the previous sequential C and parallel CUDA programs. The improvements to the sequential version yield a speedup of 1.1 to 1.9, depending on the program. OpenMP parallelization yields a further speedup of 2 to 12 on a 16-core workstation, while the OpenMP/MPI version demonstrates a speedup of 11.5 to 16.5 on a computer cluster with 32 nodes. The CUDA/MPI version shows a speedup of 9 to 10 on a computer cluster with 32 nodes.
Submitted 1 August, 2022; v1 submitted 17 October, 2016;
originally announced October 2016.