-
Counting Without Running: Evaluating LLMs' Reasoning About Code Complexity
Authors:
Gregory Bolet,
Giorgis Georgakoudis,
Konstantinos Parasyris,
Harshitha Menon,
Niranjan Hasabnis,
Kirk W. Cameron,
Gal Oren
Abstract:
Modern GPU software stacks demand developers who can anticipate performance bottlenecks before ever launching a kernel; misjudging floating-point workloads upstream can derail tuning, scheduling, and even hardware procurement. Yet despite rapid progress in code generation, today's Large Language Models (LLMs) are rarely tested on this kind of forward-looking reasoning. We close that gap with gpuFLOPBench, a benchmark that asks models to "count without running" by predicting single- and double-precision FLOP counts for 577 CUDA kernels drawn from HeCBench, annotated with ground-truth profiles and eight execution attributes that distinguish trivially analyzable code from kernels whose FLOPs depend on hidden compiler or runtime behavior. Evaluating current closed-source reasoning models shows clear but uneven progress: the newest LLMs achieve perfect classification on straightforward kernels but still incur errors of multiple orders of magnitude whenever implicit FLOPs arise from division, intrinsic math functions, or common subexpressions. These results surface a core limitation of existing code assistants -- the inability to internalize hardware-specific microcode effects -- and position gpuFLOPBench as a focused testbed for developing LLM tooling that can reason about performance with the same rigor as experienced GPU developers. Sources are available at our repository: https://github.com/Scientific-Computing-Lab/gpuFLOPBench
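To make the error scale concrete, here is a hedged Python sketch (not taken from the benchmark's tooling) of how a static FLOP-count prediction can be scored in orders of magnitude against a profiled ground truth; all values are illustrative placeholders:

```python
# Hypothetical scoring sketch: compare an LLM's "count without running"
# prediction against a profiler-measured FLOP count, in orders of magnitude.
import math

def order_of_magnitude_error(predicted: float, measured: float) -> float:
    """Absolute log10 ratio: 0.0 is exact, 1.0 means off by a factor of 10."""
    return abs(math.log10(predicted / measured))

# Illustrative values only: a kernel whose divisions and intrinsics expand
# into extra microcode FLOPs that a source-level count misses.
predicted_flops = 1.0e9
measured_flops = 2.4e10
print(f"{order_of_magnitude_error(predicted_flops, measured_flops):.2f} orders of magnitude off")
```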
Submitted 3 December, 2025;
originally announced December 2025.
-
LLMs as Packagers of HPC Software
Authors:
Caetano Melone,
Daniel Nichols,
Konstantinos Parasyris,
Todd Gamblin,
Harshitha Menon
Abstract:
High performance computing (HPC) software ecosystems are inherently heterogeneous, comprising scientific applications that depend on hundreds of external packages, each with distinct build systems, options, and dependency constraints. Tools such as Spack automate dependency resolution and environment management, but their effectiveness relies on manually written build recipes. As these ecosystems grow, maintaining existing specifications and creating new ones becomes increasingly labor-intensive. While large language models (LLMs) have shown promise in code generation, automatically producing correct and maintainable Spack recipes remains a significant challenge. We present a systematic analysis of how LLMs and context-augmentation methods can assist in the generation of Spack recipes. To this end, we introduce SpackIt, an end-to-end framework that combines repository analysis, retrieval of relevant examples, and iterative refinement through diagnostic feedback. We apply SpackIt to a representative subset of 308 open-source HPC packages to assess its effectiveness and limitations. Our results show that SpackIt increases installation success from 20% in a zero-shot setting to over 80% in its best configuration, demonstrating the value of retrieval and structured feedback for reliable package synthesis.
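For readers unfamiliar with the recipe format SpackIt targets, the following is a minimal sketch of a Spack package. The package `mylib` and all of its metadata are hypothetical; the `CMakePackage` base class and the `version`/`variant`/`depends_on` directives are standard Spack API, and the file is only loadable inside a Spack installation:

```python
# Minimal sketch of the kind of build recipe SpackIt aims to generate.
# "mylib" and its metadata are hypothetical placeholders.
from spack.package import *

class Mylib(CMakePackage):
    """Hypothetical HPC library used to illustrate a build recipe."""

    homepage = "https://example.org/mylib"
    url = "https://example.org/mylib/mylib-1.0.0.tar.gz"

    version("1.0.0", sha256="0" * 64)  # placeholder checksum

    variant("mpi", default=True, description="Build with MPI support")

    depends_on("cmake@3.18:", type="build")
    depends_on("mpi", when="+mpi")

    def cmake_args(self):
        # Translate the variant into a CMake option.
        return [self.define_from_variant("ENABLE_MPI", "mpi")]
```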
Submitted 6 November, 2025;
originally announced November 2025.
-
Integrating Performance Tools in Model Reasoning for GPU Kernel Optimization
Authors:
Daniel Nichols,
Konstantinos Parasyris,
Charles Jekel,
Abhinav Bhatele,
Harshitha Menon
Abstract:
Language models are now prevalent in software engineering, with many developers using them to automate tasks and accelerate development. While language models have proven remarkably capable at complex software engineering tasks, there are still many areas where they fail to deliver desirable results, such as code performance-related tasks. Tasks like optimization depend on complex data from the environment, hardware, and runtime that are not directly represented in source code. Recent efforts have brought large improvements to general code modeling tasks using chain-of-thought style reasoning, but these models still fail to comprehend how the environment interacts with code performance. In this paper we propose a methodology to train language models that can interact with performance tools during their reasoning process. We then demonstrate how this methodology can be used to train a state-of-the-art GPU kernel optimization model.
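The core idea -- grounding the model's reasoning in real measurements rather than in source text alone -- can be sketched as a tool-in-the-loop refinement cycle. Everything below is an illustrative stand-in (the stubbed `query_model` and `profile_kernel` helpers are hypothetical), not the paper's actual training setup:

```python
# Hedged sketch of tool-in-the-loop reasoning: the model's proposals are
# profiled, and the measurements are fed back into the next prompt.
def profile_kernel(src: str) -> dict:
    # Stand-in for a real profiler run (e.g., parsing nsys/ncu output).
    return {"runtime_ms": 0.01 * len(src)}  # dummy metric for illustration

def query_model(prompt: str) -> str:
    # Stand-in for an LLM API call; a real model would return new code.
    return prompt.split("Kernel:\n", 1)[1].split("\nMeasured", 1)[0]

def optimize_kernel(source: str, rounds: int = 3) -> str:
    best_src, best_ms = source, profile_kernel(source)["runtime_ms"]
    for _ in range(rounds):
        candidate = query_model(
            f"Kernel:\n{best_src}\nMeasured runtime: {best_ms} ms.\n"
            "Propose a faster version."
        )
        ms = profile_kernel(candidate)["runtime_ms"]  # ground the reasoning
        if ms < best_ms:
            best_src, best_ms = candidate, ms
    return best_src

print(optimize_kernel("__global__ void k(float* x) { /* ... */ }"))
```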
Submitted 20 October, 2025;
originally announced October 2025.
-
Modeling Code: Is Text All You Need?
Authors:
Daniel Nichols,
Konstantinos Parasyris,
Harshitha Menon,
Brian R. Bartoldson,
Giorgis Georgakoudis,
Tal Ben-Nun,
Abhinav Bhatele
Abstract:
Code LLMs have recently become extremely popular for modeling source code across a variety of tasks, such as generation, translation, and summarization. However, transformer-based models are limited in their ability to reason about structured, analytical properties of code, such as control and data flow. Previous work has explored modeling these properties with structured data and graph neural networks. However, these approaches lack the generative capabilities and scale of modern LLMs. In this work, we introduce a novel approach that combines the strengths of modeling code both as text and in more structured forms.
Submitted 15 July, 2025;
originally announced July 2025.
-
Leveraging AI for Productive and Trustworthy HPC Software: Challenges and Research Directions
Authors:
Keita Teranishi,
Harshitha Menon,
William F. Godoy,
Prasanna Balaprakash,
David Bau,
Tal Ben-Nun,
Abhinav Bhatele,
Franz Franchetti,
Michael Franusich,
Todd Gamblin,
Giorgis Georgakoudis,
Tom Goldstein,
Arjun Guha,
Steven Hahn,
Costin Iancu,
Zheming Jin,
Terry Jones,
Tze Meng Low,
Het Mankad,
Narasinga Rao Miniskar,
Mohammad Alaul Haque Monil,
Daniel Nichols,
Konstantinos Parasyris,
Swaroop Pophale,
Pedro Valero-Lara
, et al. (3 additional authors not shown)
Abstract:
We discuss the challenges and propose research directions for using AI to revolutionize the development of high-performance computing (HPC) software. AI technologies, in particular large language models, have transformed every aspect of software development. For its part, HPC software is recognized as a highly specialized scientific field in its own right. We discuss the challenges associated with leveraging state-of-the-art AI technologies to develop such a unique and niche class of software, and outline our research directions in the two US Department of Energy-funded projects for advancing HPC software via AI: Ellora and Durban.
Submitted 12 May, 2025;
originally announced May 2025.
-
Can Large Language Models Predict Parallel Code Performance?
Authors:
Gregory Bolet,
Giorgis Georgakoudis,
Harshitha Menon,
Konstantinos Parasyris,
Niranjan Hasabnis,
Hayden Estes,
Kirk W. Cameron,
Gal Oren
Abstract:
Accurate determination of the performance of parallel GPU code typically requires execution-time profiling on target hardware -- an increasingly prohibitive step due to limited access to high-end GPUs. This paper explores whether Large Language Models (LLMs) can offer an alternative approach for GPU performance prediction without relying on hardware. We frame the problem as a roofline classification task: given the source code of a GPU kernel and the hardware specifications of a target GPU, can an LLM predict whether the GPU kernel is compute-bound or bandwidth-bound?
For this study, we build a balanced dataset of 340 GPU kernels, obtained from the HeCBench benchmark suite and written in CUDA and OpenMP, along with their ground-truth labels obtained via empirical GPU profiling. We evaluate LLMs across four scenarios: (1) with access to profiling data of the kernel source, (2) zero-shot with source code only, (3) few-shot with code and label pairs, and (4) fine-tuned on a small custom dataset.
Our results show that state-of-the-art LLMs have a strong understanding of the Roofline model, achieving 100% classification accuracy when provided with explicit profiling data. We also find that reasoning-capable LLMs significantly outperform standard LLMs in zero- and few-shot settings, achieving up to 64% accuracy on GPU source code without profiling information. Lastly, we find that fine-tuning LLMs will require much more data than is currently available.
This work is among the first to use LLMs for source-level roofline performance prediction via classification, and illustrates their potential to guide optimization efforts when runtime profiling is infeasible. Our findings suggest that with better datasets and prompt strategies, LLMs could become practical tools for HPC performance analysis and performance portability.
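The decision rule being predicted is the standard roofline classification: a kernel is compute-bound when its arithmetic intensity exceeds the machine balance (peak FLOP rate divided by peak memory bandwidth). A minimal worked sketch, with illustrative (roughly A100-class FP64) hardware numbers rather than figures from the paper:

```python
# Roofline classification: compare the kernel's arithmetic intensity
# (FLOPs per byte moved) against the machine balance (peak FLOP/s divided
# by peak bandwidth). Hardware numbers below are illustrative only.
def classify(flops: float, bytes_moved: float,
             peak_flops: float, peak_bw: float) -> str:
    intensity = flops / bytes_moved   # FLOP/byte the kernel performs
    balance = peak_flops / peak_bw    # FLOP/byte the machine can sustain
    return "compute-bound" if intensity > balance else "bandwidth-bound"

# e.g., a daxpy-like kernel: 2 FLOPs per 24 bytes of FP64 traffic, on a GPU
# with 9.7 TFLOP/s FP64 peak and 1.5 TB/s memory bandwidth.
print(classify(2.0, 24.0, 9.7e12, 1.5e12))  # -> "bandwidth-bound"
```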
Submitted 6 May, 2025;
originally announced May 2025.
-
Testing the Unknown: A Framework for OpenMP Testing via Random Program Generation
Authors:
Ignacio Laguna,
Patrick Chapman,
Konstantinos Parasyris,
Giorgis Georgakoudis,
Cindy Rubio-González
Abstract:
We present a randomized differential testing approach to test OpenMP implementations. In contrast to previous work that manually creates dozens of verification and validation tests, our approach randomly generates thousands of tests, exposing OpenMP implementations to a wide range of program behaviors. We represent the space of possible random OpenMP tests using a grammar and implement our method as an extension of the Varity program generator. By generating 1,800 OpenMP tests and applying them to three OpenMP implementations -- GCC, Clang, and Intel -- we find various performance anomalies and correctness issues. We also present several case studies that analyze the anomalies and give more details about the classes of tests that our approach creates.
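As a toy illustration of grammar-driven generation (the actual Varity-based grammar is far richer, and the rules below are invented for this sketch), each nonterminal maps to expansion rules that are sampled recursively until a concrete test emerges:

```python
# Toy grammar-driven test generator: nonterminals in {braces} are expanded
# by sampling a rule; the resulting C snippet would be compiled with each
# OpenMP implementation and the outputs compared (differential testing).
import random

GRAMMAR = {
    "pragma": ["#pragma omp parallel for {clause}"],
    "clause": ["reduction(+:s)", "schedule(static)", "schedule(dynamic, 4)"],
    "body":   ["s += a[i];", "s += a[i] * {const};"],
    "const":  ["2.0f", "0.5f", "1e-3f"],
}

def expand(symbol: str) -> str:
    rule = random.choice(GRAMMAR[symbol])
    while "{" in rule:  # expand embedded nonterminals left to right
        start, end = rule.index("{"), rule.index("}")
        rule = rule[:start] + expand(rule[start + 1:end]) + rule[end + 1:]
    return rule

def random_test() -> str:
    return (f"{expand('pragma')}\n"
            f"for (int i = 0; i < n; ++i) {{ {expand('body')} }}")

print(random_test())  # compile with GCC/Clang/Intel and diff the results
```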
Submitted 11 October, 2024;
originally announced October 2024.
-
HPAC-ML: A Programming Model for Embedding ML Surrogates in Scientific Applications
Authors:
Zane Fink,
Konstantinos Parasyris,
Praneet Rathi,
Giorgis Georgakoudis,
Harshitha Menon,
Peer-Timo Bremer
Abstract:
Recent advancements in Machine Learning (ML) have substantially improved its predictive and computational abilities, offering promising opportunities for surrogate modeling in scientific applications. By accurately approximating complex functions with low computational cost, ML-based surrogates can accelerate scientific applications by replacing computationally intensive components with faster model inference. However, integrating ML models into these applications remains a significant challenge, hindering the widespread adoption of ML surrogates as an approximation technique in modern scientific computing.
We propose an easy-to-use, directive-based programming model that enables developers to seamlessly describe the use of ML models in scientific applications. The runtime support, as instructed by the programming model, performs data assimilation using the original algorithm and can replace the algorithm with model inference. Our evaluation across five benchmarks, testing over 5000 ML models, shows speedups of up to 83.6x with minimal accuracy loss (as low as 0.01 RMSE).
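Conceptually -- setting aside the actual directive syntax, which the paper defines -- the runtime behavior can be sketched as a wrapper that records training pairs while running the accurate code, then swaps in surrogate inference once a model exists. Everything below (the trigger threshold, the trivial "trainer") is an illustrative stand-in, not HPAC-ML's implementation:

```python
# Conceptual sketch only: data assimilation followed by surrogate swap-in.
captured, surrogate = [], None

def train(pairs):
    # Stand-in trainer: memoize seen pairs; a real system would fit an ML model.
    table = dict(pairs)
    return lambda x: table.get(x, 0.0)

def maybe_surrogate(kernel):
    def wrapper(x):
        global surrogate
        if surrogate is not None:
            return surrogate(x)          # fast, approximate inference path
        y = kernel(x)                    # accurate, expensive original path
        captured.append((x, y))          # data assimilation
        if len(captured) >= 100:         # hypothetical training trigger
            surrogate = train(captured)
        return y
    return wrapper

@maybe_surrogate
def expensive_physics(x: float) -> float:
    return x ** 3 - 2.0 * x              # stand-in for the real computation

for i in range(150):
    expensive_physics(i * 0.01)          # first 100 calls run the original
```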
Submitted 26 August, 2024; v1 submitted 25 July, 2024;
originally announced July 2024.
-
Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies
Authors:
Brian R. Bartoldson,
James Diffenderfer,
Konstantinos Parasyris,
Bhavya Kailkhura
Abstract:
This paper revisits the simple, long-studied, yet still unsolved problem of making image classifiers robust to imperceptible perturbations. Taking CIFAR10 as an example, SOTA clean accuracy is about $100$%, but SOTA robustness to $\ell_{\infty}$-norm bounded perturbations barely exceeds $70$%. To understand this gap, we analyze how model size, dataset size, and synthetic data quality affect robustness by developing the first scaling laws for adversarial training. Our scaling laws reveal inefficiencies in prior art and provide actionable feedback to advance the field. For instance, we discovered that SOTA methods diverge notably from compute-optimal setups, using excess compute for their level of robustness. Leveraging a compute-efficient setup, we surpass the prior SOTA with $20$% ($70$%) fewer training (inference) FLOPs. We trained various compute-efficient models, with our best achieving $74$% AutoAttack accuracy ($+3$% gain). However, our scaling laws also predict robustness slowly grows then plateaus at $90$%: dwarfing our new SOTA by scaling is impractical, and perfect robustness is impossible. To better understand this predicted limit, we carry out a small-scale human evaluation on the AutoAttack data that fools our top-performing model. Concerningly, we estimate that human performance also plateaus near $90$%, which we show to be attributable to $\ell_{\infty}$-constrained attacks' generation of invalid images not consistent with their original labels. Having characterized limiting roadblocks, we outline promising paths for future research.
Submitted 10 July, 2024; v1 submitted 14 April, 2024;
originally announced April 2024.
-
Taking GPU Programming Models to Task for Performance Portability
Authors:
Joshua H. Davis,
Pranav Sivaraman,
Joy Kitson,
Konstantinos Parasyris,
Harshitha Menon,
Isaac Minn,
Giorgis Georgakoudis,
Abhinav Bhatele
Abstract:
Portability is critical to ensuring high productivity in developing and maintaining scientific software as the diversity in on-node hardware architectures increases. While several programming models provide portability for diverse GPU systems, they don't make any guarantees about performance portability. In this work, we explore several programming models -- CUDA, HIP, Kokkos, RAJA, OpenMP, OpenACC, and SYCL, to assess the consistency of their performance across NVIDIA and AMD GPUs. We use five proxy applications from different scientific domains, create implementations where missing, and use them to present a comprehensive comparative evaluation of the performance portability of these programming models. We provide a Spack scripting-based methodology to ensure reproducibility of experiments conducted in this work. Finally, we analyze the reasons for why some programming models underperform in certain scenarios and in some cases, present performance optimizations to the proxy applications.
Submitted 4 September, 2025; v1 submitted 14 February, 2024;
originally announced February 2024.
-
DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training
Authors:
Aochuan Chen,
Yimeng Zhang,
Jinghan Jia,
James Diffenderfer,
Jiancheng Liu,
Konstantinos Parasyris,
Yihua Zhang,
Zheng Zhang,
Bhavya Kailkhura,
Sijia Liu
Abstract:
Zeroth-order (ZO) optimization has become a popular technique for solving machine learning (ML) problems when first-order (FO) information is difficult or impossible to obtain. However, the scalability of ZO optimization remains an open problem: its use has primarily been limited to relatively small-scale ML problems, such as sample-wise adversarial attack generation. To the best of our knowledge, no prior work has demonstrated the effectiveness of ZO optimization in training deep neural networks (DNNs) without a significant decrease in performance. To overcome this roadblock, we develop DeepZero, a principled ZO deep learning (DL) framework that can scale ZO optimization to DNN training from scratch through three primary innovations. First, we demonstrate the advantages of coordinate-wise gradient estimation (CGE) over randomized vector-wise gradient estimation in training accuracy and computational efficiency. Second, we propose a sparsity-induced ZO training protocol that extends the model pruning methodology using only finite differences to explore and exploit the sparse DL prior in CGE. Third, we develop the methods of feature reuse and forward parallelization to advance the practical implementations of ZO training. Our extensive experiments show that DeepZero achieves state-of-the-art (SOTA) accuracy on ResNet-20 trained on CIFAR-10, approaching FO training performance for the first time. Furthermore, we show the practical utility of DeepZero in applications of certified adversarial defense and DL-based partial differential equation error correction, achieving a 10-20% improvement over SOTA. We believe our results will inspire future research on scalable ZO optimization and contribute to advancing DL with black-box methods. Code is available at https://github.com/OPTML-Group/DeepZero.
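The generic estimator behind CGE is a per-coordinate finite difference that uses only function values. A minimal sketch of that baseline (DeepZero layers sparsity, feature reuse, and parallelization on top; the forward-difference form and step size here are generic, not the paper's exact configuration):

```python
# Coordinate-wise gradient estimation (CGE): approximate each partial
# derivative with a finite difference, using only loss evaluations.
import numpy as np

def cge_gradient(f, theta: np.ndarray, mu: float = 1e-3) -> np.ndarray:
    grad = np.zeros_like(theta)
    f0 = f(theta)                            # reused across all coordinates
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = mu
        grad[i] = (f(theta + e) - f0) / mu   # forward difference
    return grad

# Example: for f(t) = sum(t^2) the true gradient is 2*t.
f = lambda t: float(np.sum(t ** 2))
print(cge_gradient(f, np.array([1.0, -2.0, 0.5])))  # ~ [2, -4, 1]
```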
Submitted 15 March, 2024; v1 submitted 3 October, 2023;
originally announced October 2023.
-
ComPile: A Large IR Dataset from Production Sources
Authors:
Aiden Grossman,
Ludger Paehler,
Konstantinos Parasyris,
Tal Ben-Nun,
Jacob Hegna,
William Moses,
Jose M Monsalve Diaz,
Mircea Trofin,
Johannes Doerfert
Abstract:
Code is increasingly becoming a core data modality of modern machine learning research, impacting not only the way we write code with conversational agents like OpenAI's ChatGPT, Google's Bard, or Anthropic's Claude, and the way we translate code from one language into another, but also the compiler infrastructure underlying the language. While modeling approaches may vary and representations differ, the targeted tasks often remain the same within the individual classes of models. Relying solely on the ability of modern models to extract information from unstructured code forgoes 70 years of programming language and compiler development by not utilizing the structure inherent to programs in the data collection. This detracts from the performance of models working over a tokenized representation of input code and precludes the use of these models in the compiler itself. To work towards the first intermediate representation (IR) based models, we fully utilize the LLVM compiler infrastructure, shared by a number of languages, to generate a 182B-token dataset of LLVM IR. We generated this dataset from programming languages built on the shared LLVM infrastructure, including Rust, Swift, Julia, and C/C++, by hooking into LLVM code generation either through the language's package manager or the compiler directly, to extract the dataset of intermediate representations from production-grade programs. Statistical analysis demonstrates the utility of our dataset not only for large language model training, but also for introspection into the code generation process itself, with the dataset showing great promise for machine-learned compiler components.
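The core lowering step behind such a corpus can be illustrated with clang's standard `-S -emit-llvm` flags. The snippet below is a toy single-file version that assumes clang is on PATH -- it is not ComPile's package-manager-integrated pipeline:

```python
# Toy illustration: lower one C file to textual LLVM IR with clang.
# ComPile's real pipeline hooks package managers and compilers of several
# LLVM-based languages; this shows only the core lowering step.
import pathlib
import subprocess

def to_llvm_ir(c_file: str) -> str:
    out = pathlib.Path(c_file).with_suffix(".ll")
    subprocess.run(
        ["clang", "-O1", "-S", "-emit-llvm", c_file, "-o", str(out)],
        check=True,
    )
    return out.read_text()

pathlib.Path("demo.c").write_text("int add(int a, int b) { return a + b; }\n")
print(to_llvm_ir("demo.c").splitlines()[0])  # e.g., the ModuleID header line
```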
Submitted 30 April, 2024; v1 submitted 27 September, 2023;
originally announced September 2023.
-
HPAC-Offload: Accelerating HPC Applications with Portable Approximate Computing on the GPU
Authors:
Zane Fink,
Konstantinos Parasyris,
Giorgis Georgakoudis,
Harshitha Menon
Abstract:
The end of Dennard scaling and the slowdown of Moore's law led to a shift in technology trends toward parallel architectures, particularly in HPC systems. To continue providing performance benefits, HPC should embrace Approximate Computing (AC), which trades application quality loss for improved performance. However, existing AC techniques have not been extensively applied and evaluated in state-of-the-art hardware architectures such as GPUs, the primary execution vehicle for HPC applications today.
This paper presents HPAC-Offload, a pragma-based programming model that extends OpenMP offload applications to support AC techniques, allowing portable approximations across different GPU architectures. We conduct a comprehensive performance analysis of HPAC-Offload across GPU-accelerated HPC applications, revealing that AC techniques can significantly accelerate HPC applications (1.64x for LULESH on AMD GPUs, 1.57x on NVIDIA) with minimal quality loss (0.1%). Our analysis offers deep insights into the performance of GPU-based AC that guide the future development of AC algorithms and systems for these architectures.
Submitted 31 August, 2023;
originally announced August 2023.
-
Machine Learning-Driven Adaptive OpenMP For Portable Performance on Heterogeneous Systems
Authors:
Giorgis Georgakoudis,
Konstantinos Parasyris,
Chunhua Liao,
David Beckingsale,
Todd Gamblin,
Bronis de Supinski
Abstract:
Heterogeneity has become a mainstream architecture design choice for building High Performance Computing systems. However, heterogeneity poses significant challenges for achieving performance portability of execution. Adapting a program to a new heterogeneous platform is laborious and requires developers to manually explore a vast space of execution parameters. To address those challenges, this paper proposes new extensions to OpenMP for autonomous, machine learning-driven adaptation.
Our solution includes a set of novel language constructs, compiler transformations, and runtime support. We propose a producer-consumer pattern to flexibly define multiple different variants of OpenMP code regions to enable adaptation. Those regions are transparently profiled at runtime to autonomously learn optimizing machine learning models that dynamically select the fastest variant. Our approach significantly reduces the effort of programming adaptive applications on heterogeneous architectures by leveraging machine learning techniques and the code generation capabilities of OpenMP compilation. Using a complete reference implementation in Clang/LLVM, we evaluate three use cases of adaptive CPU-GPU execution. Experiments with HPC proxy applications and benchmarks demonstrate that the proposed adaptive OpenMP extensions automatically choose the best-performing code variants for various adaptation possibilities on several different heterogeneous platforms of CPUs and GPUs.
Submitted 15 March, 2023;
originally announced March 2023.
-
Reliabuild: Searching for High-Fidelity Builds Using Active Learning
Authors:
Harshitha Menon,
Konstantinos Parasyris,
Tom Scogland,
Todd Gamblin
Abstract:
Modern software is incredibly complex. A typical application may comprise hundreds or thousands of reusable components. Automated package managers can help to maintain a consistent set of dependency versions, but ultimately the solvers in these systems rely on constraints generated by humans. At scale, small errors add up, and it becomes increasingly difficult to find high-fidelity configurations. We cannot test all configurations, because the space is combinatorial, so exhaustive exploration is infeasible.
In this paper, we present Reliabuild, an auto-tuning framework that efficiently explores the build configuration space and learns which package versions are likely to result in a successful configuration. We implement two models in Reliabuild to rank the different configurations and use adaptive sampling to select good configurations with fewer samples. We demonstrate Reliabuild's effectiveness by evaluating 31,186 build configurations of 61 packages from the Extreme-scale Scientific Software Stack (E4S). Reliabuild selects good configurations efficiently. For example, Reliabuild selects 3x the number of good configurations compared to random sampling for several packages, including Abyss, Bolt, libnrm, and OpenMPI. Our framework is also able to select all the high-fidelity builds in half the number of samples required by random sampling for packages such as Chai, OpenMPI, py-petsc4py, and slepc. We further use the model to learn statistics about the compatibility of different packages, which will enable package solvers to better select high-fidelity build configurations automatically.
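The adaptive-sampling loop at the heart of such a framework can be sketched as: fit a model on the builds tried so far, then attempt the untried configuration it ranks highest. The `try_build` callback, one-hot feature encoding, and logistic-regression ranker below are illustrative assumptions, not Reliabuild's exact models:

```python
# Hedged sketch of active learning over build configurations.
import numpy as np
from sklearn.linear_model import LogisticRegression

def adaptive_sampling(configs: np.ndarray, try_build, budget: int):
    tried, labels = [], []
    for _ in range(budget):
        untried = [i for i in range(len(configs)) if i not in tried]
        if len(set(labels)) < 2:
            idx = untried[0]                       # bootstrap until both outcomes seen
        else:
            model = LogisticRegression().fit(configs[tried], labels)
            scores = model.predict_proba(configs[untried])[:, 1]
            idx = untried[int(np.argmax(scores))]  # most promising next build
        tried.append(idx)
        labels.append(try_build(configs[idx]))     # True = build succeeded
    return [i for i, ok in zip(tried, labels) if ok]

# Toy usage: one-hot "version choices" where only configuration 0 builds.
configs = np.eye(6)
print(adaptive_sampling(configs, lambda c: bool(c[0]), budget=4))
```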
Submitted 10 February, 2022;
originally announced February 2022.
-
Artificial neural networks for online error detection
Authors:
Vassilis Vassiliadis,
Konstantinos Parasyris,
Christos D. Antonopoulos,
Spyros Lalis,
Nikolaos Bellas
Abstract:
Hardware reliability is adversely affected by the downscaling of semiconductor devices and the scale-out of systems necessitated by modern applications. Apart from crashes, this unreliability often manifests as silent data corruptions (SDCs), affecting application output. Therefore, we need low-cost and low-human-effort solutions to reduce the incidence rate and the effects of SDCs on the quality of application outputs. We propose Artificial Neural Networks (ANNs) as an effective mechanism for online error detection. We train ANNs using software fault injection. We find that the average overhead of our approach, followed by costly error correction via re-execution, is 6.45% in terms of CPU cycles. We also report that ANNs discover 94.85% of faults, resulting in minimal output quality degradation. To validate our approach, we overclock ARM Cortex-A53 CPUs, execute benchmarks on them, and record the program outputs. ANNs prove to be an efficient error detection mechanism, better than a state-of-the-art approximate error detection mechanism (Topaz), both in terms of performance (12.81% CPU overhead) and quality of application output (94.11% detection coverage).
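The training setup can be sketched generically: inject faults into otherwise-correct outputs to obtain labeled data, then fit a small network to flag corruptions. The features, fault model, and network size below are placeholders for illustration, not the paper's per-application detectors:

```python
# Conceptual sketch: learn an SDC detector from fault-injected examples.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(2000, 8))        # healthy output blocks
faulty = clean.copy()                                # software fault injection:
cols = rng.integers(0, 8, size=len(faulty))          # corrupt one value per block
faulty[np.arange(len(faulty)), cols] += rng.choice([1e3, -1e3], len(faulty))

X = np.vstack([clean, faulty])
y = np.array([0] * len(clean) + [1] * len(faulty))   # 1 = corrupted output
detector = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
print("training accuracy:", detector.score(X, y))
```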
Submitted 27 November, 2021;
originally announced November 2021.
-
Scrooge Attack: Undervolting ARM Processors for Profit
Authors:
Christian Göttel,
Konstantinos Parasyris,
Osman Unsal,
Pascal Felber,
Marcelo Pasin,
Valerio Schiavoni
Abstract:
The latest ARM processors are approaching the computational power of x86 architectures while consuming much less energy. Consequently, supply follows demand, with Amazon EC2, Equinix Metal, and Microsoft Azure offering ARM-based instances, and Oracle Cloud Infrastructure about to add such support. We expect this trend to continue, with an increasing number of cloud providers offering ARM-based cloud instances.
ARM processors are more energy-efficient, leading to substantial electricity savings for cloud providers. However, a malicious cloud provider could intentionally reduce the CPU voltage to further lower its costs. Running applications malfunction when undervolting goes below critical thresholds. By avoiding critical voltage regions, a cloud provider can run undervolted instances in a stealthy manner.
This practical experience report describes a novel attack scenario: an attack launched by the cloud provider against its users to aggressively reduce the processor voltage for saving energy to the last penny. We call it the Scrooge Attack and show how it could be executed using ARM-based computing instances. We mimic ARM-based cloud instances by deploying our own ARM-based devices using different generations of Raspberry Pi. Using realistic and synthetic workloads, we demonstrate to which degree of aggressiveness the attack is relevant. The attack is unnoticeable by our detection method up to an offset of -50mV. We show that the attack may even remain completely stealthy for certain workloads. Finally, we propose a set of client-based detection methods that can identify undervolted instances. We support experimental reproducibility and provide instructions to reproduce our results.
Submitted 12 May, 2022; v1 submitted 1 July, 2021;
originally announced July 2021.
-
MATCH: An MPI Fault Tolerance Benchmark Suite
Authors:
Luanzheng Guo,
Giorgis Georgakoudis,
Konstantinos Parasyris,
Ignacio Laguna,
Dong Li
Abstract:
MPI has been ubiquitously deployed in flagship HPC systems, aiming to accelerate distributed scientific applications running on hundreds or thousands of processes and compute nodes. Maintaining the correctness and integrity of MPI application execution is critical, especially for safety-critical scientific applications. Therefore, a collection of effective MPI fault tolerance techniques has been proposed to enable MPI application execution to efficiently resume from system failures. However, there is no structured way to study and compare different MPI fault tolerance designs so as to guide the selection and development of efficient MPI fault tolerance techniques for distinct scenarios. To solve this problem, we design, develop, and evaluate a benchmark suite called MATCH to characterize, research, and comprehensively compare different combinations and configurations of MPI fault tolerance designs. Our investigation derives useful findings: (1) Reinit recovery in general performs better than ULFM recovery; (2) Reinit recovery is independent of the scaling size and the input problem size, whereas ULFM recovery is not; (3) using Reinit recovery with FTI checkpointing is a highly efficient fault tolerance design. MATCH code is available at https://github.com/kakulo/MPI-FT-Bench
Submitted 13 February, 2021;
originally announced February 2021.
-
LEGaTO: Low-Energy, Secure, and Resilient Toolset for Heterogeneous Computing
Authors:
B. Salami,
K. Parasyris,
A. Cristal,
O. Unsal,
X. Martorell,
P. Carpenter,
R. De La Cruz,
L. Bautista,
D. Jimenez,
C. Alvarez,
S. Nabavi,
S. Madonar,
M. Pericas,
P. Trancoso,
M. Abduljabbar,
J. Chen,
P. N. Soomro,
M Manivannan,
M. Berge,
S. Krupop,
F. Klawonn,
Al Mekhlafi,
S. May,
T. Becker,
G. Gaydadjiev
, et al. (20 additional authors not shown)
Abstract:
The LEGaTO project leverages task-based programming models to provide a software ecosystem for Made-in-Europe heterogeneous hardware composed of CPUs, GPUs, FPGAs, and dataflow engines. The aim is to attain one order of magnitude energy savings from the edge to the converged cloud/HPC, balanced against security and resilience challenges. LEGaTO is an ongoing three-year EU H2020 project that started in December 2017.
Submitted 1 December, 2019;
originally announced December 2019.
-
Hardware Versus Software Fault Injection of Modern Undervolted SRAMs
Authors:
Muhammet Abdullah Soyturk,
Konstantinos Parasyris,
Behzad Salami,
Osman Unsal,
Gulay Yalcin,
Leonardo Bautista Gomez
Abstract:
To improve power efficiency, researchers are experimenting with dynamically adjusting the supply voltage of systems below the nominal operating points. However, production systems are typically not allowed to function at voltage settings below the reliable limit. Consequently, existing software fault tolerance studies are based on fault models that inject faults at random locations using fault injection techniques. In this work we study whether random fault injection accurately simulates the behavior of undervolted SRAMs.
Our study extends the Gem5 simulator to support fault injection on the caches of the simulated system. The fault injection framework takes as input fault maps, which describe the faulty bits of SRAMs. To compare random fault injection and hardware-guided fault injection, we use two types of fault maps. The first type is created by undervolting real SRAMs and observing the locations of the erroneous bits, whereas the second type is created by corrupting random bits of the SRAMs. During our study we corrupt the L1 data cache of the simulated system and monitor the effect of the two types of fault maps on the resiliency of six benchmarks. The difference in a benchmark's resiliency between the two fault-map types can be up to 24%.
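To make the comparison concrete: a fault map can be thought of as a list of (word, bit) positions that are unreliable at a given voltage, and injection flips exactly those bits. The sketch below draws a random map; a hardware-guided map would instead come from undervolting measurements. The flat-list representation is a simplification invented for this illustration:

```python
# Simplified sketch of fault-map-driven injection: XOR the listed faulty
# bits into simulated cache contents before resiliency experiments.
import random

def random_fault_map(n_words: int, n_faults: int, bits: int = 32):
    pos = random.sample(range(n_words * bits), n_faults)
    return [(p // bits, p % bits) for p in pos]

def inject(cache: list[int], fault_map) -> list[int]:
    corrupted = list(cache)
    for word, bit in fault_map:
        corrupted[word] ^= 1 << bit       # flip the faulty bit
    return corrupted

cache = [0xDEADBEEF] * 16                 # toy L1 data-cache contents
fmap = random_fault_map(n_words=16, n_faults=4)
print([hex(w) for w in inject(cache, fmap)])
```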
Submitted 30 November, 2019;
originally announced December 2019.
-
A Programming Model and Runtime System for Significance-Aware Energy-Efficient Computing
Authors:
Vassilis Vassiliadis,
Konstantinos Parasyris,
Charalambos Chalios,
Christos D. Antonopoulos,
Spyros Lalis,
Nikolaos Bellas,
Hans Vandierendonck,
Dimitrios S. Nikolopoulos
Abstract:
Reducing energy consumption is one of the key challenges in computing technology. One factor that contributes to high energy consumption is that all parts of the program are considered equally significant for the accuracy of the end-result. However, in many cases, parts of computations can be performed in an approximate way, or even dropped, without affecting the quality of the final output to a significant degree.
In this paper, we introduce a task-based programming model and runtime system that exploit this observation to trade off the quality of program outputs for increased energy efficiency. This is done in a structured and flexible way, allowing for easy exploration of different execution points in the quality/energy space, without code modifications and without adversely affecting application performance. The programmer specifies the significance of tasks, and optionally provides approximations for them. Moreover, she provides hints to the runtime on the percentage of tasks that should be executed accurately in order to reach the target quality of results. The runtime system can apply a number of different policies to decide whether to execute each individual less-significant task in its accurate form or in its approximate version. Policies differ in their runtime overhead, but also in the degree to which they manage to execute tasks according to the programmer's specification.
The results from experiments performed on top of an Intel-based multicore/multiprocessor platform show that, depending on the runtime policy used, our system can achieve an energy reduction of up to 83% compared with a fully accurate execution and up to 35% compared with an approximate version employing loop perforation. At the same time, our approach always results in graceful quality degradation.
Submitted 15 December, 2014;
originally announced December 2014.