-
Contrastive Metric Learning for Point Cloud Segmentation in Highly Granular Detectors
Authors:
Max Marriott-Clarke,
Lazar Novakovic,
Elizabeth Ratzer,
Robert J. Bainbridge,
Loukas Gouskos,
Benedikt Maier
Abstract:
We propose a novel clustering approach for point-cloud segmentation based on supervised contrastive metric learning (CML). Rather than predicting cluster assignments or object-centric variables, the method learns a latent representation in which points belonging to the same object are embedded nearby while unrelated points are separated. Clusters are then reconstructed using a density-based readout in the learned metric space, decoupling representation learning from cluster formation and enabling flexible inference. The approach is evaluated on simulated data from a highly granular calorimeter, where the task is to separate highly overlapping particle showers represented as sets of calorimeter hits. A direct comparison with object condensation (OC) is performed using identical graph neural network backbones and equal latent dimensionality, isolating the effect of the learning objective. The CML method produces a more stable and separable embedding geometry for both electromagnetic and hadronic particle showers, leading to improved local neighbourhood consistency, a more reliable separation of overlapping showers, and better generalization when extrapolating to unseen multiplicities and energies. This translates directly into higher reconstruction efficiency and purity, particularly in high-multiplicity regimes, as well as improved energy resolution. In mixed-particle environments, CML maintains strong performance, suggesting robust learning of the shower topology, while OC exhibits significant degradation. These results demonstrate that similarity-based representation learning combined with density-based aggregation is a promising alternative to object-centric approaches for point cloud segmentation in highly granular detectors.
Submitted 24 March, 2026;
originally announced March 2026.
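The two ingredients the abstract describes — a supervised contrastive objective on point embeddings and a density-based readout in the learned space — can be sketched in a few lines. This is an illustrative toy (hinge-style pair loss, naive connected-components readout), not the paper's actual loss or clustering algorithm; all function names are hypothetical.

```python
import numpy as np

def contrastive_pair_loss(emb, labels, margin=1.0):
    """Supervised contrastive (hinge-style) pair loss: pull same-object
    points together, push different-object points beyond `margin`."""
    n = len(emb)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)  # pairwise distances
    same = labels[:, None] == labels[None, :]
    iu = np.triu_indices(n, k=1)                              # each unordered pair once
    pos = d[iu][same[iu]] ** 2                                # attract positives
    neg = np.clip(margin - d[iu][~same[iu]], 0.0, None) ** 2  # repel negatives
    return (pos.sum() + neg.sum()) / len(d[iu])

def density_readout(emb, eps=0.5):
    """Toy density-based readout: connect points closer than `eps` in the
    learned space and return connected components as cluster labels."""
    n = len(emb)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    adj = d < eps
    labels, current = -np.ones(n, dtype=int), 0
    for i in range(n):
        if labels[i] < 0:                                     # BFS over the eps-graph
            stack = [i]
            while stack:
                j = stack.pop()
                if labels[j] < 0:
                    labels[j] = current
                    stack.extend(np.flatnonzero(adj[j] & (labels < 0)))
            current += 1
    return labels

# two well-separated "showers" in a 2D latent space
emb = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0], [3.1, 3.0]])
truth = np.array([0, 0, 1, 1])
print(density_readout(emb))                 # -> [0 0 1 1]
print(contrastive_pair_loss(emb, truth) > 0)  # -> True
```

In the paper the readout would operate on embeddings produced by a trained graph neural network; here the "learned" space is hand-placed to show the decoupling of representation and cluster formation.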
-
Machine Learning on Heterogeneous, Edge, and Quantum Hardware for Particle Physics (ML-HEQUPP)
Authors:
Julia Gonski,
Jenni Ott,
Shiva Abbaszadeh,
Sagar Addepalli,
Matteo Cremonesi,
Jennet Dickinson,
Giuseppe Di Guglielmo,
Erdem Yigit Ertorer,
Lindsey Gray,
Ryan Herbst,
Christian Herwig,
Tae Min Hong,
Benedikt Maier,
Maryam Bayat Makou,
David Miller,
Mark S. Neubauer,
Cristián Peña,
Dylan Rankin,
Seon-Hee Seo,
Giordon Stark,
Alexander Tapper,
Audrey Corbeil Therrien,
Ioannis Xiotidis,
Keisuke Yoshihara
et al. (98 additional authors not shown)
Abstract:
The next generation of particle physics experiments will face a new era of challenges in data acquisition, due to unprecedented data rates and volumes along with extreme environments and operational constraints. Harnessing this data for scientific discovery demands real-time inference and decision-making, intelligent data reduction, and efficient processing architectures beyond current capabilities. Crucial to the success of this experimental paradigm are several emerging technologies, such as artificial intelligence and machine learning (AI/ML), silicon microelectronics, and the advent of quantum algorithms and processing. Their intersection includes areas of research such as low-power and low-latency devices for edge computing, heterogeneous accelerator systems, reconfigurable hardware, novel codesign and synthesis strategies, readout for cryogenic or high-radiation environments, and analog computing. This white paper presents a community-driven vision to identify and prioritize research and development opportunities in hardware-based ML systems and corresponding physics applications, contributing towards a successful transition to the new data frontier of fundamental science.
Submitted 10 March, 2026; v1 submitted 24 February, 2026;
originally announced February 2026.
-
JetFormer: A Scalable and Efficient Transformer for Jet Tagging from Offline Analysis to FPGA Triggers
Authors:
Ruoqing Zheng,
Chang Sun,
Qibin Liu,
Lauri Laatu,
Arianna Cox,
Benedikt Maier,
Alexander Tapper,
Jose G. F. Coutinho,
Wayne Luk,
Zhiqiang Que
Abstract:
We present JetFormer, a versatile and scalable encoder-only Transformer architecture for particle jet tagging at the Large Hadron Collider (LHC). Unlike prior approaches that are often tailored to specific deployment regimes, JetFormer is designed to operate effectively across the full spectrum of jet tagging scenarios, from high-accuracy offline analysis to ultra-low-latency online triggering. The model processes variable-length sets of particle features without relying on explicit pairwise-interaction inputs, yet achieves competitive or superior performance compared to state-of-the-art methods. On the large-scale JetClass dataset, a large JetFormer variant matches the accuracy of the interaction-rich ParT model (within 0.7%) while using 37.4% fewer FLOPs, demonstrating its computational efficiency and strong generalization. On benchmark HLS4ML 150P datasets, JetFormer consistently outperforms existing models such as MLPs, Deep Sets, and Interaction Networks by 3-4% in accuracy. To bridge the gap to hardware deployment, we further introduce a hardware-aware optimization pipeline based on multi-objective hyperparameter search, yielding compact variants like JetFormer-tiny suitable for FPGA-based trigger systems with sub-microsecond latency requirements. Through structured pruning and quantization, we show that JetFormer can be aggressively compressed with minimal accuracy loss. By unifying high-performance modeling and deployability within a single architectural framework, JetFormer provides a practical pathway for deploying Transformer-based jet taggers in both offline and online environments at the LHC. Code is available at https://github.com/walkieq/JetFormer.
Submitted 23 January, 2026;
originally announced January 2026.
-
Sub-microsecond Transformers for Jet Tagging on FPGAs
Authors:
Lauri Laatu,
Chang Sun,
Arianna Cox,
Abhijith Gandrakota,
Benedikt Maier,
Jennifer Ngadiuba,
Zhiqiang Que,
Wayne Luk,
Maria Spiropulu,
Alexander Tapper
Abstract:
We present the first sub-microsecond transformer implementation on an FPGA achieving competitive performance for state-of-the-art high-energy physics benchmarks. Transformers have shown exceptional performance on multiple tasks in modern machine learning applications, including jet tagging at the CERN Large Hadron Collider (LHC). However, their computational complexity has until now prohibited their use in real-time applications, such as the hardware trigger systems of the collider experiments. In this work, we demonstrate the first application of transformers for jet tagging on FPGAs, achieving $\mathcal{O}(100)$ nanosecond latency with superior performance compared to alternative baseline models. We leverage high-granularity quantization and distributed arithmetic optimization to fit the entire transformer model on a single FPGA, achieving the required throughput and latency. Furthermore, we add multi-head attention and linear attention support to hls4ml, making our work accessible to the broader fast machine learning community. This work advances the next-generation trigger systems for the High Luminosity LHC, enabling the use of transformers for real-time applications in high-energy physics and beyond.
Submitted 26 October, 2025;
originally announced October 2025.
-
QINNs: Quantum-Informed Neural Networks
Authors:
Aritra Bal,
Markus Klute,
Benedikt Maier,
Melik Oughton,
Eric Pezone,
Michael Spannowsky
Abstract:
Classical deep neural networks can learn rich multi-particle correlations in collider data, but their inductive biases are rarely anchored in physics structure. We propose quantum-informed neural networks (QINNs), a general framework that brings quantum information concepts and quantum observables into purely classical models. While the framework is broad, in this paper, we study one concrete realisation that encodes each particle as a qubit and uses the Quantum Fisher Information Matrix (QFIM) as a compact, basis-independent summary of particle correlations. Using jet tagging as a case study, QFIMs act as lightweight embeddings in graph neural networks, increasing model expressivity and plasticity. The QFIM reveals distinct patterns for QCD and hadronic top jets that align with physical expectations. Thus, QINNs offer a practical, interpretable, and scalable route to quantum-informed analyses, that is, tomography, of particle collisions, particularly by enhancing well-established deep learning approaches.
Submitted 20 October, 2025;
originally announced October 2025.
-
LLMs Reproduce Human Purchase Intent via Semantic Similarity Elicitation of Likert Ratings
Authors:
Benjamin F. Maier,
Ulf Aslak,
Luca Fiaschi,
Nina Rismal,
Kemble Fletcher,
Christian C. Luhmann,
Robbie Dow,
Kli Pappas,
Thomas V. Wiecki
Abstract:
Consumer research costs companies billions annually yet suffers from panel biases and limited scale. Large language models (LLMs) offer an alternative by simulating synthetic consumers, but produce unrealistic response distributions when asked directly for numerical ratings. We present semantic similarity rating (SSR), a method that elicits textual responses from LLMs and maps these to Likert distributions using embedding similarity to reference statements. Testing on an extensive dataset comprising 57 personal care product surveys conducted by a leading corporation in that market (9,300 human responses), SSR achieves 90% of human test-retest reliability while maintaining realistic response distributions (KS similarity > 0.85). Additionally, these synthetic respondents provide rich qualitative feedback explaining their ratings. This framework enables scalable consumer research simulations while preserving traditional survey metrics and interpretability.
Submitted 27 October, 2025; v1 submitted 9 October, 2025;
originally announced October 2025.
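The core mapping the abstract describes — from a free-text response embedding to a Likert distribution via similarity to reference statements — can be sketched with cosine similarities and a softmax. This is a minimal illustration, not the paper's exact SSR procedure: the toy 2-d "embeddings" stand in for real sentence-embedding vectors, and the temperature value is an assumption.

```python
import numpy as np

def ssr_distribution(response_vec, reference_vecs, temperature=0.1):
    """Semantic similarity rating sketch: cosine similarity of a response
    embedding to one reference-statement embedding per Likert point,
    softmaxed into a probability distribution over the scale."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = np.array([cos(response_vec, r) for r in reference_vecs])
    z = (sims - sims.max()) / temperature     # temperature-scaled, stable softmax
    p = np.exp(z)
    return p / p.sum()

# toy 2-d "embeddings" for a 5-point purchase-intent scale; a real pipeline
# would embed reference statements such as "I would definitely buy this"
# with a sentence-embedding model
refs = np.array([[1.0, 0.0], [0.7, 0.3], [0.5, 0.5], [0.3, 0.7], [0.0, 1.0]])
resp = np.array([0.25, 0.75])                 # response leaning toward scale point 4
dist = ssr_distribution(resp, refs)
print(dist.round(3), int(np.argmax(dist)))    # mode at index 3 (scale point 4)
```

Returning a full distribution rather than a point estimate is what lets the method reproduce realistic Likert response spreads instead of collapsing onto a single rating.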
-
Unveiling the Social Fabric: A Temporal, Nation-Scale Social Network and its Characteristics
Authors:
Jolien Cremers,
Benjamin Kohler,
Benjamin Frank Maier,
Stine Nymann Eriksen,
Johanna Einsiedler,
Frederik Kølby Christensen,
Sune Lehmann,
David Dreyer Lassen,
Laust Hvas Mortensen,
Andreas Bjerre-Nielsen
Abstract:
Social networks shape individuals' lives, influencing everything from career paths to health. This paper presents a registry-based, multi-layer and temporal network of the entire Danish population in the years 2008-2021 (roughly 7.2 million individuals). Our network maps the relationships formed through family, households, neighborhoods, colleagues and classmates. We outline key properties of this multiplex network, introducing both an individual-focused perspective as well as a bipartite representation. We show how to aggregate and combine the layers, and how to efficiently compute network measures such as shortest paths in large administrative networks. Our analysis reveals how past connections reappear later in other layers, that the number of relationships aggregated over time reflects the position in the income distribution, and that we can recover canonical shortest path length distributions when appropriately weighting connections. Along with the network data, we release a Python package that uses the bipartite network representation for efficient analysis.
Submitted 17 September, 2024;
originally announced September 2024.
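The bipartite representation the abstract mentions can be illustrated with a toy person-group incidence matrix: projecting it onto the person axis yields one person-person layer. This is a standard sketch of the idea under assumed toy data, not the released package's API or the registry data itself.

```python
import numpy as np

# incidence matrix: rows = individuals, columns = groups (e.g. households);
# entry 1 if the person belongs to the group (toy data, not the registry)
B = np.array([
    [1, 0],   # person 0 in household A
    [1, 0],   # person 1 in household A
    [0, 1],   # person 2 in household B
    [0, 1],   # person 3 in household B
])

# projecting the bipartite representation gives the person-person layer:
# two people are linked if they share at least one group
A = (B @ B.T > 0).astype(int)
np.fill_diagonal(A, 0)
print(A)
```

Storing the sparse incidence matrix instead of the projected adjacency is what keeps such layers tractable at population scale, since groups like households stay small while the projection can become dense.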
-
Re-Simulation-based Self-Supervised Learning for Pre-Training Foundation Models
Authors:
Philip Harris,
Michael Kagan,
Jeffrey Krupa,
Benedikt Maier,
Nathaniel Woodward
Abstract:
Self-Supervised Learning (SSL) is at the core of training modern large machine learning models, providing a scheme for learning powerful representations that can be used in a variety of downstream tasks. However, SSL strategies must be adapted to the type of training data and downstream tasks required. We propose RS3L ("Re-simulation-based self-supervised representation learning"), a novel simulation-based SSL strategy that employs a method of re-simulation to drive data augmentation for contrastive learning in the physical sciences, particularly, in fields that rely on stochastic simulators. By intervening in the middle of the simulation process and re-running simulation components downstream of the intervention, we generate multiple realizations of an event, thus producing a set of augmentations covering all physics-driven variations available in the simulator. Using experiments from high-energy physics, we explore how this strategy may enable the development of a foundation model; we show how RS3L pre-training enables powerful performance in downstream tasks such as discrimination of a variety of objects and uncertainty mitigation. In addition to our results, we make the RS3L dataset publicly available for further studies on how to improve SSL strategies.
Submitted 24 February, 2025; v1 submitted 11 March, 2024;
originally announced March 2024.
-
Autoencoders for Real-Time SUEP Detection
Authors:
Simranjit Singh Chhibra,
Nadezda Chernyavskaya,
Benedikt Maier,
Maurizio Pierini,
Syed Hasan
Abstract:
Confining dark sectors with pseudo-conformal dynamics can produce Soft Unclustered Energy Patterns (SUEP) at the Large Hadron Collider: the production of dark quarks in proton-proton collisions leading to a dark shower and the high-multiplicity production of dark hadrons. The final experimental signature is spherically-symmetric energy deposits by an anomalously large number of soft Standard Model particles with a transverse energy of O(100) MeV. Assuming Yukawa-like couplings of the scalar portal state, the dominant production mode is gluon fusion, and the dominant background comes from multi-jet QCD events. We have developed a deep learning-based Anomaly Detection technique to reject QCD jets and identify any anomalous signature, including SUEP, in real-time in the High-Level Trigger system of the Compact Muon Solenoid experiment at the Large Hadron Collider. A deep convolutional neural autoencoder network has been trained using QCD events by taking transverse energy deposits in the inner tracker, electromagnetic calorimeter, and hadron calorimeter sub-detectors as 3-channel image data. Due to the sparse nature of the data, only ~0.5% of the total ~300 k image pixels have non-zero values. To tackle this challenge, a non-standard loss function, the inverse of the so-called Dice Loss, is exploited. The trained autoencoder with learned spatial features of QCD jets can detect 40% of the SUEP events, with a QCD event mistagging rate as low as 2%. The model inference time has been measured using the Intel Core i5-9600KF processor and found to be ~20 ms, which comfortably fits within the High-Level Trigger system's latency budget of O(100) ms. Given the virtue of the unsupervised learning of the autoencoders, the trained model can be applied to any new physics model that predicts an experimental signature anomalous to QCD jets.
Submitted 5 July, 2024; v1 submitted 23 June, 2023;
originally announced June 2023.
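The inverse Dice loss the abstract motivates for sparse images is easy to state: the Dice coefficient measures overlap on the few non-zero pixels, and its inverse is minimized when reconstruction and input overlap perfectly. A minimal numpy sketch, assuming the common smoothed form of the Dice coefficient (the paper's exact smoothing term may differ):

```python
import numpy as np

def inverse_dice_loss(pred, target, eps=1e-6):
    """Inverse Dice loss: for sparse images the Dice coefficient focuses on
    the few non-zero pixels, so minimizing 1/Dice rewards overlap between
    reconstruction and input rather than trivially predicting all zeros."""
    inter = np.sum(pred * target)
    dice = (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
    return 1.0 / dice

img = np.zeros((4, 4))
img[1, 2] = 1.0                                      # one hot pixel in a sparse image
perfect = inverse_dice_loss(img, img)                # full overlap -> loss = 1
misses = inverse_dice_loss(np.zeros((4, 4)), img)    # no overlap -> very large loss
print(perfect, misses > perfect)
```

A plain pixel-wise MSE would be dominated by the ~99.5% empty pixels, which is exactly the failure mode this loss sidesteps.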
-
Triggering Dark Showers with Conditional Dual Auto-Encoders
Authors:
Luca Anzalone,
Simranjit Singh Chhibra,
Benedikt Maier,
Nadezda Chernyavskaya,
Maurizio Pierini
Abstract:
We present a family of conditional dual auto-encoders (CoDAEs) for generic and model-independent new physics searches at colliders. New physics signals, which arise from new types of particles and interactions, are considered in our study as anomalies causing deviations in data with respect to expected background events. In this work, we perform a normal-only anomaly detection, which employs only background samples, to search for manifestations of a dark version of the strong force by applying (variational) auto-encoders to raw detector images, which are large and highly sparse, without leveraging any physics-based pre-processing or strong assumptions about the signals. The proposed CoDAE has a dual-encoder design, which is general and can learn an auxiliary yet compact latent space through spatial conditioning, showing a neat improvement over competitive physics-based baselines and related approaches, therefore also reducing the gap with fully supervised models. It is the first time an unsupervised model is shown to exhibit excellent discrimination against multiple dark shower models, illustrating the suitability of this method as an accurate, fast, model-independent algorithm to deploy, e.g., in the real-time event triggering systems of Large Hadron Collider experiments such as ATLAS and CMS.
Submitted 24 September, 2024; v1 submitted 22 June, 2023;
originally announced June 2023.
-
Evidence for positive long- and short-term effects of vaccinations against COVID-19 in wearable sensor metrics -- Insights from the German Corona Data Donation Project
Authors:
Marc Wiedermann,
Annika H. Rose,
Benjamin F. Maier,
Jakob J. Kolb,
David Hinrichs,
Dirk Brockmann
Abstract:
Vaccines are among the most powerful tools used to combat the COVID-19 pandemic. They are highly effective against infection and substantially reduce the risk of severe disease, hospitalization, ICU admission, and death. However, their potential for attenuating long-term effects of a SARS-CoV-2 infection, commonly denoted as Long COVID, remains elusive and is still a subject of debate. Such long-term effects can be effectively monitored at the individual level by analyzing physiological data collected by consumer-grade wearable sensors. Here, we investigate changes in resting heart rate, daily physical activity, and sleep duration in response to a SARS-CoV-2 infection stratified by vaccination status. Data was collected over a period of two years in the context of the German Corona Data Donation Project with currently around 190,000 monthly active donors. Compared to their unvaccinated counterparts, we find that vaccinated individuals on average experience smaller changes in their vital data that also return to normal levels more quickly. Likewise, extreme changes in vitals during the acute phase of the disease occur less frequently in vaccinated individuals. Our results solidify evidence that vaccines can mitigate long-term detrimental effects of SARS-CoV-2 infections both in terms of duration and magnitude. Furthermore, they demonstrate the value of large-scale, high-resolution wearable sensor data in public health research.
Submitted 6 April, 2022;
originally announced April 2022.
-
Scalable Biophysical Simulations of the Neuromuscular System
Authors:
Benjamin Maier
Abstract:
The human neuromuscular system consisting of skeletal muscles and neural circuits is a complex system that is not yet fully understood. Surface electromyography (EMG) can be used to study muscle behavior from the outside. Computer simulations with detailed biophysical models provide a non-invasive tool to interpret EMG signals and gain new insights into the system. The numerical solution of such multi-scale models imposes high computational workloads, which restricts their application to short simulation time spans or coarse resolutions. We tackled this challenge by providing scalable software employing instruction-level and task-level parallelism, suitable numerical methods and efficient data handling. We implemented a comprehensive, state-of-the-art, multi-scale multi-physics model framework that can simulate surface EMG signals and muscle contraction as a result of neuromuscular stimulation.
This work describes the model framework and its numerical discretization, develops new algorithms for mesh generation and parallelization, covers the use and implementation of our software OpenDiHu, and evaluates its computational performance in numerous use cases. We obtain a speedup of several hundred compared to a baseline solver from the literature and demonstrate that our distributed-memory parallelization and the use of High Performance Computing resources enables us to simulate muscular surface EMG of the biceps brachii muscle with realistic muscle fiber counts of several hundred thousand. We find that certain model effects are only visible with such high resolution. In conclusion, our software contributes to more realistic simulations of the neuromuscular system and provides a tool for applied researchers to complement in vivo experiments with in-silico studies. It can serve as a building block to set up comprehensive models for more organs in the musculoskeletal system.
Submitted 13 July, 2021;
originally announced July 2021.
-
A Network Science Summer Course for High School Students
Authors:
Florian Klimm,
Benjamin F. Maier
Abstract:
We discuss a two-week summer course on Network Science that we taught for high school pupils. We present the concepts and contents of the course, evaluate them, and make the course material available.
Submitted 5 May, 2020;
originally announced May 2020.
-
Dynamo -- Handling Scientific Data Across Sites and Storage Media
Authors:
Yutaro Iiyama,
Benedikt Maier,
Daniel Abercrombie,
Maxim Goncharov,
Christoph Paus
Abstract:
Dynamo is a full-stack software solution for scientific data management. Dynamo's architecture is modular, extensible, and customizable, making the software suitable for managing data in a wide range of installation scales, from a few terabytes stored at a single location to hundreds of petabytes distributed across a worldwide computing grid. This article documents the core system design of Dynamo and describes the applications that implement various data management tasks. A brief report is also given on the operational experiences of the system at the CMS experiment at the CERN Large Hadron Collider and at a small-scale analysis facility.
Submitted 16 May, 2021; v1 submitted 25 March, 2020;
originally announced March 2020.
-
Thresholding normally distributed data creates complex networks
Authors:
George T. Cantwell,
Yanchen Liu,
Benjamin F. Maier,
Alice C. Schwarze,
Carlos A. Serván,
Jordan Snyder,
Guillaume St-Onge
Abstract:
Network data sets are often constructed by some kind of thresholding procedure. The resulting networks frequently possess properties such as heavy-tailed degree distributions, clustering, large connected components and short average shortest path lengths. These properties are considered typical of complex networks and appear in many contexts, prompting consideration of their universality. Here we introduce a simple model for correlated relational data and study the network ensemble obtained by thresholding it. We find that some, but not all, of the properties associated with complex networks can be seen after thresholding the correlated data, even though the underlying data are not "complex". In particular, we observe heavy-tailed degree distributions, a large number of triangles, and short path lengths, while we do not observe non-vanishing clustering or community structure.
Submitted 29 May, 2020; v1 submitted 21 February, 2019;
originally announced February 2019.
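The abstract's central point — thresholding plain correlated Gaussian data already yields network-like heterogeneity — can be demonstrated in a few lines. This sketch uses a simple shared-latent-factor model and an arbitrary correlation threshold; both are illustrative choices, not the paper's specific model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, samples = 50, 200

# simple correlated (non-"complex") data: each node is a noisy copy of a
# shared latent Gaussian factor, with node-specific coupling strengths
coupling = rng.uniform(0.0, 1.0, size=n)
factor = rng.normal(size=samples)
X = coupling[:, None] * factor[None, :] + rng.normal(size=(n, samples))

C = np.corrcoef(X)                 # pairwise correlations between nodes
np.fill_diagonal(C, 0.0)
A = (C > 0.3).astype(int)          # threshold the correlations -> network

deg = A.sum(axis=1)
print(deg.min(), deg.max())        # heterogeneous degrees from plain Gaussians
```

Strongly coupled nodes correlate with many others while weakly coupled nodes correlate with almost none, so the thresholded graph shows a broad degree distribution even though nothing "complex" generated the data.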
-
Generalization of the small-world effect on a model approaching the Erdős-Rényi random graph
Authors:
Benjamin F. Maier
Abstract:
The famous Watts-Strogatz (WS) small-world network model does not approach the Erdős-Rényi (ER) random graph model in the limit of total randomization which can lead to confusion and complicates certain analyses. In this paper we discuss a simple alternative which was first introduced by Song and Wang, where instead of rewiring, edges are drawn between pairs of nodes with a distance-based connection probability. We show that this model is simpler to analyze, approaches the true ER random graph model in the completely randomized limit, and demonstrate that the WS model and the alternative model may yield different quantitative results using the example of a random walk temporal observable. An efficient sampling algorithm for the alternative model is proposed. Analytic results regarding the degree distribution, degree variance, number of two-stars per node, number of triangles per node, clustering coefficient, and random walk mixing time are presented. Subsequently, the small-world effect is illustrated by showing that the clustering coefficient decreases much slower than an upper bound on the message delivery time with increasing long-range connection probability which generalizes the small-world effect from informed searches to random search strategies. Due to its accessibility for analytic evaluations, we propose that this modified model should be used as an alternative reference model for studying the influence of small-world topologies on dynamic systems as well as a simple model to introduce numerous topics when teaching network science.
Submitted 25 June, 2019; v1 submitted 8 January, 2019;
originally announced January 2019.
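The model's key difference from Watts-Strogatz — drawing edges with a distance-based probability rather than rewiring — can be sketched as follows. The interpolation used here (a linear mix between lattice-neighborhood and uniform ER probabilities, recovering the ER ensemble at full randomization) is an illustrative parameterization, not the paper's exact connection function.

```python
import itertools
import random

def small_world_alt(n, k, beta, seed=0):
    """Alternative small-world sampler sketch: instead of rewiring a ring
    lattice, draw each edge independently with a probability that
    interpolates between short-range lattice links (beta = 0) and the
    uniform Erdos-Renyi limit (beta = 1)."""
    rng = random.Random(seed)
    p_er = k / (n - 1)                      # ER probability with mean degree k
    edges = set()
    for u, v in itertools.combinations(range(n), 2):
        ring_dist = min(abs(u - v), n - abs(u - v))
        p_short = 1.0 if ring_dist <= k // 2 else 0.0  # lattice neighborhood
        p = (1 - beta) * p_short + beta * p_er         # distance-based probability
        if rng.random() < p:
            edges.add((u, v))
    return edges

lattice = small_world_alt(20, 4, beta=0.0)   # pure ring lattice
er_like = small_world_alt(20, 4, beta=1.0)   # pure ER ensemble
print(len(lattice))                          # -> 40 (exactly n*k/2 lattice edges)
```

Because every edge is drawn independently, the fully randomized limit is exactly an ER graph, which is what makes this variant easier to analyze than the rewired WS model.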
-
Towards realistic HPC models of the neuromuscular system
Authors:
Chris Bradley,
Nehzat Emamy,
Thomas Ertl,
Dominik Göddeke,
Andreas Hessenthaler,
Thomas Klotz,
Aaron Krämer,
Michael Krone,
Benjamin Maier,
Miriam Mehl,
Tobias Rau,
Oliver Röhrle
Abstract:
Realistic simulations of detailed, biophysics-based, multi-scale models require very high resolution and, thus, large-scale compute facilities. Existing simulation environments, especially for biomedical applications, are designed to allow for a high flexibility and generality in model development. Flexibility and model development, however, are often a limiting factor for large-scale simulations. Therefore, new models are typically tested and run on small-scale compute facilities. By using a detailed biophysics-based, chemo-electromechanical skeletal muscle model and the international open-source software library OpenCMISS as an example, we present an approach to upgrade an existing muscle simulation framework from a moderately parallel version towards a massively parallel one that scales both in terms of problem size and in terms of the number of parallel processes. For this purpose, we investigate different modeling, algorithmic and implementational aspects. We present improvements addressing both numerical and parallel scalability. In addition, our approach includes a novel visualization environment, which is based on the MegaMol environment capable of handling large amounts of simulated data. It offers a platform for fast visualization prototyping, distributed rendering, and advanced visualization techniques. We present results of a variety of scaling studies at the Tier-1 supercomputer HazelHen at the High Performance Computing Center Stuttgart (HLRS). We improve the overall runtime by a factor of up to 2.6 and achieve good scalability on up to 768 cores, where the previous implementation used only 4 cores.
Submitted 9 February, 2018;
originally announced February 2018.
-
Cover time for random walks on arbitrary complex networks
Authors:
Benjamin F. Maier,
Dirk Brockmann
Abstract:
We present an analytical method for computing the mean cover time of a random walk process on arbitrary, complex networks. The cover time is defined as the time a random walker requires to visit every node in the network at least once. This quantity is particularly important for random search processes and target localization in network topologies. Based on the global mean first passage time of target nodes, we derive an estimate for the cumulative distribution function of the cover time based on first passage time statistics. We show that our result can be applied to various model networks, including Erdős-Rényi and Barabási-Albert networks, as well as various real-world networks. Our results reveal an intimate link between first passage and cover time statistics in networks in which structurally induced temporal correlations decay quickly and offer a computationally efficient way for estimating cover times in network related applications.
Submitted 1 August, 2018; v1 submitted 6 June, 2017;
originally announced June 2017.
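The cover time defined in the abstract is easy to probe numerically on a small example. The sketch below simulates it on a complete graph, where the walk reduces to coupon collecting and the exact mean, $(n-1)H_{n-1}$, is known; this is a sanity-check toy, not the paper's analytical method for arbitrary networks.

```python
import random

def cover_time(adj, start=0, rng=None):
    """Number of steps for a random walk to visit every node at least once."""
    rng = rng or random.Random(0)
    visited, node, steps = {start}, start, 0
    while len(visited) < len(adj):
        node = rng.choice(adj[node])   # hop to a uniformly random neighbor
        visited.add(node)
        steps += 1
    return steps

# complete graph on n nodes: covering it is coupon collecting over the
# other n-1 nodes, so the mean cover time is (n-1) * H_{n-1}
n = 10
adj = [[j for j in range(n) if j != i] for i in range(n)]
rng = random.Random(42)
mean = sum(cover_time(adj, rng=rng) for _ in range(2000)) / 2000

H = sum(1.0 / k for k in range(1, n))  # harmonic number H_{n-1}
print(round(mean, 2), round((n - 1) * H, 2))
```

For general graphs no such closed form exists, which is exactly why an estimate built from first passage time statistics, as in the paper, is useful.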