-
Reinforcement learning with reputation-based adaptive exploration promotes the evolution of cooperation
Authors:
An Li,
Wenqiang Zhu,
Chaoqian Wang,
Longzhao Liu,
Hongwei Zheng,
Yishen Jiang,
Xin Wang,
Shaoting Tang
Abstract:
Multi-agent reinforcement learning serves as an effective tool for studying strategy adaptation in evolutionary games. Although prior work has integrated Q-learning with reputation mechanisms to promote cooperation, most existing algorithms adopt fixed exploration rates and overlook the influence of social context on exploratory behavior. In practice, individuals may adjust their willingness to explore based on their reputation and perceived social standing. To address this, we propose a Q-learning model that couples exploration rates with local reputation differences and incorporates asymmetric, state-dependent reputation updates. Our results show that each mechanism independently promotes cooperation, and their combination yields a reinforcing effect. The joint mechanism enhances cooperation by inducing a ``high reputation--low exploration, low reputation--high exploration'' pattern, while adjusting reputation updates to amplify cooperative gains at low status and defection penalties at high status. This study thus offers insights into how social evaluation can shape learning behavior in complex environments.
Submitted 9 April, 2026;
originally announced April 2026.
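The reputation-coupled exploration rule described in the abstract can be sketched with a toy epsilon-greedy agent. The sigmoid coupling, the parameter values, and all function names below are illustrative assumptions, not the paper's actual specification:

```python
import math
import random

def exploration_rate(own_rep, neighbor_reps, eps_max=0.5, k=2.0):
    """Illustrative coupling: an agent whose reputation exceeds its
    neighborhood average explores less; a below-average agent explores
    more. The sigmoid form and parameters are assumptions."""
    diff = own_rep - sum(neighbor_reps) / len(neighbor_reps)
    return eps_max / (1.0 + math.exp(k * diff))

def choose_action(q_values, eps):
    """Standard epsilon-greedy choice over a dict of action -> Q value."""
    if random.random() < eps:
        return random.choice(list(q_values))
    return max(q_values, key=q_values.get)

# A high-reputation agent ends up with a lower exploration rate than a
# low-reputation agent facing the same neighborhood.
eps_high = exploration_rate(own_rep=0.9, neighbor_reps=[0.4, 0.5, 0.3])
eps_low = exploration_rate(own_rep=0.1, neighbor_reps=[0.4, 0.5, 0.3])
action = choose_action({"cooperate": 1.0, "defect": 0.2}, eps_high)
```

Raising k sharpens the contrast between high- and low-reputation agents; the paper's asymmetric, state-dependent reputation updates would enter separately, in how own_rep and neighbor_reps evolve between rounds.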
-
Enhancing Neutrinoless Double-Beta Decay Sensitivity of Liquid-Xenon Time Projection Chamber with Augmented Convolutional Neural Network
Authors:
E. Aprile,
J. Aalbers,
K. Abe,
M. Adrover,
S. Ahmed Maouloud,
L. Althueser,
B. Andrieu,
E. Angelino,
D. Antón Martin,
S. R. Armbruster,
F. Arneodo,
L. Baudis,
M. Bazyk,
L. Bellagamba,
R. Biondi,
A. Bismark,
K. Boese,
R. M. Braun,
G. Bruni,
G. Bruno,
R. Budnik,
C. Cai,
C. Capelli,
J. M. R. Cardoso,
A. P. Cimental Chávez
, et al. (151 additional authors not shown)
Abstract:
The dual-phase time projection chamber (TPC) employing a multi-ton-scale liquid xenon (LXe) target mass is a pioneering detector technology in the search for dark matter. Beyond its advantage in dark matter direct detection efforts, the natural xenon target allows it to search for the neutrinoless double-beta decay ($0νββ$) process, which would violate lepton number conservation and indicate that neutrinos are Majorana particles. However, such $0νββ$ searches have been limited by gamma-ray backgrounds originating from the detector materials. In this work, we designed an augmented convolutional neural network (A-CNN) model to extract additional event-topology information from detector data. Using simulation and calibration data from XENONnT, a leading LXe TPC experiment, our model achieved over 60% background rejection while maintaining 90% signal acceptance. This rejection power improves XENONnT's projected sensitivity of the $^{136}$Xe $0νββ$ search by about 40%. The implementation of A-CNN in the data analysis of future liquid xenon observatories, such as XLZD, will further enhance their sensitivities for $0νββ$ with $^{136}$Xe.
Submitted 20 March, 2026;
originally announced March 2026.
-
Enhancing Angular Sensitivity of Segmented Antineutrino Detectors for Reactor Monitoring Applications
Authors:
Brian C. Crow,
Max A. A. Dornfest,
John G. Learned,
Jackson D. Seligman,
Nathan S. Sibert,
Jeffrey G. Yepez,
Viacheslav A. Li
Abstract:
We present a potential improvement over the standard method developed to determine antineutrino directionality in inverse-beta-decay detectors. The previously developed method for quantifying directionality in monolithic and segmented detectors can be methodologically ambiguous. In this paper, we present a new directionality algorithm and include error analysis. We have developed a new algorithm based on a measure of ``distance'' between two matrices. We report findings from our research on reactor-antineutrino directionality and emphasize that the algorithm has broad applications wherever computationally efficient 2D pattern-matching is desired. We treat data from detector segments in the form of a matrix. The validation of our algorithm boils down to comparing a Monte Carlo generated ``empirical'' data set to a simulated data set. The empirical data set is generated for a particular orientation of the neutrino beam. We identify an optimal segmentation scale in the low-count regime. We also discuss the shortcomings of the conventional method and how this knowledge can be applied to segmented detectors, hybrid designs, and generalized validation, agnostic to the physics of detector design.
Submitted 3 March, 2026;
originally announced March 2026.
-
Machine Learning on Heterogeneous, Edge, and Quantum Hardware for Particle Physics (ML-HEQUPP)
Authors:
Julia Gonski,
Jenni Ott,
Shiva Abbaszadeh,
Sagar Addepalli,
Matteo Cremonesi,
Jennet Dickinson,
Giuseppe Di Guglielmo,
Erdem Yigit Ertorer,
Lindsey Gray,
Ryan Herbst,
Christian Herwig,
Tae Min Hong,
Benedikt Maier,
Maryam Bayat Makou,
David Miller,
Mark S. Neubauer,
Cristián Peña,
Dylan Rankin,
Seon-Hee Seo,
Giordon Stark,
Alexander Tapper,
Audrey Corbeil Therrien,
Ioannis Xiotidis,
Keisuke Yoshihara
, et al. (98 additional authors not shown)
Abstract:
The next generation of particle physics experiments will face a new era of challenges in data acquisition, due to unprecedented data rates and volumes along with extreme environments and operational constraints. Harnessing this data for scientific discovery demands real-time inference and decision-making, intelligent data reduction, and efficient processing architectures beyond current capabilities. Crucial to the success of this experimental paradigm are several emerging technologies, such as artificial intelligence and machine learning (AI/ML), silicon microelectronics, and the advent of quantum algorithms and processing. Their intersection includes areas of research such as low-power and low-latency devices for edge computing, heterogeneous accelerator systems, reconfigurable hardware, novel codesign and synthesis strategies, readout for cryogenic or high-radiation environments, and analog computing. This white paper presents a community-driven vision to identify and prioritize research and development opportunities in hardware-based ML systems and corresponding physics applications, contributing towards a successful transition to the new data frontier of fundamental science.
Submitted 10 March, 2026; v1 submitted 24 February, 2026;
originally announced February 2026.
-
Light Dark Matter Search with 7.8 Tonne-Year of Ionization-Only Data in XENONnT
Authors:
E. Aprile,
J. Aalbers,
K. Abe,
M. Adrover,
S. Ahmed Maouloud,
L. Althueser,
B. Andrieu,
E. Angelino,
D. Antón Martin,
S. R. Armbruster,
F. Arneodo,
L. Baudis,
M. Bazyk,
V. Beligotti,
L. Bellagamba,
R. Biondi,
A. Bismark,
K. Boese,
R. M. Braun,
G. Bruni,
G. Bruno,
R. Budnik,
C. Cai,
C. Capelli,
J. M. R. Cardoso
, et al. (152 additional authors not shown)
Abstract:
We report on a blinded search for dark matter (DM) using ionization-only (S2-only) signals in XENONnT with a total exposure of $7.83\,\mathrm{tonne}\times\mathrm{year}$ over 579 days in three science runs. Dedicated background suppression techniques and the first complete S2-only background model in XENONnT provide sensitivity to nuclear recoils of [0.5, 5.0] $\mathrm{keV_\mathrm{nr}}$ and electronic recoils of [0.04, 0.7] $\mathrm{keV_\mathrm{ee}}$. No significant excess over the expected background is observed, and we set 90% confidence level upper limits on spin-independent DM--nucleon and spin-dependent DM--neutron scattering for DM masses between 3 and 8 $\mathrm{GeV}/c^2$, as well as on DM--electron scattering, axion-like particles, and dark photons, improving on previous constraints. For spin-independent DM--nucleon scattering, we exclude cross sections above $6.0\times10^{-45}\,\mathrm{cm}^2$ at a DM mass of 5 $\mathrm{GeV}/c^2$, pushing the XENONnT sensitivity closer to the region where coherent elastic neutrino-nucleus scattering ($\text{CE}ν\text{NS}$) becomes an irreducible background.
Submitted 16 January, 2026;
originally announced January 2026.
-
Can industrial overcapacity enable seasonal flexibility in electricity use? A case study of aluminum smelting in China
Authors:
Ruike Lyu,
Anna Li,
Jianxiao Wang,
Hongxi Luo,
Yan Shen,
Hongye Guo,
Ershun Du,
Chongqing Kang,
Jesse Jenkins
Abstract:
In many countries, declining demand in energy-intensive industries such as cement, steel, and aluminum is leading to industrial overcapacity. Although industrial overcapacity is traditionally envisioned as problematic and resource-wasteful, it could unlock energy-intensive industries' flexibility in electricity use. Here, using China's aluminum smelting industry as a case study, we evaluate the system-level cost-benefit of retaining energy-intensive industries' overcapacity for flexible electricity use in decarbonized energy systems. We find that overcapacity can enable aluminum smelters to adopt a seasonal operation paradigm, ceasing production during winter load peaks that are exacerbated by heating electrification and renewable seasonality. This seasonal operation paradigm could reduce the investment and operational costs of China's decarbonized electricity system by 23-32 billion CNY/year (11-15% of the aluminum smelting industry's product value), sufficient to offset the increased smelter maintenance and product storage costs associated with overcapacity. It may also provide an opportunity for seasonally complementary labor deployment across the aluminum smelting and thermal power generation sectors, offering a potential pathway for mitigating socio-economic disruptions caused by industrial restructuring and energy decarbonization.
Submitted 26 March, 2026; v1 submitted 27 November, 2025;
originally announced November 2025.
-
Initial Assessment of Second Generation of Large-Area Picosecond Photodetectors with Multi-Channel Systems-on-a-Chip Readout
Authors:
V. A. Li,
O. A. Akindele,
M. Bondin,
S. R. Durham,
J. A. Foot,
M. J. Ford,
S. -W. Stradleigh
Abstract:
We first briefly describe the history and motivation behind Cherenkov and scintillation light detection. We then discuss the instrumentation needed to detect these photons as it applies to both photodetectors and readout electronics. One of the motivations is future large neutrino detectors that could in principle differentiate between Cherenkov and scintillation light if using novel water-based scintillators. In this paper, we present the first measurements utilizing the second generation of Large Area Picosecond Photodetectors (LAPPDs) in conjunction with commercial system-on-a-chip readouts from Nalu Scientific -- specifically, the High Density System on Chip (HDSoC) and Advanced ASoC Rapid Digitizer, Variable Adaptive Readout Chip (AARDVARC) platforms. These state-of-the-art full-waveform digitizers feature sampling rates on the order of 1 and 10 samples per nanosecond, respectively. Using a picosecond laser, we measured the timing jitter between a pair of LAPPD channels, demonstrating the potential of this setup for precise timing applications.
Submitted 27 November, 2025;
originally announced November 2025.
-
Detecting Linear Dichroism with Atomic Resolution
Authors:
Roger Guzman,
Ján Rusz,
Ang Li,
Juan Carlos Idrobo,
Wu Zhou,
Jaume Gazquez
Abstract:
X-ray linear dichroism has been pivotal for probing electronic anisotropies, but its inherent limited spatial resolution precludes atomic-scale investigations of orbital polarization. Here we introduce a versatile electron linear dichroism methodology in scanning transmission electron microscopy that overcomes these constraints. By exploiting momentum-transfer-dependent electron energy-loss spectroscopy with an atomic-sized probe, we directly visualize orbital occupation at individual atomic columns in real space. Using strained La0.7Sr0.3MnO3 thin films as a model system, we resolve the Mn-3d eg orbital polarization with sub-angstrom precision. We show that compressive strain stabilizes 3z2-r2 occupation while tensile strain favors x2-y2. These results validate our approach against established X-ray measurements while achieving the ultimate single atomic-column sensitivity. We further demonstrate two optimized signal extraction protocols that adapt to experimental constraints without compromising sensitivity. This generalizable platform opens unprecedented opportunities to study symmetry-breaking phenomena at individual defects, interfaces, and in quantum materials where atomic-scale electronic anisotropy governs emergent functionality.
Submitted 24 November, 2025;
originally announced November 2025.
-
Design and development of optical modules for the BUTTON-30 detector
Authors:
D. S. Bhattacharya,
J. Bae,
M. Bergevin,
J. Boissevain,
S. Boyd,
K. Bridges,
L. Capponi,
J. Coleman,
D. Costanzo,
T. Cunniffe,
S. A. Dazeley,
M. V. Diwan,
S. R. Durham,
E. Ellingwood,
A. Enqvist,
T. Gamble,
S. Gokhale,
J. Gooding,
C. Graham,
E. Gunger,
W. Hopkins,
I. Jovanovic,
T. Kaptanoglu,
E. Kneale,
L. Lebanowski
, et al. (41 additional authors not shown)
Abstract:
BUTTON-30 is a neutrino detector demonstrator located in the STFC Boulby underground facility in the north-east of England. The main goal of the project is to deploy and test the performance of the gadolinium-loaded water-based liquid scintillator for neutrino detection in an underground environment. This will pave the way for a future large-volume neutrino observatory that can also perform remote monitoring of nuclear reactors for nonproliferation. This paper describes the design and construction of the watertight optical modules of the experiment.
Submitted 26 January, 2026; v1 submitted 5 November, 2025;
originally announced November 2025.
-
Fokker-Planck equation governing the distribution of walkers in AFQMC
Authors:
Alfred Li,
Ankit Mahajan,
Sandeep Sharma
Abstract:
Auxiliary-field quantum Monte Carlo (AFQMC) is typically formulated as an open-ended random walk in an overcomplete space of Slater determinants, implemented through a Langevin equation. However, the explicit form of the underlying Fokker-Planck equation governing the walker population distribution has remained unknown. In this paper, we derive the Fokker-Planck equation for AFQMC and propose a novel numerical scheme to solve it. The solution of the Fokker-Planck equation reveals the wavefunction actually sampled by the AFQMC algorithm. Interestingly, we find that even when the exact ground state is used as a guiding wavefunction in constrained path AFQMC, contrary to the common assumption, the wavefunction sampled by AFQMC is not exact. Beyond clarifying several fundamental aspects of AFQMC, the availability of a Fokker-Planck equation formulation opens new avenues for systematically improving its accuracy, which we outline in this paper.
Submitted 22 October, 2025;
originally announced October 2025.
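The Langevin-to-Fokker-Planck correspondence this abstract builds on can be illustrated with a generic one-dimensional toy model (not the AFQMC propagation itself, which walks in a space of Slater determinants). For a quadratic potential, the stationary solution of the Fokker-Planck equation is a standard normal, which an ensemble of Langevin walkers reproduces:

```python
import math
import random

random.seed(0)

# Overdamped Langevin update dx = -V'(x) dt + sqrt(2 dt) * xi with
# V(x) = x^2 / 2. The matching Fokker-Planck equation,
#   dP/dt = d/dx [V'(x) P] + d^2 P / dx^2,
# has stationary solution P(x) proportional to exp(-x^2 / 2):
# mean 0, variance 1.
dt, n_steps, n_walkers = 0.01, 1200, 3000
walkers = [0.0] * n_walkers
for _ in range(n_steps):
    walkers = [x - x * dt + math.sqrt(2 * dt) * random.gauss(0.0, 1.0)
               for x in walkers]

# Ensemble moments should approach those of the stationary distribution.
mean = sum(walkers) / n_walkers
var = sum((x - mean) ** 2 for x in walkers) / n_walkers
```

In AFQMC the analogous stationary question is which wavefunction the walker distribution actually represents; the paper's result is that it need not be the exact ground state even when an exact guiding wavefunction is used.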
-
The BUTTON-30 detector at Boulby
Authors:
J. Bae,
M. Bergevin,
E. P. Bernard,
D. S. Bhattacharya,
J. Boissevain,
S. Boyd,
K. Bridges,
L. Capponi,
J. Coleman,
D. Costanzo,
T. Cunniffe,
S. A. Dazeley,
M. V. Diwan,
S. R. Durham,
E. Ellingwood,
A. Enqvist,
T. Gamble,
S. Gokhale,
J. Gooding,
C. Graham,
E. Gunger,
J. J. Hecla,
W. Hopkins,
I. Jovanovic,
T. Kaptanoglu
, et al. (40 additional authors not shown)
Abstract:
The BUTTON-30 detector is a 30-tonne technology demonstrator designed to evaluate the potential of hybrid event detection, simultaneously exploiting both Cherenkov and scintillation light to detect particles produced in neutrino interactions. The detector is installed at a depth of 1.1 km in the Boulby Underground Laboratory, allowing the performance of this new technology to be tested underground in a low-background environment. This paper describes the design and construction of the experiment.
Submitted 5 March, 2026; v1 submitted 15 October, 2025;
originally announced October 2025.
-
First stave results towards mitigating sensor fracturing with interposers in the ATLAS ITk strips barrel
Authors:
G. D'Amen,
D. Dewhurst,
E. Dibley,
J. Dopke,
E. Duden,
G. Hawker,
B. Gallop,
N. Ghorbanian,
P. Jacobson,
M. Kurth,
A. Li,
D. Lynn,
A. Petersen,
P. Phillips,
D. Russell,
C. Sawyer,
C. Solaz,
W. Sorger,
S. Stucci,
A. Tishelman-Charny,
A. Tricoli,
G. van Nieuwenhuizen
Abstract:
At the conclusion of Run 3 of the Large Hadron Collider (LHC) at CERN, the accelerator complex will be upgraded to the High-Luminosity LHC (HL-LHC), allowing it to increase the dataset sizes of LHC experiments by about a factor of 20. This significant increase in dataset size will improve the sensitivity and precision of all physics analyses from LHC experiments, but will come with a more challenging data-taking environment. In order to handle this, the ATLAS detector will undergo a substantial upgrade, including an upgrade of its inner tracker to an all-silicon tracker called the Inner Tracker (ITk), made of pixel and strip sub-detectors. During the pre-production phase of the ITk strips, it was discovered that thermally cycling modules loaded onto local support structures led to physical fractures in silicon sensors due to the induced thermal stress. This is understood to be the result of several factors, including the difference in coefficients of thermal expansion between the different module layers, and the close proximity of the module electrical components. Several mitigation strategies were tested to reduce the rate of module fracturing. This paper describes the assembly setup, testing setups, and electrical testing results of ITk strips barrel modules loaded onto local support structures. 69 of 70 modules with an in-built additional kapton layer were found to survive testing down to -70°C.
Submitted 25 August, 2025;
originally announced August 2025.
-
Sensitivity of an Early Dark Matter Search using the Electromagnetic Calorimeter as a Target for the Light Dark Matter eXperiment
Authors:
LDMX Collaboration,
Torsten Åkesson,
Elizabeth Berzin,
Cameron Bravo,
Liam Brennan,
Lene Kristian Bryngemark,
Pierfrancesco Butti,
Filippo Delzanno,
E. Craig Dukes,
Valentina Dutta,
Bertrand Echenard,
Ralf Ehrlich,
Thomas Eichlersmith,
Einar Elén,
Andrew Furmanski,
Victor Gomez,
Matt Graham,
Chiara Grieco,
Craig Group,
Hannah Herde,
Christian Herwig,
David G. Hitlin,
Tyler Horoho,
Joseph Incandela,
Nathan Jay
, et al. (31 additional authors not shown)
Abstract:
The Light Dark Matter eXperiment (LDMX) is proposed to employ a thin tungsten target and a multi-GeV electron beam to carry out a missing momentum search for the production of dark matter candidate particles. We study the sensitivity for a complementary missing-energy-based search using the LDMX Electromagnetic Calorimeter as an active target with a focus on early running. In this context, we construct an event selection from a limited set of variables that projects sensitivity into previously unexplored regions of light dark matter phase space -- down to an effective dark photon interaction strength $y$ of approximately $2\times10^{-13}$ ($5\times10^{-12}$) for a 1 MeV (10 MeV) dark matter candidate mass.
Submitted 7 October, 2025; v1 submitted 11 August, 2025;
originally announced August 2025.
-
Tight-binding photonics
Authors:
Jing Li,
Aodong Li,
Yutao Chen,
Tao Xiao,
Renwen Huang,
Xiaolu Zhuo,
Jun Guan,
Zhen Gao,
Peng Zhan,
Minghui Lu,
Biye Xie
Abstract:
Photonics, dealing with the generation, manipulation, and detection of photons in various systems, lays the foundation of many advanced technologies. A key task of photonics is to know how photons propagate in complex media such as periodic and aperiodic photonic crystals. The conventional wisdom is to numerically solve the Maxwell equations either by dedicated numerical techniques or brute-force finite-element calculations. Recently, the strict analogy between photonic crystals and theoretical tight-binding models provides an unprecedentedly convenient way of understanding the spectra and wavefunctions of photonic systems by mapping the complicated differential equations into matrix Hamiltonians that can be easily solved through band theory and exact diagonalization. In this paper, we present a timely review of tight-binding-like photonics in various platforms, covering fundamental theories, experimental realizations, unique physical effects, and their potential applications. We also provide a brief outlook on the future trends of this active area. Our review offers an in-depth and comprehensive picture of this rapidly developing field and may shed light on the future design of advanced tight-binding-like photonic devices.
Submitted 6 August, 2025;
originally announced August 2025.
-
End-to-end image compression and reconstruction with ultrahigh speed and ultralow energy enabled by opto-electronic computing processor
Authors:
Yuhang Wang,
Ang Li,
Yihang Shao,
Qiang Li,
Yang Zhao,
Shilong Pan
Abstract:
The rapid development of AR/VR, remote sensing, satellite radar, and medical equipment has created an imperative demand for ultra-efficient image compression and reconstruction that exceeds the capabilities of electronic processors. For the first time, we demonstrate an end-to-end image compression and reconstruction approach using an optoelectronic computing processor, achieving orders-of-magnitude higher speed and lower energy consumption than electronic counterparts. At its core is a 32×32 silicon photonic computing chip, which monolithically integrates 32 high-speed modulators, 32 detectors, and a programmable photonic matrix core, copackaged with all necessary control electronics (TIA, ADC, DAC, FPGA, etc.). Leveraging the photonic matrix core's programmability, the processor generates trainable compressive matrices, enabling adjustable image compression ratios (from 2× to 256×) to meet diverse application needs. Deploying a custom lightweight photonic integrated circuit oriented network (LiPICO-Net) enables high-quality reconstruction of compressed images. Our approach delivers an end-to-end latency of only 49.5 ps/pixel while consuming less than 10.6 nJ/pixel, both metrics representing a 2-3 orders of magnitude improvement compared with classical models running on state-of-the-art GPUs. We validate the system on 130-million-pixel aerial imagery, enabling real-time compression where electronic systems falter due to power and latency constraints. This work not only provides a transformative solution for massive image processing but also opens new avenues for photonic computing applications.
Submitted 30 July, 2025;
originally announced July 2025.
-
LensingFlow: An Automated Workflow for Gravitational Wave Lensing Analyses
Authors:
Mick Wright,
Justin Janquart,
Paolo Cremonese,
Juno C. L. Chan,
Alvin K. Y. Li,
Otto A. Hannuksela,
Rico K. L. Lo,
Jose M. Ezquiaga,
Daniel Williams,
Michael Williams,
Gregory Ashton,
Rhiannon Udall,
Anupreeta More,
Laura Uronen,
Ankur Barsode,
Eungwang Seo,
David Keitel,
Srashti Goyal,
Jef Heynen,
Anna Liu,
Prasia Pankunni
Abstract:
In this work, we present LensingFlow, an implementation of an automated workflow to search for evidence of gravitational lensing in a large series of gravitational wave events. This workflow conducts searches for evidence in all generally considered lensing regimes. The implementation is built atop the Asimov automation framework and the CBCFlow metadata management software; the resulting product therefore encompasses both the automated running and status checking of jobs in the workflow and the automated production and storage of relevant metadata from these jobs to allow for later reproduction. This workflow encompasses a number of existing lensing pipelines and has been designed to accommodate any additional future pipelines, providing both a current and future basis on which to conduct large-scale lensing analyses of gravitational wave signal catalogues. The workflow also implements a prioritisation management system for jobs submitted to the schedulers in common usage in computing clusters, ensuring both the completion of the workflow across the entire catalogue of events and the priority completion of the most significant candidates. As a first proof-of-concept demonstration, we deploy LensingFlow on a mock data challenge comprising 10 signals in which signatures of each lensing regime are represented. LensingFlow successfully ran and identified the candidates from these data through its automated checks of results from constituent analyses.
Submitted 29 July, 2025; v1 submitted 27 July, 2025;
originally announced July 2025.
-
CycleGAN-Driven Transfer Learning for Electronics Response Emulation in High-Purity Germanium Detectors
Authors:
Kevin Bhimani,
Julieta Gruszko,
Morgan Clark,
John Wilkerson,
Aobo Li
Abstract:
High-Purity Germanium (HPGe) detectors are a key technology for rare-event searches such as neutrinoless double-beta decay ($0νββ$) and dark matter experiments. Pulse shapes from these detectors vary with interaction topology and thus encode information critical for event classification. Pulse shape simulations (PSS) are essential for modeling analysis cuts that distinguish signal events from backgrounds and for generating reliable simulations of energy spectra. Traditional PSS methods rely on a series of first-principles corrections to replicate the effect of readout electronics, requiring challenging fits over large parameter spaces and often failing to accurately model the data. We present a neural network architecture, the Cyclic Positional U-Net (https://github.com/aobol/CPU-Net), that performs translations of simulated pulses so that they closely resemble measured detector signals. Using a Cycle Generative Adversarial Network (CycleGAN) framework, this Response Emulation Network (REN) learns a data-driven mapping between simulated and measured pulses with high fidelity, without requiring a predetermined response model. We use data from an HPGe detector with an inverted-coaxial point contact (ICPC) geometry to show that CPU-Net effectively captures and reproduces critical pulse shape features, allowing more realistic simulations without detector-specific tuning. CPU-Net achieves up to a factor-of-four improvement in distribution-level agreement for pulse shape parameter reconstruction, while preserving the topology-dependent information required for pulse-shape discrimination.
Submitted 14 January, 2026; v1 submitted 11 July, 2025;
originally announced July 2025.
-
Algorithm to extract direction in 2D discrete distributions and a continuous Frobenius norm
Authors:
Jeffrey G. Yepez,
Jackson D. Seligman,
Max A. A. Dornfest,
Brian C. Crow,
John G. Learned,
Viacheslav A. Li
Abstract:
In this study, we present a novel algorithm for determining directionality in 2D distributions of discrete data. We compare a reference dataset with a known direction to a measured dataset with an unknown direction using the Frobenius norm of the difference (FND) to find the unknown direction. To generalize this concept, we develop a continuous Frobenius norm of the difference (CFND) as a continuous analog of the FND and derive its analytical expression. By relating fitted and normalized 2D Gaussian distributions, we show that the CFND approximates the FND, and we validate this relationship with computer simulations. We find that a first-order approximation of the CFND between two similar Gaussian distributions takes the form of an absolute sine function, offering a simple analytical form with potential for specialized applications in segmented inverse beta decay (IBD) neutrino detectors, astronomy, machine learning, and more. Although this method may extend readily to 3D scalar fields, our focus here is on 2D real-valued fields, which apply directly to directionality. Our methodology consists of modeling a 2D Gaussian distribution, binning the data into a histogram, and encoding it as a square matrix. Rotating this matrix around its geometric center and comparing it to a measured dataset using the FND gives rotational data that we fit with an absolute sine function. The location of the minimum of this fit is the angle closest to the true direction in the measured dataset. We present the derivation and discuss initial applications of the CFND in our novel algorithm, demonstrating its success in approximating directionality in 2D distributions.
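The pipeline described above (bin a 2D Gaussian sample into a square matrix, compare rotated reference histograms to the measured one via the FND, and take the minimizing angle) can be sketched in a few lines of NumPy. This is an illustrative simplification: it rotates the reference sample points before histogramming rather than rotating the matrix itself, skips the absolute-sine fit in favour of a direct grid minimum, and all names (`make_histogram`, `fnd_direction`) and settings are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_histogram(points, bins=32, extent=4.0):
    """Bin 2D points into a square matrix, normalised to unit sum."""
    h, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=bins,
                             range=[[-extent, extent], [-extent, extent]])
    return h / h.sum()

def gaussian_sample(angle_deg, n=20000, sx=2.0, sy=0.5):
    """Sample an anisotropic 2D Gaussian whose major axis points at angle_deg."""
    pts = rng.normal(size=(n, 2)) * [sx, sy]
    t = np.radians(angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return pts @ rot.T

def fnd_direction(measured_hist, angles):
    """Return the candidate angle minimising the Frobenius norm of the
    difference (FND) between rotated reference histograms and the data."""
    fnds = [np.linalg.norm(measured_hist - make_histogram(gaussian_sample(a)))
            for a in angles]          # np.linalg.norm = Frobenius norm for 2D
    return angles[int(np.argmin(fnds))]

# "Measured" dataset with an unknown direction of 30 degrees.
measured = make_histogram(gaussian_sample(30.0))
angles = np.arange(0.0, 180.0, 5.0)   # centred Gaussian: 180-degree ambiguity
best = fnd_direction(measured, angles)
```

Note the angle grid only spans [0, 180) degrees: a centered Gaussian is symmetric under a half-turn, so the FND cannot distinguish an angle from its opposite.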
Submitted 26 February, 2026; v1 submitted 20 June, 2025;
originally announced June 2025.
-
Unveiling Nano-scale Crystal Deformation using Coherent X-ray Dynamical Diffraction
Authors:
Longlong Wu,
David Yang,
Wei Wang,
Shinjae Yoo,
Ross J. Harder,
Wonsuk Cha,
Aiguo Li,
Ian K. Robinson
Abstract:
Visualization of internal deformation fields in crystalline materials helps bridge the gap between theoretical models and practical applications. Applying Bragg coherent diffraction imaging under X-ray dynamical diffraction conditions provides a promising approach to the longstanding challenge of investigating the deformation fields in micron-sized crystals. Here, we present an automatic differentiation-based reconstruction method that integrates dynamical scattering theory to accurately reconstruct deformation fields in large crystals. Using this forward model, our simulated and experimental results demonstrate that three-dimensional local strain information inside a large crystal can be accurately reconstructed under coherent X-ray dynamical diffraction conditions with Bragg coherent X-ray diffraction imaging. These findings open an avenue for extending the investigation of local deformation fields to microscale crystals while maintaining nanoscale resolution, leveraging the enhanced coherence and brightness of advanced X-ray sources.
Submitted 25 December, 2025; v1 submitted 18 June, 2025;
originally announced June 2025.
-
A prototype reactor-antineutrino detector based on $^6$Li-doped pulse-shaping-discriminating plastic scintillator
Authors:
O. Benevides Rodrigues,
E. P. Bernard,
N. S. Bowden,
C. Bravo,
R. Carr,
T. M. Classen,
A. J. Conant,
S. A. Dazeley,
M. T. Dunbrack,
S. R. Durham,
A. S. Erickson,
A. Haghighat,
K. M. Heeger,
P. Huber,
A. Irani,
O. Kyzylova,
V. A. Li,
J. M. Link,
B. R. Littlejohn,
F. Machado,
M. P. Mendenhall,
H. P. Mumm,
J. Newby,
C. Roca,
J. Ross
, et al. (4 additional authors not shown)
Abstract:
An aboveground 60-kg reactor-antineutrino detector prototype, composed of a 2-dimensional array of 36 $^{6}$Li-doped pulse-shape-sensitive plastic scintillator bars, is described. Each bar is 50~cm long with a 5.5~cm square cross section. Doped with $^{6}$Li at 0.1\% by mass, the detector is capable of identifying correlated energy depositions for the detection of reactor antineutrinos via the inverse-beta-decay (IBD) reaction. Each bar is wrapped with a specular reflector that directs photons towards PMTs mounted at both ends of the bar. This paper highlights the construction, key features, and main performance characteristics of the system. The system, which relies on multiple observables such as pulse shape discrimination (PSD), energy, position, and timing, is capable of detecting IBD-like neutron-correlated backgrounds, long-lived decay chains, and cosmogenic isotopes.
Submitted 10 November, 2025; v1 submitted 8 May, 2025;
originally announced May 2025.
-
Unbinned inclusive cross-section measurements with machine-learned systematic uncertainties
Authors:
Lisa Benato,
Cristina Giordano,
Claudius Krause,
Ang Li,
Robert Schöfbeck,
Dennis Schwarz,
Maryam Shooshtari,
Daohan Wang
Abstract:
We introduce a novel methodology for addressing systematic uncertainties in unbinned inclusive cross-section measurements and related collider-based inference problems. Our approach incorporates known analytic dependencies on parameters of interest, including signal strengths and nuisance parameters. When these dependencies are unknown, as is frequently the case for systematic uncertainties, dedicated neural network parametrizations provide an approximation that is trained on simulated data. The resulting machine-learned surrogate captures the complete parameter dependence of the likelihood ratio, providing a near-optimal test statistic. As a case study, we perform a first-principles inclusive cross-section measurement of $\textrm{H}\rightarrowττ$ in the single-lepton channel, utilizing simulated data from the FAIR Universe Higgs Uncertainty Challenge. Results on Asimov data, from large-scale toy studies, and from the Fisher information demonstrate significant improvements over traditional binned methods. Our computer code ``Guaranteed Optimal Log-Likelihood-based Unbinned Method'' (GOLLUM) for machine learning and inference is publicly available.
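The "known analytic dependence on the signal strength" that the method exploits can be illustrated with a minimal unbinned maximum-likelihood fit, where the signal fraction enters the per-event likelihood in closed form. This NumPy sketch is a generic toy (a Gaussian peak on an exponential background, with hypothetical parameters and names), not the GOLLUM code, and it omits the nuisance-parameter surrogates entirely.

```python
import numpy as np

rng = np.random.default_rng(2)

# Analytic per-event densities: Gaussian signal peak on an exponential background.
def s_pdf(x):
    return np.exp(-0.5 * ((x - 2.0) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))

def b_pdf(x):
    return np.exp(-x)

def nll(f_sig, x):
    """Unbinned negative log-likelihood; the signal fraction f_sig enters
    analytically, the kind of parameter dependence the method exploits."""
    return -np.sum(np.log(f_sig * s_pdf(x) + (1.0 - f_sig) * b_pdf(x)))

# Toy dataset: 30% signal events mixed with exponential background.
n = 5000
x = np.concatenate([rng.normal(2.0, 0.5, int(0.3 * n)),
                    rng.exponential(1.0, n - int(0.3 * n))])

# Grid-scan maximum-likelihood estimate of the signal fraction.
grid = np.linspace(0.01, 0.99, 99)
f_hat = grid[np.argmin([nll(f, x) for f in grid])]
```

In the full method, densities like `s_pdf` and `b_pdf` are not known in closed form as functions of the nuisance parameters; that is exactly where the neural surrogate approximating the likelihood ratio comes in.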
Submitted 25 August, 2025; v1 submitted 8 May, 2025;
originally announced May 2025.
-
Future Circular Collider Feasibility Study Report: Volume 2, Accelerators, Technical Infrastructure and Safety
Authors:
M. Benedikt,
F. Zimmermann,
B. Auchmann,
W. Bartmann,
J. P. Burnet,
C. Carli,
A. Chancé,
P. Craievich,
M. Giovannozzi,
C. Grojean,
J. Gutleber,
K. Hanke,
A. Henriques,
P. Janot,
C. Lourenço,
M. Mangano,
T. Otto,
J. Poole,
S. Rajagopalan,
T. Raubenheimer,
E. Todesco,
L. Ulrici,
T. Watson,
G. Wilkinson,
A. Abada
, et al. (1439 additional authors not shown)
Abstract:
In response to the 2020 Update of the European Strategy for Particle Physics, the Future Circular Collider (FCC) Feasibility Study was launched as an international collaboration hosted by CERN. This report describes the FCC integrated programme, which consists of two stages: an electron-positron collider (FCC-ee) in the first phase, serving as a high-luminosity Higgs, top, and electroweak factory; followed by a proton-proton collider (FCC-hh) at the energy frontier in the second phase.
FCC-ee is designed to operate at four key centre-of-mass energies: the Z pole, the WW production threshold, the ZH production peak, and the top/anti-top production threshold - delivering the highest possible luminosities to four experiments. Over 15 years of operation, FCC-ee will produce more than 6 trillion Z bosons, 200 million WW pairs, nearly 3 million Higgs bosons, and 2 million top anti-top pairs. Precise energy calibration at the Z pole and WW threshold will be achieved through frequent resonant depolarisation of pilot bunches. The sequence of operation modes remains flexible.
FCC-hh will operate at a centre-of-mass energy of approximately 85 TeV - nearly an order of magnitude higher than the LHC - and is designed to deliver 5 to 10 times the integrated luminosity of the HL-LHC. Its mass reach for direct discovery extends to several tens of TeV. In addition to proton-proton collisions, FCC-hh is capable of supporting ion-ion, ion-proton, and lepton-hadron collision modes.
This second volume of the Feasibility Study Report presents the complete design of the FCC-ee collider, its operation and staging strategy, the full-energy booster and injector complex, required accelerator technologies, safety concepts, and technical infrastructure. It also includes the design of the FCC-hh hadron collider, development of high-field magnets, hadron injector options, and key technical systems for FCC-hh.
Submitted 25 April, 2025;
originally announced May 2025.
-
Future Circular Collider Feasibility Study Report: Volume 3, Civil Engineering, Implementation and Sustainability
Authors:
M. Benedikt,
F. Zimmermann,
B. Auchmann,
W. Bartmann,
J. P. Burnet,
C. Carli,
A. Chancé,
P. Craievich,
M. Giovannozzi,
C. Grojean,
J. Gutleber,
K. Hanke,
A. Henriques,
P. Janot,
C. Lourenço,
M. Mangano,
T. Otto,
J. Poole,
S. Rajagopalan,
T. Raubenheimer,
E. Todesco,
L. Ulrici,
T. Watson,
G. Wilkinson,
P. Azzi
, et al. (1439 additional authors not shown)
Abstract:
Volume 3 of the FCC Feasibility Report presents studies related to civil engineering, the development of a project implementation scenario, and environmental and sustainability aspects. The report details the iterative improvements made to the civil engineering concepts since 2018, taking into account subsurface conditions, accelerator and experiment requirements, and territorial considerations. It outlines a technically feasible and economically viable civil engineering configuration that serves as the baseline for detailed subsurface investigations, construction design, cost estimation, and project implementation planning. Additionally, the report highlights ongoing subsurface investigations in key areas to support the development of an improved 3D subsurface model of the region.
The report describes the development of the project scenario based on the 'avoid-reduce-compensate' iterative optimisation approach. The reference scenario balances optimal physics performance with territorial compatibility, implementation risks, and costs. Environmental field investigations covering almost 600 hectares of terrain - including numerous urban, economic, social, and technical aspects - confirmed the project's technical feasibility and contributed to the preparation of essential input documents for the formal project authorisation phase. The summary also highlights the initiation of public dialogue as part of the authorisation process. The results of a comprehensive socio-economic impact assessment, which included significant environmental effects, are presented. Even under the most conservative and stringent conditions, a positive benefit-cost ratio for the FCC-ee is obtained. Finally, the report provides a concise summary of the studies conducted to document the current state of the environment.
Submitted 25 April, 2025;
originally announced May 2025.
-
Future Circular Collider Feasibility Study Report: Volume 1, Physics, Experiments, Detectors
Authors:
M. Benedikt,
F. Zimmermann,
B. Auchmann,
W. Bartmann,
J. P. Burnet,
C. Carli,
A. Chancé,
P. Craievich,
M. Giovannozzi,
C. Grojean,
J. Gutleber,
K. Hanke,
A. Henriques,
P. Janot,
C. Lourenço,
M. Mangano,
T. Otto,
J. Poole,
S. Rajagopalan,
T. Raubenheimer,
E. Todesco,
L. Ulrici,
T. Watson,
G. Wilkinson,
P. Azzi
, et al. (1439 additional authors not shown)
Abstract:
Volume 1 of the FCC Feasibility Report presents an overview of the physics case, experimental programme, and detector concepts for the Future Circular Collider (FCC). This volume outlines how FCC would address some of the most profound open questions in particle physics, from precision studies of the Higgs and EW bosons and of the top quark, to the exploration of physics beyond the Standard Model. The report reviews the experimental opportunities offered by the staged implementation of FCC, beginning with an electron-positron collider (FCC-ee), operating at several centre-of-mass energies, followed by a hadron collider (FCC-hh). Benchmark examples are given of the expected physics performance, in terms of precision and sensitivity to new phenomena, of each collider stage. Detector requirements and conceptual designs for FCC-ee experiments are discussed, as are the specific demands that the physics programme imposes on the accelerator in the domains of the calibration of the collision energy, and the interface region between the accelerator and the detector. The report also highlights advances in detector, software, and computing technologies, as well as the theoretical tools and reconstruction techniques that will enable the precision measurements and discovery potential of the FCC experimental programme. This volume reflects the outcome of a global collaborative effort involving hundreds of scientists and institutions, aided by a dedicated community-building coordination, and provides a targeted assessment of the scientific opportunities and experimental foundations of the FCC programme.
Submitted 25 April, 2025;
originally announced May 2025.
-
Stability of complex communities: A perspective from discrete-time dynamics
Authors:
Shuaiying Wang,
Yuguang Yang,
Aming Li
Abstract:
Understanding the stability of complex communities is a central focus in ecology, and many important theoretical advances have been made to identify drivers of ecological stability. However, previous results often rely on continuous-time dynamics, assuming that species have overlapping generations. In contrast, numerous real-world communities consist of species with non-overlapping generations, whose quantitative behavior can only be precisely represented by discrete-time dynamics rather than continuous ones. Here, we develop a theoretical framework and propose a metric to quantify the stability of complex communities characterized by non-overlapping generations and diverse interaction types. In stark contrast to existing results for overlapping generations, we find that increasing self-regulation strength first stabilizes and then destabilizes complex communities. This pattern is further confirmed in both exploitative (E. aerogenes, P. aurantiaca, P. chlororaphis, P. citronellolis) and competitive (P. putida, P. veroni, S. marcescens) soil microbial communities. Moreover, we show that communities with diverse interaction types become the most stable, which is corroborated by empirical mouse microbial networks. Furthermore, we reveal that the prevalence of weak interactions can stabilize communities, consistent with findings from existing microbial experiments. Our analyses of complex communities with non-overlapping generations provide a more comprehensive understanding of ecological stability and inform practical strategies for ecological restoration and control.
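The "first stabilizes, then destabilizes" pattern can be illustrated with a toy discrete-time linearization: for a Jacobian of the form J = (1-d)I + A, local stability of the map requires the spectral radius of J to stay below 1, which fails both for weak and for strong self-regulation d. The community size, interaction strengths, and matrix form below are illustrative assumptions for a NumPy sketch, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Random interaction matrix for a 50-species community (zero self-interaction).
n = 50
A = 0.05 * rng.normal(size=(n, n))
np.fill_diagonal(A, 0.0)

def spectral_radius(d):
    """Local stability of a discrete-time update near equilibrium is governed
    by the Jacobian J = (1 - d) I + A, where d is the self-regulation
    strength; the fixed point is stable iff the spectral radius of J is < 1."""
    J = (1.0 - d) * np.eye(n) + A
    return np.abs(np.linalg.eigvals(J)).max()

# Weak, intermediate, and strong self-regulation.
radii = [spectral_radius(d) for d in (0.2, 1.0, 1.8)]
```

The eigenvalues of J are (1 - d) plus the eigenvalues of A, so the spectral radius traces a U-shape in d: large for small d (eigenvalues near +1), minimal at intermediate d, and large again for strong d (eigenvalues pushed past -1), mirroring the stabilize-then-destabilize result.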
Submitted 3 April, 2025;
originally announced April 2025.
-
Personalised strategy of allocating social goods in structured populations
Authors:
Yao Meng,
Sean P. Cornelius,
Yang-Yu Liu,
Aming Li
Abstract:
Cooperation underlies many aspects of the evolution of human and animal societies, where cooperators produce social goods to benefit others. Explaining the emergence of cooperation among selfish individuals has become a major research interest in evolutionary dynamics. Previous studies typically use complex networks to capture the interactions between individuals, and assume that cooperators distribute benefits equally to their neighbors. In practice, the distribution of social goods is often non-uniform, and individuals may selectively provide benefits to those they interact with based on their personal preferences. Here, we develop an efficient algorithm to optimize the donation structure in any given network so as to minimize the threshold for the emergence of cooperation. We find that when cooperators allocate benefits preferentially, rather than donating to all neighbors as in traditional settings, cooperation tends to be maximally promoted. Furthermore, the optimal donation structure is strongly disassortative -- low-degree nodes tend to donate preferentially to high-degree ones and vice versa. Based on this finding, we offer a local heuristic strategy based on degree thresholds for personalizing the allocation of social goods and choosing each cooperator's recipient, and we demonstrate its effectiveness on empirical datasets. Our findings advance the understanding of mechanisms for promoting cooperation through the strategic allocation of social goods.
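A degree-threshold heuristic of the kind described, where each cooperator donates to a single neighbor, low-degree nodes target high-degree neighbors, and vice versa, can be sketched as follows. The toy network, the threshold value, and the function `choose_recipient` are hypothetical illustrations, not the authors' algorithm.

```python
# Toy undirected network as an edge list; nodes are 0..5.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (3, 4), (4, 5)]
n = 6
neighbors = {i: [] for i in range(n)}
for u, v in edges:
    neighbors[u].append(v)
    neighbors[v].append(u)
degree = {i: len(neighbors[i]) for i in range(n)}

def choose_recipient(node, threshold=2):
    """Degree-threshold donation heuristic: low-degree donors target their
    highest-degree neighbour, high-degree donors their lowest-degree one,
    yielding a disassortative donation structure."""
    nbrs = neighbors[node]
    if degree[node] <= threshold:
        return max(nbrs, key=lambda v: degree[v])
    return min(nbrs, key=lambda v: degree[v])

recipients = {i: choose_recipient(i) for i in range(n)}
```

On this toy graph the hub (node 0, degree 4) donates to a degree-2 neighbor, while the leaf (node 5, degree 1) donates to its degree-3 neighbor, so donations flow between dissimilar degrees as the optimal structure suggests.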
Submitted 6 March, 2025;
originally announced March 2025.
-
Observation of non-adiabatic Landau-Zener tunneling among Floquet states
Authors:
Yun Yen,
Marcel Reutzel,
Andi Li,
Zehua Wang,
Hrvoje Petek,
Michael Schüler
Abstract:
Electromagnetic fields not only induce electronic transitions but also fundamentally modify the quantum states of matter through strong light-matter interactions. As one established route, Floquet engineering provides a powerful framework for dressing electronic states with time-periodic fields, giving rise to quasi-stationary Floquet states. With increasing field strength, non-perturbative responses of the dressed states emerge, yet their nonlinear dynamics remain challenging to interpret. In this work we explore the emergence of non-adiabatic Landau-Zener transitions among Floquet states in Cu(111) under intense optical fields. At increasing field strength, we observe a transition from perturbative dressing to a regime where Floquet states undergo non-adiabatic tunneling, revealing a breakdown of adiabatic Floquet evolution. These insights are obtained through interferometrically time-resolved multi-photon photoemission spectroscopy, which serves as a sensitive probe of transient Floquet state dynamics. Numerical simulations and the theory of instantaneous Floquet states allow us to directly examine real-time excitation pathways in this non-perturbative photoemission regime. Our results establish a direct connection between the onset of light-dressing of matter, non-perturbative ultrafast lightwave electronics, and high-optical-harmonic generation in solids.
Submitted 6 March, 2025;
originally announced March 2025.
-
Unsupervised CP-UNet Framework for Denoising DAS Data with Decay Noise
Authors:
Tianye Huang,
Aopeng Li,
Xiang Li,
Jing Zhang,
Sijing Xian,
Qi Zhang,
Mingkong Lu,
Guodong Chen,
Liangming Xiong,
Xiangyun Hu
Abstract:
Distributed acoustic sensor (DAS) technology leverages optical fiber cables to detect acoustic signals, providing cost-effective and dense monitoring capabilities. It offers several advantages, including resistance to extreme conditions, immunity to electromagnetic interference, and accurate detection. However, DAS typically exhibits a lower signal-to-noise ratio (S/N) than geophones and is susceptible to various noise types, such as random noise, erratic noise, level noise, and long-period noise. This reduced S/N can negatively impact data analyses, including inversion and interpretation. While artificial intelligence has demonstrated excellent denoising capabilities, most existing methods rely on supervised learning with labeled data, which imposes stringent requirements on label quality. To address this issue, we develop a label-free unsupervised learning (UL) network model based on Context-Pyramid-UNet (CP-UNet) to suppress erratic and random noise in DAS data. The CP-UNet utilizes the Context Pyramid Module in the encoding and decoding process to extract features and reconstruct the DAS data. To enhance the connectivity between shallow and deep features, we add a Connected Module (CM) to both the encoding and decoding sections. Layer Normalization (LN) is used in place of the commonly employed Batch Normalization (BN), accelerating the convergence of the model and preventing gradient explosion during training. The Huber loss is adopted as our loss function, with parameters determined experimentally. We apply the network to both 2-D synthetic and field data. Compared to traditional denoising methods and the latest UL framework, our proposed method demonstrates superior noise-reduction performance.
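The Huber loss adopted here is quadratic for small residuals and linear for large ones, which is what makes it robust to the erratic, spiky noise found in DAS traces. Below is a minimal NumPy implementation with an assumed threshold `delta`; the specific values are illustrative, since the abstract only states that the loss parameters were determined experimentally.

```python
import numpy as np

def huber_loss(residual, delta=1.0):
    """Huber loss: quadratic for |r| <= delta, linear beyond, so erratic
    (outlier) samples contribute far less than under a squared-error loss."""
    r = np.abs(residual)
    quad = 0.5 * r ** 2
    lin = delta * (r - 0.5 * delta)
    return np.where(r <= delta, quad, lin).mean()

# Residuals with one erratic spike, as might appear in a DAS trace.
res = np.array([0.1, -0.2, 0.05, 8.0])
h = huber_loss(res, delta=1.0)
mse = 0.5 * (res ** 2).mean()   # squared-error baseline for comparison
```

The single 8.0 spike dominates the squared-error loss but only contributes linearly to the Huber loss, which is why a Huber-trained network is less distorted by erratic noise during training.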
Submitted 18 February, 2025;
originally announced February 2025.
-
Transformability reveals the interplay of dynamics across different network orders
Authors:
Ming Xie,
Shibo He,
Aming Li,
Zike Zhang,
Youxian Sun,
Jiming Chen
Abstract:
Recent studies have investigated various dynamic processes characterizing collective behaviors in real-world systems. However, these dynamics have been studied individually in specific contexts. In this article, we present a holistic analysis framework that bridges the interplay between dynamics across networks of different orders, demonstrating that these processes are not independent but can undergo systematic transformations. Focusing on contagion dynamics, we identify and quantify dynamical and structural factors that explain the interplay between dynamics on higher-order and pairwise networks, uncovering a universal model for system instability governed by these factors. Furthermore, we extend the findings from contagion dynamics to opinion dynamics, highlighting their broader applicability across diverse dynamical processes. Our findings reveal the intrinsic coupling between diverse dynamical processes, providing fresh insights into the distinct role of complex dynamics governed by higher-order interactions.
Submitted 27 January, 2025;
originally announced January 2025.
-
Anomalies in the rotational spectra of $^{86}$Sr ULRRM dimers
Authors:
C. Wang,
Y. Lu,
A. Li,
S. K. Kanungo,
T. C. Killian,
F. B. Dunning,
S. Yoshida
Abstract:
Anomalies in the rotational structure of $^{86}$Sr $^3S_1$ dimer ultralong-range Rydberg molecules (ULRRMs) created in a cold strontium gas by two-photon excitation via the intermediate $5s5p~^3P_1$ state are reported. Measurements reveal that the distribution of product rotational states is sensitive to intermediate state detuning. Comparative studies using $^{84}$Sr $^1S_0$ and $^3S_1$, and $^{86}$Sr $^1S_0$ dimers display no similar behavior, indicating that the observed behavior is peculiar to $^{86}$Sr triplet dimers. While we have no definitive hypothesis as to the physical mechanism responsible for this behavior, possible explanations might involve the very different scattering lengths for $^{84}$Sr and $^{86}$Sr, or the interchange of spin and rotational angular momentum.
Submitted 24 January, 2025;
originally announced January 2025.
-
Evolutionary game dynamics for higher-order interactions
Authors:
Jiachao Guo,
Yao Meng,
Aming Li
Abstract:
Cooperative behaviors are deeply embedded in structured biological and social systems. Networks are often employed to portray pairwise interactions among individuals, where network nodes represent individuals and links indicate who interacts with whom. However, it is increasingly recognized that many empirical interactions involve three or more individuals rather than the oversimplified lower-order pairwise interactions, highlighting a fundamental gap in understanding how collective cooperation evolves under higher-order interactions involving diverse numbers of individuals. Here, we develop a theoretical framework of evolutionary game dynamics for systematically analyzing how cooperation evolves and fixates under higher-order interactions. Specifically, we offer a simple condition under which cooperation is favored under arbitrary combinations of different orders of interactions. Compared to pairwise interactions, our findings suggest that higher-order interactions enable lower thresholds for the emergence of cooperation. Surprisingly, we show that higher-order interactions favor the evolution of cooperation in large-scale systems, the opposite of what occurs in lower-order scenarios. Our results offer a new avenue for understanding the evolution of collective cooperation in empirical systems with higher-order interactions.
Submitted 10 January, 2025;
originally announced January 2025.
-
The MAJORANA DEMONSTRATOR experiment's construction, commissioning, and performance
Authors:
N. Abgrall,
E. Aguayo,
I. J. Arnquist,
F. T. Avignone III,
A. S. Barabash,
C. J. Barton,
P. J. Barton,
F. E. Bertrand,
E. Blalock,
B. Bos,
M. Boswell,
A. W. Bradley,
V. Brudanin,
T. H. Burritt,
M. Busch,
M. Buuck,
D. Byram,
A. S. Caldwell,
T. S. Caldwell,
Y. -D. Chan,
C. D. Christofferson,
P. -H. Chu,
M. L. Clark,
D. C. Combs,
C. Cuesta
, et al. (86 additional authors not shown)
Abstract:
Background: The MAJORANA DEMONSTRATOR, a modular array of isotopically enriched high-purity germanium (HPGe) detectors, was constructed to demonstrate backgrounds low enough to justify building a tonne-scale experiment to search for the neutrinoless double-beta decay ($ββ(0ν)$) of $^{76}\mathrm{Ge}$. Purpose: This paper presents a description of the instrument, its commissioning, and operations. It covers the electroforming, underground infrastructure, enrichment, detector fabrication, low-background and construction techniques, electronics, data acquisition, databases, and data processing of the MAJORANA DEMONSTRATOR. Method: The MAJORANA DEMONSTRATOR operated inside an ultra-low-radioactivity passive shield at the 4850-foot level of the Sanford Underground Research Facility (SURF) from 2015 to 2021. Results and Conclusions: The MAJORANA DEMONSTRATOR achieved the best energy resolution and second-best background level of any $ββ(0ν)$ search. This enabled it to achieve an ultimate half-life limit on $ββ(0ν)$ in $^{76}\mathrm{Ge}$ of $8.3\times 10^{25}$~yr (90\% C.L.) and to perform a rich set of searches for other physics beyond the Standard Model.
△ Less
Submitted 3 January, 2025;
originally announced January 2025.
-
A Magnetic Compression method for sub-THz electron beam generation from RF frequencies
Authors:
An Li,
Jiaru Shi,
Hao Zha,
Qiang Gao,
Huaibi Chen
Abstract:
Current THz electron sources struggle with low energy gain and device miniaturization. We propose a magnetic compression method, designed for relativistic electrons, that performs post-compression on the beam from radiofrequency accelerators to produce a sub-THz electron beam with exceptionally high energy ($>1$ J). Through simulation studies, we longitudinally compress a relativistic electron beam with an energy of 60 MeV and a frequency of 3 GHz across a time span of 24 ns, yielding an electron pulse train at 0.1 THz. The compressed beam exhibits a pulse width of 0.8 ns, a total charge of 24 nC, and an energy of 1.4 J, offering a new route to ultra-high-energy THz electron beam generation.
Submitted 30 October, 2024;
originally announced October 2024.
-
Neutrinoless Double Beta Decay Sensitivity of the XLZD Rare Event Observatory
Authors:
XLZD Collaboration,
J. Aalbers,
K. Abe,
M. Adrover,
S. Ahmed Maouloud,
D. S. Akerib,
A. K. Al Musalhi,
F. Alder,
L. Althueser,
D. W. P. Amaral,
C. S. Amarasinghe,
A. Ames,
B. Andrieu,
N. Angelides,
E. Angelino,
B. Antunovic,
E. Aprile,
H. M. Araújo,
J. E. Armstrong,
M. Arthurs,
M. Babicz,
D. Bajpai,
A. Baker,
M. Balzer,
J. Bang
, et al. (419 additional authors not shown)
Abstract:
The XLZD collaboration is developing a two-phase xenon time projection chamber with an active mass of 60 to 80 t capable of probing the remaining WIMP-nucleon interaction parameter space down to the so-called neutrino fog. In this work we show that, based on the performance of currently operating detectors using the same technology and a realistic reduction of radioactivity in detector materials, such an experiment will also be able to competitively search for neutrinoless double beta decay in $^{136}$Xe using a natural-abundance xenon target. XLZD can reach a 3$σ$ discovery potential half-life of 5.7$\times$10$^{27}$ yr (and a 90% CL exclusion of 1.3$\times$10$^{28}$ yr) with 10 years of data taking, corresponding to a Majorana mass range of 7.3-31.3 meV (4.8-20.5 meV). XLZD will thus exclude the inverted neutrino mass ordering parameter space and will start to probe the normal ordering region for most of the nuclear matrix elements commonly considered by the community.
Submitted 30 April, 2025; v1 submitted 23 October, 2024;
originally announced October 2024.
-
Telecom-wavelength Single-photon Emitters in Multi-layer InSe
Authors:
Huan Zhao,
Saban Hus,
Jinli Chen,
Xiaodong Yan,
Ben Lawrie,
Stephen Jesse,
An-Ping Li,
Liangbo Liang,
Han Htoon
Abstract:
The development of robust and efficient single photon emitters (SPEs) at telecom wavelengths is critical for advancements in quantum information science. Two-dimensional (2D) materials have recently emerged as promising sources for SPEs, owing to their high photon extraction efficiency, facile coupling to external fields, and seamless integration into photonic circuits. In this study, we demonstrate the creation of SPEs emitting in the 1000 to 1550 nm near-infrared range by coupling 2D indium selenide (InSe) with strain-inducing nanopillar arrays. The emission wavelength exhibits a strong dependence on the number of layers. Hanbury Brown and Twiss experiments conducted at 10 K reveal clear photon antibunching, confirming the single-photon nature of the emissions. Density-functional-theory calculations and scanning-tunneling-microscopy analyses provide insights into the electronic structures and defect states, elucidating the origins of the SPEs. Our findings highlight the potential of multilayer 2D metal monochalcogenides for creating SPEs across a broad spectral range, paving the way for their integration into quantum communication technologies.
Submitted 22 October, 2024;
originally announced October 2024.
-
The XLZD Design Book: Towards the Next-Generation Liquid Xenon Observatory for Dark Matter and Neutrino Physics
Authors:
XLZD Collaboration,
J. Aalbers,
K. Abe,
M. Adrover,
S. Ahmed Maouloud,
D. S. Akerib,
A. K. Al Musalhi,
F. Alder,
L. Althueser,
D. W. P. Amaral,
C. S. Amarasinghe,
A. Ames,
B. Andrieu,
N. Angelides,
E. Angelino,
B. Antunovic,
E. Aprile,
H. M. Araújo,
J. E. Armstrong,
M. Arthurs,
M. Babicz,
A. Baker,
M. Balzer,
J. Bang,
E. Barberio
, et al. (419 additional authors not shown)
Abstract:
This report describes the experimental strategy and technologies for XLZD, the next-generation xenon observatory sensitive to dark matter and neutrino physics. In the baseline design, the detector will have an active liquid xenon target of 60 tonnes, which could be increased to 80 tonnes if the market conditions for xenon are favorable. It is based on the mature liquid xenon time projection chamber technology used in current-generation experiments, LZ and XENONnT. The report discusses the baseline design and opportunities for further optimization of the individual detector components. The experiment envisaged here has the capability to explore parameter space for Weakly Interacting Massive Particle (WIMP) dark matter down to the neutrino fog, with a 3$σ$ evidence potential for WIMP-nucleon cross sections as low as $3\times10^{-49}\rm\,cm^2$ (at 40 GeV/c$^2$ WIMP mass). The observatory will also have leading sensitivity to a wide range of alternative dark matter models. It is projected to have a 3$σ$ observation potential of neutrinoless double beta decay of $^{136}$Xe at a half-life of up to $5.7\times 10^{27}$ years. Additionally, it is sensitive to astrophysical neutrinos from the sun and galactic supernovae.
Submitted 28 October, 2025; v1 submitted 22 October, 2024;
originally announced October 2024.
-
Machine learning-powered data cleaning for LEGEND: a semi-supervised approach using affinity propagation and support vector machines
Authors:
E. León,
A. Li,
M. A. Bahena Schott,
B. Bos,
M. Busch,
J. R. Chapman,
G. L. Duran,
J. Gruszko,
R. Henning,
E. L. Martin,
J. F. Wilkerson
Abstract:
Neutrinoless double-beta decay ($0νββ$) is a rare nuclear process that, if observed, will provide insight into the nature of neutrinos and help explain the matter-antimatter asymmetry in the universe. The Large Enriched Germanium Experiment for Neutrinoless Double-Beta Decay (LEGEND) will operate in two phases to search for $0νββ$. The first (second) stage will employ 200 (1000) kg of High-Purity Germanium (HPGe) enriched in $^{76}$Ge to achieve a half-life sensitivity of 10$^{27}$ (10$^{28}$) years. In this study, we present a semi-supervised, data-driven approach, powered by a novel artificial intelligence model, to remove non-physical events captured by HPGe detectors. We utilize Affinity Propagation to cluster waveform signals based on their shape and a Support Vector Machine to classify them into different categories. We train, optimize, and test our model on data taken from a natural-abundance HPGe detector installed in the Full Chain Test experimental stand at the University of North Carolina at Chapel Hill. We demonstrate that our model yields a maximum sacrifice of physics events of $0.024^{+0.004}_{-0.003}\%$. Our model is being used to accelerate data cleaning development for LEGEND-200 and will serve to improve data cleaning procedures for LEGEND-1000.
Submitted 19 April, 2025; v1 submitted 5 October, 2024;
originally announced October 2024.
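The clustering-then-classification pipeline described in the abstract can be sketched with scikit-learn. This is a toy illustration under stated assumptions: the synthetic "waveforms", cluster-labeling shortcut, and all settings are invented for the sketch and are not the collaboration's actual features or model.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy "waveforms": physics-like rising edges vs. noise-like traces.
t = np.linspace(0, 1, 50)
physics = np.array([1 / (1 + np.exp(-(t - 0.5) / 0.05)) + 0.02 * rng.standard_normal(50)
                    for _ in range(60)])
noise = 0.3 * rng.standard_normal((60, 50))
X = np.vstack([physics, noise])

# Stage 1 (unsupervised): Affinity Propagation groups waveforms by shape.
# In practice an expert labels each cluster; here ground truth stands in.
ap = AffinityPropagation(damping=0.9, max_iter=1000, random_state=0).fit(X)
is_physics = np.arange(len(X)) < 60
cluster_label = {c: int(is_physics[ap.labels_ == c].mean() > 0.5)
                 for c in np.unique(ap.labels_)}
y = np.array([cluster_label[c] for c in ap.labels_])

# Stage 2 (supervised): an SVM learns the cluster-derived labels so that
# new waveforms can be classified quickly online.
svm = SVC(kernel="rbf").fit(X, y)
print(svm.score(X, y))
```

The two-stage design is what makes the approach semi-supervised: only cluster exemplars need human labels, not every waveform.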
-
Optimization of LYSO crystals and SiPM parameters for the CMS MIP timing detector
Authors:
F. Addesa,
T. Anderson,
P. Barria,
C. Basile,
A. Benaglia,
R. Bertoni,
A. Bethani,
R. Bianco,
A. Bornheim,
G. Boldrini,
A. Boletti,
A. Bulla,
M. Campana,
B. Cardwell,
P. Carniti,
F. Cetorelli,
F. De Guio,
K. De Leo,
F. De Riggi,
J. Dervan,
E. Fernandez,
A. Gaile,
M. Gallinaro,
A. Ghezzi,
C. Gotti
, et al. (46 additional authors not shown)
Abstract:
For the High-Luminosity (HL-LHC) phase, the upgrade of the Compact Muon Solenoid (CMS) experiment at CERN will include a novel MIP Timing Detector (MTD). The central part of MTD, the barrel timing layer (BTL), is designed to provide a measurement of the time of arrival of charged particles with a precision of 30 ps at the beginning of HL-LHC, progressively degrading to 60 ps while operating in an extremely harsh radiation environment for over a decade. In this paper we present a comparative analysis of the time resolution of BTL module prototypes made of LYSO:Ce crystal bars read out by silicon photo-multipliers (SiPMs). The timing performance measured in beam test campaigns is presented for prototypes with different construction and operation parameters, such as different SiPM cell sizes (15, 20, 25 and 30 $\rm μm$), SiPM manufacturers and crystal bar thicknesses. The evolution of time resolution as a function of the irradiation level has been studied using non-irradiated SiPMs as well as SiPMs exposed up to $2\times 10^{14}~n_{eq}/cm^2$ fluence. The key parameters defining the module time resolution such as SiPM characteristics (gain, photon detection efficiency, radiation induced dark count rate) and crystal properties (light output and dimensions) are discussed. These results have informed the final choice of the MTD barrel sensor configuration and offer a unique starting point for the design of future large-area scintillator-based timing detectors in either low or high radiation environments.
Submitted 11 October, 2024;
originally announced October 2024.
-
RESuM: Rare Event Surrogate Model for Physics Detector Design
Authors:
Ann-Kathrin Schuetz,
Alan W. P. Poon,
Aobo Li
Abstract:
The experimental discovery of neutrinoless double-beta decay (NLDBD) would answer one of the most important questions in physics: Why is there more matter than antimatter in our universe? To maximize the chances of detection, NLDBD experiments must optimize their detector designs to minimize the probability of background events contaminating the detector. Given that this probability is inherently low, design optimization either requires extremely costly simulations to generate sufficient background counts or contending with significant variance. In this work, we formalize this dilemma as a Rare Event Design (RED) problem: identifying optimal design parameters when the design metric to be minimized is inherently small. We then designed the Rare Event Surrogate Model (RESuM) for physics detector design optimization under RED conditions. RESuM uses a pretrained Conditional Neural Process (CNP) model to incorporate additional prior knowledge into a Multi-Fidelity Gaussian Process model. We applied RESuM to optimize neutron moderator designs for the LEGEND NLDBD experiment, identifying an optimal design that reduces neutron background by ($66.5\pm3.5$)% while using only 3.3% of the computational resources compared to traditional methods. Given the prevalence of RED problems in other fields of physical sciences, the RESuM algorithm has broad potential for simulation-intensive applications.
Submitted 4 October, 2024;
originally announced October 2024.
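The Rare Event Design dilemma the abstract formalizes can be illustrated numerically: a brute-force Monte Carlo estimate of a small probability $p$ from $N$ trials has relative error $\approx\sqrt{(1-p)/(Np)}$, so comparing candidate designs requires enormous $N$. The numbers below are illustrative, not LEGEND's actual background probabilities.

```python
import numpy as np

rng = np.random.default_rng(1)
p_true = 1e-3  # probability that a simulated event becomes background (toy value)

for n in (10_000, 1_000_000):
    # 2000 repeated simulation "campaigns" of n events each
    counts = rng.binomial(n, p_true, size=2000)
    p_hat = counts / n
    rel_err = p_hat.std() / p_true
    theory = np.sqrt((1 - p_true) / (n * p_true))
    print(f"N={n:>9,}  empirical rel. error {rel_err:.3f}  theory {theory:.3f}")
```

With only ~10 expected background counts (N=10,000), the estimate carries ~30% relative error; shrinking that to ~3% costs 100x more simulation, which is the cost a surrogate model like RESuM aims to avoid.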
-
Real-time Position Reconstruction for the KamLAND-Zen Experiment using Hardware-AI Co-design
Authors:
Alexander Migala,
Eugene Ku,
Zepeng Li,
Aobo Li
Abstract:
Monolithic liquid scintillator detector technology is the workhorse for detecting neutrinos and exploring new physics. The KamLAND-Zen experiment exemplifies this detector technology and has yielded top results in the quest for neutrinoless double-beta ($0νββ$) decay. To understand the physical events that occur in the detector, experimenters must reconstruct each event's position and energy from the raw data produced. Traditionally, this information has been obtained through a time-consuming offline process, meaning that event position and energy would only be available days after data collection. This work introduces a new pipeline to acquire this information quickly by implementing a machine learning model, PointNet, onto a Field Programmable Gate Array (FPGA). This work outlines a successful demonstration of the entire pipeline, showing that event position and energy information can be reliably and quickly obtained as physics events occur in the detector. This marks one of the first instances of applying hardware-AI co-design in the context of $0νββ$ decay experiments.
Submitted 3 October, 2024;
originally announced October 2024.
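PointNet suits this reconstruction task because detector hits form an unordered point cloud: a shared per-hit transform followed by symmetric max pooling yields an event embedding that is invariant to hit ordering. A minimal sketch of that property, with random stand-in weights rather than a trained model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random stand-in weights for a shared two-layer per-hit MLP
# (each hit carries 5 toy features, e.g. x, y, z, time, charge).
W1 = rng.standard_normal((5, 32))
W2 = rng.standard_normal((32, 64))

def embed(hits):                      # hits: (n_hits, 5)
    h = np.maximum(hits @ W1, 0)      # shared per-hit layer (ReLU)
    h = np.maximum(h @ W2, 0)
    return h.max(axis=0)              # symmetric max pooling over hits

hits = rng.standard_normal((100, 5))  # one toy event with 100 hits
shuffled = hits[rng.permutation(100)]
print(np.allclose(embed(hits), embed(shuffled)))  # True
```

The fixed per-hit weights and the cheap max reduction are also what make the architecture friendly to an FPGA implementation.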
-
FAIR Universe HiggsML Uncertainty Dataset and Competition
Authors:
Lisa Benato,
Wahid Bhimji,
Paolo Calafiura,
Ragansu Chakkappai,
Po-Wen Chang,
Yuan-Tang Chou,
Sascha Diefenbacher,
Jordan Dudley,
Ibrahim Elsharkawy,
Steven Farrell,
Aishik Ghosh,
Cristina Giordano,
Isabelle Guyon,
Chris Harris,
Yota Hashizume,
Shih-Chieh Hsu,
Elham E. Khoda,
Claudius Krause,
Ang Li,
Benjamin Nachman,
Peter Nugent,
David Rousseau,
Robert Schoefbeck,
Maryam Shooshtari,
Dennis Schwarz
, et al. (4 additional authors not shown)
Abstract:
The FAIR Universe HiggsML Uncertainty Challenge focused on measuring the physical properties of elementary particles with imperfect simulators. Participants were required to compute and report confidence intervals for a parameter of interest regarding the Higgs boson while accounting for various systematic (epistemic) uncertainties. The dataset is a tabular dataset of 28 features and 280 million instances. Each instance represents a simulated proton-proton collision as observed at CERN's Large Hadron Collider in Geneva, Switzerland. The features of these simulations were chosen to capture key characteristics of different types of particles. These include primary attributes, such as the energy and three-dimensional momentum of the particles, as well as derived attributes, which are calculated from the primary ones using domain-specific knowledge. Additionally, a label feature designates each instance's type of proton-proton collision, distinguishing the Higgs boson events of interest from three background sources. As outlined in this paper, the permanent release of the dataset allows long-term benchmarking of new techniques. The leading submissions, including Contrastive Normalising Flows and Density Ratios estimation through classification, are described. Our challenge has brought together the physics and machine learning communities to advance our understanding and methodologies in handling systematic uncertainties within AI techniques.
Submitted 24 September, 2025; v1 submitted 3 October, 2024;
originally announced October 2024.
-
Model-independent searches of new physics in DARWIN with a semi-supervised deep learning pipeline
Authors:
J. Aalbers,
K. Abe,
M. Adrover,
S. Ahmed Maouloud,
L. Althueser,
D. W. P. Amaral,
B. Andrieu,
E. Angelino,
D. Antón Martin,
B. Antunovic,
E. Aprile,
M. Babicz,
D. Bajpai,
M. Balzer,
E. Barberio,
L. Baudis,
M. Bazyk,
N. F. Bell,
L. Bellagamba,
R. Biondi,
Y. Biondi,
A. Bismark,
C. Boehm,
K. Boese,
R. Braun
, et al. (209 additional authors not shown)
Abstract:
We present a novel deep learning pipeline to perform a model-independent, likelihood-free search for anomalous (i.e., non-background) events in the proposed next generation multi-ton scale liquid Xenon-based direct detection experiment, DARWIN. We train an anomaly detector comprising a variational autoencoder and a classifier on extensive, high-dimensional simulated detector response data and construct a one-dimensional anomaly score optimised to reject the background-only hypothesis in the presence of an excess of non-background-like events. We benchmark the procedure with a sensitivity study that determines its power to reject the background-only hypothesis in the presence of an injected WIMP dark matter signal, outperforming the classical, likelihood-based background rejection test. We show that our neural networks learn relevant energy features of the events from low-level, high-dimensional detector outputs, without the need to compress this data into lower-dimensional observables, thus reducing computational effort and information loss. For the future, our approach lays the foundation for an efficient end-to-end pipeline that eliminates the need for many of the corrections and cuts that are traditionally part of the analysis chain, with the potential of achieving higher accuracy and significant reduction of analysis time.
Submitted 1 October, 2024;
originally announced October 2024.
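The final step of such a pipeline, rejecting the background-only hypothesis from a one-dimensional anomaly score, can be sketched with a toy counting test. The score distributions, rates, and the simple Poisson test below are illustrative assumptions, not DARWIN's trained networks or statistical treatment.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Toy anomaly scores: background events score low, anomalous events high.
n_bkg = 10_000
calibration = rng.normal(0.0, 1.0, n_bkg)              # background-only sample
observed = np.concatenate([rng.normal(0.0, 1.0, n_bkg),
                           rng.normal(3.0, 0.5, 50)])  # injected excess

cut = np.quantile(calibration, 0.999)  # high-score "signal region"
n_exp = 0.001 * n_bkg                  # background expectation past the cut
n_obs = int((observed > cut).sum())

# p-value of the background-only hypothesis (Poisson counting experiment)
p_value = stats.poisson.sf(n_obs - 1, n_exp)
print(n_obs, f"{p_value:.2e}")
```

Even a modest injected excess pushes far more events past the cut than the ~10 expected from background, driving the p-value low; the paper's contribution is learning a score for which this separation is strong without assuming a signal model.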
-
Promoting collective intelligence: The advantage of temporal star-structures
Authors:
Zhenglong Tian,
Yao Meng,
Wenxuan Fang,
Aming Li
Abstract:
System structures play an essential role in the emergence of collective intelligence in many natural and engineering systems. In empirical systems, interactions among multiple agents may change over time, forming a temporal network structure, where nodes represent the system's components and links capture who interacts with whom. Recent studies report that temporal networks are more conducive to the emergence of collective cooperation compared to their aggregated static structures. However, the question of which kind of structural characteristics of temporal networks promote collective cooperation still remains elusive. Here we systematically investigate the evolution of cooperation on temporal networks with diverse structural characteristics, such as random, star, and cluster structures. We uncover that temporal networks with single-star structures which lack network clusters are more conducive to collective cooperation than other structures. This counterintuitive result cautions against the common belief that network clusters normally facilitate collective cooperation, revealing the unique advantages of temporal networks over static networks. We further propose an index to quantify the capacity of structural characteristics of temporal networks in promoting collective cooperation. Our findings pave the way for designing the optimal structure of temporal networks to favour collective cooperation.
Submitted 23 September, 2024;
originally announced September 2024.
-
Dominant strategy in repeated games on networks
Authors:
Xiaochen Wang,
Aming Li
Abstract:
Direct reciprocity, stemming from repeated interactions among players, is one of the fundamental mechanisms for understanding the evolution of cooperation. However, canonical strategies for the repeated prisoner's dilemma, such as Win-Stay-Lose-Shift and Tit-for-Tat, fail to consistently dominate alternative strategies during evolution. This complexity intensifies with the introduction of a spatial structure or network behind individual interactions, where nodes represent players and edges represent their interactions. Here, we propose a new strategy, "Cooperate-Stay-Defect-Tolerate" (CSDT), which can dominate other strategies within networked populations by adhering to three essential characteristics. This strategy maintains current behaviour when the opponent cooperates and tolerates defection to a limited extent when the opponent defects. We demonstrate that the limit of tolerance of CSDT can vary with the network structure, evolutionary dynamics, and game payoffs. Furthermore, we find that incorporating the Always Defect strategy (ALLD) can enhance the evolution of CSDT and eliminate strategies that are vulnerable to defection in the population, providing a new interpretation of the role of ALLD in direct reciprocity. Our findings offer a novel perspective on how cooperative strategy evolves on networked populations.
Submitted 6 September, 2024;
originally announced September 2024.
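The verbal description of CSDT, keep the current action while the opponent cooperates, and tolerate only a limited run of defections, can be written as a small automaton. This is one plausible reading of that description for illustration; the paper's exact strategy specification may differ (e.g., in how tolerance resets).

```python
C, D = "C", "D"

class CSDT:
    """Toy "Cooperate-Stay-Defect-Tolerate" automaton (illustrative reading)."""
    def __init__(self, limit=2):
        self.limit = limit       # how many consecutive defections are tolerated
        self.action = C          # start cooperatively
        self.tolerated = 0

    def play(self, opponent_last):
        if opponent_last == C:
            self.tolerated = 0   # opponent cooperates: stay with current action
        else:
            self.tolerated += 1  # opponent defects: tolerate, up to the limit
            if self.tolerated > self.limit:
                self.action = D  # tolerance exhausted: switch to defection
        return self.action

agent = CSDT(limit=2)
history = [agent.play(move) for move in [C, D, D, D, C, C]]
print(history)  # ['C', 'C', 'C', 'D', 'D', 'D']
```

Note how the "stay" rule cuts both ways: once the agent has switched to defection, continued cooperation by the opponent no longer restores cooperation, which is what distinguishes this reading from forgiving strategies like Tit-for-Tat.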
-
Coupling Between Local and Global Oscillations in Palladium-Catalysed Methane Oxidation
Authors:
Yuxiong Hu,
Jianyu Hu,
Mengzhao Sun,
Aowen Li,
Shucheng Shi,
P. J. Hu,
Wu Zhou,
Marc-Georg Willinger,
Dan Zhou,
Zhi Liu,
Xi Liu,
Wei-Xue Li,
Zhu-Jun Wang
Abstract:
The interplay between order and disorder is crucial across various fields, especially in understanding oscillatory phenomena. Periodic oscillations are frequently observed in heterogeneous catalysis, yet their underlying mechanisms need deeper exploration. Here, we investigate how periodic oscillations arise during methane oxidation catalysed by palladium nanoparticles (Pd NPs), utilizing a suite of complementary operando techniques across various spatial scales. We found that reaction intensity and collective dynamic modes can be tuned by the reactant gas-flow rate. At lower gas-flow rates, we observed periodic facet reconstruction of Pd NPs correlated with repeated bubbling behaviour at the Pd/PdO interface, without evident global oscillatory responses. Conversely, at higher gas-flow rates, Pd NPs undergo chaotic transformations between metallic and oxidized states, resulting in overall oscillation. Integrating our observations at different gas-flow rates, we attributed the emergence of global oscillation to thermal coupling regulated by gas flow and connected local and global dynamics through a weak synchronization mechanism. This work demonstrates the correlations between open surfaces and interfaces, chaos and regularity, and dissipative processes and coupling behaviour. Our findings offer critical insights into the complexity behind catalytic oscillations and provide guidance for modulating oscillatory behaviours in catalytic processes, with significant implications for both science and industry.
Submitted 14 August, 2024;
originally announced August 2024.
-
An assay-based background projection for the MAJORANA DEMONSTRATOR using Monte Carlo Uncertainty Propagation
Authors:
I. J. Arnquist,
F. T. Avignone III,
A. S. Barabash,
C. J. Barton,
K. H. Bhimani,
E. Blalock,
B. Bos,
M. Busch,
T. S. Caldwell,
Y. -D. Chan,
C. D. Christofferson,
P. -H. Chu,
M. L. Clark,
C. Cuesta,
J. A. Detwiler,
Yu. Efremenko,
H. Ejiri,
S. R. Elliott,
N. Fuad,
G. K. Giovanetti,
M. P. Green,
J. Gruszko,
I. S. Guinn,
V. E. Guiseppe,
C. R. Haufe
, et al. (31 additional authors not shown)
Abstract:
The background index is an important quantity which is used in projecting and calculating the half-life sensitivity of neutrinoless double-beta decay ($0νββ$) experiments. A novel analysis framework is presented to calculate the background index using the specific activities, masses and simulated efficiencies of an experiment's components as distributions. This Bayesian framework includes a unified approach to combine specific activities from assay. Monte Carlo uncertainty propagation is used to build a background index distribution from the specific activity, mass and efficiency distributions. This analysis method is applied to the MAJORANA DEMONSTRATOR, which deployed arrays of high-purity Ge detectors enriched in $^{76}$Ge to search for $0νββ$. The framework projects a mean background index of $\left[8.95 \pm 0.36\right] \times 10^{-4}$ cts/(keV kg yr) from $^{232}$Th and $^{238}$U in the DEMONSTRATOR's components.
Submitted 13 August, 2024;
originally announced August 2024.
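The Monte Carlo uncertainty propagation step, building a background-index distribution by sampling each component's specific activity, mass, and efficiency, can be sketched as follows. All component values, units, and distribution shapes below are hypothetical placeholders, not MAJORANA assay results.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000  # Monte Carlo samples

# Hypothetical components:
# (activity mean, activity sigma [uBq/kg], mass [kg], eff mean, eff sigma)
components = [
    (5.0, 1.5, 2.0, 1e-7, 2e-8),
    (1.2, 0.4, 10.0, 5e-8, 1e-8),
]

bi = np.zeros(N)
for a_mu, a_sig, mass, e_mu, e_sig in components:
    # Sample assay activity and simulated efficiency; clip at zero since
    # negative activities/efficiencies are unphysical.
    activity = np.clip(rng.normal(a_mu, a_sig, N), 0, None)   # uBq/kg
    eff = np.clip(rng.normal(e_mu, e_sig, N), 0, None)        # cts/decay in ROI
    decays_per_yr = activity * 1e-6 * mass * 3.15e7           # decays per year
    bi += decays_per_yr * eff                                 # toy cts/(keV kg yr)

print(f"background index: {bi.mean():.2e} +/- {bi.std():.2e}")
```

Summing sampled contributions component by component yields a full distribution for the background index, so the quoted central value and uncertainty fall out as its mean and standard deviation rather than from linear error propagation.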
-
Diff-PIC: Revolutionizing Particle-In-Cell Nuclear Fusion Simulation with Diffusion Models
Authors:
Chuan Liu,
Chunshu Wu,
Shihui Cao,
Mingkai Chen,
James Chenhao Liang,
Ang Li,
Michael Huang,
Chuang Ren,
Dongfang Liu,
Ying Nian Wu,
Tong Geng
Abstract:
The rapid development of AI highlights the pressing need for sustainable energy, a critical global challenge for decades. Nuclear fusion, generally seen as an ultimate solution, has been the focus of intensive research for nearly a century, with investments reaching hundreds of billions of dollars. Recent advancements in Inertial Confinement Fusion have drawn significant attention to fusion research, in which Laser-Plasma Interaction (LPI) is critical for ensuring fusion stability and efficiency. However, the complexity of LPI upon fusion ignition makes analytical approaches impractical, leaving researchers depending on extremely computation-demanding Particle-in-Cell (PIC) simulations to generate data, presenting a significant bottleneck to advancing fusion research. In response, this work introduces Diff-PIC, a novel framework that leverages conditional diffusion models as a computationally efficient alternative to PIC simulations for generating high-fidelity scientific LPI data. In this work, physical patterns captured by PIC simulations are distilled into diffusion models associated with two tailored enhancements: (1) To effectively capture the complex relationships between physical parameters and corresponding outcomes, the parameters are encoded in a physically-informed manner. (2) To further enhance efficiency while maintaining high fidelity and physical validity, the rectified flow technique is employed to transform our model into a one-step conditional diffusion model. Experimental results show that Diff-PIC achieves 16,200$\times$ speedup compared to traditional PIC on a 100 picosecond simulation, with an average reduction in MAE / RMSE / FID of 59.21% / 57.15% / 39.46% with respect to two other SOTA data generation approaches.
Submitted 5 October, 2024; v1 submitted 3 August, 2024;
originally announced August 2024.
-
Exciton Fission Enhanced Silicon Solar Cell
Authors:
Narumi Nagaya,
Kangmin Lee,
Collin F. Perkinson,
Aaron Li,
Youri Lee,
Xinjue Zhong,
Sujin Lee,
Leah P. Weisburn,
Tomi K. Baikie,
Moungi G. Bawendi,
Troy Van Voorhis,
William A. Tisdale,
Antoine Kahn,
Kwanyong Seo,
Marc A. Baldo
Abstract:
While silicon solar cells dominate global photovoltaic energy production, their continued improvement is hindered by the single junction limit. One potential solution is to use molecular singlet exciton fission to generate two electrons from each absorbed high-energy photon. We demonstrate that the long-standing challenge of coupling molecular excited states to silicon solar cells can be overcome…
▽ More
While silicon solar cells dominate global photovoltaic energy production, their continued improvement is hindered by the single junction limit. One potential solution is to use molecular singlet exciton fission to generate two electrons from each absorbed high-energy photon. We demonstrate that the long-standing challenge of coupling molecular excited states to silicon solar cells can be overcome using sequential charge transfer. Combining zinc phthalocyanine, aluminum oxide, and a shallow junction crystalline silicon microwire solar cell, the peak charge generation efficiency per photon absorbed in tetracene is (138 +- 6)%, comfortably surpassing the quantum efficiency limit for conventional silicon solar cells and establishing a new, scalable approach to low cost, high efficiency photovoltaics.
Submitted 30 July, 2024;
originally announced July 2024.
-
0.7 MW Yb:YAG pumped degenerate optical parametric oscillator at 2.06 μm
Authors:
Anni Li,
Mehran Bahri,
Robert M. Gray,
Seowon Choi,
Sajjad Hoseinkhani,
Anchit Srivastava,
Alireza Marandi,
Hanieh Fattahi
Abstract:
Frequency comb and field-resolved broadband absorption spectroscopy are promising techniques for rapid, precise, and sensitive on-site detection of short-lived atmospheric pollutants. Enhancing detection sensitivity in absorption spectroscopy hinges on bright sources that cover molecular resonances and on fast signal modulation techniques to implement lock-in detection schemes efficiently. Yb:YAG thin-disk lasers, combined with optical parametric oscillators (OPO), present a compelling solution to fulfill these requirements. In this work, we report on a bright OPO pumped by a Yb:YAG thin-disk Kerr-lens mode-locked oscillator, delivering 2.8 W, 114 fs pulses at 2.06 μm with an average pulse energy of 90 nJ. The OPO cavity operates at a 30.9 MHz pulse repetition rate, the second harmonic of the pump cavity, allowing for broadband, efficient, and dispersion-free modulation of the OPO output pulses at a 15.45 MHz rate. With 13% optical-to-optical conversion efficiency and high-frequency intra-cavity modulation, this scalable scheme holds promise to advance the detection sensitivity and frontiers of field-resolved spectroscopic techniques.
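The quoted figures can be cross-checked with simple arithmetic. This is a sketch; the modulation comment assumes the pump runs at half the OPO repetition rate, as the abstract's "second harmonic of the pump cavity" phrasing suggests:

```python
# Consistency check of the reported OPO figures (values from the abstract).
avg_power_W = 2.8            # average output power
rep_rate_Hz = 30.9e6         # OPO pulse repetition rate
pulse_energy_nJ = avg_power_W / rep_rate_Hz * 1e9
print(round(pulse_energy_nJ))          # 91, matching the quoted ~90 nJ

# With the pump at half the OPO repetition rate, only every other OPO round
# trip is pumped, giving an intrinsic output modulation at half the OPO rate.
modulation_MHz = rep_rate_Hz / 2 / 1e6
print(modulation_MHz)                  # 15.45
```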
Submitted 18 July, 2024;
originally announced July 2024.
-
A Review of Electromagnetic Elimination Methods for low-field portable MRI scanner
Authors:
Wanyu Bian,
Panfeng Li,
Mengyao Zheng,
Chihang Wang,
Anying Li,
Ying Li,
Haowei Ni,
Zixuan Zeng
Abstract:
This paper analyzes conventional and deep learning methods for eliminating electromagnetic interference (EMI) in MRI systems. We compare traditional analytical and adaptive techniques with advanced deep learning approaches. Key strengths and limitations of each method are highlighted. Recent advancements in active EMI elimination, such as external EMI receiver coils, are discussed alongside deep learning methods, which show superior EMI suppression by leveraging neural networks trained on MRI data. While deep learning improves EMI elimination and diagnostic capabilities, it introduces security and safety concerns, particularly in commercial applications. A balanced approach, integrating conventional reliability with deep learning's advanced capabilities, is proposed for more effective EMI suppression in MRI systems.
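A minimal sketch of the classical adaptive technique such reviews compare against deep learning: a least-mean-squares (LMS) filter that uses an external EMI reference coil to predict and subtract the interference picked up by the MRI receive coil. All signals here are synthetic and the coupling coefficients are hypothetical:

```python
import numpy as np

# Synthetic setup: a reference coil sees raw EMI; the MRI coil sees the
# desired signal plus EMI filtered through an unknown coupling response h.
rng = np.random.default_rng(1)
n = 20000
emi_ref = rng.normal(size=n)                      # EMI at the reference coil
h = np.array([0.8, -0.3, 0.1])                    # unknown coupling (hypothetical)
emi_in_mri = np.convolve(emi_ref, h)[:n]
mri_signal = 0.05 * np.sin(2 * np.pi * 0.01 * np.arange(n))
measured = mri_signal + emi_in_mri

taps, mu = 3, 0.01
w = np.zeros(taps)
cleaned = np.zeros(n)
for i in range(taps, n):
    x = emi_ref[i - taps + 1:i + 1][::-1]         # most recent reference samples
    est = w @ x                                   # predicted EMI in the MRI coil
    cleaned[i] = measured[i] - est                # EMI-suppressed output
    w += mu * cleaned[i] * x                      # LMS weight update

# After convergence the residual should track the true MRI signal closely.
err = np.mean((cleaned[-5000:] - mri_signal[-5000:]) ** 2)
print(err < 1e-3)                                  # True
```

The update works because the MRI signal is uncorrelated with the reference, so the filter weights converge toward the coupling response h alone; this dependence on a good reference measurement is exactly the limitation the deep-learning methods discussed above aim to relax.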
Submitted 13 November, 2024; v1 submitted 22 June, 2024;
originally announced June 2024.