Computational Physics
Showing new listings for Friday, 17 April 2026
- [1] arXiv:2604.14188 [pdf, html, other]
  Title: Grading the Unspoken: Evaluating Tacit Reasoning in Quantum Field Theory and String Theory with LLMs
  Comments: 9 pages + appendices, 2 figures, 9 tables
  Subjects: Computational Physics (physics.comp-ph); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); High Energy Physics - Theory (hep-th)
Large language models have demonstrated impressive performance across many domains of mathematics and physics. One natural question is whether such models can support research in highly abstract theoretical fields such as quantum field theory and string theory. Evaluating this possibility faces an immediate challenge: correctness in these domains is layered, tacit, and fundamentally non-binary. Standard answer-matching metrics fail to capture whether intermediate conceptual steps are properly reconstructed or whether implicit structural constraints are respected. We construct a compact expert-curated dataset of twelve questions spanning core areas of quantum field theory and string theory, and introduce a five-level grading rubric separating statement correctness, key concept awareness, reasoning chain presence, tacit step reconstruction, and enrichment. Evaluating multiple contemporary LLMs, we observe near-ceiling performance on explicit derivations within stable conceptual frames, but systematic degradation when tasks require reconstruction of omitted reasoning steps or reorganization of representations under global consistency constraints. These failures are driven not only by missing intermediate steps, but by an instability in representation selection: models often fail to identify the correct conceptual framing required to resolve implicit tensions. We argue that highly abstract theoretical physics provides a uniquely sensitive lens on the epistemic limits of current evaluation paradigms.
- [2] arXiv:2604.14201 [pdf, html, other]
  Title: LSTM-PINN for Steady-State Electrothermal Transport: Preserving Multi-Field Consistency in Strongly Coupled Heat and Fluid Flow
  Subjects: Computational Physics (physics.comp-ph); Fluid Dynamics (physics.flu-dyn)
Steady-state electrothermal systems involve strongly coupled heat transfer, fluid flow, and electric-potential transport, creating severe numerical challenges for standard physics-informed neural networks (PINNs) due to stark disparities in gradient scales and residual stiffnesses across the physical fields. To resolve these multiphysics bottlenecks, we introduce a Long Short-Term Memory PINN (LSTM-PINN) framework that utilizes a depth-recursive memory mechanism to preserve long-range spatial feature dependencies and maintain strict cross-field consistency. The proposed architecture is rigorously evaluated against conventional and attention-based networks across a unified five-field formulation encompassing four complex convective and drag regimes: Boussinesq electrothermal flow, drift-potential gauge-constrained transport, strong buoyancy-coupled convection, and Brinkman--Forchheimer drift. Quantitative and visual analyses demonstrate that LSTM-PINN successfully suppresses non-physical artifacts and structural distortions, yielding the highest thermodynamic fidelity and consistently outperforming state-of-the-art baselines in global error metrics. Ultimately, this memory-enhanced approach provides a highly robust and accurate computational baseline for capturing localized boundary layers and complex energy-momentum feedback in advanced electrothermal energy systems.
- [3] arXiv:2604.14926 [pdf, html, other]
  Title: Spectrally Accurate Simulation of Axisymmetric Vesicle Dynamics
  Subjects: Computational Physics (physics.comp-ph); Soft Condensed Matter (cond-mat.soft)
We present a meshless numerical method for simulating the dynamics of axisymmetric vesicles in a viscous medium. Key innovations include: (1) adaptive reparameterization based on local length scales, reducing the number of required harmonics; (2) gauge dynamics for maintaining optimal parameterization; (3) error control near the symmetry axis; and (4) spectrally accurate quadrature schemes for singular integrals. The method achieves high accuracy and computational efficiency for simulating lipid bilayer dynamics and related problems in soft matter physics.
- [4] arXiv:2604.14938 [pdf, html, other]
  Title: Sharp-interface VOF method for phase-change simulations on unstructured meshes
  Comments: Submitted to Journal of Computational Physics
  Subjects: Computational Physics (physics.comp-ph); Fluid Dynamics (physics.flu-dyn)
Unstructured meshes are among the most versatile approaches for capturing non-canonical geometries in fluid dynamics simulations. Despite this, most high-fidelity first-principles phase-change models are developed and applied on structured meshes. We present a phase-change simulation method for unstructured meshes that combines the algebraic Volume-of-Fluid (VOF) technique with geometric interface reconstruction, implemented in an in-house open-source CFD code. Phase-change rates are computed from local temperature gradients evaluated at the reconstructed interface, without empirical closure models, using a reconstruction procedure that operates on arbitrary polyhedral cells. Because the method relies on the standard finite-volume framework, it can be integrated into other cell-centred codes supporting unstructured meshes. The approach is validated against the one-dimensional Stefan and Sucking problems and the three-dimensional Scriven bubble growth on both hexahedral and polyhedral meshes, showing good agreement with analytical solutions in all three cases. A detailed analysis of the Scriven problem reveals that the interface-modified least-squares gradient stencil on Cartesian meshes overestimates the interfacial temperature gradient, producing a persistent overshoot of the analytical bubble radius and a coherent four-fold anisotropy that elongates the bubble along grid diagonals. On polyhedral meshes, the irregular face orientations eliminate both effects, yielding isotropic growth and monotonic convergence. Finally, we demonstrate the framework on turbulent upward co-current annular boiling flow, where early transient results are qualitatively consistent with a previous LES study and experimental observations of wave-modulated evaporation.
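The one-dimensional Stefan problem used for validation above has a closed-form similarity solution, which is why it serves as a standard phase-change benchmark. As a minimal sketch (textbook one-phase Stefan problem, not the paper's code), the interface position is $x(t) = 2\lambda\sqrt{\alpha t}$, with $\lambda$ solving a transcendental equation in the Stefan number:

```python
import math

def stefan_lambda(stefan_number, lo=1e-8, hi=5.0, iters=200):
    """Solve lam * exp(lam^2) * erf(lam) = Ste / sqrt(pi) by bisection.

    Classic one-phase Stefan similarity solution: the left-hand side is
    monotone increasing in lam, so bisection on [lo, hi] converges.
    """
    target = stefan_number / math.sqrt(math.pi)
    f = lambda lam: lam * math.exp(lam * lam) * math.erf(lam) - target
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def interface_position(t, alpha, lam):
    """Interface location x(t) = 2 * lam * sqrt(alpha * t)."""
    return 2.0 * lam * math.sqrt(alpha * t)
```

A simulated interface trajectory can then be compared against `interface_position` to measure the kind of radius overshoot discussed in the abstract.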
New submissions (showing 4 of 4 entries)
- [5] arXiv:2604.14189 (cross-list from physics.gen-ph) [pdf, html, other]
  Title: SWEEP (Seismic Wave Equation Exploration Platform): A Unified Solver Framework for Differentiable Wave Physics
  Comments: 13 pages, 11 figures
  Subjects: General Physics (physics.gen-ph); Computational Physics (physics.comp-ph)
SWEEP (Seismic Wave Equation Exploration Platform) is a unified and extensible wave equation solver library designed for wavefield modeling and inversion. It supports a wide range of wave propagation engines, including acoustic, elastic, attenuative, VTI, TTI, and their Born approximations, among others. With built-in support for automatic differentiation, the framework enables seamless implementation of full-waveform inversion (FWI), least-squares reverse time migration (LSRTM), and other gradient-based optimization methods. It also features a plug-and-play architecture, allowing easy integration and flexible combination of custom loss functions, multi-GPU computation, neural networks, and more. This makes SWEEP a powerful and customizable platform for tackling advanced seismic inverse problems.
- [6] arXiv:2604.14208 (cross-list from physics.optics) [pdf, html, other]
  Title: ML-based approach to classification and generation of structured light propagation in turbulent media
  Subjects: Optics (physics.optics); Machine Learning (cs.LG); Optimization and Control (math.OC); Computational Physics (physics.comp-ph)
This work develops machine learning approaches to classify structured light beams that develop random speckle disturbances as they propagate through turbulent atmospheres. Beam propagation is modeled by numerical simulation of a stochastic paraxial equation. We design convolutional neural networks tailored for this specific application and use them in a classification model with one-hot encoding. To address the challenge of potentially limited available data, we develop a prediction-based generative diffusion model to provide additional data during classifier training. We show that a Bregman distance minimization during the learning step improves the quality of the generation of high-frequency modes.
- [7] arXiv:2604.14472 (cross-list from cs.LG) [pdf, html, other]
  Title: Auxiliary Finite-Difference Residual-Gradient Regularization for PINNs
  Comments: 18 pages, 5 figures, 10 tables
  Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computational Engineering, Finance, and Science (cs.CE); Computational Physics (physics.comp-ph)
Physics-informed neural networks (PINNs) are often trained and selected according to a single scalar loss, even when the quantity of interest is more specific. We study a hybrid design in which the governing PDE residual remains automatic-differentiation (AD) based, while finite differences (FD) appear only in a weak auxiliary term that penalizes gradients of the sampled residual field. The FD term regularizes the residual field without replacing the PDE residual itself.
We examine this idea in two stages. Stage 1 is a controlled Poisson benchmark comparing a baseline PINN, the FD residual-gradient regularizer, and a matched AD residual-gradient baseline. Stage 2 transfers the same logic to a three-dimensional annular heat-conduction benchmark (PINN3D), where baseline errors concentrate near a wavy outer wall and the auxiliary grid is implemented as a body-fitted shell adjacent to the wall.
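The auxiliary FD term can be sketched generically: given PDE residual values sampled on a regular grid, penalize their finite-difference gradients. This is our own minimal illustration of the idea, not the paper's implementation; the grid spacing `h` and the weighting are placeholders:

```python
import numpy as np

def fd_residual_gradient_penalty(residual_grid, h=1.0):
    """Mean squared finite-difference gradient of a sampled residual field.

    residual_grid: 2-D array of PDE residual values on a regular grid.
    Central differences are taken in the interior; the penalty vanishes
    for a spatially constant residual field, so it only discourages
    rough, rapidly varying residuals.
    """
    rx = (residual_grid[1:-1, 2:] - residual_grid[1:-1, :-2]) / (2.0 * h)
    ry = (residual_grid[2:, 1:-1] - residual_grid[:-2, 1:-1]) / (2.0 * h)
    return float(np.mean(rx**2 + ry**2))
```

In a hybrid loss this would enter as `total = pde_residual_loss + w * fd_residual_gradient_penalty(r, h)`, with the AD-based residual untouched and only the auxiliary smoothing term discretized by FD.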
In Stage 1, the FD regularizer reproduces the main effect of residual-gradient control while exposing a trade-off between field accuracy and residual cleanliness. In Stage 2, the shell regularizer improves the application-facing quantities, namely outer-wall flux and boundary-condition behavior. Across seeds 0-5 and 100k epochs, the most reliable tested configuration is a fixed shell weight of 5e-4 under the Kourkoutas-beta optimizer regime: relative to a matched run without the shell term, it reduces the mean outer-wall BC RMSE from 1.22e-2 to 9.29e-4 and the mean wall-flux RMSE from 9.21e-3 to 9.63e-4. Adam with beta2=0.999 becomes usable when the initial learning rate is reduced to 1e-3, although its shell benefit is less robust than under Kourkoutas-beta. Overall, the results support a targeted view of hybrid PINNs: an auxiliary-only FD regularizer is most valuable when it is aligned with the physical quantity of interest, here the outer-wall flux.
- [8] arXiv:2604.14553 (cross-list from physics.flu-dyn) [pdf, html, other]
  Title: Learning to traverse convective flows at moderate to high Rayleigh numbers
  Comments: 35 pages, 21 figures
  Subjects: Fluid Dynamics (physics.flu-dyn); Computational Physics (physics.comp-ph)
We study the navigation of a self-propelled inertial particle in two-dimensional Rayleigh--Bénard convection at Prandtl number $Pr = 0.71$ and cell aspect ratio $\Gamma = 4$ for Rayleigh numbers $Ra$ ranging from $10^{7}$ to $10^{11}$. A reinforcement-learning (RL) controller selects the propulsive acceleration, subject to an upper bound $\mathcal{A}_{\max}$, to achieve a prescribed horizontal displacement. We find that the success rate increases abruptly with $\mathcal{A}_{\max}$ at moderate $Ra$, whereas at higher $Ra$ the transition becomes more gradual and shifts to larger $\mathcal{A}_{\max}$. Moreover, although the completion time increases with $Ra$, the propulsion energy required for successful traversal decreases. Proper orthogonal decomposition (POD) reveals that these performance differences arise from reorganisation of the carrier flow. At moderate $Ra$, the dominant large-scale circulation partitions the domain through robust transport barriers, requiring a finite thrust surplus to cross them; at higher $Ra$, energy is distributed across many modes, the barriers fragment, and transient plume-assisted pathways emerge. Compared with a constant-heading baseline, the learned policy aligns with local currents and consumes significantly less energy. Lagrangian coherent structure (LCS) analysis further shows that the RL agent inherently learns to cross repelling barriers and surf along attracting pathways. Finally, by mapping these behaviours onto the local Eulerian flow topology using Voronoi tessellation and the $Q$-criterion, we distil an interpretable, physics-based heuristic strategy that achieves robust navigability. These results connect turbulent-flow organisation with autonomous navigation under bounded actuation.
- [9] arXiv:2604.14562 (cross-list from cs.LG) [pdf, html, other]
  Title: Material-Agnostic Zero-Shot Thermal Inference for Metal Additive Manufacturing via a Parametric PINN Framework
  Subjects: Machine Learning (cs.LG); Applied Physics (physics.app-ph); Computational Physics (physics.comp-ph)
Accurate thermal modeling in metal additive manufacturing (AM) is essential for understanding the process-structure-performance relationship. While prior studies have explored generalization across unseen process conditions, they often require extensive datasets, costly retraining, or pre-training. Generalization across different materials also remains relatively unexplored due to the challenges posed by distinct material-dependent thermal behaviors. This paper introduces a parametric physics-informed neural network (PINN) framework for zero-shot generalization across arbitrary materials without labeled data, retraining, or pre-training. The framework adopts a decoupled parametric PINN architecture that separately encodes material properties and spatiotemporal coordinates, fusing them through conditional modulation to better align with the multiplicative role of material parameters in the governing equation and boundary conditions. Physics-guided output scaling derived from Rosenthal's analytical solution and a hybrid optimization strategy are further incorporated to enhance physical consistency, training stability, and convergence. Experiments on bare plate laser powder bed fusion (LPBF) across diverse metal alloys, including both in-distribution and out-of-distribution cases, demonstrate effective zero-shot generalizability along with superior training efficiency. Specifically, the proposed framework achieved up to a 64.2% reduction in relative L2 error compared to the non-parametric baseline while surpassing its performance within only 4.4% of the baseline training epochs. Ablation studies confirm that the proposed framework's components are broadly applicable to other PINN-based approaches. Overall, the proposed framework provides an efficient and scalable material-agnostic solution for zero-shot thermal modeling, contributing to more flexible and practical deployment in metal AM.
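Rosenthal's analytical solution, used above for physics-guided output scaling, has a compact closed form for a moving point heat source on a semi-infinite solid. A sketch of the standard textbook expression (our own illustration; parameter names `Q`, `k`, `alpha`, `v` are ours, and the paper's actual scaling may differ):

```python
import math

def rosenthal_temperature(x, y, z, t, T0, Q, k, alpha, v):
    """Rosenthal moving-point-source temperature on a semi-infinite solid.

    xi = x - v*t is the coordinate in the frame moving with the source.
    The exponential decays fastest ahead of the source (xi > 0), giving
    the characteristic elongated melt pool trailing behind the laser.
    T0: ambient temperature, Q: effective absorbed power,
    k: thermal conductivity, alpha: thermal diffusivity, v: scan speed.
    """
    xi = x - v * t
    R = math.sqrt(xi * xi + y * y + z * z)
    if R == 0.0:
        raise ValueError("singular at the source location")
    return T0 + (Q / (2.0 * math.pi * k * R)) * math.exp(-v * (R + xi) / (2.0 * alpha))
```

Note how the material parameters enter multiplicatively through `Q/k` and the exponent `v/alpha`, which is the structural motivation the abstract gives for conditional modulation of material embeddings.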
- [10] arXiv:2604.14609 (cross-list from cs.AI) [pdf, html, other]
  Title: El Agente Forjador: Task-Driven Agent Generation for Quantum Simulation
  Authors: Zijian Zhang, Aiwei Yin, Amaan Baweja, Jiaru Bai, Ignacio Gustin, Varinia Bernales, Alán Aspuru-Guzik
  Subjects: Artificial Intelligence (cs.AI); Computational Physics (physics.comp-ph)
AI for science promises to accelerate the discovery process. The advent of large language models (LLMs) and agentic workflows makes it possible to expedite a growing range of scientific tasks. However, most current-generation agentic systems depend on static, hand-curated toolsets that hinder adaptation to new domains and evolving libraries. We present El Agente Forjador, a multi-agent framework in which universal coding agents autonomously forge, validate, and reuse computational tools through a four-stage workflow of tool analysis, tool generation, task execution, and iterative solution evaluation. Evaluated across 24 tasks spanning quantum chemistry and quantum dynamics on five coding-agent setups, we compare three operating modes: zero-shot generation of tools per task, reuse of a curriculum-built toolset, and direct problem-solving with the coding agents as the baseline. We find that our tool generation and reuse framework consistently improves accuracy over the baseline. We also show that reusing a toolset built by a stronger coding agent can reduce API cost and substantially raise solution quality for weaker coding agents. Case studies further demonstrate that tools forged for different domains can be combined to solve hybrid tasks. Taken together, these results show that LLM-based agents can use their scientific knowledge and coding capabilities to autonomously build reusable scientific tools, pointing toward a paradigm in which agent capabilities are defined by the tasks they are designed to solve rather than by explicitly engineered implementations.
- [11] arXiv:2604.14764 (cross-list from cond-mat.mtrl-sci) [pdf, html, other]
  Title: Nonmagnetic-magnetic Transitions in Rutile RuO2
  Comments: 18 pages, 5 figures
  Subjects: Materials Science (cond-mat.mtrl-sci); Computational Physics (physics.comp-ph)
Rutile RuO$_2$ has attracted great interest recently, as its magnetic ground state remains controversial. Experimental studies have reported either nonmagnetic or altermagnetic (AM) ground states in different crystalline samples of RuO$_2$, highlighting the need for an explanation that resolves this contradiction. In this study, density functional theory calculations are performed to reveal the correlation-sensitive and strain-dependent magnetism of bulk RuO$_2$. On one hand, multiple AM phases with different magnitudes of the spin magnetic moment are identified in the Hubbard parameter space of RuO$_2$. On the other hand, when strains that significantly change the crystal cell volume are applied, the ground state of RuO$_2$ can undergo transitions between a nonmagnetic state with no spin splitting and magnetic states with spin splitting in the band structure. These findings not only demonstrate intriguing physics in 4d-electron-correlated RuO$_2$, but also highlight its potential for spintronic applications.
- [12] arXiv:2604.14797 (cross-list from math.NA) [pdf, html, other]
  Title: High-order kernel regularization of singular and hypersingular Helmholtz boundary integral operators
  Subjects: Numerical Analysis (math.NA); Computational Physics (physics.comp-ph)
This paper extends and analyzes the high-order kernel regularization framework of Beale & Tlupova (arXiv:2510.13639) to all four boundary integral operators of the Helmholtz Calderon calculus in three dimensions: the single-layer, double-layer, adjoint double-layer, and hypersingular operators. To the best of our knowledge, this work provides the first high-order kernel regularization of the hypersingular operator for both the Helmholtz and Laplace equations in three dimensions. The regularization replaces each singular kernel with a smooth modification constructed from error functions together with a polynomial correction whose coefficients are determined through moment conditions. Alongside the derivation of the regularizing functions, the paper provides a unified error analysis of the combined regularization and quadrature discretization procedure. By coupling the regularization parameter to the mesh size, the two error contributions can be balanced, leading to explicit overall convergence rates that depend jointly on the order of the regularization and the degree of exactness of the surface quadrature rule. A key practical feature of the method is its implementation simplicity. Once the regularizing functions are determined, the numerical task reduces entirely to the evaluation of smooth surface integrals using standard quadrature, without the need for element-local solves, singularity-specific precomputations, or specialized quadrature rules. Although the modified kernel is generally incompatible with kernel-specific fast methods, this limitation is addressed through H-matrix acceleration, applicable in a black-box manner. Numerical examples -- including verification of the predicted convergence rates and solution of sound-soft and sound-hard scattering problems by smooth obstacles -- demonstrate the accuracy and practicality of the proposed methodology.
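The basic idea of erf-based kernel regularization can be illustrated on the simplest case, the Laplace single-layer kernel (a lower-order example in the same family; the Helmholtz kernels and the polynomial moment-condition corrections described above are more elaborate, and the parameter `delta` plays the role of the regularization length coupled to the mesh size):

```python
import math

def regularized_single_layer(r, delta):
    """Smoothed Laplace single-layer kernel erf(r/delta) / (4*pi*r).

    Replaces the 1/(4*pi*r) singularity with a bounded, smooth function:
    since erf(s)/s -> 2/sqrt(pi) as s -> 0, the kernel tends to the
    finite value 1/(2*pi**1.5*delta) at r = 0, while for r >> delta it
    is exponentially close to the original kernel.
    """
    if r < 1e-12 * delta:
        return 1.0 / (2.0 * math.pi**1.5 * delta)
    return math.erf(r / delta) / (4.0 * math.pi * r)
```

Because the regularized kernel is smooth, the surface integral can then be evaluated with standard quadrature, which is the implementation simplicity the abstract emphasizes.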
- [13] arXiv:2604.14921 (cross-list from quant-ph) [pdf, other]
  Title: Split-Evolution Quantum Phase Estimation for Particle-Conserving Hamiltonians
  Subjects: Quantum Physics (quant-ph); Computational Physics (physics.comp-ph)
We present a hardware demonstration and resource analysis of split-evolution quantum phase estimation (SE-QPE) on a Quantinuum System Model H2 quantum computer. SE-QPE is a modification of canonical QPE for particle-conserving Hamiltonians in which controlled time evolution is replaced by CSWAP-based interference between a target register and a reference register. For factorizations of time evolution with a shared eigenbasis, SE-QPE preserves the phase-register outcome distribution of canonical QPE and, unlike compute--uncompute substitutions, it remains compatible with non-exact eigenstates. The substitution removes controlled-simulation overhead and enables parallel evolution on two registers, reducing the depth of each phase-kickback block. Resource analysis for Trotterized double-factorized chemistry Hamiltonians shows that the substitution becomes increasingly favorable at higher phase powers, so combining QPE and SE-QPE implementations can be a useful option. Over a range of FeMoco active spaces, SE-QPE reduces time-evolution resources, with asymptotic reductions of about 33% in CX count, 25% in $T$ count, and an asymptotic depth ratio of $3/N$ for CX layers. On Quantinuum H2-2, a four-qubit demonstration on a model ethylene system with explicit inverse QFT and repeated phase-kickback steps up to 6 phase bits yields distinct energies and shows that the auxiliary registers provide useful error-detection filters.
- [14] arXiv:2604.14976 (cross-list from cond-mat.mtrl-sci) [pdf, other]
  Title: Towards Non-van der Waals 2D Topological Insulators
  Comments: 9 pages, 4 figures
  Subjects: Materials Science (cond-mat.mtrl-sci); Computational Physics (physics.comp-ph)
Non-van der Waals two-dimensional (2D) materials derived from strongly bonded non-layered crystals have recently emerged as a rising platform for nanoscale research. While uncovering and tuning their (opto-)electronic, catalytic, and magnetic properties has been the focus of intense research, the impact of spin-orbit coupling (SOC) on their electronic structure has not yet been explored in detail. Studying these effects is, however, particularly relevant due to their surface cation termination and the presence of heavy elements in several representative compounds. Here, we investigate the effect of SOC on the electronic structure of 2D AgBiO3, NaBiO3, and SbTlO3. While the first two systems show negligible band renormalization upon inclusion of relativistic effects around the band gap, SbTlO3 showcases a large SOC-induced splitting (229 meV) for the lowest conduction bands, associated with a band inversion. Substituting Tl with Pb to form SbPbO3 brings the band-inverted feature to the Fermi level. Analysis of topological invariants and investigation of edge states of zig-zag and armchair ribbons within the 200 meV gap confirm the topological nature of the band splitting. Our work thus establishes a foundation for the systematic study of robust non-van der Waals 2D topological insulators.
- [15] arXiv:2604.14992 (cross-list from physics.optics) [pdf, other]
  Title: Toroidal Plasmonic Nanodimers for Enhanced Near-Infrared Emission in Heterostructured InP Quantum Dots
  Comments: 16 pages, 4 figures, 2 pages of supplementary information including 1 supplementary figure
  Subjects: Optics (physics.optics); Materials Science (cond-mat.mtrl-sci); Applied Physics (physics.app-ph); Computational Physics (physics.comp-ph); Quantum Physics (quant-ph)
Near-infrared (NIR) emitters operating in the 650-900 nm range are highly attractive for imaging and sensing in turbid media; however, cadmium-free InP-based quantum dots (QDs) often suffer from limited brightness due to nonradiative pathways and inefficient photon outcoupling. In particular, heterostructured InP QDs can exhibit band alignments that induce partial spatial separation of charge carriers, leading to reduced electron-hole wavefunction overlap. This modifies intrinsic recombination dynamics and enhances the sensitivity of their emission to the surrounding photonic environment. Here, we investigate silver toroidal plasmonic nanoantenna dimers (Ag TPNDs) through finite-difference time-domain (FDTD) simulations as a geometry-tunable platform for enhancing NIR emission of heterostructured InP-based QDs. The coupled toroidal geometry supports strongly confined bonding modes that generate intense nanogap hotspots, while its resonance can be systematically tuned through the toroid aspect ratio. By spectrally aligning the antenna response with QD emission bands (675-845 nm), we achieve large Purcell enhancements together with high quantum efficiencies, demonstrating efficient conversion of enhanced decay rates into radiative emission. We further show that nanometer-scale variations in emitter-antenna separation strongly modulate the radiative rates and spectral response. These results establish toroidal plasmonic nanodimers as a topology-driven platform for controlling emission in NIR quantum emitters and for advancing NIR nanophotonic applications.
- [16] arXiv:2604.15116 (cross-list from math.NA) [pdf, html, other]
  Title: Asymptotic gauge-invariant Hybrid High-Order method for magnetic Schrödinger equations
  Subjects: Numerical Analysis (math.NA); Computational Physics (physics.comp-ph)
We introduce a Hybrid High-Order (HHO) method for the Schrödinger equation in the presence of a magnetic vector potential. In quantum mechanics, physical observables are invariant under continuous gauge transformations, an invariance that must be preserved at the discrete level to avoid unphysical artifacts. To address this, we construct a discrete covariant gradient operator on arbitrary polyhedral meshes. We prove that the resulting discrete bilinear form guarantees gauge covariance asymptotically at the discrete level. The resulting scheme achieves optimal convergence rates and preserves a discrete Gårding inequality, guaranteeing a stable ground state. The theoretical properties of the scheme are corroborated by numerical experiments, including the computation of the Fock-Darwin fundamental energy and a replication of the Aharonov-Bohm effect.
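At the continuous level, the gauge covariance the discrete scheme aims to mimic is the standard identity for the magnetic (covariant) gradient; a short textbook derivation, not taken from the paper:

```latex
% Gauge transformation with smooth phase \theta:
%   A' = A + \nabla\theta, \qquad \psi' = e^{i\theta}\psi.
\begin{aligned}
(\nabla - iA')\psi'
  &= e^{i\theta}\bigl(\nabla\psi + i\psi\nabla\theta\bigr)
     - i\bigl(A + \nabla\theta\bigr)e^{i\theta}\psi \\
  &= e^{i\theta}(\nabla - iA)\psi,
\end{aligned}
% so |(\nabla - iA)\psi| and the associated magnetic energy are gauge invariant.
```

The discrete covariant gradient mentioned above is constructed so that this transformation law holds, at least asymptotically, on polyhedral meshes.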
- [17] arXiv:2604.15277 (cross-list from physics.flu-dyn) [pdf, html, other]
  Title: Superstatistical Approach to Turbulent Circulation Fluctuations
  Comments: 8 pages and 3 figures
  Subjects: Fluid Dynamics (physics.flu-dyn); Statistical Mechanics (cond-mat.stat-mech); Computational Physics (physics.comp-ph)
Recent investigations of turbulent circulation fluctuations have uncovered substantial insights into the statistical organization of flow structures and revealed unexpected geometric features of turbulent intermittency. Of particular interest here is the observation that circulation probability distribution functions admit a superstatistical representation, namely a description based on "ensembles of Boltzmann-Gibbs ensembles". A fundamental phenomenological ingredient of this approach, which serves as a natural starting point for modeling, relies on the strong correlation between the dissipation field and the spatial distribution of elementary circulation-carrying structures, i.e., small-scale vortices. Within the language of superstatistics, this corresponds to characterizing circulation statistics through an appropriate choice of conditioned (Boltzmann-like) distributions and mixing distributions. We show that the superstatistical class of q-exponentials, known to have broad applicability in a wide range of multiscale and non-equilibrium systems, provides an accurate description of the observed circulation statistics in homogeneous and isotropic turbulence. This finding opens avenues for exploring the statistical structure of the turbulent cascade in the context of non-extensive statistical mechanics, rooted in the concept of non-additive entropies.
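The superstatistical mechanism behind q-exponentials can be checked numerically in its textbook form: mixing Boltzmann factors $e^{-\beta E}$ over a Gamma distribution of inverse temperatures yields $(1 + E/c)^{-n}$, a q-exponential with $1/(q-1) = n$. This is a generic illustration of the "ensemble of Boltzmann-Gibbs ensembles" idea, not the circulation-specific conditioning used in the paper:

```python
import math

def gamma_mixture_of_exponentials(E, shape, rate, n_steps=20000, beta_max=200.0):
    """Numerically mix Boltzmann factors exp(-beta*E) over a Gamma
    distribution of inverse temperatures beta (trapezoidal rule).

    The exact result is the Gamma-distribution Laplace transform
    (1 + E/rate)**(-shape), i.e. a q-exponential with 1/(q-1) = shape.
    Requires shape >= 1 so the Gamma density is bounded at beta = 0.
    """
    h = beta_max / n_steps
    norm = rate**shape / math.gamma(shape)
    total = 0.0
    for i in range(n_steps + 1):
        beta = i * h
        w = 0.5 if i in (0, n_steps) else 1.0  # trapezoid endpoint weights
        pdf = norm * beta**(shape - 1) * math.exp(-rate * beta) if beta > 0 else 0.0
        total += w * pdf * math.exp(-beta * E)
    return total * h
```

The heavier-than-exponential tail of the mixed distribution is exactly the kind of fat-tailed circulation statistics the superstatistical description targets.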
Cross submissions (showing 13 of 13 entries)
- [18] arXiv:2501.11146 (replaced) [pdf, html, other]
  Title: An efficient explicit implementation of a near-optimal quantum algorithm for simulating linear dissipative differential equations
  Subjects: Quantum Physics (quant-ph); Computational Physics (physics.comp-ph)
We propose an efficient block-encoding technique for the implementation of the Linear Combination of Hamiltonian Simulations (LCHS) for simulating dissipative initial-value problems. This algorithm approximates a target nonunitary operator as a weighted sum of Hamiltonian evolutions, thereby emulating a dissipative problem by mixing various time scales. We introduce an efficient encoding of the LCHS into a quantum circuit based on a simple coordinate transformation that turns the dependence on the summation index into a trigonometric function. Classically, this method is equivalent to the use of a highly accurate Fejér-Clenshaw-Curtis quadrature formula. Quantumly, this significantly simplifies block-encoding of a dissipative problem and allows one to perform an exponential number of Hamiltonian simulations by a single Quantum Signal Processing (QSP) circuit. The resulting LCHS circuit has high success probability and the selector scales logarithmically with the number of terms in the LCHS sum and linearly with time. Careful analysis of error convergence proves that this method is more efficient than other LCHS circuits that have recently appeared in the literature. We verify the quantum circuit and its scaling by simulating it on a digital emulator of fault-tolerant quantum computers and, as a test problem, solve the advection-diffusion equation. The proposed algorithm can be used for modeling a wide class of nonunitary initial-value problems including the Liouville equation with added dissipation and linear embeddings of nonlinear systems, such as the Koopman-von Neumann and Carleman embeddings.
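The core LCHS idea of emulating dissipation by a weighted sum of unitary phases can be seen already at the scalar level, where the simplest LCHS kernel is a Lorentzian: $e^{-at} = \int dk\, e^{-ikat}/\bigl(\pi(1+k^2)\bigr)$ for $a, t > 0$. A plain trapezoidal check of this identity (our own sketch; the paper's improved Fejér-Clenshaw-Curtis quadrature and block-encoding are beyond it):

```python
import cmath
import math

def lchs_scalar(a, t, k_max=200.0, n_steps=200000):
    """Approximate exp(-a*t) (a, t > 0) as a Lorentzian-weighted sum of
    unitary phases exp(-i*k*a*t), the scalar analogue of the LCHS sum.

    The integral over k is truncated to [-k_max, k_max] and discretized
    with the trapezoidal rule; the imaginary parts cancel by symmetry.
    """
    h = 2.0 * k_max / n_steps
    total = 0.0 + 0.0j
    for i in range(n_steps + 1):
        k = -k_max + i * h
        w = 0.5 if i in (0, n_steps) else 1.0  # trapezoid endpoint weights
        total += w * cmath.exp(-1j * k * a * t) / (math.pi * (1.0 + k * k))
    return (total * h).real
```

Each term in the sum is a pure phase, which is what lets the quantum circuit realize the nonunitary decay as a linear combination of Hamiltonian simulations.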
- [19] arXiv:2506.08080 (replaced) [pdf, other]
  Title: Towards AI-assisted Neutrino Flavor Theory Design
  Authors: Jason Benjamin Baretz, Max Fieg, Vijay Ganesh, Aishik Ghosh, V. Knapp-Perez, Jake Rudolph, Daniel Whiteson
  Comments: 28 pages, 12 figures
  Subjects: High Energy Physics - Phenomenology (hep-ph); Machine Learning (cs.LG); Computational Physics (physics.comp-ph); Machine Learning (stat.ML)
Particle physics theories, such as those which explain neutrino flavor mixing, arise from a vast landscape of model-building possibilities. A model's construction typically relies on the intuition of theorists. It also requires considerable effort to identify appropriate symmetry groups, assign field representations, and extract predictions for comparison with experimental data. We develop an Autonomous Model Builder (AMBer), a framework in which a reinforcement learning agent interacts with a streamlined physics software pipeline to search these spaces efficiently. AMBer selects symmetry groups, particle content, and group representation assignments to construct viable models while minimizing the number of free parameters introduced. We validate our approach in well-studied regions of theory space and extend the exploration to a novel, previously unexamined symmetry group. While demonstrated in the context of neutrino flavor theories, this approach of reinforcement learning with physics software feedback may be extended to other theoretical model-building problems in the future.
- [20] arXiv:2511.13677 (replaced) [pdf, other]
  Title: Open-shell frozen natural orbital approach for quantum eigensolvers
  Comments: 16 pages, 7 figures, 5 tables
  Journal-ref: J. Chem. Phys. 164, 154105 (2026)
  Subjects: Chemical Physics (physics.chem-ph); Computational Physics (physics.comp-ph)
We present an open-shell frozen natural orbital (FNO) approach, which utilizes second-order Z-averaged perturbation theory (ZAPT2), to reduce the restricted open-shell Hartree-Fock virtual space size with controllable accuracy. Our ZAPT2 frozen natural orbital (ZAPT-FNO) selection scheme significantly outperforms the canonical molecular orbital virtual space truncation scheme based on Hartree-Fock orbital energies, especially when using large multiply-polarized and augmented basis sets. We demonstrate that the ZAPT-FNO-selected virtual orbitals lead to a systematic convergence of the correlation energies and, more importantly, of the singlet-triplet T$_1$-S$_0$ energy gaps with respect to the complete active space (CAS) [occupied + virtual] size. We confirm our findings by simulating T$_1$-S$_0$ gaps in H$_2$O$_2$ and O$_2$ molecules using the traditional complete active space configuration interaction (CASCI) approach, as well as in stretched CH$_2$, for which we also employed the iterative qubit coupled cluster (iQCC) method as a quantum eigensolver. Finally, we applied the iQCC method with a ZAPT-FNO-selected active space to the phosphorescent Ir(ppy)$_3$ complex with 260 electrons, where extended basis sets are required to achieve chemical (ca. 1 m$E_h$) accuracy. In this case, CASCI results are not available; however, the iQCC-computed T$_1$-S$_0$ gaps show robust convergence with increasing basis-set and CAS size, approaching the experimental value. Thus, the ZAPT-FNO method is very promising for improving the accuracy of quantum chemical modelling in a resource-efficient manner, and opens the door to simulating open-shell states of large materials within realistic active space sizes and without compromising on basis-set quality.
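The generic mechanics of an FNO-style truncation (not ZAPT2 itself) can be sketched in a few lines: diagonalize a virtual-virtual one-particle density matrix and keep only the natural orbitals whose occupation exceeds a threshold. The density matrix below is mock random data, and the cutoff is an arbitrary illustrative value:

```python
import numpy as np

# Mock correlated virtual-virtual density matrix (random positive definite).
rng = np.random.default_rng(0)
n_virt = 20
M = rng.normal(size=(n_virt, n_virt))
D_vv = M @ M.T
D_vv /= np.trace(D_vv)

occ, U = np.linalg.eigh(D_vv)          # natural occupations and orbitals
occ, U = occ[::-1], U[:, ::-1]         # sort by decreasing occupation
keep = occ > 1e-2                      # occupation-number cutoff
V_fno = U[:, keep]                     # retained natural virtual orbitals
print(keep.sum(), "of", n_virt, "virtual orbitals retained")
```

Because the truncation is controlled by a single occupation threshold, the error is tunable, which is the "controllable accuracy" property the abstract emphasizes.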
- [21] arXiv:2601.10557 (replaced) [pdf, html, other]
-
Title: Chebyshev Accelerated Subspace Eigensolver for Pseudo-hermitian Hamiltonians
Edoardo Di Napoli (1), Clément Richefort (1), Xinzhe Wu (1) ((1) Jülich Supercomputing Centre, Forschungszentrum Jülich, Germany)
Comments: To be submitted to SISC
Subjects: Numerical Analysis (math.NA); Computational Engineering, Finance, and Science (cs.CE); Distributed, Parallel, and Cluster Computing (cs.DC); Computational Physics (physics.comp-ph)
Studying the optoelectronic structure of materials can require the computation of several thousands of the smallest positive eigenpairs of a pseudo-hermitian Hamiltonian. Iterative eigensolvers may be preferred over direct methods for this task, since their complexity scales with the desired fraction of the spectrum. In addition, they generally rely on highly optimized and scalable kernels, such as matrix-vector multiplications, that leverage the massive parallelism and computational power of modern exascale systems. The Chebyshev Accelerated Subspace iteration Eigensolver (ChASE) can compute several thousands of the most extreme eigenpairs of dense hermitian matrices with proven scalability on massively parallel accelerated clusters. This work presents an extension of ChASE that solves for a portion of the smallest positive eigenpairs of pseudo-hermitian Hamiltonians as they appear in the treatment of excitonic materials. By exploiting the numerical structure and spectral properties of the Hamiltonian matrix, we preserve the characteristic positive-negative symmetry in the treatment of the eigenvectors and propose an oblique variant of the Rayleigh-Ritz projection that features quadratic convergence of the Ritz values with no explicit construction of the dual basis. Additionally, we introduce a parallel implementation of the recursive matrix-product operation appearing in the Chebyshev filter with a limited amount of global communication. Our development is supported by a full numerical analysis and experimental tests.
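The recursive matrix-product operation at the heart of such filters is the Chebyshev three-term recurrence. The following minimal NumPy sketch (generic Chebyshev-filtered subspace iteration on a random symmetric matrix, not the ChASE implementation or its pseudo-hermitian extension) amplifies the eigenpair outside the damping interval and recovers it via Rayleigh-Ritz:

```python
import numpy as np

def chebyshev_filter(A, V, degree, a, b):
    """Amplify components of V outside [a, b] via the Chebyshev recurrence
    T_{k+1}(B) = 2 B T_k(B) - T_{k-1}(B), with B = (A - c I) / e."""
    c, e = (a + b) / 2.0, (b - a) / 2.0
    Y_prev = V                              # T_0(B) V
    Y = (A @ V - c * V) / e                 # T_1(B) V
    for _ in range(2, degree + 1):
        Y_new = 2.0 * (A @ Y - c * Y) / e - Y_prev
        Y_prev, Y = Y, Y_new
    return Y

rng = np.random.default_rng(1)
n = 100
A = rng.normal(size=(n, n)); A = (A + A.T) / 2  # random symmetric matrix
evals = np.linalg.eigvalsh(A)

# Damp everything in [lambda_min, lambda_{n-1}]; the largest eigenpair grows.
V = rng.normal(size=(n, 3))
Y = chebyshev_filter(A, V, degree=100, a=evals[0], b=evals[-2])
Q, _ = np.linalg.qr(Y)                      # orthonormalize filtered block
ritz = np.linalg.eigvalsh(Q.T @ A @ Q)      # Rayleigh-Ritz values
print(abs(ritz[-1] - evals[-1]))
```

The only kernel touching $A$ is the matrix-matrix product inside the loop, which is why distributing that product with few global communications, as the abstract describes, dominates the parallel cost.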
- [22] arXiv:2602.12109 (replaced) [pdf, html, other]
-
Title: A critical assessment of bonding descriptors for predicting materials properties
Aakash Ashok Naik, Nidal Dhamrait, Katharina Ueltzen, Christina Ertural, Philipp Benner, Gian-Marco Rignanese, Janine George
Subjects: Materials Science (cond-mat.mtrl-sci); Chemical Physics (physics.chem-ph); Computational Physics (physics.comp-ph)
Most machine learning models for materials science rely on descriptors based on materials compositions and structures, even though the chemical bond has proven to be a valuable concept for predicting materials properties. Over the years, various theoretical frameworks have been developed to characterize bonding in solid-state materials. However, integrating bonding information from these frameworks into machine learning pipelines at scale has been limited by the lack of a systematically generated and validated database. Recent advances in high-throughput bonding analysis workflows have addressed this issue, and our previously computed Quantum-Chemical Bonding Database for Solid-State Materials was extended to include approximately 13,000 materials. This database is then used to derive a new set of quantum-chemical bonding descriptors. A systematic assessment is performed using statistical significance tests to evaluate how the inclusion of these descriptors influences the performance of machine-learning models that otherwise rely solely on structure- and composition-derived features. Models are built to predict elastic, vibrational, and thermodynamic properties typically associated with chemical bonding in materials. The results demonstrate that incorporating quantum-chemical bonding descriptors not only improves predictive performance but also helps identify intuitive expressions for properties such as the projected force constant and lattice thermal conductivity via symbolic regression.
- [23] arXiv:2603.10992 (replaced) [pdf, html, other]
-
Title: Bayesian Optimization with Gaussian Processes to Accelerate Stationary Point Searches
Rohit Goswami (1) ((1) Institute IMX and Lab-COSMO, École polytechnique fédérale de Lausanne (EPFL), Lausanne, Switzerland)
Comments: 65 pages, 24 figures (main). Invited article for ACS Physical Chemistry Au
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG); Chemical Physics (physics.chem-ph); Computational Physics (physics.comp-ph)
Building local surrogates to accelerate stationary point searches on potential energy surfaces spans decades of effort. Done correctly, surrogates can reduce the number of expensive electronic structure evaluations by roughly an order of magnitude while preserving the accuracy of the underlying theory, with the gain depending on oracle cost, search distance, and the availability of analytical forces. We present a unified Bayesian optimization view of minimization, single-point saddle searches, and double-ended path searches: all three share one six-step surrogate loop and differ only in the inner optimization target and the acquisition criterion. The framework uses Gaussian process regression with derivative observations, inverse-distance kernels, and active learning, and we develop optional extensions for production use, including farthest-point sampling with the Earth Mover's Distance, MAP regularization, an adaptive trust radius, and random Fourier features for scaling. Accompanying pedagogical Rust code demonstrates that all three applications use the same Bayesian optimization loop, bridging the gap between theoretical formulation and practical execution.
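The shared surrogate loop can be illustrated with a deliberately stripped-down sketch: a zero-mean GP surrogate with an RBF kernel, refit after each oracle call, whose posterior mean is minimized on a grid to pick the next evaluation. This is a generic exploitation-only Bayesian optimization toy on an invented 1D "potential", not the paper's method — the inverse-distance kernels, derivative observations, acquisition criteria, and trust-radius machinery are all omitted:

```python
import numpy as np

def rbf(X1, X2, ell=0.5):
    """Squared-exponential kernel between two 1D point sets."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def oracle(x):
    """Toy tilted double-well standing in for an expensive energy evaluation."""
    return (x ** 2 - 1.0) ** 2 + 0.1 * x

grid = np.linspace(-2.0, 2.0, 401)
X = np.array([-2.0, 0.0, 2.0])            # initial samples
y = oracle(X)
for _ in range(10):
    K = rbf(X, X) + 1e-8 * np.eye(len(X)) # jitter for numerical stability
    alpha = np.linalg.solve(K, y)
    mu = rbf(grid, X) @ alpha             # GP posterior mean on the grid
    x_next = grid[np.argmin(mu)]          # exploit: minimize the surrogate
    X = np.append(X, x_next)
    y = np.append(y, oracle(x_next))

x_best = X[np.argmin(y)]
print(x_best, oracle(x_best))
```

Each loop iteration costs one oracle call plus a cheap surrogate solve, which is the source of the order-of-magnitude savings the abstract quotes when the oracle is an electronic structure calculation.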
- [24] arXiv:2603.18222 (replaced) [pdf, html, other]
-
Title: An HHL-Based Quantum-Classical Solver for the Incompressible Navier-Stokes Equations with Approximate QST
Comments: 15 pages, 11 figures; v2: minor formatting corrections in bibliography; v3: added figure 2 and subsection 2.2.3 detailing the use of QAE to extract the unnormalized vector magnitude
Subjects: Quantum Physics (quant-ph); Computational Physics (physics.comp-ph); Fluid Dynamics (physics.flu-dyn)
In computational fluid dynamics (CFD), the numerical integration of the Navier-Stokes equations is frequently constrained by the Poisson equation that determines the pressure. Discretizing this equation typically yields a system of linear algebraic equations, a step that represents the primary computational bottleneck. Quantum linear system algorithms such as Harrow-Hassidim-Lloyd (HHL) offer the potential of exponential speedups for solving sparse linear systems, such as those arising from the discretized Poisson equation. In this work, we successfully couple HHL to a discretized formulation of the incompressible Navier-Stokes equations and demonstrate accurate simulations of both the lid-driven cavity flow, as a fully integrated benchmark problem, and the Taylor-Green vortex. To address the readout limitation, we employ a recently proposed quantum state tomography (QST) approach based on Chebyshev polynomials and Quantum Amplitude Estimation (QAE), which enables approximate statevector extraction without full state reconstruction. Together, these results clarify the algorithmic structure required for quantum CFD, explicitly confront the measurement bottleneck, and establish benchmark problems for future quantum fluid simulations. We implement the solver in IBM's Qiskit framework and validate the hybrid quantum-classical simulation against standard classical numerical methods. Our results demonstrate that the hybrid solver successfully captures the global vortex dynamics of the lid-driven cavity problem and the Taylor-Green vortex, offering a robust pathway for integrating quantum subroutines into more practical, higher-Reynolds-number CFD workflows.
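For orientation, the classical bottleneck that HHL would replace is an ordinary sparse solve of the pressure-Poisson system $A p = b$. A minimal 1D Dirichlet sketch with a mock right-hand side (illustrative data only, not the paper's discretization):

```python
import numpy as np

# 1D finite-difference Dirichlet Laplacian: the sparse, well-structured
# system A p = b whose solve dominates each incompressible time step.
n = 50
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2
b = np.ones(n)                      # mock velocity-divergence source term
p = np.linalg.solve(A, b)           # the step an HHL subroutine would target
print(np.linalg.norm(A @ p - b))    # residual of the classical solve
```

The sparsity and bounded condition number of such discretized Poisson operators are exactly the structural properties that make them natural candidates for quantum linear system algorithms.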
- [25] arXiv:2604.13196 (replaced) [pdf, html, other]
-
Title: Deferred Cyclotomic Representation for Stable and Exact Evaluation of q-Hypergeometric Series
Comments: fixed typos, add refs. Implementation available at this https URL
Subjects: Mathematical Physics (math-ph); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th); Numerical Analysis (math.NA); Computational Physics (physics.comp-ph)
We introduce a cyclotomic representation for finite $q$-hypergeometric series and $q$-deformed amplitudes that separates algebraic structure from evaluation. By expressing each summand in a sparse exponent basis over irreducible cyclotomic polynomials, all products and ratios of quantum factorials reduce to integer vector arithmetic. This ensures that cancellations between numerator and denominator are resolved exactly prior to any evaluation. This formulation yields the deferred cyclotomic representation (DCR), a parameter-independent combinatorial object of the series, from which evaluation in any target field is realized as a ring homomorphism.
For quantum recoupling coefficients, we demonstrate that this framework achieves linear memory scaling in the compilation phase, eliminates intermediate expression swell in exact arithmetic, and substantially extends the range of reliable double-precision computation by reducing cancellation-induced error amplification. Beyond its computational advantages, the DCR provides a unified perspective on $q$-deformed amplitudes: structural properties such as admissibility at roots of unity and the classical limit emerge as intrinsic features of a single underlying combinatorial object.
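The core mechanism — quantum factorials stored as integer exponent vectors over cyclotomic polynomials $\Phi_d$, with cancellation performed on the exponents before any numeric evaluation — can be sketched for the Gaussian binomial $\binom{5}{2}_q$, using the identity $[m]_q = \prod_{d \mid m,\, d>1} \Phi_d(q)$. This toy code is an illustration of the idea, not the paper's implementation:

```python
from collections import Counter
from functools import lru_cache

def qfact_exponents(n):
    """Exponent vector of the quantum factorial [n]_q! over the Phi_d."""
    c = Counter()
    for m in range(2, n + 1):
        for d in range(2, m + 1):
            if m % d == 0:          # [m]_q = prod over divisors d > 1 of Phi_d
                c[d] += 1
    return c

@lru_cache(maxsize=None)
def phi(d, q):
    """Phi_d(q) at integer q, via q^d - 1 = prod over divisors e of d of Phi_e(q)."""
    val = q ** d - 1
    for e in range(1, d):
        if d % e == 0:
            val //= phi(e, q)       # exact integer division
    return val

# Gaussian binomial [5 choose 2]_q: subtract exponent vectors (exact
# cancellation), then evaluate the surviving factors at q = 2.
exps = qfact_exponents(5)
exps.subtract(qfact_exponents(2))
exps.subtract(qfact_exponents(3))
value = 1
for d, e in exps.items():
    value *= phi(d, 2) ** e
print(value)   # 155 = Phi_4(2) * Phi_5(2) = 5 * 31
```

All arithmetic before the final evaluation is on small integer vectors, which is how the representation avoids both expression swell and floating-point cancellation.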