-
Observation of disorder-induced superfluidity
Authors:
Nicole Ticea,
Elias Portoles,
Eliott Rosenberg,
Alexander Schuckert,
Aaron Szasz,
Bryce Kobrin,
Nicolas Pomata,
Pranjal Praneel,
Connie Miao,
Shashwat Kumar,
Ella Crane,
Ilya Drozdov,
Yuri Lensky,
Sofia Gonzalez-Garcia,
Thomas Kiely,
Dmitry Abanin,
Amira Abbas,
Rajeev Acharya,
Laleh Aghababaie Beni,
Georg Aigeldinger,
Ross Alcaraz,
Sayra Alcaraz,
Markus Ansmann,
Frank Arute,
Kunal Arya
, et al. (277 additional authors not shown)
Abstract:
The emergence of states with long-range correlations in a disordered landscape is rare, as disorder typically suppresses the particle mobility required for long-range coherence. But when more than two energy levels are available per site, disorder can induce resonances that locally enhance mobility. Here we explore phases arising from the interplay between disorder, kinetic energy, and interactions on a superconducting processor with qutrit readout and control. Compressibility measurements distinguish an incompressible Mott insulator from surrounding compressible phases and reveal signatures of glassiness, reflected in non-ergodic behavior. Spatially-resolved two-point correlator measurements identify regions of the phase diagram with a non-vanishing condensate fraction. We also visualize the spectrum by measuring the dynamical structure factor. A linearly-dispersing phonon mode materializes in the superfluid, appearing even when disorder is introduced to the clean Mott insulator. Our results provide strong experimental evidence for disorder-induced superfluidity.
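The condensate fraction referenced above is conventionally extracted from a measured two-point correlator via the Penrose-Onsager criterion: the largest eigenvalue of the single-particle correlation matrix, divided by the particle number. A minimal sketch, with a synthetic correlator standing in for measured data:

```python
import numpy as np

# Penrose-Onsager criterion: the condensate fraction is the largest
# eigenvalue of the single-particle correlation matrix G_ij = <b_i^dag b_j>,
# divided by the total particle number (the trace of G).
# The correlator below is synthetic, standing in for measured data.
rng = np.random.default_rng(0)
L = 16                                       # number of sites (hypothetical)
phi = rng.normal(size=L) + 1j * rng.normal(size=L)
phi /= np.linalg.norm(phi)                   # a "condensate" orbital
n_total = 8.0                                # total particle number
f0 = 0.6                                     # injected condensate fraction
G = n_total * (f0 * np.outer(phi, phi.conj())
               + (1 - f0) * np.eye(L) / L)   # Hermitian correlator matrix

eigvals = np.linalg.eigvalsh(G)              # ascending real eigenvalues
condensate_fraction = eigvals[-1] / np.trace(G).real
# recovers f0 up to a 1/L correction from the depleted background
print(f"condensate fraction ~ {condensate_fraction:.3f}")
```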
Submitted 24 December, 2025;
originally announced December 2025.
-
TICON: A Slide-Level Tile Contextualizer for Histopathology Representation Learning
Authors:
Varun Belagali,
Saarthak Kapse,
Pierre Marza,
Srijan Das,
Zilinghan Li,
Sofiène Boutaj,
Pushpak Pati,
Srikar Yellapragada,
Tarak Nath Nandi,
Ravi K Madduri,
Joel Saltz,
Prateek Prasanna,
Stergios Christodoulidis,
Maria Vakalopoulou,
Dimitris Samaras
Abstract:
The interpretation of small tiles in large whole slide images (WSI) often needs a larger image context. We introduce TICON, a transformer-based tile representation contextualizer that produces rich, contextualized embeddings for "any" application in computational pathology. Standard tile encoder-based pipelines, which extract embeddings of tiles stripped from their context, fail to model the rich slide-level information essential for both local and global tasks. Furthermore, different tile-encoders excel at different downstream tasks. Therefore, a unified model is needed to contextualize embeddings derived from "any" tile-level foundation model. TICON addresses this need with a single, shared encoder, pretrained using a masked modeling objective to simultaneously unify and contextualize representations from diverse tile-level pathology foundation models. Our experiments demonstrate that TICON-contextualized embeddings significantly improve performance across many different tasks, establishing new state-of-the-art results on tile-level benchmarks (i.e., HEST-Bench, THUNDER, CATCH) and slide-level benchmarks (i.e., Patho-Bench). Finally, we pretrain an aggregator on TICON to form a slide-level foundation model, using only 11K WSIs, outperforming SoTA slide-level foundation models pretrained with up to 350K WSIs.
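A minimal sketch of the masked-modeling idea described above: precomputed tile embeddings are projected into a shared space, a fraction are masked, and a transformer encoder reconstructs them. All dimensions, the mask ratio, and the loss are assumptions for illustration, not TICON's actual configuration (positional information is also omitted):

```python
import torch
import torch.nn as nn

# Sketch: contextualize precomputed tile embeddings from a slide with a
# shared transformer encoder, trained by masked-embedding reconstruction.
# Dimensions, mask ratio, and loss are illustrative assumptions.
d_in, d_model, n_tiles = 768, 512, 256     # tile-encoder dim, model dim, tiles/slide
proj = nn.Linear(d_in, d_model)            # unify diverse tile encoders (hypothetical)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=4)
head = nn.Linear(d_model, d_in)            # reconstruct the original embedding
mask_token = nn.Parameter(torch.zeros(1, 1, d_model))

tiles = torch.randn(2, n_tiles, d_in)      # batch of 2 slides (random stand-in)
x = proj(tiles)
mask = torch.rand(2, n_tiles) < 0.3        # mask 30% of tiles
x = torch.where(mask.unsqueeze(-1), mask_token.expand_as(x), x)
recon = head(encoder(x))                   # contextualized, then reconstructed
loss = ((recon - tiles)[mask] ** 2).mean() # loss only on masked positions
loss.backward()
```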
Submitted 25 December, 2025; v1 submitted 24 December, 2025;
originally announced December 2025.
-
Device-Independent Anonymous Communication in Quantum Networks
Authors:
Srijani Das,
Manasi Patra,
Tuhin Paul,
Anish Majumdar,
Ramij Rahaman
Abstract:
Anonymity is a fundamental cryptographic primitive that hides the identities of both senders and receivers during message transmission over a network. Classical protocols cannot provide information-theoretic security for such a task, and existing quantum approaches typically depend on classical subroutines and multiple private channels, thereby weakening their security in fully adversarial settings. In this work, we introduce the first fully quantum protocol for anonymous communication in realistic quantum networks with a device-independent security proof.
Submitted 24 December, 2025;
originally announced December 2025.
-
ESCHER: Efficient and Scalable Hypergraph Evolution Representation with Application to Triad Counting
Authors:
S. M. Shovan,
Arindam Khanda,
Sanjukta Bhowmick,
Sajal K. Das
Abstract:
Higher-order interactions beyond pairwise relationships in large complex networks are often modeled as hypergraphs. Analyzing hypergraph properties such as triad counts is essential, as hypergraphs can reveal intricate group interaction patterns that conventional graphs fail to capture. In real-world scenarios, these networks are often large and dynamic, introducing significant computational challenges. Due to the absence of specialized software packages and data structures, the analysis of large dynamic hypergraphs remains largely unexplored. Motivated by this gap, we propose ESCHER, a GPU-centric parallel data structure for Efficient and Scalable Hypergraph Evolution Representation, designed to manage large-scale hypergraph dynamics efficiently. We also design a hypergraph triad-count update framework that minimizes redundant computation while fully leveraging the capabilities of ESCHER for dynamic operations. We validate the efficacy of our approach across multiple categories of hypergraph triad counting, including hyperedge-based, incident-vertex-based, and temporal triads. Empirical results on both large real-world and synthetic datasets demonstrate that our proposed method outperforms existing state-of-the-art methods, achieving speedups of up to 104.5x, 473.7x, and 112.5x for hyperedge-based, incident-vertex-based, and temporal triad types, respectively.
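For orientation, a naive reference implementation of one plausible reading of a hyperedge-based triad (three hyperedges that pairwise share vertices); ESCHER's exact triad definitions and its GPU-parallel data structure are not reproduced here:

```python
from itertools import combinations

# Naive baseline: count triples of hyperedges that pairwise intersect.
# This is one plausible reading of a "hyperedge-based triad"; ESCHER's
# exact definitions and its GPU-parallel update scheme are not shown.
hypergraph = [{1, 2, 3}, {3, 4}, {2, 3, 5}, {6, 7}]  # toy hyperedges

def count_triads(edges):
    count = 0
    for a, b, c in combinations(edges, 3):
        if a & b and b & c and a & c:
            count += 1
    return count

print(count_triads(hypergraph))  # -> 1: ({1,2,3}, {3,4}, {2,3,5})

# A dynamic update (hyperedge insertion) only needs to examine triples
# that include the new edge; avoiding recomputation over all triples is
# the kind of redundancy the update framework eliminates at scale.
def count_new_triads(edges, new_edge):
    return sum(1 for a, b in combinations(edges, 2)
               if a & new_edge and b & new_edge and a & b)
```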
Submitted 26 December, 2025; v1 submitted 24 December, 2025;
originally announced December 2025.
-
A single-cut discontinuity for cosmological correlators from unitarity and analyticity
Authors:
Shibam Das,
Debanjan Karan,
Babli Khatun,
Nilay Kundu
Abstract:
We derive discontinuity relations, also known as cutting rules, and explore the analytic properties of cosmological correlators, fundamental observables of the primordial universe. Our emphasis is on how these relations arise from unitarity and Hermitian analyticity in interacting quantum field theories on de Sitter space-time. Instead of analyzing wave-function coefficients, we apply these relations directly to cosmological correlators. By studying conformally coupled and massless scalar fields with $\phi^n$ self-interactions, we demonstrate that the discontinuity of a cosmological correlator can be expressed as a sum of products of lower-point discontinuities, stemming from a single cut of one internal line in the corresponding tree-level exchange Witten diagram. Notably, beyond lower-point correlators, the decomposition of the discontinuities of cosmological correlators includes contributions from auxiliary elements that consist of both the real and imaginary parts of the lower-point wave-function coefficients, which have not been reported in the existing literature. Interestingly, depending on whether $n$ is even or odd in a $\phi^n$ interaction, these different lower-point discontinuities contribute as the leading or sub-leading piece in the late-time limit to the discontinuity relations. Additionally, our single-cut discontinuity relation leads to a decomposition rule for the residue of the cosmological correlators at partial energy singularities, incorporating contributions from these auxiliary objects. Through explicit calculations in several models, we confirm that our discontinuity relations are consistent with results from the in-in formalism. While primarily developed using tree-level exchanges with polynomial interactions, we also demonstrate that our framework can be extended to include loop corrections and cases with derivative interactions.
Submitted 23 December, 2025;
originally announced December 2025.
-
Optoelectronically Directed Self-Assembly of Active and Passive Particles into Programmable and Reconfigurable Colloidal Structures
Authors:
Donggang Cao,
Sankha Shuvra Das,
Gilad Yossifon
Abstract:
Controlled assembly of active-passive colloidal mixtures offers a route to reconfigurable microscale machines, but their self-assembly pathways remain poorly understood. We study the directed assembly of metallo-dielectric Janus particles (JPs) and passive polystyrene (PS) beads using optoelectrically reconfigurable AC-field patterning, which allows precise control over particle composition and binding sequence. Through experiments, analytical modeling, and simulations, we show that dipolar interactions drive robust JP-JP and JP-PS dimer formation with frequency-dependent stability. At intermediate and high frequencies, JP-PS binding is strongly attractive, whereas at low frequencies it becomes effectively repulsive due to electrical double-layer screening and electrohydrodynamic flows at the metallic hemisphere. In multi-particle systems, PS beads act as cooperative hubs that hierarchically recruit JPs, yielding higher-order hybrid structures. We identify structural isomers: for example, 3JP + 1PS clusters can form chain-like or triangular configurations depending on assembly sequence. Simulations confirm both as equilibrium states, with the triangular isomer slightly more stable. Similar polymorphism appears in larger clusters (4JPs). Overall, we establish a framework for controlled active-passive colloidal assembly, showing how frequency-tunable interactions and structural polymorphism enable the design of reconfigurable colloidal machines for applications in microrobotics, targeted delivery, and adaptive materials.
Submitted 23 December, 2025;
originally announced December 2025.
-
Image Denoising via Quantum Reservoir Computing
Authors:
Soumyadip Das,
Luke Antoncich,
Jingbo B. Wang
Abstract:
Quantum Reservoir Computing (QRC) leverages the natural dynamics of quantum systems for information processing, without requiring a fault-tolerant quantum computer. In this work, we apply QRC within a hybrid quantum-classical framework for image denoising. The quantum reservoir is implemented using a Rydberg atom array, while a classical neural network serves as the readout layer. To prepare the input, images are first compressed using Principal Component Analysis (PCA), reducing their dimensionality to match the size of the atom array. Each feature vector is encoded into local detuning parameters of a time-dependent Hamiltonian governing the Rydberg system. As the system evolves, it generates nonlinear embeddings through the measurement of observables across multiple time steps. These temporal embeddings capture complex correlations, which are fed into a classical neural network to reconstruct the denoised images. To evaluate performance, we compare this QRC-assisted model against a baseline architecture consisting of PCA followed by a dense neural network, trained under identical conditions. Our results show that the QRC-based approach achieves improved image sharpness and similar structural recovery compared to the PCA-based model. We demonstrate the practical viability of this framework through experiments on QuEra's Aquila neutral-atom processor, leveraging its programmable atom arrays to physically realize the reservoir dynamics.
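A schematic of the hybrid pipeline as described, with the Rydberg reservoir replaced by a classical random-feature stand-in (running on Aquila is not assumed here); all sizes and the readout are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# Pipeline sketch: PCA-compress noisy images, feed features through a
# nonlinear "reservoir" (a classical random-feature stand-in for the
# Rydberg dynamics), then train a linear readout to reconstruct clean images.
rng = np.random.default_rng(0)
clean = rng.random((200, 64))                    # 200 toy 8x8 images, flattened
noisy = clean + 0.3 * rng.normal(size=clean.shape)

n_atoms = 12                                     # hypothetical atom-array size
pca = PCA(n_components=n_atoms)
z = pca.fit_transform(noisy)                     # features -> "local detunings"

# Stand-in reservoir: random projections at several "time steps", with a
# nonlinearity playing the role of measured observables of the evolved state.
W = [rng.normal(size=(n_atoms, 32)) for _ in range(4)]
embeddings = np.hstack([np.tanh(z @ w) for w in W])

readout = Ridge(alpha=1.0).fit(embeddings, clean)  # classical readout layer
denoised = readout.predict(embeddings)
print("reconstruction MSE:", np.mean((denoised - clean) ** 2))
```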
Submitted 21 December, 2025;
originally announced December 2025.
-
Analysing Skill Predominance in Generalized Fantasy Cricket
Authors:
Supratim Das,
Sarthak Sarkar,
Subhamoy Maitra,
Tridib Mukherjee
Abstract:
In fantasy sports, strategic thinking, not mere luck, often defines who wins and who falls short. As fantasy cricket grows in popularity across India, understanding whether success stems from skill or chance has become both an analytical and regulatory question. This study introduces a new limited-selection contest framework in which participants choose from four expert-designed teams and share prizes based on the highest cumulative score. By combining simulation experiments with real performance data from the 2024 Indian Premier League (IPL), we evaluate whether measurable skill emerges within this structure. Results reveal that strategic and informed team selection consistently outperforms random choice, underscoring a clear skill advantage that persists despite stochastic variability. The analysis quantifies how team composition, inter-team correlation, and participant behaviour jointly influence winning probabilities, highlighting configurations where skill becomes statistically dominant. These findings provide actionable insights for players seeking to maximise returns through strategy and for platform designers aiming to develop fair, transparent, and engaging skill-based gaming ecosystems that balance competition with regulatory compliance.
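A toy Monte Carlo in the spirit of this contest: four candidate teams with correlated scores, and an informed player (who knows the teams' mean scores) pitted against a random chooser. All parameters are illustrative:

```python
import numpy as np

# Toy model of the limited-selection contest: each participant picks one of
# four expert-designed teams, and the pick with the highest realized score
# wins. An informed player knows the teams' mean scores; a random one does not.
rng = np.random.default_rng(1)
means = np.array([100.0, 95.0, 92.0, 90.0])              # hypothetical means
cov = 64.0 * (0.4 * np.ones((4, 4)) + 0.6 * np.eye(4))   # correlated scores

n = 100_000
scores = rng.multivariate_normal(means, cov, size=n)     # (n, 4) match scores
informed = scores[:, np.argmax(means)]                   # best expected team
random_pick = rng.integers(4, size=n)
random_score = scores[np.arange(n), random_pick]
print("P(informed beats random) =", np.mean(informed > random_score))
```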
Submitted 20 December, 2025;
originally announced December 2025.
-
Deep Learning-Based Surrogate Creep Modelling in Inconel 625: A High-Temperature Alloy Study
Authors:
Shubham Das,
Kaushal Singhania,
Amit Sadhu,
Suprabhat Das,
Arghya Nandi
Abstract:
Time-dependent deformation, particularly creep, in high-temperature alloys such as Inconel 625 is a key factor in the long-term reliability of components used in aerospace and energy systems. Although Inconel 625 shows excellent creep resistance, finite-element creep simulations in tools such as ANSYS remain computationally expensive, often requiring tens of minutes for a single 10,000-hour run. This work proposes deep learning-based surrogate models to provide fast and accurate replacements for such simulations. Creep strain data was generated in ANSYS using the Norton law under uniaxial stresses of 50 to 150 MPa and temperatures of 700 to 1000 $^\circ$C, and this temporal dataset was used to train two architectures: a BiLSTM Variational Autoencoder for uncertainty-aware and generative predictions, and a BiLSTM-Transformer hybrid that employs self-attention to capture long-range temporal behavior. Both models act as surrogate predictors, with the BiLSTM-VAE offering probabilistic output and the BiLSTM-Transformer delivering high deterministic accuracy. Performance is evaluated using RMSE, MAE, and $R^2$. Results show that the BiLSTM-VAE provides stable and reliable creep strain forecasts, while the BiLSTM-Transformer achieves strong accuracy across the full time range. Latency tests indicate substantial speedup: while each ANSYS simulation requires 30 to 40 minutes for a given stress-temperature condition, the surrogate models produce predictions within seconds. The proposed framework enables rapid creep assessment for design optimization and structural health monitoring, and provides a scalable solution for high-temperature alloy applications.
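For orientation, a sketch of how Norton-law training curves of this kind can be generated; the constants below are placeholders, not calibrated Inconel 625 values:

```python
import numpy as np

# Norton-type secondary creep: strain rate = A * sigma^n * exp(-Q / (R*T)),
# integrated over time into a creep strain curve. All constants below are
# illustrative placeholders, not calibrated Inconel 625 values.
A, n, Q, R = 1e-8, 5.0, 280e3, 8.314    # A in MPa^-n s^-1 (hypothetical)

def creep_strain(sigma_mpa, temp_c, hours, n_pts=200):
    T = temp_c + 273.15                              # degC -> K
    rate = A * sigma_mpa**n * np.exp(-Q / (R * T))   # steady-state rate, 1/s
    t = np.linspace(0.0, hours * 3600.0, n_pts)      # time in seconds
    return t / 3600.0, rate * t                      # (hours, strain)

for sigma, T in [(50, 700), (150, 1000)]:
    t_h, eps = creep_strain(sigma, T, 10_000)
    print(f"{sigma} MPa, {T} C -> strain at 10,000 h: {eps[-1]:.2e}")
```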
Submitted 19 December, 2025;
originally announced December 2025.
-
Emergence of a hidden-order phase well below the charge density wave transition in a topological Weyl semimetal (TaSe$_4$)$_2$I
Authors:
Sk Kalimuddin,
Sudipta Chatterjee,
Arnab Bera,
Satyabrata Bera,
Deep Singha Roy,
Soham Das,
Tuhin Debnath,
Ashis K. Nandy,
Shishir K. Pandey,
Mintu Mondal
Abstract:
The emergence of a charge density wave (CDW) in a Weyl semimetal -- a correlated topological phase -- is exceptionally rare in condensed matter systems. In this context, the quasi-one-dimensional type-III Weyl semimetal (TaSe$_4$)$_2$I undergoes a CDW transition at $T_{\mathrm{CDW}} \approx 263$~K, providing an exceptional platform to investigate correlated topological CDW states. Here, we uncover an additional hidden-order phase transition at $T^* \sim 100$~K, well below the CDW onset, using low-frequency resistance noise spectroscopy, electrical transport, and thermoelectric measurements. This transition is characterized by a sharp enhancement in the noise exponent ($\alpha$) and the variance of resistance fluctuations. Analysis of higher-order statistics of resistance fluctuations reveals the correlated dynamics underlying the transition. A pronounced anomaly in the Seebeck coefficient near $T^*$ further suggests a Fermi surface reconstruction. First-principles calculations reveal a structural distortion from the high-symmetry $I422$ phase to a low-symmetry $C2$ phase, via an intermediate $I4$ symmetry. This leads to renormalization of the electronic structure near the Fermi level and the opening of a bandgap in the hidden-order phase. These findings demonstrate a previously unidentified correlated phase transition in the topological CDW-Weyl semimetal (TaSe$_4$)$_2$I, enriching the phase diagram of this material and establishing it as an ideal platform for studying intertwined electronic and structural orders.
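A sketch of how a noise exponent of this kind is typically extracted: estimate the power spectral density of the resistance time series and fit $S(f) \propto 1/f^\alpha$ on a log-log scale. The time series below is synthetic:

```python
import numpy as np
from scipy.signal import welch

# Extract a noise exponent alpha from a resistance time series by fitting
# the power spectral density to S(f) ~ 1/f^alpha on a log-log scale.
# The series is synthetic 1/f-like noise standing in for measured R(t).
rng = np.random.default_rng(0)
fs, n = 1000.0, 2**16
white = rng.normal(size=n)
f_fft = np.fft.rfftfreq(n, d=1 / fs)
spec = np.fft.rfft(white)
spec[1:] /= np.sqrt(f_fft[1:])            # shape white noise to ~1/f
r = np.fft.irfft(spec, n)                 # synthetic resistance fluctuations

f, Sxx = welch(r, fs=fs, nperseg=4096)
band = (f > 0.1) & (f < 100)              # fit band, away from DC and Nyquist
alpha = -np.polyfit(np.log(f[band]), np.log(Sxx[band]), 1)[0]
print(f"noise exponent alpha ~ {alpha:.2f}")   # ~1 for 1/f noise
```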
Submitted 19 December, 2025;
originally announced December 2025.
-
Molecular Quantum Computations on a Protein
Authors:
Akhil Shajan,
Danil Kaliakin,
Fangchun Liang,
Thaddeus Pellegrini,
Hakan Doga,
Subhamoy Bhowmik,
Susanta Das,
Antonio Mezzacapo,
Mario Motta,
Kenneth M. Merz Jr
Abstract:
This work presents the implementation of a fragment-based, quantum-centric supercomputing workflow for computing molecular electronic structure using quantum hardware. The workflow is applied to predict the relative energies of two conformers of the 300-atom Trp-cage miniprotein. The methodology employs wave function-based embedding (EWF) as the underlying fragmentation framework, in which all atoms in the system are explicitly included in the configuration interaction (CI) treatment. CI calculations for individual fragments are performed using either sample-based quantum diagonalization (SQD) for challenging fragments or full configuration interaction (FCI) for trivial fragments. To assess the accuracy of SQD for fragment CI calculations, EWF-(FCI,SQD) results are compared against EWF-MP2 and EWF-CCSD benchmarks. Overall, the results demonstrate that large-scale electronic CI simulations of protein systems containing hundreds or even thousands of atoms can be realized through the combined use of quantum and classical computing resources.
Submitted 18 December, 2025;
originally announced December 2025.
-
A novel violation of the equivalence principle
Authors:
Saurya Das,
Mitja Fridman,
Sourav Sur
Abstract:
It is generally assumed that any discrepancy between an object's inertial and gravitational masses, leading to a violation of the equivalence principle, arises from the nature of its internal constituents and their interactions. We show here that the difference can instead be a function of the distance of the object from a gravitating body, suggest ways of testing this, and illustrate a covariant framework for the same side by side.
Submitted 18 December, 2025;
originally announced December 2025.
-
SoK: Reviewing Two Decades of Security, Privacy, Accessibility, and Usability Studies on Internet of Things for Older Adults
Authors:
Suleiman Saka,
Sanchari Das
Abstract:
The Internet of Things (IoT) has the potential to enhance older adults' independence and quality of life, but it also exposes them to security, privacy, accessibility, and usability (SPAU) risks. We conducted a systematic review of 44 peer-reviewed studies published between 2004 and 2024 using a five-phase screening pipeline. From each study, we extracted data on study design, IoT type, and SPAU measures, and identified research gaps. We introduce the SPAU-IoT Framework, which comprises 27 criteria across four dimensions: security (e.g., resilience to cyber threats, secure authentication, encrypted communication, secure-by-default settings, and guardianship features), privacy (e.g., data minimization, explicit consent, and privacy-preserving analytics), accessibility (e.g., compliance with ADA/WCAG standards and assistive-technology compatibility), and usability (e.g., guided interaction, integrated assistance, and progressive learning). Applying this framework revealed that more than 70% of studies implemented authentication and encryption mechanisms, whereas fewer than 50% addressed accessibility or usability concerns. We further developed a threat model that maps IoT assets, networks, and backend servers to exploit vectors such as phishing, caregiver exploitation, and weak-password attacks, explicitly accounting for age-related vulnerabilities including cognitive decline and sensory impairment. Our results expose a systemic lack of integrated SPAU approaches in existing IoT research and translate these gaps into actionable, standards-aligned design guidelines for IoT systems designed for older adults.
Submitted 18 December, 2025;
originally announced December 2025.
-
Privacy Discourse and Emotional Dynamics in Mental Health Information Interaction on Reddit
Authors:
Jai Kruthunz Naveen Kumar,
Aishwarya Umeshkumar Surani,
Harkirat Singh,
Sanchari Das
Abstract:
Reddit is a major venue for mental-health information interaction and peer support, where privacy concerns increasingly surface in user discourse. Thus, we analyze privacy-related discussions across 14 mental-health and regulatory subreddits, comprising 10,119 posts and 65,385 comments collected with a custom web scraper. Using lexicon-based sentiment analysis, we quantify emotional alignment between communities via cosine similarity of sentiment distributions, observing high similarity for Bipolar and ADHD (0.877), Anxiety and Depression (0.849), and MentalHealthSupport and MentalIllness (0.989) subreddits. We also construct keyword dictionaries to tag privacy-related themes (e.g., HIPAA, GDPR) and perform temporal analysis from 2020 to 2025, finding a 50% increase in privacy discourse with intermittent regulatory spikes. A chi-square test of independence across subreddit domains indicates significant distributional differences. The results characterize how privacy-oriented discussion co-varies with user sentiment in online mental-health communities.
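The similarity figures quoted above are cosine similarities between per-community sentiment distributions; a minimal sketch with made-up proportions:

```python
import numpy as np

# Cosine similarity between sentiment distributions of two communities.
# Vectors are illustrative (e.g., proportions of negative/neutral/positive
# posts), not the study's actual data.
def cosine(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return p @ q / (np.linalg.norm(p) * np.linalg.norm(q))

bipolar = [0.42, 0.31, 0.27]   # hypothetical sentiment proportions
adhd    = [0.38, 0.33, 0.29]
print(f"cosine similarity: {cosine(bipolar, adhd):.3f}")
```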
Submitted 17 December, 2025;
originally announced December 2025.
-
COBRA: Catastrophic Bit-flip Reliability Analysis of State-Space Models
Authors:
Sanjay Das,
Swastik Bhattacharya,
Shamik Kundu,
Arnab Raha,
Souvik Kundu,
Kanad Basu
Abstract:
State-space models (SSMs), exemplified by the Mamba architecture, have recently emerged as state-of-the-art sequence-modeling frameworks, offering linear-time scalability together with strong performance in long-context settings. Owing to their unique combination of efficiency, scalability, and expressive capacity, SSMs have become compelling alternatives to transformer-based models, which suffer from the quadratic computational and memory costs of attention mechanisms. As SSMs are increasingly deployed in real-world applications, it is critical to assess their susceptibility to both software- and hardware-level threats to ensure secure and reliable operation. Among such threats, hardware-induced bit-flip attacks (BFAs) pose a particularly severe risk by corrupting model parameters through memory faults, thereby undermining model accuracy and functional integrity. To investigate this vulnerability, we introduce COBRA, the first BFA framework specifically designed to target Mamba-based architectures. Through experiments on the Mamba-1.4b model with the LAMBADA benchmark, a cloze-style word-prediction task, we demonstrate that flipping merely a single critical bit can catastrophically reduce accuracy from 74.64% to 0% and increase perplexity from 18.94 to $3.75 \times 10^6$. These results demonstrate the pronounced fragility of SSMs to adversarial perturbations.
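To make the threat model concrete: a single flip of a high-order exponent bit in a float32 parameter changes its magnitude by dozens of orders of magnitude, the failure mode BFAs exploit. A standalone illustration (not the paper's attack code):

```python
import struct

# Flip one bit of a float32 and observe the damage. Flipping a high
# exponent bit (here bit 30) scales the value by an enormous factor,
# the mechanism that lets a single fault corrupt a critical weight.
def flip_bit(x: float, bit: int) -> float:
    (as_int,) = struct.unpack("<I", struct.pack("<f", x))
    return struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))[0]

w = 0.0372                      # a typical small weight value
print(flip_bit(w, 30))          # exponent bit -> astronomically large
print(flip_bit(w, 0))           # mantissa LSB -> nearly unchanged
```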
Submitted 21 December, 2025; v1 submitted 14 December, 2025;
originally announced December 2025.
-
Expanding stellar horizons with polarized light
Authors:
J. Vandersnickt,
R. Ochoa Armenta,
V. Vanlaer,
A. David-Uraz,
C. Aerts,
S. B. Das,
J. -C. Bouret,
D. M. Bowman,
L. Bugnet,
V. Khalack,
J. Labadie-Bartz,
S. Mathis,
Y. Nazé,
C. Neiner,
P. Petit,
V. Petit,
K. Thomson-Paressant,
T. Van Doorsselaere,
M. Vanrespaille
Abstract:
The polarization of light is a critically under-utilized, rich source of information in astronomy. For stars in particular, surface magnetism produces polarization that can be detected and measured with spectro-polarimetry. Many questions about these surface fields remain unanswered due to a lack of dedicated instruments capable of probing weak and strong surface magnetic fields for the entire mass range of stars, from M-dwarfs (and even substellar objects) to massive O-type stars at different evolutionary stages and metallicities. These questions range from the origin of these fields to their true incidence rate throughout the stellar population and the dependence on metallicity. Magnetic fields, although currently often excluded from stellar evolution models, play an important role in stellar evolution. Connecting the surface fields to internal fields through asteroseismology will instigate a new era of understanding stellar evolution and the transport of angular momentum and chemical elements throughout stellar interiors, also impacting our understanding of star-planet interactions and stellar remnants. Polarimetry is also an under-utilized tool to observationally constrain the mode identification of nonradial oscillations, which lies at the basis of accurate asteroseismic parameter estimation at the percentage level for stellar radii, masses, ages, internal rotation, and magnetic field strengths. Combining strong constraints on mode identification and surface magnetic properties through the acquisition of time-resolved, high-resolution and high-signal-to-noise (S/N) spectro-polarimetry and spectroscopy promises to bring leaps forward in our understanding of stellar structure, particularly when combined with long-term space photometric data from past, current, and future missions.
Submitted 17 December, 2025;
originally announced December 2025.
-
Pulsed single-photon spectroscopy of an emitter with vibrational coupling
Authors:
Sourav Das,
Aiman Khan,
Elnaz Darsheshdar,
Francesco Albarelli,
Animesh Datta
Abstract:
We analytically derive the quantum state of a single-photon pulse scattered from a single quantum two-level emitter interacting with a vibrational bath. This solution for the quadripartite system enables an information-theoretic characterization of vibrational effects in quantum light spectroscopy. We show that vibration-induced dephasing reduces the quantum Fisher information (QFI) for estimating the emitter's linewidth, largely reflecting the Franck-Condon suppression of light-matter coupling. Comparing time- and frequency-resolved photodetection, we find the latter to be more informative in estimating the emitter's linewidth for stronger vibrational coupling.
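For reference, the standard pure-state quantum Fisher information underlying such estimation bounds (the paper's analysis of the scattered state generalizes this through the mixed-state QFI):

```latex
% Standard pure-state QFI for a parameter \Gamma (e.g., the emitter linewidth):
F_Q(\Gamma) \;=\; 4\left( \langle \partial_\Gamma \psi | \partial_\Gamma \psi \rangle
  \;-\; \left| \langle \psi | \partial_\Gamma \psi \rangle \right|^2 \right),
\qquad \mathrm{Var}(\hat{\Gamma}) \;\ge\; \frac{1}{M\, F_Q(\Gamma)}
% (quantum Cramér-Rao bound for M independent repetitions)
```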
Submitted 16 December, 2025;
originally announced December 2025.
-
A Graph-Based Forensic Framework for Inferring Hardware Noise of Cloud Quantum Backend
Authors:
Subrata Das,
Archisman Ghosh,
Swaroop Ghosh
Abstract:
Cloud quantum platforms give users access to many backends with different qubit technologies, coupling layouts, and noise levels. The execution of a circuit, however, depends on internal allocation and routing policies that are not observable to the user. A provider may redirect jobs to more error-prone regions to conserve resources, balance load, or for other opaque reasons, causing degradation in fidelity while still presenting stale or averaged calibration data. This lack of transparency creates a security gap: users cannot verify whether their circuits were executed on the hardware for which they were charged. Forensic methods that infer backend behavior from user-visible artifacts are therefore becoming essential. In this work, we introduce a Graph Neural Network (GNN)-based forensic framework that predicts per-qubit and per-qubit-link error rates of an unseen backend using only topology information and aggregated features extracted from transpiled circuits. We construct a dataset from several IBM 27-qubit devices, merge static calibration features with dynamic transpilation features, and train separate GNN regressors for one- and two-qubit errors. At inference time, the model operates without access to calibration data from the target backend and reconstructs a complete error map from the features available to the user. Our results on the target backend show accurate recovery of backend error rates, with an average mismatch of approximately 22% for single-qubit errors and 18% for qubit-link errors. The model also exhibits strong ranking agreement, with the ordering induced by predicted error values closely matching that of the actual calibration errors, as reflected by high Spearman correlation. The framework consistently identifies weak links and high-noise qubits and remains robust under realistic temporal noise drift.
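A minimal sketch of the idea: a graph network over the device coupling map that regresses per-qubit and per-coupler error rates from node features. This one-layer message-passing model is an assumption for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Sketch: regress per-qubit and per-coupler error rates from a device
# coupling graph plus node features (e.g., aggregated transpilation stats).
# One normalized message-passing layer; the paper's GNN is more elaborate.
n_q, d = 27, 8
A = torch.zeros(n_q, n_q)
edges = [(i, i + 1) for i in range(n_q - 1)]     # toy line topology
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A_hat = A + torch.eye(n_q)
deg_inv = torch.diag(A_hat.sum(1).rsqrt())
A_norm = deg_inv @ A_hat @ deg_inv               # symmetric normalization

X = torch.randn(n_q, d)                          # stand-in node features
W1 = nn.Linear(d, 16)
node_head = nn.Linear(16, 1)                     # per-qubit error rate
edge_head = nn.Linear(32, 1)                     # per-coupler error rate

H = torch.relu(W1(A_norm @ X))                   # message passing
q_err = torch.sigmoid(node_head(H)).squeeze(-1)  # rates in (0, 1)
e_in = torch.cat([H[[i for i, _ in edges]], H[[j for _, j in edges]]], dim=-1)
e_err = torch.sigmoid(edge_head(e_in)).squeeze(-1)
print(q_err.shape, e_err.shape)                  # (27,), (26,)
```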
Submitted 16 December, 2025;
originally announced December 2025.
-
Magic state cultivation on a superconducting quantum processor
Authors:
Emma Rosenfeld,
Craig Gidney,
Gabrielle Roberts,
Alexis Morvan,
Nathan Lacroix,
Dvir Kafri,
Jeffrey Marshall,
Ming Li,
Volodymyr Sivak,
Dmitry Abanin,
Amira Abbas,
Rajeev Acharya,
Laleh Aghababaie Beni,
Georg Aigeldinger,
Ross Alcaraz,
Sayra Alcaraz,
Trond I. Andersen,
Markus Ansmann,
Frank Arute,
Kunal Arya,
Walt Askew,
Nikita Astrakhantsev,
Juan Atalaya,
Ryan Babbush,
Brian Ballard
, et al. (270 additional authors not shown)
Abstract:
Fault-tolerant quantum computing requires a universal gate set, but the necessary non-Clifford gates represent a significant resource cost for most quantum error correction architectures. Magic state cultivation offers an efficient alternative to resource-intensive distillation protocols; however, testing the proposal's assumptions represents a challenging departure from quantum memory experiments. We present an experimental study of magic state cultivation on a superconducting quantum processor. We implement cultivation, including code-switching into a surface code, and develop a fault-tolerant measurement protocol to bound the magic state fidelity. Cultivation reduces the error by a factor of 40, with a state fidelity of 0.9999(1) (retaining 8% of attempts). Our results experimentally establish magic state cultivation as a viable solution to one of quantum computing's most significant challenges.
Submitted 15 December, 2025;
originally announced December 2025.
-
DL$^3$M: A Vision-to-Language Framework for Expert-Level Medical Reasoning through Deep Learning and Large Language Models
Authors:
Md. Najib Hasan,
Imran Ahmad,
Sourav Basak Shuvo,
Md. Mahadi Hasan Ankon,
Sunanda Das,
Nazmul Siddique,
Hui Wang
Abstract:
Medical image classifiers detect gastrointestinal diseases well, but they do not explain their decisions. Large language models can generate clinical text, yet they struggle with visual reasoning and often produce unstable or incorrect explanations. This leaves a gap between what a model sees and the type of reasoning a clinician expects. We introduce a framework that links image classification with structured clinical reasoning. A new hybrid model, MobileCoAtNet, is designed for endoscopic images and achieves high accuracy across eight stomach-related classes. Its outputs are then used to drive reasoning by several LLMs. To judge this reasoning, we build two expert-verified benchmarks covering causes, symptoms, treatment, lifestyle, and follow-up care. Thirty-two LLMs are evaluated against these gold standards. Strong classification improves the quality of their explanations, but none of the models reach human-level stability. Even the best LLMs change their reasoning when prompts vary. Our study shows that combining DL with LLMs can produce useful clinical narratives, but current LLMs remain unreliable for high-stakes medical decisions. The framework provides a clearer view of their limits and a path for building safer reasoning systems. The complete source code and datasets used in this study are available at https://github.com/souravbasakshuvo/DL3M.
Submitted 14 December, 2025;
originally announced December 2025.
-
Graph AI generates neurological hypotheses validated in molecular, organoid, and clinical systems
Authors:
Ayush Noori,
Joaquín Polonuer,
Katharina Meyer,
Bogdan Budnik,
Shad Morton,
Xinyuan Wang,
Sumaiya Nazeen,
Yingnan He,
Iñaki Arango,
Lucas Vittor,
Matthew Woodworth,
Richard C. Krolewski,
Michelle M. Li,
Ninning Liu,
Tushar Kamath,
Evan Macosko,
Dylan Ritter,
Jalwa Afroz,
Alexander B. H. Henderson,
Lorenz Studer,
Samuel G. Rodriques,
Andrew White,
Noa Dagan,
David A. Clifton,
George M. Church
, et al. (4 additional authors not shown)
Abstract:
Neurological diseases are the leading global cause of disability, yet most lack disease-modifying treatments. We present PROTON, a heterogeneous graph transformer that generates testable hypotheses across molecular, organoid, and clinical systems. To evaluate PROTON, we apply it to Parkinson's disease (PD), bipolar disorder (BD), and Alzheimer's disease (AD). In PD, PROTON linked genetic risk loci to genes essential for dopaminergic neuron survival and predicted pesticides toxic to patient-derived neurons, including the insecticide endosulfan, which ranked within the top 1.29% of predictions. In silico screens performed by PROTON reproduced six genome-wide $\alpha$-synuclein experiments, including a split-ubiquitin yeast two-hybrid system (normalized enrichment score [NES] = 2.30, FDR-adjusted $p < 1 \times 10^{-4}$), an ascorbate peroxidase proximity labeling assay (NES = 2.16, FDR $< 1 \times 10^{-4}$), and a high-depth targeted exome sequencing study in 496 synucleinopathy patients (NES = 2.13, FDR $< 1 \times 10^{-4}$). In BD, PROTON predicted calcitriol as a candidate drug that reversed proteomic alterations observed in cortical organoids derived from BD patients. In AD, we evaluated PROTON predictions in health records from $n = 610,524$ patients at Mass General Brigham, confirming that five PROTON-predicted drugs were associated with reduced seven-year dementia risk (minimum hazard ratio = 0.63, 95% CI: 0.53-0.75, $p < 1 \times 10^{-7}$). PROTON generated neurological hypotheses that were evaluated across molecular, organoid, and clinical systems, defining a path for AI-driven discovery in neurological disease.
Submitted 13 December, 2025;
originally announced December 2025.
-
Arrival Time -- Classical Parameter or Quantum Operator?
Authors:
MohammadJavad Kazemi,
MohammadHossein Barati,
Ghadir Jafari,
S. Shajidul Haque,
Saurya Das
Abstract:
The question of how to interpret and compute arrival-time distributions in quantum mechanics remains unsettled, reflecting the longstanding tension between treating time as a quantum observable or as a classical parameter. Most previous studies have focused on the single-particle case in the far-field regime, where both approaches yield very similar arrival-time distributions and a semi-classical analysis typically suffices. Recent advances in atom-optics technologies now make it possible to experimentally investigate arrival-time distributions for entangled multi-particle systems in the near-field regime, where a deeper analysis beyond semi-classical approximations is required. Even in the far-field regime, due to quantum non-locality, the semi-classical approximation cannot generally hold in multi-particle systems. Therefore, in this work, two fundamental approaches to the arrival-time problem -- namely, the time-parameter and time-operator approaches -- are extended to multi-particle systems. Using these extensions, we propose a feasible two-particle arrival-time experiment and numerically evaluate the corresponding joint distributions. Our results reveal regimes in which the two approaches yield inequivalent predictions, highlighting conditions under which experiments could shed new light on distinguishing between competing accounts of time in quantum mechanics. Our findings also provide important insights for the development of quantum technologies that use entanglement in the time domain, including non-local temporal interferometry, temporal ghost imaging, and temporal state tomography in multi-particle systems.
Submitted 15 December, 2025;
originally announced December 2025.
-
Hyperbolic Gaussian Blurring Mean Shift: A Statistical Mode-Seeking Framework for Clustering in Curved Spaces
Authors:
Arghya Pratihar,
Arnab Seal,
Swagatam Das,
Inesh Chattopadhyay
Abstract:
Clustering is a fundamental unsupervised learning task for uncovering patterns in data. While Gaussian Blurring Mean Shift (GBMS) has proven effective for identifying arbitrarily shaped clusters in Euclidean space, it struggles with datasets exhibiting hierarchical or tree-like structures. In this work, we introduce HypeGBMS, a novel extension of GBMS to hyperbolic space. Our method replaces Euclidean computations with hyperbolic distances and employs Möbius-weighted means to ensure that all updates remain consistent with the geometry of the space. HypeGBMS effectively captures latent hierarchies while retaining the density-seeking behavior of GBMS. We provide theoretical insights into convergence and computational complexity, along with empirical results that demonstrate improved clustering quality in hierarchical datasets. This work bridges classical mean-shift clustering and hyperbolic representation learning, offering a principled approach to density-based clustering in curved spaces. Extensive experimental evaluations on $11$ real-world datasets demonstrate that HypeGBMS significantly outperforms conventional mean-shift clustering methods in non-Euclidean settings, underscoring its robustness and effectiveness.
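To make the geometric ingredients concrete, a minimal blurring-mean-shift-style sweep on the Poincaré ball, using the standard hyperbolic distance and a tangent-space (log/exp at the origin) weighted average as a simple stand-in for the Möbius-weighted mean:

```python
import numpy as np

# One blurring-mean-shift-style sweep on the Poincare ball. Distances use
# the standard hyperbolic metric; points are averaged in the tangent space
# at the origin (log/exp maps), a simple stand-in for a Mobius-weighted mean.
def poincare_dist(u, v):
    num = 2 * np.sum((u - v) ** 2, axis=-1)
    den = (1 - np.sum(u**2, axis=-1)) * (1 - np.sum(v**2, axis=-1))
    return np.arccosh(1 + num / den)

def log0(x):   # map into the tangent space at the origin
    n = np.linalg.norm(x, axis=-1, keepdims=True)
    return np.arctanh(np.clip(n, 0, 1 - 1e-9)) * x / np.maximum(n, 1e-12)

def exp0(v):   # map back onto the ball
    n = np.linalg.norm(v, axis=-1, keepdims=True)
    return np.tanh(n) * v / np.maximum(n, 1e-12)

rng = np.random.default_rng(0)
X = 0.8 * rng.uniform(-0.5, 0.5, size=(100, 2))   # points inside the ball
h = 0.5                                           # bandwidth (assumed)
for _ in range(10):                               # blurring updates
    D = poincare_dist(X[:, None, :], X[None, :, :])
    W = np.exp(-(D / h) ** 2)                     # Gaussian kernel weights
    W /= W.sum(axis=1, keepdims=True)
    X = exp0(W @ log0(X))                         # weighted mean, mapped back
```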
Submitted 12 December, 2025;
originally announced December 2025.
-
Analytical and DNN-Aided Performance Evaluation of IRS-Assisted THz Communication Systems
Authors:
Soumendu Das,
Nagendra Kumar,
Dharmendra Dixit
Abstract:
This paper investigates the performance of an intelligent reflecting surface (IRS)-assisted terahertz (THz) communication system, where the IRS facilitates connectivity between the source and destination nodes in the absence of a direct transmission path. The source-IRS and IRS-destination links are subject to various challenges, including atmospheric attenuation, asymmetric $\alpha$-$\mu$ distributed small-scale fading, and beam misalignment-induced pointing errors. The IRS link is characterized using the Laguerre series expansion (LSE) approximation, while both the source-IRS and IRS-destination channels are modeled as independent and identically distributed (i.i.d.) $\alpha$-$\mu$ fading channels. Furthermore, closed-form analytical expressions are derived for the outage probability (OP), average channel capacity (ACC), and average symbol error rate (ASER) for rectangular QAM (RQAM) and hexagonal QAM (HQAM) schemes over the end-to-end (e2e) link. The impact of random co-phasing and phase quantization errors is also examined. In addition to the theoretical analysis, deep neural network-based frameworks are developed to predict key performance metrics, facilitating fast and accurate system evaluation without computationally intensive analytical computations. Moreover, the asymptotic analysis in the high-signal-to-noise ratio (SNR) regime yields closed-form expressions for coding gain and diversity order, providing further insights into performance trends. Finally, Monte Carlo simulations validate the theoretical formulations and present a comprehensive assessment of system behavior under practical conditions.
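As a sanity-check companion to such closed forms, the outage probability of the cascaded link can be estimated by Monte Carlo; $\alpha$-$\mu$ envelopes are sampled via the standard Gamma representation ($r^\alpha$ Gamma-distributed with shape $\mu$). A single reflecting element and ideal phasing are simplifying assumptions here:

```python
import numpy as np

# Monte Carlo outage probability for a cascaded source-IRS-destination link.
# alpha-mu envelopes are sampled via the Gamma representation: r^alpha is
# Gamma-distributed with shape mu. Single reflecting element, ideal phase
# alignment, unit mean power: simplifying assumptions for illustration.
rng = np.random.default_rng(0)
alpha, mu = 2.5, 1.8                  # fading parameters (illustrative)
N = 1_000_000

def alpha_mu_envelope(size):
    g = rng.gamma(shape=mu, scale=1.0, size=size)
    return (g / mu) ** (1 / alpha)    # normalized so E[r^alpha] = 1

r1, r2 = alpha_mu_envelope(N), alpha_mu_envelope(N)
snr_bar_db, gamma_th_db = 20.0, 5.0
snr = 10 ** (snr_bar_db / 10) * (r1 * r2) ** 2   # e2e SNR of cascaded link
outage = np.mean(snr < 10 ** (gamma_th_db / 10))
print(f"outage probability ~ {outage:.4f}")
```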
Submitted 10 December, 2025;
originally announced December 2025.
-
Luxical: High-Speed Lexical-Dense Text Embeddings
Authors:
DatologyAI,
Luke Merrick,
Alex Fang,
Aldo Carranza,
Alvin Deng,
Amro Abbas,
Brett Larsen,
Cody Blakeney,
Darren Teh,
David Schwab,
Fan Pan,
Haakon Mongstad,
Haoli Yin,
Jack Urbanek,
Jason Lee,
Jason Telanoff,
Josh Wills,
Kaleigh Mentzer,
Paul Burstein,
Parth Doshi,
Paul Burnstein,
Pratyush Maini,
Ricardo Monti,
Rishabh Adiga
, et al. (9 additional authors not shown)
Abstract:
Frontier language model quality increasingly hinges on our ability to organize web-scale text corpora for training. Today's dominant tools trade off speed and flexibility: lexical classifiers (e.g., FastText) are fast but limited to producing classification output scores, while the vector-valued outputs of transformer text embedding models flexibly support numerous workflows (e.g., clustering, classification, and retrieval) but are computationally expensive to produce. We introduce Luxical, a library for high-speed "lexical-dense" text embeddings that aims to recover the best properties of both approaches for web-scale text organization. Luxical combines sparse TF-IDF features, a small ReLU network, and a knowledge distillation training regimen to approximate large transformer embedding models at a fraction of their operational cost. In this technical report, we describe the Luxical architecture and training objective and evaluate a concrete Luxical model in two disparate applications: a targeted webcrawl document retrieval test and an end-to-end language model data curation task grounded in text classification. In these tasks we demonstrate speedups ranging from 3x to 100x over varying-sized neural baselines, and comparable to FastText model inference during the data curation task. On these evaluations, the tested Luxical model illustrates favorable compute/quality trade-offs for large-scale text organization, matching the quality of neural baselines. Luxical is available as open-source software at https://github.com/datologyai/luxical.
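A minimal sketch of the lexical-dense recipe as described (sparse TF-IDF features into a small ReLU network, distilled toward a teacher embedding model); the teacher below is a random stand-in and all sizes are assumptions:

```python
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import TfidfVectorizer

# Sketch of a "lexical-dense" embedder: sparse TF-IDF features feed a small
# ReLU network trained to match a teacher transformer's embeddings (here a
# random matrix stands in for the teacher; sizes are illustrative).
docs = ["quantum error correction", "web scale data curation",
        "fast text classification", "dense text embeddings"]
tfidf = TfidfVectorizer().fit(docs)
X = torch.tensor(tfidf.transform(docs).toarray(), dtype=torch.float32)

d_vocab, d_out = X.shape[1], 64
student = nn.Sequential(nn.Linear(d_vocab, 256), nn.ReLU(),
                        nn.Linear(256, d_out))
teacher = torch.randn(len(docs), d_out)           # stand-in teacher embeddings

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
cos = nn.CosineSimilarity(dim=-1)
for _ in range(200):                              # knowledge distillation loop
    opt.zero_grad()
    loss = (1 - cos(student(X), teacher)).mean()  # cosine distillation loss
    loss.backward()
    opt.step()
```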
Submitted 11 December, 2025; v1 submitted 9 December, 2025;
originally announced December 2025.
-
Neutrino mass constraints in the context of 4-parameter dark energy equation of state and DESI DR2 observations
Authors:
Gowri S Nair,
Amlan Chakraborty,
Luca Amendola,
Subinoy Das
Abstract:
Cosmological constraints on the total neutrino mass, $\sum m_\nu$, are strongly shaped by assumptions about the dark-energy equation of state due to the well-known degeneracy between massive neutrinos and late-time cosmic acceleration. In this work, we move beyond the two-parameter Chevallier-Polarski-Linder (CPL) form adopted in recent DESI analyses and re-examine neutrino mass constraints using a flexible four-parameter dark energy equation of state (4pDE). We implement the 4pDE model in a modified version of CLASS and perform a full MCMC analysis using Planck, DESI DR2 BAO, and Pantheon+ data. Relative to our previous 4pDE study based on pre-DESI BAO datasets, the inclusion of DESI DR2 substantially tightens the constraints on the transition parameters while still yielding a relaxed neutrino-mass bound compared to $\Lambda$CDM, $\sum m_\nu < 0.101$ eV ($95\%$ C.L.). This upper limit is more stringent than the DESI DR2 constraint obtained within the $w_0w_a$CDM framework. From the best-fit parameters, we reconstruct the evolution of the 4pDE equation of state along with both $68\%$ and $95\%$ C.L. uncertainty bands. We do not find a statistically significant phantom crossing at $z \sim 0.5$, consistent with the conclusion from the DESI collaboration; at higher redshifts, the reconstructed $w(z)$ follows the CPL evolution and deviates only at low redshift. Additionally, we find a reduction of $\Delta\chi^2_{\rm min} = -7.3$ compared to the $\Lambda$CDM model.
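For context, the two-parameter CPL form that the 4pDE model generalizes (the four-parameter extension adds a transition in $w(z)$ whose exact parametrization is given in the paper, not reproduced here):

```latex
% Chevallier-Polarski-Linder (CPL) equation of state, the 2-parameter baseline:
w(z) \;=\; w_0 + w_a \,\frac{z}{1+z}
% The 4pDE model extends this with two further parameters controlling a
% transition in w(z); its exact form is given in the paper.
```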
Submitted 9 December, 2025;
originally announced December 2025.
-
HalluShift++: Bridging Language and Vision through Internal Representation Shifts for Hierarchical Hallucinations in MLLMs
Authors:
Sujoy Nath,
Arkaprabha Basu,
Sharanya Dasgupta,
Swagatam Das
Abstract:
Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities in vision-language understanding tasks. While these models often produce linguistically coherent output, they frequently suffer from hallucinations, generating descriptions that are factually inconsistent with the visual content, potentially leading to adverse consequences. Therefore, the assessment of hallucinations in MLLMs has become increasingly crucial in the model development process. Contemporary methodologies predominantly depend on external LLM evaluators, which are themselves susceptible to hallucinations and may present challenges in terms of domain adaptation. In this study, we propose the hypothesis that hallucination manifests as measurable irregularities within the internal layer dynamics of MLLMs, detectable not merely through distributional shifts but also through layer-wise analysis under specific assumptions. By incorporating such modifications, HalluShift++ broadens the efficacy of hallucination detection from text-based large language models (LLMs) to multimodal scenarios. Our codebase is available at https://github.com/C0mRD/HalluShift_Plus.
Submitted 8 December, 2025;
originally announced December 2025.
-
On Quasinormality of compact perturbations of the isometries
Authors:
Susmita Das
Abstract:
We study the compact perturbations of an isometry on a separable Hilbert space and provide a complete characterization of when they are quasinormal. Based on that, we present a complete classification for a rank-one perturbation of a unilateral shift of finite multiplicity to be quasinormal in the setting of the Hardy space. The result can also be generalized to a separable Hilbert space. As an application, we provide a complete characterization for quasinormality of a rank-one perturbation of the Hardy shift.
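For readers outside operator theory, the standard definition at play:

```latex
% An operator T on a Hilbert space H is quasinormal if T commutes with T^*T:
T(T^*T) \;=\; (T^*T)T .
% Every isometry (V^*V = I) is quasinormal; the question above is when
% V + K remains quasinormal for a compact (e.g., rank-one) perturbation K.
```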
Submitted 7 December, 2025;
originally announced December 2025.
-
Novel Deep Learning Architectures for Classification and Segmentation of Brain Tumors from MRI Images
Authors:
Sayan Das,
Arghadip Biswas
Abstract:
Brain tumors pose a significant threat to human life, so accurate detection in the early stages is essential for better diagnosis and treatment. Brain tumors can be detected manually by radiologists from patients' MRI scan images. However, the incidence of brain tumors among children and adolescents has risen in recent years, producing a substantial volume of data; manual detection is therefore time-consuming and difficult. With the emergence of artificial intelligence and its broad application in medicine, computer-aided diagnosis (CAD) systems can automate the early detection of brain tumors. Existing models for this task are not fully generalized and perform poorly on validation data. We therefore propose two novel deep learning architectures: (a) SAETCN (Self-Attention Enhancement Tumor Classification Network) for the classification of different kinds of brain tumors, trained on a dataset containing images of three tumor types (glioma, meningioma, and pituitary tumors) as well as non-tumor cases, which achieves 99.38% accuracy on the validation set, making it one of the few deep learning architectures capable of detecting brain tumors at this level of accuracy; and (b) SAS-Net (Self-Attentive Segmentation Network) for the accurate segmentation of brain tumors, which achieves an overall pixel accuracy of 99.23%.
Submitted 6 December, 2025;
originally announced December 2025.
-
Physics-Grounded Attached Shadow Detection Using Approximate 3D Geometry and Light Direction
Authors:
Shilin Hu,
Jingyi Xu,
Sagnik Das,
Dimitris Samaras,
Hieu Le
Abstract:
Attached shadows occur on the surface of the occluder where light cannot reach because of self-occlusion. They are crucial for defining the three-dimensional structure of objects and enhancing scene understanding. Yet existing shadow detection methods mainly target cast shadows, and there are no dedicated datasets or models for detecting attached shadows. To address this gap, we introduce a framework that jointly detects cast and attached shadows by reasoning about their mutual relationship with scene illumination and geometry. Our system consists of a shadow detection module that predicts both shadow types separately, and a light estimation module that infers the light direction from the detected shadows. The estimated light direction, combined with surface normals, allows us to derive a geometry-consistent partial map that identifies regions likely to be self-occluded. This partial map is then fed back to refine shadow predictions, forming a closed-loop reasoning process that iteratively improves both shadow segmentation and light estimation. In order to train our method, we have constructed a dataset of 1,458 images with separate annotations for cast and attached shadows, enabling training and quantitative evaluation of both. Experimental results demonstrate that this iterative geometry-illumination reasoning substantially improves the detection of attached shadows, with at least 33% BER reduction, while maintaining strong full and cast shadow performance.
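The geometric feedback step admits a compact illustration: for a distant light with unit direction $l$ and unit surface normals $n$, a point can lie in attached shadow only where $n \cdot l \le 0$. The sketch below (an editorial toy, not the paper's module) derives such a geometry-consistent partial map:

# Illustrative attached-shadow prior from normals and light direction.
import numpy as np

def attached_shadow_prior(normals, light_dir, eps=0.0):
    """normals: (H, W, 3) unit surface normals; light_dir: (3,) vector
    pointing from the surface toward the light source."""
    l = light_dir / np.linalg.norm(light_dir)
    ndotl = np.einsum('hwc,c->hw', normals, l)
    return ndotl <= eps  # True where geometry permits attached shadow

# Toy usage on a hemisphere-like normal field.
h = w = 64
yy, xx = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
zz = np.sqrt(np.clip(1 - xx**2 - yy**2, 0, None))
normals = np.dstack([xx, yy, zz])
mask = attached_shadow_prior(normals, np.array([0.5, 0.0, 0.86]))
print(mask.mean())  # fraction of pixels flagged as potentially self-occluded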
Submitted 5 December, 2025;
originally announced December 2025.
-
Extreme-Mass-Ratio Inspirals Embedded in Dark Matter Halo II: Chaotic Imprints in Gravitational Waves
Authors:
Surajit Das,
Surojit Dalui,
Bum-Hoon Lee,
Yi-Fu Cai
Abstract:
We investigate the imprints of chaos in gravitational waves from an extreme-mass-ratio inspiral configuration, where a stellar-mass object, confined in a harmonic potential, orbits a supermassive Schwarzschild-like black hole embedded in a Dehnen-type dark matter halo. In our first paper [1], we demonstrated the system's transition from non-chaotic to chaotic dynamics by analyzing Poincaré sections, orbital evolution, and Lyapunov exponents across different energies and dark matter halo parameters. In this work, we compute the gravitational waveforms of the small celestial object along different chaotic and non-chaotic orbits by implementing the numerical kludge scheme. We further perform a spectral analysis of the gravitational waveforms from such orbits. In particular, we show that when the system is in a chaotic state, the gravitational wave signals are characterized by broader frequency spectra with finite widths and enhanced amplitude and energy emission rate, distinctly differentiating them from the signals generated during the system's non-chaotic state. Through recurrence analysis, we also show that the time series of the gravitational waveform strain carries unique information on the chaotic dynamics, which can be used to distinguish non-chaotic from chaotic motion of the source. Furthermore, we discuss the potential detectability of these orbits with upcoming observatories such as LISA, TianQin, and Taiji, emphasizing the significant potential of detecting chaotic imprints in gravitational waves to substantially enhance our understanding of chaotic dynamics in black hole physics and the dark matter environments of galactic nuclei.
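Recurrence analysis itself is straightforward to demonstrate; the toy below builds the binary recurrence matrix of a strain-like time series (an editorial illustration; the paper's waveforms come from the numerical kludge scheme):

# Recurrence matrix R[i, j] = 1 where the signal revisits a past value.
import numpy as np

def recurrence_matrix(x, eps):
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

t = np.linspace(0, 50, 2000)
regular = np.sin(t)                      # periodic stand-in for h(t)
R = recurrence_matrix(regular, eps=0.1)
print(R.mean())  # periodic signals give dense diagonal banding; chaotic
                 # ones give short, broken diagonals and lower recurrence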
Submitted 4 December, 2025;
originally announced December 2025.
-
Inference-time Stochastic Refinement of GRU-Normalizing Flow for Real-time Video Motion Transfer
Authors:
Tasmiah Haque,
Srinjoy Das
Abstract:
Real-time video motion transfer applications such as immersive gaming and vision-based anomaly detection require accurate yet diverse future predictions to support realistic synthesis and robust downstream decision making under uncertainty. To improve the diversity of such sequential forecasts, we propose a novel inference-time refinement technique that combines Gated Recurrent Unit-Normalizing Flows (GRU-NF) with stochastic sampling methods. While GRU-NF can capture multimodal distributions through its integration of normalizing flows within a temporal forecasting framework, its deterministic transformation structure can limit expressivity. To address this, inspired by Stochastic Normalizing Flows (SNF), we introduce Markov Chain Monte Carlo (MCMC) steps during GRU-NF inference, enabling the model to explore a richer output space and better approximate the true data distribution without retraining. We validate our approach in a keypoint-based video motion transfer pipeline, where capturing temporally coherent and perceptually diverse future trajectories is essential for realistic samples and low-bandwidth communication. Experiments show that our inference framework, Gated Recurrent Unit-Stochastic Normalizing Flows (GRU-SNF), outperforms GRU-NF in generating diverse outputs without sacrificing accuracy, even under longer prediction horizons. By injecting stochasticity during inference, our approach captures multimodal behavior more effectively. These results highlight the potential of integrating stochastic dynamics with flow-based sequence models for generative time series forecasting.
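A minimal sketch of the inference-time refinement, under the assumption that one has access to the model's log-density: draw a flow sample, then take a few Metropolis steps to explore its neighborhood without any retraining (illustrative, not the paper's code):

import numpy as np

def metropolis_refine(x0, log_prob, n_steps=10, step=0.1, rng=None):
    """Random-walk Metropolis refinement of a single sample."""
    rng = rng or np.random.default_rng()
    x, lp = x0.copy(), log_prob(x0)
    for _ in range(n_steps):
        prop = x + step * rng.normal(size=x.shape)
        lp_prop = log_prob(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept uphill or by chance
            x, lp = prop, lp_prop
    return x

# Toy usage with a standard-normal stand-in for the flow density.
log_p = lambda z: -0.5 * float(np.sum(z**2))
print(metropolis_refine(np.zeros(4), log_p, n_steps=20))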
Submitted 3 December, 2025;
originally announced December 2025.
-
Two dimensional de-Sitter and deformed CFTs
Authors:
Suchetan Das
Abstract:
We present an alternative dimensional reduction that yields an effective theory of dilatons in a two-dimensional de Sitter background. Specifically, by performing an S-wave reduction of higher-dimensional Einstein gravity, we obtain free massless dilatons in the Nariai static patch and a dynamically evolving dilaton in the past Milne wedge. We then propose a (Nariai) static patch worldsheet formulation in terms of CFTs with SL(2,$\mathbb{R}$) deformed Hamiltonians on the cylinder. A key feature of this construction is that a stretched horizon in the (Nariai) static patch, equipped with an emergent UV boundary condition, acts as a gravitating observer. Using a similar reduction, we also obtain a Schwarzian action coupled to free massless dilatons in the near-horizon, near-extremal limit of four-dimensional charged AdS black holes; the corresponding worldsheet description is proposed and discussed in \cite{Das:2025cuq}. We also comment on how different notions of worldsheet time may themselves be \textit{emergent}.
Submitted 26 December, 2025; v1 submitted 3 December, 2025;
originally announced December 2025.
-
Neutrino-dominated relativistic shocked accretion flow around rotating black hole: implications for short gamma-ray bursts
Authors:
Amit Kumar,
Sayan Chakrabarti,
Santabrata Das
Abstract:
We investigate the physical properties of the central engine powering gamma-ray bursts (GRBs), modelled as a stellar-mass black hole accreting via a neutrino-dominated accretion flow (NDAF). By solving the governing hydrodynamic equations, we obtain global transonic NDAF solutions featuring shock transitions and examine their role in powering GRB energetics. The NDAF solutions are explored over a broad range of black hole parameters, including its mass ($M_{\rm BH}$) and spin ($a_{\rm k}$), and accretion rate ($\dot{M}$). We find that shocked NDAFs can naturally account for the observed diversity in GRB energy output. Incorporating results from numerical simulations of binary neutron star and black hole-neutron star mergers, we estimate the remnant black hole mass and spin parameters for the predicted range of post-merger disk mass ($M_{\rm disk}$). Our analysis reveals that small-mass black holes with relatively low spin values can adequately reproduce the luminosities of short GRBs (SGRBs), whereas identical GRB luminosities can also be achieved for more massive black holes possessing higher spin values. Finally, we uncover a robust correlation between the black hole spin and disk mass such that $M_{\rm disk}$ decreases with increasing $a_{\rm k}$, remaining largely independent of the black hole mass ($M_{\rm BH}$) powering GRBs.
Submitted 3 December, 2025;
originally announced December 2025.
-
Rare Event Searches Using Cryogenic Detectors via Direct Detection Methods
Authors:
S. Das,
R. Dey,
V. K. S. Kashyap,
B. Mohanty,
D. Mondal,
S. Banik,
M. Chaudhuri,
V. Iyer
Abstract:
Cryogenic detectors are at the forefront of rare-event search experiments, including direct detection of dark matter, coherent elastic neutrino-nucleus scattering, neutrinoless double-beta decay, and searches for fractionally charged particles. Their unique ability to achieve ultra-low energy thresholds, typically O(eV-100 eV), together with excellent energy resolution and effective background suppression, makes them sensitive to extremely faint signals from rare interactions. These rare particle interactions produce phonons, ionization, or scintillation, depending on the target medium, which are registered by specialized sensors and converted into measurable signals. This review summarizes the underlying detection principles, surveys major experiments and recent results, examines forthcoming initiatives, and outlines the evolving role of cryogenic detectors in advancing the frontiers of rare-event physics.
Submitted 1 December, 2025;
originally announced December 2025.
-
Quantum-Classical Separation in Bounded-Resource Tasks Arising from Measurement Contextuality
Authors:
Shashwat Kumar,
Eliott Rosenberg,
Alejandro Grajales Dau,
Rodrigo Cortinas,
Dmitri Maslov,
Richard Oliver,
Adam Zalcman,
Matthew Neeley,
Alice Pagano,
Aaron Szasz,
Ilya Drozdov,
Zlatko Minev,
Craig Gidney,
Noureldin Yosri,
Stijn J. de Graaf,
Aniket Maiti,
Dmitry Abanin,
Rajeev Acharya,
Laleh Aghababaie Beni,
Georg Aigeldinger,
Ross Alcaraz,
Sayra Alcaraz,
Trond I. Andersen,
Markus Ansmann,
Frank Arute
, et al. (258 additional authors not shown)
Abstract:
The prevailing view is that quantum phenomena can be harnessed to tackle certain problems beyond the reach of classical approaches. Quantifying this capability as a quantum-classical separation and demonstrating it on current quantum processors has remained elusive. Using a superconducting qubit processor, we show that quantum contextuality enables certain tasks to be performed with success probabilities beyond classical limits. With a few qubits, we illustrate quantum contextuality with the magic square game, as well as quantify it through a Kochen--Specker--Bell inequality violation. To examine many-body contextuality, we implement the N-player GHZ game and separately solve a 2D hidden linear function problem, exceeding the classical success rate in both. Our work proposes novel ways to benchmark quantum processors using contextuality-based algorithms.
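The classical ceiling for the three-player GHZ game is easy to verify by brute force over deterministic strategies (randomization cannot improve on the best deterministic strategy); the enumeration below is a standard calculation, not code from the paper:

# Classical value of the GHZ game: inputs (r, s, t) with r^s^t = 0; players
# answer bits (a, b, c) and win iff a^b^c equals r OR s OR t. Quantum players
# sharing a GHZ state win with certainty; classically the best is 3/4.
from itertools import product

inputs = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
best = 0.0
for f in product(product((0, 1), repeat=2), repeat=3):  # one output bit per input bit
    wins = sum((f[0][r] ^ f[1][s] ^ f[2][t]) == (r | s | t)
               for r, s, t in inputs)
    best = max(best, wins / len(inputs))
print(best)  # 0.75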
Submitted 1 December, 2025;
originally announced December 2025.
-
Velocity-space turbulent cascade in the near-Sun solar wind: first insights from the Parker Solar Probe mission
Authors:
A. Larosa,
O. Pezzi,
T. Bowen,
L. Sorriso-Valvo,
N. Sioulas,
F. Pucci,
D. Trotta,
J. L. Verniero,
R. Livi,
S. Bharati Das,
A. Chasapis,
D. Perrone,
F. Valentini,
S. Servidio
Abstract:
In space plasmas, the rarity of collisions leads to complex structures in velocity space, where a turbulent cascade of velocity distribution function fluctuations is thought to occur. Previous studies have explored this phenomenon using the Hermite decomposition of the ion velocity distribution function (VDF) in both magnetosheath data and numerical simulations. In this work, we investigate the Hermite spectrum of the ion VDFs measured by Parker Solar Probe in the inner heliosphere. We analyze a superalfvénic stream at a radial distance of $R \approx 28 R_{sun}$ and a subalfvénic stream at $R \approx 11 R_{sun}$, the former characterized by a prevalence of VDFs with suprathermal beams (also known as hammerhead distributions). The Hermite analysis is also compared with various proxies of energization and dissipation, in order to establish a connection between turbulent cascades in real space and those in velocity space. A qualitative agreement between the energization proxies and the Hermite analysis is observed. The results are suggestive of the presence of a dual cascade in real and velocity space.
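The Hermite diagnostic can be sketched in a few lines: expand the VDF over probabilists' Hermite polynomials against a unit Maxwellian and read off the power per order (an editorial illustration in thermal units, not the analysis pipeline):

import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

def hermite_spectrum(v, f, n_max=16):
    """a_m = integral of f(v) He_m(v) dv / sqrt(m!); returns a_m**2 per order."""
    dv = v[1] - v[0]
    spec = []
    for m in range(n_max):
        c = np.zeros(m + 1); c[m] = 1.0
        a_m = np.sum(f * hermeval(v, c)) * dv / np.sqrt(factorial(m))
        spec.append(a_m**2)
    return np.array(spec)

# Toy usage: Maxwellian plus a weak suprathermal beam ("hammerhead"-like).
v = np.linspace(-8, 8, 4001)
f = np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)
f += 0.05 * np.exp(-(v - 3)**2) / np.sqrt(np.pi)
print(hermite_spectrum(v, f)[:6])  # the beam leaks power into orders m >= 1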
Submitted 1 December, 2025;
originally announced December 2025.
-
Revisiting wideband pulsar timing measurements
Authors:
Abhimanyu Susobhanan,
Avinash Kumar Paladi,
Réka Desmecht,
Amarnath,
Manjari Bagchi,
Manoneeta Chakraborty,
Shaswata Chowdhury,
Suruj Jyoti Das,
Debabrata Deb,
Shantanu Desai,
Churchil Dwivedi,
Himanshu Grover,
Jibin Jose,
Bhal Chandra Joshi,
Shubham Kala,
Fazal Kareem,
Kuldeep Meena,
Sushovan Mondal,
K Nobleson,
Arul Pandian B,
Kaustubh Rai,
Adya Shukla,
Manpreet Singh,
Aman Srivastava,
Mayuresh Surnis
, et al. (6 additional authors not shown)
Abstract:
In the wideband paradigm of pulsar timing, the time of arrival of a pulsar pulse is measured simultaneously with the corresponding dispersion measure from a frequency-resolved integrated pulse profile. We present a new method for performing wideband measurements that rigorously accounts for measurement noise. We demonstrate this method using observations of PSR J2124$-$3358 made as part of the Indian Pulsar Timing Array experiment using the upgraded Giant Metre-wave Radio Telescope, and show that our method produces more realistic measurement uncertainty estimates compared to the existing wideband measurement method.
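The underlying measurement model is compact enough to sketch: per-channel delays follow the cold-plasma dispersion law $t(f) = t_\infty + K\,{\rm DM}/f^2$ with $K \simeq 4.148808 \times 10^3$ s MHz$^2$ pc$^{-1}$ cm$^3$, and (TOA, DM) follow from a joint weighted fit. The toy below is illustrative and does not reproduce the paper's noise treatment:

import numpy as np

K = 4.148808e3  # dispersion constant, s MHz^2 / (pc cm^-3)

def fit_toa_dm(freqs_mhz, delays_s, sigmas_s):
    """Joint weighted least squares for (t_inf, DM) from channel delays."""
    A = np.column_stack([np.ones_like(freqs_mhz), K / freqs_mhz**2])
    W = 1.0 / sigmas_s**2
    AtWA = A.T @ (A * W[:, None])
    params = np.linalg.solve(AtWA, A.T @ (W * delays_s))
    cov = np.linalg.inv(AtWA)  # joint (t_inf, DM) uncertainty estimates
    return params, cov

# Toy usage: a uGMRT-like 300-500 MHz band, true DM = 4.6 pc cm^-3.
f = np.linspace(300.0, 500.0, 32)
rng = np.random.default_rng(1)
sig = np.full_like(f, 1e-5)
y = K * 4.6 / f**2 + rng.normal(0.0, sig)
params, cov = fit_toa_dm(f, y, sig)
print(params)  # approximately [0, 4.6]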
Submitted 1 December, 2025; v1 submitted 1 December, 2025;
originally announced December 2025.
-
SafeCiM: Investigating Resilience of Hybrid Floating-Point Compute-in-Memory Deep Learning Accelerators
Authors:
Swastik Bhattacharya,
Sanjay Das,
Anand Menon,
Shamik Kundu,
Arnab Raha,
Kanad Basu
Abstract:
Deep Neural Networks (DNNs) continue to grow in complexity with Large Language Models (LLMs) incorporating vast numbers of parameters. Handling these parameters efficiently in traditional accelerators is limited by data-transmission bottlenecks, motivating Compute-in-Memory (CiM) architectures that integrate computation within or near memory to reduce data movement. Recent work has explored CiM designs using Floating-Point (FP) and Integer (INT) operations. FP computations typically deliver higher output quality due to their wider dynamic range and precision, benefiting precision-sensitive Generative AI applications. These include models such as LLMs, thus driving advancements in FP-CiM accelerators. However, the vulnerability of FP-CiM to hardware faults remains underexplored, posing a major reliability concern in mission-critical settings. To address this gap, we systematically analyze hardware fault effects in FP-CiM by introducing bit-flip faults at key computational stages, including digital multipliers, CiM memory cells, and digital adder trees. Experiments with Convolutional Neural Networks (CNNs) such as AlexNet and state-of-the-art LLMs including LLaMA-3.2-1B and Qwen-0.3B-Base reveal how faults at each stage affect inference accuracy. Notably, a single adder fault can reduce LLM accuracy to 0%. Based on these insights, we propose a fault-resilient design, SafeCiM, that mitigates fault impact far better than a naive FP-CiM with a pre-alignment stage. For example, with 4096 MAC units, SafeCiM reduces accuracy degradation by up to 49x for a single adder fault compared to the baseline FP-CiM architecture.
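The fault model is easy to make concrete: flipping a single bit of an FP32 word perturbs the value by wildly different amounts depending on the field hit. A minimal injection sketch (editorial, not SafeCiM itself):

import numpy as np

def flip_bit(x, bit):
    """Flip bit `bit` (0 = mantissa LSB ... 31 = sign) of a float32 value."""
    u = np.array(x, dtype=np.float32).view(np.uint32)
    u ^= np.uint32(1 << bit)
    return float(u.view(np.float32))

w = 0.5
for b in (0, 23, 30):  # mantissa LSB, exponent LSB, high exponent bit
    print(b, flip_bit(w, b))
# Perturbations range from a negligible mantissa nudge (bit 0) to ~1.7e38
# (bit 30), which is why a single fault in a shared adder tree can collapse
# LLM accuracy to 0%.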
Submitted 22 November, 2025;
originally announced December 2025.
-
Witten-O'Raifeartaigh potential revisited in the context of Warm Inflation
Authors:
Suratna Das,
Umang Kumar,
Swagat S. Mishra,
Varun Sahni
Abstract:
Warm Inflation is a scenario in which the inflaton field dissipates its energy during inflation to maintain a subdominant constant radiation bath. Two of its remarkable features are that (i) inflation can be realized even by very steep potentials and (ii) the scenario does not call for a separate post-inflation reheating phase. We exploit the first feature to show that Warm Inflation can successfully take place on the very steep left wing of the Witten-O'Raifeartaigh potential while remaining in excellent agreement with current cosmological data (a joint analysis of Planck, ACT, and DESI). The Witten-O'Raifeartaigh potential has a flatter right wing as well, which opens up the possibility of dark energy when the field rolls along this wing. However, in order to successfully realize quintessential inflation one needs to (i) normalize the two wings of the Witten-O'Raifeartaigh potential differently in order to bridge the two extreme energy scales of inflation and dark energy, and (ii) allow the quintessence field to be dissipative, which is consistent with the presence of a dissipative term in warm inflation. The dissipative dynamics of the quintessence field is needed in order to sustain slow roll on the right wing. With these modifications, we demonstrate that the Witten-O'Raifeartaigh potential can give rise to a unified model of warm inflation (on the left wing) and transient dark energy (on the right wing).
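For context, the dissipative dynamics invoked here is the standard warm-inflation system (textbook form, quoted editorially rather than taken from this paper):
$$ \ddot\phi + (3H + \Gamma)\,\dot\phi + V_{,\phi} = 0, \qquad \dot\rho_r + 4H\rho_r = \Gamma\,\dot\phi^2, $$
so a sufficiently large dissipation rate $\Gamma$ can sustain slow roll even on a steep wing of $V(\phi)$ while continuously sourcing the radiation bath, which is the mechanism exploited on both wings above.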
Submitted 28 November, 2025;
originally announced November 2025.
-
Impact of Cosmic Ray Distribution on the Growth and Saturation of Bell Instability
Authors:
Saikat Das,
Siddhartha Gupta,
Prateek Sharma
Abstract:
Cosmic rays (CRs) streaming in weakly magnetized plasmas can drive large-amplitude magnetic fluctuations via nonresonant streaming instability (NRSI), or Bell instability. Using one-dimensional kinetic simulations, we investigate how mono-energetic and power-law CR momentum distributions influence the growth and saturation of NRSI. The linear growth is governed solely by the CR current and is largely insensitive to the CR distribution. However, the saturation depends strongly on the CR distribution and is achieved through CR isotropization, which quenches the driving current. Mono-energetic CRs effectively amplify the magnetic field and isotropize. For power-law distributions, the lowest-energy CRs dominate current relaxation and magnetic growth, while the highest-energy CRs remain weakly scattered, limiting their contribution to saturation. In the absence of low-energy CRs, high-energy particles amplify magnetic fields effectively and isotropize. We provide a modified saturation prescription accounting for these effects and propose a layered CR-confinement scenario upstream of astrophysical shocks, relevant to particle acceleration to high energies.
Submitted 26 November, 2025;
originally announced November 2025.
-
Geometric Calibration and Neutral Zones for Uncertainty-Aware Multi-Class Classification
Authors:
Soumojit Das,
Nairanjana Dasgupta,
Prashanta Dutta
Abstract:
Modern artificial intelligence systems make critical decisions yet often fail silently when uncertain -- even well-calibrated models provide no mechanism to identify \textit{which specific predictions} are unreliable. We develop a geometric framework addressing both calibration and instance-level uncertainty quantification for neural network probability outputs. Treating probability vectors as points on the $(c-1)$-dimensional probability simplex equipped with the Fisher--Rao metric, we construct: (i) Additive Log-Ratio (ALR) calibration maps that reduce exactly to Platt scaling for binary problems while extending naturally to multi-class settings, and (ii) geometric reliability scores that translate calibrated probabilities into actionable uncertainty measures, enabling principled deferral of ambiguous predictions to human review.
Theoretical contributions include: consistency of the calibration estimator at rate $O_p(n^{-1/2})$ via M-estimation theory (Theorem~1), and tight concentration bounds for reliability scores with explicit sub-Gaussian parameters enabling sample size calculations for validation set design (Theorem~2). We conjecture Neyman--Pearson optimality of our neutral zone construction based on connections to Bhattacharyya coefficients. Empirical validation on Adeno-Associated Virus classification demonstrates that the two-stage framework captures 72.5\% of errors while deferring 34.5\% of samples, reducing automated decision error rates from 16.8\% to 6.9\%. Notably, calibration alone yields marginal accuracy gains; the operational benefit arises primarily from the reliability scoring mechanism, which applies to any well-calibrated probability output. This work bridges information geometry and statistical learning, offering formal guarantees for uncertainty-aware classification in applications requiring rigorous validation.
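The ALR construction itself is standard compositional-data machinery; a minimal sketch (with a toy affine correction standing in for the fitted calibration map) shows the round trip, and for $c = 2$ the single ALR coordinate is exactly the logit, so an affine map there reduces to Platt scaling:

import numpy as np

def alr(p, eps=1e-12):
    """Additive log-ratio coordinates: log(p_i / p_c), shape (n, c-1)."""
    p = np.clip(p, eps, 1.0)
    return np.log(p[:, :-1]) - np.log(p[:, -1:])

def alr_inv(z):
    """Map ALR coordinates back to the probability simplex."""
    z = np.column_stack([z, np.zeros(len(z))])
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy usage: a temperature-like shrink a = 0.5 in ALR space softens
# overconfident predictions; in practice the affine map would be fit by
# maximum likelihood on a validation set.
p = np.array([[0.90, 0.07, 0.03], [0.20, 0.30, 0.50]])
print(alr_inv(0.5 * alr(p)))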
Submitted 29 November, 2025; v1 submitted 25 November, 2025;
originally announced November 2025.
-
Beyond the Legal Lens: A Sociotechnical Taxonomy of Lived Privacy Incidents and Harms
Authors:
Kirsten Chapman,
Garrett Smith,
Kaitlyn Klabacka,
Harrison Winslow,
Louise Barkhuus,
Cori Faklaris,
Sauvik Das,
Pamela Wisniewski,
Bart Piet Knijnenburg,
Heather Lipford,
Xinru Page
Abstract:
To understand how privacy incidents lead to harms, HCI researchers have historically leveraged legal frameworks. However, these frameworks expect acute, tangible harms and thus may not cover the full range of human experience relevant to modern-day digital privacy. To address this gap, our research builds upon these existing frameworks to develop a more comprehensive representation of people's lived experiences with privacy harms. We analyzed 369 privacy incidents reported by individuals from the general public. We found a broader range of privacy incidents and harms than accounted for in existing legal frameworks. The majority of reported privacy harms were not based on tangible harm, but on fear and loss of psychological safety. We also characterize the actors, motives, and information associated with various incidents. This work contributes a new framework for understanding digital privacy harms that can be utilized both in research and practice.
Submitted 25 November, 2025;
originally announced November 2025.
-
RED-F: Reconstruction-Elimination based Dual-stream Contrastive Forecasting for Multivariate Time Series Anomaly Prediction
Authors:
PengYu Chen,
Xiaohou Shi,
Yuan Chang,
Yan Sun,
Sajal K. Das
Abstract:
The proactive prediction of anomalies (AP) in multivariate time series (MTS) is a critical challenge to ensure system dependability. The difficulty lies in identifying subtle anomaly precursors concealed within normal signals. However, existing unsupervised methods, trained exclusively on normal data, demonstrate a fundamental propensity to reconstruct normal patterns. Consequently, when confronted with weak precursors, their predictions are dominated by the normal pattern, submerging the very signal required for prediction. To contend with this limitation, we propose RED-F, a Reconstruction-Elimination based Dual-stream Contrastive Forecasting framework, comprising the Reconstruction-Elimination Model (REM) and the Dual-stream Contrastive Forecasting Model (DFM). The REM utilizes a hybrid time-frequency mechanism to mitigate the precursor, generating a purified, normal-pattern baseline. The DFM then receives this purified baseline and the original sequence, which retains the precursor, as parallel inputs. At the core of our framework, RED-F employs a contrastive forecast that transforms the difficult task of absolute signal detection into a simpler, more robust task of relative trajectory comparison by computing the divergence between these two predictive streams. This contrastive mechanism serves to amplify the faint precursor signal. Furthermore, the DFM is trained with a novel Multi-Series Prediction (MSP) objective, which leverages distant future context to enhance its predictive sensitivity. Extensive experiments on six real-world datasets demonstrate the superior capability of RED-F in anomaly prediction tasks.
Submitted 25 November, 2025;
originally announced November 2025.
-
Measurement-Assisted Clifford Synthesis
Authors:
Sowmitra Das
Abstract:
In this letter, we introduce a method to synthesize an $n$-qubit Clifford unitary $C$ from the stabilizer tableau of its inverse $C^\dagger$, using ancilla qubits and measurements. The procedure uses ancillary $|+\rangle$ states, controlled-Paulis, $X$-basis measurements and single-qubit Pauli corrections on the data qubits (based on the measurement results). This introduces a new normal form for Clifford synthesis, with the number of two-qubit gates required exactly equal to the weight of the stabilizer tableau, and a depth linear in $n$.
Submitted 24 November, 2025;
originally announced November 2025.
-
Algorithmic detection of false data injection attacks in cyber-physical systems
Authors:
Souvik Das,
Avishek Ghosh,
Debasish Chatterjee
Abstract:
This article introduces an anomaly-detection-based algorithm (AD-CPS) for detecting false data injection attacks, which fall under the category of data deception/integrity attacks but may have arbitrary information structure, in cyber-physical systems (CPSs) modeled as stochastic linear time-invariant systems. The core idea of this data-driven algorithm is that an honest state (one not compromised by adversaries) generated by the CPS should concentrate near the weighted empirical mean of its immediate past samples. As the first theoretical result, we provide non-asymptotic guarantees on the false positive error incurred by the algorithm for attacks that are 2-step honest, referring to adversaries that act intermittently rather than successively. Moreover, we establish that for adversaries possessing a certain minimum energy, the false negative error incurred by AD-CPS is low. Extensive experiments on partially observed stochastic LTI systems demonstrate these properties and quantitatively compare AD-CPS with an optimal CUSUM-based test.
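A reading of the core test in code (names and the threshold rule are editorial, inferred from the abstract): flag the current state when it strays too far from a weighted empirical mean of its recent past.

import numpy as np

def adcps_flag(history, x_t, weights, tau):
    """history: (k, d) past states, newest last; weights: (k,) nonnegative.
    Returns True when x_t deviates from the weighted mean by more than tau."""
    w = weights / weights.sum()
    return np.linalg.norm(x_t - w @ history) > tau

rng = np.random.default_rng(0)
H = rng.normal(0.0, 0.1, size=(20, 3))
w = 0.9 ** np.arange(19, -1, -1)  # geometric weights favor recent samples
print(adcps_flag(H, np.zeros(3), w, tau=0.5))      # honest state  -> False
print(adcps_flag(H, np.full(3, 2.0), w, tau=0.5))  # injected data -> True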
Submitted 23 November, 2025;
originally announced November 2025.
-
Extreme Model Compression for Edge Vision-Language Models: Sparse Temporal Token Fusion and Adaptive Neural Compression
Authors:
Md Tasnin Tanvir,
Soumitra Das,
Sk Md Abidar Rahaman,
Ali Shiri Sichani
Abstract:
The demand for edge AI in vision-language tasks requires models that achieve real-time performance on resource-constrained devices with limited power and memory. This paper proposes two adaptive compression techniques -- Sparse Temporal Token Fusion (STTF) and Adaptive Neural Compression (ANC) -- that integrate algorithmic innovations with hardware-aware optimizations. Unlike previous approaches relying on static pruning or uniform scaling, STTF dynamically reuses visual tokens through event-driven change detection, while ANC conditionally activates encoder branches via a learned router, enabling fine-grained adaptation to scene complexity. Our 3B-parameter TinyGPT-STTF achieves CIDEr 131.2, BLEU-4 0.38, METEOR 0.31, and ROUGE-L 0.56 on the COCO 2017 test set, surpassing LLaVA-1.5 7B by 17.6 CIDEr points while using 2.3x fewer parameters and 62x fewer on-device FLOPs. TinyGPT-ANC reaches CIDEr 128.5. On event-based vision tasks, STTF reduces average token count by 84% (from 196 to 31 tokens) while preserving 95.6% accuracy on the DVS128 Gesture dataset, and ANC cuts FLOPs by up to 90% in low-motion scenes. Compared to strong baselines, our models improve accuracy by up to 4.4% and reduce latency by up to 13x. These results enable efficient deployment of capable vision-language models on real-world edge devices.
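The token-reuse idea behind STTF can be sketched independently of any model weights: recompute embeddings only for patches whose content changed beyond a threshold and reuse cached tokens elsewhere (an editorial toy, not the released pipeline):

import numpy as np

def sttf_step(prev_patches, patches, cached_tokens, encode, thresh=0.05):
    """patches: (n, d_pix); cached_tokens: (n, d_tok). Returns fused tokens
    and the fraction of tokens actually recomputed this frame."""
    delta = np.abs(patches - prev_patches).mean(axis=1)
    changed = delta > thresh          # event-driven change detection
    tokens = cached_tokens.copy()
    if changed.any():
        tokens[changed] = encode(patches[changed])
    return tokens, changed.mean()

# Toy usage: 196 patches, only 10 change between frames.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 16))
encode = lambda x: x @ W              # stand-in for the tile encoder
prev = rng.random((196, 64)); cur = prev.copy(); cur[:10] += 1.0
tokens, frac = sttf_step(prev, cur, encode(prev), encode)
print(frac)  # ~0.05, i.e. ~95% of tokens reused on this frame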
Submitted 23 November, 2025;
originally announced November 2025.
-
Stro-VIGRU: Defining the Vision Recurrent-Based Baseline Model for Brain Stroke Classification
Authors:
Subhajeet Das,
Pritam Paul,
Rohit Bahadur,
Sohan Das
Abstract:
Stroke is a major cause of death and disability worldwide, and early recognition is one of the key elements of successful treatment. Strokes are commonly diagnosed using CT scanning, which is fast and readily available; however, manual analysis can be slow and error-prone. In this work, a pre-trained Vision Transformer-based transfer learning framework is proposed for the early identification of brain stroke. A few of the encoder blocks of the ViT model are frozen, and the rest are fine-tuned to learn stroke-specific features. The extracted features are fed into a single-layer Bi-GRU for classification. Class imbalance is handled by data augmentation. The model achieves 94.06% accuracy in classifying brain stroke on the Stroke Dataset.
Submitted 23 November, 2025;
originally announced November 2025.
-
Deep X-ray observation of NGC 3221: everything everywhere all at once
Authors:
Sanskriti Das,
Smita Mathur,
Bret D. Lehmer,
Steven W. Allen,
Yair Krongold,
Anjali Gupta
Abstract:
We present a comprehensive analysis of 475 ks (438 ks unpublished & 37 ks archival) XMM-Newton/EPIC-pn observation of a nearby, highly inclined, star-forming, luminous infrared galaxy NGC 3221 through spatial, temporal, and spectral information. We confirm the presence of a low-luminosity (presumably Compton-thick) AGN. The 0.4$-$12 keV luminosity and the hardness ratio of the six ultra-luminous X-ray sources (ULX) previously identified in Chandra data exhibit diverse variability on day-scale. The collective emission from unresolved sources exhibits a different day-scale variability. We have also discovered two new predominantly soft ($<1$ keV) sources. One of these has an enigmatic spectral shape featuring a soft component, which we interpret as a superbubble in NGC 3221, and a variable hard component from a compact object, which is unresolved from the superbubble. We do not confidently detect any X-ray emission from SN 1961L. The hot gas in the ISM (out to $\pm$6 kpc from the disk plane) and the extraplanar region (6$-$12 kpc) both require two thermal phases at $\sim 0.15$ keV and $\sim 0.55$ keV. The $\sim 0.55$ keV component is fainter in the ISM than the $\sim 0.15$ keV component, but the emission from the latter falls off more steeply with disk height than the former. This makes the extraplanar region hotter and less dense than the ISM. The proximity of NGC 3221 and the occurrence of the underluminous AGN offer a unique observing opportunity to study the hot diffuse medium in conjunction with nuclear and disk-wide point sources.
Submitted 22 November, 2025;
originally announced November 2025.
-
Cosmogenic Origin of KM3-230213A: Delayed Gamma-Ray Emission from A Cosmic-Ray Transient
Authors:
Sovan Boxi,
Saikat Das,
Nayantara Gupta
Abstract:
The highest-energy cosmic neutrino detected by the ARCA detector of KM3NeT has reignited the quest to pinpoint the sources of ultrahigh-energy cosmic rays (UHECRs; $E\gtrsim 0.1$ EeV). By uncovering the associated multimessenger signals, we investigate the origin of the 220 PeV $\nu_\mu$ event KM3-230213A from a transient source that accelerated cosmic rays to $\sim 10$ EeV. UHECR protons that escape the source interact with the cosmic background radiation, producing a PeV-EeV cosmogenic neutrino spectrum. The secondary $e^\pm$ and $\gamma$-rays initiate an electromagnetic cascade, resulting in a cosmogenic $\gamma$-ray spectrum that peaks at a delayed time due to deflection of charged particles in the extragalactic magnetic field (EGMF). Our results shed light on the nature of the UHECR source for the $\nu_\mu$ event and provide crucial insights into the detection of multi-TeV $\gamma$-rays of cosmogenic origin from similar past cosmological transients. Using the $\gamma$-ray sensitivity of currently operating and next-generation imaging atmospheric Cherenkov telescopes, the flux and time-delay distribution can constrain the source distance. We further show that the detection of such a $\gamma$-ray signal above the background depends on the EGMF strength. Together with the non-detection of coincident spatial or temporal photon counterparts at the current epoch, this detection is the first compelling candidate for a sub-EeV cosmogenic neutrino.
Submitted 22 November, 2025;
originally announced November 2025.