-
The neutron skin effect in Pb+Pb collisions at 2.76A TeV at the LHC
Authors:
Amit Paul,
Rupa Chatterjee
Abstract:
Collisions of lead nuclei at relativistic energies provide valuable insight into the properties of the quark-gluon plasma formed in such collisions, where the initial geometry and density profile play a crucial role in governing the subsequent evolution of the produced hot and dense fireball. The neutron skin thickness, resulting from the difference between the neutron and proton density distributions in neutron-rich lead nuclei, plays an important role in nuclear structure studies. In this work we investigate the impact of the neutron skin on the space-time evolution of the fireball formed in Pb+Pb collisions at 2.76A TeV at the LHC and analyze how the presence of a neutron skin affects bulk observables sensitive to the initial nuclear structure. The time evolution of the initial profile, along with the average $p_T$, particle spectra, and anisotropic flow parameters, is estimated to investigate the effect of the neutron skin on these observables. The initial spatial anisotropy of the fireball is found to be affected significantly by the neutron skin thickness, especially in peripheral collisions. This leads to a substantial enhancement of the elliptic flow of hadrons, with an even stronger effect observed for photons. In addition, the effect is found to be more pronounced for lower beam energy collisions of lead nuclei.
Submitted 9 April, 2026;
originally announced April 2026.
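The initial spatial anisotropy discussed in the abstract above is conventionally quantified by the participant eccentricity $\varepsilon_2$ of the transverse source distribution. As an illustration only (not the authors' code), a minimal numpy sketch of the standard $r^2$-weighted definition:

```python
import numpy as np

def eccentricity(x, y, n=2):
    """Participant eccentricity eps_n of transverse source positions
    (x, y), r^2-weighted (standard initial-state definition;
    illustrative sketch, not the paper's implementation)."""
    # shift to the centre of mass of the sources
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    r2 = x**2 + y**2
    phi = np.arctan2(y, x)
    num = np.hypot(np.sum(r2 * np.cos(n * phi)),
                   np.sum(r2 * np.sin(n * phi)))
    return num / np.sum(r2)

# a round (isotropic) source distribution gives eps_2 ~ 0,
# an elongated one (as in peripheral collisions) gives eps_2 > 0
rng = np.random.default_rng(0)
xc, yc = rng.normal(0, 1, 5000), rng.normal(0, 1, 5000)  # round
xe, ye = rng.normal(0, 2, 5000), rng.normal(0, 1, 5000)  # elongated
print(eccentricity(xc, yc), eccentricity(xe, ye))
```

For Gaussian widths $\sigma_x=2$, $\sigma_y=1$, the analytic value is $\varepsilon_2 = (\sigma_x^2-\sigma_y^2)/(\sigma_x^2+\sigma_y^2) = 0.6$, which the sampled estimate approaches.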
-
Spectroscopic factors as a probe of nuclear shape in $^{44}$S via one-neutron knockout reaction
Authors:
Ranojit Barman,
Masaaki Kimura,
Yoshiki Chazono,
Kazuki Yoshida,
Kazuyuki Ogata,
Rajdeep Chatterjee
Abstract:
Background: The neutron-rich nucleus $^{44}$S lies in the region where the traditional $N=28$ shell closure weakens, leading to the emergence of shape coexistence and large-amplitude collective motion (LACM). Understanding the nature and degree of shape mixing in this nucleus remains an important and fascinating problem. Purpose: We investigate the manifestation of shape fluctuations in $^{44}$S and examine how electric transitions and the spectroscopic factors from one-neutron knockout reactions can serve as probes of shape mixing. Method: Antisymmetrized molecular dynamics combined with the generator coordinate method (AMD+GCM) is used to study the structure of $^{44}$S and $^{43}$S. Calculations are performed using the Gogny effective interaction with two different parameter sets, D1S and D1M, to explore the interaction dependence of shape mixing. Monopole and quadrupole transition strengths and spectroscopic factors are evaluated. The cross sections for the $^{44}$S$(p,pn)^{43}$S reaction are calculated within the distorted wave impulse approximation (DWIA). Results: The calculations reveal a strong interaction dependence of the shape fluctuations in $^{44}$S. The structural differences obtained from the D1S and D1M interactions produce distinct patterns in the electric transitions, the spectroscopic factors, and the cross sections for the $^{44}$S$(p,pn)^{43}$S knockout reaction. Conclusion: The population of the $3/2^-$ and $7/2^-$ states of $^{43}$S is particularly sensitive to the underlying shape fluctuations in $^{44}$S. Thus, measurement of the $^{44}$S$(p,pn)^{43}$S reaction can provide a direct experimental probe.
Submitted 18 March, 2026;
originally announced March 2026.
-
FairMed-XGB: A Bayesian-Optimised Multi-Metric Framework with Explainability for Demographic Equity in Critical Healthcare Data
Authors:
Mitul Goswami,
Romit Chatterjee,
Arif Ahmed Sekh
Abstract:
Machine learning models deployed in critical care settings exhibit demographic biases, particularly gender disparities, that undermine clinical trust and equitable treatment. This paper introduces FairMed-XGB, a novel framework that systematically detects and mitigates gender-based prediction bias while preserving model performance and transparency. The framework integrates a fairness-aware loss function combining Statistical Parity Difference, Theil Index, and Wasserstein Distance, jointly optimised via Bayesian Search into an XGBoost classifier. Post-mitigation evaluation on seven clinically distinct cohorts derived from the MIMIC-IV-ED and eICU databases demonstrates substantial bias reduction: Statistical Parity Difference decreases by 40 to 51 percent on MIMIC-IV-ED and 10 to 19 percent on eICU; Theil Index collapses by four to five orders of magnitude to near-zero values; Wasserstein Distance is reduced by 20 to 72 percent. These gains are achieved with negligible degradation in predictive accuracy (AUC-ROC drop <0.02). SHAP-based explainability reveals that the framework diminishes reliance on gender-proxy features, providing clinicians with actionable insights into how and where bias is corrected. FairMed-XGB offers a robust, interpretable, and ethically aligned solution for equitable clinical decision-making, paving the way for trustworthy deployment of AI in high-stakes healthcare environments.
Submitted 16 March, 2026;
originally announced March 2026.
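The three fairness metrics named in the abstract above have standard definitions; the numpy sketch below shows one common way to compute each (the paper's exact loss weighting and Bayesian search are not reproduced, and the toy data are invented for illustration):

```python
import numpy as np

def statistical_parity_diff(y_pred, group):
    """P(y_hat = 1 | group 0) - P(y_hat = 1 | group 1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def theil_index(x):
    """Theil T index of a positive-valued 'benefit' vector;
    0 means perfect equality."""
    x = np.asarray(x, dtype=float)
    r = x / x.mean()
    return np.mean(r * np.log(r))

def wasserstein_1d(a, b, n_quantiles=1000):
    """Quantile-based approximation of the 1-D Wasserstein-1
    distance between two score distributions."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    return np.mean(np.abs(np.quantile(a, qs) - np.quantile(b, qs)))

# toy check: a classifier that favours group 0
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(statistical_parity_diff(y_pred, group))  # 0.5
print(theil_index(np.ones(10)))                # 0.0 (perfect equality)
```

A fairness-aware loss of the kind described would combine such terms with the predictive loss; the combination weights here would be the quantities tuned by the Bayesian search.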
-
Soundscapes in Spectrograms: Pioneering Multilabel Classification for South Asian Sounds
Authors:
Sudip Chakrabarty,
Pappu Bishwas,
Rajdeep Chatterjee,
Tathagata Bandyopadhyay,
Digonto Biswas,
Bibek Howlader
Abstract:
Environmental sound classification is a field of growing importance for urban monitoring and cultural soundscape analysis, especially within the acoustically rich environments of South Asia. These regions present a unique challenge as multiple natural, human, and cultural sounds often overlap, straining traditional methods that frequently rely on Mel Frequency Cepstral Coefficients (MFCC). This study introduces a novel spectrogram-based methodology with a superior ability to capture these complex auditory patterns. A Convolutional Neural Network (CNN) architecture is implemented to solve a demanding multilabel, multiclass classification problem on the SAS-KIIT dataset. To demonstrate robustness and comparability, the approach is also validated using the renowned UrbanSound8K dataset. The results confirm that the proposed spectrogram-based method significantly outperforms existing MFCC-based techniques, achieving higher classification accuracy across both datasets. This improvement lays the groundwork for more robust and accurate audio classification systems in real-world applications.
Submitted 9 March, 2026;
originally announced March 2026.
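For readers comparing the spectrogram input used above with MFCC features, a magnitude spectrogram is just a Hann-windowed short-time Fourier transform, computable in a few lines of numpy (the window and hop sizes here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def stft_spectrogram(signal, n_fft=512, hop=128):
    """Magnitude spectrogram via a Hann-windowed STFT.
    Returns an array of shape (n_freq_bins, n_frames); illustrative
    sketch of the CNN input, not the authors' pipeline."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T

# sanity check: a 440 Hz tone sampled at 16 kHz concentrates
# energy near bin round(440 * n_fft / sr) = 14
sr = 16000
t = np.arange(sr) / sr
spec = stft_spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = spec.mean(axis=1).argmax()
print(spec.shape, peak_bin)
```

Unlike MFCCs, which compress each frame to a handful of cepstral coefficients, the full spectrogram retains the time-frequency texture that overlapping environmental sounds produce, which is the property the study exploits.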
-
Mantle Convection and Nightside Volcanism on Lava World K2-141 b
Authors:
Tobias G. Meier,
Claire Marie Guimond,
Raymond T. Pierrehumbert,
Jayne Birkby,
Richard D. Chatterjee,
Chloe E. Fisher,
Gregor J. Golabek,
Mark Hammond,
Thaddeus D. Komacek,
Tim Lichtenberg,
Alex McGinty,
Erik Meier Valdés,
Harrison Nicholls,
Luke T. Parker,
Rob J. Spaargaren,
Paul J. Tackley
Abstract:
Ultra-short period lava worlds offer a unique window into the coupled evolution of planetary interior and atmospheres under extreme irradiation. In this study, we investigate the mantle dynamics, nightside volcanism, and volatile outgassing on lava world K2-141 b ($1.54 R_{\oplus}$, $5.31 M_{\oplus}$) using two-dimensional convection models with tracer-based volatile tracking. Our simulations explore a range of interior configurations, including models with and without plastic yielding, basal versus mixed heating, core cooling, and melt intrusion. In models without plastic yielding (i.e. with a strong lithosphere), we find that mantle upwellings form at the substellar and antistellar points, while downwellings form near the day-night terminators at the boundary between the magma ocean and cold, solid nightside. These downwellings facilitate the recycling of crustal material, representing a form of asymmetric, single-lid tectonics. The resulting magma ocean thickness varies from 200 to 300 km depending on the model parameters, corresponding to about 2-3% of the planet's radius. Continuous nightside volcanism produces a basaltic crust and gradually depletes the mantle of volatiles. We find that over a billion years, volcanic eruptions can outgas tens of bars of CO$_{2}$ and H$_{2}$O. We show that even relatively large volcanic eruptions on the nightside produce thermal emission signals of no more than 1 ppm, remaining below the current detectability threshold in thermal phase curves. However, for most models, outgassing rates are increased near the day-night terminators and future studies should assess whether such localised outgassing could lead to atmospheric signatures in transmission spectroscopy.
Submitted 2 March, 2026;
originally announced March 2026.
-
Explainable Continuous-Time Mask Refinement with Local Self-Similarity Priors for Medical Image Segmentation
Authors:
Rajdeep Chatterjee,
Sudip Chakrabarty,
Trishaani Acharjee
Abstract:
Accurate semantic segmentation of foot ulcers is essential for automated wound monitoring, yet boundary delineation remains challenging due to tissue heterogeneity and poor contrast with surrounding skin. To overcome the limitations of standard intensity-based networks, we present LSS-LTCNet: an ante-hoc explainable framework synergizing deterministic structural priors with continuous-time neural dynamics. Our architecture departs from traditional black-box models by employing a Local Self-Similarity (LSS) mechanism that extracts dense, illumination-invariant texture descriptors to explicitly disentangle necrotic tissue from background artifacts. To enforce topological precision, we introduce a Liquid Time-Constant (LTC) refinement module that treats boundary evolution as an ODE-governed dynamic system, iteratively refining masks over continuous time-steps. Comprehensive evaluation on the MICCAI FUSeg dataset demonstrates that LSS-LTCNet achieves state-of-the-art boundary alignment, securing a peak Dice score of 86.96% and an exceptional 95th percentile Hausdorff Distance (HD95) of 8.91 pixels. Requiring merely 25.70M parameters, the model significantly outperforms heavier U-Net and transformer baselines in efficiency. By providing inherent visual audit trails alongside high-fidelity predictions, LSS-LTCNet offers a robust and transparent solution for computer-aided diagnosis in mobile healthcare (mHealth) settings.
Submitted 27 February, 2026;
originally announced March 2026.
-
Assessing LLM Response Quality in the Context of Technology-Facilitated Abuse
Authors:
Vijay Prakash,
Majed Almansoori,
Donghan Hu,
Rahul Chatterjee,
Danny Yuxing Huang
Abstract:
Technology-facilitated abuse (TFA) is a pervasive form of intimate partner violence (IPV) that leverages digital tools to control, surveil, or harm survivors. While tech clinics are one of the reliable sources of support for TFA survivors, they face limitations due to staffing constraints and logistical barriers. As a result, many survivors turn to online resources for assistance. With the growing accessibility and popularity of large language models (LLMs), and increasing interest from IPV organizations, survivors may begin to consult LLM-based chatbots before seeking help from tech clinics.
In this work, we present the first expert-led manual evaluation of four LLMs - two widely used general-purpose non-reasoning models and two domain-specific models designed for IPV contexts - focused on their effectiveness in responding to TFA-related questions. Using real-world questions collected from literature and online forums, we assess the quality of zero-shot single-turn LLM responses generated with a survivor safety-centered prompt on criteria tailored to the TFA domain. Additionally, we conducted a user study to evaluate the perceived actionability of these responses from the perspective of individuals who have experienced TFA.
Our findings, grounded in both expert assessment and user feedback, provide insights into the current capabilities and limitations of LLMs in the TFA context and may inform the design, development, and fine-tuning of future models for this domain. We conclude with concrete recommendations to improve LLM performance for survivor support.
Submitted 11 January, 2026;
originally announced February 2026.
-
Weak Zero-Knowledge and One-Way Functions
Authors:
Rohit Chatterjee,
Yunqi Li,
Prashant Nalini Vasudevan
Abstract:
We study the implications of the existence of weak Zero-Knowledge (ZK) protocols for worst-case hard languages. These are protocols that have completeness, soundness, and zero-knowledge errors (denoted $ε_c$, $ε_s$, and $ε_z$, respectively) that might not be negligible. Under the assumption that there are worst-case hard languages in NP, we show the following:
1. If all languages in NP have NIZK proofs or arguments satisfying $ ε_c+ε_s+ ε_z < 1 $, then One-Way Functions (OWFs) exist.
This covers all possible non-trivial values for these error rates. It additionally implies that if all languages in NP have such NIZK proofs and $ε_c$ is negligible, then they also have NIZK proofs where all errors are negligible. Previously, these results were known under the more restrictive condition $ ε_c+\sqrt{ε_s}+ε_z < 1 $ [Chakraborty et al., CRYPTO 2025].
2. If all languages in NP have $k$-round public-coin ZK proofs or arguments satisfying $ ε_c+ε_s+(2k-1)\cdot ε_z < 1 $, then OWFs exist.
3. If, for some constant $k$, all languages in NP have $k$-round public-coin ZK proofs or arguments satisfying $ ε_c+ε_s+k\cdot ε_z < 1 $, then infinitely-often OWFs exist.
Submitted 17 February, 2026;
originally announced February 2026.
-
Making Databases Searchable with Deep Context
Authors:
Alekh Jindal,
Shi Qiao,
Shivani Tripathi,
Niloy Debnath,
Kunal Singh,
Pushpanjali Nema,
Sharath Prakash,
Aditya Halder,
Ronith PR,
Sadiq Mohammed,
Abdul Hameed,
Karan Hanswadkar,
Ayush Kshitij,
Sarthak Bhatt,
Rony Chatterjee,
Jyoti Pandey,
Christina Pavlopoulou,
Ravi Shetye
Abstract:
Databases are the most critical assets for enterprises, and yet they remain largely inaccessible to the people who make the most important decisions. In this paper, we describe the Tursio search platform, which builds an abstraction layer (a semantic knowledge graph) over the underlying databases to make them searchable in natural language. Tursio infuses large language models (LLMs) into every part of the query processing stack, including data modeling, query compilation, query planning, and result reasoning. This allows Tursio to process natural language queries systematically using techniques from traditional query planning and rewriting, rather than black-box memorization. We describe the architecture of Tursio in detail and present a comprehensive evaluation on production workloads as well as synthetic and realistic benchmarks. Our results show that Tursio achieves high accuracy while being efficient and scalable, making databases truly searchable for non-expert users.
Submitted 16 February, 2026; v1 submitted 9 February, 2026;
originally announced February 2026.
-
DeepSearchQA: Bridging the Comprehensiveness Gap for Deep Research Agents
Authors:
Nikita Gupta,
Riju Chatterjee,
Lukas Haas,
Connie Tao,
Andrew Wang,
Chang Liu,
Hidekazu Oiwa,
Elena Gribovskaya,
Jan Ackermann,
John Blitzer,
Sasha Goldshtein,
Dipanjan Das
Abstract:
We introduce DeepSearchQA, a 900-prompt benchmark for evaluating agents on difficult multi-step information-seeking tasks across 17 different fields. Unlike traditional benchmarks that target single answer retrieval or broad-spectrum factuality, DeepSearchQA features a dataset of challenging, handcrafted tasks designed to evaluate an agent's ability to execute complex search plans to generate exhaustive answer lists. This shift in design explicitly tests three critical, yet under-evaluated capabilities: 1) systematic collation of fragmented information from disparate sources, 2) de-duplication and entity resolution to ensure precision, and 3) the ability to reason about stopping criteria within an open-ended search space. Each task is structured as a causal chain, where discovering information for one step is dependent on the successful completion of the previous one, stressing long-horizon planning and context retention. All tasks are grounded in the open web with objectively verifiable answer sets. Our comprehensive evaluation of state-of-the-art agent architectures reveals significant performance limitations: even the most advanced models struggle to balance high recall with precision. We observe distinct failure modes ranging from premature stopping (under-retrieval) to hedging behaviors, where agents cast an overly wide net of low-confidence answers to artificially boost recall. These findings highlight critical headroom in current agent designs and position DeepSearchQA as an essential diagnostic tool for driving future research toward more robust, deep-research capabilities.
Submitted 28 January, 2026;
originally announced January 2026.
-
The Tensionless Lives of Null Strings
Authors:
Arjun Bagchi,
Aritra Banerjee,
Ritankar Chatterjee,
Priyadarshini Pandit
Abstract:
The tensionless limit probes the very high energy regime of string theory, in contrast to the well-studied point-particle limit, which reduces to Einstein gravity. Tensionless strings sweep out null worldsheets in the target space and hence are also called null strings. This article aims to provide a comprehensive review of tensionless null string theory, beginning with the initial work of Schild, continuing to the foundational work of Isberg et al. (ILST), and then focussing on developments in the past decade.
Recent work centres on the emergence of the Carrollian Conformal Algebra as the residual worldsheet symmetry of the ILST action and the identification of the tensionless limit as a Carrollian limit on the string worldsheet. Carrollian structures are used to address the classical and quantum aspects of the null string. In the classical theory, this limit agrees with the analysis from the ILST action. Symmetries, constraints, and mode expansions computed from both perspectives match, providing a robust cross-check of the analyses. We discuss closed and open null strings as well as their supersymmetric cousins.
The quantum null string comes with several surprises, the foremost of which is the emergence of three consistent quantum theories from the ILST action. We detail the canonical quantisation and the spectrum of this triumvirate of theories. We discuss the novelties of the quantum null theories and the effect compactification has on them. We also discuss Carroll strings and applications of these ideas to strings approaching black holes, and give a quick overview of other related developments.
Submitted 28 January, 2026;
originally announced January 2026.
-
"Lighting The Way For Those Not Here": How Technology Researchers Can Help Fight the Missing and Murdered Indigenous Relatives (MMIR) Crisis
Authors:
Naman Gupta,
Sophie Stephenson,
Chung Chi Yeung,
Wei Ting Wu,
Jeneile Luebke,
Kate Walsh,
Rahul Chatterjee
Abstract:
Indigenous peoples across Turtle Island (North America) face disproportionate rates of disappearance and murder, a "genocide" rooted in settler-colonial violence and systemic erasure. Technology plays a crucial role in the Missing and Murdered Indigenous Relatives (MMIR) crisis: perpetuating harm and impeding investigations, yet enabling advocacy and resistance. Communities utilize technologies such as AMBER alerts, news websites, social media groups, and campaigns (like #MMIW, #MMIWR, #NoMoreStolenSisters, and #NoMoreStolenDaughters) to mobilize searches, amplify awareness, and honor missing relatives. Yet, little research in HCI has critically examined technology's role in shaping the MMIR crisis by centering community voices. Through a large-scale study, we analyze 140 webpages to identify systemic, technological, and institutional barriers that hinder communities' efforts, while highlighting socio-technical actions that foster healing and safety. Finally, we amplify Indigenous voices by providing a dataset of stories that resist epistemic erasure, along with recommendations for HCI researchers to support Indigenous-led initiatives with cultural sensitivity, accountability, and self-determination.
Submitted 25 January, 2026;
originally announced January 2026.
-
Data-driven Lake Water Quality Forecasting for Time Series with Missing Data using Machine Learning
Authors:
Rishit Chatterjee,
Tahiya Chowdhury
Abstract:
Volunteer-led lake monitoring yields irregular, seasonal time series with many gaps arising from ice cover, weather-related access constraints, and occasional human errors, complicating forecasting and early warning of harmful algal blooms. We study Secchi Disk Depth (SDD) forecasting on a 30-lake, data-rich subset drawn from three decades of in situ records collected across Maine lakes. Missingness is handled via Multiple Imputation by Chained Equations (MICE), and we evaluate performance with a normalized Mean Absolute Error (nMAE) metric for cross-lake comparability. Among six candidates, ridge regression provides the best mean test performance. Using ridge regression, we then quantify the minimal sample size, showing that under a backward, recent-history protocol, the model reaches within 5% of full-history accuracy with approximately 176 training samples per lake on average. We also identify a minimal feature set, where a compact four-feature subset matches the thirteen-feature baseline within the same 5% tolerance. Bringing these results together, we introduce a joint feasibility function that identifies the minimal training history and fewest predictors sufficient to achieve the target of staying within 5% of the complete-history, full-feature baseline. In our study, meeting the 5% accuracy target required about 64 recent samples and just one predictor per lake, highlighting the practicality of targeted monitoring. Hence, our joint feasibility strategy unifies recent-history length and feature choice under a fixed accuracy target, yielding a simple, efficient rule for setting sampling effort and measurement priorities for lake researchers.
Submitted 21 January, 2026;
originally announced January 2026.
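As a rough illustration of the forecasting pipeline in the abstract above, the sketch below pairs closed-form ridge regression with a normalised MAE metric on synthetic data (normalising by the mean observed value and the penalty strength are assumptions; the paper's MICE imputation step is omitted here):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression with an unpenalised intercept
    (illustrative stand-in for the study's best model; lam is an
    assumed penalty, not a value from the paper)."""
    Xb = np.column_stack([np.ones(len(X)), X])   # prepend intercept
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    A[0, 0] -= lam                               # don't shrink intercept
    return np.linalg.solve(A, Xb.T @ y)

def ridge_predict(w, X):
    return np.column_stack([np.ones(len(X)), X]) @ w

def nmae(y_true, y_pred):
    """MAE normalised by the mean observed value, so scores are
    comparable across lakes with different typical depths."""
    return np.mean(np.abs(y_true - y_pred)) / np.mean(y_true)

# synthetic 'lake' with 4 predictors and a linear response
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = 5.0 + X @ np.array([0.8, -0.3, 0.0, 0.1]) + rng.normal(0, 0.1, 200)
w = ridge_fit(X[:150], y[:150], lam=0.5)
print(nmae(y[150:], ridge_predict(w, X[150:])))
```

The paper's feasibility question then becomes: how small can the training slice (here 150 rows) and the feature set (here 4 columns) be made while the nMAE stays within 5% of this full-data baseline.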
-
How Plasma Properties of the Fanaroff-Riley Jet can Shape its Morphology
Authors:
Priyesh Kumar Tripathi,
Indranil Chattopadhyay,
Raj Kishor Joshi,
Ritaban Chatterjee,
Sanjit Debnath,
M. Saleem Khan
Abstract:
Extragalactic jets are broadly classified into two categories based on radio observations: core-brightened jets, known as Fanaroff-Riley Type I (FR I), and edge-brightened jets, classified as Type II (FR II). This FR dichotomy may arise due to variation in the ambient medium and/or the properties of the jet itself, such as injection speed, temperature, composition, magnetization, etc. To investigate this, we perform large-scale three-dimensional magnetohydrodynamic (3D-MHD) simulations of low-power, supersonic jets extending to kiloparsec scales. We inject a jet beam carrying an initially toroidal magnetic field into a denser, unmagnetized, and stratified ambient medium through a cylindrical nozzle. Our simulations explore jets with varying injection parameters to investigate their impact on morphology and emission properties. Furthermore, we examine jets with significantly different plasma compositions, such as hadronic and mixed electron-positron-proton configurations, to study the conditions that may drive transitions between FR I and FR II morphologies. We find that, under the same injection parameters, mixed plasma composition jets tend to evolve into FR I structures. In contrast, electron-proton jets exhibit a transition between FR I and FR II morphologies at different stages of their evolution.
Submitted 14 January, 2026;
originally announced January 2026.
-
SlimEdge: Performance and Device Aware Distributed DNN Deployment on Resource-Constrained Edge Hardware
Authors:
Mahadev Sunil Kumar,
Arnab Raha,
Debayan Das,
Gopakumar G,
Rounak Chatterjee,
Amitava Mukherjee
Abstract:
Distributed deep neural networks (DNNs) have become central to modern computer vision, yet their deployment on resource-constrained edge devices remains hindered by substantial parameter counts, computational demands, and the probability of device failure. Here, we present an approach to the efficient deployment of distributed DNNs that jointly respects hardware limitations, preserves task performance, and remains robust to partial system failures. Our method integrates structured model pruning with a multi-objective optimization framework to tailor network capacity to heterogeneous device constraints, while explicitly accounting for device availability and failure probability during deployment. We demonstrate this framework using Multi-View Convolutional Neural Networks (MVCNN), a state-of-the-art architecture for 3D object recognition, by quantifying the contribution of individual views to classification accuracy and allocating pruning budgets accordingly. Experimental results show that the resulting models satisfy user-specified bounds on accuracy and memory footprint, even under multiple simultaneous device failures. Inference time is reduced by up to 4.7x across diverse simulated device configurations. These findings suggest that performance-aware, view-adaptive, and failure-resilient compression provides a viable pathway for deploying complex vision models in distributed edge environments.
Submitted 15 February, 2026; v1 submitted 10 December, 2025;
originally announced December 2025.
-
AUDRON: A Deep Learning Framework with Fused Acoustic Signatures for Drone Type Recognition
Authors:
Rajdeep Chatterjee,
Sudip Chakrabarty,
Trishaani Acharjee,
Deepanjali Mishra
Abstract:
Unmanned aerial vehicles (UAVs), commonly known as drones, are increasingly used across diverse domains, including logistics, agriculture, surveillance, and defense. While these systems provide numerous benefits, their misuse raises safety and security concerns, making effective detection mechanisms essential. Acoustic sensing offers a low-cost and non-intrusive alternative to vision- or radar-based detection, as drone propellers generate distinctive sound patterns. This study introduces AUDRON (AUdio-based Drone Recognition Network), a hybrid deep learning framework for drone sound detection, employing a combination of Mel-Frequency Cepstral Coefficients (MFCC), Short-Time Fourier Transform (STFT) spectrograms processed with convolutional neural networks (CNNs), recurrent layers for temporal modeling, and autoencoder-based representations. Feature-level fusion integrates complementary information before classification. Experimental evaluation demonstrates that AUDRON effectively differentiates drone acoustic signatures from background noise, achieving high accuracy while maintaining generalizability across varying conditions. AUDRON achieves 98.51 percent and 97.11 percent accuracy in binary and multiclass classification, respectively. The results highlight the advantage of combining multiple feature representations with deep learning for reliable acoustic drone detection, suggesting the framework's potential for deployment in security and surveillance applications where visual or radar sensing may be limited.
△ Less
Submitted 30 December, 2025; v1 submitted 23 December, 2025;
originally announced December 2025.
-
Explainable Transformer-CNN Fusion for Noise-Robust Speech Emotion Recognition
Authors:
Sudip Chakrabarty,
Pappu Bishwas,
Rajdeep Chatterjee
Abstract:
Speech Emotion Recognition (SER) systems often degrade in performance when exposed to the unpredictable acoustic interference found in real-world environments. Additionally, the opacity of deep learning models hinders their adoption in trust-sensitive applications. To bridge this gap, we propose a Hybrid Transformer-CNN framework that unifies the contextual modeling of Wav2Vec 2.0 with the spectra…
▽ More
Speech Emotion Recognition (SER) systems often degrade in performance when exposed to the unpredictable acoustic interference found in real-world environments. Additionally, the opacity of deep learning models hinders their adoption in trust-sensitive applications. To bridge this gap, we propose a Hybrid Transformer-CNN framework that unifies the contextual modeling of Wav2Vec 2.0 with the spectral stability of 1D-Convolutional Neural Networks. Our dual-stream architecture processes raw waveforms to capture long-range temporal dependencies while simultaneously extracting noise-resistant spectral features (MFCC, ZCR, RMSE) via a custom Attentive Temporal Pooling mechanism. We conducted extensive validation across four diverse benchmark datasets: RAVDESS, TESS, SAVEE, and CREMA-D. To rigorously test robustness, we subjected the model to non-stationary acoustic interference using real-world noise profiles from the SAS-KIIT dataset. The proposed framework demonstrates superior generalization and state-of-the-art accuracy across all datasets, significantly outperforming single-branch baselines under realistic environmental interference. Furthermore, we address the ``black-box" problem by integrating SHAP and Score-CAM into the evaluation pipeline. These tools provide granular visual explanations, revealing how the model strategically shifts attention between temporal and spectral cues to maintain reliability in the presence of complex environmental noise.
△ Less
Submitted 20 December, 2025;
originally announced December 2025.
-
Pseudo-Legendrian and Legendrian Simplicity of Links in 3-Manifolds
Authors:
Patricia Cahn,
Rima Chatterjee,
Vladimir Chernov
Abstract:
We construct infinite families of non-simple isotopy classes of links in overtwisted contact structures on $S^1$-bundles over surfaces. These examples include: (1) a pair of Legendrian links that are not Legendrian isotopic, but which are isotopic as framed links, homotopic as Legendrian immersed multi-curves, and have Legendrian-isotopic components and (2) a pair of Legendrian links that are not…
▽ More
We construct infinite families of non-simple isotopy classes of links in overtwisted contact structures on $S^1$-bundles over surfaces. These examples include: (1) a pair of Legendrian links that are not Legendrian isotopic, but which are isotopic as framed links, homotopic as Legendrian immersed multi-curves, and have Legendrian-isotopic components and (2) a pair of Legendrian links that are not Legendrian isotopic, but are isotopic as framed links, homotopic as Legendrian immersed multi-curves, and which are link-homotopic as Legendrian links. Moreover, we construct examples showing that both of these non-simplicity phenomena can occur in the same smooth isotopy class. To construct these examples, we develop the theory of links transverse to a nowhere-zero vector field in a 3-manifold, and construct analogous examples in the category of links transverse to a vector field.
△ Less
Submitted 19 January, 2026; v1 submitted 19 December, 2025;
originally announced December 2025.
-
JWST NIRSpec finds no clear signs of an atmosphere on TOI-1685 b
Authors:
Chloe E. Fisher,
Matthew J. Hooton,
Amélie Gressier,
Merlin Zgraggen,
Meng Tian,
Kevin Heng,
Natalie H. Allen,
Richard D. Chatterjee,
Brett M. Morris,
Nicholas W. Borsato,
Néstor Espinoza,
Daniel Kitzmann,
Tobias G. Meier,
Lars A. Buchhave,
Adam J. Burgasser,
Brice-Olivier Demory,
Mark Fortune,
H. Jens Hoeijmakers,
Raphael Luque,
Erik A. Meier Valdés,
João M. Mendonça,
Bibiana Prinoth,
Alexander D. Rathcke,
Jake Taylor
Abstract:
Determining the prevalence of atmospheres on terrestrial planets is a core objective in exoplanetary science. While M dwarf systems offer a promising opportunity, conclusive observations of terrestrial atmospheres have remained elusive, with many yielding flat transmission spectra. We observe four transits of the hot terrestrial planet TOI-1685 b using JWST's NIRSpec G395H instrument. Combining th…
▽ More
Determining the prevalence of atmospheres on terrestrial planets is a core objective in exoplanetary science. While M dwarf systems offer a promising opportunity, conclusive observations of terrestrial atmospheres have remained elusive, with many yielding flat transmission spectra. We observe four transits of the hot terrestrial planet TOI-1685 b using JWST's NIRSpec G395H instrument. Combining these with the transit from the phase curve of the planet previously observed with the same instrument, we perform a detailed analysis to determine the possibility of an atmosphere on TOI-1685 b. From our retrievals, the Bayesian evidence favours a simple flat line model, indicating no evidence for an atmosphere on TOI-1685 b, in line with results from the phase curve analysis. Our results show that hydrogen-dominated atmospheres can be confidently ruled out. For heavier, secondary atmospheres we find a lower limit on the mean molecular weight of ~10, at a significance of ~5 sigma. Pure CO2, SO2, H2O, and CH4 atmospheres, or a mixed secondary atmosphere (CO+CO2+SO2), could explain the data (Delta lnZ < 3). However, pure CH4 atmospheres may be physically unlikely, and the pure H2O and CO2 cases require a high-altitude cloud, which could also be interpreted as a thin cloud-free atmosphere. We discuss the theoretical possibility for different types of atmosphere on this planet, and consider the effects of atmospheric escape and stellar activity on the system. Though we find that TOI-1685 b is likely a bare rock, this study also highlights the challenges of detecting secondary atmospheres on rocky planets with JWST.
△ Less
Submitted 18 December, 2025; v1 submitted 17 December, 2025;
originally announced December 2025.
-
RLAX: Large-Scale, Distributed Reinforcement Learning for Large Language Models on TPUs
Authors:
Runlong Zhou,
Lefan Zhang,
Shang-Chen Wu,
Kelvin Zou,
Hanzhi Zhou,
Ke Ye,
Yihao Feng,
Dong Yin,
Alex Guillen Garcia,
Dmytro Babych,
Rohit Chatterjee,
Matthew Hopkins,
Xiang Kong,
Chang Lan,
Lezhi Li,
Yiping Ma,
Daniele Molinari,
Senyu Tong,
Yanchao Sun,
Thomas Voice,
Jianyu Wang,
Chong Wang,
Simon Wang,
Floris Weers,
Yechen Xu
, et al. (7 additional authors not shown)
Abstract:
Reinforcement learning (RL) has emerged as the de-facto paradigm for improving the reasoning capabilities of large language models (LLMs). We have developed RLAX, a scalable RL framework on TPUs. RLAX employs a parameter-server architecture. A master trainer periodically pushes updated model weights to the parameter server while a fleet of inference workers pull the latest weights and generates ne…
▽ More
Reinforcement learning (RL) has emerged as the de facto paradigm for improving the reasoning capabilities of large language models (LLMs). We have developed RLAX, a scalable RL framework on TPUs. RLAX employs a parameter-server architecture: a master trainer periodically pushes updated model weights to the parameter server, while a fleet of inference workers pulls the latest weights and generates new rollouts. We introduce a suite of system techniques to enable scalable and preemptible RL for a diverse set of state-of-the-art RL algorithms. To accelerate convergence and improve model quality, we have devised new dataset curation and alignment techniques. Large-scale evaluations show that RLAX improves QwQ-32B's pass@8 accuracy by 12.8% in just 12 hours 48 minutes on 1024 v5p TPUs, while remaining robust to preemptions during training.
△ Less
Submitted 10 December, 2025; v1 submitted 6 December, 2025;
originally announced December 2025.
-
Initial state and evolution of hot and dense medium produced in isobaric collisions at 200A GeV at RHIC
Authors:
Amit Paul,
Rupa Chatterjee
Abstract:
Isobaric collisions provide a unique opportunity to investigate how variations in the charge to mass ratio affect the final state observables produced in relativistic heavy ion collisions. Most importantly, isobaric systems that differ in their nuclear structure offer valuable insights into the underlying nuclear geometries, making them powerful tools to probe the role of nuclear structure using h…
▽ More
Isobaric collisions provide a unique opportunity to investigate how variations in the charge to mass ratio affect the final state observables produced in relativistic heavy ion collisions. Most importantly, isobaric systems that differ in their nuclear structure offer valuable insights into the underlying nuclear geometries, making them powerful tools to probe the role of nuclear structure using heavy ion collisions. We study the initial state and evolution of the hot and dense medium formed in Ru+Ru and Zr+Zr collisions at 200A GeV at RHIC using a relativistic hydrodynamical model. The initial geometry of the two isobaric collisions is found to influence the evolution of the hot and dense medium produced. The sensitivity of photon production, charged particle spectra and anisotropic flow coefficients ($v_n$) to the initial geometry, including different orientations of the isobaric set, has been studied in detail. Significant variations in anisotropic flow of photons and hadrons are observed, highlighting the role of nuclear deformation in shaping final state observables. Moreover, photon anisotropic flow is found to be considerably more sensitive to the initial state than charged particle anisotropic flow, indicating that photon measurements in isobaric collisions have strong potential to constrain initial state modeling and improve our understanding of QGP properties in such systems.
△ Less
Submitted 3 December, 2025;
originally announced December 2025.
-
Enhancing Machine Learning Model Efficiency through Quantization and Bit Depth Optimization: A Performance Analysis on Healthcare Data
Authors:
Mitul Goswami,
Romit Chatterjee
Abstract:
This research aims to optimize intricate learning models by implementing quantization and bit-depth optimization techniques. The objective is to significantly cut time complexity while preserving model efficiency, thus addressing the challenge of extended execution times in intricate models. Two medical datasets were utilized as case studies to apply a Logistic Regression (LR) machine learning mod…
▽ More
This research aims to optimize intricate learning models by implementing quantization and bit-depth optimization techniques. The objective is to significantly cut time complexity while preserving model efficiency, thus addressing the challenge of extended execution times in intricate models. Two medical datasets were utilized as case studies to apply a Logistic Regression (LR) machine learning model. Using efficient quantization and bit-depth optimization strategies, the input data is downscaled from float64 to float32 and int32. The results demonstrated a significant reduction in time complexity, with only a minimal decrease in model accuracy post-optimization, showcasing the effectiveness of the optimization approach. This comprehensive study concludes that the impact of these optimization techniques varies depending on a set of parameters.
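The downcasting step described in this abstract can be illustrated with a minimal sketch (a hypothetical example, not the authors' code; the data, weights, and names are invented): converting a float64 feature matrix to float32 halves its memory footprint while leaving the decisions of a logistic-style model essentially unchanged.

```python
# Hypothetical sketch of bit-depth optimization: downcast features from
# float64 to float32 and compare memory use and model decisions.
import numpy as np

rng = np.random.default_rng(0)
X64 = rng.standard_normal((1000, 20))        # float64 features (NumPy default)
w = rng.standard_normal(20)                  # assumed weights of a fitted logistic model

X32 = X64.astype(np.float32)                 # bit-depth reduction

def predict(X, w):
    """Logistic-regression-style decision: sigmoid(Xw) thresholded at 0.5."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return (p >= 0.5).astype(int)

y64 = predict(X64, w)
y32 = predict(X32, w.astype(np.float32))

agreement = (y64 == y32).mean()              # fraction of identical decisions
memory_saving = 1 - X32.nbytes / X64.nbytes  # exactly 0.5: half the memory
print(agreement, memory_saving)
```

Decisions can only differ where the decision score sits within float32 rounding error of the threshold, which is why accuracy degrades so little under this kind of downscaling.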
△ Less
Submitted 16 November, 2025;
originally announced November 2025.
-
MiVID: Multi-Strategic Self-Supervision for Video Frame Interpolation using Diffusion Model
Authors:
Priyansh Srivastava,
Romit Chatterjee,
Abir Sen,
Aradhana Behura,
Ratnakar Dash
Abstract:
Video Frame Interpolation (VFI) remains a cornerstone in video enhancement, enabling temporal upscaling for tasks like slow-motion rendering, frame rate conversion, and video restoration. While classical methods rely on optical flow and learning-based models assume access to dense ground-truth, both struggle with occlusions, domain shifts, and ambiguous motion. This article introduces MiVID, a lig…
▽ More
Video Frame Interpolation (VFI) remains a cornerstone in video enhancement, enabling temporal upscaling for tasks like slow-motion rendering, frame rate conversion, and video restoration. While classical methods rely on optical flow and learning-based models assume access to dense ground truth, both struggle with occlusions, domain shifts, and ambiguous motion. This article introduces MiVID, a lightweight, self-supervised, diffusion-based framework for video interpolation. Our model eliminates the need for explicit motion estimation by combining a 3D U-Net backbone with transformer-style temporal attention, trained under a hybrid masking regime that simulates occlusions and motion uncertainty. The use of cosine-based progressive masking and adaptive loss scheduling allows our network to learn robust spatiotemporal representations without any high-frame-rate supervision. Our framework is evaluated on the UCF101-7 and DAVIS-7 datasets. MiVID is trained entirely on CPU using these datasets and 9-frame video segments, making it a low-resource yet highly effective pipeline. Despite these constraints, our model achieves optimal results at just 50 epochs, competitive with several supervised baselines. This work demonstrates the power of self-supervised diffusion priors for temporally coherent frame synthesis and provides a scalable path toward accessible and generalizable VFI systems.
△ Less
Submitted 8 November, 2025;
originally announced November 2025.
-
Non-loose Legendrian Hopf links in lens spaces
Authors:
Rima Chatterjee
Abstract:
We give a complete classification of non-loose Legendrian Hopf links in $L(p,q)$ generalizing a result of the author with Geiges and Onaran. The classification is for non-loose Hopf links for both zero and non-zero Giroux torsion in their complement. We also give an explicit algorithm for the contact surgery diagrams for all these Legendrian representatives with no Giroux torsion in their compleme…
▽ More
We give a complete classification of non-loose Legendrian Hopf links in $L(p,q)$ generalizing a result of the author with Geiges and Onaran. The classification is for non-loose Hopf links for both zero and non-zero Giroux torsion in their complement. We also give an explicit algorithm for the contact surgery diagrams for all these Legendrian representatives with no Giroux torsion in their complement.
△ Less
Submitted 23 October, 2025;
originally announced October 2025.
-
Effect of Isolation Criteria on Prompt Photon Production in Relativistic Nuclear Collisions
Authors:
Sinjini Chandra,
Rupa Chatterjee,
Zubayer Ahammed
Abstract:
Prompt photon measurements in relativistic nuclear collisions serve as an essential comparative basis for heavy ion studies enabling the separation of medium induced effects. However, the identification of prompt photons is experimentally challenging due to substantial backgrounds from photons produced in hadron decays and jet fragmentation. Appropriate isolation criteria are applied to suppress t…
▽ More
Prompt photon measurements in relativistic nuclear collisions serve as an essential comparative basis for heavy ion studies enabling the separation of medium induced effects. However, the identification of prompt photons is experimentally challenging due to substantial backgrounds from photons produced in hadron decays and jet fragmentation. Appropriate isolation criteria are applied to suppress these background contributions. We analyze prompt photon spectra using the JETPHOX framework to quantify the relative contributions of fragmentation and direct production mechanisms to the total photon yield. We perform a systematic study of the impact of isolation criteria on prompt photon production in relativistic nuclear collisions with emphasis on their dependence on beam energy and photon transverse momentum. The fragmentation contribution is found to be substantially large particularly for $p_T < 15$ GeV and the isolation criterion plays a crucial role in the analysis of prompt photons in that $p_T$ region. A dynamical isolation criterion suppresses the fragmentation component more effectively than a fixed one in this region. Furthermore, the isolation criterion shows a stronger dependence on beam energy and photon $p_T$ than on system size. These observations emphasize the importance of employing carefully selected and consistent isolation criteria when comparing experimental data with theoretical calculations especially for observables sensitive to fragmentation.
△ Less
Submitted 16 October, 2025;
originally announced October 2025.
-
Decoding Balanced Linear Codes With Preprocessing
Authors:
Andrej Bogdanov,
Rohit Chatterjee,
Yunqi Li,
Prashant Nalini Vasudevan
Abstract:
Prange's information set algorithm is a decoding algorithm for arbitrary linear codes. It decodes corrupted codewords of any $\mathbb{F}_2$-linear code $C$ of message length $n$ up to relative error rate $O(\log n / n)$ in $\mathsf{poly}(n)$ time. We show that the error rate can be improved to $O((\log n)^2 / n)$, provided: (1) the decoder has access to a polynomial-length advice string that depen…
▽ More
Prange's information set algorithm is a decoding algorithm for arbitrary linear codes. It decodes corrupted codewords of any $\mathbb{F}_2$-linear code $C$ of message length $n$ up to relative error rate $O(\log n / n)$ in $\mathsf{poly}(n)$ time. We show that the error rate can be improved to $O((\log n)^2 / n)$, provided: (1) the decoder has access to a polynomial-length advice string that depends on $C$ only, and (2) $C$ is $n^{-Ω(1)}$-balanced.
As a consequence we improve the error tolerance in decoding random linear codes if inefficient preprocessing of the code is allowed. This reveals potential vulnerabilities in cryptographic applications of Learning Noisy Parities with low noise rate.
Our main technical result is that the Hamming weight of $Hw$, where $H$ is a random sample of *short dual* codewords, measures the proximity of a word $w$ to the code in the regime of interest. Given such $H$ as advice, our algorithm corrects errors by locally minimizing this measure. We show that for most codes, the error rate tolerated by our decoder is asymptotically optimal among all algorithms whose decision is based on thresholding $Hw$ for an arbitrary polynomial-size advice matrix $H$.
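As a toy illustration of this proximity measure (an invented example with small parameters; the paper samples *short* dual codewords as advice, whereas this sketch simply uses a full dual basis over GF(2)): the Hamming weight of $Hw$ vanishes on codewords, becomes positive after a bit flip, and is the quantity the decoder locally minimizes.

```python
import numpy as np

def gf2_dual_basis(G):
    """Basis of the dual code {h : G h = 0 (mod 2)} via GF(2) Gaussian elimination."""
    A = G.copy() % 2
    rows, cols = A.shape
    pivots, r = [], 0
    for c in range(cols):
        hit = next((i for i in range(r, rows) if A[i, c]), None)
        if hit is None:
            continue
        A[[r, hit]] = A[[hit, r]]           # move pivot row into place
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]                # eliminate column c elsewhere
        pivots.append(c)
        r += 1
    free = [c for c in range(cols) if c not in pivots]
    basis = []
    for f in free:                          # one dual vector per free column
        h = np.zeros(cols, dtype=G.dtype)
        h[f] = 1
        for row, p in enumerate(pivots):
            h[p] = A[row, f]
        basis.append(h)
    return np.array(basis)

rng = np.random.default_rng(7)
n, k = 24, 8
G = rng.integers(0, 2, (k, n))              # generator matrix of a random linear code
H = gf2_dual_basis(G)                       # "advice": parity checks from the dual code

def proximity(w):
    return int(np.sum(H @ w % 2))           # Hamming weight of Hw

c = rng.integers(0, 2, k) @ G % 2           # a random codeword
w = c.copy()
w[3] ^= 1                                   # corrupt a single bit
print(proximity(c), proximity(w))
```

Correcting errors by flipping whichever bit most reduces this weight is the local-minimization idea the abstract describes; the thresholding of $Hw$ mentioned in the optimality claim refers to decision rules built on exactly this statistic.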
△ Less
Submitted 16 October, 2025;
originally announced October 2025.
-
Memory behavior of a randomly driven model glass
Authors:
Roni Chatterjee,
Smarajit Karmakar,
Muhittin Mungan,
Damien Vandembroucq
Abstract:
We investigate by atomistic simulations the memory behavior a model glass subjected to random driving protocols. The training consists of a random walk of forward and/or backward shearing sequences bounded by a maximal shear strain of absolute value γT . We show that such a stochastic training protocol is able to record the training amplitude. Different read-out protocols are also tested and are s…
▽ More
We investigate by atomistic simulations the memory behavior of a model glass subjected to random driving protocols. The training consists of a random walk of forward and/or backward shearing sequences bounded by a maximal shear strain of absolute value $γ_T$. We show that such a stochastic training protocol is able to record the training amplitude. Different read-out protocols are also tested and are shown to be able to retrieve the training amplitude. We then emphasize the tensorial character of the memory encoded in the glass sample and characterize the anisotropic mechanical behavior of the trained samples.
△ Less
Submitted 6 October, 2025;
originally announced October 2025.
-
Public-Key Encryption from the MinRank Problem
Authors:
Rohit Chatterjee,
Changrui Mu,
Prashant Nalini Vasudevan
Abstract:
We construct a public-key encryption scheme from the hardness of the (planted) MinRank problem over uniformly random instances. This corresponds to the hardness of decoding random linear rank-metric codes. Existing constructions of public-key encryption from such problems require hardness for structured instances arising from the masking of efficiently decodable codes. Central to our construction…
▽ More
We construct a public-key encryption scheme from the hardness of the (planted) MinRank problem over uniformly random instances. This corresponds to the hardness of decoding random linear rank-metric codes. Existing constructions of public-key encryption from such problems require hardness for structured instances arising from the masking of efficiently decodable codes. Central to our construction is the development of a new notion of duality for rank-metric codes.
△ Less
Submitted 4 October, 2025;
originally announced October 2025.
-
Spectral nature of Sco X-1 observed using the X-ray SPECtroscopy and Timing (XSPECT) payload on-board XPoSat
Authors:
V. P. Shyam Prakash,
Vivek K. Agrawal,
Rwitika Chatterjee,
Radhakrishna Vatedka,
Koushal Vadodariya,
A. M. Vinodkumar
Abstract:
Scorpius X-1 is the brightest and first discovered X-ray source in the sky. Studying this source in the low-energy band has been challenging in the past due to its high brightness. However, with the X-ray SPECtroscopy and Timing (XSPECT) payload on-board Indias first X-ray Polarimetry Satellite (XPoSat), we have the capability to study the source despite its very high brightness, thanks to the fas…
▽ More
Scorpius X-1 is the brightest and first discovered X-ray source in the sky. Studying this source in the low-energy band has been challenging in the past due to its high brightness. However, with the X-ray SPECtroscopy and Timing (XSPECT) payload on-board India's first X-ray Polarimetry Satellite (XPoSat), we have the capability to study the source despite its very high brightness, thanks to the fast (1 ms) readout of the instrument. We investigate the evolution of the spectral and timing properties of Sco X-1 across the horizontal, normal, and flaring branches, as observed with XSPECT. We examine changes in the spectral parameters as a function of position on the color-color diagram (CCD). Spectral studies indicate that the soft X-ray emission can be modeled using a multicolor disk component, with the inner disk temperature ranging from 0.6 to 0.8 keV. The hard component is described by a Comptonized continuum using either the nthComp or Comptb model, with electron temperatures from 2.4 to 4.7 keV and optical depth between 5 and 14. Additionally, we observe the presence of an iron K-alpha line at 6.6 keV and an iron K-beta line at 7.6 keV. Both spectral models suggest a steep rise in Comptonization flux as well as disk flux in the flaring branch. An increase in neutron star blackbody temperature and inner disk temperature is also observed during flaring. The Z-track is driven by changes in the optical depth of the corona, the Comptonization flux, the disk flux, and the inner disk temperature. No quasi-periodic oscillations are detected in any branch, suggesting their association with the high-energy spectrum.
△ Less
Submitted 3 October, 2025;
originally announced October 2025.
-
Privacy-Preserving Performance Profiling of In-The-Wild GPUs
Authors:
Ian McDougall,
Michael Davies,
Rahul Chatterjee,
Somesh Jha,
Karthikeyan Sankaralingam
Abstract:
GPUs are the dominant platform for many important applications today including deep learning, accelerated computing, and scientific simulation. However, as the complexity of both applications and hardware increases, GPU chip manufacturers face a significant challenge: how to gather comprehensive performance characteristics and value profiles from GPUs deployed in real-world scenarios. Such data, e…
▽ More
GPUs are the dominant platform for many important applications today, including deep learning, accelerated computing, and scientific simulation. However, as the complexity of both applications and hardware increases, GPU chip manufacturers face a significant challenge: how to gather comprehensive performance characteristics and value profiles from GPUs deployed in real-world scenarios. Such data, encompassing the types of kernels executed and the time spent in each, is crucial for optimizing chip design and enhancing application performance. Unfortunately, despite the availability of low-level tools like NSYS and NCU, current methodologies fall short, offering data collection capabilities only on an individual user basis rather than a broader, more informative fleet-wide scale. This paper takes on the problem of realizing a system that allows planet-scale, real-time GPU performance profiling of low-level hardware characteristics. The three fundamental problems we solve are: i) user experience, achieving this with no slowdown; ii) preserving user privacy, so that no third party is aware of what applications any user runs; and iii) efficacy, showing we are able to collect data and assign it to applications even when run on thousands of GPUs. Our results, which simulate a deployment of 100,000 GPUs running applications from the Torchbench suite, show that our system addresses all three problems.
△ Less
Submitted 25 September, 2025;
originally announced September 2025.
-
Probing Initial State Clustering through Photon Anisotropic Flow in 7A TeV $^{16}$O+$^{16}$O Collisions at the LHC
Authors:
Sanchari Thakur,
Pingal Dasgupta,
Rupa Chatterjee,
Sinjini Chandra,
Sidharth K. Prasad
Abstract:
The presence of $α$ clustered structures in light nuclei can enhance the initial spatial anisotropies in relativistic nuclear collisions relative to those arising from nuclei with uniform density distributions. Thus, observables that are strongly sensitive to the initial geometry can be a more efficient probe of the clustered structures than observables dominated by final state dynamics. We invest…
▽ More
The presence of $α$ clustered structures in light nuclei can enhance the initial spatial anisotropies in relativistic nuclear collisions relative to those arising from nuclei with uniform density distributions. Thus, observables that are strongly sensitive to the initial geometry can be a more efficient probe of the clustered structures than observables dominated by final state dynamics. We investigate the collisions of $α$ clustered oxygen nuclei at $\sqrt{s_{NN}}=7$A TeV at the LHC using the GLISSANDO initial state model along with the MUSIC event-by-event hydrodynamical framework. The tetrahedral $α$ clustered structure of $^{16}$O leads to significantly larger initial triangular eccentricity $ε_3$ than collisions with uniform density distributions, especially in the most central events. The spatial eccentricity $ε_2$ is found to be relatively less sensitive to the initial state clustered structure. The production of thermal photons is estimated to be only marginally influenced by clustering for both central and peripheral collisions. In contrast, the photon triangular flow coefficient $v_3(p_T)$ is strongly affected by initial state clustering, resulting in substantially larger values in both central and peripheral collisions. An experimental determination of photon anisotropic flow, together with the ratios of flow coefficients in $^{16}$O+$^{16}$O collisions, is therefore expected to provide valuable insight into the possible clustered structure of light nuclei and to constrain parameters in theoretical modeling.
△ Less
Submitted 25 September, 2025;
originally announced September 2025.
-
Boundary Carroll CFTs: SUSY and Superstrings
Authors:
Arjun Bagchi,
Shankhadeep Chakrabortty,
Pronoy Chakraborty,
Ritankar Chatterjee,
Priyadarshini Pandit
Abstract:
We consider two dimensional superconformal Carrollian theories with boundaries and construct two variants of the Boundary Superconformal Carrollian Algebra (BSCCA), viz. the Homogeneous and the Inhomogeneous, by making appropriate identification of the parent superconformal Carrollian algebras. These new algebras are then recovered by appropriate limits of a single copy of Super Virasoro algebra.…
▽ More
We consider two dimensional superconformal Carrollian theories with boundaries and construct two variants of the Boundary Superconformal Carrollian Algebra (BSCCA), viz. the Homogeneous and the Inhomogeneous, by making appropriate identification of the parent superconformal Carrollian algebras. These new algebras are then recovered by appropriate limits of a single copy of Super Virasoro algebra. We then focus on the theory of null tensionless superstrings and construct, for the first time, an open null superstring. The Homogeneous version of the BSCCA is realised as worldsheet symmetries on this open null superstring.
△ Less
Submitted 27 August, 2025;
originally announced August 2025.
-
Entanglement harvesting and curvature of entanglement: A modular operator approach
Authors:
Rupak Chatterjee
Abstract:
An operator-algebraic framework based on Tomita-Takesaki modular theory is used to study aspects of quantum entanglement via the application of the modular conjugation operator $J$. The entanglement structure of quantum fields is studied through the protocol of entanglement harvesting whereby quantum correlations evolve through the time evolution of qubit detectors coupled to a Bosonic field. Modu…
▽ More
An operator-algebraic framework based on Tomita-Takesaki modular theory is used to study aspects of quantum entanglement via the application of the modular conjugation operator $J$. The entanglement structure of quantum fields is studied through the protocol of entanglement harvesting, whereby quantum correlations evolve through the time evolution of qubit detectors coupled to a bosonic field. Modular conjugation operators are constructed for Unruh-DeWitt type qubits interacting with a scalar field such that initially unentangled qubits become entangled. The entanglement harvested in this process is directly quantified by an expectation value involving $J$, offering a physical application of this operator. The modular operator formalism is then extended to the Markovian open system dynamics of coupled qubits by expressing entanglement monotones as functionals of a state $ρ$ and its modular reflection $JρJ$. The second derivative of such functionals with respect to an external coupling parameter, termed the curvature of entanglement, provides a natural measure of entanglement sensitivity. At points of modular self-duality, the curvature of entanglement coincides with the quantum Fisher information measure. These results demonstrate that the modular conjugation operator $J$ captures both the harvesting of entanglement from quantum fields and the curvature of entanglement in coupled qubit dynamics, providing parallel modular structures that connect these systems.
Submitted 16 October, 2025; v1 submitted 17 August, 2025;
originally announced August 2025.
-
Generation and certification of pure phase entangled light
Authors:
Rounak Chatterjee,
Mayuresh Kanagal,
Vikas S Bhat,
Kiran Bajar,
Sushil Mujumdar
Abstract:
Biphoton systems exhibiting entanglement in position-momentum variables, known as spatial entanglement, are among the most intriguing and well-studied phenomena in quantum optics. A notable subset of these are phase entangled states, where entanglement manifests purely through correlations in the spatial phase of the wavefunction. While the generation of such states from biphotons via spontaneous parametric down-conversion has been explored, their physical implications and applications remain under-investigated. In this work, we theoretically and experimentally examine a unique form of phase entanglement known as `pure' phase entanglement. This state exhibits the unusual feature that the position of one photon is correlated with the momentum of the other. Unlike typical spatially entangled states, it shows no direct correlation in position or momentum between the two photons, underscoring that all correlations arise purely from the spatial phase of the wavefunction. We delve deeper into the theory of this state and experimentally construct it from known phase-entangled states. To certify its properties, we propose a setup that performs a "one-particle momentum measurement" and explore the various tunable parameters. We also highlight potential applications of this state in quantum optics and imaging experiments.
Submitted 25 August, 2025; v1 submitted 15 August, 2025;
originally announced August 2025.
-
Mean free path of photons in relativistic heavy ion collisions
Authors:
Jajati K. Nayak,
Rupa Chatterjee
Abstract:
Electromagnetic probes, such as photons and dileptons, play a key role in diagnosing the initial temperature of the hot and dense quark-gluon plasma (QGP) matter created in relativistic nuclear collisions at very high energies. This is due to their large mean free path $λ$, which allows them to escape the medium without significant interactions. Unlike hadronic particles, which experience multiple scatterings and are affected by the evolving medium, electromagnetic probes carry undistorted information from the initial stages of the expanding system. In this work, we revisit the estimation of the mean free path of photons in the QGP phase for a temperature range predicted by hydrodynamics for heavy ion collisions at $\sqrt{s_{NN}}=200$ GeV at RHIC and $\sqrt{s_{NN}}=2.76$ TeV at the LHC. The mean free paths have been estimated for a plasma expanding via (1+1)D and (2+1)D hydrodynamical expansions. For the (1+1)D case, low-energy photons ($E_γ< 0.2$ GeV) coming from a high-temperature ($>250$ MeV) source are found to have mean free paths shorter than the expansion scale of the system, while high-energy photons always have larger mean free paths. A similar qualitative behavior of the mean free path is also observed for more realistic (2+1)D hydrodynamic model calculations, although the $λ$ values are found to be quantitatively larger than in the (1+1)D case.
Submitted 7 August, 2025;
originally announced August 2025.
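The temperature scaling behind such mean free path estimates can be illustrated with a toy kinetic-theory sketch. This is not the authors' rate calculation: the effective degeneracy `g_eff` and the constant cross-section `sigma_fm2` below are placeholder assumptions, used only to show that $λ = 1/(nσ)$ shrinks as the parton density of a massless thermal gas grows like $T^3$.

```python
import math

HBARC = 0.19733  # GeV*fm, converts natural units to fm


def parton_density(T_gev, g_eff=40.0):
    """Number density (fm^-3) of a massless thermal parton gas,
    n = g_eff * zeta(3)/pi^2 * T^3; g_eff is a placeholder value."""
    zeta3 = 1.2020569
    n_gev3 = g_eff * zeta3 / math.pi**2 * T_gev**3
    return n_gev3 / HBARC**3


def mean_free_path_fm(T_gev, sigma_fm2=0.1):
    """Kinetic-theory mean free path lambda = 1/(n*sigma);
    sigma_fm2 is an assumed, temperature-independent cross-section."""
    return 1.0 / (parton_density(T_gev) * sigma_fm2)
```

With these placeholder values $λ$ falls roughly as $T^{-3}$, so a hotter source traps low-energy photons more effectively, consistent with the qualitative trend described in the abstract.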
-
Cable links of uniformly thick knot types
Authors:
Rima Chatterjee,
John B. Etnyre,
Hyunki Min,
Thomas Rodewald
Abstract:
In this paper, we study Legendrian realizations of cable links of knot types that are uniformly thick but not Legendrian simple, extending prior work of Dalton, the second author, and Traynor. This leads to new phenomena, such as stabilized Legendrian links that are smoothly isotopic and component-wise Legendrian isotopic, but are not Legendrian isotopic. In our study, we establish new results for cable links whose cabling slope is sufficiently negative. We also show how to classify Legendrian knots in (most) negative cables of twist knots. This is done by introducing to the study of cables a new technique based on Legendrian surgeries.
Submitted 10 July, 2025; v1 submitted 3 July, 2025;
originally announced July 2025.
-
Volatile-rich evolution of molten super-Earth L 98-59 d
Authors:
Harrison Nicholls,
Tim Lichtenberg,
Richard D. Chatterjee,
Claire Marie Guimond,
Emma Postolec,
Raymond T. Pierrehumbert
Abstract:
Small low-density exoplanets are sculpted by strong stellar irradiation, but their primordial compositions and subsequent evolution are still unknown. Two often-considered scenarios hold that they formed with rocky interiors and H$_2$-He atmospheres ('gas-dwarfs'), or alternatively with bulk compositions dominated by H$_2$O phases ('water-worlds'). Here, we constrain the possible range of evolutionary histories linking the birth conditions of low-density super-Earth L 98-59 d to recent observations using a coupled atmosphere-interior evolutionary model. We find that the observations can be explained by in-situ photochemical production of SO$_2$ in an H$_2$ background, indicative of a chemically-reducing mantle and substantial (1.8 mass pct.) early sulfur and hydrogen content, inconsistent with both the gas-dwarf and water-world scenarios. L 98-59 d's interior comprises a permanent magma ocean, allowing long-term retention of volatiles within its mantle over billions of years, consistent with California-Kepler Survey trends. Our analysis reveals an evolutionary pathway in which planets host volatile-rich atmospheres sustained by long-term magma ocean degassing, shaped by secular cooling, atmospheric erosion and photochemistry. Internal and environmental processes contribute to the observed diversity of super-Earth and sub-Neptune exoplanets.
Submitted 16 March, 2026; v1 submitted 3 July, 2025;
originally announced July 2025.
-
Cascade at local yield strain for silica and metallic glass
Authors:
Nandlal Pingua,
Himani Rautela,
Roni Chatterjee,
Smarajit Karmakar,
Pinaki Chaudhuri,
Shiladitya Sengupta
Abstract:
We report observations of unusual \emph{first} plastic events in silica and metallic glasses in the shear-startup regime, at applied strains two orders of magnitude smaller than the yield strain. The (non-affine) particle displacement field during these events has a complex real-space structure, with multiple disconnected cores of high displacement appearing at the \emph{same} applied strain under athermal quasistatic simple shear deformation, identified by a ``cell based cluster analysis'' method. By monitoring the stress relaxation during the first plastic event via Langevin dynamics simulation, we directly show the cascade nature of these events. These first plastic events are thus reminiscent of avalanches in the post-yielding steady state, but unlike steady-state avalanches, we show that these events are not system spanning. To understand the nature of these events, we tune three factors that are known to affect the brittleness of a glass: (i) sample preparation history, (ii) inter-particle interactions, and (iii) rigidity of the background matrix, applying a ``soft matrix'' probe recently developed by some of us. In each case we show that such first plastic events are more probable in more ductile glasses. Our observations are consistent with the picture that more ductile materials are softer, implying that understanding the role of softness may be a promising route to developing microscopic quantifiers of brittleness and thus to clarifying the physical origin of the brittle-to-ductile transition.
Submitted 20 June, 2025;
originally announced June 2025.
-
XSPECT on-board XPoSat: Calibration and First Results
Authors:
Rwitika Chatterjee,
Koushal Vadodariya,
Radhakrishna Vatedka,
Vivek Kumar Agrawal,
Anurag Tyagi,
Kiran M Jayasurya,
Shyam Prakash V. P.,
Ramadevi M C,
Vaishali Sharan
Abstract:
XPoSat is India's first X-ray spectro-polarimetry mission, consisting of two co-aligned instruments, a polarimeter (POLIX) and a spectrometer (XSPECT), to study the X-ray emission from celestial sources. Since polarimetry is a photon-hungry technique, the mission is designed to observe sources for long integration times (~ few days to weeks). This provides a unique opportunity, enabling XSPECT to carry out long-term monitoring of sources and study their spectro-temporal evolution. To ensure that the instrument is able to fulfill its scientific objectives, it was extensively calibrated on ground. Post-launch, these calibrations were validated using on-board observations. Additionally, some aspects of the instrument, such as alignment and effective area, were also derived and fine-tuned from in-flight data. In this paper, we describe the calibration of the XSPECT instrument in detail, including some initial results derived from its data that establish its capabilities.
Submitted 31 December, 2025; v1 submitted 11 June, 2025;
originally announced June 2025.
-
JWST NIRISS Transmission Spectroscopy of the Super-Earth GJ 357b, a Favourable Target for Atmospheric Retention
Authors:
Jake Taylor,
Michael Radica,
Richard D. Chatterjee,
Mark Hammond,
Tobias Meier,
Suzanne Aigrain,
Ryan J. MacDonald,
Loic Albert,
Björn Benneke,
Louis-Philippe Coulombe,
Nicolas B. Cowan,
Lisa Dang,
René Doyon,
Laura Flagg,
Doug Johnstone,
Lisa Kaltenegger,
David Lafrenière,
Stefan Pelletier,
Caroline Piaulet-Ghorayeb,
Jason F. Rowe,
Pierre-Alexis Roy
Abstract:
We present a JWST NIRISS/SOSS transmission spectrum of the super-Earth GJ 357 b: the first atmospheric observation of this exoplanet. Despite missing the first $\sim$40% of the transit due to using an out-of-date ephemeris, we still recover a transmission spectrum that does not display any clear signs of atmospheric features. We perform a search for Gaussian-shaped absorption features within the data but find that this analysis yields fits to the observations comparable to those of a flat line. We compare the transmission spectrum to a grid of atmosphere models and reject, to 3-$σ$ confidence, atmospheres with metallicities $\lesssim$100$\times$ solar ($\sim$4 g/mol) with clouds at pressures down to 0.01 bar. We analyse how the retention of a secondary atmosphere on GJ 357 b may be possible due to its higher escape velocity compared to an Earth-sized planet and the exceptional inactivity of its host star relative to other M2.5V stars. The star's XUV luminosity decays below the threshold for rapid atmospheric escape early enough that the volcanic revival of an atmosphere of several bars of CO$_2$ is plausible, though subject to considerable uncertainty. Finally, we model the feasibility of detecting an atmosphere on GJ 357 b with MIRI/LRS, MIRI photometry, and NIRSpec/G395H. We find that, with two eclipses, it would be possible to detect features indicative of an atmosphere or surface. Further to this, with 3-4 transits, it would be possible to detect a 1 bar nitrogen-rich atmosphere with 1000 ppm of CO$_2$.
Submitted 30 May, 2025;
originally announced May 2025.
-
Searching Clinical Data Using Generative AI
Authors:
Karan Hanswadkar,
Anika Kanchi,
Shivani Tripathi,
Shi Qiao,
Rony Chatterjee,
Alekh Jindal
Abstract:
Artificial Intelligence (AI) is making a major impact on healthcare, particularly through its application in natural language processing (NLP) and predictive analytics. The healthcare sector has increasingly adopted AI for tasks such as clinical data analysis and medical code assignment. However, searching for clinical information in large and often unorganized datasets remains a manual and error-prone process. Automating this process can help physicians significantly improve their operational productivity.
In this paper, we present a generative AI approach, coined SearchAI, to enhance the accuracy and efficiency of searching clinical data. Unlike traditional code assignment, which is a one-to-one problem, clinical data search is a one-to-many problem, i.e., a given search query can map to a family of codes. Healthcare professionals typically search for groups of related diseases, drugs, or conditions that map to many codes, and therefore, they need search tools that can handle keyword synonyms, semantic variants, and broad open-ended queries. SearchAI employs a hierarchical model that respects the coding hierarchy and improves the traversal of relationships from parent to child nodes. SearchAI navigates these hierarchies predictively and ensures that all paths are reachable without losing any relevant nodes.
To evaluate the effectiveness of SearchAI, we conducted a series of experiments using both public and production datasets. Our results show that SearchAI outperforms default hierarchical traversals across several metrics, including accuracy, robustness, performance, and scalability. SearchAI can help make clinical data more accessible, leading to streamlined workflows, reduced administrative burden, and enhanced coding and diagnostic accuracy.
Submitted 29 May, 2025;
originally announced May 2025.
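The one-to-many, hierarchy-aware search described above can be sketched as a toy traversal over a code tree. This is our illustration, not the paper's model or API: the function name `find_matching_codes` and the ICD-like entries are hypothetical. The key idea it demonstrates is that a query matching a parent node pulls in the parent's entire subtree, so one broad search term maps to a family of codes.

```python
from collections import deque


def find_matching_codes(root, children, label, query_terms):
    """One-to-many search: return every code whose label mentions a
    query term, plus all descendants of any matching node."""
    matches, queue = set(), deque([root])
    while queue:
        code = queue.popleft()
        if any(t in label[code].lower() for t in query_terms):
            stack = [code]  # matching parent: collect its whole subtree
            while stack:
                c = stack.pop()
                matches.add(c)
                stack.extend(children.get(c, []))
        else:  # no match here: keep descending toward child nodes
            queue.extend(children.get(code, []))
    return matches


# Toy ICD-like hierarchy (illustrative only, not a real code set)
children = {"root": ["E10", "E11"], "E10": ["E10.1"]}
label = {"root": "endocrine diseases",
         "E10": "type 1 diabetes",
         "E10.1": "type 1 diabetes with ketoacidosis",
         "E11": "type 2 diabetes"}
```

Here a query for "diabetes" returns the whole diabetes family of codes, while "ketoacidosis" resolves to the single leaf, mirroring the parent-to-child traversal the abstract describes.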
-
X-Ray spectroscopy and timing (XSPECT) experiment on XPoSat -- instrument configuration and science prospects
Authors:
Radhakrishna V,
Anurag Tyagi,
Koushal Vadodariya,
Vivek K Agrawal,
Rwitika Chatterjee,
Ramadevi M C,
Kiran M Jayasurya,
Kumar,
Vaishali S,
Srikar P Tadepalli,
Sreedatta Reddy K,
Lokesh K Garg,
Nidhi Sharma,
Evangelin L Justin
Abstract:
X-ray Polarimeter Satellite (XPoSat), with the POLarimeter Instrument in X-rays (POLIX), is India's first spacecraft dedicated to studying medium-energy X-ray polarisation from celestial objects. The X-Ray Spectroscopy and Timing (XSPECT) instrument on XPoSat is configured to study the long-term spectral behaviour of select sources in the soft X-ray regime. The instrument uses Swept Charge Devices (SCDs) to provide a large area and good spectral performance with a passive cooling arrangement. The instrument consists of a set of collimators with two different FOVs, optical light-blocking filters, and signal processing electronics. The instrument was designed, tested, and calibrated on ground. ISRO's XPoSat mission provides the unique opportunity of observing a source for a long duration. The devices used also enable spectroscopic study of brighter sources compared to CCD-based spectrometers. The first results demonstrate the instrument's capability for spectral studies in the 0.8-15 keV energy band.
Submitted 26 May, 2025;
originally announced May 2025.
-
Tracking phase entanglement during propagation of downconverted photons
Authors:
Rounak Chatterjee,
Vikas S Bhat,
Kiran Bajar,
Sushil Mujumdar
Abstract:
High-dimensional entanglement in the form of transverse spatial correlation between a pair of photons generated via spontaneous parametric downconversion is not only a valuable resource in many academic and real-life applications but also provides access to several intriguing quantum phenomena. One such non-intuitive phenomenon is phase entanglement, in which the biphoton state is correlated in the complex phase of its wavefunction. This state, which emerges during the propagation of the biphoton wavefunction, exhibits neither position nor momentum correlation, yet retains full entanglement. In this work, we experimentally explore this state in two distinct ways. The first is by tracking the vanishing spatial photon number correlation over propagation distances lying in $\left[0,\infty\right)$, folded into a finite range using single-lens imaging. These observations show excellent agreement with our theoretical predictions based on the Double Gaussian (DG) approximation of the biphoton state. The second approach involves performing a two-photon interference experiment using a double slit and this state, which reveals the correlated phase front. We show, both theoretically and experimentally, that the observed two-photon interference structure is markedly different from that produced by position-correlated photons, as confirmed by computing the joint probability distribution of photons (JPD) and related metrics. Such interference using phase-entangled light has not been attempted before and opens avenues for advanced experiments and applications in the field of spatial entanglement.
Submitted 15 August, 2025; v1 submitted 23 May, 2025;
originally announced May 2025.
-
Floquet topological phases of higher winding numbers in extended Su-Schrieffer-Heeger model under quenched drive
Authors:
Rittwik Chatterjee,
Asim Kumar Ghosh
Abstract:
In this study, topological properties of static and dynamic Su-Schrieffer-Heeger models with staggered further neighbor hopping terms of different extents are investigated. Topological characterization of the static chiral models is established in terms of the conventional winding number, while the Floquet topological character is studied by a pair of winding numbers. With increasing extent of the further neighbor terms, topological phases with higher winding numbers are found to emerge in both static and dynamic systems. Topological phase diagrams of static models for four different extents of further neighbor terms are presented and then generalized to arbitrary extent. Similarly, Floquet topological phase diagrams of four such dynamic models are presented. For every model, four different parametrizations of the hopping terms are introduced, which exhibit different patterns of topological phase diagrams. In each case, the emergence of `0' and `$π$' energy edge states is noted, and they are found to be consistent with the bulk-boundary correspondence rule applicable to chiral topological systems.
Submitted 16 October, 2025; v1 submitted 15 May, 2025;
originally announced May 2025.
-
Self-limited tidal heating and prolonged magma oceans in the L 98-59 system
Authors:
Harrison Nicholls,
Claire Marie Guimond,
Hamish C. F. C. Hay,
Richard D. Chatterjee,
Tim Lichtenberg,
Raymond T. Pierrehumbert
Abstract:
Rocky exoplanets accessible to characterisation often lie on close-in orbits where tidal heating within their interiors is significant, with the L 98-59 planetary system being a prime example. As a long-term energy source for ongoing mantle melting and outgassing, tidal heating has been considered as a way to replenish lost atmospheres on rocky planets around active M-dwarfs. We simulate the early evolution of L 98-59 b, c and d using a time-evolved interior-atmosphere modelling framework, with a self-consistent implementation of tidal heating and redox-controlled outgassing. Emerging from our calculations is a novel self-limiting mechanism between radiative cooling, tidal heating, and mantle rheology, which we term the `radiation-tide-rheology feedback'. Our coupled modelling yields self-limiting tidal heating estimates that are up to two orders of magnitude lower than previous calculations, and yet are still large enough to enable the extension of primordial magma oceans to Gyr timescales. Comparisons with a semi-analytic model demonstrate that this negative feedback is a robust mechanism which can probe a given planet's initial conditions, atmospheric composition, and interior structure. The orbit and instellation of the sub-Venus L 98-59 b likely place it in a regime where tidal heating has kept the planet molten up to the present day, even if it were to have lost its atmosphere. For c and d, a long-lived magma ocean can be induced by tides only with additional atmospheric regulation of energy transport.
Submitted 11 July, 2025; v1 submitted 6 May, 2025;
originally announced May 2025.
-
Location of a Sample of GeV and Optical Outbursts in the Jets of Blazars
Authors:
Maitreya Kundu,
Arit Bala,
Saugata Barat,
Ritaban Chatterjee
Abstract:
The exact location of the $γ$-ray emitting region in blazar jets has long been a matter of debate. However, the location has important implications for the emission processes, the geometric and physical parameters of the jet, and the nature of the jet's interaction with the interstellar and intergalactic medium. Diverse conclusions have been drawn by various authors based on a variety of methods applied to different data sets of many blazars: e.g., the location is less than 0.1 pc from the central engine within the broad line region (BLR), or a few to tens of pc downstream beyond the dusty torus, or at some intermediate distance. Here we use a method, established in a previous work, in which the location of the GeV/optical emission is determined using the ratio of energy dissipated during contemporaneous outbursts at those wave bands. We apply it to a total of 47 multi-wavelength outbursts in 10 blazars. We find that the location of the GeV/optical emission is beyond the BLR in all cases. This result is consistent with other studies in which the location has been determined for a large sample of blazars. We compare the location determined by our method for several GeV outbursts of multiple blazars to that obtained by other authors using different methods. We find that our results are consistent in such one-to-one comparisons in most cases for which the required data were available.
Submitted 19 May, 2025; v1 submitted 5 May, 2025;
originally announced May 2025.
-
Optimizing the qudit dimensions of position-momentum entangled photons for QKD
Authors:
Vikas S Bhat,
Rounak Chatterjee,
Kiran Bajar,
Sushil Mujumdar
Abstract:
We propose an optimization scheme to maximize the secure key rate of a high-dimensional variant of BBM92. We use the position-momentum conjugate bases to encode the higher dimensional qudits, realised in a fully passive optical setup. The setup employs a single lens for the basis measurements and no lossy or slow elements. We optimize the qudit dimension for the protocol by maximizing the number of equiprobable sections (macropixels) of the detected beam while minimizing their overlap error. We show the enhanced key rate by discarding events from the ambiguous border pixels. Our strategy maximizes the overlap between the discarded regions from neighbouring macropixels, thereby globally minimizing the overall loss and error. We calculate the optimal dimension and the secure key rate for certain beam parameters. We experimentally show the feasibility of our scheme. This work paves the way for realistic implementations of high-dimensional device-independent quantum key distribution with enhanced bitrates.
Submitted 1 May, 2025;
originally announced May 2025.
-
The Cosmic Shoreline Revisited: A Metric for Atmospheric Retention Informed by Hydrodynamic Escape
Authors:
Xuan Ji,
Richard D. Chatterjee,
Brandon Park Coy,
Edwin S. Kite
Abstract:
The "cosmic shoreline," a semi-empirical relation that separates airless worlds from worlds with atmospheres as proposed by K. J. Zahnle & D. C. Catling, is now guiding large-scale JWST surveys aimed at detecting rocky exoplanet atmospheres. We expand upon this framework by revisiting the shoreline using existing hydrodynamic escape models applied to Earth-like, Venus-like, and steam atmospheres for rocky exoplanets, and we estimate energy-limited escape rates for CH4 atmospheres. We determine the critical instellation required for atmospheric retention by calculating time-integrated atmospheric mass loss. Our analysis introduces a new metric for target selection in the Rocky Worlds Director's Discretionary Time and refines expectations for rocky planet atmosphere searches. Exploring initial volatile inventory ranging from 0.01% to 1% of planetary mass, we find that its variation prevents the definition of a unique clear-cut shoreline, though nonlinear escape physics can reduce this sensitivity to initial conditions. Additionally, uncertain distributions of high-energy stellar evolution and planet age further blur the critical instellations for atmospheric retention, yielding broad shorelines. Hydrodynamic escape models find atmospheric retention is markedly more favorable for higher-mass planets orbiting higher-mass stars, with carbon-rich atmospheres remaining plausible for 55 Cancri e despite its extreme instellation. We caution that our estimates are sensitive to processes with poorly understood dynamics, such as atomic line cooling. Finally, we illustrate how density measurements can be used to statistically test the existence of the cosmic shorelines, emphasizing the need for more precise mass and radius measurements.
Submitted 15 October, 2025; v1 submitted 28 April, 2025;
originally announced April 2025.
-
Rapid and efficient wavefront correction for spatially entangled photons using symmetrized optimization
Authors:
Kiran Bajar,
Ronen Shekel,
Vikas S. Bhat,
Rounak Chatterjee,
Yaron Bromberg,
Sushil Mujumdar
Abstract:
Spatial entanglement is a key resource in quantum technologies, enabling applications in quantum communication, imaging, and computation. However, propagation through complex media distorts spatial correlations, posing a challenge for practical implementations. We introduce a symmetrized genetic algorithm (sGA) for adaptive wavefront correction of spatially entangled photons, leveraging the insight that only the even-parity component of wavefront distortions affects two-photon correlations. By enforcing symmetry constraints, the sGA reduces the optimization parameter space by half, leading to faster convergence and improved enhancement within a finite number of generations compared to a standard genetic algorithm (GA). Additionally, we establish the dependence of the enhancement on the signal-to-noise ratio of the feedback signal, which is controlled by the detector integration time. This technique enables correction of entanglement degradation, enhancing quantum imaging, secure quantum communication, and quantum sensing in complex environments.
Submitted 28 April, 2025;
originally announced April 2025.
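The halving of the parameter space described in the abstract comes from optimizing only one half of the correction mask and mirroring it to enforce even parity, φ(x) = φ(-x). A minimal 1-D sketch of this symmetrization step (hypothetical mask layout; the paper's actual SLM geometry and GA operators may differ):

```python
import numpy as np


def symmetrize(phase_half):
    """Build a full even-parity phase mask phi(x) = phi(-x) from the free
    parameters on one half, so the GA optimizes N/2 genes instead of N."""
    return np.concatenate([phase_half[::-1], phase_half])


rng = np.random.default_rng(0)
half = rng.uniform(0.0, 2.0 * np.pi, size=64)  # N/2 = 64 free genes
mask = symmetrize(half)                        # full mask of N = 128 pixels
assert np.allclose(mask, mask[::-1])           # even parity by construction
```

Each GA individual thus encodes only the half-mask; mutation and crossover act on 64 genes, while the displayed correction always satisfies the parity constraint that two-photon correlations are sensitive to.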
-
Sub-Day Timescale X-ray Spectral Variability of the TeV Blazars Mrk 421 and 1ES 1959+650
Authors:
Susmita Das,
Ritaban Chatterjee
Abstract:
We present X-ray spectra ($0.7-20$ keV) of two high synchrotron-peaked blazars, Mrk 421 and 1ES 1959+650, from simultaneous observations by the SXT and LAXPC instruments onboard \textit{AstroSat} and the \textit{Swift}-XRT during multiple intervals in 2016-19. The spectra of individual epochs are satisfactorily fitted by the log-parabola model. We carry out time-resolved X-ray spectroscopy using the \textit{AstroSat} data with a time resolution of $\sim$10 ks at all epochs, and study the temporal evolution of the best-fit spectral parameters of the log-parabola model. The energy light curves, with durations ranging from $0.5-5$ days, show intra-day variability and changes in brightness state from one epoch to another. We find that the variation of the spectral index ($α$) at hours-to-days timescales has an inverse relation with the energy flux and the peak energy of the spectrum, which indicates a harder-when-brighter trend in the blazars. The variation of the curvature ($β$) does not follow a clear trend with the flux and has an anti-correlation with $α$. Comparison with spectral variation simulated using a theoretical model of time-variable nonthermal emission from blazar jets shows that radiative cooling and gradual acceleration of emitting particles belonging to an initial simple power-law energy distribution can reproduce most of the variability patterns of the spectral parameters at sub-day timescales.
Submitted 23 April, 2025;
originally announced April 2025.
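The log-parabola model used for these fits is the standard XSPEC-style parametrization, F(E) = K (E/E_1)^{-(α + β log10(E/E_1))}, where α is the spectral index at the pivot energy E_1 and β is the curvature. A minimal sketch (parameter values below are illustrative, not the paper's best-fit numbers):

```python
import math


def log_parabola(E, K, alpha, beta, E1=1.0):
    """Log-parabola photon spectrum:
    F(E) = K * (E/E1)**(-(alpha + beta*log10(E/E1)))
    E, E1 in keV; K is the normalization at the pivot energy E1.
    """
    x = E / E1
    return K * x ** (-(alpha + beta * math.log10(x)))


# Harder-when-brighter: a smaller alpha gives a flatter (harder) spectrum,
# so at energies above the pivot the flux is relatively higher.
soft = log_parabola(10.0, 1.0, 2.2, 0.3)  # softer state
hard = log_parabola(10.0, 1.0, 1.8, 0.3)  # harder (brighter) state
```

At E = E_1 the logarithmic term vanishes and F(E_1) = K, so K directly sets the flux at the pivot; a positive β steepens the spectrum progressively above the pivot relative to a pure power law.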