-
Reimagining RAN Automation in 6G: An Agentic AI Framework with Hierarchical Online Decision Transformer
Authors:
Md Arafat Habib,
Medhat Elsayed,
Majid Bavand,
Pedro Enrique Iturria Rivera,
Yigit Ozcan,
Melike Erol-Kantarci
Abstract:
In this paper, we propose an Agentic Artificial Intelligence (AI) framework for wireless networks. The framework coordinates a pool of AI agents guided by Natural Language (NL) inputs from a human operator. At its core, the super agent is powered by a Hierarchical Online Decision Transformer (H-ODT). It orchestrates three categories of agents: (i) inter-slice and intra-slice resource allocation agents, (ii) network application orchestration agents, and (iii) self-healing agents. The orchestration takes place with the help of an Agentic Retrieval-Augmented Generation (RAG) module that integrates knowledge from heterogeneous sources. In the proposed methodology, the super agent directly interfaces with operators and generates sequential policies to activate relevant agents. The proposed framework is evaluated against three state-of-the-art baselines, showing improved throughput, reduced network delay, and higher energy efficiency in both slice-level and system-wide performance metrics. The proposed Agentic framework also introduces a bi-level validation methodology for human operator intents, at both the slice level and the Key Performance Indicator (KPI) level, using generative AI-based time series predictors; this validation rules out performance-degrading operator intents with an accuracy of 88.5%. Lastly, when interrupted by performance-degrading events, the self-healing capability of the Agentic AI framework automatically recovers 90% of the previous performance without human involvement, avoiding quality-of-service drift.
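As a rough illustration of the orchestration logic summarized above (a minimal sketch, not code from the paper; the agent names, the stub validator, and the fixed activation plan are all hypothetical), a super agent can be viewed as mapping a validated operator intent to a sequential policy of agent activations:

```python
from dataclasses import dataclass

# Hypothetical registry mirroring the three agent categories named in the abstract.
AGENTS = {
    "resource_allocation": lambda goal: f"allocate inter-/intra-slice resources for: {goal}",
    "app_orchestration":   lambda goal: f"activate network applications for: {goal}",
    "self_healing":        lambda goal: f"trigger KPI recovery for: {goal}",
}

@dataclass
class Intent:
    text: str             # natural-language operator intent
    predicted_kpi: float   # KPI forecast from a generative time-series predictor (stub)
    sla_threshold: float   # minimum acceptable KPI under the current SLA

def validate(intent: Intent) -> bool:
    """Bi-level-style check (stub): reject intents whose forecast KPI violates the SLA."""
    return intent.predicted_kpi >= intent.sla_threshold

def super_agent(intent: Intent) -> list:
    """Translate a validated intent into a sequential policy of agent activations."""
    if not validate(intent):
        return ["intent rejected: predicted performance degradation"]
    plan = ["resource_allocation", "app_orchestration"]  # placeholder sequential policy
    return [AGENTS[name](intent.text) for name in plan]

print(super_agent(Intent("boost URLLC slice throughput", predicted_kpi=0.92, sla_threshold=0.90)))
```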
Submitted 4 April, 2026;
originally announced April 2026.
-
Efficient Acceleration of High-Quality GeV-Electron Bunches in a Hybrid Laser- and Beam-Driven Plasma Wakefield Accelerator
Authors:
F. M. Foerster,
M. Ayache,
Z. Bi,
M. Cerchez,
S. Corde,
A. Döpp,
F. Haberstroh,
A. F. Habib,
T. Heinemann,
B. Hidding,
A. Irman,
F. Irshad,
O. Kononenko,
M. LaBerge,
A. Martinez de la Ossa,
A. Münzer,
F. Peña,
G. Schilling,
S. Schöbel,
U. Schramm,
S. Sharan,
E. Travac,
P. Ufer,
N. Weiße,
M. Zeuner
, et al. (2 additional authors not shown)
Abstract:
Plasma-based accelerators are compact and provide high gradients, yet their practical use has been limited by energy gain, stability, beam quality, and energy transfer efficiency. Here, we address several of these challenges simultaneously using a hybrid scheme in which an electron bunch from a laser wakefield accelerator (LWFA) drives a subsequent plasma wakefield accelerator (PWFA) stage with internal witness injection. Close to driver depletion in the PWFA stage, we obtain witness bunches with higher electron energy, reduced energy spread and divergence, and higher angular-spectral charge density compared to LWFA alone. We report energy transformer ratios approaching 2, with about 20% of the initial energy in the drive beam transferred to the witness bunch, thereby achieving a driver-to-witness energy transfer efficiency that largely surpasses that of all previous PWFA experiments.
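For reference, the two figures of merit quoted above are conventionally defined as follows (standard PWFA definitions rather than expressions taken from this paper):

\[ R = \frac{E_{\mathrm{acc}}^{\max}}{E_{\mathrm{dec}}^{\max}}, \qquad \eta = \frac{\Delta W_{\mathrm{witness}}}{W_{\mathrm{driver}}^{\mathrm{initial}}}, \]

i.e., the transformer ratio compares the peak accelerating field experienced behind the driver with the peak decelerating field inside the driver, and the transfer efficiency is the fraction of the drive bunch's initial energy that ends up in the witness bunch.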
Submitted 27 February, 2026;
originally announced February 2026.
-
Multi Layer Protection Against Low Rate DDoS Attacks in Containerized Systems
Authors:
Ahmad Fareed,
Bilal Al Habib,
Anne Pepita Francis
Abstract:
Low-rate Distributed Denial of Service (DDoS) attacks have emerged as a major threat to containerized cloud infrastructures. Due to their low traffic volumes, these attacks can be difficult to detect and mitigate, potentially causing serious harm to internet applications. This work proposes a DDoS mitigation system that effectively defends against low-rate DDoS attacks in containerized environments using a multi-layered defense strategy. The solution integrates a Web Application Firewall (WAF), rate limiting, dynamic blacklisting, TCP and UDP header analysis, and zero-trust principles to detect and block malicious traffic at different stages of the attack life cycle. By applying zero-trust principles, the system ensures that each data packet is carefully inspected before granting access, improving overall security and resilience. Additionally, the system's integration with Docker orchestration facilitates deployment and management in containerized settings.
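To make the rate-limiting and dynamic-blacklisting layers concrete, here is a minimal, self-contained sketch (not the paper's implementation; all thresholds and names are illustrative) of a token-bucket limiter that blacklists sources that repeatedly exceed their budget:

```python
import time
from collections import defaultdict

RATE = 5.0           # tokens replenished per second, per source IP
BURST = 10.0         # maximum bucket size (allowed burst)
BLACKLIST_AFTER = 3  # violations before a source is dynamically blacklisted

buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})
violations = defaultdict(int)
blacklist = set()

def allow(ip):
    """Return True if a request/packet from `ip` should be admitted."""
    if ip in blacklist:
        return False
    bucket = buckets[ip]
    now = time.monotonic()
    # Refill proportionally to elapsed time, capped at the burst size.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    # Low-rate offenders are caught by counting repeated budget violations.
    violations[ip] += 1
    if violations[ip] >= BLACKLIST_AFTER:
        blacklist.add(ip)
    return False

# Example: a source sending faster than its budget is eventually blacklisted.
for _ in range(30):
    allow("203.0.113.7")
print("blacklisted:", "203.0.113.7" in blacklist)
```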
Submitted 11 February, 2026;
originally announced February 2026.
-
Reconstructing Gamma Ray Burst Energy Relations with Observational H(z) data in Neural Network Framework
Authors:
Nilanjana Bagchi Aurpa,
Abha Dev Habib,
Nisha Rani
Abstract:
Gamma-ray bursts (GRBs) offer a powerful probe of the cosmic expansion history far beyond the redshift range accessible to Type Ia supernovae. However, the study of cosmological models using GRBs is hindered by the circularity problem, which arises from assuming a fiducial cosmological model during GRB luminosity distance calibration. In this work, we perform a model-independent calibration of GRB luminosity relations using observational measurements of the Hubble parameter from the A220 and J220 compilations, thereby avoiding explicit cosmological assumptions. We employ an Artificial Neural Network to reconstruct the calibration relation directly from the data. In addition, we implement a Bayesian Neural Network framework as an alternative approach, enabling a data-driven treatment of both statistical and systematic uncertainties. The calibrated GRB sample is used to constrain the Amati relation, and we systematically compare the outcomes obtained from different calibration techniques and datasets. We find that the Amati relation slopes derived from the two neural network approaches are consistent with each other and with previous low-redshift calibrations obtained using model-independent methods. The Bayesian Neural Network approach provides a more robust framework for propagating uncertainties in the calibration procedure.
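For reference, the calibration rests on two standard relations (textbook forms assuming a flat universe, not expressions quoted from the paper): the luminosity distance reconstructed from the Hubble parameter data and the Amati correlation between the rest-frame peak energy and the isotropic-equivalent energy,

\[ d_L(z) = (1+z)\, c \int_0^z \frac{dz'}{H(z')}, \qquad E_{\mathrm{iso}} = \frac{4\pi d_L^2(z)\, S_{\mathrm{bolo}}}{1+z}, \qquad \log_{10} E_{\mathrm{iso}} = a + b\, \log_{10} E_{p,i}, \]

where $S_{\mathrm{bolo}}$ is the bolometric fluence and $E_{p,i}$ the rest-frame spectral peak energy; the neural networks reconstruct $H(z)$ (and hence $d_L$) non-parametrically, so the slope $b$ and intercept $a$ can be fit without assuming a fiducial cosmology.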
Submitted 10 March, 2026; v1 submitted 13 January, 2026;
originally announced January 2026.
-
Hierarchical Decision Mamba Meets Agentic AI: A Novel Approach for RAN Slicing in 6G
Authors:
Md Arafat Habib,
Medhat Elsayed,
Majid Bavand,
Pedro Enrique Iturria Rivera,
Yigit Ozcan,
Melike Erol-Kantarci
Abstract:
Radio Access Network (RAN) slicing enables multiple logical networks to exist on top of the same physical infrastructure by allocating resources to distinct service groups, where radio resource scheduling plays a key role in ensuring compliance with slice-specific Service-Level Agreements (SLAs). Existing configuration-based or intent-driven Reinforcement Learning (RL) approaches usually rely on static mappings and SLA conversions. The current literature does not integrate natural language understanding with coordinated decision-making. To address these limitations, we propose an Agentic AI framework for 6G RAN slicing, driven by a super agent built using Hierarchical Decision Mamba (HDM) controllers and a Large Language Model (LLM). The super agent interprets operator intents and translates them into actionable goals using the LLM, which are used by HDM to coordinate inter-slice, intra-slice, and self-healing agents. Compared to transformer-based and reward-driven baselines, the proposed Agentic AI framework demonstrates consistent improvements across key performance indicators, including higher throughput, improved cell-edge performance, and reduced latency across different slices.
Submitted 24 February, 2026; v1 submitted 29 December, 2025;
originally announced December 2025.
-
MoonBot: Modular and On-Demand Reconfigurable Robot Toward Moon Base Construction
Authors:
Kentaro Uno,
Elian Neppel,
Gustavo H. Diaz,
Ashutosh Mishra,
Shamistan Karimov,
A. Sejal Jain,
Ayesha Habib,
Pascal Pama,
Hazal Gozbasi,
Shreya Santra,
Kazuya Yoshida
Abstract:
The allure of lunar surface exploration and development has recently captured widespread global attention. Robots have proved to be indispensable for exploring uncharted terrains, uncovering and leveraging local resources, and facilitating the construction of future human habitats. In this article, we introduce the modular and on-demand reconfigurable robot (MoonBot), a robotic system engineered to maximize functionality while operating within the stringent mass constraints of lunar payloads and adapting to varying environmental conditions and task requirements. This article details the design and development of MoonBot and presents a preliminary field demonstration that validates the proof of concept through the execution of milestone tasks simulating the establishment of lunar infrastructure. These tasks include essential civil engineering operations, infrastructural component transportation and deployment, and assistive operations with inflatable modules. Furthermore, we systematically summarize the lessons learned during testing, focusing on the connector design and providing valuable insights for the advancement of modular robotic systems in future lunar missions.
Submitted 25 December, 2025;
originally announced December 2025.
-
Synthetic Data Pipelines for Adaptive, Mission-Ready Militarized Humanoids
Authors:
Mohammed Ayman Habib,
Aldo Petruzzelli
Abstract:
Omnia presents a synthetic data driven pipeline to accelerate the training, validation, and deployment readiness of militarized humanoids. The approach converts first-person spatial observations captured from point-of-view recordings, smart glasses, augmented reality headsets, and spatial browsing workflows into scalable, mission-specific synthetic datasets for humanoid autonomy. By generating large volumes of high-fidelity simulated scenarios and pairing them with automated labeling and model training, the pipeline enables rapid iteration on perception, navigation, and decision-making capabilities without the cost, risk, or time constraints of extensive field trials. The resulting datasets can be tuned quickly for new operational environments and threat conditions, supporting both baseline humanoid performance and advanced subsystems such as multimodal sensing, counter-detection survivability, and CBRNE-relevant reconnaissance behaviors. This work targets faster development cycles and improved robustness in complex, contested settings by exposing humanoid systems to broad scenario diversity early in the development process.
Submitted 16 December, 2025;
originally announced December 2025.
-
Plankton-Oxygen Dynamics in the Context of Climate Change: A Fractional Model with A Probability Density Function Approach
Authors:
Mahmoud M. El-Borai,
Wagdy G. El-Sayed,
Mahmoud A. Habib
Abstract:
We analyze how climate change affects marine oxygen production by modeling plankton-oxygen dynamics with a fractional-order nonlinear system and establishing rigorous conditions for the model's well-posedness. We formulate a three-dimensional system $d^\alpha x(t)/dt^\alpha = A x(t) + f(x(t))$, where $A$ is a diagonal $3\times 3$ matrix and $f$ is nonlinear. We (i) rigorously state the model, (ii) derive a Lipschitz constant for $f$ under suitable assumptions, and (iii) prove existence, uniqueness, and continuous dependence on initial data using a fractional formula with a probability density kernel and a generalized Grönwall inequality. Under the stated conditions, $f$ satisfies a computable Lipschitz bound that yields existence and uniqueness of solutions for the fractional system, and the solutions depend continuously on initial conditions, establishing the well-posedness of the plankton-oxygen model. We introduce a fractional, PDF-kernel-based framework for plankton-oxygen dynamics and provide general proofs of well-posedness via a generalized Grönwall approach, capturing memory effects that classical integer-order models miss. These results justify numerical simulations and sensitivity analyses of fractional marine-ecosystem models, providing a sound basis for testing mitigation and management strategies affecting oxygen dynamics and supporting evidence-based policies for protecting marine ecosystems under global warming.
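For concreteness, the well-posedness argument hinges on a Lipschitz bound for the nonlinearity and on an integral (mild) form of the solution; one standard way to write these for a Caputo derivative of order $0 < \alpha \le 1$ (our shorthand, not the paper's probability-density-kernel notation) is

\[ \|f(x) - f(y)\| \le L \|x - y\|, \qquad x(t) = E_\alpha(A t^\alpha)\, x_0 + \int_0^t (t-s)^{\alpha-1} E_{\alpha,\alpha}\big(A (t-s)^\alpha\big)\, f(x(s))\, ds, \]

where $E_\alpha$ and $E_{\alpha,\alpha}$ are Mittag-Leffler functions; existence and uniqueness then follow from a fixed-point argument, and continuous dependence on $x_0$ from a generalized Grönwall inequality applied to the difference of two solutions.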
Submitted 13 November, 2025; v1 submitted 7 November, 2025;
originally announced November 2025.
-
Identifying multi-omics interactions for lung cancer drug targets discovery using Kernel Machine Regression
Authors:
Md. Imtyaz Ahmed,
Md. Delwar Hossain,
Md Mostafizer Rahman,
Md. Ahsan Habib,
Md. Mamunur Rashid,
Md. Selim Reza,
Md Ashad Alam
Abstract:
Cancer exhibits diverse and complex phenotypes driven by multifaceted molecular interactions. Recent biomedical research has emphasized the comprehensive study of such diseases by integrating multi-omics datasets (genome, proteome, transcriptome, epigenome). This approach provides an efficient method for identifying genetic variants associated with cancer and offers a deeper understanding of how the disease develops and spreads. However, it is challenging to comprehend complex interactions among the features of multi-omics datasets compared to single omics. In this paper, we analyze lung cancer multi-omics datasets from The Cancer Genome Atlas (TCGA). Using four statistical methods, LIMMA, the t-test, Canonical Correlation Analysis (CCA), and the Wilcoxon test, we identified differentially expressed genes across gene expression, DNA methylation, and miRNA expression data. We then integrated these multi-omics data using the Kernel Machine Regression (KMR) approach. Our findings reveal significant interactions among the three omics: gene expression, miRNA expression, and DNA methylation in lung cancer. From our data analysis, we identified 38 genes significantly associated with lung cancer. Among these, the eight highest-ranked genes (PDGFRB, PDGFRA, SNAI1, ID1, FGF11, TNXB, ITGB1, ZIC1) were highlighted by rigorous statistical analysis. Furthermore, in silico studies identified three top-ranked potential candidate drugs (Selinexor, Orapred, and Capmatinib) that could play a crucial role in the treatment of lung cancer. These proposed drugs are also supported by the findings of other independent studies, which underscore their potential efficacy in the fight against lung cancer.
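As context for the integration step, kernel machine regression typically models the phenotype as an additive combination of unknown smooth functions of each omics layer, each lying in a reproducing kernel Hilbert space; a common form (illustrative notation, not the paper's exact specification) is

\[ y_i = \beta_0 + h_1(G_i) + h_2(M_i) + h_3(D_i) + \varepsilon_i, \]

where $G_i$, $M_i$, and $D_i$ denote the gene expression, miRNA expression, and DNA methylation profiles of sample $i$, each $h_k$ is induced by a kernel $K_k$, and interactions between omics layers are probed by testing the variance components associated with products of these kernels.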
Submitted 17 October, 2025;
originally announced October 2025.
-
Generative AI for Intent-Driven Network Management in 6G RAN: A Case Study on the Mamba Model
Authors:
Md Arafat Habib,
Medhat Elsayed,
Yigit Ozcan,
Pedro Enrique Iturria-Rivera,
Majid Bavand,
Melike Erol-Kantarci
Abstract:
With the emergence of 6G, mobile networks are becoming increasingly heterogeneous and dynamic, necessitating advanced automation for efficient management. Intent-Driven Networks (IDNs) address this by translating high-level intents into optimization policies. Large Language Models (LLMs) can enhance this process by understanding complex human instructions, enabling adaptive and intelligent automation. Given the rapid advancements in Generative AI (GenAI), a comprehensive survey of LLM-based IDN architectures in disaggregated Radio Access Network (RAN) environments is both timely and critical. This article provides such a survey, along with a case study on a selective State-Space Model (SSM)-enabled IDN architecture that integrates GenAI across three key stages: intent processing, intent validation, and intent execution. For the first time in the literature, we propose a hierarchical framework built on Mamba-SSM that introduces GenAI across all stages of the IDN pipeline. We further present a case study demonstrating that the proposed Mamba architecture significantly improves network performance through intelligent automation, surpassing existing IDN approaches. In a multi-cell 5G/6G scenario, the proposed architecture reduces quality of service drift by up to 70%, improves throughput by up to 80 Mbps, and lowers inference time to 60-70 ms, outperforming GenAI, reinforcement learning, and non-machine learning baselines.
Submitted 5 February, 2026; v1 submitted 8 August, 2025;
originally announced August 2025.
-
Towards reliable use of artificial intelligence to classify otitis media using otoscopic images: Addressing bias and improving data quality
Authors:
Yixi Xu,
Al-Rahim Habib,
Graeme Crossland,
Hemi Patel,
Chris Perry,
Kris Bock,
Tony Lian,
William B. Weeks,
Rahul Dodhia,
Juan Lavista Ferres,
Narinder Pal Singh
Abstract:
Ear disease contributes significantly to global hearing loss, with recurrent otitis media being a primary preventable cause in children, impacting development. Artificial intelligence (AI) offers promise for early diagnosis via otoscopic image analysis, but dataset biases and inconsistencies limit model generalizability and reliability. This retrospective study systematically evaluated three public otoscopic image datasets (Chile; Ohio, USA; Türkiye) using quantitative and qualitative methods. Two counterfactual experiments were performed: (1) obscuring clinically relevant features to assess model reliance on non-clinical artifacts, and (2) evaluating the impact of hue, saturation, and value on diagnostic outcomes. Quantitative analysis revealed significant biases in the Chile and Ohio, USA datasets. Counterfactual Experiment I found high internal performance (AUC > 0.90) but poor external generalization, because of dataset-specific artifacts. The Türkiye dataset had fewer biases, with AUC decreasing from 0.86 to 0.65 as masking increased, suggesting higher reliance on clinically meaningful features. Counterfactual Experiment II identified common artifacts in the Chile and Ohio, USA datasets. A logistic regression model trained on clinically irrelevant features from the Chile dataset achieved high internal (AUC = 0.89) and external (Ohio, USA: AUC = 0.87) performance. Qualitative analysis identified redundancy in all the datasets and stylistic biases in the Ohio, USA dataset that correlated with clinical outcomes. In summary, dataset biases significantly compromise reliability and generalizability of AI-based otoscopic diagnostic models. Addressing these biases through standardized imaging protocols, diverse dataset inclusion, and improved labeling methods is crucial for developing robust AI solutions, improving high-quality healthcare access, and enhancing diagnostic accuracy.
Submitted 12 August, 2025; v1 submitted 24 July, 2025;
originally announced July 2025.
-
Self-Adaptive Stabilization and Quality Boost for Electron Beams from All-Optical Plasma Wakefield Accelerators
Authors:
D. Campbell,
T. Heinemann,
A. Dickson,
T. Wilson,
L. Berman,
M. Cerchez,
S. Corde,
A. Döpp,
A. F. Habib,
A. Irman,
S. Karsch,
A. Martinez de la Ossa,
A. Pukhov,
L. Reichwein,
U. Schramm,
A. Sutherland,
B. Hidding
Abstract:
Shot-to-shot fluctuations in electron beams from laser wakefield accelerators present a significant challenge for applications. Here, we show that instead of using such fluctuating beams directly, employing them to drive a plasma photocathode-based wakefield refinement stage can produce secondary electron beams with greater stability, higher quality, and improved reliability. Our simulation-based analysis reveals that drive beam jitters are compensated by both the insensitivity of beam-driven plasma wakefield acceleration, and the decoupled physics of plasma photocathode injection. While beam-driven, dephasing-free plasma wakefield acceleration mitigates energy and energy spread fluctuations, intrinsically synchronized plasma photocathode injection compensates charge and current jitters of incoming electron beams, and provides a simultaneous quality boost. Our findings suggest plasma photocathodes are ideal injectors for hybrid laser-plasma wakefield accelerators, and nurture prospects for demanding applications such as free-electron lasers.
Submitted 9 July, 2025;
originally announced July 2025.
-
Ultra-high-gain water-window X-ray laser driven by plasma photocathode wakefield acceleration
Authors:
Lily H. A. Berman,
David Campbell,
Edgar Hartmann,
Thomas Heinemann,
Thomas Wilson,
Bernhard Hidding,
Ahmad Fahim Habib
Abstract:
X-ray free-electron lasers are large and complex machines, limited by electron beam brightness. Here we show through start-to-end simulations how to realise compact, robust and tunable X-ray lasers in the water window, based on ultra-bright electron beams from plasma wakefield accelerators. First, an ultra-low-emittance electron beam is released by a plasma photocathode in a metre-scale plasma wakefield accelerator. By tuning the beam charge, space-charge forces create a balance between beam fields and wakefields that reduces beam energy spread and improves energy stability - both critical for beam extraction, transport, and focusing into a metre-scale undulator. Here, the resulting ultra-bright beams produce wavelength-tunable, coherent, femtosecond scale photon pulses at ultra-high gain. This regime enables reliable generation of millijoule-gigawatt-class X-ray laser pulses across the water window, offering tunability via the witness beam charge and robustness against variations in plasma wakefield strength. Our findings help democratise access to coherent, high-power, soft X-ray radiation.
Submitted 8 July, 2025;
originally announced July 2025.
-
Harnessing the Power of LLMs, Informers and Decision Transformers for Intent-driven RAN Management in 6G
Authors:
Md Arafat Habib,
Pedro Enrique Iturria Rivera,
Yigit Ozcan,
Medhat Elsayed,
Majid Bavand,
Raimundas Gaigalas,
Melike Erol-Kantarci
Abstract:
Intent-driven network management is critical for managing the complexity of 5G and 6G networks. It enables adaptive, on-demand management of the network based on the objectives of the network operators. In this paper, we propose an innovative three-step framework for intent-driven network management based on Generative AI (GenAI) algorithms. First, we fine-tune a Large Language Model (LLM) on a custom dataset using a Quantized Low-Rank Adapter (QLoRA) to enable memory-efficient intent processing within limited computational resources. A Retrieval Augmented Generation (RAG) module is included to support dynamic decision-making. Second, we utilize a transformer architecture for time series forecasting to predict key parameters, such as power consumption, traffic load, and packet drop rate, to facilitate proactive intent validation. Lastly, we introduce a Hierarchical Decision Transformer with Goal Awareness (HDTGA) to optimize the selection and orchestration of network applications and hence optimize the network. Our intent guidance and processing approach improves BERTScore by 6% and the semantic similarity score by 9% compared to the base LLM. In addition, the proposed predictive intent validation approach successfully rules out performance-degrading intents with an average accuracy of 88%. Finally, compared to the baselines, the proposed HDTGA algorithm increases throughput by at least 19.3%, reduces delay by 48.5%, and boosts energy efficiency by 54.9%.
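A minimal sketch of the second step, predictive intent validation, is shown below (not the paper's code; the forecaster is a trivial stub standing in for the transformer-based time-series model, and all KPI names and thresholds are illustrative): forecast the KPIs an intent would lead to and reject it when any forecast violates its limit.

```python
from statistics import mean

def forecast_kpis(history, horizon=5):
    """Stub forecaster: persistence of the recent mean. The framework described above
    would use a transformer-based time-series model here instead."""
    return {kpi: mean(values[-horizon:]) for kpi, values in history.items()}

def validate_intent(intent, history, limits):
    """Accept the intent only if every forecast KPI respects its limit.
    `limits` maps a KPI name to ("min" | "max", threshold)."""
    forecast = forecast_kpis(history)
    for kpi, (kind, threshold) in limits.items():
        value = forecast[kpi]
        if kind == "min" and value < threshold:
            return False
        if kind == "max" and value > threshold:
            return False
    return True

history = {"throughput_mbps": [82, 85, 81, 79, 80],
           "packet_drop_rate": [0.03, 0.04, 0.06, 0.07, 0.08]}
limits = {"throughput_mbps": ("min", 75.0), "packet_drop_rate": ("max", 0.05)}
# Rejected: the forecast drop rate (0.056) exceeds its 0.05 limit.
print(validate_intent("enable aggressive cell sleeping", history, limits))
```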
Submitted 3 May, 2025;
originally announced May 2025.
-
Analyzing Performance Bottlenecks in Zero-Knowledge Proof Based Rollups on Ethereum
Authors:
Md. Ahsan Habib
Abstract:
Blockchain technology is rapidly evolving, with scalability remaining one of its most significant challenges. While various solutions have been proposed and continue to be developed, it is essential to consider the blockchain trilemma -- balancing scalability, security, and decentralization -- when designing new approaches. One promising solution is the zero-knowledge proof (ZKP)-based rollup, implemented on top of Ethereum. However, the performance of these systems is often limited by the efficiency of the ZKP mechanism. This paper explores the performance of ZKP-based rollups, focusing on a solution built using the Hardhat Ethereum development environment. Through detailed analysis, the paper identifies and examines key bottlenecks within the ZKP system, providing insight into potential areas for optimization to enhance scalability and overall system performance.
Submitted 21 March, 2025;
originally announced March 2025.
-
Magnetic-field dependence of spin-phonon relaxation and dephasing due to g-factor fluctuations from first principles
Authors:
Joshua Quinton,
Mayada Fadel,
Junqing Xu,
Adela Habib,
Mani Chandra,
Yuan Ping,
Ravishankar Sundararaman
Abstract:
The electron spin decay lifetime in materials can be characterized by relaxation (T1) as well as irreversible (T2) and reversible (T2*) decoherence processes. Their interplay leads to a complex dependence of spin relaxation times on the direction and magnitude of magnetic fields, relevant for spintronics and quantum information applications. Here, we use real-time first-principles density matrix dynamics simulations to directly simulate Hahn echo measurements, disentangle dephasing from decoherence, and predict T1, T2 and T2* spin lifetimes. We show that g-factor fluctuations lead to non-trivial magnetic field dependence of each of these lifetimes in inversion-symmetric crystals of CsPbBr3 and silicon, even when only intrinsic spin-phonon scattering is present. Most importantly, fluctuations in the off-diagonal components of the g-tensor lead to a strong magnetic field dependence of even the T1 lifetime in silicon. Our calculations elucidate the detailed role of anisotropic g-factors in determining the spin dynamics even in simple, low spin-orbit coupling materials such as silicon.
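For orientation, the three timescales are commonly related by the textbook expressions (general definitions, not results specific to this work)

\[ \frac{1}{T_2} = \frac{1}{2T_1} + \frac{1}{T_\varphi}, \qquad \frac{1}{T_2^*} = \frac{1}{T_2} + \frac{1}{T_2'}, \]

where $T_\varphi$ is the pure (irreversible) dephasing time and $T_2'$ captures the reversible, inhomogeneity-driven dephasing that a Hahn echo refocuses; directly simulating the echo sequence is what allows $T_2$ to be separated from $T_2^*$ here.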
Submitted 5 March, 2025; v1 submitted 27 November, 2024;
originally announced November 2024.
-
TabSeq: A Framework for Deep Learning on Tabular Data via Sequential Ordering
Authors:
Al Zadid Sultan Bin Habib,
Kesheng Wang,
Mary-Anne Hartley,
Gianfranco Doretto,
Donald A. Adjeroh
Abstract:
Effective analysis of tabular data still poses a significant problem in deep learning, mainly because features in tabular datasets are often heterogeneous and have different levels of relevance. This work introduces TabSeq, a novel framework for the sequential ordering of features, addressing the vital necessity to optimize the learning process. Features are not always equally informative, and for certain deep learning models, their random arrangement can hinder the model's learning capacity. Finding the optimum sequence order for such features could improve the deep learning models' learning process. The novel feature ordering technique we provide in this work is based on clustering and incorporates both local ordering and global ordering. It is designed to be used with a multi-head attention mechanism in a denoising autoencoder network. Our framework uses clustering to align comparable features and improve data organization. Multi-head attention focuses on essential characteristics, whereas the denoising autoencoder highlights important aspects by rebuilding from distorted inputs. This method improves the capability to learn from tabular data while lowering redundancy. Our research, demonstrating improved performance through appropriate feature sequence rearrangement using raw antibody microarray and two other real-world biomedical datasets, validates the impact of feature ordering. These results demonstrate that feature ordering can be a viable approach to improved deep learning of tabular data.
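A minimal sketch of clustering-based feature ordering in the spirit described above (a simplified stand-in rather than TabSeq's actual local/global ordering; using k-means for the global grouping and per-cluster variance for the local ordering is our illustrative choice):

```python
import numpy as np
from sklearn.cluster import KMeans

def order_features(X, n_clusters=4, seed=0):
    """Return a permutation of feature indices: group similar features by clustering
    their column profiles (global ordering), then sort the features inside each
    cluster by variance (local ordering)."""
    feature_profiles = X.T  # shape (n_features, n_samples): cluster features, not samples
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(feature_profiles)
    variances = X.var(axis=0)
    order = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        order.extend(idx[np.argsort(-variances[idx])])  # highest-variance features first
    return np.array(order)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))        # 200 samples, 16 heterogeneous features
X_reordered = X[:, order_features(X)]     # input for the attention-based denoising autoencoder
print(X_reordered.shape)
```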
Submitted 21 October, 2024; v1 submitted 17 October, 2024;
originally announced October 2024.
-
Machine Learning-enabled Traffic Steering in O-RAN: A Case Study on Hierarchical Learning Approach
Authors:
Md Arafat Habib,
Hao Zhou,
Pedro Enrique Iturria-Rivera,
Yigit Ozcan,
Medhat Elsayed,
Majid Bavand,
Raimundas Gaigalas,
Melike Erol-Kantarci
Abstract:
Traffic Steering is a crucial technology for wireless networks, and multiple efforts have been put into developing efficient Machine Learning (ML)-enabled traffic steering schemes for Open Radio Access Networks (O-RAN). Given the swift emergence of novel ML techniques, conducting a timely survey that comprehensively examines the ML-based traffic steering schemes in O-RAN is critical. In this article, we provide such a survey along with a case study of hierarchical learning-enabled traffic steering in O-RAN. In particular, we first introduce the background of traffic steering in O-RAN and overview relevant state-of-the-art ML techniques and their applications. Then, we analyze the compatibility of the hierarchical learning framework in O-RAN and further propose a Hierarchical Deep-Q-Learning (h-DQN) framework for traffic steering. Compared to existing works, which focus on single-layer architecture with standalone agents, h-DQN decomposes the traffic steering problem into a bi-level architecture with hierarchical intelligence. The meta-controller makes long-term and high-level policies, while the controller executes instant traffic steering actions under high-level policies. Finally, the case study shows that the hierarchical learning approach can provide significant performance improvements over the baseline algorithms.
Submitted 30 September, 2024;
originally announced September 2024.
-
Susceptibility Formulation of Density Matrix Perturbation Theory
Authors:
Anders M. N. Niklasson,
Adela Habib,
Joshua Finkelstein,
Emanuel H. Rubensson
Abstract:
Density matrix perturbation theory based on recursive Fermi-operator expansions provides a computationally efficient framework for time-independent response calculations in quantum chemistry and materials science. From a perturbation in the Hamiltonian we can calculate the first-order perturbation in the density matrix, which then gives us the linear response in the expectation values for some chosen set of observables. Here we present an alternative, dual formulation, where we instead calculate the static susceptibility of an observable, which then gives us the linear response in the expectation values for any number of different Hamiltonian perturbations. We show how the calculation of the susceptibility can be performed with the same expansion schemes used in recursive density matrix perturbation theory, including generalizations to fractional occupation numbers and self-consistent linear response calculations, i.e. similar to density functional perturbation theory. As with recursive density matrix perturbation theory, the dual susceptibility formulation is well suited for numerically thresholded sparse matrix algebra, which has linear scaling complexity for sufficiently large sparse systems. Similarly, the recursive computation of the susceptibility also seamlessly integrates with the computational framework of deep neural networks used in artificial intelligence (AI) applications. This integration enables the calculation of quantum response properties that can leverage cutting-edge AI-hardware, such as Nvidia Tensor cores or Google Tensor Processing Units. We demonstrate performance for recursive susceptibility calculations using Nvidia Graphics Processing Units and Tensor cores.
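Schematically (our notation, not necessarily the paper's), if $P^{(1)} = \mathcal{L}[H^{(1)}]$ denotes the first-order density matrix response to a Hamiltonian perturbation $H^{(1)}$, the two formulations evaluate the same trace from opposite sides:

\[ \delta\langle A\rangle = \mathrm{Tr}\big[A\, \mathcal{L}[H^{(1)}]\big] = \mathrm{Tr}\big[\mathcal{L}^{\dagger}[A]\, H^{(1)}\big] \equiv \mathrm{Tr}\big[\chi_A\, H^{(1)}\big], \]

so the standard formulation fixes $H^{(1)}$ and computes $P^{(1)}$ once for all observables, whereas the dual formulation fixes an observable $A$, computes its susceptibility $\chi_A = \mathcal{L}^{\dagger}[A]$ once with the same recursive expansion machinery, and then reads off the response to any number of different perturbations from simple traces.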
Submitted 25 September, 2024;
originally announced September 2024.
-
On the Prevalence, Evolution, and Impact of Code Smells in Simulation Modelling Software
Authors:
Riasat Mahbub,
Mohammad Masudur Rahman,
Muhammad Ahsanul Habib
Abstract:
Simulation modelling systems are routinely used to test or understand real-world scenarios in a controlled setting. They have found numerous applications in scientific research, engineering, and industrial operations. Due to their complex nature, the simulation systems could suffer from various code quality issues and technical debt. However, to date, there has not been any investigation into their code quality issues (e.g. code smells). In this paper, we conduct an empirical study investigating the prevalence, evolution, and impact of code smells in simulation software systems. First, we employ static analysis tools (e.g. Designite) to detect and quantify the prevalence of various code smells in 155 simulation and 327 traditional projects from Github. Our findings reveal that certain code smells (e.g. Long Statement, Magic Number) are more prevalent in simulation software systems than in traditional software systems. Second, we analyze the evolution of these code smells across multiple project versions and investigate their chances of survival. Our experiments show that some code smells such as Magic Number and Long Parameter List can survive a long time in simulation software systems. Finally, we examine any association between software bugs and code smells. Our experiments show that although Design and Architecture code smells are introduced simultaneously with bugs, there is no significant association between code smells and bugs in simulation systems.
Submitted 5 September, 2024;
originally announced September 2024.
-
Robust CNN Multi-Nested-LSTM Framework with Compound Loss for Patch-based Multi-Push Ultrasound Shear Wave Imaging and Segmentation
Authors:
Md. Jahin Alam,
Ahsan Habib,
Md. Kamrul Hasan
Abstract:
Ultrasound Shear Wave Elastography (SWE) is a noteworthy tool for in-vivo noninvasive tissue pathology assessment. State-of-the-art techniques can generate reasonable estimates of tissue elasticity, but high quality and noise resiliency in SWE reconstruction have yet to be demonstrated. In this work, we propose a two-stage DL pipeline that produces reliable reconstructions and then denoises them to obtain elasticity mappings with lower prevailing noise. The reconstruction network consists of a Resnet3D encoder that extracts temporal context from the sequential multi-push data. The encoded features are sent to multiple nested CNN-LSTMs, which process them on a temporal attention-guided windowing basis and map the 3D features to 2D using FFT-attention; these are then decoded into an elasticity map as the primary reconstruction. The 2D maps from each multi-push region are merged and cleaned using a dual-decoder denoiser network, which independently denoises foreground and background before fusion. The post-denoiser generates a higher-quality reconstruction and an inclusion-segmentation mask. A multi-objective loss is designed to accommodate the denoising, fusing, and segmentation processes. The method is validated on sequential multi-push SWE motion data with multiple overlapping regions. A patch-based training procedure is introduced with network modifications to handle data scarcity. Evaluations produce 32.66 dB PSNR and 43.19 dB CNR in noisy simulation, and 22.44 dB PSNR and 36.88 dB CNR in experimental data, across all test samples. Moreover, IoUs (0.909 and 0.781) were quite satisfactory on the two datasets. Compared with other reported deep-learning approaches, our method proves quantitatively and qualitatively superior in dealing with noise influences in SWE data. From a performance point of view, our deep-learning pipeline has the potential to become a practical tool in the clinical domain.
Submitted 30 July, 2024;
originally announced July 2024.
-
LLM-Based Intent Processing and Network Optimization Using Attention-Based Hierarchical Reinforcement Learning
Authors:
Md Arafat Habib,
Pedro Enrique Iturria Rivera,
Yigit Ozcan,
Medhat Elsayed,
Majid Bavand,
Raimundas Gaigalas,
Melike Erol-Kantarci
Abstract:
Intent-based network automation is a promising tool to enable easier network management; however, certain challenges need to be effectively addressed. These are: 1) processing intents, i.e., identification of the logic and necessary parameters to fulfill an intent, 2) validating an intent to align it with the current network status, and 3) satisfying intents via network-optimizing functions like xApps and rApps in O-RAN. This paper addresses these points via a three-fold strategy to introduce intent-based automation for O-RAN. First, intents are processed via a lightweight Large Language Model (LLM). Secondly, once an intent is processed, it is validated against future incoming traffic volume profiles (high or low). Finally, a series of network optimization applications (rApps and xApps) have been developed. With their machine learning-based functionalities, they can improve certain key performance indicators such as throughput, delay, and energy efficiency. In this final stage, using an attention-based hierarchical reinforcement learning algorithm, these applications are optimally initiated to satisfy the intent of an operator. Our simulations show that the proposed method can achieve at least a 12% increase in throughput, a 17.1% increase in energy efficiency, and a 26.5% decrease in network delay compared to the baseline algorithms.
Submitted 21 December, 2024; v1 submitted 10 June, 2024;
originally announced June 2024.
-
CredSec: A Blockchain-based Secure Credential Management System for University Adoption
Authors:
Md. Ahsan Habib,
Md. Mostafijur Rahman,
Nieb Hasan Neom
Abstract:
University education plays a critical role in shaping the intellectual and professional development of individuals and contributes significantly to the advancement of knowledge and society. Generally, the university authority directly controls the preparation of student results and stores the credentials on its local dedicated server. Consequently, credentials can be altered, and there is a very high possibility of encountering various threats and security attacks. To resolve these issues, we propose a blockchain-based secure credential management system (BCMS) for efficiently storing, managing, and recovering credentials without involving the university authority. The proposed BCMS incorporates a modified two-factor encryption (m2FE) technique, a combination of the RSA cryptosystem and DNA encoding, to ensure credential privacy, together with an enhanced authentication scheme for teachers and students. Besides, to reduce the size of the cipher credential and its conversion time, we use a character-to-integer (C2I) table instead of the ASCII table. Finally, the experimental results and analysis of the BCMS illustrate its effectiveness over state-of-the-art works.
Submitted 3 June, 2024;
originally announced June 2024.
-
Efficient Mixed-Precision Matrix Factorization of the Inverse Overlap Matrix in Electronic Structure Calculations with AI-Hardware and GPUs
Authors:
Adela Habib,
Joshua Finkelstein,
Anders M. N. Niklasson
Abstract:
In recent years, a new kind of accelerated hardware that enables extremely high-performance tensor contractions in reduced precision for deep neural network calculations has gained popularity in the Artificial Intelligence (AI) and Machine Learning (ML) communities. In this article, we exploit Nvidia Tensor cores, a prototypical example of such AI/ML hardware, to develop a mixed precision approach for computing a dense matrix factorization of the inverse overlap matrix in electronic structure theory, $S^{-1}$. This factorization of $S^{-1}$, written as $ZZ^T=S^{-1}$, is used to transform the general matrix eigenvalue problem into a standard matrix eigenvalue problem. Here we present a mixed precision iterative refinement algorithm where $Z$ is given recursively using matrix-matrix multiplications and can be computed with high performance on Tensor cores. To understand the performance and accuracy of Tensor cores, comparisons are made to GPU-only implementations in single and double precision. Additionally, we propose a non-parametric stopping criterion that is robust in the face of lower precision floating point operations. The algorithm is particularly useful when we have a good initial guess for $Z$, for example, from previous time steps in quantum-mechanical molecular dynamics simulations or from a previous iteration in a geometry optimization.
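The abstract does not spell out the recursion, so as a rough, double-precision illustration of refining $Z$ using nothing but matrix-matrix multiplications, here is a Newton-Schulz-style sketch (our own simplified stand-in, not the paper's algorithm; in the mixed-precision setting, the matrix products inside the loop are the operations that would be offloaded to Tensor cores):

```python
import numpy as np

def refine_inverse_factor(S, Z0, tol=1e-8, max_iter=100):
    """Refine Z so that Z @ Z.T approximates S^{-1}, using only matrix products.
    Newton-Schulz-style sketch; converges when ||I - Z0.T @ S @ Z0|| < 1."""
    Z = Z0.copy()
    I = np.eye(S.shape[0])
    for _ in range(max_iter):
        R = I - Z.T @ S @ Z            # congruence residual; zero when Z.T S Z = I
        if np.linalg.norm(R, "fro") < tol:
            break
        Z = Z @ (I + 0.5 * R)          # multiplicative update built from matrix products
    return Z

# Example: refine a scaled-identity initial guess for a random SPD "overlap" matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
S = A @ A.T + 50.0 * np.eye(50)                    # well-conditioned SPD matrix
Z0 = np.eye(50) / np.sqrt(np.linalg.norm(S, 2))    # guarantees the convergence condition
Z = refine_inverse_factor(S, Z0)
print(np.linalg.norm(Z @ Z.T - np.linalg.inv(S)))  # small residual
```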
Submitted 29 April, 2024;
originally announced April 2024.
-
Transformer-Based Wireless Traffic Prediction and Network Optimization in O-RAN
Authors:
Md Arafat Habib,
Pedro Enrique Iturria-Rivera,
Yigit Ozcan,
Medhat Elsayed,
Majid Bavand,
Raimundas Gaigalas,
Melike Erol-Kantarci
Abstract:
This paper introduces an innovative method for predicting wireless network traffic in concise temporal intervals for Open Radio Access Networks (O-RAN) using a transformer architecture, which is the machine learning model behind generative AI tools. Depending on the anticipated traffic, the system either launches a reinforcement learning-based traffic steering xApp or a cell sleeping rApp to enhance performance metrics like throughput or energy efficiency. Our simulation results demonstrate that the proposed traffic prediction-based network optimization mechanism matches the performance of standalone RAN applications (rApps/ xApps) that are always on during the whole simulation time while offering on-demand activation. This feature is particularly advantageous during instances of abrupt fluctuations in traffic volume. Rather than persistently operating specific applications irrespective of the actual incoming traffic conditions, the proposed prediction-based method increases the average energy efficiency by 39.7% compared to the "Always on Traffic Steering xApp" and achieves 10.1% increase in throughput compared to the "Always on Cell Sleeping rApp". The simulation has been conducted over 24 hours, emulating a whole day traffic pattern for a dense urban area.
Submitted 16 March, 2024;
originally announced March 2024.
-
Cassini sets in taxicab geometry
Authors:
Alexander Habib,
Dylan Helliwell
Abstract:
Given two points $p$ and $q$ in the plane and a nonnegative number $r$, the Cassini oval is the set of points $x$ that satisfy $d(x, p) d(x, q) = r^2$. In this paper, we study this set using the taxicab metric. We find that these sets have characteristics that are qualitatively similar to their Euclidean counterparts while also reflecting the underlying taxicab structure. We provide a geometric description of these sets and provide a characterization in terms of intersections and unions of a restricted family of such sets analogous to that found recently for taxicab Apollonian sets.
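Spelling out the defining equation in the taxicab setting, with $d_T$ the taxicab metric, the sets under study are

\[ d_T\big((x_1,x_2),(y_1,y_2)\big) = |x_1-y_1| + |x_2-y_2|, \qquad \mathcal{C}(p,q,r) = \{\, x \in \mathbb{R}^2 : d_T(x,p)\, d_T(x,q) = r^2 \,\}. \]

For instance, with $p=(-1,0)$, $q=(1,0)$, and $r=1$, the origin lies on the set since $d_T(0,p)\, d_T(0,q) = 1\cdot 1 = r^2$ (the symbol $\mathcal{C}(p,q,r)$ is our shorthand, not notation from the paper).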
Submitted 3 January, 2025; v1 submitted 8 March, 2024;
originally announced March 2024.
-
Analyzing the Dynamics of COVID-19 Lockdown Success: Insights from Regional Data and Public Health Measures
Authors:
Md. Motaleb Hossen Manik,
Md. Ahsan Habib,
Md. Zabirul Islam,
Tanim Ahmed,
Fabliha Haque
Abstract:
The COVID-19 pandemic caused by the coronavirus had a significant effect on social, economic, and health systems globally. The virus emerged in Wuhan, China, and spread worldwide, resulting in severe disease, death, and social disruption. Countries implemented lockdowns in various regions to limit the spread of the virus. Some were successful and some failed, with several factors playing a vital role in their success; however, these factors and their correlations have mostly remained unidentified. In this paper, we identify the factors that contributed to the success of lockdowns during the COVID-19 pandemic and explore the correlations among them. Moreover, this paper proposes several strategies to control any pandemic situation in the future. Specifically, it explores the relationships among variables such as population density, the numbers of infected, deceased, and recovered patients, and the success or failure of the lockdown in different regions of the world. The findings suggest a strong correlation among these factors and indicate that the spread of similar viruses can be reduced in the future by implementing several safety measures.
Submitted 24 February, 2024;
originally announced February 2024.
-
BdSLW60: A Word-Level Bangla Sign Language Dataset
Authors:
Husne Ara Rubaiyeat,
Hasan Mahmud,
Ahsan Habib,
Md. Kamrul Hasan
Abstract:
Sign language discourse is an essential mode of daily communication for deaf and hard-of-hearing people. However, research on Bangla Sign Language (BdSL) faces notable limitations, primarily due to the lack of datasets. Recognizing word-level signs in BdSL (WL-BdSL) presents a multitude of challenges, including the need for well-annotated datasets, capturing the dynamic nature of sign gestures from facial or hand landmarks, developing suitable machine learning or deep learning-based models with substantial video samples, and so on. In this paper, we address these challenges by creating a comprehensive BdSL word-level dataset named BdSLW60 in an unconstrained and natural setting, allowing positional and temporal variations and allowing sign users to change hand dominance freely. The dataset encompasses 60 Bangla sign words, with a significant scale of 9307 video trials provided by 18 signers under the supervision of a sign language professional. The dataset was rigorously annotated and cross-checked by 60 annotators. We also introduce a relative quantization-based key frame encoding technique for landmark-based sign gesture recognition. We report the benchmarking of our BdSLW60 dataset using a Support Vector Machine (SVM) with testing accuracy up to 67.6% and an attention-based bi-LSTM with testing accuracy up to 75.1%. The dataset is available at https://www.kaggle.com/datasets/hasaniut/bdslw60 and the code base is accessible from https://github.com/hasanssl/BdSLW60_Code.
Submitted 13 February, 2024;
originally announced February 2024.
-
Enriching Automatic Test Case Generation by Extracting Relevant Test Inputs from Bug Reports
Authors:
Wendkûuni C. Ouédraogo,
Laura Plein,
Kader Kaboré,
Andrew Habib,
Jacques Klein,
David Lo,
Tegawendé F. Bissyandé
Abstract:
The quality of software is closely tied to the effectiveness of the tests it undergoes. Manual test writing, though crucial for bug detection, is time-consuming, which has driven significant research into automated test case generation. However, current methods often struggle to generate relevant inputs, limiting the effectiveness of the tests produced. To address this, we introduce BRMiner, a novel approach that leverages Large Language Models (LLMs) in combination with traditional techniques to extract relevant inputs from bug reports, thereby enhancing automated test generation tools. In this study, we evaluate BRMiner using the Defects4J benchmark and test generation tools such as EvoSuite and Randoop. Our results demonstrate that BRMiner achieves a Relevant Input Rate (RIR) of 60.03% and a Relevant Input Extraction Accuracy Rate (RIEAR) of 31.71%, significantly outperforming methods that rely on LLMs alone. Integrating BRMiner's inputs enhances EvoSuite's ability to generate more effective tests, leading to increased code coverage, with gains observed in branch, instruction, method, and line coverage across multiple projects. Furthermore, BRMiner facilitated the detection of 58 unique bugs, including some missed by traditional baseline approaches. Overall, BRMiner's combination of LLM filtering with traditional input extraction techniques significantly improves the relevance and effectiveness of automated test generation, advancing bug detection and enhancing code coverage, thereby contributing to higher-quality software development.
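To make the "traditional techniques" half of this idea concrete, here is a minimal sketch of regex-based extraction of literal values from a bug report. It is not BRMiner's actual pipeline; the patterns and the example report are illustrative only, and the LLM filtering step is omitted.

```python
import re

def extract_candidate_inputs(bug_report: str) -> set[str]:
    """Pull literal values that often make good test inputs: quoted strings,
    numbers, and file-like paths. A simplification of what a tool such as
    BRMiner might feed to EvoSuite/Randoop, not its real implementation."""
    candidates = set()
    candidates.update(re.findall(r'"([^"]+)"', bug_report))    # double-quoted strings
    candidates.update(re.findall(r"'([^']+)'", bug_report))    # single-quoted strings
    candidates.update(re.findall(r"-?\d+\.?\d*", bug_report))  # integers and floats
    candidates.update(re.findall(r"[\w./\\-]+\.(?:xml|json|txt|java)", bug_report))  # file paths
    return candidates

report = 'Parsing "2021-13-45" with format \'yyyy-MM-dd\' throws at offset 7 (see config.xml)'
print(extract_candidate_inputs(report))
```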
Submitted 19 March, 2025; v1 submitted 22 December, 2023;
originally announced December 2023.
-
Interferometric apodization by homothety -- II. Experimental validation
Authors:
J Chafi,
Y El Azhari,
O Azagrouze,
A Jabiri,
A Boskri,
Z Benkhaldoun,
A Habib
Abstract:
This work presents the results of experimental laboratory tests on the apodization of circular and rectangular apertures using the Interferometric Apodization by Homothety (IAH) technique. The IAH approach splits the amplitude of the instrumental PSF into two equal parts. One of the two produced PSFs undergoes a homothety to change its transverse dimensions while its amplitude is properly controlled. The two PSFs are then combined to produce an apodized image. The diffraction wings of the resulting PSF are reduced by a variable reduction factor that depends on an amplitude parameter $\gamma$ and a spread parameter $\eta$. This apodization approach was implemented in the laboratory using an interferometric setup based on the Mach-Zehnder Interferometer (MZI). The results show strong agreement between theory and experiment. For instance, the average experimental contrast obtained at a low angular separation of $2.4\lambda/D$ does not exceed $5\times10^{-4}$. This work also allowed us to study the influence of parameters such as the wavelength and the density of the neutral filters on the apodizer's performance.
Submitted 8 December, 2023;
originally announced December 2023.
-
A technique to avoid Blockchain Denial of Service (BDoS) and Selfish Mining Attack
Authors:
Md. Ahsan Habib,
Md. Motaleb Hossen Manik
Abstract:
Blockchain denial of service (BDoS) and selfish mining are two of the most crucial attacks on blockchain technology. A classical DoS attack targets the computer network to limit, restrict, or block authorized users' access to the system, which is ineffective against renowned cryptocurrencies like Bitcoin, Ethereum, etc. Unlike conventional DoS, BDoS attacks the system's mechanism design, manipulating the incentive structure to discourage honest miners from participating in the mining process. In contrast, in a selfish mining attack, the adversary miner keeps its discovered block private to intentionally fork the chain, aiming to increase its own incentive. This paper proposes a technique to successfully avoid BDoS and selfish mining attacks. The existing infrastructure of blockchain technology needs only minor changes to incorporate the proposed solution.
Submitted 29 April, 2023;
originally announced October 2023.
-
Cardiovascular Disease Risk Prediction via Social Media
Authors:
Al Zadid Sultan Bin Habib,
Md Asif Bin Syed,
Md Tanvirul Islam,
Donald A. Adjeroh
Abstract:
Researchers use Twitter and sentiment analysis to predict Cardiovascular Disease (CVD) risk. We developed a new dictionary of CVD-related keywords by analyzing emotions expressed in tweets. Tweets from eighteen US states, including the Appalachian region, were collected. Using the VADER model for sentiment analysis, users were classified as potentially at CVD risk. Machine Learning (ML) models were employed to classify individuals' CVD risk and were applied to a CDC dataset with demographic information for comparison. Performance evaluation metrics such as Test Accuracy, Precision, Recall, F1 score, Matthews Correlation Coefficient (MCC), and Cohen's Kappa (CK) score were considered. Results demonstrated that analyzing the emotions in tweets surpassed the predictive power of demographic data alone, enabling the identification of individuals at potential risk of developing CVD. This research highlights the potential of Natural Language Processing (NLP) and ML techniques in using tweets to identify individuals with CVD risks, providing an alternative approach to traditional demographic information for public health monitoring.
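A minimal sketch of the VADER-based screening step. The keyword set and the risk rule below are placeholders; the paper builds its own CVD keyword dictionary and feeds the results into ML classifiers, which is not reproduced here.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Placeholder CVD-related keywords; the paper derives its own dictionary from tweet emotions.
CVD_KEYWORDS = {"chest pain", "heart", "blood pressure", "stressed", "fatigue"}

analyzer = SentimentIntensityAnalyzer()

def at_risk(tweet: str, threshold: float = -0.3) -> bool:
    """Flag a tweet as a potential CVD-risk signal when it mentions a CVD keyword
    and its VADER compound sentiment is strongly negative (rule is illustrative)."""
    text = tweet.lower()
    mentions_cvd = any(kw in text for kw in CVD_KEYWORDS)
    compound = analyzer.polarity_scores(text)["compound"]
    return mentions_cvd and compound <= threshold

print(at_risk("So stressed lately, chest pain keeps coming back and I can't sleep"))
```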
Submitted 28 September, 2023; v1 submitted 22 September, 2023;
originally announced September 2023.
-
Optical, Thermal, and Electrical Analysis of Perovskite Solar Cell with Grated CdS and Embedded Plasmonic Au Nanoparticles
Authors:
Ohidul Islam,
M. Hussayeen Khan Anik,
Sakib Mahmud,
Joyprokash Debnath,
Ahsan Habib,
Sharnali Islam
Abstract:
We propose a novel approach to enhance the performance of perovskite solar cells (PSCs) by incorporating grated Cadmium Sulfide (CdS) and plasmonic gold nanoparticles (Au NPs) into the absorber layer. The CdS grating acts as the electron transport layer and penetrates into the perovskite absorber layer, increasing the absorption of the active layer and reducing the electron-hole recombination rate. The plasmonic Au NPs enhance absorption in the infrared region by scattering and trapping the incident light. We perform a coupled optical and electrical study that shows a significant improvement in the short-circuit current density (JSC) and power conversion efficiency (PCE) of the PSC after introducing the CdS grating and plasmonic Au NPs. Specifically, we observe a 48% increase in average optical absorption from 800 nm to 1400 nm and a 7.42 mA/cm^2 increase in JSC. We also find that the PCE of the PSC is increased by 7.91% compared with the planar reference structure (without the CdS grating and the plasmonic Au NPs). However, metal nanoparticles introduce ohmic losses and a temperature rise in the solar cell. We analyze the non-radiative heat profile, electric field distribution, and temperature distribution across the PSC. We observe a temperature increase of approximately 14 K above the ambient temperature for the grated CdS layer with incorporated Au NPs, which is comparable to the temperature increase observed in the planar reference structure. Our results have the potential to pave the way for the development of highly efficient and stable PSCs in the future.
Submitted 18 September, 2023;
originally announced September 2023.
-
Learning to Represent Patches
Authors:
Xunzhu Tang,
Haoye Tian,
Zhenghan Chen,
Weiguo Pian,
Saad Ezzini,
Abdoul Kader Kabore,
Andrew Habib,
Jacques Klein,
Tegawende F. Bissyande
Abstract:
Patch representation is crucial in automating various software engineering tasks, like determining patch accuracy or summarizing code changes. While recent research has employed deep learning for patch representation, focusing on token sequences or Abstract Syntax Trees (ASTs), such approaches often miss the change's semantic intent and the context of modified lines. To bridge this gap, we introduce a novel method, Patcherizer. It delves into the intentions of the context and structure, merging the surrounding code context with two innovative representations that capture the intention in code changes and the intention in AST structural modifications pre- and post-patch. This holistic representation aptly captures a patch's underlying intentions. Patcherizer employs graph convolutional neural networks for structural intention graph representation and transformers for intention sequence representation. We evaluated the versatility of Patcherizer's embeddings in three areas: (1) patch description generation, (2) patch accuracy prediction, and (3) patch intention identification. Our experiments demonstrate the representation's efficacy across all tasks, outperforming state-of-the-art methods. For example, in patch description generation, Patcherizer excels, showing an average boost of 19.39% in BLEU, 8.71% in ROUGE-L, and 34.03% in METEOR scores.
Submitted 3 October, 2023; v1 submitted 31 August, 2023;
originally announced August 2023.
-
Reconfigurable Intelligent Surface Assisted Railway Communications: A survey
Authors:
Aline Habib,
Ammar El Falou,
Charlotte Langlais,
Marion Berbineau
Abstract:
The number of train passengers and the demand for high data rates to support new technologies such as video streaming and IoT are continuously increasing. Therefore, exploring the millimeter-wave (mmWave) band is a key approach to meeting this demand. However, high penetration loss makes mmWave very sensitive to blockage, limiting its coverage area. One promising, efficient, and low-cost solution is the reconfigurable intelligent surface (RIS). This paper reviews the state of the art of RIS for railway communications in the mmWave context. First, we present the different types of RIS and review some optimization algorithms used in the literature to find the RIS phase shifts. Then, we review recent works on RIS in the railway domain and provide future directions.
Submitted 10 July, 2023;
originally announced July 2023.
-
Intent-driven Intelligent Control and Orchestration in O-RAN Via Hierarchical Reinforcement Learning
Authors:
Md Arafat Habib,
Hao Zhou,
Pedro Enrique Iturria-Rivera,
Medhat Elsayed,
Majid Bavand,
Raimundas Gaigalas,
Yigit Ozcan,
Melike Erol-Kantarci
Abstract:
rApps and xApps need to be controlled and orchestrated well in the open radio access network (O-RAN) so that they can deliver guaranteed network performance in a complex multi-vendor environment. This paper proposes a novel intent-driven intelligent control and orchestration scheme based on hierarchical reinforcement learning (HRL). The proposed scheme can orchestrate multiple rApps or xApps according to the operator's intent of optimizing certain key performance indicators (KPIs), such as throughput, energy efficiency, and latency. Specifically, we propose a bi-level architecture with a meta-controller and a controller. The meta-controller provides the target performance in terms of KPIs, while the controller performs xApp orchestration at the lower level. Our simulation results show that the proposed HRL-based intent-driven xApp orchestration mechanism achieves 7.5% and 21.4% increases in average system throughput with respect to two baselines, i.e., a single xApp baseline and a non-machine-learning-based algorithm, respectively. Similarly, 17.3% and 37.9% increases in energy efficiency are observed in comparison to the same baselines.
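A skeletal sketch of the bi-level control loop described here: an upper level mapping an intent to a KPI goal, and a lower level selecting which xApp to activate. The agent internals, xApp names, and the environment stub are placeholders, not the paper's trained policies or simulator.

```python
import random

X_APPS = ["traffic_steering", "energy_saver", "beam_mgmt"]   # illustrative xApp pool

class MetaController:
    """Upper level: maps an operator intent to a KPI target (goal)."""
    def select_goal(self, intent: str) -> dict:
        return {"throughput_mbps": 120.0} if "throughput" in intent else {"energy_eff": 0.8}

class Controller:
    """Lower level: picks which xApp to activate to chase the current goal."""
    def select_xapp(self, goal: dict, kpis: dict) -> str:
        return random.choice(X_APPS)  # placeholder; an RL policy would go here

def step_env(xapp: str) -> dict:
    return {"throughput_mbps": random.uniform(80, 140)}       # stub network simulator

meta, ctrl = MetaController(), Controller()
goal = meta.select_goal("maximize throughput")
kpis = {"throughput_mbps": 0.0}
for t in range(5):
    xapp = ctrl.select_xapp(goal, kpis)
    kpis = step_env(xapp)
    print(t, xapp, round(kpis["throughput_mbps"], 1))
```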
Submitted 5 July, 2023;
originally announced July 2023.
-
Interferometric apodization by homothety -- I. Optimization of the device parameters
Authors:
Jamal Chafi,
Youssef El Azhari,
Ossama Azagrouze,
Abdelhadi Jabiri,
Zouhair Benkhaldoun,
Abdelfatah Habib,
Youssef Errazzouki
Abstract:
This study is focused on the field of very high dynamic range imaging, specifically the direct observation of exoplanetary systems. The coronagraph is an essential technique for suppressing the star's light, making it possible to detect an exoplanet with very weak luminosity compared to its host star. Apodization improves the rejection of the coronagraph, thereby increasing its sensitivity. This work presents the apodization method by interferometry using homothety, with either a rectangular or circular aperture. We discuss the principle of the method and the proposed experimental setup, and present the results obtained by optimizing the free parameters of the system while concentrating the maximum light energy in the central diffraction lobe, with a concentration rate of 93.6\% for the circular aperture and 91.5\% for the rectangular geometry. The obtained results enabled scaling the various elements of the experiment in accordance with practical constraints. Simulation results are presented for both circular and rectangular apertures. We also performed simulations on a hexagonal aperture, both with and without a central obstruction, as well as a segmented aperture similar to the one used in the Thirty Meter Telescope (TMT). This approach enables the attainment of a contrast of approximately $10^{-4}$ at small angular separations, specifically around $1.8\lambda/D$. When integrated with a coronagraph, this technique exhibits great promise. These findings confirm that our proposed technique can effectively enhance the performance of a coronagraph.
Submitted 2 July, 2023;
originally announced July 2023.
-
Extended NYUSIM-based MmWave Channel Model and Simulator for RIS-Assisted Systems
Authors:
Aline Habib,
Israa Khaled,
Ammar El Falou,
Charlotte Langlais
Abstract:
Spectrum scarcity has motivated the exploration of the millimeter-wave (mmWave) band as a key technology to cope with the ever-increasing data traffic. However, in this band, radiofrequency waves are highly susceptible to transmission loss and blockage. Recently, reconfigurable intelligent surfaces (RIS) have been proposed to transform the random nature of the propagation channel into a programmable and controllable radio environment. This innovative technique can improve mmWave coverage. However, most works consider theoretical channel models. In order to fill the gap towards a realistic RIS channel simulator, we extend the 3D statistical channel simulator NYUSIM based on extensive measurements to help model RIS-assisted mmWave systems. We validate the extended simulator analytically and via simulations. In addition, we study the received power in different configurations. Finally, we highlight the effectiveness of using RIS when the direct link is partially blocked or non-existent.
Submitted 21 June, 2023;
originally announced June 2023.
-
A Secure Land Record Management System using Blockchain Technology
Authors:
Md. Samir Shahariar,
Pranta Banik,
Md. Ahsan Habib
Abstract:
A land record (LR) contains very sensitive information related to land, e.g., owner, buyer, etc. Currently, almost all over the world, LRs are maintained by different governmental offices, and most of them maintain the LR with a paper-based approach. Some works focus on digitalizing the existing land record management system (LRMS), but with some security concerns. A blockchain-based LRMS can be effective enough to solve the existing issues. This paper proposes a blockchain-based LRMS that (i) digitalizes the existing paper-based system, (ii) ensures LR privacy using an asymmetric cryptosystem, (iii) preserves LR integrity, (iv) facilitates a platform for trading land through an advertising agency, and (v) accelerates the process of changing ownership, saving time significantly. Besides, this paper also proposes a new character-to-integer mapping named the C2I table, which reduces the overhead of text-to-integer conversion by around 33% compared to the ASCII table. The experimental results, analyses, and comparisons indicate the effectiveness of the proposed LRMS over state-of-the-art systems.
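The actual C2I table is not given in this abstract, so the following is only a hypothetical sketch of the idea: mapping a restricted land-record alphabet to small integer codes rather than full 8-bit ASCII. The alphabet, codes, and the 33% saving claimed by the paper are not reproduced here.

```python
import string

# Hypothetical C2I table: a 64-symbol alphabet (digits, letters, space, dot) mapped to
# small integers, so each character fits in 6 bits instead of 8-bit ASCII.
ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase + " ."
C2I = {ch: i for i, ch in enumerate(ALPHABET)}
I2C = {i: ch for ch, i in C2I.items()}

def encode(text: str) -> list[int]:
    return [C2I[ch] for ch in text]

def decode(codes: list[int]) -> str:
    return "".join(I2C[c] for c in codes)

codes = encode("PLOT 42 DHAKA")
assert decode(codes) == "PLOT 42 DHAKA"
print(codes)
```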
Submitted 9 February, 2023;
originally announced April 2023.
-
A Secure Medical Record Sharing Scheme Based on Blockchain and Two-fold Encryption
Authors:
Md. Ahsan Habib,
Kazi Md. Rokibul Alam,
Yasuhiko Morimoto
Abstract:
Usually, a medical record (MR) contains a patient's disease-oriented sensitive information. In addition, the MR needs to be shared among different bodies, e.g., diagnostic centres, hospitals, physicians, etc. Hence, retaining the privacy and integrity of the MR is crucial. A blockchain-based secure MR sharing system can manage these aspects properly. This paper proposes a blockchain-based electronic (e-) MR sharing scheme that (i) considers the medical image and the text as the input, (ii) enriches data privacy through a two-fold encryption mechanism consisting of an asymmetric cryptosystem and dynamic DNA encoding, (iii) assures data integrity by storing the encrypted e-MR in a distinct block designated for each user in the blockchain, and (iv) eventually enables authorized entities to regain the e-MR through decryption. Preliminary evaluations, analyses, and comparisons with state-of-the-art works imply the efficacy of the proposed scheme.
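For intuition, a minimal sketch of the dynamic DNA-encoding idea: 2-bit groups are mapped to nucleotide bases under a rule that changes with position. The rule set and key handling are illustrative assumptions, and the asymmetric-cryptosystem layer of the paper's two-fold scheme is omitted.

```python
# Each 2-bit pair maps to a base; the rule rotates per byte ("dynamic").
# The paper's actual rule selection and key management are not reproduced here.
RULES = [
    {"00": "A", "01": "C", "10": "G", "11": "T"},
    {"00": "C", "01": "G", "10": "T", "11": "A"},
    {"00": "G", "01": "T", "10": "A", "11": "C"},
    {"00": "T", "01": "A", "10": "C", "11": "G"},
]

def dna_encode(data: bytes, key: int) -> str:
    out = []
    for i, byte in enumerate(data):
        rule = RULES[(key + i) % len(RULES)]            # mapping changes with position
        bits = f"{byte:08b}"
        out.extend(rule[bits[j:j + 2]] for j in range(0, 8, 2))
    return "".join(out)

def dna_decode(seq: str, key: int) -> bytes:
    data = bytearray()
    for i in range(0, len(seq), 4):
        rule = {v: k for k, v in RULES[(key + i // 4) % len(RULES)].items()}
        bits = "".join(rule[b] for b in seq[i:i + 4])
        data.append(int(bits, 2))
    return bytes(data)

cipher = dna_encode(b"Patient: anonymized", key=3)
assert dna_decode(cipher, key=3) == b"Patient: anonymized"
print(cipher[:24])
```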
Submitted 9 February, 2023;
originally announced April 2023.
-
Machine Learning Models Capture Plasmon Dynamics in Ag Nanoparticles
Authors:
Adela Habib,
Nicholas Lubbers,
Sergei Tretiak,
Benjamin Nebgen
Abstract:
Highly energetic electron-hole pairs (hot carriers) formed from plasmon decay in metallic nanostructures promise sustainable pathways for energy-harvesting devices. However, efficient collection before thermalization remains an obstacle to realizing their full energy-generating potential. Addressing this challenge requires detailed understanding of the physical processes from plasmon excitation in the metal to carrier collection in a molecule or a semiconductor, where atomistic theoretical investigation may be particularly beneficial. Unfortunately, first-principles theoretical modeling of these processes is extremely costly, limiting the analysis to systems with a few hundred atoms. Recent advances in machine-learned interatomic potentials suggest that dynamics can be accelerated with surrogate models that replace the full solution of the Schrödinger equation. Here, we modify an existing neural network, the Hierarchically Interacting Particle Neural Network (HIP-NN), to predict plasmon dynamics in Ag nanoparticles. We demonstrate the model's capability to accurately predict plasmon dynamics in large nanoparticles of up to 561 atoms not present in the training dataset. More importantly, with machine learning models we gain a speed-up of about 200 times compared with rt-TDDFT calculations when predicting important physical quantities such as dynamic dipole moments in Ag55, and about 4000 times for extended nanoparticles that are 10 times larger. This underscores the promise of future machine-learning-accelerated electron/nuclear dynamics simulations for understanding fundamental properties of plasmon-driven hot carrier devices.
Submitted 7 March, 2023;
originally announced March 2023.
-
Physarum Inspired Bicycle Lane Network Design in a Congested Mega City
Authors:
Md. Ahsan Habib,
M. A. H. Akhand
Abstract:
Mobility is a key factor in urban life, and the transport network plays a vital role in mobility. A poor transport network with low mobility is one of the key reasons for declining living standards in any unplanned mega city. Enhancing transport mobility in an unplanned mega city is always challenging due to various constraints, including complex design and high cost. The aim of this thesis is to enhance transport mobility in a megacity by introducing a bicycle lane. To design the bicycle lane, natural Physarum, a brainless, single-celled, multi-nucleated protist, is studied and modified for better optimization. Recently, Physarum-inspired techniques have drawn significant attention for the construction of effective networks. Existing Physarum-inspired models effectively and efficiently solve different problems, including transport network design and modification; their application to bicycle lane design is the unique contribution of this study. The central area of Dhaka, the capital city of Bangladesh, is considered to analyze and design the bicycle lane network bypassing primary roads.
Submitted 29 January, 2023;
originally announced January 2023.
-
Optimising complexity of CNN models for resource constrained devices: QRS detection case study
Authors:
Ahsan Habib,
Chandan Karmakar,
John Yearwood
Abstract:
Traditional DL models are complex and resource-hungry, and thus care needs to be taken in designing Internet of (medical) Things (IoT, or IoMT) applications that balance the efficiency-complexity trade-off. Recent IoT solutions tend to avoid deep-learning methods due to such complexities, and classical filter-based methods are commonly used instead. We hypothesize that a shallow CNN model can offer a satisfactory level of performance when combined with other essential solution components, such as post-processing, that are suitable for resource-constrained environments. In an IoMT application context, using QRS detection and R-peak localisation from the ECG signal as a case study, the complexities of CNN models and post-processing were varied to identify a set of combinations suitable for a range of target resource-limited environments. To the best of our knowledge, finding a deployable configuration by incrementally increasing the CNN model complexity, as required to match the target's resource capacity, while leveraging the strength of post-processing, is the first work of its kind. The results show that a shallow 2-layer CNN with suitable post-processing can achieve $>$90\% F1-score, and the scores continue to improve for 8-32 layer CNNs, which can be used to profile the target constrained environment. The outcome shows that it is possible to design an optimal DL solution with known target performance characteristics and resource (computing capacity and memory) constraints.
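A minimal PyTorch sketch of what a shallow 2-layer 1D CNN for per-sample QRS scoring could look like. Layer widths, kernel sizes, the sampling rate, and the thresholding placeholder are assumptions, not the paper's tuned configuration or post-processing.

```python
import torch
import torch.nn as nn

class ShallowQRSNet(nn.Module):
    """Two 1D conv layers producing a per-sample QRS probability; sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(8, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x):               # x: (batch, 1, samples)
        return self.net(x)

model = ShallowQRSNet()
ecg = torch.randn(2, 1, 3600)           # e.g., 10 s of ECG at 360 Hz (synthetic input)
probs = model(ecg)                       # (2, 1, 3600) per-sample QRS scores
peaks = (probs > 0.5).float()            # naive thresholding stands in for real post-processing
print(probs.shape, int(peaks.sum()))
```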
Submitted 22 January, 2023;
originally announced January 2023.
-
Hierarchical Reinforcement Learning Based Traffic Steering in Multi-RAT 5G Deployments
Authors:
Md Arafat Habib,
Hao Zhou,
Pedro Enrique Iturria-Rivera,
Medhat Elsayed,
Majid Bavand,
Raimundas Gaigalas,
Yigit Ozcan,
Melike Erol-Kantarci
Abstract:
In 5G non-standalone mode, an intelligent traffic steering mechanism can vastly aid in ensuring a smooth user experience by selecting the best radio access technology (RAT) from a multi-RAT environment for a specific traffic flow. In this paper, we propose a novel load-aware traffic steering algorithm based on hierarchical reinforcement learning (HRL) that satisfies the diverse QoS requirements of different traffic types. HRL can significantly increase system performance using a bi-level architecture with a meta-controller and a controller. In our proposed method, the meta-controller provides an appropriate threshold for load balancing, while the controller performs traffic admission to an appropriate RAT at the lower level. Simulation results show that HRL outperforms Deep Q-Learning (DQN) and a threshold-based heuristic baseline, with 8.49% and 12.52% higher average system throughput and 27.74% and 39.13% lower network delay, respectively.
Submitted 18 January, 2023;
originally announced January 2023.
-
Traffic Steering for 5G Multi-RAT Deployments using Deep Reinforcement Learning
Authors:
Md Arafat Habib,
Hao Zhou,
Pedro Enrique Iturria Rivera,
Medhat Elsayed,
Majid Bavand,
Raimundas Gaigalas,
Steve Furr,
Melike Erol-Kantarci
Abstract:
In 5G non-standalone mode, traffic steering is a critical technique for taking full advantage of 5G new radio while optimizing dual connectivity of 5G and LTE networks in a multiple radio access technology (RAT) setting. An intelligent traffic steering mechanism can play an important role in maintaining a seamless user experience by dynamically choosing the appropriate RAT (5G or LTE) for a specific user traffic flow with certain QoS requirements. In this paper, we propose a novel traffic steering mechanism based on Deep Q-learning that can automate traffic steering decisions in a dynamic environment with multiple RATs and maintain diverse QoS requirements for different traffic classes. The proposed method is compared with two baseline algorithms: a heuristic-based algorithm and Q-learning-based traffic steering. Compared to the Q-learning and heuristic baselines, our results show that the proposed algorithm achieves better performance, with 6% and 10% higher average system throughput and 23% and 33% lower network delay, respectively.
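A sketch of the Deep Q-learning component only: a small Q-network mapping flow and load features to a RAT choice (0 = LTE, 1 = 5G NR). The state features, network sizes, and greedy action selection are assumptions; the reward design and training loop are not shown.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Q-values over two actions (steer flow to LTE or to 5G NR).
    The 4-dimensional state (e.g., QoS class, LTE load, NR load, SNR) is illustrative."""
    def __init__(self, state_dim: int = 4, n_actions: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state):
        return self.net(state)

q_net = QNetwork()
state = torch.tensor([[0.0, 0.7, 0.3, 0.9]])    # one traffic flow's features (made up)
with torch.no_grad():
    action = int(q_net(state).argmax(dim=1))     # greedy steering decision
print("steer to", "5G NR" if action == 1 else "LTE")
```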
Submitted 12 January, 2023;
originally announced January 2023.
-
Emotion Recognition from Microblog Managing Emoticon with Text and Classifying using 1D CNN
Authors:
Md. Ahsan Habib,
M. A. H. Akhand,
Md. Abdus Samad Kamal
Abstract:
A microblog, an online broadcast medium, is a widely used forum for people to share their thoughts and opinions. Recently, Emotion Recognition (ER) from microblogs has become an inspiring research topic in diverse areas. In the machine learning domain, automatic emotion recognition from microblogs is a challenging task, especially for achieving better outcomes on diverse content. Emoticons have become very common in microblog text, as they reinforce the meaning of the content. This study proposes an emotion recognition scheme considering both the text and the emoticons in microblog data. Emoticons are considered unique expressions of the users' emotions and can be replaced by appropriate emotional words. The order of emoticons appearing in the microblog data is preserved, and a 1D Convolutional Neural Network (CNN) is employed for emotion classification. The experimental results show that the proposed emotion recognition scheme outperforms existing methods when tested on Twitter data.
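A minimal sketch of the emoticon-handling idea: replacing emoticons with emotional words before the text reaches the CNN, while keeping their original order. The emoticon-to-word map below is a small placeholder, not the paper's mapping, and the CNN itself is omitted.

```python
import re

# Placeholder emoticon-to-word map; the paper's mapping to "proper emotional words" is richer.
EMOTICON_WORDS = {
    ":)": "happy", ":-)": "happy", ":(": "sad", ":-(": "sad",
    ":D": "joyful", ":'(": "crying", ":O": "surprised",
}

def replace_emoticons(text: str) -> str:
    # Longest emoticons first so ":-)" is not partially matched as ":)"
    pattern = re.compile("|".join(re.escape(e) for e in sorted(EMOTICON_WORDS, key=len, reverse=True)))
    return pattern.sub(lambda m: EMOTICON_WORDS[m.group(0)], text)

tweet = "Missed the last bus again :( but the concert was amazing :D"
print(replace_emoticons(tweet))
# -> "Missed the last bus again sad but the concert was amazing joyful"
```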
Submitted 7 January, 2023;
originally announced January 2023.
-
Relevance Classification of Flood-related Twitter Posts via Multiple Transformers
Authors:
Wisal Mukhtiar,
Waliiya Rizwan,
Aneela Habib,
Yasir Saleem Afridi,
Laiq Hasan,
Kashif Ahmad
Abstract:
In recent years, social media has been widely explored as a potential source of communication and information in disasters and emergency situations. Several interesting works and case studies of disaster analytics, exploring different aspects of natural disasters, have already been conducted. Along with its great potential, disaster analytics comes with several challenges, mainly due to the nature of social media content. In this paper, we explore one such challenge and propose a text classification framework to deal with noisy Twitter data. More specifically, we employ several transformers, both individually and in combination, to differentiate between relevant and non-relevant Twitter posts, achieving a highest F1-score of 0.87.
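For illustration, a minimal Hugging Face sketch of transformer-based relevance classification on tweets. The checkpoint below is an off-the-shelf stand-in with an untuned classification head, so its labels are meaningless until fine-tuned on labelled flood tweets; the paper's actual models and ensembling are not reproduced here.

```python
from transformers import pipeline

# Stand-in checkpoint only; fine-tuning on relevant/non-relevant flood tweets is required
# before the predicted labels mean anything.
clf = pipeline("text-classification", model="distilbert-base-uncased")

tweets = [
    "Water level rising fast near the bridge, families being evacuated",
    "Throwback to our beach holiday last summer!",
]
for t in tweets:
    print(clf(t)[0], "-", t)
```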
Submitted 31 December, 2022;
originally announced January 2023.
-
Unconventionally Fast Transport through Sliding Dynamics of Rodlike Particles in Macromolecular Networks
Authors:
Xuanyu Zhang,
Xiaobin Dai,
Md Ahsan Habib,
Ziyang Xu,
Lijuan Gao,
Wenlong Chen,
Wenjie Wei,
Zhongqiu Tang,
Xianyu Qi,
Xiangjun Gong,
Lingxiang Jiang,
Li-Tang Yan
Abstract:
Transport of rodlike particles in the confined environments of macromolecular networks plays crucial roles in many important biological processes and technological applications. The relevant understanding has been limited to thin rods with diameters much smaller than the network mesh size, although the opposite case, whose dynamical behaviors and underlying physical mechanisms remain unclear, is ubiquitous. Here, we solve this issue by combining experiments, simulations, and theory. We find a nonmonotonic dependence of translational diffusion on rod length, characterized by length commensuration-governed, unconventionally fast dynamics, in striking contrast to the monotonic dependence for thin rods. Our results clarify that such fast diffusion of thick rods with lengths that are integer multiples of the mesh size follows sliding dynamics, and demonstrate it to be "anomalous yet Brownian". Moreover, good agreement between theoretical analysis and simulations corroborates that the sliding dynamics is an intermediate regime between hopping and Brownian dynamics, and provides a mechanistic interpretation based on the rod-length-dependent entropic free energy barrier. The findings yield a principle, namely length commensuration, for the optimal design of rodlike particles with highly efficient transport in confined environments of macromolecular networks, and might enrich the physics of diffusion dynamics in heterogeneous media.
Submitted 19 November, 2023; v1 submitted 26 December, 2022;
originally announced December 2022.
-
Attosecond-Angstrom free-electron-laser towards the cold beam limit
Authors:
A. F. Habib,
G. G. Manahan,
P. Scherkl,
T. Heinemann,
A. Sutherland,
R. Altuiri,
B. M. Alotaibi,
M. Litos,
J. Cary,
T. Raubenheimer,
E. Hemsing,
M. Hogan,
J. B. Rosenzweig,
P. H. Williams,
B. W. J. McNeil,
B. Hidding
Abstract:
Electron beam quality is paramount for X-ray pulse production in free-electron lasers (FELs). State-of-the-art linear accelerators (linacs) can deliver multi-GeV electron beams with sufficient quality for hard X-ray FELs, albeit requiring km-scale setups, whereas plasma-based accelerators can produce multi-GeV electron beams over metre-scale distances and are beginning to reach beam qualities sufficient for EUV FELs. We show that electron beams from plasma photocathodes, many orders of magnitude brighter than the state of the art, can be generated in plasma wakefield accelerators (PWFA), and then extracted, captured, transported, and injected into undulators without quality loss. These ultrabright, sub-femtosecond electron beams can drive hard X-ray FELs near the cold beam limit to generate coherent X-ray pulses of attosecond-Angstrom class, reaching saturation after only 10 metres of undulator. This plasma X-FEL opens pathways for novel photon science capabilities, such as unperturbed observation of electronic motion inside atoms at their natural time and length scales, and towards higher photon energies.
Submitted 8 December, 2022;
originally announced December 2022.
-
Site Assessment and Layout Optimization for Rooftop Solar Energy Generation in Worldview-3 Imagery
Authors:
Zeyad Awwad,
Abdulaziz Alharbi,
Abdulelah H. Habib,
Olivier L. de Weck
Abstract:
With the growth of residential rooftop PV adoption in recent decades, the problem of effective layout design has become increasingly important. Although a number of automated methods have been introduced, these tend to rely on simplifying assumptions and heuristics to improve computational tractability. We demonstrate a fully automated layout design pipeline that attempts to solve a more general formulation with greater geometric flexibility and that accounts for shading losses. Our approach generates rooftop areas from satellite imagery and uses MINLP optimization to select panel positions, azimuth angles, and tilt angles on an individual basis rather than imposing any predefined layouts. Our results demonstrate that shading plays a critical role in automated rooftop PV optimization and significantly changes the resulting layouts. Additionally, they suggest that, although several common heuristics are often effective, they may not be universally suitable due to complications resulting from geometric restrictions and shading losses. Finally, we evaluate a few specific heuristics from the literature and propose a potential new rule of thumb that may help improve rooftop solar energy potential when shading effects are considered.
Submitted 28 February, 2023; v1 submitted 7 December, 2022;
originally announced December 2022.