-
Beyond Training: Enabling Self-Evolution of Agents with MOBIMEM
Authors:
Zibin Liu,
Cheng Zhang,
Xi Zhao,
Yunfei Feng,
Bingyu Bai,
Dahu Feng,
Erhu Feng,
Yubin Xia,
Haibo Chen
Abstract:
Large Language Model (LLM) agents are increasingly deployed to automate complex workflows in mobile and desktop environments. However, current model-centric agent architectures struggle to self-evolve post-deployment: improving personalization, capability, and efficiency typically requires continuous model retraining/fine-tuning, which incurs prohibitive computational overheads and suffers from an inherent trade-off between model accuracy and inference efficiency.
To enable iterative self-evolution without model retraining, we propose MOBIMEM, a memory-centric agent system. MOBIMEM first introduces three specialized memory primitives to decouple agent evolution from model weights: (1) Profile Memory uses a lightweight distance-graph (DisGraph) structure to align with user preferences, resolving the accuracy-latency trade-off in user profile retrieval; (2) Experience Memory employs multi-level templates to instantiate execution logic for new tasks, ensuring capability generalization; and (3) Action Memory records fine-grained interaction sequences, reducing the reliance on expensive model inference. Building upon this memory architecture, MOBIMEM further integrates a suite of OS-inspired services to orchestrate execution: a scheduler that coordinates parallel sub-task execution and memory operations; an agent record-and-replay (AgentRR) mechanism that enables safe and efficient action reuse; and a context-aware exception handler that ensures graceful recovery from user interruptions and runtime errors.
Evaluation on AndroidWorld and top-50 apps shows that MOBIMEM achieves 83.1% profile alignment with 23.83 ms retrieval time (280x faster than GraphRAG baselines), improves task success rates by up to 50.3%, and reduces end-to-end latency by up to 9x on mobile devices.
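To make the memory-centric design concrete, here is a minimal sketch of how the three primitives could be organized as plain Python classes; the class and method names are ours, not MOBIMEM's API, and the DisGraph retrieval is reduced to a simple lookup.

```python
# Hypothetical sketch of MOBIMEM's three memory primitives; names are
# illustrative, not the paper's API.
from dataclasses import dataclass, field


@dataclass
class ProfileMemory:
    """User-preference store; the paper uses a distance-graph (DisGraph)."""
    preferences: dict = field(default_factory=dict)

    def retrieve(self, key: str):
        # Real system: low-latency graph traversal instead of a dict lookup.
        return self.preferences.get(key)


@dataclass
class ExperienceMemory:
    """Multi-level task templates used to instantiate logic for new tasks."""
    templates: dict = field(default_factory=dict)

    def instantiate(self, task: str, **params):
        template = self.templates.get(task, "{task} with {params}")
        return template.format(task=task, params=params)


@dataclass
class ActionMemory:
    """Fine-grained interaction traces recorded for later replay."""
    traces: list = field(default_factory=list)

    def record(self, action: dict):
        self.traces.append(action)

    def replay(self):
        # AgentRR would validate preconditions before reusing each step.
        yield from self.traces
```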
Submitted 15 December, 2025;
originally announced December 2025.
-
M-GRPO: Stabilizing Self-Supervised Reinforcement Learning for Large Language Models with Momentum-Anchored Policy Optimization
Authors:
Bizhe Bai,
Hongming Wu,
Peng Ye,
Tao Chen
Abstract:
Self-supervised reinforcement learning (RL) presents a promising approach for enhancing the reasoning capabilities of Large Language Models (LLMs) without reliance on expensive human-annotated data. However, we find that existing methods suffer from a critical failure mode under long-horizon training: a "policy collapse" where performance precipitously degrades. We diagnose this instability and demonstrate that simply scaling the number of rollouts -- a common strategy to improve performance -- only delays, but does not prevent, this collapse. To counteract this instability, we first introduce M-GRPO (Momentum-Anchored Group Relative Policy Optimization), a framework that leverages a slowly evolving momentum model to provide a stable training target. In addition, we identify that this process is often accompanied by a rapid collapse in policy entropy, resulting in a prematurely confident and suboptimal policy. To specifically address this issue, we propose a second contribution: an adaptive filtering method based on the interquartile range (IQR) that dynamically prunes low-entropy trajectories, preserving essential policy diversity. Our extensive experiments on multiple reasoning benchmarks demonstrate that M-GRPO stabilizes the training process while the IQR filter prevents premature convergence. The combination of these two innovations leads to superior training stability and state-of-the-art performance.
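The IQR-based filter is straightforward to sketch; the fence choice (Tukey's Q1 - 1.5 IQR on the low side) is our assumption, as the abstract does not specify the exact rule.

```python
import numpy as np

def iqr_filter_low_entropy(trajectories, entropies, k=1.5):
    """Prune trajectories whose policy entropy is an outlier on the low
    side (Tukey fence Q1 - k*IQR); k=1.5 is an assumed default."""
    entropies = np.asarray(entropies)
    q1, q3 = np.percentile(entropies, [25, 75])
    lower_fence = q1 - k * (q3 - q1)
    keep = entropies >= lower_fence
    return [t for t, m in zip(trajectories, keep) if m]

# Toy usage: the 0.01-entropy rollout is dropped, preserving diversity.
rollouts = ["r1", "r2", "r3", "r4"]
print(iqr_filter_low_entropy(rollouts, [0.9, 0.8, 0.85, 0.01]))
# -> ['r1', 'r2', 'r3']
```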
Submitted 15 December, 2025;
originally announced December 2025.
-
Autonomous labeling of surgical resection margins using a foundation model
Authors:
Xilin Yang,
Musa Aydin,
Yuhong Lu,
Sahan Yoruc Selcuk,
Bijie Bai,
Yijie Zhang,
Andrew Birkeland,
Katjana Ehrlich,
Julien Bec,
Laura Marcu,
Nir Pillar,
Aydogan Ozcan
Abstract:
Assessing resection margins is central to pathological specimen evaluation and has profound implications for patient outcomes. Current practice employs physical inking, which is applied variably, and cautery artifacts can obscure the true margin on histological sections. We present a virtual inking network (VIN) that autonomously localizes the surgical cut surface on whole-slide images, reducing reliance on inks and standardizing margin-focused review. VIN uses a frozen foundation model as the feature extractor and a compact two-layer multilayer perceptron trained for patch-level classification of cautery-consistent features. The dataset comprised 120 hematoxylin and eosin (H&E) stained slides from 12 human tonsil tissue blocks, resulting in ~2 TB of uncompressed raw image data, where a board-certified pathologist provided boundary annotations. In blind testing with 20 slides from previously unseen blocks, VIN produced coherent margin overlays that qualitatively aligned with expert annotations across serial sections. Quantitatively, region-level accuracy was ~73.3% across the test set, with errors largely confined to limited areas that did not disrupt continuity of the whole-slide margin map. These results indicate that VIN captures cautery-related histomorphology and can provide a reproducible, ink-free margin delineation suitable for integration into routine digital pathology workflows and for downstream measurement of margin distances.
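A minimal PyTorch sketch of the described head, with the pathology foundation model replaced by a stand-in backbone; the feature dimension (768) and hidden width (256) are our assumptions.

```python
import torch
import torch.nn as nn

# Sketch of the VIN head: a frozen feature extractor followed by a
# compact two-layer MLP for patch-level classification.
class VirtualInkingHead(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim=768, hidden=256):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # keep the foundation model frozen
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # cautery-consistent vs. not
        )

    def forward(self, patches):
        with torch.no_grad():
            feats = self.backbone(patches)
        return self.mlp(feats)

# Toy usage with a dummy backbone producing 768-d patch features.
dummy_backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 768))
model = VirtualInkingHead(dummy_backbone)
logits = model(torch.randn(4, 3, 224, 224))  # 4 patches -> (4, 2) logits
```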
Submitted 27 November, 2025;
originally announced November 2025.
-
HQ-SVC: Towards High-Quality Zero-Shot Singing Voice Conversion in Low-Resource Scenarios
Authors:
Bingsong Bai,
Yizhong Geng,
Fengping Wang,
Cong Wang,
Puyuan Guo,
Yingming Gao,
Ya Li
Abstract:
Zero-shot singing voice conversion (SVC) transforms a source singer's timbre to an unseen target speaker's voice while preserving melodic content without fine-tuning. Existing methods model speaker timbre and vocal content separately, losing essential acoustic information that degrades output quality while requiring significant computational resources. To overcome these limitations, we propose HQ-SVC, an efficient framework for high-quality zero-shot SVC. HQ-SVC first jointly extracts content and speaker features using a decoupled codec. It then enhances fidelity through pitch and volume modeling, preserving critical acoustic information typically lost in separate modeling approaches, and progressively refines outputs via differentiable signal processing and diffusion techniques. Evaluations confirm HQ-SVC significantly outperforms state-of-the-art zero-shot SVC methods in conversion quality and efficiency. Beyond voice conversion, HQ-SVC achieves superior voice naturalness compared to specialized audio super-resolution methods while natively supporting voice super-resolution tasks.
Submitted 15 November, 2025; v1 submitted 11 November, 2025;
originally announced November 2025.
-
Forget BIT, It is All about TOKEN: Towards Semantic Information Theory for LLMs
Authors:
Bo Bai
Abstract:
Large language models (LLMs) have demonstrated remarkable capabilities in numerous real-world applications. While the vast majority of research conducted from an experimental perspective is progressing rapidly, it demands substantial computational power, data, and other resources. Therefore, how to open the black box of LLMs from a theoretical standpoint has become a critical challenge. This paper takes the theory of rate-distortion functions, directed information, and Granger causality as its starting point to investigate the information-theoretic principles behind LLMs, leading to the development of a semantic information theory for LLMs, where the fundamental unit is the token, rather than the bit, which lacks any semantic meaning. By defining the probabilistic model of LLMs, we discuss structure-agnostic information-theoretic measures, such as the directed rate-distortion function in pre-training, the directed rate-reward function in post-training, and the semantic information flow in the inference phase. This paper also delves deeply into the theory of token-level semantic embedding and the information-theoretically optimal vectorization method. Thereafter, we propose a general definition of autoregressive LLMs, from which the Transformer architecture and its performance measures, such as the ELBO, generalization error bound, memory capacity, and semantic information measures, can be derived theoretically. Other architectures, such as Mamba/Mamba2 and LLaDA, are also discussed in our framework. Consequently, this paper provides a theoretical framework for understanding LLMs from the perspective of semantic information theory, which also offers the necessary theoretical tools for further in-depth research.
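For reference, the directed information underlying these measures is Massey's standard definition, which the paper's token-level quantities specialize:

```latex
% Massey's directed information from X^n to Y^n (standard definition):
I(X^n \to Y^n) = \sum_{i=1}^{n} I(X^i ; Y_i \mid Y^{i-1})
% Unlike the symmetric mutual information I(X^n; Y^n), directed
% information captures the causal (autoregressive) direction of influence.
```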
Submitted 2 November, 2025;
originally announced November 2025.
-
ToolPRM: Fine-Grained Inference Scaling of Structured Outputs for Function Calling
Authors:
Jianghao Lin,
Yuanyuan Shi,
Xin Peng,
Renjie Ding,
Hairui Wang,
Yuxuan Peng,
Bizhe Bai,
Weixi Song,
Fengshuo Bai,
Huacan Chai,
Weinan Zhang,
Fei Huang,
Ying Wen
Abstract:
Large language models (LLMs) are increasingly demonstrating strong capabilities as autonomous agents, with function calling serving as a core mechanism for interaction with the environment. Meanwhile, inference scaling has become a cutting-edge technique to enhance LLM performance by allocating more computational resources during the inference process. However, current research on inference scaling primarily focuses on unstructured output generation tasks, leaving its application to structured outputs, like function calling, largely underexplored. To bridge this gap, we propose an inference scaling framework that combines fine-grained beam search with a process reward model, ToolPRM, which scores the internal steps of each single function call. To train ToolPRM, we construct the first fine-grained intra-call process supervision dataset, automatically annotated with function-masking techniques to provide step-level rewards for structured tool-use reasoning. Extensive experiments demonstrate that ToolPRM beats coarse-grained and outcome reward models in terms of predictive accuracy, indicating its stronger capability in supervising the function calling inference process. Inference scaling equipped with ToolPRM also significantly improves backbone model performance across various function calling tasks and benchmarks. More importantly, we reveal a key principle for applying inference scaling techniques to structured outputs: "explore more but retain less", owing to the unrecoverable nature of structured function-call generation.
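A hedged sketch of the "explore more but retain less" principle applied to the internal steps of one function call; `score_step` is a stub standing in for ToolPRM, and the candidate generator and names are illustrative.

```python
# Beam search over partial function-call states: explore many
# continuations per beam, but retain only a few, since a committed
# wrong token in structured output cannot be recovered later.
def beam_search_call(init_state, expand, score_step, explore=8, retain=2,
                     max_steps=6):
    """expand(state) -> candidate next partial calls;
    score_step(state) -> process reward for the partial call."""
    beams = [(0.0, init_state)]
    for _ in range(max_steps):
        candidates = []
        for total, state in beams:
            for nxt in expand(state)[:explore]:   # explore widely
                candidates.append((total + score_step(nxt), nxt))
        if not candidates:
            break
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = candidates[:retain]               # retain few
    return beams[0][1]

# Toy usage: complete `f(` with the argument the PRM stub prefers.
best = beam_search_call(
    "f(",
    expand=lambda s: [] if s.endswith(")") else
                     [s + a for a in ['"a")', '"b")', 'None)']],
    score_step=lambda s: 1.0 if '"a"' in s else 0.2)
print(best)  # -> f("a")
```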
Submitted 16 October, 2025;
originally announced October 2025.
-
FlowXpert: Context-Aware Flow Embedding for Enhanced Traffic Detection in IoT Network
Authors:
Chao Zha,
Haolin Pan,
Bing Bai,
Jiangxing Wu,
Ruyun Zhang
Abstract:
In the Internet of Things (IoT) environment, continuous interaction among a large number of devices generates complex and dynamic network traffic, which poses significant challenges to rule-based detection approaches. Machine learning (ML)-based traffic detection technology, capable of identifying anomalous patterns and potential threats within this traffic, serves as a critical component in ensuring network security. This study first identifies a significant issue with widely adopted feature extraction tools (e.g., CICFlowMeter): the extensive use of time- and length-related features leads to high sparsity, which adversely affects model convergence. Furthermore, existing traffic detection methods generally lack an embedding mechanism capable of efficiently and comprehensively capturing the semantic characteristics of network traffic. To address these challenges, we propose a novel feature extraction tool that eliminates traditional time and length features in favor of context-aware semantic features related to the source host, thus improving the generalizability of the model. In addition, we design an embedding training framework that integrates the unsupervised DBSCAN clustering algorithm with a contrastive learning strategy to effectively capture fine-grained semantic representations of traffic. Extensive empirical evaluations are conducted on the real-world MAWI dataset to validate the proposed method in terms of detection accuracy, robustness, and generalization. Comparative experiments against several state-of-the-art (SOTA) models demonstrate the superior performance of our approach. Furthermore, we confirm its applicability and deployability in real-time scenarios.
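A minimal sketch of the clustering-to-contrastive-pairs step as we read it; synthetic blobs stand in for the context-aware flow features, and the eps/min_samples values are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

# DBSCAN groups unlabeled flow features; cluster co-membership then
# supplies positive pairs for a contrastive objective.
flows, _ = make_blobs(n_samples=200, n_features=16, centers=4,
                      random_state=0)          # stand-in flow features
labels = DBSCAN(eps=6.0, min_samples=5).fit_predict(flows)

pos_pairs = []
for c in set(labels):
    if c == -1:                                # -1 marks DBSCAN noise
        continue
    members = np.flatnonzero(labels == c)
    pos_pairs += [(int(i), int(j)) for i in members for j in members
                  if i < j]
print(f"{len(set(labels) - {-1})} clusters, {len(pos_pairs)} positive pairs")
```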
Submitted 25 September, 2025;
originally announced September 2025.
-
SynParaSpeech: Automated Synthesis of Paralinguistic Datasets for Speech Generation and Understanding
Authors:
Bingsong Bai,
Qihang Lu,
Wenbing Yang,
Zihan Sun,
Yueran Hou,
Peilei Jia,
Songbai Pu,
Ruibo Fu,
Yingming Gao,
Ya Li,
Jun Gao
Abstract:
Paralinguistic sounds, like laughter and sighs, are crucial for synthesizing more realistic and engaging speech. However, existing methods typically depend on proprietary datasets, while publicly available resources often suffer from incomplete speech, inaccurate or missing timestamps, and limited real-world relevance. To address these problems, we propose an automated framework for generating large-scale paralinguistic data and apply it to construct the SynParaSpeech dataset. The dataset comprises 6 paralinguistic categories with 118.75 hours of data and precise timestamps, all derived from natural conversational speech. Our contributions lie in introducing the first automated method for constructing large-scale paralinguistic datasets and releasing the SynParaSpeech corpus, which advances speech generation through more natural paralinguistic synthesis and enhances speech understanding by improving paralinguistic event detection. The dataset and audio samples are available at https://github.com/ShawnPi233/SynParaSpeech.
Submitted 28 September, 2025; v1 submitted 18 September, 2025;
originally announced September 2025.
-
DB3 Team's Solution For Meta KDD Cup' 25
Authors:
Yikuan Xia,
Jiazun Chen,
Yirui Zhan,
Suifeng Zhao,
Weipeng Jiang,
Chaorui Zhang,
Wei Han,
Bo Bai,
Jun Gao
Abstract:
This paper presents the db3 team's winning solution for the Meta CRAG-MM Challenge 2025 at KDD Cup'25. Addressing the challenge's unique multi-modal, multi-turn question answering benchmark (CRAG-MM), we developed a comprehensive framework that integrates tailored retrieval pipelines for different tasks with a unified LLM-tuning approach for hallucination control. Our solution features (1) domain-specific retrieval pipelines handling image-indexed knowledge graphs, web sources, and multi-turn conversations; and (2) advanced refusal training using SFT, DPO, and RL. The system achieved 2nd place in Task 1, 2nd place in Task 2, and 1st place in Task 3, securing the grand prize for excellence in ego-centric queries through superior handling of first-person perspective challenges.
Submitted 12 August, 2025;
originally announced September 2025.
-
Grid-like Error-Correcting Codes for Matrix Multiplication with Better Correcting Capability
Authors:
Hao Shi,
Zhengyi Jiang,
Zhongyi Huang,
Bo Bai,
Gong Zhang,
Hanxu Hou
Abstract:
Matrix multiplication over the real field constitutes a foundational operation in the training of deep learning models, serving as a computational cornerstone for both forward and backward propagation processes. However, the presence of silent data corruption (SDC) in large-scale distributed training environments poses a significant threat to model convergence and predictive accuracy, particularly when such errors manifest during matrix multiplication. Due to their transient and non-intrusive nature, these errors often evade detection, allowing them to propagate and accumulate over time, ultimately leading to substantial degradation in model performance. In this paper, we introduce a novel error-correcting coding framework specifically tailored for matrix multiplication operations. Our proposed framework is designed to detect and correct multiple computational errors that may arise during the execution of matrix products. By leveraging a grid-based structural encoding scheme, our approach enhances error localization and correction capabilities across all participating matrices, thereby significantly improving the fault tolerance of the computation. Experimental results demonstrate that our method achieves deterministic correction of up to two erroneous symbols distributed across three matrices with 100% reliability, while incurring only a 24% overhead in computational time on GPU architectures. Furthermore, we provide a rigorous theoretical analysis of the error-correction properties inherent to our coding scheme, establishing its correctness and robustness under well-defined fault models.
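Grid-like codes of this kind generalize classical algorithm-based fault tolerance (ABFT); for intuition, here is a numpy sketch of the classical single-error row/column-checksum scheme (Huang-Abraham style) that such schemes extend, not the paper's multi-error code itself.

```python
import numpy as np

# ABFT-style checksums for C = A @ B: append a column-sum row to A and
# a row-sum column to B; a single corrupted entry of C is localized by
# the (row, column) checksum mismatches and corrected in place.
rng = np.random.default_rng(1)
A, B = rng.normal(size=(4, 5)), rng.normal(size=(5, 3))

A_ext = np.vstack([A, A.sum(axis=0)])                  # checksum row
B_ext = np.hstack([B, B.sum(axis=1, keepdims=True)])   # checksum column
C_ext = A_ext @ B_ext

C_ext[1, 2] += 7.0  # inject a silent data corruption

row_err = C_ext[:-1, :-1].sum(axis=0) - C_ext[-1, :-1]  # column mismatches
col_err = C_ext[:-1, :-1].sum(axis=1) - C_ext[:-1, -1]  # row mismatches
i, j = np.argmax(np.abs(col_err)), np.argmax(np.abs(row_err))
C_ext[i, j] -= col_err[i]                               # correct the entry
assert np.allclose(C_ext[:-1, :-1], A @ B)
print(f"corrected single error at ({i}, {j})")
```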
Submitted 6 August, 2025;
originally announced August 2025.
-
NeuralDB: Scaling Knowledge Editing in LLMs to 100,000 Facts with Neural KV Database
Authors:
Weizhi Fei,
Hao Shi,
Jing Xu,
Jingchen Peng,
Jiazheng Li,
Jingzhao Zhang,
Bo Bai,
Wei Han,
Zhenyuan Chen,
Xueyan Niu
Abstract:
Efficiently editing knowledge stored in large language models (LLMs) enables model updates without large-scale training. One possible solution is Locate-and-Edit (L&E), allowing simultaneous modifications of a massive number of facts. However, such editing may compromise the general abilities of LLMs and even result in forgetting edited facts when scaling up to thousands of edits. In this paper, we model existing linear L&E methods as querying a Key-Value (KV) database. From this perspective, we then propose NeuralDB, an editing framework that explicitly represents the edited facts as a neural KV database equipped with a non-linear gated retrieval module. In particular, our gated module only operates when inference involves the edited facts, effectively preserving the general abilities of LLMs. Comprehensive experiments involving the editing of 10,000 facts were conducted on the ZsRE and CounterFact datasets, using GPT2-XL, GPT-J (6B) and Llama-3 (8B). The results demonstrate that NeuralDB not only excels in editing efficacy, generalization, specificity, fluency, and consistency, but also preserves overall performance across six representative text understanding and generation tasks. Further experiments indicate that NeuralDB maintains its effectiveness even when scaled to 100,000 facts (50x more than in prior work).
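A minimal sketch of the gated neural KV lookup as we read it; the cosine gate, threshold, and embeddings are illustrative assumptions, not the paper's exact module.

```python
import numpy as np

# Edited facts live in an external KV store; a gate fires only when the
# query is close enough to an edited key, otherwise the base model's
# original computation is left untouched.
class GatedFactStore:
    def __init__(self, threshold=0.8):
        self.keys, self.values, self.threshold = [], [], threshold

    def edit(self, key_vec, value):
        self.keys.append(key_vec / np.linalg.norm(key_vec))
        self.values.append(value)

    def lookup(self, query_vec):
        if not self.keys:
            return None
        q = query_vec / np.linalg.norm(query_vec)
        sims = np.stack(self.keys) @ q          # cosine similarities
        best = int(np.argmax(sims))
        # Gate: only intervene when the query matches an edited fact.
        return self.values[best] if sims[best] >= self.threshold else None

rng = np.random.default_rng(2)
store = GatedFactStore()
k = rng.normal(size=64)
store.edit(k, "edited answer")
print(store.lookup(k))                      # hit: returns edited value
print(store.lookup(rng.normal(size=64)))    # miss: None, base model runs
```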
Submitted 23 July, 2025;
originally announced July 2025.
-
Faster-than-Nyquist Signaling is Good for Single-Carrier ISAC: An Analytical Study
Authors:
Shuangyang Li,
Fan Liu,
Yifeng Xiong,
Weijie Yuan,
Baoming Bai,
Christos Masouros,
Giuseppe Caire
Abstract:
In this paper, we provide an analytical study of single-carrier faster-than-Nyquist (FTN) signaling for integrated sensing and communications (ISAC). Our derivations show that FTN is advantageous for ISAC, and reveal new insights that these advantages come from the fact that FTN signaling can effectively avoid the spectral aliasing due to the mismatch between the symbol rate and the bandwidth of the shaping pulse. Specifically, the communication spectral efficiency advantages of FTN signaling over time-invariant multipath channels are analytically shown, where both upper- and lower-bounds on the spectral efficiency are derived. We show that the gap between these two bounds corresponds to the potential signal-to-noise ratio (SNR) variation due to the presence of multipath delay and spectral aliasing, which diminishes as the symbol rate grows higher. Particularly, in the limiting case, this SNR variation disappears while the degrees of freedom (DoF) of the system attain the maximum. Furthermore, the sensing advantages of FTN signals are verified in terms of the expected normalized squared ambiguity function. We show that FTN signals generally enjoy a more robust ranging performance. More importantly, we prove that FTN signaling can effectively avoid the undesired peaks in the considered ambiguity function along the Doppler dimension, thereby reducing the ambiguities in velocity estimation. All these conclusions are explicitly verified by numerical results.
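For reference, one common convention for the ambiguity function against which such Doppler-axis peaks are assessed (sign conventions vary across texts):

```latex
% Narrowband ambiguity function of a waveform s(t):
A(\tau, \nu) = \int_{-\infty}^{\infty} s(t)\, s^{*}(t - \tau)\,
               e^{-j 2\pi \nu t}\, \mathrm{d}t
% Peaks of |A(\tau,\nu)|^2 away from the origin along the Doppler axis
% indicate velocity ambiguities, the artifacts the paper shows FTN
% signaling suppresses.
```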
Submitted 11 June, 2025;
originally announced June 2025.
-
Graph Evidential Learning for Anomaly Detection
Authors:
Chunyu Wei,
Wenji Hu,
Xingjia Hao,
Yunhai Wang,
Yueguo Chen,
Bing Bai,
Fei Wang
Abstract:
Graph anomaly detection faces significant challenges due to the scarcity of reliable anomaly-labeled datasets, driving the development of unsupervised methods. Graph autoencoders (GAEs) have emerged as a dominant approach by reconstructing graph structures and node features while deriving anomaly scores from reconstruction errors. However, relying solely on reconstruction error for anomaly detection has limitations, as it increases the sensitivity to noise and overfitting. To address these issues, we propose Graph Evidential Learning (GEL), a probabilistic framework that redefines the reconstruction process through evidential learning. By modeling node features and graph topology using evidential distributions, GEL quantifies two types of uncertainty: graph uncertainty and reconstruction uncertainty, incorporating them into the anomaly scoring mechanism. Extensive experiments demonstrate that GEL achieves state-of-the-art performance while maintaining high robustness against noise and structural perturbations.
Submitted 31 May, 2025;
originally announced June 2025.
-
Think Twice, Act Once: Token-Aware Compression and Action Reuse for Efficient Inference in Vision-Language-Action Models
Authors:
Xudong Tan,
Yaoxin Yang,
Peng Ye,
Jialin Zheng,
Bizhe Bai,
Xinyi Wang,
Jia Hao,
Tao Chen
Abstract:
Vision-Language-Action (VLA) models have emerged as a powerful paradigm for general-purpose robot control through natural language instructions. However, their high inference cost, stemming from large-scale token computation and autoregressive decoding, poses significant challenges for real-time deployment and edge applications. While prior work has primarily focused on architectural optimization, we take a different perspective by identifying a dual form of redundancy in VLA models: (i) high similarity across consecutive action steps, and (ii) substantial redundancy in visual tokens. Motivated by these observations, we propose FlashVLA, the first training-free and plug-and-play acceleration framework that enables action reuse in VLA models. FlashVLA improves inference efficiency through a token-aware action reuse mechanism that avoids redundant decoding across stable action steps, and an information-guided visual token selection strategy that prunes low-contribution tokens. Extensive experiments on the LIBERO benchmark show that FlashVLA reduces FLOPs by 55.7% and latency by 36.0%, with only a 0.7% drop in task success rate. These results demonstrate the effectiveness of FlashVLA in enabling lightweight, low-latency VLA inference without retraining.
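A hedged sketch of the action-reuse test; the cosine criterion and threshold are our assumptions, not FlashVLA's exact mechanism.

```python
import numpy as np

# If the current visual features are close enough to those of the last
# decoded step, skip decoding and replay the previous action.
def maybe_reuse_action(curr_feats, prev_feats, prev_action,
                       decode_fn, threshold=0.98):
    a, b = curr_feats.ravel(), prev_feats.ravel()
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    if cos >= threshold:
        return prev_action, True       # stable step: reuse, no decoding
    return decode_fn(curr_feats), False

prev = np.ones((16, 8))
action, reused = maybe_reuse_action(prev + 1e-3, prev, "grasp",
                                    decode_fn=lambda f: "replan")
print(action, reused)  # -> grasp True
```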
Submitted 27 May, 2025;
originally announced May 2025.
-
Wireless Large AI Model: Shaping the AI-Native Future of 6G and Beyond
Authors:
Fenghao Zhu,
Xinquan Wang,
Siming Jiang,
Xinyi Li,
Maojun Zhang,
Yixuan Chen,
Chongwen Huang,
Zhaohui Yang,
Xiaoming Chen,
Zhaoyang Zhang,
Richeng Jin,
Yongming Huang,
Wei Feng,
Tingting Yang,
Baoming Bai,
Feifei Gao,
Kun Yang,
Yuanwei Liu,
Sami Muhaidat,
Chau Yuen,
Kaibin Huang,
Kai-Kit Wong,
Dusit Niyato,
Ying-Chang Liang,
Mérouane Debbah
Abstract:
The emergence of sixth-generation and beyond communication systems is expected to fundamentally transform digital experiences through introducing unparalleled levels of intelligence, efficiency, and connectivity. A promising technology poised to enable this revolutionary vision is the wireless large AI model (WLAM), characterized by its exceptional capabilities in data processing, inference, and decision-making. In light of these remarkable capabilities, this paper provides a comprehensive survey of WLAM, elucidating its fundamental principles, diverse applications, critical challenges, and future research opportunities. We begin by introducing the background of WLAM and analyzing the key synergies with wireless networks, emphasizing the mutual benefits. Subsequently, we explore the foundational characteristics of WLAM, delving into their unique relevance in wireless environments. Then, the role of WLAM in optimizing wireless communication systems across various use cases and the reciprocal benefits are systematically investigated. Furthermore, we discuss the integration of WLAM with emerging technologies, highlighting their potential to enable transformative capabilities and breakthroughs in wireless communication. Finally, we thoroughly examine the high-level challenges hindering the practical implementation of WLAM and discuss pivotal future research directions.
Submitted 18 December, 2025; v1 submitted 20 April, 2025;
originally announced April 2025.
-
ER-RAG: Enhance RAG with ER-Based Unified Modeling of Heterogeneous Data Sources
Authors:
Yikuan Xia,
Jiazun Chen,
Yirui Zhan,
Suifeng Zhao,
Weipeng Jiang,
Chaorui Zhang,
Wei Han,
Bo Bai,
Jun Gao
Abstract:
Large language models (LLMs) excel in question-answering (QA) tasks, and retrieval-augmented generation (RAG) enhances their precision by incorporating external evidence from diverse sources like web pages, databases, and knowledge graphs. However, current RAG methods rely on agent-specific strategies for individual data sources, which poses challenges in low-resource or black-box environments and complicates operations when evidence is fragmented across sources. To address these limitations, we propose ER-RAG, a framework that unifies evidence integration across heterogeneous data sources using the Entity-Relationship (ER) model. ER-RAG standardizes entity retrieval and relationship querying through ER-based APIs with GET and JOIN operations. It employs a two-stage generation process: first, a preference optimization module selects optimal sources; second, another module constructs API chains based on source schemas. This unified approach allows efficient fine-tuning and seamless integration across diverse data sources. ER-RAG demonstrated its effectiveness by winning all three tracks of the 2024 KDD Cup CRAG Challenge, achieving performance on par with commercial RAG pipelines using an 8B LLM backbone. It outperformed hybrid competitors by 3.1% in LLM score and accelerated retrieval by 5.5x.
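A toy illustration of the unified GET/JOIN interface over one hypothetical source; the schema and helper definitions are ours, and a knowledge graph, web table, or database would each implement the same two operations behind the scenes.

```python
# Hypothetical in-memory "source" with two entity sets.
PEOPLE = [{"name": "Ada", "born": 1815}, {"name": "Alan", "born": 1912}]
WORKS = [{"author": "Ada", "title": "Notes"},
         {"author": "Alan", "title": "Computing Machinery"}]

def GET(table, **filters):
    """Entity retrieval: rows of `table` matching all attribute filters."""
    return [r for r in table
            if all(r.get(k) == v for k, v in filters.items())]

def JOIN(left, right, left_key, right_key):
    """Relationship traversal between two entity sets."""
    return [{**l, **r} for l in left for r in right
            if l.get(left_key) == r.get(right_key)]

# API chain for "titles of works by the person born in 1815":
chain = JOIN(GET(PEOPLE, born=1815), WORKS, "name", "author")
print([row["title"] for row in chain])   # -> ['Notes']
```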
Submitted 2 March, 2025;
originally announced April 2025.
-
Local Information Matters: Inference Acceleration For Grounded Conversation Generation Models Through Adaptive Local-Aware Token Pruning
Authors:
Bizhe Bai,
Jianjian Cao,
Yadan Luo,
Tao Chen
Abstract:
Grounded Conversation Generation (GCG) is an emerging vision-language task that requires models to generate natural language responses seamlessly intertwined with corresponding object segmentation masks. Recent models, such as GLaMM and OMG-LLaVA, achieve pixel-level grounding but incur significant computational costs due to processing a large number of visual tokens. Existing token pruning methods, like FastV and PyramidDrop, fail to preserve the local visual features critical for accurate grounding, leading to substantial performance drops in GCG tasks. To address this, we propose Adaptive Local-Aware Token Pruning (ALTP), a simple yet effective framework that accelerates GCG models by prioritizing local object information. ALTP introduces two key components: (1) Detail Density Capture (DDC), which uses superpixel segmentation to retain tokens in object-centric regions, preserving fine-grained details, and (2) Dynamic Density Formation (DDF), which dynamically allocates tokens based on information density, ensuring higher retention in semantically rich areas. Extensive experiments on the GranDf dataset demonstrate that ALTP significantly outperforms existing token pruning methods, such as FastV and PyramidDrop, on both GLaMM and OMG-LLaVA models. Notably, when applied to GLaMM, ALTP achieves a 90% reduction in visual tokens with a 4.9% improvement in AP50 and a 5.0% improvement in Recall compared to PyramidDrop. Similarly, on OMG-LLaVA, ALTP improves AP by 2.1% and mIoU by 3.0% at a 90% token reduction compared with PyramidDrop.
Submitted 1 April, 2025; v1 submitted 31 March, 2025;
originally announced March 2025.
-
Efficient Prompt Compression with Evaluator Heads for Long-Context Transformer Inference
Authors:
Weizhi Fei,
Xueyan Niu,
Guoqing Xie,
Yingqing Liu,
Bo Bai,
Wei Han
Abstract:
Although applications involving long-context inputs are crucial for the effective utilization of large language models (LLMs), they also result in increased computational costs and reduced performance. To address this challenge, we propose an efficient, training-free prompt compression method that retains key information within compressed prompts. We identify specific attention heads in transformer-based LLMs, which we designate as evaluator heads, that are capable of selecting tokens in long inputs that are most significant for inference. Building on this discovery, we develop EHPC, an Evaluator Head-based Prompt Compression method, which enables LLMs to rapidly "skim through" input prompts by leveraging only the first few layers with evaluator heads during the pre-filling stage, subsequently passing only the important tokens to the model for inference. EHPC achieves state-of-the-art results across two mainstream benchmarks: prompt compression and long-context inference acceleration. Consequently, it effectively reduces the complexity and costs associated with commercial API calls. We further demonstrate that EHPC attains competitive results compared to key-value cache-based acceleration methods, thereby highlighting its potential to enhance the efficiency of LLMs for long-context tasks.
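A minimal sketch of evaluator-head token scoring; random attention stands in for the pre-filling pass of the first few layers, and the head count and keep ratio are illustrative.

```python
import numpy as np

# Average the attention that designated evaluator heads pay to each
# prompt token, then keep only the top-scoring tokens for full inference.
rng = np.random.default_rng(3)
n_tokens, n_eval_heads = 12, 2
attn = rng.random((n_eval_heads, n_tokens, n_tokens))
attn /= attn.sum(-1, keepdims=True)          # row-normalize like softmax

# Importance of token j: attention it receives, averaged over evaluator
# heads and query positions.
importance = attn.mean(axis=(0, 1))
keep = np.sort(np.argsort(importance)[-6:])  # keep top 6, original order
print("kept token positions:", keep)
```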
Submitted 5 February, 2025; v1 submitted 22 January, 2025;
originally announced January 2025.
-
Revisit the AWGN-goodness of Polar-like Lattices
Authors:
Ling Liu,
Junjiang Yu,
Shanxiang Lyu,
Baoming Bai
Abstract:
This paper aims to provide a comprehensive introduction to lattices constructed based on polar-like codes and demonstrate some of their key properties, such as AWGN goodness. We first present polar lattices directly from the perspective of their generator matrix. Next, we discuss their connection with the recently proposed PAC (polarization adjusted convolutional) lattices and analyze the structural advantages of PAC lattices, through which the AWGN-goodness of PAC lattices can be conveniently demonstrated.
Submitted 14 November, 2025; v1 submitted 21 January, 2025;
originally announced January 2025.
-
Construction of Simultaneously Good Polar Codes and Polar Lattices
Authors:
Ling Liu,
Ruimin Yuan,
Shanxiang Lyu,
Cong Ling,
Baoming Bai
Abstract:
In this work, we investigate the simultaneous goodness of polar codes and polar lattices. The simultaneous goodness of a lattice or a code means that it is optimal for both channel coding and source coding simultaneously. The existence of such kind of lattices was proven by using random lattice ensembles. Our work provides an explicit construction based on the polarization technique.
Submitted 22 January, 2025; v1 submitted 21 January, 2025;
originally announced January 2025.
-
Lying mirror
Authors:
Yuhang Li,
Shiqi Chen,
Bijie Bai,
Aydogan Ozcan
Abstract:
We introduce an all-optical system, termed the "lying mirror", to hide input information by transforming it into misleading, ordinary-looking patterns that effectively camouflage the underlying image data and deceive the observers. This misleading transformation is achieved through passive light-matter interactions of the incident light with an optimized structured diffractive surface, enabling the optical concealment of any form of secret input data without any digital computing. These lying mirror designs were shown to camouflage different types of input image data, exhibiting robustness against a range of adversarial manipulations, including random image noise as well as unknown, random rotations, shifts, and scaling of the object features. The feasibility of the lying mirror concept was also validated experimentally using a structured micro-mirror array along with multi-wavelength illumination at 480, 550 and 600 nm, covering the blue, green and red image channels. This framework showcases the power of structured diffractive surfaces for visual information processing and might find various applications in defense, security and entertainment.
Submitted 20 October, 2024;
originally announced October 2024.
-
Towards Faster Graph Partitioning via Pre-training and Inductive Inference
Authors:
Meng Qin,
Chaorui Zhang,
Yu Gao,
Yibin Ding,
Weipeng Jiang,
Weixi Zhang,
Wei Han,
Bo Bai
Abstract:
Graph partitioning (GP) is a classic problem that divides the node set of a graph into densely-connected blocks. Following the IEEE HPEC Graph Challenge and recent advances in pre-training techniques (e.g., large language models), we propose PR-GPT (Pre-trained & Refined Graph ParTitioning) based on a novel pre-training & refinement paradigm. We first conduct the offline pre-training of a deep graph learning (DGL) model on small synthetic graphs with various topology properties. By using the inductive inference of DGL, one can directly generalize the pre-trained model (with frozen model parameters) to large graphs and derive feasible GP results. We also use the derived partition as a good initialization of an efficient GP method (e.g., InfoMap) to further refine the quality of partitioning. In this setting, the online generalization and refinement of PR-GPT can not only benefit from the transfer ability regarding quality but also ensure high inference efficiency without re-training. Based on a mechanism of reducing the scale of a graph to be processed by the refinement method, PR-GPT also has the potential to support streaming GP. Experiments on the Graph Challenge benchmark demonstrate that PR-GPT can ensure faster GP on large-scale graphs without significant quality degradation, compared with running a refinement method from scratch. We will make our code public at https://github.com/KuroginQin/PRGPT.
Submitted 1 September, 2024;
originally announced September 2024.
-
Enhancing and Accelerating Large Language Models via Instruction-Aware Contextual Compression
Authors:
Haowen Hou,
Fei Ma,
Binwen Bai,
Xinxin Zhu,
Fei Yu
Abstract:
Large Language Models (LLMs) have garnered widespread attention due to their remarkable performance across various tasks. However, to mitigate the issue of hallucinations, LLMs often incorporate retrieval-augmented pipelines to provide them with rich external knowledge and context. Nevertheless, challenges stem from inaccurate and coarse-grained context retrieved from the retriever. Supplying irrelevant context to the LLMs can result in poorer responses, increased inference latency, and higher costs. This paper introduces a method called Instruction-Aware Contextual Compression, which filters out less informative content, thereby accelerating and enhancing the use of LLMs. The experimental results demonstrate that Instruction-Aware Contextual Compression notably reduces memory consumption and minimizes generation latency while maintaining performance levels comparable to those achieved with the use of the full context. Specifically, we achieved a 50% reduction in context-related costs, resulting in a 5% reduction in inference memory usage and a 2.2-fold increase in inference speed, with only a minor drop of 0.047 in ROUGE-1. These findings suggest that our method strikes an effective balance between efficiency and performance.
Submitted 27 August, 2024;
originally announced August 2024.
-
A Mean Field Ansatz for Zero-Shot Weight Transfer
Authors:
Xingyuan Chen,
Wenwei Kuang,
Lei Deng,
Wei Han,
Bo Bai,
Goncalo dos Reis
Abstract:
The pre-training cost of large language models (LLMs) is prohibitive. One cutting-edge approach to reduce the cost is zero-shot weight transfer, also known as model growth in some cases, which magically transfers the weights trained in a small model to a large model. However, there are still some theoretical mysteries behind the weight transfer. In this paper, inspired by prior applications of mean field theory to neural network dynamics, we introduce a mean field ansatz to provide a theoretical explanation for weight transfer. Specifically, we propose the row-column (RC) ansatz under the mean field point of view, which describes the measure structure of the weights in the neural network (NN) and admits closed measure dynamics. Thus, the weights of NNs of different sizes admit a common distribution under proper assumptions, and weight transfer methods can be viewed as sampling methods. We empirically validate the RC ansatz by exploring simple MLP examples and LLMs such as GPT-3 and Llama-3.1. We show that the mean-field point of view is adequate under suitable assumptions, which can provide theoretical support for zero-shot weight transfer.
Submitted 16 August, 2024;
originally announced August 2024.
-
Error Correction Decoding Algorithms of RS Codes Based on An Earlier Termination Algorithm to Find The Error Locator Polynomial
Authors:
Zhengyi Jiang,
Hao Shi,
Zhongyi Huang,
Linqi Song,
Bo Bai,
Gong Zhang,
Hanxu Hou
Abstract:
Reed-Solomon (RS) codes are widely used to correct errors in storage systems. Finding the error locator polynomial is one of the key steps in the error correction procedure of RS codes. The Modular Approach (MA) is an effective algorithm for solving the Welch-Berlekamp (WB) key-equation problem to find the error locator polynomial that needs $2t$ steps, where $t$ is the error correction capability. In this paper, we first present a new MA algorithm that only requires $2e$ steps and then propose two fast decoding algorithms for RS codes based on our MA algorithm, where $e$ is the number of errors and $e\leq t$. We propose the Improved Frequency-Domain Modular Approach (I-FDMA) algorithm that needs $2e$ steps to solve for the error locator polynomial, and present our first decoding algorithm based on the I-FDMA algorithm. We show that, compared with the existing methods based on MA algorithms, our I-FDMA algorithm can effectively reduce the decoding complexity of RS codes when $e<t$. Furthermore, we propose the $t_0$-Shortened I-FDMA ($t_0$-SI-FDMA) algorithm ($t_0$ is a predetermined even number less than $2t-1$) based on a new termination mechanism to determine the error number $e$ quickly. We propose our second decoding algorithm based on the $t_0$-SI-FDMA algorithm for RS codes and show that the multiplication complexity of our second decoding algorithm is lower than that of our first decoding algorithm (the I-FDMA decoding algorithm) when $2e<t_0+1$.
Submitted 28 July, 2024;
originally announced July 2024.
-
Cross-Phase Mutual Learning Framework for Pulmonary Embolism Identification on Non-Contrast CT Scans
Authors:
Bizhe Bai,
Yan-Jie Zhou,
Yujian Hu,
Tony C. W. Mok,
Yilang Xiang,
Le Lu,
Hongkun Zhang,
Minfeng Xu
Abstract:
Pulmonary embolism (PE) is a life-threatening condition where rapid and accurate diagnosis is imperative yet difficult due to predominantly atypical symptomatology. Computed tomography pulmonary angiography (CTPA) is acknowledged as the gold standard imaging tool in clinics, yet it can be contraindicated for emergency department (ED) patients and represents an onerous procedure, thus necessitating PE identification through non-contrast CT (NCT) scans. In this work, we explore the feasibility of applying a deep-learning approach to NCT scans for PE identification. We propose a novel Cross-Phase Mutual learNing framework (CPMN) that fosters knowledge transfer from CTPA to NCT, while concurrently conducting embolism segmentation and abnormality classification in a multi-task manner. The proposed CPMN leverages the Inter-Feature Alignment (IFA) strategy that enhances spatial contiguity and mutual learning between the dual-pathway network, while the Intra-Feature Discrepancy (IFD) strategy can facilitate precise segmentation of PE against complex backgrounds for single-pathway networks. For a comprehensive assessment of the proposed approach, a large-scale dual-phase dataset containing 334 PE patients and 1,105 normal subjects has been established. Experimental results demonstrate that CPMN achieves the leading identification performance, which is 95.4% and 99.6% in patient-level sensitivity and specificity on NCT scans, indicating the potential of our approach as an economical, accessible, and precise tool for PE identification in clinical practice.
Submitted 16 July, 2024;
originally announced July 2024.
-
On Zero-Error Capacity of Graphs with One Edge
Authors:
Qi Cao,
Qi Chen,
Baoming Bai
Abstract:
In this paper, we study the zero-error capacity of channels with memory, which are represented by graphs. We provide a method to construct a code for any graph with one edge, thereby determining a lower bound on its zero-error capacity. Moreover, this code can achieve the zero-error capacity when the symbols in a vertex with degree one are the same. We further apply our method to the one-edge graphs representing the binary channels with two memories. There are 28 possible graphs, which can be organized into 11 categories based on their symmetries. The code constructed by our method is proved to achieve the zero-error capacity for all these graphs except for the two graphs in Case 11.
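For context, Shannon's classical zero-error capacity of a memoryless channel with confusability graph $G$ is defined via independence numbers of strong products; the paper's graph viewpoint extends this setting to channels with memory.

```latex
% Shannon's zero-error capacity (memoryless case), with \alpha the
% independence number and G^{\boxtimes n} the n-fold strong product:
C_0 = \lim_{n \to \infty} \frac{1}{n} \log \alpha\!\left(G^{\boxtimes n}\right)
```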
Submitted 24 June, 2024;
originally announced June 2024.
-
Retrieval Meets Reasoning: Dynamic In-Context Editing for Long-Text Understanding
Authors:
Weizhi Fei,
Xueyan Niu,
Guoqing Xie,
Yanhua Zhang,
Bo Bai,
Lei Deng,
Wei Han
Abstract:
Current Large Language Models (LLMs) face inherent limitations due to their pre-defined context lengths, which impede their capacity for multi-hop reasoning within extensive textual contexts. While existing techniques like Retrieval-Augmented Generation (RAG) have attempted to bridge this gap by sourcing external information, they fall short when direct answers are not readily available. We introduce a novel approach that re-imagines information retrieval through dynamic in-context editing, inspired by recent breakthroughs in knowledge editing. By treating lengthy contexts as malleable external knowledge, our method interactively gathers and integrates relevant information, thereby enabling LLMs to perform sophisticated reasoning steps. Experimental results demonstrate that our method effectively empowers context-limited LLMs, such as Llama2, to engage in multi-hop reasoning with improved performance, which outperforms state-of-the-art context window extrapolation methods and even compares favorably to more advanced commercial long-context models. Our interactive method not only enhances reasoning capabilities but also mitigates the associated training and computational costs, making it a pragmatic solution for enhancing LLMs' reasoning within expansive contexts.
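A minimal sketch of the interactive gather-and-reason loop as we read it, with stubs standing in for the real model and retriever.

```python
# Treat the long context as an editable fact store: pull one relevant
# fact per hop into the working prompt, and stop once the model can
# answer. `llm` and `retrieve` are stubs, not the paper's components.
def multi_hop_answer(question, fact_store, llm, retrieve, max_hops=4):
    gathered = []
    for _ in range(max_hops):
        answer = llm(question, gathered)
        if answer is not None:          # direct answer now available
            return answer
        fact = retrieve(question, gathered, fact_store)
        if fact is None:
            break
        gathered.append(fact)           # integrate and iterate
    return llm(question, gathered)

# Toy stubs: two hops are needed before the answer is derivable.
facts = ["A is married to B", "B was born in 1950"]
retrieve = lambda q, g, s: next((f for f in s if f not in g), None)
llm = lambda q, g: "1950" if len(g) >= 2 else None
print(multi_hop_answer("When was A's spouse born?", facts, llm, retrieve))
```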
Submitted 18 June, 2024;
originally announced June 2024.
-
SPA-SVC: Self-supervised Pitch Augmentation for Singing Voice Conversion
Authors:
Bingsong Bai,
Fengping Wang,
Yingming Gao,
Ya Li
Abstract:
Diffusion-based singing voice conversion (SVC) models have shown better synthesis quality compared to traditional methods. However, in cross-domain SVC scenarios, where there is a significant disparity in pitch between the source and target voice domains, the models tend to generate audios with hoarseness, posing challenges in achieving high-quality vocal outputs. Therefore, in this paper, we propose a Self-supervised Pitch Augmentation method for Singing Voice Conversion (SPA-SVC), which can enhance the voice quality in SVC tasks without requiring additional data or increasing model parameters. We innovatively introduce a cycle pitch shifting training strategy and Structural Similarity Index (SSIM) loss into our SVC model, effectively enhancing its performance. Experimental results on the public singing dataset M4Singer indicate that our proposed method significantly improves model performance in both general SVC scenarios and particularly in cross-domain SVC scenarios.
Submitted 9 June, 2024;
originally announced June 2024.
-
Beyond Scaling Laws: Understanding Transformer Performance with Associative Memory
Authors:
Xueyan Niu,
Bo Bai,
Lei Deng,
Wei Han
Abstract:
Increasing the size of a Transformer does not always lead to enhanced performance. This phenomenon cannot be explained by the empirical scaling laws. Furthermore, the model's enhanced performance is closely associated with its memorization of the training samples. We present a theoretical framework that sheds light on the memorization during pre-training of transformer-based language models. We model the behavior of Transformers with associative memories using Hopfield networks, such that each transformer block effectively conducts an approximate nearest-neighbor search. In particular, the energy function in modern continuous Hopfield networks serves as an explanation for the attention mechanism, which we approximate with a distance-based energy function. By observing that the softmax function corresponds to the gradient of the LogSumExp function in the energy, and employing the majorization-minimization technique, we construct a global energy function designed to capture the layered architecture. We demonstrate a dependency between the model size and the dataset size for the model to achieve optimal performance, and we show that the achievable cross-entropy loss is bounded from below.
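The observation connecting attention to energy descent is standard and easy to verify: softmax is the gradient of LogSumExp.

```latex
% Softmax as the gradient of the LogSumExp function:
\frac{\partial}{\partial x_i} \log \sum_{j} e^{x_j}
  = \frac{e^{x_i}}{\sum_{j} e^{x_j}} = \operatorname{softmax}(x)_i
```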
Submitted 27 November, 2024; v1 submitted 14 May, 2024;
originally announced May 2024.
-
On the quantization goodness of polar lattices
Authors:
Ling Liu,
Shanxiang Lyu,
Cong Ling,
Baoming Bai
Abstract:
In this work, we prove that polar lattices, when tailored for lossy compression, are quantization-good in the sense that their normalized second moments approach $\frac{1}{2\pi e}$ as the dimension of lattices increases. It has been predicted by Zamir et al. \cite{ZamirQZ96} that the Entropy Coded Dithered Quantization (ECDQ) system using quantization-good lattices can achieve the rate-distortion bound of i.i.d. Gaussian sources. In our previous work \cite{LingQZ}, we established that polar lattices are indeed capable of attaining the same objective. It is reasonable to conjecture that polar lattices also demonstrate quantization goodness in the context of lossy compression. This study confirms this hypothesis.
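For scale, quantization goodness means the normalized second moment (NSM) approaches $\frac{1}{2\pi e} \approx 0.0586$, whereas a plain scalar quantizer (the integer lattice $\mathbb{Z}$) has NSM $\frac{1}{12}$; good lattices thus close the familiar 1.53 dB granular gap. A quick check of these constants, independent of the polar-lattice construction:

```python
import math

nsm_target = 1 / (2 * math.pi * math.e)  # limit for quantization-good lattices
nsm_integer = 1 / 12                      # NSM of the integer lattice Z
gap_db = 10 * math.log10(nsm_integer / nsm_target)
print(f"target {nsm_target:.5f}, Z {nsm_integer:.5f}, gap {gap_db:.2f} dB")
# -> target 0.05855, Z 0.08333, gap 1.53 dB
```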
Submitted 20 January, 2025; v1 submitted 7 May, 2024;
originally announced May 2024.
-
Set Transformation: Trade-off Between Repair Bandwidth and Sub-packetization
Authors:
Hao Shi,
Zhengyi Jiang,
Zhongyi Huang,
Bo Bai,
Gong Zhang,
Hanxu Hou
Abstract:
Maximum distance separable (MDS) codes facilitate the achievement of elevated levels of fault tolerance in storage systems while incurring minimal redundancy overhead. Reed-Solomon (RS) codes are typical MDS codes with the sub-packetization level being one; however, they require a large repair bandwidth, defined as the total amount of symbols downloaded from other surviving nodes during a single-node failure/repair. In this paper, we present the {\em set transformation}, which can transform any MDS code into a set-transformed code such that (i) the sub-packetization level is flexible and ranges from 2 to $(n-k)^{\lfloor\frac{n}{n-k}\rfloor}$, where $n$ is the number of nodes and $k$ is the number of data nodes, (ii) the new code is an MDS code, and (iii) the new code has a lower repair bandwidth for any single-node failure. We show that our set-transformed codes have both lower repair bandwidth and a smaller field size than the existing related MDS array codes, such as elastic transformed codes \cite{10228984}. Specifically, our set-transformed codes achieve a $2\%-6.6\%$ repair bandwidth reduction compared with elastic transformed codes \cite{10228984} for the evaluated typical parameters.
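The quoted sub-packetization range is easy to evaluate for concrete parameters; the $(n, k)$ pairs below are illustrative choices, not values taken from the paper.

```python
# Upper end of the flexible sub-packetization range, (n-k)^floor(n/(n-k)).
for n, k in [(12, 8), (14, 10), (16, 12)]:
    r = n - k
    alpha_max = r ** (n // r)
    print(f"n={n}, k={k}: sub-packetization ranges from 2 to {alpha_max}")
```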
Submitted 4 May, 2024;
originally announced May 2024.
-
Computation and Critical Transitions of Rate-Distortion-Perception Functions With Wasserstein Barycenter
Authors:
Chunhui Chen,
Xueyan Niu,
Wenhao Ye,
Hao Wu,
Bo Bai
Abstract:
The information rate-distortion-perception (RDP) function characterizes the three-way trade-off between description rate, average distortion, and perceptual quality measured by the discrepancy between probability distributions, and has been applied to emerging areas in communications empowered by generative modeling. We study several variants of the RDP functions through the lens of optimal transport to characterize their critical transitions. By transforming the information RDP function into a Wasserstein Barycenter problem, we identify the critical transitions when one of the constraints becomes inactive. Further, the non-strict convexity introduced by the perceptual constraint can be regularized by an entropy regularization term. We prove that the entropy regularized model converges to the original problem and propose an alternating iteration method based on the Sinkhorn algorithm to numerically solve the regularized optimization problem. In many practical scenarios, the computation of the Distortion-Rate-Perception (DRP) function offers a solution to minimize distortion and perceptual discrepancy under rate constraints. However, the interchange of the rate objective and the distortion constraint significantly amplifies the complexity. The proposed method effectively addresses this complexity, providing an efficient solution for DRP functions. Using our numerical method, we propose a reverse data hiding scheme that imperceptibly embeds a secret message into an image, ensuring perceptual fidelity and achieving a significant improvement in the perceptual quality of the stego image compared to traditional methods under the same embedding rate. Our theoretical results and numerical method lay an attractive foundation for steganographic communications with perceptual quality constraints.
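For readers unfamiliar with the numerical workhorse, below is the textbook Sinkhorn iteration for an entropy-regularized transport problem between two histograms; the paper's alternating method builds on iterations of this kind, and the sketch does not reproduce its full RDP solver.

```python
import numpy as np

def sinkhorn(mu, nu, cost, eps=0.05, n_iter=500):
    """Entropy-regularized transport plan between histograms mu and nu."""
    K = np.exp(-cost / eps)                # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)                 # match column marginals
        u = mu / (K @ v)                   # match row marginals
    return u[:, None] * K * v[None, :]     # plan = diag(u) K diag(v)

x = np.linspace(0, 1, 32)
mu = nu = np.full(32, 1 / 32)
plan = sinkhorn(mu, nu, (x[:, None] - x[None, :]) ** 2)
assert np.allclose(plan.sum(axis=1), mu)   # row marginals are matched
```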
Submitted 30 October, 2024; v1 submitted 6 April, 2024;
originally announced April 2024.
-
Automated HER2 Scoring in Breast Cancer Images Using Deep Learning and Pyramid Sampling
Authors:
Sahan Yoruc Selcuk,
Xilin Yang,
Bijie Bai,
Yijie Zhang,
Yuzhu Li,
Musa Aydin,
Aras Firat Unal,
Aditya Gomatam,
Zhen Guo,
Darrow Morgan Angus,
Goren Kolodney,
Karine Atlan,
Tal Keidar Haran,
Nir Pillar,
Aydogan Ozcan
Abstract:
Human epidermal growth factor receptor 2 (HER2) is a critical protein in cancer cell growth that signifies the aggressiveness of breast cancer (BC) and helps predict its prognosis. Accurate assessment of immunohistochemically (IHC) stained tissue slides for HER2 expression levels is essential for both treatment guidance and understanding of cancer mechanisms. Nevertheless, the traditional workflow of manual examination by board-certified pathologists encounters challenges, including inter- and intra-observer inconsistency and extended turnaround times. Here, we introduce a deep learning-based approach utilizing pyramid sampling for the automated classification of HER2 status in IHC-stained BC tissue images. Our approach analyzes morphological features at various spatial scales, efficiently managing the computational load and facilitating a detailed examination of cellular and larger-scale tissue-level details. This method addresses the tissue heterogeneity of HER2 expression by providing a comprehensive view, leading to a blind testing classification accuracy of 84.70%, on a dataset of 523 core images from tissue microarrays. Our automated system, proving reliable as an adjunct pathology tool, has the potential to enhance diagnostic precision and evaluation speed, and might significantly impact cancer treatment planning.
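A minimal sketch of what pyramid sampling amounts to operationally, assuming a generic CNN backbone and placeholder scales rather than the paper's exact model:

```python
import torch
import torch.nn.functional as F

def pyramid_features(image, backbone, scales=(1.0, 0.5, 0.25)):
    """image: (1, 3, H, W) tensor; backbone: any CNN returning a vector."""
    feats = []
    for s in scales:
        resized = F.interpolate(image, scale_factor=s, mode="bilinear",
                                align_corners=False)
        feats.append(backbone(resized))    # cellular to tissue-level scales
    return torch.cat(feats, dim=-1)        # fused multi-scale descriptor
```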
Submitted 31 March, 2024;
originally announced April 2024.
-
Fundamentals of Delay-Doppler Communications: Practical Implementation and Extensions to OTFS
Authors:
Shuangyang Li,
Peter Jung,
Weijie Yuan,
Zhiqiang Wei,
Jinhong Yuan,
Baoming Bai,
Giuseppe Caire
Abstract:
The recently proposed orthogonal time frequency space (OTFS) modulation, which is a typical Delay-Doppler (DD) communication scheme, has attracted significant attention thanks to its appealing performance over doubly-selective channels. In this paper, we present the fundamentals of general DD communications from the viewpoint of the Zak transform. We start our study by constructing DD domain basis functions aligning with the time-frequency (TF)-consistency condition, which are globally quasi-periodic and locally twisted-shifted. We unveil that these features translate to unique signal structures in both time and frequency, which are beneficial for communication purposes. Then, we focus on the practical implementations of DD Nyquist communications, where we show that rectangular windows achieve perfect DD orthogonality, while truncated periodic signals can obtain sufficient DD orthogonality. In particular, a smoothed rectangular window with excess bandwidth results in slightly worse orthogonality but better pulse localization in the DD domain. Furthermore, we present a practical pulse shaping framework for general DD communications and derive the corresponding input-output relation under various shaping pulses. Our numerical results agree with our derivations and also demonstrate the advantages of DD communications over conventional orthogonal frequency-division multiplexing (OFDM).
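For intuition about the DD domain, a discrete Zak transform can be sketched in a few lines; normalization and index conventions vary across the literature, so this is one common choice rather than necessarily the paper's.

```python
import numpy as np

def discrete_zak(x, M, L):
    """Map a length M*L sequence to a (delay M) x (Doppler L) grid."""
    X = x.reshape(L, M).T           # column l holds samples x[l*M:(l+1)*M]
    return np.fft.fft(X, axis=1)    # FFT across the coarse-time index

x = np.random.randn(64) + 1j * np.random.randn(64)
Z = discrete_zak(x, M=8, L=8)
# Energy is preserved up to the unnormalized-FFT factor L = 8.
assert np.isclose(np.sum(np.abs(Z) ** 2) / 8, np.sum(np.abs(x) ** 2))
```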
Submitted 21 March, 2024;
originally announced March 2024.
-
Virtual birefringence imaging and histological staining of amyloid deposits in label-free tissue using autofluorescence microscopy and deep learning
Authors:
Xilin Yang,
Bijie Bai,
Yijie Zhang,
Musa Aydin,
Sahan Yoruc Selcuk,
Zhen Guo,
Gregory A. Fishbein,
Karine Atlan,
William Dean Wallace,
Nir Pillar,
Aydogan Ozcan
Abstract:
Systemic amyloidosis is a group of diseases characterized by the deposition of misfolded proteins in various organs and tissues, leading to progressive organ dysfunction and failure. Congo red stain is the gold standard chemical stain for the visualization of amyloid deposits in tissue sections, as it forms complexes with the misfolded proteins and shows a birefringence pattern under polarized light microscopy. However, Congo red staining is tedious and costly to perform, and prone to false diagnoses due to variations in the amount of amyloid, in staining quality, and in expert interpretation during manual examination of tissue under a polarization microscope. Here, we report the first demonstration of virtual birefringence imaging and virtual Congo red staining of label-free human tissue, showing that a single trained neural network can rapidly transform autofluorescence images of label-free tissue sections into brightfield and polarized light microscopy equivalent images, matching the histochemically stained versions of the same samples. We demonstrate the efficacy of our method with blind testing and pathologist evaluations on cardiac tissue, where the virtually stained images agreed well with the histochemically stained ground truth images. Our virtually stained polarization and brightfield images highlight amyloid birefringence patterns in a consistent, reproducible manner while mitigating diagnostic challenges due to variations in the quality of chemical staining and manual imaging processes as part of the clinical workflow.
Submitted 14 March, 2024;
originally announced March 2024.
-
Extreme Video Compression with Pre-trained Diffusion Models
Authors:
Bohan Li,
Yiming Liu,
Xueyan Niu,
Bo Bai,
Lei Deng,
Deniz Gündüz
Abstract:
Diffusion models have achieved remarkable success in generating high-quality image and video data. More recently, they have also been used for image compression with high perceptual quality. In this paper, we present a novel approach to extreme video compression leveraging the predictive power of diffusion-based generative models at the decoder. The conditional diffusion model takes several neurally compressed frames and generates subsequent frames. When the reconstruction quality drops below the desired level, new frames are encoded to restart prediction. The entire video is sequentially encoded to achieve a visually pleasing reconstruction, considering perceptual quality metrics such as the learned perceptual image patch similarity (LPIPS) and the Fréchet video distance (FVD), at bit rates as low as 0.02 bits per pixel (bpp). Experimental results demonstrate the effectiveness of the proposed scheme compared to standard codecs such as H.264 and H.265 in the low-bpp regime. The results showcase the potential of exploiting the temporal relations in video data using generative models. Code is available at: https://github.com/ElesionKyrie/Extreme-Video-Compression-With-Prediction-Using-Pre-trainded-Diffusion-Models-
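The decoder-side prediction loop can be summarized as follows; `encode`, `decode`, `diffusion_predict`, and `lpips` are hypothetical callables standing in for the codec, the conditional diffusion model, and the perceptual metric.

```python
def compress_video(frames, encode, decode, diffusion_predict, lpips,
                   threshold=0.2, context_len=2):
    """Spend bits on a frame only when generation misses target quality."""
    bitstream, context = [], []
    for frame in frames:
        if len(context) >= context_len:
            predicted = diffusion_predict(context)     # costs no bits
            if lpips(predicted, frame) <= threshold:
                context = context[1:] + [predicted]
                continue
        code = encode(frame)                           # restart prediction
        bitstream.append(code)
        context = (context + [decode(code)])[-context_len:]
    return bitstream
```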
Submitted 13 February, 2024;
originally announced February 2024.
-
Multiplexed all-optical permutation operations using a reconfigurable diffractive optical network
Authors:
Guangdong Ma,
Xilin Yang,
Bijie Bai,
Jingxi Li,
Yuhang Li,
Tianyi Gan,
Che-Yung Shen,
Yijie Zhang,
Yuzhu Li,
Mona Jarrahi,
Aydogan Ozcan
Abstract:
Large-scale and high-dimensional permutation operations are important for various applications in, e.g., telecommunications and encryption. Here, we demonstrate the use of all-optical diffractive computing to execute a set of high-dimensional permutation operations between an input and output field-of-view through layer rotations in a diffractive optical network. In this reconfigurable multiplexed material designed by deep learning, every diffractive layer has four orientations: 0, 90, 180, and 270 degrees. Each unique combination of these rotatable layers represents a distinct rotation state of the diffractive design tailored for a specific permutation operation. Therefore, a K-layer rotatable diffractive material is capable of all-optically performing up to 4^K independent permutation operations. The original input information can be decrypted by applying the specific inverse permutation matrix to the output patterns, while applying other inverse operations will lead to loss of information. We demonstrated the feasibility of this reconfigurable multiplexed diffractive design by approximating 256 randomly selected permutation matrices using K=4 rotatable diffractive layers. We also experimentally validated this reconfigurable diffractive network using terahertz radiation and 3D-printed diffractive layers, obtaining a good match to our numerical results. The presented rotation-multiplexed diffractive processor design is particularly useful due to its mechanical reconfigurability, offering multifunctional representation through a single fabrication process.
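A toy illustration of the multiplexing arithmetic and the decryption property described above, using abstract index permutations rather than optical fields:

```python
import numpy as np

K = 4
n_states = 4 ** K                        # 256 distinct rotation states
rng = np.random.default_rng(0)
perms = [rng.permutation(100) for _ in range(n_states)]  # one per state

x = rng.standard_normal(100)             # flattened input field-of-view
y = x[perms[7]]                          # output under rotation state 7
inverse = np.argsort(perms[7])           # the matching inverse permutation
assert np.allclose(y[inverse], x)                   # right key: recovered
assert not np.allclose(y[np.argsort(perms[8])], x)  # wrong key: lost
```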
Submitted 4 February, 2024;
originally announced February 2024.
-
Subwavelength Imaging using a Solid-Immersion Diffractive Optical Processor
Authors:
Jingtian Hu,
Kun Liao,
Niyazi Ulas Dinc,
Carlo Gigli,
Bijie Bai,
Tianyi Gan,
Xurong Li,
Hanlong Chen,
Xilin Yang,
Yuhang Li,
Cagatay Isil,
Md Sadman Sakib Rahman,
Jingxi Li,
Xiaoyong Hu,
Mona Jarrahi,
Demetri Psaltis,
Aydogan Ozcan
Abstract:
Phase imaging is widely used in biomedical imaging, sensing, and material characterization, among other fields. However, direct imaging of phase objects with subwavelength resolution remains a challenge. Here, we demonstrate subwavelength imaging of phase and amplitude objects based on all-optical diffractive encoding and decoding. To resolve subwavelength features of an object, the diffractive imager uses a thin, high-index solid-immersion layer to transmit high-frequency information of the object to a spatially-optimized diffractive encoder, which converts/encodes this high-frequency information into low-frequency spatial modes for transmission through air. The subsequent diffractive decoder layers (in air) are jointly designed with the encoder using deep-learning-based optimization, and communicate with the encoder layer to create magnified images of the input objects at the output, revealing subwavelength features that would otherwise be washed away due to the diffraction limit. We demonstrate that this all-optical collaboration between a diffractive solid-immersion encoder and the subsequent decoder layers in air can resolve subwavelength phase and amplitude features of input objects in a highly compact design. As an experimental proof of concept, we used terahertz radiation and developed a fabrication method for creating monolithic multi-layer diffractive processors. Through these monolithically fabricated diffractive encoder-decoder pairs, we demonstrated phase-to-intensity transformations and all-optically reconstructed subwavelength phase features of input objects by directly transforming them into magnified intensity features at the output. This solid-immersion-based diffractive imager, with its compact and cost-effective design, can find wide-ranging applications in bioimaging, endoscopy, sensing, and materials characterization.
Submitted 16 January, 2024;
originally announced January 2024.
-
Information hiding cameras: optical concealment of object information into ordinary images
Authors:
Bijie Bai,
Ryan Lee,
Yuhang Li,
Tianyi Gan,
Yuntian Wang,
Mona Jarrahi,
Aydogan Ozcan
Abstract:
Data protection methods like cryptography, despite being effective, inadvertently signal the presence of secret communication, thereby drawing undue attention. Here, we introduce an optical information hiding camera integrated with an electronic decoder, optimized jointly through deep learning. This information hiding-decoding system employs a diffractive optical processor as its front-end, which transforms and hides input images in the form of ordinary-looking patterns that deceive/mislead human observers. This information hiding transformation is valid for infinitely many combinations of secret messages, all of which are transformed into ordinary-looking output patterns, achieved all-optically through passive light-matter interactions within the optical processor. By processing these ordinary-looking output images, a jointly-trained electronic decoder neural network accurately reconstructs the original information hidden within the deceptive output pattern. We numerically demonstrated our approach by designing an information hiding diffractive camera along with a jointly-optimized convolutional decoder neural network. The efficacy of this system was demonstrated under various lighting conditions and noise levels, showing its robustness. We further extended this information hiding camera to multi-spectral operation, allowing the concealment and decoding of multiple images at different wavelengths, all performed simultaneously in a single feed-forward operation. The feasibility of our framework was also demonstrated experimentally using THz radiation. This optical encoder-electronic decoder-based co-design provides a novel information hiding camera interface that is both high-speed and energy-efficient, offering an intriguing solution for visual information security.
Submitted 15 January, 2024;
originally announced January 2024.
-
Localization and Discrete Beamforming with a Large Reconfigurable Intelligent Surface
Authors:
Baojia Luo,
Yili Deng,
Miaomiao Dong,
Zhongyi Huang,
Xiang Chen,
Wei Han,
Bo Bai
Abstract:
In millimeter-wave (mmWave) cellular systems, reconfigurable intelligent surfaces (RISs) are foreseeably deployed with a large number of reflecting elements to achieve high beamforming gains. The large-sized RIS will make radio links fall into the near-field localization regime with spatial non-stationarity issues. Moreover, the discrete phase restriction on the RIS reflection coefficient incurs exponential complexity for discrete beamforming. It remains an open problem to find the optimal RIS reflection coefficient design in polynomial time. To address these issues, we propose a scalable partitioned-far-field protocol that considers both the near-field non-stationarity and discrete beamforming. The protocol approximates near-field signal propagation using a partitioned-far-field representation to inherit the sparsity of the far-field model and facilitate the near-field localization scheme. To improve the theoretical localization performance, we propose a fast passive beamforming (FPB) algorithm that optimally solves the discrete RIS beamforming problem, reducing the search complexity from exponential order to linear order. Furthermore, by exploiting the partitioned structure of the RIS, we introduce a two-stage coarse-to-fine localization algorithm that leverages both time delay and angle information. Numerical results demonstrate that centimeter-level localization precision is achieved under medium and high signal-to-noise ratios (SNRs), revealing that RISs can provide support for low-cost and high-precision localization in future cellular systems.
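As context for the complexity claim, even the simple baseline of rounding each element's ideal continuous phase to the nearest discrete level runs in linear time; the sketch below implements only this common heuristic, not the paper's FPB algorithm, which attains the exact discrete optimum.

```python
import numpy as np

def quantize_ris_phases(g, h, bits=2):
    """Per-element phase alignment with the cascaded channel g*h."""
    levels = 2 * np.pi * np.arange(2 ** bits) / 2 ** bits
    ideal = -np.angle(g * h)                       # continuous optimum
    idx = np.argmin(np.abs(np.exp(1j * ideal[:, None])
                           - np.exp(1j * levels[None, :])), axis=1)
    return np.exp(1j * levels[idx])                # discrete coefficients

rng = np.random.default_rng(1)
N = 1024
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # BS -> RIS
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # RIS -> UE
theta = quantize_ris_phases(g, h)
print("beamforming gain:", np.abs(np.sum(g * theta * h)) ** 2)
```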
Submitted 19 December, 2023;
originally announced December 2023.
-
Extending Context Window of Large Language Models via Semantic Compression
Authors:
Weizhi Fei,
Xueyan Niu,
Pingyi Zhou,
Lu Hou,
Bo Bai,
Lei Deng,
Wei Han
Abstract:
Transformer-based Large Language Models (LLMs) often impose limitations on the length of the text input to ensure the generation of fluent and relevant responses. This constraint restricts their applicability in scenarios involving long texts. We propose a novel semantic compression method that enables generalization to texts that are 6-8 times longer, without incurring significant computational costs or requiring fine-tuning. Our proposed framework draws inspiration from source coding in information theory and employs a pre-trained model to reduce the semantic redundancy of long inputs before passing them to the LLMs for downstream tasks. Experimental results demonstrate that our method effectively extends the context window of LLMs across a range of tasks including question answering, summarization, few-shot learning, and information retrieval. Furthermore, the proposed semantic compression method exhibits consistent fluency in text generation while reducing the associated computational overhead.
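A minimal sketch of the compress-then-prompt idea, assuming any pre-trained text-to-text `summarizer`; the chunking policy and compression ratio are placeholders, not the paper's settings.

```python
def semantic_compress(text, summarizer, chunk_chars=4000, ratio=0.15):
    """Condense each chunk before handing the shortened text to the LLM."""
    chunks = [text[i:i + chunk_chars]
              for i in range(0, len(text), chunk_chars)]
    # Remove semantic redundancy chunk by chunk, then reassemble.
    summaries = [summarizer(c, max_chars=int(chunk_chars * ratio))
                 for c in chunks]
    return "\n".join(summaries)
```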
Submitted 15 December, 2023;
originally announced December 2023.
-
RaftGP: Random Fast Graph Partitioning
Authors:
Yu Gao,
Meng Qin,
Yibin Ding,
Li Zeng,
Chaorui Zhang,
Weixi Zhang,
Wei Han,
Rongqian Zhao,
Bo Bai
Abstract:
Graph partitioning (GP), a.k.a. community detection, is a classic problem that divides the node set of a graph into densely-connected blocks. Following prior work on the IEEE HPEC Graph Challenge benchmark and recent advances in graph machine learning, we propose a novel RAndom FasT Graph Partitioning (RaftGP) method based on an efficient graph embedding scheme. It uses the Gaussian random projection to extract community-preserving features from classic GP objectives. These features are fed into a graph neural network (GNN) to derive low-dimensional node embeddings. Surprisingly, our experiments demonstrate that a randomly initialized GNN even without training is enough for RaftGP to derive informative community-preserving embeddings and support high-quality GP. To enable the derived embeddings to tackle GP, we introduce a hierarchical model selection algorithm that simultaneously determines the number of blocks and the corresponding GP result. We evaluate RaftGP on the Graph Challenge benchmark and compare the performance with five baselines, where our method can achieve a better trade-off between quality and efficiency. In particular, compared to the baseline algorithm of the IEEE HPEC Graph Challenge, our method is 6.68x -- 23.9x faster on graphs with 1E3 -- 5E4 nodes and at least 64.5x faster on larger (1E5 node) graphs on which the baseline takes more than 1E4 seconds. Our method achieves better accuracy on all test cases. We also develop a new graph generator to address some limitations of the original generator in the benchmark.
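A simplified sketch of the recipe as stated in the abstract: a Gaussian random projection of adjacency-derived features, propagated by an untrained message-passing step. The objective-derived features and hierarchical model selection are omitted.

```python
import numpy as np

def raftgp_like_embed(adj, dim=16, layers=2, seed=0):
    """adj: (n, n) adjacency matrix -> (n, dim) node embeddings."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    feats = adj @ rng.standard_normal((n, dim)) / np.sqrt(dim)  # projection
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    for _ in range(layers):
        feats = (adj @ feats) / deg        # untrained GNN-style smoothing
    return feats                           # cluster these to obtain blocks
```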
Submitted 3 December, 2023;
originally announced December 2023.
-
Pseudo Label-Guided Data Fusion and Output Consistency for Semi-Supervised Medical Image Segmentation
Authors:
Tao Wang,
Yuanbin Chen,
Xinlin Zhang,
Yuanbo Zhou,
Junlin Lan,
Bizhe Bai,
Tao Tan,
Min Du,
Qinquan Gao,
Tong Tong
Abstract:
Supervised learning algorithms based on Convolutional Neural Networks have become the benchmark for medical image segmentation tasks, but their effectiveness heavily relies on a large amount of labeled data. However, annotating medical image datasets is a laborious and time-consuming process. Inspired by semi-supervised algorithms that use both labeled and unlabeled data for training, we propose the PLGDF framework, which builds upon the mean teacher network for segmenting medical images with less annotation. We propose a novel pseudo-label utilization scheme, which combines labeled and unlabeled data to augment the dataset effectively. Additionally, we enforce the consistency between different scales in the decoder module of the segmentation network and propose a loss function suitable for evaluating the consistency. Moreover, we incorporate a sharpening operation on the predicted results, further enhancing the accuracy of the segmentation.
Extensive experiments on three publicly available datasets demonstrate that the PLGDF framework can largely improve performance by incorporating the unlabeled data. Meanwhile, our framework yields superior performance compared to six state-of-the-art semi-supervised learning methods. The codes of this study are available at https://github.com/ortonwang/PLGDF.
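The sharpening operation mentioned above is typically temperature sharpening of the softmax output; a generic version is shown below, with the temperature value a placeholder rather than the paper's setting.

```python
import torch

def sharpen(probs, T=0.5):
    """probs: (B, C, H, W) softmax outputs; lower T means more confident."""
    p = probs ** (1.0 / T)
    return p / p.sum(dim=1, keepdim=True)   # renormalize over classes
```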
Submitted 17 November, 2023;
originally announced November 2023.
-
A Virtual Reality Training System for Automotive Engines Assembly and Disassembly
Authors:
Gongjin Lan,
Qiangqiang Lai,
Bing Bai,
Zirui Zhao,
Qi Hao
Abstract:
Automotive engine assembly and disassembly are common and crucial programs in the automotive industry. Traditional education trains students to learn automotive engine assembly and disassembly in lecture courses and then to operate on physical engines, which is generally of low effectiveness and high cost. In this work, we developed a multi-layer structured Virtual Reality (VR) system to provide students with training in automotive engine (Buick Verano) assembly and disassembly. The VR training system is designed with several major features, including replaceable engine parts and reusable tools, friendly user interfaces and guidance, and a bottom-up multi-layer architecture that can be extended to various engine models. The VR system is evaluated with controlled experiments involving two groups of students. The results demonstrate that our VR training system provides remarkable usability in terms of effectiveness and efficiency. Currently, our VR system has been demonstrated and employed in courses at Chinese colleges to train students in automotive engine assembly and disassembly. A free-to-use executable file (Microsoft Windows) and open-source code are available at https://github.com/LadissonLai/SUSTech_VREngine to facilitate the development of VR systems in the automotive industry. Finally, a video describing the operations in our VR training system is available at https://www.youtube.com/watch?v=yZe4YTwwAC4
Submitted 2 November, 2023;
originally announced November 2023.
-
Complex-valued universal linear transformations and image encryption using spatially incoherent diffractive networks
Authors:
Xilin Yang,
Md Sadman Sakib Rahman,
Bijie Bai,
Jingxi Li,
Aydogan Ozcan
Abstract:
As an optical processor, a Diffractive Deep Neural Network (D2NN) utilizes engineered diffractive surfaces designed through machine learning to perform all-optical information processing, completing its tasks at the speed of light propagation through thin optical layers. With sufficient degrees-of-freedom, D2NNs can perform arbitrary complex-valued linear transformations using spatially coherent light. Similarly, D2NNs can also perform arbitrary linear intensity transformations with spatially incoherent illumination; however, under spatially incoherent light, these transformations are non-negative, acting on diffraction-limited optical intensity patterns at the input field-of-view (FOV). Here, we expand the use of spatially incoherent D2NNs to complex-valued information processing for executing arbitrary complex-valued linear transformations using spatially incoherent light. Through simulations, we show that as the number of optimized diffractive features increases beyond a threshold dictated by the multiplication of the input and output space-bandwidth products, a spatially incoherent diffractive visual processor can approximate any complex-valued linear transformation and be used for all-optical image encryption using incoherent illumination. The findings are important for the all-optical processing of information under natural light using various forms of diffractive surface-based optical processors.
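The algebraic fact underlying complex-valued processing with non-negative (intensity-only) quantities is that any complex matrix splits exactly into four non-negative parts; how these parts are mapped onto input and output pixels is a design detail of the paper not reproduced here.

```python
import numpy as np

A = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
A1, A2 = np.maximum(A.real, 0), np.maximum(-A.real, 0)   # real part split
A3, A4 = np.maximum(A.imag, 0), np.maximum(-A.imag, 0)   # imaginary split
assert np.allclose((A1 - A2) + 1j * (A3 - A4), A)        # exact recovery
```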
Submitted 5 October, 2023;
originally announced October 2023.
-
High Perceptual Quality Wireless Image Delivery with Denoising Diffusion Models
Authors:
Selim F. Yilmaz,
Xueyan Niu,
Bo Bai,
Wei Han,
Lei Deng,
Deniz Gunduz
Abstract:
We consider the image transmission problem over a noisy wireless channel via deep learning-based joint source-channel coding (DeepJSCC) along with a denoising diffusion probabilistic model (DDPM) at the receiver. Specifically, we are interested in the perception-distortion trade-off in the practical finite block length regime, in which separate source and channel coding can be highly suboptimal. We introduce a novel scheme, where the conventional DeepJSCC encoder targets transmitting a lower resolution version of the image, which later can be refined thanks to the generative model available at the receiver. In particular, we utilize the range-null space decomposition of the target image; DeepJSCC transmits the range-space of the image, while DDPM progressively refines its null space contents. Through extensive experiments, we demonstrate significant improvements in distortion and perceptual quality of reconstructed images compared to standard DeepJSCC and the state-of-the-art generative learning-based method.
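The range-null space decomposition used above is a linear-algebra identity that is easy to verify with a toy degradation operator (here 2x average pooling) and its pseudo-inverse; the actual channel and models are of course far richer.

```python
import numpy as np

def A(x):        # 2x2 average pooling, a toy degradation operator
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def A_pinv(y):   # its pseudo-inverse: replicate each pooled value
    return np.repeat(np.repeat(y, 2, axis=0), 2, axis=1)

x = np.random.rand(8, 8)
range_part = A_pinv(A(x))        # transmitted by DeepJSCC
null_part = x - range_part       # refined by the diffusion model
assert np.allclose(A(null_part), 0)          # invisible to the operator A
assert np.allclose(range_part + null_part, x)
```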
Submitted 20 September, 2024; v1 submitted 27 September, 2023;
originally announced September 2023.
-
Generalized Simple Regenerating Codes: Trading Sub-packetization and Fault Tolerance
Authors:
Zhengyi Jiang,
Hao Shi,
Zhongyi Huang,
Bo Bai,
Gong Zhang,
Hanxu Hou
Abstract:
Maximum distance separable (MDS) codes have the optimal trade-off between storage efficiency and fault tolerance, which are widely used in distributed storage systems. As typical non-MDS codes, simple regenerating codes (SRCs) can achieve both smaller repair bandwidth and smaller repair locality than traditional MDS codes in repairing single-node erasure.
In this paper, we propose {\em generalized simple regenerating codes} (GSRCs) that can support many more parameters than SRCs. We show that there is a trade-off between sub-packetization and fault tolerance in our GSRCs, with SRCs achieving a special point of this trade-off, and that the fault tolerance of our GSRCs increases linearly with the sub-packetization. We also show that our GSRCs can locally repair any single-symbol erasure and any single-node erasure, and that the repair bandwidth of our GSRCs is smaller than that of the existing related codes.
Submitted 5 September, 2023;
originally announced September 2023.
-
Pyramid diffractive optical networks for unidirectional image magnification and demagnification
Authors:
Bijie Bai,
Xilin Yang,
Tianyi Gan,
Jingxi Li,
Deniz Mengu,
Mona Jarrahi,
Aydogan Ozcan
Abstract:
Diffractive deep neural networks (D2NNs) are composed of successive transmissive layers optimized using supervised deep learning to all-optically implement various computational tasks between an input and output field-of-view (FOV). Here, we present a pyramid-structured diffractive optical network design (which we term P-D2NN), optimized specifically for unidirectional image magnification and demagnification. In this design, the diffractive layers are pyramidally scaled in alignment with the direction of the image magnification or demagnification. This P-D2NN design creates high-fidelity magnified or demagnified images in only one direction, while inhibiting image formation in the opposite direction, achieving the desired unidirectional imaging operation using a much smaller number of diffractive degrees of freedom within the optical processor volume. Furthermore, the P-D2NN design maintains its unidirectional image magnification/demagnification functionality across a large band of illumination wavelengths despite being trained with a single wavelength. We also designed a wavelength-multiplexed P-D2NN, where a unidirectional magnifier and a unidirectional demagnifier operate simultaneously in opposite directions, at two distinct illumination wavelengths. Moreover, we demonstrate that by cascading multiple unidirectional P-D2NN modules, we can achieve higher magnification factors. The efficacy of the P-D2NN architecture was also validated experimentally using terahertz illumination, successfully matching our numerical simulations. P-D2NN offers a physics-inspired strategy for designing task-specific visual processors.
Submitted 31 July, 2024; v1 submitted 29 August, 2023;
originally announced August 2023.
-
MDS Array Codes With Small Sub-packetization Levels and Small Repair Degrees
Authors:
Jie Li,
Yi Liu,
Xiaohu Tang,
Yunghsiang S. Han,
Bo Bai,
Gong Zhang
Abstract:
High-rate minimum storage regenerating (MSR) codes are known to require a large sub-packetization level, which can make meta-data management difficult and hinder implementation in practical systems. A few maximum distance separable (MDS) array code constructions have been proposed to attain a much smaller sub-packetization level by sacrificing a bit of repair bandwidth. However, to the best of our knowledge, only one construction by Guruswami et al. can support the repair of a failed node without contacting all the surviving nodes. This construction is certainly of theoretical interest but not yet practical due to its requirement for very large code parameters. In this paper, we propose a generic transformation that can convert any $(\overline{n}, \overline{k})$ MSR code with a repair degree of $\overline{d}<\overline{n}-1$ into another $(n=s\overline{n},k)$ MDS array code that supports $d<n-1$ with a small sub-packetization level and $(1+\varepsilon)$-optimal repair bandwidth (i.e., $1+\varepsilon$ times the optimal value) under a specific condition. We obtain three MDS array codes with small sub-packetization levels and $(1+\varepsilon)$-optimal repair bandwidth by applying this transformation to three known MSR codes. All the new MDS array codes have a small repair degree of $d<n-1$ and work for both small and large code parameters.
Submitted 28 August, 2023;
originally announced August 2023.