-
Physics-consistent deep learning for blind aberration recovery in mobile optics
Authors:
Kartik Jhawar,
Tamo Sancho Miguel Tandoc,
Khoo Jun Xuan,
Wang Lipo
Abstract:
Mobile photography is often limited by complex, lens-specific optical aberrations. While recent deep learning methods approach this as an end-to-end deblurring task, these "black-box" models lack explicit optical modeling and can hallucinate details. Conversely, classical blind deconvolution remains highly unstable. To bridge this gap, we present Lens2Zernike, a deep learning framework that blindly recovers physical optical parameters from a single blurred image. To the best of our knowledge, no prior work has simultaneously integrated supervision across three distinct optical domains. We introduce a novel physics-consistent strategy that explicitly minimizes errors via direct Zernike coefficient regression (z), differentiable physics constraints encompassing both wavefront and point spread function derivations (p), and auxiliary multi-task spatial map predictions (m). Through an ablation study on a ResNet-18 backbone, we demonstrate that our full multi-task framework (z+p+m) yields a 35% improvement over coefficient-only baselines. Crucially, comparative analysis reveals that our approach outperforms two established deep learning methods from previous literature, achieving significantly lower regression errors. Ultimately, we demonstrate that these recovered physical parameters enable stable non-blind deconvolution, providing substantial in-domain improvement on the patented Institute for Digital Molecular Analytics and Science (IDMxS) Mobile Camera Lens Database for restoring diffraction-limited details from severely aberrated mobile captures.
Submitted 5 March, 2026;
originally announced March 2026.
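To make the physics constraint (p) concrete, here is a minimal NumPy sketch of the Zernike-to-wavefront-to-PSF chain the abstract describes. This is an illustration under standard Fourier-optics assumptions, not the paper's implementation; the two Zernike modes, grid size, and wavelength are arbitrary choices.

```python
import numpy as np

def zernike_wavefront(z_defocus, z_astig, n=128):
    """Wavefront from two example Zernike modes: defocus Z_2^0 and astigmatism Z_2^2."""
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    rho2 = x**2 + y**2
    pupil = (rho2 <= 1.0).astype(float)          # circular aperture
    defocus = np.sqrt(3) * (2 * rho2 - 1)        # Z_2^0
    astig = np.sqrt(6) * (x**2 - y**2)           # Z_2^2 = sqrt(6) * rho^2 * cos(2*theta)
    return pupil, pupil * (z_defocus * defocus + z_astig * astig)

def wavefront_to_psf(pupil, w, wavelength=0.55e-6):
    """Incoherent PSF = |FFT of the complex pupil function|^2, normalized."""
    field = pupil * np.exp(1j * 2 * np.pi * w / wavelength)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()

pupil, w = zernike_wavefront(z_defocus=0.2e-6, z_astig=0.1e-6)
psf = wavefront_to_psf(pupil, w)
```

Written in an autodiff framework, the same chain would make the wavefront- and PSF-level losses differentiable with respect to the regressed coefficients.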
-
Steer2Adapt: Dynamically Composing Steering Vectors Elicits Efficient Adaptation of LLMs
Authors:
Pengrui Han,
Xueqiang Xu,
Keyang Xuan,
Peiyang Song,
Siru Ouyang,
Runchu Tian,
Yuqing Jiang,
Cheng Qian,
Pengcheng Jiang,
Jiashuo Sun,
Junxia Cui,
Ming Zhong,
Ge Liu,
Jiawei Han,
Jiaxuan You
Abstract:
Activation steering has emerged as a promising approach for efficiently adapting large language models (LLMs) to downstream behaviors. However, most existing steering methods rely on a single static direction per task or concept, making them inflexible under task variation and inadequate for complex tasks that require multiple coordinated capabilities. To address this limitation, we propose STEER2ADAPT, a lightweight framework that adapts LLMs by composing steering vectors rather than learning new ones from scratch. In many domains (e.g., reasoning or safety), tasks share a small set of underlying concept dimensions. STEER2ADAPT captures these dimensions as a reusable, low-dimensional semantic prior subspace, and adapts to new tasks by dynamically discovering a linear combination of basis vectors from only a handful of examples. Experiments across 9 tasks and 3 models in both reasoning and safety domains demonstrate the effectiveness of STEER2ADAPT, achieving an average improvement of 8.2%. Extensive analyses further show that STEER2ADAPT is a data-efficient, stable, and transparent inference-time adaptation method for LLMs.
Submitted 6 February, 2026;
originally announced February 2026.
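A rough sketch of the composition step, assuming a precomputed basis of steering directions and least-squares fitting on a few example activation shifts (the paper's actual fitting objective and layer placement may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 512, 8
basis = rng.standard_normal((k, d))     # reusable low-dimensional concept directions

# Few-shot examples: desired activation shifts (e.g., steered minus unsteered).
shifts = rng.standard_normal((5, d))

# Dynamically discover a linear combination of basis vectors (least squares).
weights, *_ = np.linalg.lstsq(basis.T, shifts.mean(axis=0), rcond=None)

def steer(hidden, alpha=1.0):
    """Add the composed steering vector to a hidden state at inference time."""
    return hidden + alpha * weights @ basis

h = rng.standard_normal(d)
h_steered = steer(h)
```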
-
SocialVeil: Probing Social Intelligence of Language Agents under Communication Barriers
Authors:
Keyang Xuan,
Pengda Wang,
Chongrui Ye,
Haofei Yu,
Tal August,
Jiaxuan You
Abstract:
Large language models (LLMs) are increasingly evaluated in interactive environments to test their social intelligence. However, existing benchmarks often assume idealized communication between agents, limiting our ability to diagnose whether LLMs can maintain and repair interactions in more realistic, imperfect settings. To close this gap, we present \textsc{SocialVeil}, a social learning environment that simulates social interaction under cognitive-difference-induced communication barriers. Grounded in a systematic literature review of communication challenges in human interaction, \textsc{SocialVeil} introduces three representative types of such disruption: \emph{semantic vagueness}, \emph{sociocultural mismatch}, and \emph{emotional interference}. We also introduce two barrier-aware evaluation metrics, \emph{unresolved confusion} and \emph{mutual understanding}, to evaluate interaction quality under impaired communication. Experiments across 720 scenarios and four frontier LLMs show that barriers consistently impair performance, with mutual understanding reduced by over 45\% on average and confusion elevated by nearly 50\%. Human evaluations validate the fidelity of these simulated barriers (ICC$\approx$0.78, Pearson r$\approx$0.80). We further demonstrate that adaptation strategies (Repair Instruction and Interactive Learning) have only a modest effect, remaining far from barrier-free performance. This work takes a step toward bringing social interaction environments closer to real-world communication, opening opportunities for exploring the social intelligence of LLM agents.
Submitted 4 February, 2026;
originally announced February 2026.
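As a toy illustration of how such barriers could be injected into an agent conversation (our sketch; `llm` is a placeholder for any chat-model call, not SocialVeil's interface):

```python
# Each agent utterance is perturbed by one of the three barrier types before delivery.
BARRIER_PROMPTS = {
    "semantic_vagueness": "Rewrite the message to be vague and underspecified:",
    "sociocultural_mismatch": "Rewrite the message using unshared idioms and norms:",
    "emotional_interference": "Rewrite the message with distracting emotional tone:",
}

def llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call."""
    return prompt.split(":", 1)[1].strip()

def apply_barrier(message: str, barrier: str) -> str:
    return llm(f"{BARRIER_PROMPTS[barrier]} {message}")

garbled = apply_barrier("Can we split the budget 60/40?", "semantic_vagueness")
```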
-
TinyScientist: An Interactive, Extensible, and Controllable Framework for Building Research Agents
Authors:
Haofei Yu,
Keyang Xuan,
Fenghai Li,
Kunlun Zhu,
Zijie Lei,
Jiaxun Zhang,
Ziheng Qi,
Kyle Richardson,
Jiaxuan You
Abstract:
Automatic research with Large Language Models (LLMs) is rapidly gaining importance, driving the development of increasingly complex workflows involving multi-agent systems, planning, tool usage, code execution, and human-agent interaction to accelerate research processes. However, as more researchers and developers begin to use and build upon these tools and platforms, the complexity and difficulty of extending and maintaining such agentic workflows have become a significant challenge, particularly as algorithms and architectures continue to advance. To address this growing complexity, TinyScientist identifies the essential components of the automatic research workflow and proposes an interactive, extensible, and controllable framework that easily adapts to new tools and supports iterative growth. We provide an open-source codebase, an interactive web demonstration, and a PyPI Python package to make state-of-the-art auto-research pipelines broadly accessible to every researcher and developer.
Submitted 7 October, 2025;
originally announced October 2025.
-
Sotopia-RL: Reward Design for Social Intelligence
Authors:
Haofei Yu,
Zhengyang Qi,
Yining Zhao,
Kolby Nottingham,
Keyang Xuan,
Bodhisattwa Prasad Majumder,
Hao Zhu,
Paul Pu Liang,
Jiaxuan You
Abstract:
Social intelligence has become a critical capability for large language models (LLMs), enabling them to engage effectively in real-world social tasks such as collaboration and negotiation. Reinforcement learning (RL) is a natural fit for training socially intelligent agents because it allows models to learn sophisticated strategies directly through social interactions without requiring human annotations. However, social intelligence tasks have two distinctive properties: (1) the quality of individual utterances in social interactions is not strictly tied to final success; (2) social interactions require multi-dimensional rubrics for success. We therefore argue that building utterance-level, multi-dimensional reward models is necessary to facilitate RL training for social intelligence tasks. To address these challenges, we propose Sotopia-RL, a novel framework that refines coarse episode-level feedback into utterance-level, multi-dimensional rewards. Utterance-level credit assignment attributes outcomes to individual utterances, while multi-dimensional rewards capture the full richness of social interactions and reduce reward hacking. Experiments in Sotopia, an open-ended social learning environment, demonstrate that Sotopia-RL achieves state-of-the-art social goal completion scores (7.17 on Sotopia-hard and 8.31 on Sotopia-full), significantly outperforming existing approaches. Ablation studies confirm the necessity of both utterance-level credit assignment and multi-dimensional reward design for RL training.
Submitted 7 October, 2025; v1 submitted 5 August, 2025;
originally announced August 2025.
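A schematic of the reward refinement, with a hypothetical rubric and blending scheme (the paper's dimensions, annotator, and credit-assignment details are not reproduced here):

```python
import numpy as np

DIMS = ["goal_progress", "relationship", "knowledge"]   # example rubric dimensions
weights = np.array([0.6, 0.25, 0.15])                   # assumed dimension weights

def utterance_rewards(per_utt_scores, episode_score, blend=0.5):
    """per_utt_scores: (T, len(DIMS)) rubric scores, e.g., from an LLM annotator.
    Each utterance reward blends its own credit with the episode-level outcome."""
    per_utt = per_utt_scores @ weights            # collapse dimensions per utterance
    return blend * per_utt + (1 - blend) * episode_score

scores = np.array([[0.2, 0.8, 0.1], [0.9, 0.4, 0.3]])   # two utterances
r = utterance_rewards(scores, episode_score=7.2 / 10)
```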
-
TAMP: Token-Adaptive Layerwise Pruning in Multimodal Large Language Models
Authors:
Jaewoo Lee,
Keyang Xuan,
Chanakya Ekbote,
Sandeep Polisetty,
Yi R. Fung,
Paul Pu Liang
Abstract:
Multimodal Large Language Models (MLLMs) have shown remarkable versatility in understanding diverse multimodal data and tasks. However, these capabilities come with an increased model scale. While post-training pruning reduces model size in unimodal models, its application to MLLMs often yields limited success. Our analysis discovers that conventional methods fail to account for the unique token attributes across layers and modalities inherent to MLLMs. Inspired by this observation, we propose TAMP, a simple yet effective pruning framework tailored for MLLMs, featuring two key components: (1) Diversity-Aware Sparsity, which adjusts sparsity ratio per layer based on diversities among multimodal output tokens, preserving more parameters in high-diversity layers; and (2) Adaptive Multimodal Input Activation, which identifies representative multimodal input tokens using attention scores to guide unstructured weight pruning. We validate our method on two state-of-the-art MLLMs: LLaVA-NeXT, designed for vision-language tasks, and VideoLLaMA2, capable of processing audio, visual, and language modalities. Empirical experiments across various multimodal evaluation benchmarks demonstrate that each component of our approach substantially outperforms existing pruning techniques.
Submitted 17 May, 2025; v1 submitted 14 April, 2025;
originally announced April 2025.
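An illustrative sketch of the two components. The diversity measure, sparsity mapping, and Wanda-style weight score below are our assumptions, and the paper's attention-based selection of representative tokens is simplified to using raw input activations:

```python
import numpy as np

def token_diversity(tokens):
    """Mean pairwise cosine distance among a layer's output tokens."""
    t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    sim = t @ t.T
    n = len(t)
    return 1 - (sim.sum() - n) / (n * (n - 1))

def allocate_sparsity(diversities, target=0.5, spread=0.2):
    """Higher diversity -> lower sparsity, keeping the average near `target`."""
    d = np.asarray(diversities)
    d = (d - d.mean()) / (d.std() + 1e-8)
    return np.clip(target - spread * d, 0.0, 1.0)

def prune(W, X, sparsity):
    """Wanda-style metric |W_ij| * ||X_j||; zero the lowest-scored weights."""
    score = np.abs(W) * np.linalg.norm(X, axis=0)
    cut = np.quantile(score, sparsity)
    return np.where(score >= cut, W, 0.0)

rng = np.random.default_rng(0)
layer_tokens = [rng.standard_normal((16, 32)) for _ in range(4)]
ratios = allocate_sparsity([token_diversity(t) for t in layer_tokens])
W = rng.standard_normal((64, 32))
W_pruned = prune(W, layer_tokens[0], sparsity=ratios[0])
```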
-
Pathology Image Restoration via Mixture of Prompts
Authors:
Jiangdong Cai,
Yan Chen,
Zhenrong Shen,
Haotian Jiang,
Honglin Xiong,
Kai Xuan,
Lichi Zhang,
Qian Wang
Abstract:
In digital pathology, acquiring all-in-focus images is essential to high-quality imaging and an efficient clinical workflow. Traditional scanners achieve this by scanning at multiple focal planes of varying depths and then merging them, which is relatively slow and often struggles with complex tissue defocus. Recent image restoration techniques provide a means to restore high-quality pathology images from scans of single focal planes. However, existing image restoration methods are inadequate due to the intricate defocus patterns in pathology images and their domain-specific semantic complexities. In this work, we devise a two-stage restoration solution cascading a transformer and a diffusion model, to benefit from their respective strengths in preserving image fidelity and perceptual quality. In particular, we propose a novel mixture of prompts for the two-stage solution. Given an initial prompt that models defocus in microscopic imaging, we design two further prompts that describe the high-level image semantics from a pathology foundation model and the fine-grained tissue structures via edge extraction. We demonstrate that, by feeding the prompt mixture to our method, we can restore high-quality pathology images from single-focal-plane scans, implying the high potential of the mixture of prompts for clinical usage. Code will be publicly available at https://github.com/caijd2000/MoP.
Submitted 16 March, 2025;
originally announced March 2025.
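As a small illustration of one ingredient, the fine-grained structure prompt could be built from an edge map (a Sobel filter here; the paper's exact extractor and prompt-fusion scheme are not shown):

```python
import numpy as np
from scipy import ndimage

def edge_prompt(image):
    """Normalized gradient-magnitude map as a fine-grained structure prompt."""
    gx = ndimage.sobel(image, axis=0)
    gy = ndimage.sobel(image, axis=1)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)

img = np.random.rand(256, 256)
# The restoration nets would consume a stack of prompts, e.g.:
# prompts = [defocus_prompt, foundation_model_features, edge_prompt(img)]
e = edge_prompt(img)
```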
-
ResearchTown: Simulator of Human Research Community
Authors:
Haofei Yu,
Zhaochen Hong,
Zirui Cheng,
Kunlun Zhu,
Keyang Xuan,
Jinwei Yao,
Tao Feng,
Jiaxuan You
Abstract:
Large Language Models (LLMs) have demonstrated remarkable potential in scientific domains, yet a fundamental question remains unanswered: Can we simulate human research communities with LLMs? Addressing this question can deepen our understanding of the processes behind idea brainstorming and inspire the automatic discovery of novel scientific insights. In this work, we propose ResearchTown, a multi-agent framework for research community simulation. Within this framework, the human research community is simplified as an agent-data graph, where researchers and papers are represented as agent-type and data-type nodes, respectively, and connected based on their collaboration relationships. We also introduce TextGNN, a text-based inference framework that models various research activities (e.g., paper reading, paper writing, and review writing) as special forms of a unified message-passing process on the agent-data graph. To evaluate the quality of the research community simulation, we present ResearchBench, a benchmark that uses a node-masking prediction task for scalable and objective assessment based on similarity. Our experiments reveal three key findings: (1) ResearchTown can provide a realistic simulation of collaborative research activities, including paper writing and review writing; (2) ResearchTown can maintain robust simulation with multiple researchers and diverse papers; (3) ResearchTown can generate interdisciplinary research ideas that potentially inspire pioneering research directions.
Submitted 6 June, 2025; v1 submitted 23 December, 2024;
originally announced December 2024.
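A toy version of the agent-data graph idea, with a stub in place of the LLM and of TextGNN's actual aggregation (all names hypothetical):

```python
def llm(prompt: str) -> str:
    """Placeholder for a real model call."""
    return f"<generated from: {prompt[:40]}...>"

agents = {"alice": {"papers": ["p1", "p2"]}}          # agent-type nodes
papers = {"p1": "Graph learning for ...", "p2": "LLM agents for ..."}  # data-type nodes

def write_paper(agent: str) -> str:
    """One text-based message-passing step: aggregate neighboring data nodes
    (read papers) into a new data node (a written paper)."""
    context = " | ".join(papers[p] for p in agents[agent]["papers"])
    return llm(f"Given prior work [{context}], draft a new paper abstract.")

draft = write_paper("alice")
```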
-
APILOT: Navigating Large Language Models to Generate Secure Code by Sidestepping Outdated API Pitfalls
Authors:
Weiheng Bai,
Keyang Xuan,
Pengxiang Huang,
Qiushi Wu,
Jianing Wen,
Jingjing Wu,
Kangjie Lu
Abstract:
With the rapid development of large language models (LLMs), their applications have expanded into diverse fields, such as code assistance. However, the substantial size of LLMs makes their training highly resource- and time-intensive, rendering frequent retraining or updates impractical. Consequently, time-sensitive data can become outdated, potentially misleading LLMs in time-aware tasks. For example, new vulnerabilities are discovered in various programs every day. Without updating their knowledge, LLMs may inadvertently generate code that includes these newly discovered vulnerabilities. Current strategies, such as prompt engineering and fine-tuning, do not effectively address this issue.
To address this issue, we propose a solution, named APILOT, which maintains a real-time, quickly updatable dataset of outdated APIs. Additionally, APILOT utilizes an augmented generation method that leverages this dataset to navigate LLMs in generating secure, version-aware code. We conducted a comprehensive evaluation to measure the effectiveness of APILOT in reducing the incidence of outdated API recommendations across seven different state-of-the-art LLMs. The evaluation results indicate that APILOT can reduce outdated code recommendations by 89.42% on average with limited performance overhead. Interestingly, while enhancing security, APILOT also improves the usability of the code generated by LLMs, showing an average increase of 27.54% in usability. This underscores APILOT's dual capability to enhance both the safety and practical utility of code suggestions in contemporary software development environments.
Submitted 24 September, 2024;
originally announced September 2024.
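A minimal sketch of the loop as described, with a hypothetical one-entry outdated-API list (ssl.wrap_socket is indeed deprecated in favor of SSLContext.wrap_socket) and simple pattern matching standing in for APILOT's analysis:

```python
import re

OUTDATED = {  # hypothetical entries; the real dataset is updated in real time
    "ssl.wrap_socket": "ssl.SSLContext.wrap_socket",
}

def find_outdated(code: str):
    """Scan generated code for known-outdated API usages."""
    return [(old, new) for old, new in OUTDATED.items()
            if re.search(re.escape(old), code)]

def augment_prompt(task: str, hits) -> str:
    """Re-prompt the LLM with explicit version-aware constraints."""
    notes = "; ".join(f"avoid {old}, use {new}" for old, new in hits)
    return f"{task}\n# Constraints: {notes}"

code = "sock = ssl.wrap_socket(raw_sock)"
prompt2 = augment_prompt("Open a TLS connection.", find_outdated(code))
```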
-
LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation
Authors:
Keyang Xuan,
Li Yi,
Fan Yang,
Ruochen Wu,
Yi R. Fung,
Heng Ji
Abstract:
The rise of multimodal misinformation on social platforms poses significant challenges for individuals and societies. Its increased credibility and broader impact compared to textual misinformation make detection complex, requiring robust reasoning across diverse media types and profound knowledge for accurate verification. The emergence of Large Vision Language Models (LVLMs) offers a potential solution to this problem. Leveraging their proficiency in processing visual and textual information, LVLMs demonstrate promising capabilities in recognizing complex information and exhibit strong reasoning skills. In this paper, we first investigate the potential of LVLMs for multimodal misinformation detection. We find that even though LVLMs perform better than LLMs, their reasoning offers limited power when evidence is lacking. Based on these observations, we propose LEMMA: LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation. LEMMA leverages LVLM intuition and reasoning capabilities while augmenting them with external knowledge to enhance the accuracy of misinformation detection. Our method improves accuracy over the top LVLM baseline by 7% and 13% on the Twitter and Fakeddit datasets, respectively.
Submitted 20 June, 2024; v1 submitted 19 February, 2024;
originally announced February 2024.
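Schematically, the approach amounts to retrieval-augmented judgment; in this sketch `search` and `lvlm` are stubs, not LEMMA's actual interfaces:

```python
def search(query: str) -> list[str]:
    """Stub retriever for external evidence."""
    return ["<retrieved evidence snippet>"]

def lvlm(image_path: str, prompt: str) -> str:
    """Stub vision-language model call."""
    return "refutes"

def detect(image_path: str, claim: str) -> str:
    evidence = "\n".join(search(claim))
    prompt = (f"Claim: {claim}\nExternal evidence:\n{evidence}\n"
              "Does the image-claim pair spread misinformation?")
    return lvlm(image_path, prompt)

verdict = detect("post.jpg", "Photo shows event X in 2024.")
```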
-
Arbitrary Reduction of MRI Inter-slice Spacing Using Hierarchical Feature Conditional Diffusion
Authors:
Xin Wang,
Zhenrong Shen,
Zhiyun Song,
Sheng Wang,
Mengjun Liu,
Lichi Zhang,
Kai Xuan,
Qian Wang
Abstract:
Magnetic resonance (MR) images collected in 2D scanning protocols typically have large inter-slice spacing, resulting in high in-plane resolution but reduced through-plane resolution. Super-resolution techniques can reduce the inter-slice spacing of 2D scanned MR images, facilitating the downstream visual experience and computer-aided diagnosis. However, most existing super-resolution methods are trained at a fixed scaling ratio, which is inconvenient in clinical settings where MR scanning may have varying inter-slice spacings. To solve this issue, we propose Hierarchical Feature Conditional Diffusion (HiFi-Diff) for arbitrary reduction of MR inter-slice spacing. Given two adjacent MR slices and the relative positional offset, HiFi-Diff can iteratively convert a Gaussian noise map into any desired in-between MR slice. Furthermore, to enable fine-grained conditioning, the Hierarchical Feature Extraction (HiFE) module is proposed to hierarchically extract conditional features and conduct element-wise modulation. Our experimental results on the publicly available HCP-1200 dataset demonstrate the high-fidelity super-resolution capability of HiFi-Diff and its efficacy in enhancing downstream segmentation performance.
Submitted 15 September, 2023; v1 submitted 16 April, 2023;
originally announced April 2023.
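A schematic DDPM-style sampling loop for the described conditioning (two adjacent slices plus a relative offset); the `denoiser` stub stands in for HiFi-Diff's network with hierarchical feature modulation, and the noise schedule is arbitrary:

```python
import numpy as np

def denoiser(x_t, t, slice_a, slice_b, offset):
    """Stub epsilon-predictor; the real model conditions on hierarchical
    features extracted from the two neighboring slices."""
    return np.zeros_like(x_t)

def sample(slice_a, slice_b, offset, steps=50, rng=np.random.default_rng(0)):
    betas = np.linspace(1e-4, 0.02, steps)
    alphas = 1 - betas
    abar = np.cumprod(alphas)
    x = rng.standard_normal(slice_a.shape)          # start from a Gaussian noise map
    for t in reversed(range(steps)):
        eps = denoiser(x, t, slice_a, slice_b, offset)
        x = (x - betas[t] / np.sqrt(1 - abar[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x  # the in-between slice at relative position `offset`

mid = sample(np.zeros((64, 64)), np.zeros((64, 64)), offset=0.5)
```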
-
Spatiotemporal Classification with limited labels using Constrained Clustering for large datasets
Authors:
Praveen Ravirathinam,
Rahul Ghosh,
Ke Wang,
Keyang Xuan,
Ankush Khandelwal,
Hilary Dugan,
Paul Hanson,
Vipin Kumar
Abstract:
Creating separable representations via representation learning and clustering is critical in analyzing large unstructured datasets with only a few labels. Separable representations can lead to supervised models with better classification capabilities and additionally aid in generating new labeled samples. Most unsupervised and semi-supervised methods to analyze large datasets do not leverage the existing small amounts of labels to get better representations. In this paper, we propose a spatiotemporal clustering paradigm that uses spatial and temporal features combined with a constrained loss to produce separable representations. We demonstrate this method on the newly published ReaLSAT dataset of surface water dynamics for over 680,000 lakes across the world, making it an essential dataset in terms of ecology and sustainability. Using this large unlabeled dataset, we first show how a spatiotemporal representation is better than a purely spatial or temporal one. We then show how we can learn even better representations using a constrained loss with few labels. We conclude by showing how our method, using few labels, can pick out new labeled samples from the unlabeled data, which can be used to augment supervised methods, leading to better classification.
Submitted 14 October, 2022;
originally announced October 2022.
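One plausible form of such a constrained loss, sketched under our own assumptions (must-link/cannot-link penalties from the few labels on top of a soft-clustering term):

```python
import numpy as np

def constrained_loss(emb, assign, labels, margin=1.0, lam=0.5):
    """emb: (N, d) representations; assign: (N, K) soft cluster assignments;
    labels: dict idx -> class for the small labeled subset."""
    centers = assign.T @ emb / (assign.sum(0)[:, None] + 1e-8)
    cluster = np.mean(np.sum((emb - assign @ centers) ** 2, axis=1))
    pull, push = 0.0, 0.0
    idx = list(labels)
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            d = np.sum((emb[idx[a]] - emb[idx[b]]) ** 2)
            if labels[idx[a]] == labels[idx[b]]:
                pull += d                      # same class: pull together
            else:
                push += max(0.0, margin - d)   # different class: push apart
    return cluster + lam * (pull + push)

rng = np.random.default_rng(0)
L = constrained_loss(rng.standard_normal((10, 4)),
                     np.full((10, 3), 1 / 3), {0: 0, 1: 0, 2: 1})
```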
-
TBI-GAN: An Adversarial Learning Approach for Data Synthesis on Traumatic Brain Segmentation
Authors:
Xiangyu Zhao,
Di Zang,
Sheng Wang,
Zhenrong Shen,
Kai Xuan,
Zeyu Wei,
Zhe Wang,
Ruizhe Zheng,
Xuehai Wu,
Zheren Li,
Qian Wang,
Zengxin Qi,
Lichi Zhang
Abstract:
Brain network analysis for traumatic brain injury (TBI) patients is critical for consciousness-level assessment and prognosis evaluation, which requires the segmentation of certain consciousness-related brain regions. However, it is difficult to construct a TBI segmentation model, as manually annotated MR scans of TBI patients are hard to collect. Data augmentation techniques can be applied to alleviate the issue of data scarcity. However, conventional data augmentation strategies, such as spatial and intensity transformations, are unable to mimic the deformations and lesions in traumatic brains, which limits the performance of the subsequent segmentation task. To address these issues, we propose a novel medical image inpainting model named TBI-GAN to synthesize TBI MR scans with paired brain label maps. The main strength of our TBI-GAN method is that it can generate TBI images and corresponding label maps simultaneously, which previous inpainting methods for medical images have not achieved. We first generate the inpainted image under the guidance of edge information in a coarse-to-fine manner, and then the synthesized intensity image is used as the prior for label inpainting. Furthermore, we introduce a registration-based template augmentation pipeline to increase the diversity of the synthesized image pairs and enhance the capacity of data augmentation. Experimental results show that the proposed TBI-GAN method can produce sufficient synthesized TBI images with high quality and valid label maps, which can greatly improve 2D and 3D traumatic brain segmentation performance compared with the alternatives.
Submitted 11 August, 2022;
originally announced August 2022.
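The synthesis order described above, as a stub pipeline (every function is a placeholder for a trained network; only the data flow is meaningful):

```python
import numpy as np

def inpaint_edges(image, mask):
    """Stage 0: complete edge structure in the masked region (stub)."""
    return np.zeros_like(image)

def inpaint_intensity(image, mask, edges):
    """Stage 1: coarse-to-fine image inpainting guided by edges (stub)."""
    return image * (1 - mask)

def inpaint_labels(label_map, mask, synthesized):
    """Stage 2: label inpainting with the synthesized image as prior (stub)."""
    return label_map

def synthesize_pair(image, label_map, mask):
    edges = inpaint_edges(image, mask)
    img2 = inpaint_intensity(image, mask, edges)
    lab2 = inpaint_labels(label_map, mask, img2)
    return img2, lab2    # paired scan + label map for augmentation

img, lab, m = np.zeros((64, 64)), np.zeros((64, 64)), np.zeros((64, 64))
pair = synthesize_pair(img, lab, m)
```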
-
Spatial Attention-based Implicit Neural Representation for Arbitrary Reduction of MRI Slice Spacing
Authors:
Xin Wang,
Sheng Wang,
Honglin Xiong,
Kai Xuan,
Zixu Zhuang,
Mengjun Liu,
Zhenrong Shen,
Xiangyu Zhao,
Lichi Zhang,
Qian Wang
Abstract:
Magnetic resonance (MR) images collected in 2D clinical protocols typically have large inter-slice spacing, resulting in high in-plane resolution and reduced through-plane resolution. Super-resolution techniques can enhance the through-plane resolution of MR images to facilitate downstream visualization and computer-aided diagnosis. However, most existing works train the super-resolution network at a fixed scaling factor, which is ill-suited to clinical settings where inter-slice spacing varies across MR scans. Inspired by recent progress in implicit neural representation, we propose a Spatial Attention-based Implicit Neural Representation (SA-INR) network for arbitrary reduction of MR inter-slice spacing. The SA-INR represents an MR image as a continuous implicit function of 3D coordinates. In this way, the SA-INR can reconstruct the MR image with arbitrary inter-slice spacing by continuously sampling the coordinates in 3D space. In particular, a local-aware spatial attention operation is introduced to model nearby voxels and their affinity more accurately within a larger receptive field. Meanwhile, to improve computational efficiency, a gradient-guided gating mask is proposed to apply the local-aware spatial attention to selected areas only. We evaluate our method on the public HCP-1200 dataset and a clinical knee MR dataset to demonstrate its superiority over existing methods.
Submitted 19 March, 2023; v1 submitted 23 May, 2022;
originally announced May 2022.
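A toy coordinate-network sketch of the arbitrary-spacing idea (a SIREN-like MLP standing in for SA-INR; the spatial attention and gating mask are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 64)) * 0.5
W2 = rng.standard_normal((64, 1)) * 0.5

def inr(coords):
    """Toy coordinate MLP: intensity as a continuous function of (x, y, z)."""
    return np.sin(coords @ W1) @ W2        # sinusoidal activation

# Reconstruct with arbitrary inter-slice spacing: densify the z axis only.
zs = np.linspace(0, 1, 40)                 # e.g., resample to 40 output slices
grid = np.stack(np.meshgrid(np.linspace(0, 1, 8),
                            np.linspace(0, 1, 8), zs, indexing="ij"), -1)
volume = inr(grid.reshape(-1, 3)).reshape(8, 8, 40)
```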
-
Knee Cartilage Defect Assessment by Graph Representation and Surface Convolution
Authors:
Zixu Zhuang,
Liping Si,
Sheng Wang,
Kai Xuan,
Xi Ouyang,
Yiqiang Zhan,
Zhong Xue,
Lichi Zhang,
Dinggang Shen,
Weiwu Yao,
Qian Wang
Abstract:
Knee osteoarthritis (OA) is the most common form of osteoarthritis and a leading cause of disability. Cartilage defects are regarded as major manifestations of knee OA and are visible by magnetic resonance imaging (MRI). Early detection and assessment of knee cartilage defects are therefore important for protecting patients from knee OA. To this end, many attempts have been made at knee cartilage defect assessment by applying convolutional neural networks (CNNs) to knee MRI. However, the physiologic characteristics of the cartilage may hinder such efforts: the cartilage is a thin curved layer, meaning that only a small portion of voxels in knee MRI can contribute to the cartilage defect assessment; heterogeneous scanning protocols further challenge the feasibility of CNNs in clinical practice; and CNN-based knee cartilage evaluation results lack interpretability. To address these challenges, we model the structure and appearance of the cartilage from knee MRI as a graph representation, which is capable of handling highly diverse clinical data. Then, guided by the cartilage graph representation, we design a non-Euclidean deep learning network with a self-attention mechanism to extract local and global cartilage features and to derive the final assessment with a visualized result. Our comprehensive experiments show that the proposed method yields superior performance in knee cartilage defect assessment, along with convenient 3D visualization for interpretability.
Submitted 12 January, 2022;
originally announced January 2022.
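One simple way to realize the surface-to-graph step, sketched with kNN connectivity (our illustration; the paper's graph construction and node features may differ):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_to_graph(points, features, k=6):
    """points: (N, 3) cartilage surface coordinates; features: (N, d) local
    appearance descriptors. Returns a kNN edge list plus node features."""
    tree = cKDTree(points)
    _, nn = tree.query(points, k=k + 1)      # first neighbor is the point itself
    edges = [(i, j) for i in range(len(points)) for j in nn[i, 1:]]
    return np.array(edges), features

pts = np.random.rand(100, 3)
feats = np.random.rand(100, 16)
edges, x = surface_to_graph(pts, feats)
```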
-
Multi-Modal MRI Reconstruction Assisted with Spatial Alignment Network
Authors:
Kai Xuan,
Lei Xiang,
Xiaoqian Huang,
Lichi Zhang,
Shu Liao,
Dinggang Shen,
Qian Wang
Abstract:
In clinical practice, multi-modal magnetic resonance imaging (MRI) with different contrasts is usually acquired in a single study to assess different properties of the same region of interest in the human body. The whole acquisition process can be accelerated by having one or more modalities under-sampled in the $k$-space. Recent research has shown that, considering the redundancy between different modalities, a target MRI modality under-sampled in the $k$-space can be more efficiently reconstructed with a fully-sampled reference MRI modality. However, we find that the performance of the aforementioned multi-modal reconstruction can be negatively affected by subtle spatial misalignment between different modalities, which is actually common in clinical practice. In this paper, we improve the quality of multi-modal reconstruction by compensating for such spatial misalignment with a spatial alignment network. First, our spatial alignment network estimates the displacement between the fully-sampled reference and the under-sampled target images, and warps the reference image accordingly. Then, the aligned fully-sampled reference image joins the multi-modal reconstruction of the under-sampled target image. Also, considering the contrast difference between the target and reference images, we have designed a cross-modality-synthesis-based registration loss in combination with the reconstruction loss, to jointly train the spatial alignment network and the reconstruction network. The experiments on both clinical MRI and multi-coil $k$-space raw data demonstrate the superiority and robustness of the multi-modal MRI reconstruction empowered with our spatial alignment network. Our code is publicly available at \url{https://github.com/woxuankai/SpatialAlignmentNetwork}.
Submitted 2 April, 2022; v1 submitted 12 August, 2021;
originally announced August 2021.
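A rough sketch of the alignment step and joint objective (assumed forms; in the actual method the displacement field comes from the trained alignment network and the registration loss compares against a cross-modality synthesis of the target):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(reference, flow):
    """Warp a 2D reference image by a displacement field flow: (2, H, W)."""
    h, w = reference.shape
    gy, gx = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(reference, [gy + flow[0], gx + flow[1]], order=1)

def joint_loss(recon, target, warped_ref, synth_of_target, mu=0.1):
    """Reconstruction loss plus a registration loss that compares the warped
    reference to a synthesized version of the target contrast."""
    rec = np.mean((recon - target) ** 2)
    reg = np.mean((warped_ref - synth_of_target) ** 2)
    return rec + mu * reg

ref = np.random.rand(32, 32)
warped = warp(ref, np.zeros((2, 32, 32)))
```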
-
Generalized-TODIM Method for Multi-criteria Decision Making with Basic Uncertain Information and its Application
Authors:
Zhiyuan Zhou,
Kai Xuan,
Zhifu Tao,
Ligang Zhou
Abstract:
Because basic uncertain information provides a simple form for decision information with a certainty degree, it has been developed to reflect the quality of observed or subjective assessments. To study the algebraic structure and preference relations of basic uncertain information, we develop several algebraic operations on it. The order relation of this type of information is also considered. Finally, to apply the developed algebraic operations and order relations, a generalized TODIM method for multi-attribute decision making with basic uncertain information is given. A numerical example shows that the developed decision procedure is valid.
Submitted 27 April, 2021; v1 submitted 19 April, 2021;
originally announced April 2021.
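For reference, the classic TODIM dominance computation that the paper generalizes to basic uncertain information (textbook crisp form, not the generalized operators):

```python
import numpy as np

def todim(P, w, theta=1.0):
    """P: (m, n) normalized decision matrix; w: criterion weights."""
    wr = w / w.max()                 # weights relative to the reference criterion
    m, n = P.shape
    delta = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            for c in range(n):
                diff = P[i, c] - P[j, c]
                if diff > 0:         # gain
                    delta[i, j] += np.sqrt(wr[c] * diff / wr.sum())
                elif diff < 0:       # loss, attenuated by theta
                    delta[i, j] -= np.sqrt(wr.sum() * (-diff) / wr[c]) / theta
    xi = delta.sum(axis=1)           # overall dominance of each alternative
    return (xi - xi.min()) / (xi.max() - xi.min() + 1e-12)

P = np.array([[0.7, 0.4], [0.5, 0.9], [0.6, 0.6]])
ranking = todim(P, w=np.array([0.6, 0.4]))
```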
-
A Self-ensembling Framework for Semi-supervised Knee Cartilage Defects Assessment with Dual-Consistency
Authors:
Jiayu Huo,
Liping Si,
Xi Ouyang,
Kai Xuan,
Weiwu Yao,
Zhong Xue,
Qian Wang,
Dinggang Shen,
Lichi Zhang
Abstract:
Knee osteoarthritis (OA) is one of the most common musculoskeletal disorders and requires early-stage diagnosis. Deep convolutional neural networks have achieved great success in the computer-aided diagnosis field. However, constructing deep learning models usually requires large amounts of annotated data, which are generally costly to obtain. In this paper, we propose a novel approach for knee cartilage defects assessment, including severity classification and lesion localization, which can be treated as a subtask of knee OA diagnosis. In particular, we design a self-ensembling framework composed of a student network and a teacher network with the same structure. The student network learns from both labeled and unlabeled data, and the teacher network averages the student model weights over the course of training. A novel attention loss function is developed to obtain accurate attention masks. With dual-consistency checking of the attention in lesion classification and localization, the two networks can gradually optimize the attention distribution and improve each other's performance, while training relies only on partially labeled data in a semi-supervised manner. Experiments show that the proposed method can significantly improve the self-ensembling performance in both knee cartilage defects classification and localization, and also greatly reduces the need for annotated data.
Submitted 12 October, 2020; v1 submitted 19 May, 2020;
originally announced May 2020.
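The weight-averaging core of such a self-ensembling (mean-teacher) scheme is standard; a minimal sketch:

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.99):
    """Teacher weights are an exponential moving average of the student's."""
    for name, w_s in student_params.items():
        teacher_params[name] = alpha * teacher_params[name] + (1 - alpha) * w_s

student = {"conv1": np.random.rand(3, 3)}
teacher = {k: v.copy() for k, v in student.items()}
# After each optimizer step on labeled + unlabeled batches:
ema_update(teacher, student)
# A consistency loss then matches student/teacher outputs; here the "dual
# consistency" covers both the classification and localization attention maps.
```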
-
Robust Brain Magnetic Resonance Image Segmentation for Hydrocephalus Patients: Hard and Soft Attention
Authors:
Xuhua Ren,
Jiayu Huo,
Kai Xuan,
Dongming Wei,
Lichi Zhang,
Qian Wang
Abstract:
Brain magnetic resonance (MR) segmentation for hydrocephalus patients is a challenging task. Encoding the anatomical variation of brain structures across individuals is not easily achieved. The task becomes even more difficult when image data from hydrocephalus patients are considered, as they often have large deformations and differ significantly from normal subjects. Here, we propose a novel strategy with hard and soft attention modules to solve the segmentation problems for hydrocephalus MR images. Our main contributions are three-fold: 1) the hard-attention module generates a coarse segmentation map using a multi-atlas-based method and the VoxelMorph tool, which guides the subsequent segmentation process and improves its robustness; 2) the soft-attention module incorporates position attention to capture precise context information, which further improves the segmentation accuracy; 3) we validate our method by segmenting the insula, thalamus, and many other regions of interest (ROIs) that are critical to quantifying brain MR images of hydrocephalus patients in real clinical scenarios. The proposed method achieves much improved robustness and accuracy when segmenting all 17 consciousness-related ROIs with high variation across subjects. To the best of our knowledge, this is the first work to employ deep learning to solve the brain segmentation problems of hydrocephalus patients.
Submitted 12 January, 2020;
originally announced January 2020.
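The soft-attention half is described as position attention; a generic position-attention block (as popularized by DANet) is sketched below, as our illustration rather than the authors' exact module:

```python
import numpy as np

def position_attention(feat):
    """feat: (C, H, W). Every position attends to all others, so distant but
    anatomically related voxels can share context."""
    C, H, W = feat.shape
    q = feat.reshape(C, -1)                   # (C, HW); Q = K = V for brevity
    logits = q.T @ q / np.sqrt(C)             # (HW, HW) position affinities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)
    out = q @ attn.T                          # aggregate context per position
    return feat + out.reshape(C, H, W)        # residual connection

f = np.random.rand(8, 16, 16)
f2 = position_attention(f)
```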