-
Towards a Science of Scaling Agent Systems
Authors:
Yubin Kim,
Ken Gu,
Chanwoo Park,
Chunjong Park,
Samuel Schmidgall,
A. Ali Heydari,
Yao Yan,
Zhihan Zhang,
Yuchen Zhuang,
Mark Malhotra,
Paul Pu Liang,
Hae Won Park,
Yuzhe Yang,
Xuhai Xu,
Yilun Du,
Shwetak Patel,
Tim Althoff,
Daniel McDuff,
Xin Liu
Abstract:
Agents, language model-based systems capable of reasoning, planning, and acting, are becoming the dominant paradigm for real-world AI applications. Despite this widespread adoption, the principles that determine their performance remain underexplored. We address this by deriving quantitative scaling principles for agent systems. We first formalize a definition of agentic evaluation and characterize scaling laws as the interplay between agent quantity, coordination structure, model capability, and task properties. We evaluate this across four benchmarks: Finance-Agent, BrowseComp-Plus, PlanCraft, and Workbench. With five canonical agent architectures (Single-Agent and four Multi-Agent Systems: Independent, Centralized, Decentralized, Hybrid), instantiated across three LLM families, we perform a controlled evaluation spanning 180 configurations. We derive a predictive model based on coordination metrics that achieves cross-validated R^2=0.524, enabling prediction on unseen task domains. We identify three effects: (1) a tool-coordination trade-off: under fixed computational budgets, tool-heavy tasks suffer disproportionately from multi-agent overhead; (2) capability saturation: coordination yields diminishing or negative returns once single-agent baselines exceed ~45%; and (3) topology-dependent error amplification: independent agents amplify errors 17.2x, while centralized coordination contains this to 4.4x. Centralized coordination improves performance by 80.8% on parallelizable tasks, while decentralized coordination excels on web navigation (+9.2% vs. +0.2%). Yet on sequential reasoning tasks, every multi-agent variant degraded performance by 39-70%. The framework predicts the optimal coordination strategy for 87% of held-out configurations. Out-of-sample validation on GPT-5.2 achieves MAE=0.071 and confirms that four of five scaling principles generalize to unseen frontier models.
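As a rough illustration of the kind of predictive modeling the abstract describes (a regression from coordination metrics to task performance, scored by cross-validated R^2), here is a minimal sketch. The features, toy data, and model choice are illustrative assumptions, not the authors' actual metrics or measurements.

```python
# Sketch: fit a linear model from coordination metrics to performance and
# report a cross-validated R^2, analogous in spirit to the paper's R^2=0.524.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy stand-ins for 180 evaluated configurations: each row holds hypothetical
# coordination metrics (e.g., agent count, messages per step, tool calls).
X = rng.normal(size=(180, 3))
# Toy performance scores loosely dependent on the metrics plus noise.
y = 0.4 * X[:, 0] - 0.2 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=180)

model = Ridge(alpha=1.0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")  # 5-fold CV R^2
print(f"cross-validated R^2: {scores.mean():.3f}")
```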
Submitted 16 December, 2025; v1 submitted 9 December, 2025;
originally announced December 2025.
-
From Insight to Exploit: Leveraging LLM Collaboration for Adaptive Adversarial Text Generation
Authors:
Najrin Sultana,
Md Rafi Ur Rashid,
Kang Gu,
Shagufta Mehnaz
Abstract:
LLMs can provide substantial zero-shot performance on diverse tasks using a simple task prompt, eliminating the need for training or fine-tuning. However, when applying these models to sensitive tasks, it is crucial to thoroughly assess their robustness against adversarial inputs. In this work, we introduce Static Deceptor (StaDec) and Dynamic Deceptor (DyDec), two innovative attack frameworks designed to systematically generate dynamic and adaptive adversarial examples by leveraging the understanding of the LLMs. We produce subtle and natural-looking adversarial inputs that preserve semantic similarity to the original text while effectively deceiving the target LLM. By utilizing an automated, LLM-driven pipeline, we eliminate the dependence on external heuristics. Our attacks evolve with the advancements in LLMs and demonstrate strong transferability across models unknown to the attacker. Overall, this work provides a systematic approach for the self-assessment of an LLM's robustness. We release our code and data at https://github.com/Shukti042/AdversarialExample.
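To make the attack loop concrete, here is a minimal, self-contained sketch of an LLM-driven adversarial rewriting pipeline in the spirit of StaDec/DyDec. The three helper functions are toy stand-ins (assumptions, not the paper's pipeline): a real system would call an attacker LLM, an embedding model, and the victim LLM.

```python
import random

def rewrite_with_llm(text: str) -> str:
    # Stand-in for an attacker-LLM rewrite; here, a trivial word swap.
    return text.replace("good", "decent") if "good" in text else text + " indeed"

def semantic_similarity(a: str, b: str) -> float:
    # Stand-in for embedding cosine similarity; token-overlap Jaccard here.
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)

def target_model_label(text: str) -> str:
    # Stand-in for the victim LLM's zero-shot prediction.
    return random.choice(["positive", "negative"])

def attack(text: str, true_label: str, rounds: int = 5, sim_min: float = 0.6):
    candidate = text
    for _ in range(rounds):
        candidate = rewrite_with_llm(candidate)          # propose a rewrite
        if semantic_similarity(text, candidate) < sim_min:
            candidate = text                             # drifted too far; reset
            continue
        if target_model_label(candidate) != true_label:  # prediction flipped
            return candidate
    return None  # no successful adversarial example found

print(attack("the movie was good", "positive"))
```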
Submitted 4 November, 2025;
originally announced November 2025.
-
Completion $\neq$ Collaboration: Scaling Collaborative Effort with Agents
Authors:
Shannon Zejiang Shen,
Valerie Chen,
Ken Gu,
Alexis Ross,
Zixian Ma,
Jillian Ross,
Alex Gu,
Chenglei Si,
Wayne Chi,
Andi Peng,
Jocelyn J Shen,
Ameet Talwalkar,
Tongshuang Wu,
David Sontag
Abstract:
Current evaluations of agents remain centered around one-shot task completion, failing to account for the inherently iterative and collaborative nature of many real-world problems, where human goals are often underspecified and evolve. We argue for a shift from building and assessing task completion agents to developing collaborative agents, assessed not only by the quality of their final outputs but by how well they engage with and enhance human effort throughout the problem-solving process. To support this shift, we introduce collaborative effort scaling, a framework that captures how an agent's utility grows with increasing user involvement. Through case studies and simulated evaluations, we show that state-of-the-art agents often underperform in multi-turn, real-world scenarios, revealing a missing ingredient in agent design: the ability to sustain engagement and scaffold user understanding. Collaborative effort scaling offers a lens for diagnosing agent behavior and guiding development toward more effective interactions.
Submitted 30 October, 2025; v1 submitted 29 October, 2025;
originally announced October 2025.
-
SynthWorlds: Controlled Parallel Worlds for Disentangling Reasoning and Knowledge in Language Models
Authors:
Ken Gu,
Advait Bhat,
Mike A Merrill,
Robert West,
Xin Liu,
Daniel McDuff,
Tim Althoff
Abstract:
Evaluating the reasoning ability of language models (LMs) is complicated by their extensive parametric world knowledge, where benchmark performance often reflects factual recall rather than genuine reasoning. Existing datasets and approaches (e.g., temporal filtering, paraphrasing, adversarial substitution) cannot cleanly separate the two. We present SynthWorlds, a framework that disentangles task reasoning complexity from factual knowledge. In SynthWorlds, we construct parallel corpora representing two worlds with identical interconnected structure: a real-mapped world, where models may exploit parametric knowledge, and a synthetic-mapped world, where such knowledge is meaningless. On top of these corpora, we design two mirrored tasks as case studies: multi-hop question answering and page navigation, which maintain equal reasoning difficulty across worlds. Experiments in parametric-only (e.g., closed-book QA) and knowledge-augmented (e.g., retrieval-augmented) LM settings reveal a persistent knowledge advantage gap, defined as the performance boost models gain from memorized parametric world knowledge. Knowledge acquisition and integration mechanisms reduce but do not eliminate this gap, highlighting opportunities for system improvements. Fully automatic and scalable, SynthWorlds provides a controlled environment for evaluating LMs in ways that were previously challenging, enabling precise and testable comparisons of reasoning and memorization.
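The headline metric reduces to a simple difference, sketched below with hypothetical accuracies (the numbers are illustrative, not results from the paper).

```python
# The "knowledge advantage gap": the accuracy a model gains in the
# real-mapped world over the synthetic-mapped world on mirrored tasks.
def knowledge_advantage_gap(acc_real: float, acc_synth: float) -> float:
    """Performance boost attributable to memorized parametric knowledge."""
    return acc_real - acc_synth

# Example: closed-book QA accuracy in each world (hypothetical values).
gap = knowledge_advantage_gap(acc_real=0.62, acc_synth=0.41)
print(f"knowledge advantage gap: {gap:.2f}")  # 0.21
```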
Submitted 30 October, 2025; v1 submitted 28 October, 2025;
originally announced October 2025.
-
Dexbotic: Open-Source Vision-Language-Action Toolbox
Authors:
Bin Xie,
Erjin Zhou,
Fan Jia,
Hao Shi,
Haoqiang Fan,
Haowei Zhang,
Hebei Li,
Jianjian Sun,
Jie Bin,
Junwen Huang,
Kai Liu,
Kaixin Liu,
Kefan Gu,
Lin Sun,
Meng Zhang,
Peilong Han,
Ruitao Hao,
Ruitao Zhang,
Saike Huang,
Songhan Xie,
Tiancai Wang,
Tianle Liu,
Wenbin Tang,
Wenqi Zhu,
Yang Chen
, et al. (14 additional authors not shown)
Abstract:
In this paper, we present Dexbotic, an open-source Vision-Language-Action (VLA) model toolbox based on PyTorch. It aims to provide a one-stop VLA research service for professionals in the field of embodied intelligence. It offers a codebase that supports multiple mainstream VLA policies simultaneously, allowing users to reproduce various VLA methods with just a single environment setup. The toolbox is experiment-centric: users can quickly develop new VLA experiments by simply modifying the Exp script. Moreover, we provide much stronger pretrained models that achieve substantial performance improvements for state-of-the-art VLA policies. Dexbotic will be continuously updated to include more of the latest pretrained foundation models and cutting-edge VLA models from the industry.
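As a purely hypothetical illustration of the "experiment-centric" workflow described above, a new experiment might amount to overriding a few fields of an Exp script. The DexboticExp base class and all field names below are invented for illustration; consult the actual toolbox for its real Exp interface.

```python
class DexboticExp:                 # hypothetical base experiment class
    policy = "pi0"                 # which VLA policy to reproduce
    pretrained = "dexbotic/base"   # which pretrained model to start from
    batch_size = 64
    lr = 1e-4

class MyExp(DexboticExp):
    # A "new experiment" is just an override of a few fields.
    policy = "openvla"
    lr = 5e-5

print(MyExp.policy, MyExp.lr)
```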
Submitted 27 October, 2025;
originally announced October 2025.
-
RoboChallenge: Large-scale Real-robot Evaluation of Embodied Policies
Authors:
Adina Yakefu,
Bin Xie,
Chongyang Xu,
Enwen Zhang,
Erjin Zhou,
Fan Jia,
Haitao Yang,
Haoqiang Fan,
Haowei Zhang,
Hongyang Peng,
Jing Tan,
Junwen Huang,
Kai Liu,
Kaixin Liu,
Kefan Gu,
Qinglun Zhang,
Ruitao Zhang,
Saike Huang,
Shen Cheng,
Shuaicheng Liu,
Tiancai Wang,
Tiezhen Wang,
Wei Sun,
Wenbin Tang,
Yajun Wei
, et al. (12 additional authors not shown)
Abstract:
Testing on real machines is indispensable for robotic control algorithms. In the context of learning-based algorithms, especially VLA models, the demand for large-scale evaluation, i.e., testing a large number of models on a large number of tasks, is becoming increasingly urgent. However, doing this right is highly non-trivial, especially when scalability and reproducibility are taken into account. In this report, we describe our methodology for constructing RoboChallenge, an online evaluation system for testing robotic control algorithms, and our survey of recent state-of-the-art VLA models using our initial benchmark, Table30.
Submitted 20 October, 2025;
originally announced October 2025.
-
ManiAgent: An Agentic Framework for General Robotic Manipulation
Authors:
Yi Yang,
Kefan Gu,
Yuqing Wen,
Hebei Li,
Yucheng Zhao,
Tiancai Wang,
Xudong Liu
Abstract:
While Vision-Language-Action (VLA) models have demonstrated impressive capabilities in robotic manipulation, their performance in complex reasoning and long-horizon task planning is limited by data scarcity and model capacity. To address this, we introduce ManiAgent, an agentic architecture for general manipulation tasks that achieves end-to-end output from task descriptions and environmental inputs to robotic manipulation actions. In this framework, multiple agents communicate with one another to perform environmental perception, sub-task decomposition, and action generation, enabling efficient handling of complex manipulation scenarios. Evaluations show ManiAgent achieves an 86.8% success rate on the SimplerEnv benchmark and 95.8% on real-world pick-and-place tasks, enabling efficient data collection that yields VLA models with performance comparable to those trained on human-annotated datasets. The project webpage is available at https://yi-yang929.github.io/ManiAgent/.
Submitted 13 October, 2025; v1 submitted 13 October, 2025;
originally announced October 2025.
-
IntentionVLA: Generalizable and Efficient Embodied Intention Reasoning for Human-Robot Interaction
Authors:
Yandu Chen,
Kefan Gu,
Yuqing Wen,
Yucheng Zhao,
Tiancai Wang,
Liqiang Nie
Abstract:
Vision-Language-Action (VLA) models leverage pretrained vision-language models (VLMs) to couple perception with robotic control, offering a promising path toward general-purpose embodied intelligence. However, current SOTA VLAs are primarily pretrained on multimodal tasks with limited relevance to embodied scenarios, and then finetuned to map explicit instructions to actions. Consequently, due to the lack of reasoning-intensive pretraining and reasoning-guided manipulation, these models are unable to perform the implicit human intention reasoning required for complex, real-world interactions. To overcome these limitations, we propose IntentionVLA, a VLA framework with a curriculum training paradigm and an efficient inference mechanism. Our proposed method first leverages carefully designed reasoning data that combine intention inference, spatial grounding, and compact embodied reasoning, endowing the model with both reasoning and perception capabilities. In the following finetuning stage, IntentionVLA employs the compact reasoning outputs as contextual guidance for action generation, enabling fast inference under indirect instructions. Experimental results show that IntentionVLA substantially outperforms π_0, achieving 18% higher success rates with direct instructions and 28% higher than ECoT under intention instructions. On out-of-distribution intention tasks, IntentionVLA achieves over twice the success rate of all baselines, and further enables zero-shot human-robot interaction with a 40% success rate. These results highlight IntentionVLA as a promising paradigm for next-generation human-robot interaction (HRI) systems.
Submitted 9 October, 2025;
originally announced October 2025.
-
Diagnosing Shortcut-Induced Rigidity in Continual Learning: The Einstellung Rigidity Index (ERI)
Authors:
Kai Gu,
Weishi Shi
Abstract:
Deep neural networks frequently exploit shortcut features, defined as incidental correlations between inputs and labels without causal meaning. Shortcut features undermine robustness and reduce reliability under distribution shifts. In continual learning (CL), the consequences of shortcut exploitation can persist and intensify: weights inherited from earlier tasks bias representation reuse toward whatever features most easily satisfied prior labels, mirroring the cognitive Einstellung effect, a phenomenon where past habits block optimal solutions. Whereas catastrophic forgetting erodes past skills, shortcut-induced rigidity throttles the acquisition of new ones. We introduce the Einstellung Rigidity Index (ERI), a compact diagnostic that disentangles genuine transfer from cue-inflated performance using three interpretable facets: (i) Adaptation Delay (AD), (ii) Performance Deficit (PD), and (iii) Relative Suboptimal Feature Reliance (SFR_rel). On a two-phase CIFAR-100 CL benchmark with a deliberately spurious magenta patch in Phase 2, we evaluate Naive fine-tuning (SGD), online Elastic Weight Consolidation (EWC_on), Dark Experience Replay (DER++), Gradient Projection Memory (GPM), and Deep Generative Replay (DGR). Across these methods, we observe that the CL methods reach accuracy thresholds earlier than a Scratch-T2 baseline (negative AD) but achieve slightly lower final accuracy on patched shortcut classes (positive PD). Masking the patch improves accuracy for the CL methods while slightly reducing it for Scratch-T2, yielding negative SFR_rel. This pattern indicates that the patch acted as a distractor for CL models in this setting rather than as a helpful shortcut.
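The three facets admit a compact sketch. The exact definitions (thresholds, normalization) are the paper's; the formulas below are plausible readings of the abstract, flagged as assumptions, with illustrative numbers matching the reported sign pattern.

```python
def adaptation_delay(epochs_cl: int, epochs_scratch: int) -> int:
    # AD: how much earlier (negative) or later (positive) the CL method
    # reaches an accuracy threshold than the Scratch-T2 baseline.
    return epochs_cl - epochs_scratch

def performance_deficit(acc_scratch: float, acc_cl: float) -> float:
    # PD: final-accuracy shortfall of the CL method on shortcut classes.
    return acc_scratch - acc_cl

def relative_shortcut_reliance(acc_patched: float, acc_masked: float) -> float:
    # SFR_rel: accuracy change when the spurious patch is masked out;
    # negative means the patch acted as a distractor, not a helpful shortcut.
    return acc_patched - acc_masked

print(adaptation_delay(12, 18))                # -6 (negative AD)
print(performance_deficit(0.74, 0.71))         # ~0.03 (positive PD)
print(relative_shortcut_reliance(0.70, 0.73))  # ~-0.03 (negative SFR_rel)
```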
Submitted 30 September, 2025;
originally announced October 2025.
-
PerformSinger: Multimodal Singing Voice Synthesis Leveraging Synchronized Lip Cues from Singing Performance Videos
Authors:
Ke Gu,
Zhicong Wu,
Peng Bai,
Sitong Qiao,
Zhiqi Jiang,
Junchen Lu,
Xiaodong Shi,
Xinyuan Qian
Abstract:
Existing singing voice synthesis (SVS) models largely rely on fine-grained, phoneme-level durations, which limits their practical application. These methods also overlook the complementary role of visual information in duration prediction. To address these issues, we propose PerformSinger, a pioneering multimodal SVS framework, which incorporates lip cues from video as a visual modality, enabling high-quality "duration-free" singing voice synthesis. PerformSinger comprises parallel multi-branch multimodal encoders, a feature fusion module, a duration and variational prediction network, a mel-spectrogram decoder and a vocoder. The fusion module, composed of adapter and fusion blocks, employs a progressive fusion strategy within an aligned semantic space to produce high-quality multimodal feature representations, thereby enabling accurate duration prediction and high-fidelity audio synthesis. To facilitate this research, we design, collect and annotate a novel SVS dataset involving synchronized video streams and precise phoneme-level manual annotations. Extensive experiments demonstrate the state-of-the-art performance of our approach in both subjective and objective evaluations. The code and dataset will be publicly available.
Submitted 24 September, 2025;
originally announced September 2025.
-
Mano Technical Report
Authors:
Tianyu Fu,
Anyang Su,
Chenxu Zhao,
Hanning Wang,
Minghui Wu,
Zhe Yu,
Fei Hu,
Mingjia Shi,
Wei Dong,
Jiayao Wang,
Yuyang Chen,
Ruiyang Yu,
Siran Peng,
Menglin Li,
Nan Huang,
Haitian Wei,
Jiawei Yu,
Yi Xin,
Xilin Zhao,
Kai Gu,
Ping Jiang,
Sifan Zhou,
Shuo Wang
Abstract:
Graphical user interfaces (GUIs) are the primary medium for human-computer interaction, yet automating GUI interactions remains challenging due to the complexity of visual elements, dynamic environments, and the need for multi-step reasoning. Existing methods based on vision-language models (VLMs) often suffer from limited resolution, domain mismatch, and insufficient sequential decision-making capability. To address these issues, we propose Mano, a robust GUI agent built upon a multi-modal foundation model pre-trained on extensive web and computer system data. Our approach integrates a novel simulated environment for high-fidelity data generation, a three-stage training pipeline (supervised fine-tuning, offline reinforcement learning, and online reinforcement learning), and a verification module for error recovery. Mano demonstrates state-of-the-art performance on multiple GUI benchmarks, including Mind2Web and OSWorld, achieving significant improvements in success rate and operational accuracy. Our work provides new insights into the effective integration of reinforcement learning with VLMs for practical GUI agent deployment, highlighting the importance of domain-specific data, iterative training, and holistic reward design.
Submitted 31 October, 2025; v1 submitted 21 September, 2025;
originally announced September 2025.
-
LLaDA-VLA: Vision Language Diffusion Action Models
Authors:
Yuqing Wen,
Hebei Li,
Kefan Gu,
Yucheng Zhao,
Tiancai Wang,
Xiaoyan Sun
Abstract:
The rapid progress of auto-regressive vision-language models (VLMs) has inspired growing interest in vision-language-action models (VLA) for robotic manipulation. Recently, masked diffusion models, a paradigm distinct from autoregressive models, have begun to demonstrate competitive performance in text generation and multimodal applications, leading to the development of a series of diffusion-based VLMs (d-VLMs). However, leveraging such models for robot policy learning remains largely unexplored. In this work, we present LLaDA-VLA, the first Vision-Language-Diffusion-Action model built upon pretrained d-VLMs for robotic manipulation. To effectively adapt d-VLMs to the robotic domain, we introduce two key designs: (1) a localized special-token classification strategy that replaces full-vocabulary classification with classification over special action tokens, reducing adaptation difficulty; (2) a hierarchical action-structured decoding strategy that decodes action sequences hierarchically, considering the dependencies within and across actions. Extensive experiments demonstrate that LLaDA-VLA significantly outperforms state-of-the-art VLAs on both simulation and real-world robots.
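The first design admits a short sketch: restrict classification at masked positions to a reserved block of action tokens rather than the full vocabulary. The token counts and index layout below are illustrative assumptions, not LLaDA-VLA's actual configuration.

```python
import torch

vocab_size, n_action_tokens = 32000, 256
# Assume the last n_action_tokens vocabulary ids are reserved action tokens.
action_token_ids = torch.arange(vocab_size - n_action_tokens, vocab_size)

logits = torch.randn(4, vocab_size)                 # d-VLM logits at 4 masked slots
action_logits = logits[:, action_token_ids]         # restrict to action tokens only
pred = action_token_ids[action_logits.argmax(-1)]   # decoded action-token ids
print(pred)
```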
Submitted 10 September, 2025; v1 submitted 8 September, 2025;
originally announced September 2025.
-
The Anatomy of a Personal Health Agent
Authors:
A. Ali Heydari,
Ken Gu,
Vidya Srinivas,
Hong Yu,
Zhihan Zhang,
Yuwei Zhang,
Akshay Paruchuri,
Qian He,
Hamid Palangi,
Nova Hammerquist,
Ahmed A. Metwally,
Brent Winslow,
Yubin Kim,
Kumar Ayush,
Yuzhe Yang,
Girish Narayanswamy,
Maxwell A. Xu,
Jake Garrison,
Amy Armento Lee,
Jenny Vafeiadou,
Ben Graef,
Isaac R. Galatzer-Levy,
Erik Schenck,
Andrew Barakat,
Javier Perez
, et al. (13 additional authors not shown)
Abstract:
Health is a fundamental pillar of human wellness, and the rapid advancements in large language models (LLMs) have driven the development of a new generation of health agents. However, the application of health agents to fulfill the diverse needs of individuals in daily non-clinical settings is underexplored. In this work, we aim to build a comprehensive personal health agent that is able to reason about multimodal data from everyday consumer wellness devices and common personal health records, and provide personalized health recommendations. To understand end-users' needs when interacting with such an assistant, we conducted an in-depth analysis of web search and health forum queries, alongside qualitative insights from users and health experts gathered through a user-centered design process. Based on these findings, we identified three major categories of consumer health needs, each of which is supported by a specialist sub-agent: (1) a data science agent that analyzes personal time-series wearable and health record data, (2) a health domain expert agent that integrates users' health and contextual data to generate accurate, personalized insights, and (3) a health coach agent that synthesizes data insights, guiding users using a specified psychological strategy and tracking users' progress. Furthermore, we propose and develop the Personal Health Agent (PHA), a multi-agent framework that enables dynamic, personalized interactions to address individual health needs. To evaluate each sub-agent and the multi-agent system, we conducted automated and human evaluations across 10 benchmark tasks, involving more than 7,000 annotations and 1,100 hours of effort from health experts and end-users. Our work represents the most comprehensive evaluation of a health agent to date and establishes a strong foundation towards the futuristic vision of a personal health agent accessible to everyone.
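As a toy illustration of the three-sub-agent decomposition described above, the following sketch routes a user query to a data-science, domain-expert, or health-coach stand-in. The routing heuristic and agent functions are invented for illustration; they are not the PHA implementation.

```python
def data_science_agent(q: str) -> str:
    return f"[DS] analyzing wearable time series for: {q}"

def domain_expert_agent(q: str) -> str:
    return f"[Expert] personalized health insight for: {q}"

def health_coach_agent(q: str) -> str:
    return f"[Coach] goal-tracking plan for: {q}"

def route(query: str) -> str:
    ql = query.lower()
    if any(k in ql for k in ("trend", "average", "data")):
        return data_science_agent(query)     # numeric/time-series questions
    if any(k in ql for k in ("why", "risk", "mean")):
        return domain_expert_agent(query)    # interpretation questions
    return health_coach_agent(query)         # behavior-change questions

print(route("What does my sleep data trend look like?"))
```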
Submitted 18 September, 2025; v1 submitted 27 August, 2025;
originally announced August 2025.
-
Efficient Cloud-Edge-Device Query Execution Based on Collaborative Scan Operator
Authors:
Chunyu Zhao,
Hongzhi Wang,
Kaixin Zhang,
Hongliang Li,
Yihan Zhang,
Jiawei Zhang,
Kunkai Gu,
Yuan Tian,
Xiangdong Huang,
Jingyi Xu
Abstract:
In cloud-edge-device (CED) collaborative query (CQ) processing, by leveraging CED collaboration, the advantages of both cloud computing and edge resources can be fully integrated. However, it is difficult to implement collaborative operators that can flexibly switch between the cloud and the edge during query execution. Thus, in this paper, we aim to improve the query performance when the edge resources reach a bottleneck. To achieve seamless switching of query execution between the cloud and edge, we propose a CQ processing method by establishing a CED collaborative framework based on the collaborative scan operator, so that query execution can be transferred to the cloud at any time when the edge resources are saturated. Extensive experiments show that, under sufficient network download bandwidth, the CED collaborative scan operator can effectively alleviate the performance degradation of scan operators caused by high I/O load and CPU wait time at the edge. It also achieves balanced resource scheduling between the cloud and edge.
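A hedged sketch of the collaborative scan idea follows: serve rows from the edge until a resource monitor reports saturation, then hand the remaining scan range to the cloud. The load monitor, threshold, and cloud reader are illustrative stand-ins, not the paper's operator implementation.

```python
import random
from typing import Iterator

EDGE_LOAD_LIMIT = 0.8  # hypothetical saturation threshold

def edge_load() -> float:
    # Stand-in for an edge-side I/O and CPU load monitor.
    return random.random()

def scan_cloud(rows: list, start: int) -> Iterator:
    # In a real system this would read the same table range from cloud storage.
    yield from rows[start:]

def collaborative_scan(rows: list) -> Iterator:
    for i, row in enumerate(rows):
        if edge_load() > EDGE_LOAD_LIMIT:
            # Edge saturated: transfer the rest of the scan to the cloud.
            yield from scan_cloud(rows, i)
            return
        yield row  # the edge serves this portion of the scan

print(list(collaborative_scan(list(range(10)))))
```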
Submitted 21 August, 2025;
originally announced August 2025.
-
Dialogues Aspect-based Sentiment Quadruple Extraction via Structural Entropy Minimization Partitioning
Authors:
Kun Peng,
Cong Cao,
Hao Peng,
Zhifeng Hao,
Lei Jiang,
Kongjing Gu,
Yanbing Liu,
Philip S. Yu
Abstract:
Dialogues Aspect-based Sentiment Quadruple Extraction (DiaASQ) aims to extract all target-aspect-opinion-sentiment quadruples from a given multi-round, multi-participant dialogue. Existing methods typically learn word relations across entire dialogues, assuming a uniform distribution of sentiment elements. However, we find that dialogues often contain multiple semantically independent sub-dialogues without clear dependencies between them. Therefore, learning word relationships across the entire dialogue inevitably introduces additional noise into the extraction process. To address this, our method focuses on partitioning dialogues into semantically independent sub-dialogues. Achieving completeness while minimizing these sub-dialogues presents a significant challenge. Simply partitioning based on reply relationships is ineffective. Instead, we propose utilizing a structural entropy minimization algorithm to partition the dialogues. This approach aims to preserve relevant utterances while distinguishing irrelevant ones as much as possible. Furthermore, we introduce a two-step framework for quadruple extraction: first extracting individual sentiment elements at the utterance level, then matching quadruples at the sub-dialogue level. Extensive experiments demonstrate that our approach achieves state-of-the-art performance in DiaASQ with much lower computational costs.
Submitted 7 August, 2025;
originally announced August 2025.
-
ROSA: Harnessing Robot States for Vision-Language and Action Alignment
Authors:
Yuqing Wen,
Kefan Gu,
Haoxuan Liu,
Yucheng Zhao,
Tiancai Wang,
Haoqiang Fan,
Xiaoyan Sun
Abstract:
Vision-Language-Action (VLA) models have recently made significant advances in multi-task, end-to-end robotic control, due to the strong generalization capabilities of Vision-Language Models (VLMs). A fundamental challenge in developing such models is effectively aligning the vision-language space with the robotic action space. Existing approaches typically rely on directly fine-tuning VLMs using expert demonstrations. However, this strategy suffers from a spatio-temporal gap, resulting in considerable data inefficiency and heavy reliance on human labor. Spatially, VLMs operate within a high-level semantic space, whereas robotic actions are grounded in low-level 3D physical space; temporally, VLMs primarily interpret the present, while VLA models anticipate future actions. To overcome these challenges, we propose a novel training paradigm, ROSA, which leverages robot state estimation to improve alignment between vision-language and action spaces. By integrating robot state estimation data obtained via an automated process, ROSA enables the VLA model to gain enhanced spatial understanding and self-awareness, thereby boosting performance and generalization. Extensive experiments in both simulated and real-world environments demonstrate the effectiveness of ROSA, particularly in low-data regimes.
Submitted 16 June, 2025;
originally announced June 2025.
-
RADAR: Benchmarking Language Models on Imperfect Tabular Data
Authors:
Ken Gu,
Zhihan Zhang,
Kate Lin,
Yuwei Zhang,
Akshay Paruchuri,
Hong Yu,
Mehran Kazemi,
Kumar Ayush,
A. Ali Heydari,
Maxwell A. Xu,
Girish Narayanswamy,
Yun Liu,
Ming-Zher Poh,
Yuzhe Yang,
Mark Malhotra,
Shwetak Patel,
Hamid Palangi,
Xuhai Xu,
Daniel McDuff,
Tim Althoff,
Xin Liu
Abstract:
Language models (LMs) are increasingly being deployed to perform autonomous data analyses. However, their data awareness -- the ability to recognize, reason over, and appropriately handle data artifacts such as missing values, outliers, and logical inconsistencies -- remains underexplored. These artifacts are especially common in real-world tabular data and, if mishandled, can significantly compromise the validity of analytical conclusions. To address this gap, we present RADAR, a benchmark for systematically evaluating data-aware reasoning on tabular data. We develop a framework to simulate data artifacts via programmatic perturbations to enable targeted evaluation of model behavior. RADAR comprises 2,980 table-query pairs, grounded in real-world data spanning 9 domains and 5 data artifact types. In addition to evaluating artifact handling, RADAR systematically varies table size to study how reasoning performance holds up as tables grow. Our evaluation reveals that, despite decent performance on tables without data artifacts, frontier models degrade significantly when data artifacts are introduced, exposing critical gaps in their capacity for robust, data-aware analysis. Designed to be flexible and extensible, RADAR supports diverse perturbation types and controllable table sizes, offering a valuable resource for advancing tabular reasoning.
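To make the perturbation idea concrete, here is a minimal sketch of injecting two of the artifact types named above (missing values and outliers) into a toy table. The helper functions and injection rules are illustrative assumptions, not RADAR's actual framework.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [34.0, 29.0, 58.0, 41.0],
                   "weight_kg": [70.0, 65.0, 82.0, 77.0]})

def inject_missing(df: pd.DataFrame, col: str, frac: float) -> pd.DataFrame:
    out = df.copy()
    idx = out.sample(frac=frac, random_state=0).index
    out.loc[idx, col] = np.nan          # simulate missing values
    return out

def inject_outlier(df: pd.DataFrame, col: str) -> pd.DataFrame:
    out = df.copy()
    out.loc[out.index[0], col] *= 100   # simulate an implausible magnitude
    return out

perturbed = inject_outlier(inject_missing(df, "age", 0.25), "weight_kg")
print(perturbed)
```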
Submitted 30 October, 2025; v1 submitted 9 June, 2025;
originally announced June 2025.
-
LSM-2: Learning from Incomplete Wearable Sensor Data
Authors:
Maxwell A. Xu,
Girish Narayanswamy,
Kumar Ayush,
Dimitris Spathis,
Shun Liao,
Shyam A. Tailor,
Ahmed Metwally,
A. Ali Heydari,
Yuwei Zhang,
Jake Garrison,
Samy Abdel-Ghaffar,
Xuhai Xu,
Ken Gu,
Jacob Sunshine,
Ming-Zher Poh,
Yun Liu,
Tim Althoff,
Shrikanth Narayanan,
Pushmeet Kohli,
Mark Malhotra,
Shwetak Patel,
Yuzhe Yang,
James M. Rehg,
Xin Liu,
Daniel McDuff
Abstract:
Foundation models, a cornerstone of recent advancements in machine learning, have predominantly thrived on complete and well-structured data. Wearable sensor data frequently suffers from significant missingness, posing a substantial challenge for self-supervised learning (SSL) models that typically assume complete data inputs. This paper introduces the second generation of Large Sensor Model (LSM-2) with Adaptive and Inherited Masking (AIM), a novel SSL approach that learns robust representations directly from incomplete data without requiring explicit imputation. AIM's core novelty lies in its use of learnable mask tokens to model both existing ("inherited") and artificially introduced missingness, enabling it to robustly handle fragmented real-world data during inference. Pre-trained on an extensive dataset of 40M hours of day-long multimodal sensor data, our LSM-2 with AIM achieves the best performance across a diverse range of tasks, including classification, regression and generative modeling. Furthermore, LSM-2 with AIM exhibits superior scaling performance, and critically, maintains high performance even under targeted missingness scenarios, reflecting clinically coherent patterns, such as the diagnostic value of nighttime biosignals for hypertension prediction. This makes AIM a more reliable choice for real-world wearable data applications.
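The core masking idea lends itself to a short sketch: union the missingness already present in the data (inherited) with a random training-time mask (artificial), and only score reconstruction on points that were actually observed. Shapes, the masking ratio, and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(size=(1440,))            # one day of minute-level data
signal[300:420] = np.nan                     # device off-body: a real gap

inherited_mask = np.isnan(signal)                  # missingness already in the data
artificial_mask = rng.random(signal.shape) < 0.3   # SSL-style random masking
train_mask = inherited_mask | artificial_mask      # union seen by the model

# Only artificially masked *observed* points can serve as reconstruction
# targets; inherited gaps have no ground truth to reconstruct.
targets = artificial_mask & ~inherited_mask
print(train_mask.sum(), targets.sum())
```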
Submitted 5 June, 2025;
originally announced June 2025.
-
VRS-UIE: Value-Driven Reordering Scanning for Underwater Image Enhancement
Authors:
Kui Jiang,
Yan Luo,
Junjun Jiang,
Ke Gu,
Nan Ma,
Xianming Liu
Abstract:
State Space Models (SSMs) have emerged as a promising backbone for vision tasks due to their linear complexity and global receptive field. However, in the context of Underwater Image Enhancement (UIE), the standard sequential scanning mechanism is fundamentally challenged by the unique statistical distribution characteristics of underwater scenes. The predominance of large-portion, homogeneous but useless oceanic backgrounds can dilute the feature representation responses of sparse yet valuable targets, thereby impeding effective state propagation and compromising the model's ability to preserve both local semantics and global structure. To address this limitation, we propose a novel Value-Driven Reordering Scanning framework for UIE, termed VRS-UIE. Its core innovation is a Multi-Granularity Value Guidance Learning (MVGL) module that generates a pixel-aligned value map to dynamically reorder the SSM's scanning sequence. This prioritizes informative regions to facilitate the long-range state propagation of salient features. Building upon the MVGL, we design a Mamba-Conv Mixer (MCM) block that synergistically integrates priority-driven global sequencing with dynamically adjusted local convolutions, thereby effectively modeling both large-portion oceanic backgrounds and high-value semantic targets. A Cross-Feature Bridge (CFB) further refines multi-level feature fusion. Extensive experiments demonstrate that our VRS-UIE framework sets a new state-of-the-art, delivering superior enhancement performance (surpassing WMamba by 0.89 dB on average) by effectively suppressing water bias and preserving structural and color fidelity. Furthermore, by incorporating efficient convolutional operators and resolution rescaling, we construct a light-weight yet effective scheme, VRS-UIE-S, suitable for real-time UIE applications.
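A toy sketch of the reordering mechanism may help: score each token with a value map, scan high-value tokens first, and invert the permutation afterwards. The scoring function and the cumulative-sum "scan" are stand-ins for the MVGL module and the SSM, respectively, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 8))            # 16 flattened patches, dim 8
value = np.abs(tokens).mean(axis=1)          # stand-in for the value map

order = np.argsort(-value)                   # informative regions first
scanned = np.cumsum(tokens[order], axis=0)   # stand-in for a selective scan

inverse = np.argsort(order)                  # restore the spatial order
output = scanned[inverse]
print(output.shape)
```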
Submitted 15 October, 2025; v1 submitted 2 May, 2025;
originally announced May 2025.
-
Point Tracking in Surgery--The 2024 Surgical Tattoos in Infrared (STIR) Challenge
Authors:
Adam Schmidt,
Mert Asim Karaoglu,
Soham Sinha,
Mingang Jang,
Ho-Gun Ha,
Kyungmin Jung,
Kyeongmo Gu,
Ihsan Ullah,
Hyunki Lee,
Jonáš Šerých,
Michal Neoral,
Jiří Matas,
Rulin Zhou,
Wenlong He,
An Wang,
Hongliang Ren,
Bruno Silva,
Sandro Queirós,
Estêvão Lima,
João L. Vilaça,
Shunsuke Kikuchi,
Atsushi Kouno,
Hiroki Matsuzaki,
Tongtong Li,
Yulu Chen
, et al. (15 additional authors not shown)
Abstract:
Understanding tissue motion in surgery is crucial to enable applications in downstream tasks such as segmentation, 3D reconstruction, virtual tissue landmarking, autonomous probe-based scanning, and subtask autonomy. Labeled data are essential to enabling algorithms in these downstream tasks since they allow us to quantify and train algorithms. This paper introduces a point tracking challenge to address this, wherein participants can submit their algorithms for quantification. The submitted algorithms are evaluated using a dataset named Surgical Tattoos in Infrared (STIR), with the challenge aptly named the STIR Challenge 2024. The STIR Challenge 2024 comprises two quantitative components: accuracy and efficiency. The accuracy component tests the accuracy of algorithms on in vivo and ex vivo sequences. The efficiency component tests the latency of algorithm inference. The challenge was conducted as a part of MICCAI EndoVis 2024. In this challenge, we had 8 teams in total, with 4 submitting before and 4 after challenge day. This paper details the STIR Challenge 2024, which serves to move the field towards more accurate and efficient algorithms for spatial understanding in surgery. In this paper we summarize the design, submissions, and results from the challenge. The challenge dataset is available here: https://zenodo.org/records/14803158 , and the code for baseline models and metric calculation is available here: https://github.com/athaddius/STIRMetrics
Submitted 31 March, 2025;
originally announced March 2025.
-
Online Test-time Adaptation for 3D Human Pose Estimation: A Practical Perspective with Estimated 2D Poses
Authors:
Qiuxia Lin,
Kerui Gu,
Linlin Yang,
Angela Yao
Abstract:
Online test-time adaptation for 3D human pose estimation adapts models to video streams that differ from the training data. Prior settings assume ground-truth 2D poses for adaptation, but only estimated 2D poses are available in practice. This paper addresses adapting models to streaming videos with estimated 2D poses. Comparing the two adaptation settings reveals the challenge of limiting estimation errors while preserving accurate pose information. To this end, we propose adaptive aggregation, a two-stage optimization, and local augmentation for handling varying levels of estimated pose error. First, we perform adaptive aggregation across videos to initialize the model state with labeled representative samples. Within each video, we use a two-stage optimization to benefit from 2D fitting while minimizing the impact of erroneous updates. Second, we employ local augmentation, using adjacent confident samples to update the model before adapting to the current non-confident sample. Our method surpasses the state of the art by a large margin, advancing adaptation towards the more practical setting of using estimated 2D poses.
Submitted 14 March, 2025;
originally announced March 2025.
-
Semantics-aware Test-time Adaptation for 3D Human Pose Estimation
Authors:
Qiuxia Lin,
Rongyu Chen,
Kerui Gu,
Angela Yao
Abstract:
This work highlights a semantics misalignment in 3D human pose estimation. For the task of test-time adaptation, the misalignment manifests as overly smoothed and unguided predictions. The smoothing settles predictions towards some average pose. Furthermore, when there are occlusions or truncations, the adaptation becomes fully unguided. To this end, we pioneer the integration of a semantics-aware motion prior for the test-time adaptation of 3D pose estimation. We leverage video understanding and a well-structured motion-text space to adapt the model's motion predictions to adhere to video semantics during test time. Additionally, we incorporate missing-2D-pose completion based on motion-text similarity. The pose completion strengthens the motion prior's guidance for occlusions and truncations. Our method significantly improves state-of-the-art 3D human pose estimation TTA techniques, with a more than 12% decrease in PA-MPJPE on 3DPW and 3DHP.
Submitted 28 May, 2025; v1 submitted 15 February, 2025;
originally announced February 2025.
-
Deep Learning Models for Colloidal Nanocrystal Synthesis
Authors:
Kai Gu,
Yingping Liang,
Jiaming Su,
Peihan Sun,
Jia Peng,
Naihua Miao,
Zhimei Sun,
Ying Fu,
Haizheng Zhong,
Jun Zhang
Abstract:
Colloidal synthesis of nanocrystals usually includes complex chemical reactions and multi-step crystallization processes. Despite the great success in the past 30 years, it remains challenging to clarify the correlations between synthetic parameters of chemical reaction and physical properties of nanocrystals. Here, we developed a deep learning-based nanocrystal synthesis model that correlates synthetic parameters with the final size and shape of target nanocrystals, using a dataset of 3500 recipes covering 348 distinct nanocrystal compositions. The size and shape labels were obtained from transmission electron microscope images using a segmentation model trained with a semi-supervised algorithm on a dataset comprising 1.2 million nanocrystals. By applying the reaction intermediate-based data augmentation method and elaborated descriptors, the synthesis model was able to predict nanocrystal's size with a mean absolute error of 1.39 nm, while reaching an 89% average accuracy for shape classification. The synthesis model shows knowledge-transfer capability across different nanocrystals when given new recipes as inputs. Building on this, the influence of chemicals on the final size of nanocrystals was further evaluated, revealing an importance ordering of nanocrystal composition, precursor or ligand, and solvent. Overall, the deep learning-based nanocrystal synthesis model offers a powerful tool to expedite the development of high-quality nanocrystals.
Submitted 14 December, 2024;
originally announced December 2024.
-
CONCERTO: Complex Query Execution Mechanism-Aware Learned Cost Estimation
Authors:
Kaixin Zhang,
Hongzhi Wang,
Kunkai Gu,
Ziqi Li,
Chunyu Zhao,
Yingze Li,
Yu Yan
Abstract:
With the growing demand for massive data analysis, many DBMSs have adopted complex underlying query execution mechanisms, including vectorized operators, parallel execution, and dynamic pipeline modifications. However, there remains a lack of targeted Query Performance Prediction (QPP) methods for these complex execution mechanisms and their interactions, as most existing approaches focus on traditional tree-shaped query plans and static serial executors. To address this challenge, this paper proposes CONCERTO, a Complex query executiON meChanism-awarE leaRned cosT estimatiOn method. CONCERTO first establishes independent resource cost models for each physical operator. It then constructs a Directed Acyclic Graph (DAG) consisting of a dataflow tree backbone and resource competition relationships among concurrent operators. After calibrating the cost impact of parallel operator execution using Graph Attention Networks (GATs) with additional attention mechanisms, CONCERTO extracts and aggregates cost vector trees through Temporal Convolutional Networks (TCNs), ultimately achieving effective query performance prediction. Experimental results demonstrate that CONCERTO achieves higher prediction accuracy than existing methods.
Submitted 28 March, 2025; v1 submitted 1 December, 2024;
originally announced December 2024.
-
Cohort profile: the Northwest China Real-world and Population-based Cohort
Authors:
Qi Huang,
Yanjun Li,
Bo Yin,
Yaoguo Wang,
Yujuan Yuan,
Yanying Guo,
Kuiying Gu,
Yining Yang,
Qian Di
Abstract:
The Northwest China Real-World and Population-based Cohort is an ongoing prospective cohort of more than 25 million people, covering almost all residents across approximately 1.66 million square kilometers in northwest China. The cohort integrates data from various sources, including health profiles, examination records, electronic health records, mortality records, statistical yearbooks, and environmental datasets, covering comprehensive health-related factors such as demographics, lifestyle factors, family medical history, living conditions, enrollment in national public health services, physical examinations, blood assay tests, diagnostic assessments, disease outcomes, and cause-specific mortality. This real-world dataset can be used to evaluate clinical treatment effectiveness and prognosis, assess the impact of health policies, and investigate the health effects of multiple risk factors. From January 2019 to December 2023, the cohort has included 13,634,481 participants, accumulating 47,050,707 person-years of follow-up, with 13,598,407 medical diagnosis records and 881,114 recorded deaths. Cohort data are available upon request. De-identified and anonymized data are stored on local servers and accessed through a data-sharing platform, enabling users to utilize the data without direct access to the raw information. A description of the proposed research can be sent to Yining Yang & Qian Di.
Submitted 13 November, 2024;
originally announced November 2024.
-
A Cross-Font Image Retrieval Network for Recognizing Undeciphered Oracle Bone Inscriptions
Authors:
Zhicong Wu,
Qifeng Su,
Ke Gu,
Xiaodong Shi
Abstract:
Oracle Bone Inscription (OBI) is the earliest mature writing system in China, representing a crucial stage in the development of hieroglyphs. Nevertheless, the substantial quantity of undeciphered OBI characters remains a significant challenge for scholars, and conventional methods of ancient script research are both time-consuming and labor-intensive. In this paper, we propose a cross-font image retrieval network (CFIRN) to decipher OBI characters by establishing associations between OBI characters and other script forms, simulating the interpretive behavior of paleography scholars. Concretely, our network employs a siamese framework to extract deep features from character images of various fonts, fully exploring structural clues at different resolutions with a multiscale feature integration (MFI) module and a multiscale refinement classifier (MRC). Extensive experiments on three challenging cross-font image retrieval datasets demonstrate that, given undeciphered OBI characters, our CFIRN can effectively achieve accurate matches with characters from other gallery fonts, thereby facilitating decipherment.
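As a rough sketch of the retrieval setup, the snippet below embeds a query glyph and a gallery of candidate characters with one shared encoder and ranks the gallery by cosine similarity. The encoder (a fixed random projection) and the random data are placeholders, not the CFIRN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(32 * 32, 64))            # fixed "shared encoder" weights

def encode(img: np.ndarray) -> np.ndarray:
    # Stand-in for the shared siamese encoder: project and L2-normalize.
    v = img.reshape(-1) @ W
    return v / np.linalg.norm(v)

query = rng.random((32, 32))                  # undeciphered OBI glyph (toy)
gallery = rng.random((100, 32, 32))           # candidate glyphs in another font

q = encode(query)
sims = np.array([encode(g) @ q for g in gallery])  # cosine similarities
top5 = np.argsort(-sims)[:5]                  # best cross-font matches
print(top5)
```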
Submitted 25 December, 2024; v1 submitted 10 September, 2024;
originally announced September 2024.
-
BLADE: Benchmarking Language Model Agents for Data-Driven Science
Authors:
Ken Gu,
Ruoxi Shang,
Ruien Jiang,
Keying Kuang,
Richard-John Lin,
Donghe Lyu,
Yue Mao,
Youran Pan,
Teng Wu,
Jiaqian Yu,
Yikun Zhang,
Tianmai M. Zhang,
Lanyi Zhu,
Mike A. Merrill,
Jeffrey Heer,
Tim Althoff
Abstract:
Data-driven scientific discovery requires the iterative integration of scientific domain knowledge, statistical expertise, and an understanding of data semantics to make nuanced analytical decisions, e.g., about which variables, transformations, and statistical models to consider. LM-based agents equipped with planning, memory, and code execution capabilities have the potential to support data-driven science. However, evaluating agents on such open-ended tasks is challenging due to multiple valid approaches, partially correct steps, and different ways to express the same decisions. To address these challenges, we present BLADE, a benchmark to automatically evaluate agents' multifaceted approaches to open-ended research questions. BLADE consists of 12 datasets and research questions drawn from existing scientific literature, with ground truth collected from independent analyses by expert data scientists and researchers. To automatically evaluate agent responses, we developed corresponding computational methods to match different representations of analyses to this ground truth. Though language models possess considerable world knowledge, our evaluation shows that they are often limited to basic analyses. However, agents capable of interacting with the underlying data demonstrate improved, but still non-optimal, diversity in their analytical decision making. Our work enables the evaluation of agents for data-driven science and provides researchers deeper insights into agents' analysis approaches.
Submitted 10 November, 2025; v1 submitted 18 August, 2024;
originally announced August 2024.
-
CharED: Character-wise Ensemble Decoding for Large Language Models
Authors:
Kevin Gu,
Eva Tuecke,
Dmitriy Katz,
Raya Horesh,
David Alvarez-Melis,
Mikhail Yurochkin
Abstract:
Large language models (LLMs) have shown remarkable potential for problem solving, with open source models achieving increasingly impressive performance on benchmarks measuring areas from logical reasoning to mathematical ability. Ensembling models can further improve capabilities across a variety of domains. However, conventional methods of combining models at inference time such as shallow fusion necessitate a shared vocabulary and tokenization, and alternatives like fine-tuning for domain-specific performance are both time consuming and computationally expensive. We therefore present an inference-time ensembling algorithm aimed at "averaging" outputs from multiple LLMs and illustrate its improved performance across multiple domains compared to its constituent models alone. Character-wise ensemble decoding, CharED, finds the marginal distribution of each character for an individual model and performs a weighted average to generate an output, character by character. In coding, math, and toxicity benchmarks, we find our proposed method combines the complementary strengths of multiple LLMs, regardless of vocabulary, tokenization, or model size.
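The decoding rule is simple enough to sketch directly: at each step, mix the per-character marginals from two models and emit the highest-scoring character. The toy dictionaries below stand in for distributions that CharED would derive from each LLM's (differently tokenized) output distribution.

```python
def chared_step(dist_a: dict[str, float], dist_b: dict[str, float],
                w_a: float = 0.5) -> str:
    # Weighted average of per-character marginals over the union of supports.
    chars = set(dist_a) | set(dist_b)
    mixed = {c: w_a * dist_a.get(c, 0.0) + (1 - w_a) * dist_b.get(c, 0.0)
             for c in chars}
    return max(mixed, key=mixed.get)

# Models disagree on the next character; the weighted mixture decides.
print(chared_step({"a": 0.6, "b": 0.4}, {"b": 0.7, "c": 0.3}))  # -> "b"
```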
Submitted 25 June, 2024;
originally announced July 2024.
-
Humans as Checkerboards: Calibrating Camera Motion Scale for World-Coordinate Human Mesh Recovery
Authors:
Fengyuan Yang,
Kerui Gu,
Ha Linh Nguyen,
Tze Ho Elden Tse,
Angela Yao
Abstract:
Accurate camera motion estimation is essential for recovering global human motion in world coordinates from RGB video inputs. SLAM is widely used for estimating camera trajectory and point cloud, but monocular SLAM does so only up to an unknown scale factor. Previous works estimate the scale factor through optimization, but this is unreliable and time-consuming. This paper presents an optimization-free scale calibration framework, Human as Checkerboard (HAC). HAC innovatively leverages the human body predicted by a human mesh recovery model as a calibration reference. Specifically, it uses the absolute depth of human-scene contact joints as references to calibrate the corresponding relative scene depth from SLAM. HAC benefits from geometric priors encoded in human mesh recovery models to estimate the SLAM scale and achieves precise global human motion estimation. Simple yet powerful, our method sets a new state-of-the-art performance for global human mesh estimation tasks, reducing motion errors by 50% over prior local-to-global methods while using 100$\times$ less inference time than optimization-based methods. Project page: https://martayang.github.io/HAC.
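As one reading of the calibration step, the scale that aligns SLAM's relative depths with the absolute depths of human-scene contact joints has a closed-form least-squares solution. The sketch below assumes per-joint depth pairs are already extracted; it is an illustration, not the authors' code.

```python
# Closed-form scale estimate: minimize sum_i (s * d_slam_i - d_human_i)^2.
import numpy as np

def calibrate_slam_scale(d_slam: np.ndarray, d_human: np.ndarray) -> float:
    """Least-squares scale aligning SLAM depths to absolute human depths."""
    return float(d_slam @ d_human / (d_slam @ d_slam))

# Toy check: SLAM depths are ~2.5x too small.
d_human = np.array([2.0, 2.4, 3.1])                # metric contact-joint depths
d_slam = d_human / 2.5 + np.array([0.01, -0.01, 0.0])
print(calibrate_slam_scale(d_slam, d_human))       # approximately 2.5
```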
Submitted 12 December, 2024; v1 submitted 29 June, 2024;
originally announced July 2024.
-
Fine-tuning of Geospatial Foundation Models for Aboveground Biomass Estimation
Authors:
Michal Muszynski,
Levente Klein,
Ademir Ferreira da Silva,
Anjani Prasad Atluri,
Carlos Gomes,
Daniela Szwarcman,
Gurkanwar Singh,
Kewen Gu,
Maciel Zortea,
Naomi Simumba,
Paolo Fraccaro,
Shraddha Singh,
Steve Meliksetian,
Campbell Watson,
Daiki Kimura,
Harini Srinivasan
Abstract:
Global vegetation structure mapping is critical for understanding the global carbon cycle and maximizing the efficacy of nature-based carbon sequestration initiatives. Moreover, vegetation structure mapping can help reduce the impacts of climate change by, for example, guiding actions to improve water security, increase biodiversity and reduce flood risk. Global satellite measurements provide an important set of observations for monitoring and managing deforestation and degradation of existing forests, natural forest regeneration, reforestation, biodiversity restoration, and the implementation of sustainable agricultural practices. In this paper, we explore the effectiveness of fine-tuning of a geospatial foundation model to estimate above-ground biomass (AGB) using space-borne data collected across different eco-regions in Brazil. The fine-tuned model architecture consisted of a Swin-B transformer as the encoder (i.e., backbone) and a single convolutional layer for the decoder head. All results were compared to a U-Net which was trained as the baseline model. Experimental results of this sparse-label prediction task demonstrate that the fine-tuned geospatial foundation model with a frozen encoder has comparable performance to a U-Net trained from scratch. This is despite the fine-tuned model having 13 times fewer parameters requiring optimization, which saves both time and compute resources. Further, we explore the transfer-learning capabilities of the geospatial foundation models by fine-tuning on satellite imagery with sparse labels from different eco-regions in Brazil.
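A rough sketch of the described setup follows: a frozen transformer encoder with a single convolutional layer as the decoder head. The use of timm's Swin-B here is an assumption standing in for the geospatial foundation model backbone; treat this as an illustration rather than the paper's implementation.

```python
# A minimal sketch (not the authors' code): frozen Swin-B encoder plus a
# single 1x1 convolution as a dense AGB regression head.
import torch
import torch.nn as nn
import timm

class AGBRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # features_only yields spatial feature maps instead of pooled logits
        self.encoder = timm.create_model(
            "swin_base_patch4_window7_224", pretrained=True, features_only=True)
        for p in self.encoder.parameters():
            p.requires_grad = False            # frozen encoder
        c = self.encoder.feature_info.channels()[-1]
        self.head = nn.Conv2d(c, 1, kernel_size=1)  # single-conv decoder

    def forward(self, x):
        f = self.encoder(x)[-1]                # deepest feature map
        if f.shape[1] != self.head.in_channels:
            f = f.permute(0, 3, 1, 2)          # some timm Swins return NHWC
        agb = self.head(f)
        # upsample back to input resolution for per-pixel biomass
        return nn.functional.interpolate(agb, size=x.shape[-2:], mode="bilinear")
```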
Submitted 28 June, 2024;
originally announced June 2024.
-
KITRO: Refining Human Mesh by 2D Clues and Kinematic-tree Rotation
Authors:
Fengyuan Yang,
Kerui Gu,
Angela Yao
Abstract:
2D keypoints are commonly used as an additional cue to refine estimated 3D human meshes. Current methods optimize the pose and shape parameters with a reprojection loss on the provided 2D keypoints. Such an approach, while simple and intuitive, has limited effectiveness because the optimal solution is hard to find in ambiguous parameter space and may sacrifice depth. Additionally, divergent gradients from distal joints complicate and skew the refinement of proximal joints in the kinematic chain. To address these, we introduce Kinematic-Tree Rotation (KITRO), a novel mesh refinement strategy that explicitly models depth and human kinematic-tree structure. KITRO treats refinement from a bone-wise perspective. Unlike previous methods which perform gradient-based optimizations, our method calculates bone directions in closed form. By accounting for the 2D pose, bone length, and parent joint's depth, the calculation results in two possible directions for each child joint. We then use a decision tree to trace binary choices for all bones along the human skeleton's kinematic-tree to select the most probable hypothesis. Our experiments across various datasets and baseline models demonstrate that KITRO significantly improves 3D joint estimation accuracy and achieves an ideal 2D fit simultaneously. Our code is available at: https://github.com/MartaYang/KITRO.
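The closed-form, two-hypothesis step can be made concrete. Assuming the parent joint is known in camera coordinates and the child's 2D observation (u, v) lies on the normalized image plane, the bone-length constraint is a quadratic in the child's depth; its (up to) two positive roots are the two candidate directions among which the decision tree selects. A minimal sketch, not the authors' code:

```python
# The child must lie at C = z * [u, v, 1]; ||C - P|| = L gives
# (u^2 + v^2 + 1) z^2 - 2 (u Px + v Py + Pz) z + ||P||^2 - L^2 = 0.
import numpy as np

def child_depth_hypotheses(P: np.ndarray, u: float, v: float, L: float):
    a = u * u + v * v + 1.0
    b = -2.0 * (u * P[0] + v * P[1] + P[2])
    c = float(P @ P) - L * L
    disc = b * b - 4 * a * c
    if disc < 0:                 # bone cannot reach the 2D ray
        return []
    roots = [(-b + s * np.sqrt(disc)) / (2 * a) for s in (+1.0, -1.0)]
    return [z * np.array([u, v, 1.0]) for z in roots if z > 0]

# Parent 3 m away, child observed slightly off-axis, 0.5 m bone:
print(child_depth_hypotheses(np.array([0.0, 0.0, 3.0]), 0.1, 0.0, 0.5))
```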
Submitted 30 May, 2024;
originally announced May 2024.
-
Distribution-informed and wavelength-flexible data-driven photoacoustic oximetry
Authors:
Janek Gröhl,
Kylie Yeung,
Kevin Gu,
Thomas R. Else,
Monika Golinska,
Ellie V. Bunce,
Lina Hacker,
Sarah E. Bohndiek
Abstract:
Significance: Photoacoustic imaging (PAI) promises to measure spatially-resolved blood oxygen saturation, but suffers from a lack of accurate and robust spectral unmixing methods to deliver on this promise. Accurate blood oxygenation estimation could have important clinical applications, from cancer detection to quantifying inflammation.
Aim: This study addresses the inflexibility of existing data-driven methods for estimating blood oxygenation in PAI by introducing a recurrent neural network architecture.
Approach: We created 25 simulated training dataset variations to assess neural network performance. We used a long short-term memory network to implement a wavelength-flexible network architecture and proposed the Jensen-Shannon divergence to predict the most suitable training dataset.
Results: The network architecture can handle arbitrary input wavelengths and outperforms linear unmixing and the previously proposed learned spectral decolouring method. Small changes in the training data significantly affect the accuracy of our method, but we find that the Jensen-Shannon divergence correlates with the estimation error and is thus suitable for predicting the most appropriate training datasets for any given application.
Conclusions: A flexible data-driven network architecture combined with the Jensen-Shannon divergence to predict the best training dataset provides a promising direction that might enable robust data-driven photoacoustic oximetry for clinical use cases.
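A minimal sketch of how the Jensen-Shannon divergence could rank candidate training sets follows; the histogram binning of spectral values is an assumption, and scipy's jensenshannon returns the JS distance (the square root of the divergence).

```python
# Rank candidate simulated training sets by JS divergence to the target
# (test-domain) data; pick the candidate with the smallest divergence.
import numpy as np
from scipy.spatial.distance import jensenshannon

def js_divergence(samples_a: np.ndarray, samples_b: np.ndarray, bins=64):
    lo = min(samples_a.min(), samples_b.min())
    hi = max(samples_a.max(), samples_b.max())
    p, _ = np.histogram(samples_a, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(samples_b, bins=bins, range=(lo, hi), density=True)
    return jensenshannon(p, q) ** 2   # squared distance = divergence

def pick_training_set(candidates: dict, target: np.ndarray) -> str:
    return min(candidates, key=lambda k: js_divergence(candidates[k], target))
```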
Submitted 21 March, 2024;
originally announced March 2024.
-
Second-Order Information Matters: Revisiting Machine Unlearning for Large Language Models
Authors:
Kang Gu,
Md Rafi Ur Rashid,
Najrin Sultana,
Shagufta Mehnaz
Abstract:
With the rapid development of Large Language Models (LLMs), we have witnessed intense competition among the major LLM products like ChatGPT, LLaMa, and Gemini. However, various issues (e.g., privacy leakage and copyright violation) of the training corpus still remain underexplored. For example, The New York Times sued OpenAI and Microsoft for infringing on its copyrights by using millions of its articles for training. From the perspective of LLM practitioners, handling such unintended privacy violations can be challenging. Previous work addressed the ``unlearning'' problem of LLMs using gradient information, but these methods mostly introduced significant overheads, such as data preprocessing, or lacked robustness. In this paper, contrasting with the methods based on first-order information, we revisit the unlearning problem via the perspective of second-order information (Hessian). Our unlearning algorithms, which are inspired by the classic Newton update, are not only data-agnostic/model-agnostic but also proven to be robust in terms of utility preservation and privacy guarantees. Through a comprehensive evaluation with four NLP datasets as well as a case study on real-world datasets, our methods consistently show superiority over the first-order methods.
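To make the second-order idea concrete, here is a toy Newton-style unlearning step under stated assumptions (not the paper's algorithm): ascend the forget-set loss along an inverse-Hessian-preconditioned direction. Explicit Hessians are only feasible for tiny parameter vectors; real LLMs would need Hessian-vector products or diagonal approximations.

```python
# One Hessian-preconditioned step, theta <- theta + H^{-1} g_forget,
# where H is the loss Hessian on the retained data.
import torch

def newton_unlearn_step(theta, loss_retain_fn, loss_forget_fn, damping=1e-3):
    g = torch.autograd.grad(loss_forget_fn(theta), theta)[0]
    H = torch.autograd.functional.hessian(loss_retain_fn, theta)
    H = H + damping * torch.eye(theta.numel())    # ensure invertibility
    return (theta + torch.linalg.solve(H, g)).detach()

theta = torch.tensor([1.0, -0.5], requires_grad=True)
loss_retain = lambda t: ((t - torch.tensor([0.9, -0.4])) ** 2).sum()
loss_forget = lambda t: ((t - torch.tensor([2.0, 2.0])) ** 2).sum()
print(newton_unlearn_step(theta, loss_retain, loss_forget))
# The update moves theta away from the forget-set target.
```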
Submitted 13 March, 2024;
originally announced March 2024.
-
RobKiNet: Robotic Kinematics Informed Neural Network for Optimal Robot Configuration Prediction
Authors:
Yanlong Peng,
Zhigang Wang,
Yisheng Zhang,
Pengxu Chang,
Ziwen He,
Kai Gu,
Hongshen Zhang,
Ming Chen
Abstract:
Task and Motion Planning (TAMP) is essential for robots to interact with the world and accomplish complex tasks. The TAMP problem involves a critical gap: exploring the robot's configuration parameters (such as chassis position and robotic arm joint angles) within continuous space to ensure that task-level global constraints are met while also enhancing the efficiency of subsequent motion planning. Existing methods still have significant room for improvement in terms of efficiency. Recognizing that robot kinematics is a key factor in motion planning, we propose a framework called the Robotic Kinematics Informed Neural Network (RobKiNet) as a bridge between task and motion layers. RobKiNet integrates kinematic knowledge into neural networks to train models capable of efficient configuration prediction. We designed a Chassis Motion Predictor (CMP) and a Full Motion Predictor (FMP) using RobKiNet, which employ two entirely different sets of forward and inverse kinematics constraints to achieve loosely coupled control and whole-body control, respectively. Experiments demonstrate that CMP and FMP can predict configuration parameters with 96.67% and 98% accuracy, respectively. This means that the corresponding motion planning can achieve speedups of 24.24x and 153x compared to random sampling. Furthermore, RobKiNet demonstrates remarkable data efficiency: CMP requires only 1/71 and FMP only 1/15052 of the training data needed by other deep learning methods for the same prediction accuracy. These results demonstrate the great potential of RobKiNet in robot applications.
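One plausible way to read "kinematics-informed" is a training loss that penalizes violations of differentiable forward-kinematics constraints alongside the supervised term. The two-link planar arm below is a toy stand-in, not the paper's robot model:

```python
# A sketch of folding forward-kinematics knowledge into the training loss:
# the configuration predictor is penalized both for missing the supervised
# configuration and for the end-effector pose its prediction implies.
import torch

def fk_2link(q, l1=1.0, l2=1.0):
    """Toy forward kinematics: joint angles -> end-effector (x, y)."""
    x = l1 * torch.cos(q[..., 0]) + l2 * torch.cos(q[..., 0] + q[..., 1])
    y = l1 * torch.sin(q[..., 0]) + l2 * torch.sin(q[..., 0] + q[..., 1])
    return torch.stack([x, y], dim=-1)

def kinematics_informed_loss(q_pred, q_target, pose_target, w_fk=1.0):
    data_term = ((q_pred - q_target) ** 2).mean()
    fk_term = ((fk_2link(q_pred) - pose_target) ** 2).mean()
    return data_term + w_fk * fk_term   # FK term keeps predictions feasible

q_pred = torch.tensor([[0.4, 0.8]], requires_grad=True)
q_tgt = torch.tensor([[0.5, 0.7]])
print(kinematics_informed_loss(q_pred, q_tgt, fk_2link(q_tgt)))
```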
Submitted 4 March, 2025; v1 submitted 25 February, 2024;
originally announced February 2024.
-
A New Image Quality Database for Multiple Industrial Processes
Authors:
Xuanchao Ma,
Yanlin Jiang,
Hongyan Liu,
Chengxu Zhou,
Ke Gu
Abstract:
Recent years have witnessed a broader range of applications of image processing technologies in multiple industrial processes, such as smoke detection, security monitoring, and workpiece inspection. Various distortion types and levels are inevitably introduced into an image during the processes of acquisition, compression, transmission, storage, and display, which can heavily degrade the image quality and thus strongly reduce the final display effect and clarity. To verify the reliability of existing image quality assessment methods, we establish a new industrial process image database (IPID), which contains 3000 distorted images generated by applying different levels of distortion types to each of the 50 source images. We conduct a subjective test on the aforementioned 3000 images to collect their subjective quality ratings in a well-suited laboratory environment. Finally, we perform comparison experiments on the IPID database to investigate the performance of some objective image quality assessment algorithms. The experimental results show that state-of-the-art image quality assessment methods have difficulty in predicting the quality of images that contain multiple distortion types.
Submitted 15 February, 2024; v1 submitted 25 January, 2024;
originally announced January 2024.
-
DocGraphLM: Documental Graph Language Model for Information Extraction
Authors:
Dongsheng Wang,
Zhiqiang Ma,
Armineh Nourbakhsh,
Kang Gu,
Sameena Shah
Abstract:
Advances in Visually Rich Document Understanding (VrDU) have enabled information extraction and question answering over documents with complex layouts. Two families of architectures have emerged -- transformer-based models inspired by LLMs, and Graph Neural Networks. In this paper, we introduce DocGraphLM, a novel framework that combines pre-trained language models with graph semantics. To achieve this, we propose 1) a joint encoder architecture to represent documents, and 2) a novel link prediction approach to reconstruct document graphs. DocGraphLM predicts both directions and distances between nodes using a convergent joint loss function that prioritizes neighborhood restoration and down-weights distant node detection. Our experiments on three SotA datasets show consistent improvement on IE and QA tasks with the adoption of graph features. Moreover, we report that adopting the graph features accelerates convergence in the learning process during training, despite being solely constructed through link prediction.
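As an illustration of the joint objective described (a reading with an assumed 1/(1+d) weighting, not the paper's exact loss): classify edge direction and regress node distance, down-weighting distant pairs so neighborhood restoration dominates.

```python
# Joint direction + distance loss over node pairs, with distant pairs
# down-weighted so nearby structure is restored first.
import torch
import torch.nn.functional as F

def docgraph_joint_loss(dir_logits, dir_labels, dist_pred, dist_true):
    w = 1.0 / (1.0 + dist_true)                     # down-weight far pairs
    l_dir = F.cross_entropy(dir_logits, dir_labels, reduction="none")
    l_dist = (dist_pred - dist_true) ** 2
    return (w * (l_dir + l_dist)).mean()

# 3 node pairs, 4 direction classes.
print(docgraph_joint_loss(torch.randn(3, 4), torch.tensor([0, 2, 1]),
                          torch.rand(3), torch.tensor([0.0, 1.0, 4.0])))
```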
Submitted 5 January, 2024;
originally announced January 2024.
-
Learning Unorthogonalized Matrices for Rotation Estimation
Authors:
Kerui Gu,
Zhihao Li,
Shiyong Liu,
Jianzhuang Liu,
Songcen Xu,
Youliang Yan,
Michael Bi Mi,
Kenji Kawaguchi,
Angela Yao
Abstract:
Estimating 3D rotations is a common procedure for 3D computer vision. The accuracy depends heavily on the rotation representation. One form of representation -- rotation matrices -- is popular due to its continuity, especially for pose estimation tasks. The learning process usually incorporates orthogonalization to ensure orthonormal matrices. Our work reveals, through gradient analysis, that common orthogonalization procedures based on the Gram-Schmidt process and singular value decomposition slow down training. To this end, we advocate removing orthogonalization from the learning process and learning unorthogonalized `Pseudo' Rotation Matrices (PRoM). An optimization analysis shows that PRoM converges faster and to a better solution. By replacing the orthogonalization-incorporated representation with our proposed PRoM in various rotation-related tasks, we achieve state-of-the-art results on large-scale benchmarks for human pose estimation.
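A minimal sketch of the two regimes contrasted above, as one reading of the approach: train with a plain regression loss on raw 9-element "pseudo" rotation matrices (no orthogonalization in the graph), and project onto SO(3) via SVD only at inference time.

```python
# Training: regress raw 3x3 matrices. Inference: nearest-rotation projection.
import torch

def prom_loss(pred_9d: torch.Tensor, R_gt: torch.Tensor) -> torch.Tensor:
    """Plain regression on raw 9-element matrices; no orthogonalization."""
    return torch.nn.functional.mse_loss(pred_9d.view(-1, 3, 3), R_gt)

def to_rotation(pred_9d: torch.Tensor) -> torch.Tensor:
    """Inference-time projection of pseudo rotation matrices onto SO(3)."""
    M = pred_9d.view(-1, 3, 3)
    U, _, Vh = torch.linalg.svd(M)
    det = torch.linalg.det(U @ Vh)
    S = torch.diag_embed(torch.stack(
        [torch.ones_like(det), torch.ones_like(det), det], dim=-1))
    return U @ S @ Vh                       # nearest rotation, det = +1

R = to_rotation(torch.randn(2, 9))
print(torch.allclose(R @ R.transpose(-1, -2),
                     torch.eye(3).expand(2, -1, -1), atol=1e-5))  # True
```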
Submitted 1 December, 2023;
originally announced December 2023.
-
On the Calibration of Human Pose Estimation
Authors:
Kerui Gu,
Rongyu Chen,
Angela Yao
Abstract:
Most 2D human pose estimation frameworks estimate keypoint confidence in an ad-hoc manner, using heuristics such as the maximum value of heatmaps. The confidence is part of the evaluation scheme, e.g., AP for the MSCOCO dataset, yet has been largely overlooked in the development of state-of-the-art methods. This paper takes the first steps in addressing miscalibration in pose estimation. From a calibration point of view, the confidence should be aligned with the pose accuracy. In practice, existing methods are poorly calibrated. We show, through theoretical analysis, why a miscalibration gap exists and how to narrow the gap. Simply predicting the instance size and adjusting the confidence function gives considerable AP improvements. Given the black-box nature of deep neural networks, however, it is not possible to fully close this gap with only closed-form adjustments. As such, we go one step further and learn network-specific adjustments by enforcing consistency between confidence and pose accuracy. Our proposed Calibrated ConfidenceNet (CCNet) is a lightweight post-hoc addition that improves AP by up to 1.4% on off-the-shelf pose estimation frameworks. Applied to the downstream task of mesh recovery, CCNet facilitates an additional 1.0mm decrease in 3D keypoint error.
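One way to picture the post-hoc adjustment is a tiny network that maps a framework's raw keypoint confidence, together with a predicted instance size, to a calibrated confidence trained to match pose accuracy (e.g., OKS). This is a schematic reading, not the released CCNet:

```python
# A schematic post-hoc calibration head: learn to map (raw confidence,
# log instance size) to a confidence aligned with measured pose accuracy.
import torch
import torch.nn as nn

class CalibrationHead(nn.Module):
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, raw_conf, inst_size):
        x = torch.stack([raw_conf, inst_size.log()], dim=-1)
        return self.net(x).squeeze(-1)      # calibrated confidence in [0, 1]

head = CalibrationHead()
conf = head(torch.rand(8), torch.rand(8) * 100 + 1)
target_acc = torch.rand(8)                  # e.g., per-instance OKS
loss = nn.functional.mse_loss(conf, target_acc)
```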
Submitted 28 November, 2023;
originally announced November 2023.
-
Gradient-Free Privacy Leakage in Federated Language Models through Selective Weight Tampering
Authors:
Md Rafi Ur Rashid,
Vishnu Asutosh Dasu,
Kang Gu,
Najrin Sultana,
Shagufta Mehnaz
Abstract:
Federated learning (FL) has become a key component in various language modeling applications such as machine translation, next-word prediction, and medical record analysis. These applications are trained on datasets from many FL participants that often include privacy-sensitive data, such as healthcare records, phone/credit card numbers, login credentials, etc. Although FL enables computation without requiring clients to share their raw data, existing works show that privacy leakage is still probable in federated language models. In this paper, we present two novel findings on the leakage of privacy-sensitive user data from federated large language models without requiring access to gradients. First, we make a key observation that model snapshots from the intermediate rounds in FL can cause greater privacy leakage than the final trained model. Second, we identify that a malicious FL participant can aggravate the leakage by tampering with the model's selective weights that are responsible for memorizing the sensitive training data of some other clients, even without any cooperation from the server. Our best-performing method increases the membership inference recall by 29% and achieves up to 71% private data reconstruction, clearly outperforming existing attacks that assume much stronger adversary capabilities. Lastly, we recommend a balanced suite of techniques for an FL client to defend against such privacy risks.
Submitted 9 December, 2025; v1 submitted 24 October, 2023;
originally announced October 2023.
-
How Do Analysts Understand and Verify AI-Assisted Data Analyses?
Authors:
Ken Gu,
Ruoxi Shang,
Tim Althoff,
Chenglong Wang,
Steven M. Drucker
Abstract:
Data analysis is challenging as it requires synthesizing domain knowledge, statistical expertise, and programming skills. Assistants powered by large language models (LLMs), such as ChatGPT, can assist analysts by translating natural language instructions into code. However, AI-assistant responses and analysis code can be misaligned with the analyst's intent or be seemingly correct but lead to incorrect conclusions. Therefore, validating AI assistance is crucial and challenging. Here, we explore how analysts understand and verify the correctness of AI-generated analyses. To observe analysts in diverse verification approaches, we develop a design probe equipped with natural language explanations, code, visualizations, and interactive data tables with common data operations. Through a qualitative user study (n=22) using this probe, we uncover common behaviors within verification workflows and how analysts' programming, analysis, and tool backgrounds reflect these behaviors. Additionally, we provide recommendations for analysts and highlight opportunities for designers to improve future AI-assistant experiences.
Submitted 4 March, 2024; v1 submitted 19 September, 2023;
originally announced September 2023.
-
How Do Data Analysts Respond to AI Assistance? A Wizard-of-Oz Study
Authors:
Ken Gu,
Madeleine Grunde-McLaughlin,
Andrew M. McNutt,
Jeffrey Heer,
Tim Althoff
Abstract:
Data analysis is challenging as analysts must navigate nuanced decisions that may yield divergent conclusions. AI assistants have the potential to support analysts in planning their analyses, enabling more robust decision making. Though AI-based assistants that target code execution (e.g., Github Copilot) have received significant attention, limited research addresses assistance for both analysis execution and planning. In this work, we characterize helpful planning suggestions and their impacts on analysts' workflows. We first review the analysis planning literature and crowd-sourced analysis studies to categorize suggestion content. We then conduct a Wizard-of-Oz study (n=13) to observe analysts' preferences and reactions to planning assistance in a realistic scenario. Our findings highlight subtleties in contextual factors that impact suggestion helpfulness, emphasizing design implications for supporting different abstractions of assistance, forms of initiative, increased engagement, and alignment of goals between analysts and assistants.
Submitted 4 March, 2024; v1 submitted 18 September, 2023;
originally announced September 2023.
-
S&Reg: End-to-End Learning-Based Model for Multi-Goal Path Planning Problem
Authors:
Yuan Huang,
Kairui Gu,
Hee-hyol Lee
Abstract:
In this paper, we propose a novel end-to-end approach for solving the multi-goal path planning problem in obstacle environments. Our proposed model, called S&Reg, integrates multi-task learning networks with a TSP solver and a path planner to quickly compute a closed and feasible path visiting all goals. Specifically, the model first predicts promising regions that potentially contain the optimal paths connecting two goals as a segmentation task. Simultaneously, estimations for pairwise distances between goals are conducted as a regression task by the neural networks, and the results construct a symmetric weight matrix for the TSP solver. Leveraging the TSP result, the path planner efficiently explores feasible paths guided by promising regions. We extensively evaluate the S&Reg model through simulations and compare it with other sampling-based algorithms. The results demonstrate that our proposed model achieves superior performance with respect to computation time and solution cost, making it an effective solution for multi-goal path planning in obstacle environments. The proposed approach has the potential to be extended to other sampling-based algorithms for multi-goal path planning.
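To show how the regression outputs feed the TSP stage, here is a sketch with a nearest-neighbor heuristic standing in for the paper's TSP solver: the predicted symmetric distance matrix fixes the goal visiting order that the downstream path planner then realizes.

```python
# Nearest-neighbor tour over a predicted pairwise-distance matrix.
import numpy as np

def nn_tour(dist: np.ndarray, start: int = 0) -> list:
    n, tour, seen = len(dist), [start], {start}
    while len(tour) < n:
        last = tour[-1]
        nxt = min((j for j in range(n) if j not in seen),
                  key=lambda j: dist[last, j])
        tour.append(nxt)
        seen.add(nxt)
    return tour + [start]               # closed path visiting all goals

D = np.array([[0, 2, 9, 4], [2, 0, 6, 3],
              [9, 6, 0, 5], [4, 3, 5, 0]], dtype=float)
print(nn_tour(D))                       # e.g., [0, 1, 3, 2, 0]
```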
Submitted 8 August, 2023;
originally announced August 2023.
-
Bias-Compensated Integral Regression for Human Pose Estimation
Authors:
Kerui Gu,
Linlin Yang,
Michael Bi Mi,
Angela Yao
Abstract:
In human and hand pose estimation, heatmaps are a crucial intermediate representation for a body or hand keypoint. Two popular methods to decode the heatmap into a final joint coordinate are via an argmax, as done in heatmap detection, or via softmax and expectation, as done in integral regression. Integral regression is learnable end-to-end, but has lower accuracy than detection. This paper uncovers an induced bias from integral regression that results from combining the softmax and the expectation operation. This bias often forces the network to learn degenerately localized heatmaps, obscuring the keypoint's true underlying distribution and leading to lower accuracy. Training-wise, by investigating the gradients of integral regression, we show that the implicit guidance of integral regression to update the heatmap makes it slower to converge than detection. To counter the above two limitations, we propose Bias Compensated Integral Regression (BCIR), an integral regression-based framework that compensates for the bias. BCIR also incorporates a Gaussian prior loss to speed up training and improve prediction accuracy. Experimental results on both human body and hand benchmarks show that BCIR is faster to train and more accurate than the original integral regression, making it competitive with state-of-the-art detection methods.
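The induced bias is easy to demonstrate. The sketch below contrasts the two decoders named above, hard argmax versus softmax-plus-expectation, on a 1D heatmap whose peak sits near the border; the soft estimate is pulled toward the center because softmax assigns nonzero mass everywhere. (The compensation scheme itself is not reproduced here.)

```python
# Hard argmax (detection) vs. soft-argmax (integral regression) in 1D.
import torch

def soft_argmax_1d(h: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    p = torch.softmax(beta * h, dim=-1)
    idx = torch.arange(h.shape[-1], dtype=h.dtype)
    return (p * idx).sum(-1)            # expectation over positions

x = torch.arange(64.0)
heat = torch.exp(-0.5 * ((x - 3.0) / 2.0) ** 2)   # keypoint near the border
print(x[heat.argmax()].item())          # 3.0 (hard argmax)
print(soft_argmax_1d(heat).item())      # pulled toward the center, far from 3
```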
Submitted 25 January, 2023;
originally announced January 2023.
-
Understanding and Supporting Debugging Workflows in Multiverse Analysis
Authors:
Ken Gu,
Eunice Jun,
Tim Althoff
Abstract:
Multiverse analysis, a paradigm for statistical analysis that considers all combinations of reasonable analysis choices in parallel, promises to improve transparency and reproducibility. Although recent tools help analysts specify multiverse analyses, they remain difficult to use in practice. In this work, we identify debugging as a key barrier due to the latency from running analyses to detecting bugs and the scale of metadata processing needed to diagnose a bug. To address these challenges, we prototype a command-line interface tool, Multiverse Debugger, which helps diagnose bugs in the multiverse and propagate fixes. In a qualitative lab study (n=13), we use Multiverse Debugger as a probe to develop a model of debugging workflows and identify specific challenges, including difficulty in understanding the multiverse's composition. We conclude with design implications for future multiverse analysis authoring systems.
Submitted 4 June, 2023; v1 submitted 7 October, 2022;
originally announced October 2022.
-
Utility-Oriented Underwater Image Quality Assessment Based on Transfer Learning
Authors:
Weiling Chen,
Rongfu Lin,
Honggang Liao,
Tiesong Zhao,
Ke Gu,
Patrick Le Callet
Abstract:
The widespread use of image applications has greatly promoted vision-based tasks, in which the Image Quality Assessment (IQA) technique has become an increasingly significant issue. For user enjoyment in multimedia systems, IQA exploits image fidelity and aesthetics to characterize user experience; for other tasks such as object recognition, however, there exists a low correlation between utilities and perceptions. In such cases, fidelity-based and aesthetics-based IQA methods cannot be directly applied. To address this issue, this paper proposes a utility-oriented IQA for object recognition. In particular, we begin our research with the scenario of underwater fish detection, a critical task that has not yet been satisfactorily addressed. Based on this task, we build an Underwater Image Utility Database (UIUD) and a learning-based Underwater Image Utility Measure (UIUM). Inspired by the top-down design of fidelity-based IQA, we exploit deep object recognition models and transfer their features to our UIUM. Experiments validate that the proposed transfer-learning-based UIUM achieves promising performance in the recognition task. We envision that our research provides insights to bridge the research areas of IQA and computer vision.
Submitted 7 May, 2022;
originally announced May 2022.
-
An Instance-Dependent Simulation Framework for Learning with Label Noise
Authors:
Keren Gu,
Xander Masotto,
Vandana Bachani,
Balaji Lakshminarayanan,
Jack Nikodem,
Dong Yin
Abstract:
We propose a simulation framework for generating instance-dependent noisy labels via a pseudo-labeling paradigm. We show that the distribution of the synthetic noisy labels generated with our framework is closer to human labels compared to independent and class-conditional random flipping. Equipped with controllable label noise, we study the negative impact of noisy labels across a few practical settings to understand when label noise is more problematic. We also benchmark several existing algorithms for learning with noisy labels and compare their behavior on our synthetic datasets and on the datasets with independent random label noise. Additionally, with the availability of annotator information from our simulation framework, we propose a new technique, Label Quality Model (LQM), that leverages annotator features to predict and correct against noisy labels. We show that by adding LQM as a label correction step before applying existing noisy label techniques, we can further improve the models' performance.
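A minimal sketch of the pseudo-labeling paradigm as described, under stated assumptions: fit a deliberately weak auxiliary model, then replace a fraction of labels with samples from its predictive distribution, yielding instance-dependent noise that concentrates on ambiguous examples rather than uniform flipping.

```python
# Instance-dependent label noise via a weak pseudo-labeler.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

def inject_instance_dependent_noise(X, y, noise_rate=0.2, seed=0):
    rng = np.random.default_rng(seed)
    weak = LogisticRegression(max_iter=50).fit(X, y)   # weak pseudo-labeler
    probs = weak.predict_proba(X)
    flip = rng.random(len(y)) < noise_rate
    noisy = y.copy()
    for i in np.flatnonzero(flip):
        noisy[i] = rng.choice(weak.classes_, p=probs[i])  # sample, not argmax
    return noisy

X, y = make_classification(n_samples=200, n_classes=3,
                           n_informative=4, random_state=0)
y_noisy = inject_instance_dependent_noise(X, y)
print((y_noisy != y).mean())   # roughly the requested noise rate or less
```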
Submitted 17 October, 2021; v1 submitted 23 July, 2021;
originally announced July 2021.
-
Feature Selection for Multivariate Time Series via Network Pruning
Authors:
Kang Gu,
Soroush Vosoughi,
Temiloluwa Prioleau
Abstract:
In recent years, there has been an ever increasing amount of multivariate time series (MTS) data in various domains, typically generated by a large family of sensors such as wearable devices. This has led to the development of novel learning methods on MTS data, with deep learning models dominating the most recent advancements. Prior literature has primarily focused on designing new network architectures for modeling temporal dependencies within MTS. However, a less studied challenge is associated with the high dimensionality of MTS data. In this paper, we propose a novel neural component, namely the Neural Feature Selector (NFS), as an end-to-end solution for feature selection in MTS data. Specifically, NFS is based on a decomposed convolution design and includes two modules: first, each feature stream (a stream corresponds to a univariate series of the MTS) is processed independently by a temporal CNN; then, an aggregating CNN combines the processed streams to produce input for other downstream networks. We evaluated the proposed NFS model on four real-world MTS datasets and found that it achieves comparable results to state-of-the-art methods while providing the benefit of feature selection. Our paper also highlights the robustness and effectiveness of feature selection with NFS compared to recent autoencoder-based methods.
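The decomposed convolution design maps naturally onto grouped convolutions. A minimal sketch, not the authors' code: a depthwise temporal Conv1d gives each stream its own filter, and a pointwise Conv1d aggregates the processed streams.

```python
# Per-stream temporal CNN (groups = n_features) + aggregating 1x1 CNN.
import torch
import torch.nn as nn

class NeuralFeatureSelector(nn.Module):
    def __init__(self, n_features: int, out_channels: int, k: int = 5):
        super().__init__()
        # groups=n_features -> one independent temporal filter per stream
        self.per_stream = nn.Conv1d(n_features, n_features, k,
                                    padding=k // 2, groups=n_features)
        self.aggregate = nn.Conv1d(n_features, out_channels, 1)

    def forward(self, x):               # x: (batch, n_features, time)
        return self.aggregate(torch.relu(self.per_stream(x)))

nfs = NeuralFeatureSelector(n_features=8, out_channels=16)
print(nfs(torch.randn(2, 8, 100)).shape)   # torch.Size([2, 16, 100])
```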
Submitted 21 October, 2021; v1 submitted 11 February, 2021;
originally announced February 2021.
-
Bi-Level Graph Neural Networks for Drug-Drug Interaction Prediction
Authors:
Yunsheng Bai,
Ken Gu,
Yizhou Sun,
Wei Wang
Abstract:
We introduce Bi-GNN for modeling biological link prediction tasks such as drug-drug interaction (DDI) and protein-protein interaction (PPI). Taking drug-drug interaction as an example, existing methods using machine learning either only utilize the link structure between drugs without using the graph representation of each drug molecule, or only leverage the individual drug compound structures without using graph structure for the higher-level DDI graph. The key idea of our method is to fundamentally view the data as a bi-level graph, where the highest level graph represents the interaction between biological entities (interaction graph), and each biological entity itself is further expanded to its intrinsic graph representation (representation graphs), where the graph is either flat like a drug compound or hierarchical like a protein with amino acid level graph, secondary structure, tertiary structure, etc. Our model not only allows the usage of information from both the high-level interaction graph and the low-level representation graphs, but also offers a baseline for future research opportunities to address the bi-level nature of the data.
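A compact illustration of the bi-level view, using plain tensor operations in place of a GNN library (a sketch, not the authors' model): pool each entity's representation graph into an embedding, then pass messages over the interaction graph whose nodes carry those embeddings.

```python
# Low level: pool each entity's intrinsic graph into a node embedding.
# High level: message passing over the interaction graph, then link scores.
import torch

def encode_representation_graph(node_feats, adj):
    """One round of mean-aggregation message passing + mean pooling."""
    deg = adj.sum(-1, keepdim=True).clamp(min=1)
    h = torch.relu(node_feats + adj @ node_feats / deg)
    return h.mean(0)                        # entity-level embedding

def interaction_scores(entity_graphs, interaction_adj):
    z = torch.stack([encode_representation_graph(f, a)
                     for f, a in entity_graphs])       # low level
    deg = interaction_adj.sum(-1, keepdim=True).clamp(min=1)
    z = torch.relu(z + interaction_adj @ z / deg)      # high level
    return z @ z.T                          # link scores, e.g., for DDI

g1 = (torch.randn(4, 8), torch.ones(4, 4))  # entity with 4 intrinsic nodes
g2 = (torch.randn(6, 8), torch.eye(6))      # entity with 6 intrinsic nodes
print(interaction_scores([g1, g2], torch.tensor([[0., 1.], [1., 0.]])).shape)
```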
Submitted 11 June, 2020;
originally announced June 2020.
-
Shop The Look: Building a Large Scale Visual Shopping System at Pinterest
Authors:
Raymond Shiau,
Hao-Yu Wu,
Eric Kim,
Yue Li Du,
Anqi Guo,
Zhiyuan Zhang,
Eileen Li,
Kunlong Gu,
Charles Rosenberg,
Andrew Zhai
Abstract:
As online content becomes ever more visual, the demand for searching by visual queries grows correspondingly stronger. Shop The Look is an online shopping discovery service at Pinterest, leveraging visual search to enable users to find and buy products within an image. In this work, we provide a holistic view of how we built Shop The Look, a shopping oriented visual search system, along with lessons learned from addressing shopping needs. We discuss topics including core technology across object detection and visual embeddings, serving infrastructure for realtime inference, and data labeling methodology for training/evaluation data collection and human evaluation. The user-facing impacts of our system design choices are measured through offline evaluations, human relevance judgements, and online A/B experiments. The collective improvements amount to cumulative relative gains of over 160% in end-to-end human relevance judgements and over 80% in engagement. Shop The Look is deployed in production at Pinterest.
Submitted 18 June, 2020;
originally announced June 2020.
-
Bootstrapping Complete The Look at Pinterest
Authors:
Eileen Li,
Eric Kim,
Andrew Zhai,
Josh Beal,
Kunlong Gu
Abstract:
Putting together an ideal outfit is a process that involves creativity and style intuition. This makes it a particularly difficult task to automate. Existing styling products generally involve human specialists and a highly curated set of fashion items. In this paper, we will describe how we bootstrapped the Complete The Look (CTL) system at Pinterest. This is a technology that aims to learn the subjective task of "style compatibility" in order to recommend complementary items that complete an outfit. In particular, we want to show recommendations from other categories that are compatible with an item of interest. For example, what are some heels that go well with this cocktail dress? We will introduce our outfit dataset of over 1 million outfits and 4 million objects, a subset of which we will make available to the research community, and describe the pipeline used to obtain and refresh this dataset. Furthermore, we will describe how we evaluate this subjective task and compare model performance across multiple training methods. Lastly, we will share our lessons going from experimentation to working prototype, and how to mitigate failure modes in the production environment. Our work represents one of the first examples of an industrial-scale solution for compatibility-based fashion recommendation.
Submitted 29 June, 2020; v1 submitted 18 June, 2020;
originally announced June 2020.