-
Revisiting Fairness-aware Interactive Recommendation: Item Lifecycle as a Control Knob
Authors:
Yun Lu,
Xiaoyu Shi,
Hong Xie,
Chongjun Xia,
Zhenhui Gong,
Mingsheng Shang
Abstract:
This paper revisits fairness-aware interactive recommendation (e.g., TikTok, KuaiShou) by introducing a novel control knob, i.e., the lifecycle of items. We make threefold contributions. First, we conduct a comprehensive empirical analysis and uncover that item lifecycles in short-video platforms follow a compressed three-phase pattern, i.e., rapid growth, transient stability, and sharp decay, which significantly deviates from the classical four-stage model (introduction, growth, maturity, decline). Second, we introduce LHRL, a lifecycle-aware hierarchical reinforcement learning framework that dynamically harmonizes fairness and accuracy by leveraging phase-specific exposure dynamics. LHRL consists of two key components: (1) PhaseFormer, a lightweight encoder combining STL decomposition and attention mechanisms for robust phase detection; (2) a two-level HRL agent, where the high-level policy imposes phase-aware fairness constraints, and the low-level policy optimizes immediate user engagement. This decoupled optimization allows for effective reconciliation between long-term equity and short-term utility. Third, experiments on multiple real-world interactive recommendation datasets demonstrate that LHRL significantly improves both fairness and user engagement. Furthermore, the integration of lifecycle-aware rewards into existing RL-based models consistently yields performance gains, highlighting the generalizability and practical value of our approach.
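A minimal sketch of the phase-detection idea, assuming a per-item daily exposure series: STL decomposition (via statsmodels) separates out the trend, and the recent trend slope is thresholded into the three phases. PhaseFormer's attention encoder and the paper's actual thresholds are omitted; the window and eps values below are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.seasonal import STL

def detect_phase(exposures, period=7, eps=0.05):
    """Classify the current lifecycle phase from the STL trend slope."""
    res = STL(np.asarray(exposures, dtype=float), period=period).fit()
    trend = np.asarray(res.trend)
    recent = trend[-period:]                      # most recent window
    slope = (recent[-1] - recent[0]) / (abs(recent.mean()) + 1e-8)
    if slope > eps:
        return "growth"
    if slope < -eps:
        return "decay"
    return "stability"

# Synthetic compressed lifecycle: rapid growth -> plateau -> sharp decay.
t = np.arange(60)
series = np.concatenate([np.linspace(0, 100, 20),
                         np.full(20, 100.0),
                         np.linspace(100, 5, 20)]) + 3 * np.sin(t)
print(detect_phase(series))   # -> "decay" at the end of this series
```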
Submitted 20 November, 2025;
originally announced November 2025.
-
FT-NCFM: An Influence-Aware Data Distillation Framework for Efficient VLA Models
Authors:
Kewei Chen,
Yayu Long,
Shuai Li,
Mingsheng Shang
Abstract:
The powerful generalization of Vision-Language-Action (VLA) models is bottlenecked by their heavy reliance on massive, redundant, and unevenly valued datasets, hindering their widespread application. Existing model-centric optimization paths, such as model compression (which often leads to performance degradation) or policy distillation (whose products are model-dependent and lack generality), fail to fundamentally address this data-level challenge. To this end, this paper introduces FT-NCFM, a fundamentally different, data-centric generative data distillation framework. Our framework employs a self-contained Fact-Tracing (FT) engine that combines causal attribution with programmatic contrastive verification to assess the intrinsic value of samples. Guided by these assessments, an adversarial NCFM process synthesizes a model-agnostic, information-dense, and reusable data asset. Experimental results on several mainstream VLA benchmarks show that models trained on our distilled coreset, just 5% of the original data, achieve 85-90% of the success rate of training on the full dataset, while reducing training time by over 80%. Our work demonstrates that intelligent data distillation is a highly promising new path for building efficient, high-performance VLA models.
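The FT engine is not described here in enough detail to reproduce. As a hedged sketch of the generic score-then-select pattern that data distillation builds on, one can rank samples by a crude influence proxy (per-sample gradient norm) and keep the top fraction; every name and the scoring rule below are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def gradient_norm_scores(model, dataset):
    """Score each (x, y) pair by the norm of its loss gradient w.r.t. the
    model parameters -- a crude influence proxy, not the FT engine.
    Assumes `dataset` yields (feature tensor, 0-dim label tensor) pairs."""
    scores = []
    for x, y in dataset:
        model.zero_grad()
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        sq = sum(p.grad.pow(2).sum() for p in model.parameters()
                 if p.grad is not None)
        scores.append(sq.sqrt().item())
    return scores

def select_coreset(model, dataset, fraction=0.05):
    """Keep the `fraction` of samples with the highest scores."""
    scores = gradient_norm_scores(model, dataset)
    k = max(1, int(fraction * len(dataset)))
    keep = sorted(range(len(dataset)), key=lambda i: -scores[i])[:k]
    return [dataset[i] for i in keep]
```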
Submitted 20 November, 2025;
originally announced November 2025.
-
PIPHEN: Physical Interaction Prediction with Hamiltonian Energy Networks
Authors:
Kewei Chen,
Yayu Long,
Mingsheng Shang
Abstract:
Multi-robot systems in complex physical collaborations face a "shared brain dilemma": transmitting high-dimensional multimedia data (e.g., video streams at ~30MB/s) creates severe bandwidth bottlenecks and decision-making latency. To address this, we propose PIPHEN, an innovative distributed physical cognition-control framework. Its core idea is to replace "raw data communication" with "semantic communication" by performing "semantic distillation" at the robot edge, reconstructing high-dimensional perceptual data into compact, structured physical representations. This idea is primarily realized through two key components: (1) a novel Physical Interaction Prediction Network (PIPN), derived from large model knowledge distillation, to generate this representation; and (2) a Hamiltonian Energy Network (HEN) controller, based on energy conservation, to precisely translate this representation into coordinated actions. Experiments show that, compared to baseline methods, PIPHEN can compress the information representation to less than 5% of the original data volume and reduce collaborative decision-making latency from 315ms to 76ms, while significantly improving task success rates. This work provides a fundamentally efficient paradigm for resolving the "shared brain dilemma" in resource-constrained multi-robot systems.
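A minimal sketch of the energy-conservation principle behind a Hamiltonian energy network, in the textbook Hamiltonian-neural-network style: learn a scalar energy H(q, p) and read off conservative dynamics from its gradients. The paper's HEN controller is more elaborate; this only illustrates the building block, and the dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class HamiltonianNet(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.energy = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))

    def forward(self, q, p):
        """Return (dq/dt, dp/dt) derived from the learned energy."""
        qp = torch.cat([q, p], dim=-1).requires_grad_(True)
        H = self.energy(qp).sum()
        grad = torch.autograd.grad(H, qp, create_graph=True)[0]
        dH_dq, dH_dp = grad.chunk(2, dim=-1)
        return dH_dp, -dH_dq            # Hamilton's equations

net = HamiltonianNet(dim=3)
q, p = torch.randn(8, 3), torch.randn(8, 3)
dq, dp = net(q, p)                      # conservative vector field for 8 states
```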
Submitted 20 November, 2025;
originally announced November 2025.
-
2D Gaussians Spatial Transport for Point-supervised Density Regression
Authors:
Miao Shang,
Xiaopeng Hong
Abstract:
This paper introduces Gaussian Spatial Transport (GST), a novel framework that leverages Gaussian splatting to facilitate transport from the probability measure in the image coordinate space to the annotation map. We propose a Gaussian splatting-based method to estimate pixel-annotation correspondence, which is then used to compute a transport plan derived from Bayesian probability. To integrate the resulting transport plan into standard network optimization in typical computer vision tasks, we derive a loss function that measures discrepancy after transport. Extensive experiments on representative computer vision tasks, including crowd counting and landmark detection, validate the effectiveness of our approach. Compared to conventional optimal transport schemes, GST eliminates iterative transport plan computation during training, significantly improving efficiency. Code is available at https://github.com/infinite0522/GST.
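A hedged sketch of the pixel-annotation correspondence idea, assuming isotropic 2D Gaussians splatted at the annotated points: normalizing the splat weights per pixel (Bayes' rule with a uniform prior) yields a soft transport plan with no iterative optimization. The bandwidth and the isotropy are illustrative choices, not the paper's.

```python
import numpy as np

def transport_plan(pixels, points, sigma=8.0):
    """pixels: (N, 2) coordinates; points: (M, 2) annotations.
    Returns an (N, M) matrix; each row is a posterior over annotations."""
    d2 = ((pixels[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    kernel = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian splat weights
    return kernel / (kernel.sum(axis=1, keepdims=True) + 1e-12)

H, W = 32, 32
ys, xs = np.mgrid[0:H, 0:W]
pix = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
ann = np.array([[8.0, 8.0], [24.0, 20.0]])             # two annotated points
plan = transport_plan(pix, ann)    # each pixel softly assigned to a point
```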
Submitted 24 November, 2025; v1 submitted 18 November, 2025;
originally announced November 2025.
-
MS-Mix: Unveiling the Power of Mixup for Multimodal Sentiment Analysis
Authors:
Hongyu Zhu,
Lin Chen,
Mounim A. El-Yacoubi,
Mingsheng Shang
Abstract:
Multimodal Sentiment Analysis (MSA) aims to identify and interpret human emotions by integrating information from heterogeneous data sources such as text, video, and audio. While deep learning models have advanced in network architecture design, they remain heavily limited by scarce multimodal annotated data. Although Mixup-based augmentation improves generalization in unimodal tasks, its direct application to MSA introduces critical challenges: random mixing often amplifies label ambiguity and semantic inconsistency due to the lack of emotion-aware mixing mechanisms. To overcome these issues, we propose MS-Mix, an adaptive, emotion-sensitive augmentation framework that automatically optimizes sample mixing in multimodal settings. The key components of MS-Mix include: (1) a Sentiment-Aware Sample Selection (SASS) strategy that effectively prevents semantic confusion caused by mixing samples with contradictory emotions; (2) a Sentiment Intensity Guided (SIG) module that uses multi-head self-attention to dynamically compute modality-specific mixing ratios based on the respective emotional intensities; and (3) a Sentiment Alignment Loss (SAL) that aligns the prediction distributions across modalities and incorporates a Kullback-Leibler-based loss as an additional regularization term to jointly train the emotion intensity predictor and the backbone network. Extensive experiments on three benchmark datasets with six state-of-the-art backbones confirm that MS-Mix consistently outperforms existing methods, establishing a new standard for robust multimodal sentiment augmentation. The source code is available at: https://github.com/HongyuZhu-s/MS-Mix.
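A toy sketch of the emotion-aware mixing idea, assuming per-sample polarity labels and predicted intensities are available: pairs with clashing polarities are left unmixed (the SASS filtering idea), and the mixing ratio favors the stronger emotion (a crude stand-in for the SIG module). Shapes and the ratio rule are illustrative.

```python
import torch

def sentiment_aware_mix(feats, labels, intensity):
    """feats: (B, D); labels: (B,) polarity ids; intensity: (B,) in [0, 1]."""
    perm = torch.randperm(feats.size(0))
    # Only mix pairs whose polarities agree, avoiding label ambiguity.
    ok = labels == labels[perm]
    # Ratio favors the partner with the stronger emotion.
    lam = intensity / (intensity + intensity[perm] + 1e-8)
    lam = torch.where(ok, lam, torch.ones_like(lam))  # unmatched: keep original
    mixed = lam.unsqueeze(1) * feats + (1 - lam).unsqueeze(1) * feats[perm]
    return mixed, labels

feats = torch.randn(16, 128)
labels = torch.randint(0, 3, (16,))
intensity = torch.rand(16)
mixed, y = sentiment_aware_mix(feats, labels, intensity)
```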
Submitted 13 October, 2025;
originally announced October 2025.
-
DRAE: Dynamic Retrieval-Augmented Expert Networks for Lifelong Learning and Task Adaptation in Robotics
Authors:
Yayu Long,
Kewei Chen,
Long Jin,
Mingsheng Shang
Abstract:
We introduce Dynamic Retrieval-Augmented Expert Networks (DRAE), a groundbreaking architecture that addresses the challenges of lifelong learning, catastrophic forgetting, and task adaptation by combining the dynamic routing capabilities of Mixture-of-Experts (MoE); leveraging the knowledge-enhancement power of Retrieval-Augmented Generation (RAG); incorporating a novel hierarchical reinforcement learning (RL) framework; and coordinating through ReflexNet-SchemaPlanner-HyperOptima (RSHO). DRAE dynamically routes expert models via a sparse MoE gating mechanism, enabling efficient resource allocation while leveraging external knowledge through parametric retrieval (P-RAG) to augment the learning process. We propose a new RL framework with ReflexNet for low-level task execution, SchemaPlanner for symbolic reasoning, and HyperOptima for long-term context modeling, ensuring continuous adaptation and memory retention. Experimental results show that DRAE significantly outperforms baseline approaches in long-term task retention and knowledge reuse, achieving an average task success rate of 82.5% across a set of dynamic robotic manipulation tasks, compared to 74.2% for traditional MoE models. Furthermore, DRAE maintains an extremely low forgetting rate, outperforming state-of-the-art methods in catastrophic forgetting mitigation. These results demonstrate the effectiveness of our approach in enabling flexible, scalable, and efficient lifelong learning for robotics.
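The routing primitive named above, sparse top-k MoE gating, in a standard minimal form; the retrieval (P-RAG) and hierarchical-RL components are out of scope for this sketch, and all sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim, n_experts=8, k=2):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(n_experts))
        self.k = k

    def forward(self, x):                     # x: (B, dim)
        scores = self.gate(x)                 # (B, E) routing logits
        topv, topi = scores.topk(self.k, dim=-1)
        weights = F.softmax(topv, dim=-1)     # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e     # inputs routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

moe = SparseMoE(dim=32)
y = moe(torch.randn(4, 32))                   # only k of 8 experts run per input
```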
Submitted 23 December, 2025; v1 submitted 7 July, 2025;
originally announced July 2025.
-
Graph Fourier Transformer with Structure-Frequency Information
Authors:
Yonghui Zhai,
Yang Zhang,
Minghao Shang,
Lihua Pang,
Yaxin Ren
Abstract:
Graph Transformers (GTs) have shown advantages in numerous graph-structure tasks, but their self-attention mechanism ignores the generalization bias of graphs. Existing methods compensate for this bias through position encodings, attention biases, and relative distances, yet they remain sub-optimal because they consider only the structural perspective of the bias. To address this, this paper proposes Grafourierformer, which combines GTs with an inductive bias containing frequency-structure information by applying the Graph Fourier Transform to the attention matrix. Specifically, eigenvalues of the graph Laplacian are used to construct an eigenvalue matrix mask, which reflects node positions and structural relationships with neighboring nodes, enabling the model to account for node-level structural characteristics and to focus on local graph details. An inverse Fourier transform then extracts high-frequency and low-frequency node features; from these, low- and high-frequency energies are computed to build a node frequency-energy matrix that filters the eigenvalue matrix mask. This allows attention heads to jointly incorporate graph structural information and node frequency information, adaptively distinguish global trends from local details, and effectively suppress interference from redundant information. Extensive experiments on various benchmarks show that Grafourierformer consistently outperforms GNN- and GT-based models in graph classification and node classification tasks, and ablation experiments further validate the effectiveness and necessity of the method. Codes are available at https://github.com/Arichibald/Grafourierformer.git
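A sketch of the graph-Fourier ingredients described above, under simplifying assumptions: eigendecompose the normalized Laplacian, split node features into low- and high-frequency parts, and expose per-node frequency energies from which an attention bias could be built. The cutoff and the bias at the end are illustrative; the paper's exact mask construction differs.

```python
import numpy as np

def laplacian(adj):
    """Symmetric normalized graph Laplacian L = I - D^-1/2 A D^-1/2."""
    deg = adj.sum(1)
    d = np.where(deg > 0, deg ** -0.5, 0.0)
    return np.eye(len(adj)) - d[:, None] * adj * d[None, :]

def frequency_energy(adj, X, cutoff=0.5):
    """Per-node low/high frequency energy of node features X (N, D)."""
    evals, U = np.linalg.eigh(laplacian(adj))   # graph Fourier basis
    Xhat = U.T @ X                              # spectral coefficients
    low = evals < cutoff * evals.max()
    X_low = U[:, low] @ Xhat[low]               # low-pass reconstruction
    X_high = X - X_low
    return (X_low ** 2).sum(1), (X_high ** 2).sum(1)

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 8)
e_low, e_high = frequency_energy(adj, X)
attn_bias = np.log(e_low[:, None] + 1e-8)  # e.g., bias attention toward
                                           # smooth (low-frequency) nodes
```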
Submitted 28 April, 2025;
originally announced April 2025.
-
Temporal-contextual Event Learning for Pedestrian Crossing Intent Prediction
Authors:
Hongbin Liang,
Hezhe Qiao,
Wei Huang,
Qizhou Wang,
Mingsheng Shang,
Lin Chen
Abstract:
Ensuring the safety of vulnerable road users through accurate prediction of pedestrian crossing intention (PCI) plays a crucial role in autonomous and assisted driving. Most PCI prediction methods analyze a set of ego-view observation video frames to forecast crossing intent. However, they struggle to capture the critical events in pedestrian behavior along the temporal dimension because of the high redundancy of the video frames, which results in sub-optimal PCI prediction. Our research addresses this challenge with a novel approach called Temporal-contextual Event Learning (TCL). TCL is composed of a Temporal Merging Module (TMM), which manages the redundancy by clustering the observed video frames into multiple key temporal events, and a Contextual Attention Block (CAB), which adaptively aggregates multiple event features along with visual and non-visual data. By synthesizing temporal feature extraction and contextual attention on the key information across the critical events, TCL learns an expressive representation for PCI prediction. Extensive experiments are carried out on three widely adopted datasets: PIE, JAAD-beh, and JAAD-all. The results show that TCL substantially surpasses the state-of-the-art methods. Our code can be accessed at https://github.com/dadaguailhb/TCL.
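A toy sketch of the temporal-merging idea, using plain k-means as a stand-in for TMM: redundant frame features are clustered into a few key events and one aggregated feature is kept per event, ordered in time. The cluster count and aggregation are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def merge_frames(frame_feats, n_events=4):
    """frame_feats: (T, D) per-frame features -> (n_events, D) event features."""
    labels = KMeans(n_clusters=n_events, n_init=10).fit_predict(frame_feats)
    # Order events by their mean frame index to preserve temporal structure.
    order = np.argsort([np.mean(np.where(labels == c)[0])
                        for c in range(n_events)])
    return np.stack([frame_feats[labels == c].mean(0) for c in order])

events = merge_frames(np.random.randn(32, 256))  # 32 frames -> 4 key events
```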
Submitted 4 April, 2025;
originally announced April 2025.
-
Semantic-guided Representation Learning for Multi-Label Recognition
Authors:
Ruhui Zhang,
Hezhe Qiao,
Pengcheng Xu,
Mingsheng Shang,
Lin Chen
Abstract:
Multi-label Recognition (MLR) involves assigning multiple labels to each data instance in an image, offering advantages over single-label classification in complex scenarios. However, it faces the challenge of annotating all relevant categories, often leading to uncertain annotations, such as unseen or incomplete labels. Recent Vision and Language Pre-training (VLP) based methods have made significant progress in tackling zero-shot MLR tasks by leveraging rich vision-language correlations. However, the correlation between multi-label semantics has not been fully explored, and the learned visual features often lack essential semantic information. To overcome these limitations, we introduce a Semantic-guided Representation Learning approach (SigRL) that enables the model to learn effective visual and textual representations, thereby improving the downstream alignment of visual images and categories. Specifically, we first introduce a graph-based multi-label correlation module (GMC) to facilitate information exchange between labels, enriching the semantic representation across the multi-label texts. Next, we propose a Semantic Visual Feature Reconstruction module (SVFR) to enhance the semantic information in the visual representation by integrating the learned textual representation during reconstruction. Finally, we optimize the image-text matching capability of the VLP model using both local and global features to achieve zero-shot MLR. Comprehensive experiments are conducted on several MLR benchmarks, encompassing both zero-shot MLR (with unseen labels) and single positive multi-label learning (with limited labels), demonstrating the superior performance of our approach compared to state-of-the-art methods. The code is available at https://github.com/MVL-Lab/SigRL.
Submitted 31 May, 2025; v1 submitted 4 April, 2025;
originally announced April 2025.
-
DCA: Dividing and Conquering Amnesia in Incremental Object Detection
Authors:
Aoting Zhang,
Dongbao Yang,
Chang Liu,
Xiaopeng Hong,
Miao Shang,
Yu Zhou
Abstract:
Incremental object detection (IOD) aims to cultivate an object detector that can continuously localize and recognize novel classes while preserving its performance on previous classes. Existing methods achieve certain success by improving knowledge distillation and exemplar replay for transformer-based detection frameworks, but the intrinsic forgetting mechanisms remain underexplored. In this paper, we dive into the cause of forgetting and discover a forgetting imbalance between localization and recognition in transformer-based IOD: localization is less prone to forgetting and can generalize to future classes, whereas catastrophic forgetting occurs primarily in recognition. Based on these insights, we propose a Divide-and-Conquer Amnesia (DCA) strategy, which redesigns transformer-based IOD as a localization-then-recognition process. DCA maintains and transfers the localization ability well, leaving the decoupled, fragile recognition to be conquered separately. To reduce feature drift in recognition, we leverage semantic knowledge encoded in pre-trained language models to anchor class representations within a unified feature space across incremental tasks. This involves designing a duplex classifier fusion and embedding class semantic features into the recognition decoding process in the form of queries. Extensive experiments validate that our approach achieves state-of-the-art performance, especially in long-term incremental scenarios. For example, under the four-step setting on MS-COCO, our DCA strategy significantly improves the final AP by 6.9%.
Submitted 19 March, 2025;
originally announced March 2025.
-
IFIR: A Comprehensive Benchmark for Evaluating Instruction-Following in Expert-Domain Information Retrieval
Authors:
Tingyu Song,
Guo Gan,
Mingsheng Shang,
Yilun Zhao
Abstract:
We introduce IFIR, the first comprehensive benchmark designed to evaluate instruction-following information retrieval (IR) in expert domains. IFIR includes 2,426 high-quality examples and covers eight subsets across four specialized domains: finance, law, healthcare, and science literature. Each subset addresses one or more domain-specific retrieval tasks, replicating real-world scenarios where customized instructions are critical. IFIR enables a detailed analysis of instruction-following retrieval capabilities by incorporating instructions at different levels of complexity. We also propose a novel LLM-based evaluation method to provide a more precise and reliable assessment of model performance in following instructions. Through extensive experiments on 15 frontier retrieval models, including those based on LLMs, our results reveal that current models face significant challenges in effectively following complex, domain-specific instructions. We further provide in-depth analyses to highlight these limitations, offering valuable insights to guide future advancements in retriever development.
Submitted 6 March, 2025;
originally announced March 2025.
-
Learning Code Preference via Synthetic Evolution
Authors:
Jiawei Liu,
Thanh Nguyen,
Mingyue Shang,
Hantian Ding,
Xiaopeng Li,
Yu Yu,
Varun Kumar,
Zijian Wang
Abstract:
Large Language Models (LLMs) have recently demonstrated remarkable coding capabilities. However, assessing code generation based on well-formed properties and aligning it with developer preferences remains challenging. In this paper, we explore two key questions under the new challenge of code preference learning: (i) How do we train models to predict meaningful preferences for code? and (ii) How do human and LLM preferences align with verifiable code properties and developer code tastes? To this end, we propose CodeFavor, a framework for training pairwise code preference models from synthetic evolution data, including code commits and code critiques. To evaluate code preferences, we introduce CodePrefBench, a benchmark comprising 1,364 rigorously curated code preference tasks to cover three verifiable properties (correctness, efficiency, and security) along with human preference. Our evaluation shows that CodeFavor holistically improves the accuracy of model-based code preferences by up to 28.8%. Meanwhile, CodeFavor models can match the performance of models with 6-9x more parameters while being 34x more cost-effective. We also rigorously validate the design choices in CodeFavor via a comprehensive set of controlled experiments. Furthermore, we discover the prohibitive costs and limitations of human-based code preference: despite spending 23.4 person-minutes on each task, 15.1-40.3% of tasks remain unsolved. Compared to model-based preference, human preference tends to be more accurate under the objective of code correctness, while being sub-optimal for non-functional objectives.
Submitted 23 October, 2024; v1 submitted 4 October, 2024;
originally announced October 2024.
-
Multi-modal Crowd Counting via a Broker Modality
Authors:
Haoliang Meng,
Xiaopeng Hong,
Chenhao Wang,
Miao Shang,
Wangmeng Zuo
Abstract:
Multi-modal crowd counting involves estimating crowd density from both visual and thermal/depth images. This task is challenging due to the significant gap between these distinct modalities. In this paper, we propose a novel approach by introducing an auxiliary broker modality and on this basis frame the task as a triple-modal learning problem. We devise a fusion-based method to generate this broker modality, leveraging a non-diffusion, lightweight counterpart of modern denoising diffusion-based fusion models. Additionally, we identify and address the ghosting effect caused by direct cross-modal image fusion in multi-modal crowd counting. Through extensive experimental evaluations on popular multi-modal crowd-counting datasets, we demonstrate the effectiveness of our method, which introduces only 4 million additional parameters, yet achieves promising results. The code is available at https://github.com/HenryCilence/Broker-Modality-Crowd-Counting.
Submitted 10 July, 2024;
originally announced July 2024.
-
Self-Supervised Spatial-Temporal Normality Learning for Time Series Anomaly Detection
Authors:
Yutong Chen,
Hongzuo Xu,
Guansong Pang,
Hezhe Qiao,
Yuan Zhou,
Mingsheng Shang
Abstract:
Time Series Anomaly Detection (TSAD) finds widespread applications across various domains such as financial markets, industrial production, and healthcare. Its primary objective is to learn the normal patterns of time series data, thereby identifying deviations in test samples. Most existing TSAD methods focus on modeling data from the temporal dimension, while ignoring the semantic information in the spatial dimension. To address this issue, we introduce a novel approach, called Spatial-Temporal Normality learning (STEN). STEN is composed of a sequence Order prediction-based Temporal Normality learning (OTN) module that captures the temporal correlations within sequences, and a Distance prediction-based Spatial Normality learning (DSN) module that learns the relative spatial relations between sequences in a feature space. By synthesizing these two modules, STEN learns expressive spatial-temporal representations for the normal patterns hidden in the time series data. Extensive experiments on five popular TSAD benchmarks show that STEN substantially outperforms state-of-the-art competing methods. Our code is available at https://github.com/mala-lab/STEN.
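A hedged sketch of the data side of the order-prediction pretext task (our reading of OTN's input construction): sub-sequences of a window are shuffled and the permutation becomes the training target. The chunk count is an illustrative choice; the networks and the DSN distance task are omitted.

```python
import numpy as np

def order_pretext(window, n_chunks=4, seed=None):
    """window: (T, D) series -> shuffled (T, D) series plus permutation label."""
    rng = np.random.default_rng(seed)
    chunks = np.array_split(window, n_chunks)
    perm = rng.permutation(n_chunks)
    shuffled = np.concatenate([chunks[i] for i in perm])
    return shuffled, perm          # the model predicts `perm` from `shuffled`

x = np.random.randn(100, 3)        # a multivariate time-series window
x_shuf, target = order_pretext(x)
```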
Submitted 28 June, 2024;
originally announced June 2024.
-
A Masked Semi-Supervised Learning Approach for Otago Micro Labels Recognition
Authors:
Meng Shang,
Lenore Dedeyne,
Jolan Dupont,
Laura Vercauteren,
Nadjia Amini,
Laurence Lapauw,
Evelien Gielen,
Sabine Verschueren,
Carolina Varon,
Walter De Raedt,
Bart Vanrumste
Abstract:
The Otago Exercise Program (OEP) serves as a vital rehabilitation initiative for older adults, aiming to enhance their strength and balance and consequently prevent falls. While Human Activity Recognition (HAR) systems have been widely employed to recognize the activities of individuals, existing systems focus on the duration of macro activities (i.e., a sequence of repetitions of the same exercise) and, in the case of OEP, neglect the ability to discern micro activities (i.e., the individual repetitions of the exercises). This study presents a novel semi-supervised machine learning approach aimed at bridging this gap in recognizing the micro activities of OEP. To manage the limited dataset size, our model utilizes a Transformer encoder for feature extraction, subsequently classified by a Temporal Convolutional Network (TCN). Simultaneously, the Transformer encoder is employed for masked unsupervised learning to reconstruct input signals. Results indicate that the masked unsupervised learning task enhances the performance of the supervised classification task, as evidenced by f1-scores surpassing the clinically applicable threshold of 0.8. From the micro activities, two clinically relevant outcomes emerge: counting the number of repetitions of each exercise and calculating the velocity during chair rising. These outcomes enable the automatic monitoring of exercise intensity and difficulty in the daily lives of older adults.
Submitted 22 May, 2024; v1 submitted 21 May, 2024;
originally announced May 2024.
-
CodeFort: Robust Training for Code Generation Models
Authors:
Yuhao Zhang,
Shiqi Wang,
Haifeng Qian,
Zijian Wang,
Mingyue Shang,
Linbo Liu,
Sanjay Krishna Gouda,
Baishakhi Ray,
Murali Krishna Ramanathan,
Xiaofei Ma,
Anoop Deoras
Abstract:
Code generation models are not robust to small perturbations, which often lead to incorrect generations and significantly degrade the performance of these models. Although improving the robustness of code generation models is crucial to enhancing user experience in real-world applications, existing research efforts do not address this issue. To fill this gap, we propose CodeFort, a framework to improve the robustness of code generation models, generalizing a large variety of code perturbations to enrich the training data and enabling various robust training strategies, mixing data augmentation, batch augmentation, adversarial logits pairing, and contrastive learning, all carefully designed to support high-throughput training. Extensive evaluations show that we increase the average robust pass rates of baseline CodeGen models from 14.79 to 21.74. We notably decrease the robustness drop rate from 95.02% to 54.95% against code-syntax perturbations.
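A minimal sketch of one of the listed strategies, adversarial logits pairing, assuming a model that maps token ids straight to logits: the loss pairs the clean cross-entropy with a KL term tying perturbed-prompt logits to clean-prompt logits. The weighting and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def logits_pairing_loss(model, clean_ids, perturbed_ids, labels, lam=0.5):
    """clean_ids/perturbed_ids: (B, T) token ids; labels: (B, T) targets."""
    logits_c = model(clean_ids)            # (B, T, V) token logits
    logits_p = model(perturbed_ids)
    ce = F.cross_entropy(logits_c.flatten(0, 1), labels.flatten())
    # Pairing term: keep predictions on the perturbed prompt close to
    # those on the clean prompt.
    pair = F.kl_div(F.log_softmax(logits_p, -1),
                    F.log_softmax(logits_c, -1),
                    reduction="batchmean", log_target=True)
    return ce + lam * pair
```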
Submitted 28 October, 2024; v1 submitted 11 April, 2024;
originally announced May 2024.
-
BASS: Batched Attention-optimized Speculative Sampling
Authors:
Haifeng Qian,
Sujan Kumar Gonugondla,
Sungsoo Ha,
Mingyue Shang,
Sanjay Krishna Gouda,
Ramesh Nallapati,
Sudipta Sengupta,
Xiaofei Ma,
Anoop Deoras
Abstract:
Speculative decoding has emerged as a powerful method to improve latency and throughput in hosting large language models. However, most existing implementations focus on generating a single sequence. Real-world generative AI applications often require multiple responses and how to perform speculative decoding in a batched setting while preserving its latency benefits poses non-trivial challenges. This paper describes a system of batched speculative decoding that sets a new state of the art in multi-sequence generation latency and that demonstrates superior GPU utilization as well as quality of generations within a time budget. For example, for a 7.8B-size model on a single A100 GPU and with a batch size of 8, each sequence is generated at an average speed of 5.8ms per token, the overall throughput being 1.1K tokens per second. These results represent state-of-the-art latency and a 2.15X speed-up over optimized regular decoding. Within a time budget that regular decoding does not finish, our system is able to generate sequences with HumanEval Pass@First of 43% and Pass@All of 61%, far exceeding what's feasible with single-sequence speculative decoding. Our peak GPU utilization during decoding reaches as high as 15.8%, more than 3X the highest of that of regular decoding and around 10X of single-sequence speculative decoding.
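For reference, the single-sequence speculative-sampling loop that BASS generalizes to batches, with toy context-independent distributions standing in for the draft and target models; the batched attention optimizations that are the paper's contribution are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def speculative_step(p_target, p_draft, context, k=4, vocab=50):
    """One speculative step. p_target/p_draft map a context (list of
    token ids) to a next-token distribution over `vocab` tokens."""
    draft, ctx = [], list(context)
    for _ in range(k):                       # draft model proposes k tokens
        q = p_draft(ctx)
        t = int(rng.choice(vocab, p=q))
        draft.append((t, q))
        ctx.append(t)
    out = list(context)
    for t, q in draft:                       # target verifies each token
        p = p_target(out)
        if rng.random() < min(1.0, p[t] / q[t]):
            out.append(t)                    # accept the draft token
        else:                                # reject: resample from residual
            resid = np.maximum(p - q, 0.0)
            out.append(int(rng.choice(vocab, p=resid / resid.sum())))
            break
    else:                                    # all accepted: one bonus token
        out.append(int(rng.choice(vocab, p=p_target(out))))
    return out

def toy_model(sharpness):
    logits = np.linspace(0.0, sharpness, 50)
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    return lambda ctx: probs                 # context-independent toy dist

print(speculative_step(toy_model(2.0), toy_model(1.0), context=[1, 2, 3]))
```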
Submitted 26 June, 2024; v1 submitted 24 April, 2024;
originally announced April 2024.
-
Token Alignment via Character Matching for Subword Completion
Authors:
Ben Athiwaratkun,
Shiqi Wang,
Mingyue Shang,
Yuchen Tian,
Zijian Wang,
Sujan Kumar Gonugondla,
Sanjay Krishna Gouda,
Rob Kwiatowski,
Ramesh Nallapati,
Bing Xiang
Abstract:
Generative models, widely utilized in various applications, can often struggle with prompts corresponding to partial tokens. This struggle stems from tokenization, where partial tokens fall out of distribution during inference, leading to incorrect or nonsensical outputs. This paper examines a technique to alleviate the tokenization artifact on text completion in generative models, maintaining performance even in regular non-subword cases. The method, termed token alignment, involves backtracking to the last complete tokens and ensuring the model's generation aligns with the prompt. This approach showcases marked improvement across many partial token scenarios, including nuanced cases like space-prefix and partial indentation, with only a minor time increase. The technique and analysis detailed in this paper contribute to the continuous advancement of generative models in handling partial inputs, bearing relevance for applications like code completion and text autocompletion.
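A toy illustration of the backtracking idea with a hypothetical six-token vocabulary: the dangling characters of the prompt are re-matched against token surface forms, so a token such as "print" becomes reachable even though the prompt's own tokenization ended differently. A real tokenizer and logit masking are assumed away.

```python
# The prompt "prin" tokenizes as ["pr", "in"], a split the model rarely
# saw in training. Back off the trailing token(s) and constrain decoding
# to tokens whose surface form is consistent with the dropped characters.
vocab = {0: "pr", 1: "print", 2: "in", 3: "t", 4: "(", 5: "x)"}  # hypothetical

def align(prompt_ids, backoff=1):
    """Drop the last `backoff` tokens; return kept ids + chars to re-match."""
    kept = prompt_ids[:-backoff]
    remainder = "".join(vocab[t] for t in prompt_ids[-backoff:])
    return kept, remainder

def allowed_tokens(remainder):
    """Ids whose string either extends or is a prefix of the remainder."""
    return [t for t, s in vocab.items()
            if s.startswith(remainder) or remainder.startswith(s)]

kept, rem = align([0, 2], backoff=2)    # back off both tokens of "prin"
print(kept, rem, allowed_tokens(rem))   # -> [] 'prin' [0, 1]
# The model may now emit "print" (id 1), which a naive continuation of
# [0, 2] could never produce; as tokens are emitted, the matched
# characters are consumed and the constraint shrinks until it vanishes.
```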
Submitted 13 March, 2024;
originally announced March 2024.
-
DS-MS-TCN: Otago Exercises Recognition with a Dual-Scale Multi-Stage Temporal Convolutional Network
Authors:
Meng Shang,
Lenore Dedeyne,
Jolan Dupont,
Laura Vercauteren,
Nadjia Amini,
Laurence Lapauw,
Evelien Gielen,
Sabine Verschueren,
Carolina Varon,
Walter De Raedt,
Bart Vanrumste
Abstract:
The Otago Exercise Program (OEP) represents a crucial rehabilitation initiative tailored for older adults, aimed at enhancing balance and strength. Despite previous efforts utilizing wearable sensors for OEP recognition, existing studies have exhibited limitations in terms of accuracy and robustness. This study addresses these limitations by employing a single waist-mounted Inertial Measurement Unit (IMU) to recognize OEP exercises among community-dwelling older adults in their daily lives. A cohort of 36 older adults participated in laboratory settings, supplemented by an additional 7 older adults recruited for at-home assessments. The study proposes a Dual-Scale Multi-Stage Temporal Convolutional Network (DS-MS-TCN) designed for two-level sequence-to-sequence classification, incorporating both levels in one loss function. In the first stage, the model focuses on recognizing each repetition of the exercises (micro labels). Subsequent stages extend the recognition to encompass the complete range of exercises (macro labels). The DS-MS-TCN model surpasses existing state-of-the-art deep learning models, achieving f1-scores exceeding 80% and Intersection over Union (IoU) f1-scores surpassing 60% for all four exercises evaluated. Notably, the model outperforms the prior study utilizing the sliding-window technique, eliminating the need for post-processing stages and window-size tuning. To our knowledge, we are the first to present a novel perspective on enhancing Human Activity Recognition (HAR) systems through the recognition of each repetition of activities.
Submitted 7 February, 2024; v1 submitted 5 February, 2024;
originally announced February 2024.
-
Code-Aware Prompting: A study of Coverage Guided Test Generation in Regression Setting using LLM
Authors:
Gabriel Ryan,
Siddhartha Jain,
Mingyue Shang,
Shiqi Wang,
Xiaofei Ma,
Murali Krishna Ramanathan,
Baishakhi Ray
Abstract:
Testing plays a pivotal role in ensuring software quality, yet conventional Search Based Software Testing (SBST) methods often struggle with complex software units, achieving suboptimal test coverage. Recent works using large language models (LLMs) for test generation have focused on improving generation quality through optimizing the test generation context and correcting errors in model outputs, but they use fixed prompting strategies that prompt the model to generate tests without additional guidance. As a result, LLM-generated test suites still suffer from low coverage. In this paper, we present SymPrompt, a code-aware prompting strategy for LLMs in test generation. SymPrompt's approach is based on recent work demonstrating that LLMs can solve more complex logical problems when prompted to reason about the problem in a multi-step fashion. We apply this methodology to test generation by deconstructing the test-suite generation process into a multi-stage sequence, driving each stage with a specific prompt aligned with the execution paths of the method under test and exposing relevant type and dependency focal context to the model. Our approach enables pretrained LLMs to generate more complete test cases without any additional training. We implement SymPrompt using the TreeSitter parsing framework and evaluate it on a benchmark of challenging methods from open-source Python projects. SymPrompt enhances correct test generations by a factor of 5 and bolsters relative coverage by 26% for CodeGen2. Notably, when applied to GPT-4, SymPrompt improves coverage by over 2x compared to baseline prompting strategies.
Submitted 2 April, 2024; v1 submitted 31 January, 2024;
originally announced February 2024.
-
Detecting subtle cyberattacks on adaptive cruise control vehicles: A machine learning approach
Authors:
Tianyi Li,
Mingfeng Shang,
Shian Wang,
Raphael Stern
Abstract:
With the advent of vehicles equipped with advanced driver-assistance systems, such as adaptive cruise control (ACC) and other automated driving features, the potential for cyberattacks on these automated vehicles (AVs) has emerged. While overt attacks that force vehicles to collide may be easily identified, more insidious attacks, which only slightly alter driving behavior, can result in network-wide increases in congestion, fuel consumption, and even crash risk without being easily detected. To address the detection of such attacks, we first present a traffic model framework for three types of potential cyberattacks: malicious manipulation of vehicle control commands, false data injection attacks on sensor measurements, and denial-of-service (DoS) attacks. We then investigate the impacts of these attacks at both the individual vehicle (micro) and traffic flow (macro) levels. A novel generative adversarial network (GAN)-based anomaly detection model is proposed for real-time identification of such attacks using vehicle trajectory data. We provide numerical evidence to demonstrate the efficacy of our machine learning approach in detecting cyberattacks on ACC-equipped vehicles. The proposed method is compared against some recently proposed neural network models and observed to have higher accuracy in identifying anomalous driving behaviors of ACC vehicles.
Submitted 1 January, 2025; v1 submitted 25 October, 2023;
originally announced October 2023.
-
A Multi-Stage Temporal Convolutional Network for Volleyball Jumps Classification Using a Waist-Mounted IMU
Authors:
Meng Shang,
Camilla De Bleecker,
Jos Vanrenterghem,
Roel De Ridder,
Sabine Verschueren,
Carolina Varon,
Walter De Raedt,
Bart Vanrumste
Abstract:
Monitoring the number of jumps for volleyball players during training or a match can be crucial to prevent injuries, yet the measurement requires considerable workload and cost using traditional methods such as video analysis. Also, existing methods do not provide accurate differentiation between different types of jumps. In this study, an unobtrusive system with a single inertial measurement unit (IMU) on the waist was proposed to recognize the types of volleyball jumps. A Multi-Stage Temporal Convolutional Network (MS-TCN) was applied for sample-wise classification. The model was evaluated on ten volleyball players during a lab session with a fixed protocol of jumping and landing tasks, and on twenty-six volleyball players during four volleyball training sessions. The MS-TCN model achieved better performance than a state-of-the-art deep learning model but with lower computational cost. In the lab sessions, most jump counts showed small differences between the predicted and video-annotated jumps, with the overall count showing a Limit of Agreement (LoA) of 0.1±3.40 (r=0.884). For comparison, the proposed algorithm showed slightly worse results than VERT (a commercial jumping assessment device), with a LoA of 0.1±2.08 (r=0.955), but the differences were still within a comparable range. In the training sessions, the recognition of three types of jumps exhibited a mean difference from observation of fewer than 10 jumps: block, smash, and overhead serve. These results show the potential of using a single IMU to recognize the types of volleyball jumps. The sample-wise architecture provides high-resolution recognition, and the MS-TCN requires fewer parameters to train compared with state-of-the-art models.
Submitted 19 October, 2023;
originally announced October 2023.
-
Eliciting Model Steering Interactions from Users via Data and Visual Design Probes
Authors:
Anamaria Crisan,
Maddie Shang,
Eric Brochu
Abstract:
Domain experts increasingly use automated data science tools to incorporate machine learning (ML) models in their work but struggle to "debug" these models when they are incorrect. For these experts, semantic interactions can provide an accessible avenue to guide and refine ML models without having to programmatically dive into their technical details. In this research, we conduct an elicitation study using data and visual design probes to examine if and how experts with a spectrum of ML expertise use semantic interactions to update a simple classification model. We use our design probes to facilitate an interactive dialogue with 20 participants and codify their interactions as a set of target-interaction pairs. Interestingly, our findings revealed that many targets of semantic interactions do not directly map to ML model parameters, but instead aim to augment the data a model uses for training. We also identify reasons that participants would hesitate to interact with ML models, including burdens of cognitive load and concerns of injecting bias. Unexpectedly, participants also saw the value of using semantic interactions to work collaboratively with members of their team. Participants with less ML expertise found this to be a useful mechanism for communicating their concerns to ML experts. This was an especially important observation, as our study also shows the different needs that correspond to diverse ML expertise. Collectively, we demonstrate that design probes are effective tools for proactively gathering the affordances that should be offered in an interactive machine learning system.
Submitted 12 October, 2023;
originally announced October 2023.
-
Otago Exercises Monitoring for Older Adults by a Single IMU and Hierarchical Machine Learning Models
Authors:
Meng Shang,
Lenore Dedeyne,
Jolan Dupont,
Laura Vercauteren,
Nadjia Amini,
Laurence Lapauw,
Evelien Gielen,
Sabine Verschueren,
Carolina Varon,
Walter De Raedt,
Bart Vanrumste
Abstract:
Otago Exercise Program (OEP) is a rehabilitation program for older adults to improve frailty, sarcopenia, and balance. Accurate monitoring of patient involvement in OEP is challenging, as self-reports (diaries) are often unreliable. With the development of wearable sensors, Human Activity Recognition (HAR) systems using wearable sensors have revolutionized healthcare. However, their usage for OEP still shows limited performance. The objective of this study is to build an unobtrusive and accurate system to monitor OEP for older adults. Data was collected from older adults wearing a single waist-mounted Inertial Measurement Unit (IMU). Two datasets were collected, one in a laboratory setting, and one at the homes of the patients. A hierarchical system is proposed with two stages: 1) using a deep learning model to recognize whether the patients are performing OEP or activities of daily life (ADLs) using a 10-minute sliding window; 2) based on stage 1, using a 6-second sliding window to recognize the OEP sub-classes performed. The results showed that in stage 1, OEP could be recognized with window-wise f1-scores over 0.95 and Intersection-over-Union (IoU) f1-scores over 0.85 for both datasets. In stage 2, for the home scenario, four activities could be recognized with f1-scores over 0.8: ankle plantarflexors, abdominal muscles, knee bends, and sit-to-stand. The results showed the potential of monitoring compliance with OEP using a single IMU in daily life. Also, some OEP sub-classes can be recognized for further analysis.
Submitted 5 February, 2024; v1 submitted 5 October, 2023;
originally announced October 2023.
-
Chat2Brain: A Method for Mapping Open-Ended Semantic Queries to Brain Activation Maps
Authors:
Yaonai Wei,
Tuo Zhang,
Han Zhang,
Tianyang Zhong,
Lin Zhao,
Zhengliang Liu,
Chong Ma,
Songyao Zhang,
Muheng Shang,
Lei Du,
Xiao Li,
Tianming Liu,
Junwei Han
Abstract:
Over decades, neuroscience has accumulated a wealth of research results in the text modality that can be used to explore cognitive processes. Meta-analysis is a typical method that successfully establishes a link from text queries to brain activation maps using these research results, but it still relies on an ideal query environment. In practical applications, text queries used for meta-analyses may encounter issues such as semantic redundancy and ambiguity, resulting in an inaccurate mapping to brain images. On the other hand, large language models (LLMs) like ChatGPT have shown great potential in tasks such as context understanding and reasoning, displaying a high degree of consistency with human natural language. Hence, LLMs could improve the connection between the text modality and neuroscience, resolving existing challenges of meta-analyses. In this study, we propose Chat2Brain, a method that combines LLMs with the basic text-to-image model Text2Brain to map open-ended semantic queries to brain activation maps in data-scarce and complex query environments. By utilizing the understanding and reasoning capabilities of LLMs, Chat2Brain transfers text queries into semantic queries, optimizing the performance of the mapping model. We demonstrate that Chat2Brain can synthesize anatomically plausible neural activation patterns for more complex text-query tasks.
Submitted 10 September, 2023;
originally announced September 2023.
-
Few-Shot Data-to-Text Generation via Unified Representation and Multi-Source Learning
Authors:
Alexander Hanbo Li,
Mingyue Shang,
Evangelia Spiliopoulou,
Jie Ma,
Patrick Ng,
Zhiguo Wang,
Bonan Min,
William Wang,
Kathleen McKeown,
Vittorio Castelli,
Dan Roth,
Bing Xiang
Abstract:
We present a novel approach for structured data-to-text generation that addresses the limitations of existing methods that primarily focus on specific types of structured data. Our proposed method aims to improve performance in multi-task training, zero-shot and few-shot scenarios by providing a unified representation that can handle various forms of structured data such as tables, knowledge graph triples, and meaning representations. We demonstrate that our proposed approach can effectively adapt to new structured forms, and can improve performance in comparison to current methods. For example, our method resulted in a 66% improvement in zero-shot BLEU scores when transferring models trained on table inputs to a knowledge graph dataset. Our proposed method is an important step towards a more general data-to-text generation framework.
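A sketch of what a unified representation can look like, with illustrative tag names (the paper's actual scheme may differ): tables and knowledge-graph triples are flattened into one shared textual format that a single seq2seq model can consume.

```python
def linearize_table(header, rows):
    """Flatten a table into tagged text; tags are illustrative."""
    cells = [f"<cell> {v} <col> {h}" for row in rows
             for h, v in zip(header, row)]
    return "<table> " + " ".join(cells)

def linearize_triples(triples):
    """Flatten knowledge-graph triples into the same tagged scheme."""
    return " ".join(f"<head> {h} <rel> {r} <tail> {t}" for h, r, t in triples)

print(linearize_table(["team", "wins"], [["Leeds", "12"]]))
print(linearize_triples([("Leeds", "wins", "12")]))
```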
Submitted 9 August, 2023;
originally announced August 2023.
-
Greener yet Powerful: Taming Large Code Generation Models with Quantization
Authors:
Xiaokai Wei,
Sujan Gonugondla,
Wasi Ahmad,
Shiqi Wang,
Baishakhi Ray,
Haifeng Qian,
Xiaopeng Li,
Varun Kumar,
Zijian Wang,
Yuchen Tian,
Qing Sun,
Ben Athiwaratkun,
Mingyue Shang,
Murali Krishna Ramanathan,
Parminder Bhatia,
Bing Xiang
Abstract:
ML-powered code generation aims to assist developers to write code in a more productive manner, by intelligently generating code blocks based on natural language prompts. Recently, large pretrained deep learning models have substantially pushed the boundary of code generation and achieved impressive performance. Despite their great power, the huge number of model parameters poses a significant obstacle to adopting them in a regular software development environment, where a developer might use a standard laptop or mid-size server to develop her code. Such large models incur significant resource usage (in terms of memory, latency, and dollars) as well as carbon footprint.
Model compression is a promising approach to address these challenges. Several techniques have been proposed to compress large pretrained models typically used for vision or textual data. Out of the many available compression techniques, we identified quantization as the most applicable to the code generation task, as it does not require significant retraining cost. As quantization represents model parameters with lower-bit integers (e.g., int8), both model size and runtime latency benefit from such a representation. We extensively study the impact of quantized models on code generation tasks across different dimensions: (i) resource usage and carbon footprint, (ii) accuracy, and (iii) robustness. Through systematic experiments, we find a quantization recipe that can run even a 6B model on a regular laptop without significant accuracy or robustness degradation. We further find that the recipe is readily applicable to the code summarization task as well.
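The setting described is post-training dynamic int8 quantization; below is a minimal PyTorch example on a toy stand-in for a large code LM (the API shown is standard PyTorch, the model is not the paper's).

```python
import torch
import torch.nn as nn

# A toy stand-in for a large code LM; the paper studies models up to 6B.
model = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512))

# Post-training dynamic quantization: Linear weights are stored as int8
# and activations are quantized on the fly -- no retraining required.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(qmodel(x).shape)   # same interface as the fp32 model
```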
Submitted 9 March, 2023;
originally announced March 2023.
-
ReCode: Robustness Evaluation of Code Generation Models
Authors:
Shiqi Wang,
Zheng Li,
Haifeng Qian,
Chenghao Yang,
Zijian Wang,
Mingyue Shang,
Varun Kumar,
Samson Tan,
Baishakhi Ray,
Parminder Bhatia,
Ramesh Nallapati,
Murali Krishna Ramanathan,
Dan Roth,
Bing Xiang
Abstract:
Code generation models have achieved impressive performance. However, they tend to be brittle: slight edits to a prompt can lead to very different generations, and these robustness properties, critical for user experience when the models are deployed in real-life applications, are not well understood. Most existing work on robustness in text or code tasks has focused on classification, while robustness in generation tasks is an uncharted area, and to date there is no comprehensive benchmark for robustness in code generation. In this paper, we propose ReCode, a comprehensive robustness evaluation benchmark for code generation models. We customize over 30 transformations specifically for code, covering docstrings, function and variable names, code syntax, and code format. They are carefully designed to be natural in real-life coding practice and to preserve the original semantic meaning, thus providing multifaceted assessments of a model's robustness. With human annotators, we verified that over 90% of the perturbed prompts do not alter the semantic meaning of the original prompt. In addition, we define robustness metrics for code generation models that consider the worst-case behavior under each type of perturbation, taking advantage of the fact that executing the generated code serves as an objective evaluation. We demonstrate ReCode on SOTA models using HumanEval, MBPP, and function completion tasks derived from them. Interesting observations include: CodeGen is more robust than InCoder and GPT-J; models are most sensitive to syntax perturbations; and robustness evaluation is more challenging on MBPP than on HumanEval.
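The worst-case robustness metric described above can be made concrete in a few lines: a task counts as robustly solved only if the generations for all perturbed variants of its prompt pass the unit tests. A sketch with an assumed data layout:

```python
# Worst-case robustness: a task is robustly solved only if the generated
# code passes the unit tests under *every* perturbation of its prompt.
# The data layout below is assumed for illustration, not ReCode's format.

def robust_pass_rate(results):
    # results: task_id -> list of bools, one per perturbed prompt variant
    # (True if the generation for that variant passed the unit tests).
    solved = sum(1 for passes in results.values() if all(passes))
    return solved / len(results)

demo = {"t1": [True, True, True], "t2": [True, False, True]}
print(robust_pass_rate(demo))  # 0.5: t2 fails under one perturbation
```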
Submitted 20 December, 2022;
originally announced December 2022.
-
Multi-lingual Evaluation of Code Generation Models
Authors:
Ben Athiwaratkun,
Sanjay Krishna Gouda,
Zijian Wang,
Xiaopeng Li,
Yuchen Tian,
Ming Tan,
Wasi Uddin Ahmad,
Shiqi Wang,
Qing Sun,
Mingyue Shang,
Sujan Kumar Gonugondla,
Hantian Ding,
Varun Kumar,
Nathan Fulton,
Arash Farahani,
Siddhartha Jain,
Robert Giaquinto,
Haifeng Qian,
Murali Krishna Ramanathan,
Ramesh Nallapati,
Baishakhi Ray,
Parminder Bhatia,
Sudipta Sengupta,
Dan Roth,
Bing Xiang
Abstract:
We present new benchmarks for evaluating code generation models: MBXP, Multilingual HumanEval, and MathQA-X. These datasets cover over 10 programming languages and are generated using a scalable conversion framework that transpiles prompts and test cases from the original Python datasets into the corresponding data in the target language. Using these benchmarks, we assess the performance of code generation models in a multi-lingual fashion and uncover the generalization ability of language models on out-of-domain languages, the advantages of multi-lingual models over mono-lingual ones, the ability of few-shot prompting to teach a model new languages, and zero-shot translation abilities even in mono-lingual settings. Furthermore, we use our code generation model to perform large-scale bootstrapping to obtain synthetic canonical solutions in several languages, which can be used for other code-related evaluations such as code insertion, robustness, or summarization tasks. Overall, our benchmarks represent a significant step towards a deeper understanding of language models' code generation abilities. We publicly release our code and datasets at https://github.com/amazon-research/mxeval.
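Execution-based benchmarks of this kind are typically scored with the unbiased pass@k estimator of Chen et al. (2021); the paper's own evaluation code lives in the linked mxeval repository, but the estimator itself is short:

```python
# Unbiased pass@k estimator commonly used for execution-based code
# benchmarks: the expected fraction of tasks solved when drawing k of
# the n sampled generations, given that c of them passed the tests.
from math import comb

def pass_at_k(n, c, k):
    if n - c < k:          # every size-k draw contains a passing sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=20, c=3, k=1))  # 0.15
```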
Submitted 28 March, 2023; v1 submitted 26 October, 2022;
originally announced October 2022.
-
Zero Stability Well Predicts Performance of Convolutional Neural Networks
Authors:
Liangming Chen,
Long Jin,
Mingsheng Shang
Abstract:
The question of what kind of convolutional neural network (CNN) structure performs well is fascinating. In this work, we move one step closer to the answer by connecting zero stability and model performance. Specifically, we find that if a discrete solver of an ordinary differential equation is zero stable, the CNN corresponding to that solver performs well. We first give an interpretation of zero stability in the context of deep learning and then investigate the performance of existing first- and second-order CNNs under different zero-stability circumstances. Based on these preliminary observations, we provide a higher-order discretization to construct CNNs and propose a zero-stable network (ZeroSNet). To guarantee zero stability of ZeroSNet, we first deduce a structure that meets the consistency conditions and then give a zero-stable region for a training-free parameter. By analyzing the roots of a characteristic equation, we theoretically obtain the optimal coefficients of the feature maps. Empirically, we present results from three aspects: we provide extensive empirical evidence across different depths and datasets showing that the moduli of the characteristic equation's roots are key to the performance of CNNs that require historical features; our experiments show that ZeroSNet outperforms existing CNNs based on high-order discretization; and ZeroSNets show better robustness against noise in the input. The source code is available at \url{https://github.com/LongJin-lab/ZeroSNet}.
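Zero stability itself is easy to check numerically: for a linear multistep method, every root of the first characteristic polynomial must lie in the closed unit disk, and roots on the unit circle must be simple. A small standalone checker (independent of the paper's released code):

```python
# Zero-stability check for a linear multistep method: all roots of the
# characteristic polynomial must satisfy |root| <= 1, and any root with
# |root| = 1 must be simple. Coefficients are in descending order.
import numpy as np

def is_zero_stable(alpha, tol=1e-9):
    roots = np.roots(alpha)
    for i, r in enumerate(roots):
        if abs(r) > 1 + tol:
            return False
        if abs(abs(r) - 1) <= tol:
            # a repeated root on the unit circle violates zero stability
            if any(abs(s - r) <= tol for j, s in enumerate(roots) if j != i):
                return False
    return True

# rho(z) = z^2 - z: roots 0 and 1, zero stable (two-step Adams-Bashforth).
print(is_zero_stable([1, -1, 0]))   # True
# rho(z) = z^2 - 2z + 1: double root at 1, not zero stable.
print(is_zero_stable([1, -2, 1]))   # False
```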
Submitted 27 June, 2022;
originally announced June 2022.
-
Enhancing Knowledge Tracing via Adversarial Training
Authors:
Xiaopeng Guo,
Zhijie Huang,
Jie Gao,
Mingyu Shang,
Maojing Shu,
Jun Sun
Abstract:
We study the problem of knowledge tracing (KT), where the goal is to trace students' knowledge mastery over time so as to make predictions on their future performance. Owing to the good representation capacity of deep neural networks (DNNs), recent advances in KT have increasingly concentrated on exploiting DNNs to improve performance. However, we empirically reveal that DNN-based KT models run the risk of overfitting, especially on small datasets, leading to limited generalization. In this paper, leveraging current advances in adversarial training (AT), we propose an efficient AT-based KT method (ATKT) to enhance the KT model's generalization and thus push the limit of KT. Specifically, we first construct adversarial perturbations and add them to the original interaction embeddings as adversarial examples. The original and adversarial examples are then used to jointly train the KT model, forcing it not only to be robust to the adversarial examples but also to generalize better on the original ones. To better implement AT, we present an efficient attentive-LSTM model as the KT backbone, whose key component is a proposed knowledge hidden state attention module that adaptively aggregates information from previous knowledge hidden states while highlighting the importance of the current knowledge hidden state, yielding more accurate predictions. Extensive experiments on four public benchmark datasets demonstrate that our ATKT achieves new state-of-the-art performance. Code is available at \url{https://github.com/xiaopengguo/ATKT}.
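The construction of the adversarial examples follows the usual AT recipe on embeddings; a generic sketch (hyperparameters and the loss are placeholders, not the paper's exact settings):

```python
# Sketch of embedding-space adversarial training: perturb the interaction
# embedding along the loss gradient (L2-normalized step), then train on
# both the clean and perturbed versions. Generic recipe for illustration.
import torch

def adv_perturb(emb, loss, epsilon=0.05):
    # emb must have requires_grad=True and participate in `loss`
    grad = torch.autograd.grad(loss, emb, retain_graph=True)[0]
    delta = epsilon * grad / (grad.norm() + 1e-12)
    return (emb + delta).detach()

emb = torch.randn(4, 8, requires_grad=True)
loss = (emb ** 2).sum()            # stand-in for the KT prediction loss
adv_emb = adv_perturb(emb, loss)   # train on emb and adv_emb jointly
print(adv_emb.shape)
```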
Submitted 9 August, 2021;
originally announced August 2021.
-
Activated Gradients for Deep Neural Networks
Authors:
Mei Liu,
Liangming Chen,
Xiaohao Du,
Long Jin,
Mingsheng Shang
Abstract:
Deep neural networks often suffer from poor performance or even training failure due to ill-conditioning, the vanishing/exploding gradient problem, and the saddle point problem. In this paper, a novel method that applies a gradient activation function (GAF) to the gradient is proposed to handle these challenges. Intuitively, the GAF enlarges tiny gradients and restricts large gradients. Theoretically, this paper gives the conditions that the GAF needs to satisfy and, on this basis, proves that the GAF alleviates the problems mentioned above. In addition, this paper proves that the convergence rate of SGD with the GAF is faster than that without the GAF under some assumptions. Furthermore, experiments on CIFAR, ImageNet, and PASCAL visual object classes confirm the GAF's effectiveness. The experimental results also demonstrate that the proposed method can be adopted in various deep neural networks to improve their performance. The source code is publicly available at https://github.com/LongJin-lab/Activated-Gradients-for-Deep-Neural-Networks.
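One function with the stated behavior, enlarging tiny gradients while bounding large ones, is a scaled tanh applied elementwise before the optimizer step; this is shown only as an illustration, not necessarily the paper's exact GAF or constants:

```python
# Illustrative gradient activation function: g -> alpha * tanh(beta * g).
# Near zero it amplifies gradients by roughly alpha * beta; for large |g|
# it saturates at alpha, bounding the update. Constants are illustrative.
import torch

def apply_gaf(model, alpha=2.0, beta=1.0):
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad = alpha * torch.tanh(beta * p.grad)

model = torch.nn.Linear(10, 1)
model(torch.randn(3, 10)).sum().backward()
apply_gaf(model)   # call between backward() and optimizer.step()
```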
Submitted 9 July, 2021;
originally announced July 2021.
-
Syft 0.5: A Platform for Universally Deployable Structured Transparency
Authors:
Adam James Hall,
Madhava Jay,
Tudor Cebere,
Bogdan Cebere,
Koen Lennart van der Veen,
George Muraru,
Tongye Xu,
Patrick Cason,
William Abramson,
Ayoub Benaissa,
Chinmay Shah,
Alan Aboudib,
Théo Ryffel,
Kritika Prakash,
Tom Titcombe,
Varun Kumar Khare,
Maddie Shang,
Ionesio Junior,
Animesh Gupta,
Jason Paumier,
Nahua Kang,
Vova Manannikov,
Andrew Trask
Abstract:
We present Syft 0.5, a general-purpose framework that combines a core group of privacy-enhancing technologies that facilitate a universal set of structured transparency systems. This framework is demonstrated through the design and implementation of a novel privacy-preserving inference information flow where we pass homomorphically encrypted activation signals through a split neural network for inference. We show that splitting the model further up the computation chain significantly reduces the computation time of inference and the payload size of activation signals at the cost of model secrecy. We evaluate our proposed flow with respect to its provision of the core structural transparency principles.
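The trade-off described above can be pictured with a plain split network; the homomorphic encryption of the activation is omitted here, so this is only a sketch of the split-inference flow, not of Syft's API:

```python
# Split-inference sketch (encryption omitted): the client runs the first
# `cut` layers locally and ships only the activation to the server.
# Moving `cut` later shrinks the activation payload and the remote
# compute, at some cost to model secrecy, as the abstract reports.
import torch
import torch.nn as nn

layers = [nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64), nn.ReLU(),
          nn.Linear(64, 10)]
cut = 2                                  # split point in the layer list
client = nn.Sequential(*layers[:cut])
server = nn.Sequential(*layers[cut:])

x = torch.randn(1, 784)
activation = client(x)                   # what would be encrypted in Syft
logits = server(activation)
print(activation.shape, logits.shape)
```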
Submitted 27 April, 2021; v1 submitted 26 April, 2021;
originally announced April 2021.
-
V2F-Net: Explicit Decomposition of Occluded Pedestrian Detection
Authors:
Mingyang Shang,
Dawei Xiang,
Zhicheng Wang,
Erjin Zhou
Abstract:
Occlusion is very challenging in pedestrian detection. In this paper, we propose a simple yet effective method named V2F-Net, which explicitly decomposes occluded pedestrian detection into visible region detection and full-body estimation. V2F-Net consists of two sub-networks: a Visible region Detection Network (VDN) and a Full body Estimation Network (FEN). VDN localizes visible regions, and FEN estimates the full-body box on the basis of the visible box. Moreover, to further improve full-body estimation, we propose a novel Embedding-based Part-aware Module (EPM). By supervising the visibility of each part, the network is encouraged to extract features with essential part information. We show the effectiveness of V2F-Net through several experiments on two challenging datasets: V2F-Net achieves a 5.85% AP gain on CrowdHuman and a 2.24% MR-2 improvement on CityPersons over the FPN baseline. Moreover, the consistent gains on both one-stage and two-stage detectors validate the generalizability of our method.
Submitted 7 April, 2021;
originally announced April 2021.
-
Bridging Unpaired Facial Photos And Sketches By Line-drawings
Authors:
Meimei Shang,
Fei Gao,
Xiang Li,
Jingjie Zhu,
Lingna Dai
Abstract:
In this paper, we propose a novel method to learn face sketch synthesis models from unpaired data. Our main idea is to bridge the photo domain $\mathcal{X}$ and the sketch domain $\mathcal{Y}$ through the line-drawing domain $\mathcal{Z}$. Specifically, we map both photos and sketches to line-drawings by using a neural style transfer method, i.e. $F: \mathcal{X}/\mathcal{Y} \mapsto \mathcal{Z}$. Consequently, we obtain \textit{pseudo paired data} $(\mathcal{Z}, \mathcal{Y})$ and can learn the mapping $G:\mathcal{Z} \mapsto \mathcal{Y}$ in a supervised manner. In the inference stage, given a facial photo, we first transfer it to a line-drawing and then to a sketch by $G \circ F$. Additionally, we propose a novel stroke loss for generating different types of strokes. Our method, termed sRender, accords well with human artists' rendering process. Experimental results demonstrate that sRender can generate multi-style sketches and significantly outperforms existing unpaired image-to-image translation methods.
Submitted 25 February, 2021; v1 submitted 31 January, 2021;
originally announced February 2021.
-
An Efficient QP Variable Convolutional Neural Network Based In-loop Filter for Intra Coding
Authors:
Zhijie Huang,
Xiaopeng Guo,
Mingyu Shang,
Jie Gao,
Jun Sun
Abstract:
In this paper, a novel QP-variable convolutional neural network based in-loop filter is proposed for VVC intra coding. To avoid training and deploying multiple networks, we develop an efficient QP attention module (QPAM) which captures compression noise levels for different QPs and emphasizes meaningful features along the channel dimension. We then embed QPAM into the residual block and, based on it, design a network architecture that is controllable across different QPs. To make the model focus on examples that have heavier compression artifacts or are hard to restore, a focal mean square error (MSE) loss function is employed to fine-tune the network. Experimental results show that our approach achieves a 4.03\% BD-rate saving on average under the all-intra configuration, which is even better than QP-separate CNN models while using fewer model parameters.
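The focal MSE is not written out in the abstract; by analogy with focal loss, one plausible form reweights each pixel's squared error by its normalized magnitude, so hard-to-restore pixels dominate the objective:

```python
# Plausible focal MSE sketch: up-weight pixels with large reconstruction
# error, by analogy with focal loss. The paper's exact weighting may differ.
import torch

def focal_mse(pred, target, gamma=1.0, eps=1e-8):
    err = (pred - target) ** 2
    weight = (err / (err.max() + eps)) ** gamma   # hard pixels -> weight ~ 1
    return (weight * err).mean()

pred, target = torch.rand(1, 1, 8, 8), torch.rand(1, 1, 8, 8)
print(focal_mse(pred, target).item())
```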
Submitted 29 December, 2020;
originally announced December 2020.
-
Ape210K: A Large-Scale and Template-Rich Dataset of Math Word Problems
Authors:
Wei Zhao,
Mingyue Shang,
Yang Liu,
Liang Wang,
Jingming Liu
Abstract:
Automatic math word problem solving has attracted growing attention in recent years. The evaluation datasets used by previous works have serious limitations in terms of scale and diversity. In this paper, we release a new large-scale and template-rich math word problem dataset named Ape210K. It consists of 210K Chinese elementary-school-level math problems, 9 times the size of Math23K, the largest public dataset. Each problem contains both the gold answer and the equations needed to derive the answer. Ape210K is also of greater diversity, with 56K templates, 25 times more than Math23K. Our analysis shows that solving Ape210K requires not only natural language understanding but also commonsense knowledge. We expect Ape210K to become a benchmark for math word problem solving systems. Experiments indicate that state-of-the-art models on the Math23K dataset perform poorly on Ape210K. We propose a copy-augmented and feature-enriched sequence-to-sequence (seq2seq) model, which outperforms existing models by 3.2% on the Math23K dataset and serves as a strong baseline for Ape210K. The gap between human performance and our baseline model remains significant, calling for further research efforts. We make the Ape210K dataset publicly available at https://github.com/yuantiku/ape210k.
Submitted 8 October, 2020; v1 submitted 24 September, 2020;
originally announced September 2020.
-
TempNodeEmb: Temporal Node Embedding considering temporal edge influence matrix
Authors:
Khushnood Abbas,
Alireza Abbasi,
Dong Shi,
Niu Ling,
Mingsheng Shang,
Chen Liong,
Bolun Chen
Abstract:
Understanding the evolutionary patterns of real-world complex systems such as human interactions, transport networks, biological interactions, and computer networks has important implications for our daily lives. Predicting future links among the nodes in such networks reveals an important aspect of the evolution of temporal networks. To analyse networks, they are commonly mapped to adjacency matrices; however, a single adjacency matrix cannot capture complex relationships such as temporal patterns, so some approaches resort to simplified representations of temporal networks that yield high-dimensional and generally sparse matrices. As a result, adjacency matrices cannot be used directly by machine learning models for network- or node-level predictions. To overcome this problem, automated frameworks have been proposed for learning low-dimensional vectors for nodes or edges, and they constitute the state of the art in predicting temporal patterns such as future links. However, many of these models fail to consider the temporal dimension of the networks. This gap motivated us to propose a new node embedding technique that exploits the evolving nature of the networks, applying a simple three-layer graph neural network at each time step and extracting node orientations by the Givens angle method. We evaluated our algorithm against six state-of-the-art benchmark network embedding models on four real temporal network datasets, and the results show that our model outperforms the other methods in predicting future links in temporal networks.
Submitted 16 August, 2020;
originally announced August 2020.
-
Visual Perception Model for Rapid and Adaptive Low-light Image Enhancement
Authors:
Xiaoxiao Li,
Xiaopeng Guo,
Liye Mei,
Mingyu Shang,
Jie Gao,
Maojing Shu,
Xiang Wang
Abstract:
Low-light image enhancement is a promising solution to tackle the insufficient sensitivity of the human visual system (HVS) to perceive information in low-light environments. Previous Retinex-based works accomplish the enhancement task by estimating light intensity. Unfortunately, modelling light intensity alone can hardly simulate visual perception accurately, leading to imbalanced visual photosensitivity and weak adaptivity. To solve these problems, we explore the precise relationship between light source and visual perception and propose the visual perception (VP) model to acquire a precise mathematical description of visual perception. The core of the VP model is to decompose the light source into light intensity and light spatial distribution to describe the perception process of the HVS, offering refined estimation of illumination and reflectance. To reduce the complexity of the estimation process, we introduce rapid and adaptive $\boldsymbol{\beta}$ and $\boldsymbol{\gamma}$ functions to build an illumination and reflectance estimation scheme. Finally, we present an optimal determination strategy consisting of a \emph{cycle operation} and a \emph{comparator}. Specifically, the \emph{comparator} is responsible for determining the optimal enhancement result from the multiple enhanced results produced by the \emph{cycle operation}. By coordinating the proposed VP model, the illumination and reflectance estimation scheme, and the optimal determination strategy, we obtain a rapid and adaptive framework for low-light image enhancement. Extensive experimental results demonstrate that the proposed method achieves better performance in terms of visual comparison, quantitative assessment, and computational efficiency than the current state of the art.
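The β and γ functions themselves are not given in the abstract; as orientation only, classical adaptive gamma correction, a much simpler stand-in for this family of mappings, adjusts the exponent according to the image's exposure:

```python
# Stand-in for adaptive enhancement: gamma correction whose exponent
# depends on the image's mean brightness. The paper's beta/gamma functions
# are more elaborate; this only shows the general shape of such a mapping.
import numpy as np

def adaptive_gamma(img):
    # img: float array in [0, 1]; darker images get stronger correction
    gamma = np.interp(img.mean(), [0.0, 0.5], [0.4, 1.0])  # heuristic schedule
    return np.clip(img ** gamma, 0.0, 1.0)

dark = np.random.rand(64, 64) * 0.2
print(dark.mean(), adaptive_gamma(dark).mean())
```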
Submitted 14 May, 2020;
originally announced May 2020.
-
Semi-supervised Text Style Transfer: Cross Projection in Latent Space
Authors:
Mingyue Shang,
Piji Li,
Zhenxin Fu,
Lidong Bing,
Dongyan Zhao,
Shuming Shi,
Rui Yan
Abstract:
The text style transfer task requires the model to transfer a sentence of one style to another style while retaining its original content, a challenging problem that has long suffered from the shortage of parallel data. In this paper, we first propose a semi-supervised text style transfer model that combines small-scale parallel data with large-scale nonparallel data. With these two types of training data, we introduce a projection function between the latent spaces of the different styles and design two constraints to train it. We also introduce two other simple but effective semi-supervised methods for comparison. To evaluate the performance of the proposed methods, we build and release a novel style transfer dataset that alters sentences between the style of ancient Chinese poetry and modern Chinese.
Submitted 25 September, 2019;
originally announced September 2019.
-
Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?
Authors:
Mingyue Shang,
Zhenxin Fu,
Hongzhi Yin,
Bo Tang,
Dongyan Zhao,
Rui Yan
Abstract:
Natural language understanding is a challenging problem that covers a wide range of tasks. While previous methods generally train each task separately, we consider combining cross-task features to enhance task performance. In this paper, we incorporate logic information into the Story Cloze Test (SCT) with the help of the Natural Language Inference (NLI) task. Previous work on SCT considered various kinds of semantic information, such as sentiment and topic, but lacked the logic information between sentences, which is an essential element of stories. We therefore propose to extract the logic information over the course of the story to improve understanding of the whole story. The logic information is modeled with the help of the NLI task. Experimental results demonstrate the strength of the logic information.
Submitted 13 December, 2018;
originally announced December 2018.
-
Y^2Seq2Seq: Cross-Modal Representation Learning for 3D Shape and Text by Joint Reconstruction and Prediction of View and Word Sequences
Authors:
Zhizhong Han,
Mingyang Shang,
Xiyang Wang,
Yu-Shen Liu,
Matthias Zwicker
Abstract:
A recent method employs 3D voxels to represent 3D shapes, but this limits the approach to low resolutions due to the computational cost caused by the cubic complexity of 3D voxels; hence, the method suffers from a lack of detailed geometry. To resolve this issue, we propose Y^2Seq2Seq, a view-based model, to learn cross-modal representations by joint reconstruction and prediction of view and word sequences. Specifically, the network architecture of Y^2Seq2Seq bridges the semantic meaning embedded in the two modalities by two coupled `Y'-like sequence-to-sequence (Seq2Seq) structures. In addition, our novel hierarchical constraints further increase the discriminability of the cross-modal representations by employing more detailed discriminative information. Experimental results on cross-modal retrieval and 3D shape captioning show that Y^2Seq2Seq outperforms state-of-the-art methods.
Submitted 6 November, 2018;
originally announced November 2018.
-
View Inter-Prediction GAN: Unsupervised Representation Learning for 3D Shapes by Learning Global Shape Memories to Support Local View Predictions
Authors:
Zhizhong Han,
Mingyang Shang,
Yu-Shen Liu,
Matthias Zwicker
Abstract:
In this paper we present a novel unsupervised representation learning approach for 3D shapes, which is an important research challenge as it avoids the manual effort required for collecting supervised data. Our method trains an RNN-based neural network architecture to solve multiple view inter-prediction tasks for each shape. Given several nearby views of a shape, we define view inter-prediction as the task of predicting the center view between the input views, and reconstructing the input views in a low-level feature space. The key idea of our approach is to implement the shape representation as a shape-specific global memory that is shared between all local view inter-predictions for each shape. Intuitively, this memory enables the system to aggregate information that is useful to better solve the view inter-prediction tasks for each shape, and to leverage the memory as a view-independent shape representation. Our approach obtains the best results using a combination of L_2 and adversarial losses for the view inter-prediction task. We show that VIP-GAN outperforms state-of-the-art methods in unsupervised 3D feature learning on three large scale 3D shape benchmarks.
Submitted 6 November, 2018;
originally announced November 2018.
-
One "Ruler" for All Languages: Multi-Lingual Dialogue Evaluation with Adversarial Multi-Task Learning
Authors:
Xiaowei Tong,
Zhenxin Fu,
Mingyue Shang,
Dongyan Zhao,
Rui Yan
Abstract:
Automatically evaluating the performance of open-domain dialogue systems is a challenging problem. Recent work on neural network-based metrics has shown promising opportunities for automatic dialogue evaluation. However, existing methods mainly focus on monolingual evaluation, where the trained metric is not flexible enough to transfer across different languages. To address this issue, we propose an adversarial multi-task neural metric (ADVMT) for multi-lingual dialogue evaluation, with shared feature extraction across languages. We evaluate the proposed model in two different languages. Experiments show that the adversarial multi-task neural metric achieves a high correlation with human annotation, yielding better performance than monolingual metrics and various existing ones.
Submitted 8 May, 2018;
originally announced May 2018.
-
Algorithmic Complexity and Reprogrammability of Chemical Structure Networks
Authors:
Hector Zenil,
Narsis A. Kiani,
Ming-Mei Shang,
Jesper Tegnér
Abstract:
Here we address the challenge of profiling causal properties and tracking the transformation of chemical compounds from an algorithmic perspective. We explore the potential of applying a computational interventional calculus based on the principles of algorithmic probability to chemical structure networks. We profile the sensitivity of the elements and covalent bonds in a chemical structure network algorithmically, asking whether reprogrammability affords information about the thermodynamic and chemical processes involved in the transformation of different compound classes. We arrive at numerical results suggesting a correspondence between some physical, structural, and functional properties. Our methods are capable of separating chemical classes that reflect functional and natural differences without considering any information about atomic and molecular properties. We conclude that these methods, with their links to chemoinformatics via algorithmic probability, hold promise for future research.
Submitted 18 March, 2018; v1 submitted 16 February, 2018;
originally announced February 2018.
-
Identifying emerging influential Nodes in evolving networks: Exploiting strength of weak nodes
Authors:
Khushnood Abbas,
Mingsheng Shang,
Cai Shi-Min,
Xiaoyu Shi
Abstract:
Identifying nodes or items that will become influential or popular in the future on a network is of current interest to researchers. Most previous works focus on identifying leaders in time-evolving networks on the basis of either network structure or node activity, treated separately. In this paper, we propose a hybrid model that considers both a node's structural centrality and its recent activity together. We consider a node active when it receives more links within a given recent time window, rather than over its whole past life. Furthermore, our model is flexible enough to incorporate a structural rank such as PageRank, as well as webpage click information, as measures of a node's activity. To test the performance of our model, we adopt the PageRank algorithm and a linear preferential-attachment-based model as baseline methods. In experiments on three real data sets (MovieLens, Netflix, and a Facebook wall-post data set), we found that our model performs better at finding emerging influential nodes that were not popular in the past.
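The hybrid idea reduces to blending a structural rank with recent-window activity; a minimal sketch using PageRank for the structural part (the blend weight and window size are illustrative, not the paper's fitted values):

```python
# Hybrid emerging-influence score: structural centrality (PageRank here)
# blended with in-link activity inside a recent time window. The weight
# lam and the window length are illustrative placeholders.
import networkx as nx

def hybrid_scores(edges, t_now, window=10.0, lam=0.5):
    # edges: list of (src, dst, timestamp) directed links
    g = nx.DiGraph([(u, v) for u, v, _ in edges])
    pr = nx.pagerank(g)
    recent = {n: 0 for n in g}
    for _, v, t in edges:
        if t_now - t <= window:
            recent[v] += 1
    zmax = max(recent.values()) or 1
    return {n: lam * pr[n] + (1 - lam) * recent[n] / zmax for n in g}

edges = [("a", "b", 1.0), ("c", "b", 9.5), ("a", "c", 9.0), ("b", "a", 2.0)]
print(hybrid_scores(edges, t_now=10.0))
```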
Submitted 5 September, 2016;
originally announced September 2016.
-
A Fast Recommendation Algorithm for Social Tagging Systems: A Delicious Case
Authors:
Yao-Dong Zhao,
Shi-Min Cai,
Ming Tang,
Ming-Sheng Shang
Abstract:
The tripartite graph is one of the most common topological structures in social tagging systems such as Delicious, with three types of nodes (i.e., users, URLs, and tags). Traditional recommender systems developed for social tagging systems on the basis of collaborative filtering place very high demands on CPU time. In this paper, to overcome this drawback, we propose a novel approach that extracts non-overlapping user clusters and corresponding overlapping item clusters simultaneously through coarse clustering to accelerate user-based collaborative filtering, and we develop a fast recommendation algorithm for social tagging systems. The experimental results show that the proposed approach reduces processing time by more than $90\%$ while slightly improving accuracy compared with the ordinary user-based collaborative filtering algorithm.
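The acceleration pattern, restricting the neighbor search of user-based CF to a user's own cluster, can be sketched in a few lines; KMeans on the rating rows stands in here for the paper's coarse clustering over the tripartite graph:

```python
# Cluster-accelerated user-based CF sketch: score items using only peers
# from the target user's cluster instead of the full user base. KMeans is
# a simple stand-in for the paper's coarse clustering.
import numpy as np
from sklearn.cluster import KMeans

def clustered_cf_scores(R, user, k_clusters=4):
    # R: users x items implicit-feedback matrix (0/1 entries)
    labels = KMeans(n_clusters=k_clusters, n_init=10).fit_predict(R)
    peers = np.flatnonzero(labels == labels[user])
    sims = R[peers] @ R[user]          # co-occurrence similarity to peers
    return sims @ R[peers]             # item scores aggregated from peers

R = (np.random.rand(40, 15) < 0.2).astype(float)
print(clustered_cf_scores(R, user=0).shape)   # (15,) item scores
```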
Submitted 28 December, 2015;
originally announced December 2015.
-
Iterative resource allocation based on propagation feature of node for identifying the influential nodes
Authors:
Lin-Feng Zhong,
Jian-Guo Liu,
Ming-Sheng Shang
Abstract:
Identifying influential nodes in networks is one of the most promising research domains. In this paper, we present an improved iterative resource allocation (IIRA) method that considers the centrality information of neighbors and the influence of the spreading rate for a target node. Comparing with the results of the Susceptible-Infected-Recovered (SIR) model on four real networks, the IIRA method identifies influential nodes more accurately than the traditional IRA method. Specifically, in the Erdos network, Kendall's tau is enhanced by 23\% when the spreading rate is 0.12; in the Protein network, it is enhanced by 24\% when the spreading rate is 0.08.
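The reported gains are Kendall's τ correlations between the method's node ranking and the spreading ability simulated by the SIR model; the comparison step itself is a one-liner with SciPy (scores below are made-up placeholders):

```python
# Evaluation pattern from the abstract: compare a method's node scores
# with simulated SIR spreading sizes via Kendall's tau. Placeholder data.
from scipy.stats import kendalltau

method_scores = [0.9, 0.7, 0.4, 0.8, 0.2]   # e.g., IIRA score per node
sir_influence = [0.8, 0.6, 0.5, 0.9, 0.1]   # simulated spreading sizes

tau, p_value = kendalltau(method_scores, sir_influence)
print(f"Kendall's tau = {tau:.3f} (p = {p_value:.3f})")
```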
Submitted 12 May, 2015;
originally announced May 2015.
-
Group-based ranking method for online rating systems with spamming attacks
Authors:
Jian Gao,
Yu-Wei Dong,
Mingsheng Shang,
Shi-Min Cai,
Tao Zhou
Abstract:
The ranking problem has attracted much attention in real systems. How to design a robust ranking method is especially significant for online rating systems under the threat of spamming attacks. By building reputation systems for users, many well-performing ranking methods have been applied to address this issue. In this Letter, we propose a group-based ranking method that evaluates users' reputations based on their grouping behaviors. More specifically, users are assigned high reputation scores if they always fall into large rating groups. Results on three real data sets indicate that the present method is more accurate and robust than the correlation-based method in the presence of spamming attacks.
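The grouping idea can be prototyped directly: for each item, users giving the same (discrete) rating form a group, and a user's reputation is the average normalized size of the groups they fall into. This is a simplified sketch of the stated idea, not the Letter's exact formulation:

```python
# Group-based reputation sketch: users who consistently fall into large
# rating groups earn high reputation. Simplified from the Letter's method.
from collections import defaultdict

def group_reputation(ratings):
    # ratings: list of (user, item, rating) with discrete rating values
    groups = defaultdict(list)             # (item, rating) -> users
    raters = defaultdict(set)              # item -> all users who rated it
    for u, i, r in ratings:
        groups[(i, r)].append(u)
        raters[i].add(u)
    rep = defaultdict(list)
    for (i, r), users in groups.items():
        share = len(users) / len(raters[i])     # normalized group size
        for u in users:
            rep[u].append(share)
    return {u: sum(v) / len(v) for u, v in rep.items()}

ratings = [("u1", "i1", 5), ("u2", "i1", 5), ("u3", "i1", 1),
           ("u1", "i2", 4), ("u3", "i2", 4)]
print(group_reputation(ratings))   # u3's outlier vote lowers its score
```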
Submitted 4 January, 2015;
originally announced January 2015.
-
Uncovering the information core in recommender systems
Authors:
Wei Zeng,
An Zeng,
Hao Liu,
Ming-Sheng Shang,
Tao Zhou
Abstract:
With the rapid growth of the Internet and the overwhelming amount of information that people are confronted with, recommender systems have been developed to effectively support users' decision-making process in online systems. So far, much attention has been paid to designing new recommendation algorithms and improving existing ones. However, few works have considered the different contributions of different users to the performance of a recommender system. Such studies can help us improve recommendation efficiency by excluding irrelevant users. In this paper, we argue that each online system contains a group of core users who carry most of the information needed for recommendation. With them alone, the recommender system can already generate satisfactory recommendations. Our core-user extraction method enables recommender systems to achieve 90% of their accuracy while taking only 20% of the data into account.
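The abstract does not specify how the core is found; as a naive stand-in, one can keep the most active 20% of users and check how much accuracy survives, which illustrates the evaluation pattern if not the method:

```python
# Naive stand-in for core-user extraction: keep the 20% most active users
# and retain only their interactions. The paper's extraction method is
# more refined; this only illustrates the evaluation pattern.
from collections import Counter

def core_users(interactions, fraction=0.2):
    # interactions: list of (user, item) pairs
    activity = Counter(u for u, _ in interactions)
    n_core = max(1, int(len(activity) * fraction))
    return {u for u, _ in activity.most_common(n_core)}

data = [("u1", "a"), ("u1", "b"), ("u1", "c"), ("u2", "a"), ("u3", "b")]
core = core_users(data)
reduced = [(u, i) for u, i in data if u in core]
print(core, len(reduced), "of", len(data), "interactions kept")
```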
Submitted 25 February, 2014;
originally announced February 2014.