
Showing 1–10 of 10 results for author: Sim, R H L

Searching in archive cs.
  1. arXiv:2510.09240  [pdf, ps, other]

    cs.LG cs.GT

    Incentivizing Time-Aware Fairness in Data Sharing

    Authors: Jiangwei Chen, Kieu Thao Nguyen Pham, Rachael Hwee Ling Sim, Arun Verma, Zhaoxuan Wu, Chuan-Sheng Foo, Bryan Kian Hsiang Low

    Abstract: In collaborative data sharing and machine learning, multiple parties aggregate their data resources to train a machine learning model with better model performance. However, as the parties incur data collection costs, they are only willing to do so when guaranteed incentives, such as fairness and individual rationality. Existing frameworks assume that all parties join the collaboration simultaneou…

    Submitted 22 October, 2025; v1 submitted 10 October, 2025; originally announced October 2025.

    Comments: Accepted to NeurIPS 2025

  2. arXiv:2509.07909  [pdf, ps, other]

    cs.LG cs.AI cs.CL

    Uncovering Scaling Laws for Large Language Models via Inverse Problems

    Authors: Arun Verma, Zhaoxuan Wu, Zijian Zhou, Xiaoqiang Lin, Zhiliang Chen, Rachael Hwee Ling Sim, Rui Qiao, Jingtan Wang, Nhung Bui, Xinyuan Niu, Wenyang Hu, Gregory Kang Ruey Lau, Zi-Yu Khoo, Zitong Zhao, Xinyi Xu, Apivich Hemachandra, See-Kiong Ng, Bryan Kian Hsiang Low

    Abstract: Large Language Models (LLMs) are large-scale pretrained models that have achieved remarkable success across diverse domains. These successes have been driven by unprecedented complexity and scale in both data and computations. However, due to the high costs of training such models, brute-force trial-and-error approaches to improve LLMs are not feasible. Inspired by the success of inverse problems…

    Submitted 9 September, 2025; originally announced September 2025.

    Comments: Accepted at EMNLP Findings 2025

  3. arXiv:2505.05064  [pdf, ps, other]

    cs.LG

    WaterDrum: Watermarking for Data-centric Unlearning Metric

    Authors: Xinyang Lu, Xinyuan Niu, Gregory Kang Ruey Lau, Bui Thi Cam Nhung, Rachael Hwee Ling Sim, John Russell Himawan, Fanyu Wen, Chuan-Sheng Foo, See-Kiong Ng, Bryan Kian Hsiang Low

    Abstract: Large language model (LLM) unlearning is critical in real-world applications where it is necessary to efficiently remove the influence of private, copyrighted, or harmful data from some users. Existing utility-centric unlearning metrics (based on model utility) may fail to accurately evaluate the extent of unlearning in realistic settings such as when the forget and retain sets have semantically s…

    Submitted 2 February, 2026; v1 submitted 8 May, 2025; originally announced May 2025.

  4. DUPRE: Data Utility Prediction for Efficient Data Valuation

    Authors: Kieu Thao Nguyen Pham, Rachael Hwee Ling Sim, Quoc Phong Nguyen, See-Kiong Ng, Bryan Kian Hsiang Low

    Abstract: Data valuation is increasingly used in machine learning (ML) to decide the fair compensation for data owners and identify valuable or harmful data for improving ML models. Cooperative game theory-based data valuation, such as Data Shapley, requires evaluating the data utility (e.g., validation accuracy) and retraining the ML model for multiple data subsets. While most existing works on efficient e…

    Submitted 22 February, 2025; originally announced February 2025.

    Comments: 16 pages, 7 figures, accepted to AAMAS 2025

    Journal ref: Proc. 24th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS '25), Detroit, MI, USA, 19–23 May 2025, pp. 1557–1565
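    The cooperative-game valuation this entry builds on (Data Shapley) can be sketched with a plain Monte Carlo permutation estimator. The utility function, point names, and additive toy utility below are illustrative stand-ins under stated assumptions, not the paper's setup.

    ```python
    import random

    def monte_carlo_shapley(points, utility, num_perms=200, seed=0):
        """Estimate each point's Shapley value by sampling permutations.

        `utility` maps a tuple of points to a real-valued score, e.g. the
        validation accuracy of a model retrained on that subset.
        """
        rng = random.Random(seed)
        values = {p: 0.0 for p in points}
        for _ in range(num_perms):
            perm = list(points)
            rng.shuffle(perm)
            prev_u = utility(tuple())
            coalition = []
            for p in perm:
                coalition.append(p)
                u = utility(tuple(coalition))
                values[p] += u - prev_u  # marginal contribution of p
                prev_u = u
        return {p: v / num_perms for p, v in values.items()}

    # Toy additive utility: each point contributes a fixed weight,
    # so the Shapley estimate recovers the weights exactly.
    weights = {"a": 1.0, "b": 2.0, "c": 3.0}
    sv = monte_carlo_shapley(list(weights), lambda s: sum(weights[p] for p in s))
    ```

    Each permutation requires one utility evaluation (i.e. one retraining) per point, which is the cost that motivates predicting utilities instead of recomputing them.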

  5. arXiv:2406.14507  [pdf, other]

    cs.LG cs.AI

    On Newton's Method to Unlearn Neural Networks

    Authors: Nhung Bui, Xinyang Lu, Rachael Hwee Ling Sim, See-Kiong Ng, Bryan Kian Hsiang Low

    Abstract: With the widespread applications of neural networks (NNs) trained on personal data, machine unlearning has become increasingly important for enabling individuals to exercise their personal data ownership, particularly the "right to be forgotten" from trained NNs. Since retraining is computationally expensive, we seek approximate unlearning algorithms for NNs that return identical models to the ret…

    Submitted 27 August, 2024; v1 submitted 20 June, 2024; originally announced June 2024.
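    The Newton-step intuition behind this entry can be illustrated on least squares, where the loss is exactly quadratic, so a single Newton step on the retained data, starting from the full-data optimum, reproduces retraining exactly. This is a minimal sketch of the general idea, not the paper's neural-network method.

    ```python
    import numpy as np

    def newton_unlearn(theta, X_ret, y_ret):
        """One Newton step on the retained-data least-squares loss.

        For a quadratic loss this step lands exactly on the retrained
        minimiser, which is the intuition behind Newton-based unlearning.
        """
        H = X_ret.T @ X_ret                    # Hessian of 0.5*||X θ - y||²
        g = X_ret.T @ (X_ret @ theta - y_ret)  # gradient at current θ
        return theta - np.linalg.solve(H, g)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=50)
    theta_full = np.linalg.lstsq(X, y, rcond=None)[0]

    # "Forget" the last 10 rows: one Newton step vs. full retraining.
    X_ret, y_ret = X[:40], y[:40]
    theta_unlearned = newton_unlearn(theta_full, X_ret, y_ret)
    theta_retrained = np.linalg.lstsq(X_ret, y_ret, rcond=None)[0]
    ```

    For non-quadratic NN losses the step is only approximate, which is where degenerate Hessians and approximation error become the open issues.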

  6. arXiv:2406.14473  [pdf, other]

    cs.LG cs.CL

    Data-Centric AI in the Age of Large Language Models

    Authors: Xinyi Xu, Zhaoxuan Wu, Rui Qiao, Arun Verma, Yao Shu, Jingtan Wang, Xinyuan Niu, Zhenfeng He, Jiangwei Chen, Zijian Zhou, Gregory Kang Ruey Lau, Hieu Dao, Lucas Agussurja, Rachael Hwee Ling Sim, Xiaoqiang Lin, Wenyang Hu, Zhongxiang Dai, Pang Wei Koh, Bryan Kian Hsiang Low

    Abstract: This position paper proposes a data-centric viewpoint of AI research, focusing on large language models (LLMs). We start by making the key observation that data is instrumental in the developmental (e.g., pretraining and fine-tuning) and inferential stages (e.g., in-context learning) of LLMs, and yet it receives disproportionally low attention from the research community. We identify four specific…

    Submitted 20 June, 2024; originally announced June 2024.

    Comments: Preprint

  7. arXiv:2404.01676  [pdf, other]

    cs.LG

    Incentives in Private Collaborative Machine Learning

    Authors: Rachael Hwee Ling Sim, Yehong Zhang, Trong Nghia Hoang, Xinyi Xu, Bryan Kian Hsiang Low, Patrick Jaillet

    Abstract: Collaborative machine learning involves training models on data from multiple parties but must incentivize their participation. Existing data valuation methods fairly value and reward each party based on shared data or model parameters but neglect the privacy risks involved. To address this, we introduce differential privacy (DP) as an incentive. Each party can select its required DP guarantee and…

    Submitted 2 April, 2024; originally announced April 2024.

    Comments: Accepted to NeurIPS 2023
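    The trade-off between a party's chosen DP guarantee and the value of its contribution can be illustrated with the standard Gaussian mechanism (a generic DP construction, not necessarily the paper's specific scheme): a stricter epsilon forces a larger noise scale on whatever statistic the party shares.

    ```python
    import math
    import random

    def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
        """Release `value` with (epsilon, delta)-DP via the Gaussian mechanism.

        Noise scale follows the classical calibration
        sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon.
        """
        sigma = math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / epsilon
        return value + rng.gauss(0.0, sigma), sigma

    rng = random.Random(0)
    # A stricter guarantee (smaller epsilon) requires more noise, so the
    # shared statistic is less informative — the tension an incentive
    # scheme must price in.
    _, sigma_loose = gaussian_mechanism(0.0, 1.0, epsilon=2.0, delta=1e-5, rng=rng)
    _, sigma_strict = gaussian_mechanism(0.0, 1.0, epsilon=0.5, delta=1e-5, rng=rng)
    ```

    Since sigma scales as 1/epsilon, quartering epsilon quadruples the noise scale here.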

  8. arXiv:2312.11413  [pdf, other]

    cs.LG cs.AI

    DeRDaVa: Deletion-Robust Data Valuation for Machine Learning

    Authors: Xiao Tian, Rachael Hwee Ling Sim, Jue Fan, Bryan Kian Hsiang Low

    Abstract: Data valuation is concerned with determining a fair valuation of data from data sources to compensate them or to identify training examples that are the most or least useful for predictions. With the rising interest in personal data ownership and data protection regulations, model owners will likely have to fulfil more data deletion requests. This raises issues that have not been addressed by exis…

    Submitted 21 January, 2024; v1 submitted 18 December, 2023; originally announced December 2023.

  9. arXiv:2212.00630  [pdf, other]

    cs.LG cs.CY

    Probably Approximate Shapley Fairness with Applications in Machine Learning

    Authors: Zijian Zhou, Xinyi Xu, Rachael Hwee Ling Sim, Chuan-Sheng Foo, Kian Hsiang Low

    Abstract: The Shapley value (SV) is adopted in various scenarios in machine learning (ML), including data valuation, agent valuation, and feature attribution, as it satisfies their fairness requirements. However, as exact SVs are infeasible to compute in practice, SV estimates are approximated instead. This approximation step raises an important question: do the SV estimates preserve the fairness guarantees…

    Submitted 1 December, 2022; originally announced December 2022.

    Comments: 37th AAAI Conference on Artificial Intelligence (AAAI 2023)

  10. arXiv:2010.12797  [pdf, other]

    cs.LG cs.GT cs.MA stat.ML

    Collaborative Machine Learning with Incentive-Aware Model Rewards

    Authors: Rachael Hwee Ling Sim, Yehong Zhang, Mun Choon Chan, Bryan Kian Hsiang Low

    Abstract: Collaborative machine learning (ML) is an appealing paradigm to build high-quality ML models by training on the aggregated data from many parties. However, these parties are only willing to share their data when given enough incentives, such as a guaranteed fair reward based on their contributions. This motivates the need for measuring a party's contribution and designing an incentive-aware reward…

    Submitted 24 October, 2020; originally announced October 2020.

    Comments: 37th International Conference on Machine Learning (ICML 2020), Extended version with proofs and additional experimental results, 17 pages