
Showing 1–3 of 3 results for author: Fatsi, N

Searching in archive cs.
  1. arXiv:2602.21553  [pdf, ps, other]

    cs.IR cs.AI cs.LG

    Revisiting RAG Retrievers: An Information Theoretic Benchmark

    Authors: Wenqing Zheng, Dmitri Kalaev, Noah Fatsi, Daniel Barcklow, Owen Reinert, Igor Melnyk, Senthil Kumar, C. Bayan Bruss

    Abstract: Retrieval-Augmented Generation (RAG) systems rely critically on the retriever module to surface relevant context for large language models. Although numerous retrievers have recently been proposed, each built on different ranking principles such as lexical matching, dense embeddings, or graph citations, there remains a lack of systematic understanding of how these mechanisms differ and overlap. Ex…

    Submitted 24 February, 2026; originally announced February 2026.

  2. arXiv:2505.10900  [pdf, ps, other]

    cs.IR cs.AI

    Tuning-Free LLM Can Build A Strong Recommender Under Sparse Connectivity And Knowledge Gap Via Extracting Intent

    Authors: Wenqing Zheng, Noah Fatsi, Daniel Barcklow, Dmitri Kalaev, Steven Yao, Owen Reinert, C. Bayan Bruss, Daniele Rosa

    Abstract: Recent advances in recommendation with large language models (LLMs) often rely on either commonsense augmentation at the item-category level or implicit intent modeling on existing knowledge graphs. However, such approaches struggle to capture grounded user intents and to handle sparsity and cold-start scenarios. In this work, we present LLM-based Intent Knowledge Graph Recommender (IKGR), a novel…

    Submitted 11 March, 2026; v1 submitted 16 May, 2025; originally announced May 2025.

    Comments: Accepted in Learning on Graphs (LoG) 2025

  3. arXiv:2311.07763  [pdf, other]

    cs.LG cs.AI

    The Disagreement Problem in Faithfulness Metrics

    Authors: Brian Barr, Noah Fatsi, Leif Hancox-Li, Peter Richter, Daniel Proano, Caleb Mok

    Abstract: The field of explainable artificial intelligence (XAI) aims to explain how black-box machine learning models work. Much of the work centers around the holy grail of providing post-hoc feature attributions to any model architecture. While the pace of innovation around novel methods has slowed down, the question remains of how to choose a method, and how to make it fit for purpose. Recently, efforts…

    Submitted 13 November, 2023; originally announced November 2023.

    Comments: 6 pages (excluding refs and appendix)