
Showing 1–4 of 4 results for author: Bazinska, J

Searching in archive cs.
  1. arXiv:2511.21990  [pdf, ps, other]

    cs.LG cs.AI cs.CR

    A Safety and Security Framework for Real-World Agentic Systems

    Authors: Shaona Ghosh, Barnaby Simkin, Kyriacos Shiarlis, Soumili Nandi, Dan Zhao, Matthew Fiedler, Julia Bazinska, Nikki Pope, Roopa Prabhu, Daniel Rohrer, Michael Demoret, Bartley Richardson

    Abstract: This paper introduces a dynamic and actionable framework for securing agentic AI systems in enterprise deployment. We contend that safety and security are not merely fixed attributes of individual models but also emergent properties arising from the dynamic interactions among models, orchestrators, tools, and data within their operating environments. We propose a new way of identifying novel…

    Submitted 26 November, 2025; originally announced November 2025.

  2. arXiv:2510.22620  [pdf, ps, other]

    cs.CR cs.AI cs.LG

    Breaking Agent Backbones: Evaluating the Security of Backbone LLMs in AI Agents

    Authors: Julia Bazinska, Max Mathys, Francesco Casucci, Mateo Rojas-Carulla, Xander Davies, Alexandra Souly, Niklas Pfister

    Abstract: AI agents powered by large language models (LLMs) are being deployed at scale, yet we lack a systematic understanding of how the choice of backbone LLM affects agent security. The non-deterministic sequential nature of AI agents complicates security modeling, while the integration of traditional software with AI components entangles novel LLM vulnerabilities with conventional security risks. Exist…

    Submitted 24 February, 2026; v1 submitted 26 October, 2025; originally announced October 2025.

    Comments: Julia Bazinska and Max Mathys contributed equally

  3. arXiv:2501.07927  [pdf, ps, other]

    cs.LG cs.AI cs.CL cs.CR

    Gandalf the Red: Adaptive Security for LLMs

    Authors: Niklas Pfister, Václav Volhejn, Manuel Knott, Santiago Arias, Julia Bazińska, Mykhailo Bichurin, Alan Commike, Janet Darling, Peter Dienes, Matthew Fiedler, David Haber, Matthias Kraft, Marco Lancini, Max Mathys, Damián Pascual-Ortiz, Jakub Podolak, Adrià Romero-López, Kyriacos Shiarlis, Andreas Signer, Zsolt Terek, Athanasios Theocharis, Daniel Timbrell, Samuel Trautwein, Samuel Watts, Yun-Han Wu , et al. (1 additional author not shown)

    Abstract: Current evaluations of defenses against prompt attacks in large language model (LLM) applications often overlook two critical factors: the dynamic nature of adversarial behavior and the usability penalties imposed on legitimate users by restrictive defenses. We propose D-SEC (Dynamic Security Utility Threat Model), which explicitly separates attackers from legitimate users, models multi-step inter…

    Submitted 4 August, 2025; v1 submitted 14 January, 2025; originally announced January 2025.

    Comments: Niklas Pfister, Václav Volhejn and Manuel Knott contributed equally

  4. arXiv:2308.12093  [pdf, other]

    cs.LG cs.PF

    Cached Operator Reordering: A Unified View for Fast GNN Training

    Authors: Julia Bazinska, Andrei Ivanov, Tal Ben-Nun, Nikoli Dryden, Maciej Besta, Siyuan Shen, Torsten Hoefler

    Abstract: Graph Neural Networks (GNNs) are a powerful tool for handling structured graph data and addressing tasks such as node classification, graph classification, and clustering. However, the sparse nature of GNN computation poses new challenges for performance optimization compared to traditional deep neural networks. We address these challenges by providing a unified view of GNN computation, I/O, and m…

    Submitted 23 August, 2023; originally announced August 2023.