-
Measuring What AI Systems Might Do: Towards A Measurement Science in AI
Authors:
Konstantinos Voudouris,
Mirko Thalmann,
Alex Kipnis,
José Hernández-Orallo,
Eric Schulz
Abstract:
Scientists, policy-makers, business leaders, and members of the public care about what modern artificial intelligence systems are disposed to do. Yet terms such as capabilities, propensities, skills, values, and abilities are routinely used interchangeably and conflated with observable performance, with AI evaluation practices rarely specifying what quantity they purport to measure. We argue that capabilities and propensities are dispositional properties - stable features of systems characterised by counterfactual relationships between contextual conditions and behavioural outputs. Measuring a disposition requires (i) hypothesising which contextual properties are causally relevant, (ii) independently operationalising and measuring those properties, and (iii) empirically mapping how variation in those properties affects the probability of the behaviour. Dominant approaches to AI evaluation, from benchmark averages to data-driven latent-variable models such as Item Response Theory, bypass these steps entirely. Building on ideas from philosophy of science, measurement theory, and cognitive science, we develop a principled account of AI capabilities and propensities as dispositions, show why prevailing evaluation practices fail to measure them, and outline what disposition-respecting, scientifically defensible AI evaluation would require.
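As a toy illustration of step (iii), the sketch below varies a single, independently measured contextual property, probes a system repeatedly at each level, and fits a logistic curve for the probability of the target behaviour. The run_probe function and the logistic link are hypothetical choices made for the example, not the paper's proposal.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def run_probe(context_value: float) -> int:
        # Hypothetical probe: returns 1 if the target behaviour occurred, else 0.
        # A simulated system whose behaviour depends on the context stands in for
        # repeated queries to a real AI system.
        return int(rng.random() < 1 / (1 + np.exp(-4 * (context_value - 0.5))))

    context_levels = np.linspace(0.0, 1.0, 21)   # the independently measured property
    X, y = [], []
    for c in context_levels:
        for _ in range(50):                      # repeated trials per context level
            X.append([c])
            y.append(run_probe(c))

    # The fitted curve estimates P(behaviour | context), i.e. the dispositional profile.
    model = LogisticRegression().fit(np.array(X), np.array(y))
    print(model.predict_proba(context_levels.reshape(-1, 1))[:, 1].round(2))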
Submitted 10 March, 2026;
originally announced March 2026.
-
In-Context Function Learning in Large Language Models
Authors:
Elif Akata,
Konstantinos Voudouris,
Vincent Fortuin,
Eric Schulz
Abstract:
Large language models (LLMs) can learn from a few demonstrations provided at inference time. We study this in-context learning phenomenon through the lens of Gaussian Processes (GPs). We build controlled experiments where models observe sequences of multivariate scalar-valued function samples drawn from known GP priors. We evaluate prediction error in relation to the number of demonstrations and compare against two principled references: (i) an empirical GP-regression learner that gives a lower bound on achievable error, and (ii) the expected error of a 1-nearest-neighbor (1-NN) rule, which gives a data-driven upper bound. Across model sizes, we find that LLM learning curves are strongly influenced by the function-generating kernels and approach the GP lower bound as the number of demonstrations increases. We then study the inductive biases of these models using a likelihood-based analysis. We find that LLM predictions are most likely under less smooth GP kernels. Finally, we explore whether post-training can shift these inductive biases and improve sample-efficiency on functions sampled from GPs with smoother kernels. We find that both reinforcement learning and supervised fine-tuning can effectively shift inductive biases in the direction of the training data. Together, our framework quantifies the extent to which LLMs behave like GP learners and provides tools for steering their inductive biases for continuous function learning tasks.
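The two reference learners can be sketched as follows, assuming one-dimensional inputs and a fixed RBF kernel; the paper's experiments use multivariate inputs and several kernels, so this is only a schematic of the error bounds, not a replication.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    kernel = RBF(length_scale=0.3)

    # Draw one latent function from the known GP prior on a 1-D grid.
    grid = np.linspace(0, 1, 200).reshape(-1, 1)
    f = GaussianProcessRegressor(kernel=kernel).sample_y(grid, random_state=0).ravel()

    for n_demos in (2, 4, 8, 16, 32):
        idx = rng.choice(len(grid), size=n_demos, replace=False)
        X_train, y_train = grid[idx], f[idx]

        # (i) GP regression with the true kernel: lower bound on achievable error.
        gp = GaussianProcessRegressor(kernel=kernel, optimizer=None).fit(X_train, y_train)
        gp_err = np.mean((gp.predict(grid) - f) ** 2)

        # (ii) 1-nearest-neighbour rule: a data-driven upper bound.
        nn = KNeighborsRegressor(n_neighbors=1).fit(X_train, y_train)
        nn_err = np.mean((nn.predict(grid) - f) ** 2)

        print(f"n={n_demos:2d}  GP MSE={gp_err:.4f}  1-NN MSE={nn_err:.4f}")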
Submitted 12 February, 2026;
originally announced February 2026.
-
Can vision language models learn intuitive physics from interaction?
Authors:
Luca M. Schulze Buschoff,
Konstantinos Voudouris,
Can Demircan,
Eric Schulz
Abstract:
Pre-trained vision language models do not have good intuitions about the physical world. Recent work has shown that supervised fine-tuning can improve model performance on simple physical tasks. However, fine-tuned models do not appear to learn robust physical rules that can generalize to new contexts. Based on research in cognitive science, we hypothesize that models need to interact with an environment to properly learn its physical dynamics. We train models that learn through interaction with the environment using reinforcement learning. While learning from interaction allows models to improve their within-task performance, it fails to produce models with generalizable physical intuitions. We find that models trained on one task do not reliably generalize to related tasks, even if the tasks share visual statistics and physical principles, and regardless of whether the models are trained through interaction.
Submitted 5 February, 2026;
originally announced February 2026.
-
Exploring Major Transitions in the Evolution of Biological Cognition With Artificial Neural Networks
Authors:
Konstantinos Voudouris,
Andrew Barron,
Marta Halina,
Colin Klein,
Matishalin Patel
Abstract:
Transitional accounts of evolution emphasise a few changes that shape what is evolvable, with dramatic consequences for derived lineages. More recently it has been proposed that cognition might also have evolved via a series of major transitions that manipulate the structure of biological neural networks, fundamentally changing the flow of information. We used idealised models of information flow, artificial neural networks (ANNs), to evaluate whether changes in information flow in a network can yield a transitional change in cognitive performance. We compared networks with feed-forward, recurrent and laminated topologies, and tested their performance learning artificial grammars that differed in complexity, controlling for network size and resources. We documented a qualitative expansion in the types of input that recurrent networks can process compared to feed-forward networks, and a related qualitative increase in performance for learning the most complex grammars. We also noted how the difficulty in training recurrent networks poses a form of transition barrier and contingent irreversibility -- other key features of evolutionary transitions. Not all changes in network topology confer a performance advantage in this task set. Laminated networks did not outperform non-laminated networks in grammar learning. Overall, our findings show how some changes in information flow can yield transitions in cognitive performance.
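The topology contrast can be reproduced in a toy setting, for example a feed-forward versus a recurrent network classifying strings from a centre-embedded a^n b^n grammar. The grammar, network sizes, and training regime below are illustrative only and do not reproduce the paper's resource-matched comparison.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    MAX_LEN = 12

    def sample_batch(batch=64):
        # Strings a^n b^m padded with 0; label 1 when n == m (grammatical), else 0.
        xs, ys = [], []
        for _ in range(batch):
            n = torch.randint(1, MAX_LEN // 2 + 1, ()).item()
            m = n if torch.rand(()) < 0.5 else max(1, n - 1)
            seq = [1] * n + [2] * m + [0] * (MAX_LEN - n - m)
            xs.append(seq)
            ys.append(float(n == m))
        return torch.tensor(xs), torch.tensor(ys)

    class FeedForward(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Flatten(), nn.Linear(MAX_LEN * 3, 32),
                                     nn.ReLU(), nn.Linear(32, 1))
        def forward(self, x):
            return self.net(nn.functional.one_hot(x, 3).float()).squeeze(-1)

    class Recurrent(nn.Module):
        def __init__(self):
            super().__init__()
            self.rnn = nn.GRU(3, 32, batch_first=True)
            self.out = nn.Linear(32, 1)
        def forward(self, x):
            _, h = self.rnn(nn.functional.one_hot(x, 3).float())
            return self.out(h[-1]).squeeze(-1)

    for name, model in (("feed-forward", FeedForward()), ("recurrent", Recurrent())):
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        for _ in range(300):
            x, y = sample_batch()
            loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
        x, y = sample_batch(512)
        acc = ((model(x) > 0).float() == y).float().mean().item()
        print(f"{name}: held-out accuracy {acc:.2f}")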
Submitted 17 September, 2025;
originally announced September 2025.
-
Cognitive Science-Inspired Evaluation of Core Capabilities for Object Understanding in AI
Authors:
Danaja Rutar,
Alva Markelius,
Konstantinos Voudouris,
José Hernández-Orallo,
Lucy Cheke
Abstract:
One of the core components of our world models is 'intuitive physics' - an understanding of objects, space, and causality. This capability enables us to predict events, plan actions and navigate environments, all of which rely on a composite sense of objecthood. Despite its importance, there is no single, unified account of objecthood, though multiple theoretical frameworks provide insights. In the first part of this paper, we present a comprehensive overview of the main theoretical frameworks in objecthood research - Gestalt psychology, enactive cognition, and developmental psychology - and identify the core capabilities each framework attributes to object understanding, as well as what functional roles they play in shaping world models in biological agents. Given the foundational role of objecthood in world modelling, understanding objecthood is also essential in AI. In the second part of the paper, we evaluate how current AI paradigms approach and test objecthood capabilities compared to those in cognitive science. We define an AI paradigm as a combination of how objecthood is conceptualised, the methods used for studying objecthood, the data utilised, and the evaluation techniques. We find that, whilst benchmarks can detect that AI systems model isolated aspects of objecthood, they cannot detect when AI systems lack functional integration across these capabilities, and so do not yet solve the objecthood challenge in full. Finally, we explore novel evaluation approaches that align with the integrated vision of objecthood outlined in this paper. These methods are promising candidates for advancing from isolated object capabilities toward general-purpose AI with genuine object understanding in real-world contexts.
Submitted 7 April, 2025; v1 submitted 27 March, 2025;
originally announced March 2025.
-
Bringing Comparative Cognition To Computers
Authors:
Konstantinos Voudouris,
Lucy G. Cheke,
Eric Schulz
Abstract:
Researchers are increasingly subjecting artificial intelligence systems to psychological testing. But to rigorously compare their cognitive capacities with humans and other animals, we must avoid both over- and under-stating our similarities and differences. By embracing a comparative approach, we can integrate AI cognition research into the broader cognitive sciences.
Submitted 4 March, 2025;
originally announced March 2025.
-
Testing the Limits of Fine-Tuning for Improving Visual Cognition in Vision Language Models
Authors:
Luca M. Schulze Buschoff,
Konstantinos Voudouris,
Elif Akata,
Matthias Bethge,
Joshua B. Tenenbaum,
Eric Schulz
Abstract:
Pre-trained vision language models still fall short of human visual cognition. In an effort to improve visual cognition and align models with human behavior, we introduce visual stimuli and human judgments on visual cognition tasks, allowing us to systematically evaluate performance across cognitive domains under a consistent environment. We fine-tune models on ground truth data for intuitive physics and causal reasoning and find that this improves model performance in the respective fine-tuning domain. Furthermore, it can improve model alignment with human behavior. However, we find that task-specific fine-tuning does not contribute to robust human-like generalization to data with other visual characteristics or to tasks in other cognitive domains.
Submitted 30 May, 2025; v1 submitted 21 February, 2025;
originally announced February 2025.
-
PredictaBoard: Benchmarking LLM Score Predictability
Authors:
Lorenzo Pacchiardi,
Konstantinos Voudouris,
Ben Slater,
Fernando Martínez-Plumed,
José Hernández-Orallo,
Lexin Zhou,
Wout Schellaert
Abstract:
Despite possessing impressive skills, Large Language Models (LLMs) often fail unpredictably, demonstrating inconsistent success in even basic common sense reasoning tasks. This unpredictability poses a significant challenge to ensuring their safe deployment, as identifying and operating within a reliable "safe zone" is essential for mitigating risks. To address this, we present PredictaBoard, a novel collaborative benchmarking framework designed to evaluate the ability of score predictors (referred to as assessors) to anticipate LLM errors on specific task instances (i.e., prompts) from existing datasets. PredictaBoard evaluates pairs of LLMs and assessors by considering the rejection rate at different tolerance errors. As such, PredictaBoard stimulates research into developing better assessors and making LLMs more predictable, not only with a higher average performance. We conduct illustrative experiments using baseline assessors and state-of-the-art LLMs. PredictaBoard highlights the critical need to evaluate predictability alongside performance, paving the way for safer AI systems where errors are not only minimised but also anticipated and effectively mitigated. Code for our benchmark can be found at https://github.com/Kinds-of-Intelligence-CFI/PredictaBoard
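One way to read the pair-wise evaluation is sketched below: given an assessor's per-prompt success probabilities and the LLM's actual outcomes, compute the smallest rejection rate at which the accepted prompts stay within a tolerated error. The exact metric definitions live in the linked repository, so treat this as an approximation with simulated data.

    import numpy as np

    def rejection_rate_at_tolerance(pred_success, correct, tolerance):
        # Smallest fraction of prompts to reject (least confident first) such that
        # the error rate on the accepted prompts is at most `tolerance`.
        order = np.argsort(pred_success)
        correct = np.asarray(correct)[order]
        n = len(correct)
        for k in range(n + 1):
            accepted = correct[k:]
            err = 1.0 - accepted.mean() if len(accepted) else 0.0
            if err <= tolerance:
                return k / n
        return 1.0

    # Toy run with a hypothetical assessor and simulated LLM correctness.
    rng = np.random.default_rng(0)
    pred = rng.random(1000)                          # assessor's P(correct) per prompt
    outcome = (rng.random(1000) < pred).astype(int)  # simulated LLM outcomes
    for tol in (0.05, 0.10, 0.20):
        print(f"tolerance {tol:.2f}: rejection rate "
              f"{rejection_rate_at_tolerance(pred, outcome, tol):.2f}")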
Submitted 17 June, 2025; v1 submitted 20 February, 2025;
originally announced February 2025.
-
A little less conversation, a little more action, please: Investigating the physical common-sense of LLMs in a 3D embodied environment
Authors:
Matteo G. Mecattaf,
Ben Slater,
Marko Tešić,
Jonathan Prunty,
Konstantinos Voudouris,
Lucy G. Cheke
Abstract:
As general-purpose tools, Large Language Models (LLMs) must often reason about everyday physical environments. In a question-and-answer capacity, understanding the interactions of physical objects may be necessary to give appropriate responses. Moreover, LLMs are increasingly used as reasoning engines in agentic systems, designing and controlling their action sequences. The vast majority of research has tackled this issue using static benchmarks, comprised of text or image-based questions about the physical world. However, these benchmarks do not capture the complexity and nuance of real-life physical processes. Here we advocate for a second, relatively unexplored, approach: 'embodying' the LLMs by granting them control of an agent within a 3D environment. We present the first embodied and cognitively meaningful evaluation of physical common-sense reasoning in LLMs. Our framework allows direct comparison of LLMs with other embodied agents, such as those based on Deep Reinforcement Learning, and human and non-human animals. We employ the Animal-AI (AAI) environment, a simulated 3D virtual laboratory, to study physical common-sense reasoning in LLMs. For this, we use the AAI Testbed, a suite of experiments that replicate laboratory studies with non-human animals, to study physical reasoning capabilities including distance estimation, tracking out-of-sight objects, and tool use. We demonstrate that state-of-the-art multi-modal models with no finetuning can complete this style of task, allowing meaningful comparison to the entrants of the 2019 Animal-AI Olympics competition and to human children. Our results show that LLMs are currently outperformed by human children on these tasks. We argue that this approach allows the study of physical reasoning using ecologically valid experiments drawn directly from cognitive science, improving the predictability and reliability of LLMs.
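To make the embodiment setup concrete, here is a schematic perception-action loop in which a language model serves as the policy for an embodied agent. The DummyEnv class and the llm_choose_action helper are hypothetical stand-ins; they are not the Animal-AI interface or any particular model API used in the paper.

    from typing import List

    ACTIONS: List[str] = ["forward", "backward", "rotate_left", "rotate_right", "no-op"]

    def llm_choose_action(history: List[str], observation: str) -> str:
        # Placeholder: a real implementation would prompt a multimodal model with
        # the current observation (e.g., a rendered frame) plus the episode history
        # and parse one of the allowed action strings from its reply.
        return "forward"

    class DummyEnv:
        # Minimal stand-in so the loop runs end-to-end; in the paper the agent acts
        # inside the Animal-AI 3D arena through its own Python bindings.
        def reset(self):
            self.t = 0
            return "episode start"
        def step(self, action_id):
            self.t += 1
            return f"step {self.t}", 0.0, self.t >= 5   # observation, reward, done

    def run_episode(env, max_steps: int = 100) -> float:
        history, total_reward = [], 0.0
        observation = env.reset()
        for _ in range(max_steps):
            action = llm_choose_action(history, observation)
            observation, reward, done = env.step(ACTIONS.index(action))
            history.append(f"{action} -> reward {reward}")
            total_reward += reward
            if done:
                break
        return total_reward

    print(run_episode(DummyEnv()))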
Submitted 3 January, 2025; v1 submitted 30 October, 2024;
originally announced October 2024.
-
Centaur: a foundation model of human cognition
Authors:
Marcel Binz,
Elif Akata,
Matthias Bethge,
Franziska Brändle,
Fred Callaway,
Julian Coda-Forno,
Peter Dayan,
Can Demircan,
Maria K. Eckstein,
Noémi Éltető,
Thomas L. Griffiths,
Susanne Haridi,
Akshay K. Jagadish,
Li Ji-An,
Alexander Kipnis,
Sreejan Kumar,
Tobias Ludwig,
Marvin Mathony,
Marcelo Mattar,
Alireza Modirshanechi,
Surabhi S. Nath,
Joshua C. Peterson,
Milena Rmus,
Evan M. Russek,
Tankred Saanum
, et al. (15 additional authors not shown)
Abstract:
Establishing a unified theory of cognition has been a major goal of psychology. While there have been previous attempts to instantiate such theories by building computational models, we currently do not have one model that captures the human mind in its entirety. A first step in this direction is to create a model that can predict human behavior in a wide range of settings. Here we introduce Centaur, a computational model that can predict and simulate human behavior in any experiment expressible in natural language. We derived Centaur by finetuning a state-of-the-art language model on a novel, large-scale data set called Psych-101. Psych-101 reaches an unprecedented scale, covering trial-by-trial data from over 60,000 participants performing over 10,000,000 choices in 160 experiments. Centaur not only captures the behavior of held-out participants better than existing cognitive models, but also generalizes to new cover stories, structural task modifications, and entirely new domains. Furthermore, we find that the model's internal representations become more aligned with human neural activity after finetuning. Taken together, our results demonstrate that it is possible to discover computational models that capture human behavior across a wide range of domains. We believe that such models provide tremendous potential for guiding the development of cognitive theories and present a case study to demonstrate this.
Submitted 28 April, 2025; v1 submitted 26 October, 2024;
originally announced October 2024.
-
metabench -- A Sparse Benchmark of Reasoning and Knowledge in Large Language Models
Authors:
Alex Kipnis,
Konstantinos Voudouris,
Luca M. Schulze Buschoff,
Eric Schulz
Abstract:
Large Language Models (LLMs) vary in their abilities on a range of tasks. Initiatives such as the Open LLM Leaderboard aim to quantify these differences with several large benchmarks (sets of test items to which an LLM can respond either correctly or incorrectly). However, high correlations within and between benchmark scores suggest that (1) there exists a small set of common underlying abilities that these benchmarks measure, and (2) items tap into redundant information and the benchmarks may thus be considerably compressed. We use data from n > 5000 LLMs to identify the most informative items of six benchmarks, ARC, GSM8K, HellaSwag, MMLU, TruthfulQA and WinoGrande (with d = 28,632 items in total). From them we distill a sparse benchmark, metabench, that has less than 3% of the original size of all six benchmarks combined. This new sparse benchmark goes beyond point scores by yielding estimators of the underlying benchmark-specific abilities. We show that these estimators (1) can be used to reconstruct each original individual benchmark score with, on average, 1.24% root mean square error (RMSE), (2) reconstruct the original total score with 0.58% RMSE, and (3) have a single underlying common factor whose Spearman correlation with the total score is r = 0.94.
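As a rough illustration of the compression idea (metabench itself uses IRT-based item selection and ability estimation, not the shortcuts below), the sketch simulates item-level responses, keeps roughly 3% of items most correlated with the total score, and reports the reconstruction RMSE.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_models, n_items = 1000, 1000
    ability = rng.normal(size=n_models)
    difficulty = rng.normal(size=n_items)
    p = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))    # 1-PL-style
    responses = (rng.random((n_models, n_items)) < p).astype(float)
    full_score = responses.mean(axis=1) * 100                          # percent correct

    # Keep the ~3% of items whose responses correlate most with the full score.
    corrs = np.abs([np.corrcoef(responses[:, j], full_score)[0, 1] for j in range(n_items)])
    subset = np.argsort(corrs)[-30:]

    X_tr, X_te, y_tr, y_te = train_test_split(responses[:, subset], full_score, random_state=0)
    pred = Ridge().fit(X_tr, y_tr).predict(X_te)
    print(f"reconstruction RMSE: {np.sqrt(np.mean((pred - y_te) ** 2)):.2f} points")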
Submitted 20 February, 2025; v1 submitted 4 July, 2024;
originally announced July 2024.
-
The Animal-AI Environment: A Virtual Laboratory For Comparative Cognition and Artificial Intelligence Research
Authors:
Konstantinos Voudouris,
Ibrahim Alhas,
Wout Schellaert,
Matteo G. Mecattaf,
Ben Slater,
Matthew Crosby,
Joel Holmes,
John Burden,
Niharika Chaubey,
Niall Donnelly,
Matishalin Patel,
Marta Halina,
José Hernández-Orallo,
Lucy G. Cheke
Abstract:
The Animal-AI Environment is a unique game-based research platform designed to facilitate collaboration between the artificial intelligence and comparative cognition research communities. In this paper, we present the latest version of the Animal-AI Environment, outlining several major features that make the game more engaging for humans and more complex for AI systems. These features include interactive buttons, reward dispensers, and player notifications, as well as an overhaul of the environment's graphics and processing for significant improvements in agent training time and quality of the human player experience. We provide detailed guidance on how to build computational and behavioural experiments with the Animal-AI Environment. We present results from a series of agents, including the state-of-the-art deep reinforcement learning agent Dreamer-v3, on newly designed tests and the Animal-AI Testbed of 900 tasks inspired by research in the field of comparative cognition. The Animal-AI Environment offers a new approach for modelling cognition in humans and non-human animals, and for building biologically inspired artificial intelligence.
Submitted 17 January, 2025; v1 submitted 18 December, 2023;
originally announced December 2023.
-
Predictable Artificial Intelligence
Authors:
Lexin Zhou,
Pablo A. Moreno-Casares,
Fernando Martínez-Plumed,
John Burden,
Ryan Burnell,
Lucy Cheke,
Cèsar Ferri,
Alexandru Marcoci,
Behzad Mehrbakhsh,
Yael Moros-Daval,
Seán Ó hÉigeartaigh,
Danaja Rutar,
Wout Schellaert,
Konstantinos Voudouris,
José Hernández-Orallo
Abstract:
We introduce the fundamental ideas and challenges of Predictable AI, a nascent research area that explores the ways in which we can anticipate key validity indicators (e.g., performance, safety) of present and future AI ecosystems. We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems, and thus should be prioritised over performance. We formally characterise predictability, explore its most relevant components, illustrate what can be predicted, and describe alternative candidates for predictors, as well as the trade-offs between maximising validity and predictability. To illustrate these concepts, we present an array of examples covering diverse ecosystem configurations. Predictable AI is related to other areas of technical and non-technical AI research, but has distinctive questions, hypotheses, techniques and challenges. This paper aims to elucidate them, calls for identifying paths towards a landscape of predictably valid AI systems, and outlines the potential impact of this emergent field.
Submitted 6 January, 2025; v1 submitted 9 October, 2023;
originally announced October 2023.
-
Inferring Capabilities from Task Performance with Bayesian Triangulation
Authors:
John Burden,
Konstantinos Voudouris,
Ryan Burnell,
Danaja Rutar,
Lucy Cheke,
José Hernández-Orallo
Abstract:
As machine learning models become more general, we need to characterise them in richer, more meaningful ways. We describe a method to infer the cognitive profile of a system from diverse experimental data. To do so, we introduce measurement layouts that model how task-instance features interact with system capabilities to affect performance. These features must be triangulated in complex ways to be able to infer capabilities from non-populational data -- a challenge for traditional psychometric and inferential tools. Using the Bayesian probabilistic programming library PyMC, we infer different cognitive profiles for agents in two scenarios: 68 actual contestants in the AnimalAI Olympics and 30 synthetic agents for O-PIAAGETS, an object permanence battery. We showcase the potential for capability-oriented evaluation.
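A heavily simplified measurement layout can be written in PyMC roughly as follows: a latent capability per agent interacts with a known task-instance demand to set the probability of success. The real layouts triangulate several task features per capability; the single demand variable and the synthetic data here are assumptions made for the sketch.

    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(0)
    n_agents, n_items = 10, 40
    true_capability = rng.normal(0, 1, n_agents)
    demand = rng.normal(0, 1, n_items)                 # known task-instance feature

    agent_idx = np.repeat(np.arange(n_agents), n_items)
    item_idx = np.tile(np.arange(n_items), n_agents)
    p_true = 1 / (1 + np.exp(-(true_capability[agent_idx] - demand[item_idx])))
    success = (rng.random(p_true.shape) < p_true).astype(int)

    with pm.Model() as layout:
        capability = pm.Normal("capability", mu=0, sigma=1, shape=n_agents)
        p = pm.math.sigmoid(capability[agent_idx] - demand[item_idx])
        pm.Bernoulli("success", p=p, observed=success)
        idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

    # Posterior means recover each agent's latent capability on this single scale.
    print(idata.posterior["capability"].mean(dim=("chain", "draw")).values)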
Submitted 8 October, 2025; v1 submitted 21 September, 2023;
originally announced September 2023.
-
Harms from Increasingly Agentic Algorithmic Systems
Authors:
Alan Chan,
Rebecca Salganik,
Alva Markelius,
Chris Pang,
Nitarshan Rajkumar,
Dmitrii Krasheninnikov,
Lauro Langosco,
Zhonghao He,
Yawen Duan,
Micah Carroll,
Michelle Lin,
Alex Mayhew,
Katherine Collins,
Maryam Molamohammadi,
John Burden,
Wanru Zhao,
Shalaleh Rismani,
Konstantinos Voudouris,
Umang Bhatt,
Adrian Weller,
David Krueger,
Tegan Maharaj
Abstract:
Research in Fairness, Accountability, Transparency, and Ethics (FATE) has established many sources and forms of algorithmic harm, in domains as diverse as health care, finance, policing, and recommendations. Much work remains to be done to mitigate the serious harms of these systems, particularly those disproportionately affecting marginalized communities. Despite these ongoing harms, new systems are being developed and deployed which threaten the perpetuation of the same harms and the creation of novel ones. In response, the FATE community has emphasized the importance of anticipating harms. Our work focuses on the anticipation of harms from increasingly agentic systems. Rather than providing a definition of agency as a binary property, we identify 4 key characteristics which, particularly in combination, tend to increase the agency of a given algorithmic system: underspecification, directness of impact, goal-directedness, and long-term planning. We also discuss important harms which arise from increasing agency -- notably, these include systemic and/or long-range impacts, often on marginalized stakeholders. We emphasize that recognizing agency of algorithmic systems does not absolve or shift the human responsibility for algorithmic harms. Rather, we use the term agency to highlight the increasingly evident fact that ML systems are not fully under human control. Our work explores increasingly agentic algorithmic systems in three parts. First, we explain the notion of an increase in agency for algorithmic systems in the context of diverse perspectives on agency across disciplines. Second, we argue for the need to anticipate harms from increasingly agentic systems. Third, we discuss important harms from increasingly agentic systems and ways forward for addressing them. We conclude by reflecting on implications of our work for anticipating algorithmic harms from emerging systems.
Submitted 11 May, 2023; v1 submitted 20 February, 2023;
originally announced February 2023.