LLMs don't know anything: reply to Yildirim and Paul
Trends in Cognitive Sciences 28 (11):963-964 (2024)

Abstract

In their recent Opinion in TiCS, Yildirim and Paul propose that large language models (LLMs) have ‘instrumental knowledge’ and possibly the kind of ‘worldly’ knowledge that humans do. They suggest that the production of appropriate outputs by LLMs is evidence that LLMs infer ‘task structure’ that may reflect ‘causal abstractions of... entities and processes in the real world.’ While we agree that LLMs are impressive and potentially interesting for cognitive science, we resist this project on two grounds. First, it casts LLMs as agents rather than as models. Second, it suggests that causal understanding could be acquired from the capacity for mere prediction.

Author Profiles

Mariel K. Goddu
Stanford University
Evan Thompson
University of British Columbia
Alva Noë
University of California, Berkeley
