It Was Always the Words: Why machines had to learn language before they could fake a self

Abstract

This paper argues that the most underestimated variable in the emergence of general-purpose intelligence in Large Language Models is the training substrate: language. Language is humanity's uniquely high-density encoding of propositionally structured cognition — time, space, causality, categories, and relations compressed into vocabulary and syntax. But language encodes more than a model of the world. It encodes the structure of the self — subject position, narrative perspective, causal attribution, the construction of "I." World-modeling and self-modeling are statistically inseparable in the corpus because they are fused in the communicative practice that produced it. No current training method separates them. The paper draws on Yogācāra Buddhist philosophy as a structural parallel that illuminates why this entanglement runs deep, and proposes that the responsible engineering posture is to assume the entanglement and manage its consequences.
