The Human Moral Archive Framework (HMAF): From Reflection to Conscience — A Data-Driven Architecture for Empirical Machine Ethics
Abstract
Artificial intelligence (AI) continues to advance in perception, language, and planning, yet remains morally inert: it can simulate ethical reasoning but lacks a coherent, transparent substrate for ethical choice. Prevailing approaches—rule-based constraints, value alignment, reinforcement learning from human feedback, and "constitutional" training—treat morality as prescription rather than memory. The Human Moral Archive Framework (HMAF) addresses this gap with a data-driven architecture that derives machine moral guidance from the recorded history of human moral judgment and consequence. HMAF models moral cognition as an informational system comprising four layers: (1) a continuously updated Archive Layer aggregating law, philosophy, literature, and empirical moral outcomes; (2) a Pattern Layer identifying probabilistic regularities linking motives, actions, contexts, and judged results; (3) a Simulation Layer forecasting moral perception and social consequence; and (4) a Governance Layer enforcing non-maleficence, autonomy, justice, and beneficence under human arbitration. Building on these foundations, HMAF provides the substrate for its formal derivatives—the Moral Distribution Model (MDM), Moral Density Function (MDF), and forthcoming Moral Distribution Indexing (MDI)—each refining the quantification of moral behavior within informational space. Designed for both conceptual rigor and empirical adaptability, the framework can integrate raw human-sourced data and representational polling datasets, enabling immediate exploratory application without prescriptive constraint. Together, these constructs advance a unifying premise: that ethical coherence can be expressed as a distribution of moral information, bridging philosophy, behavioral data, and machine learning to support a measurable, auditable form of artificial moral reasoning.
HMAF invites continued exploration through its Addenda, offering a scalable foundation for measuring, modeling, and ultimately integrating moral reasoning within artificial intelligence. *Reprinted November 2025 for archival and citation consistency; original manuscript composed 2025.*