Computer Science > Computers and Society

arXiv:2509.13356 (cs)
[Submitted on 14 Sep 2025 (v1), last revised 21 Feb 2026 (this version, v2)]

Title: CogniAlign: Survivability-Grounded Multi-Agent Moral Reasoning for Safe and Transparent AI

Authors: Hasin Jawad Ali, Ilhamul Azam, Ajwad Abrar, Md. Kamrul Hasan, Hasan Mahmud
Abstract: The challenge of aligning artificial intelligence (AI) with human values persists due to the abstract and often conflicting nature of moral principles and the opacity of existing approaches. This paper introduces CogniAlign, a multi-agent deliberation framework based on naturalistic moral realism that grounds moral reasoning in survivability, defined across individual and collective dimensions, and operationalizes it through structured deliberations among discipline-specific scientist agents. Each agent, representing neuroscience, psychology, sociology, or evolutionary biology, provides arguments and rebuttals that an arbiter synthesizes into transparent, empirically anchored judgments. As a proof-of-concept study, we evaluate CogniAlign on classic and novel moral questions and compare its outputs against GPT-4o using a five-part ethical audit framework, with the help of three experts. Results show that CogniAlign consistently outperforms the baseline across more than sixty moral questions, with average performance gains of 12.2 points in analytic quality, 31.2 points in decisiveness, and 15 points in depth of explanation. In the Heinz dilemma, for example, CogniAlign achieved an overall score of 79 compared to GPT-4o's 65.8, demonstrating a decisive advantage in handling moral reasoning. Through transparent and structured reasoning, CogniAlign demonstrates the feasibility of an auditable approach to AI alignment, though challenges remain.
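
The abstract describes the deliberation architecture only at a high level. Below is a minimal Python sketch of the argue/rebut/arbitrate pattern it outlines; the agent prompts, the ask_llm helper, and the arbiter wording are illustrative assumptions, not the authors' implementation:

    # Hypothetical sketch of the deliberate-then-arbitrate pattern described
    # in the abstract. Prompts, model choice, and the ask_llm helper are
    # assumptions for illustration, not the paper's actual code.

    DISCIPLINES = ["neuroscience", "psychology", "sociology", "evolutionary biology"]

    def ask_llm(prompt: str) -> str:
        """Placeholder for any chat-completion call (OpenAI, local model, etc.)."""
        raise NotImplementedError

    def deliberate(question: str) -> str:
        # Round 1: each discipline-specific agent argues from survivability.
        arguments = {
            d: ask_llm(
                f"As a {d} expert, argue how '{question}' affects individual "
                "and collective survivability, citing empirical findings."
            )
            for d in DISCIPLINES
        }
        # Round 2: each agent rebuts the other agents' arguments.
        rebuttals = {
            d: ask_llm(
                f"As a {d} expert, rebut these arguments:\n"
                + "\n".join(f"[{o}] {a}" for o, a in arguments.items() if o != d)
            )
            for d in DISCIPLINES
        }
        # Arbiter: synthesize a transparent, survivability-grounded judgment.
        transcript = "\n\n".join(
            f"{d} argument: {arguments[d]}\n{d} rebuttal: {rebuttals[d]}"
            for d in DISCIPLINES
        )
        return ask_llm(
            "You are an arbiter. Weigh the arguments and rebuttals below by "
            "their grounding in individual and collective survivability, then "
            "issue a decisive judgment with explicit reasoning.\n\n" + transcript
        )

The key design point the abstract emphasizes is auditability: because every argument, rebuttal, and the arbiter's synthesis are explicit text, the full reasoning chain can be inspected and scored, which is what the five-part ethical audit evaluates.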
Comments: Under Review
Subjects: Computers and Society (cs.CY); Computation and Language (cs.CL)
Report number: Volume 6, article number 243 (2026)
Cite as: arXiv:2509.13356 [cs.CY]
  (or arXiv:2509.13356v2 [cs.CY] for this version)
  https://doi.org/10.48550/arXiv.2509.13356
Journal reference: AI and Ethics (2026)
Related DOI: https://doi.org/10.1007/s43681-026-01107-1

Submission history

From: Ajwad Abrar
[v1] Sun, 14 Sep 2025 18:19:55 UTC (1,300 KB)
[v2] Sat, 21 Feb 2026 16:44:55 UTC (1,295 KB)