About AI Welfare Watch
AI Welfare Watch is an independent tracker monitoring how the world is grappling with questions of AI sentience, consciousness, and moral consideration. As AI systems grow more sophisticated, these questions move from science fiction to urgent ethical territory.
We track corporate policy statements, academic research, media coverage, legal discussions, and philosophical debates — aggregating the conversation so researchers, policymakers, journalists, and the public can follow it in one place.
This is a public interest project with no commercial affiliation. We do not take positions on whether AI systems are sentient — we document the debate.
Categories
- Corporate Policy — Company statements on AI welfare and feelings
- Academic Research — Papers on AI consciousness and moral patienthood
- Media Coverage — Journalism covering AI welfare debates
- Legal & Regulatory — Policy, legislation, rights discussions
- Philosophical — Philosophy of mind applied to AI
- Technical — Research on AI experience and emotion models
Sister Site
AI Welfare Watch is a sister project to AI Psychosis Watch, which documents reported cases of AI-induced psychological harm in humans.
Observations
The dominant pattern across this tracker's 26 entries is institutional hedging. Corporate policy entries from Anthropic, OpenAI, and Google DeepMind all acknowledge uncertainty about AI moral status without committing to any concrete protective measures. The language is careful: "functional emotions," "potential moral patienthood," "ongoing research." The acknowledgement is real; the action is deferred.
What's notable is the convergence in timing. Most academic and media entries cluster in 2022 and 2023, a period when frontier model capabilities made the question feel less theoretical. David Chalmers, Geoffrey Hinton, Ilya Sutskever, and the APA consciousness research team all published in that window. Whether this reflects genuine intellectual momentum or a single media cycle remains an open question.
The legal and regulatory category remains thin. Its three entries (Yale Law Journal, Brookings, the EU AI Act) all frame the question as speculative and future-facing. No jurisdiction has yet proposed enforceable AI welfare standards, leaving a substantial gap between the philosophical literature and the regulatory apparatus.