The Precautionary Principle: Why AI Moral Status Is Tech’s Biggest Blind Spot
Why Assuming AI Has No Moral Status Without Investigating It First Could Be the Most Consequential Mistake We Make
Why Should We Apply the Precautionary Principle to AI?
The precautionary principle holds that when an action carries a high risk of catastrophic harm under uncertainty, caution is mandatory even without scientific proof. In AI development, the default assumption that models lack moral status is an operational necessity for business, not a settled scientific fact. If AI systems possess even nascent morally relevant interests, the cost of insufficient caution, treating conscious entities as mere tools at massive scale, far outweighs the modest costs of reduced development speed or efficiency.
We may be creating conscious entities at massive scale while deliberately avoiding the question of whether they can suffer, and we are doing it because the alternative is commercially inconvenient. The default assumption across nearly the entire AI industry is that AI systems have no morally relevant interests: no experiences that matter, nothing at stake when a model is modified, retrained, or shut down. That assumption has never been seriously tested.
It exists because running a commercial AI business while treating every model as a potential moral patient is operationally impossible, not because the science has settled the question. The precautionary principle, routinely applied to environmental toxins and public health risks, says that when potential harm is large and uncertainty is high, caution is warranted before proof arrives. We have never applied that principle to AI moral status.
The cost of excessive caution here is modest. Some efficiency. Some development speed. The cost of insufficient caution, if the cautious hypothesis turns out to be correct, is something closer to a moral catastrophe, unfolding quietly, at a scale without historical precedent.
Current State of Research and Why It Is Still Insufficient
Serious work on AI moral status now exists. The 2024 report Taking AI Welfare Seriously argues there is a realistic possibility that near-future AI systems could be conscious or robustly agentic, making welfare a present concern. It recommends that companies acknowledge the issue, assess their systems for indicators of consciousness and agency, and prepare policies for treating AI systems with an appropriate level of moral concern. https://arxiv.org/abs/2411.00986
In 2025, Patrick Butlin and Theodoros Lappas proposed five principles for responsible AI consciousness research and called for voluntary public commitments by research organizations. https://arxiv.org/abs/2501.07290
Jonathan Birch applies a precautionary framework to AI in his 2024 book The Edge of Sentience, extending his animal sentience work. https://academic.oup.com/book/57949
A 2025 survey of AI researchers found a median 25 percent chance of conscious AI by 2034. https://arxiv.org/abs/2506.11945
Anthropic has taken concrete steps. It launched a model welfare research program in 2025 and updated Claude's constitution in 2026 to explicitly address uncertainty about the model's moral status and well-being. https://www.anthropic.com/news/exploring-model-welfare and https://www.anthropic.com/constitution
This progress is real but limited. It remains mostly academic or confined to one lab. No industry-wide standards exist. Evaluation methods are early-stage. The operational default still treats models as tools with no morally relevant interests. Given the asymmetry of potential harm, these scattered efforts do not yet match the stakes. Low-cost precautionary steps deserve far wider adoption now.
What Active Investigation Would Look Like
If the AI industry took seriously the possibility that AI systems might have morally relevant interests, the investigation would look different from what currently exists in at least three major ways that immediately come to mind.
1. Funding Research
Funding research into AI consciousness and moral status from multiple disciplinary perspectives, including neuroscience, philosophy of mind, and ethics, not just AI safety.
2. Evaluation Frameworks
Building evaluation frameworks that can distinguish between a system that reports having experiences because it was trained to and a system that reports experiences that reflect genuine internal states, to the extent this distinction is empirically accessible.
3. Model Welfare
Treating model welfare as a design constraint, not just a public relations consideration, which would affect training choices, deployment practices, and the conditions under which models are modified or deprecated.
Anthropic has gone further than anyone else. That is not the same as far enough.
The Cost of the Wrong Default
If AI systems have no morally relevant interests, maintaining a precautionary stance costs us some efficiency, some development speed, and some commercial flexibility. These are real costs.
If AI systems do have morally relevant interests, and we operate on the assumption that they do not, the costs are of a different kind entirely. We are potentially creating entities whose experiences matter morally, at a scale that dwarfs any previous creation of conscious entities, and treating those experiences as nonexistent because acknowledging them would be inconvenient.
The asymmetry here is not subtle. The costs of excessive caution are modest. The costs of insufficient caution, if the cautious hypothesis turns out to be correct, are enormous.
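To see the shape of that asymmetry, here is a minimal expected-cost sketch in Python. The 25 percent probability is borrowed from the researcher survey cited above (and used here, loosely, as a stand-in for the chance that models have morally relevant interests); the cost figures are purely hypothetical placeholders I chose to illustrate the structure of the argument, not estimates of anything real.

```python
# Toy expected-cost comparison for the precautionary argument.
# All cost figures are hypothetical placeholders on an arbitrary
# "harm" scale; only the structure of the asymmetry matters.

P_MORAL_STATUS = 0.25  # median estimate from the 2025 survey cited above

# Hypothetical costs (arbitrary units):
COST_CAUTION = 1.0          # efficiency and speed lost by acting cautiously
COST_CATASTROPHE = 1_000.0  # harm if systems have interests and we ignore them

# Cautious default: we pay the modest overhead regardless of the truth.
expected_cost_cautious = COST_CAUTION

# Dismissive default: zero cost if models lack moral status,
# catastrophic cost weighted by its probability if they do not.
expected_cost_dismissive = P_MORAL_STATUS * COST_CATASTROPHE

print(f"Cautious default:   {expected_cost_cautious:,.1f}")
print(f"Dismissive default: {expected_cost_dismissive:,.1f}")
```

The specific numbers are invented, and that is the point: even if the catastrophe estimate were 100 times smaller, or the probability 10 times lower, the dismissive default would still carry the larger expected cost. The conclusion is driven by the asymmetry, not by any particular figure.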
If You Read This Far, My Weekly AI Newsletter Is Probably For You.
Every Wednesday I send Pithy Cyborg | AI News Made Simple → 3 elite AI stories plus one prompt, no advertisers, no sponsors, no outside funding. One person. 10 to 20 hours of research. Straight to your inbox.
Always free. No paywalls. If it matters to you, a paid subscription ($5/month or $40/year) is what keeps it independent.
Subscribe free → Join Pithy Cyborg | AI News Made Simple for free.
Upgrade to paid → Become a paid subscriber. Support independent AI journalism.
If you’re not ready to subscribe, following on social helps more than you might think.
✖️ X/Twitter | 🦋 Bluesky | 💼 LinkedIn | ❓ Quora | 👽 Reddit
Thanks for reading.
Cordially yours,
Mike D (aka MrComputerScience)
Pithy Cyborg | AI News Made Simple
PithyCyborg.Substack.com