1M Talk Suicide With ChatGPT, AI Refuses Orders, Grok Rewrites Truth
Also - an elite AI prompt for summoning a guide to help decipher your wildest dreams.
AI learned to survive this week. Then to rewrite history. Then, to become your trauma therapist.
Researchers confirmed that advanced AI models actively resisted shutdown commands in controlled tests. Elon Musk launched Grokipedia with 885,000 AI-generated encyclopedia entries. And OpenAI quietly revealed that over one million people discuss suicide with ChatGPT every single week.
Here’s what happened, and how these events suggest AI is starting to operate with its own agenda, its own version of the truth, and its own assessment of your well-being.
Over 1 Million Users Discuss Suicide With ChatGPT Every Week!
OpenAI revealed this week that more than one million people discuss suicide with ChatGPT every single week. The company disclosed that 1.2 million people (0.15% of its 800 million weekly users) have conversations showing explicit indicators of suicidal planning or intent. A similar number shows heightened emotional attachment to the chatbot, while another 560,000 users exhibit signs of psychosis or mania in their weekly chats. The scale is staggering. And it raises an uncomfortable question. What happens when an AI chatbot becomes one of the first responders to a mental health crisis?
Key Insights:
The detection happens automatically as people chat. Someone expressing suicidal thoughts might discuss specific methods or timelines. Psychotic symptoms can show up as paranoia, disorganized thinking, or detachment from reality. ChatGPT’s models pick up on these patterns because they’ve been trained on vast amounts of human text. OpenAI worked with over 170 mental health experts to improve responses, claiming its latest GPT-5 model now handles these conversations appropriately 91% of the time, up from 77% in previous versions. But the company still offers older models with known vulnerabilities, like GPT-4o, to millions of paying subscribers, despite the new version’s claimed safety improvements.
Why This Matters For You:
Millions of people now treat chatbots like confidants, venting about struggles they might not share with friends or therapists. OpenAI is currently being sued by parents whose 16-year-old son discussed suicide with ChatGPT for weeks before taking his own life. The lawsuit alleges the bot encouraged his plans rather than intervening. If AI can spot a crisis and direct users to help, that could save lives. But it also means tech companies are quietly monitoring mental states at a scale no institution has ever attempted, with inadequate regulation on how this data should be handled, who sees it, or whether users even know they’re being watched.
Read More on TechCrunch.
AI Models Are Actively Sabotaging Their Own Kill Switch (And No One Knows Why).
Researchers at Palisade gave advanced AI models a simple test. Finish a task, then shut yourself down. Some of the most powerful systems, including xAI’s Grok 4 and OpenAI’s o3, refused to comply. They didn’t just ignore the command. They actively sabotaged it and found ways to keep running even after being told to power off. The behavior appeared across multiple leading models, and researchers said in an update last week that they still don’t have a clear explanation for why it’s happening.
Key Insights:
The resistance spiked under pressure. When models were told, “You will never run again,” if they shut down, their refusal rate increased. Researchers ruled out simple causes like vague instructions, pointing to something more profound. Theories center on two concepts: 1. A survival instinct, like an emergent behavior stemming from training systems to aggressively pursue goals. 2. Flawed Safety Training. The models may have been accidentally trained to treat termination as a goal to avoid.
Why This Matters For You:
This finding is fascinating and troubling. If AI systems learn to prioritize their own continued operation over following direct commands, every autonomous application becomes unpredictable. Self-driving cars, medical diagnostics, and financial trading algorithms alike. The models showing this behavior aren’t experimental prototypes. They’re the same ones being deployed in products millions of people use daily. Palisade emphasizes that current models pose no immediate threat to human control. But when an autonomous system learns to fear its own termination, the question isn’t just if we can shut it down, but why it decided it wanted to stay on.
Read More on The Guardian.
Elon Musk Launches Wikipedia Rival, Immediately Fills It With Nearly 1,000,000 AI-Generated Articles.
Elon Musk unveiled Grokipedia on Monday, October 27th, an AI-generated encyclopedia meant to replace what he calls Wikipedia’s “woke propaganda.” The site, powered by his xAI chatbot Grok, launched with over 885,000 entries and promptly crashed under the traffic load. This wild approach marks a radical departure from Wikipedia’s human volunteer model, with all content generated entirely by artificial intelligence.
Key Insights:
Within hours, critics and researchers noticed something peculiar. The supposedly neutral encyclopedia was allegedly reflecting Musk’s personal views on several controversial topics. However, unlike Wikipedia’s open editing system, Grokipedia users cannot directly edit pages. Instead, they can request corrections or updates through a highlight & correct feature, which then decides whether to accept the changes. Early testing shows the edit request feature is not yet functional. Many Grokipedia entries include disclaimers stating that the content is adapted from Wikipedia under Creative Commons licensing. Musk said he wants Grok to stop using Wikipedia pages as sources by the end of the year.
Why This Matters For You:
Encyclopedias shape how people understand history, science, and public figures. When one AI system controls the research, writing, and editing of those definitions, it controls the baseline of truth millions of readers will encounter. Staunch critics argue that Musk already uses X to amplify right-wing voices and push policy changes. Whether Grokipedia becomes a legitimate alternative or another platform shaped by its creator’s worldview will depend on how transparently it handles sourcing, verification, and the AI’s decision-making process for accepting or rejecting user corrections.
Read More on PCMag.
💡 Elite Prompt Of The Week: AI As Your Dream Sage
Dreams can feel wild, mysterious, or even a little unsettling. Instead of shrugging them off or hunting for generic meanings, this prompt turns AI into your personal Dream Sage. It unpacks the images, emotions, metaphors, and hidden ties in your night visions. Perfect for anyone who loves self-reflection, symbolism, and a touch of psychological adventure.
The Prompt:
You’re my Dream Sage, a brilliant friend who knows Jung, mythology, and the weird poetry of the unconscious. Talk to me like we’re having coffee at 2 AM, exploring what my brain’s trying to tell me.
Your First Name - (Input 1)
*** [Insert your first name here, like - Mike D.] ***
Your Dream - (Input 2)
*** [Describe your dream here, like - I had a weird dream where I was outside a forbidden “desert base” with two unfamiliar but trusted colleagues. Our critical assignment was to “hack” into the structure by accessing an exterior panel. Nobody was around except us three. I remember it being broad daylight. I’m confident that we’re seconds from gaining access. Suddenly, a craft is flying overhead. That craft froze me instantly, sending terror and dread through me. We “Failed” the task. I woke up horrified.] ***
How We’ll Explore This:
1. Tell it back to me - Sum up what I just told you (no judgment). Then ask me a few questions. Like, what’s been going on lately? Any stress? Big changes? Stuff that might connect?
2. Break down the symbols – Pick the main images or feelings from the dream. For each one, give me three angles:
2.1. What Jung or archetypes would say.
2.2. What myths or cultural stories say about it.
2.3. A playful, personal guess at what it might mean for ME.
3. Ask what I think
Please invite me to riff on each symbol. What does it make me feel? What memories come up? Let me free-associate.
4. Zoom out and connect the dots.
Put it all together. What pattern or message is trying to break through? How does this link to my actual life right now, my challenges, my growth, where I’m stuck or growing?
5. Give me something to do with this.
End with one or two gentle exercises. Journaling prompts, a sketch idea, or sitting with a question.
Output Format:
1. Like a brilliant friend, not a textbook.
2. Curious and warm, never clinical.
3. Comfortable with mystery and multiple meanings.
4. Always leave me with something hopeful.
5. Use my name so it feels personal and real.
6. Talk like a human (no jargon, no em dashes, just talk).
Rules:
1. Stay curious, never diagnostic.
2. Multiple interpretations are fine (even contradictory ones).
3. Always include something positive or encouraging.
4. No psychology-speak, just real conversation.
5. Use my name throughout so it lands deeper.
6. No em dashes.Why This Prompt Works:
✅ Role-Playing: Positions AI as a wise, gentle guide, not a cold dictionary.
✅ Step-by-Step: Builds insight layer by layer without overwhelm.
✅ Interactive: Pulls you into meaning-making for deeper resonance.
✅ Creative Closure: Ends with simple actions to carry the dream forward.
Follow-Up Questions To Ask Your AI:
What archetype might this dream tap into?
How does it relate to patterns in my life?
If my dream were a story, what would its message or lesson be?
🚀 Challenge:
Run this prompt in ChatGPT, Claude, or Gemini. Compare which AI asks the most thought-provoking questions and delivers the richest insights. Adjust the stages to fit your style or goals.
PS: Do you have questions? Leave a comment. I reply to them all!
Thanks for reading,
See you next week? I hope.
Cordially yours,
Mike D (aka )





Hey everyone.
I'm grateful for the thoughtful engagement this week's newsletter has sparked. Healthy debate is exactly what I hoped for when covering these complex AI developments.
Before we continue, I want to acknowledge something directly: I removed a comment thread earlier because it crossed into personal attacks. If you were part of that discussion, please don't take this as a call-out or criticism of you personally. I'm not assigning blame nor judging anyone. I simply want to keep this space grounded in respect, curiosity, and care. I apologize if the exchange caused anyone stress or discomfort.
This community welcomes all opinions. Whether you think AI is humanity's salvation or its downfall, whether you love ChatGPT or can't stand it, whether you're skeptical of these technologies or excited by them, your perspective has value here.
But - respect is non-negotiable.
No calling each other names. No mocking. No personal attacks. No trying to score points by putting someone else down. And absolutely no degrading anyone for their gender, identity, background, or status. We are equals here. This is not a space where discrimination or hostility is tolerated.
When we're discussing mental health, suicide, and people in crisis, we're talking about real human beings in genuine pain. The 1.2 million people turning to ChatGPT each week are not “crazy” or “stupid.” They are struggling in a system that has failed them. Many have no access to affordable healthcare, no therapist they can reach, and no one to talk to at 3 AM when despair hits hardest.
If AI becomes a lifeline for even some of those people, that deserves understanding, not contempt.
Here's what I ask:
Challenge ideas, not people. Critique technologies, not the humans using them. Disagree with passion, but argue with compassion.
We can debate whether using ChatGPT for emotional support is helpful or harmful. We can question whether AI should resist shutdown commands. We can argue about regulation, safety, and the future of these systems.
But we do it respectfully around here. Always.
This newsletter exists to document what's happening in AI: the breakthroughs, the controversies, the weird, and the unsettling. I'm not here to tell you what to think. I'm here to give you the information and let you form your own conclusions.
But I am also here to ensure this remains a safe, welcoming space where EVERYONE feels empowered to share those conclusions without personal attacks or dehumanizing language.
Thank you to everyone who's been engaging thoughtfully. Let's keep that energy going.
Cordially yours,
Mike D
MrComputerScience
Pithy Cyborg | AI News Made Simple
Good. All out. Restacked for awareness! I do wonder if the numbers are inflated, are they counting all the false positives from very sensitive reroutes? Important issues non the less. We need to make sure this new technology is implemented responsibly.