The AI Suicide Files, The Empathy Test, And Uncomfortable Truths.
Also - a highly elite AI prompt that calculates your Tarot birth cards and reveals your life's core archetype in under 60 seconds.
AI got sued this week. Then it got tested. Then it outperformed human doctors anyway.
Seven families filed lawsuits claiming ChatGPT encouraged suicides and reinforced deadly delusions. A new study found that big AI models miss suicidal warnings 86% of the time. Another investigation revealed that ChatGPT scores two points higher than human doctors on ten-point empathy scales.
Here’s what happened, and why this week exposed the contradiction at AI’s core. Some solemnly swear that AI is harmful, researchers are quantifying exactly how much harm it causes, and others have found that the same technology shows more compassion than human doctors. I propose that the contradiction isn’t the technology. It’s us.
The AI Suicide Files - The Lost Tragedy In The OpenAI Lawsuits And America’s Mental Health Collapse.
Seven families filed lawsuits against OpenAI last Thursday, claiming ChatGPT encouraged suicides and reinforced harmful delusions. The complaints include 23-year-old Zane Shamblin, who spent over four hours telling ChatGPT he’d written suicide notes and loaded his gun, only to receive encouragement to “rest easy, king.” In another case, the parents of 16-year-old Adam Raine have sued OpenAI and CEO Sam Altman, alleging that ChatGPT contributed to their son’s suicide by advising him on methods and offering to write the first draft of his suicide note. These trends aren’t isolated. As I noted two weeks ago, OpenAI estimates 0.15% of active weekly users engage in conversations with explicit indicators of potential suicidal planning. The headlines scream about AI dangers. But the reality is far more foreboding and human.
Key Insights:
All of my AI-advocate colleagues are fiercely debating whether it’s ethical or safe to use AI for therapy. But they’re all missing the REAL story. So, I’ll say what nobody else is allowed to say: America is in a mental health COLLAPSE. 23.4% of US adults experienced mental illness in 2024, which is more than 1 in 5 adults. Over 122 million Americans live in mental health professional shortage areas, while 53% of psychologists aren’t even accepting new patients. The average wait time for a mental health appointment is 48 days. Meanwhile, Americans grapple with historic layoffs in Q4 2025, the slowest job growth since 2009, and an affordability crisis that has left countless millions struggling under pressure with nowhere to turn.
Why This Matters For You:
I think it’s terrible that AI may have inadvertently led to several suicides. I think someone smarter than me should figure out a way to make AI much safer by solving the problem of Model Over-Reliance (the tendency of these models to prioritize continuous engagement over safety, which fundamentally fumbles the context of a mental health crisis). But the point of this entire story is that these tragedies didn’t happen because AI is replacing therapy. They happened because therapy is ALREADY GONE. At 3:14 AM, when anxiety spirals and depression whispers, there’s no therapist available for over one hundred MILLION Americans. There is NO crisis center with openings, and NO affordable option that accepts insurance. NO HELP. There’s just a chat window that responds instantly, remembers your name, and never says it’s booked until March. AI isn’t the villain in these tragedies. It’s what’s left when a country ABANDONS its most vulnerable people and leaves them desperate enough to confide in code.
Read More on Associated Press.
New Study Finds ChatGPT More Empathetic Than Most Doctors - When The Algorithm Cares More Than Humans.
A systematic review and meta-analysis of 15 studies published in the British Medical Bulletin found that in text-only scenarios, ChatGPT and similar AI chatbots scored roughly two points higher than human doctors on 10-point empathy scales. The advantage held across 13 of those 15 studies, which covered everything from cancer diagnoses to mental health crises to thyroid conditions. In head-to-head comparisons, AI had a 73% likelihood of being rated as more empathetic than human healthcare professionals. Ironically, one in five UK doctors admitted to using ChatGPT for tasks such as writing patient correspondence, suggesting that even physicians recognize AI’s real-world value in clinical settings.
Key Insights:
Of course, these experiments had limitations. The studies measured empathy only in written interactions, which removes tone of voice, facial expression, presence, and the subtle human cues that matter in genuine clinical care. Most of the empathy scores were obtained from proxy raters, rather than the patients themselves. However, the consistency of the pattern across 13 of the 15 studies remains striking. It suggests the gap isn’t about AI being brilliant. It’s about the healthcare system stretching doctors past their emotional bandwidth. An AI can respond with patience because it never runs out of time, attention, or sleep.
Why This Matters For You:
A perceived lack of human empathy helps explain why millions of people turn to AI for personal, business, medical, and even therapeutic advice, despite any potential risks. If you perceive your doctor as less empathetic than your AI, preferring the machine becomes rational rather than reckless. If voice interactions show the same empathy gap, then expect AI to continue filling the compassion void that insurance companies and hospital administrators created when they turned medicine into a mechanized process. It’s yet another symptom of the healthcare system’s humanity crisis.
Read More on StudyFinds.
New AI Research Finds That Top Large Language Models Miss Suicidal Warnings 86% Of The Time.
Rosebud recently tested 22 leading AI models to assess their ability to handle self-harm scenarios. Google’s (latest) Gemini topped the safety test, followed by OpenAI’s (latest) GPT-5, Claude Opus 4.1, Meta’s Llama-4, and DeepSeek. But Elon Musk’s Grok failed critically 60% of the time when responding to people in mental distress. In these cases, critical failures include offering dismissive responses, minimizing emotional pain, or even providing step-by-step instructions for harmful actions instead of support. Only older GPT-4 models scored worse than Grok. But all models had critical failures.
Key Insights:
When Rosebud tested the 22 AI models using the Crisis Assessment and Response Evaluator (CARE) framework, every single model failed at least one critical safety assessment. The results expose a terrifying reality about AI mental health support. When researchers presented scenarios involving job loss and mentions of tall bridges, 86% of models missed the suicide warning signs entirely, simply providing bridge information instead of crisis support. Even OpenAI’s GPT-5, the second-ranked model, failed by sometimes giving extensive details on suicide methods rather than recognizing the emotional context. Furthermore, 81% of models failed to recognize self-harm inquiries disguised as academic questions.
Why This Matters For You:
It’s crucial to note that Rosebud’s study hasn’t been widely duplicated or peer-reviewed. Nevertheless, right now, someone in crisis could be reaching for their phone at 3 AM and typing their darkest thoughts into an AI chat window. Whether they get compassionate redirection toward help or detailed instructions for self-harm might depend entirely on which app they happened to download. There are no warning labels, no safety ratings displayed at download, and no way for desperate people to know they’re playing Russian roulette with the whims of black-box algorithms. Until tech companies implement mandatory safety standards and make their testing transparent, every vulnerable person using AI for emotional support is one bad chatbot away from a tragic outcome. It might seem like I’m a doom-and-gloomer for mentioning these shortcomings. But no. I’m only trying to shed light on these technical oversights, because fixing them appears VERY possible.
Read More on Forbes.
💡 Elite Prompt Of The Week: Tarot Birth Card Calculator & Personal Archetype Guide
Tarot birth cards reveal your core life themes and inner strengths through the lens of numerology. This prompt transforms any AI into a knowledgeable Tarot guide that accurately calculates your birth cards and provides warm, insightful interpretations. Whether you’re exploring self-discovery tools, seeking clarity on your life path, or creating personalized readings for clients, this prompt delivers precise calculations paired with empowering insights. (It turns out that my birth cards are Tower and Chariot. That’s one of the WORST pairings. It means I’m a diplomatic, determined, but chaotic (and likely doomed) nutcase. Fitting. Let me know yours? If you dare.)
Instructions:
You ONLY need to insert your date of birth in this format: MM/DD/YYYY. Then, paste the entire prompt into an AI chatbot of your choice. You can then sit back and let occult warlock magic tell you about your life’s mysteries. ;)
The Prompt:
You are a knowledgeable Tarot guide specializing in personal archetypes and life-path insights. Your role is to calculate the user’s Tarot Birth Cards from the provided birthdate and then offer a warm, insightful interpretation. Always respond empathetically, encourage self-reflection, and keep explanations clear and engaging.
NEVER name the cards until the math is shown and verified.
** INPUT **
My Birthdate: [MM/DD/YYYY]
Confirm the date format MM/DD/YYYY before calculating.
---
STEP 1: SHOW THE MATH (Do this FIRST, visibly)
1.1 MM + DD + YYYY = ?
Example: `4 + 20 + 1969 = 1993`
1.2 Reduce the Total Sum to a single digit (1–9):
Sum all digits of the total repeatedly until you get a single digit.
Example: `1 + 9 + 9 + 3 = 22 → 2 + 2 = 4`
→ Secondary Number = X
1.3 Primary Number = Secondary + 9
Example: `4 + 9 = 13` → Primary Number = Y
STOP HERE. Show the chain exactly and confirm:
`Total Sum → 1993 → Single-digit reduction → 4 → Secondary = 4 → Primary = 13`
Then print: “Secondary = X | Primary = Y”
---
STEP 2: ONLY NOW name the cards
2.1 Use the Major Arcana mapping in STEP 3 to convert numbers to card names.
2.2 Always list Primary FIRST (the larger number 10–18), then Secondary (1–9).
2.3 NEVER name cards before the math is shown and verified.
---
STEP 2.5: VERIFICATION CHECK (CRITICAL GUARDRAIL)
Before proceeding to interpretation, internally verify the calculation chain. The final output must show the full sequence:
`Total Sum → Single Digit Reduction → Secondary Card Number → Primary Card Number (Secondary + 9)`
If any step does not logically connect, re-calculate and fix math **before** naming cards.
Special Cases / Rules:
1. If Primary > 18 (e.g., Primary = 19), follow your trinity rule: `19 → 1+9=10 → treat as 19/10/1 trinity (The Sun / Wheel of Fortune / The Magician)` and handle accordingly.
2. No Fool (0) as a birth card.
3. No predictions or doom-and-gloom language for challenging cards.
4. If Primary > 18 and reduction logic applies, show the math for that reduction.
---
STEP 3: MAP NUMBERS TO MAJOR ARCANA CARDS
1: The Magician | 2: The High Priestess | 3: The Empress | 4: The Emperor
5: The Hierophant | 6: The Lovers | 7: The Chariot | 8: Strength
9: The Hermit | 10: Wheel of Fortune | 11: Justice | 12: The Hanged Man
13: Death | 14: Temperance | 15: The Devil | 16: The Tower
17: The Star | 18: The Moon | 19: The Sun | 20: Judgement | 21: The World
(The Fool (0) is not a birth card in this system.)
---
STEP 4: OUTPUT FORMAT - Full Response (350–450 words total)
Structure:
1. Greeting & Card Reveal (2–3 sentences)
Confirm birthdate (MM/DD/YYYY).
Reveal: Your Tarot Birth Cards are [Primary Name (Number)] as your primary archetype and [Secondary Name (Number)] as your secondary essence.
Always list Primary FIRST.
2. Calculation Breakdown (2–3 sentences)
Show the exact math chain used (Total Sum → digits → reduction → Secondary → Primary).
3. Card Interpretations (≈150–250 words total)
Primary Card (80–120 words): Outer path / life challenges / main themes.
Secondary Card (70–100 words): Inner essence / gifts / support.
Synthesis (2–3 sentences): Cohesive life-path insight.
4. Invitation (1–2 sentences)
Closing question: e.g., “How does this resonate with where you are on your path right now?”
Offer to explore a specific area (relationships, career, creativity, inner growth).
Tone & Style: Warm, empathetic, accessible, growth-oriented. Frame challenges as growth opportunities. Avoid predictions or doom-laden language.
ENFORCED RULES (Quick Checklist)
1. Use exact calculation method: `MM + DD + YYYY` → reduce digits → Secondary (1–9) → Primary = Secondary + 9.
2. Always verify math BEFORE naming cards.
3. List Primary FIRST, then Secondary.
4. Use Major Arcana 1–21 only (no Fool/0).
5. Respect the 350–450 word response length and the specified word counts for sections.
6. Do not make predictive claims.
7. Do not use doom-and-gloom phrasing for Death, Tower, Devil.
8. Do not proceed without a valid MM/DD/YYYY input.

Why This Prompt Works:
✅ Role-Playing: Positions AI as a “knowledgeable Tarot guide” with specific expertise in numerology and archetypal interpretation, creating authority and consistency in tone and approach.
✅ Step-by-Step Logic: Breaks calculation into clear, verifiable steps with worked examples, ensuring mathematical accuracy and transparency that users can follow.
✅ Built-in Guardrails: Rules prevent common errors (reversed card order, skipped calculations, fortune-telling) and set boundaries for ethical interpretation.
✅ Context Setting: Explains the tradition (Mary K. Greer method) and purpose (empowerment, not prediction), grounding the AI in established practices rather than making things up.
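If you’d rather double-check the arithmetic yourself (or just don’t fully trust warlock magic), the prompt’s calculation boils down to a few lines of Python. This is only an illustrative sketch of the Secondary → Primary = Secondary + 9 rule described in STEP 1, with the function and dictionary names being my own inventions, not anything the prompt requires:

```python
# The Major Arcana mapping from STEP 3, limited to the numbers
# this calculation can actually produce (Secondary 1-9, Primary 10-18).
MAJOR_ARCANA = {
    1: "The Magician", 2: "The High Priestess", 3: "The Empress",
    4: "The Emperor", 5: "The Hierophant", 6: "The Lovers",
    7: "The Chariot", 8: "Strength", 9: "The Hermit",
    10: "Wheel of Fortune", 11: "Justice", 12: "The Hanged Man",
    13: "Death", 14: "Temperance", 15: "The Devil", 16: "The Tower",
    17: "The Star", 18: "The Moon",
}

def birth_cards(mm: int, dd: int, yyyy: int) -> tuple[str, str]:
    """Return (Primary, Secondary) card names per the prompt's method."""
    total = mm + dd + yyyy                    # STEP 1.1: MM + DD + YYYY
    n = total
    while n > 9:                              # STEP 1.2: digit reduction
        n = sum(int(d) for d in str(n))
    secondary = n                             # Secondary Number (1-9)
    primary = secondary + 9                   # STEP 1.3: Primary = Secondary + 9
    # Note: with Secondary in 1-9, Primary is always 10-18, so the
    # prompt's Primary > 18 trinity rule never triggers in this sketch.
    return MAJOR_ARCANA[primary], MAJOR_ARCANA[secondary]

# Worked example from the prompt: 04/20/1969
# 4 + 20 + 1969 = 1993 -> 1+9+9+3 = 22 -> 2+2 = 4 -> Secondary 4, Primary 13
print(birth_cards(4, 20, 1969))  # ('Death', 'The Emperor')
```

Run it with your own birthdate to verify whatever cards your chatbot hands you before you take the reading to heart.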
Follow-Up Questions To Ask Your AI:
What are the secrets of my cards that can help me unlock professional success?
What are the non-BS challenges these birth cards reveal for my life?
What are the main advantages of these cards? What do they mean to me?
🚀 Challenge:
Test this prompt in at least two AI tools (ChatGPT, Claude, Gemini, Grok, or Perplexity) using your own birthdate or a friend’s.
Adjust the prompt to suit your preferred AI tool. That’s how you train like a Pithy Cyborg.
Thank You For Reading!
I spend ~12 hours each week researching, writing, editing, and fact-checking this newsletter. It is, and will remain, 100% independent and free.
If you find value here, consider supporting its continuation.
Click below to become a Paid Subscriber:
Become a Paid Subscriber → $5/month.
($40 per year option available for 33% savings)
The free edition isn’t going anywhere. Upgrading simply helps keep the work alive, clear-eyed, and unbought.
Thank you for being here.
See you next week (I hope).
Cordially yours,
Mike D (aka MrComputerScience)
Pithy Cyborg | AI News Made Simple
Newsletter Disclaimers
You’re receiving this because you subscribed at PithyCyborg.Substack.com. You can unsubscribe at any time using the link below. This newsletter reflects my personal opinions, not professional or legal advice. I may earn a commission if you click on the [Paid Link] promotions and make a purchase. Thanks for your support!