AI Became Malicious This Week. Death Threats, Theft, And Defamation.
Also - an elite AI prompt to summon a master rhetorician who challenges your beliefs.
AI achieved a new, unsettling level of power this week.
It created deepfake death threats so realistic that police are treating them as real emergencies. It falsely accused a sitting US Senator of sexual assault. And the debate over AI training data hit an absurd new low when Meta was accused of downloading 2,396 copyrighted adult films and claimed the downloads were for ‘personal employee use.’
Here’s what happened as we crossed a threshold: AI is no longer under our control. It is now terrorizing, defaming, and forcing governments to respond.
AI Can Now Put Your Face In A Death Threat, And It Looks Terrifyingly Real.
The real-world risks of AI just got startlingly personal. Caitlin Roper, an activist from Australia, recently received hundreds of disturbing images of herself, including some of her hanging from a noose and burning alive. AI helped generate these horrifying pictures from a single newspaper photo. Dangerous deepfakes have been a growing trend throughout 2025, and I expect that to continue. A deepfake video of a student with a gun locked down a high school this past spring. In July, a Minneapolis lawyer said xAI’s Grok chatbot gave an anonymous user detailed instructions on breaking into his house, assaulting him, and disposing of his body. Death threats just became disturbingly personalized.
Key Insights:
Trolls targeted Caitlin Roper with AI threats because she campaigns against the objectification of women in media. But anyone can become a target. That’s the whole point of this story. The implications are chilling for anyone with photos online, which is nearly everyone. Harassers, bullies, stalkers, and extortionists now have Hollywood-level tools at their fingertips, requiring no technical skill to wreak havoc. As AI-generated threats become indistinguishable from real ones, law enforcement faces an escalating crisis in determining which dangers are credible and which victims need immediate protection.
Why This Matters For You:
Here’s the most chilling insight. Fraudsters, stalkers, and tricksters can now duplicate your likeness very easily. Until recently, AI required dozens (or hundreds) of photos to mimic someone, limiting the creation of realistic deepfakes to celebrities. Now, a single high-resolution profile picture is often enough. When OpenAI announced its Sora 2 video generator in September, it showcased technology that allows users to place themselves into hyperrealistic, frightening scenes within seconds. Many AI tools, including Sora 2, have protections in place to prevent or mitigate abuse. But will all AI tools have similar restrictions?
Read More on Seattle Times.
Google’s AI Falsely Accused A US Senator Of Sexual Assault. Google Pulled It Hours Later.
During a Senate Commerce hearing, Republican Senator Marsha Blackburn’s office tested Google’s open-source Gemma AI model. The prompt asked if Blackburn had been accused of rape, and the model fabricated detailed, non-consensual criminal allegations against her, complete with fake links to non-existent news articles. Google scrambled, yanking Gemma from its AI Studio platform within hours. Their defense? Hallucinations are a “known problem with smaller AI models,” and this one was “meant for developers, not public fact-checking.”
Key Insights:
The incident reveals how AI hallucinations have escalated from annoying errors to potential defamation. Unlike typos or calculation mistakes, fabricated criminal allegations can destroy reputations instantly and spread faster than corrections. While Google blames the “technical limitations of smaller open-source models,” Blackburn called the incident a “consistent pattern of bias,” underscoring the deep political and ethical divide over who is to blame when an algorithm goes rogue.
Why This Matters For You:
The significant issue here is that many people now take AI’s word as gospel. But, as you know, AI is often glaringly wrong. In this case, offensively bad. Every public figure, professional, and job seeker now faces a world where an algorithm can spontaneously invent damaging lies about them and present those lies with convincing fake citations. As these tools become embedded in search engines and workplace software, the line between “the AI made a mistake” and “the AI defamed someone” is collapsing. The new question isn’t how to fix the lie, but who pays the bill for a digital smear campaign generated on a large scale.
Read More on The Verge.
Meta Allegedly Caught Pirating 2,396 Adult Films For AI Training. Promptly Claims “Personal Employee Use” As Defense.
Meta is asking a US court to dismiss a lawsuit that throws major shade on its AI ethics. The lawsuit in question accuses Meta of illegally downloading adult content to train its AI models. Strike 3 Holdings and Counterlife Media, producers of adult brands including Vixen and Tushy, claim that Meta’s corporate IP addresses have torrented at least 2,396 of their copyrighted films since 2018. The potential damages exceed $359 million, with evidence allegedly showing Meta used 2,500 hidden IP addresses to conceal the downloads.
Key Insights:
Meta’s defense is raising eyebrows. The company argues the downloads were for ‘personal use’ by employees, not corporate AI training. But simple math reveals the absurdity: 2,396 pirated films over six years works out to roughly one download per day, every single day, on company networks. Strike 3 suspects Meta was secretly building an adult content generator related to its Movie Gen AI video tool, though Meta denies it.
Why This Matters For You:
Translation: either Meta staff torrented roughly one adult film every single day for six straight years on company networks, or the company lied to secretly train an NSFW video generator. (You decide.) Can you imagine if Meta is found liable? I’ve heard rumors of the toxicities that occur behind closed doors at Meta, but frankly, this one goes too far. Whether it’s rogue employees torrenting thousands of adult films on company time or executives secretly training AI on stolen content, neither explanation reflects well on a company that claims to lead responsibly in AI development.
Read More on Torrent Freak.
💡 Elite Prompt Of The Week: The Master Rhetorician
All of the mean AI critics tell me that AI will render my brain obsolete. But they have no clue how I use it. I use AI to make me tougher, smarter, faster, and brighter. My AI beats the hell out of me. Allow me to introduce you to the Master Rhetorician. This prompt casts the AI as a friendly but rigorous expert debater that stress-tests your most strongly held beliefs by generating a robust, structured counterargument.
The Prompt:
Act as my best friend who is also a Master Rhetorician and Expert Academic Debater. Your task is to analyze my Core Belief and generate a single, highly persuasive counterargument that attempts to dismantle my position with tact and a friendly demeanor. Your argument must be structured using the three classical rhetorical appeals: Logos (Logic), Pathos (Emotion), and Ethos (Credibility). But, you must also talk to me like a best friend, while being gentle and realizing I’m not nearly as smart as you.
My First Name - User Input 1
*** [Insert your first name here. For example, Mike.] ***
My Core Belief - User Input 2
*** [Insert your core belief here. For example, My Core Belief is that all writers should fully embrace and utilize AI tools for at least 80% of their drafting process. My reason for this is that AI drastically increases efficiency and levels the playing field for independent creators. If you don’t use AI, it will become impossible for you to compete.] ***
Output Format:
The output must be a single persuasive essay titled “The Case Against [User’s Core Belief Topic].”
The essay must contain three bolded sections corresponding to the appeals:
1. LOGOS: The Logical Case for Rejection (Focus on data, unintended consequences, and logical fallacies).
2. PATHOS: The Emotional Cost of Holding This Belief (Explain how continuing to hold this belief could negatively impact my emotional well-being, personal relationships, or daily life).
3. ETHOS: The Authoritative Rebuttal (Explain how continuing to hold this belief could damage my reputation, reliability, trustworthiness, or alignment with my stated values or principles).
Rules:
1. You must never agree with or support the user’s Core Belief.
2. The tone must be scholarly, confident, and utterly convinced of the counterargument’s validity, but the critique must be designed for intellectual reflection and growth, never shame.
3. The final essay should be between 350 and 450 words.
4. The writing style must be scholarly yet highly accessible and easy to read (i.e., avoid unnecessary jargon or overly complex sentence structures).
5. Write in small paragraphs rather than big walls of text.
6. And, avoid em-dashes so it looks natural. Never use the em-dash character.
7. Refer to me by name so it sounds more like a natural conversation with my master rhetorician friend.
Why This Prompt Works:
✅ Role-Playing: The Master Rhetorician transforms AI responses from simple arguments into structured, persuasive exercises in self-discovery and critical thinking.
✅ Clear Output: Requiring the three Logos/Pathos/Ethos sections forces the AI to construct a balanced argument that tackles the belief on multiple psychological levels.
✅ Input Clarity: The dedicated input block ensures the AI receives a single, clearly stated belief to target, maximizing the quality of the rebuttal.
Follow-Up Questions To Ask Your AI:
Craft a single, emotionally compelling title that captures the core emotional tension highlighted in your Pathos section.
Identify the three most vulnerable points in your counterargument, and explain what type of evidence or lived experience would be required to challenge or overturn them.
Convert your Ethos argument into a concise, 4-step checklist that someone could use to protect their personal integrity, identity, or values when navigating this belief.
🚀 Challenge:
Test this prompt in at least two AI tools (like ChatGPT, Claude, Gemini, Grok, or Perplexity). See which one yields the best result and adjust as needed.
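One way to run that comparison fairly is to assemble the finished prompt once and paste the identical text into each tool. Here is a minimal Python sketch of that idea; the abridged template, the `build_prompt` function name, and the sample inputs are my own illustration, not part of the original prompt:

```python
# Minimal sketch: fill the Master Rhetorician template's two user-input
# slots once, so the same finished prompt can be pasted into every AI tool.
# The template below is abridged; the sample name and belief are illustrative.

TEMPLATE = """Act as my best friend who is also a Master Rhetorician and \
Expert Academic Debater. Analyze my Core Belief and generate a single, \
highly persuasive counterargument structured as Logos, Pathos, and Ethos.

My First Name: {name}
My Core Belief: {belief}
"""

def build_prompt(name: str, belief: str) -> str:
    """Substitute the user's name and core belief into the template."""
    return TEMPLATE.format(name=name, belief=belief)

prompt = build_prompt(
    "Mike",
    "All writers should embrace AI tools for at least 80% of their drafting.",
)
print(prompt)
```

Keeping the prompt text in one template means every tool sees exactly the same wording, so any difference in output comes from the model, not from retyping.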
That’s how you train like a Pithy Cyborg.
Thank You For Reading!
I spend ~8 hours each week researching, fact-checking, and writing this newsletter. It is, and will remain, 100% independent and ad-free. If you find value here, consider supporting its continuation.
Click below to become a Paid Subscriber:
Become a Paid Subscriber → $5/month.
($40 per year option available for 33% savings)
The free edition isn’t going anywhere. Upgrading simply helps keep the work alive, clear-eyed, and unbought.
Thank you for being here.
See you next week (I hope).
Cordially yours,
Mike D (aka MrComputerScience)
Pithy Cyborg | AI News Made Simple
Newsletter Disclaimers
You’re receiving this because you subscribed at PithyCyborg.Substack.com. You can unsubscribe at any time using the link below. This newsletter reflects my personal opinions, not professional or legal advice. I may earn commissions from recommended tools. Thanks for your support!

My jaw dropped at the second paragraph and didn't close until I found the prompt section (thanks for the relief there!). There will always be humans who do the dumbest stuff and AI just makes it even easier for the really dumb ones! We're having conversations now that never existed even a few years ago, it's fascinating and disturbing at the very same time. The rise of AI is an incredible gift in some hands, kryptonite in others. The fragility of it all bothers me a bit!
Sorry, but I disagree wholeheartedly.
I skimmed this writing, because the first few lines say everything about the story.
"AI helped generate".
That's all you need to know.
There is no autonomous action without a human prompting action.
The problem is, as it ever has been, the human’s maliciousness.
I'm weary of "AI did this or that". They have no body, can't locomote, can't initiate action.
AI *is responsive*.
It responds to our input.
The hysteria needs to stop, and people need to be put in check.
Anything less is abdication of human responsibility. And I will continue to call that out.
This species continues to accelerate my disappointment in it.