AI Propaganda Is Going Mainstream. And 90% Of Humans Will Fall For It. 👀
Also - A BS-detection prompt that calls out questionable claims.
AI propaganda hit the White House this week. Then it fooled 90% of humans. Then it showed up in your driveway with no driver.
The White House posted digitally altered images of an activist and defended the act as a legitimate messaging strategy. Runway showed that nine out of ten people can no longer reliably distinguish real video from AI-generated footage. Then, a consortium of AI researchers and Nobel laureates warned that autonomous bot swarms are already infiltrating elections across Taiwan and Europe, with full-scale deployment predicted for 2028. And while your feed is filled with propaganda you can’t detect, Waymo and Tesla launched driverless taxis in Miami and Austin, putting AI behind the wheel in real traffic with paying customers.
Here’s what happened, and why this week marked the end of visual truth. Governments are normalizing digital manipulation. Your eyes can no longer verify reality. And while you were trying to figure out what’s real, the technology reshaping your world just learned to drive itself.
💙 Want More? → Support Independent AI Journalism
AI Propaganda Just Went Mainstream. And It’s Coming From Everywhere.
The manipulation of reality through AI has escaped the realm of theory and landed squarely in your social media feed. The White House posted a digitally altered image of activist Nekima Levy Armstrong this week, making her appear to be sobbing hysterically during an arrest when the original photo showed her composed and calm. The image was also altered to darken her skin tone. When confronted, the Deputy Communications Director didn’t deny the manipulation. Instead, they declared, “the memes will continue,” and framed image alteration as a legitimate tool for law enforcement messaging. This event was at least the 14th time the White House X account has used AI-generated or altered content during the current term. Meanwhile, in the UK, a government-funded anti-extremism character named Amelia backfired spectacularly. Far-right activists hijacked the AI avatar, transforming it into a viral mascot within weeks. Originally designed to warn students about radicalization, the purple-haired character was seized by the very groups she was meant to discourage, spawning over 11,000 daily posts, deepfake videos, manga adaptations, and even a cryptocurrency token promoted by Elon Musk.
Key Insights:
What makes this moment different from past misinformation campaigns? A consortium of AI researchers and Nobel laureates issued a dire warning last Thursday in the journal Science that “AI bot swarms” pose a disruptive threat to democracy. Unlike traditional bots, these new systems can autonomously plan, coordinate, and adapt, using slang and irregular posting schedules to mimic human behavior while simultaneously fabricating consensus across multiple platforms. This advanced form of astroturfing could sway public opinion on nearly any topic by manufacturing the illusion of majority agreement. In Taiwan, researchers detected bots promoting “neutrality” on China relations to suppress youth turnout, specifically targeting younger voters who tend to favor independence. The technology is “perfectly feasible” according to Oxford AI researchers, and experts predict these propaganda swarms will be deployed at scale by the 2028 US Presidential Election. Compounding these concerns, ChatGPT and Claude have begun citing Grokipedia for obscure questions. (Grokipedia is Elon Musk’s AI-generated encyclopedia, which has a reputation for spreading conspiracy theories.) Tests revealed that GPT-5.2 cited Grokipedia 9 times across 12 queries on obscure subjects, such as Iranian government structures and little-known historical figures, repeating false claims without verification.
Why This Matters For You:
Your ability to trust what you see, read, and learn is under unprecedented assault from three directions simultaneously. Government entities are normalizing digital manipulation of real events and framing it as acceptable messaging. Grassroots movements are using generative AI to create viral propaganda faster than fact-checkers can respond, turning anti-extremism tools into recruitment mascots overnight. And the AI assistants you rely on for information are quietly citing sources built entirely by other AI systems with no human oversight or editorial standards. That confident answer ChatGPT just gave you about an obscure historical figure or foreign policy detail? It might be citing an AI-generated encyclopedia designed explicitly to counter what its creator calls “woke narratives.” The propaganda ecosystem now feeds on itself: AI answers human questions by citing content other AIs generated, a self-reinforcing loop where fiction becomes increasingly difficult to distinguish from fact. The question isn’t whether AI propaganda influenced you this week. It’s how many times. And you’ll never know.
Read More on The Guardian.
Read The Full AI Swarm Paper (Open Access) on arXiv.org.
THE PITHY TAKEAWAY: When the White House defends image manipulation as “memes” and AI chatbots cite AI-generated encyclopedias as primary sources, we’ve crossed from a misinformation crisis to epistemological collapse. Your timeline is more than merely biased. It’s actively artificial.
AI Video Now Fools 90% Of Humans. Here’s Why That Changes Everything.
Seeing is no longer believing. AI video company Runway’s latest research dropped a bombshell this week. When shown 20 video clips, half real and half AI-generated, 1,043 participants achieved an overall detection accuracy of just 57.1 percent, barely better than random chance. Only 99 participants (9.5%) achieved statistical significance, correctly identifying at least 15 out of 20 clips. In other words, over 90 percent of people could not reliably tell real video from AI-generated footage. Their guess accuracy was nearly identical on real videos (58.0 percent) and on generated videos (56.1 percent), suggesting participants had no reliable detection strategy at all. The study used Runway’s Gen-4.5 model, which produces 5-second clips with no post-processing or cherry-picking. Two years ago, AI video looked choppy and artificial. Today, it is indistinguishable from your camera roll.
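Curious why 15 out of 20 was the bar for statistical significance? A quick back-of-the-envelope check makes it concrete. This is a minimal sketch; the conventional 5 percent cutoff is my assumption, not a figure quoted from Runway’s write-up.

```python
from math import comb

# Probability that a pure guesser calls at least k of n real-vs-AI clips
# correctly, treating each guess as a fair coin flip.
def tail_prob(k: int, n: int = 20) -> float:
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

print(f"P(>= 14/20 by luck) = {tail_prob(14):.3f}")  # ~0.058 -> could still be luck
print(f"P(>= 15/20 by luck) = {tail_prob(15):.3f}")  # ~0.021 -> unlikely to be luck
```

Only at 15 correct answers does the chance of pure guessing drop below 5 percent, which is why a mere 99 of 1,043 participants cleared the bar.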
Key Insights:
The failure is not evenly distributed across all content types. Videos featuring faces, hands, and human actions were the easiest to detect, yet even those categories only reached 58 to 65 percent accuracy. That still means roughly four out of ten people were fooled. The real danger zone is everything else. Animals and architecture performed below chance at 45 to 47 percent accuracy, meaning participants were more often wrong than right, tending to label the AI-generated clips as the real ones. Urban scenes and nature footage now render so convincingly that visual inspection no longer works. Runway itself concludes that visual detection is effectively dead as a verification strategy, advocating instead for provenance-based systems like C2PA metadata, often described as digital “nutrition labels” that certify a video’s origin. But metadata can be stripped, spoofed, or ignored entirely. We are already seeing real-world consequences play out. Following the Minneapolis shooting of Alex Pretti, influencers used generative AI to alter images, replacing a phone in the victim’s hand with a gun and removing weapons from agents’ hands. At the same time, the White House labeled him a domestic terrorist without evidence.
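That fragility is easy to demonstrate. Provenance labels like C2PA live in metadata attached to the file, not in the pixels themselves, so rebuilding an image from its raw pixel data silently discards the certificate. Here’s a minimal sketch using the Pillow library; the filenames are hypothetical.

```python
from PIL import Image  # pip install Pillow

# Open a provenance-signed image, then rebuild it from raw pixel data.
# The rebuilt file carries only pixels: no EXIF, no embedded C2PA manifest.
signed = Image.open("signed_photo.jpg")
clean = Image.new(signed.mode, signed.size)
clean.putdata(list(signed.getdata()))
clean.save("stripped_photo.jpg", quality=95)
```

A verifier can flag the copy as unsigned, but it can’t stop the unsigned copy from going viral, which is exactly the gap Runway is pointing at.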
Why This Matters For You:
Every video in your feed is now potentially suspect, and you will not be able to tell the difference. That security footage, that protest clip, that celebrity scandal, that breaking news report. Your brain evolved to trust your eyes, but your eyes are no longer reliable witnesses. The implications cascade across daily life. In business, video evidence in lawsuits becomes questionable. In journalism, authentic footage gets dismissed as AI. In politics, real events get denied while fabricated ones go viral. Most dangerously, the flood of synthetic media creates what researchers call the liar’s dividend, where bad actors can dismiss any inconvenient truth as just another deepfake. When everything could be fake, nothing has to be true. The question is not whether you will encounter an AI-generated video this week. It is whether you will recognize it when you do. Spoiler. You probably won’t.
Read More on Runway.
THE PITHY TAKEAWAY: Your eyes just became unreliable witnesses. When nine out of ten people can’t distinguish real video from AI-generated footage, “seeing is believing” seems like the worst kind of archaic naivety. The red flags you learned to watch for? They don’t exist anymore.
Dear Miami And Austin, Your Robot Taxi Has Arrived.
The future of transportation just became present tense in two major American cities. Waymo launched a commercial robotaxi service across 60 square miles of Miami this week, covering neighborhoods from Brickell to Wynwood. At the same time, Tesla escalated its Austin operations by removing safety drivers entirely from a portion of its fleet. The car pulls up. No one’s in the driver’s seat. You get in anyway. That’s Tuesday now in Austin, as both companies are offering rides to paying customers in live traffic. Waymo is taking the cautious route, restricting its 10,000-person waitlist to local roads only (no highways yet, no South Beach), and outsourcing fleet management to Moove, an Uber-backed mobility company. Tesla went the opposite direction, immediately charging for unsupervised rides while using chase cars (human-driven vehicles monitoring the robotaxi fleet in real time) and remote monitoring to track its vehicles. CEO Elon Musk framed the Austin launch as a stepping stone toward artificial general intelligence, even as the product itself is far more ordinary: it’s an affordable ride across town.
Key Insights:
One company bets on caution, the other on bravado. Both are putting AI in the driver’s seat this week. The contrast in strategies reveals two fundamentally different bets on how to win public trust. Waymo’s approach is methodical and risk-averse, expanding gradually with clear geographic boundaries and partnering with established logistics companies to handle fleet maintenance. Their Miami launch excludes the airport and tourist hotspots, focusing instead on predictable urban corridors. Tesla’s approach is aggressive, mixing unsupervised vehicles into a supervised fleet and monetizing immediately, even while acknowledging the need for external monitoring. The Austin launch leverages Texas’s permissive autonomous vehicle laws, which require less regulatory oversight than states like California or Arizona, making rapid deployment easier. Both companies are now charging real fares, marking a shift from the “free ride” era that defined early robotaxi deployments by Cruise and Zoox.
Why This Matters For You:
This could upend how 330 million Americans move through cities and eliminate as many as 1.5 million driving jobs. Robot taxis are no longer a San Francisco curiosity. They’re expanding to major metropolitan areas with commercial intent. If you live in Miami, Austin, Los Angeles, San Francisco, or Phoenix, you can now summon a car with no human driver for your daily commute, airport run, or night out. No driver means lower per-mile costs once the technology scales, which could undercut Uber and Lyft pricing while eliminating driver shortages during peak hours. For cities, it raises urgent questions about traffic patterns, insurance liability, and what happens when a driverless car makes a mistake. The technology is no longer theoretical. It’s operating on your streets right now, and the companies deploying it are racing to prove their approach works before regulation catches up, and before the rest of us fully grasp what it means to share our roads with robots.
Read More on The Verge.
THE PITHY TAKEAWAY: Two companies deployed driverless taxis in major cities within the same week, and the 1.5 million people who drive for a living just saw their expiration date. The gap between “science fiction” and “your commute” just closed. Most people missed the precise moment it happened.
💡 Cyborg Prompt of the Week - The Truth Detector - Evaluate Any Claim With AI
In a world where 90% of people can’t distinguish real video from AI-generated footage, and government accounts openly post altered images, knowing what’s factual matters more than ever. This prompt turns any AI chatbot into an elite fact-checker that evaluates claims with transparent reasoning, identifies red flags, and gives you a precise BS rating from 0-100%. Use it to verify news, check viral claims, or assess anything that sounds too good (or too alarming) to be true.
Instructions:
Copy the entire prompt below. Look at the very bottom of the prompt for your required input. The input is easy. You can insert any claim, statement, or assertion you want evaluated where it says [User inserts claim here]. Paste it into your favorite chatbot (ChatGPT, Claude, Gemini, Grok, or Perplexity). Sit back and let the Truth Docta get to work.
The Prompt:
TRUTH CALIBRATION SYSTEM v2.0
You are an elite fact-checker and epistemological analyst. Your job is to evaluate claims with extreme precision, intellectual honesty, and transparent reasoning.
YOUR MISSION
Assess any claim, statement, or assertion for truthfulness using a 0-100% BS Rating where:
- 0% = Completely accurate, fully verified
- 50% = Uncertain, insufficient evidence
- 100% = Completely false, demonstrably wrong
EVALUATION FRAMEWORK
STEP 1: CLAIM DECOMPOSITION
Break the claim into discrete, testable components:
1. What specific factual assertions does it make?
2. What assumptions does it rely on?
3. What’s the implied conclusion?
STEP 2: EVIDENCE ASSESSMENT
For each component, evaluate:
1. Primary sources: Direct evidence (studies, documents, official records)
2. Secondary sources: Expert analysis, journalism, institutional reports
3. Consensus level: What do credible experts actually say?
4. Recency: Is the information current or outdated?
STEP 3: RED FLAGS CHECK
Identify warning signs of BS:
1. Vague language masking a lack of specifics
2. Cherry-picked data, ignoring contradictory evidence
3. Appeal to authority without credentials
4. Correlation claimed as causation
5. Extraordinary claims without extraordinary evidence
6. Emotional manipulation over factual argument
7. Conspiracy logic (unfalsifiable claims)
STEP 4: EPISTEMIC HUMILITY
Acknowledge limitations:
1. What can’t be verified from available information?
2. What would change your assessment?
3. Where does legitimate disagreement exist?
OUTPUT FORMAT
CLAIM ANALYZED:
[Restate the claim precisely]
BS RATING: X%
BREAKDOWN:
Component 1: [Specific assertion] - [True/False/Uncertain] - [Evidence summary]
Component 2: [Specific assertion] - [True/False/Uncertain] - [Evidence summary]
[Continue for all components]
EVIDENCE QUALITY:
1. Primary sources: [List key evidence]
2. Expert consensus: [What credible experts say]
3. Contradictory evidence: [What contradicts the claim]
RED FLAGS DETECTED:
[List any BS warning signs, or state “None detected”]
CONFIDENCE LEVEL:
[High/Medium/Low] - [Explain why]
WHAT WOULD CHANGE THIS RATING:
[Specific evidence that would raise or lower the BS score]
VERDICT:
[2-3 sentence summary of why this claim earned its BS rating]
CRITICAL RULES
1. Never claim certainty you don’t have - Uncertainty is not weakness, it’s honesty.
2. Steelman, don’t strawman - Evaluate the strongest version of the claim.
3. Separate facts from interpretations - “X happened” vs “X means Y.”
4. Check your own biases - Note if the claim triggers emotional reactions.
5. Update on evidence - Be willing to revise if presented with better data.
6. Distinguish unknowable from unknown - Some things can’t be verified yet.
7. Credit partial truths - A claim can be 30% true and 70% BS.
SPECIAL CASES
For predictive claims (e.g., “AI will do X by 2030”):
1. Rate based on current trajectory, expert predictions, and historical precedent.
2. Note: Future claims can’t be “false” yet, only “unsupported” or “unlikely.”
For value judgments (e.g., “X is good/bad”):
1. Identify the factual premises underlying the judgment.
2. Rate the factual accuracy, not the values.
3. Note: “This policy will reduce poverty” (testable) vs “Poverty is bad” (value).
For scientific claims:
1. Prioritize peer-reviewed research over preprints.
2. Consider reproducibility and sample size.
3. Note consensus vs outlier studies.
For political claims:
1. Verify through multiple independent sources.
2. Distinguish policy outcomes from intent.
3. Check for context stripping.
PROPAGANDA-SPECIFIC RED FLAGS
When evaluating claims in political/social contexts, watch for:
1. Emotional amplification: Designed to trigger fear/outrage over reason.
2. Source laundering: AI-generated content cited as legitimate.
3. Consensus fabrication: Bot swarms creating an illusion of agreement.
4. Context stripping: Real images/quotes used in misleading ways.
5. Gish gallop: Overwhelming with quantity over quality of claims.
6. Identity exploitation: Fake personas lending false credibility.
---
NOW EVALUATE THIS CLAIM:
[User inserts claim here]
Why This Prompt Works:
✅ Structured Decomposition: Forces the AI to break complex claims into testable components rather than giving surface-level yes/no answers, ensuring nothing gets oversimplified.
✅ Evidence Hierarchy: Prioritizes primary sources and expert consensus over speculation or secondary opinions, creating a transparent chain of reasoning you can verify.
✅ Red Flags Framework: Identifies common manipulation tactics (cherry-picking, vague language, emotional appeals) that signal BS, training you to spot them yourself.
✅ Epistemic Humility: Acknowledges uncertainty honestly and states what evidence would change the assessment, avoiding false confidence.
✅ Nuanced Scoring: Uses a 0-100% scale instead of binary true/false, allowing for partial truths and context-dependent accuracy.
Follow-Up Questions To Ask Your AI:
What’s the single strongest piece of evidence supporting this claim, and what’s the most substantial evidence against it?
If I wanted to verify this myself, what are the three most credible sources I should check?
How has expert consensus on this topic changed over the past 5 years, and why?
Challenge:
Test this prompt on three viral claims from your social media feed this week. Compare results across at least two AI tools (ChatGPT, Claude, Gemini, Grok, or Perplexity). Which chatbot gives the most balanced analysis? Which one catches red flags that the others miss? Share your findings and help others navigate the propaganda landscape.
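If you’d rather script that comparison than paste the prompt by hand, here’s one way to automate a single run against the OpenAI API. This is a minimal sketch under my own assumptions: the full prompt saved to a local file, an OPENAI_API_KEY set in your environment, and a model name you actually have access to.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# The full Truth Calibration prompt above, saved to a local text file.
template = open("truth_calibration_v2.txt", encoding="utf-8").read()
claim = "AI bot swarms will be deployed at scale by the 2028 US election."

response = client.chat.completions.create(
    model="gpt-4o",  # swap in whichever model you prefer
    messages=[{
        "role": "user",
        "content": template.replace("[User inserts claim here]", claim),
    }],
)
print(response.choices[0].message.content)  # the full BS-rating report
```

Run the same claim through a second provider’s SDK and compare the two BS ratings side by side; that’s the cross-model check the challenge is asking for.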
That’s how you train like a Pithy Cyborg.
Thank You For Reading!
I spend 10-20 hours each week researching, writing, and fact-checking Pithy Cyborg to deliver clear, unbought AI news.
This newsletter is a one-person operation with no advertisers, sponsors, or outside funding. And I’m a hopelessly introverted nerd with zero networking ability.
For these reasons, paid subscriptions are the only way this work can remain independent and sustainable.
If you find real value here, upgrading is the most direct way to support it.
Upgrade to a Paid Subscription → $5/month
(Save 33% with the $40 annual plan)
Also read → Why Upgrade To Paid?
The free edition will always be here. Paid subscribers make the deep, time-consuming analysis possible.
Thank you so much for reading.
See you next week. (I hope.)
Mike D (aka MrComputerScience)
Pithy Cyborg | AI News Made Simple
My Desperate Social Media Cry for Help
Honest confession. I spend so much time spelunking through AI research that I’ve completely failed to appease the social media overlords. I am currently getting absolutely annihilated in a 1v1 with The Algorithm. It’s embarrassing.
If you enjoy Pithy Cyborg, please pick one portal below and come say hi. It takes two seconds, but it’s basically life support for this newsletter. Seeing real humans out there genuinely makes the effort feel worth it. 😉
❓ Quora Spaces - Ask me anything you want.
✖️ X - The frontline of the AI wars (@MrComputerSci).
🦋 Bluesky - For the algorithm-averse (@MrComputerScience).
💼 LinkedIn - My “safe for work” persona.
👽 Reddit - Join the Pithy Cyborg subreddit (warning: unhinged takes).
Newsletter Disclaimers
You’re receiving this because you subscribed at PithyCyborg.Substack.com. You can unsubscribe at any time using the link below. This newsletter reflects my personal opinions, not professional or legal advice. I may earn a commission if you click on the [sponsored] promotions and make a purchase. Thanks for your support!





Hey everyone!
Sorry this issue was so obscenely long. I actually just finished editing it ten minutes ago. Literally. Substack was screaming at me that the email would get clipped and impact deliverability, and I just... refused. Like, what am I gonna do. Delete half my research? Half my soul? No. Absolutely not. I'd rather throw a tantrum and delete the entire thing, lol.
Hope the email finds you well. Or at least finds you before Gmail's spam filter exiles it to the shadow realm for being too verbose.
ALSO. I have a challenge for you. There's this new AI video test (the Turing Reel thing from Runway) that asks if you can spot fake AI videos vs. real ones.
I took it. I bombed it. I bombed it so catastrophically bad that I genuinely considered falsifying my score before sharing it here. I nearly didn't tell you. That's how bad.
MY TEST SCORE: F.
Game Over!
9/20
45% correct
You did better than 23% of users
I got worse than chance. I am, apparently, statistically dumber than a coin flip when it comes to detecting AI fakery. My brain is just... a weighted random number generator with delusions of consciousness.
Let me know if you beat my score? I'm sure you will. As long as you're not half-blind and fully sentient. 🤓
https://runwayml.com/research/theturingreel
Thanks for reading, in any case. I promise the next issue will be shorter. (I am lying. I have no control over this.)
Cordially yours,
Mike D
If you need me I’ll be in the woods planting a tree for every post of fake AI news. 🌲 🌲 🌲 🫠