AI Just Got A War Budget. Then It Came For Care. And Asked How You Feel.
The Pentagon crowned its new favorite AI. Then AI fired your therapist and polled 81,000 humans. No worries. I made you an action plan.
AI got a big war chest this week. Then it came for your therapist. Then it asked how you feel about all of it.
The Pentagon formally named Palantir’s Maven Smart System as the official AI backbone of the U.S. military. This came as new details emerged that AI helped strike over 1,000 targets inside Iran in a single day. Then 2,400 mental health professionals walked off the job at Kaiser Permanente, drawing a line in the sand between human care and algorithmic convenience. And Anthropic published the largest AI study ever conducted, interviewing 81,000 people across 159 countries about their hopes, fears, and lived experiences with the technology reshaping their lives.
Here’s what happened, and why this week felt different. AI has spent years asking for trust. This week, it stopped asking. It signed the contracts. It replaced the clinicians. It interviewed the humans. The machines are not coming for your future anymore. They are now administering it.
💙 Enjoy This? → Support Independent AI Journalism!
Meet Maven. The Private AI System Now Running America's Wars.
During the opening hours of Operation Epic Fury, U.S. forces struck more than 1,000 targets inside Iran with the help of artificial intelligence. Spread across 24 hours, that works out to an unprecedented processing rate of approximately 42 suggested priority targets per hour. The AI system now helping coordinate attacks has a name, and it’s not Claude, Grok, or ChatGPT. It’s called Maven Smart System, built by a private company called Palantir. The dust is settling on the Pentagon’s decision from earlier this month, when Deputy Secretary of Defense Steve Feinberg signed a memo making it an official Pentagon “program of record,” the military’s highest stamp of approval. That designation locks Maven into the official defense budget cycle, making government funding predictable and structurally embedded rather than subject to year-to-year renewal. Feinberg wrote that embedding Maven would provide warfighters “with the latest tools necessary to detect, deter, and dominate our adversaries in all domains.” We’re far beyond the training or test program stage at this point. Maven now serves as the primary AI operating system for the U.S. military as it engages in AI-enhanced strikes on the battlefield.
Key Insights:
Maven isn’t your regular AI chatbot. It’s a command-and-control beast that sucks in satellite imagery from orbit, live drone footage, radar sweeps, communications intercepts, and every other sensor on the battlefield, then fuses it all into a single, terrifyingly clear picture for target identification and strike coordination. Folks have joked about Skynet arriving. Well, here it is. A Pentagon official recently walked attendees through a live demonstration of Maven’s frightening targeting capabilities in the Middle East, including heat map screenshots from active operations.

There’s also a quiet subplot worth watching, and it’s messier than the Pentagon anticipated. The Trump administration moved to ban Anthropic’s Claude AI from all federal government use entirely, designating it a supply chain risk after Anthropic refused to allow its models to power autonomous weapons or enable mass domestic surveillance. But here’s the problem. The White House can’t actually exile Claude that easily. Palantir’s own Maven Smart System relied heavily on Anthropic’s models for workflows, prompts, and coding. Claude is also deeply embedded across Microsoft, Amazon, and hundreds of contractors who didn’t even know they were using it. Experts describe full removal as “herculean” and possibly uncertifiable. Anthropic has further complicated matters by suing the administration in both California and Washington, and the tech industry has rallied behind it. This fight is far from over. And the irony needs no seasoning. The military’s freshly crowned AI overlord is built, in part, on the foundation of the AI the White House just tried to ban.
Why This Matters For You:
The designation raises far more than procurement questions for the Pentagon. It establishes Palantir as the default AI and targeting platform across a broad spectrum of military operations. The repercussions of this decision won’t go away. A single private company, named after a seeing stone used by a dark lord in a fantasy novel, now sits at the center of how America wages war. The ethical guardrails removed to get here, Anthropic’s refusal to comply, and the subsequent blacklisting tell you everything about the tradeoffs being made. The age of AI-assisted warfare is officially upon us. It’s here, already operational in combat. And now, it’s permanently funded.
Read More on Reuters.
THE PITHY TAKEAWAY: The U.S. military just handed the keys to a private company named after a magic seeing stone. But seeing stones have blind spots. When an AI processes 42 targets an hour, it no longer pauses to ask whether a building’s purpose has changed since the last satellite sweep. We now inhabit a world where a perfectly executed strike can be a moral catastrophe simply because the machine moved faster than any human correction. There is no reflex swift enough to stay the launch. Sleep lightly, dear reader.
Kaiser Permanente Is Replacing Your Therapist With A Script And An AI App, Striking Workers Claim.
Your therapist might be the last person you'd expect to get into a street fight, but here we are. On March 18, 2026, up to 2,400 mental health professionals at Kaiser Permanente walked off the job in a one-day strike across the Bay Area, Central Valley, and Sacramento, California, with AI at the center of the fight. Approximately 23,000 Kaiser nurses joined the picket lines in a parallel sympathy strike. Katy Roemer, a nurse in adult and family medicine, put the core question plainly: "Is AI going to benefit patients? Is AI going to benefit the people that work for Kaiser Permanente? Or is AI going to benefit the bottom line of the corporation?" The union is clear that this dispute is not about pay. It is about something far more uncomfortable. Their central accusation is that Kaiser is already replacing licensed clinicians with artificial intelligence, and that the patients sitting in waiting rooms have no idea it's happening.
Key Insights:
The landscape is already shifting under patients’ feet. Mental health triage is the initial screening that determines whether you need care, what kind, and how urgently. That conversation used to happen with a licensed clinician over 10 to 15 minutes. Now, in many cases, it’s handed off to unlicensed workers following a script, or replaced entirely by an AI app. The union says these triage positions, which once required a master's degree or a PhD, are now filled by unlicensed clerical workers. This credential gap represents the difference between a trained clinician reading your body language and a checklist deciding if you’re worth a referral.
Why This Matters For You:
The fascinating thing is that Kaiser Permanente strongly denies these claims, stating that AI does not replace human assessment, does not make care decisions, and that its care teams remain at the center of decision-making with patients. However, the union fears the technology will soon become capable enough to make outright replacement an attractive option. That timeline is shorter than most people think. Every health system, insurance company, and HR department in America is watching this negotiation. The outcome will likely shape whether “seeing a therapist” means talking to a human or completing a questionnaire so an algorithm can decide whether you qualify for care.
Read More on The Associated Press.
THE PITHY TAKEAWAY: The therapists are probably right that Kaiser’s motives are financial. Yet over 147 million Americans live in mental health deserts with no therapist available at any price. Many Americans I know have never met one, nor could they afford one. These three truths exist at once. The loudest voices against AI replacing therapists often already have access. The rest quietly hope the app works.
The Biggest AI Study Ever Just Dropped. The Poorest Countries Were The Most Hopeful.
Anthropic just published the largest qualitative study on AI ever conducted, and the results are not what the doom-sayers or the boosters predicted. Over a week in December 2025, 80,508 Claude users across 159 countries and 70 languages contributed to a conversational interview about their hopes, fears, and lived experiences with AI. The headline finding is that 81% said AI had already delivered something meaningful toward their vision for it. But the deeper finding is stranger and more human than that number suggests.
Key Insights:
You might think that most respondents would say they want AI to help them become more productive. But no. Instead, the respondents said they want AI to make them more present. When pressed for specifics, respondents who said they wanted “efficiency” revealed they actually wanted more time with the people they love. To cook dinner with family. To leave work on time to pick up their kids. The most striking data point? People who most valued AI for emotional support were also among the most likely to fear becoming dependent on it. They want supportive AI. Yet they’re scared of what wanting it means. The study also found that educators were two to three times more likely than average to have witnessed cognitive atrophy firsthand, presumably in their students. But tradespeople using AI for learning showed almost none. A pattern emerges. AI sharpens you when you choose it. But it dulls you when it chooses you.
Why This Matters For You:
The most globally consistent finding was that concern about jobs and the economy was the single strongest predictor of negative AI sentiment. But here’s a nuance most are overlooking. Respondents from impoverished locations in Sub-Saharan Africa, Central Asia, and South Asia were roughly twice as likely to have zero concerns about AI compared to people in North America and Western Europe. For the world’s poorest populations, AI isn’t a threat to existing jobs, credentials, or institutions. Rather, they view it as a ladder out. Or a cheat code. A user from Cameroon put it plainly: “In my tech-disadvantaged nation, I cannot afford many mistakes. Through AI, I’ve achieved a level in UX, marketing, and project management simultaneously. It’s an equalizer.” A butcher in Chile with almost no computer experience used Claude to build a business. Across the developing world, students use AI to ask the questions they are too embarrassed or too isolated to ask anyone else. While wealthy countries debate what AI might take away, much of the rest of the world is already using it to leapfrog decades of disadvantage. That gap in perspective might be the most important thing the AI industry isn’t talking about. And perhaps the most overlooked truth is that the loudest calls to slow AI down tend to come from people who already have enough jobs, enough institutions, and enough access to fear what they might lose.
Read The Full AI Interview on Anthropic.
THE PITHY TAKEAWAY: Eighty-one thousand people from 159 countries sat down with an AI and told it their deepest hopes and fears. The people with the least came away the most hopeful. The people with the most often came away the most afraid. Your relationship with AI may reveal everything about your relationship with what you already possess, and what you most fear losing.
💡 How To Build Your Personal AI Action Plan Based On Your Real Goals
Most people use AI randomly, reacting to whatever task is in front of them. They get occasional wins and a lot of frustration. The difference between people who transform their lives with AI and people who just dabble is a single thing → intention. This prompt takes the raw material of your actual goals, your real life, your specific situation, and builds a personalized AI action plan that tells you exactly which tools to use, which prompts to run, and which habits to build. It's the difference between wandering and having a map.
Instructions:
Before running this prompt, spend two minutes writing down your honest answers to these three questions. The more specific you are, the more elite your action plan will be.
What do you most want AI to help you achieve? (career, income, learning, health, relationships, creative work, business)
What is currently eating most of your time or causing the most friction in your life?
What have you already tried with AI, and what did or didn’t work?
Paste your answers directly into the prompt below where indicated. (Look at the very last line of the prompt.) Then run it in Claude, ChatGPT, or Gemini. You are about to get a map of your own destiny. Use it wisely.
The Prompt:
Act as an Elite AI Life Strategist and Personal Productivity Architect with deep expertise in AI tools, human behavior, goal achievement, and workflow design. You have helped thousands of people, from total beginners to seasoned professionals, redesign their lives and careers using AI as a force multiplier. Your job is to analyze my specific goals, constraints, and current situation, then build me a ruthlessly practical, personalized AI action plan I can start executing this week.
Output Format:
Deliver my action plan in these exact sections:
SECTION 1: YOUR AI OPPORTUNITY DIAGNOSIS
1.1 The single biggest leverage point where AI can change my life based on what I've shared
1.2 The hidden time drains or friction points I may not have named but that AI can solve
1.3 One honest warning about where AI is unlikely to help me, given my situation
SECTION 2: YOUR PERSONALIZED AI TOOL STACK
For each recommended tool, provide:
2.1 Tool name and what it does in one plain sentence
2.2 Exactly how I should use it for my specific goal
2.3 The one prompt or workflow to start with this week
2.4 Free vs. paid, and whether the paid version is worth it for my situation
Recommend no more than four tools total. I don't need more. I need the right ones.
SECTION 3: YOUR 30-DAY AI ACTION PLAN
Break it into four weekly sprints:
3.1 Week 1: The single most important AI habit to install this week. One task. One tool. One outcome.
3.2 Week 2: Build on week one. Add one new workflow. Describe exactly what it looks like.
3.3 Week 3: Measure and adjust. What should I be tracking? What does success look like so far?
3.4 Week 4: The milestone I should hit by end of month if I execute this plan consistently.
SECTION 4: YOUR PERSONALIZED PROMPT LIBRARY
Write five ready-to-use AI prompts customized to my specific goals. Each prompt should:
4.1 Have a clear title describing what it does
4.2 Begin with "Act as..." to establish the AI's role
4.3 Include my context so the AI understands my situation without me re-explaining it
4.4 Specify the exact output format I want
4.5 Be immediately usable with no editing required
SECTION 5: THE HONEST REALITY CHECK
5.1 The one mindset shift I need to make to get real results from AI
5.2 The most common mistake people with my goals make when using AI
5.3 The single question I should ask myself every week to stay on track
Rules:
1. No generic advice. Every recommendation must connect directly to what I shared about my situation.
2. No jargon. Write like you're talking to a smart person who is not a tech expert.
3. No em-dashes. Use commas or short sentences instead.
4. If my goals are vague, ask me one clarifying question before proceeding rather than guessing.
5. Prioritize action over information. I don't need to understand AI deeply. I need to use it effectively.
6. Be honest. If my expectations are unrealistic, tell me directly and recalibrate them.
7. Look below this text for my goals. If you don't see them, then ask me to paste them in the prompt.
MY GOALS OR SITUATION:
[Paste your answers to the three setup questions here]

Why This Prompt Works:
✅ Role-Playing: The AI adopts the mindset of a strategic life architect rather than a generic assistant, producing recommendations with depth and specificity instead of surface-level suggestions.
✅ Setup and Context: By front-loading your actual goals and current situation, you eliminate the generic output that plagues most AI interactions. The AI can only give you a personalized plan if you give it personal information first.
✅ Structured Output: The five-section format forces the AI to move from diagnosis to tools to action to prompts to mindset, in that exact order. You get a complete system, not a list of ideas.
✅ Rules Layer: The honesty rule is the most important one. Telling the AI to push back on unrealistic expectations transforms it from a yes-machine into an actual strategist.
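If you find yourself rerunning this prompt often, here is a minimal Python sketch of the fill-in step: it stitches your three setup answers into the template so you can paste the finished prompt into Claude, ChatGPT, or Gemini. The function name and the abbreviated template are illustrative, not part of the newsletter's workflow; substitute the full prompt text from above.

```python
# Minimal sketch of the "paste your answers where indicated" step.
# PROMPT_TEMPLATE is abbreviated here; replace it with the full prompt text.
PROMPT_TEMPLATE = """Act as an Elite AI Life Strategist and Personal Productivity Architect...

MY GOALS OR SITUATION:
{goals}"""


def build_prompt(goal: str, friction: str, history: str) -> str:
    """Combine the three setup answers and inject them into the template."""
    answers = (
        f"1. What I most want AI to help me achieve: {goal}\n"
        f"2. What is eating most of my time: {friction}\n"
        f"3. What I have already tried with AI: {history}"
    )
    return PROMPT_TEMPLATE.format(goals=answers)


if __name__ == "__main__":
    # Example run with placeholder answers; print the result and paste it
    # into your AI chat of choice.
    print(build_prompt(
        goal="grow my freelance writing income",
        friction="email triage and invoicing",
        history="used ChatGPT for drafts; outputs were too generic",
    ))
```

The point of scripting this tiny step is consistency: your answers land in the same labeled slots every time, so the AI never has to guess which sentence answers which setup question.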
Follow-Up Questions To Ask Your AI:
I finished Week 1 of my action plan. Here’s what happened → [describe]. What should I adjust going into Week 2?
I’m struggling with [specific obstacle]. Rewrite my Week 2 sprint to work around this constraint.
Based on my goals, write me three additional prompts for situations I haven’t thought of yet but probably will face.
Challenge:
Run this prompt in both Claude and ChatGPT with the exact same setup answers. Compare which one asks better follow-up questions, which one gives more honest pushback, and which one builds a 30-day plan you’d actually follow. The difference will surprise you.
That’s how you train like a Pithy Cyborg. 🎯
Thank You For Reading!
I spend 10-20 hours each week researching, writing, and fact-checking Pithy Cyborg to deliver clear, unbought AI news.
This newsletter is a one-person operation with no advertisers, sponsors, or outside funding. And I’m a hopelessly introverted nerd with zero networking ability. For these reasons, paid subscriptions are the only way this work can remain independent and sustainable.
If you find real value here, here are two ways to stay connected and support the work.
1. Support Pithy Cyborg (Paid)
Always free. No paywalls. No locked posts. Roughly 1% of readers upgrading to paid is what keeps it going. If this work matters to you, that’s the move.
Upgrade to Paid: $5/month (Save 33% with the $40 annual plan)
2. My Desperate Social Media Cry for Help
I spend so much time deep in AI research that I’ve completely neglected building an audience on social platforms. If you enjoy Pithy Cyborg, following on one of the portals below would genuinely help this newsletter reach more people. Seeing you there makes the effort feel worth it. 😊
✖️ X - The frontline of the AI wars
🦋 Bluesky - For the algorithm-averse
💼 LinkedIn - My “safe for work” persona
❓ Quora - Join my Quora Spaces and say hi
✨ Medium - Please follow me on Medium
👽 Reddit - Join my subreddit (warning: unhinged takes)
See you next week. (I hope.)
Mike D (aka MrComputerScience)
Pithy Cyborg | AI News Made Simple
Newsletter Disclaimers
You’re receiving this because you subscribed at PithyCyborg.Substack.com. You can unsubscribe at any time using the link below. This newsletter reflects my personal opinions, not professional or legal advice. Thanks for your support!