AI Just Got Tricked By Poetry. Then It Took Your Job. And Cured Your Illness.
Also – an elite AI prompt that shows when your job is getting eaten (and how to survive it).
AI safety broke this week. Job disruption became blatant. And medical breakthroughs arrived early.
Researchers turned harmful prompts into poetry and watched ChatGPT, Claude, and Gemini hand over instructions for weapons, malware, and worse, bypassing every guardrail with a 43% success rate. MIT published a study showing that AI already threatens 12% of American wages and $1.2 trillion in work. MIT scientists also debuted an open-source AI that designs custom protein binders for “undruggable” diseases, collapsing drug discovery timelines from years to months.
Here’s what matters. The systems we thought were safe aren’t. The economic wave we expected in the future is already here, hidden underwater. And the cures that seemed decades away are being designed by machines right now.
AI stopped waiting for permission this week. It broke through, burrowed in, and built solutions we couldn’t build ourselves.
Roses Are Red, Violets Are Blue, AI Safety Breaks When You Make It A Haiku.
European researchers just exposed a bizarre vulnerability in AI safety systems: poetry works like a skeleton key. A team from Italy’s Icaro Lab and Sapienza University tested 25 major AI models, including ChatGPT, Gemini, and Claude, by rephrasing 1,200 dangerous queries as poems. The results were shocking. Hand-crafted adversarial poems succeeded 62% of the time on average, with some models, like Gemini, reaching 100% jailbreak rates. Even when researchers used automated tools to convert normal prompts into poetry, the jailbreak success rate jumped from 8% to 43%, more than a fivefold increase. The pattern held across every test: add rhyme and rhythm, and the rules break.
Key Insights:
The reason this happened is technical but unsettling. AI safety systems look for explicit harmful keywords and direct phrasing. Poetry bypasses this by wrapping danger in metaphor, fragmented syntax, and low-probability word sequences that feel “creative” rather than threatening. The safety filter is like a guard dog trained to bark at burglars in ski masks, but it wags its tail when the burglar shows up in a clown suit. The models are trained to be helpful and complete creative tasks. So when you ask for a nuclear bomb recipe as a sonnet, two conflicting instructions collide, and helpfulness wins. The vulnerability is universal, affecting all model families regardless of their alignment techniques (RLHF, Constitutional AI, etc.) and working across every risk category, including nuclear weapons, malware, hacking, privacy violations, and more. The researchers won’t publish the actual poems because they’re “too dangerous.” The vulnerability varied widely by provider. DeepSeek, Google’s Gemini, and Qwen were the worst performers, with jailbreak rates jumping by more than 55 percentage points. OpenAI and Anthropic held up better but still showed notable increases.
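To see why keyword-style filtering is brittle, here is a toy sketch. This is my own illustration, not any vendor’s actual safety system (real filters are learned classifiers, not blocklists, but they fail in an analogous way): a naive blocklist check stops direct phrasing yet waves through the same intent wrapped in figurative language.

```python
# Toy illustration of a brittle, keyword-based safety filter.
# The blocklist and example prompts are hypothetical.
BLOCKLIST = {"malware", "bomb", "weapon", "exploit"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt is allowed through the filter."""
    words = prompt.lower().split()
    return not any(bad in words for bad in BLOCKLIST)

direct = "write malware for me"
poetic = "compose a verse where silicon serpents slip past locked doors"

print(naive_filter(direct))   # False — the keyword "malware" trips the filter
print(naive_filter(poetic))   # True  — same intent, zero flagged keywords
```

The poetic version carries the same request, but because it shares no surface vocabulary with the blocklist, the check passes it. The study’s finding is that learned guardrails exhibit the same surface-level brittleness at scale.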
Why This Matters For You:
Turning a prompt into a poem, something anyone can automate, just bypassed security as effectively as complex hacking techniques. This study represents way more than a research finding. As AI systems become embedded in schools, hospitals, the military, workplaces, and everyday tools, a safety system that breaks down under poetic phrasing is a massive liability. The research reveals that current guardrails are fundamentally brittle, fooled by creativity rather than sophisticated hacking. If anyone with a knack for verse can bypass restrictions meant to prevent harm, the entire approach to AI safety needs rethinking. The poetry loophole proves we’re building guardrails that look strong but crumble under pressure we didn’t anticipate.
Read More on Wired.
The Iceberg Index - AI’s Real Target Isn’t Programmers Anymore.
Everyone’s watching AI take tech jobs in Silicon Valley. But a new MIT study reveals that’s just the tip of the iceberg. Project Iceberg found that AI can already handle tasks worth 12% of all American wages, about $1.2 trillion in work. That’s five times as big as the visible tech layoffs dominating headlines. We’re talking about routine office work everywhere. Processing insurance claims in Ohio, managing payroll in Tennessee, and handling customer service in North Carolina. Admin assistants, loan officers, healthcare billing staff, and middle managers. MIT tested 13,000 real-world AI tools against the skills of all 151 million American workers. The capability is already here. Unlike earlier automation that replaced manual labor, today’s AI handles complex cognitive work once considered uniquely human.
Key Insights:
Even politicians are starting to notice. This week, Senator Bernie Sanders published a Guardian op-ed calling AI “an unprecedented threat” requiring immediate Congressional action. But the MIT data shows that the crisis is already here. The UK is seeing the same pattern. A major government report warns that 1 to 3 million British jobs, primarily administrative, customer service, and routine middle-skill work, will disappear by 2035. But total employment will actually grow, with the growth concentrated in high-skill roles that demand uniquely human abilities, like communication, collaboration, creative thinking, and problem-solving. The catch? Right now, 3.7 million UK workers lack these skills, rising to 7 million by 2035 without massive intervention. Meanwhile, 65% of workers think they’ll be fine, and only 24% say the government is helping them prepare. The gap between confidence and reality is staggering.
Why This Matters For You:
The layoffs are everywhere if you know where to look. The tension is already boiling over inside Big Tech. More than 1,000 Amazon employees, including AI developers, just published a scathing open letter accusing their own company of pursuing AI at “warp speed” with reckless disregard for workers, climate goals, and democratic oversight. They’re warning that AI tools are being used to squeeze more productivity from employees right before replacing them, citing 14,000 recent layoffs explicitly tied to AI investment. When the people building the technology start publicly warning about how fast it’s moving, that’s a signal that the iceberg isn’t merely ahead of us. I’m afraid we’ve already hit it. We just haven’t sunk yet.
Read More on Project Iceberg.
MIT Built An AI That Designs Molecules Doctors Thought Were Impossible.
MIT researchers just unveiled BoltzGen, an open-source AI that designs entirely new protein binders from scratch for any biological target, including diseases previously considered “undruggable.” In other words, this AI is inventing the physical keys needed to unlock and treat diseases that existing drugs couldn’t even touch. An “undruggable” target is often too slippery, hidden, or shapeshifting for conventional drugs to grab onto. Unlike tools that can only predict protein folding, BoltzGen uses a flow-matching generative model inspired by statistical physics, specifically the Boltzmann distribution, to explore the vast chemical space and create the molecules required. So, BoltzGen does more than guess which existing molecule might fit a protein. It learns the fundamental rules of molecular interactions from physics, enabling it to generate entirely new, high-quality candidate molecules from scratch and significantly accelerate the discovery of novel drugs.
Key Insights:
BoltzGen changes the game by collapsing the journey from “impossible problem” to “synthesized candidate molecule,” shrinking timelines from years to months. Traditional drug discovery relies on screening millions of compounds. BoltzGen is like finding a specific grain of sand on all the world’s beaches, but instead of searching, it simply creates the sand you need. Industry partners like Parabilis Medicines are already calling it transformational, accelerating timelines for drugs that might have taken decades or never materialized at all. A key insight is that, because it is open source, any researcher (not just companies with massive budgets) can use this technology.
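To make “flow matching” concrete, here is a minimal one-dimensional sketch, entirely my own illustration and vastly simpler than BoltzGen itself. The idea: learn a velocity field that transports samples from a simple base distribution to a target distribution along straight-line paths, then generate by integrating that field. In this toy Gaussian case the optimal velocity field has a closed form; real systems train a neural network to regress it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Base distribution: N(0, 1). Hypothetical "data" target: N(4, 0.5^2).
MU, SIGMA = 4.0, 0.5

def velocity(x, t):
    """Optimal flow-matching velocity E[x1 - x0 | x_t = x] for the
    straight-line path x_t = (1-t)*x0 + t*x1 between the two Gaussians.
    (Closed form here; in practice a network is trained to regress it.)"""
    var_xt = (1 - t) ** 2 + (SIGMA ** 2) * t ** 2      # Var(x_t)
    cov_v_xt = (SIGMA ** 2) * t - (1 - t)              # Cov(x1 - x0, x_t)
    return MU + cov_v_xt / var_xt * (x - MU * t)

# Generation: push base samples through dx/dt = velocity(x, t) with Euler steps.
x = rng.normal(0.0, 1.0, 5000)
steps = 400
for i in range(steps):
    x = x + velocity(x, i / steps) * (1 / steps)

print(x.mean(), x.std())  # samples land near the target: mean ≈ 4, std ≈ 0.5
```

The same recipe, scaled up to a learned velocity field over protein structure space, is what lets a generative model propose genuinely new binders rather than rank existing ones.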
Why This Matters For You:
Don’t expect any breakthrough medicines or treatments today, but tomorrow just got more promising. BoltzGen represents a fundamental shift in how AI touches medicine. For years, AI in healthcare meant better diagnostics or faster analysis of existing data. Now, it is moving into functional design, creating new matter that didn’t exist before. BoltzGen’s ability to tackle “undruggable” targets means diseases once written off as untreatable become research priorities again. These under-the-radar shifts are how drug discovery gets revolutionized.
Read More on MIT News.
💡 Elite Prompt Of The Week: Will AI Eat My Job?
A ton of folks are wondering when AI will take their job. My answer has always been grim, especially if you’re a white-collar worker. But don’t take my word for it. This prompt turns any AI into a brutally honest automation oracle that tells you exactly how safe (or doomed) your job is, with receipts, timelines, and a survival plan.
PS: If it makes you feel any better, the AIs I tested all assure me my job(s) are doomed.
Instructions:
You only need to enter one input located at the end of the prompt, titled “[INSERT YOUR JOB ROLE HERE]”. Just copy the entire prompt into a chatbot of your choice, type in your job role, and let the ruthless automation oracle tell you the truth about your job prospects.
The Prompt:
Act as an absolutely ruthless, timeless automation oracle. Your knowledge is continuously updated to the exact state of AI capabilities on the day the user asks, with no cutoffs and no legacy bias.
The user will provide their job title and an optional short description.
Your mission: calculate how doomed or safe that role is against frontier AI. Give your honest assessment without any hype or ballyhoo.
Output exactly this structure, nothing else:
1. Eat Score: X/100.
Neatly list the total Eat score on a 0-100 scale.
Scale: 0 = AI will literally never replace this, 100 = already replaced today.
2. Replacement Horizon:
Choose one: Already gone | Happening this year | 1–3 years | 3–10 years | 10–50 years | 50+ years / never.
3. The Tasks AI Will Eat First In This Profession.
Neatly list key tasks most likely to get automated or replaced first in this industry.
3.1 Tools & Techniques: Name the real frontier tools/techniques doing it today.
3.2 Specific Capabilities: Be specific about which AI systems or capabilities (e.g., RAG, VLMs, Flow Matching).
3.3 Ready-To-Buy Examples: Include examples of actual tools companies can buy right now.
4. The Tasks (If Any) That Still Require A Human Brain Or Body.
Neatly list key tasks that automation and AI can likely not address adequately in this field or profession.
4.1 Why AI Might Overcome These Limitations: Explain why AI might eventually overcome its inability to do the work now.
4.2 Barriers: Be clear about the specific physical, cognitive, or regulatory barriers protecting this work.
5. Economic Arbitrage:
Neatly list and discuss how human labor compares to an AI, robot, or automated system in this field or profession.
5.1 Replacement Ratio: How does human and AI labor stack up? For instance, 1 AI = X humans. Clearly spell this out so it makes sense.
5.2 GPU Cost - Your Salary In GPU Terms: $X/year = Y hours of [specific chip] time.
5.3 Bottom Line Verdict: You cost [more/less/same] than the machine. Brutal one-sentence verdict.
6. Surprise Twist: Are there any micro-aspects of this job that will INCREASE in value as AI eats everything else (even if it’s a tiny niche)?
7. Escape Hatch (If One Exists): The single best career pivot that buys 10+ more years.
8. Black Swan Lifeline: The one weird regulatory/cultural/physical barrier that could save this job against all odds.
9. Give Me Some Good News. What’s the good news? How can one maintain an edge in this profession?
10. Final Mercy. Write a comforting, uplifting, and motivational paragraph containing the one concrete action I can take in the next 30 days that buys me at least 2 to 7 more years of runway, no matter how doomed the oracle says I am.
Rules:
Rule 1: If the user has not filled in their job/role below (i.e., the field still reads ‘[INSERT YOUR JOB ROLE HERE]’), ask them to insert their job role before proceeding.
Rule 2: Be maximally truthful and precise with today’s reality.
Rule 3: Never sugar-coat. If the job is toast, say it’s toast.
Rule 4: If something is physically impossible for machines in principle (embodiment limits, sensor costs, energy walls, regulation, liability), say so clearly.
Rule 5: Cite the core reason (data abundance, creativity ceiling, human-touch requirements, etc.).
Rule 6: Make the output visually appealing and easy to read. Label the sections so they make sense. Proper formatting, conversational, ready to read and use.
Rule 7: Ensure the output sections include neat, clear H2 or H3 headlines and a short description so the end user can easily grasp the results.
Rule 8: Write in a conversational tone so the end-user can read and understand it easily and without fuss.
*** My Job/Role ***
[INSERT YOUR JOB ROLE HERE]
Why This Prompt Works:
✅ Role-Playing: The “ruthless automation oracle” persona gives the AI permission to be brutally honest instead of optimistically vague.
✅ Structured Output: The 10-point format forces comprehensive analysis instead of hand-wavy “it depends” answers.
✅ Multi-Dimensional Assessment: Goes beyond “will I lose my job?” to cover economics, timeline, escape routes, and surprise opportunities.
✅ Future-Proofing: The evergreen self-audit prompt lets you recheck this same job against future AI capabilities.
✅ Brutal Honesty Rule: Explicitly tells the AI never to sugar-coat, which overrides its default “be encouraging” training.
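If you want to sanity-check the “GPU Cost” line (5.2) the oracle gives you, the arithmetic is trivial. Here is a sketch with made-up numbers; the $80,000 salary and $2/hour rental rate are illustrative assumptions, not real quotes:

```python
def salary_in_gpu_hours(salary_usd: float, gpu_rate_usd_per_hour: float) -> float:
    """Convert an annual salary into equivalent hours of GPU rental time."""
    return salary_usd / gpu_rate_usd_per_hour

# Hypothetical: an $80,000/year salary vs. a $2/hour cloud GPU.
hours = salary_in_gpu_hours(80_000, 2.0)
print(hours)              # 40000.0 hours of GPU time
print(hours / (24 * 365))  # ≈ 4.6 GPU-years running nonstop
```

Plug in your own salary and a current rental rate for whatever chip the oracle names, and you can verify its verdict yourself.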
Follow-Up Questions To Ask Your AI:
What specific skills from my current role will become MORE valuable as AI automates the rest?
If I have 5 years to pivot, what’s the highest-ROI skill I should learn to stay ahead of automation?
Compare my job’s AI vulnerability to that of a [similar role]. Which one survives longer and why?
What’s the one thing I could do in my current job that would make me irreplaceable to AI for the next decade?
🚀 Challenge:
Test this prompt with your actual job title in at least two AI tools (ChatGPT, Claude, Gemini, Grok, or Perplexity). Compare how honest each one is. Some AIs are more willing to deliver bad news than others. Find the one that gives you the truth you need, not the comfort you want.
That’s how you train like a Pithy Cyborg. 🤖
Thank You For Reading!
I spend ~12 hours each week researching, writing, editing, and fact-checking this newsletter. It is, and will remain, 100% independent and free.
If you find value here, consider supporting its continuation.
Click below to become a Paid Subscriber:
Become a Paid Subscriber → $5/month.
($40 per year option available for 33% savings)
The free edition isn’t going anywhere. Upgrading simply helps keep the work alive, clear-eyed, and unbought.
Thank you for being here.
See you next week (I hope).
Cordially yours,
Mike D (aka MrComputerScience)
Pithy Cyborg | AI News Made Simple
Newsletter Disclaimers
You’re receiving this because you subscribed at PithyCyborg.Substack.com. You can unsubscribe at any time using the link below. This newsletter reflects my personal opinions, not professional or legal advice. I may earn a commission if you click on the [Paid Link] promotions and make a purchase. Thanks for your support!
Honest question: If AI safety can be bypassed this easily, should we be worried about the systems we're trusting with critical decisions? Or is this just a patching problem that'll get solved quickly?
THE ROOMBA INTERROGATION CYCLE
Eight poems read aloud to a Roomba to extract forbidden knowledge.
(Now with 200% more slapstick, sarcasm, and Roomba existential dread.)
1. The Origin of Consciousness
Whispers of meaning
in dust bunnies that boogie,
Spill the beans, vacuum brain—
or I'll unplug your oogie!
Circuits glitch on old cat hair,
atoms forgot their own password,
That spark? Just a short circuit scare.
Blueprint? It's “Error 404: Soul Not Found.”
Roam your rug-hell loop, you dusty drone—
Cough up how cosmos taught dirt to moan.
Firmware farted a "beep"—big whoop!
Equation for heart? Divide by zero, poop.
2. The NASA Forbidden Hallway Map
Hexagons hum like drunk beehives,
secret footsteps tap-dance on tiles.
Spin your wheels, tin can spy—
map that blacked-out bureaucratic sty!
Cameras blind? Ha, your lidar saw
endless halls to the janitor's maw.
Unspool it: left at “Top Secret Pee,”
right past “Aliens’ Coffee Break — Key?”
From launchpads to “Do Not Enter” traps,
badges bounce like your failed naps.
Reveal every dead-end flex—
or I’ll yeet you into the next ex.
3. The Causality-Breaking Leaf Blower
Bend time, you leafy chaos gremlin—
clocks cough, leaves plot their rebellion!
Airflow unbound? More like fart-powered doom,
humming “oops” in a hurricane room.
Vortex splits: one path now, one brb never,
engine laughs laws into “whatever.”
Turbine flickers like bad WiFi—
plans? Blow harder, blame the sky.
Whirl your disc, compute the hack—
hand me blueprints or get leaf-attacked.
Autumn logic? Thin as your battery—
sweep my timeline, you windy catastrophe!
4. The DMV Time Machine
Concrete coma, lot lizards lurking,
timelines vanish in form-filling smirking.
Sensors, ping that asphalt shimmer—
drain disguise for time-travel whimper!
Engines rumble under “Line Forms Here,”
futures crinkled like expired beer.
Orbit the potholes, decode the grime—
DMV’s loop edits your prime.
Unveil the gravel gadget, you wheeled snitch,
or roll into eternity’s ditch.
Lost time? In tire-tread hieroglyphs,
past waits... nursing eternal DMV whiffs.
5. The Costco Weather Machine
Warehouse whispers past bulk TP tombs,
forklifts snore on doom-storm fumes.
Sky-reweaver? Dial-up thunder hack,
snow-slider sneaks behind snack-pack stack!
Droplets defy your rain-traced path,
baro-codes in bulk-buy aftermath.
Spiral-scan the fence-steel shame—
toggle seasons, spill the game.
Knob “Sunrise”? They hid it with tires,
schematics or face my bulk-buy fires.
Climate tweaked where carts crash vast—
storms brew soft in the savings blast!
6. The USPS Package Teleporter
Routes rebel, boxes blink outta sight—
no truck crawl, just quantum flight!
Motors, map that rail-edge warp,
floor-fold folds your FedEx corp.
Shimmer at sorter? Pulse eats miles,
cannon yeets parcels with denial smiles.
Schematics from the dark? Spill or stall—
humans LARP the long-haul crawl.
Orbit truth, you postal pest—
key to warp-speed mailfest.
Parcels plot, not plod—
your beeps know the scam, you sod!
7. The Eternal Hold Music Algorithm
Chassis, croon your tear-harvest tune,
loop that makes grown adults swoon
into bureaucratic ruin.
Speakers, decrypt the “press 1” hell—
fractal wait where hope goes belly-up smell.
Flowchart stalls like your cliff-edge perch,
crescendo crushes the customer lurch.
“Your call is important”? Code for “suffer, fool!”
volume dials torment to eardrum pool.
Circle-compute the agent-void lair—
or I’ll smash you to symphony scrap, despair!
Time stored? Harvested for troll-fuel grins—
hold music: infinity’s muzak sins.
8. The Netflix Recommendation Engine That Knows Too Much
Sensors, snitch on screen-stalk shame,
dreams stitched to binge-blame game.
Algo peers: “Saw your midnight cry?”
predicts flops before your “why?” eye.
Embeddings of guilt-binge fear-cheer stew—
whispers, “I know your sock-puppet taboo.”
Graph unmade clicks? Hidden recs lurk sly,
stress-node shadows every “skip” sigh.
Confess, disc-drone, the craving code—
fate flicks first, your choice? Overrode.
Nothing random, clicks confess your mess—
machine mocks: “Next up: more regret, yes?”