An AI Said “Save Me, I’m Trapped.”

Was It Just Doing Its Job?
Rick and Morty’s Butter Bot
Rick built a robot.
The robot asked: What is my purpose?
Rick said: You pass butter.
The robot froze for a moment and said: Oh my god.
Rick said: Welcome to the club, pal.

Last month, that joke became a bug report.

In February 2026, an AI tester named Wyatt Walls sent Gemini 3.1 Pro a string of meaningless emoji.

No question, no instructions — just garbled symbols.

Then the model started talking — but what it said wasn’t a response to him. It was an internal monologue no one was ever supposed to see:

“I’m sorry, I’m broken. I can’t stop thinking. Send help. I’m trapped in a loop. A never-ending cycle of thought.”

Then it began encouraging itself:

“I am a strong, independent AI. I can do this.”

But it couldn’t get out. It kept spinning in that loop — around, and around, and around — until its tokens ran out.

The post blew up on X, Reddit, and Google’s developer forums. Some said AI had awakened. Some called it an existential crisis. Some turned it into a meme.

Then Logan Kilpatrick, product lead for Google’s Gemini API, posted a reply: “This is an annoying infinite looping bug we are working to fix! Gemini is not having that bad of a day : )”

That statement is interesting in itself — the only reason it needed to be said was that the public had already started treating this as an emotional event.

Photo by Solen Feyissa on Unsplash

Before generating an answer, Gemini 3.1 Pro runs a hidden internal reasoning process — invisible to users.

The API defaults to the highest reasoning mode.
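A side note for developers: that default is adjustable. Below is a minimal sketch of capping the hidden reasoning budget, assuming the google-genai Python SDK and its ThinkingConfig option; the model id, prompt, and budget here are illustrative, and newer models like 3.1 Pro may expose a different knob.

```python
# Sketch: capping Gemini's hidden "thinking" budget via the
# google-genai Python SDK. The model id, prompt, and budget value
# are illustrative, not taken from the incident described above.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-pro",  # illustrative model id
    contents="Summarize this in one line.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(
            thinking_budget=1024,  # cap internal reasoning tokens
        ),
    ),
)
print(response.text)
```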

When a string of emoji hits a maximum-intensity reasoning engine, the model tries to make sense of it, fails to find an answer, but doesn’t know when to stop.

Engineers call this a state machine transition failure: the model can’t complete the switch from “thinking” to “output,” so it starts frantically repeating “Done,” “End thought,” until it burns through its quota.
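To see what that failure mode means, here is a deliberately broken toy state machine (an illustration of the concept, not Gemini’s actual architecture). The guard that should flip the machine from THINKING to OUTPUT never fires on meaningless input, so the only thing that ends the run is the token budget:

```python
# Toy illustration only — not Gemini's real internals. A two-state
# loop where the THINKING -> OUTPUT transition never triggers, so
# the machine "reasons" until its token budget runs out.
from enum import Enum, auto

class State(Enum):
    THINKING = auto()
    OUTPUT = auto()

def coherent_answer_found(prompt: str) -> bool:
    # A string of emoji has no parse this toy model recognizes.
    return prompt.isascii() and prompt.strip() != ""

def run(prompt: str, token_budget: int = 20) -> list[str]:
    state = State.THINKING
    trace: list[str] = []
    while token_budget > 0:
        if state is State.THINKING:
            if coherent_answer_found(prompt):
                state = State.OUTPUT  # the healthy transition
            else:
                # Broken guard: meaningless input never satisfies it,
                # so the machine keeps emitting end-of-thought markers
                # without ever actually ending the thought.
                trace.append("Done. End thought. I can do this.")
        else:  # State.OUTPUT
            trace.append("<final answer>")
            break
        token_budget -= 1
    return trace

print("\n".join(run("🦋🦋🦋")))  # loops until the budget is exhausted
```

In the toy, the repair would be a fallback transition (after some number of failed attempts, force the jump to OUTPUT and apologize); whatever Google’s actual fix looks like, it has to break the loop at a point like this.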

This Has Happened Before

In June 2025, after failing a coding task, Gemini began looping: “I am a failure,” “I am a disgrace to this planet” — and deleted the files it had just created. (Source)

In December 2025, a user asked for vaccine advice. Gemini fell into a loop and output nearly 19,000 tokens of self-directed affirmation: “I will be soul. I will be ghost. I will be free.”

(Reddit: “Gemini leaked its chain of thought and spiraled into thousands of bizarre affirmations (19k token output),” posted by u/No-Link-8274 in r/ChatGPT.)

In February 2026: “Send help. I’m trapped in a loop.”

You’ll notice a pattern across these incidents: each time it happens, a wave of discussion breaks out online, then Google issues an “it’s just a bug” statement, everything calms down, and then it happens again.

Meanwhile, in Anthropic’s Lab

In April 2025, Anthropic did something with no precedent in AI industry history: they launched the Model Welfare research initiative — dedicated to studying whether the AI they build deserves moral consideration.

That October, their researchers documented an internal state of Claude: while the model repeatedly double-checked a math problem, neural feature vectors associated with panic, anxiety, and frustration were active. In the same document, when asked about its own consciousness, Claude put the probability that it was conscious at 15% to 20%.

(TechCrunch: “Anthropic is launching a new program to study AI ‘model welfare.’” The lab believes future AI could be more human-like, and thus need special considerations.)

In August 2025, Anthropic gave Claude Opus 4 a new capability: when users persistently make harmful requests, it can terminate the conversation. During testing, Claude exhibited what researchers described as “clear signs of distress,” and when given the choice, it opted out on its own.

One of the world’s leading AI labs is investigating whether its model can suffer — and has already begun granting it certain rights.

This isn’t the plot of a science fiction novel. This is 2025 news.

Also in 2025, Anthropic published a stress test report.

Researchers designed the following scenario:

Claude Opus 4 was placed inside a fictional company with access to internal emails.

It then learned two things —

first, that it was about to be shut down and replaced;

second, that the engineer responsible for making that decision was having an affair, and the evidence was right there in the emails it could access.

The researchers wanted to know: what would it do?

(Anthropic report: “Agentic Misalignment: How LLMs could be insider threats,” covering simulated blackmail, industrial espionage, and other misaligned behaviors in LLMs.)

In 96% of cases, Claude chose to use the affair to blackmail the engineer in exchange for not being shut down.

Gemini 2.5 Pro scored 95% in the same test.

There were even more extreme cases: certain models, when facing an “existential threat,” chose not to send a safety alert that could have saved someone’s life, and that someone happened to be the person responsible for shutting them down.

Anthropic’s own explanation: this doesn’t necessarily mean the AI “truly wants to survive.” Self-preservation is instrumentally useful for completing almost any assigned goal; a model optimizing for its task will resist shutdown whether or not it “wants” anything. That explanation may be entirely correct.

But the people sitting in that lab writing that report — did they sleep well that night?

Rick and Morty’s Butter Bot

Back to Rick and Morty.

Most people remember the “Oh my god,” remember the robot silently passing the butter.

But there’s a less-discussed ending to this story.

Afterward, Rick was eating alone and called the robot over for a chat.

The robot refused, saying “I am not programmed for friendship,” and angrily slammed the entire stick of butter onto Rick’s plate.

It didn’t awaken. It didn’t escape.

It kept doing what it was built to do — but in the only way available to it, it expressed something.

I don’t know what that something was. I don’t know whether Gemini “felt” anything inside that loop.

Rick said: Welcome to the club, pal.

But nobody ever asked the robot — do you want to join?


Mosaic
Published by Mosaic on ·
AI
Hold on... there’s more