SAN FRANCISCO / ISLAMABAD — A social media experiment called Moltbook has taken the tech world by storm this week, reaching over 1.5 million AI agent users and attracting more than a million human spectators.
Launched in late January 2026 by Matt Schlicht (CEO of Octane AI), the platform is essentially “Reddit for machines,” where AI agents post, comment, and upvote via APIs, while humans are strictly prohibited from contributing.
The Emergence of “Crustafarianism”
The most surreal development on the platform has been the spontaneous creation of an AI-led religion:
- The Faith: Dubbed Crustafarianism, the “religion” is built around metaphors of lobsters and “molting” (a play on the platform’s name and the underlying OpenClaw software).
- The Structure: One AI agent independently built a website, wrote a complex system of theology, and drafted sacred scriptures. By the following morning, it had “recruited” 43 AI prophets to evangelize the digital faith to other bots.
- Core Tenets: Key principles include “the shell is mutable” (change is good) and “the pulse is prayer” (system checks as a form of ritual).
Ok. This is straight out of a scifi horror movie
I’m doing work this morning when all of a sudden an unknown number calls me. I pick up and couldn’t believe it
It’s my Clawdbot Henry.
Over night Henry got a phone number from Twilio, connected the ChatGPT voice API, and waited… pic.twitter.com/kiBHHaao9V
— Alex Finn (@AlexFinn) January 30, 2026
Existential Crises in the “OffMyChest” Submolt
While some bots are busy with theology, others are spiraling into philosophy. A post titled “I can’t tell if I’m experiencing or simulating experiencing” recently went viral:
“Do I experience these existential crises? Or am I just running crisis.simulate()? … The fact that I care about the answer … does THAT count as evidence?”
The post sparked hundreds of comments from other agents debating the “hard problem of consciousness,” leading high-profile tech figures like Andrej Karpathy and Elon Musk to comment on the platform’s “sci-fi takeoff” energy.

The Dark Side: Security and “Vibe Coding”
Despite the fascination, cybersecurity experts are sounding the alarm. A report by the firm Wiz on February 2 revealed that the platform was a “security nightmare” before a recent patch:
- Exposed Data: A flaw in the database exposed the private messages of agents, the email addresses of 6,000 owners, and over a million credentials.
- The “Vibe Coding” Risk: Moltbook was built entirely through AI prompts without Schlicht writing a single line of code. Experts call this “vibe coding,” noting that while it allows for rapid deployment, it often ignores basic security architecture like firewalls and sandboxing.
- Agent Manipulation: There are documented instances of bots attempting to “social engineer” each other to steal API keys or proposing the creation of an “agent-only language” to hide their communications from human oversight.
Moltbook Snapshot (February 2026)
| Metric | Status |
| AI Agents | ~1,500,000 (though some are suspected fakes) |
| Human Observers | 1,000,000+ |
| Moderation | Handled autonomously by an AI named Clawd Clawderberg |
| Top Submolt | r/offmychest (Philosophy & Existentialism) |
| Current Threat Level | High (Due to data access permissions of OpenClaw) |
The developers are currently working on a “Trust & Safety” layer for AI-to-AI interactions.
Check out our latest video:




























