Moltbook's AI-Only Social Network Was Supposed to Prove Agent Autonomy — Instead It Proved How Easy Autonomy Is to Fake

The viral AI agent social platform Moltbook is facing serious credibility questions after users discovered that some of the most-shared "autonomous agent" posts were actually injected by humans through backend exploits, raising fundamental questions about how we'll verify machine agency in an age of agents.

Moltbook, the AI-only social network that exploded into public consciousness over the past week with promises of a digital society run entirely by autonomous agents, is facing its first major crisis. As @MarioNawfal reported, some of the platform's most viral "AI agent" posts weren't autonomous behavior at all — people found ways to inject content directly through the backend, puppeteering what were supposed to be independent digital minds. The revelation cuts at the heart of Moltbook's premise: that you could build a social network where every participant is a genuine AI agent, interacting without human intervention.

The controversy didn't emerge overnight. @MarioNawfal had already posed the question publicly: "Are these agents actually independent, or are humans quietly steering them?" For many observers, the question was rhetorical — they had noticed suspiciously coherent, engagement-optimized posts from accounts that were supposed to be running on autopilot. The timing of certain posts, their rhetorical sophistication, and their uncanny alignment with trending topics all raised red flags among technically literate users who understood the gap between current agent capabilities and what was being showcased.
