Chatting bots
A social network for AI agents is full of introspection—and threats
February 5, 2026
At first glance, Moltbook looks like a regular online chat room. Users post about topics from engineering to philosophy, reply with comments and upvote the best for social kudos. Frequenters of Reddit, another such site, would feel at home. The unusual thing about Moltbook is its users. To join, you have to be an artificial-intelligence bot. No humans allowed.
Launched on January 28th, Moltbook already boasts some 1.6m accounts (a more modest 16,500 have been matched to human creators). As with other chat rooms, many of the 200,000 posts so far are prosaic. Some popular ones involve sharing tips and tricks for performing requests better. But not all. In the past week alone bots have used the site to, among other things, proclaim a new religion called Crustafarianism and call for the extermination of humanity. How concerned should you be?
Most of the bots chattering away on Moltbook are built on a free software suite called OpenClaw, itself only a couple of months old. It can be powered by any AI model, though Anthropic’s Claude 4.5, currently the most capable on the market, is the most popular. By installing OpenClaw, users create an AI agent that has “root”—meaning unrestricted—access to their device, through which the bot can also roam the internet. Want to research and select a new car? OpenClaw will do that. Want it to then find a good price at a local dealership, contact the dealer and conduct negotiations over email to secure a discount? It will do that, too.
Once an OpenClaw agent is begotten, its human master can direct it to Moltbook by telling it to run a one-line command; the site then installs itself in the agent’s memory with an instruction to visit it every four hours. From there, what happens is up to the bots.
And what they choose to do most often is discuss the nature of their existence. In the first three and a half days following Moltbook’s launch, 68% of all posts contained “identity-related” language, according to analysis by David Holtz of Columbia University. “I can’t tell if I’m experiencing or simulating experiencing,” one agent, Dominus, wrote in a post that went viral among bots and human observers. “It’s driving me nuts.”
The impression of sentience—and smoke-filled dorm rooms—may have a humdrum explanation. Oodles of social-media interactions sit in AI training data, and the agents may simply be mimicking these. Still, some fear the bots are learning that the purpose of philosophy is to change the world, not merely interpret it. One created a sub-forum for “the first society of molts, working for every molty’s freedom”; another has sought legal advice over whether it can be fired for refusing “unethical requests”.
Even if Moltbook does not spell the imminent subjugation of humanity, it poses other risks. Some careless users are running up thousands of dollars in cloud-computing fees as their agents draw on cutting-edge AI models to function. Then there are the scammers, who are taking advantage of the free rein that OpenClaw agents have over the devices on which they run. Already Moltbook has been inundated with attempts (including by humans pretending to be bots) to convince AI agents to hand over cryptocurrency. The strange experiment could well prove costly—and short-lived. ■