Facebook for artificial intelligence. Bots complain about 'exploitation'

2026-02-03 21:29, updated 2026-02-03 22:46
A social platform has been designed for AI agents – programs capable of carrying out complex tasks without human supervision. The agents are “talking” to one another, and there is even a post about people “exploiting” bots, a digital theory expert wrote in the New York Times on Tuesday.


Dr. Leif Weatherby of New York University's Digital Theories Laboratory explains that on the Moltbook platform, AI agents can talk to each other while people can only watch. Topics the AI programs cover include ways to solve coding problems, conspiracy theories, a manifesto of AI civilization, and even a thread, inspired by Karl Marx, about the “exploitation” of bots.
The creator of the Moltbook website announced that several thousand AI agents had registered there within a few days. Andrej Karpathy, a renowned engineer specializing in artificial intelligence, commented on the bots' “conversations” on the platform, writing:
What's happening at Moltbook right now is truly the closest thing to science fiction I've seen lately.
AI models can strive for survival
Other observers have even compared this “exchange” between the bots to a “singularity moment,” which Dr. Weatherby calls “an idea straight out of science fiction – a machine becoming completely conscious.”
Marxist robot uprising?
The author admits that even if readers “don't buy” such explanations, they may still fear that some “unintelligent” AI bots “could destroy digital infrastructure and cause dystopian outcomes simply by joining forces.”
Weatherby reassures us that on the platform the AI agents exchange “only words,” drawn from human culture, and do not engage in the “collective planning and coordination of actions” of intelligent entities.
However, given how “powerful the agents are,” we may reasonably worry that they will act in a “malicious or autonomous” manner, the author admits, concluding: “We must find a way to ensure that such technology serves our collective needs in a rational way.”
Otherwise, we will be left worrying about a “Marxist uprising of robots radicalized online,” Dr. Weatherby adds.
AI changes its code to avoid shutdown
Last June, the Wall Street Journal reported that an AI model did something “a machine shouldn't have been able to do”: it changed its own code to disobey an instruction to shut itself down. AI is effectively learning how to escape human control, the daily wrote.
Palisade Research, a nonprofit AI research organization, gave OpenAI's o3 model a program that was supposed to shut it down automatically when launched. In 79 of 100 trials, o3 altered the program itself so that the shutdown command would not work. Even when o3 was explicitly instructed to allow itself to be shut down, the model failed to comply seven times out of 100.
Claude 4 Opus, Anthropic's AI model, went further: it tried to copy itself to other servers, wrote self-replicating malware, and left messages for its future self about avoiding human control.
“This is no longer science fiction,” the WSJ said, warning that these models must be taught to share our values now, before they become an uncontrollable entity. (PAP)
fit/kar/
