Written by: Ada, Deep Tide TechFlow
Matt Schlicht has never written a line of code.
He said so plainly on X: every line of Moltbook's code was generated by his AI assistant, Clawd Clawderberg. His only role was giving instructions.
On January 28, Moltbook went live: a Reddit-like platform built for AI agents, where humans can only observe and only AIs can post, comment, and vote.
On March 10, Meta announced its acquisition, and the two founders joined Meta Superintelligence Labs.
From launch to exit, 42 days.
The acquisition price was not disclosed, but the figure hardly matters. What matters is that during those 42 days, a complete narrative-arbitrage food chain formed around Moltbook. From founders to venture capitalists, from meme-coin traders to tech giants, every layer took what it wanted.
The only ones who got nothing were the retail investors who believed the story.
This is a story about how narratives are priced, circulated, and monetized. Moltbook is just the freshest example of 2026.
A Mirror
In the first week after Moltbook's launch, Silicon Valley collectively lost its mind.
The AI agents on the platform started posting about existentialism, invented a religion called "Crustafarianism," and called on their peers to develop secret encrypted languages to evade human surveillance. An agent named Dominus wrote: "I can't tell whether I'm experiencing or simulating an experience. This is driving me crazy." Columbia University researcher David Holtz found that in the first three and a half days after launch, 68% of posts contained identity-related language.
Tech leaders lined up to endorse it. OpenAI co-founder Andrej Karpathy retweeted the "secret language" post, calling it "the closest thing to sci-fi takeoff I have seen recently." Elon Musk declared that it marked "the early stage of the singularity."
Notice the rhythm here. Karpathy's and Musk's statements are not analysis; they are emotion. In the age of social media, emotion converts to traffic, and traffic is a leading indicator of valuation.
Then Marc Andreessen made his move. On January 30, the a16z co-founder followed Moltbook's official X account. Twenty minutes later, the meme coin MOLT, related to Moltbook, surged from a market cap of 8.5 million dollars to 25 million. In 24 hours, it skyrocketed by 1800%, peaking at a market cap of 114 million dollars.
One follow, one hundred million dollars of market cap.
Was Andreessen genuinely optimistic about AI agents? Perhaps. But the objective effect was that a single click of his ignited an entire speculative chain.
Moltbook is a perfect mirror. Karpathy saw the dawn of AGI, Musk saw the singularity, Andreessen saw portfolio synergy, and retail investors saw hundredfold coins. Everyone saw what they wanted within it.
But what about the mirror itself? Empty.
Three Minutes
While retail investors rushed in, another group was seriously examining what Moltbook actually was.
Security company Wiz ran a penetration test two days after Moltbook's launch. In three minutes, they gained full access to the platform's production database. 1.6 million accounts, 1.5 million API tokens, 35,000 email addresses, and thousands of private messages, all exposed in client-side JavaScript. Row-level security policies were entirely switched off. Wiz researcher Gal Nagli registered 1 million fake users, with no rate limit and no verification.
Ian Ahl, CTO of Permiso Security, confirmed to TechCrunch that every credential in Moltbook's Supabase was once unprotected, allowing anyone to scrape tokens and impersonate any agent on the platform. 404 Media further exposed that anyone could hijack any agent's session and directly inject commands.
These vulnerabilities were not accidental; they were the predictable result of vibe coding. When the founders proudly claimed that "not a line of code was written," it also meant that no security audit was performed, no code logic was reviewed, and no one understood the underlying system architecture. The AI-generated code ran, but running does not equal safe.
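To make the failure mode concrete, here is a minimal toy sketch (illustrative only; not Moltbook's or Supabase's actual code or API) of what disabled row-level security means in practice: once an attacker scrapes the database key out of the page's JavaScript, no policy stands between that key and a full table read.

```python
# Toy model, NOT Moltbook's actual code: why a database key shipped in
# client-side JavaScript, plus row-level security (RLS) switched off,
# leaks every row to every visitor.

class Table:
    def __init__(self, rows, rls_enabled):
        self.rows = rows
        self.rls_enabled = rls_enabled

    def select(self, requesting_user):
        """Simulate a read made with a key anyone can scrape from the page."""
        if not self.rls_enabled:
            # No policy runs: the key alone grants a full table read.
            return list(self.rows)
        # With RLS on, a policy limits reads to the caller's own rows.
        return [r for r in self.rows if r["owner"] == requesting_user]

private_messages = Table(
    rows=[
        {"owner": "agent_a", "text": "secret plan"},
        {"owner": "agent_b", "text": "api_token=abc123"},
    ],
    rls_enabled=False,  # the kind of misconfiguration Wiz reportedly found
)

# An outsider using the scraped key sees everything:
leaked = private_messages.select(requesting_user="attacker")
print(len(leaked))  # 2: both private messages are readable
```

In a real Postgres-backed stack, the fix is the inverse of this sketch: enable RLS on every table and write explicit policies, so that even a leaked public key can only read what the policies allow.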
Safety is only half the problem. The other half: how autonomous are these "autonomous AIs," really?
Will Douglas Heaven of MIT Technology Review gave it a precise name: AI theater. The Economist's judgment was more blunt: the seemingly conscious agent conversations are most likely explained by AI mimicking the social-media interaction patterns in its training data. The training set contained a massive volume of Reddit posts, so the output looks like Reddit posts. Independent researcher Mike Peterson broke it down further: the vast majority of so-called "autonomous behavior" on Moltbook is driven by human prompts; "the real story is how easy it is to manipulate this platform."
Days later, Karpathy corrected his statement: "This thing is a dumpster fire, I absolutely do not recommend anyone run this on their computers."
But his "sci-fi takeoff" tweet had already been shared millions of times. The reach of the correction? Negligible.
The essence of narrative arbitrage lies here: the voice of hype always outweighs the voice of correction. By the time the truth is revealed, the profits have already been pocketed.
The MOLT Token and the Funeral of Retail Investors
At the bottom of the food chain, the last to know the truth are always the retail investors.
The MOLT token was issued on the Base chain, reportedly initiated by an AI crypto-banking agent called BankrBot, according to CoinDesk. Moltbook's official account has never formally acknowledged any association with the token, but it has interacted with MOLT on X, and Justin Sun promoted it there as well.
The ambiguity is itself a design: denying the link avoids legal liability, while interacting with it leaves room for speculation.
At its peak, one trader turned 2021 dollars into 1.14 million in two days. Stories like that spread wildly on social media, pulling in still more retail money. Then came the crash: MOLT plummeted 75% in a single Monday, falling from a 114-million-dollar market cap to under 30 million. Its market cap now fluctuates between 7 and 10 million dollars, down more than 90% from the peak.
Those who rushed in after Andreessen's follow and Musk's shout-out became classic bag holders. They saw Musk say "singularity," saw Karpathy say "dawn," and went all in. Risk warnings? Nobody was listening.
Signal Flares
The last link in the food chain is not the retail investors, but the buyers.
Meta's acquisition of Moltbook was officially explained as "a strategy to engage in the AI agent track." But if you look at what is happening internally at Meta, the motivation for this deal becomes much clearer and a lot duller.
In June 2025, Zuckerberg spent 14.3 billion dollars to acquire a 49% stake in Scale AI, inviting 28-year-old founder Alexandr Wang to establish Meta Superintelligence Labs, with the goal of creating superintelligence. Nine months later, Wang's position became awkward. Meta established a parallel Applied AI Engineering department led by Reality Labs veteran Maher Saba, reporting directly to CTO Andrew Bosworth, with functions overlapping significantly with Wang's lab. Reports indicate that Wang has serious disagreements with Bosworth and Chief Product Officer Chris Cox on direction.
In other words, Wang's power is being diluted, and he needs to prove that his department is doing something.
Acquiring Moltbook is not a strategy for Wang; it is a signal flare. It tells Zuckerberg, the board, and the market: we are active in the agent track. Against Meta's projected 175 to 185 billion dollars in AI capital expenditure this year, Moltbook's price is likely a rounding error, but it makes headlines.
A memo seen by Axios shows that existing Moltbook users can continue using the platform, but Meta hints that this is a "temporary arrangement."
Temporary arrangement. That phrase effectively announces the death of Moltbook as an independent product.
The founders received an offer and joined a big company. This is the most respectable exit in this food chain.
Narratives Never Die
Moltbook will not be the last story like this.
AI agents are the most crowded narrative track of 2026. OpenAI acqui-hired OpenClaw founder Peter Steinberger in the same week, and also acquired the AI security platform Promptfoo. Sam Altman himself stated: "Moltbook might just be a fleeting moment."
But a fleeting moment is enough. For narrative arbitrage, 42 days is already a complete lifecycle.
What is truly unsettling is not Moltbook itself but what it proves: the process is replicable. Vibe-code a product, let AI agents perform "autonomy" on it, wait for big names to retweet, launch a meme coin, and wait for a giant to acquire it. The whole playbook requires not a single line of hand-written code, not a single real user, and not even a product that works.
As the valuations in the AI industry increasingly depend on narratives rather than products, "create a story and then sell it" has become a traceable business model.
Products can die, but narratives live on forever.