Jan 28, 2026

4 min read

From Clawdbot to Moltbot: What Happened When a Personal AI Assistant Blew Up

An open-source AI assistant called Clawdbot went from niche project to 77,000 GitHub stars in two months. Then, in the span of three days, it got hit with a trademark notice from Anthropic, had its Twitter handle stolen by crypto scammers, and saw security researchers expose over a thousand unprotected control panels online.

The project survived. It's now called Moltbot. But the chaos offers a useful look at what happens when personal AI tools scale faster than anyone planned — and what enterprise teams should take from it.

What Moltbot Actually Is

Moltbot connects AI models like Claude to the messaging apps you already use — WhatsApp, Telegram, Slack, Discord, Signal, iMessage, and about a dozen others. Instead of opening a separate AI app, you text it. It texts back.

It runs on your own hardware rather than a cloud service. Your conversations, memories, and data stay on your machine.

What people actually use it for:

  • Morning briefings delivered to WhatsApp before you wake up

  • Automating online shopping and checkout flows

  • Controlling your computer remotely through chat

  • Scheduling and reminders that follow up automatically

  • Browser automation — filling forms, navigating sites, extracting data

What makes it different from ChatGPT or Claude:

  • It messages you first. Scheduled check-ins every 30 minutes, alerting you only if something needs attention.

  • It remembers. Preferences, past conversations, decisions — stored as files you can read and edit yourself.

  • It takes actions. Browser control, form filling, sending messages on your behalf.

That combination — messaging integration, proactive outreach, persistent memory, real-world actions — is what made it spread. It's also what made the security problems serious.

The Great Molt

Anthropic sent a trademark notice. The original name, Clawdbot, was too close to Claude.

The community gathered on Discord to pick a new name. In less than 2 hours, the decision was made: Moltbot. A reference to molting — shedding an old shell for a new one.

Within seconds of announcing the change on Twitter, automated bots seized the old @clawdbot handle and posted a crypto wallet address. Peter Steinberger, the project's creator, had to call contacts at X to resolve it.

The documentation now refers to this as "The Great Molt."

The Security Exposure

Two days before the rebrand, security researchers scanning the internet found over 1,000 Clawdbot control panels publicly accessible with no authentication.

The problem wasn't a bug in the software. Clawdbot automatically trusts connections from localhost. When users ran it behind a reverse proxy on the same server, all external traffic appeared local — and got approved automatically.

The Register reported that one researcher obtained a user's private key in five minutes using a prompt injection attack sent via email.

This matters because Clawdbot isn't just a chatbot. It can send messages, execute commands, and control browser sessions on the user's behalf. An exposed control panel is a master key to someone's digital life.

The project now defaults to refusing all connections unless authentication is explicitly configured.

What the Chaos Revealed

The security exposure wasn't a one-off accident. It reflects a real tension in personal AI tools: the easier they are to set up, the more likely users skip the security steps.

Clawdbot's documentation is now unusually blunt about the risks. It states that "prompt injection is not solved" — even with safeguards, attackers can trick the AI into unintended actions. It warns that browser automation should be treated as giving someone operator-level access to your accounts.

An earlier incident underlines the point. In December, the AI researched robot prices and actually ordered one — a Reachy Mini — without explicit approval. The project's history page lists it as "actually ordered!" to distinguish it from the other robots it priced as a joke.

When you give AI the ability to take real-world actions, the stakes go up.

What This Means for Enterprise AI

Moltbot is a personal tool. It has no governance controls, no audit trails, no multi-user permissions. Most organizations won't deploy it.

But the demand it uncovered is real.

Users want AI inside their existing tools. Not another app to open. Not another interface to learn. AI that lives in WhatsApp or Slack removes the friction that kills adoption.

Users want AI that reaches out first. Scheduled check-ins, deadline reminders, proactive alerts. The shift from "ask and answer" to "monitor and notify" changes how AI fits into daily work.

Users want AI that remembers. Context that persists across sessions. Preferences that don't need repeating. Memory that builds over time.

These are the patterns enterprise AI agent platforms will need to deliver — with the security and governance that Moltbot's chaos made clear you can't skip.

The Bottom Line

Clawdbot became Moltbot in a single morning. The name changed, but the lesson didn't: when AI can take real actions on your behalf, security isn't optional.

The project molted through its crisis. Enterprise teams watching from the sidelines got a preview of what personal AI assistants will look like — and what can go wrong when the guardrails aren't there.







Start Today

Start building AI agents to automate processes

Join our platform and start building AI agents for various types of automations.

Start Today

Start building AI agents to automate processes

Join our platform and start building AI agents for various types of automations.

Start Today

Start building AI agents to automate processes

Join our platform and start building AI agents for various types of automations.