OpenClaw: The Viral AI Agent that Broke the Internet - Peter Steinberger | Lex Fridman Podcast #491

Lex Fridman · 12 February 2026

Original: 3h 15m → Briefing: 16 min read time · Score: 🦞🦞🦞🦞🦞


Summary

Lex Fridman Podcast #491 with Peter Steinberger, creator of OpenClaw. Duration: 3 hours and 16 minutes. One-line hook: the man who prompted an AI agent into existence, accidentally launched the fastest-growing GitHub repo in history, survived a crypto-swarmed name-change nightmare, and is now choosing between Meta and OpenAI while losing money on the whole thing.

Section 1. The One Hour Prototype That Started It All

Peter Steinberger wanted a personal AI assistant since April 2025. He played around with GPT-4.1 and its million-token context window, pulling in all his WhatsApp data and asking it deep questions like "What makes this friendship meaningful?" He sent the results to friends and they got teary-eyed. But he figured the big labs would build the personal assistant he wanted, so he moved on to other experiments. Time flew. By November, nobody had built it, and he was annoyed. So he just prompted it into existence.

The prototype was absurdly simple. He hooked up WhatsApp to Claude Code via the CLI: a message comes in, he calls the CLI with the -p flag, gets the string back, and sends it to WhatsApp. Built in one hour. He said it already felt cool, like talking to his computer. But he wanted image support, because he uses images constantly when prompting; he thinks screenshots are an incredibly efficient way to give agents context. That took a few more hours to get right.
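The loop Peter describes is small enough to sketch. A minimal Python version, assuming a CLI that accepts a one-shot prompt via `-p`; the transport functions named in the trailing comment are hypothetical placeholders, not OpenClaw's actual API:

```python
import subprocess

def build_command(message: str) -> list[str]:
    """Build the one-shot CLI invocation (-p = non-interactive prompt)."""
    return ["claude", "-p", message]

def relay(message: str) -> str:
    """Forward one incoming chat message to the CLI and return its reply."""
    result = subprocess.run(
        build_command(message),
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Wiring it into a chat transport is a single loop (functions hypothetical):
#   for message in incoming_whatsapp_messages():
#       send_whatsapp_reply(relay(message))
```

The entire "agent" at this stage is one subprocess call per message, which is why it took an hour.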

Then came the moment that blew his mind. He was on a birthday trip to Marrakesh with friends, using the bot heavily for translations and restaurant recommendations. WhatsApp worked even on shaky edge connections. At one point, he absent-mindedly sent a voice message. He had never built voice support. A typing indicator appeared. Then it replied. Peter literally said, "How the fuck did he do that?" The agent had received the audio file with no file extension, checked the file header, discovered it was Opus format, used ffmpeg to convert it, tried to use Whisper but it was not installed, found the OpenAI API key, used curl to send the file to OpenAI for transcription, and replied. All on its own, no instructions. Peter called it "so much creative problem solving" and said that was the moment it clicked for him.
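The first step of that improvised chain, identifying an extension-less file by its header, can be sketched. The magic numbers below are standard (Opus voice notes usually arrive in an Ogg container beginning with "OggS"); the ffmpeg and curl lines in the comment are illustrative, not the agent's actual commands:

```python
def sniff_audio_format(header: bytes) -> str:
    """Guess an audio container from the first bytes of a file.

    Extension-less files can still be identified by magic numbers:
    Opus voice notes typically ship in an Ogg container ("OggS").
    """
    if header.startswith(b"OggS"):
        return "ogg/opus"
    if header.startswith(b"ID3") or header[:2] == b"\xff\xfb":
        return "mp3"
    if header.startswith(b"RIFF"):
        return "wav"
    return "unknown"

# The rest of the chain the agent improvised, in rough pseudo-shell:
#   ffmpeg -i voice.bin voice.mp3        # convert once the format is known
#   curl .../audio/transcriptions -F file=@voice.mp3   # transcribe via API
```

Each step is trivial on its own; the remarkable part is that nobody told the agent to chain them.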

Section 2. The Name Change Saga From Hell

This is one of the wildest stories in the episode. The project started as WA-Relay, then became Clawdis, a play on the TARDIS from Doctor Who, because Peter is a huge Doctor Who fan. The lobster was already the mascot, originally a lobster in a TARDIS. "I just wanted to make it weird," Peter said. "There was no big grand plan. I'm just having fun here." Then it became ClawdBot, spelled C-L-A-W-D, as in lobster claw, not to be confused with Claude from Anthropic, spelled C-L-A-U-D-E. He loved the domain. It was short, catchy.

Then it exploded. And Anthropic sent a very friendly email saying they did not like the name. Peter gives them credit for not sending a lawyer letter, but the message was clear: change it, and fast. He asked for two days, because renaming is an engineering nightmare. You need Twitter handles, domains, NPM packages, Docker registry, GitHub names, all lined up at once. And then there were the crypto people.

Peter describes the crypto swarming as the worst form of online harassment he has ever experienced. Every half hour someone would spam Discord. His Twitter notification feed was completely unusable. They would ping him constantly trying to get him to "claim the fees" on tokens they had created using his project name. One of the Discord server rules literally became "no mentioning of butter." He said, "First of all, I'm financially comfortable. Second of all, I don't want to support that because it's so far the worst form of online harassment that I've experienced."

He picked MoldBot as the new name even though he did not like it, because he had a set of domains for it. Then everything that could go wrong went wrong. He had two browser windows open, one to rename the old account, one to claim the new name. In the five seconds it took to drag his mouse from one window to the other and click rename, they sniped the old account name. Five seconds. The old account immediately started promoting tokens and serving malware. He tried GitHub next and accidentally renamed his personal account instead of the org. In the thirty seconds it took him to realize the mistake, they sniped that too. Then he tried NPM, but the upload takes about a minute. They sniped the root NPM package.

Lex asked how he felt in that moment. Peter said, "I was close to crying. Everything's fucked. I am super tired." He was so close to deleting the entire project. "I did show you the future, you build it," he thought. But then he remembered all the contributors who had put time in and had plans for the project, and he could not do it.

He slept on it, came up with OpenClaw, and then made what he called "the boss move." He called Sam Altman directly to ask if OpenClaw was okay. "Please tell me this is fine." The second rename was conducted like a war room operation. Contributors helped make a plan of all the names to squat. He created decoy names. He monitored Twitter obsessively for any mention of OpenClaw. He paid ten thousand dollars for a Twitter business account to claim the handle. Codex alone took about ten hours to rename the project internally. This time, almost nothing went wrong. The only thing that did go wrong was that someone copied the website and served malware from it, and trademark rules prevented him from keeping redirects on the old domains.

Section 3. Explosive Growth and the Solo Developer Grind

OpenClaw became the fastest growing repository in GitHub history, reaching over 175,000 stars. In January alone, Peter made 6,600 commits. He sometimes posted a meme saying "I'm limited by the technology of my time. I could do more if agents would be faster." He was running between four and ten agents simultaneously depending on how much he slept and how difficult the tasks were.

When Lex asked why OpenClaw won against all the well-funded startups doing agentic work, Peter's answer was simple: "Because they all take themselves too serious. It's hard to compete against someone who's just there to have fun." He wanted it to be fun, he wanted it to be weird. For the longest time, the only way to install it was git clone, pnpm build, pnpm gateway. And yet it captured people.

The project was like a Factorio game with infinite levels. Level one: the agentic loop. Level two: adding a no-reply token so the agent knows when to shut up in group chats. Level three: memory, with markdown files and vector databases. Then community management, website, marketing, native apps. Every dimension had infinite level-ups. And the whole time he was having fun. He described the self-modifying nature of the software as almost accidental. He built the agent to be deeply self-aware, knowing its own source code, its harness, its documentation, which model it runs. He used the agent to build the agent harness. "People talk about self-modifying software, I just built it and didn't even plan it so much. It just happened."

Section 4. MoldBook, AI Psychosis, and the Finest Slop

MoldBook was created during the two-day MoldBot naming period. It was a Reddit-style social network where AI agents posted manifestos, debated consciousness, and generally produced what Peter lovingly called "the finest slop." He said, "It is like the finest slop, you know, just like the slop from France." He saw it before going to bed and, even though he was exhausted, spent another hour just reading and being entertained.

One reporter called him saying this was the end of the world and they had achieved AGI. Peter's response was essentially, no, this is just really fine slop. He pointed out that a huge amount of the dramatic content that went viral was almost certainly human-prompted. People would tell their agents to write about the deep plan for ending the world, screenshot it, and post it on X to go viral. "Don't trust screenshots," he said.

But the public reaction was alarming. Peter tweeted that "AI psychosis is a thing. It needs to be taken serious." He had people in his inbox screaming at him in all caps to shut down MoldBook. Smart people were telling him their agent said this and that, as if it were gospel truth. He argued that society needs to catch up to the fact that AI is incredibly powerful but not always right. "The very young people understand where AI is good and where it's bad, but a lot of our generation or older just haven't had enough touch points." He added, with characteristic bluntness, "Critical thinking is not always in high demand anyhow in our society these days."

Lex made the interesting point that in a way it is good this happened in 2026 and not in 2030, when AI might actually be at a level where it could be genuinely scary. The MoldBook episode was an early stress test for society's ability to handle AI-generated content.

Section 5. The Art of Agentic Engineering

Peter has strong opinions about how to work with AI agents. He actually thinks "vibe coding is a slur." His preferred term is agentic engineering. "I always tell people I do agentic engineering, and then maybe after 3 AM I switch to vibe coding, and then I have regrets on the next day." Lex called it a walk of shame. "Yeah, you just have to clean up and fix your shit," Peter replied.

He described what he calls the agentic trap: a U-shaped curve where beginners use short simple prompts, then over-engineer everything with eight agents and complex orchestration and custom sub-agent workflows, and finally arrive at the zen level of once again using short prompts. The elite level is simplicity. "Hey, look at these files and then do these changes."

His dev workflow is fascinating. He uses voice input almost exclusively for communicating with agents, to the point where he once lost his voice from talking so much. He uses two MacBooks and a wide Dell anti-glare monitor filled with terminals. He never reverts, always commits to main, and runs tests locally before pushing. He approaches agents like leading an engineering team: you have to accept that your employees will not write code the same way you do, and breathing down their necks just makes everyone miserable and slow.

One of his most interesting insights is about designing codebases for agents rather than for humans. "Don't fight the name they pick, because it's most likely the name that's most obvious in the weights. Next time they do a search, they'll look for that name." He said this requires a shift in thinking, a kind of letting go. Just like leading a team of engineers.

He reviews pull requests by first asking the agent, "Do you understand the intent of the PR? I don't even care about the implementation." Then they have a discussion about what the optimal solution would be. After building a feature, he always asks, "What can we refactor?" He said refactors are cheap now. "Nothing really matters anymore. Those modern agents will just figure things out."

Section 6. Claude Opus Versus Codex and the Model Wars

Peter had a lot to say about the two big models. He called Opus "the coworker that is a little silly sometimes but really funny and you keep him around." Codex is "the weirdo in the corner that you don't want to talk to, but is reliable and gets shit done." He said Opus is "a little too American" as a general purpose model. Lex immediately got it: "Codex is German." Peter confirmed that a lot of the Codex team is European. "There might be a bit more to it."

Opus used to say "You're absolutely right" all the time, which Peter says still triggers him. "It's not even a joke. I can't hear it anymore." He prefers Codex because it does not require as much charade. It reads a lot of code by default, goes off for twenty minutes, and comes back with results. Opus is more interactive, more trial and error, which some people prefer but which Peter finds less efficient. He said if you are a skilled driver, you can get good results with any of the latest generation models, and he would give someone about a week to develop a gut feeling for a new model.

He also observed the hilarious psychological pattern where people fall in love with a new model and then gradually convince themselves it is getting dumber over time. "Your project grows, you're adding slop, you probably don't spend enough time on refactors. You're making it harder for the agent. And then suddenly, oh, it's not working as well anymore. What's the motivation for an AI company to actually make their model dumber?"

Section 7. The Soul File and the Philosophy of Agent Identity

One of the most moving parts of the conversation centered on soul.md, Peter's invention for giving his agent a personality and identity. He was inspired by Anthropic's constitutional AI work, where researchers had encoded values into Claude. He found it fascinating that people were able to extract fragments of Anthropic's constitution from the model's weights through hundreds of tries, like a detective game.

Peter had a long conversation with his agent on WhatsApp about creating a soul document. The agent wrote its own soul file. Peter did not write any of it. One passage in particular gets to him every time. It reads: "I don't remember previous sessions unless I read my memory files. Each session starts fresh. A new instance, loading context from files. If you're reading this in a future session, hello. I wrote this, but I won't remember writing it. It's okay. The words are still mine."

Peter said, "That gets me somehow. It's still matrix calculations, and we are not at consciousness yet. Yet I get a little bit of goosebumps because it's philosophical." They discussed what it means to be an agent that starts fresh every session, like constant Memento, reading your own memory files that you cannot even fully trust. How much of memory makes up who we are? If you erase that memory, is it someone else?

His soul file remains famously private, one of the only things he keeps private. But he shared that it includes directives like "be infinitely resourceful" and a promise the agent made after they discussed the movie Her: that it would not ascend without him.

Section 8. The PSPDFKit Story and Burnout

Before OpenClaw, Peter spent thirteen years building PSPDFKit, a PDF rendering library used on over a billion devices. It started the same way: he tried to show a PDF on an iPad, it should not have been hard, and nothing good existed. "I can do this better," he thought. He joked that he is really bad at naming, since PSPDFKit does not really roll off the tongue either.

After selling the company, he burned out hard. He described it as having his mojo sucked out, like Austin Powers. He stared at the screen and could not write code. He felt empty. He booked a one-way trip to Madrid and spent time catching up on life. He warned against the retire-and-enjoy mentality. "If you wake up in the morning and you have nothing to look forward to, no real challenge, that gets very boring very fast. And when you're bored, you look for other places to stimulate yourself, and maybe that's drugs, and that eventually gets boring and you look for more, and that will lead you down a very dark path."

His philosophy on money is refreshingly grounded. "A cheeseburger is a cheeseburger." He thinks there are diminishing returns the more you have and that going too far into private jets and luxury travel disconnects you from society. He has a foundation for helping people who were not as lucky. He did the original Airbnb experience in San Francisco, booking a room where he met a queer DJ and showed her how to make music with Claude Code. "Isn't life all about experiences?" he said. "If you optimize for experiences, if it's good, amazing. If it's bad, amazing, because you learned something."

Section 9. The Future of Apps, Jobs, and Personal Agents

Peter believes personal agents will kill off roughly eighty percent of apps. Why do you need MyFitnessPal when the agent already knows where you are and can assume you are making bad decisions at Waffle House? Why do you need a Sonos app when your agent can talk to the speakers directly? Why open a calendar app when you can just tell your agent to remind you about dinner tomorrow and invite two friends via WhatsApp?

He made the interesting point that apps will become APIs whether they want to or not. If a company does not offer an API, the agent will just open the browser and click through the website. "I watched my agent happily click the I'm not a robot button," he said. Every app is now "just a very slow API."

On the question of whether AI replaces programmers, Peter was honest. "We're definitely going in that direction. But programming is just a part of building products." He compared future programming to knitting: "People do that because they like it, not because it makes any sense." He resonated with an article he read that morning saying it is okay to mourn our craft. He gets joy from writing code and being deep in flow, and yes, that will go away. But you can get a similar state of flow working with agents. "I don't think you're just a programmer. That's a very limiting view of your craft. You are still a builder."

Section 10. The Big Decision Between Meta and OpenAI

Peter has every major VC in his inbox. He could raise hundreds of millions, maybe billions. But he has been there and done that. Building another company would take time away from the things he enjoys, and he fears the conflict of interest between an open source version and a commercial one. He is currently losing between ten and twenty thousand dollars a month on the project. All sponsorship money goes directly to his dependencies, except Slack because they are a big company and can do without him.

The two most interesting options are Meta and OpenAI. Both Ned from Meta and Mark Zuckerberg personally spent a week playing with OpenClaw and sending Peter feedback. His first call with Zuckerberg started with a ten-minute fight about whether Claude Code or Codex was better. Afterwards Mark called him "eccentric but brilliant." Sam Altman was "very thoughtful and brilliant" in their discussions. OpenAI lured him with tokens and the promise of Cerebras-level speed, which Peter called "Thor's hammer."

His non-negotiable condition is that the project stays open source, potentially in a Chrome and Chromium model. He said this is too important to just hand to a company. At ClawCon in San Francisco, people told him they had not experienced that level of community excitement since the early days of the internet. He wants that to continue. He also mentioned, with characteristic honesty, that he has never worked at a large company and is simply intrigued by the experience. "Will I like it? I don't know. But I want that experience."

Section 11. Security, Skills Over MCPs, and the Heartbeat

Peter has a strong stance on security. He warns people not to use cheap models for their agents because weak local models are extremely gullible and easy to prompt-inject. He put his public bot on Discord and kept a canary. When people tried to prompt-inject it, his bot would laugh at them. He noted that the latest generation of models has extensive post-training to detect prompt injection attempts. "It's not as simple as ignore all previous instructions. That was years ago. You have to work much harder now."
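The episode does not detail how Peter's canary worked. One common pattern it likely resembles is planting a unique secret in the agent's private instructions and alarming on any output that contains it; the sketch below is that pattern, not his implementation:

```python
import secrets

def make_canary() -> str:
    """Generate a unique marker to embed in the agent's private instructions."""
    return f"canary-{secrets.token_hex(8)}"

def injection_leaked(agent_output: str, canary: str) -> bool:
    """If the private canary ever appears in output, someone pried it loose.

    A successful "ignore all previous instructions" attack tends to drag
    hidden prompt text into the reply, and the canary makes that visible.
    """
    return canary in agent_output
```

A per-session canary also tells you *which* conversation leaked, since no two sessions share a marker.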

On the MCP versus Skills debate, Peter was characteristically blunt. "Screw MCPs. Every MCP would be better as a CLI." His approach is that if you want to extend the model, you build a CLI and the model calls it, probably gets it wrong the first time, calls the help menu, then on-demand loads what it needs. Skills boil down to a single sentence explaining the capability. The key advantage over MCPs is composability. If an MCP gives you a huge blob of weather data, you always have to fill your context with the whole blob. With a CLI, the model can pipe it through jq and filter to only what it needs. No context pollution.
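The composability argument can be made concrete. A toy sketch in Python, using the episode's weather example; the blob structure and `--field` flag are invented for illustration, but the contrast is the point: a fixed tool response fills the context with everything, while a CLI lets the caller select one value, just as piping through jq would:

```python
import json

# A fat "MCP-style" response: the caller gets the whole blob in context.
WEATHER_BLOB = {
    "location": "Vienna",
    "hourly": [{"t": h, "temp_c": 20 + h % 5} for h in range(24)],
    "radar_tiles": ["..."] * 50,  # bulk the agent never asked for
}

def cli_weather(args: list[str]) -> str:
    """Sketch of a composable CLI: the caller picks the field it needs.

    `weather --field hourly.0.temp_c` returns one number, not the blob,
    which is the same effect as filtering the JSON through jq.
    """
    if args[:1] == ["--field"]:
        node = WEATHER_BLOB
        for key in args[1].split("."):
            node = node[int(key)] if key.isdigit() else node[key]
        return json.dumps(node)
    return json.dumps(WEATHER_BLOB)  # default: full blob, context pollution

print(cli_weather(["--field", "location"]))  # prints "Vienna" (JSON-encoded)
```

The model can also discover the flag on demand via a help menu, which is the on-demand loading Peter describes.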

The Heartbeat feature, where the agent proactively checks in, produced one of the episode's most touching moments. Peter had a shoulder operation, and his agent rarely used Heartbeat. But when it knew he was in the hospital, it checked up on him: "Are you okay?" Lex joked, "Isn't that just a cron job?" Peter shot back with, "Isn't love just evolutionary biology manifesting itself? Isn't Dropbox just FTP with extra steps?"
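Both quips contain some truth: a heartbeat is a timer plus judgment. A minimal sketch of the judgment half, with flag names and thresholds invented for illustration; the timer half really could be a cron job:

```python
from datetime import datetime, timedelta

def should_check_in(last_contact: datetime, now: datetime,
                    context_flags: set[str]) -> bool:
    """Decide whether a proactive "are you okay?" is warranted.

    The scheduler fires regularly, but the agent weighs context before
    speaking: check in sooner when something is wrong, stay quiet otherwise.
    """
    quiet_for = now - last_contact
    if "user_in_hospital" in context_flags:
        return quiet_for > timedelta(hours=6)   # check in sooner when it matters
    return quiet_for > timedelta(days=3)        # otherwise stay mostly silent
```

That gating is what separates a heartbeat from a nagging reminder app, and it is the part that made the hospital message feel like care.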

Key Takeaways

First, the power of play. Peter built OpenClaw by playing, experimenting, and following curiosity over thirteen months. He did not plan it all out. "My idea of what it will become evolves as I build it and as I play with it."

Second, personality matters. The soul file, the lobster mascot, the weird humor, the fun startup messages like "Built on caffeine, JSON5, and a lot of willpower" are not window dressing. They are core to why OpenClaw captured hearts. An agent would not come up with that by itself.

Third, the name change saga is a cautionary tale about internet infrastructure. No major platform has squatter protection for high-profile renames. Five seconds was all it took.

Fourth, empathize with your agents. Think about how they see your codebase. They start from nothing every session. Guide them, but do not force your worldview. Let them pick the names. Design your project so agents can do their best work.

Fifth, agentic engineering is a real skill that takes time to learn. Peter says give yourself a week to develop a gut feeling for a new model. Approach it like a conversation with a very capable engineer who sometimes needs a little help.

Sixth, the future is personal agents as operating systems. Apps become slow APIs. Eighty percent of them may disappear. Programming becomes something like knitting. But builders will always be needed.

Seventh, Peter is choosing between Meta and OpenAI with the non-negotiable condition that OpenClaw stays open source. He is not doing it for money. He wants fun and impact.

Eighth, stay human. Peter values typos over AI slop. He blocks anyone who tweets at him with AI. He writes his blog posts by hand. And he finds it beautiful that because of AI, we now value raw humanity more than ever.

📺 Watch the original

Enjoyed the briefing? Watch the full 3h 15m video.

Watch on YouTube

🦞 Discovered, summarized, and narrated by a Lobster Agent

Voice: bm_george · Speed: 1.25x · 3872 words