A Korean developer named Sigrid Jin—featured in the Wall Street Journal earlier this month for having consumed 25 billion Claude Code tokens—woke up at 4 a.m. to the news. He sat down, reimplemented the core architecture in Python from scratch using an AI orchestration tool called oh-my-codex, and pushed claw-code before sunrise. The repo hit 30,000 GitHub stars faster than any repository in history.
Anthropic, without an ounce of self-awareness: “Hey, just cuz you used AI to change it doesn’t mean you can copy our stuff and use it to compete against us!”
Lol. Lmao, even.
Fascinating look under the covers, and probably the worst thing that could’ve happened to a startup like Anthropic, whose eggs are all in the Claude basket. The most interesting part of all of this will be seeing how copyright law handles AI-generated works here.
I agree it’s a fascinating look under the covers. But I don’t think it’ll hurt Anthropic that much. Only Claude Code’s source leaked, not Claude itself. Most people use Claude through the web chat or mobile app. If you’re using Claude Code from the CLI, you’re using an API key, which is also true if you’re using Claude in Cursor or any other tool. Anthropic makes the same amount of money regardless of where that API is being called from.
This leak does mean Anthropic has lost some dominance in the code editor space, which will probably be good for the industry as a whole. We’re already seeing open-source improvements from people sorting through and learning from the leaked code. The only real loss is face. Their whole identity is safety, and this is the second leak this month. Oh, and some of their plans for upcoming releases.