A Claude Code clone in 2 hours: a new era of AI-assisted software development
Claude Code's source code leaked, and a clone, Claw Code, was built with AI in 2 hours. This calls the value of traditional engineers into question, since AI makes software development dramatically faster.
Key takeaways
- The Claude Code source leak triggered the rapid creation of the Claw Code clone.
- AI dramatically accelerated the clean-room development process.
- The developer's role is shifting from writing code to designing systems.
Claw Code is a potential alternative to Claude Code, but open source and more flexible.
The key point is that it's not about the code itself, but about the system that produced it. Architectural clarity and task decomposition are becoming the key skills.
Video description
So apparently Claude Code just went open source. If you haven't heard, the last 48 hours in the world of AI have been absolutely mind-blowingly insane. It all started when Anthropic updated Claude Code with some new features. One of them was an April Fools' little pet Tamagotchi thing they wanted to add, and in the process they accidentally leaked the entire source code for Claude Code. That happened about 48 hours ago. The internet extracted the source code, copied it, cloned it, forked it a million times. At which point Anthropic went completely scorched earth, issuing DMCA takedown notices against just about everybody, a lot of them technically illegal. We'll get to that in just a second. But all of this in the end resulted in this beefy fella over here. This is Claw Code. Not Claude Code. Claw Code. Like claw. What is Claw Code? Well, it's the fastest-growing open source project on GitHub in the history of the world, apparently. Apparently anything with "claw" in the title becomes the fastest-growing project on GitHub ever; it's almost like a rule now. This is the fastest repo in history to surpass 50,000 stars, reaching the milestone just 2 hours after publication. (Quick aside: I have a new webcam and I can't throw up certain hand gestures, like apparently the peace sign, because it picks that up as a command and goes into a different mode. So I've got to be a little careful. This is going to be awkward.) Okay, so what happened here? There's kind of a lot happening, and it's a little bit hard to explain. In a nutshell, here is Sigrid Jin. This guy is kind of a beast. He got featured in a Wall Street Journal article called "The Trillion Dollar Race to Automate Our Entire Lives." So here he is: AI startup worker Sigrid Jin single-handedly used 25 billion Claude Code tokens last year. So he knows his way around Claude Code, you could say. So Sigrid Jin woke up at 4:00 a.m.
I guess yesterday morning, and saw the massive amounts of DMCA requests that Anthropic was sending out. Again, Anthropic, you could say, overreached a little bit, because they started taking down not just things they were legally able to take down, but also lots of other things where that law does not apply. We'll get to that in a second. But Jin decided to completely rewrite Claude Code. And this is important, because as we talked about in the last video, this is going to be a big, big deal moving forward. Not just for this situation, not just for Claude Code, but for software as an industry and for how AI is interacting with it. You're going to be hearing this term quite a bit: clean room. Clean room engineering, clean room development. Clean room basically means taking existing code and recreating the functions of that code without using the actual codebase. Copyright law protects existing code. So if you write some software, you create the code for it, people love it, you're selling it, somebody else can't just copy and paste your code and sell the same software. It's protected by copyright. But copyright does not protect ideas, or in this case certain functionality. So if somebody creates software that does the exact same thing, recreating all the functions and features you've implemented in their own software, that's perfectly legal as long as they didn't use your code. There's a project online called Photopea. It's at photopea.com, "pea" like the vegetable. It's basically Photoshop: it has all the functionality that Photoshop has, and it is perfectly legal. So what Jin did was rewrite Claude Code in Python. He basically recreated Claude Code. How long did it take him? Days, months, years? No. As he's saying here, people are losing their minds over the fact that the clean room Python rewrite of Claw Code took two hours.
Two hours. Just two hours. (I can't put my fingers up like that anymore. I need to disable that feature on my camera. Sorry.) And he did a Rust rewrite in a day. A complex agentic system reverse engineered and re-implemented from scratch before sunrise on March 31st. The repository crossed 50,000 stars in two hours; it's past 117,000 now. Okay, for those not following along, I'm going to use Photopea again, this Photoshop clone that is perfectly legal, to help you understand what's happening. If you've used Photoshop, you know this is very, very similar to Photoshop. So imagine I have some software here that somebody else created. If I copy it and call it something else, Software 2, if I just copy the code, then I get into legal trouble. That is not allowed. But if I create the exact same sort of software in terms of its functionality and abilities, just like this Photopea thing that does everything Photoshop does but isn't Photoshop, recreated from scratch by a developer who didn't use the same code but implemented all the same functionality, then the original software and my software are more or less identical; it's just that mine is rewritten from scratch, maybe in a different coding language. Well, this is perfectly legal. So this is where we get to this idea of clean room development. Originally it meant a methodology used to independently recreate software or technology while avoiding copyright or patent infringement. It involves two teams: one analyzes the target product to create a functional specification, while a separate, isolated team uses only those specs to build the new product, ensuring that no proprietary information is copied. It's not a loophole; it's a process which technically makes the whole thing perfectly legal.
So you can think of it as having a dirty team, but that dirty team doesn't actually build your product; they just do the reverse engineering. Then you have a clean team that builds your product, but they're not interacting with the dirty team, if that makes sense. But as you can imagine, there are a lot of complications here. Notice we're talking about different teams of developers. So you have a lot of highly paid people, developers, and I'm sure there are lawyers involved to make sure all the protocols are followed, etc. This is a complicated process, tons of work, all to legally recreate software. And that process, turning Claude Code into Claw Code, happened over 2 hours. How did that happen? How is that possible? I think you already know the answer: with the magic of AI. AI is your clean team. AI is your dirty team. It takes Claude Code, the entire harness, the entire source code of everything that Claude Code is. Because remember, Claude Code is basically a harness. You can think of it as wrapping around whatever model you want, really, but with Claude Code, Anthropic restricted it to Claude models only, Anthropic's models only. (By the way, with this new version, you can plug in whatever model you want.) The point here is that Claude Code was software, software you put the model into. Claude Code is a harness, and that harness is now fully recreated and open source. It carries the MIT license, a very permissive license, so you can use it for whatever you want, including creating your own derivative works. So here Jin continues, saying developers are excited, but a good number of them are terrified, because as you can imagine, there are some things that are a little bit scary about this. "Watching a code base of that size get rebuilt at that speed feels like something broke in our timeline. For some, it looks like a superpower they want to learn. For others, it looks like a pink slip."
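To make the "harness" idea concrete, here's a minimal sketch of what a model-agnostic agent harness can look like in Python. None of this is taken from Claude Code or Claw Code; `agent_loop`, the tool names, and the JSON message shape are all invented for illustration. The core of a harness is just a loop: send the transcript to whichever model you've plugged in, execute the tool calls it asks for, and stop when it produces an answer.

```python
import json

def run_tool(name, args):
    """Execute a tool on behalf of the model. Only two toy tools here."""
    if name == "read_file":
        return open(args["path"]).read()
    if name == "echo":
        return args["text"]
    return f"unknown tool: {name}"

def agent_loop(call_model, user_request, max_steps=10):
    """A minimal harness: feed the transcript to *any* model callable,
    execute the tool calls it asks for, stop when it answers."""
    transcript = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):
        reply = call_model(transcript)            # the model is pluggable
        transcript.append({"role": "assistant", "content": reply})
        msg = json.loads(reply)
        if msg["type"] == "answer":               # model is done
            return msg["content"]
        result = run_tool(msg["tool"], msg["args"])  # model wants a tool
        transcript.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not finish")

# A stub "model" that calls one tool, then answers with what it saw.
def stub_model(transcript):
    if transcript[-1]["role"] == "user":
        return json.dumps({"type": "tool", "tool": "echo",
                           "args": {"text": "hello from the tool"}})
    return json.dumps({"type": "answer",
                       "content": transcript[-1]["content"].upper()})

print(agent_loop(stub_model, "demo"))  # prints HELLO FROM THE TOOL
```

Swapping models means swapping the `call_model` callable; nothing else in the loop changes, which is what makes a harness model-agnostic in the first place.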
I think Jin really strikes at the heart of this. If you think about it, Anthropic hired some of the best engineers and machine learning researchers in the world, and through meticulous testing they were able to create the harness that is Claude Code. They needed to be excellent software engineers, and they also needed to be excellent machine learning researchers. They needed that skill stack, if you will, to be able to intelligently build something like Claude Code. And as they were building it, they were also learning new things and iterating. This wasn't something that just anybody could have done. This code was expensive. This code was valuable. This code was proprietary; it was behind locked doors. It gave Anthropic a massive advantage over the competition. And they're planning to IPO later this year; certainly this would be one of the assets that increased how valuable the company was. All of that got cloned in two hours. Now, let's come back to this article by Jin. By the way, I recommend that you read it. Again, this is a person obsessed with agentic harnesses, a person who burned through 25 billion tokens, I think they said. This is somebody who knows a thing or two about AI agents. And by the way, the tool he used to clean-room develop this thing is what he calls Oh My Codex, a workflow layer that sits on top of OpenAI's Codex. OpenAI's Codex is open source; Jin built some stuff on top of it and called it Oh My Codex, or OMX. It's kind of ironic that OpenAI open-sourcing Codex was the thing used to open source, without consent I guess, Claude Code. Something about that is interesting. But it's important that this thing, Claw Code, is DMCA-proof. Copyright doesn't protect ideas, functionality, or architectures, only specific expression. Right?
So if a developer writes code, that code, how it's written, is protected. You can't copy and paste that code. Just like J.K. Rowling can write about a wizard, and the specific descriptions of events in those books are protected, and certain names can be protected, but the ideas aren't. I can write a book about mages who go to a university and play some game on broomsticks. So this new Claw Code captured the architecture that is Claude Code. It's a clean room rewrite, and Anthropic cannot legally touch it. Now, when the files got leaked (and it was Anthropic's own fault that they leaked those files), they quickly started sending out DMCA requests, which are a way for companies to protect their work when it's getting shared online. Specifically here in the United States, if you run a platform where users can share things (pretty much everything, YouTube, GitHub, Facebook, X, falls under that umbrella) and a user posts something they don't have the copyright to, engaging in piracy, there are laws that protect the platform from taking the blame. The platform can put the blame on the user as long as it follows the DMCA process. So this allows the owner of the copyrighted work to say, you have to take that content down, and then Facebook or YouTube or GitHub takes the content down. This actually, in my opinion, is an example of where regulation works beautifully. I mean, it's not perfect, it's far from perfect, but it works. So Anthropic used these DMCA laws to just nuke a whole bunch of repos that had cloned the leaked source code from Anthropic. And you can see the takedown requests here; GitHub does publish them. So here's a whole long list of things that they want taken down. Here's the problem. This went kind of overboard, which is what I was getting at when I said Anthropic went scorched earth.
What I mean is they didn't just go after the people actually infringing by forking the leaked source code; the takedown swept up any fork of any Anthropic project. So a lot of things that were not infringing still got taken down. Now, this is technically illegal. This is not how the DMCA should work. And Anthropic did retract that request. Again, everything is public, everything's online. They said, we retract that, please reinstate the repos that we wrongly took down; we only meant to take down these. They're asking GitHub to reinstate all the other repositories that were disabled by the network-wide process. So this is Boris Cherny. He is one of the main people behind Claude Code; I think he's the original creator. Now he's got a team helping him, but this is the guy who created a lot of this, and it's surprising how much this one person does. By the way, if you recall that Wall Street Journal article, he also makes an appearance in there somewhere. That is him: Boris leads Claude Code at Anthropic. And so, when these DMCA notices were taking down repos that should not have been taken down, of course the community was outraged. As they're saying here, this is not an April Fools' joke: Anthropic seems to be issuing mass DMCA requests against code that is legitimate forks of Anthropic's own Claude Code repo. By the way, it's also against the law to file a DMCA notice on something that never broke it. Boris did respond, saying this was not intentional and they're working with GitHub to fix it. Elsewhere, he mentioned that this was a communication mistake; maybe some communication broke down between Anthropic and GitHub. There was this Alanis Morissette song back in the day called "Ironic," where she sings about these ironic situations. I don't think any of them were actually ironic situations. I'm considering using Suno.
They have a new model out, so I could actually write a song that lists all of the ironic things that happened here. So, think about this. The AI coding tool that Anthropic created to write better code shipped its own source code accidentally. Now, that's slightly stretching it. Anthropic did say it was a developer mistake, so it was a human mistake, but they also took responsibility for it, saying it's more of a structural problem. It's not one person's fault; they should have had safeguards in place. Layer of irony number two is Anthropic issuing DMCA requests that actually targeted forks of their own open-source projects, things they meant to put out there. Also, that scorched-earth wave of DMCA takedown requests was the very thing that triggered the clean room rewrite of the entire Claude Code codebase into Claw Code, which then became the fastest-growing thing ever on GitHub. That is ironic; I'm pretty sure we can all agree. You try really hard, DMCA-requesting everybody to take everything down, and those very efforts trigger the thing that makes it escape completely outside of your control, because now it's its own thing and you can't DMCA it down. And that's the next layer of irony: the new clean room rewrite is DMCA-proof. Anthropic can't touch it. Claw Code also launched with a press release, very publicly. And as part of the leak, it was uncovered that Claude Code had this undercover mode, where Claude Code would go and change various repos and was specifically instructed never to say that Claude Code had made any of it or made any changes. So it was basically a way for Anthropic to hide Anthropic's involvement in whatever development processes. That mode, that feature, got exposed in this process. I mean, you can't make this stuff up. It happened on April Fools'.
I mean, you can't make this stuff up. This is real life. But really fast, I do think it's important to point out this article by Jin. Again, he's the person behind a lot of this, and his opening point was that developers are excited or terrified. This is what a lot of people are going to be talking about: how is it possible to create a clean room implementation of something this massive in two hours? How does it affect the software development industry? But his point, what he's actually saying, is something a little bit different. And keep in mind, this is a person who has used 25 billion Claude Code tokens, a person who elsewhere mentions that he's obsessed with agentic scaffolding. This is somebody who lives and breathes AI agents. He's saying: "If you're staring at the generated Python files, you're looking at the wrong layer. The code is a byproduct. The Rust port that followed is also a byproduct. The thing worth studying in the Claw Code repository is the system that produced all of it. Claw Code was always a showcase. The point was never the Python files or the Rust crate. The point was the Clawhip-based agent coordination system that built them while the developer was asleep." By the way, I'm still trying to understand everything within this system, so keep in mind it's been less than 48 hours at this point. I think everyone's slowly processing everything, and there's a lot to process. Clawhip is this harness engine; I think it will deserve its own video down the road. He continues: "Here's what the system actually looks like in practice. A person opens Discord on their phone, types in a sentence, and puts the phone down. They might go make coffee. They might go to sleep. The agents read the message, break the work into tasks, assign roles amongst themselves, write code, test it, argue over it, fix what fails, and push."
"When everything passes, the person checks back in the morning. The port is done. No terminal, no IDE, no SSH session, no split-pane Vim setup. Discord, a chat." By the way, if you don't know what these things mean, that's kind of the point. We're approaching a time, or maybe we're already in it, when maybe you no longer need to. I know a big part of the people watching are developers. For the non-developers, pay attention, because maybe you've never used a terminal or don't know what SSH is, but if you can use a chat app, the amount of things you'll be able to do by relying on these AI coding agents is changing rapidly. It doesn't mean you don't have to learn anything, but it might mean this is the best time to start learning a lot of this stuff. He continues: "This is the part that most people skip over. The readme includes screenshots of the OMX (Oh My Codex) workflow running in terminal panes, and people assume the developer was sitting in front of those panes the whole time, manually steering each step. The terminal sessions belong to the agent. The human's interface was a Discord channel, a text box, a send button. Three tools make this work, and they each handle a different part of the problem." I'm going to quickly summarize, but do read this article. It's not that long, and it's going to be an important one. It has 66,000 views; that's criminally low. So, he mentions Oh My Codex, OMX, which was built on top of OpenAI's open-source Codex. Then we have Clawhip (like you're whipping the claw, I'm assuming). Clawhip is the notification and event router running as a background daemon. It watches git commits, GitHub issues, and so on. The point is that Clawhip keeps all the monitoring work outside of the agents' context windows, while the Oh My Open Agent provides the coordination logic between multiple agents. Here's kind of the important part.
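The transcript doesn't show any of Clawhip's actual code, so purely as an illustration, here's one common shape for the pattern being described: a background router that polls event sources and forwards only the events that pass a relevance filter, so routine monitoring noise never enters an agent's context window. Every name here (`EventRouter`, `fake_git_source`, the event fields) is hypothetical, not from Clawhip.

```python
from collections import deque

class EventRouter:
    """Sketch of a Clawhip-style router: poll sources in the background,
    drop the noise, and hand the agent only the events that matter."""
    def __init__(self, sources, is_relevant):
        self.sources = sources          # callables returning new event lists
        self.is_relevant = is_relevant  # filter: most events never reach the agent
        self.queue = deque()            # filtered events waiting for the agent

    def poll_once(self):
        """One polling pass over every source."""
        for source in self.sources:
            for event in source():
                if self.is_relevant(event):
                    self.queue.append(event)

    def drain(self):
        """What the agent actually sees: the filtered backlog, nothing else."""
        events, self.queue = list(self.queue), deque()
        return events

# Toy source: pretend these came from polling `git log` / GitHub issues.
def fake_git_source():
    return [{"kind": "commit", "ci": "passed"},
            {"kind": "commit", "ci": "failed"}]

router = EventRouter([fake_git_source],
                     is_relevant=lambda e: e.get("ci") == "failed")
router.poll_once()
print(router.drain())  # only the failing commit reaches the agent
```

The design point, as described in the article, is context economy: the agent's context window is the scarce resource, so everything that can be filtered out before reaching it, should be.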
None of these tools alone would have shipped Claw Code in an hour. Wired together, they form a closed development loop. The human provides direction through Discord. The agents provide labor. The human who kicked off the process may be sleeping, or making themselves a sandwich; the agents keep working. He mentions this Ralph hackathon sponsored by OpenAI that took place in San Francisco. Jin says if you've been to one of those, you already know this idea. The philosophy behind those events was specific and practical. Here it is: stop staying up all night at hackathons typing code by hand. That era is over. Instead, spend your energy designing agent systems and setting up the coordination between them. You sleep; they work. And here's the next smart question to ask. If you've internalized what he's saying, this idea that you should no longer be the one writing code, that you should be building systems, the processes that build the code, optimizing agentic swarms, as it were; if that's the new desired skill, the killer skill, what's the next important question? Well, when a system can port an entire codebase in 60 minutes, what becomes expensive? And the answer is: knowing what to build, knowing why, understanding how the pieces should fit together, having a clear mental model of the target architecture, being able to decompose that into tasks an agent can execute, and knowing how to set up the coordination so multiple agents stay productive in parallel. These are the skills that get more valuable as agents get stronger. A faster agent does not reduce the need for clear thinking; it increases it. There's a specific fear floating around the developer communities right now: the worry that AI will type faster than humans and make them unnecessary. Claw Code looks like confirmation of that fear on the surface.
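The article doesn't spell out how the coordination logic actually works, but "decompose that into tasks an agent can execute" and "multiple agents stay productive in parallel" suggest something like a dependency-aware scheduler. Here's a hedged sketch of that idea only; `run_plan`, the task names, and the dependency map are all made up for illustration, and a real agent coordinator would be far more involved.

```python
from concurrent.futures import ThreadPoolExecutor

def run_plan(tasks, deps, do_task, workers=4):
    """Hypothetical coordinator: run tasks in parallel waves, where a task
    starts only once everything it depends on has finished."""
    done, results = set(), {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(done) < len(tasks):
            # A task is ready when all of its dependencies are done.
            ready = [t for t in tasks
                     if t not in done and deps.get(t, set()) <= done]
            if not ready:
                raise ValueError("cycle or unsatisfiable dependency")
            # Each "agent" works one ready task; the wave runs in parallel.
            for t, out in zip(ready, pool.map(do_task, ready)):
                results[t] = out
            done.update(ready)
    return results

# Toy "agents": each task just reports itself; deps mirror a port job.
deps = {"tests": {"port"}, "docs": {"port"}, "release": {"tests", "docs"}}
out = run_plan(["port", "tests", "docs", "release"], deps,
               do_task=lambda t: f"{t} done")
print(out["release"])  # prints: release done
```

This is the shape of "you sleep, they work": the human's contribution is the task list and the dependency structure; the loop keeps every worker busy until the plan is exhausted.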
One hour, an entire system rebuilt. But look at what the developer actually did during that hour. He typed maybe 10 sentences into a Discord channel. The skill that produced Claw Code was not typing speed. It was architectural clarity, task decomposition, and system design. Those do not get cheaper as agents improve; they get scarce. Please do read this article, because there's tons of stuff in there. This person is a very clear thinker and a great writer. He's slicing through time and showing you a glimpse of the future. I wish I could do it justice, but I also don't want to read the whole thing, so please, please read it. I'll link it. What I'll do is put up a post with all of this, including all of the links and little summaries, that will go out as part of my newsletter. You should sign up, but all of it will be available whether or not you're signed up, including links to everything we've talked about here. But his point is that the gap between can-build and cannot-build is closing fast. Developers used to compete on what they could build. Now that differentiator is being erased. So what do people compete on instead? Noise, visibility, social positioning. There's a post circulating claiming only four jobs will survive, at least in tech companies going forward. First, vibe coders who move fast with AI tools and think in product terms. Then your security and infrastructure people. Then people-facing roles: humans who are pleasant to deal with. And finally the adults in the room, people who can slow things down: legal, finance, regulators, and so on. And he says he thinks about this list and it feels roughly right. What those four categories have in common is that none of them are about writing code. What's left is judgment, taste, stability, human connection. It's conviction about what is worth building.
The most important question of all might just be: what will you build? If you could build anything, theoretically, if it was easy and cheap and you just had to stick with it, keep going, keep iterating; if building it was something most people could do, so that wasn't the limiting factor, what would you want to build? That just might be the most important question of our time. Think about this on the timeline of human civilization. Say this point is AGI and that one is ASI. Very nebulous terms, I understand; don't ask me for the definitions, no one knows. The point is, AGI is where AI is as good as a human; ASI is where it far exceeds a human, and potentially even the entirety of human intelligence. Think about how much agency, how much ability to change the world, any single human individual had throughout time. One person, or a small group of people; we're not talking about massive organizations, kingdoms, countries, armies. We're talking about one person or just a small handful. For most of human history, the amount of impact those people could have was some, but it wasn't great. You can argue whether it was more or less at different times, but the point is it was pretty much stable. I would also argue that in the future, assuming there's ASI, assuming there are superintelligent machines, the amount that humans will be able to contribute in terms of scientific progress or discovery will be less. Most of that will be handled by AI. So in that future, our ability to impact the world, yeah, maybe it'll be there, but it won't be drastic for any single individual. But I believe, and let me know what you think about this, tell me if I'm wrong, that post-AGI and before ASI, the impact that one human, one individual, could potentially make is going to spike.
Post-ASI it will be reduced, but there will be this moment in time when just one person in front of a computer will be capable of doing insane, incredible, never-before-seen things. Do you agree? Think about somebody like Peter Steinberger, the creator of OpenClaw, the fastest-growing open source project on GitHub of all time; it surpassed 200,000 GitHub stars within weeks. And now this project, Claw Code, again, created by Jin in under two hours. By the way, if you're saying, "Oh, what did he do? He just copied somebody else's work," then you're not listening to what he's saying in this article. The code is a byproduct. Don't look at the code. That's not what he created. That's not what he did. Stop, as he says, staring at the files. So, let me know if you agree with me or not about this idea: that there will be this peak where the potential of a single individual becomes absolutely great, greater than we've ever seen before in history, and potentially ever after that point, assuming ASI rolls around and takes things over, so to speak. Do you agree that this is going to happen? Because from where I'm sitting, it sure feels like it's beginning to happen right now. We're seeing examples of it right before our eyes. The question becomes: number one, are you going to be one of those people? And number two, what would you build? I look forward to hearing your answers. My name is Wes Roth. Please subscribe. I don't have anything to sell you. I'm just trying, in my own small way, to have some positive impact on the world. And if this is helping you, I'd like for us to take this further, if you know what I mean. Don't come over to my house, but you know, just subscribe, hit the thumbs up, maybe share this video with someone you think might find it useful. With that said, thank you so much for watching. I'll see you in the next one.




