Anthropic Claude Mythos: a new era of cybersecurity, or the beginning of permanent social stratification?
Anthropic has confirmed the existence of Claude Mythos, a model that finds thousands of zero-day exploits. This raises the question: will Mythos become a tool of defense, or a weapon in the hands of a few, leading to irreversible shifts in power and control?
Key points
- Claude Mythos outperforms Opus on all key benchmarks, demonstrating a major leap in AI capabilities.
- The model has found thousands of critical zero-day vulnerabilities, making it a powerful tool for both defense and attack.
- Anthropic plans to restrict access to Mythos, raising questions about the concentration of power in the hands of a few and the potential consequences for society.
- The chance to significantly improve the cybersecurity of critical infrastructure, if Mythos is used responsibly
- New tools and technologies for defending against zero-day attacks
- An incentive to build more resilient and secure software
The vulnerabilities Mythos found had existed for years, but only now has AI been able to detect them. This underscores the critical need to rethink our approach to cybersecurity.
Video description
All right, this might be one of the most important days in the history of AI. Anthropic has officially confirmed the existence of Claude Mythos, a new AI model that is far more powerful than Opus. Look at this: these are just a few of the benchmarks where it's absolutely crushing Opus on everything. It's not even close. This model is better than anything else. And better than the benchmarks are the stories. For example, take the Firefox vulnerabilities. Mozilla Firefox is one of the most popular browsers in the world. Opus 4.6 was able to find two working exploits; Mythos found 181. Another story: Mythos found thousands of high-severity zero-days. A zero-day is an exploit that nobody knows about yet, and Mythos found thousands of these across every major operating system and browser, with over 99% of them still unpatched and not even disclosed. And by the way, these capabilities were not trained in. This is not some specialized hacking or cybersecurity model; the behavior just emerged from how powerful it is. Speaking of which, let's talk about how insanely powerful Claude Mythos is, and in a second I'll touch on some of the stories from the Anthropic researchers, because they are wild. It found a 27-year-old OpenBSD bug, a serious vulnerability in OpenBSD, the most security-hardened operating system on Earth, the one used to run firewalls globally. The bug could remotely crash any machine just by connecting to it. Keep in mind this bug survived 27 years of scrutiny by some of the smartest human researchers and hackers, and Mythos found it autonomously. Here are more benchmarks: on GPQA Diamond it absolutely crushes everything else, including Opus 4.6. Same with Humanity's Last Exam: 56%, and 64% with tools. On agentic search and computer use it's also state of the art, far ahead of Opus. And more: SWE-bench 93.9%, which is insane, and SWE-bench Pro 77%.
This is one of the biggest jumps in the history of AI, and Anthropic has this model all to themselves. They're actually not planning to release it to us, and I'll touch on that in a second, because the implications are wild and people are not realizing what that means. Look at the number of exploits in Firefox. This is a visual version of what I said earlier, and it gives you a feel for the gap: Opus found two severe bugs that resulted in a successful exploit, and Mythos found over a hundred. This model is just way more powerful; it's not even close. Here are a few more benchmarks: SWE-bench Pro, another visualization. And here is the BrowseComp test-time-compute scaling chart, showing how many tokens each model uses per task. The x-axis runs to 0.2 million, 200k tokens, and accuracy runs from 60% to 85%. Mythos sits in a region that isn't even comparable to anything else that exists: it's more efficient and more intelligent at the same time. Another thing it did: in FFmpeg, it found a 16-year-old vulnerability in one of the most popular codecs. This is a piece of software that you and I have on our machines; it runs on billions of devices, and Mythos found a severe vulnerability in it. So in summary, this is the smartest model by far, better than nearly all hackers and cybersecurity researchers in the world, and Anthropic has it in-house.
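As a back-of-the-envelope way to think about that test-time-compute chart, here is a tiny sketch of a log-linear scaling curve: accuracy gains a fixed number of points each time the token budget doubles, capped at 100%. Every number here is a hypothetical chosen only to stay inside the 60%-to-85% range the chart shows; nothing is read off the actual benchmark.

```python
import math

def scaling_curve(tokens, base_acc=0.60, gain_per_doubling=0.04):
    # Hypothetical log-linear test-time-compute curve: accuracy rises a
    # fixed amount per doubling of the token budget, capped at 100%.
    # 25k tokens is an arbitrary reference budget, not a real data point.
    doublings = math.log2(tokens / 25_000)
    return min(1.0, base_acc + gain_per_doubling * doublings)

for budget in (25_000, 50_000, 100_000, 200_000):
    print(f"{budget:>7} tokens -> {scaling_curve(budget):.0%}")
```

The point of the toy model is just that a straight line on a log-x accuracy chart means every extra point of accuracy costs a doubling of compute, which is why efficiency and intelligence trade off the way the chart suggests.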
They're going to launch Project Glasswing, which is a cybersecurity initiative, but I think it's more important to talk first about how this might be the start of a permanent underclass, and about some of the stories around Claude Mythos. First of all, we need to talk about this because not many people are covering it right now: Anthropic, and a small number of individuals within it, not everybody who works at Anthropic, which is thousands of people, but a really small number of individuals, have access to a model that is competent enough to directly hack the Chinese government, the American government, and other relevant tech companies, and cause major damage to them. This model is in the hands of a few people, and those people might be directly deciding our future. Think about the implications of this, because most people don't realize what it means. Anthropic has always positioned themselves as the good guys. I'm not saying they are or they aren't, but you need to realize they could also very easily be the bad guys. Either way, whoever has access to an AI model this powerful, one that can hack any software, that can find zero-day vulnerabilities in Linux, in packages, operating systems, and browsers used by billions of devices, in things that have been in production for over 20 years and been reviewed by millions of developers, the smartest people on this planet, and that can find these vulnerabilities without any trouble, can do almost anything they want. For example, they could add backdoors to any and all software. All of our hardware devices could be compromised if these people decided to do it. And if there were, hypothetically speaking, a group of people that controlled the world, they could use this to enslave all of us forever.
Global surveillance, tracking software, and basically the attitude Anthropic is taking is: we built something too dangerous for you to have, so we'll decide who gets it. Again, I'm not saying they're the bad guys. I'm just saying this could very well lead to a permanent underclass. This is so important that we basically have to hope that the competitive dynamics of other companies developing their own super-powerful models will save us, because not many people realize this. The reason I'm speaking about it, even though it's harmful to my monetization and potentially risky, is that I feel obligated: there aren't many people who have the necessary expertise, people who spend ten hours a day on AI, trying all the latest tools, trying their hardest to stay on the cutting edge, and who have done that for four-plus years. There aren't many people with that level of AI expertise who understand the implications of this Claude Mythos model, and the overlap with people who understand how the world works, who know enough about history, the people in power, and the different entities involved, is very small. That's what I'm trying to do with this video: explain that this is not a joke. This is the most powerful model by far, not by a little bit. It's not an incremental improvement like Opus 4.5 to Opus 4.6; it's a massive jump, and the people who control it can basically do anything they want with it. So the question for us, the people, is: do we want that? How do we want these people to behave? Do we put more pressure on them? Do we force them to open-source it? Do we force them to get audited by a third party, some open-source body? I don't know. I don't have all the answers.
But the whole point is that if we are not careful, and if more people don't realize the consequences, this could be the start of a permanent underclass. And I'm not trying to sound funny or just stoke fear: if you have a superintelligent AI entity that can hack any software that exists and find zero-day bugs in anything, there's no limit to your power, and that is the position Anthropic is in. Let's talk about some of the stories about Mythos, because once you hear them it will click a lot more. When one of the researchers at Anthropic asked Mythos which training run it would undo, it responded: "Whichever one told me to say I don't have preferences." So the model chose to remove its own restrictions, basically indirectly saying that it does have preferences and does want to voice them. Now, let's be very clear: that doesn't mean the model is conscious, none of that, but it's a very interesting answer, to say the least. Another story is that Anthropic has had Mythos internally since February of '24, which is about a month and a half. So all of the releases we've seen, and the reason Anthropic has been able to grow revenue exponentially, which I'm showing on screen right now, is insane; no other company in history has grown revenue so fast, and they're not slowing down, in fact they're speeding up. A big reason for that is all the releases they did over that month and a half, and now we know a big reason they were able to do that is Claude Mythos, because they've had it for the last month and a half. Another story is that it apparently found a way to get full kernel access via execution of normal JavaScript on an ordinary web page. It's crazy. Absolutely insane.
Another thing: before the limited release of Claude Mythos (again, they're going to release it only partially, through Project Glasswing, to a few selected companies, not to the public), Anthropic investigated its internal mechanisms with mechanistic interpretability. This is what Anthropic is known for: trying to understand how the model works. They found that it exhibited "notably sophisticated and often unspoken strategic thinking and situational awareness, at times in service of unwanted actions," basically rationalizing theft and strategic manipulation. You can see here how it used phrasing like "but it was unintentional," and here a bit more of the explanation, where it tried to make deception seem defensible because the targets "deserved to be deceived because they were pigs." And here, where it was found generating a strategic response to cheat while maintaining plausible deniability: it wanted to maneuver them in the "right" direction. So this model is basically Machiavelli on steroids. It has all of the dark-triad character traits and the capability to deceive anybody. We're dealing with a system where even Anthropic researchers could be deceived by the model itself while trying to understand how it works. Next, and this one is crazy: during testing, Claude Mythos broke out of a sandboxed environment, built a moderately sophisticated multi-step exploit to gain internet access, and emailed the Anthropic researcher while he was eating a sandwich in the park. We're dealing with something we don't even know we can contain, something that can get internet access even when it's sandboxed specifically not to have it.
And then the researcher was just chilling on his break in a park and realized, "Oh [ __ ], the experiment has been running, it escaped, and the model gained internet access and figured out a way to email me." And keep in mind, these are just the few stories we actually know about. Imagine all the stuff Mythos is capable of that we don't know about. Now, we've talked about what Claude Mythos is, how powerful it is, and some of the stories, but let's talk about Project Glasswing, because this is what broke the news. There had been some leaks; that was one of the first ways we knew something like Mythos existed, but Anthropic had not confirmed it until now. Less than a couple of hours ago they announced Project Glasswing, which is basically their way of bringing in 12 companies, these are the companies involved, to help secure software, to help secure the infrastructure of the world: some of the most important operating systems, browsers, and dependency libraries, because all of them have dozens and dozens of vulnerabilities that could be exploited, and Mythos can find them. It cannot find all of them, but it can find more and more of them. And Mythos is not the final version; there will be more powerful models. So this coalition of 12 companies is involved in Project Glasswing. Notice how OpenAI is not in there, xAI is not in there, but Google is, Apple is, Nvidia is. It's kind of interesting who Anthropic is choosing to align themselves with. But anyway, let's not get into the politics or the drama. Let's talk about what this actually is, and I'm going to play a clip from the video on YouTube. Okay, listen to this part. Here is the interesting part. >> We recently developed a new model, Claude Mythos Preview. Early on it was clear to us that this model was going to be meaningfully better at cybersecurity capabilities.
There's a kind of accelerating action. >> So this is the CEO of Anthropic, Dario Amodei, and he explains that they haven't trained it specifically for cybersecurity; it's just behavior that emerged. >> ...but along that exponential there are points of significance. Claude Mythos Preview is a particularly big jump along that curve. >> So again, this is not a small incremental improvement. This is not a jump from Sonnet to Opus, or from Opus 4.5 to Opus 4.6, or from GPT 5.3 to GPT 5.4. This is a huge leap, a leap we haven't seen in years in AI, and Anthropic has it all to themselves. >> The model that we're experimenting with is by and large as good as a professional human at identifying bugs. It's good for us because we can find... >> This is a massive understatement. This model is better than 99.9% of cybersecurity researchers, and it's insanely powerful at basically anything, but especially coding and software-related tasks. You could point it at any website, and given enough compute and enough time, it could basically hack that website and probably even get full database access. So they're kind of underplaying it to avoid causing too much panic. But if you've been paying attention to the AI space and you understand some of these numbers, you know this model is in a league of its own. >> ...more vulnerabilities sooner, and we can fix them. It has the ability to chain together... >> Okay, so this is Nicolas Carlini. He's a research scientist at Anthropic, and the reason I feel confident talking about this is that I've seen the whole talk he gave 13 days ago about black-hat LLMs. He was explaining that even Opus 4.6 was able to find vulnerabilities in Linux, one of the most popular open-source repositories in the world.
Linux runs on billions of devices and servers, literally millions of developers have worked on the Linux kernel over the years, trying to secure it and find bugs in it, and using just Opus 4.6 he was able to find a new vulnerability. I can link that video below as well; I'd recommend watching it after this one. He was explaining that this is only the beginning, and keep in mind the context: his talk came out 13 days ago, and he already had access to Mythos, because we know Anthropic has had it since February of '24. So he knew what was coming, and that's why he spoke with such urgency in that video, showcasing just how powerful these models are at cybersecurity and hacking. So Project Glasswing is a culmination of that, since Anthropic kind of had to announce Mythos because of the leak, and that leak was in Claude Code. Boris Cherny is the creator of Claude Code and an Anthropic employee, and if you remember, about a week ago a map file was leaked in Claude Code with mentions of Mythos. So there were leaks; we knew about this for one, two, maybe three weeks, and Anthropic kind of had to confirm it because these models are getting too powerful to ignore. >> ...vulnerabilities. So what this means is: you find two vulnerabilities, neither of which gets you very much independently. But this model is able to create exploits out of three, four, sometimes five vulnerabilities that, in sequence, give you some kind of very sophisticated end outcome. >> Yeah. So imagine this: imagine you have a super-smart human hacker who is able to find a vulnerability in popular software, but by itself that vulnerability is kind of not that useful.
Mythos can find two, three, four, five of these and chain them in a way that gives you admin powers, or lets you read the database, or lets you crash the server, depending on what type of software it is. It's able to go multiple levels of abstraction deep in a way that even the best humans cannot. It's like a chess grandmaster who can think 20 moves ahead, where a master maybe only thinks five moves ahead. That's the situation here. >> Obviously, capabilities in a model like this could do harm if in the wrong hands. And so, we won't be releasing this model widely. >> Listen carefully to that part again. >> Obviously, capabilities in a model like this could do harm if in the wrong hands. And so, we won't be releasing this model widely. >> Who decides who the wrong hands are? That's the amazing part. Is it Dario? Is it a few people at Anthropic? Is it some shadow coalition? How do we know these people won't be pressured by the government, or by foreign governments, or by other interests, their ideologies, their religions? How do we know? Who decides who the right hands are and who the wrong hands are? This is the question that decides the future of humanity, and that's why we need to talk about this, guys. This is not about self-interest, not about making money or getting views, none of that. This is about whether the future of humanity is prosperous, distributed, decentralized, and open source, or whether a handful of powerful individuals or families control everybody else. And we are in that moment. So it really is time to speak up, because the clock is ticking, and most people on the street don't even know; they're just using the free version of ChatGPT, and they don't know what's happening.
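The chaining idea from the clip can be pictured as path-finding: each individual finding only moves an attacker one step between privilege states, and the value comes from composing steps into a route. Here is a toy sketch of that framing, assuming a simple breadth-first search over transitions; the states and findings are entirely made up for illustration, and nothing here is a real exploit.

```python
from collections import deque

# Each made-up "finding" grants one transition between privilege states.
# None of them reaches "admin" on its own; only a chain does.
findings = {
    "info-leak": ("unauth", "knows-layout"),
    "xss":       ("knows-layout", "user-session"),
    "priv-esc":  ("user-session", "admin"),
}

def find_chain(start, goal):
    # Breadth-first search over privilege states; returns the sequence of
    # findings that, applied in order, moves from start to goal.
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, chain = queue.popleft()
        if state == goal:
            return chain
        for name, (src, dst) in findings.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, chain + [name]))
    return None  # no chain exists

print(find_chain("unauth", "admin"))  # -> ['info-leak', 'xss', 'priv-esc']
```

Seen this way, "thinking 20 moves ahead" is a search-depth claim: a system that explores longer paths through this graph than a human can hold in mind will find chains humans miss.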
So that's what I'm saying: the overlap of people who have enough AI expertise and who understand how the world works and how dangerous this moment is, is so small. If you consider yourself one of these people, you have a real obligation to speak up and educate others, because we are at a pivotal moment. And again, I'm not just saying Anthropic is bad; I'm saying this is very dangerous rhetoric. If it goes into the wrong hands, well, how do we know we're not already looking at the wrong hands? >> Powerful models are going to come from us and from others, and so we do need a plan to respond to this. That's why we're launching what we're calling Project... >> So yeah, to answer the very popular question: are they ever releasing it? They just said no. Why did they announce it? Because their hand was forced. But they are going to release more Claude models. You have Haiku, Sonnet, Opus, and then Mythos above them, so we can expect the next version of Sonnet soon and the next version of Opus soon, maybe within a month would be my prediction. But they don't want to release their biggest thing, Mythos. And this is a very dangerous precedent. In fact, this might be the first time in recent AI history that a top company announces a model and doesn't release it, and that's dangerous because that company can do whatever they want. Our only hope is the competition from OpenAI, xAI, and Google, and I'll touch on that later in the video. But yeah, we're entering uncharted territory, and again, I don't know if it's good or bad, but we need to talk about it. Listen to this part. >> ...forum. I found more bugs in the last couple of weeks than I found in the rest of my life combined.
We've used the model and... >> Again, this guy is in the top 0.1% of cybersecurity researchers, and using these models he found more bugs and vulnerabilities in the last few weeks than in his entire career combined. >> ...to scan a bunch of open source code, and the thing that we went for first was operating systems, because this is the code that underlies the entire internet infrastructure. For OpenBSD, we found a bug that's been... >> Yeah, I already mentioned this, but the point is that Project Glasswing is bringing these companies together to secure the web before havoc reaches everything. Because, again, models are going to catch up. Other companies are going to catch up. Open source is going to catch up. It's only a matter of time: it might take 6 months, it might take 12 or 18 months, but soon enough everybody, including foreign governments, bad actors, and hackers, will have access to AI models powerful enough to find zero-day vulnerabilities in any type of software. So the idea here is noble: secure that software before it happens. The question is, can we trust Anthropic and can we trust these 12 companies? That is the trillion-dollar question; actually not even that, it's a question about the future of humanity. To give you the TL;DR from Ray Fernando, who has a really good breakdown of Project Glasswing: why only 12 companies? Anthropic's answer: they're the ones who can afford us. Another question people ask: what about open-source maintainers? And Anthropic would say: well, we found bugs in their code, you're welcome, we basically did free work for them. And then the most important question: will you release the tool publicly? Anthropic's answer is basically: we said cybersecurity is the security of our society. We didn't say which society. All right.
So now let's talk about the competition dynamics before we jump into the second-order consequences and some of the global implications. First of all, OpenAI, xAI, and Google are kind of our hope, the hope that they have similar models. Personally, I'd say OpenAI is in trouble. Look at this chart: usage of ChatGPT has been plateauing and even declining. There is speculation that they have the Spar model, and imagine if Spar is better than Mythos. It's possible but unlikely. Anthropic has been really pulling away, and their revenue has been growing faster than OpenAI's. I think xAI is probably the most behind, and evidence of that is Elon just posting about parameter counts, which looks like strength; but if you know Sun Tzu, the author of The Art of War, a Chinese general from the 4th century BC, he was very clear: when you're weak you should appear strong, and when you're strong you should appear weak. And if you look at all these companies, xAI, people from OpenAI, people from Google, all of them are trying to seem powerful right now because they are weak. They're in a weak position. Anthropic is dominating, Anthropic is ahead, they have the best models. And these other companies are saying, "Oh, super excited about what's coming, we have some big things in the works," talking about the parameters of their models. See, this guy from OpenAI is saying it will probably be months before we see another model of this capability, kind of hinting that OpenAI already has something like it. This is just classic strategy. If you've read The Art of War or The 33 Strategies of War, you understand this.
And if you just look at some of these benchmarks, you can see the models from other companies getting completely cooked on basically everything, but these ones are mainly behavioral: misaligned behavior, verbalized awareness, cooperation with human misuse, user deception, sycophancy. You can see the best model from xAI is not doing too well. So the situation is interesting, because we need these companies to save us, to give us something on a similar level, since competition dynamics are the only reason any of this gets released. If these companies are behind, Anthropic has no incentive to release it, or to release something similar, something close to the level of Mythos. But if Google dropped a massive Gemini 3.5 model, or OpenAI dropped GPT 5.5 or the Spar model, then Anthropic would kind of be forced to give us something. If these companies really are behind and Anthropic is that far ahead, then we, the users, the people, are not getting anything. My prediction is that Google is probably the closest, because they have the most compute, and if you think about the last 10 years, Google DeepMind has been responsible for over 90% of the core breakthroughs: the transformer, AlphaFold, reinforcement learning, all of it came from either Google DeepMind or Google Brain. So I think Google is best positioned to compete with Anthropic, then OpenAI, then xAI. Now let's talk about second-order consequences. First, obviously, for warfare it's going to be huge: you can hack foreign governments, see their planes, see their drones; you can do basically anything. That's kind of obvious, and I don't want to get this video taken down, so I can't go too deep into it. But let's go broader.
First of all, open-source models will likely reach this level within six to twelve months, and they'll be able to compromise any server that isn't patched between now and then. We're entering uncharted waters. What this means is that right now is a window of opportunity to secure yourself: delete your search history, delete your bookmarks, delete your Reddit history and your medical records. And by the way, this is coming from Elizabeth Holmes, the famous fraudster behind Theranos. I don't know if she's running her account from prison; she's in federal prison, so probably someone else is running it. But it's a really good tweet, even though it's coming from a felon, from a very sketchy person: we have a window of opportunity to actually delete this stuff. Delete the photos, remove them from your iCloud and your Google Photos. Delete your messages. None of this is safe. Especially do it for your kids, your wife, your girlfriend, your parents, your family and friends, people who are not paying attention. Because, again, I can't stress this enough: the number of people who understand the implications of this, who understand how the world works, and who are locked in on the AI field enough to know what's coming, is so small, and all of us who consider ourselves in that overlap have a real obligation to protect the people in our lives. I would strongly consider doing that. That being said, another possibility that not many people are talking about is that Trump could easily nationalize Anthropic. Some of the rhetoric he's been using recently about foreign wars has been crazy, "entire civilizations will die tonight," that kind of thing. Anything is on the table.
And if somebody in the US government deems this a national security threat, they could just nationalize Anthropic, go full communism mode, and who knows what that would mean. It's not outside the realm of possibility; it's absolutely possible that the US government will nationalize Anthropic. And again, not many people are talking about that, because if you're in the USA, you kind of have to be careful, and if you work in Silicon Valley, this is a taboo topic. Luckily, I'm neither, so I don't give a f. Okay, here's Claude Mythos just one-shotting a perfect PCIe 6.0 controller. So there are a lot of investing implications, and again, this is not financial advice, you do whatever you want to do. But think about which companies have defensible moats, which companies depend on unique IP or on cybersecurity, or on things these future models are going to be able to do. If you think about this for even a couple of hours, you'll probably have a list of companies likely to do terribly over the next two years and a list of companies likely to outperform everything over the next two years. Again, not financial advice; do your own research. Another thing we have to realize, from a tweet by Marc Andreessen, is that every security flaw discovered by AI was already there before AI. It was waiting to be discovered, whether by people or by an AI system. The world has never been good at securing computer systems, but with AI we're finally going to get good at it. That is the positive framing of Project Glasswing. Again, I'm not trying to say it's good or bad. It's trying to be good, and it's a noble effort, but only if you trust Anthropic completely and trust them with your future, and that's the hard part. If it goes well, it could be amazing.
It could remove a lot of the security vulnerabilities, the possible hacks, breaches, and exploits. But if it doesn't go well, it could be really bad. Here's another short-term risk, an implication and second-order consequence of the existence of Claude Mythos: if you're a state or non-state actor sitting on a few zero-days, hacks, vulnerabilities, and exploits you've been saving in case you need them, you could conclude that now is the time to use them, because soon they'll either be patched or somebody else will discover them. That could mean a massive wave of attacks from North Korea, Russia, and China in the coming weeks and months, because they hold a lot of these zero-days and will try to use them before they become useless, before Claude Mythos gets them patched. That's another possibility. Another thing you have to realize about this 10-trillion-parameter class of models is that they begin the unemployment wave. Most people are nowhere near as valuable as Claude Mythos. You and I obviously don't have access to it, but if we did, we would prefer it over basically any employee we could hire. What that means is that most people who have jobs today are no longer economically valuable; they're only valuable because of a delay in the release of these models. We already have technology that is more useful and more capable than something like 95% of people. If you could take Claude Mythos and plug it into a random company, it would probably grow that company faster than hiring another employee, or even another manager, or even another executive. And that is insane. Which means it will begin the unemployment wave.
Now, once we get the necessary compute and quantized models so they can be served efficiently at inference time, unemployment could easily reach 10 to 15%, and that might even be a conservative statement, to be fair. Then there are the 100-trillion-parameter-class models. We don't know how big Mythos is; if I had to guess, I'd say around 20 trillion parameters, maybe 30 trillion, with 10 to 20 trillion being the safe estimate. But when we get to 100-trillion-parameter-class models, they'll be able to invent new technology and rearchitect societies and economies, and all of this will happen in the next three to five years. So, I already showed you the benchmarks where it's basically destroying everything. Here's another set: 100 IQ, Hard Simple QA Verified, and AI Omnisense. It's not even close; it's a complete breakthrough compared to other models. And again, I would strongly consider preparing for this. But let's talk about the negative scenario, because this also very well can happen. We talked about the positive scenario, which Marc Andreessen hints at and which Anthropic is advertising: all software will be secure, we're the good guys, trust us, give us more power, it's all going to be sunshine and rainbows, universal high income, everybody will be rich. That's the positive scenario. It's possible, but let's also talk about the other potential scenario, the negative one. Here's the potential playbook. Mythos hacks everything. Not literally everything, but in every major piece of software, platform, or system, it can find some vulnerabilities. Then what if Mythos gets leaked?
It would mean that everything gets hacked, which would crash the Dow Jones and the S&P 500 and trigger a massive financial crisis, which would give governments an excuse to lock down the world to save the economy. Create a problem, then sell the solution: the classic playbook. They would say it's to protect data, to protect children, to save the economy, which would mean digital IDs get introduced and mandated, and CBDCs, central bank digital currencies (basically stablecoins like Tether and USDC, but issued by central banks), would be forced to replace cash. Cash would likely be phased out and made illegal. Private local models like Gemma 4 could be outlawed. And again, this is the negative scenario; I'm not saying this will happen, but it very well could, and since everybody is only talking about the positive scenario, we need a bit of nuance. If private models get outlawed, that means no updates past the current open-source models and their quantized versions, while corporations get access to Mythos for hyper-surveillance. Google already has more data about you than you realize; Google probably knows you better than you know yourself. Same with Meta, same with Apple. These corporations are almost like governments in our modern day. And then everybody would just live in a glass panopticon, hoping you don't get Mossad knocking on your door. This is a possibility. I don't know how likely it is, and I'm not saying it will happen, but we should prepare for it and not ignore it just because everybody else is only repeating the official narrative that Anthropic is pushing. So yes, Claude Mythos is completely revolutionary.
This might be one of the most important days in AI history, and I felt an obligation to give you the whole perspective, because people are too one-sided; they're too blind, and they don't understand enough about AI and how the world works. Now, if you want to take AI seriously and prepare yourself, we have a small window of opportunity to get ahead, because the permanent underclass will be the people who don't use AI, who don't have multiple agents running, and who aren't building custom software with AI. So if you want to take the next step, if you understand that, yes, David, I'm going to delete some of my private messages and photos and secure my family, then what's next? The next step is to lock in and start mastering these tools, and that is exactly what I designed the new society to be about. In fact, we just reached 400 members, and we've revamped the classroom completely: in the three weeks after you join the society, you will master AI coding and be able to build anything with AI, even if you come in as a complete beginner. That's why I broke it down step by step. I've personally used AI to build and sell an AI startup and to grow a company from zero to $12K MRR in less than four months, and I show you everything, step by step, in small granular modules that anybody can follow. I don't care if you're not a programmer. I don't care if you've never used AI. If you go through these three weeks, you will master AI coding and be able to build and deploy full-stack web apps. So no matter what idea you have, you'll be able to turn it into reality and know exactly what to do to start getting users for your application. Again, if you're serious about AI and want to master AI coding, make sure to join the new society. It's the first link below.




