Game AI's Existential Crisis (Part 1) | AI and Games
Game AI, the practice of implementing artificial intelligence to craft the experience of players in video games, is facing an existential crisis. A crisis of confidence in technology. A crisis that at its heart is about the perils of scale and complexity. A crisis of handling risk in a land of the risk-averse. A crisis of lost knowledge and of forgotten values. A crisis born of a niche that exists for a single purpose in a world that seldom acknowledges its existence. The process and study of how we use artificial intelligence in games as a means to craft fun, engaging, and memorable experiences faces a threat to its survival.

We are at an interesting inflection point in the games industry as a whole. And while game AI shares many of these challenges with other disciplines, there are many problems faced by this corner of game development that are unique to it. Let me stress: this video is not a rant about generative AI. But it is one of the factors that led to me writing this piece. Rather, this is a sentiment that has been brewing in my mind for the past couple of years, and I figured it was time I put it all down in writing and give it a voice. By making my thoughts on this a little clearer, I want to drive a discussion in our community over how we address it. I'm Tommy Thompson. This is AI and Games, and it's time to talk about game AI's existential crisis.

This is the first video of an essay that I started to put together in 2025, where I really reflected on the fragility of the games industry. I think a lot about how games is this big, expansive, and powerful sector, but it is equally one that is fraught with challenge and relies on inconsistent and unreliable strategies in how it funds projects, how it nurtures talent, and how it builds technology. For me, as someone who has spent well over a decade researching, designing, building, and communicating the intricacies of AI for the games industry, I think game AI currently has some very specific challenges.
Addressing how risk consumes our capability for experimentation. Assessing how we design to systems rather than design for purpose. Considering how knowledge is exchanged, paywalled, or lost. Discussing how our communities, while wonderful, are fraught and threatened by the world they exist within and the voices louder than them.

This is a challenge we can address together. We will figure this out; of that I am certain. I meet many people across the sector who share in the values that we hold here at AI and Games, alongside my friends who run GameAI Events CIC with me, the organizers of the AI and Games Conference. We have the capacity to make this happen, but in order for us to combat the issues, we need to bring them into the light.

This video was originally published as an article on my website, aiandgames.com, back in November of 2025. You can read the full edition, which is also slightly longer than this video, at the link in the description, where you can also sign up to support the work we are doing. Also, please note this piece reflects my thoughts and opinions on this issue, and it does not reflect the opinions of my colleagues at GameAI Events CIC, who I collaborate with to run the AI and Games Conference. Okay, disclaimer out of the way. Let's do it.

Back in 2024, it was the 10-year anniversary of the AI and Games YouTube channel, and I made some videos to celebrate the occasion. First, I reflected on how things had changed in the decade that had passed, both in the field, but also for me personally and professionally, as I'd somehow built my own little enterprise off the back of a modestly successful YouTube channel. Then I made a second video sharing stories of how some of the YouTube episodes were made and how some videos led to my working on other projects with various businesses and organizations. The plan was always to make one final video.
Having looked 10 years into the past, I wanted to look 10 years into the future: a future for AI and game development where we explore this particular niche of the sector in new and exciting ways that empower developers and, in turn, bring new and fun experiences to players. But every time I tried to put it to virtual paper, it rang hollow. I attempted to write it on three separate occasions in the space of around 9 months, and every time it felt forced, bordering on naive. But perhaps the more concerning aspect was that the more I thought about it, I didn't like where my thoughts were taking me.

It sat in the back of my head for most of 2024, and it was only in spring of 2025, upon returning from GDC, that a single statement took shape in my mind: game AI faces an existential crisis. Since then, I've shared my concerns with peers in the industry, and that's when it began to take shape, as I laid out my argument and people nodded in agreement. What has really helped put the icing on the cake has been my more public efforts to engage in the dialogue on where AI is headed in the sector these past 2 years, be it running the AI and Games newsletter, which has thousands of readers each week, my speaking at industry events all over the world, and my friends and I founding a nonprofit organization to run our very own games conference.

Now, I'm going to walk through this thesis in three specific areas. First, discussing the challenges of game AI itself as a methodology in contemporary game development. Secondly, how education is a real challenge, both within the community and for new generations of developers. Lastly, the issues we face in bringing that community together as the world continues to spin. For this first video, we will start by digging into the challenges that face game AI itself. Then, in future parts, we will explore the challenges surrounding education and how our community is going through upheaval.
I will also close this series by summarizing my recommendations and, critically, what we are doing to address these issues. For context, if you only know me from watching this YouTube channel, you might not know that I used this platform to create my own business. In 2019, I formed my own company, AI and Games Limited, which provides consultancy and professional training on AI for video games. Since then, I've worked with major tech companies, indie, double-A, and AAA studios, trade organizations, and more to help teams get to grips with AI as it intersects with the games industry, ranging from good old-fashioned game AI all the way up to the current hype cycle in generative models. Meanwhile, in 2024, my friends and I founded a nonprofit organization, GameAI Events CIC, that, unsurprisingly, runs events for our community, with our flagship event being the AI and Games Conference.

Before I get any further in, you might be new to me, my work, and all things AI and games. So, here's a quick primer on what game AI is and why I'm talking about it. If you're already familiar with game AI, or you're a regular follower of my work, particularly here on this channel, odds are you can go ahead and skip this segment. You're going to hear a lot of stuff you've already heard from me before.

Game AI is the practice of using artificial intelligence to enhance the experience of playing a video game. This can manifest in a variety of ways. First up, there are non-player characters that appear as opponents, enemies, or allies in a game. This can range from cannon fodder in shooters like Halo, Call of Duty, and Tom Clancy's The Division, to antagonistic enemies in horror games such as Resident Evil Village, Amnesia: The Bunker, or Alien: Isolation, to companions as seen in games like The Last of Us, Half-Life 2, or BioShock Infinite.
We use it for navigating complex environments in games, not just for combat NPCs, but incidental characters in Cities: Skylines, plus the likes of vehicles and other relevant game objects. We build custom solutions that handle complex open worlds in games like Elden Ring, Death Stranding, and Sea of Thieves. We use game AI for strategic opponents in games: real-time strategy games like Total War or StarCraft run game AI systems to make decisions on what individual units do over time. And similarly, this extends into tactical levels and also tactics games like XCOM, Gears Tactics, and Into the Breach. Even deck builders like Hearthstone or Marvel Snap have game AI systems that act as bot players, typically to support novices getting into the game.

But there are also gameplay systems that support the experience in ways players seldom think about. Director systems in the likes of Left 4 Dead help ensure gameplay remains tight, consistent, and engaging. Combat management systems in the likes of the Batman: Arkham games or the Spider-Man games by Insomniac help ensure combat feels fresh but not overwhelming. Open-world games like Far Cry and Assassin's Creed manage the spawning of NPCs such that they appear in interesting situations and don't waste compute resources.

Now, to do all of this, we rely on a variety of tools, techniques, and methodologies: finite state machines, behavior trees, automated planning, navigation meshes and pathfinding algorithms, utility systems, belief-desire-intention models, and many, many more. I'll forgo explaining what each of these does. After all, I've documented this at great length over the past decade or so on YouTube and on my website. But the important part here is that virtually all of it relies on technologies that are increasingly no longer in vogue.
These classic or symbolic forms of AI are designed for humans to express the rules of the problem and then allow the AI to make decisions, either by applying specific rules or searching within the defined logic to find an answer to its problems. I'm releasing this video in 2026, and when we talk about AI in the common parlance, we're most commonly talking about machine learning. Many of the contemporary applications of AI, be it in drug discovery, social media analytics and recommendations, or, more recently, the boom in generative AI models, are powered using machine learning. Machine learning is a process of training an AI model based on data we have about a problem, such that it learns the intricacies of that problem and can solve new problems just like it. So you train a model for image recognition such that it can then identify future images. Similarly, generative AI processes large numbers of examples of text or images such that it can then produce new text or new images.

Game AI, for the most part, does not rely on machine learning. Rather, it is powered by these symbolic methods. This means we often express the logic for a character in a game in a way that makes sense for us, both as developers and designers, to interpret, while also making it computationally tractable for a computer to process. So, for example, if you're playing a first-person shooter, we're paying attention to the distance of enemy NPCs to the player, whether they can see the player, how much resource they have, or whether they're near other relevant resources: weapons, vehicles, etc. An NPC may have a goal it is trying to execute, ranging from killing the player to finding a resource or moving to cover, and we find the appropriate actions to take in that moment such that it can achieve that goal, maybe not now, but at a future point in time. This is not to say machine learning does not bring any value to games. Far from it.
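To make that shooter example a little more concrete, here's a minimal sketch in Python of the kind of utility-style scoring a symbolic system might use to pick an action. Every name, weight, and threshold here is my own invention for illustration; it's not taken from any shipped game, just one common shape such hand-authored logic takes.

```python
# A minimal sketch of symbolic, rule-based action selection for an FPS NPC.
# All names, weights, and thresholds are illustrative inventions.

from dataclasses import dataclass

@dataclass
class WorldState:
    distance_to_player: float  # metres
    can_see_player: bool
    health: float              # 0.0 (dead) to 1.0 (full)
    near_cover: bool

def score_actions(state: WorldState) -> dict:
    """Score each candidate action; the designer's 'rules' live in these formulas."""
    closeness = max(0.0, 1.0 - state.distance_to_player / 50.0)
    return {
        # Attacking is attractive when the player is visible, close, and the NPC is healthy.
        "attack": (1.0 if state.can_see_player else 0.0) * state.health * closeness,
        # Taking cover matters more as health drops, and mostly when cover is nearby.
        "take_cover": (1.0 - state.health) * (1.0 if state.near_cover else 0.2),
        # Searching is the fallback when the player is out of sight.
        "search": 0.4 if not state.can_see_player else 0.1,
    }

def choose_action(state: WorldState) -> str:
    scores = score_actions(state)
    return max(scores, key=scores.get)

# A wounded NPC with cover nearby should retreat rather than fight.
wounded = WorldState(distance_to_player=10.0, can_see_player=True,
                     health=0.2, near_cover=True)
print(choose_action(wounded))  # take_cover
```

The key property, and the reason designers like this style, is that every decision is traceable back to a formula a human wrote, so tuning behavior is a matter of adjusting weights rather than retraining a model.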
As we have explained many times on this channel over the years, game developers have found a variety of ways to use machine learning, but most of it isn't in game AI. There are exceptions, of course, such as Gran Turismo's Sophy and Forza's Drivatar, but the bulk of it is in production: matchmaking algorithms in games like Dota 2 and Marvel Rivals, cheat detection in Call of Duty or PlayerUnknown's Battlegrounds, or gameplay testing in Candy Crush Saga. Plus, while in 2026 the conversation over generative AI's value to the industry is ongoing, we've used it already in the likes of texture upscaling for games such as Mass Effect Legendary Edition and God of War Ragnarok, and of course in supersampling technologies such as Nvidia's DLSS and AMD's FSR. But I'm not going to focus on that today. Generative AI will get its own essay for sure in the coming months.

Now, perhaps one of the biggest things most people fail to grasp about game AI is that it's one of the few corners of artificial intelligence where the emphasis is not on achieving a level of accuracy or optimality. If we think about everything from logistics to image recognition, regardless of the technique, our ambition is for the AI to make the best decisions possible, ensuring it always takes the right action, or for a machine learning model to predict with a high level of accuracy. But for game AI, what is the best option is often highly subjective. If Alien: Isolation's xenomorph always took the optimal action, you'd just die again and again and again. We want NPCs and other AI opponents to make interesting decisions. But interesting is not necessarily optimal. An enemy in a game should be taking actions that lead to fun gameplay moments, and often that means making them fallible and leaving room for error. This is one of many reasons why symbolic AI is best suited to these problems.
After all, designers can express what they want characters to do in specific moments that give you an opportunity, be it to defeat them, avoid them, or complete other in-game objectives. Even aspects of game AI where optimality or near-optimality is still important, such as navigation, still need to feel appropriately designed within the confines of the game world. Hence, we have NPCs avoid running through water or stumbling across rocky terrain, because while that might be optimal in terms of distance, it feels strategically or performatively out of place.

A point I've made over the years, be it in presentations or otherwise, is that I often equate game AI to theater. It's about crafting an experience that makes sense within the confines of the game you are playing. That doesn't mean it's guaranteed to be realistic, because, again, we're interested in creating fun, and we're always doing this within fantastical scenarios. But it needs to make sense in the world that it exists within and reinforce the fantasy, as a further axis through which that all-desired but often unattainable immersion begins to take shape.

We have to think a lot about how these characters should not just exist within these fantastical worlds, but also react to the player's actions. Expanding on the theater analogy, the added complexity of game AI is that the player is not only the audience but an active participant, meaning we need to not only ensure they're seeing interesting events take place, but must then ensure the game will respond to how the player engages with it, such that it feels like they exist within that space. Naturally, we don't always get that right, but that's half the battle. On top of all this, we then need to express many of these systems to players in a way that they can interpret them and respond to those signals in kind.
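Returning to the navigation point above, that an NPC should detour around water even when wading through is shorter: in practice this is typically done by weighting path costs per terrain type, so the planner still finds an optimal route, just against designer-authored costs rather than raw distance. Here's a small, self-contained sketch of that idea using Dijkstra's algorithm on a character grid; the grid, symbols, and cost multipliers are my own illustrative choices, not any engine's actual API.

```python
# A minimal sketch of cost-weighted pathfinding: terrain multipliers make the
# planner prefer a longer dry route over a shorter one through water.
import heapq

TERRAIN_COST = {".": 1.0, "~": 10.0}  # '~' = water, heavily penalised

def plan(grid, start, goal):
    """Dijkstra over a grid of terrain characters; returns (total cost, path)."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = cost + TERRAIN_COST[grid[nr][nc]]
                if step < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = step
                    heapq.heappush(frontier, (step, (nr, nc), path + [(nr, nc)]))
    return float("inf"), []

grid = ["..~..",
        "..~..",
        "....."]
cost, path = plan(grid, (0, 0), (0, 4))
# The straight route along row 0 wades through water; with the penalty applied,
# the planner detours through the dry row 2 instead.
print(path)
```

Real navigation meshes do the same thing at the polygon level, tagging areas (water, mud, road) with cost multipliers, which is how "feels right" gets encoded without abandoning optimal search.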
Hence, a lot of game AI has overlaps in animation, in character art, in visual effects, in narrative design, and in sound design, as we rely on the support of other corners of the development team to express the thought processes, if you will, of a game AI system to the end user. And we often do this in an exaggerated format, much like theater, because we're expecting players to rely on the same behavioral recognition and processing faculties that they use in everyday life, but as a means to react to an algorithmically constructed scenario.

So, to get into the meat of my thesis, I want to talk first about the immediate future of game AI and some of the concerns that have rattled around in my head these past few years. These largely fall under four specific themes. The first is that game AI techniques as they stand are struggling under the weight of contemporary game design. Second, that there is a need for ongoing experimentation in game AI methods, but game productions seldom have the time to explore them. Thirdly, as a result, we confine our thinking to designing against known systems rather than designing systems for purpose. And amongst all of this, generative AI is often incorrectly held aloft as a solution, failing to understand what the real problems are and what benefits machine learning could bring to this sector.

Now, before I get into the weeds here, I want to take a moment to express unreservedly that this is neither a reflection on nor a condemnation of the hard work of so many programmers, designers, and other developers who build these systems out there in the industry. One of the most rewarding aspects of what I do, be it communicating game development to a broader audience, supporting studios in building their own games, or, heck, sometimes even making my own game AI systems, is engaging with so many amazingly smart and talented individuals, many of whom I consider colleagues and friends.
Making games is difficult, and every game that makes it out there into the wild is a real victory. My argument here is more a reflection on our industry and how we exist within it: that the situation we are in is brought about more by the broader business factors of the sector than by the performance of individual contributors. In truth, what I'm talking about is risk, about the need to mitigate and manage risk. This is largely due to games being an expensive yet speculative proposition. Every new game is full of unknowns: whether a particular design idea will work, whether a feature resonates with players, but even prior to that, of knowing what is the best approach to take to deal with the challenges we face.

I often think about how the game AI techniques we use today are the same ones that we've been using for 20 years. I often mention in talks that I give that if you plot the history of game AI, the most important period is from 1995 to around 2005. During that time, we had 3D games building off the likes of Doom, leading not just to id Software's subsequent release Quake in 1996, but to the likes of GoldenEye 007, Half-Life, and Thief: The Dark Project refining aspects of character behavior, sensory design, goal selection, and navigation. Strategy games ranging from Command & Conquer to StarCraft and Shogun: Total War established the need for modular systems to handle various tasks. Early experimentation with machine learning in games like Creatures and Black & White helped establish that the technology wasn't really ready for prime time. Simulation games, such as, well, The Sims, led to the adoption of utility and BDI systems to help manage competing interests and interactions with a complex environment. And then the age of groundbreaking first-person shooters in the early 2000s led to techniques such as behavior trees, courtesy of Halo 2, and goal-oriented action planning, thanks to F.E.A.R.
In summary, many of the game AI techniques we still use today were largely conceived and invented during this window, with a handful solidifying their worth in later years: the likes of HTN planning, courtesy of Guerrilla's work on Killzone and the Horizon games, and, more recently, state trees in Unreal Engine, which are beginning to take shape as a potential replacement for, or companion to, behavior trees. Now, these systems continue to serve us well. After all, many a game is shipped every year using these techniques, and we continue to find new ways to utilize them. But it's a little concerning that the bulk of our game AI methodologies are now so old. In many respects, they were built to support games from 20 years ago rather than the games we build now. In an era where everything from graphics pipelines to animation systems, entity component systems, and networking frameworks has evolved and changed from one generation to the next, game AI has not truly evolved since 2005.

It leads to an interesting situation where, for decades, we have had developers working to find ways to make old ideas fit into the larger scope of building bigger and more complex architectures that often merge one or more of these technologies together, all in an effort to achieve a desirable level of functionality. Meanwhile, players are left wondering why the visuals of their game have evolved significantly, but the NPCs in the game world retain a level of intelligence exhibited in games 20 years ago. Heck, it's a question that gets leveled at me by my friends and family who are not game developers: why does it feel like nothing has changed? What about all these recent innovations in AI? Surely they could have led to some sort of evolutionary breakthrough in behavior in the past 5 to 10 years. Now, an important point I will raise now, only for it to recur later, is that games are best perceived as a sum of their parts.
Regardless of the level of visual fidelity, players will complain when the writing is poor, the controls feel stiff, or any of the 101 other ways that the game may feel lacking. With game AI, you can have characters that look great, with great voice art, motion-captured animation, and great modeling and texturing, but it falls apart when they act in ways incongruent with the world, or too smart to the point of being unfair. This is an evergreen concern of game design. But as games become more complex, the minimum barrier of quality continues to increase, and there's no simple solution to address these issues.

To the first question raised a moment ago, of why it feels like nothing has changed, I'd argue it's understandable people feel this, even if, in my opinion, their observation is incorrect. Things have changed significantly in that time, but I'd argue players seldom notice it, given that achieving the status quo is a challenge in and of itself. If you were to compare the original Assassin's Creed from 2007 to 2025's Assassin's Creed Shadows, the scale and complexity of the game's design has exploded, and all of that has an impact on the game AI systems, be it in understanding and processing the real estate of the game world, to the number of gameplay systems NPCs are expected to recognize and interact with. Trying to achieve the same base level of intelligence in Shadows is a much larger task than in the original game. The 2007 title had a much more limited scope, but equally, it had far lower expectations from players. We forget that Assassin's Creed was quite groundbreaking on its release. Not to mention that the player's agency is far more vast in Shadows versus the original, which also has an impact on how flexible and engaging these systems are expected to be.
Meanwhile, there is an ongoing challenge of how to handle so many NPCs at once, often in large-scale simulations that players are becoming increasingly more immersed in, as game worlds can now render and process hundreds of in-game characters at once. How do we ensure that those NPCs are always doing something reasonably sensible? Or, perhaps more accurately, how do we convince players the NPCs were doing something sensible, both when unobserved and when once again in the player's view? This is something that games ranging from Cities: Skylines to Watch Dogs: Legion have explored in various capacities, and it continues to be an evergreen problem, with more recent titles such as Kingdom Come: Deliverance 2 handling it in their own way. We'll have a video on that later this year.

It's a challenge of resources. I need to make sure NPCs are constantly making decisions that make sense in the context of the game and their goals, can be processed quickly with a limited CPU budget, and can often be processed in areas of the game not loaded into memory. Players often fail to grasp that when you're in a vast open world, so much of the game is not active and loaded in. As a result, we often have to simulate what the rest of the game world outside of the player's purview is doing, but do so in a cost-effective fashion so that you don't experience performance hits in your game because, you know, the world is thinking.

We exist in an age where games continue to increase in scale, be it in virtual space or in mechanical and dynamic complexity. While game AI handles these issues well enough, it comes with a number of challenges. As the complexity of these systems increases, there is a need to experiment, not just to find new ways to repurpose and rework old ideas, but also to see whether the ever-changing landscape of AI can lead us to new innovations that can work in lieu of, or in tandem with, these existing methods. There is most certainly experimentation ongoing within the sector.
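One common shape this cost-saving takes is an "AI level of detail" scheme: NPCs near the player run their full behavior every frame, while distant or unloaded ones tick rarely and coarsely. The sketch below is a hypothetical illustration of that bookkeeping; the distance bands, bucket names, and tick rates are invented for the example, not drawn from any particular engine.

```python
# A minimal sketch of AI level-of-detail: NPCs far from the player are
# simulated coarsely and infrequently, so the world can "think" cheaply.
# Distance bands, buckets, and tick rates are illustrative assumptions.

def lod_bucket(distance_to_player: float) -> str:
    if distance_to_player < 50.0:
        return "full"        # full behavior logic, senses, animation
    if distance_to_player < 200.0:
        return "reduced"     # simplified logic, no per-frame senses
    return "abstract"        # schedule-level simulation only

TICK_INTERVAL = {"full": 1, "reduced": 10, "abstract": 120}  # frames between updates

def npcs_to_update(npcs, frame):
    """Return the (name, bucket) pairs whose LOD tier is due an update this frame."""
    due = []
    for name, distance in npcs:
        bucket = lod_bucket(distance)
        if frame % TICK_INTERVAL[bucket] == 0:
            due.append((name, bucket))
    return due

crowd = [("guard", 12.0), ("merchant", 120.0), ("farmer", 900.0)]
print(npcs_to_update(crowd, frame=10))  # [('guard', 'full'), ('merchant', 'reduced')]
```

The "abstract" tier is where the off-screen trickery lives: rather than running behavior at all, the game can jump an NPC along a daily schedule when they come back into view, which is cheap and, if done well, indistinguishable to the player.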
We wouldn't have people presenting about their work at events if there weren't. I'd say over the past decade or so, the industry has done a great job of finding new spins on existing ideas to help with specific design challenges. In this context, I think about the fantastic talk at our 2024 AI and Games Conference on Warhammer 40,000: Space Marine 2, which I then made into an episode here on the channel: every innovation built for that game was taking existing systems from other games and reworking them to fit the design of their xeno-smashing simulator. It speaks to how most studios experiment by taking existing ideas and fitting or revising them to suit their needs. This is, of course, a smart and sensible way to go about things, particularly when you have a team of developers familiar with those techniques.

I recall when I wrote in my newsletter about the AI Summit at GDC in 2024 that there were more talks from studios revising spatial and navigation systems than there were on the adoption of large language models. Now, of course, I had a hand in that. I helped curate the talks as a member of the advisory board, but it made sense to include them because it was reflective of the conversation in our community in the moment. Games like Hunt: Showdown, Warframe, and Suicide Squad: Kill the Justice League were all revising navigation systems or building custom ones to service their game design. Clearly, this speaks to a broader challenge and individual efforts by studios to meet it. This is the product of that experimentation, and it results in us attending conferences to share our experiences and have that conversation. It's why game developer conferences are so valuable. And I don't say that to big up our own endeavors.
You know, we run our own event now, but rather to reinforce their importance in the global development landscape, given, as we will discuss in a future part of this thesis, those events and our ability to share and exchange information are under threat. But what if a larger or more complex effort of research is required? It's not often that a game that makes it to market is given the time or budget to allow for such creative affordances. After all, games need to ship. And I feel like game AI in particular is treated as such a known commodity that it is not given the same level of diligence and resource from a budgetary side of things for experimentation and innovation to take place, unless it aligns with investor narratives.

It's notable that the two big games in development in recent years where NPC AI systems were at the forefront of their design were subsequently cancelled. Monolith's Wonder Woman and Cliffhanger Games' Black Panther titles were both exploring the same idea of establishing relationships between NPCs such that they change the structure and experience of the game as you play it. In other words, one of them was using the Nemesis system, and the other one was building something like it. Now, for what it's worth, I've heard from numerous sources about what those games were shaping up to be, and they sounded like really interesting experiences. In fact, the stuff I heard about Wonder Woman really felt fresh in a way that truly excited me. But they were complex and expensive propositions that needed time to flesh out the concept and have it be cohesive. And for two large studios owned by large corporations, that proved unviable, given their desire to cut costs and maintain a more efficient profit margin. Naturally, there are 101 other things going on that affected those productions. But you wonder what level of leeway would be given if those game AI systems were talked about in the same way that investors and C-suites discuss generative AI.
Look, just because these are older and established technologies doesn't mean they just work out of the box. The strength of symbolic AI is that it relies on human designers to encapsulate the thought space, you know, the state space of the problems they seek to address, and then design algorithms to explore it. But this is also its inherent weakness, in that it often leads to massive explosions of complexity that need to be explored and corralled. While machine learning can and does help in these situations, it often robs us of the ability to control it as finely as is needed for the game's design.

But that's just for the big experimental ideas. Sometimes we don't get enough time to just do the basics. As I said earlier, making games is difficult. Sometimes the time spent investing in ideas and technology is what will lead to larger returns in the future. This was readily apparent when I sat down with Jeff Orkin for a conversation about the development of the AI of F.E.A.R., Monolith's 2005 first-person shooter, which has been highly celebrated over these past 20 years. And I'm really proud of the retrospective we put out as a result, even if the closure of the studio has left a horrid taste in my mouth. But I want to focus on one specific part of that video. Jeff told me Monolith spent about a year in pre-production on what would eventually become F.E.A.R. and used that time to rewrite their LithTech engine such that it could address the needs of their upcoming project. To that end, he spent a year, largely uninterrupted, focusing on writing new game AI systems, including both a navigation mesh system, which of course is now standard in most game engines but wasn't at that time, plus the adoption of a planning system that would later become goal-oriented action planning. One of the recurring conversations I've had with game developers who watched that video is that they were blown away, jealous even, that Jeff had a year just to focus on designing new tech.
The ability to spend 12 months, largely without additional concerns, researching an idea that existed outside of games and then employing it in a game engine? It seems fanciful in the modern day, right? As I said earlier, more often than not, it's less about experimenting with new ideas and more about finding new ways to integrate existing ideas into new designs. But that's only if you even have that kind of time available. While I was writing this piece originally for my newsletter in the fall of 2025, I spoke with some developers who worked on a well-received AAA game released that year, and they told me they simply didn't have time to experiment. Their AI team was small, their resources were tight, and the time frame was ever looming. Just getting it done and ensuring the AI was passably good was all they could afford to do. And as discussed, just that minimum barrier to quality is being set ever higher, courtesy of the design of the games themselves. To their credit, I think they did a great job, and I had a great time playing the game.

And that's coming from AAA. I often doubt I'm going to see really experimental ideas for game AI coming from indie, because making great AI for games requires time, expertise, and resource. There's a reason most AAA studios have dedicated AI teams: you need a team of people working largely on these systems, often in conjunction with gameplay and other elements, over that multi-year development period just to get it to the level of quality that they do. It's why, any time I've been asked to consult or help write AI for games, it's been with indie studios, because these teams, and don't get me wrong, these are highly skilled and very competent people, are far more constrained in resource and seldom have a dedicated AI programmer. As a result, a common issue that emerges is they build game AI systems that work but are not particularly engaging.
Hence, I've consulted on several games in the past couple of years, all of them indie, where their NPC AI is missing that something special, that secret sauce, and I've given advice and guidance on how to address it. Given developers are relying on tried-and-true techniques, I often wonder whether there is a risk that we avoid ideas that would prove complex for those approaches to solve. This is a thought I've had bouncing around in my head to some degree for a while now, and I could not quite put my finger on it exactly. However, it was affirmed to me when a former AI lead at a AAA studio put it to me quite succinctly: there is a risk we over-design to fit game AI systems. What I mean by that is that we build AI to behave in ways that make sense both in terms of existing game designs, but also in how we know we will develop it on the back end. Naturally, we want to be able to transpose a game idea from scribbles on a notepad to a functional gameplay system. But the argument is that we're making that easier for ourselves, because when we design game AI, we design it with the knowledge of what methodology or technique we are going to use in the future. And this ultimately could breed conformity, because we're stopping ourselves from thinking outside of the box on what we could do: not just coming up with new ways to approach existing problems, but designing new approaches for potentially new problems. I think all of these things, the lack of experimentation, the designing to systems, speak to the realities of shipping games. It's an expensive and time-consuming endeavor, and we're aware that the clock is running down and the budget is running out. Hence, as I stressed earlier, this is less to do with developers themselves and more a result of the economics that surround a lot of game development.
The success of the industry, and the ever-increasing scope of games being released, breeds a sense of conformity: we stick to known principles, perhaps even play around with them to create new ideas, but we stay within the parameters of what we have, because there's little space or appetite to do otherwise. And so, as complexity increases and costs continue to go up, what a perfect time for a speculative technology that claims to fix it all. Since OpenAI released GPT-4 back in 2023, we've spent the past couple of years being besieged with the idea that generative AI is going to fundamentally change the video games industry. Now, that is an argument that is complex, nuanced, and in many respects presented in bad faith. I don't have the time to focus on that in this piece. This video is long enough as it is. That topic will get its own rant at a later date. But for now, let me stress that generative AI as a technology, an updated approach to context-sensitive machine learning, is really interesting and quite powerful. Yet, sadly, this has been exploited by the generative AI industry: mass-market models offered by corporations that have invested hundreds of billions of dollars in a venture that has no definable path to profit, leaving everyone trying to kill flies with bazookas as we decide to use AI tools rather than think and apply ourselves. All the while exploiting the works of others without remuneration, a gambit the internet has provided for those willing to exploit it, and one they can get away with unscathed, courtesy of a lack of regulation against it. As I say, the AI industry will get its own rant another time. But for now, there are a bunch of problems to address with the generative AI boom in games, even assuming that it was developed in responsible and ethical ways. And there are two issues to raise with game AI. The first is that stakeholders are keen to revise game AI history to suit their needs.
And the second is that very few people seem interested in using generative AI in ways that actually address game AI's problems. Far too often, the narrative around game AI is conveniently rewritten by generative AI advocates: that essentially everything in NPCs was just rules until now, with generative AI being the solution for, you know, a living and breathing ecosystem. In most instances, what they're really saying is that NPCs can have conversations now, courtesy of large language models. The argument is often made that what has come before is not real AI, a point used to avoid having a more nuanced conversation about the issues facing the sector, because the focus is on getting attention and, in turn, investment. But when we get into the details, I tend to see the same familiar tropes. When we hear about the future of NPC design with generative AI, it doesn't deal with the problems that game AI handles on a daily basis. I never hear new AI startups talking about handling edge cases in navigation like in Death Stranding. I never hear how we can build new spatial awareness systems to handle open-world game design like in even Suicide Squad. I never hear how generative AI can make for engaging search-and-hunt behaviors for the likes of the upcoming Splinter Cell reboot, or how it can manage appropriate combat distances and other relevant game balancing for titles like Assassin's Creed Shadows, or how we can use this technology to handle thousands of NPCs running performantly in games like Kingdom Come: Deliverance 2. Instead, the loudest of AI hype merchants try to reframe game AI as the things that generative AI is more likely to address successfully. It then leads to us talking about the same things over and over again: non-player characters that can utter realistic and emergent dialogue, or running emergent narrative systems that can generate infinite stories, or crafting NPCs that react to human players through natural language processing.
This of course doesn't cover the full gamut of ideas being raised, given much of the conversation around generative AI is on cost cutting and changing existing paradigms. But let me stress again, for right now I'm focusing purely on the game AI issues here. A big reason that these are the topics of focus is that the two underlying themes here, story generation and natural language processing, have historically been difficult to build, impossible to scale, and challenging to implement in game AI. We have attempted to do this over the years with games like Versu, Façade, and Event[0], all using symbolic AI methods like planning, because ML-based natural language processing was simply nowhere near as good then as it is today. But in each of those cases, they required a significant amount of structure, of building systems that lead to natural language interaction. And as we'll see in a second, almost all the generative AI systems that are actually succeeding are relying on similar ideas. Dare I say it, the future of AI-native games, where generative AI is used for gameplay, relies on game AI to make it work. The idea that generative AI can fix game AI speaks to a fundamental misinterpretation of what machine learning is good at. As discussed earlier, when I can't define the rules of my problem, or they're too difficult to express, I can train an ML model. But more often than not in game AI, I know what my rules are, but I need to be able to reinforce them at scale. I don't need an ML model to do all the heavy lifting of designing the experience. I need it to figure out some of the more context-sensitive aspects, because players are capable of approaching my curated game design from a thousand and one directions. Now, the thing is, generative AI is really good at context-sensitive decision-making, provided it has been designed and trained with this in mind. But the thing everyone tries to use GenAI for is the one thing it sucks at: creating meaningful and rich outputs.
This is literally where we need good writers, good artists, good designers to create the context and content we need. I don't want to simply assimilate all of that data into a trained AI model to regurgitate it without nuance. I would much rather train a model to guide me to human-crafted experiences more efficiently. The idea of curating content and guiding towards it extends to decisions and actions. Having a generative model that helps contextualize player actions such that meaningful responses and interesting scenarios occur would be great, but I want complete control of what those scenarios are, because a stage performance without a script isn't of much use to me as a designer. Even the most chaotic of simulation games, like The Sims, are still inherently controlled by a logic-based system that designers have defined, refined, and ultimately dictate. Behavioral systems for game AI require an internal model of the world that is reinforced. I can't rely on an LLM generating a plan of action, given there is still a statistically likely possibility it creates something implausible, injecting chaos into a world of often total order. So even if it works, it still needs validated against a system capable of doing that. You know, like a planner already does. Plus, even when machine learning and the like is used in game AI, we tend to work around it. Forza's Drivatar has designer-crafted rules to override its behavior in specific contexts. Meanwhile, Gran Turismo's Sophy relies not just on having multiple models built to handle varying expectations of player skill, but on those models being trained with designer expectations of how they should behave during play. I have long felt there is an opportunity to utilize machine learning more in the context of game AI, but only when we reach a point where there's a greater understanding of what ML models can do and we can more readily interface them with traditional workflows.
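To make that validation point concrete, here's a toy sketch of the idea: an LLM proposes an action sequence, and before the NPC is allowed to act, we simulate the sequence against the game's own world model, exactly the kind of check a planner performs anyway. All action names, rules, and state keys here are hypothetical, invented for the example.

```python
# Each action: (preconditions, effects) over a dict world state.
# Illustrative rules only; a real game's world model is far richer.
ACTIONS = {
    "unlock_door": ({"has_key": True},      {"door_locked": False}),
    "open_door":   ({"door_locked": False}, {"door_open": True}),
    "enter_room":  ({"door_open": True},    {"in_room": True}),
}

def validate(state, proposed):
    """Return True only if every proposed step's preconditions
    hold at the point in the sequence where it would execute."""
    current = dict(state)
    for step in proposed:
        if step not in ACTIONS:
            return False  # hallucinated action name
        pre, eff = ACTIONS[step]
        if any(current.get(k) != v for k, v in pre.items()):
            return False  # implausible ordering for this world
        current.update(eff)
    return True

start = {"has_key": True, "door_locked": True}
print(validate(start, ["unlock_door", "open_door", "enter_room"]))  # True
print(validate(start, ["open_door", "enter_room"]))                 # False: still locked
```

The LLM only ever suggests; the symbolic world model remains the arbiter of what actually happens, which is the division of labour being argued for here.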
And the thing is, we have done this, but people either don't know about it or are dismissive of it, because the term AI is now so toxic it's hard to have a meaningful conversation. There are a lot of useful ways in which we can use machine learning models, be it in broader production or in game AI itself. I mean, yes, I often bring up Gran Turismo and Forza, but there is so much more than that. Last year, we talked about how Star Wars Outlaws uses neural networks for speeders to solve open-world vehicle movement. We covered Little Learning Machines back in 2024, which uses reinforcement learning as a puzzle mechanic. Age of Empires 4 is still perhaps one of the biggest examples of an overlap of traditional game AI with reinforcement learning, and it's still sitting in my backlog as a super interesting topic to make a video about. More recently, the goalkeepers in EA Sports FC 26 are trained using machine learning. The vast majority of PvP mobile games out there use ML bots, too. Meanwhile, there are companies out there trying to figure out what use there is in generative AI and large language models to work alongside, rather than replace, traditional game AI practice, and doing it using their own data, without stealing the works of others, and leaning into what generative models are good at rather than what hype merchants want to sell. But again, this gets drowned out by investor hype trying to replace everything we do with stuff that just won't work. Or it gets shot down by angry players who equate any and all forms of generative AI with plagiarism engines, a problem created by big corporations using it to do exactly that. I dedicate a future part of this essay to this issue, but it's worth stressing now that we are trapped in a space where meaningful conversation on AI in the games industry is practically impossible, with so many larger players leaning into the hype as a means to garner investment or be seen in the broader conversation.
Meanwhile, you have players who already don't understand how video games are made and then assume any and all AI in games is slop. It's rather depressing that I'm sitting here as one of the only voices still out there trying to sit in the middle. That yes, most generative AI we're seeing in games right now is garbage, but there is some actual interesting stuff happening if you know where to look. Heck, it's why the AI and Games newsletter came to be, because making videos on this and trying to react to the headlines quickly is time-consuming, and I felt it was much easier to do this in a weekly digest. I mean, this channel has always been about showing off the most interesting work happening in the sector rather than having it act as my pulpit. Though with this video, I've reached a point where I can no longer really sit silent on this platform. In the past year or two, we've seen everything from Inworld's Origins demo all the way up to inZOI by Krafton that advocates for a future of AI-native games and the use of generative AI as a means to craft entertaining gameplay. But so many of these games rely on using traditional game AI as a mechanism to achieve their goals. In truth, virtually all companies fall back on structured frameworks when they need to make any sort of LLM-based interaction work. Because LLMs are statistical inference engines, they lack the ability to maintain broader context, understand gameplay structure and larger authorial intent, never mind be consistent from one prompt to the next. When I played things like Origins back in 2023, the fantasy fell apart within two minutes, because it's clear there is no narrative design being built around these characters, nor do I feel like they actually exist in the world, because LLMs can't do that without a huge amount of handholding and a whole stack of game AI systems to support them.
You need this because otherwise these characters feel empty, soulless, and clearly don't exist within the world you've built. And these are all issues that traditional game AI has already addressed, where we sit with narrative designers, game designers, animators, and voice actors, working through the expectations of how that character should exist within that space such that it feels authentic. In fact, after I first wrote this for the AI and Games newsletter and published it in early November of 2025, I was proven right mere weeks later when Ubisoft did exactly this: I was invited to Paris to try out their Teammates demo. And the reason that this works is because when you talk to the NPCs, they then run game AI code under the hood to execute on what I asked them to do. They literally run behavior trees. Meanwhile, they have teams of narrative and gameplay designers working for months to ensure that these GenAI-powered NPCs have some sort of understanding of the world around them. All of that to create an experience that's roughly 30 minutes long. And to their credit, they are very quick to point out that this demo is in no way reflective of something they could actually ship. Now, I stress this because generative AI introduces a game design problem that I feel not enough people talk about, in that if we consider the base level of player expectations, one of the most surefire ways to highlight the limitations of an NPC is to make it conversant, courtesy of a large language model, because players will implicitly ascribe a level of intelligence to an avatar in a game world once you make it capable of having a conversation, for lack of a better term. It's the exact same problem I mentioned earlier that modern Assassin's Creed and other similar open-world games have to contend with.
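The pattern being described, natural language in, ordinary game AI out, can be sketched roughly like this, with the LLM reduced to emitting a discrete intent label that selects a behavior tree. To be clear, this is my own toy sketch and not Ubisoft's code; every name in it is invented.

```python
SUCCESS, FAILURE = True, False

class Sequence:
    """Behavior-tree composite: runs children in order and
    fails as soon as any child fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, npc):
        return all(child.tick(npc) for child in self.children)

class Action:
    """Leaf node wrapping a function that acts on the NPC."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, npc):
        return self.fn(npc)

def has_ammo(npc):
    return npc["ammo"] > 0

def move_to_cover(npc):
    npc["in_cover"] = True
    return SUCCESS

def suppress(npc):
    npc["ammo"] -= 1
    return SUCCESS

# Intents a hypothetical LLM front-end might emit, mapped to trees.
BEHAVIORS = {
    "cover_me": Sequence(Action(has_ammo), Action(move_to_cover), Action(suppress)),
}

npc = {"ammo": 3, "in_cover": False}
ok = BEHAVIORS["cover_me"].tick(npc)  # the NPC runs ordinary game AI code
print(ok, npc)  # True {'ammo': 2, 'in_cover': True}
```

The conversational layer only picks the key; everything that makes the character feel like it exists in the world is still classic behavior-tree logic.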
Because you've increased the complexity of the overall game, we expect more of the NPCs, and now you're pretending that an NPC can have a fully-fledged and nuanced conversation with you. As such, players will expect this character to be far more intelligent in the moment. Putting an LLM into the game and getting it to speak to the player is just half of the problem at most. If the player is engaging in a dialogue, then the LLM needs to remember the context necessary to ensure that experience stays consistent and engaging. And when you look at studios who have achieved this to some degree of success, many of whom are indie, they have LLM tech stacks that then have structures, one might even call them rules, placed upon them in order for them to be practical. We talked about this with the UK studio Meaning Machine on the newsletter last year, because they themselves are fully aware that LLMs are nowhere near as capable of achieving their game design goals as is often advertised. But that's only for the narrative framing. More often than not, these characters still suck as NPCs, because all of the game AI considerations from 20 minutes ago in this video are still not being addressed. Characters aren't attached to navigation meshes. They don't have a proper behavior tree for understanding and acting within the world. They lack any spatial and context-sensitive information. All of the same problems we've been trying to fix since 1995. I'm conscious I could talk a lot more about the generative AI part of this essay, a lot more in fact, particularly on the advocacy and bad-faith arguments raised about using the technology for game AI. But to repeat, it's important to close this out by stating that generative AI is not the solution to the problems that game AI faces. Rather, game AI is a solution to the problems generative AI faces. And this point needs reinforced and communicated on repeat.
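As a tiny illustration of what those "rules placed upon" an LLM stack can look like in practice, here's a hypothetical guardrail: the model is asked to reply as JSON matching a fixed schema, and anything that fails to parse, or references topics the game doesn't know about, is rejected in favour of authored fallback content. The schema, topics, and fallback line are all invented for the example.

```python
import json

# Topics the game's dialogue system actually supports, plus a
# designer-authored fallback line. Both are illustrative assumptions.
KNOWN_TOPICS = {"quest", "shop", "rumour"}
FALLBACK = {"topic": "rumour", "line": "Strange times in the village..."}

def constrain(raw_reply):
    """Gate a raw LLM reply through a schema check; return the
    authored fallback whenever the reply can't be trusted."""
    try:
        reply = json.loads(raw_reply)
    except json.JSONDecodeError:
        return FALLBACK  # model ignored the JSON instruction
    if reply.get("topic") not in KNOWN_TOPICS or "line" not in reply:
        return FALLBACK  # unknown topic or missing field
    return reply

print(constrain('{"topic": "shop", "line": "Potions, two for one!"}')["topic"])  # shop
print(constrain("As an AI language model...")["topic"])                          # rumour
```

It's crude, but it captures the principle: the LLM proposes, and a deterministic rule layer decides whether the game will actually accept it.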
And frankly, until games actually start shipping with these lessons learned, the conversation isn't really going to evolve. In a time when we rely on established convention, as budgets and timelines are tight, we need to continue to experiment and explore to figure out solutions to game AI's challenges. But stop looking to generative AI as this silver bullet. In reality, what value machine learning can have in game AI is working in conjunction with existing systems to work around their cracks and limitations. Having machine learning models, be they generative or otherwise, more readily respond to gameplay events to capture edge cases that are difficult for designers to express, or recognize contextual behaviors by players in ways that feel congruent and sensible. Building new behaviors for one or more characters to execute at once in ways that are appropriately staged and constructed, but lifted from designer templates. All of that sounds like a valuable thing to explore, and it's exactly the sort of thing machine learning is suited for. But in the execution, it needs to be performant, and it needs to be easy to edit and tweak to a designer's needs. As this AI bubble continues to grow, with cash coming into the space, investment should be focused on figuring out where game AI and ML can meet in the middle in a way that actually works for us all, rather than this naive insistence that the new shiny thing will replace decades of expertise in the sector. I mean, heck, give me the money and I'll go work on it. Clearly, I have an opinion or two on this issue. So, this is the first part of a much larger essay I've been working on. And in truth, when I started writing it all down, I didn't suspect it was going to be as long as this. So, I'm glad I let myself break this up somewhat. In part one, I have discussed the challenges faced by game AI as a field. But for part two, which will appear on this channel soon, the focus will be on education.
What impact changing trends in AI are having on higher education syllabi. We're going to talk about cultural changes that dictate government policy, and also how games education is a precarious sector held aloft by a handful of individuals with the best of intentions (and yes, one of them is me), plus the impact all of this is having on both the present and future of game AI. Naturally, as someone who spends a lot of time educating others online, never mind a 10-year career in academia, I unsurprisingly have a lot of thoughts on this. Thanks for watching, and I'll see you then.




