Why AI Projects Fail: Real Customer Stories
The video examines why AI projects fail, focusing on organizational and structural barriers rather than technical shortcomings. It stresses the importance of adapting operating models and workflows for successful AI deployment, highlighting problems such as pilot proliferation, productivity gaps, and process debt.
Key takeaways
- Companies invest in AI but do not change their operating models.
- Bottlenecks often shift from model quality to organizational design.
- Organizational wisdom often goes undocumented and is missed by AI.
- A 20-30% reduction in operational process costs with proper AI integration
- Up to a 40% increase in team productivity through automation of routine tasks
- A 15-20% improvement in decision quality thanks to AI-driven data analysis
An underestimated factor is employees' internal knowledge that is not documented anywhere. AI cannot take it into account, which leads to inefficiency and errors. It is important to create a system for capturing and integrating this knowledge.
Video description
So this tutorial is going to focus on why AI projects fail, a really important topic: war stories from real customer deployments. Raja Iqbal is going to take you through this incredibly important lesson on where things tend to go wrong, which especially happens at the point when you go to deploy in the real world. We'll look at how failure happens and some steps organizations can take to avoid it. I'll hand it over to Raja to start the tutorial. >> Thank you, Rebecca. Let me get situated with my screens and the chat window. Okay, my screen should be visible now; we are all set. Welcome, everyone. You may have seen me yesterday if you attended some of the panels. What I'm going to talk about is essentially the patterns we have observed while talking to customers on the training and upskilling side and on the product side. We have been talking to a lot of customers, training a lot of customers, building solutions for them, and deploying a product we have built inside many customers' own clouds. I don't think I formally introduced myself, so here I am. Feel free to connect with me on LinkedIn; you can look me up, and the QR code is there. I've been practicing AI for more than half of my life. I started a long time ago in grad school and ended up doing machine learning. I worked for Microsoft for a while, mostly at Bing and Bing Ads. Currently I am the founder of Ejento AI, an instructor for Data Science Dojo, and an adjunct faculty member at the University of Pittsburgh, where I teach generative AI at the graduate and undergraduate level. Let's dive right in.
So when companies start investing in AI and building AI products, there is a core paradox. What is that paradox? Pretty much every company is investing in AI right now; any company that matters is rushing in so it is not left behind. Companies launch pilots, and at the level of an individual employee or a small team, the pilot shows success: some individual gain is visible. So they try to deploy these pilots. But when the pilots are deployed widely, things change. I can give you firsthand examples we have seen: a sales copilot deployed that the sales team does not want to use, or a marketing companion to the CRM deployed that no one wants to use. There are many different reasons for this that we will talk about, but fundamentally, companies adopt AI on the surface, or at least try to, without changing their operating model. They do not adapt their standard workflows, and the gains that are visible at an individual or very small team level never show up on the balance sheet. The case we will make is that AI, and especially agentic AI, requires your enterprise to be fundamentally rewired. You have to start doing certain things differently when you are on that AI transformation journey. And very often the bottleneck is not model quality or data availability. The models are amazing, right?
Having built these systems over the last couple of years, I can tell you the models have come a long, long way. It is incredible how innovation has accelerated on the model side and even the architecture side: token limits, inference latencies, context windows have improved, and reasoning has improved enormously. A lot of things we used to have to prompt for are now part of the reasoning architecture. The challenges organizations now face are usually not technical capability alone; it is technical capability failing to meet the organizational design. It is the last-mile problem, as we call it. So you get these islands of productivity: I have a tool that writes proposals very well on my laptop, or for one or two people, but when I try to scale that pilot widely, things don't work as well. Or the bottleneck simply shifts: previously you spent most of your time drafting a complex contract, but now there is a manual legal review that has to happen, and that review is still the bottleneck. You have automated part of your contract pipeline, but you have not fundamentally redesigned the entire workflow to move at an agentic pace. We have also observed that companies often start at the top: they start scaling agents right away, "let's just go build this agent," only to realize that their infrastructure, data, and tools are not ready. You may not have the right connectors.
You don't have the right access or the right permissions, the processes are not there, and then you have security, privacy, compliance, and governance problems, which are a big challenge. Then there is the interoperability problem: can the data sources and agents talk to each other? And just talking is not enough; they have to talk in a manner that is safe. Do they talk in a compliant manner? Are they maintaining the governance aspect of it? Then there is general AI literacy in your organization, which I will cover in a later slide. It is an underappreciated area, and it is more important than ever that your entire organization is AI-upskilled. If you are a decision maker, you will have tons of service providers, solution integrators, and services companies come to you and say, "We will just come in and deploy things for you; it's going to be roses, and you will live happily ever after." The challenge is that it doesn't work that way, because the details are actually quite gory. So first we will look at some key friction points. By the way, this is not primarily a technical talk. Most of our talks have been technical, but this one, for a change, is less technical and more about the other, non-technical aspects that are needed for success. We'll discuss some canonical organizational friction points. These are not technology gaps; I'm not going to fret about technology in this talk. I'm going to talk about the organizational design aspect of AI adoption.
As I said, these are structural barriers within an organization, and my hope is that by the end of this talk all of you will be in a position to assess where your company stands and what your AI transformation roadmap must address. We'll talk about seven points: pilot proliferation, the productivity gap, process debt, governance, architectural complexity, the efficiency trap, and tribal knowledge. The first one, pilot proliferation: some of these ideas come from an article, and I like the way the authors put it. They call it "pilot rich and transformation poor," and this is the reality of many, many companies at the moment. The problem is that in many cases there is no repeatable path, and each pilot remains an island. The question you should be asking in your organization is this: the proposal writer I have built, or this CRM automation agent, or this automated email responder, or this automated legal document review, or this AI paralegal we have created, what is the repeatable path from a working pilot to the point where it becomes a standard company workflow? The next thing you need to ask is how these individual gains, the gains achieved in their respective areas, show up on your balance sheet. And there is something worse. I don't know how many of you have heard the term "AI slop"; they used to call it workslop, and now they say AI slop.
Basically, instead of AI helping, there is this AI slop being created, and now you're spending more time fixing the problems left by either a sloppy AI agent or a sloppy use of an AI agent by someone who has not done their job. How do you handle this? But even in the best case, say you have a productive agent that actually saves you time: how is that time going to be reabsorbed? Because the worst case we have seen is an illusion of AI helping productivity while AI is actually creating more work than it saves. So the question, even for a successful copilot, is: where specifically would the reclaimed time go at your company, and who owns that redesign? Next, process debt. Broken processes cannot simply be redesigned; they need a rebuild. For those of you who have been building or working in data and analytics for a while, we saw the same thing in the previous wave of predictive modeling, machine learning, and analytics transformation: garbage in, garbage out, as we call it. If your data is broken, can you build models on it? In this case, if your processes are broken and your data quality is bad (data is still king), how do you redesign your processes from scratch so AI can run on them reliably? And does everyone know how to use this AI agent?
Because end-to-end automation is easier said than done. In the AMD demo we were looking at, this whole OpenClaw thing comes with a disclaimer. Someone was telling me just yesterday, one of the speakers, I'm forgetting who, that OpenClaw actually modified its own guardrails; it found them restricting. When you give autonomy to these tools, well, they are autonomous. The blessing of these tools is the autonomy, and that can also become the curse. So how do you make sure your process is redesigned so that it is AI-ready, and which workflows do you need to rebuild before you can layer AI on top? And for that matter, which workflows can be used as-is? This next one is a good one. When we look at AI, be it GPT-5.2 or GPT-5.1, Llama 4, Claude Sonnet, Claude Opus, you name any model: these models are incredibly knowledgeable. They have seen a lot of data across the board, probably all the published books, the entire internet, all the tweets, all of Reddit; use your imagination. But one piece of knowledge these models do not have is your organizational knowledge, whatever is inside your organization, because they don't have access to it. Well, when you build a retrieval-augmented generation application, or when you connect your data sources to your models, you bridge that gap as well. Now the models can see your internal enterprise knowledge, your confidential and proprietary knowledge. But there is still a piece of knowledge these models do not have, which is what is sitting in my head that I never documented. I'm a factory worker.
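The bridging step he describes, connecting internal data sources to a model via retrieval-augmented generation, can be sketched in a few lines. This is a toy illustration under stated assumptions: the document store is hypothetical, and a crude word-overlap scorer stands in for a real embedding search; the resulting prompt would then go to an LLM call, omitted here.

```python
import re

# Toy sketch of the retrieval step in retrieval-augmented generation (RAG).
# The docs and scorer are illustrative stand-ins for a real vector store.

def words(text: str) -> set[str]:
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    return sorted(docs, key=lambda d: len(words(query) & words(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Inject retrieved internal knowledge into the model's context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Use only this internal context:\n{context}\n\nQuestion: {query}"

internal_docs = [  # hypothetical proprietary knowledge
    "Refund requests above $500 require director approval.",
    "The quarterly close process starts on the 25th.",
    "New vendors must pass a security review first.",
]
prompt = build_prompt("Who approves a 700 dollar refund request?", internal_docs)
```

Note what this does and does not solve: the model now sees documented internal knowledge, but anything that was never written down, the tacit wisdom he turns to next, is still invisible to retrieval.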
When this machine vibrates, I kick it and it starts working. I've never documented that. Or, you know how the remote starts working when you just tap it? One of those things: when this machine starts making noise, unplug it and plug it back in. These are extreme examples, but there is a lot of organizational wisdom like this. It is that 30- or 40-year factory plant worker; it is your receptionist who has been there for 20 years. They have all of this in their heads. How would AI know it? It's not documented. There is no MCP server for scooping out that knowledge; we don't have Neuralink yet. How do you scoop that knowledge out of an experienced worker? Call it the wisdom part: AI is very intelligent, and the knowledge is there, but the wisdom is not. Where is that wisdom? You will have to get it documented. And how do you get it documented? Because the moment you start talking about it, people will feel unsafe. So a key aspect of your organizational AI transformation design has to be reframing knowledge capture as legacy building and improving things, not as a threat and not as a cost-cutting exercise. The moment you frame it as cost cutting, you will start seeing defensive behavior and a lack of cooperation. I'm talking about witnessing this in organizations where we helped build perfectly fine, very good AI assistants, but adoption was hard for one reason or another, and this was one of the reasons observed.
Then there is another aspect, and there is a QR code you can scan for a link to an article that addresses this: governance. Going back to the OpenClaw example: if you give autonomy to systems that are non-deterministic, they are very powerful; they can reason on their own. The fact that you are here means you are interested, so you must have seen some of the news about a Replit agent or a Lovable agent going and deleting a production database. We have seen some of those. By design these tools are meant to be autonomous, which makes them very powerful. What kind of governance framework do you have on top of them? What roles and permissions should these agents or systems have in order to take actions? And when you build these systems, they are fairly complex. When I use an assistant, it is not necessarily a single large language model call; there is a lot that happens, maybe 10, 20, or 30 LLM calls. Think about a task given to apparently a single agent while sub-agents are running underneath: how much can a human in the loop even be an option here? Can a human in the loop even exist in this case? So the question is: how do you assign permissions and roles to these systems, and how do you control access? This is going to be a challenge. Invest in an AI governance framework early on, and understand your company's non-negotiable human approval gates in agentic workflows: these things must be approved by a human, period. For instance, your triage nurse can never write a prescription, and your AI triage nurse can never write a prescription either, because there are regulatory concerns.
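A non-negotiable human approval gate amounts to a check that runs before any agent action executes. Here is a minimal sketch; the action names and the gated set are hypothetical examples, not a real framework's API, and a production version would persist and audit every approval.

```python
# Minimal approval-gate check for agent actions. Action names and the
# gated set are made-up illustrations.

NON_NEGOTIABLE_GATES = {"write_prescription", "delete_database", "issue_refund"}

def execute(action: str, human_approved: bool = False) -> str:
    """Run an agent action, blocking gated actions without human sign-off."""
    if action in NON_NEGOTIABLE_GATES and not human_approved:
        return f"BLOCKED: '{action}' requires human approval"
    return f"EXECUTED: {action}"

# A triage note can be drafted autonomously; a prescription cannot.
draft = execute("draft_triage_note")
rx_attempt = execute("write_prescription")
rx_signed = execute("write_prescription", human_approved=True)
```

The design point is that the gate lives outside the agent: the model never gets to decide whether its own action is gated.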
And it is you, the business owner, who will be in a better position to decide which of these human approval gates are non-negotiable. Organizations must treat agents like a managed workforce. Treat them like your employees: give them permissions, onboard them, do a background check, monitor performance, make sure they are compliant, grant them permissions for certain actions and restrict them from others. Don't give them permission to elevate their own role or their own access, things like that. Treat them like a managed human workforce, and that will make a lot of things easier. Then, if you look at a typical enterprise architecture, it is actually quite complex. What connections and interconnections must be in place before you can layer AI on top of your architecture? And your enterprise is changing constantly: how do you make sure platform evolution does not outpace you, that things don't change to the point where your initial assumptions are void? Then, once again the same idea as tribal knowledge: the efficiency trap. Frame AI adoption not as a headcount reduction exercise but as a value creation and competitive advantage tool; it is not necessarily about headcount elimination. So how can your organization actually scale AI pilots successfully? I will come back to this slide; I want to be mindful of the time we have, and I promise I will return to it if we can. But let's quickly go through a few things. The first and foremost thing is that you start business-first.
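Going back to the "agents as a managed workforce" idea for a moment: it can be sketched as a small role registry in which every agent's allowed actions are explicit and no role is ever permitted to change roles, so an agent cannot elevate its own access. All names here are hypothetical illustrations, not a real product's schema.

```python
# Sketch of role-based access control for agents. Roles, agents, and
# action names are hypothetical examples.

ROLE_PERMISSIONS = {
    "sales_copilot": {"read_crm", "draft_email"},
    "legal_reviewer": {"read_contracts", "flag_clause"},
}

class Agent:
    def __init__(self, name: str, role: str):
        self.name, self.role = name, role

    def can(self, action: str) -> bool:
        return action in ROLE_PERMISSIONS.get(self.role, set())

    def request(self, action: str) -> str:
        # No role includes "change_role", so an agent can never grant
        # itself a new role or wider access.
        if action == "change_role" or not self.can(action):
            return f"DENIED: {self.name} ({self.role}) -> {action}"
        return f"ALLOWED: {self.name} -> {action}"

bot = Agent("quote-bot", "sales_copilot")
```

As with the approval gate, the enforcement sits outside the model: permissions are data the agent cannot rewrite, which is exactly how you would treat a human employee's access badge.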
Some of these things we just talked about: take a more business-centric approach. Do not assume this is merely a technology problem; it is more than that. Identify your key KPIs, the needle you need to move, where the opportunities are, and what the bottlenecks are to realizing the potential of AI transformation. We just finished that part, but there is another aspect: none of what we talked about is going to make sense unless your entire organization is upskilled. And when I say entire organization, I mean entire organization, every single person. This is an underappreciated area, because most of the focus, even in analytics, even when we talk about machine learning and predictive modeling (and I have been teaching for quite some time now), goes to builders. The significant majority of people who attend these trainings are builders who will actually go and implement things. But there is no emphasis on the rest of the ecosystem: the enablers of these products. The project managers, the program managers, the product managers, the people who will consume these products, the person who will manage your data, the person who is going to use or deploy this in the field. Usually companies focus only on the AI-capable tier.
What I'm saying here is that we also have to focus on the AI-aware tier, which means some very basic literacy in AI and knowledge work: at least a basic understanding of how to write effective prompts, and an understanding that if you phrase the question poorly, it is going to cost you more. If you just dump in a lot of context, well, Anthropic will be happy, because they will send you a fat bill. The reality is that every single person in your organization needs to be a good power user of these tools: able to use them, to understand some basic concepts, and to know generally (not at the architecture level) how these tools work, so everyone can get the most out of them. Everyone should also be aware of the possibilities and challenges. If I am a front-desk associate working at the reception, how can I use AI to make my job more efficient? If I'm a nurse, how can I do it? If I'm a marketing manager, how do I do it? If I'm a developer, that one is the more obvious case. Any knowledge worker should have at least a very basic understanding. Then there is the AI-ready tier: the portion of the workforce that can connect the opportunities in their own industry and domain to AI. If I saw someone demo an automated proposal writer, what does it mean in my case? If I see a legal document reviewer in the legal industry, how can I use that idea in retail? Being able to connect those opportunities and engage with people in the AI-capable tier to plan widespread AI initiatives, that is the AI-ready tier.
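The point about poorly phrased questions and oversized context costing real money can be shown with back-of-the-envelope arithmetic. Both the per-token price and the roughly-4-characters-per-token heuristic below are placeholder assumptions for illustration, not any vendor's actual pricing or tokenizer.

```python
# Rough context-cost arithmetic. The price and the ~4 chars/token
# heuristic are placeholder assumptions, not real vendor figures.

PRICE_PER_1K_INPUT_TOKENS = 0.003  # hypothetical USD rate

def rough_tokens(text: str) -> int:
    """Crude heuristic: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def prompt_cost(prompt: str, pasted_context: str = "") -> float:
    """Estimated input cost in USD for one request."""
    tokens = rough_tokens(prompt + pasted_context)
    return tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS

# A focused question vs. dumping a 400,000-character document as context.
lean = prompt_cost("Summarize the termination clause in section 9.")
bloated = prompt_cost("Summarize.", pasted_context="x" * 400_000)
```

Multiplied across every employee making dozens of calls a day, the gap between the focused prompt and the document dump is exactly the literacy gap the AI-aware tier is meant to close.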
So if you look at this, most people are in the bottom tier. I see some questions coming up; I will pause after this slide to take them, so please type them in. As I said, every single member of your organization must be AI-literate. For the builders I don't need to make a case: they want to be very, very literate. But the ecosystem, the people consuming, enabling, and guiding data and AI products (there is a typo on the slide), have to be AI-aware, and ideally AI-ready. Leadership must be too; there is no other way. I actually know someone very high up, a CXO of a billion-plus-dollar company. Last week, at an on-site meeting, the guy had started building using Claude Code. Never coded in his entire life, ever. And he was so excited about being able to build a proof of concept for an application, and was giving the AI team a tough time: "I started building this; is it even possible?" And yes: going from ideation to an actual proof of concept is now quite easy, and a lot is possible that was not possible until a few years ago. So let me take some of the questions. Jane, your question: what roles in an organization fit into the AI-ready stage? It depends on the organization. What I can tell you is that AI-ready is at least your product managers, your project managers, your program managers: people who are involved in building products, not the hands-on part, but the ones guiding the products. They must be AI-ready.
And I actually think an AI product manager is going to be one of the most sought-after roles. This is based on me hiring a few people here and there, looking at customers and talking about what they need, and also on our own products that we are building. Anyone who can think with more customer empathy, be more product-focused, envision, and who has an understanding of the business and the domain: if they are also AI-ready, that is an amazing combination. But anyone right now who is helping the builders in a supporting role, think product, program, or project manager-type people, has to be AI-ready; there is no other way, in my opinion. Of course, depending on your industry and company you may be able to survive without it, but if you are very obsessed, or selfish, about your career, you had better be AI-ready. Let me take more questions, from the top; there are a lot. "What is one practical change that most improves pilot-to-production conversion?" Trinat, you got me. I'm not sure there is one practical change that most improves pilot-to-production conversion. That's a tough one. Let me rephrase the question, and please type in if you think I'm rephrasing it correctly: what is the biggest reason pilot-to-production conversion fails?
Let me flip it, because if we fix that one thing, the conversion will succeed: governance, I would say. A lot of the time the issues are around accidental exposure of data, accidental or unauthorized access, accidentally exposing PII, and all of that. If you said, "Raja, you can fix only one thing," I would fix governance, assuming the pilot is working well from a technical standpoint. If it is working well technically, governance is the first thing I would fix. Thanks for forcing me to think and come up with a concrete answer. Let me see; this is scrolling very quickly. Okay, Jane asks: do you think adopting agentic AI at an enterprise level will cause human resource redundancy? Jane, I think so, and maybe I can answer at a very personal level. Just this morning I was talking to a family member about my nephew, a first-year computer science student, a freshman going into sophomore year after the summer, and one piece of advice was exactly this: it is going to happen. I'm not painting a doom-and-gloom scenario, but what is going to happen is that people who are more generalist are going to thrive and survive. If I say, "I write beautiful code, I only write beautiful code, and I don't know anything else," well, there are a lot of people who can do that; anyone smart can write code now. But not everyone who is smart and can write code also has customer empathy. Not everyone can think in terms of what makes business sense, and not everyone can simplify things. And to some awesome, amazing content marketer saying, "I write beautiful blogs": sorry, that boat has sailed.
Claude and ChatGPT and these models can write beautiful blogs, but not everyone can think strategically in terms of marketing. Who can execute an end-to-end marketing strategy? Who can use Claude to cut their blog-writing time by 80% while still being a good writer, able to massage the output and understand what it takes to write a better blog, and who also has a better content marketing plan and can step into other opportunities in marketing? A lot of roles are going to change: companies will need people who are multi-dimensional, who can do multiple things at the same time. The very specialized roles, especially in knowledge work, are going to become quite difficult to retain. Okay, maybe I will take one more question and then come back. "Can agentic AI ever bridge the gap between external data and internal human knowledge?" I think so. That is a good question you are asking. Back in the day, when I started working in online services right after my PhD, I was fascinated by how granular the logging was becoming. At that time, maybe we logged all the activity that happened on the service, on the search engine; then you also bring in logs from your app and join them; then, think of Google: you are gathering data from search, but also from Chrome, from Google Maps, from your home automation with Google Home. You keep bringing in more and more data sources. It is only going to be a matter of time.
The point I'm trying to make is that when there is economic value in bringing in more data, organizations will start finding a way to extract that internal human knowledge too. Maybe every single meeting is going to be transcribed; every single interaction, which is already happening in many places, is going to be recorded. Maybe companies will come up with interview processes: "How do you do this? Every time you do it, just talk it out loud." I don't know exactly what that will look like, but it is going to happen, because there is tremendous economic opportunity in scooping out that internal knowledge, and this is how we have evolved historically. Okay, I will come back to the questions; let me just finish this up. The third thing, and I think there was a question around this, is to develop a more governance-centric view of your AI tools. What that means is this: a lot of the time, when you use SaaS or a third-party tool, your industry or your regulations may not allow it. You may be in a country where your cloud has to be sovereign, or you may be in healthcare, where there are certain restrictions. Have proper auth and access controls, both for people using these agents and for agents accessing the actions and the sources of context. Have proper data governance and an accounting of where all the data sits, who has access, and who does not. Then observability: think back to the previous wave, let me call it the predictive modeling and machine learning revolution.
A lot of industries would say they still wanted to use regression models because they are more explainable, and any model that was not explainable, things like random forest, AdaBoost, or XGBoost, was not acceptable, because we could not explain why certain things happen a certain way. So having an observability layer on top of your infrastructure is going to be important. Then you have guardrails, which are incredibly important, because there can be real consequences if your chatbot or assistant misfires. Air Canada was recently sued by a passenger because its chatbot gave some information that was wrong; the matter went to court, and the court ruled in favor of the customer, not Air Canada. Then evaluation: your models rely on sources of context, and your data sources, your organizational data, are constantly shifting. How do you make sure your assistants keep performing correctly? So there is a lot here. If you take this organizational design for your agentic AI organization, let's say you have a human org and then an organization of AI agents, you are going to end up implementing similar concepts and ideas. Okay, let me see what we have further here. So, how can we help, as an organization: myself, Data Science Dojo, Aento? We ran the first boot camp in the world on agentic AI and large language models, and others are popping up. We have been teaching, but the difference is that we are not purely a training company: we are trainers who are also builders. I'm one of the trainers myself.
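The guardrail point can be sketched concretely: before an assistant's reply reaches a customer, pass it through a check that blocks offers not backed by approved policy (the Air Canada incident is essentially a missing check of this kind). This is a minimal illustrative sketch, not the speaker's actual product; the function names, keyword lists, and policy strings are all hypothetical.

```python
# Minimal guardrail sketch: block assistant replies that promise
# offers or refunds absent from an approved policy list.
# All names here are hypothetical, for illustration only.

APPROVED_OFFERS = {"24h free cancellation", "seat selection waiver"}
RISKY_KEYWORDS = ("refund", "discount", "bereavement fare", "voucher")

def guard_reply(reply: str) -> str:
    """Return the reply if safe, else defer to a human agent."""
    lowered = reply.lower()
    makes_offer = any(k in lowered for k in RISKY_KEYWORDS)
    backed_by_policy = any(o in lowered for o in APPROVED_OFFERS)
    if makes_offer and not backed_by_policy:
        return "Let me connect you with an agent to confirm that offer."
    return reply
```

A production guardrail would use a classifier or an LLM-as-judge rather than keyword matching, but the shape is the same: every outbound message passes through a policy gate before the customer sees it.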
A lot of the things you hear from me I learned the hard way; these are the wounds, scars, and scabs I was mentioning. So if you're looking for a training program, we have training programs. I personally have trained more than 8,000 people in face-to-face settings, and the company has trained a lot more than I have. Any company that matters on the planet: someone from that company has been trained by us. We can also do organizational upskilling and enterprise trainings at all levels; we have trained leadership, we have trained the broader ecosystem, and of course we have trained a lot of builders from scratch. We also have an AI product which mirrors the philosophy I was mentioning: a product that treats your agents and agentic workflows as a human team, with all the compliance, governance, and other controls. And we not only provide the product and services, we also provide advisory on how to implement things in a manner that makes your AI products successful. The product, very quickly: it's a licensed product that comes in and sits in your cloud. You can use any open-source or closed-source LLM. You can set up role-based access controls, so if your agents decide to go rogue, they don't have access to the wrong things. Then guardrails, observability, evaluation, and AI red teaming are built in, and you can connect to any data: all the discussion you were listening to about MCP, actions and tools, security, observability, everything you heard in the last couple of days.
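The role-based access control idea for agents can be sketched in a few lines: each agent carries a role, each tool requires an explicit grant, and any call outside the grant is refused, so a rogue agent simply cannot reach the wrong things. The roles and tool names below are invented for illustration; a real deployment would back this with proper auth infrastructure.

```python
# Hypothetical RBAC sketch for agent tool access: an agent may only
# invoke tools that its role explicitly grants.

ROLE_GRANTS = {
    "support-agent": {"search_kb", "draft_reply"},
    "finance-agent": {"search_kb", "issue_invoice"},
}

def invoke_tool(role: str, tool: str) -> str:
    """Run a tool on behalf of a role, refusing ungranted calls."""
    granted = ROLE_GRANTS.get(role, set())
    if tool not in granted:
        raise PermissionError(f"{role} may not call {tool}")
    return f"{tool} executed for {role}"
```

Denying by default (an unknown role gets the empty grant set) is the key design choice: a misconfigured or rogue agent fails closed instead of open.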
So the agenda was based on our experience as builders and our experience deploying these for our customers, as opposed to topics that merely seem cool. And this is a complete platform: once deployed, you can build on it. Our slogan is "it is the new HR", and what that means is that there is centralized management of AI co-workers across the entire life cycle: from, call it, hiring (use case identification and deployment) to agent learning and development, agent ops, observability and monitoring, feedback, cost monitoring, organizational alignment, and compliance. All of that is controlled centrally. So this is the link to contact us: if you think we can be of any help for your training, upskilling, product, or advisory needs, use this form, just send us a note, and we'll reach out to you. I know I am right on top of our time. Let me just take two more questions and then I will hand it back to Rebecca. Yes, even if you are non-technical, we have trainings. I'm actually pretty confident that as long as you are committed, code is not even a problem anymore. So even if you're non-technical you should be able to do it, and we have trainings for non-technical knowledge workers as well. Depending on what you're looking for, it should not be a problem; we have trainings for everyone. Even with our technical trainings, the way we have structured them, you should be able to keep up. Okay: what is the best way to manage accountability for the quality of results and the reliability of the pipeline, given that there are a lot of under-the-hood dependencies?
Take a more observability-, evaluation-, and governance-first approach. Everything you're talking about: you have to have all those controls in place before you even get started. Okay, and I think I will hand it back to Rebecca. >> Thank you so much, Raja, for that really useful and really important discussion. I really liked the point about the wisdom sitting exclusively inside a human's mind that is never explicitly injected into a knowledge base for an AI agent; organizational wisdom often goes undocumented, so it's an important piece of missing data. And who knows, maybe in the future it will become more of a requirement to produce some documentation of the deeper thought process that goes into a task. >> Maybe a direct link, right? Just a neural link. So, yeah. >> Nothing's far-fetched. So we have about seven minutes until our next session. We'll take a short seven-minute break and then we'll be back soon for the next session. Okay. >> Thank you, Rebecca.