Insights View Recording: The State of AI 2026: Purpose Built AI & ML

View Recording: The State of AI 2026: Purpose Built AI & ML

The State of AI 2026: Purpose Built AI & ML

AI is now a core IT responsibility—but not all approaches are created equal. In this session, we’ll break down the State of AI in 2026 and what it means for IT leaders responsible for scalability, security, and long-term value. Learn why purpose-built AI and machine learning are replacing one-size-fits-all solutions, where organizations are seeing real impact today, and what decisions matter most when designing AI that can be governed, integrated, and sustained. Attendees will leave with a clearer framework for evaluating AI initiatives and prioritizing what to build next.



AI adoption is accelerating—but are you climbing the right mountain? In this webinar, Brian Haydin, Solution Architect at Concurrency, explores how organizations can transition from generic AI tools to purpose-built solutions that deliver measurable ROI. From governance to local edge deployments, this session equips you with actionable strategies for 2026.

WHAT YOU’LL LEARN

In this webinar, you’ll learn:

  • Why commodity AI isn’t enough for competitive advantage
  • Four pillars of purpose-built AI: Models, Context, Actions, Trust
  • Predictions for 2026: Local edge momentum, multimodal mainstreaming, and shadow AI governance
  • Practical steps for CIOs, CISOs, and business leaders to scale AI securely

FREQUENTLY ASKED QUESTIONS

What’s the difference between commodity AI and purpose-built AI?

Commodity AI uses off-the-shelf tools like ChatGPT or Copilot to improve baseline productivity. Purpose-built AI is tailored to your organization’s workflows, data, and goals—delivering competitive advantage through customization and governance.

Why should my organization invest in purpose-built AI?

Generic AI tools level the playing field but don’t differentiate your brand. Purpose-built AI enables unique customer experiences, operational efficiencies, and ROI-driven outcomes that competitors can’t easily replicate.

How do I start implementing purpose-built AI?

Begin with an AI readiness workshop, conduct a data audit, and prioritize quick-win pilots that deliver measurable ROI within 6–12 months.

What governance measures are critical for AI adoption?

A seven-layer governance framework is recommended:

  • Identity & least privilege access
  • Data boundaries and classification
  • Prompt injection defense
  • Human-in-the-loop approval gates
  • Observability and traceability
  • Continuous evals and red teaming
  • Incident response playbooks

How do I prevent “shadow AI” in my organization?

Establish clear governance policies and visibility tools like Microsoft Purview. Enable innovation while maintaining oversight through telemetry, evals, and operating model blueprints.

  • Local edge AI deployments for data control
  • Multimodal AI (text, image, audio) going mainstream
  • Hardware cost decline enabling on-prem AI
  • Rise of agent-based workflows with action capabilities
  • Increased focus on authenticity to avoid “AI slop”

ABOUT THE SPEAKER

Brian Haydin is a Solution Architect at Concurrency, specializing in AI and app development. With hands-on experience in deploying enterprise AI solutions, Brian helps organizations navigate the fast-changing AI landscape.

TRANSCRIPT

Transcription Collapsed

Brian Haydin 0:06 All right. Well, hello everybody. Happy New Year. I guess it’s not still too late for me to say that. I just got some emails from people trying to sell stuff to me too, and they’re still saying Happy New Year. So I don’t want to like Christmas trees are down. Everybody’s ready to to get past this, but this is my first. First webinar of the year and I’m super excited to talk about it. It’s not gonna be just another like what are we expecting or what happened, you know, 2025 to 2026, but you know, let’s get into it. We’re gonna talk about like The State of AI and how to really focus on. Building purpose built use cases for the organization, but a little bit about me as we get started. So my name is Brian Haydin. I’m a solution architect at Concurrency and I do a lot of talks. Both in person in these webinars, talking about AI, talking about app development, that’s a lot of my background. If you want to connect with me after the session or if you want to get a link, talk about any of the content that we’re going through here, there’s a QR code. Reach out to me on LinkedIn. That’s a really great way to connect with me. I’m not super. Active on other social medias. But you know my day job, I spend the days helping organizations like most of your organizations figure out like how do I actually use AI and what am I supposed to do with it and how do I make it work for me and and help us accomplish our business goals. So doing this in the real world, not just, you know, demos and and whatnot. And I like to bring some of my learnings and apply that to like my family experiences, my outdoor experiences and use things, use analogies that make this these kind of topics a little bit more relatable. So probably going to hear a little. A little bit about some fishing stuff. You’re probably gonna hear some stuff about some hiking trails and whatnot, but nonetheless, fundamental fact about me, I am and I got a twin. So if you come across somebody that looks like me and doesn’t know anything about AI, that’s probably him. He’s a firefighter. So all right. Right. I had this presentation written and then something happened. So three days ago Claude announced cowork and I think they they’re kind of framing it as cloud code for the rest of your work. And it’s not just like cloud code building apps and building agents. It’s like literally a co-working agent that is running locally on your computer. And this hits like, you know, this really hits home with one of the central themes that I was gonna talk about, which is doing things for real on my workstation. As an individual, this coworker now reads documents. It edits your files, it organizes your work. It has access to a lot of things on your computer that you give access to, but it’s only helpful. Until it becomes an unsupervised kind of agent, and that introduces quite a few risks. Right now this thing is in a researcher preview, so it’s not being used, you know, by a lot of people. I don’t have access to it, but you know, it is going to be a fantastic tool. I think you’re gonna see a lot of people start to adopt it, you know? As they roll this out to the masses, and I’m excited to see what it does. But the most important thing to think about is that it isn’t this magic tool that’s gonna suddenly make our lives easier. It will do some of that. It’s going to introduce a lot of complexities and things that we need to start thinking about. Around governance and security. So we’re not just hiking in the woods at woods anymore. We’re heading up the mountain. We we need some climbing ropes, you know, because it’s not, you know, this is getting into some of the the dangerous, you know, we can slip and and make some really big mistakes. But so as we start to think about like, you know what we’re going to be doing in 20/20/16, I want to take a step back and just frame what each of you might be thinking of. You know, feel free to throw some questions in the chat or whatever. But nonetheless, maybe your titles or something like that in the chat, are you a C? You know, are you working the security side? Are you a CIO or a VP? Are you working the infrastructure? Are you looking at it just as a consumer? I want to be working this in marketing comps. So as a as a CISO, you know what you’re probably really thinking about this year is permissions and auditability. And what agents are accessing inside of your system? Is it just some SharePoint files which can be problematic? Or is it something a little bit more security focused? So how do you control all that? If you’re a CIO or you’re a VP, you’re probably looking at what’s the operating model? How am I making a decision about build versus buy? I mean, all these A I capabilities are rolling out faster than we can keep up with. And how do I control that and build out a scalable architecture that is going to be resilient to the fast pace of the technology growth? If you’re on the infrastructure, the automation side, you’re probably thinking about like how do I integrate this and make sure that this is working cohesively? With you know my data center, what we’ve laid out, what’s the cost implications of all these tokens that are coming through? What about latency? You know when I have to go from my on premise to you know some AI services that are in the cloud and if you’re on the marketing the communication like. I want to make sure that I still have a brand voice and that it’s authentic and it’s not coming out as like a I slop, right. So everybody that’s on this call has some skin in the game and I’m hoping that that we’re going to talk a little bit about each and every one of your your. Your use cases. Let’s think about 2025, and I’m not going to like build out a thesis. There’s plenty of people talking about, you know, The State of A I in in 2025 and 2026, but let’s just rewind back to January of 2025. Most organizations that I was working with were kind of like at the base camp they were using. Commodity tools, chat kind of interfaces. They might not even have been building their own agents, but just using like ChatGPT on the web or Copilot web. And you know, if I think about where the technology state was, we were just getting into the GPT 5 kind of signals and those deep reasoning models. Were starting to gain some of the traction like we really was. I mean it really was a remarkable year in terms of the pace that we went in. But then as we started to move on like I was at build in second quarter. You know, back in May and local AI was something that, you know, we got to experiment with. And what that is, is basically taking these small language models and deploying it to my computer and being able to run, you know, reasoning models without having to, you know, use or instantiate something on the the web. We’re a big, big model. So it changed the way that we were thinking about how we were, you know, how we’re going to use these, these language models. And then right after that, OK, hey, well, what about all the security concerns? We got the the EU’s, you know, AI Act about security and privacy. And then Manus came out with their tools. That was really phenomenal. I remember seeing like, you know, here I can spin up this thing and it’s going to make an application for me, you know, really taking a I to the next level. And then, you know, now we, you know, right around now Claude. Is, you know, coming out with cowork and agents are really pervasive, you know, within the organizations. And I I knew we would be focusing on agents this year, but I wasn’t quite, you know, prepared for the the pace of how companies were going to be ready to adopt it using tools like Copilot Studio and other agent frameworks. So a huge amount of distance that we traveled this year and my prediction is that we’re going to travel quite a bit more, you know, in 2026. But I mentioned like where companies were at the beginning of the year and where they landed at the end of the year. And I’ve got a little bit of a maturity curve and I I broke this into like how I saw companies leveraging a I last year. So people that were in the first camp, you know, commodity only kind of camp one weren’t really willing to experiment with custom A I technologies within the organization. But they wanted to enable the organization to use it. So they were using off the shelf kind of tools like ChatGPT or Claude. Those are kind of two of the big ones. copilot was being rolled out, not the copilot like the M365 copilot, although that was, you know, I guess you kind of lump that into commodity too. But like at least the enterprise copilot, copilot for the web, if you want to think of it that way. Camp two was kind of was the organizations that were ready to let or let the organization experiment. And so they were enablers. They were asking their Employees to go out and build agents using things like copilot Studio. Or really invest time into thinking about how they could leverage AI to make their workload a little bit more free flowing and use the AI to kind of augment them as a teammate. And then camp three, these are the companies that I worked with that were looking at it as a strategic bet. How am I going to leverage AI, you know, within the organization to solve real problems, real impact, to really increase sales or really decrease our production costs or really optimize our delivery schedule? And in those cases, we’re really talking about purpose-built projects, not just enablement projects like how do I use Copilot Studio? I want to be, you know, clear if you’re in camp one, I do not advise you to go to camp three and like go directly there. You know, altitude sickness is kind of a real thing. You know, I went, I I me and my family over the summer, we drove up, I think 12,000 feet in about 3 hours and my son got, you know, altitude sickness and was throwing up. So it’s a real thing. You climb too fast up this mountain on the maturity curb and you know you’re you’re gonna, you’re gonna see some bumpy, you know, bumpy results and people are gonna get disoriented. So try to take a methodical approach, you know, avoid some of the hallucinations, you know that that a I can, you know, can provide. Don’t go too fast past the security things that you need. And you know, if you shoot for the moon and you haven’t really had any experience with it, you’re going to wind up building a lot of pilots that never really ship, you know, or the investments don’t really pay off. So try to think about this strategically as you as you plan out for 20/20/16. Just try to keep an eye on the the chat here. Like I said, feel free to throw some stuff in the chat. Aaron’s asking how do I continually improve the tool sets for our staff? Well, I’m going to talk about what’s coming. And so Aaron, I think one of the ways that you can continually improve the tool sets to your staff is to stay on top of you know what are the latest trends or what are the things that are that we think are going to be coming out and enable your organization to do that in a secure and govern secure and governed way. So going back, so that let’s talk about like the commodity versus the purpose-built. This is really kind of like the the central thesis of today’s talk. So commodity AI gets you started and purpose-built is really what’s going to get you the outcomes that you need. Using the same generic AI that everybody else is using is going to improve your baseline productivity. I use a ton of commodity AI tools on an hourly, minute-by-minute basis, you know, throughout the day, and it might even be necessary for you to do something like that, you know, just first table stakes. To stay in the game, but it’s not going to give you a competitive advantage. So think of it this way, like if every bank is using the same off the shelf chat chat bot, then are they really giving cut like the kind of answers that differentiate or distinguish them from their competitors? Everybody’s service is getting faster, but nobody is really, you know, you know, having a value proposition that’s unique to their customers. So you need to differentiate yourself and apply AI in a way that others can’t easily copy. I I think one good way to think of this is copilot. The Microsoft copilot engine is basically driven by ChatGPT 5.2. I have ChatGPT 5.2, you know, running in Open AI as well. They’re different, they’re differentiated. So the system prompts that sit in copilot are different than the ones that sit in ChatGPT or open AI’s modules. And so that’s what I mean by being purpose built and like differentiating is that you can’t just. Use you know the exact same tool and and differentiate yourself and have a more memorable experience. And so when when we talk about purpose depth, you know, purpose-built, what am I really talking about? So we need to start thinking about how we’re going to deliver these projects and I have a little bit of a definition that that I think will resonate with with this group. First is let’s pick right sized models. Are we matching the tool to the kind of problem or complexity that we really need? Do I need a large? Language model like one of the big frontier models in order to answer these questions. Am I asking very detailed, isolated questions where I might be able to use a more fine-tuned model that would give me an advantage in speed or allow me to run it locally? On a device or am I doing like classic machine learning kind of predictive optimization problems? I can use some of the out-of-the-box ones that might do inventory forecasting. But I’m probably not going to get great results because it’s not really fine-tuned to what I need. Am I using the right context? Like what data am I giving the access, AI access to? How is it retrieving the data from there? What are the boundaries that I’m setting up on memory? How much it can remember? You know, this is really about like what the A I knows about the question that’s being asked out of it or the action that it’s being, you know, asked to perform. And then on the third Third Point here is what can a I do like we heard about a MCP and A to A last year. The model context protocols and being able to give AI access to tools to perform actions for you. But that brings in, you know, you know, quite a bit. What am I going to give it access to? Is it my ERP system, my MES system, my electronic healthcare records? Like what do I want to give AI? I access to and by giving it access to it, I’m giving it permission to do things on my behalf. Which leads us to the last point, which is pick a trust stack. Like how are you going to do identity management, policy management, evals, audit trails? How do I govern, you know, and make sure that we’re controlling it? So purpose-built AI isn’t just about picking the model like and doing the fine-tuning. It’s about all four of these different problems and how you’re going to stitch them all together. So when we think about AI, I just had a conversation with one of my customers yesterday. He’s like, I don’t even know which tools to be using and how am I going to think about it? So I wanted to like give you a little bit of a preview of a decision tree that I’ve started to been coming out. And when you ask the question like which is the right approach for this problem, the answer is probably gonna be a little bit of all of it. Or maybe the consultant is coming out at me when I say it depends. But classic ML is kind of when you’re trying to take. A structured problem that’s measurable. Things like forecasting and optimization are really good classic ML problems. Anomaly detection is something that was pretty sexy for a while. It’s not as sexy anymore. It’s really effective. It’s kind of been. Monetized to a certain degree. Classifications are becoming really easy to do, but those are classic machine learning problems rather than true AI structure problems, measured outcomes. We’re building a model, we’re training a model. We’re gonna get very. You know, structured results on the rag side. That’s like I need something, I need knowledge in order to or my customers are asking for knowledge about the product that I may be offering or how to use my software. So knowledge bases, document Q&A, customer support, those are the rag types of problems and those tools are becoming pretty easy to spin things up and work with. But you’re not fine-tuning anything, you’re just giving, you’re sort of directing this is the these are the types of questions that you’re allowed to ask and here’s the information that you. You can use to supplement your answers on the fine tuning side. This is when behavior or style really needs to be consistent and you need to have a lot of examples to be able to train this on. This was a technique that we used quite a bit where I have a list of. 10,000 questions and answers and I can fine tune that model knowing that I’m going to get the the consistency in the types of answers that I want. This is where you’re going to like really want a good brand voice or if you have very domain specific questions. Our specialized tasks, that’s when you’re going to want to introduce something like fine tuning. And then lastly like another use case that we might want to consider is the the small language models and this is just basically one of the models like ChatGPT running locally on a device or on Prem. And that allows you to sort of guarantee, like, I know that this isn’t being this data is not being used to retrain another model because it’s sitting right here on a server that’s not connected to the Internet. I control that traffic. I know that there’s no bad actors that have access to this because, you know, I have offline access. Some of the use cases where we’ve been looking at this have been in like remote areas. Maybe I’ve got some workers that need to go into a factory where the Internet connection is not going to be very good and they don’t really have a Wi-Fi coverage. Or maybe I’m at a trucking facility that’s in the middle of Texas and there’s no cell phone towers and I need to be able to do object detection, you know, on a on a frequent basis with low latency. So these are the kind of different use cases that you’re going to want to consider, think about, and this is like the little bit of a decision tree. That that I put together. We get into each one of these and we get more decisions that have to be worked out and you know, but at a high level, this is where where I think you can start. Dennis asks, will I have a recording in this presentation? Absolutely. We’re gonna share the recording. And we’ll also share the deck with anybody that’s, you know, that’s interested in this as well. So I’m gonna get some predictions. What’s coming in 2026? What are the things that you should be thinking about? And my, I kind of mentioned co-work before. I think that like agents are gonna move. From chat systems where you just sort of get questions and answers, so they’re gonna start to do actions. And so security is now about like agent permissions rather than just like protecting my data. So what do I mean by? By that I mean that I’m going to be connecting my agents to my ERP system, my EHRS, you know, etcetera, and I need to know what the agent’s going to be capable of doing. Some of the protocols like MCP, some of the implementations don’t really take the security aspect of it into consideration. And so we need to find ways to to control that and make sure that our chat, you know, our chat experiences, which is how the modality that people are going to be comfortable with, can perform these actions in a nice, safe and governed way. You know, Claude Co-work is a really good example of this. And I think you’re gonna see that type of work environment, really local agent kind of workloads be pervasive by the end of the year. Shadow IT, shadow AI kind of crisis. I am certain that there’s a bunch of you that are nodding right now and saying I was hoping that somebody would talk about this. I worked with a customer a couple of years ago about Power Platform governance and the shadow IT that would happen with that. I can tell you stories about the shadow IT with Access databases back in the. 90s and and early 2000s, we’re starting to see people really think about this in 2026. So shadow AI is all these different agents being built inside of the copilot, you know, system. And CSO’s are really kind of. At a loss of how I can control this, but still allow people and enable the organization to to leverage these tools. So we’ve been having a lot of conversations around governance and how do I manage shadow IT. And I think this is one of the biggest risks that AI is going to present in 2026 because. I don’t think enough companies are thinking through this. They’ve been told by the CEO, they’ve been told by the board, you need to start leveraging AI and you need to get there faster. You need to keep going faster and they’re not taking a step back and pausing on it, so. Coming up with a mitigation model is, you know, pretty critical this year and having visibility into these agents but not killing it, you know, the innovation that’s happening with it, it’s a really tough, you know, it’s it’s really tough to kind of balance those two things together. So organizations are going to have to figure out how to enable the, you know, innovation while still maintaining visibility that that’s going to allow them to win. And there’s some really great tools. Purview, you know, is is starting to really be useful in. You know the deployments of these these agents. Copilot Studio came out with evals that help us understand like you know if it’s doing the right things, the telemetry, you know and and traceability is starting to really come, you know, become elevated. So next prediction, I think that this local edge momentum is real to be honest with you. So data, you know, I work with a lot of a lot of organizations that don’t want to give their data regardless of what you know, the different vendors say. Different cloud platforms say I really just don’t want my data to be exposed in this way. It’s too loose, too imprecise for me to be able to manage it. And so local deployments of AI weren’t really possible in a meaningful way conversation I had. Here at the office just yesterday, as I was sort of talking about some of my ideas, is that beginning at 2025, the small language models that I started to experiment with were as capable as the large foundation models. That were coming out about a year, a year prior, and I think you’re kind of still in the same way. So like if I have the most recent, you know, OSS models and I’m using those to solve pretty, you know, complex thinking, you know, problems, I’m about where I was a year ago, but I’m doing it locally. So it’s really powerful tool sets. And it gives you local data control, local action services like Cowork, Clouds, Cowork being able to access local files, local apps, things that are running on my sandbox environment or sandbox VM inside of my machine. And then I can do things like with the models that are very specific to what’s running on you know for your organization local inferencing. Another kind of like thing that I that illustrates why I think this is going to happen this year is that the. The the physical hardware that is capable of doing this is starting to become cheaper and cheaper, and you’re seeing some of this deployed not only in the the workstations themselves. I have a Microsoft Copilot Plus PC that has MPUs built into it. But also like just the the really sophisticated GPUs are getting smaller and less pricey. We were at the Agentcon. One of the the conferences that I helped put on was Agentcon through the global AI community. And we had a couple of individuals do a talk on on some of the local GPUs. They brought them in and they were doing some experiments with them. So I think you’re going to see this local edge momentum. We’re going to start talking about how do we do this for organizations. Multimodal is gonna go mainstream, so I I think it’s already there to a large degree. When we talk about multimodal, we’re talking about text and images kind of, you know, going hand in hand. So AI that handles documents and being able to really like understand the charts as well as the. Text and how it’s organized, maybe even incorporating audio like transcripts. They’re not going to be separate tools now. There’s just sort of like everything. Like I ask a question about a meeting. It reviews the recording. It looks at people’s facial, you know, expressions, looks at things that were shared on the screen. All that’s going to be like handled by a single multimodal model and the capabilities are largely there. I think there’s going to be, there’s going to be some wow factors. And then I guess I kind of like I already talked about the hardware from a local standpoint, but even the hardware. That you know at the cloud level, like what the big players are using is becoming bigger, you know, bigger, better, cheaper, you know as well. So the cost curve is, you know has I’ve got a slide that shows some of the charts here in a little bit, but the cost curve is really going down in terms of token costs. And someone like Nvidia’s new Verichip, the Vera Rubin platform is about 5X faster than inferencing that was previously happening. So that means like you’re, you know, you’re going to be able to do automation at a like real-time, you know, type of basis. And the cost of these things is going down. So I think you’re going to see that continue to accelerate. There’s some talk about supply chain and we’re going to be able to build these chips faster. I think that’s going to be a problem, but we’re going to find more use cases. In addition to that, the cost of some of these like GPUs and systems are coming to coming down to a point where it’s actually practical for an organization to buy some of these GPUs and run them in their data center. AI slop. I think that AI mean, there’s no question about it. AI slop is real and and it’s kind of a, you know, it’s a term that people are using pretty, you know, pretty frequently. But really what’s happening is it’s like your authenticity is getting lost and people’s, you know, actually. Unique ideas aren’t coming through. I mean, hey, I used AI to help me with this presentation. I’m not gonna lie to anybody about it, you know? And but I think that I was able to pull together a lot of good ideas, do a lot of research. Add my own flavor, add my own ideas to it, challenge it, and you need to do that in order to make sure that you aren’t putting your brand at risk so that your AI generated content is going to still reflect. Your brand or your voice. And if you don’t do that, you’re going to lose trust with your customers. So you know, just throwing something out there with some hallucinations, whether you’re doing like, you know, some sort of a legal document and and you have some. Characterized phrases in there. That stuff is really happening and and I want to make sure that people are not just taking the output of AI and just, you know, giving that out as as their own. You need to really think about what it is that you’re putting out with your name on it. OK, So what does this mean for people? People have been talking about this, you know, for the last year. I know, you know, we’ve talked to a lot of organizations about like. How do I become orchestrators versus just builders? What is this? What is this really going to mean for for the the work for workers that I have? First off, I would say like you need to build. AI skill plans into the organization and understand what it is that you need your workers to do and how you’re going to give them access to those tools to be able to do it. I’ve heard anything from just 15 minutes a day or 1/2 hour a day sort of allocated to, you know, learning a new tool or a new way of. Doing things, people are gonna have to do that and I think the organization needs to be supportive of that. And then how are you gonna help measure that kind of that kind of progress? So, you know, forecasts are going to be, you know, people are thinking about this, like what’s going to happen this year in a lot of different ways. Some forecasts or some predictions that I’ve read are anywhere from 40 to 50% of white collar jobs are going to be displaced. I really don’t think that’s going to happen. I just think that people are going to go from doing remedial kind of like mundane tasks to offloading a lot of that work to agents and doing more meaningful work and hopefully doing it better. Most of the organizations that I’ve worked with and roll out some of these tools haven’t. Really led to layoffs. They’ve just, you know, been able to not hire as their organization has been able to grow and you know, so they’re just able to get more, more done with less, with less people. That’s that’s one of the things that we typically look for, you know, with AI projects, like how can I, how can I get more work done? So if you think about like your typical developer using AI tools, what does that mean as they transition from 2025 to 2026? This one I feel pretty passionate about because I wrote software for so many years and I I don’t think that AI is a perfect replacement. I mean, I haven’t found use cases where in a AI agent writing code is going to completely displace me. But but what I have seen is that it’s taken some of my more junior developers that work here and really elevated them. And so your typical coder can now be at the level of an AI as as an architect by using some of these AI tools. And where I started this conversation was talking about Claude’s cowork. One thing that I didn’t really like highlight is how fast that became a reality. The articles that I read, Anthropic was able to build that in about a week and a half. So you think about it from ideation to. A really sophisticated tool being developed and deployed in like under two weeks. It’s absolutely incredible and I think it is possible, but it also took somebody that could ask the right questions, use AI in the right way in order to make that happen. Analysts, I think data crunchers are gonna become insight curators, so they’re gonna start thinking about the data. Rather than just working in a spreadsheet and doing Vlookups, QA is going to evolve quite amazingly and I feel really passionate about this. That QA testing is going to be a good niche skill set again. It sort of lost its way over the last, you know, few years I think. And but now people that are really going to know how to write good evals and how to test these AI tools, that’s going to be a good skill that is going to accelerate this year and then finally, you know. People that are in management positions, they’ve had a bunch of people working for them. Now it’s going to be how do I get these agents to work for me so that I can help to serve the people that report to me a little bit better. So I think this year you’re going to see a lot of orchestration being the new momentum and that’s how we’re going to use AI better this year. Going back to the security, so I know a lot of CSOs are here. So let’s talk a little bit about like you know, A7 layer governance framework because I think this is something that most organizations are skipping past as I mentioned before. So at the ground level, you know they’re probably, you’re probably thinking about identity and like least privilege. So how can I like scope the minimum required permissions in order to get something done and how do I manage that across the ecosystem of the agents that are being deployed? If I if the agent doesn’t need to read the files, don’t give it access to it. If it needs to read the files but it doesn’t need to delete them, make sure you’re putting those kind of controls in there. Data boundaries, like what data are you going to give it access to? What’s off limits? How are you going to classify that that type of data? And how do you take a response that may have privileged or sensitive information in it and maintain that kind of protections across it as well? So you need to define this explicitly, and I think this is where a lot of organizations kind of get tripped up is how do I do it? And how do I start to incorporate that into the processes? Third layer would be prompt injection defense. So this is real and it’s getting harder and harder to really predict. It’s not necessarily just about somebody using your chat experience. It’s getting sophisticated now where the content that’s sitting out on the Internet has prompt injections built into it. So when AI reads it, it re redirects AI and how it responds. And it’s a real attack vector vector and I think there’s going to be a lot of tools that are going to come out this year to to really control that. Human gates. So what actions are requiring approval by people before it even starts? Not everything needs to be or should be an autonomous action. A lot of the projects that we started last, almost all of the projects that we started last year started with a human in the loop to begin with. So plan for those. Human gates. And only when you have a level of comfort that you want to turn this into autonomous should you take that human gait out of it. Observability. So we need to make sure that we can trace or replay. What happened when something does go wrong and it’s going to happen like you’re going to get complaints. You’re going to want to understand how can I build something a little bit better so it doesn’t make these same mistakes over and over again and you need to be able to see how how it happened and so using the observability tools. That are out there is going to be important. Continuous evals. So red teaming, drift detection. Your AI systems should be tested as rigorously as any of your other software, and most people skip past the evals. They don’t do a lot of red teaming. And I know the tools, you know, have been out there for a year and a half, for two years, but we need to start, you know, planning on using those things as well. And then finally, how am I going to respond to it? So what do I have a playbook in place for when, like for when an AI system goes bad? Who’s gonna get called? How am I gonna remediate? What do I tell stakeholders? Stuff isn’t really theoretical anymore. It’s like this is what people are are thinking about and doing. For those of you from the infrastructure side, what are some of the things that you’re thinking about to the things that that I think you that’ll resonate here is just a brief word on like a I Fin OPS and like you know, sustainability. So 2 aspects here. One is and I’ve got a, I know it’s really, really hard to to read some of the graphs that are on there, but the the cost of tokens is going down and in addition to that very dramatically it’s going down like you know 50X, you know. Year over year and but the level of intelligence that you get per token is going up. So it’s getting smarter and you’re paying less. So whereas horror stories of $30,000 token bills, you know, were pretty commonplace. Like two or three years ago, you you really have to do something hard, you know, to to run up those kind of bills or you have to do something a lot, right? So not seeing that happen as much, but people are still paying attention to the token economics and how they can deliver smarter. Smarter, faster and cheaper solutions. And then from a sustainability standpoint, all these new data centers, there’s protests and projects getting canceled over and over again. So I think it is important to think about like the sustainability aspect of it. Do I need to use? A cloud AI. If I can run some of this stuff locally and offload it to my workers devices using a small language model, that’s going to take some of the pressures off these massive data centers. You’re going to get better performance and and you probably get better results because it’s. Because working closer to the data, so economics, sustainability and that leaves me at like what’s next? So how are you going to make, you know, something actionable for you in the organization? If you’re at camp one, you know, kind of like starting at that commodity only or you’re looking to get into that commodity only, start with like an AI readiness readiness workshop. Start to triage some of the use cases that the organization is asking for. I would recommend that you do a data assessment because most useful purpose-built AI is going to be grounded on some sort of data, whether it’s in a system or just knowledge that you know is intrinsic in in your organization. But doing an assessment, who has access? Can we access it? And then potentially like know where you stand and what’s possible. So how can I identify like the quick wins? What’s what sits in like that golden quadrant where it’s high and low effort? And that’s something that we can help out with if you’re already at camp two, if you’re looking to get at camp two in the enablement space, I think. I would say organizations that are there are asking me for a framework. How do I start to scale this for the organization? What’s my operating model blueprint? How am I incorporating governance into this framework? How am I doing trust? You know, and managing that and how do I prevent the same kind of shadow IT or shadow AI that’s happened, you know, in the past when we’ve had new, you know, new emerging technologies. So the way that you do that is kind of getting your house in order before you, you know, set this out for scale and we’ve got some really good. Tools that we can work with you on to help get some of that governance in place and help you develop an environment that is an innovation enabler but makes your security team and your infrastructure team feel confident. And then if you’re at camp three, so you’re looking for the purpose built pilots, the very strategic bets. We can help you pick a pilot that’s going to have a high impact. One of the things that I spend a lot of time on when I do purpose-built pilots is ask questions about the ROI. How can I deliver ROI in six to 12 months rather than 18 to 36? Months. Quite frankly, something I’ve said quite a lot is that the technology is going at such a pace that if you can’t find ROI opportunity in the next 12 months, you’re probably better off just waiting. Another six months and reevaluating it because commodity tools might get you there faster, cheaper than what you’d be spending doing something that can’t be implemented today. Typically we see things get into production, you know, 6812 weeks, not 6812 months. And so look for those kind of opportunities where you can provide value fast and build the use case that’s going to, excuse me, that’s going to really open up the organization for bigger, broader investment and harder use cases when you can start to save the money right out of the gate. And that’s pretty much what I got here. So you know I will take a look here at the chat and see if anybody’s had any other questions and but before you do bounce off the call, I would ask that you jump. Into this link that’s in the chat that Amy put out there. Give me a little bit of feedback. Fill out the form. If you want to get a copy of the deck, I can share some of my thoughts about the frameworks that we’re using, like BXT to do some of the evaluations. Um, but most importantly. Get out there, do something. 12 months from now, you’re going to look back at this moment. You’re going to be standing at the trailhead where you started out the year or you’re going to be up that mountain looking back and say I’m really glad that that we picked some projects. Other than that, thanks for joining us and looking forward to hearing from you.