View Recording: Building an AI Center of Excellence

Join us for an insightful workshop on “Building an AI Center of Excellence,” where we delve into the key strategies and practices essential for establishing a robust AI Center of Excellence (AI CoE). In this session, our experts will guide you through the process of creating an environment that not only fosters commodity AI but also facilitates best practices for large-scale AI projects.

Whether you are embarking on the journey of establishing an AI CoE or looking to optimize your existing framework, this webinar provides valuable insights to empower your organization in harnessing the full potential of artificial intelligence. Join us to stay ahead in the rapidly evolving landscape of AI and drive meaningful business transformation.

Transcript

Nathan Lasnoski 0:12 OK, welcome everyone. We are going to have an awesome hour talking about building an AI Center of Excellence. Nathan Lasnoski 0:19 We are going to talk about where you go from here. We have a jam-packed set of content and awesome presenters. Nathan Lasnoski 0:25 So let's do a few introductions. Nathan Lasnoski, I'm Concurrency's chief technology officer. Brandon Dey 0:52 Did Nathan freeze for anybody else, or is that just me? Chris Blackburn 0:55 OK, I was just wondering. Amy Cousland 0:55 I've been in meetings before where Nathan's frozen. Please hold on. Hopefully he'll come back. Brandon Dey 1:05 As CTO, he's responsible... let's say chief technical difficulty officer. Amy Cousland 1:14 There wasn't a big storm in Brookfield, but the power was out at our office, so we had to head home to make sure we had Internet. Brandon Dey 1:25 Well, maybe while he's coming back online, we can do intros of other people. Chris Blackburn 1:25 Yeah, yeah, go ahead. Brandon Dey 1:30 I'll go first. Brandon Dey 1:32 I'm Brandon, Concurrency's lead data scientist. I've been with the company since September. Brandon Dey 1:37 Nathan's back. We moved on to other intros, but we'll circle back to you, Nathan. Brandon Dey 1:46 Lead data scientist here, very excited to talk about what it takes to build an AI Center of Excellence. So with that, I'll pass it back to Nathan. Nathan Lasnoski 1:57 Awesome. And then Chris, you got yourself in there. Chris Blackburn 2:00 Yeah, I'll go ahead and do a quick introduction: Chris Blackburn, solutions architect on our modern workplace practice. As we talk about commodity AI, that is right at the heart of everything we do in Microsoft 365, and I'm happy to share some exciting things to think about in that commodity space today.
Chris Blackburn 2:19 So back to you, Nathan. Nathan Lasnoski 2:21 Awesome. Well, I'm glad I'm back. That was interesting, losing connection. Alright, so let's dig in. What we're going to do right now is frame the AI journey and give you a perspective on why you would create a center of excellence in the first place. Then we're going to break that down based upon the different lanes you'd be implementing within your organization, lanes that need to be considered independently but are part of the same seamless AI journey. So when you're thinking about AI within your organization, we want you to remember that you have to start with why as you're thinking about your business. Nathan Lasnoski 3:03 What we want you to consider is the mission of your business. The mission of your business exists for a reason; it's there because you're delivering some kind of good in the world. So anytime you're thinking about building a center of excellence, implementing Copilot, deploying an AI project, or building something with Copilot Studio, all of that needs to ultimately serve the mission of your business. We find that sometimes IT organizations enter right here and say, "I want to do an IT rollout." Resist that temptation. Nathan Lasnoski 3:35 Resist the temptation to think about this the way you've thought about projects that have come before. This is a sea change in your organization, similar to when the Internet hit us or when modern transportation hit us. This is a significant change that's not even akin to, say, rolling out Office 365 or a mobile app. This is a change that's going to come across the entire business and impact the way people do work within your organization.
Your executive team and your leaders need to be thinking about this in the context of the business's mission. Where we encourage companies to start is by backing up to that stage, preparing for how they bring AI to the organization in a way that impacts the business responsibly and enables every person in the organization to do more, by starting at that executive level. Nathan Lasnoski 4:31 That then enables you to move into envisioning with organizational support, and then move down lanes of mission-driven opportunities and commodity opportunities that create value in POCs, MVPs, and production, and then scale that across the organization by building governance patterns that enable AI across all the different lanes working in conjunction with each other. So we're going to talk about how a center of excellence enables these activities to be successful, so you can see real business impact and not just deploy a technology product. As you're getting started, I want you to think about the two big domains that AI is going to impact. The first is the commodity domain. This is any tool you're using that enables AI capabilities you didn't need to build. Copilot is a great example of this. Most of you are probably already Office 365 users. You can enable Copilot, and some of those capabilities are going to be ones you adopt with almost no training, and with some of them you're going to go, "oh, why doesn't this do what I expected it to do?" That's because you haven't yet learned the techniques and tools to take advantage of that capability of Copilot. Some of it is very nonintuitive and some of it is intuitive, and that's going to be the same for other AI capabilities that light up within the business systems you're using today. Nathan Lasnoski 5:54 Commodity AI isn't just going to happen on its own. It is something you're going to have to help individuals within your organization through.
That's especially because they're going to learn new human skills as part of that technique, especially the skill of delegating to an AI capability. On the right-hand side, you can see mission-driven opportunities, which are really about building a data science team, enabling you to create ideas that impact the revenue, production, or operations of the business in a very direct way, ideas that create ROI and drive down into creating value within your organization. Nathan Lasnoski 6:27 That's very much tied to the mission of your organization, especially as you think about how it goes to market, how it operates, and how it ships and builds its products or delivers its services. Mission-driven opportunities are where you're investing in something very specific to that mission. So those two fall into broad delivery programs within a center of excellence. The center of excellence exists to enable these capabilities within your organization, in some cases by staffing, but in the very best case by enabling progress within the organization in a way that's structured. In some very highly skilled organizations this means many, many people; in other organizations it might be a more nuanced approach to rolling this out. Nathan Lasnoski 7:12 Ultimately, you need to think about your AI program in the context of both of these lanes, and both lanes contain cycles that are centric to achieving success. On the right-hand side, you can see how human-centric AI becomes part of that picture. Human-centric AI simply means that as we take advantage of mission-centric opportunities to achieve value, or of commodity capabilities... Nathan Lasnoski 7:36 ...both of those are impacting people, and that's a proactive exercise, not a reactive exercise. How can I enable the individuals in my organization to relearn skills, ones they either had before and forgot, or have just adopted now, that let them function within an AI-enabled era?
So this is a very practical breakdown of the places where AI is enabling use cases within an organization. On the left-hand side you can see commodity and mission-driven, and those denote the kinds of adoption capabilities: using capabilities or building capabilities. As you move over to the right-hand side, you see us starting to break down examples of tools that fit within those zones, and maybe even overlap those zones to a degree. As we look at constructing the ability to leverage these capabilities within our organization, you can see how the skills necessary in each of these buckets are different. This first top chunk is about using AI capabilities in products. Nathan Lasnoski 8:42 Right at the top here we have things like Microsoft Copilot, which is essentially what used to be called Bing AI: the idea of just using public web capabilities, but still in a commercial data protection oriented zone, to ask questions of large language models. These are things you might put to ChatGPT, except you're concerned that if you use that, your data is going to be used to retrain the model. You could use Microsoft Copilot to get those questions answered and let a person use it to augment their work. Then you might start adopting things like M365 Copilot, which lets you do things like writing a faster email, drafting a presentation, summarizing a meeting you were in into its action items so a person doesn't have to take notes anymore, looking for answers out on your corporate intranet and pointing you to the right places, or finding out whether you have meetings coming up with a given person. These are all things Copilot can do. Understand that some of those are really easy to use Copilot for; others might not be as easy and need adoption assistance to get done.
This is using AI capabilities. Then you get to this middle zone where you're configuring prebuilt elements. All the Lego blocks already exist; you're just putting them together to achieve the moderate, middle-level accuracy and precision scenarios. If you start at the top, your accuracy and precision is really low, right? You're getting whatever it decides to pump out at you, and you use it as a copilot, hence the name, to do the work. In the middle zone... Nathan Lasnoski 10:12 ...you're starting to achieve more accuracy and precision because you're grounding it with something beyond just your corporate corpus. You're being a little more specific: maybe I'm plugging ServiceNow into Teams and enabling a chatbot for my customers in the business. Nathan Lasnoski 10:28 Or maybe I'm putting up general documents that I want to surface through a specific chat experience, or surfacing data from my business systems. Copilot Studio is the way to put those tinkertoys together and achieve some moderate results. But at some point you get to the spot where you need high trust, high safety, accuracy, and precision; the ML model needs to do very specific things very well, and that's not the commodity space. That's very mission-driven, and that's where you're enabling data scientists and AI engineers to create fully custom ML products that achieve outcomes. High-visibility chatbots that you put in front of your customers, for example, or a customer service team that needs to answer specific questions about your product: that's not something you put general AI in front of. It's something you put very specific, accurate, precise AI in front of. Think safety stock optimization and demand planning, marketing campaign optimization, or next best action when you're dealing with financial products or a potential sale.
These are things you do to specifically impact revenue operations, and your platform's goal is to be able to respond to those kinds of needs. Chris, I know you've had some experience helping companies through these kinds of questions. Anything you'd build on that with? Chris Blackburn 11:45 Yeah, especially on the top two cases, especially Copilot for that public AI access. In light of the large language models that are publicly accessible, and not wanting to give your company's data to those models, directing your users there ahead of getting those M365 Copilot licenses is a great foundation, and we have a lot of customers doing that. They're implementing security controls to prevent access to those public large language model websites; some are even doing redirection with certain network appliances to ensure that users are going to the right sites, and of course tying some education to that as well. So that's a great first place to start if you're an M365 customer working towards those first few pilot licenses of Microsoft 365 Copilot, because once you do have the Microsoft 365 Copilot licenses, you don't even need to access that website anymore. You can just go to chat and you have that same experience, and it's also grounded in the data that's in your tenant. Nathan Lasnoski 12:51 Yep. OK. I think it's important to note, especially for the architects in the room, that all these things are part of the same ecosystem. You may start with rolling out Copilot, and Copilot Chat is part of that experience, or you might start with a custom chatbot that's built in ML Studio and surfaced through a unique chat endpoint. Or you might even have a custom ML project that's not surfaced through chat at all: it's an analytics view, or it's in an app somewhere as an inline component of an application.
The suite we're talking about, the capabilities we're talking about, are all intended to lead to each other. So I might enter the journey through Copilot chat, but I could have a skill that points to something I'm doing in Copilot Studio, or I might even parlay that over into a custom model I've created to answer a question through Copilot chat. So the idea is, when you think about your center of excellence, don't think about each of these lanes as completely independent activities. Think about them as components of your AI journey that need to be part of the same journey. They need to live in the same ecosystem, and sometimes that means you're not building the same thing you built for someone else; you're reusing something you already created. You're enabling some of the strengths you already built to be built on top of, rather than necessarily creating a whole lot of independent activities. Or you might create an independent activity if that precision is necessary for the use case. So all of this fits together into a picture as you start going down the journey. And then, of course, we have this question of how autonomous we are making things, which really goes back to maturity: how important this is to me, and how fully the person is handing over activities to that AI agent. A chatbot might function differently from a copilot, or from what a fully autonomous system might do. A fully autonomous system has an extremely high bar by which it needs to function, whereas for a copilot the bar is a little lower; it can do fewer things. And even in the context of fully autonomous, you have a range: is 100% success required, or is 80% success acceptable with an exception process that certain things run up to?
Brandon will talk a little more about that later with a more robust version of this model. OK. As we get into building a center of excellence, one of the things we want you to think about is executive alignment. What are the goals? How do I ensure these things exist? I won't go through every single one of these, but it's important to note that every one of these questions needs to be answered in conjunction with your leadership team to gain success in a broad AI effort. You're not going to get success in a broad AI effort just by tackling it from IT; it needs to be tackled from the executive level first, and then IT is an asset in making it happen. Nathan Lasnoski 15:57 So: things like, what is the mission of my organization and how does it relate to AI? How do I prepare my employees for that job skill change? What focus areas will create the most impact? Nathan Lasnoski 16:06 All of these need to be thought about as you're going down that road, and they need to be thought about in two lanes. One is the incremental things you might be adopting, the things you're doing now that then become automated. The other is disruptive activities you're bringing to the market; it might be a way of engaging as if you were starting the business over with AI as an asset. Nathan Lasnoski 16:30 How would I think about my business in that way? Both of these fall into how we think about building up a center of excellence, because we're also pairing this with the innovation engine. Every executive needs to think about: what is the mission of my organization? How do I challenge every employee to leverage AI? How do I invoke those new skills, and how do I measure that engagement? Nathan Lasnoski 16:53 These things bleed into the way you build up your center of excellence. Your center of excellence is the muscle that takes the executive intent and makes it present within the organization.
So Chris, I'd love for you to talk a little bit about the ideas around commodity AI. Nathan Lasnoski 17:11 Does everybody need to be a data scientist? Chris Blackburn 17:15 No, they do not. Nathan Lasnoski 17:19 Exactly. That will impact less than 1% of your staff, right? In the commodity space you're not enabling everyone to be data scientists, or even data engineers. You're enabling them to be people that leverage AI as part of the journey, and everyone deserves to have AI as an assistant. And really, what is it you said, Chris? Something about the draft, right? Chris Blackburn 17:43 Never create a draft again. Nathan Lasnoski 17:45 There you go. Even that idea: some of us have gotten really used to using AI as an asset and creating those drafts. I think the really bad ones have just used it for the final draft. That's why it's called copilot, right? Chris Blackburn 18:02 Right. Nathan Lasnoski 18:03 Ultimately, it's not about data science. It's about enabling most of your employees in this space, the AI practitioner lane, the "I'm taking advantage of AI to accomplish more" lane, and then those custom ML scenarios fit over here, this idea of creating something really impactful, which is what Brandon is going to go into. So skills come to mind here, things that get force-multiplied as you build a center of excellence, things the center of excellence needs to think about in the way it engages the organization: growth mindset, experimentation, beginning with the end in mind. These all come into play in that space. So Chris, I'd love for you to walk through: what does a typical center of excellence plan look like for the commodity lane? Chris Blackburn 18:48 The commodity perspective is usually a great foundation for thinking about the whole of AI and the use cases in your organization.
I think that's first and foremost looking at the pain or challenges that users within the different departments and teams in your organization are struggling with every day. Or maybe there's not even a struggle; maybe they're doing something over and over and over again, and that's something they could leverage AI to do more effectively and more quickly, giving your end users more time to focus on the things that really matter. That really sits as part of the executive alignment: the leadership and everyone underneath, including IT, are all aligned that AI is the direction. Envisioning is really the next step: thinking about how to apply AI, and what the right tools are to leverage AI to help your organization do more. Then from there it's looking at your readiness and your preparation, from tools that Microsoft makes available to help your team get educated, all the way through doing some assessments; we'll talk in a couple of slides about some things you could be doing to put your organization in a position of success. So really, even before you license your first user, before you get to that pilot group, there's some forethought you'll want to put into enabling AI in your organization before it even gets into those users' hands. Nathan Lasnoski 20:29 Absolutely. Chris Blackburn 20:31 And so here's how we think through what that AI journey is going to be. Before we even license our users, think about the journey and the vision that your organization has. As we've sat down and talked with different companies, there are really these four areas we work with them on to flesh out how, and for what, they're going to use those different AIs, those different copilots, within their environment. It's looking at the different personas and the challenges each of them face.
So that's looking at the pain and being able to measure it: how many people do those challenges impact, and how much time do they spend executing against those challenges or pain points a week, a month, or even a day? From there, as we start to think about those activities, we can determine the right building blocks. Is it really the Copilot for Microsoft 365 route, or is it potentially something custom? Maybe it's leveraging Copilot Studio to integrate a third-party application, or maybe it's a custom-built solution. Chris Blackburn 21:40 We will sit down and ideate with customers as to what the right tool is, and then look at the priority of the AI scenarios we want to accomplish: things that should happen within the next three months, the next year, the next three years, and how much effort each will take. Those conversations usually gauge when some of those AI-related investments, or even assistance for your end users, really start to come into play. But even before that, or really while you're going through this journey and thinking about how you're going to be successful in those next steps, there are three core areas of governance and control we engage with customers on before you license your users. Nathan Lasnoski 22:26 Umm. Chris Blackburn 22:32 It's thinking about your data, your estate. What permissions are in place? Oversharing in SharePoint is more common than you think, so look at those areas and how to ensure your data is secure. And I mention whether you are an M365 E3 or E5 customer because you can use Copilot without things like the Enterprise Mobility + Security suite or other components of those Microsoft 365 licenses.
Chris Blackburn 22:57 But if you do have them, leveraging Purview to look at your data classification and labeling will help you along this journey and give you peace of mind that your data is properly secured. From there, we can think about devices. Chris Blackburn 23:11 Are you on Click-to-Run, as they call it, the Microsoft 365 Apps for enterprise on the Current Channel? Chris Blackburn 23:18 That's the monthly channel, the preview, the beta. Copilot will not be effective for you if you're on the Semi-Annual channel, so that's really the big step forward. It does work with both the new and the old versions of Teams; for Outlook, the one that's built into Windows 11 is the one supported as of right now. So thinking about apps and their readiness is important. Then even look at the version of Windows you're running: the most recent versions of Windows 11, and even Windows 10, actually get rid of Cortana (rest in peace) and replace it with Copilot as that center of interacting with Windows as well as with AI. Nathan Lasnoski 23:50 Umm. Chris Blackburn 23:57 And then finally, there's the human aspect of AI that really sits at the core of that center of excellence: your staff, your teams. It's ensuring that they have access to the content, that they are educated on how to communicate with the technology, how to talk to AI, and how to give AI the right prompts to get the best results. And then, going back to the slide we looked at a few before this, let them know they should focus any searches within that Microsoft 365 Copilot or Microsoft Copilot space, and not put organizational information into public large language models like ChatGPT, as that is a security risk, and security is everyone's job in every organization. So these are things to be thinking about as you're in that envisioning phase.
Nathan Lasnoski 24:53 Awesome. Thank you, Chris. Alright, Brandon, take it away. Let's talk a little bit about the mission-driven side: building a data science team and constructing meaningful AI models within your organization. Brandon Dey 25:06 Yeah. Thanks, everybody. Can you hear me OK? Nathan Lasnoski 25:08 Yeah. Brandon Dey 25:09 Beautiful. Nathan, you or Amy can tap forward two slides. Excellent. My name is Brandon. I'm Concurrency's lead data scientist, and I'm going to talk about the second prong of the two-prong process Nathan just described: the mission side versus the commodity side. My responsibility is to build the mission-driven ML systems, and to do that in a framework, which we're calling the AI Center of Excellence, that maximizes your investment. Brandon Dey 25:40 So what do I mean by AI Center of Excellence? If you ask 10 people what that is, you'll get roughly 10 different answers, but the way Concurrency defines it is the systematized approach to how you do a few verbs, and those verbs are on the screen: how you find, prioritize, implement, and maintain AI systems at scale. I'm going to spend the next 20 minutes or so talking through our approach to systematizing each of these verbs, and we can get started with why the heck you'd want to do this in the first place. If you go to the next slide, and the next one, we'll get a sense of that. Really, you want to do this to create a competitive advantage if one exists in the space. Even if it doesn't, you really ought to be doing AI for efficiency's sake anyway. It's good for maximizing ROI and for taming all of the technical complexity associated with building these systems in-house. Brandon Dey 26:44 That includes enforcing repeatability, like I mentioned before, de-risking the dollars that go into it, and then doing all that in a way that keeps your people compliant with best practices, right?
So going back to how Chris mentioned that security is everybody's job: the AI Center of Excellence is a way to enforce that. That's, in a very small nutshell, why you'd want to establish an AI Center of Excellence for in-house mission-driven applications. But let's move forward and talk about how we do that. If you click forward two more slides, we'll start with how to find those AI opportunities in the first place. There isn't really any secret to this, but it might be nonobvious. You're starting with either a pain or the need for some vision, but on the pain side of things, let's start with a clear pain that is well articulated. What I've found to be the best combination of factors leading to a successful AI system is, first of all, that the pain should be benchmarkable. Maybe there's a legacy solution that exists, or there's a brute-force method driven by people doing the work, or the heuristics that folks are using are inadequate. Maybe there's already a machine learning model in production, but it's stale. In all of these cases, performance is benchmarkable, and that's important for securing more investment and establishing ROI that's head and shoulders above the incumbent solution. The next thing to look for is that the domain your problem exists in should have data. That's sort of obvious, but the data needs to be, hopefully, a trusted data source. It should be at least cleanable. It need not start clean; in fact, almost every project (no, every project I've ever worked on) starts with data that needs cleaning, but it has to have the potential to be cleaned. And the data set should be big, and by big I just mean big enough, where big enough depends on the specific application. The third thing to look for when finding AI opportunities is that the conditions are favorable to adoption. What I mean by that is that people have reasonable expectations for the problem being solved. Not all use cases have low expectations, right? You can think of obvious examples in healthcare: if you're building an autonomous system to diagnose cancers, you can't really tolerate a high false positive or false negative rate. That's an example where conditions are too stringent to really start, so you'd want to start in an area with lower accuracy requirements, and these are pretty common, actually. The other thing to look for, along the lines of favorable adoption conditions, is the evaluation window, that period after the model is built in which you figure out if it's working. That should be sufficiently short. You don't want to wait a year, or six months, to figure out if the model is doing what you expect it to do; ideally, an evaluation window under one month is our preferred target. And you want to make sure that where you're starting is relatively undisruptive to people, their process, and their tech. This is not a hard and fast rule, but the easier it is for your solution to be adopted, and the less disruptive it is to incumbent behavior and habits, the better. Then lastly, on the technical side, you want to pick a problem that can be solved with known techniques. It should be feasible, so start with textbook applications; we don't need to try to build a self-driving car right from the start. That whole chain of conditions stems from there being a known problem that is clearly articulated, and the combination of those factors maximizes the success of your AI application. We can click forward to the next slide. Perfect. Once you've found, let's say, five possible AI use cases, the next thing you need to do is prioritize them.
And folks have different ways of doing this, but what we found to be most successful is treating each use case, really, respecting the complexity of each use case, across the three most important dimensions: the economic impact it can make, the cost in creating that impact, and the risk in delivering that impact. For impact, you can back into estimates depending on the application, and what's outlined on the slide is just a toy example of how you might back into the ROI, really the economic impact, of, let's say, a campaign optimization AI system that's trying to target the right customers at the right time with the right message. On the cost figure, this is pretty typical of most IT projects: there's a cost to build it, to use it, and to maintain it, minus anything you can use to accelerate that process, right? Things like using open source tech, or good documentation, which is often more scarce than we would like, and so on. The last category is super important to organizations getting started with AI, and that's the risk category. There are seven key risk factors in delivering AI systems, and those are displayed horizontally across the screen. What I have circled are the most important ones for folks getting started with AI. Brandon Dey 33:23 That's whether or not the project has support, whether the expectations are optimal, OK, low, and whether the goal is well defined. Obviously you need good data and you need the skills, knowledge, and tools to bring it about, but it's our experience that the absence of one of those three risk factors, support, expectations, or goal, is most likely to make a project go sideways. Alright, we can move forward to the next step. Now there's a couple of animations on this slide that I'll ask whoever is driving to slowly walk through with me.
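One way to picture the impact/cost/risk prioritization described above is as a weighted score. The talk does not prescribe a formula; the weights, the impact-over-cost ratio, and the extra weight on the three critical risk factors below are all invented for illustration.

```python
# Hypothetical sketch of impact / cost / risk prioritization.
# Weights and numbers are illustrative, not from the talk.

def priority_score(impact: float, cost: float, risk_factors: dict) -> float:
    """Higher is better: large impact, low cost, few risk red flags.

    risk_factors maps factor name -> severity from 0.0 (no risk) to 1.0.
    Support, expectations, and goal are weighted heaviest here, since
    their absence is what most often makes a project go sideways.
    """
    critical = {"support", "expectations", "goal"}
    risk_penalty = sum(
        (2.0 if name in critical else 1.0) * severity
        for name, severity in risk_factors.items()
    )
    return impact / cost - risk_penalty

use_cases = {
    "campaign optimization": priority_score(
        impact=500_000, cost=100_000,
        risk_factors={"support": 0.1, "expectations": 0.2, "goal": 0.1,
                      "data": 0.3, "skills": 0.2}),
    "demand forecasting": priority_score(
        impact=300_000, cost=150_000,
        risk_factors={"support": 0.8, "expectations": 0.5, "goal": 0.6,
                      "data": 0.2, "skills": 0.3}),
}
# Rank candidates from highest to lowest score
ranked = sorted(use_cases, key=use_cases.get, reverse=True)
print(ranked)
```

The second use case has the weaker score not because of its economics but because support, expectations, and goal are all shaky, which matches the talk's emphasis on those three factors for organizations new to AI.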
Brandon Dey 34:06 So we found opportunities, we have prioritized them, but the next thing that you should do, really as part of prioritizing them, is figure out the economic ROI, right? Brandon Dey 34:18 That's the impact part of the equation, and the way that we at Concurrency do this is pretty robust, in that we estimate how economically impactful a system is going to be by simulating how effective it could be based on our experience, and how the business is likely to perform in the future given how it's performed in the past, right? This happens in a simulation framework that is shown on this slide. Basically, we've got three different parts of this. On the left we have the business metrics that we want to simulate. The example shown is for an AI auto-quoting system that we designed for a manufacturer whose engineers basically get requests from prospective customers to build a certain widget, and then they need to estimate how much the widget is going to cost. We designed the system to automatically quote that based on the 3D render, the CAD drawings, and this is the way we simulated the ROI of that system. So anyway, there are a lot of metrics here that are relevant to that project, but they would change depending on your use case. So we simulate some numbers based on the historic performance, and then, if you click through, those numbers impact certain metrics which we call dependent metrics, right? The win rate, the monthly revenue, margin, and so on. All of those drive the two ROI factors of a system of this type, right? Margin compliance and win rate. There's a lot of information on this and I'm cruising through a little quickly, but feel free to post questions in the chat and I'll address them afterwards. The whole point of this slide is to point out that we estimate the economic impact of the solution so that we can accurately prioritize what we should focus on first.
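The simulation idea above can be sketched as a small Monte Carlo loop: draw the business metrics from distributions informed by historic performance, propagate them through the dependent metrics, and summarize the resulting ROI as a range. Every number and distribution below is invented for illustration; this is not the framework from the slide.

```python
# Hypothetical Monte Carlo sketch of ROI simulation for a quoting-style
# system: simulated business metrics drive dependent metrics, which
# drive the ROI estimate. All parameters are illustrative assumptions.
import random

random.seed(0)  # reproducible draws

def simulate_annual_roi(n_trials: int = 10_000) -> list[float]:
    rois = []
    for _ in range(n_trials):
        # Simulated business metrics (imagined as fit from historic data)
        quotes_per_month = random.gauss(200, 20)
        avg_deal_value = random.gauss(15_000, 2_000)
        # Assumed uplift from the AI system -- the uncertain quantity
        win_rate_uplift = random.uniform(0.01, 0.04)
        # Dependent metric: extra revenue from additional wins per year
        extra_wins = quotes_per_month * 12 * win_rate_uplift
        rois.append(extra_wins * avg_deal_value)
    return rois

rois = sorted(simulate_annual_roi())
p10, p50, p90 = (rois[int(len(rois) * q)] for q in (0.10, 0.50, 0.90))
print(f"annual ROI: p10={p10:,.0f}  median={p50:,.0f}  p90={p90:,.0f}")
```

Reporting a percentile range rather than a single number is the point: it lets the business weigh a conservative p10 estimate against the upside when deciding what to fund first.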
Alright, if we move to the next slide. Thank you. If you move to the next slide, you'll see that economic ROI is really just one of three potential ROIs that you should think about. On the screen it's called measurable; it's just the quantifiable economic stuff. But there's also a strategic ROI, which is really how doing this project helps you execute your business strategy best with machine learning and AI. That's strategic ROI. And then the third ROI is of importance to folks who want to build this capability in house, which is why it's called capability ROI. It's really how a project helps you get good at doing these things in house versus relying on Concurrency or a similar vendor to do it for you. All of these are super important. The measurable, economic ROI typically takes precedence for new-to-AI organizations, but the other two should not be neglected either. Alright, so let's start to talk about implementation. If you go to the next slide, and then the one beyond it, we'll talk about how it's important to stage the implementation. Could you jump to the next slide? Brandon Dey 37:45 Oops, go back. Go back one. Thank you. Perfect. So the next few slides show how and why it's important to phase your ML project in specific ways in order to de-risk the investment. And this slide is really what I think people picture when they think of a machine learning system or an AI tool. They think, you know, it's this little black box with some machine learning code on the inside. But if you click to the next slide, you'll see that the reality is a little more complicated. Real AI systems are often mostly non-AI, and that's all the surrounding stuff: there's, you know, data collection code, there's code to do data verification, monitoring, model validation, metadata management.
All of these more boring, non-headline-grabbing components are critical to a successful system, and I point this out to really highlight how big these systems are. And if it sounds like a lot, it kind of can be, which is why, on the next slide, you'll see why we phase these projects into three different phases. So that first phase, and there are some animations on here too, the first phase is the POC, and we're only building four of the highlighted components, and we're doing that to make sure that the system can first establish economic ROI before we invest in doing anything more. Brandon Dey 39:31 So yes, you're building the machine learning code, but you're also writing code for the data collection, the feature engineering, and model validation. Once you've validated the model, and by validating I mean shown this thing can generate economic ROI, then you stop, and then you bring it back to the business and they thumbs-up or thumbs-down it, Brandon Dey 39:51 depending on whether that's sufficient. If it is sufficient, you move into the next phase, which is all about turning it on. So this is the MVP phase, where the solution is not full featured. It doesn't consist of everything it could consist of, but it does consist of the minimum things that let you capture economic ROI. So the point of this phase is to turn the POC on, and you do that by building four more components. Those are highlighted in dark blue. The most important of these is really the serving infrastructure. This could be a UI, it could be a managed online or batch endpoint, it could be just writing the model output to some table that gets picked up by another system. Brandon Dey 40:41 The point is, you turn it on. Once that phase is done and it's on and it's been running for a little while, then you can move into the next phase, which we call MLOps. The whole point of this phase is to make the means of generating that economic ROI reliable
as an engineered system. So you build a few more of the components: you add some monitoring and automation, process management, metadata management. This topic alone is very large, and we could talk most of the day about it. The point of this slide, though, is to highlight that you need to phase how you develop these ML projects so that you de-risk the investment at the critical milestones. Alright, if we go to the next slide, you'll see basically the same information, but in tabular format. You'll have access to this deck afterwards, so please feel free to take the time to review it then. In the interest of time, I'm not going to walk through this whole thing, so we can actually skip ahead to slide 47. There's more detail here on examples of what each of those things are, but I'll let you peruse that on your own. OK. So in terms of implementation, everything we talked about before is all planning and design, but then you need to implement, and we're always in the process of building reference architectures for all of the key machine learning steps that are part of every project. Those are build, deploy, pilot, and evaluate, and these four things happen within the framework of those three phases I talked about before. A visual depiction of the first phase, which is still the POC phase, is on this slide. There are a few animations in here. If you click through, you have the actual steps highlighted. In this first phase, there's build, deploy, pilot, and evaluate, and you do those in that order. The build phase consists of certain activities where you're doing some discovery and planning. You're planning the deployment, building the training data set, doing the actual ML work, validating, then documenting. You'll notice at the top right there's a phase gate. At that point, you stop, right? This is when you're done with the actual POC, and then you move into the deploy phase. Thank you.
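The three-phase structure described above (POC proves the ROI, MVP captures it, MLOps makes it reliable) can be sketched as a simple cumulative mapping of components to phases. The component-to-phase assignments follow the talk's description where it names them; the talk leaves most MVP components unnamed, so they are elided below, and the exact groupings in practice vary by project.

```python
# Hypothetical sketch of phasing ML system components, per the three
# phases above. Phases are cumulative: later phases build on earlier ones.

PHASES = {
    "POC": [               # goal: prove the system CAN generate economic ROI
        "data collection",
        "feature engineering",
        "ML code",
        "model validation",
    ],
    "MVP": [               # goal: turn it on and CAPTURE the ROI
        "serving infrastructure",
        # ... remaining MVP components not named in the talk
    ],
    "MLOps": [             # goal: make the ROI generation RELIABLE
        "monitoring",
        "automation",
        "process management",
        "metadata management",
    ],
}

def components_through(phase: str) -> list[str]:
    """All components built by the end of the given phase."""
    order = list(PHASES)
    built = []
    for p in order[: order.index(phase) + 1]:
        built.extend(PHASES[p])
    return built

print(components_through("POC"))  # the four POC-phase components
```

The phase gates sit between these entries: after "POC" the business gives a thumbs-up or thumbs-down before any "MVP" component is funded.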
It feels like a real-time NFL game where we're making all the squiggles on the screen. Then you move into the deploy phase, where you're turning it on, before finally moving into the pilot and evaluation phases. Now, in the next few slides, I'm going to unpack each of these. OK, so this is just a zoomed-in view of the build phase. All of these activities are on the screen because they exist in every machine learning project. You're always going to start with discovery, and then you should move into planning the deployment. You might not actually be deploying yet; you're just planning it. Then you move into building your training data set, doing your actual ML work, validating the model, which is critical, and then finally documenting. There are animations, but just click through all of them and we can move to the next slide. Brandon Dey 44:11 In the interest of time, I think we only have 13 more minutes. Is that right on timing? Nathan Lasnoski 44:22 Yep. Yep, 13 more minutes. You're good. Brandon Dey 44:24 OK. Brandon Dey 44:25 Deployment. This is a really big part of the project and can be standardized in code more than the other steps, because this is all very MLOps related, right? Brandon Dey 44:40 So you're creating things like YAML scripts to drive the deployment of your environments and MLOps infrastructure, and then you end this phase with testing, both unit testing and integration testing. Let's cruise through to the next slide so that we can talk a little bit about piloting and evaluation. Piloting is really the MVP. It's testing this with a small group of users who have a high error tolerance, so that they can give you good feedback and you can improve the product before it rolls out to some wider audience. This could be either internal or external users, right? And then you move into the evaluation phase. On the next slide, we'll zoom into the pilot part of this.
OK, so. Well, actually, we're zooming into the evaluation part of this. OK, so once the thing has been live and the users are using it, the question is: is it generating as much ROI as we expected it to, right? Brandon Dey 46:02 This is really important, and you can do this in a way that sort of rhymes across every project. The point of this is just to say: is this working? And then there are various ways of answering that, depending on the type of ML system it is and where in its evaluation life cycle you are. There's a lot of detail on this, but we're going to move forward just in the interest of getting through all of this material. OK, so this is the recap. It covers everything we just talked about, and then on the next slide... sorry, I shouldn't have used these animations, should I? So that's the overall picture of implementation. As I mentioned before, my team is always investing in reference architectures for common machine learning problems in certain business functions, and just to give you a taste of that, which we won't dwell on, we'll move to the next slide, where there's a quick picture of roughly what we mean when we talk about our common ML solutions, right? Let's just focus on the green. Within supply chain, we do a number of things. We'll do things like demand planning and order sourcing optimization, which is basically figuring out where a manufacturing order needs to be processed depending on the network of plants and distribution centers, and then we'll also do things like safety stock optimization. Within marketing, Brandon Dey 47:54 we'll do things like building next-best-action systems, right? Like, what do we do now for this customer, given their entire history with us and their propensity to buy things that are relevant to us?
The list goes on, but the broader point is that when you develop an AI Center of Excellence, you'll want to systematize how your organization implements very common Brandon Dey 48:22 use cases, right? That's important because your engineers will need to abstract the code in such a way as to allow you to plug that solution into a number of different domains, where the problem sort of rhymes with the problems in other domains. Alright, let's skip forward to the next slide and talk about what we do. If you go back one slide, sorry. This is about maintenance and evolving these solutions when they're in production, right? Because we have those three phases, each of which has a different goal, right? The POC: can this thing generate economic ROI? The MVP, which is about actually capturing the ROI. And the last phase, MLOps, which is about automation. Because we have those three different goals, we need a way of quantifying how mature the system is, so that we know how to invest resources, time, and energy into pulling it along the maturity curve. And so we've systematized this as well, and on the next slide is an example. We are basically evaluating the maturity of the four core components of an AI system, and we do that with a specific rubric that's pointed out on this slide. Let's move to the next. Really, the point of this is just to show that there is a systematic way of doing this. This slide, I think, summarizes it a little bit better. What we're looking at is the scorecard that we give an AI system. Brandon Dey 50:15 You'll notice this ties nicely into the three-phase process I described: POC in red, MVP in yellow, and MLOps in green. The colors correspond to the maturity, right? A POC is the least mature, sort of definitionally, and it's given a score to reflect that. The scores range from 0, being the least mature, to 7, being the most mature.
How we arrive at that score is basically by checking whether different conditions are met in the system. Those conditions depend on which component of the ML system we're talking about, but this is a quick way of getting a gut check on how mature the system is and where we need to invest to improve it. So let's go to the next slide and we'll see roughly how to close the gap. Now, this is an animation that's probably moving too fast for you guys to follow along, so I apologize for that. But what's happening on the far left, under the component column, is you're seeing four components, the data, the model, the infrastructure, and monitoring, that are the core features of a machine learning system, irrespective of how mature it is. The next column over is the assertion or condition that needs to be checked to determine the score it gets, right? Brandon Dey 51:49 These are things like: is a simpler model better than the existing model, or do we have the data captured in a schema? And are we testing for these things at all? Are we testing for these things manually, or are they automated tests? The means of testing, whether it's happening at all, whether it's manual or automated, tells us how mature it is, right? You basically get a number that reflects the answer to that, and then you add up all the numbers, and the higher the total, the more mature it is. Actually, we have about five minutes left, and I don't want this to feel super pressed for time, but we are at the maintenance side of the system, which is really the last step in the life cycle of an AI system. So maybe at this point, yeah, hey, Nathan, maybe we open it up for any questions you guys have, based off of the fire hose I just pointed at you. Nathan? Brandon Dey 53:07 Chris, what do you guys think? Nathan Lasnoski 53:09 Sounds great.
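The maturity scoring just described, where how a condition is tested (not at all, manually, or via automation) determines the points awarded per component, can be sketched as follows. The per-condition point scale, the example conditions, and the phase thresholds below are invented; the talk only specifies the four components, the condition-checking idea, and an overall 0-to-7 range.

```python
# Hypothetical sketch of the maturity scorecard: score each core
# component by how its conditions are tested, then sum. Point values
# and thresholds are illustrative assumptions, not the talk's rubric.

TESTING_POINTS = {"none": 0, "manual": 1, "automated": 2}

def component_score(condition_testing: dict) -> int:
    """Sum points across a component's conditions; higher = more mature."""
    return sum(TESTING_POINTS[means] for means in condition_testing.values())

system = {
    "data": {"captured in a schema": "automated",
             "freshness checked": "manual"},
    "model": {"simpler baseline compared": "manual",
              "performance validated": "automated"},
    "infrastructure": {"deployment reproducible": "none"},
    "monitoring": {"drift detected": "none"},
}

scores = {comp: component_score(conds) for comp, conds in system.items()}
total = sum(scores.values())
# Map the total onto the talk's phase colors (thresholds are invented)
phase = "POC (red)" if total <= 3 else "MVP (yellow)" if total <= 6 else "MLOps (green)"
print(scores, total, phase)
```

A scorecard like this makes the investment decision concrete: the zero-scoring infrastructure and monitoring rows above are exactly where this example system would need attention to move along the maturity curve.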
All right, so feel free to either drop them in the chat or come off mute and ask a question. Chat is probably the easiest if you have a question. Chris Blackburn 53:25 There was a question earlier, and hopefully I don't mess up the name, from a Rota, about whether leveraging AI here means leveraging Copilot. From our perspective, as we look at AI, Copilot is a tool to leverage AI. Copilot in Microsoft 365 is a tool. Azure OpenAI and ML, those are all tools. The real question at the end of the day becomes: what is the specific use case that you would be applying to one of those types of AI? Those could even be one of the multiple Copilots as well. And so that's where we typically will sit down and look at that envisioning process, looking at use cases to help determine which of these components of AI, even in this example of looking at autonomy, we would apply the specific technologies to, whereas a tool would be something more in the commodity space. And then as we move more towards the right, that's where we have semi-commodity, more custom, and then very custom, which are great ways to look at it, especially based on that maturity level. And as you think through use cases, it's very natural to automatically start to shift to the right, because you want to be able to do more as an organization over and above commodity, and commodity is a tool to help fuel that. Nathan Lasnoski 54:57 Yeah, I think there will be a lot of commodity AI solutions. Copilot is just one of them, and you're even seeing numerous plugins that exist in the Copilot ecosystem. Chris Blackburn 55:03 Yeah. Nathan Lasnoski 55:09 But then, even outside of that, your other business systems, Salesforce or SAP or whatever, are all going to have different AI agents or aspects that light up as part of your experience. Chris Blackburn 55:11 Yeah. Chris Blackburn 55:19 There are a lot of companies even starting to adopt their
naming of AI within their platform as Copilot, whether that's something they built autonomously or built to integrate with Microsoft. Nathan Lasnoski 55:27 Yeah. Chris Blackburn 55:33 Since Microsoft really will be that driver, from the OS to the apps to how you work day to day, a lot is really hinging on that Copilot story, both internal and external to those plugins. Nathan Lasnoski 55:48 They're in danger of becoming the Kleenex of AI. Chris Blackburn 55:51 Yeah, right. Or the Windex, right? Everyone uses glass cleaner and they'll call it Windex. Nathan Lasnoski 55:56 Yeah, yeah. Chris Blackburn 55:59 But there's probably, you know, 50 different types of glass cleaner. Nathan Lasnoski 56:06 OK. Well, we'll take some more questions. One thing I want to make sure you do before you leave is fill out the survey. The survey is going to be super helpful for us. We had a ton of content in here, so we would love your feedback on that content. Nathan Lasnoski 56:21 If you want the slides, and you can see how much we didn't even have a chance to really get into in detail, make sure you indicate that, so we can provide you with the deck. And if you're interested in meeting for envisioning sessions, or talking about how we can make this real in your organization, that's also present in that follow-up form, so make sure you fill that out. Alright, any other questions? We'd love to take more questions from anyone who's here. Like the slides? No problem. Thank you, Robert. Nathan Lasnoski 57:13 Awesome. Alright. Well, we hope you have a fantastic rest of your day. Hopefully this was a great walkthrough of the two different lanes. Nathan Lasnoski 57:21 We're looking forward to following up with you, and hope you have a great afternoon. Thank you. Brandon Dey 57:26 Thanks everybody. Chris Blackburn 57:26 Thank you.