Insights View Recording: Best of Microsoft Build 2025

View Recording: Best of Microsoft Build 2025

Join us for a fast-paced, insight-packed webinar recapping the most important announcements, innovations, and demos from Microsoft Build 2025. Whether you missed the live event or want a distilled overview of what matters most to IT leaders, developers, and business decision-makers, this session is for you.

We’ll cover:

  • The latest on AI and Copilot advancements across Microsoft 365, Azure, and GitHub
  • Key developer tools and platform updates to accelerate innovation
  • Major cloud infrastructure and security enhancements
  • Highlights from partner solutions and real-world use cases

Stay ahead of the curve with expert analysis, practical takeaways, and time for live Q&A.

Transcription Collapsed

Brian Haydin 0:05 OK, Mac, this is like you know, this cooking competitions where hands off like, you know, the bells rung we’ve been feverishly working on this deck up until the last couple of minutes. Welcome to Mac and our presentation about the best of Microsoft build. The two of us got a chance to go to Seattle last week and went to a ton of sessions. Met a lot of people, did a bunch of labs, took some certs and we’ve got 59 minutes to get through four days of. Content and I don’t think we’re gonna make it through the deck. Mac, do you think we’re gonna make it through? So. OK. So to make it even worse and harder on us. Mac Krawiec 0:40 Absolutely. Brian Haydin 0:45 I’m gonna ask, you know, everybody on the call. Go ahead and post the questions in here. Amy’s gonna help moderate, and we’ll see if we can answer some of your questions. So with that, we’ll go ahead and get started. I’m Brian Hayden. I’m a solution architect here at concurrency. And Mac, introduce yourself real quick. We don’t have time for a long one. Mac Krawiec 1:06 I’m Matt Robinson. I’m a senior software engineer at concurrency and let’s get to the meat. Brian Haydin 1:11 All right. So First things first, if you did not get a chance to go through the sessions and want a quick study guide, the Book of news is the best resource for you to go to get like a 50,000 foot, you know, just information on this is. Is This is why it’s important and it’s a list of all the announcements that were made at Microsoft build. And so I’ve got the QR code up there. If you wanna take a picture or snap it. But definitely a huge resource. I even went through it over the weekend just to kind of refresh my memory. What did I forget? And. So use that resource perfect. I’m gonna give you my key takeaways. I had a bunch of posts, maybe some blog posts coming up on this. But my key takeaways is, first and foremost, I didn’t find anything that I didn’t find any announcements that I thought were unexpected or a big leap. There were things that I I would have projected that would be coming, you know, through the frameworks and through the capabilities, but that they got there this fast to me was the biggest surprise from Microsoft build what they were announcing, things that you can actually play with and. Touch today. Today was absolutely fantastic, and so what’s gonna be what it’s gonna be like next year when I go to build for in 2026. I I just can’t even think about it. I just can’t even imagine it. But the other thing that was really clear to me is that the agent revolution is here. Microsoft is all in on the semantic kernel stack. All of their tools are being built on it. They are enabling all the prominent. Agent protocols open source. And and so you know, we need to start focusing on how to manage agents in our work life, in our workloads in order to make us more effective and efficient developers and even frontline workers. And then the last thing that was my key take away is that there is a lot of learning to go to go around. There is no way. We cannot keep up with it and I think we owe it to ourselves not only to dedicate time in our day-to-day routine to learn. But also to help others that are struggling to, you know, find those resources. Find some of the things that are important for your organization and so share the knowledge. Get involved in user communities and and just be, you know, good partner to your coworkers. My favorite highlights around the technology. You know, if I get into like more the the technical things, I think local, local foundry and being able to do development. With AI LMS on your devices is going to be the single biggest thing that came out of build. The cost, we’ll just get more into a little bit later, but app modernization, I have a lot of customers that are struggling with getting their legacy code up to snuff. The tooling is almost there. We’re seeing it in GitHub copilot. The agent mode. We’re seeing it in some of the the tools that are there. Samantha kernel I meant I mentioned is definitely. A pivotal part of the ecosystem they’re using it in logic apps. They’re using it in copilot, copilot studio like it’s the the base layer. And if you want to build modular systems that are going to plug into the Microsoft ecosystem, definitely leverage semantic kernel. And then the last. Point that I wanted to make is that you’re going to see throughout this that low code and pro code are becoming. Really close knit partners. There’s a couple of slides here where I’m talking about copilot studio being able to develop those, which is a. It’s a low code platform, but you can actually build it in Visual Studio or you can bring Python skills into into play as well. So, Mac, what’s your key takeaways? Mac Krawiec 5:12 Yeah, I have. I have planned. This is my first build so once I got through like this the just the absolute craziness of like Oh my God, where do I go? I ended up talking to Bryan. I’m like, OK. There’s there’s 40 sessions today and four time slots. What do I do? And I ended up going to all the to a lot of the non recorded technical labs. I spent a lot of time in in those where some of the stuff that Brian’s talking about, like semantic kernel I got to play hands on. There is this and we talked about that in in a little bit the the big focus takeaways. Are that GitHub copilot is not. A pair programmer. It is a peer programmer. We’ve seen evidence of that. It was amazing to see things. That that we that we were exposed to literally solving or or or bringing to resolution the the, you know, six or five or six tickets all within 30 minutes time. As a developer, that tool and that the power of GitHub copilot in is just infinite. One thing that’s a little bit more obviously how we’re going to do things and how we’re going to develop is going to change. But the way that we’re going to develop from a pro from a project perspective is also significantly going to change GitHub and Azure DevOps in an integrated fashion is probably going to be the future just based on on how much they’ve invested in GitHub. And what we’re seeing with how much copilot has a play in GitHub and then the integration of tickets or user stories with an ado straight to GitHub. Big, big change there that I think it will impact projects. Get familiar with Azure AI foundry. Whether you know it’s it’s gonna be the one stop shop for everything AI and if if you’re picking it up every everything’s gonna be AI. So you better get familiar with Azure AI Foundry or you’ll be in trouble. I was. I used to think that, you know, I was reading about MCP and AI and I was like, oh, my God, how do you do this? And it has to be like this crazy secret sauce. And then I got to actually build an MCP server in an Azure function and and use that agent that I just built inside of copilot. GitHub copilot got. I got it to do something for me within code. I was extremely impressed how easy it was and I and I have some some code that I can share to to, to with the team or with everybody here to just display how that works. I was extremely impressed how easy it is. It’s not secret sauce like we all think it is. It’s very straightforward. And then there’s a lot of improved AI, security and permissions aspects on the horizon. Things we’ll talk about today. But I got to to play with some with some Python libraries. The the works that Microsoft is is is releasing that ultimately allows us to, you know, test for safety and and violence prevention and a few other things. So that was really cool. And then the last thing I’ll say. If you have been following a product, whether it’s power, BI, fabric, whatever. Look at the road map again. What I noticed is the road maps completely revamped on day one of build. We had our our team members running. Into that, but the road maps are different. They’re renew. They’re revamped, so please take a look. And and and take a look there. So I’ll hand it back to you, Bryan. Brian Haydin 8:54 One of the things that I said was agents are here to stay. So not only do you have your typical agent patterns that we’ve been talking about for the last, you know, 12 months inputs and outputs and being able to enable skills and tools. You know we’ve got. Now MCP agents and recently announced was Microsoft’s adoption or support for the A to a protocol as well, so. They’re doubling down on this. I’m seeing it in a lot of the tools I’m seeing agents kind of being surfaced up, you know, in the Microsoft ecosystem. So start investing like Mac said in MCP specifically, but if you’re interested in the A to a, it’s certainly an open protocol that is worth looking into it. So Mac, you want to explain a little bit how agents work? Mac Krawiec 9:47 Yeah, absolutely. If you want to hit, I don’t know if we added, why can’t we can’t see the whole thing, but effectively the way that that agents work, there you go. Thank you. I think we made some changes to this and I didn’t know. The way that agents in at a high level work is a user will prompt an AI agent something, and ultimately that’ll execute a model and that model. Now in a lot of fashion will be will be available to us in Azure AI Foundry. Foundry and those models those agents will will have tools that are that are exposed to them. Tools are things like MCP servers or agents, or your Azure functions that are running on an MCP server. And the beautiful thing. And I was so totally blown away by this contextually by using by using embeddings, your agent will know which tool to use based on the prompt that you specify for your agent. And so effectively from from from if we go back to the idea of Azure functions when you create an Azure function as an MCP server, ultimately you specify the kind of prompt in an embedded fashion that you would like for that for that. For that function to follow and the agent based on the prompt that it received will know that hey, I got this prompt. It uses the embedding model to determine that hey, in this scenario it is. It is best to use this. Agent or this tool to get you the right answer so the user prompts the agent. The agent determines what model to use, and ultimately you get a response back. The tools are where the bread and butter is, where you can integrate all those things like AP is like web server, all all this stuff is available now at your fingertips. So there’s a lot of a lot of. Cool opportunities there. Brian Haydin 11:44 So the announcement of the fact the foundry agent service as well is you know another layer to be able to connect the core pieces of AI foundry. You know to the rest of your ecosystem, so development tools. And you know all the different knowledge sources. So the other parts of this that are really great are the being able to connect to like the content safety. The observability and we’re gonna get into some of the features of the observability in a little bit, but the foundry agent service is what’s going to help you manage that orchestration. And manage all the network isolation and all that type of stuff. I also mentioned the A to a support, so here’s just a quick little snippet. You know for A to a how easy it is for you to be able to actually use it. So just a couple of you know, imports and boom, you are off and running and you’ve now implemented your A to a you know compatible agent. We touched a little bit about or Mac mentioned something about the Azure functions and so we’ve got new capabilities for AI apps leveraging Azure functions. I think I’ve got a slide in here too and this is going to be so chaotic because the two of us were just. Like going nuts on this. Mac, do you wanna talk a little bit about what you saw with the the functions? Mac Krawiec 13:15 Yeah, the the. So in much the same fashion that Brian just showed you with A to A and how easy it is effectively the way that. Azure functions have adapted. This is just as as bindings. They’re just decorators over your Azure function and within one line of code you can specify hey, this Azure function should use this embeddings model to embed every message and then ultimately another decorator that says hey, turn this into an MCP server. And use this prompt. It was absolutely amazing. How easy they made it to integrate, but effectively the the long and short of it is, they’re extending the capabilities of AI apps within Azure functions. Obviously there’s things that like cement the kernel that you could have played with already, but then these open AI triggers and bindings. That’s a big step up to how easy they make it, and then obviously you know we love we all love Azure functions. They’re not going to go away. So yeah. Brian Haydin 14:18 Maybe I’ll skip ahead really quick because I the slide that I got. Like 26. Let’s go back. This is so ad hoc because I did throw something in here. Mac talked about the bindings in here and I just wanted to give you some examples of what that looks like. So these are the bindings that you can use as of today in the Azure open AI extension. For Azure functions and so it’s just basically decorations. What you’re doing is you are teaching open AI what kinds of things you can do with it. With those Azure functions, they can be triggered directly from it so. You know, do a text completion. Have a trigger for, you know, an assistant messaging an assistant reading. Text embeddings. All these things are triggers that are now supported within. The Azure Azure function. Through the openingi extensions, let’s go back to. To where we were because we got a lot. Yeah, we got. We got like 50 slides and we’re like probably 50 minutes behind schedule already. So MCP Azure, you know function architecture, you know just a quick snapshot of what this looks like. You know, Mac, did you wanna pocket anything more about it? Mac Krawiec 15:41 Yeah, so this is. This will be just a very quick one. Just a sample, right? You have an Azure function in here and you know the in the yellow boxes you have your typical, you know getters or posts or whatever. That’s your typical HTP triggers. And then with the, with the advent of these of these bindings and these and these other functionality, this is another function. It becomes an MCP tool and that MCP tool then is able to be surfaced to vs code or or any other things, or you’re now able to add it into. To an existing agent through Azure AI Foundry. So it’s just that easy. It’s really not secret sauce. And that’s what blew me away. Brian Haydin 16:20 Speaking of how blown away I was, I remembered NL Webb coming up on like the build presentation I was like, oh, that’s interesting. What is this? What is this gonna be? And so I I is it a thing? Is it not a thing and I I took a little bit closer. Look at it and the best way I can explain it is that think of like RSS feeds. You know that. Are ubiquitous with anybody that’s publishing content on a regular basis. The idea here is that naweb is a protocol that sits on top of an MCP server. And right now it only supports one method and it’s basically asked. And so you can now interact with your website content like an RSS feed. You know, through a common protocol common interface. So it’s really kind of a layer sitting on top of MCP server. Of an MCP server and then the other functional aspect of this to call out is that if you think about RSS feeds as being, you know, a semantic layer semantic context of like the information that’s available on websites, whether it’s your shopping portal or if it’s your blog. Portal or whatever. It’s kind of pre built for, you know, being able to ask natural language questions. So that’s what it is. Is it going to be a thing? I don’t know. I mean, is it going to? Is it going to replace? RSS feeds. I don’t know, but I think it’s interesting. I think that it’s something we’ll have to pay attention to this year. Any thoughts on that Mac? Mac Krawiec 17:55 No, I first read it when I first heard it. I was like, I’m not really sure that this is gonna get bought into it, but I guess we’ll see. I’m in much the same boat as you are, but moving on with GitHub copilot. The big one that I that I I was feverishly typing during the keynote when I heard this, I was feverishly typing to to coworkers and and and then some of our clients agent mode. Is now available in Visual Studio and I got to use it and it’s and it’s I’ve. I’ve been using agent. Studio code for months. I have Visual Studio code insiders, so a lot of this stuff is even fresher. It’s not. It’s not the same. It’s not the Visual Studio and agent mode is not the same as Visual Studio code insiders. The other thing is ssms 25 obviously now, which is an announcement in and of itself, has copilot enabled as well, and that’s a disparate experience. So all three of these. Code editor. Well I DS code editors. All have disparate experiences. For the most part. I really hope they’re gonna bring em all together. So it’s the same and I really like Visual Studio code the most. So we’ll see this and then, Brian, you wanna talk about application modernization? Brian Haydin 19:07 Yeah, there was a. There was a slide during the keynote where they talked about doing like as 400 migrations and that’s what we’re kind of talking about with the app modernization. So there are tools that are out there, like your net upgrade assistant that you know have some copilot features that are enabled in it. This is taking it to another level. This is basically incorporating agent mode into not only the discovery, but the implementation of. Migrating your legacy code, whether it is in javais400.net doesn’t really matter. That’s what we’re talking about here. I have a customer that is really interested in this. They’ve got a huge monolithic, you know, application millions of line of code, lines of code and I specifically talked with this group about the app modernization. It’s currently available in preview for Java. Right now as it stands. And. And Net’s gonna be released in a couple of weeks. The claim that they made to me, I said, well, give me an idea of how well this thing is gonna work and they have some examples of customers that are using it in the Java workloads. And it took what would normally be about a three-week kind of process down to like 3 days. So think about, think about that kind of scale in terms of being able to modernize your application and we’re not talking about like switching the entire framework. We’re talking about being able to do not the minimal viable, but cloud readiness, you know, being able to get your applic. Up to snuff to be able to be hosted in the cloud. So fantastic tool. I can’t wait to start playing with it around here. We don’t have a ton of legacy applications, but you know our customers do. And so that’ll be. That’ll be something to pay attention to. And then the next topic is like the the SRE agent. So I got a couple of slides on that. We can talk through. You know the SRE agent. You know, we’re talking about site reliability engineer and I think last year, Bill, they announced copilot in Azure and it took weeks and weeks and maybe even months for us to be able to get access to. Copilot in Azure. This is like the next level and the next level and the next level above that, you know, in terms of what it’s capable of doing. First and foremost, it can help you just answer questions about your ecosystem. Like what are the resources being used? What are the? What are the types of resources that are being used? And you can see here in that chat we’re just making a pie chart of how this is now. Are some of these things already built into the dashboards in in Azure? Sure, but they’re not in a way that you can really ask. Questions about it and maybe do like impromptu like filtering. So really cool. Really cool tool. I know Max got a bunch of other thoughts on it, so I’ll let him sort of dive into it as well. Mac Krawiec 22:04 Hmm. Yeah, we we can kind of skip over this first slide. The the prompts are one thing. A lot of these questions are are things that you can ask. The bottom line is, is continuously. This agent once deployed to your to your environment, continuously learns about your Azure resources. Quite possibly the most interesting thing I saw. Is the Azure SRE agent go through finding errors and then actually resolving them and doing even more? So what you see here on the left is the SRE agreent is determining that an Azure monitor alert was detected. It acknowledges that alert. All within seconds. Actually one second. And then and and some of the stuff they were actually showing us live during the keynote, which was another amazing thing. And it wasn’t in the recording or a screenshot like like in a PowerPoint. But the SRE agent acknowledges the the the alert. It actually starts investigating doing. Things and determining and performing analysis on logs and and all these other things components. To determine what went wrong and in the next slide, what you’ll see is it’ll it’ll. It’ll come to. It’ll come to come back to you in some time and say, hey, you know, this is what happened. This is what happened, and based on this I think it’s this or that. And then the next thing after that is will actually automatically resolve your problem. And so if if in this scenario it was a bad deployment, it’ll revert the swap and go back to the previous deployment and then the next best thing which I was actually blown away again seeing this live at the time is that it created an issue related to. This. In GitHub so that a developer can say can pick it up and know that this happened. And then ultimately, you know, go further on it and determine what what’s what to do next. And then the beautiful part of this is you can then use GitHub copilot, your peer programmer, to actually resolve the issue thereafter. So the Azure SRE agent totally blew my mind. My mind away. I think it’s gonna be hugely powerful to just ensuring that everything is up and running and and is gravy. Brian Haydin 24:19 That was some of the more jaw-dropping demonstrations where they were just assigning copilot to the user story or to the bog, and it would go out and create a pull request in like 6 minutes. You know, after thinking about it and refactoring the code, absolutely fantastic if. Mac Krawiec 24:34 Either. Brian Haydin 24:39 You didn’t get a chance to watch that live, you know, go and go and take a look at some of the recordings around that or some of the sessions. Mac Krawiec 24:47 Absolutely. Brian Haydin 24:49 So agent factory, you wanna talk a little bit about Agent Factory? Mac Krawiec 24:55 Yes, so Azure AI Foundry is a new not not entirely a new stack, but their Microsoft is going all in on this stack of the founding models, the foundry agents. And they’re really thinking about how do we put everything AI into this in the into this Azure a. Foundry and then also allow for proper observability and governance now. Microsoft is going head first into Azure AI Foundry. They’re working with a tremendous load of partners. I think now we’re up to 11,000 different models. All available at your fingertips within Azure AI Foundry. The one cool thing? Yeah, actually. And we’ll talk a little bit about those models here in a second. The one cool thing I like that I took out of out of build amongst other things is actually this. This spectrum and it shows. You you know, what do you want to use based on your use case and and you and your company for for your problems. Now it might be just a drag and drop UI and with copilot studio. But a lot of us will find ourselves in this past past. Place right in the middle and that’s where Azure AI Foundry is monumental. There’s over 80 million companies already. I’m sorry 80,000 companies already using Azure AI Foundry. And and so. So if we go on to the next slide, we’ll get a a quick list of the types of models that we have available. One of the bigger announcements. Was the. Was the partnership with hugging face? So all of those models that are in hugging face will have available to us. And then there is also the bigger one. We had a chance to watch Elan speak, but grok so grok and and Grok 3. Will will all be available to us. In Azure AI Foundry, so you can start to leverage. All sorts of models grok and and included inside of your agents. Brian Haydin 27:06 Now if you take a step back and sort of look at the picture holistically, MCP, not a Microsoft protocol. 8:00 to 8:00 not a Microsoft Protocol hugging face open, you know, open source models. Grok. You know, being able to partner with other companies in the ecosystem was a theme that I heard screaming loud and clear from Satya during during the keynote. So I you know this really to me was I had AI had an opportunity to to give some feedback to, you know, several of the different product owners, you know, within Microsoft not only just at the booths, but also like in some private like feedback sessions and I. Gave them hats off to, you know, to that openness in the ecosystem. But more importantly, what are they going to do to support that, you know? That openness as well. And we’re going to get to the some of the security features in a little bit, but I think it’s, you know, it’s important to just pause and reflect on that and say that whether? You know, just whatever, whatever ecosystem is out there, Microsoft is willing to play nicely in that sandbox right now. Model router Mac and you you said to me this morning. Oh man, I can’t believe I forgot about that. I thought this was, you know, super cool. And did you want to give some thoughts? I’ve got some thoughts on this as well. Mac Krawiec 28:35 Yeah, so so initially my thoughts were this kind of simplifies a lot. If you’re if. If you’re trying to be somewhat cost effective, the model router effectively is something that determines what model to best use. Which in Azure AI Foundry which will ultimately be cost effective for you and and and and also fast. So as you can see in this graphic on the right, we’re asking in a simple question of what’s 2 + 2. And based on that, it’s gonna say, OK. I wanna use GPT 41 nano small lightweight model. Just get a quick answer. But then if you give it something a little bit more complex, it determines that no, you know, 41 nano. Probably not powerful enough. I’m gonna need 04 mini. This model router is an auto magic ability for you to just pick whatever model you need. Even though you may not know what model to choose. And that’s crazy powerful. Largely because even when I use visual. When I use visual or GitHub copilot, I sometimes ask it the same question and use different models to see what I could use it for best. This takes that away. It’s a huge timesaver, and it also is a is a monthly saver to some degree. But yeah, Brian, thoughts. Brian Haydin 29:54 Yeah, I, you know, stacking out of that you you hit the the nail on the head around the cost optimization. So most of my customers are really concerned about building model or building in AI and AI foundry and trying to calculate what the tokens are gonna cost when you open it up on a website that has, you know, 50,100 thousand million visitors a day, you know. Those tokens start to add up really, really quickly. And so having the model router be able to. To, you know, discern between different types of requests and find the optimal path. You know from a cost optimization standpoint. Is really, really important to how we’re going to be developing these. The other thing that really sticks out to me too, is that it’s multimodal. So it’s not just text that you can send to this to the model router, but it gives you the ability to include images and stuff like that so that it’ll pick up what you know that aspect of the routing as well. Mac Krawiec 30:42 Yeah. Brian Haydin 30:51 So very cool feature. Looking forward to incorporating this into a lot of our solutions. And. So. So you know the agent agent service enabling trust, choice and tools. Mac, what are your thoughts on this? Mac Krawiec 31:10 Yeah, I’ll close it up and then we’ll move on to local foundry. I know you. You really want to talk about that? So with Azure AI foundry agent service effectively there there’s there’s three things in mind that you can see. And I really like this graphic. There’s a lot of elements of of bring your own right. So you could bring your own file storage, your own search index. Pretty soon we’re gonna be able to integrate our our V Nets. Another cool announcement in this in this kind of general space. Is that agents are gonna get their own intra objects so that you’re not identifying agent with with with an intra and that’s gonna allow us a whole new space of of of permissions and and just security, but also usability. So there’s we’re not losing trust by and security by using Azure AI Foundry. That’s a big one. The next one is choice 11,000 actually over 11,000 models. Available to us at our fingertips that we can with one click with a button click. Already enable and then the. Probably the most important. Well, I can’t say most important cause security is important too. But one of the most important ones? I can’t pick one is is just the interfacing right. Brian Haydin 32:15 They’re almost important. Mac Krawiec 32:20 So Azure AI Foundry makes it extremely easy for us to to interface our models and and our and our agents with fabric with Azure functions like we talked already. I mean SharePoint, you, you, you you know, you name it probably supported. And if it’s not yet, it will be in very short. Amount of time. A lot of companies are using MCP to integrate with these with Azure AI foundry well or at least at the very minimum since you can it’s already integrated so it’s a huge selling point and those 3 pillars are are pretty fundamental. Brian Haydin 33:00 Yeah, we did this slide already. So we’ll skip past that. All right, so local foundry. I was pretty pumped when I saw this and I got even more pumped when I got to play with it at build and then asked the the team some questions around it. I’m so pumped about it. I’m gonna. I’m actually gonna try and do a demo of this really quick, so here’s a quick little link. Quick little link up at the top. Aka Miss Foundry local. Quick start. So that’s gonna give you kind of a walk through, but I’ll, I’ll give you the the 32nd tour of that. So let me share. Let me share. Another window. Here is just PowerShell, so in order to get started I’ve got just a couple of commands I’m gonna run through here. I’m gonna run a win, get command to install local foundry. Thankfully, I’ve done this already, so it’s gonna look for some versioning. Make sure that it’s good. And but real simple really simple command there and then the next command I’m gonna run is basically running the foundry with the model and I’m specifying which model I wanna run in this particular case. I’m gonna run five 3.5 mini. And that’s about a 2.2 gig model. So I thankfully have. Also, you know, downloaded that and I don’t really have to wait too long for it to to to go and then boom, Foundry’s open and and running now locally. And I can go ahead and type something. So what is the difference? Between. Mcp and A to a. And it is. It is now answering, you know that question pretty verbosely and this is all running locally on my workstation right now. You know it’s using my CPU. And my NPU. And it’s not necessarily really ******* things out that bad. So let me switch sharing here again. And I got to do this like super quick. But if you can see this really quick, you’re seeing the CP utilization did spike a little bit? Memory was pretty unaffected. I had some Internet stuff going on, but you know my NP, you even got nagged a little bit, but it didn’t. You know, it didn’t crush my computer. Running these these models locally. So I think that like just being able to do that in just a few seconds was pretty cool. And let’s go back into the rest of the presentation. And there I did a demo live. In and you all got to see it without me embarrassing myself. So let’s resume. Other thing to say about that is that this is being brought to you both Mac and Windows. So AI foundry or the local foundry is gonna be available whether you’re Mac developer or if you’re Windows developer as well. So why is this important to me? Here’s the biggest you know, the biggest thing about it is that if I can run these models locally, I am not hitting round trips. To Azure AI service, I am not paying for tokens. I am not doing any of that kind of stuff, so if you wanna have a cost effective solution and have that running in a container, if you wanna have that running, you know on a a local server or an edge device this opens up the door for you. To be able to do that pretty efficiently and without a lot of horsepower that’s gonna go behind it. So I’m excited about it, Mac. Did you have any? Any other thoughts? Mac Krawiec 36:57 No, I I my main one was in conversations with you was was actually the cost effective. So I remember I was walking up to. I got together with Brian again at build and I see him as a laptop and a table at Bill. I’m like, what are you doing? He was doing just this. But what came what comes to mind is just the scalability of it. And then ultimately being able to bring this home, bring this to your bring this to your company and bring this onsite and start. You know, I actually have a general good feeling that. This is gonna get adopted by by a lot of companies and and we’re gonna see a lot more, more effort around this and from Microsoft. Brian Haydin 37:34 Yeah, absolutely. And you know the the cost of Gpu’s, you know, starting to drop pretty rapidly and so being able to run even bigger models than you know something like a 3.5 mini is going to be, you know, efficient and effective as well. Mac Krawiec 37:52 Yeah. Brian Haydin 37:53 So let’s let’s pause here for a second. Kind of bring it all together, right? So we’ve got MCP, you know, we’ve got local AI and we have these frameworks now. That. Help us orchestrate all this. All being delivered on a local device and so building solutions that aren’t gonna rely on these software as a service or plat. You know, products like open AI Foundry or AAI Foundry I think is really important as people start to adopt these agentic patterns and workflows, you know. Going forward. Couple of things that are coming in the windows. Ecosystem. So in MCP registry within the windows. Operating system. And actual MCP servers for Windows, allowing you being able to use Sdks to interact with different components of the the operating system. Those are in private preview and I they weren’t really sharing a lot of demos or or information about edit build, but that’s kind of the direction where some of the stuff is going, being able to do device local development. So let’s talk a little bit about safety. You know this is super important topic and most of my conversations with CIOCT OS CEOs talk about safety. You know, you know first and foremost, can we use it? What should we use? What are the guard rails? And and so some really cool things came out at build that help with not just like the, you know, four walls of my tenant making sure that my. Data is secure, but how can I test it? How can I be sure that this is that this is working and there’s a bunch of different ways that you want to test your AI agents and your AI applications? You know all the way from the development to the endpoints when you’ve when you’ve deployed it and red teaming, you know AI is a thing that hasn’t really. Been matured over the last, you know, 24 months or so. Or 36 months since. You know we’ve all been using open AI, and now that actually is being released in Mac, you and I hung out with the red teaming team for a little bit. I spent, I think more time with them the day before. But what this is is just like you would do a penetration test on any application that you might have built. You know, you get a third party service to kind of evaluate it. This is directly addressing your AI that’s being enabled in the application, so it’s gonna look at, you know, prompt tax. It’s gonna look at, you know, all sorts of different components. And it’s super easy to get spun up. There’s some agent libraries that you can that you can leverage to get it up and running and run automated scans to pick up things. Like content safety or you know injecting. Attacks or jailbreaking? All that stuff is already built into their Red Team agent. Anything else that jumped out at you, Mac when you were when you were talking to him? Mac Krawiec 41:13 Yeah. One thing that jumped out to me was the ability to integrate some of these tests. The these red team tests, but then also there was just general content safety tests into pipelines. So you’re so you’re able to take a lot of these? Brian Haydin 41:27 Yep. Mac Krawiec 41:34 Tests and integrate them to your deployment as as as your agent’s being deployed. That’s huge. For just ensuring that everything is is gravy. From from you know from day over day. So that would be one of the thoughts that that I had. There’s also some libraries that, both for pyth, I know for Python. I got to actually play with. With it, we were able to implement some specific tests using this Python library. For agents there is, that’s out. Where you’re where you’re able to just. It’s almost like unit testing your your agent, which I liked, but they’re just building a lot more and more. I know that it used to be a kind of a forethought. And or not a forethought. And now it’s more of an afterthought, but I think there were definitely catching up. Brian Haydin 42:21 And so that’s red teaming, and that is Mac. You you sort of touched on it. Something you’ll be able to integrate into your DevOps pipeline, but there’s also the Microsoft purview aspect of things as well. So Microsoft purview has got a few features that are not only looking at your data ecosystem or your corporate ecosystem, but. Has also incorporated into AI Foundry as well. So some of the things you can do around content safety would be spotlighting to detect, like injection attacks in real time. And not only detect them, but set up alerts. And you know, sort of control, like different thresholds for, you know, if it’s a self harm or if it’s a violent, you know, type of interaction. So spotlighting is coming. Mac. You mentioned the agent ID. So this is something that, you know, we’re starting to see. I started to see a little bit with the copilot studio agents and people just building, you know, a plethora of agents in their ecosystem. And you know, I think back to like couple years ago when the power platform Center of Excellence, you know, was starting to be ad. By a lot of organizations with with the agent sprawl and. You had the same thing happening in all these different. Copilot studio agents, or custom agents that are being built and so. You know, that’s probably needs to be managed and those agent identities are gonna be important for us to to keep tabs on as well. Anything else to add there? Mac Krawiec 44:06 No, you pretty much hit the nail on the head and I kind of got ahead of you earlier. Brian Haydin 44:10 Yeah, you did a little bit, but all right, so we talked a little bit about the risk monitoring and alerts. So here in purview, you’re in AI foundry. You’re going to be able to, you know, set up different risk criteria and have those notify alerts. So Jailbreak attempt was, you know, on your AI model deployment was detected by prompt Shields. And and then you can, you know, assign different risk levels to that. And take actions against it. We are. We got 15 minutes or so. Think we’re doing relatively OK on this. Another thing that I thought was really cool. Was the sensitivity labels from purview, so for those that may not be, you know, kind of like familiar with that or or whatever, but with Microsoft purview, you get the ability to, you know, automatically detect and tag content within within your documents. And and so natural inclination is to create these agents around, you know, different content types. And so a paradigm, you know, comes up where I have this privileged document that may be tagged as confidential, maybe tagged as Phi. You know, some sort of sensitivity label has been applied to that document. But I’ve exposed that document in a rag, you know, some. Sort of a rag pipeline, maybe through copilot studio and so by leveraging Perv D You can. Be able to take that that sensitivity label and it bubbles up with the content that’s being served up by the agent. And so you know, it would prevent users from being able to, you know, copy and paste the AI content, which might have sensitive organization information into an. E-mail or into a chat. So this was really cool. I have not seen it work in person and be able to play with it and see what you can and can’t do. But I did see some demos, you know, not only in the keynote, but you know also working with the purview team. We dove into this a little bit, but. Super great to see this feature. It’s going to make a lot of our customers more comfortable using agents. Alright, so Mac, I think we talked with the power platform, the guy who created this, right? This is his. This is this was his job and I don’t remember seeing an announcement about it. You know in build and we had some questions, you know, around sprawl and governance and you know different different things. He’s like hey, have you checked this out, Matt, you actually saw it before like you were. You were just in the power platform like the day before and saw it. Mac Krawiec 46:51 Yeah. Absolutely. So I we were, you know, first day of build, I ended up having to do something for a client on that day as well. So I got into power platform and I look, I’m like there’s a there’s a, there’s a switch. I’m like new power platform admin center. I’m like, OK. Well, let’s let’s click and see what happens. So that’s I I got to look at it pretty, pretty quickly. They never announced it, not at least not loudly. If, if not for that button. And if not for that conversation? We probably wouldn’t really heard much about it, but it’s not just the power platform admin center that’s getting a facelift. It’s everything. I mean, it’s power, it’s power. Apps got a new got an entirely new UI. And and a few others, but the the huge observability gains around power platform admin Center is is is huge. And then we got the talking with with with the product owner that you mentioned, they’re actually putting in a lot of changes. I think there’s five pillars that they’re focusing on within power platform and there’s going to be just a lot of incoming changes around how you look at power platform. There is. So they’re going to, there’s going to be managed platforms. So we have managed solutions, managed environments and now there’s going to be entirely managed platforms. So. So they’re adding on new stuff and adding on new language to confuse us a little bit more, but then also give us more, more, more ability. To do other things. Brian Haydin 48:21 Yeah, thing that really jumped out at me, you know, with this announcement and playing with the tool when we got back here in town. Was that, you know, copilot studio? It was sort of like this like step child of the power platform. It had a whole different URL, different management console that you went to, and now it’s kind of been brought into the power platform admin center. And I was thought it was weird for it to be really disconnected because the nature of using copilot agents isn’t just about, you know, doing things like answering questions about documents. It’s about performing actions as well and having to hop back and forth between, you know, power autom. And power apps and copilot studio not just from a development standpoint, but from like a governance picture and being able to understand that to me was was a little bit disjointed and I’m glad to see that they brought it all together as part of the power platform other. Things you can see here on the the graph that’s on that page is, you know they’ve changed the copilot studio licensing. And So what is happening, you know, with my messages, what am I doing for, you know, how am I trending for consumption? We’re no longer just fixated on a $200.00 a month copilot studio license for the organization. They’re going to like a peer consumption model. So you can have as many ages as you want, and then they just accrue messages or tokens. If you want to think of it in that nature. And so being able to like, go and take a look at those metrics within the dashboard is important as well we. Got a lot to go through yet so. Multi agent orchestration. I think this is like. Like super cool. You know, in the power platform, being able to support you know that multi agent orchestration, you can now connect in copilot studio. You’ll be able to connect to other agents that you’ve created. So you can specialize the skills and have reusability around that and build, you know more and more complex. You know agent frameworks and agent patterns. Another announcement or another thing that I walked through was I’m doing a talk later tonight. Anybody. Somebody in Milwaukee wants to come, you know, talk about this. But you know, having the copilot, you know, functionality built into the developer skill set is allowing you to take business requirements and directly translate those into agentic solutions. And so you don’t need to be a developer or a power platform developer even necessarily understand how to build these flows. All you really need to do is design really good business requirements and you can build agentic solutions out of that. A lot of lot of features are coming out that are going to make it fun to play with. There’s plans. That is it. It’s not just building a plan, but it’s it’s a thing in power apps that just became generally available. And so you can do. End to end solution building just by creating kind of a plan in the power platform agent feeds Mac. You touched a little bit on the managed platform, you know being able to expand the governance and security aspects of these solutions as we’re starting to build them. Mac Krawiec 51:41 There’s one tidbit if you don’t mind. Before we go too far about multi agent or about multi agents, I got to get into AI had a chance to to spend time in a lab where we effectively built multi agent with semantic kernels. Brian Haydin 51:43 Yeah, yeah. Mac Krawiec 51:56 So semantic kernel, the framework is if you’re more interested in the high code, the semantic kernel is is adopting some of the nuances of autogen. Where depending on whether your model is deterministic. Or not. And if it is deterministic, you can actually start to employ. Specific orchestration patterns all that are and it’s and it’s kind of it’s it’s pretty easy to to choose. Hey, you want to use sequential or concurrent or magentic. It’s basically an enum and and ultimately you can use semantic kernel to drive your deterministic multi agent implementation. That’s super powerful as I was. So we got to do this in the lab. And then I got out of the lab. I’m like, OK. I’m gonna redo this clients that clients and that other clients entire implementation ’cause now I can I can I can use this so so these multi agents especially in in with this kind of auto Gen. feel is is is pretty cool to see. Brian Haydin 53:01 So yeah, absolutely. And Mac, you can attest to the fact that a lot of the solutions that I’m bringing to our customers are are using low code for you know things like a user interface. But we still need some of the high code type of workloads you know to be able to support some of the advanced reasoning that we’re doing or you know the the advanced pattern matching or different types of models, so. We’ve got two different aspects. First off, now I can bring code pro code. Inside of my low code environment and develop things like, you know Python script to be able to do a plotting, you know some, you know, something very specific that’s not supported in the platform or if I’m from the other direction, I can use my Visual Studio environment to. Develop more complex copilot studio agents and deploy those into the copilot ecosystem, giving advanced capabilities that you know I’m more comfortable as a developer developing in. And. You know, and still have a tool that is supported in the copilot studio ecosystem. So a lot of synergies between the Pro code and the Low Code environment. And I think that is you know that’s going to make us a a better organization model support and power platform. I’m just going to blow through this stuff, Mac, because we we only have 6 minutes left and I want to make sure that we stay here on time. I promised 59 minutes. Model support and power platform, so being able to fine tune your models and use. Task specific model, so a couple of different concepts here. You’re gonna be able to fine tune towards documents that you have. Maybe you have a legal analyzer and you wanted to answer in certain ways, so you can build task specific models and select from there. But you can also bring your models in from Azure from the AI foundry as well and leverage it that way. Cool interfaces to be able to customize your model for your task, making it really simple for you to use. Let’s see databases in fabric. So I know this is a little weird looking slide. There’s not. There’s just a big bunch of white space foundry connection for Azure for Cosmos DB. I thought this was really cool Cosmos DB very, you know AI centric rag centric, you know kind of concept bringing that into you know fabric ecosystem. I actually talked to the cosmos people about why I would want to use this versus just deploying Cosmos directly. And the there’s a couple things you’d want to consider. Performance is one of EM costs is another. The biggest difference between the cost aspect of it is gonna be in Cosmos DB. You pay for R use. You know essentially, you know, resource units. And in fabric you use capacity units and I don’t know that there’s a real equation there that matches up, you know specifically to it, but if you’re paying for C capacity units and fabric, you don’t have to worry about the ru’s in Cosmos DB. So that’s probably the biggest reason why you would use it one way or the other. SQL Server 2025 was announced. Some you know, it’s getting a step up in being able to support the AI workloads. You not only have, you know, vector column types, but you’ve got these new functions that you can use within SQL Server 2025. And and then Postgres Mac. I’m gonna let you speak on this for a second because we got into an argument. And I think you won at the end of the day. So talk about Postgres. Mac Krawiec 56:33 Yeah, we have two minutes, but it’s probably gonna consume a lot of it. But Postgres there is a lot with the plugins extend the extensibility of Postgres. There’s just a ton of more things that you can do with it in the in the, in the space of, you know, vectors and all that I got to deploy an entire application using Postgres during the lab as well. So that was that was really great. It’s really. Easy to. Connect to Azure using Postgres from from the database. Directly. So there’s a lot of opportunities there. It’s cheaper, so it’s cheaper because it’s not being licensed as a as a Microsoft SQL Server, but you also lose a lot. You lose some performance. Monitoring and just in general, that is no longer taken care of for you because you’re kind of on your own. But there’s other there’s other answers to that. So I’ve seen a lot of cool things happen in the AI space using Postgres. And I I actually, I’m probably gonna be leaning as a proponent of that over over SQL Server most likely. Brian Haydin 57:40 It supports support for RAG Systems and LLM you know type of workloads. Being a structured database versus a Nosql database I think is important and one of my biggest challenges with recommending Postgres was. The cost of it in Azure and I’m not going to plug anybody’s names, but we ran into a Microsoft partner that has a really low cost deployment mechanisms. So maybe talk to us, you know, reach out to us a little bit later on that. I’m gonna skip the next couple of slides cause where they ended I thought was really cool. And Microsoft discovery. So I remember them talking about researcher and discovery and some of the other copilot features that that were coming out. Discovery was really cool one. They talked about developing or actually having copilot go out and research and develop. Develop a new chemical compound. You know, what was it for? Cooling you you know this. It was cooling your your race cars, right? So how do you find this stuff? I the phase two rollout of researcher was announced at build and I guess we’re in phase two because I found it. And so for those of you that are wondering if you have it, if you have a copilot license for the organization. Go to, add new agents and click on the built by Microsoft and you’ll see some of those frontier copilot agents that are available to you. So that’s how I was able to find researcher. For reference, I do have the ChatGPT deep research and you know the Pro license and whatever else. And I’d love. Always felt that like Microsoft needed to get me into the game. So I stopped trying to use, you know, other tools. And I actually am pretty happy with the researcher. You know right now and I think it’s a fantastic tool. The other thing that I did not mention and thought was really interesting. Speaking of researcher and open AI in my my pro licensing purview is going to be able to support content safety across other ecosystems as well. So if you do have a git. ChatGPT Enterprise license you can get covered under your purview. Licensing is as well with that. So with that said, next steps if you want to learn a little bit more, talk to us a little bit more detail. Have a one-on-one, you know, reach out to us. We’re going to send out a survey here at the end of this. And we’re happy to talk more about this. We love this AI at work, Azure Business, app Security, anything. And thanks for your time. Loved it. We made it. Mac Krawiec 1:00:43 Nice job. Brian Haydin 1:00:45 All right. Thanks everybody, click the link. Mac Krawiec 1:00:47 Thanks all.