View Recording: Best of Microsoft Build
May 30, 2024

Join us for an engaging discussion following the Microsoft Build conference in Seattle, where technology enthusiasts and industry professionals come together to explore the latest innovations and trends in the tech world. Microsoft Build has a rich history of showcasing groundbreaking technologies and inspiring developers to push the boundaries of what's possible. This year, the main theme of Microsoft Build is "How will AI shape your future?", highlighting the pivotal role that artificial intelligence plays in shaping the future of technology and innovation. Join our experts after Microsoft Build as they share their firsthand experiences and insights from the event.

Brian Haydin 0:05
Thanks to everybody for joining. We had a lot of people sign up for this webinar, and I'm really excited to share what I learned in Seattle. This is coming from the eyes of a former developer; I don't write a ton of code anymore, but maybe we'll talk about that when I introduce myself. Nathan, how about you introduce yourself really quick?

Nathan Lasnoski 0:27
Sure. Nice to see everybody. I'm Nathan Lasnoski, Concurrency's Chief Technology Officer, and I just loved Build. I loved everything about it: the new announcements, the way it built on what we talked about last year and went deeper. I'm thrilled to have this conversation. If you want to connect with Brian or me, hit our QR codes, currently on the screen; they go to our LinkedIn profiles. We love to build and create communities, so we're excited to connect with you and talk more about the exciting things we're covering today.

Brian Haydin 1:01
Yeah.
And so, I'm a solution architect at Concurrency. I've been out to Build, I think, three or four times now. This was a fantastic Build, and feel free to reach out to me on LinkedIn; I got my official Build badge today. Oh, and I'm going to give you a little bit of fair warning.

Nathan Lasnoski 1:17
Nice.

Brian Haydin 1:19
I typically tell myself to slow down when I do presentations so I don't talk too fast. I am ripping through this today, because there is way too much content for us to get through. So to kick things off, I wanted to level-set a little bit on what Build 2023 was. Last year was really exciting. There was a big wow factor: we got to see what copilots looked like. It was just incredible. Seeing the interactions and what you could do inside the Microsoft ecosystem was the kind of experience I will never forget. There was one other moment like that; I don't want to talk about Google too much, but when they did that booking-your-hairdresser kind of call, that was a wow moment. Build 2023 was even better. It was "this is stuff I can do," and I've been using all these copilots for the entire year. But what was kind of missing was how developers were actually going to use it. I remember in 2023 there were a couple of booths I went to about building various copilots, and even the product teams were saying, "Well, eventually you're going to be able to put this into a marketplace, but we haven't really built the marketplace, and we haven't really finished these SDKs." This year was all about enabling developers to build copilots and leverage all these different tools. So, very exciting. That's what we're going to talk about today. And the first thing I'm going to talk about is GPT-4o.
So, just a couple of weeks ago GPT-4o was released, and it was brought into the ecosystem and really highlighted as part of Build this year. Two things I would talk about here. First off, GPT-4o was available in the Microsoft ecosystem almost immediately after it was delivered from OpenAI. On the right side of the screen you can see that you can go into your Azure OpenAI service right now and deploy a GPT-4o model. And if you're wondering how cool this is: what you're seeing here is an image, a picture I took on my phone at one of the sessions, and I said, "Explain this to me." This is something you would not have been able to do with any of the previous GPT models. It's multimodal: I've got this image, I've got some people in there, I've got some text on the screen at an odd angle, and I just said, "Hey, explain this image to me." It came back and said it looks like this is part of a Microsoft event, there's some branding; it knew what the words were, it was able to read them and describe the image fairly accurately. So this is now available in the Azure OpenAI service, and you can start to play with it. One caveat, one thing you should understand: like last year when the Azure OpenAI service was released, this GPT-4o model is throttled. I believe the current throttle is around 10 or 15 requests every two minutes, but as they build out capacity in the Azure infrastructure, I'm sure it will become more readily available. Very, very cool, super powerful GPT model, multimodal, absolutely fantastic. The other thing they really stressed was building the developer community. Microsoft announced a couple of new applied-skills credentials.
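As a sketch of the kind of multimodal call Brian describes, the helper below shapes a GPT-4o chat message that pairs a text prompt with a base64-encoded photo. The endpoint, key, and deployment name in the commented call are placeholders, not values from the talk:

```python
import base64

def build_vision_request(image_bytes: bytes, prompt: str) -> list:
    """Pair a text prompt with a base64 image in one user message,
    the multimodal content shape GPT-4o chat completions accept."""
    b64 = base64.b64encode(image_bytes).decode()
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }]

# Against an Azure OpenAI deployment of gpt-4o, the call itself would
# look roughly like this (resource name, key, and deployment name are
# placeholders):
#
#   from openai import AzureOpenAI
#   client = AzureOpenAI(azure_endpoint="https://<resource>.openai.azure.com",
#                        api_key="<key>", api_version="2024-06-01")
#   resp = client.chat.completions.create(
#       model="gpt-4o",  # your deployment name
#       messages=build_vision_request(open("slide.jpg", "rb").read(),
#                                     "Explain this image to me"))
#   print(resp.choices[0].message.content)
```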
These aren't certifications; they're learning tracks, essentially, for building AI into your application ecosystem, and we'll talk about the specifics of the different components you can use and what's out there. But I wanted to call this out because there were several talks; Scott Hanselman was talking about getting into these training tracks. So if you're looking to dive deeper into this and you felt a little bit lost in the ecosystem, Microsoft announced these learning tracks for you.

Nathan Lasnoski 5:36
I'm not usually a big Microsoft training guy, but I was really impressed by how quickly they had a practical certification, like a mini-cert, on Semantic Kernel. That's already a thing: there's learning material and guides to go through. Given how new it is, and how fundamental it is to the agent ecosystem now, that was pretty cool to me.

Brian Haydin 6:00
Yeah. And that first bullet point, too: how to actually prompt your GitHub Copilot. Adoption is a big thing, and we talk about this with our customers all the time, Nathan, whether it's GitHub Copilot or the M365 copilots. You can give people the license, but if they don't use it, they're not getting the value. I was at a talk last night where the presenter made a point that really nails that first bullet: if you aren't investing the $20 a month for your developers to use Copilot, you are wasting their money and their time. This is a tool that will increase productivity 40 to 60%, literally 40 to 60% more productivity out of your developer for $360 a year. It's a no-brainer; you're not being a good steward of people's money if you aren't using it. Alright, so, Visual Studio benefits; again, more on this learning track and supporting the developer.
I did talk to a friend of mine, Jacqueline; she's awesome, one of the product managers I've met over the last couple of years, and I asked her what's the big thing she wanted me to talk about. In your MSDN subscription there's now a new benefit: credits of up to $900 for virtual events or onsite events like VS Live. If you haven't done VS Live, it's like a more focused, workshop-based mini-Build, if you want to think of it that way. There are four of them every year, spaced out around the country. There's one coming up, I believe in June, in Seattle; the last one was in Chicago. They do them across the different areas. So that is a huge benefit you're going to want to take advantage of. I looked at some of the online training; there are virtual sessions on how to develop these AI capabilities inside your applications, so definitely check that out. It's a huge benefit: those virtual sessions run maybe $1,200 to $1,300, so $900 off is basically half off, which is pretty cool. I did also ask her a little bit about the next version, the next release. Visual Studio 2022 is the last major release, and there isn't anything on the roadmap she was able to share with me; they don't have any real plans to release a new version of Visual Studio. So I asked her: what about being more like Visual Studio Code and not having to always install new versions? I had a lot of trouble getting prepped for Build on my personal computer, trying to get all the components up to date and all that kind of stuff. And she said, right now that's not even something they're thinking about, because Visual Studio is more for an enterprise environment; most companies want to have control over what versions of different software are being installed. So, I don't know.
I gave her the feedback. I didn't like the answer. OK, next up: we talked about GitHub a little bit. GitHub extensions. GitHub extensions are ways for you to interact with different organizations that have set up specialized Q&A bots, if you want to think of them that way. So you can add Docker and ask a question specifically about Docker, and that's feeding in through an extension. Developers now have the ability to add their own extensions into GitHub and prompt against them as well; that capability to build your own extensions is out in private preview. But these extensions will now be available to you in GitHub as a GitHub user, so that's pretty exciting. I know the screen is a little bit small here in terms of what's actually on it, but there's a variety of different ones, and they'll probably be adding more on a weekly basis moving forward. Developers' life in Teams got an upgrade, too. You're starting to see some really cool cards coming out in Copilot and inside of Teams. Here's one I thought was really cool: you can take code snippets, basically just link to a repo inside of Teams, and it builds out the card for you. So as developers are collaborating and working in Teams, they're able to share code in a more real-time fashion without having to click the link and go take a look at the code snippet. Again, the mantra here is that the developer is taking a first-class seat at this Build; all these different features tie together. And then lastly, before we dive deep into the technology, there is a new AI Toolkit out for Visual Studio Code. This is an extension for VS Code that is going to make your life easier as you're working with different models.
It exposes you to the AI Toolkit and the model catalog, which is a set of curated models from the Azure AI Studio, but it's also going to have support for Hugging Face models as well, so you'll be able to bring stuff in there. It also sets up a model playground so you can experiment with things right inside of Visual Studio Code rather than having to go across different platforms, as well as do a little bit of fine-tuning directly in VS Code. A couple of other little things to talk about: you can do basically single-click deploys from Visual Studio into your container apps, and also get some of the telemetry metrics to see how things are performing in real time. So this is a really cool feature. I did not have a lot of success getting it installed; it is in preview mode, so there are some versioning things you want to be careful about. Give yourself a little bit of time to get it up and running, and make sure you get the right versions and updates and all that kind of stuff. Moving on: one of my aha moments at Build this year was around API Management. One of the problems organizations are having using these models and the Azure OpenAI service is the expense. I remember talking to a CIO a couple of weeks ago who shared a story: oops, they made a mistake while playing around with this and got a bill for $40,000 or $50,000. So it's real, especially if you're using GPT-4 models, which get expensive with the token counts; and when you set up all these automations that keep firing things off, it gets really, really pricey. So now, in API Management, there are two major things I'd like to talk about. First, you can apply policies to your OpenAI service endpoints to do throttling and token management, so you can limit the kind of expense you're going to incur right at the API level.
The other part is that the OpenAI service is elevated to a first-class citizen in APIM. You can essentially add your OpenAI endpoints directly from the API Management platform, and it will manage all the security aspects for you and get everything wired up so you don't have to do it manually. It will discover the OpenAI instances you have in your subscription, and then you can apply the different policies you want to the different endpoints. The policies supported include token rate limits; a policy for setting up redundancy and failover, like a load-balancing kind of thing; and lastly, a new policy that lets you emit additional telemetry, tying requests back to IP addresses or subscription IDs so you can better monitor what's going on, in addition to the throttling capabilities. Very cool, and very easy to set up. I watched a demo that was 10 minutes, and boom, boom, boom, everything was set up. It was really, really cool. Here's an example of what that token policy looks like. And then, oh, semantic caching; man, this goes so fast, I keep forgetting stuff. The semantic caching policy is really, really cool. Let's say you ask your OpenAI service a question like "Tell me how to build an elephant trailer" or something like that. That response goes into the cache, and then you ask a very similar question: "How do I make an elephant trailer?" Two different wordings are being used, and typically that would result in another request, but it's essentially the same answer. Semantic caching looks at the request, sees that a semantically similar request was already sent, and then uses the cache, so you're not getting hit with the tokens again. That's another way they're going to help save money and also perform better.
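The token-limit and semantic-caching behaviors Brian describes are configured as APIM policy XML. A sketch along the lines of Microsoft's GenAI gateway policies might look like the fragment below; these policy and attribute names come from the preview and may evolve, and the embeddings backend id is a placeholder:

```xml
<policies>
  <inbound>
    <base />
    <!-- Cap token spend per subscription; over-limit requests are rejected -->
    <azure-openai-token-limit counter-key="@(context.Subscription.Id)"
                              tokens-per-minute="5000"
                              estimate-prompt-tokens="true" />
    <!-- Serve semantically similar prompts from cache instead of the model -->
    <azure-openai-semantic-cache-lookup score-threshold="0.8"
                                        embeddings-backend-id="embeddings-backend"
                                        embeddings-backend-auth="system-assigned" />
  </inbound>
  <outbound>
    <base />
    <!-- Store fresh completions so later similar prompts can hit the cache -->
    <azure-openai-semantic-cache-store duration="120" />
  </outbound>
</policies>
```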
So you're not waiting for the LLM to generate a response. APIM support: very cool, very excited about it. Nice to see this elevated to production-level support, and being able to govern the OpenAI service through APIM. .NET 9 Preview 4 is out; that was released at Build. There are a couple of cool features that were talked about, and I'll dive into those a little bit. But before I get there: .NET 9 is set to be released at .NET Conf this year. Save the date; that's November 12th through the 14th, and I'm certain we're going to be doing a webinar. I know a bunch of the local user groups are going to be doing launch events. It should be a good time. So what was included? At a high level, there's a bunch of AI features that we'll dive into. There's this thing called .NET Aspire, and I didn't really know what that was; I walked up to the booth and asked, "Am I an idiot for not knowing what this is?" And they said, "No, no, we just released this; let me tell you all about it." So I'll tell you all about that. And then lastly, the big thing I noticed was that MongoDB got elevated to supported status in Entity Framework Core, so if you're already using MongoDB, you can start using Entity Framework for that as well. But before I go too much further: when I was talking to the .NET Aspire guys, they shared with me that there is a new OpenAI client library for .NET. There was a .NET library to connect to OpenAI natively inside your C# code, but it was open source and not managed by Microsoft. This one is being released officially. I don't know if the official announcement has been made, so I asked, "Can I talk about this?" And he said, "Well, if you see something published, maybe; there will be an official blog post."
I haven't seen the official blog post, but I did find a reference to the client library online, which means somebody leaked it before me and I'm not going to get in trouble for breaking my NDA. This is going to allow you to use your .NET code to interact with these GPT LLMs with really easy-to-use code; all the configuration for your requests gets type safety and all that goodness as well. As an aside, he did kind of ask for my feedback, so maybe if you have an opinion, drop something in the chat; that would be kind of cool. Maybe we could do a poll, but I'm not ready for that and I'm talking way too fast. They asked me: what do you think about having very specific class libraries for each one of these models? So a specific class for chat, a specific class for whichever model you want to use. Right now they have a single class for each of those, mostly because the configurations for how you interact with them are different. Or would you rather have one class to rule them all? They're looking for that feedback as they advance this, so feel free to drop something in the chat if you've got an opinion on it. But going back to the AI features: one of the coolest things I think developers are going to be able to use is the new .NET Smart Components. You can see a little animated GIF here. You've got this form with your typical first name, last name, phone number, email address fields. It shows going to the clipboard, in this case copying the content of an email, and then clicking the Smart Paste button; that sends all the raw text to an AI service to figure out what's supposed to go where, and it auto-fills your fields. I think this is a really cool feature.
This is just one way AI can be used a little more tactically, to make the user experience of your apps better without having to build custom implementations of how to do this. It's available in Blazor and would be a great way to get your feet wet and start using it in a variety of scenarios. The address stuff is pretty easy, but maybe you've got some claims forms where you're copying data back and forth all the time; this would make that easier. Smart TextArea basically gives you autocomplete for your sentences as you're typing, so that's really awesome. Smart ComboBox is pretty similar: you type in a word and it finds a match in the combo list based on the semantic value of what you're typing. And then finally, there are some new features; you started to see some of this work in .NET 8, but it's coming to a maturity level. You can now do things like local embeddings. What this means is that you can read some values from a table, load that data up as embeddings, and then run a semantic search locally to get a result. The benefit here is that you're not making round trips to a service; you're not using any consumption-billed service, you're just running on a local CPU. So those are some cool features around the coding aspect. I promised we would talk a little bit about .NET Aspire. Before I went up to the booth, I looked at it and read the description and thought, OK, this is not making any sense to me. So I just asked them: what is this thing? And he described it, and I said, "Oh, so this is like Unity, but a little cooler, right?"
And he said, "No, no, no, it's an orchestration framework; it's about how you build this stuff, the apps and the tooling." And I'm listening and thinking, yeah, but the code you're showing me looks like Unity; it's just how I manage my dependencies. And at the end of the day that's kind of what it is, but a little bit more than that. It's not just IoC, you know, dependency injection; it's actually orchestration between the applications. A good way to think about this: I might have a couple of different APIs, one that's using Redis and one that's not. Well, I can orchestrate who's using what services, and then I don't have to go in and manually set up all my different connection strings for all these shared services. Everything is managed by the Aspire framework. There's some good quick-start tooling; it also acts as kind of an app launcher, and it gives you the ability to look at the telemetry of the resources in real time. You can see on the screen here there's a little dashboard on the bottom right; you can monitor the health of the application and the different components running in it in real time, and that comes basically out of the box. So I'm excited to start looking at it and seeing how it can simplify application development, especially greenfield, kick-starting new things. So check into that; it was a cool feature. Nathan, you added this one.

Nathan Lasnoski 24:21
Yeah, I was really excited about what they're talking about with GitHub Copilot Workspaces.
In particular, it goes along with some of the other agent conversations we're going to have later in the deck. What I thought was really interesting is that initial versions of GitHub Copilot were like, "give me a suggestion on how to write this code," and they were grounded in generally available content, but not grounded in the content of your actual GitHub environment. Now that code is grounded in your environment; that's change one. Change two is that it's getting to a point where it can start to build a plan based upon an intent. If I said, "I want to build an application that looks like X," it can give me the components of that application that need to exist and start to build out a plan for building it. So it can be a collaborator not just at the base-code level, but in "what do I need to do to construct this application at scale?" The other thing they added that was really interesting: as it's suggesting changes to the code, or doing ground-up development as it goes through the different steps in the workflow, you can edit as you go. You're not just taking what it brings to the table; you have the ability to collaborate with the code suggestions it's making, or with the entire build plan, almost in the sense that you're participating with another person who's collaborating with you along the way. So, really exciting. You heard a noticeable change: this isn't just a copilot anymore. Now they want you to start handing off activities to GitHub Copilot, functioning within workspaces, allowing it to do things for you, not just suggest things you could do.

Brian Haydin 26:20
Yes. Copilot Workspaces. I didn't see a lot of demos outside of the keynote, but it looked absolutely powerful, and I'm looking forward to working with it with the team.
Alright, next: there are Copilot+ PCs. I've got another slide that talks about what the PCs are, but leading into that: what's going to start happening is that you can start building your applications not relying on a service, but actually building native applications for Windows desktops, or even deploying to your cloud services, using the CPUs, GPUs, and NPUs that are built into your PCs. They announced a small language model, which is still pretty sizable at 3.3 billion parameters, called Phi Silica. This is a deployable model that you can start to use inside your applications natively to do generative AI work. That was a huge announcement; I watched a lot of the work going on at the booths and the sessions and talked to a bunch of people. Very, very cool. In order to support that, though, we need the runtime and libraries to go with it. So coming out is the new Windows Copilot Runtime, which is going to let you interact at all the different levels, inside copilots and inside your applications, along with APIs that expose not just the Phi Silica model you bring along, but also, as Microsoft builds on this, 40 or so different models that are going to ship natively with Windows. So you get access to those. Recall is this cool feature where you're able to look at a user's activity over time and basically recall events that may have happened, using natural language. I saw a couple of demos; I didn't get a chance to really unpack it in my head, but if you think about being able to go back in time and review activities, that's going to be a really cool feature.
And then for this localized development, there's going to be a lot more support for the compute power. DirectML is going to support quantization; we've got ONNX Runtime available as part of native Windows, and WebNN to be able to interact with these NPUs that are coming out. These are all features that are going to make performance better and direct the computational traffic to the right CPU, GPU, or NPU in the ecosystem. So what is this Copilot+ PC? I have not preordered one; I'm going to. They're coming out, I'm not entirely sure of the date, but let's call it somewhere between six and eight weeks until these things start to ship. These are going to be PCs that have AI processing built in to do really, really cool things. There were a lot of different applications showcased in the keynotes and also in the walk-around, interactive environments. Something really cool that I actually interacted with on the PCs: I was talking with a young woman from a South Asian country who started speaking in her native language, and I was speaking in my wife's native language, and we were able to have a conversation that was closed-captioned across the top of the screen. Really, really cool. The captions were all in English; we weren't speaking English, but we were able to converse just by reading the closed captions on the screens. Way cool. And if you saw the keynote, you saw the real-time ability for Copilot to assist a user in an application. The one in the demo was Minecraft, and the storyline was: I'm a dad and my son plays Minecraft all the time.
But I don't know how to use this game, and I want to be able to play it with my son. So I jump in and ask Copilot, "Hey, do you know how to use this thing?" And it literally walked the user through it: "It looks like you're playing Minecraft, and you need to build a sword because there are some zombies behind you and you've got to fight them off. Here are the resources you need; to get those, walk in this direction." It was way cool. So these are the kinds of features coming with the Copilot+ PCs. I didn't think the price was out of line with the current Surface offering, and there are a few other vendors coming out with them. I'm excited; I'm definitely going to preorder one, whether it's for my work PC or my personal PC, or maybe I'll get both. We'll see. Copilot in Azure: this is being able to use a copilot inside the Azure portal to ask questions about your ecosystem, the same way you'd use M365 Copilot to ask for the next time Nathan and I are available to meet. For example: "Show me all the VMs running at 10% utilization or less so I can destroy them," or "Help me deploy a new app service and configure some of these things." This feature was released during Build. It's in preview, and while it's in preview it's going to be free for everybody to use, so go ahead and get a taste of it today and get a flavor for what Copilot can bring you. I don't know what the cost structure is going to be; they didn't talk about it, no announcements on that. But it's in preview, and it's going to be there for at least a few months for everybody to use. Next: cost-effective RAG at scale.
Developers right now are exploring different ways to use AI capabilities in RAG-type solutions, but they get kind of expensive when you don't control things. Specifically with Azure AI Search, there were performance limitations in terms of the size of the indexes you could create, and also the computational performance, which results in consumption cost. What's happening now is that there is support for vector search quantization, which essentially reduces the amount of work required to get the same kind of result. You're going from, say, 32-bit to 8-bit values, so the resolution is a little lower, but in a lot of use cases you may find you get the same answer at a much lower cost. They also increased the index size capacity, without any additional cost. So, just better support for the RAG ecosystem. Were there any cost estimators or examples that can be shared? Not off the top of my head; there's so much going on in my head, I'm not sure I can answer that with any justice, and I'm not sure they've updated the Azure cost calculator yet. What else? Man, we are only halfway through the slides, Nathan, and we are halfway through the time. We are really on point, right on track.

Nathan Lasnoski 35:03
Right on track.

Brian Haydin 35:06
And I'm going a mile a minute. Azure AI Studio. One of the comments I heard at the booth was that over time, you're going to see all these different Azure AI playgrounds and areas start to converge into one. It's not happening overnight, but it is starting to coalesce, and you're starting to see some of this already.
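The quantization trade-off Brian describes, trading 32-bit resolution for cheap 8-bit vectors while usually keeping the same nearest match, can be illustrated with a toy sketch; the embedding values here are made up for illustration:

```python
import math

def quantize_int8(vec):
    """Scalar-quantize float components into the int8 range [-127, 127],
    keeping a per-vector scale so similarity can still be approximated."""
    m = max(abs(x) for x in vec) or 1.0
    scale = m / 127.0
    return [round(x / scale) for x in vec], scale

def cosine(a, b):
    """Cosine similarity between two equal-length, nonzero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy embedding: quantizing barely moves the similarity score, which is
# why the cheaper 8-bit index usually returns the same top result.
full = [0.12, -0.98, 0.33, 0.05]
q, scale = quantize_int8(full)
approx = [x * scale for x in q]
```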
So on the right side, you’re seeing, I went in preparation for this and created an Azure AI Studio instance, and I was already able to see all the different AI components that I had in my subscription for other use cases all sort of being brought into this. So you’re going to see all this converge into AI Studio versus Azure ML and all the other different components; you’re going to start using it as a single ecosystem. Umm, it’s very developer-centric. Uh, you know, a code-first kind of developer experience. Uh, being able to interact directly with the CLI and, you know, work with this inside of Visual Studio using the extensions that I talked about, the AI extensions. Umm, in addition, you know, this is bringing a single source of where your model catalog is. You know, we talked about that a little bit in that AI toolkit, but essentially it’s leveraging AI Studio as well. So models as a service are coming out, so you’ll be able to use, you know, a bunch of different models, prompt flow. And then probably the last thing that I liked about this, well, the thing that I liked the most about it, was being able to monitor the telemetry and do your tracing and debugging directly through the studio. So all these features are becoming a little bit more robust for developers to be able to interact inside of the Azure ecosystem. Umm, and not having to learn like six different ways of doing things to get AI embedded into some of your code bases. Nathan, was this you? Evaluation framework. Nathan Lasnoski 37:25 This was me, yes. One of the things that we’ve seen as we’ve been building out AI solutions for customers has been that the sort of base protections that exist within an AI system are insufficient to protect it.
So if you think about the ways that an individual can work around the system prompt to get an AI system to respond in a way that it’s not supposed to respond, they’re pretty significant, right? So we’ve had to build these shims that exist between the person that’s interacting with the AI system and the actual AI system itself, to protect either (a) the quality of the responses or (b) the types of things that it’s allowed to say or not say. And what you’re seeing here, and Microsoft talked a lot about it at Build, if you wanna watch a really great session, watch Sarah Bird’s sessions on these topics, is they’re building AI protection tooling right into the platform itself, and then they can expose that protection tooling on a variety of models, not just OpenAI. So what you can see here is that they’re building the ability to create an AI evaluation framework where you essentially create the test material, whether it’s your own questions or the sort of stock questions that are available to test certain types of jailbreaks or problems, and then have that auto-run on a regular basis against your platform, and then also build the shim by using Microsoft’s core tooling. So if you kind of progress to the next one. Click next on it, right? Brian Haydin 39:01 Umm, working on it? Yep. Next, there we go. Nathan Lasnoski 39:05 There we go. OK, cool. So this is an example of the kinds of protections that can be built in, essentially through Microsoft’s shim, onto your AI platform. So groundedness is probably one of the most significant new additions. Things like self-harm or hateful content or violent content existed before, and that’s continued to get enhancements, but the one that’s really exciting to me is the groundedness protection, because a lot of times you’d see that you build an AI system and you would even build in reference assets for it to go back to.
And say, you know, show me where this existed in the source material and give me a link to it. But even doing that, it might say that it was in the source material when it wasn’t really in the source material. So the groundedness protection’s goal is to evaluate: was it actually in the source material? And then being able to have that validation before it goes back to the customer. So you can see what Microsoft is doing is building not only the vehicle to provide protections, but also an ability to evaluate that those protections are working, by giving you evaluation material with the questions or different types of things you want to test, and then being able to evaluate them. Brian Haydin 40:14 Yeah, I wanna stack onto that. I was having a conversation with a few QA folks yesterday, and even some UX people as well. In order to build a really mature, you know, feature or system that uses these components, let’s not forget that this is all new technology. There’s not a lot of testing, there’s not a lot of UX support, you know, in the tooling built into it. And so this is a way that Microsoft is maturing that to be part of the normal development lifecycle. Uh, you know, being able to do groundedness detection is so important for a QA person who’s trying to evaluate: is this response actually hallucinating or not? And can I actually get this out in front of my customers in the form of a public-facing chat bot, right? So a lot of maturity, you know, happening, and again, that just goes back to the theme that I started with, right? This is for the developers now. It’s not just for the data scientists to do their magic and their voodoo. Umm, agents. I know we talked a lot about agents, you know, Nathan, and there’s a bunch of tools out there to be able to build agents: LangChain, you know, LlamaIndex, LangGraph. These are all components that are more Python-centric frameworks, but Semantic Kernel.
I know you and I have talked about this, Nathan. It’s more for the Microsoft developer, right? It’s a .NET-based framework that we can use. You wanna stack on that? Nathan Lasnoski 41:57 Yeah. One of the pieces of feedback I had heard was that about 50% of the Microsoft copilots that have been built by Microsoft themselves have been built with Semantic Kernel. The others started before Semantic Kernel even existed, and they’re having to catch up now. And what they’re finding is the ones built on Semantic Kernel are able to take advantage of the framework, so they can move faster; they don’t have to build their own framework. And that would be my number one advice: try not to build your own framework for building agents, because this is a space that’s evolving so quickly. And you’ll see in the successive slides, which we’ll move very quickly through, that Microsoft has invested in the open source community to create Semantic Kernel and AutoGen. So if you keep moving to the next slide, what you’ll find is there’s a tremendous existing asset around Semantic Kernel, based on a lot of community engagement around its existence. But also, that’s really how they’re building it in general. So if you keep moving to the next one, keep going, there’s a Discord community around it. But what is it really? It’s an ability to combine an AI system that has its own context but also has an ability to recall memory from past events, to align that with the ability to create a plan and execute on it, just like we talked about earlier with Copilot Workspace: the ability to create a plan, take action, interact with other systems, and then bring that into relationship with other agents that you’ve created that can hand off activities, and do so with a framework that you didn’t build yourselves. That’s really the main point: you don’t have to build the framework yourself.
You just have to learn a framework that others are contributing to as well. Brian Haydin 43:44 Yeah, and really accelerate your ability to develop, you know, along with Microsoft on that journey. You brought it up before about the 50%: the copilots that are most successful in bringing new features are the ones that are using Semantic Kernel. The other ones are starting to lag behind at this point, and probably because they’re trying to figure out how quickly they can get to Semantic Kernel. Nathan Lasnoski 44:08 Totally. Brian Haydin 44:09 Yeah. And, you know, the memory aspect, right? This is all about the recall, you know, stuff that’s coming, right? Nathan Lasnoski 44:17 Yeah, I thought this was interesting because it was really speaking to the diversity of ways that you’re going to give your agent memories, whether it’s short-term or long-term memory in a sense, or even querying other systems. Some of our more mature AI customers are now not building just RAG patterns that return from documents. Not that that’s not valuable, but they’re also starting to build agents that interact with business systems and then perform actions, and that’s tied to the ability to return information. That would be a memory for that agent. And then ultimately this next slide is about the idea that we’re not necessarily going to build one agent to rule them all. We’re going to have specialized agents, and this gets into the Copilot Studio conversation of one agent that does the plan, and another one hands off to an agent that can perform X, Y, or Z. And you’re going to see a lot of, like, if this last year was the year of RAG, this next year is gonna be the year of agents performing actions, taking activities that we trust them to do. Brian Haydin 45:17 Yeah, right. Just send a request, you know, and it’s done. You have that workload taken care of for you.
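As a rough illustration of that plan-act-remember loop the agent discussion above describes: this is a hand-rolled toy, not Semantic Kernel's actual API (Semantic Kernel is a real .NET/Python SDK; every class and method name below is made up for the example).

```python
# Toy agent: skills it can invoke, plus a memory of past actions it can recall.
class Agent:
    def __init__(self, name, skills):
        self.name = name
        self.skills = skills          # action name -> callable
        self.memory = []              # running log of past events

    def remember(self, event):
        self.memory.append(event)

    def recall(self, keyword):
        """Return past events matching a keyword (the 'long-term memory' idea)."""
        return [e for e in self.memory if keyword in e]

    def act(self, action, *args):
        """Perform a named skill, then record what happened."""
        result = self.skills[action](*args)
        self.remember(f"{action} -> {result}")
        return result

claims_agent = Agent("claims", {"approve": lambda amt: f"approved ${amt}"})
claims_agent.act("approve", 450)
print(claims_agent.recall("approve"))  # ['approve -> approved $450']
```

The point of using a maintained framework instead of a class like this is everything around the loop: planners, connectors to business systems, and hand-offs between specialized agents come for free.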
We have 15 minutes left, so let’s blow through a couple of these things really quick. Uh, Copilot Studio, Team Copilots. Umm, Nathan, uh, let me just jump to this, uh, process automation and Gen AI. Alright, so Power Platform. You know, I don’t know if I can do a raise of hands or whatever, but, uh, you know, the Power Platform is a really great tool for process automation, and there are a couple of really cool announcements; that first slide was one of them. So in the Power Platform, let’s first talk about cloud flows, which are: I’m gonna do some automation around maybe some different APIs or systems that I can connect to, cloud-based. But then there’s also a use case for doing desktop automation. Like, UiPath, you know, is pretty mature in that space, and the Power Platform has desktop flows as well. But, you know, it’s kind of been lagging behind a little bit. With the new show-and-tell experience, though, I don’t think it’s lagging behind anymore. I saw some really cool demos on the desktop flows. And so a good use case here would be, and we talk to customers about this all the time: I’ve got this AP process where I get some emails and I’ve got to go to my AS/400, but the AS/400 isn’t something that’s cloud-based. I can’t connect to that data, so I can’t use a traditional cloud flow. I have to actually go into these screens and perform these functions. So with the new show-and-tell experience, you can essentially bring up a screen recorder and explain to the RPA tool how you’re using the application so that it can build out the automation for you. And even more importantly, it’s got this new self-healing feature. There were no demos at Build about it, but when I was talking to the product owner, she whipped out her phone and she’s like, check this out.
The demo that she showed me was somebody entering an equation in Calculator, but somebody had swapped the seven key around; they switched the seven and nine keys. I didn’t know you could do that, Microsoft, but apparently they did, and it broke their desktop flow. And so in this video she’s showing me that it pops up in the Power Automate environment and says, hey, I can’t find the seven key anymore where the mouse click was supposed to be, but I did find, you know, the seven key over here. Do you want me to repair your desktop flow? And you click the button and boom, it works. I mean, self-healing. Totally cool. Let’s talk a little bit about the automation flows with Gen AI. So this was a talk that I went to. Very cool. Step number one: define the problem using words. In this particular case, it was pretty robust. I mean, he didn’t type all this stuff live on the screen, but it was: I am a claims adjuster that is doing automotive claims for people that get into accidents, and I’ve got some business rules. I’ve got a document here that he references in the prompt that tells you what the coverage limits are for different aspects of it. Like, some of it’s service work, some of it’s parts work. And essentially, what I want you to do is: anything that’s under $1,000 within these parameters, I want you to just go ahead and approve it. I don’t wanna deal with it. If it’s under $1,000, just kick it off and get the approval going. Ohh, and by the way, I’ve got some people that have defrauded us in the past, so here’s a list of people that, if you see a claim from them, automatically reject it. So he did this, loaded it up in the chat, and the next step was to review it. And there were a couple of things that he did.
One was he reviewed the inputs and the outputs: I’m looking for an approval process, and here’s the specific inputs that I want. And then he was able to take that document, which was just a direct link on SharePoint, and ground it as a RAG document. So load it up into the Power Automate flow, and then test it, and it validated that everything worked correctly, and he was done. So it got rid of all that messy work of building all the different steps and building out the business rules. It’s all handled by Gen AI. Super powerful. It was actually one of the coolest things that I saw at Build. Now, in addition to that, more support for Copilot inside of the Power Platform. So if we wanna do some adjustments, uh, you know, this is going to become a little bit more intuitive. I had some personal experience just before Build, so they kind of had this stuff out a little earlier, where I was really having trouble getting some connectors to work, and then I asked Copilot to do it and it got rid of all the messy stuff, and it was super fun to work with. The other part is the thing around agents that they talked about: there are new custom copilots that you can start using. They can, you know, learn based on the feedback from the users. There’s a lot of different plugins to be able to connect different things, and it’s all through a copilot: describe the problem that you’re trying to solve inside the Power Platform environment, and then it’s gonna give you some of these baseline flows built out for you. New Copilot extensions: there’s a whole bunch of them. That library is getting really big. I think there are about 50 different extensions that are now available. And even more importantly, if your company has something to offer, you can build your own Copilot extensions and put those into the marketplace as well, which was not possible before.
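For what it's worth, the business rules the demo generated a flow from boil down to something this small. The demo built the flow from a prompt, so this hand-written sketch is purely illustrative; only the $1,000 auto-approve threshold and the fraud-list idea come from the talk, and the names on the fraud list are invented.

```python
# Claims routing rules from the demo, sketched by hand.
FRAUD_LIST = {"j. doe", "a. grifter"}   # hypothetical known-fraud claimants

def route_claim(claimant: str, amount: float) -> str:
    if claimant.lower() in FRAUD_LIST:
        return "rejected"               # known fraud: auto-reject
    if amount < 1000:
        return "approved"               # under $1,000: auto-approve
    return "manual review"              # everything else goes to an adjuster

print(route_claim("B. Smith", 450))     # approved
print(route_claim("J. Doe", 450))       # rejected
print(route_claim("B. Smith", 2500))    # manual review
```

The interesting part of the demo wasn't the logic, which is trivial, but that the adjuster expressed it in plain language, grounded the coverage limits in a SharePoint document, and got a tested Power Automate flow out the other end.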
And then here is something really cool that I like, but I also am completely terrified about what this is gonna do for governance in SharePoint. Basically, if you wanna create a custom copilot to answer from some very specific documents, maybe this would be, like, you know, your HR documents are sitting here, or some policy documents that you wanna make into a quick little chat bot. Uh, you can go directly from SharePoint, select the five or six documents that you want it to be grounded in, and then it’ll automatically create a copilot for you and publish it to Teams. Super cool, super easy to use, a lot of really good use cases. But I can see, you know, organizations letting this get carried away and having to call Concurrency to say, I’ve got 250 copilots, how do I get rid of them? You know, so very cool, but very scary as well. We’re running really tight on time here, so let’s talk a little bit about Fabric. Two really cool features came out in Fabric. Fabric was announced at Build last year, so I was expecting them to make some course corrections this year at Build, and they did not disappoint. Uh, there are two major use cases when I’m talking to customers about Fabric that make it appropriate or not appropriate for us to build a POC. One is real-time analytics. So Fabric had a limitation on what the data refresh rates could be, and specifically you couldn’t do some of the real-time analytics in the Fabric ecosystem that you can currently do in Power BI, because of how it ingests data. So with Real-Time Intelligence in Microsoft Fabric, you can now start to bring data in as event streams and perform real-time analytics in your dashboards. And this is a game changer. A lot of the manufacturing companies that we work with in the Midwest have IoT sensors all over the place, and they’re trying to create these dashboards, and there wasn’t real-time support.
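Conceptually, what event-stream analytics replaces batch refresh with is continuous windowed aggregation: each reading updates the dashboard figure the moment it arrives. Here is a toy sliding-window average in plain Python, nothing Fabric-specific, just to show the shape of the idea for something like those IoT sensor feeds.

```python
# Rolling average over the newest N readings of an event stream.
from collections import deque

class RollingAverage:
    def __init__(self, window: int):
        self.window = deque(maxlen=window)   # keeps only the newest N readings

    def push(self, reading: float) -> float:
        """Ingest one event and return the up-to-date windowed average."""
        self.window.append(reading)
        return sum(self.window) / len(self.window)

sensor = RollingAverage(window=3)
for temp in [70.0, 72.0, 74.0, 90.0]:        # last reading is a spike
    latest = sensor.push(temp)
print(latest)   # average of the last three readings: [72.0, 74.0, 90.0]
```

In a batch world, that spike would sit unseen until the next scheduled refresh; in a streaming world, the aggregate moves on the very event that carries it.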
So they were always looking at data 20 minutes behind, and that’s no longer a problem. So a lot of support around that. This is gonna be exciting to work with, especially here in Midwest manufacturing, but I’m sure a lot of other people have some use cases as well. The other thing that’s really cool is being able to do external data sharing. So there’s some limited support in this. I mean, this is the first iteration of it, so it’s only gonna work with lakehouses and KQL databases, but you still own your data. You can share it to an external customer’s tenant, to your clients. Umm, it’s not quite mirroring, but it actually is gonna kind of work like that. But the most important part about it, I think, is that your governance, the way that your data is tagged, the ownership of that data, stays within your organization. It’s your governance. Right before this presentation, I was talking to a customer, and that’s super important to them, because they don’t want to lose the governance of that data that they set up. But this still allows them to enrich the data on their own tenant. They’re essentially getting the raw data, or even an enriched form of the data, but they can do further enrichment for their use cases and play with the data on their own. So that’s a really cool feature. And then, random thoughts. I think we actually covered a lot of these. Uh, we did talk about agents and Semantic Kernel. Nathan, thank you for getting me out of that one. But Cosmos DB, I wanna talk a little bit about Cosmos DB. So a feature that was released is built-in vector indexing. I know a lot of people are using Cosmos DB already, but it had some performance limitations, and with these updates I’m gonna go out on a limb here and say that, in addition to AI Search, a lot of companies are going to start using Cosmos DB in their AI solutions.
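The kind of lookup that built-in vector indexing accelerates is nearest-neighbor search over stored documents' embeddings. Here is a brute-force sketch, purely illustrative and not the Cosmos DB API (the service does this server-side against an index, which avoids scanning every item); the documents and their tiny embedding vectors are made up.

```python
# Brute-force vector search over items that carry an embedding property.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

docs = [
    {"id": "claims-faq",   "embedding": [0.9, 0.1, 0.0]},
    {"id": "hr-handbook",  "embedding": [0.1, 0.9, 0.2]},
    {"id": "parts-prices", "embedding": [0.0, 0.2, 0.9]},
]

def nearest(query_embedding, docs, k=1):
    """Top-k documents by cosine similarity (an index avoids this full scan)."""
    ranked = sorted(docs, key=lambda d: cosine(query_embedding, d["embedding"]),
                    reverse=True)
    return [d["id"] for d in ranked[:k]]

print(nearest([0.8, 0.2, 0.1], docs))  # ['claims-faq']
```

The appeal of doing this inside the operational database is that the documents and their vectors live together, so there is no separate index to keep in sync with the source data.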
It was already kind of tailor-made for this, with a couple of little tweaks, so I'm super happy to see that come in. Nathan, you talked a little bit about the Prompt Shields and the groundedness detection. So this is gonna help enhance the safety of AI, make it more testable, make it more extensible. Umm, and then a couple of other little side notes. Uh, Azure Functions are fun to work with. I know that I’ve done some direct editing of the code, but it was kind of a pain. They’re introducing Visual Studio Code embedded into Azure Functions, so it’s gonna be a richer editing experience, being able to edit your code directly in the Azure Function. Pulumi is a popular infrastructure-as-code framework that’s getting support. And then this last bullet point here was something that came directly from my Power Apps team. We work on big, you know, 20- or 50-screen canvas apps, things like that, and the support for Git is pretty lacking. So that’s actually in preview; I believe there was a talk on it, and so that’s coming. And that’s going to enable better support for co-authoring in your canvas apps, or Power Apps. Umm. I’ll pause on this for just a second. Two big resources I think you’re gonna wanna look into: go look at the Book of News. Everything that I talked about here has got a blog post that’s referenced. And then I also talked about the dev blogs for Microsoft; you know, set something up so that you watch those on a regular basis. That’s where some of these announcements are gonna be coming out. Just yesterday I saw a really comprehensive blog post sort of recapping a lot of the things that we talked about, but more in a blog fashion. Alright, next steps. I would love to have a conversation with you if you saw something here that you want to dive a little bit deeper into.
So we do a lot of AI and Copilot executive envisioning workshops, but I would say just any kind of envisioning here: if you wanna talk about Fabric, if you wanna talk about, you know, some of the .NET stuff that’s coming out, copilots, uh, we would love to have that conversation as the next step. So please fill out the survey, and somebody will be in contact with you after that.