Insights View Recording: Govern with Confidence: AI Safety, Red Teaming & Compliance with Microsoft Purview

View Recording: Govern with Confidence: AI Safety, Red Teaming & Compliance with Microsoft Purview

As generative AI moves from experimentation to enterprise-wide adoption, ensuring safety, trust, and compliance is no longer optional—it’s essential. Join us for a practical look at how organizations are implementing AI governance through adversarial testing (AI red teaming) and the powerful capabilities of Microsoft Purview.

We’ll explore proven strategies for labeling, securing, and monitoring AI-generated content, along with lessons learned from real-world use cases. Whether you’re leading AI innovation or responsible for managing risk, you’ll gain insights to help scale AI responsibly and confidently across your organization.

Transcription Collapsed

Brian Haydin 0:06 Well, good morning or afternoon, wherever you’re from. Thanks for joining us for this webinar. We’re going to be talking about governing A I with confidence and a couple of topics I’m going to dive into today are generally A I safety red teaming. If you don’t know what that is, I’ll kind of explain it to you. And A. lot of this is going to be centered around compliance within Microsoft Purview. But before I get started, I guess you guys might want to know who I am. My name is Brian Hayden. I’m A. solution architect here at Concurrency. Just A. couple little fun facts about me. Obviously I’m A. little bit of an outdoorsman, you can see from the picture. I’m the chief nerd here at Concurrency. Love playing with toys. I’ve got two kids, A. 17 year old daughter and A. 10 year old son married. And I guess the fun fact about me is that I’m A. twin, so follow me on LinkedIn. I’ve got. A tech tracker blog that I write every couple of weeks or so, trying to take outdoor analogies to explain some of these technology conversations that I have and make IT A. little bit fun. You’ll probably see some of the outdoorsy stuff coming out through. Today, but why are we here? So generative AI adoption is absolutely exploding this year. 2023 was kind of like the breakout year and as your current state now 95%. Of enterprises, according to some studies, I’ve got some links here, Trustmark and IT Pro studies, but 95% of enterprises are at least piloting or deploying some Gen. AI today already. And you know, 30% of them are maybe 30% of them have, you know, some sort of governance in place. And I think that’s going to be A. problem as we move, you know, move through this over the next year. It’s kind of like, you know, being, you know, just like I’m just going to go fishing on Lake Michigan and for those of you that are in the Midwest. West, like Michigan’s pretty big. It can get pretty dangerous. It’s like just going out there in the middle of A. fog and saying I’m just going to wing it, you know, and figure IT out when I get there. Not A. really great idea. So, you know, let’s see if we can, you know, come up with some strategies today, what you should be doing, what you should be thinking about. in order to get you and your organization ready for for AI. And at the core of this, we’re looking to ensure trust, ensure safety and you know compliance if IT applies to to your organization, because these are things that are no longer optional. It’s pretty much going to be mandatory. And so let’s size up the the A I landscape. I’m going to talk A. little bit about Microsoft’s security stack, kind of like what you’re going to take into your survival kit and put in your backpack. We’ll probably spend A. lot of time talking about purview More than, you know, More than I would like to because I I prefer to write code. But then we’ll head out and we’ll do some A I red teaming and I’ve got A. bunch of different stories that you know we’re going to be sprinkling in. You know today to to help you understand it. So All right, gold rush, 99% of companies are expected to use AI and the big stories that we that we feed that we hear from our customers or the biggest reasons why they want to do it. Are typically productivity gains. That’s the number one idea that people come to us with new revenue, which is the ones that typically get the most traction and then cost savings you know as well. So that that also is you know is pretty important. And you know, I think executives are starting to smell gold, you know, and so they’re they’re panning, you know, A I for some gold and trying to find out, you know, where they should make their investments. But you know, if you think back to like the history of the gold rush, I mean there were All sorts of problems, bus wagons. Abandoned mines and in tech terms, you know, that’s kind of like More like data leaks and hallucinations and biased outcomes. So, so now after this breakout year in 2023 and sort of like, you know, escalation in 20 twenty-four, 2025. Is where I think that the majority’s organizations are going to Start taking action and not just being not just surviving through like this flood of, you know, the gold rush, but actually thriving in the intelligence age. So Start off with A. story, Samsung. This goes back A. couple of years to an older story, but maybe hopefully A. lot of you remember this. But Samsung, some of the engineers were playing around with ChatGPT because it’s what everybody wanted to do is play with ChatGPT. And they accidentally leaked some of their proprietary chip code into ChatGPT. And at the time, ChatGPT like as A. disclaimer, their policy, you know, just used whatever people put into IT to help them re prompt their re. You know, reengineer their their model. And so within days of this leak happening, the legal Team at Samsung, you know, got completely panicked, productivity tanked and they put A. kibosh on ChatGPT, blocked IT and said we’re done. So big, huge PR headache, not just for Samsung, you know, but also for like A. I in general. And that’s where like, you know, over the last couple of years, A. lot of organizations are really worried about that. It’s not as much of A consideration. We’ll get to the reasons why, but. But definitely is, you know, something that you want to be thinking about, you know, as you deploy different solutions. So I want you to like picture, you know, sunrise, you know, maybe you’re going out for some steelhead on the lake. The fog rolls in, your GPS is starting to glitch out A. little bit. What’s the thing that you usually go to when you’re you’re stuck like that? You got this like nice little compass on your boat and that one is, you know, just using the magnetic north and IT works All the time, so. Let’s let’s look at some tools that we can use that are going to have that sort of always working simple but reliable, you know, type of of solutions. So here I’m thinking More along like data protection, privacy compliance. You know, red teaming and adversarial testing and ongoing monitoring are All the kind of things that our compass is going to point this towards. And let’s use, let’s figure out which tools you know we can use to keep us to, you know, true north or true West or true east. So governance. You know, good governance is gonna, is gonna be funneling in trust, safety, you know, in in compliance. So safety, making sure that no one falls overboard. Trust, you know, is making sure that your crew really believes in the plan. And compliance is like when the Coast Guard comes and says do you have All the the things that you know that you need on your boat, they’re gonna give you the the thumbs up. And so the goals of governance are really in the in the state of A I is to secure your data and there’s A. lot of tools that’ll help you do that. Test the resiliency of what you’ve actually built and the controls that you’ve put in place. Make IT safe for you to develop and share your intellectual property, your services, you know, through A. variety of different platforms. Help you with real time threat protection. So being able to identify when things happen just like you would A. DDoS attack and giving the ability to have continuous monitoring, you know throughout and in the Microsoft stack. Really what we’re looking at is A. few different tools that’s gonna that bundles up every layer of the defense into like one single backpack. So you’ve got identity, you have data, model safety, runtime defenses and and then your monitoring aspect. And so if one of those fails, you need to have resiliency across the the platform, make sure that everything is sort of has everybody’s back. So the security stack in Microsoft generally comprises of this list of of items. So at the core of IT is going to be identity and there’s some exciting new features that came out this year around identity with A I. So identity and access management and that’s really what we’re talking about is users or like even like system users or like an agent is A. user. And then Microsoft Purview is like the general data layer, you know, kind of component. So IT this is where it’s going to help you classify and do labeling and protect the data and DOP policies. You’ve got Microsoft Defender for Cloud and Defender for A I. So that provides A. little bit of visibility and hardening into, you know, your implementations. The Azure AI Foundry has some fantastic new features around content safety, prompt Shields, you know, and securing your AI development, you know, kind of at that. At that system layer, you know not so much just at the like the developers are responsible for it, but actually building IT into the the infrastructure layer and then Microsoft Sentinel and Security Copilot. Security Copilot’s pretty cool in terms of the interactions itself. I don’t have any real time examples. I can’t do any demos today, but if you get a chance to play with it, it’s it’s fantastic to be able to like help you uncover and triage you know in real time same you would like same way that you would use like your M365 copilot. So on the identity layer, let’s start off and dig into into the enter agent ID. And so this is new. You know, we’re entering into an era where AI agents aren’t just like tools anymore, like it’s not just something that you deploy to teams. But they’re like catalysts for smarter work in more empowered ways of working. And Microsoft’s, you know, at Microsoft Build, you know, they sort of got into this like like it was really all about agents this year. And now we’re kind of like becoming agent bosses and not just using agents. To do things, but we’re like managing a bunch of agents that are doing work on our behalf. And so in order for this to work we have to like think of them as coworkers and coworkers have identities and and so now Microsoft Entra supports Entra identity. Agent IDs and So what does this really actually look like? So identities now are assigned through Entra and paired to agents and so when you look at. You know your security risks and threats assessments. You’ll start to see these identities show up as well. So visibility is critical to organizations as you build new AI solutions and you want to be able to see which agents have access to what components in your environment. And the agent ID through Entro is how that’s actually going to be managed. And so you’re going to start to see that now should be able to use that and find these agents within your Active Directory, you know, today. So next you know kind of layer, the data layer is Microsoft Purview and so it’s like the map maker of your data lake, right? Like that data lake in the context of like. Fabric, but like you’re in the context of my analogy. So like drawing boundaries and adding like don’t swim here signs when it’s needed. So Microsoft Purview, the things people probably know about it the most is that it scans your data, your documents or even inside of fabric scanning your. Data and it can classify and and attach sensitivity labels to the data. And so that you don’t have to just like go into a document and say, oh, these have a bunch of Social Security numbers. It’s gonna identify the fact that it has Social Security numbers and label that as sensitive information, you know, from the gate. And then it enforces the labels like end to end even within like the AI prompts like Microsoft Purview is starting to really become ingrained into the 365 copilot, you know type of solutions as well. So Microsoft’s layer, the data layer purview, you’ve got data security, data governance, and then like the risk and the compliance stuff are all like components of like the offering that allow you to maintain a secure and compliant system. On the application side, you know I mentioned some of the things of you know that are built into the Azure ecosystem as well and around building applications, you know the predominant tool these days for pro code developers. Is Azure AI Foundry and that’s where you’re like really going to forge custom, custom co-pilots. And so what’s important here is like having the security rails, you know, being baked into the solution, being able to implement with RBAC and network isolation. Prompt shielding as like a middleware component, so we’re not really duct taping things together. And the major things that they have put in place here are content safety, like text content moderation and image content moderation. You can. Configure security alerts for when people do things outside of the boundaries, what you’re expecting or what you want them to do. There’s a lot of and from the get go they’ve had like safety policies that have looked for things like hate and violence or you know, even fairness. You know and self harm and then I did mention network isolation and being able to use private endpoints and networking security that is also being built into the AI foundry as well. So it’s supporting these kind of solutions and scenarios where you want to have it just internal or. You wanna be able to really limit the exposure to the chat bots or however you’re using your LLMS and then finally like the prompt shield. So being able to at the infrastructure level really start to protect yourself against injection attacks and things along those natures. So we’re protecting ourselves at the. At the application level as well through these tools, but once the bot is live, what are you gonna do? How are you gonna manage that? And so the Azure AI content safety is a tool that you can use within Azure AI. AI Foundry as well. And So what you’re gonna do is set up filters almost like you’re on a hiking trail and you run out of water and you gotta use the water filter straw. So but it’s gonna do it’s gonna look at that stuff in real time and filter out things that might be offensive language or. Imagery and it’s before it ever even really hits the user. It’s sometimes even before it hits the LLM. So it’s a runtime defense to keep your system safe and so you don’t have to worry too much about it and it comes out-of-the-box. As well. So there’s a lot of pre-configured filters, you know, set up for you getting real-time security recommendations. So once I have something that’s built, I can go into AI Foundry and I can get some security recommendations. And I don’t have to, you know, I don’t have to be a a super technical expert to get the recommendations and what should I do to actually correct that. So really fantastic features that they’re starting to develop, you know, within the ecosystem. A. Microsoft Defender is gives you that threat visibility. So it’s like the drone’s, you know, drone eyes view of every AI asset across your across your entire ecosystem. And that could be, you know, Azure, AWS, GCP. It looks for like, you know, exposed API keys, you know, misconfigured storage buckets, out-of-date libraries that might exist, and so it’ll generate these security alerts for you and monitor the activities, you know, and keep you up to date to any kind of deficiencies that you might have in your system. And I also mentioned a little bit earlier security co-pilots, this is like your SOC integration and all these alerts that we that we talked about through like Microsoft Defender. Once that happens like analysts are going to get assigned a ticket and generally like the first thing is like. Like what happened, right? Show me how this, show me what happened. So analysts could ask the security copilot like show me the potential prompt injections that happened last week and then help me figure out like a filter that I should be configured that I should be configured in order to prevent these injection. And so these threads help you get faster answers to what you’re looking for and have been. I mean, most of the SRES that I’ve been working with are using Security Copilot just because it’s fantastic and gets them quickly through their. Your call queue. So stories. Another story is I don’t know if you remember when Windows Recall like got called out on the carpet. And so if you don’t know what Windows Recall is, it’s it’s kind of like this feature. In Windows, that acts like your PC’s like photographic memory and sort of captures snapshots of what you’re working on and what you see and kind of lets you go back to a state, you know, a working state, not like rolling back to like. Like an install state, but more like to a working state. And so these screenshots were silently capturing like every app’s content and a lot of privacy people got into like an uproar. Microsoft wound up hitting pause. And you know, Brave like actually put like, you know, disable Windows recall feature into, you know, into their browsers. So there’s a lot of backlash that happened. And you know, lesson learned here is that runtime governance and transparent content really aren’t like, you know, optional accessories. But you have to be careful about it and be transparent about it as well so that you know, you don’t cause, you know, a bunch of concerns or privacy concerns for your users as well. So how would we kind of like, you know, mitigate that is, you know, looking at the, you know, gear updates, you know, throughout the year, you know, we went through. Some of these are ready, but these are the release dates just this year. Defender SPM Multicloud kind of like preview is coming out. Microsoft launched their Enter Agent ID DLP for Copilot. I’ve got a couple of cool slides. I’ve got a couple of cool slides on that coming up and you can see like throughout the year we’ve got new, you know, new investments, new releases, new things that are going from preview to GA, you know, throughout this year and it’s happening very, very quickly. So they keep sharpening, you know, their tools and it’s, you know, important for you. To keep an eye on those blogs so you can stay current and take advantage of these features that are coming out. So let’s dive into a couple of these in a little bit more detail. So data security posture management for AI is. In Microsoft Purview, these are basically default policies that are set up and I’ve listed a few of them here, like text sensitive info that’s been being added to AI sites or detect when users even visiting AI sites. The text sensitive information that’s being shared and so you could set up these, you know, policies you know in Microsoft Purview and it’s kind of like showing you where unlabeled PII lurks and which sites are over permissioned and which copilot sessions might be referencing sensitive things. So green means clean, clean and red means there’s like sharks in the water looking for blood, you know? So fantastic tools to help you kind of monitor that. And I, you know, if you look at the out-of-the-box policies, there’s probably 15 to 20 of them that are pretty that are pretty relevant to most organizations. I mentioned DLP for copilot and so you know when we can think of this is like you can tell copilot hands off anything that’s labeled like top secret policies can be put in place to really limit what copilot and 365 copilot has the ability to do. But you know, it’s now doing DLP checks directly, you know, at the the chat level and checking the files as well to make sure that it’s not pulling data that it shouldn’t, you know, be pulling. It’s always had the the feature in place where. The where copilot only can answer or talk about you know documents or data that you have access to, but now it’s taking it to the next level with like the the security posture. The compliance manager in purview has shifts with ready made checklists for you know like. The EI EU AI Act, things along like that nature. So you don’t necessarily need like to go bushwhacking through the mountains on your own. You know you’ve got a compliance partner with all the compliance templates that the compliance manager offers. It’s not just the assessments or the checklists, you know, but it’s also workflows to help you efficiently get through it or step-by-step guidance to improve, improve your your environment. And then the other thing that’s kind of cool about it is that it’ll review what you have in place and give you a. Client score across a bunch of different metrics and then coach you how to increase that score, you know, over time. So I did mention like sensitivity labels through purview and you know, I I I just wanted to like talk about that for a second. So they’re they’re like trail markers, you know in in the AI system. So they guide AI models, you know, along the safe routes and making sure that you’re, you know, you have access or you’re going in the right direction. And if you don’t have the appropriate labeling in place, it’s like taking all the signs out of the the wilderness or taking all the signs off the trailer. So sooner or later somebody’s gonna, you know, fly off the Cliff. AI did a really good job of making that image for me and but you don’t want to have that happen. So what are some things that we can do to test our system? And you may or may not have heard the term red teaming before. And that like historically has described a process that people go through like basically basic penetration testing, adversarial attacks through a systematic, you know, testing paradigm. But now with with AI, red teaming is like it’s it’s taking on a little bit of a different, a little bit of a different twist. And so AI red teaming is really comprised of doing things like probing, testing and attacking your. Your AI systems and and it’s evolving into a more complex, more rich kind of way of of doing penetration testing. So at the end of the day, red teaming can be, you know, broken into five steps that you kind of rinse and repeat, you know, through a cycle. So plan out your scenarios, orbit with some attacks. You know, get a report that tells you about what the findings are, resolve any of those findings, or, you know, figure out what those weaknesses are and what you’re going to do to mitigate those weaknesses and then retest, you know, until you get to like a a success rate of 0. The issue with this is that like it’s difficult, right? Like all these, all the different ways, like how are you supposed to know how to deal with this? And you know, before I talk about the solutions, let’s talk a little bit about like real life. Why is this important? So. I don’t know. Most people, actually, it’s kind of weird. Not a lot of people know about this or like people that I talked to or like, I didn’t really realize this, but Google Gemini released, you know, one of their models recently and got into like. I don’t want to say trouble, but they got called out pretty quickly because people were noticing a lot of biases that were built into the model. And so some of the, like the really egregious ones were showing me some pictures of German soldiers and the German soldiers were, you know, based on like diversity photos. And rather than sweeping it under the rug, you know, they came out right away and said, hey, we we messed up, we get it, we understand. They took that model offline pretty quickly to control the brand damage and then rolled it out. I think it was maybe like one or two weeks later. But really what would have saved them here is this red teaming aspect of it and having the right kind of testing process in place to look for those kind of biases. And so, so I think that’s an important like lesson learned out of this is. How do you make sure that you’re not just pushing stuff out there to get it out there cause it’s cool, but making sure that it it doesn’t have the biases or the the data exposure that you know that could get your company or organization in trouble. And so, but so red teaming is a solution for this and we’re gonna dive a little bit into the tooling here. But Microsoft’s been doing this red teaming workload for quite a while. So all the way back into like, you know, the 2010s is when the first like, you know, kind of AI red teaming. Showed up and their mantra is like we break it so you can build it back stronger. They’ve got the tools, they have the playbooks, they have the threat taxonomies and it’s all part of like what they put into place, you know, as an out-of-the-box solution. I remember working with the OR talking with the Red Teaming team at Microsoft Build this year and I was just blown away on how easy it was for you to just point this, point this at your new agents and say, you know, give me, you know, give me a report on what’s going wrong with it. So what does it do? You have basically prebuilt scans that says go run my red team scan and you know 5/10/15 minutes later, depending on it, you’re gonna get a scorecard. You know, red, you know, green. You know, telling you like what you know, what were the results, what were the prompt injunction, you know, results on it, what kind of data, you know, exfiltration did I get, you know, and then a detailed report on each one of the incidents that it was able to uncover and. And then a risk categorization and like what kind of exposure it is. So and this is relatively out-of-the-box, but also gives you the ability to configure those tests as well. And so you can build this as part of your CICD pipeline or your ML OPS pipeline. And continuously monitor the deployments of the models that you’re building and generating and deploying to the public. And so just to give you a little bit more view into like what this looks like, I mean there’s some coding aspect to it as well. So giving that kind of configurability to it. And and so it’s, but it’s not a lot. It’s not really heavy, you know, in terms of being able to get this up and running. So you want a little bit more open source freedom. So Pirate is a toolkit that kind of goes along with this. And can generate like schedules and score, you know, scores of, you know, adversarial prompts to build into your pipelines as well. They’ve got a open source GitHub repo here that I put the link on here, but the main components of Pirate are really, you know, the prompts, the orchestrators. Converters, targets and scoring and all this kind of works cohesively as a a library that you can use within within your system. So definitely go check that out and you know and and start to think about it. That’s going to benefit your organization, incorporate into your development workloads. Um, so. Take a deep breath. I’ve been speaking really, really fast, haven’t I? So like taking like just a step back and and thinking about what we’ve been talking about. Mountaineers kind of they do these things called shakedown hikes if you’re not familiar, but before they go biking or hiking on like. You know, Mount Everest, they’re gonna like hike around with their full packs. And so Red Team is kind of like that, right? It’s like a shakedown hike that you’re gonna take before you roll things out. Discover where the blister points are before you get to the trailhead, not when you’re at, you know, 1819. 1821 thousand feet and you don’t need to, you know, get to safety really quickly. So test before you’re sent. You know, simulate your adversary. Minimize your, you know, mid mountain kind of failures by discovering those blister points earlier. Build out some of that team muscle memory. So having your Sherpas, you know, along for the ride, making sure everybody knows what they’re supposed to be doing, and then being able to practice so you get that muscle memory and learning how the terrain changes over time. Super important. Wanted to zoom a little bit more into Purview and break this down for you. So inside of Purview’s, you know, swim lane, really we’re looking at three different areas we’re going to be classifying. You know the data, so that’s auto discovery, your sensitive data, your label application and making your AI aware, you know for scanning and prompt purposes. So Purview really ensures that AI knows what data is off limits before it actually goes and sees it. So then we have to build in a protection layer, you know, kind of like your your water filter, right? So in here you’re doing DLP for copilot. You’ve got your Azure Open AI DLPS as well. You’ve got label inheritance across the AI generated content. And you know, I’m gonna show you a little bit what that looks like and then enforcement of policies across like a bunch of different applications and and services. So even if the AI generated output, you know, has sensitivity labels. You know the that output’s gonna have the same protections as the original document as well as is the idea and then monitoring, right. So we wanna make sure that we, you know, have our AI activity audited and logged and somebody can go back and look at it over time. Risk management for, you know, insiders and doing oversharing analysis and recommendations. Some of the things that we’re going to talk about in a minute. So if your AI goes rogue, you want to be able to see it before the PR team has to get engaged and put the fires out. So I did mention label inheritance, and I thought this was one of the coolest things that I saw at at Microsoft Build. So I’ve got a document that’s marked as confidential. I have a document that’s marked as sensitive. It’s got something in there. And I asked Copilot to go ahead and create a summary for me. And it’s gonna do that cuz I have access to the document, but I wanna make sure that I don’t accidentally share that information as well. So that’s what we’re talking about here. And inside of Microsoft Copilot, you’re gonna see that those. The labels start showing up like you do on the right side of the screen. So when I talk about that label inheritance with Copilot, that’s what I’m talking about. And I thought this was like an aha moment for me at Microsoft Build because it made me realize that LMS don’t care once they. Have the, you know, once they’re generating output and and how do I make sure that people can’t share that information, you know, outside and so fantastic. I was really happy to see Microsoft pull that one together and be able to see that, you know, in person. It was pretty cool. So like Star Purview, hats off to you oversharing. I talk to customers all the time about this and you know, Purview’s got get some tools that can actually help you with this in doing oversharing risk reports. So looking at a SharePoint site and what the permissioning is, because it’s hard for people just to look at a site and say who has access to it. I can go in to a Teams Microsoft Team site, which is SharePoint and say I want to share just this file with somebody. And but if I just looked at the the site, you know area, it’s not really gonna give me that you know that view of who who can see all the different documents documents. So having the ability to like scan and then you know make decisions on those. Is really important and that’s where the oversharing risk reports come in. And so that you know that kind of like work up front is going to help you make sure that you don’t have to bring your PR. Team back in to talk about other problems. Also in purview is like communication compliance. So generating policies you know purview can flag like if employees AI chat is you know harassing people or sneaky. Like data kind of requests like asking for it in a couple of different ways. And it does this by previewing the messages against different corporate policies, risk management policies, and regulatory compliance policies. Some of the key features that the Communication Compliance suite offers is. You know, intelligent, customizable templates that are, you know, out-of-the-box remediation workflows that you can adjust, you know accordingly to to your for your organization and then the ability to generate actionable insights. So what am I actually going to do with this once I see these things happen? And so bring this back into the real world and how this is important. So Concentric, you know did a study and found that like 40, I think it’s 40% of organizations deploy copilot on data sets that are way too. Leaning it too broad, too lax, and they haven’t really done a a due diligence of of this and are gonna cause over oversharing. So their study, if you read the you know TLDR on it, they were pretty pretty blunt about it. copilot has overly broad permissions. I wouldn’t say that copilot does. I would say that people are managing it poorly, so it has a really broad permissions and that’s causing, you know, a risk for your data exposure to be quite significant if you look at 60% of the organizations haven’t really put in. Governance controls in place. The chances are there are sensitive documents that can get overshared. And you know the other key take away is that organizations do need to be, you know, proactive. So go read this study. I thought it was pretty fascinating. And maybe at the end of it, you’re gonna make sure that you’re not gonna be the next headline in the news. And another quick story to to sort of like bring up to to button all this like governance together is. In, you know, March of 2023, a bug in an open source library that ChatGPT used exposed some aspect of conversations and stuff like that. So you know there. Bunch of like root causes and and what actually happened. But at the end of the day, like message content, you know, had some exposure and it caused a bunch of, you know, like PR nightmares. And So what did this, you know, one of the lessons learned kind of goes back to, you know, that red teaming conversation that I had before. And that you need to really think about testing your solutions, not just the AI components, but testing it holistically. And that’s where the red teaming agent can come into play, because it, you know, is a comprehensive suite of different types of things that might actually happen. And especially in multi-user and distributed and distributed systems. So for those of you that want to take screenshots, here’s, you know, kind of the four or five different stories that I talked about today. But I want to bring this up because. If you take a look at the the lessons on each one of these, you know they kind of resonate that the tooling is there for all these problems to have been remediated before they ever became an issue. You know, DLP, red team tests, governance gaps, being able to do like you know scans ahead of time over sharing. And audits, you know, so I think these tools that I that I talked about today are are super important. And so if I was going to ask you to have like 3 kind of key takeaways today, I would say use your compass, you know, use your purview labels and audits to keep you oriented in the right direction. Make sure that you have the AI security stack Shields like in place so you can protect, you know, every layer, you know every layer the the stack that you’re using and you know, be a part of the red team scouts, right? Like keep your eyes open, have somebody that’s looking out for you over your shoulder. And and then I’ll keep you on the right path for some safety. So hope you enjoyed it. If you want to get started on Monday, we can do a few things together, running some oversharing scans and piloting, you know, some DLP rules, but there’s a survey link, you know, in the chat. And so First off, want to get your feedback. Tell me how I did, but also if you’re looking to take a next step or you want to learn a little bit more, here’s a couple ways. Just reach out to us. We can do a AI governance ready readiness assessment taking a look at some of the gaps that you might have in your organization and you know what we. We might recommend to remediate some of that. If you’re looking for a demo on Microsoft Purview, we can walk you through how that can help you monitor and classify and mitigate some of the risks. And then last but not least, Microsoft’s. New funding, new fiscal year started this month. I guess it’s technically still this month and so we can explore opportunities to have Microsoft maybe pay for some of this on the partner LED funding, take a look at the the funding initiatives they might have there as well so. Hope you enjoyed it and I take a quick little look here and see if there are any questions in the chat, but thanks a lot for joining us today. Paige Wamser 41:37 Thanks, Brian. We did have a quick question in the Q&A section. Brian Haydin 41:40 Oh, there we go. Are the pervy functions that we see here open to M365 business license standard license holders? Man, you’re getting into the licensing question, Steve. On the the business, on the business licensing, I’m not as familiar where Purview comes into play, I believe. I believe it’s gonna be some sort of an add on security scheme, but somebody’s gonna shoot me and tell me that I’m wrong. But I’ll tell you what, if you reach out to me, I will definitely get you an answer for that. Paige Wamser 42:21 And we had one more question. Yes, the recording will be provided. Look out for an e-mail later today with that link to the recording. Brian Haydin 42:31 All right, fantastic. Thanks, Paige. Thanks, everybody. Paige Wamser 42:34 Thank you. Brian Haydin 42:35 Have a good day.