View Recordings: Governance & Trust in AI
October 2, 2025

Understand best practices for managing AI risks, building trust, and implementing governance frameworks. As AI adoption grows across enterprises, organizations face new security and governance challenges, from data leakage to biased model outputs. In this webinar, we explore practical strategies for AI security governance, helping your teams safely implement AI while ensuring trust and compliance. Learn how to manage identities, protect sensitive data, and enforce responsible AI principles with Concurrency's Microsoft and ServiceNow expertise. Whether you're in Chicago, Milwaukee, or Minneapolis, this session equips you with the frameworks to increase adoption, reduce risk, and maximize the benefits of AI within your organization.

WHAT YOU'LL LEARN
In this webinar, you'll learn:
- How to manage AI agents and employees under zero trust principles
- Strategies for protecting sensitive data and preventing leaks in AI systems
- Best practices for application access control and monitoring AI usage
- Implementing responsible AI principles for fairness, reliability, and transparency
- Ongoing governance techniques to ensure compliance and maintain trust
- Practical tools and frameworks using Microsoft Purview, Intune, and AI Hub

FREQUENTLY ASKED QUESTIONS

What is AI security governance and why is it important?
AI security governance establishes policies and controls that protect sensitive data, ensure compliance, and maintain trust in AI outputs. It helps organizations mitigate risks like data leakage, unauthorized access, or biased model results while maximizing AI adoption and benefits.

How can organizations implement zero trust for AI agents?
Treat AI agents like employees with unique identities, enforce least privilege access, and use conditional access controls and MFA. Continuous monitoring ensures that only trusted agents interact with sensitive systems and data, reducing exposure to potential threats.

What are best practices for protecting sensitive data in AI models?
Classify and label data using Microsoft Purview, enforce data loss prevention policies, restrict untrusted applications, and monitor AI outputs. Ensuring sensitive data remains secure throughout processing prevents leaks and maintains regulatory compliance.

How do responsible AI principles reduce bias and improve trust?
Responsible AI frameworks promote fairness, inclusivity, transparency, and accountability. Organizations can detect bias, monitor model outputs, and tune AI models to produce reliable results, ensuring stakeholders trust AI decisions.

How can enterprises monitor AI usage and maintain compliance?
Use tools like AI Hub, Defender, and Purview to track interactions, detect risky prompts, and audit AI outputs. Combined with identity governance and ongoing training, this approach ensures secure, compliant, and responsible AI adoption.

ABOUT THE SPEAKER
Joe Steiner is a Solutions Architect at Concurrency with deep expertise in security, governance, and enterprise modernization. He advises organizations on building secure, scalable platforms and implementing best practices for identity, compliance, and responsible AI adoption. Joe is passionate about helping businesses innovate while maintaining trust and resilience.

Event Transcript

There are other security concerns too.
Jailbreaks and indirect prompt injection: because AI models interface with different data sources and systems, you can get unexpected results in the model's responses, perhaps because a malicious actor has injected information that the model then relies on. A model will operate off of that underlying data and those underlying systems, and unless you put safety mechanisms in place to keep the wrong information from getting in there, again you can have unexpected, undesirable results. So with the new risks, you have new attack surfaces too. There's the way people interact with prompts, putting information into AI models that maybe they shouldn't and then asking for responses. There are the responses coming out, where, as we said a moment ago, you need to protect the data inside those responses in the same way the underlying data is protected. There's AI orchestration, where we're combining AI agents with each other and making sure they're interfacing the right way and sharing information correctly. And then the underlying training data, RAG data, and the models themselves need to be managed so that all of that provides the results you would expect, results your users will end up trusting and that the organization can ultimately trust too: that you're not exposing things you don't want to expose, and that you're getting reliable results from the models. The vectors this operates across are at the application and agent level, certainly across the cloud, and across the different data sources in there. Identity is very important here, and we'll talk about it in a moment: really treating agents as additional employees in a lot of ways, identifying them as you would an employee, and carrying that through, tracking it, and managing it accordingly all make a big impact on how you secure your underlying data and your AI estate. The zero trust security model evolved out of the last major shift in technology, the move to cloud, where there were so many different attack surfaces and so many ways things could go sideways. Zero trust came out of asking: how do we enable people to use this technology in a way we can feel comfortable with? You take the never trust, always verify approach: verify every interaction explicitly; use least privilege access and controls to limit what the systems and, ultimately, your users are interfacing with; and always assume breach, so you continually monitor the environment to find where things are going wrong or having unexpected consequences. Then you combine that zero trust model with the responsible AI principles Microsoft has published, where it's not just about security. It's also about how you ensure reliability and safety, how you ensure fairness and inclusiveness in the model and what it's producing, how you have transparency so you know where the data is coming from and how the model created its inferences, and ultimately how you have accountability.
At the end of the day, people need to be accountable for the AI models, but you also need to be able to track what the AI model is doing on its own. Things like the responsible AI framework and the responsible AI dashboard can assist with this beyond security. As with the cloud, creating your AI security governance policy works best if you understand the shared responsibility model. If you're using a bought, SaaS-style AI offering like Copilot, then your usage of it is really the extent of your responsibility, and we're going to spend a lot of time today on that usage layer and a little on the application layer. If you're creating your own model using Azure AI, then your responsibility extends into the model design and the underlying infrastructure. If you're building your own from scratch, there are further pieces you're responsible for. Thankfully, if you're using Copilot or Azure AI, Microsoft undertakes certain responsibilities underneath that, which lets you focus on user behavior, identity and access management, and data governance, and, if you're creating your own model, on the architecture and design of that model and the systems and data it interfaces with. So in order to secure and govern all of that, the best place to start is by crafting an AI policy. How you do that, I believe, is best informed by the things you can actually control underneath it, the areas you need to be thinking about. So we're going to spend much of today focusing on the how, the areas where you would enforce the policy, and use that to shape how you create the policy, educate your employees, and then enforce it across those areas. The first area is identity management: making sure you understand who the users are, but also what the AI models are and what they're doing, and being able to track all of their interactions. In addition, you put controls on applications and access to the different models: what the AI model can interface with, which AI models your users and employees can interface with, and what data can go into them. You also need to protect your data estate. This is probably one of the most fundamental pieces of securing your AI estate: you need a strong data protection mechanism if you're exposing sensitive data through an AI model. Tagging the information, using AI models that respect that tagging, and having the responses inherit that tagging is very useful for providing protection end to end. In addition to those up-front enforcement pieces, you then have ongoing monitoring and governance. There will be changes to the technology, new interactions, and people taking these models in different directions; this is how you prevent things from being used unexpectedly over time. And then finally, the last piece is responsible AI model management. This is most important when you're creating your own AI models, those use cases where maybe I've got a purpose-built autonomous model, or
I've done something beyond what Copilot or Copilot Studio would allow me to do, and then what are some of the principles I need to be thinking about there? So let's start with identity management, and I'm splitting this between managing your users or employees and managing AI agents, because both need to be treated as individual entities. I create an identity for each of them in Entra ID, which then lets me put certain security controls around them: who can interface with the AI agent, the permissions for that, and also the permissions for what systems and data the AI agent can access, because I understand what that agent is and who is behind the requests it makes. On the user side, you use conditional access rules to control which apps and AI agents a user can interface with, given their role in the organization and the type of sensitive information they'll be dealing with. All of that is controlled through Entra, frequently in concert with Intune. In terms of the Entra and Intune architecture behind that, we really recommend being at the cloud-first level of Microsoft's five-step architecture progression. At that cloud-first level we're leveraging the cloud technologies and can take almost full advantage of the security features Entra and Intune provide to control access and to monitor what AI agents are interfacing with, what your users are interfacing with, and so on. Those upper levels of the Intune architecture are really important; achieving that in your organization is an important prerequisite, if you will, for doing everything else we're going to talk about today, and it's certainly something we can have a conversation about: where you are and how you might get there. The next level is application access controls. Now we know who the users are and who the agents are, and we've put controls in place using conditional access, maybe MFA in certain scenarios, judging the risk of the user, the device, and where they are at any point in time. The next step is deciding what application access controls we want to enforce for the user. We want to restrict access to only trusted apps and, for our purposes today, trusted AI agents. There are ways to control within your environment the ability to use outside agents. This will be an ongoing battle, as it was with cloud technologies, but there are ways to at least limit it. Even if you're allowing access, perhaps on mobile devices where you aren't fully managing what they can and can't do, you can at least block your data from going to untrusted apps or agents; we'll show a small demonstration of that in a moment. You also want to restrict access to AI agents that are beyond a user's scope. Here again identity becomes important: you may have a trusted agent that a particular user shouldn't be using. Perhaps it's an agent designed to work against sensitive financial data for the finance department's purposes; I don't need somebody in production or in sales accessing that, because that may not be anything appropriate for them to see.
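To make the conditional access piece a little more concrete, here is a minimal sketch — not something shown in the webinar — of creating a policy through Microsoft Graph that requires MFA before members of a given group can reach a custom AI application. The tenant, group, and application IDs are placeholders, and it assumes an app registration that has been granted the Policy.ReadWrite.ConditionalAccess permission.

```python
# Minimal sketch: create a conditional access policy that requires MFA for a
# group of users before they can reach a custom AI application.
# Assumes an Entra app registration granted Policy.ReadWrite.ConditionalAccess;
# tenant/client IDs, secret, and object IDs below are placeholders.
import msal
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

policy = {
    "displayName": "Require MFA for AI agent access",
    "state": "enabledForReportingButNotEnforced",  # report-only first, enforce later
    "conditions": {
        "users": {"includeGroups": ["<finance-users-group-id>"]},
        "applications": {"includeApplications": ["<ai-agent-app-id>"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json=policy,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```

Starting in report-only mode, as above, lets you observe the impact on sign-ins before the policy is actually enforced.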
So aside from limiting trusted apps and agents for the organization, we also want to control which agents a user can access when those agents are beyond what that user should be seeing. On the AI agent side, since I'm approaching these as I would a person on some level, I'm going to restrict access to only trusted and permissioned users — the other side of what we were just talking about. I also want to restrict the agent to only trusted systems, so I'm not pulling data from systems I don't trust. Those are the things that can lead to jailbreaks, indirect prompt injection, and ultimately even hallucinations, as you feed the model data from systems that maybe shouldn't be trusted the way others are. Identity helps with knowing who and what the agent is interfacing with, and this is something that will have to be looked at continually; it really reinforces the importance of ongoing monitoring. In terms of restricting access to trusted AI apps only, Microsoft Defender's cloud discovery piece lets you look specifically at AI apps. It provides a risk score, and you can then decide whether to block or permit use of each AI app; anywhere you're managing the device or control the access, it will block the app from being used. If you're not blocking outright, perhaps on a personal mobile device where you're taking more of an "I'll trust these apps but not those" approach, then preventing data from entering untrusted AI becomes very important. Here you're seeing a quick demonstration: somebody copies data from a sensitive document — you can see it's marked as sensitive — but when they go to paste it into the public ChatGPT, they're blocked from doing so. Purview is very important in how this happens, in concert with Intune, so that trusted company data can't be copied and pasted outside of trusted applications. So there are two forms of protection. One is blocking outright. The other covers anything we're not blocking, including apps we haven't even caught yet because they're new to market, where I want to prevent data from exfiltrating into untrusted AI, and Purview is very important for that. This brings us to data protection. Within the organization, you should establish and teach a data protection standard. What is considered confidential data? What is company-only data? What is public data, information that can be shared? It's important not only that you have the data labeled and enforced, but first of all that people understand the standard. They're going to have to make judgment calls, and you need to make sure they understand what's expected and what policies have been established. Once you've established the data protection standard, you can work at restricting sensitive data to only those who should have access. Again, if we have our identities controlled the right way, we're able to do that, limit access by security groups, and even enforce encryption on the data if you choose to. But if nothing else, I've at least labeled the data and can then allow and disallow certain activities based on that.
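The clipboard and endpoint behavior in that demonstration is enforced by Purview and Intune rather than by code you write, but the underlying rule — sensitivity label plus destination decides whether content may flow — can be sketched roughly as below. The label names and the trusted-app list here are made up for illustration.

```python
# Hypothetical illustration of the DLP rule described above: content carrying a
# sensitive label may only flow to AI applications on an explicit allow-list.
# Label names and the trusted-app list are assumptions, not real policy objects.
TRUSTED_AI_APPS = {"Microsoft 365 Copilot", "Contoso Finance Agent"}
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

def may_send_to_ai(content_label: str, destination_app: str) -> bool:
    """Return True only if labeled content is allowed to reach the destination app."""
    if content_label in BLOCKED_LABELS and destination_app not in TRUSTED_AI_APPS:
        return False
    return True

# A paste of "Confidential" content into a public chatbot is blocked,
# while the same content is allowed into a trusted, managed AI app.
assert may_send_to_ai("Confidential", "Public ChatGPT") is False
assert may_send_to_ai("Confidential", "Microsoft 365 Copilot") is True
assert may_send_to_ai("General", "Public ChatGPT") is True
```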
From there, I can use data loss prevention mechanisms, which are carried throughout the Microsoft platform and respect the sensitivity labeling that has been applied, to enforce the standard on your behalf. It's one thing to teach people; it's another to prevent mistakes, which are frequently the most common form of data exfiltration. Large-scale exfiltration tends to be more deliberate, but if you're not careful, people can share information naively, which reinforces the importance of educating at the front end and having some form of DLP to prevent honest mistakes, as well as to limit those who are intentionally trying to do harm. On the AI agent side this is really important. We've had DLP on the user side for a while — in email and then in other cloud apps over time — but now it becomes equally important for the AI agent, because the agent takes information, surfaces it to anybody who is allowed to interface with it, and produces output from it, and that output needs to carry those protections forward. So you need to ensure the agent only shows data that the user interacting with it should have access to — again, the importance of identity and respecting the security permissions each individual should have. You also want to protect the response output in alignment with your data protection standards, so that when the agent makes an inference on sensitive data, the same data classification is carried forward into the output. And in terms of data protection, you want to make sure the agent isn't making unwanted changes to underlying data if you've allowed it some autonomy; that's fairly easy to control in the model building and construction, but it's another consideration here. For all of this, if you're using Microsoft platforms, Microsoft Purview is particularly important. It lets you apply sensitive information tags and labels and have them respected throughout the Microsoft platforms, in Microsoft 365, in Copilot, and in anything you construct with Copilot Studio. It also provides data governance, so over time I can see where data is being utilized, where it's flowing, which systems and which agents are able to access it, and control that. And it helps you remain compliant: with those same sensitivity labels you can mark information that's subject to privacy laws, financial regulations, HIPAA, and so on, and control the flow of that data in accordance with those regulations. A little background on Purview licensing — probably the only licensing we'll talk about today, but it can get confusing. Purview for unstructured data in documents (Word, Excel, PDFs, PowerPoints, and so on) is included with Microsoft 365 E3 and then E5 licensing, with varying capabilities. For structured data — data you might have in the cloud and in databases, where you're doing row-level security and data cataloging — it tends to be more of a pay-as-you-go model in Azure. So just be aware of that on the licensing side.
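Coming back to how labels should follow data into responses: purely as a hypothetical sketch (the label names and ranking below are made up, and in the Microsoft stack Purview and Copilot enforce this for you), the rule is that a response inherits the most restrictive sensitivity label among the documents it was grounded on.

```python
# Hypothetical sketch: an AI response inherits the most restrictive sensitivity
# label among the documents it was grounded on. The ordering below is an
# assumption; real label taxonomies come from your Purview configuration.
LABEL_RANK = {"Public": 0, "General": 1, "Confidential": 2, "Highly Confidential": 3}

def inherited_label(source_labels: list[str]) -> str:
    """Pick the most restrictive label from the grounding documents."""
    return max(source_labels, key=lambda lbl: LABEL_RANK[lbl])

# A response grounded on one General and one Confidential document
# is treated as Confidential and protected accordingly.
print(inherited_label(["General", "Confidential"]))  # -> "Confidential"
```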
Ultimately, all of that comes together and can be managed with the same dashboards and controls. Here's a quick example with sensitive information. That same Project Obsidian document we used in the demo earlier was used to surface a response here in an AI session, but you'll notice it's actually flagged as sensitive information that cannot be seen outside of the organization. So it's respecting the data sensitivity labeling applied to the data being used to create inferences and carrying that forward into the responses automatically. If the user shouldn't be able to see this data at all, they wouldn't; it wouldn't be used for them in that session. If it is something they can see, this provides the reminder that it should not be shared outside the organization, and protections can be applied as it's surfaced to those using the AI agents. In order to secure the environment, the recommended approach is to start with basic default labeling. You're not putting full enforcement or encryption in place yet; you're just starting to understand the underlying data estate you have and getting labels onto it. Then start using DLP for the more sensitive labeled content, ensuring it doesn't flow to places you don't want it to go. Next, start using auto-labeling, providing deeper context and perhaps more sophisticated labeling based on compliance needs or individual org needs, and start using DLP for content that's not labeled as well. You want to take a progression here, because if you turn on full encryption and full enforcement right away without really understanding the data estate, you can bring the flow of information within the organization to a halt. So there's a stepwise approach: gradually understand your data and provide increasing levels of enforcement. As you continue, you can take actions to improve the labeling, ensuring you have high confidence and high trust in the labels being applied, and then ultimately start enforcing encryption and protection beyond that — encrypting the underlying documents so that wherever they're surfaced, they're encrypted, and even if they're shared, they can't be accessed. All the components of the Microsoft security portfolio work together here. We've talked about Purview, about Entra and Intune together, and a little about Defender; all of these work in concert to protect your underlying data and ensure the apps operate in a trustworthy manner you can rely on going forward. As an example of how these link together: you might use Defender for Cloud Apps to start with discovery of AI app usage and user interactions with those apps, then use Purview and the AI communication compliance and AI usage reporting available there. By reviewing that, I can start deciding what to allow and disallow: I can start blocking access to unsanctioned AI apps, limit access, and limit the data that can flow from there.
From there, going on to securing the underlying data and exposing it through the model: once I've established a comfortable level of security around my data, then as I'm creating AI apps that access that sensitive data, I can ensure the protection is enforced, that the data isn't sent into certain AI apps and is only sent to the restricted AI apps I've approved. From there it's an ongoing governance effort, where you continue to audit and monitor what's happening and look for inappropriate behaviors and prompts. You'll find yourself diving a little deeper: OK, we've allowed this AI agent, it has this data, and to a degree we trust the people interacting with it, but let's look at what people are actually doing in the prompts and say, maybe we don't want them asking that, and how do we refine things to prevent some of those prompts or the responses that come from them? All of which uses the entire security platform. So you set up the prevention mechanisms at the front end to stop the behavior you don't want and stop the data flows that would be detrimental to your organization, but this is going to be an ongoing monitoring and governance activity. In terms of identity, you can use Entra ID governance to monitor security groups: if I'm trusting identity to help provide a layer of protection, I need to make sure people aren't added to security groups they shouldn't be in, and that we're not leaving identities sitting out there that could be hacked and used for nefarious purposes. You want to keep monitoring that identity plane. You also want to keep looking at the app and access level, where AI Hub and Purview can monitor the AI usage that's occurring: I've trusted these apps, but do I trust how they're being used? Defender XDR and its risk assessments can also be very helpful in this realm. Ultimately, I may have my data sensitivity standard set and my labeling and enforcement where I want them, but you need to ensure that continues over time. That's where the audit and communication compliance features of Purview come in, as well as eDiscovery: what has happened, what have people been doing with this? That may end up having to be produced in legal contexts too, so how am I able to surface it easily without causing undue difficulty for the organization? Here's what AI Hub looks like: you can see the prompts and the risk assessment on those prompts, and you can drill further into what some of those prompts were and their nature. It gives you an overview of how people are using AI within the environment, even in trusted AI apps, and you also have views into the untrusted ones. In addition, as I drill into the communication compliance realm, I can see the actual prompts people are submitting. These may be things I want to educate users on first, but maybe also provide some enforcement around: I might change how I'm labeling certain data based on this, or restrict access to certain agents if I find certain people are asking questions of them that I hadn't anticipated.
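For agents and apps you build yourself, one way to stop whole classes of questions before they ever reach the model is a screening layer such as Azure AI Content Safety, which the talk returns to below. This is a minimal sketch, assuming the azure-ai-contentsafety Python package and a provisioned Content Safety resource; the endpoint, key, blocklist name, and severity threshold are placeholders.

```python
# Minimal sketch: screen a prompt with Azure AI Content Safety before it
# reaches the model, using the built-in harm categories plus a custom blocklist.
# Endpoint, key, the "restricted-topics" blocklist, and the severity threshold
# are placeholders/assumptions for illustration.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient("<content-safety-endpoint>", AzureKeyCredential("<key>"))

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt is safe to forward to the model."""
    result = client.analyze_text(
        AnalyzeTextOptions(
            text=prompt,
            blocklist_names=["restricted-topics"],  # custom terms you never want asked
            halt_on_blocklist_hit=True,
        )
    )
    if result.blocklists_match:  # a custom blocklist term was hit
        return False
    # Reject anything scoring above a chosen severity in the built-in categories.
    return all(item.severity <= 2 for item in result.categories_analysis)

print(screen_prompt("Summarize our Q3 revenue report"))
```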
All of this is how you provide ongoing management of the underlying estate and continue to control what's happening with your underlying data. Defender also provides alerts here: it will flag certain interactions with, for example, Copilot agents I'm monitoring and say, hey, this user is trying to dig up finance-related files that maybe they shouldn't be, and that's odd behavior. It lets you see where people are trying to attack or exfiltrate data. These could be innocent enough requests, in which case it may just be an education matter, but there may also be enforcement you want to undertake. It allows you to see what's happening in the environment and, again, allows the organization to trust the use of this technology. So that's how we manage the Copilot and Copilot Studio level of AI agents. If you're building custom AI and ML models, you'll want to take further action and further review of what the model is, because you're responsible to a greater degree for what that model produces. This isn't just privacy and security; based on the responsible AI framework, it also incorporates reliability and safety: am I able to trust the responses? Is it producing responses that incite negative behavior I don't want? Does the model have bias? Then fairness and inclusiveness: are we ensuring the model has been trained in such a way that we've avoided bias as much as possible? There are ways to measure and detect that and to retrain around it. Do I have enough transparency about where the data is coming from, so that my users can trust the results and the organization can trust what it's being told by AI responses and outputs in their variety of forms? And ultimately, what's the accountability here? Do I have measures in place to ensure that people are ultimately responsible for what's happening inside the environment, able to take action, and bearing some responsibility for what the model produces? To help ensure you can trust the model and what it's producing, Microsoft's Azure AI Content Safety — prompt guards and a host of other tools and APIs — can interface with your model and ensure some very obvious things aren't happening, and you can also create your own blocklists in there, saying I really don't want to permit these kinds of questions; you can stop that there. The broader responsible AI dashboard from Microsoft also has means to assist with this. A couple of forms this takes: first, model debugging. If I'm crafting my own model, I want to look, on an ongoing basis, at the errors coming off of it. Where is the model wrong, and how sensitive is it to certain parameters and combinations of parameters? For example, maybe I've trained it primarily on US data, but Europe is a little different, so the model may be wrong in a European context while being right for the US; you want to start detecting those kinds of things. There could be a whole host of different biases in there.
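The fairness side of that debugging — does the model behave differently for different groups? — is the kind of thing the open-source Fairlearn library, which underpins parts of the responsible AI tooling, is built to measure. A minimal sketch with made-up labels, predictions, and a made-up "region" feature:

```python
# Minimal sketch of a fairness assessment with Fairlearn: compare a model's
# accuracy and selection rate across groups defined by a sensitive feature.
# The labels, predictions, and "region" feature below are made up.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                          # model outputs to audit
region = ["US", "US", "US", "EU", "EU", "EU", "EU", "US"]  # sensitive feature

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=region,
)

print(mf.by_group)      # per-group accuracy and selection rate
print(mf.difference())  # largest gap between groups, per metric
```

A large gap in accuracy or selection rate between regions, like the US-versus-Europe example above, is the signal that would prompt retraining or rebalancing the training data.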
The responsible AI dashboard also includes fairness assessment tooling to help detect other forms of bias. Going forward from there, you want to be able to interpret the model and understand how it arrives at its responses and inferences: how can I test for, say, two similar inputs producing different results, understand why that's happening, and tune things so I get more reliable results on an ongoing basis? You also want to keep exploring the underlying data set — especially if it's a more dynamic data set — to check whether new data is driving unexpected or unwanted behaviors in the AI model, and then take action to mitigate fairness issues; there's tooling to help with that, as well as with making enhancements to incoming data so it can be trusted. That will be an ongoing process you use to debug the model. It sounds like a lot, and there's a fair amount there, but there are tools — the responsible AI dashboard and others — that can help with the process. In addition, along with understanding the data, you want to understand causal inference: what are the features within my data set actually driving in terms of real-world outcomes, and what are the responses driving in terms of how people understand what they're seeing? Being able to do that analysis ultimately helps create the best, most trusted AI model, which then gets used more and results in greater benefits to the organization. So, in summary: begin with identity management. Manage AI agents like humans. Verify all user requests and use least privilege access, particularly for AI agents and the systems they operate against; you don't want standing admin access there, you want it invoked as needed, which protects against some of the broader cyber threats on AI systems. Then secure based on risk: make sure users are who they say they are, that AI agents are accessing data and systems the proper way, and that you have the right permission set for different scenarios — somebody on a mobile device that possibly has other data on it might be handled differently than somebody on a trusted, managed compute device owned by the organization. And log everything; tracking all of it will help with your ongoing management over time. In terms of application access control, restrict the use of untrusted AI agents and applications. You're most likely already doing this with certain applications in the environment; you need to extend it to AI agents, both the ones you've identified and the publicly available ones. When you put sensitive data into public AI models, in some cases it can then be shown to others who are providing prompts to that model, so you want to ensure that anywhere your sensitive or private information goes, it's protected and contained within AI models you trust, ones you know won't surface it to anybody else who shouldn't be seeing it. You also want to make sure this is managed on all types of devices.
So you might handle mobile devices differently than you handle trusted, managed compute devices within the enterprise. As far as data protection: first, know your data; understand the underlying data that's exposed through AI agents. Secure the data at the source, which ensures the security follows it wherever it goes, so that as it's surfaced to users through responses, it's still enforced there too. Then track where it goes: be able to see trends, notice when an agent is surfacing data you hadn't expected, and be able to say, OK, we need to change our underlying data protection model and provide enforcement in a different way. All of this is an ongoing challenge, and none of it is meant to make this seem like too much work or something painful; there are tools to help with all of it. But it is important that it's done, so that you're able to trust AI and use it for the benefits you're seeking. Again: monitor AI usage, monitor the data access underlying it, continue to monitor AI model performance to make sure it's producing trusted results, and log everything, because that will help later with surfacing what's been happening in the environment. As for AI models, if you're constructing your own, you have a responsibility to continue to debug the model, to understand the data it uses and the systems connected to it, and to enforce protection mechanisms on all of those. With that, if you'd like to have a further conversation, please feel free to book time with us and we can dive more deeply into your specific scenarios. Hopefully this provided a high-level framework for the types of things you should be considering when rolling out AI within the enterprise, and helps drive the trust that lets you see all the great benefits that come from AI. With that, thank you very much, and I hope everyone has a great day. Here at Concurrency, for our next topic we're going to be talking about security, governance, and trust in AI. It's very important, in order to enable all the benefits of AI, that you've established trust in it first, and we'll talk through the means of doing that over the course of the conversation here in the next 45 minutes or so. Please use the bookings link to go ahead and schedule time with us if you so choose. So, starting here today, I want to begin with the importance of trust. It's kind of an obvious topic, but the ways its importance manifests aren't always so obvious. All new technologies bring both risk and reward. The Internet, when we first started using it 30 years ago, had its risks but also had its benefits and rewards. Mobile devices presented new security challenges and risks and also, obviously, had benefits. Cloud technologies were very much the same. How we address that risk really determines how we realize the benefits, and if you address it the right way, you can actually increase the benefits: as you reduce risk, you increase trust, obviously within the organization, in allowing your employees to use the technology. And if you do it the right way, particularly with AI, if you increase the trust, you ultimately see increased adoption.
Obviously, as adoption increases, the benefits from everyone using it increase. So if you do it the right way — implementing the means to establish trust while increasing adoption — that results in greater reward, reduced risk, and ultimately a better outcome. Security governance plays a very important role in establishing that trust, obviously, by reducing the risk involved. In the AI world, the new attack surfaces that have emerged fall under a few different headings. One is data leakage, probably one of the most obvious at some level. Data leakage is where, if I'm training my model or have allowed my model access to underlying — and potentially sensitive — data, that data can then be surfaced to people I may not want to see it. So you need to take steps to protect the underlying data, and the output from those models that contains sensitive information needs to be protected as well; we'll talk about that a little bit. On the other end of the spectrum we have hallucinations, where the model isn't producing the results you would like; that's really a function of how you train the model and continue to debug and manage it going forward. And there's a host of others as well.