View Recording: Data Governance: DSPM & Insider Risk Management
January 13, 2026

With the emergence of AI, governing and protecting your organization’s data has never been more important, but where do you begin? Learn how Data Security Posture Management (DSPM) and Insider Risk Management provide a great starting point for organizations seeking to safeguard their data against both external and internal risks. Discover how Microsoft Purview’s DSPM and Insider Risk Management tools safeguard sensitive data, prevent compliance violations, and enable secure AI adoption for modern enterprises.

WHAT YOU’LL LEARN
In this webinar, you’ll learn:
Why AI amplifies data exposure risks, and how to mitigate them.
DSPM explained: how to achieve full visibility and control over your data environment.
Insider Risk Management: preventing accidental leaks and intentional misuse.
Practical steps to implement a secure, compliant data lifecycle strategy.

FREQUENTLY ASKED QUESTIONS

What is Data Security Posture Management (DSPM) and why is it critical for AI?
DSPM provides visibility into your organization’s data assets, identifies vulnerabilities, and ensures compliance, which is essential in an AI-driven environment where risks multiply rapidly.

How does Microsoft Purview integrate compliance and insider risk tools?
Purview offers a unified portal combining advanced auditing, data loss prevention, insider risk management, and compliance controls, creating a holistic governance framework.

What are the first steps to secure AI-driven environments?
Start by assessing your data posture with DSPM, implement insider risk policies, and establish lifecycle management to remove unnecessary data and reduce exposure.

How does Insider Risk Management prevent costly mistakes?
It detects risky behaviors, like oversharing or unauthorized transfers, before they escalate,
protecting your organization from fines, legal issues, and reputational damage.

Can these tools help with regulatory compliance?
Yes. Purview’s compliance controls and auditing features help organizations meet industry standards and avoid penalties.

ABOUT THE SPEAKER
Joe Steiner, Solution Architect at Concurrency, is a Microsoft Purview specialist and enterprise data security strategist with deep expertise in compliance, insider risk, and AI governance.

TRANSCRIPT

Joe Steiner 0:06
All right. Well, hello everyone. Welcome to our first webinar of 2026. So, happy 2026. It may be too late for a happy New Year by some people’s rules, but welcome to our first session of the year. Today we’re doing a session on data governance, specifically about Data Security Posture Management and insider risk, which are a couple of pieces of the broader Purview suite from Microsoft that maybe don’t get as much attention, but in our minds probably should get a little more than they have. So we’re going to do a little discussion on those today: how they fit in with a broader data governance plan, some of their unique benefits, and what you can look for from those tools.

As we go forward here, everything we’re doing in tech these days is really heading toward AI in so many ways. And here at Concurrency, we look at it this way: you first need a secure technology foundation with modern cloud operations and governance, so both security and technology are at the base. You need to enable your people to work with AI responsibly, to enable that frontier workforce that’s going to take you forward and really get the most out of AI. And then finally, you’ve got to build the AI tool with a purpose, right?
You’ve got to make sure you’re applying AI to the business and its processes and developing that in a smart and intelligent way. Today we’re going to concentrate mainly on the modern cloud operations and governance portion, and specifically on data governance and protection. It’s an area that, when it comes to AI, we think is going to be more and more important over time, because AI increases the risk exposure for a lot of the traditional data governance and protection issues, in a much faster way and really in an unintentional way that most people wouldn’t be thinking of when they’re using these tools. So it’s important to have some good management tooling in place to make the most of that.

When we talk about data governance and protection, we’re really talking about knowing your data, protecting your data, and then keeping the data you need but removing the data you maybe shouldn’t keep. Make sure we’re not hanging on to data too long, which can affect AI prompt results and search results, and also increases risk exposure when you’re looking at eDiscovery events and things like that. So it’s important to maintain a data lifecycle as well. A lot of what we’ll talk about today, though, is going to be about knowing and protecting your data and how to start down that path.

The major concerns historically, and these get accelerated with AI too, start with oversharing of information. Even internally, we don’t necessarily want to be sharing proprietary financial or project data with everyone in the organization. We need to make sure that’s not happening, and sometimes it happens by accident when we’re not protecting data the right way. That coincides with data leakage or data spillage, where really the biggest threats with AI come from use of potentially risky AI tools or shadow IT, things that aren’t governed the same as the rest of the environment.
And now we’re sharing data without even realizing we’re doing it. So again, it’s that unintentional activity that can happen. Then you have your classic intellectual property theft, people taking confidential information, maybe when they leave, maybe on purpose; compliance violations, which frequently happen by accident (not everybody understands the rules all the time) but can happen on purpose as well; and then just general unethical behavior. All of these things the Purview suite of tools, particularly the data protection tools, can help us address, and these can come from actors both inside and outside the organization. Most incidents are unintentional. It’s not always people seeking to do harm; sometimes they do harm accidentally. So how do we protect people from making those costly mistakes? Any one of these incidents can result in millions of dollars in fines, legal fees, and lost business. So it’s important that we’ve got some tooling around this. You can’t just stick your head in the sand and pretend these things aren’t happening, and I’ll guarantee you every organization is dealing with them at some level.

The tooling of choice from our standpoint is Microsoft Purview. It has a host of different capabilities within it to address the data security, data governance, and data compliance needs of organizations and enterprises. Particularly in the area of AI, they’ve adapted this tool set quite a bit. A lot of these announcements had been on their way, but many came out of Ignite and are just coming into preview today. We’ll be talking about how that provides some really important and powerful capabilities for organizations trying to manage this environment.
In this new era of AI, some of these solutions, and here you see a list of some of those from within the Purview portal itself, are advanced auditing, compliance controls and management, records management, the data catalog, data loss prevention, information protection, and insider risk management. All of these work together within the Purview portal and build off of each other to provide a more holistic data governance and protection environment.

Here’s the Purview portal. It has been redesigned if you’ve been in here before; they’ve consolidated it quite a bit and made it easier to use, with the solution cards we just showed you, links to related portals from other tooling (whether that’s Fabric or Entra or Defender), and the ability to search not only for different people, so you can do investigations on them, but also for different topics and capabilities. The settings and all of this are exposed right through the portal up front in an easy-to-navigate environment. Now, one of the first things you need to do in setting these portals up is setting the permissions, because you have a lot of power with these tools, so we want to make sure this is open only to the right people in the organization.

When we talk about data governance, it’s the same things we were talking about before: understanding your data, protecting your data, and managing the data lifecycle. Getting a little more specific, one of the first steps that we feel is very important is gaining visibility into your data, and this is where Data Security Posture Management and Insider Risk Management really, really help.
It provides what I hope you’ll see today are broad views of what’s happening in the environment with your data, and also, in the case of Insider Risk Management, some of the user behavior related to your data, which is very important. The other nice thing that’s been built in more recently is the linkage between Security Copilot and Purview, so you can interact with this in a natural-language way, ask questions about things you’re seeing, and get summarized responses across the multiple screens and the vast data that’s available here. That makes it a very, very powerful tool for understanding what’s happening in the environment and beginning to put some protections in place.

When we look at this, we’d definitely say start with understanding what you have and work from there, and then you level up: refine what you’re doing in terms of sensitivity labeling, applying protection policies, and DLP. At the first level, we’re really just becoming aware of some of the activity out there. At the second level, we start protecting that data and putting automated protections in place; compliance management is part of that as well. And then finally you start managing the data lifecycle, end to end: aging data out when you need to, ensuring you’re keeping the data you need and not deleting data that you should be keeping for different compliance reasons, and managing the data catalog and putting endorsements on there so that as people are searching with AI tools or anything else, they know what data is best to use for what purpose. That data catalog and those endorsements are very important. And then the retention policies and records management help with the what-to-delete, what-not-to-delete kinds of questions that come up.
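The lifecycle idea above can be reduced to a small decision rule. The sketch below is a toy illustration only, not Purview’s retention engine; the seven-year retention period and the legal-hold flag are invented for the example.

```python
from datetime import date

# Toy retention decision (NOT Purview's engine): retain data inside an assumed
# 7-year window or while it is under a legal hold, otherwise age it out.
RETENTION_DAYS = 7 * 365  # assumed retention period for this example

def lifecycle_action(last_modified: date, on_legal_hold: bool, today: date) -> str:
    """Return 'retain' while inside the retention window or under a hold,
    otherwise 'delete' (age the data out)."""
    if on_legal_hold:
        return "retain"  # a hold overrides normal aging-out
    age_days = (today - last_modified).days
    return "delete" if age_days > RETENTION_DAYS else "retain"

today = date(2026, 1, 13)
print(lifecycle_action(date(2015, 6, 1), False, today))  # old file: delete
print(lifecycle_action(date(2024, 3, 1), False, today))  # recent file: retain
print(lifecycle_action(date(2015, 6, 1), True, today))   # old but on hold: retain
```

The point of the hold check coming first mirrors the talk: compliance reasons to keep data always win over the impulse to clean it up.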
Again, today we’re going to focus on step one and hopefully show you some things that will help you get on your way. Let’s begin with Data Security Posture Management. As with any good IT tool, it has an acronym: DSPM. What it does is help you discover, protect, and investigate against sensitive-data risks. The tooling from Purview provides unified visibility and control as well as automated remediation: you can set up actions that happen automatically as it detects certain events or situations. It works with traditional applications, and there’s also a big focus on bringing AI apps and agents into the fold and on how we manage those given their interactions with data, much like other applications in the past, and given that they’re actually interacting in deeper ways than many traditional applications have. It supports data across Microsoft 365, Azure, Fabric, and a host of integrated third-party software-as-a-service platforms.

The focus here really is on data, independent of my compute device or my broader infrastructure. It’s about: what data do we have, where is it stored, who can access it, and what protections have I put in place? It gives you a view of that data right there. As we’ll talk about later, what Insider Risk Management adds to this is the chain of behaviors occurring with our data, so we can see not only where the data is stored, who has access, and how it’s protected, but what’s happening with it and what people are doing with it. We’ll talk about that next.

So again, Data Security Posture Management ties into a host of other tooling here, including the Purview DLP environment, information protection, the sensitivity labels, and insider risk. It actually links in with the same signals, shares data both ways, and can drive some analytics that way. You can get reporting out of this: you can see what has been protected and what’s unprotected, where that exists, and where some of those risk areas are, and it provides recommendations for next steps. We really like this: if you’re starting a data governance program, this is a great first step, because it’ll tell you what those next things you should be looking at are. And having it tied in with Security Copilot allows you to ask questions of it and have it provide guidance on what you should be concerned with and maybe what to look at next. It’s all about providing information and actionable insights, being able to create policies, and then tracking everything with strong analytics and reports. So it’s a very, very powerful visibility tool into the current state of your data estate.

Here’s the new Purview Data Security Posture Management portal. It’s available in preview. Most of you, if you have a Microsoft E5 license, the compliance add-on, or a license for the Purview suite, will have access to this. The first thing you have here is the posture reporting, the state of our data estate, along with Security Copilot access immediately out of the portal, so you can create prompts for Security Copilot directly within the portal. You have data security objectives, so you’ll see things like: how much of the sensitive data in our organization are we able to identify, and where is it? Are we preventing data exfiltration, that is, data leakage or people stealing data, and are we controlling that? There are a number of these objectives that provide some scoring and summary information that you can then drill in on.
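To make the "objective scoring" idea concrete, here is a toy sketch, explicitly not Microsoft’s scoring model: given an inventory of assets, it reports what share of the sensitive ones already carry a sensitivity label. The asset fields and file names are invented for the example.

```python
# Toy DSPM-style objective score (not Microsoft's model): percentage of
# sensitive assets that already have a sensitivity label applied.
def labeling_coverage(assets: list) -> float:
    """Percent of sensitive assets that have a sensitivity label."""
    sensitive = [a for a in assets if a["sensitive"]]
    if not sensitive:
        return 100.0  # nothing sensitive found, so nothing is unprotected
    labeled = sum(1 for a in sensitive if a["label"] is not None)
    return round(100.0 * labeled / len(sensitive), 1)

inventory = [
    {"name": "q4-financials.xlsx", "sensitive": True,  "label": "Confidential"},
    {"name": "launch-plan.docx",   "sensitive": True,  "label": None},
    {"name": "lunch-menu.pdf",     "sensitive": False, "label": None},
]
print(labeling_coverage(inventory))  # 50.0
```

A score like this is only as good as the discovery behind it, which is why the talk stresses gaining visibility first.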
It also provides some very strong AI observability tools here, so I can see what people are doing with AI tools in the environment and with our data. Take sensitive information for our organization: we may not even want everyone in the organization knowing about it. So where is that data? Where is it stored? Who all has access to it? You can create views inside of Asset Explorer that show what’s happening with specific keywords, with certain tooling, with different types of files, with different types of users. I can mark high-profile users, users that maybe have access to more sensitive data: what’s happening with them? What’s going on there? So it provides a broad variety of views into what is actually out there. Now, because you’re able to see all of that, this is again where we want to make sure that only certain individuals have access to this tooling; understand that they’re going to be able to see a lot in here.

The next thing we’ll talk about is the AI observability tooling. Here I can see a list of all the different agents and AI tools: those that we’ve developed and are managing, as well as productized and third-party ones. There are a couple hundred different AI tools at this point that can be monitored by Purview. Some of those will require a browser extension or connecting your corporate devices to Purview, but it allows you to monitor all that AI activity. If you’ve created a policy that says you don’t want people using other AI tools, but you haven’t put in the tooling to stop it, this will allow you to see if that’s happening. And even if we’re just getting started in this world, it provides quick visibility into what is being used out there and how I should craft my policy given how this is being utilized in my environment. So a very, very powerful tool.
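The shadow-AI review described above boils down to comparing observed usage against a sanctioned list. The sketch below is a toy illustration; the app names, event shape, and allow list are invented for the example, and Purview’s real signal model is far richer.

```python
# Toy AI-observability sketch: bucket observed AI-app usage events into
# sanctioned vs. unsanctioned, the kind of view you'd want before writing
# a formal AI-usage policy. All names here are example data.
SANCTIONED = {"Microsoft 365 Copilot", "ChatGPT Enterprise"}

def review_usage(events: list) -> dict:
    """Count events per app, split into sanctioned and unsanctioned buckets."""
    report = {"sanctioned": {}, "unsanctioned": {}}
    for e in events:
        bucket = "sanctioned" if e["app"] in SANCTIONED else "unsanctioned"
        report[bucket][e["app"]] = report[bucket].get(e["app"], 0) + 1
    return report

usage = [
    {"user": "alice", "app": "Microsoft 365 Copilot"},
    {"user": "bob",   "app": "ChatGPT"},
    {"user": "bob",   "app": "ChatGPT"},
]
print(review_usage(usage))
```

Even this crude split answers the question the speaker raises: is unmanaged AI already in use, and how should the policy be shaped around it?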
There are some one-click policies in addition to that, where I can find some specific things: risky AI usage, unethical behavior in AI apps (prompts that maybe shouldn’t be asked of certain things), detecting sensitive information being shared via AI, and capturing Copilot experiences in general and what people are asking there. It can be very, very useful for a broad variety of reasons: certainly for protecting your environment, ensuring your information is being shared appropriately, and responsible use of AI. There’s a lot of power in these tools, and these are prebuilt policies that can simply be activated inside the portal.

When it’s monitoring the tools, it will classify them into three different groups. First, you have Copilot experiences and agents built on Copilot: Microsoft 365 Copilot, Security Copilot, Copilot in Fabric, Copilot Studio, and the agents built with Copilot Studio. Then I have my enterprise AI apps, and those frequently are apps registered via Entra, which we highly recommend doing, so these are approved AI applications registered with Entra that can be managed as managed applications; ChatGPT Enterprise is a third-party tool that can be brought into that fold, and Microsoft Foundry as well. I think you’ll see more and more come into this space over time in terms of enterprise-ready AI tools. And then there’s the host of commercial AI apps: the standard commercial version of ChatGPT, the commercial version of Microsoft Copilot, Gemini, DeepSeek. Here’s the couple-hundred list that the AI tooling can monitor and keep an eye on throughout the scope of your environment; whether they’re browser based or have an actual installable, it can monitor all of those.
You’ll see those grouped into this third classification. One of the things coming forward now that ties in with this is the Agent 365 integration. Agent 365 was a big announcement at Ignite, and it is now available in preview via the Frontier preview program. It can be integrated into Purview in a host of different ways: in AI observability, tying in with what Agent 365 is tracking and sharing data across those two tool sets; in agentic risk inside of Insider Risk Management, which we’re about to talk about in a moment; and in tying in with DLP and information protection, so the sensitivity labels assigned to the data that AI agents are scanning and drawing inferences from are respected in the output too. There’s that capability, as well as the broader, expanded governance and boundary integration that comes along with it. These tools are linked more and more, so I have a broader, more cohesive management framework between Purview and Agent 365; there are some Defender things that tie in with this, as well as Sentinel. But if I want to get started, Data Security Posture Management provides a great starting point for all of this, a solid foundation for your data governance and protection efforts.

So let’s switch over to Insider Risk Management. Data Security Posture Management is very powerful: it provides a view of where data is, what data I have out there, and who has access to what. Insider Risk Management takes that a step further and starts to show me what people are doing with data, taking in signals across my entire ecosystem. It’ll pull data in from Entra, from the Microsoft Graph, certainly from the rest of Purview, and from Defender.
It takes all that and starts bringing those things together so I can have policies looking for certain scenarios I want to watch out for, which then create alerts that I can triage: OK, that one is less concerning, but this one I can investigate as a case, which is one of the primary workflows in Insider Risk Management, and then take action off of that.

The types of scenarios we’re talking about here start with data theft by departing users. Unfortunately, it’s not an uncommon thing: when somebody is going to be leaving an organization, whether by choice or not, we may start to see mass downloads of data they maybe shouldn’t have. You can scan for things like that, because it can be done in a variety of different ways, not always as obvious as one might think. There’s intentional or unintentional leakage of sensitive or confidential information: maybe somebody’s sharing something they shouldn’t, and how do I monitor and track that? There are intentional or unintentional security policy violations: I may have certain policies on what can and cannot be done with data, and those can be violated here. There are policies for users based on their position, so for higher-ups in the organization, making sure I’m monitoring what they do since they have higher access levels. And there’s identifying risky actors within the organization, people that maybe just don’t quite get it, or maybe don’t get it on purpose, and tracking what’s happening with those people and what they’re doing with data. In preview, there are also some healthcare-specific policies, particularly for those in the healthcare space, though they can be used by others too, to try to find HIPAA violations that may not be so obvious and protect you from regulatory compliance actions.
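The departing-user scenario can be sketched as a trigger event plus an anomaly check against the user’s own baseline. This is a toy illustration only; the 5x multiplier and the numbers are invented for the example, and the real product weighs many more signals.

```python
# Toy "data theft by departing users" check (illustrative only): a trigger
# event (resignation) puts the user in scope, and an alert fires only when
# recent download volume far exceeds the user's own historical baseline.
def flag_departing_user(baseline_daily: float, recent_daily: float,
                        resigned: bool, multiplier: float = 5.0) -> bool:
    """Alert only when the trigger occurred AND activity looks anomalous."""
    if not resigned:
        return False  # no trigger event, so not in scope in this sketch
    return recent_daily > multiplier * max(baseline_daily, 1.0)

print(flag_departing_user(baseline_daily=12, recent_daily=300, resigned=True))   # True
print(flag_departing_user(baseline_daily=12, recent_daily=300, resigned=False))  # False
print(flag_departing_user(baseline_daily=12, recent_daily=20,  resigned=True))   # False
```

Comparing against the user’s own baseline, rather than a global threshold, is what keeps heavy-but-normal users from generating noise.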
Again, it covers actions and behaviors by risky users, and ultimately this provides visual context for potentially risky user activity, which also provides forensic evidence should you need to defend what happened, one way or the other.

Take those scenarios: the first place they get translated into is these prebuilt policy templates. There are actually links here to each of those, so you can see in more detail what goes into them. Each pulls signals and tracks certain things within certain time frames that then constitute a trigger: hey, this policy has been violated, let’s create an alert for it. A wide variety of templates have already been pre-built and can be utilized, again making an easy entry into this space, so you can start watching what’s happening in your environment. You can start with education and then move to broader enforcement, maybe putting in some automated enforcement to protect against some of the unintentional things happening in the environment that are putting you and your data estate at risk.

IRM, again, stands for Insider Risk Management; we love our acronyms in the IT world. Here’s how those alerts get generated: you have your settings configured and your policy created, and then as those are violated, a trigger event occurs. It could be tied in with my HR system: OK, this person has resigned, I make a note of that, and I’m now tracking that user’s behavior from there. Are they downloading a lot of files? Are they destroying data? That’s another thing it can find. Are we seeing exfiltration activity here that we need to be monitoring? Then, for the user activity, there are tools in here to evaluate it. We’re not assuming everyone’s guilty right away, but the data is what the data is, and you can see: OK, was this maybe an accident?
Was this something more intentional? What is the risk to the organization from this occurring? You can tie it in with broader user history, and there’s a host of tools you can drill in on there. Then obviously the alerts happen, which call these to your attention. That can happen in a variety of ways. There could be a host of alerts, and you’d see all of them in the alert dashboard and reports, but the spotlight and the triage agent are two mechanisms that highlight the most critical of those alerts, maybe the ones that are most concerning, and there will be scores on these so you can focus on those first and drill down from there. You can have a summary of any of these done with Copilot; there’s a lot of information contained within these things, so have Copilot help make sense of it in a natural-language way. Then, off the alerts, the next step is to either dismiss one or assign it as a case and do some further investigation.

General risk factors being monitored include cumulative exfiltration activities, so anything that might look like data leaving the organization through other cloud tools or through other behaviors, USB keys, all kinds of different things; health record access and anything tied in with HIPAA; and priority content, so particularly risky activities with content that’s marked as more sensitive by the organization. And one of the biggest things here is sequences of activity. It’ll say: OK, any one of these things may not have tripped an alert, but the combination of them does, and that is the concerning part. That’s really the power of Insider Risk Management: being able to see that set of behavior, with the policies monitoring for it.
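The sequence idea above can be shown with a toy detector: none of the actions alone trips an alert, but the ordered combination inside a time window does. The pattern, window, and event shape are invented for the example; this is not Purview’s actual detection logic.

```python
# Toy sequence detector: alert when a risky ordered pattern of actions
# (download -> rename -> copy to USB) occurs within one time window.
RISKY_SEQUENCE = ["download", "rename", "usb_copy"]  # assumed pattern
WINDOW_MINUTES = 60                                   # assumed window

def sequence_alert(events: list) -> bool:
    """events: (minute_offset, action) pairs sorted by time.
    Greedy match anchored on the first 'download' seen (fine for a sketch)."""
    idx, start = 0, 0
    for minute, action in events:
        if idx < len(RISKY_SEQUENCE) and action == RISKY_SEQUENCE[idx]:
            if idx == 0:
                start = minute
            idx += 1
            if idx == len(RISKY_SEQUENCE):
                return (minute - start) <= WINDOW_MINUTES
    return False

print(sequence_alert([(0, "download"), (5, "rename"), (12, "usb_copy")]))    # True
print(sequence_alert([(0, "download"), (5, "rename")]))                      # False
print(sequence_alert([(0, "download"), (200, "rename"), (210, "usb_copy")])) # False
```

The third case shows the window doing its job: the same actions spread over hours no longer look like one exfiltration episode.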
It also looks at unallowed domains, places where our data shouldn’t be going and people shouldn’t be sharing information, and just unusual activity for the user: they normally aren’t operating like this, and all of a sudden we’re seeing a lot of this type of behavior, so it can flag that. A lot of powerful things that would be hard to pick up on if you didn’t have a tool monitoring for this for you.

Here’s an example of the user activity report for a case, some potential IP theft. You can see on one screen the different case actions that can be taken and the chronology of what happened: a deletion event occurred, there was some exfiltration in terms of printing a number of sensitive files, and this user at the same time did some renaming of files. You’ll note that you don’t see the username anywhere here yet; that is obfuscated until you’ve taken this a step further, but you can see that, hey, this doesn’t seem right, and then you can drill in and take action from there. It shows the time sequence, the risk sequence, the cumulative activity that has occurred. In this case you also have IP theft, and there’s a resignation date set for this individual, so you can see the behaviors leading up to the resignation date and after it as well, all in one view. Very, very powerful.

Once I have a case, the actions that can be taken are: I can send the user a notice; again, this could just be an educational opportunity. I can resolve the case as benign: hey, that’s not an unusual behavior, good to know, but I’m not overly concerned. I can share the case via ServiceNow (we can help you with linking ServiceNow into this) and create work items that flow through a workflow. I could share it through e-mail.
I could create a team directly out of Insider Risk Management just to manage that case, which then gets archived once the case is handled, but provides a collaborative environment for those sensitive cases I want a few different people involved with. I can escalate the case for a broader eDiscovery investigation; along with the Purview suite, you have the premium eDiscovery capabilities that allow you to really drill in on what data was exfiltrated or affected and what’s out there. You can then confirm that, hey, there was a policy violation here, and take the appropriate action for your organization.

So that’s the content we had to share today. For next steps, we’d invite you to schedule time to talk to us. We have an offer for setting up the things we’ve talked about today, both Data Security Posture Management and Insider Risk Management. We also provide training for Security Copilot and Purview, and a customized roadmap with our phase one data protection offer to get you started using Purview tooling. Especially if you’re an E5 customer, you already own the licenses; let us help you make use of them. With that, if there are any questions, feel free to put those in the chat or the Q&A section, and we’re happy to address them. If not, I hope everybody has a great day today.

Amy Cousland 23:40
Thank you, Joe. We’ll sit for a minute in case there are any questions; otherwise, we’ll go ahead and end the call. OK, I’m going to go ahead and end the call. Thank you, everybody. Thank you, Joe.

Joe Steiner 24:03
Thank you all. Have a good day.