View Recording: Frontier Firm Part 4: Becoming an Agent Boss – Managing an AI Agent Workgroup
April 30, 2026

As AI agents become a core part of the workplace, the next frontier is learning how to lead and manage them effectively. In Frontier Firm Part 4: Becoming an Agent Boss, you'll explore strategies for supervising AI agent workgroups, balancing human and AI contributions, and ensuring productivity, reliability, and compliance. Learn how to assign tasks, monitor performance, and optimize collaboration between humans and AI agents. Walk away ready to become an effective "Agent Boss" and maximize the value of your AI-powered workforce.

What You'll Learn:
- Lead AI effectively: best practices for managing AI agent workgroups alongside human teams
- Optimize productivity: learn how to assign tasks, monitor performance, and balance human-AI collaboration
- Ensure governance and compliance: practical strategies for supervising AI while maintaining security, accuracy, and accountability

As organizations move from experimenting with AI to embedding agents into everyday work, a new challenge emerges: how do humans effectively lead, manage, and govern a growing workforce of AI agents? Tools alone aren't enough; success depends on mindset shifts, new operating models, and strong governance foundations. In this session, we explore Phase 3 of Microsoft's Frontier Firm model: the transition to human-led, agent-operated work. This is the stage where AI agents move beyond simple assistance and task automation to operating across workflows, systems, and business functions, while humans act as managers, owners, and decision-makers.
Rather than focusing on hype or future promises, this webinar provides a grounded, practical view of what it means to become an "agent boss." You'll learn how to prepare people for AI-first work, where different types of agents make sense, how to orchestrate them responsibly, and what technologies are required to observe, manage, and secure an expanding agent estate. The session emphasizes a critical truth: agents only deliver value when paired with clear ownership, trust-but-verify oversight, and well-governed data and systems. Without these, agent sprawl, shadow AI, and unmanaged risk quickly undermine adoption.

WHAT YOU'LL LEARN

The Frontier Firm Phase Shift
- How the Frontier Firm model evolves from human + assistant, to human + agents, to human-led, agent-operated business segments.
- Why Phase 3 requires new leadership skills (delegation, trust-building, and outcome management), not just technical enablement.
- How the concept of "work charts" replaces traditional org charts as agents become part of how work gets done.

Becoming an Agent Boss (People First)
- The skills that separate successful agent managers from stalled adopters:
  - Clear communication and prompt clarity
  - Vision and outcome definition
  - Decision-making and adaptability
  - Delegation and self-awareness
  - Creativity and proactive experimentation
- Why AI is a career accelerator, not a threat, and how regular usage builds trust, familiarity, and faster value realization.
- How an AI-first mindset shifts daily work: asking "Why am I doing this?" and "Could an agent do this?" before defaulting to manual effort.

Designing Agent-Driven Processes
- How to redesign workflows for autonomy with oversight, not automation for automation's sake.
- A practical framework for agent design:
  - Trigger (human, event, or schedule)
  - Instructions and guardrails
  - Systems and data access
  - Plan of action
  - Measurable outcomes
- Why incremental wins matter more than boiling the ocean, and how continuous improvement compounds over time.
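The five-part agent design framework in that outline (trigger, instructions and guardrails, systems and data access, plan of action, measurable outcomes) can be captured as a simple data structure. The sketch below is an illustration in Python, not a Copilot Studio API: the `AgentSpec` type, the `Trigger` enum, and every field value in the device-refresh example are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Trigger(Enum):
    HUMAN = "human"        # a person asks the agent directly
    EVENT = "event"        # something happens in a connected system
    SCHEDULE = "schedule"  # runs on a recurring cadence

@dataclass
class AgentSpec:
    """Illustrative spec covering the five design elements of the framework."""
    name: str
    trigger: Trigger
    instructions: str       # what the agent should do, and how
    guardrails: list[str]   # limits it must not cross
    systems: list[str]      # systems and data sources it may touch
    plan: list[str]         # ordered steps it will take
    outcomes: list[str]     # measurable results to check against

# Hypothetical example: the device-refresh approval scenario from the session.
device_refresh = AgentSpec(
    name="device-refresh",
    trigger=Trigger.HUMAN,
    instructions="Handle laptop refresh requests end to end.",
    guardrails=["escalate orders over $2,000 to a manager"],
    systems=["asset-inventory", "procurement"],
    plan=["validate eligibility", "route approval", "place order"],
    outcomes=["request resolved within 3 business days"],
)
```

Writing the spec down this way forces the design questions the session raises (who triggers it, what data it may touch, how success is measured) to be answered before the agent is built.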
Types of Agents and Where They Fit
- Retrieval agents: intelligent search and summarization grounded in trusted data.
- Task agents: automating repetitive, rules-based actions triggered by people or events.
- Autonomous agents: independently operating workflows that reason, coordinate with other agents, and escalate to humans when limits are reached.
- Realistic examples across IT, HR, finance, sales, and operations.

Orchestrating Agent Workflows at Scale
- How to chain agents together across systems and channels to handle multi-step processes end to end.
- Why Copilot acts as a natural UI for AI, reducing fragmentation and increasing adoption.
- When human-in-the-loop approvals make sense, and when monitoring alone is sufficient.

Governing, Observing, and Securing Agents
- Why agents need identities, ownership, and lifecycle management, just like human employees.
- The importance of observability:
  - Monitoring agent behavior
  - Detecting unmanaged or shadow agents
  - Understanding agent-to-agent and agent-to-data interaction paths
- How governance, data protection, and security policies protect users and the business.
- What centralized agent management brings together across identity, security, and compliance tooling.

Preparing for What's Next
- How emerging agent control planes enable real-time monitoring, policy enforcement, and risk mitigation across your agent estate.
- Why secure foundations allow organizations to empower users safely, rather than locking innovation down.
- How to think about agents as long-term organizational assets: designed, governed, improved, and retired over time.

FREQUENTLY ASKED QUESTIONS

Is this session focused on building agents or managing them?
Both, but with an emphasis on management. The session shows how agents are built, but focuses on what leaders, managers, and teams must do to ensure agents actually deliver value safely.

Is this only relevant for IT teams?
No.
While IT enables the platform and guardrails, this phase requires active involvement from leadership, HR, and business teams who own outcomes and workflows.

Are fully autonomous agents ready for most organizations today?
In limited, controlled scenarios. The session stresses incremental autonomy, strong guardrails, and clear escalation paths to humans.

How do you prevent agent sprawl and shadow AI?
By providing governed platforms, visibility, ownership, and a clear intake process for new agent ideas, making the right path the easy path.

What's the biggest mistake organizations make at this stage?
Treating agents as tools instead of workers. Without ownership, monitoring, and continuous improvement, agents quickly lose trust or create risk.

ABOUT THE SPEAKERS

Joe Steiner helps organizations navigate the human, process, and technology shifts required to adopt AI responsibly. His focus is on translating emerging AI capabilities into practical operating models that empower employees while maintaining trust, governance, and business alignment.

TRANSCRIPT

Joe Steiner 0:05
Okay, well, hello, everyone. Welcome to session four of our Frontier Firm series. This one is focused on phase three of the Frontier Firm model, which is all about becoming an agent boss. How do you manage a group of AI agents? You've evolved over time from using AI, to starting to deploy agents, to now having them every day in your workspace. How is it that I'm managing these, both individually and as an organization? I look forward to our topic here today. AI is here. There is no ignoring it. I don't think anybody's arguing with that anymore. Really, what this is about is: are you ready for this? We are in an AI era. Are you ready? The interesting thing this time around, compared to some other technical innovations in the past, is that AI is really a human-first technology.
You know, things like Copilot will do nothing unless you've given it an instruction. And it's up to the end users here to provide a lot of that instruction and management, and to decide, okay, what is this going to do for us? So it's a little different from the days of old, where maybe I have this system and I've got a binder with all the paper instructions for how to do this. That's not the world we live in with AI, and it's evolving very quickly. So this requires a partnership, if you will: IT provides a secure, reliable platform that can handle all the end-user demand that can come from this, do that safely, and protect users from themselves in some ways. HR fosters a culture where employees can be creative and feel empowered to realize the true business value from this, taking what they're doing every day, automating it, applying agents to it, and improving. And then it requires leadership to ensure that the employee use of AI is actually directed towards business value, and that we're not just creating another toy and people aren't just playing around. We really want to make sure that we're driving all the benefits from this. And so organizations need to change to enable this and to ensure that both the users and the organization are realizing the value. To this end, the Frontier Firm model, which Microsoft presented last year, provides an interesting model for how this will proceed as organizations increase their AI adoption over time. And this is something we've been talking about over the last few sessions of the Frontier Firm series. First with phase one: humans working with an assistant. That could be Copilot, it could be others. We'll be talking a lot about some Microsoft technologies today, but the same thing applies to similar technologies from other vendors as well.
Then you have human-agent teams, where, okay, now I'm starting to get a few agents. How do I start leveraging agents to augment myself and get more done, and, from an organization standpoint, how do I offer agents that will augment what your employees are doing and enable them to get more done? Finally, you get into phase three, which is really the focus for today: I now have human-led and agent-operated business segments and workspaces. So here's where we get into, all right, I've got a set of agents that can complete a whole host of different tasks. What does it take from the human side to be able to guide that, and what is some of the structure we need in the organization to enable it? Reviewing again: phase one was really that human with an assistant, ensuring every employee has an AI assistant and gets comfortable beginning to use AI in their daily lives. This helps solve for that gap that begins to exist where business demands exceed what humans can do, and it allows humans to get more done by leveraging AI along with them on a daily basis. In phase two, as you adopted some agents, agents become part of the team as digital colleagues, maybe taking on specific tasks, but at human direction. This is the concept that Bryan talked to, I think, last month, where you begin to shift from having an org chart to having work charts: here are the things that have to get done, and the org may not just involve people, it may involve agents now. And then also talking about what's the right human-to-agent ratio: how many agents is it reasonable for a person to be interacting with, and providing oversight to, in a typical work day? Finally, today, we're going to focus on phase three, which is, again, that human-led, agent-operated model. This is the agent boss.
So now everyone becomes a manager of one or more agents. I now have to have managerial or leadership skills to be able to drive the value from these agents as I put them to work for me, as if they were employees. I need to be able to build new agents to a degree, be able to delegate and be comfortable delegating, which is a tough skill for some, and be able to manage agents to ensure that I'm getting what we need out of this, and to ensure trust throughout that process. When Microsoft ran the survey about being an agent boss, they studied these seven criteria, which we'll group into a couple in a moment. One is having familiarity with agents, which was very important. And then regular AI usage; those two come together. The more you use it, the more familiar you're actually going to be with it. That then leads to trust, where I'm starting to be able to trust AI with this, which then accelerates that cycle so that I'm able to do more with agents, and I'm comfortable doing more with agents as I continue over time. The other criteria here are being able to manage agents, as we talk about that leadership thought process, where I have a sense of control but also delegation to agents; and, alongside the trust, being comfortable using AI as a thought partner, to accelerate getting from idea to reality for those things that I'm trying to get done. The people that are really taking this on realized early on that AI isn't a threat; it's a career accelerator. These AI skills are going to be essential going forward for everyone. This is not a "maybe I do this"; this is a "must do this." And what can reinforce this is that those who use it enough can start to save maybe at least an hour a day in what they would have been doing, by leveraging AI. Again, it just helps foster that acceleration that happens as people are adopting AI.
To accomplish all this, you really need three different things in place. One is the people side of it. You've got to ensure that the people and personnel are ready to take this on. A lot of these are pretty soft skills, which some may already have, given the nature of their career so far, and some are developing. And it's actually an opportunity to develop these things, as they'll serve you well in so many areas of life. Another portion of this is the process side. How am I putting the agents in the right places in certain processes to ensure, one, that I can continue to trust it, and two, that it's getting things done effectively and better than it would have otherwise? It's got to be an improvement. It can't just be AI for AI's sake. Finally, underneath all of that, I need a strong technology foundation. This gets into a lot of AI governance and security, which we will dive further into in our next session next month, but we will touch today on some of the core elements that need to be in place there. So let's get started with people, and really developing that AI-first mindset, so that I am using AI anywhere I can and able to leverage it further within my daily activities. For me, this starts with leadership and creativity. Those are two of the core skills that are essential to really realizing the value from AI adoption. There are certain prerequisites; if you aren't doing these things today, I would highly encourage you to. One, get familiar with AI. The best way to do that is to start using it. Reading about it is not going to help as much as just getting in there and using Copilot, or ChatGPT, or Claude, a lot. We've seen a lot of organizations really drifting more towards Anthropic and Claude, and the Anthropic model is now available inside of Copilot. We've seen people leveraging that specifically; you can shift the model in there in M365 Copilot. But use it regularly.
We've seen a lot of success with people developing a habit. They consciously will say, "I'm going to use AI every day for this amount of time, or for these tasks, and I'm just going to make a habit of this." That will foster familiarity and make it easier to do more with AI, and you're going to find your learning accelerated there. As part of that, experiment with recommended prompts. There's a host of sources for this: Microsoft has a number of them out there, there are a number throughout the web, and you can get these on LinkedIn. Just start experimenting with these, in a safe fashion, obviously. Be smart. But start experimenting and getting familiar with all the things that AI can do for you. This will help you to really trust and embrace AI. And that's kind of the next stage here, where I get past the "okay, I'm learning this" to "hey, I'm comfortable enough to really use this." The thing I'll remind everybody here is: it's trust, but verify. You want to be conscious that, okay, if I'm going to have AI do this for me, I need to be in the loop. I need to be checking. This human-in-the-loop concept is so important here. And it'll help foster further trust as you develop that mindset. You'll be like, okay, I trust this enough; let's verify; okay, I'm good with this, and I'm comfortable continuing to use AI. And realize that AI skills will be essential going forward, and they'll need to continue to evolve. So you'll keep using this, and you'll discover new things that can be done, and new things will be available for you to do as the technology continues to evolve. Finally, you need to learn to manage AI to ensure value. From the IT perspective, I need to ensure trust via governed data and AI tools.
As I'm starting to use AI more and more, there's a responsibility on IT to ensure that, okay, I'm putting some guardrails around this so that you can go ahead and have at it, and you're not going to damage things, you're not going to wreck things; again, help protect users from themselves with this. The other thing here, aside from securing that and putting up the guardrails: you want to make sure that you're measuring the value of this. That's going to reinforce how valuable this can be, help everyone understand, okay, this was useful, this wasn't, and start the collaborating and sharing of those things. And really, from there, developing those leadership skills to manage the agents who are your AI workers. At this stage, I like to think of an agent as an intern, but it's getting better all the time. I'm getting to the point where I might be willing to say that, hey, it's not an intern, it's an early-career associate at some point. It can be pretty powerful. For leadership, there's a host of skills here where, when you apply this to people, the same things apply to agents. Developing good communication skills: if you're using an agent like an AI assistant, you're communicating with it. The clearer your communication, the better your results will be. It's the same thing with people. Being able to set vision: this involves a certain element of creativity, being able to say, hey, here's what I want at the end of this. It then actually helps you with the communication, and with what I want this agent to do for me as I'm putting in prompts. Having strong decision-making: again, this involves a lot of the trust. It also involves the things that I should be doing with this, and maybe not, but also being wise enough to realize, hey, this could do this faster than I can; I should use it for this rather than that. It's another tool. And being adaptable, so that I can decide, hey, I'm going to change my ways and start using this differently.
Conflict resolution, oddly enough, comes into play here too. AI may be telling you one thing; you need to figure out how to manage that. And different people have different approaches towards AI, so this is going to require certain elements of conflict resolution. Being proactive with this: nobody can make people do things; they can encourage it. But the people that are really going to succeed in this space are the ones that are proactively seeking this out. What can I do next? What can I drive with it? That becomes the valuable employee in the AI era: somebody that's embracing it, that is proactively doing more with this and trying to drive value with it. Highly valuable. Delegation: you are now handing off tasks that you might have done yourself. I know I have struggled with this at points in my career, saying, oh, I can do this better. Yeah, but can you? And, more to the point, should you? Just because you can do it better, can you have something else do it well enough? That's really what delegation is: being able to delegate the task to the agent and say, okay, yeah, I'm going to have you do this and I'm going to go do something else, and being wise, again, with decision-making about how you're using your time. Being self-aware: what I'm describing there is a self-awareness that is a great leadership skill to have, to understand yourself, how you progress, and to be willing to move along. Then finally, creativity: just being willing to be creative with the prompts and the different ways you might be able to leverage AI within your everyday work. As far as creativity goes, it really is about rethinking work, rethinking my work day. I used to come in and type this email, and read all these things, and create this, and then approve this, and then go to the next person and have a conversation, and then maybe copy data from one thing to another. A lot of those things can be automated.
A lot of those things can be sped up using some of the generative AI capabilities. Once you're familiar with this, having that AI-first mindset becomes a lot easier, and it becomes more of a habit. And then you can start to seek: oh, where else could I use this? Wow, this is being beneficial. If you're mindful and measuring the value there, it'll be the reinforcement mechanism that makes this easier to adopt. And we'll talk about the AI-first mindset a little further in a moment. But really, what you're doing with that is redesigning processes for autonomy with oversight. Again, trust, but verify. How do I take AI and apply some level of automation and speed to what I'm doing? As I'm doing that, I want to account for the fact that, all right, I've automated this, and it's now going to be a lot faster. Is the next step in the chain going to become my bottleneck? Can I do something there to help speed things along? Because I can shorten this up and make it better for me, but maybe I can also be working with somebody else to say, hey, on this next step, we could do this, and that'll make your life better. Again, that collaboration becomes very important. As I'm doing this and putting these automated agents in place, I'm going to start adjusting the org structure. I need to incorporate AI workers into the org structure and ensure that there's some responsibility for those. Again, human oversight on top of this. All of this we're describing is a continuous improvement loop. We're going to continue to improve things. You don't have to boil the ocean all at once. Just start small, start with things that are low risk, realize the value, and expand and expand and expand. That is the best approach towards driving AI. It's not going to be perfect up front, but if you grab incremental improvements along the way, those ultimately will grow and realize exponential value over time.
And really, this involves an agile approach. If we're going to be continually improving, I need some ability to handle a bit of constant change. How am I continuing to evolve this over time? It requires a more agile approach, rather than the systemized "okay, we're going to handle this huge thing and break it down from there." It's an agile approach: let's get this done; how'd that go? Great, let's learn from it and move on. That AI-first mindset we talked about really involves thinking about agentic automation first. I start with: why am I doing this? Is this something that I could have an agent do for me? If the answers to those are "why am I doing this? I don't know" and "could an agent do this? Yes," you're on your way. The next questions are really: okay, how am I going to do this the right way? How do I measure success or failure for this? It could be a simple task that's very easy, or it could be a little more complex. And it's worth thinking about both success and failure, because I want to ensure that I'm driving towards what success is, but also handling the risks that are involved and ensuring that those things are accounted for as well. How could this process be improved? I may not want to just have an agent do exactly the things that happened before. Could I make some changes to how this happens, so that it could be even better for everyone? How should this be triggered? Is this automated? Does this need to be triggered by a person? How should this workflow be started? We'll talk about that a little bit as we get into the process side of this. What other processes and systems are involved? What data is involved? What are the things I need to account for, that I'm going to have to interact with, in order to get this thing done? What am I as a person interacting with today?
Where does that data come from? What am I touching that I need to take into account, so that as I'm impacting things and doing things, I'm able to replicate that with an agent? Finally, come back again to: what value does a human add to this process? Where do I need to incorporate people, and maybe where do I not? Maybe I'm thinking, oh, I need to have somebody here. Question that, and go with your answer. If it's "yeah, I need a person here," great; ensure that you have a human-in-the-loop element to that. If not, hey, let it run. Make sure you're just providing oversight. This, then, is the leadership and management side of things. And as part of that, really focus on: what are the risks of this? What could this thing end up doing that I'm not going to be happy with at the end of the day? It's a pretty simple mindset, and you can run through these questions, one process after another, and really start building agents pretty quickly through the course of that. So that's the people mindset side. Let's shift over to the process side. How am I developing agents and orchestrating them so that I'm able to provide this automation and actually do these things? We're going to talk about a core model for how to do this. It leverages a lot of how Microsoft has approached it, which provides a pretty good framework. There's a spectrum of agents that can be created. At a simple level, one is retrieval, where I'm just retrieving information. This is that intelligent search: I'm retrieving information from grounding data, having it reason, summarize, and answer questions on this. Again, that's the intelligent search model, where I'm working against certain data, getting answers and responses from it, and it's just speeding up that process, giving me faster answers, and doing some basic processing there. Task-style agents are where I'm actually taking actions when asked.
It may involve some automation at some levels, but it might be triggered by an event or a person at the front end, and it's really replacing those repetitive, mundane tasks. Those are some of the best candidates for agents. It improves the work-life experience, but as you start putting those in place, you will also drastically start realizing improvements in the value that you're deriving from AI over time. As I start chunking those down and taking them on one by one throughout the organization, that adds up quickly. Finally, there's putting in autonomous agents. These operate independently. These are things that have been systematically built, are planned to handle dynamic situations, maybe orchestrating with other agents, and that will learn and then escalate to people where needed. This is really getting into that true digital employee space. So let's talk about each of those a little further. As examples of the continuum here: a help desk agent. How do I connect to the corporate network? It goes back into my knowledge base and provides the information that's retrieved from there. My project tracker agent: I might be looking into a more specific set of data and say, hey, what's phase two of project X and the remaining budget? It's extracting more specific information for me, not just the general knowledge base article, but actually pulling out certain pieces of information and giving them to me, just like that. This device refresh agent example: I want to have a request process for a new laptop and have the approvals all handled by agents through an automated workflow. Great use case. A lot of those approval workflows are really good for these kinds of things. Budget management agent.
How do I set up an agent to review, on a regularly scheduled basis perhaps, outstanding open POs, and start the financial planning or the process associated with how those are going to be handled? And how far can I take that? On the sales and customer support side, I might have a lead gen agent, where an agent is regularly running autonomously to say, hey, I've identified these 15 new leads for you to review, giving you useful information to go after yourself, and just operating on your behalf, doing work that you no longer have to do on your own. You're doing the review, and this is really getting into more of the managing of the output from that. And then a customer support agent, where agents identify new support issues and then triage them out to other agents. That's the kind of thing we're starting to see built into some of the embedded Copilot capabilities we did another session on some time ago, built into Azure and Security Copilot. You're starting to see those things built in the IT world, but we can build them in other elements of the business as well: orchestrating a series of autonomous agents to just take care of things on your behalf. It becomes very important to have the right policy and management structure over the top of that, especially when you get into the autonomous side of things, but it can be done. Very powerful. So let's start first with retrieval. For each of these, we're going to talk about: what's the trigger event that's going to kick this off, what's the instruction that I want it to follow, what's the plan of action that it's going to take, and then, ultimately, the outcome. So here we've got a request that comes in from Copilot. That's a human-initiated trigger. The instruction is the prompt that's been put in there: provide step-by-step instructions based on product knowledge.
Then the plan is: okay, I'm going to search the product information that's contained in SharePoint in this case, answer the questions that come from that, and then generate a response. And you see the instructions here in the agent; it's providing a little more detailed description of what you want out of this. What do you want it to do, and what do you want it to provide? Those things have to be thought about up front, but that's a great example of retrieval. Task agents: again, I've got a trigger. That request could come from events that are happening, but also from people, from things that have come in. Somebody might be interacting with it. Here we've got an example where something comes through Teams, and maybe I've got an agent tied to a Teams chat channel. The instructions the agent is set to follow say, hey, use this HR Copilot to answer the questions that come in there. So if a request comes in through Teams, I want the agent to prepare to respond and help with basic tasks, and in this case it's given instructions for the tone to use. The plan here would be: all right, I have the data sources that I want it to work from. Contoso is our fictional company in this case, so it's going to search that. It's going to search the training resources. And it will also be able to search vacation balances in the systems, if it's got access to that. So you can see here the things that it can interact with and perform these tasks on. And then, based on the query that came through, it'll actually provide the response and potentially provide a link to where we can do more things. Again, the autonomous one: this is triggering off of events, triggering off of schedules. It's not necessarily initiated by a person; it's set up to react to things that are happening in the environment. Again, I've got my instructions for what I want it to do.
I have a set of systems and data it's going to work from, and it can organize different workflows depending on the ask, then provide its responses and take action from there. I can extend those autonomous agents further so they work independently, based on schedules or on events happening within my environment, to automate long-running processes. This could be: I set this to run on a schedule, and it takes care of a series of steps for me that I no longer have to do myself. That might have been three or four people before, passing information around. I can set up a set of autonomous agents linked together in a workflow, and they'll handle all of those tasks on a regular basis far faster than people will, because people are busy doing other things. I can provide some level of dynamic reasoning, and I can scope it. This, again, is where policy and management become very important, but I can allow the agent some range to make decisions within the guidelines I set. I need those guardrails, and when the agent exceeds them, I've got a mechanism for it to ask a person for help, to make sure the decisions being made are the right ones when certain boundaries are crossed. To do that, you've got to continually monitor performance and continually adapt the instructions. This gets into managing the lifecycle of agents. We've got these intents at the front end, and I need to keep watching over time, running evals and test prompts, to ensure the agent is providing the right responses and that it's a trusted product and a trusted part of the organization.
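The agent design framework walked through above, a trigger, instructions, a plan built from accessible data sources, and a measurable outcome, can be sketched as a simple data structure. This is an illustrative model only; the class and field names are assumptions, not a real Copilot Studio API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentDefinition:
    """Hypothetical shape of an agent definition: trigger, instructions,
    plan (data sources), and expected outcome, per the framework above."""
    name: str
    trigger: str                      # "human", "event", or "schedule"
    instructions: str                 # the prompt and guardrails given to the agent
    data_sources: list[str] = field(default_factory=list)
    outcome: str = ""                 # the measurable result we expect

    def describe(self) -> str:
        return (f"{self.name}: triggered by {self.trigger}, "
                f"works from {', '.join(self.data_sources)}, "
                f"aiming for: {self.outcome}")

# The retrieval agent from the walkthrough, expressed in this shape.
retrieval = AgentDefinition(
    name="ProductKnowledgeAgent",
    trigger="human",
    instructions="Provide step-by-step instructions based on product knowledge.",
    data_sources=["SharePoint: product information"],
    outcome="Accurate answer generated from product docs",
)

print(retrieval.describe())
```

Writing the definition down this explicitly, even on paper, forces the up-front thinking the session describes: what kicks the agent off, what it may touch, and how you will know it worked.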
You're retraining an agent much like you might retrain a person who isn't making quite the right decisions or doing quite the right things. You've got to continue to monitor these things and make sure it's going well, but when it is going well, it's going to save you and your organization a lot of time and provide a lot of value. Then, beyond my autonomous agents, I can combine them with other agents: triggers set up in multiple ways, behavior described in multiple ways, connections to many different systems, and chains of agents that work across multiple channels. It doesn't have to be just within Copilot. It could be on the web or in mobile apps, and you can build these into outside systems as well. I can extend these to meet people where they're at. In orchestrating these, to touch on that quickly, describing the agent workflows becomes very important, where I can spell out in more detail how I want things to perform. I can leave simple things to the agent's discretion, but certain things should happen a certain way; I can allow certain decisions to be made while dictating what those decisions are by organizing the workflows. We're showing an example of this being done in Copilot Studio, but there are a number of different ways to craft it. It allows me to have approvals in there, to bring humans back into the loop, and to orchestrate a series of tasks into a fuller workflow for what I'm trying to accomplish. Copilot can be a UI for AI. If I need a UI at all, and not all agents will, those autonomous agents may not, I'd certainly encourage you to drive things through Copilot. You don't want to create a number of different UIs
for people who are left asking, okay, where do I go to do this versus that? If you have it in Copilot, Copilot exists in so many places. I can call those agents from the different Copilot instances that already exist within my environment. It's a great UI for AI, a way to interact with all the agents that are out there. It doesn't have to be the UI for a specific agent if there's a need to embed that agent in another system, but for general agent use cases this can be very powerful: it meets people where they're at rather than making them hunt for things, and that increases adoption. One other thing, and this comes in with some of the Cowork capabilities emerging now: a lot of what we've talked about is agents working through connectors behind the scenes, with some interactions that are more visible. But there are plenty of systems out there that can't be connected this way yet, where today I need to go through the human interfaces, the UI, for those systems. This is where Windows 365 for agents gets really interesting. I can spin up a virtual desktop, have the agent interact with the specific systems inside of it as part of the agent workflow, and then tear it back down. It lets the agent work those human interfaces on my behalf, for example filling in a form for a third-party SaaS provider where I don't have back-end connectors to interact more directly. This lets me leverage agents in those workflows, further extending what's possible, automating things where historically I've got an Excel spreadsheet and now I've got to re-enter it into a proprietary application. I could use an agent to do that; I just have to be able to reach the application. Windows 365 for agents and Cowork allow me to do that.
I spin it up quickly, it gives me the application instance, I have Cowork operate against it, it enters the information, and then it ends the session. Very cost-effective for use cases like that, until I get a more direct connection, which you will see more and more of over time as systems become accessible via Model Context Protocol and some of the other emerging standards for delivering more agent autonomy across applications. On agent tooling: I'd encourage everyone to start getting familiar with building agents yourself in that Copilot instance. Get familiar with what's possible and a bit of the logic that goes into it. It will get you thinking about what's possible for agents even if you're not going to be the one building them, and it lets you bring an informed viewpoint to what you ask IT or others to build on your behalf, and see the possibilities AI offers your business. IT doesn't know what you're doing all day; you do, and you know when something feels repetitive and could be automated. Be aware of that, and ideally have a collection point within the organization for those agentic ideas, so that people who understand the intricacies can build them on your behalf, and others may benefit as well. For makers, Copilot Studio is a great tool for this. There are obviously others out there too, but if you're operating within the Microsoft ecosystem, Copilot Studio is pretty powerful. It's been extended quite a bit, particularly with the inclusion of Model Context Protocol servers, Work IQ, and now the ability to extend out into Azure AI as well.
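The orchestration pattern described earlier, chaining agents into a workflow with a human-in-the-loop approval checkpoint, might be sketched in miniature like this. Everything here is illustrative: the step functions, the `human_approves` stand-in, and the ticket shape are assumptions, not how Copilot Studio actually wires workflows.

```python
def human_approves(payload) -> bool:
    # Stand-in for a real approval step (e.g. an approval card in Teams).
    # Always approves in this sketch.
    return True

def run_chain(payload, steps):
    """Run each agent step in order; steps flagged as needing approval
    pause for a human check before the chain continues."""
    for agent_fn, needs_approval in steps:
        payload = agent_fn(payload)
        if needs_approval and not human_approves(payload):
            return None   # a human rejected the output: stop the workflow
    return payload

# Two toy "agents": one triages a support ticket, one drafts a reply.
triage = lambda t: {**t, "priority": "high" if "outage" in t["text"] else "low"}
draft  = lambda t: {**t, "reply": f"[{t['priority']}] We're on it."}

result = run_chain({"text": "outage in region west"},
                   [(triage, False), (draft, True)])
print(result["reply"])
```

The design point is the second tuple element: the workflow author, not the agent, decides which steps may run autonomously and which require a person to sign off.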
So now I can build these agentic workflows in Copilot Studio, and even if I've got an AI model that requires more sophisticated services than Copilot Studio itself provides, those can be called from within an agentic workflow built in Copilot Studio where that's appropriate. So again, getting familiar up front with building agents and what's possible helps you lead the organization into: oh, but we could be doing this, and this, and that. It will help everybody find the best next cases to automate and keep iterating, starting that continuous improvement cycle. We've talked about people, we've talked about process; let's talk about the underlying technology. People may say: that's great, we get that we need to be thinking AI-first a little more, and we get some of what needs to be in place for that, but I'm really concerned about having AI operate in my environment without visibility or control. Well, there are mechanisms for that too. When I think about this, the first thing that comes to mind is observability. How do I monitor my managed agents? How do I keep track of what they're doing so I can keep improving them over their lifecycle and make sure they're not doing things they shouldn't? Because they're managed, I'm probably trusting them a bit; let me continue to trust but verify throughout each agent's lifecycle. I also want to identify unmanaged agents and AI in my environment. I need to know if people are doing things with AI that maybe they shouldn't, or that I'm not aware of. Maybe, because I'm not providing certain capabilities, people are going outside of them.
That's what inspires most shadow IT. It's not nefarious most of the time; most of the time, supply simply isn't meeting demand. So it's a great source of information: why are people using this? Let's find out and bring them into a more managed, trusted environment. Then there's managing and observing agent interactions, not so much with users as with data, with systems, with other agents. What is this agent interacting with? Am I comfortable with what's flowing through there, and do I have the right controls on my data so nothing is surfaced that shouldn't be? This thing has the power to take action on these systems; am I comfortable with that? Do I need to be watching something there? And then with other agents: this agent is fine, but the next one is tied into it, and the two are now linked together. How am I ensuring nothing is passed off indirectly that becomes a risk of some kind? Being able to observe all of that becomes very important as I expand my portfolio of agents within the organization. Along with being able to see it, the next layer is: how do I apply governance? One of the first things you should do is treat agents like you would people and give them an identity, so you can track them; it helps with observability and management. By putting an identity on the agent through Entra, I can control who can access it, apply policies to that specific agent, observe what it's working with throughout the course of its life, and control the permissions of the systems it's accessing. It allows for centralized control, much like you have with people. If I've got a person interacting with my environment, I'm certainly doing all of that today. I need to apply those same principles to agents.
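The "treat agents like people" idea above, giving every agent an identity and an accountable owner so unowned agents can be flagged, can be modeled in a few lines. This is a conceptual sketch only; in practice this role is played by identity services like Entra, not a hand-rolled list.

```python
# Hypothetical agent registry: every agent gets an identity and an owner.
# An agent with no owner is exactly the "unmanaged agent" the talk warns about.
registry = [
    {"id": "agent-001", "name": "HRHelper", "owner": "it-team"},
    {"id": "agent-002", "name": "LeadGen",  "owner": None},      # unowned!
]

def unmanaged_agents(agents):
    """Return the names of agents that have no accountable owner."""
    return [a["name"] for a in agents if a["owner"] is None]

print(unmanaged_agents(registry))
```

The point is less the code than the invariant it checks: an observability layer should be able to answer "which agents exist, and who owns each one?" at any time.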
Apply policies to agents: those could govern which systems they can and can't access, or they could be security policies. Have a mechanism for managing the lifecycle of agents. I'm monitoring agent performance over time: is it doing the things I'm comfortable with? Do I need to stop it for a while, make some changes, and then reinstate it? Or, over a longer period, could we retire this agent? We need to think about that so it doesn't linger and become a legacy risk, still out there and still able to access things the old way when we now have a more streamlined approach. And then, as many have heard before, ensuring data governance here is so important: making sure I'm not exposing data to everyone who maybe shouldn't see it, and ensuring that agent interactions and agent processing happen against trusted data, not just any data that's out there. The agent doesn't know. You need to provide that governance so it knows this is trusted data and so it provides trusted responses, again fostering adoption. On the security side, there are a host of new cyber threats that AI presents. There are a lot of tools to manage them, many of them existing tooling, but it's important to be aware of those new threats and make sure you're not creating new exposures for yourselves as you go. Data security again, obviously, but also controlling access to the agent itself. I may have an agent operating on sensitive financial data. Not everybody should be able to access that agent; not everyone should even be aware it exists. Only those people with the right to see the underlying information, or to take those actions, should be able to access it.
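The lifecycle described above, pause an agent while changes are made, reinstate it, or retire it so it doesn't linger as a legacy risk, is essentially a small state machine. The states and allowed transitions here are illustrative assumptions, not an Agent 365 feature.

```python
# Hypothetical agent lifecycle: active <-> paused, either -> retired.
# Retirement is terminal, so a retired agent can't quietly come back
# and become the legacy risk described above.
ALLOWED = {
    "active":  {"paused", "retired"},
    "paused":  {"active", "retired"},
    "retired": set(),
}

def transition(state: str, target: str) -> str:
    """Move an agent to a new lifecycle state, rejecting invalid moves."""
    if target not in ALLOWED[state]:
        raise ValueError(f"cannot move {state} -> {target}")
    return target

state = "active"
state = transition(state, "paused")    # stop while we make changes
state = transition(state, "active")    # reinstate after review
state = transition(state, "retired")   # eventually retire it for good
print(state)
```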
I also need to control agent-to-agent interactions. Again, agents are like people: should this agent that everybody can access be able to call this more sensitive agent? Ensure those controls are in place. So there's a lot to think about, but there are tools out there to help. Enterprise data protection is so important here; those access controls and broader agent governance over time are the main themes. Some of this we'll dive into more in our next session, but I wanted to touch on it today, especially given that we are currently a day away from the release of Microsoft Agent 365, which should help with a lot of this out of the gate for those who will have access to it. Certainly worth taking a look at. The idea is that it provides a centralized control plane, not only for Microsoft agents but for third-party agents too, so the whole agent estate can be monitored, managed, observed, governed, and secured, which is what you're going to start to see happen. On observing: I'm monitoring and managing agents in real time. What's happening with these things? What are they interacting with? Across all the agents I have out there, I want a viewpoint into, say: we've got a lot of these and no owners for some of them; I need to make sure I've got people accountable for them. On governance: what are the guardrails? What systems and data am I comfortable with these agents accessing, and what am I not? Put those guardrails in place so that as people use agents, and build them more widely, the core guardrails are already there. And then the core security measures that have to be in place to protect the enterprise.
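The agent-to-agent control point above, a broadly accessible agent should not automatically be able to call a more sensitive one, can be expressed as a simple tier check. The sensitivity tiers and the rule itself are assumptions for illustration; a real deployment would enforce this through identity and access policies rather than code like this.

```python
# Hypothetical sensitivity tiers for agents. Higher number = more sensitive.
TIER = {"public": 0, "internal": 1, "restricted": 2}

def may_call(caller_tier: str, callee_tier: str) -> bool:
    """Assumed rule: an agent may only call agents at or below its own tier,
    so a public-facing agent can never reach a restricted one."""
    return TIER[caller_tier] >= TIER[callee_tier]

# A restricted finance agent may call an internal HR agent...
print(may_call("restricted", "internal"))
# ...but an agent everyone can access must not reach the finance agent.
print(may_call("public", "restricted"))
```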
I've got a few screens here showing what Agent 365 will provide. We're seeing a view of the list of all agents and, having clicked on one of them, the activities being monitored for it. You'll notice it's showing things related to Purview, Entra, and Defender. What Agent 365 is doing, particularly in the security and governance space but also in observability, is taking the information that Purview, the Entra Suite, and Defender provide and bringing it together in one focused dashboard view related specifically to agents. Where I might otherwise have to visit this dashboard and then that one and triangulate the information, it's now all surfaced in one place. Very beneficial for managing things over time, particularly as I have groups more focused on agents and as I'm building more of them. It also provides a map where I can see the interactions between agents. Right now these are grouped based on the nature of the agents and who created them, but it does show that interconnectivity, a visual view of what's out there. I can apply policy templates here, which can include identity protection and visibility. I can apply conditional access: as I put identity on agents, the conditional access capabilities from Entra come into play, so I can control where people can access agents from, and from what devices, and what I want to prompt for when somebody interacts with an agent. I can put a host of core security and governance controls on top of the agents. I also have an activity explorer here; this comes from Purview but can be brought up from Agent 365 to show me what kind of activities I'm seeing across a given agent.
I can even drill into what prompts are being submitted and see more detail on what's happening. I can put security controls on here, and I have the ability to identify agents at higher risk than others: maybe one is sharing information I don't want it to, or exposing systems I didn't realize it was because of the connections hanging off of it. Very useful for managing this fleet, or portfolio, of agents over time. You see here security controls being applied, the security events and incidents that are occurring, and the ability to manage them; that broader security management of agents is going to be required as you have more of them over time. And you see that this gets triggered through Defender, so the existing tools in place are providing the information and control mechanisms, usable from Agent 365 in one central place. So, in enabling AI and moving toward a Frontier Firm approach: we've talked about people, process, and technology. Secure the technology to meet the demands of the modern business world and AI; enable people to work with AI responsibly and achieve more from it. The intent is: let's get people going, and with that secure technology basis, I can trust they're going to do the right things. Finally, how am I transforming the business and its processes, gaining intelligent insights, and improving the work-life experience? All of those things are required in order to realize the benefits of the Frontier Firm at the end of the day. If you're interested in talking about any or all of this, we'd welcome a conversation with you.
Please reach out to us, and we're happy to schedule a 20-minute session to discuss this further. Thank you so much, and please join us again in May, when we'll be diving in more detail into the governance and security of AI. We're happy to respond to any questions you might have through the chat or Q&A. Thank you all again.