Insights

View Recording: Frontier Firm Part 3: From Org Chart to Work Chart – Humans & Agents Working Together

March 25, 2026

The future of work is not humans or AI—it’s humans and AI working together. In Frontier Firm Part 3: From Org Chart to Work Chart, we’ll explore how organizations can move beyond traditional org charts to optimize workflows, increase productivity, and empower teams by integrating AI agents alongside human workers. Learn practical examples of how agents can work alongside people to handle repetitive tasks, augment decision-making, and free your human teams to focus on higher-value work. Walk away with strategies for designing workflows where humans and AI collaborate seamlessly.

In this session, Brian Haydin, Solution Architect at Concurrency, explores how organizations move beyond using AI as a personal productivity tool and begin redesigning how work actually gets done—by pairing humans and AI agents as teammates. Building on Microsoft’s Frontier Firm research, this webinar focuses on Phase 2 AI maturity: the moment when agents stop acting like one‑off assistants and start functioning as digital colleagues with defined roles, boundaries, and accountability. Brian walks through what this shift really means for teams, leaders, and operating models—moving from static org charts to dynamic work charts organized around outcomes, capabilities, and coordination. Rather than demos or hype, this session delivers practical, real‑world patterns for human‑agent collaboration, including coordinator agents, strategic advisor agents, and multi‑agent specialist chains. You’ll learn how to structure roles, design handoffs, establish ownership, and measure success—so AI adoption leads to durable ROI, not chaos or automation theater.
WHAT YOU’LL LEARN

In this webinar, you’ll learn:

- How Microsoft defines the three phases of AI maturity—and why Phase 2 (human + agent teams) is where most organizations should focus right now
- What makes an AI agent a true digital colleague, not just a chatbot
- The six defining traits of effective agents, including clear roles, bounded scope, handoffs, and human accountability
- How to transition from org charts to work charts that organize teams around outcomes and capabilities
- What the human‑agent ratio is, why it matters, and how it differs across functions like HR, marketing, operations, and compliance
- Three practical agent patterns organizations are using today:
  - Coordinator agents that eliminate follow‑up friction and prevent work from falling through the cracks
  - Strategic advisor agents that help leaders focus attention where judgment and relationships matter most
  - Specialist agent chains that break complex deliverables into research, creative, and execution stages
- How to design effective handoffs between humans and agents without losing quality, ownership, or trust
- Why broken processes don’t improve just by adding AI—and how to avoid scaling dysfunction faster
- The metrics that actually matter for Phase 2 adoption, including follow‑through rates, exception rates, quality consistency, and human focus shift
- What Phase 2 prepares you for as organizations move toward agent‑operated workflows—and why skipping this step creates risk

FREQUENTLY ASKED QUESTIONS

What does “Phase 2” AI maturity really mean?
Phase 2 is when AI agents take on ongoing, role‑based work alongside humans. Unlike one‑off assistants, these agents participate in workflows over time, with clear responsibilities, escalation points, and human ownership.

How is a digital colleague different from a chatbot?
A digital colleague has a defined job, clear boundaries, access to real tools and data, the ability to work across steps, and explicit human accountability.
Without those elements, it’s just a chatbot wearing a disguise.

Will agents replace people in Phase 2?
No. Phase 2 is about structured delegation, not replacement. Humans stay accountable for goals, approvals, exceptions, and relationships, while agents provide operational muscle like coordination, monitoring, research, and draft execution.

What is the human‑agent ratio?
It’s a way of measuring how many agents operate alongside each human in a given role. The right ratio depends on the type of work—high‑judgment roles require fewer agents, while repetitive, bounded work can support many more.

What’s a good first use case for an agent?
Start with coordination‑heavy work that’s repetitive, bounded, measurable, fault‑tolerant, and prone to being dropped—like follow‑ups, status tracking, onboarding steps, or exception monitoring.

How do you know if a human‑agent model is working?
Measure it like work design, not a demo: time saved, follow‑through rates, output volume, exception rates, consistency, and whether humans are truly spending more time on judgment‑heavy, high‑value work.

Are agent‑operated workflows the next step?
Yes—but only after Phase 2 fundamentals are in place. Organizations that skip role clarity, handoffs, ownership, and metrics risk scaling errors instead of outcomes.

ABOUT THE SPEAKER

Brian Haydin, Solution Architect at Concurrency, helps organizations navigate practical AI adoption by redesigning work, not just deploying tools. His focus is on human‑agent collaboration, governance, measurement, and real operating models that scale responsibly. Brian works closely with leaders to translate emerging AI capabilities into clear roles, workflows, and outcomes that actually deliver value.

TRANSCRIPT

Brian Haydin 0:05
All right. And we’re live. So welcome, everybody. I’m Brian Haydin. I’m a solution architect here at Concurrency, and I’m really glad that everybody’s able to join me today.
I’m not sure if you’ve all been here for part one and part two of this series. But if you have, you’ve already got a little bit of a foundation. In part one, we really talked about readiness, culture, and governance, a little bit about the organizational groundwork that you need to feel safe and effective in AI adoption. And then in part two, we shifted a little bit to the individual experience. So this is how tools like Microsoft Copilot can help people get real value just in the day-to-day work. So today we’re going to talk a little bit differently. It’s going to be a bridge between those two ideas. We’re moving away from where AI helps me work faster into more how AI becomes part of how the team gets work done. So it’s going to be a little bit different conversation, and it’s less about personal productivity and more about work design. It’s going to get a little nebulous because we’re going to talk about team structure and how humans and agents actually start to collaborate together. And in my opinion, this is the conversation that most organizations really need to start having right now. So let’s zoom back a little bit and pull the whole picture into view. Microsoft’s Frontier Firm research lays this out pretty clearly, and they do it in three broad phases of AI maturity. Phase one is where a lot of organizations are today. This is how AI mostly shows up: as a personal assistant. You prompt it, it helps you with a task, and then you move on. Think things like ChatGPT, Copilot, Claude. That’s what a phase one type of assistant is. Phase two is where things get way more interesting. This is where agents start to join the team as digital colleagues. They have specific roles, clear boundaries, and they work alongside people over time. Finally, there’s phase three, and we’re not going to talk a lot about it today.
But this is where agents take on end-to-end workflow execution, though they do so under human oversight, governance, and policy. So today we’re intentionally going to stay in this middle lane. It’s not just about single-user copilots, and we’re not jumping all the way to full autonomy. We’re just going to focus on human-agent teams, because that’s where the practical work is actually happening for most organizations right now, and it’s where this stops being demos and starts becoming real things and a real operating model. So let me show you what we’re going to cover today and make that middle phase a little bit more real. First, I want to get really clear on what phase two actually means and what makes a digital colleague different from just another chatbot. And I want to introduce a metric that I think more leadership teams are going to need to start paying attention to, and that’s the human-agent ratio. From there, I’ll talk about three practical examples that show different ways that we’re using agents, me personally as well as my customers, and different ways they can join the team and contribute real value to your work. Then we’ll pull back and look at the design rules behind those examples, the patterns that actually make human-agent teams work. Finally, I’ll close with a quick look ahead at where this is all going. We’ll dive a little bit deeper into that phase three. But with that, let’s talk about why phase two matters right now. If your team is using Copilot every day and you’re starting to bump into the limits of these one-shot assistants, maybe you’re asking things like, OK, this is helpful, but what else can this actually do? Then you’re probably ready to start having this conversation, and that’s what phase two is about. It comes after the initial wow factor, when the novelty has worn off. Now the question becomes: how do I actually redesign the work that I’m doing around all this?
And we’re starting to see that happen pretty clearly across the ecosystem. Microsoft is moving Copilot beyond just a simple assistant into more of an embedded agentic experience with long-running, delegated work. You’re starting to see things pop up in Copilot and in Word, things that are branded as Copilot co-work, or assistants or agents that are actually doing work on the things you’re working in. And it becomes a broader model where agents can take on these defined roles. At the same time, WorkLab is framing employees more like agent bosses, and we’ve heard that out in the marketplace now for a little bit. People aren’t just using AI; they’re actually delegating tasks to it, managing it, or supervising digital workers as part of how the work actually gets done. The tooling is already there. You’ve got Copilot Studio for low-code scenarios, and if you need pro-code scenarios, there’s the Microsoft Agent Framework, which is really effective. And somewhere in the middle, or maybe tangential to the side, are standards like MCP that are making it easier to connect agents to real tools and real data. So this isn’t a far-off, maybe-someday kind of conversation. This is stuff that’s available right now, and these building blocks are on the table. So what does a digital colleague actually look like? I want to get a little bit more specific. A digital colleague has six defining traits. First, it has a named role. In plain English, that means it has a job to do. And honestly, if Clayton Christensen isn’t somewhere in the background asking what job this thing is hired to do, you’ve probably already gotten yourself a little bit off track. Second, it has to have a bounded scope. What I mean is it knows what it’s responsible for, and it knows just as importantly what it’s not responsible for.
Third, it has to have the right context and the right tools in order to do the job. That means reading data, actually connecting to the data, interacting with these systems in a real-world scenario, and being attached to the real business process. Fourth, it can work across steps over time. It’s not just giving you a one-off answer and disappearing into the background. It’s actually participating in the workflow with you. Fifth, it needs to have very clear handoffs. What I mean by that is it needs to know when to pass work off to a person, or when it might have to hand work off to another agent. And finally, there’s human ownership and oversight. Somebody has to be accountable for what an agent is doing. And that matters a lot. If it doesn’t have a role, if it doesn’t have boundaries, and if it doesn’t know how the work is going to get handed off, then this isn’t what I would call a digital colleague. It’s just a clever little chatbot that’s wearing a fake mustache. And I think we’ve all kind of met that guy. So if a digital colleague is going to start joining the team, then the way we organize work has to change with it. And that’s really what this session is about: how that shift is going to occur. For decades we’ve been organizing work around a standard org chart, and what that meant is boxes and lines that show who reports to who. That model assumes capacity is basically the same thing as headcount, teams are fairly fixed, and staffing decisions tend to happen in planning cycles. And that’s going to break down now that agents are starting to join the team. A work chart organizes around outcomes and capabilities instead of just titles and departments. Capacity becomes a blend of human judgment and digital execution. Teams form around goals, and then they can reconfigure as those goals start to change.
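The six traits above can be made concrete as a simple checklist. This is an illustrative Python sketch, not an API from Copilot Studio or any Microsoft SDK; all the names are my own:

```python
from dataclasses import dataclass

@dataclass
class AgentRole:
    """One agent's operating contract: the six traits of a digital colleague."""
    name: str                # 1. a named role -- the job it is hired to do
    scope: list[str]         # 2. bounded scope: what it IS responsible for...
    out_of_scope: list[str]  #    ...and, just as importantly, what it is NOT
    tools: list[str]         # 3. real context: systems and data it connects to
    multi_step: bool         # 4. participates across workflow steps over time
    handoff_to: list[str]    # 5. who (human or agent) receives its work
    human_owner: str         # 6. a named human accountable for the outcomes

def is_digital_colleague(role: AgentRole) -> bool:
    """If any trait is missing, it's a chatbot wearing a fake mustache."""
    return all([
        role.name,
        role.scope and role.out_of_scope,
        role.tools,
        role.multi_step,
        role.handoff_to,
        role.human_owner,
    ])
```

A team could use something this small as a review gate: no agent goes to production until every field is filled in and a real person's name sits in `human_owner`.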
It’s a little like putting together a fishing crew. I do a lot of fishing on Lake Michigan, and I need to put together a crew to go out based on the conditions. You’re not just looking at who’s on the roster; you’re thinking about who is best at what and what’s best suited for the job, or for the weather conditions in front of you. And that’s what this shift is about. Your role doesn’t disappear, but it’s going to evolve. So a procurement manager, and maybe there’s a couple of you out there, is still a procurement manager, but now that person is also going to be directing two or three digital colleagues. Microsoft is calling that the agent boss. It’s not just a catchy label; it points to a new set of skills. You have to know how to brief an agent, review its work, set its boundaries, and you need to know when to pull it off a task. That’s the new leadership muscle, the new agent boss muscle, that we all need to start learning. So with that in mind, let’s take a look at the patterns in the market that are really starting to make this a real scenario. Across the ecosystem, the winning pattern right now is not one giant magic bot that’s going to do everything. I steer away from that. What’s actually emerging is a much more practical model: specialized agents connected to tools, working across multiple steps, with a human reviewing each one of those steps and keeping much better control. You’re seeing that in Microsoft Copilot and the Copilot co-work tooling, which is allowing us to do longer-running, delegated work. You’re seeing it in Copilot Studio for low-code agent scenarios and Agent Framework for the more pro-code orchestration. And then you’re seeing it in standards like MCP, which has been the big buzz lately and makes it easier to connect to the actual tools and the actual data. But you’re also seeing more attention going into evaluation and observability.
I’ve been talking a lot about this just last week, and I’m doing another talk about it in a couple of weeks, because if you can’t measure what an agent is actually doing, then you probably shouldn’t be trusting that agent in the first place. And there’s one more pattern that I think is going to be especially interesting to watch. Not every digital colleague has to live in the cloud. For the last year or two, people have been picturing agents as tools or licenses that you get, maybe a shared workflow agent or Copilot Studio agents doing cloud-run automation. We’re still going to be doing that, but that’s not the whole story anymore. That shift has become a lot more visible over the last month. It’s not just OpenClaw, which was the big rage for the last three weeks. You’re now seeing mainstream tools like Claude starting to push this further into a desktop-based experience. What this is telling me right now is that this pattern is really starting to get a lot of traction. I go back to Microsoft Build last year, when they announced Foundry Local, and I just keep thinking to myself, this has been on the roadmap for quite some time. And this is going to matter, because some of the most useful digital agents are not just giant enterprise orchestrators. They’re the ones that are helping somebody prep research, monitor, draft emails, and keep the work moving throughout the day. So when I look at phase two, I think the future is some sort of hybrid. Some agents are going to be team level, some are going to be process level, and some are going to be a little bit more personal. The important point is this: the infrastructure for phase two isn’t just hypothetical anymore. It’s here now, and it’s starting to ship inside of your organization.
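The point about measurement can be made concrete with even a minimal action log. The structure and field names below are my own illustration, not taken from any agent framework; the two rates are the ones the session keeps coming back to, follow-through and exceptions:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AgentAction:
    task_id: str
    kind: str           # e.g. "reminder", "draft", "escalation"
    completed: bool     # did the agent carry the task through to its handoff?
    escalated: bool     # did it hand the task to a human as an exception?
    at: datetime

def follow_through_rate(log: list[AgentAction]) -> float:
    """Share of logged tasks the agent actually carried to completion."""
    return sum(a.completed for a in log) / len(log) if log else 0.0

def exception_rate(log: list[AgentAction]) -> float:
    """Share of tasks that needed a human; a rising rate suggests the
    agent's scope is drawn wrong."""
    return sum(a.escalated for a in log) / len(log) if log else 0.0
```

If you can't produce a log like this for an agent, you can't compute either rate, which is a practical way of saying you probably shouldn't trust it in production yet.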
So let’s talk a little bit about what phase two isn’t, because I think that’s where some organizations are going to get into trouble. It’s not a single assistant answering random prompts anymore; that’s still something I would consider phase one. And it’s not zero-governance autonomy where agents are running wild. Right now that’s more fantasy than strategy, and honestly, it’s a pretty dangerous fantasy. It’s not about replacing every human judgment call either. The whole point is that humans stay in the loop where judgment, accountability, and exceptions still matter. And it’s definitely not about sprinkling AI everywhere without rethinking how the work actually gets done. Phase two really is about structured delegation, not chaos with some sort of multi-thousand-dollar GPU budget. Going back to the fishing a little bit: you don’t just throw every line in the water and hope that something happens out on Lake Michigan. I set up these spreads deliberately based on the conditions, the water depth, the temperatures, and what species I’m actually targeting that day. If you think of it in that frame, that’s what good human-agent teaming is going to look like too. So let’s talk about a couple of the hard truths, some lessons that people need to think about. This is the part where I’m going to say the quiet part out loud. Most organizations are aiming too high right now, and they’re aiming too vaguely. They’re chasing autonomy when right now they need to be focusing on redesigning the work and the coordination. The first real wins don’t come from some super-genius super agent. They come from making sure the work keeps from getting dropped and actually gets done. Second, a broken process does not become smart just because you wrapped it in AI.
If your handoffs are a mess, if the ownership is fuzzy, and nobody agrees on what done means, then just building an agent is not going to save you. It’s going to help you fail faster, but maybe with a little bit better branding. Third, if no human owns the downside, then the agent doesn’t belong in production. Delegation without accountability is not really transformation; it’s negligence in nicer packaging. And finally, the first real ROI usually comes from follow-through, not brilliance: coordination, reminders, monitoring, escalation, persistence. These aren’t glamorous words, but they’re the valuable ones. So this slide, I think, is kind of the big one. If there’s one slide that you take a screenshot of, I’d probably start with this one. The operating contract between humans and agents is actually pretty simple. Humans own the goals, the priorities, the approvals, the exceptions, the relationships, and, I don’t want to lose sight of this one, the accountability for the outcomes. Agents own more of the operational muscle. What I’m talking about here is coordination, research, drafts, monitoring, summaries, and maybe even a first-pass execution. And notice what stays on the human side. Importantly, it’s accountability. Not just a human in the loop as some sort of checkpoint or a checkbox for governance, but real ownership. If an agent sends the wrong follow-up to a supplier or misflags a compliance issue, somebody in the organization still needs to answer for that. And if you can’t name the human that owns the downside of that agent’s output, then you don’t have delegation. What you have is abandonment. That operating contract is what separates well-designed human-agent teams from a chatbot demo that just happens to look good in some sort of leadership deck. And once you have that contract in place, you can start thinking about the next question.
How do you operationalize this at scale? That’s where this next idea comes in. Here’s a metric that probably wasn’t on any dashboards last year, and I think a lot of leadership teams are going to start talking about it and caring about it soon: the human-agent ratio. If you haven’t heard of it, it’s pretty much what it sounds like. For any given job function, how many agents are working alongside each human? And that answer is going to vary a lot based on the scenario. In high-touch functions like HR, client strategy, or some of the executive relationships, you might be looking at something closer to one human working with one, two, maybe even three agents, but not much more than that. That work is really nuanced, very judgment-heavy, and often relationship-driven. Agents can help with the research and the prep and maybe even drafting some things, but you still want a human that’s deeply involved in the interaction to make sure it stays on track. In the middle, think more around marketing and customer service roles, maybe project management. Here you might see one human managing five, maybe even up to 20 agents across different campaigns, tickets, and work streams. There’s more delegation, but there’s still a meaningful creative and judgment component that you don’t want to lose sight of. And then in some of the more process-heavy environments, think supply chain and logistics, compliance monitoring, definitely something like data reconciliation, you could see one human overseeing 50, 100, maybe even multiple hundreds. But these are repetitive, really bounded tasks. The human is still there, but in more of an oversight role, managing the exceptions. Not the relationships, and not the accountability.
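The ratio bands described above can be written down as a simple sanity check. The numeric ranges come straight from the talk; everything else here, the function-type labels and the check itself, is an illustrative sketch of my own:

```python
# Agents per human, by function type, as described in the session.
RATIO_BANDS = {
    "high_touch":    (1, 3),     # HR, client strategy, executive relationships
    "mixed":         (5, 20),    # marketing, customer service, project mgmt
    "process_heavy": (50, 200),  # supply chain, compliance monitoring, recon
}

def check_ratio(function_type: str, agents: int, humans: int) -> str:
    """Flag a function whose human-agent ratio is out of band in either direction."""
    low, high = RATIO_BANDS[function_type]
    ratio = agents / humans
    if ratio < low:
        return "under-delegated: value left on the table"
    if ratio > high:
        return "over-delegated: oversight risks becoming rubber-stamping"
    return "in band"
```

The point of the two failure strings mirrors the talk: getting the ratio wrong in either direction creates a problem, not just overshooting it.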
So the question for your organization isn’t how many agents do we buy. A better question right now is: what’s the human-agent ratio for each one of these functions, do we have the capacity to manage that ratio, and can we do it responsibly? Because if you get it wrong in either direction, you’re going to create problems. Too few agents, and you leave a lot of value on the table. Too many, and human judgment starts to erode because you can’t keep up with it. Oversight turns into rubber-stamping, and that’s where the risk shows up. So with that in mind, where do you start? Let’s talk about what makes a good first digital colleague. Not everything should be a digital worker or a digital colleague. Just like in the fishing analogy, you’re not going to set every line at the same depth. You’re going to look for different patterns and different opportunities, read the conditions, and target things accordingly. So here’s how I might frame up a good first candidate. Start with work that’s really repetitive. I know this is going to sound repetitive, but it’s the stuff that happens over and over again. We’ve been talking about this for a long time, but that’s where these agents are going to provide the most value. But make sure it’s bounded. Make sure that there’s a clear start, a clear finish, and a clear definition of done. You’re not looking for some open-ended kind of creative work or a journey that you’re going to go on; you need to make sure that it’s bounded. But it should also be easy to measure. You want to be able to tell if it got done, if it was done poorly, or if it wasn’t done at all. Also, good candidates are usually context-rich, but not politically sensitive.
Agents need enough context to be useful, but you probably don’t want your first assignment to be sitting in the middle of a board-level negotiation. And it should also be fault-tolerant. If the agent gets it wrong, the mistake has to be recoverable. Annoying is OK, but killing a relationship, killing a deal, blowing up a sale, that’s a catastrophic outcome that’s not going to be acceptable. And the last one is the big one. Look for work that is full of follow-up friction, or where somebody keeps dropping the ball. Every team has something like that: a status update that never got sent, or some other follow-up that slipped through the cracks. Coordination work that lives in one person’s head and starts to fall apart the second they go on vacation. Those are really good candidates for you to start considering. That’s where I would start, and that’s where a lot of our customers are starting. Not with the flashiest use cases, but with the most annoying ones. And some of the examples I’m going to bring today are ones that I’ve personally been working on, not just for my customers, but for myself, because they’re the kind of annoying things that I have to do every single day or multiple times a week. So let’s get into some of these examples. As I’m going through these, I’ve tried to make them fairly generic for what I think the audience is full of, but think about your own organization as I talk through them, and maybe that’ll help. All right. So let’s start with a scenario that just about everybody in this group, which is probably pretty manufacturing-rich, might recognize. You’ve got a procurement team that’s managing 40 or 50 different active suppliers. You’ve got purchase orders, delivery confirmations, and you’re dealing with quality exceptions or payment milestones.
Right now, you’re probably tracking a lot of this through e-mail. Maybe some of it’s in your ERP system, and what I see a lot of is spreadsheets that somebody built a long time ago that nobody wants to get in there and touch. And because of this, things are falling through the cracks all the time, or people are researching things that take a really long time. A delivery window might get missed because nobody followed up. Maybe a quality exception sits in somebody’s inbox for a week, or a payment milestone issue doesn’t surface until somebody starts calling and complaining. These aren’t unusual or exotic AI use cases. These are things that happen in most companies almost every day. So let’s take this kind of scenario and look at what it looks like with a coordinator pattern. Coordinator agents keep track of everything. They keep track of the open POs across the ERP, and they’re probably getting this information out of e-mail as well. They can tell what’s due, what’s overdue, and what’s coming next, and they can proactively remind suppliers about delivery windows before those windows turn into surprises. They can surface quality exceptions coming out of receiving, so issues get flagged before they snowball into production delays. Or they can identify payment milestone gaps when something is approaching or has already slipped. And then when something actually requires human judgment, maybe a supplier relationship is at risk, or some other kind of exception that doesn’t fit any of these patterns, it gets escalated to a person, an actual human: the procurement lead. And that’s the part I was talking about before that matters. Notice what the agent isn’t doing. It’s not negotiating any of your contracts.
It’s not making any of your sourcing decisions. It’s not taking the call when one of your suppliers calls up upset. But it’s doing the unglamorous coordination work that keeps everything running, and it frees up the procurement team so that you can spend more time doing things where the human judgment actually matters. And with that in mind, what’s the broader lesson? Your first digital colleague is almost always going to be something like this: a coordinator. We do quite a few of these. Any time I recognize analysts that are loading information into a spreadsheet and just going through muscle memory all day long, that’s a coordinator pattern. It’s not flashy, and it’s definitely not the one that’s going to wow the board, but it’s the one that picks up the work that everybody knows is important, frees up a lot of time, and stays on top of it. Every department has work like this: project milestone tracking, vendor onboarding checklists, service ticket triage. This is the kind of work that keeps your business moving, even if nobody puts it on the company highlight reel. It’s a lot like going up a mountain where you have a guide rope on a trail. It’s not there to be dramatic; what it’s there for is keeping you from wandering off the edge, and when somebody slips, it’s there so that you can catch yourself and stay on the mountain. But let’s talk about the second pattern. Let’s look at an agent that doesn’t just coordinate the work; it helps shape where your energy goes. This second pattern is moving up the value chain. Now we’re not just coordinating work, we’re starting to help shape attention and judgment. Picture this.
Picture a regional financial services firm that’s onboarding maybe 15 to 20 new commercial clients every quarter. Every one of those onboardings has a lot going on: compliance checks, document collection, account setup, maybe a SaaS product that needs a lot of configuration, plus coordination with the relationship managers. Some clients are pretty straightforward; others are more complex, higher value, and honestly more sensitive to problems. The problem here isn’t that the team doesn’t know how to onboard clients. The problem is that without a clear view across the pipeline, relationship managers end up spreading their attention way too evenly. They spend too much time on the easy stuff and not enough time where their judgment is actually useful and where it actually matters. That’s where a strategic advisor can really help. Just like I mapped the last one out, let’s map this one out. I know that last screen looked a little bit like a dashboard, so let’s unpack what’s actually happening. The onboarding advisor starts by scanning the incoming pipeline and pulling in client data from the CRM and intake forms. From there, it assesses complexity: it scores each client based on things like regulatory requirements, product mix, or relationship value. Then it maps out the onboarding steps for that specific client type, so instead of a one-size-fits-all process, you get a tailored checklist for that client. After that, it starts surfacing where the bottlenecks are. Where is document collection lagging? Where is a compliance step stalling? Where is setup waiting on a handoff that still hasn’t happened? Based on all that, it can recommend prioritization. In other words, it identifies which clients need the relationship manager’s direct attention right now and which ones are progressing just fine. Then it keeps track of the follow-through: monitoring progress, watching for delays, and escalating when something gets stuck. Remember that dashboard on the previous screen? Those aren’t static KPIs. It’s a live picture of what’s happening across the pipeline, so when a relationship manager decides who gets the personal call, who gets the white-glove treatment, or where the process needs to flex, they’re making that call with a clear, current view of reality, not just gut feel or whoever emailed last. That’s why the lesson here is a little different than the first example. The strategic advisor pattern isn’t really about saving time. It will probably do that, but really it’s about improving the quality of attention. When a relationship manager has a clear view of the whole pipeline, they don’t just move faster; they make better decisions about where to invest their energy. The high-value, more complex client gets the personal touch. The more straightforward onboardings keep moving at pace, with the agent just keeping track of what’s on course and what’s off course. It’s like reading the water. A good angler doesn’t just cast everywhere and hope. They’re looking at the currents, trying to read where the structure is, and looking for things that might not be obvious, like temperature breaks. Then they put their presentation where the fish probably are. That’s what the agent is helping to do here: giving a clear read on where the attention actually matters. So let’s take a look at this last pattern, because this is where it starts to get especially interesting for me. Here’s the third pattern. What I like most about this one is that it’s one I use all the time myself.
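The scoring-and-prioritization step just described is easy to picture as code. A sketch with invented field names and illustrative weights; a real advisor agent would pull these from the CRM and tune the weights to the firm:

```python
def complexity_score(client):
    """Score one client; the dict keys and weights are illustrative assumptions."""
    score = 0
    # Regulatory requirements weigh heaviest in this example.
    score += 3 * len(client.get("regulatory_flags", []))
    # A broader product mix means more configuration and setup work.
    score += len(client.get("products", []))
    # High-value relationships are more sensitive to problems.
    if client.get("relationship_value", 0) >= 1_000_000:
        score += 2
    return score

def prioritize(clients):
    """Highest-complexity clients first: these need the relationship
    manager's direct attention; the rest can keep moving at pace."""
    return sorted(clients, key=complexity_score, reverse=True)
```

The point of the sketch is the shape, not the numbers: the agent turns scattered pipeline data into a ranked view of where human judgment is worth spending.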
In fact, it’s how I built this presentation. Instead of asking one assistant to pretend it can do everything, I’ve started breaking the work into specialized roles. First, I use a research agent to go deep. It pulls together the information and the evidence, synthesizes a bunch of sources, and gives me all this raw material. Then, after I’ve reviewed it, I hand that off to a creative agent. That’s where it helps me shape the narrative: frame the story, write the prose, build a good flow, and tell the story I’m actually trying to tell with the analogies I’ve given it. That’s where I can start injecting my own flavor and my own point of view. Then I take that and pass it off to a presentation agent that turns it into structured, polished visuals, something like this that looks really nice. These slides are things I’ve handed off to a presentation agent. So each of these agents is doing its part, doing what it’s best at and what it’s been designed for. But it’s the handoff between them that gives me the best results. That’s how I avoid having a general-purpose agent that just tries to do everything end to end. I’ve tried that approach, just throwing an idea at OpenAI or Claude, and it falls apart. It comes across as AI slop, something that’s not really in my voice, something I sometimes don’t even really believe in. With this pattern, though, I want you to think beyond presentations. You can do this for things in your organization like compliance reporting, customer proposals, maybe marketing campaigns, really any kind of multi-step deliverable where different skills are needed at different stages.
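Structurally, the research-to-creative-to-presentation chain is a tiny pipeline with a human review at every handoff. A minimal sketch, assuming each agent can be wrapped as a function; the names `run_chain`, `stages`, and `review` are mine, not from any particular framework:

```python
def run_chain(brief, stages, review):
    """Run specialist stages in order. `review` is the human checkpoint at
    each handoff: it sees every intermediate artifact and may edit or
    replace it before the next specialist starts."""
    artifact = brief
    for name, stage in stages:
        artifact = stage(artifact)         # each agent does only its specialty
        artifact = review(name, artifact)  # editor-in-chief step between stages
    return artifact
```

The design choice worth noticing is that the human sits between every pair of stages, which is what keeps the final output in your voice instead of drifting into generic output.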
And the key point is that I’m not actually removing myself from this process at all. In fact, I’m not even saving a lot of time. I just spend a lot more time on the details. I get the opportunity to step into each of these handoffs. I’m steering it, I’m refining it, and I’m making sure it still sounds like my own voice. So when I see these multi-step orchestrated patterns emerge, that’s what I’m looking for in these specialized agents. The best human-agent teams tend to look a lot like a good human team: clear roles, clear handoffs, and a little coaching along the way so that each participant can do what they’re best at. And that brings us to the biggest question here: how do humans fit into this, beyond the handoffs? In this model, the human really becomes like an editor in chief. You’re briefing the agents, setting the goals, and defining the constraints. Then you steer the work at each stage, reviewing what the research agent found before the creative agent starts shaping the story, and then reviewing those outputs again before the presentation agent turns them into your final deliverable. The human is still very much involved, just in a different way. You’re not doing every step by hand anymore. You’re directing, reviewing, combining, and finalizing. And then you’re adding the judgment, the context, and the taste, really, that only a human can bring. So you’re not out of the loop, you’re above the loop. And that’s the real management shift here in phase two. The same pattern applies well beyond presentations. Think about those other topics I brought up. The compliance report still needs research and analysis and then some sort of executive summary. Or a customer proposal.
It’s going to need technical scoping and pricing, but it also needs a narrative that’s going to resonate with the customer, and that’s where humans are at their best. So the structure is the same: specialized agents, clear handoffs, and human direction at every stage. All right, so we’ve taken a look at these three patterns: the coordinator, the strategic advisor, and the specialist chain. Let’s pull back a little and look at some of the design rules that make all three of these work in your organization. I’ve come up with six design rules, and you can apply them to any of these agents, whether coordinator, advisor, or specialist. Rule #1, and I’ve mentioned this before: start with a role, not a tool. Define the job the agent is there to do before you start debating what platform it runs on. If you start with the question, “we bought this license, now where do we use it?”, you’ve probably already run yourself into a ditch. Rule #2: bound the job very clearly. An agent with no boundaries is not a colleague, it’s a liability. Be clear about what it does, what it doesn’t do, and where its responsibility stops. Rule #3: design the handoffs. Every digital colleague needs clear rules for when it should pass work to a human, when it should pass it to another agent, and when it goes back into the system of record. You’re not going to orphan any tasks. There are no dropped handoffs. Every step needs a home. Rule #4: put human checkpoints where the risk actually lives. Not everywhere; that just gets annoying and defeats the purpose. But wherever a bad output could damage a customer relationship, create a compliance problem, or cost real money, that’s where you put the human review. Rule #5: assign an owner for the output, not just the input. There’s a lot of talk going into context engineering, and it’s important, but this isn’t just about prompting the agent. You need somebody who is accountable for what the agent actually produces. If you can’t name that person, then like I said earlier, you don’t really have delegation; I used the word abandonment. And rule #6: design the work chart before you buy the tool. Map out the work, define the roles, and think through the human-agent ratio. After you’ve done all that, then you can choose a platform. That’s the difference between building a real human-agent team and just running an experiment. So once you’ve designed the team, the next question is: how do you know it’s actually working? Anybody who’s been working with me over the last year or two knows what I’m about to bring up. Phase two should be measured like work design, not like a chatbot demo. I’ve come up with a few ways to think about the scorecard. First would be time saved. How many hours are your people getting back from coordination, from follow-up, and from all those little bits of work that pile up? Second, the follow-through rate. What percentage of agent-handled tasks made it to completion without a human having to jump in and rescue them? Third, output. How much work is the human-agent team processing compared to before? And the importance of measuring the before and the after is often overlooked. Before you embark on this journey and think about these measurements, how are you going to measure the baseline, what it was before you started the agent? Fourth, quality consistency. Is the agent producing reliable, steady output, or is it all over the place?
And here we’re talking about reliable evals, so you can gauge that effectively and without bias. Fifth, the exception rate. How often do the agents have to escalate, and ideally, is that rate improving over time? And honestly, the metric I’m going to ask you to care about the most is the human focus shift. Are your people actually spending more time on relationships, thinking through strategy, exercising human judgment, and managing the exceptions, the work you think only a human should be doing? Or are they just doing the same work they were doing before, but faster? Because if they aren’t shifting their attention toward the higher-value work, then you’re missing the point. We’ve seen a lot of stories over the year about layoffs and what’s going on in the marketplace with job trends. And that’s where this resonates for me: most of the organizations I’m working with aren’t laying off people when they build these agents. They’re recapturing that capacity to do more work. They’re growing their business without adding more people, because the people they currently have now have the capacity to do it. This is something I’ve been spending a lot of time thinking about over the last couple of years: not just what these metrics are, but how you actually collect them in a way that tells you whether the project worked. When you do, these metrics are going to help you calibrate that human-agent ratio over time. If exception rates start climbing, maybe one person is overseeing too many agents. If your follow-through is strong but humans are still doing a lot of the low-value coordination work, maybe you just haven’t deployed enough and you’re not at the right agent capacity yet. So let’s zoom back out and talk about what’s next.
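Two of those scorecard numbers, the follow-through rate and the exception rate, are simple ratios once you log task outcomes. A sketch under an assumed logging schema; the dict keys are invented for the example:

```python
def phase2_scorecard(tasks):
    """Compute follow-through and exception rates from agent task logs.
    Each task dict has (assumed) boolean fields:
      'completed'    - the task reached completion
      'human_rescue' - a human had to jump in to finish it
      'escalated'    - the agent escalated it back to a human
    """
    total = len(tasks)
    completed = sum(t["completed"] for t in tasks)
    rescued = sum(t["human_rescue"] for t in tasks)
    escalated = sum(t["escalated"] for t in tasks)
    return {
        # Completed without a human rescue, as a share of all agent-handled work.
        "follow_through_rate": (completed - rescued) / total,
        # How often the agent had to hand work back; watch the trend over time.
        "exception_rate": escalated / total,
    }
```

The same log, captured before the agent is deployed, gives you the baseline that the talk stresses measuring first.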
So I was focusing on that middle tier, phase two: agents joining the team with bounded roles, human checkpoints, and shared accountability. Honestly, that’s where most organizations, and most people on this call, should be spending their energy right now. But that horizon is going to shift this year, and it’s probably going to shift very fast. Phase three is where agents start taking on more end-to-end workflow execution, not with zero human involvement, but under really solid policy, governance, and structured oversight. At that point, the work chart we just talked about evolves again. It’s no longer about who does what. It becomes about which processes stay human-led, which become agent-operated, and where we have to put the guardrails to maintain quality of delivery. Now, a lot of organizations are already talking like they’re ready for phase three, and I’m going to be transparent with you: most of you aren’t. If you can’t define the roles, the handoffs, the owners, and the metrics in phase two, then you probably have no business pretending you’re ready for agent-operated workflows. I use an analogy in another talk that fits here really well too. Organizations tend to camp out in one of three places, and if you’re at camp one and try to jump straight to camp three, you’re going to get altitude sickness. I don’t want you to be part of that group. So I’m not going to unpack phase three today, but I want to put it on the horizon for you to start thinking about. You’re going to get there. That’s where work is heading, and it’s something we’re going to explore in one of our upcoming sessions.
If you want to be ready for that conversation, then the work you do here in phase two, defining the roles and the handoffs, assigning accountability, and calibrating that human-agent ratio, is the foundation you’re going to need to go into phase three. So I want to bring this home. Here’s the key takeaway: the goal is not really about adding more AI. The goal is to redesign your work with the human-agent mix in mind. The organizations that win this chapter are not going to be the ones with the most agents or the biggest AI budget. They’re going to be the ones that rethink how work actually gets done: who does what, how handoffs happen, where human judgment matters, and where digital execution can scale. That’s the work chart, and that’s what’s going to separate a true frontier firm from a company that just bought a pile of licenses and hoped for the best. The leap from assistant to digital colleague isn’t about smarter prompts; it’s about work design, and that work should be starting now. So the real question is: what are you going to do to take the next step? I know I’ve given you a lot to unpack, and what I want you to leave here with is that Concurrency is here to help. We’ve dropped a link in the chat where you can share feedback on this session. I’d really appreciate that feedback, but it’s also a way for you to connect with us and take the next step. If you’re sitting there thinking, this makes sense, but I’m not really sure where to start or what kind of agents would actually make sense in my organization, that’s the kind of conversation I love to have with our clients. So I’m offering a free workshop to help you identify the right use cases, talk through where you think agents might fit, and make sure you’re starting in the place where it’s actually going to create value.
A lot of organizations are finding that to be a really useful first step, and honestly, I really enjoy it. So finally, I just want to say thanks. Thanks for spending a little bit of time today. I’d love to stay connected with you, so please, I’ve got a QR code and a link on the screen. Connect with me on LinkedIn. I’ve been posting there pretty regularly, and I’ll probably unpack a little bit of this later today. In the meantime, if you have any questions, feel free to drop them in the chat. I’ll leave you with maybe one last thought: where in your organization is the first real opportunity for a digital colleague? And just as importantly, what’s the work that keeps slipping through the cracks? I usually don’t wait long enough; I almost always end these events and then see somebody asking a question. All right, thanks everybody.