View Recording: Concurrency Viral Topics: The Microsoft AI Developments Everyone Is Talking About


Concurrency Viral Topics is a fast‑moving webinar series focused on the tech trends showing up in boardrooms, inboxes, and LinkedIn feeds.

Cutting through the AI noise in the Microsoft ecosystem.

AI headlines are coming fast — but not every story deserves your attention.

In this rapid-fire webinar, Concurrency leaders break down three of the biggest Microsoft and enterprise AI topics dominating conversations right now and explain what they actually mean for business and technology leaders.



In this session, Brian Haydin, Solutions Architect at Concurrency, is joined by Derek, Project Manager, and Mac, Technical Architect, for a wide‑ranging conversation on how Copilot and modern AI models are reshaping the way organizations work, build, and operate.

Rather than focusing on a single tool or vendor narrative, this discussion explores the evolution of Copilot from a prompt‑based assistant into something far more powerful: a collaborator capable of planning work, synthesizing complex context, orchestrating tasks across systems, and supporting long‑running, agent‑driven workflows.

The speakers examine how multimodal Copilot experiences, model choice, and emerging “cowork” and agent patterns are changing expectations—from individual productivity use cases to enterprise‑grade agent deployments. The conversation highlights the realities organizations are facing today: powerful technology, uneven adoption, underutilized licenses, and the growing need for thoughtful enablement, governance, and operational discipline.

This session goes beyond surface‑level demonstrations to address how teams should actually adopt Copilot: starting with individual efficiency, expanding into agents, and ultimately rethinking workflows instead of simply automating old ones. The panel also dives deep into model selection strategy, the tradeoffs between speed and rigor, and how different models excel at different tasks such as research, visual creation, implementation, and validation.

Finally, the discussion turns to AgentOps—the critical practices required to move from proof‑of‑concept agents to production‑ready solutions. Topics include evaluators, tracing, human‑in‑the‑loop design, stakeholder communication, and how organizations should balance “fail fast” experimentation with the safeguards required for enterprise deployment.

Overall, this session delivers a candid, experience‑driven look at where Copilot and agents provide real value today, where expectations need to be reset, and how organizations can responsibly move from experimentation to impact.

WHAT YOU’LL LEARN

In this session, you’ll learn:

How Copilot is evolving beyond prompt‑based assistance:

  • The shift from “answer my question” to “do this work for me”
  • What multimodal Copilot experiences unlock across text, files, transcripts, diagrams, and context
  • Why understanding native Copilot capabilities comes before building agents

Practical guidance for Copilot adoption:

  • Why email and meetings are the most common (and successful) starting points
  • How individual efficiency leads naturally to agent and team‑level use cases
  • Where organizations often struggle with unused licenses and low adoption
  • How coaching and usage insights help determine who needs enablement versus license reallocation

Rethinking workflows—not just automating them:

  • Why copying existing human processes is often the wrong goal
  • Applying “jobs to be done” thinking to AI‑enabled work
  • How Copilot can help prioritize work, not just complete tasks
  • When changing the process matters more than speeding it up

Agent‑driven ways of working:

  • When to move from ad‑hoc Copilot prompts to persistent agents
  • How lightweight agents can support recurring tasks, knowledge aggregation, and planning
  • Designing projects that give agents grounded, high‑quality context rather than relying solely on Microsoft Graph discovery
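The grounded-context pattern described above (a project folder of transcripts, diagrams, and a guiding markdown file that an agent treats as its primary context) can be sketched as a small setup script. The folder names and the `PROJECT.md` contents below are an illustrative convention, not a Copilot requirement:

```python
from pathlib import Path

# Illustrative layout for grounding an agent in curated artifacts
# rather than relying solely on broad Microsoft Graph discovery.
SUBFOLDERS = ["transcripts", "diagrams", "emails", "notes"]

def init_project(root: str) -> Path:
    """Create a project folder whose contents an agent can treat as context."""
    base = Path(root)
    for sub in SUBFOLDERS:
        (base / sub).mkdir(parents=True, exist_ok=True)
    # A top-level markdown file tells the agent what the project is
    # and where each kind of artifact lives.
    (base / "PROJECT.md").write_text(
        "# Project context\n"
        "- transcripts/: meeting transcripts (source of decisions)\n"
        "- diagrams/: exported PDFs of architecture diagrams\n"
        "- emails/: relevant email threads\n"
        "- notes/: working markdown notes\n"
    )
    return base

base = init_project("demo_project")
print(sorted(p.name for p in base.iterdir()))
# → ['PROJECT.md', 'diagrams', 'emails', 'notes', 'transcripts']
```

From here, dropping exported transcripts and diagrams into the matching folders gives the agent a curated, high-signal context to work from.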

Model strategy inside Copilot:

  • Strengths of different models (e.g., deep research, visuals, methodical planning, implementation)
  • Why switching models—or running multiple—can improve outcomes
  • Pros and cons of automatic model selection versus intentional orchestration
  • Comparing outputs from multiple models to improve quality and confidence
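The "run multiple models and compare" idea above can be sketched as a tiny fan-out helper. The `ask` callables here are stubs standing in for whatever model clients you actually use; no real Copilot or model API is assumed:

```python
from typing import Callable, Dict

def compare_models(prompt: str, models: Dict[str, Callable[[str], str]]) -> Dict[str, str]:
    """Send the same prompt to several models and collect answers side by side."""
    return {name: ask(prompt) for name, ask in models.items()}

# Stub "models" so the sketch runs without any API; swap in real clients.
models = {
    "research-model": lambda p: f"[deep dive] {p}",
    "fast-model": lambda p: f"[quick take] {p}",
}

answers = compare_models("Summarize the tradeoffs of model auto-selection.", models)
for name, answer in answers.items():
    print(f"{name}: {answer}")
```

Reviewing the answers side by side, or feeding one model's output to another for critique, is the intentional-orchestration alternative to letting automatic model selection decide for you.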

Using Copilot as a “rubber duck” for complex problems:

  • Applying conversational AI to architectural, planning, and strategy challenges
  • Leveraging transcripts, artifacts, diagrams, and notes as shared context
  • Supporting long‑running initiatives such as proposals, talks, and discovery efforts

Introducing AgentOps for production readiness:

  • Why evaluators and tracing are the “unit tests” of AI agents
  • Understanding failure points in complex, multi‑step workflows
  • Monitoring behavior across sub‑agents and long chains of execution
  • Preventing surprise behavior before agents are released to hundreds of users
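The "evaluators and tracing as unit tests" idea above can be illustrated with a minimal sketch. `traced_agent` is a hypothetical stand-in for a real agent, and the pass criterion is deliberately simple; real harnesses (e.g. in Foundry or Copilot Studio) are far richer, but the shape is the same: a dataset of cases, a traced run, and a recorded pass/fail per case:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent-trace")

@dataclass
class EvalCase:
    prompt: str
    must_contain: str  # a minimal, illustrative pass criterion

def traced_agent(prompt: str) -> str:
    """Stand-in for a multi-step agent; each step emits a trace event."""
    log.info("step=plan prompt=%r", prompt)
    answer = f"Plan for: {prompt}"  # placeholder "model" output
    log.info("step=answer chars=%d", len(answer))
    return answer

def run_evals(cases):
    """Run each case and record pass/fail -- the 'unit tests' of the agent."""
    results = []
    for case in cases:
        output = traced_agent(case.prompt)
        results.append((case.prompt, case.must_contain in output))
    return results

results = run_evals([
    EvalCase("migrate the ticket backlog", "Plan"),
    EvalCase("summarize incident 42", "Plan"),
])
print(results)
```

The trace log answers "where did it fail?" in a multi-step workflow, while the eval results answer "will it behave as expected when deployed?", the two questions the panel calls table stakes before a wide rollout.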

Balancing experimentation and governance:

  • When it’s okay to skip AgentOps during early POCs
  • Why “fail fast” still applies—but not in production
  • How to avoid boiling the ocean early in an AI journey
  • Iterating safely as organizational maturity grows

Communicating value and risk to stakeholders:

  • How to explain longer timelines caused by safeguards and testing
  • Identifying what can go wrong—not just what can go right
  • Designing guardrails with cross‑functional input (security, IT, business)
  • Reinforcing the importance of humans remaining in the loop

Licensing considerations and readiness:

  • Who should pay attention to emerging agent‑focused licensing models
  • Why advanced licenses only make sense for mature AI adopters
  • Signs your organization is (or isn’t) ready for enterprise‑scale agent deployment

FREQUENTLY ASKED QUESTIONS

Is this session focused on a single Copilot product or feature?

No. This session looks holistically at Copilot, foundation models, agents, and operational practices—rather than a single tool or demo.

Is the goal to fully automate work with agents?

No. A key theme of the session is human‑in‑the‑loop design. Agents should amplify people, not replace accountability, judgment, or governance.

Should organizations perfect agents before releasing them?

Not necessarily. Early POCs should prioritize proving value and learning quickly. However, evaluators, tracing, and safeguards become essential before wider deployment.

Why aren’t Copilot licenses always delivering value today?

Many users receive licenses without guidance, enablement, or workflow changes. Without intentional adoption planning, usage often stops at basic summaries instead of meaningful productivity gains.

Do all organizations need AgentOps right away?

No. AgentOps practices should scale with maturity. Early adopters can start lightweight, but production environments require stronger controls.

When does advanced agent licensing make sense?

When organizations are already deploying agents that perform meaningful, semi‑autonomous work at scale—not when they are still experimenting with basic Copilot use cases.

ABOUT THE SPEAKERS

Brian Haydin is a Solutions Architect at Concurrency who helps organizations adopt emerging Microsoft and AI technologies responsibly. Brian focuses on bridging strategy, architecture, and real‑world outcomes—especially as teams move from experimentation to production.

Derek is a Project Manager at Concurrency with deep experience guiding organizations through change, adoption, and delivery. Derek works closely with stakeholders to balance innovation, timelines, budgets, and risk—especially in rapidly evolving AI initiatives.

Mac is a Technical Architect at Concurrency specializing in software architecture, AI‑enabled solutions, and agent‑based systems. Mac brings a builder’s perspective to AI adoption, emphasizing thoughtful design, operational rigor, and sustainable long‑term solutions.

TRANSCRIPT


So today, I’m Brian Haydin, I’m a solutions architect here at Concurrency. And today I’m joined by Derek, one of our project managers, and Mac, who is one of our technical architects. And we’ve got three different topics we’re going to dive into today. First, I want to talk about multimodal Copilot and Cowork and that whole story. Right now, Copilot is shifting a little bit away from “answer my prompt” and toward “can you do something for me?” And we’re seeing this with a lot of the different tools. You’ve got OpenClaw, you’ve got Claude Code or Claude Cowork. So we’re seeing this shift of being able to not only consume multimodal data, but also do multimodal things. So Derek, what do you think about Copilot here, and think about this too a little bit: what breaks when Copilot becomes like a junior analyst for you, right? How do you control that? How do you make sure it’s doing the right work for you? Yeah, I think the first way is just understanding what native Copilot can do before you get into the Cowork space, because a lot of the power of Copilot Cowork is going to be connecting all these different tasks together. But you don’t necessarily know right off the bat what you want it to do. And a lot of the demos that Microsoft has shown, there are certain things where it’s like, yeah, it will schedule a meeting for you, create a brief for you, all at different times. You may want it to do some of those things, but not all of those things. So get into Copilot first and say, okay, what do I like about Copilot? What am I getting a lot of value out of? And then you can use Copilot to string together those valuable tasks instead of just saying something broad. Otherwise Copilot Cowork is going to work in the background on certain things that you may not want, and it ends up taking
more time than maybe you would have put towards a couple of things instead of five or six things. That’s really what I start with. Where do you think some of the expectations are falling flat right now? I think the expectations fall flat because we’re really in the early days, and a lot of this stuff is truly powerful still, but we’re just on the precipice of what this AI is going to be able to do long term for us. So while it’s still very powerful right now, we’re just in the beginning, and thinking through what you would like it to do longer term, we’re probably going to get there. Mac, you build software for a living here at Concurrency; that’s your aspect of it, and so you’re less on the business side of it. But what architectural challenges do you see this technology bringing to the table? Yeah, I mean, one of the key things that I think about is the fact that we spent several decades thinking about how we solve problems and how we build these applications, first of all, but then how these applications behave and the logic we build into them. So the applications themselves are changing: by virtue of allowing AI in this space and by building applications that use AI, immediately we open up this can of worms where there’s no concept of an API contract necessarily from step to step, or very explicit logic from step to step. It becomes more like, what are the inputs and what are the outputs? So from the application side, I certainly see that, and even the business requirements and some of the things that people want shift, where we’re no longer asking, hey, if this, then that. It’s more like, well, when we think about this, we do this. So.
Very much so on the application building side, really the problems change and the implementations change. But then more so from the architectural end, we have to factor in the fact that when we use Cowork, or when we use Claude, or when we use ChatGPT, whatever we may use, or maybe the new MAI models, we’re not saying, hey, do this now. The whole human-in-the-loop perspective for interacting with AI for the purpose of building solutions changes, right? It’s now more like, hey, do these sets of things. It’s not step by step, how do we get from A to Z, and we stop at B and C and D. It’s going from A to Z and letting the models take that. So there has to be some degree of, first of all, healthy skepticism, but also realizing the power of this and what it can do right, but also what it might be able to do wrong, especially when answering and thinking about architectural problems. I want to dovetail a little bit on the same topic. What’s your favorite model? Let’s go, just one word. What’s your favorite model? Opus 4.6. Opus 4.6. I would agree with that, Opus 4.6. It’s a game changer that you can get it in Copilot now. So the way I like to describe it is like a candy, a root beer barrel, a butterscotch. That Opus 4.6 by Claude, that’s the actual candy, but the wrapper on top of it, that’s what Microsoft Copilot allows these models to do. So I definitely think it’s a game changer that it’s in Microsoft’s landscape now. For the complex problems, 4.6, and there’s a lightweight Sonnet model that almost matches Opus 4.5 to some degree. But you don’t necessarily want to stick to one, so I sometimes dip in and out between the other ones. I use them all, right? And I’ve found a variety of different ways to use them, but they’re good at certain things, right? So my go-to is ChatGPT for deep research activities.
When I need to do a McKinsey-level analysis, give me four weeks of research and an analyst report, I go to ChatGPT. It just does such a great job of running through that stuff. I find that Claude comes back a little bit too quick, you know, so I don’t feel like it constantly went through and double-checked its work and went to some really obscure kind of sources to validate other information. I think Gemini is fantastic for visuals, and you’ve seen the presentations it’s been able to come up with, super fantastic. So, inside of Copilot, we’ve got this ability to switch through different models, and I think that’s a great tool. I use GenSpark because it has that multimodal capability. The problem I have with Copilot right now is that I have to choose the model. And I really want a tool that’s going to orchestrate for me and go to the right model for the right reasons when I just ask a question. What are your thoughts on that? Do you wind up switching between the ChatGPT, the Microsoft, and, you know, the Claude models inside of Copilot? Yeah, so there is this feature. When I use it, I use it either in the Copilot CLI harness or within an IDE, within Visual Studio, or within a code editor. And I don’t really use the auto. I could, but I don’t really like the fact that it does the picking of the model for me. And I do that mainly because I want to ask the same questions, sometimes in the same way, sometimes in a different way, of different models. And so I like to use more than one model, but not necessarily for the exact reason of research. Let’s say I have an implementation plan, and I lead the implementation using, so let’s say you come up with the implementation plan using Codex. It’s a little slower, people talk about it being slower, but it’s more methodical in some of its outputs, in my opinion.
And then Opus 4.6 for the implementation. But then let’s say you use Codex, 4.6, and another model, or a fourth model, to actually review the code and point out different problems with the same code, to comprehensively come up with one workload of fixes that you can make. It’s a culmination of feedback from different models. So I use different models for different reasons, but I also use different models for the same reason, with the same goal, just a different kind of process. Pit them against each other. Yeah. Last word. Yeah, get the last word. And to your point, Mac, first of all, I find myself going away from the auto more and more now, and Claude’s more integrated into Copilot, which I like. A second thing that’s really cool, that Satya Nadella just posted on, I believe it was LinkedIn, about a week or two ago, is a new feature within Copilot where you can have both models running at once. If you’re familiar with ChatGPT, if you’ve queried there, they’ll be like, hey, we’re testing a new model, which output do you like better? You can do that within Copilot, where it’s going to show you, hey, this is what the ChatGPT model came back with, and this is what Claude came back with, which one do you like better? So you can kind of compare them against each other, which I really like, and it’s going to be a really cool feature moving forward. So I’m trying to think when the first time was that I saw Copilot being used on stage at Build. You were there with me, right? Oh, I think I was there last year, but I think that was two years ago. Yeah, a couple years ago. But it wasn’t even real, right? It was like, yeah, here’s what it’s going to look like, but we’ll get to it. Yeah. So we’re not that much further than a year into really being able to use Copilot as a tool built into our ecosystem. Yeah, and you know, the excitement, I think, has kind of outpaced a little bit what organizations are seeing as deliverable value.
So, you know, we’ve got a lot of licenses that people have been paying for that aren’t getting used. Microsoft gives you the ability to really dig into it and look at who’s using it, and if they’re not, why? Can I get them coaching, or do I just take their license away? So, Derek, I know you do a lot of the coaching. Where is the value showing up for organizations? What’s resonating with people? What are some of the things that you would tell people that are on this call? Yeah, the biggest overarching theme that I tell organizations when I go through adoption plans and training with them is that I break it down into three situations. The first one that you want to start with is those individual prompts: what are the use cases that you’re going to get value out of? Where can you see yourself finding efficiencies? And what I recommend, the two biggest issues that a lot of people have are that they’re in too many meetings and they get way too many emails. So finding different ways to gain efficiencies with Outlook and with meeting summaries, meeting recaps, preparing you for meetings, those are two of the biggest areas that resonate for those individual use cases. Once you start using Copilot that way and you broaden your scope, you’re going to start to think through, what are those recurring tasks that I have on a regular basis, whether they’re daily, weekly, monthly, or different knowledge repositories? Maybe we have a SharePoint that just has so much information on it, but we can’t really aggregate it in an effective way. That’s where agents come in. And those individual agents that you can create fairly easily with Agent Builder in the Microsoft ecosystem, they have it configurable where you can spin those up in a couple of minutes to be able to test something.
Then from there, that’s going to start to get your head in the space of, wait, this could be really beneficial for my team, my department. Wait a minute, this could be great for my entire organization to have access to or potentially use. So you start with the individual, and then that naturally leads into the bigger-picture thinking. I find that that definitely helps people figure out how to adopt it, how to integrate this into their daily workflows. Unsurprisingly, the simplest use case is also the easiest to solve and the most impactful at times. Like, yeah, give me all my emails: that’s a 30-second thing that anybody who uses Copilot probably thought of on day one. I was opening my computer two days ago and Outlook popped up with a Copilot prompt and said, hey, I can help you prioritize your emails on a daily basis and elevate important emails to the top of the inbox. And then it gave me the ability to do a prompt, and the suggestions it was giving me were like, anything from Derek is important and everything from Mac is not important, right? So it would be right, right away. But, you know, it got me thinking a little bit that there’s a shift, I think, in solving the problems, and thinking of Clayton Christensen’s jobs to be done: why am I doing this? Why am I solving it? I didn’t actually prioritize my email, because it wasn’t a really important activity for me; that’s not what I need to do. I need to change the way that I’m doing work. I need to change the workflows. So I’m going to throw this one over to you, Mac. How are you thinking about the work that you have to do and still leveraging Copilot as a tool, but changing the steps, and not just saying, go replicate me, you know, get this job accomplished for me?
Yeah, so I had kind of a mental shift just as recently as a few days ago, where I started feeling like I was a little bit behind, and that’s just because I have, we all have, a ton of things that we’re doing. And specifically how I use it, it came down to, okay, I have these five things I need to do, I need to synthesize all of this, I have all these meetings I’m in, I have all this individual contributor work that I need to do. I just went to town and said, okay, let’s not actually use Copilot for this, but let’s stand up a Copilot CLI project and effectively put all my markdown files in there, put recordings in there, meetings, agendas, and start using that to help me plan and get a better understanding of what I’m doing and what I need to do. So I use it in that way. And it’s changed how I do things, because now for the past week or two, because of some shifts in my priorities, I started going to Copilot, or in this case the various models that it has, to help me prioritize the work rather than just focusing on, okay, I need to use Copilot to implement this code or do this thing or another thing. And I think that to some degree, companies really aren’t serving their employees when they’re just buying. You mentioned there are so many Copilot licenses and they’re not really used, or there’s so much opportunity and they’re not used. I think they bought these things and they just kind of dropped these licenses in people’s laps, and it’s like, hey, go figure it out, or, you know, take it on your own recognizance and just determine where you need it. Or it’s like, hey, there’s this great tool, and then people get it and they’re like, okay, yeah, it can summarize my emails. Like, nice.
So something that I’ve really started to lean into a little extra is thinking like an agent, thinking about what the LLM needs from me in order to be successful in doing some of these tasks. And I’ve started to bring the artifacts that would feed a long-running prompt, or something that I need to work through over the course of a couple of days, and really isolate the information in a way that it has access to. I’ll give you an example. I’m starting to put together a new talk for this talk I’m going to be doing down in Chattanooga in a month. And I’m bringing the artifacts and putting them into a project so that I have good grounded context for it. Yep. So you’re the technical guy, right? You’re in the weeds. What’s some advice that you can give to the group here today about redesigning the architecture of how you plan your work, leaning into that whole project structure and building context? Because that’s a different way of thinking: I need you to have this information, I need this to be at the top of the list, not just what Copilot can find through the Microsoft Graph. And I’m glad you mentioned that, because I kind of started leaning into it, and then you solidified it even more. So what I do is basically initialize Copilot in a folder, and I have all of my artifacts there. Let’s say I’m doing project planning, architecture planning, discovery, effectively discovery of the business problem, but that leads into architecture. Ultimately, I have all those artifacts in that directory, and Copilot’s inside of there. I have my transcripts, I have any emails, I have any diagrams I draw in Lucid, I actually export them. And if I draw anything of my own accord, I export it as a PDF and throw it into that folder structure. I actually do the housekeeping a little bit, where I have a transcripts folder and a diagrams folder.
And I really focus my project along with specific markdowns of, hey, this is what this project is, yada, yada, yada. These kinds of folders you’ll find in this area, these kinds of information you’ll find in that area. I ground it in that, and I have a lot of contextual information that this agent can now really focus on. And I can start to ask questions like, okay, we had a session like this a few days ago, session #3, we discussed this; how far along are we in getting a shared understanding, or meeting this goal, or getting ready to potentially design or start POCing stuff? And these transcripts, they have my conversations, they have what I said, they have what other people said. And then it’s like this rubber ducky, but not necessarily for coding. It’s a rubber ducky now for, say, going to your office: hey, I have this problem I’m solving, I’m solving it this way, is this correct? I don’t need to necessarily do that. I will anyway. But I have that other thing, my rubber ducky for architectural problems, more so than, hey, I’m stuck on this blocker because I put the if block in the wrong place, whatever. You know what I’m saying? These are rubber duckies for complex issues now, before I go to somebody else, or as an addendum to going to somebody else. All right, before I shift to the last topic here, I just want to invite anybody in the audience to drop a question or two in the chat. I’m going to try to leave the last two minutes for us to maybe take a question or two. Really, really put us on the spot.
So I’ve been doing this talk. Tomorrow, down in Chicago, it’s going to be the third time this week that we’re going to do this talk, talking about AgentOps, and what I’m talking about is getting past the POC-to-pilot trust gap and getting something that’s really going to work in production. We’ve got a customer that is getting ready to roll out one of these cool agents that we built to several hundred people. You know, they’re comfortable with it, and they want to know what that’s going to look like and what we should be doing. So I’m thinking about things like evals and monitoring and tracing, and what happens when, like, a kill switch and controls. So, for you, what’s the minimal amount of AgentOps that you need in order to feel comfortable deploying one of these agents that we built to 500 users? Yeah, so first of all, the amount of new features that Microsoft has put into the new Foundry specifically, not only are they fixing bugs live, I literally found a bug and a few days later it was fixed, but the amount of focus they’re putting on making it a feature-rich, one-stop shop for these enterprise-level agent implementations. If you just go into the new Foundry and look at some of the feature sets that are in preview now: evaluators and evals, that’s almost the equivalent of test-driven development. And that’s table stakes, because some of the corners that we cut in development of applications over the last few decades, they’re the same ones that people are going to cut now. The problems aren’t going to change. The tech debt isn’t going to change. It’s just going to be how impactful it is going to be and where it’s going to reside, right? It’s not going to reside in your unit tests; it’s going to reside in your evaluators and the datasets that you give them. And the output is going to be the same, right?
The output’s going to be that you have unexpected issues come up, things are going to break, and the agents are not going to behave the way you would expect, the same way they wouldn’t if you didn’t have unit tests or integration tests or whatever that may be. And so, table stakes for me: evaluators and datasets for evaluation, specific to Foundry, and then tracing, right? You have to understand where you are, especially as these workflows get more complex. You have 15 or 16 steps, or 15 or 16 sub-agents in some cases, depending on the granularity of the problem and the implementation. Where are you? What failed? Where did it fail? If you don’t have tracing, you won’t know that. And if you don’t have evaluators, you won’t know what it’s going to do when you do deploy it. So that’s table stakes for me. So Derek, you’re in charge of managing our projects, maintaining a budget, getting things out on time, and setting the right expectations. This is a lot of extra work now, compared to the way we’ve been building agents, adding the AgentOps into it. How do you communicate with the stakeholders when things start to take a little longer? What’s the value proposition? I’m sure there’s a lot of pressure from the project team to cut those numbers. Yeah, I mean, it may sound obvious, but whenever you have a situation like this, you have to ask what can go right, but more importantly, what can go wrong, and think through all those different externalities, those different use cases or information that the agent could have access to. You have to think through those guardrails and say, okay, what do we need to explicitly make sure of? What are those edge test cases that we may not be thinking about right now that we should be? Get a room full of different people with different thoughts from different areas, different departments of the organization, to ask, hey, what do you think could be going wrong here?
We just had something come up the other day on an agent that we’re working on together, where, just from a security perspective, who can actually input queries to it to potentially have it change something? Those are questions that have to be answered, and it’s really important to be able to do so. One last thing I’ll say that’s tangentially related: there’s a lot of talk in the industry right now about how AI is just going to automate everything and be able to do everything, where there’s not going to be a human in the loop. AgentOps is emphasizing the need for humans here, that agents and humans work together long term. These are going to create operational efficiencies; the ROI is there. Maybe it’s going to save us hours, increase our revenue, something like that. But having that human in the loop for those checks and balances, to be able to audit on a weekly basis, to be able to have these conversations with stakeholders, I think that’s table stakes to me. One other thing I’ll say is, you know, we’re working on something together for one of our clients, and what we’re talking about is, hey, what do you do with a stakeholder? What do you do? How do you do it? What are your thought processes when you do these things? And a lot of those things end up being individual scenarios, and those individual scenarios end up being the evaluations, the things that we might test against for our agent. Golden prompts, that’s exactly what I talked about. I want to make sure we stay a little bit on track, so I’m going to pause that thought for a second, because Jerome in the chat posed this question. It’s really related to this, and he says: is it more effective to release initial agents early, or should you be focusing on perfecting the agent before getting it out with the data? Here’s kind of my thought, and I want you guys to challenge me in this moment.
You said evals and tracing are table stakes, and you also said this is super important. But honestly, when I’m doing a POC, I don’t care if I’ve got that stuff baked in. I don’t need evals; I’m trying to demonstrate whether this thing is going to provide value to the organization. Once I’ve done that, I need to put it in. So what are your perspectives? When’s the right time to do that?

I would say, in a controlled environment, the sooner the better: fail fast. That doesn’t change, in my opinion. I say table stakes are evals and tracing and so on, but think back to those graphs of AI adoption: where is your company on that curve, and how far along are you? If you or your company are early in the adoption journey, you have to consider that you may not be able to do these evals and tracing yet, maybe because you don’t know how, maybe because you don’t even know what questions to ask. So certainly don’t wait for that, and don’t let it become the thing that makes you boil the ocean trying to be perfect. Fail fast, fail as quickly as possible, and iterate as much as possible. The more you do that, the better you get, because evals and evaluators are a new thing; I’m pretty sure the tooling is still in preview.

We’ve been on this journey for years, though. Is this really new? Well, what’s new is the formalized framework. People have been running their own little Python notebooks or their own tests for a while, but for the most part the eval harnesses available in Copilot Studio and Foundry are relatively new. It’s not the idea of running evals that’s new, or even the structure of what an eval looks like; just the harnesses are.
So Derek, you’re going to field a question; I’m going to put you on the spot here. Awesome. I’m not sure you’re a licensing guy necessarily, but we’ve heard of the Microsoft E7 licensing being rolled out this year, and it’s really going to shift expectations. Craig asks: with the E7 license being aimed at management of agents, who are some of the people or customers that really need to pay attention to this?

I think those on the forefront, who can look at this license and say, hey, we’re ready for the capabilities it presents, because there is a lot of value in there that Microsoft is giving you a discount on at a high level. But if you aren’t there yet in your AI journey, don’t try to, what’s the term, fit a square peg in a round hole; get further along on your AI journey before you take it on. If you’re seeing a lot of good Copilot adoption and you’re putting out different agents, that’s a great use case for that sort of licensing. If you aren’t there yet, focus on working from the ground up: get your Copilot adoption together and start working through different agentic use cases before you get to that point.

I would think that companies ready to put agents on the ground doing things autonomously, able to reason things through on their own, maybe not fully autonomous but ninety percent of the way to doing the job on their own, those are the companies that stand to benefit from these licenses. But if you’re just building Copilot Studio agents where people are driving everything, and you’re still manually making changes to your ERP or making changes here and there, you’re probably not going to reap the benefits.
It’s not until you have an agent that does this for you that you’re going to start reaping those benefits. Well, that’s going to be a wrap; we’re at time. I want to say thanks to everybody for joining us. If this piqued anybody’s interest, reach out to us, follow me on LinkedIn, and follow Concurrency on LinkedIn. Thanks for spending your time with us. Thank you.