View Session Recordings: Virtual AI Summit – Beyond the Hype
October 2, 2025 – Keynote by AI Expert Todd McLees

Kate Weiland-Moores 0:18
Good morning, Todd. Good morning, everyone. So good to see you, Todd. Welcome to Concurrency's Autumn AI Summit: Beyond the Hype. I love the lineup that we have for you today, and I'm going to get things kicked off right

Todd McLees 0:21
Good morning. How are you, Kate?

Kate Weiland-Moores 0:38
by introducing our keynote speaker, Todd McLees. It's my great pleasure to introduce you, Todd. Todd is a globally recognized strategist, futurist, thought leader, and published author at the forefront of the AI-powered future of work. As co-founder of Human Skills AI, Todd has dedicated his career to helping people and organizations succeed in a world shaped by disruption, automation, and rapid change. Todd's groundbreaking frameworks on AI agility, human-AI mindsets, and skills for the AI economy have been adopted by over 60 institutions and industry partners across the country, and more than 30,000 people have taken the AI Agility Challenge. Could one of you on the call be next? Named a LinkedIn Top Voice in Artificial Intelligence in both 2023 and 2024, Todd is a passionate advocate for building healthy AI habits and empowering people to flourish in a rapidly changing and evolving world. I myself have been in awe of Todd McLees for many years, so please join me in welcoming Todd McLees.

Todd McLees 2:00
Hey, that's really nice. Thanks very much, and thank you, everybody, for taking the time this morning.
My goal today is really just to get through some content, not a lot of content, roughly half the number of slides I normally use, so that we can hopefully spend some time in a more dynamic conversation, not just question and answer but some back and forth. The concept that we're working on right now takes us from using large language models and AI clients to agents, and of course Concurrency's business and their customer base are very much based today on agentic workflows and agentic AI, where the focus is on the people and process side of things. And so it's very much about agentic workflow design: taking our imperfect processes and making them ready for AI agents. So we think about it as aiming higher in the AI economy, because every business process that we run today was built around scarcity: scarcity of talent, scarcity of time, scarcity of intelligence. But intelligence is moving from scarce to abundant. One way to imagine that is to look at raw cognition as measured by IQ. It's not a perfect test, maybe even a poor proxy for human intelligence, but it certainly is a nice benchmark to track the progress of AI systems. So a year ago the landscape looked like this: the best AI models were clustered right around average human IQ, falling a little bit short, and that was just a little over 12 months ago from this date. You can track this yourself; trackingai.org is where I get this information from. So that's a year ago. Now, average human intelligence is 100, and every 15 points is one standard deviation. And what we've seen throughout the many releases that have occurred over the last year is essentially a standard deviation, 15 IQ points, with every new major release. So today it looks like this: the frontier has already shifted far to the right, with models scoring in the top 1% of human capability.
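As a quick sanity check on those IQ figures, here is a minimal Python sketch (standard library only) of how scores map to percentiles, assuming the conventional mean-100, standard-deviation-15 norming the talk describes:

```python
from statistics import NormalDist

# IQ scores are conventionally normed to a mean of 100 with a
# standard deviation of 15, so percentiles follow from the normal curve.
iq = NormalDist(mu=100, sigma=15)

def percentile(score: float) -> float:
    """Fraction of the population scoring at or below `score`."""
    return iq.cdf(score)

# The top 1% of human capability corresponds to an IQ of roughly 135:
print(round(percentile(135), 3))   # 0.99
# The "genius level" threshold of 140 mentioned a bit later:
print(round(percentile(140), 4))   # 0.9962, i.e. the top ~0.4%
```

By the same math, each 15-point jump per major release moves a model a full standard deviation up the population curve.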
And you can see it's not just OpenAI, it's not just the very latest models with GPT-5; it's Grok, it's Gemini, and you can see Claude just behind that and coming. And again, with every release they do, like Anthropic just released Claude 4.5, they gained about 15 points. So we're confident that by the end of this year, maybe Q1, we're going to start seeing models regularly performing over the threshold of 140, which is genius level. And it doesn't stop there, because IQ, you know, is a little loaded. All of the benchmarks are. But every benchmark we've ever created to track the intelligence or capability of AI is being quickly usurped, from coding to reasoning to data analysis. The ceiling just keeps moving higher. You can see here that right now the latest models, Anthropic's models, GPT-5, are outperforming human engineers in many different ways. Not always, but in many different ways. Let's look at math for a second. This is the invitational math exam that goes out to the toughest, sorry, the most talented students out there in high school math. AI models are already winning gold in the Math Olympiad, which is a global competition. And on the AIME exam, one of the toughest challenges for advanced high school mathematicians, AI models are now posting perfect scores. What was once a test that separated the top 2% of students is now being crushed by machines. If you look at science, lastly: the GPQA, which is Google-proof, PhD-level questions, hundreds of questions. The average expert, the average PhD in their domain of expertise, scores somewhere between 65 and 74% on this exam. And you can see that the newest models are not just competitive with experts; they're outperforming them. All right, so why does that matter? Because, you know, one knock against every benchmark that exists is that the AI labs, when they release a new model, overfit the model to perform well on these types of tests.
So you have to take it with a little bit of a grain of salt, but it's still a good measure of progress over the last couple of years as language models have made their way forward. It matters because once AI reaches or exceeds human capability in those three domains, math, science, and software engineering, we've essentially unlocked the full cycle of innovation: math for abstraction and modeling, science for generating and testing hypotheses, software engineering for implementing and scaling solutions. So let's just take a moment, a little pause, and think about what that really means. For decades, as Vinod Khosla put it, we've rationed the time of our most expensive, our most capable people. Every workflow, every piece of software, they've all been wired to date to optimize around that constraint, around talent, around intelligence. But now we've entered a moment where intelligence in math, science, and software, the very engines of innovation, is no longer scarce. And creativity too. It's the jagged frontier of AI capabilities: all sorts of capabilities, not perfect, lots of gaps and so forth. But we're getting to the point where intelligence is not scarce, it's abundant, which means the old assumption that we have to conserve expertise, limit cycles of experimentation, or settle for good-enough processes is collapsing. When scarcity disappears, everything built around rationing breaks. The workflows we designed for bottlenecks don't make sense in a world of abundant intelligence. That's a much bigger picture. It's more than just faster productivity. It's an opening to reimagine how we create, how we discover, and how we build. And we're seeing all sorts of announcements and predictions coming from the AI labs now about novel concepts in math and science being discovered, if not now, then by next year, with these new capabilities that are emerging around reasoning.
Too often our conversations in academic and professional circles are about teaching AI literacy, helping people understand the basics of the tools. Sorry, just trying to advance the slide. Literacy is absolutely necessary, but it is not sufficient, whether for the work that mid-career professionals, all of us, need to be thinking about, or for the students who are going to be coming into the workforce, which we know from the data is a struggle today. They need skills beyond AI literacy: AI agility, and orchestrating AI agents within workflows. Because if workflows themselves no longer make sense in the world where we're quickly heading, then the real challenge isn't just literacy, it's fluency in redesigning how we work. It's about agility, not awareness. So when you think about the work that's going on right now, the companies that are out in front are not just AI-forward but AI-first companies. Companies like these are trying to be AI-first. They're pushing agents and automation into every corner of the business, measuring progress in tasks completed and cost reduced. You've probably seen the wave of CEOs declaring their companies AI-first, like those on the slide. Certainly the top examples are Box, Fiverr, and Klarna, with very public released memos about what their policies are going forward. At Shopify, employees were told they can't request a new hire until they've proven why AI can't do the work. Klarna bragged about replacing hundreds of support agents with an AI assistant, but they ran into some trouble. That was more than a year ago: 700 support agents were let go, and they were very public about why, that they were deploying AI assistants to do the same work, only to quietly begin rehiring people earlier this year when customers lost trust, repurposing software engineers to work the phones in support and so forth. It was a mess. And that's the reality of AI-first. For some it means speed and scale.
For others, it exposes the risks of chasing efficiency over value. You can see Direct Supply on this slide, locally. I think they're maybe the best example of a company with a more-than-decade-long investment in AI. They're an AI-first company. They have a couple hundred different automations, from GPTs to agents, running throughout their enterprise. But from our perspective, AI-first versus human-first AI, it's not really about picking a side. It's not a religion. The real work is learning to move fluidly between the two in agentic workflows. You have all sorts of different tasks, and there are AI-first tasks, when speed and scale are essential, and human-first tasks, when trust, problem solving, and value creation are at stake. So the organizations that thrive in this era won't be the ones that declare themselves AI-first or human-first. They'll be the ones who know how to design workflows where the two multiply each other, where you go back and forth. There are multiple tasks, and there are always questions to consider: Can we automate this? Once we can automate something, should we? Should we leverage AI instead to augment the people in the role, or should we keep this as intrinsically human? So when we think about the time that we're entering, it's not about the AI economy, it's about the agency economy. For most of history, our economies have been defined by what is scarce. In the agricultural economy, land and labor were the limiting factors. In the industrial economy, it was machines and capital infrastructure. In the knowledge economy, intelligence itself became the scarcest resource. But now we're moving into something new: the agency economy. And here the scarce resource isn't land or machines or intelligence, it's agency, the ability to decide, design, and direct how humans and AI work together.
Because the agency economy is powered by two workforces: the human workforce and the digital workforce. And when you combine the ingenuity, resilience, and judgment of people with the speed, scale, and precision of AI agents, you don't just get more productivity, you expand what's possible. And that's what I mean when I say agency is the multiplier, but both have to be there. You can't really solve the issues by simply automating, right? We've heard that for many years in manufacturing, for instance: you can't just automate a bad process. You have to redesign the process to optimize it. In this case, taking people out of the equation has to be an intentional decision, and when you do that, you can't just solve the shortcomings of the individuals doing the role by having an AI do it. In fact, the processes we've designed over the many years, I think we can all admit if we think about it just a little bit, are relatively imprecise, because we all have people we trust, who are our go-to employees and so forth, who just figure it out. When it goes off the happy path of a given process or workflow, we depend on people to figure it out. When it comes to agentic AI and offloading work to agents, it's much more necessary, with the constraints and the context and the objectives and so forth, to be much more definitive in how we think about process. So if the agency economy is about harnessing human and digital workforces together, the question becomes: how do we actually design that partnership? Well, one of the people I pay a great deal of attention to on this issue is Andrew Ng, co-founder of Google Brain some time ago, former chief scientist at Baidu, a large AI company, and a Stanford professor as well. And there's tremendous online content from Andrew about everything technical about agentic workflows.

Kate Weiland-Moores 15:15
OK.

Todd McLees 15:21
About agents themselves, about AI in general.
He's been pointing out something critical: the breakthroughs in AI won't come from just building bigger models. That's certainly helpful, but they'll come from better workflows. He calls it agentic workflow design, and that's the term we've adopted. It means building processes where agents don't just spit out a one-shot answer, but reason, reflect, use tools, plan steps, and even collaborate among themselves. And our job is to be involved in that. Sometimes wrapping an older model in an agentic workflow outperforms the newest model, simply because you've given it the opportunity to create learning loops, give itself feedback, and get feedback from human interaction and so forth. That's why workflows are the story. It's not just what the model can do in one pass, it's how you design the system around it. And that's where human agency comes into the picture, because not every task in a workflow is the same. Some are automatable: AI can do them end to end. Some are augmentable: humans and agents working together, elevating the work beyond what either could do on their own. This is where you see metrics like 40% improvement in quality, because the person doing that task is leveraging AI to go further, not just faster, to extend their capabilities. And if you're using AI on a frequent basis, I'm certain that over the last year or so you've experienced those instances, those holy-cow moments, around what AI is actually capable of. And then there are some tasks that are just purely human: the judgment calls, relationship management, ethical decisions, creative leaps, things where AI might be additive or might not be, but that domain is still a human domain. The companies that figure this out, the ones that master agentic workflows and define the human role with precision, are going to outpace their competitors. And the same is true for individuals.
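The draft-reflect-revise loop described above can be sketched in a few lines. This is a hypothetical illustration, not anyone's production code: `call_model` is a stub standing in for a real LLM API call, and the prompts are invented.

```python
# Hypothetical sketch of the reflection pattern: instead of a one-shot
# answer, the agent drafts, critiques its own work, then revises.
# `call_model` is a stub standing in for a real LLM provider call.

def call_model(prompt: str) -> str:
    # A real implementation would call an LLM API; this just echoes.
    return f"[model output for: {prompt.splitlines()[0]}]"

def answer_with_reflection(task: str, rounds: int = 2) -> str:
    draft = call_model(f"Draft a response to: {task}")
    for _ in range(rounds):
        critique = call_model(f"Critique this draft for errors:\n{draft}")
        draft = call_model(
            f"Revise the draft using this critique:\n{critique}\nDraft:\n{draft}"
        )
    return draft  # a human still reviews before anything ships

print(answer_with_reflection("Summarize Q3 support-ticket themes"))
```

The point of the loop is exactly what the talk claims: the system design around the model, not the single pass, is where the quality comes from.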
The people who learn to work this way, knowing what to automate, what to augment, and what to keep purely human, will be the ones who move the fastest, give themselves some runway, create the most value, and stay indispensable in the economy we're heading into. Because the real edge isn't just knowing how to use the tools, it's building the right skills too. This is a huge message and area of focus for us within the higher ed institutions we work with across the country. There are durable skills: judgment, problem solving, ethics, the things AI can't replace. There are transferable skills, like data fluency, collaboration, and critical thinking, the ones that carry across roles and industries. And then there are perishable skills; the more specific, the more perishable: the specific tools and platforms we all need to keep refreshing as those products evolve faster than any technology we've ever dealt with in our lives. For individuals who invest in those layers of skill, whether you're 45 years old and mid-career or an 18-year-old headed to a college track or right to work, the ability to build these human skills and combine them with the ability not only to collaborate with AI in meaningful ways but to orchestrate AI agents, that's where thriving and human flourishing is really going to come into play in this economy. So, all right, let's make this real. It's one thing to talk about durable, transferable, and perishable. But what does it actually look like in practice when humans and AI divide the work across automatable, augmentable, and purely human tasks? Let me give you a couple of examples. Let's talk about radiology. Almost a decade ago, one of the field's brightest minds predicted AI would replace the field of radiology entirely. The logic seems simple: if AI can read images better than people, which it can do, more accurately nearly every time today, why do we need radiologists? But that didn't happen. Radiology is actually growing. Why is that?
Because the job is more than image recognition. It involves context, patient communication, regulatory decisions, and accountability. AI speeds up the repetitive parts, which creates more demand, not less. This is the pattern: jobs refactor before they disappear. They shift into automatable tasks, augmentable tasks, and those that remain purely human. This is Jevons paradox, that's the name for it: when efficiency goes up, demand doesn't go down, it explodes. AI made scans faster and cheaper, so we order more scans, and that's actually increased the need for radiologists, not decreased it, because someone still has to make the judgment calls, handle the complex cases, and sit down with patients and their loved ones to explain what it all means. Here's how the role is shifting. The repetitive part, the first-pass scan of thousands of images, is being automated, with AI flagging anomalies at scale. That frees radiologists to spend more time on the edge cases: the ambiguous results, nuanced patterns, integrating findings into a patient's full medical story. They're also being pulled closer to the patient, communicating results directly, building trust, working alongside care teams. At the same time, they've become orchestrators of hybrid workflows, reviewing what AI flags. Cleveland Clinic, for example, is using ambient listening in their rooms; almost 100% of physicians have opted in, and only one patient has opted out in the past year since they've been running ambient AI in the rooms. Ultimately, the radiologists are still accountable. They're making sure diagnoses are accurate, they're documenting decisions, they're navigating the ethical and legal responsibility when AI is involved. That's why jobs don't disappear overnight. They refactor. And certainly there are exceptions; different companies and different individuals are going to make different decisions, like Klarna did 12 or 18 months ago when they cut 700 people and then made the mistake and had to hire many of them back.
Or hire many new customer service agents, and ask engineers to work the phones while they filled that gap. The next use case is around Goldman Sachs. David Solomon, the CEO of Goldman Sachs, put it bluntly around six months ago on CNBC: it's the last 5% that now matters, because the rest is just a commodity. Think about what he was talking about: the S-1 prospectus. When you take a company public, if you're playing at the top of the market, you might hire Goldman Sachs to take you public, and one of the things they do is build a prospectus on your company. Traditionally, that meant six Goldman Sachs bankers, not inexpensive resources, working for multiple weeks, grinding through hundreds of pages, every disclosure, every chart, every footnote. But today, most of that baseline work is automatable. AI can draft sections, assemble financials, check formatting, and compare against thousands of past filings in seconds. So what's left? The part where judgment and trust matter most: the story that positions the company to investors, the careful framing of risks, decisions under uncertainty that can't be delegated, and the relationship between the banker and the client that says, this is how we tell your story to the market. Now, sometimes the split looks like that, 95-5. Other times it's 80-20, or 20-80, or even 60-30-10: automation, augmentation, and purely human. The ratio shifts depending on the task, the stakes, and the context; it just depends on the workflow. But here's the reality: even if only 10% of a process can be automated over time, ultimately it will be automated. That's the gravitational pull of AI technology. The principle doesn't change. The commodity work keeps shifting toward automation, and the human edge keeps concentrating in judgment, trust, and creativity. That's why this isn't just about what can be automated.
It's about clarity: clarity on what's automatable, what's augmentable, and what must remain human. That's the essence of agentic workflow design. How many of you who are thinking about AI agents at this point have already done the work of defining the process in greater detail than you've ever done before, identifying exactly which tasks you think are automatable, where you still need to rely on people, and where AI can elevate the work of people as well, to increase the value of that process and create better relationships or outcomes? So radiology shows us how efficiency expands demand. Goldman Sachs shows us how automation concentrates the human edge. The line between automation, augmentation, and human-only work is always shifting. That's the challenge: it's not static. A single project might move across those boundaries, sometimes automated, sometimes shared, sometimes human, back and forth throughout. That's why we need a framework, a way to see and design the handoff. And this framework is not ours. It's Erik Brynjolfsson's, who wrote the book Human Plus Machine maybe a decade ago now. He was at MIT at the time; now he's at Stanford. They developed this framework called the Human Agency Scale, and we've not only been inspired by it, we've modified it to fit enterprise workflows as well as academic workflows. I'll show you an example of that in just a second. But the Human Agency Scale is all about the role of the human. Instead of only focusing on what the AI agent is capable of, this is the "can we, should we" conversation: what role can, and should, a person still be playing in that context, and how much agency and autonomy do they have in the conversation? Because we still need to help people build agency, so that they get out of bed in the morning feeling like, you know, they are in charge of their own destiny. It's called self-determination: that they show up to work ready to drive some of that.
Even if they're using agents to achieve it, and that they know, and are willing, to be held accountable for the results. This is a massive shift for most people. You know the old concept, I guess it's the Peter Principle, where very successful frontline contributors just keep getting promoted; I think the principle is you keep getting promoted to the level of your incompetence, right? So suddenly we put somebody in a leadership role or a management role, and they're not as good a manager as they were an individual producer. Well, now, as the Silicon Valley folks, you know, the AI tech bros and everybody else, start talking about how we're all just going to get sort of promoted, and we'll be managers, managing agents and orchestrating them and so forth, these are skills that a very small percentage of people have. So this framework maps out the different modes of collaboration. At one end there's autonomous AI agency, where the machine runs end to end. In the middle there's shared agency, humans and AI co-creating in the same loop, back and forth; a lot of us are using language models that way today. And at the other end is full human agency, where the stakes, nuance, or context demand that people stay fully in control. Is there too much risk to automate something? Klarna found that out the hard way. The reality is, real work moves across this scale. Sometimes you delegate more to AI, sometimes you pull it back more to the human side. The key is that you're clear on what that looks like. It's not as simple as saying "human in the loop," a very popular phrase. The real question is, what is the role of the human in the loop? Is it oversight? Is it review? Is it decision making? Is it creative direction? That clarity is human agency, and you have to design for it. So let me show you an example that is academic in nature.
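As a rough editorial illustration before the example, the three collaboration modes just described can be expressed as a simple routing rule over workflow steps. The step names and agency-level numbers here are invented, loosely modeled on the H1–H5 human agency scale the talk uses (H1 autonomous AI, H3 shared, H5 fully human).

```python
# Minimal sketch: route each workflow step by an assigned human agency
# level. Steps and levels are invented examples, not a real deployment.

workflow = [
    ("Collect last quarter's support tickets", 1),   # automatable
    ("Draft a summary of recurring complaints", 3),  # augmentable
    ("Decide which product changes to fund", 5),     # purely human
]

def route(step: str, level: int) -> str:
    if level <= 2:
        return f"DELEGATE to agent: {step}"
    if level <= 4:
        return f"CO-CREATE, human plus AI: {step}"
    return f"HUMAN ONLY, AI may coach but not produce: {step}"

for step, level in workflow:
    print(route(step, level))
```

The value of writing it down, even this crudely, is that the "what is the role of the human in the loop" question gets answered per step instead of per slogan.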
As Kate mentioned, we work with more than 60 different institutions, not only as clients but as our partners, our go-to-market strategy; they're our distribution channel, if you will, to the market for our AI agility and agentic workflow programs and challenges and so forth. It's interesting: I think we've had more than 200 college presidents take the AI Agility Challenge, and, locally, Marquette University has now had some 70 people go through, from the School of Business and the School of Engineering, all the leaders. Kimo, the president at Marquette University, has completed the AI Agility Challenge and so forth. It's a level set, a shared language, a baseline skill set around not just prompt design but human-AI collaboration. Now, as I've worked with thousands of educators in higher ed, from community colleges to the greatest universities, the people who dislike me the most are English professors and writing instructors. They have a really hard time, understandably, with how they can bring AI into the mix. So we work with all 30 community colleges in the state of New York, the SUNY system, and at one of those stops I was working with an English professor in a workshop around an assignment. She sent me her assignment, the PDF describing the assignment, the assessment, the rubric, and all the things academics give to students so that they can do well on the project. And what we did was run it through a system prompt we've built to help create an agentic workflow out of it. What it essentially said was, in education they're not very interested, understandably, and maybe even in business, in AI leading very often. So on that human agency scale, you can see on this chart we sort of floated in the 10 steps of the assignment, the same 10 steps she's always had in that assignment.
The human agency scale was somewhere between a three and a five; you can see that there are four steps rated a five. OK, so once we got done mapping, sometimes the students were operating at full human agency, like choosing a topic or making a final judgment call. Other times they were working in shared agency, co-creating. And then we didn't just tell them to do it. We used system prompts, what we refer to as a seed prompt, which we gave the instructor. I'm using an academic example, but this certainly fits in business as well. You know, it's much harder to understand how students are using AI if you don't give them the guardrails to live within, and I'm not talking about policy. What I'm saying is it's very easy to generate a system prompt for the student. I can say to the student: here's the assignment, and I'm happy to have you use AI on this assignment, but you have to start with this prompt. What that prompt does is guide the AI tool's behavior quite well, actually, and it reminds the student, when necessary, that it's a coach, not doing the work for them. So, for instance, if you're trying to choose the topic for this particular assignment and you say, well, write the topic for me, pick it for me, it will come back and say: actually, this is an H5, a full human agency step, so I'm your coach, not your writer. This needs to come from you. Here are the skills you're building right now. And so what the students learn is not just the ability to complete this assignment in a way that complies with how the institution, the teacher, and the classroom want it done. They also learn agentic workflow, and they learn healthy habits for working with AI to get work done at a better level than they've ever been able to before. And it becomes completely auditable. And within the context of industry, these sorts of things, master prompts and system prompts, also have a tremendous amount
of value in creating standards, instead of just starting a new chat session and relying on, you know, ChatGPT's or Copilot's memory capabilities to play within the guardrails. To me, this is the future of learning. It's about agentic assessment design. It's also the future of work, because the organizations that build this kind of intentionality into workflows and process are going to move faster. There's going to be more trust, there are going to be fewer critical mistakes, and it's going to free people up to be more creative than anybody else, any other entity you're competing against. That brings up maybe one of the final points here: abundance creates new work. The real upside of AI isn't just automating what we already do; it's the work we never get around to doing because we're so busy with all of the rote tasks that fill our day. Warren Berger, the author pictured here, makes that case. His book, A More Beautiful Question, is actually from 2014, and it informs a lot of our work. He makes the case that progress doesn't start with answers; it starts with the right questions. And we hear that all the time. It's especially true in the new era of abundant intelligence: it's no longer about finding answers, it's about asking great questions. Because if intelligence is no longer scarce, that's the differentiator: who can ask the best, most beautiful questions? And that is playing out today. When you think about how AI is being used in your company right now, you may be an outlier who's thinking more creatively and working up the value chain, but most of us are living in this green box: the core use cases that make up today's work, making it a little faster and a little cheaper, trying to get that email response done without spending a great deal of time on it, and so forth and so on. Very tactical, transactional tasks. But intelligence continues to multiply, 4x per year, then the next year 16x, then 64x and so forth, compounding.
Well, the ideas in the green box just don't get us very far. The real opportunity is on the other side of the chart, at the intelligence frontier: the work we've never done before because it was too expensive, too time consuming, or simply impossible to gather and curate the intelligence resources, the people, the talent, we need to solve those issues. And to reach that frontier, we need both imagination and human agency, and we need to carve out some time for these use cases. We need it to look like this, where that side is a lot more populated in our business. That's how we unlock value with AI and agentic AI and workflows. It's questions like: What's already working in our business that we could 10x if we had access to abundant intelligence? What problems feel impossible right now because we can't hire enough people, enough talent, to solve them? What new kinds of work, what new business models, become affordable when intelligence is no longer scarce? Can we build processes and systems around that? And yes, what are the things that only humans can do, whether it's today or next year, when we see IQs of 175, surpassing just about any human being that's ever been recorded or measured? What's still going to be human? And again, your business, your enterprise is going to make different decisions than your competitors do, and within the business, different business units and departments and line managers are going to feel differently about this than others do. In the agency economy, we don't just need better agents, we need more human agency. Because if people don't have the capability, the clarity, and the courage to ask better questions and act on them, to actually do them and be held accountable for them, well, abundant intelligence will never reach its potential, and neither will we. That's the shift. Productivity has to be the floor, and agentic workflows can help you get the productivity, help you get to the table stakes.
Another way to look at this is a framework that we published in Harvard Business Review in December of last year. The gray box on the bottom is the green box on the previous slide; this is where everybody’s starting. But if you create a shared collective intelligence across your team, if you can help people get aligned around what agentic workflows can look like in your business, in your culture (what is automatable, what represents too much risk, where there are opportunities to augment human capability, and so forth), then you can start working your way up to the point where you’re driving additional value: increasing customer relationship value, lifetime value, et cetera, then transformation and growth, and then ultimately innovation. So to me, it’s about creating a shared language and baseline skills around human-AI collaboration. That’s not just literacy, and it’s not just IT either. The domain experts in your business, the people running business units, the people executing on that side, have to have this shared understanding, learning how to collaborate with AI in the flow of their work. Everyone needs the same foundation so you can move together, not in silos, not with uneven adoption. If you’re in silos, agents are just going to accelerate the silos. The next thing to learn, and you could start doing it right now, is agent delegation. It’s an entry-level skill, and it’s going to be expected in the next couple of years of anybody coming into the workforce, at a young age or an older one. It’s about teaching people how to give the agent a role. I’m a horrible delegator to other human beings; this is something I’m going to have to really intentionally build as a skill: figuring out not just what I can collaborate with an AI on in a ChatGPT session, for instance, but what I am willing and able to give to an agent to do. What constraints need to be set?
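The delegation questions here (what role, what constraints, when to escalate) can be made concrete as a structured brief you fill out before handing work to an agent. A minimal sketch; the `AgentBrief` class, its fields, and the example values are illustrative, not part of any particular agent platform:

```python
from dataclasses import dataclass, field

@dataclass
class AgentBrief:
    """A structured delegation brief: role, task, constraints, escalation rule."""
    role: str
    task: str
    constraints: list = field(default_factory=list)
    escalate_when: str = "anything outside the constraints"

    def render(self) -> str:
        # Turn the brief into a plain-text instruction for the agent.
        lines = [f"You are acting as: {self.role}", f"Task: {self.task}", "Constraints:"]
        lines += [f"- {c}" for c in self.constraints]
        lines.append(f"Escalate to a human when: {self.escalate_when}")
        return "\n".join(lines)

# Hypothetical example: delegating a board-briefing task.
brief = AgentBrief(
    role="junior analyst who understands our business",
    task="Assemble a board briefing from last quarter's sales notes",
    constraints=["Use only internal sources", "Cite every figure"],
    escalate_when="a figure cannot be sourced",
)
print(brief.render())
```

Writing the brief down, rather than improvising it in a chat window, is what makes the delegation repeatable across a team.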
And how to treat AI more like a junior teammate who needs to understand our business, where I need to be more explicit in giving an instruction, rather than as a magic answer box, which is how many of us think about it. There are simple ways to start with agent delegation. If you’re a beginner, you can use ChatGPT Agent: just kick off a task, watch it plan steps, execute them, and deliver a cited result. I’m guessing many people here are a Microsoft shop, so if you’re already in Microsoft and using Copilot, take a look at Copilot Studio and what it’s capable of doing. These aren’t just prompts in Word or Excel; some of the functions they’ve recently brought on board are actually very cool, like pulling data across Outlook, Teams, and SharePoint to assemble a board briefing. I’m certain that Concurrency can be helpful here. It’s about learning how to work one-on-one with an agent to see what you can outsource or offload, but then also orchestrating multi-step processes. You have to get more specific and more precise in defining process if you’re going to bring AI into the picture. And then lastly, agentic workflow design, or agent orchestration. Once people can delegate to individual agents, the next step is designing whole workflows where humans and AI move fluidly between what’s automated, what’s augmented, and what remains purely human. And it’s laying that out, just like the assignment from the English class: there are 27 steps in our onboarding process, whether it’s a customer onboarding process or an employee onboarding process; this is what’s automatable, this is what’s augmentable, and this is what’s purely human. For most organizations, that means finding a partner like Concurrency to handle the technical side of responsibly deploying agentic AI at scale. As you’re building these capabilities in your organization, that’s where you need to focus: building skills and building agency with your people.
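The classification exercise described above, walking a process step by step and tagging each one as automatable, augmentable, or purely human, can be captured in a few lines. The onboarding steps and their tags below are hypothetical, just to show the shape of the inventory:

```python
# Tag each workflow step so humans and agents know who owns what.
# Step names and tags are illustrative, not a real onboarding process.
AUTOMATE, AUGMENT, HUMAN = "automate", "augment", "human"

onboarding = [
    ("Collect signed offer letter", AUTOMATE),
    ("Provision laptop and accounts", AUTOMATE),
    ("Draft 30-60-90 day plan", AUGMENT),   # agent drafts, manager edits
    ("First-week welcome conversation", HUMAN),
]

# Group the steps by tag to see the shape of the workflow at a glance.
by_tag = {}
for step, tag in onboarding:
    by_tag.setdefault(tag, []).append(step)

for tag in (AUTOMATE, AUGMENT, HUMAN):
    print(f"{tag}: {len(by_tag.get(tag, []))} step(s)")
```

The value is less in the code than in the argument it forces: every step gets an explicit owner before any agent touches the process.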
So where do you start? You build a shared language, learn how to delegate to agents, and then design agentic workflows. Because that’s the real opportunity. Productivity is no longer the prize; it’s the floor, the starting line. Value creation is the ceiling, and agentic workflows powered by human agency are the multiplier. That’s a force multiplier if you’re trying to reach people who don’t feel high agency, because only about 40% of people in the average business do. If you’re trying to get the rest to use AI, they only see it as a threat. Low-agency people see AI as a threat. If you roll out of bed in the morning, like many of you do, and you feel like you’re in charge of your day, in charge of whether today is deemed a success or not, whether you’re a founder of a business or working for somebody, you can feel that sense of ownership. If you feel like you contribute to those results and you’re willing to be held accountable, that’s the basics of high agency. Only 40% of us feel that way. 60% of the world feels like the conversation we’re having here is a threat to their livelihood, to their self-worth, because of the skills that they’ve built. When you start looking at even radiologists and the way their job is evolving, it’s not going away; we’re increasing the number of radiology jobs, but the job is changing. Every change is difficult to navigate. So that’s how we’ll lead in the agency economy: not by chasing efficiency alone, but by reimagining how we create, how we discover, and how we build together with intelligence that’s no longer scarce. This is happening right now in some of the businesses I identified before as AI-first. So I’ll stop there and leave some time for questions. I want to say thank you to Concurrency for giving me the opportunity to address the group, and I’m happy to answer any questions that are already in the chat.
Or if you want to just turn your camera on, please feel free to do that and ask questions. 43:58 Yeah. Amy Cousland 44:02 Todd, if people want to drop questions in the chat, I did see one. There was one asking about why trackingai.org shows a different chart than what you had shared. Todd McLees 44:13 Oh, so that’s offline versus online. I use the Mensa Norway test, and I have been for the last 18 months. If you click on the Mensa Norway button, it’s going to show you the same chart. It’s also three days old, so it might be a little bit different because they’re running tests on a regular basis, but I imagine it will be a lot closer to what you’re seeing. So that’s the issue of having offline versus online access to data or context or search capabilities and so forth. And from my perspective, in just about every work process today, AIs have those capabilities. So I use the Mensa Norway test instead of the offline capability. Amy Cousland 44:54 Any other questions? You can go ahead and drop them in the chat. OK. Oh, there’s a question about what the Marquette class is. Todd McLees 45:14 Oh, so we’re working with executive education at Marquette around AI agility, and AI Agility is a 30-day challenge. You actually get three months to complete the challenge. It’s 20 modules, roughly 20 minutes per module. It also includes 12 months of updated modules, so every month you see another two to three modules, depending on AI announcements. And we’re not out there saying, hey, GPT-5 just came out, or Sora 2 was just released a couple days ago; we’re more interested in what impact the new capabilities have on the way we work or the way we learn. So that’s what we ship out.
So this is around human-AI collaboration, a little bit on responsible AI, and we’ll have a new challenge coming out this quarter through Marquette and our other higher ed partners around agentic workflow design. You can take a look at humanskills.ai for information on that. And if you’re interested in taking the class, it’s $500 per person, but again, that’s 12 months of content, not just the first 30 days. Many teams take it through the higher ed partners. We don’t have a direct-to-market model; we go through the Microsoft ecosystem partners, MSPs, and higher ed, but primarily higher ed. Amy Cousland 46:40 I do have a few other questions there. Can you see them in the chat? Todd McLees 46:42 Oh. Amy Cousland 46:47 See. Yeah, sure. For people in the 60%, those that think that AI is a threat, what’s the best way to overcome this fear? Todd McLees 46:47 You want to give me one, Amy, and I’ll get in there. Yeah, it’s small doses, you know, showing them ways that it can be additive. It’s the old “what’s in it for me” line. So essentially, focus on augmentation rather than automation with those folks, so that they can see the impact it has for them. In many cases it’s: well, I can save 4 or 10 hours a week, I can do work at a higher level than I’m capable of, or with many fewer hassles in getting that work done. If you can stack those up, you start to see converts, as opposed to starting with the conversation about, well, this technology is here now, so we’re going to automate even 10% of your job. Usually it’s not more than 25% that’s automatable today, but for many people, 10% is territory where they feel like they have to defend. Amy Cousland 47:47 Awesome. What jobs today do you think require the lowest level of human agency and are at the greatest risk of disruption? Todd McLees 47:57 Yeah, well, I would just direct you to willrobotstakemyjob.ai.
It sounds goofy as a website name, but there’s really good data there. Anything that is transactional in nature, anything that’s just knowledge-based; I would suggest writing and editorial. That Goldman Sachs use case is a really interesting example: there you’ve got six bankers who can probably go do higher-value work. But if it’s somebody’s job to do research or just create documents, whether those be grant proposals or sales proposals by inside salespeople and so forth, those kinds of tasks are going to be disrupted, or are being disrupted right now. Customer service roles; anything that I would say is remote, where you don’t have to be in front of people, where you’re not protecting human relationships at all, where you’re not part of that part of the value chain, where you’re just working on deliverables. It’s those types of purely cognitive tasks that are most automatable. Amy Cousland 49:01 From a practical aspect, how do you see companies overcoming outdated IT paradigms, limited access to plugins, higher-level permissions for applications that can leverage AI, et cetera? And what tools exist to help guide this transition? Todd McLees 49:18 Yeah, so there are a couple of questions in there. The first one is really around modernization of systems to enable AI, and to me, the only way through that is a strategic conversation in the C-suite, in the ownership, getting to the point of believing, and not everybody does, that you have to leverage artificial intelligence and agentic AI to keep pace with competitors. If that’s the case, then that has to be the catalyst to get you moving in that direction, to modernize your IT systems. The other question was what tools exist. We’re building a platform right now which will help with agentic workflow design.
We’re starting with academic because we’ve got built-in customers there, but it’s completely translatable: take a defined process right now, plug it in, and AI algorithms will make suggestions, based on your guardrails, around what’s automatable, augmentable, and purely human. Other than that, it’s about learning. I’m not a huge fan of endless pilots that never get into production; we’ve all seen the negative news, ninety-five percent of them according to MIT’s data from a month or two ago. But there does need to be a fair amount of experimentation just for the sake of learning. Amy Cousland 50:44 I think we have time for one more question. I’m going to read that to you here. Some in the AI space argue that given the rapid advancement of AI’s capabilities, the skill of workflow automation will likely become obsolete within the next year or two. They argue you’d be able to simply explain the workflow you want and, presto. What’s your take? Todd McLees 51:04 Yeah, I think that’s a little optimistic. For instance, I’ll go to coding, which I would say is probably the leading role right now in terms of automation. While we see the headlines around 80 to 100% of code (you know, 90% of code at Anthropic and 80% at Meta being created by AI), just yesterday an Anthropic researcher came out and said that he believes by 2027 an AI will be able to work on a coding project for 30 consecutive hours without human interaction, and the code will be 50% right. And when I heard 50% right, I thought, well, it definitely changes the role to oversight and quality and elevating AI’s outputs. But I struggle with it; people are all over the map on that.
I think today, between now and the next 24 months, you’re pretty safe in saying: here’s where we have to make a decision as to what our company believes, where humans need to be involved and where they don’t necessarily have to be. Longer term, as the technology matures and as you partner with groups like Concurrency to bring solutions to the table, you can have much more clarity about where an AI can play a heavier role in your business without introducing unintended risk, either on the employee side or the customer side, or any business risk at all from a compliance perspective and so on. Amy Cousland 52:34 Thank you so much, Todd. Really appreciate the keynote. Really insightful. I know you all have the agenda; you can pick between Turning AI Ideas into ROI or Intelligent Document Processing with AI for the next 10:00 session. We are going to be recording all of these sessions, so if there are two you really want to see at the same time, we’ll be sharing those recordings with you. Again, thank you everyone, and thank you, Todd. Take care. Bye. Todd McLees 53:02 Thank you, everybody. Thanks, Amy. Thanks, Kate. Brian Haydin 0:11 And we are live. Welcome to Turning Ideas into ROI. In this session, I’m going to walk you through how to prioritize AI initiatives and translate them into actual, measurable business value. Today my focus is not just on ideas, but on creating a structured path to impact. A little bit about me: I am Brian Haydin. I’m a solution architect at Concurrency. I spend a lot of my time with leaders around the industry helping them map out and implement AI strategies, and hopefully helping them ideate and figure out what kind of initiatives are a good priority for them. Like many of you, I’m super passionate about moving past the hype and into results. A little bit more about me personally: I’m an avid outdoorsman, and I like to talk about being in the outdoors and hunting and fishing.
And I’ve got two kids; you can see my son and my daughter there. I’m married. And a fun little fact about me is that I am a twin. So reach out to me, follow me on LinkedIn; there’s a QR code. I like to regularly share stuff, basically insights on AI strategy and innovation and whatever else is kind of interesting to me. One quick housekeeping note before we dive in: we’re going to be turning these concepts into action, and that can be kind of challenging, so Concurrency is offering a 30-minute session to help you get started. We’re going to share a link in the chat later, and you can book a time that works for you. More to come on that; I’d love to meet you and hear about the ideas that you and your organization are thinking about. So let’s get started. Agentic AI is now mainstream. AI agents are no longer experimental; they’re mainstream. 88% of organizations now say that they’re measuring actual value from AI adoption. That means the conversation is starting to shift from if to how, how much, and how fast. I recently saw a headline that Accenture has something like 14,000 agents running in their organization. That’s just absolutely incredible. It tells me that most organizations are starting to adopt this and that it’s a big part of their strategy. So it’s not just hype; this is reality, and hopefully you and your organization are starting to get there. And this is going to change how work gets done. I’ve got a bunch of numbers here; I had something like 8 slides with all these different numbers, so I thought I’d just throw it all together. Here’s where the 88% came from, if you look in the center of this. In a world that’s buzzing with hype, business leaders now need a
road map. Right now there’s broad adoption, limited adoption, full adoption; add all that up, and it’s a significant share. I’m also calling out the growth in companies that are currently using or planning to use AI, on the right side of the screen, where it says 55 to 75. Those are old numbers, from 2023 to 2024, and we’re three quarters of the way through 2025, so we’ll see what those numbers look like. I’m sure it’s going to be huge; we said it’s 88% on the AI agents, so it’s probably going to be something close to 90. But it’s not without challenges, right? 92% of companies report that they are having challenges with AI. It’s not easy to adopt strategies; it’s not easy to capture actual ROI and real value. And that’s why I thought this would be an important session for us to talk about. But before we get started: when I say agents, I don’t mean models, I mean agents. So I think it’d be helpful for us to discuss what agents are. Agents combine models with tools, with context, with memory, and eventually with autonomy, so they can perceive, plan, act, and adapt, much like you and your co-workers would do on a day-to-day basis. And these are starting to become the building blocks of an AI-enabled, frontier enterprise. So this chart, which I showed the last time I did a talk on this topic, hasn’t really changed that much, because it was a really super recent study. What it’s showing is that the number of job functions that AI agents are able to help individuals with is starting to accelerate. You can see that on the left side: people that are using agents to do one or more functions, that’s a pretty high number, you would expect that. But look over to the right.
You’re starting to see that trend go up as individuals start to use AI agents to do multiple functions, five or more functions at a time. So we’re seeing the scale happen quite rapidly, and we’re going to continue to see that through the rest of this year, which is the year of agents, at the end of the day. Agents can play different roles depending on your needs; I might even come back to this slide a little bit, because it’s got a lot of content here. There are the commodity AI copilots, the ChatGPTs, the Geminis; those are the commodity public AI tools, and they boost productivity pretty much right away, and most people are using these tools today, I would say. But as you start getting into more sophisticated tools, like Copilot’s Researcher agent and Analyst agent, I see quite a steep drop-off, as people just don’t really understand how to use these tools effectively. And then once you get into custom agents, that’s where we’re providing a lot of business value, a tremendous amount of ROI, very intentional types of activities or things that we’re building, but it takes a lot of engineering and a lot of thought to get there. So this is just showing you that as you go from commodity to more engineered activities, what you’re really doing is increasing the accuracy of the generative AI and the usefulness of what it brings to the table. As organizations think about getting started with AI, a bunch of concerns come up. AI isn’t all upside, and people have concerns around privacy, data readiness, talent gaps, biases and hallucinations, and, most importantly, ROI.
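The earlier definition of an agent, a model combined with tools, context, and memory that can perceive, plan, act, and adapt, can be sketched as a toy loop. The planner and the single `lookup_order` tool below are stand-ins for the LLM calls and business-system APIs a real agent would use:

```python
# A toy agent loop: perceive the goal, plan steps, act with tools, adapt.
def lookup_order(order_id):
    # Stand-in tool; a real agent would call an ERP or CRM API here.
    return {"order_id": order_id, "status": "shipped"}

TOOLS = {"lookup_order": lookup_order}

def plan(goal):
    # A real planner would be an LLM call; here it returns a canned plan.
    return [("lookup_order", goal["order_id"])]

def run_agent(goal):
    memory = []                          # the agent's working context
    for tool_name, arg in plan(goal):    # perceive + plan
        result = TOOLS[tool_name](arg)   # act: invoke the chosen tool
        memory.append(result)            # adapt: fold the result into context
    return memory

print(run_agent({"order_id": "SO-1042"}))
```

The point of the sketch is the separation of concerns: the model plans, the tools act, and the memory is what lets the agent adapt across steps rather than answering one-shot.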
So today we’re really going to focus on the ROI aspect and helping you compute and figure that out. But these are all signals that it’s important for us to plan carefully, and to think about hallucinations, talent gaps, data readiness, and all these other things as we start to build agents. AI is no longer just about isolated use cases or tech experiments; it’s a strategic capability for most organizations, and they’re looking at it that way. Nearly half of all tech leaders say that AI is now fully integrated into their core business strategy. As PwC’s Chief AI Officer says, top-performing companies will move from chasing AI use cases to using AI to fulfill business strategies, and Satya Nadella reminds us that this is really the defining technology of our time. So it’s no longer optional for growth. But here’s the risk: the movers, the early adopters, are pulling away from the latecomers, and the latecomers are starting to stall. Early adopters are capturing multiples of the value relative to the latecomers, the people that are just starting to adopt it. But it’s not too late; there’s still time to catch up. One of the dirty little secrets that not a lot of companies are going to tell you about is that there are really no experts in this field (except Concurrency, of course, but I say that in jest). Honestly, if you think about it, ChatGPT is only a couple of years old, so we haven’t developed a lot of muscle memory. There aren’t a lot of people that have been doing this for a very long time, and for people to claim that they’re experts in the field is kind of ridiculous. A lot of companies feel that when they hire some of the big consulting companies, they’re hiring people that are actually learning on the job, and in a lot of cases that’s going to be true.
I like to be pretty transparent with the companies that we work with: we’re going on a journey together, we have a hypothesis, and this is going to be an experiment; we’re going to treat it as such. But nonetheless, there’s still time to catch up. Nobody is more than two years ahead of you in terms of level of experience, so keep that in mind as you start to frame that journey up. So how do you start? How do you catch up? Just because that gap is widening doesn’t mean you should abandon all hope. Getting started today, I’d start with four key things to look at. Skilling up your workforce is number one: getting people comfortable with AI literacy and Copilot usage. Todd talked about AI literacy in the keynote, and I can’t emphasize that enough; people have to remove the fear from working with these tools. Then start with your organization’s North Star: aligning AI projects with the top strategic goals for your department and your organization is going to be important in order to keep the momentum, and we’ll talk about that a little bit more. Build pilots with urgency; show value measured in months, not years. I don’t like to take on AI projects that don’t have measurable ROI results that are going to pay off in 12, maybe 18 months, depending on whether it’s a really big payoff or a really complex thing. You need to start demonstrating ROI value very early, and the reason is that the technology is growing so fast that oftentimes, before you’re done building a sophisticated implementation, it becomes commoditized and there are other tools out there that you can just buy off the shelf. That doesn’t mean it’s not important or valuable to do it.
It just means that your ROI window is shrinking with the acceleration of this technology. And then scale with governance: bake your telemetry and KPIs into what you’re going to be building. It’s often difficult to measure, because you may not have the KPIs or the baseline data to support growth or improvement, but baking into your solution some way of recognizing the value you’re providing to the organization is super important and critical. So what about preparing your team? Three key things that you should be looking at: skilling, change management, and CoEs. AI is really only as powerful as the people that use it, and that’s why we’re emphasizing these three elements; they’re going to make the adoption sustainable and not just experimental (but you are going to experiment). Skilling: every team member needs to be equipped with the different tools they’re going to be asked to use. That might be commoditized tools like Copilot or ChatGPT, or it might be giving them access to some of the semi-custom tools like Copilot Studio. But without cultural adoption, without change management, things are going to stall, so you need a change management strategy that’s going to ensure trust, ensure buy-in, and keep the excitement and momentum going for the organization. And then a central Center of Excellence is critical as well. Share your learnings and your wins with each other so the organization can recognize those things; it gives you a stable base and consistency, where everybody understands how to build things, what tools we want to use, and what the overall strategies are. So in skilling: AI literacy for all. Introduce Copilot training sessions, AI 101 activities. I’ve seen some of the healthier early adopters starting to do AI 400-level and 300-level classes for people. So build a
curriculum that is appropriate for where the organization is at. Find champions that can help; work with power users, or identify power users in different departments and train them up. Get fluency with the executives. I don’t think this is as much of a problem today, though some recent studies have shown that executives tend to think they’re further along in their AI journey than the individual workers at the organization, which I thought was really interesting. But in some cases, with the companies I’m working with, the executives really don’t understand what’s possible or what tools they should be using for what, so executive fluency is important. And change management really is about trust and momentum: transparent communication, explaining why AI is being introduced and what it’s doing, and impressing on individuals that this isn’t here to take your job, it’s here to make it easier. Make participation rewarding for users, and frame AI up as a partner, not a replacement. All of this is intended to help them embrace the journey, because technology is only half the battle. We can throw technology at the problem, but without cultural change and cultural adoption, these projects are likely to stall at some point. And then the Center of Excellence really is kind of your base camp. It provides the governance, the reusable assets, and the cross-functional teams to help guide scaling activities: governance anchors, reusable assets, a cross-functional team, an innovation engine. Some of the most successful CoEs have actually built out portals for the organization to share things like the best prompts I’ve used or the best agents I’ve been able to create, and they reward people for innovating internally within the organization.
So now that I’ve laid some of the groundwork, let’s talk about some of the use cases. We need to choose the right trails; we can’t just go deviating into all sorts of different areas. Let’s identify where AI can align with our strategy, reduce pain points, and unlock actual, measurable value. I can share with you some stories of cases where people chose the wrong path. Here’s one good example: I was working with a company that has a lot of different business units and a lot of divergent technologies, because they have grown through acquisition over the years. They had somewhere around 12 to 14 different ERPs, and the data was terribly messy. So when we were trying to answer “where’s my order” and hook that up to generative AI and an LLM, it was nearly impossible, because there were so many different gaps as the data journey needed to traverse many different areas. So be careful about the use cases that you choose; try to make sure that they’re possible before you bite them off. The next point, and I brought this up a little bit before: before you start to brainstorm on any single idea, you really do have to understand what your organization’s North Star is. I hope that you do today, but if you don’t, you might want to pause before you start the brainstorming session and understand what it is the organization is trying to accomplish. Tie these things to the priorities of your organization and make the experience an aspirational one: how we’re going to make the organization better. Anchor it in measurable outcomes; identify the KPIs that you want to measure, and then use all of this as a rallying cry for the organization to go in the right direction.
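The earlier rule of thumb, ROI that pays off in 12 to 18 months with value measured in months, is easy to sanity-check with back-of-the-envelope arithmetic once you have anchored a use case in measurable outcomes. All figures below are made up for illustration:

```python
# Back-of-the-envelope payback for a hypothetical AI pilot.
build_cost = 120_000          # one-time engineering cost, dollars
monthly_run_cost = 2_000      # hosting, licenses, maintenance per month
hours_saved_per_month = 400   # across the team, from the pilot's KPI baseline
loaded_hourly_rate = 60       # dollars per hour, fully loaded

monthly_benefit = hours_saved_per_month * loaded_hourly_rate
net_monthly = monthly_benefit - monthly_run_cost
payback_months = build_cost / net_monthly

print(f"Net benefit: ${net_monthly:,}/month, payback in {payback_months:.1f} months")
```

If the payback lands well past 18 months, that is a signal the use case may be commoditized by off-the-shelf tools before the build pays for itself, which is exactly the shrinking-ROI-window risk described above.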
We recommend focusing on four key areas. These are the most prominent ones where organizations tend to understand that generative AI is going to provide the most value. But you’ve got to start with the why, and top-performing companies are moving from chasing AI use cases to using AI to fulfill the strategy. The core areas that we focus on are: customer experience. Making customers feel warmer about how they’re interacting with your organization is one of the biggest use cases. That could be a customer service bot, or any other way of helping people find information quicker and resolve issues faster. Efficiency and cost: process automation is really popular, and people seem to understand it pretty well in terms of coming up with ideas. These are the types of ideas where somebody is using muscle memory to get through the day: I do this for two hours a day, it’s 50 button clicks, it’s repetitive. Those are often areas where we can provide AI assistance to turn decisions that are routine for our brain into more of an agentic approach. Revenue growth is another big area. Why would we do this? We want to grow revenue, and sometimes that might just be having better insights so we can make better decisions, but being able to automate the sales process or surface upsell opportunities to users is another good use case. And then risk reduction. I’ve talked with companies that say, we don’t really have a problem with compliance, we’ve only been sued two or three times in the last five years. Well, maybe we can make that zero. And what did those lawsuits cost you? Was there a settlement? Were there legal fees? Risk is a thing, and it happens to a lot of organizations. So sometimes it’s the things that aren’t going to happen to us
Those are important for us to capture and measure as well. Aligning brainstorming sessions with these big four outcomes tends to generate a lot of ideas. In manufacturing, we see AI agents performing all of these things readily. I've given you five examples on this screen of ways Concurrency has built agents to help improve decision making with data insights, drive revenue for the business, reduce cost and error in operations, and increase competitive advantage. I'll dive a little more deeply into these. Sales and quoting: auto-quoting is something we've done many times, and we can cut response times for an email order or quote from a couple of hours down to minutes, even sub-minute. A driver for that, especially, is high turnover in sales roles. Supply chain optimization comes up quite frequently; a demand forecasting agent can help reduce inventory, and we've done some really significant demand forecasting projects that have resulted in up to $40 million a year in savings. Procurement and AP matching: anytime somebody is working in an Excel spreadsheet doing VLOOKUPs and trying to match data from a bunch of different systems, those are opportunities for us to help out with AI. Then on the vision side, quality: QA inspection in manufacturing comes up quite a bit, and AI detection and classification is accelerating and becoming much more productive. And finally, the one we've done quite a few of lately is virtual support agents, which seems to be a place where a lot of people start, for a couple of reasons.
One, there's a lot of documentation that supports it, and two, spinning up these agents is pretty easy to do. So let me give you a bit more of a visual for some of these ideas. Automated quoting is basically taking an email that might be in the language of your business. This email has a bunch of metal parameters for somebody selling, say, sheet metal and rolled metal, and you and I probably wouldn't understand what all the different notation means, but AI can be trained on it very easily and help figure it out. A lot of times with these quoting problems, it's figuring out what the special item is for that customer and what the special pricing is, and we can make that up to 80% faster or even more, so it saves hours and hours. I've got a question here I'll take a look at: "Does relying on agents for customer support reduce customer confidence? People do like to speak to real people." That is absolutely true in a lot of cases, but I would actually challenge the premise that people like to speak to real people on a frequent basis. That's closer to a true statement for my generation, but my children's generation and their children's generation don't; they're used to speaking through chatbots, and I think you're going to see a higher rate of adoption in that area. I'd also say that people like to get their answers really quickly, and if the alternative means waiting two or three minutes as a frustrated user for what could be a simple answer, the user experience is likely to be better with a customer support agent, one that escalates to a real person very quickly when you do sentiment analysis. Great question, I love that. Back to the other real-world example, purchase order matching. I talked about this whole idea of
streamlining that: get the data, put it in Excel, get the other data, put it into Excel, do the VLOOKUPs. We can get really, really quick at this with multimodal processing of things like PDF documents. I don't have to copy and paste a PO or go get PO data from a different system; I can pull it straight out of an email attachment somebody sent me and then compare field by field to make sure everything matches. Anything abnormal we can flag for human review, and that's what we typically do: start with a human-in-the-loop approach and let the users coach the AI agents we're building on whether they're doing the right things. Once we get comfortable with it, we can automate it end to end. To surface these opportunities, bringing diverse teams together is one of the more important things. So now we're ready for the brainstorming session. Let's start getting the use cases on the table and bring a very diverse set of users. Don't just look at leaders. I mean, you should include leaders, but don't always just bring the head of sales, the head of customer service, and the head of operations together to brainstorm. Bring the people who are really doing the work, and bring lots of them. They're going to feed off each other and bring insights to the organization that a lot of the time managers and leaders don't really have. I see a lot of friction when the leaders of an organization are the ones determining the priorities, and the people in the trenches on a day-to-day basis say, "That's not going to help me. I don't know why you're focusing on that." Bring them to the table early and you're going to get buy-in.
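That field-by-field comparison step can be sketched in a few lines. This is a minimal illustration with made-up field names; in practice the PO and invoice values would come out of multimodal PDF extraction rather than hard-coded dictionaries:

```python
# Hypothetical sketch: compare an extracted purchase order against an
# invoice field by field, flagging mismatches for human review.

def match_po_to_invoice(po: dict, invoice: dict, fields: list) -> list:
    """Return human-readable discrepancies; an empty list means the documents agree."""
    flags = []
    for field in fields:
        po_val, inv_val = po.get(field), invoice.get(field)
        if po_val != inv_val:
            flags.append(f"{field}: PO={po_val!r} vs invoice={inv_val!r}")
    return flags

po = {"po_number": "PO-1042", "sku": "SHT-304-16GA", "qty": 500, "unit_price": 12.40}
invoice = {"po_number": "PO-1042", "sku": "SHT-304-16GA", "qty": 500, "unit_price": 12.65}

flags = match_po_to_invoice(po, invoice, ["po_number", "sku", "qty", "unit_price"])
if flags:
    print("Route to human review:", flags)   # the human-in-the-loop step
else:
    print("All fields match; candidate for auto-approval")
```

The human-in-the-loop pattern the talk describes would sit on top of this: any non-empty flag list goes to a reviewer, and only once the flag rate is acceptably low does the approval become automatic.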
Nothing is going to derail your AI initiatives faster than nobody wanting to use the thing you built. Then facilitate a strategic approach to opportunity mapping, and do it in a way that brings everybody together. We have a way of running a couple of different workshops that starts with a COE kickoff, bringing this strategic team together. What we're really trying to do is set the stage and make sure we get things like that North Star defined before we dive into each of the functional domains. In the second workshop, we're trying to capture the bigger ideas and build an idea registry. Once we have a set of ideas, we can do deep-dive workshops and understand: if we were going to pursue this opportunity, what would that look like? What data are we going to need? What's the tech we would bring into it? And how will we actually measure the outcomes? A good example: our friends over at Fox World Travel had an organization-wide brainstorming session, and it led to the creation of Colby. If you check out their blogs, it's a multifunctional customer-facing agent, and it doesn't just do the question-and-answer aspect. It generates charts, it's very knowledgeable and insightful, and it has provided a lot of great feedback to the organization from the people using it. So here's what we would typically do coming out of your workshops: it's common to have tangible output like an idea registry, where we're really just listing out each of the concepts that came out of the sessions, assigning each a
Category, maybe a technology stack that maps closely to each one. We're really just trying to capture the different ideas and build a backlog of things we can look at and pursue over time. Even if we categorize something as really high effort today, I mentioned before that these tools are shifting into commodity pretty rapidly; we can reassess six months down the road and say, oh, that used to be really hard, now it's off the shelf, maybe we should start looking at it. And then, not every idea is doable or good, so use a simplified framework to help you prioritize the different ideas by putting them into a quadrant. I've used a couple of different methodologies, but a BXT lens would be a good one to start with; Microsoft recommends this. It examines, first, the business value: what are you going to get out of this, and what's the degree of strategic business impact? And then the technology feasibility: can we actually do this? That could be based on skills, or on data readiness. You plot them out in this matrix, and they're going to fall into one of four quadrants. Research: high value, low degree of executional fit, meaning it's going to be kind of hard, so maybe you invest there. High value, high degree of executional fit: that's an accelerator, and you definitely want to pursue those opportunities out of the gate. The other thing to look for is quick-win opportunities. With one of the customers I'm working with, we found a very quick win; it took us six to eight weeks to get a pilot built out, and it demonstrated almost instantaneous value to the organization.
Remember how I mentioned that AI isn't intended to replace people? In this particular case, and I've worked with several organizations like this, they have retirement cliffs coming up: an aging workforce whose members are the subject matter experts in a particular area. If we can build some sort of agent that takes over 75, 80, 90% of that individual's workload, or simplifies what that individual was doing, then maybe when that individual retires we don't have to replace them. This company I was working with had that exact scenario. We took a quick win, and he was able to take it to the executive team and the board and demonstrate that AI can make very impactful business sense right out of the gate, in six to eight weeks. That kind of win opens up the floodgates for the executive team to want to pursue more opportunities. Demonstrating value fast is key to ensuring that the leaders of the organization can see the value and want to invest more in it. So let's get into it; I think everybody is asking, how do we actually do this? Let's imagine you have three AI ideas. First, the sales forecasting agent we wanted to use to predict demand. We looked at how much value that would provide to the organization and calculated that it would generate a 20% uplift in sales opportunities, and then we looked at what needs to happen to make it work. While there wasn't something out of the box we could purchase, there were a few tools we could stitch together to automate this, so we felt there was fairly decent certainty that we could do it. That jumped up into the accelerate quadrant. And, let me use my pen, there we go. All right, second: an employee help desk copilot.
Those are becoming easier and easier; I would actually force this one over here. It's really, really easy to put together a help desk copilot, especially if you have documentation, but maybe this organization didn't have a lot of standardized documents, or they weren't all in the same place. And the third one was a procurement optimizer; auto-negotiating with suppliers was the idea that came up. There's a lot of value if we can auto-negotiate with our suppliers, for sure, but optimization projects with machine learning are really difficult. Typically for machine learning projects you need a robust data set that goes back two to three years at the kind of intervals you need to make the decisions, and most organizations don't capture data at that level. So here you can see that the degree of executional fit was pretty low. We might want to experiment with it to understand the concepts a little better, but it's certainly not something we're going to pursue very quickly. Keep it in mind, but don't touch it for now if you don't have to. So after we prioritize these, the next step is ensuring that each selected use case has some sort of executive story to it. How does this link back to the strategic objectives of the organization, and who's going to benefit? For the executive story, we can actually generate a prompt that will help us create it. How does this align to our North Star? Make sure you hook that into the messaging to the leaders who will decide whether to invest. The strategic context: what is the pain this is intended to solve, and how does the outcome actually solve it? And what are the KPIs we're going to measure that will support this as an initiative and give it credibility? We've done our due diligence.
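The value-versus-feasibility plot described above can be approximated with a simple scoring rule. This is a hypothetical sketch: the 1 to 10 scales, example scores, and quadrant labels are illustrative, not an official BXT tool:

```python
# Hypothetical sketch: place ideas into quadrants by business value (B)
# and technology feasibility / executional fit (X), each scored 1-10.

def quadrant(value: int, feasibility: int, midpoint: int = 5) -> str:
    """Classify an idea into one of four BXT-style quadrants."""
    if value > midpoint:
        return "accelerate" if feasibility > midpoint else "research/invest"
    return "quick win" if feasibility > midpoint else "deprioritize"

# Assumed scores roughly following the three ideas from the talk.
ideas = {
    "sales forecasting agent": (8, 7),  # stitch existing tools together
    "help desk copilot":       (5, 9),  # easy build, but docs not standardized
    "procurement optimizer":   (9, 2),  # needs 2-3 years of interval data
}

for name, (b, x) in ideas.items():
    print(f"{name:24s} -> {quadrant(b, x)}")
```

The midpoint split is the crudest possible version of the matrix; the point is only that a shared scoring rubric makes the workshop conversation concrete.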
We know the data is there, and the technology is capable of doing X, Y, and Z. And in any area where we know there's risk, ambiguity, or decisions we won't be able to make until we get into the weeds, call that out, so that if things do get derailed at some point you can say: we knew this could happen, we had a plan, and right now we're going to set this aside. Then make it clear what you're asking from the organization: I need licenses for this, I need a pilot for this. Make it very clear what you're asking for. I see I've got another question: "Without getting too specific, how do you folks charge for helping us incorporate agentic AI into our workflows?" Mark, that's a great question. I don't want to get completely specific, but for us a lot of it starts with these ideation sessions and helping make sure we align the right agentic AI with your workflows. Sometimes it's just us coaching the organization on what they should be doing or how they can improve, and that might be for a small fee; sometimes I just tell you in a 20-minute conversation, you should try this. Sometimes we build custom solutions as well, and the cost could be anywhere from one dollar sign to many dollar signs. Great question; when we get to that point, why don't you set some time up and we can talk more about your use case. All right. So even a great idea needs a compelling business case to get funded and supported. You've got to translate this into ROI and cost-benefit: what are we getting out of it? Define the objectives and the KPI metrics, like cost savings or revenue increase. Get as specific as you can, and lay those calculations out. And make sure you can establish a baseline; I mentioned this before.
You're not always going to be able to do that, but if you know your sales conversion rate today is 70%, that's your baseline, and everything you measure after AI starts with "we were here, we got here." Even if you aren't able to capture it right away, start collecting the data so you can establish a baseline before your project rolls out, if at all possible. And set some realistic time frames: we expect this to be ready for production in X. We recommend a pilot phase, where we might do two or three users out of a group of 30 before rolling it out to all 30. So make sure you call that out: we're going to pilot this, we're going to see what the value is, and it will be six weeks after the pilot before we get into production. So we're not going to capture any real ROI in 2025; we're looking at Q1 of 2026 before we can make a real assessment. Frame it up that way, and then craft a data-driven story. I like to structure the business case with the problem and the opportunity first, then explain what the solution, the AI use case, will do for the organization, how that benefits it, and which KPIs we'll use to measure the ROI. Sorry, we jumped ahead for a second; let me get back to the right slide. Then the investment that will be necessary for the organization to pursue this, which might be licenses or some professional services. And lastly, you want to project your outcomes and next steps: I predict a 14-month payback period for this type of initiative. A good example: it currently takes five days to turn around a customer quote. With an AI system we would target one day, and our projection is that would allow us to win 15% more deals.
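A payback claim like "14 months" can be sanity-checked with very simple arithmetic. All inputs in this sketch are hypothetical placeholders, not figures from the talk:

```python
# Hypothetical payback-period sketch: months until cumulative net benefit
# covers the upfront investment.

def payback_months(upfront: float, monthly_benefit: float, monthly_run_cost: float) -> float:
    net_monthly = monthly_benefit - monthly_run_cost
    if net_monthly <= 0:
        raise ValueError("initiative never pays back at these rates")
    return upfront / net_monthly

# Assumed example: $120K build plus licenses, $10K/month gross benefit,
# $1.4K/month in ongoing licensing and run costs.
months = payback_months(120_000, 10_000, 1_400)
print(f"Payback in about {months:.1f} months")
```

Putting the calculation in this form also forces the business case to name its run costs, which executives will ask about anyway.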
So frame it up in those kinds of stories; that's how it's going to get traction. I've got a few examples, but I want to rush through these a bit; we've got about eight minutes left and I've got 20 more slides to go. I'm just kidding, it's close to 20, but not quite. Here's an example breakdown of how we'd calculate ROI. I've got invoice processing, and I do 22,000 invoices a month at 15 minutes per invoice. I split those up and said of the 22,000, only a little more than half, 60%, or 12,000, are actually a fit for this kind of automation; the rest aren't. At a quarter hour apiece, that comes out to about 5,500 hours, and of those I can really only recapture about 3,000, because some percentage of that just isn't going to happen, and that equates to 2.75 full-time employees. Hopefully you can calculate your loaded cost for a full-time employee; at the end of the day, the labor savings would be $288K. Now, sometimes we're not going to fire the person; we'll lose that through attrition as people retire or find new jobs. But that's the kind of math you're going to walk through for your executives, showing each step of the way how you made those calculations. I want to get to some of the technology here too, because I think it's important to have a good technology footprint. No expedition is going to succeed without the right gear and the right guide.
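The invoice walk-through above can be written out step by step. Everything below is an illustrative re-computation with assumed inputs (the recapture rate, hours per FTE, and loaded cost in particular are placeholders), so the outputs intentionally won't match the slide's exact figures:

```python
# Hypothetical invoice-automation ROI walk-through with assumed inputs.

invoices_per_month = 12_000     # the automatable share of 22,000/month
minutes_per_invoice = 15
recapture_rate = 0.50           # assumed: only half the freed time is truly recaptured
hours_per_fte_year = 2_000      # assumed productive hours per full-time employee
loaded_cost_per_fte = 105_000   # assumed fully loaded annual cost

hours_per_month = invoices_per_month * minutes_per_invoice / 60
annual_hours = hours_per_month * 12
recaptured_hours = annual_hours * recapture_rate
fte_equivalent = recaptured_hours / hours_per_fte_year
labor_savings = fte_equivalent * loaded_cost_per_fte

print(f"{hours_per_month:,.0f} h/month -> {recaptured_hours:,.0f} h/yr recaptured "
      f"-> {fte_equivalent:.2f} FTE -> ${labor_savings:,.0f}/yr")
```

The structure, not the numbers, is what you walk executives through: volume, time per unit, a discounted recapture rate, and a loaded cost per FTE, each stated explicitly so every step can be challenged.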
So here at Concurrency we're really heavily aligned with the Microsoft platforms, and there's a good reason for that, which we'll show you over the next couple of slides. By using their platforms you get a head start: you typically have licensing that supports this, and most of the tools work together pretty cohesively, so you're not jumping between too many loosely connected platforms. Data is usually the foundation of this, and Microsoft has a unified data and analytics platform in Microsoft Fabric. If you aren't aware of it, it basically combines all the different data functions you would use in a robust data warehouse into a unified platform that lets you do the same complex things without jumping between different tools to get the work done. A good example: I'm going to load my data with some ETL tooling that lets me copy data from one source to another and maybe do some transformations, and then I want to use Power BI for the visualizations. You're no longer jumping between different tools to accomplish those goals; it's an end-to-end workspace. It supports data governance through Microsoft Purview, and it has copilots that let you get your work done faster, whether that's writing Python scripts or just asking questions about the data, like: what are the top five SKUs from last quarter in the Latin America operating unit? Then in Azure, we've got the simplified approach of Azure AI Foundry, which lets you do model selection and curation, even some of the new features for automated model selection, all inside a single tool in Azure. You can also experiment and build agents directly in AI Foundry.
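Under the hood, a natural-language question like the SKU one above boils down to a filter-and-aggregate. Here is a plain-Python sketch over made-up rows; in Fabric the copilot would more likely generate SQL or Spark against a lakehouse table:

```python
# Hypothetical sketch of the aggregation a copilot might generate for
# "top five SKUs from last quarter in the Latin America operating unit".
from collections import Counter

sales = [  # made-up rows: (operating_unit, quarter, sku, units_sold)
    ("LATAM", "2025Q3", "SKU-101", 420), ("LATAM", "2025Q3", "SKU-207", 390),
    ("LATAM", "2025Q3", "SKU-101", 180), ("LATAM", "2025Q3", "SKU-330", 75),
    ("EMEA",  "2025Q3", "SKU-101", 999), ("LATAM", "2025Q2", "SKU-207", 500),
    ("LATAM", "2025Q3", "SKU-415", 260), ("LATAM", "2025Q3", "SKU-512", 130),
]

totals = Counter()
for unit, quarter, sku, qty in sales:
    if unit == "LATAM" and quarter == "2025Q3":   # filter: unit + last quarter
        totals[sku] += qty

top_five = totals.most_common(5)
print(top_five)
```

The value of the copilot is that the user never sees this layer; they ask the question in business terms and get the ranked answer back.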
And if you find that helpful, you can move it into the Power Platform directly and work fairly fluently between them. Then on the low-code side, Microsoft Copilot Studio is a way to build agents in a low-code fashion. Sometimes it's just by writing prompts, but you get some control through drag-and-drop actions, so you can build out a workflow, even a technical workflow, and it all works together cohesively. I might build a component in a pro-code solution like AI Foundry, serve that up in Copilot Studio, incorporate it into Microsoft 365 Copilot, and have everything built cohesively. I'm going to pause here for a couple of seconds. "Is there going to be a recording of this session?" Absolutely. "Capacity creation is straightforward as the way to quantify opportunity created." I'm going to skip that question; I think I need to ask for a bit of clarification. And then Brandon: "There are so many places to start with building agents. Is it best to build agents for a complete task, or to build micro agents?" That's a fantastic question. The short answer, and I'd love to talk with you more about it, is that micro agents is the pattern we're heading toward. All right, getting close to wrapping this up. Quick wins, early milestones: leverage your low-hanging fruit. I like to start with low-code tools, deploying Copilot Studio agents until I hit a barrier or a threshold of complexity that Copilot Studio can't get over. Start small, start quick, build momentum, and demonstrate that you can get the ROI. Then have a long-term vision for what the platforms are going to do, plan for, like Brandon's question, a multi-agent world, and cultivate the culture of the organization so people are bought in and feel empowered to build with you as part of the enterprise.
So I promised we would get to a wrap-up: turning ideas into ROI, complimentary sessions. I would love to meet with you if you have more questions. I believe there's a link in the chat for you, and Paige is going to be providing recordings as well. I'd love for you to spend 30 minutes with somebody like myself or one of the other solution architects; we can help you understand what it would look like to prioritize your AI initiatives. If you have specific questions, I'm happy to take them. The link Paige just dropped in the chat goes directly to booking, so you can get a meeting on the calendar whenever works for you and for us. Last question, Brian: "We use software written by a third party which is not a Microsoft product, and many of our issues involve inefficiencies in these systems." Yes, we do work with people in your situation. One of the reasons I'm very appreciative of the Microsoft ecosystem is that they have embraced an open ecosystem. Azure AI Foundry isn't just Microsoft models or tooling; they've opened it up to other models and ecosystems, DeepSeek and others. Not everybody is playing nicely in the sandbox, but a lot of them are, and that's what I appreciate about Microsoft. Another good example of Microsoft's openness is ChatGPT Enterprise: I find a lot of organizations were early adopters of ChatGPT, and Microsoft Purview supports DLP policies in the ChatGPT Enterprise edition. So just because you might have picked some third-party tools doesn't mean we can't make it work in the Microsoft ecosystem. I'd love to hear more about that use case. Any other quick questions before we close? All right, well, thanks again for your time today. I really enjoyed the talk. I wish I could have gotten through all 48 slides, but maybe next time. Cheers.
There are security concerns here too: jailbreaks and indirect prompt injection. Because AI models interface with different data sources and different systems, you can get unexpected results in the model's responses, perhaps because a malicious actor has injected information that the model then relays. A model will operate off of that underlying data and those underlying systems, and unless you put safety mechanisms in place to keep the wrong information from getting in, you can again have unexpected, undesirable results. So with the new risks, you have new attack surfaces too: the way people interact with prompts, putting information into AI models that maybe they shouldn't, and then the responses coming out. As we discussed a moment ago, you need to protect the data inside those responses in much the same way you protect the underlying data. AI orchestration is where we combine AI agents, perhaps with each other, and make sure they're interfacing the right way and sharing information correctly. And then the underlying training data, the RAG data, and the models themselves need to be managed so that everything produces the results you would expect, results the users will end up trusting and that the organization can trust, so that you're not exposing things you don't want and you're getting reliable results from the models. The vectors this operates across are at the application and agent level, certainly across the cloud, and across the different data sources inside of there. Identity is very important here, and we'll talk about it in a moment: really treating agents as additional employees in a lot of ways, identifying them
as you would an employee, carrying that through, tracking it, and managing it accordingly all make a big impact on how you secure your underlying data and your AI estate. The zero trust security model evolved out of the last major shift in technology, the move to cloud. There were so many different attack surfaces and so many ways things could go sideways that zero trust really came out of asking, OK, how do we enable people to use this technology in a way we can feel comfortable with? You take the never trust, always verify approach: verify every interaction explicitly, use least-privilege access and controls to limit what the systems and ultimately your users are interfacing with, and always assume breach, so you continually monitor the environment to find where things are going wrong or having unexpected consequences. Then you combine that zero trust model with the responsible AI principles Microsoft has provided, where it's not just about security: how do you ensure reliability and safety? How do you ensure fairness and inclusiveness in the model and what it's producing? How do you have transparency, so you know where the data is coming from and how the model created its inferences? And ultimately, accountability: at the end of the day, people need to be accountable for the AI models, but you also need to be able to track what the AI model is doing on its own. Things like the responsible AI framework and dashboard can assist with this beyond security. As with the cloud, AI operates best, and creating your AI security governance policy works best, if you understand the shared responsibility model.
If you're using bought AI models, SaaS offerings like Copilot, then your usage of that is really the extent of your responsibility to manage. We're going to spend a lot of time today talking about that usage layer, and a little bit about the application layer. If you're creating your own model using Azure AI, your responsibility starts to extend into the model design and the underlying infrastructure. And if you're building your own from scratch, there are other pieces you're responsible for as well. Thankfully, if you're using Copilot or Azure AI, there are certain responsibilities Microsoft undertakes, which lets you focus on user behavior, identity and access management, and data governance, and, if you're creating your own model, on the architecture and design of that model and the systems and data it interfaces with. So, to secure and govern all of that, really the best place to start is by crafting an AI policy. How you do that, I believe, is best informed by the things you can control underneath it: what are the areas I need to be thinking about? We're going to spend much of today focusing on the how, the areas where you would be enforcing this, and use that to inform how you create the policy, then educate your employees, then enforce the policy across the different areas. Those areas are identity management, making sure you understand who the users are but also what the AI models are and what they're doing, and being able to track all their interactions; plus putting controls on applications and access to the different models, so you control what an AI model can interface with, which AI models your users or employees can interface with, and what data can go into them.
In addition, you need to protect your data estate. This is probably one of the most fundamental pieces of securing your AI estate: you need a strong data protection mechanism if you're exposing sensitive data through an AI model. Tagging the information, using AI models that respect that tagging, and making sure the responses inherit that tagging is very, very useful for providing protection end to end. In addition to those upfront enforcement pieces, you then have ongoing monitoring and governance. There will be new changes to the technology, new interactions, and people will take these models in different directions; this is how you prevent things from being used unexpectedly over time. And finally, the last piece is responsible AI model management. This matters most when you're creating your own AI models, the use cases where maybe I've got a purpose-built autonomous model, or I've done something beyond what Copilot or Copilot Studio would allow, and there are principles I need to be thinking about there. So let's start with identity management. I'm splitting this between managing your users or employees and managing AI agents, because both of them need to be treated as individual entities. I create an identity for each of them in Entra ID, which allows me to provide certain security controls around them, including who can interface with the AI agent, controlling the permissions for that, and also controlling the permissions for what systems and data that AI agent can access, because I understand what that agent is and who it is that's providing
On the user side, we use conditional access rules to control which apps and AI agents a user can interface with, given their role in the organization and the type of sensitive information they're going to be dealing with. All of that is controlled through Entra, frequently in concert with Intune. In terms of the Entra and Intune architecture behind this, we really recommend that you're at the cloud-first level of Microsoft's five-step architecture progression. At that cloud-first level we're leveraging the cloud technologies, so we can use the security features almost fully within the cloud, in terms of what Entra and Intune can provide, to control access and to monitor what AI agents and users are interfacing with. Those upper levels of the Intune architecture are really important, and achieving them in your organization is a prerequisite for the other things we're going to talk about today. It's certainly something we can have conversations about, where you are and how you might get there, but I do want to state that it's an important prerequisite for doing everything else. The next level is application access controls. Now we know who the users are and who the agents are, and we've put controls in place using conditional access, maybe MFA in certain scenarios, judging the risk of the user, the device, and where they are at any point in time. The next question is what application access controls we want to enforce for the user. We want to restrict access to only trusted apps and, for our purposes today, trusted AI agents; there are ways to control within your environment the ability to use other outside agents.
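The talk describes this configuration in Entra rather than showing it. As a rough sketch only, here is how a conditional-access-style policy scoping one AI agent's app registration to a single security group might be shaped. The field names follow the Microsoft Graph conditional access schema as I understand it, and the IDs are hypothetical; verify against current Graph documentation before relying on this, and note the actual authenticated POST to the Graph policies endpoint is deliberately omitted.

```python
# Sketch, not from the talk: shape a conditional access policy that
# restricts an AI agent's app registration to one security group and
# requires MFA. Field names follow the Microsoft Graph conditional
# access schema; the authenticated POST to
# https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
# is omitted here.

def build_agent_access_policy(policy_name, agent_app_id, allowed_group_id):
    """Build a conditional-access-style policy payload for an AI agent app."""
    return {
        "displayName": policy_name,
        # start in report-only mode, in the spirit of the stepwise
        # rollout discussed later in the talk
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "applications": {"includeApplications": [agent_app_id]},
            "users": {"includeGroups": [allowed_group_id]},
        },
        "grantControls": {
            "operator": "OR",
            "builtInControls": ["mfa"],
        },
    }

policy = build_agent_access_policy(
    "Finance AI agent - finance group only",
    "00000000-aaaa-bbbb-cccc-000000000001",  # hypothetical app id
    "00000000-aaaa-bbbb-cccc-000000000002",  # hypothetical group id
)
```

In practice you would create a policy like this per agent, paralleling how the speaker treats each agent as its own identity.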
Now, this will be an ongoing battle, as it was with cloud technologies, but there are ways to at least limit it. Even if you're allowing access, perhaps on mobile devices where you aren't fully managing what they can and can't do, you can at least block your data from going over to untrusted apps or agents; we'll show a small demonstration of that in a moment. You also want to restrict access to AI agents that are beyond the user's scope. Here again is where identity becomes important: you may have a trusted agent that a particular user shouldn't be using. Perhaps it's an agent designed to work against sensitive financial data for the finance department's purposes; I don't need somebody in production or in sales accessing that, because it may not be appropriate for them to see. So aside from limiting the organization to trusted apps and agents, we also want to control which agents each user can access, since an agent may surface things beyond what that user should be seeing. From the AI agent side, as I'm constructing these, approaching an agent in some ways as I would a person, I'm going to restrict access to only trusted and permissioned users, the other side of what we were just talking about. I also want to restrict the agent to only trusted systems, making sure I'm not pulling data from systems I don't trust. Untrusted sources are the things that can lead to jailbreaks, indirect prompt injection, and ultimately even hallucinations, as you feed the model data and system inputs that shouldn't be trusted the way the rest is. And always maintain identity; it helps establish who and what the agent is interfacing with. This is something that will have to be looked at continually, and it really reinforces the importance of ongoing monitoring.
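The two directions of control just described, which users may call an agent and which source systems an agent may read, can be pictured as a tiny permission matrix. This is purely illustrative; real enforcement would live in Entra ID and conditional access, not in application code, and the agent, group, and system names are invented.

```python
# Illustrative sketch only: a toy permission check capturing the two
# directions described in the talk -- which users may invoke an agent,
# and which source systems an agent may read. Real enforcement would
# live in Entra ID / conditional access, not hand-written code; all
# names here are hypothetical.

AGENT_ALLOWED_GROUPS = {
    "finance-analysis-agent": {"finance"},      # finance-data agent
    "hr-helpdesk-agent": {"hr", "managers"},
}
AGENT_TRUSTED_SOURCES = {
    "finance-analysis-agent": {"erp", "finance-sharepoint"},
    "hr-helpdesk-agent": {"hris"},
}

def may_invoke(agent: str, user_groups: set[str]) -> bool:
    """A user may invoke an agent only if they share a permitted group."""
    return bool(AGENT_ALLOWED_GROUPS.get(agent, set()) & user_groups)

def may_read(agent: str, source: str) -> bool:
    """An agent may only pull data from its explicitly trusted systems."""
    return source in AGENT_TRUSTED_SOURCES.get(agent, set())
```

Echoing the example in the talk, `may_invoke("finance-analysis-agent", {"sales"})` comes back `False`: someone in sales cannot reach the finance agent even though the agent itself is trusted.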
In terms of restricting access to trusted AI apps only, Microsoft has this in Defender in a couple of places. The cloud discovery piece lets you look specifically at AI agents; it provides a risk score, and you can then decide whether to block or permit use of that AI agent. Anywhere you're managing the device or controlling that access, the block is then enforced. If you're not blocking outright, perhaps on a personal mobile device where you're taking the approach of trusting these apps but not those without blocking them, then preventing data from entering untrusted AI becomes very important. Here you're seeing a quick demonstration: somebody copies data from a sensitive document, and you can see it's marked as sensitive, but when they go to paste it into the public ChatGPT, they are blocked from doing so. Purview is very important in how this happens, in concert with Intune in some ways, so that I can't copy and paste trusted company data outside of trusted applications. So there are two forms of protection. One is blocking outright. The other covers anything we weren't blocking, or haven't even caught yet, maybe because it's new to market, where I want to make sure I'm preventing data exfiltration into untrusted AI, and Purview is very important for that. This brings us to data protection. Ultimately, within the organization you should establish and teach a data protection standard. What is considered confidential data? What is company-only data? What is public data that can be shared? It's important not only that you have that labeled and enforced, but first of all that people understand it; they're going to have to make judgment calls.
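The paste-blocking demo boils down to a single decision: before content leaves for an AI app, check its sensitivity label and whether the destination is trusted. A minimal conceptual sketch follows; in the demo this decision is made by Purview DLP with Intune, not by hand-written code, and the labels and app names here are hypothetical.

```python
# Conceptual sketch of the paste-blocking demo: before content flows to
# an AI app, check its sensitivity label against a trusted-app list.
# Purview DLP / Intune make this decision in the real demo; the label
# and app names below are hypothetical.

TRUSTED_AI_APPS = {"copilot"}
BLOCKED_LABELS = {"confidential", "highly-confidential"}

def allow_egress(content_label: str, destination_app: str) -> bool:
    """Permit content to flow to an AI app unless it is labeled
    sensitive and the destination is not on the trusted list."""
    if destination_app in TRUSTED_AI_APPS:
        return True
    return content_label not in BLOCKED_LABELS

# Pasting a "confidential" snippet into public ChatGPT is refused:
# allow_egress("confidential", "chatgpt-public") -> False
```

The same content pasted into a trusted app passes, which mirrors the "trust these apps, but not those" posture the speaker describes for partially managed devices.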
You need to make sure they understand what's expected and what policies have been established. Once you've established the data protection standard, you can work on restricting sensitive data to only those who should have access. Again, if we have our identities controlled the right way, we're able to do that, limit access by security groups, and even enforce encryption on the data if you choose to. But if nothing else, I've at least labeled it and can then allow and disallow certain activities based on that. From there, I can use data loss prevention, which is carried throughout the Microsoft platform and into other areas, respecting the sensitivity labeling that's been applied and enforcing it on your behalf. It's one thing to teach people; it's another to prevent mistakes, which are frequently the most common form of data exfiltration. Large-scale exfiltration tends to be more deliberate, but if you're not careful, people can share information naively. That reinforces the importance of educating at the front end and having some form of DLP to prevent honest mistakes, as well as to help limit those who are intentionally trying to do harm. On the AI agent side this is really important. We've had DLP on the user side for a while, in email and then some of the other cloud apps, but now it becomes equally important for the AI agent, because the agent is taking information, surfacing it to anybody who is allowed to interface with it, and producing output from it, and that output also needs to carry those protections forward. So you need to ensure the agent only shows the data that any user interacting with it should have access to.
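One simple way to picture "output carries the protections forward" is a most-restrictive-wins rule: the agent's answer is stamped with the strictest label among the source documents it drew on. This is an assumption for illustration, not a documented Purview algorithm, and the label names and default are invented.

```python
# Sketch (assumption, not a documented Purview algorithm): make an
# agent's answer inherit protection by stamping it with the most
# restrictive label among its source documents. Label names and the
# default for unlabeled data are hypothetical.

LABEL_RANK = {"public": 0, "general": 1, "confidential": 2,
              "highly-confidential": 3}

def response_label(source_labels: list[str]) -> str:
    """Output inherits the strictest classification of its inputs."""
    if not source_labels:
        return "general"  # hypothetical default for unlabeled grounding data
    return max(source_labels, key=lambda lab: LABEL_RANK[lab])

# Mixing a public FAQ with a confidential finance doc yields a
# confidential response:
# response_label(["public", "confidential"]) -> "confidential"
```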
So again, identity matters here: being able to respect identities and the security permissions each individual should have. You also want to protect the response output in alignment with your data protection standards, so that when the agent has made an inference on sensitive data, that same data classification is carried forward into the output. In terms of data protection, you also want to make sure the agent isn't making unwanted changes to underlying data if you've allowed it some autonomy. That's fairly easy to control in model building and construction, but it's another consideration when we're talking about data protection. For all of this, Microsoft Purview, particularly if you're using Microsoft platforms, is very important. It provides information tagging and sensitivity labels, and has them respected throughout the Microsoft platforms: in Microsoft 365, in Copilot, and in anything you're constructing with Copilot Studio. It also allows for data governance, so that over time I can see where data is being utilized, where it's flowing, which systems and which agents are able to access it, and control that. It also helps you remain compliant: with those same sensitivity labels you can label information that's subject to privacy laws, financial regulations, HIPAA, and so on, and control the flow of that data in accordance with those regulations as well. A little background on Purview licensing, probably the only licensing we'll talk about today, because this can get confusing. Purview for unstructured data, meaning documents such as Word, Excel, PDFs, and PowerPoints, is the Purview included with Microsoft 365 E3 and then ultimately E5 licensing, with varying capabilities.
Then for structured data, data you might have in cloud databases where you're doing row-level security and data cataloging, that tends to be more of a pay-as-you-go model in Azure. Just be aware of that; ultimately it all comes together and can be managed using the same dashboards and controls. Here's a quick example of sensitive information. That same Project Obsidian document we used in the earlier demo was used to surface a response in an AI session, but notice that it's actually flagged: this is sensitive information that cannot be seen outside the organization. It's respecting the data sensitivity labeling tagged on the data used to create the inference and carrying that forward to the responses automatically. If the user shouldn't be able to see this at all, they wouldn't; it wouldn't be used for them in that session. If it is something they can see, this provides the reminder that it should not be shared outside, and can also put protections on it as it's surfaced to those using the AI agents. In order to secure the environment, the recommended approach is to start with basic default labeling. You're not putting full enforcement or full encryption in place yet; you're just starting to understand the underlying data estate and getting labeling onto it. Then start using DLP for the more sensitively labeled content, ensuring it doesn't flow to places you don't want it to, and then move to auto-labeling, providing deeper context and perhaps more sophisticated labeling based on compliance needs or individual org needs. Then start using DLP for content that's not labeled as well, to ensure things aren't flowing where they shouldn't.
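To make the auto-labeling step concrete, here is a toy labeling pass that scans text for simple sensitive-info patterns and suggests a label. Real Purview auto-labeling uses Microsoft's built-in sensitive information types; these two regexes are crude stand-ins for illustration, not Microsoft's detectors.

```python
# Illustrative auto-labeling pass, in the spirit of the phased rollout
# described above: scan text for simple sensitive-info patterns and
# suggest a label. The regexes are crude stand-ins, not Purview's
# built-in sensitive information types.
import re

PATTERNS = {
    "credit-card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us-ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def suggest_label(text: str) -> str:
    """Suggest 'confidential' when any sensitive pattern is found,
    otherwise fall back to a general label."""
    for name, pattern in PATTERNS.items():
        if pattern.search(text):
            return "confidential"
    return "general"
```

A pipeline like this would run in suggestion or report-only mode first, which matches the speaker's advice to build confidence in the labels before turning on enforcement.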
You want to take a progression here, because if you turn on full encryption and full enforcement right away without really understanding the data estate, you can bring the flow of information in the organization to a halt. So there is a stepwise approach: make sure you're gradually understanding your data and providing increased levels of enforcement. As you move forward, you can start taking actions to improve the labeling, ensuring you have high confidence and high trust in the labels being applied, and then ultimately start enforcing encryption and protection beyond that, encrypting the underlying document so that wherever it's surfaced, it's encrypted going forward; even if it is shared, they can't access it. All the components of the Microsoft security portfolio work together here. We've talked about Purview, about Entra and Intune together, and a little about Defender. All of these work in concert to protect your underlying data and ensure the apps are operating in a trustworthy manner that you can rely on going forward. Here's an example of how these link together. You might use Defender for Cloud Apps to start with discovery of AI app use, and look at user interactions with those apps using Purview and the AI communication compliance reporting available there. By reviewing that, I can start deciding what I want to allow and disallow. You can block access to unsanctioned AI apps, limit access, and limit the data that can start flowing from there. Then, moving on to securing the underlying data and exposing it through the model, I can say, OK, I've established a comfortable level of security around my data, and then as I'm creating
AI apps that access that sensitive data, I can ensure the protection is enforced: that the data isn't sent into certain AI apps and is only sent to certain restricted AI apps. This is an ongoing governance battle, where you're going to continue to audit what's happening, continue to monitor it, and look for inappropriate behaviors and prompts. You'll find yourself diving deeper: OK, we've allowed this AI agent, it has this data, we trust the people interacting with it to a degree, but let's look at what people are doing in the prompts and say, maybe we don't want them asking that, and how do we refine things to prevent some of those prompts or the responses they produce? All of that uses the entire security platform. So, ongoing: you set up the prevention mechanisms at the front end to stop the behavior you don't want and the data flows that would be detrimental to your organization, but this is going to be an ongoing monitoring and governance activity, including on identity. You can use Entra ID Governance to monitor security groups. If I'm trusting identity to provide a layer of protection here, I need to make sure people aren't added to security groups they shouldn't be in, and that we're not leaving identities sitting out there that could be hacked and utilized for nefarious purposes. You want to continue to monitor that identity plane. You also want to keep looking at the app and access level, where AI Hub in Purview can monitor the AI usage that's occurring: I've trusted these apps, but do I trust the use of those apps too? Defender XDR and its associated risk assessments can also be very helpful in this realm. And then ultimately I may have my data sensitivity
standard set, and my labeling and enforcement at the degree I want, but you need to ensure that continues over time. That's where the audit and communication compliance features of Purview come in, as well as eDiscovery. What has happened? What have people been doing with this? That may end up having to be exposed in legal contexts too, so how am I able to surface it easily without causing undue difficulty for the organization? Here's what AI Hub looks like. You can see the prompts and the risk assessment on the prompts, and you can drill in further into what some of those prompts were and their nature. It gives you an overview of how people are using AI within your environment, even in trusted AI apps, and you also have views into the untrusted ones. In addition, as I drill into the communication compliance realm, I can see the actual prompts people are entering. Perhaps these are things I want to educate users on first, but I might also provide some enforcement: I might change how I'm labeling certain data, or restrict access to certain agents if I'm finding that certain people are asking questions I hadn't anticipated. This is how you provide ongoing management of the underlying estate and continue to control what's happening with your data. Defender also provides alerts here. It will flag certain interactions with Copilot agents and others I'm monitoring and say, hey, this user is trying to dig up finance-related files that maybe they shouldn't be, and that's odd behavior. It lets you see where people are trying to attack or exfiltrate data, though these could again be innocent enough requests.
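The Defender example, a sales user suddenly digging into finance files, is at heart a cross-department anomaly check over access events. A toy version follows; real alerting comes from Defender and the Purview audit log, and the event shape and names here are invented for illustration.

```python
# Toy monitoring sketch echoing the Defender example above: flag events
# where a user touches a department's data outside their own role.
# Real alerting comes from Defender / Purview audit logs; the event
# shape and names are invented for illustration.

USER_DEPT = {"alice": "finance", "bob": "sales"}

def flag_cross_dept_access(events):
    """Return events where the actor's department differs from the
    department that owns the resource they touched."""
    return [e for e in events
            if USER_DEPT.get(e["user"]) != e["resource_dept"]]

events = [
    {"user": "alice", "resource_dept": "finance"},   # normal
    {"user": "bob",   "resource_dept": "finance"},   # odd: sales -> finance
]
# flag_cross_dept_access(events) -> [{"user": "bob", "resource_dept": "finance"}]
```

As the speaker notes, a flagged event might still be innocent; the point of surfacing it is to decide between education and enforcement.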
The answer may just be education, but there may also be some enforcement you want to undertake. All of this lets you see what's happening in the environment, allowing the organization to trust the use of the technology. So that's how we manage AI agents at the Copilot and Copilot Studio level. If you're building custom AI and ML models, you'll want to take further action and further review of what the model is, because you're responsible to a greater degree for what that model produces. This is not just privacy and security; based on the responsible AI framework, it should also incorporate reliability and safety. Can I trust the responses? Is it producing responses that incite negative behavior I don't want? Does the model have bias? On fairness and inclusiveness: are we ensuring the model has been trained in such a way that we've avoided bias as much as possible? There are ways to measure and detect that, and to retrain around it. Do I have enough transparency as to where the data is coming from, so that my users, and the organization as a whole, can trust the results and what they're being told by AI outputs in their variety of forms? And ultimately, what's the accountability? Do I have measures in place to ensure people are ultimately responsible for what's happening inside the environment, able to take action, and bearing responsibility for what the model produces? To help ensure you can trust the model and what it's producing, Microsoft's Azure AI Content Safety, which includes prompt shields and a host of other tools and APIs, can interface with your model and ensure some very obvious things aren't happening, and you can also create your own blocklists inside of it.
Say I really don't want to permit certain kinds of questions: you can stop those there. The broader Responsible AI dashboard from Microsoft also has means to assist with this. A couple of forms this will take. First is model debugging: if I'm crafting my own model, I want to examine, on an ongoing basis, the errors coming off of it. Where is the model wrong, and how sensitive is it to certain parameters and certain combinations of parameters? For example, if I've trained primarily on US data, the model may be wrong in a European context while being right for the US; you want to start detecting those kinds of things. There can be a whole host of different biases in there, and there is also fairness assessment tooling to help detect other forms of bias. Going forward from there, I want to be able to interpret the model, make sure I understand how it's coming up with the responses and inferences it is, and test for cases where two runs produce different results off similar data. Why is that happening, and how do I tune it so I get more reliable results ongoing? Then continue to explore the underlying data set, especially if it's a more dynamic one, to ensure there isn't new data in there driving unexpected or unwanted behaviors in the AI model, and take action to mitigate fairness issues. There's tooling that can help with that, and with providing enhancements to incoming data so it can be trusted. That's going to be an ongoing process you use to debug the model. Sounds like a lot.
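The US-versus-Europe example above is at bottom a per-segment error-rate comparison. Here is the core arithmetic as a sketch; the Responsible AI dashboard does far richer analysis than this, and the segment names and numbers are invented.

```python
# Sketch of one fairness/debugging check described above: compare error
# rates across data segments (e.g. US vs. Europe) and measure the gap.
# The Responsible AI dashboard does much richer analysis; this shows
# only the core arithmetic, with invented data.

def error_rate_by_segment(records):
    """records: list of (segment, correct: bool) pairs.
    Returns a mapping of segment -> error rate."""
    totals, errors = {}, {}
    for segment, correct in records:
        totals[segment] = totals.get(segment, 0) + 1
        if not correct:
            errors[segment] = errors.get(segment, 0) + 1
    return {s: errors.get(s, 0) / totals[s] for s in totals}

def disparity(rates):
    """Gap between the worst- and best-served segments."""
    return max(rates.values()) - min(rates.values())

records = [("us", True)] * 9 + [("us", False)] + \
          [("eu", True)] * 6 + [("eu", False)] * 4
rates = error_rate_by_segment(records)
# rates -> {"us": 0.1, "eu": 0.4}; disparity(rates) is roughly 0.3
```

A large disparity is the trigger to retrain or rebalance, which is the mitigation loop the speaker describes.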
There's a fair amount there, but there are tools, again with the Responsible AI dashboard and other things, that can help with the process. In addition, in understanding the data, you want to understand the causal inference from it too: what are the features within my data set driving in terms of real-world outcomes, and what are the responses ultimately driving in terms of what humans see and do with them? Being able to do some analysis of that will ultimately help create the best, most trusted AI model, which then gets used more and results in greater benefits to the organization. So, in summary: begin with identity management. Manage AI agents like humans. Verify all user requests and use least-privilege access, particularly for the actions AI agents are undertaking in your systems. You don't want standing admin access there; you want it invoked as needed, which protects against some of the broader cyber threats on AI systems. Then secure based on risk: make sure users are who they say they are, that AI agents are accessing data and systems the proper way, and that you have the right permission set for different scenarios. Somebody on a mobile device that possibly holds other data might be enforced differently than somebody on a trusted, managed compute device owned by the organization. And log everything; make sure you're tracking all of it, because that will help with your ongoing management over time. In terms of application access control, restrict the use of untrusted AI agents and applications. You're most likely already doing this with certain applications in the environment.
You need to extend this to AI agents, again having them identified, but also to the publicly available ones. When you put sensitive data into public AI models, in some cases it can then be shown to others who later prompt that model. So you want to ensure that anywhere your sensitive or private information goes, it's protected and contained within AI models you trust, ones you know won't surface it to anybody else who shouldn't see it. You want to make sure this is managed on all types of devices, so you might handle mobile devices differently than trusted, managed compute devices within the enterprise. As far as data protection: first, know your data. Understand the underlying data that's exposed through AI agents. Secure the data at the source; that ensures the security follows it wherever it goes, so that as it's surfaced to users through responses, it's still enforced there too. Then track where it goes. Be able to see trends, notice when an agent is surfacing data you hadn't expected, and be able to say, OK, we need to change our underlying data protection model and provide enforcement in a different way. All of this is an ongoing challenge, and none of it is meant to make this seem like too much work, or painful; there are tools to help with all of it. But it is important that it's done, so that you're able to trust AI and use it for the benefits you're seeking. Again: monitor AI usage, monitor the underlying data access, continue to monitor AI model performance to make sure it's producing trusted results, and log everything. Continued logging will help with surfacing what's happening in the environment later on.
For AI models, if you're constructing your own, you have a responsibility to continue to debug them, to make sure you understand the data used and the systems connected to them, and to enforce protection mechanisms on all of those. So with that, if you'd like to have a further conversation, please feel free to book time with us and we can dive more deeply into your specific scenarios. Hopefully this provided a high-level framework for the types of things you should be considering when you're rolling out AI models within the enterprise, and helps drive the trust that lets you see all the great benefits that come from AI. With that, thank you very much, and I hope everyone has a great day.

Here at Concurrency, for our next topic we're going to be talking about security, governance, and trust in AI. It's very important, in order to enable all the benefits of AI, that you've established trust in it first, and we'll talk through the means of doing that over the course of the conversation here in the next 45 minutes or so. Please use that bookings link to schedule time with us if you so choose. So, starting here today, I want to begin with the importance of trust. It's kind of an obvious topic, but I think the importance of it, and the ways it manifests, isn't always so obvious. All new technologies bring both risk and reward. The Internet, when we first started using it 30 years ago now, had its risks, but also its benefits and rewards. Mobile devices presented new security challenges and risks, and also obviously had benefits. Cloud technologies, very much the same thing. How we address the risk really determines how we realize the benefits, and if you address it the right way, you can actually increase the benefits: as you reduce risk, you increase trust in the
organization, in allowing your employees to use the technology. And if you do it the right way, particularly with AI, increasing trust ultimately drives increased adoption, and as adoption increases, the benefits from everyone using it increase. So if you implement means of establishing trust while increasing adoption, that results in greater reward, reduced risk, and ultimately greater benefit. Security governance plays a very important role in establishing that trust by reducing the risk involved. In the AI world, the new attack surfaces that have emerged fall under a few different headings. One is data leakage, probably one of the most obvious at some level: if I'm training my model, or allowing my model access to underlying and potentially sensitive data, that data can then be surfaced to people I may not want to see it. So you need to take steps to protect the underlying data, and the output from those models may contain sensitive information that needs to be protected as well; we'll talk about that a little. On the other end of the spectrum, we have hallucinations, where the model isn't producing the results you would like, which is really a function of how you train the model and continue to debug and manage it going forward. There's a host of others.

Hello everyone. Welcome to our conversation about organization design for AI adoption, and trying to help you lead with empathy, innovation, and alignment. There are a lot of changes that have come with AI entering the tech stack for everyone, and we want to talk about the people side of that a little more. We're sure you've had a lot of conversations about the technology and the technical side of that, security, what have you.
But let's talk a little more about the people side first. Before we get started: if anyone would like to have further conversations with us, you'll see a link in the chat that will allow you to book time with us after this. We're certainly happy to take questions at the end as well, but please look for that link and feel free to book time to carry on the conversation. So, over the last 20 years in IT there have been a lot of changes, and we've dealt with changes like AI before. We had the mobile revolution in 2007, when smartphones emerged and started offering new possibilities and capabilities for people, and also new challenges for the organization and IT. Cloud technologies emerged, and the rise of data science had similar effects on the workforce, on IT, and on how IT had to react. Hybrid work in 2020 changed things for everybody. Now AI is just the next example of that, and we want to talk today about how we enable the organization to embrace it and get all the value from it that you can. Because of all those changes in technology, today isn't the same as it was. We used to operate more in physical, location-based work: we'd go to our office and our cubicles, get our work done, then go home. That's not really the case anymore; we're able to work from anywhere. That was a challenge for IT to enable: how do we allow everybody to work anywhere they need to, with the freedom that allows, while still making sure we're doing that in a responsible way? Data used to be secured within purpose-built apps; now it's everywhere. How do we control the flow of data everywhere? Those were challenges that had to be addressed. Technology adoption as a whole used to be IT-led in a lot of ways, driven more by the infrastructure.
Mobile really changed this: users could see apps on their phones and say, hey, why don't I have this in the enterprise? That drove IT to respond in a different way than before. We used to have more directed user interfaces on technology; here's the form, and you'd have training binders for how to use it. Now, as we move to more natural user interfaces and user experiences, and on to the conversational computing that AI allows for, you're not going to need that training the same way. It's really going to be up to users how they want to use the technology, in creative and new ways. It used to be that we were limited by the technology we were allowed to use; now we're shifting to a place where imagination is encouraged. We really want to get the most value from these new technologies, particularly artificial intelligence and all the things it can do, so creativity and imagination are going to be very, very important. With all this, IT, and particularly security, had challenges: it had to move from "no, you can't do this, you can't do that" to "OK, how do we say yes, and how do we do that in a responsible fashion?" A number of things had to change within IT organizations and within the industry as a whole to allow for those modern approaches to using technology, and security had to mature along with that. There was more enabling and less restricting, moving from device and network controls to identity and access controls to allow freedom there, as data, devices, and users became mobile.

Kate Weiland-Moores 4:32
Oh.

Joe Steiner 4:36
Security had to follow along rather than dictating how things were going to be. And now, with AI agents, we need to treat them from a security standpoint as we do other users. So how do we enable that?
Well, this is where Zero Trust entered. The model for Zero Trust is “never trust, always verify,” and it’s a very strong approach for controlling any new technology. It started with cloud, and it’s really the framework for securing AI as well. Despite the name, Zero Trust actually helps increase trust in the technology, which ultimately drives adoption and lets you realize all of the value from these new technologies. When you have strong security in place, we’ve found that people are enabled to work in new and creative ways: being able to work from home, being able to leverage new technologies at a faster pace than was previously available. We really got away from the model of “the people versus IT” and moved to “the people and IT together.” So as we move into the AI era, what’s a little different here? We’ve solved those IT security concerns, but AI is truly a human-first technology. The most value is going to come from your employee base solving the problems they’re seeing every day in creative Kate Weiland-Moores 6:00 Mhm. Joe Steiner 6:03 and new ways, leveraging the technology for that. What does the org need to do in order to adapt to that? IT’s responsibility is to provide a secure and reliable platform that can withstand all the creativity that’s coming its way: how do I ensure that I have the responsible mechanisms in place to allow people to do what they want to do, what would be valuable to them? HR’s challenge is going to be: how do I develop a culture where employees can create the most business value from AI? And that’s something Meghan’s going to talk about in depth as we proceed with our conversation today. But at the end of the day, we need leadership involved here in dictating.
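Joe’s point that AI agents should be treated as just another identity under Zero Trust can be sketched in code. This is an illustrative sketch only, not any specific product’s API; the names `Principal`, `POLICY`, and `is_allowed` are invented for the example. The idea is that every request is verified against policy, with the same check applied whether the caller is a human or an agent:

```python
from dataclasses import dataclass

@dataclass
class Principal:
    name: str
    kind: str   # "human" or "agent" -- both take the same policy path
    roles: set

# Hypothetical policy: resource -> roles allowed to access it
POLICY = {
    "customer-records": {"support", "billing"},
    "hr-files": {"hr"},
}

def is_allowed(p: Principal, resource: str) -> bool:
    """Verify every request; no identity kind gets implicit trust."""
    allowed_roles = POLICY.get(resource, set())
    return bool(p.roles & allowed_roles)

human = Principal("joe", "human", {"support"})
agent = Principal("triage-bot", "agent", {"support"})

# The agent passes or fails exactly the same check as the human:
print(is_allowed(human, "customer-records"))  # True
print(is_allowed(agent, "customer-records"))  # True
print(is_allowed(agent, "hr-files"))          # False
```

The design choice worth noting is that `kind` never appears in the access decision: an agent isn’t specially trusted or specially blocked, it simply carries an identity and roles that are verified per request, which is the “never trust, always verify” tenet applied to digital labor.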
“Hey, here’s how the org needs to be structured in order to leverage this,” but also ensuring that AI is used to create business value and isn’t just another toy, right? The danger with all that creativity is that it just becomes playtime. So how does the org create a structure and an environment that allow people to be creative with a purpose? This is really where, beyond your security needs, you need to enable the organization to change so that it can allow users to realize the full value from the technology. And we’re going to start with Kate talking about leadership’s approach to how organizations will change in that world. Kate Weiland-Moores 7:41 Thank you so much, Joe. Hello, everyone. I think Joe really said it well; it’s right in the title of that last slide: AI is a human-first technology. Here at Concurrency, we’re always using people, process, technology as a guidepost to talk about how you successfully implement technology. And I would love to say that the rise of the frontier firm is really all about the people. If you were to think, “Hey, this is the future of the org chart,” this is a really good place to start. I’m not sure how many of you are familiar with this, but this is the 2025 Work Trend Index, an annual report written and published by Microsoft. If you’re not familiar with it, I definitely recommend you take a look. It shows a phased approach, one that we at Concurrency are adopting ourselves: starting with the individual employee using AI as their own assistant, building up to the employee having multiple assistants, and then taking it further to understand how that employee actually becomes what we call an “agent boss,” if you will.
So employees will increasingly build, delegate to, and manage AI agents to be more productive. Next slide, Joe. Thank you. All right. I don’t know if any of you joined Brian Hayden’s presentation earlier, but one of his big themes is building agency with your people. If you look at the fact that 81% of leaders are planning on using digital labor, what we call agents, in the next 12 to 18 months, we really have to figure out how to manage all of that knowledge at our fingertips while still considering the people along the way. One of the things we’re thinking about is succession planning. Another pretty important statistic that came out of this study is that there’s a real shortage, especially in IT talent: 53% of leaders are saying, “Hey, we want more,” while 80% of employees are saying, “Yeah, but we’re burned out.” And what about the other 20% of those people? A lot of the individuals in that 20% are thinking about retiring. So we need to be thinking about how to organize our future org chart with our current employees. Do we want to replace everyone who is retiring, or can we create those efficiencies and incorporate them into our plan using a human-employee-first model? A couple of things I want to talk about before I turn things over to Meghan. First, it starts at the top. Anything that you do, any adoption of AI, needs to align with your business goals; really start with that North Star. Adopting AI is something that will enable all of your employees to do more, and it will get you and your entire organization into that promised land, if you will, aligned with those goals.
The next thing I would say is: use the data, your own data, so you can show momentum and how adopting this is making your business better as you go. There are a lot of great statistics in that report that I’m not going to spend my time reiterating here, but there are a lot of different ways you can build data-backed use cases as you do this. And again, this whole presentation is about emphasizing the human-AI partnership. It doesn’t replace people; it augments them. The way we see it, we’re even going to have interns managing an agent to make their job better and increase their productivity, and they’ll be learning along the way. And last but not least, I think it’s really important to start small. Start small, scale fast, and start with augmentation, not focused only on efficiencies: how do you augment an individual’s role versus thinking about replacement? That’s not where we want to go with this; it’s really about augmentation. I would propose that you start with a pilot program and, again, follow those top three points to really ensure you see success as you build that future org chart. Meghan Focht 13:00 All right, I’m super excited to be able to talk to this group. A lot of times we’re talking about technology, and I’m very excited that this new age of AI agents is bringing to light the importance of culture. I’ve spent over 15 years working with organizations, universities, manufacturing, hospitality, tech, really trying to figure out how we do this. Prepping for this conversation, it was really interesting talking to Joe and hearing about the history of IT and how zero trust security came about, and we sort of had this moment of, “Well, that’s kind of what we need to do with culture.” A lot of times culture has been top-down. It’s been HR as sort of the police. It’s been: here’s the handbook, here are the rules.
And let’s be honest, how do those rules get in the handbook? Usually because someone broke them at some point, and I would love to hear all the stories of how those came to be. But I digress. So we now need to create a culture where we give a little more freedom and empowerment to the people, where culture isn’t just a top-down dictation. How do we create the bounds within which the culture develops and matures as people interact with each other and with technology? Here are some bullets to think about when you’re creating that zero trust culture. You want to be intentional. I’ve heard a lot of stories from different people who’ve said, “We wanted to change the culture, and my boss told me to,” or “The leader said they want to mirror the culture in how I operate, so go infuse that, right?” or “They hired me because we’re trying to change our culture.” And while those are small elements of a culture, we know that’s not going to solve it. We have to align what we want our culture to be with our organizational goals, with our hiring practices, and with how we operate within the organization. If you caught Brian Hayden’s presentation earlier today, he outlined that really well. Part of being intentional is looking at your values. Are the values you have actually what you like to celebrate? Are they actually what allow you to be successful? Do your most successful people demonstrate those values? Do you demonstrate those values when you’re making business decisions? That’s how you’re going to reinforce them. And then, along with that, org chart alignment. As Kate discussed with the frontier firm, org charts are getting another look: they’re now going to include the digital workforce, and we have to think about what that looks like.
Another thing to consider: should it still be this hierarchical, top-down triangle, or should we be looking at a more circular view? And then communication paths. A big reason we should take another look at them is that it’s going to be more important now than ever to have communication paths that go left, right, up, down, at an angle. Think about an intern in a company: they may be up at midnight playing with an agent, trying something out, or watching a YouTube or TikTok video about something they could do, and then they get on their computer and start doing it. They might discover something that would be really valuable for the CEO to know. So we have to check ourselves: do we have the paths where that knowledge share can happen, since everything is moving so fast now? And then psychological safety. It’s a common thing we hear about nowadays, which is really good. The term is thrown around a lot, but actually creating it can be extremely challenging, and I think it needs a lot of measurement and thought behind it. As Joe explained, we’re going to need a lot of creativity, and space for creativity, in this new phase of our world. A big part of allowing creativity is allowing mistakes, allowing people to be bold. So having a safe space for that to happen is going to be more important than ever. With this topic we could go down a bunch of different avenues, and the one we want to focus on today is the values piece. It’s typically where I might start with a client: OK, we need to be intentional about this. Let’s look and see whether your values are inhibiting a future culture that would support AI agents. It’s a great place to start.
So we worked together to come up with some values that we know you’re going to need to have present, in some form, to be successful in this next phase of our work world: human first, collaboration, creativity, and accountability. We’ll jump into human first, which we’ve heard a lot about all day today, starting with our keynote speaker, Todd. A lot of times when we think about AI and technology, we immediately go, “What about us? What about the humans? Is this going to be a takeover?” And I know it’s been mentioned in a lot of other sessions, too: there are some negative sides to technology that we have to be thoughtful and aware of. Part of mitigating that is not forgetting that AI and AI agents need a human-first mentality for this to work. So what does that mean? It means prioritizing our employees, our customers, and the way we set things up, and thinking about that. One example I like, which Todd actually mentioned this morning in the keynote, is the Klarna example. What a gift that CEO has given us: he took a big risk and let so many people go, only to realize that his customer service quality fell. Now he’s realizing the importance of not just looking at AI as replacing humans and really thinking, “Hey, it’s not just about efficiency; we have to protect quality as well.” And now he’s got a human-focused strategy moving forward. So thank you, Klarna, for making that mistake for us; we’ll make sure to keep it in mind. In this human-first model, we broke it down a little further into how you can think about what this might mean for your organization. First and foremost, we have to think about ethics and integrity. Is this agent secure? Is it creating a safe environment? I think there are plenty of examples out in the world right now.
That includes how AI may be influencing people in a negative way, so we need to be thinking about that. We want to protect quality and the human experience, so we need to think about the process, the decision, the outcome: is it going to align with what our people need and what our culture supports? And with that, we want transparent AI and automation. I know there are a lot of questions about how to know what to create an agent for. We have found that if you have really clear SOPs, standard operating procedures, if you know step by step what needs to happen, and if you’ve done a RACI chart and you know who’s accountable and who’s contributing, those are going to be really great places to start when creating your agent, because you’ll have that transparent awareness of what you’re trying to do. Then, as you build it, you just need to make sure you can see how it’s making its decisions as well. Digital well-being becomes a big piece of this human-first value. I don’t know about you, but I’m pretty sure I’m not supposed to look at my phone before I go to bed; I’ve heard it does not promote quality sleep. I also have children, so I’m very aware that more screen time doesn’t usually equal happy kids. These are really real things for us to consider. I’ve been in some conversations trying to understand whose responsibility this is: the employee’s or the organization’s? What we know as organizations is that it’s harder than ever for our employees to find satisfaction and happiness as our world becomes more focused on screens. So we do need to keep that in mind, whether that means intentionally pulling people out of their screens to connect in a different way, or just keeping it front of mind as we build these agents. And then, again, as has been mentioned in the summit today, there’s upskilling.
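Meghan’s point a moment ago, that a clear SOP plus a RACI chart is the best starting place for an agent, can be sketched as a rough data structure. This is a hypothetical illustration, not a real agent platform’s schema; the field names and the `invoice_agent` example are invented:

```python
# Sketch: an SOP and a RACI chart translate almost directly into an
# agent definition, with a human always named as the Accountable party.
invoice_agent = {
    "sop_steps": [                     # the step-by-step procedure the agent follows
        "extract invoice fields",
        "match against purchase order",
        "flag mismatches for review",
    ],
    "raci": {
        "responsible": "invoice-agent",  # the agent does the work
        "accountable": "ap-manager",     # a human owns the outcome
        "consulted": ["procurement"],
        "informed": ["finance-team"],
    },
    "decision_log": True,              # transparency: record how decisions are made
}

def has_human_accountability(agent_def: dict) -> bool:
    """Check that the agent itself is never the Accountable party."""
    raci = agent_def["raci"]
    return raci["accountable"] != raci["responsible"]

print(has_human_accountability(invoice_agent))  # True
```

The check at the end encodes the talk’s accountability theme: an agent may be Responsible for executing the SOP, but the Accountable entry should always point at a person.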
We have to be able to skill up faster than we ever have, and we need to make sure our culture empowers people to learn from anywhere. We don’t necessarily have the time to set up a training plan and put people through linear sessions; we have to figure out how to learn from each other in real time, and how to support giving people time to look into things and share information. The next value we wanted to call out here is collaboration. Collaboration is going to change, because it’s not just about collaborating with humans, which I think we can all agree can sometimes be hard enough; I’ve been on enough group projects in college to know it’s not always an easy task. Now we have to think about collaboration as human-to-human and human-to-digital: how we’re going to solve problems using that partnership between people and machines, while keeping trust and transparency and not losing sight of our purpose. So as we jump into that human-and-machine synergy: I had a little bit of a tiff with my Copilot putting this presentation together. I created these images you see with my Copilot buddy, so we collaborated on those. I’m pretty proud of them; hopefully you love them. But I went to get my image for accountability and said, “Hey, keep this same theme, keep the bright blue accents, and show me accountability between humans and AI. Go.” And it showed me an image of a robot with a sad face. And I thought, well, accountability is not sad. It can sometimes feel that way, and it’s not necessarily the most fun thing, but I had to explain to Copilot that accountability actually makes people happy, so take the sad face away. We were able to find common ground, but it was a moment of me realizing: OK, I’m thinking this is going to be obvious, and it’s not to the agent. It was even just me understanding what an agent is capable of.
There was another time I was trying to get it to format my slide, and however I asked, it just wasn’t capable of it. I was getting really frustrated, and I needed to educate myself. We’re going to see those moments more and more. Cross-functional fluidity: when we use an agent, it’s likely not going to be just for me and my work. There will be those agents, but the most impactful ones are going to span the organization. So now I have to work with people outside my department and really understand: hey, this process starts with me here; what happens when it gets to you? Is there a place for efficiency where an agent would work? And then, as the organization changes and grows, and people and processes change, I need to stay very well connected with those people to make sure we’re tweaking our tool when it’s needed. Shared intelligence: not only am I putting information into this agent, but it’s grabbing it from the web and from our documents. Susan over in that other department might have put something in a folder I’ve asked the agent to check. So it’s just another thing for us to be thinking about, looking at, and revisiting. We need to look at AI not as competition. There is this idea of “Will it take my job?” But as Kate called out, we’re going to see a lot of people retiring, and we don’t necessarily have the workforce coming up the ranks in the same volume. So it’s more likely that we get to use agents to take away some of the most frustrating parts of our jobs and lean more into the things we love, which will be a really cool opportunity. And then radical candor and being feedback-forward: as we talk about how we have to interact with each other as humans to make this the most successful tool, and how we have to continue to pivot, probably faster than we’ve ever had to before,
we have to have really strong relationships with each other. At Concurrency, we talk a lot about radical candor. If you don’t know what that is, it’s a book by Kim Scott, or you can go on YouTube, where she has a ton of videos that explain it as well. It’s the idea that if you’re caring personally and challenging directly, you can be really clear and fast with feedback. You have to think about how you’re filling your coworker’s love bucket, which is sort of not really an HR thing to say, but I think we all know what it means: how am I showing that I care about someone before I have to go give that direct feedback? And then there are the other things we say, which I think I started saying as a cheesy HR person: “Feedback is a gift.” “Clear is kind.” Thank you, Brené Brown. But we’re hearing it now from all of our people at Concurrency: “Just give me the feedback. It’s OK if it’s really messy; I’ll get curious about it, and then we’ll figure it out.” That’s going to be more important than ever. The next value we have to think about is creativity, which for me is front of mind. These images, whether you think they’re rudimentary or you like them or you don’t, would have taken a lot of time for me to create in the past. If anyone’s worked with some of those creative tools, even just moving one little line can sometimes take way longer than you ever thought it should, and with image search you don’t really find what you need. For me to say, “OK, I need some images, I want something fresh,” and go do this, I can start to spark my own creativity and then hone it in. It really saved a lot of time, and I was able to produce exactly what I was looking for. So creativity is going to be easier to act on than it has been in the past.
More people are probably going to come to the table with suggestions in areas where they may not have been able to before. As we dive deeper into creativity, we look at human ingenuity and machine amplification. That’s exactly what I did: I had an idea, “OK, we’re going to create this image,” and I started to tweak it and figure out whether I liked it, and we worked together to do that. I’ve also used Copilot to put training plans together: “Hey, I’ve got to do a manager training for this type of organization. Can you give me an outline? And then can you give me five bullets within that outline?” Where I was maybe experiencing a sort of writer’s block (Where do I start? There are so many things), now I have an outline I can start to build from. It’s a really great tool to spark that and get you going. Curiosity-driven innovation: curiosity, for me, is my favorite thing ever. I’m also a huge Ted Lasso fan, and hopefully everyone is; being in HR, that’s probably no surprise. There’s this clip of Ted Lasso in a bar with his boss’s ex-husband (just stick with me), playing darts and putting a wager out there, and Ted gives this whole speech about curiosity. If you haven’t seen it, please go search for it on YouTube. He talks about “Be curious, not judgmental,” and how when people are judging, it’s usually not about you; it’s usually about their lack of curiosity. So in this world now, asking questions, thinking outside the box, and staying really curious, whether about the humans you work with or the technology, is going to be really important. And some other highlights here:
We’re still throwing ideas at the wall and sharing them, keeping humanity first and thinking about what our human needs are and how we can creatively meet them. I think a lot about this age of technology and how it relates back to even the assembly line. Pull up Ford in 1913 and there are a lot of similarities. One thing Henry Ford missed, while being creative about so many other things, is that he put efficiency so far forward that he only made black cars, and customers didn’t want black cars, so he started to lose the momentum he had gained through his other creativity. So it’s important to keep checking in: OK, are we being creative in the right frame of mind? Now, the last value we really want you to be thinking of as we move forward is accountability. You can see I collaborated with my Copilot very nicely to explain that accountability is actually a happy thing, and our robot here is smiling and has a nice little check mark; we worked together to get there. Accountability means taking responsibility for our actions, our decisions, and the consequences. We’ll continue to see a lot of court cases where AI or technology has done something and we have to figure out: OK, who owns that action? Could it have been avoided? And it becomes more important than before: in a world where we all sat at desks, paper moved around, and decisions happened a lot slower, it was probably easier to operate with a little less accountability. But with the speed at which things are happening and the outcomes that can result, this is going to be pulled even more to the forefront. As we dissect accountability, we look at human oversight of technology. We go back to the org chart Kate was talking about, and we do need to know, ultimately, who is responsible for that agent.
Not that we can necessarily hire or fire an agent (we can certainly generate one, and we could retire one, I suppose), but we have to know who is ultimately accountable for making sure that agent is still doing what we need it to do, and, if the agent has a different outcome that’s valuable to the organization, whether we need to make other tweaks to it: looking at the decision-making, looking at the input we’ve given the agent, training people on how to use it if it’s a public agent. So we still need to highlight that human oversight of and accountability for our technology, and transparent systems. We need to know what input has gone into this. What is it doing? Is it aligned with our values? Have people had bad experiences using this agent? Again, that goes back to the feedback loop we talked about, and to being ethical. We talked, even in the human-first value, about how it’s a shared responsibility. So even though we have one person accountable, you’re going to have people who are responsible for parts of it or who contribute to it. Doing that RACI for the processes these agents are taking over will continue to be extremely valuable. And then, what are the boundaries? What is this agent going to do, and what is it not going to do? I’m starting to generate some ideas (hot off the press) around a manager-coach type of agent. But I’ll also have to be really clear: that’s not going to replace your need to run really key decisions through me, or your leader, or whoever else. Hey, this can get you to a certain point, but let’s all be on the same page about where authority, validation, or double-checking needs to exist. The other thing about these agents is that they can save a lot of time. We’ll just use Joe as an example because he’s so smart.
He could go and automate his job so that a 40-hour job is now 20 hours, and maybe for the other 20 hours he’s kicking back in his backyard, hanging with his dog, working on his own projects. Now the question becomes: do we care? Because at the end of the day, he’s still getting done the work we need him to get done in 40 hours. But we should care, because now we have to think about engaged employees probably more than ever. If Joe is an engaged employee, then he’s going to automate 20 hours of his work and go, “Thank goodness, I now have more time to work on this backlog of ideas I have for the organization,” or “I now get to work on these more creative or more complex projects I just haven’t had time for,” or “Hey boss, I have a little more time; is there something on your priority list I can help with?” We could have him just sitting there instead, but this way your organization is probably going to be ten times better. We all know engaged employees are good for the business; we’ve heard that. As we look to the next slide, we have some stats we’ve all seen, and I know there are stats, too, about how revenue increases with engaged employees. With this increase in productivity, I think we’re at a moment where that will skyrocket. If we look at that in probably three years, I bet that number is much higher, because we can either have employees who sit still and enjoy their free time for themselves, or employees who give that time back to the organization and continue to be engaged and involved in what’s happening. Some of the things we need to focus on to keep engagement front of mind: for one, as individual employees, we want to look at what we can do now and help people do the same. It’s an ownership of, “OK, what are our values here at the organization? What have we said are the things that are important to us?”
Am I living those values? Am I being a good employee? That’s one way. The other way is: if you see a coworker who’s misstepping, or in a bad mood, or not treating someone the way you think they should, check in on them. Lead with empathy first: “Are you OK? Hey, we don’t do that at Concurrency. Can I help you?” And that happened at Concurrency. As we were shifting our culture and trying to drive some change, it was a lot of leadership saying, “This is what it is.” And I remember the first time an employee said to us, “Hey, you might want to check on so-and-so.” They brought it up, and I checked in on that person and said, “Hey, that’s not how we want to do things here; you should take this path instead.” And I had a little giddy party in my office: not because this person was struggling, but because we are now checking on each other. That’s a really cool place to be in engagement and culture. Tried-and-true engagement surveys: I don’t think they’re going to go away. I know they can be kind of annoying, but they give you really good information, and if you make sure psychological safety is present, you’re going to get really honest information, which will be really helpful in figuring out how you need to tweak your culture. You want to create space for people to connect and share ideas. Employee resource groups are a great way to do that, but also think about other casual ways to connect. This becomes really hard in a remote or hybrid environment, and you have to be a lot more intentional about it. And connection to leaders: how are your leaders checking in with the people within the organization, even one-on-one, understanding what’s happening in their lives? Are we celebrating personal milestones in people’s lives and sharing that information?
And then, are our leaders asking for feedback? One thing Kate does at Concurrency, which I really appreciate, is that often, before a meeting’s over (not that she holds us hostage, because that sounds bad), she really makes sure everyone gets a chance to say: “Hey, what can I be doing differently? I don’t have all the answers. What advice do you have for me?” That goes such a long way toward people being able to give that feedback back to her. Maybe not right away, because they’re kind of shocked; a lot of organizations say they want that and then get defensive or don’t actually take the feedback. So it might be five times of asking that question before someone finally has the courage, and sort of prepares themselves, to say it, but it’s going to go a long way. Another thing we do here is a quarterly check-in with an engagement form. I love this form so much. It asks you to rank what’s most important to you today, from work-life balance, the work you do, opportunities, people, competitive rewards (I always forget another one), and then rate how satisfied you are. I’ve found these can be a lot more effective than a stay interview, and it helps the employee stop and say: “Wait, I’m happy here. Why am I so happy? Oh, it’s because work-life balance matters to me right now, and I really feel like I have the flexibility and the trust, as a human, that I’m going to get my job done, and I can tend to family things or fitness or whatever else.” And the other question I ask really carefully is: what would a 10 look like? Even if they gave us a rating of a nine, what does a 10 look like? Because a lot of times people put boundaries on what’s possible. It helps the manager very quickly hone in on how to make this employee’s life at Concurrency better.
Or let's have a really real conversation about why that probably isn't possible, and sometimes it allows you to have these transparent conversations about, is this the right place for you? But ultimately the purpose is, I want every person at Concurrency to wake up in the morning and go, this is exactly where I'm supposed to be. And if that's not the answer, we've got to explore why. So those are some tips to help you think about how to drive engagement. And then just as an overall summary of our presentation: you want to establish an organizational culture with intention, really putting innovation and creativity front of mind. You want to have an AI vision, and then values on organizational culture that are aligned and communicated and that you've created a structure around. You can consider a center of excellence of people passionate to lead this charge, which can be a good place to connect different layers of people, but really making sure we're keeping humans first in this new AI world, and then responsible AI practices to ensure that the alignment is there. And then just enable all your users to create and innovate and feel empowered to do that. Thank you all for coming. It's a topic that I know we're all very passionate about. We try to put that forward here at Concurrency, and we're helping a lot of our clients think about this, especially as they take on big tech projects that now span the whole organization. So it's not just, hey, let's help the IT team do this in the background. Now we're affecting an entire organization, and for the cultures that aren't aligned in this way, it's becoming incredibly obvious, and creating barriers to being successful in their tech plans. So really appreciate it. Please feel free to connect with us on LinkedIn, we always love meeting new people, and then as Paige has put in the chat, we would love to get a session
To talk more about your specific situation, if you're interested. I don't see any questions in the chat, but happy to field some if you have any. We have 5 minutes, so we can hang around if anyone wants to add anything. Otherwise, really appreciate your time. Joe Steiner 45:35 Thank you everybody for your time today. We appreciate it. Kate Weiland-Moores 45:39 Thank you, everyone. Nick Miller Hello, welcome everybody. You're in the Intelligent Document Processing with AI webinar. We're going to talk about workflows, tools and integrations, and validation techniques for AI document processing. My name is Nick Miller. I am an ML architect at Concurrency. First, we'll start with a very brief intro to who I am. I started off my career with an MIS degree from UT, and from there I went to the Navy. I was a logistics officer in the Navy, did that for about 10 years, got out, and wanted to speak civilian instead of just military. So I got an MBA from UT. Shortly after, I got married, you can see my wife, Heather, there on the right, and I predicted that we'd need more money because we're gonna be having kids and all that sort of stuff. So I went back and got a master's degree in statistics. Really, the impetus for that was that I took a few classes in the MBA program that really exposed my love of data science, and I realized I couldn't get a job at that time, at least as a data scientist, without at minimum a master's degree in statistics, math or econ. So I went back to A&M and got my master's degree. If you're from Texas, you know that UT and A&M are rivals, so I'm not really sure who to root for on game day, but it's all great. Both schools were awesome and a good experience. There's my family there. I've got two kids, Sadie and Bam, and two dogs, Olive and Layla, and my wife is adamant about matching Halloween costumes. This year we're gonna be the minions.
And then probably the newest aspect of my life, especially one that I can nerd out with numbers on, is golf. So anyways, that's a little bit about me. Thank you so much for joining today, and I think we have some great stuff to talk about. Before I do, I want to let you know that we're offering a complimentary intelligent document processing session, where you'll meet with us to discover how AI can streamline your document workflows, talk about your specific use cases and any specific questions you have, and get some practical insights. This webinar is kind of a high-level overview, and you may be further along or maybe not quite as far along, so we can really tailor that session to wherever you and your company are at at that time; just let us know. Later in the session we're going to drop some booking links in the chat. So if this is interesting to you, or you're not sure but think it might be useful, let's meet up and we'll find out. It's a no-risk meeting. All right. So now that we have that out of the way, let's talk about the agenda. Today we're gonna talk about the challenges in document workflows; if you're here, you probably already have some good ideas about those challenges. Then we're going to talk about some common scenarios of document workflows that you might see, and some of the commonalities between different workflows or scenarios and how we can handle them. Then we'll talk about one of the key components of that workflow, which will be the AI Document Intelligence tooling from Azure, and give some demos of using that tooling. Then we'll talk about some tools and integrations and validation techniques, demo some of that, and then we'll close. And of course, ask questions at any time.
Please ask questions, and if you ask a question and I'm not answering, hopefully Amy will, not answer the question, but let me know there's a question, because it's kind of hard to see the chat sometimes when you're presenting. So please ask questions if you want to know anything, or something does not make sense, or you want some clarification. So first, challenges in document workflows. As digital as we are, as advanced as our ERP systems and SaaS and all this sort of stuff are, we still have siloed systems between companies. Even intra-company communication is often with documents. Our customers require compliance documents from us. They send us purchase orders in email. We have contracts in PDF that are not necessarily accessible or searchable. We get payments from our customers. There's receipts, there's all sorts of documents that are just being used for communicating business information today, and they're not going anywhere anytime soon. So that's, I guess, a good thing and a bad thing: the bad thing is it's a challenge, and the good thing is we have a solution. The other part about document processing workflows is, unless you've already implemented the things we're talking about in this webinar, the way you're handling those documents is very likely manual, right? So you have someone looking at the purchase order. You have someone looking at the ACH payment details where your customers have paid their invoices, but you don't know which invoices the ACH payment even belongs to, so it's a manual process. Or maybe you have an expense policy: people submit receipts, and you have to look at the receipt and make sure they followed the company policy. Well, that's costly and takes time.
You know, the employees' time is not free, and it's also error-prone, especially if you're extracting information from those documents and not just reviewing for policy validation. The other problem, or challenge (I like to think of these as opportunities for excellence, right? So instead of a problem, it's a challenge, an opportunity to excel), so another opportunity to improve, is the fact that there are so many different document formats. And when I say formats, I mean the actual file format, and then variation within the file format. Let's say we're talking about purchase orders: do your customers all use the exact same format in their purchase orders? And the answer is no, right? Old-school document processing was very much: look at this form, find this thing. And when your forms were very, very uniform, it worked. But that's not the real world. So that's another challenge. And then the other problem here is that there is an AI skill shortage, and it's kind of hard to find folks with AI skills. There is a lot of democratized AI tooling, but still, it's a challenge. But the good news is we're going to give you solutions. We're going to demonstrate how you can overcome all these challenges and show those opportunities for excellence. So after the challenges, what are some of the scenarios or workflows here? Now, these are high level and kind of generic, and they can be built into lots of things. So in document processing, this is very generic: there's a document input, you extract some data from it, and you take action. An example of that might be a PO process. You receive a purchase order from a customer; that's the document input. You extract the data. Right now it's manual, so you're looking at it. What's the PO number? What do they want? Type, type, type, type, type, type. Entering the purchase order into your ERP is taking the action. Right, so that's an example of document processing.
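That input, extract, act loop can be sketched in a few lines of Python. Everything here is illustrative: extract_po is a hypothetical stand-in for the extraction step (a real pipeline would call an AI extraction service rather than regexes), and take_action stands in for the ERP entry.

```python
import re
from dataclasses import dataclass

@dataclass
class PurchaseOrder:
    po_number: str
    total: float

def extract_po(text: str) -> PurchaseOrder:
    # Stand-in for the "extract data" step. A real pipeline would
    # call an AI document-extraction service instead of regexes.
    po = re.search(r"PO\s*#?\s*(\w+)", text)
    total = re.search(r"Total:\s*\$?([\d.]+)", text)
    if not po or not total:
        raise ValueError("could not extract required fields")
    return PurchaseOrder(po_number=po.group(1), total=float(total.group(1)))

def take_action(po: PurchaseOrder) -> str:
    # Stand-in for "take action", e.g. creating the order in an ERP.
    return f"ERP: created order {po.po_number} for ${po.total:.2f}"

# Document input -> extract data -> take action
doc = "PO #A1234 from Contoso\nItems: 10 widgets\nTotal: $99.50"
print(take_action(extract_po(doc)))  # ERP: created order A1234 for $99.50
```

The point of the sketch is only the shape of the pipeline; swapping the regex extractor for a model-backed one leaves the two outer steps unchanged.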
Knowledge base management is fairly similar, except instead of taking that data and doing something in the ERP system right away, or triggering off some process, you're extracting the data so that you can build a knowledge base, so that when you have multiple documents, you can ask questions and get information across the entire knowledge base, right? So you can think of this maybe as: let's say you have some company standard operating procedures, and you have customers who ask you questions. They say, I want to know what your policies are, and they give you a survey: what are your policies for X, Y and Z? And of course, you probably don't want to hand over your standard operating procedures because, you know, that's your secret sauce, perhaps. But you do want to let them know that you have policies and procedures in place to mitigate the risk of whatever it is they're talking about. So they send a survey: tell me how you handle such-and-such risk. And you would extract the information from all your SOPs, and then you encode it and store it in a knowledge base. Oftentimes those are vector databases, like AI Search. Then you would use an LLM, or some other process, that would look through that database, find the relevant information, and summarize the data in such a way that you can answer the question without sending them your SOPs, right? And the first part of that is knowledge base management. Now, what you do with it later, there are different ways of doing that, so I left that off. These two things are, in my mind, the most generic, foundational scenarios. Now there are different flavors, there's different tooling, and the take-action step may be part of an application or a workflow or something else, so there are a lot of options there. But at their heart, these are the two primary workflows or scenarios that we see. Amy, do we have any questions yet?
Amy Cousland 9:27 Not yet. Nick Miller 9:29 All right, great. OK. Just wanted to check. All right. So what we're going to do is go through AI Document Intelligence. Now, in that previous slide, if you remember, and actually I'll go back: extract data and extract data, right. They both have this input, and they both have this data-extraction piece. Now, the primary tool, at least a first step, for data extraction is going to be Azure's AI Document Intelligence tool, so we're going to give an overview of that. The name has changed, and AI is really hot right now, so everything that Azure offers is now called AI-something-or-other. That's a little bit of a marketing thing, sure, but these days there actually is AI embedded in the Document Intelligence tool, especially in some of the custom models that you can build, and also in some of their pre-builts. They pre-build these models and pre-train them, and there's some AI behind the scenes in those models as well, and we'll talk about that. So there's AI built in; it's not just a marketing gimmick with that AI Document Intelligence resource. Now, in general, there are two types of models: the pre-built and the custom. The pre-built, as the name sounds, is ready to go out of the box; you access it with an API. For the custom model, you have to provide training data, you label it, you perform a training run, and then you use the model, right? And so that's the difference between the pre-built and the custom. Once the custom model is built, you still access it via an API, so they're both API-driven, and because they're API-driven, you can integrate them into your processes or into your workflows. The first one we're going to look at is layout. We'll actually just look at the website for that, if I can pull that up and find the right page. OK. All right.
When you are in Document Intelligence, the first thing you see is document analysis. Now, it doesn't call these pre-built explicitly, but they are pre-built; if you look at the documentation, they're pre-built. These are the kind of generic ones: you have read, which is just OCR, and layout, which actually extracts structure, and general documents, right? So you'll see that. So, I'm trying to resize this, pardon me here. Hey, there we go. It's a little bit better. All right. OK, here we go. Sorry about the delay there. So in layout, and maybe I need to zoom in a little bit, that might be a little small, you add your document, and what it does is extract some standard structures. So instead of just taking the PDF document or whatnot and extracting a bunch of text, it gives you text, it gives you lines of data, it gives you page information, it gives you tables, et cetera. Let's say we're looking at this receipt document here. Now, there is a pre-built receipt, we'll talk about that, but just for fun I'll show you this. So look at this Denny's receipt. There are different items in here in the content. There's text, and it shows you all the text that's in there: it says Denny's, it gives a paragraph, I guess it's a paragraph, and, let's see what else, it gives you the table: 72, server, etcetera. So it gives you the text. And there are selection marks; this one doesn't have any. And there are tables, and there can be figures. Figures are like images; they could be charts or whatnot, but basically images in there. And from the tables, you have two: the first table is the items on the receipt, and the second table looks like the bill totals. You have your subtotal, tax, gratuity, etc., and then the total. So this right here, this is layout. This is a pre-built model, fairly inexpensive.
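As a rough sketch of what you do with that layout output downstream: the analyze result comes back as JSON with, among other things, paragraphs and row/column-indexed table cells, and you typically flatten it for later use. The field names below are assumed from that JSON shape, and fake_result is a hand-made stand-in for a real API response, not actual service output.

```python
def summarize_layout(result: dict) -> dict:
    # Flatten an analyze-result-like dict into plain text plus simple
    # row lists, one list of rows per table.
    text = "\n".join(p["content"] for p in result.get("paragraphs", []))
    tables = []
    for t in result.get("tables", []):
        rows = [[""] * t["columnCount"] for _ in range(t["rowCount"])]
        for cell in t["cells"]:
            rows[cell["rowIndex"]][cell["columnIndex"]] = cell["content"]
        tables.append(rows)
    return {"text": text, "tables": tables}

# Hand-made stand-in for a real layout response:
fake_result = {
    "paragraphs": [{"content": "Denny's"}, {"content": "Server: 72"}],
    "tables": [{
        "rowCount": 2, "columnCount": 2,
        "cells": [
            {"rowIndex": 0, "columnIndex": 0, "content": "Subtotal"},
            {"rowIndex": 0, "columnIndex": 1, "content": "10.00"},
            {"rowIndex": 1, "columnIndex": 0, "content": "Tax"},
            {"rowIndex": 1, "columnIndex": 1, "content": "0.80"},
        ],
    }],
}
print(summarize_layout(fake_result)["tables"][0])  # ['Subtotal', '10.00']
```

Once flattened this way, each piece of text and each table can be loaded into whatever store the workflow needs.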
So you can run this via API and get this text output, and that may be all you need. It depends on the document; it depends on your solution or what you're trying to do. You may just take your documents and use layout to extract all of that, maybe page by page. You'll extract that JSON file from this output, and for page one you'll get some text, you'll get a table, you'll get a figure, whatever's in there, and then each of those becomes a document: the text becomes a document inside a vector database, the table becomes a document inside a vector database, and the figures become documents inside that vector database. And so then your document is now searchable. You haven't really enriched it; you haven't really done much to it except extract the data and put it into a vector database. And maybe you encoded the data, where we're talking about that knowledge management, right? So you used layout, and you encoded it and turned that text into vectors, which are basically just a series of numbers that the LLMs use to find matches. So when you ask the vector database something, it encodes your request into a vector, finds other vectors in the database similar to yours, and then returns the plain text back to the model, maybe the top 3, 5, or 20 of those, just depends on what you want. And then it takes that selection of returns and summarizes what it finds, right? So that may be all you need, and that's totally cool. So that's layout, and you're getting the text, the tables and the figures from that. OK, now one step up from layout is general document. And general document can also be used in the same way that we just spoke about: you can parse this and push this into a knowledge base. It gives similar things: it gives text, it gives selection marks, and it gives tables.
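Before moving on, here is a toy version of that encode-and-retrieve loop, using bag-of-words counts and cosine similarity in place of a real embedding model and a vector database like AI Search; the documents are made up for illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "encoder": word counts stand in for a real embedding
    # model's vector of numbers.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity between two sparse vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "access control policy for customer data",
    "holiday schedule for the warehouse",
    "incident response procedure for data breaches",
]
vectors = [embed(d) for d in docs]  # the "vector database"

# A query is encoded the same way, then matched against stored vectors.
query = "what is the procedure for a data breach incident"
qv = embed(query)
best = max(range(len(docs)), key=lambda i: cosine(qv, vectors[i]))
print(docs[best])  # incident response procedure for data breaches
```

A real system would return the top few matches as plain text to an LLM for summarization, exactly as described above; only the encoder and the store are more sophisticated.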
The difference, though, is that with general document you can get key-value results if you want, so it's a little bit similar, but notice that it gives a little bit of a different table. It guessed with the key-value pairs that the server was "straw", so it doesn't always perform really well, right? And that's OK, it was trying. This is showing you what works and what doesn't work, right? So there are no tables in the CVS Pharmacy one. How about here in this California Pizza Kitchen one, do we have a table? No, no table. It's supposed to extract tables, and it does; however, it just didn't here. It doesn't always extract them very well, whereas if you look at the layout table, it tends to output those tables a little better. So it's just kind of up to you what you want to use there. So that's general. And then we get into the specific pre-builts, all right, and there are a lot of them. Actually, I'm going to go back real quick and show you some of them. There are a ton. Now, they don't have everything you need, and, to be honest, no one has ever asked me to build them a solution using a pre-built model, probably because they feel like if there's a pre-built model they can do it themselves. But, you know, I don't know. That doesn't mean they're not useful; they are useful. You have invoices, receipts, identity documents, pay stubs, I guess mortgage information, marriage certificates, contracts, business cards, right? So there's cool stuff in there for sure. But what if you want to extract ACH payment details? You receive a document with an ACH payment reference, and it has a whole bunch of invoice information on it, and that's how you're going to find out which invoices have been paid by that ACH payment. Well, it doesn't exist. There isn't a pre-built model for that. You're going to have to build that yourself.
And so while there are a lot of options, it doesn't have everything; but if it does have it, it's pretty good. So let's go check out this pre-built receipt model, and look at this Denny's. What it does, instead of just extracting the data, is it has a schema in mind, right? If you have a receipt model, you might imagine: what is the schema for a receipt? You're probably going to have a merchant, right, and maybe the address, and there are going to be items you buy, and quantities and prices, and dates, totals, tax, things like that, right? And you see that it has that: it has the merchant address. We didn't have to tell it it was a merchant, because it's a receipt; it just knows that's part of the schema. So that's what that pre-built means. So you get the merchant address there; the city is Las Vegas. If you extract this and you say, give me merchant.city, you're gonna get Las Vegas. So it gives you a lot more structure, more reliable data. So if you do want a business process or workflow where you're taking this document and extracting the data, it gets pretty good results right now. Is it perfect? It is not. Nothing's perfect, so I don't want to set the standard or the bar at perfection. But, you know, there are little things out here on the receipt, like this Chowder Hut receipt. They got a Coke and a spicy fish; sounds pretty good. And then it subtotals it for food and beverage, which, you know, I guess would have been more impressive if they had more than one item of each. But anyway, subtotals for food and beverage, then tax. Maybe the tax is different based on food versus beverage, I don't know. And then they add this little thing here, SF, it's in San Francisco, SF like employee-mandated something or other. I don't know what that means; it's some sort of government fee after the tax.
And so you can put in some validation rules, and we'll talk about validation rules later. A reasonable validation rule is: every item in here added up should be the subtotal, and the subtotal plus tax and tip should equal the total payment amount. And if it doesn't extract this fee, you're gonna be off by $0.09, and you're gonna flag this and say there's something wrong with this receipt. And 9 cents, maybe that's minutiae and you don't care, and that's totally fine. But the point is just to demonstrate that even though the receipt model is awesome, it doesn't necessarily have mandated government fees in its schema. Or maybe this 83 was on top of this and it messed it up, I don't know. But regardless, it's not in there. One of the cool things, though, is you look at this table of items, and you expand this table, and it says the description of this item was: they got a Coke, it was a quantity of one, and it was $3.09 USD. That's pretty accurate. Spicy fish, one, and $16.29. Pretty good, right? So you have this pre-built model for receipts. Pretty nice. Yeah. Oh, sure. Amy Cousland 21:31 Nick, we did get one question. It says: is there a reasonably straightforward way to determine whether your document data extraction requirements could use canned out-of-the-box capabilities or will require customization? And also, is there an in-between? Nick Miller 21:48 Yeah, that's good. Yes, there is. The way that you know is, it's really a business question, right? There's nothing to say that you have to do a validation rule that matches your subtotal plus tax and tip with the total. But if you have a business process where the document has information, well, one easy check when you're looking at the pre-builts is: is your business document one of those pre-builts, right? And if it is, then you'd want to do some experimentation. So let's say it is, say an invoice or a purchase order or something like that.
Let's say you're on the receiving end of invoices, and you get invoices from 10 suppliers, maybe your 10 most common suppliers. Throw those in there and analyze them here in the UI; just do a little experiment, a little test. In data science there's this concept called no free lunch, and it means that you don't get anything for free without a little bit of work. Now, AI makes that less true, but still. So you take those 10 documents that are somewhat representative of the invoices you receive, you throw them in there, you do this analyze just like we're doing here in the UI, and then you look at it: is it getting the things that are important to you? Now, what's important to you? Everything may be on there, but there may be a thing that you want to extract that doesn't work. We've had invoice information from clients where the table column said invoice number but contained the date, and the date column contained the invoice number. Right, they goofed that up. Now, would this handle that? Maybe, but we'll talk about some other ways to handle that at the end, when I show you kind of my preferred method, although these are really good and I'm not disparaging them at all. So it's really just a little experimentation. Take 30 minutes or an hour, before you move some big business process to this and invest time and money into it, and do a little experiment. If it meets your needs, great. If it doesn't, there's one more option, and that other option is custom, and we're gonna talk about that as well, right? And then the reason I showed you layout is that you may have a use case where you just want to take the text out of the document and put it into a database. And that's totally fine, and if that's the case, you can use that layout pre-built and just save the page number and all that stuff.
So it really is driven by the business use case and the business value you're trying to provide. We'll also look at layout a little bit later, because at the end of the program I'll show you how I use layout along with a large language model to perform similar extraction but add a little bit more. Does that answer the question? Amy Cousland 24:36 If we get more, I'll let you know. Nick Miller 24:39 All right. OK, great. Thank you. All right. So that's the receipt. We covered layout, we covered general, and we covered one of the specific pre-builts, receipt. The next thing we're going to look at is custom, and in custom you have different models. Let's see here: label data. So I added some receipts. I'm focusing on receipts today just so you can see the differences, right? If I used different documents for each one, you really wouldn't be able to see the performance characteristics and differences between the different models. In custom there are two options: there's an extraction model, which does not use AI, and there's a neural model, which does use AI. And the guidance there, and this is in the documentation of course, is: if the document is not a standard pre-built document, and the form and the layout itself is uniform, so most likely it's an internal document, then you can use an extraction model, right? Now, there's a little bit of a caveat to that. Let's say that internally you have two flavors of this form, same purpose, but two different flavors. You can do what's called a composed model: you build one extraction model for one flavor, another extraction model for the other flavor of document, and you put them together with a composed model, and it uses a classification model to decide
which of the two flavors of extraction model to apply, right. And so that's the composed model. Remember how we talked about AI being built in? Well, the extraction model doesn't have AI, but the classification in that composed model that decides which flavor of extraction model to use, that is AI. Because they're the same purpose, right? Let's say it's, I don't know, a quality report. You have some sort of manufacturing process, and you generate a quality report, and there's a quality report for product A and one for product B, but you just want one process that takes all the quality reports. You can have a composed model: send all the documents to it, it'll decide which flavor to send each one to, and then it'll process it that way, right? So that's the composed custom model. And then the other option is: if your form is not available in pre-built, so that's step one, if it's not available there or pre-built doesn't have everything you need, and the layout is not uniform, right? So most likely external documents coming to you, like invoices sent to you by your suppliers. It would be nice if they all sent the same form, but they don't, right? In that case you're going to want to use a neural model. So you can think of it at a high level: the custom extraction model is uniform but unique, right, so not pre-built; and the neural model is not uniform and unique, so not pre-built, with different flavors and formats from your customers. And the neural model is what we trained here, because we trained a receipt model on our receipts. Now, I'm violating my own logic, I know, because receipts are in pre-built, but let's pretend they're not. Then the second thing: are the receipts unique? In and of themselves, they are, because they're external documents.
I don't control the creation of the receipts, so there's a large variety among them. So I chose to use a neural model, because that's the one you should use; it adapts to the changes better. And on that note, you'll see here we extracted similar things. Let's look at maybe the two that are comparable. California Pizza Kitchen, Chowder Hut. Oh, here we go: our Coke and spicy fish. All right, so we got our spicy fish, we got our Coke. It got the date and time. It did not get the server. It did get the address, and it got the tax, and it got the payment, right? It didn't get the additional fees, the additional government fees or whatever. So, OK, it didn't get that. No big deal. This one is a little bit hard to see. This person is eating a breakfast of champions, or maybe they're just not being nice to themselves, or they're just hungry, but they got two, it looked to me, I'm guessing, Little Debbie Oatmeal Cream Pies, and then Pepperidge Farm something-or-other oat raisin cookies. I don't know if that's totally right, but just looking at the price, a single Little Debbie's cookie and an oatmeal raisin cookie package for 5 bucks seems about right. Anyway, so it extracted that, and that was thanks to Robert over there in Las Vegas. It extracted the date, the time, the merchant name, CVS Pharmacy. So it got a lot of the same stuff. Now, arguably, in my opinion, it didn't get all the stuff that the pre-built receipt got, and some of the things in these pre-built receipts are, in my opinion, pretty cool. Look at what I'm seeing here: merchant name, receipt type, it's a meal. Well, that's kind of cool, right? So if there are different types of receipts, maybe it's nice to know that, and based on the receipt type, maybe you have a downstream process that handles them differently. I don't know, but I thought that was kind of cool.
So it realizes it's a meal. So this is the custom model, and the way we do that is: you have to have data in a storage account, you load that data, and then you build a new model. I call this one custom receipt, right here, and you can build a composed model from this. We label our data, we put the model in here, we run training for it, then you can test it and see how well it performs, and you're good to go. And then that model is available via an API. And if you have more than one of these, you can create a composed model. So that right there is the overview of AI Document Intelligence, right? Now I want to go back to the presentation. Let's see. All right. So if we go back here, remember, right, document processing: extract data. That middle piece of extract data is the AI Document Intelligence, but there are other steps or other things you can do to extract data, right? So I want to talk now about some relevant tools and integrations. At a high level you have four steps: source, orchestrate, extract and store. You may have used some of these, like the orchestrate ones, such as Azure Data Factory, that whole factory-looking thing on the bottom that has sources and sinks and things like that. And what I mean to say by this is that you can kind of choose: you'll use one or more in each column, or maybe not, depending on the workflow, right? I wouldn't say this is the whole universe, but at a high level, this is the universe of pro-code tools for these workflows. So the top one, if you don't live and breathe Azure and Microsoft and you don't have these icons memorized, which I don't, I had to Google this one to find it, this is the Graph API, Microsoft's Graph tool.
So, for example, we had a project where we were monitoring an e-mail inbox, and whenever a new e-mail showed up, if it had an attachment, we would extract the attachments and do some further processing. So that was the source of the attachment — and actually we saved them, because we wanted those for later. We used the Graph API, pulled the PDFs off, saved them into a storage account here, and then that would kick off some further processing, right? So you source your data from here. There are other options for source — this is not the only thing possible — but these are some typical sources. This is where the data is coming from: it could be an application, it could be a database, there are lots of things, but in general this is what we’re seeing. Next up is orchestrate. There are different ways to orchestrate. That top icon is AI Foundry. You can use something like a Semantic Kernel process, which is a semi-structured process that you can embed LLMs in — so if it’s a business process that works the same way every time, you can create that process and embed those LLMs in there. You could also use a multi-agent workflow as an orchestrator, where that multi-agent workflow has access to tools, and maybe some of those tools call Document Intelligence APIs or some other things we’re going to talk about later, and maybe those are embedded in an Azure Function, who knows, or it’s an MCP server — there are lots of options there. So you orchestrate the activity with that AI Foundry agent service, and there are different ways to do that. Or you use Azure Functions — maybe the Azure Function is watching a storage account, and based on what shows up, it takes some actions. That’s another way to orchestrate, and that’s kind of what we’re going to look at today. And then the next step: after you have your document — so you have sourced the document — you have to decide what to do with it. 
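The email-attachment sourcing pattern might look like this against Microsoft Graph’s REST API. The endpoint shapes follow Graph’s documented v1.0 routes for messages and attachments; the mailbox address, token handling, and use of `requests` in the commented usage are illustrative assumptions, not the project’s actual code.

```python
# Hedged sketch of sourcing attachments via Microsoft Graph's REST API.
GRAPH = "https://graph.microsoft.com/v1.0"

def messages_with_attachments_url(user_id: str) -> str:
    # List only the messages that actually carry attachments.
    return f"{GRAPH}/users/{user_id}/messages?$filter=hasAttachments eq true"

def attachments_url(user_id: str, message_id: str) -> str:
    # List (and then download) the attachments of one message.
    return f"{GRAPH}/users/{user_id}/messages/{message_id}/attachments"

# Usage (assumes you already have an OAuth bearer token):
# import requests
# r = requests.get(messages_with_attachments_url("billing@contoso.com"),
#                  headers={"Authorization": f"Bearer {token}"})
# ...then save each attachment's bytes to blob storage to kick off processing.
```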
And maybe these orchestration tools use AI to decide what to do. Maybe it finds certain text in the document, maybe there’s a regex pattern in the document, maybe the document has images, maybe the document contains certain text — say, we’re going to delete the document because it has controlled unclassified information and we’re not allowed to process that in Azure. There’s a lot that could happen there, right? A lot of use cases. So we can use those tools and we can orchestrate. But once we decide we want to do something with that document, we use — up top here — Document Intelligence. It looks like a document with the four squares; that’s Document Intelligence, and as we discussed earlier, you have an overview of that. Now, you could use layout, which is generic but gives you structure, which is what I like. Or use a prebuilt, and in that case it probably has most of what you need. You could still use other tools on it later, or an LLM on it later, or you could still put it into a knowledge base later — there’s all sorts of stuff you could do with it; it just depends on your business use case. But you’re going to extract the data, and then you might also apply an LLM to the extracted text and get a structured output. So think of the prebuilt receipt model — we’re going to look at that in a minute. We want that receipt data, but there’s actually extra stuff we want, right? We have business rules that are written down, and when we process our documents, those business rules — as written by humans, not necessarily IT folks — will be pulled out, and the LLM will apply those business rules and then enrich or supplement the extracted text. And when you’re done with that, you can store the output. So maybe you just commit the JSON of the extracted document into a storage account, and then something happens with it. 
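A routing step like the one described — inspect the document, then decide whether to delete it or send it down an extraction path — could be sketched like this. The rule set (a controlled-unclassified-information marker, a receipt keyword) is hypothetical, just to show the shape of the decision.

```python
import re

# Illustrative routing rules: before extraction, an orchestrator can inspect
# a document's text and decide what to do with it.
CUI_MARKER = re.compile(r"\bCUI\b|CONTROLLED UNCLASSIFIED", re.IGNORECASE)

def route(document_text: str) -> str:
    """Return an action for this document. The rules are examples only."""
    if CUI_MARKER.search(document_text):
        return "delete"           # not allowed to process CUI in this tenant
    if re.search(r"\bRECEIPT\b|\bSUBTOTAL\b", document_text, re.IGNORECASE):
        return "extract-receipt"  # send to the receipt pipeline
    return "extract-layout"       # fall back to generic layout extraction

assert route("This document contains Controlled Unclassified data") == "delete"
```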
It goes into that storage account, and then maybe it gets picked up by AI Search because you’re building a knowledge base. Or maybe it’s part of an application and it gets placed in a NoSQL DB like Mongo or Cosmos or whatever. Or it’s structured enough — and we’ve done this as well — that the JSON gets turned into multiple tables and you make entries into a standard SQL database. All up to you; those are all based on business situations, business scenarios, business value, and the bigger downstream process. And so the point is that there isn’t one way to do it. You just have a lot of tools, and based on the tooling you have, the business value, the long-term plans, the way you want to integrate it, or how your end users are going to interact with it — maybe after the store step it becomes part of an application. That’s totally cool. There are just so many options; I feel like I’d have to put every Azure icon on here if we wanted to talk about everything. So the next part is an AI-enhanced extraction demo — I’ll demo it, rather. We’re going to talk about our source: we’ll have documents in that — database, or sorry, the storage account. We’re going to orchestrate extracting that data, use Document Intelligence to extract the data, apply a structured output, and then store the output of that. All right, let me switch back to — let’s see, switch back to — all right, so this is our storage account. In here we have these receipts, and they’re the same receipts we used for the prebuilt, because again I’m trying to show some similarities, right? Or differences. We have the Celius one — I think that was the spicy fish and a Coke. Chowder Hut, I don’t know what that was. Maybe I was — yeah, whatever. 
CVS — that’s our cookie extravaganza — Denny’s, and then Jimmy Buffett’s, right. OK, so you have to play a little bit of make-believe with me right now. We are building — and I don’t know why you would do this, because tools already exist that are SaaS or open source and available — but let’s say you wanted to build your own employee expense management tool, right? And your policies are that, you know, you will not reimburse for alcohol at lunch. OK, sure, that’s an option. Or that your maximum tip is 20% — you realize that servers are really nice, but we don’t want to give them 30%, because we’ve got to be good stewards of company money, right? OK, sure. Or maybe you have a policy that you do not reimburse, no matter what, unless it’s an itemized receipt. Now, if we look at the prebuilts, that stuff’s not in there; we looked at the custom model — that stuff’s not in there. But in what we’ll look at next, those things will be in there, and I’ll show you how we do that. So first we have our receipts, and the next thing is we’ll look at VS Code — sorry, this is a little bit small, let me resize this real quick. All right, don’t want to show my local settings. So this is our function app, and there’s not a whole lot in the function app, right? We’re going to call our extract-receipt-data class, which we’ll look at in a minute, and from that we’re going to extract the payment. We give it a blob URL — so you could think of it as: a blob shows up in a storage account, and the function gets triggered via a queue or an HTTP trigger or whatever. In this case I think it was an HTTP trigger, so one process calls the HTTP trigger and passes in the blob URL. Lots of different ways to do it; it doesn’t really matter. And then you get your response, right? And we can look at this extract-receipt-data class and see what it does. We’ve got some prompting utilities; we’ve got some utilities. 
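One small piece of glue implied here — turning the blob URL passed to the HTTP trigger into a container name and blob name before handing it to the extraction class — might look like this. The URL format is standard Azure Blob Storage; the function name is made up for illustration.

```python
from urllib.parse import urlparse

# Sketch of the glue inside the function app: one process calls the HTTP
# trigger with a blob URL, and we split it into container and blob name
# before passing it along to the (hypothetical) extract-receipt-data class.
def parse_blob_url(blob_url: str) -> tuple[str, str]:
    """'https://acct.blob.core.windows.net/receipts/dennys.pdf'
       -> ('receipts', 'dennys.pdf')"""
    path = urlparse(blob_url).path.lstrip("/")
    container, _, blob_name = path.partition("/")
    return container, blob_name

print(parse_blob_url("https://acct.blob.core.windows.net/receipts/dennys.pdf"))
```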
We have the Azure OpenAI class — it’s really not complex either; we’ll take a look at that — we have our Document Intelligence class, and we have our storage utils. If we take a look at those: for the OpenAI prompting, we’re basically just connecting to the service, right? We’re connecting to the Azure OpenAI service — that’s most of it. And then we take the message that was provided, the system message, plus the data that was passed in from the document, we give it a structured output class, and we get a response. So that’s it; it’s not super complex. For Document Intelligence, we’re connecting to the Document Intelligence endpoint and we’re going to extract some text. We’re going to use the Document Intelligence SDK — this is not custom code, you know: analyze the document. We’re going to use a prebuilt — remember we talked about layout, so we’re going to use the “prebuilt-layout” model — and in the body we’re going to pass the document itself. We get our results and we extract the lines. Now, this part is the way you handle the output — I find this to be pretty reliable. Basically, you generate a JSON document output that has the text of the document and any tables. That’s all it does: it just extracts the text and the tables. There isn’t necessarily any knowledge yet, just structure at this point — so our Document Intelligence extracts the structure, and then we pass that into OpenAI to get a response, and we expect a specific output. Now, the fun part here is the output. This may be hard to read too — let me see if I can zoom in a little bit. The idea is: remember the receipt — there’s the merchant name. So we use this merchant name field and we give it a description. And we’re not training here — this doesn’t require any training. It just requires you to know what you want from the document. 
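The “structure only” step described here — boil the layout analysis down to plain text plus tables, then hand that to the LLM — can be sketched as a flattening function. The input shape below is a simplified stand-in for what the Document Intelligence SDK actually returns, not the real result object.

```python
import json

# Sketch: take a (simplified) layout analysis result and reduce it to text
# plus tables. No interpretation happens here -- just structure.
def layout_to_json(result: dict) -> str:
    text = "\n".join(p["content"] for p in result.get("paragraphs", []))
    tables = [
        [[cell["content"] for cell in row] for row in table["rows"]]
        for table in result.get("tables", [])
    ]
    return json.dumps({"text": text, "tables": tables})

sample = {
    "paragraphs": [{"content": "Denny's"}, {"content": "Thank you!"}],
    "tables": [{"rows": [[{"content": "Coffee"}, {"content": "3.79"}]]}],
}
print(layout_to_json(sample))
```

That JSON string is what would get passed into the LLM call along with the system message and the structured-output class.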
So we want the merchant name — it’s the name of the merchant — and it gives an example. You know, a restaurant number — maybe you don’t care about that, so you delete it, right? We don’t care about restaurant numbers; we’re going to delete that. OK: merchant address, transaction date, transaction time, order number, server name, table number. I pulled most of this out — I’m a little heavy on parameters — but: number of guests, items. Items is actually the table where we’re going to get the list of items purchased in the transaction. Pretty straightforward. Itemized — OK, here’s where we’re starting to put some business rules in. It’s a Boolean, and it’s true if the receipt contains an itemized table of purchases, false otherwise. So if a person submits a receipt and your process analyzes it, it gets the merchant name and prefills all that stuff, and then it says: this receipt is not itemized — you may not submit this receipt for reimbursement. Now, maybe you’re not as draconian and mean-hearted as me and you just give them a warning, right? I don’t know. But that’s a possibility. Itemized? Nope — who knows what they bought. Subtotal, tax, tip, and then I put in tip percentage: if a tip is given, calculate the percentage. And what you could do — I didn’t specify a max tip, but you could have some logic: if tip percentage is greater than some threshold, you warn, or deny the expense, or reduce the reimbursement based off of what a 20% tip would have been. You could do things like that. And then the other thing is additional fees — some restaurants or governments charge additional fees; the amount is usually between the tax or tip and the final total, blah blah blah. So that’s cool. So what we’ve done so far is we’ve loaded our prompting utility, which you just saw, right? 
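The schema-plus-business-rules idea might look like this in plain Python. The field names mirror the ones mentioned (merchant name, subtotal, tax, tip, itemized), but the dataclass and the 20% tip cap are an illustrative reconstruction, not the speaker’s actual structured-output class.

```python
from dataclasses import dataclass, field

# Illustrative version of the structured-output schema walked through above.
@dataclass
class Receipt:
    merchant_name: str
    subtotal: float
    tax: float
    tip: float
    itemized: bool                  # True only if an itemized table was found
    items: list = field(default_factory=list)

    @property
    def tip_percentage(self) -> float:
        return round(100 * self.tip / self.subtotal, 1) if self.subtotal else 0.0

def check_policy(r: Receipt, max_tip_pct: float = 20.0) -> list[str]:
    """Apply the human-written business rules to the extracted receipt."""
    problems = []
    if not r.itemized:
        problems.append("not itemized: cannot be submitted for reimbursement")
    if r.tip_percentage > max_tip_pct:
        problems.append(f"tip {r.tip_percentage}% exceeds {max_tip_pct}% cap")
    return problems

r = Receipt("Denny's", subtotal=20.00, tax=1.60, tip=6.00, itemized=True)
print(r.tip_percentage, check_policy(r))  # a 30% tip trips the 20% cap
```

In the real pipeline the LLM fills the schema from the extracted text; the policy check is ordinary downstream code.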
And the prompting utility is OpenAI, our Document Intelligence for extraction, and our storage account where we want to get and then save the document. And then I created this little notebook so you could see it — you can follow along with me, right? All right, so let’s clear this: restart, clear outputs, and then we’ll run them all. We’re passing it — this is the CVS receipt — and it says CVS Pharmacy, Las Vegas. It’s an LD oatmeal creme, quantity one, and it’s a food or beverage — oh, that’s the other thing I added: maybe you don’t reimburse for alcohol, right? So this is a food or beverage. The extended name — it got this one wrong: “Lynn and Larry’s Omo Cream Cookie.” Well, that’s not totally crazy. This other one, “PFFHSC” — “Pepperidge Farm Farmhouse Oatmeal Raisin Cookie.” That’s pretty darn accurate. Great. So maybe there’s some tweaking we can do, or maybe we can give it some other knowledge, or maybe you don’t really need that. I don’t know — maybe it’s just for fun. Um, itemized — this one says false, although that is definitely not false; that is true. You’re not supposed to see that in a demo. You didn’t see that — it said true. You did not see any false. Let me look — which one was not itemized? It looks like Jimmy Buffett. The Jimmy Buffett one’s kind of fun, because it shows that if you’re at Jimmy Buffett’s Margaritaville, you’re going to be drinking some alcohol, right? So we’re going to look at the Jimmy Buffett one. All right, so it’s in Las Vegas, 6:30 PM — so you could also have a field that says breakfast, lunch, or dinner; you could have that as a field. Appetizer trio — that’s a food and beverage. Landshark — that’s alcohol: large draft Landshark. Oh, sorry, this is the extended name. Jambalaya — that’s some food. How good is the jambalaya at Jimmy Buffett’s? It’s amazing. Then Marg Perfect — that’s alcohol. Bud Light, large draft — that’s alcohol. Fish and chips — food. 
And then Marg Blueberry Pongo. Like I said, these folks were living it up, having fun at Jimmy Buffett’s Margaritaville — there’s a lot of alcohol here. You could even say: give me the subtotal for alcohol versus food and beverage. That’s something you could do — that way you could decrement the alcohol items from the total. It is itemized, so it got that right — thank you for not making me a liar here, AI. And then the last one we’ll look at is Denny’s, because a lot of these don’t have tips on here, which is weird — maybe it’s just the receipt that didn’t have the tip, so I’m not judging anyone, but you should tip if you have a good server. Anyway: Denny’s, coffee, 3.79 — and it says Denny’s Brewed Coffee. Orange Ju— oh, I’m pretty sure that’s orange juice. Yeah. Lumberjack Slam — Lumberjack Slam breakfast platter. Substitute double berry, strawberry topping. It’s pretty cool, honestly. I don’t know, maybe I’m the only one that nerds out on this. Amy Cousland 48:24 Hey Nick, we have a couple of questions here; I just wanted to read them to you. All the examples have been with forms and typed data. If the form has uniform fields but is not typed, how do these extraction capabilities work with handwritten information? Nick Miller 48:28 Yeah, perfect. So there’s a way to extract handwritten data as well using Document Intelligence. We don’t typically see a lot of that, but it’s there. If it’s a form, we use prebuilt layout — and because there may be written things and typed things, you may use both models. It depends on how reliable a solution you need, right? If it’s a really important business process, you’d probably process through the layout model and the handwritten recognition, and if there’s an image, you may even process that image separately through an LLM as well. 
It just depends on what you’re trying to do, but you can use it — the Document Intelligence service has handwritten text recognition. Amy Cousland 49:30 OK, one other question. Are these technologies mature enough for companies to ditch the tech debt — sorry, the tech debt of their legacy document management vendors with their high annual licensing fees? Companies like Hyland, Kofax, OpenText, etcetera. Nick Miller 49:46 That’s a really good question. So we have customers using them with really good results. Honestly, not trying to plug us too much, but hey — remember that complimentary session we mentioned? That is the perfect use case. Let’s get on the phone and talk about how many documents need to be processed, because we have companies using this where they process tons of documents and it’s worth millions in savings, right? And not only that, you get really good results — better results, from what I’ve seen — and it depends, right? I don’t know which documents you’re talking about; we could talk about the specific use case. It may be that the document you’re talking about is ultra-complex, and we’d have to find out. Maybe my answer is “I don’t know,” right? Or maybe you show me some documents and we say, yeah, absolutely we can do that, and then we do a quick proof of concept and say: yep, here we go, this will work. And then, great — if we know it works, you can deploy it. But as far as maturity: Document Intelligence has been around for a long time — a very long time — and it’s been enhanced with AI capabilities for probably three or four years now, maybe more. And LLMs extracting structured output — that’s kind of a standard process these days, right? Azure doesn’t necessarily have that built in; we provide it as part of a solution, but that’s very common. 
We’ve been doing that for years, and so working with document workflows is very reliable. We have one customer that was processing ACH payments. They would get an ACH file, find the ACH amount, go look in e-mail, search through e-mail, find the e-mail — OK, there’s a PDF, and the PDF has vendor information, but it’s a different vendor in the e-mail than what’s in their ERP, than what’s in the ACH file. Oh, and by the way, the invoice number is truncated. And oh, this is in the wrong column. And we went from that person taking three to five hours a day to it taking them 10 minutes, because we put other things in there to help it learn and match and do things like that. So it depends. If you need it to be absolutely robust, we can 100% do that, right? Or you can 100% do that. It just depends on how valuable the business problem, the business process, is. If it’s valuable, we’ll put whatever needs to be in there to make it work. Amy Cousland 52:17 OK, Nick, actually, I’m going to read one more question, and then I’m going to drop off to kind of go on to the next session. You can stay here and answer questions however long people have them for you, but here’s another one for you: does Document Intelligence include any contract signature capabilities? Nick Miller 52:32 Yes, there is the ability to find signatures as well. Amy Cousland 52:38 Well, we’re kind of up on time here, so if you want to go ahead and wrap up — and if anybody else wants to drop any other questions into the chat. Nick Miller 52:44 Yeah, I do have to leave pretty soon here as well. But absolutely, those 30-minute sessions we mentioned — there are no strings attached; it’s not a marriage contract. If you have an interest, if you have a question — we love solving problems, we love working with our clients and even potential clients. So just schedule a session. Amy Cousland 52:49 Mhm. 
Nick Miller 53:07 We may answer some questions, you may get everything you need and go off on your own and do it, or you may find out that maybe we can help you. So I do have to go in just a few minutes here, but on that note, this is the last slide, so we’re pretty much done anyway. Amy Cousland 53:14 OK. Oh, perfect. Nick Miller 53:22 And I just want to say thank you for your time, thank you for hanging out with us and asking really good questions. Let me see if I can see the chat now so I can answer questions on my own without you, Amy — I know you’ve got a job to do. Amy Cousland 53:33 Yeah, I think you got everything. So thank you, Nick, so much — this was a great session, and we’ll be sharing out this recording and slides after the event. I think we can go ahead and end this session, and we’ll see you all in one of the next sessions. Nick Miller 53:50 Thank you, everybody. Amy Cousland 53:50 Thank you. Corey Milliman …how it supports multi-agent design. So that means our agents are no longer confined to working in isolation — they can actually collaborate with one another, orchestrate tasks, and scale in a way that is more aligned with how teams operate. Great. So over this slide deck we’re going to go over some different ways of implementing this. We’re going to go over some live demos today of creating multi-agent systems, along with highlights, best practices, and different things to watch out for. And as a follow-up to this session, if you need or would like to go deeper with your own organization, you can definitely schedule time with us — just 30 minutes — and we’ll be able to look at how Copilot Studio will work in your environment and how to configure some agents that are relevant to your use cases. The goal there is really to give you both a conceptual understanding and practical steps so you can take everything you’ve learned back to your organization. 
So after today, you’ll have a follow-up opportunity to schedule that session, and that link will be dropped in the chat today. So here’s what we’re going to cover. We’re going to start with an intro to multi-agent Copilot Studio. We will look at how to configure and manage these agents. We’ll examine both the parent-child model and the agent-to-agent, or peer agent, model — I’ll show you how they compare and where each makes sense. We’ll walk through a live demo creating these, and then we’ll wrap up with best practices, limitations, considerations, and any questions you might have today. So Copilot Studio has introduced three key capabilities for multi-agent design. First, collaborative copilot design: agents can now collaborate with each other rather than operating in silos, which increases our efficiency. Second, the parent-child hierarchy: that’s new, and a parent agent can delegate tasks to different child agents, which creates a structured workflow that keeps your conversations more coherent. And third, we have our peer-to-peer, or agent-to-agent, model: those connected agents can work as equals, orchestrating across domains to provide a seamless user experience. Taken together, these form a modular team of specialists, each focused on its own role as part of the bigger picture in your organization. So, some of the benefits. We are simplifying complexity, right? Once we split responsibilities into multiple agents, instead of having one massive copilot that’s hard to maintain — the canvas gets too large, we get all of these really large topics — we get smaller, easier-to-manage units. And this specialization of roles actually helps improve accuracy, because each agent is optimized for one task. And then maintainability improves, because updates to one of your agents don’t automatically affect all the others. A lot of times in Copilot Studio, you have some complex topics; you make a change to one topic that has an error, you can’t go ahead and publish that update, and it can actually have a cascading effect on your other topics. And then scalability comes naturally now — we can add new agents to expand capabilities, just like adding team members. So the end result is a coordinated system that grows and works just like a team of people. Now, some requirements to connect and publish these agents. To connect these agents up in Copilot Studio: first, the agents must be in the same environment — if you have multiple Power Platform environments or multiple environments in Copilot Studio, they can’t connect across those different environments. This ensures stability, consistency, and security. Second, each agent has granular controls that allow connections, which prevents accidental linking and ensures intentional use of this agent-to-agent architecture. And third, you have the option to toggle conversation history on and off. If history is passed — so we’re talking to one agent and we take that conversation history — then we’re sending that whole history over to the connected agent, and that agent has all the prior context of the conversation. If we don’t, the agent starts fresh, and there are use cases for that, which might be safer for isolated tasks. And then management is more flexible: we can enable, disable, or disconnect agents without deleting them, so we can have staged rollouts, we can do testing, we can do different things without impacting the main agent’s functionality. So, child agents. A child agent is built inside of a parent. It ensures the user experiences one coherent conversation, with the parent agent always in control. The child agents are invisible to the users. 
They work behind the scenes and they simplify the experience. They can also be event-triggered — say, for example, after an activity, an invocation, or a workflow — and priorities determine which child responds if multiple could handle the same event. So this makes child agents powerful for narrow, specialized functions where you really want hidden helpers doing work, and I’m going to show you some of the settings, limitations, and different options so you can understand where we use connected agents and where we use child, or sub, agents today. Connected agents work a little bit differently. They’re independent copilots that can be reused across the solution, and each individual agent manages its own lifecycle and its own governance — you could have different departments pulling things in from it. And generative orchestration allows the parent to call one or more connected agents at runtime, merging all of those results into one cohesive answer. So the key is to give each connected agent a distinct domain — today we’re going to go over HR and IT — so we can show how orchestration happens across those different domains, and how the agent can select the right domain without confusing the user. So this is peer collaboration. Some key differences — and we’ll go through this in the interface as well as we build these out: child agents depend on the parent for visibility, while connected agents operate as peers. Child agents are hidden from users; connected agents may be visible in conversations, depending on what we set up. Child agents are very lightweight — they’re managed within the parent — whereas connected agents are individual agents that we manage independently. And finally, child agents use triggers and priorities, while connected agents are orchestrated dynamically. 
So which one to use really depends on your needs — how much customization and how much control you need. If you need cross-solution collaboration, that’s really where connected agents come into play. We’re going to spend most of our time today in an actual demo instead of in a PowerPoint deck, so we are going to go over now how to do these. I’m going to stop sharing the deck and go ahead and share my screen here — just to make this easy for everybody, just waiting for Teams. There we go. All right, so now you should be able to see my screen in Copilot Studio. We are going to start out with — I’m calling this Viridian Policy Central. This is my main entry point for people in the organization who are going to start asking policy questions, or ask about different systems across the organization. So in this scenario, when this finishes loading, we’re going to go in and take a look at how this was set up. Again, this is the main entry point. As we wait for this to load ever so slowly today, you will see that there’s no direct knowledge tied to this agent — this agent is set up purely as a router, or is going to be set up purely as a router, and I’m going to show you how we handle that today. I’m actually going to turn off my camera for just a minute while I’m showing my screen, so this moves a little faster — I’ll pop back on in a little bit. Let me refresh, and this should be much better. Copilot Studio is running a little slow today, so just bear with me here. So you can see here it popped up with our test. It’s going to give us our overview here in a second, it’s going to show our instructions, and as those populate, I’ll be able to talk through the different ways we can add new agents to this. 
So after we have published this, we can start adding our agents. Here are my instructions, telling it it’s the entry point for employees seeking assistance and guidance on company policies. I have already created an HR agent that covers all of our different HR policies and allows the user to submit a PTO request, and then I have an IT policies agent that covers our IT policies — password procedures, VPN, backup — and allows the user to open a ServiceNow request. So by using this front-end agent, I can connect these up so that, when it’s asked these questions, it understands how to transfer to these other agents. We’re going to start with the peer agents first. So, once your agents are all created — I’ve already created an IT agent and an HR agent that are both grounded in knowledge docs and have a couple of topics in place to guide user flows. First, when you choose your agent: if you choose “create an agent” — we’re going to do this next; this is our child agent. From here we can choose our Microsoft Fabric agent, which is now supported, or Copilot Studio. Today we’re going to choose Copilot Studio, and then I have a listing of all of my agents that have been published and are eligible to connect up. So we’re going to start with our HR agent — oh, I forgot to turn on generative AI for that agent. So we’re going to stick with our IT agent first. Amy Cousland 11:33 Question for you, Corey. Somebody said: I’m wondering the difference between this and creating an agent in M365 — is it the same? Corey Milliman 11:41 No, it is definitely not the same as creating an agent in M365 Copilot. While we have the agent builder in M365 Copilot, those agents are designed for end users to use for their daily activities. I can share that agent with other people, but I can’t pull it into all of the… 
The compliance and logging, and, say, triggering Power Automate flows and external actions. Those M365 agents are really good for knowledge against documents and answering questions about content — they give you a kind of ChatGPT-like interface where you get really rich responses if you, say, expose them to a document. Copilot Studio agents give us the ability to augment that with external connectors, external tools, and external flows. So, different use cases — you can use them both, but the M365 ones are really limited to the things you get in the agent builder, which does not allow for actions and triggers. I hope that answered your question, while we’re waiting to see if I turned on generative AI for either one of my agents today. Amy Cousland 12:54 Another question — he said thank you. So is this better for connecting apps to each other for communication? I’ve had issues with connecting internal apps in M365. Corey Milliman 13:03 Yes, this is definitely better for connecting those different apps, because I can create topics. We can do this a couple of different ways. With some of the customers I’ve been working with, I’ve actually created an agent for, say, ServiceNow, an agent for ADP — all ADP interactions — or a different external system, and connected all those up through one agent, or through those different agents. Yeah — I didn’t turn on generative AI for either one of these; let me go fix that. The other aspect is that you also have access to your connectors, including custom connectors. So there are a couple of different ways: I could hit those external systems through a specialized agent, or I can hit them through a specialized topic within the agent. So I’m going to go back to my IT agent here and turn on orchestration, because I did not do that — and by default, on some of these, when you create an agent, orchestration is disabled. 
So now that I've made that change, I do have to publish this. While we wait for the publish, I'm going to revisit the questions — did that answer your question about connecting apps to each other for communication, or did you have a follow-up?
Amy Cousland 14:18
Yeah, I think he said it's all good.
Corey Milliman 14:18
OK, awesome. Then I'm going to quickly take care of our HR agent as well and turn on generative orchestration — generative AI, excuse me — while that other one is publishing. One thing to note: if you make changes to any of these agents and they are connected, you want to make sure you also republish them so the connected agents pick up the changes. If I have a master agent with eight different agents associated with it, and I make extensive changes to one of my sub-agents without publishing that sub-agent, the master agent isn't going to know about those changes. Just one thing to keep in mind. Now, while our IT agent finishes publishing, we're going to go back to our main agent, Viridian Policy Central, go to Agents — see, it's running much faster now — go to Copilot Studio, and then do a search and pull in our HR agent. This part is really important: when you connect to an external agent, you have to give it a description so it understands that agent's role, and here it's already done a pretty good job of pulling over the description I had on the HR agent. It has some examples and a lot of my information there. This is also where I flag whether or not I want to pass my conversation history to the agent. One instance where I've seen not passing it make sense is when it understands I want to transfer over to an agent, but the context of the historical question might not make sense there.
Maybe we're directly calling an external application or an interaction to do something — it really depends on your workflow whether you want to pass that through. We're going to pass it here, and you'll see what happens after we add that agent. Now, this agent out here is actually hooked into this HR documents folder as its knowledge source, and out here you'll see we have 40 different documents. I've also created a Copilot for Office 365 agent inside of here which is only grounded to these. So, to answer that earlier question: that agent lets me talk to everything in this folder and gives me that ChatGPT-like experience, while this Copilot Studio agent is still going to provide me information about the Viridian HR policies — but Copilot Studio is not as conversational sometimes. It really depends on your use case. So now we have our HR agent assigned; you can see it says connected. I'm going to go in and edit the connection. There aren't a lot of details: you can see it's enabled, it's available to Policy Central, and this is where we give it its description. We can expand this to 1,000 characters. A lot of times I will point Copilot for Office 365 at some information about my agent and have it write up an optimized description so I don't have to come up with one myself — just different ways of using the AI tools you already have so you don't have to reinvent the wheel. Now let's look at our HR agent so you can see the difference. Our HR agent is grounded in knowledge with our HR documents, and if you go to its Agents tab, it doesn't show that it has been added anywhere — I could add an agent here, but instead we're going to go back to our main agent and start a conversation once we also hook in our IT agent. So now we're going to add our IT agent to this as well.
Again, passing conversation history. Now I'm going to publish Viridian Policy Central. OK — I see another question while we're waiting for this publish: "We have end users that will only be able to access M365. If in Studio we create an agent to communicate with calendar, for example…" So, a clarifying question: do your users have a Microsoft Copilot license, or just an M365 seat? I'm assuming just an M365 seat, so you have two scenarios there. When we publish these agents out and, say, put one on a SharePoint site, you can turn on pay-as-you-go billing, and that would allow anybody in your organization to interact with anything created in Copilot Studio regardless of the licensing they have. That way you get billed per message, and a message is defined as a turn in conversation. So if I say, "I have a question about the IT policies and procedures," that question and its response — that little transaction — would be considered one message if your billing is set up as pay-as-you-go. Did that answer your question? I hope so; if not, drop it in the questions or in the chat, and I'm going to continue. So we do have our HR agent connected and our IT agent connected, and I'm going to turn on our activity map so you can see what goes on in the back end when we start asking questions, and what this looks like to the user. "Can I share my password with an employee that is locked out?" Now we can see that it understood right away that this needed to go to the IT agent. From an end-user perspective, I don't see that — I'm inside the Viridian Policy Central agent; that's what I'm interacting with — and it's saying sharing your password is prohibited, and so on. OK: "I need to open a ticket to reset it."
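Since the billing definition came up in that answer — one conversational turn (user message plus agent response) is one billable "message" under pay-as-you-go — a back-of-the-envelope cost estimate is easy to sketch. The per-message rate below is a made-up placeholder, not Microsoft's actual price; check current pricing before planning around it.

```python
# Hypothetical pay-as-you-go estimate. Each turn (user question +
# agent response) counts as one "message". The rate is a placeholder,
# NOT a real published price.

RATE_PER_MESSAGE = 0.01  # placeholder rate in USD

def monthly_cost(turns_per_user_per_day: int, users: int, days: int = 30) -> float:
    messages = turns_per_user_per_day * users * days
    return messages * RATE_PER_MESSAGE

# 5 turns/day x 200 users x 30 days = 30,000 messages
print(f"${monthly_cost(5, 200):.2f}")
```

The useful part of the model is the unit: because a multi-step topic exchange is several turns, chatty flows cost more than single-answer lookups under this scheme.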
So inside of my IT agent, I have a topic set up — another topic that launches based on what the agent understands. I'll show you that topic, but it's very basic: it's going to ask for the title of my IT issue, ask for some information, and ask for a description of the incident — very basic for today's demo. If I had this connected to ServiceNow, I have a send-to-ServiceNow step here, so we don't need to actually trigger that for today's demo, but this shows that we were able to start with Policy Central, have it understand the request, and bring it over to the IT agent. So if we go back to that IT agent now, I can show you how that was done with topics. I created a Create an IT Ticket topic, and the agent chooses how that topic gets launched. This is where the description comes into play: if I have descriptions on my topics and leave the trigger as "the agent chooses," then when we're passing conversation history or passing intent between all of our agents, Copilot Studio is going to know which agent and which trigger is being invoked, and therefore which topic to eventually drop the user into. We started with our central agent, I said I wanted to open a ticket, and it understood from the main agent that the topic I needed was this one we're looking at here — Create an IT Ticket. So for those topics, you want as many triggers as you can to be chosen by the agent. That way, when you're passing context history — if I go back to my main agent at the very beginning of my conversation and say, "I need to open a help desk ticket" — it traverses our agents, understands it's supposed to go to the IT topic based on the context, and goes right to that topic. The user doesn't even see it; the user is still in the Viridian Policy Central agent. Here we are. So, another question:
"Can you ask in a different language even though the source files are in English? Would the agent be able to make the appropriate translations?" We can turn on different language packs for Copilot Studio agents, and it doesn't matter if the source is in another language, because on the Copilot Studio side, when we add these knowledge sources, it builds a semantic index of its own, which is similar to the vector embeddings you'd use with a large language model. It does that for us on the fly, so we don't have to manage those embeddings ourselves. An embedding, as it relates to AI, is a structure called a vector — a long numerical representation, 8,000 values or more depending on how you're embedding information — that captures context and intent. So while language is used to build it, the index still understands context and intent: when we interface in a different language, it might not find the exact phrase the user used, because it's in a different language, but it can understand the intent. Now, if there are highly technical documents or phrases it can't handle, there are things you can do in Copilot Studio to make it better at responding in different languages. But yes, out of the box we have some functionality for supporting different languages — understanding the source content and responding to the user in their language. It will respond to the user in their language, but it will not natively translate the document that's being presented as a reference. If you also want to provide a translated version of a policy document, you would have to build out a Power Automate flow and some advanced topics so that you're presenting the actual documentation in a language the user is comfortable with. I hope that helps.
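The semantic index Corey describes rests on one property of embeddings: vectors that point in similar directions mean similar things, regardless of the surface language. Here is a toy sketch with hand-invented 4-dimensional vectors — real embedding models produce hundreds or thousands of dimensions, and every number below is made up purely to illustrate the geometry, not taken from any real model.

```python
import math

# Toy illustration of why embedding search is language-agnostic:
# if a model maps an English query and its Spanish equivalent to
# nearby vectors, nearest-neighbor search over the (English) policy
# chunks finds the right passage either way.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

policy_chunk   = [0.9, 0.1, 0.3, 0.0]    # "Employees accrue 15 PTO days..."
query_english  = [0.8, 0.2, 0.3, 0.1]    # "How much vacation do I get?"
query_spanish  = [0.85, 0.15, 0.25, 0.1] # "¿Cuántas vacaciones tengo?"
query_offtopic = [0.0, 0.9, 0.0, 0.8]    # "Reset my VPN password"

for name, q in [("en", query_english), ("es", query_spanish), ("off", query_offtopic)]:
    print(name, round(cosine(policy_chunk, q), 3))
```

Both the English and Spanish queries land close to the policy chunk while the off-topic query does not, which is exactly why the index can match intent across languages even when it "might not find the actual phrase."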
So here I'm going to ask "How much PTO do I get per year?" against the HR agent — and Carlos, if that didn't answer your question or you have a follow-up, go ahead and drop it in chat and I will come back to it. Now, this made a mistake because I referenced PTO, and I did this on purpose. I have an agent with a topic called Request PTO, which is meant to start a PTO request. When a question like "How much PTO do I get per year?" launches that topic instead, it means the topic needs some refinement so it only fires on actual PTO requests — it shouldn't launch every time we see "PTO." We can then go modify it in our HR agent to understand our intent better. You can see this one by default said "this tool can handle queries like these," so when I asked how much PTO I have, it inferred that was what I was talking about. So we make that modification, I publish that sub-agent, we close this, and then go back to our main agent to show how that changes — I hope that other agent is already published; we'll find out. This one is still a little stuck on PTO, and this is where you would have to refine one of these topics. I didn't expect this topic to be absolutely perfect; I really did want to show some of the limitations we do have when we're doing agent-to-agent. We can change how our HR agent behaves by adjusting its different topics. And one way we could have fixed this from Policy Central's side is to not pass the conversation history over, to show what happens when we don't. These limitations and gotchas are listed in the PowerPoint deck as well, which will be shared with everybody if you want it.
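The PTO mix-up demonstrated here — the Request PTO topic firing on "How much PTO do I get per year?" — comes down to a trigger description that is too broad. The refinement can be sketched as a tiny classifier: only launch the action topic when the utterance looks like a request, not merely when it mentions PTO. In Copilot Studio the "classifier" is the LLM reading your topic description; the keyword heuristic below is just an assumption-laden stand-in to show the distinction.

```python
# Sketch of trigger refinement: the "Request PTO" topic should fire
# only on action-like phrasings. Informational questions fall through
# to knowledge search instead. Cue words are invented for illustration.

ACTION_CUES = {"request", "submit", "file", "take", "book"}

def should_launch_pto_topic(utterance: str) -> bool:
    words = set(utterance.lower().replace("?", "").split())
    return "pto" in words and bool(words & ACTION_CUES)

print(should_launch_pto_topic("I want to request PTO next week"))  # True
print(should_launch_pto_topic("How much PTO do I get per year?"))  # False
```

Writing the topic description so it encodes this "action vs. question" split is what keeps the agent from dropping every PTO mention into the request flow.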
But the deck does go through some of these scenarios and exactly what we're doing here to refine them. One other thing I'm going to start on, since we're coming to the bottom of the hour: I'll take one more question, and then I'm going to show you the difference when we use child agents instead of agent-to-agent. So here: "How much PTO do I earn per year?" Now it is searching knowledge first, and it comes up with my PTO policy. That's an example of how we need to refine our topics, questions, intents, and prompts throughout the agents so it understands our intent a little better and doesn't trip us up by going to the wrong topic. You can control that a little more easily if everything is in one agent, but when you're using multiple agents, it takes more work with topics, prompts, and intents to make sure we're hitting the right destination. So now I'm going to go back to my main agent, and we're going to look at child agents and see how this is a little different. I'm going to turn off the HR agent and the IT agent, and publish these quickly while we're making changes. Then I'm going to choose Add an agent — and this time, Create. Oh, it didn't publish; I moved too fast. So we're going to go ahead with our sub-agent here. This is our sub-agent now, and we're going to create an HR sub-agent. When will this be used? We can define a few different triggers here: the agent chooses (again, we're defining a description so it understands HR), a certain message is received, or a custom client event occurs — that could be a button, an adaptive card, or wherever it's invoked. And with a child agent — this is helpful —
We can actually redirect. So — I'm going to discard this — in my parent I can set up topics, and as you go through a Copilot Studio conversation, I can transfer to another topic or send it over to another event. We're going to let the agent choose this one. Go ahead and discard; we'll leave the trigger based on the description: HR policies and procedures (ignore my spelling mistakes). We do have some advanced options where we can bring across variables, identifiers, and other pieces of information if we know we want to use them. So there are different ways we can do this. One thing to think about: what if I wanted to create a sub-agent that was solely responsible for helping a user create a ServiceNow ticket? In some environments we've done that as a side agent — an actual agent-to-agent, or peer-agent, setup. We can also do it here, because we have tools available. Now, we aren't building out topics in our sub-agents, so all of our settings for this live here. Another way this is helpful: if I have a very complex agent scenario, instead of building out 10, 15, or 20 topics and figuring out how to orchestrate between all of them, I can add sub-agents — bringing that down to maybe a few child agents — and it's going to understand what to hand over to each. So if we're doing a lot of work with external tools, here I have instructions I can give this agent. And what's interesting as you get into these multi-agent scenarios is that you can use the slash key to insert different elements directly into your instructions. For example: "Always start the user with…" and then a greeting; or automatically transfer them to a tool, if I had one set up; you can also reference out-of-the-box topics and variables.
Then of course we can start kicking off Power Automate flows. So we have a lot of controls we can add to give the agent more interaction — calling out different sub-agents and different tools right in our agent instructions: "Every time you receive an HR request, call this tool." Those are different ways we can do that. So I'm going to provide some very general instructions here, and I'm going to add knowledge. This is a separate knowledge source for this agent — I'm going to pull in my HR documents. Also, look at your descriptions: you have 1,000 characters, and you can use an AI prompt there as well. While we're doing this, I just looked up and saw another question about companies working under ISO guidelines, where exact wording matters. That's where Copilot Studio really helps, because Copilot Studio doesn't generate the same kind of super-verbose message we might get elsewhere. With Copilot for Office 365, I can ask a question about a document and it will follow up with "Would you like me to create a comparison for you? Would you like to learn more about this?" — it's going to infer a lot more content. Whereas we want our Copilot Studio answers to be grounded in truth: we don't want to expose anything that's not inside a document. So in my Copilot Studio instructions, I always turn off web search and give very explicit instructions so it is not using general AI knowledge and is only exposing the knowledge in your documents — that way it can pass those strict guidelines. So: I wrote very short instructions, we have our knowledge, I'm going to save this and show you the front end. Now I'm going to go back to my agents — let me show you what happened here.
On the main screen you can see that HR documents has been added as knowledge to this master agent — even though we didn't add it directly, adding the sub-agent surfaced that knowledge source to our parent agent as well. Nothing changed with our topics; we still have our custom and system topics. And when we look at our agents, the difference is that this one now shows as a child agent, triggered by the agent — and I can't just click through to it from the menu; I only get delete. If I want to modify it, I click it directly instead of right-clicking and choosing from that menu. So now, back to our HR questions. What is our policy on — boy, I can't think of an HR policy right now — "What types of leave do we offer?" We kept it very short, and you can see it went to the HR sub-agent and started looking through the knowledge sources. This is a way of really segmenting: you can see I have all of my different documents and policies that it's referencing directly — it is taking quotes directly from my documents. It says "for more details, check with your HR department," gives a telephone number, and of course I can go read those documents. Now, for the difference in response: I'll go over to my HR agent that was published directly as a Copilot for Office 365 agent, and ask the same question — "What types of leave do we offer?" — just to show the difference between Copilot Studio and Copilot for Office 365. Audrey, I think this speaks to the question you were asking as well. You can see it's going a little slowly, but it's running purely on the SharePoint side — this is not tied to Copilot Studio — and it is showing me information from my documents.
"…and if you'd like help initiating a leave request or understanding eligibility, I'd be happy to assist." This one just says "for more details, go here." And there are no instructions written for this agent right now — this is a Copilot for Office 365 agent created directly in SharePoint, to show that you can have a conversation with documents there. It's going to be a bit more verbose unless we give it different instructions, and it does a really good job of expanding. The Copilot Studio agent, by contrast, is very factual: it does not hallucinate, and it gives you these more direct correlations, which are really helpful for policies and procedures — things where we're not inferring context. So, awesome — I'm glad that helped, Audrey. You can see that we went through and built our sub-agent, and from here we could now create that specialized sub-agent that is only for interfacing with ServiceNow, or a sub-agent that is called every time a user wants a status on their tickets — segmenting out these different connections in different ways. The difference is subtle here: now we have our connected agent, and if I go to my instructions, you can see at the bottom that I have different abilities here as well, where I can tell it to go to specific topics based on the instruction I'm giving it. This is where I can say, "if you hear this word, transfer to this agent." So if we want to get really, really granular and say this agent is responsible only for this domain, we can. And this has just been turned on — the slash command appeared in some environments only in the last week — where, even though I have a broad engineering topic, I can choose specific items within that topic: maybe one is our standard for how we manufacture widget A, widget B, widget C.
That way we can really narrow those conversations down to widgets A, B, and C without creating all of these disparate, disconnected agents — just one way to do it. So I think that covers most of what I had. I can show one more — we're coming up on time — but adding another child agent here (I'll discard my changes) would be the same: Create an agent. One thing to point out: if I create these child agents, then this Viridian Policy Central cannot also have an agent-to-agent connection with another agent. We can't connect Viridian Policy Central to a peer agent while it has child agents, right now — and some of these limitations are changing. This is still in preview; some of the features and functions I saw last week are different this week, so you'll have to keep an eye on how these continue to evolve. But they are moving really quickly, and they are really helpful for avoiding very complex topics for understanding intent, routing, and all of that. So now I'll bring you back to the PowerPoint just to make sure I didn't miss anything today. And again, if you have any follow-ups — Amy, did you drop that in our… Let's try that again.
Amy Cousland 42:46
Yeah, I don't see it. I see it. I see it there.
Corey Milliman 42:49
You do? I don't even see the slide deck — I see nothing, so I can't tell what it's showing you right now.
Amy Cousland 42:58
You have it on the first slide.
Corey Milliman 43:01
Well, we're going to… not share that anymore. Sorry, everybody on the call — I work with Copilot, not so much Teams.
Amy Cousland 43:08
No, that's OK. Actually, if anybody has any questions, they can drop them in the chat. We will be sharing these slide decks after today's presentations for anybody who's interested.
Corey Milliman 43:33
Here, I'm just going to jump in this way and share my screen — it might not be pretty, but I want to make sure we've covered everything today and can show some of the other items. We'll go to the slide show from the current slide. All right.
Amy Cousland 43:34
Perfect. There we go.
Corey Milliman 43:51
So again, guidelines. Make sure you define clear roles. Make sure your descriptions are very strong, very unique, very precise — I recommend using Copilot or Copilot for Office 365 to help you draft and refine those. Use child agents selectively to prevent conflicts: make sure your child agents aren't overlapping and actually cover different domains, so you don't have conflicts where the same information lives in three different agents — how is it going to know which one to respond with? It may pull the information from all of them. Make sure you have fallback responses and handling for dead ends. You saw this in our demo today — and it was one thing I did on purpose: it said "creating a ticket, ServiceNow ticket send happening here," and we didn't have anything after that to bring the user back to the conversation or to a clean close. So when you're using these other agents, you have to think about: am I ending the conversation? Am I transferring to another topic with a clean handoff? Am I bringing the user back to my master agent somehow? What does that look like? Make sure you've defined those. And know the redirect limitation: your redirect nodes cannot directly hand off to child agents, so you can't just go into a topic and say "redirect to child." When you're using child agents, there is also the risk that citations are going to disappear.
I've seen that happen sporadically with agent-to-agent and agent-to-child, depending on whether we're passing context. So know that you might have to tweak instructions to explicitly require the agent to respond with the citation; if it still doesn't, you can add a step in a topic — in the general conversation flow — that builds out the citation list as a separate step to make sure you get it. Agents cannot be connected in multiple places, which restricts complex multi-hop chaining — you can't chain one connected agent into another connected agent — so we do have limits there. And again, as I mentioned, these features are in preview: there are changes, and there were big announcements on other ways Microsoft is enhancing the agent ecosystem. It's moving very quickly, so be ready for change. Child agents for specific tasks, connected agents for collaboration — and you can combine those models for scalability depending on what you need. This makes your Copilot agents easier to maintain, adapt, and expand. Again, if I have a unified agent running for one line of business in my organization and I want to keep adding new features, that's not an entirely new agent I have to keep developing and re-testing to bring the original back to QA level — I can leave the main agent alone and just bolt on new functionality. I've also put together some resources for anybody who wants to dive deeper: some excellent resources on these multi-agent scenarios, video walkthroughs, demos, and of course links to the official Microsoft documentation. And if you want to dive deeper with somebody from Concurrency, we'd definitely love to spend 30 minutes, jump in deep, and show what these can do for your organization — Amy did drop the booking link in the chat.
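The "keep child agents from overlapping" guideline in that recap can even be sanity-checked mechanically before publishing. This sketch flags pairs of agent descriptions that share several distinctive words — the descriptions, the stopword list, and the threshold are all invented for illustration; the real signal in Copilot Studio is how an LLM reads your descriptions, so treat this as a rough lint, not a guarantee.

```python
# Rough lint for overlapping child-agent descriptions: flag pairs
# that share two or more distinctive words. All data here is made up.
from itertools import combinations

STOPWORDS = {"and", "the", "for", "of", "to", "a", "policies"}

def tokens(desc: str) -> set:
    return {w.strip(",.").lower() for w in desc.split()} - STOPWORDS

def overlapping_pairs(agents: dict, threshold: int = 2):
    flagged = []
    for (a, da), (b, db) in combinations(agents.items(), 2):
        shared = tokens(da) & tokens(db)
        if len(shared) >= threshold:
            flagged.append((a, b, sorted(shared)))
    return flagged

agents = {
    "HR": "HR policies, PTO, benefits, leave requests",
    "IT": "IT policies, passwords, VPN, ServiceNow tickets",
    "Helpdesk": "ServiceNow tickets, password resets, VPN issues",
}
for a, b, shared in overlapping_pairs(agents):
    print(f"{a} <-> {b} overlap: {shared}")
```

Here "IT" and "Helpdesk" both claim ServiceNow, tickets, and VPN — exactly the kind of conflict where the orchestrator can't know which agent should answer.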
So with that, I'll go back — does anybody have any additional questions? If not, I know we're coming right up on time, so thank you for your time today. If there are no questions, I'll hang out here and wait.
Amy Cousland 48:07
Thank you, Corey.
Corey Milliman 48:08
Thank you.
Amy Cousland 48:11
We'll give it a minute here, and if there are no questions, we'll go ahead and end the session.
Corey Milliman 48:13
OK. Absolutely.
Amy Cousland 48:18
Thank you, everybody, for joining us.
Corey Milliman 48:19
There — I'll even turn my camera back on, as I promised I would now that I'm not sharing anything on my screen. So thanks for joining today, everybody. Much appreciated.
Amy Cousland 48:22
I don't see anything popping in there, so we'll go ahead and close this out. Thanks again and have a good afternoon. OK, bye.
Corey Milliman 48:37
Good afternoon.
Mac Krawiec
Well, as of now, everybody — it's been a few seconds into noon — so good afternoon. My name is Mac Krawiec, and I'm super happy to host you for our last session of the day, AI Apps and Agents in Action. A little bit about myself: I'm a senior software engineer here at Concurrency — I've been here for almost five years now. Day to day, I'm an individual contributor and one of the tech leads on our projects. Feel free to connect with me on LinkedIn — I checked that QR code, so I expect at least one or three of you to connect with me today. A fun fact about me: I'm actually in Europe. I had some pre-planned travel, and Monday night I flew out to Poland, so I'm in Poland hanging out, and I'm super glad to be with you. And with that, I'll hand it off to Ann from Microsoft, one of our guests here today.
Ann Britt 1:10
Hey everyone, Ann Britt — I'm super happy to be here. I'm a Senior Digital Solutions Engineer at Microsoft specializing in the AI workforce: low-code solutions, agentic capabilities, and copilots.
And through that, I really drive innovation and focus on digital transformation for our customers. I'm really happy to be here to support this conversation and talk to you all about this awesome topic.
Mac Krawiec 1:37
Yep, thank you. And I hope you can see my little laser pointer — Ann, I don't know if you see it flying around. Awesome. So for today's agenda: we just went through the intro, so we can check that off.
Ann Britt 1:49
Yes.
Mac Krawiec 1:54
We have a bit of a shameless plug coming right up. Then we're going to get into an overview of Copilot Studio — some of the audience may or may not be familiar with how it works and what it is. We're going to talk through some use cases and some of the challenges that come with Copilot Studio, Azure AI Foundry, and that entire stack in general — both organizational ones I've seen and my own. Then we're actually going to spend most of the time just building, and we'll get to what we're building next. We'll have another shameless plug and then just talk. I hope to keep us engaged and have some back-and-forth — I know it's the last session, but I want to make sure we make the best of it. For that shameless plug: we do offer a complimentary session to talk all things AI apps, ideation, anything like that. So if you're interested, our team will be happy to hang out and talk with you. With that said, really quickly: what is Copilot Studio? It is Microsoft's tool for building custom AI copilots. Well, what is a copilot? Copilots are effectively assistants powered by artificial intelligence that can take conversational instructions and then integrate and work with apps, websites, Teams, and more.
When I first started using it, the whole conversational aspect was hard to grasp and blew my mind a little, but the more you work with it, the more comfortable you are. You really get to integrate AI within your business, and the plethora of connectors — and now MCP servers — that are available make that super easy and seamless. In general, one of the focuses of this talk is automating workflows: making it so that AI is within grasp of your organization's tool belt. That said, there are things you have to be careful about. With the advent of AI, everybody has a hammer and everything looks like a nail, so you also have to be cognizant of where to employ it — we'll talk about that a bit. Now, within Copilot Studio, there's a studio. What is that? It's basically a UI that helps us design and manage the environments our agents live in. It's very akin to Power Platform; as a matter of fact, it's embedded inside of Power Platform. The same elements you have in Power Platform — environments, pipelines, managed and unmanaged environments — all apply to Copilot Studio, and building your agents in a low-code fashion in Copilot Studio is just as seamless as building a canvas app and things like that. A lot of the authoring, the integrations, the managing: super easy to come to grips with. Now, I saw this image at Build — I had the great pleasure of going to Build this past year — and I thought I would talk about it a little. We're here in this left-most box, and we're going to stay within this box for the most part, but we'll give honorary mentions to the others throughout this presentation.
Now, the reason why we’re here and we’re gonna stay here is because I only have 15 minutes, really 30 including the demo, and I could talk about this for hours on end, problem solving and going through things as we integrate Copilot Studio. We create an agent in Copilot Studio; we might create an Azure AI Foundry project with an agent inside of it using GPT-4o or something like that. We’re not going to get into all of that; we’re really going to focus on Copilot Studio. Just know that throughout our use cases and the more enterprise-designed implementations, you might have all or some of these elements. Oftentimes you really have at least one or two, right? Actually, you might have all four of these, where if you’re using Copilot Studio to surface your Teams agent and give some instructions there, well, then on top of that you’re likely using an Azure Function to add some more pro-code, more custom, lower-level kind of work, and obviously you’re using Visual Studio for that. You’re using the Foundry SDK, and you’re surfacing that all through an agent in Azure AI Foundry. So this stack, I didn’t realize at the time when I was sitting there at Build, is hugely impactful and hugely relevant today, and we are literally living this image. If you are building AI apps and making anything within the Microsoft tech space, you’re more than likely living this stack. Ann, did you have anything to add to that? Ann Britt 7:12 No, I would just say, Mac, that, you know, I know you’re going to get into some of this later, but when you look at the platform story, which is what you’re telling here, there’s real cohesion in your builds and in your security and your data structure. So I would just say: the nod to this, it’s super relevant and timely, so I’m excited how you teased this out. Mac Krawiec 7:36 Yeah, for sure. So let’s talk a little bit about some of the real use cases you might see.
The first one is the most common, just because all business needs to make money, and in order to make money, for the most part, all business needs to have sales. And part of the sales process is going back and forth, quoting and negotiating. Well, with the advent of AI, the automation of a sales quote process, and general CPQ, has been made hugely streamlined and within grasp. Previously we would work with clients that would have to curate their data, sometimes for years, to build enough data to train custom models and use machine learning. Now, with AI being effectively somewhat of a commodity, it is within grasp to accelerate the sales process of your business using sales quote automation. And just to let you know, our demo is gonna be about that, so really big focus here. Some of the things that we’ve done and I’ve seen: field technician support. Imagine your company has a ton of field technicians and they need to reference documentation or schematics or diagrams, and they don’t wanna carry a huge textbook with them when they’re servicing some sort of equipment, for example an AC unit; we’ve built things that accelerated agents in the field using AI that’s trained on documentation. Manufacturing, IT and operations, where you’re using something akin to the site reliability engineer, which is an AI augmented into the Azure portal that allows you to monitor what your Azure performance looks like and what errors you have, and even employ fixes right away, within seconds. Logistics, right? Logistics is fast-paced, and the more AI you can apply there, the better. My father is an owner-operator, so logistics is pretty close to my heart, and I can always think about what the real-life use cases are. Even sales quoting, where the dispatcher goes to the broker and you’re starting to negotiate, is all relevant. And finance:
We’ve actually worked with a few companies where we’ve reconciled accounts payable using AI and accelerated those processes; there’s a ton of different use cases there. And retail and e-commerce, right? I’m sure you’re seeing it today. I mean, even just the fact that you’re saying something when your phone is nearby, you’re talking about a new phone case, and all of a sudden your Amazon store has phone case suggestions; that’s one of those examples. So those real-life use cases are all applicable. Ann, did you have any other thoughts, comments or anything that you wanted to add to this slide? Ann Britt 10:29 Yeah, I love that you hit so many different industries. You know, we have case studies in so many of these. So for example, manufacturing is a great one: we’ve seen a 40% reduction in incident response time through autonomous monitoring on manufacturing lines. It’s pretty crazy what we’re seeing. And sales is another really good one. At Microsoft, we did a study when Copilot was released to our sales groups, and we see that our sales reps are recouping on average about 90 minutes a week. And at scale you can imagine what that looks like in terms of ROI and just the efficiencies that are gleaned. So great examples that you pulled out. Maybe I’ll give one more. You mentioned financial services, and this one sticks out to me: 60% faster loan processing time via intelligent document processing. I don’t know about all of you, but for me, dealing with a bank in any way, shape or form is tedious. So if you could reduce the time, I’m excited about that. Mac Krawiec 11:27 I’m living it right now. I’m currently waiting for my house to be built and I just went through the mortgage process, and I can tell you, I know my parents went through this when they bought their house, and it is much quicker now. So I hope that this company is also using AI. Certainly applicable.
I did hear of AI being used in manufacturing, to your point, in fault detection, or predicting faults. So for example, if a company is using drills, they’ll use AI coupled with IoT to determine that a drill is going to go Ann Britt 11:55 Mhm. Mac Krawiec 12:02 bad. And what that allows you to do is to never have downtime, because you’re going to preemptively replace those drills, and a lot of this stuff is hugely impactful to the bottom line. So yes, I need to give the sneak preview: we’re going to focus on sales quote automation. I just wanted to let you guys know what we will build. A lot of my presentations are F1 themed; I’m a huge fan of Formula One. So what we’re gonna build is a Formula One sales quoting tool, and we’re gonna use Copilot Studio to detect new quotes, find the requisite product, and we’ll talk through that. We’ll assemble a quote in some human-readable format; before that, within the product section, we’re gonna determine what the pricing is and what inventory is available; and we’re gonna send the quote to a client. You can see I have quite a few asterisks there, and part of this, and we’ll get to that in greater fashion here in a second, speaks to what Todd was talking about in the keynote, I don’t know how many of you were there: the confidence rating and trust. One of the things that we can do in Copilot is send emails directly; we can create drafts; we can reply directly. So we’ll talk to those capabilities. Now, the reason why this is asterisked is because I’ve worked with companies where sending a quote directly from AI, at least at the beginning, is not something that they are willing to risk. It is business, it is sales, it is money. Especially as the adoption rate grows, you want to make sure that you have that human in the loop, right? Todd did mention that human in the loop.
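To make the workflow being described concrete, here is a minimal plain-Python sketch of its first step: deciding whether an incoming e-mail is a quote request and sending a quote only when it is. The keyword list and function names are invented for illustration; in Copilot Studio this decision is delegated to the agent's natural-language instructions, not hand-written code.

```python
# Illustrative stand-in for "detect new quotes, then respond".
# QUOTE_HINTS and handle_email are hypothetical names, not Copilot Studio APIs.
QUOTE_HINTS = {"quote", "pricing", "rfq", "price"}

def is_quote_request(subject: str, body: str) -> bool:
    """Decide from subject and body whether the e-mail asks for a quote."""
    text = f"{subject} {body}".lower()
    return any(hint in text for hint in QUOTE_HINTS)

def handle_email(subject: str, body: str, send_email) -> bool:
    """If it's a quote request, build and send the quote; otherwise do nothing."""
    if not is_quote_request(subject, body):
        return False
    send_email(f"Quote for your request: {subject}")
    return True
```

The human-in-the-loop concern from the slide maps onto `send_email` here: swapping a draft-creating callable for a direct-send one is the safe starting point.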
So the reason why there’s an asterisk here is that we’re going to be sending that e-mail quote just for demo purposes. That said, though, I would not recommend doing that straight away. That’s something that you probably get to after you’ve built enough confidence and you’ve got enough test data to prove that you have, you know, 90-some percent accuracy, or whatever threshold a company may feel comfortable with. But I did want to throw that out there: you do want to be cognizant of those things so as not to erode trust. And with that said, organizational challenges. The first one, what I think is one of the most prevalent ones, and because I’m on the implementation team, on the delivery team, the one I run into first most often, is data readiness and quality. A lot of companies want to get into AI, and we’ve done this, where we’ll have some low-cost, dip-your-feet-in-the-water solution just to prove out that the use cases work for Company X or Company Y. That said, though, the ultimate benefit is somewhat stunted, or at least there is a law of diminishing returns that applies, when your data is not curated and not clean enough for AI to operate on. We’ll get to that in the demo, where we’re going to show our data source for this information. If every database in every company used data that was that clean and that simple to understand, AI would have a much easier job. But that’s not the real world. So that is one of my chief challenges. Legal, compliance and governance: we were just talking about this, and I’ll let you speak to that a little bit, Ann, since you’re probably familiar with it a lot more than I am. Ann Britt 15:38 Yeah, I think, you know, this is a topic that everyone wants to double down on. You hit it: I think the number one part of the concern is data, right?
We’re concerned about our data being leaked, whether it’s things that are under NDA, or PHI or PII; there’s critical data that can be stored in some repositories. And so with the advent of agentic capabilities, with LLMs sitting over these things, all of that can be exposed, and it becomes a bigger concern. But Mac, you touched on this: what it really comes down to is being smart about our overall security posture, and Satya said it best: without security and our zero-trust architecture, the rest of it doesn’t really even matter. We could be the most innovative AI company on the planet, and we are, but without having that security posture, without having proper data protection and governance, without properly looking for technology and tools that are gonna help you, like DLP, it really can become a little bit of a nightmare situation. But luckily we do have all of the tools to bring to the table to help with that, and really there is then that business process behind it of making sure you clean up your data. That’s important in two ways, right? If you’re moving towards an agentic world, you’re right, you want to have that really clean data. But also, agents learn differently, and the way that data is searched is different: it’s in vectors. And so the data needs to be clean everywhere or you’re really getting half the story. It’s like playing a game of telephone and the message getting kind of watered down as you go along, if you can imagine that. And the same is then true for the security of that data, if we don’t have things properly classified or shared. So I hope that perspective helps, Mac, and I’ll turn it back over to you. Mac Krawiec 17:30 Yeah, thanks for that. And with the legal and compliance aspect of it, I wanted to make sure I gave you a chance to talk, just because, again, we were talking about this.
I’ve had client implementations that were slowed and put to a stop because the legal department just wanted to make sure that they did their due diligence, and rightfully so, to make sure that their business is secure and the data is too. But it is definitely one of the challenges that I want to draw attention to. That is why, and if I actually go back to one of the slides here that showed the stack, this is why you’re seeing this box around Azure AI Foundry. Specifically, with security and governance, Azure AI Foundry has that security aspect top of mind, making sure that you can take all these aspects within the stack and still leverage the things available to you in Azure to ultimately secure it better. So I just wanted to give that a mention. Trust is a big one. That’s why you don’t send quotes right away; that’s why you might want to check them; that’s why you set thresholds within your organization to ensure that the AI is doing what it’s supposed to. And that is exactly why there are certain specific automated tests that you can run against your Azure AI Foundry agents, that you can code. If you create an agent in Python, and part of it is deploying that to Azure AI Foundry within ADO, then within certain specific implementations you can test a previous deployment against a new deployment to ensure that the answers are what you expect them to be, both from a safety perspective, making sure that the AI isn’t doing something nefarious, and also from the output in general. Culture is another big organizational challenge. You don’t see this in consulting companies, but you definitely see it in some industries where things just don’t need to change, or at least that’s the perception of some of the members; some of the members feel threatened. So that’s a challenge. And then perceived cost and scalability.
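The idea of testing a new deployment against expected answers can be sketched as a tiny regression harness. This is a hedged illustration only: the `agent` here is any callable standing in for a deployed endpoint, and the case format and threshold are invented, not an Azure AI Foundry or ADO API.

```python
# Hypothetical gate for agent deployments: re-ask known questions and
# require each expected substring to appear in the new deployment's answer.
def evaluate_deployment(agent, cases, threshold=0.9):
    """cases: list of (question, expected_substring).

    Returns (pass_ratio, gate_passed); a release pipeline could fail
    the rollout when pass_ratio falls below the agreed threshold.
    """
    failures = [(q, expected) for q, expected in cases
                if expected not in agent(q)]
    pass_ratio = 1 - len(failures) / len(cases)
    return pass_ratio, pass_ratio >= threshold
```

In practice `agent` would wrap an HTTP call to the candidate deployment, and the same cases would have been recorded against the previous deployment.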
So AI is such a huge topic, and a lot of the conversation around it is: oh, it’s expensive, tokens cost a lot of money. And to some degree that’s true. However, there are features rolling out, within Azure AI Foundry, for example, that choose the best model at the best price that can solve the problem at hand, which surfaced at Build, or, more recently, what I’ve seen in Copilot Studio, still in preview: the efficiency capabilities, which we’ll get to, which again choose the best model within the Copilot Studio stack to ensure that answering questions is cheap. That’s one of the considerations. And then the other one is really: just start small, don’t go big. The projects I’ve worked on that have led to the most success within the AI space always started with a very small proof of concept, within a given use case for the business, that solved the problem in a very simple way. But really, it started to show that, hey, this is possible, and to build confidence. So all of these challenges exist. All the companies that we’re talking about or seeing or hearing from are going through them, and all of them are solvable. So that’s definitely a big one. Personal challenges; we’re about to get into the demo here, hang on. A personal challenge for me, really, at the beginning: I’ve been coding for a while now, and I’ve always been pro-code. Front end, back end, data, everything. The whole conversation of low code versus pro code is wrong. This should not be a versus; this is low code with pro code. With the advent of Copilot Studio, how it’s growing, and how agentic apps and agentic development are going, this is no longer a versus. This is totally a with. You should get used to it as developers and you should get used to it as architects.
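The "best model for the best price" idea mentioned above amounts to cost-aware routing. Here is a minimal sketch of that routing logic, with an entirely made-up model table and capability scale; the real preview features do this selection inside the platform, not in user code.

```python
# Illustrative cost-aware model router. Names, capability scores and
# per-1K-token prices are invented for the example.
MODELS = [
    # (name, capability score, cost per 1K tokens in USD)
    ("small-fast", 1, 0.0002),
    ("mid-tier",   2, 0.002),
    ("frontier",   3, 0.02),
]

def route(required_capability: int) -> str:
    """Return the cheapest model whose capability meets the requirement."""
    eligible = [m for m in MODELS if m[1] >= required_capability]
    return min(eligible, key=lambda m: m[2])[0]
```

A simple FAQ answer would route to `small-fast`, while a hard reasoning task would only be eligible for `frontier`, which is the whole cost argument in miniature.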
And the sooner you do, the better off any product that you’re working on will be, because it is here to stay. And really, when those two things work together, the solution is overarchingly tons better. I’m seeing this; I’m literally working with a client today where we’re building something that has elements of low code and elements of pro code, and that is becoming more and more prevalent. So I will definitely say that it’s been a personal challenge for me to realize this is not a versus, this is a with. Certain preview features are not yet ready for the enterprise-level stuff that I would like to see be more available. Just for example, some of the testing capabilities; certain features that are fundamental are still in preview. For example, some of the deployments of Copilot Studio agents from one environment to the next using pipelines still aren’t functioning the absolute way that you’d like. Well, that’s because it’s in preview, and we’re still using it and it’s still great, but it’s still being worked on. One of the things that isn’t fully ready yet: I recently tried to use managed identity authentication within an agent flow and I started running into issues. I started researching and I found out that there was no documentation, and an MVP was like, well, just use this. And so we’re all working on this. We’re all in this together, we’re all figuring it out, and the more we do that, the better it’s gonna get. But it is something that I’ve had some challenges with, though nothing that you can’t get over. We’re engineers; we solve problems. That’s our job. That’s why we’re here. And then just fundamental changes in approach. I’ll tell you, one of the things that pained me the most.
It’s understanding that now not everything is going to be exactly the way that I coded it. You know, if I’m writing a method that does a certain thing, that method is going to do the same exact thing every time, unless I make a change to it and tell it to do something else, and my tests are all going to succeed in the same way every time. Getting over that mental hurdle, that hey, I’m a developer and not everything is going to be the way that I would always want, that’s a big change. For example, the responses may vary just a tad between one another; we’ll see that the AI is going to take some liberties, and you might have to refine it. And there’s this whole prompt engineering thing, which, again, when I first heard “prompt engineering”, I thought it was this fancy thing. I was like, oh my God, I could never do this. And then I realized, OK, it’s just a fancy word for being very specific with your instructions, and understanding how all that works. So those are my personal challenges; those are things that I ran into. Any comments, Ann? Ann Britt 24:55 Yeah, for sure. I think, you know, it’s interesting. So I come from the customer side; I’ve been with Microsoft for almost two years now, and I used to get immensely frustrated, and I still do, even internally, about the availability of some of the feature set, what’s in preview, or maybe why things are taking longer. But I mentioned before, right, that security is first. A lot of what you’re seeing now in some of these, I would say, delays in release to core product is really because we’re hardening our posture as we’re learning and as we’re growing in this space. The other thing I’ll say is, and Mac, I’m sure you’ve seen this too: in my career, I have never seen a technology move this fast, ever before. So I say that to say one of my personal challenges is learning, right?
The speed with which this changes, the integrations that are available, the APIs that exist that never existed before; you know, we’re learning about whole new protocols and ways to share and orchestrate these agents. So I think the learning is maybe something else that I would add, but that also excites me, because it means we certainly won’t be bored for a very long time. Mac Krawiec 26:00 Yeah, for sure. I mean, it is exciting. The fact that we even get to play with these things in preview is awesome. One of the things that has come in really handy, and you mentioned learning, so I just thought of it, is an agent that I built that was connected to the Microsoft Learn MCP server, which is free to use. Ann Britt 26:08 Yeah. Mac Krawiec 26:19 And that makes learning a lot easier, because that MCP server is obviously going to give you the latest information, and all the embeddings are built on the most recent data that’s in there. So with that said, to anybody here: if you’re dabbling with Copilot Studio, maybe the first thing might be to build yourself a very quick agent that answers questions related to Microsoft documentation using the MCP server. That way you don’t have to search through tons and tons of articles. It’s super helpful. But yeah, we’re in this together, we’re learning, we’re doing, so I definitely recommend that. Ann Britt 26:51 Yeah, that’s awesome, Mac. Build yourself agents, right? Don’t wait for a big enterprise agent. You could automate your whole entire day. It’s a beautiful thing. Mac Krawiec 26:59 Yes, I have co-workers that have built agents that are scheduled to pull the latest information from certain popular blogs to make sure that they have the latest learnings. Use this in your day-to-day. Since we were talking about use cases, this is just a more personal use case for developers, not necessarily enterprise-grade stuff.
But totally helpful to our day-to-day. With that said, let’s build. We’re going to go ahead and jump in. Let me change my screen here. I apologize, I am down to one monitor now that I’m across the pond, so I’m nowhere near as professional of a setting as I would normally have. Bear with me here, friends. Where we are is copilotstudio.microsoft.com, which, if you navigate to it on your own tenant, you’ll probably land in the default environment for your company. As I was mentioning, what we’re gonna be building is an F1 sales automation agent. Really, where we start is we go straight into agents, and let’s just build along; we’ll talk through it. If you guys have any thoughts or anything that you’d like to add, just throw that in the chat. Or any questions that you have, feel free to just fire away, or hold them for the discussion, and that’s OK too. We’re just going to go ahead and start with our new agent. If you’ve never seen Copilot Studio before, there are two ways that you can create an agent. One is where you actually use conversational language to configure your agent, and you can tell it, hey, I’d like my agent to do X and do Y; or you can configure it straight away. I am a little bit more stringent, so I prefer to configure it on my own. So we’re going to call this the F1 sales agent, and this agent is going to assist us with creating quotes for F1 team parts. Now, I have some instructions prepared that I think will work best for us, so we’ll go through them here briefly and then we’ll throw them onto our screens. So here, just one second: “You are an agent”, and we’ll go through this. One of the things that I think, well, actually I know, will improve in due time is the ability to expand... oh, this was not here. I swear to you this was not here like three weeks ago.
So yeah, I’m glad that I can expand the chat, or the input box, because that was something that I had a problem with. Ann Britt 29:33 Speed of light. That’s what I’m saying, Mac. It’s crazy. Mac Krawiec 29:47 But anyways: you are an agent designed to interpret emails. When an e-mail comes in, you should do the following, right? So for our agent, the way the workflow is going to work is the workflow of many actual companies. Some of the companies I’ve worked with told me that a lot of their business comes in through e-mail, and as a matter of fact, the first person to respond gets the business. Which is huge, because if I build an agent that’s gonna respond to you within, say, 20 or 30 seconds, which is the kind of accelerated response time we’ve seen for our clients, then you’re gonna win the business, and that’s monumental for making sure that the sales process goes smoothly and that the ROI is there. Typically a sales representative for a lot of companies takes, in the best-case scenario, 15 minutes to get a quote. But if you’re on PTO, or if you’re at lunch or whatever, it might be an hour or two, and by that time somebody else will respond. Well, if your agent can respond within 30 seconds, that’s a lot better than even 30 minutes. So we’re going to go through the emails, we’re going to scan my e-mail box, and we’ll see how that goes. So: when an e-mail comes in, you should do the following. Determine if the e-mail subject and body hint that the e-mail is about a request for a quote on Formula One parts. If it is, send an e-mail with the quote information, including a list of all the quoted parts, using the send-an-e-mail tool, which we will build. If not, do nothing. And the fact that I can write this in two sentences:
Again, to my developer mind, it blows my mind, because when I wrote it and it did it, it was just completely mind-boggling. The next one is: use the knowledge from a SharePoint folder, F1 parts, to determine what the cost of a given part is. If a part is not on the list, or the on-hand is 0, please let the customer know that the part is not available. We’ll see that; we’ll add that knowledge base, we’ll add that directory. This is an example, and I actually put this instruction here on purpose, to get us thinking about the integration, right? If we’re thinking about the way that we commonly do things right now, there are a lot of APIs flying around, and not everybody’s using MCP servers yet. Well, JSON payloads are flying around the world all the time, everywhere, a ton of them. You can have your agent spit out the results, and I’ve done this before for companies: if you integrate it in this way, you can have it spit out a specific JSON schema, which is hugely important, given that you can then basically integrate with just about anything. You can create a topic, or create a tool which uses an agent flow that then calls an API, and you can send that API request with this payload; and, oh by the way, the agent will know to provide that payload automatically and just do it. So that’s the third instruction. And then the fourth one: when invoking the send-an-e-mail tool, the “to” should be the “from” of the trigger e-mail. That’ll make a lot more sense when we get through it. We’re not gonna be using the reply, we’re not gonna be using the draft; we’re just gonna send a new e-mail to the person. So those are the instructions that I wanna give it, and they are very specific. And as you build agents, a lot of these instructions, whether you’re working on them within the configuration of your tool, and we’ll get to what a tool is, or within the configuration of your agent, are very prevalent. So the next thing that we’re gonna do is use knowledge.
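The point about having the agent emit a specific JSON schema is worth making concrete. A minimal sketch of the consuming side, using only the standard library: the field names and expected shape here are hypothetical, invented for the example rather than taken from any real downstream API.

```python
import json

# Hypothetical quote-request shape the agent is instructed to emit.
# A real integration would mirror the downstream API's actual contract.
REQUIRED_FIELDS = {"customer_email": str, "parts": list}

def parse_agent_payload(raw: str) -> dict:
    """Parse the agent's JSON output and check the expected shape,
    so the API call that follows never receives malformed input."""
    payload = json.loads(raw)
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(payload.get(field), expected_type):
            raise ValueError(f"bad or missing field: {field}")
    return payload
```

Because the agent is told to produce exactly this shape, the same validated dictionary can then be handed to a topic, an agent flow, or a plain HTTP call.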
I actually have a document prepared, and I’ll show the team here: a very simple CSV file. I kept it pretty simple just to save us some time. A very simple CSV file that shows part numbers, the manufacturer, whether it’s Sauber or Ferrari, and what the part is. And I mean, a front wing, there’s a ton of parts that go into it, so again, we’re keeping it simple; a clutch is not just a clutch, it happens to be thousands of other little subcomponents. And then the units of measure, some of them are a little bit off, but that’s OK; the price per part, and they’re really lowballing Formula One here, trust me; and then the on-hand available. So that’s a little bit of a sneak peek at the data. And so what we’re going to do, and let me share my screen here, is we’re actually gonna add knowledge. That file already exists; I have it uploaded in SharePoint, and you can add knowledge and choose to add knowledge from Dataverse and so on; those are the typical Microsoft ones, but there’s a ton of other ones that you’ll see once I create this agent, and you can obviously add knowledge from public websites. What does doing this do? The agent, when it attempts to answer questions or interact with the user, will go ahead and use whatever sources you specify here to provide the answer. In our case, we’re gonna use SharePoint, and I’m going to navigate to a SharePoint site that we have, along with a directory. One second here; I have an F1 parts list available, and I’m going to go ahead and add that. And I’m going to say, you really want to be intentional here and make sure you write good descriptions. Back when I was developing, descriptions were kind of optional. Well, not “back”, I’m still doing that, but you know, they were optional. Now they’re really not: you really have to be very intentional with what you say. And so: this document holds all the parts that are available to be sold when a customer requests a quote.
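The lookup the agent performs over that knowledge file can be sketched in a few lines of plain Python. The CSV below is a made-up miniature of the file described (part number, manufacturer, part, unit of measure, price per part, on hand); the real file lives in SharePoint and is read by the agent, not by code like this.

```python
import csv
import io

# Invented sample mirroring the columns described in the talk.
PARTS_CSV = """part_number,manufacturer,part,uom,price,on_hand
FW-01,Ferrari,Front wing,each,120000,2
CL-07,Sauber,Clutch,each,45000,0
"""

def quote_line(part_number: str) -> str:
    """Price and availability check for one part, per the instruction:
    unavailable when missing from the list or when on-hand is 0."""
    for row in csv.DictReader(io.StringIO(PARTS_CSV)):
        if row["part_number"] == part_number:
            if int(row["on_hand"]) == 0:
                return f"{part_number}: currently unavailable"
            return f"{part_number} ({row['part']}): {row['price']} per {row['uom']}"
    return f"{part_number}: not in catalog"
```

This is exactly the behavior the second instruction asks of the agent, which is why clean, well-described source data matters so much.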
So that’s the description, and that’s the context that the agent’s going to have for this file. The next thing, and we’re actually not going to do any suggested prompts for now, is we’re actually going to create it; this is doing a lot of the stuff for you and taking care of things. The next thing that we’re going to work on is triggers. There are a few ways that you can surface an agent, right? There’s purely and entirely automated, kind of like an automated flow in Power Automate or a logic app, but there are also channels. So if you really wanted to expose your agent to the outside world and to users, these are all the channels that are made available. And believe you me, this used to not be this expansive a list. So it’s moving, and one of the most common ones that we’re doing today for clients is oftentimes Teams and Microsoft 365 Copilot. Everybody wants an agent in Teams, so this is very common, but you can put it just about anywhere else that you’d like. With that said, we’re actually not gonna surface this to anybody, just because we’re gonna keep it internal. The one thing I wanted to show you guys is that now that this agent is created, there are some other knowledge sources that you can use, like Azure SQL, ServiceNow, Salesforce. So we’re really going beyond just the typical Microsoft Dynamics ERP kind of thing with Dataverse; we’re expanding and our wings are opening. So there’s a lot more; we’re bringing everybody in, and that was one of the key things that Satya mentioned during Build: Microsoft aims to be the everything for everybody, opening up and adding more and more things to the catalog. Ann Britt 37:23 Yeah, Mac, sorry, I would just say one point to add to that, just to maybe punctuate a little bit: really, what that allows you to do is maximize ROI on tools you already have.
Mac Krawiec 37:41 Are we there, Ann? Ann Britt 37:42 It’s truly going to allow our customers to maximize their investments. Mac Krawiec 37:46 Yep, perfect. Sorry, I think I had a lapse in connectivity and I thought it was you, but it was me. Amy Cousland 37:53 Yeah, Ann, momentarily we couldn’t hear you. Just want to have you kind of repeat what you said. Ann Britt 37:58 Yeah, essentially that those integrations are allowing our customers to maximize their current investments. Mac Krawiec 38:07 Yes, absolutely. Being able to connect and integrate with more things matters: if you’re siloed into using just Azure SQL and nothing else, or just D365 Sales and nothing else, then you’re less likely to use this. So the fact that we can use this for everything and everyone, and the toolbox is expanding: the more, the better. One of the things that I just want to talk about really quickly is that you’re seeing several cards here: tools, triggers, agents, topics, and suggested prompts. What is a tool? A tool is effectively an integration piece. You can add various tools, you can add flows that are built in Power Automate, you can add MCP servers that are made available, whether it’s Salesforce or whatever else, and we’ll look at that. You can use agent flows, which are flows that you can create specifically in Copilot Studio; a plethora of different things. You can call APIs directly through those automated flows. Triggers are the extensibility points to making all this happen on a trigger, believe it or not: received an e-mail, new opportunity created, new record; again, various triggers. We’ll take a look at that. Agents: I’m sure if you’ve attended other sessions, there was this talk of multi-agents, right? And everybody hears multi-agents and it’s like, OK, well, great. Really, if you think about it, this is kind of a change where an agent almost becomes a method.
Or a function in code, where that function, if written correctly, is single-purpose, and in the ideal architecture is reusable and very modular, kind of like a Lego block. Well, this is the agentic answer to that. Now you can have agents that, for all intents and purposes, serve a single function, and are then called by other agents. Using low code, in this case, you can leverage AI orchestration, which lets the agent decide, based on the context of the question and the instructions we provide, which other agent to call upon. That’s the low-code approach. The more pro-code approach, sometimes employed for more granular or more specific implementations, is orchestration within Semantic Kernel, which is now being integrated with the Agent Framework, and which lets you use more specific patterns like sequential orchestration. Arguably, for more enterprise implementations of a sales process, a sequential orchestration within Semantic Kernel may be the more appropriate approach, but for the intents and purposes of a demo, we’re going the low-code way. That’s just to touch on the fact that agents can now become functions rather than entire implementations, so think about it that way when you approach multi-agents. And then there are topics. If your agent is surfaced to a channel, be it Teams or Slack or whatever, you can add topics that are very procedural and step-by-step, and you can maintain those, because every agent gets its own set of default topics. Believe it or not, I often find myself, for my use cases, disabling some of these basic ones or some of these system ones, because
That’s just noise that not everybody necessarily wants, so if my agents are built to solve a very specific business problem, I’ll disable some of the default topics or delete them altogether, and then enable the ones I’m interested in, which have that more rigid approach. And then suggested prompts. If you’ve ever worked with ChatGPT, and now Copilot has it too: when you start writing, little blurbs pop up above your input box to anticipate what you’re thinking. If you want to provide that for the end user, this is where you add those suggested prompts. With that said, we’re going to move along a little quicker because I want to make sure we have enough time. Quick time check: oh my goodness, we don’t have enough time. We’re going to build this quickly. For our tools, we’re going to really quickly add a tool, and our tool is going to be an Outlook 365 connector. This is very similar to Azure Functions, and it’s going to be "send an e-mail." We’re going to configure this tool and create it with my connection, acting on behalf of me, mac@concurrency.com. You can use other types of connections, but in this case we’re using mine, and I’ll configure that really quickly and dig into my tool. I apologize, I do have a bit of a thing prepared, just to make sure we can move along faster. And this is again where descriptions matter, and this was the gripe. I know, Ann, this one doesn’t expand yet; that’s the one I want to expand. Ann Britt 43:24 I’ll call a guy, Nick. Mac Krawiec 43:24 Thanks. Actually, for all intents and purposes, I’m going to skip ahead, a trick I learned from the Build presenters: I have the tool we were building already built, so we’re going to go through that just to save time. I know that.
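[Editor’s note] The agent-as-function idea Mac describes, single-purpose agents composed by an orchestrator, can be sketched in plain Python. This is a hypothetical illustration of the pattern only, not Copilot Studio’s or Semantic Kernel’s actual API; all names and prices are made up:

```python
from typing import Callable, Dict, List

# Each "agent" is a single-purpose function taking and returning a state dict,
# the Lego-block idea: small, reusable, and callable by other agents.
Agent = Callable[[dict], dict]

def parse_request(state: dict) -> dict:
    # Split the e-mail body into requested item names.
    state["items"] = [s.strip() for s in state["email_body"].split(",")]
    return state

def price_items(state: dict) -> dict:
    # Hypothetical catalog; in the demo this data comes from a CSV knowledge source.
    catalog: Dict[str, int] = {"Alpine intake": 1200, "Red Bull MGU-H": 40000}
    state["quote"] = {item: catalog.get(item) for item in state["items"]}
    return state

def run_sequential(agents: List[Agent], state: dict) -> dict:
    # Sequential orchestration: a fixed call order, unlike AI orchestration,
    # where a model decides at runtime which agent to invoke next.
    for agent in agents:
        state = agent(state)
    return state

result = run_sequential([parse_request, price_items],
                        {"email_body": "Alpine intake, Red Bull MGU-H"})
```

Swapping `run_sequential` for a model-driven dispatcher is what separates the pro-code sequential pattern from the low-code AI orchestration used in the demo.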
I want to just get through and hopefully save a little time for questions, so we’re going to talk through the tools we have available and the triggers. I’ve selected, in this agent pane, the agent I built alongside to mimic this approach. One second, it’s thinking. Let me hard refresh here, maybe now I’m having technical difficulties from the Internet perspective. And sure enough, nothing a hard refresh (Ctrl+R or Cmd+R) can’t fix. We’re going to go into the Mac test bot, and the Mac test bot, for all intents and purposes, is exactly what we were building here today. The knowledge we chose is the exact same file, the CSV file we had there. The instructions for the agent are the exact same, and we’ll talk a little about the tool. We configured a tool, which I called "send an e-mail," and the instructions I gave it are, again, very specific: when drafting e-mails in Outlook, align things neatly and make them easy to read; keep a professional tone; for all formatting, use HTML. This is important: surfacing in Teams versus in e-mail might be different. For Teams I literally just try to make it nice and readable using markup, but for e-mails we want HTML, nice and clean. For quoted items, create a table displaying all the details, and it’s going to do that using HTML. At the end, thank them for the request for the quote. The company generating the e-mails is Concurrency. If there is more than one quantity of an item, include a total column for that row; that makes it explicit that each row might have more than one quantity. At the end of the e-mail, state the total sum of the dollar amount. So, very specific. And the inputs of this tool are the To, the subject, and the body.
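[Editor’s note] The formatting the tool instructions ask for, an HTML table of quoted items with a per-row total and a grand total, can be sketched like this. The function name, item names, and prices are made up for illustration; in the demo the agent generates the HTML itself from the CSV knowledge source:

```python
# Sketch of the e-mail body format the instructions describe: an HTML table
# of quoted items, a total column per row, and the sum total at the end.

def quote_email_html(rows):
    """rows: list of (item, quantity, unit_price) tuples."""
    body = "".join(
        f"<tr><td>{item}</td><td>{qty}</td>"
        f"<td>${price}</td><td>${qty * price}</td></tr>"
        for item, qty, price in rows
    )
    total = sum(qty * price for _, qty, price in rows)
    return (
        "<p>Thank you for the request for quote.</p>"
        "<table><tr><th>Item</th><th>Qty</th><th>Unit price</th>"
        f"<th>Total</th></tr>{body}</table>"
        f"<p>Sum total: ${total}</p><p>Concurrency</p>"
    )

html = quote_email_html([("Alpine intake", 1, 1200),
                         ("Red Bull MGU-H", 1, 40000)])
```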
And notice I told the agent that the To for this tool is the From of the trigger, and I chose "dynamically filled with AI." When you do that, the agent will know on its own what the input is, and that’s crazy to me, that it just does that. So that’s the actual implementation of sending an e-mail, and again, much easier than anything I’ve ever had to do; you no longer have to use Graph or anything like that. And then the trigger. The trigger is actually just in Power Automate, a very conventional trigger. I did add some tidbits to it from a safety perspective, and you’ll see that here in a second. The trigger is "when a new e-mail comes in" to my inbox, and then, for each To recipient, I check that the sender isn’t also the recipient. That’s because when I was testing this, I was sending from my Concurrency e-mail to my Concurrency e-mail, and I was in recursive hell. So I added that check to make sure we’re good. Then, if everything is hunky-dory, we send a prompt to the specified copilot for processing. That specified copilot is obviously our bot, and I’ve already disabled all the other topics, which I didn’t really need to do, because we’re not surfacing this anywhere. I did not add any other agents; this is a very single-purpose demo agent. What I’m going to do now is show you a few examples. Give me just one second. I created a throwaway e-mail account, primarily because I’ll be sending myself an e-mail. Here you are: I have a Proton e-mail. I’m going to send it to my concurrency.com e-mail, title it "RFQ, request for quote," and say I need one Alpine intake, one Red Bull MGU-H, and one Mercedes wheel rim, and I’m going to go ahead and
Send this e-mail to myself. One thing I wanted to show the team: if I open Outlook, just so you know I’m not lying, I don’t have the e-mail yet; I’ll have it as soon as it comes in. And here’s what we can do in terms of observability, which is the really nice part: you can go into your trigger flow. I got the e-mail, I just saw it on my phone, and if I refresh here: "RFQ," right, it came in from me@proton.com. Exactly what I need. You can see the run of the flow, kind of like in Power Automate: the flow was triggered 8 seconds ago, it ran for three seconds, and it called the agent within those 3 seconds. Very simple, folks. The next thing you can see is activity for an agent, where you can see where your agents were last executed. Now, I’ve been getting e-mails while we were on the call, and because I gave it instructions not to do anything with them, it hasn’t done anything. This one was my test right before today, just to make sure everything still works. But this is our run, 1249, and I’m going to open it, and this is cool because, for observability’s sake, you can retrace the individual steps of an agent and see the inputs and the outputs. I’m a big inputs-and-outputs guy. We have our trigger: it got our e-mail content, it used the knowledge source, and, in its JSON output, it used the CSV file to determine the price and the quantity. And by the way, it determined that the Mercedes wheel rim is not available; there’s only an Alpine wheel rim. Then it went ahead and sent an e-mail. So I’ll look at my mailbox, and I got an e-mail from mac@concurrency. As you can see, the instructions were followed pretty closely: nice HTML formatting, "Thank you for the quote," very professional. The Mercedes wheel rim is not available.
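[Editor’s note] The safety check Mac added to the trigger flow, making sure the sender isn’t also a recipient so the agent’s own outgoing mail can’t re-trigger the flow, boils down to a simple guard. A minimal sketch, with hypothetical addresses:

```python
# Sketch of the trigger flow's recursion guard: for each To recipient,
# confirm the sender isn't also the recipient, so the agent's own replies
# can't re-trigger the flow and loop forever.

def should_process(sender: str, to_recipients: list) -> bool:
    sender = sender.lower()
    return all(sender != r.lower() for r in to_recipients)

# Mail from the throwaway address to the work inbox is processed...
assert should_process("me@proton.com", ["mac@concurrency.com"])
# ...but mail the flow would send back into its own mailbox is skipped.
assert not should_process("mac@concurrency.com", ["mac@concurrency.com"])
```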
The only wheel rim listed is the Alpine, not the Mercedes; kind of the same exact response we were talking about. We have one quantity of this, one quantity of that, each with a total. The sum total is 43 grand. "Thank you for the request." Guys, it didn’t take us long to build this, and when I built it the first time, it didn’t take me long either. It’s a pretty rudimentary approach to sales processing, and for the more complex approaches, obviously, it’s a little more work and you might have different implementations, but it’s done, and it’s using the data. I was, I mean, amazed at how great that was. Huge kudos to the Copilot team, but that’s basically the end of our presentation. I do have just a few more slides. I did say we’d have a second shameless plug: if you want to talk through more use cases and how we can solve these problems with the more enterprisey approaches, here you are, there’s a chance. And with that, without further ado, we have a discussion. Don’t just go away unless you have to; if you want to stay back and chat, I’m here for it. Anything I missed? Any thoughts, questions, comments, if you want to start the conversation? Ann Britt 51:29 Yeah, awesome demo. Thank you so much. One of the things you touched on is that right out of the gate you have better than an MVP. I’m curious, maybe others on the call: who’s doing this already? What’s the maturity level? Anyone want to throw something in chat? Mac Krawiec 51:31 Um. Ann Britt 51:53 We’ve seen so many really interesting use cases coming through. Maybe I’ll throw one or two of those at you, Mac, while we’re waiting for people to warm up. One of the ones I think is really neat is an internal use case.
We had an A/B group in customer service, those with and without Copilot, and our customer service folks with Copilot are 12% faster on average, and that’s across 11,500 folks in that customer service area. So it’s a really cool one, just watching agents handle entire cases. Mac Krawiec 52:27 Yes. And so that’s the customer service approach. There’s also the SRE agent, again from the more pro-code perspective; I’ve literally seen it revert bad builds after a deployment. If something was bad and it needed to swap the deployments, it went right back to the previous build that worked. I use GitHub Copilot every day; matter of fact, I’ve done webinars on GitHub Copilot, and I can tell you the ROI on GitHub Copilot for developers, when they’re using it right, because if they’re only using it for completions, that’s really not the whole shebang, is huge. Think about it: even if it saves you half an hour every day, that’s a lot of time over the course of a year, for you, your team, your company, and your clients. Abraham had a question: would these agents be able to differentiate between spam and normal e-mails, and what would the flow look like? So yes, they can, and I think we’ve seen an example of that even today, where I gave the instructions to look in the body and the subject for contextual clues that make this a quote request. That already filters the input: it’s not going to bother with the e-mail I got from my coworker double-checking that I’d responded to something, which you literally saw in the inbox, and it didn’t process it. But as far as spam goes, say somebody’s spamming you to try and give you fake quotes.
We can certainly work that in: with some more specific prompt engineering, you can have more detection. I think what you might have in a full-fledged implementation is an agent dedicated to just that, detecting spam based on certain criteria; really, bot handling. My brother is a data engineer for a huge third-party travel company, and they handle bots based on behavior and activity on their website. You can totally detect that. It’s very difficult, but not impossible. So I think that would be more of a pro-code approach, just because you might want to use some more interesting techniques, but at a POC stage, AI could totally solve at least the 90th percentile of those, I think. Ann Britt 55:00 Yeah, I would agree. And it’s going to learn from the context around your organization as well, which I think is something to consider. Yes, we can give it very specific rules, but we can also ask it to use those context clues. So that’s awesome. Mac Krawiec 55:15 Yeah, especially given that the trigger is basically a flow. If you really wanted a more deterministic style rather than just AI, your flow could check a DNB list, a do-not-do-business-with list, and filter out those e-mails, while the more agentic instructions could be targeted at more specific use cases. Great question, though, Abraham. Thank you. Any other questions? Any other comments, tears, fears, or maybe experiences you’ve had? Feel free to chime in. I know we’re over, and a few people have already left us, but that’s OK. All right, guys, well, it seems like that’s it. If you need anything, again, we have those sessions, but you also saw my QR code for LinkedIn. Feel free to connect, feel free to chat. If you want my e-mail, I’m also happy to give that out.
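[Editor’s note] The deterministic pre-filter Mac suggests, checking senders against a DNB list in the flow before any AI processing, could look like the following. The domain names are hypothetical placeholders:

```python
# Sketch of the deterministic pre-filter idea: before any AI processing,
# drop mail whose sender domain is on a do-not-do-business (DNB) list.

DNB_DOMAINS = {"fake-rfq.example", "spam-quotes.example"}  # hypothetical list

def passes_dnb_filter(sender: str) -> bool:
    # Compare only the domain part of the address, case-insensitively.
    domain = sender.rsplit("@", 1)[-1].lower()
    return domain not in DNB_DOMAINS

assert passes_dnb_filter("buyer@customer.example")
assert not passes_dnb_filter("bot@fake-rfq.example")
```

A check like this keeps obvious junk out of the agent entirely, leaving the prompt-engineered spam detection for the ambiguous cases.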
Always happy to talk shop with anybody. But again, thank you: first of all to Ann, thank you for being here and talking with us, and then to you, the audience, as well, thank you for attending and taking the time out of your day to listen to me speak. Ann Britt 56:37 Thank you, Mac. 56:39 Thank you.