
View Recording: Virtual AI Summit – Beyond the Hype Keynote

How AI Agentic Workflows Transform Enterprise Productivity

Organizations across Chicago, Milwaukee, and Minneapolis are grappling with how to adopt AI in a way that improves performance without adding complexity or risk. In this webinar, futurist Todd McLees explains how AI agentic workflows help teams blend automation with human judgment—unlocking speed, precision, and trust at the same time.



You’ll learn how top organizations redesign processes for AI agents, where humans must stay in the loop, and how Concurrency applies Microsoft and ServiceNow technologies to build enterprise-ready agentic systems.

WHAT YOU’LL LEARN

In this webinar, you’ll learn:

  • How agentic AI differs from traditional automation and why workflows must evolve
  • A practical framework for deciding which tasks to automate, augment, or keep human-led
  • Real-world use cases from healthcare, finance, and enterprise operations
  • How to build skills for the “agency economy” and prepare your workforce for AI agents
  • Why human judgment is still essential—even as AI surpasses human capability
  • How Microsoft Copilot Studio and ServiceNow workflows support agentic design

FREQUENTLY ASKED QUESTIONS


What are agentic AI workflows and how do they differ from basic automation?

Agentic AI workflows involve AI systems that can reason through steps, use tools, reflect on output, and collaborate with humans. Unlike one-shot prompts or simple automation, agentic workflows create multi-step reasoning loops that support planning, decision-making, and continuous improvement.
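The multi-step reasoning loop described above can be sketched in a few lines. This is a minimal, illustrative toy, not any vendor's API: the `one_shot`, `reflect`, and `revise` functions are stand-ins for LLM calls, and the string checks stand in for a real self-review pass.

```python
from typing import Optional

def one_shot(task: str) -> str:
    """A single prompt in, a single answer out -- no revision loop."""
    return f"draft answer for: {task}"

def reflect(answer: str) -> Optional[str]:
    # Toy critique standing in for an LLM self-review pass.
    return "missing sources" if "sources" not in answer else None

def revise(answer: str, critique: str) -> str:
    # Toy revision standing in for a tool-assisted rewrite.
    return answer + " [revised: added sources]"

def agentic_loop(task: str, max_steps: int = 3) -> str:
    """Plan -> act -> reflect, repeating until the output passes
    its own review or the step budget runs out."""
    answer = one_shot(task)
    for _ in range(max_steps):
        critique = reflect(answer)
        if critique is None:   # reflection found nothing left to fix
            break
        answer = revise(answer, critique)
    return answer
```

The contrast with basic automation is the loop itself: the one-shot call returns its first draft, while the agentic version keeps improving until its own review passes.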

How can enterprises decide what tasks should be automated, augmented, or remain human?

Start by mapping processes into three buckets: automatable tasks (highly repeatable), augmentable tasks (improved by AI but requiring human direction), and human-only tasks (judgment, ethics, relationships). This mirrors the Human Agency Scale used in the webinar and gives teams clarity for safe, aligned AI adoption.
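The three-bucket triage above can be expressed as a small decision rule. The attributes and thresholds here are illustrative assumptions for the sketch, not the exact Human Agency Scale criteria used in the webinar.

```python
def triage(task: dict) -> str:
    """Bucket a task by two toy attributes: how repeatable it is,
    and how much judgment or relationship work it involves."""
    if task["judgment"] == "high":
        return "human-led"   # ethics, relationships, final calls
    if task["repeatable"] and task["judgment"] == "low":
        return "automate"    # high-volume, well-specified work
    return "augment"         # AI drafts, a person directs

# Hypothetical example tasks for the sketch:
tasks = [
    {"name": "invoice matching",   "repeatable": True,  "judgment": "low"},
    {"name": "first-draft report", "repeatable": True,  "judgment": "medium"},
    {"name": "client escalation",  "repeatable": False, "judgment": "high"},
]
buckets = {t["name"]: triage(t) for t in tasks}
```

In practice the mapping exercise is done per workflow step, and the same step may move between buckets as tools and risk tolerance change.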

What skills do employees need to succeed alongside AI agents?

Teams need durable skills such as judgment, communication, and problem-solving; transferable skills like data fluency; and perishable technical skills tied to current AI tools. The combination helps workers orchestrate AI agents, delegate properly, and maintain trust in high-stakes workflows.

How can organizations reduce fear among employees who see AI as a threat?

Begin with augmentation rather than automation. Show how AI reduces workload, improves quality, and accelerates employees’ own impact. Small wins build confidence and demonstrate that AI is a tool that multiplies human capability—not a replacement for it.

What practical steps should companies take to begin using AI agents responsibly?

Start with a shared language for AI, then pilot agent delegation. Next, redesign workflows using clear automation/augmentation/human rules. Partner with experts to ensure governance, data protections, and alignment with Microsoft or ServiceNow platforms before scaling.

ABOUT THE SPEAKER

Todd McLees is a globally recognized strategist and futurist focused on the intersection of human skills and artificial intelligence. As co-founder of Human Skills AI and a LinkedIn Top Voice in AI, Todd has helped thousands of professionals and dozens of organizations adopt AI responsibly and effectively. His work emphasizes agility, ethics, and preparing people for the future of work.


Event Transcript


Kate Weiland-Moores 0:18 Good morning, Todd. Good morning, everyone. So good to see you, Todd. Good morning, everyone, and welcome to Concurrency's Autumn AI Summit: Beyond the Hype. I love the lineup that we have for you today, and I'm going to get things kicked off right

Todd McLees 0:21 Good morning. How are you, Kate?

Kate Weiland-Moores 0:38 by introducing our keynote speaker, Todd McLees. It's my great pleasure to introduce you, Todd. Todd is a globally recognized strategist, futurist, thought leader, and published author at the forefront of the AI-powered future of work. Todd is the co-founder of Human Skills AI and has dedicated his career to helping people and organizations succeed in a world shaped by disruption, automation, and rapid change. Todd's groundbreaking frameworks on AI agility, human-AI mindsets, and skills for the AI economy have been adopted by over 60 institutions and industry partners across the country, and more than 30,000 people have taken the AI Agility Challenge. Could one of you on the call be next? Named a LinkedIn Top Voice in artificial intelligence in both 2023 and 2024, Todd is a passionate advocate for building healthy AI habits and empowering people to flourish in a rapidly changing and evolving world. I myself have been in awe of Todd McLees for many years, so please join me all in welcoming Todd McLees.

Todd McLees 2:00 Hey, that's really nice. Thanks very much, and thank you, everybody, for taking the time this morning. My goal today is really just to get through some content, not a lot of content, roughly half the number of slides I normally use, so that we can hopefully spend some time in a more dynamic conversation, not just question and answer, but some back and forth. The concept that we're working on right now takes us from using large language models and AI clients to agents, and of course Concurrency's business and their customer base.
Very much based today on agentic workflows and agentic AI, where the focus is on the people and process side of things. And so it's very much about agentic workflow design, taking our imperfect processes and making them ready for AI agents. So we think about it as aiming higher in the AI economy, because every business process that we run today was built around scarcity: scarcity of talent, scarcity of time, scarcity of intelligence. But intelligence is moving from scarce to abundant. One way to imagine that is to look at raw cognition as measured by IQ. It's not a perfect test, maybe even a poor proxy for human intelligence, but it certainly is a nice benchmark to track the progress of AI systems. So a year ago the landscape looked like this: the best AI models were clustered right around average human IQ, falling a little bit short, and that was just a little over 12 months ago from this date. And you can track this yourself; trackingai.org is where I get this information from. So that's a year ago. Now, average human intelligence is 100, and every 15 points is one standard deviation. And what we've seen is that throughout the many releases that have occurred over the last year, it's essentially a standard deviation, 15 IQ points, with every new major release. So today it looks like this: the frontier has already shifted far to the right, with models scoring in the top 1% of human capability. And you can see it's not just OpenAI, it's not just the very latest models with GPT-5; it's Grok, it's Gemini, and you can see Claude just behind that and coming. And again, with every release they do, like Anthropic just released Claude 4.5, they gained about 15 points. So we're confident that by the end of this year, maybe Q1, we're going to start seeing models regularly performing over the threshold of 140, which is genius level. And it doesn't stop there, because IQ, you know, is a little loaded.
All of the benchmarks are, but every benchmark that we've ever created to track the intelligence or the capability of AI is being quickly usurped, from coding to reasoning to data analysis; the ceiling just keeps moving higher. You can see here that right now the latest models, Anthropic's models, GPT-5, are outperforming human engineers in many different ways. Not always, but in many different ways. Let's look at math for a second. This is the invitational math exam that goes out to the most talented students in high school math. AI models are already winning gold in the Math Olympiad, which is a global competition. And on the AIME exam, one of the toughest challenges for advanced high school mathematicians, AI models are now posting perfect scores. What was once a test that separated the top 2% of students is now being crushed by machines. If you look at science, lastly, the GPQA, which is Google-proof, PhD-level questions, hundreds of questions: the average expert, the average PhD in their domain of expertise, scores somewhere between 65 and 74% on this exam. And you can see that the newest models are not just competitive with experts, they're outperforming them. All right, so why does that matter? One thing to say against every benchmark that exists is that the AI labs, when they release a new model, overfit the model to perform well on these types of tests. So you have to take it with a little bit of a grain of salt, but it's still a good measure of progress over the last couple of years as language models have made their way. It matters because once AI reaches or exceeds human capability in those three domains, math, science, and software engineering, we've essentially unlocked the full cycle of innovation: math for abstraction and modeling, science for generating and testing hypotheses, software engineering for implementing and scaling solutions. So let's just take a moment, a little pause.
And think about what it really means. For decades, as Vinod Khosla put it, we've rationed the time of our most expensive, our most capable people. Every workflow, every piece of software, they've all been wired to date to optimize around that constraint, around talent, around intelligence. But now we've entered a moment where intelligence in math, science, and software, the very engines of innovation, is no longer scarce. And creativity too, all sorts of capabilities across the jagged frontier of AI, not perfect, lots of gaps and so forth. But we're getting to the point where intelligence is not scarce, it's abundant, which means the old assumption that we have to conserve expertise, limit cycles of experimentation, or settle for good-enough processes is collapsing. When scarcity disappears, everything built around rationing breaks. The workflows we designed for bottlenecks don't make sense in a world of abundant intelligence. That's a much bigger picture. It's more than just faster productivity. It's an opening to reimagine how we create, how we discover, and how we build. And we're seeing all sorts of announcements and predictions coming from the AI labs now about novel concepts in math and science being discovered, if not now, then by next year, with these new capabilities that are emerging around reasoning. Too often our conversations in academic and professional circles are about teaching AI literacy, helping people understand the basics of tools. Sorry, just trying to advance the slide. And literacy is absolutely necessary, but it is not sufficient for the work that mid-career professionals, all of us, need to be thinking about, or the students that are going to be coming into the workforce, which we know from the data is a struggle today. They need skills beyond AI literacy: AI agility and orchestrating AI agents within workflows.
Because if workflows themselves no longer make sense in the world where we're quickly heading, then the real challenge isn't just literacy, it's fluency in redesigning how we work. It's about agility, not awareness. So when you think about the work that's going on right now, the companies that are out in front, not just AI-forward but AI-first companies, like these companies on the slide, they're pushing agents and automation into every corner of the business, measuring progress in tasks completed and cost reduced. You've probably seen the wave of CEOs declaring their companies AI-first, like those on the slide, certainly the top examples being Box, Fiverr, and Klarna, with very public memos released about their policies going forward. At Shopify, employees were told they can't request a new hire until they've proven why AI can't do the work. Klarna bragged about replacing hundreds of support agents with an AI assistant, but they ran into some trouble. That was more than a year ago: 700 support agents were let go, and they were very public about why, that they were deploying AI assistants to do the same work, only to quietly begin rehiring people earlier this year when customers lost trust, repurposing software engineers to work the phones and support and so forth. It was a mess. And that's the reality of AI-first. For some it means speed and scale. For others it exposes the risks of chasing efficiency over value. You can see Direct Supply on this slide, locally. I think they're maybe the best example of a company that's got a more-than-decade-long investment in AI. They're an AI-first company. They have a couple hundred different automations, from GPTs to agents, running throughout their enterprise. But from our perspective, AI-first versus human-first AI, it's not really about picking a side. It's not a religion. The real work is learning to move fluidly between the two in agentic workflows.
You have all sorts of different tasks, and there are AI-first tasks when speed and scale are essential. There are human-first tasks when trust and problem solving and value creation are at stake. So the organizations that thrive in this era won't be the ones that declare themselves AI-first or human-first. They'll be the ones who know how to design workflows where the two multiply each other, where you go back and forth. There are multiple tasks, and there are always questions to consider: can we automate this? And once we can automate something, should we? Should we leverage AI instead to augment the people in the role, or should we keep this as intrinsically human? So when we think about the time that we're entering, it's not about the AI economy, it's about the agency economy. For most of history, our economies have been defined by what is scarce. In the agricultural economy, land and labor were the limiting factors. In the industrial economy, it was machines and capital infrastructure. In the knowledge economy, intelligence itself became the scarcest resource. But now we're moving into something new: the agency economy. And here the scarce resource isn't land or machines or intelligence, it's agency, the ability to decide, design, and direct how humans and AI work together. Because the agency economy is powered by two workforces, the human workforce and the digital workforce. And when you combine the ingenuity, resilience, and judgment of people with the speed, scale, and precision of AI agents, you don't just get more productivity, you expand what's possible. And that's what I mean when I say agency is the multiplier, but both have to be there. You can't really solve the issues by simply automating, right? We've heard that for many years in manufacturing, for instance: you can't just automate a bad process. You have to redesign the process to optimize it.
In this case, taking people out of the equation has to be an intentional decision. And when you do that, you can't just solve the shortcomings of the individuals doing the role by having an AI do it. In fact, the processes that we've designed over the many years, I think we can all admit if we think about it just a little bit, are relatively imprecise, because we all have people that we trust, our go-to employees and so forth, who just figure it out. When it goes off the happy path of a given process or workflow, we depend on people to figure it out. When it comes to agentic AI and offloading work to agents, it's much more necessary, with the constraints and the context and the objectives and so forth, to be much more definitive in how we think about process. So if the agency economy is about harnessing human and digital workforces together, the question becomes: how do we actually design that partnership? Well, one of the people that I pay a great deal of attention to on this issue is Andrew Ng, co-founder of Google Brain some time ago, formerly chief scientist at Baidu, a large AI company, and a Stanford professor as well. And there's tremendous online content from Andrew about everything technical about agentic workflows.

Kate Weiland-Moores 15:15 OK.

Todd McLees 15:21 About agents themselves, about AI in general. He's been pointing out something critical: the breakthroughs in AI won't come from just building bigger models. That's certainly helpful, but they'll come from better workflows. And he calls it agentic workflow design. That's the term that we've adopted. And it means building processes where agents don't just spit out a one-shot answer, but they reason, they reflect, they use tools, they plan steps, and even collaborate between themselves. And our job is to be involved in that.
Sometimes wrapping an older model in an agentic workflow outperforms the newest model, simply because you've given it the opportunity to create learning loops, give itself feedback, and get feedback from human interaction and so forth. That's why workflows are the story. It's not just what the model can do in one pass, it's how you design the system around it. And that's where human agency comes into the picture, because not every task in a workflow is the same. Some are automatable: AI can do them end to end. Some are augmentable: humans and agents working together, elevating the work that either could do on their own. This is where you see the metrics of 40% improvement in quality, because a person doing that task is leveraging AI to go further, not just faster, but to take them further, to extend their capabilities. And if you're using AI on a frequent basis, I'm certain that over the last year or so you've experienced those instances, those holy-cow moments, around what AI is actually capable of. And then there are some tasks that are just purely human: the judgment calls, relationship management, ethical decisions, creative leaps, things where AI might be additive or might not be, but that domain is still a human domain. The companies that figure this out, the ones that master agentic workflows and define the human role with precision, are going to outpace their competitors. And the same is true for individuals. The people who learn to work this way, knowing what to automate, what to augment, and what to keep purely human, will be the ones who move the fastest, give themselves some runway, create the most value, and stay indispensable in the economy that we're heading into. Because the real edge isn't just knowing how to use the tools, it's building the right skills too. This is a huge message and area of focus for us within the higher ed institutions that we work with across the country.
There are durable skills: judgment, problem solving, ethics, the things AI can't replace. Transferable skills like data fluency, collaboration, critical thinking, the ones that carry across roles and industries. And then there are perishable skills; the more specific, the more perishable. These are the specific tools and platforms that we all need to keep refreshing as those products evolve faster than any technology we've ever dealt with in our lives. Individuals who invest in those layers of skill, whether you're 45 years old and mid-career or an 18-year-old kid headed to a college track or right to work, the ability to build these human skills and combine them with the ability to not only collaborate with AI in meaningful ways but to orchestrate AI agents, that's where thriving and human flourishing is really going to come into play in this economy. So, all right, let's make this real. It's one thing to talk about durable, transferable, and perishable. But what does it actually look like in practice when humans and AI divide the work across automatable, augmentable, and purely human tasks? Let me give you a couple of examples. Let's talk about radiology. Almost a decade ago, one of the field's brightest minds predicted AI would replace the field of radiology entirely. The logic seems simple: if AI can read images better than people, which it can do, more accurately every time, why do we need radiologists today? But that didn't happen. Radiology is actually growing. Why is that? Because the job is more than image recognition. It involves context, patient communication, regulatory decisions, and accountability. AI speeds up the repetitive parts, which creates more demand, not less. This is the pattern: jobs refactor before they disappear. They shift into automatable tasks, augmentable tasks, and those that remain purely human. This is Jevons paradox, that's the name for it: when efficiency goes up, demand doesn't go down, it explodes.
AI made scans faster and cheaper, so we order more scans, and that's actually increased the need for radiologists, not decreased it, because someone still has to make the judgment calls, handle the complex cases, and sit down with patients and their loved ones to explain what it all means. Here's how the role is shifting. The repetitive part, the first-pass scan of thousands of images, is being automated, with AI flagging anomalies at scale. That frees radiologists to spend more time on the edge cases, the ambiguous results, nuanced patterns, integrating findings into a patient's full medical story. They're also being pulled closer to the patient, communicating results directly, building trust, working alongside care teams. At the same time, they've become orchestrators of hybrid workflows, reviewing what AI flags. Cleveland Clinic, for example, is using ambient listening in their rooms; almost 100% of physicians have opted in, and only one patient in the past year, since they've been running ambient AI in the rooms, has opted out. Ultimately, the radiologists are still accountable. They're making sure diagnoses are accurate, they're documenting decisions, they're navigating the ethical and legal responsibility when AI is involved. That's why jobs don't disappear overnight. They refactor. And certainly there are exceptions; different companies and different individuals are going to make different decisions, like Klarna did 12 or 18 months ago, when they cut 700 people and then had to admit the mistake and hire many of them back, or hire many new customer service agents, and ask engineers to work the phones while they filled that gap. The next use case is around Goldman Sachs. David Solomon, the CEO of Goldman Sachs, put it bluntly around six months ago now on CNBC: it's the last 5% that now matters, because the rest is just a commodity. Think about what he was talking about: the S-1 prospectus.
So when you take a company public, you might hire, if you're playing all the way up at the top of the market, Goldman Sachs to take you public. And one of the things that they do is build a prospectus on your company. Traditionally, that meant six Goldman Sachs bankers, not inexpensive resources, working for multiple weeks, grinding through hundreds of pages, every disclosure, every chart, every footnote. But today, most of that baseline work is automatable. AI can draft sections, assemble financials, check formatting, and compare against thousands of past filings in seconds. So what's left? The part where judgment and trust matter most: the story that positions the company to investors, the careful framing of risks, decisions under uncertainty that can't be delegated, and the relationship between the banker and the client that says, this is how we tell your story to the market. Now, sometimes the split looks like this, like 95/5. Other times it's 80/20, or 20/80, or even 60/30/10 across automation, augmentation, and purely human. The ratio shifts depending on the task, the stakes, and the context; it just depends on the workflow. But here's the reality: even if only 10% of a process can be automated, over time it ultimately will be automated. That's the gravitational pull of AI technology. The principle doesn't change. The commodity work keeps shifting toward automation, and the human edge keeps concentrating in judgment, trust, and creativity. That's why this isn't just about what can be automated. It's about clarity: clarity on what's automatable, what's augmentable, and what must remain human. That's the essence of agentic workflow design.
How many of you who are thinking about AI agents at this point have already done the work of defining the process in greater detail than you've ever done before, identifying exactly which tasks you think are automatable, where you still need to rely on people, and where AI can elevate the work of people as well, to increase the value of that process and create better relationships or outcomes? So radiology shows us how efficiency expands demand. Goldman Sachs shows us how automation concentrates the human edge. The line between automation, augmentation, and human-only work is always shifting. That's the challenge: it's not static. A single project might move across those boundaries, sometimes automated, sometimes shared, sometimes human, back and forth throughout. That's why we need a framework, a way to see and design the handoff. And this framework is not ours. It's Erik Brynjolfsson, who wrote the book Human Plus Machine maybe a decade ago now. He was at MIT at the time; now he's at Stanford. They developed this framework called the Human Agency Scale, and we've not only been inspired by it, we've modified it to fit enterprise workflows as well as academic workflows. I'll show you an example of that in just a second. The Human Agency Scale is all about the role of the human. Instead of only focusing on what the AI agent is capable of, it asks, and this is the "can we, should we" conversation, what role can and should a person still be playing in that context, and how much agency and autonomy do they have in the conversation? Because we still need to help people build agency, so that they get out of bed in the morning feeling like they are in charge of their own destiny. It's called self-determination: that they show up to work ready to drive some of that, even if they're using agents to achieve it, and that they know and are willing to be held accountable for the results.
This is a massive shift for most people. You know the old concept, I guess it's the Peter Principle, where you're a very successful frontline contributor, so you just keep getting promoted; I think the principle is that you keep getting promoted to the level of your incompetence, right? So suddenly we put somebody in a leadership role or a management role, and they're not as good a manager as they were an individual producer. Well, now, as the Silicon Valley folks, the AI tech bros and everybody else, start talking about how we're all just going to get sort of promoted, and we'll be managers, and we'll be managing agents and orchestrating them and so forth, these are skills that a very small percentage of people have. So this framework maps out the different modes of collaboration. At one end there's autonomous AI agency, where the machine runs end to end. In the middle there's shared agency, which is about humans and AI co-creating in the same loop, back and forth; a lot of us are using language models that way today. And at the other end is full human agency, where the stakes, nuance, or context demand that people stay fully in control. Is there too much risk to automate something? Klarna found that out the hard way. The reality is, real work moves across this scale. Sometimes you delegate more to AI, sometimes you pull it back more to the human side. The key is that you're clear on what that looks like. It's not just as simple as saying "human in the loop," a very popular phrase. The real question is: what is the role of the human in the loop? Is it oversight? Is it review? Is it decision making? Is it creative direction? The clarity is human agency, and you have to design for that. So let me show you an example that is academic in nature. As Kate mentioned, we work with more than 60 different institutions, not only as clients but as our partners.
They're our go-to-market strategy, our distribution channel, if you will, to the market for our AI agility and agentic workflow programs and challenges and so forth. It's interesting when you talk to these institutions: I think we've had more than 200 college presidents take the AI Agility Challenge, and locally Marquette University has now had some 70 people go through, from the School of Business and the School of Engineering, all the leaders. Kimo, the president at Marquette University, has completed the AI Agility Challenge and so forth. It's a level set, a shared language, a baseline skill set around not just prompt design but human-AI collaboration. So as I've worked with now thousands of educators in higher ed, from community colleges to the greatest universities, the people who dislike me the most are English professors and writing instructors. They have a really hard time, understandably, with how to bring AI into the mix. We work with all 30 community colleges in the state of New York through the SUNY system, and at one of those stops, I was working with an English professor in a workshop around an assignment. She sent me her assignment: the PDF describing the assignment, the assessment, the rubric, and all the things that academics give to students so that they can do well on the project. And what we did was run it through a system prompt that we've built to help create an agentic workflow out of it. In education, understandably, and maybe even in business, they're not very interested in AI leading very often. So on that Human Agency Scale, you can see on this chart where we floated in the 10 steps of the assignment, which were the same 10 that she's always had in that assignment. The Human Agency Scale was somewhere between a three and a five; you can see that there are four steps where there's a five.
OK, so once we got done mapping, sometimes the students were operating at full human agency, like choosing a topic or making a final judgment call. Other times they were working in shared agency, co-creating. And then we didn't just tell them to do it. We used system prompts; we refer to it as a seed prompt that we gave the instructor. I'm using an academic example, but this certainly fits in business as well. You know, it's much harder to understand how students are using AI if you don't give them the guardrails to live within. And I'm not talking about policy. What I'm saying is, it's very easy to generate a system prompt that says to the student: here's the assignment, and I'm happy to have you use AI on this assignment, but you have to start with this prompt. And what that prompt does is guide the AI tool's behavior, quite perfectly actually, and it reminds the student when necessary that it's a coach, not doing the work for them. So for instance, if you're trying to choose the topic for this particular assignment and you say, well, write the topic for me, pick it for me, it will come back and say: actually, this is an H5, a full human agency step, and so I'm your coach, not your writer. This needs to come from you. Here are the skills you're building right now. And so what the students learn is not just the ability to complete this assignment in the way that complies with how the institution and the teacher and the classroom want it to be done. They also learn agentic workflow, and they learn healthy habits for how to work with AI to get work done at a better level than they've ever been able to before. And it becomes completely auditable. And within the context of industry, these sorts of things, master prompts and system prompts, also have a tremendous amount
Of value to create standards instead of just starting a new chat session and relying on, you know, ChatGPT or Copilot’s memory capabilities to play within the guardrails. That to me, this is the future of learning. It’s about agentic assessment design. It’s also the future of work, because the organizations that build this kind of intentionality into workflows and process, they’re going to move faster. There’s going to be more trust, there’s going to be fewer critical mistakes, and it’s going to free. People up to be more creative than anybody else, any other entity that you’re competing against. That brings a maybe one of the final points up here. Abundance creates new work. So the real upside of a I isn’t just automating what we already do, it’s it’s the work that we never get around to doing because we’re so busy with all of the rote tasks. That fill our day. Warren Berger, the author pictured here, this book is actually from 2014. It informs a lot of our work. It’s called A More Beautiful Question. He makes the case that progress doesn’t start with answers, they start with the right questions. And we hear that all the time, you know, especially true in the new era of abundant intelligence. It’s no longer about finding answers, it’s about asking great questions. Because if intelligence is no longer scarce, that’s the differentiator. Who can ask the best, most beautiful questions? And that is playing out today. When you think about how A I is being used in your company right now, you may be an outlier who’s thinking more. Creatively and working up the value chain, but most of us are living in this green box. The core use cases that make up today’s work, making it a little faster and a little cheaper, trying to get that e-mail response done without spending a great deal of time on it, and so forth and so on. Very tactical transactional task, but as intelligence continues to multiply. 4X per year, then the next year, 16X and then 64 and so forth compounding. 
Well, the ideas in the green box just don't get us very far. The real opportunity is on the other side of the chart, at the intelligence frontier: the work we've never done before because it was too expensive, too time-consuming, or simply impossible to gather and curate the intelligent resources, the people, the talent we need to solve those issues. To reach that frontier, we need both imagination and human agency, and we need to carve out some time for these use cases. We need it to look like this, where that side of the chart is a lot more populated in our business. That's how we unlock value with AI, agentic AI, and workflows. It's questions like: What's already working in our business that we could 10X if we had access to abundant intelligence? What problems feel impossible right now because we can't hire enough people, enough talent, to solve them? What new kinds of work, what new business models, become affordable when intelligence is no longer scarce? Can we build processes and systems around that? And yes, what are the things that only humans can do, whether it's today or next year, when we see IQs of 175 surpassing just about any human being that's ever been recorded or measured? What's still going to be human? And again, your business, your enterprise, is going to make different decisions than your competitors do, and within the business, different business units, departments, and line managers are going to feel differently about this than others do. In the agency economy, we don't just need better agents; we need more human agency. Because if people don't have the capability, the clarity, and the courage to ask better questions and act on them, to actually do them and be held accountable for them, abundant intelligence will never reach its potential, and neither will we. That's the shift. Productivity has to be the floor, and agentic workflows can help you get the productivity, help you get to the table stakes.
Another way to look at this is a framework we published in Harvard Business Review in December of last year. The gray box on the bottom is the green box on the previous slide. This is where everybody is, where everybody's starting. But if you create a shared collective intelligence across your team, if you can help people get aligned around what agentic workflows can look like in your business and your culture, what is automatable, what represents too much risk to automate, where there are opportunities to augment human capability, and so forth, then you can start working your way up to the point where you're driving additional value: increasing customer relationship value, lifetime value, et cetera, then transformation and growth, and then ultimately innovation. So to me, it's about creating a shared language and baseline skills around human-AI collaboration. That's not just literacy, and it's not just IT either. The domain experts in your business, the people running business units, the people executing on that side, have to have this shared understanding, learning how to collaborate with AI in the flow of their work. Everyone needs the same foundation so you can move together, not in silos, not with uneven adoption. If you're in silos, agents are just going to accelerate the silos. The next thing to learn, and you could start doing it right now, is agent delegation. It's an entry-level skill, and it's going to be expected in the next couple of years of anybody coming into the workforce, young or old. It's about teaching people how to give the agent a role. I'm a horrible delegator to other human beings; this is something I'm going to have to really intentionally build as a skill: figuring out not just what I can collaborate with an AI on in a ChatGPT session, for instance, but what I am willing and able to give to an agent to do. What constraints need to be set?
And how to treat AI more like a junior teammate who needs to understand our business, where I need to be more explicit in giving an instruction, rather than a magic answer box, which is how many of us think about it. There are simple ways to start with agent delegation. If you're a beginner, you can use ChatGPT agent: just kick off a task, watch it plan steps, execute them, and deliver a cited result. I'm guessing many people here are a Microsoft shop, so if you're already in Microsoft and using Copilot, take a look at Copilot Studio and what it's capable of doing. These aren't just prompts in Word or Excel; some of the functions they've recently brought on board are actually very cool, and you're pulling data across Outlook, Teams, and SharePoint to assemble a board briefing. I'm certain that Concurrency can be helpful here. It's about learning how to work one-on-one with an agent to see what you can outsource or offload, but then also orchestrating multi-step processes. You have to get more specific and more precise in defining process if you're going to bring AI into the picture. And then lastly, agentic workflow design, or agent orchestration. Once people can delegate to individual agents, the next step is designing whole workflows where humans and AI move fluidly between what's automated, what's augmented, and what remains purely human. It's laying that out, just like the assignment from the English class: there are 27 steps in our onboarding process, whether it's customer onboarding or employee onboarding; this is what's automatable, this is what's augmentable, and this is what's purely human. And for most organizations, that means finding a partner like Concurrency to handle the technical side of responsibly deploying agentic AI at scale. As you're building these capabilities in your organization, that's where you need to focus: building skills and building agency with your people.
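The automate/augment/human triage and the seed-prompt idea can be sketched together. This is a hedged illustration only: the onboarding steps, category names, and prompt wording below are assumptions for demonstration, not the speaker's or Concurrency's actual implementation.

```python
# Sketch of agentic workflow design: classify each step of a process as
# "automate", "augment", or "human", then generate a system ("seed")
# prompt that encodes which steps the agent may perform on its own.
# All step names and wording are illustrative assumptions.

ONBOARDING = {
    "Collect new-hire paperwork":     "automate",
    "Provision accounts and access":  "automate",
    "Draft a 30-day learning plan":   "augment",
    "Introduce the team and culture": "human",
    "Set performance expectations":   "human",
}

def seed_prompt(process: dict) -> str:
    """Build a system prompt that encodes the human/AI boundary."""
    lines = ["You are a workflow assistant, not a decision-maker."]
    for step, mode in process.items():
        if mode == "automate":
            lines.append(f"- You may complete on your own: {step}.")
        elif mode == "augment":
            lines.append(f"- Draft, but require human approval: {step}.")
        else:  # human-only steps: the agent coaches, never executes
            lines.append(f"- Do not perform; coach the human instead: {step}.")
    return "\n".join(lines)

print(seed_prompt(ONBOARDING))
```

The design choice mirrors the classroom example: the boundary lives in the prompt itself rather than in policy documents, so every session starts from the same guardrails and the delegation decisions stay auditable.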
So where do you start? You build a shared language, learn how to delegate to agents, and then design agentic workflows. Because that's the real opportunity. Productivity is no longer the prize; it's the floor, the starting line. Value creation is the ceiling, and agentic workflows powered by human agency are the multiplier, a force multiplier. If you're trying to get people who don't feel high agency, and only about 40% of people in the average business do, if you're trying to get them to use AI, they only see it as a threat. Low-agency people see AI as a threat. If you roll out of bed in the morning, like many of you do, and you feel like you're in charge of your day, in charge of whether today is deemed a success or not, whether you're a founder of a business or working for somebody, you can feel that sense of ownership. If you feel like you contribute to those results and you're willing to be held accountable, that's the basics of high agency. Only 40% of us feel that way. 60% of the world feels like the conversation we're having here is a threat to their livelihood and their self-worth, because of the skills they've built. When you start looking at even radiologists and the way their job is evolving, it's not going away; we're increasing the number of radiology jobs, but the job is changing. Every change is difficult to navigate. So that's how we'll lead in the agency economy: not by chasing efficiency alone, but by reimagining how we create, how we discover, and how we build together with intelligence that's no longer scarce. This is happening right now in some of the AI-first businesses I identified before. So I'll stop there and leave some time for questions. I want to say thank you to Concurrency for giving me the opportunity to address the group, and I'm happy to answer any questions that are already in the chat.
Or if you want to just turn your camera on, please feel free to do that and ask questions. 43:58 Yeah. Amy Cousland 44:02 Todd, if people want to drop questions in the chat, I did see one. If you want to go look in the chat, there was one asking about why Tracking AI shows a different chart than what you had shared. Todd McLees 44:13 Oh, so that's the offline versus online difference. I use the Mensa Norway test, and I have been for the last 18 months. If you click on the Mensa Norway button, it's going to show you the same chart. It's also three days old, so it might be a little different, because they're running tests on a regular basis, but I imagine it will be a lot closer to what you're seeing. That's the issue of offline versus online access to data, context, search capabilities, and so forth. And from my perspective, in just about every work process today, AIs have those capabilities, so I use the Mensa Norway test instead of the offline capability. Amy Cousland 44:54 Any other questions? You can go ahead and drop them in the chat. OK. Oh, there's a question about what the Marquette class is. Todd McLees 45:14 Oh, so we're working with executive education at Marquette around AI agility, and AI Agility is a 30-day challenge. You actually get three months to complete the challenge. It's 20 modules, roughly 20 minutes per module. It also includes 12 months of updated modules, so every month you see another two to three modules, depending on AI announcements. And we're not just out there saying, hey, GPT-5 just came out, or Sora 2 was just released a couple of days ago. We're more interested in what impact the new capabilities have on the way we work or the way we learn. So that's what we ship out.
So this is around human-AI collaboration, a little bit on responsible AI, and we'll have a new challenge coming out this quarter, through Marquette and our other higher ed partners, around agentic workflow design. You can take a look at humanskills.ai for information on that. And if you're interested in taking the class, it's $500 per person, but again, that's 12 months of content, not just the first 30 days. Many teams take it through the higher ed partners. We don't have a direct-to-market model; we go through the Microsoft ecosystem partners, MSPs, and higher ed, but primarily higher ed. Amy Cousland 46:40 I do have a few other questions there. Can you see them in the chat? Todd McLees 46:42 Oh. Amy Cousland 46:47 See. Yeah, sure. For people in the 60%, those that think AI is a threat, what's the best way to overcome this fear? Todd McLees 46:47 You want to give me one, Amy, and I'll get in there. Yeah, it's small doses, showing them ways it can be additive. It's the old "what's in it for me" line. Essentially, focus on augmentation rather than automation with those folks, so they can see the impact it has for them. In many cases it's: I can save four or ten hours a week; I can do work at a higher level than I'm capable of alone, or with many fewer hassles in getting that work done. If you can stack those up, you start to see converts, as opposed to starting with the conversation of: well, this technology is here now, so we're going to automate even 10% of your job. Usually it's not more than 25% today, but for many people, even 10% is territory they feel they have to defend. Amy Cousland 47:47 Awesome. What jobs today do you think require the lowest level of human agency and are at the greatest risk of disruption? Todd McLees 47:57 Yeah, well, I would just direct you to "will robots take my job" dot AI.
It sounds goofy as a website name, but there's really good data there. Anything that's transactional in nature, anything that's just knowledge-based. I would suggest writing and editorial work. That Goldman Sachs use case is a really interesting example: you've got six bankers who can probably go do higher-value work. If it's somebody's job to do research or just create documents, whether those are grant proposals or sales proposals by inside salespeople, and so forth, those kinds of tasks are going to be disrupted, or are being disrupted right now. Customer service roles; anything that's remote, where you don't have to be in front of people, where you're not protecting human relationships at all, where you're not part of that part of the value chain, where you're just working on deliverables. It's those types of purely cognitive tasks that are most automatable. Amy Cousland 49:01 From a practical aspect, how do you see companies overcoming outdated IT paradigms, limited access to plugins, higher-level permissions for applications that can leverage AI, et cetera? And what tools exist to help guide this transition? Todd McLees 49:18 Yeah, so there are a couple of questions in there. The first one is really around modernization of systems to enable AI. To me, the only way through that is a strategic conversation in the C-suite and the ownership, believing (and not everybody does) that you have to get to the point where you're leveraging artificial intelligence and agentic AI to keep pace with competitors. If that's the case, then that has to be the catalyst to get you moving in that direction, to modernize your IT systems. The other question was what tools exist. We're building a platform right now which will help with agentic workflow design.
We’re starting with academic because we’ve got built-in customers there, but it’s completely translatable to say take a defined process right now, plug it in and and AI algorithms will make. Suggestions based on your guardrails around what’s automatable, augmentable, and purely human. Other than that, it’s it’s about learning. It’s about, you know, I I’m not a huge fan. We’ve seen all the negative news about endless pilots and not getting into production. Ninety-five percent of them, according to MI TS data a couple of. Months ago now or a month ago, but there does need to be a fair amount of experimentation just for the sake of learning. Amy Cousland 50:44 I think we have time for one more question. I’m going to read that to you here. Some in the AI space argue that given the rapid advancement of AI’s capabilities, the skill of workflow automation will likely become obsolete within the next year or two. They argue. They argue you’d be able to simply explain the workflow you want in Presto. What’s your take? Todd McLees 51:04 Yeah, I think that that’s a little optimistic. You know, for instance, I’ll go to coding, which I would say is probably the leading role right now in terms of automation and while we see the headlines around 80 to 100%. Of code, you know 90% of code in anthropic and 80% of meta being created by AI. We also have seen just yesterday an anthropic researcher came out and said that he believes by 2027. An A I will be able to work on a coding project for 30 consecutive hours without human interaction, and the code will be 50% right. And when I heard 50% right, I thought, well, I mean, it definitely changes the role to oversight and quality and elevating A I’s outputs. But I I struggle. There’s, you know, people are all over the map on that. 
I think today, between now and the next 24 months, you're pretty safe in saying: here's where we have to make a decision as to what our company believes, where humans need to be involved, and where they don't necessarily have to be long-term as the technology matures. And as you partner with groups like Concurrency to bring solutions to the table, you can have much more clarity about where an AI can play a heavier role in your business without introducing unintended risk, either on the employee side or the customer side, or any business risk at all from a compliance perspective, and so on. Amy Cousland 52:34 Thank you so much, Todd. Really appreciate the keynote; really insightful. I know you all have the agenda; you can pick between "Turning AI Ideas into ROI" or "Intelligent Document Processing with AI" for the next 10:00 session. We are going to be recording all of these sessions, so if there are two you really want to see at the same time, we'll be sharing the recordings with you. Again, thank you everyone, and thank you, Todd. Take care. Bye. Todd McLees 53:02 Thank you, everybody. Thanks, Amy. Thanks, Kate.