View Recording: Inside the MCP Protocol: Supercharging AI Agent Collaboration
July 17, 2025

Join us for a deep dive into the MCP (Model Context Protocol) and how it’s reshaping the way AI agents interact, coordinate, and deliver results. We’ll explore how MCP enables more effective communication between agents, streamlines task execution, and opens new doors for intelligent automation. You’ll also see a live demonstration, featuring Microsoft Copilot, showing how MCP-enabled agents work together in real time to solve complex tasks. Whether you’re technical, strategic, or just AI-curious, this session will help you understand why MCP is a foundational leap forward in multi-agent systems.

Ken Ullsperger 0:10
Hello, welcome everybody. I’ll give you another 10 to 15 seconds as I see some more people trickle in. A couple more people are joining, so hold tight; I’ll be with you in a second. OK. All right. Welcome everyone, and thank you for joining today’s session, Multi-Agent Magic: Smarter AI with MCP. I’m excited to show you how the Model Context Protocol transforms isolated large language models into coordinated teams of AI agents working together to solve complex real-world challenges. Over the next hour, hopefully less, we’ll explore the vision behind multi-agent AI and why it matters now, the core components of the MCP architecture and how they fit together, and practical benefits from fresher data to stronger security. We’ll do a live demo of an MCP server, connecting to it and having it interact with our chatbot, and we’ll discuss some real-world use cases and the monumental impact we hope this approach will have across industries. By the end, you’ll understand how to architect, secure and deploy multi-agent workflows that supercharge your AI initiatives.
But before we dive in, please indulge me while I take a minute to introduce myself. This is my presentation, so I can talk about me just a little bit; it’s not too long. My name’s Ken Ullsperger. I’m a technical architect here at Concurrency. I have a daughter who will be a senior, a son who will be a freshman, and another son who will be in 6th grade in the fall, so basically my life right now is whatever they’re doing. In no particular order, I guess I’m a theater dad, I’m a baseball fan, and in the fall I’m a football coach. My spare time during the summer baseball season is spent traveling to my kids’ baseball games and watching them lose in often spectacular fashion. Sometimes I get a W, but it’s not often. When I’m not doing that, I’m helping my daughter, who’s trying to get into musical theater conservatory programs in college; she’s on the cusp of graduation. If anyone has or has had a child trying to get into colleges, they know what I’m talking about. We are also, as you can see over here, one of those super annoying Disney families, so if you see me in the break room, I will talk your ear off about the parks. Just try to avoid me if you can. My time is spent in the mornings at the gym and evenings sitting behind a drum set, playing with my kids, and working here at Concurrency. So, a little bit about work. I’ve been in IT for 25 years. I started my career doing ASIC design; those are just computer chips with a really narrow focus. After that, I got into development and software design, and then ultimately architecture. In my role as a technical architect, it’s my job to convert often abstract ideas into secure, performant enterprise applications. But honestly, what I really spend my time doing is streamlining and optimizing how dispersed systems talk to each other, which, appropriately, is what today’s webinar is about. So, MCP: what is it and what does it solve?
So I mentioned I’m a theater dad, so before we dive into demos and some of the geekier stuff, let’s start at the very beginning, a very good place to start. Put the musical in the chat if you know where that’s from and we’ll give you a free 30-minute consultation. So what is a good place to start? Let’s answer the question: what is MCP and what does it solve? AI systems have, in the past, had a hard time reliably integrating and orchestrating external tools, APIs, live data, and web searches. Every new service that an AI tool uses demands glue code, breaks on upgrades, fragments your architecture, and scatters secrets around the system. So MCP, the Model Context Protocol, was created to solve these problems by providing a single standardized protocol for tool discovery, authentication, invocation and response formatting. Any model can plug into any capability without writing custom plumbing. What you see on this slide are the five core challenges that MCP was built to solve. Open standardized interfaces, so your models can call tools, APIs and data resources without custom adapters. Unified integrations, eliminating one-off glue code for every new service or model. Multi-model ecosystems, letting GPT, Claude or any custom model share the same registry of capabilities. A consistent architecture, so you can extend and maintain your AI apps as they grow without constant rewrites. And fragmentation solved, offering a single protocol, MCP, for discovering tools, authenticating securely, and executing calls. In short, MCP replaces brittle, handcrafted integrations with a scalable, secure framework, and that’s exactly what we’ll explore in today’s session. So now that we’ve talked about what MCP is and why it matters, I’d like to move on to the how. Let’s see how it works under the hood. On this slide, let’s walk step by step through a request flow
and how each component in the MCP AI agent process behaves in relation to the others. We start here with your client application: an AI model and MCP host. It’s a chat UI or IDE plugin, like Copilot, and it sends a user prompt to the hosted large language model. The model processes the prompt and determines whether it needs some sort of external capability. The model then emits a structured tool call request, which the MCP client captures. It transforms the model’s intent into a JSON method invocation and accesses the server over the MCP protocol: the client sends that method call to the designated MCP server. So you’ve got the host, the LLM, the client and the server talking to each other. Inside the server box, four subcomponents collaborate: the tool registry, which quickly looks up the requested tool’s schema and metadata; authentication, which checks the client’s token and grants permission to invoke the tool; a request handler, which validates the incoming parameters against the tool schema and routes the call; and a response formatter, which packages the raw output back into the structured JSON that the model expects. The tool interfaces here are the conduits for communicating with the external tools, and those can be things like a web search, a calculator, database access, access to a file system, weather systems, whatever. If you start thinking about what your organization needs, it can be any sort of functionality that gives the chatbot more information so it responds more intelligently to its prompt. Once the tool completes its discovery of whatever information has been prompted, it returns that data back to the server, and it all flows back to the LLM, which then uses it as additional data added to the prompt to make its response smarter, more intelligent and more valuable.
It generates that response and returns it back to your UI tool, whether it’s Copilot or a custom UI. Each component plays a clear, isolated role, making integration predictable, secure, and fully observable. So why does MCP matter to you today? What does it provide that is remarkable? Why will it have the impact we believe it will? Let’s start with interoperability. You’re no longer locked into a single vendor’s API or forced to rewrite connectors when you swap out models; you can move from GPT to Claude to Grok or whatever you need. The second is consistency and reusability. Teams can publish their MCP-compatible tools, like search, database access and custom business logic, and everyone just plugs them into their agents. There’s no more reinventing the wheel; the agents don’t have to specifically know things, you provide the tool. Think service-oriented programming; that’s sort of what this is on an AI scale. The third is speed: standardized plug-and-play tools slash development time, letting you focus on unique business logic instead of the plumbing. And critically, MCP brings live data into your models, say querying today’s sales figures or checking real-time inventory, while keeping everything secure under Active Directory or role-based security. In short, MCP turns your large language models into true AI-powered apps that are scalable, maintainable and enterprise-ready. So here are four concrete wins you get with MCP. First, freshness: MCP lets your agents call live APIs or query databases so they can work with today’s information, whether it’s inventory levels, financial metrics, or real-time sensor readings, rather than rely on stale training data. Second, capability extension: if you need image analysis, geospatial lookup, or business logic computations, with MCP you just register those specialized tools with your models and they’re invoked on demand, even if the models weren’t natively trained for those tasks.
Third, it helps reduce hallucinations: by routing factual questions through authoritative data sources, MCP dramatically cuts down on made-up answers, and your AI pipelines become more reliable and trustworthy. And fourth, privacy: you never have to share confidential data in a prompt, so sensitive information stays inside your secure services, behind firewalls and role-based authentication, while MCP handles the secure handshake between the model and the data. So now that I’ve tried to sell you on why you should care about this topic, let’s go under the hood a little bit and geek out on some of this stuff. At the heart of MCP is a clean client-server model. The first element is the hosts; they’re your user-facing LLM apps. Think Visual Studio Code extensions, chat interfaces that want to enrich model outputs with external capabilities. Inside each host is an MCP client. This is the connector that negotiates which tools and resources are available, sends model-driven tool invocation requests, and integrates responses back into the conversation. Think orchestrator, right? Like when you come to somebody in your organization and they tell you who you need; that’s what the MCP client does. You go to somebody and they say, well, you know, John can answer that question. The MCP client says, this MCP server has a tool that could probably help with this. On the other side of the MCP client is the MCP server. These are the small services you deploy to expose specific features. Each server can offer one or more resources, like local files, database rows, or external APIs for live data, and tools, discrete functions like search inventory, or calculate my risk for this particular thing, or analyze the stock market for trends. It uses well-defined parameters and structured outputs, and this separation lets you compose powerful, modular AI pipelines without hard-coding every integration.
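The tools a server exposes are advertised as structured metadata the client can fetch. As a sketch of what that discovery looks like on the wire, here is a JSON-RPC `tools/list` request and a possible reply; the `search_inventory` tool and its schema are invented for illustration, not from the demo:

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "search_inventory",
        "description": "Searches current inventory levels by product name.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "product": { "type": "string", "description": "Product name to look up." }
          },
          "required": ["product"]
        }
      }
    ]
  }
}
```

The `inputSchema` is what lets the client validate parameters before a call ever reaches the server.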
So let’s get into a little more detail about the workflow of an MCP-enabled system. I’ll walk through this diagram step by step. Let’s start with the host. We talked about this a little before: this is your development or user interface, a custom chat client or any application embedding the MCP client library. It’s responsible for orchestrating LLM calls and deciding when to invoke external capabilities. The next layer here is the MCP protocol channels. Between the host and each MCP server, you see arrows labeled MCP protocol. Under the hood this is just JSON over HTTP or WebSockets, extended with MCP’s method definitions for tool discovery, invocation and session management. The channels connect to the MCP servers. In this example, servers A and B connect to local data sources. You can deploy those on-prem or in your VPC. Each one is scoped to a particular local data source, maybe a SQL database, a file share, or an internal knowledge base. When the model requests a tool, for example, look up a customer record, the host routes that call over MCP to server A or B, which runs the query against the local store. MCP server C here uses web APIs to talk to remote services. Server C would sit at the perimeter and bridge MCP to external APIs, third-party SaaS, public data feeds, and cloud-hosted microservices. It grants your large language model access to Internet-facing capabilities while still enforcing your organization’s security and governance policies. Notice that the host never embeds credentials in its prompts; each server handles its own authentication using managed identities, API keys, or service principals. Results come back as structured JSON, so MCP clients can merge them seamlessly into conversation or code generation workflows. The key takeaway is that this client-server pattern lets you add, remove or upgrade data sources and tools independently.
Your large language model powered app stays lean; it avoids becoming a monolith with all this stuff coded into the host itself, and it’ll scale securely across both private and public endpoints. All right, a little more geeky stuff; this is probably for the super tech folks out there. Under the hood, MCP rides on JSON-RPC 2.0, so every tool call, resource fetch or prompt request is a standard JSON-RPC method call, no proprietary protocols. MCP didn’t set out to reinvent the wheel in terms of the communication mechanism. Each session is stateful, which lets your agents accumulate context over a particular conversation. After a host-client connection, they perform capability negotiation: the client learns which servers, tools and resources it can use. When a user issues a request, the client consults the model. If a tool is required, it sends the JSON call to the appropriate server with the tool name and the parameters. The server runs the function, whether it’s a database query or an external API call, and then returns structured results. The client merges these back into the large language model’s context so that the final reply is enriched with this live data. MCP also provides helpful developer utilities like progress updates for long-running tasks, cancellation tokens, standardized error codes, and structured logging for observability. Crucially, every tool invocation requires explicit user consent and respects fine-grained privacy controls, so sensitive data never leaks into prompts or model internals. When we get to the demo, I can show you what the explicit user consent looks like. All right, a little more of the deeper dive. These are some of the MCP security best practices. We’re almost done with this; let’s talk quickly at a high level so that you all stay awake, because security can be kind of boring to at least 95% of the people on this call.
I imagine. So, authentication: every MCP client-to-server connection uses your enterprise identity provider, whether that’s Microsoft Entra or OAuth 2, so no shared secrets are floating around. Authorization: we can use RBAC scopes at the tool level, so only approved roles can call high-privilege functions; everything else is denied by default. Consent: before a model can trigger any sensitive operation, MCP can surface an explicit consent prompt to your user, ensuring they’re aware of what data or action is at stake. You’ll see this again later in the demo. Metadata validation: each tool’s schema and metadata is cryptographically signed or hash-checked, which guards against malicious metadata updates or tool poisoning. Prompt shields: MCP integrates with content safety filters that sanitize model inputs and outputs and block hidden injection attempts before they reach your large language model. And logging and monitoring: all MCP calls, responses, errors and consent events are centrally logged. You can feed them into your security information and event management tools for real-time anomaly detection, and schedule periodic reviews to ensure no misconfigurations slip through. By leveraging these built-in features and following best practices, you get enterprise-grade security without reinventing anything. All right, last slide on the geeky stuff, I promise. I think it’s beneficial to understand exactly how the Model Context Protocol process flows. We talked about how each individual component of the AI agent workflow behaves, but let’s zero in on what specifically MCP does once it’s invoked. This spiral diagram lays out the exact life cycle of an MCP interaction, from discovery all the way through execution, so let’s walk through each step. Step one is identify the MCP server.
So your host, again a Visual Studio Code extension, chat client, UI, whatever it is, first scans its registry or service directory to find what MCP servers are available for it to use. Once it does that, it moves on to step two, publish metadata. Each MCP server advertises its own metadata, so it tells the host what tools it offers, along with their schemas, inputs, outputs, prompts and resource types. This metadata is versioned and can be fetched on demand. The third step is browsing the tools. The MCP client inside the host presents the user or agent with a catalog of available tools, with descriptions, parameter lists and sample usage. Your model can even ask to list all the data lookup tools if it’s looking for something specific, and you can interact with that listing process dynamically through your UI. Step four is understanding the requirements. When the LLM generates a plan containing a tool call, the MCP client inspects the chosen tool’s metadata to confirm required parameters, data formats and any preconditions, like needing a valid customer ID. Step five is authenticating securely. We discussed this on the last slide: the client obtains a fresh access token via Azure AD, OAuth or managed identity, and that token is scoped specifically to that server and tool. This ensures least-privilege access and no hard-coded secrets. Step six: finally, the client issues a JSON call over the MCP channel, passing the validated parameters. The server executes the logic, querying its data store or API, and returns structured JSON. The client merges that output back into the model’s context, and that completes the cycle. Every single MCP session follows the same six-step choreography, making integrations predictable, secure, and fully observable. Knowing this process end to end will help you design robust, enterprise-grade agent workflows. So that was it for the geeky stuff.
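On the wire, steps four through six collapse into a single JSON-RPC exchange. As a sketch (the `lookup_customer` tool, its parameters and the result text are hypothetical, invented for illustration), the call and its structured result might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "lookup_customer",
    "arguments": { "customerId": "C-1042" }
  }
}

{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "content": [
      { "type": "text", "text": "Customer C-1042: Contoso Ltd., status active" }
    ]
  }
}
```

The client merges that `content` back into the model’s context, which is the enrichment step just described.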
The sell is over, the explanation is over. Let’s do a little bit of a demo. So what I did, with help obviously from the Internet, let’s see if I can find it, was create an MCP server. Let’s go through that code quickly, so people who are interested can see the MCP server code. I’m assuming everyone can see my screen; if not, Amy, let me know and I will fix it. At the very top here, at the Program.cs file level, we bootstrap the function host. We use FunctionsApplication.CreateBuilder to spin up a standard Azure Functions host. So what we’re doing is telling this application that it’s going to run as an Azure function. It could just as easily run as a Lambda function in AWS, or a function in Google Cloud Platform, or locally, or we could deploy it to our own hardware if we wanted to. After we define that it’s going to be an Azure function, we call the ConfigureFunctionsWebApplication method, which wires up the HTTP, timer and other bindings that we need. Then we declare the tool metadata inline. In a single trigger attribute, we’re telling MCP, hey, we’ve got a tool named get snippets, for example, and we immediately chain it with WithProperty calls. I might be on the wrong file, hold on. Anyways, we’re configuring the builder, we’re configuring the tool, and telling it here are the tools you should be expecting downstream, and we’ll get into that code in a little bit. Finally, builder.Build().Run() ties it all together and spins up the MCP support that’s baked in, and that support is baked in thanks to the Microsoft.Azure.Functions.Worker.Extensions.Mcp package, which is in preview right now.
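For reference, the Program.cs bootstrapping just described looks roughly like this. This is a hedged sketch: the builder methods shown (EnableMcpToolMetadata, ConfigureMcpTool, WithProperty) come from the preview Azure Functions MCP extension sample and may change between preview releases:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Builder;
using Microsoft.Extensions.Hosting;

var builder = FunctionsApplication.CreateBuilder(args);

// Run as a standard isolated-worker Azure Functions app; wires up the
// HTTP, timer and other bindings. (The same code could be hosted elsewhere.)
builder.ConfigureFunctionsWebApplication();

// Let the host advertise the MCP tool metadata declared on the functions.
builder.EnableMcpToolMetadata();

// Declare tool metadata inline: the get_snippets tool expects a snippet name.
builder.ConfigureMcpTool("get_snippets")
    .WithProperty("snippetname", "string", "The name of the snippet.");

// Ties it all together and spins up the baked-in MCP support.
builder.Build().Run();
```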
Amy Cousland 23:04
Can you, um, somebody’s asking if you could increase the font size a little bit so they can see it better.

Ken Ullsperger 23:11
Better?

Amy Cousland 23:12
Yeah, that’s way better.

Ken Ullsperger 23:14
Hey, the scroll wheel does its trick. All right, let’s go on and look at some of the actual functionality of the application. So we decorate a couple of functions with...

Amy Cousland 23:15
Mm.

Ken Ullsperger 23:30
...these function names: get snippet and save snippet. What these do is take, basically, a key-value pair. Say I’ve got a key called snippet one and its value is “this is snippet one”; it’s going to store that so that when you call get snippet, you can pass it the name snippet one and it’ll say “this is my value, snippet one”. It’s basically a key-value dictionary, a remote dictionary that a large language model can use to store and retrieve data. And then, very simply, we have another function that just says hello. If you want to say hello to this particular MCP server, it will respond, “Hello, I am an MCP tool”. Again, super simple stuff, doing nothing exciting, but you can imagine these functions being drawn-out, long-winded business logic that is applicable to your organization and your company’s needs. There’s really not much to this application outside of that; a lot of the plumbing for MCP is taken care of inside the .NET framework and through these extensions. So we can run this locally, which I believe I am; it’s currently running, and I can show you, I believe this is the window. You run it, and that exposes these functions that I just showed you, get snippet, save snippet and say hello, and it is running here locally, at this port and at this address. So how do we go in and make sure it’s behaving the way it should? Well, there is an MCP Inspector tool we can use, which is very convenient.
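The three functions just walked through might look like the following. Again a hedged sketch: the [McpToolTrigger] and [McpToolProperty] attributes are from the preview Microsoft.Azure.Functions.Worker.Extensions.Mcp package, and the in-memory dictionary here stands in for whatever store the real demo used:

```csharp
using System.Collections.Concurrent;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Extensions.Mcp;

public class SnippetTools
{
    // A simple key-value store the large language model can write to and read from.
    private static readonly ConcurrentDictionary<string, string> Snippets = new();

    [Function(nameof(SayHello))]
    public string SayHello(
        [McpToolTrigger("hello", "Simple hello world MCP tool.")] ToolInvocationContext context)
        => "Hello, I am an MCP tool!";

    [Function(nameof(SaveSnippet))]
    public string SaveSnippet(
        [McpToolTrigger("save_snippet", "Saves a snippet by name.")] ToolInvocationContext context,
        [McpToolProperty("snippetname", "string", "The snippet name.")] string name,
        [McpToolProperty("snippet", "string", "The snippet content.")] string snippet)
    {
        Snippets[name] = snippet;
        return $"Saved snippet '{name}'.";
    }

    [Function(nameof(GetSnippet))]
    public string GetSnippet(
        [McpToolTrigger("get_snippets", "Gets a snippet by name.")] ToolInvocationContext context,
        [McpToolProperty("snippetname", "string", "The snippet name.")] string name)
        => Snippets.TryGetValue(name, out var value) ? value : "Snippet not found.";
}
```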
So if I go into, let’s see which one of these 20 windows I have open... So it is running here; it’s currently running and working well. Now I’m going to do something dumb in a demo: I’m going to stop it so I can run it again. This command here, the Model Context Protocol Inspector, will launch it, and it is a web app that runs, hopefully, here. There it comes. OK, good. So here’s the version that was running, and here’s the new version that just got spun up. What we can do here is connect to the local MCP server that I just spun up. The one I just showed you is running in this context, right? So we’re going to set the transport type to SSE, paste in the URL of the local function, and connect to it. It tells us we’re connected, so yay for us. One of the first things we can do, which is also what happens behind the scenes when you’re actually running this through a chatbot, is have the client and the server talk about what tools are available. We can click this List Tools button and see that we have three functions available to us: get snippets, save snippet and hello. From there we can click on any of these and run them. I’m going to run the hello MCP tool, and it responds like we thought it would: “Hello, I am an MCP tool”. So if we go into the code and change this to “Hello, I am an MCP tool for the demo”, we stop it and start it back up again. You guys probably can’t see this, but it’s building and attempting to launch. Super riveting, I apologize. All right, we can see it’s been relaunched, it’s about to start, it’ll expose these functions again, and when that’s done we can try to connect to it again via our Inspector tool. Let’s disconnect, reconnect, clear this, list the tools, run hello, and, I haven’t tried this yet, let’s see if it changes. It does: “Hello, I am an MCP tool for the demo”. So, yay.
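For anyone who wants to reproduce the Inspector session shown here: the Inspector is an npm tool, launched separately from the .NET project. The commands below are a sketch; the SSE endpoint path is the one the Functions MCP extension exposes by default locally, and your port may differ:

```shell
# Launch the MCP Inspector; it opens a small web app in the browser.
npx @modelcontextprotocol/inspector

# In the Inspector UI, set the transport to SSE and connect to the local
# function's MCP endpoint, e.g. http://localhost:7071/runtime/webhooks/mcp/sse
```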
So now when you update this, you can imagine that any large language model currently using it will get this new context back to use in generating a response for you. And I know “Hello, I am an MCP tool for the demo” is not exciting, but what you have to understand, what makes this remarkable, is that I went in and changed a simple program, and that’s going to alter how the large language model itself behaves, right? Because it’s going to find this data and use this data, and because I control just this small piece of it, which maybe hundreds of large language model hosts across the world all use, they’re all going to function differently because I changed what data comes back to them. From a business standpoint, if you’re working in finance and you want to change an algorithm for how you handle stock trades, every large language model is now going to behave differently because you changed that algorithm. So that was creating a server. Kind of exciting, right? Now we have a server, but how do we actually go in and interface with that server itself? One of the simplest ways is to test it through Visual Studio Code and the GitHub Copilot chat. We can do that locally, so I’ll show that locally. I can show you how this gets deployed remotely, though I’m not sure that’s too exciting, and I can’t show the externally published version being used, because I haven’t set up the demo to reference it. But if I come over here to the build and publish selection, I can hit this Publish button and it’ll push this out to Microsoft Azure and run it as an Azure function remotely. So it’s not local on my machine; it’s available to anyone in the world that I share the access tokens with. So there’s the security ingrained in there; it’s not just open to the world.
You have to actually understand how it’s wired up to access it. But let’s show something a little more fun. Let’s go into Visual Studio Code, and specifically Copilot. If I want to add an MCP server that my Copilot can utilize when I’m chatting with it, I can go into the command palette and say MCP: Add Server. It’s a local server running over HTTP, so I enter the URL, which I’ve already done; here’s the URL of it running locally. When I do that and press enter, it generates this configuration inside this mcp.json file that lives inside your workspace. What this does is tell Copilot that this is a server available for it to access information from when we use the chat feature. So let’s show what that looks like quickly. Right now we’re in ask mode; if we go into agent mode, it’ll allow us to access the tools this MCP server provides. So let’s say, what should we say? Let’s say hello. I should probably make this bigger too, right? Let’s make that bigger... Copilot isn’t behaving, so I’ll talk you through what I’m doing here if you can’t see it all that well. I’m typing “say hello” into the chatbot. The chatbot tells me it’s going to use two references: this file here, and the Copilot instructions that are in the markdown file. In here is where it says: to run the say hello tool, you should use the MCP tool, which utilizes this configuration. OK, please run the MCP tool to execute the say hello action; if you need help running the MCP tool, let me know. All right, so there should be... Oh, it looks like it should ask me to continue right here, and it didn’t. So that’s a little demo snafu. Maybe I need to re-add the server; I can try that, or I can reload the window. I might have had something.
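As an aside, the generated mcp.json is roughly this shape. A hedged sketch: the server name is whatever you entered in the command palette, and the URL and port are your local function’s (the SSE path shown is the Functions MCP extension’s default):

```json
{
  "servers": {
    "local-mcp-function": {
      "type": "sse",
      "url": "http://localhost:7071/runtime/webhooks/mcp/sse"
    }
  }
}
```

With this file in the workspace, Copilot’s agent mode can discover and invoke the server’s tools.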
Let’s just give this a shot and see what it does. I want to call it... All right. So again, the server, maybe it’s running now, maybe it wasn’t running, I don’t know; I think I missed something. Let’s try this again. Let’s start a new chat; we are in agent mode. Let’s try say hello again. All right, so what I want to do is ask it to list the MCP functions for me. It discovers the actual MCP server and sees that these functions are available for it to use. Try it one more time: say hello. Well, it doesn’t think it’s running, but it is running. I’m assuming that’s why it’s not letting us actually go in. I believe it’s running... yeah, it’s running, but for some reason it doesn’t think it is. I’m going to reload quickly here and see if that takes care of it. Nothing like demo screw-ups. Developer: Reload Window, and see if that’s taken care of it. Start a new chat. Say hello. Are we in agent mode? We are in agent mode. It worked 30 minutes before I started this presentation, I promise. Once you type say hello, you see this add server button down here; it’ll ask you to continue. And if you press continue, this is what I told you you’d see later on in the demo: it’ll actually go in and invoke that say hello command, and it’ll return the same string we just saw over here in the Inspector. It will return that information here in the chatbot. And that’s sort of how you would interact with the MCP server from your own UI. So you can imagine this working from here, or from a custom chatbot. We’ve developed a lot of custom chatbots for organizations across the country, and some of the things we’ve done is route commands to different services using an internal orchestration tool.
But now with MCP, we can rethink our architecture for how we actually implement those solutions. So it’s a pretty exciting protocol, and it’s going to allow us to enhance large language models’ functionality more efficiently and more securely, and provide more value more quickly. We’re excited about that. That was the demo; let’s move past it quickly and talk a little bit about the future. All right, so you’ve seen MCP in action. Let’s turn our gaze to the horizon and see how MCP adoption promises to reshape how organizations like yours build their AI. Democratized integration: any team, data science, DevOps, business operations, can spin up AI-powered features without deep LLM expertise. Again, you saw me just create a small C# Azure function, maybe 40 lines of code total; you definitely don’t need large language model experience to do something like that. Faster innovation: with plug-and-play tool ecosystems, developers move from concept to production in days, not quarters, especially once you have your MCP ecosystem set up. Then creating a new tool or altering an existing one is trivial: the framework is there, the deployment pipelines are available, all you need to do is change the functionality, and the large language models that use that MCP tool will automatically reap the benefits of those changes. Cross-vendor interoperability: you’re no longer locked into one AI vendor’s stack. GPT, Claude, custom on-prem models, they all speak MCP, all of them. Ecosystem standardization: imagine an App Store for AI tools, all vetted, versioned, and shareable across enterprise or public marketplaces. You could have an App Store of MCP tools; you could literally sell the functionality your organization provides to large language models
across the world, not just providing value to your own organization, but to everyone else, and charging a premium for it. It’s a different way to deliver your intellectual property in this new AI world. It’s an architectural shift. MCP is definitely an architectural shift: we’re moving away from monolithic LLMs that are just trained on a ton of data, and into an era where LLMs are trained but then enhanced post-training with all the information they can get from MCP tools. It reduces technical debt: a stable protocol means the connector code lives on even as the tools, teams, and functionalities evolve. There’s regulatory alignment: uniform consent prompts, structured logs, and role-based security make compliance with GDPR, HIPAA, or SOX far simpler. And there’s workforce specialization: developers focus on crafting domain-specific tools, while AIOps specialists handle security, scaling, and governance. So your developers don’t have to worry about all the plumbing that goes into adhering to AI security, scaling, and governance requirements; they can just focus on the functionality. So here are a few fun real-world use cases for MCP we can talk about briefly. Finance: imagine a trading assistant that calls out to real-time risk analysis tools, portfolio optimizers, and news sentiment APIs, all coordinated via MCP to help make split-second decisions. Healthcare: how about clinical support agents that can securely query health information systems, medical imaging services, and drug databases in one session, providing physicians with up-to-the-moment insights about their patients. Supply chain: autonomous planners stitch together IoT telemetry, demand forecast models, and routing tools to adapt shipments on the fly.
Customer service: chatbots can pull a customer’s purchase history, open tickets, and contract terms in real time without manual context switching. In the education space, adaptive tutoring systems can access rich content libraries, assessment engines, and student performance databases to personalize learning paths. For climate and environment, agencies can fuse live sensor data, weather prediction models, and emissions analytics to optimize resource usage for policy planning. In the public sector, policy analysis bots can aggregate bills, public comments from social media, budget figures, and sentiment to brief local or federal legislators in a matter of seconds on the decisions they need to make. In the media and entertainment space, creative co-authors can browse asset libraries, social trend trackers, and rights management tools to help streamline content production. And that’s just a hint of what MCP’s transformative power can do across every function of society. So it’s been about 45 minutes, and I thank you for staying with me this long. If what you’ve seen and heard today interests you, here are the next steps you can take. One, you can schedule a quick 30-minute call with our team for an MCP agent collaboration discovery session. We’ll map out how MCP agents could tackle your biggest operational challenges. We can also collaborate with you in a focused 30-minute workshop where we’ll work side by side to pinpoint real-world use cases tailored to your AI platform, and identify where multi-agent workflows drive the greatest return on investment for you. Finally, if you’re part of a leadership team, we offer strategic sessions to align on the future. You can define your vision for multi-agent AI, explore governance models, and chart the roadmap to enterprise-scale adoption.
Again, if you’re interested in learning how to leverage this technology to enhance your current AI offerings, or even start on your AI journey from scratch, contact us, fill out the survey that you’ll see, and we can help you realize your goals. All right, everyone, that’s ballgame. Time to exit, stage right. I appreciate your time and attention, and from everyone here at Concurrency, we hope to hear from you soon. Amy Cousland 43:46 Ken, thank you so much for everything. I’m going to go ahead and end the event. That survey link is in there; we’d love to hear some feedback on today’s session, and let us know if we can reach out to you. Thanks so much.