View Recording: What's Next in AI? Learn about AI Agents.

May 2, 2024

Are you curious about the future of artificial intelligence and its potential to revolutionize the way we interact with technology? Join us for an insightful webinar where we'll explore the latest advancements in AI and delve into the exciting realm of AI agents and stateful AI assistants.

In "What's Next in AI? Building Stateful AI Assistants That Take Action," we'll cover:

- The evolution of AI agents and their role in transforming user experiences
- Understanding stateful AI assistants and their ability to take proactive actions
- Real-world applications and use cases of stateful AI assistants across industries
- Best practices for designing and building robust and intelligent AI systems

Whether you're a seasoned AI professional or simply interested in the future of technology, this webinar promises to be an enlightening journey into the cutting-edge developments shaping the AI landscape. Don't miss out on this opportunity to gain valuable insights into the next frontier of AI innovation.

Transcript

We're going to talk today about AI agents and where we go from here. Talk about an interesting conversation: this is the future of how we're going to be leveraging AI to accomplish more within our organizations. To kick this off, I'm going to spend most of the time talking conceptually about what agents are, and then we're going to do a little bit of a demo of how you go about using agents in your organization to achieve more, and understanding some of the frameworks we'll be using. A very, very interesting conversation.

Let me introduce myself. I'm Nathan Lasnoski, Concurrency's chief technology officer. I've been with Concurrency for 22 years and consulting in AI for about 10, working primarily with executive teams on how they align AI to the strategy of their business. There's a QR code on screen if you want to connect on LinkedIn; I would love to be talking with you and having more conversations about what's happening within the industry. Also joining me today is Vikrant. Vikrant is a data scientist at Concurrency, and he's going to be demoing a few of the AI capabilities as we have the conversation today. As we dig into this, I'd love you to assertively use the Q&A and chat features and ask any question you want. Dig in. We have a couple of people on this call for that very reason, so we can answer questions as we go and dig into the conversation on AI agents. And as a vehicle to get started with that, I'm going to launch... hmm. Amy, you are seeing that QR code, correct? You're seeing my screen?

Amy Cousland 1:55
Yes. I apologize, I was on there. Did I change it to that? Did I do that?

Nathan Lasnoski 1:59
No, no, no. You're seeing my screen though, correct?

Amy Cousland 2:00
OK. Yes, I am.

Nathan Lasnoski 2:01
Just making sure everybody's good.

Amy Cousland 2:02
We see your QR code, perfect.

Nathan Lasnoski 2:03
Alright, good. Thank you. OK, so I'm going to launch a poll, and this poll is going to give me a little bit of perspective on how far this audience has been in the space. OK, let's launch that guy. Alright, I'd love for all of you to pick one of these options. Just give me some perspective on what you have presently done in the AI space around agents, and what your perspective or experience has been so far in that domain.
So go ahead and fill that out; I'll give you about 10 seconds here. A little bit of OpenAI, a little bit of LangChain out there, OK. OK, that's great. Thank you. Fascinating. So, certainly, the vast majority of you are nothing but excited, and a few of you have been playing around with basic OpenAI and retrieval-augmented generation patterns, which is fantastic. What this conversation is going to dig into is where we go from here, and this is really where things like LangChain and Semantic Kernel come into play, as well as other models for going down the agent path.

So let's talk a little bit about: what is an AI agent? An AI agent exists on a continuum. On the far left-hand side you can see the idea of a chat bot. This is what many people have historically pictured when they think about agents: simple back-and-forth chats. I need some information about a particular thing; I'm being led through the process of requesting a return on a battery I bought; I have a question about an internal document; a customer-service request is being submitted into an organization. That is the chat-bot-into-retrieval-augmented-generation zone many of you are starting to have some experience with.

One of the things you might have found in that space is that it's very transactional. It has very little memory. It's grounded by data, and by an ability to interact with that data, but there isn't much long-term retention of that conversation. It exists at a point in time, and then it disappears. It never really becomes a long-term relationship with that bot or agent you're interacting with, and it can't perform, say, long-term activities. I need information on something, it provides the information back, and that's the end of the conversation.

Then you start to move into this idea of a copilot, which is the idea that you're interacting side by side on a task. As it moves down that continuum, it starts to have a little bit more memory of who you are, because it's grounded in a longer history of data about you. If you think about M365 Copilot, for example, its memory of who you are is grounded in the historical basis of your O365 tenant. Everything that exists in your email, everything in your body of work, is in a sense a memory of its relationship with you, even if you haven't interacted with it much. So it starts to have more than that one-time interaction you might have built through a chat bot or a RAG pattern. Now, not only is the copilot able to perform a task, it has a certain extent of relationship with me over a period of time and can learn from me. There's a feature in Copilot called "sound like me": I'm writing an email, make it sound like me. The idea that it sounds like you is built on all the emails you've sent over time, so it can start to create something based on that relationship it has with you.

But then the movement to the AI agent gets into this fully autonomous sense, where it truly can respond to activities: take those activities, perform them, and then bring the result back to the person who initiated the activity. And even the fully autonomous end has a range of capability that exists within it.
Think of it like an intern, right? I've got an intern, and the intern has a set of capabilities that I've come to know. Maybe when I first start working with them I'm unsure about their capabilities; I come to learn them. They become more capable of performing a function, and I start to trust that person to perform action X, Y, or Z. They become trained to do that thing, up to a point where maybe you have an architect on your team who is truly, fully autonomous and performs the tasks you've delegated to them. So there's a range that AI agents exist in, too. But think about an AI agent as this idea of a truly self-contained function that has both memory and other capabilities. I'll talk about what those are.

So, things an AI agent might do. An AI agent might answer whether a client has paid their bill. It might return the right time for inventory-level changes to occur, based on its historical knowledge of the demand and inventory forecasting that's happening. You might order inventory based on instructions, so you might get to a point where you're truly delegating the activity of ordering inventory, or preparing the order of inventory. Or, in more of a copilot scenario, a person actively has it perform work that they would otherwise have done. Or preparing a client brief for an advisor: we met with a financial firm the other day, and they have all these advisors who each do it a little differently, and it takes a ton of manual effort. What if I can hand that off to an AI agent to perform that task, and it brings the prepared brief, aligned to the ethos of the financial advisory firm, back to the advisor? That lets the advisor bring it back to their customers in a way that's aligned to the ethos of the overall firm, combined with the intuition and capabilities of the person, not just trading one off for the other.

Or maybe it's catching quality errors in manufacturing: I have an AI agent whose job is to look for issues coming off the line. Maybe the sticker's not on right. Have you ever bought anything at Costco, for example, where you've got the chicken, you go to check out, and there's no sticker on the chicken? Right, you can't actually check out; the guy has to run across the store to go get it. Well, AI agents can perform the task of doing quality analysis now.

How is that different from some of the tools that might exist today? The big difference, and we'll talk about this later, is its ability to function a little bit closer to the way a human might perform that task: memory, the ability to plan out a set of tasks necessary to complete it, the ability to organize itself in an autonomous sense, not necessarily having a human tell it exactly what it has to do every moment. It's kind of like my six-year-old when he's cleaning the house. I say, "Go pick up the toys on the ground in the family room." Then he falls on his back and can't figure out what to do by himself, so I have to tell him exactly which toys I want him to pick up. Whereas my twelve-year-old can go do that task completely independently: "Ethan, go clean the family room." He just knows exactly what my expectations are, so he goes and does it. AI agents exist on that same continuum, and we need to think about how we delegate to them as they move more toward the independence that might exist in this space.
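To make the statelessness point concrete, here is a minimal sketch of the difference between a one-shot, RAG-style call and the short-term memory an agent carries. It assumes the OpenAI Python client; the model name and the plain Python list standing in for agent memory are illustrative only:

```python
from openai import OpenAI  # any chat-completion API follows the same shape

client = OpenAI()

# Stateless, chat-bot/RAG-style call: nothing carries over between requests.
reply = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Can I return the battery I bought?"}],
)

# Agent-style short-term memory: the accumulated history itself is the state.
history = [{"role": "system", "content": "You are a returns assistant."}]
for turn in ["I bought a battery last week.", "It stopped working. Can I return it?"]:
    history.append({"role": "user", "content": turn})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    history.append({"role": "assistant", "content": reply.choices[0].message.content})
```

Everything discussed later, from vector stores to planners to long-running tasks, is about managing that state more durably than an in-memory list.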
So you should see in front of you something you should recognize. Remember Star Trek IV? The Voyage Home, right? Scotty and Dr. McCoy go back in time, and they have to transport these whales in a spaceship to save the future from this thing that's making big shock waves. It sounds completely ridiculous to even explain at this point. But Scotty goes up to the computer, because he's going to program the design for transparent aluminum, and he says, "Hello, computer." He's talking to the computer, right? And the guy says, you're insane: this is a mouse, you're supposed to click on it and type on the keyboard. Scotty doesn't get that, because he's so used to a world where you delegate an activity to an autonomous AI agent and it performs the task for you. I'm not typing the things anymore; I'm simply conveying my intent to an AI agent, and the agent performs the task based on that. And you might say, this is so far out. No, this is now. These are things we are doing now, and in a sense it's the architectural advancement around how you build AI platforms. We'll get to this in a second, but people are actually building agents incorrectly because they're trying to build too much of that infrastructure themselves.

Closer than you might think is Star Trek: The Next Generation, where you had situations like Geordi saying, "Run a level-4 diagnostic on the warp core." That's a complex set of activities that needs to be performed independently, pluses and minuses, checking this and that, all the stuff that has to happen, and it essentially runs over an hour and a half. But he's told the AI agent to go perform that task. Again, a representation of handing something off, and in this case it's still kind of a copilot, even in that world: they're reviewing the results before anything happens. Still an interesting use case, or idea.

And last, if you are also still a Star Trek fan (I'm sorry for all the non-Star Trek fans on the call), there's this really interesting situation where Geordi has to solve a problem happening on the Enterprise, and he creates a holodeck recreation of the Utopia Planitia shipyards, where the designer of the Enterprise is instantiated as a virtual version of herself. That becomes a whole complicated issue in the episode, but what I thought was really fascinating is that he's collaborating actively with an AI agent to solve a problem. In this sense, it's not just "go do this thing and return it to me"; it's a very active working relationship with an AI agent to make things happen.

So where are we? Not quite Montgomery Scott talking to the computer. Certainly the middle zone, but even more so, I think we're getting close to what was possible with the AI agent of Dr. Brahms. All that said, this next slide, which you should start to see, is the delineation between an AI agent and its capabilities, and what copilots are doing now.
So think about the execution power, or capability, of what M365 Copilot does at the moment, or what a traditional existing-level copilot might be doing, in terms of its ability to take action on a thing we've given it. And think about autonomy, whether assisted or potentially even full autonomy. You can think about autonomy in terms of self-driving cars, which of course are a huge point of contention right now, but it is going to happen; it's going to take hold as the technology continues to evolve. You don't have ChatGPT actually flying your plane, thank God, but you do have an autopilot that will take a portion of that flight where the person's hands aren't on the wheel. And there are similar dimensions, such as task initiation: autonomous versus user-initiated, who's organizing and automating the task. Again, a great scenario where autonomous AI agents, or AI copilots (essentially agents that are not autonomous), are performing activity within the space. I'm not going to go through all of these, but I think one of the biggest pieces here is the ability to handle complexity in a decision-making process, and also the ability to handle the ambiguity that exists within that decision-making process.

All of this is to say that AI agents are a quickly evolving conversation. So let me give you perspective on a couple of key frameworks, and Vikrant, please feel free to come off mute as you have things to mention here.

There are a few frameworks you might already be somewhat familiar with. LangChain and LangGraph, on the left-hand side: this is an open-source framework that facilitates single-function AI agents, up into multi-agent experiences that allow them to chain together. Very interesting, very open source, a lot of content around it. A great framework, one of the major two or three that are really taking hold right now.

The second, which it looks like nobody on this call has played with yet but which has certainly gained a ton of acceleration, is Semantic Kernel. About half of the Microsoft copilots that have been built are built on Semantic Kernel. The inside baseball on that is that the teams with the greatest velocity right now are the ones that built on Semantic Kernel; the ones that didn't build on it mostly existed before Semantic Kernel was a thing. But it's certainly a reason why they're gaining forward movement.

Now, one of the things that's interesting is that both of these frameworks have some drawbacks. One example of a drawback, and Vikrant will talk about this, is the customizability, the way it handles requests, the extent to which you have your hands on the dials. That might be a reason why you'd even build your own framework for doing AI agents. But be very careful if you go down that route, simply because if you start building your own framework, then it's yours alone, and you're rebuilding something other people are building at broad scale across the industry.

And then this last one, in the middle: AutoGen. This comes from Microsoft Research, and I'll talk a little bit about it.
Yeah, I expect most of AutoGen's capabilities to move into Semantic Kernel rather than it being a product in and of itself. That's my guess; I don't know for sure. But AutoGen is certainly an output of Microsoft Research, and it's a vehicle to talk about stacked agents: agents that are talking with other agents and collaborating with other agents. So certainly a huge point of interest and capability right now.

OK, so this leads us into a conversation about goal-based versus utility-based agents. I contemplated whether or not to include this, because it is sort of pseudo-academic in nature, but I think it's important to think about. Have you ever heard the phrase "start with the end in mind"? Many of the agents we're now building, and in a sense this can actually be a little disconcerting, are built on: what am I trying to achieve? What's the thing I'm actually trying to accomplish? That can be a spoken thing, a described thing, and in a sense that's also why this is so powerful, but also so challenging. Think about it: why do I like ice cream? Why do I like to do triathlons? Find the spot in my brain that describes the end game behind those. Very difficult, right? We're only now starting to understand the brain. To a certain degree, what's happening in these large-language-model-plus systems isn't directly analogous to what happens in the human brain, but it's certainly closer to it than what we did with demand and inventory forecasting, traditional models that are very deterministic.

Goal-based agents focus on that kind of idea: what is the end game, and how do I work toward that goal? Utility-based agents don't have that fixed goal in mind so much as they evaluate against a metric, like minimizing energy consumption or maximizing profits. Whereas, say, a navigation agent would be goal-based: I want to get to point B; I don't actually care about some of the other qualities of the trip. Maybe I care about getting there fast, but getting there fast isn't the metric. Getting to point B is the metric, with other qualifiers. Most agents we're thinking about live in that goal-based zone, which doesn't preclude us from thinking about utility-based agents as well. You'll see a lot of content on this; I'd be happy to share the deck afterward to let you dig into that very academic, but also very intentional, conversation.

So what are the components of an AI agent, if I'm building one? We've got the why behind this now, right? I want a more effective delegation to an AI agent, to perform something, or get me something, or return information for something. Or, even more concretely: I'm building a case-management system, or a customer-service system, or a quoting system. How do I do that well? What are the now-repeatable components I'd use to do that well? The components of an AI agent are these. There are plugins; integrations might be another way to think about them: I can talk to different things to return information to my AI agent. There are planners, which put together the steps to accomplish my task. And there are personas: what is the character of my agent?
How does my agent interact with you? Is it very static? If you did your taxes on TurboTax or H&R Block this year, you noticed that they each have an AI chat bot there. Super static in one of them, not very static in the other. Very interesting how they made those decisions, but they both have their own persona: one is very dry, and one is a little bit less dry, and they each have their own way of interacting. You have to make a decision about how creative your AI agent is, and how much you want it to be coloring outside the lines or not. A very interesting part of the conversation.

As you're thinking about this, there are a couple of things I find fascinating, but it all requires a certain degree of a new mental model. When you are writing and creating large-language-model AI platforms, you find that some of the things you might have done before are harder, because it's less deterministic. But the flexibility and creativity that exists in this space is incredibly powerful. You run into limits of what can't be done, and that's what allows us to think about why we need agents: because you need tools that facilitate the AI platform in a way that's similar to partnering with a human, but not a human. Think of the long-term conversation we were having earlier: your initial RAG pipeline is short-term; it remembers only that one interaction. Think about what we're talking about here: infinite chats; long-running tasks that might run over days; activities that need to be performed over a period of time, or even completely independently while I just monitor. How do you do that well? That's the big question mark, and this is really what we're talking about: the architecture to do that well.

One thing we're going to use as an example for the following content is Semantic Kernel. Now again, this isn't the only way to do this. LangChain and LangGraph are also, sort of, equally ("equally" is maybe a strong word) great platforms to think about as well, but we'll use Semantic Kernel as the example to talk through this topic. What is Semantic Kernel? It is an open-source, lightweight platform for agent creation. Its job is to keep us from all creating the same framework, or creating our own frameworks, and instead to facilitate building upon a framework that many can build off. Does it cost anything? No; it's an open-source framework that you can use to make progress here. Microsoft is contributing to it rather assertively, and it's how they build their own copilots. So if you want to know how they build their own copilots: they're using Semantic Kernel at this point. There's a GitHub repo, and there's a Discord community where you can get right into conversation with the product team, if you want to dig in, or if you want to know how we're building things; this is certainly a part of it.

So what does an agent contain? That's a big question, and the answer is: quite a bit. The first thing an agent contains is context. It exists in a space that's specific to its goal. We'll talk about purpose-built agents in a bit, but think about the idea that agents are a lot like people, in the sense that they have things they do better and worse. We know that if you just go out to copilot.microsoft.com or ChatGPT, its ability to perform a task might not be as excellent as if I gave it specific grounding to do that task really well.
You get accuracy and precision at that specific task, while the shared capabilities provide some broadness as well. And if you've been following any of the recent Microsoft conversations: even small models can perform roughly equivalently to large models if their training data is excellent training data. There's a lot of forward movement in what's happening there. So the first thing an agent contains is the context itself.

The second thing it contains is recallable memory. Think about recallable memory in the context of both short-term and long-term memory: this idea that there's stuff I've been doing over a period of time, but then also potentially long-term recall, and we'll talk about what that is. (It is funny, though: short-term memory, long-term memory. Does my AI agent have memory loss? We'll talk about this.)

Third, there's the ability to create a plan. More sophisticated people have more sophisticated abilities to build plans, right? Go back to the clean-up-the-house example. Every Saturday morning, I sit down and write everybody's jobs for family cleanup on this little chalkboard. So-and-so is cleaning the garage, so-and-so is cleaning the basement, these are the vacuuming jobs, this person is cleaning out this thing. Someone has to create a plan, right? AI agents are becoming capable of creating plans. Simple RAG pipelines do not have this. First off, they don't have recallable memory to any large degree, but they also don't have the ability to create plans, at least not in a way where they do anything with them.

Then you have interactions with other parts of the platform through APIs. You have functions that come out of the box as plugins, and then you may even build your own custom plugin that exists in that space.

Another picture of this idea: let's pretend you have built an agent that facilitates long-running customer-service questions, answers, and activity that needs to get researched. You've probably built a front-end experience, whether that's a Teams chat or an application experience, that lets you hand things off: go research this, find potential answers, bring them back to me. (The funny thing is, it often happens so fast that it's not even a long-running task.) But you have this front end, and what's happening is you're invoking the AI service, and it is selecting the actual tasks that need to happen. It's selecting the model, doing the templating, and rendering a set of prompts. And this is a cyclical process; this is something I want everybody to get. Once you enter Semantic Kernel, it's like a gigantic while loop, OK? It doesn't stop doing what it's going to do until it has completed your task. That might mean you have an integration with outside things, like an AI model from OpenAI, or something from the Hugging Face Hub, or something you built yourself, that gets ingested and used to return results back to the application, with functions as the vehicle to do these things. So you could build all of this yourself, or you could leverage Semantic Kernel, or something akin to it, to accomplish these tasks.
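That "gigantic while loop" is worth seeing in miniature. Here is a toy, self-contained sketch of the cycle: pick a step, run the matching plugin, feed the result back into the context, repeat until done. The fake_planner and the two plugins are stand-ins for what a real kernel's LLM planner and registered plugins would do:

```python
# Toy version of the agent loop. Everything here is a stand-in:
# a real kernel would ask an LLM planner to choose the next step.

def lookup_order(order_id: str) -> str:
    return f"Order {order_id}: battery, purchased 2024-04-20"

def check_return_policy(item: str) -> str:
    return f"{item} is returnable within 30 days of purchase"

PLUGINS = {"lookup_order": lookup_order, "check_return_policy": check_return_policy}

def fake_planner(history: list) -> tuple:
    """Stand-in for the LLM planner: picks the next step from context so far."""
    if not history:
        return ("lookup_order", "A123")
    if len(history) == 1:
        return ("check_return_policy", "battery")
    return ("DONE", None)

history = []
while True:                               # the "gigantic while loop"
    name, arg = fake_planner(history)
    if name == "DONE":
        break
    result = PLUGINS[name](arg)           # invoke the selected plugin
    history.append((name, arg, result))   # the result grounds the next turn

print(history)
```

The real thing swaps fake_planner for a model-generated plan and PLUGINS for registered native and prompt functions, but the control flow is the same.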
So, I mentioned I was going to talk about memories. Misty, watercolor ones. You have this idea of agent memories, and we have different vehicles to make that happen. For example, you might have long-term memories that need to exist, and this is where a vector database comes in. You might use a custom vector database that you built yourself. Think about provisioning a Cosmos DB database; that database contains data, and you're building an ability to look up data within that platform. You might even have it query business-system data when it's asked certain questions. It might be: call this business system, return this information, which I can then use as part of what I'm trying to solve. So your agent-memory interaction might be something to that degree. Or it might be: I'm not going to take data from my business system and put it into another database; I'm actually going to call that source of truth directly to return information back, and in a sense that is the memory of what you're asking for.

But even in addition to that, the agent has its own memories of itself. I'm not trying to personify this or make it into a human-analogous thing, but think of its ability to retain information based on the historical conversations you've had, or even the ways it has been successful in the past, so it can get better. You might build your own integrations or your own databases, or you might use Azure AI Search, which for the most part is where a lot of companies are going: essentially a vector database optimized for cost and for returning results, which gets connected into your agent.

This is a little different, by the way, from standard RAG. Standard RAG uses Azure AI Search too, but it's doing so only to return information. Here you're also thinking about memories in the context of other long-term relationship data you have, about solving problems, or about the interaction. The goal of AI agents is not simply to hold that data in a stateless way, but to maintain some kind of state regarding the relationship with the end customer or user of the system. Super interesting, and certainly part of the framework.

So how does that work in practice? You're combining that with plugins, and plugins perform certain tasks. That might mean your application; it might mean M365 Copilot. This is why this is all contiguous, right? You might be interacting with M365 Copilot, and then it provides a relationship to Semantic Kernel, back and forth, with this ability to tie in. So, pausing here for a second, I think you're going to find there are varieties of ways to do additional work. Things like math, right? In Semantic Kernel, we all know that large language models aren't good at math. (Neither am I, but let's just take that as it is.) So I hand that off to an action, and it performs that action. How are we solving for that? Well, there's a plugin called the Math plugin, or you can build your own, or use any other capability you've tied in to perform mathematical operations on something that needs to be done. Or you might hand off to another agent to perform that task, which makes it super interesting in that context too. So this is something that's going to continue to expand.
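Here is what registering a plugin like that looks like in Semantic Kernel's Python SDK. A minimal sketch, assuming a recent 1.x release: the decorator and add_plugin call are real, but the API has shifted between versions, so treat the details as illustrative.

```python
import semantic_kernel as sk
from semantic_kernel.functions import kernel_function

class MathPlugin:
    """Native functions the planner can route to when the model shouldn't do the math."""

    @kernel_function(name="compound", description="Compound a principal at a rate over n years")
    def compound(self, principal: float, rate: float, years: int) -> float:
        return principal * (1 + rate) ** years

kernel = sk.Kernel()
kernel.add_plugin(MathPlugin(), plugin_name="math")
# A planner can now answer "what does $100 at 10% become over 20 years?"
# by calling math.compound(100, 0.10, 20) instead of guessing at tokens.
```

The function's name and description matter more than you might expect: they are the text the planner reads when deciding which plugin to call.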
In full transparency, I think LangChain has done a good job of having a bigger variety of plugins, but I think Semantic Kernel is a little bit better organized in terms of how it attacks the problem, and it will probably gain velocity in the bigger space long term.

Also integrated into this is the idea of planning. Like the job clean-up board for the home we talked about, your agent is going to have a planning capability. There's a capability in Semantic Kernel called Handlebars. Handlebars templates essentially give your agent guardrails: here's the wall, here's how I want you to stay on track, which is the sense in which "handlebars" is meant. It's a prep for how the agent is going to create its plan. The interesting thing about this is that it's all English language. Isn't that funny? "Keep the template short and sweet. Be as efficient as possible." You're describing this in a way that isn't a mathematical formula, although behind the scenes it is. You're essentially describing what it is that you want, and this is distinct from the system prompt; it's something else that goes along with the agent's plan.

So then what happens is, when it builds a plan, it constructs an output that you can export, to understand what was actually built as the plan. You ask it to do a particular thing, and it creates this plan in front of you. You can actually see a version of this today: if you go out to copilot.microsoft.com and ask it to solve a simple mathematical question, like "I've got $100, and if I put that at 10% year-over-year interest, what does that $100 turn into over 20 years?", you'll notice it takes that information and actually outputs the plan of the math it's going to do, and then it does that math in the interface. That's very similar to what happens here, except you have a little bit more control over the actual planning process.

Another thing you'd do as you're building an agent to perform a task is give it a personality. In a very non-human sense, of course, but it still has a way of interacting. This is also similar to a system message; in a sense, it is the system message: how do I want it to behave? "You're a friendly assistant who likes to follow the rules. You will complete required steps and request approval before taking consequential actions." You're giving it, again, a human sort of tint. This is also why people can sometimes be uncomfortable building agents: wait, how do I set boundaries? You can put guardrails around these things, but again, you're describing them in a human-centric way, which is fascinating, and another part of Semantic Kernel and how you go about doing it.
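To ground that $100 example: the "plan" the model writes out is just the arithmetic 100 x 1.1^20, roughly $672.75, and the point of planners and plugins is to have code, not token prediction, execute it. A tiny illustration; the persona string and step list here are invented for the example:

```python
# Persona: effectively the system message that shapes every interaction.
PERSONA = ("You're a friendly assistant who likes to follow the rules. "
           "Complete required steps and request approval before consequential actions.")

# The kind of plan copilot.microsoft.com writes out before doing the math.
plan = [
    ("identify formula", "future_value = principal * (1 + rate) ** years"),
    ("substitute values", "100 * (1 + 0.10) ** 20"),
]
for step, detail in plan:
    print(f"{step}: {detail}")

print(f"compute: ${100 * 1.10 ** 20:,.2f}")   # -> compute: $672.75
```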
So, before we get into Vikrant demoing a few things, I'd love to talk about how this all relates to multi-agent frameworks. What you can think about in terms of multi-agent frameworks is: I might have an agent that performs a task, but perhaps I have agents that are skilled at certain things, and I start to hand off tasks between agents that perform those activities. Semantic Kernel can be built to do that; so can LangGraph; and AutoGen is sort of the master Microsoft Research project around this concept. At the end of the day, its goal, and I don't know if this is scary or interesting, is this: imagine you have an org structure made up of roles that are all AI agents, and I'm essentially delegating to an AI agent that performs a task.

In a multi-agent framework, you might have a user proxy agent, which represents a kind of shell for the relationship with the person. That's what interacts with the person. But then this user proxy agent is actually interacting with an assistant agent to do the thing. So you can see here: we have the interaction with the person, but all these other steps in between are the interaction between those two agents, which then brings the output back to the person, and that gets handed back to the agent. The point of this is to say that it's not as simple as a person just interacting with one agent that solves every problem. It's going to become: a human interacts with the user proxy agent, which then goes to the agent that's actually good at solving that problem. Essentially a project-manager agent, in a way; you're delegating again. And you might ask, is this light-years out? No, this is something you can do now, something you can intentionally build your structure, your scaled model, around.

So then you start getting to this point of agent organization: a conversable agent handing off to different hierarchical pieces of the picture, or even group chats. Can you imagine a situation where the generic customer-service agent pulls the product-specific agent into the chat to have the conversation? Why? The reason why is that we have a lot of different functions within our businesses, and they need to work correctly. None of this is a big deal when it comes to things that don't matter. But when things start to matter, they need to be precise and accurate. And if you are building something to be precise, you want to be able to test it, validate it, and know it's going to be successful. That means you're probably not combining the lunch-menu chat application with the customer-service chat application. They are two different agents that perform very specific functions and need to be tested, but you might have a conversing agent that sits on top of the two, to route you to the right place and pull in the right agent to perform the right job. "Computer, what am I having for lunch today?" or "Computer, my customer has this question": both are theoretically routable, right? I don't necessarily even need two separate places to go.
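That user-proxy/assistant split is AutoGen's core abstraction, and the two-agent version fits in a dozen lines. A minimal sketch with the pyautogen package: the llm_config is a placeholder, and human_input_mode="ALWAYS" keeps a person approving each turn.

```python
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_KEY"}]}  # placeholder

# The assistant does the work; the user proxy fronts for the human.
assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="ALWAYS",    # a human approves or redirects every turn
    code_execution_config=False,  # don't execute model-generated code in this sketch
)

# The proxy relays the goal; the two agents converse until the task is done.
user_proxy.initiate_chat(assistant, message="Help me draft an email to my boss.")
```

Switching human_input_mode to "NEVER" is the dial that moves this from copilot toward fully autonomous, which is exactly the continuum from earlier.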
So an example of this might be: help me write an email for my boss. "Of course, I would be happy to help you write an email to your boss. Could you please provide me with details? Specifically, I need to know..." And note, this is interesting: most LLMs don't do this right now. What's happening here is that it's asking for more information. Most of the time, when you go to ChatGPT or copilot.microsoft.com, it doesn't ask for more information; it just says, here's my answer, and you are left to adjust it. In this case (remember the big while loop, right?) it's saying: no, I don't have enough information to solve the problem. It's developing a plan: these are the things I need; go back to the person to get the information, and then go take action on it. And you see that in what the user is then doing with that information. Here are the things I need. "Great, sounds like a great topic for an email." (Of course, this isn't necessarily what we'd use an AI agent for, but I think it's an easy thing to follow.) "OK, here is the boss's email address." "Great. Here's the plan: these are the things we need to have in this email. Does that sound good?" "Yes, please." "OK, cool. Here is the draft," and it just continues into the draft of the email.

Think about that in terms of solving any problem any agent would receive. Can you help me do X? Sure, here's what I need; here's the information you need to give me, or go retrieve it from this business system and return it back; does that look good? Note there's a human in the loop. The reason there's a human in the loop is that maybe you don't trust the agent to do this completely independently. I might not either, so we need this sort of back and forth. But it's certainly faster than doing it myself on top of doing everything else.

OK, so I'd love to have Vikrant talk a little bit about some experiences he's had working with Semantic Kernel, positive and negative; let's just be raw here. This is a totally evolving space, and something exciting is happening. Vikrant, give us a little perspective here about what you've seen with Semantic Kernel so far.

Vikrant Deshpande 42:36
Yeah, thanks, Nathan. Awesome conversation going on here. I love the fact that there are so many people excited about this space. Here at Concurrency, as part of the data science team, we've delivered quite a few LLM-based applications. Here's my perspective as a developer and a data scientist on Semantic Kernel, and again, this is totally just my perspective. As we all just saw, Semantic Kernel has three main components: you have your kernel; you have your plugins, which are the individual components your kernel has access to; and you have your overall planner, or controller. That's basically the project manager Nathan just spoke about, which creates a plan and then performs different actions to accomplish a specific goal. Just restating for brevity: an agent is basically an automated flow that is intended for, let's say, sending emails, retrieving information from a database, asking for help, or retrieving memories from previous conversations; so, having its own memory as part of your application.

Moving ahead with the pros, there are definitely quite a few. Mainly, you can create your own plugins. If you think about it from an engineering perspective, a plugin is like a REST endpoint that you can connect to your ChatGPT or Bing Search application. If you could move on to the next slide, I think that's where I am right now. Yeah, thank you. So again, each plugin could be deployed onto Azure as an Azure Function with a specific REST endpoint that you can plug into ChatGPT or Bing Search. So, to have a very well-developed chat application, like any LLM application, you're going to have agents that are interacting with different plugins, right?
And there are VS Code extensions that help you accelerate through this process, automating a lot of the boilerplate you might otherwise run into if you're developing these on your own. Going over the plugins themselves, there are two types: it's either a native function, written in Python, Java, or .NET, or it's a prompt itself that helps you generate content per your requirement. Think of these as modular components that you deploy onto the cloud and then just use in your application.

Now, the planner, which is what I was talking about earlier as the product manager, is the manager who decides what plugins to use at runtime based on the user's requirement. Under the hood, it's a very well-defined prompt that chooses a set of plugins to use at runtime based on what the user's ask is. So, toward the goal: let's say you have provided 15 different plugins, 15 different functions available at runtime. When you ask it a specific question, it's going to say, hey, based on the descriptions you've associated with these functions, I think we should use, let's say, these four functions, in this specific order. That pattern has been defined by researchers; it's based off of MRKL and ReAct, MRKL being Modular Reasoning, Knowledge and Language, and ReAct being Reasoning and Acting in language models. So it's all based on research, but under the hood, what I mean to say is: it's a prompt. It's a prompt that takes the things you've written in code, converts them into strings, and then asks another LLM to generate this plan. Yeah, so, a lot of pros here. I think we can move on to the next slide.

So again, as with every package or application out there, there are obviously going to be some cons. The robot on the slide is just to denote that you might run into some issues, and it truly depends on the kind of user experience you want to give to your application or your clients. I've noticed three different things in terms of the cons. One being: there were some source-code issues in the starter documentation itself, the starter demos that they have out there. Going through the open GitHub issues, I saw that we had to fall back to a dev release and then roll back my Python version; whatever, that's all fine, but I'm just thinking there might be some debugging required in practice when we actually deploy production-grade applications.

Nathan Lasnoski 46:55
Mm-hmm.

Vikrant Deshpande 46:55
That is also to say that the planner itself, although it is based on research, like I just said, MRKL and ReAct, is essentially a very highly fine-tuned prompt that someone else has written for you. So it is a bit nondeterministic, and it might run into some issues when you actually deploy production-grade applications. We have done this in the past, where we used a router, or LlamaIndex, to create a specific set of flows, and we had explicit control over those flows. That is to say: intent detection, routing to deterministic agentic business flows thereafter. You have one entry point, you define what intents are detected at that entry point, and you have ten different flows that are actually agents in their own right.
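A minimal sketch of that intent-detection pattern: one entry point classifies the ask, then dispatches to a deterministic flow you can test in isolation. classify_intent here is a keyword stand-in for what would really be an LLM call or a trained classifier:

```python
# One entry point, explicit routing, deterministic flows you can test one by one.

def order_hardware_flow(request: str) -> str:
    return f"Started procurement workflow for: {request}"

def customer_question_flow(request: str) -> str:
    return f"Answered from the knowledge base: {request}"

FLOWS = {"order_hardware": order_hardware_flow,
         "customer_question": customer_question_flow}

def classify_intent(request: str) -> str:
    """Stand-in for an LLM or trained classifier; keyword match keeps it runnable."""
    return "order_hardware" if "laptop" in request.lower() else "customer_question"

def entry_point(request: str) -> str:
    intent = classify_intent(request)   # detect the intent once, up front
    return FLOWS[intent](request)       # then hand off to the deterministic flow

print(entry_point("Please order me a laptop"))
```

The trade being described: you give up some of the planner's flexibility, but every flow becomes individually debuggable.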
So in production, if you're dealing with this kind of application, I've seen that debugging is easier that way, and you're going to have more granular control over what you need to improve. So it's a very elegant balance between both. Overall, I think it truly depends on the kind of user experience you want to give to your users.

And finally, I just noticed that there isn't any directly usable functionality for Pydantic. We had used this in the past as well: if you want to enforce a specific structure on the response you get from an LLM, it's going to get tricky, because with LLMs by default it's hard to put the reins on these horses, so to speak. It's tricky, but it is doable through very well-curated prompts, and Pydantic essentially does this for you under the hood. So up until now, at least at a first glance, I haven't seen direct support for Pydantic, or whether it uses Pydantic under the hood. But yeah, that's basically where I've gotten to right now. Hope that helps.

Nathan Lasnoski 48:58
Awesome. Did you want to show anything, Vikrant? I wasn't sure if you actually wanted to review that.

Vikrant Deshpande 49:03
Oh, I don't really have a working application just yet, but I've been playing with it on the side.

Nathan Lasnoski 49:11
Yeah. Cool, cool. OK, I overextended the term "demo" there, so I'll back that down.
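A minimal sketch of the structured-output idea Vikrant is describing: declare the shape you expect with Pydantic, validate the model's raw JSON, and re-prompt on failure. The EmailDraft schema is invented for illustration:

```python
from pydantic import BaseModel, ValidationError

class EmailDraft(BaseModel):       # the structure we want the LLM to honor
    subject: str
    body: str
    requires_approval: bool

raw = '{"subject": "Q3 update", "body": "Hi boss, ...", "requires_approval": true}'

try:
    draft = EmailDraft.model_validate_json(raw)  # Pydantic v2 API
    print(draft.subject)
except ValidationError as err:
    # In practice you'd append these errors to the prompt and ask the model to retry.
    print(err)
```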
OK, the last thing I think will be interesting before we start to break. Some of this is going to land as: oh my gosh, I don't have data scientists; I don't even have developers who build these kinds of things. This is going to be a very rapidly evolving space, and one thing that's already happening in the RAG zone, for these sort of middle-zone use cases, is low-code, no-code, and semi-code scenarios being built in, and Vikrant, in a second I'd love you to comment on your experience with some of those, too. This isn't to say that everything is going to be built in low code or semi-code; it definitely won't be. But there are a lot of scenarios where things that were harder before are now productized to a level where you can build them with highly capable power users, rather than having to have all the sophistication in-house. What you're going to see is that you may not even know Semantic Kernel or LangChain is operating underneath the covers of something, but the framework or platform you're using to build it is doing so, using a lot of the same concepts.

This is a screenshot of Copilot Studio. Copilot Studio is a low-code way to build middle-of-the-road AI chat experiences, or what they will eventually call agents in this context. What you can see here: I'm part of an organization called the Catholic College Center, we have this building project we're working on, and I pointed it at the website. You can add in uploaded documents, point it at a SharePoint location, whatever, and very quickly you have a chat bot that successfully answers very specific questions about that organization and surfaces where that information came from. This has also been used to take action: Vikrant and team built a chat experience where you can say, go order this laptop, go take action on this thing. At the moment it's very short-term-memory oriented, but long term I would expect additional investment in this kind of space as well. I don't know, Vikrant, do you want to comment on what you're seeing with some of the low-code tools as well? My point is that agents are for everyone: agents aren't just a thing we're going to construct in a development house (they will be that too), but you'll see these light up in a variety of places. What's your perspective on that?

Vikrant Deshpande 51:58
Yeah, I totally agree. This space has been evolving so rapidly; the last year is where we've seen the most development in building LLM applications with accelerators like Semantic Kernel, LlamaIndex, or LangChain. But overall, I do see value in this low-code, no-code kind of Copilot UI, where you can build out a chat bot that does ten different things very easily, within a few hours, if you have the right training for using that UI. The fine-tuning aspect of things is something they allow you to do through the UI; if you prefer a code approach, that's when you'd want to build out your own chat bot using LangChain or LlamaIndex. So the UI aspect is good for maintenance, with very quick fixes, but if you want to fine-tune and dig deeper into why something is happening the way it is, that's when you'd rather choose a code-based approach. That's what I think.

Nathan Lasnoski 52:58
Awesome. Awesome.

Vikrant Deshpande 52:59
Yeah.

Nathan Lasnoski 52:59
OK, so where do we go from here? What we would love to offer is to have some conversations with you. We've done probably 40 of these executive sessions, leading executive teams through the process of adopting AI. You can also dive into use cases in the context of something you specifically are building, or company-wide envisioning sessions, help with hackathons, getting you off the ground. There is a form that I'd love you to complete before you leave, and my intention with that form is for us to have a conversation with you about one of these options: either diving right into the executive conversations, understanding your use cases, or creating excitement and engagement by driving visioning and hackathon activities within your organization, to help you build. One thing we feel really strongly about is that our goal is to help you accelerate your business and your mission using AI. That takes a lot of different shapes at a lot of different organizations, but it's a great opportunity for us to be a great partner for you in that sense. So pull the trigger on that form; we'd love to have that conversation.

And also, in that context, connect with us on LinkedIn. We love talking in the community; this is just a passion of ours. It is about connecting businesses, but it's also about raising the general understanding of where AI is going within the greater ecosystem. Connect with us to get regular content: every single week I produce a newsletter on AI, so definitely connect with us and let's have more conversations as well. So, nice to meet you, and thank you for joining today. Please fill out the survey at the end. Tell me if you loved the session, and if you didn't, tell me what you'd want to change, and we'll see you next time. Thank you very much.