View Recording: Modernizing Legacy Workflows with AI: From IDP to Sales Quoting

In today’s AI-driven world, legacy workflows are holding businesses back.

Legacy workflows can slow down operations, increase errors, and limit revenue potential. Modernizing Legacy Workflows with AI: From IDP to Sales Quoting explores how Azure AI transforms traditional processes, making them faster, smarter, and more efficient.

In this session, you’ll discover how to:

  • Use Intelligent Document Processing (IDP) to automatically extract, classify, and process data from invoices, contracts, and other business documents
  • Reduce errors, cut costs, and accelerate revenue-generating activities
  • Streamline deal cycles with AI-powered Automatic Sales Quoting, generating accurate, tailored quotes in minutes
  • Accelerate post-merger integrations by consolidating information from multiple organizations efficiently and securely

Learn practical, real-world strategies to modernize workflows, empower your teams, and drive measurable business impact with AI.



This webinar explores how Concurrency uses Azure Document Intelligence and Copilot Studio to modernize outdated processes—from document extraction to sales quoting. Led by Concurrency’s experts Nick Miller and Mac Krawiec, the session highlights real-world strategies for automating manual tasks, improving data accuracy, and accelerating response times—especially for organizations in the Midwest and beyond.

WHAT YOU’LL LEARN

In this webinar, you’ll learn:

  • Why legacy workflows are slow, error-prone, and costly
  • How Azure Document Intelligence extracts structured data from unstructured formats
  • What layout, prebuilt, custom neural, and composed models can do for your documents
  • How Copilot Studio automates quoting workflows using low-code tools
  • Ways to reduce quoting time from 30 minutes to just 15 seconds
  • How to overcome organizational challenges like data readiness, legal concerns, and cultural resistance

FREQUENTLY ASKED QUESTIONS

What is Azure Document Intelligence and how does it work?

It’s a Microsoft AI service that extracts structured data from documents like invoices, receipts, purchase orders, and SOPs using layout and neural models.
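As a concrete illustration, the layout model returns tables as flat lists of cells with row and column indices. The sketch below rebuilds such a table into rows; the input shape is modeled on Document Intelligence's layout response (`rowCount`, `columnCount`, per-cell indices), but verify against the service's current response schema before relying on it.

```python
# Sketch: flattening the table structure returned by a layout-style analysis.
# The input shape is an assumption modeled on Document Intelligence's layout
# output; check the service's response schema before relying on it.

def table_to_rows(table: dict) -> list[list[str]]:
    """Rebuild a table's cells into a row-major list of lists."""
    rows = [["" for _ in range(table["columnCount"])]
            for _ in range(table["rowCount"])]
    for cell in table["cells"]:
        rows[cell["rowIndex"]][cell["columnIndex"]] = cell["content"]
    return rows

sample_table = {
    "rowCount": 2,
    "columnCount": 2,
    "cells": [
        {"rowIndex": 0, "columnIndex": 0, "content": "Item"},
        {"rowIndex": 0, "columnIndex": 1, "content": "Price"},
        {"rowIndex": 1, "columnIndex": 0, "content": "Oatmeal"},
        {"rowIndex": 1, "columnIndex": 1, "content": "$2.49"},
    ],
}

print(table_to_rows(sample_table))
# [['Item', 'Price'], ['Oatmeal', '$2.49']]
```

Once tables are in row form, they can be passed downstream—to an LLM, a database, or a quoting workflow—without any document-layout logic.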

How does AI improve quoting speed?

By automating document analysis and quote generation, AI reduces quoting time from 30 minutes to 15 seconds—helping businesses win deals faster.

What are composed models in document processing?

Composed models combine multiple AI models to intelligently classify and extract data from varied document types and formats.
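The routing idea behind a composed model can be sketched in a few lines: classify the incoming document, then dispatch to the extraction model trained for that type. The keyword classifier below is a toy stand-in for the service's AI-based classification, and the custom model names are hypothetical.

```python
# Toy sketch of composed-model routing: classify, then dispatch.
# The keyword matching stands in for AI classification; model names
# ("po-neural-model", "cash-receipt-model") are hypothetical.

def classify(text: str) -> str:
    lowered = text.lower()
    if "purchase order" in lowered:
        return "po-neural-model"     # hypothetical custom neural model
    if "ach" in lowered or "remittance" in lowered:
        return "cash-receipt-model"  # hypothetical custom neural model
    return "prebuilt-layout"         # fall back to a generic model

doc = "PURCHASE ORDER #4471, please confirm delivery dates"
print(classify(doc))  # po-neural-model
```

In the real service, the composed model performs this classification itself and forwards the document to the matching component model.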

Can Copilot Studio automate quoting workflows?

Yes. Copilot Studio can detect quote requests in emails, extract product details, match inventory, and generate quotes—all with minimal human input.

What are the biggest challenges in adopting AI for workflows?

Data quality, legal review, cultural resistance, and perceived cost are common hurdles. Concurrency offers discovery sessions to help organizations assess readiness and build pilots.

ABOUT THE SPEAKERS

Nick Miller
Nick Miller is a Senior Architect at Concurrency, specializing in AI-powered workflow automation and intelligent document processing. With a background as a Navy logistics officer and a master’s degree in statistics, Nick brings a unique blend of operational discipline and data science expertise. At Concurrency, Nick leads initiatives that modernize legacy systems using Azure AI, helping clients streamline processes like cash receipts, PO validation, and compliance documentation.

Mac Krawiec
Mac Krawiec is a Senior Software Developer at Concurrency with deep experience in low-code automation and AI integration. Known for his creative approach and passion for Formula 1, Mac has built intelligent quoting tools that reduce response times from 30 minutes to just 15 seconds. He specializes in Copilot Studio, Azure Document Intelligence, and orchestrating scalable AI workflows that empower sales and operations teams across industries.

EVENT TRANSCRIPT


Nick Miller 0:05 All right, thank you for joining us today. Hello, my name is Nick Miller, and I have with me Mac Krawiec. We’re going to talk to you about modernizing legacy workflows with AI, all the way from intelligent document processing to sales quoting. All right, I’m going to hand it off to Mac so he can introduce himself. Mac Krawiec 0:24 Yep, good morning. My name is Mac Krawiec. I’m a senior software developer here at Concurrency; been here for a few years now. A little bit about me: I’m a big Formula One fan, and you will see some of that come through today as we get deeper into the webinar. I’m also a huge fan of AI. I took the bull by the horns and tried to dive in, and the opportunities afforded to me have been awesome for getting more experience in this space. So I’m super excited to talk to you about IDP and quote processing today. Nick Miller 1:02 All right, thanks, Mac. My name is Nick Miller. Nice to meet you all. The timeline of my life that’s maybe relevant to this presentation: I started out in information systems. I graduated from UT a while back and joined the Navy as an officer. I was a logistics officer in the Navy, active duty for 10 years. When I got out, I realized I didn’t speak civilian, so I went and got an MBA from UT so I could know what an ERP was and all the basic things I didn’t learn in the military. Shortly after that, I got married. One of the things I learned in my MBA program was in a data mining class; the head of fraud from Apple came and spoke to our class, and I thought, wow, this is amazing. I really got an interest in it, but I realized that no one would hire you as a data scientist with an MBA or an MIS degree. Back in the day, the minimum was a master’s degree in economics or math or statistics.
And so I went back to A&M to get my master’s degree in statistics. If you know college football, you’ll know that UT and A&M are longtime rivals, so yes, I’m a little conflicted, but that’s OK. Then, after you get married, come the other things: some dogs, some kids. My wife is really big on matching Halloween costumes; we’re going to be minions this year. And to complete my life, once I had kids and a dog and a wife, golf became my new mistress, my new fifth family member, perhaps. That’s just a little bit about me. I also love doing AI and data science work, so we’re really excited to show you some things today. First, we’re going to cover the challenges in legacy workflows. We’ll give you some examples of workflows, and we’ll talk about the component pieces of those workflows and how they fit into the larger workflows. We’ll give you an AI Document Intelligence overview, since it is a component of almost every one of these workflows. Then Mac is going to do a sales quoting demo. He’s going to open up the hood and show you that; it will be a more low-code type of demo, and he’ll share some thoughts on it, and we’ll close it out. And of course, anytime throughout this presentation, if you have a question, please post it in the chat and we’ll try to answer questions as they’re asked, while they’re still relevant. If you have a thought, someone else probably has it too, so please speak up. All right, first: challenges in legacy workflows. A lot of times these legacy workflows involve information, and usually that information is not in a structured or digitized format. Consider something like EDIFACT or ANSI X12 for EDI transactions.
Those were some of the first large-scale communications between businesses, and they are structured, but they’re brittle, they were expensive to implement, and they’re still not that great today. There are some other options, but in general, there are still a lot of documents flowing everywhere. The reality is these legacy workflows often contain documents, and a variety of documents in different formats. Even within a specific workflow, say order entry, you receive purchase orders from your customers and you need to enter them into your ERP. It would be nice if your customers all used the same purchase order style and format, everything exactly the same, so you could just drag a little box and pull out the data in a highly structured way. The reality is that’s not the case, and often, even within their own documents, there are errors. Through our experience with our clients, we had a customer with a cash receipts project. Their customers were sending in invoice payments, and they would have a document listing all the invoice numbers and the ACH payment they came in on. One column said date, but it actually contained the invoice numbers, and the invoice number column contained the dates; they mixed those up. Even though that document is supposed to be structured, if you have a process that doesn’t involve AI, you’re going to look for the column that says invoice number and you’re going to get a bunch of dates. So even when there is structure, it’s not always reliable. They also sent in images, and believe it or not, one was a charcoal gray, almost black background with black text. And it was just an image.
We were able to process that, no big deal. CSV files, text files; there are a lot of different formats for the exact same workflow. So it’s a challenge. The third issue is that without implementing some sort of automation using AI and the things we’re going to talk about today, these processes are costly and error prone. People’s time isn’t cheap, and it costs a lot of money to manage these processes. They’re not typically very interesting processes; the people doing them don’t really like them, so it’s hard to stay engaged, and because you don’t stay engaged, you make mistakes. Those errors themselves are costly, so no good there. And even if all these other issues were resolved, these processes are just slow. If any part of your business requires that you take action quickly, you may lose business if you don’t reply to a quote fast enough. If you have a slow process, you have a backlog; you may get to it, but by the time you do, the customer has already gone with somebody else. In that case, you’ve already spent the money to do the work, but you’re not going to win the business. So they’re slow. And even if you don’t lose the business, it’s better to have your ERP system and your processes up to date with information faster. Faster data, as long as it’s good data, is always better. The last part of this is that there’s an AI skill shortage. There is tooling here, there’s pro-code, there’s low-code, and building things on your own from scratch is exceptionally difficult. What we’re going to see today are some of the pro-code and low-code ways. They still require some knowledge and some specialization.
But it’s not nearly as big of a hurdle as it was a few years ago, when I was creating language models on my own. They weren’t large; they were small language models, to be honest, especially compared to all this GPT stuff. It was just data scientists doing it, and it was OK, but they weren’t that great. Now we have all this tooling available to us at a much lower barrier to entry. So those are the challenges, and we talked a little bit about ways to move past those challenges of legacy workflows. Now that we’ve covered the challenges, let’s talk about some examples. There are two high-level patterns, and they’re not the entire workflow. This document processing pattern on the left is not the workflow; it’s just a generically applicable part of the workflow, whether you look at sales quoting, order entry, or purchase order to sales order validation. For that last case, we had a client with a very complex product. They made pieces of metal: a disc that had to be within a certain tolerance for bend in the surface, material, and annealing. Each disc was about 10 inches and about $5,000 per disc, because of certain rare metals. I don’t really know what they’re used for, but the issue is that if there’s something in the purchase order, a requirement, for example, that didn’t make it into the sales order, you risk manufacturing ten of these $5,000 discs and then having the customer reject them because they didn’t meet some inspection requirement. That’s PO to sales order validation: you’re validating that the PO and the sales order match. That matters in complex workflows with custom materials.
And then the last one, cash receipts: this is where you get ACH payments and you don’t really know which invoices the ACH payment applies to. Those examples are the workflows, but within each of those workflows there is a little subprocess of a document input. That document input could be an email as a document, an attachment as a document, an image, a CSV, all those sorts of things. Then there’s a common process: we’re going to extract data from that document, and then we’re going to take action on that data. The actions depend on the workflow, but you can see that document processing workflow embedded in each one of those examples. Mac, do we have a question? Mac Krawiec 10:33 I don’t think we have a question at all. Q&A and chat are quiet, so keep going. Nick Miller 10:37 OK, all right, good to go. I just wanted to check in with you. And then this other pattern is knowledge base management, where it’s not about a single document. On the left, in document processing, we get a document, do something with it, and we’re done. Knowledge base management is about getting information from multiple documents: we extract data from them and encode and store that data so that we can find, learn from, and act on it later. For example, you might have standard operating procedures, and you need to analyze a process and ask: does this process meet all of our SOP requirements? If you have this knowledge base management process, you can do that, and you can use LLMs to interrogate that encoded and stored data. It could be customer service support: maybe the SOPs plus product documentation or repair manuals.
Someone calls in and wants to know how to repair or replace a component in an item they bought. With the SOPs and the product information, you can use an LLM to search across the SOPs and find the repair manual and maybe the repair parts. So that could be a customer service support scenario. The last examples are in compliance, and this compliance example is actually a little bit of document processing, but it’s also knowledge base management; it combines both. Say you have to create safety data sheets: you make chemicals or materials with certain chemical properties, and you want to create those safety data sheets. Information from multiple documents is used to create that SDS, so you use the knowledge base built from those multiple documents and then create a document out of it. That’s a little bit of both. The other thing is answering customer surveys for compliance. Maybe people buy your products and you have an SDS for them, but they want to know: does it meet certain criteria? What’s your policy for manufacturing standards at this specific plant, and how do you ensure that this thing doesn’t happen in your manufacturing process? When you have those compliance docs, those SOPs, those SDSs, all those documents encoded and stored in a knowledge base, and you combine that with a document processing workflow where you take that survey, extract the questions, and then for each question go through that knowledge store and have an agent pull the answers out, that’s an example of an answering-customer-surveys workflow.
These are just some examples to get the creative juices flowing and see if what we’re talking about resonates with you and some of the challenges you face with your manual or legacy workflows. You might notice there’s a component in each of these patterns that’s the same: document input and then extract data. What we’re going to talk about next is the AI Document Intelligence platform that’s part of Azure; that’s really where we extract the data from the document, and we’ll take a look at it now. Inside AI Document Intelligence there are two primary types of models: the prebuilt models and the custom models. Among the prebuilt models there are actually a lot, but one of them is layout. We’ll look at these in detail, but a layout model gives you components. If you’re looking through a document, you as a human know that it says chapter one, and you know that chapters are important to organizing information. Or if it’s a legal document, it says paragraph 1.5, point a, point whatever. I was in the Navy; I got really good at learning bureaucratic manuals. Knowing where something comes from matters: you can’t just extract the text. The text belongs to a paragraph, which belongs to a section, which belongs to a chapter, and all that information is relevant. The layout prebuilt model allows you to extract those things along with the text, and also figures, like pictures inside the text, and tables, including the structure of a table. It doesn’t necessarily imply a schema, but it does give you some structure, which you can then interrogate with an LLM or perhaps place right into a knowledge base itself. Then you have general document: it’s basically key-value pairs, pretty generic.
Then we have the use-case-specific models, like invoices and receipts and resumes; there are a ton of them in AI Document Intelligence, and we’ll take a look at those. On the custom side of the house you have an extraction model, a neural model, and composed models. An extraction model is a non-AI-based model that extracts information from forms, and the neural model is similar. The main difference between them is the variety of formats they handle. For every workflow, or every type of document you want to extract data from, you’re going to need a model. If the format of that specific document is always the same, probably an internal document for example, then you can use an extraction model, because we know where we’re going to get the information: the wording of the key-value pairs, the names of the table columns, and so on are all fairly straightforward and standard. In that case, the extraction model works. Now if you’re processing customers’ purchase orders, again, they’re not so nice to us; they like to use their own formats. How rude. But the document itself is still a purchase order, so what you can use is a neural model. The neural model is good at knowing that purchase order number, purchase document number, document number, and similar labels all mean the same thing. These neural models allow you to infer a schema onto documents whose content is the same but whose formats vary, because they use AI to figure out that certain words are equivalent. And then a composed model uses multiple of these models in the same process.
It basically gives you a front door to multiple different types of models, and you can combine AI in the composed model with the standard extraction models. For example, if you have a purchase order process where there are only two formats, but you don’t know which format a given document is, you would first have a composed model look at it and use some AI to infer what type of PO it is. Once it figures out which model is applicable, it will send it to, say, PO model A or PO model B; that’s the composed model, and it has a classification step. Or you could include multiple neural models. We just talked about one document type with multiple varieties, but you could use a composed model to classify a document into multiple types. If you didn’t have a segregated inflow for purchase orders, the composed model could recognize: this is a purchase order, so I’m going to use our purchase order neural model; this is an invoice payment, so I’m going to use our cash receipt neural model. That’s what composed models are for. All right, now that we’ve covered them, let’s take a quick look at Document Intelligence. Let me zoom in on this image here; it’s probably a little hard to read. We talked about layout models, and this is the layout model here; that’s the one that extracts tables, checkboxes, and other forms and structure. General document extracts key-value pairs, and then there’s OCR read. That’s for handwritten text, or for PDFs that are not searchable, meaning when you drag your cursor over the text, it doesn’t highlight the text.
That PDF is basically just an image; it’s not searchable. If you need to turn images or handwritten text from PDFs into readable text in a basic form, you can do that with OCR read. With layout you can actually input JPEGs, so it’s not necessary to do OCR first; maybe you just want the text from that image document. Then we also have the models we talked about: invoices, receipts, IDs, health insurance, bank statements, pay stubs, credit cards, contracts, business cards. There’s a lot there, but the reality is this doesn’t include things like purchase orders. It doesn’t include ACH payments, it doesn’t include SOPs, it doesn’t include all the things we talked about. It can be useful for invoices, but in general you’re most likely going to need to build your own, and the first way to do that is the layout model. Now, in here I put in some receipts, and we’re going to look at the layout model, the prebuilt receipt model, and a custom neural model that I created for receipts. I just wanted to show how some receipts look in these various models and what you get out of them. In the layout model, this is actually my grocery receipt. I live in Austin, and our favorite grocery store, and the one I worked at in high school, is called HEB; it’s a guy’s initials. And here’s my grocery list: what did I buy? I bought some milk and cabbage and bacon and whatever; I bought some stuff. When we run it through this layout model and look at the output, over here you see text, selection marks, tables, and figures. Right now it just gives you the plain text, so at the top it says HEB and gives some sort of information.
I really don’t know what that is, but it’s something. It gives you my credit card’s last four digits; all right, don’t steal my identity, please, thank you very much. Then we go up here to tables, and you can see that it has all the items highlighted, and that’s pretty good. It highlights the items, the subtotal table, and then some submission information about the process. So we can see these tables: here’s table one; there’s table two, which is our subtotal; table three is the processing information; and table four, I don’t know what this stuff is, but it’s something. Anyway, we get this table information out, and then figures. There aren’t any figures in this one, and that’s OK, but to give you an example of what a figure would look like, let’s look at this document here. If we look at text, it gives you all the text, and it gives you things like a page header. Remember how we talked about extracting structure from the document? Page header is something you understand: this is the title of this page; here’s a page header; here’s a section heading; here’s a paragraph. So you get the text, and you also get the schema of its components with it. A paragraph: here’s the text. Oh, there’s a section heading; what’s that section heading called? Oh, it’s Figure 2.2. Then you get tables; there’s a little table in here. And then you get figures, and this figure is the image. So what you could do is run a layout model and, when you extract, throw away everything except the figures. I just want the figures, or I just want the tables; you could do that. Even within the text, you could say give me all the titles, or give me only the section headings, or give me only the tables.
So although it’s not a specific schema, you can decide what you want to extract from the document, and it really gives some context to that extraction. That’s what the layout model looks like, and you can see that it’s fairly good. Even for these other receipts, it gives you some information. It didn’t pick up that table, just the total; it missed the text up here about the fact that they bought an oatmeal cream cookie, well, two oatmeal cream cookies, and then some other oatmeal raisin ones. Apparently it was a cookie sort of day. But it does OK; it picks up the structure. It’s maybe not the best for receipts as is, and we’ll show you a spiffed-up way to get better information out of those receipts. Then you have a prebuilt model; this is the AI Document Intelligence receipt model. This does a lot better. As you can see, it gives you information about the items: it says, hey, this is HEB Organics Maple Spice Instant, it’s oatmeal by the way, it’s $2.49, and there’s a quantity. It breaks things down a lot better and gives a schema that is relevant to receipts. These prebuilt models are great, but you may want a different schema. Take a lunch receipt for a business purpose: that has things like a tip and other fields that maybe aren’t as relevant to other types of receipts. So you can build your own schemas with those custom neural models, or you can do the other thing that I’m going to show you in a minute. The last part here is the custom neural model. What we did is we put in some receipts and were able to extract some of these different pieces of information, so we kind of replicated the receipt. Now the custom one, this is OK, but honestly it’s not
quite as good as the prebuilt receipt model, even though we gave it some receipts to train on. So that, at a high level, is the AI Document Intelligence overview. What I want to show you very quickly, because I want Mac to have plenty of time, is that same HEB receipt. We had some things on there like the instant oatmeal, although it didn’t say oatmeal; it just said HEB Organics Maple Inst. What we did here is we used the prebuilt layout model, extracted the text and the table, and passed that information to Azure OpenAI along with a schema we defined ourselves. What’s in that schema? This is the schema right here. We told it a receipt has things like a merchant name, an address, a transaction date, and some optional fields: order number, server name, table number. There’s no server at the grocery store, so these things are optional. What are the items? Is it an itemized receipt, true or false? What are the subtotals? And then we also have this thing in here called items, and we define what that is. That table of things we bought is an item table, and it’s a list of receipt items. If we go up a little bit, we find those receipt items right here. The receipt item has a name, a quantity, a price, and an extended item name, and this is where we use AI. It says: given that the item name might be truncated or abbreviated, the type of vendor, the item price, and the quantity, provide a more descriptive name for the item by inferring the brand, product name, etc. Exclude attributes you are unsure of; don’t make stuff up. From what you can see, guess at what the actual item name is. That’s all I wrote, nothing else. And then item category.
We as a family are trying to buy more fruits and vegetables and meat, and, especially for me on Fridays, less alcohol, fewer sweets, fewer snacks. So we can assign each item a category, and the last field is: if we are buying fruits or grains, are they whole foods or processed foods? Ideally you’ll be eating more of the whole foods; supposedly that’s good for you, at least that’s what I’ve been told. When we run all this, it gets that receipt document and runs it through this extractor, which applies the model we built: it applies the Document Intelligence layout model to extract the data, and it has some other modules, like one for getting the file from Blob storage, sending it to Document Intelligence, Key Vault access, and so on, but nothing really relevant to this specific workflow. When I do the analysis, you can see in here that for the HEB Orgs Maple Spice Inst line, it knew it was HEB Organics Maple Spice instant oatmeal, and that it’s a cereal and a processed food. I agree with that; makes sense. It is a cereal, and it’s a processed food, because it has additives and sugar and whatever. CM Organics 2% Milk: Central Market Organics 2% milk, one gallon; it’s dairy and it’s a whole food. All right, that’s pretty good. The point is it started categorizing all the stuff on the receipt. Bacon: that’s a meat, but it’s processed. Dave’s Killer Bread, because my kids eat a ton of PB&Js: that’s bakery, but a processed food. Flour tortillas, and so on. You get the idea. This process is adding a lot of information to the data that didn’t really exist and would be hard to extract manually.
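The receipt schema Nick narrates can be sketched as typed structured output, for example with Python dataclasses. The field names below are approximations of what he describes on screen; the demo's actual schema wasn't shown in full.

```python
# Sketch of the receipt schema described above, as it might be declared
# for structured output from an LLM. Field names are approximations of
# what is narrated in the demo, not the actual schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ReceiptItem:
    name: str                # raw (possibly truncated) name on the receipt
    quantity: float
    price: float
    item_name_extended: str  # LLM-inferred descriptive name
    category: str            # e.g. "dairy", "meat", "sweets"
    whole_food: bool         # whole vs. processed

@dataclass
class Receipt:
    merchant_name: str
    transaction_date: Optional[str] = None
    order_number: Optional[str] = None  # optional: absent on grocery receipts
    server_name: Optional[str] = None
    table_number: Optional[str] = None
    items: list[ReceiptItem] = field(default_factory=list)

receipt = Receipt(merchant_name="HEB", items=[
    ReceiptItem("HEB ORG MAPLE INST", 1, 2.49,
                "HEB Organics Maple Spice Instant Oatmeal", "cereal", False),
])
print(receipt.items[0].item_name_extended)
```

Declaring the schema as types like this is what lets the LLM's output be validated and consumed by downstream code instead of being free text.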
You can ignore the code I added here; all it does is calculate percentages. So if we look down at the totals: cereals 2%, dairy 14%, vegetables 17%. My mom's proud of me; I showed her this and she was very proud. Meat 50%, bakery 11%, fruit 15%, sweets 3%, legumes 3%, and non-food. Now, this is by dollar value, which is a little confusing, because legumes are typically pretty cheap, right? So maybe there's a way to analyze it by weight; maybe you could pull weight out of the receipt, or add a downstream process that gets more information about the product itself, like its weight, and enrich the data further. You could do something pretty cool there. I just wanted to show you this: that's how we use Azure Document Intelligence combined with Azure OpenAI to get a structured output that can be used for further processing. All of that is this little middle piece, "document extract data." Now, what do we do when we take action? I'm going to hand it off to Mac, and he's going to show you a low-code way to do sales quoting. All right, Mac, would you like to take over? Mac Krawiec 31:30 Absolutely, sir. Thank you, Nick. I'll take over the screen if I may. What we're going to talk about, as Nick mentioned, is sales quoting and the fundamental why of it. Nick Miller 31:37 Yes, please. Mac Krawiec 31:45 Why we chose sales quoting specifically is simply that most businesses need money to operate, and in order to have money you need to sell. This is an area we've been working in for quite a while; I first started creating sales automation even before Azure OpenAI was a thing.
What stuck with me was one of our clients telling me that the first one to respond usually gets the business, no matter what the quote is, just because of how high-speed that industry worked. By employing artificial intelligence and machine learning, we were able to reduce their time to quote from about 30 minutes on average to what turned out to be 15 seconds. And the reason is that you no longer need an individual salesperson on every quote. I'm not advocating for the removal of salespeople; we need those. But we can accelerate their output by saying, hey, you can now oversee the generation of sales leads or orders fivefold, because you can use AI to speed your processes up. That's why we ultimately chose this space. Now, what we'll be building, and as I may not have mentioned, I'm an F1 fan, so it has a flavour of F1 to it, is a Formula One sales quoting tool. If you were at our summit last week and sat in on my segment, this is not the exact same one, so don't leave. Basically, we'll be using Copilot Studio, integrated with the wonderful default model Nick showed us; we'll use the layout model, and we'll use Copilot Studio to talk to Azure Document Intelligence to detect new quote requests in emails, detect whatever attachments they may have, find the product in some sort of knowledge base, whether a database or otherwise (we'll get to what our data source is here), find its pricing, find its on-hand inventory, build a quote that's human-readable, and then send the quote to the client. And as with most things, there are asterisks at the end of this.
"Send the quote to a client" has three asterisks, because it's a big deal. Most implementations of this do not start with sending the quote automatically. There is a threshold, arbitrary in the sense that the business sets it, where we determine: if the AI deployment is highly confident the quote is correct and is what it needs to be, then maybe we can send it out right away. Otherwise we want that human in the loop; you've heard the buzzword before. You might want a human to say: AI generated this quote for me; is it correct or not? Do I need to modify it a little to make it something I want to send to my client? We're going to see a bit of that. Now, before we get to the fun part, I wanted to talk a little about the AI Foundry stack, really the Azure stack. I saw this at Build, and I didn't realize just how important this image would be, because for all intents and purposes, when you're building an AI-capable application and automating anything, whether it's a quoting workflow like we're about to do or anything else, you are living and breathing this stack. As a matter of fact, everything Nick showed us was in Azure AI Services, because Document Intelligence is part of Azure AI Services. The Python code he showed could be extended within an Azure Function, and that Azure Function could handle the handshakes, process those receipts, and use Document Intelligence through its SDK or API to generate this data. So what you saw was an example of exactly this. Where we're going now is the low-code world, where we'll use Copilot Studio to trigger on received emails and use largely low-fidelity instructions to reach out to Azure Document Intelligence here,
generate a quote, and send an email back to our potential client, all of it in Copilot Studio. The reason this is such a big deal is that this is, again, a few hours of work, and I'm going to show you this F1 sales quoting tool. We've only touched Copilot Studio and the base models in AI Services, and we're going to build quotes. If you put that into perspective, imagine what you can do if you employ the entire stack: using Copilot Studio to automate emails, or surfacing a different agent and using Copilot Studio to interact with your clients, and then leveraging everything in this Azure AI Foundry stack. And by the way, if you have security and governance needs where Synapse is involved and you need to secure this box here, you have that. If you want to extend it and make it more pro-code, you use Azure Functions. With all these different methodologies, you have so many tools at your disposal in your tool belt. Now, the one thing I'll caution you against: in the last two years, when Azure OpenAI (as it was at the time) came out, everybody got the AI hammer, everything got an AI-capable name, and everybody was smacking every nail with AI everywhere. It's important to keep this stack in mind, but also to remember that for certain use cases you might employ different tools, whether within this stack or maybe not AI at all. We'll get to some of the organizational challenges at the end of my segment that will hopefully resonate when you imagine where you can go from a use-case perspective. So without further ado, let's get to it. What I wanted to show is a full build-out of the agent we've created. Let me refresh this page. If you are on your own tenant, whether a dev tenant or the tenant at your org:
if you go to copilotstudio.microsoft.com, you'll see all the agents available to you that have been built by your org, or agents you can create. What I've created is the F1 sales agent. The purpose of this agent is to serve clients, which in F1 are customer teams. If you're an engineer at Williams, you're a customer team: Williams buys Mercedes engines. If you need their engine or their parts, you might reach out by email and say, hey, I need this, how much does it cost, the whole shebang. You'd be hitting this F1 sales agent when you send that email. In this case, my mkavitz@concurrency.com email is going to be the audience for this, but it could be anything: a shared mailbox, multiple people's mailboxes. When an email comes in, what do we do? We get to it right here, in our agent's instructions: "You are an agent designed to intercept emails. When an email comes in, you should do the following. First, determine if the email subject or body hints that the email is about a request for a quote of Formula One parts." You would not believe, before Azure OpenAI, how difficult step one would have been to implement with machine learning: is this email relevant? Before, we would have had to build either pretty complex rules engines for companies to use, or little models trained on a plethora of data to say this email is relevant or it's not. Now it's a sentence. For somebody who's been in this for the last two or three years, seeing that I can do this in one sentence is huge. Actually, before we continue with the instructions, I wanted to talk about what the pieces of the agent really are.
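For comparison, the relevance check that used to require a rules engine or a trained model can now be a single prompt. A hypothetical pro-code equivalent of that one Copilot Studio sentence, using the Azure OpenAI chat API (the deployment name and client setup are assumptions, not part of the demo):

```python
def build_relevance_prompt(subject: str, body: str) -> list:
    """Build the one-question classification prompt described in the talk."""
    return [
        {"role": "system",
         "content": ("Determine if the email subject or body hints that the "
                     "email is a request for a quote of Formula One parts. "
                     "Answer only 'yes' or 'no'.")},
        {"role": "user", "content": f"Subject: {subject}\n\nBody: {body}"},
    ]

def is_quote_request(client, subject: str, body: str) -> bool:
    """Ask an Azure OpenAI chat deployment. `client` is an AzureOpenAI
    client and "gpt-4o" is a placeholder deployment name."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=build_relevance_prompt(subject, body),
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")

# Building the prompt needs no network call
messages = build_relevance_prompt("RFQ: diffuser", "Need two Ferrari diffusers.")
```

The entire "model" here is the system message; that is the shift Mac is describing.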
So we have our instructions, and I'd started to get into something domain-specific, so we'll come back to those. We have our main cards here within the agent: the instructions we've already touched on, our knowledge, our tools, and our triggers. Knowledge is where we get our data from, and for the purposes of this simple example, it's just a single SharePoint document, an F1 parts list. It's just a CSV file: the literal nirvana of data sources, where everything is beautiful, the world's a happy place and hunky-dory, and we can all be happy. We'll get to why this is not the reality a lot of companies work in, as we're well aware. So I have my table of manufacturers, and they manufacture different parts. Sauber or Ferrari might have a suspension arm; Red Bull Racing might have a turbocharger. And here are the prices, the available on-hand inventory, and the part numbers. A pretty simple data source. Then we have a trigger: when a new email arrives, we trigger this agent, and we'll get to what that looks like in a little bit. And then we have multiple tools. Tools are effectively little tidbits of functionality. If you're a developer looking at low code, think of a tool as your function or method: encapsulated logic that you're most likely making repeatable, because multiple agents can consume a single tool. Those can be custom, but there are also plenty of out-of-the-box tools, and I'll show you those, whether they're connectors or MCP servers. Again, a huge buzzword, MCP: it's a protocol for agents to communicate through, and there are MCP servers you can leverage out of the box.
The coolest thing that I’ve that I’ve I did to get in touch and start playing with MCP servers is Microsoft Learn actually created an MCP server and you can use this MCP server and set up an agent in all of 5 minutes. To make your own agent that allows you to ask questions for latest documentation information within Microsoft Learn. So you don’t have to peruse the documentation and hey, did this change or you know we all know how fast document. Documentation moves at Microsoft. You can create your own MCP server that you can use natural language to ask questions for the latest and greatest documentation. So just an example, just a thought. So anyways, we’re encapsulating this logic here and then we have this agents, right? So you can use multi agents and. An agent might call a different agent based on certain instructions and it understanding that hey, I can’t do this on my own, I need to tap in this other agent and then topics and topics are a little bit more. Involved area encapsulation of business logic. So especially in chat bots you might have a goodbye topic where it’s detecting hey you know I need to say goodbye to this user. I have all this disabled. I I I usually disable a lot of the the the. The default topics, but just know that they’re here. So anyways. Before we dive back into the instructions, let’s talk about the beginning of the beginning. So this agent operation starts when a new e-mail arrives and the trigger for that is a simple Power Automate flow. And again, all of this is low code and for somebody that’s been pro code. And continues to be. I’m hanging on for dear life. There is there was a huge internal battle for me where I used to think that the pro code and low code was this thing that is up against one another. And really it’s not all that it’s it’s something that. 
It's more akin to a relationship where they work together and keep improving the ways in which they work together. So stop looking at it as, hey, you're just a business-app developer, and hey, you're just a guy who knows C#. It's a tool and it's a team, and they're symbiotic in nature. Anyway, we have a trigger when a new email arrives, and it's connected to my email. We're going to include attachments and monitor the inbox folder. For every recipient, there's some logic to make sure we're not processing duplicates; basically, it ensures that if you're testing from your own mailbox by sending yourself a message, you don't process the data incorrectly, but effectively an outside party will be sending you the email. We go through and check whether there are any attachments. If there are, we go through each attachment and append its ID; a lot of this is technical mumbo jumbo, but each attachment in Outlook is identified by an identifier. The most important thing I want to show is that with each incoming email, we call the agent: we send a prompt to our F1 sales agent with the email conversation ID, which is the unique identifier for a given Outlook email, along with a JSON body containing the body of the email, the sender information, the email ID, and the identifiers for each individual attachment. That's what triggers our agent. So with each trigger, the instruction set is: take the attachment IDs.
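The JSON body the trigger sends to the agent could look roughly like this. The field names are assumptions for illustration; the real flow uses whatever names you configure in the "send a prompt" action:

```python
import json

def build_agent_payload(email: dict, attachment_ids: list) -> dict:
    """Assemble the hypothetical prompt payload the Power Automate
    trigger hands to the F1 sales agent."""
    return {
        # Unique identifier for the Outlook email thread
        "conversationId": email["conversationId"],
        "from": email["from"],
        "emailId": email["id"],
        "body": email["body"],
        # Comma-delimited string of Outlook attachment IDs; empty if none
        "attachmentIds": ",".join(attachment_ids),
    }

# Toy email in the shape the Outlook trigger provides
email = {
    "conversationId": "conv-001",
    "from": "buyer@example.com",
    "id": "msg-001",
    "body": "See attached quote request.",
}
payload = build_agent_payload(email, ["att-1", "att-2"])
print(json.dumps(payload, indent=2))
```

Everything downstream, including the instruction "if the attachment IDs string is not empty, call the Document Intelligence tool," keys off this payload.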
If that string is not empty, we invoke our "call Document Intelligence" tool. This tool is a low-code way to call the same REST API you just saw Nick use, except Nick used a GUI implementation, or an SDK implementation in that Python code; we're just going to use a straight REST API endpoint. And this is the thing I couldn't get over for a while: I can just tell the AI, in plain text, that the input to this tool is the attachment IDs string from the trigger email. For somebody who's been pro-code for so long, that blew my mind. Then we use the text of the body and the email attachments to generate a quote; based on that, we use the knowledge from our SharePoint folder to determine what a given part costs; and then, ultimately, we invoke a "send an email" tool. Now, really quickly, I'm going to send myself an email. I wanted to show what the data is. It's just a PDF, very simple: I need two Ferrari diffusers, one Red Bull MGU-H, and one Mercedes wheel rim. Just a PDF of text. Client engagements vary: I once had a client say, hey, it's great that everything arrives as Word attachments, and I literally have clients sending PDFs with handwritten quote requests. At the time I thought, what an interesting use case; anyway, we can totally handle that nowadays. But in this simple case, I'm going to send myself a quote request, say "see attached," attach the quote-test PDF I just showed you, and send it to myself, and I'll let it start to work. While we wait, and it's all going to take just a few seconds, I'll continue showing you our "call Document Intelligence" flow.
So you can add a tool, and the tool can also be a flow, a cloud flow. I created this "call Document Intelligence" flow you see here, and I gave it some basic instructions and configuration, which I can show you within our agent: "This tool is called to retrieve text data from an attachment file received in an email. The attachment IDs are dynamically filled by AI," and that input is a comma-delimited string of attachment IDs identifying the individual attachments present in an email. All of these descriptions are important. They used to not be, back when you were just a developer writing imperative code: what you wrote happened, and you didn't need to describe it much, as long as it was readable. But descriptions matter now, because the AI needs to know what the tool does. So we configured our tool and told it what to do. What the flow actually does, when an agent calls it, is first take care of some variables and such, nothing crazy. Basically, we have a comma-delimited string of attachment IDs. For each attachment ID, we use the Outlook action available in a flow to get the attachment by its ID, which gives me the bytes of the attachment. And in the same way Nick was able to create a new document analysis inside the Document Intelligence portal, I can invoke an API endpoint using an HTTP action in Power Automate that says: call this endpoint, specify some query parameters, and provide this base64 source. That tells Document Intelligence to start analyzing the document, and it keeps at it until it's ready. And so I have a very quick
do-until loop inside my Power Automate flow that says: until we know we're done (and there's a very easy way to determine that), keep calling this endpoint at an interval, asking, is it done? Is it done? And when it is, we return the contents of the document in plain text: rather than those content bytes, we hand a JSON array, with a schema of the actual contents and the attachment IDs, back to our agent. And while I was talking, all of this happened in about ten seconds. I sent that PDF to my mailbox; my agent picked it up, ran the flows and processes I've shown you over the last few minutes, and generated this. It went and hit the CSV file and asked: do I have this in stock? How much? What did Mac actually want? Let's go back to it for one second. I wanted a Ferrari diffuser, a Red Bull Racing MGU-H, and a Mercedes wheel rim, but it gave me an Alpine wheel rim with a note saying the Mercedes wheel rim is not available and the only available wheel rim is manufactured by Alpine. So it actually gave me a quote for everything it thinks I might want, and with some simple prompt engineering you could say, don't do this, or do that, and refine it to behave the way you want. What this shows is a very quick, low-fidelity way to get a pretty darn good implementation going in a very short amount of time; it took only a few hours. What we built was an agent with this trigger, this tool, this product knowledge, and another tool to send an email, and ultimately we got what we needed. We could do things a bit differently, right?
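Stepping back, the HTTP action plus do-until loop just described implements the standard analyze-then-poll pattern against the Document Intelligence REST API. A minimal sketch in Python; the endpoint, key, and API version are placeholders you would substitute for your own resource:

```python
import base64
import json
import time
import urllib.request

# Placeholders: substitute your own Document Intelligence resource values
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"
API_VERSION = "2023-07-31"

def encode_attachment(content_bytes: bytes) -> str:
    """Outlook hands the flow raw attachment bytes; the API wants base64."""
    return base64.b64encode(content_bytes).decode("ascii")

def analyze_layout(content_bytes: bytes) -> dict:
    """POST the document, then poll Operation-Location until it finishes."""
    url = (f"{ENDPOINT}/formrecognizer/documentModels/prebuilt-layout:analyze"
           f"?api-version={API_VERSION}")
    req = urllib.request.Request(
        url,
        data=json.dumps(
            {"base64Source": encode_attachment(content_bytes)}).encode(),
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        op_url = resp.headers["Operation-Location"]
    # The "do until" loop: ask "is it done?" at an interval
    while True:
        poll = urllib.request.Request(
            op_url, headers={"Ocp-Apim-Subscription-Key": KEY})
        with urllib.request.urlopen(poll) as resp:
            result = json.load(resp)
        if result["status"] in ("succeeded", "failed"):
            return result
        time.sleep(2)
```

This is the same round trip whether it's expressed as Power Automate actions, an SDK call, or raw HTTP; only the medium changes.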
We might have an orchestrator agent triggered on a new email or an incoming message, and our quote-processor agent might be a specialized agent that can call this Document Intelligence tool but might also have an ERP integration, adding the new quote into D365 CRM. Or maybe we make it more robust and say you can also modify this quote. The world's our oyster; there are tons of use cases. With that said, let's talk a little about the challenges. You saw some of this data and how nice it can be. It very seldom is. One of the main challenges I've run into, and we've run into at Concurrency, is data readiness and quality. Again, when everybody got the AI hammer, everything became a nail, but not everybody's ready, and data readiness is probably the most impactful thing in the way. We've built POCs and pilots for clients just to get their feet wet, to see the potential of a given use case (of which there are many) and understand how it can fit into their business in a low-risk way. And what we've found is that companies are often just not ready: our data's not clean, or our data's not all in one place. There's a huge strategy behind making sure your data is ready for AI. Some of the other challenges are legal. I've had engagements put on hold for two or three months while entire legal teams, for the client and for Microsoft, got together to figure out: how does Azure OpenAI work, how does this data impact us, what happens in the event of a data breach? By the way, if you have your data in Azure SQL,
it's the same policies, security, and trust. You're concerned about who can access what data in agents, and that's why Microsoft layered on Azure Purview, to make things more governed from an organizational perspective. Culture is a huge one: not everybody thinks of AI, or they're scared of it, or, as I've often found with manufacturing companies, they're fine getting by because nothing is broken, so why fix it? That's where we often meet resistance. If I can accelerate quote generation for your manufacturing business from 30 minutes at best, hours at worst, down to 15 seconds, that's huge, and again, one of the challenges is just coming up against the culture. Then there's perceived cost and scalability, where people are afraid of what Azure will do to their costs with these tokens, and of how much AI is actually going to cost them. The best way to handle that is not to jump straight into the pool; it's to run a quick, low-cost, proven POC or pilot to determine the ROI and how best to leverage the time saved against the AI spend. That's a lot of what we've done with our clients. And then talent and expertise: the first time I heard "prompt engineering," I thought, oh my God, I'll never figure this out. It sounds like engineering is involved. Within a few hours I found it's just modifying text. The bottom line is that a lot of AI sounds like this mythical thing that not everybody understands, and you might have trouble finding talent, and that's true to some degree, but know that it's something you can overcome, and we can totally talk about it. Those are some of the organizational challenges I've seen, and I'm sure Nick has seen them too.
Nick, do you have anything to add to these challenges? Nick Miller 58:53 No, I think you did well. The only amplification, perhaps: when we say data readiness and quality, it's specific to the use case. It doesn't mean you have to have a fully built-out data lake with every single one of your ducks in a row; it applies to the specific use case. If you want to automate a cash-receipts process, yes, we'll need to know the outstanding invoices so we know what we could be matching to, and we'll need some customer data, like names, things like that. Maybe that's a table you have in a data warehouse, or, less ideally, we could hit the source system, which I don't typically recommend, but if the use case is important and the data is there, it can work. Data quality doesn't necessarily mean you have a fully fledged, gold-layer medallion architecture; it just depends on the use case. Everything else, that's great, I would say. Mac Krawiec 59:58 Certainly. Then for next steps, the one thing I wanted to mention, given all of that and Nick's clarification: we have an AI workflow discovery session that can help ideate where we can employ AI and whether the data readiness is there, and most often, for a low-cost pilot, it is. So get with us and talk to us about this discovery session. There's also a Microsoft funding review, where our team helps analyze whether the use case, once we determine what it is, is eligible for additional Microsoft funding that can offset some costs. We encourage you, first of all, to fill out the survey that Amy has been sending throughout the webinar, and then get with us about these sessions and let's talk a little more.
With that said, I wanted to say thank you. Thanks for sticking around; I know we're at time. Nick, do you have anything beyond thank you? Nick Miller 1:00:58 Thank you.