View Recording: How Microsoft Fabric Accelerates Machine Learning in Manufacturing

April 3, 2025

Join us for an in-depth exploration of how Microsoft Fabric streamlines AI and machine learning adoption in manufacturing. Just as a hunter gains a clear advantage by scouting frozen terrain in the offseason, manufacturing leaders can leverage Microsoft Fabric's unified analytics platform to experiment with ML solutions before full-scale deployment.

In this webinar, Brian Haydin, Solutions Architect at Concurrency, will showcase how Fabric's end-to-end capabilities, including OneLake for centralized data, built-in Apache Spark, and integrated AI/ML frameworks, enable rapid prototyping and deployment of machine learning models. We'll dive into three real-world manufacturing applications:

Computer Vision for Quality Control: Automating defect detection with AI-powered vision models.
Predictive Maintenance: Using sensor data to anticipate equipment failures and reduce downtime.
Supply Chain Optimization: Enhancing demand forecasting and logistics planning with machine learning.

Discover how Microsoft Fabric simplifies the entire ML lifecycle, from data ingestion to real-time insights with Power BI. Whether you're exploring AI for the first time or looking to scale existing ML projects, this session will provide actionable insights to help your manufacturing operations thrive.

Transcription

Brian Haydin 0:06
Well, hello everybody. Thanks for joining us for Concurrency's webinar. We're gonna be taking a little bit of a trekking journey today, hunting for some insights using machine learning in Microsoft Fabric. I'm Brian Haydin, and I'm a solutions architect here at Concurrency. If you want, you can follow me on LinkedIn. You'll notice that I do have a little bit of an outdoor theme to most of my talks, and I also do a little bit of a blog on LinkedIn as well, trying to make analogies with the outdoors to help you understand some of the technology changes. But a little bit about me: definitely an outdoors guy. You can see I like fishing, I like being out in the outdoors, hiking around. And a fun little fact about me, besides being a nerd, is that I'm a twin. The other half of me is not involved in technology.

So let's talk about today's adventures. We're gonna go on a little scouting trip, and the idea here is to really just get an idea of what kind of tools we need to put in our backpack. Oftentimes, if I've got a hunting trip in the fall, I like going out in the spring when things are kind of open and clear, and I don't have to worry about the stress of what I'm actually trying to accomplish. Maybe it's hiking on the Appalachian Trail, but what are the things that I need to think about? So we're gonna explore some data, some insights. We're gonna blaze a couple of new trails today, and hopefully by the end of this, you'll understand what the right gear is for you to use when you go on your own adventure.

So let's talk about Fabric first. Hopefully most of you are aware of what it is, but it is an end-to-end analytics platform in the Microsoft ecosystem. It is basically all of the tools that you would typically build in your Microsoft Azure data ecosystem. So Data Factory, your Synapse analytics, and it now even includes SQL databases.
But it also supports data science pretty natively, and has capabilities around real-time intelligence. Most of the data estate is built on this foundation, which I think of like a frozen lake where all these different tracks are gonna converge, and you have one single, unified way to deal with all the data. So everybody has the opportunity to kind of work with it.

But let's think about it from a data science process perspective. On the data science side, typically you have a problem statement that you're trying to work with. You wanna formulate some ideas: how are we gonna solve it? Then you might do some data discovery, do some preprocessing, and then you experiment a little bit with the data. Fine-tune what models you wanna use, maybe fine-tune models or create models. And then you need to get into an MLOps sort of cadence where you're going to operationalize whatever it is that you're trying to build. And then the key component is leveraging all that information that you've created and generating insights. As you can see in the Microsoft platform, we've got Power BI built into the Fabric ecosystem, and the data science and your ETL processes are all in a unified environment.

I like to think of Microsoft Fabric as a Swiss army knife, with all those different tools. If you're a hiker out in the field, that's like the one thing that you gotta have: a compass and a Swiss army knife. And within the Fabric ecosystem, let's dive a little bit deeper into the AI tools and the capabilities that are there. So we have the ability to do Spark jobs. We have the ability to use the analytics components in Fabric natively. But recently they've started to integrate with Azure Machine Learning and Azure AI Foundry, and we'll get into a little bit about that in some of our use cases. Hopefully this is going to eliminate the overhead of building a camp every time we've got to provision something; we're going to have things kind of pre-built and working for us rather than against us.

Taking a little bit of a step back, Concurrency has a methodology for implementing a lot of these projects, and I thought that I would talk a little bit about how we approach designing a solution, and then we're going to get into several different use cases today. Our approach is a value-focused implementation intended to derisk, or minimize the risks of, building complex ML or AI workloads in Microsoft Fabric, or just natively in the Azure estate as well. Typically, in step one we'll start with a POC, and in this stage, what we're really trying to do is validate that the data that we think we need is available and accessible, and that there aren't any gaps in the data that need to be identified. And then we're going to do some machine learning around that. So we're going to start to leverage some models. Typically in the POC phase, we're using out-of-the-box models to engage with the data and see: our hypothesis was that we could do some forecasting or we could do some optimization. Can we see value in that data before we actually move into building out a pilot? So in the pilot stage, we've validated the data, we've predicted that the model's going to have some impact, and now we wanna actually experiment with that data in real live scenarios.
So we're going to build maybe an interface for that, we're going to refine the models that we're using, maybe create our own machine learning models, and we're going to use that against day-to-day operational use. Now, think about it as a pilot, meaning a very focused engagement. You might have a shop floor with 15 manufacturing lines; we're just going to focus on one line, and try to prove and demonstrate that our hypothesis is working. And that also gives us an opportunity to measure the impact of what our machine learning is doing against the other lines that aren't using it. And finally, after we've validated that, yep, the model can actually predict or can optimize things better than a human could, we're gonna move that into production. So in the scenario that I just talked about, we're gonna take it from one line, we're gonna go into all 15 lines, and we're going to evaluate those results over time. And if that proves to be successful, then we're gonna scale that out.

Most manufacturing companies might have five, six, or fifteen different manufacturing facilities. This is a way for us to look at how we're gonna scale, and maybe your Mexico facility is using a different ERP system. So that's where we talk about the scaling aspect of it. Maybe you're using Dynamics in the US and you're using SAP in Mexico. While the data that we're gonna analyze and leverage for these types of scenarios is gonna be the same, we do have to make a scalable, adaptable ecosystem to be able to support that. And the nice part about Microsoft Fabric is that by being able to reuse things like data pipelines, we're gonna have the ability to scale that out pretty effectively and efficiently.

So how does Fabric support an end-to-end kind of scenario? There's one platform with all these different capabilities centered around Microsoft's OneLake. From there, our data engineering can connect to that OneLake and push data in from our ERP, whether it's on-prem or another cloud service. We can pull data from IoT sensors in real time. And then our data science workloads, like Spark jobs, can connect to that OneLake to pull data out and do the analytics on it, and then serve all the information that comes out of that ML product through a Power BI dashboard.

So the way that this fits together is by using pipelines in a variety of different scenarios. And the advantage of the pipelines is that they give you a start-to-finish trip that is monitored and maintained in a DevOps-supported type of lifecycle. So when you make changes, you can promote those to different workspaces through different environments, and make sure that you have consistency and tests that control what gets pushed into your production workloads. So you can see that using an MLOps framework, which is sort of like DevOps for ML, we're evaluating the model before we promote it into our environment. We're validating our Python and Spark jobs before we promote them into an environment, and then we can also maintain most of the infrastructure through things like Terraform as well, and have a really robust end-to-end solution developed.
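Fabric's data science experience builds on MLflow for experiment tracking, so that evaluate-before-promote step can be expressed in a few lines. Here's a minimal sketch under that assumption; the experiment name, model name, and accuracy threshold are hypothetical stand-ins, and the stand-in data would really come from a lakehouse table.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in data; in practice this would come from a lakehouse table.
X, y = make_classification(n_samples=1_000, n_features=8, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

mlflow.set_experiment("line-failure-poc")  # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_val, model.predict(X_val))
    mlflow.log_metric("val_accuracy", acc)

    # Simple quality gate: only register the model for promotion
    # if it clears the threshold agreed on during the POC.
    if acc >= 0.90:
        mlflow.sklearn.log_model(
            model, "model", registered_model_name="line-failure-classifier"
        )
```

The registered model then becomes the artifact that a deployment pipeline promotes between workspaces, rather than a notebook output someone copies by hand.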
So just to give you a quick little snapshot of what a basic ML workflow looks like in Fabric: most of the scenarios that we're gonna talk about are gonna go through the same kind of workflow. We're gonna store some data; we're gonna get it from whatever those data sources are. We're gonna explore and analyze that data. We're probably gonna make data corrections, like we wanna change spaces to underscores, we wanna convert all the currency to a single currency, things of that nature. But we're also gonna expose some of these insights at this point, so we can start to look at that data through dashboards, and we're gonna do our experimentation and our analysis through some of these temporary dashboards that are gonna just help us explore the data. Once we've done that, we're gonna develop and train, and create jobs and pipelines to support that. We'll evaluate and score the data that we have generated through our ML process. And then we're going to finally apply the best model, and we're going to promote those models into our environment.
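To make that data-correction step concrete, here's a minimal PySpark sketch that renames columns with spaces to underscores and normalizes an amount column to a single currency. The table names are made up, and the exchange rates are placeholders.

```python
from pyspark.sql import functions as F

# `spark` is the ambient session in a Fabric notebook.
df = spark.read.table("raw_sales_orders")  # hypothetical lakehouse table

# Change spaces in column names to underscores.
for col in df.columns:
    df = df.withColumnRenamed(col, col.replace(" ", "_"))

# Convert all amounts to a single currency (placeholder rates).
rates = {"USD": 1.0, "EUR": 1.08, "MXN": 0.058}
rate_col = F.create_map(*[F.lit(x) for kv in rates.items() for x in kv])
df = df.withColumn("amount_usd", F.col("amount") * rate_col[F.col("currency")])

# Land the cleaned data back in the lakehouse as a Delta table.
df.write.mode("overwrite").format("delta").saveAsTable("clean_sales_orders")
```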
So before we get started on some of the scenarios, 'cause it's coming up pretty quick, I wanted to talk about some of the recent innovations. What are the latest and greatest cool things coming out in the pipeline? And I'll revisit this a little bit towards the end as well. We have this Real-Time Data Hub, which is fantastic for bringing in IoT data, and there are new capabilities coming out almost on a daily or weekly basis. In conjunction with that, there's a low-code Data Activator that allows you to set up real-time alerts and send out things like emails or Teams messages when something happens, and we're going to leverage that through a variety of these different experiences today.

Another recent announcement, and this is like maybe a week old, is that the Copilot features that are part of Microsoft Fabric are now going to be generally available on all Fabric SKUs. And I mean, that's a huge decision at Microsoft, and I wanna applaud them for it. Previously you needed to have an F64 SKU in order to enable Copilot within your Fabric environment, and for many organizations that's too big of a bite; it's an $8,000-a-month Fabric SKU. So it was just a bit much for people to chew off for that feature. But now you can try it out with a smaller F4 or F8 SKU that's only gonna cost you a couple hundred dollars a month, and get those benefits. And the Copilot features aren't just about exploring your data, like "show me how many unique customers I have" or things like that; it can also explore scenarios that are a little bit more complex, as long as it can understand the data. And then you also get those benefits as part of your machine learning Python scripts and things like that. I'm a .NET developer; Python's like maybe my third or fourth language in terms of familiarity, so I leverage Copilot quite a bit to help me get some of these scenarios built out.

There are new features coming out in notebooks; those are getting faster and having more capabilities. And then I already kinda mentioned it, but there's a new preview feature out with the Azure AI Foundry integration. We've always found use cases where we have to connect to ML models in Fabric, but this makes it a really seamless integration, much quicker and faster for development. And one of the big benefits is being able to expose some of these scenarios as agents, through Copilot chats or things of that nature.

So let's get started on the journey. The theme here is that we're gonna be going on an expedition sometime later this year, and we need to get the right gear in place. We need to understand what gear is used for different scenarios, different hunting trips or different hiking trips that we wanna go on. So we're gonna dive into some of these real-life scenarios.

Alright, first one: predictive maintenance. This one comes up quite a bit. In that inventory spirit, when we're going out, we gotta make sure that the gear works, right? So maintaining your hiking gear is critical. If you're gonna be out on a two- or three-week expedition, you don't want things to break down mid-journey. And for manufacturers, it's kind of the same thing: you wanna keep these machines running. I've had a couple of different scenarios that I've worked with customers on where, say, inkjets are gonna start to fail. They get gunk built up in them. So what can we do to predict when the maintenance is required, and service those before it brings an entire operation down, or an entire line down? So instead of reacting when something happens, I can actually schedule the maintenance ahead of time, knowing that things are gonna need to be cleaned, or that some of the conveyor belts are starting to get worn out and we've got some vibration things happening. So the key benefits here: we're gonna reduce some of the unplanned downtime. And by having better maintenance, just like sharpening or cleaning your gun, you're gonna prolong the life of those assets over time. And with regular maintenance, you typically see things like lower maintenance costs as well.

So what does this look like in Fabric? In Fabric, what we're gonna do is load the data. In these predictive maintenance scenarios, that's typically real-time data, so we're ingesting it into the Delta lake where we can start to operate on it. We're gonna prepare that data using something like a Spark DataFrame, and we'll land that into our lakehouse's Delta tables. And occasionally we'll use things like pandas DataFrames to make use of the plotting libraries. Then we perform some exploratory analysis. Like I mentioned before, we're gonna create some visualizations that help us understand what some of the parameters look like over time. We know when things have failed in the past, so we can look at those metrics and see how they correlate with the end result. Then we're gonna train and evaluate models using things like data science notebooks, and once we have 15, 10, 5 models, kind of getting it down to the ones that work, we'll select the ones that predict the right outputs we're looking for. And then we can create things like dashboards and alerts, and these typically look something like this. This use case was looking at some manufacturing components, looking at the torque ratios and heat across them, and you can see in the upper quadrant that those are the areas where things are starting to get a little bit dicey. And then using the Data Activator, what we're able to do in real time is set up alerts.
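Here's a minimal sketch of the train-and-evaluate step described above, scoring failure risk from sensor readings. The table name, the feature columns (torque, temperature, vibration), and the label column are hypothetical stand-ins for whatever your equipment actually emits.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# `spark` is the ambient session in a Fabric notebook; table name is made up.
sensors = spark.read.table("sensor_readings").toPandas()

features = ["torque", "temperature", "vibration"]  # hypothetical columns
X_train, X_test, y_train, y_test = train_test_split(
    sensors[features], sensors["failed_within_7d"], random_state=42
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Write risk scores back to a Delta table that a Power BI report
# or an alert rule can watch.
sensors["failure_risk"] = model.predict_proba(sensors[features])[:, 1]
spark.createDataFrame(sensors).write.mode("overwrite") \
    .format("delta").saveAsTable("sensor_failure_risk")
```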
So you see, when I hit that red on the graph on the right side, I kinda hit this threshold, and now I can send an alert to my manufacturing operations department through a Teams chat, through an email, or some other mechanism, and alert them that something is about to fail and they need to go do some maintenance on the machine. It's kind of like: hey, I was getting ready to go for the day, I looked at my compound bow, and hey, that bowstring doesn't look quite right; I probably need to go get that fixed. So things of that nature will help you keep things running in tip-top shape and save you some money.

Another really common scenario that we've talked to a lot of customers about is this idea around computer vision. I used an analogy around the inkjets before; a customer that I was working with does some printing, and so what we did is hooked up a bunch of cameras that could take pictures of the print quality and look for anomalies. That's a computer vision scenario, where we need to train a vision model and leverage it to detect things that are outside of some threshold that we set. Quality inspections typically rely on people looking at things, and in some scenarios, maybe the parts are coming through too fast, or people have to take breaks, and they also get a little worn out throughout the day. So using computer vision can really automate most of those inspections. I've seen some really complex scenarios implemented where they're looking for really fine cracks or other defects. But in the computer vision world, detecting scratches on parts, misaligned labels, faulty solder joints: those are all scenarios that are fairly easy for the computer models to pick up.

So what does something like this look like? We're gonna set up some data storage in Fabric. You have the ability to bring pictures and images into the solution, so we're gonna land that data in our OneLake as well, like what we've already talked about. And then we're gonna prepare our Fabric environment, building out our Spark notebooks. For example, we could use some pre-trained ResNet models or some PyTorch libraries built into a Fabric notebook, and these can start to classify product images as OK or defective, and do that within a browser-based environment as well. Fabric has a lot of these libraries built into the ecosystem, so there's no setup friction; you don't have to build a pipeline to bring in these SDKs or libraries. It's like having a trained sniffer dog just ready to go for you. And if you're using Azure Machine Learning, you can integrate with custom models that you might have built using Custom Vision, where you've pre-trained models for specific use cases as well. And then as an output, here's what something like this would look like. Here are some computer chips that we were looking at, and some solder connections were not quite right; it was causing some issues with the hardware. So once the computer vision model's in place, you can flow that right through a Power BI report in Fabric and look at, over time, how frequently you're seeing some of those issues.
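Here's a minimal sketch of that pre-trained-ResNet approach, assuming you've fine-tuned a two-class (OK/defective) head on labeled product images; the image path and class convention are hypothetical, and as written the new head still has random weights until it's trained.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Start from a pre-trained ResNet and swap in a 2-class head.
# In practice you'd fine-tune this head on labeled product images.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # 0 = OK, 1 = defective
model.eval()

# Standard ImageNet preprocessing to match the backbone's training.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("images/board_0042.png").convert("RGB")  # hypothetical path
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

print(f"P(defective) = {probs[0, 1].item():.3f}")
```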
Going back to that printing aspect: inkjets are notoriously unreliable, so you're gonna get some artifacts from time to time. But at the end of the day, in the scenario we were evaluating, they were labels, you know, delivery labels. So: is it good enough to deliver through UPS or through FedEx? That's all we really cared about, that it was good enough for somebody to be able to read it. But we did track the defects over time, so that when those defects started happening on a more frequent basis, we knew it was time for us to do some cleaning on the inkjet printers and get things ready for the next run.

Another real common scenario is forecasting, and I've got a couple of these. There's demand forecasting, typically using some of the sales data. And the analogy here is kind of thinking about the weather. If I'm about to go fishing on Lake Michigan, the first thing that I do is check the weather, check what the wave forecast is. Is it going to be safe for me to go? Or more importantly, is there a window of time that I need to be back that I need to be aware of? It's the same thing with manufacturers: they've gotta do demand forecasting so they can predict when their product demand's gonna be at its highest point and at its lowest point. This'll eventually flow into things like supply chain management, but at the end of the day, we wanna make sure that we have the right products at the right time for the orders that are gonna be coming in.

And so how do we actually accomplish this? Well, taking a step back, remember I talked about our POC and our pilot phases. We would start off with a risk-minimized approach. This, in our vernacular, is step one of our approach: we're gonna do a data audit, we're gonna do a quality check of the data, and we'll do a simple feasibility assessment as part of that kickoff, that data discovery, to make sure that we have a path forward to build a proof of concept that actually uses the data in a model. So we'll pilot some AI models with something like AI Foundry, we'll do some quick validations, and then we're gonna spend a little bit of time making some adjustments, either to the data that we're feeding into it, or maybe we're gonna swap out some different models and look at which ones are performing best. But the outcome of this is that we've got evidence, in terms of actual numbers, that we were able to predict what the sales forecast was gonna look like, and maybe use it to drive some of our decisions.

So using those first two steps, what are we gonna get to? We're gonna centralize our data in OneLake. In a lot of these scenarios we're pulling data from a variety of different sources. You might have your CRM, you might have an ERP for sales history, market indicators; if you're doing weather, you're looking at the weather forecast. You get all that data from ERP systems and CSV files, and maybe even some IoT signals from your smart products, and you consolidate that into your OneLake. This gives you a comprehensive view of all your data. It's like having the full weather map rather than just the local forecast. My trip is gonna cover a couple hundred miles, and I need to have a holistic picture of it.
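A minimal sketch of that consolidation step might look like the following; the file path, table names, and join key are all hypothetical.

```python
# `spark` is the ambient session in a Fabric notebook.

# Sales history exported from the ERP as CSV (hypothetical path).
erp = (spark.read.option("header", True).option("inferSchema", True)
       .csv("Files/erp/sales_history.csv"))

# Market indicators already landed as a Delta table (hypothetical name).
indicators = spark.read.table("market_indicators")

# Join on date so every sales row carries its market context.
combined = erp.join(indicators, on="date", how="left")

# Consolidate into OneLake as a single Delta table that every
# workload (ML, reports, dashboards) can read from.
combined.write.mode("overwrite").format("delta").saveAsTable("demand_history")
```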
So once we've got that data, we can use something like Fabric Spark to work through large historical data sets, and we can process that to find patterns and outliers, things that we might want to ignore. Data scientists can train time series forecasting models using a variety of different models, like Prophet, and can even do this right inside of a Fabric notebook, leveraging some of the out-of-the-box Python libraries. Once that's done, we're gonna automate the forecasting pipelines that are built into Microsoft Fabric, and then visualize some of those forecasts in Power BI. The key benefit here is that we've got a unified data set that can be used for a variety of things, not just the ML workflow; it can also be used to drive sales reports for your executives. And then in the Fabric ecosystem, we have this concept of multiple workspaces, so we can do rapid iteration and deployment. Everybody's working off the single source of truth in terms of the data, and we have the ability to make a robust deployment of our workloads by using the pipelines and MLOps to manage our infrastructure and manage the code that we're deploying.

So this is what something like this would look like. If you kind of zoom in on the little red triangle there, it's probably a little bit hard to see, but we've got an actual value of what our sales forecast was, and then we had a forecasted value, and that's the darker blue line on the right side of the graph. And right about here you can see a divergence, where we were forecasting sales to go up, and in real time we were seeing that the actual values were going down. So using the Data Activator, we were able to set up alerts that somebody might want to take a look at something, so they can adjust maybe their supply chain, or flag certain conditions, for a variety of different reasons. What this can do is help you forecast things like a stockout of a critical item for next month. You see, if your predicted value is lower than your actual value of consumption, you're probably going to run out of stock. And we can automate some of those alerts for you as well, rather than just having a human being using spreadsheets and making a decision there.
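Here's a minimal sketch of that Prophet-style forecasting step, assuming the consolidated table from earlier; the column names are hypothetical, and Prophet expects its inputs renamed to `ds` and `y`.

```python
import pandas as pd
from prophet import Prophet

# Pull the consolidated history (hypothetical table) into pandas.
history = spark.read.table("demand_history").toPandas()

# Prophet expects a `ds` date column and a `y` value column.
df = history.rename(columns={"date": "ds", "units_sold": "y"})[["ds", "y"]]

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
model.fit(df)

# Forecast the next 90 days and keep the prediction intervals,
# which are what a divergence alert would compare actuals against.
future = model.make_future_dataframe(periods=90)
forecast = model.predict(future)[["ds", "yhat", "yhat_lower", "yhat_upper"]]

# Land the forecast where Power BI and alert rules can see it.
spark.createDataFrame(forecast).write.mode("overwrite") \
    .format("delta").saveAsTable("demand_forecast")
```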
Supply chain optimization is also a pretty complex problem, but it's very similar to the sales forecasting in terms of how we would approach it and what our implementation is. We do wanna make sure that when we're manufacturing things, we have the right parts at the right time, and that we're not overcompensating for delivery issues that may occur for a variety of different reasons. One of the customers that we worked with had five or six different vendors that they would use for a specific raw material, and when they would forecast, they'd try to figure out: well, this material coming from Mexico, we're able to get it quicker, but it costs quite a bit more than the stuff that we're getting from China; but the China deliveries are really kind of unreliable in comparison. So they would use short-term resupplies at a higher cost, and do some of their long-term forecasting, and by doing it that way they were able to reduce their warehouse capacity by 60%, something in that range. I mean, it was ridiculous; it was millions and millions of dollars of raw materials that they didn't have to have in stock. And so they were able to just get what they needed when they needed it in order to complete the manufacturing process. So you're minimizing the carrying cost of your inventory, and just as importantly, using these forecast models, as some of your key inputs change, you can make quicker decisions and minimize your costs in those scenarios as well.

So it's a very similar implementation pattern to the sales forecast. We're gonna bring data in from a variety of different systems. Right now I'm working on one that is pulling data from ERP systems, as well as transportation and logistics, and even real-time data from the delivery trucks, using IoT sensors embedded in the trucks to tell where they are in real time, and then we can build dashboards based off of that. So we're consolidating the data, we do some data cleanup, do some exploratory data analysis, and automate our pipelines for continuous optimization. And here's where it's a little bit different from the sales forecasting: supply chain is really volatile, based on things like the Panama Canal, or COVID when that happened. So in these scenarios we typically create automation around optimizing the models that we use, so they can retrain themselves month to month or week to week, and perform updates to those models based on what the current market conditions are.

Again, a nice takeaway here: when you implement one of these strategies, you have a unified data set and data visibility across the organization. Much of this data can be used to calculate the cost to manufacture per SKU, or which customers are costing us the least because of their order frequency, or whatever else. So there are a lot of insights you can generate, but most importantly this helps with automating some of the decision making that happens, especially in the supply chain, where you have to make quick decisions when you get a large order and you need to fulfill it.

A typical dashboard might look like this, where the trend lines or the forecast scenarios are emphasized, and then we can see where the actuals are running. We would use this to optimize inventory levels: what is our predicted sales looking like, and how are we trending against the lines that we had forecasted? And then you could actually connect a lot of these pipelines to your ERP system, so that can feed into a recommended production schedule. That's another scenario I'm working on with another customer: I've got all this work that has to get done over the next two or three days; how do I optimize my shop floor to work with that, and make sure that I have everybody working and I'm not paying excessive overtime? So there are a lot of different use cases when we think about optimization. It's a little bit different than sales forecasting, because what we're trying to do is make real-time decisions in the optimization scenarios, and not just trying to forecast for cash flow.
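One small, concrete piece of that inventory math is the classic safety-stock and reorder-point calculation, which is what trades carrying cost against stockout risk. Here's a sketch using the standard textbook formulas, with made-up demand and lead-time numbers; in practice these inputs would come from the forecast and supplier-performance tables.

```python
import math
from scipy.stats import norm

# Placeholder inputs; in practice these come from the forecast
# and supplier-performance data described above.
avg_daily_demand = 120.0      # units/day
std_daily_demand = 30.0       # units/day
avg_lead_time = 14.0          # days (e.g., the cheaper, slower supplier)
std_lead_time = 4.0           # days (unreliable deliveries widen this)
service_level = 0.95          # target probability of not stocking out

z = norm.ppf(service_level)

# Standard safety-stock formula combining demand and lead-time variability.
safety_stock = z * math.sqrt(
    avg_lead_time * std_daily_demand**2
    + (avg_daily_demand**2) * std_lead_time**2
)
reorder_point = avg_daily_demand * avg_lead_time + safety_stock

print(f"safety stock:  {safety_stock:,.0f} units")
print(f"reorder point: {reorder_point:,.0f} units")
```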
Another common scenario that we run into with manufacturing companies is energy usage. It's a big component of manufacturing, and then there's also just: how do we reduce our impact on the environment as part of that as well? Just like when I go on a long trip, I've got to be really conscientious about how I manage things like power on my phone, or on my watch, or my electronic GPS. I have to manage that power consumption pretty carefully; otherwise, I could be carrying around a large 12-volt marine battery, and my back's gonna hurt because I'm carrying that in a backpack. Most manufacturing companies are looking for ways to have less impact on the environment and also just reduce energy consumption, which is a direct component of cost; in a lot of cases, the highest cost they have is their energy consumption. So we don't wanna drain those resources, we wanna have some of those cost savings, and then we can put a green stamp on our website so that people know that we're doing the right things for the environment.

So here are a couple of different steps for how we would normally approach this optimization. We're definitely gonna bring all the energy data into our OneLake and do our exploratory analysis. But we might want to look at other areas too, like what is the forecasted energy pricing going to look like? And then, on top of that, we're going to use that data to manage our HVAC systems in real time, so that we can reduce some of our electricity usage. A lot of companies are using IoT sensor data for something like that, from meters or their equipment on site, or even to predict gas consumption for trucks that are out doing deliveries. So this showcases the ability to use both streaming data in real time and also more forecast-oriented data as well. And I think one of the important things we can do with this is feed a lot of this data into our sustainability dashboards or our compliance reports that we might have to serve up as part of the regulatory environment. So this is a use case that provides a lot of cost benefits for customers, plus clear, actionable dashboards, and it simplifies some of the reporting requirements for your regulatory reporting.
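A minimal sketch of the streaming side, assuming meter readings are already flowing into a Delta table (the table names, column names, and checkpoint path are hypothetical), might aggregate consumption into five-minute windows like this.

```python
from pyspark.sql import functions as F

# Read the meter feed as a stream; `spark` is ambient in a Fabric notebook.
readings = spark.readStream.table("meter_readings")  # hypothetical table

# Average power per meter over 5-minute tumbling windows.
usage = (readings
         .withWatermark("event_time", "10 minutes")
         .groupBy(F.window("event_time", "5 minutes"), "meter_id")
         .agg(F.avg("kw").alias("avg_kw")))

# Continuously land the aggregates where dashboards and alert
# rules (e.g., consumption thresholds) can pick them up.
query = (usage.writeStream
         .outputMode("append")
         .format("delta")
         .option("checkpointLocation", "Files/checkpoints/meter_usage")
         .toTable("meter_usage_5min"))
```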
And the last scenario I wanted to talk about today is root cause analysis. Out in the wild, things never work out the way I expect them to. Sometimes I get to a campsite that I was planning on using and somebody's there. Sometimes I might wake up in the morning and somebody's rummaged through, well, I shouldn't say somebody; something has started rummaging around my equipment. And so we all kind of become detectives. We start looking for tracks, we look for trails. The same thing happens in manufacturing: we have a sudden drop in product quality, something happens on the production line, and we've got to figure out what it is that happened. But we've got to do it really quickly. So in this scenario, we're enabling analysts to look for these tracks, these little crumbs of activity that may have caused something. Those are very much needle-in-the-haystack kinds of scenarios, and they're time consuming, so we wanna get the information to people as quickly as possible. Microsoft Fabric is kinda like this vast landscape where you can search for the tracks. Analysts can start to use things like Fabric's SQL or KQL interfaces to quickly query logs. And we found that in this scenario, the Copilot features in Fabric are exceptionally useful, 'cause you can ask things like: what's different about the batches where the defect rate was high? A human is gonna say, well, I want to look at these different factors and look at what was different over time; we don't have to do that if we're leveraging something like Copilot, and if it understands the data, it can just go out and give you a report and let you know these are some of the anomalies that we ran into. Especially in these real-time scenarios, where something's happening and that production line being down could cost tens or hundreds of thousands of dollars, it's important to be able to make decisions quickly, and if you can set up a unified data estate to bring all this different data together, a lot of times you can make decisions much quicker. So that was the last scenario that I had.
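The kind of batch comparison that Copilot automates can also be sketched by hand in a few lines of pandas; here's a minimal version, with a hypothetical batch-level table and parameter columns, that surfaces which process parameters differ most between high-defect and normal batches.

```python
# Hypothetical batch-level table with process parameters and a defect rate.
batches = spark.read.table("batch_metrics").toPandas()

params = ["line_speed", "oven_temp", "humidity", "supplier_lot_age"]
high = batches[batches["defect_rate"] > 0.05]   # threshold is illustrative
normal = batches[batches["defect_rate"] <= 0.05]

# Rank parameters by how far the high-defect batches drift from normal,
# in units of the normal batches' standard deviation.
drift = ((high[params].mean() - normal[params].mean())
         / normal[params].std()).abs().sort_values(ascending=False)

print("Parameters most different in high-defect batches:")
print(drift)
```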
I wanted to share some of the big news that's coming out about Microsoft Fabric. Most of this I've already talked about. On the left side of the screen you can see all the what's-new stuff; this is all the stuff that's in public preview at this point, and nobody's gonna read that entire slide, but take a look at some of the features that are coming out. And if you don't wanna do that, you've got a little bit of time over the next month to go to Vegas: Fabric is having their community conference, FabCon, in Las Vegas, March 31st through April 2nd. I remember looking at this online today, and there's a $400 discount as of right now if you were to go and register for that.

But I would take a look if you're using Azure AI Foundry. This is a game changer. Being able to connect not only from Fabric to your Azure AI Foundry, but from Azure AI Foundry to Fabric as well. We can connect pipelines; we can take data and expose it to Azure AI Foundry, and vice versa. It's just amazing. And then, most of the companies that I'm working with haven't gotten to that F64 SKU as they start to experiment with Fabric, so they've been really limited in understanding: is it worth it for us to spend that extra $1,000 or $2,000 to go from an F32 to an F64? Now you can make that evaluation without having to make that big investment. So, Copilot for everybody, and we'll see how some of the smaller organizations that are using Fabric today can benefit from those features.

So, just key takeaways from me. With Microsoft Fabric, two big use cases resonate with manufacturers. The first is being able to bring all this data together from a bunch of different sources; there are new connectors being published almost on a weekly basis to make getting access to your data easier. And the second, for manufacturers, is the ability to support not just batch data, like doing a nightly dump from my ERP, but also being able to bring in real-time data through IoT sensors or a variety of different mechanisms. That's where they really see a lot of the benefit in these kinds of use cases.

So, next steps. If you enjoyed what I had to talk about today, I would love to have a conversation with you and your organization about any potential use cases that you might have, so please respond to the survey link that was dropped in the chat, and let us know if you would like us to help scout your ML trail and pick a real-world example that might be easy for us to evaluate, and do that in days and not months. We can also do a data readiness assessment and build out kind of a roadmap for you. I love doing these types of engagements; over the course of a week, or three weeks, or four weeks, we can really take a look at where your data's coming from and provide some recommendations on how we would design or architect something like that. And if you've done those things already and you're ready to take the next step, but you're not really familiar with Fabric or you're not sure exactly how to get started, we've put together a six-to-eight-week Fabric adoption engagement where we'll help you get the gear all built out, sort of give you the backpack that's already packed, and give you an end-to-end solution that takes advantage of a variety of different components in Fabric. Bringing in things like data security, and Purview to be able to tag and govern your data, is a key component and a big feature of Microsoft Fabric as well. So if you need a jump start, that's another way that we'd love to help.

So, that being said, thank you very much. And please follow us on LinkedIn if you haven't already. Thanks for the feedback.