View Recording: Data-Driven Insights: IQ
February 12, 2026

Data alone isn’t enough—insights drive decisions. In this session, Data-Driven Insights: IQ shows IT and business leaders how to turn raw data into actionable intelligence. You’ll learn how to identify the most valuable metrics, ensure your data is trusted and governed, and leverage analytics to make faster, smarter decisions across your organization. Walk away with practical strategies for building a data-driven culture and turning information into measurable business impact.

In this webinar, Brian Haydin (Solutions Architect at Concurrency) and Suneer Mehmood (Data & AI Architect) introduce the idea of an organization’s “Insight Quotient (IQ)”—the ability to turn raw data into decisions the business actually trusts and acts on. They focus on practical ways to move beyond “dashboard theater” and build a reliable, governed analytics foundation that supports responsible AI. The session walks through three core outcomes: (1) choosing actionable KPIs aligned to business goals (not vanity metrics), (2) standardizing definitions and governance so teams stop arguing about numbers, and (3) applying AI guardrails plus a concrete, low-drama two-week pilot blueprint.
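As a concrete illustration of “KPIs that trigger decisions”: the sketch below computes one of the session’s example KPIs, time to resolution for support tickets, and flags it when it drifts past a tolerance. The ticket data, field names, baseline, and 20% tolerance are invented for illustration; only the KPI itself comes from the session.

```python
from datetime import datetime

# Hypothetical support-ticket records; field names are illustrative,
# not from any specific ticketing system.
tickets = [
    {"opened": datetime(2026, 1, 5, 9), "resolved": datetime(2026, 1, 5, 17)},
    {"opened": datetime(2026, 1, 12, 10), "resolved": datetime(2026, 1, 14, 10)},
    {"opened": datetime(2026, 1, 20, 8), "resolved": datetime(2026, 1, 23, 8)},
]

def avg_time_to_resolution_hours(tickets):
    """Average hours from open to resolution: one 'actionable' KPI."""
    total = sum((t["resolved"] - t["opened"]).total_seconds() for t in tickets)
    return total / len(tickets) / 3600

def kpi_triggers_action(current, baseline, tolerance=0.20):
    """Flag the KPI when it drifts more than the tolerance above baseline.
    The point is that the metric maps to a decision, not a debate."""
    return current > baseline * (1 + tolerance)

ttr = avg_time_to_resolution_hours(tickets)  # ~42.67 hours for this sample
needs_investigation = kpi_triggers_action(ttr, baseline=24.0)
```

The design choice mirrors the session’s advice: a small number of measures, each paired with an explicit rule for when it should change somebody’s behavior.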
They also explain how Microsoft Fabric helps reduce “tool sprawl” by consolidating analytics capabilities under one platform—and how Fabric IQ (a preview workload) adds business language and context via an ontology, a graph of relationships, and a data agent that answers natural-language questions consistently across the organization.

WHAT YOU’LL LEARN

How to pick KPIs that drive action (not debates)
- Why tracking too many metrics creates “noise,” and how to focus on a small set of measures that trigger decisions and behavior change.
- Examples of “actionable” KPIs mentioned in the session: revenue per customer segment, time to resolution for support tickets, inventory turn rate, 90-day customer retention, and margin contribution by channel.

How to build trust with standardized definitions and governance
- Why inconsistent definitions (e.g., “active customer” meaning different things to marketing, sales, and finance) undermine analytics and turn meetings into arguments.
- How governance supports scalable analytics and enables “certified” sources of truth that the business can rely on—especially as AI becomes part of the data workflow.

How to add responsible AI guardrails to your data estate
- Why AI errors and “hallucinations” often trace back to unclear or untrusted data foundations—and why “human-in-the-loop” and read-only patterns help build confidence.
- Why permissions and security posture matter when AI acts like “another user” querying data: answers must respect authorization and controls.

How Microsoft Fabric helps simplify analytics architecture
- How tool sprawl (“dozens of trails”) increases complexity and creates gaps in security and definition consistency—and how Microsoft Fabric provides a unified platform, a single security model, and a shared foundation for experiences like data engineering, warehousing, real-time analytics, and Power BI.
- Acknowledgement that many organizations already use tools like Databricks or Snowflake; the session notes Fabric can work with data where it resides via shortcuts.

What Fabric IQ is and how it works
- Fabric IQ is a Fabric workload that organizes data into the “language of your business” so analytics and agents share consistent meaning and context.
- The components described: ontology (business vocabulary), graph (relationships across concepts), Fabric data agent (natural-language Q&A), operations agents (trigger actions based on rules), and semantic models working alongside these concepts.

FREQUENTLY ASKED QUESTIONS

What is “Insight Quotient (IQ)” in this context?
It’s presented as an organization’s ability to translate raw data into decisions that “stick,” avoiding the common gap between abundant data and poor decision-making caused by inconsistent metrics and low trust.

Why are “vanity metrics” a problem?
Because dashboards full of metrics can become “screensavers” that don’t change behavior; the recommendation is to pick a small set of measures that reliably drive action.

Why do standardized definitions matter so much?
If core terms like “active customer” vary across departments, the same dashboard can produce conflicting interpretations—leading to debate instead of strategy. Governance is positioned as the mechanism that prevents definition drift and enables scalable, trusted analytics (including certified datasets).

What is Microsoft Fabric?
It’s described as an end-to-end analytics platform built around a single foundation (“OneLake”) with multiple experiences under one roof—data engineering, warehousing, real-time analytics, and Power BI—intended to reduce fragmentation in tools, security, and data definitions.

What is Fabric IQ—and what problem is it solving?
Fabric IQ is presented as addressing definition drift and inconsistent meaning by formalizing business vocabulary (ontology) and relationships (graph), enabling agents and analytics to answer questions consistently across roles and departments.

Ontology vs. semantic models—do we need both?
The session frames semantic models as the familiar relational modeling layer (facts/dimensions and joins), while ontology provides the business vocabulary and meaning needed for natural-language interaction; they’re described as complementary.

What does the graph enable that traditional dashboards don’t?
It supports cross-domain reasoning by connecting concepts across contexts—e.g., linking rising support tickets to a specific product, to a recent update, and to an engineering release—so teams can identify causes, not just symptoms.

What’s a Fabric data agent and how does it answer questions?
It’s described as a conversational “front door” that takes natural-language questions and generates the appropriate query language depending on where the data lives (e.g., SQL for lakehouse, DAX for semantic models, KQL for real-time sources), returning consistent answers aligned to governed definitions.

ABOUT THE SPEAKERS

Brian Haydin — Solutions Architect at Concurrency; leads customer conversations on turning data into actionable, governed insights and outlines practical pilot approaches for Fabric IQ.
Suneer Mehmood — Data & AI Architect; discusses modeling, ontology/graph value, governance (“garbage in, garbage out”), and how Fabric simplifies traditional “bolted-on” data estate complexity.

TRANSCRIPT

Brian Haydin 0:05
All right. And here we are. Thanks, everybody. Thanks for joining us today. We’re going to be talking a little bit about IQ, and not the kind of thing that you took a test for in the third grade. We’re really gonna be talking about your organization’s Insight Quotient. So this is the ability for you to turn raw data into decisions, and hopefully ones that are actually gonna stick with you and the organization. I am Brian Haydin. I am a solution architect at Concurrency and drive a lot of these conversations with the customers, and I am joined by Suneer.
Suneer, you want to introduce yourself real quickly?

Suneer Mehmood 0:50
Hello everyone, I’m Suneer. I work as a data and AI architect. So yeah, looking forward to this conversation. Thank you.

Brian Haydin 0:59
Yeah, fantastic. So let’s go ahead and dive in a little bit. We’re going to talk really about three things, and by the time that we wrap this up, I’m hoping that you’re going to walk away with these three impacts. First, one of the things that we talk with our customers about is picking metrics that actually move the needle, ones that, you know, don’t just look pretty in a dashboard, but are actually aligned to things like your organization’s North Star. So we’re going to teach you how to move beyond those vanity metrics and define the right kind of KPIs for the business. And second, we’re going to talk about building out some of the trust and AI guardrails. So, ground rules for making sure that your data is trustworthy and that AI is not running around unsupervised, you know, kind of like a golden retriever at a picnic or something like that. And then the third thing, and this is kind of the big one, is a concrete pilot blueprint. So at the end of the day, this is what I want you to walk away with: some sort of two-week, low-drama plan to get some of this data stood up and be able to ask questions about it in your own business language and get meaningful answers. So this really isn’t a someday-maybe kind of talk. I’m hoping to present a set of actions that you’ll be able to take in order to drive some value out of this in, like, two to three weeks. I’m gonna start with just sort of a statistic. Over half of business leaders admit to using inaccurate or inconsistent data for their decisions. For me, that’s kind of like an illustration of low IQ, and a low score for any organization.
And so you’ve got this like pit of despair in between this abundant data that I have and all the decisions that we want to make as an organization. But you know, most companies aren’t data poor. Most companies are decision poor. They have metrics, but they’re not in agreement with, you know, the rest of the business. They may have dashboards, but they don’t have trust in the data itself. And now you start throwing AI into that, and without any kind of guardrails, you’re going to get all sorts of, you know, inconsistent results. So think of it this way. Imagine your company is taking that IQ test. You’ve studied hard, you got all the textbooks. But when you sit down to answer the questions, every department is giving you a different answer. Like, what is a customer? What does sales growth look like? Marketing says revenue is up. Finance says it’s flat. Sales says, you know, it kind of depends on how you define the active customer. So that’s the kind of thing that’s not really intelligence. That’s like a spreadsheet court. And we want to make sure that, like, you know, it’s not just the loudest person that’s going to win. So the last kind of point that I’ll make here is that, you know, when we add AI, we talk about things like hallucinations, and hallucinations don’t just come out of nowhere. A lot of times they come out of not understanding the data itself. So what we want to do is make sure that we’re giving you the tools to be able to make good decisions with the data that you have. I also want to just say Suneer is here to field some questions too, so Q&A should be open. Throw some comments in here. We’re happy to answer any questions as we kind of go along. So let’s make this as interactive as we can. All right. I talked a little bit about some ground rules, and this is one that kind of separates the data-smart from the data-drowning.
Pick a handful of metrics that trigger some actions, and not just, like, arguments. And what do I mean by that? If you’re tracking 50 metrics and nobody changes any behavior because of it, then that’s not really analytics. That’s just like a fancy little screensaver or something that people click around in and that makes them feel good. And we don’t want that. What we want is, you know, intelligent data and metrics that we can actually perform actions against. An IQ test doesn’t take, like, 500 random trivia questions. It’s really focused on the core areas that actually measure your intelligence: pattern recognition, logic, verbal reasoning. Those are some of the same things that we’re looking for when we build out the KPIs. And so if I had to give, like, five examples of what would be some good ones: revenue per customer segment. So not just your total revenue, but revenue per customer segment; start to divvy that out a little bit. Time to resolution on some of your support tickets: are we getting better at answering those questions quicker? Inventory turn rate, 90-day customer retention, or margin contribution by channel. Those are some actionable types of things that you can do something about. So as an example, I’m seeing my time to resolution on the support tickets going up; that’s going to trigger some sort of action over time. I want to, like, look into why is it taking longer? Did we have a lot of turnover? You know, what are some of the factors that went into it? Suneer, you’ve been doing a lot in this space, you know, helping customers kind of frame out the right metrics. Do you have anything you want to add to that?

Suneer Mehmood 6:37
Yeah, you know, pretty much the customers derive the requirements here, right?
So they define the business rules. What you model in, you know, any data platform really is driven by what the customer says, what they want to see, and all the business rules that they define. And the way AI answers and sources the data really depends on how the data is modeled and how the ontology, which we are going to talk about in a few moments, is going to be modeled. So it’s really tailored to how you want to define your entities and the context of your business.

Brian Haydin 7:18
Yeah, and look, you know, when you look at the screen, you’ve got on the left side, you know, very noisy metrics. I have actually seen dashboards that look like this, and, you know, who’s gonna dig into that? I mean, sure, you can click on things and make all the different pieces kind of move around, but at the end of the day, like, 12.5 jumps out at me. I’m seeing growth, I’m seeing, you know, shrink, etcetera. So, all right, ground rule #2: we need to standardize definitions, and this is going to build trust within the organization. I gave a couple of examples earlier, but if an active customer means five different things across five different departments, then how do you, you know, actually call that analytics? What you really have is kind of like improv, like you’re out there, you know, making it up as you go. And when I say improv, I’m not talking about, like, going in and doing the improv funny stuff. It’s more like just sort of making it up as the wind blows. So take a look a little bit at this slide. We’ve got marketing, we’ve got sales, we’ve got finance, and they all think they know what an active customer is. So for marketing, that’s like: this e-mail was opened in the last 90 days. You know, sales might say that they’ve got a deal that’s in the pipeline. Or finance, you know, is saying anybody that has given us money in the last 12 months is an active customer.
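To make that definition drift concrete, here is a minimal sketch showing how the same customer list yields different “active customer” counts under the three departmental definitions just described. The customer records, field names, and dates are invented for illustration; only the three definitions come from the session.

```python
from datetime import date

TODAY = date(2026, 2, 12)  # hypothetical "as of" date

# Illustrative customer records; field names are invented for this sketch.
customers = [
    {"id": 1, "last_email_open": date(2026, 1, 20),
     "open_pipeline_deal": False, "last_payment": date(2025, 1, 10)},
    {"id": 2, "last_email_open": date(2025, 6, 1),
     "open_pipeline_deal": True, "last_payment": date(2025, 12, 1)},
    {"id": 3, "last_email_open": date(2025, 1, 1),
     "open_pipeline_deal": False, "last_payment": date(2025, 11, 15)},
]

def marketing_active(c):  # opened an e-mail in the last 90 days
    return (TODAY - c["last_email_open"]).days <= 90

def sales_active(c):      # has a deal in the pipeline
    return c["open_pipeline_deal"]

def finance_active(c):    # paid us within the last 12 months
    return (TODAY - c["last_payment"]).days <= 365

# Same data, three departments, three different answers:
counts = {
    "marketing": sum(map(marketing_active, customers)),
    "sales": sum(map(sales_active, customers)),
    "finance": sum(map(finance_active, customers)),
}
```

A governed, certified definition would replace these three predicates with one agreed rule that every dashboard and agent reuses, which is exactly the drift problem the speakers return to later.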
But those aren’t clear definitions for the organization, ones that you can actually report on. So when they show up to the same meeting, with the same dashboard, and everybody’s talking about 12 percent, 5%, you know, 8%, nobody’s on the same page, and it turns into a debate for hours rather than actually having a conversation about what the strategy is. So that’s what we’re talking about with governance. And I know it usually sounds about as exciting as reading the terms and conditions from your phone, but, you know, this is the stuff that’s really going to kind of make or break your ability to scale and have conversations that are meaningful, especially when you get into the AI components of it, being able to have AI give that standard answer. Organizations that don’t have this kind of governance are gonna get stuck in, like, isolated experiments, you know, ones that rarely have any value associated with them. And then the other part is, if you are continuing to have these kinds of arguments within the organization and the data is unreliable, you can’t scale, because nobody is trusting it and nobody is saying, I’m going to bless this. One of the things, when we kind of get into Fabric a little bit, is talking about certified datasets and, you know, having business owners that can actually say this is the source of truth. And if you can’t get to that state, because nobody has a common understanding of what those definitions are, you’re never going to get certified data that the organization’s going to be able to use. Anything else you want to add on this, Suneer?

Suneer Mehmood 10:21
Yeah, this reminds me of the phrase garbage in, garbage out, right? Like, you know, we are dealing with very smart systems. So, you know, the more we govern and certify these datasets and make sure that this data is
accurate and precise, you know, the more it’s going to help, right? Because AI can really query your system really fast and bring insight, so you want to make sure that you get accurate results. So governance is of utmost importance, and certifying your data is, you know, very important, yeah.

Brian Haydin 10:59
Absolutely. So then the third ground rule, and this is kind of where the rubber hits the road from an infrastructure perspective. You know, with AI, it’s like having a really confident new hire, like an intern, or maybe somebody who just graduated top of their class, like super wicked smart when you interviewed them. And you bring them in on day one, and all of a sudden they’re rewriting all your SOPs. Are they smart? Sure. Are you going to trust them? I don’t know if you’re going to, you know, trust somebody that just came out of college right off the bat. So, you know, a lot of enterprises now have protocols to mitigate AI errors, and the industry has really gotten a lot more skeptical, and I would say in a healthy way. So what we’re trying to do is set up: how do we double check, how do we set boundaries, how do we treat AI as read-only, you know? So we kind of have the proof that we can trust it with write access. So a lot of times with our projects, what we’re doing is we’re saying we’re going to keep a human in the loop, we’re going to use it to guide and inform, but we’re not going to give AI control of the outcomes until we’ve actually validated that what it’s doing is correct. So on the cloud side, the fastest way to really kill momentum on a project is to have unclear capacity, unclear security posture, unclear tenant settings. And what we’re gonna kind of show you and walk through today a little bit later is having a known good setup.
And we’re gonna talk about how Fabric can kind of address some of these things: when your capacity is provisioned and paid, and not a trial, you’ve got a good security model, and you have access to some of these other features that we’re gonna dive into. So permissions are really not, you know, an optional thing at this point. We’re gonna have data, and when we give it to the agent, which is essentially another person, just, you know, an AI agent, the answer that it’s gonna give us back has to be something that is authorized. It’s got the permission to access this data and, more importantly, the ability to give it back to you to use in whatever fashion it’s gonna be used. So, most data stacks grow up the same way that most trail systems kind of grow. Somebody needs to get from point A to point B, and so they kind of go in with their machetes and they cut a path for people to follow. And then somebody else needs to go from point C to point D, and there’s another path that gets cut, and then somebody needs to go from D to E and they create a new path, and pretty soon you get dozens and dozens of trails. There’s no clear way to get around. Nobody remembers who built which one, and now there’s poison ivy growing through it, and everybody’s in a mess. So this is what the tool landscape can grow into when you’re not following consistent patterns. Ingestion over here, modeling over there, Power BI over here, and, you know, some PowerPoint slide that nobody’s updated since, you know, 2022 or something like that. So from a consolidation standpoint, what Microsoft Fabric is going to do is give you one platform for all this different tooling, one security model, one source of truth; it becomes, you know, an operational consolidation.
And with fewer parts and fewer places where people have to go look for things, that means there are going to be fewer gaps: fewer places where the data gets lost, definitions drift, and the security kind of falls through the cracks. So a lot of our implementations take governance and security as our first step. You know, we look at how do we want to tag this information, how do we want to govern and segment the information. We start building out a plan for data domains from the ground up, from the beginning. And that’s exactly why we’re gonna anchor on Fabric and be able to use a unified platform like this to help us stay on track. Anything else you’d add for that? What are some of the things that you’ve seen, Suneer, in terms of tool drift and how that adds to the complexity?

Suneer Mehmood 15:26
Yeah, traditionally, when we speak about our data estate, we need several layers to it, right? Bolted on. For example, we need a data storage like a relational database. We need a semi-structured storage like a data lake. We need a querying mechanism. We need a data pipeline. We need distributed data processing, like your big data architecture. So traditional data estates have these siloed services for the right reasons, and you need to bolt on and integrate each of those. So as you bolt on and integrate, governance can be a little bit tedious, because you have to take care of everything that is there in your data estate; so is your maintenance and your licensing, cost tracking, and all of that. So Fabric brings everything into one ecosystem. The governance is kind of bolted onto it, so is the security architecture, and it’s very easy to maintain from a licensing perspective as well. That’s what I’ve seen, yes, and we’re going to talk about the intelligence aspect of it. So that’s my take on this.

Brian Haydin 16:38
Yeah. And I don’t want to scare people off either when we start talking about a lot of different tools.
You know, I don’t know if I actually have anything in the slides here to talk about this specifically, but we work with a lot of customers that have invested a lot of money in things like Databricks or Snowflake. And Fabric has this idea, this concept of shortcuts, that allows us to work with that data wherever it resides, whether it’s in AWS or in, you know, Google Cloud; it doesn’t really matter. It’s really just having this engine that allows you to build analytics in a unified way. So what does some of this look like? For people that maybe aren’t that familiar with Fabric, here’s kind of a quick orientation, like the 90-second version. It’s an end-to-end analytics platform, and it’s all based on, like, a single source of truth, essentially OneLake, and it’s a set of experiences that are built on top of that. So we’ve got data engineering, warehousing, real-time analytics, Power BI; you know, all of this is living under one roof. Think of it a little bit like a base camp, you know, so whether you’re heading onto your data engineering trail or your Power BI trail, everybody’s launching off from the same place. All your gear’s in the same place. Everybody’s working off the same trail map, and nobody’s gonna get lost. And, you know, it’s been around for a couple of years. I think it was two and a half, maybe three years ago it was released. And so it’s really kind of matured, and people are starting to really double-click into building in this platform. And now we’re starting to see it really take off in some of the more innovative ways with things like IQ and Copilot. And it’s, you know, kind of a fantastic platform to build on. So hopefully that’s just a quick little overview. I’m not going to dive into all the nitty-gritty about it, because we really want to get into the IQ aspect. Anything else you’d throw in here, Suneer?
Suneer Mehmood 18:43
No, I mean, you covered pretty much everything, and we are, you know, happy to answer any feature-related questions that you might have regarding Fabric. But as a gist, you can see that everything that we need in and around processing your data and storing your data is there in Fabric.

Brian Haydin 19:05
So what is Fabric IQ? This is the part that probably most of you are wondering about: what is this thing? That’s why you came here for this talk. So it’s basically a workload in Fabric that unifies the data across OneLake and starts to organize it into the language of your business, so that the analytics and the agents, you know, have a consistent meaning and context around the data that you’re working with. Remember when we talked about that definition drift problem a little while ago, marketing’s active customer versus finance’s active customer? This is the problem that IQ is coming in here to solve, for real, actually giving you, at the platform level, what these definitions are. So it’s not like a magic button, completely, but it’s a set of things that are working together. First you’ve got the ontology, which is how and where you define your business vocabulary. Then we have the graph, which helps you build and materialize the relationships between all these different business concepts. You’ve got a Fabric data agent, which allows people to basically ask questions in natural language, or plain English, and get the same answer no matter if you’re somebody in finance or somebody in sales asking the question. And then you’ve got operations agents, which can trigger actions based off of the rules. This is similar to the real-time analytics, and some of these components have been around.
And then finally you’ve got the semantic models that are part of this, which, you know, you’re probably already using some of these features as well. So if you want to think of Fabric as that base camp, IQ is kind of like the map, right? Like the trail guide who actually speaks the local language, knows where the shortcuts are, and makes sure that you don’t, you know, step off the cliff if it’s a little bit cloudy or foggy out; keeps you on the right path. Suneer, you’ve been working a little bit with this. Would you want to dive into any of this first?

Suneer Mehmood 21:18
Yeah, sure. Most of you who have worked in the Fabric ecosystem, or just have Power BI, are aware of semantic models, right? So you might wonder, OK, I already have semantic models, then why do I need something like an ontology?

Suneer Mehmood 21:36
Sort of, you know, so IQ is something that we’re going to talk about more, but at a high level it gives that intelligence component to Fabric, and the semantic model is the one that helps you relate your data from a relational architecture perspective, because you have your facts, your dimensions, how they join, and so on and so forth. So both work complementary to each other is what I would add, and then, yeah, we can go into the details and discuss more about it.

Brian Haydin 22:09
Yeah. So the first one we really want to talk about is ontology, right? So, you know, that’s a word that came up when Suneer and I were putting together this deck, and he’s like, I haven’t seen that word in a little while, right? Because, you know, this is not a new term or concept. It’s something that’s been around for a long time. But I think people sort of stopped really considering it or really thinking about it. And this is where the magic really happens.
So if you don’t have a clear definition of the nouns and verbs that your business has, none of this is going to work. And so think of your nouns as your entities: customer, order, product. These are all the concepts, the named entities, that you’re going to want to think of. And then the verbs are your relationships. So a customer places orders, a ticket is about a product, or a product is stored at a location. Those are the verbs. So here’s kind of where it gets interesting. You’re not just drawing these boxes and arrows in, you know, abstract ways. You’re binding them to the actual data. You’re taking a look at it and saying, this is the customer entity, it lives in my lakehouse table. This is where it exists. This is the order entity; it comes from this warehouse. And when you start to do that, Fabric can understand and build what they call an instance graph: it takes your abstract business model and starts to fill that all in with the real data and real relationships, and that’s the part that actually allows you to get to the real answers. So the other thing that it does when you build out this ontology is it keeps the lineage intact. So when somebody starts to ask questions like, where did this number come from, you can trace it all the way back to the source, and it refreshes on that same schedule that you’re refreshing the data. So whether that’s hourly or, you know, a daily refresh, you’re getting that up-to-date, current picture of where the data came from. So people aren’t questioning how recent is this, and, you know, can I actually trust it? What else would you add to this, Suneer?

Suneer Mehmood 24:22
Yeah, and think of it like, you know, you’re giving vocabulary to your data, right?
Like, you know, so far we have been dealing with data using languages like Structured Query Language or Kusto Query Language and stuff. Now we are looking at ways where we can interact with the data in our natural language, right? So how do we actually strengthen that vocabulary, make the system understand what entity connects to what, with the nouns and verbs terminology, as Brian just mentioned? So it’s very interesting that we are getting into that realm where we can actually query the data in natural language, and make it even better.

Brian Haydin 25:02
Yeah, and I’m just going to throw out here, just to be clear about this: this is a preview feature at this point. So IQ was just released, you know, at Ignite, and so it’s kind of an evolving thing. There is the ability to essentially do auto-discovery; they have some automation, you know, built into the ontology here. But we’ll see these features kind of mature and enhance over the next several months as it goes from preview to, like, a GA release. The next concept that I wanted to get to was the graph, and I kind of mentioned the graph a little bit. But now that we’ve got the ontology defined, let’s talk a little bit about why the graph changes the game. Because, you know, defining the entities is kind of the first step that you have to do, but the power is in all these different types of relationships. So here’s kind of a real-world example, something that I would see, you know, talking to different companies. Let’s say your support tickets are starting to trend up, and in a traditional siloed, you know, traditional data warehouse, the support team looks at their dashboard and says, huh, tickets are up 15%. Like, maybe we need some more agents; maybe we need to hire some people to, you know, deal with this.
But with the graph, we can follow the thread and figure out where and why those tickets were created. Maybe we’re seeing that they’re about a specific product. Maybe that product had a software update last week, and that update came from a specific engineering team’s release. By building all these relationships into the ontology, we now have a graph that lets us connect the dots: we can see the cause, we can see the product, we can see the team — and if we can see all that, the fix is often pretty obvious. Maybe with that release what we want to do is a quick rollback to bring those support tickets back down. That’s the kind of cross-domain reasoning that happens. It’s the equivalent of a really high-IQ person drawing on knowledge from multiple fields and decades of experience to solve a problem that stumps even the most seasoned specialists in the ticketing realm. That’s what the graph does for the organization — it connects the dots across the different data silos. Suneer, how have you seen this impact other customers? Suneer Mehmood 27:41 Yeah, good question. When we create these ontologies or graphs, we can actually combine multiple semantic models. More often than not, a semantic model we create is specific to one context — for example, support tickets — and the reason for a software update could be in a totally different context altogether. The thing is, you can actually stack up, or relate, multiple semantic models.
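The cross-domain “follow the thread” reasoning Brian describes amounts to a path search over the relationship graph. A toy sketch — the entities, verbs, and graph representation are illustrative, not Fabric’s instance graph API:

```python
from collections import deque

# Illustrative relationship graph: entity -[verb]-> entity.
edges = {
    "SupportTicket":   [("about", "Product")],
    "Product":         [("updated_by", "SoftwareRelease")],
    "SoftwareRelease": [("shipped_by", "EngineeringTeam")],
}

def follow_thread(start: str, goal: str):
    """Breadth-first search returning the chain of hops from start to goal."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for verb, nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [f"-{verb}->", nxt]))
    return None  # no connection between the two entities

print(follow_thread("SupportTicket", "EngineeringTeam"))
# → ['SupportTicket', '-about->', 'Product', '-updated_by->',
#    'SoftwareRelease', '-shipped_by->', 'EngineeringTeam']
```

In a siloed warehouse, each hop in that path lives in a different team’s data; the ontology is what makes the whole chain traversable in one query.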
And that way it gives you insights into your data that traditional querying might not, unless you have specifically modeled those dimensions — which, more often than not, isn’t the case. So, pretty insightful. Brian Haydin 28:35 Yeah. So I mentioned data agents, and I think this is the cool part where all of this starts to come together. Microsoft Fabric has had Copilot capabilities enabled for a little while now, and now we’ve got the ability to create agents — that’s where we’re really bringing this all together. Everything we’ve been talking about — the metrics, the definitions, the ontology, the graph — this puts a conversational front door on it. Imagine being able to ask questions in plain English, something like: which products are driving repeat purchases, and what changed in the last 30 days? A data agent, by traversing the graph and understanding the ontology, can figure out where that answer lives and generate the right kind of query to pull the data back — and, more importantly, it might give you insights around that data as well. If your data’s in a lakehouse, it’s writing the SQL to get the data. If it’s in a semantic model, it’s writing the DAX to interrogate that data. If it’s in a real-time data source, it can write KQL. The user doesn’t have to know how to get to the data — they just ask the question and let the agent build the access to that data on its own. And here’s the important part, and this is why we spent all that time on the ground rules: the data agent really enforces responsible AI policies. It enforces things like user permissions. And it’s read-only, right? We’ve established that we’re going to start with read-only — it’s not going to decide what gets updated in the database.
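The SQL/DAX/KQL routing Brian describes boils down to a dispatch on where the ontology says an entity lives. A minimal sketch of that idea — the source-type names are our own illustration, not Fabric identifiers:

```python
# Map each kind of data source to the query dialect an agent would emit.
# (Illustrative dispatch only — not Fabric's internal mechanism.)
DIALECT_BY_SOURCE = {
    "lakehouse": "SQL",       # lakehouse tables -> SQL
    "semantic_model": "DAX",  # Power BI semantic models -> DAX
    "eventhouse": "KQL",      # real-time sources -> Kusto Query Language
}

def pick_dialect(source_type: str) -> str:
    """Return the query language for a source, failing loudly for unknown ones."""
    if source_type not in DIALECT_BY_SOURCE:
        raise ValueError(f"no query dialect registered for {source_type!r}")
    return DIALECT_BY_SOURCE[source_type]

print(pick_dialect("semantic_model"))  # → DAX
```

The user never sees this routing — they ask one natural-language question, and the dialect choice is an implementation detail behind the agent.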
It’s not gonna go rogue, it’s not gonna do anything like that. It’s just giving you answers to the questions you’re asking. At some point we might want to enhance that and start performing actions, but the data agent itself is really about getting good, governed answers to the questions you’re asking. So remember that confident intern from before? This is the intern after three months, right? We’ve told them: all right, now you know where everything is, stop writing the SOPs — I need you to go fetch this ad hoc report for me and bring the information back. So that’s the data agent we’re talking about. But let’s get into a bit of a reality check, because I’d rather you hear this right now than figure it out on a Tuesday at 10:00 PM when you’re first trying to get your pilot running. First, I mentioned this before: it’s in preview. So just expect that it’s going to be a little rough, and that some of the documentation may be a couple of days behind — you might not see a button exactly where the docs say it is. Don’t be afraid of that; that’s just how it works with a preview workload. Second — and this is the one that catches most people — Fabric trial capacity doesn’t support these AI experiences. You have to decide what kind of capacity you’re going for, because the data agent, IQ, none of this stuff is going to light up until you have a paid capacity. Something like an F2 would be the bare minimum, but you’re probably going to want something like an F4 or F8 if you want to do any real workloads. So you’ve got to pay for it. Not super expensive — I think F2 is a couple hundred bucks a month, Suneer.
Is that about right? Yeah. And then it roughly doubles with each SKU as you go up from there. All right, third: tenant settings. You’ve got to turn this stuff on in the tenant settings, so you’re going to need administrator access to flip the switches on some of this. Suneer Mehmood 32:34 Yes. Yeah. Brian Haydin 32:53 The data agent toggle, the cross-geo AI toggles — if those are off, you’re not going to get anything. So work with your admin to get those turned on. Fourth: your data agent is read-only, and when we think about security, we want to make sure that’s what we’re doing. I know we already said it, but I’m saying it again because it really matters: it reads, it queries, and it answers — that’s it. It doesn’t write, and that’s by design for the foreseeable future. Let’s build some trust; let’s let that intern get a little more experience before we give them the keys to the kingdom. And fifth: the ontology doesn’t have versioning yet. Right now there’s no way to go back and look at how it’s changed over time. So if you make changes, document them and keep some sort of change log on the side. I think versioning will come into play pretty quickly, because we’re already seeing pain points when the ontology changes — but in the meantime, be disciplined about tracking what you change. All right, so the blueprint: what can you do as an organization to get some momentum? I want to lay out a two-week pilot blueprint — something quick, simple, low drama, and meaningful.
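On the fifth point — no ontology versioning yet — the change log Brian recommends can be as simple as an append-only JSONL file kept alongside your workspace notes. A minimal sketch; the file name and fields are our own convention:

```python
import datetime
import json
import pathlib

LOG = pathlib.Path("ontology_changelog.jsonl")

def log_change(author: str, change: str) -> None:
    """Append one JSON line per ontology edit, since IQ has no versioning yet."""
    entry = {
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "author": author,
        "change": change,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

log_change("brian", "Added 'Customer places Order' relationship")
```

One line per edit is enough to answer “what changed and when?” until versioning lands in the product, at which point the log can be retired.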
I’ve often talked about Fabric pilots as starting with the end in mind. Think about what would be an important question you could ask for the organization, and build from there. The goal is simple: within two weeks, a business user can ask three to five priority questions and get consistent answers that are sourced and that match the definitions you’ve already agreed upon as an organization. That’s the criteria for success. Not 100 questions, not a production rollout — three to five questions your business actually cares about, answered correctly and consistently, where you can look at the lineage and the ontology and feel confident it got the right answer. What would you add to that? You’ve seen the drift when people want to boil the ocean, Suneer, right? Suneer Mehmood 35:32 Yes. I would always recommend starting by knowing exactly what you want to get — defining what your business problem looks like, what you want to see in your report, and what types of questions you’ll ask. When we design a warehouse, we start with something called the bus matrix, where we say: these are the KPIs we want to measure, and these are the dimensions we want to slice and filter by, and so on. The better you’re aware of your problem definition, the better you can model, and the better you can make sure the data coming into the lakehouses or warehouses within Fabric is precise enough — and that helps us create and maintain the ontology. Like: this is your customer, this is your product, you have a concept of orders, and a customer places an order — which is the verb-to-noun connection. That helps us define the ontology.
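The “bus matrix” Suneer mentions is the Kimball-style grid of business processes against conformed dimensions. A tiny sketch of the idea — the processes and dimensions below are illustrative:

```python
# Kimball-style bus matrix: business processes (rows) vs the conformed
# dimensions (columns) each process needs. Purely illustrative data.
dimensions = ["Customer", "Product", "Date", "Channel"]
bus_matrix = {
    "Support tickets":   {"Customer", "Product", "Date"},
    "Orders":            {"Customer", "Product", "Date", "Channel"},
    "Software releases": {"Product", "Date"},
}

# Dimensions shared by every process are the ones worth conforming first —
# they're also the natural first entities for the ontology.
shared = set(dimensions)
for dims in bus_matrix.values():
    shared &= dims
print(sorted(shared))  # → ['Date', 'Product']
```

The intersection step is the useful part: dimensions that appear in every process are the ones whose definitions most need to be standardized before the pilot.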
Also, provide sample queries to the ontology so we can train it to retrieve the right information when a question is asked in natural language. So the better you can define your problem statement, the better the results. Sorry, Brian, go ahead. Brian Haydin 36:54 No, it’s all good. So week one is about getting started and picking the right slices. This is a generic group, so I’m not exactly sure what everybody’s business does, but here’s something that fits almost everybody: sales. Suneer Mehmood 37:19 Yeah, exactly. Brian Haydin 37:20 Orders, inventory — these are some of the areas I would start with. For a group this broad, I’d say pick somewhere between four and six entities. Pick customer, pick order, pick product; add inventory if you have it — maybe you’re a SaaS product and you don’t — whatever fits. Then map those relationships, and actually do the steps that bind them to the data. If you have a semantic model, connect it to that; or create one — just a small one, don’t spend a lot of time on it — and get that foundation set up. Week two is the activation. Let’s get a Fabric data agent up and running and turn the dials up. Feed it your sources, keep it small — those four or five entities we picked — and then add some instructions and a couple of example questions to help guide the agent’s behavior. That’s a really important step for assessing quality as you build the data agent. Then run those questions through and validate the answers.
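Until platform evals exist, validating the pilot questions can be done with a very small manual harness. In the sketch below, `ask_agent` is a stub standing in for however you actually call your data agent, and the questions and answers are made up:

```python
# Minimal manual "eval" harness for the pilot: run each priority question
# through the agent and compare against an answer the business signed off on.
def ask_agent(question: str) -> str:
    """Stub for the real data agent call; returns canned answers here."""
    canned = {
        "What is 90-day retention for Q3?": "74%",
        "Which channel has the highest margin?": "Direct",
    }
    return canned.get(question, "unknown")

# The agreed-upon answers, signed off against the shared definitions.
expected = {
    "What is 90-day retention for Q3?": "74%",
    "Which channel has the highest margin?": "Direct",
}

results = {q: ask_agent(q) == a for q, a in expected.items()}
print(f"{sum(results.values())}/{len(results)} answers matched agreed definitions")
```

Even this much gives the pilot a pass/fail signal per question, which is the success criterion Brian describes: three to five questions answered correctly and consistently.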
We’re not seeing things like evals as part of the platform yet, but I think that’s coming pretty quickly, so we’ll be able to automate some of this. In the meantime, validate those questions: did you get answers that matched the definitions you agreed on at the beginning? And lastly, from the infrastructure side, make sure you’re planning that capacity — because it’s paid, you don’t want to over-allocate and raise eyebrows. The nice thing about Fabric is that it’s not really consumption-based billing; you’re reserving a capacity, so the costs are going to be pretty flat. You’ll just see some performance behaviors if you start to run out of it. All right, so deliverables at the end of two weeks: a small ontology, one semantic model, one data agent. If you can do this in two weeks, it will demonstrate whether this is a value proposition your organization wants to invest in — or whether you’re finding out that your data just isn’t in the right state, or that you can’t get the definitions around that ontology consistent enough to get a consistent answer. Anything else you would add to that two-week pilot? Suneer Mehmood 40:05 Yeah, one thing I would add: if any of us are thinking, “I don’t have a well-defined semantic model, I just have some data in the lakehouse,” or “I haven’t even brought the data into the lakehouse yet” — you can actually create ontologies even without a semantic model. Fabric supports creating ontologies from up to five different sources, and those sources can be a lakehouse or a warehouse — wherever you store the data within OneLake in Fabric — and from there you can actually define. Brian Haydin 40:25 Yeah.
Suneer Mehmood 40:40 What your objects are, and bind that data. One of the reasons we’d recommend a warehouse or a semantic model is that your data is more pruned and more defined, so you can bind the data and have precise customers or products — but otherwise you can also do it this way. Just calling that out. Brian Haydin 41:03 Yeah, that’s true. When we’re building a pilot, it’s good to understand what the capabilities are, so that’s a very good callout — and maybe an approach you’d want to consider. Suneer Mehmood 41:04 Yeah. Brian Haydin 41:18 All right, so let’s bring it home. We’ve talked about a lot of different aspects of Fabric, some of the guardrails we’d lay out for the organization, what IQ is, and how you can get started. We’ve got metrics, definitions, guardrails; we walked through the tool landscape and why tool consolidation helps. So what can you do now? I see three options. One: if you’re not really sure where you stand, start with an assessment. We do a free data strategy assessment workshop where Suneer and I sit down and have a conversation with you — a couple of hours, half a day, something like that. Think of it as a trail survey before you start building. Two: if you’re ready to get your hands dirty, we can run the pilot for you, or help you scope the right business slice, get the infrastructure set up, and connect the pieces — so that in two weeks you get real data and real answers. And three: if your biggest need is people and process, we can help you build out governance and an intelligent workforce — we can teach data literacy, run responsible AI workshops, and even help you build some of this.
So if you’re looking for more information, Amy put a link in the chat today, and we’d love your feedback. Did you like what you heard today? Is there anything you’d like to hear more about? Definitely give us that feedback — and in that survey there’s also an option to request a follow-up from Suneer and myself. That being said, if there are any questions, we can stick around for a minute or two. But most importantly, I want to say thanks for sticking around, and I hope you learned a little today.