In this episode of AtScale’s Data-Driven Podcast, host Dave Mariani, Co-Founder & CTO at AtScale, is joined by Petar Staykov, Product Director, and Daniel Gray, VP of Solutions Engineering, for a deep dive into composable analytics—the architectural shift that’s transforming how enterprises model, govern, and scale data in the era of AI agents.
Key Takeaway
Composable analytics isn’t just a design pattern—it’s the foundation for governed, scalable, and AI-ready data ecosystems. Learn how AtScale’s Universal Semantic Layer and open-source Semantic Modeling Language (SML) are redefining interoperability across BI, AI, and agent frameworks.
Meet our Guest
Petar Staykov
Product Director at AtScale
Petar Staykov is Product Director at AtScale, where he leads the development of next-generation analytics platforms and semantic modeling architectures. With a background in BI and data engineering, Petar has steered teams across product design and delivery, helping enterprises leverage composable data products for scalable insights. He is passionate about open standards and adaptive data systems, and drives AtScale’s mission to build AI-ready semantic layers.
Daniel Gray
VP, Solutions Engineering at AtScale
Daniel Gray brings rich experience in technical solutions engineering as well as software engineering to his work with global enterprise organizations. Prior to joining AtScale to lead the Solutions Engineering team, Daniel spent many years in the analytics space including Hewlett-Packard’s Advanced Technology Center, Vertica, and Domino Data Lab. When he’s not in the office or onsite with customers, you’ll find Daniel running, climbing, hiking, and biking – basically anything outdoors.
Transcript
Dave Mariani: Hi everyone and welcome to another edition of AtScale’s Data Driven Podcast. Today we got some inside baseball. So I got some really important people that are responsible for product delivery and customer delivery that are joining me today. So I want to introduce Daniel Gray who runs our customer operations and sales. So Daniel, thanks for joining.
Daniel Gray: Hey, thanks. Really looking forward to the conversation. Thanks, Dave.
Dave Mariani: And Petar Staykov, who is in product, responsible for lots of things in product, including our user experiences. So Petar, looking forward to chatting today.
Petar Staykov: Thanks for having me Dave.
Dave Mariani: Okay, so for today’s topic, we want to talk about composability, and composability in analytics. You know, Gartner has been talking a whole lot lately about how important composable analytics is. So we want to define what that means. GigaOm just came out with their 2025 Radar report for semantic layers and metric stores.
And this was also a really big theme for GigaOm: how can we decentralize the creation of analytics products? So it’s not all central IT creating analytics for everybody to use; you allow that decentralization, but decentralization with consistency and governance.
So let’s just dive into the topic. When it comes to composability, Petar, what does it mean to you when people start talking about that term composable analytics? What’s the benefit to customers and analysts and users out there?
Petar Staykov: Before we talk about the benefit, Dave, let’s look at the problem. I call it the Power BI, Tableau, Qlik problem. They bound modeling to visualization, which in my view is the wrong thing to do. Models should always be kept close to the business processes. Models are really digital twins of the business processes.
Dave Mariani: Mm-hmm.
Petar Staykov: Models should not care what kind of business questions will be asked on top of them or what kind of visualizations they will be used in. Models should care about describing the business process. And when you compose those models (the modern term is data mesh), that’s when you satisfy a certain end-to-end business process: an end-to-end visualization, an end-to-end report.
This is where we start answering the business questions, and the model should be designed in a way that answers all kinds of business questions without being tied so closely to the visualization. This is what I think about composability. This is how I think about modeling.
Dave Mariani: So, you know, when we think about that, I think we’re used to building dashboards first and the like, and we’re building dashboards that try to be everything to everyone. And that sort of translates when we look at building universal semantic models: we also tend to think, let’s build one model to serve everything.
And what we found, right, and you and I found this together with some of our customers, is that these big monolithic models are really hard to maintain and to change without breaking a bunch of things. So, taking some learnings from object-oriented programming, instead of building one big monolithic model, we can build little mini models and then put those together to create composite models, different views around, like you’re saying, a business process. So give me a good example of that so the listeners can understand. What does a composite model look like compared to one big monolithic model, Petar?
Petar Staykov: I’ll not only give you an example of one composite model, I’ll give you an example of a couple of them, because this is how you can really understand it. I’ll use the famous sales model used in all the examples, but bring it into different business scenarios. One scenario would be forecast accuracy reporting. Another scenario would be out-of-stock reporting. A third scenario could be revenue growth management reporting. A fourth scenario could be promo ROI. All of these scenarios include the sales model plus something else. Forecast accuracy will mesh the sales model together with the forecast model in order to see how well our planning department is planning our sales.
Dave Mariani: Mm-hmm.
Petar Staykov: The promo ROI model will mesh the sales model together with the promotion model in order to see how well we plan our promotions and how profitable they are. The out-of-stock model will mesh that very same sales model with our inventory model to answer the question: when will our business be out of stock? So you saw how we reused the sales model three times, plus something else.
You correctly mentioned it always starts with a simple model answering a simple question. But if you keep it monolithic, soon enough it will become gigantic, and only the author of that model can support it. What happens if the author leaves the company? Now we are getting rid of the model, or selecting another tool that will answer the questions.
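To make the Lego-block idea concrete, here is a minimal, hypothetical Python sketch of composition (not AtScale’s actual SML syntax). The model names, dimensions, and metrics are illustrative; the point is that the shared sales model is defined once and every composite reuses it instead of copying it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SemanticModel:
    """A reusable semantic model: dimensions plus metrics for one business process."""
    name: str
    dimensions: frozenset[str]
    metrics: frozenset[str]

def compose(name: str, *models: SemanticModel) -> SemanticModel:
    """Mesh several models into a composite view without copying their definitions."""
    dims: set[str] = set()
    mets: set[str] = set()
    for m in models:
        dims |= m.dimensions
        mets |= m.metrics
    return SemanticModel(name, frozenset(dims), frozenset(mets))

# The shared "sales" model is defined once and reused everywhere (illustrative fields).
sales = SemanticModel("sales", frozenset({"date", "product", "store"}), frozenset({"units_sold", "revenue"}))
forecast = SemanticModel("forecast", frozenset({"date", "product"}), frozenset({"forecast_units"}))
inventory = SemanticModel("inventory", frozenset({"date", "product", "store"}), frozenset({"on_hand_units"}))
promotions = SemanticModel("promotions", frozenset({"date", "product", "promo"}), frozenset({"promo_spend"}))

# Each composite meshes the same sales model with something else.
forecast_accuracy = compose("forecast_accuracy", sales, forecast)
out_of_stock = compose("out_of_stock", sales, inventory)
promo_roi = compose("promo_roi", sales, promotions)
```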
Dave Mariani: And so also, I think what you’re saying here is that there are different teams working on different models who have different expertise. So a team that’s responsible for inventory doesn’t have to understand how sales are calculated, or vice versa, right? So your business units get to bring their knowledge to bear, take that knowledge, put it into a semantic model, and then allow that semantic model, as an object, to be used for other business use cases without those teams having to understand what inventory really means, for example. Did I get that right?
Petar Staykov: Absolutely. Actually, let’s talk about personas. I’m a product guy. With the composite model, we split the personas across different user journeys. We can have the business units model; they know their business, and they have tech-oriented people who can build the digital twins describing the business process. But then we have the business analyst persona, who can easily mesh those Lego building blocks in order to satisfy the end-to-end use cases I just described. So now we’ve introduced another persona and lowered the bar of technical knowledge needed to achieve that end-to-end mesh.
Dave Mariani: Mm-hmm.
Dave Mariani: Okay, so we’ve talked a lot about product because we’re product guys; really, we build products for customers. So Daniel, you’re out in the field, you see how customers are really using composable analytics. So can you talk a little bit about a real-world example where you’ve seen a customer use composable analytics to solve a business problem?
Daniel Gray: Yeah, thanks Dave. You guys covered a lot of the things that our customers are saying and seeing, right? Our customers come from a traditional logical modeling background where you build big monolithic models, be it in something like an SSAS cube or a cubing technology, or in the BI reporting tools that Petar talked about earlier. So right away, the concept of data mesh has been out there for a while, the concept of hub and spoke, of data products; it’s where people want to go. They struggle getting there without composability, and composability also enables reuse and sharing.
That’s a huge thing, because when you build big monolithic models, you’re not reusing or sharing anything, really. You basically have to copy the model if you want to do something differently. So right away they see there’s a path to actually building data products. That means going faster. You guys talked about it earlier, Lego blocks: maybe the foundational stuff being built by IT and then the business being able to quickly spin up their own models. And there’s always a reason why people build monolithic models:
Dave Mariani: Mm-hmm. Yep.
Daniel Gray: Suddenly, like Petar said, you want to be able to combine sales data with inventory data, and in order to do that, you had to keep building a bigger and bigger model. So people realize that right away. And what that means as far as efficiency (and I’ll get to a real-world use case) is you start thinking about a flywheel effect, because you can quickly build data products now, because you’re reusing things. So in AtScale, you can reuse every hierarchical concept, every calculation. You can put those things into libraries. You can do auto modeling. You can quickly build new data products, and now you can compose them. So people see right away that when you go faster, you don’t get the semantic sprawl we see today. If you look in Tableau, you see hyper files all over the place. You see Excel extracts all over the place. In Power BI, you’ll have a hundred different datasets. Those problems start to go away and you can just go a lot faster. You get rid of a lot of duplication of metrics. So composability, to me, really enables all those core concepts.
Dave Mariani: Yeah, and you need a platform to actually support that, don’t you? Because I think a lot of what’s prevented people from really doing a hub and spoke, where you have a central team that can help enable analytics but your spokes are the business, is that it’s really hard for the business to do that if you don’t have a common language. So can we talk a little bit, Daniel, about what you’ve seen with customers: how do they manage that in a hub and spoke? What’s the common language they use to talk with each other in that workflow? How do you decentralize that at scale, per se?
Daniel Gray: Yeah. So what we typically saw in the past was that the business would come up with a set of requirements (Petar mentioned some of those use cases), and then they would funnel those requirements down to an IT group. You know, maybe months later, a model would eventually come back for them to work with, and by then, everybody knows, you’ve moved on to the next thing. Or you create shadow IT and try to solve the problem yourself. So the common language here, the commonality, is a semantic layer. Everybody’s using the same semantic layer. It’s built on open concepts, like, for us, a semantic modeling language. But really, the way these groups can work closely together is that they can now share and reuse. To give you some examples: imagine for a second you’re building models. If you think about it, you probably have a common product, a common time, a common geography, and so on. Those can all be built and managed by an IT group, which understands how that data is structured and can take care of it, and then exposed through libraries to the business groups. And as Petar said, the business group simply needs to know their data, and they can then quickly build those models together. So one real-world example, without giving a customer name: we have a retailer that’s currently building composable models. One is a FinOps model, and they have many other models, but one particular model they’re combining it with is supply chain. They’re combining supply chain and FinOps so that they can do analysis across cost, and they can do actual predictions and things like that.
Before, they would have had to build one huge model, all the teams would have been tied together, or there would have been maybe a COE where everything got funneled to. Here, these teams are working independently. Like you said earlier, Dave, we had one group building FinOps, we had another group building the supply chain model, and then they were able to come in and compose those, because there’s always going to be a requirement to do that cross-analysis.
Dave Mariani: Yeah, and I love the fact that that decentralization really does make the organization move faster, because now they’re already using pre-built objects, so they can get their models built faster and more consistently. And they’re also free to do that on their own, versus handing business requirements to somebody who doesn’t understand the business. So that’s really, really key. So Petar, as a product manager: this sounds really easy, but it’s actually pretty hard to do from a platform perspective. What are some of the key technologies and enablers that allow this to actually work? What do you need?
Petar Staykov: Yeah, Dave, it’s pretty challenging. I think SML really unlocked all these capabilities. Our semantic modeling language is object-oriented, which by nature allows composability and sharing of objects, because it’s built in an object-oriented way, and then building a whole platform around it is really the key. I really like what Daniel said about the two fighting groups, if you wish: the business and the central IT. The business says to central IT, you are too slow. Central IT says to the business, you are not organized enough, you’re causing the metrics problem. And, you know, as with everything in the world, the truth is somewhere in between, and building a platform where these two groups can feel comfortable and have, as you started the podcast with, just the right amount of control and the right amount of freedom, I think that’s the key.
Dave Mariani: Yeah, it’s like the pendulum has always been swinging in both directions. We started out with IT owning everything and the business in the position of having to ask for stuff. And then we swung back the other direction in the last decade with the popularity of tools like Tableau, where the business just ran wild. Neither one of those models works, for obvious reasons: IT is a bottleneck, or the business is running uncontrolled and nobody trusts the data. So if you can do this right in a hub and spoke, or data mesh, whatever term you want to use (Gartner calls it a franchise model, which I think is kind of interesting too), then you can decentralize while still having governance in place. Okay, so we’re talking about composability. That’s pretty specific, but there’s been a lot of chatter about semantic layers in general lately. I mean, we’ve all been talking about it in this company for the past 12 years; we’ve been at it for a while. But really, it’s only in the past 12 to 18 months that everybody’s talking about it or trying to build one as a software vendor. So Daniel, why do you think semantic layers are such a hot topic these days?
Daniel Gray: Yeah, it’s a great question. I mean, we know that semantic layers have been around for a long time, but as Petar said before, they were really tied to a single tool, so you were stuck with one single stack. And then obviously, you know, Dave, the genesis of the company was having a universal semantic layer: we’re allowing you to bring the tool of your choice to work in, and that really enabled the next journey of self-service. It brought on things like Tableau and the like; you could use the tools that you want, but you have the governance of a single semantic model. Well, as you think about the future, you start thinking about an agentic strategy, so generative, or even prescriptive, actually doing something, and you must be operating on the truth. Imagine for a second when Tableau first came out and you didn’t have something like an AtScale: a business group created a new analytic, a new report, and it was the wrong answer. How much time was spent figuring out who had the wrong answer versus moving on to the next thing and differentiating your company with the next analytic? If you take that same journey with conversational BI and generative, you’re going to be stuck trying to figure out who’s got the wrong answer, who did the wrong prompt engineering. All those problems go away when you use a universal semantic layer, a single source of truth for your calculations, for your metrics, for your complexity. And that’s why, if you start to look, everybody is really starting to think about that next generation of self-service. But in order to do that, you have to have a semantic layer. And I think that’s why you’re really seeing the popularity, in the Gartner and the other reports, and, frankly, a lot of people trying to build their own.
Dave Mariani: Yeah, and AI has definitely changed the game. I mean, I think since ChatGPT came out, people started to say, hey, natural language query is actually a possibility here. We can actually start to talk to our data for the first time. And you can’t talk to your data without a semantic layer. I mean, it will work operationally, it will work technically, but it won’t give you the answers you want consistently without a semantic layer. But you know, Petar, it’s not just about asking questions of the model, is it? Because you still have to build the model. So can you talk to us a little bit about what you’re planning in terms of using AI not just to question and interrogate a semantic layer, but to actually build a semantic layer? What are you thinking about there?
Petar Staykov: Like the answer to the previous question, the foundation is SML. When you have modeling as code, when you have modeling backed by code, then the natural next thing is to let the LLM do the job of building the semantic layer, of building semantic models. And just to reiterate on the previous question, Dave: when you’re browsing social networks, you can see a lot of screenshots of how ChatGPT is failing to do the job. You rarely see how it did the job correctly. And it’s the same thing with asking business questions. In a company, you know, any wrong answer would go viral, but not the correct answer, which is ensured by the semantic models behind the algorithm.
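To picture what that could look like when models are code, here is a hypothetical Python sketch that asks an LLM (via the OpenAI SDK) to draft a first-pass model definition from a table schema. The table, prompt, and workflow are illustrative assumptions, not AtScale’s actual tooling, and a human still reviews the draft before it is committed.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment

client = OpenAI()

# Hypothetical warehouse table we want turned into a first-draft semantic model.
table_schema = """
table: fact_sales
columns: order_date (date), product_id (int), store_id (int), units_sold (int), revenue (decimal)
"""

prompt = (
    "You are a semantic modeling assistant. Given this warehouse table, draft a YAML "
    "semantic model with dimensions, a time hierarchy, and metrics. Return YAML only.\n"
    + table_schema
)

response = client.chat.completions.create(
    model="gpt-4o",  # the model choice is illustrative
    messages=[{"role": "user", "content": prompt}],
)

draft_model_yaml = response.choices[0].message.content
print(draft_model_yaml)  # because models are code, this draft goes through review before it ships
```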
Dave Mariani: Yeah, totally. So this is a question for both of you, actually. I think we kind of expected a repeat of the past, where you saw the birth of all these different BI tools and the like, and again, they embedded the semantic layer, so they had their own semantic models. And that created havoc for me at Yahoo, which is why I decided to start the company with my fellow Yahoos. Where do you think things are going to go on the agentic side? Because you have our partners like Databricks with Genie, you’ve got Snowflake with Cortex Analyst, and those are really specific chatbots. So Daniel, to you, because you’ve been out seeing customers who are starting to really get into deploying these technologies: what are they looking at? What are some of the strategies you see your customers working through? And have you seen anything that’s been a successful pattern so far? I know it’s early, but what are you seeing out there?
Daniel Gray: Yeah. I mean, what we’re hearing from prospects is a lot of not being successful yet, Dave, to be really frank. I talked to a prospect the other day where they were trying to build a chatbot, and hey, these things have actually been out for a couple of years, and chatbots, LLMs, it’s all been commoditized, right? So that’s not rocket science, but getting the right answer back is super important. They actually put this chatbot in front of their executive team and it produced the wrong answer. So it looked pretty bad on them; it was just a huge step back. So that’s the first thing: people are struggling. It’s one thing to have a really simple model that has a few joins and a simple calculation, but what happens when you tell an LLM, hey, I need a year-over-year time series calculation on a moving average, with a slowly changing dimension and a currency conversion? That’s a much more difficult answer, and those are the real-world questions. That’s the first thing I’d say. And I’m over prompt engineering; that’s only a thing when you need to try to tell the LLM how to get the right answer. If you have a great semantic layer, you move beyond that. And then the final thing is, it’s not about the question that you know. What I’m seeing is that it’s the question that you don’t know; that’s where the true power comes in. When you have a great semantic layer, you can just say, hey, you know what? Tell me something I don’t know. What’s interesting about this particular model? You shouldn’t have to prompt, you shouldn’t have to engineer those. Let the LLM actually do its job, but you can really only do that if you have a great semantic layer that’s going to ensure you get the right answers.
Dave Mariani: You know, just to that point, Daniel: we’ve been working with AdventureWorks, Microsoft’s sort of standard dataset, for the past 20 years. And when I paired AtScale’s semantic model on AdventureWorks with Claude, there were actually some incredible insights embedded in there that no one ever knew about until we said, show me some insights on AdventureWorks. And we could only do that with the semantic layer; it’s pretty amazing what it came up with. So for 20 years it’s been locked away, because maybe a few people were lucky enough to point and click and drag and drop in Tableau and trip upon something. With an LLM, the LLM becomes your superpower: it can power through, not by running a million queries, but by running a few queries and coming up with something that’s actually meaningful to you, your day, and your business. So I’m really excited, just being a user here, about how analytics will really change for people. Petar, what do you think about that? Do you think we’re going to see a bunch of new AI agents and specific applications? Do you think people are going to roll their own? Do you think the future of business intelligence through BI tools is going to change? What’s your prediction about what’s going to change, given all that we have in front of us?
Petar Staykov: I think there is a place under the sun for everyone, Dave. And you started this company with the word universal. It’s the same with the LLMs. Somebody wants to use Claude, somebody wants to use ChatGPT, somebody wants to use Cortex. So here the key is to deliver the universal semantic layer to those LLMs to do the job. We are betting on the MCP server; that’s the way to be universal with those guys. And this is just another way to answer a business question. I don’t think traditional BI is going away. There are still monthly reports. There are still reports the big enterprises are running the business on. Really, the LLM should be seen as the part that delivers the next value on top of the report. As Daniel said, tell me something that I don’t know. And even this is not new. Data mining technologies have been there in the data warehouses and in all the BI tools.
Dave Mariani: Yep. But insanely, insanely difficult to use; I’ve never been able to run one. As opposed to me saying, show me insights on sales. I mean, I can do that, and the LLM can do the rest. Me running a regression model? Forget about it, ain’t gonna happen. So, yeah, it’s a game changer, Petar, for sure.
Petar Staykov: Exactly.
Daniel Gray: Yeah. And I would add to some of what Petar’s saying: you don’t want vendor lock-in. We’ve been down that road before. The other thing is, when it comes to AI, you can’t have a black box. You’ve got to know why something answered a question. You have to. And if you’re in a regulated space, it’s not a nice-to-have, it’s a must-have. So when we work with our regulated customers, they’re bringing their own LLM, right? They’re certifying it, they’re regulating it. So the concept of supporting MCP allows you to do that. There’s going to be a proliferation of agents, and the thing we’re focused on right now is generative AI, which is great, but what’s next is prescriptive. You know, if I’ve got that agent we talked about earlier that’s combining FinOps and inventory, it should be able to place an order for me. I shouldn’t need someone to look at the analysis and go pick up the phone or make an order. Prescriptive is where it’s going, but again, you have to have a single source of truth to be successful there.
Dave Mariani: Yeah, I love what you guys have highlighted here, because we’ve put all of our money into this whole concept of universal, and until really the past couple of years it’s been for BI. And universal means, like you said, that you’re not going to have vendor lock-in. So what I hear you guys saying is that it’s the same thing for AI and AI agents: you don’t want to lock yourself into one single ecosystem. You want to be able to invest in a partner. And we’ve chosen to do that through our semantic layer with an MCP interface. MCP is the Model Context Protocol; for those who don’t know it, go look it up. It’s super important. It’s like JDBC for LLMs: it’s the way you communicate with LLMs and give them the data that they need. And the semantic layer is a perfect use case for MCP. So staying open is really the key that I’m hearing, and that’s more important than ever. Don’t shift left with your semantic layer, don’t shift right with your semantic layer. Semantic layers need to be standalone, and that gives you the most flexibility to prepare for the future, which nobody knows at this point. So Daniel, why don’t we finish off with you in the customer seat: what’s some advice you can give to a customer who’s looking to make this journey? And not just a semantic layer journey, because everybody in the C-suite is requiring their teams to deploy agentic AI in some form or fashion. How would you advise them in terms of how to get started and what to do first?
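Since MCP came up as the “JDBC for LLMs,” here is a minimal, hypothetical sketch of that idea using the open-source Python MCP SDK: a tiny server that exposes semantic-layer lookups as tools an LLM agent can discover and call. The tool names, metrics, and stubbed query logic are illustrative assumptions, not AtScale’s actual MCP interface.

```python
from mcp.server.fastmcp import FastMCP  # open-source Python MCP SDK

# Hypothetical MCP server exposing a governed semantic layer to any MCP-capable LLM client.
mcp = FastMCP("semantic-layer")

@mcp.tool()
def list_metrics() -> list[str]:
    """Return the governed metric names an agent is allowed to query."""
    return ["revenue", "units_sold", "forecast_accuracy"]  # illustrative; would come from the model catalog

@mcp.tool()
def query_metric(metric: str, group_by: str, filter_expr: str = "") -> list[dict]:
    """Run a governed query against the semantic layer and return rows for the agent."""
    # In a real server this would translate the request into a semantic-layer query
    # and execute it against the warehouse; here it returns a stub row.
    return [{"group": group_by, "metric": metric, "value": 0.0, "filter": filter_expr}]

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio so an LLM client can discover and call them
```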
Daniel Gray: Yeah, I would say don’t reinvent the wheel. Whenever there’s a new, you know, buzzword or new technology, engineers want to build stuff. It’s a lot of fun, you get to learn things. Don’t reinvent the wheel. First of all, a lot of this stuff’s already been commoditized in the AI space, so don’t try to recreate that wheel. The second thing is, you know, it really does come back to a single source of truth in my mind. When I talk to our customers that are already using AtScale, as they’re thinking about conversational BI: if you have a semantic layer in place and it’s governed, you run your business off it, right? You have Excel users doing end-of-quarter financial reporting. You have people in marketing building dashboards to figure out what they need to buy and who to sell to. Why not use that same source of truth to answer conversational BI questions? And so when I talk to prospects, it’s very much the same talk track: use one single semantic layer, like Petar said, to do reporting. Spreadsheets aren’t ever gonna go away. Maybe one day when, you know…
Dave Mariani: I don’t know, not in our lifetime, I’m predicting.
Daniel Gray: Not in our lifetime. So, you know, really don’t try to reinvent the wheel and over-engineer something. Put a great semantic layer in place and you’re going to have a single source of truth across all of those, and you’re going to accelerate conversational BI. You’re going to go really fast there, and you’re going to start thinking less about generative and more about what the next thing is: how do you differentiate yourself from your competitors, versus trying to, you know, engineer and build something from scratch.
Dave Mariani: Yeah, I totally hear you. And you made a good point, Daniel: it’s not just about conversational BI, it’s also prescriptive. In other words, I can pair an agent with a semantic layer, and that agent itself could be headless and could actually go do something based on that data. Maybe send an email to a customer who’s about to churn, with some specific content that can re-engage them. That’s all data driven. And just like Petar’s shirt says, it’s important to be data driven. These days you can do that with the semantic layer and you can do that with GenAI. So with that, I want to thank both Petar and Daniel. You guys have been great, and this has been a great chat. I hope that you as listeners can take something away from this. So all I can say is, stay data driven and stay frosty, because change is afoot. Thanks a lot for listening, and you all have a great day.
Daniel Gray: Thanks everybody. Take care. Bye.