In this episode of MLOps Weekly, host Simba Khadder engages with Eero Laaksonen, CEO of Valohai, to explore the intricacies of end-to-end MLOps platforms. They discuss the challenges and solutions in machine learning operations, delving into topics like proprietary vs. open-source tools, operational efficiency, and the evolving role of large language models in the MLOps domain.
Listen on Spotify
[00:00:00.000] - Simba Khadder
Hey, everyone. Simba Khadder here, and you are listening to the MLOps Weekly podcast. This week, I'm speaking with Eero, who's the CEO of Valohai. In this episode, we'll really dive into what it means to have an end-to-end MLOps platform: how that compares to best-in-class vendors, when you need one, the problems it solves, and proprietary versus open source. It'll be full of great information. Let's dive in.
[00:00:28.270] - Simba Khadder
Eero, it's great to have you on the podcast today.
[00:00:30.430] - Eero Laaksonen
Thanks, Simba. Thanks for having me. Great to be here.
[00:00:32.910] - Simba Khadder
It's cool to have you on. I know we've seen each other a million times at different conferences, but we've never really talked. I'd love to learn more about your journey to starting Valohai. What got you there? What made you decide to start Valohai?
[00:00:44.690] - Eero Laaksonen
That's a big question. For me, personally, the motivation has always been pushing machine learning forward on a global scale. I've been following machine learning and deep learning for quite some time, and I really saw the potential to change the world, whether through automation of work or through making scalable things that were not scalable before. That's how I'd condense the impact of deep learning: we can scale new businesses that were not possible to scale before. That then brings prices down and gives access to a lot of things.
[00:01:13.860] - Eero Laaksonen
For instance, we work a lot with health care. If we can automate diagnostics, we can bring them to a lot of countries at an affordable cost, because we don't have to recruit and train as many doctors to treat as many patients. These kinds of things are, I want to say, almost low-hanging fruit.
[00:01:32.000] - Eero Laaksonen
Obviously, these are difficult problems. There's a lot of regulation. But there are so many things that are somewhat obvious, that we are going to solve with enough elbow grease and time put into these models and putting them into production, and that will make the world a better place. I really wanted to push that market forward across all industries. That's why I wanted to go into tooling in machine learning, and that's why I'm here, why I'm one of the founders of Valohai.
[00:01:57.190] - Simba Khadder
For our listeners who don't know, could you give the general pitch of what Valohai is?
[00:02:02.480] - Eero Laaksonen
We're an MLOps platform for what we call machine learning pioneers. The way we define that is companies that are building either core features of their product, or the entire product and IP, on top of machine learning. This is in contrast to things like internal reporting tools, stuff like that. The reason we think these are very different is that when you're building some one-off reporting for internal use cases, you don't really have to employ as rigorous a software development process as you need when you're really building a product that you're constantly evolving.
[00:02:36.870] - Eero Laaksonen
We've really focused on that part where you're building products that are constantly evolving. You need to scale your teams, scale the compute, scale the processes, bring in more models all the time.
[00:02:46.780] - Simba Khadder
When you say MLOps platform, that means a lot of things to a lot of different people. What does that mean to you? What are the pillars? What are the components that make up Valohai?
[00:02:55.710] - Eero Laaksonen
Great question. For us, it's orchestration: we spin up your machines, deploy Docker images, inject code, and transfer data, all in your environment. Then version control and reproducibility. We all know in machine learning there's often an exploratory workflow, which you then build into production. We do reproducibility and version control both for exploration and for when you productionize those models into automatic retraining pipelines, and then deployment.
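To make that concrete for readers: the ingredients Eero lists, a pinned environment, an exact code version, versioned data, and recorded parameters, are what make a run reproducible. Below is a minimal Python sketch of the kind of record an orchestration layer might keep per run. It is illustrative only, not Valohai's actual API; every name in it is made up.

```python
import hashlib
import json
import subprocess
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class RunRecord:
    """Everything needed to reproduce one training run later."""
    docker_image: str              # pinned execution environment
    command: list                  # exact entrypoint and arguments
    git_commit: str                # exact code version
    input_hashes: dict             # input name -> content hash
    parameters: dict               # hyperparameters used
    started_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def content_hash(path):
    """Version a dataset by its content, not its filename."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def record_run(image, command, data_paths, params):
    record = RunRecord(
        docker_image=image,
        command=command,
        git_commit=subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True
        ).strip(),
        input_hashes={p: content_hash(p) for p in data_paths},
        parameters=params,
    )
    # A real platform would store this in a metadata store, then
    # provision a machine, pull the image, mount the data, and run.
    print(json.dumps(asdict(record), indent=2))
    return record
```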
[00:03:28.470] - Simba Khadder
It's a split, and I know it's imperfect, but some people would call things end-to-end platforms and other things more best-in-class vendors. On that spectrum, imperfect as it is, where do you feel Valohai lives?
[00:03:42.750] - Eero Laaksonen
For me, it's more on the end-to-end side of the spectrum. Of course, there's still stuff that we don't do. We don't have a feature store, for instance, for when you're dealing with unstructured data, stuff like that. Model monitoring is also something where we don't have a huge offering. Even with end-to-end, this is a very large industry, and the use cases are so different that monitoring for machine vision is a completely different piece than it is for structured data. What does truth detection look like for LiDAR scans, for instance? Stuff like this.
[00:04:16.410] - Eero Laaksonen
I'm sure that even with the biggest end-to-end tool, there are going to be best-in-breed solutions that might be somewhat industry-specific. But we try to keep to the area that can be generalized for a vast majority of use cases; that's where we try to play with our product.
[00:04:34.470] - Simba Khadder
One thing that I see, especially around orchestration, is a lot of people coming from in-house tooling, things hacked together on top of Airflow, for example, if even that. When you talk to someone like that, or let's say a listener is thinking, "Hey, yeah, we're learning about MLOps. We just have hacky systems on top of Airflow, and we're thinking about MLOps." What would you say to someone like that? What makes something like Valohai different? What are the key problems you solve that aren't solved by more generic solutions?
[00:05:05.930] - Eero Laaksonen
There are so many things. One of the difficulties, especially with companies that are early and doing this for the very first time, is that a lot of data science teams don't come from a software development background, first of all. They haven't gone through the progression that we in the software development world had, where we went from releasing on CD-ROMs, with no version control, where everything was a pain, to the ability to deploy several times an hour to production in the best cases.
[00:05:35.730] - Eero Laaksonen
First of all, they have a very different perspective from software development. Then, if they're still building their first thing and it's barely in production, there's relatively little you can say. Usually, we engage when teams have a few models in production and they've already hit that wall of, "Hey, we have five new data scientists coming next week. I don't think they can be effective in the next six months because there's just so much overhead to understanding all the in-house-built stuff, and we realize that we're hitting a wall."
[00:06:04.510] - Eero Laaksonen
When we do meet these very early teams, the ones not coming to us with the pain already, where we have to instill the pain in them, we like to talk about things like: you have to really go back to the business. What is the impact of the work you're doing? A lot of teams are not very clear on that, but there's a business goal behind this. It's not research anymore. We're in the real world now. We're trying to move some needle somewhere.
[00:06:32.470] - Eero Laaksonen
Then we need to get that person in the same room with the person who's spending wild amounts of time fixing a pipeline, like two months fixing a pipeline that should take two hours. Once there's an understanding throughout the organization that this work that takes two months today happening in two hours would have a very concrete impact on the business, that's when the lights start going on in people's heads. But if we can't make that connection with the business, it's usually very difficult to get that engagement from teams.
[00:07:04.040] - Simba Khadder
As I'm thinking about that, a lot of what you're talking about makes sense. It's very Ops, as in operations: if you don't have something in production, there's no reason to operationalize. Efficiency gains don't really make sense, and orchestration doesn't really make sense, because you don't have anything yet. It's like adding process before you've even done the thing once.
[00:07:22.080] - Simba Khadder
You've talked a bit about what the final state looks like. I think that's been an open question in MLOps, and I'd love to get your take, what does MLOps look like when it's done well? What are we all aiming for? What does a gold standard workflow look like?
[00:07:37.560] - Eero Laaksonen
For me, there are a few things. I don't think there's one solution that fits all, or, for that matter, one metric that fits all. But I'll give you a few things that I see with the best teams out there. They usually think a lot about engineering velocity: how fast can we deploy new things to the product?
[00:08:00.100] - Eero Laaksonen
When you start to think in those terms, that's usually already a huge mentality shift in the way you work. Then, going from there, how does that reflect on the ways of working? We've noticed that most companies that go into production don't have a single model. They deploy a model for a single customer, but then they realize they very quickly have to train a model for every single factory, for instance, or every single new geography that comes onboard, or every single customer's specific data. Usually, the dimensionality goes through the roof when the business starts running.
[00:08:34.610] - Eero Laaksonen
You have things like a single POC that was built for one customer and took six months. All of a sudden, the sales team brings on 20 new clients, and you'd have to fine-tune the model for each of those 20 new customers in the next month to close the sales, but you're looking at a four-month development period per customer. The goal is getting to a point where that dimensionality becomes trivial.
[00:08:58.420] - Eero Laaksonen
I mean, a new customer comes in, and even the salesperson, somebody who's not involved in the actual training process, is able to press a single button and train or fine-tune a customer-specific model that then becomes the product for that customer. I think that's one of the gold standards that we talk about a lot: completely automated pipelines.
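As a rough sketch of what that "single button" can look like in code: a one-call entry point that snapshots the customer's data, triggers a parameterized fine-tuning pipeline, and gates on a quality metric before returning a deployable model. All the names, URIs, and the 0.90 threshold here are hypothetical stand-ins, with the platform calls stubbed out.

```python
from dataclasses import dataclass

BASE_MODEL = "s3://models/base-v12"   # hypothetical base model artifact


@dataclass
class RunResult:
    metrics: dict
    model_uri: str


def export_customer_data(customer_id: str) -> str:
    """Stub: snapshot the customer's data to a versioned location."""
    return f"s3://datasets/{customer_id}/latest"


def run_finetune_pipeline(base_model: str, dataset: str) -> RunResult:
    """Stub: a real version would trigger the retraining pipeline on
    the orchestration platform and block until it finishes."""
    return RunResult(metrics={"accuracy": 0.93},
                     model_uri=f"{base_model}-ft-{dataset.split('/')[-2]}")


def onboard_customer(customer_id: str) -> str:
    """The 'single button': callable by anyone, no ML expertise needed."""
    dataset = export_customer_data(customer_id)
    run = run_finetune_pipeline(BASE_MODEL, dataset)
    if run.metrics["accuracy"] < 0.90:     # quality gate before shipping
        raise RuntimeError("Below quality bar; escalate to data science.")
    return run.model_uri                   # the customer-specific model


print(onboard_customer("acme-factory-7"))
```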
[00:09:22.230] - Eero Laaksonen
Because the world is never perfect, those pipelines break at some point. It's never going to be perfect. How fast are you able to iterate back to the data scientist, who then fixes the problem, whether it has to do with your data, the model, or the code itself, and push that change into production? I think those two things, the iteration speed and how fast you're able to grow that dimensionality, are a big guiding light for a lot of product companies.
[00:09:50.810] - Simba Khadder
Yeah, it makes a ton of sense. I think it's these components of velocity, reliability, this iteration time. At least for us, we think a lot about collaboration: how can we work together and do things better as a team? On a slightly different note, you all have taken the approach of being proprietary. There are a lot of open source tools and players in different spaces. How do you think about open source versus proprietary? I'd love to get your general thoughts on it, and then we can dive deeper.
[00:10:23.100] - Eero Laaksonen
I'm a big fan of open source, first of all. I think open source is basically running the world. The problem with open source in a lot of industries is: how do you build a business around it? The danger of a fully open source solution is that if you're not able to build a sustainable business, the support will go away at some point. Especially if you're dealing with business-critical systems, enterprises at least will need somebody to offer that premium service.
[00:10:50.760] - Eero Laaksonen
Just looking back at our industry, it's been really difficult for a lot of open source solutions out there. We've lost a lot of good teams and a lot of good products because, quite frankly, they were not able to build a business. That's one of the reasons we never jumped on that bandwagon: we never really saw a direction where you could make a sustainable business in this industry with a fully open source solution. That would just be wrong toward our customers.
[00:11:16.850] - Eero Laaksonen
We'd be able to run with some VC money for three, four years, and we'd get a lot of people on board. And then, at the end, we wouldn't be there to maintain it, and the project would eventually die. I think that's a real danger that a lot of companies are not looking into: what does that sustainability look like in the long run?
[00:11:33.280] - Eero Laaksonen
I think our industry, for various reasons, is a little bit difficult for that model. One of them is that, like I said, a lot of data science teams don't come from an engineering background; they come from a research background. In classic software development, developer tooling has been a field where open source has been a really, really great way to build a business. But I think there's a fundamental difference with a lot of these teams: they are not able to maintain and install software on the same level as software teams. Fundamentally, they just need more help and hand-holding to get this stuff up and running.
[00:12:11.220] - Eero Laaksonen
I think it's also quite a tall order. We have people who are PhDs in mathematics or statistical methods. On top of that, they need to be software developers and engineers, and on top of that, have IT and DevOps capabilities. That's quite a lot to ask of people, and that's what you need, quite frankly, to deploy the most complicated orchestration tools out there.
[00:12:35.300] - Eero Laaksonen
There's a lot of friction in the open source field for MLOps products specifically. That's at least my opinion. I think that's also one of the reasons why we are, if not the only, then one of the very few companies that have been able to reach breakeven and build a sustainable business in this particular field.
[00:12:52.790] - Simba Khadder
Yeah, I'm sure that's true. I think it's quite rare in MLOps to see companies that are actually profitable, or at least have some level of business that makes sense, especially given all the hype that surrounds it. I think now we're seeing a lot more companies figure out how to do that the hard way as things change. But it is a huge accomplishment that you've been able to do that.
[00:13:15.450] - Simba Khadder
We talked a lot about longevity, and I think it's a huge consideration for any buyer, anyone thinking about bringing on a tool. One thing everyone should look at, in my opinion, is: if you bring this thing on, you're investing for the long term. You're like an investment company in a funny way. It's not just, "Will this solve a problem for me," but, especially if we're talking about bigger deal sizes or critical infrastructure: do you believe in this company? Do you think it makes sense?
[00:13:40.080] - Simba Khadder
Do you think it'll be around in five years, or whatever your time frame is? I mean, if you're a Fortune 100, that could be 10 years. Is there a storyline there? We've talked about longevity on the vendor side, making sure the vendor stays around. I'm curious about the flip side: when you bring on and onboard a new customer.
[00:14:01.230] - Simba Khadder
I'm thinking of this from the perspective of someone who's starting to onboard MLOps. What are the steps you take? I know there's a million things you do, but if you could in broad strokes describe what it looks like to go from no MLOps or a homegrown solution to something that's a little more standard, stable, and legitimate. How do you help people throughout the process?
[00:14:23.880] - Eero Laaksonen
Yeah. As a company, that's one of the areas we've spent the most time on: figuring out those onboarding processes. The way it works for us, very concretely, is we go through a trial where, at the very beginning, we tie everything to business value. Like I said, if we can't get that business connection, it's usually not going to go very well in the end.
[00:14:45.350] - Eero Laaksonen
Especially in this particular market, if we work with projects that don't have a business impact, those teams are eventually not going to exist. It's also a big investment for us in the beginning, so it just doesn't make sense for anybody. We work a lot with the teams to figure out: why is this actually important? Why are you setting out on this journey? Why are you investing time and effort, and eventually money for licenses, to move faster, essentially? What's going to be that impact?
[00:15:11.300] - Eero Laaksonen
Once we understand what that impact is, we try to quantify it: "Okay, if we're able to get pipelines from zero to this point in this particular time, compared to what it's taking now, what would the impact be?" Then we verify that with the business. Only once we have a good understanding of that do we engage on the technical side. That's where we do the basic stuff like installation. We usually have to work with IT, provide a lot of audit material, and make sure we comply with their security standards, stuff like that.
[00:15:40.750] - Eero Laaksonen
That's in itself already somewhat of a large engagement, of course, depending on the size of the company. We help the data science team through it. Usually, they're not security professionals, so it's really hard for them to drive that conversation. We help them get through IT to get the installation done.
[00:15:57.300] - Eero Laaksonen
Then we go into onboarding, where we usually do a short training in the beginning. We actually take customers through an existing real-world project and help them move it onto Valohai and get it up and running. Usually, within the two-week time frame that we reserve for these trials is when we get the first pipelines up and running, and then we prove those quantified requirements we set in the beginning. We also have 24/7 support available for customers.
[00:16:29.580] - Eero Laaksonen
They usually run into a lot of very simple stuff, even just basic Python errors. We try to make sure we get through all of that relatively quickly, so nobody gets stuck in those first two weeks, and we get to prove that we're able to deliver the value we promised. Then, once we get to the actual engagement, signing contracts, stuff like that, onboarding goes from that one single trial team to helping other teams get onboarded, and it grows and grows inside the organization.
[00:17:00.480] - Simba Khadder
Yeah, it makes a ton of sense. I think it even helps to lay it out that way, because a lot of people forget the little pieces. There are so many teams that have to come into play. IT, we deal with a lot; I'm sure your customers deal with them a lot too, figuring out how to work through that process and how to do it efficiently. In the end, all that matters is business value, and all these things have a reason to exist. Obviously, security exists for very good reasons.
[00:17:24.870] - Simba Khadder
But the ability to get through these processes efficiently, so you can actually show the value, matters just in terms of the time investment. It's something that I think a lot of people don't factor into their timelines, and I think it's really important. Even for us, we try to parallelize that and start those conversations early, because it's going to take a few intro meetings before you can get to the technical evaluation, and you want to line these things up so they all happen at once.
[00:17:53.270] - Eero Laaksonen
Yeah. I think it's a value driver for the companies themselves, too. Because, again, these teams are almost always new. They've never run a procurement process inside the organization, and they're often quite lost on how to even buy software and how to get it installed. So we're able to take some of that burden, and that learning, off their backs: how do we deal with security, how do we comply with the security framework we're working under, and so on.
[00:18:19.620] - Eero Laaksonen
It's a big, big value add for the teams. Quite frankly, it would be a lot to ask of a data scientist that, on top of everything else, now they have to be a security audit expert too.
[00:18:30.330] - Simba Khadder
Yeah, it's a funny skill, the ability to get new initiatives through an organization. It's a skill that vendors have gotten very good at, in a funny way, and it's something salespeople do beyond, obviously, the evaluation: helping you figure out your whole process around it. A lot of what our sales team does is just help people navigate their own organization sometimes.
[00:18:54.360] - Simba Khadder
It's just big companies. You have to go to a lot of different places, and if you forget one, it can cost you a month or two while you rewind. It's a lot of pain that can be avoided if you know what you're doing from the beginning. Obviously, the thing that's on everyone's mind in machine learning and data science is the rise of LLMs. For ML and MLOps teams especially, I think it creates a lot of confusion and uncertainty. I'd love to get your take: how are things changing, and how are ML and MLOps evolving in the face of LLMs?
[00:19:27.590] - Eero Laaksonen
It's a great question. I think, honestly, it's a bit early to say how MLOps itself will evolve in the face of LLMs, just because it's so early for most projects. Where we see LLMs today is mostly through closed APIs. However, in the field that we're in, I talked about these machine learning pioneers building products, we see four fundamental problems for teams, which basically all come down to lack of control. The first is lack of control over your data.
[00:19:59.740] - Eero Laaksonen
If you're dealing with health care, defense, or customer data, it's difficult to send all of your data into a third-party API and not be sure what happens behind it. Second is cost and pricing: assessing what this will cost, especially if usage grows rapidly, has been quite difficult for a lot of the teams we've talked to.
[00:20:21.630] - Eero Laaksonen
The pricing tends to get out of hand really quickly once things start moving, when you hit that exponential growth curve. Third is service, especially if you're dealing with critical products. Dealing with these APIs is like dealing with an API where somebody changes the definition without telling you how it changed. Somebody updates the model, and all of a sudden you have a regression you had no idea about beforehand.
[00:20:48.350] - Eero Laaksonen
Something that worked yesterday is not working today, and you have no idea how to fix it, or whether you can fix it. It's really difficult to build your business on top of an API like that. Lastly, lack of control over IP. If everything in your product is someone else's closed system, it's pretty difficult to see where the real value generation of the company is happening. Due to these limiting factors, I personally am a big fan of open source foundational models. I think that's going to be the way.
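For what it's worth, a common way teams hedge against that silent-change problem is to pin an explicit model version where the provider allows it and keep a small regression suite of prompts with expected answers, so a behind-the-scenes update shows up as a failing check rather than a production incident. A minimal sketch, with the vendor client stubbed out and all names invented:

```python
PINNED_MODEL = "vendor-model-2024-01-15"   # pin a version; never "latest"

REGRESSION_CASES = [
    ("What is 2 + 2?", "4"),
    ("Reply with exactly the word PONG.", "PONG"),
]


def call_llm(model: str, prompt: str) -> str:
    """Placeholder for the vendor API client; swap in the real call.
    Here it returns canned answers to simulate a stable model."""
    canned = {"What is 2 + 2?": "4",
              "Reply with exactly the word PONG.": "PONG"}
    return canned[prompt]


def regression_check() -> bool:
    """Run on a schedule and before every deploy: a failure here means
    the model behind the API changed out from under us."""
    ok = True
    for prompt, expected in REGRESSION_CASES:
        answer = call_llm(PINNED_MODEL, prompt).strip()
        if expected not in answer:
            print(f"REGRESSION: {prompt!r} -> {answer!r}")
            ok = False
    return ok


assert regression_check()
```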
[00:21:19.030] - Eero Laaksonen
I think it's going to be better for everybody in the end if nobody owns the one best LLM out there. As a society, it would be better if we drive towards that goal, and I think that's generally where the market is headed. These closed APIs definitely have a moat in a lot of use cases today, but I think that moat is getting smaller and smaller every day.
[00:21:47.180] - Eero Laaksonen
The models that are coming out are getting exponentially better all the time. Pretty much every week there's a new model, which is a very clear indication that the open source models are getting really good, really fast. I think that's where the MLOps field will gravitate: how do I fine-tune, and how do I manage these open source foundational models? You do a test on the closed APIs, and then you move to open source models to build that product for real.
[00:22:15.780] - Simba Khadder
Yeah, it makes a ton of sense. I totally agree. I think we've already seen it with the GPT APIs: they go down a lot, and their latency is pretty hard to predict. Like you said, the model gets changed underneath you sometimes, which is why I'm of the belief that there's no such thing today as a true enterprise-grade LLM application, only enterprise-grade prototypes.
[00:22:40.550] - Simba Khadder
But I don't think we've really learned how to actually productionize these things yet. The cost component is also something a lot of people ignore. Let's say you're dealing with something like fraud detection, say you're Visa or something, I don't know. Or think of how many recommendations YouTube or Spotify make for a single user in a single day. Now, let's say each prediction costs a penny, just to make the numbers easy. You're spending dollars per user per day.
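Writing that back-of-envelope math out, with the penny figure from the conversation and an assumed per-user request count:

```python
# Back-of-envelope cost of per-prediction API pricing at recommender scale.
# $0.01/prediction is the illustrative figure from the conversation;
# 200 predictions/user/day is an assumed order of magnitude.

cost_per_prediction = 0.01           # dollars, assumed
predictions_per_user_per_day = 200   # rankings, feed refreshes, etc.

daily = cost_per_prediction * predictions_per_user_per_day
print(f"${daily:.2f} per user per day")         # $2.00
print(f"${daily * 30:.2f} per user per month")  # $60.00
```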
[00:23:10.140] - Eero Laaksonen
Yeah. It's probably more today, right?
[00:23:12.550] - Simba Khadder
Yeah, it just doesn't make sense, and then you add the latency on top of it and everything else. You mentioned deep learning. One thing I've always thought is that in the early days of deep learning, there was this implication that traditional machine learning was dead. I think the same thing is going to happen again, where it's like, "Okay, deep learning, sure, it panned out. There's a lot of value created by deep learning."
[00:23:36.330] - Simba Khadder
I mean, you could argue that LLMs are more or less just a continuation of that. But the random forests and the boosted models are not going to go away. In my opinion, they're still going to be the most common models you interact with in five years, even if you don't notice it or don't know. If that's true, I think there'll be a unification of MLOps and what's now LLMOps, or whatever we call it.
[00:23:57.480] - Simba Khadder
The problems you mentioned also remind me of deep learning problems. Like, how do you handle a model that's a black box? The difference is that you don't own the model, but with the open source ones you do, and the silent-change problem goes away when you go open source. An LLM then just looks like a particular type of deep learning model, which is the same problem space we've had, except for prompts, which are a new thing.
[00:24:17.810] - Eero Laaksonen
Yeah. I think a lot of people forget that maybe seven years ago, something like that, the Google Machine Vision API came out, and then there were a ton of machine vision APIs. People were like, "Well, building your own machine vision models doesn't make sense anymore, because the Google Machine Vision API will take care of everything."
[00:24:36.500] - Eero Laaksonen
Nowadays, if somebody said that, they'd be like, "You're crazy. There's no way a single API is going to take care of the complexities and the challenges that we're facing with machine vision." It's an absurd statement. I think that's where these LLMs are headed, too. I do think there's a much larger value capture that these closed source LLMs are going to be able to make than the machine vision APIs did, don't get me wrong.
[00:24:59.970] - Eero Laaksonen
At the same time, I think what can be done is still quite limited compared to what will happen once open source models get there, and people really start building tools and products on top of open source LLMs and fine-tuning them.
[00:25:12.340] - Simba Khadder
Let's say you run an MLOps team at a big organization right now. How should you be thinking? Should you be evolving your platform with this in mind? Do you think it's going to be a whole different platform for LLMs? How would you move forward knowing that, okay, today it's not ready, but we're definitely moving in that direction for potentially a large percentage of your workloads? How would you change your platform, if at all?
[00:25:34.470] - Eero Laaksonen
It depends on what layer. Where we concentrate, on reproducibility, orchestration, and building your pipelines, it looks exactly the same as training or fine-tuning any other model. There's very little change in terms of orchestration for fine-tuning Llama 2 compared to fine-tuning something like [inaudible 00:25:57]. It's virtually the same thing. In one case your dataset is images, and in the other it's natural language.
[00:26:04.610] - Eero Laaksonen
Aside from that, there's very little difference in terms of how we orchestrate it, how we run these operations, and how we deploy the models. We actually have plenty of customers that went overnight into fine-tuning LLMs on top of Valohai without us making any changes to the platform. I don't know if that's going to be true forever, but at least for now, it seems like there's very little difference.
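A small illustration of that point: from the orchestration layer's perspective, a vision training job and an LLM fine-tune are the same shape of work item, differing only in dataset and entrypoint. The structure below is a made-up simplification, not Valohai's actual configuration format.

```python
from dataclasses import dataclass


@dataclass
class TrainingJob:
    """One unit of work for the orchestrator, whatever the model type."""
    image: str          # pinned Docker environment
    command: str        # entrypoint
    dataset_uri: str    # versioned input data
    gpu_type: str


vision_job = TrainingJob(
    image="pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime",
    command="python train.py --task segmentation",
    dataset_uri="s3://data/factory-images-v3",
    gpu_type="A100",
)

llm_job = TrainingJob(
    image="pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime",
    command="python finetune.py --base llama-2-7b",
    dataset_uri="s3://data/support-chats-v1",
    gpu_type="A100",
)

# The scheduler that provisions a machine, pulls the image, mounts the
# data, and records the run treats both jobs identically.
for job in (vision_job, llm_job):
    print(job)
```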
[00:26:27.050] - Eero Laaksonen
Then, when we go into really analyzing the model and deeply understanding the model itself, that's probably where the changes are much bigger, and labeling your data, stuff like that. But when it comes to training models, fine-tuning models, building pipelines, and deploying, it's very similar.
[00:26:45.770] - Simba Khadder
I remember when we released our LLM, more RAG-focused functionality in Featureform, it took one engineer one week, more or less, to get it up, because 90% of it was the same. The only really new thing was embeddings, and even they weren't really new; we just didn't support them at the time. It was a pretty quick change because, in practice, it looks very similar.
[00:27:07.910] - Simba Khadder
A prompt is just a new type of feature. It's making a prediction, and the way you handle that prediction is not that different from handling a prediction for NLP. Obviously, prompts are a little different; in some ways they're very different, but in other ways you can abstract that difference away. I think there's a lot more overlap between them than there are differences.
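As a toy sketch of that abstraction, here a filled-in prompt template is served through the same interface as a numeric feature. This is illustrative only, not Featureform's actual API; all names are invented.

```python
from dataclasses import dataclass


@dataclass
class Feature:
    name: str

    def serve(self, entity: dict):
        raise NotImplementedError


@dataclass
class NumericFeature(Feature):
    column: str

    def serve(self, entity: dict) -> float:
        # In a real system this would come from the feature store.
        return float(entity[self.column])


@dataclass
class PromptFeature(Feature):
    template: str

    def serve(self, entity: dict) -> str:
        # The "feature value" is the prompt filled in per entity.
        return self.template.format(**entity)


avg_spend = NumericFeature("avg_spend_30d", column="avg_spend_30d")
support_prompt = PromptFeature(
    "support_prompt",
    template="Customer {name} has spent ${avg_spend_30d} this month. "
             "Draft a polite renewal reminder.",
)

user = {"name": "Ada", "avg_spend_30d": 42.0}
print(avg_spend.serve(user))
print(support_prompt.serve(user))
```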
[00:27:27.980] - Eero Laaksonen
Yeah, I agree.
[00:27:28.780] - Simba Khadder
Yeah. I think the differences will be handled by unique new products, which I think will be integrated into these platforms. But I don't really view an LLMOps platform and an MLOps platform as two separate platforms so much as parts of one whole. I think that's the future and where things are going. It just makes sense, because you're going to want to use the same data and the same monitoring for these different types of models.
[00:27:51.010] - Simba Khadder
You're not going to want to have a whole monitoring stack for your chatbot and then a whole other monitoring stack for your fraud detection. You want to unify these things. So, thinking about Valohai and about MLOps, looking out at that great unknown, what are you most excited about?
[00:28:11.160] - Eero Laaksonen
It has to be on the LLM side still. Just the progress that the open source models are making, and how little we've done, how few of the low-hanging fruits out there we've picked. And it's not only LLMs; it's generative models, multimodality. Those things are so close, but at the same time, we haven't even scratched the surface. I'm super excited. One of the reasons I'm so excited comes back to something I've talked a lot about: the business side.
[00:28:40.440] - Eero Laaksonen
In the past, we really didn't see the business engage or be the primary motor driving these projects. Now, that's completely changing. Every product owner, every CIO, every CTO, every CEO has used ChatGPT. They have their own ideas on how to deploy this for their particular products, and they are driving the change, which means that all of a sudden, business goals are a given and budgets are a given.
[00:29:07.820] - Eero Laaksonen
The actual drive to go through with this is a given, instead of us trying to teach these organizations, "Hey, you need to really think about how this impacts the bottom line in the end." I think this will really make organizations move faster, because the number of stakeholders talking about these projects is so much larger now.
[00:29:28.150] - Simba Khadder
Yeah, I agree. For me, that's been the coolest thing too: this unexpected tailwind, not just for LLMs but for machine learning as a whole. End users now expect applications to get smarter in every way. They don't really care if it's an LLM or not. For chatbots, sure, maybe it's going to be much more LLM-focused, but they expect it for everything, and LLMs aren't exactly the answer for everything. So it's this renewed extra push, this third wave of data that we're about to ride, which is really exciting.
[00:30:02.840] - Simba Khadder
Eero, it's been really awesome hearing your insights on all of this. Thanks for coming on. Always really, really good to catch up with you. Thank you.
[00:30:09.360] - Eero Laaksonen
Thanks for having me, Simba. Happy holidays.
[00:30:12.000] - Simba Khadder
Yeah, you too.