In this episode of the "MLOps Weekly" podcast, host Simba Khadder talks with Paul Iusztin, a Senior ML and MLOps Engineer at Decoding ML, about his journey from software engineering to MLOps. They discuss the integration of software engineering principles in ML, the challenges of writing tests for ML applications, and the key differences between software and ML engineering. Paul shares insights on building scalable and reproducible MLOps platforms, emphasizing the importance of decoupling feature, training, and inference pipelines. They also explore the convergence of MLOps and LLMOps, highlighting the unique aspects of prompt engineering. The conversation underscores the importance of robust engineering practices and continuous adaptation in the rapidly evolving AI landscape.
Listen on Spotify
[00:00:05.19] - Simba Khadder
Hey, everyone. Simba Khadder here, and you are listening to the MLOps Weekly Podcast. Today, I'm speaking with Paul, who's a Senior ML and MLOps Engineer at Metaphysic, a leading GenAI platform. He's one of their core engineers and is taking a lot of their deep learning products into production. He has seven years of experience building GenAI, computer vision, and MLOps solutions at companies like Core AI, Everseen, and Continental. He's also the founder of Decoding ML, a channel with battle-tested content on how to design, code, and deploy production-grade machine learning. He's a really big part of the MLOps community and creates a ton of really awesome content. Someone you should definitely follow on LinkedIn. Let's jump in.
[00:00:47.07] - Simba Khadder
Hey, Paul. Thanks for joining me today.
[00:00:49.15] - Paul Iusztin
Hello, Simba. Excited to be here.
[00:00:52.26] - Simba Khadder
Let's dive right in. I'd love first maybe for the audience who doesn't know who you are and doesn't follow your amazing LinkedIn content, would love to get an introduction on how you got into MLOps.
[00:01:07.11] - Paul Iusztin
It's actually not a straightforward question. My first interaction with the technical world, let's say, it started with software engineering and Python. Initially, at least from my point of view and from the people I had around, there was no difference at all between machine learning research, machine learning engineering, and MLOps, and so on and so forth. This was like six years ago. At the time, these terms weren't so well-defined.
[00:01:55.12] - Paul Iusztin
Initially, I basically took the first job that had AI and machine learning in the title, because I was mostly at the junior level in the field and I just wanted to learn. I didn't really know what AI was about, so I just took it. It turned out to be a deep learning research position where we researched autonomous driving algorithms: 3D object detection using images and LiDAR sensors, tracking, and that kind of stuff. It was mostly about reading papers, trying different algorithms, and beating metrics, and that stuff.
[00:02:47.01] - Paul Iusztin
I realized, "Hey, this is not for me. I don't like it. I like to build stuff." But I also like AI a lot. I just started digging more into the field until I saw, "Okay, there's actually a field where you don't need to build the algorithms yourself. You just take them and build the ecosystem around them," which is actually what most of the people out there do. I changed my position to more of an ML engineering role, still in the computer vision space, where unfortunately it was about engineering, but not that much about MLOps.
[00:03:29.19] - Paul Iusztin
There I realized that people didn't actually know what MLOps is. They just knew, "Hey, I want to use all kinds of cool tools like MLflow, and Triton, and that stuff," but they weren't really focused on MLOps principles. Everything was pretty bad. Ultimately, I moved to the next company, which was very ML focused, and they actually wanted to sell this MLOps vibe. There I had the chance to learn a lot about this field, because they wanted to create a repo around well-known MLOps tools so we could easily interact with the whole ecosystem.
[00:04:27.19] - Paul Iusztin
We had an experiment tracker, a model registry, a data versioning system, monitoring, and all of this. That was the aha moment where I really understood what everything is about. I would say that the journey was long, about four years, until I really cracked what this is all about.
[00:04:49.29] - Simba Khadder
One thing that you've done which is unique is that you have been both a software engineer and a researcher prior to becoming an ML engineer. In my experience, people tend to come from one side or the other. Maybe data scientists turned MLE, or you sometimes get software engineers turned MLE. But you're rare in that you've jumped on both sides and then ended up in the middle. What skills, maybe first from software engineering and then from ML research, do you think really mapped well, and what skills did not? What parts of being a software engineer are completely different from being an ML engineer, and what things are really valuable?
[00:05:35.13] - Paul Iusztin
Basically, the question is software engineer versus ML engineer, right?
[00:05:41.13] - Simba Khadder
Yeah, let's start there, and then I'm going to ask you the same question of research.
[00:05:45.12] - Paul Iusztin
Okay, awesome. From what I've seen, ML engineers still lack a lot of good software engineering principles like object-oriented principles, good design and architecture, and basically writing good, clean code. Usually ML engineers, from what I've seen, know just Python, and they follow this very "Pythonic" way, which is not really Pythonic, where you don't care about this stuff. You just put function in function in function and hope that everything works fine. Usually, it's a big mess. I've seen many code bases that are just spaghetti code, and I don't like that. I usually lean very much towards clean code. I don't want to generalize here, I've just seen this pattern many times. There are also other cases where this is not true.
[00:06:54.10] - Paul Iusztin
On the other side of the spectrum, I guess what's good about ML engineering, and not in the software world, is that it completely shifts your mind towards a data-centric point of view. You don't care that much about APIs, data layers, the classic database layer, application layer, UI layer. Your main [inaudible 00:07:21] is usually the data and the model, and you basically need to completely shift how you think about the whole application.
[00:07:30.00] - Paul Iusztin
I've seen that for many software engineers it's hard to grasp, for some reason. But I think it makes sense, because even in normal software engineering applications, in the end, it's all about the data. Even if you don't have AI, you still have to manage a lot of data and track it in some way.
[00:08:00.07] - Simba Khadder
The key difference is you're working, one, with things that are not deterministic. That's always an issue I feel people run into, at least coming from software engineering. Distributed systems have similar aspects, which is why, for me, I really loved that switch. But if you're used to just following the code and seeing what happens, it doesn't really work. When you're dealing with a model, you have to observe its behavior, observe the data, and then try to deduce what's happening. That goes into what you're saying, thinking from a data-in, data-out perspective and less of a programmatic API-call-in, API-call-out one. Is that fair?
[00:08:48.03] - Paul Iusztin
Exactly. That's why I think in the ML or AI world, experimentation, being able to experiment a lot and see how the system behaves, is a lot more powerful than just going through the code. Because, as you said, going through the code leaves many variables unanswered. Related to this, one thing where I still see a huge gap is writing tests. Writing tests for machine learning applications, at least from what I've experienced, is a real pain. I don't mean writing simple unit tests for the minimal parts of your code which are deterministic; that's just software engineering applied here. But writing integration tests, or tests that really check how the system behaves, is really hard.
[00:09:47.21] - Simba Khadder
Have you seen it done well before?
[00:09:51.28] - Paul Iusztin
I've seen it done once, in a simpler way, let's say. Basically, they just checked that the system works. You start the feature pipeline, you start the training, the inference, and you just check that everything runs, that the shapes are okay, that the interaction with the system works, but you never check that the result itself is what it should be. It's hard. Just imagine how you can test that training works without having an integration test that runs for a few hours. It's hard, especially with deep learning models.
[00:10:46.17] - Paul Iusztin
You can do some simple tests that check, for a single batch, that the learning curve goes toward zero or something like that, but it still introduces issues like where you store the data for continuous integration pipelines, where you store the model, and where you train it. Because out-of-the-box solutions like GitHub Actions or GitLab CI don't really integrate that well with the GPU world. You can do it, but you have to spend some time building that infrastructure.
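To make the "learning curve goes toward zero on one batch" test Paul describes concrete, here is a minimal sketch in PyTorch. The model factory and tensor shapes are hypothetical placeholders for your own code; the only point is that a correctly wired training loop should be able to nearly memorize a single fixed batch.

```python
# A minimal "overfit one batch" smoke test, assuming a PyTorch model.
# build_model() is a hypothetical placeholder for your own model factory.
import torch
import torch.nn as nn


def build_model() -> nn.Module:
    # Stand-in for whatever model your project actually builds.
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))


def test_model_overfits_single_batch():
    torch.manual_seed(0)
    model = build_model()
    x = torch.randn(8, 16)           # one tiny, fixed batch
    y = torch.randint(0, 2, (8,))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    criterion = nn.CrossEntropyLoss()

    first_loss, last_loss = None, None
    for _ in range(200):
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        first_loss = first_loss if first_loss is not None else loss.item()
        last_loss = loss.item()

    # If the loop cannot (nearly) memorize one batch, something in the
    # model/optimizer wiring is broken.
    assert last_loss < 0.1 * first_loss
```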
[00:11:30.14] - Simba Khadder
Yeah, that makes sense. I wonder, if you were starting from scratch, you had some researchers who found their way of getting models into production, and it's not a clean slate, because there are already models, but you're the first MLOps person. You have to build that MLOps platform. What would you do? How would you go about it? How would you break it down? What would you do first?
[00:11:58.26] - Paul Iusztin
That's a really interesting question. I think the first thing I would try to understand is the system itself, because I like to really understand what's going on. Let's assume, based on what I've seen so far, that most research-driven deployments are a big monolithic system where the features and the training are computed in the same place. Usually, you don't save any state. In some cases, you have a model registry. Basically, everything is completely coupled together.
[00:12:44.03] - Paul Iusztin
This is not really scalable because, for example, if the features are directly fed into the model, you can't track what is going on. Basically, you don't know which features the model was trained on, and you can't reuse those features. On the other side, if you have a real-time system and a user makes a request, and you don't keep any state, you usually have to transfer the whole state along with that request to compute the features, and so on and so forth.
[00:13:25.05] - Paul Iusztin
What I would usually try to do is apply the feature/training/inference pipeline methodology, which is a mental model that I really love. I think it was introduced by Jim Dowling from Hopsworks. Basically, it splits your whole system into three abstract components: the feature pipeline, the training pipeline, and the inference pipeline. It's not necessarily three pipelines, but it defines clear scopes and clear interfaces between these three.
[00:14:06.07] - Paul Iusztin
Just to go through it very quickly: you have a feature pipeline that takes raw data as input and spits out features. Usually, you store these features in a feature store, and then you take the features from the feature store and train the model, and you put the model into a model registry. On the inference side, you take the features from the feature store and the model from the model registry, and you make predictions. This is not hard to implement if you have the right tooling. It then allows you to scale really well and basically start digging into the other issues.
[00:14:47.28] - Paul Iusztin
I think I would try to apply this principle one way or another, basically to decouple these three components and create a clear differentiation between these three steps, and then find the most essential tooling, which is a feature store or a logical feature store, a model registry, and an experiment tracker, so you can basically track what's going on and make the system reproducible, scalable, and modular.
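As a concrete illustration of the decoupling Paul describes, here is a minimal, framework-free sketch of the feature/training/inference split. The in-memory dicts are stand-ins for a real feature store and model registry, and the "model" is a trivial mean predictor, just to show that each pipeline only talks to the others through those two stores.

```python
# A minimal sketch of the feature/training/inference (FTI) split.
# The dicts below are stand-ins for a real feature store and model registry;
# the "model" is deliberately trivial so the example stays self-contained.
from statistics import mean

feature_store: dict[str, list[float]] = {}
model_registry: dict[str, float] = {}


def feature_pipeline(raw_data: list[float]) -> None:
    # Raw data in, features out, persisted so that training and inference
    # later read the exact same values.
    feature_store["features"] = [x / max(raw_data) for x in raw_data]


def training_pipeline() -> None:
    # Reads only from the feature store, writes only to the model registry.
    features = feature_store["features"]
    model_registry["model_v1"] = mean(features)


def inference_pipeline() -> float:
    # Pulls features and the trained model through their stores, never
    # directly from the training code, so each piece can be swapped or
    # scaled independently.
    features = feature_store["features"]
    model = model_registry["model_v1"]
    return model * len(features)


if __name__ == "__main__":
    feature_pipeline([1.0, 2.0, 4.0])
    training_pipeline()
    print(inference_pipeline())
```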
[00:15:23.19] - Simba Khadder
I like that breakdown. I wrote a similar one. I added a fourth component, which is just monitoring and evaluation, so then it would go data, experiment tracking and model store, then deployment, which is really just model deployment. That's where you would handle A/B testing, upgrading, downgrading.
[00:15:47.09] - Paul Iusztin
That would be the fourth component?
[00:15:51.10] - Simba Khadder
No, that would be the third one, the same one you mentioned, the inference one, and then the fourth one would be monitoring and evaluation, which would just be the stack that actually samples or takes the predictions and the features and provides some UI for you to monitor, so companies like Arize or Aporia-
[00:16:12.01] - Paul Iusztin
Oh, okay.
[00:16:13.20] - Simba Khadder
-those types of companies.
[00:16:14.12] - Paul Iusztin
That would make sense.
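At its simplest, that fourth component starts by sampling predictions together with the features that produced them so a monitoring tool can analyze them later. The sketch below is a hypothetical illustration; the function name and JSON-lines file stand in for whatever sink your monitoring stack (Arize, Aporia, Evidently, or your own dashboard) actually ingests.

```python
# Sketch of the monitoring/evaluation component's first step: sample
# predictions plus their input features and persist them for analysis.
import json
import random
import time


def log_prediction(features: dict, prediction: float,
                   sample_rate: float = 0.1,
                   path: str = "predictions.jsonl") -> None:
    # Only a sample is kept; logging every prediction is usually too costly.
    if random.random() > sample_rate:
        return
    record = {"ts": time.time(), "features": features, "prediction": prediction}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```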
[00:16:16.04] - Simba Khadder
Then I would have those four. I always feel like a lot of what MLOps engineers do at companies is like, okay, there are feature stores, there are experiment trackers like Weights & Biases and MLflow, there's Seldon and BentoML for serving, there's Arize, Aporia, and Evidently for monitoring. These tools are great. You don't really need to build your own anymore. You shouldn't. I mean, it's probably not a good use of time anymore. But what you need to do is more like, what's the abstraction above it? I think that's something that is going to be super unique per company. How do you do things?
[00:16:52.15] - Simba Khadder
We have companies and customers that work in notebooks. We have other customers who do everything via CI/CD and pretty much Python scripts. Everyone's so different that you can't really have a one-size-fits-all MLOps platform. But anyway, that's just the way we think about it. It's very similar. I feel like a lot of people have come to the same abstraction, the same realization. It's just slightly different tweaks of the same thing.
[00:17:20.17] - Paul Iusztin
It's very tweaked in the end. I love this fourth component. It makes a lot of sense not to put it in the inference pipeline. You could put it as a meta-layer that observes the whole thing. It made it click for me.
[00:17:41.29] - Simba Khadder
I think it's like people forget about that piece. It's like, "Okay, I'm deployed. I'm done." It's like, "No, that's just the beginning." Good job, you made it to the starting line. You're off to the races, but there's so much more to do.
[00:18:00.25] - Paul Iusztin
Plus, it makes a lot of sense to use an out-of-the-box tool for ML monitoring because it's hard to do this yourself. To be honest, I tried once, and it's a waste of time. It's really a waste of time.
[00:18:22.17] - Simba Khadder
Painful to do. A lot of this stuff from scratch just doesn't... The ROI has gone down. In the early days, tools were so new, and maybe there was an argument of we could do something unique to us. But nowadays, all the tools are years old and used by big companies. It doesn't really make sense to start from scratch. I agree.
[00:18:45.17] - Simba Khadder
One thing that I've found interesting, though, is I've actually reapplied the same breakdown to LLMs and RAG, specifically, and it ends up looking the same. What it looks like there, to me at least, is the data piece is pretty much the same. The big difference is you're chunking and using vector DBs, slightly different types of feature engineering. But embeddings aren't new; we've been using them for recommendation systems and NLP use cases forever.
[00:19:17.23] - Simba Khadder
Then training becomes an optional fine-tuning step. Serving is still the same problem. You have this big model. The only difference is that nowadays there are APIs you can use, like hosted models. Monitoring also covers evaluation. I mean, the hardest problems in LLMs are, ironically, in my opinion, the hardest problems in MLOps, which are data and evaluation.
[00:19:43.03] - Paul Iusztin
I totally agree. I personally believe that evaluation is a completely open problem in this field. You have solutions, but all of them are incomplete. In my opinion, an incomplete evaluation solution will always leave you with a feeling of not trusting what it tells you. If you cannot trust the system that is supposed to make you trust the rest of the system, you always end up feeling you don't really know what's going on inside there.
[00:20:22.03] - Simba Khadder
It's funny because you see a lot of these random tricks that people do, and some of them work or seem to work, at least there are papers written about them, but they just feel very wrong intuitively. For example, a lot of people use the same LLM to evaluate itself and its own responses. Obviously, that can work, because if you talk to an LLM and it does something wrong, you tell it it was wrong and it will try something new. It can work, but it just feels like, okay, you've added a black box to your black box to make you feel better about your new black box. You're almost like, if I add enough layers of abstraction, it becomes so confusing that I'll just be forced to trust it because there's no way to figure out what's happening.
[00:21:11.25] - Paul Iusztin
Exactly. You're completely right. What's even more confusing is that sometimes it works, and sometimes it doesn't, using the same LLM. But what I've usually seen in the industry is that you use a smaller model for your use case, often an open-source one, and a very powerful model for evaluation, even if it's more costly and so on and so forth, because you don't want to evaluate everything. You probably just sample and evaluate that sample. From what I've seen, that's a reasonable solution so far.
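A rough sketch of that "sample and judge with a stronger model" pattern is shown below. The call_llm function is a hypothetical placeholder for whichever client you actually use; only the sampling and scoring flow is the point here.

```python
# Sketch of sampled LLM-as-judge evaluation: a small fraction of production
# question/answer pairs is scored by a stronger model. call_llm() is a
# hypothetical placeholder for your own LLM client.
import random


def call_llm(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM provider or local model")


def judge_sampled_responses(interactions: list[dict],
                            sample_rate: float = 0.05,
                            judge_model: str = "strong-judge-model") -> list[dict]:
    sampled = [item for item in interactions if random.random() < sample_rate]
    scored = []
    for item in sampled:
        judge_prompt = (
            "Rate the following answer from 1 to 5 for correctness and helpfulness.\n"
            f"Question: {item['question']}\n"
            f"Answer: {item['answer']}\n"
            "Score:"
        )
        scored.append({**item, "judge_score": call_llm(judge_model, judge_prompt)})
    return scored
```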
[00:21:59.26] - Simba Khadder
I agree. I think that one's more obvious and that one's clearer. It's like I'm using a better model to check my weaker model. It makes sense. I think what's funny is when people use GPT-4 to check GPT-4, it feels a little bit... It might work, but it just feels like surely in three years, we'll look back at that and be like, "Yeah, that was a funny little idea we had."
[00:22:21.11] - Paul Iusztin
Okay, so that's why the application failed.
[00:22:24.29] - Simba Khadder
It's almost like looking at ML before MLOps, the stuff we would do, just the hacks, and any trick would work. It's almost like we were so worried about the cool demos, especially in computer vision and things like that. We were so worried about the cool demo, but we almost didn't care about how often it didn't work. I feel like LLMs are in that same boat right now, where it's really easy to build an awesome demo. I think, even unlike computer vision, these demos tend to work way better in production just by default, but they're definitely not production-grade systems, depending on what you're trying to do.
[00:23:03.20] - Simba Khadder
Most of them still have pretty high failure rates, but we're just like, "Oh, but look, here's a few examples of it working." It's so awesome to see that. I think that's going to start to go away as people get used to how good GPT is. I'm already finding for myself personally, when I use GPT-4, I used to just be amazed every time. Now I use it every day, and I'm so used to it that by default, it's like, this is great, it solves some problem for me, and then when it misbehaves, I get more annoyed. I'm like, "I can't get it to do what I want," or "I can't figure this thing out," or you give it an error message sometimes, and it just doesn't know, and it just guesses, and it's completely wrong, and you know it's wrong, and you're just like…
[00:23:46.29] - Simba Khadder
I think that the hype is going to start to fade, and then almost like the bar is going to go up, and people are going to want it to work every time or almost every time.
[00:23:55.05] - Paul Iusztin
That makes sense, a lot of sense, especially because if you get annoyed, it will sometimes just be faster to Google it.
[00:24:03.15] - Simba Khadder
Yeah.
[00:24:03.19] - Paul Iusztin
To be honest, I still Google error messages. I found that that's more robust, at least for now, because it usually points to some Stack Overflow thread where people tried to debug it. I think the only thing that I still Google is error messages, because that's usually a very standard string that is very easy to query, and it still gets the job done. I've also really tried to force GPT into many use cases that I found I need to automate. I usually find myself very discouraged, because it works, but it doesn't work well enough.
[00:24:56.01] - Simba Khadder
It's almost like an augmentation tool to me. That's how I think of it. It makes me better, but I have to almost supervise it. I would be freaking out to let it run and do its own thing completely unsupervised, completely automated. It's just, not only can it be wrong, it can get very convinced of itself even when it's wrong and then follow a path that...
[00:25:21.14] - Simba Khadder
I'll give you a very specific example. I ran into this error, or one of my engineers did. One of my junior engineers was asking one of our senior engineers for help, and the error was something like, "proto needs to be initialized with a dictionary." She googled it, and it wasn't really clear. Then she asked GPT, and GPT was convinced that the issue was how she was initializing it. It kept spitting out code for her, and that code would continue not to work, and it would continue to try different renditions of the same code. It turned out that the issue was actually that one of her fields was misspelled, and the error message was just awful.
[00:26:02.06] - Simba Khadder
The error message was really awful, and it confused GPT, and it almost wasted more of her time because GPT was so sure of itself. She was more of a junior engineer, so she was just like, "I must be doing this wrong." But anyway, there are a lot of examples I've found of it not only being wrong, but being convinced and very persuasive, so you think, "Oh, it's probably just me. I'll just copy and paste this code and see what happens."
[00:26:28.15] - Paul Iusztin
That's a very big issue. It's like a stupid person that doesn't understand that it's stupid.
[00:26:36.01] - Simba Khadder
Where do you think LLMOps, let's call it, and MLOps end up? What happens? Do you think they're going to merge? Are they going to be separate things? Does one eat the other? Is MLOps over? Should we all change our job titles?
[00:26:55.11] - Paul Iusztin
Well, personally, and this is just my personal opinion, to be honest, I can't even back it up with anything official, I think LLMOps is just a subfield of MLOps. Because if you look, a lot of MLOps tools are mostly used for tabular, structured data; almost all their demos, until the LLM world showed up, were for tabular data, to be honest. They didn't really take into account this deep learning world where you need a ton of GPU power and you have big data, which is usually text or images, videos, and that kind of stuff.
[00:27:47.08] - Paul Iusztin
Since I worked with deep learning before this LLM hype, and then moved into MLOps, to be honest, I really saw a very big divergence between these two fields. I liked MLOps, but I couldn't see how to really apply it to the deep learning world. There were many things lacking. To be honest, I see LLMOps as just MLOps for deep learning, because in the end, LLMs are just transformers used for text and images. It's just a fancy word for big models that take images and text as input.
[00:28:29.29] - Simba Khadder
Yeah, I think so, too. I've seen two different schools of thought on this. One is the extension mindset of, I'll take my MLOps platform and extend it to do LLMs. The only hugely unique thing about LLMs is prompts. Prompts have never existed before. The concept of a prompt is completely new.
[00:28:51.02] - Paul Iusztin
Yeah, that's a good point.
[00:28:53.10] - Simba Khadder
Embeddings aren't new. Vector databases aren't new. Sure, they're way more popular than they were five years ago, but they're not new concepts. Transformers, obviously, aren't new. I remember when ELMo came out, that was my first introduction to using transformers in production, and then BERT came out, and I was super excited. Then people thought it was a little bit crazy because it was very niche at the time. Nowadays, obviously, it's clear that that was the future, at least the next step. I think there's an extension mindset. I'm finding that some of the really big companies, huge enterprises, are treating them differently, but I think it's more due to bureaucracy than anything else, because getting their MLOps platform to move as fast as they want to move is actually harder than just starting from scratch.
[00:29:42.05] - Simba Khadder
In the end, like you said, it's all data-centric. It all starts with the data. An LLM is almost like another extension of your data. It's another tool that you can now plug into your data that's really powerful, and you can use it in a ton of places. But in the end, it's just like, cool, here's a new thing I can plug into my data. I think a lot of it is the same.
[00:30:01.01] - Paul Iusztin
The biggest difference is that the data is a lot different from what you're used to, and so is how you model the data, as you said, with prompts. Basically, that's how you model your data, which is a lot different. To be honest, and maybe it's a stretch, but at this moment I see prompts as part of the feature engineering step, because with a prompt, you don't focus so much on how to clean things automatically and that stuff. If you have better prompts, you end up with better embeddings that condition your autoregressive process better, and those, in the end, are features, more or less.
[00:30:50.23] - Simba Khadder
To maybe even give you a bit more credit than I think you're giving yourself, I completely agree, and not only do I agree, I've seen it. We have customers doing this, and that's how they think about it. There's a prompt, which is a string. It's a template. All of the inputs to the prompt are features. They're exactly the same features that you would have built otherwise. A nearest neighbor lookup is not something that's new. We would do that for recommender systems all the time.
[00:31:20.27] - Simba Khadder
For example, if I'm asking a bank chatbot, how should I invest my money? It probably wants to know how old I am, my credit score. Am I married? Do I have kids? What's my risk profile? All that stuff, right? But all that stuff is in a feature store somewhere. So how do you pull that, and also pull the nearest neighbor results, and deal with the chunking, which is... it's not a new idea, but that pipeline has become way more important than it used to be. You pull all that stuff together, and the only thing that's new is that I have these templates.
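A sketch of the pattern Simba describes might look like the snippet below: user features come from a feature store, retrieved context comes from a nearest-neighbor lookup, and both get filled into a prompt template. The helper functions are hypothetical stand-ins for your feature store and vector DB clients.

```python
# Sketch: features from a feature store plus retrieved context, assembled
# into a prompt template. get_features() and nearest_chunks() are
# hypothetical placeholders for real feature store / vector DB clients.
PROMPT_TEMPLATE = """You are a banking assistant.
Customer profile: age {age}, credit score {credit_score}, risk profile {risk_profile}.
Relevant document excerpts:
{context}

Question: {question}
Answer:"""


def get_features(user_id: str) -> dict:
    # e.g. {"age": 34, "credit_score": 720, "risk_profile": "moderate"}
    raise NotImplementedError("read these from your feature store")


def nearest_chunks(question: str, k: int = 3) -> list[str]:
    raise NotImplementedError("embed the question and query your vector DB")


def build_prompt(user_id: str, question: str) -> str:
    features = get_features(user_id)
    context = "\n".join(nearest_chunks(question))
    return PROMPT_TEMPLATE.format(question=question, context=context, **features)
```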
[00:31:56.08] - Simba Khadder
So rather than the old models where you just plug the features directly into the model, now there's this abstraction in between where you fit them into a prompt. Yeah, I think in the future that's what-
[00:32:08.10] - Paul Iusztin
Basically, it's just an abstraction layer between the vector that you feed into the model and, as you said, how you take the data and plug it into the processes that transform your data into this vector. You just don't care about transforming it into vectors anymore. You just put it in a prompt, and your job is done.
[00:32:32.15] - Simba Khadder
Yeah, it's like an adapter. It's not just a dumb adapter; you can get a lot of value out of the adapter itself. I view it as just an adapter to the model. In a way, it actually is more natural than… Even if we move away from text, and let's say we move to JSON somehow in the future, it's almost like an unstructured input, as opposed to deep learning models or traditional ML models, where inputs are very structured. This is what goes into every single input node, where here it's like, "Yeah, just give me information and I'll work with it." So I think it's a natural progression. But I really do view that a lot of the issues are the same. They just look slightly different. It's more of an extension of what's there as opposed to starting from scratch.
[00:33:24.23] - Simba Khadder
We'll see. If some of the really crazy stuff works, like models evaluating models, the agentic stuff, and all that really becomes what everyone does, and all the RAG stuff is almost secondary to it, then I think it's not as obvious, because agentic flows are just different enough that you'd want to do something new.
[00:33:47.24] - Paul Iusztin
It will be really interesting to have a model that tries to improve the prompt that is the input to another model, make them bounce off each other, and have the perfect prompt in the end.
[00:34:04.06] - Simba Khadder
They actually do that. It's a thing. In fact, what some people will do is they'll… I've seen that before, but it's a thing where a user asks a query, and then you'll have an LLM actually rewrite the prompt to achieve better performance. The other one I've seen, which is interesting, is for naive RAG, where people just do a nearest neighbor lookup: they'll take the prompt and pretty much generate a summary, or even a mock response, and then use the response to do the embedding and do the nearest neighbor lookup for RAG.
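The mock-response trick Simba mentions might look something like the sketch below: draft a plausible answer first, then use that draft's embedding for retrieval instead of the raw question's, the idea being that the draft can land closer to the relevant documents. All three helper functions are hypothetical placeholders for your own LLM, embedding, and vector search clients.

```python
# Sketch of retrieval via a generated mock answer. call_llm(), embed(),
# and search() are hypothetical placeholders for your own clients.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("your LLM client here")


def embed(text: str) -> list[float]:
    raise NotImplementedError("your embedding model here")


def search(vector: list[float], k: int = 5) -> list[str]:
    raise NotImplementedError("your vector DB query here")


def retrieve_with_mock_answer(question: str) -> list[str]:
    # Draft a plausible answer, then embed the draft rather than the question.
    mock_answer = call_llm(f"Write a short, plausible answer to: {question}")
    return search(embed(mock_answer))
```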
[00:34:40.02] - Simba Khadder
We're already seeing actually a lot of that stuff, and it seems to work well. But again, we'll see how much of that becomes the main feature engineering or hyperparameters or if it becomes data. I think that's the question.
[00:34:54.25] - Paul Iusztin
On the RAG side, what I've seen lately, and personally I like it very much, is this self-query step, which works for well-defined applications, which most of them are, especially in the business world, where you know what a user should input and what you should look for. You do this self-query process where you use the LLM to find some keywords in the question, user age, location, whatever makes sense for your business use case, and then map them to a hybrid search or a filtered vector search of some sort. You can very nicely combine these tools: the more deterministic, older kind of search with this new LLM-based, vector-based search that helps you find more nuance.
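A sketch of that self-query step, under the assumption that the LLM can reliably return JSON, might look like this. call_llm, embed, and vector_search are hypothetical stand-ins for your own clients, and the extraction prompt is illustrative only.

```python
# Sketch of self-querying: the LLM first extracts structured filters from
# the question, then those filters narrow a vector (or hybrid) search.
# call_llm(), embed(), and vector_search() are hypothetical placeholders.
import json


def call_llm(prompt: str) -> str:
    raise NotImplementedError("your LLM client here")


def embed(text: str) -> list[float]:
    raise NotImplementedError("your embedding model here")


def vector_search(vector: list[float], filters: dict, k: int = 5) -> list[str]:
    raise NotImplementedError("your vector DB's filtered/hybrid search here")


def self_query_search(question: str) -> list[str]:
    # Step 1: pull business fields (age, location, ...) out of the question.
    extraction_prompt = (
        "Extract user_age and location from the question as a JSON object; "
        "use null for missing fields.\n"
        f"Question: {question}\nJSON:"
    )
    raw_filters = json.loads(call_llm(extraction_prompt))
    filters = {key: value for key, value in raw_filters.items() if value is not None}
    # Step 2: semantic search narrowed by the extracted filters.
    return vector_search(embed(question), filters)
```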
[00:36:03.27] - Simba Khadder
It's funny whenever I end up talking about LLMs on the podcast, because I know that this will go out in a few weeks, and who knows? Maybe GPT-5 gets released and it actually kills RAG. I don't think it will, but still, things are so explosive in that space. It's cooled down a bit, but it's still hard. Everyone's keeping their finger on the pulse because it feels like it changes every couple of months. It used to be every week. Now it's every couple of months. But still, it's really dynamic.
[00:36:35.02] - Paul Iusztin
In the beginning, to be honest, I was reading about this weekly, but at some point, I realized that if you don't apply that stuff at the moment, it's pointless. I just try to focus on the fundamentals to keep understanding what's going on. But to go into all the details that I did not need at the moment was just a waste of time, because you just don't use it, and in a few weeks it has changed.
[00:37:06.09] - Simba Khadder
Yeah, exactly. I think as ML engineers, we have an unfair advantage because we know how to deal with black box models; it's not our first rodeo. It's different, but it's not completely brand new, versus people coming from a more software engineering background, who treat it like an API but have never dealt with a non-deterministic API before, whereas that's all we know. I think there's an unfair advantage that ML engineers have in figuring out what's going on.
[00:37:34.07] - Paul Iusztin
Yeah, because they just treat it like a normal function.
[00:37:38.05] - Simba Khadder
Yeah, exactly.
[00:37:38.28] - Paul Iusztin
They don't really understand what's going on behind it.
[00:37:41.23] - Simba Khadder
It's like Stripe: you just hit it, it just does the thing, and you don't worry about it. Obviously, that's not exactly how it works. Paul, I feel like we could go on forever, and I would love to keep chatting with you about this, but we are getting to time, and I want to be respectful of your time. I really love that you were able to come on and that we were able to have this great conversation.
[00:38:03.25] - Paul Iusztin
Yeah, me too. It was really a great conversation. Personally, I also learned a lot from you, so I'm really excited to be here.
[00:38:11.29] - Simba Khadder
Yeah, me too. Thank you.
[00:38:14.20] - Paul Iusztin
Thank you.