Over the past few years, many organisations have found that although they train great models, they don’t always gain long-term value from them. The reason usually comes down to two things: deployment and monitoring.
Deploying a model isn’t always easy. Models can be large, inference can be slow, you might need a GPU, and you have to decide whether to serve predictions in real time or in batches.
Monitoring is key. If we train a model on something fashion-related, it could be invalid within a few months due to rapidly changing trends.
MLOps is the solution to this problem, but it also covers much more.
Table of Contents
- Defining MLOps
- Why have MLOps for your side projects?
- A framework for tooling
- Some tools to explore
NVIDIA define MLOps as…
A set of best practices for businesses to run AI successfully.
MLOps aims to unify the release cycle for machine learning and software application release.
Google gives this definition…
MLOps is an ML engineering culture and practice that aims at unifying ML system development (Dev) and ML system operation (Ops).
Practicing MLOps means that you advocate for automation and monitoring at all steps of ML system construction, including integration, testing, releasing, deployment and infrastructure management.
So MLOps is a culture, framework or set of rules that allows us to practise Machine Learning more efficiently. It includes many features of DevOps, such as automation and testing, but focuses on Machine Learning and Data Science by improving deployment and model monitoring.
Why have MLOps for your side projects?
I’m sure you can see the benefits for a large company with terabytes of data and loads of Data Scientists, but what about the average Data Scientist working on small projects at home?
I’d say there are several benefits…
- A reasonable portion of Data Science roles are focused on deployment, and employers often want to see deployment experience on your CV.
- It makes your life a lot easier through automation and facilitates the creation of more outward-facing outputs from your work.
- It makes your work scalable, meaning that if your side project turns into something people use, you can scale up easily.
- Finally, it takes your project to levels rarely seen in side projects. I’ve never seen a side project with automated retraining pipelines!
Developing a Framework for Tooling
Google has an excellent MLOps framework that defines three levels of maturity. Let’s explore Level 0 and compare it to Level 1, then see how we can relate this to tooling.
Level 0: Low automation
- Level 0 looks very much like a portfolio or Kaggle project.
- There is little automation and limited options for data storage.
- If we wanted to retrain our model, we’d have to repeat all of these steps. This makes dealing with model drift difficult.
- If we wanted to repeat this with different data, we’d have to copy code or adapt a notebook.
- We need additional tools to track our experiments when creating models.
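To make the drift problem concrete, here is a minimal sketch of the kind of check one might eventually automate. The function name and the two-standard-deviation threshold are my own illustrative choices, not taken from any particular tool:

```python
import statistics

def mean_shift_alert(train_values, live_values, threshold=2.0):
    """Flag drift when the mean of live data deviates from the training
    mean by more than `threshold` training standard deviations.
    A deliberately crude heuristic; real monitoring tools use richer tests."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    return abs(statistics.mean(live_values) - mu) > threshold * sigma

# Training-time feature values vs. two batches of live values.
train = [10.0, 11.0, 9.5, 10.5, 10.0]
print(mean_shift_alert(train, [10.2, 9.9, 10.4]))   # similar distribution: False
print(mean_shift_alert(train, [15.0, 16.0, 14.5]))  # clear shift: True
```

At Level 0 a check like this would be run by hand; at Level 1 it becomes part of a continuous monitoring loop.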
Level 1: High Automation
Let’s digest this from left to right.
- The first new item here is a feature store. This is a database specifically for ML features. We do a variety of operations when training models such as scaling, feature engineering, encoding and mathematical transformations. Many of these features are not useful for typical analysis so a feature store allows us to store this and access it more easily.
- On the left side, we have orchestrated experiments. This is the concept of automating many operations in the Data Science workflow.
- The red square in the bottom left is the creation of a pipeline. This is another piece of automation. When we deploy a pipeline, we deploy both the model and the transformations required to generate predictions from it.
- Many validation steps can also be automated.
- We also allow continuous monitoring and retraining by collecting new data and storing it in the feature store.
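The pipeline idea above can be sketched in plain Python. `SimplePipeline` and its parameters are hypothetical stand-ins for what a real library (e.g. scikit-learn’s `Pipeline`) provides; the point is that the scaling learned at training time ships inside the same artifact as the model:

```python
import pickle

class SimplePipeline:
    """Toy pipeline bundling preprocessing parameters with model weights."""

    def __init__(self, mean, std, weights, bias):
        self.mean, self.std = mean, std          # scaling learned at training time
        self.weights, self.bias = weights, bias  # model parameters

    def predict(self, x):
        # The training-time scaling is applied automatically at inference.
        scaled = [(xi - m) / s for xi, m, s in zip(x, self.mean, self.std)]
        return sum(w * s_i for w, s_i in zip(self.weights, scaled)) + self.bias

pipe = SimplePipeline(mean=[2.0], std=[1.0], weights=[3.0], bias=0.5)

# Serialising the whole object means the serving environment needs no
# separate preprocessing code.
blob = pickle.dumps(pipe)
restored = pickle.loads(blob)
print(restored.predict([4.0]))  # (4.0 - 2.0) / 1.0 * 3.0 + 0.5 = 6.5
```

If only the raw model were deployed, the serving code would have to reimplement the scaling and risk the two drifting apart.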
So this is a great flow chart for MLOps processes. But what about tooling?
In businesses, there are many people, moving parts and a rapidly changing landscape of tools. Because of this, industry-grade MLOps should be tool agnostic. However, at home, we are cost and scale limited in what tools we can use and we are creating a personalised process. I believe MLOps at home should not be tool agnostic.
A key rule of MLOps is that it should be tool agnostic. We can bend this rule for MLOps at home.
To move from Level 0 to Level 1, we need to implement some new technology.
Selecting Our Tools
You can use the headings in this chart to categorise tools you find. I’ve included some example tools that might fit under each heading.
Some Tools to Explore
This Reddit post inspired me to write this and I’d suggest checking it out. Here’s a list of tools I plan to investigate over the next year or so.
- Project Scaffolding: CookieCutter, Kedro
- Documentation: Sphinx
- CI and Deployment: Jenkins, Docker, Gitlab
- Data Modelling: DBT
- Data Exploration and Preparation: Pandas (Pyspark if large)
- Testing: Great Expectations, Pytest
- Feature Store: DVC, Feast
- Workflow engine or orchestrator: Luigi, Prefect, Airflow
- Model Registry: MLFlow (using Kedro-MLFlow or PipelineX)
- Model serving: FastAPI, BentoML, Cortex
- Model monitoring: Jenkins Pipelines, MLFlow
Some tools provide many of these features in one package.
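As a flavour of the testing category, a pytest-style file for a model might look like the sketch below. `predict` is a hypothetical placeholder for loading and calling your real model:

```python
# A minimal, hypothetical pytest-style test file.

def predict(features):
    # Placeholder model: in practice this would call a serialised pipeline.
    return sum(features)

def test_prediction_matches_expected():
    # Sanity check on a known input.
    assert predict([1.0, 2.0]) == 3.0

def test_prediction_is_not_negative():
    # Domain rule: this toy model should never predict below zero
    # for non-negative inputs.
    assert predict([0.0, 0.5]) >= 0
```

Running `pytest` against files like this from a CI server (e.g. Jenkins or GitLab) is one way to automate the validation steps discussed earlier.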
There are surely some areas missing here. Running this will need some compute power, which I don’t think will be free, and there also needs to be somewhere to store raw data other than the feature store. It may also be lacking in scalability, as I’m not sure how this would work over multiple clusters.
In this post, we looked at developing an MLOps framework. The great thing about this is that we can plug tools into the framework as we find suitable ones. It also really helps with assessing new tools, as we can understand how they fit into our framework and alongside existing tooling.
Over the next few months, I plan to put this into practice. I’m going to try out tools, link them together and try to get a genuine MLOps environment on my home workstation. If you have experience with any of the tools mentioned in this post, leave a comment!