Who Puts Your Machine Learning Models in Production?

When building Machine Learning (ML) products, what is the common output from the following three agile objectives?

  1. Build things fast
  2. Build things right
  3. Build the right things

Answer: A useful and reusable model.

No MVP without a model in production. And production is not the final easy step after proving model value in a notebook. It’s not merely an automagical model file copy-paste action to your local cloud provider. It’s a major part of building an ML product. Getting to production should be part of the process from the very beginning.

Maybe — you don’t have Google-sized teams and ML systems with thousands of models automatically trained on huge datasets, deployed through smooth pipelines producing high-value predictions every second.

But — you still have to maintain and deploy your model somewhere to make it useful. However, putting models in production means totally different things to different people. Depending on your role and experience, you might take very different approaches.

In my experience working with real-time mathematical models, not just ML, I have seen many approaches to putting models in production. The following three approaches also happen to represent each of the three agile objectives mentioned before, and in the same order:

  1. Domain expert: Copy-paste unversioned heuristic model code with the relevant features into a script on a local machine, and tweak parameters on the fly while manually emailing data and results back and forth to end-users.
    “Yay, we got something ‘in production’ on my machine, but it’s a nightmare to debug, maintain, and reuse, and it doesn’t work robustly or autonomously.”
  2. Software engineer: Write generic code where the concept of a model has been abstracted away through a million nested methods, all the way up to a well-documented class called TheUniverse(). Reject all code changes in long code reviews that 1) do not have 100% unit test coverage, 2) do not apply the most optimal best-practice software design pattern for the use case, 3) have not been vacuumed for trailing whitespace automatically by a formatter that took a week to set up in a CI/CD pipeline.
    “We didn’t have time to build any models. But at least the training pipeline and potential artifacts are automatically tested and packaged. We also handled all git merge conflicts nicely and the repo looks really structured.”
  3. Data Scientist: “Just give me the latest Deep Learning net with 1TB worth of parameters and a GPU to train on, and I’ll give you a Jupyter notebook where the model can run — at least sometimes, if the data behaves or does not change. But also a paper with nice results to present at a conference in 6–12 months’ time.”
    The model is state of the art and follows best-practice modelling principles, at least in theory. It shows low error metrics and great performance, even on the business KPIs. But it only runs in one experimental environment on one specific machine.
    “You will never reproduce my super advanced model architecture, custom Python packages or weirdly scaled features. Never! MUHAHAHA!”

None of these exaggerated approaches is particularly great on its own. And one person rarely has the wide range of skills required to stay in the sweet spot of all three roles, or all three agile objectives. It is a team effort.
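For contrast, here is a minimal sketch of what the sweet spot between these extremes might look like. Everything in it is hypothetical (the HeuristicModel class, its threshold parameter, the model file name), but the idea is a model that is simple enough for the domain expert to tweak, versioned and testable enough for the software engineer, and reproducible enough for the data scientist:

```python
# A minimal, hypothetical sketch, not a production framework.
import json
from dataclasses import asdict, dataclass


@dataclass
class HeuristicModel:
    """A tiny heuristic 'model': flag values above a threshold."""

    version: str = "0.1.0"
    threshold: float = 0.5  # the one parameter the domain expert tweaks

    def predict(self, x: float) -> bool:
        return x > self.threshold

    def save(self, path: str) -> None:
        # Persist the parameters together with the version, so the exact
        # model behind any prediction can be reconstructed later.
        with open(path, "w") as f:
            json.dump(asdict(self), f)

    @classmethod
    def load(cls, path: str) -> "HeuristicModel":
        with open(path) as f:
            return cls(**json.load(f))


if __name__ == "__main__":
    model = HeuristicModel(threshold=0.7)
    model.save("model-0.1.0.json")
    restored = HeuristicModel.load("model-0.1.0.json")
    assert restored.predict(0.9) and not restored.predict(0.3)
```

Nothing clever is happening here; the point is that versioning, persistence, and a testable interface cost almost nothing when they are part of the process from the very beginning.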

So which approach to putting models in production fits you best? What do you think other stereotypical roles would do? Data Engineers? UX?

