The challenges you’ll face deploying machine learning models (and how to solve them)

In 2019, organizations invested $28.5 billion in machine learning application development (Statista). Yet only 35% of organizations report having analytical models fully deployed in production (IDC).

When you connect those two statistics, it's clear that a breadth of challenges must be overcome to get your models deployed and running. Common obstacles generally center on:

  • Poor visibility of model performance 
  • Code that doesn’t play nice in different environments
  • IT gaps within your infrastructure
  • Disjointed software and approaches to production machine learning (MLOps)
  • Workflows that can’t move between cloud and on-prem infrastructure

The following sections give you deeper insight into these challenges and how you can overcome them.

A single pane of glass can be shatterproof

No matter what stage of machine learning development you're in, if you are working with point solutions or siloed toolsets, you're creating vulnerabilities for your models and your business. To overcome this, you need to operate on a holistic, unified platform that lets you see operations through a single pane of glass, from data sources to production environments. This will enable your teams to move ML models from experimentation to production faster, and give you streamlined insight into both model quality metrics and technical performance metrics.

An integrated backbone helps you visualize data through charts, graphs, and other visualizations, allowing you to assess progress and iterate quickly. It can also provide automation, such as alerts when benchmarks are missed or anomalies are detected.
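To make the alerting idea concrete, here is a minimal sketch of benchmark checking. The function name, metric names, and thresholds are all illustrative, not part of any specific platform: the point is simply that live model metrics are compared against agreed floors, and any miss (or missing metric) raises an alert.

```python
def check_model_metrics(metrics, thresholds):
    """Return alert messages for any metric missing or below its benchmark."""
    alerts = []
    for name, floor in thresholds.items():
        value = metrics.get(name)
        if value is None:
            alerts.append(f"{name}: metric missing from monitoring feed")
        elif value < floor:
            alerts.append(f"{name}: {value:.3f} fell below benchmark {floor:.3f}")
    return alerts

# Example: nightly metrics for a hypothetical fraud model
live = {"auc": 0.71, "precision": 0.88}
benchmarks = {"auc": 0.80, "precision": 0.85, "recall": 0.75}
for alert in check_model_metrics(live, benchmarks):
    print(alert)
```

In practice a platform would push these alerts to email or chat and record them for audit, but the core comparison is this simple.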

Make or break your code

The early steps of ML exploration require you to wrangle raw data sources and prepare them for testing and modeling. Code will evolve quickly as you understand the data and the problem. If your data engineering and data science teams are working in a siloed fashion (using different point solutions), you will inevitably run into a common challenge: your production systems won't be able to run your ML models.

Rewriting code for production will dramatically slow down your progress, costing you time and money.

To overcome this (incredibly common) challenge, you should think about what getting to production and cross-team collaboration looks like from the get-go. Even if that stage feels light-years away, consider the power an integrated platform will provide to your data scientists and engineers long-term: real-time access to data and models in one place.

This point goes hand-in-hand with making sure you're leveraging machine learning operations (MLOps) standards. Do it early (and with help from your platform) so that you're speaking a common language across teams and production workflows. This ensures your data, code, and models will be structured to work properly in your production environments, whether they are on-prem or in the cloud.
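One concrete practice behind "structured to work properly in production" is defining feature preparation once and reusing it in both the training and serving paths, rather than reimplementing it for production. The sketch below is hypothetical (the field names and functions are invented for illustration), but it shows the shape of the idea:

```python
def prepare_features(record):
    """Shared feature logic, used identically at training time and at inference."""
    return [
        float(record["amount"]),
        1.0 if record.get("channel") == "online" else 0.0,
    ]

def build_training_matrix(records):
    # Training path: transform every historical record with the shared logic.
    return [prepare_features(r) for r in records]

def serve(record, model_predict):
    # Serving path: the exact same transform feeds the deployed model.
    return model_predict(prepare_features(record))

# Example: both paths produce identical feature vectors for the same input
row = {"amount": "12.50", "channel": "online"}
assert build_training_matrix([row])[0] == prepare_features(row)
```

Because both paths call the same function, there is no second implementation to drift out of sync and no code to rewrite when the model moves to production.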

Shadow IT gaps: where good efforts go to waste

Whether we're talking about machine learning or not, security in the enterprise is essential. Without the right end-to-end infrastructure, your machine learning efforts can become siloed and ungoverned (shadow IT), a threat to the entire program. If you lack visibility and are forced to code and recode models, you will create gaps. IT gaps are a haven for bad actors and endless tunnels where data can get lost or scrambled. You need a unified platform with strong end-to-end governance standards to ensure the data flowing through your production environments is secure.

Don’t let infrastructure stop you

The point of deployment is to unlock greater business value, moving from models that merely work to models that deliver predictive and prescriptive insights.

As you deploy models, they will consume larger volumes of data and more compute resources, and your infrastructure must be able to support these workloads. A common challenge is getting data and models to move seamlessly between on-prem and cloud environments for workflows such as bursting compute-intensive jobs or deploying models within the business or through the web. Without a flexible environment, it is virtually impossible to scale your models.

The key here is utilizing a platform that offers interoperability. This creates flexible workflows that can be continuously monitored and governed. With that in place, you also open up the opportunity to adopt microservices that give you visibility into production analytics, helping you iterate and scale quickly.
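A small illustration of what "workflows that move between on-prem and cloud" can look like in code: rather than hard-coding paths, the workflow reads its storage location from configuration and dispatches on the URI scheme. The backend names and example URIs below are assumptions for the sketch, not a specific product's API.

```python
from urllib.parse import urlparse

# Illustrative mapping from URI scheme to the environment it targets.
BACKENDS = {
    "file": "on-prem filesystem",
    "hdfs": "on-prem Hadoop cluster",
    "s3": "cloud object store",
}

def describe_backend(uri):
    """Identify which storage environment a configured URI points at."""
    scheme = urlparse(uri).scheme or "file"  # bare paths default to local files
    return BACKENDS.get(scheme, "unknown backend")

print(describe_backend("s3://models/churn/v3"))   # cloud object store
print(describe_backend("/data/models/churn/v3"))  # on-prem filesystem
```

With this pattern, bursting a compute-intensive job to the cloud is a configuration change (swap the URI), not a code rewrite, which is the kind of flexibility interoperable platforms provide out of the box.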

Learn more about platform considerations and how those will help you achieve maximum business value: 4 Essential Platform Factors For Enterprise ML.

Discover how to enable production MLOps at scale in our webinar, Enabling MLOps at Scale – Hands-On with Cloudera MLOps.

Alex Breshears

Sr. Product Manager: Production ML
