Detecting and Managing Data Drift with MLOps

25.03.2021 | 6 min read


We recently wrote about the benefits that MLOps can bring to your business, particularly when it comes to eliminating time-consuming tasks from the development process. But it can also be of great value in detecting and managing data drift. Spotting changes and anomalies in new incoming data after a model has been deployed is key to ensuring that its predictions remain correct and can be used safely. In this blog post, we want to discuss data drift in more detail and show you how it can be managed effectively with MLOps.

A recap on data drift and its main causes

When an ML model is first deployed in production, data scientists are predominantly concerned with how well the model will perform over time. The major question they're asking is: is the model still capturing the patterns in new incoming data as effectively as it did during the design phase?

Any difference between the distribution of the data in the training and test stages is termed dataset shift.

There are many possible causes of dataset shift, ranging from bias introduced at the experimental design stage through to test conditions that simply cannot be reproduced at training time.

There are three main types of shift: covariate shift, prior probability shift and concept drift. We'll explain each briefly below, and illustrate all three with a small synthetic sketch after the list.

1. Covariate shift - this is a change in the distribution of one or more of the independent variables (input features).

2. Prior probability shift - this can be viewed as the polar opposite of covariate shift: the input feature distributions remain the same, but the distribution of the target variable changes.

3. Concept drift - concept drift happens when the relationship between the input and output variables changes. Here we are no longer looking only at the X variables or only at the Y variable, but at the relationship between them.
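
To make the three types concrete, here's a minimal synthetic sketch in Python (NumPy only). It assumes a toy world with a single input feature x and a binary target y, where y = 1 whenever x > 0 at training time; all names and numbers here are illustrative, not taken from any real system.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Training-time world: x ~ N(0, 1) and the rule y = 1[x > 0]
x_train = rng.normal(0.0, 1.0, n)
y_train = (x_train > 0).astype(int)

# 1. Covariate shift: P(x) changes, the rule P(y|x) stays the same
x_cov = rng.normal(1.5, 1.0, n)        # inputs drift to the right
y_cov = (x_cov > 0).astype(int)        # same decision rule as in training

# 2. Prior probability shift: P(y) changes, P(x|y) stays the same
y_prior = rng.binomial(1, 0.9, n)      # positives jump from ~50% to 90%
x_prior = np.where(y_prior == 1,
                   np.abs(rng.normal(0.0, 1.0, n)),    # x|y=1 as before
                   -np.abs(rng.normal(0.0, 1.0, n)))   # x|y=0 as before

# 3. Concept drift: P(y|x) changes, P(x) stays the same
x_con = rng.normal(0.0, 1.0, n)
y_con = (x_con < 0).astype(int)        # the input-output relation flips
```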

Useful methods of dealing with drift

One of the key functions of MLOps is to detect anomalies and defects in ML model development and, in doing so, help IT teams to spot where fixes and improvements need to be made. This results in fewer systemic failures. MLOps supports models as they adapt to their own evolution and to drifts in data, and in doing so helps to build dynamic systems.

Using a black box shift detector for managing prior shift

There are several ways of managing data drift, but we're always aiming for a balance of accuracy and efficiency. This is where a black box detector can be useful. Such detectors are handy for managing prior shift in situations where we have no access to the features used by the deployed model.

About the method:

The aim here is to build a shift detector on top of the primary model. The primary model is then used as a black-box predictor, and ML testers look for changes in the distribution of its predictions. Any differences spotted can be read as a sign of drift.

But note that while this method is effective at spotting prior shift, it is not useful in the case of covariate shift, which has much less impact on the predictions.

The ML testers should begin by amassing the primary model's predictions on both the source and target datasets and performing a statistical check to see how the two distributions differ.

As Dataiku explains: “One possibility is to use the Kolmogorov-Smirnov test and compute again the p-value, i.e., the probability of having at least such distance between the two distributions of predictions in case of absent drift. For this technique we are looking at the predictions (which are vectors of dimension K), the number of classes, and perform K independent univariate Kolmogorov-Smirnov tests. Then we apply the Bonferroni correction, taking the minimum p-value from all the K tests and requiring this p-value to be less than a desired significance level divided by K.”
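
As a rough sketch of that recipe in Python, here's one way it could look using SciPy's two-sample Kolmogorov-Smirnov test. The function name, array shapes and alpha default are our own assumptions, not code from Dataiku:

```python
from scipy.stats import ks_2samp

def prior_shift_detected(source_preds, target_preds, alpha=0.05):
    """Black-box prior-shift check on a deployed model's predictions.

    source_preds and target_preds are arrays of shape (n_samples, K)
    holding the model's predicted class probabilities on the source
    and target data respectively.
    """
    k = source_preds.shape[1]                     # number of classes K
    # One univariate two-sample Kolmogorov-Smirnov test per class
    p_values = [ks_2samp(source_preds[:, j], target_preds[:, j]).pvalue
                for j in range(k)]
    # Bonferroni correction: flag drift if min p-value < alpha / K
    return min(p_values) < alpha / k, p_values
```

In practice, source_preds might come from calling the deployed model's predict_proba on a held-out slice of the training data, and target_preds from the same call on live traffic.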

But a few key things need to be in place to allow us to accurately perform the test:

  • A well-performing primary model
  • The source dataset must contain examples from every class
  • There must be a prior shift occurring

In practice, it's impossible to know whether all three assumptions hold at the time of conducting the test, but the technique is still useful nonetheless.

Using a domain classifier for covariate shift

About the method: a different testing approach applies for covariate shift.

In this scenario, we detect shift by explicitly training a domain classifier to discriminate between data from the source and target domains.

As outlined in An Empirical Study of Methods for Detecting Dataset Shift, we “partition both the source data and target data into two halves, using the first to train a domain classifier to distinguish source (class 0) from target (class 1) data. We then apply this model to the second half and subsequently conduct a significance test to determine if the classifier’s performance is statistically different from random chance.”
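
Here's a minimal sketch of that procedure, assuming tabular features held in NumPy arrays. The random forest is an arbitrary choice of domain classifier, and the binomial test against the 50% chance level is one reasonable reading of the paper's "significance test", not necessarily the exact one the authors used:

```python
import numpy as np
from scipy.stats import binomtest
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def covariate_shift_detected(x_source, x_target, alpha=0.05):
    """Domain-classifier check: can a model tell source and target apart?"""
    X = np.vstack([x_source, x_target])
    y = np.concatenate([np.zeros(len(x_source)), np.ones(len(x_target))])

    # First half trains the domain classifier, second half evaluates it
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    correct = int((clf.predict(X_test) == y_test).sum())

    # With equally sized source and target samples, chance accuracy is
    # ~50%; test whether the classifier does significantly better
    p_value = binomtest(correct, len(y_test), p=0.5,
                        alternative='greater').pvalue
    return p_value < alpha, p_value
```

As a bonus, the fitted classifier's predict_proba scores give a per-observation measure of how "target-like" each sample looks, which is what enables the sample-level analysis described below.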

The one significant disadvantage of this method is cost: you have to perform a new training run every time a batch of new incoming data arrives. But there are notable advantages too. The primary one is that the method can be used locally, at the sample level, allowing you to see which observations differ most from the original dataset. It also makes it fairly easy to detect partial drift, which occurs when only some of the observations or some of the features have drifted.

So why is MLOps useful when it comes to data drift?

In summary, monitoring data drift without the right tools can be lengthy and arduous. Data scientists responsible for model maintenance have to continuously compare live traffic with the baseline. Using one of the methods above can speed up the process, and adopting some key MLOps tools can lead to further efficiency.

Hydrosphere - Hydrosphere provides an interpretation of model predictions without the need for access to the model structure. Moreover, Hydrosphere provides explainable alerts when changes in distributions happen. You can understand what happened to your data and act upon it.

Fiddler - With Fiddler’s drift detection capabilities, you receive live alerts about changes in model feature or prediction distributions from their training baselines. This enables you to determine when it’s time to retrain models based on the impact of changes. Additionally, Fiddler attributes these changes to the underlying features causing them, using AI Explainability.

Monitoring for data drift in ML models is essential for enabling ML teams to stay ahead of performance issues in production, which is why it's worth finding a swift and effective way to do it that works for your business.

Looking for an experienced team to bring your digital product to life?

Get in touch for a free consultation on hello@10Clouds.com. Our friendly team will get back to you within one working day!
