This whitepaper explores the practical application of Davra’s Anomaly Detector across a range of real-world data scenarios. Using a Mahalanobis distance–based approach, the algorithm is evaluated in six distinct scenarios, each based on 90 days of data (60 days for training and 30 days for testing).

The scenarios are designed to reflect realistic IoT device outputs, incorporating variations in data inputs, types of anomalous behaviour, and the number of input metrics. Collectively, they demonstrate the flexibility and robustness of the approach and are intended to be representative of the majority of operational anomaly detection use cases.

The six scenarios examined are:

  1. Normal Day-Night Behaviour
  2. Reliable Sensors with One Unstable Signal
  3. Sensors That Move Together in a Predictable Way
  4. Changing Relationships and Sudden Instability in Sensor Data
  5. Changes That Alter What ‘Normal’ Looks Like
  6. The Effect of Additional Normal Metrics

Across these scenarios, the results demonstrate that the Mahalanobis distance–based approach is well suited to a broad range of conditions, performing robustly even in cases involving challenging or non-ideal data distributions. While the algorithm shows tolerance to deviations from ideal assumptions, it does rely on a stable and relatively clean training period to establish an effective baseline.

Overall, this whitepaper illustrates the effectiveness of this approach for detecting anomalies and outliers commonly encountered in real-world industrial and IoT environments.

Davra Anomaly Detection: A brief summary

Davra’s Anomaly Detector is an out-of-the-box solution designed to easily detect anomalous measurements in IoT data, irrespective of the type of data being collected. The algorithm is based on Mahalanobis Distance (MD), a statistical measure of how far a point lies from a distribution (explained further in "What is Mahalanobis Distance?"). The approach goes beyond the raw statistic, however: a “baseline” is built over a set period of time, and all new MD values are calculated against that baseline. This elevates a simple statistical technique into a system that learns the patterns within the data and continually evolves as the baseline is updated.

The Anomaly Detector features its own UI and supports aggregation of the dataset to reduce or remove certain noise characteristics, making it easier to see when an anomaly occurs. Supported aggregation types are the average/mean, sum, count, minimum and maximum, giving considerable versatility. However, given certain limitations of the MD approach to anomaly detection (explained below), care should be taken when using these aggregations, especially aggregations other than the mean.
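To illustrate what such pre-aggregation looks like in practice, the following sketch (hypothetical pandas code on invented readings, not the detector's own implementation) buckets simulated one-second sensor values into five-minute windows using the aggregation types listed above:

```python
import numpy as np
import pandas as pd

# Hypothetical 1-second sensor readings over one hour, hovering around 20.0.
idx = pd.date_range("2024-01-01", periods=3600, freq="s")
raw = pd.Series(np.random.default_rng(2).normal(20.0, 0.5, 3600), index=idx)

# Aggregate into 5-minute buckets. The mean smooths noise before anomaly
# scoring, while min/max preserve extremes and count reveals reporting gaps.
agg = raw.resample("5min").agg(["mean", "sum", "count", "min", "max"])
print(agg.head())
```

Mean aggregation suppresses noise ahead of MD scoring, whereas sum, min and max keep extremes and count exposes missing data, which is one reason non-mean aggregations warrant the extra care mentioned above.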

Both the aggregated data and the MD output are written back to the time series of the device(s) being monitored, allowing further action to be taken with tools such as the Rules Engine. The Anomaly Detector is designed to be an easy-to-use, out-of-the-box solution with the versatility to monitor a wide range of assets and provide greater insight than simple rules-based thresholding.

What is Mahalanobis Distance?

Mahalanobis distance (MD) measures how far a point is from the center of a multivariate distribution, taking into account the shape of the data — specifically the variances of each feature and the correlations between them.

Unlike Euclidean distance, which treats every dimension as equally scaled and independent, Mahalanobis distance rescales the space using the covariance matrix. This means that distances along directions with high variance are reduced, and distances in tightly correlated or low-variance directions are amplified.
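To make this rescaling concrete, here is a hedged NumPy sketch on synthetic, strongly correlated data (the data and test points are invented for illustration): two points sit at the same Euclidean distance from the centroid, but the point lying against the correlation receives a much larger Mahalanobis distance.

```python
import numpy as np

# Synthetic, strongly correlated 2-D data: y tracks x closely, so the
# high-variance direction lies along the diagonal.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 5000)
y = x + rng.normal(0.0, 0.1, 5000)
data = np.column_stack([x, y])

mean = data.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(data, rowvar=False))

def mahalanobis(point):
    diff = point - mean
    return float(np.sqrt(diff @ cov_inv @ diff))

on_trend = np.array([2.0, 2.0])    # along the correlation
off_trend = np.array([2.0, -2.0])  # against the correlation

# Both points are (almost exactly) the same Euclidean distance from the
# centroid, but the off-trend point is far more unusual under MD.
print(mahalanobis(on_trend), mahalanobis(off_trend))
```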

When we compute the distance from each point to the dataset’s centroid (the multivariate mean), MD gives a powerful measure of how unusual each point is. Points with large Mahalanobis distances are likely to be statistical outliers or anomalies.

The algorithm is defined by the squared distance:

MD² = (xB − xA)ᵀ C⁻¹ (xB − xA)

(the distance MD itself is the square root of this quantity).

Where:

  • xA and xB are the pair of points between which the distance is measured. In anomaly detection, one of them is usually the centroid, i.e. the average point considering all of the multivariate information.
  • C is the covariance matrix. Often described as the shape of the overall data, it allows the calculation to account for the variance of, and correlation between, the inputs, making this more than a simple straight-line distance between two points.
  • T is the transpose operation. It turns the column vector of differences into a row vector so that the multiplication with the covariance matrix yields a single number.
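The definition above can be sketched directly in NumPy (an illustrative implementation against the centroid, not the product's code):

```python
import numpy as np

def mahalanobis_to_centroid(X):
    """Mahalanobis distance of each row of X to the dataset's centroid."""
    centroid = X.mean(axis=0)                       # xA: the multivariate mean
    C_inv = np.linalg.inv(np.cov(X, rowvar=False))  # inverse covariance matrix
    diffs = X - centroid                            # (xB - xA) for every point
    # Row-wise (xB - xA)^T C^-1 (xB - xA); the transpose is implicit in einsum
    md_sq = np.einsum("ij,jk,ik->i", diffs, C_inv, diffs)
    return np.sqrt(md_sq)

# Example: 200 readings from 3 well-behaved sensors, plus one injected outlier
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[0] = [8.0, 8.0, 8.0]

distances = mahalanobis_to_centroid(X)
print(distances.argmax())  # the injected outlier scores the largest distance
```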

MD has a few expectations for the input data in order to be successful at determining statistical outliers or anomalies. MD expects:

  1. All data inputs to be normally distributed, or at least elliptical in their distribution.
  2. All data inputs to have variance, i.e. no inputs that display either a very small range or constant values.

If the input metrics obey these rules, the squared MD output should follow a Chi-squared distribution. Of the six scenarios analysed in the main section of this report, only two of the MD detectors have all of their metrics correlating nicely with a normal distribution, and only one produces a Chi-squared output. While later sections examine these results more closely, the algorithm itself is fairly tolerant of non-normally distributed inputs, provided they cluster into a single entity in multivariate space, and it can handle inputs of very different scales without the need for normalisation or standardisation.
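One way to see the Chi-squared relationship, sketched here with NumPy and SciPy on synthetic normally distributed data (the 99.9th-percentile threshold is an illustrative choice, not a product default):

```python
import numpy as np
from scipy import stats

# Synthetic "clean" training data: 2000 readings from 3 normal metrics.
rng = np.random.default_rng(1)
X = rng.multivariate_normal(mean=[0.0, 0.0, 0.0], cov=np.eye(3), size=2000)

centroid = X.mean(axis=0)
C_inv = np.linalg.inv(np.cov(X, rowvar=False))
diffs = X - centroid
md_sq = np.einsum("ij,jk,ik->i", diffs, C_inv, diffs)  # squared MD per point

# With k normally distributed inputs, squared MD follows Chi-squared(k),
# so a percentile of that distribution gives a principled anomaly threshold.
threshold = stats.chi2.ppf(0.999, df=X.shape[1])
anomalous_fraction = (md_sq > threshold).mean()
print(anomalous_fraction)  # expected to sit near 0.001 for clean data
```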

To continue reading, please download the full whitepaper.
