
As industrial organisations adopt AI-driven use cases—ranging from predictive maintenance and anomaly detection to automated decision-making—a critical question consistently emerges:

“How can AI outcomes be trusted if the underlying sensor data may be unreliable?”

At Davra, we design for the realities of operational environments. Sensors degrade, networks fail, batteries run down, and conditions change. For this reason, we do not treat data reliability as a single validation step. Instead, Davra approaches trust as a multi-layered system of defences, embedded across devices, data pipelines, analytics, and human workflows.

Catching Problems Early

The first thing we look at is whether devices and infrastructure are healthy. If a sensor is offline, low on battery, or behaving abnormally, the data it produces cannot be trusted.

The Platform continuously monitors things like connectivity status, battery levels, error codes, and reporting frequency. For example, if a smart meter suddenly stops sending data every 15 minutes and starts sending it once per hour, that change is detected immediately. The system can then raise an alert or issue a remote command to investigate.
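
As a rough illustration of that kind of check, the Python sketch below compares a device's observed reporting interval to its expected cadence. The function names, thresholds, and timestamps are assumptions for the example, not the platform's monitoring API.

```python
from datetime import datetime, timedelta

def median_interval_s(timestamps):
    """Median gap, in seconds, between consecutive reports."""
    gaps = sorted(
        (later - earlier).total_seconds()
        for earlier, later in zip(timestamps, timestamps[1:])
    )
    return gaps[len(gaps) // 2]

def check_reporting_rate(timestamps, expected=timedelta(minutes=15), tolerance=2.0):
    """Flag a device whose median reporting interval has drifted
    past `tolerance` times the expected interval."""
    observed = median_interval_s(timestamps)
    limit = expected.total_seconds() * tolerance
    if observed > limit:
        return f"ALERT: median interval {observed:.0f}s (expected ~{expected.total_seconds():.0f}s)"
    return "OK"

# A meter that quietly dropped from 15-minute to hourly reporting:
start = datetime(2025, 1, 1, 12, 0)
reports = [start + timedelta(hours=h) for h in range(6)]
print(check_reporting_rate(reports))  # -> ALERT: ...
```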

This approach applies not just to sensors, but also to gateways and upstream infrastructure. If a gateway loses connectivity, dozens or hundreds of devices may be affected—and that context is critical when interpreting downstream analytics.

Not All Data Is Equal

Even when data arrives on time, it may still be incorrect or misleading. For this reason, data quality checks are applied as data enters the system rather than assuming all incoming values are valid.

These checks can be implemented at the edge using the Edge SDK or centrally through APIs and message buses. In practice, teams define rules that reflect their operating environment.

For example, a temperature sensor in an industrial environment might normally operate between –20°C and 80°C. If a reading of 500°C arrives, the rules defined in the system can either drop the value, tag it as “low quality,” or emit a specific “poor data” event instead of treating it as normal telemetry.
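
A range rule of that sort can be sketched in a few lines. The thresholds and the shape of the tagged record below are assumptions; the point is that out-of-range values are marked, not silently accepted.

```python
VALID_RANGE = (-20.0, 80.0)  # expected operating range in °C (assumed)

def classify_reading(value, lo=VALID_RANGE[0], hi=VALID_RANGE[1]):
    """Tag a reading rather than silently accepting or discarding it."""
    if lo <= value <= hi:
        return {"value": value, "quality": "good"}
    # Out of range: keep the raw value but mark it so downstream
    # analytics can exclude or down-weight it.
    return {"value": value, "quality": "poor", "event": "poor_data"}

print(classify_reading(25.0))   # {'value': 25.0, 'quality': 'good'}
print(classify_reading(500.0))  # tagged as poor data, not normal telemetry
```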

The Rules Engine is commonly used to detect missing readings, sudden spikes, or values that fall outside expected thresholds. This ensures that analytics and AI models are always aware of the quality of the data they are consuming instead of silently incorporating faulty measurements.
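
Conceptually, the missing-reading and spike checks reduce to scanning consecutive readings, as in this sketch; the step threshold and data shape are assumed for illustration.

```python
def find_quality_issues(series, max_step=10.0):
    """Scan an ordered series of (timestamp_s, value) pairs for
    gaps (missing readings) and sudden spikes between neighbours."""
    issues = []
    for (t0, v0), (t1, v1) in zip(series, series[1:]):
        if v1 is None:
            issues.append((t1, "missing"))
        elif v0 is not None and abs(v1 - v0) > max_step:
            issues.append((t1, f"spike: jumped {v1 - v0:+.1f}"))
    return issues

readings = [(0, 21.0), (60, 21.4), (120, None), (180, 21.6), (240, 72.0)]
print(find_quality_issues(readings))
# -> [(120, 'missing'), (240, 'spike: jumped +50.4')]
```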

Knowing Where Data Came From

In many use cases, when data was generated matters just as much as the value itself. A pressure reading from five seconds ago may be useful, while the same reading from five hours ago may no longer be relevant.

To address this, every data point is timestamped and enriched with metadata as it moves through the system. For example, if a KPI shown on a dashboard is derived from several sensors, we can record which sensors were used, what calculations were applied, and when each input was last updated.
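
In practice, that enrichment means lineage travels with the value itself. The field names in this sketch are assumptions rather than the platform's actual schema.

```python
from datetime import datetime, timezone

def enrich(value, sensor_id, derivation=None, inputs=None):
    """Wrap a raw value with the lineage metadata it will carry
    through the pipeline: source, timestamp, and how it was derived."""
    return {
        "value": value,
        "sensor_id": sensor_id,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "derivation": derivation,   # e.g. "mean(pressure_1, pressure_2)"
        "inputs": inputs or [],     # upstream data points used
    }

p1 = enrich(4.2, "pressure_1")
p2 = enrich(4.6, "pressure_2")
kpi = enrich((p1["value"] + p2["value"]) / 2, "kpi_line_pressure",
             derivation="mean(pressure_1, pressure_2)", inputs=[p1, p2])
print(kpi["derivation"], "<-", [i["sensor_id"] for i in kpi["inputs"]])
```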

This level of lineage is particularly important for AI outputs. When an anomaly is detected or a recommendation is generated, users can determine whether the insight reflects genuine operational behaviour or whether it may be influenced by stale or degraded data.

Seeing the Whole Picture

Trust in AI also depends on visibility into how data behaves over time. Logs, metrics, and traces are exposed through the observability stack so teams can monitor data flows and system behaviour in real time.

In practice, this often means building dashboards that show whether devices are reporting as expected, whether data is arriving within acceptable time windows, and whether long-term trends suggest sensor drift or degradation. These views help teams detect issues early, before they impact analytics or automated actions.
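
As one example of a drift check behind such a dashboard, a simple least-squares slope over a long window can reveal a signal that should be flat but is slowly creeping. The data and threshold below are assumed, application-specific values.

```python
def drift_slope(values):
    """Least-squares slope of a series against its index: a sustained
    non-zero slope on a signal that should be flat hints at sensor drift."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

daily_offsets = [0.02, 0.05, 0.04, 0.09, 0.11, 0.13, 0.16]  # vs reference
slope = drift_slope(daily_offsets)
if slope > 0.01:  # assumed per-day threshold
    print(f"possible drift: {slope:+.3f} per day")
```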

For critical applications, measurements are often cross-validated across multiple sources. For example, water usage might be measured by both a flow sensor and a controller within the system. If those values diverge, the system can flag the inconsistency before any automated action is taken.
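
A cross-validation step of that kind amounts to comparing independent measurements of the same quantity before acting. The 5% tolerance in this sketch is an assumption.

```python
def cross_validate(flow_sensor_lpm, controller_lpm, rel_tol=0.05):
    """Compare two independent measurements of the same quantity and
    block automated action if they diverge beyond tolerance."""
    baseline = max(abs(flow_sensor_lpm), abs(controller_lpm), 1e-9)
    divergence = abs(flow_sensor_lpm - controller_lpm) / baseline
    if divergence > rel_tol:
        return {"ok": False, "divergence": divergence, "action": "flag_for_review"}
    return {"ok": True, "divergence": divergence}

print(cross_validate(102.0, 100.5))  # within 5%: proceed
print(cross_validate(140.0, 100.0))  # diverges: flag before any automated action
```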

Human-In-The-Loop

AI models are only as good as the data used to train them, and this is where human judgment is essential. During development, data scientists use tools like Jupyter Notebooks to explore datasets, visualise trends, and assess data quality.

For example, a data scientist might discover that a subset of sensors regularly reports missing values during maintenance windows. Rather than letting the model treat those gaps as anomalies, the data can be labelled or excluded so the model learns the correct behaviour.
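
In a notebook, that labelling step might look like the pandas sketch below; the column names and the maintenance schedule are assumptions for the example.

```python
import pandas as pd

# Assumed telemetry frame with a timestamp index and a 'value' column.
idx = pd.date_range("2025-01-01", periods=8, freq="h")
df = pd.DataFrame({"value": [5.1, 5.0, None, None, 5.2, 5.3, 5.1, 5.0]}, index=idx)

# Known maintenance window (assumed schedule): 02:00-04:00.
maintenance = (df.index >= "2025-01-01 02:00") & (df.index < "2025-01-01 04:00")

# Label the gaps instead of letting the model treat them as anomalies...
df["quality"] = "good"
df.loc[maintenance, "quality"] = "maintenance"

# ...and train only on readings outside maintenance windows.
train = df[df["quality"] == "good"].dropna()
print(train)
```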

This human-in-the-loop approach ensures that AI supports decision-making rather than blindly replacing it. Clear visibility into data quality allows teams to understand not just what a model predicts, but how much confidence to place in that prediction.

Data Quality Transparency

Ultimately, trust comes from transparency. Instead of hiding uncertainty, we surface it.

AI outputs can include information about data freshness, known gaps in reporting, or quality issues that may have influenced the result. In addition, Anomaly Detection can be used not just on operational data, but on data quality itself to flag unusual patterns that suggest sensor drift or failure.
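
Surfacing uncertainty can be as simple as returning quality context alongside every prediction. The fields and freshness cutoff in this sketch are illustrative, not an actual output schema.

```python
from datetime import datetime, timezone

def prediction_with_context(score, inputs):
    """Return a model output together with the data-quality context
    that may have influenced it, instead of a bare number."""
    stale = [i["sensor_id"] for i in inputs if i["age_s"] > 3600]  # assumed cutoff
    poor = [i["sensor_id"] for i in inputs if i["quality"] != "good"]
    return {
        "score": score,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "stale_inputs": stale,            # data older than the freshness window
        "poor_quality_inputs": poor,
        "confidence_note": "review" if (stale or poor) else "normal",
    }

inputs = [
    {"sensor_id": "temp_1", "age_s": 120, "quality": "good"},
    {"sensor_id": "temp_2", "age_s": 7200, "quality": "good"},  # stale
]
print(prediction_with_context(0.87, inputs))
```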

Glora, Davra’s agentic AI layer, makes these insights more accessible by letting users ask conversational questions about them. A user might ask what data was used to generate a result or whether there were any quality issues at the time, making AI-driven conclusions easier to understand and validate.

From Imperfect Data to Reliable Intelligence

Trustworthy AI does not depend on perfect sensors or flawless connectivity. It depends on systems that explicitly manage uncertainty, preserve context, and make limitations visible.

By designing for imperfect data and combining edge validation, ingestion rules, observability, and human oversight, we enable organisations to move beyond experimental AI and toward operational intelligence that can be relied upon in real-world conditions, even when the data is incomplete or noisy.
