Why Data Quality Matters in Indicative Air Quality Monitoring
- David Green
- May 14
- 3 min read
Updated: Jun 9
In recent years, the deployment of indicative sensor-based air quality monitoring systems has gained momentum across the globe. These compact, cost-effective devices provide an agile alternative to traditional, bulky reference-grade monitors and offer new ways of understanding pollution patterns in real time. But as this sensor technology becomes more widespread, so too does the need to ensure data quality—a critical component in turning measurements into meaningful action.
In this blog—the first in a series exploring data quality in EarthSense’s Zephyr® Network—we’ll look at why data quality matters for indicative monitoring and how understanding the nuances of quality assurance (QA) and quality control (QC) is essential for interpreting sensor data. Along the way, we’ll demystify key terminology, explore common pitfalls, and set the scene for upcoming blogs that will delve into performance standards, real-time QAQC techniques, and data science innovations.
Indicative Sensors vs Reference Monitors: Different Tools for Different Jobs
To understand the importance of data quality, it’s first helpful to understand how indicative sensor-based systems differ from reference-grade monitoring.
Reference monitors are the gold standard—highly accurate and rigorously tested instruments that follow strict calibration and operating procedures to measure air pollutants such as NO₂, O₃, PM₁₀, and PM₂.₅. These devices typically demand regular maintenance, considerable power, and complex sample conditioning such as filtration, dehumidification, and temperature control to ensure stable measurement conditions.
In contrast, indicative sensors like the Zephyr® offer an accessible, lower-power, and portable solution. They may not match reference monitors in precision, but they excel in scalability and flexibility. Their small form factor and ease of deployment make them ideal for dense sensor networks that can monitor air quality at street level, in near real time, and in places where traditional monitors are impractical.
By enabling high spatial and temporal resolution, indicative systems help uncover localised pollution trends, inform behavioural interventions, and support smarter urban planning. In short, they are essential tools in modern air quality management—but their value hinges on trust in the data they produce.
Understanding the Challenges of Indicative Sensors
For all their advantages, indicative sensors also come with limitations. These are inherent to the sensor technology, but they can be managed with the right approach:
Sensitivity to target pollutants: Sensors must be carefully selected and configured to detect the pollutants of interest at the right concentration ranges.
Cross-interference: Gases like NO₂ and O₃ can interfere with each other’s signals, leading to skewed readings.
Sensor drift: Over time, sensor response can degrade due to environmental exposure, ageing, or physical damage.
Humidity effects: High or rapidly changing humidity can significantly affect sensor readings, particularly for optical particle counters. Moisture can impact the stability of the sensor signal and even lead to false positive readings if not adequately compensated for.
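To make the cross-interference and humidity effects above concrete, here is a minimal sketch of a linear correction applied to a raw NO₂ reading. This is purely illustrative—not EarthSense's algorithm—and every coefficient (O₃ cross-sensitivity, humidity slope, humidity baseline) is a hypothetical placeholder; in practice such values come from calibration against reference instruments.

```python
def correct_no2(raw_no2_ppb, o3_ppb, rel_humidity,
                o3_cross_sensitivity=0.3,   # hypothetical ppb-per-ppb O3 interference
                humidity_coeff=0.05,        # hypothetical ppb per % RH above baseline
                humidity_baseline=50.0):    # hypothetical reference humidity (% RH)
    """Apply a simple linear correction to a raw NO2 sensor reading.

    Subtracts an assumed O3 cross-interference term and a humidity term
    relative to a baseline. All coefficients are illustrative placeholders.
    """
    corrected = (raw_no2_ppb
                 - o3_cross_sensitivity * o3_ppb
                 - humidity_coeff * (rel_humidity - humidity_baseline))
    return max(corrected, 0.0)  # clamp: a concentration cannot be negative

# A raw reading of 40 ppb, with 30 ppb O3 present and 70% RH:
print(correct_no2(40.0, 30.0, 70.0))  # ≈ 30.0 ppb after both corrections
```

Real-world corrections are rarely this linear—temperature, sensor age, and pollutant mix all interact—which is exactly why ongoing validation matters.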
These limitations don’t make indicative sensors unreliable—but they do demand a thoughtful strategy for data correction, validation, and ongoing monitoring.
Key to this is the use of both laboratory-based and field-based calibration techniques to understand and adjust for sensor behaviours. At EarthSense, we combine these approaches to develop proprietary algorithms, shaped by years of co-location experience and machine learning training. This ensures we put our best foot forward in providing corrected data in real time, supporting high-confidence air quality insight from indicative sensor networks.
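As a simplified illustration of the co-location idea, the sketch below fits a slope-and-offset correction to paired sensor and reference readings using ordinary least squares. The data values are invented, and a production system would use richer multivariate models (and, as above, machine learning)—this only shows the core principle of learning a correction from side-by-side measurements.

```python
def fit_colocation(sensor, reference):
    """Fit a correction (reference ~ a * sensor + b) by ordinary least
    squares from paired co-location readings. Assumes the sensor values
    are not all identical (otherwise the variance below is zero)."""
    n = len(sensor)
    mean_s = sum(sensor) / n
    mean_r = sum(reference) / n
    cov = sum((s - mean_s) * (r - mean_r) for s, r in zip(sensor, reference))
    var = sum((s - mean_s) ** 2 for s in sensor)
    a = cov / var              # slope of the correction
    b = mean_r - a * mean_s    # offset of the correction
    return a, b

# Invented example: the sensor reads systematically high versus the reference.
sensor_readings = [13.0, 24.0, 35.0, 46.0]
reference_readings = [10.0, 20.0, 30.0, 40.0]
a, b = fit_colocation(sensor_readings, reference_readings)
corrected = [a * s + b for s in sensor_readings]  # now tracks the reference
```

Once fitted, the same (a, b) pair can be applied to live readings from that sensor—until drift makes re-calibration necessary, which is where continuous validation comes in.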
Additionally, data quality validation must be continuous and adaptable, especially as network conditions evolve. Until recently, though, the absence of clear performance standards made it difficult for users to assess whether sensor data could be considered trustworthy.
Standards Are Emerging—But the Path Isn’t Always Clear
In response to growing adoption of sensor technologies, new performance standards and regulatory guidance are finally being developed. These include:
CEN/TS 17660 Part 1 & 2 – outlining performance assessment and data processing guidelines.
ASTM D8559-24 – providing classification criteria for air quality sensors.
PAS 4023 – Defra-sponsored guidance on deployment, calibration, and performance expectations.
While welcome, these frameworks often lack clear guidance for end users, especially non-specialists who need to interpret sensor data with confidence. This is where platforms like Zephyr® can play a critical role: by embedding network-level QAQC mechanisms and supporting users with transparent, standards-aligned processes, we can help bridge the gap between raw data and reliable insight.
What’s Next in This Blog Series
This is just the beginning. In our upcoming blogs, we’ll be exploring:
Decoding the Standards – Breaking down what CEN, ASTM, and PAS guidance really mean for sensor users.
Real-time QAQC in the Zephyr Network – How EarthSense ensures data quality every minute of every day.
Case Study: West Midlands Combined Authority – Showcasing the power of data quality in a real-world deployment.
Neural Networks and Future Algorithms – How AI is helping us discover hidden patterns and improve sensor corrections.
Data quality isn’t a “nice to have”—it’s the foundation of any meaningful air quality policy. By understanding the strengths and limits of indicative sensors, and adopting a robust approach to QAQC, users can unlock new levels of insight and impact.
Stay tuned for more!