Relative humidity is one of the trickier readings to get right, and there are several things to consider when comparing readings between devices.
Conserv Cloud is a free, cloud-based environmental monitoring tool. You can create an account at https://app.conserv.io.
What is relative humidity?
Relative humidity is a measurement of the amount of moisture (water) in the air, relative to the amount of moisture it could hold if it were saturated (unable to hold more water), at a specific temperature.
Make a note of that last part, as temperature is a critical input to the relative humidity calculation a sensor performs.
Relative humidity and temperature are inversely related: assuming the absolute humidity stays constant, when temperature goes up, RH goes down.
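To make this relationship concrete, here is a small Python sketch using the Magnus approximation for saturation vapor pressure. The Magnus coefficients are standard; the 20 °C / 50% RH scenario is just an illustration:

```python
import math

def saturation_vapor_pressure(temp_c):
    """Approximate saturation vapor pressure in hPa (Magnus formula)."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def rh_after_warming(rh_start, temp_start_c, temp_end_c):
    """New RH when temperature changes but the absolute moisture is constant."""
    vapor_pressure = rh_start / 100 * saturation_vapor_pressure(temp_start_c)
    return 100 * vapor_pressure / saturation_vapor_pressure(temp_end_c)

# Air at 20 °C and 50% RH, warmed to 25 °C with no moisture added or removed:
print(round(rh_after_warming(50, 20, 25), 1))  # → 36.9
```

The same amount of water is in the air in both cases; only the air's capacity to hold water changed.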
How is relative humidity measured?
Sensors rely on the physical properties of materials to measure relative humidity. There are two common designs, resistive and capacitive, but both work in a similar way.
Imagine a relative humidity sensor as a sandwich. The bread in this sandwich is made of conductive materials, like metal. The filling of the sandwich is a less conductive material that absorbs moisture from and releases it to the atmosphere, eventually reaching equilibrium.
As the filling absorbs moisture its electrical properties change (either resistance or capacitance, depending on the sensor design). By measuring the change in the filling relative to the bread of the sandwich, we can get a value that is convertible to relative humidity!
However, this is only part of the equation. An accurate relative humidity calculation from a sensor also requires an accurate temperature reading. The combination of the readings from the two sensor components gives us a reading that is close to the truth.
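A minimal sketch of that conversion, assuming a hypothetical capacitive element with a linear transfer function. The calibration constants and the temperature-compensation coefficient here are invented for illustration; a real sensor's values come from its datasheet:

```python
# Hypothetical calibration constants (a real datasheet supplies these).
C_DRY = 180.0       # capacitance in pF at 0% RH (assumed)
SENSITIVITY = 0.3   # pF of capacitance change per %RH (assumed)

def capacitance_to_rh(capacitance_pf, temp_c):
    """Convert a raw capacitance measurement to %RH (illustrative model)."""
    rh = (capacitance_pf - C_DRY) / SENSITIVITY
    # Real devices apply a temperature-compensation term using the
    # co-located temperature reading; this coefficient is a placeholder.
    rh += -0.05 * (temp_c - 25.0)
    return max(0.0, min(100.0, rh))

print(round(capacitance_to_rh(195.0, 25.0), 1))  # → 50.0
```

The key point is the last step: the raw electrical measurement is only meaningful once combined with an accurate temperature reading.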
What factors affect the accuracy of readings?
The accuracy of an RH reading depends on several factors.
The sensor's accuracy range, repeatability, hysteresis, response time, and annual drift all contribute to how close a reading is to reality, and all of these values should be available in your vendor's documentation. The specific environment a sensor is in, and how it is moved around, also play a role.
Read the knowledge base article on the Conserv sensor specifications
The accuracy range of an RH sensor is expressed as something like "Typical +/- 2%, Maximum +/- 3%". This means the sensor is calibrated to read within 2 percentage points of the actual relative humidity in most cases, and within 3 points in the worst case.
As an example, if it is exactly 50% RH at the sensor, the sensor would typically read between 48% and 52%, but could read anywhere from 47% to 53%. When comparing readings between sensors, this can lead to exaggerated differences.
Let's say we are comparing RH readings between two sensors from different vendors. Both have an accuracy rating of +/- 3% maximum. If the actual RH at the sensor is 50%, one could be reading 47% and the other 53%, creating a 6% difference between the two readings and leaving us to wonder which one is correct.
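The worst-case divergence in that scenario is easy to compute. A small sketch, using the 50% RH and +/- 3% figures from the example above:

```python
def reading_bounds(true_rh, max_error):
    """Range a sensor may legitimately report, given its maximum accuracy spec."""
    return (true_rh - max_error, true_rh + max_error)

true_rh = 50.0
sensor_a = reading_bounds(true_rh, 3.0)  # vendor A: +/- 3% maximum
sensor_b = reading_bounds(true_rh, 3.0)  # vendor B: +/- 3% maximum

# Worst case: one sensor at the bottom of its band, the other at the top.
worst_case_gap = sensor_b[1] - sensor_a[0]
print(worst_case_gap)  # → 6.0
```

Both sensors are operating entirely within specification, yet their readings can differ by 6 points.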
Repeatability is a measurement of how close together readings will be when the same conditions are present.
Most modern sensors have good repeatability. For example, the sensor chip in Conserv's sensor has a repeatability of 0.1%, which means that if the conditions at the sensor are the same, each reading should be within 0.1% of previous readings.
A sensor can have great repeatability, but poor accuracy. If a sensor has a poor repeatability rating, this can contribute to differences in readings when comparing devices. This can also show itself in your data, as a "jitter" in the readings in a space with stable conditions.
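A quick simulation of that jitter, assuming only repeatability noise. The 0.1% figure is the one quoted above for Conserv's sensor chip; modeling the noise as uniform is a simplification:

```python
import random

random.seed(0)  # fixed seed so the example is deterministic

REPEATABILITY = 0.1  # %RH, the repeatability figure quoted above
stable_rh = 45.0     # assume a perfectly stable room

# Model each reading as the true value plus bounded repeatability noise.
readings = [stable_rh + random.uniform(-REPEATABILITY, REPEATABILITY)
            for _ in range(10)]

spread = max(readings) - min(readings)
print(round(spread, 2))  # jitter never exceeds 0.2 points in this model
```

With good repeatability the trace looks smooth; a sensor with poor repeatability produces a visibly noisy line even when conditions are constant.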
Broadly speaking, hysteresis is the dependence of the state of a system on its history.
In practice, this means that as RH rises or falls, a sensor's readings will lag in the direction of its previous state. If you move a sensor from a low-humidity environment into a higher-humidity one, its readings will stay on the low side until the sensor has fully adjusted to the new environment.
The practical effect is much the same as response time: when an instrument is moved into a new space, it should be given time to stabilize before readings are taken for comparison.
Any kind of sensing device, whether that is a mercury thermometer or an electronic sensor, will take a little while to get an accurate reading.
The amount of time it takes for readings to reflect reality is the instrument's response time. For an RH sensor, this is the time it takes for the sensing material to absorb or release enough moisture to reach equilibrium with its environment. Every RH measurement device has a response time, so when you first move an instrument into a new environment there will be a short delay before a good reading is available.
In practice, this can mean that a handheld instrument taken into a space should be given some time to acclimate itself before a reading is taken for comparison.
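This settling behavior is commonly modeled as a first-order exponential approach to the true value. A sketch of that model; the 8-second time constant is illustrative, not a real specification:

```python
import math

def sensor_reading(rh_env, rh_start, elapsed_s, time_constant_s=8.0):
    """First-order response: the reading approaches the true RH exponentially.
    time_constant_s is an assumed value, not any vendor's spec."""
    return rh_env + (rh_start - rh_env) * math.exp(-elapsed_s / time_constant_s)

# Instrument carried from a 30% RH room into a 55% RH room:
for t in (0, 8, 30, 60):
    print(f"{t:>2}s: {sensor_reading(55, 30, t):.1f}% RH")
```

After one time constant the reading has covered about 63% of the gap; after several, it is effectively settled, which is why a short acclimation wait matters.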
Annual drift is the amount of accuracy that a sensor loses each year.
A sensor experiences wear and tear, of a sort, and over time it begins to drift away from reality. This is expressed as a percentage per year. Conserv's sensor, for example, will drift approximately 0.25% each year; after a few years, it will be an additional percentage point off.
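The arithmetic is simple. A sketch using the 0.25%-per-year figure above; the 2% starting accuracy is an assumed typical spec for illustration:

```python
ANNUAL_DRIFT = 0.25   # %RH lost per year, per the figure quoted above
BASE_ACCURACY = 2.0   # typical accuracy when new, %RH (assumed)

for years in range(1, 5):
    worst_case = BASE_ACCURACY + ANNUAL_DRIFT * years
    print(f"after {years} year(s): +/- {worst_case}% RH")
# After 4 years the sensor can be a full extra percentage point off.
```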
Conserv deals with this by replacing sensors when the drift reaches an unacceptable amount as a part of our subscription offering. For other instruments, you can measure drift by checking readings against a reference, or by sending them off to the manufacturer for recalibration.
Placement of Sensors
When comparing readings between devices, even small variations in the environment can create significant differences, so the devices should be placed as close to each other as possible. Variations in airflow, light falling on the devices, and proximity to people or entrances all affect readings. Even a foot or two of separation can make a noticeable difference.
Where this can really matter is when there is a significant difference in temperature, as differing temperature readings will have an effect on RH calculations. Depending on the sensor design and RH being measured, temperature differences as small as 1°C can cause a 3-4% difference in the relative humidity calculation.
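You can verify that sensitivity with the Magnus approximation for saturation vapor pressure. Here two co-located sensors see identical moisture but report temperatures 1 °C apart; the Magnus coefficients are standard, the scenario values illustrative:

```python
import math

def saturation_vapor_pressure(temp_c):
    """Approximate saturation vapor pressure in hPa (Magnus formula)."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

# The actual water content (vapor pressure) is identical at both sensors:
# air at 20 °C and 50% RH.
vapor = 0.5 * saturation_vapor_pressure(20.0)

rh_cool = 100 * vapor / saturation_vapor_pressure(20.0)  # sensor reading 20 °C
rh_warm = 100 * vapor / saturation_vapor_pressure(21.0)  # sensor reading 21 °C

print(round(rh_cool - rh_warm, 1))  # → 3.0
```

A single degree of temperature disagreement accounts for about a 3-point gap in the calculated RH, before any RH-sensor error is even considered.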
Environment can also damage a sensor.
Exposure to solvent vapors or other chemicals can degrade the materials used in the sensor. This might be a specific risk if handheld instruments are stored in a lab environment where solvents are in use. A sensor element should not be touched, blasted with compressed air, or otherwise contaminated.
Contamination is a common cause of sensor accuracy problems, as is improper storage. Prolonged exposure to very high humidity can cause a sensor to develop a temporary, or in some cases permanent, drift.
What can I do about it?
Comparing readings between different devices can raise more questions than it answers, but there are some things you can do to get to the bottom of differences in readings:
1. First and foremost, make sure your equipment is in good working order, has been recently calibrated, and has been properly stored.
2. Give your devices ample time to acclimate to a new environment. Allowing a device some time in a new environment helps to mitigate the effects of hysteresis and response time. Once readings have stabilized (they are no longer rapidly climbing or falling in a stable environment), a more accurate comparison can be made.
3. Ensure that the two devices are as close together as possible, as even small differences in position can cause a difference in readings.
4. Check the temperature readings of each device. In a closed system there is an inverse relationship between temperature and relative humidity. As temperature declines, RH will increase. If you are comparing two sensors and one is reading a slightly higher temperature it should show a lower RH reading (within its accuracy range).
5. Check the manufacturer's specifications for the reading accuracy range of both devices. If you are comparing two devices that are both recently calibrated and see a large difference in the reading, it is quite possible that both devices are reading within their specifications and the reading accuracy range may be the root cause of the difference.