Liquid Class Instability Due to Temperature/Humidity?

Until now, we have successfully defined liquid classes for our assays on the Hamilton STAR/VANTAGE using the LVK and adjusting correction curves. Recently, however, we've noticed that the liquid classes for our most-used assay at one site are fluctuating significantly, which we suspect is due to changes in temperature and humidity.

We have received reports of under-aspiration by as much as 30% (e.g., a 10 µL target volume in a 300 µL tip actually aspirates only ~7 µL). After we adjust the liquid class to correct for this, we then receive reports of over-aspiration a few weeks later, requiring us to readjust.
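To make these reports comparable across sites, it helps to express them as a signed relative error from a gravimetric measurement. A minimal sketch, assuming water as the test liquid (the density values and the 0.006986 g balance reading below are illustrative assumptions, not our actual data):

```python
# Hypothetical sketch: convert a balance reading to volume via water
# density, then compute the signed relative aspiration error.
# Density values are approximate figures for pure water, for illustration.

WATER_DENSITY_G_PER_UL = {  # g/µL at selected temperatures (approx.)
    20: 0.0009982,
    22: 0.0009978,
    25: 0.0009970,
}

def measured_volume_ul(mass_g: float, temp_c: int) -> float:
    """Convert a balance reading (g) to volume (µL) via water density."""
    return mass_g / WATER_DENSITY_G_PER_UL[temp_c]

def relative_error(target_ul: float, actual_ul: float) -> float:
    """Signed relative error; -0.30 means 30% under-aspiration."""
    return (actual_ul - target_ul) / target_ul

actual = measured_volume_ul(0.006986, 22)      # ~7 µL of water at 22 °C
print(round(relative_error(10.0, actual), 2))  # → -0.3
```

Logging the error this way, together with the lab temperature and humidity at the time of measurement, would also let you check whether the drift actually correlates with the environment.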

Has anyone else experienced similar issues, particularly to this extreme degree? This problem affects all of our liquid classes, but the lower-volume transfers are the most significant offenders.

We plan on implementing better temperature and humidity regulation in the future, but are there any other factors that could be contributing to such large variations?
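For context on what "adjusting correction curves" means in practice: a correction curve maps each target volume to the volume the instrument actually commands, interpolating linearly between calibrated points. A minimal sketch with made-up curve points (real values come from LVK/gravimetric calibration, and the actual Hamilton implementation may differ):

```python
# Illustrative sketch of a liquid-class correction curve:
# piecewise-linear interpolation from target volume to commanded volume.
# The curve points are invented for illustration only.

CORRECTION_CURVE = [  # (target µL, corrected µL) pairs, sorted by target
    (5.0, 7.0),
    (10.0, 13.0),
    (50.0, 53.5),
    (300.0, 305.0),
]

def corrected_volume(target_ul: float) -> float:
    """Linearly interpolate the commanded volume for a target volume,
    clamping to the first/last calibrated point outside the curve."""
    pts = CORRECTION_CURVE
    if target_ul <= pts[0][0]:
        return pts[0][1]
    if target_ul >= pts[-1][0]:
        return pts[-1][1]
    for (t0, c0), (t1, c1) in zip(pts, pts[1:]):
        if t0 <= target_ul <= t1:
            frac = (target_ul - t0) / (t1 - t0)
            return c0 + frac * (c1 - c0)

print(corrected_volume(30.0))  # → 33.25
```

The steep low-volume end of such a curve is exactly where small environmental shifts produce the largest relative swings, which matches the symptom described above.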

How are you measuring the variations being reported? And do the liquids themselves change?

With an Artel system you can directly test a number of volume ranges much faster than with the LVK:

Do development on the LVK, then verification on the Artel.

That’s the problem with liquid handling in general: it depends on the environment. Since you are using a Hamilton with positive displacement, the effect should be minimized. However, if you are working with lower volumes, it can still have some influence. Make sure to measure your volumes on a calibrated device, as @cwehrhan suggests. Slower aspiration might help if you are in an area with low air pressure.

Also, the AC in the lab, and climate in general… don’t underestimate this.
