Acceptable tolerances for quantification of NGS DNA libraries

Hello everyone. I’m working on an automated DNA quantification protocol using Quant-iT PicoGreen chemistry on a BioTek Synergy reader. We’re not currently using the monochromator (i.e., we’re reading through the optical emission filters). Input samples are Illumina DNA libraries diluted 200-fold.

Standards are diluted from a lambda stock. The curve contains a blank plus points at 0.01, 0.10, and 1.0 ng/ul (all per vendor recommendations). Combined with the 200-fold sample dilution, this yields an effective quantitative range of 2-200 ng/ul for the undiluted libraries.
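For concreteness, the arithmetic behind that range is just the in-assay curve limits scaled back up by the dilution factor; a quick sketch:

```python
# Back-calculate the quantifiable library range from the in-assay
# standard curve limits and the 200-fold total dilution described above.
dilution = 200
assay_low, assay_high = 0.01, 1.0   # ng/ul, standard curve limits
print(f"{assay_low * dilution}-{assay_high * dilution} ng/ul")  # 2.0-200.0
```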

My question to folks more experienced than I am: what does the industry consider an acceptable tolerance for quantification with respect to accuracy and repeatability? My observations so far are that we often over- or under-quant near the limits of the standard curve (at both the low and high ends), while samples near the middle of the curve (roughly 20-100 ng/ul) are generally quantified very accurately. The standard curves I’m generating are also quite linear, with an R² approaching 0.99999.
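To illustrate the kind of thing I mean (with made-up RFU values, not my real data), R² can look superb while the lowest standard still back-calculates poorly, because the fit is dominated by the highest point:

```python
import numpy as np

# Illustrative standard curve: blank plus three standards.
conc = np.array([0.0, 0.01, 0.10, 1.0])        # ng/ul in-assay
rfu  = np.array([50.0, 95.0, 790.0, 7600.0])   # hypothetical readings

# Ordinary least-squares line: rfu = m * conc + b
m, b = np.polyfit(conc, rfu, 1)

# Back-calculate each standard from the fit and report percent recovery.
for c, r in zip(conc[1:], rfu[1:]):            # skip the blank
    back = (r - b) / m
    print(f"nominal {c:5.2f} ng/ul -> back-calc {back:.4f} "
          f"({100 * back / c:5.1f}% recovery)")

r2 = np.corrcoef(conc, rfu)[0, 1] ** 2
print(f"R^2 = {r2:.5f}")  # ~0.9999 despite ~80% recovery at 0.01
```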

This suggests to me that the samples (clean DNA libraries) fluoresce slightly differently than the standards (lambda DNA), but I’m really having trouble dialing in the low and high ends to a level I’m happy with. Of course, finding truly identical, third-party-certified standards is quite difficult, and I’d rather avoid any chemistry changes at this stage.

I’m willing to take any advice on optimizing standard curves, dilution best practices, and general statistics.

Thank you

2-200 is quite a wide range. Can you not change which standards you use, so that if you need, say, detection at the lower end, you have more standards there? I used to generate both a high-sensitivity and a broad-range standards plate and use whichever was more appropriate.
If you’re using high concentrations, might there be some bleed-over (fluorescence, or potentially even splashes, from adjacent wells)?
Also, a 200-fold dilution of libraries: can that be correct?!

Could you extend your standard curve to 0.1, 1, 10, 50, 100, and 200? This would occupy a good few extra wells, but you’d gain the benefit of interpolating within your calibration curve rather than extrapolating beyond it. R² becomes less informative over a range this wide, so you’d need some sort of process control to track potential drift.
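If you go that route, here’s a rough sketch of interpolating off the wider curve (RFUs invented; the 1/x weighting is just one option I’d consider to keep the low standards from being swamped, not something the vendor specifies):

```python
import numpy as np

# Hypothetical extended standard curve (illustrative values only).
std_conc = np.array([0.1, 1.0, 10.0, 50.0, 100.0, 200.0])       # ng/ul
std_rfu  = np.array([80.0, 760.0, 7.5e3, 3.6e4, 7.1e4, 1.38e5])

# 1/x-weighted linear fit so the low standards carry real weight.
m, b = np.polyfit(std_conc, std_rfu, 1, w=np.sqrt(1.0 / std_conc))

def rfu_to_conc(rfu: float) -> float:
    """Back-calculate concentration from the fitted line."""
    return (rfu - b) / m

# Flag anything outside the calibrated range as extrapolation.
c = rfu_to_conc(5.2e3)
status = "interpolated" if std_conc.min() <= c <= std_conc.max() else "EXTRAPOLATED"
print(f"{c:.2f} ng/ul ({status})")
```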

Most OQs seem to allow around a 5% CV for liquid handler dispensing. That’s likely a good place to start for an expected tolerance, but ultimately it depends on your required level of confidence. You can use a t-test or a chi-squared test to really go in depth on this, depending on your needs.
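As a sketch of what that check might look like in practice (replicate numbers are made up):

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate quants of a sample with a nominal 50 ng/ul target.
replicates = np.array([48.9, 51.2, 50.4])
target = 50.0

cv = 100 * replicates.std(ddof=1) / replicates.mean()
print(f"CV = {cv:.1f}%  (vs. a ~5% OQ-style tolerance)")

# One-sample t-test: is the mean measurably different from the target?
t_stat, p_value = stats.ttest_1samp(replicates, popmean=target)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```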

I should have been a bit clearer. The standard curve is constructed from the points specified above, and the samples are then diluted to fall within that curve. This is done by pulling a 2 ul aliquot of undiluted sample into 198 ul of TE, and then mixing that in equal proportions with dye solution.
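Spelled out, the dilution math looks like this:

```python
# Two-step dilution described above: 2 ul sample into 198 ul TE (100x),
# then mixed in equal proportion with dye solution (2x) = 200x total.
step1 = (2 + 198) / 2   # 100-fold
step2 = 2               # equal parts diluted sample and dye
total = step1 * step2   # 200-fold

# Example: a library at 50 ng/ul lands mid-curve in the assay.
library_conc = 50.0     # ng/ul, hypothetical
print(f"{total:.0f}x -> {library_conc / total} ng/ul in-assay "
      f"(curve spans 0.01-1.0 ng/ul)")
```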

I’m using a 384-well plate, so I’m not terribly concerned about using additional wells. I had considered fluorescence carryover, but I haven’t observed it in practice.

However, to your point, I think adding a few more standards in the middle of the curve would be a great place to start. The jump in RFUs between 0.01 and 0.1 is comparatively small, while the jump between 0.1 and 1.0 is very significant.
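For picking intermediate points, spacing them evenly on a log scale keeps the fold-change per step uniform; a quick sketch (the exact values are my suggestion, not the vendor’s):

```python
import numpy as np

# Five standards spaced geometrically between 0.01 and 1.0 ng/ul,
# so each step is the same fold-change (~3.16x).
standards = np.geomspace(0.01, 1.0, num=5)
print(np.round(standards, 4))   # [0.01, 0.0316, 0.1, 0.3162, 1.0]
```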

Thanks for the insight.

Sounds like you’re on the right path! When developing verification methods I refer to this resource a LOT:
https://journals.sagepub.com/doi/full/10.1016/j.jala.2005.01.009
It’s a really great explanation of the considerations for absorbance/fluorescence-based quantification. Of particular interest might be the sections on settling time and on centrifugation, which wasn’t fully explored but does have interesting effects.

I’m pretty sure the second reference (Taylor) was used in the development of Artel’s MVS, so it’s also a good read.


I’m actually planning to do the same thing with a Dynamic Devices Lynx. What liquid handler are you using for the 2 ul transfer? That’s going to be where the error comes in. Are you prepping the dyed samples in 96-well plates and then putting some in the 384 for reading?

You got it. My team is working on really careful optimization of that 2 ul transfer, but the variability I’m seeing shows up with reeeeeally careful manual pipetting as well.

Actually, the data doesn’t look bad, but it’s just not quite where I want it to be.

For the automated workflow, I’m using a STAR, and the sample dilution is done in a 96-well plate. An aliquot of diluted sample + TE is then pulled and dispensed into a second 96-well dye prep plate, where an equal volume of dye is added.

Finally, the diluted samples and standards (plus dye) are stamped to a 384-well plate in triplicate.
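In case it’s useful to anyone mapping wells, here’s a sketch of the common 96-to-384 quadrant interleave (an assumption on my part for how a triplicate stamp could be laid out):

```python
# Each 96-well position maps to a 2x2 block of the 384-well plate, so a
# triplicate stamp can occupy three of the four interleaved quadrants.
def map_96_to_384(row: int, col: int, quadrant: int) -> tuple[int, int]:
    """row/col are 0-indexed 96-well coordinates; quadrant is 0-3."""
    qr, qc = divmod(quadrant, 2)
    return 2 * row + qr, 2 * col + qc

# 96-well A1 stamped in triplicate lands in 384-well A1, A2, and B1.
for q in range(3):
    print(map_96_to_384(0, 0, q))
```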

All those plates and liquid handling steps are making me want a Lunatic now…
For my own process, I’m really hoping I can dilute right to the read plate, but it seems unlikely.

I think it could be feasible if you adjusted your dilution calcs, but if you’re looking to analyze a full plate of 96 samples you’d need to run your standards on a separate plate. We just wanted measurement repeats rather than n = 1 informing our post-QC processing.
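As a rough sketch of what adjusted calcs might look like (volumes here are hypothetical, not your actual method):

```python
# Dilute-to-plate idea: pick the TE volume so that a fixed sample aliquot,
# once an equal volume of dye is added, hits the 200x total in one plate.
sample_ul = 0.5                                       # assumed aliquot
total_dilution = 200
te_ul = sample_ul * (total_dilution / 2) - sample_ul  # 49.5 ul
dye_ul = sample_ul + te_ul                            # equal-volume dye
print(f"{sample_ul} ul sample + {te_ul} ul TE + {dye_ul} ul dye "
      f"= {sample_ul + te_ul + dye_ul} ul at {total_dilution}x")
```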

Question: why did you go with Quant-iT PicoGreen chemistry?

I’ve kind of inherited the decision along with the project. Won’t go into details, but I’m stuck with it at the moment.

What other chemistry would be on your list? I’m actively developing this, so I’d love to know!
