Is it better to pipet duplicates or triplicates in real-time PCR?
Bram De Craene
This post was originally published March 19, 2017 for qbase+ (now deprecated).
The content is unchanged apart from minor edits for readability.
To understand the need for PCR replicates (duplicates or triplicates) in the experimental set-up of a real-time qPCR reaction, it is good to first think about the different sources of variation contributing to the measurement values. Far too often, the experimenter is focused only on the very last step of the workflow, i.e. the final pipetting step, to obtain reproducible quantification cycle (Cq) values. Sources of variation in upstream processing steps are often underestimated and not always adequately assessed. A good understanding of a robust workflow is key to trusting the actual Cq values obtained.
It starts before the PCR: sample preparation
Imagine you start the experiment with a homogeneous tissue sample that has been aliquoted into several tubes. Would every processed sample from every aliquot give the same Cq value in the end? The seasoned experimenter would strongly advise using a standardized workflow to obtain reproducible results. However, even when using such a standardized procedure, it is not a given that all aliquots will result in the same Cq value, even when further correction steps for variation (normalization) are applied.
Dividing the homogeneous sample into equal-sized aliquots (rather than starting with different amounts) is a first step toward qPCR reproducibility. Trying to compare samples that differ too much in size would only skew these differences further in the RNA isolation step, the first step of the qPCR workflow. A variety of methodologies is available: obtaining highly pure RNA by means of a time-consuming CsCl gradient purification has the flavor of nostalgia for me. Using a monophasic solution of phenol and guanidinium isothiocyanate followed by chloroform for phase separation was my personal favorite to extract RNA from tissues, while I preferred a column-based approach to extract RNA from cells.
Whichever methodology is used, but most often with commercially available kits, I have seen people overloading the system in the hope of getting the maximum out of it. Unfortunately, the opposite happens; and here comes my plea, which also applies to every further step in the workflow: take the time to understand the limitations of the system. For any extraction methodology, you will find that reproducibility is enhanced when you respect the range it has been designed for and, more importantly, the range you have tested and optimized with your particular sample type. Generally speaking, it is advisable to start with similar amounts of the samples you would like to compare.
The most underestimated step: reverse transcription
The second step in the workflow is probably the most underestimated in terms of reproducibility: the reverse transcription (RT) step. Let us push the reset button on the previous processing step and wipe away the variation already incurred during RNA preparation: we again partition an RNA sample into several tubes. How can we process these different aliquots to obtain the same Cq values in the end?
A very nice study published by Stephen Bustin and colleagues (Clin Chem. 2015, 61(1):202-12) gives us insight into how to avoid the trap of further distorting the data: variability can be minimized by choosing an appropriate reverse transcriptase, using high concentrations of RNA, and characterizing the variability of individual assays by use of multiple RT replicates.
Remember again: take the time to understand the limitations of the system! Some RT enzymes, even from respected companies, simply do not do what qPCR requires: generating an equal amount of cDNA from an equal amount of RNA in a reproducible way. The best approach to test this is to look at the RT reproducibility over a concentration range of RNA (dilution series), and while we are at it, to compare its performance with other available enzymes: surprises guaranteed!
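One way to sketch such a dilution-series check is the standard qPCR efficiency calculation: fit Cq against log10 of the relative input, then derive the amplification efficiency from the slope (a 10-fold series from a well-behaved RT and PCR gives a slope near -3.32, i.e. close to 100% efficiency). The Cq values below are hypothetical example numbers, not data from the study cited above.

```python
import math

def amplification_efficiency(dilutions, cq_values):
    """Fit Cq against log10(relative input) by least squares and
    derive the PCR efficiency from the slope: E = 10^(-1/slope) - 1.
    A perfect doubling per cycle yields a slope of about -3.32 (E = 100%)."""
    xs = [math.log10(d) for d in dilutions]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(cq_values) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, cq_values))
    slope = sxy / sxx
    efficiency = 10 ** (-1 / slope) - 1
    return slope, efficiency

# Hypothetical 10-fold dilution series (relative input) with measured Cq values.
dilutions = [1, 0.1, 0.01, 0.001]
cqs = [18.1, 21.5, 24.8, 28.2]
slope, eff = amplification_efficiency(dilutions, cqs)
print(f"slope = {slope:.2f}, efficiency = {eff:.0%}")  # slope = -3.36, efficiency = 98%
```

Running the same series with two RT enzymes side by side, as suggested above, makes differences in linearity and reproducibility immediately visible.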
So, duplicates or triplicates?
Minimizing the technical variation by using standard operating procedures throughout the entire qPCR workflow is thus an absolute must. The remaining technical variation can then be reduced further or removed by using a proper qPCR normalization approach, enabling a better appreciation of the true biological variation (see 11 proven tips on normalization and biostatistics). This brings us back to the initial question: should I pipet duplicate or triplicate reactions in my run set-up?
It is good to realize at this point that the Cq values you obtain are estimates for a given sample and a given gene, and that the reproducibility of these values only reflects your pipetting skills (or the performance of the robotics doing the pipetting for you). The quality of PCR replicates is often evaluated in terms of Cq variability, and a delta Cq value between replicates (highest Cq minus lowest Cq) of 0.5 or less is generally accepted as reproducible.
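That delta Cq rule is easy to automate. The helper below is a minimal sketch of the check as described here (max minus min against a 0.5 cutoff); the replicate values are made up for illustration.

```python
def flag_replicates(cq_replicates, threshold=0.5):
    """Return the delta Cq (highest minus lowest) and whether the
    replicate set passes the reproducibility cutoff."""
    delta = max(cq_replicates) - min(cq_replicates)
    return delta, delta <= threshold

# Hypothetical triplicate Cq values for one sample/gene combination.
delta, ok = flag_replicates([24.1, 24.3, 24.9])
print(f"delta Cq = {delta:.1f}, reproducible = {ok}")  # delta Cq = 0.8, reproducible = False
```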
It goes without saying that if you have been neglecting the sources of variation in the previous steps of the workflow, good reproducibility of the PCR replicates might at this point give you a false sense of security. The values may be nicely reproducible, but they would be meaningless if they come from material produced by uncontrolled processing steps.
Biological replicates matter most
Moreover, what you are probably really interested in is the true biological variation (e.g. between untreated and treated samples). That information is captured in the Cq values from different biological replicates, which I would consider as the most important level of replicates (see also 11 proven tips on normalization and biostatistics). This is the core distinction between biological replicates vs technical replicates in qPCR: statistics on data from independent biological replicates makes sense whereas statistics on repeated measurements (duplicate or triplicate PCR reactions) are meaningless.
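The practical consequence of this distinction is that technical replicates should be collapsed into one value per biological replicate before any statistics are computed, so that n reflects biological samples, not PCR wells. A minimal sketch with hypothetical Cq data:

```python
from statistics import mean, stdev

# Hypothetical Cq data: three biological replicates per condition,
# each measured in duplicate PCR reactions.
untreated = [[24.1, 24.3], [23.8, 24.0], [24.5, 24.6]]
treated   = [[21.0, 21.2], [20.6, 20.9], [21.4, 21.3]]

# Collapse technical replicates first: one value per biological replicate.
u = [mean(rep) for rep in untreated]
t = [mean(rep) for rep in treated]

# Statistics belong at the biological level (n = 3 here, not n = 6 wells).
print(f"untreated: {mean(u):.2f} +/- {stdev(u):.2f} Cq (n={len(u)})")
print(f"treated:   {mean(t):.2f} +/- {stdev(t):.2f} Cq (n={len(t)})")
```

Any downstream test (e.g. a t-test between conditions) would then run on the three biological means per group, never on the six individual wells.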
If you are confident about your pipetting skills, duplicate reactions (delta Cq of 0.5 or less) will generally give you the information you need. But make sure that you are as confident about the other steps in the qPCR workflow, because reproducible Cq values do not necessarily reflect a good estimate for a given sample and target combination. Moreover, proper data postprocessing (including quality control and normalization), use of biological replicates, and appropriate statistics are also needed to interpret the data with full confidence.
Key takeaways
- Sources of variation in upstream steps (RNA isolation, reverse transcription) are often underestimated: reproducible Cq values alone do not guarantee reliable results.
- Take the time to understand the limitations of each system: respect input ranges for extraction kits and test RT enzyme reproducibility over dilution series.
- Reverse transcription is the most underestimated step: choose an appropriate enzyme, use high RNA concentrations, and characterize assay variability.
- Biological replicates capture the variation you actually care about; statistics on PCR replicates (duplicates or triplicates) alone are meaningless.
- Duplicates are generally sufficient if your pipetting is solid (delta Cq of 0.5 or less), but only if the entire upstream workflow is controlled.
Bram De Craene
Research Fellow Scientist, Biocartis & qPCR Trainer
Obtained his PhD in Sciences (Biotechnology) at Ghent University in 2005. Over two decades of hands-on qPCR experience spanning cancer research, training programs at Biogazelle, and diagnostic development at Biocartis. Founder of Outside The Lines, where he offers qPCR training courses for researchers, PhD students, and industry professionals.
