Ideally the efficiency of a PCR should be 100%, and qPCR tutorials online say it should be somewhere between 90% and 100%. In a recent talk I attended, I was told that if my PCR has an efficiency of over 90% I should take another look at my results, because it is generally rare to get such high efficiency. I know it depends on many factors, ranging from primers, reaction conditions, and chemistry to the cycling parameters. On a general note, what is the average efficiency of a PCR in practice?
Hi, normally you can use an efficiency between 90 and 110%!
Maybe the PDF will also help you?
I always had efficiencies between 90 and 110%!
Cheers, Nadine
Thank you, Nadine, for the reply. So suppose that after taking all possible optimization measures you still end up with an efficiency between 80 and 89%: would you simply discard it and change the primers, or would you go with it? I mean, is there any standard statistical measure or cutoff to decide whether the efficiency I am getting is good or bad, or is it subjective?
Dear Nikhil,
using SYBR Green and a LightCycler 480, my efficiency values normally range from 80 to 97%. Switching to another master mix (SensiFast), the efficiency improves on average (90-110%).
cheers
Andrea
Optimal efficiency is between 90-110%. This has little to do with the master mix/cycler and more to do with the target and primers. I recommend beginning optimization of a new primer set using purified DNA, so you know the template DNA will give you good results. Order a handful of primers (5-10), test them with a standard curve of purified DNA, and optimize the annealing temperature first. Then optimize other reaction parameters if necessary to achieve optimal efficiency. If you optimize the other reaction parameters (annealing temp/time, magnesium titration, etc.) and still end up with 80-89% efficiency, I recommend redesigning your primer set. You should also look at your standard curve: is the efficiency low because the highest or lowest amount of target does not fall in the linear range? You may need to eliminate those points and then make sure you only work with DNA concentrations that fall in that linear range.
Once you've optimized with purified DNA, test the primers on your experimental template. If the efficiency is lower, you may need to look at your purification conditions; you may have an inhibitor carrying over or highly fragmented DNA. There are rare cases when low efficiency is unavoidable, but there is a lot you can do to improve it before you accept a low efficiency.
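For reference, the efficiency quoted throughout this thread is usually derived from the slope of the standard curve (Ct plotted against log10 of the template amount), via E = 10^(-1/slope) - 1. A minimal Python sketch of that calculation; the dilution amounts and Ct values below are made up for illustration:

```python
import numpy as np

def efficiency_from_standard_curve(log10_amount, ct):
    """Fit Ct = slope * log10(amount) + intercept and convert the slope
    into an amplification efficiency: E = 10**(-1/slope) - 1."""
    slope, intercept = np.polyfit(log10_amount, ct, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0        # 1.0 corresponds to 100 %
    r_squared = np.corrcoef(log10_amount, ct)[0, 1] ** 2
    return slope, efficiency, r_squared

# Hypothetical 10-fold dilution series of purified DNA
log10_amount = np.log10([1e6, 1e5, 1e4, 1e3, 1e2])
ct           = [15.1, 18.5, 21.9, 25.3, 28.7]
slope, eff, r2 = efficiency_from_standard_curve(log10_amount, ct)
print(f"slope = {slope:.2f}, efficiency = {eff:.1%}, R^2 = {r2:.3f}")
```

A slope of about -3.32 corresponds to 100% efficiency; shallower slopes (e.g. -3.6) give lower efficiencies, while steeper slopes (e.g. -3.1) give apparent efficiencies above 100%.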
Primers make the biggest difference. Always test several primer combinations if possible.
Sidenote: I am a little concerned about so many users claiming to have efficiencies between 90% and 110%. The efficiency cannot be >100%; this is physically impossible. Since it may happen that the *estimate* of the efficiency is calculated to be >100%, I assume that such estimates are being reported. If so, I would be concerned about three things:
1) Is the reported range expressing the variability/uncertainty? Then it is far too large to be useful in judging the "real" efficiency; results based on such imprecise estimates may be drastically wrong (depending on the actual Ct values). See the small simulation sketched below.
2) Is an appropriate error model chosen? I suppose the impact of a wrong model is considerable right at the edges of the domain, i.e. at values close to 100%. Thus I think that such estimates may considerably overestimate the "real" efficiency.
3) Are there other (physico-chemical) effects causing biased (too high) estimates, so that values >100% are obtained? If so, what do the results tell us? Especially as these effects may depend on the primer or amplicon sequence.
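To illustrate point 1): even if the true efficiency is exactly 100%, ordinary well-to-well Ct noise scatters the *estimated* efficiency on both sides of 100%. A rough simulation sketch, assuming a six-point 1:10 series and ±0.2-cycle Ct noise (both assumptions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
log10_dil = np.arange(0.0, -6.0, -1.0)      # six points of a 1:10 dilution series
true_slope = -1.0 / np.log10(2.0)           # about -3.32, i.e. a true efficiency of exactly 100 %

estimates = []
for _ in range(2000):
    ct = 20.0 + true_slope * log10_dil + rng.normal(0.0, 0.2, log10_dil.size)
    slope = np.polyfit(log10_dil, ct, 1)[0]
    estimates.append(10 ** (-1.0 / slope) - 1.0)

estimates = np.array(estimates)
print(f"mean estimated efficiency: {estimates.mean():.1%}")
print(f"runs reporting > 100 %:    {(estimates > 1.0).mean():.0%}")
```

Roughly half of such runs will report an "efficiency" above 100% even though the true value is exactly 100%.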
Sidenote to the sidenote:
Due to the inherent difficulty of determining efficiencies with sufficient precision, the reactions are usually optimized to ensure ideal conditions, i.e. efficiencies of 100%. The aim of determining the efficiency is not to get an actual estimate but rather to convince oneself that the reaction is running at maximum efficiency. However, there is not even a rule of thumb for a cut-off value.
Sidenote to the sidenote-sidenote:
Algorithms using an "efficiency correction" should perform a proper error propagation. Unfortunately, the errors of efficiency estimates are quite large, and due to the exponential relation, the propagated errors for the final results will be enormous. Hence, you either *hope* that the efficiencies are *identical* (and this is best assured when all reactions are pushed to their limits), or you end up with a very, very vague measure.
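A rough numerical illustration of this error-propagation point, assuming quantification via N0 ∝ (1 + E)^(-Ct) and a hypothetical efficiency estimate of 95% ± 3%:

```python
import numpy as np

rng = np.random.default_rng(0)
ct = 28.0                              # a typical Ct value
eff_mean, eff_sd = 0.95, 0.03          # hypothetical efficiency estimate: 95 % +/- 3 %

# Monte-Carlo propagation: relative starting amount ~ (1 + E) ** (-Ct)
eff_draws = rng.normal(eff_mean, eff_sd, 100_000)
amounts = (1.0 + eff_draws) ** (-ct)

print(f"relative uncertainty of the amount: ~{amounts.std() / amounts.mean():.0%}")
```

At Ct ≈ 28, a 3% uncertainty in E already translates into roughly a 40-50% relative uncertainty in the back-calculated amount, which is why assuming identical efficiencies across reactions is usually safer than correcting with noisy efficiency estimates.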
I know that this is an old post, but I just wanted to add that apparent efficiencies above 100% are indeed possible when using SYBR Green. Obviously this does not mean that the amplicons are replicated at more than 100%, as that is not possible. The reason SYBR Green can give efficiencies above 100% is that SYBR Green binds to dsDNA, and therefore also to primer-dimers. So if your primers duplicate the DNA of interest at 100% and also form primer-dimers, you'll get an apparent efficiency above 100%...
Take care,
L
There is also inhibition at the most concentrated points of the dilution curve, as well as non-specific target amplification, that can exaggerate efficiency values. Primer design is key, I agree. Lightning in a bottle doesn't always behave like a model prisoner.
Thanks everyone for the very interesting discussion. I agree with Lars that >100% efficiency usually indicates the presence of primer-dimers. One way to optimize the amplification efficiency in this case is to decrease the primer concentrations to 100 nM.
I do not quite agree with Lars and Pavel regarding the impact of primer-dimers on the efficiency. Their answer is too simple, because it elides the fact that such an interpretation is the result of an analysis that violates several aspects of "good practice":
*IF* primer dimers have an effect on the calculated efficiency, *THEN* the calculation is based on bad data. This should not happen anyway, because when primer dimers contribute to the amplification signal, the data of the dilution series do not fall on a straight line. Typically a kink is visible in the plot at the point where dimer amplification outcompetes the amplification of the specific product. Such non-linearity should be a warning sign.
Notably, at "high enough" template concentrations there usually is no dimer amplification. One important step in the analysis is to determine the dynamic range of the method, and this includes identifying the critical minimal template concentration (= the highest Ct value obtained from the amplification of the specific product without detectable amounts of dimers). The efficiency calculation must in any case use only the concentration range of "undisturbed" amplification, so "too high" and "too low" concentrations must not be used for the efficiency calculation.
But then the presence of dimers has no impact on the efficiency.
Therefore: if primer dimers are a problem in the calculation, then the calculation has been done wrongly (in disregard of standard rules of evaluation).
It may be that the presence of dimers limits the dynamic range of quantification; this can be a problem when the real samples have too low a concentration. But this has nothing to do with the calculation of the efficiency.
The linearity must be judgeable from the data. If this is difficult, most likely the experimental design was simply inadequate. Often the dilution series is made of 1:10 dilutions, which is far too coarse to identify the dynamic range: you will have 6-8 different dilutions, which really is not much to see over what range the relation is linear (given that the highest and lowest dilutions are "suboptimal", there are only 4-6 points left, and the ever-present noise won't make it easy to see at which point the linearity breaks down). It is better to use many different concentrations in smaller dilution steps (like 1:2 or 1:1.5 dilutions). Instead of measuring eight 1:10 dilutions in triplicate (= 8 x 3 = 24 rxn), one can span the same concentration range with about 26 rxn that are 1:2 diluted. Almost the same number of rxn, but far better information about the dynamic range and the linearity.
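A quick back-of-the-envelope check of this reaction-count argument; the exact count depends on where the series starts and ends, so it lands near (not exactly at) the ~26 reactions quoted above:

```python
import math

span = 1e7                                 # range covered by eight 1:10 points (7 ten-fold steps)
rxn_coarse = 8 * 3                         # eight concentrations, measured in triplicate = 24 rxn

steps_fine = math.ceil(math.log2(span))    # ~24 two-fold steps cover the same range
rxn_fine = steps_fine + 1                  # one reaction per concentration, measured once

print(rxn_coarse, rxn_fine)                # 24 vs. 25 reactions, but ~3x as many concentrations
```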
Another mistake often made is to use some real sample as the standard to create the dilution series. When this sample has a low concentration (of the target sequence), you start with already high Ct values and you won't get many dilutions with usable Ct values (no dimer amplification, no severe stochastic ("Poisson") effects; data from only a few (some 10 or fewer) molecules is very noisy). So you end up with only very little information, and any estimate of the efficiency can be grossly wrong, or at best too imprecise for any practical purpose. You will need at least 8-10 dilutions with good Ct values that are clearly on a straight line (when plotted against the log dilution).
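To put a number on the "Poisson" remark: a rough simulation of how much the Ct varies from well to well purely because of copy-number sampling when only a handful of molecules is present (assuming 100% efficiency and no other noise sources):

```python
import numpy as np

rng = np.random.default_rng(0)
for mean_copies in (1000, 100, 10):
    copies = rng.poisson(mean_copies, 50_000)
    copies = copies[copies > 0]                    # wells that got zero molecules never amplify
    delta_ct = -np.log2(copies / mean_copies)      # Ct shift from copy-number sampling alone
    print(f"{mean_copies:>5} copies on average -> Ct SD ~ {delta_ct.std():.2f} cycles")
```

At ~10 starting copies the sampling noise alone contributes roughly half a cycle of Ct scatter, before pipetting or chemistry noise is even considered.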
Thank you very much, Jochen, for such a detailed answer! Even though I have been in the qPCR business for more than 10 years, I can't say that I am an expert, so I still try to learn more about qPCR. My note regarding qPCR efficiency was actually taken from a very nice manual on qPCR quantification found on the web site of the University of California, Riverside: http://genomics.ucr.edu/facility/documents/iQ5_real-time_PCR1.pdf
In the manual they discuss examples of unreasonably high qPCR efficiency (slide 20) and how primer-dimers raise the apparent efficiency (slide 22). It was actually their recommendation to decrease the primer concentration to solve the problem (slide 24). This recommendation worked very well in my case.
I studied a gene with a very low expression level, so the formation of primer-dimers competed significantly with my template amplification. I used qPCR to compare expression of this gene in different tissues, and I had to calculate the qPCR efficiency to be sure that my quantification was correct. Even though my primers showed a standard (90-100%) qPCR efficiency when amplifying an "easy" analog of my template (e.g. a plasmid containing the cDNA of this gene), the same primers showed an unreasonably high apparent efficiency with the "real" template - cDNA obtained from the analyzed tissues, where expression of my gene was very low. Since the apparent qPCR efficiency was too high, I could not trust the results. After decreasing the primer concentration, I got a standard value for the efficiency, which convinced me that under the new amplification conditions my results were reliable.
You can read everywhere that the reaction efficiency should be between 90 and 110% (over 100% is not possible, as I understand). But if I use the Pfaffl (2001) efficiency-calibrated equation to determine the relative expression ratio, why should I still worry about reactions with E below 90% (furthermore, I investigate abundantly expressed targets)?
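For context, the Pfaffl (2001) ratio is ratio = E_target^ΔCt(target) / E_ref^ΔCt(ref), with E as the amplification factor (2.0 = 100% efficiency). The calibration removes the bias only to the extent that E itself is measured accurately, and any error in E is raised to the power of ΔCt. A quick sketch of that sensitivity; all Ct shifts and efficiencies below are hypothetical:

```python
# Pfaffl (2001): ratio = E_target**dCt_target / E_ref**dCt_ref,
# where E is the amplification factor (2.0 corresponds to 100 % efficiency)
# and dCt = Ct(control) - Ct(treated). All numbers below are hypothetical.

def pfaffl_ratio(e_target, dct_target, e_ref, dct_ref):
    return (e_target ** dct_target) / (e_ref ** dct_ref)

dct_target, dct_ref = 5.0, 0.5

# The target assay is taken to really run at 85 % efficiency (E = 1.85)
for label, e_used in [("true E (1.85)", 1.85),
                      ("assumed E = 2.0", 2.00),
                      ("slightly mis-measured E (1.90)", 1.90)]:
    print(f"{label:32s} ratio = {pfaffl_ratio(e_used, dct_target, 2.0, dct_ref):.1f}")
```

Even a small error in the measured E shifts the ratio noticeably at ΔCt = 5, and the distortion grows with larger ΔCt, which is one reason low or poorly determined efficiencies remain a concern even with efficiency-calibrated equations.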
I totally disagree with the fact that the results of PCR would be unreliable if the E