We are doing PCRs on a regular basis, and we are used to reporting Ct values. What is special about this new terminology? Is it more meaningful than Ct values?
In a real-time PCR assay a positive reaction is detected by the accumulation of a fluorescent signal. The Ct (cycle threshold) is defined as the number of cycles required for the fluorescent signal to cross the threshold (i.e., to exceed the background level). Ct values are inversely related to the amount of target nucleic acid in the sample (i.e., the lower the Ct, the greater the amount of target nucleic acid in the sample). As a rough guide:
Cts < 29 are strong positive reactions, indicative of abundant target nucleic acid in the sample
Cts of 30-37 are positive reactions, indicative of moderate amounts of target nucleic acid
Cts of 38-40 are weak reactions, indicative of minimal amounts of target nucleic acid, which could represent either an infection state or environmental contamination.
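The interpretation bands above can be sketched as a small helper function. This is purely illustrative: the function name and the exact cut-offs are taken from this post, not from any standard, and real assays should define their own validated ranges.

```python
def classify_ct(ct):
    """Rough interpretation of a Ct value, using the bands given above.
    The cut-offs are illustrative, not a universal standard."""
    if ct < 29:
        return "strong positive (abundant target)"
    elif ct <= 37:
        return "positive (moderate target)"
    elif ct <= 40:
        return "weak positive (minimal target; possibly contamination)"
    else:
        return "negative / below reliable detection"

print(classify_ct(18))   # strong positive (abundant target)
print(classify_ct(39))   # weak positive (minimal target; possibly contamination)
```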
A delta-Ct is essentially a Ct normalized to a loading control; that's the main difference.
If you cannot ensure that all the samples you want to compare contain the same amount of material (e.g., the same cell numbers), then differences between the Ct values of two samples may simply reflect the fact that the two samples contained different amounts of biological material. Thus, delta-Cts are mainly relevant for gene-expression studies, where usually it is not the absolute amount of an mRNA in a sample that is interesting, but the amount *relative* to the amount of material used for the PCR (which depends on the number of cells used to extract the RNA and on the kit and efficiency of the RT). This amount could be determined by any means, but it is convenient to simply quantify one or more constantly expressed genes ("reference genes"), so that their Ct values can be used to normalize the Cts of the gene of interest.
If you standardize your sample material by other means, or if you want to measure absolute values, using Ct values is fine.
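As a minimal sketch of the normalization described above: the gene names and Ct values below are invented for illustration, and the sign convention (dCt = Ct[goi] − Ct[ref], relative amount = 2^(−dCt)) is the common one, though its merits are debated later in this thread.

```python
# Sketch of delta-Ct normalization against a reference ("housekeeping") gene.
# Gene names and Ct values are hypothetical.
ct = {"GAPDH": 18.0, "IL8": 24.5}   # reference gene and gene of interest

# dCt as Ct(goi) - Ct(ref), the common convention;
# the amount of target relative to the reference is then 2**(-dCt)
d_ct = ct["IL8"] - ct["GAPDH"]      # 6.5
relative_amount = 2 ** (-d_ct)      # ~0.011: IL8 mRNA is ~1.1% of GAPDH mRNA
print(d_ct, relative_amount)
```

This assumes (near-)perfect amplification efficiency for both assays, i.e., a doubling of product per cycle.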
A good paper that explains the ΔΔCt is the following:
Livak, K. J., and Schmittgen, T. D. 2001. Analysis of relative gene expression data using real-time quantitative PCR and the 2(-Delta Delta C(T)) Method. Methods 25:402-408.
Ct (threshold cycle) is the intersection between an amplification curve and a threshold line. It is a relative measure of the concentration of target in the PCR reaction. Fold change for each target gene is then calculated as 2^(-DeltaDeltaCt), with DeltaCt = Ct[target] - Ct[reference] and DeltaDeltaCt = DeltaCt[treated] - DeltaCt[control].
1) Calculate the DeltaCt as Ct[reference gene] - Ct[gene of interest], because this is the way normalization is usually done, and then higher DeltaCts indicate higher expression, which is more intuitive. If you follow this advice, then the "fold-change" is 2^DeltaDeltaCt.
2)
Never provide "fold change +/- SD", because this wrongly suggests that the uncertainty of the fold-change is symmetric. IMHO it is sufficient to show the DeltaDeltaCts with the SD, the SE, or a CI (usually the best). If you are urged to present fold-changes, then calculate the limits of the CI for the DeltaDeltaCts and anti-log these limits (lower FC = 2^(lower DeltaDeltaCt), upper FC = 2^(upper DeltaDeltaCt)).
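The advice above (compute the CI on the log scale, then anti-log the limits) can be sketched as follows. The replicate DeltaDeltaCt values and the sample size are invented; the t quantile is hard-coded for df = 3 and would need to match your actual number of replicates.

```python
import statistics as st

# Replicate DeltaDeltaCt values (hypothetical numbers, log2 scale,
# convention: positive = up-regulation, i.e. fold change = 2**ddCt)
ddct = [2.1, 1.8, 2.4, 2.0]

m  = st.mean(ddct)                      # 2.075
se = st.stdev(ddct) / len(ddct) ** 0.5  # standard error on the log2 scale
t  = 3.182                              # two-sided 95% t quantile for df = 3

lo, hi = m - t * se, m + t * se         # CI limits on the log2 scale
fc_lo, fc_hi = 2 ** lo, 2 ** hi         # anti-log the *limits*, not the SD

print(f"ddCt = {m:.2f} [{lo:.2f}, {hi:.2f}]")
print(f"fold change within [{fc_lo:.1f}, {fc_hi:.1f}]")  # asymmetric interval
```

Note that the resulting fold-change interval is asymmetric around 2^mean, which is exactly the point of not reporting "fold change +/- SD".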
Dear Wilhelm, you are right, but both ways can be used. I also add this from my reference: positive DDCt values indicate less of the targeted gene's mRNA after induction or treatment, meaning inhibition of the targeted gene's expression. Conversely, negative DDCt values indicate more mRNA after stimulation, indicating induction of gene expression.
Volkan, yes, both ways are possible, I certainly know. I am just of the opinion that calculating numbers where positive values indicate downregulation and negative values indicate upregulation is counter-intuitive, and also absolutely uncommon in any other field. For instance, nobody would report a protein concentration as LC/POI (LC = loading control, often beta-actin or similar; POI = protein of interest).
----
Just to make this concrete:
let the conc. of the loading control in the sample be 16 pg/µl
let the conc. of the protein of interest in the sample be 64 pg/µl
the normalized conc. of the POI would surely be given as 64/16 = 4 (pg POI per pg LC), and not as 16/64 = 0.25 (pg LC per pg POI)
Since errors of concentrations are multiplicative, logarithms should preferably be reported here as well, which makes this very similar to the dCt problem. Let us take logarithms to the base 2:
logConc(LC) is log(16)=4
logConc(POI) is log(64)=6
The logRatio is log(POI/LC) = log(POI)-log(LC) = 6-4 = 2.
Anti-logging this value again gives the factor 4: 2^2 = 4.
Now note that the Ct value is proportional to -log(Conc). Mind the minus sign! The smaller the Ct, the higher the logConc. Thus,
logConc(LC) = -p*ct(LC)
logConc(POI) = -q*ct(GOI)
(p and q are usually unknown proportionality factors > 0)
Plugging this back into the equation for the logRatio gives
logRatio = logConc(POI) - logConc(LC) = (-q*ct(GOI)) - (-p*ct(LC)) = p*ct(LC) - q*ct(GOI).
If both assays run at (near-)optimal efficiency, p and q are approximately equal, so the logRatio is proportional to ct(LC) - ct(GOI); this is why the dCt should be calculated as Ct[reference] - Ct[gene of interest].
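The worked 16/64 example above can be reproduced in a few lines; the numbers are the ones given in the post, used purely for illustration.

```python
import math

# Reproducing the worked example: loading control (LC) at 16 pg/ul,
# protein of interest (POI) at 64 pg/ul
conc_lc, conc_poi = 16.0, 64.0

log_lc  = math.log2(conc_lc)    # 4.0
log_poi = math.log2(conc_poi)   # 6.0

log_ratio = log_poi - log_lc    # 2.0, the log2 ratio POI/LC
ratio = 2 ** log_ratio          # 4.0, i.e. 64/16

print(log_ratio, ratio)
```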
The literature spells the definitions out clearly, as do the researchers below. In practice, I have seen researchers use both delta-Ct and delta-delta-Ct values: delta-Ct comparisons when doing case-control studies, mostly in humans, and delta-delta-Ct values (no units on the y-axis) especially in animal studies.
You should look carefully into the MIQE guidelines when preparing your data before submission; this helps the community to understand and reproduce the work better.
As far as I know, there are small differences between different samples, and to minimize the effect of these differences we normalize against a control, standard, or reference gene, say any housekeeping gene. So, subtracting the Ct of the housekeeping gene from the Ct of the gene of interest is the delta Ct.
"So, subtracting Ct of the housekeeping gene from the Ct of the gene of interest is delta Ct" - this is one way dCt values are often calculated, but it is quite counter-intuitive! It would make much more sense to calculate dCt as Ct[reference gene] - Ct[gene of interest].
1) The Ct value gets LARGER when the concentration of the target sequence gets SMALLER.
2) The Ct value is proportional to the NEGATIVE LOGARITHM of the concentration of the target sequence.
The log ratio of the concentrations of the gene of interest (goi) to the reference gene (ref) is the difference of their logarithms. Now their logarithms are proportional to -Ct, so this log ratio is obtained as log(goi/ref) = log(goi) - log(ref), which is proportional to (-Ct[goi]) - (-Ct[ref]) = Ct[ref] - Ct[goi].
The calculation as Ct[goi]-Ct[ref] actually normalizes the concentration of the reference gene to that of the target gene, which in my eyes seems a rather stupid thing to do. Livak et al. "correct" that mistake eventually by calculating the fold-change as 2^(-ddCt) (mind the minus sign in the exponent!) = 1/(2^ddCt), where ddCt is dCt[treated] - dCt[control]. Although the end result is correct, the route is awkward and counter-intuitive.
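The point that both sign conventions arrive at the same fold change, as long as the anti-log is done consistently, can be verified numerically. The Ct values below are hypothetical.

```python
# Hypothetical Ct values for a gene of interest (goi) and a reference gene (ref)
ct = {
    "control": {"goi": 26.0, "ref": 20.0},
    "treated": {"goi": 23.0, "ref": 20.0},
}

# Livak convention: dCt = Ct[goi] - Ct[ref], fold change = 2**(-ddCt)
dct_l  = {g: ct[g]["goi"] - ct[g]["ref"] for g in ct}
ddct_l = dct_l["treated"] - dct_l["control"]        # -3.0
fc_livak = 2 ** (-ddct_l)                           # 8.0

# "Intuitive" convention: dCt = Ct[ref] - Ct[goi], fold change = 2**(+ddCt)
dct_i  = {g: ct[g]["ref"] - ct[g]["goi"] for g in ct}
ddct_i = dct_i["treated"] - dct_i["control"]        # +3.0
fc_int = 2 ** ddct_i                                # 8.0

print(fc_livak, fc_int)   # identical: an 8-fold up-regulation either way
```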
I don't know any publication using Ct values - these values don't make much sense without any appropriate normalization. But there are many papers using delta-Ct values (which are "normalized Ct values").
"Relative gene expression of IL-8 to β-actin was calculated as 1/2 (CT IL-8 ÷ CT β-actin), essentially as described in the User Bulletin # 2, 1997 from Perkin Elmer (Perkin Elmer Cetus, Norwalk CT, USA)."
So they DIVIDE the two Ct values (rather than subtracting them) and multiply the result by 0.5.
This is clearly utter nonsense.
It is actually so stupid that I wonder whether this is more a typesetting problem or a collection of typos (and the actual calculations were done correctly).
The "comparative Ct method" is explained on pages 11ff, where it is written in huge letters that the formula is 2^(-ddCt), with a lot of derivation of how this formula is obtained (including how the dCt is calculated as an interim result). The bulletin is correct. I cannot say whether the authors really made such a major mistake (in that case one must doubt all the results and conclusions presented!), but at least the reviewers who did not notice this are to blame (not every reviewer needs to know qPCR well enough to understand this, but if a paper relies strongly on this method, the editor must find at least one expert for this method, and this obviously failed...).

Regarding your own project: you must thoroughly check the performance of your assay(s) anyway, so you will have to determine the efficiencies of all your primer systems. If the efficiencies are similar (and close to the optimum), there is no problem using the simple ddCt method. If the efficiencies are quite different, then you have a problem either with the efficiency determination or with your assay(s). In either case something is strange, and whether or not you apply some "efficiency correction" (like the "Pfaffl method") in such a case - I wouldn't trust the results. So if the assay does not perform close to perfect, change the assay (rather than the analysis!). If the assay performs almost perfectly, keep things simple and use the ddCt method.
I don't know precisely what you mean by "to use" (... the delta-Ct values ...).
First of all, Ct values are the read-outs. They are the basic quantities that are measured (or determined, from the amplification curves). So whatever you do subsequently is necessarily based on these Ct values, and thus "uses" them.
Showing either dCt* values or 2^dCt values is similar to showing either log(Conc**) or Conc. I would always prefer to show dCt (or log(Conc), respectively), because this better reflects the "biological relevance" of changes. For example, on the log scale a down-regulation to 50% looks similar to an up-regulation to 200%, just in the opposite direction. This makes much sense to me, and seems in line with subsequent biological effects that exhibit a similar strength but work in opposite directions. The picture looks entirely different when the concentrations are presented on the linear scale. So there is no "right" and "wrong", but to my understanding the log(Conc) (or dCt) version reflects the relevant biological picture better and more intuitively.
* I certainly assume that the dCt is calculated as dCt = Ct[ref] - Ct[goi], so that it is proportional to log(N[goi]/N[ref]) (rather than to the reciprocal)
** "Conc" means the normalized concentration of the goi (normalized to ref).
The analysis (averaging, linear models, hypothesis tests, ...) should/must always be done on the log scale, because our stochastic model is about relative changes, and their effects are multiplicative on the linear scale and thus additive on the log scale. All our standard statistical procedures are based on additive (stochastic) effects, and hence the logs need to be analyzed to meet this prerequisite. Whatever was calculated may (or may not) be transformed to the linear scale to show or report the results (where I suggest staying on the log scale, but others think differently).
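A tiny numerical illustration of why averaging should happen on the log (dCt) scale: averaging log values and anti-logging gives the geometric mean of the fold changes, whereas averaging on the linear scale gives the arithmetic mean, which overweights large fold changes. The dCt values are invented.

```python
import statistics as st

# Hypothetical replicate dCt values (log2 scale),
# corresponding to fold changes of 2 and 8
dct = [1.0, 3.0]

mean_log = st.mean(dct)                     # 2.0 on the log2 scale
fc_geometric = 2 ** mean_log                # 4.0: geometric mean of 2 and 8

fc_arithmetic = st.mean([2 ** d for d in dct])   # (2 + 8) / 2 = 5.0

print(fc_geometric, fc_arithmetic)   # 4.0 vs 5.0: the scale matters
```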
Is there a maximum fold-change value in gene expression analysis?
I am conducting a dose-dependent gene expression profile for some primers, and the pattern seems consistent. However, the highest dose in a serially diluted treatment showed a 200-fold increase/upregulation, as against the 10-fold increase of the next lower dose.
Can this be explained? I have checked my calculations and they are fool-proof!
Yes, there is, and it is defined by the dynamic range of the method.
If your dynamic range for ct-values (that can be used reliably for quantification) is, for instance, from 15 cycles to 32 cycles, then you have a dynamic range of 32-15 = 17 cycles. At an amplification efficiency of 2 this translates to a dynamic range for the initial concentration of 2^17 = 131072-fold.
This is a theoretical upper bound, comparing a sample with a concentration at the lower limit of quantification to a sample with a concentration at the upper limit of quantification.
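The dynamic-range arithmetic above can be written out directly; the Ct window and the perfect efficiency are the example values from the post.

```python
# Theoretical dynamic range of a qPCR assay, following the arithmetic above.
ct_min, ct_max = 15, 32   # reliable quantification window (example values)
efficiency = 2            # perfect amplification: doubling per cycle

dynamic_range = efficiency ** (ct_max - ct_min)
print(dynamic_range)      # 131072, i.e. ~1.3e5-fold
```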
Generally, 100-fold or even 1000-fold differences and more are easy to measure with qPCR, simply because of the very high dynamic range of this method. In contrast, reliably detecting small fold-changes (2-fold or less) is rather difficult.
I agree with the theoretical range. It's just that my supervisor doubted the practicality of my 200-fold change value, as do I. Or could this be because I am using drug-loaded nanoparticles? I have yet to see any paper reporting such a magnitude of change. Do you know of any? I would appreciate a link.
When you are working with real-time PCR, you’re looking for the exact amount of a target sequence or gene in your sample.
Ct values are inversely related to the amount of nucleic acid in your sample: lower Ct values indicate high amounts of target nucleic acid, while higher Ct values mean lower (or even too little) amounts of your target nucleic acid.
So when you say that Ct is proportional to -log[conc], is that log base 10? If yes, then would representing delta Ct as 2^(-delta Ct) be the fold change of the log data?
Because only if the log is base 2 would you be doing a true anti-log; if it is base 10, then 2^(-delta Ct) basically keeps the log and gives the fold change of the log data - is this correct?
Thank you to all members for all this extensive information, especially Sibtain.A, Gaascht.F, Wilhelm, Chronis.D, Ergin.V, and Hjortebjerg.R. I am now happy and at ease.
The way to adjust the baseline manually may depend on the instrument and software. On Applied Biosystems devices one has to switch from automatic mode to manual mode by selecting the appropriate option in a program menu. Then, using the mouse, you move the baseline vertically to an optimal position. Results need to be recalculated afterwards.