I do not agree here: 2^∆Ct and ∆Ct do not have different interpretations. They are the very same thing, expressed on a linear or a log scale. For expression values, the log scale is more useful because it better reflects how we judge effect sizes (namely, by comparing fold-changes) and how we understand variation (as being symmetric around a center).
Regarding statistics, it is much easier to work with ∆Ct (i.e., on the log scale), simply because effects are modelled additively there and errors symmetrically. Statistics on 2^∆Ct would have to employ either a multiplicative model (which, in the end, again simply uses the log values) or a gamma model with a log link (so the expected value is modelled on the log scale while the observed values are taken on the linear scale).
Ultimately, all we can effectively do is analyze the log values, and these *are* the ∆Ct values. Any transformation to 2^∆Ct would have to be undone for the analysis anyway.
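To make this concrete, here is a minimal sketch (in Python, with made-up ∆Ct values and group names) of what "analyze on the log scale, report on the linear scale" looks like in practice: the comparison is additive on ∆Ct, and only the reported effect is exponentiated into a fold change.

# Hypothetical example: compare ∆Ct values between two groups on the log (Ct) scale,
# then report the effect as a fold change on the linear scale. All numbers are invented.
import numpy as np
from scipy import stats

# Made-up ∆Ct values (target Ct minus reference-gene Ct) for two groups
dct_control = np.array([5.1, 4.8, 5.3, 5.0, 4.9])
dct_treated = np.array([3.9, 4.2, 3.7, 4.0, 4.1])

# Additive comparison on the log scale
t, p = stats.ttest_ind(dct_control, dct_treated)

# The difference of group means is a ∆∆Ct; exponentiating turns it into a fold change
ddct = dct_treated.mean() - dct_control.mean()
fold_change = 2 ** (-ddct)

print(f"t = {t:.2f}, p = {p:.4f}, fold change = {fold_change:.2f}")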
If by ∆Ct you mean ∆∆Ct, then 2^-∆∆Ct is the more interpretable form, since it is the relative quantity (the fold change). Either form can be analyzed with an appropriate statistical test, and if done properly the statistics will say precisely the same thing, since the input data are exactly the same; you are only transforming them.
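For reference, these are the standard Livak-style relations behind the terms used above (assuming a perfect doubling per cycle, i.e., 100% efficiency):

\[
\Delta C_t = C_{t,\text{target}} - C_{t,\text{reference}}, \qquad
\Delta\Delta C_t = \Delta C_{t,\text{treated}} - \Delta C_{t,\text{control}}, \qquad
\text{fold change} = 2^{-\Delta\Delta C_t}
\]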
Actually, I meant delta Ct and not delta delta Ct, because I'm just comparing means between 2 or more groups and not normalizing to a reference sample (or calibrator) as in the delta delta Ct formula.
In this case, what do you think?
I'm asking because I think that using 2^∆Ct could be more correct (statistically), since this formula takes the reaction efficiency into account, doesn't it?
The efficiency is only considered if you have generated a standard curve. If you did this, then you will know the true (ng) quantities. It will be very easy to do statistics on the absolute quantities.
Your method assumes an efficiency of 100% and is imprecise. Again, if you compute statistics on 10 values compared to 10 other values, and I then ask you to compare the same 20 values after using them as exponents of 2, the statistics should tell you precisely the same thing about those values; they might just take the form of a different statistical test.
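If a standard curve has been run, the efficiency can be estimated from its slope and used in an efficiency-corrected ratio instead of the fixed base 2. A minimal sketch (Pfaffl-style; all slope and Ct numbers below are made up for illustration):

# Hypothetical sketch: efficiency-corrected relative quantification.
# The slope comes from a standard curve (Ct vs. log10 input amount); values are invented.

def efficiency_from_slope(slope: float) -> float:
    """Amplification efficiency E as a fraction, where E = 1.0 means perfect doubling."""
    return 10 ** (-1.0 / slope) - 1.0

# Standard-curve slopes for the target and reference assays (assumed values)
E_target = efficiency_from_slope(-3.45)      # roughly 0.95
E_reference = efficiency_from_slope(-3.32)   # roughly 1.00

# Mean Ct shifts between control and treated samples (assumed values)
dCt_target = 26.0 - 24.5      # control Ct minus treated Ct for the target gene
dCt_reference = 22.0 - 21.9   # same for the reference gene

# Efficiency-corrected ratio; with E = 1 for both assays this reduces to 2^-∆∆Ct
ratio = (1 + E_target) ** dCt_target / (1 + E_reference) ** dCt_reference
print(f"relative expression ratio = {ratio:.2f}")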
Statistical comparisons should be drawn on a data representation in which the dataset is closer to a Gaussian distribution. For example, the simplistic view of 2^-∆∆Ct in percentage format introduces statistical artefacts in the data distribution that routinely make, e.g., RNAi experiments look successful. Many people also refrain from assessing the variance of the 'baseline' or calibrator group (how precise is your definition of what 100% is?) and thus completely ignore the impact of that SD on statistical significance.
If the data look skewed around the mean in any way, try transforming them back onto the mathematical scale that was used (e.g. log2 for Ct-based methods) or another scale (ln, log10) that normalises the distribution. If the data are one-sided, again choose the transformation that gives a more Gaussian distribution. Your biostatistician should be able to help.
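A quick way to check this is to compare a normality test on the linear-scale values against the log-transformed ones; a minimal sketch with invented fold-change values:

# Hypothetical sketch: do fold-change data look more Gaussian on the linear
# scale or after a log2 transform? The values below are made up.
import numpy as np
from scipy import stats

fold_changes = np.array([0.4, 0.5, 0.6, 0.9, 1.1, 1.8, 2.5, 4.0])  # right-skewed on the linear scale

for label, values in [("linear", fold_changes), ("log2", np.log2(fold_changes))]:
    stat, p = stats.shapiro(values)
    print(f"{label:>6}: Shapiro-Wilk W = {stat:.3f}, p = {p:.3f}")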
As for not using a calibrator: how do you know that differences in reaction efficiency, sample loading, etc. are not driving your results, given the compounding effect that the exponential (log-scale) amplification has as reactions proceed, where even minuscule differences in the starting material can be blown up?