I had 10 different tissue samples, of which 5 were from normal subjects and 5 were diseased. I ran an AQ RT-PCR for my gene along with GAPDH and I have the Ct values. But how do I analyse expression from the Ct values?
Monserat and Subratra gave advice for relative quantification, but Revathy was asking for advice about absolute quantification.
Absolute quantification means you have some "absolute standard" to measure against. The concentration (in some biologically meaningful unit) of your absolute standard is known, and you use it to express the quantities measured in your "unknowns" in the same unit.
This is achieved by measuring a dilution series of the standard. From the known dilutions and the measured Ct values a calibration curve can be constructed, and the Ct values from the unknowns can be compared back against this calibration curve to get their concentration in the units of the standard.
This is done for all (both) genes (target and reference), and the resulting concentrations can be processed in the usual ways.
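As a sketch of this standard-curve workflow (all Ct and concentration values below are made-up assumptions, not data from this thread): fit Ct against log10 of the known standard concentrations, then invert the fitted line to convert an unknown's Ct into a concentration. The slope of the line also yields the amplification efficiency.

```python
# Sketch of absolute quantification from a standard dilution series.
# All values are assumed example numbers; replace with your measurements.
import numpy as np

# Known concentrations of the standard dilutions (e.g. copies/uL)
# and their measured mean Ct values.
std_conc = np.array([1e6, 1e5, 1e4, 1e3, 1e2])
std_ct = np.array([14.2, 17.6, 21.0, 24.4, 27.8])

# Calibration curve: Ct = slope * log10(conc) + intercept
slope, intercept = np.polyfit(np.log10(std_conc), std_ct, 1)

# Amplification efficiency implied by the slope
# (perfect doubling per cycle corresponds to a slope of about -3.32)
efficiency = 10 ** (-1.0 / slope)

def ct_to_conc(ct):
    # Invert the calibration line to get concentration in standard units
    return 10 ** ((ct - intercept) / slope)

unknown_ct = 19.3
print(f"slope = {slope:.2f}, efficiency = {efficiency:.2f}")
print(f"unknown concentration: {ct_to_conc(unknown_ct):.0f} copies/uL")
```

This is done once per gene (target and reference); the back-calculated concentrations are then in the units of the respective standard.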
The proposed "delta-Ct method", in contrast, (1) assumes equal amplification efficiencies and (2) does not give expression levels in any interpretable biological unit.
Correction to Subratra:
Maybe I did not understand your calculations correctly, but the way I understood them there seems to be a mistake:
You say that reference gene and target gene were measured in triplicate, so you get the mean ct values
B, D for the reference gene
A, C for the target gene
in a "normal" and in a "diseased" sample, respectively.
Now you seem to mix biological and technical variances in your calculations. I'd say that it is correct to calculate it as follows:
The dct for the normal sample is then dct_normal = B - A, and
the dct for the diseased sample is then dct_diseased = D - C.
This refers, as you seem to state, to a single "normal" sample and a single "diseased" sample (note: you estimate two coefficients from two samples, so there are no residual degrees of freedom). If there are several samples, all this is done per sample. If you have n "normal" samples and m "diseased" samples, then you get n pairs of As and Bs and m pairs of Cs and Ds, giving n dct values for the "normals" and m dct values for the "diseased".
Finally, the ddct is the mean difference between the dctdiseased and the dctnormal.
This is a "standard" comparison of the means of two groups, so the t-distribution (with n+m-2 degrees of freedom) can be used to get confidence intervals and p-values.
The ddct is a log fold-change. The base of the log is the amplification efficiency.
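Putting the steps above together in a short sketch (the Ct values are invented for illustration; dct = B − A follows the convention above, and the fold change is E^ddct with an assumed efficiency E = 2):

```python
# Sketch of the per-sample ddct workflow described above.
# All Ct values are made-up example numbers.
import numpy as np

# Mean Ct per sample (technical replicates already averaged).
# Labels follow the text: A/C = target gene, B/D = reference gene.
A = np.array([24.1, 23.8, 24.5, 24.0, 24.3])  # target, normal samples
B = np.array([18.0, 17.9, 18.2, 18.1, 18.0])  # reference, normal samples
C = np.array([22.0, 21.7, 22.3, 21.9, 22.1])  # target, diseased samples
D = np.array([18.1, 18.0, 18.2, 17.9, 18.1])  # reference, diseased samples

# One dct per sample (dct = B - A, as above)
dct_normal = B - A
dct_diseased = D - C

# ddct = mean difference between the two groups of dct values
ddct = dct_diseased.mean() - dct_normal.mean()

# Two-sample t statistic, to be compared against the t-distribution
# with n + m - 2 degrees of freedom
n, m = len(dct_normal), len(dct_diseased)
sp2 = ((n - 1) * dct_normal.var(ddof=1)
       + (m - 1) * dct_diseased.var(ddof=1)) / (n + m - 2)
t = ddct / np.sqrt(sp2 * (1 / n + 1 / m))

# ddct is a log fold-change whose base is the amplification efficiency
E = 2.0  # assumes perfect doubling per cycle
fold_change = E ** ddct
print(f"ddct = {ddct:.2f}, fold change = {fold_change:.2f}, t = {t:.1f}")
```

With these made-up numbers the diseased group comes out a few cycles "closer" to the reference gene, i.e. a fold change above 1, and the t statistic would be checked against the t-distribution with 8 degrees of freedom.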
You can do a delta-Ct analysis; this alternative is a good fit for you because you can compare the expression of your gene of interest against the expression of the housekeeping gene within each sample. You can perform the analysis really easily since you have the Ct values of your gene and of GAPDH: calculate the difference between the two genes, keeping in mind that GAPDH has a normal expression level that is almost the same in all samples. So the formula is: Ct of the gene of interest - Ct of the reference gene.

The value that results from this operation is inversely related to gene expression. To make myself clear: say the reference gene has a Ct of 12 and the gene of interest has a Ct of 24 in one sample, while in another sample the Ct of GAPDH is 14 and the Ct of the gene of interest is 8. The delta-Ct of the first sample is then 12 and that of the second sample is -6, meaning that sample 2 has higher expression of the gene of interest, since its delta-Ct is lower. A lower delta-Ct means the expression of your gene of interest is closer to (or even higher than) the expression of the reference gene; conversely, sample 1 has lower expression of the gene of interest and a higher delta-Ct.

In my opinion a normalized analysis might get you into some trouble, since you would need a normal control for the expression of the gene of interest and would have to compare all your samples against that control. I hope I have been helpful.
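The arithmetic in this worked example can be checked in a few lines (Ct values taken from the example above; the names `target_ct` and `gapdh_ct` are just illustrative):

```python
# Tiny check of the worked delta-Ct example above.
samples = {
    "sample 1": {"target_ct": 24, "gapdh_ct": 12},
    "sample 2": {"target_ct": 8, "gapdh_ct": 14},
}

# dCt = Ct(gene of interest) - Ct(GAPDH); lower dCt -> higher expression
dcts = {name: ct["target_ct"] - ct["gapdh_ct"] for name, ct in samples.items()}

for name, dct in dcts.items():
    print(f"{name}: dCt = {dct}")
```

Note that sample 2's dCt comes out negative (8 − 14 = −6): its gene of interest amplifies even earlier than GAPDH, which is exactly the "lower dCt, higher expression" reading described above.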
The youtube video does a good job of explaining things. If you really want absolute quantitation, you need a good standard to run as a dilution series in parallel with your experimental samples. For absolute quantitation I highly recommend droplet digital PCR: it actually counts molecules relative to the count of molecules of your standard and is much more accurate than normal quantitative PCR.