This "problem" as described by Valentina is quite common when "replicate measures" are taken from the same "individual" (or subject/object) and when there is considerable variability between the individuals. A typical example is as follows:
The blood pressure is measured in 10 patients, once before a treatment and again (in the same patients) after the treatment. There are two groups of n=10 values, but they (usually) have a high correlation. The "baseline differences" between the patients are (usually) already so large that a specific treatment effect will get lost in the noise. An appropriate solution is to look at the individual treatment effects, i.e. to calculate, for each patient, the difference of the two measurements belonging to that patient. So one gets 10 differences that are more direct measures of the treatment effect and in which the intra-patient correlation has been eliminated. This is the principle behind the "paired analysis" (as used in a paired t-test, for instance).
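The point can be illustrated numerically. The blood-pressure values below are invented for illustration: the between-patient spread is large, the per-patient effect is small. An unpaired t statistic computed on the two groups is tiny, while the same effect measured on the per-patient differences is huge:

```python
# Hypothetical data for 10 patients (values invented for illustration):
# large between-patient variability, small and consistent treatment effect.
import math
from statistics import mean, stdev

before = [120, 135, 128, 142, 118, 130, 125, 138, 122, 133]
after  = [114, 131, 122, 138, 112, 126, 121, 132, 118, 128]
n = len(before)

# Unpaired (two-sample) t statistic: the treatment effect drowns
# in the between-patient noise.
pooled_var = ((n - 1) * stdev(before) ** 2 + (n - 1) * stdev(after) ** 2) / (2 * n - 2)
t_unpaired = (mean(before) - mean(after)) / math.sqrt(pooled_var * 2 / n)

# Paired analysis: per-patient differences remove the baseline differences,
# so the same effect now stands out clearly.
diffs = [b - a for b, a in zip(before, after)]
t_paired = mean(diffs) / (stdev(diffs) / math.sqrt(n))
```

With these numbers `t_unpaired` stays well below 2 while `t_paired` exceeds 10, which is exactly the "lost in the noise" phenomenon described above.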
By some strange misconception, Valentina (like others) seems to think that these differences need to be calculated for the "control group" (the measurements before the treatment). In this example this adjustment will make all control values exactly zero (because each value is subtracted from itself), so the mean of the "control group" is zero and there is no variance (sd=0). Then the "normalized treatment group" (which is no longer a "group", because we are already talking about the effects, i.e. the "after-before" differences!) should somehow be compared to the "normalized control group" (all zero values in this example). This is complete nonsense. Instead, the differences can be tested directly against the null hypothesis of the effect (typically d=0).
The normalization does not need to be a translation; it can be a scaling, too. This is often used when the effects are multiplicative (on the log scale it then simplifies to the translation example described above). Instead of subtracting the control values from the corresponding treatment values, the treatment values are divided by the corresponding control values, resulting in a "proportional effect measure" for the ratios. If the same transformation is applied (nonsensically!) to the control values, all normalized control values will be 1 (or 100%) and again have no variance (sd=0).
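A minimal sketch of this multiplicative case, with invented ratios: on the log scale the ratios become differences, so the log-ratios are tested against log(1) = 0 with a one-sample t statistic:

```python
# Hypothetical treatment/control ratios, one per "individual"
# (e.g. per blot or per plate). Values are invented for illustration.
import math
from statistics import mean, stdev

ratios = [1.8, 2.1, 1.7, 2.3, 1.9]
log_ratios = [math.log(r) for r in ratios]

# One-sample t statistic against the null value log(1) = 0,
# i.e. "no multiplicative effect".
n = len(log_ratios)
t = (mean(log_ratios) - 0.0) / (stdev(log_ratios) / math.sqrt(n))
```

Note that the normalized control values (all exactly 1) never enter this calculation; the ratios themselves already carry the whole effect.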
The real scenario may comprise more than two groups. For instance, the blood pressure could be measured at several (fixed) time points. Or each individual could be treated with several different drugs (giving enough time in between to not confound the effects). And the "normalization unit" does not need to be an "individual" like a patient. It may be a Western blot or an ELISA plate or a stock of cells or a cell passage and so on (to give some examples from biology).
Already the ANOVA including the normalized control group is nonsense and wrong (you would actually use the information of the controls twice, once in a completely inadmissible way; and the normalized control data definitively violates virtually all assumptions of the ANOVA). Btw: why are you performing an ANOVA at all? Is this required? Are the post hoc tests not the only thing you are interested in? (To make it clearer: would you really aim to present the ANOVA table and a thorough discussion of this table in a publication? Or is your aim just to justify the selection of one (or some) experimental groups that have "statistically significant" differences to the control group?)
Dunnett's test is not applicable for correlated values. The values are correlated by the "individuals", and with the normalization you just try to get rid of this correlation. This (i.e. the normalization) is absolutely ok, but at the cost of no longer having a separate control group. Instead, each normalized value is already a measure of the difference (or the ratio) to the (respective) control.
You can test the group means against the reference value (1 or 100%). You can use a pooled standard error for these tests, and you can control the family-wise error rate with the Bonferroni-Holm method.
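This can be sketched in a few lines. The group names and values below are invented, and the two-sided p-values use a normal approximation purely to keep the example dependency-free; in a real analysis one would take exact p-values from the t distribution (e.g. via scipy.stats):

```python
# One-sample tests of normalized group means against the reference value 1,
# with a pooled standard error and Bonferroni-Holm correction.
# All data are invented for illustration.
import math
from statistics import mean

groups = {
    "drug A": [1.02, 0.98, 1.05, 0.97, 1.01],   # no clear effect
    "drug B": [1.20, 1.25, 1.18, 1.22, 1.24],   # clear effect
    "drug C": [1.10, 1.05, 1.12, 1.08, 1.09],   # moderate effect
}

# Pooled variance: sum of within-group sums of squares over summed df.
ss = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups.values())
df = sum(len(g) - 1 for g in groups.values())
pooled_var = ss / df

def p_two_sided(t):
    # Normal approximation of the two-sided p-value (assumption for brevity;
    # use the t distribution with `df` degrees of freedom in practice).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

pvals = {}
for name, g in groups.items():
    se = math.sqrt(pooled_var / len(g))
    t = (mean(g) - 1.0) / se           # H0: group mean = 1 (= 100 %)
    pvals[name] = p_two_sided(t)

# Bonferroni-Holm: step-down adjustment of the ordered p-values.
m = len(pvals)
order = sorted(pvals, key=pvals.get)
adj, running_max = {}, 0.0
for i, name in enumerate(order):
    running_max = max(running_max, (m - i) * pvals[name])
    adj[name] = min(1.0, running_max)

rejected = {name: p < 0.05 for name, p in adj.items()}
```

Note that no "normalized control group" appears anywhere: each group of ratios is tested on its own against the fixed reference value.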
If the sample SD of one group is zero, then you need to figure out how to estimate its population SD. Often this requires looking at the literature on similar measures. The methods used in software packages make some assumptions when doing this within an ANOVA, but it is often better to think about it a bit more and be more wary of trusting those assumptions.
No, Daniel, there is no "estimate of a population SD" for the control group. The information of the control group is integrated in the normalized data, and thus the normalized control data contains no information anymore. Using this data as if it contributed additional information is clearly wrong.
Jochen, maybe I misunderstood the question. If a group has SD=0, then I would address that before standardizing or transforming in any way. If a variable has SD=0 then it doesn't vary (or covary with anything), so it provides no information about associations. I would explore this first. That said, re-reading the question, it is more confusing than I first thought reading it pre-coffee, since "normalizing" may or may not affect the mean and/or SD. I think I saw SD=0 and thought that was what the question was about.
Jochen, thank you for your answer and detailed explanation. I just need to prove the statistical significance. I agree that ANOVA with Dunnett's test is not the best one for that (I just noticed that adding more groups with a concentration-dependently increased effect alters the statistics, which is not suitable for me). I will learn more about the Bonferroni-Holm method. You said I can "test the group means against the reference value (1 or 100%)". Does this mean that I can use this 100% control for statistics using methods other than ANOVA?.. (anyway, I will try to avoid it).
With testing a group mean against the 100% value I meant that the null hypothesis is 100%. The test is actually a "single-sample test" or "one-sample test". I did NOT mean that the nonsensical "normalized control values" are used in any way, neither in an ANOVA nor somewhere else.
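In its minimal form, such a one-sample test looks like this (the percent values are invented; the comparison with the t distribution is what e.g. scipy.stats.ttest_1samp would do):

```python
# One-sample t statistic for a group of normalized values (percent of
# control) against the null value 100. Values invented for illustration.
import math
from statistics import mean, stdev

values = [112, 108, 115, 105, 110]   # treatment as % of its own control
n = len(values)
t = (mean(values) - 100.0) / (stdev(values) / math.sqrt(n))
# Compare t to the t distribution with n - 1 degrees of freedom;
# scipy.stats.ttest_1samp(values, 100) gives this t together with a p-value.
```

The control contributes only through the normalization itself; no "normalized control values" (all equal to 100) enter the test.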