The t-test computes the standard error by summing the (scaled) variances of the two samples. When the variance of one sample is 0, the total standard error depends only on the other sample. The variance (and SD) of a sample is 0 only if all observations in that sample are identical. In that case the t test reduces to a one-sample t test comparing the other sample with a nominal value.
Note also that if some values are 0 and the remaining values are all the same (e.g. 1), the SD will differ from 0. But in this case the distribution will not be normal, and you should use another type of test (e.g. a likelihood ratio test).
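A quick numpy check of that point, with made-up data (some zeros, the rest all 1s):

```python
import numpy as np

# Hypothetical sample: two zeros, the remaining four values all equal to 1.
x = np.array([0, 0, 1, 1, 1, 1])

# The sample SD (ddof=1) is nonzero even though only two distinct values occur.
sd = x.std(ddof=1)
print(sd)  # about 0.516, i.e. clearly > 0
```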
Lyudmil Antonov, you stated that when all scores in one group are the same, "the [unpaired] t test is the same as a one-sample t test comparing a sample with a nominal value." If I follow, you are describing a one-sample t-test in which the group with some variation in scores is compared to the mean of the group with no variation. If so, my first thought was that surely the df would be different. Then my curiosity got the better of me, and I started tinkering around. You can see the results of my tinkering in the attached text file. In a nutshell, in the examples I tried, a one-sample t-test comparing the group with variability to the mean of the group with no variability ended up being equivalent to an unequal variances t-test comparing the two samples directly. (And the Welch and Satterthwaite tests were identical.)
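The kind of check described above can be sketched as follows, with illustrative made-up data. The Welch statistic for the constant group is computed by hand with numpy rather than via a package routine, since packages may warn or balk at zero-variance input:

```python
import numpy as np
from scipy import stats

# Hypothetical data: group1 varies, group2 is constant.
group1 = np.array([3.0, 5.0, 4.0, 6.0, 7.0])
group2 = np.array([4.0, 4.0, 4.0, 4.0])

m1, m2 = group1.mean(), group2.mean()
s1 = group1.std(ddof=1)            # note s2 = 0 because group2 is constant
n1 = len(group1)

# One-sample t-test: group1 against the (constant) mean of group2.
t_one, p_one = stats.ttest_1samp(group1, popmean=m2)

# Welch (unequal variances) t statistic from the formula, with the
# s2^2/n2 term vanishing because s2 = 0.
t_welch = (m1 - m2) / np.sqrt(s1**2 / n1)

# Welch-Satterthwaite df with s2 = 0 reduces to n1 - 1.
df_welch = (s1**2 / n1) ** 2 / ((s1**2 / n1) ** 2 / (n1 - 1))

print(t_one, t_welch, df_welch)  # identical t statistics, df = n1 - 1 = 4
```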
The t test uses estimates of the standard deviations, and Student's version assumes they are equal (so one conservative approach is to take the larger of them as the estimate). But without knowing more about your problem I can't give any specific advice. Please describe your problem further and say why you think you got a constant for one group. If it is because that is all that is possible, then this is a different problem (as Bruce says) than if sd = 0 is just a sampling issue. Also, if you have many observations at one value, it is unlikely the other assumptions of the t-test are met (e.g., are these somewhat continuous measures?).
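One way to make the "take the larger SD" suggestion concrete, with made-up summary statistics (a sketch, not a recommendation for the questioner's data):

```python
import math

# Hypothetical summary statistics for two groups.
m1, s1, n1 = 5.2, 1.6, 12
m2, s2, n2 = 4.1, 0.9, 10

# Student's pooled-variance t statistic.
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
t_pooled = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Conservative variant: use the larger SD for both groups.  Since the
# pooled variance is a weighted average of s1^2 and s2^2, this can only
# inflate the SE and hence shrink |t|.
s_max = max(s1, s2)
t_conservative = (m1 - m2) / (s_max * math.sqrt(1 / n1 + 1 / n2))

print(t_pooled, t_conservative)  # |t_conservative| <= |t_pooled|
```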
Bruce Weaver, you can see this from the formula for the t statistic:
(m1 - m2) / sqrt(s1^2/n1 + s2^2/n2). If, for example, s2 = 0, the formula becomes (m1 - m2) / sqrt(s1^2/n1) = (m1 - m2) * sqrt(n1) / s1, which is the one-sample t statistic with m2 acting as the nominal value. Note that n2 also disappears.
Applying the Satterthwaite approximation in Welch's t test does not change things, because when s2 = 0 the approximate df reduces to just nu = nu1 = n1 - 1, and the t statistic is unchanged.
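A quick numeric check of that df claim, using the Welch-Satterthwaite formula with assumed sample sizes and SDs:

```python
# Welch-Satterthwaite df: (v1 + v2)^2 / (v1^2/(n1-1) + v2^2/(n2-1)),
# where v_i = s_i^2 / n_i.  With s2 = 0 it should reduce to n1 - 1.

def satterthwaite_df(s1, n1, s2, n2):
    v1, v2 = s1**2 / n1, s2**2 / n2
    return (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

# Try a few assumed sample sizes and SDs, holding s2 fixed at 0.
for s1 in (0.5, 1.0, 2.7):
    for n1 in (5, 20):
        for n2 in (3, 50):
            assert abs(satterthwaite_df(s1, n1, 0.0, n2) - (n1 - 1)) < 1e-9
print("df = n1 - 1 whenever s2 = 0")
```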
Fair enough, Lyudmil Antonov. When you said this in your first post of the thread...
"Then the t test is the same as a one-sample t test comparing a sample with a nominal value."
... I did not realize that you were talking about the unequal variances version of the unpaired t-test. For me, when someone says unpaired t-test without any further qualification, I take it to mean the pooled variance version. I suppose that is because the pooled variance test is the default in the stats packages I use.
Lyudmil Antonov and Bruce Weaver, isn't the first issue to resolve (and there are others), before recommending whether the questioner should use another approach, what estimate to use for sigma_2? If x_2 is (assumed to be) a constant, then you can assume sigma_2 is 0 and the problem becomes a one-sample test against that constant; finding s_2 = 0 provides a bit more backing for this assumption. If you know other values occur in the population, then you know sigma_2 > 0, and plugging in s_2 = 0 will produce a lower SE for the test than is appropriate in the above formula; that is why using s_1 only might be appropriate if you want the test to be conservative, though you could calculate it otherwise. Wouldn't it make sense to first check this aspect with the person who asked the question? We don't even know the sample sizes (or what "nearly zero" means), and if the n in group 2 is small, the purpose of the test should be stated.
Yes Daniel Wright, agreed. I did not intend to recommend any particular test in my posts. I was simply trying to understand Lyudmil Antonov's statement, which turned out to be about the unequal variances t-test, not the pooled variances t-test.
Samuel Oluwaseun Adeyemo, your #1 concerns the distinction between the population SD and the sample SD. The population SD can be something other than zero while the sample SD is equal to zero. Consider a sample of 10 Bernoulli trials with p = 90%. The population SD is sqrt(p(1-p)) = sqrt(.9 x .1) = .3, but over a third of the time the sample SD will be zero (.9^10 + .1^10, about .349), because all 10 trials agree. See the discussion above.
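That claim can be checked by simulation (p = .9, samples of 10 Bernoulli trials; the analytic probability of a zero sample SD is .9^10 + .1^10):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 0.9, 10, 100_000

# Draw many samples of 10 Bernoulli(p) trials each.
samples = rng.random((reps, n)) < p

# A sample SD of zero means all 10 trials agree (all 1s or all 0s).
frac_zero_sd = np.mean(samples.std(axis=1, ddof=1) == 0)

analytic = p**n + (1 - p) ** n  # about 0.3487
print(frac_zero_sd, analytic)   # the simulated fraction should be close
```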