Your question is quite confusing. You are probably not clear about what exactly you want.
Estimating a mean difference does not require calculating a sample size. However, if the estimate should have a given minimum precision, then the sample size matters: the larger the sample, the better the precision.
The term "precision" can be defined in different ways. In principle, the precision is inversely related to the dispersion: The lower the dispersion, the higher the precision. There are different measures of dispersion, and so there are different measures of precision. Dispersion can be given for instance as maximum deviation (to the mean or to the median), median deviation (to the mean or median), average deviation (...), average squared deviation (usually always to the mean, what is called "variance"), ...
The most common measure of dispersion is the variance. Since its unit is the squared unit of the variable, people prefer to report the square root of the variance (the "standard deviation", s) instead, which again has the same unit as the variable. Sometimes the standard deviation is given relative to the mean value, which is called the coefficient of variation: cv = s/m. This may be expressed as a percentage (cv*100%). In principle any other precision measure could be expressed as a percentage, but to my knowledge this is uncommon.
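As a small illustration of these definitions (the data values below are made up, just for the example):

```python
import statistics

# Hypothetical measurements (made-up values, only to illustrate the definitions)
x = [4.8, 5.1, 5.3, 4.9, 5.6, 5.0, 5.2]

m = statistics.mean(x)        # mean value
var = statistics.variance(x)  # sample variance (in squared units of x)
s = statistics.stdev(x)       # standard deviation, same unit as x
cv = s / m                    # coefficient of variation, cv = s/m

print(f"mean = {m:.3f}, variance = {var:.3f}, sd = {s:.3f}, cv = {cv*100:.1f}%")
```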
The standard deviation of a statistic is called the "standard error". If the statistic is the mean value, it is called the "standard error of the mean" (sem). It can be estimated by dividing the standard deviation (s) by the square root of the sample size (n): sem = s/sqrt(n). In a similar fashion, the standard error of a mean difference can be calculated. This is used to construct intervals for a "plausible" mean difference, for instance the 95% confidence interval for the mean difference. Usually, the desired width of such an interval is given, and then the sample size required to obtain this width is calculated. For this, the size of the variance must be known (from previous studies, similar data, or from an "educated guess").
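A rough sketch of that last step (not a general recipe: it assumes two groups of equal size, a common standard deviation guessed in advance, and the normal approximation for the confidence interval):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(s_guess, half_width, conf=0.95):
    """Approximate sample size per group so that the confidence interval
    for a difference of two means has the desired half-width.
    Assumes equal group sizes, a common sd 's_guess' (an educated guess
    from previous studies), and the normal approximation:
        half_width = z * s * sqrt(2/n)  =>  n = 2 * (z * s / half_width)**2
    """
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)   # e.g. ~1.96 for 95%
    return ceil(2 * (z * s_guess / half_width) ** 2)

# Example: sd guessed as 10, desired 95% CI half-width of 5 for the mean difference
print(n_per_group(s_guess=10, half_width=5))       # -> 31 per group
```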
I hope this helps you refine your question.
The sample size calculation depends on three base values (more values may be added depending on the formula applied): the first is related to the error, the second to the confidence level, and the third to the variance. The role of the variance may be the least obvious of the three.
A greater variance means lower accuracy for a given sample size, which implies that a larger sample is needed. The more accuracy is desired, the larger the sample size must be.
Now, the variance can be obtained in two ways. For a continuous quantitative variable of interest, it is taken from the observed behavior of that variable. For a discrete quantitative variable, i.e. where we define success and failure, the variance can be calculated as p*q; if the values are not known, assume the maximum variance, i.e. p = 0.5 (see the sketch below).
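A minimal sketch of such a calculation (assuming a simple random sample, the normal approximation, and the textbook formula n = z^2 * p*q / e^2 for a proportion; the error margin and confidence level are just example values):

```python
from math import ceil
from statistics import NormalDist

def sample_size_proportion(e, conf=0.95, p=0.5):
    """Sample size for estimating a proportion with margin of error 'e'.
    p*q = p*(1-p) is the variance term; p = 0.5 gives the maximum variance
    (the worst case). Uses the normal approximation:
        n = z**2 * p * (1 - p) / e**2
    """
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil(z ** 2 * p * (1 - p) / e ** 2)

# Worst case (p = 0.5), 95% confidence, 5% margin of error
print(sample_size_proportion(e=0.05))   # -> 385
```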
Determining the sample size for experimental designs usually does not follow the same procedure as for exploratory or descriptive analyses of populations.
Carlos, I am not sure if the words "accuracy" and "precision" can be used interchangeably. As far as I am aware, "accuracy" is related to "correctness", being the opposite of "bias", whereas "precision" refers to how much the data scatters. If you like, "accuracy" is a measure of how "truthful" the data is, and "precision" is a measure of how variable (or: how repeatable) the data is. Surely, both concepts can be applied to statistics (like the mean or mean difference) as well (and not just to data).
I agree with you; the word to use here is always precision. Accuracy would correspond to variance = 0, whereas precision admits some kind of error. (Sorry for my English.)