No. You need some information about the variance. This can be estimated from the standard errors or the confidence intervals, if these are given, together with the sample size (which should be given).
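For concreteness, here is a minimal sketch of that back-calculation, with made-up numbers standing in for the reported values: an SD can be recovered from a standard error as SE·√n, or from a 95% confidence interval via a normal approximation (with small samples a t quantile would be more accurate).

```python
import math

n = 50                        # reported sample size (hypothetical)
se = 1.2                      # reported standard error of the mean (hypothetical)
sd_from_se = se * math.sqrt(n)

ci_low, ci_high = 3.1, 7.9    # reported 95% CI for the mean (hypothetical)
se_from_ci = (ci_high - ci_low) / (2 * 1.96)   # normal approximation
sd_from_ci = se_from_ci * math.sqrt(n)

print(sd_from_se, sd_from_ci)  # estimated SDs on the original scale
```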
Please be careful when using published estimates: they are often larger than you should reasonably expect. Results with smaller effects are often not statistically significant and go unpublished, and sometimes experiments are repeated until one shows "statistical significance" and only that one is published. This is a form of p-hacking, but unfortunately quite common practice.
No, and not because you don't know the variance, but because the sample size needs to be based on the minimum effect size that is of real-life significance. Early studies in a field tend to be published if they show a large effect size, but smaller effects may still be of real-life significance.
You need to decide the smallest effect that would be meaningful in real terms. This usually involves taking a deep breath and making a guess. After that, you set your sample size to give at least a 90% chance of detecting an effect of this size or bigger. Why 90%? Running a study with more than a 10% baked-in probability of failing to find an effect that really exists is unethical: it wastes research resources, one of which is usually participants who work for us for nothing, and it discourages further research.
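As an illustration only (assuming a two-sided, two-sample t-test and a guessed smallest meaningful effect of d = 0.5), statsmodels can solve for the per-group sample size at 90% power:

```python
from math import ceil
from statsmodels.stats.power import TTestIndPower

# Smallest effect considered meaningful, as Cohen's d (a guess, as above).
d_min = 0.5

n_per_group = TTestIndPower().solve_power(
    effect_size=d_min, alpha=0.05, power=0.90, alternative='two-sided'
)
print(ceil(n_per_group))  # about 86 participants per group
```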
The question I ask clinical researchers is: how big would an effect have to be to make your colleagues change something about the way they manage patients? This is a complex decision that involves weighing up the costs of change, for example financial cost, resource use, and the impact on patients.
In other areas you can ask similar questions. But in the end a sample size calculation is the application of precise mathematics to wild guesses based on personal biases and unprovable assumptions.
Yes, you can calculate the required sample size for a new study if you know the effect size from a previous study, along with other information such as desired power and significance level. The sample size calculation is based on statistical principles and aims to ensure that the study has a high probability of detecting an effect of a certain size if it exists.
The formula to calculate the required sample size typically involves the following factors (a sketch combining them follows the list):
Effect Size (ES): This is the difference between the groups being studied (e.g., treatment group vs. control group) and is usually expressed as a standardized measure like Cohen's d for continuous outcomes or odds ratio for binary outcomes. If you have the effect size from a previous study, you can use that value.
Desired Power (1 - β): Power is the probability of correctly detecting a true effect if it exists. Commonly chosen values for power are 0.80 or 0.90, indicating an 80% or 90% chance of detecting an effect if it's there.
Significance Level (α): This is the threshold for determining statistical significance. Commonly used values are 0.05 or 0.01.
Type of Hypothesis Test: Depending on the research question and the type of data (continuous, categorical, etc.), you'll choose an appropriate test (t-test, chi-square test, etc.).
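Putting these ingredients together for the common case of a two-sample comparison of means, the usual normal-approximation formula is n per group = 2(z_{1-α/2} + z_{1-β})² / d². A minimal sketch of this (it slightly understates the exact t-test answer for small samples, which is why it returns 85 rather than the 86 from the t-based calculation above):

```python
import math
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means with standardized effect size d."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = norm.ppf(power)           # quantile corresponding to power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(n_per_group(0.5, alpha=0.05, power=0.90))  # -> 85
```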
Ma'Mon Abu Hammad, not quite. If we talk about a relative (or standardized) effect size like Cohen's d, you are right: Cohen's d is the ratio of the effect size [= mean difference] to the standard deviation, so it already includes the information about the variance. (I still advocate strongly against using such measures, because they conflate the often relevant information about the actual effect size [= mean difference] and the variance.) But for hazard ratios you also need some information about the prevalences, which is equivalent to information about the variance.
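To see that conflation concretely, here is a toy illustration (all numbers invented): two findings with completely different practical meaning produce the same Cohen's d.

```python
# Same Cohen's d = mean difference / SD from very different situations:
d_blood_pressure = 10 / 20    # 10 mmHg difference, SD 20 mmHg
d_rating_scale   = 0.5 / 1.0  # 0.5-point difference, SD 1 point
print(d_blood_pressure == d_rating_scale)  # True: d hides the raw scale
```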
A published effect size might be useful for planning other experiments, but it should be supplemented by precision measures such as standard errors or confidence intervals; otherwise it is of limited use and could even be misleading.