It depends on the kind of data, the sample design, and, for quantitative data, the standard deviation. A pilot study may be advisable. For continuous data, or yes/no data in finite populations with randomization, there are a number of good textbooks. Two I know of that specifically have sample size estimation chapters (though you will likely need to look at more than just those chapters) are given below:
Cochran, W.G. (1977), Sampling Techniques, 3rd ed., John Wiley & Sons.
Blair, E. and Blair, J. (2015), Applied Survey Sampling, Sage Publications.
There are many other good books. For example:
Lohr, S.L. (2010), Sampling: Design and Analysis, 2nd ed., Brooks/Cole.
Särndal, C.-E., Swensson, B. and Wretman, J. (1992), Model Assisted Survey Sampling, Springer-Verlag.
Brewer, K.R.W. (2002), Combined Survey Sampling Inference: Weighing Basu's Elephants, Arnold: London and Oxford University Press.
You will likely need other books for Likert scales, qualitative data, etc.
Cheers - Jim
PS - If you see a sample size 'calculator' on the internet, it will likely only be for yes/no data, assuming a worst-case proportion (p = 0.5), without considering a finite population correction factor.
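To make the PS concrete, here is a minimal sketch of the standard sample size formula for a proportion, with the worst-case p = 0.5 default and an optional finite population correction. The function name `sample_size_proportion` is my own; the formula itself is the textbook one (e.g., Cochran's).

```python
import math
from statistics import NormalDist

def sample_size_proportion(e, conf=0.95, p=0.5, N=None):
    """Sample size to estimate a proportion within margin of error e.

    p=0.5 is the worst case (largest variance); passing a finite
    population size N applies the finite population correction."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    n0 = z**2 * p * (1 - p) / e**2          # infinite-population size
    if N is not None:
        n0 = n0 / (1 + (n0 - 1) / N)        # finite population correction
    return math.ceil(n0)

print(sample_size_proportion(0.05))          # worst case, no correction -> 385
print(sample_size_proportion(0.05, N=2000))  # correction shrinks the answer -> 323
```

Note how much the finite population correction matters for small populations: ignoring it, as many online calculators do, overstates the required sample.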
GPower is a fairly popular program for estimating sample size. It is available free on the internet. There are quite a few other choices as well.
That said, this may be of little real help. Such programs often want information that you do not have (unless you do a pilot study, as James suggested). They also assume a fairly simple experimental design. For example, I am not sure how to use GPower to estimate sample size for a three-way interaction term that I need to estimate within 2% of the true value, given that I have to use an incomplete block design. For more complex problems, a simulation study could be developed to give a rough approximation.
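The simulation idea can be sketched very simply. The toy below estimates power for a plain two-group comparison by Monte Carlo; for a complex design you would replace the data generation and the test with your actual model, but the loop structure stays the same. The function name, effect size, and the z-approximation to the test are all my own illustrative choices, not anything from a specific package.

```python
import random
import statistics

def simulated_power(n, effect=0.5, sd=1.0, reps=2000, seed=1):
    """Monte Carlo power for a two-sample comparison at alpha = 0.05.

    Placeholder for a more complex design: swap in your own data
    generator and estimator, keep the repeat-and-count structure."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        a = [rng.gauss(0.0, sd) for _ in range(n)]
        b = [rng.gauss(effect, sd) for _ in range(n)]
        # crude z approximation to a two-sample test; fine for a rough sketch
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        z = (statistics.mean(b) - statistics.mean(a)) / se
        if abs(z) > 1.96:
            hits += 1
    return hits / reps

# Increase n until the simulated power reaches your target (e.g., 0.80)
for n in (10, 20, 40, 80):
    print(n, simulated_power(n))
```

Running the simulation over a grid of n values and picking the smallest n that reaches the target power is the rough approximation mentioned above.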
The answer also depends on the nature of the problem. Is this a field trial looking at the effect of nitrogen fertilization levels on yield, or is this a survey of university faculty opinions on global warming? The latter problem might have issues with unanswered questions as well as people who simply refuse to take the survey. As part of this, one also needs to know whether the question is univariate or multivariate. If all I am interested in is the mean proportion of university faculty who say "global warming is an issue," then life will be easy. If I am developing a multivariate model showing that people in England have a different view of global warming than people in France based on their answers to a 24-question survey, then there is a problem of getting enough surveys where all the questions are answered (or developing a way to impute the missing data).
The question is also difficult to answer because external factors need to be considered. These include the cost of collecting samples, the time involved in sampling, deadlines for achieving research goals, and ethical considerations if the experiments involve harm or death of animals. Maybe the experiment risks massive environmental contamination, and the risk increases with increasing sample size.
Even if statistics is not a fun topic, I would suggest that you push the limits. I keep seeing studies with 3 to 5 replicates. Sometimes that is all the question is worth, but with so few replicates one tends to miss details in the system. Of course, gathering more data increases the chance that the data analysis will take more work. A really small sample size also provides no forgiveness if one or two replicates fail for some reason and have to be deleted.
This was a nice simple question that has no simple answer that will hold for all possible research problems. It might help to consult with a local statistician to try to avoid unexpected problems. However, for a truly novel question there is no substitute for a good pilot study where you learn about the special problems associated with your task.
In my research I work with insects in the United States of America. There is little social stigma associated with killing a few more insects. So my tendency is to collect as many replicates as I can, and collect as much data from each replicate as I can. So I have the planned research question, and then several unplanned research questions just to provide a little insurance that the answer that I get from the main project is not an artifact of how I collected the data and ran the analysis.
Before you calculate your sample size, you have to define: population size (an estimate), margin of error, confidence level, and standard deviation. G-Power is a good tool for calculating sample sizes - and it's free: http://www.gpower.hhu.de/
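Those four ingredients plug directly into the standard formula for estimating a mean. A minimal sketch, assuming the standard deviation comes from a pilot study or a published value (the function name `sample_size_mean` and the example numbers are mine):

```python
import math
from statistics import NormalDist

def sample_size_mean(sd, e, conf=0.95, N=None):
    """Sample size to estimate a mean within +/- e at the given confidence,
    given a pilot (or guessed) standard deviation sd.

    Passing the population size N applies the finite population correction."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    n0 = (z * sd / e) ** 2                  # infinite-population size
    if N is not None:
        n0 = n0 / (1 + (n0 - 1) / N)        # finite population correction
    return math.ceil(n0)

print(sample_size_mean(sd=15, e=3))          # hypothetical pilot sd of 15
print(sample_size_mean(sd=15, e=3, N=500))   # small population, corrected
```

The required n scales with (sd/e) squared, which is why a vague guess at the standard deviation, or a too-ambitious margin of error, dominates the answer.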
Well, it's been a long time since I last used it. What's your operating system? I'm attaching the downloadable app file for macOS. Hope it helps.