Optimum for me would be a small sample size that also produces the correct answer every time. With this definition the answer is no.
A population is a sample from some underlying distribution. From that population you take a sample and then try to estimate the underlying distribution. If you repeat this millions of times, on average you will get the correct answer even with very small sample sizes (4 or 5) and small population sizes. However, any one experiment with such a small sample is mostly noise: the numbers you see reflect sampling variability far more than they reflect the underlying distribution.
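The point above can be illustrated with a small simulation. This is a hedged sketch, not part of the original answer: it assumes a hypothetical normal population with mean 50 and standard deviation 10, draws samples of size 5 many times, and shows that the estimates are correct on average while any single estimate is quite noisy.

```python
import random
import statistics

random.seed(1)

TRUE_MEAN, TRUE_SD = 50.0, 10.0   # hypothetical underlying distribution
N_EXPERIMENTS, SAMPLE_SIZE = 100_000, 5

# Repeat a tiny-sample experiment many times and record each sample mean.
sample_means = [
    statistics.mean(random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(SAMPLE_SIZE))
    for _ in range(N_EXPERIMENTS)
]

grand_mean = statistics.mean(sample_means)   # very close to TRUE_MEAN on average
spread = statistics.stdev(sample_means)      # but any one estimate wanders widely
print(grand_mean, spread)
```

With n = 5 the standard deviation of the sample means is about 10/sqrt(5) ≈ 4.5, so a single experiment can easily miss the true mean by 5 or more even though the long-run average is right.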
You could try a program called G*Power (http://gpower.hhu.de/). However, at best this will give you a minimum sample size; there is no guarantee that it will be sufficient once the data are collected. It provides a good guess only to the extent that the values you enter into the program are correct, and generally those values carry some sampling error, along with assumptions that might (or might not) hold. For example, it assumes that the standard deviation you measured in a preliminary test accurately reflects the population you will sample in the full experiment. If the preliminary data were gathered a year ago and the full experiment runs next year, the population might have changed slightly, and that change will make the G*Power estimate less reliable.
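To show what such a calculation involves, here is a minimal sketch (not G*Power itself) of the standard normal-approximation formula for comparing two means; the pilot standard deviation of 10 and the target difference of 5 are made-up illustration values.

```python
from math import ceil
from statistics import NormalDist

def two_sample_n(sd, delta, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample z-test (normal approximation).

    sd    : assumed common standard deviation (e.g. from a pilot study)
    delta : smallest difference between group means worth detecting
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Hypothetical pilot values: sd = 10, detect a difference of 5 units.
print(two_sample_n(sd=10, delta=5))  # → 63 per group
```

Note how the answer scales with (sd/delta) squared: if the pilot sd is off by even 20%, the required n changes by roughly 44%, which is exactly why the G*Power output is only as good as its inputs.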
Keep in mind that the sample size from G*Power does not include missing data. It does not factor in a flood that takes out block 5. It does not factor in the incomplete survey, dropped sample, or water that splashed on the data book thereby making 20 values unreadable.
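A common planning practice (not something G*Power does for you) is to inflate the computed minimum by an assumed loss rate; the 15% figure below is purely illustrative.

```python
from math import ceil

def inflate_for_attrition(n_required, expected_loss_rate):
    """Inflate a planned sample size so that, after the expected fraction of
    observations is lost (dropouts, ruined plots, unreadable records),
    roughly n_required usable values remain."""
    if not 0 <= expected_loss_rate < 1:
        raise ValueError("loss rate must be in [0, 1)")
    return ceil(n_required / (1 - expected_loss_rate))

# E.g. 100 required usable values with an assumed 15% loss:
print(inflate_for_attrition(100, 0.15))  # → 118
```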
Sample size is also about risk. It is always easier to look back and say "I gathered a few too many samples" than it is to look back and say "in hindsight my results are meaningless because I did not gather enough data."
I agree with all the above answers. I would also emphasize the costs and time needed to carry out your sampling protocol, ensuring, as mentioned by Prof. Ebert, that measurement and recording are done carefully. Note that different statistical techniques can require different sample sizes to achieve the same precision. For example, depending on the nature of the population, some classical sampling techniques can be more efficient (systematic sampling can need a much smaller sample size than simple random sampling to achieve the same precision). The same occurs with experimental design (e.g. a block design can need fewer replicates than a completely randomized design). The book by Scheaffer, Mendenhall, and Ott is a good basic one.