It depends on the subject matter, on how variable the data you can get is, on how big or small the effects you want to detect are, and on what margin of error you are willing to accept.
In his famous "lady tasting tea" experiment, Fisher found n = 8 enough to get data that would allow him to believe that a lady could taste whether tea or milk was added to the cup first, if she was able to correctly separate the 2 x 4 prepared cups into two groups ("tea first" and "milk first").
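For anyone wondering why n = 8 is already informative: a pure guesser has only a 1-in-70 chance of splitting all 8 cups correctly. A quick sketch of that arithmetic in Python (the cup counts are Fisher's design):

```python
from math import comb

# Fisher's design: 8 cups, 4 with milk poured first and 4 with tea poured first.
# A pure guesser picks 4 cups to call "milk first"; only one of the C(8, 4)
# equally likely choices is completely correct.
ways = comb(8, 4)            # 70
p_perfect_guess = 1 / ways   # ~0.014
print(f"Chance of getting all 8 cups right by guessing: 1/{ways} = {p_perfect_guess:.3f}")
```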
In a single run of measurements at the Large Hadron Collider (LHC), about 5 billion events are recorded, and even this is often not enough to identify the resulting particles with sufficient confidence. (http://rsta.royalsocietypublishing.org/content/roypta/373/2032/20140384.full.pdf)
It depends on the standard error you require for the analysis. With 400 participants (or particles, or experiments) the standard error is 1/√400 = 0.05, or 5%. To reduce this to 1% would require 10,000 of whatever you are counting.
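A minimal sketch of that back-of-the-envelope rule (it assumes the standard error scales as 1/√n and ignores the actual variance of whatever is being measured):

```python
from math import sqrt, ceil

def standard_error(n: int) -> float:
    """Rough standard error under the 1/sqrt(n) rule of thumb used above."""
    return 1 / sqrt(n)

def n_for_target_se(target_se: float) -> int:
    """Smallest n whose 1/sqrt(n) standard error is at or below the target."""
    return ceil(1 / target_se ** 2)

print(standard_error(400))      # 0.05  -> 5% with 400 observations
print(n_for_target_se(0.01))    # 10000 -> to reach 1% you need 10,000 observations
```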
OK, that's enough. Theories and formulas for estimating sample size, or software such as Minitab, are just tools for better understanding. If you have more experience and understanding, then your experience counts for more than anything else.
Abolfazl Ghoodjani: Formulas and algorithms are simply mathematical manipulations or mathematical engines. They provide no insight or understanding on their own, though they may indicate which variables are involved. Any understanding comes from evaluating the problem with the grey matter between the ears, and then proposing a formulation that can be tested by experiment or against available stable data. This then allows predictions to be made, which can also be tested.
The size of the sample selected for analysis largely depends on the expected variations in properties within a population, the seriousness of the outcome if a bad sample is not detected, the cost of analysis, and the type of analytical technique used.
Given this information, it is often possible to use statistical techniques to design a sampling plan that specifies the minimum number of sub-samples that need to be analyzed to obtain an accurate representation of the population.
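The answer above does not name a particular technique, but one common textbook route from "expected variation plus acceptable margin of error" to a minimum sample size is n = (z·σ/E)². A rough sketch, with σ and the margin of error made up purely for illustration:

```python
from math import ceil
from statistics import NormalDist

def min_sample_size(sigma: float, margin_of_error: float, confidence: float = 0.95) -> int:
    """Classic n = (z * sigma / E)^2 sample-size formula for estimating a mean,
    assuming an approximately normal population with known standard deviation."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided critical value
    return ceil((z * sigma / margin_of_error) ** 2)

# Example: a property that varies with sigma = 2.0 units, and we want the
# estimated mean to be within +/- 0.5 units at 95% confidence.
print(min_sample_size(sigma=2.0, margin_of_error=0.5))  # -> 62
```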
Often the size of the sample required is impractically large, and so a process known as sequential sampling is used. Here, sub-samples selected from the population are examined one after another until the results are sufficiently definite from a statistical viewpoint.
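A minimal sketch of that idea, using a deliberately simple stopping rule (keep drawing until the confidence interval for the mean is narrower than a target; real sequential plans adjust their critical values for the repeated looks at the data):

```python
import random
from statistics import NormalDist, mean, stdev

def sequential_sample(draw, margin_of_error: float, confidence: float = 0.95,
                      min_n: int = 5, max_n: int = 10_000) -> list:
    """Keep drawing sub-samples until the confidence interval for the mean is
    narrower than the requested margin of error, or max_n is reached."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    samples = [draw() for _ in range(min_n)]
    while len(samples) < max_n:
        half_width = z * stdev(samples) / len(samples) ** 0.5
        if half_width <= margin_of_error:
            break
        samples.append(draw())
    return samples

# Toy population: a measurement with true mean 10 and sd 2 (made up for illustration)
random.seed(1)
result = sequential_sample(lambda: random.gauss(10, 2), margin_of_error=0.5)
print(len(result), round(mean(result), 2))
```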