Hi Nurhayati Nurhayati, I think what you are really asking is whether you need a certain sample size to achieve sufficient statistical power to detect the effects you are interested in. At least, that is the consideration most experimental researchers have to make when they plan the sample size for experimental studies. Here are a few things to consider and some tips:
Sample size depends somewhat on the magnitude of the effect you are trying to detect. If the effect is very large in the population, you will likely detect it even with small sample sizes. However, we typically deal with situations where we try to detect a very specific effect that we only hypothesize in a specific situation. Detecting it may therefore be harder, and you need a larger sample size to be reasonably confident that (a) the effect you find is not due to chance and (b) you detect the effect at all.
One way to determine the sample size necessary to detect an effect of a given size (say, you expect a medium effect, e.g., 0.3) is to run power calculations at different sample sizes. One popular tool for this is G*Power (http://www.gpower.hhu.de/fileadmin/redaktion/Fakultaeten/Mathematisch-Naturwissenschaftliche_Fakultaet/Psychologie/AAP/gpower/GPowerManual.pdf). You can find a lot of material on this tool online (http://www.statpower.net/Content/312/Handout/gpower-tutorial.pdf).
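To illustrate the logic behind such a power calculation, here is a minimal Python sketch for a two-sided, two-sample t-test. It uses the standard normal approximation (power ≈ Φ(d·√(n/2) − z₁₋α/₂)) rather than the noncentral-t computation G*Power performs, so treat it as an approximation; the function names are my own.

```python
from statistics import NormalDist

_Z = NormalDist()  # standard normal distribution

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample t-test for a
    standardized effect size d (Cohen's d), via the normal
    approximation: power ~= Phi(d * sqrt(n/2) - z_{1 - alpha/2})."""
    z_crit = _Z.inv_cdf(1 - alpha / 2)
    return _Z.cdf(d * (n_per_group / 2) ** 0.5 - z_crit)

def required_n(d, target_power=0.80, alpha=0.05):
    """Smallest per-group n whose approximate power reaches the target."""
    n = 2
    while approx_power(d, n, alpha) < target_power:
        n += 1
    return n
```

For a medium effect of d = 0.3 at 80% power and α = .05, this approximation yields roughly 175 participants per group, in line with what G*Power reports for the same scenario. Note how quickly the requirement shrinks as the effect grows: a large effect needs far fewer cases per group.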
In quasi-experimental designs you usually do not have the luxury of determining the sample size at will; rather, you need to "take what is there". In this case I would recommend building a synthetic control group from the overall sample population, using matching techniques such as Propensity Score Matching, Inverse Probability Weighted Regression Adjustment, or the Synthetic Control method. These techniques allow you to pair treated units with highly similar untreated controls. The resulting sample has two groups that are comparable on a number of descriptive variables (e.g., firm size, industry, number of employees, financial leverage) but differ on the treatment (say, a new product announcement).
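To make the matching step concrete, here is a toy sketch of greedy 1:1 nearest-neighbor matching on propensity scores, without replacement and with a caliper. It assumes the propensity scores have already been estimated (e.g., from a logistic regression of treatment on the covariates mentioned above); the function and the example data are purely illustrative.

```python
def greedy_match(treated, controls, caliper=0.1):
    """Greedy 1:1 nearest-neighbor matching on propensity scores,
    without replacement. `treated` and `controls` map unit ids to
    already-estimated propensity scores. Returns a dict
    {treated_id: control_id}; treated units with no control within
    the caliper remain unmatched."""
    pool = dict(controls)  # copy: matched controls are removed
    matches = {}
    # Match the hardest-to-match (highest-score) treated units first.
    for t_id, t_score in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not pool:
            break
        c_id = min(pool, key=lambda c: abs(pool[c] - t_score))
        if abs(pool[c_id] - t_score) <= caliper:
            matches[t_id] = c_id
            del pool[c_id]
    return matches

# Illustrative data: two treated firms, three candidate controls.
pairs = greedy_match({"T1": 0.80, "T2": 0.35},
                     {"C1": 0.78, "C2": 0.40, "C3": 0.10})
# T1 is paired with C1 and T2 with C2; C3 is too dissimilar to either.
```

In practice you would of course estimate the scores from data and check covariate balance after matching, but the core idea is exactly this: keep only control cases that closely resemble the treated cases.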
In conclusion, there are accepted and systematic ways to estimate your sample size requirement. However, you may experience some difficulty in meeting these requirements with a quasi-experimental setup. In such cases, it is most important to identify valid covariates on which to match treatment and control groups, and to make sure that the control pool is large enough to find comparable cases for the treatment group (usually, this is not a problem).
I am impressed by Prof. Mafael's efforts in paragraph 3. Quasi-experimental designs are usually described as what you do when you can't randomize. This is often the case in educational studies, where students are assigned to class groups by somewhat arcane methods designed by school administrators, not educational researchers. The great statistician R. A. Fisher, the developer of experimental design, said, "Without randomization there is no experimentation". Another great statistician, a student of Fisher's, was once asked if one has to randomize. The following article was Box's reply:
Much as I admire Prof. Mafael's fine attempt (and I must say the statistician Wayne Daniel offered a somewhat similar approach in his book Applied Nonparametric Statistics), I am most impressed by Prof. Box's article, which again can be found at this link: