For good research, a hypothesis test is the worst you can do. It is the last option you have when you don't have any specific idea about what you measure and what it means, and the only thing you can interpret is whether a change/effect is positive or negative. I wouldn't call that good research.
IMHO, good research would mean developing a meaningful quantitative model that actually explains the quantitative relationships between the variables of interest. This leans more towards "estimation" than "testing". Although estimation and testing are often mentioned together in many textbooks (as if they were two sides of the same coin), they are quite different things. Good research would aim to answer what the consequences are if a particular coefficient in a model is of this or that size, and how sensitively the consequences change with the size of this coefficient. This allows one to determine how precisely the coefficient should be estimated, and that in turn gives the amount of data required to reach this precision.
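A minimal sketch of that last step, assuming the simplest possible case (estimating a mean to a chosen precision); the numbers for the standard deviation and the acceptable confidence-interval half-width are purely illustrative:

```python
import math

sigma = 8.0        # assumed standard deviation of the outcome
half_width = 2.0   # acceptable 95% CI half-width for the estimate
z = 1.96           # ~97.5th percentile of the standard normal

# The standard error of a sample mean is sigma / sqrt(n), so solve
# z * sigma / sqrt(n) <= half_width for n.
n = math.ceil((z * sigma / half_width) ** 2)
print(n)  # 62 observations for this precision target
```

The point is that the sample size falls out of the precision your decision requires, not out of a significance threshold.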
BUT, I know well that it is quite rare that we can really do that (at least in complex systems such as those typical in the life sciences, econometrics, and the social sciences). Here, testing is the only tool we have to take a first step. Good research would mean not stopping there, but unfortunately most papers just show that something is "significant", and this is seen as the end of all efforts (rather than the beginning). We thus seem to be caught in a cycle of producing "significant results" without ever developing reliable, meaningful, and preferably quantitative models that go beyond the isolated, specific research question. As an example, you may compare how mechanics was developed in physics with how biologists struggle to develop systems biology (where biologists clearly face much larger, often hardly solvable problems in even defining the system and quantifying the relevant variables).
A rule of thumb is to use the same sample size as that used by previous good research investigating a similar phenomenon.
But if you want to be a bit more rigorous, the minimum sample size depends on several factors, among them the characteristics of the statistical test you are going to use and the effect size you expect to find. The latter also depends on previous similar research.
The good thing is that if you have this information, there is a program called G*Power that can calculate this number for you. You can download it for free here: http://www.gpower.hhu.de/
There is also a helpful tutorial in there, plus you can find lots of videos on YouTube that explain how to use it.
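If you prefer a script to a GUI, a similar a-priori calculation can be sketched in Python with the statsmodels power module; the effect size and power below are placeholders you would replace with values grounded in previous similar research:

```python
import math
from statsmodels.stats.power import TTestIndPower

# A-priori sample size per group for a two-sample t-test,
# in the same spirit as G*Power.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed Cohen's d taken from prior, similar studies
    alpha=0.05,       # two-sided significance level
    power=0.80,       # desired probability of detecting the effect
)
print(math.ceil(n_per_group))  # 64 participants per group
```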
Research design is not an easy task. Various important factors are involved in it. An experienced person should do the work, not the student alone.
Sample size always depends on the subject of research. But nowadays, you cannot set it simply as you wish, particularly where any living animal or human is involved. You have to obtain permission from the Ethics Committee.
They will decide the number of laboratory animals to be used.
I say that this is way too much in some instances and way too little in others. Such "rules of thumb" are supporting, if not promoting, bad research.
Newton said in his book Opticks: "As in Mathematicks, so in Natural Philosophy, the Investigation of difficult Things by the Method of Analysis, ought ever to precede the Method of Composition. This Analysis consists in making Experiments and Observations, and in drawing general Conclusions from them by Induction, and admitting of no Objections against the Conclusions, but such as are taken from Experiments, or other certain Truths. For Hypotheses are not to be regarded in experimental Philosophy." For me, hypotheses are an important source of uncertainty and nourish the proliferation of explanations in the social sciences. I plead for abandoning the hypothetico-deductive approach and for substituting it with classical induction (in the sense of Bacon). This way of thinking is not in fashion today in the social sciences, yet it is commonplace in the natural sciences.
I entirely agree with Jochen Wilhelm's answer: a hypothesis test is the worst you can do.
Shocking as it may seem, Jochen Wilhelm's response is right on the money.
Sample size is not determined by the hypothesis you are testing, but by the effect you are trying to measure. It is based on the minimum size of that effect that would be of real-life importance, and on the level of risk of failing to find it that you are prepared to take.
For example, if I am designing a study to improve the outcome of physiotherapy treatment by introducing hydrotherapy, I have to ask what the smallest improvement in client outcome is that would justify introducing a therapy that requires expensive resources, both in terms of the facilities needed and the number of clients a therapist can see in a working day. These questions are not statistical but practical. This defines the effect size.
Then I have to define the risk of failing to find it. No study is guaranteed to find the effect, just as no fishing net is guaranteed to catch a fish. And, indeed, you can think of a study like a fishing net – the effect size you wish to detect determines the size of the holes in the net, and the chance that the net catches a fish of that size, if one is there, is your power.
Some people recommend 80% power. To my mind, this is neither sensible nor ethical. To run a study with one chance in five (20%) of failing to find an important effect size, if it exists, is unjustifiable. If the effect is worth finding, then you need to ensure that you have a good chance of finding it – 90% or 95% power. To use 80% power suggests you don't care if you don't find it, which means it isn't important, which means your study is wasting research resources – including the unpaid labour of the participants – which is unethical.
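To make the cost of that extra assurance concrete, here is a small sketch (again using statsmodels, with an assumed effect size of d = 0.5 chosen only for illustration) of how the required sample per group grows with power:

```python
import math
from statsmodels.stats.power import TTestIndPower

# Per-group sample size for a two-sample t-test at increasing power levels.
analysis = TTestIndPower()
for power in (0.80, 0.90, 0.95):
    n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=power)
    print(f"power {power:.0%}: {math.ceil(n)} per group")

# power 80%: 64 per group
# power 90%: 86 per group
# power 95%: 105 per group
```

Moving from 80% to 95% power costs roughly two-thirds more participants; that is the price of cutting the risk of missing a real effect from one in five to one in twenty.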
The problem these days is that researchers are in such a hurry to get to press that, with the minutest evidence of some sort of effect, once the null hypothesis is rejected, they run to press. The press, on the other hand, is anxiously waiting for these papers to come, and the next morning we have a new "Journal" paper published!
There is no clear-cut rule. Generally, it depends on the nature of the data set. For time series, Walter Enders recommends a minimum of 50 observations; for cross-sectional and panel data, I guess the larger the better. The reason is to capture the variability and enjoy the benefit of the law of large numbers. It also depends on the test to be carried out, as some tests require lag selection, and so one needs degrees of freedom. Lastly, I recommend that the researcher consult top, reputable indexed journals for the consensus practice.
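As a rough illustration of the degrees-of-freedom point (the numbers are purely illustrative), each extra lag in an autoregressive model both shortens the usable sample and adds a parameter:

```python
# Residual degrees of freedom in an AR(p) model fitted to n observations:
# the first p observations are lost to lagging, and p + 1 parameters
# (p lag coefficients plus an intercept) are estimated.
n = 50  # the minimum Enders suggests for time series
for p in (1, 4, 8, 12):
    usable = n - p
    params = p + 1
    print(f"AR({p}): {usable} usable observations, {usable - params} residual df")
```

With 12 lags of a 50-observation series, barely 25 residual degrees of freedom remain, which is why small time-series samples are so fragile.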
Thank you so much, Dr Jochen Wilhelm, Dr Job Nda Nmadu, Dr Ronán Michael Conroy. Sample size is not determined by the hypothesis you are testing, but by the effect you are trying to measure.