The best sampling method cannot be discussed without knowledge of the subject of the study and the ultimate sampling unit.
Since a panel sample is proposed, it is presumed that one is interested in studying some variable over a period of time. In such cases the sample should be selected so that its representativeness is not lost over time.
There are randomized designs, model-assisted design-based methods, and purely model-based methods. Regardless, stratification (subdividing the population using auxiliary information) is often very helpful.
In the first paragraph of my answer there, I note a problem with random sampling. That does not mean randomization isn't often best, though there is an argument that it could be considered irrelevant. I personally do think randomized designs are often best. Even so, auxiliary information is important, and even good stratified random sampling cannot be done without some such knowledge.
One particular problem with simple random sampling: if your population has relatively few but very large members, then the inclusion or non-inclusion of just one such case in your simple random sample will greatly impact not only your estimated mean (or total) but also your estimated standard errors. That is, not only will accuracy be low, but your assessment of that accuracy will be very poor as well.
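To see the point concretely, here is a minimal simulation sketch with entirely made-up numbers (995 small establishments and 5 huge ones; the values and sample size are illustrative, not from any real survey). It shows how the sample mean under simple random sampling swings wildly depending on whether a large unit happens to be drawn.

```python
import random
import statistics

random.seed(0)

# Hypothetical skewed finite population: 995 small establishments
# plus 5 very large ones (all numbers invented for illustration).
population = [random.uniform(10, 20) for _ in range(995)] + [5000.0] * 5

# Draw many simple random samples of size 50 and record the sample mean,
# split by whether the sample happened to include a large establishment.
means_with_large, means_without_large = [], []
for _ in range(2000):
    sample = random.sample(population, 50)
    if max(sample) >= 5000.0:
        means_with_large.append(statistics.mean(sample))
    else:
        means_without_large.append(statistics.mean(sample))

# Samples that caught a large unit estimate a very different mean than
# samples that missed them all: the estimate hinges on a few cases.
print(statistics.mean(means_with_large))     # far above the true mean
print(statistics.mean(means_without_large))  # far below the true mean
```

The two groups of estimates sit on opposite sides of the true mean, so both the point estimate and any variance estimate computed from a single sample are unreliable.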
Remember that variance and bias estimates can have estimation problems of their own.
If you knew about those odd cases, say in a finite population of establishments, those large cases could be treated differently: through stratification, unequal-probability sampling, and/or regression modeling.
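As one hedged sketch of the stratification idea (again with invented numbers matching the example above), the known large units can be placed in a "certainty" (take-all) stratum that is always observed, with simple random sampling applied only to the small stratum. The estimator below is a plain expansion estimator; it is an illustration of the technique, not a prescription.

```python
import random
import statistics

random.seed(1)

# Hypothetical population: 995 small establishments plus 5 known
# large ones (values invented purely for illustration).
small = [random.uniform(10, 20) for _ in range(995)]
large = [5000.0] * 5
true_total = sum(small) + sum(large)

def stratified_total_estimate(n_small_sample=45):
    """Take-all stratum for the large units; SRS for the small stratum."""
    sample = random.sample(small, n_small_sample)
    expanded_small = statistics.mean(sample) * len(small)  # expansion estimator
    return expanded_small + sum(large)  # large units observed with certainty

estimates = [stratified_total_estimate() for _ in range(2000)]
rel_error = abs(statistics.mean(estimates) - true_total) / true_total
print(rel_error)  # small: the volatile large units no longer add variance
```

Because the large units are observed with certainty, they contribute no sampling variance at all, and the remaining variability comes only from the well-behaved small stratum.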
I fundamentally disagree that simple random sampling is always the best; just look at the classic books of Leslie Kish: Statistical Design for Research (1987) and Survey Sampling (1965).
My view is that it is horses for courses: the 'best' design depends on what you are doing.
To take a single example: say I am interested in understanding peer effects on pupils' progress in learning. I need a measure of each pupil's achievement and of the achievement of their peers, say those who belong to the same classroom. If I draw a simple random sample of 5,000 pupils in the UK, I will end up with 1 or at best 2 pupils in any given class, so I would not be able to answer my research question. For this study the 'best' design is a highly clustered multistage design, and huge progress has been made in the analysis of such data over the last 30 years.
Moreover, the population itself may be clustered, e.g. patients in hospitals, pupils in schools, people in neighbourhoods, individuals in twin pairs, so you need to take account of this important structure. Such samples are not defective; they just need to be analysed properly.
+1 for all the comments noting that SRS is often inefficient (both in actually carrying out the sampling and in its statistical properties) compared with alternatives, and often not "the best". What I want to add is a clarification to some responses saying that SRS means each individual has an equal chance of being chosen. This is a necessary but not sufficient condition, and many sampling procedures give every individual an equal chance (if the population is spread equally among six towns and you roll a die to determine which town to sample, all people have an equal chance of being sampled). SRS requires more: if you are sampling n people, each possible set of n people must have an equal chance of being sampled. Some answers say "random sampling"; if by this they mean only that somewhere in the sampling procedure there is a probability process, that is probably too vague to help the reader.
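The die-roll example can be checked numerically. This is a toy sketch (6 towns of 4 people each, sizes invented for illustration): every individual's inclusion probability comes out equal, even though the procedure is clearly not SRS, since two people from different towns can never be sampled together.

```python
import random
from collections import Counter

random.seed(3)

# Toy population: 6 towns of 4 people each (hypothetical sizes).
towns = [[(t, p) for p in range(4)] for t in range(6)]

def die_roll_sample():
    """Roll a die to pick one town, then take 2 of its 4 residents."""
    town = random.choice(towns)
    return random.sample(town, 2)

# Each individual's inclusion probability is (1/6) * (2/4) = 1/12,
# the same for everyone, so the equal-chance condition holds.
counts = Counter()
n_reps = 60_000
for _ in range(n_reps):
    for person in die_roll_sample():
        counts[person] += 1

inclusion_rates = [counts[person] / n_reps for town in towns for person in town]
print(min(inclusion_rates), max(inclusion_rates))  # both close to 1/12

# Yet this is NOT simple random sampling: a pair of people from two
# different towns has probability zero of appearing together, whereas
# an SRS of size 2 gives every pair of the 24 people an equal chance.
```

So equal individual inclusion probabilities are compatible with many designs; SRS is the special case where every subset of size n is equally likely.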
Above, Beemnet said, "The best sampling method cannot be discussed without knowledge of the subject of the study and the ultimate sampling unit." This is reminiscent of an interesting comment I believe Ken Brewer (note his Waksberg paper above) once made about advice he had received from his mentor, Ken Foreman: if I remember correctly, Foreman told Brewer a number of decades ago that there was "no substitute for knowing your data." That may have been more about knowing how to stratify at one level, apart from the more complicated considerations Kelvyn noted above, but together these are reasons that simple random sampling is generally inadequate.
As already said, "The best sampling method cannot be discussed without knowledge of the subject of the study and the ultimate sampling unit."
There is no such thing as a universally best sampling method; one can only speak of an appropriate sampling method, and which method is appropriate varies with the situation.
Whatever the earlier discussion about whether SRS is best, one thing everybody knows: if the sampling frame is unknown, we cannot apply probability sampling techniques.
There are thousands of examples where the only option left is to use non-probability sampling techniques and to choose whichever is most appropriate in that particular case.
So, instead of debating the best sampling technique in the abstract, give us the situation, and we can discuss the appropriate sampling technique for that particular case.