For example, suppose we have a coin with an unknown probability of landing on each side: how many times must I toss the coin to get the best estimate of the probability of landing on each side?
Using a Normal approximation (which is warranted since n will be large), the standard error when estimating a proportion is sqrt(p*(1-p)/n), so it depends on the observed p and the sample size.
To get SE = 0.001, you will need n = p*(1-p)/0.001^2. With p = 0.5 (i.e. a fair coin) the numerator of the SE is at its maximum, and you'll need a sample size of 250,000.
Hmm, there is the Law of Large Numbers (http://en.wikipedia.org/wiki/Law_of_large_numbers); its strong version says that you almost surely get the "best" (which is the correct) probability p of heads if you repeat tossing the coin infinitely often. But I doubt that's your question, so what is the "best" probability?
Have you ever heard of the concept of confidence intervals (CIs)? They are related to the sample size, and you can use them to judge the information gained by conducting a study, e.g. tossing a coin 100 times. To get an "optimal" estimate, you can calculate the sample size needed to obtain a CI of predefined length, or, vice versa, compute the length of the CI as a function of n.
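As a rough sketch of that trade-off, assuming the usual Wald (Normal-approximation) 95% interval; the function names here are illustrative:

```python
import math

Z95 = 1.959963984540054  # two-sided 95% Normal quantile

def ci_half_width(p, n):
    """Half-width of the Wald 95% CI for a proportion estimated from n tosses."""
    return Z95 * math.sqrt(p * (1 - p) / n)

def n_for_ci_half_width(p, half_width):
    """Sample size so the Wald 95% CI half-width is at most `half_width`."""
    return math.ceil(Z95 ** 2 * p * (1 - p) / half_width ** 2)

print(round(ci_half_width(0.5, 100), 3))  # about 0.098 after 100 tosses
print(n_for_ci_half_width(0.5, 0.01))     # 9604 tosses for a +/- 0.01 interval
```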
Keep tossing until the relative frequency stabilizes; that gives the estimate of the required probability. See the concept of statistical regularity.
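A minimal simulation sketch of this idea (assuming a fair coin, with a fixed seed for reproducibility):

```python
import random

random.seed(42)

def running_relative_frequency(p_true, n):
    """Running frequency of heads over n tosses of a coin with P(heads) = p_true."""
    heads = 0
    freqs = []
    for i in range(1, n + 1):
        heads += random.random() < p_true
        freqs.append(heads / i)
    return freqs

freqs = running_relative_frequency(0.5, 100_000)
# The running estimate tightens around the true value as n grows:
print(freqs[99], freqs[9_999], freqs[99_999])
```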
I notice that many questions on ResearchGate ask for the "best" method without defining what the authors mean by "best". For a given statistical definition of "best", an analytic answer can typically be calculated, at least for coin flips. Eik has already pointed in a direction that is commonly accepted.
The best answer I can think of for this question is as follows. Say you flip a fair coin n times; then we expect 50 percent of the tosses to be heads and 50 percent tails, and this is the expected distribution. Suppose that from a sample of size n you get f1 percent heads and f2 percent tails. The question can then be formulated equivalently: can we say, with a confidence level of, say, 95 percent, that these two distributions are the same? The answer can be found using the chi-square test. The critical values of chi-square at the 95 percent confidence level can be found in standard tables; by comparing the critical value with the one you get from the sample, you can decide whether or not to continue the experiment.
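A minimal sketch of that chi-square check for a coin (one degree of freedom; 3.841 is the standard 95% critical value from the tables; the counts in the example are made up):

```python
# Chi-square goodness-of-fit test for fairness of a coin (df = 1).
CHI2_CRIT_95_DF1 = 3.841  # 95% critical value for 1 degree of freedom

def chi_square_fair_coin(heads, tails):
    """Chi-square statistic against the fair-coin expectation of n/2 each."""
    n = heads + tails
    expected = n / 2
    return ((heads - expected) ** 2 / expected
            + (tails - expected) ** 2 / expected)

stat = chi_square_fair_coin(530, 470)
print(round(stat, 2), stat > CHI2_CRIT_95_DF1)  # 3.6 False: cannot reject fairness
```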
Thanks all for the answers. For example, if we want to know the probability to within a tenth of a percent, how many trials are needed for such a problem (tossing a coin)? Or what procedure must we follow to be sure about the accuracy?
The Central Limit Theorem would indicate 30, but to be safe you could double it. That's what portfolio managers do in real life: while diversification could be achieved with 16 stocks/assets, a typical portfolio actually holds 50-80 positions. If you want to run a simulation, I have coded a Monte Carlo bootstrapping routine that you can freely download from my site:
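I can't speak for the routine mentioned above, but a minimal bootstrap sketch for the coin problem might look like this (the data and function name are illustrative):

```python
import random

random.seed(0)

def bootstrap_proportion_se(tosses, n_boot=10_000):
    """Bootstrap estimate of the standard error of the heads proportion.

    `tosses` is a list of 0/1 outcomes; resample with replacement n_boot times.
    """
    n = len(tosses)
    props = []
    for _ in range(n_boot):
        resample = [random.choice(tosses) for _ in range(n)]
        props.append(sum(resample) / n)
    mean = sum(props) / n_boot
    var = sum((p - mean) ** 2 for p in props) / (n_boot - 1)
    return var ** 0.5

tosses = [1] * 55 + [0] * 45  # say, 55 heads in 100 tosses
print(round(bootstrap_proportion_se(tosses), 3))  # close to sqrt(.55*.45/100) ~ 0.0497
```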
The law of large numbers says that if you conduct an experiment a sufficiently large number of times, under identical conditions, then the proportion of favorable outcomes gives you the probability. As a rule of thumb, a sample is often considered large if its size is 30 or more; for samples larger than 30, the Normal approximation behind these results is usually reasonable.
As has been said, if you want the "best" answer then you need to toss the coin an infinite number of times.
If you are not prepared to do that, then you need to look at the use to which the relative frequency (probability) will be put and analyse it to find a criterion to be met; for example, that there is a 5% chance that the true answer is more than 10% away from the relative frequency. You then use that criterion, whatever it is.
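A criterion like that can be checked by simulation; here is a rough sketch, assuming a fair coin (the numbers are illustrative, and the Normal approximation puts the required sample size near 97 tosses for a 5% chance of missing by more than 0.10):

```python
import random

random.seed(1)

def fraction_outside(p_true, n, tol, trials=20_000):
    """Fraction of simulated experiments where |estimate - p_true| > tol."""
    bad = 0
    for _ in range(trials):
        heads = sum(random.random() < p_true for _ in range(n))
        if abs(heads / n - p_true) > tol:
            bad += 1
    return bad / trials

# With about 97 tosses, the chance of missing a fair coin's p
# by more than 0.10 should come out near 5%:
print(fraction_outside(0.5, 97, 0.10))
```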
If you conduct the experiment a practically large number of times, you will find that the estimated probability converges toward a point. When that point of convergence stops changing, stop repeating the experiment.