alpha is the type-I error rate (these are always rates, that is, expectations about the long-run maximal proportions of such errors).
The type-II error depends not only on alpha but also on many other things (e.g. the kind of test, the sample size, the effect size, ...). The type-II error is usually not controlled in research. When it is controlled, this is done by choosing the appropriate sample size (the "power"), and its value is selected independently of alpha. Some papers promote an "optimal balance between alpha and power", but in my opinion this has no real practical foundation.
A sensible selection of alpha and power is possible only when there is a way to work out a cost/benefit ratio of the research, which is typically far from possible in basic research (it may be possible, in some cases, for "industrial research").
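As a concrete illustration, here is a minimal R sketch (R being one of the tools suggested later in this thread) of how the type-II error is controlled in practice: fix alpha and the desired power, then solve for the sample size. The effect size (delta = 0.5) and standard deviation (sd = 1) are assumptions chosen purely for illustration.

```r
## Solve for the per-group sample size of a two-sample t-test,
## given alpha (sig.level), desired power, and an assumed effect.
## delta = 0.5 and sd = 1 are illustrative assumptions;
## this returns roughly n = 64 per group.
power.t.test(power = 0.80, delta = 0.5, sd = 1, sig.level = 0.05)
```

Note that alpha and power enter the calculation as two separate inputs; nothing in the computation itself forces any particular "balance" between them.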
The two errors are especially important when calculating the sample size for a trial. Taking the null hypothesis as H0 (i.e. no significant difference between the two means) and the alternative hypothesis as H1 (a significant difference), four outcomes are possible; you can find them in the attached file.
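For readers without the attachment, the four outcomes are presumably the standard decision table:

```
                    H0 is true              H0 is false
Reject H0           Type I error (alpha)    Correct decision (power = 1 - beta)
Fail to reject H0   Correct decision        Type II error (beta)
```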
I would suggest using SAS, R, or even something like Excel and using the random number generators to create populations that you can then sample. You can draw the sample directly from a random function, or you can create a population using a random function and then draw samples from it. The latter approach more accurately simulates practice, because you have a "population" drawn from an underlying distribution of possible values, from which we then take a sample. However, dispensing with that complexity will help simplify initial efforts.
Because this is done with a computer, you can draw samples many times. So I will have 100,000 trials where I gather 5 samples from each of two populations. I will then repeat this using 50 samples, then 500 samples, and 5000 samples. I will have a random normal distribution with mean X and standard deviation Y, and another population with mean X+1 and standard deviation Y. I then know that the true difference between the populations is 1, and I can record the outcome of my t-test (or whatever I hope to use on the real data). I can then add other constants, and I can change Y, and determine exactly the limits of my method. Since I know which observations come from each population, I can also calculate Type I and Type II error rates and see how these change. This is an easy programming exercise (though it will take time), but it assumes that you know how to program. The game here is using the computer to create a population where you know what the answer will be, then sampling that population to see how the act of sampling and the method interact to distort your perception of "truth."
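A minimal R sketch of this simulation game, under illustrative assumptions (mean X = 0, true difference of 1, standard deviation Y = 2, alpha = 0.05, and 10,000 trials rather than 100,000 to keep the run time short):

```r
set.seed(1)
n_trials <- 10000   # simulated experiments per sample-size setting
delta    <- 1       # true difference between the population means
Y        <- 2       # common standard deviation

for (n in c(5, 50, 500, 5000)) {
  ## Type I error rate: both samples come from the SAME population,
  ## so every rejection at alpha = 0.05 is a false positive.
  type1 <- mean(replicate(n_trials,
    t.test(rnorm(n, 0, Y), rnorm(n, 0, Y))$p.value < 0.05))

  ## Power: the populations truly differ by delta, so rejections are
  ## correct decisions; the Type II error rate is 1 - power.
  power <- mean(replicate(n_trials,
    t.test(rnorm(n, 0, Y), rnorm(n, delta, Y))$p.value < 0.05))

  cat(sprintf("n = %4d: Type I = %.3f, Type II = %.3f\n",
              n, type1, 1 - power))
}
```

As n grows, you should see the Type I rate stay near 0.05 while the Type II rate falls toward 0, which is exactly the interaction of sampling and method described above.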
Dear Timothy A Ebert, Ignacio Alvarez, Kurt A Rinehart, and Jochen Wilhelm, thank you very much for your guidance and for sharing your understanding of Type I and Type II errors, as well as the concept of "power" with respect to the actual effect size.
In addition, you can 'decide' which Type I error (alpha) you want to 'tolerate'. For the Type II error, however, this is not as straightforward; it has other implications, and if you don't 'control' the Type II error, it can be very high.
Even when you cannot reject H0, you cannot affirm H0. The power of the test (1 − Type II error) is closely related to the effect size (and also to the sample size and the variance of the variable).
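To make this relationship concrete, here is a short R sketch using the built-in power.t.test() function; the sample size (n = 50 per group), the effect sizes, and sd = 1 are illustrative assumptions:

```r
## Power (= 1 - Type II error) of a two-sample t-test at alpha = 0.05,
## for small, medium, and large effect sizes with n = 50 per group.
sapply(c(0.2, 0.5, 0.8), function(d)
  power.t.test(n = 50, delta = d, sd = 1, sig.level = 0.05)$power)
## roughly 0.17, 0.70, and 0.98: with a small effect, the Type II
## error here exceeds 80% even though alpha is fixed at 5%.
```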