The best way to address these concerns is with appropriate statistical power. I know this seems like a basic response, but it is absolutely vital. Best practice is to base your power calculation on your own preliminary data for the outcomes of your interventions. I would also recommend including a true control group in your RCT so that you can calculate Minimal Detectable Change (MDC) scores. This allows you to evaluate your results not only with traditional p values and effect sizes, but also by comparing your observed changes against the measurement error associated with each of your outcome measures.
We discuss these points in our RCT using manual therapy interventions. I hope this helps.
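As an illustration of the MDC idea, here is a minimal sketch using the SEM-based formula commonly reported in the rehabilitation literature (SEM = SD × √(1 − ICC), MDC95 = 1.96 × SEM × √2). The SD and ICC values below are hypothetical, purely for illustration:

```python
from math import sqrt

def mdc95(sd_baseline, icc):
    """Minimal Detectable Change at the 95% confidence level.
    SEM = SD * sqrt(1 - ICC); MDC95 = 1.96 * SEM * sqrt(2)."""
    sem = sd_baseline * sqrt(1 - icc)
    return 1.96 * sem * sqrt(2)

# Hypothetical example: outcome SD of 10 points, test-retest ICC of 0.90.
# Any observed change smaller than this could be measurement error.
change_needed = mdc95(10, 0.90)  # ≈ 8.77 points
```

A within-group change smaller than the MDC cannot be distinguished from measurement error, regardless of its p value.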
To answer your question directly: you can reduce the risk of a Type 1 error by lowering your significance threshold (e.g. from 0.05 to 0.01).
You can reduce the risk of a Type 2 error by increasing the power of your test; in practice, this means ensuring that your sample size is sufficiently large for your a priori estimate of the effect size.
Type 1 error (α) is the probability of rejecting a true H0, i.e. how much false-positive risk you are willing to accept. The smaller the α, the larger the sample size needed. The most common value is α = 0.05 (5%).
Type 2 error (β) is related to the power of the study: Power = 1 − β, the probability of detecting a difference when it truly exists. The most common value is β ≤ 0.20 (power of 80%).
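To make the α–sample-size trade-off concrete, here is a rough sketch using the standard normal-approximation formula for a two-sided, two-sample comparison (n per group ≈ 2 × ((z₁₋α/₂ + z₁₋β)/d)²). The effect size d = 0.5 is a hypothetical value; a real study should use an estimate from pilot data:

```python
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per arm for a two-sided two-sample comparison
    (normal approximation; exact t-based methods give slightly larger n)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Lowering alpha from 0.05 to 0.01 (fewer Type 1 errors) demands more subjects:
n_05 = n_per_group(0.5, alpha=0.05)  # 63 per group
n_01 = n_per_group(0.5, alpha=0.01)  # 94 per group
```

So guarding harder against false positives (smaller α) directly inflates the sample you must recruit, which is why the two error rates have to be balanced at the design stage.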
Ways to increase the power of your study (i.e. decrease β):
-increase the sample size
-decrease variation through good design and/or randomization
-target a larger minimal clinically important difference (Δ)
-increase α (questionable practice, generally acceptable only in pilot studies)
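The first and third points above can be sketched numerically with the same normal approximation (power ≈ Φ(d × √(n/2) − z₁₋α/₂)); the effect sizes and group sizes below are hypothetical illustrations, not recommendations:

```python
from math import sqrt
from scipy.stats import norm

def approx_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test
    (normal approximation, ignoring the negligible far tail)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    return norm.cdf(effect_size * sqrt(n_per_group / 2) - z_alpha)

# More subjects -> more power (smaller beta), at the same effect size:
p_small = approx_power(0.5, 30)       # ≈ 0.49 (badly underpowered)
p_large = approx_power(0.5, 64)       # ≈ 0.81
# A larger clinically meaningful difference also raises power at a fixed n:
p_big_delta = approx_power(0.8, 30)   # ≈ 0.87
```

Running these numbers before recruitment shows immediately whether a planned design has a realistic chance of detecting the difference you care about.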
Remember that α and β are set a priori, by convention, before the study begins!
In any case it's better to consult a medical statistician before you begin your study!