Onur Ertugrul, this is an interesting question and my answer is no, I know of no such source. More important than the question itself, I think, is to consider why you are asking. If it is because you want to argue that it is OK for an RCT to have a power of 70%, then that is a question only you can answer: it depends entirely on the context.
Leaving aside the fundamental absurdity of null-hypothesis-based "significance testing", the issue is one of competing risks. 80% power has long been the conventional standard - giving you an 80% chance of detecting a 'true' difference of a given size if such a difference really exists in the population. You are therefore implicitly accepting a 20% risk of falsely concluding there is no difference when there is one (a Type II error). You balance this against a 5% (conventional) risk of rejecting the null hypothesis of no difference when it is in fact true (a Type I error). Many reasonably argue that these risks ought to be the same (so power should be 95%), but the consequences of the two errors differ, so there is no absolute reason why the risks should be equal.
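To make the trade-off concrete, here is a minimal sketch using the standard two-sample z-test approximation for sample size per arm (the standardized effect size of 0.5 and the function name are my own illustrative choices, not from the question):

```python
from scipy.stats import norm

def n_per_group(delta, power, alpha=0.05):
    """Approximate sample size per arm for a two-sided two-sample z-test,
    where delta is the standardized difference (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = norm.ppf(power)           # quantile corresponding to the desired power
    return 2 * ((z_alpha + z_beta) / delta) ** 2

for power in (0.70, 0.80, 0.90, 0.95):
    print(f"power={power:.2f}: n per arm ~ {n_per_group(0.5, power):.0f}")
```

For a standardized difference of 0.5 this gives roughly 49 patients per arm at 70% power, 63 at 80%, and 104 at 95% - so dropping from 80% to 70% buys only a modest saving in sample size, in exchange for a 30% risk of missing a real effect of that size.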
In my view the problem largely disappears if you think in terms of estimation and CI width - how precisely do you need to estimate a parameter (the difference) in order to usefully inform decision making? At 80% power you are implicitly accepting a lot of imprecision - which might or might not be OK. At 70% you are accepting even more.
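The same z-test approximation makes that imprecision explicit. If a study is sized so that (z_{1-alpha/2} + z_{power}) x SE equals the target difference, the expected 95% CI half-width works out to z_{1-alpha/2} / (z_{1-alpha/2} + z_{power}) times that difference - a sketch under the same assumptions as above:

```python
from scipy.stats import norm

def halfwidth_ratio(power, alpha=0.05):
    """Expected CI half-width as a fraction of the target difference,
    for a study sized exactly to achieve the given power (z-test approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return z_alpha / (z_alpha + z_beta)

for power in (0.70, 0.80, 0.95):
    print(f"power={power:.2f}: half-width ~ {halfwidth_ratio(power):.2f} x target difference")
```

So a study powered at 80% yields a CI whose half-width is about 70% of the very difference it was designed to detect, and at 70% power about 79% - quite wide in either case, which is exactly the imprecision being implicitly accepted.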