1) Try to work out the smallest effect size of interest (SESOI) and estimate power for that. I'd do this by working out plausible lower-bound values for the effect and plausible upper-bound values for the error/variance, and estimating the effect size from those (or from plausible raw data), rather than relying on a guess or estimate of a standardized effect size.
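As a minimal sketch of this idea: take a plausible lower bound for the raw effect and a plausible upper bound for the SD, and work out the sample size that gives you power for that worst-plausible case. The numbers below (a 2-point difference, an SD of 8) are purely illustrative assumptions, and the formula is the usual normal approximation for a two-sample comparison, not an exact t-based calculation.

```python
# Sketch: sample size for a smallest effect of interest, derived from
# plausible raw bounds rather than a guessed standardized effect size.
# All numeric inputs here are illustrative assumptions.
from statistics import NormalDist

def n_per_group(raw_effect, sd, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample comparison (normal approximation)."""
    d = raw_effect / sd                    # implied standardized effect
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)                 # two-sided alpha
    z_b = z(power)
    return 2 * (z_a + z_b) ** 2 / d ** 2

# Lower-bound effect of 2 points, upper-bound SD of 8 -> d = 0.25
print(round(n_per_group(2, 8)))            # roughly 251 per group; round up in practice
```

The point is that the standardized effect (d = 0.25 here) falls out of substantive judgments about raw units, rather than being picked off a "small/medium/large" shelf.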
2) Sensitivity analysis - compute power (or n, or effect size, depending on what you need) while varying the other relevant parameters across plausible ranges of values, then plot the results for the different combinations. This is arguably a more systematic and detailed version of 1). It can be tricky for complex designs, where you might have quite a few parameters, and it may require simulation. Even for a simple design you likely have the effect (e.g., mu1 - mu2), some measure of error variance, alpha, n, and perhaps a correlation parameter, such as for a paired or repeated-measures design. What you get, however, is a better understanding of power over a wide range of likely scenarios.
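For a paired design, a small analytic grid already shows the flavour of this. The sketch below crosses plausible effects, SDs, and pre-post correlations; all the values are illustrative assumptions, and the power formula is again a normal approximation (simulation would replace it for messier designs). In practice you'd plot these, e.g. power curves with one line per correlation.

```python
# Sketch: a sensitivity grid for a paired design, crossing plausible
# effects, SDs, and pre-post correlations. Illustrative values only.
from itertools import product
from statistics import NormalDist

def paired_power(effect, sd, rho, n, alpha=0.05):
    """Approximate power for a paired comparison (normal approximation)."""
    sd_diff = (2 * sd**2 * (1 - rho)) ** 0.5   # SD of the difference scores
    se = sd_diff / n ** 0.5
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    return 1 - NormalDist().cdf(z_a - effect / se)

for effect, sd, rho in product([1, 2], [6, 8], [0.3, 0.5, 0.7]):
    print(f"effect={effect}, sd={sd}, rho={rho}: "
          f"power={paired_power(effect, sd, rho, n=50):.2f}")
```

Even this tiny grid makes visible how strongly the pre-post correlation drives power, which a single point estimate of power would hide.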
It's also important to consider other factors that affect power to detect effects, such as collinearity and drop-out/attrition.
In other situations I might also look at the margin of error (or the width of interval estimates), or the stability of estimates for things like correlations, if precision matters more than effect size.
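A precision-oriented sketch of this, for a correlation: compute the expected width of a 95% CI at various sample sizes via the Fisher z approximation. The assumed correlation of 0.3 is illustrative; you'd plug in a plausible range.

```python
# Sketch: expected 95% CI width for a correlation at various n, via the
# Fisher z transform. Useful when precision, not detection, is the goal.
import math
from statistics import NormalDist

def r_ci_width(r, n, alpha=0.05):
    """Approximate CI width for a correlation (Fisher z approximation)."""
    z = math.atanh(r)                      # Fisher z transform
    se = 1 / math.sqrt(n - 3)
    crit = NormalDist().inv_cdf(1 - alpha / 2)
    lo, hi = math.tanh(z - crit * se), math.tanh(z + crit * se)
    return hi - lo

for n in (50, 100, 200, 400):
    print(f"n={n}: width={r_ci_width(0.3, n):.3f}")
```

Planning n to hit a target interval width (or a target stability of the estimate) answers a different, and often more useful, question than power alone.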
Rules of thumb like Green's rule are pretty useless unless your study closely matches their assumptions. I seem to recall that Green's rule assumes a certain standardized effect size (beta = 0.3? maybe) and no collinearity, which is pretty implausible in practical applications of regression.