Most sample size calculators for a binary outcome default to a 50/50 split (p = 0.5), which is the value that maximizes the variance p(1 - p) and therefore gives the largest, most conservative sample size. More generally, for a power analysis you can make "conservative" assumptions so that your sample is not too small.
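As a minimal sketch of that conservative calculation, assuming a simple proportion estimated with a normal approximation and an illustrative margin of error of +/- 3 percentage points:

```python
from scipy.stats import norm

def n_for_proportion(margin, p=0.5, conf=0.95):
    """Rough n for estimating a proportion to within +/- margin.

    p = 0.5 maximizes p * (1 - p), so it gives the largest,
    most conservative n (the default many calculators use).
    """
    z = norm.ppf(1 - (1 - conf) / 2)   # about 1.96 for 95% confidence
    return (z ** 2) * p * (1 - p) / margin ** 2

print(round(n_for_proportion(0.03)))          # ~1067 with the worst-case p = 0.5
print(round(n_for_proportion(0.03, p=0.2)))   # ~683 if p is really around 0.2
```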
You can never really calculate power or sample size exactly; all you can do is estimate them based on assumptions. Previous data are one way to guide those assumptions. (Strictly speaking, an experimental setup/design yields a function relating power to sample size and the other design parameters, so any single point estimate will always be wrong in some respect.)
However, the first step is to work out what you want to achieve. There's a difference between, say, a power calculation for a regulatory requirement, a general justification of sample size, and just a rough sense of whether a study is feasible.
Some options (very roughly worst to best):
- use estimates of the typical effect size in your topic area (not preferred, because it ignores lots of available information and can easily be inappropriate, e.g., if you change the design)
- use a measure of stability or precision, such as planning for a margin of error (especially in surveys) or the n at which a correlation stabilizes (e.g., see the article "At what sample size do correlations stabilize?"). You can also plan for accuracy in parameter estimation (AIPE), which is basically choosing n to achieve a desired CI width (a minimal sketch of this appears after the list)
- use the smallest effect size of interest: work out not the true effect size (which is unknown) but what size of effect might be of clinical, practical or theoretical interest (search for "SESOI" or "smallest effect size of interest" for ideas; see the SESOI sketch after this list)
- sensitivity analysis: graph the power or the required n as a function of the key parameters. Which parameters depends on the model, but for power it is usually at least alpha, n, the unstandardized effect, and the SD (or variance); a more complex model will need more. Choose plausible ranges for each parameter based on other considerations. This is in a sense a more sophisticated version of the SESOI approach. It is usually done by simulation, e.g., see https://rdrr.io/cran/paramtest/f/vignettes/Simulating-Power.Rmd (a simulation sketch also follows this list)
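For the margin-of-error / AIPE idea above, a minimal sketch for a single mean, assuming a guessed SD, the normal approximation, and illustrative numbers:

```python
from scipy.stats import norm

def n_for_ci_halfwidth(sd_guess, half_width, conf=0.95):
    """Rough n so the CI for a mean has the desired half-width.

    Normal approximation; a t-based or simulated version would
    add a few observations at small n.
    """
    z = norm.ppf(1 - (1 - conf) / 2)
    return (z * sd_guess / half_width) ** 2

# Assumed values: SD guess of 10, want the mean estimated to within +/- 2.
print(round(n_for_ci_halfwidth(sd_guess=10, half_width=2)))  # ~96
```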
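For the SESOI approach, a minimal sketch using a two-sample t-test; the smallest effect of interest here (d = 0.3) is purely an assumed example:

```python
from statsmodels.stats.power import TTestIndPower

# Suppose the smallest effect that would matter practically is d = 0.3
# (an assumed value; substitute your own SESOI).
n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05,
                                          power=0.80, alternative='two-sided')
print(round(n_per_group))  # per-group n for 80% power at d = 0.3
                           # (about 175; round up in practice)
```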
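For the sensitivity analysis, a simulation sketch in the spirit of the paramtest vignette linked above, written here in Python; the grid of effects and sample sizes is arbitrary and should be adapted to your design:

```python
import numpy as np
from scipy.stats import ttest_ind

def sim_power(n_per_group, effect, sd=1.0, alpha=0.05, nsim=2000, seed=1):
    """Monte Carlo power for a two-group comparison via a t-test."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(nsim):
        a = rng.normal(0.0, sd, n_per_group)
        b = rng.normal(effect, sd, n_per_group)
        if ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / nsim

# Vary the key parameters over plausible ranges and tabulate (or plot) power.
for effect in (0.2, 0.3, 0.5):      # assumed plausible effects (in SD units)
    for n in (50, 100, 200):        # candidate per-group sample sizes
        print(f"d={effect:.1f}  n/group={n:>3}  power={sim_power(n, effect):.2f}")
```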
You may also not need a power analysis. Some justifications of sample size are simply pragmatic. For instance, if the total n available is fixed (because of cost, or because that's all the data that exist), then that is your justification. You could supplement this with a power analysis, ideally a sensitivity analysis, though that may be overkill, especially for exploratory research.
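If the n really is fixed, one useful supplement is to ask what effect that n could plausibly detect. A minimal sketch with statsmodels, where the fixed n of 64 per group is just an assumed example:

```python
from statsmodels.stats.power import TTestIndPower

# Suppose the data are fixed at 64 participants per group (assumed example).
min_detectable_d = TTestIndPower().solve_power(nobs1=64, alpha=0.05, power=0.80,
                                               ratio=1.0, alternative='two-sided')
print(round(min_detectable_d, 2))  # ~0.50: the smallest standardized effect
                                   # detectable with 80% power at this n
```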
Cemre Didem Eyipınar, I do not claim to understand your research situation, and the answers above may already have helped you. In case you are still searching, the following paper, which I find useful as a general guide, might help you too:
Article Sample size, power and effect size revisited: simplified and...