If possible, choose one of the outcome measures as the primary one and the others as secondary. Then compute the sample size based on the primary outcome measure.
If each indicator is "important", you can also compute the sample size for each one and then choose the largest. This ensures that you have enough power for estimates involving every indicator. If the required sample sizes are very different, you may end up with more power than needed for some indicators (see the sketch below).
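As a rough illustration of "compute for each indicator and take the largest", here is a minimal Python sketch using the standard two-proportion sample size formula. The indicator names and the baseline/expected proportions are hypothetical placeholders, not values from the question.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size for comparing two proportions
    (two-sided z-test, equal group sizes)."""
    z_a = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_b = norm.ppf(power)           # critical value for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical indicators: (baseline proportion, expected proportion)
indicators = {
    "stunting": (0.40, 0.30),
    "anaemia": (0.55, 0.45),
    "exclusive_breastfeeding": (0.35, 0.45),
}

sizes = {name: n_per_group(p1, p2) for name, (p1, p2) in indicators.items()}
print(sizes)
print("Use the largest:", max(sizes.values()))
```

The indicator with the smallest expected difference (or the proportion closest to 0.5) typically drives the overall sample size; the others will then be estimated with more precision than strictly required.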
The p-value threshold (the significance level, α) is the probability of wrongly rejecting a true null hypothesis, i.e. the probability of a Type I error. A smaller significance level leads to a larger required sample size. The researcher sets this level, so it does not depend on how many indicators you have. To determine a sample size you also need the proportion of exposed/unexposed, or the expected proportion of the disease or health problem (the dependent variable), and the desired power (1 − β).
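For reference, the usual per-group formula for comparing two proportions combines exactly these inputs (significance level α, power 1 − β, and the two proportions p_1 and p_2):

$$ n = \frac{\left(z_{1-\alpha/2} + z_{1-\beta}\right)^2 \left[\,p_1(1-p_1) + p_2(1-p_2)\,\right]}{(p_1 - p_2)^2} $$

where z denotes the standard normal quantile. Lowering α (or raising the power) increases both z terms and therefore the required n.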
I suggest basing your sample size on acceptable estimated standard errors, or rather on RSEs: relative standard errors (the standard error divided by the estimated mean, total, or other quantity of interest) for your variables of interest.
p-values themselves are functions of sample size: at a given level, say 0.05, a larger sample will tend to reject and a smaller sample will tend not to reject a null hypothesis, almost regardless of how nearly true that hypothesis might be.
So, setting RSE goals would be much better.
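To make the RSE suggestion concrete, here is a minimal sketch for an estimated proportion under simple random sampling; the target RSE of 10% and the prevalence of 0.30 are arbitrary placeholders.

```python
from math import ceil

def n_for_rse(p, target_rse):
    """Sample size so that the relative standard error of an estimated
    proportion p (SE / p under simple random sampling) stays at or
    below target_rse."""
    # SE = sqrt(p * (1 - p) / n), so RSE = sqrt((1 - p) / (p * n)).
    # Solving RSE <= target_rse for n gives:
    return ceil((1 - p) / (p * target_rse ** 2))

# Example: expected prevalence 0.30, target RSE 10%
print(n_for_rse(0.30, 0.10))  # -> 234
```

The same idea extends to means and totals by replacing p(1 − p) with the anticipated variance of the variable of interest.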
Best wishes - Jim
Article Practical Interpretation of Hypothesis Tests - letter to the...