This is a good question. In some cases you should adjust n for factors that the power calculation ignores, such as non-response or dropout. If you think dropout is likely to be 30%, then you probably need to aim for n/(1 − 0.30), i.e. n/0.7. Similarly, most power calculations ignore multicollinearity, so you may need to adjust for that in some cases (e.g., to test the unique effects of predictors in a regression).
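To make the dropout arithmetic concrete, here is a minimal sketch in Python; the 300 completers and 30% dropout are just illustrative numbers, not values from any particular study:

```python
import math

def adjust_for_dropout(n_required: int, dropout_rate: float) -> int:
    """Sample size to recruit so that roughly n_required remain after dropout.

    Uses the common inflation rule n_adjusted = n_required / (1 - dropout_rate).
    """
    return math.ceil(n_required / (1.0 - dropout_rate))

# e.g., a power analysis says you need 300 completers and you expect 30% dropout:
print(adjust_for_dropout(300, 0.30))  # -> 429, so recruit ~429 to end with ~300
```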
Increasing n can also support a wider range of analyses and let you address more specific research questions (e.g., planned subgroup analyses).
However, there are also arguments not to increase n. The main one is ethical: if you are exposing participants to potential risk or harm, then you shouldn't exceed the target n without good reason. You might also consider the cost to participants: if you only need 300 participants, then collecting more than that could be considered wasteful of their time.
Mirna K. Faiq I don't think that's enough information to produce a useful sample size estimate. First, what is your goal? The approach differs depending on whether you want to detect a specific effect (such as a difference between groups), establish prevalence, or do exploratory work looking at correlations (to give just a few examples).
Second, what's the context? You say it's a retrospective observational study, so the upper limit probably depends on cost and effort, and perhaps other factors such as privacy concerns or data quality in older data sets. I'd also take into account things like seasonality: if the data might be affected by it, you probably want to collect data for whole years, for instance. If you can easily get a large sample without serious ethical or privacy concerns, then I'd probably collect as large a sample as is plausible in terms of effort and cost. Then, instead of a conventional power analysis, I'd look at the sensitivity to detect effects for that n and plot it in a graph. This can be done fairly easily in packages like G*Power or R for many types of study (a rough sketch is below).
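For illustration, here is a minimal sensitivity-analysis sketch in Python using statsmodels rather than G*Power or R. It assumes a two-sample t-test with alpha = .05 and 80% power; you would swap in whatever test matches your actual design, and the range of candidate n values is arbitrary:

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
sample_sizes = np.arange(50, 1001, 50)  # candidate n per group (illustrative range)

# Solve for the smallest effect size (Cohen's d) detectable at each n.
min_detectable_d = [
    analysis.solve_power(effect_size=None, nobs1=n, alpha=0.05, power=0.80)
    for n in sample_sizes
]

plt.plot(sample_sizes, min_detectable_d)
plt.xlabel("n per group")
plt.ylabel("Minimum detectable effect (Cohen's d)")
plt.title("Sensitivity: smallest effect detectable at 80% power")
plt.show()
```

Reading the curve tells you, for the n you can realistically obtain, how small an effect you could plausibly detect, which is often more informative than a single a priori power calculation in this kind of study.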
I agree with David Morgan: generally, the more data the better. However, you shouldn't add data simply because you are unhappy with the trends you are seeing so far; deciding whether to collect more based on interim results can bias the findings.