I am doing a cross-sectional study on normal subjects.
I have one independent variable from one instrument and fifteen dependent variables from another instrument (describing different parameters), all measured on the same subjects.
Correlation does not make any distinction between explanatory and outcome variables. But you have one explanatory variable and 15 outcome variables. So why are you not using regression instead of correlation?
How important are each of the outcome variables individually? I.e., do you really have 15 univariate questions? Or do you have a multivariate question? If the latter, estimate a multivariate regression model, and don't bother with the univariate tests. For more about the distinction between univariate and multivariate questions, see the classic article by Huberty & Morris (1989), link provided below.
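In case it helps to see what "one multivariate model instead of 15 univariate tests" might look like in practice, here is a minimal Python sketch using statsmodels' MANOVA interface, which fits a multivariate linear model and tests the predictor against all outcomes jointly. The variable names (x, y1-y3) and the simulated data are placeholders for illustration, not anything from your instruments:

```python
# A minimal sketch of "one multivariate model instead of 15 univariate tests".
# Variable names (x, y1-y3) and the simulated data are placeholders only.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(42)
n = 100
x = rng.normal(size=n)  # the single explanatory variable
# Three simulated outcomes; in practice these would be the instrument's parameters.
outcomes = {f"y{i + 1}": 0.5 * x + rng.normal(size=n) for i in range(3)}
data = pd.DataFrame(outcomes)
data["x"] = x

# Multivariate linear model: is x related to the outcomes considered jointly?
model = MANOVA.from_formula("y1 + y2 + y3 ~ x", data=data)
print(model.mv_test())  # Wilks' lambda, Pillai's trace, etc. for the x effect
```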
If you really do have 15 univariate questions, I suggest you look at the two Lancet articles by Schulz & Grimes (2005). They are among the best I've ever seen on this difficult and contentious issue. You might also consider using the false discovery rate (FDR) method rather than a Bonferroni correction; John McDonald's Handbook of Biological Statistics has some accessible notes on that (link below).
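If a concrete illustration helps, below is a small Python sketch comparing a Bonferroni correction with the Benjamini-Hochberg FDR procedure using statsmodels' multipletests. The 15 p-values are made up purely for illustration; you would substitute the p-values from your own 15 tests:

```python
# Sketch: Bonferroni (FWER control) vs. Benjamini-Hochberg (FDR control)
# on 15 p-values. The p-values below are invented for illustration only.
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.001, 0.004, 0.012, 0.020, 0.031, 0.042, 0.060,
                  0.081, 0.100, 0.210, 0.330, 0.450, 0.620, 0.780, 0.910])

# Family-wise error rate control: very conservative with 15 tests.
reject_bonf, p_bonf, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")

# False discovery rate control: typically rejects more of the small p-values.
reject_bh, p_bh, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

print("Bonferroni rejects:", reject_bonf.sum(), "of", len(pvals))
print("Benjamini-Hochberg rejects:", reject_bh.sum(), "of", len(pvals))
```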
Sometimes people will use a p-value correction for multiple correlations, but I don't think it's necessary in most cases. It really depends on how much weight the researcher puts on avoiding Type I errors versus detecting more of the relationships that may actually be there.
For any given data set and test, there is a trade-off between Type I and Type II errors. The more conservatively you guard against false positives (a Bonferroni correction, a low alpha), the more likely you are to commit false negatives. The only way out of this trap is to increase the power of your test (collect a larger sample, reduce the variability of your measurements).
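To make that trade-off concrete, here is a rough Monte Carlo sketch in Python. The effect size (r = 0.3), the sample sizes, and the number of simulations are arbitrary choices for illustration, not values tailored to your data; it simply shows how the power of a single correlation test drops when alpha is tightened to a Bonferroni level for 15 tests, and how a larger sample buys some of it back:

```python
# Quick Monte Carlo sketch of the alpha/power trade-off for one correlation test.
# Effect size, sample sizes, and simulation count are arbitrary illustrations.
import numpy as np
from scipy import stats

def power_of_corr_test(n, true_r, alpha, n_sims=5000, seed=0):
    """Fraction of simulated datasets in which the correlation test rejects H0."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)
        _, p = stats.pearsonr(x, y)
        hits += p < alpha
    return hits / n_sims

for n in (30, 60, 120):
    p_unadj = power_of_corr_test(n, true_r=0.3, alpha=0.05)
    p_bonf = power_of_corr_test(n, true_r=0.3, alpha=0.05 / 15)  # Bonferroni for 15 tests
    print(f"n={n:3d}  power at alpha=.05: {p_unadj:.2f}   at alpha=.05/15: {p_bonf:.2f}")
```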
To add to Bruce Weaver's answer: note that a Bonferroni correction is very conservative. There are other p-value correction methods that are probably much better suited to your purposes. The key distinction is between methods that control the familywise error rate (FWER) and methods that control the false discovery rate (FDR). The Wikipedia articles on FWER and FDR give a helpful overview of the available options.
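As a rough illustration of that distinction (reusing the same made-up p-values as the earlier sketch), statsmodels' multipletests can also apply Holm, a step-down FWER method that is never more conservative than Bonferroni, alongside Benjamini-Hochberg FDR control:

```python
# Comparing several correction methods on the same invented p-values:
# Bonferroni and Holm control the FWER; Benjamini-Hochberg controls the FDR.
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.001, 0.004, 0.012, 0.020, 0.031, 0.042, 0.060,
                  0.081, 0.100, 0.210, 0.330, 0.450, 0.620, 0.780, 0.910])

for method in ("bonferroni", "holm", "fdr_bh"):
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method:10s} rejects {reject.sum():2d} of {len(pvals)}")
```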