Not necessarily. Generally speaking, always remember that if you are using 95% confidence intervals, 1 out of 20 data points will "naturally" be an outlier (1/20 = 5%), so the answer is "it depends".
Do not confuse confidence intervals, dispersion, and boxplots. Confidence intervals typically refer to a statistic, not to the data. I assume you mean a "prediction interval". Then, 5% of the values will not fall in the 95% prediction interval, that's right. However, the whiskers in boxplots are not related to prediction intervals, so knowing that a data value is within the whisker range does not tell you whether it would also be in the (whatever-percent) prediction interval.
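To make that distinction concrete, here is a minimal R sketch (simulated data, all numbers purely illustrative): it computes a 95% prediction interval for a single new observation and compares it with the whisker range that boxplot.stats() reports. The two intervals answer different questions and generally do not coincide.

    set.seed(42)
    x <- rnorm(100, mean = 50, sd = 10)   # simulated sample

    n <- length(x); m <- mean(x); s <- sd(x)

    # 95% prediction interval for one NEW observation
    # (assumes the sample is approximately normal)
    pi_half <- qt(0.975, df = n - 1) * s * sqrt(1 + 1/n)
    c(lower = m - pi_half, upper = m + pi_half)

    # whisker range actually drawn by a default boxplot (1.5 * IQR rule)
    bs <- boxplot.stats(x)
    range(bs$stats)   # ends of the lower and upper whiskers
    bs$out            # points flagged beyond the whiskers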
I assumed outliers in multiple linear regression would be a good and clear example, since the question used the word "significant". Sometimes people tend to remove outliers after a regression and recalculate the model, when, as I said, we have to bear in mind that, to some extent, outliers identified in a systematic way are not to be ticked as influential, anomalous, or anything of the sort. At least not always: as you said, and said well, it depends.
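As a small illustration of that point (simulated data, not taken from the original question): in a regression, a point with a large residual need not be influential, and a high-leverage point need not have a large residual. In R, the two diagnostics can be inspected side by side:

    set.seed(1)
    x <- runif(50, 0, 10)
    y <- 2 + 3 * x + rnorm(50)

    # point A: large vertical outlier in the middle of the x-range
    x <- c(x, 5);  y <- c(y, 2 + 3 * 5 + 8)
    # point B: high-leverage point sitting close to the underlying line
    x <- c(x, 25); y <- c(y, 2 + 3 * 25 + 1)

    fit <- lm(y ~ x)

    d <- data.frame(std_resid = rstandard(fit),      # residual-based "outlier"
                    leverage  = hatvalues(fit),      # unusual x-position
                    cooks_d   = cooks.distance(fit)) # actual influence on the fit
    tail(d, 2)   # rows 51 (A) and 52 (B): the diagnostics rank them differently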
Francis Bacon observed that we learn once when we find the general rule, but we learn again when we examine the exceptions. Outliers can simply indicate that the distribution is not symmetrical, but they can also indicate atypical processes at work. For example, very high triglycerides are the product of disease or alcohol consumption, rather than being extremes of the normal range.
Please feel free to explore this blog: http://alexonsimanddata.blogspot.com. I have a series of posts there, written in 2012-2013, on outliers, their significance, and how to use the most basic functionalities of R to detect them.
I think you would need to be clearer about your definition of "significance". Most importantly, to declare significance you first need to define what hypothesis you are actually testing. As already pointed out above, a boxplot is certainly not the right device to detect real "outliers": its criterion (points beyond 1.5 x IQR from the quartiles) is guaranteed to be met even by data simulated from a normal distribution, given enough data points.
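A quick sketch of that last point, using nothing beyond base R: for normally distributed data the 1.5 x IQR fences sit near +/- 2.7 standard deviations, so roughly 0.7% of points fall outside them no matter how "clean" the data are. With enough points, the boxplot is guaranteed to show flagged values.

    set.seed(7)
    x <- rnorm(1e5)                 # perfectly "clean" simulated normal data

    out <- boxplot.stats(x)$out     # points beyond the 1.5 * IQR fences
    length(out) / length(x)         # observed fraction, about 0.007

    # theoretical fraction outside the fences for a normal distribution
    q <- qnorm(c(0.25, 0.75))
    fence <- q[2] + 1.5 * diff(q)   # upper fence in sd units (about 2.698)
    2 * pnorm(-fence)               # about 0.0070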
In your case, your hypothesis could be something along the lines of "the tails of my distribution are heavier than the tails of a normal distribution". Of course, this would have to be made more precise. In that case, you could declare "significance"; however, this significance would refer to the hypothesis tested (and therefore to all the data used), not just to a few outliers.
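One way to make such a hypothesis operational, sketched here as a simulation-based test in base R (the choice of sample excess kurtosis as the statistic, and the Monte Carlo approach, are my own illustration, not something fixed by the discussion):

    set.seed(123)
    x <- rt(200, df = 4)   # example data with heavier-than-normal tails

    # sample excess kurtosis (close to 0 for large normal samples)
    exkurt <- function(v) mean((v - mean(v))^4) / sd(v)^4 - 3

    obs <- exkurt(x)

    # Monte Carlo null distribution: same n, data drawn from a normal
    null <- replicate(5000, exkurt(rnorm(length(x))))

    # one-sided p-value for "tails heavier than normal"
    mean(null >= obs)

Note that the resulting p-value refers to the whole sample's tail behavior, exactly as said above, not to any individual point.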
Holger, I would like to comment on what you've said about the normal distribution.
Of course there are outliers; it's in the nature of things, and the g * IQR rule is as good a method as any. How do you define outliers with another method? If you use the normal distribution as an example, any points outside the Z * sigma interval are outliers, right?
But then the only way you will not have outliers is to go with a 100% confidence level, which is not physically achievable.
What useful information does knowing that my distribution is not normal give me, if all I am looking for are points that do not belong in my distribution? For that, 1.5 * IQR is as good as any other method, and if faced with a choice between statistical significance and practical significance, I'll always choose practical.
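For what it's worth, the two rules can be put side by side in a few lines of R (simulated data; the cutoffs Z = 2 and g = 1.5 are just the conventional choices). Both flag a nonzero fraction of points even from a perfectly normal sample, and since g = 1.5 corresponds to roughly Z = 2.7 for normal data, the two rules are close cousins rather than rivals.

    set.seed(99)
    x <- rnorm(1000)

    # rule 1: mean +/- Z * sigma
    z <- 2
    flag_z <- abs(x - mean(x)) > z * sd(x)

    # rule 2: quartiles +/- g * IQR (the boxplot rule)
    g <- 1.5
    q <- unname(quantile(x, c(0.25, 0.75)))
    flag_iqr <- x < q[1] - g * (q[2] - q[1]) | x > q[2] + g * (q[2] - q[1])

    c(z_rule = mean(flag_z), iqr_rule = mean(flag_iqr))   # flagged fractions
    table(flag_z, flag_iqr)                               # agreement of the two rules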