Outlier detection just tells you which values you should check. "To check" means looking at whether these values are extremely implausible or even impossible (given your subject-matter knowledge). That would be a legitimate reason to exclude them. If not, check how these values were recorded: can you identify experimental problems or mistakes in data transfer? If yes, it is again legitimate to exclude those values. If not, these values are as important as all the other values, so it is important to keep them. For values with high leverage, I would investigate the effect of these values on the final interpretation. If this effect is relevant, it needs to be discussed appropriately (not just removing and ignoring the uncomfortable values!). If there are rather many "outliers", better check whether your model assumptions are okay or should be refined (e.g. a missing interaction, non-linear relationships, and often the distributional assumptions).
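To make the leverage point concrete, here is a minimal sketch of this "check, don't delete" workflow, assuming an ordinary least-squares model in Python with statsmodels; the data frame, its columns, and the injected suspicious point are made-up placeholders, not from the question:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy data with one deliberately suspicious, high-leverage observation.
rng = np.random.default_rng(42)
df = pd.DataFrame({"x": rng.normal(size=50)})
df["y"] = 2.0 * df["x"] + rng.normal(scale=0.5, size=50)
df.loc[49, ["x", "y"]] = [4.0, 20.0]

X = sm.add_constant(df["x"])
fit = sm.OLS(df["y"], X).fit()

infl = fit.get_influence()
leverage = infl.hat_matrix_diag    # h_ii: leverage of each observation
cooks_d = infl.cooks_distance[0]   # Cook's distance: influence on the fit

# Flag points worth *checking* (not deleting), using common rules of thumb.
n, p = X.shape
flag = (leverage > 2 * p / n) | (cooks_d > 4 / n)
print(df[flag])

# Investigate the effect on the final interpretation by refitting without
# the flagged points and comparing the coefficients -- report any relevant
# difference, rather than silently dropping the uncomfortable values.
fit_wo = sm.OLS(df.loc[~flag, "y"], X.loc[~flag]).fit()
print("with flagged:   ", fit.params.values)
print("without flagged:", fit_wo.params.values)
```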
One more vote for extreme caution in deleting data just because some statistical index flags them as characteristic of an outlier. Jochen Wilhelm gave you excellent advice, and I suggest following it, paying special attention to exploring the notion of leverage that he mentions.
I would like to raise another issue: are these actually outliers? It may have been intended, but the "distributional assumptions" Jochen mentions can also be interpreted with respect to your assumption of sampling adequacy; if you make inferences, you make that assumption. I would posit that in an inadequately representative sample, an "outlier" may not be an outlier at all, but a part of the population that has not been adequately represented. And, of course, one is using statistical indices computed from that potentially inadequate sample to deem the case an outlier. So, as Jochen suggests, and as I tell my students: unless further exploration turns up "extra-statistical" reasons after an outlier index flags a data point and prompts you to, as Jochen expertly put it, "check" that data point, you may not exclude it.
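A small simulation can illustrate this (my own made-up setup, not from the thread): the population is a 95/5 mixture of two groups, and in a small sample the minority group is easily underrepresented, so when one of its members does appear it gets flagged by a sample-based index even though it is a perfectly legitimate population value.

```python
import numpy as np

def draw_sample(rng, n):
    """Mixture population: 95% from N(0, 1), 5% from N(6, 1)."""
    minority = rng.random(n) < 0.05
    return np.where(minority, rng.normal(6, 1, n), rng.normal(0, 1, n))

for seed in range(5):
    rng = np.random.default_rng(seed)
    sample = draw_sample(rng, 30)            # small, easily unrepresentative
    z = (sample - sample.mean()) / sample.std(ddof=1)
    flagged = sample[np.abs(z) > 3]          # classic |z| > 3 outlier rule
    # Flagged values here are typically minority-group members, i.e.
    # legitimate parts of the population rather than recording errors:
    print(f"seed {seed}: flagged = {np.round(flagged, 2)}")
```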