Nothing really, unless the data isn't linearly separable. Otherwise, a large number of features simply means more computational effort to compute the distance of each point from its nearest centroid during each iteration of k-means.
Try standardising the features (e.g. with z-scores) before using distance-based algorithms, so that features with the largest values don't take on disproportionate importance in the resulting model.
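A minimal sketch of z-score standardisation with NumPy, using a small hypothetical feature matrix with two features on very different scales:

```python
import numpy as np

# Hypothetical data: two features on very different scales (e.g. income, age)
X = np.array([[30000.0, 25.0],
              [60000.0, 40.0],
              [90000.0, 55.0]])

# z-score: subtract each feature's mean, divide by its standard deviation
mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_scaled = (X - mu) / sigma

# After scaling, each feature has mean 0 and unit variance, so Euclidean
# distances in k-means no longer favour the large-valued feature
print(X_scaled.mean(axis=0))  # ~ [0, 0]
print(X_scaled.std(axis=0))   # ~ [1, 1]
```

In practice you would fit the mean and standard deviation on the training data only (as scikit-learn's `StandardScaler` does) and reuse them for new points.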
You may want to use a feature selection algorithm, and if your features are not on the same scale you should apply a standardization method, as Xavier mentioned.
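One simple feature selection idea is to drop near-constant features, since they contribute almost nothing to the distances k-means relies on. A sketch with NumPy on hypothetical data (the same idea as scikit-learn's `VarianceThreshold`; the threshold value here is arbitrary):

```python
import numpy as np

# Hypothetical data: the third feature is nearly constant
X = np.array([[1.0, 10.0, 5.0],
              [2.0, 20.0, 5.0],
              [3.0, 30.0, 5.1]])

# Keep only features whose variance exceeds a threshold
threshold = 0.01
mask = X.var(axis=0) > threshold
X_selected = X[:, mask]

print(X_selected.shape)  # the near-constant column is dropped
```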