Clustering real-world data is often a complex problem, and different clustering algorithms will typically offer different solutions, so it's worthwhile to try several clustering methods. See the link below for a number of clustering methods: http://www.softcomputing.net/publication.html
Sometimes, a simple method is applied as initialization for a more involved one that may only converge to a local optimum, or that is too unstable, complex, or costly to estimate directly.
A popular example is the Gaussian mixture model (GMM), which is fitted by maximizing a lower bound on the data's log-likelihood via EM. K-means is often used as its initialization. While K-means is also only locally optimal, it is usually much faster to run, since its simpler model has considerably fewer parameters and no high-dimensional probability distributions need to be re-estimated from scratch.
The smaller number of free parameters in K-means (cluster centers only, versus means plus covariance matrices) can also yield good clusterings in settings where GMMs might fail.
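As a rough sketch of this combination, here is how the K-means-then-GMM pipeline might look with scikit-learn (the toy data, number of components, and random seeds are illustrative assumptions, not part of the original answer):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy data: two well-separated Gaussian blobs in 2-D.
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(100, 2)),
    rng.normal(loc=5.0, scale=1.0, size=(100, 2)),
])

# Step 1: cheap K-means run to obtain initial cluster centers.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Step 2: EM for the GMM, started from the K-means centers
# instead of estimating the full model from scratch.
gmm = GaussianMixture(
    n_components=2,
    means_init=km.cluster_centers_,
    random_state=0,
).fit(X)

labels = gmm.predict(X)
```

Note that scikit-learn's `GaussianMixture` already defaults to a K-means-based initialization (`init_params="kmeans"`); passing `means_init` explicitly just makes the two-stage structure visible.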
Unfortunately, there is no free-lunch recipe for combining clusterings that I am aware of. The combination above, however, has found many fans in the community.