K-means clustering with Pearson correlation as the similarity metric worked best for me. MATLAB's kmeans implementation supports a correlation-based distance option.
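For those not on MATLAB, here is a minimal sketch in Python (scikit-learn has no built-in correlation metric for k-means, so this relies on a standard equivalence: after centering each observation and scaling it to unit norm, squared Euclidean distance is proportional to one minus the Pearson correlation). The data and cluster count are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))  # toy data: 100 observations, 20 features

# Center each row and scale it to unit norm; for such rows,
# ||u - v||^2 = 2 * (1 - Pearson r), so ordinary Euclidean k-means
# approximates k-means with a correlation-based distance.
Xz = X - X.mean(axis=1, keepdims=True)
Xz /= np.linalg.norm(Xz, axis=1, keepdims=True)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xz)
```

Note this is an approximation: the k-means centroids are not re-normalized at each step, but in practice the partition is usually close to a true correlation-based k-means.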
Fuzzy neuro-computing-based clustering works well for highly correlated data, because the artificial-neural-network component gives the algorithm a learning capability.
There is no single best clustering algorithm, as this depends on the distribution of your data. This is why we use consensus clustering. A robust version of k-means clustering, Partitioning Around Medoids (PAM), alongside Affinity Propagation (AP), are two good candidates worth trying with different initialization methods. R implements both techniques (the pam function in the cluster package, and the apcluster package). Gaussian mixture modelling fitted with EM could also be useful, compared in a consensus manner against PAM and AP.
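A minimal sketch of the comparison step, in Python with scikit-learn rather than the R packages named above (scikit-learn has Affinity Propagation and EM-fitted Gaussian mixtures built in; the toy data and cluster count are illustrative). Measuring the agreement between partitions from different algorithms is the basic intuition behind consensus clustering.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import AffinityPropagation
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

# Toy data with three well-separated groups
X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

ap_labels = AffinityPropagation(random_state=0).fit_predict(X)
gmm_labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)

# Adjusted Rand index between the two partitions: points that the
# methods consistently co-cluster are the "consensus" structure.
score = adjusted_rand_score(ap_labels, gmm_labels)
```

A full consensus pipeline would repeat this over resamples and initializations and build a co-occurrence matrix, but the pairwise agreement above is the core ingredient.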
There is no such thing as a best clustering method for correlated data; it is highly task dependent. With the same data you might want to cluster different events. If you want to take the correlation into account, do not use algorithms like k-means that are based on Euclidean distance. Instead, use variants of k-means with metrics that account for correlation, such as the Mahalanobis distance with regularization, or some variant of soft clustering.
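A minimal sketch of a regularized Mahalanobis distance in Python with scipy (the shrinkage coefficient is an illustrative choice, not a recommendation): the sample covariance is shrunk toward the identity so its inverse stays well conditioned even when the features are strongly correlated.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
# Toy data with correlated features
A = rng.normal(size=(200, 5))
X = A @ rng.normal(size=(5, 5))

# Regularize the covariance by shrinking toward the identity,
# then invert; this keeps the Mahalanobis metric stable.
cov = np.cov(X, rowvar=False)
lam = 0.1  # shrinkage strength (assumed; tune for your data)
cov_reg = (1 - lam) * cov + lam * np.eye(cov.shape[0])
VI = np.linalg.inv(cov_reg)

# Pairwise regularized Mahalanobis distance matrix
D = cdist(X, X, metric='mahalanobis', VI=VI)
```

The matrix D can then be fed to any clustering method that accepts a precomputed dissimilarity matrix.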
I just want to point out two recently published papers; I think you will find them interesting.
1. Mario A. T. Figueiredo, Robert D. Nowak, Ordered Weighted l1 Regularized Regression with Strongly Correlated Covariates: Theoretical Aspects, AAAI 2016.
2. Urvashi Oswal, Christopher Cox, Matthew A. Lambon Ralph, Timothy Rogers, Robert Nowak, Representational Similarity Learning with Application to Brain Networks, ICML 2016.
"High correlation" is quite subjective, and please bear in mind that it only indicates a linear relationship. As Amitay suggested, a good option may be to use the Mahalanobis distance to compute the dissimilarity matrix and then apply k-means or fuzzy c-means.
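A minimal sketch of the dissimilarity-matrix route in Python with scipy (fuzzy c-means is not in the standard scientific stack, and plain k-means needs raw coordinates, so hierarchical clustering stands in here as a method that accepts a precomputed distance matrix; the data and cluster count are illustrative).

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 4))  # toy data

# Condensed Mahalanobis dissimilarity matrix; scipy estimates the
# inverse covariance from the data when VI is not supplied.
d = pdist(X, metric='mahalanobis')

# Average-linkage hierarchical clustering on that matrix, cut at 3 clusters
Z = linkage(d, method='average')
labels = fcluster(Z, t=3, criterion='maxclust')
```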
You could use the k-means clustering technique. If the features are highly correlated, try to eliminate the redundant ones first, i.e., correlation-based feature evaluation.
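A minimal sketch of that elimination step in Python with pandas (the 0.95 threshold is an illustrative choice): drop any feature whose absolute correlation with an earlier feature exceeds the threshold, then cluster on what remains.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(100, 4)), columns=list('abcd'))
# Add a feature that nearly duplicates column 'a'
df['e'] = df['a'] * 0.99 + rng.normal(scale=0.01, size=100)

# Upper triangle of the absolute correlation matrix, so each
# correlated pair is checked once
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))

# Drop any column highly correlated (> 0.95) with an earlier one
to_drop = [c for c in upper.columns if (upper[c] > 0.95).any()]
reduced = df.drop(columns=to_drop)
```

Here only the near-duplicate column 'e' is removed; the reduced frame can then be passed to k-means.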
I agree with Lov Kumar, but we can hybridize his answer with Amitay Nachmani's: use k-means as Lov suggests, but with the Mahalanobis distance instead of the Euclidean distance, as Amitay suggests.
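A minimal sketch of that hybrid in Python with scikit-learn, using the standard whitening trick (the toy data and cluster count are illustrative): if the inverse covariance factors as L Lᵀ, then Euclidean distance on the transformed data X L equals Mahalanobis distance on the original data, so ordinary k-means on the whitened data is exactly k-means with the Mahalanobis distance.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
A = rng.normal(size=(300, 3))
# Mix the columns to make the features correlated
X = A @ np.array([[1.0, 0.9, 0.0],
                  [0.0, 1.0, 0.8],
                  [0.0, 0.0, 1.0]])

# Whiten: with inv(cov) = L @ L.T (Cholesky), Euclidean distance
# on Xw equals Mahalanobis distance on X.
cov = np.cov(X, rowvar=False)
L = np.linalg.cholesky(np.linalg.inv(cov))
Xw = (X - X.mean(axis=0)) @ L

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xw)
```

The whitened data has identity covariance, which is why plain k-means on it behaves like Mahalanobis k-means on the original features.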