I am working on a k-means algorithm that includes a variable-weighting step. At each iteration, a different weight is assigned to each variable according to its contribution to the clustering criterion. The algorithm starts from a random partition into k groups, then iterates a step that computes the centers, a step that assigns the data points, and a step that computes the variable weights, and so on until convergence (no point switches cluster). A minimal sketch of this loop is given below.
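For concreteness, here is a minimal sketch of the kind of loop I mean (Python/NumPy). The weight-update rule shown here (weights inversely proportional to each variable's total within-cluster dispersion, normalized to sum to 1) is only a placeholder assumption, not my actual formula, and empty-cluster handling is omitted:

```python
import numpy as np

def weighted_kmeans(X, k, max_iter=100, rng=None):
    rng = np.random.default_rng(rng)
    n, p = X.shape
    labels = rng.integers(k, size=n)      # random initial partition
    weights = np.full(p, 1.0 / p)         # start with equal variable weights
    for _ in range(max_iter):
        # centers of the current partition (empty-cluster handling omitted)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # assign each point to the closest center under weighted Euclidean distance
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2 * weights).sum(axis=2)
        new_labels = d2.argmin(axis=1)
        # placeholder weight update: inverse within-cluster dispersion per variable
        disp = np.array([((X[labels == j] - centers[j]) ** 2).sum(axis=0)
                         for j in range(k)]).sum(axis=0)
        weights = 1.0 / (disp + 1e-12)
        weights /= weights.sum()
        if np.array_equal(new_labels, labels):  # convergence: no point switches cluster
            break
        labels = new_labels
    return labels, centers, weights
```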
Now I would like to evaluate how sensitive the algorithm is to its initialization (apart from the number of iterations and the time spent per initialization). I take the same dataset, run the algorithm, and obtain a label for each object (I also obtain the centers and the final weighted within-cluster sum of squares), and I repeat this 100 times.

To assess the sensitivity to initialization I can only work with the final configurations of cluster labels: I cannot really compare the final criterion values (they may be on different scales because of the weights), and comparing the centers requires deciding when they are "close enough" (but how close is enough?). I could compute a Rand index for each pair of clusterings and look at its distribution, as sketched below (but that means 100*99/2 = 4950 comparisons, which may be time-consuming). Or can I do something else?
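The pairwise comparison I have in mind would look roughly like this, e.g. with scikit-learn's adjusted Rand index, assuming the 100 label vectors are stacked in a hypothetical (100, n) array `runs`:

```python
import itertools
import numpy as np
from sklearn.metrics import adjusted_rand_score

def pairwise_ari(runs):
    # adjusted Rand index for every pair of runs (100*99/2 = 4950 pairs)
    scores = [adjusted_rand_score(runs[i], runs[j])
              for i, j in itertools.combinations(range(len(runs)), 2)]
    return np.array(scores)

# e.g. inspect the distribution of pairwise agreement:
# scores = pairwise_ari(runs)
# print(scores.mean(), scores.std(), np.percentile(scores, [5, 50, 95]))
```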