There are at least three kinds of probabilities you could associate with an SVM.
1. The probability that the machine makes a good decision (in one of many senses);
2. The probability of a specific label for a specific example being correct;
3. The probability that a difference (of label, performance, prevalence) is significant.
You seem to have been thinking in terms of variant 1, which includes measures like Kendall Tau, Sensitivity and Specificity. These relate to evaluation. Kendall Tau is a rank correlation computed by pair counting. Ranking is used when we are not sure we are using consistent scales (low information in the specific value); pairwise evaluation techniques are usually used when comparing unsupervised or clustering systems (low information due to the lack of supervised labels).
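As a quick illustration (not part of the original question), scipy's `kendalltau` does the pair counting for you; the two rankings below are purely hypothetical:

```python
from scipy.stats import kendalltau

# Hypothetical rankings of the same five items by two systems
system_a = [1, 2, 3, 4, 5]
system_b = [2, 1, 3, 5, 4]

# Kendall Tau compares concordant vs discordant pairs
tau, p_value = kendalltau(system_a, system_b)
print(tau, p_value)
```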
Sensitivity and Specificity are a good pair of measures, but on their own they are sensitive to bias, so Youden's J and dichotomous Informedness add Sensitivity and Specificity and subtract one (= Recall + InvRec - 1) to give what can be shown to be the probability of an informed decision (versus guessing). Markedness measures the probability in the reverse direction (= Precision + InvPrec - 1), and Matthews Correlation is their geometric mean. Informedness can also be understood as the distance above the chance line in a ROC curve. These are all chance-corrected measures in the Kappa family (subtract a chance estimate and renormalise to the form of a probability), but unlike Informedness not all of them have a clear probabilistic interpretation.
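To make the arithmetic concrete, here is a minimal sketch computing Informedness, Markedness and their geometric mean from a 2x2 confusion matrix; the labels are made up, and the final assertion checks the geometric-mean identity against sklearn's `matthews_corrcoef`:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, matthews_corrcoef

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)   # Recall
specificity = tn / (tn + fp)   # Inverse Recall
precision   = tp / (tp + fp)
inv_prec    = tn / (tn + fn)   # Inverse Precision

informedness = sensitivity + specificity - 1   # Youden's J
markedness   = precision + inv_prec - 1

# Matthews Correlation is the (signed) geometric mean of the two
mcc = np.sign(informedness) * np.sqrt(abs(informedness * markedness))
assert np.isclose(mcc, matthews_corrcoef(y_true, y_pred))
print(informedness, markedness, mcc)
```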
RVM and Platt's SVM calibration address variant 2 of the question. It is useful to calibrate the raw activations (thinking in neural terms) or distances (thinking in decision-boundary terms) into the form of probabilities. This can be done parametrically, using squashing functions, or empirically, by counting in a validation set - Pool Adjacent Violators (PAV) or Receiver Operating Characteristic (ROC) analysis can be used for this.
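A sketch of both routes with scikit-learn, assuming a linear SVM on synthetic data: `method="sigmoid"` fits Platt's parametric squashing function to the raw decision values, while `method="isotonic"` is the empirical PAV approach.

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# LinearSVC only gives signed distances via decision_function;
# calibration turns these into probabilities.
platt = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5).fit(X_tr, y_tr)

# Isotonic regression = Pool Adjacent Violators, fitted on held-out folds
pav = CalibratedClassifierCV(LinearSVC(), method="isotonic", cv=5).fit(X_tr, y_tr)

print(platt.predict_proba(X_te[:3]))  # calibrated P(label | example)
print(pav.predict_proba(X_te[:3]))
```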
Variant 3 is probably not what you were thinking of, and relates most obviously to statistical significance: if you tested on a different sample (data set), would you expect to get the same results? ROC Area Under the Curve addresses this in a different way: the difference between the single-point ROC AUC ([Specificity + Sensitivity]/2 = [Informedness + 1]/2) and the multipoint ROC AUC, or AUCH (Area Under the Convex Hull), gives a measure of how well the system stands up to variations in the data samples (prevalence and cost variations).
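A small sketch of the single-point versus multipoint contrast, with made-up scores standing in for SVM decision values:

```python
import numpy as np
from sklearn.metrics import recall_score, roc_auc_score

y      = np.array([0, 0, 1, 1, 0, 1, 1, 0])            # true labels
scores = np.array([-1.2, -0.3, 0.4, 1.5, 0.2, -0.1, 0.9, -0.8])
y_hat  = (scores > 0).astype(int)                       # threshold at 0

sens = recall_score(y, y_hat)                 # true positive rate
spec = recall_score(y, y_hat, pos_label=0)    # true negative rate
single_point_auc = (sens + spec) / 2          # = (Informedness + 1) / 2

multi_point_auc = roc_auc_score(y, scores)    # sweeps all thresholds

print(single_point_auc, multi_point_auc)
```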