ROC curves capture performance across the entire range of decision thresholds, whereas populating a confusion matrix (i.e., calculating TN, TP, FN, and FP) requires selecting a single decision threshold.
To plot a ROC curve, use one of the functions suggested by Frerk Saxen.
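For example, if you have the Statistics and Machine Learning Toolbox, perfcurve is one option (a minimal sketch, assuming groundtruth is a logical vector of verified labels and scores holds the raw SVM output values; both names are illustrative):

% Requires the Statistics and Machine Learning Toolbox.
[fpr, tpr, ~, auc] = perfcurve(groundtruth, scores, true);  % ROC points and AUC
plot(fpr, tpr)
xlabel('False positive rate')
ylabel('True positive rate')
title(sprintf('ROC curve (AUC = %.3f)', auc))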
To populate a confusion matrix, you must discretize your output: any examples with an SVM output value higher than the threshold are recoded as "True" and any examples with an SVM output value lower than the threshold are recoded as "False." You then compare these predictions to verified labels (i.e., ground truth) to get the four possibilities.
In pseudo-code:
TP = sum(prediction==True AND groundtruth==True)
FP = sum(prediction==True AND groundtruth==False)
TN = sum(prediction==False AND groundtruth==False)
FN = sum(prediction==False AND groundtruth==True)
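In MATLAB, the same idea looks like this (a minimal sketch; the scores, groundtruth, and threshold values are illustrative assumptions, not data from the question):

% Illustrative data; substitute your own SVM outputs and verified labels.
scores      = [2.1 -0.4 0.7 -1.3 1.8 -0.2];  % raw SVM output values
groundtruth = logical([1 0 1 0 0 1]);        % ground-truth labels
threshold   = 0;                             % single decision threshold

prediction = scores > threshold;             % discretize the output

TP = sum( prediction &  groundtruth);        % predicted True,  actually True
FP = sum( prediction & ~groundtruth);        % predicted True,  actually False
TN = sum(~prediction & ~groundtruth);        % predicted False, actually False
FN = sum(~prediction &  groundtruth);        % predicted False, actually True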
I don't know what SVM is, but you need a set of cases for which you know the correct classification (a 'gold standard') and then simply compare your classifier against that gold standard. TN = the number of negative events that your classifier classified as negative; FP = negative events that your classifier classified as positive; TP = positive events that it classified as positive; and FN = positive events that it classified as negative.
SVM stands for support vector machine. There are several MATLAB implementations of it, so I can't give you a single clear answer, but I have participated in similar questions before. If you don't find your answer there, please ask a somewhat more detailed question.
This paper gives a detailed explanation, with numerical examples, of many classification assessment methods and measures, such as accuracy, sensitivity, specificity, the ROC curve, the precision-recall (PR) curve, the AUC score, and many other metrics. It provides many details about the ROC curve, the PR curve, and the Detection Error Trade-off (DET) curve, and also explains several measures that are suitable for imbalanced data.
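For reference, several of these measures can be computed directly from the confusion-matrix counts defined earlier in this thread (a minimal MATLAB sketch using the standard textbook formulas, not code from the paper):

% Standard definitions, computed from the counts TP, FP, TN, FN.
accuracy    = (TP + TN) / (TP + TN + FP + FN);
sensitivity = TP / (TP + FN);  % recall / true positive rate
specificity = TN / (TN + FP);  % true negative rate
precision   = TP / (TP + FP);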