The ROC curve plots sensitivity against 1-specificity as you change the threshold for classification. That allows you not only to assess overall accuracy (the AUC) but also to try to select a best threshold.
With a very low threshold, everything will be detected but you'll have no specificity. With a very high threshold, you'll have perfect specificity but never classify anything as positive.
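Here's a minimal sketch of this in Python with scikit-learn; the data in `y_true`/`y_score` is made up for illustration, and Youden's J is just one common choice of "best threshold" criterion, not the only one:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])           # true labels (toy data)
y_score = np.array([0.1, 0.4, 0.35, 0.8,
                    0.2, 0.7, 0.55, 0.9])             # classifier scores (toy data)

fpr, tpr, thresholds = roc_curve(y_true, y_score)     # fpr is 1-specificity
auc = roc_auc_score(y_true, y_score)                  # overall summary of accuracy

# Pick the threshold maximizing Youden's J = sensitivity + specificity - 1
# (equivalently tpr - fpr); other criteria may suit other applications.
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.3f}, best threshold = {thresholds[best]:.2f} "
      f"(sensitivity = {tpr[best]:.2f}, 1-specificity = {fpr[best]:.2f})")
```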
You could plot specificity on the X-axis and just reverse the direction so it goes from 1 to 0 instead of 0 to 1.
It's more intuitive with 1-specificity, which is the false positive rate: the proportion of false positives among all cases that should be negative (false positives + true negatives). As you move along the ROC curve, you get more true positives but also more false positives.
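A quick numeric illustration of that definition (the counts below are invented for the example):

```python
FP, TN = 8, 92                  # false positives, true negatives (made-up counts)
specificity = TN / (TN + FP)    # 0.92
fpr = FP / (FP + TN)            # 0.08, i.e. exactly 1 - specificity
print(specificity, fpr, 1 - specificity)
```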
I would add that if you actually plot specificity on the X-axis running from 0 to 1, you will end up with a left-facing curve. However, the special meaning of the area under the curve for sensitivity vs. (1-specificity) will be lost.
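For concreteness, a sketch of the reversed-axis plot described above, assuming matplotlib and the same toy `y_true`/`y_score` placeholders as before:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])           # toy data
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9])
fpr, tpr, _ = roc_curve(y_true, y_score)

plt.plot(1 - fpr, tpr)          # specificity on the X-axis
plt.gca().invert_xaxis()        # axis runs 1 -> 0, so the curve keeps its usual shape
plt.xlabel("Specificity")
plt.ylabel("Sensitivity")
plt.show()
```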
This paper gives a detailed explanation, with numerical examples, of many classification assessment methods (measures), such as:
accuracy, sensitivity, specificity, the ROC curve, the precision-recall curve, the AUC score, and many other metrics. The paper covers the ROC curve, PR curve, and Detection Error Trade-off (DET) curve in detail. Moreover, it explains several measures that are suitable for imbalanced data.