You can compare two dependent ROC curves (i.e. curves obtained from the same data) by calculating a confidence interval for the difference between the two ROC indices (typically the AUCs). A Wald test statistic, the observed difference divided by its standard error, is then compared against the standard normal distribution to obtain a p-value. Several statistical packages can compute these quantities automatically.
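As a rough illustration of this test, here is a minimal Python sketch (assuming scikit-learn, NumPy and SciPy are available; the helper name `paired_auc_difference_test` is made up for this example). It estimates the standard error of the AUC difference with a paired bootstrap rather than the analytic variance of DeLong's method that dedicated packages typically use, and then forms the Wald statistic and a two-sided p-value.

```python
import numpy as np
from scipy.stats import norm
from sklearn.metrics import roc_auc_score

def paired_auc_difference_test(y_true, score_a, score_b, n_boot=2000, seed=0):
    """Wald-type test for the difference between two correlated AUCs.

    The standard error of the AUC difference is estimated with a paired
    bootstrap (resampling subjects, so the correlation between the two
    markers is preserved). This is an illustrative approximation, not
    DeLong's analytic variance.
    """
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    score_a = np.asarray(score_a)
    score_b = np.asarray(score_b)

    observed_diff = roc_auc_score(y_true, score_a) - roc_auc_score(y_true, score_b)

    diffs = []
    n = len(y_true)
    while len(diffs) < n_boot:
        idx = rng.integers(0, n, n)          # resample subjects with replacement
        if len(np.unique(y_true[idx])) < 2:  # need both classes in the resample
            continue
        diffs.append(roc_auc_score(y_true[idx], score_a[idx])
                     - roc_auc_score(y_true[idx], score_b[idx]))

    se = np.std(diffs, ddof=1)
    z = observed_diff / se
    p = 2 * norm.sf(abs(z))                  # two-sided p-value from N(0, 1)
    return observed_diff, se, z, p
```

Because the bootstrap resamples whole subjects, the correlation between the two markers that arises from using the same data is carried into the standard error, which is what makes the comparison valid for dependent curves.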
ROC curve analysis is a useful tool for determining the optimal cut-off value of a predictor (independent) variable in prediction models. The key steps are:
1. Construct the ROC curve by plotting the true positive rate (sensitivity) against the false positive rate (1-specificity) for different cut-off values of the predictor variable.
2. Calculate the area under the ROC curve (AUC) which summarizes the overall diagnostic ability of the predictor variable. An AUC of 1 represents perfect discrimination, while 0.5 indicates the predictor performs no better than chance.
3. Identify the cut-off value that optimizes the trade-off between sensitivity and specificity based on the clinical context. This can be done by:
- Selecting the point on the ROC curve closest to the top-left corner (0,1), i.e. the cut-off that minimizes the distance to perfect classification and thereby balances sensitivity and specificity.
- Choosing a cut-off that achieves a desired sensitivity or specificity level based on the consequences of false positives and false negatives.
4. The optimal cut-off point corresponds to the predictor value that yields the desired balance of sensitivity and specificity or that maximizes the Youden index (sensitivity + specificity - 1); a short code sketch illustrating steps 1-4 follows this list.
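To make these steps concrete, the sketch below (Python with scikit-learn; the predictor values and outcomes are invented toy data, not results from any real study) computes the ROC coordinates, the AUC, the Youden-optimal cut-off, and the cut-off closest to the (0,1) corner.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical example data: a continuous predictor and a binary outcome.
y_true = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1])
predictor = np.array([0.2, 1.1, 1.4, 2.0, 2.2, 2.5, 3.1, 3.4, 3.6, 4.0, 4.5, 5.2])

# Step 1: ROC coordinates (FPR, TPR) for every candidate cut-off.
fpr, tpr, thresholds = roc_curve(y_true, predictor)

# Step 2: area under the curve as a summary of discrimination.
auc = roc_auc_score(y_true, predictor)

# Step 4 (Youden index): cut-off maximizing sensitivity + specificity - 1.
youden = tpr - fpr                      # equals sensitivity + specificity - 1
best_youden = thresholds[np.argmax(youden)]

# Alternative criterion: cut-off closest to the top-left corner (0, 1).
dist_to_corner = np.sqrt(fpr**2 + (1 - tpr)**2)
best_corner = thresholds[np.argmin(dist_to_corner)]

print(f"AUC = {auc:.3f}")
print(f"Youden-optimal cut-off: {best_youden}")
print(f"Closest-to-(0,1) cut-off: {best_corner}")
```

Note that `roc_curve` evaluates every observed predictor value as a candidate threshold, so the optimal cut-off is read directly from the thresholds array at the index where the chosen criterion is best.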
By analyzing the ROC curve, researchers can select the most clinically relevant cut-off for continuous or ordinal predictor variables in diagnostic or prognostic models, optimizing the trade-off between false positives and false negatives.