Hi all,

I want to report the p-values and confidence intervals for the AUC values of two different multi-label classifiers. I found the following discussion, which applies to a binary classification problem.

https://stackoverflow.com/questions/52373318/how-to-compare-roc-auc-scores-of-different-binary-classifiers-and-assess-statist

But, how can we implement this in a multi-label setting?
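
For context, here is a minimal sketch of the kind of approach I have in mind, extending the paired-bootstrap idea from the linked answer to a multi-label setting by macro-averaging the per-label AUCs. The function name `bootstrap_auc_difference`, the choice of macro averaging, and the 2000 resamples are my own assumptions, not something taken from the linked answer:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_difference(y_true, scores_a, scores_b, n_boot=2000, seed=42):
    """Paired bootstrap over samples: macro-averaged AUC for two
    multi-label classifiers, their difference, 95% CIs, and a
    two-sided p-value for the null hypothesis of equal AUC.

    y_true:   (n_samples, n_labels) binary indicator matrix
    scores_a, scores_b: (n_samples, n_labels) predicted scores
    """
    rng = np.random.default_rng(seed)
    n = y_true.shape[0]
    aucs_a, aucs_b, diffs = [], [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample rows with replacement
        yt = y_true[idx]
        # skip resamples where some label has only one class present,
        # since AUC is undefined there
        if np.any(yt.sum(axis=0) == 0) or np.any(yt.sum(axis=0) == n):
            continue
        a = roc_auc_score(yt, scores_a[idx], average="macro")
        b = roc_auc_score(yt, scores_b[idx], average="macro")
        aucs_a.append(a)
        aucs_b.append(b)
        diffs.append(a - b)
    diffs = np.asarray(diffs)
    # two-sided bootstrap p-value for H0: macro-AUC_a == macro-AUC_b
    p_value = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return {
        "auc_a_ci": np.percentile(aucs_a, [2.5, 97.5]),
        "auc_b_ci": np.percentile(aucs_b, [2.5, 97.5]),
        "diff_ci": np.percentile(diffs, [2.5, 97.5]),
        "p_value": p_value,
    }
```

I am not sure whether macro-averaging is the right way to summarise the per-label AUCs here, or whether the comparison should instead be done per label with a multiple-comparison correction.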

Could you kindly provide some suggestions on this?

Thank you in advance!
