True Positive (TP) is an outcome where the model correctly predicts the positive class.
True Negative (TN) is an outcome where the model correctly predicts the negative class.
False Positive (FP) is an outcome where the model incorrectly predicts the positive class.
False Negative (FN) is an outcome where the model incorrectly predicts the negative class.
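The four outcomes above can be counted directly from paired actual/predicted labels. This is a minimal sketch with made-up illustrative data (1 = positive class, 0 = negative class); the labels are not from the text.

```python
# Illustrative actual and predicted binary labels (hypothetical data).
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]

# Count each confusion-matrix outcome by comparing the pairs.
tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # correct positives
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # correct negatives
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # predicted positive, actually negative
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # predicted negative, actually positive

print(tp, tn, fp, fn)  # → 3 3 1 1
```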
An ROC curve (receiver operating characteristic curve) is a graph showing the performance of a classification model at all classification thresholds.
An ROC curve plots two parameters:
True Positive Rate
False Positive Rate
True Positive Rate (TPR) is a synonym for recall and is defined mathematically as
TPR = TP / (TP + FN)
False Positive Rate (FPR) is defined mathematically as
FPR = FP / (FP + TN)
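The two formulas above can be sketched in Python using the confusion-matrix counts; the counts here are hypothetical placeholders, not values from the text.

```python
# Hypothetical confusion-matrix counts for illustration.
tp, fn = 3, 1
fp, tn = 1, 3

tpr = tp / (tp + fn)  # True Positive Rate (recall)
fpr = fp / (fp + tn)  # False Positive Rate

print(tpr, fpr)  # → 0.75 0.25
```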
An ROC curve plots TPR vs. FPR at different classification thresholds. Lowering the classification threshold classifies more items as positive, thus increasing both False Positives and True Positives.
ROC analysis applies to binary data (e.g., true/false, forest/non-forest). Many software packages can calculate it; R has outstanding capabilities for this, and Python also offers good options. If you prefer not to script, try TANAGRA, which is free: https://eric.univ-lyon2.fr/~ricco/tanagra/en/tanagra.html