There are plenty of metrics for assessing how good a classifier is. Here I want to explain the kappa coefficient.
Cohen's kappa: it measures the inter-rater agreement between the model's predictions and the ground truth. The smaller kappa is, the less agreement there is between the truth and the predictions.
Its definition is kappa = (Pr(a) - Pr(e)) / (1 - Pr(e)), where Pr(a) is the observed agreement (accuracy) and Pr(e) is the chance agreement (McHugh, M.L. 2012).
If there is complete agreement, kappa is 1, which also means there is no error. A non-positive kappa indicates that the model performs no better than, or even worse than, chance agreement.
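A minimal sketch of how kappa can be computed from predictions, assuming NumPy and scikit-learn are available; the toy label arrays are hypothetical, and scikit-learn's cohen_kappa_score is only used as a cross-check:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])  # hypothetical ground truth
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 0])  # hypothetical predictions

labels = np.unique(np.concatenate([y_true, y_pred]))

# Pr(a): observed agreement, i.e. plain accuracy
pr_a = np.mean(y_true == y_pred)

# Pr(e): chance agreement, from the marginal class frequencies of truth and prediction
pr_e = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in labels)

kappa = (pr_a - pr_e) / (1 - pr_e)
print(kappa, cohen_kappa_score(y_true, y_pred))  # the two values should match
```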
References:
McHugh, M. L. (2012). Interrater reliability: the kappa statistic. Biochemia Medica, 22(3), 276–282.
The paper that introduced kappa as a new technique:
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46.
Other common metrics derived from the confusion matrix include: true positive rate (TP rate), also called recall or sensitivity; true negative rate (TN rate), also called specificity; false positive rate (FP rate); false negative rate (FN rate); positive predictive value (PPV), also called precision; and negative predictive value (NPV). A sketch computing these is shown below.
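A minimal sketch, assuming binary labels and scikit-learn; the toy arrays below are hypothetical:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 0])

# For binary labels, ravel() returns the counts in the order tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

tpr = tp / (tp + fn)  # true positive rate = recall = sensitivity
tnr = tn / (tn + fp)  # true negative rate = specificity
fpr = fp / (fp + tn)  # false positive rate
fnr = fn / (fn + tp)  # false negative rate
ppv = tp / (tp + fp)  # positive predictive value = precision
npv = tn / (tn + fn)  # negative predictive value

print(f"TPR={tpr:.2f} TNR={tnr:.2f} FPR={fpr:.2f} "
      f"FNR={fnr:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```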
For imbalanced classification, additional metrics such as the geometric mean (G-mean), the F-measure, and the ROC curve (and its AUC) are commonly used; see the sketch below.
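A minimal sketch of these imbalance-oriented metrics, assuming scikit-learn; y_score stands in for hypothetical predicted probabilities, since ROC AUC needs scores rather than hard labels:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

y_true  = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])
y_pred  = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 0])
y_score = np.array([0.2, 0.6, 0.8, 0.9, 0.4, 0.1, 0.7, 0.3, 0.8, 0.45])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
g_mean = np.sqrt((tp / (tp + fn)) * (tn / (tn + fp)))  # sqrt(TPR * TNR)

print("G-mean :", g_mean)
print("F1     :", f1_score(y_true, y_pred))        # harmonic mean of precision and recall
print("ROC AUC:", roc_auc_score(y_true, y_score))  # threshold-free ranking quality
```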