As we can calculate the accuracy quite easily from the number of correctly classified pixels versus those that are misclassified, how can we calculate the Kappa Coefficient?
The Kappa Coefficient, a.k.a. Cohen's kappa, can easily be calculated with a formula using the numbers of true positive, false positive, false negative and true negative cases from the confusion matrix (contingency table). You can follow the example on this Wikipedia page:
https://en.wikipedia.org/wiki/Cohen%27s_kappa
If you use the R statistical software, the most straightforward way to calculate kappa is with the function cohen.kappa() from package 'psych', the function Kappa() from package 'vcd', or the function Kappa.test() from package 'fmsb'.
Note that the function kappa() in package 'base' is not Cohen's kappa but calculates the condition number (a multicollinearity measure).
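As a quick illustration of the above, here is a minimal R sketch that builds a 2x2 confusion matrix from made-up true positive, false positive, false negative and true negative counts and passes it to the three functions mentioned; the counts are purely hypothetical, and the calls assume the interfaces described in those packages' documentation.

# Hypothetical counts: rows = reference, columns = classified
conf_mat <- matrix(c(45,  5,   # TP, FN
                     10, 40),  # FP, TN
                   nrow = 2, byrow = TRUE,
                   dimnames = list(reference  = c("pos", "neg"),
                                   classified = c("pos", "neg")))

# install.packages(c("psych", "vcd", "fmsb"))  # if not already installed
psych::cohen.kappa(conf_mat)   # Cohen's kappa (and weighted kappa)
vcd::Kappa(conf_mat)           # unweighted/weighted kappa with standard errors
fmsb::Kappa.test(conf_mat)     # kappa with confidence interval and z-test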
There is a very good book on the computation of kappa and Tau by Congalton and Green. It also covers statistical z-test analysis of error matrices, which is important for comparing multiple classifiers.
The kappa statistic is used to account for instances that may have been classified correctly merely by chance. It is calculated from the observed (total) accuracy and the expected (random) accuracy:
Kappa = (total accuracy - random accuracy) / (1 - random accuracy)
In essence, the kappa statistic is a measure of how closely the instances classified by the machine learning classifier match the data labelled as ground truth, controlling for the accuracy of a random classifier as measured by the expected accuracy.
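To make the formula above concrete, the sketch below computes the observed (total) accuracy, the expected (random) accuracy and kappa directly from a confusion matrix in base R; the three-class matrix is a made-up example, not data from any of the cited sources.

# Hypothetical 3-class confusion matrix: rows = reference, columns = classified
cm <- matrix(c(50,  3,  2,
                4, 45,  6,
                1,  5, 44),
             nrow = 3, byrow = TRUE)

n <- sum(cm)                                  # total number of pixels
observed_acc <- sum(diag(cm)) / n             # total (observed) accuracy
# Expected (random) accuracy: chance agreement from row and column marginals
expected_acc <- sum(rowSums(cm) * colSums(cm)) / n^2

kappa <- (observed_acc - expected_acc) / (1 - expected_acc)
kappa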
You can check this useful link; it explains the different methods for assessing the accuracy of the classification of remotely sensed data: http://www.50northspatial.org/classification-accuracy-assessment-confusion-matrix-method/