The Kappa coefficient measures the agreement between two classifications. In remote sensing, it is used to assess the accuracy of a classification algorithm by comparing the algorithm's output with an already classified reference image/dataset (the expected result).
If you have a list of algorithms and their Kappa values, the highest Kappa identifies the most accurate algorithm; however, this holds only if the classified categories are the same in all cases. Kappa corrects for the agreement expected by chance, so it is generally preferred to overall accuracy, which is inflated when a well-classified class contributes many more samples than a poorly classified one.
(If one class is recognized poorly, Kappa stays low despite the high accuracy of the other classes.)
Both the Kappa coefficient and overall accuracy are calculated from the confusion matrix, so it is good practice to publish the confusion matrix itself; other coefficients can then be computed from it for comparison.
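As a sketch of how Kappa follows from the confusion matrix (assuming rows are reference classes and columns are predicted classes; the function name is illustrative):

```python
import numpy as np

def kappa(cm):
    """Cohen's kappa from a confusion matrix.

    Rows are assumed to be reference classes, columns predicted classes.
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance.
    """
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n  # observed agreement: diagonal / total samples
    # chance agreement: product of matching row and column marginals
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    return (p_o - p_e) / (1 - p_e)
```

For example, `kappa([[20, 5], [10, 15]])` gives 0.4: the observed agreement is 0.7, but half of it is expected by chance alone.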
Therefore, if the confusion matrices are provided, the more correct approach is to reduce each confusion matrix to the common classes only, recalculate the Kappa coefficients from the reduced matrices, and compare those.
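A minimal sketch of that reduction, assuming the class labels of each matrix are known (the helper name is hypothetical):

```python
import numpy as np

def restrict_to_common(cm, labels, common):
    """Keep only the rows/columns of the confusion matrix for the common classes.

    `labels` lists the classes of `cm` in row/column order;
    `common` lists the classes shared by all compared algorithms.
    """
    idx = [labels.index(c) for c in common]
    # np.ix_ selects the same subset of rows and columns
    return np.asarray(cm)[np.ix_(idx, idx)]
```

After this reduction, Kappa can be recomputed from each reduced matrix, and the resulting values are comparable on an equal footing.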