Hi! I estimated kappa using MacKay's accuracy assessment and am attaching the PDF. The method I used gave a kappa of 92%. Does that mean kappa = 0.92, or did I opt for the wrong method?
A value of 92% (kappa = 0.92) means that the observed agreement across categories exceeded the agreement expected by chance alone (computed from the marginal proportions) by 92% of the maximum possible improvement over chance.
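The chance correction can be sketched directly from a 2 x 2 table of counts; the numbers below are invented for illustration, and the formula is the standard kappa = (p_o - p_e) / (1 - p_e):

```python
# A minimal sketch of how kappa corrects for chance, using a
# hypothetical 2 x 2 table of counts (the numbers are invented).
import numpy as np

table = np.array([[45, 2],
                  [3, 50]])  # rows: rater A's categories, columns: rater B's

n = table.sum()
p_o = np.trace(table) / n                                    # observed agreement
p_e = (table.sum(axis=0) * table.sum(axis=1)).sum() / n**2   # chance agreement from marginals
kappa = (p_o - p_e) / (1 - p_e)

print(f"p_o = {p_o:.3f}, p_e = {p_e:.3f}, kappa = {kappa:.3f}")
```

Note that even when raw agreement p_o is high, kappa can be much lower if the marginals make chance agreement p_e large.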
As for whether you opted for the wrong method, that depends on: your specific research question; the data set (including the variables involved and how they are quantified); and the data collection method.
My interpretation is the same as Dr Morse's. Kappa theoretically ranges from -1 to 1. In general, we compare methods that are supposed to be positively correlated with the construct we measure (i.e., kappa >= 0). Most statistical tests we do, beyond calculating the value of kappa from your data, aim to determine whether kappa differs from 0 (0 means no agreement beyond chance), based on the confidence interval of the value you found. Another approach is to check where the 95% CI falls relative to the classical benchmarks for interpreting kappa reported in various textbooks (see the excellent textbook by Gwet, 2014).
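One common way to get such a confidence interval is a percentile bootstrap over subjects. The sketch below assumes scikit-learn is available for the chance correction; the two rating vectors, the seed, and the replicate count are all invented for illustration:

```python
# Rough percentile-bootstrap 95% CI for kappa, used to check whether the
# interval excludes 0 (no agreement beyond chance) or sits above one of
# the classical benchmarks. The rating vectors are invented.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater_a = np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1])
rater_b = np.array([1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1])

rng = np.random.default_rng(0)
n = len(rater_a)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)  # resample subjects with replacement
    boot.append(cohen_kappa_score(rater_a[idx], rater_b[idx]))

low, high = np.percentile(boot, [2.5, 97.5])
print(f"kappa = {cohen_kappa_score(rater_a, rater_b):.3f}, "
      f"95% CI [{low:.3f}, {high:.3f}]")
```

With a small sample like this the interval is wide, which is exactly why checking the CI against the benchmarks, rather than the point estimate alone, is the safer reading.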
If your kappa is 0.92, it means that, after correcting for chance agreement, you still have a high "correlation" between your two tests. Your 2 x 2 (or n x n) table should also show that your two raters or measures rarely give discrepant results.
Good luck!
Gwet, K. L. (2014). Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters. Advanced Analytics, LLC.