Kappa is the degree to which raters agree on the categorisation of items/responses; it is essentially a chance-corrected measure of agreement. Report the kappa value & its significance (derived using a z-test): if the resulting z is significant, the raters agree beyond chance levels. Siegel & Castellan offer a basic discussion, and Fleiss et al. an extended one.
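For concreteness, here is a minimal sketch (in Python, with made-up labels) of how Cohen's kappa and its z-test against the null of no agreement are typically computed. The null-hypothesis variance used below is the large-sample approximation from Fleiss, Cohen & Everitt (1969), so treat this as illustrative rather than a drop-in for your analysis.

```python
import numpy as np

def cohen_kappa_with_z(rater_a, rater_b):
    """Cohen's kappa for two raters plus a z-test of H0: kappa = 0.

    Uses the large-sample null variance from Fleiss, Cohen & Everitt (1969).
    """
    rater_a = np.asarray(rater_a)
    rater_b = np.asarray(rater_b)
    cats = np.union1d(rater_a, rater_b)
    n = len(rater_a)

    # Contingency table: rater A in rows, rater B in columns
    table = np.array([[np.sum((rater_a == i) & (rater_b == j)) for j in cats]
                      for i in cats], dtype=float)

    p = table / n                    # cell proportions
    p_row = p.sum(axis=1)            # rater A marginals
    p_col = p.sum(axis=0)            # rater B marginals

    p_o = np.trace(p)                # observed agreement
    p_e = np.sum(p_row * p_col)      # agreement expected by chance
    kappa = (p_o - p_e) / (1 - p_e)  # chance-corrected agreement

    # Null-hypothesis variance and z statistic (Fleiss, Cohen & Everitt 1969)
    var0 = (p_e + p_e**2 - np.sum(p_row * p_col * (p_row + p_col))) / (n * (1 - p_e)**2)
    z = kappa / np.sqrt(var0)
    return kappa, z

# Hypothetical example: two raters assigning 12 items to categories 0/1/2
a = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2, 2, 1]
b = [0, 1, 2, 0, 0, 2, 1, 1, 0, 2, 1, 1]
kappa, z = cohen_kappa_with_z(a, b)
print(f"kappa = {kappa:.3f}, z = {z:.2f}")  # compare z to 1.96 for two-sided p < .05
```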
Without knowing more about your research, I'd say it is fine to report a single kappa value if you are looking at a single variable (otherwise, report a kappa for each variable). As for your question, I'm not sure what you mean. Kappa is a pairwise comparison of raters, i.e. it is computed between two raters at a time; a sketch of the per-variable, per-pair reporting is below.
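To illustrate the "one kappa per variable, one per rater pair" point, here is a small hypothetical sketch using scikit-learn's cohen_kappa_score (the variable names, rater labels, and data are invented, and scikit-learn is just one convenient way to get the point estimate):

```python
from itertools import combinations

import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings: three raters each coded the same items on two variables
ratings = {
    "severity": pd.DataFrame({"rater1": [1, 2, 2, 3, 1, 2],
                              "rater2": [1, 2, 3, 3, 1, 2],
                              "rater3": [1, 1, 2, 3, 1, 2]}),
    "relevance": pd.DataFrame({"rater1": [0, 1, 1, 1, 0, 1],
                               "rater2": [0, 1, 0, 1, 0, 1],
                               "rater3": [0, 1, 1, 1, 1, 1]}),
}

# One kappa per variable and per pair of raters
for variable, df in ratings.items():
    for r1, r2 in combinations(df.columns, 2):
        kappa = cohen_kappa_score(df[r1], df[r2])
        print(f"{variable}: {r1} vs {r2}: kappa = {kappa:.2f}")
```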
I think you can report the single kappa value with its 95% CI and use the Landis & Koch classification to describe whether your kappa is fair, moderate, etc. (Landis JR, Koch GG (1977) The measurement of observer agreement for categorical data. Biometrics 33:159–174).
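As a rough sketch of that reporting style (Python, with invented numbers): the standard error below is the simple large-sample approximation sqrt(p_o(1 - p_o) / (n(1 - p_e)^2)) rather than the more involved Fleiss, Cohen & Everitt (1969) variance, and the cut-offs are the Landis & Koch (1977) bands, so check both against your preferred source before reporting.

```python
import numpy as np

def kappa_ci_and_label(p_o, p_e, n):
    """Approximate 95% CI for Cohen's kappa plus a Landis & Koch (1977) label."""
    kappa = (p_o - p_e) / (1 - p_e)
    # Simple large-sample standard error (approximation, not the exact variance)
    se = np.sqrt(p_o * (1 - p_o) / (n * (1 - p_e) ** 2))
    lo, hi = kappa - 1.96 * se, kappa + 1.96 * se  # 1.96 ~ normal quantile for 95% CI

    # Landis & Koch (1977) descriptive bands: <0 poor, 0-.20 slight, .21-.40 fair,
    # .41-.60 moderate, .61-.80 substantial, .81-1.00 almost perfect
    if kappa < 0:
        label = "poor"
    else:
        bands = [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
                 (0.80, "substantial"), (1.00, "almost perfect")]
        label = next(name for upper, name in bands if kappa <= upper)
    return kappa, (lo, hi), label

# Hypothetical numbers: observed agreement 0.85, chance agreement 0.55, 100 items
kappa, (lo, hi), label = kappa_ci_and_label(p_o=0.85, p_e=0.55, n=100)
print(f"kappa = {kappa:.2f}, 95% CI [{lo:.2f}, {hi:.2f}] -> {label} agreement")
```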