Miles and Huberman recommended a straightforward "percentage agreement" formula: the number of agreements divided by the total number of codes (i.e., agreements plus disagreements).
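If it helps to see that formula in practice, here is a minimal sketch in Python (the coder lists and theme labels are purely illustrative, not from Miles and Huberman):

```python
# Minimal sketch of the "percentage agreement" formula:
# agreements divided by total codes (agreements + disagreements).
# The two lists hold each coder's code for the same segments, in order.

coder_a = ["theme1", "theme2", "theme1", "theme3", "theme2"]
coder_b = ["theme1", "theme2", "theme2", "theme3", "theme2"]

agreements = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
total = len(coder_a)  # agreements plus disagreements

percentage_agreement = agreements / total
print(f"Percentage agreement: {percentage_agreement:.0%}")  # 80% here
```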
Others have proposed more complex ways of assessing inter-rater reliability, based on correcting for "agreement by chance" (e.g., Cohen's kappa and Krippendorff's alpha). I personally think that even when you report one of these coefficients, it should be accompanied by the basic percentage agreement.
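For anyone working in Python, here is a small sketch of reporting both figures side by side. It assumes scikit-learn is installed and reuses the same kind of toy coder lists as above; it is only an illustration, not a prescribed workflow:

```python
# Sketch: report Cohen's kappa alongside the basic percentage agreement.
# Assumes scikit-learn is available; the labels below are made up.
from sklearn.metrics import accuracy_score, cohen_kappa_score

coder_a = ["theme1", "theme2", "theme1", "theme3", "theme2"]
coder_b = ["theme1", "theme2", "theme2", "theme3", "theme2"]

# accuracy_score on two coders' label lists is exactly percentage agreement
pct_agreement = accuracy_score(coder_a, coder_b)
kappa = cohen_kappa_score(coder_a, coder_b)

print(f"Percentage agreement: {pct_agreement:.0%}")
print(f"Cohen's kappa:        {kappa:.2f}")
```

Reporting the two together lets readers see how much the chance correction changes the picture for your particular coding scheme.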
Sachin, in case it's useful, a free online tool that I often use and recommend is Deen Freelon's ReCal for two coders, and ReCal3 if you have three or more coders. Here is the link:
Miles & Huberman (1994) stated that the analysis consists of three activities that occur simultaneously: data reduction, data display, and conclusion drawing/verification. In this data collection technique, the researcher focused only on data reduction.