Well, you have two options. First, you can run a McNemar test, which is the equivalent of a paired-samples t test for dichotomous data, to check whether there is a difference in the responses. Second, I suppose you could treat the two measurements as if they came from two judges and compute Cohen's kappa to quantify the degree of agreement (like a correlation, kappa values near 1 suggest strong agreement).
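Both options can be sketched in a few lines. This is a minimal illustration using only the standard library; the 2x2 table of counts is invented for the example, and in practice you would use a statistics package (e.g. statsmodels or scikit-learn) rather than hand-rolled formulas.

```python
def mcnemar_statistic(b, c):
    """McNemar chi-square statistic (with continuity correction)
    from the two discordant cell counts b and c of the paired 2x2 table."""
    return (abs(b - c) - 1) ** 2 / (b + c)

def cohens_kappa(table):
    """Cohen's kappa from a 2x2 agreement table
    [[both_yes, yes_no], [no_yes, both_no]]."""
    n = sum(sum(row) for row in table)
    p_o = (table[0][0] + table[1][1]) / n        # observed agreement
    p_yes1 = (table[0][0] + table[0][1]) / n     # measure 1 says "yes"
    p_yes2 = (table[0][0] + table[1][0]) / n     # measure 2 says "yes"
    p_e = p_yes1 * p_yes2 + (1 - p_yes1) * (1 - p_yes2)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical paired yes/no responses: rows = first measure, cols = second
table = [[40, 5],
         [10, 45]]
chi2 = mcnemar_statistic(b=table[0][1], c=table[1][0])
kappa = cohens_kappa(table)
```

The chi-square statistic is compared against the chi-square distribution with 1 degree of freedom; the kappa value is read directly as a magnitude of agreement.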
First it must be determined whether the set of items refers to the same construct. Normally Cronbach's alpha coefficient is calculated, but for nominal (yes/no) data the statistical equivalent of Cronbach's coefficient is the Kuder-Richardson KR20 (alpha model).
A few years before Cronbach proposed the alpha coefficient as an indicator of internal consistency, Kuder and Richardson (1937) presented two formulas for calculating this indicator when the items are dichotomous. These two formulas are known as KR20 and KR21.
When the items in a test are dichotomous and the two possible response alternatives are coded as 0 and 1, the variance of an item equals the proportion of 0s multiplied by the proportion of 1s. If it is a performance test and the answers to the items are correct or incorrect, correct answers are usually coded 1 and incorrect ones 0.
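Putting these pieces together, KR20 is k/(k-1) times one minus the ratio of the summed item variances (p times q per item) to the variance of the total scores. A minimal standard-library sketch, with an invented score matrix for illustration:

```python
def kr20(scores):
    """KR20 for dichotomous items.
    scores: list of respondents, each a list of 0/1 item answers."""
    k = len(scores[0])   # number of items
    n = len(scores)      # number of respondents
    # Item variance p*q: proportion of 1s times proportion of 0s
    pq_sum = 0.0
    for j in range(k):
        p = sum(row[j] for row in scores) / n
        pq_sum += p * (1 - p)
    # Population variance of the respondents' total scores
    totals = [sum(row) for row in scores]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - pq_sum / var_total)

# Hypothetical answers of 6 respondents to 5 yes/no items
data = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 0, 1],
]
reliability = kr20(data)
```

As with Cronbach's alpha, values closer to 1 indicate higher internal consistency; a perfectly consistent set of items (every respondent answering all items the same way) yields exactly 1.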