1. Indeed, common reliability measures include Cronbach's alpha, split-half reliability, and Cronbach's alpha if item deleted. They are essential measures for assessing the survey design (a small worked sketch follows this list).
2. The analysis of variance (ANOVA) test is used to test for differences in ratings between more than two treatments/groups simultaneously (see the second sketch after this list).
3. I don't think that there is any relation between them.
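To illustrate the first point, here is a minimal sketch of Cronbach's alpha and alpha-if-item-deleted, computed with NumPy on a small made-up respondents-by-items matrix (the data and the helper name cronbach_alpha are purely illustrative, not from any particular survey):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical survey data: 8 respondents answering 4 Likert-type items
X = np.array([
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [1, 2, 2, 1],
    [3, 3, 4, 3],
    [4, 4, 5, 4],
    [2, 3, 2, 3],
], dtype=float)

print(f"alpha = {cronbach_alpha(X):.3f}")

# "Alpha if item deleted": recompute alpha leaving out one item at a time
for j in range(X.shape[1]):
    reduced = np.delete(X, j, axis=1)
    print(f"alpha without item {j + 1}: {cronbach_alpha(reduced):.3f}")
```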
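And for the second point, a minimal one-way ANOVA sketch using scipy.stats.f_oneway on invented ratings from three hypothetical treatments:

```python
from scipy.stats import f_oneway

# Hypothetical ratings collected under three different treatments
treatment_a = [4.1, 3.8, 4.5, 4.0, 3.9]
treatment_b = [3.2, 3.5, 3.0, 3.3, 3.6]
treatment_c = [4.8, 5.0, 4.6, 4.9, 5.1]

# One-way ANOVA: tests whether the treatment means differ simultaneously
f_stat, p_value = f_oneway(treatment_a, treatment_b, treatment_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates that at least one treatment mean differs.
```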
Reliability analyses may (mathematically) be formulated as analyses of variance. An intraclass correlation (ICC, of which Cronbach's alpha is a special case) may be thought of as a one-way or two-way ANOVA.
Considering raters, there exist (at least) six different reliability coefficients within the ICC family alone. Two of them (the agreement coefficients, ICC(A,1) and ICC(A,k)) incorporate rater (or group) differences into the coefficient: the more the raters differ, the smaller the reliability coefficient. But two of them (the consistency coefficients, ICC(C,1) and ICC(C,k) = Cronbach's alpha) are not sensitive to absolute differences between raters (or groups of them), only to inter-rater consistency.
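To make the connection concrete, the sketch below (hypothetical ratings, plain NumPy) decomposes a subjects-by-raters matrix into the usual two-way ANOVA mean squares and derives ICC(C,k) and ICC(A,k) from them using the standard McGraw & Wong (1996) formulas. ICC(C,k) comes out numerically identical to Cronbach's alpha from the item-variance formula, while ICC(A,k) is pulled down because the third rater scores systematically higher:

```python
import numpy as np

# Hypothetical data: 6 subjects (rows) scored by 3 raters (columns).
# Rater 3 scores everyone higher but roughly preserves the subjects' rank order.
X = np.array([
    [2.0, 3.0, 5.0],
    [4.0, 6.0, 7.0],
    [3.0, 4.0, 7.0],
    [5.0, 6.0, 8.0],
    [1.0, 2.0, 5.0],
    [6.0, 8.0, 9.0],
])
n, k = X.shape

# Two-way ANOVA decomposition (subjects x raters, one rating per cell)
grand = X.mean()
ss_rows = k * ((X.mean(axis=1) - grand) ** 2).sum()   # between subjects
ss_cols = n * ((X.mean(axis=0) - grand) ** 2).sum()   # between raters
ss_total = ((X - grand) ** 2).sum()
ss_err = ss_total - ss_rows - ss_cols                 # residual

ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_err = ss_err / ((n - 1) * (k - 1))

# Consistency, average measures: ICC(C,k) -- ignores rater mean differences
icc_c_k = (ms_rows - ms_err) / ms_rows

# Agreement, average measures: ICC(A,k) -- penalised by rater mean differences
icc_a_k = (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

# Cronbach's alpha from the classical item-variance formula
alpha = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

print(f"ICC(C,k) = {icc_c_k:.4f}")   # identical to alpha
print(f"alpha    = {alpha:.4f}")
print(f"ICC(A,k) = {icc_a_k:.4f}")   # smaller, because rater 3 is shifted upward
```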