When you check your items (at least two) for internal consistency (how well they correlate with each other), you can conclude that they measure the same thing if you get an alpha of at least 0.70.
If you have 6 or 7 items and want to know whether they measure the same thing, calculate Cronbach's alpha and at the same time look at what the alpha would be if each item were deleted. You should have an alpha of at least .70.
If your questionnaire has, e.g., 5 questions that measure anxiety, 5 that measure aggression, and 5 that measure depression, take these 3 blocks of five and calculate the alpha for each. Delete items in each block until you get an alpha of 0.70 or preferably higher.
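As a minimal sketch of the procedure above, here is how alpha and "alpha if item deleted" can be computed for one block of items; the data and function names are hypothetical, and any statistics package will report the same numbers.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns of equal length:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(items)
    total_scores = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var_sum / variance(total_scores))

def alpha_if_deleted(items):
    """Alpha recomputed with each item left out, keyed by item index."""
    return {i: cronbach_alpha(items[:i] + items[i + 1:])
            for i in range(len(items))}

# Hypothetical responses: 3 anxiety items, 8 respondents per item
anxiety = [
    [4, 2, 5, 3, 4, 1, 5, 2],   # item 1
    [5, 2, 4, 3, 5, 1, 4, 3],   # item 2
    [4, 1, 5, 2, 4, 2, 5, 2],   # item 3
]
print(round(cronbach_alpha(anxiety), 2))
print(alpha_if_deleted(anxiety))
```

If deleting an item raises alpha noticeably, that item is the candidate for removal from the block.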
Cronbach's alpha > .70/.80 tells you that your items are consistent. If you are below the limit, you can add other items or items with a higher correlation. Alpha depends on the average correlation of your items. Before using alpha, check whether your set of items is unidimensional (factor analysis: only one factor with an eigenvalue > 1). If this is not the case, you have to split up the set of items. Besides semantic considerations (see Béatrice above), you should also keep the statistical argument in mind. If you find, e.g., more than one factor for anxiety, find out whether it would be sensible to distinguish two types of anxiety and argue this in a substantive way.
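The eigenvalue check mentioned above (only one factor with an eigenvalue > 1, the Kaiser criterion) can be sketched like this; the simulated data and function name are illustrative assumptions.

```python
import numpy as np

def kaiser_factor_count(data):
    """Number of eigenvalues of the item correlation matrix above 1
    (Kaiser criterion); a count of 1 suggests the items are unidimensional."""
    corr = np.corrcoef(data, rowvar=False)      # data: respondents x items
    eigenvalues = np.linalg.eigvalsh(corr)
    return int((eigenvalues > 1).sum())

# Simulated example: five items driven by one common "anxiety" factor
rng = np.random.default_rng(0)
factor = rng.normal(size=(200, 1))
items = factor + 0.5 * rng.normal(size=(200, 5))
print(kaiser_factor_count(items))  # one dominant factor expected here
```

If the count exceeds 1, split the item set along the factors before computing alpha separately for each subset.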
For all the measurement items together, it may be that you will undertake demographic analyses that are parametric tests; in that case internal consistency (alpha of at least 0.7) is important to ensure the reliability of your parametric statistics (for all measures). If your focus is on instruments of measure (clusters of items), or, as many researchers now practise, on analysing collapsed clusters, then one or more inconsistent items may need to be removed from a group if alpha falls below 0.7. Undertaking reliability analysis on the clusters is therefore important so you can follow the procedures (outlined by Beatrice). All of this can of course be undertaken at a pilot stage to flag up possible issues early; removal of items at the pilot stage is not advised, as the problem may correct itself after all data is collected. It is, however, an early quality check.
Of your 2 questions, the answer to the first one is "NO" and to the second is "YES". Please mind the suggestion of Charles Berg that if you have more than 4-5 items, your first analytical step should be factor analysis to find out how many dimensions your items form (with more items, one usually finds more than one dimension). Only then apply the Cronbach's alpha procedure, but separately within each of the dimensions. And please bear in mind that the magnitude of Cronbach's alpha depends partly on the number of items you are checking: with a larger number of items, alpha gets bigger.
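The dependence of alpha on the number of items is easy to demonstrate with the standardized (Spearman-Brown) form of alpha; this toy calculation holds the average inter-item correlation fixed at 0.3.

```python
def standardized_alpha(k, mean_r):
    """Standardized Cronbach's alpha from the number of items k and
    the average inter-item correlation mean_r."""
    return k * mean_r / (1 + (k - 1) * mean_r)

# Same average inter-item correlation, growing number of items:
for k in (2, 5, 10, 20):
    print(k, round(standardized_alpha(k, 0.3), 2))
# 2 0.46
# 5 0.68
# 10 0.81
# 20 0.9
```

So a long scale can clear the 0.70 threshold even when its items correlate only modestly, which is why the threshold alone is not proof of homogeneity.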
I don't think that alpha is in your case the best method to estimate reliability. Alpha has very strict assumptions: unidimensionality, uncorrelated errors, and essential tau-equivalence of all items. Essential tau-equivalence means that all covariances between the items should be identical. These assumptions should be checked, and in most cases they are violated. Then alpha over- or underestimates the true reliability, which is why you cannot trust alpha when the assumptions are not met. "Alpha if item deleted" does not help, because this procedure assumes equal error variances for all items, which is problematic too.
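A quick, informal way to check the equal-covariances part of essential tau-equivalence is to look at the spread of the inter-item covariances; this sketch (with simulated items whose factor loadings are deliberately unequal) is illustrative, not a formal test.

```python
import numpy as np

def covariance_spread(data):
    """Smallest and largest inter-item covariance; under essential
    tau-equivalence they should be (roughly) equal."""
    cov = np.cov(data, rowvar=False)            # data: respondents x items
    off_diag = cov[~np.eye(cov.shape[0], dtype=bool)]
    return off_diag.min(), off_diag.max()

rng = np.random.default_rng(1)
factor = rng.normal(size=(300, 1))
# Unequal loadings (0.4 .. 1.2) violate essential tau-equivalence
loadings = np.array([0.4, 0.6, 0.8, 1.0, 1.2])
items = factor * loadings + 0.5 * rng.normal(size=(300, 5))
low, high = covariance_spread(items)
print(round(low, 2), round(high, 2))  # clearly unequal covariances
```

A formal check would fit and compare tau-equivalent and congeneric measurement models in a CFA framework.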
An alternative is McDonald's omega:
Starkweather, J. (2012). Step out of the past: Stop using coefficient alpha; there are better ways to calculate reliability. Benchmarks RSS Matters. Retrieved from: http://web3.unt.edu/benchmarks/issues/2012/06/rss-matters
Dunn, T. J., Baguley, T. & Brunsden, V. (2014). From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. British Journal of Psychology, 105, 399–412.
Using omega, you can calculate reliability of either unidimensional or multidimensional scales using CFA. Generally, you would prefer to estimate the reliability of single scales. However, you may also be interested in investigating the proportion of total variance that can be attributed to all common factors, i.e., omega, sometimes also denoted as omega_total:
Reise, S. P., Bonifay, W. E., & Haviland, M. G. (2013). Scoring and modeling psychological measures in the presence of multidimensionality. Journal of Personality Assessment, 95, 129-140.
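For a unidimensional scale, omega can be sketched from the loadings of a one-factor solution. The version below uses iterated principal-axis factoring on the correlation matrix as a rough stand-in for the CFA-based estimate the references describe; the simulated data and function name are assumptions for illustration.

```python
import numpy as np

def omega_unidimensional(data, n_iter=50):
    """Approximate McDonald's omega for a single-factor scale:
    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses),
    with loadings from iterated principal-axis factoring."""
    corr = np.corrcoef(data, rowvar=False)      # data: respondents x items
    k = corr.shape[0]
    r = corr.copy()
    # start communalities at the largest absolute off-diagonal correlation
    h2 = np.max(np.abs(corr - np.eye(k)), axis=1)
    for _ in range(n_iter):
        np.fill_diagonal(r, h2)                 # reduced correlation matrix
        vals, vecs = np.linalg.eigh(r)
        loadings = vecs[:, -1] * np.sqrt(max(vals[-1], 0.0))
        h2 = loadings ** 2                      # updated communalities
    loadings = np.abs(loadings)
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - h2).sum())

# Simulated congeneric scale: 4 items, true standardized loadings of 0.8
rng = np.random.default_rng(2)
factor = rng.normal(size=(500, 1))
items = 0.8 * factor + 0.6 * rng.normal(size=(500, 4))
print(round(omega_unidimensional(items), 2))
```

In practice you would estimate omega from a fitted CFA model as the papers above recommend; this sketch only shows where the formula's ingredients come from.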