Alpha reflects what's called an essentially tau-equivalent model, meaning that all items' factor loadings on the single intended factor are equal in a CFA model; the alpha-implied variance/covariance matrix restricts all item covariances to be equal.
Omega assumes a congeneric model, which means that factor loadings are allowed to vary in a CFA model. If items are all standardized, then the omega-implied covariance matrix allows all covariances to vary.
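The difference between the two models can be made concrete with a small sketch (the loading values are hypothetical, chosen only for illustration): alpha is computed directly from the item covariance matrix, while omega (total) is built from the loadings and uniquenesses of a one-factor congeneric solution. When loadings are unequal, alpha falls below omega, as expected under tau-equivalence violations.

```python
import numpy as np

def cronbach_alpha(cov):
    """Alpha from an item covariance matrix (assumes tau-equivalence)."""
    k = cov.shape[0]
    return k / (k - 1) * (1 - np.trace(cov) / cov.sum())

def mcdonald_omega(loadings, uniquenesses):
    """Omega (total) from a one-factor congeneric solution."""
    num = loadings.sum() ** 2
    return num / (num + uniquenesses.sum())

# Hypothetical congeneric items: unequal standardized loadings.
lam = np.array([0.8, 0.7, 0.6, 0.5])
theta = 1 - lam**2                          # standardized uniquenesses
cov = np.outer(lam, lam) + np.diag(theta)   # model-implied covariance matrix

print(round(cronbach_alpha(cov), 3))        # 0.742 -- alpha underestimates...
print(round(mcdonald_omega(lam, theta), 3)) # 0.749 -- ...the congeneric reliability
```

With equal loadings the two coincide, which is exactly the tau-equivalence condition under which alpha is an unbiased reliability estimate.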
Cutoff values are arbitrary -- Cronbach never said .70 was an ideal, but it was a rule of thumb in some cases (e.g., finding general relationships between distinct variables) -- in other cases (e.g., finding precise relationships between highly related variables and/or high-stakes selection tests), he said that alpha should be much higher. See Lance, Butts, & Michels, 2006, Organizational Research Methods.
For comparing alpha and omega (i.e., tau-equivalent versus congeneric models), see Graham, 2006, Educational and Psychological Measurement; if you are also interested in alpha vs. omega when items' uniquenesses (errors) are allowed to correlate, see Lucke, 2005, Applied Psychological Measurement.
Musa Adekunle Ayanwale, do you have any evidence to back up your undifferentiated claim? I doubt that .70 is sufficient without knowing the application. And please do not cite Nunnally (1978), where he mentioned .70, because his quote is taken out of context most of the time.
Researchers frequently invoke the authority of Nunnally (Nunnally & Bernstein, 1994) to justify the use of an alpha of .70 or more as indicating an acceptable level of scale reliability. As Lance points out, Nunnally simply didn't say this (Lance et al. 2006). And it is worth quoting what Nunnally did say:
"In the early stages of research… one saves time and energy by working with instruments that have only modest reliability, for which purpose reliabilities of ·70 or higher will suffice… In contrast to the standards in basic research, in many applied settings a reliability of ·80 is not nearly high enough… In many applied problems, a great deal hinges on the exact score made by a person on a test… In such instances it is frightening to think that any measurement error is permitted. Even with a reliability of ·90, the standard error of measurement is almost one-third as large as the standard deviation of the test scores."
Lance, C.E., Butts, M.M. & Michels, L.C., 2006. The Sources of Four Commonly Reported Cutoff Criteria: What Did They Really Say? Organizational Research Methods, 9(2), pp.202–220.
I might also recommend
Dunn, T.J., Baguley, T. & Brunsden, V., 2014. From alpha to omega: a practical solution to the pervasive problem of internal consistency estimation. Br J Psychol, 105(3), pp.399–412.
I do not know of any generally meaningful cut-off values; in my opinion they do not exist for real data, and any value can only be a rule of thumb. Imagine the "magical" cut-off were .80 and you reached .794. Now what? In my opinion you have to evaluate reliability individually according to the needs at hand, not mechanically in an "all or nothing" fashion.
But if you go back to Nunnally (1978), you find a more differentiated view in the original text, which is often reduced to the .70 statement, presumably by researchers seeking to justify their own poor results. Here is what he wrote on pp. 245-246:
[…] what a satisfactory level of reliability is depends on how a measure is being used. In the early stages of research . . . one saves time and energy by working with instruments that have only modest reliability, for which purpose reliabilities of .70 or higher will suffice. . . . In contrast to the standards in basic research, in many applied settings a reliability of .80 is not nearly high enough. In basic research, the concern is with the size of correlations and with the differences in means for different experimental treatments, for which purposes a reliability of .80 for the different measures is adequate. In many applied problems, a great deal hinges on the exact score made by a person on a test. . . . In such instances it is frightening to think that any measurement error is permitted. Even with a reliability of .90, the standard error of measurement is almost one-third as large as the standard deviation of the test scores. In those applied settings where important decisions are made with respect to specific test scores, a reliability of .90 is the minimum that should be tolerated, and a reliability of .95 should be considered the desirable standard.[…]
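Nunnally's "almost one-third" remark can be verified with one line of arithmetic: the standard error of measurement is SEM = SD * sqrt(1 - r_xx), so a reliability of .90 still leaves an SEM of about 32% of the score standard deviation. A minimal check in Python (the function name is mine, for illustration):

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - r_xx)."""
    return sd * math.sqrt(1 - reliability)

# At reliability .90, the SEM is ~31.6% of the SD -- "almost one-third",
# exactly as Nunnally notes; even at .95 it is still ~22.4% of the SD.
print(round(sem(sd=1.0, reliability=0.90), 3))  # 0.316
print(round(sem(sd=1.0, reliability=0.95), 3))  # 0.224
```

This makes his point tangible: for high-stakes individual decisions, even "excellent" reliabilities leave non-trivial uncertainty around a single person's score.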
Maybe this is a good starting point for thinking about one's own results in a more differentiated way.
There are no commonly used cut-off values for McDonald's Omega. McDonald's Omega is a reliability coefficient similar to Cronbach's Alpha, used to estimate the internal consistency of a scale or measure. McDonald's Omega is interpreted the same way as Cronbach's Alpha.