The dependent variable should be operationally defined in measurable terms, and the resulting measure should be both reliable and valid. Could someone clarify these concepts?
Generally, the reliability of a measure refers to the degree to which it produces consistent results upon repeated application. In other words, reliability is the relative absence of random, unsystematic errors of measurement. Validity refers to the degree to which a measure or procedure succeeds in doing what it purports to do.
Prof. Halkos has nicely presented the concept and meaning of the reliability and validity of measures, so there is little left to add. However, I will present the same ideas in other words.
Reliability refers to the consistency of the measure, and this consistency may be gauged across time or across the content of the measure. If we apply the same tool/measure at two points in time (separated by a defined interval, e.g., 1 day, 1 week, 1 month, or any other interval depending on the nature of the construct being measured) and the results are consistent (e.g., a high correlation between the two measurements), this reflects the temporal consistency of the measure. This aspect of reliability is also called test-retest reliability.
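As a minimal sketch (the scores below are purely hypothetical), test-retest reliability can be estimated as the Pearson correlation between two administrations of the same measure:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores for 8 respondents on the same scale, one week apart
time1 = np.array([12, 15, 9, 20, 17, 14, 11, 18])
time2 = np.array([13, 14, 10, 19, 18, 13, 12, 17])

# Test-retest reliability: correlation between the two administrations
r, p = pearsonr(time1, time2)
print(f"Test-retest reliability (Pearson r): {r:.2f}")
```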
Another aspect of reliability is called internal consistency, and it reflects the degree to which each element or component of a given measure is consistent with, or highly correlated with, the others. If the measure happens to have a single component (item/scale), then this form of reliability can be estimated by taking another measure of the same construct that is equivalent to the measure under consideration. The correlation between the original measure and the alternative, equivalent measure reflects the internal consistency of the given measure. This methodology of estimating internal consistency is sometimes referred to as "parallel/alternative form reliability".
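For a multi-item measure, internal consistency is commonly summarized with Cronbach's alpha. A rough sketch, using made-up item responses, might look like this:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) array of scores."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 items on a 1-5 scale
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```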
The overall reliability of a given measure depends on both temporal consistency and internal consistency, because some errors of measurement are inevitable due to temporal variations as well as variations in the content/sub-components of the measure.
Statistically, the reliability of a measure is the proportion of true variance to total variance, or, equivalently, 1 minus the proportion of error variance.
On the other hand, the validity of a measure is the proportion of common variance to total variance. The total variance is the sum of common variance, specific variance, and error variance, and the sum of common and specific variance represents the true variance. Thus, technically, reliability is always the upper limit of the validity of a measure, and validity will equal reliability only when the specific variance of the measure is zero.
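To make the arithmetic concrete, here is a tiny numerical sketch; the variance components are invented purely for illustration:

```python
# Hypothetical variance decomposition of a measure (arbitrary units)
common_var   = 0.50   # variance shared with the construct/criterion
specific_var = 0.20   # systematic variance unique to this measure
error_var    = 0.30   # random error variance
total_var    = common_var + specific_var + error_var

reliability = (common_var + specific_var) / total_var   # true / total  = 0.70
validity    = common_var / total_var                    # common / total = 0.50

# Validity never exceeds reliability; they are equal only when specific_var == 0
print(f"Reliability: {reliability:.2f}, Validity: {validity:.2f}")
```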
As Prof. Halkos has mentioned, the validity of a measure reflects the extent to which it measures the construct for which it has been developed. To demonstrate this, we generally take some external criterion reflecting the construct under consideration (say, another measure of the same construct with demonstrated validity) and correlate it with the given measure. The magnitude of the squared correlation reflects the shared or common variance: the higher the common variance, the higher the validity of the measure.
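A quick sketch of this criterion-related check, again with hypothetical data, could look like this:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores on the new measure and on an established criterion measure
new_measure = np.array([10, 14, 9, 18, 16, 12, 11, 17])
criterion   = np.array([22, 28, 20, 35, 31, 25, 24, 33])

r, _ = pearsonr(new_measure, criterion)
print(f"Validity coefficient r = {r:.2f}, shared (common) variance r^2 = {r**2:.2f}")
```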
Validity is the ability of a statistical instrument to measure what it is designed to measure (for example, to answer the research question). It can be assessed in terms of content (face) validity, criterion validity, or construct validity, and construct validity can be convergent or discriminant in nature.
Reliability, on the other hand, deals with the consistency of a measure: it ensures that transient and situational factors do not distort the testing of a phenomenon, so that if a researcher or other researchers measure the same problem, they will get the same result (just as distance can be measured consistently in inches, centimeters, meters, kilometers, etc.).