A research instrument is considered valid to the extent that it measures what the user is trying to measure. If the research instrument also measures extraneous variables, its validity is weakened proportionately.
Test reliability, usually reported in reliability coefficients, is a measure of how consistently a research instrument measures whatever it is measuring. The coefficient of stability, or a report of test-retest reliability, is an indication of this stability in performance over time.
Parallel Forms Reliability-- For standardized instruments that measure a single construct, the researcher can assess reliability through parallel forms reliability. In this test, the researcher correlates the scores from two versions of the same instrument. For example, the Mental Health Index, a measure of mental health, has two forms of its psychological distress measure. Version 1 is based on the full battery of questions. To reduce the burden on respondents, the test developers created a new, shorter instrument by selecting a subset of Version 1 items. To assess parallel forms reliability, the researcher would calculate a correlation coefficient to measure the strength of the relationship between the scores on the two versions. Coefficients of .80 or higher are generally considered to indicate good parallel forms reliability.
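Both test-retest reliability and parallel forms reliability come down to the same computation: a correlation coefficient between two sets of scores from the same respondents. A minimal sketch in Python, using made-up scores for the two hypothetical versions described above (the numbers are purely illustrative):

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical scores for five respondents on the full instrument
# (Version 1) and the shortened instrument (Version 2).
version_1 = [72, 85, 60, 90, 78]
version_2 = [70, 88, 58, 93, 75]

r = pearson_r(version_1, version_2)
print(round(r, 3))  # a coefficient of .80 or higher suggests good reliability
```

The same function applies to test-retest reliability; the two lists would simply be scores from the same instrument administered at two points in time.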
Split-Half Reliability--A test related to parallel forms reliability, split-half reliability is used for standardized instruments that measure a single concept using several items. The researcher assesses split-half reliability by correlating scores on half of the items with scores on the other half. Because all of the items measure the same concept, the two halves should correlate highly. Consider, for example, a depression scale that contains 40 items. If all items measure aspects of depression, half of the items should correlate highly with the other half. As with parallel forms reliability, a correlation coefficient of .80 or higher would be expected.
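One common way to split the items is odd-numbered versus even-numbered, and the resulting correlation is often adjusted upward with the Spearman-Brown correction, since each half is only half the length of the full instrument. A minimal sketch, assuming each row of `responses` holds one respondent's item scores (the data are hypothetical):

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

def split_half_reliability(responses):
    """Odd-even split-half reliability with the Spearman-Brown correction."""
    half_a = [sum(row[0::2]) for row in responses]  # odd-numbered items
    half_b = [sum(row[1::2]) for row in responses]  # even-numbered items
    r = pearson_r(half_a, half_b)
    return (2 * r) / (1 + r)  # Spearman-Brown: estimate full-length reliability

# Hypothetical item responses (rows = respondents, columns = items).
responses = [
    [3, 4, 3, 4, 3, 4],
    [1, 2, 1, 1, 2, 1],
    [4, 4, 5, 4, 4, 5],
    [2, 2, 2, 3, 2, 2],
]
print(round(split_half_reliability(responses), 3))
```

The odd-even split is only one of many possible splits; the instability of the estimate across different splits is one motivation for the internal consistency approach described next.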
Internal Consistency-- Researchers use internal consistency as a test of reliability for standardized instruments that either measure a single concept or measure multiple concepts but calculate a score for each concept (also referred to as a sub-scale) separately. Internal consistency is concerned with the extent to which all of the items included in the index or scale for a single concept hold together, or how consistently the items are scored. ...
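Internal consistency is most commonly reported as Cronbach's alpha, which compares the sum of the individual item variances with the variance of the total scale scores. A minimal sketch, again assuming each row of `responses` holds one respondent's item scores for a single scale (the data are hypothetical):

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for a single scale or sub-scale."""
    k = len(responses[0])                 # number of items in the scale
    items = list(zip(*responses))         # transpose: one tuple of scores per item
    item_var_sum = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical item responses (rows = respondents, columns = items).
responses = [
    [3, 4, 3, 4],
    [1, 2, 1, 1],
    [4, 4, 5, 4],
    [2, 2, 2, 3],
]
print(round(cronbach_alpha(responses), 3))
```

When the items all track the same underlying concept, the total-score variance dominates the summed item variances and alpha approaches 1; values of .80 or higher are conventionally taken to indicate good internal consistency.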
A research instrument cannot be considered valid unless it measures something explicitly relevant to what the researcher intends to measure.