
Reliability and Validity: Examining the Differences

Reliability and validity: this solution identifies the differences between the two terms, then discusses the advantages and disadvantages of survey research, using an example of survey research findings reported in the news.

1. What is the difference between reliability and validity?

Imagine that you are going to develop a new instrument for research in your field. Using course readings, provide specific examples of how you might go about establishing its reliability and validity. (Make sure to cover at least one approach for determining reliability and one for determining validity.)

2. What are some of the advantages and disadvantages of survey research?

Provide an example of survey research findings that were recently published in the news.
a) First, briefly summarize the study design and findings.
b) Second, based on what we have read about survey research, provide critical feedback on this study's design or explain what additional information you would need to make a critical assessment of this study.

Solution Preview


Definition: Reliability is the consistency of your measurement, or the degree to which an instrument measures the same way each time it is used under the same conditions with the same subjects. In short, it is the repeatability of your measurement. A measure is considered reliable if a person's score on the same test given twice is similar. It is important to remember that reliability is not measured; it is estimated.

There are two ways that reliability is usually estimated: test/retest and internal consistency.

Test/retest is the more conservative method to estimate reliability. Simply put, the idea behind test/retest is that you should get the same score on test 1 as you do on test 2. The three main components to this method are as follows:

1) administer your measurement instrument at two separate times for each subject;
2) compute the correlation between the two separate measurements; and
3) assume there is no change in the underlying condition (or trait you are trying to measure) between test 1 and test 2.
Internal Consistency
Internal consistency estimates reliability by grouping questions in a questionnaire that measure the same concept. For example, you could write two sets of three questions that measure the same concept (say class participation) and after collecting the responses, run a correlation between those two groups of three questions to determine if your instrument is reliably measuring that concept.

One common way of computing correlation values among the questions on your instruments is by using Cronbach's Alpha. In short, Cronbach's alpha splits all the questions on your instrument every possible way and computes correlation values for them all (we use a computer program for this part). In the end, your computer output generates one number for Cronbach's alpha - and just like a correlation coefficient, the closer it is to one, the higher the reliability estimate of your instrument. Cronbach's alpha is a less conservative estimate of reliability than test/retest.
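A minimal sketch of Cronbach's alpha (using the standard formula, with invented Likert-scale responses; in practice you would use a statistics package as the text notes):

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for k items, each a list of subject responses:

        alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(item_scores)
    # Each subject's total score across all items.
    totals = [sum(subject) for subject in zip(*item_scores)]
    sum_item_vars = sum(statistics.variance(item) for item in item_scores)
    return k / (k - 1) * (1 - sum_item_vars / statistics.variance(totals))

# Hypothetical responses: three questions measuring one concept,
# five subjects, 1-5 scale.
items = [
    [4, 5, 3, 5, 4],
    [4, 4, 3, 5, 4],
    [3, 5, 3, 4, 4],
]

# As with a correlation coefficient, values closer to 1 indicate
# higher estimated reliability.
alpha = cronbach_alpha(items)
```

If the items vary together (subjects who score high on one score high on the others), the total-score variance is large relative to the summed item variances and alpha approaches 1.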

The primary difference between test/retest and internal consistency estimates of reliability is that test/retest involves two administrations of the measurement instrument, whereas the internal consistency method involves only one administration of that instrument.


Definition: Validity is the strength of our conclusions, inferences or propositions. More formally, Cook and Campbell (1979) define it as the "best available approximation to the truth or falsity of a given inference, proposition or conclusion." In short, were we right? Let's look at a simple example. Say we are studying the effect of strict attendance policies on class participation. In our case, we saw that class participation did increase after the policy was established. Each type of validity would highlight a different aspect of the relationship between our treatment (strict attendance policy) and our observed outcome (increased class participation).

Types of Validity:
There are four types of validity commonly examined in social research.

1. Conclusion validity asks: is there a relationship between the program and the observed outcome? Or, in our example, is there a connection between the attendance policy and the increased participation we saw?

2. Internal validity asks: given that there is a relationship between the program and the outcome we saw, is it a causal relationship? For example, did the attendance policy cause class participation to increase?

3. Construct validity is, in my opinion, the hardest to understand. It asks: is there a relationship between how I operationalized my concepts in this study and the actual causal relationship I am trying to study? Or, in our example, did our treatment (attendance policy) reflect the construct of attendance, and did our measured outcome - increased class participation - reflect the construct of participation? Overall, we are trying to generalize our conceptualized treatment and outcomes to broader constructs of the same concepts.

4. External validity refers to our ability to generalize the results of our study to other settings. In our example, ...
