Assessment Reliability & Validity
Understanding the various types of assessment, and the significance of individual test results, is of utmost importance in special education. What matters is not simply defining the words validity and reliability, but understanding why assessment measures need to be valid and reliable.
Validity denotes the extent to which an assessment measures what it is supposed to measure.
Criterion-related Validity: A method of assessing the validity of an instrument by comparing its scores with another criterion known already to be a measure of the same trait or skill.
Concurrent Validity: The extent to which a procedure correlates with the current behavior of subjects.
Predictive Validity: The extent to which a procedure allows accurate predictions about a subject's future behavior.
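Both concurrent and predictive validity are usually reported as a validity coefficient: the correlation between scores on the instrument and scores on the criterion measure. As a minimal sketch (the score data and the function name `pearson_r` are hypothetical, not drawn from any particular test), the coefficient is simply a Pearson correlation:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two aligned score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores for eight students on a new reading screener,
# paired with their scores on an established (criterion) reading test.
new_test = [12, 15, 11, 18, 14, 16, 10, 17]
criterion = [48, 55, 45, 62, 50, 58, 42, 60]

validity_coefficient = pearson_r(new_test, criterion)
```

A coefficient near 1.0 suggests the new instrument ranks students much like the established measure does; values near 0 indicate the two measures capture different things.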
Content Validity: Whether the individual items of a test represent what you actually intend to assess.
Construct Validity: The extent to which a test measures a theoretical construct or attribute.
Construct: An abstract concept, such as intelligence, self-concept, motivation, aggression, or creativity, that can only be measured indirectly by some type of instrument. A test's construct validity is often assessed by its convergent and discriminant validity.
Factors affecting validity include unclear test directions, ambiguous or poorly constructed items, an inappropriate difficulty level for the students tested, and errors in administration or scoring.
Reliability, in contrast, denotes the consistency of an assessment's results across repeated administrations, alternate forms, raters, or items. The two concepts are related but distinct: a test can be reliable without being valid, but it cannot be valid unless it is also reliable.
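One widely used index of reliability is Cronbach's alpha, which estimates internal consistency from the item-level score variances. The sketch below uses made-up item scores purely for illustration; the function name `cronbach_alpha` and the data are assumptions, not part of any published instrument:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal-consistency reliability.

    item_scores: one inner list per item, aligned by examinee,
    e.g. item_scores[i][j] is examinee j's score on item i.
    """
    k = len(item_scores)               # number of items
    n = len(item_scores[0])            # number of examinees

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each examinee's total score across all items.
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical 5-point ratings from five students on a three-item scale.
items = [[3, 4, 2, 5, 4],
         [2, 4, 3, 5, 3],
         [3, 5, 2, 4, 4]]
alpha = cronbach_alpha(items)
```

Values of alpha closer to 1.0 indicate that the items behave consistently with one another; a common rule of thumb treats 0.70 or higher as acceptable for many educational uses, though the threshold depends on the stakes of the decision.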