The effect size is a measure that quantifies the magnitude of the relationship between the variables being compared in a statistical analysis. Computing the effect size is important for judging whether a statistical difference carries any practical meaning: rejecting the null hypothesis only establishes that a statistical difference exists, not that the difference is important.
Effect sizes are easily calculated and are standardized to a common scale, which allows comparisons between different effect size derivations1. The general formula to compute the effect size is as follows2:

Effect Size = (Mean of experimental group − Mean of control group) / Standard Deviation
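The calculation can be sketched in a few lines of Python. This sketch uses the pooled standard deviation of the two groups as the denominator, a common choice (Cohen's d); the function name and sample data are illustrative.

```python
import statistics

def effect_size(experimental, control):
    """Effect size (Cohen's d): difference in group means divided by
    the pooled standard deviation of the two samples."""
    mean_diff = statistics.mean(experimental) - statistics.mean(control)
    n1, n2 = len(experimental), len(control)
    s1, s2 = statistics.stdev(experimental), statistics.stdev(control)
    # Pooled standard deviation weights each group's variance by its degrees of freedom
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return mean_diff / pooled_sd

# Hypothetical test scores: the experimental group averages higher,
# so the effect size comes out positive
experimental = [85, 90, 88, 92, 87]
control = [78, 82, 80, 84, 79]
d = effect_size(experimental, control)
```

Because the difference in means is divided by a standard deviation, the result is unit-free, which is what allows effect sizes from different studies to be compared on a common scale.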
The effect size can be positive, zero, or negative. Effect sizes are expressed as decimal numbers, with larger values indicating greater importance of the difference found2. Generally, a value greater than 0.5 implies that the difference found is important, and as a rule of thumb, a value of 0.8 or greater indicates a powerful treatment effect1, 2.
A value of 0.00 signifies no difference between the control treatment and the experimental treatment. A negative value indicates that the control group performed better than the experimental group1.
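The sign and magnitude conventions above can be summarized in a small helper; the function name and the exact wording of the labels are illustrative, and the 0.5 and 0.8 cutoffs are the rules of thumb quoted in the text.

```python
def interpret_effect_size(d):
    """Map an effect size to the qualitative reading described above."""
    if d == 0:
        return "no difference between treatments"
    if d < 0:
        return "control group performed better"
    if d >= 0.8:
        return "powerful treatment effect"
    if d > 0.5:
        return "important difference"
    return "small difference"

# For example, an effect size of 0.9 exceeds the 0.8 threshold
reading = interpret_effect_size(0.9)
```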
The effect size is thus a critical concept in statistics, because it is essential for understanding whether a researcher's results have practical importance in real-life situations. Without the effect size, the information gained from hypothesis testing is purely statistical in relevance.
1. My Environmental Education Evaluation Resource Assistant. (2014). Power Analysis, Statistical Significance, & Effect Size. Retrieved from: http://meera.snre.umich.edu/plan-an-evaluation/related-topics/power-analysis-statistical-significance-effect-size