Type I and Type II errors are directly linked to hypothesis testing and occur when a researcher reaches an incorrect conclusion about the claim being tested. These errors arise because any sample may, by chance, misrepresent the population; the chosen alpha level then determines how likely a Type I error is to occur. A researcher can therefore never be 100% certain when rejecting or failing to reject a null hypothesis.
A Type I error takes place when a researcher rejects a null hypothesis that is actually true; the researcher should have failed to reject it. Type I errors become more likely as the alpha level increases.¹
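This relationship between alpha and the Type I error rate can be illustrated with a short simulation. The sketch below (using only Python's standard library; the sample size, trial count, and known standard deviation are illustrative assumptions) repeatedly samples from a population where the null hypothesis is actually true and counts how often a two-sided z-test at alpha = 0.05 wrongly rejects it:

```python
import random
import statistics

# Illustrative simulation: the null hypothesis (population mean = 0) is TRUE.
# We draw many samples, run a one-sample z-test at alpha = 0.05, and count
# how often we wrongly reject H0.  That false-rejection rate should sit
# close to alpha itself.

random.seed(42)
ALPHA = 0.05
Z_CRIT = 1.96          # two-sided critical value for alpha = 0.05
N = 30                 # assumed sample size per study
TRIALS = 5000          # number of simulated studies

false_rejections = 0
for _ in range(TRIALS):
    sample = [random.gauss(0, 1) for _ in range(N)]   # H0 is true: mean = 0
    z = statistics.mean(sample) / (1 / N ** 0.5)      # known sigma = 1
    if abs(z) > Z_CRIT:
        false_rejections += 1                         # a Type I error

type1_rate = false_rejections / TRIALS
print(f"Observed Type I error rate: {type1_rate:.3f} (alpha = {ALPHA})")
```

Raising `ALPHA` (and loosening `Z_CRIT` accordingly) would raise the observed false-rejection rate in step, which is exactly the point made above.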
Conversely, a Type II error occurs when a researcher fails to reject a null hypothesis that is in fact false. In this case, the researcher concludes that there is no difference between the sample means when in reality there is.
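A Type II error can be simulated the same way. In this hypothetical sketch (the true mean of 0.5, known sigma of 1, and sample sizes are assumptions chosen for illustration), the null hypothesis of a zero mean is false, and beta is estimated as the fraction of simulated studies that fail to reject it; larger samples shrink beta:

```python
import random
import statistics

# Illustrative sketch: H0 says the population mean is 0, but the TRUE mean
# is 0.5, so H0 is false.  Beta is the probability that the z-test at
# alpha = 0.05 still fails to reject H0 (a Type II error).

random.seed(7)
Z_CRIT = 1.96   # two-sided critical value for alpha = 0.05
TRIALS = 2000   # simulated studies per sample size

def beta_estimate(n, true_mean=0.5):
    """Estimate beta for a one-sample z-test with known sigma = 1."""
    misses = 0
    for _ in range(TRIALS):
        sample = [random.gauss(true_mean, 1) for _ in range(n)]
        z = statistics.mean(sample) / (1 / n ** 0.5)
        if abs(z) <= Z_CRIT:     # failed to reject a false H0: Type II error
            misses += 1
    return misses / TRIALS

for n in (10, 30, 100):
    print(f"n = {n:3d}  estimated beta = {beta_estimate(n):.3f}")
```

The estimated beta falls sharply as the sample size grows, which is why follow-up studies after a suspected Type II error are usually run with more data.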
The probability of a Type I error is represented by the symbol alpha (α) and that of a Type II error by the letter beta (β). It is important to realize that in any test there is always some possibility of error: the higher the significance level is set, the more likely a Type I error becomes. The significance level is conventionally set at 5%, which serves as a way of controlling the chance of a Type I error. This level can be thought of as the largest probability of a Type I error the researcher is willing to accept while still rejecting the null hypothesis.
When a Type II error is suspected, a researcher will generally decide to conduct further studies. In the study of statistics, understanding both Type I and Type II errors is critical, as they play an important part in choosing significance levels for hypothesis tests.
1. Villanova University. (2014). The Concepts of Statistical Power and Effect Size. Retrieved from http://www83.homepage.villanova.edu/richard.jacobs/EDU%208603/lessons/stastical%20power.html

© BrainMass Inc. brainmass.com, September 18, 2018