Parametric tests are statistical tests built on a set of assumptions about the probability distribution of the data, and these assumptions allow a researcher to make inferences about the parameters of interest. The assumptions concern the shape of the distribution; for instance, the normal distribution is assumed to follow a bell-shaped curve. Certain parametric tests, such as t-tests and ANOVA, assume a normal distribution^{1}.

These assumptions are crucial: if they are not met, comparisons between groups cannot be made directly, and non-parametric tests must be used instead. A few tests, however, act as exceptions and can still be applied even when not all of the assumptions hold. For instance, an F-test can still be used when the first two assumptions are violated, provided that the data sets being compared contain equal numbers of values^{1}.

There are three assumptions which need to be followed^{1}:

- Normality: The observations need to be taken from normally distributed populations.
- Homogeneity of variance: The variances of the populations being compared need to be equal.
- Scale: An interval measurement scale or ratio scale must be used.
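As an illustrative sketch (not part of the source), the first two assumptions can be checked in Python before running a parametric test. This example uses SciPy's Shapiro-Wilk test for normality and Levene's test for homogeneity of variance on two hypothetical samples; the 0.05 threshold is a conventional choice, not a rule from the source.

```python
import numpy as np
from scipy import stats

# Two hypothetical samples drawn from normal populations with equal variance
rng = np.random.default_rng(0)
a = rng.normal(loc=10.0, scale=2.0, size=40)
b = rng.normal(loc=11.0, scale=2.0, size=40)

# Normality: Shapiro-Wilk test on each sample (H0: the sample is normal)
_, p_norm_a = stats.shapiro(a)
_, p_norm_b = stats.shapiro(b)

# Homogeneity of variance: Levene's test (H0: the variances are equal)
_, p_var = stats.levene(a, b)

# Proceed with a parametric test only if no assumption is rejected
if min(p_norm_a, p_norm_b, p_var) > 0.05:
    t, p = stats.ttest_ind(a, b)
    print(f"t = {t:.3f}, p = {p:.4f}")
else:
    print("Assumptions violated; consider a non-parametric test")
```

The scale assumption cannot be checked numerically; it depends on how the data were measured.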

Since the variances (and therefore the standard deviations) of the populations need to be equal, data sets containing outliers cannot be used: outliers skew a population's data and inflate its variance. Commonly used parametric tests in statistics include t-tests, Z-tests and F-tests.
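The effect of an outlier on variance can be seen with a small made-up sample (the numbers below are illustrative, not from the source); a single extreme value inflates the sample variance by orders of magnitude.

```python
import statistics

# A tightly clustered hypothetical sample
sample = [9.8, 10.1, 10.3, 9.9, 10.0, 10.2]
with_outlier = sample + [25.0]  # add a single extreme value

# The outlier pulls the mean upward and dominates the squared deviations,
# so the variance of the second sample is hundreds of times larger
print(statistics.variance(sample))
print(statistics.variance(with_outlier))
```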

References:

1. Foster, J.J. (2006). *Parametric Tests*. Retrieved from: http://srmo.sagepub.com.proxy.queensu.ca/view/the-sage-dictionary-of-social-research-methods/n142.xml.