
Analysis of variance, including the t tests, is widely used to test the hypothesis that one or more treatments had no effect on the mean of some observed variable. All forms of analysis of variance, including the t tests, are based on the assumption that the observations are drawn from normally distributed populations in which the variances are the same even if the treatments change the mean responses. These assumptions are often satisfied well enough to make analysis of variance an extremely useful statistical procedure. On the other hand, experiments often yield data that are not compatible with these assumptions. In addition, there are often problems in which the observations are measured on an ordinal scale rather than an interval scale and may not be amenable to an analysis of variance. This chapter develops analogs to the t tests and analysis of variance based on ranks of the observations rather than the observations themselves. This approach uses information about the relative sizes of the observations without assuming anything about the specific nature of the populations they were drawn from.
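To make the ranking idea concrete, here is a minimal Python sketch (the numbers are invented for illustration and do not come from the text) showing how observations from two groups are pooled and replaced by their ranks before any test statistic is computed; scipy.stats.rankdata assigns tied observations the mean of the ranks they occupy.

import numpy as np
from scipy.stats import rankdata

# Hypothetical observations from two treatment groups (illustrative values only).
group_a = np.array([12.1, 15.3, 9.8, 20.4])
group_b = np.array([14.2, 11.0, 18.7])

# Rank-based methods pool the observations and keep only their ordering:
# the smallest value receives rank 1, the largest rank 7, and ties would
# share the mean of the ranks they span.
pooled = np.concatenate([group_a, group_b])
ranks = rankdata(pooled)

for value, rank in sorted(zip(pooled, ranks)):
    print(f"{value:5.1f} -> rank {rank:.0f}")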

We will begin with the nonparametric analogs of the unpaired and paired t tests, the Mann-Whitney rank-sum test and the Wilcoxon signed-rank test. Then we will present the analogs of one-way and repeated-measures analysis of variance: the Kruskal-Wallis analysis of variance based on ranks and the Friedman repeated-measures analysis of variance based on ranks.
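As a practical aside (a sketch of our own, not part of the text), each of these rank-based procedures is available in SciPy; the arrays below are invented placeholder data, and the function names belong to scipy.stats rather than to the chapter.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented example data: two independent groups, paired before/after
# measurements, and three groups of equal size.
x = rng.normal(10, 2, size=12)
y = rng.normal(11, 2, size=15)
before = rng.normal(10, 2, size=10)
after = before + rng.normal(0.5, 1, size=10)
g1, g2, g3 = (rng.normal(m, 2, size=8) for m in (10, 11, 12))

# Analog of the unpaired t test.
print(stats.mannwhitneyu(x, y))

# Analog of the paired t test.
print(stats.wilcoxon(before, after))

# Analog of one-way analysis of variance.
print(stats.kruskal(g1, g2, g3))

# Analog of repeated-measures analysis of variance
# (the same eight subjects measured under each of the three conditions).
print(stats.friedmanchisquare(g1, g2, g3))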

As already noted, analysis of variance is called a parametric statistical method because it is based on estimates of the two population parameters, the mean and standard deviation (or variance), that completely define a normal distribution. Given the assumption that the samples are drawn from normally distributed populations, one can compute the distributions of the F or t test statistics that will occur in all possible experiments of a given size when the treatments have no effect. The critical values of F or t that define an unlikely outcome under this hypothesis of no effect can then be obtained from those distributions. When the assumptions of parametric statistical methods are satisfied, they are the most powerful tests available.
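To illustrate where such critical values come from, the sketch below (our own illustration, assuming a 5 percent significance level and arbitrarily chosen degrees of freedom) queries the theoretical t and F distributions directly.

from scipy import stats

alpha = 0.05

# Critical value of t for a two-tailed test with 18 degrees of freedom:
# the magnitude exceeded only 5% of the time when the treatments have no
# effect and the normality assumption holds.
t_crit = stats.t.ppf(1 - alpha / 2, df=18)

# Critical value of F for, say, 3 treatment groups and 27 residual
# degrees of freedom (the F test is one-tailed).
f_crit = stats.f.ppf(1 - alpha, dfn=2, dfd=27)

print(f"t critical (df = 18):   {t_crit:.3f}")
print(f"F critical (2, 27 df):  {f_crit:.3f}")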

If the populations the observations were drawn from are not normally distributed (or are not reasonably compatible with other assumptions of a parametric method, such as equal variances in all the treatment groups), parametric methods become quite unreliable because the mean and standard deviation, the key elements of parametric statistics, no longer completely describe the population. In fact, when the population substantially deviates from normality, interpreting the mean and standard deviation in terms of a normal distribution can produce a very misleading picture.
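The following sketch (simulated data of our own choosing, not an example from the text) shows how the mean and standard deviation, interpreted as though the population were normal, can mislead for a skewed population: the interval mean ± 2 SD extends below zero even though every observation is positive, and well over half of the observations fall below the mean.

import numpy as np

rng = np.random.default_rng(1)

# A strongly right-skewed population (log-normal), standing in for any
# non-normal population; the choice of distribution is illustrative.
sample = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

m, s = sample.mean(), sample.std()
lo, hi = m - 2 * s, m + 2 * s

# Under normality, roughly 95% of observations would fall in (lo, hi),
# split evenly around the mean. Here the interval dips below zero and
# the observations pile up on its lower side.
inside = np.mean((sample > lo) & (sample < hi))
below_mean = np.mean(sample < m)

print(f"mean = {m:.2f}, SD = {s:.2f}, mean ± 2 SD = ({lo:.2f}, {hi:.2f})")
print(f"fraction inside mean ± 2 SD: {inside:.3f}")
print(f"fraction below the mean:     {below_mean:.3f}")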

For example, recall our discussion of the distribution of heights of the entire population of Jupiter. The mean height of all Jovians in Figure 2-3A is 37.6 cm and the standard deviation is 4.5 cm. Rather than being symmetrically distributed about the mean, the population is skewed toward taller heights. Specifically, the heights of Jovians range from 31 to ...
