The classical tests for two samples include:

- comparing two variances (Fisher's *F* test, var.test)
- comparing two sample means with normal errors (Student's *t* test, t.test)
- comparing two means with non-normal errors (Wilcoxon's rank test, wilcox.test)
- comparing two proportions (the binomial test, prop.test)
- correlating two variables (Pearson's or Spearman's rank correlation, cor.test)
- testing for independence of two variables in a contingency table (chi-squared, chisq.test, or Fisher's exact test, fisher.test).
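Each of these tests is a single function call in R. The sketch below shows one illustrative call per test; the vectors, counts, and contingency table are invented for illustration and are not data from the text.

```r
# Two invented numeric samples for the two-sample tests.
x <- c(5.1, 4.9, 6.2, 5.8, 5.5, 6.0)
y <- c(4.2, 4.8, 4.5, 5.0, 4.4, 4.7)

var.test(x, y)                    # Fisher's F test: two variances
t.test(x, y)                      # Student's t test: two means, normal errors
wilcox.test(x, y)                 # Wilcoxon rank test: two means, non-normal errors
prop.test(c(12, 20), c(50, 60))   # binomial test: 12/50 vs. 20/60 successes
cor.test(x, y)                    # Pearson correlation (method = "spearman" for ranks)

# An invented 2 x 2 contingency table for the independence tests.
counts <- matrix(c(10, 20, 30, 40), nrow = 2)
chisq.test(counts)                # chi-squared test of independence
fisher.test(counts)               # Fisher's exact test
```

All of these return an object of class "htest", so the p-value of any of them is available as, for example, `t.test(x, y)$p.value`.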

Before we can carry out a test to compare two sample means (see below), we need to test whether the sample variances are significantly different (see p. 294). The test could not be simpler. It is called Fisher's *F* test after the famous statistician and geneticist R.A. Fisher, who worked at Rothamsted in south-east England. To compare two variances, all you do is divide the larger variance by the smaller variance. Obviously, if the variances are the same, the ratio will be 1. In order to be significantly different, the ratio will need to be significantly bigger than 1 (because the larger variance goes on top, in the numerator). How will we know a significant value of the variance ratio from a non-significant one? The answer, as always, is to look up the *critical value* of the variance ratio. In this case, we want critical values of Fisher's *F*. The R function for this is qf, which stands for ‘quantiles of the *F* distribution’.
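The recipe above can be sketched directly in R. The two samples here are invented for illustration; with real data you would substitute your own vectors. Note that because the larger variance always goes on top, the test is two-tailed, so the 5% critical value comes from qf at the 0.975 quantile.

```r
# Two invented samples (assumptions, not data from the text).
a <- c(3.1, 2.9, 3.8, 4.2, 3.5, 3.0, 4.1, 3.6)
b <- c(3.3, 3.2, 3.1, 3.4, 3.0, 3.3, 3.2, 3.1)

# Variance ratio: larger variance in the numerator, so F >= 1 by construction.
F.ratio <- max(var(a), var(b)) / min(var(a), var(b))

# Critical value of Fisher's F at alpha = 0.05, two-tailed,
# with n - 1 degrees of freedom for each sample.
F.crit <- qf(0.975, length(a) - 1, length(b) - 1)

# The variances differ significantly if the ratio exceeds the critical value.
F.ratio > F.crit

# var.test carries out the same test and reports a p-value directly.
var.test(a, b)
```

In practice var.test is the convenient route, but computing the ratio by hand and comparing it with qf makes clear exactly what the test is doing.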

For our example ...
