
 

 

Analysis of Variance (ANOVA)

 

This module is NOT covered on the Nicholas School Diagnostic Exam.

 

The Analysis of Variance, often called ANOVA, is a statistical tool that allows us to determine whether there is a difference in means across multiple groups (three or more). The test tells you whether a difference in means exists among the groups, but it does NOT tell you which group is different (or by how much). This is very important to keep in mind as you learn about ANOVA.

 

The mechanics of ANOVA are based on comparing the variation WITHIN groups to the variation ACROSS groups. It may be easiest to think about this test conceptually first.

 

Let's say we ran an experiment to test how a variety of corn reacts to differing types of fertilizer (yes, I'm from Indiana). Our corn (Variety A) was grown in an experimental plot where fertilizer application could be measured. Group 1 (100 stalks) received no fertilizer (our control), Group 2 (100 stalks) received one treatment of Fertilizer Grow-A-Lot, and Group 3 (100 stalks) received one treatment of Fertilizer GreenGrowth. We measured the height of each stalk after four weeks. The data are shown in the figure below.

 

[Figure: side-by-side box plots of stalk height (cm) for Corn Variety A, by fertilizer treatment group]

 

 

As you can see in the side-by-side box plot, the variation in corn stalk growth across groups is greater than the variation within each group. The medians of the groups are quite distinct from one another (i.e., they do not fall within the data range of the other groups). Intuitively, this plot suggests that the means of the groups are NOT equal. In the language of ANOVA hypotheses, we could assert, with relative confidence, that at least one mean differs from the others.
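The raw data for this example are not reproduced here, but if you would like to experiment with a data set laid out the same way (a height variable and a group indicator for 300 stalks), the following STATA sketch simulates one and draws a box plot of this kind. The group means and standard deviation used below are made-up values for illustration only, so output from the simulated data will not match the results shown later in this module.

* simulate 300 stalks: 100 per treatment group (illustrative values only)
clear
set seed 12345
set obs 300
generate group = ceil(_n/100)
label define trt 1 "Control" 2 "Grow-A-Lot" 3 "GreenGrowth"
label values group trt
generate y = 150 + 10*(group==2) + 25*(group==3) + rnormal(0, 10)
* side-by-side box plots of stalk height by treatment group
graph box y, over(group) ytitle("Stalk height (cm)")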

 

Now let’s look at a different plot, this time with a different variety of corn, but with the same treatments. In this case, there is less evidence to suggest that at least one of the population mean heights is different from the others. See how the IQRs overlap and the medians are not that different? In this case, the variation within each group is relatively large, while the variation ACROSS the groups is not.

[Figure: side-by-side box plots of stalk height (cm) for Corn Variety B, by fertilizer treatment group]

So now let's look at the math behind the ANOVA. There are three types of 'variation' we can think about with ANOVA: total variation, treatment (between-group) variation, and random (within-group) variation.

 

Total Variation

 

Total variation is the sum of the SQUARED differences between each individual measurement (in this case, the 300 corn stalks, 100 in each group) and the GRAND MEAN (the mean of ALL of the stalks). In the equation below, the subscript i represents each individual stalk in a group (stalks 1-100), while the subscript j represents the group number (groups 1-3). There are n observations in each group, and the total number of groups equals k (in this case, 3).

 

$$ SS_{Total} = \sum_{j=1}^{k} \sum_{i=1}^{n} \left( x_{ij} - \bar{\bar{x}} \right)^{2} $$

(where $x_{ij}$ is the height of stalk $i$ in group $j$ and $\bar{\bar{x}}$ is the grand mean)

BETWEEN Group Variation

 

The next equation is the BETWEEN group variation, which is the sum of the squared differences between EACH GROUP MEAN and the GRAND MEAN. You can think of this as the variation among (or between) the groups. Each squared difference is weighted by the number of observations in its group.

 

$$ SS_{Between} = \sum_{j=1}^{k} n_{j} \left( \bar{x}_{j} - \bar{\bar{x}} \right)^{2} $$

(where $\bar{x}_{j}$ is the mean of group $j$ and $n_{j}$ is the number of observations in group $j$)

 

 

Within Group Variation

 

Random variation is the sum of squared differences between each individual observation and the mean of its group. This is often called WITHIN group variation.

 

$$ SS_{Within} = \sum_{j=1}^{k} \sum_{i=1}^{n} \left( x_{ij} - \bar{x}_{j} \right)^{2} $$
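To tie the three equations together, here is a rough sketch of how the three sums of squares could be computed by hand in STATA and checked against each other (Total = Between + Within). It assumes a data set with the variables y and group as in the example below; the variable names created here (grandmean, groupmean, and so on) are just illustrative.

* grand mean of all stalks and mean of each treatment group
egen grandmean = mean(y)
egen groupmean = mean(y), by(group)
* squared deviations for each stalk
generate dev_total   = (y - grandmean)^2
generate dev_between = (groupmean - grandmean)^2
generate dev_within  = (y - groupmean)^2
* summing each column gives SS Total, SS Between, and SS Within;
* the first should equal the sum of the other two
tabstat dev_total dev_between dev_within, statistics(sum)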

 

The F-Statistic

 

 

Now we can calculate the F-statistic, which equals the ratio of the estimate of the BETWEEN group (treatment) variation to the estimate of the WITHIN group variation, each divided by its degrees of freedom. The F-statistic has two types of degrees of freedom, one for the numerator and one for the denominator. The numerator degrees of freedom is the number of groups minus one (k-1). The denominator degrees of freedom is the number of observations in each group minus one, multiplied by the number of groups, k(n-1).

 

$$ F = \frac{SS_{Between}/(k-1)}{SS_{Within}/\left[k(n-1)\right]} = \frac{MS_{Between}}{MS_{Within}} $$

 

The F-statistic is distributed on the F distribution, which is a family of distributions (similar to the normal, t, etc.). There are an infinite number of F distributions, indexed by the two (numerator, denominator) degrees of freedom. Unlike the normal distribution, the F distribution only takes values in the positive range, and it is right (positively) skewed. The graph below is the F distribution for the degrees of freedom (2, 297) from the example above.

[Figure: the F distribution with (2, 297) degrees of freedom]
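As a quick check on this distribution, STATA's F-distribution functions can be used directly: Ftail() returns the area to the right of a given F value, and invFtail() returns the F value that cuts off a given right-tail area. The value 18.37 below is the F-statistic reported in the STATA example later in this module.

* right-tail area under F(2, 297) beyond the observed F-statistic
display Ftail(2, 297, 18.37)
* critical F value for a 0.05 significance level with (2, 297) df
display invFtail(2, 297, .05)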

 

ANOVA Example in STATA

 


 

So now let's let STATA do the ANOVA calculations for us to compare corn stalk growth across the fertilizer treatments for Variety A (the first box plot above). But before we run the test, we should establish our hypotheses.

 

Ho: μ1 = μ2 = μ3

Ha: At least one population mean of corn stalk height is different than the others.

 

 

We will now run the ANOVA calculation in STATA using the data from the three plots of 100 stalks each. The variable 'y' is the height of the corn (cm) and the variable 'group' designates the group (1, 2, or 3).

 

The STATA code is oneway y group [y is the continuous response variable and group is the treatment variable].
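One small, optional addition: the tabulate option asks oneway to also print the mean, standard deviation, and frequency of y for each group alongside the ANOVA table, which is handy when you later want to see which group means look different.

* one-way ANOVA of stalk height by treatment, with group summary statistics
oneway y group, tabulate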

 

[STATA output: one-way ANOVA table for y by group]

1.  As you can see in the table above, ANOVA calculated the Total Sum of Squares (Total variation) to be 32655.8269, which should equal the Sum of Squares Between plus the Sum of Squares Within.

 

2. The Between and Within SS were calculated using the equations outlined above.

 

3. The degrees of freedom for the Between Groups SS equals 2 (k-1, or 3-1), and the Within Groups SS df = k(n-1) = 3(99) = 297.

 

4. STATA calculated the MS (mean squares) by dividing each sum of squares by its degrees of freedom.

 

5. The F-statistic is the MS (between groups)/MS (within groups) = 1797.68185/97.8466774 = 18.37

 

6. The p-value (0.00) was calculated from the F-distribution and is the area to the right of 18.37 under the F (2,297) distribution.

 

7. From these calculations, we can reject the null hypothesis and have convincing evidence in support of the alternative hypothesis. In other words, there is strong evidence suggesting that the mean corn stalk growth of at least one of the treatments is different than the others.

 

8. BE CAREFUL!!! Based on the ANOVA table produced by STATA, we cannot conclude which mean is different from the others. The box plot will provide evidence for this, but to be cautious, we should run a post-hoc comparison test to see which mean differs (the topic of another module; a quick sketch follows this list).

 

9. We need to test the assumptions of the ANOVA to ensure that our results are valid.
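For the post-hoc comparison mentioned in point 8, one simple option is built into the same command: oneway accepts multiple-comparison options (bonferroni, scheffe, or sidak) that report the pairwise differences in group means with adjusted p-values. This is only a sketch; post-hoc tests are the topic of another module.

* pairwise comparisons of group means, Bonferroni-adjusted p-values
oneway y group, bonferroni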

 

ANOVA Assumptions

 

1. The population values of each group should be normally distributed.

 

2. The variance of the population values of each group should be approximately equal.

 

3. Outliers should be rare. This is most important when you have an unbalanced design.

 

4. The observations should be independent within and across groups.

 

Based on the box plots of the sample data, it appears that the first three assumptions hold. Remember, we use sample data to test assumptions we make about our populations. For our fourth assumption to hold, we need to assume that there is no spatial autocorrelation in our data (that is, no spatial patterning in the height of corn stalks within each of our three plots, or across plots). We could venture into more complex spatial statistics to examine this, but for now we will assume, based on our research design, that the fourth assumption holds.
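A few STATA commands can give rough checks of the first two assumptions from the sample data; this is a sketch rather than a complete diagnostic workflow. robvar performs Levene's test (W0), plus two robust variants, for equality of variances across groups, and swilk runs a Shapiro-Wilk test of normality, here applied to each group separately.

* Levene's test (and robust variants) for equal variances across groups
robvar y, by(group)
* Shapiro-Wilk normality test within each treatment group
swilk y if group == 1
swilk y if group == 2
swilk y if group == 3
* visual check: histogram of stalk height for each group
histogram y, by(group)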

 

 

 

 

 

 

 

 

 
