Statistics for the Behavioral Sciences, 4/e
Michael Thorne, Mississippi State University
Martin Giesen, Mississippi State University
One-Way Analysis of Variance With Post Hoc Comparisons
Chapter Overview
The analysis of variance, or ANOVA, is a widely used test for comparing more than two groups. Two
reasons for not using the two-sample t test are that multiple t tests are tedious to compute and that the more
tests you do on the same data, the more likely you are to commit a Type I error (rejecting a true null hypothesis).
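For example, if three such comparisons were each tested at the .05 level and were independent, the chance
of at least one Type I error would be roughly 1 – (.95)³ ≈ .14, nearly three times the nominal rate.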
The total variability in some data can be partitioned or divided into the within-groups variability and
the between-groups variability. The variability within each group stems from individual differences and
experimental error; the variability between groups comes from individual differences, experimental error,
and the treatment effect. The ANOVA test is the ratio of a measure of variability between groups to a
measure of the variability within groups. If there is no treatment effect, the computed value of F will be
close to 1. However, if there is a treatment effect, the F ratio will be relatively large because of the added
source of variability contributing to the between-group differences. One-way between-subjects ANOVA
applies to situations in which the data from three or more independent groups are analyzed.
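As a rough illustration of this logic, the following sketch uses hypothetical data and SciPy's f_oneway
(the scores, group sizes, and seed are invented for the example): with no treatment effect the computed F
tends to fall near 1, and adding a treatment effect inflates it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Three groups of 20 scores drawn from the SAME population: no treatment effect.
no_effect = [rng.normal(loc=50, scale=10, size=20) for _ in range(3)]

# Three groups whose population means differ: a treatment effect is present.
with_effect = [rng.normal(loc=mu, scale=10, size=20) for mu in (50, 56, 63)]

f_null, p_null = stats.f_oneway(*no_effect)
f_treat, p_treat = stats.f_oneway(*with_effect)
print(f"No treatment effect:   F = {f_null:.2f}, p = {p_null:.3f}")
print(f"With treatment effect: F = {f_treat:.2f}, p = {p_treat:.3f}")
```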
The first step in determining the indices of variability is to compute the sums of squares. The total sum
of squares is the sum of the squared deviations of each score from the total mean. The sum of squares
within each group is the sum of the squared deviations of each score in a group from its group mean, with
the deviations summed across groups. Finally, the sum of squares between groups can be obtained by
subtraction: SSb = SStot – SSw. SSb can also be computed directly: for each group, square the deviation of
the group mean from the total mean, multiply by the number of subjects in that group, and sum the results
over groups. Computing SSb both ways is a good check on the accuracy of your other computations.
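In symbols, letting X stand for an individual score, Mtot for the total mean, and Mk and nk for the mean and
number of scores in group k (this compact notation is ours, not the chapter's): SStot = Σ(X – Mtot)², summed
over all N scores; SSw = Σ(X – Mk)², summed within each group and then across the K groups; and
SSb = Σnk(Mk – Mtot)², summed over the K groups. The three quantities satisfy SStot = SSb + SSw.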
After the sums of squares have been determined, the appropriate degrees of freedom are computed for
each. For SStot, or the total sum of squares, df = N – 1, where N is the total number of cases sampled. For
SSb, or the sum of squares between groups, df = K – 1, where K is the number of groups. df for SSw, or the
sum of squares within groups, is N – K.
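For example, with K = 3 groups of 10 cases each, N = 30, so dftot = 29, dfb = 2, and dfw = 27; as a check,
dfb + dfw = dftot.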
Both SSb and SSw are divided by their respective df to give the average or mean square. The ratio of
MSb to MSw is called the F ratio. A relatively large value of F indicates greater variability between groups
than within groups and may indicate sampling from different populations. The computed value of F is
compared with values known to cut off deviant portions (5% or 1%) of the distribution of F. If the
computed F exceeds critical values from Table C (see Appendix 2), the null hypothesis is rejected, and we
conclude that at least one of the samples probably came from a different population. To help summarize the
results, values are entered into the analysis of variance summary table as they are computed.
[Summary Table for Between-Subjects ANOVA]
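The whole computation can be sketched in a few lines. The data below are hypothetical, and NumPy and
SciPy are assumed to be available; the script builds the summary table by hand and then cross-checks the
F ratio against scipy.stats.f_oneway.

```python
import numpy as np
from scipy import stats

# Hypothetical scores for three independent groups (unequal n would also work).
groups = [
    np.array([4.0, 5.0, 6.0, 5.0, 4.0]),
    np.array([6.0, 7.0, 8.0, 7.0, 6.0]),
    np.array([9.0, 8.0, 10.0, 9.0, 9.0]),
]

all_scores = np.concatenate(groups)
N, K = all_scores.size, len(groups)
grand_mean = all_scores.mean()

# Sums of squares.
ss_total = ((all_scores - grand_mean) ** 2).sum()
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# Degrees of freedom, mean squares, and the F ratio.
df_between, df_within = K - 1, N - K
ms_between, ms_within = ss_between / df_between, ss_within / df_within
F = ms_between / ms_within
F_crit = stats.f.ppf(0.95, df_between, df_within)  # 5% critical value of F

print(f"{'Source':<10}{'SS':>10}{'df':>6}{'MS':>10}{'F':>8}")
print(f"{'Between':<10}{ss_between:>10.2f}{df_between:>6}{ms_between:>10.2f}{F:>8.2f}")
print(f"{'Within':<10}{ss_within:>10.2f}{df_within:>6}{ms_within:>10.2f}")
print(f"{'Total':<10}{ss_total:>10.2f}{N - 1:>6}")
print(f"Reject H0 at the .05 level: {F >= F_crit}")

# Cross-check against SciPy's built-in one-way ANOVA.
F_check, p_value = stats.f_oneway(*groups)
assert np.isclose(F, F_check)
```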
Two tests are presented for further significance testing following a significant F ratio: the Fisher LSD
and the Tukey HSD. Both tests are used to make all pairwise comparisons—comparing all groups by
looking at one pair at a time. The LSD test is sometimes called a protected t test because it follows a
significant F test. In both tests, a difference between a pair of means is declared significant if it exceeds a
computed criterion value: LSD for the Fisher test and HSD for the Tukey test. A table of differences is
used to summarize the results of both tests.
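The two criterion values can be sketched as follows, using the usual textbook formulas (the function names
are ours, the HSD version assumes equal group sizes, and the studentized-range quantile requires SciPy 1.7
or later):

```python
import numpy as np
from scipy import stats

def lsd_criterion(ms_within, df_within, n1, n2, alpha=0.05):
    """Smallest pairwise mean difference the Fisher LSD test declares significant."""
    t_crit = stats.t.ppf(1 - alpha / 2, df_within)
    return t_crit * np.sqrt(ms_within * (1 / n1 + 1 / n2))

def hsd_criterion(ms_within, df_within, k, n_per_group, alpha=0.05):
    """Smallest pairwise mean difference the Tukey HSD test declares significant (equal n)."""
    q_crit = stats.studentized_range.ppf(1 - alpha, k, df_within)  # needs SciPy >= 1.7
    return q_crit * np.sqrt(ms_within / n_per_group)

# Hypothetical values carried over from an ANOVA: MSw = 0.70, dfw = 12, K = 3 groups of n = 5.
print(lsd_criterion(0.70, 12, 5, 5))  # compare each |mean difference| to this value
print(hsd_criterion(0.70, 12, 3, 5))
```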
The one-way repeated measures ANOVA applies to situations in which the same (or matched)
participants are tested on more than two occasions. The first step is to compute the sums of squares. The
total and between-groups sums of squares are computed using the same procedures as in one-way between-subjects ANOVA. However, the within-groups sum of squares is divided into two parts: subjects sum of
squares (SSsubj) and error sum of squares (SSerror). SSsubj is the squared deviation between the mean score for
each subject and the total mean, multiplied by the number of groups and summed over subjects. SSerror is the
variability remaining after removing SSb and SSsubj from SStot and can be obtained by subtraction: SSerror =
SStot – SSb – SSsubj. Computational formulas were given for each of the sums of squares.
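In the same compact notation, with Ms standing for the mean score of subject s: SSsubj = KΣ(Ms – Mtot)²,
summed over the S subjects, and SSerror = SStot – SSb – SSsubj.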
As in one-way between-subjects ANOVA, dftot = N – 1, and dfb = K – 1. Subjects degrees of freedom
(dfsubj) equal the number of subjects minus 1 (S – 1), and error degrees of freedom (dferror) equal
(K – 1)(S – 1).
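For example, with S = 10 participants each measured on K = 4 occasions, N = 40 scores, so dftot = 39,
dfb = 3, dfsubj = 9, and dferror = (3)(9) = 27; as before, the component df sum to dftot.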
Both SSb and SSerror are divided by the appropriate df to give MSb and MSerror, respectively. The F ratio
is obtained by dividing MSb by MSerror. If the computed F is greater than or equal to the critical values from
Table C (Appendix 2), the null hypothesis is rejected. With slight modifications, the LSD and HSD tests
can be used for post hoc testing following a significant repeated measures ANOVA.
To summarize the results, values are entered in a summary table, as shown here.
[Summary Table for One-Way Repeated Measures ANOVA]
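A sketch of the full repeated measures computation on hypothetical data (rows are subjects, columns are
occasions), again assuming NumPy and SciPy:

```python
import numpy as np
from scipy import stats

# Hypothetical scores: rows are the S subjects, columns are the K occasions.
scores = np.array([
    [3.0, 4.0, 6.0, 7.0],
    [2.0, 4.0, 5.0, 8.0],
    [4.0, 5.0, 7.0, 9.0],
    [3.0, 3.0, 6.0, 8.0],
    [5.0, 6.0, 8.0, 9.0],
])
S, K = scores.shape
grand_mean = scores.mean()

# Sums of squares: the within-groups variability is split into subjects and error.
ss_total = ((scores - grand_mean) ** 2).sum()
ss_between = S * ((scores.mean(axis=0) - grand_mean) ** 2).sum()
ss_subjects = K * ((scores.mean(axis=1) - grand_mean) ** 2).sum()
ss_error = ss_total - ss_between - ss_subjects

# Degrees of freedom, mean squares, and the F ratio.
df_between, df_subjects = K - 1, S - 1
df_error = df_between * df_subjects
ms_between, ms_error = ss_between / df_between, ss_error / df_error
F = ms_between / ms_error
F_crit = stats.f.ppf(0.95, df_between, df_error)  # 5% critical value of F

print(f"F({df_between}, {df_error}) = {F:.2f}; critical value at .05 = {F_crit:.2f}")
```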