# [R] anova on aov with Error() [long question]

Jonathan Baron baron at cattell.psych.upenn.edu
Wed May 15 18:16:03 CEST 2002

This is a fairly long problem, but one I run into often.  This
may be a problem in statistics, not R, in which case I shouldn't
expect an answer.  The punch line is at the end (III).  The rest
is motivational background.  Forgive any idiocy.

-------

Suppose that in an experiment each of 100 subjects reads 20 cases
and rates each one for familiarity and emotion (how
emotion-arousing it is).  I want to know whether familiarity
affects emotion.  I'm not interested in individual differences
among subjects, which I assume are due to differences in use of
the rating scales.

I have selected the 20 cases haphazardly.  I assume that each
measure (familiarity and emotion) contains two kinds of effects,
each with its own error: one specific to the _case_ (hence
"common" to all subjects) and one "idiosyncratic" to the
particular subject-case combination.

------

I. One wrong way to analyze these data is to compute the
correlation within each subject and then do (e.g.) a t-test on
the correlations.  This is wrong because an apparent correlation
could arise from a chance correlation of the common errors,
repeated identically for every subject, even if there is no real
correlation in the effects.

------

II. So here is another way that makes sense but is not elegant.
famil is the 100x20 matrix (100 rows, one per subject) of
responses to the familiarity question, and emot is the
corresponding matrix for the emotion question.  (Skip this if you
want to get to my R question.  This is just to make the problem
clearer.)

```r
famil.vm <- colMeans(famil) # variable (case) means for the 20 cases
emot.vm  <- colMeans(emot)
fe <- rep(NA, 100) # vector to store within-subject slopes
for (i in 1:100)
  fe[i] <- coef(lm(emot[i, ] ~ famil[i, ] + famil.vm))[2] # slope on famil[i, ]
```

I can now do t.test(fe) to see if these coefficients are
positive, and I have removed one of the common error components
(famil.vm) so that the correlation cannot arise from a
correlation of the two common error components.

But I have also removed the real effect of famil.vm, and that is
part of the effect of interest (the common part).  So, to get a
p-value for the common effects, I just do

```r
summary(lm(emot.vm ~ famil.vm))
```

I now have two tests of essentially the same hypothesis, one
concerning the common effects and one concerning the
idiosyncratic effects.  If I'm desperate, I can use Fisher's
method for combining p-values to combine two tests of the same
hypothesis.
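Fisher's method takes k independent p-values and refers
-2*sum(log(p)) to a chi-square distribution with 2k df.  A
minimal sketch (fisher_combine is a made-up helper, not a base R
function, and the two p-values are invented):

```r
# Fisher's method: under H0, -2 * sum(log(p_i)) ~ chi-square, df = 2k.
fisher_combine <- function(p) {
  stat <- -2 * sum(log(p))
  pchisq(stat, df = 2 * length(p), lower.tail = FALSE)
}
fisher_combine(c(0.04, 0.11)) # combined p for two tests of one hypothesis
```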

-------

III. I was thinking though that I could use the Error() term in
aov() to do both things at once by comparing two models, one with
both the common and idiosyncratic terms and one without either.
(Not much left in this simple example, but there are other cases
where that is not true.)

So I would set up a new matrix (m1) in which one row was an
observation and there were (factor) codes for subject and case,
like this:

```
subject case emot famil
1       1    5    4
1       2    3    2
...
```
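Building m1 from the two 100x20 matrices could be sketched like
this (simulated ratings; as.vector stacks a matrix column by
column, so the subject index must vary fastest):

```r
# Hypothetical reshaping sketch: one row per subject-case observation.
n.subj <- 100; n.case <- 20
famil <- matrix(sample(1:7, n.subj * n.case, replace = TRUE), nrow = n.subj)
emot  <- matrix(sample(1:7, n.subj * n.case, replace = TRUE), nrow = n.subj)
m1 <- data.frame(
  subject = factor(rep(1:n.subj, times = n.case)), # row index varies fastest
  case    = factor(rep(1:n.case, each  = n.subj)),
  emot    = as.vector(emot),  # column-major: all subjects for case 1, then case 2, ...
  famil   = as.vector(famil)
)
```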

Then I say something like:

```r
a1 <- aov(emot ~ famil + Error(case/subject), data = m1)
```

and when I say summary(a1) I get what looks like two analyses.
One is the analysis by case that is equivalent to
summary(lm(emot.vm ~ famil.vm)).  The other looks to be the
case:subject stratum, which would be roughly equivalent to (and
probably better than) the test involving the within-subject
correlations ... I think (but I'm not sure).

In principle, then, I ought to be able to fit another model,
something like:

```r
a0 <- aov(emot ~ 1 + Error(case/subject), data = m1)
```

and compare the two models to test the combination of the two
effects of familiarity.  But anova(a0,a1) does not work.  Nor
does anova(a1), and summary(a1) does not give an overall test.
anova() works only when there is no Error() term.  Can I just
compare the residual errors directly?  Or what?
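anova() does indeed refuse multistratum (aovlist) fits.  The
residual sums of squares and df can at least be pulled out of
each stratum by hand; a sketch on simulated data (resid.tab is a
made-up helper, and whether pooling these into a single F test is
legitimate is exactly the statistical question):

```r
# Simulated long-format data in the shape of m1 above.
set.seed(2)
m1 <- data.frame(subject = factor(rep(1:100, times = 20)),
                 case    = factor(rep(1:20,  each  = 100)))
m1$famil <- rnorm(2000)
m1$emot  <- 0.3 * m1$famil + rnorm(2000)

a1 <- aov(emot ~ famil + Error(case/subject), data = m1)
a0 <- aov(emot ~ 1     + Error(case/subject), data = m1)

# residual df and SS per stratum, skipping the "(Intercept)" stratum;
# each stratum of an aovlist carries its own residuals and df.residual
resid.tab <- function(fit)
  t(sapply(fit[-1], function(s) c(df = s$df.residual, rss = sum(s$residuals^2))))
resid.tab(a0)
resid.tab(a1)
```

Adding famil should shrink the residual SS and cost one df in
each stratum; what test, if any, to build from those differences
is the open question.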

--
Jonathan Baron, Professor of Psychology, University of Pennsylvania