[R] Enduring LME confusion… or Psychologists and Mixed-Effects

Spencer Graves spencer.graves at pdf.com
Tue Aug 10 12:44:20 CEST 2004


      Have you considered trying a Monte Carlo simulation?  The significance 
probabilities for unbalanced ANOVAs rely on approximations.  Package nlme 
provides "simulate.lme" to facilitate this.  I believe this function is 
also mentioned in Pinheiro and Bates (2000).
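
      A minimal sketch of how such a check could look, borrowing the 
poster's variable names and assuming a data frame "dat" with columns resp, 
fact1, fact2 and subj (the richer alternative model fm2 is purely 
illustrative):

library(nlme)

## Null model: by-subject random intercept only, as in the poster's lme1
fm1 <- lme(resp ~ fact1 * fact2, random = ~ 1 | subj, data = dat)

## An illustrative richer alternative: let the fact1 effect vary by subject
fm2 <- update(fm1, random = ~ fact1 | subj)

## Simulate data under fm1, refit both models to each simulated data set,
## and collect the REML/ML log-likelihoods (this dispatches to simulate.lme)
sim <- simulate(fm1, m2 = fm2, nsim = 200, seed = 1)

## Empirical vs. nominal p-values of the likelihood ratio test; the df
## values are placeholders, so adjust them to the number of extra
## parameters in fm2
plot(sim, df = c(1, 2))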

      hope this helps.  spencer graves
p.s.  You could try the same thing with both library(nlme) and 
library(lme4).  Package lme4 is newer and, at least in most cases, 
better.
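
      For the lme4 route, the analogous call might look like this (again a 
sketch only; the lmer syntax below is the current lme4 interface, and "dat" 
is the assumed data frame from above):

library(lme4)

## Same fixed effects, with a by-subject random intercept
fm <- lmer(resp ~ fact1 * fact2 + (1 | subj), data = dat)
summary(fm)
anova(fm)  ## F statistics; recent lme4 reports no denominator df or p-values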

Gijs Plomp wrote:

> Dear ExpeRts,
>
> Suppose I have a typical psychological experiment that is a 
> within-subjects design with multiple crossed variables and a 
> continuous response variable. Subjects are considered a random effect. 
> So I could model
> > aov1 <- aov(resp~fact1*fact2+Error(subj/(fact1*fact2)))
>
> However, this only holds for orthogonal designs with equal numbers of 
> observations and no missing values. These assumptions are easily 
> violated so I seek refuge in fitting a mixed-effects model with the 
> nlme library.
> > lme1 <- lme(resp~fact1*fact2, random=~1|subj)
>
> When testing the 'significance' of the effects of my factors with 
> anova(lme1), the denominator degrees of freedom that lme uses span 
> all observations and are identical for all factors and their 
> interaction. I read in a previous post on the list ("[R] Help with lme 
> basics") that this is inherent to lme. I studied the instructive book 
> by Pinheiro & Bates and I understand why the degrees of freedom are 
> assigned as they are, but I think they may not be appropriate in this 
> case. Used in this way, lme seems more prone to Type I 
> errors than aov.
>
> To get more conservative degrees of freedom one could model
> > lme2 <- lme(resp~fact1*fact2, random=~1|subj/fact1/fact2)
>
> But this is not a correct model because it assumes the factors to be 
> hierarchically ordered, which they are not.
>
> Another alternative is to model the random effect using a matrix, as 
> seen in "[R] lme and mixed effects" on this list.
> > lme3 <- lme(resp~fact1*fact2, random=list(subj=pdIdent(form=~fact1-1),
> +             subj=~1, fact2=~1))
>
> This provides 'correct' degrees of freedom for fact1, but not for the 
> other effects, and I must confess that I don't understand this use of 
> matrices, as I'm not a statistician.
>
> My questions thus come down to this:
>
> 1. When aov's assumptions are violated, can lme provide the right 
> model for within-subjects designs where multiple fixed effects are NOT 
> hierarchically ordered?
>
> 2. Are the degrees of freedom in anova(lme1) the right ones to report? 
> If so, how do I convince a reviewer that, despite the large number of 
> degrees of freedom, lme does provide a conservative evaluation of the 
> effects? If not, how does one get the right denDF in a way that can be 
> easily understood?
>
> I hope that my confusion is all due to my own ignorance of statistics and 
> that someone on this list will kindly point that out to me. I do 
> realize that this type of question has been asked before, but think 
> that an illuminating answer can help R spread into the psychological 
> community.
>
> ______________________________________________
> R-help at stat.math.ethz.ch mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html



