[R] lmer() vs. lme() gave different variance component estimates

Peter Dalgaard pdalgd at gmail.com
Mon Sep 20 21:28:43 CEST 2010


On 09/20/2010 08:09 PM, array chip wrote:
> Thank you, Peter, for your explanation of the relationship between aov and
> lme. It makes perfect sense.
> 
> 
> When you said "you might have computed the average of all 8
> measurements on each animal and computed a 1-way ANOVA" for treatment effect, 
> would this be the case for balanced design, or it is also true for unbalanced 
> data?

It is only exactly true for a balanced design, although it can be a
practical expedient in nearly-balanced cases, especially if the
between-animal variation clearly dominates. In strongly unbalanced data,
you lose efficiency because animals with fewer observations should be
downweighted (though not proportionally, if there is substantial
between-animal variation). And of course the whole thing relies on the
fact that you have animals nested within treatment (no animal received
more than one treatment).
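
Concretely, a minimal sketch in R (the data frame dat and the variable
names y, treatment, animal and eye are placeholders, not names from your
data):

  library(lme4)

  ## Average the 8 measurements on each animal, then run a plain 1-way
  ## ANOVA on the per-animal means (exact only in the balanced case):
  means <- aggregate(y ~ treatment + animal, data = dat, FUN = mean)
  summary(aov(y ~ treatment, data = means))

  ## For comparison, the full mixed model on the raw data:
  fit <- lmer(y ~ treatment + (1 | animal/eye), data = dat)
  anova(fit)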

> 
> Another question: if the 1-way ANOVA is equivalent to the mixed model for
> testing the treatment effect, what would be the reason for using a mixed
> model at all? Just to estimate the variance components? If the interest is
> not in estimating the variance components, is there then no need to run a
> mixed model to test treatment effects?

Not too far off the mark. In more complex cases, there is the advantage
that the mixed model helps figure out a sensible analysis for you.

 
> And my last question: I am glad to find that glht() from the multcomp
> package works well with an lmer() fit for multiple comparisons. Given
> Professor Bates's view that denominator degrees of freedom are not well
> defined in mixed models, are the results from glht() reasonable/meaningful?
> If not, will the suggested 1-way ANOVA used together with glht() give us
> correct post-hoc multiple comparison results?

I think Doug's view is that DFs are not _reliably estimated_ with any of
the current procedures. In the balanced cases, they are very well
defined (well, give or take the issues with "negative variances"), and I
would expect glht() to give meaningful results. Do check the residuals
for at least approximate normality, though.
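
For concreteness, a sketch of what I have in mind (fit, dat and the factor
name treatment are placeholders; note that glht() on an lmer fit uses
asymptotic z statistics, so no denominator df are needed):

  library(lme4)
  library(multcomp)

  fit <- lmer(y ~ treatment + (1 | animal/eye), data = dat)

  ## Tukey all-pairs comparisons among the treatment levels:
  summary(glht(fit, linfct = mcp(treatment = "Tukey")))

  ## Quick check of approximate normality of the residuals:
  qqnorm(resid(fit)); qqline(resid(fit))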


> 
> Thank you very much!
> 
> John
> 
> ----- Original Message ----
> From: Peter Dalgaard <pdalgd at gmail.com>
> To: array chip <arrayprofile at yahoo.com>
> Cc: r-help at r-project.org; r-sig-mixed-models at r-project.org
> Sent: Sat, September 18, 2010 1:35:45 AM
> Subject: Re: [R] lmer() vs. lme() gave different variance component estimates
> 
> 
> For a nested design, the relation is quite straightforward: the mean
> squares are the variances of the sample means at each level, scaled to
> be comparable with the residuals (so that in the absence of random
> components, all MS are equal to within F-ratio variability). So to get
> the id:eye variance component, subtract the Within MS from the id:eye MS
> and divide by the number of replicates (4 in this case, since you have
> 640 observations on 160 eyes): (14.4 - 0.01875)/4 = 3.59. Similarly, the
> id variance is the MS for id minus that for id:eye, scaled by 8:
> (42.482 - 14.4)/8 = 3.51.
> 
> I.e. it is reproducing the lmer results above, but of course not those
> from your original post.
> 
> (Notice, by the way, that if you are only interested in the treatment
> effect, you might as well have computed the average of all 8
> measurements on each animal and computed a 1-way ANOVA).
> 
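
For the record, the hand calculation from the quoted message as an R
snippet (mean squares taken from the quoted ANOVA table; 4 observations
per eye, 8 per animal):

  MS_within <- 0.01875   # Within (residual) MS
  MS_eye    <- 14.4      # id:eye stratum MS
  MS_id     <- 42.482    # id stratum MS

  (MS_eye - MS_within) / 4   # id:eye variance component, ~3.59
  (MS_id - MS_eye) / 8       # id variance component, ~3.51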


-- 
Peter Dalgaard
Center for Statistics, Copenhagen Business School
Phone: (+45)38153501
Email: pd.mes at cbs.dk  Priv: PDalgd at gmail.com


