[R] conf int mixed effects

Douglas Bates bates at stat.wisc.edu
Thu Nov 13 17:09:31 CET 2003


Joerg Schaber <Joerg.Schaber at uv.es> writes:

> I naively thought that if I can give estimates of the random effects, I
> should also be able to calculate confidence intervals for these estimates
> (that's what statistics is about, isn't it?)

Again, to be technical, the random effects are not parameters.  They
are random variables that are part of the probability model and in
that sense they are not 'estimated' per se.  The quantities that are
often interpreted as 'the estimates of the random effects' are in fact
the best linear unbiased predictors (BLUPs).
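
As a concrete illustration (the Orthodont data and the particular model
below are only an example, not something from this thread), ranef() on a
fitted lme object returns exactly these BLUPs, while fixef() returns the
estimates of the fixed-effects parameters:

  library(nlme)
  fm <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)
  fixef(fm)  # estimates of the fixed-effects parameters
  ranef(fm)  # BLUPs of the random effects, one row per Subject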

There could be ways of assigning intervals that somehow measure the
precision of these predictors but, as I said, it is not easy to define
these in a way that is completely consistent with the theory.  I would
prefer not to do it rather than to do it sloppily and end up in a lot
of controversy.

(I already spend an inordinate amount of my time responding to claims
about the calculation of degrees of freedom in approximations to the
distribution of the fixed-effects estimates.  Note to those who know
the "correct" degrees of freedom that should be used - it's all an
approximation.)

> For example, similar to the fixed-effects case, I can calculate a
> variance-covariance matrix (C) for the random effects (e.g. following
> Hemmerle and Hartley, Technometrics 15(4): 819-831, 1973) and, using
> the t-value for the given confidence level and degrees of freedom (t),
> I can estimate confidence intervals for random effect i (r[i]) as
> something like r[i] +- t*sqrt(C[i][i]).

I haven't looked at the article (my copies of Technometrics don't go
back to 1973), but it is unlikely that such an article would apply to
all the cases that lme can handle, and I suspect that some of the
theoretical niceties have been glossed over.
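
As a sketch of the kind of interval the poster has in mind (using the
later lme4 package and its sleepstudy example, neither of which is part
of this thread, and a normal quantile in place of a t quantile because
the degrees of freedom are exactly the murky part), the conditional
variances of the BLUPs can be turned into rough interval endpoints:

  library(lme4)
  fm <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
  re <- ranef(fm, condVar = TRUE)     # BLUPs with conditional variances
  re_df <- as.data.frame(re)          # columns: grpvar, term, grp, condval, condsd
  ## roughly r[i] +- z*sqrt(C[i][i]) from the quoted proposal
  re_df$lower <- re_df$condval - 1.96 * re_df$condsd
  re_df$upper <- re_df$condval + 1.96 * re_df$condsd
  head(re_df)

Whether such intervals are anything more than a rough measure of the
precision of the predictors is precisely the issue discussed above.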



