[R] absurd computation times of lme

Douglas Bates bates at stat.wisc.edu
Fri Oct 11 23:43:25 CEST 2002


Christof Meigen <christof at nicht-ich.de> writes:

> Renaud Lancelot <lancelot at sentoo.sn> writes:
> > It is because of the random effects (the estimations of the var-cov
> > random-effect matrix is very computer intensive). I think you would need
> > a very large data set to be able to estimate so many random-effect
> > parameters (21 parameters: 6 variances and 15 covariances). 
> 
> Well, in the case of the children I do have quite large datasets:
> around 1000 children with well over 5000 measurements altogether.

But you are also implicitly estimating the random effects for each
child.  These are sometimes regarded as 'nuisance' parameters but they
still need to be estimated, at least implicitly.  In this case there
would be about 6000 of them (1000 children by 6 random effects per
child).

I would recommend that you start with a spline model for the fixed
effects but use either a simple additive shift for the random effects
(random = ~ 1 | Subject) or an additive shift plus a shift in the time
trend (random = ~ age | Subject).  You simply don't have enough data
per child to estimate 6 random-effects parameters for each one.
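The two suggested simplifications can be sketched as below, using a
simulated toy data set (variable names height, age, and Subject, and
the spline basis ns() from the splines package, are assumptions for
illustration, not from the original thread):

```r
library(nlme)     # for lme()
library(splines)  # for the natural-spline basis ns()

set.seed(1)
## toy growth data: 100 children, 5 measurements each
growth <- data.frame(
  Subject = factor(rep(1:100, each = 5)),
  age     = rep(seq(2, 10, length.out = 5), 100)
)
## child-specific intercept shift plus measurement noise
growth$height <- 80 + 6 * growth$age +
  rnorm(100)[growth$Subject] + rnorm(nrow(growth), sd = 1)

## spline terms in the fixed effects are estimated from ALL the data;
## the random effects stay cheap: a single additive shift per child
fm1 <- lme(height ~ ns(age, df = 4), data = growth,
           random = ~ 1 | Subject)

## or an additive shift plus a shift in the time trend: a 2x2 var-cov
## matrix (2 variances + 1 covariance) instead of the 21-parameter
## 6x6 matrix
fm2 <- lme(height ~ ns(age, df = 4), data = growth,
           random = ~ age | Subject)

anova(fm1, fm2)  # compare the two random-effects specifications
```

Both fits should run in seconds, in contrast to the 6-dimensional
random-effects model being discussed.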

There is a big difference, when fitting mixed-effects models, between
adding parameters to the fixed effects, which are estimated from all
the data, and adding parameters to the random effects, which are
estimated only from the data for a single subject.
-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-
r-help mailing list -- Read http://www.ci.tuwien.ac.at/~hornik/R/R-FAQ.html
Send "info", "help", or "[un]subscribe"
(in the "body", not the subject !)  To: r-help-request at stat.math.ethz.ch
_._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._