[R] Performance problem

Stephan Moratti stephan.moratti at uni-konstanz.de
Tue Jul 20 13:03:04 CEST 2004

>From: gerhard.krennrich at basf-ag.de
>Date: Tue, 20 Jul 2004 11:23:25 +0200
>Subject: [R] Performance problem
>Dear all,
>I have a performance problem in terms of computing time. I estimate mixed
>models on a fairly large number of subgroups (10000) using lme(.) within
>the by(.) function, and it takes hours to do the calculation on a fast
>notebook under Windows.
>I suspect by(.) is a poor implementation for doing individual analyses on
>subgroups. Is there an alternative, more efficient way of doing by-group
>processing with lme(.)?
>Here is some code to give you a glimpse:
>gfit <- by(longdata, gen, function(x)
>    lme(fixed = response ~ dye + C(treat, base = 4),
>        data = x, random = ~ 1 | slide))
>Thanks in advance & regards
>Gerhard Krennrich
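A common workaround for this kind of by-group fitting (not from the original thread, just a sketch under the assumptions of Gerhard's code: a data frame `longdata`, a grouping factor `gen`, and the same model formula) is to replace by(.) with split() followed by lapply(), wrapping each fit in try() so that a single subgroup that fails to converge does not abort the remaining 9999 fits:

```r
library(nlme)

## Sketch only: 'longdata', 'gen', 'response', 'dye', 'treat', and
## 'slide' are taken from the original post and assumed to exist.
groups <- split(longdata, gen)

gfit <- lapply(groups, function(x)
    try(lme(fixed = response ~ dye + C(treat, base = 4),
            data = x, random = ~ 1 | slide),
        silent = TRUE))

## Keep only the subgroups whose fit succeeded
ok   <- !sapply(gfit, inherits, what = "try-error")
gfit <- gfit[ok]
```

This will not make each individual lme() call faster (the per-fit overhead dominates with 10000 small groups), but split()/lapply() avoids some of by()'s bookkeeping, and the try() wrapper makes a long run restartable rather than all-or-nothing.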

Sorry that I can't contribute a solution, but I have a similar problem:
fitting lme models on 350 source estimations of MEG brain data. So if
somebody knows of an improvement, please let me know!

Stephan Moratti

Dipl. Psych. Stephan Moratti
Dept. of Psychology
University of Konstanz
P.O. Box D25
D-78457 Konstanz, Germany
Phone: +49 (0)7531 882385
Fax: +49 (0)7531 884601

e-mail: Stephan.Moratti at uni-konstanz.de
