[R] Alternative to slow double for() loop

Bayesianbay at aol.com
Wed Oct 2 13:48:56 CEST 2002


Dear List

Many thanks to those who helped me yesterday regarding possible ways to 
increase memory size in R.

I have found that the inefficient part of my program is a double for() loop, 
and I was wondering whether anybody could suggest an alternative to this 
double loop that would speed things up.

The program looks like this:

for (j in 1:m) {        # loop over the m data sets
  for (i in 1:n) {      # loop over the n people within a data set
    ## rows belonging to person i in data set j
    times <- comp.list[[j]][which(comp.list[[j]]$V1 == i), ]
    T <- ncol(times)    # (not used further below)
    Y <- times$V2
    Y <- data.matrix(Y)
    cova <- subset(times, select = V3:V16)
    cova <- data.matrix(cova)
    pr <- exp(cova %*% beta) / (1 + exp(cova %*% beta))
    dipr <- diag(c(pr[1, 1], pr[2, 1]))
    dipr1 <- dipr - (pr %*% t(pr))
    A <- diag(c(dipr1[1, 1], dipr1[2, 2]))
    D <- t(cova) %*% A
    V <- A
    u1 <- D %*% solve(V) %*% (Y - pr)
    ## u, usq and dvd are running totals (initialised before the loops)
    u <- u + u1
    usq <- usq + (u1 %*% t(u1))
    dvd <- dvd + D %*% solve(V) %*% t(D)
    u.list[[j]] <- u
    usq.list[[j]] <- usq
    dvd.list[[j]] <- dvd
  }
}

where j indexes the different data sets and i indexes the people within each 
data set.
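
One restructuring I have been wondering about (only a sketch, and untested) is 
to split each data set by person in a single pass and move the per-person 
algebra into a small function, so that lapply() replaces the inner loop and the 
repeated which() subsetting goes away. Since the diagonal entries of dipr1 
above are just pr*(1-pr), A can also be built directly. I am assuming beta is 
the same coefficient vector as above; the sums below also restart for each data 
set, so if the totals are really meant to carry over across data sets (as they 
do in my code above) that part would need adjusting:

person.contrib <- function(times, beta) {
  ## everything the original inner loop computed for one person
  Y     <- as.matrix(times$V2)                          # responses
  cova  <- data.matrix(subset(times, select = V3:V16))  # covariates
  pr    <- exp(cova %*% beta) / (1 + exp(cova %*% beta))
  A     <- diag(c(pr * (1 - pr)))   # same diagonal as dipr1 (needs >= 2 rows per person)
  D     <- t(cova) %*% A
  DVinv <- D %*% solve(A)           # V was equal to A in the loop version
  u1    <- DVinv %*% (Y - pr)
  list(u = u1, usq = u1 %*% t(u1), dvd = DVinv %*% t(D))
}

for (j in 1:m) {
  pieces <- lapply(split(comp.list[[j]], comp.list[[j]]$V1),
                   person.contrib, beta = beta)
  u.list[[j]]   <- Reduce("+", lapply(pieces, function(p) p$u))
  usq.list[[j]] <- Reduce("+", lapply(pieces, function(p) p$usq))
  dvd.list[[j]] <- Reduce("+", lapply(pieces, function(p) p$dvd))
}

Would something along these lines be a sensible direction, or is there a 
better way?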

Many thanks for any help; I'm still trying to learn the best ways of writing 
R code!

Laura