[R] Improving data processing efficiency
dfolkins at gmail.com
Sat Jun 7 01:35:13 CEST 2008
> p <- profr(fcn_create_nonissuing_match_by_quarterssinceissue(...))
> That should at least help you see where the slow bits are.
So profiling reveals that '[.data.frame', '[[.data.frame', and '[' are
the biggest time sinks...
I suppose I'll try using matrices and see how that stacks up (since all
my columns are numeric, that should be a problem-free approach).
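For instance, here's a toy sketch of the kind of difference I'd expect
(made-up data, not my actual columns):

```r
# Repeated single-element access is much cheaper on a matrix than on a
# data.frame, because `[.data.frame` does method dispatch and bookkeeping
# on every call, while matrix indexing is a primitive operation.
df <- data.frame(a = runif(1e4), b = runif(1e4))
m  <- as.matrix(df)   # safe here since all columns are numeric

system.time(for (i in 1:1e4) df[i, "a"])  # slower: dispatches [.data.frame
system.time(for (i in 1:1e4) m[i, "a"])   # faster: primitive matrix indexing
```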
But I'm really wondering if there isn't some neat vectorized approach I
could use to avoid at least one of the nested loops...
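For example, if the inner loop is just scanning rows for a matching
quarter value, maybe something like this would work (the 'qtr' column
name here is hypothetical, just standing in for quarters-since-issue):

```r
# Hypothetical sketch: replace a row-by-row inner loop with one
# vectorized comparison that grabs all candidate rows at once.
m <- cbind(qtr = sample(1:40, 1e4, replace = TRUE), val = runif(1e4))

target <- 8
idx <- which(m[, "qtr"] == target)    # one vectorized pass, no inner loop
candidates <- m[idx, , drop = FALSE]  # all rows matching the target quarter
```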