[R] Amazon AWS, RGenoud, Parallel Computing

Lui lui.r.project at googlemail.com
Sat Jun 11 13:03:10 CEST 2011


Dear R group,

since I only have a moderately fast MacBook and would like to get my
results faster than within 72h per run ( :-((( ), I tried Amazon AWS,
which offers pretty fast machines via remote access. I don't want to
post any code because it's several hundred lines long; I am not
looking for the "optimal" answer, but maybe for some suggestions from
those of you who have faced similar problems.

I installed Revolution R on Windows on the Amazon instance. I usually
go for the large one (8 cores, about 20 GHz in total, several GB of RAM).

- I am running a financial analysis over several periods (months) in
which various CVaR calculations are made (with the rgenoud package).
The periods depend on each other, so parallelizing across them does
not work. I was quite surprised how well written all the R libraries
on the Mac seem to be, since they use both cores of my MacBook for a
large portion of the calculations (I guess the matrix multiplications
and the like; a crude way to check this is sketched below). I was a
bit astonished, though, that the performance increase on the Amazon
instance (about 5 times faster than my MacBook) was very moderate,
with only about a 30% decrease in calculation time. The CPUs were
only about 60% in use (obviously, the code was not written
specifically for several cores).
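
In case it helps anyone reproduce the observation: the way I convinced
myself that the BLAS multi-threads is a crude check like the following
(just a large matrix product, nothing rgenoud-specific; watch the CPU
monitor while it runs):

n <- 3000
m <- matrix(rnorm(n * n), nrow = n)
system.time(m %*% m)   ## a multi-threaded BLAS spreads this over several cores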

(1) I tried to use multiple cores with the rgenoud package (via the
snow package), as described on the excellent website
(http://sekhon.berkeley.edu/rgenoud/multiple_cpus.html), but found
rather strange behaviour: CPU use on the Amazon instance would drop
to about 25% with periodic peaks. At least the first
instance/optimization run took significantly longer (several times
longer) than without explicitly passing a cluster to the genoud
function. The number of cores I used was usually smaller than the
number of cores at my disposal (4 of 8). So it does not seem like I
am able to improve my performance here, even though I find that
somewhat strange... Stripped down, my setup looked roughly like the
sketch below.
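
(The toy objective function is just a placeholder; my real one is
several hundred lines.)

library(snow)
library(rgenoud)

## 4 socket workers on the 8-core instance
cl <- makeCluster(rep("localhost", 4), type = "SOCK")

## placeholder objective; the real one computes the CVaR etc.
fn <- function(x) sum((x - 1:5)^2)

res <- genoud(fn, nvars = 5, pop.size = 2000,
              max.generations = 500, cluster = cl)

stopCluster(cl)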

(2) I tried to improve performance by parallelizing the "solution
quality functions" (which are subject to minimization by rgenoud):
one is basically a sorting algorithm (for the CVaR), the other just a
matrix multiplication sort of thing. Neither parallelizing the
composition of the solution function (which is the sum of the CVaR
and the matrix multiplication) nor parallelizing the sort itself
(splitting up the dataset and later uniting the sorted subsets)
showed any improvement: the performance was much worse, and the 8
CPUs sat close to 100% idle... I do think it has to do with all the
data management between the worker processes... In skeleton form, the
sort variant looked like the snippet below.
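
(Chunk count and data are made up to match my problem size; the real
data are the portfolio loss vectors.)

library(snowfall)
sfInit(parallel = TRUE, cpus = 4)

x <- rnorm(500)                          ## only a few hundred entries, as in my case
chunks <- split(x, rep(1:4, length.out = length(x)))
sorted.chunks <- sfLapply(chunks, sort)  ## sort the pieces on the workers
result <- sort(unlist(sorted.chunks))    ## unite the subsets (lazily re-sorted here)

sfStop()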

I am a little puzzled now about what I could do... It seems like
there are only very limited options for me to increase the
performance. Does anybody have experience with parallel computation
with rgenoud, or with parallelized sorting algorithms? I think one
major problem is that each sort happens rather quickly (only a few
hundred entries to sort), but needs to be done very frequently
(population size >2000, iterations >500), so I guess the
"housekeeping" overhead of the parallel computation diminishes all
the benefits. The toy timing below illustrates what I mean.

I tried snowfall (for #2) and the snow package (for #1). I also tried
the "foreach" library, but could not get it working on Windows; as
far as I understand, the doSNOW backend (sketched below) is the usual
route there...

Suggestions with respect to the operating system, Amazon AWS, or
rgenoud are highly appreciated.

Thanks a lot!

Lui


