[R] General-purpose GPU computing in statistics (using R)
rvaradhan at jhmi.edu
Sat Jun 5 16:00:49 CEST 2010
Dear Professor Ripley,
Thank you very much for your lucid and useful reply; I really appreciate your perspectives on this. I have one follow-up question. Could you please explain the statement that algorithms would need to exploit multi-core CPUs differently than they would GPGPUs? What are the main differences likely to be? One difference I can think of is that each GPGPU has limited memory, so in addition to parallelism the data would also need to be broken up into smaller chunks, which is not the case for multi-core CPUs. Are there other major differences?
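To make my question concrete, here is a minimal sketch (plain base R; the sizes and chunk count are made up for illustration) of the kind of chunking I have in mind: processing a matrix block by block, so that each block would fit in the device's limited memory, and then combining the partial results.

```r
# Sketch: block-wise accumulation of a cross-product, so that each chunk
# could fit in limited device memory. Sizes are illustrative only.
set.seed(1)
X <- matrix(rnorm(1e4 * 10), nrow = 1e4, ncol = 10)

n.chunks <- 4
idx <- split(seq_len(nrow(X)), cut(seq_len(nrow(X)), n.chunks, labels = FALSE))

# X'X accumulated chunk by chunk; on a GPU each crossprod(X[i, ]) would be
# shipped to the device separately.
XtX.chunked <- Reduce(`+`, lapply(idx, function(i) crossprod(X[i, , drop = FALSE])))

# The chunked result agrees with the full computation (up to rounding).
all.equal(XtX.chunked, crossprod(X))
```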
Ravi Varadhan, Ph.D.
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University
Ph. (410) 502-2619
email: rvaradhan at jhmi.edu
----- Original Message -----
From: Prof Brian Ripley <ripley at stats.ox.ac.uk>
Date: Friday, June 4, 2010 6:26 am
Subject: Re: [R] General-purpose GPU computing in statistics (using R)
To: Ravi Varadhan <rvaradhan at jhmi.edu>
Cc: R-help at r-project.org
> On Thu, 3 Jun 2010, Ravi Varadhan wrote:
> >Hi All,
> >I have been reading about general-purpose GPU (graphics processing
> >unit) computing for computational statistics. I know very little
> >about this, but I read that GPUs currently cannot handle
> >double-precision floating point
> Not so for a while, and the latest ones are quite fast at it.
> >and also that they are not necessarily IEEE compliant. However, I am
> >not sure what the practical impact of this limitation is likely to be
> >on computational statistics problems (e.g., optimization, multivariate
> >analysis, MCMC, etc.).
> >What are the main obstacles that are likely to prevent widespread use
> >of this technology in computational statistics?
> Developing highly parallel algorithms that can exploit the
> architectures. That's not just in statistics, see e.g.
> (A Tesla C2050 is the latest generation GPU -- shipping within the
> last month.)
> >Can algorithms be coded in R to take advantage of the GPU
> >architecture to speed up computations? I would appreciate hearing
> >from R sages about their views on the usefulness of general-purpose
> >GPU (graphics processing unit) computing for computational
> >statistics. I would also like to hear views on the future of
> >GPGPU -- i.e., is it here to stay, or is it just a gimmick that will
> >quietly disappear into oblivion?
> They need a lot of programming work to use, and the R packages
> currently attempting to use them (cudaBayesreg and gputools) are very
> specialized. It seems likely that they will remain a niche area, in
> much the same way that enhanced BLAS are -- there are problems for
> which the latter can make a big difference, but they are far from
> universally useful.
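As an illustration of the BLAS analogy: the gains appear only for operations that are already BLAS-bound, such as dense matrix multiplication. The sketch below is plain base R; the `gpuMatMult()` call is gputools' matrix-multiply wrapper, shown commented out because it requires that package and a CUDA-capable card.

```r
# Dense matrix multiply is exactly the kind of BLAS-bound operation where
# an enhanced BLAS (or a GPU) can pay off; element-wise work generally is not.
set.seed(1)
n <- 500
A <- matrix(rnorm(n * n), n, n)
B <- matrix(rnorm(n * n), n, n)

# Goes through whatever BLAS R is linked against.
cpu.time <- system.time(C.cpu <- A %*% B)["elapsed"]

# With gputools installed and a CUDA-capable card, the GPU analogue would be:
#   library(gputools)
#   C.gpu <- gpuMatMult(A, B)   # same product, computed on the device
```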
> We've been here several times before: when I was on UK national
> supercomputing committees in the 1980s and 90s there were several
> similar contenders (SIMD arrays, Inmos Transputers ...) and all faded.
> That is not to say that general-purpose parallelism is not going to
> be central, as we each get (several) machines with many CPU cores.
> But that sort of parallelism is likely to be exploited in different
> ways from that of GPUs.
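[A minimal sketch of the contrast, added for illustration: multi-core CPUs suit coarse-grained parallelism -- a handful of independent, heavyweight tasks, one per core -- whereas a GPU wants thousands of tiny identical work items, so the algorithm itself must be reorganised. This uses `mclapply()` from the parallel package, which has shipped with base R since 2.14; in 2010 the equivalent lived in the multicore package.]

```r
library(parallel)  # bundled with R since 2.14; provides mclapply()

# Coarse-grained CPU parallelism: four independent simulation tasks,
# forked across cores. No restructuring of the inner computation is
# needed -- unlike a GPU, which would want the work decomposed into
# thousands of fine-grained identical operations.
fits <- mclapply(1:4, function(i) {
  set.seed(i)
  x <- rnorm(1000)
  mean(x)                 # stand-in for an expensive independent task
}, mc.cores = 2)          # note: mc.cores must be 1 on Windows

length(fits)              # one result per task
```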
> >Thanks very much.
> >Best regards,
> >Ravi Varadhan, Ph.D.
> >Assistant Professor,
> >Center on Aging and Health,
> >Johns Hopkins University School of Medicine
> >rvaradhan at jhmi.edu
> Brian D. Ripley, ripley at stats.ox.ac.uk
> Professor of Applied Statistics,
> University of Oxford, Tel: +44 1865 272861 (self)
> 1 South Parks Road, +44 1865 272866 (PA)
> Oxford OX1 3TG, UK Fax: +44 1865 272595