[R] Mathematica now working with Nvidia GPUs --> any plan for R?

Mose mose.andre at gmail.com
Wed Nov 19 08:27:04 CET 2008


Oops, now with link to Ahmed's page

http://cs.anu.edu.au/people/Ahmed.ElZein/doku.php?id=research:more


On Tue, Nov 18, 2008 at 11:25 PM, Mose <mose.andre at gmail.com> wrote:
> GPU architecture is different enough from CPU architecture that you
> don't need tens of GPUs to see a performance benefit over today's,
> say, 8-core CPUs.  Lots of GPUs now give you a (relatively cheap)
> "supercomputer" -- look up nVidia's Tesla marketing mumbo jumbo.  One
> GPU on its own still does a heckuva job.
>
> From Wikipedia's GPU page, speaking on modern general purpose GPUs:
>
> http://en.wikipedia.org/wiki/Graphics_processing_unit
>
> "Typically the performance advantage is only obtained by running the
> single active program simultaneously on many example problems in
> parallel using the GPU's SIMD architecture[11]. However, substantial
> acceleration can also be obtained by not compiling the programs but
> instead transferring them to the GPU and interpreting them there[12].
> Acceleration can then be obtained by either interpreting multiple
> programs simultaneously, simultaneously running multiple example
> problems, or combinations of both. A modern GPU (e.g. 8800 GTX) can
> readily simultaneously interpret hundreds of thousands of very small
> programs."
>
> The first sentence, you can imagine, applies to a lot of matrix work.
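>
> To make that concrete, here's a rough (untested) sketch of the
> pattern in R terms: one small "program" evaluated over many example
> problems at once, which is exactly the shape SIMD hardware wants.
> The names and the toy log-likelihood are just for illustration:
>
>   # same tiny program (a log-likelihood, say) applied to every row;
>   # on a GPU each of the 100,000 rows could map to its own thread
>   theta  <- matrix(runif(1e5 * 2), ncol = 2)
>   loglik <- rowSums(dnorm(theta, log = TRUE))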
>
> There are BLAS libraries for some GPUs (e.g. CUDA BLAS).  You can
> probably imagine having R use one.  Ahmed El Zein has a poster for
> his presentation "Performance Evaluation of the NVIDIA GeForce 8800
> GTX GPU for Machine Learning" with some more interesting details.
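>
> Since R already hands matrix products to whatever BLAS it was linked
> against (DGEMM does the heavy lifting), a GPU-backed BLAS could in
> principle be dropped in without touching any R code.  A quick sketch
> of the kind of call that would benefit:
>
>   a <- matrix(rnorm(2000 * 2000), 2000)
>   b <- matrix(rnorm(2000 * 2000), 2000)
>   system.time(a %*% b)   # time is dominated by the BLAS DGEMM call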
>
> -Mose
>
>
> On Tue, Nov 18, 2008 at 10:56 PM, Prof Brian Ripley
> <ripley at stats.ox.ac.uk> wrote:
>> On Tue, 18 Nov 2008, Emmanuel Levy wrote:
>>
>>> Dear All,
>>>
>>> I just read an announcement saying that Mathematica is launching a
>>> version working with Nvidia GPUs. It is claimed that it'd make it
>>> ~10-100x faster!
>>> http://www.physorg.com/news146247669.html
>>
>> Well, lots of things are 'claimed' in marketing (and Wolfram is not shy to
>> claim).  I think that you need lots of GPUs, as well as the right problem.
>>
>>> I was wondering if you are aware of any development going into this
>>> direction with R?
>>
>> It seems so, as users have asked about using CUDA in R packages.
>>
>> Parallelization is not at all easy, but there is work on making R better
>> able to use multi-core CPUs, which are expected to become far more common
>> than tens of GPUs.
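>>
>> For example, the snow package already lets R farm the same job out to
>> several worker processes.  A minimal sketch (two local workers over
>> socket transport; details will vary with your setup):
>>
>>   library(snow)
>>   cl <- makeCluster(2, type = "SOCK")   # two local worker processes
>>   # run the same simulation on each input, split across the workers
>>   parSapply(cl, 1:4, function(i) mean(rnorm(1e6)))
>>   stopCluster(cl)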
>>
>>> Thanks for sharing your thoughts,
>>>
>>> Best wishes,
>>>
>>> Emmanuel
>>
>> PS: R-devel is the list on which to discuss the development of R.
>>
>> --
>> Brian D. Ripley,                  ripley at stats.ox.ac.uk
>> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
>> University of Oxford,             Tel:  +44 1865 272861 (self)
>> 1 South Parks Road,                     +44 1865 272866 (PA)
>> Oxford OX1 3TG, UK                Fax:  +44 1865 272595
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>


