[Rd] Manipulating single-precision (float) arrays in .Call functions

Duncan Murdoch murdoch.duncan at gmail.com
Tue Jul 19 16:34:40 CEST 2011


On 11-07-19 7:48 AM, Matthew Dowle wrote:
>
> "Prof Brian Ripley"<ripley at stats.ox.ac.uk>  wrote in message
> news:alpine.LFD.2.02.1107190640280.28269 at gannet.stats.ox.ac.uk...
>> On Mon, 18 Jul 2011, Alireza Mahani wrote:
>>
>>> Simon,
>>>
>>> Thank you for elaborating on the limitations of R in handling float
>>> types. I think I'm pretty much there with you.
>>>
>>> As for the insufficiency of single-precision math (and hence the
>>> limitations of GPUs), my personal take so far has been that double
>>> precision becomes crucial when some sort of error accumulation
>>> occurs; for example, in differential equations where boundary values
>>> are integrated to arrive at interior values. On the other hand, in my
>>> personal line of work (hierarchical Bayesian models for quantitative
>>> marketing), we have so much inherent uncertainty and noise at so many
>>> levels in the problem (and no significant sources of error
>>> accumulation) that the single- vs. double-precision issue is often
>>> inconsequential for us. So I think it really depends on the field as
>>> well as the nature of the problem.
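As a minimal illustration of that accumulation effect, consider summing
1e7 copies of 0.1 in C: the float total drifts well away from the true
1e6, while the double total stays very close.

    #include <stdio.h>

    int main(void)
    {
        /* volatile forces a true float rounding on every store,
           even on x87 hardware with extended-precision registers */
        volatile float fsum = 0.0f;
        double dsum = 0.0;
        int i;
        /* add 0.1 ten million times; the true answer is 1000000 */
        for (i = 0; i < 10000000; i++) {
            fsum += 0.1f;
            dsum += 0.1;
        }
        printf("float  sum: %f\n", fsum);  /* well off the true 1e6 */
        printf("double sum: %f\n", dsum);  /* very close to 1e6 */
        return 0;
    }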
>>
>> The main reason to use only double precision in R was that on modern
>> CPUs double-precision calculations are as fast as single-precision
>> ones, and on 64-bit CPUs a double is a single memory access. So the
>> extra precision comes more or less for free.
>
> But isn't it rather less 'free' when large data sets are considered?
> If a double matrix takes 3GB, it's 1.5GB in single. That might
> alleviate the dreaded out-of-memory error for some users in some
> circumstances. On 64-bit, 50GB reduces to 25GB, and that might make
> the difference between getting something done, or not. If single were
> appropriate, of course. For GPUs too, I/O often dominates, if I
> understand correctly.
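The arithmetic is simply sizeof(double) == 8 versus sizeof(float) == 4;
a throwaway C check of the figures quoted:

    #include <stdio.h>

    int main(void)
    {
        /* number of elements in a 3 GB double matrix */
        size_t n = 3UL * 1073741824UL / sizeof(double);
        printf("double: %.1f GB\n", n * sizeof(double) / 1073741824.0);
        printf("float : %.1f GB\n", n * sizeof(float) / 1073741824.0);
        return 0;
    }

which prints "double: 3.0 GB" and "float : 1.5 GB".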
>
> For space reasons, is there any possibility of R supporting single
> precision (and single-bit logicals, to reduce the memory used by
> logicals by a factor of 32)? I guess there might be complaints from
> users using single inappropriately (or, worse, not realising they
> have an unstable result due to single precision).
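The bit-packing behind the single-bit logical idea would look something
like the hypothetical sketch below; nothing like this exists in R today,
and R's logicals also need an NA state that a single bit cannot carry:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical bit-packed logical vector: 32 values per uint32_t
       word, i.e. 1/32 of the memory of R's current 4-bytes-per-logical
       representation.  (NA handling is ignored here.) */
    static void bitlgl_set(uint32_t *bits, size_t i, int value)
    {
        if (value)
            bits[i / 32] |= ((uint32_t) 1 << (i % 32));
        else
            bits[i / 32] &= ~((uint32_t) 1 << (i % 32));
    }

    static int bitlgl_get(const uint32_t *bits, size_t i)
    {
        return (bits[i / 32] >> (i % 32)) & 1;
    }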

You can do any of this using external pointers now.  That will remind 
you that every single function that operates on such objects needs to 
be rewritten.
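For concreteness, a minimal sketch of the external-pointer route (the
names float_alloc and float_get are made up, and error checking is kept
to a bare minimum):

    #include <stdlib.h>
    #include <R.h>
    #include <Rinternals.h>

    /* Free the float buffer when its R-side handle is garbage
       collected. */
    static void float_finalizer(SEXP ptr)
    {
        float *p = (float *) R_ExternalPtrAddr(ptr);
        if (p) {
            free(p);
            R_ClearExternalPtr(ptr);
        }
    }

    /* .Call entry point: allocate n floats, return an external
       pointer. */
    SEXP float_alloc(SEXP n)
    {
        int len = asInteger(n);
        float *p = (float *) calloc(len, sizeof(float));
        if (p == NULL) error("allocation of %d floats failed", len);
        SEXP ptr = PROTECT(R_MakeExternalPtr(p, R_NilValue, R_NilValue));
        R_RegisterCFinalizerEx(ptr, float_finalizer, TRUE);
        UNPROTECT(1);
        return ptr;
    }

    /* .Call entry point: copy the floats back out as an R double
       vector.  Every other operation (sum, subset, print, ...) needs
       its own function in this style. */
    SEXP float_get(SEXP ptr, SEXP n)
    {
        float *p = (float *) R_ExternalPtrAddr(ptr);
        int len = asInteger(n), i;
        SEXP out = PROTECT(allocVector(REALSXP, len));
        for (i = 0; i < len; i++)
            REAL(out)[i] = (double) p[i];
        UNPROTECT(1);
        return out;
    }

From R this would be driven as p <- .Call("float_alloc", 1000000L);
x <- .Call("float_get", p, 1000000L). Note that x comes back as double:
the space saving holds only while the data stays on the C side.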

It's a huge amount of work, benefiting very few people.  I don't think 
anyone in R Core will do it.

Duncan Murdoch

> Matthew
>
>> You also underestimate the extent to which the stability of commonly
>> used algorithms relies on double precision.  (There are stable
>> single-precision versions, but they are no longer commonly used.  And
>> as Simon said, in some cases stability is ensured by using extra
>> precision where available.)
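A classic instance of that stability point is the one-pass variance
formula sum(x^2)/n - mean^2: in single precision, cancellation can wipe
out every digit of an answer that double still recovers. For example:

    #include <stdio.h>

    int main(void)
    {
        /* three values with mean 10000 and true variance 2/3 */
        float x[3] = {9999.0f, 10000.0f, 10001.0f};
        /* volatile forces true float rounding on every store */
        volatile float fs = 0.0f, fsq = 0.0f;
        double ds = 0.0, dsq = 0.0;
        int i;
        for (i = 0; i < 3; i++) {
            fs += x[i];  fsq += x[i] * x[i];
            ds += x[i];  dsq += (double) x[i] * x[i];
        }
        float  fm = fs / 3.0f;
        double dm = ds / 3.0;
        printf("float : %f\n", fsq / 3.0f - fm * fm); /* 0.000000 */
        printf("double: %f\n", dsq / 3.0 - dm * dm);  /* ~0.666667 */
        return 0;
    }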
>>
>> I disagree slightly with Simon on GPUs: I am told by local experts that
>> the double-precision on the latest GPUs (those from the last year or so)
>> is perfectly usable.  See the performance claims on
>> http://en.wikipedia.org/wiki/Nvidia_Tesla of about 50% of the SP
>> performance in DP.
>>
>>>
>>> Regards,
>>> Alireza
>>>
>>
>> --
>> Brian D. Ripley,                  ripley at stats.ox.ac.uk
>> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
>> University of Oxford,             Tel:  +44 1865 272861 (self)
>> 1 South Parks Road,                     +44 1865 272866 (PA)
>> Oxford OX1 3TG, UK                Fax:  +44 1865 272595
>>