[Rd] Manipulating single-precision (float) arrays in .Call functions
simon.urbanek at r-project.org
Tue Jul 19 15:32:54 CEST 2011
On Jul 19, 2011, at 7:48 AM, Matthew Dowle wrote:
> "Prof Brian Ripley" <ripley at stats.ox.ac.uk> wrote in message
> news:alpine.LFD.2.02.1107190640280.28269 at gannet.stats.ox.ac.uk...
>> On Mon, 18 Jul 2011, Alireza Mahani wrote:
>>> Thank you for elaborating on the limitations of R in handling float
>>> types. I think I'm pretty much there with you.
>>> As for the insufficiency of single-precision math (and hence limitations
>>> of GPUs), my personal take so far has been that double precision becomes
>>> important when some sort of error accumulation occurs. For example, in
>>> differential equations where boundary values are integrated to arrive at
>>> interior values, etc. On the other hand, in my personal line of work
>>> (Hierarchical models for quantitative marketing), we have so much
>>> inherent uncertainty and noise at so many levels in the problem (and no
>>> significant error accumulation sources) that the single- vs
>>> double-precision issue is often inconsequential for us. So I think it
>>> really depends on the field as well as the nature of the problem.
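A quick standalone C illustration of that error-accumulation point (not from any package, just a toy demo): summing ten million copies of 0.1 drifts visibly in single precision, while the double-precision sum stays very close to the exact 1,000,000.

#include <stdio.h>

int main(void)
{
    float  fs = 0.0f;
    double ds = 0.0;
    for (long i = 0; i < 10000000L; i++) {
        fs += 0.1f;   /* rounding error accumulates in single precision */
        ds += 0.1;    /* the double-precision error stays far smaller   */
    }
    printf("float  sum: %f\n", fs);   /* noticeably off from 1000000 */
    printf("double sum: %f\n", ds);   /* very close to 1000000       */
    return 0;
}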
>> The main reason to use only double precision in R was that on modern CPUs
>> double-precision calculations are as fast as single-precision ones, and
>> with 64-bit CPUs they take a single memory access.
>> So the extra precision comes more-or-less for free.
> But isn't it much more on the 'less free' side when large data sets are
> considered? If a double matrix takes 3GB, it's 1.5GB in single. That might
> alleviate the dreaded out-of-memory error for some users in some
> circumstances. On 64-bit, 50GB reduces to 25GB
I'd like to see your 50GB matrix in R ;) - since a vector is limited to 2^31 - 1 elements, you couldn't have a float matrix bigger than 8GB anyway; for doubles the cap is 16GB, so you don't gain anything in scalability. IMHO memory is not a strong case these days when hundreds of GB of RAM are affordable...
Also, you would not complain about pointers going from 4 to 8 bytes on 64-bit, thus doubling your memory use for string vectors...
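For the question in the subject line - getting data into single-precision code from a .Call function - a minimal sketch of one common approach: accept a double vector from R and narrow it into a temporary float buffer on the C side. Here use_floats() is a hypothetical stand-in for whatever single-precision routine (e.g. a GPU kernel wrapper) you want to call.

#include <R.h>
#include <Rinternals.h>

void use_floats(const float *x, int n);   /* hypothetical SP consumer */

SEXP call_single(SEXP x)
{
    int n = LENGTH(x);
    double *dx = REAL(x);
    /* R_alloc()ed memory is reclaimed automatically after .Call returns */
    float *fx = (float *) R_alloc(n, sizeof(float));

    for (int i = 0; i < n; i++)
        fx[i] = (float) dx[i];   /* narrow each element to single precision */

    use_floats(fx, n);
    return R_NilValue;
}

From R this would be called as .Call("call_single", as.double(x)).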
> and that might make the difference between getting something done, or not.
> If single were appropriate, of course.
> For GPUs too, I/O often dominates, if I understand correctly.
> For space reasons, is there any possibility of R supporting single
> precision (and single-bit logicals, to reduce memory for logicals by a
> factor of 32)? I guess there might be complaints from users using single
> inappropriately (or worse, not realising a result is unstable because of
> single precision).
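Just to illustrate that factor of 32 (this is not an existing R feature): logicals are stored as 32-bit ints, so a .Call helper could pack them into one bit per element on the C side. A sketch, with a made-up name pack_logical and NA handling ignored for brevity:

#include <string.h>
#include <R.h>
#include <Rinternals.h>

SEXP pack_logical(SEXP x)
{
    int n = LENGTH(x);
    int *lx = LOGICAL(x);                        /* 4 bytes per value */
    SEXP out = PROTECT(allocVector(RAWSXP, (n + 7) / 8));
    Rbyte *bits = RAW(out);                      /* 1 bit per value   */

    memset(bits, 0, (n + 7) / 8);
    for (int i = 0; i < n; i++)
        if (lx[i] == 1)                          /* NA and FALSE both map to 0 */
            bits[i / 8] |= (Rbyte)(1 << (i % 8));

    UNPROTECT(1);
    return out;
}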
>> You also underestimate the extent to which the stability of commonly used
>> algorithms relies on double precision. (There are stable single-precision
>> versions, but they are no longer commonly used. And as Simon said, in
>> some cases stability is ensured by using extra precision where available.)
>> I disagree slightly with Simon on GPUs: I am told by local experts that
>> double-precision support on the latest GPUs (those from the last year or
>> so) is perfectly usable. See the performance claims on
>> http://en.wikipedia.org/wiki/Nvidia_Tesla of about 50% of the SP
>> performance in DP.
>> Brian D. Ripley, ripley at stats.ox.ac.uk
>> Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
>> University of Oxford, Tel: +44 1865 272861 (self)
>> 1 South Parks Road, +44 1865 272866 (PA)
>> Oxford OX1 3TG, UK Fax: +44 1865 272595