[Rd] Floating point control (was: [R] Variance for Vector of Constants is STILL Not Zero)

Duncan Murdoch murdoch at stats.uwo.ca
Sat Feb 18 16:41:06 CET 2006


Over on R-help, the old problem of floating point precision has come up 
again (see my example below, where calling RSiteSearch can change the 
results of the var() function).

The problem here is that on Windows many DLLs set the precision of the 
FPU to 53 bit mantissas, whereas R normally uses 64 bit mantissas. 
(Some Microsoft docs refer to these as 64 bit and 80 bit precision 
respectively, because they count the sign and exponent bits too).
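
To make the terminology concrete, here is a small Windows-only sketch 
(assuming MSVC's _controlfp() from <float.h> and a 32-bit x87 build) 
that reports which mantissa width the FPU is currently set to:

#include <stdio.h>
#include <float.h>  /* _controlfp, _MCW_PC, _PC_24/_PC_53/_PC_64 (MS CRT) */

int main(void)
{
    /* _controlfp(0, 0) queries the control word without changing it */
    unsigned int pc = _controlfp(0, 0) & _MCW_PC;

    if (pc == _PC_64)      printf("64 bit mantissas (80 bit extended)\n");
    else if (pc == _PC_53) printf("53 bit mantissas (64 bit double)\n");
    else if (pc == _PC_24) printf("24 bit mantissas (32 bit single)\n");
    else                   printf("unrecognized precision setting\n");
    return 0;
}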

When R calls out to the system, if one of these DLLs gets control, it 
may change the precision and not change it back.  This can happen for 
example in calls to display a file dialog or anything else where a DLL 
can set a hook; it's very hard to predict.

I consider this to be very poor programming; DLLs shouldn't 
unnecessarily change the operating environment of their caller. 
However, it's something we've got to live with.

Currently R itself sets the FPU precision to 64 bit mantissas when it 
starts and preserves it across dyn.load calls.  I think we need to be 
more aggressive about protecting the precision.  Specifically, in any 
case where we know we are directly calling an external function we 
should protect the precision across the call.
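
A minimal sketch of what "protect the precision across the call" could 
look like on Windows, again assuming _controlfp(); the wrapper name and 
the callback type are hypothetical:

#include <float.h>  /* _controlfp, _MCW_PC (MS CRT) */

/* Hypothetical wrapper: make the external call, then put the
   precision-control bits back the way the caller had them. */
static void call_with_protected_precision(void (*fn)(void *), void *data)
{
    unsigned int saved = _controlfp(0, 0) & _MCW_PC; /* current PC bits */
    fn(data);                                        /* the risky call  */
    _controlfp(saved, _MCW_PC);                      /* restore PC bits */
}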

A problem is that the C runtime library also makes calls to system 
functions, so some of those calls are probably risky too.  It's not 
reasonable to protect all C library calls, but I think we should fairly 
aggressively test for changes, fix them, and optionally report them.
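
One way the "test for changes, fix them, and optionally report them" 
part might look (Windows-only sketch; the function name, the expected 
value and the debugging flag are my assumptions, not existing R code):

#include <stdio.h>
#include <float.h>  /* _controlfp, _MCW_PC, _PC_64 (MS CRT) */

static int report_fpu_changes = 1;  /* hypothetical debugging option */

/* If some DLL has dropped the precision below 64 bit mantissas,
   put it back and (optionally) say where it happened. */
static void check_fpu_precision(const char *where)
{
    unsigned int pc = _controlfp(0, 0) & _MCW_PC;
    if (pc != _PC_64) {
        if (report_fpu_changes)
            fprintf(stderr,
                    "FPU precision changed during %s; resetting\n", where);
        _controlfp(_PC_64, _MCW_PC);
    }
}

A check like this could then be sprinkled after the call sites we 
consider risky.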

Another problem is that R itself is used as a DLL.  Should it set the 
precision to 64 bit mantissas, or try to maintain whatever precision the 
caller gave it?  I'd lean towards documenting a requirement for 64 bit 
mantissas on entry, and documenting that we may change the precision to 
64 bit mantissas.

Yet another problem is that Microsoft's .NET only supports 53 bit 
precision, according to some documentation I've read.  Do we need to 
interoperate with .NET?

I don't know if this is a Windows-only problem, or if it occurs on any 
other systems, but I think the only way to know is to add the tests on 
all systems.

I'd like to suggest the following:

  - We add R level functions to get and set the floating point control 
bits (a rough sketch of the C side follows this list).

  - We save the value that is set, and work aggressively to make sure it 
doesn't get changed by other mechanisms, with debugging options to 
report all unexpected changes.
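
For the first point, the C side might be no more than a pair of .Call 
entry points; the names below, and the choice to expose the raw control 
word as an integer, are hypothetical rather than existing R API:

#include <R.h>
#include <Rinternals.h>
#include <float.h>  /* _controlfp (MS CRT); Windows-only sketch */

SEXP C_getFPUControl(void)
{
    return ScalarInteger((int) _controlfp(0, 0));
}

SEXP C_setFPUControl(SEXP value, SEXP mask)
{
    unsigned int old = _controlfp(0, 0);         /* remember current word */
    _controlfp((unsigned int) asInteger(value),
               (unsigned int) asInteger(mask));  /* apply requested bits  */
    return ScalarInteger((int) old);             /* return previous word  */
}

At the R level these would presumably be thin .Call() wrappers, with the 
saved value from the second suggestion kept alongside them.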

I don't know how portable any of this will be.  Is the _controlfp() 
function standard C, or is it only available on some of our platforms?
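
For what it's worth, _controlfp() looks like a Microsoft CRT extension 
rather than standard C, so any portable version would probably have to 
be conditional.  A sketch, assuming glibc's <fpu_control.h> on x86 Linux 
and a no-op elsewhere (the value returned is platform-specific):

#ifdef _WIN32
# include <float.h>              /* MS CRT _controlfp() */
typedef unsigned int fpu_pc_t;
static fpu_pc_t get_fpu_precision(void)
{
    return _controlfp(0, 0) & _MCW_PC;
}
static void set_fpu_precision(fpu_pc_t pc)
{
    _controlfp(pc, _MCW_PC);
}
#elif defined(__GLIBC__) && defined(__i386__)
# include <fpu_control.h>        /* glibc _FPU_GETCW / _FPU_SETCW */
typedef fpu_control_t fpu_pc_t;
static fpu_pc_t get_fpu_precision(void)
{
    fpu_control_t cw;
    _FPU_GETCW(cw);
    return cw & _FPU_EXTENDED;
}
static void set_fpu_precision(fpu_pc_t pc)
{
    fpu_control_t cw;
    _FPU_GETCW(cw);
    cw = (cw & ~_FPU_EXTENDED) | (pc & _FPU_EXTENDED);
    _FPU_SETCW(cw);
}
#else
typedef unsigned int fpu_pc_t;   /* fallback: no precision control */
static fpu_pc_t get_fpu_precision(void) { return 0; }
static void set_fpu_precision(fpu_pc_t pc) { (void) pc; }
#endif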

Duncan Murdoch

On 2/17/2006 11:05 PM, Duncan Murdoch wrote:

> My guess is that you've got a video driver or some other software that's 
> messing with your floating point processor, reducing the precision from 
> 64 bit to 53 or less.  I can reproduce the error after running 
> RSiteSearch, which messes with my fpu in that way:
> 
>  > var(rep(0.2, 100))
> [1] 0
>  > RSiteSearch('fpu')
> A search query has been submitted to http://search.r-project.org
> The results page should open in your browser shortly
>  > var(rep(0.2, 100))
> [1] 1.525181e-31
> 
> (I'm not blaming RSiteSearch for doing something bad; it's the system 
> DLLs that it calls that are at fault.)
> 
> I think this is something we should address, but it's not easy.


