[R] exactly representable numbers

Duncan Murdoch murdoch at stats.uwo.ca
Mon Sep 11 14:46:25 CEST 2006


On 9/11/2006 8:20 AM, Prof Brian Ripley wrote:
> On Mon, 11 Sep 2006, Duncan Murdoch wrote:
> 
>> On 9/11/2006 6:15 AM, Robin Hankin wrote:
>> > Hi Sundar
>> > 
>> > 
>> > thanks for this.  But I didn't make it clear that I'm interested in
>> > extreme numbers such as 1e300 and 1e-300.
> 
> That's not relevant, unless you are interested in extremely small numbers.
> 
>> > Then
>> > 
>> >  > f(1e300)
>> > [1] 7.911257e+283
> 
> (That is inaccurate)
> 
>> > is different from
>> > 
>> > 1e300*.Machine$double.eps
> 
> Yes, since 1e300 is not a power of two.  However, Sundar is right in the 
> sense that this is an upper bound for normalized numbers.
> 
> f(1) is .Machine$double.neg.eps, but so it is for all 1 <= x < 2.
> This gives you the answer:  .Machine$double.neg.eps * 2^floor(log2(x))

I'm not sure what is going wrong, but that is too small (on my machine, 
at least):

 > f1 <- function(x) .Machine$double.neg.eps * 2^floor(log2(x))
 > f1(1e300)
[1] 7.435085e+283
 > 1e300 + f1(1e300) == 1e300
[1] TRUE

Notice the difference in the third decimal place from the empirical answer 
given by my bisection search below.
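
(A sketch, not part of the original exchange: the quantity the bracketed
question below asks for, the spacing between successive representable
doubles near x, can be written directly from the binary exponent of x,
assuming IEEE 754 doubles and a normalized, positive x.  The helper name
"ulp" is illustrative, not a base R function.)

ulp <- function(x) {
  e <- floor(log2(abs(x)))  # binary exponent: x lies in [2^e, 2^(e+1))
  2^(e - 52)                # spacing of doubles in that interval (53-bit significand)
}
ulp(1)      # 2.220446e-16, i.e. .Machine$double.eps
ulp(1e300)  # 2^944, about 1.487e+284

At an exact power of two the gap below x is half of this, and for
denormalized numbers (below .Machine$double.xmin, about 2.2e-308) the
spacing is constant at 2^-1074, so the formula above no longer applies.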

> 
> Similarly for going below (but be careful, as you get an extra halving at 
> the powers of two).
> 
> These results hold for all but denormalized numbers (those below 1e-308).
> 
> 
>> > [I'm interested in the gap between successive different exactly
>> > representable numbers right across the IEEE range]
>> 
>> I'm not sure the result you're looking for is well defined, because on 
>> at least the Windows platform, R makes use of 80-bit temporaries as well 
>> as 64-bit double precision reals.  I can't point to one, but I would 
>> guess there are apparently equivalent formulations of your question that 
>> give different answers because one uses the temporaries and the other 
>> doesn't.
> 
> Not at R level.  For something to get stored in a real vector, it will be 
> a standard 64-bit double.

I don't think that's a proof, since R level code can call C functions, 
and there are an awful lot of callable functions in R, but I don't have 
a counter-example.

Duncan Murdoch


> 
>> But in answer to your question: a bisection search is what I'd use. 
>> You start with a delta for which x + delta > x, and you know x + 0 == x, 
>> so use 0 and delta as bracketing points.  You should be able to find the 
>> value in about 50-60 bisections if you start with delta == x, and many 
>> fewer if you make use of the double.eps value.  Here's my version; it 
>> hasn't been tested much.
>> 
>> f <- function(x) {
>>    u <- x
>>    l <- 0
>>    mid <- u/2
>>    while (l < mid && mid < u) {
>>      if (x < x + mid) u <- mid
>>      else l <- mid
>>      mid <- (l + u)/2
>>    }
>>    u
>> }
>> 
>>  > f(1e300)
>> [1] 7.438715e+283
>>  > 1e300 + 7.438715e+283  > 1e300
>> [1] TRUE
>>  > 1e300 + 7.438714e+283  > 1e300
>> [1] FALSE
>> 
>> 
>> Duncan Murdoch
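
(A sketch of the shortcut Duncan mentions above: starting the bisection
from the narrower bracket [0, x * .Machine$double.eps] instead of [0, x],
so far fewer halvings are needed.  The name f2 is illustrative; it assumes
x is positive, finite and normalized.)

f2 <- function(x) {
  u <- x * .Machine$double.eps  # at least one ulp of x, so x + u > x already holds
  l <- 0                        # and x + l == x
  mid <- u/2
  while (l < mid && mid < u) {
    if (x < x + mid) u <- mid
    else l <- mid
    mid <- (l + u)/2
  }
  u
}
f2(1e300)  # same answer as f(1e300) above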
>> 
>> > 
>> > 
>> > 
>> > rksh
>> > 
>> > 
>> > 
>> > 
>> > 
>> > On 11 Sep 2006, at 10:50, Sundar Dorai-Raj wrote:
>> > 
>> >>
>> >> Robin Hankin said the following on 9/11/2006 3:52 AM:
>> >>> Hi
>> >>> Given a real number x, I want to know how accurately R can represent
>> >>> numbers near x.  In particular, I want to know the infimum of exactly
>> >>> representable numbers greater than x, and the supremum of exactly
>> >>> representable numbers less than x.  And then the interesting thing is
>> >>> the difference between these two.
>> >>> I have a little function that does some of this:
>> >>> f <- function(x, FAC=1.1){
>> >>>   delta <- x
>> >>>   while(x+delta > x){
>> >>>     delta <- delta/FAC
>> >>>   }
>> >>>   return(delta*FAC)
>> >>> }
>> >>> But this can't be optimal.
>> >>> Is there a better way?
>> >> I believe this is what .Machine$double.eps is. From ?.Machine
>> >>
>> >> double.eps: the smallest positive floating-point number 'x' such that
>> >>           '1 + x != 1'.  It equals 'base^ulp.digits' if either 'base'
>> >>           is 2 or 'rounding' is 0; otherwise, it is
>> >>           '(base^ulp.digits) / 2'.
>> >>
>> >> See also .Machine$double.neg.eps. Is this what you need?
>> >>
>> >> --sundar
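
(A small illustration, assuming IEEE 754 doubles, of how the two constants
quoted above relate to the spacing of doubles around 1.)

.Machine$double.eps               # 2^-52: spacing of doubles just above 1
.Machine$double.neg.eps           # 2^-53: spacing of doubles just below 1
1 + .Machine$double.eps     != 1  # TRUE
1 - .Machine$double.neg.eps != 1  # TRUE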
>> > 
>> > --
>> > Robin Hankin
>> > Uncertainty Analyst
>> > National Oceanography Centre, Southampton
>> > European Way, Southampton SO14 3ZH, UK
>> >   tel  023-8059-7743
>> > 


