# [Rd] Computing means, variances and sums

Prof Brian Ripley ripley at stats.ox.ac.uk
Sun Feb 19 11:58:56 CET 2006

There has been a recent thread on R-help on this, which resurrected
concepts from bug reports PR#1228 and PR#6743.  Since the discussion has
included a lot of erroneous 'information' based on misunderstandings of
floating-point computations, this is an attempt to set the record straight.

The problem was that var(rep(0.02, 10)) was observed to be (on some
machines) about 1e-35.  This can easily be explained.

0.02 is not exactly representable as a binary fraction.  Repeatedly adding
the represented quantity accumulates rounding error relative to the exact
computation, so (on Sparc Solaris)

```
> var(rep(0.02, 10))
 1.337451e-35
> options(digits=18)
> sum(rep(0.02, 10))
 0.19999999999999998
> sum(rep(0.02, 10)) - 0.2
 -2.775557561562891e-17
> sum(rep(0.02, 10))/10 - 0.02
 -3.469446951953614e-18
> z <- sum(rep(0.02, 10))/10 - 0.02
> 10*z^2/9
 1.3374513502689138e-35
```

and so the non-zero variance arises from (x[i] - mean) being non-zero:
all ten values equal the stored 0.02, but the computed mean differs from
it by z, so the two-pass formula gives 10*z^2/9, as above.  (I did check
that this is what was happening at C level.)
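
For the record, the stored value itself can be inspected directly; this is
machine-independent, as it merely prints the double nearest to 0.02:

```
sprintf("%.20f", 0.02)   # "0.02000000000000000042"
```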

There has been talk of other ways to arrange these computations, for
example Kahan summation and Welford's algorithm (see Chan & Lewis, 1979,
CACM 22, 526-531, and the references therein).  However, R already uses
the two-pass algorithm, which has the best error bound of the algorithms
compared in that reference; a sketch is given below.
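
For concreteness, here is a minimal R-level sketch of the two-pass
computation (the function name is mine; the actual work is done in C
inside var()):

```
## Two-pass variance: compute the mean first, then the centred
## sum of squares.  Illustrative only.
twoPassVar <- function(x) {
    n <- length(x)
    m <- sum(x)/n               # first pass: the mean
    sum((x - m)^2)/(n - 1)      # second pass: squared deviations
}
twoPassVar(rep(0.02, 10))       # ~1.3e-35 on Sparc, 0 on most ix86
```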

Why are most people seeing 0?  Because the way computation is done in
modern FPUs is not the computation analysed in the early numerical
analysis papers, including Chan & Lewis.  First, all FPUs that I am aware
of allow the use of guard digits, effectively carrying intermediate
computations to one more bit than required.  Second, many use
extended-precision registers for computations they can keep in FP
registers, thereby retaining another 10 or more bits.  (This includes R on
most OSes on ix86 CPUs, the exception being Windows where the FPU has been
reset by some other software.  Typically it is not the case for RISC CPUs,
e.g. Sparc.)

The use of extended-precision registers invalidates the textbook
comparisons of accuracy in at least two ways.  First, the error analysis
is different if all intermediate results are stored in extended precision.
Second, the simpler the algorithm, the more of its intermediate results
remain in extended precision.  This means that (for example) Kahan
summation is usually less accurate than direct summation on real-world
FPUs; a sketch is given below for reference.
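
A minimal sketch of Kahan (compensated) summation, with names of my own
choosing.  Note that in compiled code an extended-precision FPU (or an
optimizing compiler) can defeat the compensation term, which is precisely
the point made above:

```
kahanSum <- function(x) {
    s <- 0; comp <- 0
    for (xi in x) {
        y <- xi - comp          # apply the running compensation
        t <- s + y
        comp <- (t - s) - y     # recover the low-order bits lost in s + y
        s <- t
    }
    s
}
kahanSum(rep(0.02, 10))         # compare with sum(rep(0.02, 10))
```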

There are at least two steps which can be taken to improve accuracy.
One is to ensure that intermediate results are stored in extended
precision.  Every R platform of which I am aware has a 'long double' type
which can be used.  On ix86 this is the extended-precision type used
internally by the FPU, so the cost is zero or close to zero, whereas on a
Sparc 'long double' gives even more precision but at some cost in speed.
The second step is to use iterative refinement, so the final part of
mean.default currently is

```
## sum(int) can overflow, so convert here.
if(is.integer(x)) x <- as.numeric(x)
## use one round of iterative refinement
res <- sum(x)/n
if(is.finite(res)) res + sum(x-res)/n else res
```

This is a well-known technique in numerical linear algebra, and improves
the accuracy whilst doubling the cost.  (This is about to be replaced by
an internal function to allow the intermediate result to be stored in a
long double.)
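
To see the effect on the Sparc numbers above (on an FPU that keeps the
running sum in an extended-precision register the plain mean may already
be exact):

```
x <- rep(0.02, 10)
res <- sum(x)/length(x)        # off by about one ulp: -3.47e-18
res + sum(x - res)/length(x)   # the correction recovers the stored 0.02
```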

Note the is.finite(res) there.  R works with the extended IEC 60559 (aka
IEEE 754) quantities Inf, -Inf and NaN (NA being a special NaN).  The
rearranged computations do not work correctly for those quantities, so
although they can be more 'efficient' (in terms of flops), they have to be
supplemented by some other calculation to ensure that the specials are
handled correctly (an example is given below).  Remember once again that
we get both speed and accuracy advantages by keeping computations in FPU
registers, so complicating the code has considerable cost.
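
A small example of why the guard is needed: without it, the refined mean
of a vector containing Inf would come out as NaN rather than Inf:

```
x <- c(1, Inf)
res <- sum(x)/2        # Inf, the correct answer
sum(x - res)           # (1 - Inf) + (Inf - Inf) is NaN
res + sum(x - res)/2   # NaN: the refinement step breaks on specials
```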

R-devel will shortly use long doubles for critical intermediate results
and iterative refinement for calculations of means.  This may be slower,
but it would take an extreme case for the speed difference to be
detectable.  Higher accuracy has a cost too: there are several packages
(dprep and mclust are two) whose results depend critically on fine details
of the computations and will, for example, infinite-loop if an optimized
BLAS is used on AMD64.

The choice of algorithms in R is an eclectic mixture of accuracy and
speed.  When (some of) R-core decided to use a BLAS for e.g. matrix
products, this produced a large speed increase for those with an optimized
BLAS (and a small speed decrease for others), but it did result in lower
accuracy and in problems with NAs etc (the alternative algorithms have
since been added back to cover such cases).  But it seems that nowadays
few R users understand the notion of rounding error, and it is easier to
make the computations maximally accurate than to keep explaining basic
numerical analysis on R-help.  (The problem is thereby just pushed farther
down the line, because for example linear regression is not maximally
accurate.)

--
Brian D. Ripley,                  ripley at stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595
