[R] Logical inconsistency

Wacek Kusnierczyk Waclaw.Marcin.Kusnierczyk at idi.ntnu.no
Sat Dec 6 10:49:24 CET 2008

Berwin A Turlach wrote:
> G'day Wacek,
> On Fri, 05 Dec 2008 14:18:51 +0100
> Wacek Kusnierczyk <Waclaw.Marcin.Kusnierczyk at idi.ntnu.no> wrote:
>> well, this answer the question only partially.  this explains why a
>> system with finite precision arithmetic, such as r, will fail to be
>> logically correct in certain cases.  it does not explain why r, a
>> language said to isolate a user from the underlying implementational
>> choices, would have to fail this way. 
> I am not sure who said that R is "a language said to isolate...", but I
> guess you were told this on some other occasion.  For me the question
> would be whether a user wants to be isolated from implementational
> choices.  I know that I don't, e.g., knowing that R stores matrices in
> column-major form instead of row-major form, together with R's
> recycling rules,  is very helpful for arranging certain calculations.

it does not matter much where the claim about isolation was made; it was
in one of the threads on this list.  the claim is simply wrong, and in
some contexts it is good that it is wrong.
the column- vs row-major order might be one example, though a language
does not have to expose the underlying representation this way (even if
r actually does).
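
berwin's point about column-major storage plus recycling can be made
concrete (a minimal sketch in r; the variable names are mine):

```r
# r stores matrices in column-major order: matrix() fills down
# the first column, then the second, and so on
m <- matrix(1:6, nrow = 3)
m[2, 1]   # 2 -- the second stored element is row 2 of column 1

# recycling follows the same storage order: a length-3 vector lines
# up with each column in turn, so 'm * v' scales row i of m by v[i]
v <- c(10, 1, 0.1)
m * v
```

knowing the storage order is exactly what tells you that `m * v` scales
rows rather than columns here.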

also, i do not claim that r should implement arbitrary precision
floating point arithmetic.  for the efficiency of numerical computations
involving floats, it is certainly better to use doubles than to
represent real numbers by storing each decimal digit separately, for
example.

>> there is, in principle, no problem in having a high-level language
>> perform the computation in a logically consistent way.  
> Is this now supposed to be a "Radio Eriwan" joke?  As another saying
> goes: in theory there is no difference between theory and practice, in
> practice there is.

no joke, sorry to disappoint you. 

>> for example, bc is an "arbitrary precision calculator language", and
>> has no problem with examples as the above:
> Fair enough, and when bc can fit linear models, generalised linear
> models, mixed effect models, non-linear models and the myriad of other
> things I need day in day out, preferably in arbitrary precision, then I
> might consider changing to it.....

nope, and as mentioned above, for computations involving floats it would
make little sense to implement arbitrary precision.  the point was that
the user was surprised, and the answer pointed her, if indirectly, to an
article addressed to computer scientists that discusses low-level
representational details, while the user was presumably interested in
stats, not computer science.  so the answer felt like it lacked a
justification.  i did not say (for once?) that the design is wrong in
any way; i only opposed it to a claim made earlier here in defence
against some of my other, fiercely truculent, remarks.

>> the fact that r (and many others, including matlab and sage, perhaps
>> not mathematica) does not perform logically here is a consequence of
>> its implementation of floating point arithmetic. 
> But you are wrong here, R performs logically *in the logic of finite
> precision arithmetic*.  The problem is that you are using finite
> precision arithmetic but expect that the rules and logic of infinite
> precision arithmetic hold.  If you want to use have infinite precision
> arithmetic, then use a tool that (supposedly) implements it.

well, it clearly depends on what you regard as logical.  you can have r
say '1==0' is true and argue that it is correct by the logic adopted;
fine.  the issue is, if you assume your users are statisticians, not
computer scientists, you should not be surprised that the logic some of
them assume differs from the one you implement.  i took emma's
'logically consistent' as referring to arithmetic on numbers, not to
arithmetic on number representations.
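
for concreteness, this is the kind of comparison that surprises a user
expecting arithmetic on numbers, together with the usual r-side remedy
(a minimal sketch):

```r
# none of 0.1, 0.2, 0.3 is exactly representable as a binary double,
# so the exact comparison fails
0.1 + 0.2 == 0.3                  # FALSE
print(0.1 + 0.2, digits = 17)     # 0.30000000000000004

# the idiomatic remedy: compare up to a tolerance
isTRUE(all.equal(0.1 + 0.2, 0.3))   # TRUE
```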

i think a short comment in the faq explaining why r does not adopt
arbitrary precision arithmetic would not be harmful.

