[Rd] Philosophy behind converting Fortran to C for use in R
avraham.adler at gmail.com
Tue Jun 6 22:27:23 CEST 2017
This is not a question about a bug or even best practices; rather I'm
trying to understand the philosophy or theory as to why certain
portions of the R codebase are written as they are. If this question
is better posed elsewhere, please point me in the proper direction.
In the thread about the issues with the Tukey line, Martin said:
> when this topic came up last (for me) in Dec. 2014, I did spend about 2 days work (or more?)
> to get the FORTRAN code from the 1981 - book (which is abbreviated the "ABC of EDA")
> from a somewhat useful OCR scan into compilable Fortran code and then f2c'ed,
> wrote an R interface function, found problems…
I have seen this in the R source code and elsewhere: native
Fortran is converted to C via f2c and then compiled and run as C within
R. This is notwithstanding R's ability to use Fortran directly, either
through .Fortran() [1] or via .Call() using simple helper C wrappers [2].
I'm curious as to the reason. Is it because much of the code was
written before Fortran 90 compilers were freely available? Does it
help with maintenance or make debugging easier? Is it faster or more
likely to compile cleanly?
[1] Such as kmeans does for the Hartigan-Wong method in the stats package
[2] Such as the mvtnorm package does