[Rd] Performance of .C and .Call functions vs. native R code
Gabriel Becker
gmbecker at ucdavis.edu
Thu Jul 14 17:59:49 CEST 2011
On Thu, Jul 14, 2011 at 8:21 AM, Alireza Mahani
<alireza.s.mahani at gmail.com> wrote:
> (I am using a LINUX machine)
>
> Jeff,
>
> In creating reproducible results, I 'partially' answered my question. I have
> attached two scripts, 'mvMultiply.r' and 'mvMultiply.cc'. Please copy both
> files into your chosen directory, then run 'Rscript mvMultiply.r' in that
> directory while changing the two boolean parameters 'INCLUDE_DATAPREP' and
> 'ROWMAJOR' to all four permutations. (The variable 'diffVec' is there to
> verify that the two methods produce the same output vector.)
>
> Below are the results that I get, along with discussion (tR and tCall are
> in sec):
>
> INCLUDE_DATAPREP,ROWMAJOR,tR,tCall
> F,F,13.536,13.875
> F,T,13.824,14.299
> T,F,13.688,18.167
> T,T,13.982,30.730
>
> Interpretation: The execution time for the .Call line is nearly identical
> to the call to the R operator '%*%'. Two data preparation lines for the
> .Call method create the overhead:
>
> A <- t(A) (~12sec, or 12usec per call)
> dim(A) <- dim(A)[1] * dim(A)[2] (~4sec, or 4usec per call)
>
>
AFAIK R stores matrices as vectors internally anyway and the dims just tell
it the position of the various elements, so I'm not sure that second line is
needed at all.
I have attached a tiny piece of C code which verifies this.
The output I get from that is:
> dyn.load("/home/gmbecker/gabe/matvectest.so")
> vec = 1.1:8.1
> mat = matrix(vec, ncol = 4)
> .Call("R_MatVecTest", vec, mat, 8L)
[1] TRUE
Note if you create the matrix with byrow=TRUE they may not be the same.
Hope that helps,
Gabe
> While the first line can be avoided by providing options in the C++ function
> (as is done in the BLAS API), I wonder if the second line can be avoided,
> aside from the obvious option of rewriting the R scripts to use vectors
> instead of matrices. But this defeats one of the primary advantages of using
> R, which is succinct, high-level coding. In particular, if one has several
> matrices as input into a .Call function, then the overhead from
> matrix-to-vector transformations can add up. To summarize, my questions are:
>
> 1- Do the above results seem reasonable to you? Is there a similar penalty
> in R's '%*%' operator for transforming matrices to vectors before calling
> BLAS functions?
> 2- Are there techniques for reducing the above overhead for developers
> looking to augment their R code with compiled code?
>
> Regards,
> Alireza
>
> ---------------------------------------
> # mvMultiply.r
> # comparing performance of matrix multiplication in R (using the '%*%'
> # operator) vs. calling compiled code (using the .Call function)
> # y [m x 1] = A [m x n] %*% x [n x 1]
>
> rm(list = ls())
> system("R CMD SHLIB mvMultiply.cc")
> dyn.load("mvMultiply.so")
>
> INCLUDE_DATAPREP <- F
> ROWMAJOR <- F  # indicates whether the C++ function treats A in a
>                # row-major or column-major fashion
>
> m <- 100
> n <- 10
> N <- 1000000
>
> diffVec <- array(0, dim = N)
> tR <- 0.0
> tCall <- 0.0
> for (i in 1:N) {
>
>   A <- runif(m*n); dim(A) <- c(m,n)
>   x <- runif(n)
>
>   t1 <- proc.time()[3]
>   y1 <- A %*% x
>   tR <- tR + proc.time()[3] - t1
>
>   if (INCLUDE_DATAPREP) {
>     t1 <- proc.time()[3]
>   }
>   if (ROWMAJOR) {
>     # since R will convert the matrix to a vector in a column-major
>     # fashion, if the C++ function expects a row-major format, we need
>     # to transpose A before converting it to a vector
>     A <- t(A)
>   }
>   dim(A) <- dim(A)[1] * dim(A)[2]
>   if (!INCLUDE_DATAPREP) {
>     t1 <- proc.time()[3]
>   }
>   y2 <- .Call("matvecMultiply", as.double(A), as.double(x),
>               as.logical(c(ROWMAJOR)))
>   tCall <- tCall + proc.time()[3] - t1
>
>   diffVec[i] <- max(abs(y2 - y1))
> }
> cat("Data prep time for '.Call' included: ", INCLUDE_DATAPREP, "\n")
> cat("C++ function expects row-major matrix: ", ROWMAJOR, "\n")
> cat("Time - Using '%*%' operator in R: ", tR, "sec\n")
> cat("Time - Using '.Call' function: ", tCall, "sec\n")
> cat("Maximum difference between methods: ", max(diffVec), "\n")
>
> dyn.unload("mvMultiply.so")
> ---------------------------------------
> // mvMultiply.cc
> #include <Rinternals.h>
> #include <R.h>
>
> extern "C" {
>
> SEXP matvecMultiply(SEXP A, SEXP x, SEXP rowmajor) {
>   double *rA = REAL(A), *rx = REAL(x), *ry;
>   int *rrm = LOGICAL(rowmajor);
>   int n = length(x);
>   int m = length(A) / n;
>   SEXP y;
>   PROTECT(y = allocVector(REALSXP, m));
>   ry = REAL(y);
>   for (int i = 0; i < m; i++) {
>     ry[i] = 0.0;
>     for (int j = 0; j < n; j++) {
>       if (rrm[0] == 1) {
>         ry[i] += rA[i * n + j] * rx[j];
>       } else {
>         ry[i] += rA[j * m + i] * rx[j];
>       }
>     }
>   }
>   UNPROTECT(1);
>   return(y);
> }
>
> }
>
>
> ______________________________________________
> R-devel at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>
--
Gabriel Becker
Graduate Student
Statistics Department
University of California, Davis