[R] convert binary to decimal

Jim Regetz regetz at nceas.ucsb.edu
Sat Feb 17 02:44:28 CET 2007


Roland Rau wrote:
> On 2/16/07, Petr Pikal <petr.pikal at precheza.cz> wrote:
>> Hi
>>
>> a slight modification of your function can probably be even quicker:
>>
>> fff<-function(x) sum(2^(which(rev(x))-1))
>> :-)
>> Petr
>>
>>
> Yes, your function is slightly but consistently faster than my suggestion.
> But my "tests" still show Bert Gunter's function to be far ahead of
> the rest.
> 
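
As a quick sanity check of Petr's fff(): with the most significant bit
first, c(TRUE, FALSE, TRUE, TRUE) encodes binary 1011, i.e. decimal 11.
rev() flips the vector, which() picks out the TRUE positions counted
from the low end, and 2^(position - 1) gives the place values.

fff <- function(x) sum(2^(which(rev(x)) - 1))
fff(c(TRUE, FALSE, TRUE, TRUE))   # 1 + 2 + 8 = 11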

Mere trifling at this point, but here's a tweak that yields slightly
faster performance on my system (with a caveat):

# Bert's original function: multiply each place value by 1 (TRUE) or
# 0 (FALSE), then sum
bert.gunter <- function(x) {
  sum(x * 2^(rev(seq_along(x)) - 1))
}

# A slightly modified function: use logical indexing to keep only the
# place values at TRUE positions, skipping the multiplication entirely
dead.horse <- function(x) {
  sum(2^(rev(seq_along(x)) - 1)[x])
}
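
# Both versions agree, of course -- the same 1011 example as above:
x <- c(TRUE, FALSE, TRUE, TRUE)   # binary 1011 = decimal 11
bert.gunter(x)   # 11
dead.horse(x)    # 11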

set.seed(1)
# 20,000 random 20-element logical vectors, each a 20-bit binary number
huge.list <- replicate(20000,
         sample(c(TRUE,FALSE), 20, replace=TRUE), simplify=FALSE)
# Time 15 replicates of converting the whole list with each function
horse.time <- replicate(15, system.time(lapply(huge.list, dead.horse)))
bert.time <- replicate(15, system.time(lapply(huge.list, bert.gunter)))


# Mean timings in seconds -- rows are user, system, elapsed, and the two
# child-process times; the first 2 replicates are dropped for consistency
> rowMeans(bert.time[, -(1:2)])
[1] 0.618600 0.000867 0.621000 0.000000 0.000000
> rowMeans(horse.time[, -(1:2)])
[1] 0.580286 0.000571 0.582143 0.000000 0.000000

Hope no one comes along to beat this function ;-)

Incidentally, I generated huge.list by randomly sampling TRUE and FALSE
values with equal probability. I believe this matches what Roland did,
and it seems quite reasonable given that the vectors are meant to
represent binary numbers. But FWIW, as the vectors get more densely
populated with TRUE values, dead.horse() loses its advantage. In the
limit, if all values are TRUE, Bert's multiplication is slightly faster
than logical indexing.
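
Here's a minimal sketch of that limiting case, if anyone cares to
reproduce it (dense.list is just a name I'm introducing; timings will
of course vary by machine):

# All-TRUE vectors: logical indexing now keeps every element, so the
# multiplication version should pull slightly ahead
dense.list <- replicate(20000, rep(TRUE, 20), simplify=FALSE)
rowMeans(replicate(15, system.time(lapply(dense.list, dead.horse)))[, -(1:2)])
rowMeans(replicate(15, system.time(lapply(dense.list, bert.gunter)))[, -(1:2)])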

Fun on a Friday...

Cheers,
Jim

------------------------------
James Regetz, Ph.D.
Scientific Programmer/Analyst
National Center for Ecological Analysis & Synthesis
735 State St, Suite 300
Santa Barbara, CA 93101


