[R] How long does skipping in read.table take

Mike Marchywka marchywka at hotmail.com
Fri Oct 22 23:43:38 CEST 2010

> Date: Fri, 22 Oct 2010 17:17:58 -0400
> From: dimitri.liakhovitski at gmail.com
> To: r-help at r-project.org
> Subject: [R] How long does skipping in read.table take
>
> I know I could figure it out empirically - but maybe based on your
> experience you can tell me if it's doable in a reasonable amount of
> time:
> I have a table (in .txt) with a 17,000,000 rows (and 30 columns).
> I can't read it all in (there are many strings). So I thought I could
> read it in in parts (e.g., 1 million) using nrows= and skip=.
> I was able to read in the first 1,000,000 rows no problem in 45 sec.
> But then I tried to skip 16,999,999 rows and then read in things. Then
> R crashed. Should I try again - or is it too many rows to skip for R?
>
I've seen this come up a few times already in my brief time on
the list. A quick Google search turns up resources like this one for
dealing with large datasets:

http://yusung.blogspot.com/2007/09/dealing-with-large-data-set-in-r.html

With most OO languages, accessors let you hide a lot of the details:
the data handler is free to return a value or values to you however
makes sense, and whether the backing store is memory, disk, or even a
socket stays hidden. I'm impressed that R is general enough to give
package creators this freedom.
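Coming back to the original question: the reason a huge skip= is so slow is that read.table still has to scan every skipped line from the top on each call. A minimal sketch of the usual workaround, assuming a tab-delimited file with a header row (the file name, column layout, and 1,000-row chunk size here are just illustrative; a small demo file is generated so the code runs as-is): keep one connection open, and read.table picks up where the previous chunk ended, so nothing is re-scanned.

```r
# Demo input: for the real 17M-row file, only the file name and
# chunk size (e.g. 1e6) would change.
tf <- tempfile()
write.table(data.frame(x = 1:5000, y = letters[(0:4999) %% 26 + 1]),
            tf, sep = "\t", row.names = FALSE, quote = FALSE)

con <- file(tf, open = "r")                       # open ONCE
nm  <- strsplit(readLines(con, n = 1), "\t")[[1]] # consume header line
total <- 0
repeat {
  # Each call continues from where the last chunk ended -- no skip=.
  chunk <- tryCatch(
    read.table(con, nrows = 1000, sep = "\t",
               col.names = nm, stringsAsFactors = FALSE),
    error = function(e) NULL)   # EOF exactly on a chunk boundary
  if (is.null(chunk)) break
  total <- total + nrow(chunk)  # ... process each chunk here ...
  if (nrow(chunk) < 1000) break # last (partial) chunk
}
close(con)
total   # 5000
```

Specifying colClasses= as well would avoid read.table's type-guessing overhead on each chunk, which matters at this scale.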



Just generally, memory management is a big problem even for people
working in "computer" languages (hard-core programming rather than
something like R). People assume "well, I made an array, so it must
all be in memory." Often, however, the OS gives you virtual memory
instead, which in terms of performance is probably worse than reading
from a file.

One good rule is to "act locally": operate only on adjacent data, and
stream or block your input. An R streaming IO class could potentially
be very fast and give implementors a way to think globally but act
locally. As is probably apparent, it is easy for even stats and math
tasks to become IO-bound rather than CPU-bound.
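In R terms, "act locally" can mean reading fixed-size blocks of lines and keeping only running summaries, so memory stays constant no matter how big the file is. A sketch under toy assumptions (the generated file and block size are illustrative):

```r
# Toy input: one number per line; stands in for any large text file.
tf <- tempfile()
writeLines(as.character(1:10000), tf)

con <- file(tf, open = "r")
running_sum <- 0
repeat {
  block <- readLines(con, n = 2000)   # bounded block -- never the whole file
  if (length(block) == 0) break       # EOF
  # Keep only the running summary; the block itself is discarded.
  running_sum <- running_sum + sum(as.numeric(block))
}
close(con)
running_sum   # 50005000
```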

As an aside, if split files are OK to use with your R code, you can
use external utilities to split the file; head and tail, for example,
can isolate line ranges. In the past I've created indexes of line
offsets and then used Perl for random access, though I'm not sure how
that would translate to R.
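For example (the file name and line counts below are illustrative, demonstrated on a small generated file):

```shell
# Demo input; for the real file, swap in its name and scale the counts
# (e.g. head -n 3000000 big.txt | tail -n 1000000 for rows 2000001-3000000).
seq 1 100 > big_demo.txt

# Isolate lines 21..40: head keeps the first 40, tail keeps the last 20.
head -n 40 big_demo.txt | tail -n 20 > chunk.txt

# Or pre-split the whole file into 25-line pieces: part_aa, part_ab, ...
split -l 25 big_demo.txt part_
```

Each resulting piece can then be fed to read.table separately.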


> Thank you!

Thank Google.

>





Mike Marchywka | V.P. Technology

415-264-8477
marchywka at phluant.com

Online Advertising and Analytics for Mobile
http://www.phluant.com







More information about the R-help mailing list