[R] efficiency

jimi adams adams.644 at osu.edu
Tue Apr 30 00:37:33 CEST 2002


I have a set of files that I am reading into R one at a time and passing
to a function I have written.
Each file is a 'table' of n (columns) x 10000 (rows);
n varies across the files, and most of the rows only have data in the
first few columns.
Currently I am reading them in with the command:
read.table(file="2.75.0.997.1", header=FALSE, sep="", skip=13,
           fill=TRUE, row.names=1, nrows=10000) -> dat

***and it works fine.
However, we are now working with a huge table,
and I was wondering if there is a more efficient way to read it in.

IDEALLY I would like to have it as a list in which each element is a row
from the input file, eliminating all of the NAs that the above approach
produces, i.e. a list with 10000 elements, each of variable length from
1 to n.
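One possible sketch of that (my suggestion, not from the original post):
read each row as raw text and split it into fields, so short rows keep
only the values they actually have instead of being padded with NAs.
read_ragged() is a hypothetical helper; the skip of 13 lines, the
whitespace separator, and the use of the first field as row names all
mirror the read.table() call above.

```r
# Hypothetical helper: read a ragged whitespace-separated file into a list
# with one element per row, each of length 1..n, no NA padding.
read_ragged <- function(file, skip = 13) {
  lines <- readLines(file)
  if (skip > 0) lines <- lines[-seq_len(skip)]
  # split each line on runs of spaces/tabs -> one character vector per row
  fields <- strsplit(trimws(lines), "[ \t]+")
  # the first field served as row.names above, so use it to name elements
  rows <- lapply(fields, function(x) as.numeric(x[-1]))
  names(rows) <- vapply(fields, `[`, character(1), 1)
  rows
}

# rows <- read_ragged("2.75.0.997.1")
# length(rows)    # one element per data row
# lengths(rows)   # variable lengths, no NAs
```

This avoids building the rectangular, NA-padded data frame entirely, which
should also help with the larger files.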

Any help greatly appreciated,
jimi adams
Department of Sociology
The Ohio State University
300 Bricker Hall
190 N. Oval Mall
Columbus, OH 43210-1353
614-688-4261

our mind has a remarkable ability to think of contents as being independent 
of the act of thinking
                                             - Georg Simmel

-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-
r-help mailing list -- Read http://www.ci.tuwien.ac.at/~hornik/R/R-FAQ.html
Send "info", "help", or "[un]subscribe"
(in the "body", not the subject !)  To: r-help-request at stat.math.ethz.ch
_._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._


