[R] Reading huge chunks of data from MySQL into Windows R

bogdan romocea br44114 at yahoo.com
Mon Jun 6 18:02:44 CEST 2005


On June 06, 2005 Dubravko Dolic wrote:
> So I think the other way round will serve best: 
> Do everything in R and avoid using SQL on the database...

I'm not so sure. How much MySQL experience do you have?
What types of queries do you run most often? What indexes do you have,
and of what kind? What storage engine(s)? What about the query cache?
What kinds of joins do you do, and in what order? And so on.
Optimizing MySQL queries is a (much) more complex job than it may
appear at first sight.

An index on userid would cause your query to run in seconds instead of
minutes (the full table scan would be avoided). If you're talking about
counting stuff from very large tables, R can't possibly be faster than
MySQL.
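For what it's worth, here is a rough sketch of how that could look from
R via RODBC; the DSN, credentials, table and column names are made-up
placeholders, not details from the original post:

  library(RODBC)
  ch <- odbcConnect("mysql_dsn", uid = "user", pwd = "pass")

  ## one-time step: index userid so the distinct count no longer
  ## needs a full table scan
  sqlQuery(ch, "CREATE INDEX idx_userid ON weblog (userid)")

  ## with the index in place, the count is resolved from the index
  sqlQuery(ch, "SELECT COUNT(DISTINCT userid) AS n_users FROM weblog")

  odbcClose(ch)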



-----Original Message-----
From: Dubravko Dolic [mailto:Dubravko.Dolic at komdat.com]
Sent: Monday, June 06, 2005 10:43 AM
To: br44114 at gmail.com
Cc: r-help at stat.math.ethz.ch
Subject: AW: [R] Reading huge chunks of data from MySQL into Windows R


In my (limited) experience R is more powerful at data manipulation. An
example: I have a vector holding user ids, where some ids can appear
more than once. Doing SELECT COUNT(DISTINCT userid) in MySQL takes
approx. 15 min. Doing length(unique(userid)) takes (almost) no time...
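
(For illustration only, the two approaches could be timed roughly like
this, assuming the userid vector has already been pulled into R; the
DSN and table name are placeholders:)

  library(RODBC)
  ch <- odbcConnect("mysql_dsn")

  ## distinct count done on the database side
  system.time(sqlQuery(ch, "SELECT COUNT(DISTINCT userid) FROM weblog"))

  ## versus counting a userid vector that is already in R
  userid <- sqlQuery(ch, "SELECT userid FROM weblog")$userid
  system.time(length(unique(userid)))

  odbcClose(ch)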

So I think the other way round will serve best: Do everything in R and
avoid using SQL on the database...




-----Original Message-----
From: bogdan romocea [mailto:br44114 at yahoo.com] 
Sent: Monday, June 06, 2005 4:27 PM
To: Dubravko Dolic
Cc: r-help at stat.math.ethz.ch
Subject: RE: [R] Reading huge chunks of data from MySQL into Windows R

You don't say what you want to do with the data, how many columns you
have etc. However, I would suggest proceeding in this order:
1. Avoid R; do everything in MySQL.
2. Use random samples.
3. If for some reason you need to process all 160 million rows in R, do
it in a loop: pull no more than, say, 50,000-100,000 rows at a time (a
rough sketch follows below). This approach lets you process billions of
rows without the memory and disk requirements going through the roof.
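
A bare-bones sketch of such a loop with RODBC; the DSN, table, columns
and chunk size are assumptions for illustration, not details from the
original post:

  library(RODBC)
  ch <- odbcConnect("mysql_dsn")

  chunk <- 100000   # rows fetched per iteration
  n <- as.numeric(sqlQuery(ch, "SELECT COUNT(*) FROM weblog")[1, 1])

  for (offset in seq(0, n - 1, by = chunk)) {
    qry <- sprintf("SELECT userid, hits FROM weblog LIMIT %d, %d",
                   offset, chunk)
    dat <- sqlQuery(ch, qry)
    ## aggregate or summarise 'dat' here and keep only the (small)
    ## per-chunk result so that memory use stays roughly constant
  }

  odbcClose(ch)

In practice you would also want an ORDER BY on an indexed key (or a
WHERE clause on key ranges) so the chunks are deterministic and LIMIT
does not get slow at large offsets.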

hth,
b.


-----Original Message-----
From: Dubravko Dolic [mailto:Dubravko.Dolic at komdat.com]
Sent: Monday, June 06, 2005 9:31 AM
To: r-help at stat.math.ethz.ch
Subject: [R] Reading huge chunks of data from MySQL into Windows R


Dear List,

 

I'm trying to use R under Windows on a huge database in MySQL via ODBC
(technical reasons for this...). Now I want to read tables with some
160,000,000 entries into R, and I would appreciate any good hints on
what to consider concerning memory management. I'm not sure about the
best way to read such huge tables into R; for the moment I split the
whole table into readable parts and stick them together in R again.
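
(Roughly, the split-and-reassemble step might look like the sketch
below; the DSN, table name and part size are placeholders, and with
160,000,000 rows the reassembled data frame will still be limited by
available memory:)

  library(RODBC)
  ch <- odbcConnect("mysql_dsn")

  ## fetch the table in parts and stick them back together
  parts <- list()
  parts[[1]] <- sqlFetch(ch, "logtable", max = 100000)
  repeat {
    nxt <- sqlFetchMore(ch, max = 100000)
    if (!is.data.frame(nxt) || nrow(nxt) == 0) break
    parts[[length(parts) + 1]] <- nxt
  }
  whole <- do.call(rbind, parts)

  odbcClose(ch)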

 

Any hints welcome.

Dubravko Dolic
Statistical Analyst
Email: dubravko.dolic at komdat.com



______________________________________________
R-help at stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html







		