[R] memory limits in R loading a dataset and using the package tree
Sicotte, Hugues Ph.D.
Sicotte.Hugues at mayo.edu
Fri Jan 5 22:48:28 CET 2007
I agree about sampling, but you can go a little further with your current setup.
The default in R is to play nice and limit your allocation to half
the available RAM. Make sure you have plenty of disk swap space (at least
1 GB with 2 GB of RAM) and you can set your memory limit to 2 GB for R.
See help(memory.size) and use the memory.limit function.
P.S. Someone let me use their Linux machine with 16 GB of RAM,
and I was able to run 64-bit R with "top" showing 6 GB of RAM
allocated (with suitable --max-mem-size command-line parameters at
startup for R).
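As a sketch of the above (these functions are specific to Windows builds of R, and 2047 MB is the practical cap for 32-bit R; the exact value to pass depends on your swap configuration):

```r
# Windows-only sketch: inspect and raise R's allocation cap.
memory.size()              # MB currently in use
memory.limit()             # current cap, in MB
memory.limit(size = 2047)  # raise the cap to ~2 GB (needs swap to back it)
```

The same cap can instead be set at startup from the command line, e.g. `Rgui.exe --max-mem-size=2047M`.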
From: r-help-bounces at stat.math.ethz.ch
[mailto:r-help-bounces at stat.math.ethz.ch] On Behalf Of Weiwei Shi
Sent: Friday, January 05, 2007 2:12 PM
To: domenico pestalozzi
Cc: r-help at stat.math.ethz.ch
Subject: Re: [R] memory limits in R loading a dataset and using the package tree
IMHO, R is not good at really large-scale data mining, especially when the
algorithm is complicated. The alternatives are:
1. Sample your data; sometimes you really do not need that large a
number of records, and the accuracy might already be good enough when
you load less.
2. Find an alternative (commercial software) to do this job if you
really need to load everything.
3. Write a wrapper function that samples your data, loads the sample into R,
and builds a model; repeat this process until you get n models. Then you
can do something like meta-learning, or simply majority-win if your problem is
classification.
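The majority-win step in option 3 can be sketched in plain R; the model fitting itself (e.g. tree() on each subsample) is only indicated in a comment, and the toy vote matrix below is an assumption for illustration:

```r
# Combine n models' class predictions by majority vote.
# Each model would be fit on its own subsample, e.g.:
#   idx <- sample(nrow(full), 100000)
#   m   <- tree(variable1 ~ ., data = full[idx, ])
majority_vote <- function(votes) {
  # votes: character matrix, one row per case, one column per model
  apply(votes, 1, function(v) names(which.max(table(v))))
}

# Toy demonstration: three "models" voting on two cases.
v <- matrix(c("a", "a", "b",
              "b", "b", "a"), nrow = 2, byrow = TRUE)
majority_vote(v)  # "a" "b"
```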
On 1/4/07, domenico pestalozzi <statadat at gmail.com> wrote:
> I think the question is discussed in another thread, but I don't find exactly
> what I want there.
> I'm working in Windows XP with 2GB of memory and a Pentium 4.
> I need to work with large datasets, generally ranging up to 800,000
> records (according to the project), and about 300 variables
> (...but a dataset with 800,000 records might not be "large" in your
> opinion...). Because we are deciding whether R will be the official tool
> in our company, I'd like to know if the possibility of using R with large
> datasets depends only on the characteristics of the "engine" (memory, CPU).
> In that case we can improve the machine (for example, how much memory would
> you suggest?).
> For example, I have a dataset of 200,000 records and 211 variables, but I
> can't load the dataset because R stops working: I monitor the loading
> procedure (read.table in R) using the Windows task manager, and R is
> blocked when the paging file reaches 1.10 GB.
> After this I tried with a sample of 100,000 records and I can correctly load
> the dataset. I'd then like to use the package tree, but when
> I use tree(variable1 ~ ., myDataset) I obtain the message "reached
> total allocation of 1014Mb".
> I'd like your opinion and suggestions, considering that I could improve
> (e.g., add memory to) my computer.
> R-help at stat.math.ethz.ch mailing list
> PLEASE do read the posting guide
> and provide commented, minimal, self-contained, reproducible code.
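On the read.table bottleneck described above: pre-specifying colClasses skips R's type-guessing pass and can noticeably reduce peak memory during loading. A self-contained sketch (the tiny file and its column types are made up for illustration; for the real 211-column dataset the class vector would have 211 entries):

```r
# Write a tiny example file so the sketch runs anywhere.
tmp <- tempfile(fileext = ".txt")
writeLines(c("y x1 x2",
             "a 1.5 2.0",
             "b 0.5 3.2"), tmp)

# Declaring column classes up front avoids the costly guessing pass;
# comment.char = "" also speeds parsing of large files.
cls <- c("factor", "numeric", "numeric")
d <- read.table(tmp, header = TRUE, colClasses = cls, comment.char = "")
str(d)  # 2 obs. of 3 variables
```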
Weiwei Shi, Ph.D
"Did you always know?"
"No, I did not. But I believed..."