[R] R's memory limitation and Hadoop
h.wickham at gmail.com
Tue Sep 16 15:53:15 CEST 2014
Hundreds of thousands of records usually fit into memory fine.
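A rough back-of-envelope check (my own illustration, not from the thread; the 500,000-row, 50-column shape is an assumption) shows why: numeric data at that scale occupies only a few hundred megabytes.

```r
# Hypothetical sizing: 500,000 rows x 50 numeric (double) columns,
# 8 bytes per value -- roughly 190 MB, comfortably in-memory on most machines.
rows <- 5e5
cols <- 50
size_mb <- rows * cols * 8 / 2^20
size_mb

# Confirm against a real object with object.size():
df <- as.data.frame(matrix(rnorm(1e5 * 10), ncol = 10))
print(object.size(df), units = "MB")
```

The same arithmetic scales linearly, so even tens of millions of rows of modest width can stay below typical RAM before any Hadoop-style distribution is needed.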
On Tue, Sep 16, 2014 at 12:40 PM, Barry King <barry.king at qlx.com> wrote:
> Is there a way to get around R’s memory-bound limitation by interfacing
> with a Hadoop database, or should I look at products like SAS or JMP to work
> with data that has hundreds of thousands of records? Any help is appreciated.
> *Barry E. King, Ph.D.*
> Analytics Modeler
> Qualex Consulting Services, Inc.
> Barry.King at qlx.com
> O: (317)940-5464
> M: (317)507-0661
> R-help at r-project.org mailing list