[BioC] A question related to handling large data analysis in Bioconductor

Li, Aiguo (NIH/NCI) liai at mail.nih.gov
Tue Jun 29 23:53:46 CEST 2004

Hi all,
My name is AG LEE, and I am a new Bioconductor user. Our project uses
HG-U133 Plus 2.0 chips, which contain approximately 56,000 probe sets; each
.cel file in text format is about 32 MB. We currently have more than 100
chips, and the number is growing quickly. I tried to load the data into
Bioconductor with ReadAffy() but could read in only 19 chips before it ran
out of memory (my machine has 1 GB of memory). I also tried to normalize
the data with expresso(d, normalize.method = "invariantset", bg.correct =
FALSE, pmcorrect.method = "pmonly", summary.method = "liwong") and it ran
out of memory before completing. I have the option of upgrading my memory
to 4 GB, but I am still concerned whether that will really help once our
chip count reaches several hundred or several thousand.
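For reference, here is the full code I am running, written out as a minimal
sketch (it assumes the affy package is installed and that all of the .cel
files sit in the current working directory):

```r
library(affy)

## Read every CEL file in the working directory into one AffyBatch;
## this is the step that fails after ~19 of our 32 MB text-format files.
d <- ReadAffy()

## dChip-style preprocessing: invariant-set normalization, no background
## correction, PM-only probe correction, Li-Wong model-based summarization.
eset <- expresso(d,
                 normalize.method = "invariantset",
                 bg.correct       = FALSE,
                 pmcorrect.method = "pmonly",
                 summary.method   = "liwong")
```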
Can anybody help me with some suggestions?
Thanks in advance.

