Large number of slides / out of memory
Simon Lin ▴ 210
@simon-lin-461
Last seen 10.2 years ago
"Out of memory error" is a common challenge of running a very large batch of Affymetrix chips with RMA or gcRMA. To solve this program, I am working on a dcExpresso function. It is in alpha stage now. I will move it into beta release in the next two weeks. Best regards, Simon ================================================= Simon M. Lin, M.D. Manager, Duke Bioinformatics Shared Resource Assistant Research Professor, Biostatistics and Bioinformatics Box 3958, Duke University Medical Center Rm 286 Hanes, Trent Dr, Durham, NC 27710 Ph: (919) 681-9646 FAX: (919) 681-8028 Lin00025 (at) mc.duke.edu http://dbsr.duke.edu ================================================= Date: Mon, 7 Jun 2004 08:36:24 +0200 (MEST) From: "R.G.W. Verhaak" <r.verhaak@erasmusmc.nl> Subject: Re: [BioC] large amount of slides To: bioconductor@stat.math.ethz.ch Message-ID: <43766.130.115.244.114.1086590184.squirrel@130.115.244.114> Content-Type: text/plain;charset=iso-8859-1 I have succesfully ran GCRMA on a dataset of 285 HGU133a chips, on a machine with 8 Gb RAM installed; I noticed a peak memory use of 5,5 Gb (although I have not been monitoring it continuously). I would say 200 chips use equally less memory, so around 4 Gb. Roel Verhaak > > Message: 9 > Date: Fri, 04 Jun 2004 10:06:14 -0500 > From: "Vada Wilcox" <v_wilcox@hotmail.com> > Subject: [BioC] large amount of slides > To: bioconductor@stat.math.ethz.ch > Message-ID: <bay19-f34sdgaixwb9d0002ec89@hotmail.com> > Content-Type: text/plain; format=flowed > > Dear all, > > I have been using RMA succesfully for a while now, but in the past I have > only used it on a small amount of slides. I would like to do my study on a > larger scale now, with data (series of experiments) from other researchers > as well. My questions is the following: if I want to study, let's say 200 > slides, do I have to read them all into R at once (so together I mean, > with > read.affy() in package affy), or is it OK to read them series by series > (so > all wild types and controls of one researcher at a time)? > If it is really necessary to read all of them in at one time how much RAM > would I need (for let's say 200 CELfiles) and how can I raise the RAM? I > now > it's possible to raise it by using 'max vsize = ...' but I haven't been > able > to do it succesfully for 200 experiments though. Can somebody help me on > this? >
hgu133a gcrma • 799 views
@james-w-macdonald-5106
Last seen 3 days ago
United States
Note too that if you are simply doing RMA on the chips, you can use justRMA, which will give identical results while using far less RAM.

HTH,
Jim

James W. MacDonald
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623

>>> "Simon Lin" <simon.lin@duke.edu> 06/08/04 5:28 PM >>>
[quoted thread omitted; identical to the post above]
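A minimal sketch of the justRMA() route Jim describes: it reads the CEL files and computes the same RMA expression values directly, without constructing a probe-level AffyBatch, which is where most of the memory goes. The directory name is again hypothetical.

    ## A minimal sketch of the justRMA() shortcut; "celfiles/" is a
    ## hypothetical path.
    library(affy)

    ## Goes from CEL files straight to an expression set, skipping the
    ## in-memory AffyBatch, so peak RAM use is much lower than rma().
    eset <- justRMA(celfile.path = "celfiles/")

    head(exprs(eset))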