affy: expresso in separate steps
@leonardo-kenji-shikida-2136
Hi,

I'd like to know how to perform the affy expresso workflow in separate steps. For example, what I'd like is:

CEL data => background correction => save corrected data into a file X
load file X => normalization => save normalized data into file Y
load file Y => summarization => save summarized data into file Z

and so on. Two things are not clear to me:

[1] how to access these intermediary datasets: should I save both pm(Data) and mm(Data)?
[2] whether the intermediary dataset is all I need, or whether I also need anything else, such as platform info (CDF files, for example).

I hope I've been clear about my question.

Thanks in advance,

Kenji
Tags: Normalization • cdf • affy
@james-w-macdonald-5106
Hi Kenji,

I wouldn't save things in files. The objects designed to contain your data are pretty complex, but they are designed to make manipulation of your data simple. If you write out to files you increase the complexity of dealing with your data and lose all of the nice functions designed to make your life simpler.

You can instead keep your data in an AffyBatch (until you summarize) and just save the objects as you go through your process. For instance:

dat <- ReadAffy()

bgdat <- bg.correct(dat, method)
## for methods see bgcorrect.methods()

normdat <- normalize(bgdat, method)
## for methods see normalize.methods(dat)

eset <- computeExprSet(normdat, summary.method = method, pmcorrect.method = pmmethod)
## for summary and pmcorrect methods see express.summary.stat.methods() and pmcorrect.methods()

As for your second question: you will need a cdf package. If you are using a commercially available chip and just want to use the 'regular' Affy cdf, then you don't need to do anything; if you don't have the required package it will be downloaded for you. If you want to use a different cdf, there is the cdfname argument to ReadAffy (if BioC has these cdfs; an example would be the MBNI cdfs). If the chip isn't commercial, you will need to get the cdf from Affy, build a package using the makecdfenv package, and then build and install it yourself.

Best,

Jim
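[Editor's note] If you do want checkpoints on disk between steps, a minimal sketch of what Jim describes is to serialize the intermediate R objects themselves rather than export text. The method names and file names below are only illustrative choices, and the matching cdf package must still be installed in any session that loads the objects:

library(affy)

dat <- ReadAffy()                          ## raw AffyBatch built from the CEL files
bgdat <- bg.correct(dat, method = "rma")   ## pick any method from bgcorrect.methods()
save(bgdat, file = "bgcorrected.RData")    ## checkpoint: background-corrected AffyBatch

## ...later, possibly in a fresh R session...
load("bgcorrected.RData")                  ## restores the object 'bgdat'
normdat <- normalize(bgdat, method = "quantiles")
save(normdat, file = "normalized.RData")

load("normalized.RData")
eset <- computeExprSet(normdat, summary.method = "medianpolish",
                       pmcorrect.method = "pmonly")
save(eset, file = "exprset.RData")         ## final ExpressionSet

Because save()/load() preserve the whole AffyBatch, the chip layout and probe indexing travel with the data; that information is lost only if you drop down to plain-text exports.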
Hi James,

Thanks for the fast answer.

I am afraid I can't do that. The idea here is to reuse some other normalization methods (not implemented in R), so I'd have to, somehow, save these intermediary results, perform the other normalization method, then restore the normalized data to perform the summarization, and so on.

The problem, as you've pointed out, is that affy abstracts the internal data structure to make my life easier. My work will probably need to deal with this internal structure somehow.

Maybe I could just save the object, export the PM and MM data as CSV, perform the normalization, then restore the object using the load command and overwrite its PM and MM data with the normalized CSV files...

Sounds like a horrible way to deal with this situation :-) so I am open to better ideas.

Kenji
Leonardo K. Shikida wrote:
> Maybe I could just save the object, export the PM and MM data as CSV,
> perform the normalization, then restore the object using the load
> command and overwrite its PM and MM data with the normalized CSV
> files...

Yup. As bogus as that sounds, I don't know any other way to do it. Although you wouldn't use the load command; you would use read.csv and pm or mm. Just make sure your normalization method doesn't scramble the PM and MM values, or you are really toast.

Best,

Jim
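[Editor's note] For completeness, a rough sketch of that round trip. The file names and the external normalization step are placeholders, and this assumes the external tool writes the matrices back with exactly the same row and column order it received (otherwise the probe-to-probeset mapping is destroyed, as Jim warns):

library(affy)

dat <- ReadAffy()

## export the probe-level intensities for the external (non-R) normalization
write.csv(pm(dat), file = "pm_raw.csv")
write.csv(mm(dat), file = "mm_raw.csv")

## ...run the external normalization on those files, producing
## pm_norm.csv and mm_norm.csv in the same layout...

## read the normalized values back and overwrite the slots in the AffyBatch
pm(dat) <- as.matrix(read.csv("pm_norm.csv", row.names = 1))
mm(dat) <- as.matrix(read.csv("mm_norm.csv", row.names = 1))

## then summarize as usual
eset <- computeExprSet(dat, summary.method = "medianpolish",
                       pmcorrect.method = "pmonly")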
Dear Kenji,

Maybe you could use the package xps, which has a similar function "express" that allows you to do the normalization stepwise and save interim results as text files for reuse. See e.g. the recent vignette:
http://bioconductor.org/packages/2.5/bioc/vignettes/xps/inst/doc/xpsPreprocess.pdf
and the script in xps/examples/script4xpsPreprocess.R

Best regards,

Christian
Wow, I'll give it a try! Thanks!

Kenji
Sounds like the aroma.affymetrix package is what you want: its preprocessing takes CEL files as input and outputs CEL files;
http://www.braju.com/R/aroma.affymetrix/

/H
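[Editor's note] For reference, a rough outline of that CEL-in/CEL-out style with aroma.affymetrix. The dataset name, chip type, and directory layout below are placeholders; the package expects raw data under a rawData/<dataset>/<chipType>/ hierarchy, as described in its vignettes:

library(aroma.affymetrix)

csR <- AffymetrixCelSet$byName("MyDataset", chipType = "HG-U133_Plus_2")

bc  <- RmaBackgroundCorrection(csR)
csB <- process(bc)                 ## writes background-corrected CEL files to disk

qn  <- QuantileNormalization(csB, typesToUpdate = "pm")
csN <- process(qn)                 ## writes normalized CEL files to disk

plm <- RmaPlm(csN)
fit(plm)
ces <- getChipEffectSet(plm)       ## summarized, probeset-level estimates

Each step leaves its output on disk, so you can stop after any step and inspect or reuse the written files before moving on.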
Hi Christian,

This seems to be in the right direction, but I could not find any example of how it explicitly saves intermediary results as text files.

Thanks in advance,

Kenji
Dear Kenji,

Please have a look at the help file "?export", which shows you how to save the results as text files. If you set the parameter "as.dataframe=TRUE", the text file will also be automatically imported into R as a data.frame.

Best regards,

Christian
