Giulio Di Giovanni
Hi all,
I have an experiment with almost 300 single-channel arrays (not
Affymetrix, but some in-house made peptide arrays). Unlike in all my
past experiments, this time I suspect that the normalization step is
not necessary, or worse:

1) The qqplot of the unnormalized intensities looks pretty normal, and
normalizing improves the situation only slightly (with a really small
effect).

2) After normalization I lose some signal (of course), and I lose ALL
of the apparently differentially recognized peptides in a two-group
comparison. Before normalization they stand out quite consistently in
200 vs 100 arrays.

3) We are not talking about genes, so most of the usual assumptions
that have to hold in order to apply normalization are not valid here.
For example, in this case we have a mass response, where 90% of the
spots have higher intensity in one group than in the other. So I
cannot use many of the most common normalization methods. Instead I
use a linear-model-based method which, in the past, on smaller
experiments, gave good results. But now even this seems to have too
drastic an impact on the data.
Besides the qqplot, or the boxplot of the slide intensities (the
latter gives no information at all in this case: the 300 boxes do not
line up either before or after normalization), could you please
suggest:

- some diagnostic tools, plots, or packages to assess the quality of
the normalization procedure;
- some plots or tools used for counter-examples, where normalization
is not only ineffective but even has a negative impact in terms of
data loss?
Right now the only thing I can think of is to convert my data matrix
into an ExpressionSet and to apply affy's pseudo-MA plot to the
various arrays, but I don't have high hopes ... :(
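For what it's worth, here is the kind of thing I mean, done by hand on the plain matrix rather than via an ExpressionSet. This is only a sketch on toy data: `Y` stands in for my log2 intensity matrix (peptides in rows, arrays in columns), and each array is compared against a pseudo-reference built as the row-wise median of all arrays.

```r
## Sketch, assuming Y is a log2 intensity matrix (peptides x arrays).
## Toy data used here in place of the real arrays.
set.seed(1)
Y <- matrix(rnorm(1000 * 300, mean = 8), nrow = 1000)

pseudo <- apply(Y, 1, median)   # pseudo-reference array (row medians)
j <- 1                          # index of the array to inspect
M <- Y[, j] - pseudo            # log-ratio vs the pseudo-reference
A <- (Y[, j] + pseudo) / 2      # average log intensity
plot(A, M, pch = ".", main = paste("Array", j, "vs pseudo-median"))
abline(h = 0, col = "red")
```

The same M and A values could be computed before and after normalization to see how much the normalization actually bends each array's cloud around M = 0.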
Any help will be highly appreciated.

Thanks and regards,
Giulio