Hi,
My group are planning to conduct a 15k-compound high-content biology screen and I'm looking into options for analysing the image data. We'd probably just be doing a simple cell count and maybe cell size to start with. Although we have commercial software to do this (Definiens), I'm not sure how well our set-up will scale given the hardware it's on. Another option is to use our institute's NGS compute cluster and analyse the data with the Bioconductor package EBImage, which I think would scale much better.
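To give a sense of what I mean by "simple", something along these lines is roughly what I'd hope to end up with per image (just a sketch using standard EBImage segmentation functions — the file name and threshold parameters are placeholders I'd need to tune for our assay):

```r
library(EBImage)

## hypothetical nuclei-channel image; parameters below are illustrative only
img  <- readImage("well_A01.tif")
mask <- thresh(img, w = 15, h = 15, offset = 0.05)   # adaptive thresholding
mask <- opening(mask, makeBrush(5, shape = "disc"))  # remove small speckle
mask <- fillHull(mask)                               # close holes in nuclei
nuclei <- bwlabel(mask)                              # label connected objects

cellCount <- max(nuclei)                             # objects in this field
shape     <- computeFeatures.shape(nuclei)           # per-object area, perimeter, ...
cellSize  <- shape[, "s.area"]
```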
In terms of learning resources there's the package vignette and the following tutorial from CSAMA2015, but are there any additional workflow papers or other resources that might be helpful to me in setting this up?
http://www.bioconductor.org/help/course-materials/2015/BioC2015/BioC2015Oles.html
Thanks,
Phil
Thanks Wolfgang, this is exactly the sort of information I was hoping for.
My main issue to start with is how to get the image data out of the commercial software (PerkinElmer Opera Phenix) in the first place and into an open format. I naively assumed it would just be a bunch of images, but it seems not. The Open Microscopy Environment project seems to have lots of useful tools, though, so I'm hopeful I'll be able to work my way through this.
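If it helps, the Bio-Formats command-line tools from the OME project look like one route for the conversion step — something like the following, assuming the measurement is described by an Index.idx.xml file as Phenix exports seem to be (file names here are placeholders):

```shell
# bfconvert ships with the Bio-Formats command-line tools (bftools)
# input/output names below are illustrative placeholders
./bfconvert -series 0 Index.idx.xml well_0.ome.tiff
```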
In that case RBioFormats might prove helpful. It provides an R interface to the OME Bio-Formats image reader, which should allow you to load the data directly into R without the need for an intermediate format conversion.
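For example, something like this (an untested sketch — the file name is a placeholder; `read.image()` returns an EBImage-compatible Image object, so the result can go straight into an EBImage segmentation pipeline):

```r
library(RBioFormats)

## placeholder path to the measurement's index file
meta <- read.metadata("Index.idx.xml")        # inspect series, channels, etc.
img  <- read.image("Index.idx.xml", series = 1)
```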