Hi Mike and others,
I want to use apeglm for effect-size shrinkage in a (normal) linear regression setting, and I'm interested in shrinking almost all of the model coefficients. Here's a toy example:
library("apeglm")
library("DEP")
library("limma")
library("SummarizedExperiment")
library("dplyr")
# load the DEP example data, drop decoy hits and contaminants
data <- UbiLength %>%
  filter(Reverse != "+", Potential.contaminant != "+") %>%
  make_unique("Gene.names", "Protein.IDs", delim = ";")
LFQ_columns <- grep("LFQ.", colnames(data))
se <- make_se(data, LFQ_columns, UbiLength_ExpDesign)
# data matrix (complete cases only) and model matrix
Y <- assay(se)[!rowAnyMissings(assay(se)), ]
X <- model.matrix(~condition, colData(se))
# per-protein linear models via limma
fit <- lmFit(Y, X)
coef <- fit$coefficients
# standard errors of the coefficients: unscaled stdev times residual sd
coef_error <- fit$stdev.unscaled * fit$sigma
# normal log-likelihood; param is the per-row residual sd
log_lik_norm <- function(y, x, beta, param, offset) {
  dnorm(y, mean = x %*% beta, sd = param, log = TRUE)
}
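# optional sanity check (my own addition, not from the apeglm docs):
# evaluating the log-likelihood for one protein at its MLE should give
# one finite log-density per sample
ll_check <- log_lik_norm(Y[1, ], X, coef[1, ], rowSds(Y)[1], offset = NULL)
stopifnot(all(is.finite(ll_check)), length(ll_check) == ncol(Y))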
# apeglm: shrink the second coefficient
coef_nr <- 2
# MLE and its standard error, converted from log2 to the natural-log scale
mle <- log(2) * cbind(coef[, coef_nr], coef_error[, coef_nr])
res <- apeglm(Y = Y, x = X, log.lik = log_lik_norm, param = rowSds(Y),
              mle = mle, coef = coef_nr, threshold = 1.2)
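For completeness, this is how I'm reading the results; the element names below are my reading of the apeglm man page, so please correct me if I have them wrong:
# shrunken estimates and error-rate summaries (names per my reading of ?apeglm)
head(res$map)     # MAP (shrunken) coefficient estimates
head(res$sd)      # posterior SD of the estimates
head(res$svalue)  # s-values
head(res$fsr)     # false sign rate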
My questions are:
- Is my setup for the normal log-likelihood correct?
- Is it possible to shrink multiple coefficients simultaneously and obtain the corresponding s-values and false sign rates (FSR)? In my application, Y has dimension 1500 x 50000 and X has 200 columns, which is why I'd rather not run the model separately for each coefficient (see the loop sketch below for what I'm trying to avoid).
I also had a look at ashr, but testing against a threshold is not implemented there.
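For reference, here's a sketch of the per-coefficient loop I'd like to avoid; coefs_to_shrink is just an illustrative name, and I'm assuming the single-coefficient call above is correct:
# sketch: one apeglm run per non-intercept coefficient (what I'd like to avoid)
coefs_to_shrink <- 2:ncol(X)
res_list <- lapply(coefs_to_shrink, function(k) {
  mle_k <- log(2) * cbind(coef[, k], coef_error[, k])
  apeglm(Y = Y, x = X, log.lik = log_lik_norm, param = rowSds(Y),
         mle = mle_k, coef = k, threshold = 1.2)
})
names(res_list) <- colnames(X)[coefs_to_shrink]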
Best, Frederik