I am trying to run this package on a BAM file containing 90 million paired-end reads, on a Linux node with 16 cores and 256 GB of RAM. The pipeline keeps failing because it needs more memory than the node has (267 GB on the most recent failure), and it fails at the step that creates the shifted BAM file. Is there anything I can do to let the package run within the available memory, such as splitting the analysis up by chromosome and combining the output? Any other recommendations for shifting the BAM file and splitting the NFR and mono-/di-/tri-nucleosome reads would also be appreciated. Thanks!
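In case it clarifies the workaround I have in mind, here is a rough sketch of the per-chromosome split: restrict the BAM to one chromosome at a time with samtools, run the shifting step on each piece, then merge the shifted outputs. This is only a sketch, not something I have running; the file names and chromosome list are placeholders, and it assumes a coordinate-sorted, indexed BAM with samtools on the PATH.

```python
# Sketch of a per-chromosome split/merge around the memory-heavy shifting step.
# Assumes "input.bam" is coordinate-sorted and indexed (`samtools index input.bam`),
# so that `samtools view input.bam <chrom>` can extract one chromosome's reads.

from typing import List


def split_commands(bam: str, chroms: List[str]) -> List[List[str]]:
    """One `samtools view` invocation per chromosome, each writing a
    smaller BAM that should fit in memory for the shifting step."""
    return [
        ["samtools", "view", "-b", "-o", f"{chrom}.bam", bam, chrom]
        for chrom in chroms
    ]


def merge_command(out_bam: str, shifted_parts: List[str]) -> List[str]:
    """Recombine the per-chromosome shifted BAMs into one output."""
    return ["samtools", "merge", "-f", out_bam, *shifted_parts]


if __name__ == "__main__":
    # Chromosome names here are placeholders; in practice they would come
    # from `samtools idxstats input.bam`.
    for cmd in split_commands("input.bam", ["chr1", "chr2"]):
        print(" ".join(cmd))
    print(" ".join(merge_command("shifted.bam", ["chr1.shifted.bam", "chr2.shifted.bam"])))
```

The idea is that each per-chromosome BAM goes through the shifting and NFR/nucleosome splitting independently, and the merge runs once at the end.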
Best regards,
Michael