Hello,
I am also running into memory problems with bfconvert and came across this thread while looking for solutions. I am setting BF_MAX_MEM to 14G on a node with 16G of RAM (figuring 2G should be enough for the OS), and my input file is quite large (~700G).

Is there an option to make bfconvert more I/O-intensive rather than memory-intensive? That is, instead of trying to hold so much in memory, could one designate a large tmp directory to which any intermediate results are written?

My use case is extracting single z-stacks (one series, one time point) from a large file with series, time, and Z dimensions. Despite the large input size, I would not expect the memory requirement to be extraordinary: in principle the tool only needs to seek to the appropriate offsets in the file and write out the metadata and pixel data. The z-stacks I am extracting are themselves only about 340M in .ome.tif format. Nevertheless, I always exceed the step memory limit on the HPC system I am using. The specific error I see is the following:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
    at java.lang.StringCoding.decode(StringCoding.java:193)
    at java.lang.String.<init>(String.java:426)
    at java.io.ByteArrayOutputStream.toString(ByteArrayOutputStream.java:245)
    at loci.common.xml.XMLTools.dumpXML(XMLTools.java:252)
    at loci.common.xml.XMLTools.dumpXML(XMLTools.java:234)
    at ome.xml.meta.AbstractOMEXMLMetadata.dumpXML(AbstractOMEXMLMetadata.java:112)
    at ome.xml.meta.OMEXMLMetadataImpl.dumpXML(OMEXMLMetadataImpl.java:105)
    at loci.formats.services.OMEXMLServiceImpl.getOMEXML(OMEXMLServiceImpl.java:465)
    at loci.formats.tools.ImageConverter.testConvert(ImageConverter.java:414)
    at loci.formats.tools.ImageConverter.main(ImageConverter.java:880)
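For context, my invocation looks roughly like this (file names are placeholders, and the series/timepoint indices are just illustrative; I select the single z-stack with the -series and -timepoint options):

    # cap the JVM heap via the launcher's environment variable
    export BF_MAX_MEM=14g
    # extract one series / one time point as an OME-TIFF
    bfconvert -series 0 -timepoint 0 input_file output.ome.tif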
Any suggestions? Thanks.
PS: If it is better that this thread not be hijacked, let me know and I can move this message to a new thread.