What is the maximum size of a blob stored on a Centera cluster?
Page 29 of the API Reference guide indicates 100 GB, with the note "The EMC-recommended limit".
Does this mean the limit can be increased? I didn't find a parameter for it in the API docs, so would it require an EMC intervention on the cluster?
I'm asking because I have to store a 200 GB file (yes, two hundred gigabytes). I know I could split it, maybe into 50 pieces of 4 GB each, or 4 pieces of 50 GB each, and use blob slicing on each piece. But if possible, I'd like to avoid special treatment for this one file.
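For what it's worth, the splitting itself is mechanical. Here is a minimal sketch in plain Python (no Centera SDK calls; the chunk naming scheme is just illustrative):

```python
import os

def split_file(path, chunk_size, out_dir):
    """Split the file at `path` into sequentially numbered
    chunks of at most `chunk_size` bytes each."""
    os.makedirs(out_dir, exist_ok=True)
    chunk_paths = []
    with open(path, "rb") as src:
        index = 0
        while True:
            data = src.read(chunk_size)
            if not data:
                break
            # Zero-padded names keep the chunks in order when listed.
            chunk_path = os.path.join(out_dir, "chunk_%04d" % index)
            with open(chunk_path, "wb") as dst:
                dst.write(data)
            chunk_paths.append(chunk_path)
            index += 1
    return chunk_paths

# For 4 pieces of 50 GB you'd pass chunk_size = 50 * 2**30;
# in practice you'd read/write in smaller buffers rather than
# one read() per chunk, but the bookkeeping is the same.
```

The same loop, run in reverse with concatenation, reassembles the original file after the pieces are read back.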
100 GB is the maximum file size we cover in our QA tests, so it is the maximum we will support. You can in fact go higher, but EMC would not support this if you ran into problems in the field.
If you were looking for the best parallel write performance with the FPBlob_WritePartial call, you won't see any improvement beyond 8 chunks; 50 parallel threads would more than likely interfere with other I/O activity.
If you were writing them sequentially in one thread, the chunk count probably wouldn't matter. A smaller number seems easier to manage, although if you hit an error while ingesting one chunk, you'd have to restart a larger portion of the transfer.
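To make that tradeoff concrete, here is a sketch in Python of the driver logic (not the Centera SDK; `write_chunk` is a hypothetical stand-in for whatever call actually stores a piece on the cluster). It bounds parallelism at 8 workers, per the advice above, and skips chunks that a previous interrupted run already completed, so an error only costs you the unfinished pieces:

```python
from concurrent.futures import ThreadPoolExecutor

def upload_chunks(chunks, write_chunk, max_workers=8, done=None):
    """Upload `chunks` with at most `max_workers` parallel writers.

    `write_chunk` is a caller-supplied function that stores one piece
    (hypothetical here; in real code it would wrap the SDK call).
    `done` is a set of chunk indexes completed by an earlier run; it is
    mutated in place, so the caller keeps the progress record even if
    a write fails and this function raises.
    """
    if done is None:
        done = set()
    pending = [(i, c) for i, c in enumerate(chunks) if i not in done]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(write_chunk, c): i for i, c in pending}
        for fut, i in futures.items():
            fut.result()  # re-raise any failure; earlier successes stay in `done`
            done.add(i)
    return done
```

With 4 big pieces a failure means redoing up to a quarter of the transfer; with 50 small ones, at most a fiftieth, at the cost of more bookkeeping.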