Unsolved
1 Rookie
62 Posts
February 20th, 2022 10:00
H500 Disk Pool Size
I found out that on our current cluster (4x H500 nodes) the maximum write throughput I can get with a "streaming" access pattern is around 330MB/s for a single large file written by a single thread over a single channel (SMB3 without Multichannel). One node can max out a 10GbE link in writes if it is written to by multiple threads (multiple files) at the same time, and it does about 660MB/s on a single file when written with SMB Multichannel.
Now my assumption is that the single-file, single-thread throughput is limited by the maximum stripe width the cluster can use with 4 nodes (and therefore by the maximum number of disks used to store the file). We are about to expand the cluster to 8 nodes, which should widen the striping considerably.
But what about the maximum number of concurrently used disks per file in an 8-node versus a 4-node Gen6 cluster? Does the disk pool size expand when we grow from 4 to 8 nodes? If not, the wider striping would not increase throughput even though the maximum stripe layout goes from 6+2 to 14+2...
Or am I getting it totally wrong?
If my assumption is right (and the disk pool size will not be the limiting factor in an 8-node cluster), I should see roughly a doubling or a little more (14 vs. 6 data stripes) in throughput in the single-file, single-thread scenario. Correct?
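Back-of-the-envelope, assuming the bottleneck really is the stripe width and the per-stripe rate stays roughly constant after the expansion:

330MB/s / 6 data stripes ≈ 55MB/s per stripe
14 data stripes x 55MB/s ≈ 770MB/s for a single file, single thread

That would be a bit more than double - provided nothing else (single SMB channel, client NIC, per-thread CPU) becomes the new bottleneck first.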



DELL-Josh Cr
Moderator
9.4K Posts
February 21st, 2022 12:00
Hi,
What kind of drives are you using? More drives and controllers from more nodes should help, but it is possible something else is the bottleneck. https://dell.to/3BFlsSl
CendresMetaux
1 Rookie
62 Posts
February 22nd, 2022 02:00
We are using 2TB drives (60 drives total at the moment with 4 nodes) and one 1.6TB SSD per node as L3 cache.
Scenario:
VEEAM Backup uses the Isilon cluster as a backup repository.
Multiple concurrent backup streams to the Isilon, distributed across multiple nodes via SmartConnect, max out the cluster ingest rate at about 1.7-2.0GB/s write throughput depending on chunk/block size (2GB/s is quite a feat for 4 nodes if you ask me!).
Multiple concurrent backup streams from VEEAM to a single Isilon node max out the node NIC at 10GbE (dual NIC failover config) with 900-1100MB/s write throughput.
A single backup stream from VEEAM to a single Isilon node maxes out at about 350MB/s.
Tools like Diskspd, FIO, etc. show similar speeds as long as there is no thread concurrency. When enabling multi-threaded write concurrency to the same file (on the Isilon), speeds go up to about 660-700MB/s per file. The same happens, with only limited gain, when adding pressure by increasing the outstanding IOs in the test patterns...
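For reference, the single- vs. multi-thread pattern I'm comparing looks roughly like this with Diskspd (share path and file size are just placeholders, run from a test client against the SMB share):

diskspd -c50G -d60 -w100 -b1M -t1 -o1 -Sh \\isilon\backup\testfile.dat
diskspd -c50G -d60 -w100 -b1M -t4 -o4 -Sh \\isilon\backup\testfile.dat

The first (1 thread, 1 outstanding IO) lands around the ~350MB/s mark, the second (4 threads against the same file) in the 660-700MB/s range mentioned above.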
Is this to be expected for the "streaming" access pattern?
Interestingly, when pushing files via Windows Explorer (I know, I know, that's not testing...) we get up to 900MB/s throughput - and we have SMB CA active, which in theory should disable all the Explorer cheats like lazy writes, early commit, etc. and give somewhat useful test results that, in my belief, should be similar to other single-threaded, low outstanding-IO test results...
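For completeness: CA is a per-share setting, and if I remember the OneFS CLI correctly, something like the following on the cluster should confirm that the continuously-available flag is really set on the share in question (the share name here is just a placeholder):

isi smb shares view backup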
DELL-Josh Cr
Moderator
9.4K Posts
February 22nd, 2022 09:00
It sounds like just this one scenario is slower than the rest of the write operations. I was not able to find any specific expected performance numbers. When you expand to 8 nodes the cluster should rebalance the data and use more drives and nodes. See page 9: https://dell.to/3HfpbqW
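Once the rebalance has finished, one way to see how wide a single file is actually laid out is to check its layout on the cluster CLI. If I recall correctly it is something like this (the path is just an example):

isi get -DD /ifs/backup/somefile.vbk

That should dump the protection groups and which drives hold the file's blocks, so you can compare a file written before the expansion with one written after.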
CendresMetaux
1 Rookie
62 Posts
February 22nd, 2022 10:00
Well, then we'll see in about a month when the additional nodes are set up and the rebalance is done.
In the meantime I opened a case with Veeam to have a performance investigation from their side as well... Maybe it's an issue/config between Veeam and Isilon.