DELL-Josh Cr
Moderator • 9.5K Posts
February 21st, 2022 12:00
Hi,
What kind of drives are you using? More drives and controllers from more nodes should help, but it is possible something else is the bottleneck. https://dell.to/3BFlsSl
CendresMetaux
1 Rookie • 62 Posts
February 22nd, 2022 02:00
We are using 2TB drives (60 drives total atm. with 4 nodes) and one 1.6TB SSD per node as L3.
Scenario:
VEEAM Backup uses the Isilon Cluster as backup repository.
Multiple concurrent backup streams to the Isilon, distributed to multiple nodes via SC, max out the Isilon cluster ingest rate at about 1.7-2.0GB/s write throughput depending on chunk/block size (2GB/s is quite a feat for 4 nodes if you ask me!).
Multiple concurrent backup streams from VEEAM to a single Isilon node maxes out the node NIC at 10GbE (dual NIC failover config) with 900-1100MB/s write throughput.
One single backup stream from VEEAM to a single Isilon node maxes out at about 350MB/s.
Tools like Diskspd, FIO, etc. show similar speeds, as long as no thread concurrency is involved. With multi-threaded concurrent writes to the same file (on Isilon), speeds go up to about 660-700MB/s per file. Gains are similarly limited when adding pressure by increasing the number of outstanding IOs in the test patterns...
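For context, the multi-writer pattern those tools exercise can be sketched in plain Python: several threads each write a non-overlapping region of the same pre-sized file concurrently. The path, region size, block size, and thread count below are illustrative placeholders (not the actual test parameters), and this only demonstrates the access pattern, not a benchmark.

```python
import os
import threading

# Hypothetical local path; on the cluster this would be a file on the SMB share.
PATH = "concurrency_sketch.bin"
THREADS = 4                 # concurrent writers to the same file
REGION = 4 * 1024 * 1024    # bytes per thread (tiny compared to a real test)
BLOCK = 512 * 1024          # per-write block size

# Pre-size the file so every thread can seek straight to its own region.
with open(PATH, "wb") as f:
    f.truncate(THREADS * REGION)

def writer(idx: int) -> None:
    """Sequentially fill region `idx` of the shared file in BLOCK-sized writes."""
    buf = bytes(BLOCK)
    with open(PATH, "r+b") as f:
        f.seek(idx * REGION)
        remaining = REGION
        while remaining > 0:
            chunk = buf if remaining >= BLOCK else buf[:remaining]
            remaining -= f.write(chunk)

threads = [threading.Thread(target=writer, args=(i,)) for i in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In Diskspd terms this roughly corresponds to raising the thread count per target (`-t`), which is separate from raising outstanding IOs per thread (`-o`).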
Is this to be expected for "streaming" access pattern?
Interestingly, when pushing files via Windows Explorer (I know, I know, that's not testing...) we get up to 900MB/s throughput - and we have SMB CA active, which in theory should disable Explorer tricks like lazy writes, early commit, etc. and give somewhat useful results that, in my belief, should be similar to other single-threaded, low outstanding-IO test results...
DELL-Josh Cr
Moderator • 9.5K Posts
February 22nd, 2022 09:00
It sounds like just this one scenario is slower than the other write operations. I was not able to find any specific expected performance numbers. When you expand to 8 nodes, the cluster should rebalance the data and use more drives and nodes. See page 9: https://dell.to/3HfpbqW
CendresMetaux
1 Rookie • 62 Posts
February 22nd, 2022 10:00
Well, then we'll see in about a month when the additional nodes are set up and the rebalance is done.
In the meantime I opened a case with Veeam to have a performance investigation from their side as well... Maybe it's an issue/config problem between Veeam and Isilon.