May I know the biggest difference on performance between VNX/CLARiiON and Symmetrix from your perspective?
Well, where to begin...
- The VNX is an active-passive array: although both Storage Processors are being used, any given LUN is owned by only one SP. So for a single LUN you can only use 50% of the ports simultaneously.
- If you hit a storage processor bottleneck on a CLARiiON or VNX, it's not possible to add another SP. You can upgrade to a bigger model (data-in-place upgrade, but downtime is required), or you throw a new VNX next to it. For a VMAX you can add some more processing power: add another engine! (up to the model limits of course)
- FAST VP for the VNX/CX uses chunks of 1GB which are moved at most once every 24 hours. For a VMAX, the chunks are 768KB (quite a difference!) and they are moved multiple times per day (I can't remember for sure how often; I think once every hour or so).
No doubt a true Symm expert can keep expanding this list for a couple of hours.
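To put that chunk-size difference in numbers, here's a quick back-of-the-envelope sketch (the 1GB and 768KB figures are from the list above; the rest is just arithmetic):

```python
GB = 2**30
KB = 2**10

vnx_chunk = 1 * GB     # VNX/CX FAST VP relocation chunk
vmax_chunk = 768 * KB  # VMAX FAST VP extent

# How many VMAX-sized extents fit into one VNX relocation chunk?
ratio = vnx_chunk / vmax_chunk
print(f"One VNX chunk spans {ratio:.0f} VMAX extents")
```

So the VMAX can promote or demote data at a granularity roughly 1,365 times finer than the VNX, on top of relocating more often.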
Don't write off the VNX range if you need power: the bigger VNX systems are quite capable of providing a lot of performance. But if you need to scale up a lot, VMAX is something to think about...
The number of BEs (back-end buses) is also something to consider. The largest V-MAX can have 128 BE connections (which comes down to 64 buses, because all DAEs are dual-connected). The largest V-MAX can also have 128 FAs (front-end adapters, for host connectivity). The largest VNX has 5 expansion slots per storage processor (SPA and SPB need to contain the same expansion modules). Each module can hold 4 FA or 4 BE ports, so theoretically each SP can have 20 (5 x 4) ports, divided between FA and BE ports. A more specific list can be found in the PDF I'll mention later (I have some trouble finding the right one).
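The port math above can be sanity-checked in a few lines (these are the theoretical maxima quoted above, not a statement about any specific configuration):

```python
# Theoretical VNX port maximum from the slot counts above
slots_per_sp = 5       # expansion slots per storage processor
ports_per_module = 4   # each module carries 4 FA or 4 BE ports
sps = 2                # SPA and SPB, identically populated

vnx_max_ports = sps * slots_per_sp * ports_per_module  # FA and BE combined
print(vnx_max_ports)  # 40

# Largest V-MAX, per the figures above
vmax_fas = 128
vmax_be_connections = 128
vmax_effective_buses = vmax_be_connections // 2  # DAEs are dual-connected
print(vmax_fas, vmax_effective_buses)  # 128 64
```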
The largest VNX can have 1,000 disks, while a V-MAX can have 3,200 disks.
The amount of cache: the VNX 7500 has 96GB, while the V-MAX 40K can have up to 2TB of cache (8 x 256GB).
A complete list of the VNX internals can be found on: http://www.emc.com/collateral/software/specification-sheet/h8514-vnx-series-ss.pdf
Thank you for both. BTW, FAST VP storage tiering can cause performance degradation during production hours. My question is: does the FAST VP software prioritize I/O? I mean, when a tiering operation conflicts with normal I/O, will FAST VP temporarily pause data movement and let the normal I/O finish first?
Steve, it's always best to schedule pool relocations during off-hours. If you're in a 24/7 business then you've got a slight problem, and the window with the lowest overall system load will have to do.
You can tune the relocation rate a bit, so find the relocation rate that works best for your system without impacting production too much.
If you put it on high and your normal I/O is already stressing the system a lot, you will notice a performance impact.
Understood. Another question: the current storage tiering is block-based, but I was told file-level storage tiering would be more efficient. What do you think?
Well, if we look at the CX/VNX, you're promoting or demoting a 1GB chunk. That chunk could contain many files and folders, some of them very active and others not used for years. If you could do this tiering based on files or on smaller chunks, that would certainly allow for a smaller tier 1 and thus a cheaper system.
The problem is that if you make the chunks very small (or start tracking individual files), you'll need to keep track of many more chunks/items. That will inevitably increase load on the SP. So it's a bit of a balancing act... efficient tiering (small chunks) vs low SP load (larger chunks).
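A rough sketch of why chunk size drives SP load (the 100TB pool is a hypothetical figure, and assuming one tracking record per chunk, which is a simplification of what FAST VP actually stores):

```python
TB, GB, KB = 2**40, 2**30, 2**10

def tracked_chunks(pool_bytes, chunk_bytes):
    """Number of per-chunk 'temperature' records the SP would have to maintain."""
    return pool_bytes // chunk_bytes

pool = 100 * TB  # hypothetical pool size
for name, chunk in (("1 GB", 1 * GB), ("768 KB", 768 * KB), ("64 KB", 64 * KB)):
    print(f"{name:>7} chunks -> {tracked_chunks(pool, chunk):>13,} records")
```

Going from 1GB to 768KB chunks already means tracking roughly 1,365 times as many records; go much finer and the bookkeeping explodes, which is exactly the SP-load side of the balancing act.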
My thoughts: iSCSI relies on TCP/IP, which has much more overhead than FC. So for an IOPS-intensive app (small, random I/O), iSCSI performance will be worse than FC, because each small I/O needs to be packaged into corresponding IP packets, which is really inefficient. FC, on the other hand, has lower overhead under the same I/O pattern.
But for a bandwidth-intensive app (large, sequential I/O), iSCSI may even perform better than FC because it has more bandwidth (10Gbps vs. 8Gbps), especially when end-to-end jumbo frame support is enabled.
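That overhead argument can be illustrated with a simplified model (header sizes are the standard Ethernet/IP/TCP/iSCSI and FC framing figures; the model ignores TCP acks, interrupt handling and offload engines, so treat it as an illustration, not a benchmark):

```python
import math

def iscsi_overhead(io_bytes, mtu=1500):
    """Fraction of wire bytes spent on headers for one iSCSI I/O (simplified)."""
    per_pkt = 14 + 4 + 20 + 20            # Ethernet + FCS + IP + TCP headers
    payload_per_pkt = mtu - 40            # MTU minus IP and TCP headers
    pkts = math.ceil(io_bytes / payload_per_pkt)
    bhs = 48                              # one iSCSI Basic Header Segment per PDU
    return (pkts * per_pkt + bhs) / io_bytes

def fc_overhead(io_bytes):
    """Fraction of wire bytes spent on framing for one FC I/O (simplified)."""
    frames = math.ceil(io_bytes / 2048)   # max 2048-byte data field per frame
    return frames * 36 / io_bytes         # SOF + header + CRC + EOF per frame

print(f"4 KB random I/O:   iSCSI {iscsi_overhead(4096):.1%} vs FC {fc_overhead(4096):.1%}")
print(f"256 KB sequential: iSCSI {iscsi_overhead(262144, mtu=9000):.1%} vs FC {fc_overhead(262144):.1%}")
```

With standard frames and small I/O, iSCSI spends several times more of the wire on headers than FC; with jumbo frames and large sequential I/O the per-I/O overhead drops below FC's, which matches the intuition above.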