Unsolved
236 Posts
0
1594
Pools spanning multiple DAEs
Hi All,
Is it supported, a best practice, or otherwise recommended to have a RAID set (for example, RAID 6 6+2) that is part of a storage pool for either block or file span across two DAEs, whether those DAEs are on the same bus or on different buses?
Appreciate your input.
victory_is_mine
236 Posts
0
October 24th, 2014 14:00
Thank you guys, appreciate it.
brettesinclair
2 Intern
715 Posts
0
October 24th, 2014 14:00
Go for it, nothing wrong with that.
(good post by raid-zero)
victory_is_mine
236 Posts
0
October 24th, 2014 14:00
I did consult the VNX2 best practices doc that you mentioned. It does not say that spanning the RAID set across two DAEs is against best practice; it says that you can do it, and as you mentioned, they call it vertical positioning. So there would be nothing wrong with having 3 drives on bus 1 enclosure 1 and 5 drives on bus 0 enclosure 2 in the same RAID 6 set, would there?
rzero
58 Posts
1
October 24th, 2014 14:00
There are differing best practices that have evolved throughout the life of the Clariion and VNX. Here is a good reference for a lot of things.
VNX2:https://www.emc.com/collateral/software/white-papers/h10938-vnx-best-practices-wp.pdf
VNX1:https://www.emc.com/collateral/white-papers/h12682-vnx-best-practices-wp.pdf
Keeping disks on the same bus (regardless of their DAE location) is referred to as "horizontal" positioning, while splitting them across buses is "vertical" positioning.
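To make the distinction concrete, here is a toy sketch (a hypothetical helper, not any EMC tool) that classifies a RAID group's placement from its drive locations using exactly that definition: one bus means horizontal, multiple buses means vertical.

```python
# Classify a RAID group's drive placement as "horizontal" (all drives
# on one bus, regardless of DAE) or "vertical" (split across buses).
# Drive locations are (bus, enclosure, slot) tuples. Illustrative only.

def placement(drives):
    buses = {bus for bus, enclosure, slot in drives}
    return "horizontal" if len(buses) == 1 else "vertical"

# Example: a RAID 6 (6+2) set with 3 drives on bus 1 enclosure 1
# and 5 drives on bus 0 enclosure 2 is vertically positioned.
raid6 = [(1, 1, slot) for slot in range(3)] + [(0, 2, slot) for slot in range(5)]
print(placement(raid6))  # vertical
```

Note that DAE (enclosure) number plays no part in the classification; only the bus does.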
People have some differing opinions on what is best depending on how long they've been working with the technology since it has evolved so much. In general:
rzero
58 Posts
0
October 24th, 2014 15:00
It is a little confusing because they talk about drive locality, but if you look closely, that section is actually talking about traditional RAID groups for Classic LUNs. I notice some language got taken out as well... check out the VNX1 Best Practices doc and look at the last sentence in the same section:
"Drive location selection does not apply to storage pools and therefore is not a consideration."
No reason to think this same logic doesn't apply to VNX2, especially considering once again that hot sparing is now permanent.
As noted I try to maintain the traditional RG rules on construction and organization where I can, especially when dealing with a new array simply because I usually have that luxury. But in the end this is really just to satisfy my own OCD instead of producing a performance or reliability benefit.
victory_is_mine
236 Posts
0
October 24th, 2014 21:00
I have a few off-topic questions: I've heard it is recommended to have no more than 5 SSDs for FAST VP per DAE. Is that true? I cannot find it in the best practices docs.
Will I benefit from enabling FAST Cache on the file LUNs?
brettesinclair
2 Intern
715 Posts
0
October 25th, 2014 01:00
Are you referring to the VNX2 series?
I'm not aware of any specific limitation; it's just recommended to spread them over the buses.
Are you concerned about saturating the bus with IO? I don't think that is a concern anymore.
Keep in mind that the best practice is still RAID 5 (4+1) for the extreme performance tier in FAST VP.
victory_is_mine
236 Posts
0
October 26th, 2014 15:00
Yes, I am referring to VNX2. I thought, for example, that if you put 15 EFD SSDs for FAST VP in the same DAE, you would not be able to take advantage of the IOPS of these drives and would saturate the bus. I cannot find any documented proof in any EMC documentation, though.
rzero
58 Posts
0
October 27th, 2014 10:00
I haven't heard of 5 but I have heard of 8 before. I think this is a case where someone has either mixed up recommendations or there was a poor job communicating the requirements.
Take a look at https://support.emc.com/kb/73184 "Fast Cache Configuration Best Practices." Note the difference between the VNX1 and VNX2 list of best practices.
The main thing to see here is the recommendation of no more than 8 FAST cache disks per bus. I think this was sometimes communicated as "no more than 8 FAST cache disks on the DPE before spreading them around" and then later turned into "no more than 8 EFD per DAE."
I certainly would welcome correction if I'm wrong but I don't see any reason that having 10 high activity EFD drives on 1_0 is worse than having 5 high activity drives on 1_0 and 5 on 1_1. The bus itself is the point of saturation, and if it gets saturated anywhere things are going to get ugly fast. It is actually recommended to keep the higher performing drives on the lowest numbered DAE as well, which would conflict with the "X drives per DAE" recommendation.
Then we can consider a FAST cache EFD drive vs a FAST VP EFD drive. While it is true that the FAST VP drive has a lower threshold of performance, and it is true that generally FAST cache drives are the hottest things in the system, is it impossible to saturate a bus with FAST VP EFD drives? Certainly not. It would be silly for example to divide up 2 FAST cache drives between Bus 0 and Bus 1, and then stack up 50 FAST VP EFD drives on Bus 0 (5 per shelf so we are "best practice"-ing!) and none on Bus 1.
With a VNX2 the real thing to keep in mind, in my opinion, is balance and bus saturation. You want to keep your workload (and ergo typically the drives) split equally between buses. Then you want to monitor your bus utilization to make sure you aren't overwhelming it. But keep in mind the buses are 4 x 6Gbps SAS so it takes either a lot of disks or a really heavy bandwidth workload to fill them up.
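A rough back-of-envelope illustrates why it takes a lot of disks: a 6 Gb/s SAS lane carries about 600 MB/s of payload after 8b/10b encoding, so a 4-lane bus is roughly 2400 MB/s. The per-drive figure below is an assumed workload number for illustration, not a spec.

```python
# Back-of-envelope: how many EFDs of a given sustained bandwidth it
# takes to fill one VNX2 backend bus (4 x 6 Gb/s SAS lanes).

SAS_LANE_GBPS = 6          # raw line rate per lane
ENCODING_EFFICIENCY = 0.8  # 8b/10b encoding on 6 Gb/s SAS
LANES_PER_BUS = 4

bus_mb_s = SAS_LANE_GBPS * 1000 / 8 * ENCODING_EFFICIENCY * LANES_PER_BUS
print(f"usable bus bandwidth ~{bus_mb_s:.0f} MB/s")  # ~2400 MB/s

drive_mb_s = 250  # assumed sustained MB/s per EFD under a heavy workload
drives_to_saturate = bus_mb_s / drive_mb_s
print(f"~{drives_to_saturate:.0f} such drives to saturate one bus")
```

The point of the sketch is the shape of the math, not the exact numbers: for bandwidth-heavy workloads roughly ten busy EFDs can fill a bus, and it does not matter which DAEs on that bus they sit in, which is why balance across buses matters more than a per-DAE drive count.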
It depends on what kind of activity is happening on the file side and what your needs are. If you are doing performance-related work on the file side (like ESX datastores or HPC over NFS), then you may benefit from enabling it. If you are hosting a small number of CIFS shares or user home directories, you may get more bang for the buck by leaving it enabled for your application or datastore LUNs instead.
It isn't bad to enable it on the file side, it is just a question of whether you will see a benefit from it in your environment.
brettesinclair
2 Intern
715 Posts
0
October 27th, 2014 15:00
In the absence of any official guidance or recommendation, I wonder whether it's simply not an issue anymore with the updated VNX2 backend?
Especially when you consider that the VNX-F is available and comes standard with 25-slot DAEs filled with 400GB eMLC drives.
Reading the spec sheet, it doesn't sound overly different from the standard VNX DAE hardware and bus specs.
It would be nice to have an EMC'r give some pseudo-official info on the max SSDs per DAE.
victory_is_mine
236 Posts
0
October 29th, 2014 11:00
Thank you All for the input, good discussion. I will go hunt down an EMC'r to shed some more light on the FAST VP drives.