What are the EMC best practices for the following?
1- What is the best-practice TDEV size? Is it 64GB?
2- Minimum of 8 hypers per disk drive; what is the maximum, and what is the best practice (in a VP environment)?
3- Is it better to create a separate thin pool for RecoverPoint journals (to segregate or not)?
4- Is it better to create one disk group per thin pool? For example, should I create one DG on 300GB FC 15K disks and build two thin pools on that DG, one for R1 and the other for R6 (6+2)? Or should I create two separate DGs? (That would make it easier to choose hyper sizes so that each drive gets at least 8 hypers.) Again, to segregate or not?
5- Is it better to do wide FA consolidation, e.g. 80 hosts across 8 engines (where I may hit FA mapping limits), or 10 hosts per engine? (Assume all hosts generate the same number of IOs and run the same application.) Again, to segregate or not?
1. TDEV size is flexible up to 240GB right now, so feel free to make it any size you like. There are IO-concurrency considerations in SRDF/S environments, so it can be advantageous to create metadevices. Provided your host does not overwhelm the queue depth on a single LUN, any size is good.
2. Correct. There are internal EMC tools to size TDATS based on drive size and RAID type.
3. There is no good technical reason to split drives of the same tier into multiple pools in my view.
4. Not sure why you would create two RAID types on the same disk technology?
5. Go as wide as you can, while allowing all of the capacity to be addressed.
1- 240GB is the limit, not the best practice. So which size makes sense for a single TDEV (i.e. the meta member size)?
2- What is the name of this internal tool?
3- What about isolating replication IOs from data IOs? And what about the case of a pool failure?
4- It is a customer requirement to have FC RAID 1, FC RAID 6, and SATA RAID 6.
5- Going wide, I will hit the mapping limits on the FA: the maximum allowed is 4096 devices per CPU (both ports), including meta members.
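As a back-of-envelope check, the FA mapping math looks like this. The per-host LUN count and meta width below are illustrative assumptions; the only figure taken from the thread is the 4096-devices-per-CPU limit:

```python
# Hypothetical FA mapping sanity check. Each director CPU can map at most
# 4096 devices across both ports, and every meta member counts toward it.
# luns_per_host and meta_members are assumed values for illustration.

FA_CPU_DEVICE_LIMIT = 4096  # max mapped devices per FA CPU (both ports)

def mapped_devices(hosts: int, luns_per_host: int, meta_members: int) -> int:
    """Devices consumed on one FA CPU if all hosts are masked to it."""
    return hosts * luns_per_host * meta_members

# 80 hosts, 10 LUNs each, 8-way metas -> 6400 devices: over the limit.
wide = mapped_devices(hosts=80, luns_per_host=10, meta_members=8)
print(wide, wide <= FA_CPU_DEVICE_LIMIT)    # 6400 False

# 10 hosts with the same layout -> 800 devices: fits comfortably.
narrow = mapped_devices(hosts=10, luns_per_host=10, meta_members=8)
print(narrow, narrow <= FA_CPU_DEVICE_LIMIT)  # 800 True
```

The same arithmetic also shows why fewer, larger volumes help: halving the LUN count or the meta width halves the mapped-device consumption on that CPU.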
1. Any device requiring high performance should be a meta, whatever the size.
2. There should be 8 TDATs per disk per pool, or the minimum number needed to support the capacity.
3. If you are worried about a pool failure, you should have some replication to another box, such as RDF.
4. Why R6 on FC?
5. Use fewer, larger volumes?
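To make the "8 TDATs per drive" rule concrete, here is a rough sizing sketch. The usable drive capacity and pool sizes are illustrative assumptions; the internal EMC sizing tools mentioned above also account for vault, spares, and formatting overhead:

```python
import math

# Rough hyper/TDAT sizing sketch. A ~300GB 15K FC drive is assumed to
# yield ~270GB usable; all figures here are illustrative, not official.

def hyper_size_gb(usable_drive_gb: float, hypers_per_drive: int = 8) -> float:
    """Size of each hyper if the drive is cut into N hypers."""
    return usable_drive_gb / hypers_per_drive

def min_drives(pool_usable_gb: float, usable_drive_gb: float,
               data_fraction: float) -> int:
    """Drives needed to reach a pool's usable capacity.
    data_fraction: e.g. 0.5 for RAID 1, 6/8 for RAID 6 (6+2)."""
    return math.ceil(pool_usable_gb / (usable_drive_gb * data_fraction))

print(hyper_size_gb(270))              # 33.75 GB hypers
print(min_drives(10_000, 270, 0.5))    # RAID 1 pool: 75 drives
print(min_drives(10_000, 270, 6 / 8))  # RAID 6 (6+2) pool: 50 drives
```

This is a capacity-only view; the performance side (how many drives the workload's IOPS require) is usually the binding constraint, as the RAID 6 discussion below shows.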
1- For meta member counts, do they still need to be a power of 4, or does it not matter any more? I still like to use 8-, 12-, or 16-way metas, and I consider a 32-way meta very wide, while others consider it normal. What do you think?
As for R6 on FC: in a VP environment with wide striping, where a TDEV is striped in round-robin fashion among all TDATs, R5 looks somewhat riskier, and R6 provides more protection.
2- "Larger" and "fewer" are relative terms. I consider a 128GB TDEV huge, while many consider it a normal TDEV size.
Powers of 2 aren't a factor with VP devices.
R6 does offer more protection from disk failures, for sure. However, it comes at a VERY big cost: you may have to put in twice as many disks and engines over an R1 FC pool to support the same workload. If you look at my latest best-practices EMC World pitch, I go over the calculations for how many disks you need to support a given workload. If you cannot tolerate any data loss, you should have remote replication, since things can happen to a box that even R6 can't protect against, such as fire, flood, or earthquake.
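A simplified version of that disks-for-workload calculation uses the standard RAID write penalties (R1 = 2 back-end IOs per host write, R5 = 4, R6 = 6). The workload mix and the ~180 IOPS per 15K FC drive below are assumed figures, not numbers from the thread:

```python
import math

# Back-of-envelope drive count for a given host workload.
# Write penalties are the standard RAID values; IOPS per drive is an
# assumed planning figure for a 15K FC spindle.

WRITE_PENALTY = {"RAID1": 2, "RAID5": 4, "RAID6": 6}

def drives_needed(host_iops: int, write_ratio: float, raid: str,
                  iops_per_drive: int = 180) -> int:
    """Minimum drives to absorb the back-end IOPS of a workload."""
    reads = host_iops * (1 - write_ratio)
    writes = host_iops * write_ratio * WRITE_PENALTY[raid]
    return math.ceil((reads + writes) / iops_per_drive)

# Same 20,000 IOPS workload at 30% writes:
print(drives_needed(20_000, 0.3, "RAID1"))  # 145 drives
print(drives_needed(20_000, 0.3, "RAID6"))  # 278 drives
```

On a write-heavy workload the R6 pool needs roughly twice the spindles of the R1 pool, which is the cost trade-off described above.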
Many customers are using 2TB LUNs or larger these days.
No, RAID 1 is the preferred option for FC. RAID 5 will have the lowest availability, and R1 may be less expensive, since in a FAST environment you are not looking at $/GB but $/IOP.
And when I mentioned replication, I was just suggesting that some of your % busy may not be due to host IO. The latest 5876 SR has a fix which will move some of the % busy to the low priority bucket where it belongs.