November 16th, 2011 09:00
Drive layout for VNX5500
Hello all,
Customer just purchased a number of disks for a new VNX5500 (Block only).
It has 5 × 100GB EFDs for FAST Cache (SAS), 16 × 2TB 3.5" NL-SAS 7200 RPM, 27 × 2.5" 600GB 10K SAS, and a 3.5" 600GB 10K vault pack.
Customer plans to migrate off a CX700 to the VNX; that will be done later. For now, I'm going to create pools for the customer and carve out LUNs later.
I have 2 questions:
1) I want to know if the drive layout is okay for proper load distribution.
2) Customer has a mix of Solaris, Windows, VMware and Xen servers. The applications running are ClearCase (mostly Solaris hosts), Oracle DB, and MS SQL Server. Those applications pretty much use RAID 5 today.
The EFDs that were purchased are strictly for FAST Cache.
I'm thinking of creating 2 pools: one for SAS + EFDs (R5) and the other for NL-SAS (R6). My question is: based on the hosts and applications, which servers should use which pool? Can someone please elaborate a little, especially for the Xen servers and the Solaris hosts running ClearCase? What is good for Oracle and MS SQL?
Appreciate some insight! Thanks.



jps00
November 16th, 2011 10:00
Which filesystems are going into which pools?
For example, that SAS pool will have a lot of native IOPS, the NL-SAS pool much fewer.
If the EFDs are going to be used for FAST Cache, that FAST Cache can be used for both pools, or you can exclude a pool from being cached. All of the applications you list can benefit from FAST Cache. However, you may want to think about where the application logs are going to go. Sequential writes are not FAST Cache's best friend.
You may want to review the 'Storage System Sizing and Performance Planning' section of EMC Unified Best Practices for Performance and Availability: Common Platform and Block O.E. 31.5 to check up on the performance of the pools you plan to create. Depending on the IOPS needed, you may want to put one or more RAID groups of SAS drives in that NL-SAS pool. VNX Best Practices is available on Powerlink.
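To make the native-IOPS comparison concrete, here is a minimal back-of-the-envelope sketch. The per-drive figures are common rules of thumb for small random I/O, not measured values, so treat them as assumptions to be checked against the Best Practices doc:

```python
# Rough native IOPS estimate for candidate pools. Per-drive figures are
# rule-of-thumb assumptions for small random I/O; real numbers depend on
# I/O size, queue depth, and locality.
PER_DRIVE_IOPS = {
    "10k_sas": 150,   # assumption: 10K RPM SAS
    "nl_sas": 90,     # assumption: 7.2K RPM NL-SAS
    "efd": 3500,      # assumption: flash drive
}

def pool_native_iops(drive_count, drive_type):
    """Disk-level IOPS the pool's data drives can deliver natively."""
    return drive_count * PER_DRIVE_IOPS[drive_type]

# The two candidate pools from this thread:
print(pool_native_iops(25, "10k_sas"))  # 25 x 600GB SAS  -> ~3750 IOPS
print(pool_native_iops(14, "nl_sas"))   # 14 x 2TB NL-SAS -> ~1260 IOPS
```

The gap between those two numbers is exactly why the SAS pool can carry the hot filesystems while the NL-SAS pool needs either FAST Cache or some SAS RAID groups added to it.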
deeppat
November 17th, 2011 10:00
Thanks for the response. Here's my situation:
I have in all 16 × 2TB NL-SAS, 31 × 600GB SAS, and 5 × 100GB EFD. I am going to build a RAID group with the 4 vault drives (600GB SAS), and keep 2 NL-SAS and 2 SAS as hot spares.
This leaves me with:
14 × 2TB NL-SAS
25 × 600GB SAS
5 × 100GB EFD
If I create R5 with 5 (4+1) SAS drives in one pool, and create another pool for the NL-SAS with R6 (6+2), I am still left with 6 SAS drives.
Customer has Oracle DB, SQL and ClearCase (mostly Solaris hosts) as the major applications.
I have gone through the best practices doc but am still confused about how to make the best use of the combination of drives I have for the above applications.
Can you please advise a general configuration without going into the granularity of performance?
Thanks!
jps00
November 17th, 2011 11:00
One thing I meant to mention: according to the latest VNX Best Practices, you only need 1 EFD, 1 NL-SAS, and 1 SAS hot spare for that number of drives.
So you really have 15 NL-SAS, 26 SAS, and 4 EFDs to play with.
Do you have the FAST Enabler, or are you restricted to single tiered pools with 'basic' virtual provisioning?
No matter what you do, with that many drives, use all 4 EFDs as FAST Cache. FAST Cache will soak up any boot storms your system will have, and make up for a lot of the IOPS missing from the NL-SAS RAID groups.
If you have the FAST Enabler, your options are (obviously) one big pool or several smaller pools. You have a 'gray area' number of drives: more than enough for one pool, and maybe enough for two.
If you do not have the FAST Enabler, you can create two pools, one of SAS drives and the other of NL-SAS drives. Another alternative is to use traditional RAID groups.
The one big pool using the FAST Enabler is the easiest to set up and maintain. However, performance may not be as good as with two or more smaller pools.
Two or more smaller pools let you target pools to different filesystems. However, each pool has to natively have enough IOPS to handle its workload.
It's really important to know how many IOPS you need per LUN before you start creating pools.
If I had no idea how many IOPS per LUN were required, I'd just create one big pool with all my drives and put all my LUNs in it. (This assumes you have the FAST Enabler.)
If I knew how many IOPS I needed per LUN, I would consider two two-pool scenarios. In one, there are two single-tier pools: one of all SAS drives, the other of all NL-SAS. In the other, one pool has fewer SAS drives, and the other has both SAS and NL-SAS drives. Note the difference here is between two single-tier pools, versus a single-tier pool plus a two-tier pool (assuming you have the FAST Enabler).
The all-SAS pool would give good performance to the VMs and the DB LUNs needing higher IOPS. The other pool would be for LUNs that can tolerate a higher response time. Putting a SAS tier 'above' the NL-SAS tier will decrease the response time for the most heavily used data in that pool's LUNs.
It is likely you will have to make the availability decision of whether to use NL-SAS drives in RAID 5. This is not recommended. However, your business priorities, your confidence in your company's backup and recovery practices and procedures, and your needs may make this decision practical.
No matter what you do, you need to run the estimating calculations to determine how many IOPS you need per LUN to meet the base workloads, before you start allocating drives to pools.
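As a sketch of what those estimating calculations look like, the standard RAID write-penalty math for small random I/O is below. The host IOPS and read/write mix are made-up numbers purely for illustration:

```python
# Back-end (disk) IOPS generated by a host workload, using the standard
# RAID write-penalty rule of thumb: each host write costs 2 disk I/Os on
# RAID 1/0, 4 on RAID 5, and 6 on RAID 6.
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def backend_iops(host_iops, read_fraction, raid):
    """Disk IOPS the pool must natively supply for a given host load."""
    reads = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    return reads + writes * WRITE_PENALTY[raid]

# Hypothetical LUN: 2000 host IOPS at 70% reads.
for raid in ("raid10", "raid5", "raid6"):
    print(raid, backend_iops(2000, 0.7, raid))
# raid10 -> 2600, raid5 -> 3800, raid6 -> 5000 back-end IOPS
```

Compare that back-end number against the native IOPS of the pool you plan to put the LUN in; if it doesn't fit, the pool needs more drives or a faster tier.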
HTH
deeppat
November 17th, 2011 12:00
Thanks a lot for such a clear explanation; I've understood this nicely. Customer has the FAST Suite license, so they have the FAST Enabler and UQM. I've asked the customer to get me the IOPS details. They just want me to create the pools (they will carve out the LUNs later when doing the migration). But like you mentioned, the IOPS numbers are needed to size for the workload.
I need to understand what type of drive and RAID is preferable for ClearCase (mostly used by their Solaris hosts). This is one of their major applications. Any insight?
jps00
November 18th, 2011 04:00
I need to understand what type of Drive and Raid is preferable for Clearcase
Rational recommends either RAID 5 or RAID 1/0, although it's a 'soft' recommendation. Naturally, their recommendation is couched in terms of buying IBM storage systems.
They recommend using enterprise drives.
You'll have to explore with the customer their capacity, availability, and response time requirements.
It may be that a two-tiered SAS/NL-SAS pool would work for them for ClearCase, if you can make the IOPS work. If the ClearCase filesystem must be highly available, that will push it to RAID 6. This will crimp the pool's performance, and decrease its capacity, by increasing the number of drives in the tiers that go to the additional parity.
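A quick sketch to quantify that trade-off, using the private RAID group geometries discussed in this thread (pool metadata overhead is ignored):

```python
# Capacity and write-cost comparison of RAID 5 (4+1) vs RAID 6 (6+2)
# private RAID group geometries (ignores pool metadata overhead).
def usable_fraction(data_drives, parity_drives):
    return data_drives / (data_drives + parity_drives)

print("R5 4+1 usable fraction:", usable_fraction(4, 1))  # 0.80
print("R6 6+2 usable fraction:", usable_fraction(6, 2))  # 0.75
# Each host write also costs 4 disk I/Os on R5 but 6 on R6, so RAID 6
# trades both capacity and random-write performance for availability.
```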
Finally, you can contact your EMC Sales Rep to engage a USPEED professional for help.
USPEED has access to proprietary software tools that can estimate the performance of FAST configurations. Note that any USPEED engagement will require the same IOPS-per-LUN information discussed above. If you 'run the numbers' and the answers are not clear, getting a 'second opinion' from USPEED may be helpful.
HTH
deeppat
November 18th, 2011 13:00
Thanks, jps00.
Customer doesn't have IOPS info available. They're happy with one big pool with the FAST Enabler; that way they won't have a lot of unused storage, and the data will be spread across many more drives. Plus, performance isn't their big concern.
This is what I will go for:
3 × (4+1) 2TB NL-SAS and 5 × (4+1) 600GB SAS in one pool, with FAST enabled, keeping the drive counts in multiples of 5.
The only catch is that I am using 2 hot spares for the 27 drives I am left with.
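For what it's worth, a quick sketch of the raw usable capacity of that layout, counting only the data drives in each 4+1 private RAID group and using nominal drive sizes (pool metadata and formatted drive capacity will reduce the real number):

```python
# Usable capacity of the planned pool: 3 x (4+1) 2TB NL-SAS private
# RAID groups plus 5 x (4+1) 600GB SAS ones. Only the 4 data drives in
# each 4+1 group contribute capacity; sizes are nominal decimal TB.
nl_sas_tb = 3 * 4 * 2.0   # 3 RGs x 4 data drives x 2TB   = 24.0 TB
sas_tb = 5 * 4 * 0.6      # 5 RGs x 4 data drives x 600GB = 12.0 TB
print(f"NL-SAS tier: {nl_sas_tb} TB, SAS tier: {sas_tb} TB, "
      f"total: {nl_sas_tb + sas_tb} TB")
```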
This was a good lesson. Thank you jps00 for your time.
deeppat
November 20th, 2011 23:00
I have one last query and then I am all done.
In the drive layout (see my posted layout), if I go exactly with the hot spares I selected:
I am left with 14 NL-SAS drives out of 16, with Bus 0 and Bus 1 each having a hot spare: 2 × (4+1) + 1 × (3+1).
25 SAS drives out of 27, with Bus 0 and Bus 1 each having a hot spare: 5 × (4+1).
4 EFDs out of 5, with only Bus 0 having a hot spare (all strictly for FAST Cache).
Now when I create one big RAID 5 pool, it has 29 disks in all, which misses the best practice of having a multiple of 5 disks in an R5 pool, but in return I get a hot spare for the NL-SAS.
If I ignore the Bus1_Enc0 hot spare, I get 15 NL-SAS and a pool with 30 disks. If an NL-SAS disk fails on Bus 1, would the hot spare located in Bus0_Enc0 kick in?
What will happen if one EFD disk fails in Bus1_Enc0?
Can you let me know what the best choice is in this scenario, please? This will be my last question.
dynamox
November 21st, 2011 06:00
Hot spares are global, and I know I've never noticed any performance impact when a hot spare kicks in for a drive on another bus. I would personally prefer a properly configured pool over a hot spare on each bus.
deeppat
November 21st, 2011 06:00
Thanks, dynamox. Now I am getting the design part.
Thank you very much, jps00 and dynamox.
Rainer_EMC
November 21st, 2011 06:00
I agree – a hot spare should actually be in use for a limited time only, until you equalize back to the replacement drive in the failed slot.
Better to optimize for the common case of not having a failed disk.
Rainer
jps00
November 21st, 2011 07:00
OK. The customer is always right.
Two things I recommend.
Make sure your pool policy is 'Highest' to start out. (This is in VNX Best Practices.)
Create all your LUNs first, then start filling them one by one. This ensures the widest initial distribution of the LUNs across all the pool's private RAID groups.
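As a hedged illustration of that order of operations, a small Python wrapper around naviseccli is sketched below. The flag names are from memory and may differ by Block OE release, and the SP address, pool name, LUN count, and sizes are all placeholders, so verify against the CLI reference before using anything like this:

```python
import subprocess

SP = "10.0.0.1"    # placeholder: SP management address
POOL = "Pool 0"    # placeholder: pool name

# Create every LUN up front, before any host starts writing, so the
# initial slice allocation is spread across all private RAID groups.
for lun_id in range(10):
    subprocess.run(
        ["naviseccli", "-h", SP, "lun", "-create",
         "-type", "Thin",              # or NonThin, per the enablers installed
         "-capacity", "500", "-sq", "gb",
         "-poolName", POOL,
         "-l", str(lun_id),
         "-name", f"lun_{lun_id}"],
        check=True,
    )
```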