
June 26th, 2012 15:00

Which RAID-5 configuration is more performant?

If I have a DAE with 15 drives, is it more performant to create a single RAID-5 14+1 raid group or 3 x RAID-5 4+1 raid groups?

This is for a database application that will consume the entire DAE on its own.

Aside from the fact that having 3 LUNs provides more host paths than a single LUN does, is there any plus or minus to using a large raid group rather than multiple smaller ones? I'm already aware that 3 smaller raid groups are a bit less space-efficient; I'm not worried about capacity at this point, only about performance.

This DB does large-block, fairly random access with a 512 KB block size. I've configured the CLARiiON with a small read cache and as large a write cache as possible, and lowered the watermarks to 40/60. The database spans 4 DAEs, so my LUN config would be either 4 x (14+1) LUNs or 12 x (4+1) LUNs.

I would be interested in knowing why a particular config is chosen as well.

Thanks,

Ron

247 Posts

June 29th, 2012 01:00

If you are not worrying about capacity at this point, I'd go with the 4+1 R5 setup, for the reasons dynamox already discussed: rebuild time (disk failures WILL happen, so let's limit the time you're exposed to a dual disk failure and running with degraded performance) and a proper stripe size of 4 x 64 KB = 256 KB (giving a higher chance of full-stripe writes, which lower back-end load significantly).
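To make the stripe math concrete, here's a quick sketch (assuming the default CLARiiON 64 KB element size, which the 4 x 64 KB figure above implies, and the 512 KB block size from the original post):

```python
ELEMENT_KB = 64   # assumed default CLARiiON stripe element size per data drive
IO_KB = 512       # database block size from the original post

for data_drives in (4, 14):                  # 4+1 vs 14+1
    stripe_kb = data_drives * ELEMENT_KB
    # An aligned write covering whole stripes lets the SP compute parity
    # from the new data alone; a partial-stripe write forces read-modify-write.
    full = IO_KB % stripe_kb == 0
    print(f"RAID-5 {data_drives}+1: stripe = {stripe_kb} KB, "
          f"aligned {IO_KB} KB write is a {'full' if full else 'partial'}-stripe write")
```

On the 4+1 layout an aligned 512 KB write fills two 256 KB stripes; on 14+1 it can never fill the 896 KB stripe, so every write pays the read-modify-write penalty.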

Depending on how many LUNs you prefer to present to the host, either assign them directly to the host(s) or use MetaLUNs to stripe across the RAID groups and take advantage of the full disk count. Since you have 4 DAEs, if you place them on different buses you could stripe your MetaLUNs across DAEs as well.

If you take the MetaLUN approach with 4+1 R5 RGs, you can keep the Element Size Multiplier in the MetaLUN config tab at 4.
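A simplified model of how a striped MetaLUN spreads addresses across its component LUNs (this mapping is an illustration only, not FLARE's actual implementation; the component count of 4 matches the one-RG-per-DAE idea above):

```python
# Toy model of a striped MetaLUN. Element Size Multiplier 4 on a 4+1 RG
# keeps the MetaLUN element at 4 x 64 KB = 256 KB, i.e. one full RAID-5
# stripe lands on each component before moving to the next.
STRIPE_ELEMENT_KB = 4 * 64      # Element Size Multiplier x 64 KB
COMPONENTS = 4                  # one component LUN per DAE/bus in this sketch

def component_for(offset_kb: int) -> int:
    """Which component LUN (and hence which RAID group/DAE) serves this offset."""
    return (offset_kb // STRIPE_ELEMENT_KB) % COMPONENTS

# Sequential 256 KB chunks rotate across all four RAID groups:
print([component_for(kb) for kb in range(0, 8 * STRIPE_ELEMENT_KB, STRIPE_ELEMENT_KB)])
# -> [0, 1, 2, 3, 0, 1, 2, 3]
```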

1 Rookie • 20.4K Posts

June 26th, 2012 16:00

Are you striping in the OS/DB, like Oracle ASM? I would think distributing multiple LUNs between SPA and SPB would give you more cache resources, ports, paths, etc. Rebuild times on a large raid group like that would be much longer, so your DB could be affected for much longer.
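As a quick illustration of the balancing point (the LUN names and the alternating scheme are made up, not a CLARiiON default):

```python
# Hypothetical layout: spread 12 x (4+1) LUNs evenly across both storage
# processors so each SP's cache, ports and paths get used.
luns = [f"LUN_{i}" for i in range(12)]
ownership = {lun: ("SPA" if i % 2 == 0 else "SPB") for i, lun in enumerate(luns)}
for lun, sp in ownership.items():
    print(lun, "->", sp)
```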

222 Posts

June 26th, 2012 18:00

Yes, the database software is striping across available LUNs. However, according to the database best-practices documents, we have sufficient LUNs given the database size. It's not that additional LUNs wouldn't help, but I'm thinking more along the lines of pure raid group performance: does computing RAID-5 parity across 3 RGs (5 spindles each) versus 1 RG (15 spindles) make any difference?
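For context, RAID-5 parity is just a byte-wise XOR across the data elements of one stripe, so the work per stripe scales with stripe width rather than with the number of raid groups. A minimal sketch (element size assumed to be the 64 KB default):

```python
import os
from functools import reduce

ELEMENT = 64 * 1024  # assumed 64 KB stripe element

def parity(data_elements):
    """RAID-5 parity: byte-wise XOR of all data elements in one stripe."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data_elements)

# One full stripe on a 4+1 group XORs 4 elements; on 14+1 it XORs 14.
for width in (4, 14):
    stripe = [os.urandom(ELEMENT) for _ in range(width)]
    p = parity(stripe)
    print(f"{width}+1: XORed {width} x {ELEMENT // 1024} KB elements, parity = {len(p)} bytes")
```

Note that a small random write costs the same 4 back-end I/Os (read old data, read old parity, write both back) on either layout; the layouts differ mainly in how often a full-stripe write is possible.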

1 Rookie • 20.4K Posts

June 26th, 2012 19:00

I am no performance guru, but 1 raid group with 15 drives is going to give you a weird stripe size of 896 KB (14 x 64 KB). With the random I/O you describe, the chances of a full-stripe write are already small, and with a big stripe size the chances are even smaller. I would be more concerned with rebuild times and double drive failure; the chances are small, but still.
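To put a rough number on the double-failure concern: while one drive is rebuilding, a second failure only loses data if it lands in the same raid group. Ignoring the (also longer) rebuild window of the wide group, a crude comparison:

```python
# Crude exposure comparison while one drive has failed and is rebuilding.
# A second failure is fatal only if it hits the same degraded RAID group.
TOTAL = 15
for groups, size in ((1, 15), (3, 5)):
    fatal = size - 1  # remaining drives in the degraded group
    print(f"{groups} x RAID-5 group(s) of {size}: a second failure is fatal on "
          f"{fatal} of the other {TOTAL - 1} drives ({fatal / (TOTAL - 1):.0%})")
```

And the 14+1 rebuild window itself is longer, since rebuilding one drive means reading all 14 survivors.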
