
March 23rd, 2015 16:00

VNX 5600 Disk Layout Review

I've got a few VNX 5600s that will be installed in the near future. My primary concern is the proper layout for EFDs. As of now, the plan is to put everything in one, possibly two, storage pools. I've read the best practices documentation and have avoided putting any FAST Cache EFDs on 0_0. I spread the FAST Cache EFDs across the available DAEs and limited them to no more than four per DAE. I'd appreciate any thoughts or suggestions on the design. Thanks in advance!
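The placement rule described here (spread FAST Cache EFDs across the available DAEs, capped at four per DAE) can be sketched as a simple round-robin. This is a hypothetical illustration, not anything from the VNX tooling; the DAE names and counts below are made up for the example.

```python
# Hypothetical sketch: spread FAST Cache EFDs round-robin across DAEs,
# capping each DAE (here at 4, per the plan above). Names are illustrative.
def spread_efds(efd_count, daes, per_dae_cap=4):
    """Return {dae: count} with EFDs balanced one at a time across DAEs."""
    placement = {dae: 0 for dae in daes}
    remaining = efd_count
    while remaining > 0:
        placed = False
        for dae in daes:
            if remaining == 0:
                break
            if placement[dae] < per_dae_cap:
                placement[dae] += 1
                remaining -= 1
                placed = True
        if not placed:  # every DAE is already at its cap
            raise ValueError("not enough DAE slots under the cap")
    return placement

# e.g. 10 FAST Cache EFDs over three DAEs (skipping 0_0 per best practice):
print(spread_efds(10, ["0_1", "1_0", "1_1"]))  # {'0_1': 4, '1_0': 3, '1_1': 3}
```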

1 Attachment

March 23rd, 2015 19:00

Quick observations...

Your EFD layout looks OK to me.

You have used a non-best-practice drive count for the RAID6 NL-SAS tier (recommended 14+2); it shouldn't be a problem as long as it's consistent.

You have no spare for one of the EFD drive types. Spares are not interchangeable between EFD drive types.

1 Rookie • 20.4K Posts

March 23rd, 2015 21:00

74 NL-SAS works perfectly in a 6+2 R6 config.

1 Rookie • 20.4K Posts

March 24th, 2015 06:00

If I am not mistaken, there are only two buses on the 5600, so it should be pretty simple to balance.

86 Posts

March 24th, 2015 06:00

I appreciate it, guys! You're right on the R6. I'm definitely going to change that to 6+2. I'm actually only going to have 72 NL-SAS since 3 will be hot spares, but the math still works. Also, to clarify, all of the EFDs are the same drive type. I may have mistyped something in the spreadsheet.

Like I mentioned, I've read through the BP documents, but the other thing I've had trouble finding documented is exactly how the DAEs should be set up as far as bus and enclosure numbering. I'd appreciate any opinions on that configuration.

86 Posts

March 24th, 2015 08:00

One other thing: I've got 260 600GB SAS disks. If I'm using 9 hot spares and doing R5 (4+1), that leaves 1 additional disk. What would be the best way to incorporate that disk? I could make it a hot spare, but that seems like a waste; however, I may not have a choice.
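The arithmetic behind that leftover disk is just integer division; a quick back-of-the-envelope sketch (numbers taken from the post above):

```python
# 260 SAS disks, 9 hot spares, RAID5 (4+1) groups of 5 disks each.
total, spares, group_size = 260, 9, 5
data_disks = total - spares            # 251 disks available for RAID groups
groups = data_disks // group_size      # 50 full 4+1 groups
leftover = data_disks % group_size     # 1 disk left over
print(groups, leftover)  # 50 1
```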

1 Rookie • 20.4K Posts

March 24th, 2015 09:00

Can't do much with one disk; an extra hot spare can't hurt. One less HS to purchase the next time around.

86 Posts

March 24th, 2015 09:00

Hmm...I wasn't aware there was a different drive type within the 200GB EFDs. I found another thread saying that FLV42S6F-200 is SLC flash while V4-2S6FX-200 is eMLC, so they aren't the same drives. That being said, I'm guessing best practice would be to have a hot spare for each type. Good catch, guys.

5 Practitioner • 274.2K Posts

March 24th, 2015 09:00

Hi Clayton,

The layout looks good, but even though they are the same size, depending on what type of drive you have chosen for the 200GB Tier1 EFD, you might not be able to use the same hot spare for those and for the FAST Cache ones. You might want to double-check that.

As mentioned before, I would recommend changing the RAID6 setting from 12+2 to 6+2, as that is one of the recommended best-practice configurations.

As for your single 600GB drive, I would make it a hot spare. There isn't much you can do with a single drive and if the customer ever increases the drive count, at least, you won't have to worry about needing another hot spare for a while.

My 2cents.

86 Posts

March 24th, 2015 10:00

So my question now is: will a hot spare work, at least temporarily? For example, if an eMLC drive fails, will the SLC drive at least act as a hot spare long enough for a replacement, or will it not hot spare at all?

5 Practitioner • 274.2K Posts

March 24th, 2015 10:00

If an eMLC drive fails, then, unfortunately, the SLC hot spare will not be invoked, as it is a different drive type; so in your scenario, it will not hot spare at all.

5 Practitioner • 274.2K Posts

March 24th, 2015 10:00

You are correct. The FAST Cache EFDs are SLC, whereas the cheaper FAST VP EFDs are eMLC, therefore they need different hot spares.

March 24th, 2015 14:00

Clayton wrote:

I appreciate it, guys! You're right on the R6. I'm definitely going to change that to 6+2. I'm actually only going to have 72 NL-SAS since 3 will be hot spares, but the math still works. Also, to clarify, all of the EFDs are the same drive type. I may have mistyped something in the spreadsheet.

Good call on the 6+2 RAID6. You lose a lot of capacity to parity, but that's just a consequence of the architecture. There are always trade-offs.

86 Posts

March 24th, 2015 14:00

I may still go with R6 14+2 vs 6+2 to increase overall usable capacity. We'll see...

1 Rookie • 20.4K Posts

March 24th, 2015 15:00

Think about rebuild times on big drives and the chances of multiple failures. If this were a dedicated R6 pool for some deep archive data/backups, I would try to squeeze out as much usable capacity as possible. This is going to be part of FAST, right? One private RG goes, the entire pool goes. That's just my opinion.

5 Practitioner • 274.2K Posts

March 24th, 2015 15:00

Based on best practices, I wouldn't recommend going with 14+2, and here is why: from what you said, you have 72 usable NL-SAS, meaning that if you use 14+2 R6, you will end up with 4 x 14+2 and 1 x 6+2, which is not really optimal. I understand about the higher overhead of 6+2, but in your case, I would still rather go with 9 x 6+2 than 4 x 14+2 + 1 x 6+2.
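The 72-drive comparison above can be checked with a small sketch (illustrative only; the helper name is made up):

```python
# Group a drive count into uniform RAID6 sets and report what doesn't fit.
def layout(drives, data, parity=2):
    size = data + parity
    full = drives // size           # full RAID groups of this geometry
    rest = drives - full * size     # drives left over, needing another geometry
    return full, rest

drives = 72
# 9 x 6+2: divides evenly, 54 data drives, uniform groups (25% parity overhead)
print(layout(drives, 6))   # (9, 0)
# 14+2: only 4 full groups fit; the 8 remaining drives force a mixed
# 4 x 14+2 + 1 x 6+2 pool (62 data drives, but non-uniform private RGs)
print(layout(drives, 14))  # (4, 8)
```

The 14+2 path yields more usable capacity (62 vs 54 data drives) but mixes group geometries in the pool, which is the non-optimal part the reply is pointing at.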
