
May 12th, 2017 15:00

VNX2/5600 FAST Cache disks

We are on a VNX5600 and have FAST Cache (FC) 400 GB drives (8 drives + 1 hot spare) on the DPE.

We have expanded capacity on the VNX2 over time. We now want to expand FAST Cache with 2 more drives to the maximum of 10, but we don't have any spare slots in the original DPE (0_0).

We do have empty slots (and unused disks we can move) in other DAEs, and can free up 11 slots (all in enclosure 0 on other buses).

But I am confused about FAST Cache disk placement. It seems from #442856 that we CAN use 0_0 for FAST Cache on VNX2, though that is not recommended for VNX1, and #428695 adds to my confusion about FAST Cache disk locations and mirror locations versus what I have seen in some community postings.

So, my questions are...

Is it best practice on VNX2 to NOT put FAST Cache on 0_0?

Then, some options for expansion:

1) Leave (9) on DPE, add 2 more FC drives and mirror IN same enclosure?

   2_0_1 plus 2_0_2

2) Leave (9) on DPE, add 2 more FC drives and mirror BETWEEN different enclosures?

   2_0_1 plus 5_0_1

3) Move half of the FC drives OFF of 0_0, pairing 0_0_x as one half of each mirror with 2_0_x or 5_0_x as the other half?

   i.e.

   0_0_4 plus 2_0_1; 0_0_5 plus 5_0_1; etc.

4) Move ALL the FC drives OFF of 0_0 and spread/mirror them?

   i.e.

    same DAE?

      2_0_1 plus 2_0_2; 5_0_1 plus 5_0_2, etc.

    different DAE?

      2_0_1 plus 5_0_1; 2_0_2 plus 5_0_2, etc.

Thanks.

4.5K Posts

May 15th, 2017 13:00

The 5600 has multiple backend buses (6) - the recommendation for FAST Cache is:

For VNX Series arrays with MCx (Release 33), the best practices for drive placement are:

  • Spread the drives as evenly as possible across the available backend buses (the 5600 has 6 backend buses).
  • Add no more than 8 FAST Cache flash drives per bus, including any unused flash drives kept as hot spares.
  • Use DAE 0 on each bus when possible for flash drives, as this has the lowest latency.

In your example, #3 or #4 would be OK. If you have bus 0, bus 2, and bus 5 available, splitting the 10 disks among them would be fine as well; you just want the EFDs on all the buses, if possible, to spread the workload across them.
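If it helps, here is a quick way to confirm where the current flash drives and the FAST Cache sit before you move anything. This is only a rough sketch assuming naviseccli is installed; <SP_A_IP> is a placeholder for your SP address.

    # List every disk with its bus_enclosure_slot location, type, and state
    naviseccli -h <SP_A_IP> getdisk

    # Show the current FAST Cache configuration, including its member disks
    naviseccli -h <SP_A_IP> cache -fast -info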

geln

24 Posts

May 16th, 2017 11:00

Hi Glen, thanks for that information. Just to clarify before I go through the work of moving disks around: #1 and #2 should NOT be done?

This confusion is even mentioned in #428695, which refers to an "older best practice":

----

In the desire to follow these EFD best practices, the configuration for FAST Cache LUNs is sometimes coming into conflict with an older best practice of not configuring LUNs split between enclosure 0_0 (which is battery backed up) and other (non-battery backed up) enclosures.   This problematic configuration is made more common by the fact that most arrays ordered with EFDs are sent with EFDs already in enclosure 0_0

----

but then it makes it sound like this is not just an old practice, but still the current best practice:

As a general rule, when configuring FAST Cache, make sure all of the drives involved are either entirely contained within enclosure 0_0 (only recommended for 2 or 4 drive FAST Cache configurations) or entirely in enclosures other than 0_0

4.5K Posts

May 17th, 2017 07:00

The rule still applies for VNX2: if you use bus 0 enclosure 0, then the number of disks should be limited to 2 or 4, and those should be set as primary/secondary pairs. I was mistaken when I read your note initially; I was thinking about EFDs in RAID groups or pools. So yes, item #4 would be the safest.

For example:

1-0-4 - primary
1-0-5 - secondary
2-0-6 - primary
2-0-7 - secondary

The problem is that if you suffer a power outage, you'll still have power on DAE 0-0 (it is battery backed), but if the mirror pairs span 0-0 and 1-0, the secondary mirrors in 1-0 will be disabled before the disks in 0-0, breaking the mirror. When power is restored, you must wait for the rebuild on the mirror pairs to finish, and it could fault. If you can't get all the disks into 0-0, then you should not use 0-0 at all.

You can use the CLI to add the disks - there's a KB that explains how to add the RAID 1 (mirrored pair) disks so they end up in the correct order. Alternatively, when you add the disks to FAST Cache in the GUI, the order in which you add them sets the mirror-pair relationship. So in the GUI you would first add 1-0-4, then 1-0-5 (that creates the mirror with 1-0-4 as primary and 1-0-5 as the secondary).
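As a rough sketch only - check the exact switches against the KB and the CLI reference for your release, and treat <SP_A_IP> as a placeholder - the CLI sequence would look something like the following. As far as I recall, FAST Cache cannot be expanded in place, so you destroy it and re-create it with the full disk list, and the order of the disks in the list is what sets the primary/secondary pairing.

    # Destroy the existing FAST Cache first (it flushes dirty pages, so this can take a while)
    naviseccli -h <SP_A_IP> cache -fast -destroy

    # Re-create it with the disks listed in primary/secondary order;
    # each consecutive pair should become one RAID 1 mirror (1_0_4 + 1_0_5, 2_0_6 + 2_0_7, ...)
    naviseccli -h <SP_A_IP> cache -fast -create -disks 1_0_4 1_0_5 2_0_6 2_0_7 -mode rw -rtype r_1

    # Verify the result and the member disks
    naviseccli -h <SP_A_IP> cache -fast -info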

glen
