
August 5th, 2014 21:00

FAST Cache Disk Layout

I'm currently re-organising the disks on our VNX to distribute some load, and I figured I'd take a closer look at our FAST Cache disk layout.

I initially came up with the following configuration...

Configuration 1 - The RAID 1 pairs are split over bus 0 and bus 1, with all the primaries on bus 1 (as per the FAST Cache white paper)

naviseccli -h cache -fast -create -disks 1_0_0 0_1_0 1_0_1 0_1_1 1_0_2 0_1_2 1_0_3 0_1_3 -mode rw -rtype r1

[Attachment: config1.PNG]

I then realized that, with the nature of RAID 1, reads are served only by the primary, while writes go to both the primary and the secondary. The primary drive therefore generates more traffic on its bus than the secondary does, so I came up with Configuration 2. According to http://support.emc.com/kb/82823, and I quote...

Existing best practices documentation for Enterprise Flash Drives (EFDs) states that for optimal performance, EFD RAID groups should be spread across enclosures or buses or both.

It doesn't mention if the primary and secondary drives should be evenly distributed as well.

Configuration 2 - The RAID 1 pairs are split over bus 0 and bus 1, and the primaries are also split over bus 0 and bus 1

naviseccli -h cache -fast -create -disks 1_0_0 0_1_0 0_1_1 1_0_1 1_0_2 0_1_2 0_1_3 1_0_3 -mode rw -rtype r1

[Attachment: Config2.PNG]

So my question is: does it matter where the primary and secondary drives are located on the buses? Which configuration is preferred, or does it not matter?
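To make the difference between the two configurations concrete, here is a small sketch (plain Python, not naviseccli; the function names are hypothetical, and it assumes the pairing rule from the KB excerpt, namely that disks in the -disks list are paired in the order given, with the first of each pair becoming the primary):

```python
# Sketch: model how the -disks order pairs into RAID 1 mirrors and count
# where each configuration's primaries land, per backend bus.
# Assumption: disks pair up in list order, first of each pair = primary.

def mirror_pairs(disks):
    """Pair consecutive disks into (primary, secondary) mirrors."""
    return list(zip(disks[0::2], disks[1::2]))

def primaries_per_bus(disks):
    """Count how many RAID 1 primaries sit on each backend bus."""
    counts = {}
    for primary, _secondary in mirror_pairs(disks):
        bus = int(primary.split("_")[0])  # notation is Bus_Enclosure_Disk
        counts[bus] = counts.get(bus, 0) + 1
    return counts

config1 = ["1_0_0", "0_1_0", "1_0_1", "0_1_1", "1_0_2", "0_1_2", "1_0_3", "0_1_3"]
config2 = ["1_0_0", "0_1_0", "0_1_1", "1_0_1", "1_0_2", "0_1_2", "0_1_3", "1_0_3"]

print(primaries_per_bus(config1))  # {1: 4} - all primaries on bus 1
print(primaries_per_bus(config2))  # {1: 2, 0: 2} - primaries split across buses
```

Since reads only hit the primary, Configuration 1 would send all read traffic down bus 1, while Configuration 2 splits it 2/2 across the buses.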

August 5th, 2014 22:00

No, with the new VNX2 (MCx) it doesn't matter where they are located, but it is still best practice to distribute them across both SPs, so that if an I/O burst occurs it can be handled by both SPs. Hardware is still hardware, even with lots of enhancements at the software level.

Second, it is good practice to keep them in the same enclosure if you can, for performance reasons.

Thanx

Rakesh


August 5th, 2014 22:00

kill all your browser sessions (not just a specific tab) and try again.


August 5th, 2014 22:00

dynamo, I receive an "Access denied" message when I go to that link. Could you paste the contents here?


August 5th, 2014 23:00

Thanks, I can see the article. Here is the section that references my topic.

  • The highest level of availability can be achieved by binding the FAST Cache so that the Primary and Secondary for each RAID 1 pair are on different buses (i.e. not just different enclosures on the same bus).  However, this does add to the complexity of configuring FAST Cache, because the configuration would need to be done using the secure CLI.  This is not considered necessary for most implementations, but is done to further reduce the chances of multiple drive failures (which are already unlikely).
    • For example, in a VNX with four backend buses and eight drives to be added to FAST Cache, one example layout would be:
      • 1_0_0 (P1); 2_0_1 (S1); 2_0_0 (P2); 3_0_1 (S2); 3_0_0 (P3); 1_0_1 (S3); 1_0_2 (P4); 2_0_2 (S4) (drives added in this order)
      • Where the notation P2 means the Primary and S2 means the Secondary in the second RAID 1 etc.  (Do not include the labels such as (S1) in a bind command.)
    • The above example would ensure that the Primary and Secondary mirrored drives share a minimum number of components. It is easiest to be certain of the order in which these drives are bound by using the following CLI command to bind the FAST Cache:  Naviseccli cache -fast -create -disks disksList -mode rw -rtype r_1
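Assuming the same in-order pairing rule the KB describes (first disk of each consecutive pair is the primary), the quoted four-bus example can be sanity-checked with a short sketch (plain Python, not part of naviseccli):

```python
# Sketch: verify that in the KB's four-bus example, every primary/secondary
# mirror pair lands on two different backend buses (Bus_Enclosure_Disk notation).
kb_example = ["1_0_0", "2_0_1", "2_0_0", "3_0_1",
              "3_0_0", "1_0_1", "1_0_2", "2_0_2"]

pairs = list(zip(kb_example[0::2], kb_example[1::2]))
for primary, secondary in pairs:
    p_bus, s_bus = primary.split("_")[0], secondary.split("_")[0]
    assert p_bus != s_bus, f"{primary}/{secondary} share bus {p_bus}"
print("all mirrors span two buses")
```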

The KB doesn't specifically say "split primary and secondary evenly over the buses to distribute bus load"; however, in the example they have given...

[Attachment: config3.PNG]

It appears my "Configuration 2" is recommended when you're only dealing with two buses.


August 6th, 2014 04:00

Yep, it looks pretty much like Config 2: they are rotating which bus holds the primary and secondary for each pair. I'm surprised they did not use bus 0.
