
Best Practices for Symmetrix Configuration



Introduction

This article summarizes frequently asked questions (FAQ) about Symmetrix VMAX configuration.



Detailed Information

Q1: When configuring Front-End Adapters (SAFs, also referred to as FAs), why is it recommended to “go wide before deep”?
A1: Symmetrix has CPUs dedicated to Front-End ports. Using two ports on the same FA CPU does not double performance, because the same CPU has to serve both ports. There are several recommendations for FA configuration (a path-ordering sketch follows the list):

·         Use all the 0 (zero) ports on a director first and then the 1 (one) ports.

·         Do not zone the host to the same port of the same director board.

·         A given host should never be mapped to both ports of one FA CPU.

·         For redundancy, a host should never be connected to only one director board.

·         Spread connections across engines / directors first, then on the same director.

·         You get increased availability if you spread the connections "wide" first.

·         Configure multiple paths to FA ports on separate processor slices and directors.

·         Each host should be configured to a minimum of two FAs (SAFs); four are recommended.

·         Connect at least two HBAs across redundant fabrics for high availability.
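
To make the ordering concrete, here is a minimal sketch of "go wide before deep" path selection. The 4-engine layout, director names, and port naming are assumptions for illustration; real VMAX director identifiers differ.

```python
# A minimal sketch of "go wide before deep": all 0 ports across every
# engine and director are candidates before any 1 port, and engines are
# stepped across before any one engine is reused.

from itertools import product

engines = [1, 2, 3, 4]   # assumed 4-engine system
directors = ["A", "B"]   # two director boards per engine (assumed)
ports = [0, 1]           # two ports per FA CPU

# product() varies the rightmost factor fastest, so this list walks all
# engines on port 0 / director A first, then director B, then port 1.
candidates = [f"FA-{e}{d}:{p}" for p, d, e in product(ports, directors, engines)]

# First four paths for a host (two FAs minimum, four recommended):
print(candidates[:4])  # ['FA-1A:0', 'FA-2A:0', 'FA-3A:0', 'FA-4A:0']
```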

Q2: Why shouldn't zoning be one-to-one?
A2: This is an old Brocade and McData recommendation. One-to-one zoning causes memory leaks and increases the number of indexes that the switch (its ASICs and ports) has to access to validate source and target communication, which drives up memory and CPU utilization. It also makes troubleshooting much more difficult, both for you and for EMC Customer Service. Zoning should go from an initiator to all targets of the same array, the same tape library, or the same NAS head (as required).
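
As an illustration, here is a minimal sketch of building single-initiator zones in that style. All WWPNs, host names, and zone-naming conventions below are made-up placeholders.

```python
# One zone per host initiator, each containing that initiator plus all
# FA target ports of a single array, instead of one zone per
# initiator/target pair. All WWPNs are hypothetical.

initiators = {
    "host1_hba0": "10:00:00:00:c9:00:00:01",
    "host1_hba1": "10:00:00:00:c9:00:00:02",
}
array_targets = {
    "fa_1a_0": "50:00:09:70:00:00:00:01",
    "fa_2a_0": "50:00:09:70:00:00:00:02",
    "fa_3a_0": "50:00:09:70:00:00:00:03",
    "fa_4a_0": "50:00:09:70:00:00:00:04",
}

def single_initiator_zones(initiators, targets, array_name):
    """One zone per initiator, each holding all targets of the array."""
    return {
        f"z_{name}__{array_name}": [wwpn] + sorted(targets.values())
        for name, wwpn in initiators.items()
    }

for zone, members in single_initiator_zones(initiators, array_targets, "vmax1").items():
    print(zone, len(members), "members")
# 2 zones for this host, instead of 8 one-to-one zones.
```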

Q3: What is the result of adding more disk drives to the same loop on the backend?
A3: Performance will scale up to the point where the maximum capacity of the DA is reached. Adding more disks beyond this point will not improve performance: you can add more disks for greater storage capacity, but the IOPS will not scale (a toy model follows the note below).
Note: When a system is upgraded (especially when engines are added), it is important to rebalance the disks on the back-end and to rebalance the host connections across the front-end. Many systems that started as a 2-Engine system and were upgraded to a 4- or 6-Engine system had their back-end disks rebalanced, but not their front-end ports. As a result, the new engines have many unused FA processors while the FA processors on the original engines are overloaded.
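
The scaling behavior can be pictured with a toy model. Both constants below are assumptions for illustration, not published DA limits:

```python
# Back-end IOPS grow with drive count only until the DA saturates.

DISK_IOPS = 150      # assumed small-random IOPS per 15K FC drive
DA_MAX_IOPS = 6_000  # assumed DA ceiling, for illustration only

def loop_iops(n_disks):
    """Back-end IOPS for n_disks behind one DA."""
    return min(n_disks * DISK_IOPS, DA_MAX_IOPS)

for n in (10, 20, 40, 80):
    print(f"{n:>2} disks -> {loop_iops(n):>5} IOPS")
# 10 -> 1500, 20 -> 3000, 40 -> 6000, 80 -> 6000: capacity keeps
# growing, but IOPS stop scaling once the DA is the bottleneck.
```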

Q4: For what types of I/O does the RAID protection of a LUN make a major difference?
A4: Random writes. A single host write generates a different number of back-end I/Os depending on the RAID type (worked numbers follow the list):

·         With Random writes:

·         For RAID-1, one write will result in 2 back-end writes.

·         For RAID-5, one host write will result in 2 back-end reads and 2 back-end writes, for a total of 4 IOPS.

·         For RAID-6, one host write will result in 3 back-end reads and 3 back-end writes for a total of 6 IOPS.

·         Note that sequential writes on RAID-5 or RAID-6 can be more efficient than on RAID-1.

·         Performance of reads is similar across all protection types.
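
Here is a minimal sketch of that arithmetic, turning a host workload into back-end IOPS per protection type. The only assumption beyond the figures above is that a read costs one back-end I/O on any protection type:

```python
# Back-end I/O cost of small random I/O: each random write costs
# 2 (RAID-1), 4 (RAID-5), or 6 (RAID-6) back-end I/Os; reads cost 1.

WRITE_PENALTY = {"RAID-1": 2, "RAID-5": 4, "RAID-6": 6}

def backend_iops(host_iops, read_ratio, raid):
    """Back-end IOPS generated by a host workload under a RAID type."""
    reads = host_iops * read_ratio
    writes = host_iops * (1 - read_ratio)
    return reads + writes * WRITE_PENALTY[raid]

# 10,000 host IOPS at a 70/30 read/write mix:
for raid in WRITE_PENALTY:
    print(raid, int(backend_iops(10_000, 0.7, raid)))
# RAID-1 13000, RAID-5 19000, RAID-6 25000
```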

Q5: Which is the recommended protection type, RAID-1, RAID-5, or RAID-6?
A5: Besides the major difference in random write performance mentioned above, cost may be a factor. RAID-5 and RAID-6 carry a protection overhead of 12.5% to 25%, depending on group width, while RAID-1 carries a 50% protection overhead (see the arithmetic below). RAID-1 has the best random write performance and RAID-6 has the best resiliency.
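
These overhead figures follow directly from the group geometry. A quick check, assuming the common Symmetrix group widths:

```python
# Protection overhead = protection members / total members.

def overhead(data_members, protection_members):
    return protection_members / (data_members + protection_members)

for name, d, p in [("RAID-1 (1+1)", 1, 1), ("RAID-5 (3+1)", 3, 1),
                   ("RAID-5 (7+1)", 7, 1), ("RAID-6 (6+2)", 6, 2),
                   ("RAID-6 (14+2)", 14, 2)]:
    print(f"{name}: {overhead(d, p):.1%}")
# RAID-1 50.0%; RAID-5 25.0% or 12.5%; RAID-6 25.0% or 12.5%
```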

Q6: What is the generally recommended RAID protection for the Enterprise Flash Drive (EFD), Fibre Channel (FC), and Serial ATA (SATA) tiers?
A6: The recommendations are:

·         EFD: RAID-5 (3+1)

·         FC: RAID-1

·         SATA: RAID-6 (6+2)

·         For EFD, RAID-1 is more expensive, and RAID-6 drives up DA utilization. For FC, RAID-1 offers the right balance of performance, cost, and reliability; using RAID-1 relieves the DAs of parity calculations and extra I/Os during writes. For SATA, which stores large amounts of data with limited performance requirements, RAID-1 costs too much capacity, so RAID-6 is more appropriate. RAID-6 (14+2) is not recommended on the Symmetrix VMAX 10K (987) and VMAX 40K systems.

Q7: How many IOPS can FC / SATA / EFD technology handle?
A7: For small random I/O, the following are the estimated per-drive IOPS (a sizing example follows the list):

·         15K FC disk can do 150 IOPS

·         10K FC disk can do 120 IOPS

·         7200 SATA disk can do about 50 IOPS

·         EFD (SSD Flash) disk can do about 1000 IOPS
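
As a worked sizing example using these estimates (and ignoring cache hits and the RAID write penalty from Q4), the 12,000 IOPS target below is an arbitrary illustrative workload:

```python
# Drives needed per technology for a small-random workload.

import math

DRIVE_IOPS = {"15K FC": 150, "10K FC": 120, "7200 SATA": 50, "EFD": 1000}

target = 12_000  # assumed host workload, small random I/O
for drive, iops in DRIVE_IOPS.items():
    print(f"{drive}: {math.ceil(target / iops)} drives")
# 15K FC: 80, 10K FC: 100, 7200 SATA: 240, EFD: 12
```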

Q8: When configuring drives of different types (Tiers) what is the best practice for the distribution of these on the back-end?
A8: Distribute them as uniformly as possible across all DAs and keep the physical distribution balanced; otherwise overall system performance will be lowered, because skewed workloads are limited by the most heavily utilized components. Balance all drives across all available DAs where possible (a distribution sketch follows the list). Ideal configurations would be:

·         Multiples of 8 drives per VMAX 10K or 20K Engine

·         Multiples of 16 drives per VMAX 10K (959) and VMAX 40K Engine

·         All of the same type and speed, spares not included.
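
A minimal sketch of that balanced layout, dealing each tier's drives round-robin over an assumed 2-engine, 4-DA system so every DA receives the same share of each technology (DA names and drive counts are made up for illustration):

```python
from collections import Counter, defaultdict
from itertools import cycle

das = ["DA-1A", "DA-1B", "DA-2A", "DA-2B"]  # assumed 2-engine system

def distribute(drives_by_tier, das):
    """Deal each tier's drives round-robin across the DAs."""
    layout = defaultdict(list)
    for tier, count in drives_by_tier.items():
        for _, da in zip(range(count), cycle(das)):
            layout[da].append(tier)
    return layout

layout = distribute({"EFD": 8, "FC": 32, "SATA": 16}, das)
for da in das:
    print(da, dict(Counter(layout[da])))
# Every DA ends up with 2 EFD, 8 FC, and 4 SATA drives.
```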

Q9: Should I segregate “X” volumes from “Y” volumes?
A9: You have two choices:

1.     For the best overall system performance, do not segregate applications/BCVs/Clones/Thin pools onto separate physical disks, DAs, or Engines.

2.     For the most predictable performance, segregate applications/BCVs/Clones/Thin pools onto separate physical disks, DAs, or Engines.

·         Segregate disks horizontally, not vertically.

·         Do not segregate by “Engine”; buy a separate system instead.

·         Do not segregate if the workloads on the various sets of volumes in question are not concurrent with one another, even if you want the most predictable performance.

Q10: Which type of Meta configuration is recommended, Concatenated or Striped Meta?
A10: Use Striped Metas unless ease of expansion is much more important than any performance factor. Host-based striping can also be used instead of Meta volumes.

Q11: For Meta devices, is it necessary to use Host Striping?
A11: If a Symmetrix Meta volume is used, Host Striping is not recommended, because it adds too many levels of striping. One or two levels of striping are enough; more can make any workload look random, even highly sequential ones. Host Striping consumes many LUNs and host cycles, and should be used with raw devices, not Symmetrix Meta devices. A sketch of striped-meta addressing follows.
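
To see why stacking striping levels adds little, here is a minimal sketch of striped-meta address mapping. The 1-cylinder (1920-block) stripe depth and 4-member geometry are assumptions for illustration:

```python
# Logical blocks already rotate across the meta members in fixed-size
# stripes, so adding host striping on top only stacks another rotation.

STRIPE_BLOCKS = 1920  # assumed stripe depth in 512-byte blocks
MEMBERS = 4           # assumed number of meta members

def member_for_block(lba):
    """Meta member that holds a given logical block address."""
    return (lba // STRIPE_BLOCKS) % MEMBERS

# A sequential run walks the members in turn, one stripe at a time:
print([member_for_block(lba) for lba in range(0, 4 * STRIPE_BLOCKS, STRIPE_BLOCKS)])
# [0, 1, 2, 3]
```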

Q12: Why is it recommended that the data devices in a Thin Pool should all be about the same size?
A12: Extents are allocated in a round-robin fashion, so using different-sized data devices in the Thin Pool leads to uneven data distribution and therefore an unbalanced workload (see the sketch after the note).
Note: TDATs on disks with SFS or VAULT volumes can be a little smaller.
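
A minimal sketch of what happens when round-robin allocation meets unequal data devices (the extent counts and TDAT sizes below are made up): once the small TDAT fills, new extents pile onto the larger ones.

```python
# Round-robin extent allocation over TDATs of unequal size.

def allocate(n_extents, tdat_sizes):
    used = [0] * len(tdat_sizes)
    i = 0
    for _ in range(n_extents):
        while used[i] >= tdat_sizes[i]:   # skip TDATs that are full
            i = (i + 1) % len(used)
        used[i] += 1
        i = (i + 1) % len(used)
    return used

print(allocate(600, [100, 400, 400]))
# [100, 250, 250]: the disks behind the two larger TDATs hold
# 2.5x the data, and therefore see a correspondingly skewed workload.
```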

Q13: What are the best practices for FAST VP?
A13: The recommendations are:

·         Policy: 100% across all technologies (100%, 100%, 100%) for a self-managing “black box” solution.

·         Operation mode: automatic

·         Workload analysis period: 7 days (168 hours) or 4 weeks.

·         Initial analysis period: at least 24 hours, best practice is the default of 168 hours.

·         Performance time windows: 24 hours.

·         Pool reserved capacity (PRC): base it on the lowest allocation warning level for that thin pool. Set PRC to 10-15% for pools with bound thin devices, and to 1% for pools with no bound thin devices.

Q14: Why should the EFD (SSD Flash drives) be uniformly distributed in the back-end, when designing a FAST VP solution?
A14: To ensure balanced DA utilization. With an unbalanced EFD configuration, DAs with EFDs will be heavily utilized while those without will be underutilized. Since an EFD can do many more I/Os per disk than a spinning drive, its placement can cause a big skew on the DA CPUs.

Q15: When should you choose to deploy the 100GB, 200GB, and 400GB EFD (SSD Flash) drives?
A15: All EFD capacities deliver the same number of IOPS per drive. Using smaller EFDs can make it easier to hit the sweet spot of 8 or 16 EFDs per engine, depending on the Symmetrix VMAX model.

iEMC APJ
