mrizzi2

VNX5100 system configuration optimization for VMware vSphere 4.1 infrastructure

Hello,

We are setting up a new VMware vSphere 4.1 Update 1 production infrastructure as described below:

==================================================
- 2x IBM x3650 M3 hosts (7945K4G) each running VMware ESXi 4.1 Update 1 Build 348481 and equipped with 2x QLogic 8Gb FC Single-port HBAs for IBM System x (42D0501).
- 1x EMC VNX5100 Dual SP enclosure with Fibre Channel front-end ports running VNX Block OE release 05.31.000.5.011.
- 1x IBM x3650 M3 physical Windows server (7945K3G) running Windows Server 2008 R2 Standard x64 SP1, VMware vCenter Server 4.1.0 Build 345043 and Symantec Backup Exec 2010 R3 (fully updated) media server software. This server is also equipped with 2x QLogic 8Gb FC Single-port HBAs for IBM System x (42D0501) so that it can directly access the virtual machines residing on the SAN, allowing backup operations to be offloaded to the media server without affecting the production ESXi hosts.
- 2x EMC Connectrix DS-300B 8Gb SAN switches, both running firmware release v6.4.1a with a typical zoning configuration, are used in conjunction with the dual-SP enclosure and the dual QLogic 8Gb FC single-port PCIe HBAs to eliminate any single point of failure on the SAN.

- Both the VMware ESXi 4.1 Update 1 hosts and the IBM x3650 M3 physical Windows server are using their native multipath support.
==================================================


As part of configuring our first VNX system and optimizing the new VMware vSphere 4.1 infrastructure in our lab before going to production, I would like to check with you experts whether the aspects below make sense, and whether you could kindly provide additional observations/recommendations.

==================================================

1) Both EMC Connectrix DS-300B 8Gb SAN switches have been configured using typical zoning, the connectivity status for all connected HBA/SP ports shows green in the EZSwitchSetup utility, and both the VMware and Windows native multipath failover see all paths to the LUNs. Is any additional configuration necessary/recommended as a best practice?
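For reference, beyond EZSwitchSetup I am planning to double-check the fabric from each switch's CLI. Since the DS-300B is a rebranded Brocade switch running FOS, I am assuming the standard Brocade commands apply to our v6.4.1a firmware:

- switchshow (port states and logged-in WWNs)
- cfgshow (defined and effective zoning configuration)
- porterrshow (per-port error counters, useful for spotting marginal links)

I have also read that single-initiator zoning (one HBA port per zone, zoned to at least one front-end port on each SP) is the usual best practice, and I would like to confirm that this is what the typical zoning option actually produces.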

2) With regard to the recommended storage multipathing and failover configuration for VMware ESXi 4.1, I am struggling to settle on a policy for our VNX5100 system, since I have seen quite different recommendations from EMC, from VMware, and from the multipathing policy ESXi automatically selected upon installation (see the command sketch after this list):

- As per the VMware compatibility list for our VNX5100 FC system, the native VMware multipathing needs to be configured with the “VMW_SATP_ALUA_CX” SATP plugin and the “VMW_PSP_FIXED” policy.

- As per the EMC document “Using EMC VNX Storage with VMware vSphere” version 1.0, EMC does not mention a specific SATP plugin for use with a VNX5100 FC system; however, it recommends the new “VMW_PSP_FIXED_AP” policy introduced with vSphere 4.1. According to that document, the “VMW_PSP_FIXED_AP” policy “selects the Array LUN preferred path when VNX is set for ALUA mode”, which, based on my understanding, is the default failover mode configured on a VNX5100 system.

- As per the information gathered from the VMware vSphere Client when verifying the multipathing policy ESXi automatically selected for our VNX5100 upon installation, the SATP plugin pre-configured based on the array type is “VMW_SATP_CX” and the multipathing policy is “VMW_PSP_MRU”.
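For reference, these are the ESXi 4.1 esxcli commands I am planning to use from the Tech Support Mode shell to inspect and, if you advise it, change the claiming (the naa device ID is a placeholder and the SATP/PSP values are only my current assumption pending your feedback):

- esxcli nmp satp listrules (shows the claim rules that map array types to SATPs)
- esxcli nmp device list (shows the SATP and PSP currently claiming each device)
- esxcli nmp satp setdefaultpsp --satp VMW_SATP_ALUA_CX --psp VMW_PSP_FIXED_AP (sets the default PSP for the ALUA SATP)
- esxcli nmp device setpolicy --device naa.6006016... --psp VMW_PSP_FIXED_AP (changes the policy on a single device)

My working theory for the discrepancy above is that the SATP depends on the failover mode the host initiators were registered with on the array: with failovermode 1 (active/passive) the devices would be claimed by “VMW_SATP_CX” with MRU, whereas failovermode 4 (ALUA) should let “VMW_SATP_ALUA_CX” claim them. Please correct me if this is wrong.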

3) With regard to the recommended storage multipathing and failover configuration for the Windows Server 2008 R2 host, based on my understanding it should be enough to install the native Windows MPIO feature, register the server with the array using the Unisphere Server Utility or the Unisphere Host Agent, and finally set Windows MPIO to claim the VNX devices, without any additional MPIO configuration. Is my understanding correct?
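For reference, on the Windows host I am planning to use the built-in mpclaim.exe from an elevated command prompt. I am assuming the “DGC” vendor string is what CLARiiON/VNX devices report, and I will confirm the exact 8-character-padded vendor plus product string from the enumeration before claiming anything:

- mpclaim -e (enumerates storage devices and their vendor/product hardware IDs)
- mpclaim -r -i -d "DGC     VRAID" (claims the listed VNX device type for Microsoft MPIO and reboots; the product string here is just my placeholder)

Is this consistent with what you would recommend on top of the Unisphere Server Utility registration?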

4) With regard to configuring the system write/read cache with Unisphere, I have noticed that it lets the administrator partition the available memory between write and read cache as needed. I understand the best answer is probably “it depends” on the workload and a number of other factors, but I hope someone could give me a starting point for initially enabling and sizing the system write/read cache (the amount of free memory I currently have on each SP is about 800 MB).
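For reference, I am planning to check the current values from the CLI before touching anything in Unisphere. I am assuming the classic CLARiiON-style naviseccli cache commands still apply to VNX Block OE, and the sizes below are only my placeholder starting point, based on the rule of thumb I have read that most of the free memory goes to write cache (since write cache is mirrored between the SPs) with the remainder as read cache:

- naviseccli -h <SP_A_IP> getcache (shows the current cache state and sizes)
- naviseccli -h <SP_A_IP> setcache -wc 1 -rca 1 -rcb 1 -wsz 600 -rsza 100 -rszb 100 (example only: enables the caches and sets roughly 600 MB write / 100 MB read per SP)

Does that split sound like a sensible starting point for a general-purpose vSphere workload?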

==================================================

I would really appreciate it if someone could review this discussion and share their considerations/field experience.

Regards,

Massimiliano

1 Reply
cgermain

Re: VNX5100 system configuration optimization for VMware vSphere 4.1 infrastructure

Hello,

I will be out of the office on vacation Monday, August 8, 2011 through Friday, August 12, 2011, returning Monday, August 15, 2011.

Should you need immediate assistance, please contact the help desk at 1-877-362-4832.

Thank You

Chris Germain

Delaware North Companies Inc.

Sr. Systems Engineer

Infrastructure Services

716-858-5391

cgermain@dncinc.com
