
July 31st, 2012 11:00

PS4100X - Initial Setup Configuration (vSphere 5)

Hello all

We have just bought a PS4100X with 12 x 600GB 10K SAS drives (7.2TB raw).

This device is due to replace our existing PowerVault MD3000i, which is being used in our VMware ESX environment.

What RAID / ESX datastore configuration would people recommend?

Currently we have the following set up, for example:

2 x RAID 10 disk groups with a 1TB virtual disk on each.

The two 1TB virtual disks are then presented to ESX and used by two datastores.

Cheers in advance.

5 Practitioner • 274.2K Posts

July 31st, 2012 12:00

I would suggest breaking up the 1TB volumes into smaller ones, e.g. 600GB (depending on the size and number of the VMs on each datastore). Each volume has a defined queue depth, typically 32, so more volumes mean more outstanding IOs can be in flight. Also, there are times when ESX must lock (reserve) a volume, and while it is reserved no other node can do IO to that volume. More volumes lessen that issue, since IO to the other volumes can keep going.
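If you want to sanity-check the queue depth for a volume on a host, here is a rough sketch (the naa name below is a placeholder for one of your EqualLogic volumes; on builds where esxcli doesn't expose the field, the DQLEN column in esxtop's disk device view gives the same number):

# esxcli storage core device list -d naa.6090a0xxxxxxxxxx | grep -i queue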

Here are some best practices

1.)  Delayed ACK DISABLED

2.)  Large Receive Offload (LRO) DISABLED

3.)  Make sure you are using either VMware Round Robin (with IOs per path changed to 3) or, preferably, MEM 1.1.0 (see the commands after this list).

4.)  If you have multiple VMDKs (or RDMs) in a VM, each of them (up to 4) needs its own virtual SCSI adapter.

5.)  Update to the latest build of ESX to get the latest NIC drivers.

6.)  Don't use too few volumes for the number of VMs you run.
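For reference, a rough sketch of what items 1 and 3 can look like from the ESXi 5.x console. The vmhba and naa names are placeholders for your iSCSI software adapter and an EqualLogic volume; on some builds Delayed ACK is only changeable through the vSphere Client (iSCSI initiator properties > Advanced), so treat the first command as a way to check the current value rather than gospel.

To list the iSCSI adapter parameters (DelayedAck is one of them):

# esxcli iscsi adapter param get --adapter=vmhba33

To set the path selection policy on a volume to Round Robin:

# esxcli storage nmp device set --device=naa.6090a0xxxxxxxxxx --psp=VMW_PSP_RR

To change the IOs per path on that volume from the default 1000 to 3:

# esxcli storage nmp psp roundrobin deviceconfig set --device=naa.6090a0xxxxxxxxxx --type=iops --iops=3

MEM 1.1.0 handles the pathing for you, which is why it's preferred if you can install it.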

Here's some source material on Delayed ACK:

kb.vmware.com/.../search.do

virtualgeek.typepad.com/.../a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html

How to disable LRO

HOWTO: Disable Large Receive Offload (LRO) in ESX v4/v5

Within VMware, the following command will query the current LRO value:

# esxcfg-advcfg -g /Net/TcpipDefLROEnabled

To set the LRO value to zero (disabled):

# esxcfg-advcfg -s 0 /Net/TcpipDefLROEnabled

NOTE: a server reboot is required.

Info on changing LRO in the Guest network.

docwiki.cisco.com/.../Disable_LRO

How to add a virtual SCSI adapter:

kb.vmware.com/.../search.do

5 Practitioner • 274.2K Posts

September 25th, 2012 08:00

Re: Guest iSCSI vs ESX iSCSI:  

For me, the decision point is whether or not your application can take advantage of the EQL HIT kits, i.e. applications like Exchange, SQL, SharePoint, etc. The benefit is clear enough that it makes sense to use the MS iSCSI initiator to connect to those volumes vs. the ESX iSCSI initiator. Also, if your backup program supports transportable (AKA off-host) backups, then again the guest initiator is a clear advantage.

The benefit of multiple Virtual SCSI adapters is performance.  

Re: NICs. Correct; unless we're talking about 10GbE NICs, having dedicated NICs for ESX iSCSI traffic and guest iSCSI traffic is a best practice.

16 Posts

July 31st, 2012 13:00

Thanks for this detailed information.

Based on 12 x 600GB drives, how would you suggest creating the RAID sets / LUNs? I.e. would you create 3 x 4-disk RAID 10 LUNs, with a datastore on each, or would you create a mix of, say, RAID 5 and RAID 6?

I get the comment regarding splitting up into smaller volumes, but I want to ensure good RAID protection and ESX performance.

5 Practitioner • 274.2K Posts

July 31st, 2012 14:00

I would not use RAID 5 for ESX (or any random-write environment); use RAID 10 or RAID 50. RAID 5 is also the most vulnerable to a double fault.

You also want to keep free space on the array, and on the VMFS volumes as well. ESX needs some room to write logs, snapshots, swapfiles, etc. That's another reason I like 600GB volumes: I make sure to leave about 100GB free on each. A bit of overkill, but after having some snapshots fail to delete, they grew until all space was consumed, shutting down the VMs.

I strongly suggest making sure the array is upgraded to firmware 5.2.4-H1 and that ESX is upgraded to the latest build as well.

Also, you can't mix (or split) RAID types within a single member array: one array, one RAID level.

16 Posts

August 26th, 2012 04:00

Thanks for the info.

I have created a 4.7TB array on my SAN, using RAID 6 and a hot spare.

I was advised to create volumes and present them directly to the guest OS inside the VM environment, but I was just going to create datastores, as my environment has low I/O requirements.

This being the case, are there any best practices for creating datastores, or do I just follow the ESX best practices for datastores?

cheers

John

5 Practitioner • 274.2K Posts

August 26th, 2012 07:00

Presenting volumes directly to the guest has advantages beyond performance.

E.g. in Windows SQL and Exchange environments, the Windows HIT kit provides huge benefits for backups, restores (especially individual mailboxes), and database maintenance.

If you are using a backup program like Backup Exec which supports off-host backups, snapshots will be mounted to the Backup Exec server automatically instead of going through an agent, resulting in faster backups and lower load on the host server. That's not possible with VMFS volumes.

So you don't have to do it exclusively one way or another.  

Re: datastores. Create multiple, smaller datastores vs. one or two huge ones. The amount of I/O that can be performed to one volume by a given host is finite. Also, there are IO operations for which ESX will exclusively lock a volume to a node for a short period of time; while that's occurring, IO from the other nodes to that volume is delayed. More VMFS volumes mean that IO will still flow to the other volumes.

In the VM settings, when you have multiple VMDKs (or RDMs), create a new virtual SCSI adapter for each VMDK/RDM (up to 4 SCSI controllers per VM max). This will greatly increase IO capability in the VM. For the same reason, don't use multiple partitions on a single VM disk; always create a new VMDK instead. It allows the VM to push out more I/O at once, since, just like a volume, the IO to a 'disk' is limited by the command tag queue value. When you add the VMDK, on the right side of the wizard is "Device Node". The first VMDK will be 0:0 (SCSI controller 0, target 0). When you add the next disk it will default to 0:1; change that to 1:0. (You have to scroll down the list going from 0:1 through 0:15 before 1:0 will show up.) When you select 1:0, it will create a new virtual SCSI controller for you. The next VMDK will then be 2:0, and so on.

If you have to have more than four VMDKs, spread the heavier-IO VMDKs across the controllers alongside the lighter ones.

On the boot VMDK you want to use the LSI Logic virtual SCSI adapter; on the other VMDKs, change that to "Paravirtual". This is also in the VM settings: select the other virtual adapter and you'll see "Change Type" in the upper right-hand corner. Make sure you install the VM tools after the OS is loaded to support that adapter. That driver type is optimized for IO inside ESX.
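For reference, a rough sketch of what the VM's .vmx file ends up containing once the boot disk is on an LSI Logic controller and a second data VMDK sits on its own Paravirtual controller (the datastore path, VM name, and file names here are made up; the vSphere Client writes these entries for you when you pick device node 1:0 and change the controller type):

# grep -E "^scsi" /vmfs/volumes/datastore1/myvm/myvm.vmx

Typical entries:

scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.fileName = "myvm.vmdk"
scsi1.present = "TRUE"
scsi1.virtualDev = "pvscsi"
scsi1:0.fileName = "myvm_1.vmdk"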

If you google "esx paravirtual" there are some good articles about it.

Earlier in this thread is some info on turning off Delayed ACK and LRO. Additionally, you need to change the iSCSI login timeout, and if you can't install the Dell MEM package, change the pathing type from FIXED to Round Robin and change the IOs-per-path value from 1000 to 3.

This thread has info on that.

en.community.dell.com/.../20128179.aspx
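If you end up on Round Robin rather than MEM, a quick way to confirm the settings took effect on a volume (again, the naa name is a placeholder for one of your volumes):

# esxcli storage nmp device list --device=naa.6090a0xxxxxxxxxx

The output should show Path Selection Policy: VMW_PSP_RR, and the Path Selection Policy Device Config line should include iops=3.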

Regards,

5 Practitioner • 274.2K Posts

August 26th, 2012 08:00

Ideally, you would have a separate vSwitch and physical NICs for that VM iSCSI traffic.   It's not required, since in most installations, you don't max out the network bandwidth.

16 Posts

August 26th, 2012 08:00

One thing I did wonder: if you are presenting volumes directly to the VM, the VM will have to share its vSwitch between a network port group and iSCSI traffic. Does this not affect performance? I have separate vSwitches for iSCSI traffic, and the network port groups are on another vSwitch. Or did I miss something?

16 Posts

August 26th, 2012 08:00

Cheers Don, food for thought.

16 Posts

August 26th, 2012 10:00

"Ideally, you would have a separate vSwitch and physical NICs for that VM iSCSI traffic. It's not required, since in most installations, you don't max out the network bandwidth."

Sorry, I didn't quite follow. Currently I have physical NICs hanging off one vSwitch for network connectivity and additional physical NICs attached to another vSwitch that carries the iSCSI traffic.

The iSCSI traffic runs through physically separate switches, therefore a Windows 2003 server, for example, cannot see an iSCSI target on the SAN even if I created a volume for it.

Hope that makes sense.

cheers

John

16 Posts

August 26th, 2012 10:00

Also:

Where you say "In the VM settings, when you have multiple VMDKs (or RDMs), create a new virtual SCSI adapter for each VMDK/RDM."

Is this not the standard action when you create an additional virtual disk?

cheers

John

5 Practitioner • 274.2K Posts

August 26th, 2012 13:00

No, that's what it should do, but you have to do it manually. Otherwise it hangs them all off one SCSI controller.

5 Practitioner • 274.2K Posts

August 26th, 2012 13:00

You would create a new vSwitch and add physical NICs to it. Those NICs would be in the same subnet as the ESX iSCSI NICs, so those VMs would be able to access the SAN.

That way you don't share physical NICs between ESX iSCSI and guest iSCSI.
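A minimal sketch of that from the ESXi 5.x console (vSwitch2, vmnic4, and the GuestISCSI port group name are just examples; the same can be done in the vSphere Client networking screens):

# esxcli network vswitch standard add --vswitch-name=vSwitch2
# esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch2
# esxcli network vswitch standard portgroup add --portgroup-name=GuestISCSI --vswitch-name=vSwitch2

Then attach a second virtual NIC in the VM to the GuestISCSI port group and give the guest an IP on the iSCSI subnet so the MS iSCSI initiator can reach the SAN.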

16 Posts

September 25th, 2012 04:00

One final question: is it better to create volumes and present them to the VMs using the method above, as in "In the VM settings, when you have multiple VMDKs (or RDMs), create a new virtual SCSI adapter for each VMDK/RDM (up to 4 SCSI controllers per VM max)"?

Or do it via the direct mapping on the vSwitch as you suggested: "You would create a new vSwitch and add physical NICs to it. Those NICs would be in the same subnet as the ESX iSCSI NICs, so those VMs would be able to access the SAN. That way you don't share physical NICs between ESX iSCSI and guest iSCSI."

I guess it depends on the applications?

cheers

John

16 Posts

September 25th, 2012 04:00

I get it, sorry, I've been away.

So the traffic is separated by vSwitch but still reachable over the network.

Sorry for being so slow.
