bourne553

EQL volume size and usage in ESXi

Hi,

I've been going through the forums looking at posts pertaining to volume size on the equallogic.

For the most part it looks like smaller sizes are recommended, but I also see a lot of posts saying that this varies greatly depending on the intended use for the space.

http://en.community.dell.com/support-forums/storage/f/3775/t/19540581

http://en.community.dell.com/support-forums/storage/f/3775/t/19537117

I inherited two EqualLogics that are currently carved up into large volumes of either 4 or 8 TB.

I am looking to rebuild a file server that we have and I require 8TB of space.

What is the best approach to provide a large amount of space for a file server? Are there any documents that cover recommendations for server/application types and how to provision the space?

I initially started off with a single 8TB volume that I then made available to our ESXi infrastructure, formatted and added it to our Linux file server. I was immediately greeted with a warning from ESXi about there being no space left in that datastore. I then started reading about the dangers of filling up datastores 100%.

So now I am sort of back to square one, trying to figure out the best way to provide a large amount of space to a server while still maintaining best practices for EqualLogic and ESXi.

My next option is to go with smaller volumes and datastores, and use LVM to combine these inside the Linux OS itself. But that poses its own challenges.
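For what it's worth, the LVM side of that is fairly mechanical. A rough sketch, assuming the VMDKs from the smaller datastores show up in the guest as /dev/sdb and /dev/sdc (hypothetical device names, and vg_fileserver/lv_data are names I made up):

```shell
# Initialize each virtual disk (one per smaller datastore) as an LVM physical volume
pvcreate /dev/sdb /dev/sdc

# Group them into a single volume group
vgcreate vg_fileserver /dev/sdb /dev/sdc

# Create one logical volume spanning all free space in the group
lvcreate -l 100%FREE -n lv_data vg_fileserver

# Put a filesystem on it and mount it
mkfs.xfs /dev/vg_fileserver/lv_data
mount /dev/vg_fileserver/lv_data /srv/data
```

The real challenge is less the setup and more the operational side: every disk in the volume group becomes a single point of failure for the whole logical volume, and growing it later means adding another datastore/disk pair and running pvcreate/vgextend/lvextend again.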

I know this is a loaded question because it varies so much based on a bunch of different factors. But any tips or information would be helpful.

Cheers

11 Replies
bealdrid

RE: EQL volume size and usage in ESXi

Honestly I'd probably just go with in-guest iSCSI for this purpose and take VMware out of the picture entirely.  We do this on both Windows and Linux with good results, and we have volumes provisioned up to the max (15TB).  The Linux and Windows HIT kits will help with setting this up, especially MPIO.
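On Linux the raw open-iscsi steps look roughly like this; the group IP and the target IQN below are placeholders you'd replace with your own values, and the HIT kit then layers MPIO and snapshot integration on top:

```shell
# Discover the targets advertised by the EqualLogic group
# (10.0.5.10 is a placeholder for your group IP)
iscsiadm -m discovery -t sendtargets -p 10.0.5.10

# Log in to the volume's target (IQN is a placeholder taken from the discovery output)
iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-example-volume -p 10.0.5.10 --login

# Make the session reconnect automatically across reboots
iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-example-volume \
    -o update -n node.startup -v automatic
```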

bourne553

RE: EQL volume size and usage in ESXi

Interesting that you mention this.

After posting I continued my research and stumbled upon this - http://blog.stephendolphin.co.uk/project-work/using-the-dell-eql-mem-module-to-simplify-my-backups-a...

The article also mentions bypassing VMware altogether. I honestly hadn't even thought of this.

I will look into the HIT kits, I've not heard of these before.

In the article above it mentions that it adds some complexity to the solution. 

Your experience has been good using this method?

bealdrid

RE: EQL volume size and usage in ESXi

It has been overall good.  It does buy you some advantages like being able to snapshot or replicate the guest volume independent of the OS or VMware.  I suppose a slight disadvantage is it does add some additional setup time, for example to configure the network since the guest needs access to the SAN subnet.

bourne553

RE: EQL volume size and usage in ESXi

Yeah that makes sense.

I was just reading about the HIT Kit. It sounds like it offers some interesting options.

For Linux support I see that they only support Red Hat 6.5.

I would like to run this on CentOS 7. Scanning through the forums it seems there is zero official support for CentOS.

Not sure if I should take the risk of running the kit on an unsupported platform.


RE: EQL volume size and usage in ESXi

Hello,

Re: CentOS.  It's not a supported distro, I'm sorry.   RHEL v7.x and OEL v7.x support will come later this year.

Re: Volume size.  Hypervisors add a level of complexity regarding volume size.  It's entirely based on the I/O load intended for that volume.  I wouldn't stack Exchange and SQL servers on a single large volume, for example.

Each volume connection negotiates a queue depth with the initiator.  Typical values today are 64 to 128 commands max at one time.  When the queue fills, I/O stops until the storage processes off commands in the queue.  So the more VMs and nodes accessing a single volume, the more often that limit is likely to be hit.

Also, there are certain I/O operations on VMFS Datastores that require a node to assert an exclusive SCSI-2 reservation.  VAAI provides Atomic Test & Set (ATS), which helps limit these events, but they still occur.  When one node locks the volume, ALL other nodes have to wait until that node releases it.  So if all your VMs were on one gigantic volume this could become a bottleneck.  Multiple smaller volumes mean that when one or even two Datastores are locked, the others can still be active.

For a file server this isn't typically the same concern, unless it is under extraordinary load; then the queue limit could be a problem.  Again, more volumes would help limit this.
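If you want to see where you stand, you can check the negotiated depth and watch live queue usage from the ESXi shell; the device identifier below is a placeholder for one of your own volumes:

```shell
# Show the negotiated maximum queue depth for a device
# (look for the "Device Max Queue Depth" field; the naa ID is a placeholder)
esxcli storage core device list -d naa.6090a028xxxxxxxx

# Watch live queue usage: in esxtop press 'u' for the disk-device view and
# watch DQLEN (queue depth), ACTV (active commands) and QUED (commands waiting)
esxtop
```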

Please do make sure your ESXi servers are configured per our best practices Tech Report.

en.community.dell.com/.../download

Regards,

Don

Social Media and Community Professional
#IWork4Dell
Get Support on Twitter - @dellcarespro

bourne553

RE: EQL volume size and usage in ESXi

Thanks for the detailed response.

I understand about the lack of CentOS support.

I just need to remember that if I ever intend to use the HIT kit.

When you talk about the 64 to 128 commands max, is that the queue depth? So seeing spikes in queue depth up to 40 or 50 isn't all bad?

I think I am beginning to see the full picture a little bit more. I am just trying to keep all the variables straight. Given what you said, going with a single large volume just for data storage may not be an entirely bad thing as long as the total I/O required isn't going to be insane. Unfortunately I don't have a volume that is operating in this manner currently, so I have no data to go off of.

But this has certainly given me lots to think about!

Cheers


RE: EQL volume size and usage in ESXi

Re: Queue.  That's a feature of SCSI.  Each device negotiates that depth value at connection time. TCQ (Tagged Command Queuing) has been around for quite some time.  With SCSI, only one device (disk) on a chain can be active at one time.  So while connected you want to send the drive as much data as possible and get back as much as possible during that sequence, then move on to the next disk.

This holds true inside the ESXi hypervisor too.  If you have multiple VMDKs in a VM, by default they share one virtual SCSI adapter.  The adapter negotiates a queue depth with each virtual disk.  So adding more virtual SCSI adapters in each VM (up to 4 max) will increase the I/O capability of that VM, and provide more I/O to the storage as well.
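In .vmx terms, a second virtual SCSI adapter with its own disk looks roughly like this (paravirtual type shown; the disk file name is illustrative):

```
# First (default) adapter, typically scsi0
scsi0.present = "TRUE"
scsi0.virtualDev = "pvscsi"

# Second adapter with its own queue, holding the data disk
scsi1.present = "TRUE"
scsi1.virtualDev = "pvscsi"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "data-disk.vmdk"
```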

How to do that is covered in the Tech Report I referenced earlier.

Re: Datastores.  Correct, it depends entirely on what that Datastore is going to be used for.  If the VMs were really low I/O, i.e. you were hosting hundreds of small (non-commerce) webservers, a single Datastore would likely work fine.  I prefer "more" smaller Datastores, especially if they are I/O intensive or you are using VMware snapshots for long periods of time.  Since snapshots eat up space, if one did use up all free space, that's only one Datastore affected.

Regards,

Don


bourne553

RE: EQL volume size and usage in ESXi

Re: This holds true inside the ESXi hypervisor too.  If you have multiple VMDKs in a VM, by default they share one virtual SCSI adapter.  The adapter negotiates a queue depth with each virtual disk.  So adding more virtual SCSI adapters in each VM (up to 4 max) will increase the I/O capability of that VM, and provide more I/O to the storage as well.

Sorry to keep asking questions, but I was just wondering: you mention that we can add additional virtual SCSI adapters, which results in additional I/O capability (is this the same as I/O capacity?) for the VM. Does this add additional I/O pressure on the underlying EqualLogic volume? For example, is a single VM with two virtual SCSI adapters the same as two different VMs with a single virtual SCSI adapter each?

Or am I getting things mixed up?

Re: Datastores

So a very general rule of thumb could be, if you have low I/O VMs, having multiple VMs in a single volume is relatively safe.

If you have higher I/O VMs then it is better to create individual volumes to handle those VMs.

This is very general of course.


RE: EQL volume size and usage in ESXi

No problem asking more questions.

re: Multiple SCSI adapters.  Yes, it will allow more I/O to be handled concurrently, so it could add more I/O to the EQL volume.   It's not identical to two VMs with one adapter each, especially as you scale up.

Since the I/O patterns won't likely be the same, i.e. C: for OS, D: for data, E: for logs, giving each its own adapter will help ensure the best possible performance.

For higher I/O VMs I still don't do one VM per Datastore; that tends to be a waste of space.  vSphere 6.0 with EQL v8.0 firmware supports VMware VVols.  This allows you to have individual volumes for each virtual disk, but without having to manage them that way: it appears like a VMFS Datastore.   Or Storage Direct, where the VM OS handles the iSCSI/SCSI process itself, is another option. That gives you the potential to use the HIT kit on supported OSes and remove the VMFS Datastore overhead.

SANHQ is your friend here.   It will help you monitor by volume.  SANHQ can be downloaded from the eqlsupport.dell.com website.

Don

