
July 6th, 2015 06:00

EQL volume size and usage in ESXi

Hi,

I've been going through the forums looking at posts pertaining to volume size on the EqualLogic.

For the most part it looks like smaller sizes are recommended, but I also see plenty of posts saying that this varies greatly depending on the intended use for the space.

http://en.community.dell.com/support-forums/storage/f/3775/t/19540581

http://en.community.dell.com/support-forums/storage/f/3775/t/19537117

I inherited two EqualLogic arrays that are currently carved up into large volumes of either 4 or 8 TB.

I am looking to rebuild a file server that we have, and I require 8 TB of space.

What is the best approach to provide a large amount of space for a file server? Are there any documents that cover recommendations for server/application types and how to provision the space?

I initially started off with a single 8 TB volume that I made available to our ESXi infrastructure, formatted, and added to our Linux file server. I was immediately greeted with a warning from ESXi that there was no space left in that datastore. I then started reading about the dangers of filling datastores to 100%.

So now I am sort of back to square one, trying to figure out the best way to provide a large amount of space to a server while still maintaining best practices for EqualLogic and ESXi.

My next option is to go with smaller volumes and datastores, and use LVM to combine these inside the Linux OS itself. But that poses its own challenges (a rough sketch of what I mean is below).
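For what it's worth, here is a minimal sketch of the LVM route I'm describing, in Python purely as illustration. The device names and the volume group/logical volume names are placeholders, not anything from a real config; each /dev/sdX would be a virtual disk sitting on its own smaller datastore.

    import subprocess

    # Hypothetical guest devices, each backed by a VMDK on a separate datastore.
    DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]
    VG, LV = "vg_filedata", "lv_filedata"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Initialise each disk as an LVM physical volume.
    for disk in DISKS:
        run(["pvcreate", disk])

    # Pool them into one volume group, then carve a single logical volume
    # spanning all of the free space.
    run(["vgcreate", VG] + DISKS)
    run(["lvcreate", "-l", "100%FREE", "-n", LV, VG])

    # Format it; the guest then sees one large filesystem.
    run(["mkfs.ext4", "/dev/{}/{}".format(VG, LV)])

The obvious catch is that a plain linear volume group like this is lost if any one of the underlying datastores is lost, which is part of the challenge I mentioned.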

I know this is a loaded question because it varies so much based on a bunch of different factors. But any tips or information would be helpful.

Cheers

56 Posts

July 6th, 2015 06:00

Honestly I'd probably just go with in-guest iSCSI for this purpose and take VMware out of the picture entirely.  We do this on both Windows and Linux with good results, and we have volumes provisioned up to the maximum (15 TB).  The Linux and Windows HIT kits will help with setting this up, especially MPIO.
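For reference, the basic in-guest connection flow with open-iscsi looks roughly like the sketch below (Python is just a wrapper for illustration). The group IP and target IQN are placeholders, and the access/CHAP setup on the array plus the HIT kit's MPIO configuration are not shown.

    import subprocess

    # Placeholders -- substitute your group IP and the volume's target IQN.
    GROUP_IP = "10.10.10.10"
    TARGET_IQN = "iqn.2001-05.com.equallogic:0-example-volume"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Discover the targets presented by the EqualLogic group.
    run(["iscsiadm", "-m", "discovery", "-t", "st", "-p", GROUP_IP])

    # Log in to the volume; it then appears in the guest as a normal block device.
    run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", GROUP_IP, "--login"])

    # Make the session persistent across reboots.
    run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", GROUP_IP,
         "--op", "update", "-n", "node.startup", "-v", "automatic"])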

10 Posts

July 6th, 2015 06:00

Interesting that you mention this.

After posting I continued my research and stumbled upon this - http://blog.stephendolphin.co.uk/project-work/using-the-dell-eql-mem-module-to-simplify-my-backups-also-thanks-again-veeam/

The article also mentions bypassing VMware altogether. I honestly hadn't even thought of this.

I will look into the HIT kits, I've not heard of these before.

In the article above it mentions that it adds some complexity to the solution. 

Your experience has been good using this method?

56 Posts

July 6th, 2015 08:00

It has been overall good.  It does buy you some advantages like being able to snapshot or replicate the guest volume independent of the OS or VMware.  I suppose a slight disadvantage is it does add some additional setup time, for example to configure the network since the guest needs access to the SAN subnet.

10 Posts

July 6th, 2015 08:00

Yeah that makes sense.

I was just reading about the HIT Kit. It sounds like it offers some interesting options.

For Linux support I see that they only support Red Hat Enterprise Linux 6.5.

I would like to run this on CentOS 7. Scanning through the forums it seems there is zero official support for CentOS.

Not sure if I should take the risk of running the kit on an unsupported platform.

5 Practitioner • 274.2K Posts

July 6th, 2015 09:00

Hello,

Re: CentOS. It is not a supported distro, I'm sorry.  RHEL v7.x and OEL v7.x support will come later this year.

Re: Volume size.  Hypervisors add a level of complexity regarding volume size.  It's entirely based on the I/O load intended for that volume.  I wouldn't stack Exchange and SQL servers on a single large volume, for example.  Each volume connection negotiates a queue depth with the initiator.  Typical values today are 64 to 128 commands max at one time.  When the queue fills, I/O stops until the storage processes off commands in the queue.  So the more VMs and nodes accessing a single volume, the more often you will likely hit this limit.  Also, there are certain I/O operations on VMFS Datastores that require a node to assert an exclusive SCSI-2 reservation.   VAAI provides Atomic Test & Set (ATS), which helps limit these events, but they still occur.  When one node locks the volume, ALL other nodes have to wait until that node releases it.   So if all your VMs were on one gigantic volume this could become a bottleneck.  Multiple smaller volumes mean that when one or even two Datastores are locked, the others can still be active.

For a file server this isn't typically the same concern, unless it is under extraordinary load; then the queue limit could be a problem.  Again, more volumes would help limit this.
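To make the queue arithmetic concrete, here is a rough back-of-envelope sketch; every number in it is made up purely for illustration.

    # Illustrative only -- real queue depths and I/O loads vary by environment.
    queue_depth = 64          # commands the volume connection accepts at once
    vms_on_datastore = 10     # VMs sharing the single datastore
    outstanding_per_vm = 16   # I/Os each VM keeps in flight under load

    offered = vms_on_datastore * outstanding_per_vm
    print("Offered I/O:", offered, "vs. queue depth:", queue_depth)
    if offered > queue_depth:
        factor = offered / queue_depth
        print("Queue oversubscribed {:.1f}x; excess commands wait,".format(factor))
        print("so latency climbs for every VM on that volume.")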

Please do make sure your ESXi servers are configured per our best practices Tech Report.

en.community.dell.com/.../download

Regards,

Don

10 Posts

July 6th, 2015 12:00

Thanks for the detailed response.

I understand about the lack of CentOS support.

I just need to remember that if I ever intend to use the HIT kit.

When you talk about the 64 to 128 commands max, is this the queue depth? So seeing spikes in queue depth up to 40 or 50 isn't all bad?

I think I am beginning to see the full picture a little bit more. I am just trying to keep all the variables straight. Given what you said, going with a single large volume just for data storage may not be an entirely bad thing as long as the total I/O required isn't going to be insane. Unfortunately I don't have a volume that is operating in this manner currently, so I have no data to go off of.

But this has certainly given me lots to think about!

Cheers

5 Practitioner • 274.2K Posts

July 6th, 2015 12:00

Re: Queue.  That's a feature of SCSI.  Each device negotiates that depth value at connection time. TCQ (Tagged Command Queuing) has been around for quite some time.  With SCSI only one device (disk) on a chain can be active at one time.  So while connected you want to send the drive as much data as possible and get back as much as possible during that sequence, then move on to the next disk.

This holds true inside the ESXi hypervisor too.  If you have multiple VMDKs in a VM, by default they share one Virtual SCSI adapter.  The adapter negotiates a queue depth with each virtual disk.  So adding more Virtual SCSI adapters to each VM (up to 4 max) will increase the I/O capability of that VM, and provide more I/O to the storage as well.

How to do that is covered in the Tech Report I referenced earlier.
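If you prefer to script it rather than click through the vSphere client, one possible way with pyVmomi is sketched below. The vCenter address, credentials, and VM name are placeholders, and this is only a rough example, not a supported procedure.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details.
    si = SmartConnect(host="vcenter.example.local",
                      user="administrator@vsphere.local",
                      pwd="changeme",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Locate the VM by name (placeholder name).
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "fileserver01")

    # Add a second paravirtual SCSI controller on bus 1.
    ctrl = vim.vm.device.ParaVirtualSCSIController()
    ctrl.key = -101   # temporary negative key for a new device
    ctrl.busNumber = 1
    ctrl.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    change.device = ctrl

    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
    Disconnect(si)

New virtual disks can then be attached to that controller, so the OS, data, and log disks each get their own adapter and queue.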

Re: Datastores.  Correct, it depends entirely on what that Datastore is going to be used for.  If they are really low I/O VMs, i.e. you are hosting hundreds of small web servers (non-commerce), a single Datastore would likely work fine.  I prefer more, smaller Datastores, especially if the VMs are I/O intensive or you are using VMware snapshots for long periods of time.  Since snapshots eat up space, if one did use up all free space, that's only one Datastore affected.

Regards,

Don

5 Practitioner • 274.2K Posts

July 6th, 2015 14:00

No problem asking more questions.

re: Multiple SCSI adapters.  Yes, it will allow more I/O to be handled concurrently.  So yes, it could add more I/O to the EQL volume.  It's not identical to two VMs with one adapter each, especially as you scale up.

Since the I/O patterns won't likely be the same, i.e. C: for OS, D: for data, E: for logs, giving each its own adapter will help ensure the best possible performance.

For higher I/O VMs I still don't do one VM per Datastore; that tends to be a waste of space.  vSphere 6.0 with EQL v8.0 firmware supports VMware VVOLs.  This allows you to have individual volumes for each virtual disk without having to manage them that way; it appears like a VMFS Datastore.  Or using Storage Direct, where the VM OS handles the iSCSI/SCSI process, is another option.  That gives you the potential to use the HIT kit on supported OSs and remove the VMFS Datastore overhead.

SANHQ is your friend here.  It will help you monitor by volume.  SANHQ can be downloaded from the eqlsupport.dell.com website.

Don

10 Posts

July 6th, 2015 14:00

Re: This holds true inside the ESXi hypervisor too.  If you have multiple VMDKs in a VM, by default they share one Virtual SCSI adapter.  The adapter negotiates a queue depth with each virtual disk.  So adding more Virtual SCSI adapters to each VM (up to 4 max) will increase the I/O capability of that VM, and provide more I/O to the storage as well.

Sorry to keep asking questions, but I was just wondering: you mention that we can add additional Virtual SCSI adapters, which results in additional I/O capability (is this the same as I/O capacity?) for the VM. Does this also add additional I/O pressure on the underlying EqualLogic volume? For example, if I had a single VM with two Virtual SCSI adapters, is that the same as having two different VMs with a single Virtual SCSI adapter each?

Or am I getting things mixed up?

Re: Datastores

So a very general rule of thumb could be, if you have low I/O VMs, having multiple VMs in a single volume is relatively safe.

If you have higher I/O VMs then it is better to create individual volumes to handle those VMs.

This is very general of course.

10 Posts

July 7th, 2015 09:00

Re: 

Multiple SCSI adapters.  Yes, it will allow more I/O to be handled concurrently.  So yes, it could add more I/O to the EQL volume.  It's not identical to two VMs with one adapter each, especially as you scale up.

Since the I/O patterns won't likely be the same, i.e. C: for OS, D: for data, E: for logs, giving each its own adapter will help ensure the best possible performance.

This makes sense. Thank you for explaining. A colleague of mine actually suggested this very thing. Interesting topic; I will have to read more into it.

Re: For higher I/O vms

This makes sense as well. I guess it all comes down to balancing. I haven't seen or read anything about the new version of vSphere or EQL. Something to add to the list.

I do have SANHQ currently running, and with the details you have provided I have been going through and looking at the data. I will be honest, I am having a bit of a hard time discerning what it all means. But currently nothing is jumping out at me as being an obvious issue. I will be interested to see how this new volume fares.

Re: HIT Kit

I am curious about the situations in which it is recommended to use this type of interface over going through VMware. The gentleman who originally responded to this post suggested that I use the HIT Kit and connect directly to the large volume I have.

It appears I will be going a different direction with my file server after all (at the OS level anyway) so using a HIT Kit may become an option.

5 Practitioner • 274.2K Posts

July 7th, 2015 10:00

Re: HIT kit.  The largest benefit is in Windows SQL, Exchange, and SharePoint environments.  The HIT/ME (Microsoft Edition) provides a host of benefits in that environment, i.e. you can use the HIT GUI to restore mailbox(es) from an EQL HW snapshot without touching the Exchange or EQL GUI.  It creates the recovery group, mounts the snapshot of the inbox to the user's mailbox, etc.  It integrates tightly into MS SQL as well, with similar kinds of features.

HIT for Linux (HIT/LE) has some MSSQL integration and filesystem support for "freeze" to create much more consistent snapshots and replicas.  It also allows you to run UNMAP (aka space reclaim) on supported filesystems (like ext4).

The biggest benefit of HIT/LE is the EQL enhanced MPIO, which replaces the stock multipathd.  Especially when you have multi-member pools, the MPIO code provides much greater performance than multipathd.  EQL FW v6.0 or greater is required to support UNMAP; EQL does support UNMAP when the host sends the proper command.
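As a rough illustration of the reclaim side, on a supported filesystem it boils down to a periodic trim pass over the mount point; the mount point below is a placeholder.

    import subprocess

    # Placeholder mount points for ext4 filesystems on EQL volumes.
    MOUNTS = ["/data"]

    for mount in MOUNTS:
        # fstrim tells the filesystem to send UNMAP/TRIM for unused blocks,
        # so the array can reclaim that thin-provisioned space.
        result = subprocess.run(["fstrim", "-v", mount],
                                check=True, capture_output=True, text=True)
        print(result.stdout.strip())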

Note: right now UNMAP with VVOLs is NOT supported.  That's expected in a future release of EQL FW.

re: SANHQ.  There is a section in the Admin guide that talks about how to interpret the performance data SANHQ provides.

SANHQ v3.1 also has support for VVOLs, so you will be able to track individual disk performance in each VM, which I think is going to be very helpful.

Regards,

Don
