Unsolved
July 6th, 2015 06:00
EQL volume size and usage in ESXi
Hi,
I've been going through the forums looking at posts pertaining to volume size on the equallogic.
For the most part it looks like smaller sizes are recommended, but I also see a lot of posts saying this varies greatly depending on the intended use for the space.
http://en.community.dell.com/support-forums/storage/f/3775/t/19540581
http://en.community.dell.com/support-forums/storage/f/3775/t/19537117
I inherited 2 equallogics that are currently carved up into large volume sizes which are either 4 or 8 TB.
I am looking to rebuild a file server that we have and I require 8TB of space.
What is the best approach to provide a large amount of space for a file server? Are there any documents that cover recommendations for server/application types and how to provision the space?
I initially started off with a single 8TB volume that I then made available to our ESXi infrastructure, formatted and added it to our Linux file server. I was immediately greeted with a warning from ESXi about there being no space left in that datastore. I then started reading about the dangers of filling up datastores 100%.
So now I am sort of back to square one, trying to figure out the best way to provide a large amount of space to a server while still maintaining best practices for EqualLogic and ESXi.
My next option is to go with smaller volumes and datastores, and use LVM to combine them inside the Linux OS itself. But that poses its own challenges.
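For what it's worth, the LVM approach would look roughly like this. This is only a sketch: `/dev/sdb`, `/dev/sdc`, `/dev/sdd` and the `vg_data`/`lv_data` names are placeholders for whatever the smaller datastore-backed virtual disks actually show up as in the guest.

```shell
# Assumed: three virtual disks, each backed by its own smaller
# EQL volume/datastore, presented to the Linux VM.
pvcreate /dev/sdb /dev/sdc /dev/sdd          # mark each disk as an LVM physical volume
vgcreate vg_data /dev/sdb /dev/sdc /dev/sdd  # pool them into one volume group
lvcreate -l 100%FREE -n lv_data vg_data      # one logical volume spanning the pool
mkfs.xfs /dev/vg_data/lv_data                # format it for the file share
mount /dev/vg_data/lv_data /srv/share
```

Growing later would be a matter of presenting another disk, then `vgextend`, `lvextend`, and `xfs_growfs`, which is part of why people put up with the extra moving parts.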
I know this is a loaded question because it varies so much based on a bunch of different factors. But any tips or information would be helpful.
Cheers



bealdrid
July 6th, 2015 06:00
Honestly, I'd probably just go with in-guest iSCSI for this purpose and take VMware out of the picture entirely. We do this on both Windows and Linux with good results, and we have volumes provisioned up to the max (15 TB). The Linux and Windows HIT kits will help with setting this up, especially MPIO.
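For anyone following along, the bare-bones in-guest attach with the stock open-iscsi tools looks something like this. It's a sketch only: the group IP 10.0.0.10 and the IQN are placeholders, and the HIT kit layers MPIO and EQL-aware tuning on top of this basic flow.

```shell
# Discover targets on the EQL group IP (placeholder address)
iscsiadm -m discovery -t sendtargets -p 10.0.0.10:3260
# Log in to the volume's target (placeholder IQN)
iscsiadm -m node -T iqn.2001-05.com.equallogic:example-volume \
    -p 10.0.0.10:3260 --login
# The volume then appears in the guest as a regular block device, e.g. /dev/sdb
```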
bourne553
July 6th, 2015 06:00
Interesting that you mention this.
After posting I continued my research and stumbled upon this - http://blog.stephendolphin.co.uk/project-work/using-the-dell-eql-mem-module-to-simplify-my-backups-also-thanks-again-veeam/
The article also mentions bypassing VMware altogether. I honestly hadn't even thought of this.
I will look into the HIT kits, I've not heard of these before.
In the article above it mentions that it adds some complexity to the solution.
Your experience has been good using this method?
bealdrid
July 6th, 2015 08:00
It has been good overall. It does buy you some advantages, like being able to snapshot or replicate the guest volume independently of the OS or VMware. I suppose a slight disadvantage is that it adds some setup time, for example configuring the network, since the guest needs access to the SAN subnet.
bourne553
July 6th, 2015 08:00
Yeah that makes sense.
I was just reading about the HIT Kit. It sounds like it offers some interesting options.
For Linux support, I see that they only support Red Hat 6.5.
I would like to run this on CentOS 7. Scanning through the forums, it seems there is zero official support for CentOS.
Not sure if I should take the risk of running the kit on an unsupported platform.
bourne553
July 6th, 2015 12:00
Thanks for the detailed response.
I understand about the lack of CentOS support.
I just need to remember that if I ever intend to use the HIT kit.
When you talk about the 64 to 128 commands max, is that the queue depth? So seeing spikes in queue depth up to 40 or 50 isn't all bad?
I think I am beginning to see the full picture a little bit more. I am just trying to keep all the variables straight. Given what you said, going with a single large volume just for data storage may not be an entirely bad thing, as long as the total I/O required isn't going to be insane. Unfortunately I don't have a volume operating in this manner currently, so I have no data to go off of.
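As a side note, Linux exposes the negotiated per-device queue depth in sysfs, so there is a quick way to check it from inside a guest (a sketch; device names and whether the attribute is exposed will vary by driver):

```shell
# Print the queue depth for each SCSI block device, where exposed
for q in /sys/block/*/device/queue_depth; do
    [ -e "$q" ] || continue
    dev=$(basename "$(dirname "$(dirname "$q")")")  # e.g. sda
    echo "$dev: $(cat "$q")"
done
```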
But this has certainly given me lots to think about!
Cheers
bourne553
July 6th, 2015 14:00
Re: This holds true inside the ESXi hypervisor too. If you have multiple VMDKs in a VM, by default they share one virtual SCSI adapter. The adapter negotiates a queue depth with each virtual disk. So adding more virtual SCSI adapters to a VM (up to 4 max) will increase the I/O capability of that VM, and provide more I/O to the storage as well.
Sorry to keep asking questions, but I was just wondering: you mention that we can add additional virtual SCSI adapters, which results in additional I/O capability (is this the same as I/O capacity?) for the VM. Does this also add additional I/O pressure on the underlying EqualLogic volume? For example, is a single VM with two virtual SCSI adapters the same as two different VMs with a single virtual SCSI adapter each?
Or am I getting things mixed up?
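(Side note, mostly so I can check this myself later: inside the guest, each virtual SCSI adapter shows up as its own SCSI host, so something like this sketch shows which adapter each disk hangs off. The first number in the H:C:T:L address is the host, i.e. the adapter.)

```shell
# Each virtual SCSI adapter appears to the guest as a scsi_host entry
ls /sys/class/scsi_host 2>/dev/null || echo "no SCSI hosts visible"
# Map each disk to its host:channel:target:lun address
for d in /sys/block/sd*; do
    [ -e "$d/device" ] || continue
    echo "$(basename "$d") -> $(basename "$(readlink -f "$d/device")")"
done
```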
Re: Datastores
So a very general rule of thumb could be, if you have low I/O VMs, having multiple VMs in a single volume is relatively safe.
If you have higher I/O VMs then it is better to create individual volumes to handle those VMs.
This is very general of course.
bourne553
July 7th, 2015 09:00
Re:
Multiple SCSI adapters: yes, it will allow more I/O to be handled concurrently, so it could add more I/O to the EQL volume. It's not identical to two VMs with one adapter each, especially as you scale up, since the I/O patterns likely won't be the same, i.e. C: for OS, D: for data, E: for logs. Giving each its own adapter will help ensure the best possible performance.
This makes sense. Thank you for explaining. A colleague of mine actually suggested this very thing. Interesting topic; I will have to read more into it.
Re: For higher I/O vms
This makes sense as well. I guess it all comes down to balancing. I haven't seen or read anything about the new version of vSphere or EQL. Something to add to the list.
I do have SanHQ currently running, and with the details you have provided I have been going through and looking at the data. I will be honest, I am having a bit of a hard time discerning what it all means, but currently nothing jumps out at me as an obvious issue. I will be interested to see how this new volume fares.
Re: HIT Kit
I am curious: in what situations is it recommended to use this type of interface rather than going through VMware? The gentleman who originally responded to this post suggested that I use a HIT Kit and connect directly to the large volume I have.
It appears I will be going a different direction with my file server after all (at the OS level anyway) so using a HIT Kit may become an option.