
February 9th, 2012 12:00

Management Networks and SAN Snapshotting

Hi all!

Long time lurker, first time poster. I must say that I'm very happy with the purchase of our EqualLogic PS4000 series SANs. I've been poring over both EqualLogic and VMware's Best Practice Guides and I have a couple of questions hanging out in my mind that I'm hoping someone can answer. I suspect it might help other folks who have searched for what I'm asking and come up short. I feel like I have quite a few of the pieces, but I need some help putting them together. I hope you read this, Don! :-)

I'm using my SAN solely to host VMware vSphere VMs. I'm using Veeam Backup & Replication 6 to back up these VMs to a NAS box. It's been working great! Veeam is running off a physical server connected to the SAN with dual gigabit NICs. I'm using the 'SAN Mode' backup (their name for an off-host backup) to snapshot the VMs and dump them on my NAS box.

...now for the EqualLogic questions!

  1. I understand I can use my EqualLogic SAN to snapshot a LUN. I also understand that I can set up replica LUNs, and I can replicate LUNs between two EqualLogic boxes. The LUNs I have carved out are solely for VMFS. Considering I'm using Veeam with the SAN Mode backup option - is there any reason why I'd want to reserve space for snapshots on my EqualLogic? Right now I have the default 20% reserve, and for my situation it seems like wasted space. Am I missing something here? Note that I get de-duplication and compression with Veeam!
  2. My PS4000 has a dedicated 100Mbit management port that I'm not able to use for iSCSI traffic. My plan was to VLAN off a management network for it on my switches that I'm solely using for my VM hosts and the SAN. Then I found out that my dedicated management port would need to be on a separate subnet from my other interfaces. Now I'm thinking about not even using that dedicated 100Mbit management port and just doing management through the iSCSI group IP address. To me it seems like routing that management traffic is not worthwhile. Are there negatives associated with managing the EqualLogic through the iSCSI group IP address that I'm missing?
  3. My EqualLogic is on a physically separated network segment with no route to the Internet or my production network. I like this idea - but then I cannot use NTP or SMTP. Of course I've manually set the time correctly on my EqualLogic but I'm interested in how accurate that time needs to be. Is it only used for time & date stamps on SAN based snapshots and logs? Regarding SMTP - since I'm not able to reach my normal SMTP server, can I set up SAN HQ to collect any alerts from the SAN and e-mail me that way? Are there any alerts that wouldn't come through? What are other folks doing to address this?
  4. Adding multiple volumes does not increase SAN performance in any way, right? Is the main reason I'd set up multiple smaller volumes versus a larger one (2TB - 512B to make VMware happy) just for the ability to set up different RAID types? I understand this question gets more complex with multiple SANs, but for this question I'm talking about a single SAN.
I apologize for being so verbose. I blame it on all the forum lurking I've been doing where people don't post enough information.

203 Posts

February 15th, 2012 19:00

Interesting... observation.  I took a host into maint mode, and on the datastores it was connected to (already set to RR), I applied "esxcli storage nmp psp roundrobin deviceconfig set -d naa.[idnumber] -I 3 -t iops".  Verified all of the changes.  Took it out of maintenance mode, threw a couple of VMs on it.  Ran Iometer on one of them.  Then I went into vCenter's real-time network monitoring and isolated just the two NICs associated with my iSCSI vSwitch.  And I still only see one of them being used.  Weird...
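For anyone following along, the change can be sanity-checked per device before running the test. This is a sketch; the naa. ID below is a placeholder for your own volume:

```shell
# Confirm the device is claimed by Round Robin (should show VMW_PSP_RR)
esxcli storage nmp device list -d naa.6090a0XXXXXXXXXXXXXXXXXXXXXXXXXX

# Show the current Round Robin settings, including the IOPS limit just set
esxcli storage nmp psp roundrobin deviceconfig get -d naa.6090a0XXXXXXXXXXXXXXXXXXXXXXXXXX
```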

I know I'd probably solve all of this by just installing and using the MEM, but that looks fairly involved from what I can tell.

5 Practitioner


274.2K Posts

February 15th, 2012 20:00

That is very weird.  Make sure that Failback in the iSCSI VMkernel port's properties is set to "No".  It's in the NIC Teaming tab, right above where you set the vmnics to unused.

Installing MEM 1.1 on ESXi v5 is VERY easy.  Since iSCSI is already set up, all you are doing is replacing the PSP.

I save the MEM zip file to a datastore, then run the install from the ESXi CLI.  You can use the vSphere Client to upload the ZIP file to the datastore, then SSH/console into the ESXi server.

# esxcli software vib install --depot /vmfs/volumes/<datastore>/dell-xxxxx.zip

You can either reboot or restart hostd so that the eqllogic namespace commands are available in esxcli.
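If it helps anyone else, the result can be verified afterward, something like this (the grep pattern is an assumption; adjust it to match whatever name the MEM VIB reports):

```shell
# Confirm the MEM VIB shows up in the installed list
esxcli software vib list | grep -i dell

# Restart hostd so the new esxcli namespace is picked up (or just reboot)
/etc/init.d/hostd restart
```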

5 Practitioner


274.2K Posts

February 15th, 2012 20:00

If you can't figure it out, then open a case, they can review the config via WebEx and maybe spot the problem.

203 Posts

February 15th, 2012 20:00

In each of the given vmkernel ports, in the NIC Teaming tab (where I have one "active" and one "unused"), none of the "Policy Exceptions" have a tick next to them, but Failback shows a grayed-out "Yes".  For the actual vmkernel ports, "Override switch failover order" is ticked so that one NIC can be set as active and one as unused.

I'll have to experiment on one of my other hosts for the MEM.  I see the latest one just came out for 5.0, so that might give me a little inspiration (along with currently only utilizing one NIC).

203 Posts

February 15th, 2012 20:00

Gotcha.  I did that (ticked the checkbox, setting it to No), rescanned the volumes for good measure, and ran another test.  Still pulling off just one vmnic.

Yeah, I'm all vsphere 5.0, so maybe I'll give that a try sometime soon.  It would be nice to figure this one out though.  Hmm...

5 Practitioner


274.2K Posts

February 15th, 2012 20:00

You still want to select the checkbox next to Failback.  That will allow you to change it to No.  Otherwise ESXi v5.0 might use one of the unused ports anyway, and that can cause latency and connection problems.
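If you prefer the CLI over the Teaming tab, the same Failback setting can probably be flipped per port group on a standard vSwitch. A sketch, assuming ESXi v5.0 and a port group named iSCSI1 (the name is a placeholder):

```shell
# Disable failback on the iSCSI VMkernel port group
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI1 --failback=false

# Verify the effective failover policy for that port group
esxcli network vswitch standard portgroup policy failover get --portgroup-name=iSCSI1
```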

MEM 1.1.0 has a build for both ESX(i) v4.1 and ESXi v5.0.  Installing on 4.1 is a little harder, but I use the vMA appliance, upload the setup.pl and the package file to the vMA, and install it from there.  Still pretty easy, just slower than installing right from a VMFS volume with ESXi v5.0.

Here's a VMware KB that talks about the failover issue in ESXi v5.0

kb.vmware.com/.../search.do

A fix is expected from VMware.

203 Posts

February 15th, 2012 21:00

Good idea.  Thanks for the good info, Don.  Much appreciated.  Hopefully I can thank you in person at the DSF this year.

5 Practitioner


274.2K Posts

February 16th, 2012 06:00

You are very welcome.  

re: DSF   They don't let me out much.  ;-)     If you end up at the Nashua facility someday do look me up.  

Regards,

203 Posts

February 16th, 2012 11:00

Well that's a shame.  ...Oh, as for the issue: tech support led me down the right road to the cause, which was missing vmkernel port bindings under the iSCSI software adapter.  Fixed, and everything is fine now.
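For anyone hitting the same thing later, the bindings can also be added from the CLI. A sketch; vmhba33, vmk1, and vmk2 are placeholders for your software iSCSI adapter and iSCSI vmkernel ports:

```shell
# Bind each iSCSI vmkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Confirm the bindings, then rescan the adapter
esxcli iscsi networkportal list --adapter=vmhba33
esxcli storage core adapter rescan --adapter=vmhba33
```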

5 Practitioner


274.2K Posts

February 16th, 2012 11:00

Glad they found the issue.   I should have had you dump the kernel bindings.    
