
September 28th, 2015 12:00

Moving direct-attached x64 Win2012 hosts to Brocade 6505

Hi All,

I have to move 4 direct-attached Win2012 hosts running MPIO to Brocade 6505 switches. The array is a VNX5300. Currently the servers are connected in the following way: one HBA goes to an SPA port, and the other HBA goes to an SPB port. I will have them connected the following way: two new storage ports and one server HBA on fabric A, and two new storage ports and one server HBA on fabric B.

Has anyone done anything similar before? My main concern is MPIO, that is, whether I have to manually configure it on the Windows side to see all the new paths. Since each server HBA is going to be zoned to two different storage ports on each switch, will MPIO pick them up automatically, or is a server reboot needed? Also, can this migration be done with everything online, with no downtime?
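To be clear, on each fabric I am planning a single zone per host HBA that contains the HBA and the two new storage ports. Roughly like this on the Brocade (a sketch only - all aliases, zone/config names, and WWNs below are placeholders, and it assumes a zone config already exists on that fabric):

    alicreate "HOST1_HBA0", "21:00:00:24:ff:xx:xx:xx"
    alicreate "VNX_PORT_1", "50:06:01:6x:xx:xx:xx:xx"
    alicreate "VNX_PORT_2", "50:06:01:6x:xx:xx:xx:xx"
    zonecreate "HOST1_HBA0_VNX", "HOST1_HBA0; VNX_PORT_1; VNX_PORT_2"
    cfgadd "FABRIC_A_CFG", "HOST1_HBA0_VNX"
    cfgenable "FABRIC_A_CFG"
    (all names and WWNs above are placeholders, not my real ones)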

195 Posts

September 29th, 2015 06:00

Q:  Are you currently running PowerPath on the Windows servers for MPIO?

Q:  What HBAs are you using, and were they specifically configured for point-to-point, or are they still in some sort of 'Auto' mode for connection type?

I'm not certain that anyone can guarantee that you won't need to reboot the hosts at some point in the process, and I would consider it good practice to at least reboot them once all the work is done to ensure that you don't have an issue waiting to bite you on the next reboot.

But from the sound of it, as long as your HBAs aren't specifically configured for point-to-point use only, you should be able to get set up and start the first changes, then stop and reconsider if things look bad.
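As an aside on the point-to-point question: once a port has been moved to the 6505, switchshow on the switch should show that HBA logging in as an F-Port if fabric (point-to-point) negotiation worked. A trimmed, purely illustrative line of output:

    switchshow
      4   4   010400   id    N8   Online      FC  F-Port  21:00:00:24:ff:xx:xx:xx
    (illustrative output only; the WWN is a placeholder)

If it shows up as something else, or not at all, that's the time to go look at the HBA connection-type setting.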

You would do something like this:

1> Install the new switches.

2> Cable the new storage ports to them and check those connections in Unisphere and Brocade Network Advisor.

3> Trespass your LUNs to SPB.

4> Take the HBA port that is cabled to the SPA port on one Win host and move it to one of the switches.

5> Create the zones for that HBA to the two storage ports, and activate that zone set.

6> Go to PowerPath on that server and see if the new paths show up and are alive.

7> If you are happy, repeat 4-6 for the other three hosts, either one at a time or all at once.

Once you get that far, you should have live connections to both SPs from the one HBA port, and you can re-distribute the LUNs to their proper SPs.  If you don't, you should have seen that at step #6; at that point you haven't lost connection to the storage from any of the hosts, and you can consider an alternative plan. A rough command sketch for steps 3, 5 and 6 is below.
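Everything in the sketch is illustrative: the LUN number and alias/zone/config names are placeholders, naviseccli assumes a security file is already set up (otherwise add credentials), and the path check assumes PowerPath - on native MPIO you would look at the MPIO path list instead.

    (step 3 - pull an example LUN, LUN 0, over to SPB; placeholder management IP)
    naviseccli -h <SPB_mgmt_ip> trespass lun 0

    (step 5 - zone the moved HBA to the two storage ports and activate; placeholder names)
    zonecreate "HOST1_HBA0_VNX", "HOST1_HBA0; VNX_PORT_1; VNX_PORT_2"
    cfgadd "FABRIC_A_CFG", "HOST1_HBA0_VNX"
    cfgenable "FABRIC_A_CFG"

    (step 6 - on the server, confirm the new paths are alive, if PowerPath is installed)
    powermt display dev=all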

4.5K Posts

September 30th, 2015 14:00

Try going to mydocuments.emc.com and selecting VNX Series; then, under VNX Server Tasks, select Attach a server - this will provide an installation guide that will help.

glen

September 30th, 2015 18:00

No PowerPath; it is native Windows MPIO.

The HBAs are QLE2562s. Not sure what you mean by a point-to-point config for the HBAs; I have never seen it, so it is probably not the case here.

I have 4 hosts - 2 separate MS clusters consisting of two nodes each. One cluster is SQL, the other is Hyper-V. Each host is connected with one FC cable to SPA and one FC cable to SPB. The hosts are configured for failover mode 4 (ALUA). Each cluster has two volumes presented to it: Quorum and Data.

In step 3> you say to trespass the LUNs to SPB. Why do I have to do that? If I unplug the cable that connects HBA0 to an SPA port, and a LUN is owned by SPA, the host should still be able to access it via the SPB path, thanks to ALUA and the native MPIO software on the server; that path will just be "Active/Unoptimized". I thought I would not have to trespass anything, because I am not rebooting any SPs. Can you explain, please? Maybe I am not seeing something here.
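For what it's worth, on the Windows side I was planning to just watch the paths with mpclaim from an elevated PowerShell prompt while moving cables, something along these lines (the disk number is just an example - use whatever your LUNs show up as):

    mpclaim -s -d        # list the MPIO-managed disks
    mpclaim -s -d 1      # show the individual paths and their ALUA states for disk 1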

September 30th, 2015 18:00

I saw it already; it is a nice doc, but it does not really cover the procedure to move from direct-attached to switch-attached, although I think some of it can be applied here.

195 Posts

October 1st, 2015 07:00

Regarding step #3: you asked for a non-disruptive procedure.  By trespassing manually, you eliminate the possibility of unplugging one side only to find out that the trespass didn't work for whatever reason.  If you do it manually while connectivity is still present, it will snap right back if the alternate paths aren't available.  In an optimal situation it is not necessary, but it could save you I/O failures.

Whether the lack of PowerPath or the HBA settings are an issue is something you would likely see when you get to step #6.  You can go into MPIO and check the condition of your paths there and then; you would probably also want to run a rescan to make sure the system looks for new paths and devices.
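For the rescan and path check with native MPIO, something like this from PowerShell should do it (a sketch; the cmdlet names assume the Server 2012 Storage and MPIO modules, and mpclaim ships with the MPIO feature):

    Update-HostStorageCache      # rescan for new devices and paths
    Get-MSDSMSupportedHW         # confirm the VNX (vendor DGC) is claimed by the Microsoft DSM
    mpclaim -s -d                # check the path count on each MPIO disk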

October 14th, 2015 07:00

Anyway, this is what I did, and all the services and resources stayed online during the migration:

1). Configured the zoning on the switches beforehand.

2). Connected the new storage ports to the switches.

3). Started on the SQL cluster: failed all cluster services and resources over to the active node.

4). Moved the HBA0 cable on the passive node to the switch, manually registered the new paths on the array, added the new paths for that host under the advanced properties of the storage group (engineering mode is required for that), rescanned the disks on the host, and under the MPIO properties for the LUNs made sure that each had three paths (see the command sketch below). Three paths because one HBA was still direct-attached while the other was switch-attached and zoned to two SP ports.

5). Repeated the same steps for the host's second HBA.

6). Rebooted the passive cluster node to make sure everything reconnected.

7). Failed the cluster services and resources over to that passive node, and repeated the same steps for the remaining nodes.

No LUN trespass necessary.
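In case it helps anyone else, the per-host checks in step 4 came down to roughly this from PowerShell (illustrative only; the storage group path edits themselves were done in Unisphere's engineering mode):

    Update-HostStorageCache      # rescan the disks
    mpclaim -s -d                # each VNX LUN should now show three paths
    Get-ClusterResource | Where-Object ResourceType -eq "Physical Disk"    # cluster disks still online before touching the next node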

195 Posts

October 14th, 2015 08:00

Excellent. 

You relied on a redundant server rather than redundant controllers.

October 15th, 2015 17:00

Actually, I relied on both - redundant SPs and redundant server connections.

4.5K Posts

October 21st, 2015 10:00

Please mark your question as answered - it will be helpful to others attempting the same operation. Glad it all worked well for you.

glen
