
July 21st, 2013 12:00

CX3-20 - How to configure and setup metalun for sharing in a Windows 2008 failover cluster

Good afternoon, I am faced with the task of creating a MetaLUN and sharing it within a Windows 2008 failover cluster. Below I'll list all the items I am using for interconnectivity and what I am trying to accomplish.

Hardware: EMC Clariion CX3-20 with Fibre Channel connections

- To create my MetaLUN I am using two fully populated DAEs with 15x 1TB SATA II drives in each. Two RAID 5 groups were created, then I expanded the LUN and selected the "concatenate" option to make a single 25TB LUN (see the CLI sketch further down in this post). However, after I created the MetaLUN as a concatenation, the activity LEDs on all the drives are flashing repeatedly... is this normal?

- The servers and software being used: 2 Dell PowerEdge R710s running Windows Server 2008 R2 Enterprise 64-bit, set up as Hyper-V servers in a failover cluster. They each have a QLogic QLE2462 HBA and will be connected through a Brocade 5000 SAN switch.

- My questions are:

* Connecting my EMC controllers and server HBAs to the Brocade 5000: the servers have QLE2462 HBAs (two fibre connections in one card). What is the proper way to zone them with SPA0 and SPB1? Do I create a single zone and add the 4 WWNs associated with the server HBAs plus the 2 EMC SPA0 and SPB1 controller ports to it?

* How do I associate or link up the LUN with my clustered servers? Do I have to associate both physical servers, or the virtual server, with the LUN under Unisphere?

* Lastly, what is the process to bring everything online and test?

I posted a video of the hard-disk activity after I created the concatenated MetaLUN. Let me know if this is normal.
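For reference, here is roughly what that expansion looks like from the Navisphere Secure CLI. This is only a sketch: the base and component LUN numbers and the name are placeholders, and the exact metalun switches should be verified against the CLI reference for your FLARE release.

naviseccli -h <SPA_IP> metalun -expand -base 20 -lus 21 -name Cluster_MetaLUN -type C

Here -type C concatenates the component LUN onto the base LUN; -type S would stripe the components instead.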

Thank you.

21 Posts

July 24th, 2013 15:00

So I looked it up: our main file server, which has 4 LUNs attached and is the head server running Navisphere and Unisphere, is confirmed to have PowerPath 4.6.1 (32-bit). Will I need to install PowerPath on my 2 cluster nodes and on any other server connecting to LUNs on our Clariion?

As for the hot spares, I didn't set any aside as hot spares. I'm sure we can purchase spare drives at a later time and mark them as hot spares.

21 Posts

July 26th, 2013 09:00

Good morning, so I checked and found the license for PowerPath. In regards to the MetaLUN, I did what you suggested: I set aside one drive from each RAID group as a hot spare, and it was set up as a striped MetaLUN. Now comes the fun part. I downloaded the latest Navisphere, Navisphere CLI and PowerPath agents and installed them on both of my nodes for the Win2008 Hyper-V cluster. I connected both to the Brocade switch and created 4 aliases, as follows:

1st node: HBA Port 1 connects to Brocade port 8. The alias naming convention is Irvhvhost1_Port1, Irvhvhost1_Port2, Irvhvhost2_Port1 and Irvhvhost2_Port2. Remember that I have a QLE2462 dual-port HBA. My next question: do I create 4 separate zones, one for each HBA port paired with its SP, or just 2 zones, one containing Port 1 of both servers plus SPA1 and one containing Port 2 of both servers plus SPB0? After I create whichever zone setup, I'll add the 2 hosts and the LUN to their own storage group. Will I need to reboot the servers? Since it's a cluster, I take it only one node has to be up to configure the LUN under Failover Cluster Manager. Any thoughts?
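On the Windows side, once the LUN is presented and initialized on one node, I'm assuming the cluster steps will look roughly like this (a sketch using the FailoverClusters PowerShell module on 2008 R2; the node names are ours, the cluster disk name is a placeholder):

Import-Module FailoverClusters
# re-run validation with the new storage attached
Test-Cluster -Node Irvhvhost1,Irvhvhost2
# add the new LUN to the cluster's available storage
Get-ClusterAvailableDisk | Add-ClusterDisk
# optionally convert it to a Cluster Shared Volume for the Hyper-V VMs
Add-ClusterSharedVolume -Name "Cluster Disk 1"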

Thanks Dynamox.

1 Rookie • 20.4K Posts

July 26th, 2013 09:00

BowAdmin wrote:

Good morning, so I checked and found the license for PowerPath. In regards to the MetaLUN, I did what you suggested: I set aside one drive from each RAID group as a hot spare, and it was set up as a striped MetaLUN.

I am not sure I am following you here: did you set up two drives as hot spares? They should look like this in Navisphere; notice how there is a private LUN 200 created inside that RAID group. That is very important, or the hot spare will not kick in.

7-26-2013 12-13-08 PM.jpg
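If you prefer to check or create the hot spares from the CLI rather than the GUI, it is roughly this (a sketch using the classic bind syntax; the LUN and RAID group numbers are placeholders, so verify the exact syntax against the CLI guide for your FLARE release). Bind a hot spare LUN in the dedicated hot-spare RAID group, then confirm the private LUN shows up under it:

naviseccli -h <SPA_IP> bind hs 200 -rg 200
naviseccli -h <SPA_IP> getrg 200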

Zoning - create separate zones for each host/HBA/SP pairing, e.g.:

zones:

server1-hba1-cx3-spb0

server1-hba1-cx3-spa1

server1-hba2-cx3-spb0

server1-hba2-cx3-spa1

server2-hba1-cx3-spb0

server2-hba1-cx3-spa1

server2-hba2-cx3-spb0

server2-hba2-cx3-spa1
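From the Brocade 5000 CLI (FOS) that zoning could be built roughly like this; the WWPNs, alias names and config name below are placeholders for your own values, so treat this as a sketch rather than a copy/paste recipe:

alicreate "Irvhvhost1_Port1", "<WWPN of host1 HBA port1>"
alicreate "CX3_SPA1", "<WWPN of SPA port 1>"
zonecreate "server1_hba1_cx3_spa1", "Irvhvhost1_Port1; CX3_SPA1"
cfgcreate "SAN_CFG", "server1_hba1_cx3_spa1"
cfgadd "SAN_CFG", "server1_hba1_cx3_spb0"
cfgsave
cfgenable "SAN_CFG"

Repeat the alicreate / zonecreate / cfgadd lines for each of the remaining host/HBA/SP pairings before saving and enabling the config.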

Once the systems are logged in to Navisphere and registered (this should happen automatically since you are running the Navisphere agent), both servers will need to be added to one storage group.
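If you want to script that step instead of using the GUI, the storage group work can also be done with naviseccli, roughly like this (a sketch; the group name and LUN number are placeholders, and the host names must match how the servers registered):

naviseccli -h <SPA_IP> storagegroup -create -gname HyperV_Cluster
naviseccli -h <SPA_IP> storagegroup -addhlu -gname HyperV_Cluster -hlu 0 -alu <metaLUN number>
naviseccli -h <SPA_IP> storagegroup -connecthost -host Irvhvhost1 -gname HyperV_Cluster -o
naviseccli -h <SPA_IP> storagegroup -connecthost -host Irvhvhost2 -gname HyperV_Cluster -o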

1 Rookie • 20.4K Posts

July 26th, 2013 13:00

I try to match the Navisphere agent version to the FLARE version on my array. How many NICs are on the Hyper-V servers?

21 Posts

July 26th, 2013 13:00

OK, so that means I should remove the agents and install the previous agent versions I have been using; that goes for the PowerPath agent as well. As for the NICs on our servers: each has 2 quad-port NICs, one onboard and one add-in card.

Onboard - set up as our management NIC

Quad NIC card - set up with 2 teams, one for our internal domain and the other for our web domain. The onboard NIC has a static IP and the teams use DHCP.

21 Posts

July 26th, 2013 13:00

One additional piece of detail: on the 2 Hyper-V cluster nodes we are using Navisphere agent version 6.26.32.0.72, while our previous servers use Navisphere agent version 6.22.0. Do you think that would make a difference? I just checked under the storage group and host list, and it sees both hosts but shows them as "agent not connected" and does not display their IPs.

1 Rookie • 20.4K Posts

July 26th, 2013 13:00

Go with the latest PowerPath for your OS.

Take a look at this Primus solution: emc62479
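Once PowerPath is installed on the nodes, you can sanity-check the license and the paths from an elevated command prompt, roughly like this (a hedged sketch):

powermt check_registration
powermt display dev=all

check_registration should show a valid license key, and display dev=all should list the Clariion LUN with live paths through both SPs.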

21 Posts

July 26th, 2013 13:00


OK, so I finished creating all the zones on the Brocade, a similar setup to what you suggested, and enabled the config. Now that I've signed back into Navisphere to join the hosts to the storage group, this is what I see. I've attached some images of the hosts in question and the RAID groups. Will I need to reinstall the Navisphere agent?

Hosts on CX3.JPG.jpg

RAIDGroups.JPG.jpg

21 Posts

July 26th, 2013 15:00

Dynamox, would you mind telling me how I can access the Primus solutions?

21 Posts

July 26th, 2013 15:00

Do you think this is where the hostname of the server and its IP go?

NaviAgentconfig.JPG.jpg

1 Rookie • 20.4K Posts

July 26th, 2013 17:00

In your screenshot, I can't tell if there are any private LUNs under RG 202 and 203.

1 Rookie • 20.4K Posts

July 26th, 2013 17:00

https://support.emc.com/search?text=emc62479

I never configure those options in the agent; just make sure the agent is bound to the correct NIC (the one that can talk to SPA/SPB) as described in the Primus solution.

4.5K Posts

July 29th, 2013 15:00

That screen cap is for the array (privileged user). For an EMC array you would put in "system" as the username and the IP of the SPA or SPB management port (ideally you would create two entries, one for each SP).

This will create the entries in the agent.config file and allow that array to manage the Host Agent on the host.
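For reference, the resulting privileged-user entries in agent.config look roughly like this (a sketch; the SP management IPs are placeholders and the file location depends on the install path):

user system@<SPA management IP>
user system@<SPB management IP>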

There is more information about the EMC Host Agent in Support article emc254967 - search on support.emc.com.

glen

21 Posts

July 29th, 2013 17:00

Thanks for all the info. As for the private LUNs, I've attached a JPEG of the hot spare RAID groups showing that they do contain private LUNs, and I've also attached a JPEG of my active hosts. I wanted to tell you that I finally got the 2 Hyper-V cluster hosts registered with our Clariion. Steps I took to get them to show up correctly:

1. Removed the latest Navisphere agent and installed version 6.20, the same as all the other hosts.

2. Created a text file named agentID.txt with 2 lines of info: the first line is the FQDN of the host and the second line is its respective IP (see the example at the end of this post). Saved it in the C:\Program Files\EMC\Navisphere directory.

3. Last thing, I changed the NIC order under Advanced Settings and moved the NIC I want the agent to use to the top of the list. I've attached pictures to illustrate what I am saying:

PrivateLunsHotSpareRGroups.JPG.jpg
Active Hosts on CX3.JPG.jpg
NicAdvancedSettings.JPG.jpg

* On the host picture, do you know how I can remove the highlighted entries? They belong to one of the NIC teams, but it is not connected.

* At this point, I will be adding both hosts and the LUN to their own storage group. Anything else you suggest I look into?
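For anyone who finds this thread later, the agentID.txt file is just two plain-text lines. A hypothetical example (the hostname and IP are placeholders for your own values):

irvhvhost1.yourdomain.local
192.168.1.21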

1 Rookie • 20.4K Posts

July 29th, 2013 18:00

Do you see them in Connectivity Status?
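You can also check from the CLI with something along these lines (a hedged sketch; it should show each initiator and whether it is logged in and registered):

naviseccli -h <SPA_IP> port -list -hba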
