July 21st, 2013 12:00

CX3-20 - How to configure and setup metalun for sharing in a Windows 2008 failover cluster

Good afternoon. I am faced with the task of creating a metaLUN and sharing it within a Windows 2008 failover cluster. Below I'll list all the items I am using for interconnectivity and what I am trying to accomplish.

Hardware: EMC Clariion CX3-20 fibre connections

- To create my metaLUN I am using 2 fully populated DAEs with 15x1TB SATA II drives in each. 2 RAID 5 groups were created, then I extended the LUN and selected the "concatenate" option to make a single 25TB LUN. However, after I created the metaLUN with the concatenate option, the activity LEDs on all the drives have been flashing repeatedly... is this normal?

- The servers and software being used: 2 Dell PowerEdge R710s running Windows Server 2008 R2 Enterprise 64-bit, set up as Hyper-V servers in a failover cluster. They each have a QLogic QLE2462 HBA and will be connected through a Brocade 5000 SAN switch.

- My questions are:

* Connecting my EMC controllers and server HBAs to the Brocade 5000: the servers have QLE2462s (2 fibre connections in one HBA). What is the proper way to zone them up with SPA0 and SPB1? Do I create a single zone and add the 4 WWNs associated with the server HBAs, plus the 2 EMC SPA0 and SPB1 controller ports?

* How do I associate or link the LUN to my clustered servers? Do I have to associate both physical servers, or the virtual server, with the LUN in Unisphere?

* Lastly, what is the process to bring everything online and test?

I posted a video of the hard disk activity after I created the metaLUN with the concatenate option. Let me know if this is normal.

Thank you.

1 Rookie • 20.4K Posts

July 21st, 2013 13:00

Regarding zoning, take a look at this discussion; even though it's about VNX, the same rules apply to CLARiiON:

FC SAN Zoning Best Practices

The LEDs are flashing because when you create a brand new LUN, the CLARiiON zeroes all the drives and then runs a background verify... it's normal.

MetaLUN - as you probably know, when you create a metaLUN you can select concatenated or striped; each has its purpose (performance, ease of expansion, etc.). Typically for Hyper-V you want to get as many IOPS as possible, so I would expect you to go with a configuration that gives you as many spindles as possible (striped meta). Take a look at this paper and make sure what you configured is what you want.

http://www.emc.com/collateral/hardware/white-papers/h1024-clariion-metaluns-cncpt-wp-ldv.pdf
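As a rough sanity check on the sizing described above, here is a sketch of the usable capacity. The ~931 GiB formatted size per 1TB SATA drive is an assumption, not a figure from this thread; exact numbers vary with drive model and FLARE revision:

```shell
#!/bin/sh
# Rough usable-capacity estimate for a striped metaLUN over two RAID 5 groups.
# Assumption (not from the thread): ~931 GiB formatted per 1TB SATA drive.
DRIVES_PER_GROUP=15
FORMATTED_GIB=931

# RAID 5 loses one drive's worth of capacity to parity per group
PER_GROUP_GIB=$(( (DRIVES_PER_GROUP - 1) * FORMATTED_GIB ))

# Striped metaLUN across two identical RAID 5 groups
META_GIB=$(( 2 * PER_GROUP_GIB ))
META_TIB=$(( META_GIB / 1024 ))

echo "Per RAID 5 group: ${PER_GROUP_GIB} GiB"
echo "Striped metaLUN:  ${META_GIB} GiB (about ${META_TIB} TiB)"
```

This lines up with the numbers quoted in the thread: roughly 13TB per group and about 25TB for the metaLUN.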

Since this is going to be a Hyper-V cluster, all nodes of the cluster will need to see the same storage. You will create one storage group and add all members of the cluster and all LUNs to this one storage group.

Another good resource for you to read

http://www.emc.com/collateral/hardware/white-papers/h6182-using-clariion-microsoft-hyper-v-wp.pdf
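A minimal sketch of that one-storage-group setup using Navisphere Secure CLI. The group name, host names, SP address, and LUN numbers here are placeholders, and the hosts must already be registered on the array:

```shell
# Sketch only -- "HyperV_SG", NODE1/NODE2 and 10.0.0.1 are placeholder values.
SP=10.0.0.1   # SP A management address (assumed)

# One storage group shared by every node of the cluster
naviseccli -h $SP storagegroup -create -gname HyperV_SG

# Add the shared LUN (ALU = array LUN number, HLU = LUN number the hosts see)
naviseccli -h $SP storagegroup -addhlu -gname HyperV_SG -hlu 0 -alu 9

# Connect both cluster nodes to the same group (-o suppresses the prompt)
naviseccli -h $SP storagegroup -connecthost -host NODE1 -gname HyperV_SG -o
naviseccli -h $SP storagegroup -connecthost -host NODE2 -gname HyperV_SG -o
```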

4K Posts

July 21st, 2013 22:00

You can use the cluster validation wizard to test your cluster configuration:

Failover Cluster Step-by-Step Guide: Validating Hardware for a Failover Cluster
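If you prefer the command line, the same validation can be run with the FailoverClusters PowerShell module that ships with Windows Server 2008 R2 (the node names below are placeholders):

```shell
rem Run from an elevated prompt on one of the cluster nodes.
rem NODE1/NODE2 are placeholder names for the two R710s.
powershell.exe -Command "Import-Module FailoverClusters; Test-Cluster -Node NODE1,NODE2"
```

Test-Cluster writes an HTML validation report; run it after the shared storage is presented to both nodes so the storage tests are included.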

21 Posts

July 22nd, 2013 11:00

dynamox, thank you very much for the feedback and the links you provided. I have a few more questions; perhaps you can answer them. Currently we have a total of 7 DAEs, all connected on the same bus 0, with 10 LUNs split between SPA1 and SPB0, which are connected to a QLogic 5200. This is how everything was set up from day one, and I'm not sure it was set up the proper way. Based on my previous question, I will be connecting SPA0 and SPB1 to our Brocade 5000 fabric, which is not linked to the QLogic.

- Do the fabrics need to be interconnected with each other?

- Will I need to create 2 separate FC zones for both servers in one cluster?

- Do you foresee a problem with having all the DAEs connected to the same bus?

All our LUNs are linked to our main production file server, one standalone Hyper-V server, and 2 Exchange 2007 servers in a CCR, plus the new 25TB metaLUN allocated for our Hyper-V cluster. I would like to get additional feedback and see if I'm following the proper steps. Thanks again, dynamox.

1 Rookie • 20.4K Posts

July 22nd, 2013 14:00

How many Brocade switches do you have?

21 Posts

July 22nd, 2013 16:00

We only have 1 Brocade 5000 w/32 ports. I was just testing connectivity but ran into some configuration problems. Ports and connected items:

Port 0 = SPA1

Port 1 = SPB0

Port 2 = Dev server w/Qlogic QLE2460 HBA

* Our previous running setup consists of 1 Qlogic SANbox 5200, where these ports are currently connected to:

Port 0 = SPA0

Port 4 = SPB1

Port 1 = File server hosting 4 LUNs

Port 3 = Standalone Hyper-V server connected to one LUN

Port 2 = Dev server above (I unplugged the server from this switch and connected it to the Brocade to check if it could be done)

- Note that the QLogic is 2Gb throughput and the Brocade is 4Gb... not sure if it matters, but I thought I'd throw that out there.

- As a best practice, will I need to interconnect our Brocade and QLogic switches?

1 Rookie • 20.4K Posts

July 22nd, 2013 16:00

Regarding your question about DAEs: there is only one bus on the CX3-20, so you don't really have any options there.

What is the goal here: migrate all hosts to the Brocade, or simply add another switch to provide more ports? I have never connected QLogic to Brocade switches, so I am not sure about the nuances of interconnecting those two.

So you connected your Dev server to the Brocade switch, did the zoning, and it did not log in to the CX3? What configuration problems did you encounter?

21 Posts

July 22nd, 2013 17:00

Thanks for your prompt response, and thanks for the answer regarding the bus connectivity. Our goal here is:

- Move all hosts off the QLogic onto the Brocade 5000 fabric, since the Brocade will allow us to grow more with 32 activated ports.

- Make sure that I don't lose any of the data on the LUNs linked to our file server and standalone Hyper-V server when I move them to the Brocade.

- As for the errors with my Dev server, I'm not sure if I did it correctly, but this is what I did:

1. Connected SPs and Dev server to Brocade switch

2. Opened Zone Administration on Fabric

3. Clicked on the Zone tab, created a new zone named DEV, and added ports 1 and 2 on the switch as members:

Port 1 = SPB0

Port 2 = Dev server (HBA:QLE2460)

4. Clicked on Zone Config, created a new zone config named DEV_config, and added the DEV zone as a config member

5. Clicked on Zoning Actions and selected Enable config

6. Commit was successful

- I checked the server and disk 0 shows as unknown or unreadable. I rescanned disks and rebooted the server, and it still does not see the disk. Did I misconfigure something on the switch? Do I need to include the additional port for SPA1?

1 Rookie • 20.4K Posts

July 22nd, 2013 19:00

I assume the Dev server was not connected to the CX3 before.

1) ssh to the Brocade and verify everything is logged in:

switchshow

2) Login to WebUI > Zone Admin

3) Create a new alias, select the WWN for the server and move it to the right; repeat for SPA1 and SPB0

[screenshot: Zone Admin alias creation - 7-22-2013 10-01-46 PM.jpg]

4) Switch to the Zone tab and create a new zone (I like the naming convention "server-hba1-cx3-spa1") and add the alias for the server and for SPA1. Create a new zone "server-hba1-cx3-spb0" and add the alias for the server and for SPB0.

5) Switch to the Zone Config tab, add the zones created in step 4, and enable the config.
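The same steps can be done from the Fabric OS command line over that ssh session. The alias and zone names follow the naming convention above; the WWNs are made-up placeholders to be replaced with the real ones from switchshow:

```shell
# Brocade FOS CLI sketch -- all WWNs below are placeholders.
switchshow                                # verify host HBA and SP ports are logged in

# Step 3: aliases for the host HBA and the two SP ports
alicreate "devsrv_hba1", "10:00:00:00:c9:12:34:56"
alicreate "cx3_spa1",    "50:06:01:60:41:e0:12:34"
alicreate "cx3_spb0",    "50:06:01:68:41:e0:12:34"

# Step 4: single-initiator zones, one per SP port
zonecreate "devsrv-hba1-cx3-spa1", "devsrv_hba1; cx3_spa1"
zonecreate "devsrv-hba1-cx3-spb0", "devsrv_hba1; cx3_spb0"

# Step 5: put both zones in a config, save, and enable it
cfgcreate "DEV_config", "devsrv-hba1-cx3-spa1; devsrv-hba1-cx3-spb0"
cfgsave
cfgenable "DEV_config"
```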

At this point the Dev server should be listed in Connectivity Status on the CX3; if you have the Navisphere agent installed, it will automatically register itself and show up with the proper name. If you do not have the host agent installed, you can always manually register the WWNs (let me know if you don't know how). Now you just create a new storage group, add the LUNs and the host, and bounce the host. Are you using PowerPath or native MPIO?

As far as the migration is concerned, it's very doable. Since you only have one switch, I assume you only have one HBA in each server. If that's the case then it will need to be a shutdown, recable, rezone type of migration. One thing you will need to do after you rezone the hosts to SPA1 and SPB0 is to "reconnect" the HBAs; follow this primus: emc114677

Any plans to add another switch or a second HBA? You have multiple single points of failure.

21 Posts

July 24th, 2013 14:00

dynamox, thanks again; I'm very grateful for all the help and the lots of useful information you passed on. I didn't get back to you sooner due to a server problem we had these past couple of days, so back to this project. I followed your instructions, and it seems that all I was missing was the creation of the aliases on the Brocade. I also tested switching my Dev server from the QLogic to the Brocade by disconnecting and reconnecting the cables, and it picks up the LUN bound to my Dev server. Also, I just destroyed the concatenated metaLUN and am redoing it as a striped metaLUN since it will be for our virtual cluster... one question though. Can you tell me if I set these things correctly:

- I have 2 separate DAEs with 15x1TB drives in each

- 2 RAID 5 groups; I checked the "enable auto assign" option and set the max size in TB (since each is about 13TB), or should I have set it in GB?

- LUN 9 = connects to SPB, bus 0 enclosure 6; LUN 10 = connects to SPA, bus 0 enclosure 5

- Once the RAID groups finish transitioning I will then extend the LUN, joining both in a striped metaLUN

- Once complete, I'll create a storage group and add both of my cluster's host servers and the striped metaLUN

- The Navisphere agent will be installed on both servers prior to adding them to the storage group, and I will follow your instructions for creating the zoning on the Brocade.

* To answer your questions above:

- Are we using PowerPath or MPIO: I believe it is PowerPath; how can you tell if one is using MPIO?

- As far as adding another switch: it's one reason I was trying to see if I could trunk the Brocade to the QLogic.

One last question: I noticed that when I created 2 separate zones (SPA1 + host and SPB0 + host), 2 separate drives appeared under Disk Management on my Dev server. Is that normal?

Once again, thanks for your help.

1 Rookie • 20.4K Posts

July 24th, 2013 14:00

Are hot spares already accounted for, or will they need to come out of these 30 drives?

PowerPath - check in Control Panel > Add/Remove Programs. If you are not using PowerPath then you need to use some kind of multipathing; native MPIO should work.

As for multiple drives showing up on the host: that is because you are not running PowerPath or have not configured native MPIO. You have to get that addressed, or it's corruption waiting to happen. There is a section in this document that describes how to claim DGC devices using MPIO:

https://support.emc.com/docu5134_Host_Connectivity_Guide_for_Windows.pdf?language=en_US
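If PowerPath turns out not to be installed, the native MPIO claim looks roughly like this on Windows Server 2008 R2. This is a sketch only: the exact 8-character vendor / 16-character product string for the CLARiiON devices should be taken from the Host Connectivity Guide above, not from this example:

```shell
rem Elevated command prompt on each cluster node. Sketch only --
rem verify the "DGC ..." device string against the EMC guide.

rem Enable the Multipath I/O feature if it is not already on
dism /online /enable-feature /featurename:MultipathIo

rem Claim CLARiiON (DGC) devices; -r reboots, -i installs support
rem for the quoted vendor/product string ("DGC" padded to 8 chars)
mpclaim -r -i -d "DGC     RAID 5"

rem After the reboot, confirm what MPIO has claimed
mpclaim -s -d
```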

21 Posts

July 24th, 2013 15:00

Can you walk me through setting up the hot spares? Will this need to be done on both DAEs, and can it be done with the RAID 5 groups already created? How can I look up how many licenses we have for PowerPath?

21 Posts

July 24th, 2013 15:00

So does it mean that I have to blow away the configured RAID 5 groups and recreate both the RAID 5 groups and the hot spares? Do you know if there's a way to check how many PowerPath licenses I may have?

1 Rookie • 20.4K Posts

July 24th, 2013 15:00

Same place where you create a RAID group: select one drive, and for RAID type select hot spare (going from memory). After you are done you will need to create a LUN on that RAID group (you won't present it to anything, but you must do it).
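For reference, those hot-spare steps can also be scripted with the classic Navisphere CLI. The SP address, RAID group IDs, and disk positions (bus_enclosure_disk format) below are all placeholders:

```shell
# Sketch only -- 10.0.0.1, RG IDs 200/201 and disk positions are placeholders.
SP=10.0.0.1

# One single-disk RAID group per hot spare (disk given as bus_enclosure_disk)
naviseccli -h $SP createrg 200 0_5_14
naviseccli -h $SP createrg 201 0_6_14

# Bind a LUN of type "hs" on each group -- this is the step that
# actually turns the disk into a hot spare; it is never presented.
naviseccli -h $SP bind hs 200 -rg 200
naviseccli -h $SP bind hs 201 -rg 201
```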

PowerPath licenses are sold either through EMC or authorized resellers.

1 Rookie • 20.4K Posts

July 24th, 2013 15:00

Yes, you will need to delete the RAID groups, create your hot spares, and then re-create your RAID groups (one less drive per RG). Get in touch with your local EMC contact/reseller; they might pull it up either from the maintenance contract or from the install base database.

1 Rookie • 20.4K Posts

July 24th, 2013 15:00

If you purchased PowerPath licenses, I would definitely use that versus native MPIO (better failover and load-balancing). Make sure to download the latest version, as 4.6 is ancient.

Set hot spares right now; don't take any chances. I would reserve 2 drives for HS.
