
Unsolved



February 10th, 2010 11:00

SAN centralization

Hello,

I have 6 EMC disk arrays corresponding to 6 SAN networks. Two of them have integrated Brocade switches (blade enclosures) and the rest are standalone switches, all in redundant pairs. How can I centralize all the SANs and later on virtualize them? The aim is to gain more flexibility in assigning LUNs from a server on one network to another network.

Thanks

Consty

232 Posts

February 10th, 2010 11:00

Yes, 6 standalone SAN switches, not connected to other switches.

Thanks

Consty

1 Rookie • 20.4K Posts

February 10th, 2010 11:00

What kind of switches, what kind of arrays, what kind of blade enclosures ?


1 Rookie • 20.4K Posts

February 10th, 2010 11:00

So, 6 standalone SAN switches, not connected to other switches?


232 Posts

February 10th, 2010 11:00

All the switches are Brocade: BR300, DS200B, Brocade 4Gb 6+14 ports, SilkWorm 3250

Blades are IBM HS21, HS22

Arrays are CX300, CX500, CX3-40, CX3-80, CX4-480

Regards

Consty

2.1K Posts

February 10th, 2010 12:00

So, if I understand correctly (and forgive me for asking the same questions again a different way)...

You have one switch attached to each array (with multiple connections?), with the hosts accessing that array connecting through that switch (with one connection or two each?)

If this is correct, it may take some work to get things revamped to what it sounds like you want in the end. If the hosts have two HBA connections each, the ideal target state would be to have two identical fabrics, with each host and each array connecting to both fabrics. I'm not sure if that works with the blade enclosures; if not, their integrated switches may have to connect one to each fabric. If you only have one HBA in each host (and no option to add more), then you will likely end up with one big fabric that all the switches and hosts attach to. Either way will allow any host to be provisioned from any array once the change is complete.
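If it would help to confirm the starting point, a few standard Fabric OS commands on each switch will show it. I'm listing these from memory, so check them against your FOS documentation:

    switchshow   # switch name, Domain ID, and per-port state (F-Port = host or array, E-Port = ISL)
    fabricshow   # every switch in this fabric with its Domain ID -- today each should list little more than itself
    nsshow       # WWPNs logged in to this switch, to see which host HBAs and array ports land where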

I haven't worked hands-on with any of the specific switch models you have, so I'm not absolutely certain of the interoperability without doing some more research.

232 Posts

February 10th, 2010 14:00

Each array (or SAN network) has 2 switches connected (for redundancy), and each server has 2 HBAs.

Thanks

Consty

2.1K Posts

February 11th, 2010 07:00

OK, I think I missed something there. Your original post indicated 6 arrays, a follow-up post indicated 6 switches, but this post says each array is connected to two switches.

It might help if you could "diagram" your current environment for us something like this:

CX300 - connected to 2 x 300B switches

CX500 - connected to 2 x 200B switches

etc.

And I'm not sure how the Blade Chassis fit into the picture. Do they connect to another pair of switches for one of the arrays (using NPIV?), or do they connect directly to an array through the integrated switches?
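If it's not obvious from the cabling, the integrated modules themselves can tell you. This assumes CLI access to the blade modules, and the ag command only exists on FOS versions that support Access Gateway:

    switchshow      # if the module is a full switch it has its own Domain ID and E-Ports/F-Ports
    ag --modeshow   # on AG-capable FOS, reports whether Access Gateway (NPIV) mode is enabled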

Sorry to ask so many questions, but the better we understand where you are starting, the better advice we can give on how to get where you want to be.

232 Posts

February 12th, 2010 06:00

1) CX300 - connected to 2 x SilkWorm 3250

2) CX500 - connected to 2 x DS200B

3) CX3-40 - connected to 2 x DS200B

4) CX3-80 - connected to 2 x integrated Brocade 4Gb (direct connection to the array)

5) CX3-80 - connected to 2 x integrated Brocade 4Gb (direct connection to the array)

6) CX4-480 connected to 2 x BR300

Thanks

Consty

1 Rookie • 20.4K Posts

February 13th, 2010 13:00

How are you doing on port count? Any opportunities for consolidation (getting rid of the 3250s, maybe)? None of your switches are true director-class switches, so I see some sort of mesh fabric in your future.
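A quick way to tally that is to run these on each switch and count what's actually lit (assuming CLI access):

    switchshow    # count ports that are Online vs No_Light / No_Module to find free capacity
    licenseshow   # check for Ports on Demand licenses that may cap the usable port count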

232 Posts

February 14th, 2010 10:00

As I was saying in my first message, we can first consolidate and then virtualize. If necessary, the switches will be changed.

Regards

Thanks

Consty

2.1K Posts

February 16th, 2010 12:00

As dynamox pointed out, none of your switches really stand out as anything I would want to put at the core of a consolidated fabric. The best way to proceed now depends a lot on what you expect for growth in connectivity needs over the next 2 to 3 years. For now I'm going to assume moderate continued growth in server connectivity, as well as connectivity for another array or two. I'm also going to assume that all the switches you have today are licensed to allow full fabric connectivity (a quick check for that is sketched just below). As I mentioned, I'm not directly familiar with these models, so I don't know if this is an issue. If it is, jump straight to Option 2.
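To verify the licensing assumption, run this on each switch. I'm writing these from memory, so double-check against the release notes for your FOS levels:

    licenseshow   # entry-level models (the SilkWorm 3250 in particular) sometimes ship with a
                  # limited-fabric "Value Line" license; look for a "Full Fabric" entry
    version       # note the Fabric OS level on each switch -- widely different FOS versions
                  # may refuse to merge into one fabric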

If this were my environment to plan for (based on the info we have so far) I would probably try to do something like this:

Option 1

  • Buy a pair of Brocade DS5100B switches populated with all 40 ports (you might be able to get away with 32 ports at first, but it depends on how many FE connections you use from each CLARiiON array)
  • These new switches would become the Core of a pair of new core-edge fabrics for your environment
  • Check the configuration of all the existing switches to make sure that there are no conflicting settings (especially Domain IDs and zoning databases) that would prevent merging them into larger fabrics; a sketch of these checks follows this list
  • You would start off migrating one array at a time
    • If you have even a single free port on each switch, you can do this by merging one of that array's switches into the fabric of one of the DS5100Bs by creating an ISL. If you have two free ports, set up two ISLs right from the start
    • Once the ISL is established, move the FE ports of the array from the old switch to the new DS5100B. If you have AIX or HP-UX hosts, this will probably require rescanning for new paths to the storage array (because you are moving the storage end-point)
    • If you did not have enough ports to create two ISLs at the beginning of the process, do so now
    • Once you have the storage ports moved from one old switch and the paths confirmed functional, repeat the process with the other old switch, connecting it to the other DS5100B
  • Once you have completed these steps for all the arrays you will have a functional core-edge fabric. The DS5100B switches will be the core of each fabric (with storage ports and ISLs) and the other switches will be the edges (with host connectivity). At this point you will be able to allocate storage from any array to any host simply by zoning appropriately.
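Here is a rough sketch of the pre-merge checks and the Domain ID fix from the list above, using standard Fabric OS commands. Prompts and output vary by FOS version, so treat this as a sketch rather than a procedure:

    # on each existing switch, before cabling any ISL:
    switchshow      # note the switchDomain line -- every switch joining a fabric needs a unique Domain ID
    cfgshow         # note the defined and effective zoning configs; fabrics with conflicting
                    # effective configs will segment instead of merging

    # if two switches share a Domain ID, change one of them (disruptive -- the switch must be disabled):
    switchdisable
    configure       # interactive; set a unique Domain ID under the Fabric parameters section
    switchenable

    # after cabling the ISL to the DS5100B:
    islshow         # confirm the ISL is up and at the expected speed
    fabricshow      # all merged switches should now appear in one fabric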

Option 2

  • Buy a pair of DS5300B switches with at least enough ports enabled to account for all your existing storage and HBA ports. These will each make up the core of one "fabric" (or possibly the entire fabric, depending on your licensing options on the existing switches)
  • For any switches that CAN be merged into a larger fabric, follow the steps above in Option 1
  • For any switches that cannot be merged into a larger fabric, you would have to take an outage on the attached hosts as you migrate both the array ports and host HBA ports to the new switches and add the required zoning (a zoning sketch follows this list)
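And a sketch of the zoning step once everything is in one fabric. The aliases, zone and config names, and WWPNs below are made up for illustration; "prod_cfg" stands in for whatever your effective configuration is called:

    alicreate "host1_hba0", "10:00:00:00:c9:xx:xx:xx"         # example host WWPN only
    alicreate "cx4_spa0", "50:06:01:60:xx:xx:xx:xx"           # example CLARiiON SP port WWPN only
    zonecreate "z_host1_hba0_cx4_spa0", "host1_hba0; cx4_spa0"
    cfgadd "prod_cfg", "z_host1_hba0_cx4_spa0"
    cfgsave                                                   # commit to the fabric-wide zoning database
    cfgenable "prod_cfg"                                      # activate the updated configuration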

I'm running out of time today and losing focus on this, so I'll leave it as is and look at it again tomorrow. Maybe in the meantime others can comment, suggest, modify, and otherwise correct, and we can refine the plan further based on your findings regarding fabric licensing (or the requirement for it).

232 Posts

February 18th, 2010 12:00

Thanks for your answer,

I was thinking of a backbone running at 8 Gb/s, adding new switches or replacing the old ones. What do you think about that?

Regards

Consty

2.1K Posts

February 22nd, 2010 12:00

Even though the new switches support 8G FC connectivity, we haven't populated any of ours with 8G SFPs yet. We are still putting in 4G SFPs, since the price is right and not much hardware out there can take advantage of anything faster yet.

I guess it all depends on your growth plans. If you will be getting lots of new equipment that is 8G-capable, then it might make sense. Otherwise you can stick with 4G SFPs and still have the option of adding 8G ones when you need them. Keep in mind that the SFPs you choose won't change the internal bandwidth of the switch.
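If you want to see what a given port and SFP are actually doing, something like this works (port 0 is just an example; check the exact syntax for your FOS level):

    sfpshow 0         # vendor, media type, and supported speeds of the SFP in port 0
    switchshow        # the Speed column shows what each port actually negotiated
    portcfgspeed 0 4  # lock port 0 at 4G if auto-negotiation misbehaves (0 = auto, 4 = 4G, 8 = 8G)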
