November 2nd, 2015 08:00

Brocade extended fabrics


I have a Brocade SAN fabric (DCX-8510-8 switches) in our current datacenter. I would like to migrate to a new datacenter 100 km away using Brocade extended fabrics. Here is the plan. Please tell me if this will work:

  1. Install new DCX-8510-8 switches in the new datacenter and connect the new storage array to this fabric.
  2. Lay dark fiber between the new datacenter and the current datacenter.
  3. Connect the DCX-8510-8 core switch in the current datacenter to the DCX-8510-8 core switch in the new datacenter and configure Brocade extended fabrics so that the switches in the new datacenter join the existing fabric of the current datacenter.
  4. Zone the hosts in the current datacenter to the storage in the new datacenter.
  5. Provision storage from the new datacenter to the hosts in the current datacenter.
  6. Each host now receives storage from both the current and the new datacenter.
  7. Copy the data host-based from the current datacenter to the new datacenter.
  8. Move the hosts to the new datacenter.


  1. Can hosts access storage 100 km away using Brocade extended fabrics?
  2. Is there any special hardware required (switches, SFPs, licenses) to implement this?
  3. What are the caveats, if any?



148 Posts

November 26th, 2015 01:00


Reading this thread and the replies, it seems like a full-blown implementation for a data centre migration, and, as Allen Ward already mentioned, how long does this need to stay in place?

There are so many questions and possible solutions that I advise you to get in touch with your local EMC team and have professional services look at this; they can help you with the best solution tailored to your needs.

I'll PM you for your details.



195 Posts

November 2nd, 2015 11:00

We did much the same with ESX and two datacenters ~10 km apart.  The 10 km optics did not require the extended fabrics license on the switches, but I believe that you will need that licensing for the longer distance.

At 100km, you will be seeing ~1ms latency on the links; that will be measurable, but should be ok for most uses.
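As a sanity check on that figure, here is a rough sketch of the propagation arithmetic (assuming ~5 µs per km for light in single-mode fiber; the exact constant varies by cable):

```python
# Rough propagation-delay estimate for a 100 km dark-fiber link.
# Assumes ~5 microseconds per km for light in single-mode fiber
# (refractive index ~1.5); treat the constant as an approximation.

PROP_DELAY_US_PER_KM = 5.0

def link_latency_ms(distance_km: float) -> tuple[float, float]:
    """Return (one_way_ms, round_trip_ms) propagation delay."""
    one_way_ms = distance_km * PROP_DELAY_US_PER_KM / 1000.0
    return one_way_ms, 2 * one_way_ms

one_way, round_trip = link_latency_ms(100)
print(f"one-way: {one_way:.2f} ms, round trip: {round_trip:.2f} ms")
```

A single SCSI operation needs at least one round trip, so each I/O picks up roughly an extra 1 ms at 100 km, matching the figure above.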

My biggest advice is to carefully vet, and monitor, the signal strength on the ISLs between the buildings.  We found that getting the connections to light up under no load was relatively easy, but that when one or more ISLs had marginal to poor reception of the signal from the far end, we would see very spiky response times at the hosts.  We have two fabrics with 8 x 8 Gb ISLs in each fabric; tons of bandwidth compared to our needs.  One flaky ISL can cause one out of every X I/Os to get lost, and while the MPIO drivers handle that, they only recognize it after the failed I/O times out.

I feel that we are stable, and *could* run split hosts/storage, but in practice we only exploited the connections heavily during migration.  We do not, as a rule, configure production hosts in one DC with storage from the other.  So I'd suggest that you do consider it mostly a migration tool.

For us, the actual migration was very painless. We are ESX users, and nearly completely virtualized.  So we constructed a modest cluster of hosts in the new building, and then used vMotion/Storage vMotion to load them up.  After that first seeding operation, we were able to shut down and relocate hosts a few at a time and repeat the process.  No outages, or even downtime, and only a fraction of the hosts needed to be new.  The storage in the old building was left in place to be repurposed, and the new DC had new storage installed.



5.7K Posts

November 4th, 2015 13:00

  1. Latency will be a little bit higher, but if you can handle the extra 1 ms or so, this should be fine. But running the long wave link at 8 Gb will be hard, if not impossible; I even suspect that running it at 4 Gb will be a challenge. Brocade supports Smart Optics long wave SFPs that can handle 80 kilometers, but 100 is quite a bit further!
  2. The extended fabrics license is - as far as I know - only to get extra buffer-to-buffer credits on those long links. You could do it without the license, but performance will be poor because of the lack of necessary buffer credits. You will need long wave SFPs (for 9 micron single mode fiber) instead of multi mode SFPs. Unless you're running DWDM, which can work with multi mode SFPs from your existing switches to the DWDM module, but DWDM is costly.
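To illustrate why those extra credits matter at 100 km, here is a rough sketch of the usual sizing arithmetic (round-trip time divided by the serialization time of a full FC frame; the constants are approximations for illustration, not Brocade's official formula):

```python
import math

# Rough buffer-to-buffer credit estimate for a long-distance FC link.
# One credit is held per frame in flight; to keep the link fully
# utilized, credits must cover the whole round trip (frame out plus
# the R_RDY acknowledgement coming back).
PROP_DELAY_US_PER_KM = 5.0   # light in single-mode fiber (approx.)
FULL_FRAME_BYTES = 2148      # ~2112-byte max payload plus header/framing

def bb_credits_needed(distance_km: float, rate_gbps: float) -> int:
    round_trip_us = 2 * distance_km * PROP_DELAY_US_PER_KM
    frame_time_us = FULL_FRAME_BYTES * 8 / (rate_gbps * 1000)
    return math.ceil(round_trip_us / frame_time_us)

for rate in (2, 4, 8):
    print(f"{rate} Gb/s over 100 km: ~{bb_credits_needed(100, rate)} credits")
```

With only a default handful of credits per port, an 8 Gb link at 100 km would spend almost all of its time stalled waiting for R_RDYs, which is exactly the poor performance the license is meant to avoid.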
  3. If you want redundancy, you'll need to build a trunk per fabric, which requires an extra license as well. You could do it without, but you'd miss the load balancing feature, and one ISL might be overloaded while the other sits idle.

195 Posts

November 5th, 2015 07:00

One concern to be aware of with trunking is that there is a maximum allowable length difference between the shortest and longest pairs in a trunk group.  The published limit is 400m, but there is a warning that differences of 30m or more may cause performance degradation.
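The mechanism is deskew: frames striped across trunk members must be re-ordered on arrival, and a longer member delivers its frames later. A rough sketch of the arithmetic (assuming ~5 ns of propagation delay per metre of fiber, an approximation):

```python
# Rough skew estimate between trunk members of different lengths.
# Assumes ~5 ns per metre of fiber; illustrative arithmetic only,
# not Brocade's published deskew method.
PROP_DELAY_NS_PER_M = 5.0

def trunk_skew_ns(length_diff_m: float) -> float:
    """Extra one-way delay on the longer trunk member, in ns."""
    return length_diff_m * PROP_DELAY_NS_PER_M

print(f"30 m difference:  {trunk_skew_ns(30):.0f} ns")
print(f"400 m difference: {trunk_skew_ns(400):.0f} ns")
```

At 8 Gb/s a full 2 KB frame serializes in roughly 2 µs, so a 400 m mismatch (~2 µs of skew) is on the order of a whole frame time, which is plausibly where in-order delivery across the trunk starts to hurt.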

I'm going to cite this Brocade community post as a decent source for a discussion of this:

What is the maximum distance difference between ISL - Brocade Community Forums - 57950

November 5th, 2015 09:00

Thank you so much for your response. Few questions:

1) When you say "running the long wave link on 8 Gb will be hard", what exactly do you mean? Do you mean installing a long wave SFP in an 8 Gb switch is difficult?

2) Could you please provide details on the DWDM option? What are the network requirements, and why is it expensive?

Thank you.



5.7K Posts

November 10th, 2015 06:00

Keep the distance difference within a trunk as low as possible. If they're only meters apart, you're fine.



5.7K Posts

November 10th, 2015 06:00

I just hope you can find the appropriate SFPs that can do the distance. Plus the switches will need enough buffer credits, or else the speed will drop: the available credits fall to 0 and only slowly recover, so you effectively end up pushing fewer GB/s than you would with enough buffers. And 100 km is quite a distance. I would turn to your reseller and ask for help getting the right SFPs.

2.1K Posts

November 20th, 2015 07:00

I'm a little late to the discussion, but I'd like to start by looking at this a little differently. You are asking about "migrating" to a new data center, yet you are talking a lot about running hosts in one data center off storage in another one. If you are migrating, is that really a requirement? How are you planning on doing the actual migration?

I just want to make sure that we aren't going too far down a solution path that might not be the best path for what you need. What kind of storage are you using? What kind of replication do you have available today? If you provide more detail on what you want to accomplish overall, there may be a less painful way to accomplish it.

We have done migration between data centers within a city, between provinces, and across the Atlantic and used several different methods all depending on what was available and what the business requirements were.

2.1K Posts

November 26th, 2015 11:00

Good point Ed, I was more focused on making sure we were all looking at this issue from the right perspective and forgot that if I was right, it would likely be too big to "solutioneer" here. Hopefully this will help keep things on track.
