1 Rookie
•
16 Posts
0
Brocade extended fabrics
Hello,
I have a Brocade SAN fabric (DCX-8510-8 switches) in our current datacenter. I would like to migrate to a new datacenter 100 km away using Brocade Extended Fabrics. Here is the plan. Please tell me if this will work:
- Install new DCX-8510-8 switches in new datacenter and connect new storage array to this fabric.
- Lay dark fiber between new datacenter to the current datacenter.
- Connect the DCX-8510-8 core switch of current datacenter to the DCX-8510-8 core switch of the new datacenter and configure brocade extended fabrics so that the switches in the new datacenter join the existing fabric of the current datacenter.
- Zone the hosts in current datacenter to the storage in the new datacenter.
- Provision storage from the new datacenter to the hosts in the current datacenter.
- The hosts now see storage from both the current and the new datacenter.
- Copy the data host-based from the current-datacenter storage to the new-datacenter storage.
- Move host to new datacenter.
Questions:
- Can hosts access storage 100 km away using Brocade Extended Fabrics?
- Is any special hardware required (switches, SFPs, licenses) to implement this?
- What are the caveats, if any?
Ed Schulte
148 Posts
1
November 26th, 2015 01:00
Hello,
Reading through this thread and the replies, this looks like a full-blown implementation for a data centre migration, and as Allen Ward already asked: how long does this need to stay in place?
There are so many open questions and possible solutions that I advise you to get in touch with your local EMC team; professional services can look at this and help you with the best tailored solution for your needs.
I'll PM you for your details.
Regards,
Ed
ZaphodB
195 Posts
1
November 2nd, 2015 11:00
We did much the same with ESX and two datacenters ~10 km apart. The 10 km optics did not require the Extended Fabrics license on the switches, but I believe you will need that license for the longer distance.
At 100 km you will see roughly 1 ms of round-trip latency on the links; that will be measurable, but should be OK for most uses.
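For reference, that ~1 ms figure is just fiber propagation delay: light travels at roughly 5 µs per km in glass (refractive index ~1.5), and every I/O has to make the round trip. A back-of-envelope sketch (the 5 µs/km constant is an approximation, not a vendor spec):

```python
# Round-trip propagation delay over dark fiber.
# Assumes ~5 us/km one-way (refractive index ~1.5); real fiber varies slightly.
FIBER_DELAY_US_PER_KM = 5.0

def round_trip_ms(distance_km: float) -> float:
    """One-way delay is distance * 5 us/km; an I/O pays it twice."""
    return 2 * distance_km * FIBER_DELAY_US_PER_KM / 1000.0

print(round_trip_ms(100))  # ~1.0 ms round trip at 100 km
print(round_trip_ms(10))   # ~0.1 ms at 10 km
```

At 10 km the round trip is only ~0.1 ms, which is why shorter inter-site links feel negligible by comparison.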
My biggest advice is to carefully vet, and monitor, the signal strength on the ISLs between the buildings. We found that getting the connections to light up under no load was relatively easy, but that when one or more ISLs had marginal-to-poor signal reception from the far end, we would see very spiky response times at the hosts. We have two fabrics with 8 x 8Gb ISLs in each fabric; tons of bandwidth compared to our needs. One flaky ISL can cause one out of every X I/Os to get lost, and while the MPIO drivers handle that, they only recognize it after the failed I/O times out.
I feel that we are stable, and *could* run split hosts/storage, but in practice we only exploited the connections heavily during migration. We do not, as a rule, configure production hosts in one DC with storage from the other. So I'd suggest that you do consider it mostly a migration tool.
For us, the actual migration was very painless. We are ESX users, and nearly completely virtualized. So we constructed a modest cluster of hosts in the new building, and then used vMotion/Storage vMotion to load them up. After that first seeding operation, we were able to shut down and relocate hosts a few at a time and repeat the process. No outages or even downtime, and only a fraction of the hosts were new. The storage in the old building was left in place to be repurposed, and the new DC had new storage installed.
RRR
2 Intern
•
5.7K Posts
1
November 4th, 2015 13:00
ZaphodB
195 Posts
1
November 5th, 2015 07:00
One concern to be aware of with trunking is that there is a maximum allowable length difference between the shortest and longest pairs in a trunk group. The published limit is 400m, but there is a warning that differences of 30m or more may cause performance degradation.
I'm going to cite this Brocade community post as a decent source for a discussion of this:
What is the maximum distance difference between ISL - Brocade Community Forums - 57950
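As a rough illustration of why the length mismatch matters: the skew between trunk members is just the extra fiber's propagation delay, about 5 ns per metre (an approximation for illustration, not a Brocade figure):

```python
# Skew between ISL trunk members caused by cable-length mismatch.
# Assumes ~5 ns/m propagation in fiber; an approximation, not a Brocade spec.
NS_PER_METRE = 5.0

def trunk_skew_ns(length_diff_m: float) -> float:
    """Extra one-way delay seen on the longer member of the trunk group."""
    return length_diff_m * NS_PER_METRE

print(trunk_skew_ns(30))   # 150 ns: around where degradation is warned about
print(trunk_skew_ns(400))  # 2000 ns: at the published maximum difference
```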
NAS_Person_2010
1 Rookie
•
16 Posts
0
November 5th, 2015 09:00
Thank you so much for your response. A few questions:
1) When you say "running the long wave link on 8 Gb will be hard", what exactly do you mean? Do you mean that installing a long-wave SFP in an 8 Gb switch is difficult?
2) Could you please provide details on the DWDM option? What are the network requirements, and why is it expensive?
Thank you.
RRR
2 Intern
•
5.7K Posts
0
November 10th, 2015 06:00
Keep the distance difference within a trunk as low as possible. If the links are only meters apart, you're fine.
RRR
2 Intern
•
5.7K Posts
0
November 10th, 2015 06:00
I just hope you can find SFPs rated for that distance. The switches will also need enough buffer credits, or throughput will drop: the credits fall to zero and only slowly recover, so you effectively send less data per second than you would with enough buffers. And 100 km is quite a distance. I would turn to your reseller and ask for help getting the right SFPs.
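To put a rough number on the buffer-credit point: a buffer-to-buffer credit is consumed per in-flight frame until the far end's R_RDY comes back, so a link needs enough credits to cover the round trip. A back-of-envelope sketch, assuming full-size 2148-byte frames, 8b/10b-encoded 8G FC (~8.5 Gbaud) and ~5 µs/km fiber delay (all approximations, not vendor sizing figures):

```python
import math

# Estimate buffer-to-buffer credits needed to keep a long FC link busy.
# Assumptions: full-size 2148-byte frames, 8b/10b encoding (10 bits per byte
# on the wire), 8G FC line rate ~8.5 Gbaud, ~5 us/km one-way fiber delay.
LINE_RATE_BAUD = 8.5e9
FRAME_BYTES = 2148
FIBER_DELAY_S_PER_KM = 5e-6

def bb_credits_needed(distance_km: float) -> int:
    frame_time = FRAME_BYTES * 10 / LINE_RATE_BAUD       # serialization time
    round_trip = 2 * distance_km * FIBER_DELAY_S_PER_KM  # frame out, R_RDY back
    # One credit per frame that can be in flight before the first R_RDY returns.
    return math.ceil(round_trip / frame_time)

print(bb_credits_needed(100))  # ~400 credits at 100 km and 8 Gb
```

Smaller frames need more credits per byte transferred, so treat this as a lower-bound sketch; the switch's long-distance port configuration is what actually allocates the credits.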
Allen Ward
2.1K Posts
1
November 20th, 2015 07:00
I'm a little late to the discussion, but I'd like to start by looking at this a little differently. You are asking about "migrating" to a new data center, yet you are talking a lot about running hosts in one data center off storage in another one. If you are migrating, is that really a requirement? How are you planning to do the actual migration?
I just want to make sure we aren't going too far down a solution path that might not be the best one for what you need. What kind of storage are you using? What kind of replication do you have available today? If you provide more detail on what you want to accomplish overall, there may be a less painful way to accomplish it.
We have done migration between data centers within a city, between provinces, and across the Atlantic and used several different methods all depending on what was available and what the business requirements were.
Allen Ward
2.1K Posts
0
November 26th, 2015 11:00
Good point, Ed. I was more focused on making sure we were all looking at this issue from the right perspective, and forgot that if I was right it would likely be too big to "solutioneer" here. Hopefully this will help keep things on track.