Does anyone know how to calculate the data transfer rate for the following environment? A rough sketch of my own attempt follows the specs.
- Source/Target device = VMAX (3390 emulation, 56 KB per track / total capacity = 6 TB)
- SRDF mode = Adaptive Copy Disk mode
- Distance = Tokyo - New York = 10,000 km
- Link bandwidth = 1 x 100 Mbps
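To show what I mean, here is my own back-of-the-envelope (the ~5 µs/km fiber delay and the 70% protocol efficiency are assumptions on my part, not EMC figures):

```python
# Rough sketch: what a single 100 Mbps FCIP link over ~10,000 km can do.
# The fiber propagation speed and efficiency factor are assumptions.

LINK_BPS = 100e6            # 1 x 100 Mbps link
DISTANCE_KM = 10_000        # Tokyo - New York, roughly
FIBER_US_PER_KM = 5         # ~5 microseconds per km in fiber (assumed)

rtt_s = 2 * DISTANCE_KM * FIBER_US_PER_KM * 1e-6    # round-trip time
bdp_bytes = (LINK_BPS / 8) * rtt_s                  # bandwidth-delay product

print(f"RTT ~ {rtt_s * 1000:.0f} ms")                                # ~100 ms
print(f"Window needed to fill the pipe ~ {bdp_bytes / 1e6:.2f} MB")  # ~1.25 MB

# If the TCP window is large enough, the link itself is the ceiling:
EFFICIENCY = 0.7            # assumed TCP/FCIP framing overhead factor
goodput_MBps = LINK_BPS * EFFICIENCY / 8 / 1e6
print(f"Best-case goodput ~ {goodput_MBps:.1f} MB/s")                # ~8.8 MB/s
```

Since adaptive copy is asynchronous, the ~100 ms RTT should mostly matter for TCP window sizing rather than host response time, so I assume the 100 Mbps pipe is the real limit. Does that match how others size this?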
You're using FCIP? I've been running SRDF/S over 230 miles on MDS 9216i's over 2 links with a combined bandwidth of 1.2 Gbps, and the speed I got was about 400 MB per second (the 9216i does compression).
I have no clue what happens when you're running acp_disk over 10,000 km, though. Keep me informed, this is interesting.
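For what it's worth, my 400 MB/s figure only makes sense because of the compression; the raw arithmetic looks like this:

```python
# Sanity check on my own numbers: 1.2 Gbps of links can carry at most
# 150 MB/s uncompressed, so 400 MB/s implies the 9216i compression
# was doing roughly 2.7:1 on our data.
observed_MBps = 400
wire_capacity_MBps = 1.2 * 1000 / 8      # 1.2 Gbps expressed in MB/s = 150
ratio = observed_MBps / wire_capacity_MBps
print(f"Implied compression ratio ~ {ratio:.1f}:1")
```

So what acp_disk achieves on your link will depend heavily on how compressible the 3390 track data is.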
Thank you for the information.
There is a potential customer planning a data center migration (6 TB, Tokyo to NY) in the latter half of the year.
The customer's question is how long the transfer will take when the data is moved between VMAX arrays using SRDF/DM (probably over FCIP).
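To make the question concrete, here is how the timing works out under a few assumed compression ratios (the 70% efficiency factor is my guess at protocol overhead, not a vendor number):

```python
# Hypothetical timing for migrating 6 TB over 1 x 100 Mbps.
# Efficiency and compression ratios are assumptions, not measurements.
CAPACITY_BYTES = 6e12       # 6 TB to migrate
LINK_BPS = 100e6            # 1 x 100 Mbps
EFFICIENCY = 0.7            # assumed TCP/FCIP overhead factor

for compression in (1.0, 2.0, 3.0):
    goodput_Bps = (LINK_BPS / 8) * EFFICIENCY * compression
    days = CAPACITY_BYTES / goodput_Bps / 86_400
    print(f"{compression:.0f}:1 compression -> ~{days:.1f} days")
# 1:1 -> ~7.9 days, 2:1 -> ~4.0 days, 3:1 -> ~2.6 days
```

So even in a good case I suspect the answer is "several days to a week or more" of continuous transfer, unless they add bandwidth.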
Very interesting project you have there. But as I said, I don't know what MB/s you're going to see when you're actually moving data halfway around the world.
The 9216i's have data compression.
And it was 2 links, one of 800 Mbps and the other 400 Mbps, so 1.2 Gbps total. Data compression is the magic word, and doing SRDF/S at this speed over FCIP (230 miles!)... wow! It's really true!
If you ask the EMC guys in the Netherlands and mention my name, they'll know which customer this is.
You have no network latency issues with a distance of 230 miles with SRDF/S, obviously? It's just that I thought the EMC recommendation is to use SRDF/A for distances over 125 miles / 200 km. At least that's what I was told on my Symmetrix Business Continuity mgmt training 😜 Obviously this EMC miles metric depends on the bandwidth available 😛
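That 200 km rule of thumb roughly follows from what one round trip adds to every synchronous write (again assuming ~5 µs/km one-way in fiber):

```python
# Added host write latency for SRDF/S at various distances,
# assuming ~5 us/km one-way in fiber and one round trip per write.
FIBER_US_PER_KM = 5
for km in (125, 200, 375, 10_000):
    added_ms = 2 * km * FIBER_US_PER_KM / 1000
    print(f"{km:>6} km -> +{added_ms:5.1f} ms per synchronous write")
# 200 km adds ~2 ms; 10,000 km would add ~100 ms, which is why
# synchronous mode is off the table for the Tokyo-NY case anyway.
```

Presumably that ~2 ms of added write latency is where the training guidance comes from.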
Good to get feedback on response times at those distances, considering Ireland's geographic size...
I must admit that this 375 km was when I was working for a large telecom company and the lines were VERY good. The max latency we had was 6 ms, and the switches we were using were Cisco 9216i's with data compression. I must say this setup was very impressive. On the other hand, the apps using these hosts had to tolerate the extra 6 ms, but I never heard any complaints.
Both lines ran across the country (not in a straight line), which explains the 375 km (as the crow flies it would have been 200 km). The DMX2/3s we were using handled the redundancy, so no customer ever experienced any serious outage.