There you go! That’s exactly why they talk decibel gibberish. Even Penny would understand and we can’t have that, can we?
Or you can be the slightly more normal one and he can be the smarter one 🙂 It's all in your perspective!
So this was about getting the link "up" on both ends, but what about actually sending data across those links? Longer links can actually contain multiple Fibre Channel frames at once! The speed of light isn't infinite, so it takes time (microseconds to sub-milliseconds) for a frame to travel from Tx (transmitter) to Rx (receiver).
If a fibre optic path is an FC ISL, then knowing its bit-length on the line, we can calculate the maximum number of frames that can simultaneously fit on that physical link (source to destination and back). That number is the exact amount of buffer-to-buffer credits your ISL switch ports will require.
So if you upgrade a port's speed from 2 to 4 Gbps, for example, the number of frames that can be on the link simultaneously doubles, and the amount of B2B credits has to be doubled as well.
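This scaling argument can be sketched in a few lines of Python (a hypothetical helper; the 21,480-bit full-size encoded frame and the ~200,000 km/s speed of light in fiber are figures that come up later in this thread):

```python
# Rough sketch: the number of frames that fit on a link scales
# linearly with link speed (bits inserted per second), so doubling
# the speed doubles the B2B credits needed.

C_FIBER_KM_S = 200_000.0        # approx. speed of light in fiber (km/s)
FRAME_BITS_10B = 21_480         # max FC frame, after 8b/10b encoding

def frames_in_flight(distance_km: float, speed_bps: float) -> float:
    """One-way number of full-size frames 'on the wire' at once."""
    one_way_secs = distance_km / C_FIBER_KM_S    # propagation delay
    bits_in_flight = one_way_secs * speed_bps    # bits inserted meanwhile
    return bits_in_flight / FRAME_BITS_10B

two_g = frames_in_flight(50, 2_125_000_000)
four_g = frames_in_flight(50, 4_250_000_000)
print(round(four_g / two_g, 3))   # 2.0: double the speed, double the credits
```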
Please note that although the link between sites can be fast enough and have enough buffer credits to facilitate line-rate communication, the receiving array can still be a bottleneck, which can cause the sending side to slow down. Consider for example MirrorView or SRDF. The primary storage array is written to by a number of hosts, and the back end of the primary array can handle that amount of IOps. All I/Os that were written are also sent to the secondary array to be replicated. If the receiving end cannot handle that amount of (write) IOps, because the number of disks is too low or there's not enough cache available, the receiving (busy) array processes the incoming I/Os more slowly than the ISL between the sites can deliver them, so acknowledgements aren't sent back to the primary array instantaneously, but a little bit slower. In the end the sending application receives its acknowledgement somewhat later, and we all know that communication is built on send / receive and ACK, right? If the ACK comes in slowly, the conversation is slower than you'd like it to be.
So configuring the link between sites is one thing; sizing the receiving array is another (important) thing!
Before we begin an example of how to calculate buffer requirements, it is important to know the numerical definition of a Fibre Channel Gigabit, as well as to understand the structure of a Fibre Channel Frame.
In the Fibre Channel world, one gigabit is defined to be 1,062,500,000 bits (which is not 1024x1024x1024). Other Fibre Channel Gigabit values are then derived from this reference definition. For example, two (Fibre Channel) Gb = 2 x 1,062,500,000 bits = 2,125,000,000 bits. To avoid confusion with the traditional (non Fibre Channel) definition of a Gb, throughout this document I will use the symbol Gbfc to mean “1,062,500,000 bits”, or 1 Fibre Channel Gb.
1 Gbfc = 1,062,500,000
2 Gbfc = 2,125,000,000
4 Gbfc = 4,250,000,000
8 Gbfc = 8,500,000,000
10 Gbfc = 10,625,000,000
16 Gbfc = 17,000,000,000
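All of these values follow from the single base definition, which a quick sanity check (plain Python, nothing FC-specific) confirms:

```python
# Every Fibre Channel "gigabit" value is a multiple of the base
# definition of 1,062,500,000 bits (not 2**30 and not 10**9).
GBFC = 1_062_500_000

for mult in (1, 2, 4, 8, 10, 16):
    print(f"{mult:>2} Gbfc = {mult * GBFC:,} bits")
```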
Next, we show the anatomy of a Fibre Channel Frame with notes.
Start of Frame: 4 bytes or 32 bits
Standard Frame Header: 24 bytes or 192 bits
Data (payload): [0 – 2,112] bytes or [0 – 16,896] bits
CRC: 4 bytes or 32 bits
End of Frame: 4 bytes or 32 bits
TOTAL (per frame): [36 – 2,148] bytes or [288 – 17,184] bits
The term byte used here means 8 bits (not the 10 bits that result from 8/10 bit encoding).
The maximum Fibre Channel frame size is 2,148 bytes.
The final frame size must be a multiple of 4 bytes. Thus the Data (payload) segment will, as necessary, be padded with 1 to 3 “fill-bytes” to achieve an overall 4 byte frame alignment.
The standard Frame Header size is 24 bytes. However, up to 64 additional bytes (for a total of an 88 byte header) can be included for applications that need extensive control information. Since the total frame size cannot exceed the maximum of 2,148 bytes, these additional Header bytes will subtract from the Data segment size by as much as 64 bytes (per frame). This is why the maximum Data (payload) size is 2,112 (because [2,112 – 64] = 2,048, which is exactly 2K-bytes of data).
The final frame, once constructed, is passed through the 8-bit to 10-bit (8b/10b) encoding process.
In the FC world, 1 Word = 4 x 8/10 bit encoded bytes (40 bits).
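The frame-size bookkeeping from the table and notes above can be sketched as follows (hypothetical helper names; the byte counts are the ones listed above):

```python
# Sketch of the FC frame byte budget, before 8b/10b encoding.
SOF, HEADER, CRC, EOF = 4, 24, 4, 4   # fixed overhead, in bytes
MAX_PAYLOAD = 2_112                   # max Data (payload) bytes

def frame_bytes(payload: int, extra_header: int = 0) -> int:
    """Unencoded (8-bit byte) frame size; an optional extended header
    (up to 64 bytes) subtracts from the available payload."""
    assert 0 <= extra_header <= 64
    assert 0 <= payload <= MAX_PAYLOAD - extra_header
    return SOF + HEADER + extra_header + CRC + EOF + payload

print(frame_bytes(0))                 # 36 bytes, minimum frame
print(frame_bytes(MAX_PAYLOAD))       # 2148 bytes, maximum frame
print(frame_bytes(MAX_PAYLOAD) * 8)   # 17184 bits before 8b/10b encoding
```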
Then we have the speed of light:
299,792.458 km/s == 0.00000333564095 s/km, so that's 3.33564095 microseconds per kilometer: about 3.3 millionths of a second for each km "traveled".
NOTE: this is in vacuum! I'll adjust the outcome later. The speed of light in fiber is approximately 200,000 km per second instead of almost 300,000.
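A quick check of both figures (the ~1.5 refractive index for fiber is the approximation used later in this thread):

```python
# Propagation delay per kilometer: vacuum vs. typical fiber
# (refractive index ~1.5, an assumption used throughout this thread).
C_VACUUM = 299_792.458          # speed of light in vacuum, km/s
N_FIBER = 1.5                   # approximate refractive index of fiber

us_per_km_vacuum = 1e6 / C_VACUUM
us_per_km_fiber = 1e6 / (C_VACUUM / N_FIBER)

print(round(us_per_km_vacuum, 4))   # ~3.3356 microseconds per km
print(round(us_per_km_fiber, 4))    # ~5.0035 microseconds per km
```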
Next we take the linear distance between the two sites and determine the buffer requirements.
Let's say, for the purposes of discussion, that we have redundant (i.e. two) fibre optic paths between a primary and secondary site.
The linear distance (as opposed to displacement) is 91.732608 Km (or about 57 miles) for one path, and 43.452288 km (or about 27 miles) for the other path.
Suppose the link speed is 2 Gbfc:
| Distance (km) | Distance (s) | # of bits (1-way) | 8/10 FC "bytes" |
|---|---|---|---|
| 91.732608 | 0.000305987 | 650,222 | 65,022 |
| 43.452288 | 0.000144941 | 308,000 | 30,800 |
Again: this is in vacuum instead of fiber! But stay with me, I'll fix it later on!!
Calculation notes for the table above:
Column 2 indicates the amount of time (in seconds) it takes one optical variation (however many bits that optical variation happens to represent) to travel the one-way distance specified in column 1. It comes from multiplying the number of seconds it takes light to travel 1 km (i.e. 0.00000333564095 s/km) by the number of km traveled, which is specified in column 1.
Column 3 is the product of column 2 (i.e. the line distance in seconds) and the FC bit insertion rate (2,125,000,000). This column effectively represents the amount of additional I/O that could have been processed at the host had the response to that I/O been instantaneous (actually twice that amount, since the acknowledgement time for those I/Os, coming back the other way, has to be accounted for as well).
Column 4 is column 3 divided by 10. This is done to group single bits in transit into equivalent 8/10-bit "byte" quantities. The Fibre Channel protocol converts every 8-bit byte into a 10-bit equivalent (via the 8/10 bit encoding algorithm) before transmitting it. The value in column 4 will determine the number of buffers needed for an ISL switch port.
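The three column rules can be expressed as a short sketch (same constants as above; vacuum speed of light, as in the table):

```python
# Columns of the table above: one-way time, bits in flight, 8b/10b "bytes".
C_VACUUM = 299_792.458            # km/s (vacuum, as in the table)
BITRATE_2G = 2_125_000_000        # 2 Gbfc insertion rate, bits/s

for km in (91.732608, 43.452288):
    secs = km / C_VACUUM          # column 2: one-way propagation delay
    bits = secs * BITRATE_2G      # column 3: bits inserted during that delay
    enc_bytes = bits / 10         # column 4: grouped into 10-bit "bytes"
    print(f"{km:10.6f} km  {secs:.9f} s  {bits:11.1f} bits  {enc_bytes:9.1f} bytes")
```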
Frame Length8 (in km) = (299,792.458 km/s) x (Seconds-Between-Inserted-Bits s/bit) x (Number-Of-Bits-Per-Frame bits)
Frame Length10 (in km) = (299,792.458 km/s) x (Seconds-Between-Inserted-Bits s/bit) x (Number-Of-Bits-Per-Frame bits) x 10/8
Seconds-Between-Inserted-Bits = 1/1,062,500,000 (for 1Gbps FC) = 1/2,125,000,000 (for 2Gbps FC)
Number-Of-Bits-Per-Frame = Variable depending on Data payload and/or Header size. See table and notes above. In most cases the Header will be the Standard size of 24 bytes.
| Number of bits / frame (8-bit / 10-bit encoded) | Frame length (km) @ 2 Gbfc (0.000141078803764705 km/bit) | Number of in-transit frames (1-way) for the 91.732608 km leg |
|---|---|---|
| 17,184 / 21,480 | 3.0304 km (2,112 PL-bytes) | 30.2711 frames (buffers) |
And again: this is vacuum, not in fiber!!! Stay with me a little bit longer....
Column 1 represents the total number of 8/10 bits in a Fibre Channel frame (with a standard header size of 24 bytes) at varying Data payloads (PL).
Column 2 is the 10-bit value in Column 1 multiplied by (1/2,125,000,000 s/bit) and by the speed of light (299,792.458 km/s). Thus, this Column 2 essentially represents the linear distance that 1 (one) single frame consumes for the specified payload (PL).
Column 3 represents the quotient derived by dividing the longest (worst case) DWDM line distance (in kilometers) by the number of kilometers per frame (calculated in Column 2). Thus, this column essentially indicates how many additional ONE-WAY frames' worth of data could have been processed by the host/application, had the response to the first frame been instantaneous. In other words, this is how many ONE-WAY (not round-trip) switch buffers you would need to allow non-stop transmission. Double the values in this Column 3 (i.e. round trip) to yield the number of buffers required of your ISL switch ports.
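Putting the frame length and the two leg distances together (vacuum figures, full-size frames; `FRAME_KM` is a hypothetical name):

```python
# Round-trip B2B credits for the two example legs at 2 Gbfc,
# using full-size (21,480 encoded-bit) frames, in vacuum.
import math

KM_PER_BIT = 299_792.458 / 2_125_000_000   # ~0.000141078... km per bit
FRAME_KM = 21_480 * KM_PER_BIT             # ~3.0304 km occupied per frame

for leg_km in (91.732608, 43.452288):
    one_way = leg_km / FRAME_KM            # frames in flight, one direction
    credits = math.ceil(2 * one_way)       # round trip, rounded up
    print(f"{leg_km} km: {one_way:.4f} frames one-way -> {credits} credits")
```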
So in our example, running the two long links at 2 Gbfc speed (in vacuum):
- 91.732608 km leg: 30.2711 frames one-way, so 2 x 30.2711 ≈ 61 B2B credits
- 43.452288 km leg: 14.3389 frames one-way, so 2 x 14.3389 ≈ 29 B2B credits
Can't we make a rule of thumb out of this?
I admit, the numbers I used are taken from an example document I have lying around so I didn't need to do the math myself, but making a ROT out of it backfires on me.
The easiest way to calculate the rule of thumb is getting back to 1 km: at 2 Gbfc speed the number of B2B credits needed for a 1 km link is 2 x 30.2711 / 91.732608 = 0.6600 IN VACUUM.
Other examples we can now use to more easily remember these "nice" numbers:
2 Gbfc over a 2 km link = 2 x 0.6600 = ± 1.32
4 Gbfc over a 2 km link = 4 x 0.6600 = ± 2.64
4 Gbfc over a 20 km link = 40 x 0.6600 = ± 26.4
So what would be a nice number to remember? 4 Gbfc requires 1.32 x the number of km. I guess we have just created a rule of thumb! 1.32 can be remembered easily enough, right?
So running a 4 Gbfc link over 44 km requires 1.32 x 44 = 58.08, rounded up to 59 B2B credits.
A 2 Gbfc link over the same distance (44 km) will need half of that, so 30 B2B credits. Half the speed, half the buffer credits; double the speed, double the buffer credits.
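The scaling rule can be sketched from first principles rather than from the rounded per-km figure (hypothetical helper; vacuum only, no fiber correction yet):

```python
# Rule-of-thumb sketch (vacuum): credits scale linearly with both
# link speed and distance, derived from the frame length on the line.
import math

C_VACUUM = 299_792.458       # km/s
FRAME_BITS_10B = 21_480      # full-size frame after 8b/10b encoding

def b2b_credits_vacuum(speed_gbfc: float, km: float) -> int:
    bitrate = speed_gbfc * 1_062_500_000           # bits/s at this speed
    frame_km = FRAME_BITS_10B * C_VACUUM / bitrate # km occupied by one frame
    return math.ceil(2 * km / frame_km)            # round trip, rounded up

print(b2b_credits_vacuum(4, 44))   # ~1.32/km x 44 km -> 59 credits
print(b2b_credits_vacuum(2, 44))   # half the speed   -> 30 credits
```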
HOWEVER: the speed of light in vacuum needs to be divided by the refractive index of the fiber, which is about 1.5.
Buffers Rule of Thumb (taking the actual speed of light into account):
4 Gbfc needs 1.32 times 1.5 times the distance in kilometers = 1.98 buffers per kilometer
This extra factor of 1.5 is due to the refractive index of the carrier used. Most fibers have an index of approximately 1.5, which "slows down" the speed of light to about 200,000 kilometers per second.
(If your distance is in miles, multiply by 1.609 first to get kilometers. Not the easiest number to remember, but you can figure out a factor that works for you, right?)
EMC as well as Brocade recommend adding an extra 20% B2B credits to the amount you just calculated.
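The complete rule of thumb, with the fiber correction and the 20% headroom folded in (a sketch; the ~0.66 credits/km base figure is the vacuum value for 2 Gbfc derived above):

```python
# Complete rule of thumb: vacuum figure x 1.5 (fiber refractive index),
# plus the ~20% headroom that EMC and Brocade recommend on top.
import math

def b2b_credits(speed_gbfc: float, km: float) -> int:
    per_km_vacuum = 0.66 * (speed_gbfc / 2)    # ~0.66/km at 2 Gbfc, in vacuum
    per_km_fiber = per_km_vacuum * 1.5         # light is slower in fiber
    return math.ceil(per_km_fiber * km * 1.2)  # +20% best-practice margin

print(b2b_credits(4, 44))   # 1.98/km x 44 km x 1.2 = 104.5 -> 105 credits
```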
Thanks. And I almost lost my post! When I was done I accidentally switched to the advanced editor and nothing happened... aaargh! Fortunately I had copied the HTML text to Notepad only a minute before, so I was able to restore my post. It took me an hour and a half or so to type this, so I was glad it wasn't lost in the end.
So what about this factor of roughly 1.3 for a 4 Gbfc link? Do you think it's correct? I can't find any errors in my calculations.
And I was afraid my post was long... yikes!
I originally calculated all buffer credits back from 2 Gbit, which is/was 1 credit per km. 4 Gbit would need 2 credits per km, etc.
Don't forget that too many credits can't harm performance (assuming you're not running short on other links), and that you can always monitor credit usage on the switch. Cisco Device Manager has a counter that shows the slack you've got:
In this case it's an ISL that's maybe a couple of hundred meters long, so it can cope with 1 or 2 credits depending on actual speed and distance. The port is nevertheless assigned 32 credits, of which 32 are still available (see the bottom 2 numbers, CurrRxBbCredits and CurrTxBbCredits).