
Unsolved


March 29th, 2016 15:00

Port group in Brocade 6510 Switch

In any Cisco SAN switch, each port's bandwidth is shared within a port group consisting of 12 ports. Is the same concept applied on the Brocade 6510? I need to dedicate a port speed for a few ports, and I need more information.

Message was edited by: JOE.

143 Posts

March 30th, 2016 07:00

Hi Joe,

The DS-6510B has 48 ports; 24 are enabled by default, and the other 24 can be enabled with a "Ports on Demand" license. All ports can run at 16 Gb/s.

You should be able to find the hardware manual and the Product Support Bulletin here:

https://support.emc.com/products/16423_Connectrix-DS-6510B/Documentation/

Regards,

Ed

6 Posts

March 30th, 2016 07:00

Thanks, I mean the port group bandwidth not the total number of ports

143 Posts

March 31st, 2016 05:00

Hi Joe,

Yes, I understood that. All ports can run at 16 Gb/s.

Were you able to download the Product Support Bulletin?

Regards,

Ed

2.1K Posts

March 31st, 2016 07:00

Just to put Ed's comment in different words, in case that helps: Brocade and Cisco take different approaches to providing bandwidth to switch ports. Cisco goes with the idea (which is probably right most of the time) that you will never drive ALL the ports in a switch (or in a given set of ports) full out at the same time. Brocade builds on the assumption that you might.

On most (maybe all, but definitely including all the current departmental switches like the 6510 and 6520) Brocade switches you can drive every port at full 16G at the same time (assuming you are using 16G SFPs).
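To put rough numbers on that difference, here is a minimal Python sketch of an oversubscription calculation. The figures are illustrative assumptions (a hypothetical 12-port group sharing a 12.8 Gbps pool versus a non-blocking 48-port switch), not data-sheet values for any particular model:

    # Illustrative oversubscription calculation; the numbers are assumptions,
    # not data-sheet values for any specific switch or line card.

    def oversubscription_ratio(ports, port_speed_gbps, backend_gbps):
        """Ratio of what the ports could demand to what the back end can carry."""
        return (ports * port_speed_gbps) / backend_gbps

    # Shared-bandwidth design: 12 ports at 4G sharing a 12.8 Gbps pool (hypothetical).
    print(oversubscription_ratio(12, 4, 12.8))      # ~3.75 : 1

    # Non-blocking design: 48 ports at 16G with 48 x 16G of switching capacity.
    print(oversubscription_ratio(48, 16, 48 * 16))  # 1.0 : 1, no oversubscription

A ratio above 1:1 just means the ports can collectively ask for more than the shared pool can deliver at once, which is the trade-off described above; 1:1 is the "you might drive everything at once" assumption.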

6 Posts

March 31st, 2016 09:00

Just to clarify further: in Cisco switches the port groups have a fixed amount of bandwidth, and each port group shares common resources from an assigned pool of allocated bandwidth. So that concept cannot be applied to this Brocade?

6 Posts

March 31st, 2016 09:00

All ports do 16G, so the throughput is dedicated to each of the 48 ports and is not shared with any other ports. I checked the documents and found that the 6510 comes with a Condor3 ASIC, 6 x 8 ports in total.

6 Posts

March 31st, 2016 09:00

Thanks. So driving each port at 16G won't affect the others. In the MDS there is limited bandwidth per port group, and driving one port at 8G will impact the rest of the ports in the group, so this seems to be different here.

2.1K Posts

March 31st, 2016 12:00

That's correct. You could drive every port on the entire switch at 16G, and the only thing you would really need to worry about would be any ISLs involved :-)

Mind you, if you were going to run EVERY port that busy you might need to reconsider your entire environment *lol*

6 Posts

March 31st, 2016 13:00

Why do I have to consider the ISLs if the ports can run 16G individually? lol, and why reconsider? It's a small environment.

2.1K Posts

March 31st, 2016 13:00

Sorry, that might not have been clear. I was suggesting that if you were running all the ports at 16G full out, then you would need to be careful of any ISLs, because that would be a lot of traffic if it needed to cross them.
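For anyone sizing that, the usual back-of-the-envelope check is the ISL fan-in (oversubscription) ratio: the edge bandwidth that may need to cross to the other switch divided by the total ISL bandwidth. A minimal Python sketch, where the host count, speeds, and the 4:1 target ratio are made-up assumptions rather than recommendations:

    import math

    # Minimal ISL-sizing sketch; the host count, speeds, and target ratio
    # are made-up assumptions, not recommendations.

    def isls_needed(hosts, host_gbps, isl_gbps, target_ratio):
        """ISLs required so that edge bandwidth / ISL bandwidth <= target_ratio."""
        edge_bandwidth_gbps = hosts * host_gbps
        return math.ceil(edge_bandwidth_gbps / (isl_gbps * target_ratio))

    # 24 hosts at 16G whose traffic crosses to the other switch, 16G ISLs,
    # and a (hypothetical) acceptable 4:1 oversubscription on the ISLs.
    print(isls_needed(24, 16, 16, 4))   # 6 ISLs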

38 Posts

April 20th, 2016 02:00

Older Cisco MDS 95xx director blades provide host-optimized ports and the option to set individual ports to dedicated mode for storage ports, inter-switch links, uplinks to NPV switches, or high-end servers. Depending on the line card, 12, 6, 3, or zero ports share the slot bandwidth of 12.8 or 32.4 Gbps; please review the attached picture "Collapsed Core Design" for an example...
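To make that sharing concrete, here is a small Python sketch of the worst-case per-port share when every port in a host-optimized group is busy at once, using the 12.8 Gbps figure above (a rough illustration, not a performance prediction):

    # Worst-case per-port share when every port in a host-optimized group is
    # busy at the same time. 12.8 Gbps is the shared slot bandwidth quoted
    # above; the group sizes are the 12-, 6-, and 3-port configurations.

    shared_gbps = 12.8
    for ports in (12, 6, 3):
        print(f"{ports:2d} ports sharing {shared_gbps} Gbps -> "
              f"{shared_gbps / ports:.2f} Gbps per port worst case")

    # Dedicated-rate ports (the "zero sharing" case) keep their full line rate.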

The MDS departmental switches have no oversubscription; the current MDS switches are:

- MDS 9148S: 48 x 16G FC

- MDS 9250i: 40 x 16G FC, 8 x 10G FCoE, 2 x 1/10G FCIP/iSCSI, and Write Acceleration

- MDS 9396S: 96 x 16G FC

The investment protection of the current MDS 9700 directors is that you can use 8G or 16G optics (as required) today, and they are already prepared for upcoming speeds (40G FCoE, 32G FC) without any oversubscription. You just need to upgrade the firmware, add X-Bar modules, and add the new card - that's it!

In contrast to a Clos architecture, the advantage of the MDS 9000 crossbar is that there is no oversubscription, and the latency for ALL frames is low, predictable, and stable.

One note about speed: increasing the speed from 8G to 16G would not be the solution for performance issues. Today I see nodes providing up to 200 buffer-to-buffer credits; each MDS 9396S / 9700 FC port provides up to 500 credits to be able to service the requirements of high-end devices.
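As a rough illustration of why the credit count matters: the buffer-to-buffer credits needed to keep a link full scale with the round-trip time divided by the time it takes to send one frame. A minimal Python sketch using rule-of-thumb constants (about 5 microseconds of propagation per km of fibre, 2112-byte full-size frames, roughly 1,600 MB/s of usable 16G FC throughput); these are approximations, not vendor figures:

    import math

    # Rough estimate of the buffer-to-buffer credits needed to keep a 16G FC
    # link full with large frames. Constants are rule-of-thumb approximations,
    # not vendor specifications.

    PROPAGATION_US_PER_KM = 5.0     # ~5 microseconds per km of fibre, one way
    FULL_FRAME_BYTES = 2112         # full-size FC frame payload
    FC16_THROUGHPUT_MB_S = 1600     # ~1600 MB/s usable at 16G FC

    def bb_credits_needed(distance_km):
        round_trip_us = 2 * distance_km * PROPAGATION_US_PER_KM
        frame_time_us = FULL_FRAME_BYTES / FC16_THROUGHPUT_MB_S  # MB/s equals bytes per microsecond
        return math.ceil(round_trip_us / frame_time_us)

    print(bb_credits_needed(10))   # ~76 credits for a 10 km link
    print(bb_credits_needed(60))   # ~455 credits, close to the 500 mentioned above

On the same rule-of-thumb arithmetic, a device offering 200 credits can only keep roughly 26 km of 16G link busy with full-size frames, which is why the higher per-port credit counts matter for long links or slow-draining devices.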

1 Attachment
