Solved!


MXL PS-M4110 DCB issues

July 30th, 2013 08:00

Hi all,

The setup:

Configured the MXLs as per the Dell Force10 MXL 10/40GbE Blade Switch Configuration Guide. The MXLs have the latest firmware. I have configured the DCB VLAN in the GUI.

The Issue:

On the MXL console, the port flaps and this error is displayed:

%DIFFSERV-4-DSM_PFC_NUM_NO_DROP_Q_EXCEEDS_LIMIT: Configuring PFC priorities failed on interface Te 0/14 due to System Exceeds Max Allowed Lossless Queues limit of 2. Update Local Params with PFC Defaults(No priorities enabled for PFC) incase admin params update failure or update with admin params for remote params update failure, Adminstrator has to configure with PFC priorities with in the loss less queue limit.
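
From what I can tell, the message means the MXL allows PFC (lossless queues) on at most two priorities at once. For reference, a dcb-map that stays inside that limit, with PFC on for a single iSCSI priority, looks roughly like this in FTOS (the map name, bandwidth split, and priority are placeholder assumptions, not my actual config):

Dell(conf)# dcb-map SAN_DCB_MAP
Dell(conf-dcbmap-SAN_DCB_MAP)# priority-group 0 bandwidth 60 pfc off
Dell(conf-dcbmap-SAN_DCB_MAP)# priority-group 1 bandwidth 40 pfc on
Dell(conf-dcbmap-SAN_DCB_MAP)# priority-pgid 0 0 0 0 1 0 0 0
Dell(conf-dcbmap-SAN_DCB_MAP)# exit
Dell(conf)# interface tengigabitethernet 0/14
Dell(conf-if-te-0/14)# dcb-map SAN_DCB_MAP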

Any clues please?

84 Posts

July 31st, 2013 15:00

The issue was resolved by removing the PS-M4110; I guess this made the port renegotiate DCB. The port flapping turned out to be the eth1 interface. I forced the port on the switch to speed 1000 and it settled down.
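
For anyone who finds this later, forcing the speed was just something like this (the interface number here is a placeholder for whichever port was flapping):

Dell(conf)# interface tengigabitethernet 0/14
Dell(conf-if-te-0/14)# speed 1000
Dell(conf-if-te-0/14)# no shutdown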

84 Posts

July 30th, 2013 09:00

OK, thanks, but I can't get to the array via IP to update the firmware, for the reason in the first post.

5 Practitioner • 274.2K Posts

July 30th, 2013 09:00

What are you using for NICs? Intel X520? What OS is the server running?

Have you opened a case with Dell Support?

84 Posts

July 30th, 2013 09:00

Hi Don,

This error is storage side; Te 0/14 is one of the eth interfaces on the PS-M4110.

Thanks for your help.

5 Practitioner • 274.2K Posts

July 30th, 2013 09:00

What firmware is on the blade storage? Should be 6.0.5.

84 Posts

July 30th, 2013 09:00

Hi Don,

There is nothing loaded onto the blades yet; they are switched off. The NIC will be a Broadcom 57810 with firmware 7.6.15.

I thought I would get the storage operational first.

Thanks again.

5 Practitioner • 274.2K Posts

July 30th, 2013 09:00

Thank you, I did understand that message.

On the servers (blades) connecting to the MXL, what are the NICs/CNAs? Broadcom or Intel?

DCB isn't like port speed, flow control, and duplex, which autonegotiate easily.

What OS is running on the servers?
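
In the meantime, to see what DCBX has actually negotiated on a port, FTOS has show commands along these lines (exact syntax and output vary by release):

Dell# show interfaces tengigabitethernet 0/14 dcbx detail
Dell# show interfaces tengigabitethernet 0/14 pfc detail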

84 Posts

July 30th, 2013 09:00

It is on 6.0.4 currently. Does 6.0.5 specifically address DCB issues?

5 Practitioner • 274.2K Posts

July 30th, 2013 09:00

Nothing specific that I am aware of; staying current is just a general rule.

5 Practitioner • 274.2K Posts

July 30th, 2013 11:00

Something to try: make sure that all the server ports and EQL array ports have spanning-tree RSTP set.

If that doesn't help, try disabling DCB on the switch with no dcb enable.

If that calms things down, then I would suspect a configuration problem and would open a case with the Force10 support folks.
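
Roughly, that would look like this on the MXL (the interface number is just an example):

Dell(conf)# protocol spanning-tree rstp
Dell(conf-rstp)# no disable
Dell(conf-rstp)# exit
Dell(conf)# interface tengigabitethernet 0/14
Dell(conf-if-te-0/14)# spanning-tree rstp edge-port
Dell(conf-if-te-0/14)# exit
Dell(conf)# no dcb enable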

5 Practitioner • 274.2K Posts

July 31st, 2013 15:00

You set it to 1000 or 10000?

84 Posts

August 1st, 2013 04:00

SPEED 1000

I understand this port connects to eth1, which is the management NIC.

4 Operator • 9.3K Posts

August 1st, 2013 08:00

On the PS-M4110, eth1 is indeed the management port, and it connects to the internal switch fabric that the CMC and the DRACs (in the blade servers) are connected to. This switch is not configurable.
