
July 27th, 2016 04:00

6544 transmit Discards - N4032

That's not a huge number of discards given how long the port has been up, but it still points to some network congestion. I've noticed this on several trunks and was wondering if anyone thinks it's excessive.

Thanks for any comments! 

5 Practitioner • 274.2K Posts

July 29th, 2016 11:00

All of those counters look great to me too. Dell Networking switches implement receive flow control only: they never issue a flow control PAUSE frame when congested, but they do respect PAUSE frames received from other switches. So I could only see this being an issue if the switch were receiving pause frames, which it does not appear to be.

A packet capture may be the way to go, as it would give more detail on the types of packets being discarded. Use Wireshark to analyze the capture and inspect the discarded packets.
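
If you go that route, a port-mirroring (SPAN) session is the usual way to get the traffic to a capture machine. A rough sketch, assuming the suspect trunk is Te1/0/1 and a spare port Te1/0/2 is cabled to the PC running Wireshark (both interface IDs are placeholders, and the exact syntax can vary slightly between firmware releases):

console(config)# monitor session 1 source interface te1/0/1
console(config)# monitor session 1 destination interface te1/0/2
console(config)# monitor session 1 mode

The last command enables the session; disable or remove it once the capture is done so the destination port returns to normal use.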

The firmware being run is a bit outdated. If you have some downtime, you could schedule an update; I would go to 6.2.6.7 for now, as that release is pretty stable.

http://dell.to/2ajZnOb

There is also supposed to be a new 6.3 firmware release in the works; hopefully we will see it within the next couple of weeks. Having the firmware up to date can help with overall operability and may help the switch process those packets.

You previously mentioned a theory that the interface is being saturated. Do you have any additional interfaces on the switch/server? You could set up a LAG and see if that helps alleviate some of the issues being seen, along the lines of the sketch below.
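
A minimal sketch of the switch side, assuming two spare ports Te1/0/3 and Te1/0/4 (placeholders) and that the server/ESXi side is configured to match; use mode active for LACP or mode on for a static LAG, depending on what the host supports, and note the exact range syntax can differ by release:

interface range te1/0/3-4
channel-group 1 mode active
exit
interface port-channel 1
switchport mode trunk

Whatever VLAN/trunk settings are on the existing port would need to be mirrored on the port-channel.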

5 Practitioner • 274.2K Posts

July 27th, 2016 11:00

Where are you seeing this number? Is it in the output from the command # show interfaces counters? I would compare the number of discards to the total packets on the interface; that will give you the overall percentage of frames that have been discarded, as in the example below.
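
For reference, you can also scope the command to a single port and work the ratio out by hand (the interface ID is just an example, and the per-port syntax may vary slightly by release):

# show interfaces counters te1/0/1

discard rate (%) = Transmit Packets Discarded / Total Packets Transmitted x 100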

You may also want to look at the firmware on the switch. Newer firmware can resolve some dropped-packet scenarios.

http://dell.to/29ngiyW

Keep us posted.

29 Posts

July 28th, 2016 04:00

From the output you mentioned, Daniel:

Total Received Packets Not Forwarded........... 5
Total Packets Transmitted Successfully......... 1237660571
Unicast Packets Transmitted.................... 1234071929
Multicast Packets Transmitted.................. 179273
Broadcast Packets Transmitted.................. 3409369
Transmit Packets Discarded..................... 1140
Total Transmit Errors.......................... 0
Total Transmit Packets Discarded............... 1140
Single Collision Frames........................ 0
Multiple Collision Frames...................... 0
Excessive Collision Frames..................... 0

The value is different today, but there are still some losses. I'm trying to identify why an overnight report-creation job is taking far longer than it should, starting from the network up. This port is part of my data network and is directly connected to my ESXi hosts.

5 Practitioner • 274.2K Posts

July 28th, 2016 06:00

It does not seem excessive; the number of discards is minute compared to the total number of packets. Does the discard counter go up steadily throughout the day, or stay the same until the report is run?
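
As a rough sanity check: 1,140 discards against roughly 1.24 billion packets transmitted works out to about 1140 / 1237660571 ≈ 0.00009%, i.e. on the order of one frame per million.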

Looking at the packet count, I don't think this is the issue, but it is worth checking whether storm control is enabled. If it is, and an interface receives a burst of broadcast packets, that could be the reason for the discards.
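
If you want to confirm, the storm-control settings can be checked with something like the following (the interface ID is a placeholder, and the exact per-interface form and output vary by firmware):

# show storm-control
# show storm-control te1/0/1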

Can you run the following command on that same interface?
# show rmon statistics (Interface)
This will give us a little more information on the different types of packets on that interface.

Have you looked through the switch logs for any error messages? # show logging

What firmware is the switch currently running? If there is an update available, we can look through the release notes.
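
If you are not sure which release is loaded, # show version should report the running firmware image.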

29 Posts

July 28th, 2016 07:00

Thanks Daniel,

I'll gather the metrics and post my results.

29 Posts

July 29th, 2016 02:00

So... there's not really a great deal going on that raises any alarms for me. An example of the RMON output is:

Octets: 3232609560  Packets: 910881979

Broadcast: 3331140  Multicast: 1307492

CRC Align Errors: 0  Collisions: 0

Undersize Pkts: 0  Oversize Pkts: 0

Fragments: 0  Jabbers: 0

64 Octets: 207811926  65 - 127 Octets: 19526231

128 - 255 Octets: 87053953  256 - 511 Octets: 96860197

512 - 1023 Octets: 22978576  1024 - 1518 Octets: 476651096

HC Overflow Pkts: 0  HC Pkts: 910881979

HC Overflow Octets: 184  HC Octets: 793506592024

HC Overflow Pkts 64 Octets: 0  HC Pkts 64 Octets: 207811926

HC Overflow Pkts 65 - 127 Octets: 0  HC Pkts 65 - 127 Octets: 19526231

HC Overflow Pkts 128 - 255 Octets: 0  HC Pkts 128 - 255 Octets: 87053953

HC Overflow Pkts 256 - 511 Octets: 0  HC Pkts 256 - 511 Octets: 96860197

HC Overflow Pkts 512 - 1023 Octets: 0  HC Pkts 512 - 1023 Octets: 22978576

HC Overflow Pkts 1024 - 1518 Octets: 0  HC Pkts 1024 - 1518 Octets: 476651096

Looks OK to me, but I could be missing something. I noticed some input and CRC errors on the other end of this connection, but again they are very small (513 and 384). I doubt that would be causing the hours being added to the job run in question. I'm now monitoring various bits and pieces on the VM itself, so I'll see what that produces.

Software version is 6.2.0.5.

I noticed that flow control was enabled on the uplinks to the Cisco stack, and I'm having difficulty deciding whether it's having any impact. I understand what it does, but I'm not seeing any evidence of buffers filling up on the Cisco stack, or of any failures. I got 11 MB per second when transferring a 600 MB file across the data network, which is way too slow.
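
As a rough back-of-envelope check: 11 MB/s is only about 88 Mb/s, so the 600 MB copy takes roughly 55 seconds; that is under 10% of a 1 Gb/s path and under 1% of 10 Gb/s, so raw link capacity doesn't look like the limiting factor.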

29 Posts

August 1st, 2016 01:00

Thanks Daniel,

I'm not convinced there is a problem now; it might just be a poorly configured network segment. I'll do some more testing. Thanks again for your help on this!
