Kona153 (1 Rookie, 29 Posts), July 28th, 2016 04:00
From here, Daniel:
Total Received Packets Not Forwarded........... 5
Total Packets Transmitted Successfully......... 1237660571
Unicast Packets Transmitted.................... 1234071929
Multicast Packets Transmitted.................. 179273
Broadcast Packets Transmitted.................. 3409369
Transmit Packets Discarded..................... 1140
Total Transmit Errors.......................... 0
Total Transmit Packets Discarded............... 1140
Single Collision Frames........................ 0
Multiple Collision Frames...................... 0
Excessive Collision Frames..................... 0
The values are different today, but there are still some losses. I'm trying to identify the source of a report-creation job that is taking far longer than it should overnight, and I'm starting with the network. This port is part of my data network and is directly connected to my ESXi hosts.
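For what it's worth, the discards in those counters are tiny relative to the traffic volume. A quick back-of-the-envelope check (plain Python, using the figures from the counter output above) puts the transmit loss rate far below anything that should add hours to a job:

```python
# Transmit discard rate from the counters posted above
transmitted = 1_237_660_571   # Total Packets Transmitted Successfully
discarded = 1_140             # Total Transmit Packets Discarded

rate = discarded / transmitted
print(f"discard rate: {rate:.2e} ({rate * 100:.5f}%)")
```

That works out to roughly 9.2e-07, i.e. under one packet in a million, which by itself is unlikely to explain a multi-hour slowdown.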
Kona153 (1 Rookie, 29 Posts), July 28th, 2016 07:00
Thanks Daniel,
I'll gather the metrics and post my results.
Kona153 (1 Rookie, 29 Posts), July 29th, 2016 02:00
So... there's not really a great deal going on here that raises any alarms for me. An example output using RMON is:
Octets: 3232609560 Packets: 910881979
Broadcast: 3331140 Multicast: 1307492
CRC Align Errors: 0 Collisions: 0
Undersize Pkts: 0 Oversize Pkts: 0
Fragments: 0 Jabbers: 0
64 Octets: 207811926 65 - 127 Octets: 19526231
128 - 255 Octets: 87053953 256 - 511 Octets: 96860197
512 - 1023 Octets: 22978576 1024 - 1518 Octets: 476651096
HC Overflow Pkts: 0 HC Pkts: 910881979
HC Overflow Octets: 184 HC Octets: 793506592024
HC Overflow Pkts 64 Octets: 0 HC Pkts 64 Octets: 207811926
HC Overflow Pkts 65 - 127 Octets: 0 HC Pkts 65 - 127 Octets: 19526231
HC Overflow Pkts 128 - 255 Octets: 0 HC Pkts 128 - 255 Octets: 87053953
HC Overflow Pkts 256 - 511 Octets: 0 HC Pkts 256 - 511 Octets: 96860197
HC Overflow Pkts 512 - 1023 Octets: 0 HC Pkts 512 - 1023 Octets: 22978576
HC Overflow Pkts 1024 - 1518 Octets: 0 HC Pkts 1024 - 1518 Octets: 476651096
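As a sanity check on that output, the 32-bit RMON counters reconcile exactly against their 64-bit high-capacity (HC) equivalents (HC value = overflow count × 2^32 + 32-bit value), and the size buckets sum to the total packet count, so nothing has silently wrapped. Verifying with the figures above:

```python
# Reconcile 32-bit RMON counters against their 64-bit HC equivalents:
# HC value = overflow_count * 2**32 + low_32bit_counter
octets_low, octets_overflow, hc_octets = 3_232_609_560, 184, 793_506_592_024
assert octets_overflow * 2**32 + octets_low == hc_octets  # holds for this port

# The per-size-bucket counts should also sum to the total packet count.
buckets = [207_811_926, 19_526_231, 87_053_953,
           96_860_197, 22_978_576, 476_651_096]
assert sum(buckets) == 910_881_979  # matches Packets / HC Pkts above
print("counters are internally consistent")
```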
Looks OK to me, but I could be missing something. I noticed some input and CRC errors on the other end of this connection, but again they are very small (513 and 384). I doubt that would account for the hours being added to the job run in question. I'm now monitoring various bits and pieces on the VM itself, so we'll see what that produces.
Software version is 6.2.0.5.
I noticed that flow control was enabled on the uplinks to the Cisco stack, and I'm having difficulty deciding whether it's having any impact. I understand what it does, but I'm not seeing any evidence of buffers filling up on the Cisco stack, or any failures. I got 11 MB per second when transferring a 600 MB file across the data network. Way too slow.
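A rough calculation on that transfer (my numbers, not from the switch output): 11 MB/s is about 88 Mb/s on the wire, which is suspiciously close to 100 Mb/s line rate rather than gigabit. That pattern often points to a port somewhere in the path having negotiated 100/full, or to a duplex mismatch, rather than to flow control:

```python
# Rough sanity check on the 600 MB transfer mentioned above
file_mb = 600
observed_mb_s = 11

print(f"transfer time: ~{file_mb / observed_mb_s:.0f} s")   # about 55 s
print(f"observed rate: ~{observed_mb_s * 8} Mb/s")          # about 88 Mb/s
# A gigabit path tops out at 125 MB/s theoretical (~110 MB/s in practice);
# ~88 Mb/s observed is near 100 Mb/s line rate, the classic signature of a
# link that negotiated 100 Mb/s somewhere along the path.
print(f"gigabit theoretical maximum: ~{1000 // 8} MB/s")
```

Worth checking the negotiated speed/duplex on every hop between the two endpoints before digging further into flow control.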
Kona153 (1 Rookie, 29 Posts), August 1st, 2016 01:00
Thanks Daniel,
I'm not convinced there is a problem now. It might just be a poorly configured network segment. I'll do some more testing. Thanks again for your help on this!