2 Intern


196 Posts

November 29th, 2016 08:00

Hello there, MRRack.

I think I can contribute to this thread with these two articles:

Just one note about this last article: such unusual packet traffic is hard to pinpoint. You will need to sniff it with Wireshark, and most of the time you still will not find it. I recommend you start with the basics and pass only the VLANs you need on the uplink trunk. In my own experience, common Windows multicast and broadcast traffic coming in from unnecessary VLANs causes the CPU usage to increase.
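As a rough sketch of that VLAN pruning, something like the following on the uplink (the interface name and VLAN list are placeholders for your own uplink and required VLANs, and the exact `switchport` syntax varies between PowerConnect firmware families, so verify against your CLI reference):

```
console#configure
console(config)#interface tengigabitethernet 1/0/1
console(config-if)#switchport mode trunk
! Allow only the VLANs actually needed on this uplink (example list)
console(config-if)#switchport trunk allowed vlan add 10,20
console(config-if)#exit
console(config)#exit
```

The idea is simply that any VLAN not carried on the trunk can no longer flood its broadcast/multicast toward the switch CPU.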

Hope that helps.

Best regards,

14 Posts

December 7th, 2015 05:00

More updates on this: it looks to me like the switch is doing everything in software.

Latency is very high even with a single port, and the more features I add, the more CPU it takes.

If the switch behaves like this with no traffic, I cannot put it into production. I am still hoping there is a quick setting I am missing before I RMA the switch.

More info below. And yes, it is a Dell refurbished unit.

console#show switch

    Management Standby   Preconfig     Plugged-in    Switch        Code
SW  Status     Status    Model ID      Model ID      Status        Version
--- ---------- --------- ------------- ------------- ------------- -----------
1   Mgmt Sw              PCM8024       PCM8024       OK            5.1.9.3

console#show system id

Service Tag: 0000000
Chassis Service Tag: FZL4B4J
Serial Number: CN2829814E0018
Asset Tag: none

Unit Service tag       Chassis Serv tag  Serial number           Asset tag
---- ------------      ----------------  --------------          ------------
1    0000000           FZL4B4J           CN2829814E0018          none

console#show version

System Description................ Dell Ethernet Switch
System Up Time.................... 0 days, 00h:19m:34s
System Contact....................
System Name.......................
System Location...................
Burned In MAC Address............. 5C26.0AB2.42F9
System Object ID.................. 1.3.6.1.4.1.674.10895.3022
System Model ID................... PCM8024
Machine Type...................... PowerConnect M8024

unit image1      image2      current-active next-active
---- ----------- ----------- -------------- --------------
1    5.1.9.3          image1         image1

console#show running-config

!Current Configuration:
!System Description "PowerConnect M8024, 5.1.9.3, VxWorks 6.6"
!System Software Version 5.1.9.3
!Cut-through mode is configured as disabled
!System Operational Mode "Normal"
!
configure
slot 1/0 1    ! PCM8024
interface vlan 1 1
exit
username "root" password e6e66b8981c1030d5650da159e79539a privilege 15 encrypted
snmp-server engineid local 800002a2035c260ab242f9
exit

console#show process cpu

Memory Utilization Report

status      bytes
------ ----------
 free  212307208
alloc  258264104

CPU Utilization:

 PID      Name                    5 Secs     60 Secs    300 Secs
-----------------------------------------------------------------
3f33540 tExcTask                   0.20%       0.04%       0.01%
3fed930 tTffsPTask                 0.00%       0.00%       0.01%
400add0 tNet0                      0.00%       0.05%       0.08%
423b320 tIomEvtMon                 0.40%       1.61%       1.25%
4244b38 osapiTimer                 1.80%       1.20%       1.06%
44554e0 bcmL2X.0                   4.00%       4.69%       4.71%
447bda0 bcmCNTR.0                  2.80%       1.81%       1.87%
4b83260 bcmRX                      6.21%       5.55%       4.48%
51557a0 MAC Send Task              0.60%       0.66%       0.27%
515eca0 MAC Age Task               0.20%       0.20%       0.40%
54606b8 USL Worker Task            0.00%       0.04%       0.03%
54fc510 USL Control Task           0.00%       0.00%       0.01%
551caf0 bcmLINK.0                  1.40%       1.73%       1.71%
814ec30 tL7Timer0                  0.00%       0.00%       0.01%
816b030 osapiWdTask                0.20%       0.07%       0.04%
9287e90 servPortMonTask            0.40%       0.32%       0.31%
94164b8 simPts_task                0.60%       0.40%       0.32%
963b488 UtilTask                   0.20%       0.06%       0.02%
9a47f28 emWeb                      4.40%       1.63%       0.55%
a198068 hapiL3AsyncTask            0.00%       0.00%       0.02%
a2b93d0 cmgrTask                   0.00%       0.02%       0.03%
a2e7e80 trafficStormControl        0.20%       0.19%       0.19%
a616930 DHCP snoop                 0.20%       0.09%       0.06%
a6b2240 Dynamic ARP Inspect        0.60%       0.67%       0.57%
b003770 dot1s_timer_task           5.81%       6.33%       6.20%
b340770 Dot1s transport tas        0.00%       0.01%       0.00%
b7f3e40 radius_task                0.00%       0.11%       0.37%
b85d340 tacacs_rx_task             0.00%       0.03%       0.05%
b86fed0 unitMgrTask                0.80%       0.88%       0.80%
bab8b60 snoopTask                  0.20%       0.08%       0.06%
c233140 dhcpsPingTask              0.80%       0.60%       0.52%
c571f88 sFlowTask                  0.00%       0.11%       0.08%
c617518 spmTask                    0.00%       0.14%       0.17%
cfa53c0 tRtrDiscProcessingT        0.00%       0.02%       0.05%
da25da0 pktRcvrTask                0.00%       0.01%       0.01%
12274168 iscsiTask                  0.60%       0.76%       0.72%
12364290 lldpTask                   0.00%       0.06%       0.06%
12498538 DHCPv4 Client Task         0.20%       0.06%       0.05%
124a34b0 isdpTask                   0.00%       0.00%       0.01%
1266cad8 RMONTask                   0.80%       0.79%       0.54%
1267e710 boxs Req                   0.40%       0.20%       0.09%
-----------------------------------------------------------------
Total CPU Utilization             34.06%      31.40%      27.97%

console#

14 Posts

March 9th, 2016 15:00

Hi, I need to bump this up; I can still see this issue on some switches, including the M8024-K ones.

14 Posts

November 29th, 2016 08:00

Thank you, Antonio.
I can confirm that performance is not being affected, so it is just a cosmetic issue.
We are not using IPv6 for MLD yet; it is disabled via SDM.
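For anyone following along, that SDM change can be sketched roughly as below (the template name `ipv4-routing default` is only an example of an IPv4-only template; the templates actually available differ by model and firmware, so check `show sdm prefer` on your unit first):

```
console#show sdm prefer
console#configure
! Select an IPv4-only SDM template so no resources are reserved for IPv6
console(config)#sdm prefer ipv4-routing default
console(config)#exit
```

On most FASTPATH-based PowerConnect firmware an SDM template change only takes effect after the switch is reloaded.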

2 Intern


196 Posts

November 30th, 2016 03:00

Ok, MrRack. Glad to know this is not a real issue.

Thank you for your feedback.
