
January 17th, 2014 14:00

ipMapForwardingTask very high CPU

Hello all,

We have a stack of seven Dell PowerConnect 6224 units running firmware version 2.2.0.3.

For a couple of days we have been seeing very high CPU utilization on our switch:

Task                    Utilization
----------------------- -----------
osapiTimer                    1.20%
bcmL2X.0                      0.65%
bcmCNTR.0                     0.15%
bcmTX                         0.05%
bcmLINK.0                     0.35%
bcmRX                         2.10%
bcmATP-TX                     0.65%
bcmATP-RX                     0.20%
MAC Send Task                 0.15%
MAC Age Task                  0.70%
dtlTask                       0.15%
hapiRxTask                    0.15%
SNMPTask                     35.85%
radius_timer_task             0.05%
unitMgrTask                   1.05%
dot3ad_timer_task             0.20%
dot3ad_lac_task               0.05%
spmTask                       0.15%
ipMapForwardingTask          55.05%
OSPF Protocol                 0.10%
BXS Req                       0.05%
OSPF Receive                  0.15%
Kernel/Interrupt/Idle         0.80%

Total                        100.00%

The ipMapForwardingTask process is consuming most of the CPU and we don't understand why.

This platform has been in production for several years without any issue.

Do you know of any tools to identify which packets are driving this process?

Please reply if you have any idea about our issue.

Thank you very much

Regards,

T

5 Practitioner • 274.2K Posts

January 20th, 2014 06:00

The firmware is very out of date. Updating to the latest firmware can help with operability and is the first thing I would do.

http://www.dell.com/support/drivers/us/en/19/DriverDetails/Product/powerconnect-6224?driverId=77XG3&osCode=NAA&fileId=3288111910&languageCode=en&categoryId=NI

 

Make sure to follow the instructions in the download on how to update the firmware.

 

Looking at some other posts regarding this issue, it seems that spanning tree may be the cause of this.

http://craigwmiller.wordpress.com/2010/04/18/adventures-with-spanning-tree/

http://en.community.dell.com/support-forums/servers/f/866/p/19242933/19420582.aspx

 

Make sure all your networking devices are using the same spanning tree protocol.

Monitor the network to see if any additional networking devices are being plugged in.

If the 6224 is intended to be the root bridge, manually set its bridge priority so it is elected root, for example:
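As a rough sketch of what that could look like on these switches (assuming RSTP is the desired mode; the priority must be a multiple of 4096 and the lowest value wins the root election, so 0 here is just an example):

console(config)#spanning-tree mode rstp
console(config)#spanning-tree priority 0
console(config)#exit
console#show spanning-tree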

 

Feel free to post up the running config, we can help look through it.

21 Posts

January 22nd, 2014 05:00

Anything new about this?


I have the same problem with just one PowerConnect 6248 switch (no stacking). Sometimes the ipMapForwardingTask is between 5-10%, and sometimes it increases up to 50-60%.

Pinging VLAN interface addresses results in very high latency (100-300 ms) or even timeouts!

We put every edge port of all our Dell switches into portfast mode (no other switches in our network, just some small 8-port Netgear switches).

The firmware is the latest version (3.3.8.2)!


We configured RSTP on all our switches. At least there are no topology changes on our L3 switch that would indicate a spanning-tree-related problem...

I tried different things mentioned in the topics above, for example setting the ARP age time to 300 seconds and disabling dynamic renew mode.

Here is another example output of "show processes cpu" when a VLAN interface has high latency / timeouts (1) and when everything is working fine (2):

(1) CPU Utilization:

 PID     Name                     5 Sec    1 Min    5 Min
---------------------------------------------------------
 3374cf0 tNetTask                 0.31%    0.29%    0.09%
 35611d0 ipnetd                   0.00%    0.08%    0.03%
 3573770 tXbdService              0.00%    0.06%    0.15%
 358e7b0 osapiTimer               1.74%    1.36%    1.29%
 370e0f0 bcmL2X.0                 0.15%    0.29%    0.32%
 373d1f0 bcmCNTR.0                0.31%    0.36%    0.33%
 3756a80 bcmTX                    0.95%    1.06%    1.31%
 3c9b050 bcmL2X.1                 0.47%    0.35%    0.42%
 3cb1e70 bcmCNTR.1                0.31%    0.22%    0.34%
 3dd9730 bcmRX                    4.44%    4.55%    5.32%
 4011ee0 MAC Send Task            0.79%    0.70%    0.75%
 401b3e0 MAC Age Task             0.00%    0.28%    0.23%
 4b23c50 USL Worker Task          0.00%    0.00%    0.02%
 4bdfae0 bcmLINK.0                0.00%    0.39%    0.58%
 4be8fe0 bcmLINK.1                0.63%    0.53%    0.53%
 52ea220 tL7Timer0                0.00%    0.06%    0.00%
 530fb00 osapiMonTask             0.00%    0.29%    0.15%
 5ffe660 simPts_task              0.00%    0.06%    0.17%
 64329e0 dtlTask                  3.17%    3.00%    3.34%
 6464160 tEmWeb                   0.15%    0.63%    0.17%
 64b63c0 hapiBpduTxTask           0.15%    0.10%    0.01%
 64c46e0 hapiRxTask               2.53%    2.55%    2.88%
 6ae50c0 DHCP snoop               0.00%    0.06%    0.01%
 6b7b5d0 Dynamic ARP Inspection   0.00%    0.02%    0.00%
 77bc630 dot1s_timer_task         0.15%    0.46%    0.45%
 9626320 sFlowTask                0.00%    0.06%    0.00%
 9756e00 ipMapForwardingTask     52.69%   47.33%   50.97%
 9896140 ARP Timer                0.00%    0.02%    0.00%
 9c678b0 tRtrDiscProcessingTask   0.00%    0.02%    0.00%
 cbf2610 ip6MapLocalDataTask      0.00%    0.06%    0.00%
 cd46800 lldpTask                 0.31%    0.33%    0.30%
 d59fe40 tCptvPrtl                0.00%    0.02%    0.00%
 da279f0 isdpTask                 0.00%    0.14%    0.07%
 e22b120 RMONTask                 0.15%    0.09%    0.15%
 e2363b0 boxs Req                 0.15%    0.07%    0.15%
---------------------------------------------------------
 Total CPU Utilization           69.55%   65.89%   70.53%

(2) CPU Utilization:

 PID     Name                     5 Sec    1 Min    5 Min
---------------------------------------------------------
 335dec0 tTffsPTask               0.00%    0.02%    0.00%
 3374cf0 tNetTask                 0.31%    0.22%    0.07%
 35611d0 ipnetd                   0.00%    0.02%    0.00%
 358e7b0 osapiTimer               0.79%    1.06%    1.24%
 370e0f0 bcmL2X.0                 0.47%    0.47%    0.20%
 373d1f0 bcmCNTR.0                0.15%    0.37%    0.33%
 3756a80 bcmTX                    0.31%    0.31%    0.27%
 3c9b050 bcmL2X.1                 0.00%    0.22%    0.30%
 3cb1e70 bcmCNTR.1                0.31%    0.40%    0.20%
 3dd9730 bcmRX                    1.58%    1.97%    2.20%
 4011ee0 MAC Send Task            0.79%    0.69%    0.77%
 401b3e0 MAC Age Task             0.63%    0.28%    0.22%
 4b23c50 USL Worker Task          0.00%    0.00%    0.01%
 4bdfae0 bcmLINK.0                0.47%    0.41%    0.34%
 4be8fe0 bcmLINK.1                0.95%    0.55%    0.47%
 52ea220 tL7Timer0                0.00%    0.04%    0.01%
 530fb00 osapiMonTask             0.00%    0.45%    0.16%
 5ffe660 simPts_task              0.00%    0.11%    0.02%
 64329e0 dtlTask                  0.79%    1.11%    1.45%
 64b63c0 hapiBpduTxTask           0.00%    0.10%    0.01%
 64c46e0 hapiRxTask               0.79%    0.85%    1.05%
 6ae50c0 DHCP snoop               0.00%    0.06%    0.00%
 6b7b5d0 Dynamic ARP Inspection   0.00%    0.02%    0.01%
 77bc630 dot1s_timer_task         0.79%    0.50%    0.48%
 8580e80 radius_rx_task           0.00%    0.02%    0.00%
 9756e00 ipMapForwardingTask      4.76%   14.30%   17.08%
 9896140 ARP Timer                0.00%    0.02%    0.00%
 9c55480 IpHelperTask             0.00%    0.00%    0.01%
 cbf2610 ip6MapLocalDataTask      0.00%    0.02%    0.01%
 cd46800 lldpTask                 0.47%    0.29%    0.37%
 da279f0 isdpTask                 0.00%    0.06%    0.05%
 e22b120 RMONTask                 0.15%    0.07%    0.15%
 e2363b0 boxs Req                 0.15%    0.07%    0.15%
---------------------------------------------------------
 Total CPU Utilization           14.66%   25.08%   27.63%


21 Posts

January 22nd, 2014 06:00

Running Configuration:

!Current Configuration:
!System Description "PowerConnect 6248, 3.3.8.2, VxWorks 6.5"
!System Software Version 3.3.8.2
!Cut-through mode is configured as disabled
!


configure
vlan database
vlan 20,40,50,60,70,80,90,1000,2007-2009,3007-3008
vlan routing 1 1
vlan routing 40 2
vlan routing 90 3
vlan routing 20 4
vlan routing 80 7
vlan routing 2007 8
vlan routing 2009 9
exit

sntp unicast client enable
sntp server 192.53.X.X
sntp server 192.53.X.X
clock timezone 2 minutes 0
stack
member 1 2
exit


ip address 10.1.X.X 255.255.0.0
ip default-gateway 10.1.X.X
ip address vlan 1000
ip name-server 10.2.X.X
ip name-server 10.2.X.X
logging file info
ip routing
ip route 0.0.0.0 0.0.0.0 10.2.X.X
ip route 172.16.X.0 255.255.255.0 10.20.X.X
ip route 172.16.X.0 255.255.255.0 10.20.X.X
ip route 172.16.X.0 255.255.255.0 10.20.X.X
arp timeout 300
arp 10.7.X.X 02BF.0A07.030A
ip helper-address 10.2.X.X dhcp
interface vlan 1
routing
ip address 10.2.X.X 255.255.0.0
bandwidth 10000
ip mtu 1500
exit


interface vlan 20
routing
ip address 10.20.X.X 255.255.255.0
bandwidth 10000
ip mtu 1500
exit


interface vlan 40
routing
ip address 10.40.X.X 255.255.255.0
exit
interface vlan 50
exit
interface vlan 60
exit
interface vlan 70
exit
interface vlan 80
name "StagingArea"
routing
ip address 10.80.X.X 255.255.255.0
bandwidth 10000
ip mtu 1500
exit
interface vlan 1000
exit
interface vlan 2007
name "Exchange CCR"
routing
ip address 10.7.X.X 255.255.255.0
bandwidth 10000
ip mtu 1500
exit
interface vlan 2008
name "Exchange Heartbeats"
exit
interface vlan 2009
name "Exchange NLB"
routing
ip address 10.7.X.X 255.255.255.0
exit
interface vlan 3007
name "iSCSI-Traffic"
exit
interface vlan 3008
name "iSCSI-Heartbeat"
exit
username "Administrator" password XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX level 15 encrypted
line telnet
exec-timeout 600
exit


spanning-tree bpdu flooding
spanning-tree priority 4096
!
interface ethernet 1/g1
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
switchport access vlan 1000
exit
!
interface ethernet 1/g2
no negotiation
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
switchport access vlan 20
exit
!
interface ethernet 1/g3
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
switchport access vlan 20
exit
!
interface ethernet 1/g4
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g5
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g6
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g7
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g8
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g9
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g10
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g11
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g12
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g13
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g14
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g15
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g16
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g17
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g18
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g19
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g20
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g21
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g22
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g23
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g24
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g25
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g26
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g27
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g28
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g29
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g30
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g31
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g32
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g33
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g34
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g35
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g36
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g37
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g38
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g39
switchport mode general
switchport general allowed vlan add 20,40,50,60,70,80,1000,3007 tagged
exit
!
interface ethernet 1/g40
switchport mode general
switchport general allowed vlan add 20,40,50,60,70,1000 tagged
exit
!
interface ethernet 1/g41
switchport mode general
switchport general allowed vlan add 20,60,70 tagged
exit
!
interface ethernet 1/g42
switchport mode general
switchport general allowed vlan add 20,50,60,70,1000 tagged
exit
!
interface ethernet 1/g43
switchport mode general
switchport general allowed vlan add 20,60,70 tagged
exit
!
interface ethernet 1/g44
switchport mode general
switchport general allowed vlan add 20,60,70 tagged
exit
!
interface ethernet 1/g45
switchport mode general
switchport general allowed vlan add 20,40,50,60,70,80,1000,3007 tagged
exit
!
interface ethernet 1/g46
switchport mode general
switchport general allowed vlan add 70,2007-2009 tagged
exit
!
interface ethernet 1/g47
switchport mode general
switchport general allowed vlan add 20,1000 tagged
exit
!
interface ethernet 1/g48
switchport mode general
switchport general allowed vlan add 20,60,2007-2009 tagged
exit
!
interface ethernet 1/xg1
switchport mode trunk
switchport trunk allowed vlan add 1,20,50,60,70,1000,2007-2009,3007-3008
exit
!
interface ethernet 1/xg2
switchport mode trunk
switchport trunk allowed vlan add 1,20,40,50,60,70,80,90,1000,2007-2009
switchport trunk allowed vlan add 3007-3008
exit
!
interface ethernet 1/xg3
spanning-tree portfast
exit
!
interface ethernet 1/xg4
spanning-tree portfast
exit
!
snmp-server community xxx ro ipaddress 10.1.X.X
snmp-server community xxx ro ipaddress 10.1.X.X
snmp-server community xxx rw ipaddress 10.1.X.X
exit

5 Practitioner • 274.2K Posts

January 22nd, 2014 06:00

KNKA, the majority of your ports have spanning tree disabled on them. Are these ports not being used?

#spanning-tree disable

 

Have you monitored any traffic to see if there is a source for the increase? You can run the following command on the different interfaces.

console#show statistics ethernet 1/g1

Compare the ports with each other to see if one has more of a specific traffic type than the others.

 

Then you can monitor the traffic using these commands.

To monitor specific interface

 

console(config)#monitor session 1 source interface 1/g8

console(config)#monitor session 1 destination interface 1/g10

console(config)#monitor session 1 mode

 

To monitor CPU traffic

console(config)#monitor session 1 source cpu

console(config)#monitor session 1 destination interface 1/g10

console(config)#monitor session 1 mode

Run something like Wireshark on the destination port to view the traffic, both during normal operation and during increased CPU usage.

 

This should help narrow down if a specific device is suddenly flooding the network.

21 Posts

January 22nd, 2014 07:00

Ports g1-g38 had spanning tree disabled because they are just edge ports...

All the others are uplinks to other switches... I thought it would be no problem to deactivate STP on those specific edge ports? Maybe I was wrong, so please tell me =)

We have that 6248 and all its interfaces in Nagios monitoring, so we can see the traffic / bandwidth of all interfaces over a 10-minute period.


Another curious thing:

If I ping the VLAN 1 interface (the gateway IP of VLAN 1) from a workstation in VLAN 40, we very often get values like these:

Reply from 10.2.1.254: bytes=32 time=3ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=4ms TTL=64
Reply from 10.2.1.254: bytes=32 time=2ms TTL=64
Request timed out.
Reply from 10.2.1.254: bytes=32 time=163ms TTL=64
Reply from 10.2.1.254: bytes=32 time=33ms TTL=64
Reply from 10.2.1.254: bytes=32 time=2ms TTL=64
Reply from 10.2.1.254: bytes=32 time=34ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64

but from the same workstation (in VLAN 40) to another workstation (in VLAN 1), we continuously get good response times at the same time:

Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127

21 Posts

January 22nd, 2014 07:00

Just another quick note about the ipMapForwardingTask.

Almost every time I check the "show process cpu" utilization, the ipMapForwardingTask is above 20% and the total CPU utilization is about 30-35%.

It's very hard to find a moment where the ipMapForwardingTask is below 10%.


What exactly does this ipMapForwardingTask do? Is it the process that forwards IP packets, broadcasts, and multicasts?

Is there any definition for this task?

Thank you very much for your investigation!

5 Practitioner • 274.2K Posts

January 22nd, 2014 08:00

Sorry I don’t have a precise definition of ipMapForwardingTask.

 

Even if you have physical access to the switch locked down right now, who knows how things will be in the future. Having STP enabled on all ports is recommended: loops are not generally created on purpose, and having STP enabled will help prevent an accident from taking down the network.

 

Make sure RSTP is enabled globally

#spanning-tree mode rstp

 

On the interfaces with spanning tree disabled, re-enable it.

#no spanning-tree disable

 

On edge ports, leave portfast enabled, and verify the configuration with the command:

#show spanning-tree
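Putting those pieces together, a rough sketch for the edge ports might look like the following (assuming an interface range command is available on this firmware; the exact range syntax and prompts may differ, so adjust to your own port list):

console(config)#interface range ethernet 1/g1-1/g38
console(config-if)#no spanning-tree disable
console(config-if)#spanning-tree portfast
console(config-if)#exit
console(config)#exit
console#show spanning-tree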

 

Other switches on the network should also use the same spanning tree mode, whether RSTP or MSTP. Do you have a designated root switch on the network? If not, that may be something to look into.

 

I also noticed that this switch has routing enabled. Is this switch your Layer 3 switch for the network? Or do you have multiple switches on the network with routing enabled?

 

Doing some searching, I ran across a post where others had similar issues, and it was suggested to run this command globally. It disables ICMP redirects on all interfaces. I think it would be worth trying in this situation.

#no ip redirects

Then monitor and see if the CPU usage goes down any.
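A rough sketch of applying it, both globally and, if you want, on a routed VLAN interface (VLAN 1 is only an example, prompts are approximate, and I am assuming the per-interface form of the command is accepted on this firmware):

console(config)#no ip redirects
console(config)#interface vlan 1
console(config-if-vlan1)#no ip redirects
console(config-if-vlan1)#exit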

 

Thanks

21 Posts

January 23rd, 2014 00:00

Good Morning,


I re-enabled STP on all edge ports of that switch.

RSTP was already configured (on all PowerConnects in the network), and portfast on all edge ports!

This is our routing switch at the moment, and the only one! No other switch is routing.

I deactivated the ICMP redirects as you mentioned in the last section of your post!

Immediately the CPU utilization for the ipMapForwardingTask dropped below 2%! It seems that disabling this option has solved the issue. There are no request timeouts anymore, and even the ping times to the VLAN interface IP addresses are many times better now!

A good article dealing with the problems I mentioned at the top of this thread can be found here:

http://hasanmansur.com/2012/10/15/powerconnect-latency-packet-loss-troubleshooting/

Even the ICMP redirect issue is described very well there!

I hope other administrators with these problems will read this.

Thank you very very much for your fast and competent help!

5 Practitioner • 274.2K Posts

January 23rd, 2014 05:00

Thierry, can you run the command:

#no ip redirects

And see if it resolves the issue for you as well?

Thanks

5 Practitioner • 274.2K Posts

January 23rd, 2014 05:00

Excellent to hear! Thanks for keeping us up to date.

21 Posts

January 23rd, 2014 05:00

Just a quick update...


The ipMapForwardingTask was fine for about 3-4 hours... always between 0.6-2.5%, with an overall CPU utilization of about 7-10% on our L3 PC 6248 switch!

Now we have had about 30 minutes where the ipMapForwardingTask increased again, up to 20-25%, with an overall utilization of about 40-45%.

#show logging

has some strange entries from this time span:


<190> JAN 23 14:14:58 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491049 %% nimCheckIfNumber: internal interface number 819 out of range
<190> JAN 23 14:14:58 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491060 %% nimCheckIfNumber: internal interface number 820 out of range
<190> JAN 23 14:14:58 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491061 %% nimCheckIfNumber: internal interface number 821 out of range
<190> JAN 23 14:14:58 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491062 %% nimCheckIfNumber: internal interface number 822 out of range


<190> JAN 23 14:14:58 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491063 %% nimCheckIfNumber: internal interface number 823 out of range
<190> JAN 23 14:14:58 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491064 %% nimCheckIfNumber: internal interface number 824 out of range
<190> JAN 23 14:15:32 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491548 %% nimCheckIfNumber: internal interface number 819 out of range
<190> JAN 23 14:15:32 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491559 %% nimCheckIfNumber: internal interface number 820 out of range
<190> JAN 23 14:15:32 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491560 %% nimCheckIfNumber: internal interface number 821 out of range
<190> JAN 23 14:15:32 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491561 %% nimCheckIfNumber: internal interface number 822 out of range
<190> JAN 23 14:15:32 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491562 %% nimCheckIfNumber: internal interface number 823 out of range
<190> JAN 23 14:15:32 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491563 %% nimCheckIfNumber: internal interface number 824 out of range


<190> JAN 23 14:15:34 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491574 %% nimCheckIfNumber: internal interface number 819 out of range
<190> JAN 23 14:15:34 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491575 %% nimCheckIfNumber: internal interface number 820 out of range
<190> JAN 23 14:15:34 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491576 %% nimCheckIfNumber: internal interface number 821 out of range
<190> JAN 23 14:15:34 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491577 %% nimCheckIfNumber: internal interface number 822 out of range
<190> JAN 23 14:15:34 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491578 %% nimCheckIfNumber: internal interface number 823 out of range
<190> JAN 23 14:15:34 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491579 %% nimCheckIfNumber: internal interface number 824 out of range
<190> JAN 23 14:17:26 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 493288 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:17:26 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 493289 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6


<190> JAN 23 14:17:26 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 493290 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:17:26 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 493291 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:17:26 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 493292 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:17:26 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 493293 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<187> JAN 23 14:17:26 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493294 %% Failure getting the forwarding database ID for fid 2
<187> JAN 23 14:17:26 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493304 %% Failure getting the forwarding database ID for fid 3
<187> JAN 23 14:17:26 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493305 %% Failure getting the forwarding database ID for fid 4
<187> JAN 23 14:17:26 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493316 %% Failure getting the forwarding database ID for fid 5


<187> JAN 23 14:17:26 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493317 %% Failure getting the forwarding database ID for fid 6
<187> JAN 23 14:17:26 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493318 %% Failure getting the forwarding database ID for fid 7
<187> JAN 23 14:17:26 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493319 %% Failure getting the forwarding database ID for fid 8
<187> JAN 23 14:17:28 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493330 %% Failure getting the forwarding database ID for fid 8
<187> JAN 23 14:17:28 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493331 %% Failure getting the forwarding database ID for fid 9
<187> JAN 23 14:17:28 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493332 %% Failure getting the forwarding database ID for fid 10
<187> JAN 23 14:17:28 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493333 %% Failure getting the forwarding database ID for fid 11
<187> JAN 23 14:17:28 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493334 %% Failure getting the forwarding database ID for fid 12


<187> JAN 23 14:17:28 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493335 %% Failure getting the forwarding database ID for fid 13
<190> JAN 23 14:23:20 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 498440 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:23:20 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 498441 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:23:20 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 498442 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:23:20 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 498443 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:23:20 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 498444 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:23:20 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 498445 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<187> JAN 23 14:23:20 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498455 %% Failure getting the forwarding database ID for fid 2


<187> JAN 23 14:23:20 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498456 %% Failure getting the forwarding database ID for fid 3
<187> JAN 23 14:23:20 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498457 %% Failure getting the forwarding database ID for fid 4
<187> JAN 23 14:23:20 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498468 %% Failure getting the forwarding database ID for fid 5
<187> JAN 23 14:23:20 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498469 %% Failure getting the forwarding database ID for fid 6
<187> JAN 23 14:23:20 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498470 %% Failure getting the forwarding database ID for fid 7
<187> JAN 23 14:23:20 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498471 %% Failure getting the forwarding database ID for fid 8
<187> JAN 23 14:23:22 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498482 %% Failure getting the forwarding database ID for fid 8
<187> JAN 23 14:23:22 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498483 %% Failure getting the forwarding database ID for fid 9


<187> JAN 23 14:23:22 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498484 %% Failure getting the forwarding database ID for fid 10
<187> JAN 23 14:23:22 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498485 %% Failure getting the forwarding database ID for fid 11
<187> JAN 23 14:23:22 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498486 %% Failure getting the forwarding database ID for fid 12
<187> JAN 23 14:23:22 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498487 %% Failure getting the forwarding database ID for fid 13

Any idea what happened?

I just ran the following command:

# no ip redirect

globally and on all routed VLAN interfaces, and

# no ip unreachables

on every routed VLAN interface.

Thanks in advance

5 Practitioner • 274.2K Posts

January 23rd, 2014 06:00

What are we trying to accomplish by running the following command?

# no ip unreachables

Let's revert this back to the default setting of enabled:

ip unreachables

Continue to monitor to see if you still get these messages.

21 Posts

January 23rd, 2014 06:00

Just reverted it! The last 60 minutes have been fine, without problems.

I just found out that the 1/g39 uplink port to a 5448 shows many "Frames Too Long" packets... many hundreds to thousands per second...

Maybe something is wrong here too... Could that cause a lot of CPU load / problems?

5 Practitioner • 274.2K Posts

January 23rd, 2014 07:00

Just to be sure, let's run #ip redirects to confirm that implementing #no ip redirects did not play some role in this.

If the messages continue, then go back to #no ip redirects. You may need to set up port monitoring on that port to see what traffic is on it. Is there any iSCSI traffic coming in from the 5500?

5 Practitioner • 274.2K Posts

January 23rd, 2014 07:00

Do you have jumbo frames enabled? If not, I would enable them and see if those messages go away.

#mtu 9216
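As a rough sketch, assuming the MTU is set per interface on this platform (1/g39 is used here only because it is the uplink showing the oversized frames; both ends of the link and any devices sending jumbo frames would need matching settings):

console(config)#interface ethernet 1/g39
console(config-if-1/g39)#mtu 9216
console(config-if-1/g39)#exit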
