21 Posts

January 22nd, 2014 05:00

Anything new about this?


I have the same problems with just a single PowerConnect 6248 switch (no stacking). Sometimes the ipMapForwardingTask sits between 5-10%, and sometimes it climbs to 50-60%.

Pinging the VLAN interface addresses results in very high latency (100-300 ms) or even timeouts!

We put every edge port of all our Dell switches into portfast mode (there are no other switches in our network, just some small 8-port Netgear switches).

The firmware is up to date (3.3.8.2)!


We configured RSTP on all our switches. At least there are no topology changes on our L3 switch that would point to a spanning-tree related problem...

I tried different things mentioned in this topic, for example setting the ARP age time to 300 seconds and disabling the dynamic ARP renew mode.
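For reference, the ARP tweaks I tried looked roughly like this (command names from memory, so the exact syntax on firmware 3.3.8.2 may differ slightly):

configure
arp timeout 300
no arp dynamicrenew
exit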

Here is another example output of "show processes cpu", first while a VLAN interface shows high latency / timeouts (1), and then while everything is working fine (2):

(1) CPU Utilization:

 PID     Name                     5 Sec    1 Min    5 Min
---------------------------------------------------------
 3374cf0 tNetTask                 0.31%    0.29%    0.09%
 35611d0 ipnetd                   0.00%    0.08%    0.03%
 3573770 tXbdService              0.00%    0.06%    0.15%
 358e7b0 osapiTimer               1.74%    1.36%    1.29%
 370e0f0 bcmL2X.0                 0.15%    0.29%    0.32%
 373d1f0 bcmCNTR.0                0.31%    0.36%    0.33%
 3756a80 bcmTX                    0.95%    1.06%    1.31%
 3c9b050 bcmL2X.1                 0.47%    0.35%    0.42%
 3cb1e70 bcmCNTR.1                0.31%    0.22%    0.34%
 3dd9730 bcmRX                    4.44%    4.55%    5.32%
 4011ee0 MAC Send Task            0.79%    0.70%    0.75%
 401b3e0 MAC Age Task             0.00%    0.28%    0.23%
 4b23c50 USL Worker Task          0.00%    0.00%    0.02%
 4bdfae0 bcmLINK.0                0.00%    0.39%    0.58%
 4be8fe0 bcmLINK.1                0.63%    0.53%    0.53%
 52ea220 tL7Timer0                0.00%    0.06%    0.00%
 530fb00 osapiMonTask             0.00%    0.29%    0.15%
 5ffe660 simPts_task              0.00%    0.06%    0.17%
 64329e0 dtlTask                  3.17%    3.00%    3.34%
 6464160 tEmWeb                   0.15%    0.63%    0.17%
 64b63c0 hapiBpduTxTask           0.15%    0.10%    0.01%
 64c46e0 hapiRxTask               2.53%    2.55%    2.88%
 6ae50c0 DHCP snoop               0.00%    0.06%    0.01%
 6b7b5d0 Dynamic ARP Inspection   0.00%    0.02%    0.00%
 77bc630 dot1s_timer_task         0.15%    0.46%    0.45%
 9626320 sFlowTask                0.00%    0.06%    0.00%
 9756e00 ipMapForwardingTask     52.69%   47.33%   50.97%
 9896140 ARP Timer                0.00%    0.02%    0.00%
 9c678b0 tRtrDiscProcessingTask   0.00%    0.02%    0.00%
 cbf2610 ip6MapLocalDataTask      0.00%    0.06%    0.00%
 cd46800 lldpTask                 0.31%    0.33%    0.30%
 d59fe40 tCptvPrtl                0.00%    0.02%    0.00%
 da279f0 isdpTask                 0.00%    0.14%    0.07%
 e22b120 RMONTask                 0.15%    0.09%    0.15%
 e2363b0 boxs Req                 0.15%    0.07%    0.15%
---------------------------------------------------------
 Total CPU Utilization           69.55%   65.89%   70.53%

(2) CPU Utilization:

 PID     Name                     5 Sec    1 Min    5 Min
---------------------------------------------------------
 335dec0 tTffsPTask               0.00%    0.02%    0.00%
 3374cf0 tNetTask                 0.31%    0.22%    0.07%
 35611d0 ipnetd                   0.00%    0.02%    0.00%
 358e7b0 osapiTimer               0.79%    1.06%    1.24%
 370e0f0 bcmL2X.0                 0.47%    0.47%    0.20%
 373d1f0 bcmCNTR.0                0.15%    0.37%    0.33%
 3756a80 bcmTX                    0.31%    0.31%    0.27%
 3c9b050 bcmL2X.1                 0.00%    0.22%    0.30%
 3cb1e70 bcmCNTR.1                0.31%    0.40%    0.20%
 3dd9730 bcmRX                    1.58%    1.97%    2.20%
 4011ee0 MAC Send Task            0.79%    0.69%    0.77%
 401b3e0 MAC Age Task             0.63%    0.28%    0.22%
 4b23c50 USL Worker Task          0.00%    0.00%    0.01%
 4bdfae0 bcmLINK.0                0.47%    0.41%    0.34%
 4be8fe0 bcmLINK.1                0.95%    0.55%    0.47%
 52ea220 tL7Timer0                0.00%    0.04%    0.01%
 530fb00 osapiMonTask             0.00%    0.45%    0.16%
 5ffe660 simPts_task              0.00%    0.11%    0.02%
 64329e0 dtlTask                  0.79%    1.11%    1.45%
 64b63c0 hapiBpduTxTask           0.00%    0.10%    0.01%
 64c46e0 hapiRxTask               0.79%    0.85%    1.05%
 6ae50c0 DHCP snoop               0.00%    0.06%    0.00%
 6b7b5d0 Dynamic ARP Inspection   0.00%    0.02%    0.01%
 77bc630 dot1s_timer_task         0.79%    0.50%    0.48%
 8580e80 radius_rx_task           0.00%    0.02%    0.00%
 9756e00 ipMapForwardingTask      4.76%   14.30%   17.08%
 9896140 ARP Timer                0.00%    0.02%    0.00%
 9c55480 IpHelperTask             0.00%    0.00%    0.01%
 cbf2610 ip6MapLocalDataTask      0.00%    0.02%    0.01%
 cd46800 lldpTask                 0.47%    0.29%    0.37%
 da279f0 isdpTask                 0.00%    0.06%    0.05%
 e22b120 RMONTask                 0.15%    0.07%    0.15%
 e2363b0 boxs Req                 0.15%    0.07%    0.15%
---------------------------------------------------------
 Total CPU Utilization           14.66%   25.08%   27.63%


21 Posts

January 22nd, 2014 06:00

Running Configuration:

!Current Configuration:
!System Description "PowerConnect 6248, 3.3.8.2, VxWorks 6.5"
!System Software Version 3.3.8.2
!Cut-through mode is configured as disabled
!


configure
vlan database
vlan 20,40,50,60,70,80,90,1000,2007-2009,3007-3008
vlan routing 1 1
vlan routing 40 2
vlan routing 90 3
vlan routing 20 4
vlan routing 80 7
vlan routing 2007 8
vlan routing 2009 9
exit

sntp unicast client enable
sntp server 192.53.X.X
sntp server 192.53.X.X
clock timezone 2 minutes 0
stack
member 1 2
exit


ip address 10.1.X.X 255.255.0.0
ip default-gateway 10.1.X.X
ip address vlan 1000
ip name-server 10.2.X.X
ip name-server 10.2.X.X
logging file info
ip routing
ip route 0.0.0.0 0.0.0.0 10.2.X.X
ip route 172.16.X.0 255.255.255.0 10.20.X.X
ip route 172.16.X.0 255.255.255.0 10.20.X.X
ip route 172.16.X.0 255.255.255.0 10.20.X.X
arp timeout 300
arp 10.7.X.X 02BF.0A07.030A
ip helper-address 10.2.X.X dhcp
interface vlan 1
routing
ip address 10.2.X.X 255.255.0.0
bandwidth 10000
ip mtu 1500
exit


interface vlan 20
routing
ip address 10.20.X.X 255.255.255.0
bandwidth 10000
ip mtu 1500
exit


interface vlan 40
routing
ip address 10.40.X.X 255.255.255.0
exit
interface vlan 50
exit
interface vlan 60
exit
interface vlan 70
exit
interface vlan 80
name "StagingArea"
routing
ip address 10.80.X.X 255.255.255.0
bandwidth 10000
ip mtu 1500
exit
interface vlan 1000
exit
interface vlan 2007
name "Exchange CCR"
routing
ip address 10.7.X.X 255.255.255.0
bandwidth 10000
ip mtu 1500
exit
interface vlan 2008
name "Exchange Heartbeats"
exit
interface vlan 2009
name "Exchange NLB"
routing
ip address 10.7.X.X 255.255.255.0
exit
interface vlan 3007
name "iSCSI-Traffic"
exit
interface vlan 3008
name "iSCSI-Heartbeat"
exit
username "Administrator" password XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX level 15 encrypted
line telnet
exec-timeout 600
exit


spanning-tree bpdu flooding
spanning-tree priority 4096
!
interface ethernet 1/g1
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
switchport access vlan 1000
exit
!
interface ethernet 1/g2
no negotiation
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
switchport access vlan 20
exit
!
interface ethernet 1/g3
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
switchport access vlan 20
exit
!
interface ethernet 1/g4
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g5
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g6
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g7
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g8
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g9
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g10
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g11
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g12
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g13
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g14
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g15
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g16
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g17
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g18
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g19
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g20
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g21
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g22
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g23
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g24
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g25
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g26
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g27
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g28
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g29
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g30
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g31
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g32
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g33
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g34
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g35
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g36
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g37
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g38
spanning-tree disable
spanning-tree portfast
spanning-tree tcnguard
spanning-tree auto-portfast
spanning-tree guard root
exit
!
interface ethernet 1/g39
switchport mode general
switchport general allowed vlan add 20,40,50,60,70,80,1000,3007 tagged
exit
!
interface ethernet 1/g40
switchport mode general
switchport general allowed vlan add 20,40,50,60,70,1000 tagged
exit
!
interface ethernet 1/g41
switchport mode general
switchport general allowed vlan add 20,60,70 tagged
exit
!
interface ethernet 1/g42
switchport mode general
switchport general allowed vlan add 20,50,60,70,1000 tagged
exit
!
interface ethernet 1/g43
switchport mode general
switchport general allowed vlan add 20,60,70 tagged
exit
!
interface ethernet 1/g44
switchport mode general
switchport general allowed vlan add 20,60,70 tagged
exit
!
interface ethernet 1/g45
switchport mode general
switchport general allowed vlan add 20,40,50,60,70,80,1000,3007 tagged
exit
!
interface ethernet 1/g46
switchport mode general
switchport general allowed vlan add 70,2007-2009 tagged
exit
!
interface ethernet 1/g47
switchport mode general
switchport general allowed vlan add 20,1000 tagged
exit
!
interface ethernet 1/g48
switchport mode general
switchport general allowed vlan add 20,60,2007-2009 tagged
exit
!
interface ethernet 1/xg1
switchport mode trunk
switchport trunk allowed vlan add 1,20,50,60,70,1000,2007-2009,3007-3008
exit
!
interface ethernet 1/xg2
switchport mode trunk
switchport trunk allowed vlan add 1,20,40,50,60,70,80,90,1000,2007-2009
switchport trunk allowed vlan add 3007-3008
exit
!
interface ethernet 1/xg3
spanning-tree portfast
exit
!
interface ethernet 1/xg4
spanning-tree portfast
exit
!
snmp-server community xxx ro ipaddress 10.1.X.X
snmp-server community xxx ro ipaddress 10.1.X.X
snmp-server community xxx rw ipaddress 10.1.X.X
exit

21 Posts

January 22nd, 2014 07:00

Ports g1-g38 were spanning-tree disabled because they are just edge ports...

All the others are uplinks to other switches... I thought it would be no problem to deactivate STP on those specific edge ports? Maybe I was wrong, so please tell me =)
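In case re-enabling STP on the edge ports is the right answer, the per-port configuration I would try instead of disabling STP entirely looks roughly like this (assuming 6248 syntax, corrections welcome):

configure
interface ethernet 1/g1
no spanning-tree disable
spanning-tree portfast
exit
exit

The port should still go to forwarding immediately, but BPDUs would still be processed if another switch were ever plugged in by accident.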

We have that 6248 and all its interfaces in Nagios monitoring, so we can see the traffic / bandwidth of all interfaces at a 10-minute resolution.


Another curious thing:

If I ping the VLAN 1 interface (the gateway IP of VLAN 1) from a workstation in VLAN 40, we very often get values like these:

Reply from 10.2.1.254: bytes=32 time=3ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=4ms TTL=64
Reply from 10.2.1.254: bytes=32 time=2ms TTL=64
Request timed out.
Reply from 10.2.1.254: bytes=32 time=163ms TTL=64
Reply from 10.2.1.254: bytes=32 time=33ms TTL=64
Reply from 10.2.1.254: bytes=32 time=2ms TTL=64
Reply from 10.2.1.254: bytes=32 time=34ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64
Reply from 10.2.1.254: bytes=32 time=1ms TTL=64

but from the same workstation (inside VLAN 40) to another workstation (inside VLAN 1), we have consistently good response times at the same time:

Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127
Reply from 10.2.1.22: bytes=32 time<1ms TTL=127

21 Posts

January 22nd, 2014 07:00

Just another quick note about the ipMapForwardingTask.

Almost every time I check the "show process cpu" utilization, the ipMapForwardingTask is above 20% and the total CPU utilization is around 30-35%.

It's very hard to find a moment where the ipMapForwardingTask is below 10%.


What exactly does this "ipMapForwardingTask" do? Is it the process that forwards IP packets, broadcasts, and multicasts?

Is there a definition of this task anywhere?

Thank you very much for your investigation!

21 Posts

January 23rd, 2014 00:00

Good Morning,


I re-enabled STP on all edge ports of that switch.

RSTP was already configured (on all PowerConnects in the network), and portfast on all edge ports!

This is our routing switch at the moment, and the only one! No other switch is routing.

I deactivated ICMP redirects as you mentioned in the last section of your post!

Immediately, the CPU utilization of the ipMapForwardingTask dropped below 2%! Disabling this option seems to have solved the issues. There are no request timeouts anymore, and even the ping times to the VLAN interface IP addresses are many times better now!

A good article dealing with several aspects of the problems I mentioned at the top of this thread can be found here:

http://hasanmansur.com/2012/10/15/powerconnect-latency-packet-loss-troubleshooting/

Even the ICMP redirect issue is described very well there!

I hope other administrators with these problems will read this.

Thank you very much for your fast and competent help!

21 Posts

January 23rd, 2014 05:00

Just a quick update...


The ipMapForwardingTask was fine for about 3-4 hours... always between 0.6-2.5%, with an overall CPU utilization of about 7-10% on our L3 PC6248 switch!

Then we had about 30 minutes where the ipMapForwardingTask increased again to 20-25%, with an overall utilization of about 40-45%.

#show logging

shows some strange entries from that period:


<190> JAN 23 14:14:58 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491049 %% nimCheckIfNumber: internal interface number 819 out of range
<190> JAN 23 14:14:58 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491060 %% nimCheckIfNumber: internal interface number 820 out of range
<190> JAN 23 14:14:58 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491061 %% nimCheckIfNumber: internal interface number 821 out of range
<190> JAN 23 14:14:58 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491062 %% nimCheckIfNumber: internal interface number 822 out of range


<190> JAN 23 14:14:58 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491063 %% nimCheckIfNumber: internal interface number 823 out of range
<190> JAN 23 14:14:58 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491064 %% nimCheckIfNumber: internal interface number 824 out of range
<190> JAN 23 14:15:32 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491548 %% nimCheckIfNumber: internal interface number 819 out of range
<190> JAN 23 14:15:32 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491559 %% nimCheckIfNumber: internal interface number 820 out of range
<190> JAN 23 14:15:32 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491560 %% nimCheckIfNumber: internal interface number 821 out of range
<190> JAN 23 14:15:32 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491561 %% nimCheckIfNumber: internal interface number 822 out of range
<190> JAN 23 14:15:32 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491562 %% nimCheckIfNumber: internal interface number 823 out of range
<190> JAN 23 14:15:32 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491563 %% nimCheckIfNumber: internal interface number 824 out of range


<190> JAN 23 14:15:34 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491574 %% nimCheckIfNumber: internal interface number 819 out of range
<190> JAN 23 14:15:34 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491575 %% nimCheckIfNumber: internal interface number 820 out of range
<190> JAN 23 14:15:34 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491576 %% nimCheckIfNumber: internal interface number 821 out of range
<190> JAN 23 14:15:34 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491577 %% nimCheckIfNumber: internal interface number 822 out of range
<190> JAN 23 14:15:34 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491578 %% nimCheckIfNumber: internal interface number 823 out of range
<190> JAN 23 14:15:34 L3-Switch-IP-1 NIM[112744896]: nim_intf_map_api.c(403) 491579 %% nimCheckIfNumber: internal interface number 824 out of range
<190> JAN 23 14:17:26 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 493288 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:17:26 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 493289 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6


<190> JAN 23 14:17:26 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 493290 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:17:26 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 493291 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:17:26 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 493292 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:17:26 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 493293 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<187> JAN 23 14:17:26 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493294 %% Failure getting the forwarding database ID for fid 2
<187> JAN 23 14:17:26 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493304 %% Failure getting the forwarding database ID for fid 3
<187> JAN 23 14:17:26 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493305 %% Failure getting the forwarding database ID for fid 4
<187> JAN 23 14:17:26 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493316 %% Failure getting the forwarding database ID for fid 5


<187> JAN 23 14:17:26 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493317 %% Failure getting the forwarding database ID for fid 6
<187> JAN 23 14:17:26 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493318 %% Failure getting the forwarding database ID for fid 7
<187> JAN 23 14:17:26 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493319 %% Failure getting the forwarding database ID for fid 8
<187> JAN 23 14:17:28 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493330 %% Failure getting the forwarding database ID for fid 8
<187> JAN 23 14:17:28 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493331 %% Failure getting the forwarding database ID for fid 9
<187> JAN 23 14:17:28 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493332 %% Failure getting the forwarding database ID for fid 10
<187> JAN 23 14:17:28 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493333 %% Failure getting the forwarding database ID for fid 11
<187> JAN 23 14:17:28 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493334 %% Failure getting the forwarding database ID for fid 12


<187> JAN 23 14:17:28 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 493335 %% Failure getting the forwarding database ID for fid 13
<190> JAN 23 14:23:20 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 498440 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:23:20 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 498441 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:23:20 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 498442 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:23:20 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 498443 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:23:20 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 498444 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<190> JAN 23 14:23:20 L3-Switch-IP-1 UNKN[112744896]: osapi_ipeak.c(1381) 498445 %% osapiIfIpv6AddrsGet: could not get interface mottsec0! errno = 6
<187> JAN 23 14:23:20 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498455 %% Failure getting the forwarding database ID for fid 2


<187> JAN 23 14:23:20 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498456 %% Failure getting the forwarding database ID for fid 3
<187> JAN 23 14:23:20 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498457 %% Failure getting the forwarding database ID for fid 4
<187> JAN 23 14:23:20 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498468 %% Failure getting the forwarding database ID for fid 5
<187> JAN 23 14:23:20 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498469 %% Failure getting the forwarding database ID for fid 6
<187> JAN 23 14:23:20 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498470 %% Failure getting the forwarding database ID for fid 7
<187> JAN 23 14:23:20 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498471 %% Failure getting the forwarding database ID for fid 8
<187> JAN 23 14:23:22 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498482 %% Failure getting the forwarding database ID for fid 8
<187> JAN 23 14:23:22 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498483 %% Failure getting the forwarding database ID for fid 9


<187> JAN 23 14:23:22 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498484 %% Failure getting the forwarding database ID for fid 10
<187> JAN 23 14:23:22 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498485 %% Failure getting the forwarding database ID for fid 11
<187> JAN 23 14:23:22 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498486 %% Failure getting the forwarding database ID for fid 12
<187> JAN 23 14:23:22 L3-Switch-IP-1 FDB[112744896]: fdb.c(893) 498487 %% Failure getting the forwarding database ID for fid 13

Any idea what happened?

I had just run the following command:

# no ip redirect

globally and on all routed VLAN interfaces, and

# no ip unreachables

on every routed VLAN interface.
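Put together, what I applied looks roughly like this, repeated for each routed VLAN interface (VLAN 40 here is just one example):

configure
no ip redirect
interface vlan 40
no ip redirect
no ip unreachables
exit
exit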

Thanks in advance

21 Posts

January 23rd, 2014 06:00

Just reverted it! The last 60 minutes have been perfectly fine, without problems.

Just found out that the 1/g39 uplink port to a 5448 shows many "Frames too Long" counts... many hundreds to thousands per second...

Maybe something is wrong here too... Could this cause high CPU load / problems?

21 Posts

January 23rd, 2014 07:00

Set the MTU to the maximum value on port 1/g39... but still roughly 1000 "Frames too Long" are counted per second on just that port.

I checked the switch that is connected on port 39, but I did not find any indication of which edge port is responsible for these oversized frames...
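For anyone else chasing this, the counters I am watching are roughly these (show commands from memory; the exact output format differs per firmware):

show interfaces counters ethernet 1/g39
show statistics ethernet 1/g39

One thought: if that uplink carries tagged VLANs, a full-size 1518-byte frame becomes 1522 bytes with the 802.1Q tag, and a device whose MTU was not raised by those 4 bytes may count such frames as "too long" (so-called baby giants). That could explain oversized-frame counters without any host actually misbehaving.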

21 Posts

January 23rd, 2014 23:00

# no ip redirect


is not the reason for this issue. The number of oversized frames increases with or without this option set.


Yes, we are using iSCSI in our environment, but not directly on that main routing switch. One iSCSI array (PV 3620i) is connected to a 5548, another PV 3620i to another PC6248 switch.


But there is no iSCSI at all behind the problematic uplink port g39.

21 Posts

January 24th, 2014 01:00

Pinging the default VLAN 1 interface address still shows some delays and timeouts:

Reply from 10.2.X.X: bytes=32 time=1ms TTL=64
Reply from 10.2.X.X: bytes=32 time=1ms TTL=64
Reply from 10.2.X.X: bytes=32 time=1ms TTL=64
Reply from 10.2.X.X: bytes=32 time=1ms TTL=64
Reply from 10.2.X.X: bytes=32 time=1ms TTL=64
Reply from 10.2.X.X: bytes=32 time=1ms TTL=64
Reply from 10.2.X.X: bytes=32 time=156ms TTL=64
Request timed out.
Reply from 10.2.X.X: bytes=32 time=175ms TTL=64
Reply from 10.2.X.X: bytes=32 time=182ms TTL=64
Reply from 10.2.X.X: bytes=32 time=212ms TTL=64
Reply from 10.2.X.X: bytes=32 time=210ms TTL=64
Reply from 10.2.X.X: bytes=32 time=178ms TTL=64
Request timed out.
Reply from 10.2.X.X: bytes=32 time=174ms TTL=64
Reply from 10.2.X.X: bytes=32 time=234ms TTL=64
Reply from 10.2.X.X: bytes=32 time=210ms TTL=64
Reply from 10.2.X.X: bytes=32 time=133ms TTL=64
Reply from 10.2.X.X: bytes=32 time=163ms TTL=64
Request timed out.
Reply from 10.2.X.X: bytes=32 time=1ms TTL=64
Reply from 10.2.X.X: bytes=32 time=1ms TTL=64
Reply from 10.2.X.X: bytes=32 time=1ms TTL=64
Reply from 10.2.X.X: bytes=32 time=1ms TTL=64
Reply from 10.2.X.X: bytes=32 time=1ms TTL=64
Reply from 10.2.X.X: bytes=32 time=1ms TTL=64

But the overall situation is much better now, particularly the ipMapForwardingTask!

And there has been no strange logging output since the entries I posted yesterday... no excessive topology changes that would indicate an STP problem...

2 Posts

January 24th, 2014 01:00

Thank you for your reply

We have set "no ip redirects" but our issue is not fixed.

Task                    Utilization
----------------------- -----------
LOG                           0.05%
osapiTimer                    1.20%
bcmL2X.0                      0.85%
bcmCNTR.0                     0.25%
bcmLINK.0                     0.60%
bcmRX                         1.10%
bcmNHOP                       0.05%
bcmATP-TX                     0.10%
bcmATP-RX                     0.10%
MAC Send Task                 0.55%
dtlTask                       0.25%
hapiRxTask                    0.05%
RMONTask                      0.10%
unitMgrTask                   0.20%
dot3ad_timer_task             0.15%
ipMapForwardingTask          52.20%
BXS Req                       0.10%
OSPF Receive                  0.10%
Kernel/Interrupt/Idle        42.00%

Total                        100.00%

and

PING 172.16.199.253 (172.16.199.253) 56(84) bytes of data.
64 bytes from 172.16.199.253: icmp_seq=1 ttl=61 time=2.04 ms
64 bytes from 172.16.199.253: icmp_seq=2 ttl=61 time=2.21 ms
64 bytes from 172.16.199.253: icmp_seq=3 ttl=61 time=2.60 ms
64 bytes from 172.16.199.253: icmp_seq=4 ttl=61 time=2.27 ms
64 bytes from 172.16.199.253: icmp_seq=5 ttl=61 time=2.17 ms
64 bytes from 172.16.199.253: icmp_seq=6 ttl=61 time=499 ms
64 bytes from 172.16.199.253: icmp_seq=7 ttl=61 time=41.9 ms
64 bytes from 172.16.199.253: icmp_seq=8 ttl=61 time=48.4 ms
64 bytes from 172.16.199.253: icmp_seq=9 ttl=61 time=62.6 ms
64 bytes from 172.16.199.253: icmp_seq=10 ttl=61 time=155 ms
64 bytes from 172.16.199.253: icmp_seq=11 ttl=61 time=36.8 ms
64 bytes from 172.16.199.253: icmp_seq=12 ttl=61 time=42.4 ms
64 bytes from 172.16.199.253: icmp_seq=13 ttl=61 time=54.1 ms
64 bytes from 172.16.199.253: icmp_seq=14 ttl=61 time=315 ms

Thank you for your help

regards,

21 Posts

January 27th, 2014 04:00

Hello Daniel,

Port 39 is an uplink to another switch which has just workstations and telephones patched in... no servers, no iSCSI devices. We are not able to see which endpoint is causing these oversized frames...

If I ping a device in VLAN 1, everything is permanently fine (< 1 ms); there are no problems. Only the IP addresses of the different routed VLAN interfaces show these problems sometimes.

Maybe everything is actually fine (pinging a device in VLAN 1 works without problems), but even if there is no real problem with the PowerConnect switch, it would be interesting to know what causes the delays and timeouts on those VLAN interface IP addresses.

6 Posts

February 11th, 2014 12:00

I have the same problem on multiple switches running at layer 3. Exactly the same symptoms: the routed interfaces are not accessible, but all users on the routed subnets work fine. This only affects management of the devices running at layer 3. For us the problem seems to be related to multinetted VLAN interfaces, as we do not see this issue on switches without multinetted VLAN interfaces. I've also worked extensively with Dell engineering over the last year, and we are still experiencing this problem. Not sure why this issue is so difficult for Dell to resolve!

6 Posts

February 11th, 2014 13:00

Daniel, yes, we have the "no ip redirect" command configured on all VLAN interfaces, and we are running version 5.1.2.3 on all of our PowerConnect gear.

21 Posts

February 12th, 2014 00:00

We still get these problems from time to time...

The routing switch is on the latest firmware, and ip redirect is disabled...

I saw that one uplink to another PC6248 switch has many "Received Pause Frames" on XG/1...

So I disabled flow control (on our four PC6248 core switches) as recommended in this article:

http://monolight.cc/2011/08/flow-control-flaw-in-broadcom-bcm5709-nics-and-bcm56xxx-switches/

"There is a design flaw in Broadcom's 'bnx2' NetXtreme II BCM5709 PCI Express NICs. These NICs are extremely popular; Dell and HP use them throughout their PowerEdge and ProLiant standalone and blade server ranges.

The flaw is in the flow control (802.3x) implementation and results in a switch-wide or network-wide loss of connectivity. As is common in major failures, there is more than one underlying cause."

"If you can't disable flow control on all switches, at least disable it on your core switches. If you use it in the core, you're Doing It Wrong™."

"Do not use BCM56314 and BCM56820-based OEM switches (e.g. Dell PowerConnect 6248, M8024, 8024F). Get your switches from a respectable network hardware vendor."

 

Unfortunately, this did not solve the issues!
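For completeness, the flow control change itself was just a global command on each 6248 (syntax from memory, so double-check it against your firmware), saved afterwards:

configure
no flowcontrol
exit
copy running-config startup-config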

 

 
