
December 10th, 2013 05:00

High CPU for ipMapForwardingTask

I have a problem with a PowerConnect 6248 that appeared recently. From time to time, CPU usage for ipMapForwardingTask rises to 30-40% (the normal average is 6%), and ping times to the switch rise from 1 ms to 50-100 ms. The logs show nothing. An episode lasts from several dozen seconds to a few minutes, then everything returns to normal, but it repeats again after a few hours (or less). What could be the reason? Current firmware: 3.3.1.10.

Moderator

 • 

8.7K Posts

December 10th, 2013 08:00

Hi,

There was a fix for the ipMapForwardingTask in a later firmware release, so we will want to try updating that first. http://www.dell.com/support/drivers/us/en/555/DriverDetails/Product/powerconnect-6248?driverId=77XG3&osCode=NAA&fileId=3288111910&languageCode=en&categoryId=NI

Moderator

 • 

8.7K Posts

December 12th, 2013 07:00

Are there any other changes in the environment when it occurs? This can happen if the switch is recalculating spanning tree topology. Is PortFast enabled?

22 Posts

December 12th, 2013 07:00

Hi,


Last night we updated the switch to firmware 3.3.8.2, but unfortunately the problem appeared again several hours later. What could be the next steps to locate the cause? Logging is set to debug, but there are no entries (except for SNTP synchronization).

Moderator

 • 

8.7K Posts

December 12th, 2013 09:00

If you can try disabling STP, go ahead and do that.

22 Posts

December 12th, 2013 09:00

As far as I can see there are no changes, and all ports are set to PortFast:

Topology Changes Count 1
Last Topology Change 0 day 13 hr 55 min 45 sec

If it would help, I can disable STP for testing purposes.

I also see that snoopTask occasionally consumes more than 10% of CPU (averaging 4%), but that could be normal, as we have multicast traffic with IGMP snooping enabled. The two tasks do not appear related (for example, when one is up at 15%, the other is down at 3%).

22 Posts

December 13th, 2013 10:00

STP is not the cause. I disabled it on every port and globally, and today the same thing happened again: CPU for ipMapForwardingTask went to 50% for about 3 minutes, then dropped back to the normal 3-7%. I also checked snoopTask: when IGMP snooping was disabled, that task went to 0.00%. Any other suggestions as to what could cause such high CPU usage for ipMapForwardingTask from time to time?

Moderator

 • 

8.7K Posts

December 13th, 2013 10:00

That task reflects how busy the switch's CPU is, so something may be causing a traffic spike. Have you noticed any issues with other switches or devices on the network?

22 Posts

December 28th, 2013 07:00

No, no other problems are detected on the network. Despite this issue, the devices behind the switch currently do not see any delay, but I am not sure it will not affect our network in the future. By the way, the problem was gone for a week and has now come back (I can see it in our monitoring graphs). Yesterday I noticed an interval of about 5 minutes when ipMapForwardingTask was in the 50-60% CPU range (overall CPU was ~80-90%) and pings went as high as 300 ms.

What I have done while searching for the cause: when the problem was detected, I disabled all ports except one (the incoming link), and the CPU was still as high as before. Since the switch is connected to the Internet and configured in routing mode (with a static route), this means some traffic coming from our ISP (or the Internet) triggers this CPU behavior. The most frustrating thing is that the logs show nothing when the problem occurs, and I cannot think of a solution (the switch has worked for more than 3 years already, and the problem started at the beginning of December). I have tried almost everything I could imagine (disabling ICMP, disabling STP, enabling DoS protection, etc.) with no results. The problem is not related to the amount of traffic going through the switch, as it can appear at night (when the load is less than 20 Mbps) and sometimes does not occur during the day (when the load is about 350 Mbps).
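To catch these episodes without sitting at the console, the management-interface latency can be logged continuously and correlated with the CPU graphs afterwards. A minimal sketch (the host address is a placeholder, and the `ping` output format assumed is the common Linux iputils one):

```python
# Hypothetical RTT logger: pings the switch management address once per
# interval and prints a timestamped CSV line. "10.0.0.1" is a placeholder.
import re
import subprocess
import time

def parse_rtt_ms(ping_output):
    """Extract the RTT in milliseconds from `ping` output (iputils style),
    e.g. '... icmp_seq=1 ttl=64 time=0.123 ms' -> 0.123. Returns None on
    timeouts or unrecognized output."""
    m = re.search(r"time[=<]([\d.]+)\s*ms", ping_output)
    return float(m.group(1)) if m else None

def log_rtt(host="10.0.0.1", interval_s=5.0):
    """Poll forever, printing 'timestamp,host,rtt_ms' lines."""
    while True:
        out = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                             capture_output=True, text=True).stdout
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')},{host},{parse_rtt_ms(out)}")
        time.sleep(interval_s)
```

Redirecting the output to a file gives a record that can be lined up against the CPU-spike times later.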

Any help would be appreciated.

Moderator

 • 

8.7K Posts

December 31st, 2013 07:00

I emailed you at the address in your account information to ask for some additional details.

6 Posts

February 11th, 2014 13:00

Amongst our 600+ PowerConnect 7048P and 8024F switches, we have the same problem on multiple switches running at layer 3. We see the exact same symptoms: the routed interfaces are not accessible, but all users on the routed subnets work fine. This only affects management on the devices running at layer 3. I've worked with Dell engineering off and on over the last year on this issue, and we are still experiencing the problem. I'm still trying to verify this, but for us it may be related to multinetted VLAN interfaces, as we only see the issue on switches with multinetted VLAN interfaces. Somewhat large data transfers taking place over multinetted VLAN interfaces seem to be a problem for PowerConnect.

22 Posts

April 8th, 2014 07:00

Skehoe, following your remark that "somewhat large data transfers taking place over multinetted VLAN interfaces seem to be a problem for PowerConnect", I checked our ping/traffic graphs, and there is a visible correlation between load and management-interface ping times. We also have another PowerConnect 6248 operating at layer 2 with similar loads (but fewer hosts), and it has no problems at all. So the issue may be related to layer 3 operation under load (we have ~750 hosts behind this switch). I do not like these symptoms, but we currently live with the problem because I do not have time to investigate further (I have already spent many hours on it). Maybe we should simply get a new layer 3 device and let the PowerConnect 6248 do only layer 2 work. Or are such high management-interface ping times and high CPU normal behavior at these loads?

Moderator

 • 

8.7K Posts

April 8th, 2014 08:00

You may want to try disabling ICMP redirects on all interfaces. Sometimes this helps with the issue:

#no ip redirects

22 Posts

April 30th, 2014 10:00

ICMP redirects are disabled on all interfaces and globally, but unfortunately it does not help. The described behavior is still observed.

22 Posts

June 4th, 2014 08:00

I will also paste the CPU load by process.

This time the aggregated data rate was ~100-150 Mbps, but the CPU load was still high for a few minutes:

Task Name                5 Seconds   1 Minute   5 Minutes
tTffsPTask                   0.00%      0.00%       0.15%
tNetTask                     0.00%      0.06%       0.15%
ipnetd                       0.00%      0.04%       0.01%
osapiTimer                   1.27%      1.32%       1.13%
bcmL2X.0                     0.31%      0.16%       0.13%
bcmCNTR.0                    0.47%      0.41%       0.47%
bcmTX                        3.33%      2.63%       1.91%
bcmL2X.1                     0.47%      0.19%       0.33%
bcmCNTR.1                    0.15%      0.32%       0.42%
bcmRX                        5.08%      5.74%       4.32%
bcmNHOP                      0.00%      0.02%       0.00%
MAC Send Task                0.15%      0.37%       0.45%
MAC Age Task                 0.00%      0.18%       0.30%
USL Worker Task              0.00%      0.00%       0.01%
bcmLINK.0                    0.15%      0.81%       0.51%
bcmLINK.1                    0.63%      0.58%       0.39%
tL7Timer0                    0.15%      0.06%       0.01%
osapiMonTask                 0.00%      0.00%       0.08%
simPts_task                  0.31%      0.04%       0.01%
dtlTask                      1.74%      2.17%       1.76%
tEmWeb                       0.00%      0.38%       0.10%
hapiRxTask                   1.58%      1.90%       1.42%
DHCP snoop                   0.15%      0.05%       0.00%
Dynamic ARP Inspection       0.00%      0.04%       0.01%
SNMPTask                     0.00%      0.02%       0.30%
dot1s_timer_task             0.00%      0.09%       0.01%
unitMgrTask                  0.00%      0.00%       0.01%
snoopTask                    9.22%      4.76%       4.22%
ipMapForwardingTask         62.95%     64.66%      48.36%
tArpCallback                 0.00%      0.04%       0.00%
ARP Timer                    0.79%      1.72%       2.64%
lldpTask                     0.15%      0.19%       0.30%
tCptvPrtl                    0.00%      0.02%       0.01%
isdpTask                     0.00%      0.08%       0.11%
RMONTask                     0.31%      0.09%       0.30%
boxs Req                     0.00%      0.07%       0.01%

After a few minutes everything was normal again:

Task Name                5 Seconds   1 Minute   5 Minutes
tTffsPTask                   0.00%      0.02%       0.00%
tNetTask                     0.31%      0.11%       0.30%
ipnetd                       0.00%      0.00%       0.01%
tXbdService                  0.00%      0.02%       0.00%
osapiTimer                   1.11%      1.17%       1.08%
bcmL2X.0                     0.47%      0.31%       0.30%
bcmCNTR.0                    0.47%      0.39%       0.26%
bcmTX                        0.00%      1.43%       1.80%
bcmL2X.1                     0.00%      0.19%       0.30%
bcmCNTR.1                    0.31%      0.35%       0.32%
bcmRX                        1.58%      3.29%       4.13%
MAC Send Task                0.15%      0.40%       0.43%
MAC Age Task                 0.00%      0.16%       0.30%
USL Worker Task              0.00%      0.06%       0.01%
bcmLINK.0                    0.31%      0.44%       0.50%
bcmLINK.1                    0.15%      0.48%       0.53%
tL7Timer0                    0.00%      0.04%       0.01%
osapiMonTask                 0.00%      0.13%       0.15%
simPts_task                  0.00%      0.02%       0.01%
dtlTask                      0.15%      1.53%       1.72%
tEmWeb                       0.00%      0.02%       0.18%
hapiRxTask                   0.31%      1.05%       1.26%
DHCP snoop                   0.00%      0.02%       0.00%
Dynamic ARP Inspection       0.00%      0.00%       0.15%
SNMPTask                     0.00%      0.28%       0.30%
dot1s_timer_task             0.15%      0.07%       0.01%
radius_task                  0.00%      0.02%       0.00%
unitMgrTask                  0.00%      0.00%       0.01%
snoopTask                    0.31%      2.40%       2.59%
ipMapForwardingTask          2.54%     31.62%      47.25%
tArpCallback                 0.00%      0.05%       0.01%
ARP Timer                    5.08%      3.44%       2.61%
IpHelperTask                 0.00%      0.06%       0.00%
tRtrDiscProcessingTask       0.00%      0.02%       0.00%
pktRcvrTask                  0.15%      0.02%       0.00%
lldpTask                     0.00%      0.10%       0.14%
tCptvPrtl                    0.00%      0.00%       0.01%
isdpTask                     0.00%      0.10%       0.14%
RMONTask                     0.15%      0.09%       0.30%
boxs Req                     0.15%      0.07%       0.15%
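For anyone who wants to watch output like this automatically, the `show process cpu` listing above can be parsed with a short script that flags tasks over a threshold (a minimal sketch; the 10% threshold is an arbitrary choice, and task names may contain spaces, so the last three tokens are taken as the percentages):

```python
# Hypothetical parser for `show process cpu` style output: one task per
# line, ending in three percentage columns (5 sec, 1 min, 5 min).

def parse_cpu_line(line):
    """Return (task_name, (pct_5s, pct_1m, pct_5m)) or None for lines
    that do not end in three percentages (headers, blanks, etc.)."""
    tokens = line.split()
    if len(tokens) < 4:
        return None
    try:
        pcts = tuple(float(t.rstrip("%")) for t in tokens[-3:])
    except ValueError:
        return None
    return " ".join(tokens[:-3]), pcts

def busy_tasks(text, threshold=10.0):
    """Map task name -> percentages for tasks whose 5-second CPU share
    is at or above the threshold."""
    result = {}
    for line in text.splitlines():
        parsed = parse_cpu_line(line)
        if parsed and parsed[1][0] >= threshold:
            result[parsed[0]] = parsed[1]
    return result
```

Running `busy_tasks` over the first listing above would single out ipMapForwardingTask, which matches what the graphs show.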

17 Posts

July 7th, 2014 11:00

Jurism, I know this is an old thread. I was just curious: what did you use to grab the counters and create that graph?
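One common way to collect such counters is periodic SNMP polling fed into a grapher such as MRTG or Cacti. A minimal sketch using Net-SNMP's `snmpget` from Python follows; the OID is a deliberate placeholder, since the thread never names the exact Dell PowerConnect CPU OID and it would have to be looked up in the vendor MIB:

```python
# Hypothetical SNMP poller: fetches one counter and prints timestamped
# CSV lines suitable for graphing. CPU_OID is a placeholder, NOT the
# real PowerConnect CPU OID.
import subprocess
import time

CPU_OID = "1.3.6.1.4.1.0"  # placeholder; look up the real OID in the MIB

def parse_snmp_value(line):
    """Pull the numeric value out of a Net-SNMP `snmpget` result line,
    e.g. 'FOO-MIB::cpu.0 = Gauge32: 37' -> 37.0. Returns None when the
    line does not end in a number."""
    try:
        return float(line.rsplit(":", 1)[1].strip())
    except (IndexError, ValueError):
        return None

def poll(host, community="public", interval_s=60):
    """Poll forever, printing 'timestamp,value' lines."""
    while True:
        out = subprocess.run(
            ["snmpget", "-v2c", "-c", community, host, CPU_OID],
            capture_output=True, text=True).stdout.strip()
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')},{parse_snmp_value(out)}")
        time.sleep(interval_s)
```

Any RRD-based tool can then turn the resulting series into graphs like the ones posted earlier in the thread.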
