
Dell EMC Configuration Guide for the S4048T-ON System 9.14.2.4


CoPP for OSPFv3 Packets

You can create an IPv6 ACL for control-plane traffic policing for OSPFv3, in addition to the existing CoPP support for VRRP, BGP, and ICMP. Use the ipv6 access-list name cpu-qos permit ospfv3 command to allow CoPP traffic for OSPFv3. Control Plane Policing (CoPP) makes a greater number of CPU queues available on ports for IPv6 and ICMPv6 packets.

The CoPP enhancements extend the capability of FTOS by using a greater number of CPU queues on the CMIC port and by sending control packets to different queues, which reduces the contention among control protocols that previously shared the same queues. (For example, before CoPP for OSPFv3 was introduced, OSPF traffic could cause an LACP flap because both protocols' control traffic was sent to the same queue, Q7, on the CPU port.) Non-CPU ports have only 4 dedicated control queues; the remaining queues are shared between data and control traffic. The number of control queues is increased only on the CPU port. When packets are tunneled from a non-master unit to the master unit, HiGig queues are used.

Prior to release 9.4(0.0), all IPv6 packets were sent to the same queues; there was no prioritization between ICMPv6 packets and unknown IPv6 packets. Because NS, NA, RS, and RA packets were not given high priority, sessions could fail to establish. To solve this issue, starting with release 9.4(0.0), IPv6 NDP packets use different CPU queues from generic IPv6 multicast traffic. These entries are installed in the system when the application is triggered.

CPU Processing of CoPP Traffic

The system uses FP rules to send packets to the control plane, either by copying them to the CPU (CopyToCPU) or by redirecting them to the CPU port. Only 8 CPU queues are used when sending packets to the CPU. The CPU Management Interface Controller (CMIC) interface on all systems supports 48 queues in hardware, but FTOS uses only 8 CMIC queues: 4 for CPU-bound data streams (sFlow packets, packet streams trapped to the CPU to log MAC learn-limit and other violations, L3 packets with unknown destinations for soft forwarding, and so on) and 4 for the well-known L2/L3 protocol streams. However, about 20 well-known protocol streams have to share those 4 CMIC queues. Before release 9.4(0.0), Dell EMC Networking OS used only 8 queues, and most of them were shared by multiple protocols. Increasing the number of CMIC queues therefore reduces the contention among protocols for queue bandwidth.

Currently, there are 4 queues for data and 4 for control on both the front-end and backplane ports. In stacked systems, control streams that reach a standby or slave unit are tunneled through the backplane ports across stack units to reach the CPU of the master unit. Packets that reach the slave unit's CMIC on queues 0-7 take the same queues 0-7 on the backplane ports while traversing units, and are finally queued on queues 0-7 on the master CMIC. The queues (4-7) taken by the well-known protocol streams are therefore uniform across the different queuing points, as are the queues (0-3) taken by the CPU-bound data streams. On backplane ports, queues 0-3 carry both the front-end-bound data streams and the CPU-bound data streams, which is acceptable, but the well-known protocol streams must not be mixed with the data streams on queues 0-3 of the backplane ports.

Increased CPU Queues for CoPP

FTOS classifies every packet ingressing from a front-end port as control traffic or data traffic using predefined rules based on protocol type or packet attributes such as TTL and slow-path indicators. The FP is used to classify the traffic and send control traffic to the CMIC port. The other major function of the FP rule is to decide which CPU queue the packet is sent to. All other packets are forwarded or dropped at ingress.

All packets destined for the CPU are sent to the local CPU through the CPU queues and processed there. In a stacked system, however, only the master CPU is responsible for control-plane actions, so control packets received on master or slave units are tunneled to the master CPU for processing.

As part of these enhancements, the number of CPU queues on the CPU port is increased from 8 to 12. The front-end and backplane ports, however, still support only 8 queues. As a result, packets transmitted to the local CPU use queues Q0-Q11. Control packets that are tunneled to the master unit are isolated from the data queues on the backplane links: control traffic must be sent over the control queues Q4-Q7 on the HiGig links. After reaching the master unit, tunneled packets are transmitted to the CPU using queues Q0-Q11.

The backplane ports can have a maximum of 4 control queues. Therefore, when there are 'n' CMIC queues for well-known protocols and n > 4, the streams on the 'n' CMIC queues must be multiplexed onto the 4 control queues on the backplane ports and, on the master unit, demultiplexed back onto the 'n' CMIC queues of the master CPU.
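This multiplexing can be pictured with a short sketch. The following Python model is illustrative only (it is not FTOS code): the fold of queues Q4-Q11 onto Q4-Q7 is an assumed mapping, and on the master unit it is the FP re-classification that restores each tunneled stream to its protocol's CMIC queue.

    # Illustrative model of carrying 12 CMIC queues over a backplane
    # that has only 8 queues (Q0-Q3 data, Q4-Q7 control). Not FTOS code.
    def mux_to_backplane(cmic_queue):
        """Slave unit: map a CMIC queue (0-11) to a backplane queue (0-7)."""
        if cmic_queue <= 3:
            return cmic_queue            # CPU-bound data keeps Q0-Q3
        return 4 + (cmic_queue - 4) % 4  # assumed fold of Q4-Q11 onto Q4-Q7

    def demux_on_master(backplane_queue, protocol_queue):
        """Master unit: re-classification by protocol restores the
        original CMIC queue (0-11) for tunneled control streams."""
        if backplane_queue <= 3:
            return backplane_queue       # data streams stay on Q0-Q3
        return protocol_queue            # e.g., OSPF back to Q9, LACP to Q7

    for q in range(12):
        print("CMIC Q%d -> backplane Q%d" % (q, mux_to_backplane(q)))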

After control packets reach the CPU through the CMIC port, the software schedules the processing of traffic on each of the 12 CPU queues. This behavior also applies to stand-alone systems; it has no dependency on stacking.

Policing protects CPU-bound control-plane packets from undesired or malicious traffic by policing the packets transmitted to the CPU at a specified rate. Policing is applied at each CPU queue on each unit.
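As a rough illustration of per-queue policing, the following Python sketch implements a packets-per-second token bucket. The mechanism is an assumption for illustration (the actual policer runs in hardware); the example rate of 600 pps for queue 9 is taken from Table 1 later in this section.

    import time

    class QueuePolicer:
        """Minimal pps token-bucket sketch; not the actual hardware policer."""
        def __init__(self, rate_pps, burst=None):
            self.rate = rate_pps
            self.burst = burst if burst is not None else rate_pps
            self.tokens = float(self.burst)
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False  # over the queue's rate: drop the packet

    # One policer per CPU queue on each unit, e.g. Q9 (OSPF, ISIS,
    # RIPv2, BGP) policed at 600 pps per Table 1.
    policer_q9 = QueuePolicer(rate_pps=600)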

FP Entries for Distribution of NDP Packets to Various CPU Queues

  • At present, generic MAC-based entries in the system flow region send IPv6 packets to the CPU:

    • OSPFv3 – 33:33:0:0:0:5 – Q7

    • OSPFv3 – 33:33:0:0:0:6 – Q7

    • IPv6 Multicast – 33:33:0:0:0:0 – Q1

  • A specific ICMPv6 NDP protocol entry is added or removed when the user configures the first IPv6 address on a front-panel port:

    • Distribute ICMPv6 NS/RS packets to Q5.

    • Distribute ICMPv6 NA/RA packets to Q6.

These FP entries are installed on all front-panel ports.
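The FP entries above amount to a classification by destination MAC address and ICMPv6 type. The following Python sketch mirrors that mapping for illustration (it is not FTOS code); the ICMPv6 type codes are the standard RFC 4861 values.

    # Classification sketch mirroring the FP entries listed above.
    OSPFV3_MACS = {"33:33:00:00:00:05", "33:33:00:00:00:06"}
    RS, RA, NS, NA = 133, 134, 135, 136  # ICMPv6 types (RFC 4861)

    def cpu_queue(dst_mac, icmpv6_type=None):
        if dst_mac in OSPFV3_MACS:
            return 7                      # OSPFv3 control traffic -> Q7
        if icmpv6_type in (NS, RS):
            return 5                      # NS/RS -> Q5
        if icmpv6_type in (NA, RA):
            return 6                      # NA/RA -> Q6
        if dst_mac.startswith("33:33:"):
            return 1                      # generic IPv6 multicast -> Q1
        return None                       # not matched by these entries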

NDP Packets

The Neighbor Discovery Protocol (NDP) has four packet types: NS, NA, RS, and RA. These packets must be sent to the CPU for neighbor discovery.

  • Unicast NDP packets:

    • Packets that hit the L3 host/route table and are identified as locally terminated packets or CPU-bound traffic. For CPU-bound traffic, the route entry has a CPU action. The following packets are CPU-bound traffic:

      • Packets destined to the chassis.

      • Packets matching a route with an unresolved ARP entry.

      • Unknown traffic within an IP subnet range.

      • Unknown traffic hitting the default route entry.

  • Multicast NDP packets

    • NDP packets whose destination MAC address is a multicast address:

      • DST MAC 33:33:XX:XX:XX:XX

  • NDP packets with VLT peer routing enabled

    • When VLT peer routing is enabled, each VLT node has a route entry for the link-local address of both itself and the peer VLT node. The entry for the peer's link-local address has the ICL link as its egress port, while the node's own link-local address has a CopyToCpu entry. NDP packets destined for the peer VLT node, however, must be sent to the CPU and tunneled to the peer VLT node.

  • NDP packets with VLT peer routing disabled

    • NDP packets intended for the peer VLT chassis are sent to the CPU and tunneled to the peer.
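The VLT cases above reduce to a small per-destination action table. The sketch below is illustrative only; the addresses and action names are assumptions, not FTOS internals.

    # Illustrative NDP handling on a VLT node; addresses are assumed.
    SELF_LL = "fe80::1"   # this node's link-local address
    PEER_LL = "fe80::2"   # peer VLT node's link-local address

    def ndp_actions(dst_ip):
        """Whether peer routing is enabled or disabled, NDP packets for
        the peer VLT node are taken to the CPU and tunneled to the peer."""
        if dst_ip == SELF_LL:
            return ["CopyToCpu"]              # locally terminated NDP
        if dst_ip == PEER_LL:
            return ["CopyToCpu", "tunnel-to-peer"]
        return ["forward"]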

The following table describes the protocol-to-queue mapping with the number of CPU queues increased to 12.

Table 1. Redirecting Control Traffic to 12 CPU Queues

CPU Queue   Weight   Rate (pps)   Protocol
0           100      1300         BFD
1           1        300          MC
2           2        300          TTL0, TTL1, IP with options, MAC limit violation, Hyper pull, L3 with Bcast MacDA, Unknown L3, ARP unresolved, ACL Logging
3           4        400          sFlow, L3 MTU fail frames
4           127      2000         IPC/IRC, VLT control frames
5           16       300          ARP Request, NS, RS, iSCSI OPT snooping
6           16       400          ICMP, ARP Reply, NTP, locally terminated L3, NA, RA, ICMPv6 (other than NDP and MLD)
7           64       400          xSTP, FRRP, LACP, 802.1x, ECFM, L2PT, TRILL, OpenFlow
8           32       400          PVST, LLDP, GVRP, FCoE, FEFD, Trace flow
9           64       600          OSPF, ISIS, RIPv2, BGP
10          32       300          DHCP, VRRP
11          32       300          PIM, IGMP, MSDP, MLD
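If all queues are backlogged, the weights in Table 1 determine each queue's share of CPU processing. The following Python sketch computes that share under an assumed weighted-round-robin discipline; the actual scheduler is internal to the OS.

    # Queue weights from Table 1.
    WEIGHTS = {0: 100, 1: 1, 2: 2, 3: 4, 4: 127, 5: 16,
               6: 16, 7: 64, 8: 32, 9: 64, 10: 32, 11: 32}

    def service_share(queue):
        """Fraction of service a busy queue gets when all are backlogged."""
        return WEIGHTS[queue] / sum(WEIGHTS.values())

    print("Q4 (IPC/IRC, VLT): %.1f%%" % (100 * service_share(4)))  # ~25.9%
    print("Q1 (MC):           %.1f%%" % (100 * service_share(1)))  # ~0.2%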

Catch-All Entry for IPv6 Packets

Dell EMC Networking OS supports configuring IPv6 subnets with mask lengths greater than /64, but the agent writes them to the default LPM table, whose key length is 64 bits. The device supports a separate table that can store up to 256 subnets with mask lengths up to /128. This table can be enabled, and the agent can be modified to update it for mask lengths greater than /64. Doing so restricts subnet sizes to an optimal level, which helps avoid NDP attacks. The IPv6 stack already supports handling subnets longer than /64 and requires no additional work. A default catch-all entry is placed in the LPM table for both IPv4 and IPv6. For IPv6, you can disable this capability by using the no ipv6 unknown-unicast command. Typically, the catch-all entry in the LPM table is used for soft forwarding and for generating ICMP Destination Unreachable messages to the source. When the catch-all entry is in place, it does not matter whether a subnet is shorter or longer than /64: there is always an LPM hit, and the traffic is sent to the CPU.

Unknown-unicast L3 packets terminate on a CPU CoS queue that is also shared by other types of control-plane packets, such as ARP Requests, multicast traffic, and L3 packets with a broadcast MAC address. The catch-all route therefore poses a risk of overloading the CPU with unknown-unicast packets. The CLI knob that turns off the catch-all route is useful in networks where you do not want to generate Destination Unreachable messages and prefer to keep the CPU queues' bandwidth available for higher-priority control-plane traffic.
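The effect of the catch-all entry, and of removing it with the no ipv6 unknown-unicast command, can be sketched as a route lookup. This Python model is illustrative only; the route contents and action names are assumptions.

    import ipaddress

    # One known subnet plus an optional ::/0 catch-all. Not FTOS code.
    ROUTES = {ipaddress.ip_network("2001:db8::/64"): "port-1"}

    def lookup(dst, catch_all_enabled):
        addr = ipaddress.ip_address(dst)
        for net, next_hop in ROUTES.items():   # LPM, simplified to one route
            if addr in net:
                return next_hop
        if catch_all_enabled:
            # Catch-all hit: soft forwarding / ICMP Destination Unreachable,
            # so the packet is punted to the shared CPU CoS queue.
            return "punt-to-CPU"
        return "drop"                          # no ipv6 unknown-unicast

    print(lookup("2001:db8:ffff::1", True))    # punt-to-CPU
    print(lookup("2001:db8:ffff::1", False))   # drop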

