OpenShift: How to Increase Network Ring Buffer Size on Worker Nodes in a Hosted Control Plane Cluster

Summary: This article outlines the procedure to increase the RX ring buffers on the interfaces of each worker node in an OpenShift Container Platform (OCP) Hosted Control Plane (HCP) cluster.

This article is not tied to any specific product. Not all product versions are identified in this article.

Instructions

Preparation

In the examples below, we have identified two interfaces that require modification of the RX ring buffers:

eno12399np0
ens1f1np1

We have verified the maximum supported ring buffer values on the NICs by using the ethtool -g command. This can be done from a debug pod on the node or extracted from an sosreport.

# oc debug node/samplenode
# chroot /host
# ethtool -g eno12399np0
Ring parameters for eno12399np0:
Pre-set maximums:
RX:             8192
RX Mini:        n/a
RX Jumbo:       n/a
TX:             8192
Current hardware settings:
RX:             4096
RX Mini:        n/a
RX Jumbo:       n/a
TX:             1024
RX Buf Len:     n/a
CQE Size:       n/a
TX Push:        off
TCP data split: off

For this example, we have decided to set the RX ring buffers to 4096.
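
If the same NICs are present on all worker nodes, the pre-set maximums can be checked across every worker in one pass. The loop below is a sketch that assumes the interface names are identical on each node; repeat it for ens1f1np1:

for i in $(oc get nodes -l node-role.kubernetes.io/worker= --no-headers | awk '{print $1}'); do
  echo "== $i ==";
  oc debug node/$i -- chroot /host ethtool -g eno12399np0 | grep -A4 'Pre-set maximums';
done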

We will perform the following steps to complete the operation:

  1. Create Butane files.
  2. Convert the Butane files to MachineConfig YAML.
  3. Embed the generated YAML into ConfigMap objects.
  4. Create the ConfigMaps on the cluster.
  5. Edit the NodePool object to reference the ConfigMaps.
  6. Wait for the configuration update to complete.
  7. Validate that the update completed and the changes were successfully applied.

 

Prepare Butane files.

Create two Butane files, one for each interface. The variant and version fields must correspond to the OpenShift Butane variant and the cluster version (4.18 in this example).

# eno12399np0.bu
variant: openshift
version: 4.18.0
metadata:
  name: 99-worker-ethtool-eno12399np0-buffer
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
  - path: /etc/systemd/system/set-ethtool-eno12399np0-buffer.service
    mode: 0644
    overwrite: true
    contents:
      inline: |
        [Unit]
        Description=Set ethtool RX buffer size for network interface
        Requires=NetworkManager.service
        After=NetworkManager.service
        Before=ovs-configuration.service
        DefaultDependencies=no
        [Service]
        Type=oneshot
        ExecStart=/bin/bash -c "/sbin/ethtool -G eno12399np0 rx 4096 >> /var/log/user-data.log 2>&1"
        [Install]
        WantedBy=multi-user.target
systemd:
  units:
  - name: set-ethtool-eno12399np0-buffer.service
    enabled: true
# ens1f1np1.bu
variant: openshift
version: 4.18.0
metadata:
  name: 99-worker-ethtool-ens1f1np1-buffer
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
  - path: /etc/systemd/system/set-ethtool-ens1f1np1-buffer.service
    mode: 0644
    overwrite: true
    contents:
      inline: |
        [Unit]
        Description=Set ethtool RX buffer size for network interface
        Requires=NetworkManager.service
        After=NetworkManager.service
        Before=ovs-configuration.service
        DefaultDependencies=no
        [Service]
        Type=oneshot
        ExecStart=/bin/bash -c "/sbin/ethtool -G ens1f1np1 rx 4096 >> /var/log/user-data.log 2>&1"
        [Install]
        WantedBy=multi-user.target
systemd:
  units:
  - name: set-ethtool-ens1f1np1-buffer.service
    enabled: true

Convert Butane files to MachineConfig YAML

Download the Butane executable and run the conversion.

# Download butane (example)
$ curl -L -o butane https://mirror.openshift.com/pub/openshift-v4/clients/butane/butane-linux-amd64
$ chmod +x butane

# Convert
$ ./butane eno12399np0.bu -o eno12399np0.yaml
$ ./butane ens1f1np1.bu -o ens1f1np1.yaml
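
Butane can optionally be run with the --strict flag so that unrecognized fields in the .bu files produce an error instead of being silently ignored. The generated files are MachineConfig manifests; the base64 payload on the source: line is the value referred to as [BASE64_CONTENT] in the next step:

$ ./butane --strict eno12399np0.bu -o eno12399np0.yaml
$ grep 'source: data' eno12399np0.yaml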

Create ConfigMap objects.

Wrap each generated MachineConfig YAML in a ConfigMap named mc-worker-ethtool-<interface>-buffer in the clusters namespace.

# mc-worker-ethtool-eno12399np0-buffer.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mc-worker-ethtool-eno12399np0-buffer
  namespace: clusters
data:
  config: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 99-worker-ethtool-eno12399np0-buffer
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              compression: gzip
              source: data:;base64,[BASE64_CONTENT]
            mode: 420
            overwrite: true
            path: /etc/systemd/system/set-ethtool-eno12399np0-buffer.service
        systemd:
          units:
          - name: set-ethtool-eno12399np0-buffer.service
            enabled: true
# mc-worker-ethtool-ens1f1np1-buffer.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mc-worker-ethtool-ens1f1np1-buffer
  namespace: clusters
data:
  config: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 99-worker-ethtool-ens1f1np1-buffer
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              compression: gzip
              source: data:;base64,[BASE64_CONTENT]
            mode: 420
            overwrite: true
            path: /etc/systemd/system/set-ethtool-ens1f1np1-buffer.service
        systemd:
          units:
          - name: set-ethtool-ens1f1np1-buffer.service
            enabled: true
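
As an alternative to pasting the MachineConfig into the ConfigMap by hand, an equivalent ConfigMap can be generated directly from the Butane output. This is a sketch that assumes the eno12399np0.yaml and ens1f1np1.yaml files produced in the previous step:

$ oc create configmap mc-worker-ethtool-eno12399np0-buffer -n clusters \
    --from-file=config=eno12399np0.yaml --dry-run=client -o yaml > mc-worker-ethtool-eno12399np0-buffer.yaml
$ oc create configmap mc-worker-ethtool-ens1f1np1-buffer -n clusters \
    --from-file=config=ens1f1np1.yaml --dry-run=client -o yaml > mc-worker-ethtool-ens1f1np1-buffer.yaml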

Apply ConfigMaps to the cluster.

$ oc apply -f mc-worker-ethtool-eno12399np0-buffer.yaml
$ oc apply -f mc-worker-ethtool-ens1f1np1-buffer.yaml
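
Confirm that both ConfigMaps exist in the clusters namespace:

$ oc get configmap -n clusters | grep mc-worker-ethtool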

Attach ConfigMaps to the node pool.

Edit the node pool and add the ConfigMap names under spec.config.

$ oc edit nodepool [NODEPOOL_NAME] -n clusters

Insert the following lines in the spec section:

spec:
  config:
  - name: mc-worker-ethtool-eno12399np0-buffer
  - name: mc-worker-ethtool-ens1f1np1-buffer
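
The same change can be applied non-interactively with oc patch. Note that a merge patch replaces the entire spec.config list, so include any entries that are already present; the node pool name below is a placeholder:

$ oc patch nodepool [NODEPOOL_NAME] -n clusters --type merge \
    -p '{"spec":{"config":[{"name":"mc-worker-ethtool-eno12399np0-buffer"},{"name":"mc-worker-ethtool-ens1f1np1-buffer"}]}}'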

Verify the node pool update.

$ oc get nodepool -n clusters

While the node pool rolls out the new configuration, the UPDATINGCONFIG column shows True; wait for it to clear, and confirm that the VERSION column still matches the cluster version.
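
To follow the rollout interactively until the column clears, the standard watch flag can be used:

$ oc get nodepool -n clusters -w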

Check service status on each worker node.

for i in $(oc get nodes -l node-role.kubernetes.io/worker= --no-headers | awk '{print $1}'); do
  oc debug node/$i -- chroot /host systemctl status set-ethtool-eno12399np0-buffer.service;
done
for i in $(oc get nodes -l node-role.kubernetes.io/worker= --no-headers | awk '{print $1}'); do
  oc debug node/$i -- chroot /host systemctl status set-ethtool-ens1f1np1-buffer.service;
done
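
Because each unit appends its output to /var/log/user-data.log (see the ExecStart line in the Butane files above), any ethtool errors can also be reviewed there. The file is only created once a service has written to it:

for i in $(oc get nodes -l node-role.kubernetes.io/worker= --no-headers | awk '{print $1}'); do
  echo "== $i ==";
  oc debug node/$i -- chroot /host cat /var/log/user-data.log;
done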

Validate ring buffer settings.

for i in $(oc get nodes -l node-role.kubernetes.io/worker= --no-headers | awk '{print $1}'); do
  oc debug node/$i -- chroot /host ethtool -g eno12399np0;
done
for i in $(oc get nodes -l node-role.kubernetes.io/worker= --no-headers | awk '{print $1}'); do
  oc debug node/$i -- chroot /host ethtool -g ens1f1np1;
done
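
To narrow the output to the active RX value on each node, the same loop can filter on the current hardware settings (shown here for eno12399np0; repeat for ens1f1np1). Each node should report RX: 4096:

for i in $(oc get nodes -l node-role.kubernetes.io/worker= --no-headers | awk '{print $1}'); do
  echo "== $i ==";
  oc debug node/$i -- chroot /host ethtool -g eno12399np0 | grep -A1 'Current hardware settings';
done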

Affected Products

APEX, APEX Cloud Platform for Red Hat OpenShift
Article Properties
Article Number: 000428878
Article Type: How To
Last Modified: 17 February 2026
Version:  1