Dell NativeEdge: How to resolve pull rate limits when trying to deploy Calico for NativeEdge Deployment

Summary: This article outlines how to authenticate with Docker Hub to avoid download rate limit issues.

This article is not tied to any specific product. Not all product versions are identified in this article.

Symptoms

From NativeEdge Orchestrator 2.1.0.0 and later, Calico is the recommended container network interface (CNI).
Instructions for installing Calico are contained in the NativeEdge Orchestrator Deployment Guide. However, during the installation process, Calico downloads images from Docker Hub. Docker Hub enforces a download rate limit, which can be reached during the installation of Calico. (Further details of the rate limit can be found on docker.com.)
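Docker Hub reports the pull rate limit on registry manifest responses through `ratelimit-limit` and `ratelimit-remaining` headers (documented on docker.com). As a minimal sketch, the headers below are sample values, not live output; the parsing logic shows how the `count;w=window` format can be read:

```python
# Sketch: parse Docker Hub rate-limit headers.
# Values look like "100;w=21600" -> 100 pulls per 21600-second (6 h) window.

def parse_ratelimit(value: str) -> tuple[int, int]:
    """Return (count, window_seconds) from a ratelimit header value."""
    count_part, _, window_part = value.partition(";")
    window = int(window_part.split("=", 1)[1]) if window_part else 0
    return int(count_part), window

# Sample headers, as a registry manifest response might return them
headers = {
    "ratelimit-limit": "100;w=21600",
    "ratelimit-remaining": "13;w=21600",
}

limit, window = parse_ratelimit(headers["ratelimit-limit"])
remaining, _ = parse_ratelimit(headers["ratelimit-remaining"])
print(f"{remaining}/{limit} pulls left in a {window // 3600} h window")
```

When `ratelimit-remaining` reaches 0, further anonymous pulls fail with the 429 errors shown in the Symptoms section.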

When the user lists the pods after installing, the calico pod is in an Init:ImagePullBackOff state, which indicates that there are issues downloading the container image.

There are two calico pods that can encounter the docker.io rate limits:
  • calico-kube-controllers
  • calico-node
The example below illustrates the issue for calico-node (similar error messages also appear for calico-kube-controllers if rate limits are encountered).
 
#kubectl get pods -A

NAMESPACE         NAME                                       READY   STATUS                  RESTARTS   AGE
kube-system       local-path-provisioner-957fdf8bc-cl2nl     0/1     Pending                 0          6m50s
kube-system       metrics-server-648b5df564-bncjh            0/1     Pending                 0          6m50s
kube-system       coredns-77ccd57875-cng6c                   0/1     Pending                 0          6m50s
kube-system       calico-kube-controllers-67c64d8b8f-p868c   0/1     Pending                 0          6m39s
kube-system       calico-node-6q82x                          0/1     Init:ImagePullBackOff   0          6m37s

This can be seen in more detail when describing the calico-node pod:
#kubectl describe pod calico-node-6q82x -n kube-system
Name:                 calico-node-xscmk
Namespace:            kube-system
..
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  2m4s                default-scheduler  Successfully assigned kube-system/calico-node-xscmk to sre08129
  Warning  Failed     81s (x2 over 2m2s)  kubelet            Failed to pull image "docker.io/calico/cni:v3.28.0": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.28.0": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:7a3a5cf6c79243ba2de9eef8cb20fac7c46ef75b858956b9884b0ce87b9a354d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    41s (x4 over 2m3s)  kubelet            Pulling image "docker.io/calico/cni:v3.28.0"
  Warning  Failed     40s (x2 over 106s)  kubelet            Failed to pull image "docker.io/calico/cni:v3.28.0": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.28.0": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:cef0c907b8f4cadc63701d371e6f24d325795bcf0be84d6a517e33000ff35f70: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     40s (x4 over 2m2s)  kubelet            Error: ErrImagePull
  Normal   BackOff    13s (x6 over 2m1s)  kubelet            Back-off pulling image "docker.io/calico/cni:v3.28.0"
  Warning  Failed     13s (x6 over 2m1s)  kubelet            Error: ImagePullBackOff

As the events above show, the pod fails to download the image because of:
"Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"

Cause

This issue occurs because of pull rate limits set by Docker Hub. See https://www.docker.com/increase-rate-limit for details.

Resolution

Authenticating with a Docker Hub account grants the user a higher individual pull rate limit. The user must update the Kubernetes cluster secrets to use the credentials.

There are multiple methods by which a registry secret can be created.

Method 1: Manually using password

  1. Create an account on docker.io 
  2. Create a secret in the kube-system namespace. This example creates a secret called regcred using the credentials used to register the account on docker.io:
    kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=dockeriousername --docker-password=dockeriopassword --docker-email=emailusedtoregisterondockerio --namespace="kube-system"
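Under the hood, `kubectl create secret docker-registry` stores a `.dockerconfigjson` payload whose `auth` field is the base64 encoding of `username:password`. A minimal sketch using the same placeholder credentials as the command above:

```python
import base64
import json

# Sketch: build the .dockerconfigjson payload that
# "kubectl create secret docker-registry" stores. Credentials are placeholders.
username, password, email = "dockeriousername", "dockeriopassword", "user@example.com"

auth = base64.b64encode(f"{username}:{password}".encode()).decode()
dockerconfigjson = json.dumps({
    "auths": {
        "https://index.docker.io/v1/": {
            "username": username,
            "password": password,
            "email": email,
            "auth": auth,
        }
    }
})
print(dockerconfigjson)
```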

Method 2: Using config.json authentication-token

  1. Log in to Docker Hub:
    docker login -u emailaddressforaccount/dockeruser
  2. This creates a config.json file. See the example output below (the auth value is the base64 encoding of username:password):
    {
            "auths": {
                    "https://index.docker.io/v1/": {
                            "auth": "<base64-encoded username:password>"
                    }
            }
    }
  3.  Create a secret in the kube-system namespace.
    sudo kubectl create secret generic regcred --from-file=.dockerconfigjson=/root/.docker/config.json --type=kubernetes.io/dockerconfigjson -n kube-system

Update the Calico YAML with the Docker credentials

  1. The user must edit the calico.yaml to use the secret (regcred) that has been created.
  2. The imagePullSecrets entry must be applied in the pod template spec of the relevant DaemonSet/Deployment (that is, calico-node and calico-kube-controllers).
    1. For reference, here is an example of the change at the calico-node DaemonSet level:
      kind: DaemonSet
      apiVersion: apps/v1
      metadata:
        name: calico-node
        namespace: kube-system
        labels:
          k8s-app: calico-node
      spec:
        selector:
          matchLabels:
            k8s-app: calico-node
          ...
        template:
          metadata:
            labels:
              k8s-app: calico-node
          spec:
            imagePullSecrets: 
            - name: regcred
            nodeSelector:
            ....
      
    2. For reference, here is an example of the change at the calico-kube-controllers Deployment level:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: calico-kube-controllers
        namespace: kube-system
        labels:
          k8s-app: calico-kube-controllers
      spec:
        # The controllers can only have a single active instance.
        replicas: 1
        selector:
          matchLabels:
            k8s-app: calico-kube-controllers
        strategy:
          type: Recreate
        template:
          metadata:
            name: calico-kube-controllers
            namespace: kube-system
            labels:
              k8s-app: calico-kube-controllers
          spec:
            imagePullSecrets:
            - name: regcred
            nodeSelector:
            ....
      
  3. Apply the calico.yaml after making the changes:
    kubectl apply -f calico.yaml
    poddisruptionbudget.policy/calico-kube-controllers created
    serviceaccount/calico-kube-controllers created
    serviceaccount/calico-node created
    serviceaccount/calico-cni-plugin created
    configmap/calico-config created
    customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
    clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrole.rbac.authorization.k8s.io/calico-node created
    clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
    clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrolebinding.rbac.authorization.k8s.io/calico-node created
    clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
    daemonset.apps/calico-node created
    deployment.apps/calico-kube-controllers created
  4. The user should see the Kubernetes cluster up and running successfully.
    kubectl get po -A
    NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
    kube-system   local-path-provisioner-957fdf8bc-x5bn6     1/1     Running   0          22h
    kube-system   coredns-77ccd57875-hf82q                   1/1     Running   0          22h
    kube-system   calico-kube-controllers-8498bff86b-tprzt   1/1     Running   0          9m18s
    kube-system   calico-node-pxwqm                          1/1     Running   0          9m18s
    kube-system   metrics-server-648b5df564-xdh4h            1/1     Running   0          22h

     

Affected Products

NativeEdge Solutions, NativeEdge
Article Properties
Article Number: 000225940
Article Type: Solution
Last Modified: 01 Oct 2024
Version:  4