Dell PowerFlex 4.5.x Install and Upgrade Guide

Configure NVMe initiators on hosts for Linux-based systems

For systems using NVMe over TCP technology, configure the NVMe initiator on all hosts that access PowerFlex storage. This procedure is provided as an example only. Consult your operating system's documentation for detailed instructions on configuring NVMe initiators for NVMe over TCP topologies.

Prerequisites

Ensure that you have the login credentials for each host.

About this task

When NVMe over TCP technology is used, PowerFlex software is not installed on the compute nodes. Instead, configure the NVMe initiators in the operating system on the PowerFlex compute-only node or PowerFlex hyperconverged node, and configure the NVMe targets in PowerFlex.

NOTE: Ensure that the hosts are running Linux operating system versions that are supported by this version of PowerFlex. For more information, see the system requirements in the Dell PowerFlex 4.5.x Technical Overview.

The following example shows one way of configuring the NVMe initiator. See your operating system's documentation for details and other configuration options.

Steps

  1. Ensure that the NVMe modules are loaded on the NVMe initiator:
    1. Log in to the host.
    2. In the command line, run the command:
      lsmod | grep nvme
    3. Ensure that nvme_tcp and nvme_fabrics modules are loaded.
      If they are loaded, go to step 2. If they are not loaded, continue to the next substep.
    4. Perform one of the following procedures:
      • For manual loading of modules (perform after every reboot), run:
        modprobe nvme_tcp
        modprobe nvme_fabrics
      • For automatic loading of NVMe modules, run:
        echo nvme_tcp >> /etc/modules-load.d/nvme_tcp.conf
        echo nvme_fabrics >> /etc/modules-load.d/nvme_tcp.conf
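      NOTE: Entries in /etc/modules-load.d/ are loaded at the next boot by systemd-modules-load, so after adding them, run the modprobe commands above (or reboot) to load the modules in the current session. As an illustrative check (module sizes and use counts vary by kernel), lsmod | grep nvme should then show lines similar to:
        nvme_tcp               49152  0
        nvme_fabrics           32768  1 nvme_tcp
        nvme_core             196608  2 nvme_tcp,nvme_fabrics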
  2. Ensure that the NVMe CLI is installed on the initiator:
    1. Run the following command:
      nvme
      If the CLI responds, go to step 3. If it does not, go to the next substep.
    2. Run the command:
      dnf install nvme-cli
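      NOTE: The dnf command applies to RHEL-family distributions. On other distributions, use the equivalent package manager; for example, on SUSE Linux Enterprise Server:
        zypper install nvme-cli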
  3. Ensure that a unique host NQN and unique host ID are configured in the initiator.
    1. To verify that a host NQN is present, run the command:
      nvme show-hostnqn
      Example of a host NQN:
      nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0051-5610-8046-b5c04f484b32
    2. If no host NQN exists, generate one by running the command:
      nvme gen-hostnqn > /etc/nvme/hostnqn
      NOTE: Make a note of each PowerFlex node's host NQN. You need this information when configuring the NVMe targets with PowerFlex Manager.
    3. To verify that a host ID is present, run the command:
      cat /etc/nvme/hostid
      Example of a host ID:
      4c4c4544-0051-5610-8046-b5c04f484b32
      The last 36 characters of a name generated by nvme gen-hostnqn can be used. The format must be exact; otherwise, the connection fails.
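      If /etc/nvme/hostid does not exist, one way to create it (a sketch, assuming the nqn.2014-08.org.nvmexpress:uuid:<UUID> host NQN format shown above) is to reuse the UUID portion of the existing host NQN:
        # Copy the 36-character UUID after the last colon of the host NQN
        awk -F: '{print $NF}' /etc/nvme/hostnqn > /etc/nvme/hostid
        cat /etc/nvme/hostid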
  4. Perform the following actions for each required protection domain:
    1. Add an NVMe host to PowerFlex.
    2. Create a volume.
    3. Map the volume.
      For more information, see the Dell PowerFlex 4.5.x Administration Guide and Dell PowerFlex 4.5.x CLI Reference Guide.
  5. Discover and connect to the target PowerFlex system. For more information, see Discover and connect to the target PowerFlex system.
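    As a minimal sketch (the target address comes from your PowerFlex configuration; 4420 is shown as the standard NVMe/TCP port), discovery and connection typically look like:
      nvme discover -t tcp -a <storage-network-IP> -s 4420
      nvme connect-all -t tcp -a <storage-network-IP> -s 4420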
  6. Configure the type of multipath used. For Native NVMe Multipath:
    1. Check whether native NVMe multipath is enabled, using the command:
      cat /sys/module/nvme_core/parameters/multipath
      If Y is returned, native NVMe multipath is already enabled.
    2. If it is not enabled, run:
      grubby --update-kernel=ALL --args="nvme_core.multipath=Y"
      grub2-mkconfig -o /boot/grub2/grub.cfg
      reboot
    3. To enable an active/active (round-robin) I/O policy that remains persistent across reboots, run:
      echo 'ACTION=="add", SUBSYSTEM=="nvme-subsystem", ATTR{iopolicy}="round-robin"' > /lib/udev/rules.d/71-nvme-iopolicy.rules
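    To verify the resulting I/O policy after the rule takes effect (an optional check; the subsystem index varies per host), run:
      cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy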
  7. Configure a persistent initiator, which automatically re-establishes NVMe connections after reboots.
    1. Create a service that runs nvme connect-all on reboots:
      vi /etc/systemd/system/nvme_fabrics_persistent.service
      [Unit]
      Description=NVMf auto discovery service
      Requires=network.target
      After=systemd-modules-load.service network.target
      [Service]
      Type=oneshot
      ExecStart=/usr/sbin/nvme connect-all
      StandardOutput=journal
      [Install]
      WantedBy=multi-user.target timers.target
    2. Start the new service, and enable it:
      systemctl start nvme_fabrics_persistent.service
      systemctl enable nvme_fabrics_persistent.service
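      To confirm that the service is active and connections are formed (an optional check), run:
        systemctl status nvme_fabrics_persistent.service
        nvme list-subsys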
  8. Run the following command to add a udev rule that sets the connection recovery delay to 15 seconds:
    echo ACTION!=\"remove\", KERNELS==\"ctl\", SUBSYSTEMS==\"nvme-fabrics\", ATTR{transport}==\"tcp\", ATTR{model}==\"powerflex\", ATTR{recovery_delay}=\"15\" >> /etc/udev/rules.d/71-nvme-recovery-delay.rules
    NOTE: This step is required only for SUSE Linux Enterprise Server (SLES) 15.5.
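    To apply the new rule without rebooting (a general udev step, not specific to PowerFlex), reload the udev rules; the recovery_delay attribute is then set on matching controllers as they are added:
      udevadm control --reload-rules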
