Deploy Dell PowerFlex v3.6.x

Deploy PowerFlex with RDM/VMDK device management

Use the VMware deployment wizard to deploy PowerFlex when devices are configured as RDM or VMDK. Note that DirectPath architecture is the recommended best practice for RAID and SAS Controller managed drives.

Prerequisites

  • Ensure that all ESXi servers to be added to the system were pre-configured with the pre-deployment steps.
  • Configure advanced installation options (optional):
    • Enable creation of VMDK.
      NOTE: Use of VMDK-based disks for storage devices is not recommended, and should be used only if no other option is possible.
    • Force RDM on device with unknown RDM support.
    • Allow the taking over of devices that were used in other PowerFlex systems.
    • Allow the use of non-local datastores for the PowerFlex Gateway.
    • Increase the parallelism limit.

To access these settings, click Advanced settings on the PowerFlex screen.

NOTE: Only ESXi v6.x-based systems can be installed using the vSphere PowerFlex plug-in. ESXi v7 is not supported. Use manual deployment procedures for ESXi 7.

About this task

For two-layer systems where only the SDCs are deployed on ESXi servers, follow the deployment procedures for two-layer systems.

Steps

  1. From the Basic tasks section of the screen, click Deploy PowerFlex environment.

    The PowerFlex VMware deployment wizard begins. If you exited a previous deployment before it completed, you can resume from where you left off.

    NOTE: When you use the deployment wizard, it is assumed that you are using the provided PowerFlex OVA template to create the PowerFlex virtual machines.
  2. In the Select Installation type screen, start the deployment of a new system:
    1. Select Create new PowerFlex system.
    2. Review and approve the license terms.
    3. Click Next.
  3. In the Create new system screen, enter the following, then click Next:
    • System Name: Enter a unique name for this system.
    • Admin Password: Enter and confirm a password for the PowerFlex admin user. The password must meet the listed criteria.
  4. In the Add ESX Hosts to Cluster screen, select the ESXi hosts to add as part of the system:
    1. Select the vCenter on which to deploy the PowerFlex system.
      The vCenter information is populated in the lower part of the screen.
    2. Expand the vCenter, select the ESXi hosts to add to the PowerFlex system, then click Next.
      NOTE: To configure PowerFlex, you must select a minimum of three ESXi hosts. ESXi hosts that do not have the SDC installed are not available. Hosts that were configured for DirectPath before deployment, but for which DirectPath was not selected in the previous step, are also unavailable.
    The Select management components screen appears.
  5. Configure the management components:
    1. Select an option to deploy either a 3-node or 5-node cluster.
      The next fields on this screen will change, depending on your choice.
    2. Select an ESXi server to serve in each of the MDM cluster roles.
      You can give a name to the MDM servers, such as Manager1, and so on.
    3. Select ESXi servers to serve as Standby Manager and tiebreaker roles (optional).
    4. Click Next.
    The Configure Performance, Sizing, and Syslog screen appears.
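
    The 3-node or 5-node choice in the previous step determines how many servers must be assigned to each MDM role. The layout can be sketched as follows (illustrative Python, not part of the product; the function name is hypothetical, and the role counts follow the standard PowerFlex MDM cluster definitions):

    ```python
    # Sketch: required MDM cluster roles per cluster mode. Counts follow the
    # standard PowerFlex cluster definitions; the function name is hypothetical.
    def mdm_cluster_roles(mode: str) -> dict:
        """Return the number of servers needed for each MDM role."""
        if mode == "3-node":
            return {"master_mdm": 1, "slave_mdm": 1, "tiebreaker": 1}
        if mode == "5-node":
            return {"master_mdm": 1, "slave_mdm": 2, "tiebreaker": 2}
        raise ValueError(f"unknown cluster mode: {mode}")
    ```

    Standby Managers and standby tiebreakers (step 5.3) are in addition to these required roles.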
  6. Configure the following settings (optional), then click Next:
    • To configure components with compact performance profiles, clear the high-performance option.
    • To configure the allocation of SVM RAM, select from the following:
      • To use default RAM allocation, select Standard size.
      • To use custom settings, select Custom size, and enter the maximum capacity and maximum number of volumes.
    • To configure syslog reporting, select Configure syslog, and enter the syslog server, port (default: 1468), and facility (default: 0).
    • To configure DNS servers, enter their details.
    The Configure Protection Domains screen appears.

    You can create (or remove) Protection Domains (PD). You must create at least one PD.

  7. Create a Protection Domain:
    1. Enter the following information:
      • Protection Domain name: It is recommended to use a meaningful name.
      • Read RAM Cache size per SDS: Minimum 128 MB (You can increase this for your environment needs.)
    2. Click Add.
      The added PDs appear in the lower section of the screen, together with the existing PDs. To remove a newly created PD, select it and click Remove.
    3. To create an additional PD, repeat this step.
    4. Click Next.
    The Configure Acceleration Pool screen appears. In this screen, you can create an Acceleration Pool, which will be used to accelerate storage.
  8. Create an Acceleration Pool:
    1. Enter the Acceleration Pool name.
    2. Select the Protection Domain to which the Acceleration Pool will belong.
    3. Click Add, and then Next.

    The Create a new Storage Pool screen appears.

    In the Configure Storage Pools screen, you can create (or remove) Storage Pools (SP). You must create at least one SP.

  9. Create a Storage Pool:
    1. Enter the Storage Pool name. It is recommended to use a meaningful name.
    2. Select the PD to which to add the SP.
    3. Select the expected Device Media Type for the SP (HDD or SSD).
    4. Select the External Acceleration type (if used):
      • none—No devices are accelerated by a non-PowerFlex read or write cache
      • read—All devices are accelerated by a non-PowerFlex read cache
      • write—All devices are accelerated by a non-PowerFlex write cache
      • read_and_write—All devices are accelerated by both non-PowerFlex read and write cache

      This input is required in order to prevent the generation of false alerts for media type mismatches. For example, if an HDD device is added which the SDS perceives as being too fast to fit the HDD criteria, alerts might be generated. External acceleration/caching is explained in the Getting to Know Dell PowerFlex Guide.

    5. To enable zero padding, select Enable zero padding. Zero padding must be enabled for using the background scanner in data comparison mode.
    6. To enable Read Flash cache, select the Enable RFcache check box.
    7. Click Add.

      The added SPs appear in the lower section of the screen, together with the existing SPs. To remove a newly created SP, select it and click Remove.
    8. To create additional SPs, repeat this step.
    9. Click Next.
    The Create Fault Sets screen appears. You can use this screen to create Fault Sets (optional).
    NOTE: When defining Fault Sets, you must follow the Fault Set guidelines described in the Getting to Know PowerFlex guide. Failure to do so may prevent creation of volumes.
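  The External Acceleration options in step 9 can be summarized as a pair of flags that tell the SDS what non-PowerFlex caching to expect, which in turn is why the setting suppresses false media-type mismatch alerts. A minimal sketch (illustrative Python; the names are hypothetical, only the option meanings come from the list above):

  ```python
  # Sketch: mapping each External Acceleration option to the non-PowerFlex
  # (read, write) caching the SDS should expect. Names are hypothetical.
  EXTERNAL_ACCELERATION = {
      "none": (False, False),           # no external read or write cache
      "read": (True, False),            # external read cache on all devices
      "write": (False, True),           # external write cache on all devices
      "read_and_write": (True, True),   # both external caches present
  }

  def expects_media_speed_anomaly(option: str) -> bool:
      """With any external cache present, a device may test faster than its
      declared media type, so media-type mismatch alerts would be false."""
      read_cache, write_cache = EXTERNAL_ACCELERATION[option]
      return read_cache or write_cache
  ```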
  10. Create a Fault Set (optional):
    1. Enter the Fault Set name. It is recommended to use meaningful names.
    2. Select the PD to which to add the Fault Set.
    3. Click Add.
      Added Fault Sets appear in the lower section of the screen, inside the folder of the parent PD. You can remove a newly created Fault Set by selecting it and clicking Remove.
    4. Repeat these steps to create additional Fault Sets (minimum of three), then click Next.
    The Add SDSs screen appears.
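    The "minimum of three" requirement in step 10 can be sketched as a simple validation (illustrative Python; the helper name and data shape are hypothetical, and the rule itself comes from the Fault Set guidelines in the Getting to Know PowerFlex guide):

    ```python
    # Sketch: each Protection Domain that uses Fault Sets must define at
    # least three of them. Helper name and input shape are hypothetical.
    def fault_sets_valid(fault_sets_by_pd: dict) -> bool:
        """Accept PDs with no Fault Sets, or with three or more."""
        return all(len(fs) == 0 or len(fs) >= 3
                   for fs in fault_sets_by_pd.values())
    ```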
  11. Configure the following for every ESXi host or SVM:
    1. Select the corresponding SDS check box to assign an SDS role.
      NOTE: To make the same selections for every ESXi in a cluster, you can make your selections per cluster or datacenter.
    2. If the node is an SDS, assign a Protection Domain.
    3. You can select a Fault Set (optional).
    4. Click Next.

    The Add devices to SDSs screen appears, showing the clusters.

    This screen has the following tabs:

    • Information - shows the selected ESXi and cluster.
    • Assign devices - select hosts and assign devices.
    • Replicate selection - replicate device selections from one host to others.
      NOTE: This can be very useful if your ESXis have identical attached devices. For example, if you select an SSD device for the source ESXi, and then replicate this selection to the targets, the deployment wizard can automatically select all other SSD devices on the target SDSs.

      Device matching is performed based on the device runtime name.

      To replicate device selections, all of the following conditions must be met:

      • The number of devices on each ESXi must be identical.
      • Source and target devices must be identical in the following ways: a) both are SSD or non-SSD, b) both have datastores on them or do not, c) both are roughly the same size (within 20%), and d) both are connected via a RAID controller or directly attached.
      • At least one of the following conditions must be met: a) both SDSs are in the same Protection Domain, b) both SDSs are in different Protection Domains, but with the same list of Storage Pools, or c) the target SDS is in a Protection Domain with only one Storage Pool.
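
      The replicate-selection conditions above can be sketched as a single eligibility check (illustrative Python; all field and function names are hypothetical, only the rules come from the list above; note that the real wizard pairs devices by runtime name, which this sketch approximates with positional pairing):

      ```python
      # Sketch of the replicate-selection eligibility rules. The real wizard
      # matches devices by runtime name; here pairing is positional for brevity.
      def sizes_close(a: float, b: float, tolerance: float = 0.20) -> bool:
          """Roughly the same size: within 20% of each other."""
          return abs(a - b) <= tolerance * max(a, b)

      def devices_match(src: dict, dst: dict) -> bool:
          return (src["is_ssd"] == dst["is_ssd"]
                  and src["has_datastore"] == dst["has_datastore"]
                  and sizes_close(src["size_gb"], dst["size_gb"])
                  and src["via_raid"] == dst["via_raid"])

      def can_replicate(src_devs, dst_devs, src_sds, dst_sds) -> bool:
          if len(src_devs) != len(dst_devs):
              return False  # device counts must be identical
          if not all(devices_match(s, d) for s, d in zip(src_devs, dst_devs)):
              return False
          # At least one Protection Domain / Storage Pool condition must hold:
          return (src_sds["pd"] == dst_sds["pd"]
                  or sorted(src_sds["pools"]) == sorted(dst_sds["pools"])
                  or len(dst_sds["pools"]) == 1)
      ```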
  12. On the Information tab, select an ESXi host from a cluster, then click Assign devices.
    The Assign devices tab appears.

    This screen shows the devices whose free space can be added to the selected ESXi host/SDS. You should balance the capacity across the selected SDSs.

  13. To assign a device’s space to an SDS, perform the following:
    1. In the Use for drop-down, select Storage or Acceleration.
    2. In the Pool Name drop-down, select the Storage Pool (SP) to which to assign the device.
      NOTE: If the selected SP has RFcache enabled, you must select at least one RFcache device for that SDS node.
      NOTE: You can select all available devices by clicking Select all devices, and selecting their use and Storage Pool.
      NOTE: If you selected to create VMDK (before the deployment), the following options appear:
      • Create VMDK. Select this for all relevant devices.
      • Select all available devices. Click this to select all devices with a VMFS, and with unused capacity that can be added to the PowerFlex system.
    3. Click Assign.
  14. To replicate selections to other SDSs, perform the following:
    1. Select the Replicate selection tab.
    2. Select the ESXi whose device selection you wish to replicate.
      This is the source ESXi.
    3. Select the target ESXis to which to replicate the selection of the source ESXi.
    4. Click Copy configuration.
      The results are displayed in the right pane of the screen.
  15. When you have selected devices for all SDSs, click Next.
    NOTE: You must select at least one device for each SDS.
    The Add SDCs screen appears.
  16. Configure the SDCs:
    1. For each ESXi to be added as an SDC:
      1. Select the SDC check box.
      2. Enter the ESXi root password.
      NOTE: To show the entered ESXi passwords, select Show passwords.
    2. Choose whether to enable or disable LUN number comparison for ESXi hosts.

      In general, in environments where the SDC is installed on ESXi and also on physical hosts, you should set this to Disable.

      NOTE: Before disabling LUN number comparison, consult your environment administrator.
    3. Click Next.
    The Configure Upgrade Components dialog box appears.
  17. Configure the PowerFlex Gateway and LIA:
    1. Select an ESXi to host the PowerFlex Gateway Storage virtual machine (SVM).
      A unique SVM will be created for the PowerFlex Gateway.

      If the previously selected ESXi servers do not have sufficient free space (on any datastore) for the PowerFlex SVM template, an SVM, and the PowerFlex Gateway SVM, you will not be able to select an ESXi in this step; the selection is made automatically.

    2. Enter and confirm a password for the PowerFlex Gateway administrative user.
    3. Enter and confirm a password for the LIA.
      The password must be the same across all SVMs in the system.
    4. Click Next.
      NOTE: You can only move forward if the passwords meet the listed criteria, and if the confirmation passwords match the entered passwords.
    The Select OVA Template screen appears:
  18. Configure templates:
    1. Select the template to use to create the PowerFlex virtual machines (SVM).
      The default is PowerFlex SVM Template. If you uploaded a template to multiple datastores, you can select them all, for faster deployment.

      If the PowerFlex Gateway selection was performed automatically in the previous step (indicating insufficient space), you must choose at least two templates in this step, one of which will be converted to the PowerFlex Gateway SVM.

      After selecting the templates, the deployment wizard will automatically select one of the ESXis with the templates to host the PowerFlex Gateway, and during deployment will convert the template to a VM for the PowerFlex Gateway (instead of cloning the template).

      NOTE: If you select a custom template, ensure that it is compatible with the PowerFlex plug-in and the PowerFlex MDM.
    2. Enter and confirm a new root password that will be used for all SVMs to be created.
    3. Click Next.

    The Configure Networks screen appears:

  19. Select the network configuration. You can select an existing (simple or distributed) network, or select Create a new network.

    The Create a new network command is only relevant for a regular vSwitch, and not for a distributed vSwitch.

    You can use a single network for management and data transfer, or separate networks. Separate networks are recommended for security and increased efficiency. You can select one data network, or two.

    The management network, used to connect and manage the SVMs, is normally connected to the client management network, typically a 1 Gbps network.

    The data network is internal, enabling communication between the PowerFlex components, and is recommended to be at least a 10 Gbps network.

    For high availability and performance, it is recommended to have two data networks.

    NOTE: The selected networks must have communication with all of the system nodes. Although the wizard verifies that the network names match, this does not guarantee communication, because the VLAN IDs may have been manually altered.
    1. To use one network, select a protocol (IPv4 or IPv6), and a management network, then proceed to the next step, configuring the SVMs.
      For best results, it is highly recommended to use the PowerFlex plug-in to create the data networks, as opposed to creating them manually.
    2. To use separate networks, select a protocol (IPv4 or IPv6) for the management network label, and one or two data network labels. If the data network already exists (such as a customer pre-configured distributed switch or a simple vswitch), select it from the drop-down box. Otherwise, configure the data network by clicking Create new network.

      The Create New Data Network screen appears.

    3. Configure the networks:
      NOTE: You can click to auto-fill the values for Data NIC and VMkernel IP.
      • Network name: The name of the VMware network

      • VMkernel name: The name of the VMkernel (used to support multipathing)

      • VLAN ID: the network ID

      • Network type: IPv4 or IPv6

      • For each ESXi, select a Data NIC, a VMkernel IP, and a VMkernel Subnet Mask.

    4. Click OK.

      The data network is created.

      The wizard will automatically configure the following for the data network:

      • vSwitch
      • VMkernel Port
      • Virtual Machine Port Group
      • VMkernel Port Binding
    5. Click Next.
    The Configure SVM screen appears.
  20. Configure all the SVMs:
    NOTE: You can click to auto-fill a range of values for IP addresses, subnet mask and default gateway.
    1. Enter the IP address, subnet mask, and default gateway for the management network, then the data network.
    2. Enter the Cluster Virtual IP address for each network interface.
    3. You can select a datastore, or allow automatic selection.
    4. Configure the cluster's virtual IP addresses by entering the virtual IP address for each data network.
    5. Click Next.
    Icons indicate the role that the server plays in the PowerFlex system.
    The Review Summary screen appears.
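    The auto-fill option in step 20 assigns sequential addresses from a starting value. A minimal sketch of that behavior (illustrative Python; the starting address and count are example values, not product defaults):

    ```python
    import ipaddress

    # Sketch: auto-filling sequential SVM IP addresses from a starting
    # address, as the wizard's auto-fill does. Values are illustrative.
    def autofill_ips(start: str, count: int) -> list[str]:
        first = ipaddress.ip_address(start)
        return [str(first + i) for i in range(count)]
    ```

    For example, `autofill_ips("192.168.10.11", 3)` yields three consecutive addresses starting at `192.168.10.11`, which you would then review against your subnet mask and gateway before clicking Next.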
  21. Review the configuration.
    Click Finish to begin deployment or Back to make changes.
  22. Enter the vCenter user name and password, then click OK to begin the deployment.

    The Deployment Progress screen appears.

    During the deployment process you can view progress, pause the deployment, and view logs.

    To pause the deployment, click Pause. Steps that are already in progress will run to completion before the deployment pauses.

    After pausing, select one of the following options:

    • Continue deployment to continue.
    • Abort to abort the deployment process.
    • Cancel and Rollback entire deployment to roll back all deployment activities (rollback cannot be canceled once started).
    • Rollback failed tasks to roll back only the tasks that failed (rollback cannot be canceled once started).
  23. When the deployment is complete, click Finish.
    If a task failed, click Continue deployment to try again.

Next steps

  • After deployment is complete, set all SVMs to start automatically with the system. Do not set SVMs under the VMware resource-pool feature.
  • Perform the post-installation tasks described in this guide.
