In this detailed tutorial, we guide you through configuring templates to deploy clusters in PowerFlex Manager. Learn how to customize sample templates for storage and compute clusters, utilize NVMe over TCP capabilities, assign networks, and configure nodes. This video is perfect for IT professionals looking to streamline their cluster deployment process using PowerFlex Manager.
In the previous videos, we walked through deploying PowerFlex Manager and performing its initial configuration, with our networks configured and our node and switch resources discovered. We are now ready to create and publish one of the declarative templates that govern the deployment of clusters in PowerFlex Manager. The clusters we deploy are called Resource Groups. PowerFlex Manager includes several sample templates, each designed to create a particular type of cluster.
Begin with one of the sample templates and customize it for your own environment. We want to create a storage cluster, and we want to use the new NVMe over TCP capabilities. This template installs the new storage data target (SDT) service, which enables PowerFlex volumes to be consumed over the NVMe over TCP protocol. After selecting the template, we clone it and begin customizing. Give it a name and select which compliance catalog will apply to the cluster.
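Because the SDT speaks standard NVMe over TCP, any Linux host with nvme-cli can discover and attach the volumes it exposes. A minimal sketch, assuming nvme-cli is installed; the target address is a placeholder, not a value from the video:

```shell
#!/bin/sh
# Sketch: attaching PowerFlex volumes over NVMe/TCP from a Linux host.
# TARGET_IP is a hypothetical storage data target (SDT) address.
TARGET_IP=192.168.151.10
if command -v nvme >/dev/null 2>&1; then
  # Discover the subsystems exposed by the SDT on the standard port 4420
  nvme discover -t tcp -a "$TARGET_IP" -s 4420
  # Connect to everything the discovery controller reported
  nvme connect-all -t tcp -a "$TARGET_IP" -s 4420
  # Attached volumes now appear as ordinary /dev/nvmeXnY block devices
  nvme list
else
  echo "nvme-cli not installed; skipping"
fi
```

The same workflow applies to any NVMe/TCP target; only the address and port would differ per data network.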
The required networks are already provided in the template. From among the networks already defined in PowerFlex Manager, assign the appropriate network type. Select the operating system credentials you want assigned to the nodes and the Linux OS image previously uploaded into PowerFlex Manager. In PowerFlex 4.0, the PowerFlex Gateway is a containerized service running in the management and orchestration (M&O) stack, so there is only one option to choose here. When we discovered our nodes, we added them all to the global pool, so leave this at the default.
Click Finish to complete the first step of cloning the template. In the next step, we specify several node-specific configurations. We wish to deploy a four-node cluster, so no changes are needed here. We'll let PowerFlex Manager auto-select the hostname. Provide the NTP server to ensure time synchronization. As an appliance deployment with fully managed networking, we'll use the default and preferred LACP switch port configuration. We'll leave the hardware and BIOS settings at the default values.
The network VLANs we specified earlier are assigned to the network interfaces, but changes can be made here if needed. Save the draft; it must then be published before it can be used to deploy a resource group. Once published, let's go ahead and deploy. Review the deployment settings. We will deploy right now, although we could choose to schedule this for later. Check the summary and set it going. Now we wait as the network switches are prepared, the nodes are configured with the operating system and IPs, and the PowerFlex software is installed. This may take a couple of hours, so we'll accelerate through the process.
When the deployment finishes, notice that the deployment state says incomplete. The service requires a volume to be considered complete in PowerFlex Manager; once we add one, the service will turn complete and then healthy. Next, we'll create a compute cluster. For this resource group, we'll create a pair of Linux compute nodes. Again we clone the template and set the networks and OS credentials. This compute cluster will access storage from the storage cluster we just created, and we will pull nodes from the same node pool as with the other resource group.
We'll let PowerFlex Manager assign hostnames based on a template, and configure NTP. Note the role this time is compute only. This tells PowerFlex Manager to configure only the storage data client (SDC) on these nodes. Leave the hardware and BIOS settings at their defaults and verify the networks assigned to the interfaces. In this case, we have one bonded interface that will carry application traffic, and a second bonded interface that will connect to the flex data networks configured for the storage cluster. This is important: the SDC must be able to see and communicate with all of the data networks that serve volume data.
Make sure that all of them are assigned here. Save, publish, and deploy. We'll tell PowerFlex Manager to deploy right now. Once again, the deployment process takes a couple of hours while PowerFlex Manager installs and configures the operating systems, sets up networking on the switches and the nodes, and then installs and configures the PowerFlex software, so we will speed up the video and arrive quickly at our destination. The SDC has been installed on the nodes, and they are registered in the system.
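The requirement that the SDC can reach every data network can be spot-checked from a compute node before mapping volumes. A minimal sketch; the gateway addresses below are placeholders, not values from the video:

```shell
#!/bin/sh
# Sketch: verify a compute node can reach each PowerFlex data network.
# DATA_NETS holds hypothetical per-network gateway addresses.
DATA_NETS="192.168.151.1 192.168.152.1"
ok=0; fail=0
for ip in $DATA_NETS; do
  if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
    ok=$((ok+1));   echo "reachable:     $ip"
  else
    fail=$((fail+1)); echo "NOT reachable: $ip"
  fi
done
echo "checked $((ok+fail)) data networks"
```

If any data network reports unreachable, revisit the interface-to-network assignments in the template before deploying volumes.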
In PowerFlex 4.0, we have a new category, hosts, which includes both SDC clients and the new NVMe over TCP initiators. While we were waiting for the compute cluster to deploy, we created a couple of volumes in the system. Let's go ahead and map those to our SDCs. If we SSH into the compute nodes, we see the volumes appear as disks available for use by the operating system.
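On the compute nodes, the SDC surfaces each mapped volume as a `/dev/sciniX` block device. A quick check over SSH might look like the following sketch:

```shell
#!/bin/sh
# Sketch: confirm mapped PowerFlex volumes are visible on a compute node.
# The SDC exposes each mapped volume as a /dev/sciniX block device.
found=0
for dev in /dev/scini*; do
  [ -b "$dev" ] || continue        # skip if the glob matched nothing
  found=$((found+1))
  echo "PowerFlex volume device: $dev"
done
echo "volumes visible: $found"
```

From here the devices can be partitioned, formatted, and mounted like any other block device.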
Returning to the resource group overview, we also see that the two volumes now appear in the resource group's resource map. At this point, we have a fully configured PowerFlex system ready to accept your block workloads.