PowerStore: Using VMFS on PowerStore X model appliance internal nodes
Summary: This knowledge base article explains how to use VMFS datastores on PowerStore X model appliance internal nodes. Preview Version – This KB relates to features planned for FH-Core.
Instructions
By default, AppsON virtual machines leverage PowerStore’s efficient vVol implementation, and vVols remain the recommended option because of their simplicity, design optimizations, and integration with the PowerStore UI. Starting with PowerStore version 2.0, however, PowerStore X model appliances also support VMFS datastores for storing virtual machines within AppsON: block volumes can be mapped to PowerStore’s internal ESXi hosts using the PowerStore REST API or CLI. When using VMFS in this way, consider the following for performance reasons:
- Always use more than one VMFS datastore, and distribute VMs across the VMFS datastores.
- The underlying block volumes that make up the VMFS datastores should be affined to opposite nodes in PowerStore.
Note: PowerStore’s vVol architecture is designed in such a way that the above two points do not apply, hence the recommendation to use vVols.
In addition, because DRS distributes the virtual machines across both nodes, some virtual machines will have an indirect I/O path to their datastore that goes through the ToR switch, while others have a direct I/O path from the VM to the storage. If needed, a VM/Host affinity rule with a “should run” policy can be created in vSphere, as described in the Virtualization Guide available from the PowerStore Info Hub.
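Such a rule is normally configured in the vSphere Client (Cluster > Configure > VM/Host Groups and VM/Host Rules). As a minimal command-line sketch only, assuming the open-source govc CLI is available and pointed at the vCenter Server that manages the PowerStore X cluster (govc is not part of PowerStore, and the cluster, group, VM, and host names below are placeholders), a non-mandatory “should run” rule could be created as follows:
***EXAMPLE***
# Group the virtual machines that should follow a given VMFS volume's preferred node
govc cluster.group.create -cluster <vsphere_cluster> -name vmfs1-vms -vm <vm_name_1> <vm_name_2>
# Group the internal ESXi host that matches that volume's node affinity
govc cluster.group.create -cluster <vsphere_cluster> -name node-a-host -host <internal_esxi_host>
# Create the VM/Host rule; omitting -mandatory makes it a soft ("should run") rule that DRS can override when needed
govc cluster.rule.create -cluster <vsphere_cluster> -name vmfs1-vms-on-node-a -enable -vm-host -vm-group vmfs1-vms -host-affine-group node-a-host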
High-Level Overview of Steps
- Identify the internal PowerStore X ESXi hosts. There are two per PowerStore X appliance.
- Create at least two volumes per appliance, to be formatted as VMFS.
- Map both volumes to each of the PowerStore X appliance’s two internal ESXi hosts to ensure HA.
- Distribute the node affinity evenly across the volumes used for VMFS and the internal ESXi hosts they are mapped to. For example, on a single-appliance PowerStore X system with two VMFS volumes, set the node affinity of VMFS volume 1 to Node A and the node affinity of VMFS volume 2 to Node B.
- Rescan the storage adapter in vSphere for the PowerStore X appliance’s ESXi hosts and create a VMFS Datastore for each volume presented, ensuring it is mapped to both hosts in the appliance.
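The rescan and datastore creation in the final step are typically performed in the vSphere Client (select each internal ESXi host, then Configure > Storage Adapters > Rescan Storage, followed by the New Datastore wizard). As a rough sketch only, and assuming shell access to the internal ESXi hosts is permitted in your environment, the rescan can also be triggered with esxcli:
***EXAMPLE***
# Rescan all storage adapters so the newly mapped PowerStore volumes are discovered
esxcli storage core adapter rescan --all
# Verify that the new devices are visible; their NAA identifiers match the wwn values of the mapped volumes
esxcli storage core device list
The VMFS datastores themselves can then be created on the discovered devices from the vSphere Client.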
Using PowerStore CLI (pstcli):
1. Identify the internal PowerStore X nodes by issuing the “host show” command. The internal hosts are displayed as “<NameOfCluster>-Appliance<#>-node-A” and “<NameOfCluster>-Appliance<#>-node-B”, with a description of “Internal host for the system” and an os_type value of ESXi.
***EXAMPLE***
pstcli -d <cluster_IP> -u admin -p <password> host show

  # | id                                   | name                        | description                   | os_type | host_group.name
----+--------------------------------------+-----------------------------+-------------------------------+---------+-----------------
  1 | e744d953-b5ba-4d20-88ac-51aba9098e30 | AB-H1234-appliance-1-node-B | Internal host for the system. | ESXi    |
  2 | eb81db9f-1410-480f-8199-40ab2fa8d41a | AB-H1234-appliance-1-node-A | Internal host for the system. | ESXi    |
2. Create two new volumes by issuing the “volume create” command, specifying a name and size.
***EXAMPLE***
pstcli -d <cluster_IP> -u admin -p <password> volume create -name VMFS1 -size 549755813888 -performance_policy_id default_medium -appliance_id A1
Created
  # | id
----+--------------------------------------
  1 | 6ff93940-6337-46dc-b68d-5fc99004dd71

pstcli -d <cluster_IP> -u admin -p <password> volume create -name VMFS2 -size 549755813888 -performance_policy_id default_medium -appliance_id A1
Created
  # | id
----+--------------------------------------
  1 | 4fd9a173-41d1-4dc3-806f-e9e5366715a4

pstcli -d <cluster_IP> -u admin -p <password> volume show
  # | id                                   | name  | type    | wwn                                  | size                   | protection_policy.name
----+--------------------------------------+-------+---------+--------------------------------------+------------------------+------------------------
  1 | 4fd9a173-41d1-4dc3-806f-e9e5366715a4 | VMFS2 | Primary | naa.68ccf09800e95b41cfb7beb83a82aec0 | 549755813888 (512.00G) |
  2 | 6ff93940-6337-46dc-b68d-5fc99004dd71 | VMFS1 | Primary | naa.68ccf09800d6c5db7018b8f3e71ecf28 | 549755813888 (512.00G) |
3. Map these new volumes to both internal ESXi hosts by issuing a “volume” command with “attach -host_id”.
***EXAMPLE***
pstcli -d <cluster_IP> -u admin -p <password> volume -id 4fd9a173-41d1-4dc3-806f-e9e5366715a4 attach -host_id e744d953-b5ba-4d20-88ac-51aba9098e30 -logical_unit_number 1
Success
pstcli -d <cluster_IP> -u admin -p <password> volume -id 4fd9a173-41d1-4dc3-806f-e9e5366715a4 attach -host_id eb81db9f-1410-480f-8199-40ab2fa8d41a -logical_unit_number 1
Success
pstcli -d <cluster_IP> -u admin -p <password> volume -id 6ff93940-6337-46dc-b68d-5fc99004dd71 attach -host_id e744d953-b5ba-4d20-88ac-51aba9098e30 -logical_unit_number 2
Success
pstcli -d <cluster_IP> -u admin -p <password> volume -id 6ff93940-6337-46dc-b68d-5fc99004dd71 attach -host_id eb81db9f-1410-480f-8199-40ab2fa8d41a -logical_unit_number 2
Success
4. Assign node affinity to each volume, with each volume affined to a different node in the appliance, by issuing a “volume” command with “set -node_affinity.”
***EXAMPLE***
pstcli -d <cluster_IP> -u admin -p <password> volume -id 4fd9a173-41d1-4dc3-806f-e9e5366715a4 set -node_affinity Preferred_Node_A
Success
pstcli -d <cluster_IP> -u admin -p <password> volume -id 6ff93940-6337-46dc-b68d-5fc99004dd71 set -node_affinity Preferred_Node_B
Success
Using REST API:
Go to https://<cluster_IP>/swaggerui for REST API definitions.
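Note: The POST and PATCH examples below include a DELL-EMC-TOKEN header. As a minimal sketch (verify the exact login workflow against the Swagger definitions for your PowerStore version), this token can be obtained by issuing a GET request on the login_session resource and reading the DELL-EMC-TOKEN response header:
***EXAMPLE***
# The DELL-EMC-TOKEN response header of this request contains the value to use as <token> in the examples below
curl -k -i -u admin:<password> -X GET "https://<cluster_ip>/api/rest/login_session" -H "accept: application/json"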
1. Identify the internal PowerStore X nodes by doing a GET command on the “host” object. The internal hosts will be displayed as “<NameOfCluster>-Appliance<#>-node-A” and “<NameOfCluster>-Appliance<#>-node-B.”
***EXAMPLE***
curl -k -i -u admin:<password> -X GET "https://<cluster_ip>/api/rest/host?select=name,id" -H "accept: application/json"
[{"name":"AB-H1234-appliance-1-node-A","id":"164fa5af-9e91-4e86-9c30-7ca0b2647549"},
{"name":"AB-H1234-appliance-1-node-B","id":"20207f44-f5b6-42a6-874a-b2e743f4bc5a"}]
2. Create two new volumes by doing a POST command on the “volume” object, specifying the name and size of the volume.
***EXAMPLE***
curl -k -u admin:<password> -X POST "https://<cluster_ip>/api/rest/volume" -H "accept: application/json" -H "Content-Type: application/json" -H "DELL-EMC-TOKEN: <token>" -d "{ \"name\": \"VMFS1\", \"size\": 549755813888, \"appliance_id\": \"A1\", \"performance_policy_id\": \"default_medium\"}" | json_reformat
curl -k -u admin:<password> -X POST "https://<cluster_ip>/api/rest/volume" -H "accept: application/json" -H "Content-Type: application/json" -H "DELL-EMC-TOKEN: <token>" -d "{ \"name\": \"VMFS2\", \"size\": 549755813888, \"appliance_id\": \"A1\", \"performance_policy_id\": \"default_medium\"}" | json_reformat
3. Map these two volumes to both internal ESXi hosts by issuing a POST command on each volume’s attach action, specifying the host IDs returned in step 1.
***EXAMPLE***
curl -k -u admin:<password> -X POST "https://<cluster_ip>/api/rest/volume/04f0499c-0f13-4f39-a455-846297358d01/attach" -H "accept: application/json" -H "Content-Type: application/json" -H "DELL-EMC-TOKEN: <token>" -d "{ \"host_id\": \"164fa5af-9e91-4e86-9c30-7ca0b2647549\", \"logical_unit_number\": 0}"
curl -k -u admin:<password> -X POST "https://<cluster_ip>/api/rest/volume/04f0499c-0f13-4f39-a455-846297358d01/attach" -H "accept: application/json" -H "Content-Type: application/json" -H "DELL-EMC-TOKEN: <token>" -d "{ \"host_id\": \"20207f44-f5b6-42a6-874a-b2e743f4bc5a\", \"logical_unit_number\": 0}"
curl -k -u admin:<password> -X POST "https://<cluster_ip>/api/rest/volume/f1577f97-9a4b-4b51-a89b-7e135eda8b29/attach" -H "accept: application/json" -H "Content-Type: application/json" -H "DELL-EMC-TOKEN: <token>" -d "{ \"host_id\": \"164fa5af-9e91-4e86-9c30-7ca0b2647549\", \"logical_unit_number\": 1}"
curl -k -u admin:<password> -X POST "https://<cluster_ip>/api/rest/volume/f1577f97-9a4b-4b51-a89b-7e135eda8b29/attach" -H "accept: application/json" -H "Content-Type: application/json" -H "DELL-EMC-TOKEN: <token>" -d "{ \"host_id\": \"20207f44-f5b6-42a6-874a-b2e743f4bc5a\", \"logical_unit_number\": 1}"
4. Assign node affinity to each volume, with each volume affined to a different node in the appliance, by issuing a PATCH request on the “volume” object.
***EXAMPLE***
curl -k -u admin:<password> -X PATCH "https://<cluster_ip>/api/rest/volume/04f0499c-0f13-4f39-a455-846297358d01" -H "accept: application/json" -H "Content-Type: application/json" -H "DELL-EMC-TOKEN: <token>" -d "{\"node_affinity\": \"Preferred_Node_A\" }"
curl -k -u admin:<password> -X PATCH "https://<cluster_ip>/api/rest/volume/f1577f97-9a4b-4b51-a89b-7e135eda8b29" -H "accept: application/json" -H "Content-Type: application/json" -H "DELL-EMC-TOKEN: <token>" -d "{\"node_affinity\": \"Preferred_Node_B\" }"