PowerStore 2.0 was released in June 2021.
Let's take a look together at the main new features available in that version.
V1: Single drive failure protection. Only RAID5 (4+1 or 8+1). Minimum 6 drives (RAID5 (4+1) + spare space (one drive per RRS)). RAID Resiliency Sets (RRS = Fault Domain) of 25 drives.
V2: Adds double drive failure protection. RAID6 (4+2, 8+2, or 16+2). Minimum 7 drives (RAID6 (4+2) + spare space (one drive per RRS)). RRS of 50 drives for RAID6 (still 25 drives for RAID5). RAID6 is only available for a new cluster or a new appliance added to an existing cluster. The RAID type cannot be changed afterwards. The RAID width remains the same when new drives are added to the appliance; it depends on the appliance's initial drive count.
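To make the minimum-drive arithmetic above explicit, here is a small shell sketch (the numbers come directly from the rules stated above: minimum drives = data drives + parity drives + one spare-space drive per RRS; `min_drives` is just an illustrative helper, not a PowerStore tool):

```shell
#!/bin/sh
# Minimum drive count = RAID width (data + parity) + spare space (1 drive per RRS).
min_drives() {
  data=$1; parity=$2; spare=$3
  echo $((data + parity + spare))
}

min_drives 4 1 1   # RAID5 (4+1) + 1 spare -> prints 6
min_drives 4 2 1   # RAID6 (4+2) + 1 spare -> prints 7
```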
V1: Not supported
V2: NVMe-oF: new protocol for hosts to access the storage system, supported only over Fibre Channel. The same FC front-end port can be used for SCSI or NVMe (there is no option to enable only one of them).
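As a rough sketch of what NVMe over Fibre Channel looks like from a Linux host with nvme-cli, discovery and connection might resemble the commands below. All WWNN/WWPN values and the subsystem NQN are placeholders, not real PowerStore addresses; check your HBA and array port addresses before use:

```shell
# Hypothetical example: discover NVMe subsystems over FC with nvme-cli.
# Replace the nn-/pn- values with your host HBA port and array target port.
nvme discover --transport=fc \
  --host-traddr=nn-0x20000025b500a123:pn-0x21000025b500a123 \
  --traddr=nn-0x20000090fa942779:pn-0x10000090fa942779

# Connect to a discovered subsystem (the NQN below is a placeholder):
nvme connect --transport=fc \
  --host-traddr=nn-0x20000025b500a123:pn-0x21000025b500a123 \
  --traddr=nn-0x20000090fa942779:pn-0x10000090fa942779 \
  --nqn=nqn.1988-11.com.dell:powerstore:00:example
```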
V1: Single storage network could support a single VLAN/Subnet.
V2: Up to 32 storage networks, with up to 8 storage networks per interface.
New Platform. No hypervisor-based model (No PS-500X model).
Riptide platform: single socket, less memory, full NVMe Base enclosure, no NVRAM drives, maximum 25 Drives (No expansion enclosures). It can be clustered with other models.
The Mezz0 card is optional. Without it, NAS services and clustering are not available.
Instead of NVRAM drives (write cache), the write cache on one node is mirrored to the peer node before the ACK is sent to the host. BBUs protect the write cache in case of a power outage: the cache content is flushed to an M.2 device (vaulted cache data).
V1-SP3: Best-practice automation. Ideal configuration for optimal performance:
Cluster and Storage MTU=9000
2 active iSCSI targets per controller VM (not needed for 1000X)
ESXi iSCSI queue depth increased to 256 (not needed for 1000X)
Optimal configuration can be configured during ICW (Initial Configuration Wizard)
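As a sketch of what the equivalent manual settings look like on an ESXi host (the ICW applies this automatically; `vSwitch1` and `vmk1` are placeholder names for your storage vSwitch and VMkernel port, and these are illustrative commands, not a Dell-documented procedure):

```shell
# Hypothetical manual equivalents of the best-practice settings on an ESXi host.
# vSwitch1 and vmk1 are placeholder names for the storage vSwitch / VMkernel port.
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Raise the software iSCSI LUN queue depth to 256 (takes effect after reboot):
esxcli system module parameters set -m iscsi_vmk -p iscsivmk_LunQDepth=256
```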
V2: Create a multi-appliance cluster with up to 4 PS-X appliances.
V2: Ability to initiate a failover test from the DR site, using either the current destination data or snapshot data. The destination becomes accessible read/write while replication keeps running in the background. There is no time limit on running the failover test. Both clusters must be on the V2 release.
V1: The VM must be powered off before migrating its VVols from one appliance to another (multi-appliance cluster).
V2: Migration of VVols is completely transparent. No host rescan needed.
Many additional features are integrated with 2.0. Always check the Dell EMC PowerStore Release Notes for PowerStore OS on the https://www.dell.com/support/ website.
If you have any questions, feel free to ask in this discussion!
The Tolerance Level for each appliance is shown in the PowerStore Manager UI under Hardware > APPLIANCES (The Tolerance Level column may need to be manually added to the UI view).
Please see the example below. Click the filter icon to add the Tolerance Level column.
Hope this helps.
You can run service scripts to view information about the RAID configuration on the appliance.
Example of the service script: svc_diag list --storage
You should be able to view the RAID type and RAID width, as in the example below: