PowerStore Education

Last reply by 07-28-2022 Unsolved
Dell Technologies

PowerStore 2.0 New Features

Hi Folks,

PowerStore 2.0 was released in June 2021.

Let's have a look together at the main new features available with that version.

  • Drive fault tolerance

V1: Single drive failure tolerance. Only RAID5 (4+1 or 8+1). Minimum of 6 drives (RAID5 (4+1) + spare space (one drive per RRS)). RAID Resiliency Sets (RRS = Fault Domain) of 25 drives.

V2: Adds double drive failure tolerance. RAID6 (4+2, 8+2, or 16+2). Minimum of 7 drives (RAID6 (4+2) + spare space (one drive per RRS)). RRS of 50 drives for RAID6 (still RRS of 25 drives for RAID5). RAID6 is only available for a new cluster, or for a new appliance added to an existing cluster. The RAID type cannot be changed afterwards. The RAID width remains the same when new drives are added to the appliance; it depends on the initial drive count of the appliance.
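The minimum drive counts above follow directly from the RAID widths plus one spare-space drive per RRS. A small illustrative sketch (not a Dell tool; the layout table is just the figures quoted above):

```python
# Illustrative sketch of the drive fault tolerance rules described above.
# Assumes one spare-space drive per RRS, as stated in the post.
RAID_LAYOUTS = {
    "RAID5 (4+1)":  {"data": 4,  "parity": 1, "failures": 1, "rrs_size": 25},
    "RAID5 (8+1)":  {"data": 8,  "parity": 1, "failures": 1, "rrs_size": 25},
    "RAID6 (4+2)":  {"data": 4,  "parity": 2, "failures": 2, "rrs_size": 50},
    "RAID6 (8+2)":  {"data": 8,  "parity": 2, "failures": 2, "rrs_size": 50},
    "RAID6 (16+2)": {"data": 16, "parity": 2, "failures": 2, "rrs_size": 50},
}

def min_drives(layout: dict, spare_drives: int = 1) -> int:
    """Smallest drive count: one full RAID width plus spare space."""
    return layout["data"] + layout["parity"] + spare_drives

for name, layout in RAID_LAYOUTS.items():
    print(f"{name}: min {min_drives(layout)} drives, "
          f"tolerates {layout['failures']} drive failure(s), "
          f"RRS of {layout['rrs_size']} drives")
```

For example, RAID5 (4+1) gives 4 + 1 + 1 = 6 drives minimum, and RAID6 (4+2) gives 4 + 2 + 1 = 7, matching the figures above.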

  • NVMe over Fabric

V1: Not supported

V2: NVMe-oF: a new protocol for hosts to access the storage system. Only supported over Fibre Channel. The same FC front-end port can be used for SCSI or NVMe (there is no option to enable only one of them).

  • New support for 1.5TB SCM drives with V2
  • Storage Network Scalability

V1: A single storage network supporting a single VLAN/subnet.

V2: Up to 32 storage networks, with up to 8 storage networks per interface.

  • PowerStore 500T (New Model)

New platform. No hypervisor-based model (no PS-500X model).

Riptide platform: single socket, less memory, all-NVMe base enclosure, no NVRAM drives, maximum of 25 drives (no expansion enclosures). It can be clustered with other models.

The Mezz0 card is optional. Without the Mezz0, NAS services and clustering are not available.

In place of NVRAM drives (write cache), the write cache on one node is mirrored to the peer node before the ACK is sent to the host. BBUs protect the write cache in case of a power outage: the write cache content is flushed to an M.2 device (vaulted cache data).
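The NVRAM-less write path above can be sketched conceptually. This is a minimal illustration with assumed names (it is not PowerStore code): the write is mirrored to the peer node's cache before the host gets its acknowledgment, and on power loss the BBU-protected cache is vaulted to the M.2 device.

```python
# Conceptual sketch (assumed class/method names) of the write path
# described above: mirror to peer before ACK, vault to M.2 on power loss.
class Node:
    def __init__(self, name: str):
        self.name = name
        self.write_cache = []   # volatile cache, protected by the BBU
        self.m2_vault = []      # persistent M.2 vault device
        self.peer = None        # the other node in the appliance

    def host_write(self, data) -> str:
        """Cache locally, mirror to the peer node, then acknowledge the host."""
        self.write_cache.append(data)
        self.peer.write_cache.append(data)   # mirror completes before ACK
        return "ACK"                          # host now sees the write as durable

    def power_loss(self):
        """The BBU keeps the cache alive long enough to flush it to M.2."""
        self.m2_vault.extend(self.write_cache)
        self.write_cache.clear()

node_a, node_b = Node("A"), Node("B")
node_a.peer, node_b.peer = node_b, node_a
print(node_a.host_write("block-1"))  # mirrored copy now exists on both nodes
node_a.power_loss()                  # cached data is vaulted to the M.2 device
```

The key design point is that the ACK is only returned once two in-memory copies exist, so a single node failure never loses acknowledged writes.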

  • PowerStore X Enhancements

V1-SP3: Best practice automation. Ideal configuration for optimal performance:

  • Cluster and storage MTU = 9000
  • 2 active iSCSI targets per controller VM (not needed for the 1000X)
  • ESXi iSCSI queue depth increased to 256 (not needed for the 1000X)

The optimal configuration can be applied during the ICW (Initial Configuration Wizard).

V2: Create a multi-appliance cluster with up to 4 PS-X appliances.

  • DR Failover Test

V2: Ability to initiate a failover test from the DR site, using either the current destination data or snapshot data. The destination becomes accessible read/write while replication keeps running in the background. There is no time limit for running the failover test. Both clusters must be on the V2 release.

  • Online VVol Migration

V1: The VM must be powered off before migrating its VVols from one appliance to another (in a multi-appliance cluster).

V2: Migration of VVols is completely transparent. No host rescan needed.

 

Many additional features are integrated with 2.0. Always check the Dell EMC PowerStore Release Notes for PowerStore OS on the https://www.dell.com/support/ web site.

If you have any questions, feel free to ask in this discussion!

Replies (5)
2 Bronze

Once the drive fault tolerance is set, is there a way to display the configuration, with the RAID information, in the GUI or with pstcli?


Hi @SeanieLUNs 

The tolerance level for each appliance is shown in the PowerStore Manager UI under Hardware > Appliances (the Tolerance Level column may need to be added manually to the view).

Please see an example below. Click on the filter icon to add the Tolerance Level column.  

FT.jpg

Hope this helps. 

 


Thanks for the help.
Also, is there a way to show the actual RAID type: (4+1 or 8+1) for single fault tolerance, or (4+2, 8+2, or 16+2) for double?


Hello @SeanieLUNs 

You can run service scripts to view information about the RAID configuration on the appliance.

Prerequisites

  • Obtain the password for the Service account.
  • In PowerStore Manager, under Settings, enable SSH.
  • Download and install an SSH client, such as PuTTY, to a computer that has network access to the cluster. You use the SSH client to run the scripts.

To run the service scripts:

Steps

  1. Launch an SSH client, and connect to the cluster using the management IP address. For example, in PuTTY, enter the management IP for the destination.
  2. Enter the username and password for the service account to log in to the system. Once logged in, you should be connected directly to the serviceability docker container.
  3. Type the name of the script to run.

Example service script: svc_diag list --storage

You should be able to view the RAID type and RAID width, as in the example below:

SVC_RAID.jpg

2 Bronze

Thanks, @Ooi Hoo Hong 

This is the exact information I was looking for.
Cheers 
