Replying to: Jasonc1

Re: Moving Disk Transparently From Host Point of View

1. With a Thin environment and multiple tiers of disk, you can change which tier(s) of disk a volume belongs to. For example, a three-tier environment with EFD, FC, and SATA disks will typically have a FAST policy of 100/100/100. In other words, parts of a volume can sit on any tier, and the array dynamically manages placement based on performance history. You can modify the FAST policy so that 100% of the volume sits on EFD disk with a 100/0/0 policy.

2. FAST changes are made via Unisphere or Symcli (the open systems command line); see the sketch below.
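
A rough Symcli sketch of what that looks like, assuming a FAST VP policy named EFD_Only and a storage group App1_SG (both hypothetical names, as is SID 1234). Exact symfast flags vary by Solutions Enabler version, so check the symfast documentation before running anything:

    # List the FAST VP policies defined on the array
    symfast -sid 1234 list -fp

    # Point a storage group at a policy whose tier limits are 100/0/0 (EFD only)
    symfast -sid 1234 associate -sg App1_SG -fp_name EFD_Only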

3. FAST is only for multi-tier (disk) configurations on the 20K/40K.

4. Based on this, you have a THICK environment. A single disk failure will not cause an issue on an R1 or R2 device; they MUST be RAID protected. It is of course possible to have a multi-drive failure scenario where the same RAID group is compromised multiple times. The array guards against this with Hot Spares: when a drive fails, the array invokes a Hot Spare to dynamically replace it. If you found yourself in a situation where a RAID group was completely compromised (data loss on the R2 side), you did not need to move volumes IMHO. Replacing the disks and performing an SRDF FULL ESTABLISH on the devices in the RAID group would have sorted you out; see the sketch below.
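
To illustrate that recovery step, assuming the affected SRDF pairs are in a device group named prod_dg (a hypothetical name), the full re-synchronisation would look something like:

    # Re-copy all tracks from the R1 (source) devices to the replaced R2 disks
    symrdf -g prod_dg establish -full

    # Check progress until the pairs return to the Synchronized state
    symrdf -g prod_dg query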

5. Thin is the way to go. The more modern VMAX arrays are thin-only.

SPLITTING SRDF

SRDF is typically for Disaster Recovery purposes. Each time you SPLIT, the DR position is compromised, so this is best avoided :-) What is the business case for the RDF split? Could you CLONE the R2 devices and IPL off the clones instead? Something along the lines of the sketch below.
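
A rough sketch of that alternative, assuming TimeFinder/Clone is licensed and a pairs file clone_pairs.txt mapping each R2 device to a clone target device (the SID, file name, and device pairings are all hypothetical, and flags vary by Solutions Enabler version):

    # clone_pairs.txt holds "source target" Symmetrix device pairs, one per line
    # Create clone sessions against the R2 devices
    symclone -sid 1234 -f clone_pairs.txt create -copy

    # Activate all sessions at a consistent point in time, then IPL off the targets
    symclone -sid 1234 -f clone_pairs.txt activate -consistent

The R2 devices stay synchronised with the R1s throughout, so the DR position is never compromised.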