I am a Mainframe system programmer and new to EMC storage. I have a question: "Sometimes there are problems on the storage and a need to move data from one particular physical disk to another (a full copy), or, more rarely, to release the whole storage box for hardware maintenance. Does the VMAX 20K/40K have any function to move a whole disk transparently from the host's point of view? For example, can we move SYSRES (the live operating system volume) from one Symm device to another in a live system without any disruption?"
First, a little background on the VMAX.
The VMAX 20K/40K can be configured as thick or thin. With thick provisioning, your sysres sits in one particular RAID group of disks. In a thin environment, the disks are virtualized into a single pool per installed disk technology.
Further, with thin provisioning there can be different tiers of disk (slow to fast), and with VMAX FAST technology the data on your sysres may be on one or multiple tiers at the same time. FAST policies can manage your environment from a performance perspective.
So you will need to understand what type of environment you have on your VMAX before considering the kind of physical disk move you describe.
So yes, you can influence where your sysres sits physically (via FAST policies, for example) within an array, totally transparently to the host.
From my experience with Mainframe and VMAX, pretty much the only reason to move host devices around at the back end is performance. FAST policies are an excellent tool for this.
For a thick-provisioned environment, you are limited to an FDRPAS or z/OS Migrator type product to move the volume at the host.
Before considering any volume moves, you need to have a good handle on the configuration of the array...
Does that answer the question for you?
Thanks, Jasonc, for your reply. It was really helpful. As I said before, I'm a Mainframe system programmer and not deeply familiar with storage; however, because of some problems with our storage I need to improve my storage knowledge, so that perhaps we can initiate some changes to our storage configuration. For clarification, I have more questions:
1) You said that with FAST technology and thin devices I can move all the data on a particular physical disk to other devices and release that physical disk. Did I understand correctly?
2) With FAST technology and thin devices, can I move data manually at any time, either from Unisphere or from the host (Mainframe) with Solutions Enabler installed on z/OS?
3) If we only have one type of device, is it still worthwhile to configure FAST and thin devices?
4) One of the problems we encountered was a disk crash on one of our R2 physical devices (thick) that held 14 Mainframe volsers. In our procedure we split the SRDF every night and then start our operations on the R2 devices, but because of this problem (on both M1 and M2 of the R2) we couldn't split the disks overnight. Our storage admins asked us to move the R1 volumes onto different disks using ADRDSSU, and then they replaced the crashed R2 disks. Maybe it is the RAID configuration that makes this kind of manual intervention by a z/OS operator necessary, but I am really looking for a way of changing the EMC configuration, or some transparent data movement, so the data is moved from the EMC side rather than from z/OS.
5) Can I conclude that it is better to configure the VMAX with thin devices rather than thick devices (is there any benefit to thick devices)?
1. With a thin environment and multiple tiers of disk, you can change which tier(s) of disk a volume belongs to. For example, a three-tier environment with EFD, FC and SATA disks will typically have a FAST policy of 100/100/100. In other words, parts of a volume can sit on any tier and the array will dynamically manage this based on performance history. You can modify the FAST policy so that 100% of the volume is on EFD disk with a 100/0/0 policy.
2. FAST changes are made via Unisphere or SYMCLI (the open-systems command line).
3. FAST is only relevant for a multi-tier (multiple disk technologies) 20K/40K.
4. Based on this, you have a THICK environment. A single disk failure will not cause an issue on an R1 or R2 device; they MUST be RAID protected. It is of course possible to have a multi-drive failure scenario where the same RAID group is compromised multiple times. The array does guard against this with Hot Spares: when a drive fails, the array invokes a Hot Spare to dynamically replace it. If you found yourself in a situation where a RAID group was completely compromised (data loss on the R2 side), you did not need to move volumes, IMHO. Replacing the disks and performing an SRDF FULL ESTABLISH on the devices in the RAID group would have sorted you out.
5. Thin is the way to go. The more modern VMAX arrays are thin-only.
SRDF is typically for Disaster Recovery purposes. Each time you SPLIT, the DR position is compromised, which is best avoided 🙂 What is the business case for the RDF split? Could you CLONE the R2 devices and IPL off the clones instead?
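To make the two suggestions above concrete, here is a hedged sketch in Solutions Enabler (SYMCLI) terms. The device group name `mf_dg` is hypothetical, and exact options vary by Solutions Enabler version, so verify against your release's documentation before running anything:

```shell
# Assumption: "mf_dg" is a hypothetical SYMCLI device group containing
# the affected SRDF pairs; check your Solutions Enabler docs for syntax.

# After the failed R2 drives are replaced, resynchronize the R2 devices
# from R1 with a full establish (no host-side volume moves needed):
symrdf -g mf_dg establish -full

# Alternative to splitting SRDF: keep replication running and take a
# consistent point-in-time clone of the R2 devices, then run off the clones:
symclone -g mf_dg create
symclone -g mf_dg activate -consistent
```

Note that a full establish overwrites the R2 side from R1, so it is only appropriate when the R2 copy is known to be lost or invalid; otherwise an incremental establish is the normal resync.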
Thank you again, Jason, for your great replies. Now I have a better view of the problem:
1) There is no need to manually move data from one physical device to another just because of a disk crash. With FAST technology it can be handled dynamically according to our own policies.
2) It is better to define thin devices; this is the direction VMAX is headed.
During our end-of-day operations we need at least four copies of our database (one for online, one for batch, one for full backup, and one for DR). So there are R1, R2, R2-BCV and another DR-R2, and during the night all of them are split. We don't IPL, because only the databases are mirrored; after the R2 mirror is split, the DBMS starts and continues online operation. The whole process takes less than 5 minutes. I don't know whether there are better solutions for getting four fast point-in-time copies of a database?!
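For reference, the nightly cycle described above might look roughly like this in SYMCLI terms. This is a hedged sketch only: `db_dg` and `dr_dg` are hypothetical device group names, and a real script would differ (remote BCV operations in particular may need extra qualifiers depending on the configuration):

```shell
# Hypothetical device groups: "db_dg" (R1->R2 pairs with BCVs on the R2
# side) and "dr_dg" (the DR leg). Verify syntax against your SE release.

# Evening: split the mirrors to create the point-in-time copies.
symrdf -g db_dg split        # detach R2 from R1 (batch/online copy)
symmir -g db_dg split        # detach the R2-BCV (backup copy)
symrdf -g dr_dg split        # detach the DR R2

# Morning: re-establish. By default this is an incremental resync,
# copying only the tracks changed since the split.
symrdf -g db_dg establish
symmir -g db_dg establish
symrdf -g dr_dg establish
```

As noted earlier in the thread, each split leaves the DR position compromised until the re-establish completes, which is why a clone-based point-in-time approach is often preferred over nightly splits.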
With your four copies, you probably don't have the best possible storage configuration for the purpose under current best practice; likely it is what it is right now. If and when your next storage refresh is due, there are much more efficient ways to achieve your requirement with current VMAX and PowerMax technology. Make sure your voice is heard when the configuration requirements are discussed.