December 19th, 2010 20:00

AIX LVM volume group migrations

We are migrating from an IBM array to an EMC VMAX array. The host OS is AIX, and we plan to use AIX LVM for the migration. We want to do the migrations based on volume groups,

i.e., if a volume group has three devices of 20 GB, 30 GB and 40 GB, we will assign a 90 GB VMAX LUN and move the volume group onto that new LUN.

I want to know the AIX LVM commands involved in this process of migrating VGs.

2 Intern • 20.4K Posts

December 20th, 2010 20:00

Using SMIT (an equivalent command-line sequence is sketched after these steps):

1) Go to Volume Groups and select "Set Characteristics of a Volume Group"
2) Select "Add a Physical Volume to a Volume Group"
3) Select the VOLUME GROUP name and then the PHYSICAL VOLUME name (these should be your new hdiskpower devices)
4) At this point the new hdiskpower devices are part of the volume group
5) In SMIT select "Mirror a Volume Group"
6) Select the volume group you are mirroring and the hdiskpower devices you added earlier
7) When mirroring is done you will need to un-mirror the old disks
8) In SMIT select "Unmirror a Volume Group"
9) Select the volume group and then pick the old disks
10) The next step is to remove the old disks from the volume group
11) In SMIT select "Set Characteristics of a Volume Group"
12) Then select "Remove a Physical Volume from a Volume Group"
13) Select the volume group and the old disks
14) Run lspv and the old disks should show "None" in the volume group column
15) The final step is to remove the devices from the AIX perspective (rmdev -dl hdisk2)
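For reference, here is a minimal command-line sketch of the same flow. The volume group name datavg, the new VMAX device hdiskpower4 and the old disks hdisk2-hdisk4 are only example names, substitute your own:

# add the new VMAX device to the volume group
extendvg datavg hdiskpower4

# mirror every logical volume in the VG onto the new disk
mirrorvg datavg hdiskpower4

# make sure all mirror copies are synchronized
syncvg -v datavg

# break the mirror off the old disks
unmirrorvg datavg hdisk2 hdisk3 hdisk4

# remove the old disks from the volume group
reducevg datavg hdisk2 hdisk3 hdisk4

# lspv should now show "None" for the old disks
lspv

# finally remove the old device definitions from AIX
rmdev -dl hdisk2
rmdev -dl hdisk3
rmdev -dl hdisk4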

By the way, you might still want to consider using striped metas, as they may provide better performance for certain workloads. There have been a couple of threads discussing that.

2 Intern • 20.4K Posts

December 19th, 2010 21:00

John,

Are your system admins not familiar with AIX LVM mirroring? It's very simple.

Regarding your device sizes, have you established a "building block" on your array? For example, we have a building block of 17 GB, meaning all of the symdevs are 17 GB in size. Whenever an application needs, say, 68 GB, we can create either one 4 x 17 GB meta or two 34 GB metas.

For Unix/Linux systems, try to come up with some kind of standard. Each device presented to the system gets its own OS I/O path, so presenting one huge 90 GB meta vs. 3 x 30 GB metas matters: in the latter case your system has three I/O paths in the OS, which is more efficient. Your system admin adds these devices to the volume group and gets the capacity they need either way, but from a performance perspective it's better to have more, smaller devices.

For example, for our Oracle RAC boxes we have established a guideline of increasing their storage in 34 GB increments. So if we get a request for 100 GB, we provision 3 x 34 GB metas (each meta comprised of 2 x 17 GB symdevs). Yes, it's not exactly 100 GB, but you establish a standard instead of building custom metas/devices (like you do on a CLARiiON). In my opinion it's just so much easier to manage and to configure for local and remote replication. My 2 cents.

44 Posts

December 20th, 2010 19:00

Dynamox,

You are right. We are implementing Virtual Provisioning in our environment, and we do have standard thin devices carved in the thin pool, e.g. hypers of 16, 32, 64, 128 and 240 GB standard sizes. We decided not to use metas for AIX volume groups; to match the volume group size we will select the appropriate hyper LUNs in multiples. Our AIX admin does know the procedure, but I want to understand it myself and confirm he is doing it the right way. We are migrating VIO LPARs and also doing some physical-to-logical migrations.

44 Posts

December 21st, 2010 12:00

Will it be the same for VIO clients too? I mean, the VIO client only has hdisks, right? PowerPath is installed only on the VIO servers.

2 Intern • 20.4K Posts

December 21st, 2010 12:00

I don't have experience with VIO .. sorry.

January 25th, 2014 22:00

pretty old thread that I've stumbled upon ..

anyways.

shouldn't be any different for the VIO clients, John.

Allocate your EMC LUNs to the VIO servers, make them visible to your VIO clients, and do a mirrorvg to mirror your VGs onto the EMC LUNs. Verify, and once done, unmirror your VGs, etc. [You could use SMIT like dynamox mentioned.]

Yes, the VIO clients do not have PowerPath installed, because your EMC LUNs are masked to the physical HBAs on the VIO server (run an lspath on the client instead to see the LUNs coming from the VIO servers). On the VIO client you'd still see the VG information across your PVs in an lspv, right? So you do not have to touch PowerPath anywhere when you do LVM migrations. Just make sure you're mirroring to the right set of disks, whether or not they are under PowerPath control.
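To illustrate (a minimal sketch; the VG name datavg and the disk name hdisk4 are just examples), on the VIO client you would verify the new virtual SCSI disk and then mirror onto it, with no PowerPath involvement on the client:

# list physical volumes, their PVIDs and volume group membership
lspv

# show the MPIO paths of the new disk (one path per VIO server serving it)
lspath -l hdisk4

# once hdisk4 is confirmed as the new VMAX-backed disk, mirror the VG onto it
extendvg datavg hdisk4
mirrorvg datavg hdisk4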

thanks.
