michael_churchi
January 21st, 2014 03:00
When using PowerPath ME, do you need an underlying replication technology to work in conjunction with the PowerPath ME tool set? Earlier documentation seems to indicate that, as the name suggests, it is an enabler that allows pseudo names to be used as source and target, but that the copy process was done by something like Open Replicator or SRDF/TimeFinder commands, and in some cases the arrays needed to be connected.
However, later documentation seems to indicate that it is a standalone tool set. Do we need anything else to make PowerPath Migration Enabler work, other than having PowerPath fully installed and a licence for this product and software, etc.?
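For context, my current understanding of the Host Copy workflow (which, if correct, would need no external replication at all) is roughly the sketch below. The pseudo device names and handle number are placeholders, and I may have the exact powermig switches wrong, so treat it as an illustration only:
# hypothetical PPME Host Copy migration between two pseudo devices (names are placeholders)
powermig setup -techType hostcopy -src emcpowera -tgt emcpowerb   # returns a migration handle
powermig sync -handle 1            # bulk copy driven by the host itself
powermig query -handle 1           # watch until source and target are synchronized
powermig selectTarget -handle 1    # switch I/O over to the target device
powermig commit -handle 1          # make the switchover permanent
powermig cleanup -handle 1         # finish the migration and remove the source metadata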
michael_churchi
January 21st, 2014 23:00
OK, looking at the environment requirements for PowerPath ME, I find that it is also not an option for migrating GFS2 file systems, and, like Open Migrator, it is not supported.
The PPME-supported Linux file systems are:
EXT2
EXT3
EXT4 (supported only on RHEL 6)
XFS (supported on RHEL 6, SLES 11 SP2 and SLES 11 SP3)
ReiserFS (supported on SLES 10 SP4, SLES 11 SP2 and SLES 11 SP3)
Unfortunately, PPME is not qualified to support GFS. As per the note, since we have RHEL 5.8, PPME can be used to do migrations for EXT2 and EXT3 file systems.
See the PowerPath release notes for the latest version, 5.7 for Linux, under the topic "Environment and system requirements", page 16, support category "File System".
If neither Open Migrator nor PowerPath ME is an option in this case, and the customer is not keen on using SRDF between the two arrays due to the requirements and changes needed, is there any EMC migration product, or other product, that can successfully deal with the GFS2 file system format for migration without having to make changes to the actual original data, logical volume groups, etc.?
michael_churchi
January 27th, 2014 04:00
OK, conflicting information on this one.
The PowerPath E-Lab Support Matrix (attached) mentions:
"EMC supports the following filesystems: ext2, ext3, ext4, xfs, gfs, and gfs2 unless otherwise noted in the ESM. If there is a customer requirement for a different filesystem, please submit an RPQ."
In addition, the EMC Host Connectivity Guide for Linux (attached) also states that it supports GFS2 file systems.
However, when I raised a call with EMC, they stated:
"PowerPath supports GFS, but PPME does not support it and it has not been tested. PPME uses either a host or storage copy technology to do the migration. PPME interacts with the file system at the time of cleanup.
If we want to use PPME for GFS, we may need to raise an RPQ. You can try to use PPME in file system offline mode, but for EMC support we need to have an RPQ."
So it seems the documentation states that GFS2 is supported, but not if we take the answer I got from EMC.
All the tool sets I have looked at seem to have the same restrictions: the file system needs to be on a single host, not a cluster (which is pretty normal), and, to get a proper copy, the file system effectively needs to be shut down and unmounted.
This basically seems to be the case if I look to use dd or rsync.
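For clarity, the sort of offline bulk copy I mean looks roughly like this; the device and mount point names are just placeholders:
# block-level: with the file system unmounted on every cluster node,
# copy the whole LUN from the old pseudo device to the new one
umount /mnt/gfs2data
dd if=/dev/emcpowera of=/dev/emcpowerb bs=1M
# file-level alternative: mount the source (ideally read-only, on a single node)
# and a freshly created target file system, then copy the contents across
rsync -aHAX /mnt/gfs2data/ /mnt/gfs2data_new/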
Has anybody successfully migrated a GFS2 file system without actually taking the system down completely to enable the bulk copy to be done?
If so, what method or tool set did you use?
umichklewis
January 28th, 2014 09:00
While not directly your situation, I had success with GFS2 migrations from CX3-240 to CX4-480 and from CX3-240 to VNX 5300, both using SANCopy.
Per the EMC Host Connectivity Guide for Linux, yes, you are correct: Linux connectivity to GFS2-enabled filesystems is supported. However, PowerPath does not support GFS2, so if you use PowerPath to manage the allocated block devices, EMC may not provide support to you if something goes awry. So, if PowerPath does not support the device, then by extension neither does PPME. No conflict or confusion there.
In our case, we had to take an outage to start the copy (Hot Push in all cases) and timed the completion with kernel updates and planned maintenance on the Linux hosts. By timing the copies with planned maintenance, it was not impactful at all.
Let us know if that helps!
Karl
storagtetalk
April 12th, 2014 08:00
I am trying to accomplish a similar migration, a SANCopy of a clustered GFS2 filesystem. If you got this to work with SANCopy, could you share your experience and the steps you took?
I'm looking at shutting down all applications, disabling all cluster services, then completing the SANCopy migration by doing a vgexport of the old disks and then a vgimport of the new disks on each of the nodes.
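Roughly, I'm expecting something like the following; the volume group and mount point names are placeholders, and since vgexport/vgimport change the on-disk metadata they strictly only need to run from one node, while the unmount/deactivate/mount steps are per node:
# before the final copy: stop applications and cluster services, then
umount /mnt/gfs2data            # unmount the GFS2 file system on every node
vgchange -an vg_appdata         # deactivate the volume group
vgexport vg_appdata             # export it so the old disks can be released

# ... final SANCopy pass runs, zoning is switched to the new array ...

# after the new LUNs are visible to the hosts
pvscan                          # rediscover the copied physical volumes
vgimport vg_appdata             # import the volume group from the new disks
vgchange -ay vg_appdata         # reactivate it
mount /mnt/gfs2data             # remount and restart the applications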
Thanks!
umichklewis
April 15th, 2014 14:00
Here's what I can describe of the environment:
Four Linux hosts involved with the application were running Red Hat; two were clustered and not using PowerPath, two were not clustered but were using PowerPath to access several volumes. All four hosts were attached to a CX3-240, with a mix of LUNs and MetaLUNs. We deployed a new CX4-480, put it in the same Navisphere Domain as the CX3-240, and configured SANCopy sessions from the source CX3 to the target CX4. We provisioned new LUNs on the CX4-480 and created new FC zones, but did not activate them. I also configured almost 100GB of RLP LUNs, since some of the hosts have a high change rate of data.
During scheduled downtime, we shut down all four Linux hosts at the end of their planned maintenance. While the hosts were down, I activated the SANCopy sessions to perform a full (bulk, non-incremental) copy; because all four hosts were down for an extended period of time, the bulk copy, which took under 90 minutes, completed well inside that window. Before the UNIX admins brought the hosts up, I created a mark and an incremental SANCopy session. This way, the incremental session was tracking the changes to the source LUN and no further pause in I/O was required.
When we were ready to transition the cluster to the new array, we shut down the application, unmounted the filesystems and vgexported the source CX3 disks. We then activated new FC zones that pointed the Linux servers at the CX4 disks instead of the CX3 disks, and ran a vgimport on the CX4 disks. After the new disks were visible, we mounted the filesystems and restarted the applications.
The total application downtime was under 15 minutes, because we had tested this process on non-production hosts first. We also chose to change FC zones at the end, after Risk Management requested that the hosts be unable to write to the old disks as a failsafe in our process, though you might choose to simply add new zones.
We repeated this process a few years later, moving a different CX3-240 to a VNX. At the time, the VNX was running older FLARE and I could not add it to the same domain as the CX3, so we had to script all the SANCopy sessions with Naviseccli. That was actually much easier than I expected, so I might suggest using Naviseccli if you're handy with scripts.
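The scripted part was essentially just a loop over naviseccli. A rough sketch is below; the SP address and session names are placeholders, and I'm quoting the sancopy switches from memory, so check them against the Navisphere CLI reference before relying on them:
#!/bin/bash
# placeholder SP address and incremental SAN Copy session names
SP=10.0.0.1
SESSIONS="app_lun_01 app_lun_02 app_lun_03"

for S in $SESSIONS; do
    # take a fresh point-in-time mark, then start the incremental update
    # (exact sancopy verbs/switches from memory; verify before use)
    naviseccli -h $SP sancopy -mark -name $S
    naviseccli -h $SP sancopy -start -name $S
done

# check progress of the sessions
naviseccli -h $SP sancopy -info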
Let me know if that helps!
Karl