rockman881
15 Posts
0
VMAX and Open Replicator
I have a customer that is using HP's PolyServe Matrix and we are discussing possible migration methods. We have already tested Open Replicator and it works fine, but they want to know if we can use a tool called PSFS freeze to freeze a file system (Symm devices) and initiate an Open Replicator session without actually shutting down the servers.
I always shut down the servers before creating a pair and activating a session, and I am not sure this would work. Is this supported by EMC?
ZepHead
88 Posts
0
September 28th, 2010 19:00
I searched for and found "PSFS" in E-Lab Navigator, but that is as far as I went - wondering if it even entered the radar scope of the E-Lab group.
Seeing that there was a 'searchable' hit for "PSFS", I would recommend starting by checking whether your host and overall environment are supported. You can begin with emc161911, which covers how to use E-Lab Navigator to query the support matrix itself.
The key is to draw a close-enough query against your environment and see if this tool shows up (it is probably best to export the results to PDF for search purposes), along with any other qualification data you should know about.
In addition to this, the Open Replicator guides are available via PowerLink for other features, requirements, special notes, etc. - they may not exactly answer your question, but may suggest a valid alternative or another approach.
You can also search the Knowledgebase (PowerLink) for any relevant hits on "HP PolyServe". I found approximately three hits for "HP PolyServe" after a quick check, but I am not familiar enough with this product.
Don't forget (or rule out) Open Migrator for migration needs in general, within the scope of supported environments. Of course, your local sales team can help you determine the appropriate product line for your needs.
johncampbell1
52 Posts
0
April 5th, 2011 01:00
Hi Rockman
Would be very interested to hear more of your experience with migrating HP PolyServe Matrix. We are about to embark on the same, from PolyServe on HP EVA8100 to VMAX, and are limbering up to use OR. Did you have issues preserving the LUN identity for the matrix servers?
Our process is complicated in the sense that two of our 6-node PolyServe clusters need to undergo a SAN circuit migration as well as a storage migration.
Would be most interested to find out how it all went.
thanks John
rockman881
15 Posts
0
April 5th, 2011 05:00
John,
I migrated four different clusters with no issues at all. The only thing is that you can't migrate the membership partitions using Open Replicator.
I used hot pull and had a third HBA connected to the old frame so I would still have access to the original membership partitions.
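For what it's worth, the hot pull sessions themselves were just standard SYMCLI symrcopy sessions, roughly along these lines (the Symmetrix ID, device numbers and WWNs below are placeholders, and the membership LUNs are simply left out of the pair file):
# or_pairs.txt - one line per data LUN: control (VMAX) device = remote (source) device WWN
symdev=<vmax_sid>:<vmax_dev_1> wwn=<wwn_of_source_data_lun_1>
symdev=<vmax_sid>:<vmax_dev_2> wwn=<wwn_of_source_data_lun_2>
# create and activate the hot pull copy sessions
symrcopy create -file or_pairs.txt -copy -hot -pull -name psmig01
symrcopy activate -file or_pairs.txt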
johncampbell1
52 Posts
0
April 12th, 2011 01:00
Hi Rockman
Many thanks for your response - would you be prepared to elaborate in a little more detail how you handled the issue of the membership partitions please?
John
rockman881
15 Posts
0
April 12th, 2011 05:00
All servers had a third HBA that was not in use. I should point out that I also migrated the servers from McData switches to Cisco switches as part of the migration effort. Here are the steps:
1. Cable up the third HBA to the existing fabric (McData)
2. Zone the third HBA to an FA port
3. Map the membership LUNs to that FA port (see the sketch at the end of this post)
4. Block the switch port for the third HBA
5. Mask the membership LUNs to the third HBA WWN on the above FA port
The above can all be done prior to the migration event.
6. Shut down the hosts
7. Unblock the switch port of the third HBA
8. Create the OR session using hot pull
9. Activate the OR session
10. Power up the hosts. The servers will see all the new data volumes on the VMAX and will also see the original membership partitions down the third HBA.
11. Manually copy the membership partitions to the new, empty membership LUNs that you have in the VMAX masking view.
Obviously this requires exports and imports on the PolyServe side of things as well. I do have a very detailed plan, including the host steps, if you need it.
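For steps 3 and 5, the mapping and masking on the old frame were just the usual SYMCLI commands, roughly like this (the SID, device numbers, FA director/port and WWN are placeholders, and the exact syntax depends on your Enginuity and Solutions Enabler versions):
# map the membership devices to the FA port the third HBA is zoned to
symconfigure -sid <old_sid> -cmd "map dev <dev1> to dir <fa>:<port>; map dev <dev2> to dir <fa>:<port>;" commit
# mask them to the third HBA's WWN on that FA port, then refresh the masking database
symmask -sid <old_sid> -wwn <hba3_wwn> -dir <fa> -p <port> add devs <dev1>,<dev2>
symmask -sid <old_sid> refresh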
johncampbell1
52 Posts
0
April 12th, 2011 06:00
Rockman
Once again thank you for sharing this information - if you are prepared to share more details on the PolyServe end it would be much appreciated and carefully studied.
Our task seems similar in many respects to yours - we need to move off dual Brocade 4100-based SAN islands and into 48000 Directors, changing from HP EVA to VMAX targets at the same time. We have some latitude in that our PolyServe Windows hosts have some redundant dual-port HBAs that we might use to emulate what you have done, so that sounds encouraging. We need to migrate many different host types, some using host-based volume managers, some with OR, but it is the PolyServe estate that we believe demands some extra care.
John.
rockman881
15 Posts
0
April 12th, 2011 13:00
Pre-migration work
1. Drop new cables for the existing HBA connections to the Cisco switches (HBA1NEW, HBA2NEW)
2. Drop a new cable for the membership disks to the third HBA (HBA3)
3. Zone and mask the new VMAX membership LUNs to HBA1NEW and HBA2NEW (see the sketch after this list)
4. Zone and mask the existing DMX membership LUNs to HBA3
5. Run a script to document the log file locations for all databases
6. Back up the membership partition information to a file (risk: low):
mpdump -f premigrationbackup
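For step 3, the masking on the VMAX side was just the normal auto-provisioning group setup, something like this (the group names, SID, devices, director port and WWNs are placeholders - check the exact symaccess syntax against your Solutions Enabler version):
# storage, port and initiator groups for the new membership LUNs
symaccess -sid <vmax_sid> -name psclus_member_sg -type storage create devs <dev1>,<dev2>
symaccess -sid <vmax_sid> -name psclus_pg -type port create -dirport <fa>:<port>
symaccess -sid <vmax_sid> -name psclus_ig -type initiator create -wwn <hba1new_wwn>
symaccess -sid <vmax_sid> -name psclus_ig -type initiator add -wwn <hba2new_wwn>
# tie them together in a masking view
symaccess -sid <vmax_sid> create view -name psclus_mv -sg psclus_member_sg -pg psclus_pg -ig psclus_ig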
Migration work
1. Disable the SQL instances across the cluster. From the Applications tab you can highlight each instance and select “disable across all nodes”. (Risk: low. Backout: re-enable the SQL instances.)
2. Deport all dynamic volumes from the cluster (this also unassigns the paths). (Risk: medium. Backout: reimport all volumes.)
3. Deport the non-membership LUNs from PolyServe. (Risk: medium. Backout: reimport the LUNs.)
4. Power down all servers. This is required for Open Replicator to start processing.
5. Swap the existing HBA cables over to the new SAN infrastructure on each server. This is due to a requirement to reconfigure the SAN infrastructure for these devices, separate from the migration. Details:
- Disconnect the existing HBA cables (HBA1, HBA2)
- Connect the new HBA cables (HBA1NEW, HBA2NEW, HBA3) for the VMAX storage
(Risk: medium. Backout: return to the original cables and reimport the LUNs.)
At this point the new data and membership VMAX LUNs will be visible on HBA1NEW and HBA2NEW, and the old DMX membership LUNs will be visible on HBA3. Confirm the 4 Gb/s link.
6. Run scripts to remove the LUN assignments.
7. Initiate Open Replicator for all DATA LUNs (do NOT include the membership LUNs). Use hot pull migration (a symrcopy monitoring sketch follows this list). (Risk: low. Backout: stop Open Replicator.)
8. Power on all servers.
9. Perform diskpart on the new VMAX membership LUNs.
10. Import the new VMAX membership LUNs into PolyServe.
11. Import all dynamic volumes with the new targets after the hot migration is started. (Risk: medium. Backout: deport the new volumes.)
12. Assign paths for the new dynamic volumes. (Risk: medium. Backout: unassign the paths.)
13. Manually edit the script to assign each path to the correct volume (see the script tab).
14. Back up the membership partition again, which now has the new volume configuration (risk: low):
mpdump -f postmigrationbackup
15. Go into the cluster configuration and replace the membership LUNs with the new VMAX membership LUNs, one at a time. (Risk: medium. Backout: if a membership partition becomes corrupted or has other issues, perform a recovery using the appropriate tools and the backup taken with the updated configuration.)
16. Deport the old DMX membership LUNs from PolyServe.
17. Disconnect the HBA3 cable.
18. Re-enable the SQL instances and bring them online.
19. Perform SQL health checks.
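For step 7, once the hot pull sessions are activated you can watch the copy and clean up afterwards with the usual symrcopy commands, roughly like this (or_pairs.txt here is just a placeholder for whatever device-pair file you created the sessions with):
# check the copy progress / session state for all pairs in the file
symrcopy query -file or_pairs.txt
# once every pair reports Copied, tear the sessions down
symrcopy terminate -file or_pairs.txt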
If the new volumes are not working:
1. Deport the new volumes from PolyServe
2. Deport the new data LUNs from PolyServe
3. Power down all servers
4. Reconnect the HBA3 cable
5. Power up all servers
6. Replace the VMAX membership disks with the DMX membership LUNs
7. Power down all servers
8. Revert to the previous HBA cables (HBA1, HBA2)
9. Power up all servers
10. Reimport the old volumes
11. Reassign paths to the original volumes
12. Disconnect the HBA3 cable
johncampbell1
52 Posts
0
April 18th, 2011 01:00
Rockman
Once again, many thanks for sharing this detailed procedure. I have one, hopefully final, specific point to clarify about this line (quoted below):
"Initiate Open Replicator for all DATA LUNs (do NOT include the membership LUNs). Use hot pull migration."
By "hot pull migration" do you mean migrate (verb) using -hot -pull,
or
migrate using -hot -pull -migrate (as a parameter),
i.e. with the latter implying retention of the disk WWN using a Federated Live Migration?
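In other words, the two forms I have in mind look roughly like this (the session names and pair file are placeholders, and I may have the exact options wrong):
# plain Open Replicator hot pull - the new VMAX device keeps its own identity
symrcopy create -file or_pairs.txt -copy -hot -pull -name psmig_or
# Federated Live Migration flavour - intended to preserve the source device identity
symrcopy create -file or_pairs.txt -copy -hot -pull -migrate -name psmig_flm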
When we've done it at our end, I'll make sure this thread is updated with our experience.
Regards
John