December 3rd, 2010 11:00

Move file system from one pool to another

I have a PFS that is currently replicated with ReplicatorV2.  I need to move it from FC to SATA on both the source and destination Celerras.  It's about 2TB of data and I have a 100Mbit WAN connection (I can dial it up to 200Mbit temporarily).  I have an extremely light user base over the weekend, and at this time of year I can schedule an entire weekend outage.  I can see a few ways to do this...

1) Set up a local RepV2 session from the existing PFS to the SATA pool.  When this completes, fail it over, delete the original replication session, and set up a new replica to the far-end/DR site.

2) Over the weekend, do a copy and then fire up the replica to the far side (this would mean either the sync running during business hours or being unprotected for a week - probably not a big deal, since the previous copy will be hanging around for 4 weeks and I can still get access to its snapshots if need be).

3) Create a new file system and copy the data from the shares across onesy-twosey - yuck.  Too much work.

4) And this one is weird - not sure if it's doable/supported - set up a replication session on the far-side Celerra to its SATA disks (you can replicate a replica, right?) and then replicate that replica BACK to the production Celerra.  Fail it around-the-horn back to the production box.

I'm leaning towards option 2 - it seems like there are fewer places where it can break, though option 1 also seems doable.  I'm still on 5.6 and scheduled for a 6.0 upgrade soon, if that matters much.
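A quick back-of-the-envelope check on the weekend window for option 2 (a rough sketch, assuming ideal sustained throughput and no protocol overhead):

```shell
# Estimate hours to push the data over the WAN.
# Assumes ideal sustained throughput; real-world rates will be lower.
DATA_GB=2048          # ~2TB of data
LINK_MBIT=100         # WAN bandwidth in Mbit/s (200 when dialed up)
# 1 GB = 8192 Mbit; total Mbit divided by link speed gives seconds.
SECONDS_NEEDED=$(( DATA_GB * 8192 / LINK_MBIT ))
echo "~$(( SECONDS_NEEDED / 3600 )) hours at ${LINK_MBIT} Mbit/s"
```

That works out to roughly 46 hours at 100 Mbit, so the copy only just fits a weekend; dialed up to 200 Mbit it drops to about 23 hours, which leaves some headroom.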

Thoughts?

Dan

December 4th, 2010 03:00

Hi Dan,

I prefer option 1: use Celerra Replicator and follow the steps below.


1) List the loopback interconnect ID for the Data Mover where the PFS (Production File System) is mounted:

$ nas_cel -interconnect -list
id     name               source_server   destination_system   destination_server
20001  loopback           server_2        CS_NS40_1_MSS        server_2


2) Create a local replication session for this PFS:

$ nas_replicate -create gustavo_rep -source -fs gustavo -destination -pool clarata_archive -interconnect id=20001 -max_time_out_of_sync 10
OK

With the above syntax, a destination file system named gustavo_replica1 (the source name with a _replica1 suffix) will be created on the specified storage pool.


$ nas_replicate -l
Name                      Type       Local Mover               Interconnect         Celerra      Status
gustavo_rep               filesystem server_2                  <->loopback          CS_NS40_1_M+ OK


$ nas_replicate -i gustavo_rep
ID                             = 1027_APM00073700689_0000_1103_APM00073700689_0000
Name                           = gustavo_rep
Source Status                  = OK
Network Status                 = OK
Destination Status             = OK
Last Sync Time                 = Fri Dec 03 22:26:00 BRST 2010
Type                           = filesystem
Celerra Network Server         = CS_NS40_1_MSS
Dart Interconnect              = loopback
Peer Dart Interconnect         = loopback
Replication Role               = loopback
Source Filesystem              = gustavo
Source Data Mover              = server_2
Source Interface               = 127.0.0.1
Source Control Port            = 0
Source Current Data Port       = 0
Destination Filesystem         = gustavo_replica1
Destination Data Mover         = server_2
Destination Interface          = 127.0.0.1
Destination Control Port       = 5085
Destination Data Port          = 8888
Max Out of Sync Time (minutes) = 10
Next Transfer Size (KB)        = 0
Current Transfer Size (KB)     = 0
Current Transfer Remain (KB)   = 0
Estimated Completion Time      =
Current Transfer is Full Copy  = No
Current Transfer Rate (KB/s)   = 0
Current Read Rate (KB/s)       = 0
Current Write Rate (KB/s)      = 0
Previous Transfer Rate (KB/s)  = 51405
Previous Read Rate (KB/s)      = 1446
Previous Write Rate (KB/s)     = 952
Average Transfer Rate (KB/s)   = 51405
Average Read Rate (KB/s)       = 1446
Average Write Rate (KB/s)      = 952


3) When the copy finishes, rename the original file system:


$ nas_fs -rename gustavo gustavo_orig
id        = 619
name      = gustavo_orig
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v1027
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers= server_2
rw_vdms   =
ro_vdms   =
auto_ext  = no,virtual_provision=no
deduplication   = On

ckpts     = root_rep_ckpt_619_147321_2,root_rep_ckpt_619_147321_1
rep_sess  = 1027_APM00073700689_0000_1103_APM00073700689_0000(ckpts: root_rep_ckpt_619_147321_1, root_rep_ckpt_619_147321_2)
stor_devs = APM00073700689-000E,APM00073700689-0013,APM00073700689-0008,APM00073700689-000B
disks     = d19,d14,d16,d11
disk=d19   stor_dev=APM00073700689-000E addr=c16t1l10       server=server_2
disk=d19   stor_dev=APM00073700689-000E addr=c0t1l10        server=server_2
disk=d14   stor_dev=APM00073700689-0013 addr=c0t1l13        server=server_2
disk=d14   stor_dev=APM00073700689-0013 addr=c16t1l13       server=server_2
disk=d16   stor_dev=APM00073700689-0008 addr=c16t1l4        server=server_2
disk=d16   stor_dev=APM00073700689-0008 addr=c0t1l4         server=server_2
disk=d11   stor_dev=APM00073700689-000B addr=c0t1l7         server=server_2
disk=d11   stor_dev=APM00073700689-000B addr=c16t1l7        server=server_2


4) Then rename the destination file system so that it takes over the original name:


$ nas_fs -rename gustavo_replica1 gustavo
id        = 660
name      = gustavo
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v1103
pool      = clarata_archive
member_of = root_avm_fs_group_4
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,virtual_provision=no
deduplication   = On
ckpts     = root_rep_ckpt_660_147328_2,root_rep_ckpt_660_147328_1
rep_sess  = 1027_APM00073700689_0000_1103_APM00073700689_0000(ckpts: root_rep_ckpt_660_147328_1, root_rep_ckpt_660_147328_2)
stor_devs = APM00073700689-0018,APM00073700689-001A,APM00073700689-0009,APM00073700689-002C
disks     = d33,d34,d36,d31
disk=d33   stor_dev=APM00073700689-0018 addr=c16t1l10       server=server_2
disk=d33   stor_dev=APM00073700689-0018 addr=c0t1l10        server=server_2
disk=d34   stor_dev=APM00073700689-001A addr=c0t1l13        server=server_2
disk=d34   stor_dev=APM00073700689-001A addr=c16t1l13       server=server_2
disk=d36   stor_dev=APM00073700689-0009 addr=c16t1l4        server=server_2
disk=d36   stor_dev=APM00073700689-0009 addr=c0t1l4         server=server_2
disk=d31   stor_dev=APM00073700689-002C addr=c0t1l7         server=server_2
disk=d31   stor_dev=APM00073700689-002C addr=c16t1l7        server=server_2

5) Delete the replication session (this will also remove the internal checkpoints used by the replication):


$ nas_replicate -delete gustavo_rep -mode both
OK

6) Unmount both file systems:

$ server_umount server_2 -p gustavo_orig
server_2 : done
$ server_umount server_2 -p gustavo
server_2 : done

7) Then mount the new file system on the original mountpoint:

$ server_mount server_2 gustavo /gustavo
server_2 : done

This ensures all exports and shares remain correct. Exports and shares point to the mountpoint, and since we are keeping the same mountpoint as the original file system, no changes are needed.
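After the remount, it may be worth a quick sanity check that the mount and exports came back as expected. A sketch of what that could look like on the Control Station (the exact option spelling may vary by DART version):

```
$ server_mount server_2 | grep gustavo    # confirm /gustavo is mounted read-write again
$ server_export server_2 -list -all       # confirm the NFS exports / CIFS shares still reference /gustavo
```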

8) Remove the original file system (this can be done a few days later):

$ nas_fs -d gustavo_orig

Note: If you have checkpoints on the PFS, you need to remove them before step 6, as well as any checkpoint schedules for the PFS. The checkpoint schedules will need to be recreated for the new file system.
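The checkpoint cleanup mentioned in the note might look something like this (a sketch only; the checkpoint and schedule names are hypothetical placeholders, and command options may differ by DART version):

```
$ fs_ckpt gustavo_orig -list                 # list user checkpoints of the original PFS
$ nas_fs -d <ckpt_name>                      # delete each user checkpoint (hypothetical name)
$ nas_ckpt_schedule -list                    # list checkpoint schedules
$ nas_ckpt_schedule -delete <schedule_name>  # remove the old PFS schedule (hypothetical name)
```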

ATTENTION: This procedure requires a brief disruption at steps 6-7, which should take around 1-2 minutes.

December 4th, 2010 06:00

Gustavo,

What happens to the original source-DR relationship? Did you have to cancel that relationship before you could create the local replication session? And now that you have a new source file system, what (if anything) do you do with the source-DR replication?

Thanks

December 4th, 2010 07:00

Hi dynamox,

Yes, if the original PFS was replicated, you would need to stop and remove that session before the procedure, then recreate it afterwards.

I wrote these steps up as a general procedure.

Gustavo Barreto.
