
August 14th, 2013 11:00

How to migrate the celerra_archive pool from SATA to FC drives

The celerra_archive pool is currently on SATA drives; we want to move the entire pool to FC drives.

Array: NS-480

FLARE: 30.x

NAS code: 6.x

The celerra_archive pool is configured with SCSI disks (CLARiiON LUNs).

Please advise on how to do this, and on any best practices.

August 22nd, 2013 20:00

Thanks for the reply,

I got the solution below from the ECN:

Migrate filesystem to new storage pool while preserving shares & quotas

It turns out that the quotas are filesystem-dependent, so the only way to migrate the filesystem with shares AND quotas intact is to recreate the quotas on the new filesystem prior to setting up the replication. Below are the steps I used with a test filesystem:

Migrating Celerra Filesystems to new Storage Pool (Keeping Quota and Shares intact)

NOTE:
Current Filesystem Name: TESTFS
New (target) Filesystem Name will be TESTFS_NEW
Preferred DM/vdm: vdm01
New Storage Pool: symm_new_pool

GATHER INFORMATION:

Get Current FS Size information:
nas_fs -size TESTFS
nas_fs -info TESTFS

$ nas_fs -size TESTFS
total = 10044 avail = 4682 used = 5361 ( 53% ) (sizes in MB) ( blockcount = 20889600 )
volume: total = 10200 (sizes in MB) ( blockcount = 20889600 ) avail = 4683 used = 5517 ( 54% )
$ nas_fs -info TESTFS
id        = 8070
name      = TESTFS
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v12723
pool      = symm_old_pool
member_of = root_avm_fs_group_21
rw_servers= server_2
ro_servers=
rw_vdms   = vdm01
ro_vdms   =
auto_ext  = no,virtual_provision=no
deduplication   = On
stor_devs = 000192602783-45B1,000192602783-45D1
disks     = d12,d13
disk=d12   stor_dev=000192602783-45B1   addr=c16t1l6-117-1  server=server_2
disk=d13   stor_dev=000192602783-45D1   addr=c16t1l7-117-1  server=server_2

Get Current Tree Quota Information for FS:
nas_quotas -list -tree -fs TESTFS
nas_quotas -report -tree -fs TESTFS

$ nas_quotas -list -tree -fs TESTFS
+------------------------------------------------------------------------------+
| Quota trees for filesystem TESTFS mounted  on /root_vdm_1/TESTFS:
+------+-----------------------------------------------------------------------+
|TreeId| Quota tree path (Comment)                                             |
+------+-----------------------------------------------------------------------+
|    1 | /TESTFS (.Testing.)                                                   |
+------+-----------------------------------------------------------------------+
$ nas_quotas -report -tree -fs TESTFS
Report for tree quotas on filesystem TESTFS mounted on /root_vdm_1/TESTFS
+------------+-----------------------------------------------+-----------------------------------------------+
| Tree       |                 Bytes Used  (1K)              |                    Files                      |
+------------+------------+------------+------------+--------+------------+------------+------------+--------+
|            |    Used    |    Soft    |    Hard    |Timeleft|    Used    |    Soft    |    Hard    |Timeleft|
+------------+------------+------------+------------+--------+------------+------------+------------+--------+
|#1          |     5487328|     5767168|     6291456|        |         589|           0|           0|        |
+------------+------------+------------+------------+--------+------------+------------+------------+--------+


CREATE TARGET FILESYSTEM

Create New storage:
nas_fs -name TESTFS_NEW -type uxfs -create size=10200M pool=symm_new_pool storage=000192602783 -auto_extend yes -vp no -hwm 90% -max_size 10240M -option slice=yes
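
Optionally, confirm the new filesystem landed in the intended pool before mounting it. This checks the same pool field shown in the TESTFS -info output above (grep just filters the Control Station output; expect pool = symm_new_pool):
nas_fs -info TESTFS_NEW | grep pool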

Mount New Storage on appropriate DM/vdm:
server_mountpoint vdm01 -create /TESTFS_NEW
server_mount vdm01 TESTFS_NEW /TESTFS_NEW

$ server_mountpoint vdm01 -create /TESTFS_NEW
vdm01 : done
$ server_mount vdm01 TESTFS_NEW /TESTFS_NEW
vdm01 : done

Create Quota for New Storage:
nas_quotas -on -tree -fs TESTFS_NEW -path /TESTFS -comment 'Testing'
nas_quotas -edit -tree -fs TESTFS_NEW -block 6291456:5767168 1
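
To confirm the new tree quota matches the source, re-run the report against the new filesystem and compare the Soft/Hard columns against the values gathered earlier (5767168 soft, 6291456 hard):
nas_quotas -report -tree -fs TESTFS_NEW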

Mount NEW FS as Read Only (so it can be used for replication):
server_mount vdm01 -Force -option ro TESTFS_NEW /TESTFS_NEW

Verify NEW FS is Read Only:
nas_fs -info TESTFS_NEW
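
In that output, the read-only mount should show vdm01 under ro_vdms rather than rw_vdms (same field names as in the TESTFS output above); a quick filter:
nas_fs -info TESTFS_NEW | grep vdms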

REPLICATION:

Create Replication:
nas_replicate -create TESTFS_REPLICATION -source -fs TESTFS -destination -fs TESTFS_NEW -interconnect loopback -max_time_out_of_sync 5 -overwrite_destination

List replication session:
nas_replicate -info -all
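
For a single session, you can narrow the output to the field referenced in the next step (grep runs on the Control Station):
nas_replicate -info TESTFS_REPLICATION | grep 'Current Transfer'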

When Replication is done (Current Transfer is Full Copy = No), Switchover:
nas_replicate -switchover TESTFS_REPLICATION

Check FS status to make sure TESTFS is RO and TESTFS_NEW is RW:
nas_fs -info TESTFS
nas_fs -info TESTFS_NEW

Verify no checkpoints exist:
fs_ckpt TESTFS -list
fs_ckpt TESTFS_NEW -list

Delete Replication Session:
nas_replicate -delete TESTFS_REPLICATION -mode both

SWAP FILESYSTEMS

Unmount Filesystems:
server_umount ALL -perm /TESTFS
server_umount ALL -perm /TESTFS_NEW

Rename Filesystems:
nas_fs -rename TESTFS TESTFS_OLD
nas_fs -rename TESTFS_NEW TESTFS
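
Verify the renames took effect; you should now see TESTFS (on symm_new_pool) and TESTFS_OLD:
nas_fs -list | grep TESTFS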

Remove the server_2 mount for the new FS left over from the replication switchover:
server_mount server_2 -option RO accesspolicy=NATIVE TESTFS /TESTFS
server_umount server_2 -perm /TESTFS

Mount new FS on preferred vdm:
server_mount vdm01 accesspolicy=NATIVE TESTFS /TESTFS

Verify that the shares work by accessing them, and run nas_quotas -report -tree -fs TESTFS to verify that the quotas see the new storage.
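
For example (both commands are used earlier in this procedure; server_mount with no further arguments lists the mounts on that mover/VDM):
server_mount vdm01
nas_quotas -report -tree -fs TESTFS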

CLEANUP

(If the original filesystem was on a VDM, there will be an orphaned mount/mountpoint for the new filesystem on server_2. This process cleans up that orphaned mount/mountpoint.)
Mount the old FS to remove the server_2 mount left over from the replication switchover:
server_mount server_2 -option RO accesspolicy=NATIVE TESTFS_OLD /TESTFS_NEW
server_umount server_2 -perm /TESTFS_NEW

Delete old FS:
nas_fs -delete TESTFS_OLD -Force
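
If you want to confirm the freed capacity was returned to the old pool, nas_pool can report its size (standard Celerra CLI; pool name from this example):
nas_pool -size symm_old_pool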


August 14th, 2013 11:00

create new file systems and use Celerra Replicator
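
For a migration within the same Celerra, the replication session can run over the loopback interconnect (as in the accepted solution above); you can verify the interconnect exists first (standard Celerra CLI; interconnect names vary per system):
nas_cel -interconnect -list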

August 15th, 2013 03:00

I want to move the entire pool to FC drives.


August 15th, 2013 04:00

Replicator

August 15th, 2013 20:00

Thanks for the reply,

August 21st, 2013 22:00

After the replication is done, if we do a failover, the filesystem is accessible through the replication interface.

How do we make it accessible through the same interface (IP)?



August 22nd, 2013 05:00

You don't assign an IP address to a filesystem. You assign an IP address to an interface and you use that interface for your CIFS server, NFS exports, etc.
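
For example, to list the interfaces on a Data Mover and to see the CIFS servers (with their interface bindings) on the VDM, using the server names from this thread:
server_ifconfig server_2 -all
server_cifs vdm01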
