
October 8th, 2008 11:00

best way to move file systems between storage pools

I'm trying to migrate a few file systems used for CIFS to a new, larger storage pool.
Is there an easy way to do this other than using robocopy and recreating the CIFS shares?
I was hoping there was a command within the Celerra to migrate a file system across storage pools without losing or recreating the CIFS information.

Thanks, jb

2 Intern • 20.4K Posts

October 10th, 2008 12:00

1) Create a file system that is exactly the same size as the source. In this example I am creating file system target_filesystem to be the exact same size as source_filesystem (my source)

nas_fs -name target_filesystem -type rawfs -create samesize=source_filesystem pool=symm_std storage=000290100123

2) Create a mountpoint and mount the target file system as read-only

server_mountpoint server_3 -create /target_filesystem

server_mount server_3 -option ro target_filesystem /target_filesystem

3) Create a checkpoint of the source file system

fs_ckpt source_filesystem -name fs_source_ckpt1 -Create

4) Copy checkpoint to target file system

fs_copy -start fs_source_ckpt1 target_filesystem

5) When fs_copy is finished you can delete the checkpoint created in step 3

nas_fs -delete fs_source_ckpt1 -o umount=yes

6) Unmount the source and target file systems

server_umount server_3 -perm /target_filesystem
server_umount server_3 -perm /source_filesystem

7) Rename the old file system to a temporary name, then rename the target file system to the original name

nas_fs -rename source_filesystem source_filesystem_old
nas_fs -rename target_filesystem source_filesystem

8) Mount the new file system

server_mount server_3 -o rw source_filesystem /source_filesystem
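
Once the new file system is mounted you can sanity-check the cutover with a few read-only commands (just a sketch, using the names from the example above):

server_mount server_3

(check that source_filesystem now shows up mounted read-write on /source_filesystem)

nas_fs -info source_filesystem

(check that the file system reported is the one you created on the new pool)

server_export server_3

(check that the existing CIFS shares still point at the same paths)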

90 Posts

October 8th, 2008 11:00

We did this just 3 weeks ago to move from DMX storage to CLARiiON. We presented the new storage, used fs_copy to copy the file systems, unmounted the old ones, shuffled the names, and remounted the new ones with the same mount names, and you are done. You will need to stop customer access during the fs_copy. We used replication for any non-replicated volumes, so the cutover was fast. We kept the fs_copy short by doing daily incremental copies until the cutover. If you have 5.6 you can replicate the replicated volumes as well, since 5.6 RepV2 supports one-to-many replication.
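
If it helps, each incremental pass looked roughly like this (a sketch from memory - the file system and checkpoint names are just placeholders, and the -fromfs option for differential copies should be verified against the fs_copy usage on your DART version):

fs_ckpt source_fs -name src_ckpt1 -Create
fs_copy -start src_ckpt1 target_fs

fs_ckpt source_fs -name src_ckpt2 -Create
fs_copy -start src_ckpt2 target_fs -fromfs src_ckpt1

The first pair is the baseline; each later pair only sends the blocks changed since the previous checkpoint, which is what keeps the final copy at cutover short.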

Let me know if you need specifics on how we did this or more info.

8.6K Posts

October 8th, 2008 11:00

please provide some more details, like

- are you just using CIFS ?
- do you have a Replicator license ?
- is the CIFS server part of a VDM ?
- how much downtime is acceptable ?
- why do you move the file systems ?
- what DART version ?
- by moving, do you mean copying onto another file system that is mounted on the same Data Mover?
- do you have a modem or ESRS dialin ?

108 Posts

October 8th, 2008 13:00

We have to do the same thing in our environment. I just want to merge a small pool into a big one.
Do you have any document describing the steps?
If you can, please send it to mahajana@un.org

Thanks,

8.6K Posts

October 8th, 2008 13:00

I just want to merge small pool into big one.


what *exactly* are you trying to do ?

System-defined pools are just a collection of LUNs that have the same performance criteria, so you can't merge them.

Even if you could, just changing the pool wouldn't affect file systems that were already carved from that pool.

If you need to merge file systems then you have to use file-based tools.

Replicator or fs_copy/nas_copy works on a block level for one *complete* file system - that's why it's fast and convenient and copies all the security attributes of files/dirs.
So you can't use it to "merge" two or more file systems into one - it's only one-to-one, and source and destination have to be exactly the same size.

For file-based tools your options are (see the robocopy example below):
- client side copy with emcopy/robocopy/rsync/SecureCopy/...
- server_archive
- NDMP backup and restore
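
For the client-side route, a robocopy run along these lines keeps the NTFS security info (just an illustration - \\oldserver\share and \\newserver\share are placeholder UNC paths):

robocopy \\oldserver\share \\newserver\share /MIR /COPYALL /R:1 /W:1 /LOG:copy.log

/COPYALL carries ACLs, owner and auditing info, /MIR mirrors the tree (and deletes extras at the destination on re-runs), and /R:1 /W:1 keeps it from stalling on locked files.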

90 Posts

October 8th, 2008 13:00

What steps or concepts do you need help with? I may be able to remove all the data that is specific to my company and insert dummy values to give you the steps. Let me look into that.

2 Intern • 20.4K Posts

October 8th, 2008 13:00

what is nas_copy ?

8.6K Posts

October 8th, 2008 13:00

nas_copy is the 5.6 version of fs_copy (or, better said, it's the RepV2 version of fs_copy)

see attached for usage

It does require a Replicator license though

1 Attachment

2 Intern • 20.4K Posts

October 8th, 2008 14:00

Oh nice ... I gotta hurry up and upgrade my NS80 to 5.6 ... well, after I migrate off CFS14 ;)

56 Posts

October 9th, 2008 06:00

This is great information thank you!!

But of course I have more questions :-)

My NS is a gateway unit.

I'm sorry, I'm not familiar with local Replicator or fs_copy.

If I have /fs_test on the raid3, will replicator or fs_copy allow me to create/copy/move /fs_test to the raid6 pool?

I'll run setup_clariion2 and post it.

56 Posts

October 9th, 2008 06:00

Here's the background.
When we first set up the NAS, we got some 500GB ATA drives; they had to be set up in a RAID3 (this was a few years ago).
I just got some more ATA drives, but they are 1TB drives, which I was told should be configured in a RAID6.
My RAID3 pool is about 98% used, so I want to migrate the file systems from that storage pool to the new larger RAID6 storage pool. Then I will take the 500GB drives and recreate them as a RAID6 group so they are in the same storage pool.
I'm running 5.5.35-0 code.

8.6K Posts

October 9th, 2008 06:00

ok - makes sense

RAID3 for ATA was the recommendation back then - nowadays you can also use RAID5 or RAID6

I assume you have a true NS Integrated (not a gateway), so you need to use one of the supported DAE templates

If you can post your current layout we can advise on what's best
see setup_clariion2 list config http://forums.emc.com/forums/message.jspa?messageID=487966#487966

For the data migration, the most convenient option with the least downtime is to use local Replicator (if you have a license) or fs_copy (a full copy plus some incrementals)
Since that is a block copy it will automatically take all the ACLs and tree quotas with it

It won't copy checkpoints though - so if you need to keep the old checkpoints around for a while, you either need to manually transfer them or keep the old file system around till you no longer need them

http://forums.emc.com/forums/thread.jspa?messageID=431057
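
If you want to see what checkpoints currently exist on the old file system, nas_fs -list shows them as ordinary entries (a rough check - grepping for the source name only catches checkpoints that kept their default names; fs_test is the name from your earlier post):

nas_fs -list | grep fs_test

nas_fs -info on a checkpoint should then show which file system it is based on.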

You should stop CIFS or bring down the interfaces during the last incremental copy and while you are re-mounting, to make sure you don't get any updates to the "old" file system

As sagle said - if you make sure that you are mounting the new file system on the same path that the old one was on, your CIFS shares are also going to work fine.

8.6K Posts

October 9th, 2008 07:00

My NS is a gateway unit.


fine - in that case you are not bound to DAE templates and don't use setup_clariion to create raidgroups and LUNs.
You use NaviSphere for that - see http://corpusweb130.corp.emc.com/ns/common/post_install/Config_storage_FC_NaviManager.htm
and pay particular attention to the HLU being larger than 16

what Clariion model and which Flare version are you using ?

I'm sorry, I'm not familiar with local Replicator or fs_copy.


no problem - just grab the Using Celerra Replicator manual from Powerlink
attached is the command-line usage of fs_copy

Replicator will first issue the fs_copy to get a baseline and then continuously send the changes to the other file system.
If you get EMC sales to allow you temporary use of a Replicator license you could cut your downtime to a few minutes
(effectively the time it takes to switch the replication and remount the file system)
If you decide to use Replicator just use the GUI - it will do a lot of things automatically and save time over the CLI

If not using Replicator then fs_copy standalone is described in the "Copy file system to multiple destinations with fs_copy" section of that manual

If I have /fs_test on the raid3, will replicator or fs_copy allow me to create/copy/move /fs_test to the raid6 pool?


yes, fs_copy only cares that the source and destination are exactly the same size
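
A quick way to confirm that before you start the copy (mostly a sanity check if you created the target with samesize=; fs_test_new is just a placeholder for whatever you name the new file system on the RAID6 pool):

nas_fs -size fs_test
nas_fs -size fs_test_new

The reported sizes need to match exactly, per the requirement above.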

Here's the basic flow:
- create your new raidgroups and LUNs and make sure the Celerra sees them
- create your new file system using the samesize option of nas_fs
- fs_copy for the baseline
- create checkpoints
- fs_copy incremental (once or more)
- stop user access (stop CIFS - see the sketch after this list)
- do the last fs_copy incremental
- delete the checkpoints used by replication
- umount src and dst
- mount dst on the same path the src was previously on
- restart CIFS
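
For the "stop CIFS" / "restart CIFS" steps, one way to do it (sketch only - check the server_setup usage first, and note that this stops CIFS for every share on that Data Mover; server_3 is the Data Mover from the earlier example):

server_setup server_3 -Protocol cifs -option stop

(last incremental fs_copy, umount, rename and remount go here)

server_setup server_3 -Protocol cifs -option start

Alternatively, as mentioned earlier, you can just bring down the interfaces the CIFS server uses for the duration of the cutover.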

Before you remove your old LUNs I suggest that you contact EMC customer service to dial in and make sure that they really aren't used any longer

I'll run setup_clariion2 and post it.


ok - try that. It might not show your layout correctly though, since it's more for NS Integrated units

56 Posts

October 9th, 2008 12:00

Here's the setup_clariion output:


System 10.30.34.31 is up
System 10.30.34.32 is up

Clariion Array: APM00064700107 Model: CX3-40 Memory: 4096

Enclosure(s) 0_0,1_0,0_1,1_1,1_2,1_3 are installed in the system.

Enclosure info:
----------------------------------------------------------------
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
----------------------------------------------------------------
1_3: 146 146 146 146 146 146 146 146 146 146 146
FC HS 65 65 65 65 65 52 52 52 52 52 EMP EMP EMP EMP R5
----------------------------------------------------------------
1_2: 146 146 146 146 146 146 146 146 146 146 146 146 146 146 146
FC 60 60 60 60 60 61 61 61 61 61 62 62 62 62 62 R5
----------------------------------------------------------------
1_1: 146 146 146 146 146 146 146 146 146 146 146 146 146 146 146
FC 50 50 50 50 51 51 63 63 63 63 63 UB UB UB UB MIX
----------------------------------------------------------------
0_1: 300 300 300 300 300 300 300 300 300 300 300
FC UB UB UB UB UB UB UB UB UB UB UB EMP EMP EMP EMP UB
----------------------------------------------------------------
1_0: 500 500 500 500 500 500 500 1000 1000 1000 1000 1000 1000 1000 1000
ATA 30 30 30 30 30 HS UB 35 35 35 35 35 35 UB 237 MIX
----------------------------------------------------------------
0_0: 146 146 146 146 146 146 146 146 146 146 146 146 146 146 146
FC 0 0 0 0 0 1 1 1 1 1 2 2 2 2 2 R5
----------------------------------------------------------------


Disk group info:
----------------
Disk Group ID: 0 r5 Disks: 0_0_0,0_0_1,0_0_2,0_0_3,0_0_4
Disk Group ID: 1 r5 Disks: 0_0_5,0_0_6,0_0_7,0_0_8,0_0_9
Disk Group ID: 2 r5 Disks: 0_0_10,0_0_11,0_0_12,0_0_13,0_0_14
Disk Group ID: 30 r3 Disks: 1_0_0,1_0_1,1_0_2,1_0_3,1_0_4
Disk Group ID: 35 r6 Disks: 1_0_7,1_0_8,1_0_9,1_0_10,1_0_11,1_0_12
Disk Group ID: 50 r1_0 Disks: 1_1_0,1_1_2,1_1_1,1_1_3
Disk Group ID: 51 r1 Disks: 1_1_4,1_1_5
Disk Group ID: 52 r5 Disks: 1_3_10,1_3_9,1_3_8,1_3_7,1_3_6
Disk Group ID: 60 r5 Disks: 1_2_0,1_2_1,1_2_2,1_2_3,1_2_4
Disk Group ID: 61 r5 Disks: 1_2_5,1_2_6,1_2_7,1_2_8,1_2_9
Disk Group ID: 62 r5 Disks: 1_2_10,1_2_11,1_2_12,1_2_13,1_2_14
Disk Group ID: 63 r5 Disks: 1_1_6,1_1_7,1_1_8,1_1_9,1_1_10
Disk Group ID: 65 r5 Disks: 1_3_1,1_3_2,1_3_3,1_3_4,1_3_5


Lun info:
---------
Lun ID: 0 RG ID: 0 State: Bound root_disk
Lun ID: 1 RG ID: 0 State: Bound root_ldisk
Lun ID: 2 RG ID: 0 State: Bound d3
Lun ID: 3 RG ID: 0 State: Bound d4
Lun ID: 4 RG ID: 0 State: Bound d5
Lun ID: 5 RG ID: 0 State: Bound d6
Lun ID: 16 RG ID: 0 State: Bound d10
Lun ID: 17 RG ID: 0 State: Bound d7
Lun ID: 18 RG ID: 1 State: Bound d11
Lun ID: 19 RG ID: 1 State: Bound d8
Lun ID: 20 RG ID: 2 State: Bound d12
Lun ID: 21 RG ID: 2 State: Bound d9
Lun ID: 22 RG ID: 60 State: Bound d14
Lun ID: 23 RG ID: 52 State: Bound d15
Lun ID: 24 RG ID: 52 State: Bound d16
Lun ID: 25 RG ID: 63 State: Bound d18
Lun ID: 26 RG ID: 63 State: Bound d21
Lun ID: 30 RG ID: 30 State: Bound d13
Lun ID: 35 RG ID: 35 State: Bound d22
Lun ID: 36 RG ID: 35 State: Bound d23
Lun ID: 40 RG ID: 60 State: Bound ??
Lun ID: 41 RG ID: 50 State: Bound ??
Lun ID: 42 RG ID: 51 State: Bound ??
Lun ID: 43 RG ID: 50 State: Bound ??
Lun ID: 45 RG ID: 52 State: Bound ??
Lun ID: 61 RG ID: 61 State: Bound ??
Argument "N/A" isn't numeric in sprintf at common.pm line 1161.
Lun ID: 63 RG ID: 0 State: Bound ??
Argument "N/A" isn't numeric in sprintf at common.pm line 1161.
Lun ID: 64 RG ID: 0 State: Bound ??
Lun ID: 65 RG ID: 65 State: Bound ??
Lun ID: 66 RG ID: 52 State: Bound ??
Argument "N/A" isn't numeric in sprintf at common.pm line 1161.
Lun ID: 67 RG ID: 0 State: Bound ??
Lun ID: 2042 RG ID: 60 State: Bound ??
Lun ID: 2043 RG ID: 62 State: Bound ??
Lun ID: 2044 RG ID: 61 State: Bound ??
Lun ID: 2045 RG ID: 62 State: Bound ??
Lun ID: 4094 RG ID: 60 State: Bound ??
Lun ID: 4095 RG ID: 61 State: Bound ??


Spare info:
-----------
Spare ID: 237 Disk: 1_0_14
Spare ID: 238 Disk: 1_3_0
Spare ID: 239 Disk: 1_0_5

8.6K Posts

October 9th, 2008 12:00

20 y 0 cx3
21 y 0 cx32 <----want to get rid off


these two don't look like system pools, so I don't know what's in them
Either you created a user-defined pool or you renamed a system pool

22 n 0 clarata_archive


yes, that's the system pool for RAID5 ATA disks

your ATA RAID3 should show up in the clarata_r3 pool