
November 19th, 2010 07:00

Need to migrate File Systems to new LUNs

Hello,

This has probably already been covered in another thread...

I have an NS480 box running NAS code version 5.6.49-3.

At the moment, my clar5_performance pool is composed of 4x RAID 5 (4+1) groups of 450 GB disks (6.3 TB of data).

In one month I'll receive brand new 600 GB disks (probably 20), and I want to migrate all of my CIFS file systems onto them (consider that I'll configure all of the disks in RAID 5 [4+1] and create 2 LUNs per storage group...).

How should I do this? With Clariion LUN migration? With a file system copy?

Can I do it without a service outage?

Thanks for any answer.

PS: I'm using AVM

Riccardo Barone

8.6K Posts

November 22nd, 2010 05:00

riker82 wrote:

2) nas_pool -extend the system-defined pool in order to use the new LUNs

There is no need to manually extend a system pool - when the Celerra discovers new LUNs they automatically become eligible for the system pool that matches their profile, and they are "pulled in" on demand.

For the rest of the procedure, I would suggest looking at using multiple checkpoints and differential copies to minimize the downtime.

Or, if you have the license, at using full-blown Replicator, which does this automatically.
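A minimal sketch of the checkpoint/differential-copy idea, reusing the example names (fs22 as source, fs65 as rawfs target) from the fs_copy notes further down in this thread; the -fromfs differential option is quoted from memory, so verify it against the fs_copy man page for your NAS code:

# initial bulk copy from a first checkpoint while the source stays online
fs_ckpt fs22 -name fs22_ckpt1 -Create
fs_copy -start fs22_ckpt1 fs65 -option monitor=off

# later, take a second checkpoint and copy only the changes since ckpt1
fs_ckpt fs22 -name fs22_ckpt2 -Create
fs_copy -start fs22_ckpt2 fs65 -fromfs fs22_ckpt1 -option monitor=off

Only the final differential pass needs the source quiesced, which is what keeps the outage short.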

Rainer

1 Rookie • 20.4K Posts

November 19th, 2010 07:00

If you were to use nas_copy, here are my notes (they are written with fs_copy; substitute nas_copy as needed):

1. Unmount the target file system:

server_umount server_4 fs65

2. Convert the target file system from uxfs to rawfs:

nas_fs -Type rawfs fs65 -Force

3. Mount the target file system read-only:

server_mount server_4 -option ro fs65 /filesystem

4. Create a checkpoint of the source file system:

fs_ckpt fs22 -name fs22_ckpt1 -Create

5. Copy the checkpoint to the target file system:

fs_copy -start fs22_ckpt1 fs65 -option monitor=off

fs_copy -l (to see copy progress)

6. When fs_copy is finished you can delete the checkpoint created in step 4:

nas_fs -delete fs22_ckpt1 -o umount=yes

7. Mount the target file system read/write:

server_mount server_4 -option rw fs65 /filesystem

8.6K Posts

November 19th, 2010 07:00

The easiest and safest way to do that is to use Celerra Replicator.

1 Rookie • 20.4K Posts

November 19th, 2010 07:00

You can't use the Clariion LUN migrator with Celerra. If the source and target file systems will be the same, I would use nas_copy; if not, you could use server_archive.

366 Posts

November 19th, 2010 07:00

Hi,

One of the differences between Replicator V1 (fs_copy) and Replicator V2 (nas_copy) is that the destination file system no longer needs to be rawfs.

While those steps seem to be correct, I prefer to create a real replication session. The steps would be (a rough command sketch follows):

1) create a replication session to the destination file system with nas_replicate;

2) when in sync, switch it over;

3) delete the replication session;

4) umount the original file system;

5) umount the destination ( new ) file system;

6) mount the new file system on the original mountpoint.

There is a brief disruption, but depending on your environment it could be very short or even barely noticeable.
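A minimal command-level sketch of those steps, assuming a loopback session on server_2 with placeholder names (source fs_cifs1 mounted on /cifs1, target fs_cifs1_new, an interconnect called loopback); the exact nas_replicate options can vary by NAS code, so check the man page first:

# create a same-size destination file system on the target pool
nas_fs -name fs_cifs1_new -create samesize=fs_cifs1 pool=clar_r5_performance

# 1) create the loopback replication session
nas_replicate -create cifs1_mig -source -fs fs_cifs1 -destination -fs fs_cifs1_new -interconnect loopback -max_time_out_of_sync 10

# 2) when the session is in sync, switch it over
nas_replicate -list
nas_replicate -switchover cifs1_mig

# 3) delete the replication session
nas_replicate -delete cifs1_mig -mode both

# 4-6) remount the new file system on the original mount point
server_umount server_2 -perm fs_cifs1
server_umount server_2 -perm fs_cifs1_new
server_mount server_2 fs_cifs1_new /cifs1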

Gustavo Barreto.

75 Posts

November 19th, 2010 07:00

Thanks for your answer.

Yes, it's a copy within the same box. What do I need to do that? A checkpoint?

Is it disruptive?

366 Posts

November 19th, 2010 08:00

Hi,

fs_copy was used with Replicator V1 (NAS code 5.5), and nas_copy is used with Replicator V2 (NAS code 5.6, 6.0). Other than that, they should be very similar in terms of performance.

It will be a loopback replication, so the data does not go over the network. It's hard to estimate the transfer rate you will reach; it depends primarily on the back-end layout/performance.

You are right. The new disks will be added to the same pool (clar_r5_performance) as the original ones.

You have some options:

1) use MVM (Manual Volume Management) and create the new file systems on the new dvols;

2) create a user-defined pool and use that pool to create the new file systems;

3) fill up your clar_r5_performance pool BEFORE adding the new disks (create dummy volumes/file systems).

With option 3, the new capacity will then show as "potential" in the clar_r5_performance pool.

List your disks (nas_disk -l) and take note of the dvols created on the new disks.

After creating the new file systems, check where they were created with the nas_fs -i command and make sure they are on the new dvols (see the sketch below).

While the third option seems more complicated, the advantage is that you don't lose the system pool (clar_r5_performance) benefits.
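A rough command-line sketch of option 3; the dummy file system name, the sizes, and the dvol numbers are placeholders to adapt to your own configuration:

# see how much of the pool is used, available and potential
nas_pool -size clar_r5_performance

# soak up the remaining old-disk capacity with a dummy file system BEFORE the new disks arrive
nas_fs -name fs_dummy -create size=900G pool=clar_r5_performance

# once the new 600 GB disks are in, note their dvol numbers
nas_disk -l

# create the real target file system and confirm it landed on the new dvols
nas_fs -name fs_cifs1_new -create size=2000G pool=clar_r5_performance
nas_fs -i fs_cifs1_new

When the migration is done the dummy file system can simply be deleted, but as dynamox points out below, new file systems may then land on the old disks again unless those disks are removed.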

What are your plans for the old disks? Will you remove them from the Celerra?

Gustavo Barreto.

75 Posts

November 19th, 2010 08:00

What transfer speed should I expect from fs_copy or nas_copy? Which one is faster?

Another question: my new LUNs will be added to my clar5_performance pool. How can I be sure that the target file systems will be written ONLY on the new LUNs?

1 Rookie • 20.4K Posts

November 19th, 2010 09:00

Gustavo,

with option 3 you will always have to create dummy file systems to force new file systems onto the new drives; hopefully the old drives can be reclaimed from the pool, otherwise it's a PITA to manage.

366 Posts

November 19th, 2010 09:00

Hi dynamox,

Yes... you are correct. That's why I asked if he will remove the old disks from the Celerra.

75 Posts

November 22nd, 2010 00:00

Thanks Gustavo...

The third option is probably the best one for me, since I don't want to lose my system-defined pool benefits...

Just to summarize:

1) create dummy file systems in order to fill up the clar5_performance pool

2) nas_pool -extend the system-defined pool in order to use the new LUNs

3) create the new file systems

4) stop I/O on the original FS (by mounting it read-only?)

5) nas_copy the file systems to the new ones

6) at the end, unmount the old file system

7) mount the new one on the old mount point

Is that OK?

75 Posts

November 22nd, 2010 00:00

server_archive is the option that I'll use in the future for archiving old data on cheap disks...

Which software can use its API?

8.6K Posts

November 22nd, 2010 03:00

riker82 wrote:

At the moment, my clar5_performance pool is composed of 4x RAID 5 (4+1) groups of 450 GB disks (6.3 TB of data).

In one month I'll receive brand new 600 GB disks (probably 20), and I want to migrate all of my CIFS file systems onto them (consider that I'll configure all of the disks in RAID 5 [4+1] and create 2 LUNs per storage group...).

How should I do this? With Clariion LUN migration? With a file system copy?


Hi Riccardo,

I would suggest working with your local EMC technical resource to investigate the use of LUN migration in your specific scenario.

Clariion LUN migration can be used for Celerra LUNs under very specific circumstances; see Knowledgebase article emc144545, which says:

  • CLARiiON LUN Migrations are supported, but only under the following criteria. These are restrictions which MUST be followed carefully; otherwise data outages or data loss can be incurred.


  • When LUNs in a Celerra storage group are greater than an HLU of 15, the following rules apply to LUN migrations with regard to data LUNs:

    • The RAID type is the same (for example, RAID 5 -> RAID 5, or RAID 3 -> RAID 3; NOT RAID 5 -> RAID 3 or RAID 3 -> RAID 5)

    • The drive type is similar (FC allowed to FC, ATA allowed to ATA)

    • The number of physical drives in the RAID group is the same (source and target RAID groups)

    • The source and target LUNs must be identical in size (according to block count, not MB size)

You do need to be careful to understand how your file systems are laid out (to avoid migrating only parts of a file system) and to make sure the destination LUNs are *exactly* the same size - that's why I recommend having an expert from EMC take a look at your config. A quick way to check both is sketched below.
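As a rough illustration of those two checks - the file system name, SP address and LUN numbers are placeholders, and the -capacity switch of naviseccli getlun is an assumption you should confirm against your Navisphere CLI documentation:

# Celerra side: see which dvols (and therefore which LUNs) a file system actually sits on
nas_fs -info fs_cifs1
nas_disk -l

# CLARiiON side: compare source and target LUN sizes in blocks, not megabytes
# (look for "LUN Capacity(Blocks)" in the getlun output)
naviseccli -h <SP_A_address> getlun 30 -capacity
naviseccli -h <SP_A_address> getlun 45 -capacity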

Sure, you can use the nas_copy methods, but Clariion LUN migration would be the least amount of work (if you want to migrate *all* file systems) and would require no Celerra reconfiguration or disruption.

Rainer

P.S.: I think what it basically means is that the "old" LUNs and the "new" LUNs have to have the same AVM performance profile (end up in the same system pool).

8.6K Posts

November 22nd, 2010 03:00

riker82 wrote:

server_archive is the option that I'll use in the future for archiving old data on cheap disks...

Which software can use its API?

There is no API for server_archive - just the command line. You can also use ndmpcopy for copying files/dirs.

If you want real archiving / HSM-like functionality, there is the DHSM (FileMover) API, which is supported by EMC FMA (File Management Appliance, aka Celerra FAST) and a number of third-party products.

This will move your files automatically, according to pre-defined schedules, to another tier or storage system, and it leaves stubs in place so that everything still looks the same to the end user.

see http://powerlink.emc.com/km/live1/en_US/Offering_Basics/White_Paper/h6834-celerra-fully-automated-storage-tiering-ref-arc.pdf

There is also a downloadable "trial" virtual edition to test it - see http://virtualgeek.typepad.com/virtual_geek/2010/05/get-yer-emc-fma-virtual-appliance-here.html

Rainer

5 Practitioner • 274.2K Posts

November 22nd, 2010 12:00

The easiest and least disruptive way I have used is local replication. When you add your new 600 GB disks, put all those LUNs into a separate nas_pool.

Then, when you set up local replication, pick that new pool for your local destination file system.

All the target file systems will have "replica" in the name if you use the wizard to set up replication.

When you are ready for the cutover, just use the "switchover" option.

After you are done, if you want to keep your old file systems, rename them.

Then rename your new file systems to the original names. This is non-disruptive and easily done with Celerra Manager (or from the CLI, as sketched below).
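A minimal sketch of that flow from the CLI; the pool name, dvol numbers, file system names and the exact "replica" suffix are examples (the replication itself is set up with the wizard in Celerra Manager):

# group the new 600 GB dvols into a user-defined pool
nas_pool -create -name new600_pool -volumes d20,d21,d22,d23

# ... run the replication wizard, choosing new600_pool as the destination,
#     then use the "switchover" option when you are ready to cut over ...

# keep the old file system under a new name, give the replica the original name
nas_fs -rename fs_cifs1 fs_cifs1_old
nas_fs -rename fs_cifs1_replica1 fs_cifs1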
