
LUN migrator - celerra pool

April 26th, 2012 08:00

I know there are multiple old posts around the issue of "moving" a file system to get more performance. What I'd like to know is whether the newer DART code supports using the LUN migrator on the backend to move from, say, R5 to R10 RGs without having to use Replicator, and thus avoid any host downtime?

I know there are solutions out there from Gustavo and others, which all require a host outage of some sort, and we have an FS which is multiprotocol, very active, and 24x7, so unmounting and remounting is not really an option. Does the Celerra choke if you migrate LUNs on the backend, even with newer code revs?

Also, the DART is 7.0.40-1 and the reasoning is simple: large FS, only FS on the pool, and production systems 24x7. So, wouldn't the Celerra be smart enough to know (or have the code enhanced to allow it) that "all" the LUNs composing the pool were migrated to other RGs, and possibly a different RAID type, so that at the end the pool would "re-identify" itself as a new "clarxxx_r#" pool?
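(For reference, what I mean by "all LUNs composing the pool" can be seen from the Control Station with something like the commands below - the pool name is just a placeholder and the exact output varies by DART release:)

# List the Celerra storage pools and see which are in use:
nas_pool -list
# Show a pool's member volumes (and ultimately the dvols/LUNs behind it):
nas_pool -info clar_r10_pool
# Map dvols back to array LUNs and the storage profiles they were discovered with:
nas_disk -list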

thanks

8.6K Posts

April 26th, 2012 11:00

dynamox wrote:

end of year for 7.1 ?

Earlier.

But you should know that you won't get roadmap info on the forum.

157 Posts

April 26th, 2012 11:00

Well, I don't know if FAST VP is what I need. If you are saying I could convert an existing Celerra storage pool to thin VP on the NAS side, not the backend, then sure. Plus, I can't really wait for vaporware (solid, that is).

thanks

1 Rookie • 20.4K Posts

April 26th, 2012 11:00

end of year for 7.1 ?

14 Posts

April 26th, 2012 11:00

It sounds like what you are asking for is to use FAST VP with File, which we support today. In the next release (7.1) our block pools will support multiple RAID group types, e.g. EFD with RAID10 + SAS with RAID5 + NL-SAS with RAID6. FAST VP policy will then move the data around according to its temperature.

This is not exactly what you were asking for -- the new code will not rebrand an existing non-block-pool to a new name, e.g. change clar_r5_performance to clar_r5_economy -- that functionality will stay the same. But if you configure your backend with a FAST VP storage pool and provision space from it to the File side, then after the upgrade to the new code you will be able to extend that pool with other drive types and other RAID configurations. As a result, your slower file systems will automatically (in accordance with the rebalance policy) be migrated to slower drives.
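(As a rough illustration of that layout, not exact syntax - the SP address, disk IDs and pool name below are placeholders, and the flags can differ by FLARE release:)

# Create a block storage pool on the array (placeholder disks / name):
naviseccli -h spa_address storagepool -create -name NAS_Pool -rtype r_5 -disks 0_0_4 0_0_5 0_0_6 0_0_7 0_0_8
# Bind LUNs from that pool, present them to the Celerra storage group,
# then rescan from the Control Station so AVM builds the mapped file pool:
nas_diskmark -mark -all
nas_pool -list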

1 Rookie • 20.4K Posts

April 26th, 2012 12:00

Not asking for any commitments here... God forbid.

157 Posts

April 26th, 2012 12:00

I've re-read this a couple of times, and maybe it's more to do with me not knowing what is really there in the 7.0 code we are running. Here is the picture:

6 R10 LUNs across 6 RGs

All in one Celerra pool

1 big FS

1 share

Many CIFS hosts

24x7 no downtime

The goal is mainly a capacity increase, but also to understand whether it could be done for performance as well (or vice versa).

I am proposing migrating all 6 LUNs to R5 groups. If the next release will allow a TP (thin pool) to be created on the CX and those LUNs to be added to the existing Celerra pool, I suppose I get it. But then how do I reclaim the old R10 LUNs?
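(For clarity, by "migrating" I mean plain CLARiiON LUN migration on the backend, something like the following - the LUN numbers and SP address are placeholders:)

# Start moving source LUN 20 onto an equally sized R5 destination LUN 120:
naviseccli -h spa_address migrate -start -source 20 -dest 120 -rate medium
# Check progress:
naviseccli -h spa_address migrate -list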

Sorry for the lame questions. Must need coffee.

thanks

8.6K Posts

April 26th, 2012 13:00

No - the next release and FAST VP won't help you there if you are currently on traditional RAID groups - you would still have the problem of how to get your file system from the old LUNs to the new pool-based LUNs with the change in storage profiles.

Either test whether LUN migration works and gets approved in your case,

or use Replicator.

With Replicator you can make the "switchover" take a minute or so. Correctly executed, we have done it fast enough that VMware didn't time out and just kept on running.

CIFS clients aren't the problem - they will just reconnect the CIFS session as if you had had a network glitch.

If you also have NFS clients, then in your case of a local move it is a problem, since the NFS file handle will change and the NFS clients would have to remount.
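A very rough sketch of the loopback Replicator approach (the session, file system and pool names are placeholders; check the nas_replicate man page for the exact options on your DART release):

# Create a local (loopback) replication session, letting it build the destination fs on the target pool:
nas_replicate -create move_fs1 -source -fs fs1 -destination -pool clar_r5_performance -interconnect loopback
# Once the initial copy is in sync, do the cutover during a quiet moment:
nas_replicate -switchover move_fs1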

8.6K Posts

April 26th, 2012 13:00

I assume you are using traditional RAID groups and not CLARiiON pool-based LUNs - right?

I would suggest you ask your EMC pre-sales contact to take this to engineering via an RPQ.

The Celerra won't choke, but there are a couple of issues:

- typically AVM will put multiple file systems on a LUN (dvol), so you can't just "move" one file system

- the storage profile for a dvol is normally only set when the dvol is first discovered, and the info is put into various config files - that's how the system knows which system storage pool that dvol belongs to.

For normal operations it isn't a problem if the storage profile is incorrect. However, when you, for example, extend the file system (either manually or automatically as part of virtual provisioning) or create a checkpoint, this info is used to decide which pool to allocate the additional storage from. You might get extensions from the wrong pool, or errors.

The storage profile configuration isn't something you can change easily, but I think in the latest release AVM has become more clever.

If you have the space, try it with a small test file system.

Rainer

P.S.: If you use pool-based LUNs (as Mark mentioned) it's easier - there, if the pool contains multiple drive types, the storage profile for any dvol in there is MIXED anyway, so there isn't much to change.
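For the small test file system, something minimal like this would do (assuming a CIFS server already exists on server_2 - the names, size and pool are placeholders, and the flags are from memory):

# Create a small throwaway file system on the pool you want to experiment with:
nas_fs -name test_fs -create size=10G pool=clar_r10_pool
server_mountpoint server_2 -create /test_fs
server_mount server_2 test_fs /test_fs
server_export server_2 -P cifs -name test_fs /test_fs
# After migrating its LUNs on the backend, check what the NAS side still thinks it is built on:
nas_fs -info test_fs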

157 Posts

April 27th, 2012 12:00

OK, so I tried this as a test, since the suspense was, well, you know... (a rough CLI sketch of these steps follows the list):

  1. Created 2 LUNs on separate R10 RGs
  2. Created a stripe, created the meta, created the test pool
  3. Created a test FS on the new pool, shared it via CIFS, test files, blah
  4. Created 2 more LUNs on separate R5 RGs
  5. Migrated the R10 LUNs to the R5 LUNs
  6. Viewed the "Storage Protection" properties in Unisphere for the "disks" comprising the test FS: reported correctly as RAID5 (4+1), including the correct RG IDs for each "new" location.
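Roughly, in CLI terms (illustrative only - the dvol names, LUN numbers, size, stripe size and SP address below are placeholders, and the exact flags are from memory, so double-check against the man pages):

# Steps 1 and 4 (binding the test LUNs) were done on the array side; once the two
# R10 LUNs are in the Celerra storage group, discover them as dvols (say d20 and d21):
nas_diskmark -mark -all
# Step 2: stripe + meta + user-defined pool
nas_volume -name test_stv -create -Stripe 262144 d20,d21
nas_volume -name test_mtv -create -Meta test_stv
nas_pool -create -name test_pool -volumes test_mtv
# Step 3: test file system on the new pool
nas_fs -name test_fs2 -create size=50G pool=test_pool
# Step 5: migrate each backing LUN to its R5 destination on the array
naviseccli -h spa_address migrate -start -source 20 -dest 40 -rate medium
naviseccli -h spa_address migrate -start -source 21 -dest 41 -rate medium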

I checked the server log and saw no errors, nor anything funky in the console. So I understand the profile issue, the risk to AVM "intelligence", autogrow, etc., and also, to a certain extent, multiple FSs on the same pool, especially with checkpoint storage. But where else could I look in the NAS logs for any messages that might indicate "help me - I've fallen" or "meh, whatever" signs?
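(For context, the places I have looked so far, roughly - the mover name and log path are the usual defaults, adjust as needed:)

# Data Mover event log (where AVM / storage complaints usually land):
server_log server_2
# Control Station system log:
tail -200 /nas/log/sys_log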

I will take this to our local team and see about the RPQ, but it sure looks to me like a relatively straightforward process, given the "simple" config here to start with.

I will, though, now that I'm more curious than anything, try the IP Replicator/loopback process mentioned above.

thanks again
