
June 30th, 2015 10:00

New to VNX 5200

First off, I'm only a part-time storage admin with no familiarity with the VNX 5200, but some Clariion experience.

We just bought a pair of VNX 5200s with a single tray of 25 x 900GB drives.

I have some questions to get me going here.

1) Do I use the same Navisphere GUI and naviseccli CLI to admin the array?

2) With only 25 drives in the array, my thought was to create 1x hot spare and 2x 11+1 RAID 5 groups. Is there a better approach?

3) Does the array support NFS directly or do I need an external server to host NFS?

4) I will need to implement SAN Copy from the VNX (production) to an older Clariion CX4 (DR), but it will have to go over iSCSI, which I have never configured before. Is there a cheat-sheet on how to set up SAN Copy (or whatever it's called in VNX land) using iSCSI?

5) I'm currently using a CX4 to host LUNs for production (including ESXi), and will be migrating the LUNs to the VNX by presenting new LUNs and using LVM to migrate data. For ESXi, is there any special integration or capability the VNX offers that I can make use of?
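
For context, the host-based LVM migration I have in mind is roughly the following (the volume group and multipath device names are just placeholders):

# make the new VNX LUN a physical volume and add it to the existing volume group
pvcreate /dev/mapper/vnx_lun01
vgextend datavg /dev/mapper/vnx_lun01

# move all extents off the old CX LUN, then remove it from the group
pvmove /dev/mapper/cx_lun01 /dev/mapper/vnx_lun01
vgreduce datavg /dev/mapper/cx_lun01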

Thanks for any suggestions, and for your understanding that I am inexperienced with the VNX line.

2 Intern

 • 

222 Posts

June 30th, 2015 14:00

Thanks. I have had trouble with pools on a CX array, where you cannot shrink them once you have migrated LUNs onto them. I would probably go with 2 RAID groups to allow data to be moved if I need to change the disk configuration.

As for SAN Copy, it's a one-time sync from a clone to the DR site, so the "source" is static and periodically refreshed. Again, I have some CX experience, so I'm probably using the wrong terms here, but I don't need synchronous replication.

I'm pretty sure we didn't get any Data Movers, so, bummer, I will have to continue using host-based NFS.

Thanks again.

1.2K Posts

June 30th, 2015 14:00

Welcome aboard!

1) Yes - Navisphere (actually Unisphere on the VNX) GUI and naviseccli will both provide management functionality (there's a short naviseccli sketch at the end of this post).

2) If you're considering future expansion, or wish to leverage pools, you might want to create RAID 5 groups of 4+1 or 8+1.  Sure, this will use up more spindles for parity, but it will give you more flexibility to grow your pools.  Of course, if that isn't of interest, then just create RAID 5 groups of your choosing (the sketch below shows the commands).

3) No.  The VNX 5200 is a block-only array.  However, depending on your purchase, it might have included NAS Data Movers.  You'll want to check with your sales team to review your order and find out what was shipped.  If your order does not include a Data Mover and you decide not to purchase one, you can host NFS from a locally-attached server.

4) Are you leveraging SAN Copy for a one-time copy, or do you wish to keep the data in sync between the two arrays?  For the latter, you'll be using MirrorView instead of SAN Copy.  There are a few EMC videos on YouTube for configuring SAN Copy, but the trickiest part will be configuring a Unisphere domain.  EMC has white papers on the Support site about SAN Copy and Unisphere domains; a couple of useful iSCSI checks are also in the sketch below.

5) EMC offers a host of vSphere plug-ins for VMware integration.  This will give your VMware admins greater visibility into the storage from the vSphere client.  If you search for EMC Virtual Storage Integrator on the Support site, you can find the downloads and instructions for configuration.  It's pretty handy to have all the information you need in the vSphere web client!
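
To make answers 1, 2 and 4 a bit more concrete, here is a minimal naviseccli sketch. The SP address, credentials, RAID group/LUN numbers and disk positions are placeholders, and the switches for actually creating SAN Copy sessions vary by release, so treat this as a starting point and check the CLI reference for your VNX OE:

# 1) the same style of CLI management as on the CX (run against either SP)
naviseccli -h 10.0.0.10 -user sysadmin -password 'xxxx' -scope 0 getagent
naviseccli -h 10.0.0.10 getdisk -state
naviseccli -h 10.0.0.10 getlun

# 2) a traditional 4+1 RAID 5 group (ID 10) and a 500 GB LUN (LUN 20) on it
naviseccli -h 10.0.0.10 createrg 10 0_0_4 0_0_5 0_0_6 0_0_7 0_0_8
naviseccli -h 10.0.0.10 bind r5 20 -rg 10 -cap 500 -sq gb

# 4) useful checks when setting up SAN Copy over iSCSI
naviseccli -h 10.0.0.10 connection -getport   # front-end iSCSI ports and their IP config
naviseccli -h 10.0.0.10 sancopy -info         # existing SAN Copy sessions and their state

One other note on your hot spare question: on these newer VNX models you no longer bind a hot spare as its own RAID group the way you did on the CX; any suitable unbound drive is used according to the hot spare policy.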

Let us know if you have more questions!

Karl

8 Posts

July 1st, 2015 00:00

With a single 25-disk tray I suggest you configure the disks as follows:

The first four disks (0_0_0-0_0_3) as a 3+1 RAID group. These are the vault drives, which also hold the VNX Operating Environment. Best practice is NOT to put data on these disks, as you don't want I/O contention with the VNX OE. Nevertheless, I sometimes use this RAID group for static, largely read-only data like ISOs or VM templates, but never for VMs or DBs.

With the rest I would create a storage pool made of 4 RAID 5 (4+1) groups. This gives you one contiguous space for creating LUNs, and you don't have to worry about RAID groups anymore. It also gives you good performance, as the I/O is spread across all the disks. That leaves one disk as a hot spare.
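
A rough naviseccli version of that layout, assuming the vault drives are 0_0_0-0_0_3 and the last drive (0_0_24) stays unbound as the spare (the SP address, pool and LUN names are only examples):

# 20-drive RAID 5 (4+1) storage pool from the remaining disks
naviseccli -h 10.0.0.10 storagepool -create -name Pool_0 -rtype r_5 -disks 0_0_4 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9 0_0_10 0_0_11 0_0_12 0_0_13 0_0_14 0_0_15 0_0_16 0_0_17 0_0_18 0_0_19 0_0_20 0_0_21 0_0_22 0_0_23

# carve a 1 TB thick LUN out of the pool
naviseccli -h 10.0.0.10 lun -create -type nonThin -capacity 1024 -sq gb -poolName Pool_0 -name VMFS_01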

2 Intern

 • 

222 Posts

July 1st, 2015 06:00

I have been leery of pools, as they create an object that is just too large to manage.

My CX4s are constantly losing disks, which I can't replace easily, so pools were constantly getting holes in them and I had to migrate LUNs and rebuild into smaller chunks. Using RAID groups lets me have 2 or more, so I can move things around pretty easily; because the effective size of each RAID group is smaller, it is easier to manage.

I would especially be leery of creating all the storage in a single pool; at a minimum I would want 2 pools, so that if/when I need to shuffle, I have room to move.

That said, these do come with a 3-year support contract, so maybe it's not as hard to get replacement parts, and I could keep the pool(s) alive that way.

On a side note, does the VNX also have 2 storage processors, SPA/B like the CX?

2 Intern

 • 

222 Posts

July 1st, 2015 08:00

Whoops, mis-read your reply.

The VNX does have a 3-year support contract, but the older CX arrays do not. They are about 5 years old and failing left and right.

2 Intern

 • 

20.4K Posts

July 1st, 2015 08:00

Can you elaborate a little more on "so pools were constantly getting holes in them and I had to migrate luns and rebuild into smaller chunks"?


You are missing out on a lot of functionality/performance by sticking to traditional RAID groups: FAST, VNX Snapshots, etc.

2 Intern

 • 

222 Posts

July 1st, 2015 08:00

No, way too expensive for management to accept.

I have, on occasion, resorted to eBay to locate replacement drives, but basically we have just been shrinking the total available space by a handful of drives every year and re-organizing what's left into working RAID groups.

I'm not opposed to setting up pools, but as I said earlier, I'm wary of creating any storage container (pool or RAID group) larger than 50% of the entire space. I know that we got 3 years of support on the new VNX arrays, so we are probably good there; I'm assuming that the support will include sending replacement drives for free.

Right now, I'm all about learning the differences between my current CX arrays and the new VNX arrays, so I can hit the ground running when the new arrays show up in a few weeks. I will have to migrate all the CX LUNs to the VNX, since the deal we got from EMC included returning the old CX arrays. The biggest issue I have is that the new arrays have a much smaller spindle count than the old CX arrays, although the drive sizes are larger. On the CX arrays I had only about 50% space utilization, but on the new VNX arrays we will be over 95% capacity from the start. I've warned management that we will probably have to get another tray to expand onto, and that will give me more drives to work with. Perhaps at that time I can create 2 pools of equal size, which is what I would prefer based on my experience with the CX arrays.

Do you (or others) generally prefer one large pool on the array? If so, how do you deal with space issues that require moving drives? Also, do pools support drives of different sizes or speeds? I've never tried mixing drive types/speeds on the CX arrays; I didn't think it was a good idea.

2 Intern

 • 

222 Posts

July 1st, 2015 08:00

On the CX arrays I currently have, the disks keep failing; every few weeks we seem to lose another drive.

We eventually consume all the hot spares and are forced to reconfigure the disk groups, constructing new RAID groups with a full set of drives to get some hot spares back in service.

The CX arrays are unsupported, so getting replacement drives isn't an option; we continually have to downsize the RAID groups and move LUNs around to fit them in where space is available.

About a year ago I had set up a pool with 6 DAEs of 146GB drives in a single pool. When I ran out of hot spares, there was no way for me to reconfigure the pool, and I was forced to move all the LUNs off the array to another array in order to reconfigure the space around the failed drives.

I would never again create a pool larger than 50% of the array, since I needed the second pool to be able to move the LUNs and allow me to reconfigure (downsize) the pool to fit within the working drives.

Is there a better way to manage this issue that I'm not aware of?

2 Intern

 • 

20.4K Posts

July 1st, 2015 08:00

You don't have a support contract on the VNX?

2 Intern

 • 

20.4K Posts

July 1st, 2015 09:00

I don't know why you keep mentioning "moving drives"; I don't move any drives. A VNX under an active support contract will email home, and support will either send you a replacement drive or an EMC CE shows up and replaces it for you (depends on your contract). I have a pretty large pool (around 230 drives); some people don't feel comfortable with pools that big and go with smaller pools. Just remember that drives can only belong to one pool, so by putting many drives into one pool you get better performance, as your data is spread among more spindles. Have you had a chance to look at the VNX best practices white papers? They do a pretty good job covering concepts such as pools, failure domains, etc.
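
For what it's worth, on the VNX you can watch for exactly this situation from the CLI instead of physically shuffling drives (the SP address is a placeholder):

# any faulted components (drives, SPs, LCCs, DAEs)
naviseccli -h 10.0.0.10 faults -list

# state of every disk, including which unbound drives are available as spares
naviseccli -h 10.0.0.10 getdisk -state

# pool state and capacity
naviseccli -h 10.0.0.10 storagepool -list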

2 Intern

 • 

20.4K Posts

July 1st, 2015 09:00

You need to revisit your strategy for how you manage this array. I thought I posted the links earlier but don't see them now.

http://davidring.ie/2014/05/05/emc-vnx-mcx-hot-sparing-considerations/

http://www.storagenetworks.com/documents/emc/emc-vnx-best-practices-wp.pdf

2 Intern

 • 

222 Posts

July 1st, 2015 09:00

Just to clarify, the VNX is new to us, so we haven't moved drives around like we do with the CX arrays.

On the CX arrays, when drives fail, they leave holes in the DAEs when we remove them. I don't have drive blanks, so I have been taking drives from other DAEs to fill in the open slots for faulted drives, trying to keep the DAEs as full as possible. I've removed one DAE from the array because, after moving its 15 drives up to other DAEs, it was completely empty, so I just unplugged it to save power costs.

When I "move" drives on the CX arrays, I have sometimes had to re-create the RAID groups, since the sizes/speeds aren't always grouped together physically. So if I had a 7+1 in a set of slots, when I move drives to fill in holes, that RAID group might either move to different drive slots or shrink to a 6+1 or a 5+1, depending on drive availability.

If I had a large pool and started losing drives that I could not replace, the entire pool would fault and I would lose all the LUNs. That's why I need to keep at least 2 pools, so that I can shuffle LUNs around if I need to reconstruct a pool. With RAID groups I always have many, so shuffling LUNs around is not a problem, and I can usually find space to move LUNs to so I can re-organize the RAID group.

As for the white paper, I wasn't aware there was a best practices guide; I'll see if I can find it on Google and give it a look over.

2 Intern

 • 

222 Posts

July 1st, 2015 10:00

Say, question on expansion of VNX 5200.

In case my boss asks, where can I find the part numbers for the DAE's and drives for this type of array?

In the case of the CX4, I had a compatibility matrix that listed the part numbers, but I haven't found one for the VNX yet.

Also, does adding a DAE require a license of any kind?

2 Intern

 • 

222 Posts

July 1st, 2015 11:00

Thanks, this will be the first time I've actually had an array under support, so it should be interesting.

2 Intern

 • 

222 Posts

July 1st, 2015 11:00

Where can I find the "host agent" software? I tried looking at the URL below, but a search for "host agent" returned 0 results.

https://support.emc.com/downloads/
