Celerra R1+0 - 2 disk support only???

December 13th, 2011 07:00

Customer has a Celerra NS-120 supporting VMware datastores on NFS. The underlying file system is created on an AVM-provisioned R1+0 volume spread across a number of 2-disk R1 volumes on the underlying CX4.

This datastore experiences severe latency issues, and I believe it is because the Celerra has put most of the operational data on just a few of the underlying disks, i.e. the I/O workload is not being evenly distributed across the underlying d-vols.

I'd like to work with the customer to tear down the file system and re-create the d-vols on native R1+0 provided by the CLARiiON, to get a true workload distribution across the disks.

Looking at the documentation, it seems that native, more-than-two-disk R1+0 is not supported by the Celerra. However, I thought this had been supported in later DART releases.

Can someone clarify?

Thanks,

Eric

674 Posts

December 15th, 2011 02:00

Just look at the "Managing Celerra Volumes and File Systems Manually" manual.

From a support point of view, there is no restriction on how many "equal" LUNs (same disks, same LUN size and same RAID type) you can use for a filesystem, even if not every combination is a good idea.

Also think about a later filesystem extension. For example, if the original FS is striped across 30 LUNs, any extension with fewer than 30 LUNs will decrease performance.
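For illustration only, a minimal sketch of what a matching extension could look like from the Control Station CLI. The names (stv2, ufs1), the d-vol numbers and the stripe depth are hypothetical, and the exact arguments vary by DART release, so treat this as an outline and check the "Managing Celerra Volumes and File Systems Manually" guide:

# Hypothetical example - build the extension as a stripe across the same
# number of "equal" LUNs as the original filesystem (stripe depth in bytes).
nas_volume -name stv2 -create -Stripe 32768 d40,d41,d42,d43

# Extend the existing filesystem with the new stripe volume.
nas_fs -xtend ufs1 stv2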

And if you stripe across many spindles for performance reasons but use only part of the stripe for the performance-critical FS (slicing it out of this stripe), consider leaving the unused part of the stripe empty: if another FS is placed there, it will add load to the very spindles the first FS depends on.

If you decide to use a custom-defined pool (in Celerra wording), do not put LUNs or the Celerra dvols (e.g. d37) directly into this pool. Instead, create a striped volume across the dvols and put that into the pool; otherwise AVM will concatenate the dvols and you will not get the expected performance.
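A minimal sketch of that approach, with hypothetical names (stv_r10, r10_pool) and example d-vol numbers; the exact nas_pool options differ between DART releases, so verify against the manual:

# Hypothetical example - stripe across the R1/0 dvols first (stripe depth in bytes).
nas_volume -name stv_r10 -create -Stripe 32768 d37,d38,d39,d40

# Register the striped volume (not the individual dvols) in a user-defined pool,
# so AVM provisions from the stripe instead of concatenating dvols.
nas_pool -create -name r10_pool -volumes stv_r10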

674 Posts

December 13th, 2011 23:00

Celerra does not support more than 2 disks in RAID 1+0.

This is no different for VNX File or "later DART releases".

If you want to use RAID 1+0 disks for performance reasons, just manually create a striped volume using all the RAID 1+0 LUNs you want (the limit is 16 TB) and slice the filesystem space out of this volume.
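For illustration, a rough outline of that manual sequence on the Control Station, using hypothetical names (stv1, slv1, mtv1, fs_nfs1) and example d-vol numbers, sizes and stripe depth; the exact steps and arguments are documented in the "Managing Celerra Volumes and File Systems Manually" guide and may vary by release:

# 1. Stripe across all of the RAID 1+0 d-vols so I/O is spread over every mirror pair.
nas_volume -name stv1 -create -Stripe 32768 d20,d21,d22,d23,d24,d25

# 2. Slice the filesystem space out of the stripe (size here in MB, e.g. ~1 TB).
nas_slice -name slv1 -create stv1 1048576

# 3. Wrap the slice in a metavolume and create the filesystem on it.
nas_volume -name mtv1 -create -Meta slv1
nas_fs -name fs_nfs1 -create mtv1

The filesystem can then be mounted on a Data Mover and exported over NFS in the usual way.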

124 Posts

December 14th, 2011 07:00

Peter_EMC:

Thank you for the response.  Can you authoritatively say that this solution is supported by EMC?

Thanks,

Eric

8.6K Posts

December 15th, 2011 04:00

Or think about putting in some flash drives instead.
