February 7th, 2011 04:00

How many disk groups can be managed at most by a V-Max?

Hi all,

How many disk groups can be managed at most by a V-Max?
I build disk groups of 16 physical disks each and would need as many as 150 disk groups to populate 10 storage arrays, but I have heard that we cannot exceed 99 disk groups.
What are the limits under the 5874 and 5875 microcode?
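
For context, a rough back-of-the-envelope check of those numbers (a minimal Python sketch; the 2,400-drive maximum per frame is an assumption, not something stated in this thread):

```python
# Sanity check of the disk-group count implied by 16-disk groups
# (assumption: a fully populated V-Max frame holds 2,400 drives).
DISKS_PER_GROUP = 16
MAX_DRIVES_PER_FRAME = 2400        # assumed maximum drive count per frame
REPORTED_GROUP_LIMIT = 99          # limit discussed later in this thread

groups_needed = MAX_DRIVES_PER_FRAME // DISKS_PER_GROUP
print(groups_needed)                            # 150 groups for a full frame
print(groups_needed - REPORTED_GROUP_LIMIT)     # 51 groups over the reported limit
```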

108 Posts

February 9th, 2011 01:00

Hello Ppmatic,

I am the author of EMC internal solution emc136067 and I have just drafted a new solution, emc261699. This solution has only just been written (so it is not yet visible on Powerlink), and an EMC Engineering fix is still required. A SymmWin workaround has been provided in this solution, and your local CE/IDE/FSS can follow up on this.

Please contact your local EMC Support Representative and ask them to refer to emc261699.

Best Regards,

Michael.

2 Intern • 20.4K Posts

February 8th, 2011 04:00

You want to create 150 disk groups per VMAX? Why only 16 physical disks per disk group?

2 Intern • 20.4K Posts

February 8th, 2011 05:00

Why not add all the drives (of the same type) into one disk group? When you create your TDAT devices, they will span a lot more physical spindles and DA pairs.

12 Posts

February 8th, 2011 05:00

No. We want, for instance, to control how a RAID-6 14+2 TDAT volume spans the same 16 disks and to avoid spreading parity across disks that are not driven by the same DA pairs.
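
As a quick check of those numbers (sketch only; the mapping of one RAID member per physical disk is implied by the post, not stated explicitly):

```python
# RAID-6 14+2 has 16 members, matching one member per disk in a 16-disk group.
data_members, parity_members = 14, 2
print(data_members + parity_members)   # 16 -> one RAID member per physical disk
```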

We use the spreading algorithm of the thin pool to span our extents over more physical spindles and DA pairs.

12 Posts

February 8th, 2011 05:00

Yes, that's the goal. We use 16 disks per disk group because each disk group takes one disk per back-end loop, across the 16 back-end loops driven by 4 back-end directors in 2 engines. Sixteen disks per disk group also lets us add storage to a thin pool in appropriately sized increments.
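
A minimal sketch of that layout in Python (the split of four loops per director is an assumption for illustration; the post only gives the totals of 16 loops, 4 DAs, and 2 engines):

```python
# Build one 16-disk group: one disk per back-end loop, 4 loops per DA (assumed),
# 2 DAs per engine, 2 engines = 16 loops in total.
disk_group = []
disk_id = 0
for engine in (1, 2):
    for da in (1, 2):
        for loop in range(4):
            disk_group.append({"disk": disk_id, "engine": engine,
                               "da": f"E{engine}-DA{da}", "loop": loop})
            disk_id += 1

print(len(disk_group))   # 16 disks, each behind a different back-end loop
```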

I heard that up to 255 disk groups were available through Enginuity 5773, and that starting with 5874 the maximum disk group number is limited to 99 (decimal).

Is there any official statement about that?

I also heard that the maximum number of disk groups was limited in order to avoid performance issues, but I don't understand why.

Is the limit of 99 an Enginuity 5874/5875 limitation or just a SymmWin interface limitation?

Can it be bypassed in Enginuity 5874/5875?

2 Intern • 20.4K Posts

February 8th, 2011 08:00

Something seems odd about this approach; maybe the EMC folks can comment. Quincy?

1.3K Posts

February 8th, 2011 09:00

Yes, I thought this sounded a bit strange too. From what I gather, the goal is to keep a specific pool from having any TDATs that share the same drives. This might seem like a good approach, but I feel it will probably limit performance rather than help it overall. The best practice for a TDAT pool supporting a given set of TDEVs is to have eight active physical hypers per disk in the TDAT pool, or the smallest number of hypers that would provide the desired capacity.

Having all these disk groups is likely to result in disk bottlenecks.
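
A small worked example of that sizing rule (hypothetical numbers; the helper name, pool size, and 60 GB hyper size are illustrative, not from the thread):

```python
import math

# "8 active hypers per disk, or the smallest number that covers the capacity"
def active_hypers_per_disk(pool_capacity_gb, disk_count, hyper_size_gb, rule_of_thumb=8):
    needed = math.ceil(pool_capacity_gb / (disk_count * hyper_size_gb))
    return min(rule_of_thumb, needed)

# A hypothetical 20 TB pool on 64 disks built from 60 GB hypers:
print(active_hypers_per_disk(20_000, 64, 60))   # -> 6 hypers per disk is enough
```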

12 Posts

February 8th, 2011 09:00

The question was not about our storage design (we have between 4 and 55 physical TDAT splits per disk, depending on disk capacity, protection, and the maximum hypervolume size of 262,668 cylinders); it was about the newly discovered reduction to 99 disk groups starting with Enginuity 5874, whereas Enginuity 5773 allowed 255 disk groups.
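
For reference, a rough conversion of that hyper limit (sketch only; the geometry of 15 tracks per cylinder and 64 KB tracks is an assumption, not stated in this thread):

```python
# Convert the 262,668-cylinder hypervolume limit to gigabytes,
# assuming 15 tracks per cylinder and 64 KB per track.
TRACKS_PER_CYL = 15
KB_PER_TRACK = 64
MAX_HYPER_CYL = 262_668

max_hyper_gb = MAX_HYPER_CYL * TRACKS_PER_CYL * KB_PER_TRACK / 1024 ** 2
print(f"{max_hyper_gb:.0f} GB")   # roughly 240 GB per hyper
```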

What are the reasons for this limitation?

Can this limitation be circumvented by injecting a bin file?

12 Posts

February 9th, 2011 00:00

FYI, Primus emc136067 talks about this issue.

 

It looks like the issue I am having on the V-Max with the Enginuity 5874 release.

Is a workaround now available for Enginuity 5874/5875 releases?

Message was edited by: Michael Lee. Removed content of solution emc136067 as this is an EMC internal support article. Please refer to solution emc261699 for updates and progress on this issue.

448 Posts

February 10th, 2011 06:00

It sounds as though they are using disk groups to maintain RAID group affinity. I am not sure that I understand why you would need to do this.

I do use disk groups to make it easier to ensure we only put TDATs from specific drives into a thin pool. All disks that contain TDATs for a specified pool are placed in the same disk group. I have disk groups as small as 24 physicals and some over 100 in our V-Max.

108 Posts

February 10th, 2011 23:00

Hello All,

The solution has been published and the enhancement request has been submitted to EMC Engineering for their consideration.

Just FYI, the original request for 255 disk groups at Enginuity 5771 was made for the following reason:

The configuration in that case was not going to be permanent; it was just a way of targeting the placement of the migrated data. Once the data had been placed, the number of groups was to be changed back to a more manageable and reasonable number.

Regards,

Michael.

12 Posts

May 13th, 2011 07:00

Hello Michael,

What is the status of the enhancement request you submitted to EMC Engineering (OPT # 359440) requesting that they allow 255 (decimal) physical disk groups to be configured via the Add (wizard) on the SymmWin Disk Map screen?

When will the permanent fix be released?

Regards,

Patrick

108 Posts

May 15th, 2011 20:00

Hello Patrick,

I have updated the solution (emc261699) with the latest advice from Engineering; however, it is bad news. Engineering has rejected this enhancement request, stating that 100 physical disk groups should be sufficient for Enginuity 5874 and 5875. Please contact your local EMC Support Representative; they may be able to convince Engineering to "change their mind". Otherwise, your local team will need to investigate the workaround included in this article.

Best Regards,

Michael.

448 Posts

May 16th, 2011 05:00

There seems to be a misconception about how TDATs work; feel free to ignore this, as I am just attempting to follow the thought in this thread.

A TDAT is built on a RAID group and stays on that RAID group, whether it is in a disk group made up of only that RAID group or of many RAID groups. You then assign the TDAT to a thin pool, and unless you plan to make one thin pool per RAID group, this is where things change. When you bind a LUN into a thin pool, it is created as evenly as possible, in round-robin fashion, across all the TDATs in the thin pool. That is the whole point of the thin pool process: being able to thin provision, use more spindles on the back end, provide more flexibility, use FAST VP, and so on.
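
A minimal sketch of that round-robin spreading in Python (illustrative only; the names and extent counts are hypothetical, not the actual Enginuity allocation code):

```python
# Spread a TDEV's extents across the pool's TDATs in round-robin order.
def bind_tdev_round_robin(extents, tdats):
    placement = {tdat: [] for tdat in tdats}
    for i, extent in enumerate(extents):
        placement[tdats[i % len(tdats)]].append(extent)
    return placement

tdats = [f"TDAT_{n:02d}" for n in range(8)]        # a pool with 8 data devices
layout = bind_tdev_round_robin(range(100), tdats)  # 100 thin extents to place
print({t: len(e) for t, e in layout.items()})      # spread evenly, 12-13 per TDAT
```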

If you (for some reason) have to have specific data placement, then classic LUNs may be a better choice for you than thin pools.

Just so you are aware, I set up a tech refresh of a DB2 data warehouse, 15 partitions each over 2.5 TB, on thin pools, and it is performing exceedingly well. Each partition sits on a thin pool consisting of 24 drives (3 RAID-5 7+1 groups). I did have to "convince" the DB2 DBAs that they did not need to maintain data placement, as they had on the DMX3000 we migrated them from, to maintain performance; by convince I mean months of performance testing, and they still probably don't agree with me. They have a 24-CPU IBM P6 series machine with 12 HBAs running PowerPath.

2 Intern • 1.3K Posts

June 16th, 2011 03:00

12 HBAs! Or 12 paths?
