
February 20th, 2010 22:00

Disk groups on V-max

I have a new V-Max that I am testing. It has two drive types: EFD and FC 15k drives.

By default, the V-Max creates two disk groups, one per drive type.

I am planning to split the FC drives into two RAID types, RAID 1 and RAID 5. The V-Max allows both RAID types to be created on the same physical disks.

- Is there any way to configure more than one disk group from the FC drives? (I could not locate an option in SMC.) Can this be done at the BIN file level?

- When using Thin Provisioning, EMC's recommendation was to stripe the thin device and concatenate the data devices. I am not clear on that. Does it mean creating a thin device x times the size of the original data device and striping it, so that future additions would happen only on the back end? Am I thinking about this correctly?

- Meta devices: While creating meta devices, is there any way to ensure the hypers do not end up on the same disk? Is there any automatic meta creation on the V-Max?

Sharing your experiences would be appreciated.

130 Posts

February 22nd, 2010 12:00

Thanks. Please consider the other part of my question: how do I ensure it picks different disks?

Also, without thin devices the chunk written to a disk was 64 KB, but with thin devices it is 12 tracks (768 KB) — or at least data lands on the same disk until it fills that data extent.

I am wondering what would happen if multiple devices hit the same hyper, and whether there are performance concerns.

1 Rookie • 20.4K Posts

February 22nd, 2010 12:00

Yeah, the 64 GB limit has always been there for thick devices; I guess it's there to stay for thin as well.

108 Posts

February 22nd, 2010 17:00

Hi Srichev,

Phew, this thread got really busy, really fast. Boom and Dynamox (and the other contributors) are of course correct, but I just wanted to clarify my original answer to point 3 on meta volumes. What I said is correct and applies to physical meta volumes, i.e. meta volumes made up of different hyper volumes on physical disks. SymmWin will avoid using meta members that share the same physical disk. But I should have differentiated between physical meta volumes and thin meta volumes.

Again, as Boom has highlighted, the thin pool is a collection of hypers (with the protection method of your choice), and these are hopefully scattered across different physicals behind different DA slices. The data is automatically striped across the entire available thin pool, but this is divorced from the host-presentable thin device. The thin device is a cache-only volume, so selecting a striped thin meta (not recommended) or a concatenated thin meta (recommended) has no effect on performance, since this selection doesn't change the striping in the back-end thin pool. So striping regular "fat" volumes gives you a performance improvement on the Symmetrix back end, but this is not true of striped thin metas. This is highlighted in the Technical Note, which also details why striped thin metas are a bad idea, primarily:
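For reference, a concatenated thin meta of the kind recommended above would be built through symconfigure. This is only a sketch under assumptions: the device numbers (0100–0103) and pool name (FC_Pool) are hypothetical, and the exact command syntax should be verified against your Solutions Enabler documentation for your Enginuity level.

```text
# Hypothetical devices 0100-0103 and pool "FC_Pool" -- adjust for your array.
# Form a concatenated meta with 0100 as the meta head:
form meta from dev 0100, config=concatenated;
add dev 0101:0103 to meta 0100;
# Bind the thin meta head to the thin pool:
bind tdev 0100 to pool FC_Pool;
```

The command file would then be applied with symconfigure (preview, prepare, and finally commit).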

Caution: Striped thin metadevices cannot be expanded while preserving data. When a regular striped metadevice is expanded, a BCV meta must be established to the standard metadevice so that the data can be restored to the newly grown standard meta following the expansion. Because thin devices cannot have BCVs attached to them, meta expansion while preserving data is not possible with striped thin metadevices.

Best Regards,

Michael.

108 Posts

February 22nd, 2010 18:00

Hello All,

I forgot to mention that the 64 GB and 240 GB figures are again SymmWin limitations. The largest single discrete regular volume or thin device that can be created at Enginuity 5773 is 65,520 cylinders (about 64 GB, 1000x1000x1000 based); to present a larger volume to the host you need to create a meta volume. At Enginuity 5874 the largest single discrete regular volume or thin device is 262,668 cylinders (about 240 GB, 1024x1024x1024 based); again, to present a larger volume to the host you need to create a meta volume. The striping of the data devices in the thin pool is round-robin across all members. The stripe size is determined by the Enginuity code and is not user configurable, and the "chunk" sizes may change with future Enginuity releases. So, as Boom has already stated, it is 12 tracks for RAID-1, 12 tracks for RAID-5 (3+1), 28 tracks for RAID-5 (7+1), 24 tracks for RAID-6 (6+2), and 56 tracks for RAID-6 (14+2).
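The cylinder-to-capacity arithmetic above can be checked with a quick calculation, assuming the usual Symmetrix FBA geometry of 15 tracks per cylinder and 64 KB per track:

```python
# Assumed Symmetrix FBA geometry: 15 tracks/cylinder, 64 KB (65,536 bytes) per track
TRACKS_PER_CYL = 15
BYTES_PER_TRACK = 64 * 1024
BYTES_PER_CYL = TRACKS_PER_CYL * BYTES_PER_TRACK  # 983,040 bytes

def cyl_to_gb(cylinders, base=1000**3):
    """Convert a cylinder count to GB; base 1000**3 for decimal GB, 1024**3 for binary."""
    return cylinders * BYTES_PER_CYL / base

# Enginuity 5773 limit: 65,520 cylinders -> about 64 GB (decimal)
print(round(cyl_to_gb(65_520), 1))
# Enginuity 5874 limit: 262,668 cylinders -> about 240 GB (binary)
print(round(cyl_to_gb(262_668, 1024**3), 1))
```

This also confirms the chunk sizes quoted in the thread: 12 tracks × 64 KB = 768 KB.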

Regards,

Michael.

1.3K Posts

February 23rd, 2010 05:00

It may seem logical that, because a TDEV is a cache-only device, making a meta would only be needed for capacity, and that striping it would make no difference in performance.

However a TDEV is still a Symmetrix logical volume with some of the attributes of a regular logical.

The text below is from the document on Powerlink titled: Best Practices for Fast, Simple Capacity Allocation with EMC Symmetrix Virtual Provisioning Technical Note.

The third item below is in the process of being changed to enhance performance.  The 5874.207 release is better than the prior release, and the next release should be better still.

In most cases, EMC recommends using concatenated rather than striped metadevices with Virtual Provisioning. However, there may be certain situations where better performance may be achieved using striped metas.

With Synchronous SRDF®, Enginuity allows one outstanding write per thin device per path. With concatenated metadevices, this could cause a performance problem by limiting the concurrency of writes. This limit will not affect striped metadevices in the same way because of the small size of the metavolume stripe (1 cylinder or 1920 blocks).

Enginuity allows eight read requests per path per thin device. This limits the number of read requests that can be passed through to the thin pool regardless of the number of data devices that may be in it. This can cause slower performance in environments with a high read miss rate.

Symmetrix Enginuity has a logical volume write pending limit to prevent one volume from monopolizing writeable cache. Because each meta member gets a small percentage of cache, a striped meta is likely to offer more writable cache to the meta volume.

Before configuring striped metadevices, please consult with an EMC performance specialist.

Caution: Striped thin metadevices cannot be expanded while preserving data.
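A bit of arithmetic illustrates the read-concurrency point in the quoted note. This is a simplified sketch under assumptions: it treats the eight-reads-per-path-per-thin-device limit as applying per meta member, with a striped meta keeping all members busy while a concatenated meta funnels I/O to one member at a time.

```python
# Limit quoted from the Virtual Provisioning Technical Note above
READS_PER_PATH_PER_TDEV = 8

def max_concurrent_reads(busy_members, paths):
    """Upper bound on in-flight reads when `busy_members` meta members each
    accept READS_PER_PATH_PER_TDEV reads on each of `paths` paths."""
    return busy_members * paths * READS_PER_PATH_PER_TDEV

# A 4-member meta seen over 2 paths:
print(max_concurrent_reads(1, 2))  # concatenated worst case (one busy member)
print(max_concurrent_reads(4, 2))  # striped (all members busy)
```

The gap between the two numbers is why a high read-miss workload can run slower on a concatenated thin meta.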

1 Rookie • 20.4K Posts

February 23rd, 2010 17:00

I am curious why there is so much concern over the ability to increase meta volume size. I don't know anybody in the Unix world who expands meta volumes; every shop I worked at running multi-terabyte SAP/Oracle/PeopleSoft instances uses some form of LVM to manage storage, whether it's native LVM, Veritas, or ASM. You want to have more queues in the OS, and it's much easier to back up a few logical volumes than one huge one. For Windows there is always the option of using a BCV to protect data during expansion. If I want performance and don't care about volume expansion, should I stripe my thin meta?
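To make the LVM point concrete, host-side striping across several presented TDEVs looks roughly like this with native Linux LVM. This is a sketch only: the PowerPath pseudo-device names, sizes, and stripe parameters are hypothetical, and Veritas or ASM offer equivalent mechanisms.

```text
# Four TDEVs presented to the host as hypothetical PowerPath pseudo-devices
pvcreate /dev/emcpowera /dev/emcpowerb /dev/emcpowerc /dev/emcpowerd
vgcreate datavg /dev/emcpowera /dev/emcpowerb /dev/emcpowerc /dev/emcpowerd
# Stripe the logical volume across all four PVs (-i 4) with a 64 KB stripe (-I 64)
lvcreate -i 4 -I 64 -L 400G -n oralv datavg
```

Each physical volume keeps its own OS queue, which is where the parallelism comes from.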

1 Rookie • 20.4K Posts

February 23rd, 2010 18:00

The question that comes to my mind is plaid striping: first I have my striped RAID sets, then the virtual pool stripes my TDEV over multiple data devices, and then I create a striped meta on top of that. That's triple striping — and now I will present these TDEVs to ASM and Oracle will stripe again. That does not sound good.

130 Posts

February 23rd, 2010 18:00

For performance reasons, I would assign multiple volumes to the hosts and let LVM take care of the striping, which creates parallel queues for disk access instead of one big volume.

I assume care needs to be taken on the back end to make sure all the symdevs in the pool are spread across multiple DA pairs. One concern I am not sure about is the impact of double striping (in case the back-end data devices are RAID 5). Did anyone experience issues with double striping?

1.3K Posts

February 23rd, 2010 21:00

No argument here.  Too many levels of striping are not a good idea.  However the benefit may, in some cases, outweigh the penalty.   In the future these reasons for making a striped meta volume on VP (thin) may go away.

448 Posts

March 1st, 2010 04:00

Just to echo Quincy: I created 200 GB thin devices just last week. In SMC, when you click into the field that denotes the size of the device, there are some preset drop-down choices; you can simply type over the field manually to get a larger device.