Disk groups on V-max
I have this new V-Max and am testing it. It has two drive types: EFD and FC 15k drives.
By default, the V-Max creates one disk group per drive type.
I am planning to split the FC drives into two RAID types, RAID 1 and RAID 5. The V-Max allows both RAID types to be created on the same physical disk.
-Is there any way to configure more than one disk group from the FC drives? I could not locate an option in SMC. Can this be done at the BIN file level?
-When using Thin Provisioning, EMC's recommendation was to stripe the Thin device and concatenate the data devices. I am not clear on that. Does that mean create a Thin device x times the size of the original data device and stripe it, so that future additions would be only on the back end? Am I thinking about this right?
-Meta devices: while creating meta devices, is there any way to ensure the hypers do not end up on the same disk? Is there any automatic meta creation on the V-Max?
Sharing your experiences would be appreciated.
srichev
February 22nd, 2010 12:00
Thanks. Please think about the other part of my question: how do I ensure it picks different disks?
Also, without thin devices the chunk written per disk was 64KB, but with thin devices it is 12 tracks (768KB); or at least it loads data onto the same disk until it fills that data-extent portion.
I am wondering what would happen if multiple devices hit the same hyper, and whether there are performance concerns.
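The 768KB-extent behaviour asked about here can be modelled with a short sketch (my own illustration, not EMC code; the four-device pool and the allocation policy are assumptions for the example): sequential writes stay on one data device only until the current 12-track extent fills, then the next extent is taken from the next device in the pool.

```python
# Sketch (not EMC code): model how a thin pool round-robins 768KB
# extents across its data devices, so sequential writes stay on one
# hyper only until the current extent fills.

TRACK_KB = 64          # assumption: 64KB Symmetrix tracks
EXTENT_TRACKS = 12     # thin extent = 12 tracks = 768KB
EXTENT_KB = TRACK_KB * EXTENT_TRACKS
NUM_DATA_DEVS = 4      # hypothetical pool with 4 data devices

def data_dev_for_offset(offset_kb: int) -> int:
    """Return the pool data device (0..N-1) backing a thin-device offset."""
    extent_index = offset_kb // EXTENT_KB
    return extent_index % NUM_DATA_DEVS

# A stream of 64KB sequential writes: the first 12 (768KB) land on
# device 0, the next 12 on device 1, and so on.
hits = [data_dev_for_offset(i * 64) for i in range(24)]
print(hits)  # twelve 0s followed by twelve 1s
```

Under this model, a single hot 768KB region does concentrate on one hyper, which is exactly the multiple-devices-on-one-hyper concern raised above; a wider pool spreads the extents further.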
dynamox
February 22nd, 2010 12:00
mlee2
February 22nd, 2010 17:00
Hi Srichev,
Phew, this thread got really busy, really fast! Boom and Dynamox (and the other contributors) are of course correct, but I just wanted to clarify my original answer to point 3 on meta volumes. What I said is correct and applies to physical meta volumes, i.e. meta volumes made up of different hyper volumes on physical disk. SymmWin will avoid using meta members that share the same physical disk. But I should have differentiated between physical meta volumes and Thin meta volumes.

Again, as Boom has highlighted, the Thin pool is a collection of hypers (with the protection method of your choice), and these are hopefully scattered across different physicals behind different DA slices. The data is automatically striped across the entire available Thin pool, but this is divorced from the host-presentable Thin device. The Thin device is a cache-only volume, so selecting a striped Thin meta (not recommended) or concatenated Thin meta (recommended) construction does not have any effect on performance, since this selection doesn't change the striping on the back-end Thin pool. So striping regular "fat" volumes gives you a performance improvement on the Symmetrix back end, but this is not true of striped Thin metas. This is highlighted in the Technical Note, which also details why striped Thin metas are a bad idea.
Best Regards,
Michael.
mlee2
February 22nd, 2010 18:00
Hello All,
I forgot to mention that the 64GB and 240GB figures are again SymmWin limitations. The largest single discrete regular volume or single discrete Thin device that can be created at Enginuity 5773 is 65,520 cylinders (about 64GB, 1000x1000x1000 based); to present a larger volume to the host you need to create a meta volume. At Enginuity 5874 the largest single discrete regular volume or single discrete Thin device is 262,668 cylinders (about 240GB, 1024x1024x1024 based); again, to present a larger volume to the host you need to create a meta volume. The striping of the data devices in the Thin pool is round-robin across all members. The stripe size is determined by the Enginuity code and is not user configurable; the "chunk" sizes may change with future Enginuity releases. So, as Boom has already stated, it is 12 tracks for RAID-1 and RAID-5 (3+1), 28 tracks for RAID-5 (7+1), 24 tracks for RAID-6 (6+2), and 56 tracks for RAID-6 (14+2).
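The two cylinder limits work out numerically like this (my sketch, assuming the usual Symmetrix geometry of 15 tracks per cylinder and 64KB, i.e. 65,536-byte, tracks):

```python
# Sketch of the arithmetic behind the SymmWin volume-size limits,
# assuming 15 tracks per cylinder and 64KB (65,536-byte) tracks.

TRACKS_PER_CYL = 15
TRACK_BYTES = 64 * 1024

def volume_bytes(cylinders: int) -> int:
    return cylinders * TRACKS_PER_CYL * TRACK_BYTES

# Enginuity 5773 limit: 65,520 cylinders, quoted as ~64GB (decimal)
gb_5773 = volume_bytes(65_520) / 1000**3
# Enginuity 5874 limit: 262,668 cylinders, quoted as ~240GB (binary)
gb_5874 = volume_bytes(262_668) / 1024**3
print(round(gb_5773, 1), round(gb_5874, 1))  # ~64.4 and ~240.5
```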
Regards,
Michael.
Quincy561
February 23rd, 2010 05:00
It may seem logical that, because a TDEV is a cache-only device, making a meta would only be needed for capacity, and that striping it would make no difference in performance.
However, a TDEV is still a Symmetrix logical volume with some of the attributes of a regular logical volume.
The text below in italics is from the Powerlink document titled Best Practices for Fast, Simple Capacity Allocation with EMC Symmetrix Virtual Provisioning Technical Note.
The third item below is in the process of being changed to enhance performance. The 5874.207 release is better than the prior release, and the next release should be better still.
In most cases, EMC recommends using concatenated rather than striped metadevices with Virtual Provisioning. However, there may be certain situations where better performance may be achieved using striped metas.
With Synchronous SRDF®, Enginuity allows one outstanding write per thin device per path. With concatenated metadevices, this could cause a performance problem by limiting the concurrency of writes. This limit will not affect striped metadevices in the same way because of the small size of the metavolume stripe (1 cylinder, or 1920 blocks).
Enginuity allows eight read requests per path per thin device. This limits the number of read requests that can be passed through to the thin pool regardless of the number of data devices that may be in it. This can cause slower performance in environments with a high read miss rate.
Symmetrix Enginuity has a logical volume write pending limit to prevent one volume from monopolizing writeable cache. Because each meta member gets a small percentage of cache, a striped meta is likely to offer more writable cache to the meta volume.
Before configuring striped metadevices, please consult with an EMC performance specialist.
Caution: Striped thin metadevices cannot be expanded while preserving data.
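The SRDF write-concurrency point above can be illustrated with a toy model (my own sketch, not from the Technical Note; the member count and sizes are made up): with a concatenated meta, a sequential burst falls on a single member, so the one-outstanding-write-per-thin-device limit serializes it, while a 1-cylinder stripe spreads the same burst across all members.

```python
# Toy model of the Synchronous SRDF limit quoted above: one outstanding
# write per thin device (meta member) per path. Effective concurrency
# for a burst of sequential 64KB writes = distinct members touched.

CYL_KB = 960                  # assumption: 15 tracks x 64KB per cylinder
STRIPE_KB = CYL_KB            # striped thin meta stripe = 1 cylinder
MEMBER_KB = 8 * 1024 * 1024   # hypothetical 8GB meta members
MEMBERS = 4                   # hypothetical 4-member meta

def members_touched(start_kb: int, length_kb: int, striped: bool) -> int:
    touched = set()
    for off in range(start_kb, start_kb + length_kb, 64):  # 64KB writes
        if striped:
            touched.add((off // STRIPE_KB) % MEMBERS)
        else:
            touched.add(off // MEMBER_KB)  # concatenated layout
    return len(touched)

burst_kb = 4 * CYL_KB  # a 4-cylinder sequential burst
print(members_touched(0, burst_kb, striped=False))  # 1 write in flight
print(members_touched(0, burst_kb, striped=True))   # 4 writes in flight
```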
dynamox
February 23rd, 2010 17:00
dynamox
February 23rd, 2010 18:00
srichev
February 23rd, 2010 18:00
For performance reasons, I would assign multiple volumes to the hosts and let LVM take care of the striping, which creates parallel queues for disk access instead of one big volume.
I assume care needs to be taken on the back end to make sure all the symdevs in the pool are spread across multiple DA pairs. One concern I am not sure about is the impact of double striping (in case the back-end data devices are RAID 5). Did anyone experience issues with double striping?
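One way to reason about the double-striping concern is to trace a host offset through both layers (my sketch with made-up sizes, not a statement of how Enginuity maps extents): first through the LVM stripe across several TDEVs, then through the pool's 768KB extent round-robin.

```python
# Sketch (hypothetical sizes) of "double striping": the host LVM
# stripes across several TDEVs, and each TDEV's extents are in turn
# round-robined across the pool's data devices in 768KB chunks.

LVM_STRIPE_KB = 1024   # hypothetical 1MB LVM stripe
LVM_COLUMNS = 4        # LVM stripes across 4 TDEVs
EXTENT_KB = 768        # thin extent (12 tracks x 64KB)
POOL_DEVS = 8          # data devices in the thin pool

def backend_dev(host_off_kb: int) -> tuple:
    """Return (tdev, pool data device) backing a host offset in KB."""
    # 1) LVM layer: which TDEV, and at what offset within it?
    stripe_no = host_off_kb // LVM_STRIPE_KB
    tdev = stripe_no % LVM_COLUMNS
    tdev_off = (stripe_no // LVM_COLUMNS) * LVM_STRIPE_KB \
        + host_off_kb % LVM_STRIPE_KB
    # 2) Pool layer: extents round-robin across the data devices.
    return (tdev, (tdev_off // EXTENT_KB) % POOL_DEVS)

# Sequential 1MB host writes fan out across TDEVs and pool devices;
# the two stripe sizes (1024KB vs 768KB) simply interleave out of phase.
for off in range(0, 8 * 1024, 1024):
    print(off, backend_dev(off))
```

In this toy model the two stripe layers do not cancel each other out, they just produce a coarser interleave; the practical worry is alignment hot spots, which is worth checking against the actual pool layout.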
Quincy561
February 23rd, 2010 21:00
RobertDudley
March 1st, 2010 04:00