
July 23rd, 2012 06:00

Concat or Striped meta tdevs

What is the best method for creating meta TDEVs in thin provisioning: concatenated TDEV metas or striped?

Also, what parameters need to be looked at when choosing between concatenated and striped?

859 Posts

July 23rd, 2012 06:00

If you have recent Enginuity then I would recommend striped. If you are on older code and would need to expand the metas frequently, then go for concatenated.

regards,

Saurabh

2 Intern • 20.4K Posts

July 23rd, 2012 06:00

This question came up recently; search this forum.

448 Posts

July 23rd, 2012 06:00

If meta expansion is a concern, concatenate.

If you want the best possible performance and are not planning to expand the meta volumes, stripe them. Striping takes advantage of being able to accept and push writes down to each meta member, so more meta members potentially means better performance (within reason).

Saurabh Rohilla said the same thing; I was just a bit more verbose.

1.3K Posts

July 23rd, 2012 07:00

I like to start with, if you do not need expansion, stripe.

If you need expansion but performance is more important than ease of expansion, stripe.

If expansion is more important than performance, then use concatenated.

The performance benefit is about more than writes: more members give the LUN a higher performance ceiling, and in most cases it has little to do with the amount of write cache available. Eight meta members, or maybe 16, is usually enough to get the maximum performance from an FA CPU.
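For anyone following along, the choices above map onto a symconfigure command file. A minimal sketch of forming a striped meta, assuming hypothetical device IDs (0100 as the meta head, 0101–0103 as members); exact syntax can vary by Solutions Enabler and Enginuity version, so check the product guide:

```
form meta from dev 0100, config=striped;
add dev 0101:0103 to meta 0100;
```

For a concatenated meta you would use config=concatenated instead; the file is then committed with symconfigure -sid <SymmID> -f <file> commit.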

20 Posts

July 24th, 2012 05:00

I have read in EMC's Virtual Provisioning notes that meta TDEVs are normally created concatenated.

So should I take it that, even on VMAX, striped is the best method to go with?

859 Posts

July 25th, 2012 00:00

Hi,

Check Quincy's reply for your answer.

regards,

Saurabh

110 Posts

September 20th, 2012 01:00

My understanding is that the data devices in the thin pool handle write I/Os in a striped manner.

Given that, wouldn't a striped thin device create a write performance issue, since the write I/O has to be split across all the meta members? For reads, I understand more spindles will help make things faster.

1.3K Posts

September 20th, 2012 06:00

Archuperi721,

Yes, the TDATs are used in 768K chunks, so the pool is already striped.

The writes to Symmetrix should always be to cache, and should not be waiting on the disks, so more spindles shouldn't make a difference.

The write performance we have been talking about here is to CACHE, not waiting for disks.
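To illustrate why the pool itself already behaves as striped, here is a toy model — not VMAX's actual allocator — that assumes extents are simply handed out round-robin across the TDATs:

```python
# Toy model of thin-pool extent allocation. VMAX allocates TDEV space from
# the pool in 768 KB extents (twelve 64 KB tracks); we assume a simple
# round-robin spread across TDATs purely for illustration.
EXTENT_BYTES = 768 * 1024


def tdat_for_offset(offset_bytes: int, num_tdats: int) -> int:
    """Return the (hypothetical) TDAT index backing a given TDEV offset."""
    return (offset_bytes // EXTENT_BYTES) % num_tdats


# A sequential write across 16 extents of a single TDEV still lands on all
# 8 data devices in the pool -- the pool itself provides the striping.
placements = [tdat_for_offset(i * EXTENT_BYTES, 8) for i in range(16)]
print(placements)
```

Under this assumption every data device in the pool services a share of even a purely sequential workload, which is the point Quincy is making about the back end.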

30 Posts

November 1st, 2012 12:00

Striped will give you better performance than concatenated in most cases. At low IOPS the performance difference between concatenated and striped will not be noticeable, but on a busy server driving high IOPS, such as a database host, you will get better performance with striped.

If you are running SRDF/S you should stripe. In recent Enginuity versions you can expand a striped meta (starting from 5875.267.201, at least from what I know).

7 Posts

January 31st, 2013 12:00

I had a major database issue on concatenated metas: read/write response times were LARGE, a few hundred milliseconds. After converting everything to striped, performance was MUCH better, with wait times in the 10s to 30s of milliseconds at peak load. At this point I create striped metas for any database server that looks significant.

You can still expand striped metas, but you have to:

1. Create a BCV meta of the same size, geometry, and striping as the original device.
2. Create slices with the same geometry as the meta members of the source (striped) device.
3. Expand the striped meta, using the protection BCV from step 1 and the slices from step 2.

Hope this helps.
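In symconfigure terms, the protected expansion described above looks roughly like this (device IDs are hypothetical; the BCV meta named in bcv_meta_head must match the source meta's size, stripe count, and stripe size):

```
add dev 0201:0202 to meta 0100, protect_data=TRUE, bcv_meta_head=01AA;
```

With protect_data=TRUE the existing striped data is preserved on the BCV while the stripes are re-laid across the enlarged member set.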

26 Posts

January 31st, 2013 17:00

Hi gungun9,

Always go for striped metas. Concatenated metas can suffer from the 5% device write-pending limit on the front end. Concatenated metas were the best practice for 5874 microcode, but now (since 5875 and above) we always suggest going with striped metas. Even if simplicity of expansion is a concern for you, with 5875 and above you can easily expand a striped meta.

Regards,

Kleanthis

278 Posts

February 1st, 2013 01:00

Hi GunGun9,

Using a volume manager or other type of virtualization on top of standard RAID-5 based LUNs (which is what you are doing) is one good way to improve I/O performance. One thing to consider is what happens when the application runs out of disk space and you need to add storage for capacity. If you are striping across a few LUNs in, say, a Veritas disk group, and you need to expand the volume, application performance may be affected during the expand operation.

 

LUN and RAID concatenation

If you are using LUN concatenation, you would simply add a new LUN to the group and expand the volume to include the new LUN, which would have less impact on the application. As long as the database is designed properly to use all LUNs in parallel, then using the concatenation method (something like one file system per LUN) would result in a trade-off between performance and ease of management for the application.

When using RAID-0 striping on top of RAID-5 LUNs, make sure your LUNs are all provisioned from different parity groups.

If you are using meta-volumes in the array, there may be a chance where you would stripe within the same parity group on the array, which could affect performance and defeat the reason to stripe in the first place. 

Database guys always like to go with larger numbers of smaller disks for performance. The problem is you can't buy 36 GB drives anymore, and 146 GB will be the smallest drive available fairly soon. Lots of smaller drives is good for random I/O, but fewer larger disks can work fine for sequential transaction log I/O. 

A good methodology would be to use partitioning within the RAID groups on the array to minimize seek time from the outermost disk cylinder to the end of the partition, which in effect creates your smaller drives for you. You could then use the outermost partitions on multiple RAID-5 groups to create your RAID-0 stripe within the volume manager. When assigning LUNs, use as many storage ports as possible, and spread the load across all of your HBA's within the server. This will not only increase the available queues for the operating system, but also maximize available bandwidth to the disks. 

As far as queue depth is concerned, most HBA drivers default to 32 per LUN and 256 per port. Storage array ports support either 256 or 512 queues per physical port (some support more, some less; check with your vendor). The trick is to use as many queues as you can without running into "queue full" conditions. You can change a driver's queue settings via the vendor's management application (such as Emulex's HBAnyware), then try increasing the depth per LUN to 64 or 128 and see what happens.

Using more LUNs per volume manager disk group is better than fewer, since this increases the available queues (at least 3 should be used per group for availability reasons).

It is also advisable to keep random and sequential workloads separate. Use different RAID groups for the workload type to keep them separate. It is also best practice to keep database log volumes on different physical spindles than the database itself.

26 Posts

February 1st, 2013 14:00

Dear NY Yankees,

It's not completely true regarding striping with the LVM.

LVM striping is useful mainly when you have a pure write I/O workload on the devices, e.g. Oracle redo logs, taking advantage of the multiple I/O queues that are created at the operating system level.

If you use LVM striping (instead of creating a meta) for a random workload (read and write I/O), you will defeat the Symmetrix prefetch algorithm, generating unnecessary back-end I/O operations (initiated by prefetch) and consuming cache slots that you will probably never use.

KR,

Kleanthis

1.3K Posts

February 4th, 2013 07:00

The difference in performance between concat and striped is generally not due to the increased WP limit. 

July 27th, 2016 19:00

Hi Dynamox,

I searched the forum but was unable to find it; could you please provide the link?

Thanks,

Abhimanyu
