
Configure and Migrate MetaVolumes to Flash Drives?

August 3rd, 2011 06:00

We recently bought (or were given) 5 x Flash Drives (EFDs) to "try" in our VMAX.

Our configuration is this:

R1 is 1 x 69 GB MetaVolume made from 8 x 8.6 GB 2WM HyperVolumes.  (We have 100s of them.)

BCV1 is 1 x 69 GB MetaVolume made from 8 x 8.6 GB R5(3+1) BCVs.

BCVB is the same.

R2 is 1 x 69 GB MetaVolume made from 8 x 8.6 GB 2WM HyperVolumes.

GBCV (Gold BCV) is 1 x 69 GB MetaVolume made from 8 x 8.6 GB R5(7+1) HyperVolumes.

QBCV is 1 x 69 GB MetaVolume made from 8 x 8.6 GB 2WM HyperVolumes.

So, our basic scheme is to use 69 GB MetaVolumes, made up from 8 x 8.6 GB HyperVolumes.

Because we only have 5 x EFDs, we are going to configure them as R5(3+1) + 1 x Hot Spare.  So we will only have 1 x EFD "RAID Group".  There doesn't seem to be any value in making 8.6 GB HyperVolumes and then making them into 69 GB MetaVolumes - I'd have meta members on the same physical disk, and that doesn't seem to make sense, does it?  So it would seem logical to make 69 GB HyperVolumes on the EFD drives.
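
As a rough sanity check on the sizing, here is a small Python sketch of the arithmetic behind the two options (the member size and count are the ones above; the 4 data spindles assume the single R5(3+1) group, and this is just illustration, not a configuration recipe):

```python
# Rough sizing arithmetic for the two EFD layout options described above.
# Member sizes and counts are taken from the existing configuration;
# the R5(3+1) group leaves 4 physical EFDs to stripe across (1 drive is the spare).

HYPER_GB = 8.6          # size of each existing HyperVolume member
META_MEMBERS = 8        # members per existing MetaVolume
EFD_DATA_DRIVES = 4     # drives in the single R5(3+1) group

meta_gb = HYPER_GB * META_MEMBERS
print(f"Existing meta size: {meta_gb:.1f} GB")          # ~68.8 GB, i.e. the "69 GB" meta

# Option A: keep the 8 x 8.6 GB meta layout on the EFDs.
# With only 4 data drives, meta members must share physical EFDs.
members_per_drive = META_MEMBERS / EFD_DATA_DRIVES
print(f"Option A: {members_per_drive:.0f} meta members per physical EFD")

# Option B: a single 69 GB HyperVolume per LUN, striped across the
# R5(3+1) group by the RAID layout itself.
print(f"Option B: 1 x {meta_gb:.1f} GB HyperVolume per LUN")
```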

However, if I make 69 GB HyperVolumes on the EFD drives, then I CAN'T use "Virtual LUN Migration" to migrate my existing R1s to my new Flash Volumes - apparently "Virtual LUN Migration" only works if the source and target are similarly constructed (MetaVolumes made up of the same number and size of members), but of different RAID types.

Is this right?  What should I do here?

Also, if I could use "Virtual LUN Migration", will it migrate data on-the-fly while continuously and seamlessly "replicating" my R1s to BCV1, BCVB, R2 and GBCV, all concurrently?

   Stuart

1.3K Posts

August 3rd, 2011 06:00

Normally we tell folks never to put metas on the same raid group, but with EFDs it doesn't hurt since there are no "seeks".  You will get better performance from a meta volume on the 4 EFDs than just a single volume.

138 Posts

August 3rd, 2011 06:00

I will get BETTER performance configuring MetaVolumes on the same disks than if I just configured 69 GB volumes on the disks?  Odd!

Is my reading of the "Virtual LUN Migration" manual correct?  I MUST configure both source and target as MetaVolumes with the same number and size of members, but of different RAID types?

1.3K Posts

August 3rd, 2011 06:00

However, NEVER make a striped meta from hypers on spinning rust that sit on the same disks.

1.3K Posts

August 3rd, 2011 06:00

Not odd at all.  There are many code paths in a Symm volume that will limit concurrency.  More volumes, better concurrency.

August 4th, 2011 11:00

To answer the second part of your question:

"Also, if I could use "Virtual LUN Migration", will it migrate data on-the-fly while continuously and seamlessly "replicating" my R1s to BCV1, BCVB, R2 and GBCV, all concurrently?"

Yes, it will migrate data on-the-fly while continuously and seamlessly "replicating" to your BCVs and R2.

Check out this image:

[Attachment: Suported Configurations.jpg]

1.3K Posts

September 8th, 2011 18:00

Each Symm volume (single or meta) is seen by the host server as one physical disk. So where is this better concurrency coming from?

1.3K Posts

September 9th, 2011 03:00

In the Symmetrix internal code or Enginuity, that is where.

1.3K Posts

September 9th, 2011 06:00

The limitation described below is still valid, isn't it?

The host OS, device driver and HBA usually allocate a fixed set of resources for each volume, regardless of the size of the volume. This means that a large metavolume (16 members or more) may not have enough host resources or I/O bandwidth allocated to it by the host to satisfy the performance requirements. This is not a concern for single-threaded applications, applications with low I/O, or applications with a high cache hit rate, but other environments may see performance scale non-linearly with increasing I/O because of these issues.

The main issue is with host queues and "queue depth". Each volume recognised by a host gets an I/O queue with its own queue depth. Consider a 90 GB dataset, presented either as 10 x 9 GB volumes or as a single 10-member meta, with a queue depth of 8. The maximum number of outstanding I/Os as seen from the host is:

Max outstanding I/Os (non-meta) = 10 x 8 = 80
Max outstanding I/Os (metavolume) = 1 x 8 = 8

This can have three effects. Firstly, the non-meta layout has 10 queues that can be driven in parallel, whereas the meta has only 1 (without PowerPath). Secondly, the maximum number of outstanding I/Os is much lower in the meta case, so the Symm cannot group I/Os for destage as efficiently. Thirdly, each I/O waiting in the queue sees the response time of all the preceding I/Os in the queue plus its own, so with metas there is a greater chance of higher response times.

The queue depth is generally set at the OS, device driver and HBA levels, and is often a fixed value. Too high a queue depth can stall applications and must be considered carefully by an expert.

To a technical person this makes metavolumes sound like a bad move, but it really only affects applications doing a lot of I/O to a large metavolume with a low cache hit rate; most applications on Symmetrix do not hit this bottleneck. The convenience and striping of metavolumes are big advantages and should also be considered. It is also possible to avoid the bottleneck with metavolumes by increasing the queue depth where possible and appropriate, and by using PowerPath, which automatically gives each path its own queue.
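
Just to make that arithmetic concrete, a minimal Python sketch of the queue-depth calculation above (the volume count and queue depth of 8 are the figures from the example; real values depend on the OS, driver and HBA settings):

```python
# Maximum outstanding host I/Os for the same 90 GB dataset presented two ways.
# Queue depth and volume counts are the example figures from the post above.

QUEUE_DEPTH = 8

def max_outstanding_ios(volume_count: int, queue_depth: int = QUEUE_DEPTH) -> int:
    """Each host-visible volume gets its own queue of `queue_depth` slots."""
    return volume_count * queue_depth

print("non-meta (10 x 9 GB volumes):", max_outstanding_ios(10))    # 80
print("metavolume (1 host-visible LUN):", max_outstanding_ios(1))  # 8
```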

448 Posts

September 9th, 2011 06:00

If you are testing EFD drives, be careful in your testing and expectations.  Flash drives really shine for highly random read workloads.  In my experience you are not going to see a big gain on other workloads because of the cache in the array and pre-fetch algorithms.

I had DBAs run tests and say they only gained 3% performance on flash.  What were the tests?  Full reads and full writes in comparison to the FC drives, so they were running into caching and pre-fetch.  I explained why the tests were an invalid comparison.

I then turned the disks over to the system admin with instructions to run random reads via the dd utility on an AIX system with 12 HBAs and 20 active CPUs.  He managed to hit 44,000 IOPS on one 200GB flash LUN configured in a VP pool on RAID-5 7+1 EFDs.

1.3K Posts

September 9th, 2011 07:00

SKT, yes, more host LUNs can add more concurrency, more queues, etc.  The same happens inside the Symm.  More Symm volumes give more concurrency.

Robert, how do you perform random I/O with dd?  AFAIK, you can only do sequential I/O with dd.

448 Posts

September 9th, 2011 08:00

Our AIX system admin is quite good and found flag settings to enable different I/O patterns.  Unfortunately I no longer work at the same company as that person, but I could possibly find that out.

My point is that in testing of flash drives you may not see a noticeable difference compared to Fibre Channel disks, simply due to the other functions within a VMAX.  Writes are serviced by cache and sequential reads invoke pre-fetch, so you have to find a way to truly test flash.
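
If it helps, here is a rough Python sketch of the kind of random-read exerciser we are talking about, independent of dd. The device path, block size, span and duration are placeholders; a single thread like this will not get anywhere near 44,000 IOPS - the point is only the random, cache-unfriendly access pattern - and for a real test you would want O_DIRECT (or the platform equivalent) plus many parallel threads or processes.

```python
# Crude random-read exerciser - a sketch only.  DEVICE, SPAN_BYTES, BLOCK and
# DURATION are placeholders; point DEVICE at a test LUN you are allowed to read.
# Without O_DIRECT the host page cache can still satisfy repeats, so the result
# is indicative rather than a clean measure of the array.
import os
import random
import time

DEVICE = "/dev/rhdisk10"      # hypothetical AIX raw device path
SPAN_BYTES = 100 * 1024**3    # region of the LUN to read over (100 GB)
BLOCK = 8 * 1024              # 8 KB reads
DURATION = 30                 # seconds

fd = os.open(DEVICE, os.O_RDONLY)
ios = 0
deadline = time.time() + DURATION
try:
    while time.time() < deadline:
        # Pick a random, block-aligned offset inside the span.
        offset = random.randrange(0, SPAN_BYTES // BLOCK) * BLOCK
        os.pread(fd, BLOCK, offset)
        ios += 1
finally:
    os.close(fd)

print(f"{ios / DURATION:.0f} random read IOPS (single-threaded)")
```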

1.3K Posts

September 9th, 2011 08:00

I remember Quincy saying the ideal configuration is one EFD per engine, to balance the load on the DAs.

As far as I know, the engine counts come as 1/SE, 2, 4, 6, 8. If what Quincy said makes sense, then I am not expecting you to have an odd number (5) of EFDs on your system. Can you cross-check?

1.3K Posts

September 9th, 2011 09:00

It was 5 EFDs: 4 active in a 3+1 group, with 1 spare disk.  8 per engine is the optimal count (plus spares).

448 Posts

September 9th, 2011 09:00

You can have an odd number, as 5 drives would give you either 2 mirrored pairs or one RAID-5 3+1 group, each with a spare.
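
For a rough sense of the capacity trade-off between those two layouts, a small Python sketch, assuming a hypothetical 200 GB drive size (the actual EFD capacity is not stated in this thread):

```python
# Usable capacity for 5 EFDs under the two layouts mentioned above.
# DRIVE_GB is an assumption for illustration, not the real drive size.
DRIVE_GB = 200

# Option 1: one RAID-5 3+1 group + 1 hot spare -> 3 data drives' worth.
raid5_usable = 3 * DRIVE_GB

# Option 2: 2 mirrored pairs + 1 hot spare -> 2 data drives' worth.
mirrored_usable = 2 * DRIVE_GB

print(f"R5(3+1) + spare   : {raid5_usable} GB usable")     # 600 GB
print(f"2 x mirror + spare: {mirrored_usable} GB usable")  # 400 GB
```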
