Slow performance on EMC Vmax

February 20th, 2013 05:00

We have a new box with no load on it. We added some thin LUNs, bound to an FC thin pool, to a Sun server; FAST VP was never enabled at all. But the server admin complains that his IBM disk has better read and write response times than the new EMC disk: he tried creating 10 GB files on both the IBM array and the EMC array, and the IBM outperforms the EMC.

Everything has been set up properly, including flag settings and multipathing, but we still have poor performance. Does anyone have any suggestions or recommendations?
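For context, the "create a 10 GB file" comparison is essentially a timed large sequential write, which mostly exercises write cache and bandwidth rather than random-I/O response time. A minimal sketch of that kind of test (illustrative only; the mount point is a hypothetical placeholder) might look like this:

# Rough sketch of a "create a 10 GB file" timing test like the one described.
# The path is a hypothetical placeholder for a filesystem on the LUN under test.
import os
import time

TEST_FILE = "/mnt/emc_lun/testfile.dat"   # hypothetical mount point
FILE_SIZE = 10 * 1024**3                  # 10 GB, as in the comparison above
BLOCK = b"\0" * (1024 * 1024)             # write in 1 MB chunks

start = time.time()
with open(TEST_FILE, "wb") as f:
    written = 0
    while written < FILE_SIZE:
        f.write(BLOCK)
        written += len(BLOCK)
    f.flush()
    os.fsync(f.fileno())                  # push the data out of the host page cache
elapsed = time.time() - start
print(f"Wrote {FILE_SIZE / 1024**3:.0f} GB in {elapsed:.1f} s "
      f"({FILE_SIZE / 1024**2 / elapsed:.0f} MB/s)")

Note that without the fsync, a test like this can end up measuring the host page cache rather than the array.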

859 Posts

February 20th, 2013 06:00

How is he monitoring the response time?

467 Posts

February 20th, 2013 06:00

What do you see during his test on the array performance side?  Do you have any performance logging enabled?

20 Posts

February 22nd, 2013 22:00

We collected the TTP and BTP files and gave them to EMC; they are not seeing anything abnormal. But the Unix admin is confident about his claim and keeps generating the same report, with almost the same response times from the EMC. We have engaged EMC support; let's see what they come up with.

1.3K Posts

February 23rd, 2013 06:00

So they are using "create file" as a benchmark for the storage? I would suggest a more realistic benchmark that simulates your production load. I know you may create files during normal production, but I doubt this is the primary workload. I can think of several reasons why the file creation might be slower on the VMAX than on the IBM; if you really want to investigate this, please PM me.
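To illustrate the point about a more realistic benchmark (this is only one possible sketch, not the commenter's own method): a latency-oriented micro-benchmark could issue small random reads against an existing file or device and report per-I/O response times, which is much closer to an OLTP-style workload than creating one large file. The path and sizes below are hypothetical placeholders.

# Minimal random-read latency sketch (illustrative only). Issues small random
# reads against an existing large file or device and reports response times.
# Results can still be skewed by the host filesystem cache.
import os
import random
import time

PATH = "/mnt/emc_lun/testfile.dat"   # hypothetical: an existing large file or device
IO_SIZE = 8 * 1024                   # 8 KB reads, a common database I/O size
IO_COUNT = 2000

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)
latencies = []
try:
    for _ in range(IO_COUNT):
        offset = random.randrange(0, size - IO_SIZE)
        offset -= offset % IO_SIZE   # align to the I/O size
        t0 = time.perf_counter()
        os.pread(fd, IO_SIZE, offset)
        latencies.append((time.perf_counter() - t0) * 1000.0)  # milliseconds
finally:
    os.close(fd)

latencies.sort()
print(f"avg {sum(latencies) / len(latencies):.2f} ms, "
      f"p95 {latencies[int(0.95 * len(latencies))]:.2f} ms")

The main point is to match the I/O size, read/write mix, and access pattern to the actual production workload rather than to a one-off file copy.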

278 Posts

February 23rd, 2013 14:00

Hi Koteswaran,

Do these devices bound to the FC thin pools have SRDF?

20 Posts

February 23rd, 2013 21:00

Yes, we are using SRDF/A, but it should not create many problems. We also have the mainframes, just a dev environment, running on the 2-engine VMAX, and it is already suffering with 80% FA utilisation and 60% back-end utilisation during the mainframe batch processing. I'm not sure how the VMAX will suffer if we move our production mainframe data plus the open-systems servers onto it.

278 Posts

February 24th, 2013 04:00

Yes, SRDF/A probably should not affect the performance.

I faced an issue like yours, but with a new VMAX 20K and SRDF/S, because of GSW.

General Safe Write (GSW), if I remember well, can create performance issues; it adds around 1 ms to 1.5 ms of extra response time.

The issue I faced had to do with the redo logs of the applications. The customer ran billing cycles and some batch flows on specific dates in the month, and GSW added so much delay to the redo log writes that the database weighed down the whole environment. Finally GSW was disabled on the VMAX and the issues were resolved.

All of the above applies to SRDF/S; SRDF/A, again, should not affect performance on the devices.
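To make the redo-log impact concrete, here is a rough back-of-the-envelope calculation (the base service time is an assumed figure; only the 1 to 1.5 ms add-on comes from the post above). Redo writes are synchronous and largely serialized, so a fixed latency add-on translates directly into a lower ceiling on log writes per second.

# Back-of-the-envelope: effect of a fixed extra write latency on a single
# synchronous redo-log stream. Illustrative numbers only.
base_ms = 0.5        # assumed redo write service time without the add-on
extra_ms = 1.5       # upper end of the extra response time attributed to GSW

for label, latency in [("without GSW", base_ms), ("with GSW", base_ms + extra_ms)]:
    writes_per_sec = 1000.0 / latency   # one outstanding synchronous write at a time
    print(f"{label}: {latency:.1f} ms per redo write -> "
          f"~{writes_per_sec:.0f} log writes/s per stream")

# 0.5 ms -> ~2000 writes/s; 2.0 ms -> ~500 writes/s. For serialized commits
# that is a 4x drop in throughput, which fits the "weighed down" symptom.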

On the two-engine VMAX, are you using all the FAs (8 FAs) for your DEV environment?

What kind of procedures are running in that DEV environment?

20 Posts

February 24th, 2013 22:00

As of now, only one Sun box running an Oracle database is using the VMAX for open systems, with 4 paths, and the mainframe development batch runs between 5 and 9 PM. We don't have anything else on the box, but we are still getting poor response times.

278 Posts

February 25th, 2013 01:00

What kind of multipathing are you using, native or Veritas?

Do you have SPA?

Can you check whether the utilization of those 4 FAs is the same?

1.3K Posts

February 25th, 2013 05:00

When you say "poor response times", can you provide more detail?  Are we still talking about the create file test?

1 Message

April 9th, 2013 08:00

Koteswaran, did you find a resolution to this? We are experiencing similar problems with our new VMAX 10K using an AIX test server.

We are using NDisk64 as our load-testing utility. The test performs random I/O across a 1 GB file with a 1 MB block size. Our IBM 4800 starts at 80 IOPS and works its way up to 650 IOPS after a few tests. The VMAX starts at 340 IOPS and never goes any higher with subsequent tests. What is weird is that the VMAX performs the same for 10K SAS as it does for EFDs. My initial reaction was that caching is disabled, but if it were a caching issue, I would still expect the EFDs to drastically outperform the SAS in random I/O. Performing the same test with sequential I/O produces the same result of 340 IOPS.

We are currently in the process of trying to find Windows and Linux test systems to see if we get consistent results across platforms.
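One thing that may be worth checking here (an assumption, not a confirmed diagnosis): with a 1 MB block size, the IOPS plateau converts directly into bandwidth, and roughly 340 MB/s is close to what a single 4 Gb/s FC path can deliver. A path or port limit would also explain why 10K SAS and EFD, and sequential and random, all land on the same number. The link speed below is an assumed figure for illustration.

# Quick check: with a 1 MB block size, the reported IOPS plateau translates
# directly into bandwidth. The 4 Gb/s link speed is an assumption.
block_mb = 1.0
iops_plateau = 340
throughput_mbs = iops_plateau * block_mb
print(f"{iops_plateau} IOPS x {block_mb:.0f} MB = {throughput_mbs:.0f} MB/s")

# A single 4 Gb/s FC path tops out around 400 MB/s of payload, so ~340 MB/s
# is close to one-path saturation rather than a drive-tier limit.
fc_4g_mbs = 400
print(f"Utilisation of one assumed 4 Gb/s path: {throughput_mbs / fc_4g_mbs:.0%}")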
