August 7th, 2012 08:00

Ask the Expert: Performance Calculations on Clariion/VNX

Performance calculations on the CLARiiON/VNX with RRR & Jon Klaus

 

Welcome to the EMC Support Community Ask the Expert conversation. This is an opportunity to learn about performance calculations on the CLARiiON/VNX systems and the various considerations that must be taken into account.

 

This discussion begins on Monday, August 13th. Get ready by bookmarking this page or signing up for email notifications.

 

Your hosts:

 


 

Rob Koper has been working in the IT industry since 1994 and for Open Line Consultancy since 2004. He started with the CLARiiON CX300 and DMX-2 and has worked with all newer arrays since, up to current technologies like the VNX 5700 and the larger DMX-4 and VMAX 20K systems. He's mainly involved in managing and migrating data to storage arrays over large Cisco and Brocade SANs that span multiple sites spread widely throughout the Netherlands. Since 2007 he has been an active member on ECN and the Support Forums, and he currently holds Proven Professional certifications such as Implementation Engineer for VNX, CLARiiON (Expert) and Symmetrix, as well as Technology Architect for CLARiiON and Symmetrix.

 


Jon Klaus has been working at Open Line since 2008 as a project consultant on various storage and server virtualization projects. To prepare for these projects, an intensive one-year barrage of courses on CLARiiON and Celerra yielded him the EMCTAe and EMCIEe certifications on CLARiiON and EMCIE + EMCTA status on Celerra.

Currently Jon is contracted by a large multinational and is part of a team responsible for running and maintaining several (EMC) storage and backup systems throughout Europe. Amongst his day-to-day activities are performance troubleshooting, storage migrations and designing a new architecture for the European storage and backup environment.

 

This event ran from the 13th until the 31st of August.

Here is a summary document of the highlights of that discussion, as set out by the experts: Ask The Expert: Performance Calculations on Clariion/VNX wrap up

 

 

The discussion itself follows below.

247 Posts

August 23rd, 2012 11:00

ankitmehta: Want to find information about Navisphere/Unisphere Analyzer & to determine if you have performance issues? Refer to KB article emc218359 #EMCATE - 8:50 PM Aug 23rd, 2012

75 Posts

August 23rd, 2012 11:00

As discussed on twitter I've been using this DIY heatmap to visualize the CLARiiON performance on one view: http://www.penguinpunk.net/blog/emc-diy-heatmaps/

Check out the sample output file: http://www.penguinpunk.net/blog/wp-content/uploads/2011/12/heatmap.html

@henriwithani

247 Posts

August 23rd, 2012 12:00

It's a wrap. The conversation continues on http://t.co/uYvJf5CQ Thanks to all participants #EMCATE -9:02 PM Aug 23rd, 2012

Seriously, thanks to all who participated in this tweetchat. It was fun and informative. We plan to do a lot more of these #EMCATE - 9:03 PM Aug 23rd, 2012

Thanks everyone, we had some great fun and good questions!

136 Posts

August 23rd, 2012 18:00

Could you share some experience (best practices) on LUN layout design for OLTP & OLAP applications?

Thank you!

2 Intern

 • 

5.7K Posts

August 24th, 2012 06:00

Thanks! This is good stuff!!!

2 Intern

 • 

5.7K Posts

August 24th, 2012 06:00

Absolutely! Let's do this again some time.

247 Posts

August 27th, 2012 02:00

Do you have an app in mind? EMC has quite a lot of best practices documents up on Powerlink and/or support.emc.com. Your best bet is to search there for a recommendation document.

But all those recommendations boil down to the random/sequential I/O patterns and whether the I/O is read or write. So if you know what your logs and/or database are going to do, you can do the math real quick using the previous posts in this thread.
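
As a rough illustration of that math, here is a minimal Python sketch. The write penalties (2 for RAID 1/0, 4 for RAID 5, 6 for RAID 6) are the usual rules of thumb, and the per-drive IOPS figure is an assumed ballpark for a 15k drive, not a vendor-published number.

# Rough front-end to back-end IOPS sizing sketch. Write penalties are the
# usual RAID rules of thumb; the per-drive IOPS figure is an assumption.
import math

WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def backend_iops(host_iops, read_fraction, raid="raid5"):
    """Convert host (front-end) IOPS into back-end disk IOPS."""
    reads = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    return reads + writes * WRITE_PENALTY[raid]

def drives_needed(host_iops, read_fraction, raid="raid5", iops_per_drive=180):
    """Minimum drive count for the workload, ignoring capacity,
    sequential streams and cache effects."""
    return math.ceil(backend_iops(host_iops, read_fraction, raid) / iops_per_drive)

# Example: 5000 host IOPS, 70% read, RAID 5, ~180 random IOPS per 15k drive.
print(backend_iops(5000, 0.70, "raid5"))        # 3500 reads + 1500 * 4 writes = 9500
print(drives_needed(5000, 0.70, "raid5", 180))  # 53 drives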

Personal experience: our OLTP apps are all sharing the same RAID 5 pool, but that's because the $$$ aspect has been a key driver lately. All things considered, performance is good at the moment, so I wouldn't change that for something more expensive. The massive number of drives absorbs pretty much every peak, and FAST VP / FAST Cache smooth it out even more.

The only pain point is a LUN that's asking for 10,000 IOps at 99.9% small random read. Response times are still good for that LUN, but you can imagine the impact on a CX4 at that point.

Given a bag of coins, I'd probably invest in a number of SSD drives, since the Tier 1 drives in the pool are getting hammered. I'd first use them to create a new RAID 5/10 group dedicated to that greedy LUN and see if the pool->RG switch brings down the SP utilization a bit. If that doesn't bring much improvement, I'd move those drives into the pool.
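
To put a rough number on that SSD idea: for an (almost) pure random-read LUN the write penalty hardly matters, so the drive count is simply the required IOPS divided by what one drive can deliver. The per-drive figures in the sketch below are assumed ballparks (roughly 180 random IOPS for a 15k drive, a few thousand for an EFD of that generation), not measured or vendor-published values.

import math

def drives_for_random_read(host_iops, iops_per_drive):
    """Pure random-read sizing: no write penalty, so drives = IOPS / per-drive IOPS."""
    return math.ceil(host_iops / iops_per_drive)

# The greedy LUN: 10,000 IOps at ~99.9% small random read (assumed per-drive numbers).
print(drives_for_random_read(10_000, 180))    # ~56 x 15k drives
print(drives_for_random_read(10_000, 2500))   # ~4 EFDs, before parity/hot spares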

2 Intern

 • 

5.7K Posts

August 27th, 2012 05:00

Nice case, Jon! 10,000 read IOps is quite a lot!

136 Posts

August 28th, 2012 18:00

Thanks for sharing, Jon.

247 Posts

August 28th, 2012 23:00

You're welcome!

Is anyone using snapshots? Do you guys know how to calculate the amount of IOps you need to reserve for the RLP LUNs? I feel a little COFW post coming up..

2 Intern

 • 

5.7K Posts

August 29th, 2012 03:00

Ok, I know this is going to be interesting, especially since VNX now has the new VNX Snapshot feature (compared to the use of the Reserved LUN Pool).

Please let us know how to calculate the worst case scenario on Snapshots!

247 Posts

August 29th, 2012 04:00

Haha yes, that's where you have to build on your write penalty knowledge, and then things get exciting!

To be perfectly honest, I do need to grab some whitepapers and notes to make a proper example, so I'll do that tonight. Hang tight!

2 Intern

 • 

5.7K Posts

August 29th, 2012 05:00

If I'm correct, the new VNX Snapshot technology no longer copies data on first write, but simply writes the new data into the same pool where the actual LUN is stored. This of course invokes the same write penalty as before on regular LUNs, except that there's no COFW anymore, so no piling up of COFWs! I'm very eager to see how the new way compares to the old way of storing old/new data.

67 Posts

August 29th, 2012 12:00

Yes, that makes sense. I thought my reply would appear in the relevant area of the FAST Cache discussion, but it appeared at the end, so my comment is out of context here; apologies.

The discussion at this end of the thread is about the new VNX Snapshot technology. This is now ROW (redirect on write) to a new location within the same VNX pool, and no longer COFW (copy on first write), which holds writes to the primary LUN until the original data has been copied to the reserved LUN pool. This definitely seems better for performance, as the write/change to the primary can occur straight away, and a read of the source LUN, for example, does not need to be constructed/read from two different places.
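
Purely as an illustration of that difference, here is a hedged sketch of the back-end cost per host write under the two schemes. It assumes RAID 5 (write penalty 4) for both the pool and the reserved LUN pool, models a COFW as one read of the original chunk plus one penalized write of that chunk into the RLP on top of the normal host write, and ignores snapshot metadata updates; the numbers are illustrative, not measured.

# Hedged comparison of back-end disk operations per host write with a snapshot
# active. Assumptions: RAID 5 everywhere (write penalty 4), a COFW costs one
# chunk read plus one penalized chunk write into the RLP, metadata is ignored.
RAID5_WRITE_PENALTY = 4

def cofw_cost_per_write(cofw_fraction):
    """Classic COFW snapshot: every host write pays the source LUN's write
    penalty; a fraction of writes also triggers the copy into the RLP."""
    copy_overhead = cofw_fraction * (1 + RAID5_WRITE_PENALTY)
    return RAID5_WRITE_PENALTY + copy_overhead

def row_cost_per_write():
    """VNX Snapshots (redirect on write): new data goes straight to a fresh
    location in the same pool, so only the pool's write penalty applies."""
    return RAID5_WRITE_PENALTY

# Example: 1000 host writes/s with 30% of them hitting not-yet-copied chunks.
print(1000 * cofw_cost_per_write(0.30))  # ~5500 back-end IOPS
print(1000 * row_cost_per_write())       # ~4000 back-end IOPS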
