August 7th, 2012 08:00

Ask the Expert: Performance Calculations on Clariion/VNX

Performance calculations on the CLARiiON/VNX with RRR & Jon Klaus

 

Welcome to the EMC Support Community Ask the Expert conversation. This is an opportunity to learn about performance calculations on the CLARiiON/VNX systems and the various considerations that must be taken into account.

 

This discussion begins on Monday, August 13th. Get ready by bookmarking this page or signing up for email notifications.

 

Your hosts:

 


 

Rob Koper has been working in the IT industry since 1994 and for Open Line Consultancy since 2004. He started with the CLARiiON CX300 and DMX-2 and has worked with all newer arrays since, up to current technologies like the VNX 5700 and the larger DMX-4 and VMAX 20k systems. He's mainly involved in managing and migrating data to storage arrays over large Cisco and Brocade SANs that span multiple sites spread widely across the Netherlands. Since 2007 he has been an active member on ECN and the Support Forums, and he currently holds Proven Professional certifications such as Implementation Engineer for VNX, CLARiiON (Expert) and Symmetrix, as well as Technology Architect for CLARiiON and Symmetrix.

 


Jon Klaus has been working at Open Line since 2008 as a project consultant on various storage and server virtualization projects. In preparation for these projects, an intensive one-year barrage of courses on CLARiiON and Celerra earned him the EMCTAe and EMCIEe certifications on CLARiiON, and EMCIE + EMCTA status on Celerra.

Currently Jon is contracted by a large multinational as part of a team responsible for running and maintaining several (EMC) storage and backup systems throughout Europe. Among his day-to-day activities are performance troubleshooting, storage migrations and designing a new architecture for the European storage and backup environment.

 

This event ran from the 13th until the 31st of August.

Here is a summary document of the highlights of that discussion, as set out by the experts: Ask The Expert: Performance Calculations on Clariion/VNX wrap up

 

 

The discussion itself follows below.

1.4K Posts

August 20th, 2012 18:00

Hi experts,

May I know the biggest difference in performance between VNX/CLARiiON and Symmetrix, from your perspective?

Thank you.

247 Posts

August 21st, 2012 02:00

Well, where to begin...

- The VNX is an active-passive array, meaning that although both Storage Processors are being used, only one SP actually owns any given LUN. So a LUN can only use 50% of the ports simultaneously.

- If you hit a storage processor bottleneck on the CLARiiON or VNX, it's not possible to add another SP. You can upgrade it (a data-in-place upgrade, but downtime is required), or you put a new VNX next to it. For a VMAX you can add more processing power: add another controller! (Up to the model limits, of course.)
- FAST VP on the VNX/CX uses chunks of 1GB, which are moved once every 24 hours. On a VMAX the chunks are 768KB (quite a difference! See the sketch below), which are moved multiple times per day (I can't remember exactly how often; I think once every hour or so).
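To put a number on that chunk-size difference, here's a quick back-of-the-envelope sketch in Python (the 10 TB pool size is just an example I picked; the chunk sizes are the figures above):

    # Rough comparison of the number of FAST VP chunks to track for a
    # given pool size. Chunk sizes are from the post above: 1 GB on a
    # VNX/CX, 768 KB on a VMAX. The 10 TB pool is an arbitrary example.
    POOL_TB = 10
    pool_kb = POOL_TB * 1024**3        # pool size in KB

    vnx_chunk_kb  = 1024**2            # 1 GB chunk, expressed in KB
    vmax_chunk_kb = 768                # 768 KB chunk

    print(f"VNX : {pool_kb // vnx_chunk_kb:>12,} chunks")   # 10,240
    print(f"VMAX: {pool_kb // vmax_chunk_kb:>12,} chunks")  # ~13.98 million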

No doubt a true Symm expert can keep expanding this list for a couple of hours.

Don't write off the VNX range if you need power: the bigger VNX systems are quite capable of providing a lot of performance. But if you need to scale up a lot, VMAX is something to think about...

5.7K Posts

August 21st, 2012 03:00

The number of BEs (back-end buses) is also something to consider. The largest V-MAX can have 128 BE connections (which comes down to 64 buses, because all DAEs are dual connected). On the front end, the largest V-MAX can have 128 FAs (front-end adapters, i.e. host connectivity ports). The largest VNX has 5 expansion slots per storage processor (which need to contain the same expansion modules in SPA and SPB). Each module can have 4 FA or 4 BE ports, so theoretically each SP can have 20 (5 x 4) ports, split between FA and BE ports. A more specific list can be found in the PDF I'll mention below (I had some trouble finding the right one).
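To make the port arithmetic explicit, a quick sketch of the numbers above:

    # The VNX port arithmetic from the paragraph above.
    SLOTS_PER_SP   = 5    # expansion slots per storage processor
    PORTS_PER_SLOT = 4    # each module carries 4 FA or 4 BE ports

    ports_per_sp = SLOTS_PER_SP * PORTS_PER_SLOT  # 20 ports per SP
    ports_total  = ports_per_sp * 2               # SPA + SPB (mirrored)

    # V-MAX back end from the same paragraph: 128 BE connections,
    # halved because every DAE is dual connected.
    vmax_buses = 128 // 2

    print(ports_per_sp, ports_total, vmax_buses)  # 20 40 64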

The largest VNX can have 1,000 disks, while a V-MAX can have 3,200 disks.

The amount of cache: the VNX 7500 has 96GB, while the V-MAX 40K can have up to 2TB of cache (8 x 256GB).

A complete list of the VNX internals can be found at: http://www.emc.com/collateral/software/specification-sheet/h8514-vnx-series-ss.pdf


136 Posts

August 21st, 2012 23:00

Thank you both. BTW, FAST VP storage tiering could cause performance degradation during production hours. My question is: does the FAST VP software prioritize I/O? I mean, when a tiering operation conflicts with normal I/O, will FAST VP temporarily stop moving data and let the normal I/O finish first?

136 Posts

August 22nd, 2012 00:00

Understood. Another question: the current storage tiering is block-based, but I was told file-level storage tiering would be more efficient. What do you think?

247 Posts

August 22nd, 2012 00:00

Steve, it's always best to schedule pool relocations during off-hours. If you're in a 24/7 business then you've got a slight problem and the time with the lowest overall system load will have to do.

You can tune the relocation rate a bit, so find the best relocation rate for your system which doesn't impact production too much.

[Image: RelocationRate.jpg]

If you put it on high and your normal I/O is already stressing the system a lot, you will notice a performance impact.

5.7K Posts

August 22nd, 2012 03:00

I stumbled on an interesting calculation discussion which was posted while I was on holiday: https://community.emc.com/message/656985#656985

Take a look at how the math is done! Very interesting.
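For anyone who can't reach the link, the flavor of the math is the classic disk-count calculation below (a sketch only: the per-disk IOPS figures and RAID write penalties are the usual rules of thumb, and the workload numbers are made up for illustration):

    # Generic front-end to back-end IOPS translation, the kind of math
    # discussed in the linked thread. Per-disk IOPS figures and RAID
    # write penalties are common rules of thumb, not exact specs.
    import math

    host_iops  = 5000    # example workload
    read_ratio = 0.7     # 70% reads, 30% writes

    WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}
    DISK_IOPS = {"15k_fc": 180, "10k_sas": 140, "7k2_nlsas": 80}

    def disks_needed(raid, disk_type):
        reads  = host_iops * read_ratio
        writes = host_iops * (1 - read_ratio) * WRITE_PENALTY[raid]
        return math.ceil((reads + writes) / DISK_IOPS[disk_type])

    print(disks_needed("raid5", "15k_fc"))   # (3500 + 1500*4) / 180 -> 53
    print(disks_needed("raid10", "15k_fc"))  # (3500 + 1500*2) / 180 -> 37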

247 Posts

August 22nd, 2012 04:00

Well, if we look at the CX/VNX, you're promoting or demoting a 1GB chunk. That chunk could contain many files and folders, some of them perhaps very active and others not used for years. If you could do this tiering based on files or on smaller chunks, that would certainly allow for a smaller tier 1 and thus a cheaper system.

The problem is that if you make the chunks very small (or start tracking individual files), you'll need to keep track of many more chunks/items. That will inevitably increase the load on the SP. So it's a bit of a balancing act: efficient tiering (small chunks) vs. low SP load (larger chunks).
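To put rough numbers on that balancing act, here's a sketch (the pool size and the 64 bytes of metadata per chunk are invented figures for illustration, not EMC specs):

    # The tiering granularity trade-off: smaller chunks give finer
    # tiering but multiply the number of chunks the SP must track.
    # Pool size and per-chunk metadata cost are invented figures.
    POOL_MB = 50 * 1024 * 1024     # a 50 TB pool, arbitrary example
    META_BYTES_PER_CHUNK = 64      # hypothetical tracking cost

    for chunk_mb in (1024, 256, 64, 8):   # 1 GB down to 8 MB
        chunks = POOL_MB // chunk_mb
        meta_mb = chunks * META_BYTES_PER_CHUNK / 1024**2
        print(f"{chunk_mb:>5} MB chunks -> {chunks:>10,} chunks, "
              f"~{meta_mb:,.1f} MB of tracking metadata")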

136 Posts

August 22nd, 2012 19:00

My thinking is: iSCSI relies on TCP/IP, which has much more overhead than FC. So for an IOPS-intensive app (small, random I/O), performance over iSCSI will be worse than over FC, because each small I/O needs to be packaged into a corresponding IP packet, which is really inefficient. FC, on the other hand, has lower overhead under the same I/O pattern.

But for a bandwidth-intensive app (large, sequential I/O), iSCSI may even perform better than FC because it has more bandwidth (10Gbps vs. 8Gbps), especially when end-to-end jumbo frame support is enabled.
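Some back-of-the-envelope numbers behind this (a sketch using the standard protocol header sizes; TCP acks, iSCSI PDU headers and link-level encoding are ignored for simplicity):

    # Per-frame protocol efficiency for a large sequential transfer:
    # standard Ethernet vs jumbo frames vs Fibre Channel.
    def ethernet_efficiency(mtu):
        wire_overhead = 20 + 14 + 4    # preamble+IFG, Ethernet header, FCS
        ip_tcp = 20 + 20               # IPv4 + TCP headers
        payload = mtu - ip_tcp
        return payload / (mtu + wire_overhead)

    def fc_efficiency():
        payload = 2112                 # max FC frame payload
        overhead = 4 + 24 + 4 + 4      # SOF, header, CRC, EOF
        return payload / (payload + overhead)

    for label, eff, line_gbps in (
        ("iSCSI, 1500 MTU", ethernet_efficiency(1500), 10),
        ("iSCSI, 9000 MTU", ethernet_efficiency(9000), 10),
        ("8G FC",           fc_efficiency(),            8),
    ):
        print(f"{label:16} efficiency {eff:6.1%}, "
              f"~{line_gbps * eff / 8:0.2f} GB/s of payload")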

136 Posts

August 22nd, 2012 19:00

Good point of view. I think tiering could be done at both levels, but they're simply not comparable because they don't work at the same software layer, so the design intentions are different.

136 Posts

August 23rd, 2012 01:00

Yes, that's what I mean. For normal day-to-day production, I should say it depends. We can't say for sure that iSCSI can't be deployed under a small, random I/O profile; I mean, if the response time is fine, then why not? What I was talking about is best practice: which technology is suitable under which conditions. iSCSI is a cheaper solution, so if we don't need that much performance and iSCSI meets the needs, then iSCSI is the right choice.

8.6K Posts

August 23rd, 2012 01:00

Well, iSCSI has more overhead than FC.

And when using multiple LAN interfaces with trunking, due to IP you can only use one interface for any given data transfer.
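The reason is that link aggregation hashes each flow onto a single member link so packets aren't reordered. A minimal sketch of the idea (crc32 stands in for the switch's hash; this mirrors a typical src/dst tuple policy, not any specific vendor's algorithm):

    # Why one iSCSI TCP connection only ever uses one trunk member:
    # the aggregate picks the link by hashing the flow tuple, so all
    # packets of a flow stay on the same interface (no reordering).
    import zlib

    N_LINKS = 4

    def link_for_flow(src_ip, src_port, dst_ip, dst_port):
        # crc32 stands in for the switch's (vendor-specific) hash
        key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
        return zlib.crc32(key) % N_LINKS

    # One iSCSI session = one flow = one link, however big the I/O is:
    print(link_for_flow("10.0.0.5", 51000, "10.0.0.200", 3260))

    # Several sessions (e.g. MPIO over multiple initiator ports) can
    # spread across links, because the tuples differ:
    for port in (51000, 51001, 51002, 51003):
        print(port, "-> link", link_for_flow("10.0.0.5", port, "10.0.0.200", 3260))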

FCoE is much smarter there, with less protocol overhead.

If you have a good relationship with your EMC TC, ask him to show you some performance data.

5.7K Posts

August 23rd, 2012 01:00

That's exactly how I see it, but the problem is that in the end every single customer asks for more performance, so by default FC is the better choice (at a real money cost), and when cost comes into the conversation the alternative is iSCSI (at a performance cost).

5.7K Posts

August 23rd, 2012 01:00

So what you're saying is that iSCSI is especially good for large, sequential I/O patterns. I'm thinking video streaming and backups, but not normal day-to-day storage production usage, right?

136 Posts

August 23rd, 2012 01:00

Agreed, I still believe FC is a better choice.
