SteveZhou
2 Iron

Re: Ask the Expert: Performance Calculations on Clariion/VNX

Good point of view. I think both levels could do tiering, but they're just not comparable because they don't work at the same software layer, so the design intentions are different.

RRR
5 Osmium

Re: Ask the Expert: Performance Calculations on Clariion/VNX

So what you're saying is that iSCSI is especially good for large sequential I/O patterns. I'm thinking video streaming and backups, but not normal day to day storage production usage, right?

SteveZhou
2 Iron

Re: Ask the Expert: Performance Calculations on Clariion/VNX

Yes, that's what I mean. For normal day-to-day production, I should say it depends. We cannot say for sure that iSCSI can't be deployed under a [random, small] I/O profile; I mean, if the response time is just fine, then why not? What I was talking about is best practice: which technology is suitable under which conditions. iSCSI is a cheaper solution, so if we don't need that much performance and iSCSI can meet the needs, then iSCSI is the right choice.

RRR
5 Osmium

Re: Ask the Expert: Performance Calculations on Clariion/VNX

That's exactly how I see it, but the problem is that in the end every single customer asks for more performance, so by default FC is the better choice (at a real money cost), and when cost comes into the conversation the alternative is iSCSI (at a performance cost).

SteveZhou
2 Iron

Re: Ask the Expert: Performance Calculations on Clariion/VNX

Agreed, I still believe FC is the better choice.

Rainer_EMC
5 Iridium

Re: Ask the Expert: Performance Calculations on Clariion/VNX

Well, iSCSI has more overhead than FC.

When using multiple LAN interfaces with trunking, due to IP you can only use one interface for a single data transfer.

FCoE is much smarter there and has less protocol overhead.

If you have a good relationship with your EMC TC, ask them to show you some performance data.
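
To illustrate the trunking point: with IP-based link aggregation, the switch or NIC typically picks a member link per flow by hashing the packet headers, so a single iSCSI session (one TCP connection) is pinned to one physical interface no matter how many links are in the trunk. A minimal sketch of that behaviour follows; the hash fields, link count and addresses are illustrative assumptions, not the exact algorithm any particular switch uses.

```python
# Minimal sketch of per-flow link selection, as done by typical L3/L4 trunk hashing.
# The real hash differs per switch/NIC; the point is that one flow -> one link.
import hashlib

NUM_LINKS = 4  # illustrative: four trunked 1 GbE interfaces

def pick_link(src_ip, dst_ip, src_port, dst_port):
    """Hash the flow tuple and map it onto one member link of the trunk."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % NUM_LINKS

# One iSCSI session always hashes to the same link, so its throughput is
# capped at a single interface's bandwidth:
print(pick_link("10.0.0.5", "10.0.0.50", 51234, 3260))  # host -> iSCSI target
print(pick_link("10.0.0.5", "10.0.0.50", 51234, 3260))  # same flow, same link
```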

JonK1
3 Silver

Re: Ask the Expert: Performance Calculations on Clariion/VNX

So Rob has mentioned it before... at some point in time you'll get to the office, open your Unisphere Analyzer and see the following.

[Screenshot: Unisphere Analyzer showing Queue Full errors (QueueFull.jpg)]

Yikes, now what!?

First of all, check your configuration. In this case it was quite obvious where the problem lies: out of the four FC front-end ports per SP, only two were connected. The other ports were used for migration purposes when the system was initially installed. After the migration finished, they were never reconnected...

But even if you have all the ports connected, there are some smart things to keep in mind.

  • If you are replicating, try to keep the MirrorView port free from host traffic. This will prevent host I/O from interfering with your replication I/O, keeping your synchronous mirroring fast and thus preventing host write I/O slowdowns.
  • How would you spread your servers across the available FE ports: attach them to all the available ports? Not a good idea!
    • Documentation on powerlink clearly states that too many paths per host (i.e. more than four paths) can result in lengthy failovers. So unless you have extremely high bandwidth requirements, limit yourself to four paths per host max.
    • Remember from your spec sheets or training that a CLARiiON model has a limited number of initiators? If you have 8 paths instead of 4, you have double the number of initiators active. Depending on your environment, you may end up with a system that has GBs to spare but can't add another server!
    • Make yourself a spreadsheet to keep track of which server is zoned to which port, and start staggering them. Something like this may do (there's also a small scripted sketch after this list)...

    

          [Image: example FE-port zoning spreadsheet (Fabrics.JPG)]

(For the careful reader: this array will never replicate using MirrorView, which is why we're using ports A7 & B7 for host I/O.)

  • And of course, if careful planning still gets you Queue Full errors, do remember Rob's post about HBA queue settings!
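
The staggering approach from the spreadsheet can also be roughed out in a few lines of script. This is only a sketch under assumed port and host names, with the four-path cap (two ports per SP) mentioned above; adapt the port lists to your own array and fabric layout:

```python
# Sketch: stagger hosts across FE ports while capping each host at 4 paths
# (2 ports on SP A + 2 ports on SP B). Port and host names are illustrative.
from itertools import cycle

SPA_PORTS = ["A0", "A1", "A4", "A5"]   # assumed available FE ports on SP A
SPB_PORTS = ["B0", "B1", "B4", "B5"]   # assumed available FE ports on SP B

def stagger(hosts, paths_per_sp=2):
    """Round-robin hosts over the FE ports, giving each host
    paths_per_sp ports on each SP (4 paths total by default)."""
    a_cycle, b_cycle = cycle(SPA_PORTS), cycle(SPB_PORTS)
    plan = {}
    for host in hosts:
        plan[host] = ([next(a_cycle) for _ in range(paths_per_sp)] +
                      [next(b_cycle) for _ in range(paths_per_sp)])
    return plan

for host, ports in stagger(["esx01", "esx02", "esx03", "esx04"]).items():
    print(host, ports)
# esx01 ['A0', 'A1', 'B0', 'B1']
# esx02 ['A4', 'A5', 'B4', 'B5']
# esx03 ['A0', 'A1', 'B0', 'B1']   (the cycle wraps, keeping hosts staggered)
```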
RRR
5 Osmium

Re: Ask the Expert: Performance Calculations on Clariion/VNX

Better safe than sorry! Starting with a decent design includes a decent forecast of the number of hosts that are going to be attached to a VNX or CLARiiON and the number of LUNs they are going to get. Based on that you can calculate the queue depth setting to avoid QFULLs, which are bad for performance since HBAs and host OSs will slow down generating I/Os. If you simply keep the number of outstanding I/Os from getting too high, the feared QFULLs won't appear and performance stays predictable.
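
As a rough, hedged sketch of that calculation: pick a per-LUN queue depth so that, even with every host driving every LUN at full depth, the total outstanding I/Os stay under the front-end port's queue limit. The port queue limit below is an assumed illustrative figure; look up the real value for your CLARiiON/VNX model in the best-practices documentation.

```python
# Sketch: size the per-LUN queue depth so the sum of possible outstanding
# I/Os stays under the FE port's queue limit and QFULL is never returned.
PORT_QUEUE_LIMIT = 1600   # assumed illustrative value, NOT a spec for your model

def max_queue_depth(hosts_per_port, luns_per_host):
    """Largest per-LUN queue depth that keeps the port under its limit
    even if every host drives every LUN at full queue depth."""
    return PORT_QUEUE_LIMIT // (hosts_per_port * luns_per_host)

# e.g. 10 hosts zoned to the port, 8 LUNs each:
print(max_queue_depth(hosts_per_port=10, luns_per_host=8))  # -> 20
```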

dynamox
6 Thallium

Re: Ask the Expert: Performance Calculations on Clariion/VNX

Jon Klaus wrote:

  • How would you spread your servers across the available FE ports: attach them to all the available ports? Not a good idea!
    • Documentation on powerlink clearly states that too many paths per host (i.e. more than four paths) can result in lengthy failovers. So unless you have extremely high bandwidth requirements, limit yourself to four paths per host max.

Hmm... can you provide links/references where it states that more than 4 paths to a LUN will lead to slow failover? I have never read anything like that, nor has it ever been mentioned in the Performance Workshops that I've taken.

JonK1
3 Silver

Re: Ask the Expert: Performance Calculations on Clariion/VNX

Certainly! I found this in the CLARiiON best practices for Performance and Availability document, R30. On page 25 it states:

PowerPath allows the host to connect to a LUN through more than one SP port. This is known as multipathing. PowerPath optimizes multipathed LUNs with load-balancing algorithms. It offers several load-balancing algorithms. Port load balancing equalizes the I/O workload over all available channels. We recommend the default algorithm, ClarOpt, which adjusts for number of bytes transferred and for the queue depth.

Hosts connected to CLARiiONs benefit from multipathing. Direct-attach multipathing requires at least two HBAs; SAN multipathing also requires at least two HBAs. Each HBA needs to be zoned to more than one SP port. The advantages of multipathing are:
- Failover from port to port on the same SP, maintaining an even system load and minimizing LUN trespassing
- Port load balancing across SP ports and host HBAs
- Higher bandwidth attach from host to storage system (assuming the host has as many HBAs as paths used)

While PowerPath offers load balancing across all available active paths, this comes at some cost:
- Some host CPU resources are used during both normal operations, as well as during failover.
- Every active and passive path from the host requires an initiator record; there are a finite number of initiators per system.
- Active paths increase time to fail over in some situations. (PowerPath tries several paths before trespassing a LUN from one SP to the other.)

Because of these factors, active paths should be limited, via zoning, to two storage system ports per HBA for each storage system SP to which the host is attached. The exception is in environments where bursts of I/O from other hosts sharing the storage system ports are unpredictable and severe. In this case, four storage system ports per HBA should be used.

The EMC PowerPath Version 5.5 Product Guide available on Powerlink provides additional details on PowerPath configuration and usage.
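
To make the initiator-record point from that quote concrete, here's a small arithmetic sketch; the per-array initiator limit is only a placeholder (check the spec sheet for your model), and the host and HBA counts are made up:

```python
# Sketch: each HBA logged in to each zoned SP port consumes one initiator
# record, so doubling the paths per host doubles initiator consumption.
MAX_INITIATORS = 256   # placeholder; the real limit depends on the array model

def initiators_used(num_hosts, hbas_per_host, ports_zoned_per_hba):
    return num_hosts * hbas_per_host * ports_zoned_per_hba

# 4 paths per host (2 HBAs x 2 ports) vs 8 paths per host (2 HBAs x 4 ports):
for ports in (2, 4):
    used = initiators_used(num_hosts=40, hbas_per_host=2, ports_zoned_per_hba=ports)
    print(2 * ports, "paths/host ->", used, "initiator records",
          "(over the assumed limit!)" if used > MAX_INITIATORS else "")
```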
