emcmagic
August 16th, 2013 08:00
device pinning on Clariion or VNX?
Based on my understanding, device pinning works only with Symmetrix DMX/VMAX FAST VP. Could anybody please confirm whether it also works on Clariion or VNX FAST VP?
Also, are there any detailed documents on how to configure it?
Thanks very much in advance!
etaljic81
August 16th, 2013 08:00
What exactly are you trying to do?
emcmagic
August 16th, 2013 09:00
The topic came up in a conversation. I don't want to make the question too complicated or confuse you; for now I'd just like to find out what it is and how it works conceptually, whether it works only on Symmetrix or on Clariion/VNX as well, and whether there are any documents on configuration.
emcmagic
August 16th, 2013 11:00
dynamox,
Okay, got it; so the answer is that Clariion/VNX cannot do it. Would you expand a little more on how to "use traditional RAID groups or a homogeneous pool" to achieve the same thing?
Ernes,
Based on what you wrote, a LUN on Clariion/VNX can only get "pinned" to the lowest tier, not to higher tiers. Is the "Start High then Auto" policy the same as the "auto-tier" policy?
dynamox
August 16th, 2013 11:00
I don't think there is a way to pin a LUN to a specific tier on Clariion/VNX like there is on VMAX using FAST. You may need to use traditional RAID groups or a homogeneous pool.
etaljic81
August 16th, 2013 11:00
If you bind a LUN on a pool and select "Lowest Available Tier", it will stay on that tier and not move up. For example, if you have EFD, SAS and NL-SAS drives in a pool and choose "Lowest Available Tier", the LUN will not auto-tier and will always stay on the NL-SAS drives. The "Start High then Auto" policy will bind the LUN on the highest tier, but once your scheduled FAST VP relocation runs, the 1GB chunks will auto-tier across all three tiers.
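For reference, the same "pin to the lowest tier" bind can be sketched from the CLI, mirroring the naviseccli lun -create syntax shown later in this thread; the SP address, pool name and LUN number here are placeholders, so treat it as a sketch rather than verified syntax for your release:
naviseccli.exe -h 192.168.10.10 lun -create -type nonThin -capacity 100 -sq gb -poolName Pool1 -sp A -l 2 -tieringPolicy lowestAvailable -initialTier lowestAvailable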
Does that answer your question?
dynamox
August 16th, 2013 13:00
When you say you want to "pin" something to a specific tier, you want predictable performance based on that tier's characteristics. For example, we had an accounting app that was mostly idle during the month but would get really "hot" for end-of-month closing. During the month it would trickle all the way down to NL-SAS, but when they hit it really hard during closing, it was sucking wind for a while until it all bubbled back up to EFD and SAS. The data set was so big that even FAST Cache could not help much, so we ended up moving that app to a dedicated pool comprised of SAS drives only. If this app had been on a VMAX, we could have played funky games with changing FAST policies and forcing extents up to higher tiers before the closing period started.
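If you go the homogeneous-pool route, here is a rough sketch of creating a SAS-only pool from the CLI; the disk IDs, RAID type and pool name are placeholders, and the exact storagepool -create options may differ by VNX OE release:
naviseccli.exe -h 192.168.10.10 storagepool -create -disks 1_0_0 1_0_1 1_0_2 1_0_3 1_0_4 -rtype r_5 -name SAS_Only_Pool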
etaljic81
August 19th, 2013 05:00
Forgot to answer your question (is the "Start High then Auto" policy the same as the "auto-tier" policy?). It's not the same. Let's say you are creating the first LUN on a pool. If you create it using Start High then Auto, all of the 1GB chunks will be bound to the highest tier. Auto-tier will split the LUN 50/50: 50% of the LUN will be on the high tier and the other 50% on the low tier. Go ahead and create a LUN with the auto-tier policy, then go back and check its properties (select the Tiering tab) and you'll see that.
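To check the policy from the CLI instead of the GUI, something along these lines should work; the LUN number is a placeholder and flag support may vary by release:
naviseccli.exe -h 192.168.10.10 lun -list -l 1 -tieringPolicy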
etaljic81
August 19th, 2013 05:00
You have several options: Start High then Auto, Auto Tier, Highest Available Tier, Lowest Available Tier. When you bind a LUN in the GUI you can choose the highest-tier policy, and the LUN will be bound on the highest tier in that pool. Once you create the LUN you can go into the properties of that LUN, select the Tiering tab and set it to No Movement. That way the LUN will always stay on the highest tier.
You can do all of the above from the CLI by executing one command (this creates LUN 1, 100GB, on the pool named Pool1):
naviseccli.exe -h 192.168.10.10 lun -create -type nonThin -capacity 100 -sq gb -poolname Pool1 -sp A -l 1 -tieringPolicy noMovement -initialTier highestAvailable
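If the LUN already exists, the policy should also be changeable after the fact with lun -modify; a sketch assuming LUN 1 (-o suppresses the confirmation prompt):
naviseccli.exe -h 192.168.10.10 lun -modify -l 1 -tieringPolicy noMovement -o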
Hope this answers your question.