I have a metaLUN with 6 LUN components, and the last 2 components of this metaLUN are at almost 99% utilization.
The metaLUN is showing low performance when accessed from a Linux server. Are those two last component LUNs the cause
of this delay, and how can I prove it?
Thanks.
Hi,
Can you tell me:
1. What's your metaLUN type? Is it a striped metaLUN or a concatenated metaLUN?
2. Is the Navisphere Analyzer enabler license installed on the array or not?
3. What's your application pattern? OLTP/backup/video imaging or ...? Random, sequential, or mixed?
4. Does your IT environment meet the EMC support matrix (including host/switch)?
Your description is general, so I can't give an exact suggestion. If you have the emcgrab/switch logs, the SPCollect logs, and Navisphere Analyzer data (the array's performance log), that will help us address your problem.
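If you do not already have an SPCollect, you can usually trigger and retrieve one from the CLI. The syntax varies a little by FLARE release, so treat this as a sketch and check the naviseccli reference for your revision:
    naviseccli -h <SP_IP> spcollect              (start an SPCollect on that SP)
    naviseccli -h <SP_IP> managefiles -list      (list the generated file once it finishes)
    naviseccli -h <SP_IP> managefiles -retrieve  (pull the file down to your host)
Run it against both SPs.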
Thks.
Rgds,
WW
1. It is a striped meta.
2. No Navisphere Analyzer enabler; I just start Analyzer, extract the .naz files, decrypt them, and finally merge them.
3. Random and sequential.
4. Yes, the host is under the matrix.
If you want more info I can attach it, but do I have to open an SR to attach these files?
Thanks,
MC.
Hi,
Yup. Opening an SR is the best approach to engage a CLARiiON performance engineer to take a look. Of course, if you can upload all the necessary logs, I will help take a look. Thks.
Best Regards,
ww
Of the six component LUNs, are any in the same raid group?
In the raid groups that contain the component LUNs, is the disk Total IOPS higher than in the other raid groups that contain component LUNs?
What you want to find out is whether another LUN (or LUNs) in a raid group that holds one of the metaLUN component LUNs is overdriving the disks.
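If statistics logging is turned on, you can also sanity-check this from the CLI. I'm going from memory on the switches, so verify them against the CLI guide for your FLARE revision:
    naviseccli -h <SP_IP> getrg <RG#>        (shows which LUNs and disks are in the raid group)
    naviseccli -h <SP_IP> getdisk <B_E_D>    (per-disk read/write request counters, e.g. 0_0_4)
Compare the disk counters across the raid groups that hold the component LUNs.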
What version of FLARE are you running?
glen
FLARE is 04.29.000.5.006.
The thing is that raid group 3's utilization is higher compared to the other ones.
What I am trying to understand is which are the first and last components of this metaLUN:
if they belong to RG 3, the order of the components in the metaLUN could be
the reason for the slow performance.
There are no failed components, and read and write cache are enabled.
Any ideas?
One other thing: SP B has more utilization than SP A,
and metaLUN 1 belongs to SP B.
What else can I check?
Thanks, MC.
In Navisphere, look under LUN Folders/MetaLUNs - you'll see the metaLUN listed. Open the tree and you should see Component 0; all the component LUNs are listed there, in the order they were added to the original metaLUN. Right-click each component LUN to see which raid group it belongs to. Then right-click each raid group and select the Partition tab; this shows where in the raid group the different LUNs were physically created - check whether all the component LUNs sit in the same place. Also, make sure that there is only one metaLUN component LUN per raid group.
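If you prefer the CLI, roughly the same information is available there; I have not checked these options against your exact release, so treat them as a sketch and confirm with the naviseccli reference:
    naviseccli -h <SP_IP> metalun -list                       (lists the metaLUN and its component LUNs, in order)
    naviseccli -h <SP_IP> getlun <LUN#> -rg -owner -default   (raid group and SP ownership for a given LUN)
If statistics logging is enabled, the plain getlun output also includes read/write request counters, which helps when you compare the load on each raid group's LUNs.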
If RG 3 has higher utilization than the other component LUNs' raid groups, then there must be a LUN in that raid group that is getting more workload - look at each LUN to see where the load is coming from.
Ideally you should only have metaLUN component LUNs in each raid group - see the new MetaLUN white paper in the Documents section of this forum for a more in-depth explanation of striped metaLUN configurations.
glen
PS - if needed you can open an SR, but I believe you are on the right track - you have extra workload from outside the metaLUN that is interfering with it; you just need to track it down.
Take a look at this:
All the LUNs belong to SP B. Why, if auto-assign is enabled on all of them
(including the first one), are they associated with SP B?
Can I do a manual trespass on this meta and change the default owner?
What do you think?
MC
MC,
By default, all of the components of a metaLUN assume the same owner as the "head" LUN of the meta when it is created.
You cannot do anything with these individual component LUNs, as they become private LUNs.
If you want to trespass the metaLUN, it is always done by trespassing the "head" LUN of the meta.
This will trespass all of the LUNs in that meta to the other SP.
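For what it's worth, the trespass itself can be issued from the CLI by addressing the SP you want to take ownership. This is a rough sketch - verify the syntax on your FLARE rev, and do it in a quiet window since the host paths will move:
    naviseccli -h <SP_A_IP> trespass lun <metaLUN#>            (SP A takes ownership of the metaLUN)
    naviseccli -h <SP_A_IP> getlun <metaLUN#> -owner -default  (confirm current vs. default owner afterwards)
If the current owner and default owner differ, the LUN is running trespassed; checking that pair across all your LUNs is the quickest way to see which ones have drifted off their default SP.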
OK.
When doing load balancing, I ask: does this trespass occur automatically,
yes or no, and why?
By now this is my list of LUN owners.
I think there are a lot on the SP B side, and I don't know why,
since at first they were more or less balanced.
I think there is a little disorder here, but I don't know how
to fix it without impacting production LUN access,
in order to fix the low-performance access to a specific metaLUN.
thanks
MC.