Windows 2008 cluster having slow writes on E: drive (tran log)
We had an issue with a node of a Windows cluster. There are 6 devices mapped and masked to that node from a DMX-4. In the powermt display dev=all output, I see that 1 of the 6 devices has a high number of queued I/Os; the other 5 have none. I did some analysis on that device and see the readings below.
DEVICE IO/sec KB/sec % Hits %Seq Num WP
19:48:11 READ WRITE READ WRITE RD WRT RD WRT Tracks
19:48:11 DEV001 (0100) 124 2154 16776 5241 100 96 13 53 753
The FA port utilization also seems high:
19:47:04 DIRECTOR PORT IO/sec Kbytes/sec
19:47:04 FA-8C 1 2589 27628
19:47:04 FA-9C 1 5085 84102
Can anyone tell me how to reduce the queued I/Os for that particular device (0100)?
Pseudo
Symmetrix
Logical device
state=alive; policy=SymmOpt; priority=0; queued-IOs=28
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
2 port2\path0\tgt0\lun20 c2t0d20 FA 9cB active alive 14 0
4 port4\path0\tgt0\lun20 c4t0d20 FA 8cB active alive 14 0
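The figures in the output above are self-consistent and worth a quick sanity check; a small sketch (the numbers are copied straight from the posted powermt/symstat output, this is just arithmetic):

```python
# Sanity-check the figures from the powermt / symstat output above.

# Q-IOs reported per path for the pseudo device
queued_per_path = {"c2t0d20": 14, "c4t0d20": 14}
total_queued = sum(queued_per_path.values())
# powermt reports queued-IOs=28 for the pseudo device, i.e. the
# per-path queues simply sum across the two active paths.
assert total_queued == 28

# DEV001 (0100): 2154 write IO/s at 5241 KB/s -> average write size
avg_write_kb = 5241 / 2154
print(f"average write size: {avg_write_kb:.1f} KB")
```

The small (~2.4 KB) average write size is consistent with this being a transaction-log volume: many tiny sequential writes rather than a few large ones.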
vimad
September 9th, 2010 16:00
Is there anything else that needs to be checked on the Windows 2008 cluster (active-active) before making any changes on the storage side?
dynamox
September 9th, 2010 19:00
So presenting more FAs did not help? How big is this device, and how is it configured (meta? how many members? what protection?)
SamCl
September 10th, 2010 03:00
Given the length of the previous thread, I would recommend you open an SR and supply EMCReports output for the nodes of the cluster, with a view to getting the configuration formally checked and to starting a performance analysis on the array back end.
Sam Claret, EMC TSE3
vimad
September 10th, 2010 06:00
Hi Dynamox,
It's a 6-way striped meta (202 GB), RAID 5 (7+1).
dynamox
September 10th, 2010 09:00
How many drives are in the disk group where that meta is created? Where I am going with this: maybe you can build a meta out of a larger number of smaller sym devices. Right now your meta resides on 48 physical drives (6 x 8); building it out of 15 GB symdevs would roughly double the number of drives involved on the back end and could get you better performance.
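The back-of-the-envelope numbers behind this suggestion, using the figures already posted in the thread (the IO/s value is from the symstat sample earlier):

```python
# Current layout: how many spindles the meta actually touches.
members = 6            # meta members
raid_width = 8         # RAID 5 7+1 -> 8 physical drives per sym device
spindles = members * raid_width
assert spindles == 48

# Host IO/s seen on DEV001 in the symstat sample (read + write)
host_iops = 124 + 2154
per_spindle = host_iops / spindles
print(f"{spindles} spindles, ~{per_spindle:.0f} host IO/s each "
      "(before the RAID 5 write penalty on the back end)")
```

With RAID 5, each small host write also costs extra back-end read/modify/write I/O, so the real per-drive load is higher still; spreading the meta over more spindles lowers both numbers.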
vimad
September 10th, 2010 10:00
The disk group has 160 disks in total. In the current config each device is 34 GB, and a 6-way striped meta was formed (RAID 5, 7+1).
dynamox
September 11th, 2010 10:00
So with 160 disks you could create a 20-member meta that would not overlap. I would create 20 x 10 GB symm devices and build a striped meta out of them, then use PowerPath Migrator to move off the old device onto the new one and see if you get better performance. While we like to keep our Symmetrix device sizes consistent, we have had to create custom-size devices for situations just like yours. It can't hurt: if performance does not improve, you can always migrate back to your original device.
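For reference, the online migration described here is driven by the PowerPath Migration Enabler powermig CLI. A rough outline only: the device names below are placeholders, and the exact syntax and technology type vary by PowerPath version, so check the Migration Enabler documentation before running anything.

```shell
# Outline only -- device names are placeholders; verify the exact
# syntax against the PowerPath Migration Enabler docs for your version.
powermig setup -src harddisk10 -tgt harddisk20 -techType hostcopy
powermig sync -handle 1          # start copying source -> target online
powermig query -handle 1         # watch copy progress
powermig selectTarget -handle 1  # direct I/O to the target device
powermig commit -handle 1        # make the switch permanent
# (selectSource / abort can back out if performance is worse)
```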
vimad
September 14th, 2010 12:00
Could you please let me know what the advantage is of a 20-member striped meta (20 x 10 GB) compared to the current config (6-way meta, 6 x 34 GB)? Is it because we get faster parallel processing?
dynamox
September 14th, 2010 12:00
You have 160 drives in the disk group. If you are using 7+1 RAID 5, each sym device resides on 8 physical drives, and 160 / 8 = 20. So you can create 20 Symmetrix devices, build a striped meta out of them, and that meta will span all 160 physical drives, giving you a lot more IOPS and back-end resources. Migrate to it using PowerPath so it's done online, and see if it improves performance.
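The sizing works out neatly with the capacity figures from earlier in the thread; a quick check:

```python
# Proposed layout: one meta member per RAID group, so the meta
# touches every spindle in the disk group exactly once.
disks_in_group = 160
raid_width = 8                    # RAID 5 7+1
members = disks_in_group // raid_width
assert members == 20              # 20 sym devices, no two sharing a drive

meta_gb = 202                     # current meta capacity
per_member_gb = meta_gb / members
print(f"{members} members of ~{per_member_gb:.1f} GB each "
      f"-> meta spans all {members * raid_width} drives")
# vs. the current 6 x 34 GB meta, which sits on only 48 of the 160 drives
```

That is where the 20 x 10 GB figure comes from: 202 GB / 20 members is roughly 10 GB per device.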
vimad
September 15th, 2010 13:00
We are now in the process of upgrading the HBA drivers and PowerPath on the Windows 2008 cluster, and then planning to add two more paths. If that doesn't help, I will follow the process you suggested: create a 20-way striped meta and migrate to it.
dynamox
September 16th, 2010 09:00
Good luck. Let us know about your progress.
vimad
September 20th, 2010 07:00
H:\Utils>psexec \\crprchmcs86n1 powermt display ports
==============================================================================
ID Interface Wt_Q Total Dead Q-IOs Errors
000190100424 FA 9cB 256 9 0 9 0
000190100424 FA 8cB 256 9 0 7 0
vimad
September 23rd, 2010 09:00
All,
We updated the HBA drivers and PowerPath versions and added two more FAs, and the performance of the E: drive has improved. For now we are not migrating the drive.