January 29th, 2010 16:00
HP-UX 11.31, CX3-80 array, FLARE code 26
I am using HP-UX 11.31 native multipathing with the correct initiator settings. Why am I seeing all paths as ACTIVE? I am also running into performance issues and seeing high disk I/O under any load. Here is how it looks:
Host with performance issue - All path active
---------------------------------------------
scsimgr lun_map -D /dev/rdisk/disk66 | pg
LUN PATH INFORMATION FOR LUN : /dev/rdisk/disk66
Total number of LUN paths = 6
World Wide Identifier(WWID) = 0x60060160f1dd1a005af5daaeec7fdb11
LUN path : lunpath74
Class = lunpath
Instance = 74
Hardware path = 0/0/4/1/0/4/0.0x5006016139a01781.0x4002000000000000
SCSI transport protocol = fibre_channel
State = ACTIVE
Last Open or Close state = ACTIVE
LUN path : lunpath65
Class = lunpath
Instance = 65
Hardware path = 0/0/4/1/0/4/0.0x5006016839a01781.0x4002000000000000
SCSI transport protocol = fibre_channel
State = ACTIVE
Last Open or Close state = ACTIVE
LUN path : lunpath64
Class = lunpath
Instance = 64
Hardware path = 0/0/4/1/0/4/0.0x5006016b39a01781.0x4002000000000000
SCSI transport protocol = fibre_channel
State = ACTIVE
Last Open or Close state = ACTIVE
LUN path : lunpath66
Class = lunpath
Instance = 66
Hardware path = 0/0/6/1/0/4/0.0x5006016339a01781.0x4002000000000000
SCSI transport protocol = fibre_channel
State = ACTIVE
Last Open or Close state = ACTIVE
LUN path : lunpath67
Class = lunpath
Instance = 67
Hardware path = 0/0/6/1/0/4/0.0x5006016239a01781.0x4002000000000000
SCSI transport protocol = fibre_channel
State = ACTIVE
Last Open or Close state = ACTIVE
LUN path : lunpath73
Class = lunpath
Instance = 73
Hardware path = 0/0/6/1/0/4/0.0x5006016939a01781.0x4002000000000000
SCSI transport protocol = fibre_channel
State = ACTIVE
Last Open or Close state = ACTIVE
Anything wrong?
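For reference, on 11.31 native multipathing all healthy paths normally report ACTIVE, and the load balancing policy decides how they are actually used. It can be checked (and changed) with scsimgr; a sketch, assuming the stock attribute name from scsimgr(1M):

```shell
# Show the current load balancing policy for this LUN
scsimgr get_attr -D /dev/rdisk/disk66 -a load_bal_policy

# Persistently change the policy (e.g. to round_robin or least_cmd_load)
scsimgr save_attr -D /dev/rdisk/disk66 -a load_bal_policy=round_robin
```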


dseth1
January 29th, 2010 16:00
Correction: running into performance issues without any load.
SKT136306
January 31st, 2010 02:00
dseth1
January 31st, 2010 07:00
Each HBA is zoned to multiple SP ports:
HBA1 is zoned to b1,a2,a3
HBA2 is zoned to b0,a1,a3
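Incidentally, the SP port behind each lunpath can be read straight off the target WWPN in the hardware path. A quick sketch, assuming the usual CLARiiON convention that in 5006016X... the nibble X maps 0-7 to SP A ports 0-7 and 8-F to SP B ports 0-7 (worth verifying against the EMC docs):

```python
def decode_clariion_port(wwpn: str) -> str:
    """Return the SP port (e.g. 'SPA1') encoded in a CLARiiON front-end WWPN."""
    # The 8th hex digit of the WWPN is the port nibble (assumed convention).
    nibble = int(wwpn.replace("0x", "")[7], 16)
    if nibble < 8:
        return f"SPA{nibble}"          # 0-7 -> SP A ports 0-7
    return f"SPB{nibble - 8}"          # 8-F -> SP B ports 0-7

# The six target WWPNs from the scsimgr output above:
for wwpn in ["0x5006016139a01781", "0x5006016839a01781",
             "0x5006016b39a01781", "0x5006016339a01781",
             "0x5006016239a01781", "0x5006016939a01781"]:
    print(wwpn, "->", decode_clariion_port(wwpn))
```

Cross-checking the decoded ports against the stated zoning is a quick way to spot a zone that doesn't match what the host actually logged in to.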
SKT2
January 31st, 2010 22:00
kelleg
February 1st, 2010 13:00
Please see Knowledgebase article emc99467 on Powerlink for the latest array settings for all operating systems - failover mode, arraycommpath and Initiator Type. If these settings are not correct, that would cause the types of problems that you're seeing.
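A sketch of checking and setting those records with Navisphere CLI (the SP address, host name and mode value are placeholders - take the actual values for HP-UX 11.31 native multipathing from emc99467):

```shell
# List the registered HBA initiator records and their current settings
naviseccli -h <sp_ip> port -list -hba

# Set failover mode and arraycommpath for a registered host
naviseccli -h <sp_ip> storagegroup -sethost -host <hostname> \
    -failovermode <mode> -arraycommpath 1
```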
As for performance issues, we have no information as to the type of LUNs configured on the array:
1. are the LUNs using fibre channel disks (are they 10K or 15K) or are they SATA?
2. what raid type are you using?
3. how many disks in the raid group?
4. what is the host IO load? Is it mostly Reads or Writes? What is the IO size?
5. what is the link speed of the HBAs - 1, 2, or 4 Gb/s?
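Questions 2-4 matter because the IOPS a RAID group can sustain is roughly bounded by the disk count and the write penalty. A back-of-envelope sketch (rule-of-thumb per-disk figures, not measured values: ~150 IOPS for a 10K FC disk, ~180 for 15K, ~80 for SATA; RAID 5 small-write penalty 4, RAID 1/0 penalty 2):

```python
def raid_group_iops(disks, per_disk_iops, write_fraction, write_penalty):
    """Estimate sustainable host IOPS H for a RAID group.

    Back-end load = H*(1 - w) reads + H*w*penalty writes, which must
    stay within disks * per_disk_iops.
    """
    backend = disks * per_disk_iops
    return backend / ((1 - write_fraction) + write_fraction * write_penalty)

# e.g. a 4+1 RAID 5 group of 10K FC disks at 30% writes:
print(round(raid_group_iops(5, 150, 0.3, 4)))   # -> 395
```

If the host is pushing more than that estimate, queuing on the back end would show up as exactly the kind of high disk I/O latency described above.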
Please see the Best Practices guide for the CX3-series
EMC CLARiiON Storage System Fundamentals for Performance and Availability
http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/H1049_emc_clariion_fibre_channel_storage_fundamentals_ldv.pd
EMC CLARiiON Performance and Availability Release 28.5 Firmware Update Applied Best Practices.pdf
http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/h5773-clariion-perf-availability-release-28-firmware-wp.pdf
glen