2 Intern


148 Posts

May 2nd, 2011 08:00

Hi Stefano Del Corno,

I have verified the fabric and the storage; there are no issues. This is a two-node cluster and there are no issues on the other node.

Are there any powermt checks we can run from the host side?

4 Operator


2.8K Posts

May 2nd, 2011 08:00

First step is to determine why you lost a path: pci@8,600000/emlx@2/fp@0,0

All 19 disks on that path have failed. You probably have a bad cable or a broken GBIC in the switch. Check the zoning on the switch and the overall connectivity between HBA and storage. Last but not least, check the masking on the storage side (although less probable, there is still the possibility of a bad command issued against the storage).
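From the host you can confirm which paths are dead before going to the switch; a minimal sketch of the commands involved (output and device names will vary by environment):

```
# Show the state of every path PowerPath manages;
# paths listed as "dead" confirm which HBA/port lost connectivity.
powermt display dev=all

# Summarize paths per HBA - an HBA showing zero live paths
# points at the cable/GBIC/zoning rather than individual LUNs.
powermt display paths
```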

4 Operator


2.8K Posts

May 2nd, 2011 08:00

No further checks AFAIK with powermt.

9 Legend


20.4K Posts

May 5th, 2011 06:00

try "powermt restore"
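A typical sequence once the physical problem is fixed (the HBA number here is illustrative):

```
# Re-test dead paths and bring them back alive
# if the fabric issue has been corrected.
powermt restore

# Or limit the restore to a single HBA:
powermt restore hba=1

# Verify the result and persist the configuration.
powermt display dev=all
powermt save
```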

8 Posts

May 5th, 2011 11:00

Maybe the other node has the disks in exclusive access mode (in the case of HP-UX, we have to use vgchange -a e). In that case, the other node cannot load-balance across those devices. I think (I may be wrong; I am just starting to use PowerPath) you should check your policy. You may not be able to use load balancing, only failover mode.

Experts, please do not hammer me for this comment. As I said, I am "just" learning to use PowerPath.

Thanks

2 Intern


1.3K Posts

May 30th, 2011 16:00

What OS is this? What does the OS report about the HBA status?

On Linux try this pattern: `cat /proc/scsi/qlaxxx/y` (replace with the equivalent values for Emulex), or `fcmsutil /dev/xxxx stat` on HP-UX. This might give you a little more detail from the host side.
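The device path quoted earlier in the thread (pci@8,600000/emlx@2/fp@0,0) looks like Solaris with an Emulex HBA. A hedged sketch of host-side checks on the OSes mentioned (instance numbers and device files are placeholders):

```
# Solaris: list FC ports and their state (CONNECTED / NOT CONNECTED)
luxadm -e port
fcinfo hba-port          # Solaris 10+: WWNs and link state per HBA

# Linux (QLogic example; the instance number varies)
cat /proc/scsi/qla2xxx/0

# HP-UX: link status for an FC HBA (device file varies)
fcmsutil /dev/fcd0 stat
```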

2 Intern


1.3K Posts

June 4th, 2011 18:00

`vgchange -a e` only ensures that the VG/device(s) are active on one node at a time. The way PP works remains the same, and having exclusively activated devices should not by itself contribute to the errors you are seeing.
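For reference, exclusive activation on HP-UX happens at the LVM layer, above PowerPath; a sketch (the VG name is illustrative):

```
# HP-UX cluster-style exclusive activation: only one node
# at a time may activate the volume group.
vgchange -a e /dev/vgdata   # activate exclusively on this node
vgchange -a n /dev/vgdata   # deactivate before another node takes over
```

PowerPath keeps managing the underlying paths either way, which is why exclusive activation alone should not produce dead-path errors.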
