
3 Posts

November 17th, 2008 03:00

PowerPath 5.1.2 for SLES 10 SP2 increased load...

We recently went from not using PowerPath at all to running 5.1.2 on a SLES 10 SP2 machine connected to a SAN.

Prior to using it, the load on this machine was approximately 5-6 (processes in the run queue); after the installation it increased to 8-12, which is quite high for our 8-core machine. How can this be? I never expected that installing PowerPath would increase the load on the machine; if anything, I expected it to decrease it. This is quite urgent for us, since we don't want to have to go back to not using PowerPath...

Any recommendations on how this can be tuned to improve performance (and decrease the load), or diagnosed to find a possible setup error?
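
For reference, the figures I quote above are simply what the standard tools report; nothing PowerPath-specific, something like:

# load averages over 1, 5 and 15 minutes
uptime

# the "r" column shows processes in the run queue, sampled every 2 seconds
vmstat 2 5

# per-device utilisation, if the sysstat package is installed
iostat -x 2 5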

Kind regards
/A

341 Posts

November 17th, 2008 06:00

You don't mention what kind of HBAs you are using; perhaps some of the HBA parameters can be tweaked to improve performance.

Also, can you post an excerpt from powermt display dev=all so we can see which policy you are running, whether there are queued I/Os at the PowerPath level, and whether all paths are active/alive?
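
If you're not sure what model the HBAs are, something along these lines will usually show it (generic Linux commands, nothing EMC-specific; the sysfs paths can vary by kernel version):

# list Fibre Channel HBAs on the PCI bus
lspci | grep -i "fibre channel"

# model, firmware and driver details, if the fc_host class is present (2.6 kernels)
cat /sys/class/fc_host/host*/symbolic_name

# version of the loaded QLogic driver module
modinfo qla2xxx | grep -i version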

3 Posts

November 17th, 2008 07:00

Here is the powermt output:

Pseudo name=emcpowera
CLARiiON ID=XXX [DB3]
Logical device ID=XXX [LUN 0]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -   -- Stats ---
###  HW Path               I/O Paths    Interf.    Mode    State   Q-IOs Errors
==============================================================================
   1 qla2xxx               sdb          SP A0      active  alive       0      0

Pseudo name=emcpowerb
CLARiiON ID=XXX [DB3]
Logical device ID=XXX [LUN 10]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -   -- Stats ---
###  HW Path               I/O Paths    Interf.    Mode    State   Q-IOs Errors
==============================================================================
   1 qla2xxx               sdc          SP A0      active  alive       0      0

dmesg gave me this:

QLogic Fibre Channel HBA Driver: 8.02.00-k6-SLES10-05

Is that enough HBA info? (I'm not sure exactly where to look for more details about it.)

341 Posts

November 17th, 2008 07:00

Firstly, it seems you are only zoned to one SP on the CLARiiON. I would recommend you zone a second path to SP B to protect against SP failure.
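
Once the second path is zoned and the LUNs are presented down it, something roughly like this should pick it up without a reboot (host number taken from your output above; a reboot works just as well):

# rescan the QLogic HBA for newly zoned devices (2.6 kernel sysfs interface)
echo "- - -" > /sys/class/scsi_host/host1/scan

# have PowerPath claim the new paths and save the configuration
powermt config
powermt save

# confirm that both SP A and SP B paths now appear
powermt display dev=all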

I assume you don't have a PowerPath license (which means you cannot change the PowerPath policy from BasicFailover to CLAROpt). Looking at the PowerPath output, there were no queued I/Os at the time you ran the display command.
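
For reference, if you do turn out to have a license key, the check and the policy change would look roughly like this ("co" is the policy name I'd expect for CLARiiON optimisation; confirm against the documentation for your PowerPath version):

# show whether a PowerPath license key is registered
powermt check_registration

# if licensed, switch from BasicFailover to the CLARiiON-optimised policy
powermt set policy=co dev=all
powermt save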

So we will move on to the HBA driver. I recommend you download and review the HBA driver manual for your specific driver from the QLogic website; the manual you require is called "EMC Fibre Channel with QLogic Host Bus Adapters for the Linux v2.6.x Kernel Environment and the v8.x-Series Driver", P/N 300-002-803 REV A03.

Check section 2-43 for the driver parameters and ensure they are set to the EMC-recommended values (the file you set them in is /etc/modprobe.conf.local). Once these changes are made, a new ramdisk needs to be created and the host rebooted; all the details are in the manual.
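
Purely as an illustration of the mechanics (the parameter names below are common qla2xxx options and the values are placeholders, not EMC's recommendations; take both from the manual):

# /etc/modprobe.conf.local -- illustrative only; use the parameters and
# values from the EMC/QLogic manual for your driver version
options qla2xxx qlport_down_retry=<value> ql2xloginretrycount=<value>

# rebuild the initial ramdisk so the options take effect at boot (SLES 10)
mkinitrd

# then reboot the host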

Let us know how you go...

Conor

3 Posts

November 17th, 2008 10:00

Thanks, I will look into this some more and return when I know more. It seems a consultant didn't quite set this up correctly since we are supposed to have two paths for failover...

Thanks
/A

2 Intern

1.3K Posts

November 29th, 2008 11:00

Could you post the output of "uptime"? I'd like to see the load average reported.

Could you also post the output of "mount|grep -i nfs"?
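
In other words, roughly this, run on the SLES host:

# current load averages over 1, 5 and 15 minutes; compare against the core count (8 here)
uptime

# any NFS filesystems currently mounted
mount | grep -i nfs

# optional: NFS client statistics, if nfsstat is installed
nfsstat -c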

Do you have shared NFS mount points on multiple SLES servers?

We have seen similarly high run-queue numbers on RHEL Linux servers, and those have since been fixed.
