Unsolved
AIX, VIO, NPIV and VMAX
Hi! We have 2 pSeries with this setup:
-VIOs (2.1.2.10-FP-22.1) with NPIV
-PowerPath installed on VIOs and LPARs
-AIX 6.1 (TL-06) and 5.3 (TL-12)
We didn't make any changes to the device configuration for fcsX or hdiskX on either side (VIOS or LPAR):
queue_depth, reserve_lock, etc. We left everything the way PowerPath configured it.
Are there any special configuration changes I should have considered during the installation?
There isn't much information available. I read somewhere that reserve_lock should be set to no.
Thank you.
MDSchneider
May 21st, 2011 07:00
Reserve_lock only applies to disks you want to share between LPARs (such as GPFS or Oracle RAC type environments).
I would highly recommend changing the AIX hdisk queue_depth: the default is 3, and it should be at least 32 (IMO).
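For example, assuming a PowerPath pseudo-device named hdiskpower23 (the device name is just an illustration), the queue depth can be checked and raised with lsattr/chdev along these lines:

```shell
# Show the current queue depth on the pseudo-device
lsattr -El hdiskpower23 -a queue_depth

# Raise it to 32 (the device must not be in use for an immediate change)
chdev -l hdiskpower23 -a queue_depth=32
```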
dd_emc
May 22nd, 2011 17:00
Note that if you have multiple VIOS within a p-series, then you will need to adjust the reserve_lock and possibly the reserve_policy settings. The following extract is from the "EMC Host Connectivity Guide for IBM AIX rev A34":
The System p system can have multiple VIOS LPARs providing access to the same LUN for the same VIOC for Standard Device configurations (no multipathing) and multipathing devices under the control of PowerPath and MPIO. For Standard Device and PowerPath configurations, the Virtual I/O Server device reservation setting reserve_lock must be changed (via the chdev Virtual I/O Server configuration command) to no. For MPIO configurations and PowerPath v5.5 and later, the Virtual I/O Server device reservation setting reserve_policy must be changed (via the chdev command) to no_reserve.
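In the VIOS restricted (padmin) shell, those settings are changed with the VIOS form of chdev; a sketch, assuming a backing device named hdisk4 (hypothetical name):

```shell
# Standard devices and PowerPath before 5.5: release the SCSI reserve
chdev -dev hdisk4 -attr reserve_lock=no

# MPIO configurations and PowerPath 5.5+: use reserve_policy instead
chdev -dev hdisk4 -attr reserve_policy=no_reserve
```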
MDSchneider
May 23rd, 2011 11:00
It sounds like he's using NPIV, which negates that comment.
In the old VIO environment, the LUN was presented to the LPAR (VIOC) via vSCSI (virtual SCSI), so the VIOS (server) needed no_reserve enabled to allow redundant VIOSes. Thankfully, with NPIV the Fibre Channel adapter itself is virtualized: the VIOC/LPAR sees a Fibre Channel adapter, and the VIOS cannot actually access the LUN, because the storage array masks it directly to the VIOC using a virtual N_Port WWN.
This also means that PowerPath can be installed directly on the VIOC/LPAR, as the original poster described doing.
The connectivity guide probably needs a refresh to cover the NPIV options, as well as the new VIOS7 storage pool options (similar to the VMware VMFS approach).
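For reference, the NPIV pass-through described above can be set up and inspected from the VIOS padmin shell; vfchost0 and fcs0 below are example names, not taken from the poster's environment:

```shell
# List physical FC ports and whether they are NPIV-capable
lsnports

# Map a virtual FC server adapter to an NPIV-capable physical port
vfcmap -vadapter vfchost0 -fcp fcs0

# Show virtual-to-physical FC mappings and the client WWPNs
lsmap -all -npiv
```

The client WWPNs shown by lsmap are what get zoned and masked on the array, which is why the VIOS itself never sees the LUN.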
dd_emc
May 23rd, 2011 16:00
Thanks for the clarification - it's hard to keep track of things sometimes, with all the levels of virtualisation on modern systems...
typer100_0f9bd1
May 24th, 2011 05:00
Hi! Thank you for your help. queue_depth is set to 16; I guess that's the default PowerPath value.
# lsattr -El hdiskpower23
clr_q yes Clear Queue (RS/6000) True
location Location True
lun_id 0x3a000000000000 LUN ID False
lun_reset_spt yes FC Forced Open LUN True
max_coalesce 0x10000 Maximum coalesce size True
max_transfer 0x40000 Maximum transfer size True
pvid 00c0555509f87c7b0000000000000000 Physical volume identifier False
pvid_takeover yes Takeover PVIDs from hdisks True
q_err no Use QERR bit True
q_type simple Queue TYPE False
queue_depth 16 Queue DEPTH True
reassign_to 120 REASSIGN time out value True
reserve_lock yes Reserve device on open True
rw_timeout 40 READ/WRITE time out True
scsi_id 0x11900 SCSI ID False
start_timeout 180 START unit time out True
ww_name 0x50000972082cb1e0 World Wide Name False
I'll try 32. Anything else I should look for?
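Since queue_depth is marked True (user-settable) in the lsattr output above, the change could be applied and verified roughly like this; the -P flag is needed if the disk is in use, in which case the new value takes effect after the next reboot:

```shell
# Write the change to the ODM only (device busy); effective after reboot
chdev -l hdiskpower23 -a queue_depth=32 -P

# Verify the setting afterwards
lsattr -El hdiskpower23 -a queue_depth
```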