EMC and MPIO in AIX

You can run into an issue with EMC storage on AIX systems that use MPIO (no PowerPath) for their boot disks:

After installing the ODM_DEFINITIONS of EMC Symmetrix on your client system, the system won’t boot any more and will hang with LED 554 (unable to find boot disk).

The boot hang (LED 554) is not caused by the EMC ODM package itself, but by the boot process failing to detect a path to the boot disk if the first MPIO path does not correspond to the fscsiX driver instance where all hdisks are configured. Let me explain that in more detail:

Let’s say we have an AIX system with four HBAs configured in the following order:

# lscfg -v | grep fcs
fcs2 (wwn 71ca) -> no devices configured behind this fscsi2 driver instance (path only configured in CuPath ODM table)
fcs3 (wwn 71cb) -> no devices configured behind this fscsi3 driver instance (path only configured in CuPath ODM table)
fcs0 (wwn 71e4) -> no devices configured behind this fscsi0 driver instance (path only configured in CuPath ODM table)
fcs1 (wwn 71e5) -> ALL devices configured behind this fscsi1 driver instance

Looking at the MPIO path configuration, here is what we have for the rootvg disk:

# lspath -l hdisk2 -H -F"name parent path_id connection status"
name   parent path_id connection                      status
hdisk2 fscsi0 0       5006048452a83987,33000000000000 Enabled
hdisk2 fscsi1 1       5006048c52a83998,33000000000000 Enabled
hdisk2 fscsi2 2       5006048452a83986,33000000000000 Enabled
hdisk2 fscsi3 3       5006048c52a83999,33000000000000 Enabled

The path through the fscsi1 driver instance is the second path (path_id 1), so remove the three other paths, keeping only the path corresponding to fscsi1:

# rmpath -l hdisk2 -p fscsi0 -d
# rmpath -l hdisk2 -p fscsi2 -d
# rmpath -l hdisk2 -p fscsi3 -d
# lspath -l hdisk2 -H -F"name parent path_id connection status"

Afterwards, run savebase to update the boot logical volume (hd5), set the bootlist to hdisk2, and reboot the host.
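A sketch of those commands, assuming rootvg is on hdisk2 as in the example above:

# savebase -v
# bootlist -m normal hdisk2
# shutdown -Fr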

The system will come up successfully, with no more LED 554 hang.

When you check the status of the rootvg disk, you will see that a new hdisk10 has been configured with the correct EMC ODM definitions, as shown below:

# lspv
hdisk10 0003027f7f7ca7e2 rootvg active
# lsdev -Cc disk
hdisk2  Defined   00-09-01 MPIO Other FC SCSI Disk Drive
hdisk10 Available 00-08-01 EMC Symmetrix FCP MPIO Raid6

To summarize, it is recommended to set up ONLY ONE path when installing AIX on a SAN disk, then install the EMC ODM package, reboot the host, and only after that is complete, add the other paths. By doing that we ensure that the fscsiX driver instance used for the boot process has the hdisk configured behind it.
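Once the host is back up on its single path, the remaining paths do not have to be added by hand; cfgmgr will rediscover them. A minimal sketch, using the hdisk10 device from the example above (your disk numbers will differ):

# cfgmgr
# lspath -l hdisk10 -H -F"name parent path_id connection status"

All four paths should now show up as Enabled.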

Configuring MPIO

Use the following steps to set up the scenario:

  1. Create two Virtual I/O Server partitions and name them VIO_Server1 and VIO_Server2. When creating each Virtual I/O Server partition, select one Fibre Channel adapter in addition to the physical Ethernet adapter.
  2. Install both VIO Servers using CD or a NIM server.
  3. Change the fc_err_recov attribute to fast_fail and the dyntrk attribute to yes on the Fibre Channel adapters.

Use the lsdev -type adapter command to find the names of the Fibre Channel adapters.
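An illustrative listing (adapter names and descriptions vary by system); note that each fcsN adapter has a corresponding fscsiN protocol device, and it is the fscsiN device that carries the attributes changed below:

$ lsdev -type adapter | grep fcs
fcs0 Available FC Adapter
fcs1 Available FC Adapter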

$ chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm

fscsi0 changed

$ lsdev -dev fscsi0 -attr
attribute    value     description                           user_settable
attach       switch    How this adapter is CONNECTED         False
dyntrk       yes       Dynamic Tracking of FC Devices        True
fc_err_recov fast_fail FC Fabric Event Error RECOVERY Policy True
scsi_id      0x660c00  Adapter SCSI ID                       False
sw_fc_class  3         FC Class for Fabric                   True

Important: If you have two or more Fibre Channel adapters per Virtual I/O Server, you have to change the attributes for each of them.
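A quick way to apply the change to several adapters at once, assuming two protocol devices fscsi0 and fscsi1 (the padmin shell is a restricted ksh; if loops are not permitted in your environment, simply run the chdev command once per device):

$ for i in 0 1; do chdev -dev fscsi$i -attr fc_err_recov=fast_fail dyntrk=yes -perm; done
fscsi0 changed
fscsi1 changed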

  4. Reboot the VIO Servers for the changes to the Fibre Channel devices to take effect.
  5. Create the client partitions. The following chart shows the required virtual SCSI adapters:

VIO Server   VIO Server Slot  Client Partition  Client Slot
VIO_Server1  30               DB_Server         21
VIO_Server1  40               Apps_Server       21
VIO_Server2  30               DB_Server         22
VIO_Server2  40               Apps_Server       22
  6. Also add two virtual Ethernet adapters to each client to provide highly available network access (only one adapter per client is needed if you plan on using SEA failover for network redundancy).
  7. On VIO_Server1 and VIO_Server2, use the fget_config command to get the LUN-to-hdisk mappings.

# fget_config -vA
---dar0---
User array name = 'FAST200'
dac0 ACTIVE dac1 ACTIVE

Disk    DAC   LUN  Logical Drive
utm           1
hdisk0  dac1  0    1
hdisk1  dac0  2    2
hdisk2  dac0  3    4
hdisk3  dac1  4    3
hdisk4  dac1  5    5
hdisk5  dac0  6    6

You can also use the lsdev -dev hdiskn -vpd command, where n is the hdisk number, to retrieve this information.
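For example, the LUN is visible in the location code reported with the VPD (the L value at the end). The location code below is made up for illustration, but the L3000000000000 suffix corresponds to hdisk2 being LUN 3 in the fget_config output above:

$ lsdev -dev hdisk2 -vpd | grep hdisk2
hdisk2 U787A.001.DNZ00XY-P1-C5-T1-W200400A0B8110D0F-L3000000000000 ...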

  8. The disks are to be accessed through both VIO Servers, so the reserve_policy attribute for each disk must be set to no_reserve on both VIO_Server1 and VIO_Server2.

$ chdev -dev hdisk2 -attr reserve_policy=no_reserve
hdisk2 changed
$ chdev -dev hdisk3 -attr reserve_policy=no_reserve
hdisk3 changed

  9. Check, using the lsdev command, that the reserve_policy attribute is now set to no_reserve:

$ lsdev -dev hdisk2 -attr
attribute      value                            description                            user_settable
PR_key_value   none                             Persistant Reserve Key Value           True
cache_method   fast_write                       Write Caching method                   False
ieee_volname   600A0B8000110D0E0000000E47436859 IEEE Unique volume name                False
lun_id         0x0003000000000000               Logical Unit Number                    False
max_transfer   0x100000                         Maximum TRANSFER Size                  True
prefetch_mult  1                                Multiple of blocks to prefetch on read False
pvid           none                             Physical volume identifier             False
q_type         simple                           Queuing Type                           False
queue_depth    10                               Queue Depth                            True
raid_level     5                                RAID Level                             False
reassign_to    120                              Reassign Timeout value                 True
reserve_policy no_reserve                       Reserve Policy                         True
rw_timeout     30                               Read/Write Timeout value               True
scsi_id        0x660a00                         SCSI ID                                False
size           20480                            Size in Mbytes                         False
write_cache    yes                              Write Caching enabled                  False

  10. Double-check on both Virtual I/O Servers that the vhost adapters have the correct slot numbers by running the lsmap -all command.
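An illustrative excerpt of the output on VIO_Server1 (the Physloc value here is made up; the part to check is the -C30 suffix, which must match the server slot number from the chart above):

$ lsmap -all
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0          U9117.570.107CD9E-V1-C30                     0x00000002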

  11. Map the hdisks to the vhost adapters using the mkvdev command:

$ mkvdev -vdev hdisk2 -vadapter vhost0 -dev app_server
app_server Available
$ mkvdev -vdev hdisk3 -vadapter vhost1 -dev db_server
db_server Available

  12. Install the AIX OS in the client partitions.

Configuring MPIO in the client partitions

  1. Check the MPIO configuration by running the following commands:

# lspv
# lsdev -Cc disk
hdisk0 Available Virtual SCSI Disk Drive

  2. Run the lspath command to verify that the disk is attached using two different paths. The output below shows that hdisk0 is attached through the vscsi0 and vscsi1 adapters, which point to different Virtual I/O Servers. Both Virtual I/O Servers are up and running, and both paths are enabled.

# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1

  3. Enable health check mode for the disk so that the status of its paths is updated automatically. Note that the -P flag writes the change to the ODM only, so it takes effect at the next reboot.

# chdev -l hdisk0 -a hcheck_interval=20 -P
hdisk0 changed
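To double-check that the new value has been stored, query the attribute with lsattr (it reads the ODM, so the updated value shows up even before the reboot that makes it active):

# lsattr -El hdisk0 -a hcheck_interval
hcheck_interval 20 Health Check Interval True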
