Category Archives: SAN

How to Change the name of a Brocade switch?

When managing large, diverse SAN fabrics, it is extremely important to assign a unique and descriptive name to each switch. A descriptive name provides an easy way to determine the purpose of a switch and is also helpful when debugging problems. To change the name assigned to a Brocade switch, use the switchname command:

Switch:admin> switchname
Switch

Switch:admin> switchname "Fabric1Switch1"
Updating flash …

Fabric1Switch1:admin> switchname
Fabric1Switch1

Switch names should be chosen wisely, especially when dealing with large core/edge fabric topologies.
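Assuming a Fabric OS switch, the fabricshow command lists every switch in the fabric together with its domain ID, WWN, and name, so well-chosen names make the topology readable at a glance; a quick hedged check:

Fabric1Switch1:admin> fabricshow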

How to Rescan new LUNs added in Linux, HP-UX, AIX, and Solaris?

HP-UX

1. Rescan the devices:

ioscan -fnC <disk|tape>

2. Generate device files:

 insf -e

3. Verify the new devices:

 ioscan -funC <disk|tape>
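For example, a typical disk-only rescan on HP-UX (a hedged sketch; substitute tape for disk if needed) looks like this:

ioscan -fnC disk       (scan for newly presented disk devices)
insf -e                (create the missing device special files)
ioscan -funC disk      (verify the new devices and their device files)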

AIX

1. Rescan the devices:

 cfgmgr -vl fcsx

Where x is the FC adapter number

2. Verify the new devices:

 lsdev -Cc <disk|tape>
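For example, if the new LUNs are zoned to the first FC adapter (fcs0), the sequence would be (a hedged sketch):

cfgmgr -vl fcs0        (scan the fcs0 adapter for new devices)
lsdev -Cc disk         (verify the newly configured hdisks)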

Linux

The rescan in Linux is HBA-specific.

For QLogic:

echo scsi-qlascan > /proc/scsi/qla<model#>/<adapter instance>

For Emulex:

 sh force_lpfc_scan.sh lpfc<adapter-instance>

For each identified device, run the following:

echo "scsi add-single-device <host> <channel> <ID> <lun>" > /proc/scsi/scsi
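On newer Linux kernels (2.6 and later), the same rescan can usually be triggered through sysfs regardless of the HBA vendor; a hedged sketch, where hostX is the SCSI host number of the HBA:

echo "- - -" > /sys/class/scsi_host/hostX/scan     (rescan all channels, targets and LUNs on that host)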

Solaris

1. Determine the FC channels:

 cfgadm -al

2. Force a rescan:

 cfgadm -o force_update -c configure cx

Where x is the FC channel number

3. Force rescan at HBA port level:

 luxadm -e forcelip /dev/fc/fpx

4. Force rescan on all FC devices:

 cfgadm -al -o show_FCP_dev

5. Install device files:

 devfsadm

6. Display all QLogic HBA ports:

 luxadm -e port

7. Display HBA port information:

 luxadm -v display <WWPN>

8. Display the FC device map for an HBA port:

 luxadm -e dump_map
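Putting it together, a typical Solaris rescan for a fabric-attached controller c2 might look like this (a hedged sketch; substitute your own controller number):

cfgadm -al                                 (identify the fc-fabric attachment points)
cfgadm -o force_update -c configure c2     (force a rescan on controller c2)
devfsadm                                   (build the /dev entries for the new LUNs)
cfgadm -al -o show_FCP_dev                 (verify the new FCP devices)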

Notes: If one specific SAN client is missing a drive, please verify that your zoning is correct. Please also make sure the host initiator and the VTL's target ports are showing online on the Fibre Channel switch (check the HBA link light and check the cable).

 

IOPS calculation for your FAST Pool

I will provide an example of calculating the required spindles in combination with a known skew. Capacity will not be addressed in this post; I will base the sizing purely on IOPS/throughput and apply it to a mixed FAST VP pool.

We all know about the write penalty which is the following:

  • RAID10: 2
  • RAID5: 4
  • RAID6: 6

What if we have an environment with a skew of 80% and a required total of 50000 IOPS? Besides this, we know that there are 80% reads and only 20% writes. Remember that flash is a good reader.

Now that we know there is a skew of 80%, we can calculate the amount of flash we need inside the pool:

0.80 * 50000 = 40000 IOPS that we need inside the highest tier of our FAST VP pool. For the remaining 0.20 * 50000 = 10000 IOPS, we keep the rule of thumb of basing the remainder on 80% for SAS and 20% for NLSAS:

0.2 * (0.2 * 50000) = 2000 IOPS for NLSAS

0.8 * (0.2 * 50000) = 8000 IOPS for SAS

Now, without the write penalty applied, we need to get the following in our pool:

  • Flash: 40000 IOPS
  • SAS: 8000 IOPS
  • NLSAS: 2000 IOPS

Write Penalty

But what about the backend load? By backend load, I mean the load with the write penalty included, which is what we need for calculating the exact number of spindles. Remember that we have 80% reads and 20% writes in this environment:

(0.8 * 40000) + (2 * 0.2 * 40000) = 32000 + 16000 = 48000 IOPS for FASTCache which is in RAID10

or..

(0.8 * 40000) + (4 * 0.2 * 40000) = 32000 + 32000 = 64000 IOPS for Flash in our pool on RAID5

(0.8 * 8000) + (4 * 0.2 * 8000) = 6400 + 6400 = 12800 IOPS for SAS in RAID5

(0.8 * 2000) + (6 * 0.2 * 2000) = 1600 + 2400 =  4000 IOPS for NLSAS in RAID6

How many drives do I need per tier?

We keep the following rules of thumb in mind for the IOPS capacity per drive:

  • Flash: 3500 IOPS
  • SAS 15k: 180 IOPS
  • NLSAS: 90 IOPS

To make sure you are ready for bursts, you could use "Little's Law" as a rule of thumb and plan on using only about 70% of these per-drive figures so you always have an extra buffer, but this is up to you, as we will also round up the number of disks to fit the RAID group sizes.

64000 / 3500 = 18.3, so 19 disks, which we round up to 20 if we want the flash tier in a RAID 5 configuration

12800 / 180 = 71.1, so 72 disks, which we round up to 75 to keep RAID 5 best practices again

4000 / 90 = 44.4, so 45 disks, which we round up to 48 if we want to keep 6+2 RAID 6 sets, for example

Keep in mind that this calculation does not include any capacity sizing in TB or GB; it is based purely on IOPS!
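If you want to script the math above, here is a minimal shell/awk sketch; the skew, read/write mix, per-drive IOPS figures and RAID levels are the assumptions from this post, and the output is the raw drive count before rounding up to RAID-friendly group sizes:

awk -v total=50000 -v skew=0.80 -v rd=0.80 -v wr=0.20 'BEGIN {
  flash_fe = skew * total              # front-end IOPS for the flash tier
  sas_fe   = 0.8 * (1 - skew) * total  # front-end IOPS for SAS
  nl_fe    = 0.2 * (1 - skew) * total  # front-end IOPS for NL-SAS
  # back-end IOPS = reads + write penalty * writes (RAID5 for flash/SAS, RAID6 for NL-SAS)
  flash_be = rd * flash_fe + 4 * wr * flash_fe
  sas_be   = rd * sas_fe   + 4 * wr * sas_fe
  nl_be    = rd * nl_fe    + 6 * wr * nl_fe
  # divide by the per-drive rule of thumb and round up
  printf "Flash : %d back-end IOPS -> %d drives\n", flash_be, int(flash_be / 3500) + (flash_be % 3500 > 0)
  printf "SAS   : %d back-end IOPS -> %d drives\n", sas_be,   int(sas_be / 180)    + (sas_be % 180 > 0)
  printf "NL-SAS: %d back-end IOPS -> %d drives\n", nl_be,    int(nl_be / 90)      + (nl_be % 90 > 0)
}'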

IOPS calculation

What is IOPS?
IOPS (Input/Output Operations Per Second) is a common performance metric used for comparing and measuring the performance of storage systems such as HDDs, SSDs, and SANs.

Quick Calculation sheet

RPM      IOPS
15K      175
10K      125
7.2K     75
5.4K     50

How to calculate the IOPS requirement?

We will consider the 600 GB Seagate Cheetah 15K RPM HDD.

http://www.seagate.com/files/docs/pdf/datasheet/disc/cheetah-15k.7-ds1677.3-1007us.pdf

Read/write seek time: 3.4 / 3.9 ms (average 3.65 ms)
Average latency: 2.0 ms

IOPS = 1000 / (average latency in ms + average seek time in ms)
     = 1000 / (2.0 + 3.65)
     = 176.99 IOPS

RAID Level and Write Penalty

RAID Level         I/O Write Penalty
RAID 0             1
RAID 1 / RAID 10   2
RAID 5             4
RAID 6             6

Total IOPS = Disk Speed IOPS * Number of Disks
Actual IOPS = ((Total IOPS * Write %) / RAID Penalty) + (Total IOPS * Read %)

Suppose we have 8 of the Seagate Cheetah 15K hard drives.
Total IOPS = 8 * 176.99
           = 1415.92 IOPS (for RAID 0, i.e. no write penalty)
           = ~1400 IOPS

Considering RAID Overheads

Work Load details
Write Load = 30 %
Read Load = 70%
RAID Level = 10
Actual IOPS = ((1400 * 0.30) / 2) + (1400 * 0.70)
            = 210 + 980
            = 1190 IOPS

Calculating the number of disks required (reverse calculation)

Requirement: I need about 1200 IOPS with RAID 10, a 30% write load and a 70% read load (we will reuse the 1190 IOPS from the example above).
Actual IOPS = 1190
Total IOPS = (Actual IOPS * RAID Penalty) / (Write % + RAID Penalty - (RAID Penalty * Write %))
           = (1190 * 2) / (0.3 + 2 - (0.3 * 2))
           = 2380 / 1.7
           = 1400 IOPS
Number of disks = 1400 / 176.99 = 7.9, so 8 drives are required.
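The same forward and reverse formulas as a small shell/awk sketch; the 1400 raw IOPS, 70/30 read/write mix, RAID 10 penalty of 2 and 176.99 IOPS per drive are the assumptions from the example above:

# Forward: front-end IOPS delivered by ~1400 raw IOPS with RAID 10 and 30% writes
awk 'BEGIN { total = 1400; penalty = 2; wr = 0.30; rd = 0.70;
             printf "Actual IOPS: %.0f\n", total * rd + (total * wr) / penalty }'

# Reverse: raw IOPS (and drives) needed to deliver ~1190 front-end IOPS with the same profile
awk 'BEGIN { actual = 1190; penalty = 2; wr = 0.30;
             total = (actual * penalty) / (wr + penalty - penalty * wr);
             printf "Total IOPS: %.0f (about %d drives at 176.99 IOPS each)\n",
                    total, int(total / 176.99) + (total % 176.99 > 0) }'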

System Panic During Boot Logging the Error “NOTICE: zfs_parse_bootfs: error 19”

Today, while migrating a SAN, I faced this issue; I hope it will help others too.

The system panics during boot, logging the error:

{0} ok boot 56024-disk
Boot device: /virtual-devices@100/channel-devices@200/disk@1 File and args:
SunOS Release 5.10 Version Generic_147440-01 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
NOTICE: zfs_parse_bootfs: error 19
Cannot mount root on rpool/68 fstype zfs
panic[cpu0]/thread=180e000: vfs_mountroot: cannot mount root

Changes

This issue usually occurs when the system is trying to boot a ZFS rpool and the path to the disk has changed, or when you are trying to boot the system from a cloned disk (that is, a disk that is a copy of another boot disk).

Cause

The issue is caused by a mismatch between the current path of the disk you are trying to boot from and the path stored in the ZFS label of the same disk:

ok boot 56024-disk
Boot device: /virtual-devices@100/channel-devices@200/disk@1 File and args:

 

# zdb -l /dev/rdsk/c0d1s0
——————————————–
LABEL 0
——————————————–
version: 29
name: ‘rpool’
state: 0
txg: 1906
pool_guid: 3917355013518575342
hostid: 2231083589
hostname: ”
top_guid: 3457717657893349899
guid: 3457717657893349899
vdev_children: 1
vdev_tree:
type: ‘disk’
id: 0
guid: 3457717657893349899
path: '/dev/dsk/c0d0s0'
devid: 'id1,vdc@f85a3722e4e96b600000e056e0049/a'
phys_path: '/virtual-devices@100/channel-devices@200/disk@0:a'
whole_disk: 0
metaslab_array: 31
metaslab_shift: 27
ashift: 9
asize: 21361065984
is_log: 0
create_txg: 4

As you can see, we are trying to boot from the disk@1 path, but in the ZFS label the path is disk@0.

Solution

To fix the issue, boot the system in failsafe mode or from CD-ROM and import the rpool on that disk to force ZFS to correct the path:

# zpool import -R /mnt rpool
cannot mount ‘/mnt/export’: failed to create mountpoint
cannot mount ‘/mnt/export/home’: failed to create mountpoint
cannot mount ‘/mnt/rpool’: failed to create mountpoint

# zdb -l /dev/rdsk/c0d1s0
——————————————–
LABEL 0
——————————————–
version: 29
name: ‘rpool’
state: 0
txg: 1923
pool_guid: 3917355013518575342
hostid: 2230848911
hostname: ”
top_guid: 3457717657893349899
guid: 3457717657893349899
vdev_children: 1
vdev_tree:
type: ‘disk’
id: 0
guid: 3457717657893349899
path: '/dev/dsk/c0d1s0'
devid: 'id1,vdc@f85a3722e4e96b600000e056e0049/a'
phys_path: '/virtual-devices@100/channel-devices@200/disk@1:a'
whole_disk: 0
metaslab_array: 31
metaslab_shift: 27
ashift: 9
asize: 21361065984
is_log: 0
create_txg: 4

As you can see, the path has been corrected. However, you also have to remove the zpool.cache file; otherwise, after boot the ZFS commands will still show the disk as c0d0:

# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 5.86G 13.7G 106K /mnt/rpool
rpool/ROOT 4.35G 13.7G 31K legacy
rpool/ROOT/s10s_u10wos_17b 4.35G 13.7G 4.35G /mnt
rpool/dump 1.00G 13.7G 1.00G –
rpool/export 63K 13.7G 32K /mnt/export
rpool/export/home 31K 13.7G 31K /mnt/export/home
rpool/swap 528M 14.1G 114M –

# zfs mount rpool/ROOT/s10s_u10wos_17b
# cd /mnt/etc/zfs
# rm zpool.cache
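Once the cache file is removed, the pool can be released and the system rebooted from the corrected device; a hedged sketch of the final steps, using the same boot alias as above:

# cd /
# zpool export rpool            <- release the pool that was imported under /mnt
# init 0                        <- back to the OBP prompt
{0} ok boot 56024-disk          <- boot from the same device; the error 19 panic should be gone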

Brocade SAN Switch: NTP and Setting the Date and Time

DS_300B_234:admin> date

Tue Dec 31 06:43:16 UTC 2013

DS_300B_234:admin> tstimezone

Time Zone Hour Offset: 0

Time Zone Minute Offset: 0

DS_300B_234:admin> date

Tue Dec 31 06:45:59 UTC 2013

DS_300B_234:admin> tstimezone --interactive

Please identify a location so that time zone rules can be set correctly.

Please select a continent or ocean.

1) Africa

2) Americas

3) Antarctica

4) Arctic Ocean

5) Asia

6) Atlantic Ocean

7) Australia

8) Europe

9) Indian Ocean

10) Pacific Ocean

11) none – I want to specify the time zone using the POSIX TZ format.

Enter number or control-D to quit ?5

Please select a country.

1) Afghanistan           18) Israel                35) Palestine

2) Armenia               19) Japan                 36) Philippines

3) Azerbaijan            20) Jordan                37) Qatar

4) Bahrain               21) Kazakhstan            38) Russia

5) Bangladesh            22) Korea (North)         39) Saudi Arabia

6) Bhutan                23) Korea (South)         40) Singapore

7) Brunei                24) Kuwait                41) Sri Lanka

8) Cambodia              25) Kyrgyzstan            42) Syria

9) China                 26) Laos                  43) Taiwan

10) Cyprus                27) Lebanon               44) Tajikistan

11) East Timor            28) Macau                 45) Thailand

12) Georgia               29) Malaysia              46) Turkmenistan

13) Hong Kong             30) Mongolia              47) United Arab Emirates

14) India                 31) Myanmar (Burma)       48) Uzbekistan

15) Indonesia             32) Nepal                 49) Vietnam

16) Iran                  33) Oman                  50) Yemen

17) Iraq                  34) Pakistan

Enter number or control-D to quit ?14

 

The following information has been given:

 

India

 

Therefore TZ=’Asia/Kolkata’ will be used.

Local time is now:      Tue Dec 31 12:16:40 IST 2013.

Universal Time is now:  Tue Dec 31 06:46:40 UTC 2013.

Is the above information OK?

1) Yes

2) No

Enter number or control-D to quit ?1

System Time Zone change will take effect at next reboot

DS_300B_234:admin> tsclockserver "10.X.X.X.14"

Updating Clock Server configuration…done.

Updated with the NTP servers

DS_300B_234:admin> date

Tue Dec 31 11:44:42 IST 2013
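To double-check the configuration later, tsclockserver with no arguments displays the currently configured clock server and tstimezone shows the active time zone (a quick hedged check):

DS_300B_234:admin> tsclockserver

DS_300B_234:admin> tstimezone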

 

EMC and MPIO in AIX

You can run into an issue with EMC storage on AIX systems that use native MPIO (no PowerPath) for the boot disks:

After installing the EMC Symmetrix ODM definitions on your client system, the system won't boot any more and will hang with LED 554 (unable to find the boot disk).

The boot hang (LED 554) is not caused by the EMC ODM package itself, but by the boot process not detecting a path to the boot disk if the first MPIO path does not correspond to the fscsiX driver instance behind which all the hdisks are configured. Let me explain that in more detail:

Let’s say we have an AIX system with four HBAs configured in the following order:

# lscfg -v | grep fcs
fcs2 (wwn 71ca) -> no devices configured behind this fscsi2 driver instance (path only configured in CuPath ODM table)
fcs3 (wwn 71cb) -> no devices configured behind this fscsi3 driver instance (path only configured in CuPath ODM table)
fcs0 (wwn 71e4) -> no devices configured behind this fscsi0 driver instance (path only configured in CuPath ODM table)
fcs1 (wwn 71e5) -> ALL devices configured behind this fscsi1 driver instance

Looking at the MPIO path configuration, here is what we have for the rootvg disk:

# lspath -l hdisk2 -H -F"name parent path_id connection status"
name   parent path_id connection                      status
hdisk2 fscsi0 0       5006048452a83987,33000000000000 Enabled
hdisk2 fscsi1 1       5006048c52a83998,33000000000000 Enabled
hdisk2 fscsi2 2       5006048452a83986,33000000000000 Enabled
hdisk2 fscsi3 3       5006048c52a83999,33000000000000 Enabled

The fscsi1 driver instance is the second path (path_id 1), so remove the other three paths, keeping only the path corresponding to fscsi1:

# rmpath -l hdisk2 -p fscsi0 -d
# rmpath -l hdisk2 -p fscsi2 -d
# rmpath -l hdisk2 -p fscsi3 -d
# lspath -l hdisk2 -H -F"name parent path_id connection status"

Afterwards, do a savebase to update the boot logical volume (hd5), set the bootlist to hdisk2, and reboot the host.
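A hedged sketch of those steps, using the device names from the example:

# savebase -v                   <- save the ODM customization into the boot logical volume (hd5)
# bootlist -m normal hdisk2     <- set hdisk2 as the normal-mode boot device
# shutdown -Fr                  <- reboot the host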

It will come up successfully, no more hang LED 554.

When checking the rootvg disk afterwards, you will see that a new hdisk10 has been configured with the correct ODM definitions, as shown below:

# lspv
hdisk10 0003027f7f7ca7e2 rootvg active
# lsdev -Cc disk
hdisk2 Defined   00-09-01 MPIO Other FC SCSI Disk Drive
hdisk10 Available 00-08-01 EMC Symmetrix FCP MPIO Raid6

To summarize, it is recommended to set up ONLY ONE path when installing AIX to a SAN disk, then install the EMC ODM package, reboot the host, and only after that is complete, add the other paths. By doing that, we ensure that the fscsiX driver instance used for the boot process has the hdisk configured behind it.

Configuring MPIO

Use the following steps to set up the scenario:

  1. Create two Virtual I/O Server partitions and name them VIO_Server1 and VIO_Server2. When creating each Virtual I/O Server partition, select one Fibre Channel adapter in addition to the physical adapter.
  2. Install both VIO Servers using CD or a NIM server.
  3. Change the fc_err_recov attribute to fast_fail and dyntrk to yes on the Fibre Channel adapters.

Use the lsdev -type adapter command to find the Fibre Channel adapter numbers.

$ chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm

fscsi0 changed

$ lsdev -dev fscsi0 -attr

attribute     value      description                             user_settable
attach        switch     How this adapter is CONNECTED           False
dyntrk        yes        Dynamic Tracking of FC Devices          True
fc_err_recov  fast_fail  FC Fabric Event Error RECOVERY Policy   True
scsi_id       0x660c00   Adapter SCSI ID                         False
sw_fc_class   3          FC Class for Fabric                     True

Important: If you have two or more Fibre Channel adapters per Virtual I/O Server, you have to change the attributes for each of them.

  4. Reboot the VIO Servers for the changes to the Fibre Channel devices to take effect.
  5. Create the client partitions with the required virtual SCSI adapters based on the configuration shown in the following chart:

VIO Server    VIO Server Slot   Client Partition   Client Slot
VIO_Server1   30                DB_Server          21
VIO_Server1   40                Apps_Server        21
VIO_Server2   30                DB_Server          22
VIO_Server2   40                Apps_Server        22
  6. Also add two virtual Ethernet adapters to each client to provide highly available network access, or one adapter if you plan on using SEA failover for network redundancy.
  7. On VIO_Server1 and VIO_Server2, use the fget_config command to get the LUN-to-hdisk mappings.

# fget_config -vA

---dar0---

User array name = 'FAST200'
dac0 ACTIVE dac1 ACTIVE

Disk     DAC    LUN   Logical Drive
utm             1
hdisk0   dac1   0     1
hdisk1   dac0   2     2
hdisk2   dac0   3     4
hdisk3   dac1   4     3
hdisk4   dac1   5     5
hdisk5   dac0   6     6

You can also use the lsdev -dev hdiskn -vpd command, where n is the hdisk number, to retrieve this information.

  8. The disks are to be accessed through both VIO Servers. The reserve_policy for each disk must be set to no_reserve on VIO_Server1 and VIO_Server2.

$ chdev -dev hdisk2 -attr reserve_policy=no_reserve

hdisk2 changed

$ chdev -dev hdisk3 -attr reserve_policy=no_reserve

hdisk3 changed

9. Check, using the lsdev command, that the reserve_policy attribute is now set to no_reserve:

$ lsdev -dev hdisk2 -attr

attribute       value                             description                              user_settable
PR_key_value    none                              Persistant Reserve Key Value             True
cache_method    fast_write                        Write Caching method                     False
ieee_volname    600A0B8000110D0E0000000E47436859  IEEE Unique volume name                  False
lun_id          0x0003000000000000                Logical Unit Number                      False
max_transfer    0x100000                          Maximum TRANSFER Size                    True
prefetch_mult   1                                 Multiple of blocks to prefetch on read   False
pvid            none                              Physical volume identifier               False
q_type          simple                            Queuing Type                             False
queue_depth     10                                Queue Depth                              True
raid_level      5                                 RAID Level                               False
reassign_to     120                               Reassign Timeout value                   True
reserve_policy  no_reserve                        Reserve Policy                           True
rw_timeout      30                                Read/Write Timeout value                 True
scsi_id         0x660a00                          SCSI ID                                  False
size            20480                             Size in Mbytes                           False
write_cache     yes                               Write Caching enabled                    False

10. Double-check on both Virtual I/O Servers that the vhost adapters have the correct slot numbers by running the lsmap -all command.

11. Map the hdisks to the vhost adapters using the mkvdev command:

$ mkvdev -vdev hdisk2 -vadapter vhost0 -dev app_server

app_server Available

$ mkvdev -vdev hdisk3 -vadapter vhost1 -dev db_server

db_server Available
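You can verify the new mappings from the padmin shell with lsmap (a hedged check, using the vhost numbers configured above):

$ lsmap -vadapter vhost0

$ lsmap -vadapter vhost1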

12. Install the AIX OS in client partitions.

Configuring MPIO in the client partitions

  1. Check the MPIO configuration by running the following commands:

# lspv

# lsdev -Cc disk

hdisk0 Available Virtual SCSI Disk Drive

  2. Run the lspath command to verify that the disk is attached using two different paths. The output shows that hdisk0 is attached through the vscsi0 and vscsi1 adapters, which point to different Virtual I/O Servers. Both Virtual I/O Servers are up and running, and both paths are enabled.

# lspath

Enabled hdisk0 vscsi0

Enabled hdisk0 vscsi1

  3. Enable the health check mode for the disk so that the status of the paths is automatically updated:

# chdev -l hdisk0 -a hcheck_interval=20 -P

hdisk0 changed
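Note that -P only updates the ODM, so the new value takes effect after the next reboot (or after the device is reconfigured); you can confirm the setting with lsattr, for example:

# lsattr -El hdisk0 | grep hcheck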

AIX LUNs (LUNZ) presented to the host?

LUNz is the logical unit number that an application client uses to communicate with, configure, and determine information about a SCSI storage array and the logical units attached to it. The LUN_Z value shall be zero.

LUNz has been implemented on CLARiiON arrays to make arrays visible to the host OS and PowerPath when no LUNs are bound on that array.  When using a direct connect configuration, and there is no Navisphere Management station to talk directly to the array over IP, the LUNZ can be used as a pathway for Navisphere CLI to send Bind commands to the array.

LUNz also makes arrays visible to the host OS and PowerPath when the host's initiators have not yet logged in to the Storage Group created for the host. Without LUNz, there would be no device on the host through which Navisphere Agent could push the initiator record to the array, which is mandatory for the host to log in to the Storage Group. Once this initiator push is done, the host will be displayed as an available host to add to the Storage Group in Navisphere Manager (Navisphere Express).

LUNz should disappear once a LUN zero is bound, or when Storage Group access has been attained.

To conclude, the LUNz devices will show up in the following two scenarios:
1. When arraycommpath is set to 1 (enabled) and the host HBAs are registered and logged in to the CLARiiON array, but no Storage Group is configured for this host.
2. When there is no LUN configured as HLU0 (Host LUN 0) in the host's Storage Group.

Figure 1: Storage Group with no LUN assigned using HLU0
To resolve this LUNz issue:
1. Verify the LUNZ hdisk numbers by issuing:

# lsdev -Cc disk | grep LUNZ

 

hdisk5 Available 08-08-08     EMC CLARiiON FCP LUNZ Disk

hdisk6 Available 08-08-08     EMC CLARiiON FCP LUNZ Disk

hdisk7 Available 08-08-08     EMC CLARiiON FCP LUNZ Disk

hdisk8 Available 08-08-08     EMC CLARiiON FCP LUNZ Disk

 

2. Assign one of the LUNs to the host using HLU0.

Figure 2: Assign LUN with HLU0
3. Remove each LUNZ device with the rmdev command:
#rmdev -dl hdisk5
#rmdev -dl hdisk6
#rmdev -dl hdisk7
#rmdev -dl hdisk8

4. Reconfigure the devices with cfgmgr or emc_cfgmgr.
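After the rescan, a quick hedged check that the LUNZ devices are gone and the real CLARiiON LUNs are configured:

# lsdev -Cc disk | grep -i lunz       <- should return nothing now
# lsdev -Cc disk | grep -i clariion   <- should list the real FCP LUNs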

AIX SDD (subsystem device driver)

SDD is designed to support the multipath configuration in the ESS.
It is the software used to balance ESS I/O traffic across all adapters and provides multiple paths to the data from the host.
When using SDD, cfgmgr is run 3 times: cfgmgr -l fcs0, cfgmgr -l fcs1, then cfgmgr (the third one builds the vpaths).

3 policies exist for load balancing:
-default: selecting the path with the least number of current I/O operations
-round robin: choosing the path that was not used for the last operation (alternating if 2 paths exist)
-failover: all I/O sent over the most preferred path, until a failure is detected.

SDDSRV:
SDD has a server daemon running in the background: lssrc/stopsrc/startsrc -s sddsrv
If sddsrv is stopped, the feature that automatically recovers failed paths is disabled.

vpath:
A logical disk defined in ESS and recognized by AIX. AIX uses vpath instead of hdisk as a unit of physical storage.

root@aix: /dev # lsattr -El vpath0
active_hdisk  hdisk20/00527461/fscsi1          Active hdisk                 False
active_hdisk  hdisk4/00527461/fscsi0           Active hdisk                 False
policy        df                               Scheduling Policy            True    <-path selection policy
pvid          0056db9a77baebb90000000000000000 Physical volume identifier   False
qdepth_enable yes                              Queue Depth Control          True
serial_number 00527461                         LUN serial number            False
unique_id     1D080052746107210580003IBMfcp    Device Unique Identification False

policy:
fo: failover only – all I/O operations sent to the same paths until the path fails
lb: load balancing – the path is chosen by the number of I/O operations currently in process
lbs: load balancing sequential – same as before with optimization for sequential I/O
rr: round robin – path is chosen at random from the paths that were not used for the last operation
rrs: round robin sequential – same as before with optimization for sequential I/O
df: default – the default policy is load balancing

datapath set device N policy        change the SDD path selection policy dynamically

DPO (Data Path Optimizer):
it is a pseudo device (lsdev | grep dpo), which is the pseudo parent of the vpaths

root@: / # lsattr -El dpo
SDD_maxlun      1200 Maximum LUNS allowed for SDD                  False
persistent_resv yes  Subsystem Supports Persistent Reserve Command False

— ——————————

software requirements for SDD:
-host attachment for SDD (ibm2105.rte, devices.fcp.disk.ibm.rte) – this is the ODM extension
The host attachments for SDD add 2105 (ESS)/2145 (SVC)/1750 (DS6000)/2107 (DS8000) device information to allow AIX to properly configure 2105/2145/1750/2107 hdisks.
The 2105/2145/1750/2107 device information allows AIX to:
– Identify the hdisk(s) as a 2105/2145/1750/2107 hdisk.
– Set default hdisk attributes such as queue_depth and timeout values.
– Indicate to the configure method to configure 2105/2145/1750/2107 hdisk as non-MPIO-capable devices

ibm2105.rte: for 2105 devices
devices.fcp.disk.ibm.rte: for DS8000, DS6000 and SAN Volume Controller)

-devices.sdd.53.rte – this is the driver (sdd)
it provides the multipath configuration environment support

——————————–

addpaths                  dynamically adds more paths to SDD vpath devices (before addpaths, run cfgmgr)
(running cfgmgr alone does not add new paths to SDD vpath devices)
cfgdpo                    configures the dpo device
cfgvpath                  configures vpaths
cfallvpath                configures dpo + vpaths
dpovgfix <vgname>         fixes a VG that has mixed vpath and hdisk physical volumes
extendvg4vp               can be used instead of extendvg (it will move the pvid from the hdisk to the vpath)

datapath query version    shows sdd version
datapath query essmap     shows vpaths and their hdisks in a list
datapath query portmap    shows vpaths and ports
—————————————
datapath query adapter    information about the adapters
State:
Normal           adapter is in use.
Degraded         one or more paths are not functioning.
Failed           the adapter is no longer being used by SDD.

datapath query device     information about the devices (e.g. datapath query device 0)
State:
Open             path is in use
Close            path is not being used
Failed           due to errors path has been removed from service
Close_Failed     path was detected to be broken and failed to open when the device was opened
Invalid          path failed to open, but the MPIO device is open
—————————————
datapath remove device X path Y   removes path# Y from device# X (datapath query device, will show X and Y)
datapath set device N policy      change the SDD path selection policy dynamically
datapath set adapter 1 offline

lsvpcfg                           list vpaths and their hdisks
lsvp -a                           displays vpath, vg, disk informations

lquerypr                          reads and releases the persistent reservation key
lquerypr -h/dev/vpath30           queries the persistent reservation on the device (0: if it is reserved by the current host, 1: if by another host)
lquerypr -vh/dev/vpath30          query and display the persistent reservation on a device
lquerypr -rh/dev/vpath30          release the persistent reservation if the device is reserved by the current host
(0: if the command succeeds or the device is not reserved, 2: if the command fails)
lquerypr -ch/dev/vpath30          reset any persistent reserve and clear all reservation key registrations
lquerypr -ph/dev/vpath30          remove the persistent reservation if the device is reserved by another host
—————————————

Removing SDD (after install a new one):
-umount fs on ESS
-varyoffvg
(if HACMP and RG is online on other host: vp2hd <vgname>)  <- it converts vpaths to hdisks
-rmdev -dl dpo -R                                         <- removes all the SDD vpath devices
-stopsrc -s sddsrv                                        <- stops the SDD server
-if needed: rmdev -dl hdiskX                              <- removes hdisks
(lsdev -C -t 2105* -F name | xargs -n1 rmdev -dl)

-smitty remove — devices.sdd.52.rte
-smitty install — devices.sdd.53.rte (/mnt/Storage-Treiber/ESS/SDD-1.7)
-cfgmgr
—————————————

Removing SDD Host Attachment:
-lsdev -C -t 2105* -F name | xargs -n1 rmdev -dl          <- removes hdisk devices
-smitty remove — ibm2105.rte (devices.fcp.disk.ibm)
—————————————

Change adapter settings (Un/re-configure paths):

-datapath set adapter 1 offline
-datapath remove adapter 1
-rmdev -Rl fcs0
(if needed: for i in `lsdev -Cc disk | grep -i defined | awk '{ print $1 }'`; do rmdev -Rdl $i; done)
-chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail
-chdev -l fcs0 -a init_link=pt2pt
-cfgmgr; addpaths
—————————————

Reconfigure vpaths:
-datapath remove device 2 path 0
-datapath remove device 1 path 0
-datapath remove device 0 path 0
-cfgmgr; addpaths
-rmdev -Rdl vpath0
-cfgmgr;addpaths
—————————————

Can’t give pvid for a vpath:
root@aix: / # chdev -l vpath6 -a pv=yes
Method error (/usr/lib/methods/chgvpath):
0514-047 Cannot access a device.

in errpt:DEVICE LOCKED BY ANOTHER USER
RELEASE DEVICE PERSISTENT RESERVATION

# lquerypr -Vh /dev/vpath6          <- it will show the host key
# lquerypr -Vph /dev/vpath6         <- it will clear the reservation lock
# lquerypr -Vh /dev/vpath6          <- checking again will show it is OK now