
IBM VIOS CDROM – DVDROM (Virtual Optical Device)

CDROM – DVDROM (Virtual Optical Device):

Any optical device equipped on the Virtual I/O Server partition (CD-ROM, DVD-ROM, or DVD-RAM) can be virtualized and assigned to any logical partition, one at a time, using the same virtual SCSI adapter provided to virtual disks. Virtual optical devices can be used to install the operating system and, in the case of a DVD-RAM, to make backups.

Creating Virtual Optical Device:

1. On the VIO Server, create a virtual SCSI server adapter. Set this adapter to "Any client partition can connect".
A dedicated adapter for the virtual optical device makes things easier from a system management point of view.

2. On the client LPAR, create a virtual SCSI client adapter, mapping its slot ID to the server adapter above.

3. cfgdev (on the VIOS) will bring up a new vhostX
cfgmgr (on the client) will bring up a new vscsiX

4. On the VIO Server, create the optical device:

-for using physical CDs and DVDs, create an optical device:
$ mkvdev -vdev cd0 -vadapter vhost4 -dev vcd
vcd Available

$ lsdev -virtual

vcd             Available  Virtual Target Device – Optical Media

-for a file-backed (ISO image) optical device:
$ mkvdev -fbo -vadapter vhost1
vtopt0 Available

$ lsdev -virtual

vtopt0           Available   Virtual Target Device – File-backed Optical

(lssp shows the storage pools; mkrep -sp rootvg -size 4G creates the media repository)
(copy the ISO image to /var/vio/VMLibrary; 'lsrep' will show the media repository content)
(creating an ISO image from a physical disc: mkvopt -name <filename>.iso -dev cd0 -ro)

load the image into the vtopt0 device: loadopt -vtd vtopt0 -disk dvd.1022A4_OBETA_710.iso
(lsmap -all will show it)

or you can check it:
padmin@vios1 : /home/padmin # lsvopt
VTD             Media                                   Size(mb)
vtopt0          AIX_7100-00-01_DVD_1_of_2_102010.iso        3206

if another disc is needed later, you can unload the image with this command: unloadopt -vtd vtopt0
if the image is not needed anymore at all, we can remove it from the repository: rmvopt -name AIX_7100-00-01.iso
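As a quick sanity check, the lsvopt listing can be parsed to see which image a given VTD currently holds. A minimal sketch, using the sample output shown above (the awk filter and variable names are illustrative, not part of the VIOS toolset):

```shell
# Sample lsvopt output copied from above (normally: lsvopt_output=$(lsvopt))
lsvopt_output='VTD             Media                                   Size(mb)
vtopt0          AIX_7100-00-01_DVD_1_of_2_102010.iso        3206'

vtd=vtopt0
# Print the media file loaded in the given VTD (column 2 of the matching row)
media=$(printf '%s\n' "$lsvopt_output" | awk -v v="$vtd" '$1 == v {print $2}')
echo "$media"
```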

5. On the client LPAR, run cfgmgr and create the CD-ROM filesystem
In the AIX client partition, run the cfgmgr command to assign the virtual optical drive to it. If the drive is already assigned to another partition, you will get an error message and will have to release the drive from the partition holding it.

create mount point: mkdir /cdrom

create cdrom filesystem: smitty fs -> add cdrom filesystems:
device name: cd0
mount point: /cdrom
mount automatically

mount the filesystem: mount -v cdrfs -r /dev/cd0 /cdrom

AIX SDD (Subsystem Device Driver)

SDD is designed to support multipath configurations on the ESS.
The software balances ESS I/O traffic across all adapters and provides multiple paths to the data from the host.
When using SDD, cfgmgr is run 3 times (cfgmgr -l fcs0, cfgmgr -l fcs1, cfgmgr (the third one builds the vpaths))

3 policies exist for load balancing:
-default: selects the path with the least number of current I/O operations
-round robin: chooses the path that was not used for the last operation (alternating if 2 paths exist)
-failover: all I/O is sent over the most preferred path until a failure is detected

SDDSRV:
SDD has a server daemon running in the background: lssrc/stopsrc/startsrc -s sddsrv
If sddsrv is stopped, the feature that automatically recovers failed paths is disabled.

vpath:
A logical disk defined in ESS and recognized by AIX. AIX uses vpath instead of hdisk as a unit of physical storage.

root@aix: /dev # lsattr -El vpath0
active_hdisk  hdisk20/00527461/fscsi1          Active hdisk                 False
active_hdisk  hdisk4/00527461/fscsi0           Active hdisk                 False
policy        df                               Scheduling Policy            True    <-path selection policy
pvid          0056db9a77baebb90000000000000000 Physical volume identifier   False
qdepth_enable yes                              Queue Depth Control          True
serial_number 00527461                         LUN serial number            False
unique_id     1D080052746107210580003IBMfcp    Device Unique Identification False

policy:
fo: failover only – all I/O operations are sent to the same path until the path fails
lb: load balancing – the path is chosen by the number of I/O operations currently in process
lbs: load balancing sequential – same as before with optimization for sequential I/O
rr: round robin – the path is chosen at random from the paths not used for the last operation
rrs: round robin sequential – same as before with optimization for sequential I/O
df: default – the default policy is load balancing

datapath set device N policy        change the SDD path selection policy dynamically

DPO (Data Path Optimizer):
It is a pseudo device (lsdev | grep dpo), which is the pseudo parent of the vpaths.

root@: / # lsattr -El dpo
SDD_maxlun      1200 Maximum LUNS allowed for SDD                  False
persistent_resv yes  Subsystem Supports Persistent Reserve Command False

——————————

software requirements for SDD:
-host attachment for SDD (ibm2105.rte, devices.fcp.disk.ibm.rte) – this is the ODM extension
The host attachments for SDD add 2105 (ESS)/2145 (SVC)/1750 (DS6000)/2107 (DS8000) device information to allow AIX to properly configure 2105/2145/1750/2107 hdisks.
The 2105/2145/1750/2107 device information allows AIX to:
– Identify the hdisk(s) as a 2105/2145/1750/2107 hdisk.
– Set default hdisk attributes such as queue_depth and timeout values.
– Indicate to the configure method to configure 2105/2145/1750/2107 hdisk as non-MPIO-capable devices

ibm2105.rte: for 2105 devices
devices.fcp.disk.ibm.rte: for DS8000, DS6000 and SAN Volume Controller

-devices.sdd.53.rte – this is the driver (sdd)
it provides the multipath configuration environment support

——————————–

addpaths                  dynamically adds more paths to SDD vpath devices (before addpaths, run cfgmgr)
(running cfgmgr alone does not add new paths to SDD vpath devices)
cfgdpo                    configures dpo
cfgvpath                  configures vpaths
cfallvpath                configures dpo+vpaths
dpovgfix <vgname>         fixes a vg that has mixed vpath and hdisk physical volumes
extendvg4vp               this can be used instead of extendvg (it will move the pvid from hdisk to vpath)

datapath query version    shows sdd version
datapath query essmap     shows vpaths and their hdisks in a list
datapath query portmap    shows vpaths and ports
—————————————
datapath query adapter    information about the adapters
State:
Normal           adapter is in use.
Degraded         one or more paths are not functioning.
Failed           the adapter is no longer being used by SDD.

datapath query device     information about the devices (datapath query device 0)
State:
Open             path is in use
Close            path is not being used
Failed           due to errors path has been removed from service
Close_Failed     path was detected to be broken and failed to open when the device was opened
Invalid          path failed to open, but the MPIO device is opened
—————————————
datapath remove device X path Y   removes path# Y from device# X (datapath query device, will show X and Y)
datapath set device N policy      change the SDD path selection policy dynamically
datapath set adapter 1 offline

lsvpcfg                           list vpaths and their hdisks
lsvp -a                           displays vpath, vg, and disk information

lquerypr                          reads and releases the persistent reservation key
lquerypr -h/dev/vpath30           queries the persistent reservation on the device (0: if it is reserved by the current host, 1: if by another host)
lquerypr -vh/dev/vpath30          query and display the persistent reservation on a device
lquerypr -rh/dev/vpath30          release the persistent reservation if the device is reserved by the current host
(0: if the command succeeds or the device is not reserved, 2: if the command fails)
lquerypr -ch/dev/vpath30          reset any persistent reserve and clear all reservation key registrations
lquerypr -ph/dev/vpath30          remove the persistent reservation if the device is reserved by another host
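The exit codes above lend themselves to scripting. A hedged sketch of the pattern; the stub function below stands in for the real lquerypr binary (which only exists on SDD hosts), with return 0 simulating "reserved by the current host":

```shell
# Stub standing in for the real lquerypr (assumption: binary not available here).
# Per the exit codes above: 0 = reserved by the current host, 1 = by another host.
lquerypr() { return 0; }

if lquerypr -h/dev/vpath30; then
    status="reserved by current host"
else
    status="reserved by another host"
fi
echo "$status"
```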
—————————————

Removing SDD (before installing a new one):
-umount filesystems on ESS
-varyoffvg
(if HACMP and the RG is online on the other host: vp2hd <vgname>   <–it converts vpaths to hdisks)
-rmdev -dl dpo -R                                         <–removes all the SDD vpath devices
-stopsrc -s sddsrv                                        <–stops SDD server
-if needed: rmdev -dl hdiskX                              <–removes hdisks
(lsdev -C -t 2105* -F name | xargs -n1 rmdev -dl)

-smitty remove — devices.sdd.52.rte
-smitty install — devices.sdd.53.rte (/mnt/Storage-Treiber/ESS/SDD-1.7)
-cfgmgr
—————————————

Removing SDD Host Attachment:
-lsdev -C -t 2105* -F name | xargs -n1 rmdev -dl          <–removes hdisk devices
-smitty remove — ibm2105.rte (devices.fcp.disk.ibm)
—————————————

Change adapter settings (Un/re-configure paths):

-datapath set adapter 1 offline
-datapath remove adapter 1
-rmdev -Rl fcs0
(if needed: for i in `lsdev -Cc disk | grep -i defined | awk '{ print $1 }'`; do rmdev -Rdl $i; done)
-chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail
-chdev -l fcs0 -a init_link=pt2pt
-cfgmgr; addpaths
—————————————

Reconfigure vpaths:
-datapath remove device 2 path 0
-datapath remove device 1 path 0
-datapath remove device 0 path 0
-cfgmgr; addpaths
-rmdev -Rdl vpath0
-cfgmgr;addpaths
—————————————

Can’t give pvid for a vpath:
root@aix: / # chdev -l vpath6 -a pv=yes
Method error (/usr/lib/methods/chgvpath):
0514-047 Cannot access a device.

in errpt: DEVICE LOCKED BY ANOTHER USER
RELEASE DEVICE PERSISTENT RESERVATION

# lquerypr -Vh /dev/vpath6          <–it will show the host key
# lquerypr -Vph /dev/vpath6         <–it will clear the reservation lock
# lquerypr -Vh /dev/vpath6          <–checking again will show it is OK now

Solaris: Cleaning up the Operating System device tree after removing LUNs

To clean up the device tree after you remove LUNs

  1. The removed devices show up as drive not available in the output of the format command:
    413. c3t5006016841e02f0Cd252 <drive not available>
            /pci@1d,700000/SUNW,qlc@1,1/fp@0,0/ssd@w5006016841e02f0c,fc
  2. After the LUNs are unmapped using Array management or the command line, Solaris also displays the devices as either unusable or failing.
    bash-3.00# cfgadm -al -o show_SCSI_LUN | grep -i unusable
    
    c6::5006016141e02f08,0         disk         connected    configured   unusable
    c6::5006016141e02f08,1         disk         connected    configured   unusable
    c6::5006016141e02f08,2         disk         connected    configured   unusable
    c6::5006016141e02f08,3         disk         connected    configured   unusable
    c6::5006016141e02f08,4         disk         connected    configured   unusable
    c6::5006016141e02f08,5         disk         connected    configured   unusable
    c6::5006016141e02f08,6         disk         connected    configured   unusable
    c6::5006016141e02f08,7         disk         connected    configured   unusable
    c6::5006016141e02f08,8         disk         connected    configured   unusable
    c6::5006016141e02f08,9         disk         connected    configured   unusable
    c6::5006016141e02f08,10        disk         connected    configured   unusable
    c6::5006016141e02f08,11        disk         connected    configured   unusable
    c6::5006016841e02f08,0         disk         connected    configured   unusable
    c6::5006016841e02f08,1         disk         connected    configured   unusable

    bash-3.00# cfgadm -al -o show_SCSI_LUN | grep -i failing
      c2::5006016841e02f03,71    disk  connected configured  failing
      c3::5006016841e02f0c,252   disk  connected configured  failing
  3. If the removed LUNs show up as failing, you need to force a LIP on the HBA. This operation probes the targets again, so that the device shows up as unusable. Unless the device shows up as unusable, it cannot be removed from the device tree.
    luxadm -e forcelip /devices/pci@1d,700000/SUNW,qlc@1,1/fp@0,0:devctl
  4. To remove the device from the cfgadm database, run the following commands on the HBA:
    cfgadm -c unconfigure -o unusable_SCSI_LUN c2::5006016841e02f03
    cfgadm -c unconfigure -o unusable_SCSI_LUN c3::5006016841e02f0c
  5. Repeat step 2 to verify that the LUNs have been removed.
  6. Clean up the device tree. The following command removes the /dev/rdsk… links to /devices.
    $ devfsadm -Cv
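The per-HBA unconfigure step can be derived from the cfgadm listing itself. A sketch using a few sample lines from the output shown in step 2 (the parsing is illustrative: it strips the ",LUN" suffix to get the unique controller::WWN targets, and prints rather than runs the cfgadm commands):

```shell
# Sample lines copied from the cfgadm -al -o show_SCSI_LUN output above
cfgadm_out='c6::5006016141e02f08,0         disk         connected    configured   unusable
c6::5006016141e02f08,1         disk         connected    configured   unusable
c6::5006016841e02f08,0         disk         connected    configured   unusable'

# Strip the ",<LUN>" suffix and de-duplicate to get one target per HBA path
targets=$(printf '%s\n' "$cfgadm_out" | awk '/unusable/ {sub(/,[0-9]+$/, "", $1); print $1}' | sort -u)

# Print (dry run) the unconfigure command for each target
for t in $targets; do
    echo "cfgadm -c unconfigure -o unusable_SCSI_LUN $t"
done
```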

To configure a CLARiiON array to serve as a boot device for a Solaris server, follow these steps:

Note: Check the EMC Support Matrix or E-Lab Navigator for the versions of Solaris and arrays that support using the array as a boot device.

1. Partition your LUN on your CLARiiON array so you have the slices of required sizes.

2. Run the newfs command to make a filesystem on the slices you need.

3. Make a mount point for the slice that you are going to copy to the LUN.

4. Mount the slice at the mount point.

5. Use the cd command to change your current directory to the mounted slice.

6. Run the following command to copy the slice to the array:

# ufsdump 0f - /dev/dsk/cxtxdxsx | ufsrestore rf -

Where x = controller, target, LUN, and slice where the OS currently resides.

7. Run the command to copy a boot block to the LUN:

# /usr/sbin/installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/cxtxdxsx

Where x= controller, target, LUN, and slice of array LUN.

8. Change /etc/vfstab to the new slices.

9. If ATF is to be part of this configuration, you must install it after setting up the boot partition.
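The steps above can be sketched as a dry-run shell sequence. All device names below are hypothetical examples, and the destructive commands are echoed rather than executed:

```shell
# Hypothetical device names standing in for the cxtxdxsx placeholders above
SRC=/dev/dsk/c0t0d0s0      # slice where the OS currently resides (assumed)
DST=/dev/dsk/c3t1d0s0      # CLARiiON array LUN slice (assumed)
DST_RAW=/dev/rdsk/c3t1d0s0
MNT=/clarmnt               # hypothetical mount point

# Steps 2-7, printed instead of run because they overwrite the target slice
echo "newfs $DST_RAW"
echo "mkdir -p $MNT"
echo "mount $DST $MNT"
echo "cd $MNT && ufsdump 0f - $SRC | ufsrestore rf -"
echo "/usr/sbin/installboot /usr/platform/sun4u/lib/fs/ufs/bootblk $DST_RAW"
```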

mount: 0506-322 Cannot determine log device to use for /dev/fslv

# mount /dev/fslv00 /test
mount: 0506-322 Cannot determine log device to use for /dev/fslv00 (/test).

Solution :

Just create another logical volume in the datavg volume group with a size of one (1) PP.

# mklv -t jfs2log -y <yournewloglv> datavg 1

# logform /dev/<yournewloglv>

Any filesystem that is created in the datavg volume group after this step will automatically use it.

Any filesystem that was there before can be switched to your new loglv with

# chfs -a log=<yournewloglv> <filesystemname>

If you create your loglv, make sure to place it on the edge (-e) of your disk.
These are rather basic tasks. If you feel uncomfortable accomplishing them on the command line, you might want to use SMIT. Works fine.

eg.

Edit /etc/filesystems to add /dev/fslv00 as a new filesystem:

/test:
dev = /dev/fslv00
vfs = jfs2
log = /dev/<yournewloglv>
mount = true
options = rw
account = false

Run fsck against /dev/fslv00, and try mounting it again.