
How to Upgrade IBM Power server firmware fixes through AIX or Linux without an HMC

Installing server firmware fixes through the operating system is a disruptive process. You will need to restart the system.

Notes:

  1. If your system is managed by an HMC, you must apply server firmware through the HMC. For details, see Managed system updates in Updates.
  2. If you have a System i® model running IBM® i, you must either apply server firmware through an HMC or through an IBM i logical partition. If you have a POWER6® Power Systems™ server that is managed by an HMC, you must use the HMC.
  3. By default, the server firmware is installed on the temporary side only after the existing contents of the temporary side are permanently installed on the permanent side. (This process is performed automatically when you install a server firmware fix.)
  4. If you are unable to start your AIX or Linux operating system or server, refer to Obtaining fixes through AIX or Linux when you are unable to start the system.

Perform Steps 1 through 6 to get server firmware fixes through AIX or Linux when you do not have an HMC.

Step 1. View existing firmware levels for AIX or Linux

The Advanced System Management Interface (ASMI) is the user interface to access the server firmware. You can also use the AIX or Linux operating system to view the firmware levels.
  1. Select from the following options:
    • To use the ASMI (AIX or Linux): On the ASMI Welcome pane, view the existing level of server firmware in the upper-right corner below the copyright statement, for example, EM310_006.
    • To use the AIX command prompt (you must have AIX diagnostics installed on your server), continue with step 2.
    • To use the Linux command prompt, continue with step 4.
  2. At an AIX command prompt, enter the following command:
    lsmcode
    The existing levels of server firmware are displayed. For example, you might see output similar to the following:

    DISPLAY MICROCODE LEVEL                                                   802811
    IBM,8231-E1C
    
    The current permanent system firmware image is AL740_088
    The current temporary system firmware image is AL740_088
    The system is currently booted from the temporary firmware image.
    
    Use Enter to continue.
    Notes:

    • The permanent level is also known as the backup level.
    • The temporary level is also known as the installed level.
    • The system was booted from the temporary side, so at this time, the temporary level is also the activated level.
  3. Continue with Step 2. View or download the firmware fix.
  4. To view existing levels of server firmware for Linux, you must have the following service tools installed on your server:
    • Platform Enablement Library – librtas-xxxxx.rpm
    • Service Aids – ppc64-utils-xxxxx.rpm
    • Hardware Inventory – lsvpd-xxxxx.rpm

    where xxxxx represents a specific version of the RPM file.

    Note: If you do not have the service tools on your server, refer to Obtaining service and productivity tools for Linux.
  5. After the service tools are installed on the server running Linux, enter the following at a Linux command prompt:
    lsmcode

    The existing level of server firmware is displayed. For example, you might see output similar to the following:

    Version of system firmware is: AL740_088 (t)  AL740_088 (p)  AL740_088 (t)

    The following table provides descriptions for each of the server firmware levels displayed in the output.

    Table 1. Server firmware levels
    AL740_088 (t)   The installed level. Also known as the temporary level.
    AL740_088 (p)   The backup level. Also known as the permanent level.
    AL740_088 (t)   The activated level. The level on which the server is currently running.
  6. Continue with the next step.
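
If you check firmware levels regularly, the lsmcode output shown above can be filtered with a short script like this sketch (the -c flag, which makes AIX lsmcode print without the full-screen diagnostics menu, is an assumption to verify on your level; on Linux, plain lsmcode already prints a single line):

    #!/usr/bin/ksh
    # Sketch: print only the firmware level lines reported by lsmcode.
    case $(uname) in
        AIX)   lsmcode -c | grep -i "system firmware" ;;            # AIX output format
        Linux) lsmcode | grep -i "Version of system firmware" ;;    # Linux output format
    esac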

Step 2. View or download the firmware fix

Follow this procedure to view or download the firmware fix. You can download the fix directly to your server, or you can download it to a computer with an Internet connection and create a fix CD that you apply on the server. If necessary, contact service and support to order the fix on CD. You can also download the firmware fix to a computer that has a network connection to your server and use FTP to download the firmware fix from the computer to the server.

Note: If you plan to create a CD, you will need a CD burner and software.
  1. From a computer or server with an Internet connection, go to the Fix Central Web site at http://www.ibm.com/support/fixcentral/.
  2. Choose from the following options:
    1. If you have a System p® server, select System p in the Product Group list.
    2. If you have a POWER6 Power Systems server, select Power in the Product Group list.
  3. Select Firmware and HMC in the Product list.
  4. If prompted, select POWER5 and POWER6 class in the Processor type list.
  5. Select your Machine Type-Model and click Continue.
  6. Follow the on-screen prompts to download the fix file.
  7. Select the download option that matches how you plan to get the fix to your server (downloaded directly, burned to CD, or transferred by FTP), and then continue with the next step.

Step 3. View and unpack the RPM file that contains the server firmware

If you created a CD with the RPM file, you will need to view and unpack the RPM file that contains the server firmware.
  1. Select from the following options:
    • If you created a CD with the RPM file, continue with the next step.
    • If you downloaded the RPM file to your server from the Fix Central Web site at http://www.ibm.com/support/fixcentral/ or by using the FTP method, continue with step 6.
  2. Insert the CD that contains the RPM file into the media drive on your server.
  3. To mount the CD, select from the following options (you need root user authority):
    • If you are working on an AIX system, enter the following at an AIX command prompt:
      mount /dev/cd0 /mnt
    • If you are working on a Linux system, enter one of the following commands at a Linux command prompt:
      mount -t iso9660 /dev/cdrom /mnt 

      or

      mount -t iso9660 /dev/dvdrom /mnt
  4. Select from the following options:
    • If the mount was successful, continue with step 6.
    • If the mount was unsuccessful, continue with the next step.
  5. If you received the message,
    mount: 0506-324 Cannot mount /dev/cd0 on /mnt, perform the following steps to mount the CD:

    1. Enter the command:
      /usr/sbin/mount -v 'cdrfs' -f'' -p'' -r'' /dev/cd0 /mnt

      The quotation marks following the f, p, and r are two single quotation marks with no space between them.

      Note: If you prefer, you can use the System Management Interface Tool (SMIT) to mount the CD.
    2. Continue with the next step.
  6. To view the RPM file name, enter the following command at the AIX or Linux command prompt:
    • If the RPM file is on CD, type:
      ls /mnt
    • If the RPM file is on the server, type:
      ls /tmp/fwupdate
    The name of the RPM file is displayed. For example, you might see output similar to the following:

    01EM3xx_yyy_zzz.rpm
  7. To unpack the RPM file, enter one of the following commands at the AIX or Linux command prompt:
    • If you want to unpack from a CD, enter:
      rpm -Uvh --ignoreos /mnt/filename.rpm
    • If you want to unpack from the server’s hard drive, enter:
      rpm -Uvh --ignoreos /tmp/fwupdate/filename.rpm
      where filename is the name of the RPM file that contains the server firmware. For example, 01EM3xx_yyy_zzz.rpm.

      Note: When you unpack the RPM file, the server firmware fix file is saved in the /tmp/fwupdate directory on the server’s hard drive in the following format: 01EM3xx_yyy_zzz.img.
  8. Continue with the next step.
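
Put together, a typical AIX session for this step looks like the following sketch (the device name /dev/cd0 and the file name 01EM3xx_yyy_zzz.rpm are placeholders taken from the steps above; root authority is required):

      # mount /dev/cd0 /mnt                              # mount the fix CD
      # ls /mnt                                          # confirm the RPM file name
      # rpm -Uvh --ignoreos /mnt/01EM3xx_yyy_zzz.rpm     # unpack the fix
      # ls /tmp/fwupdate                                 # the 01EM3xx_yyy_zzz.img file should now exist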

Step 4. Apply server firmware fixes through AIX or Linux to the temporary side of the service processor

Important:

  • Do not interrupt this process after you begin.
  • Do not attempt to log into the ASMI, or use any of the ASMI’s functions, while a firmware installation is in progress.
  1. Ensure you are starting the system from the temporary side of the service processor; the firmware installation will fail if the system has booted from the permanent side. To learn which side you are starting from, and how to change to the other side if necessary, refer to Working with the temporary and permanent side of the service processor.
  2. To use the update_flash command (AIX or Linux) to install the server firmware, continue with step 3.
    Note: If you have AIX installed, you can choose to use the AIX diagnostics to install the fix. However, if you plan to install the fix from CD, you will need to obtain the Microcode Updates Files & Discovery Tool CD to use the AIX diagnostics.
  3. You will need the server firmware fix file name in the next step. To view the name, enter the following at an AIX or Linux command prompt:
    Note: To perform this step, you must have root user authority.
    ls /tmp/fwupdate
    The name of the server firmware fix file is displayed. For example, you might see output similar to the following:

    01EM3xx_yyy_zzz.img
  4. To install the server firmware fix, select from the following options:
    • If you are updating AIX, enter the following at an AIX command prompt:
      cd /tmp/fwupdate
      /usr/lpp/diagnostics/bin/update_flash -f fwlevel
      where fwlevel is the specific file name of the server firmware fix, such as 01EM3xx_yyy_zzz.img.

      For example, the following transcript shows the fix being unpacked and installed on an AIX system (note that update_flash fails until the full .img file name is supplied):

      # rpm -Uvh --ignoreos 01AL740_100_042.rpm
      01AL740_100_042             ##################################################
      # cd /tmp/fwupdate
      # ls
      01AL740_100_042.img
      # /usr/lpp/diagnostics/bin/update_flash -f 01AL740_100_042
      Error in opening the file 01AL740_100_042
      #  /usr/lpp/diagnostics/bin/update_flash -f 01AL740_100_042.img
      The image is valid and would update the temporary image to AL740_100.
      The new firmware level for the permanent image would be AL740_088.
      
      The current permanent system firmware image is AL740_088.
      The current temporary system firmware image is AL740_088.
      
      ***** WARNING: Continuing will reboot the system! *****
      
      Do you wish to continue?
      Enter 1=Yes or 2=No
      1
      
      SHUTDOWN PROGRAM
      Tue May 14 10:08:53 IST 2013
      0513-044 The sshd Subsystem was requested to stop.
      
      Wait for 'Rebooting...' before stopping.
      Error reporting has stopped.
      Advanced Accounting has stopped...
      Process accounting has stopped.
      nfs_clean: Stopping NFS/NIS Daemons
      0513-004 The Subsystem or Group, nfsd, is currently inoperative.
      0513-044 The biod Subsystem was requested to stop.
      0513-044 The rpc.lockd Subsystem was requested to stop.
      0513-044 The rpc.statd Subsystem was requested to stop.
      0513-004 The Subsystem or Group, gssd, is currently inoperative.
      0513-004 The Subsystem or Group, nfsrgyd, is currently inoperative.
      0513-004 The Subsystem or Group, rpc.mountd, is currently inoperative.
      0513-004 The Subsystem or Group, ypserv, is currently inoperative.
      0513-004 The Subsystem or Group, ypbind, is currently inoperative.
      0513-004 The Subsystem or Group, yppasswdd, is currently inoperative.
      0513-004 The Subsystem or Group, ypupdated, is currently inoperative.
      0513-004 The Subsystem or Group, nis_cachemgr, is currently inoperative.
      0513-004 The Subsystem or Group, rpc.nisd, is currently inoperative.
      0513-004 The Subsystem or Group, rpc.nispasswdd, is currently inoperative.
      0513-044 The qdaemon Subsystem was requested to stop.
      0513-044 The writesrv Subsystem was requested to stop.
      0513-044 The clcomd Subsystem was requested to stop.
      0513-044 The lldpd Subsystem was requested to stop.
      0513-044 The ecpvdpd Subsystem was requested to stop.
      0513-044 The ctrmc Subsystem was requested to stop.
      0513-044 The IBM.ServiceRM Subsystem was requested to stop.
      0513-044 The IBM.MgmtDomainRM Subsystem was requested to stop.
      0513-044 The IBM.DRM Subsystem was requested to stop.
      0513-044 The cas_agent Subsystem was requested to stop.
      All processes currently running will now be killed...
      Unmounting the file systems...
      umount: 0506-349 Cannot unmount /dev/hd10opt: The requested resource is busy.
      umount: 0506-349 Cannot unmount /dev/hd1: The requested resource is busy.


    • If you are updating Linux, enter the following at a Linux command prompt:
      cd /tmp/fwupdate
      /usr/sbin/update_flash -f fwlevel

      where fwlevel is the specific file name of the server firmware fix, such as 01EM3xx_yyy_zzz.img

    During the server firmware installation process, reference codes CA2799FD and CA2799FF are alternately displayed on the control panel. After the installation is complete, the system is automatically powered off and powered on.

    Note: If you receive a message stating:
    This partition does not have the authority to perform the requested function, see Message regarding a server that was previously managed by an HMC.
  5. Continue with the next step.
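
Because update_flash reports "Error in opening the file" when the .img suffix is left off (as the transcript above shows), a small wrapper such as this sketch can save a retry; the file name is only a placeholder:

    #!/usr/bin/ksh
    # Sketch: run update_flash only if the named fix image exists in /tmp/fwupdate.
    FIX=01EM3xx_yyy_zzz.img                     # placeholder fix image name
    cd /tmp/fwupdate || exit 1
    if [ -f "$FIX" ]; then
        /usr/lpp/diagnostics/bin/update_flash -f "$FIX"   # on Linux use /usr/sbin/update_flash
    else
        echo "Fix image $FIX not found in /tmp/fwupdate"
        exit 1
    fi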

Step 5. Verify that the fix installed correctly

  1. Select from the following options:
    • To use the AIX or Linux command prompt (the operating system must be running and the diagnostics must be available), continue with the next step.
    • To use the ASMI, view the level of server firmware displayed in the upper-right corner below the copyright statement on the ASMI Welcome pane; for example, EM310_006. If the level of server firmware displayed is not the level that you installed, refer to step 4.
  2. Enter the following at a command prompt:
    lsmcode

    The existing levels of server firmware are displayed. For example, you might see output similar to the following:

    DISPLAY MICROCODE LEVEL                                                   802811
    IBM,8231-E1C
    
    The current permanent system firmware image is AL740_088
    The current temporary system firmware image is AL740_100
    The system is currently booted from the temporary firmware image.
    
    Use Enter to continue.
    
    
    Notes:

    • The permanent level is also known as the backup level.
    • The temporary level is also known as the installed level.
    • The system was booted from the temporary side, so at this time, the temporary level is also the activated level.
  3. Verify that the level of server firmware displayed is the level that you installed.
  4. If the level of server firmware displayed is not the level that you installed, perform the following steps:
    1. Retry the fix procedure. If you created a CD or DVD for this procedure, use new media.
    2. If the problem persists, contact your next level of support.

Using AIX commands to install a firmware fix permanently

You can install a firmware fix permanently by using either the flash command or the AIX diagnostic service aids.

Note: To perform this task, you must meet the following criteria:

  • You must have root user authority.
  • You must start your server from the temporary side. For details, see Working with the temporary and permanent side of the service processor.

Using the flash command

At an AIX command prompt, type the following:

/usr/lpp/diagnostics/bin/update_flash -c

The update_flash -c command might run for 10 or more minutes.
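
Because the commit must be run while the system is booted from the temporary side, a cautious sketch like the following checks the boot side first (it simply greps the lsmcode wording shown earlier; the -c flag is an assumption to verify on your level):

#!/usr/bin/ksh
# Sketch: commit the temporary firmware image only if booted from the temporary side.
if lsmcode -c | grep -q "booted from the temporary"; then
    /usr/lpp/diagnostics/bin/update_flash -c
else
    echo "Not booted from the temporary firmware image - aborting commit."
    exit 1
fi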

Using the AIX diagnostic service aids

  1. At the AIX command prompt, type diag.
  2. Initialize the terminal type, if requested.
  3. On the function selection screen, select Tasks and Service Aids.
  4. On the task selection screen, scroll to the bottom of the list of options, and select Update and Manage Flash.
  5. Select Commit the Temporary Image, and press Enter. The process might run for 10 or more minutes.

AIX LUNs (LUNz) presented to host?

LUNz is the logical unit number that an application client uses to communicate with, configure, and determine information about a SCSI storage array and the logical units attached to it. The LUN_Z value is always zero.

LUNz has been implemented on CLARiiON arrays to make arrays visible to the host OS and PowerPath when no LUNs are bound on that array. When using a direct-connect configuration where there is no Navisphere Management station to talk directly to the array over IP, the LUNz can be used as a pathway for Navisphere CLI to send Bind commands to the array.

LUNz also makes arrays visible to the host OS and PowerPath when the host's initiators have not yet logged in to the Storage Group created for the host. Without LUNz, there would be no device on the host through which Navisphere Agent could push the initiator record to the array, and this push is required before the host can log in to the Storage Group. Once the initiator push is done, the host is displayed as an available host to add to the Storage Group in Navisphere Manager (Navisphere Express).

LUNz should disappear once a LUN zero is bound, or when Storage Group access has been attained.

To conclude, LUNz devices will show up in the following two scenarios:
1. When arraycommpath is set to 1 (enabled) and the host HBAs are registered and logged in to the CLARiiON array, but no Storage Group is configured for this host.
2. When no LUN is configured as HLU0 (Host LUN 0) in the host's Storage Group.

Figure-1 Storage Group with no LUN assigned using HLU0
To resolve this LUNz issue:
1. Verify the LUNZ hdisk numbers by issuing:

# lsdev -Cc disk | grep LUNZ

 

hdisk5 Available 08-08-08     EMC CLARiiON FCP LUNZ Disk

hdisk6 Available 08-08-08     EMC CLARiiON FCP LUNZ Disk

hdisk7 Available 08-08-08     EMC CLARiiON FCP LUNZ Disk

hdisk8 Available 08-08-08     EMC CLARiiON FCP LUNZ Disk

 

2. Assign one of the LUNs to the host using HLU0 (Host LUN 0).

Figure-2 Assign LUN with HLU0
3. Remove each LUNZ device with the rmdev command:
# rmdev -dl hdisk5
# rmdev -dl hdisk6
# rmdev -dl hdisk7
# rmdev -dl hdisk8

4. Reconfigure the devices with the cfgmgr or emc_cfgmgr command.
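
The steps above can be wrapped into a short sketch that removes every LUNZ placeholder device and then rescans (emc_cfgmgr is EMC's wrapper script; use plain cfgmgr if it is not installed):

#!/usr/bin/ksh
# Sketch: delete all CLARiiON LUNZ hdisks, then rediscover devices.
for d in $(lsdev -Cc disk | grep LUNZ | awk '{print $1}'); do
    rmdev -dl "$d"      # remove the placeholder LUNZ device
done
cfgmgr                  # or emc_cfgmgr, if available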

AIX Memory / RAM performance monitoring

Memory

Memory Leak: Caused by a program that repeatedly allocates memory without freeing it.

When a process exits, its working storage is freed up immediately and its associated memory frames are put back on the free list.
However, any files the process may have opened can stay in memory.

AIX tries to use the maximum amount of free memory for file caching.

A high level of file system cache usually means either that this is simply the way the application runs (you have to decide whether this is expected by understanding the workload), or that AIX cannot find anything else to do with the memory and so caches file data to save disk I/O and CPU cycles – this is normal and a good idea.

Some notes regarding memory leak:

When a process gets busy, it calls malloc() (memory allocation) to get more memory, so its memory usage grows. Memory requests are satisfied by allocating portions from a large pool of memory called the heap. When the process goes idle, it calls free(), but that does not actually release the memory from the process; it just returns the memory to the heap area.

AIX keeps track of the pages in the heap area that were used before but are free now. New malloc() requests are served from the heap first; only when the heap shrinks to a very small size does the process request new memory pages from the system. When heap pages are not used for a long time, AIX will page them out to disk.

The RSS (resident set size) is the actual memory occupied by the process in RAM (RSS can include active pages as well as other pages in the heap). RSS pages will be paged out only if memory is getting short. If there is free memory, AIX will not page them out, because it may be useful to keep them in RAM.

So it usually turns out that there is no memory leak at all, just normal memory usage behaviour.

————————

memory:
topas -P    This does not show how much of the application is paged out, but how much of the application memory is backed by paging space.
(Working-segment pages in memory are backed by paging space up to the actual in-memory size of the process.)
svmon -Pt15 | perl -e 'while(<>){print if($.==2||$&&&!$s++);$.=0 if(/^-+$/)}'        top 15 processes using the most memory
ps aux | head -1 ; ps aux | sort -rn +3 | head -20                                   top memory processes (the above is better)
ps -ef | grep -c LOCAL=NO        shows the number of oracle client connections (each connection takes up memory, so if it is high then…)

paging:
svmon -Pg -t 1 |grep Pid ; svmon -Pg -t 10 |grep "N"                                 top 10 processes using the most paging space
svmon -P -O sortseg=pgsp                                                             shows paging space usage of processes

————————

# ps gv | head -n 1; ps gv | egrep -v "RSS" | sort +6b -7 -n -r
PID    TTY STAT  TIME PGIN  SIZE   RSS   LIM  TSIZ   TRS %CPU %MEM COMMAND
393428      - A    10:23 2070 54752 54840 32768    69    88  0.0  5.0 /var/opt
364774      - A     0:08  579 28888 28940 32768    32    52  0.0  3.0 [cimserve]
397542      - A     0:18  472  6468  7212    xx   526   744  0.0  1.0 /usr/sbi
344246      - A     0:02   44  7132  7204 32768    50    72  0.0  1.0 /opt/ibm

RSS:    The amount of RAM used for the text and data segments per process. PID 393428 is using 54840k. (RSS: resident set size)
%MEM:    RSS as a percentage of total RAM. Watch for processes that consume 40-70 percent of memory.
TRS:    The amount of RAM used for the text segment of a process in kilobytes.
SIZE:    The actual amount of paging space (virtual memory size) allocated for this process (text and data).

How big is the process in memory? That is the RSS size.
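
A rough total of resident memory across all processes can be pulled from the same ps gv output; it double-counts shared text segments, so treat it only as an upper bound (a sketch):

ps gv | awk 'NR>1 {sum+=$7} END {printf "Approx. total RSS: %d KB\n", sum}'          sums the RSS column (7th field), skipping the header line
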
————————————–

Checking memory usage with nmon:

nmon -> t (top processes) -> 4 (order in process size)

PID       %CPU     Size      Res     Res      Res     Char    RAM      Paging         Command
Used       KB      Set     Text     Data     I/O     Use   io   other repage
16580722     0.0   226280   322004   280640    41364        0    5%      0      0      0 oracle
9371840      0.0   204324   300904   280640    20264        0    5%      0      0      0 oracle
10551416     0.0   198988   305656   280640    25016        0    5%      0      0      0 oracle
8650824      0.0   198756   305428   280640    24788        0    5%      0      0      0 oracle

Size KB: program on disk size
ResSize: Resident Set Size – how big it is in memory (excluding the pages still in the file system (like code) and some parts on paging disks)
ResText: code pages of the Resident Set
ResData: data and stack pages of the Resident Set

————————————–

regarding ORACLE:
ps -ef | grep -c LOCAL=NO

This will show how many client connections we have. Each connection takes up some memory; sometimes, when there are memory problems, too many logged-in users are causing the trouble.
————————————–

shared memory segments:

root@aix2: /root #  ipcs -bm
IPC status from /dev/mem as of Sat Sep 17 10:04:28 CDT 2011
T        ID     KEY        MODE       OWNER    GROUP     SEGSZ
Shared Memory:
m   1048576 0x010060f0 --rw-rw-rw-     root   system       980
m   1048577 0xffffffff D-rw-rw-rw-     root   system       944
m   4194306 0x78000238 --rw-rw-rw-     root   system  16777216
m   1048579 0x010060f2 --rw-rw-rw-     root   system       976
m        12 0x0c6629c9 --rw-r-----     root   system   1663028
m        13 0x31000002 --rw-rw-rw-     root   system    131164
m 425721870 0x81fc461c --rw-r-----   oracle oinstall 130027520
m        15 0x010060fa --rw-rw-rw-     root   system      1010
m   2097168 0x849c6158 --rw-rw----   oracle oinstall 18253647872

This shows our shared memory segments, who owns them, and their size (in bytes). The size shown is the maximum size the segment can grow to; it does not mean that it is all allocated. The exception is Oracle (and DB2):
the Oracle line shows the SGA, and this memory really is allocated for Oracle (18 GB in this case).

————————————–

IBM script for checking what is causing paging space activity:
(it runs until the po value exceeds 50, then it saves the process list and svmon output and exits)

#!/usr/bin/ksh
/usr/bin/renice -n -20 -p $$
while [ true ]
do
vmstat -I 1 1 | tail -1 | awk '{print $9}' | read po
if [[ $po -gt 50 ]]
then
ps -ef > ps.out &
svmon -G > svmon.G &
exit 0
fi
done

My script for monitoring memory, paging activity:

#!/usr/bin/ksh
/usr/bin/renice -n -20 -p $$

while [ true ]; do
echo `date` "-->" `svmon -G | head -2 | tail -1` "-->" `vmstat -v | grep numperm` >> svmon.out &
echo `date` "-->" `svmon -G | head -3 | tail -1` >> paging.out &
echo `vmstat -Iwt 1 1 | tail -1` >> vmstat.out &
sleep 60
done

AIX SDD (subsystem device driver)

SDD is designed to support the multipath configuration in the ESS.
It is the software used to balance ESS I/O traffic across all adapters and it provides multiple paths to the data from the host.
When using SDD, cfgmgr is run 3 times (cfgmgr -l fcs0, cfgmgr -l fcs1, then cfgmgr; the third run builds the vpaths).

3 policies exist for load balancing:
-default: selecting the path with the least number of current I/O operations
-round robin: choosing the path that was not used for the last operation (alternating if 2 paths exist)
-failover: all I/O is sent over the most preferred path until a failure is detected.

SDDSRV:
SDD has a server daemon running in the background: lssrc/stopsrc/startsrc -s sddsrv
If sddsrv is stopped, the feature that automatically recovers failed paths is disabled.

vpath:
A logical disk defined in ESS and recognized by AIX. AIX uses vpath instead of hdisk as a unit of physical storage.

root@aix: /dev # lsattr -El vpath0
active_hdisk  hdisk20/00527461/fscsi1          Active hdisk                 False
active_hdisk  hdisk4/00527461/fscsi0           Active hdisk                 False
policy        df                               Scheduling Policy            True    <-path selection policy
pvid          0056db9a77baebb90000000000000000 Physical volume identifier   False
qdepth_enable yes                              Queue Depth Control          True
serial_number 00527461                         LUN serial number            False
unique_id     1D080052746107210580003IBMfcp    Device Unique Identification False

policy:
fo: failover only – all I/O operations sent to the same paths until the path fails
lb: load balancing – the path is chosen by the number of I/O operations currently in process
lbs: load balancing sequential – same as before with optimization for sequential I/O
rr: round robin – the path is chosen at random from the paths that were not used for the last operation
rrs: round robin sequential – same as before with optimization for sequential I/O
df: default – the default policy is load balancing

datapath set device N policy        change the SDD path selection policy dynamically

DPO (Data Path Optimizer):
it is a pseudo device (lsdev | grep dpo), which is the pseudo parent of the vpaths

root@: / # lsattr -El dpo
SDD_maxlun      1200 Maximum LUNS allowed for SDD                  False
persistent_resv yes  Subsystem Supports Persistent Reserve Command False

— ——————————

software requirements for SDD:
-host attachment for SDD (ibm2105.rte, devices.fcp.disk.ibm.rte) – this is the ODM extension
The host attachments for SDD add 2105 (ESS)/2145 (SVC)/1750 (DS6000)/2107 (DS8000) device information to allow AIX to properly configure 2105/2145/1750/2107 hdisks.
The 2105/2145/1750/2107 device information allows AIX to:
– Identify the hdisk(s) as a 2105/2145/1750/2107 hdisk.
– Set default hdisk attributes such as queue_depth and timeout values.
– Indicate to the configure method to configure 2105/2145/1750/2107 hdisk as non-MPIO-capable devices

ibm2105.rte: for 2105 devices
devices.fcp.disk.ibm.rte: for DS8000, DS6000 and SAN Volume Controller

-devices.sdd.53.rte – this is the driver (sdd)
it provides the multipath configuration environment support

——————————–

addpaths                  dynamically adds more paths to SDD vpath devices (before addpaths, run cfgmgr)
(running cfgmgr alone does not add new paths to SDD vpath devices)
cfgdpo                    configures dpo
cfgvpath                  configures vpaths
cfallvpath                configures dpo+vpaths
dpovgfix <vgname>         fixes a vg that has mixed vpath and hdisk physical volumes
extendvg4vp               this can be used instead of extendvg (it will move the pvid from the hdisk to the vpath)

datapath query version    shows sdd version
datapath query essmap     shows vpaths and their hdisks in a list
datapath query portmap    shows vpaths and ports
—————————————
datapath query adapter    information about the adapters
State:
Normal           adapter is in use.
Degraded         one or more paths are not functioning.
Failed           the adapter is no longer being used by SDD.

datapath query device     information about the devices (for one device: datapath query device 0)
State:
Open             path is in use
Close            path is not being used
Failed           due to errors path has been removed from service
Close_Failed     path was detected to be broken and failed to open when the device was opened
Invalid          the path failed to open, but the MPIO device is opened
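
A quick health sweep can be scripted from this output by flagging any path whose state is not Open (a sketch; it only greps the datapath output described above):

#!/usr/bin/ksh
# Sketch: warn if any SDD path state is Failed, Close, Close_Failed or Invalid.
if datapath query device | egrep -qi "failed|close|invalid"; then
    echo "WARNING: one or more SDD paths are not Open"
else
    echo "All SDD paths appear to be Open"
fi
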
—————————————
datapath remove device X path Y   removes path# Y from device# X (datapath query device, will show X and Y)
datapath set device N policy      change the SDD path selection policy dynamically
datapath set adapter 1 offline

lsvpcfg                           list vpaths and their hdisks
lsvp -a                           displays vpath, vg and disk information

lquerypr                          reads and releases the persistent reservation key
lquerypr -h/dev/vpath30           queries the persistent reservation on the device (0: if it is reserved by the current host, 1: if by another host)
lquerypr -vh/dev/vpath30          query and display the persistent reservation on a device
lquerypr -rh/dev/vpath30          release the persistent reservation if the device is reserved by the current host
(0: if the command succeeds or the device is not reserved, 2: if the command fails)
lquerypr -ch/dev/vpath30          reset any persistent reserve and clear all reservation key registrations
lquerypr -ph/dev/vpath30          remove the persistent reservation if the device is reserved by another host
—————————————

Removing SDD (after install a new one):
-umount fs on ESS
-varyoffvg
(if HACMP and the RG is online on the other host: vp2hd <vgname>)   <--converts vpaths to hdisks
-rmdev -dl dpo -R                                         <--removes all the SDD vpath devices
-stopsrc -s sddsrv                                        <--stops the SDD server
-if needed: rmdev -dl hdiskX                              <--removes hdisks
(lsdev -C -t 2105* -F name | xargs -n1 rmdev -dl)

-smitty remove — devices.sdd.52.rte
-smitty install — devices.sdd.53.rte (/mnt/Storage-Treiber/ESS/SDD-1.7)
-cfgmgr
—————————————

Removing SDD Host Attachment:
-lsdev -C -t 2105* -F name | xargs -n1 rmdev -dl          <--removes hdisk devices
-smitty remove — ibm2105.rte (devices.fcp.disk.ibm)
—————————————

Change adapter settings (Un/re-configure paths):

-datapath set adapter 1 offline
-datapath remove adapter 1
-rmdev -Rl fcs0
(if needed: for i in `lsdev -Cc disk | grep -i defined | awk '{ print $1 }'`; do rmdev -Rdl $i; done)
-chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail
-chdev -l fcs0 -a init_link=pt2pt
-cfgmgr; addpaths
—————————————

Reconfigure vpaths:
-datapath remove device 2 path 0
-datapath remove device 1 path 0
-datapath remove device 0 path 0
-cfgmgr; addpaths
-rmdev -Rdl vpath0
-cfgmgr;addpaths
—————————————

Can’t give pvid for a vpath:
root@aix: / # chdev -l vpath6 -a pv=yes
Method error (/usr/lib/methods/chgvpath):
0514-047 Cannot access a device.

in errpt:DEVICE LOCKED BY ANOTHER USER
RELEASE DEVICE PERSISTENT RESERVATION

# lquerypr -Vh /dev/vpath6          <--it will show the host key
# lquerypr -Vph /dev/vpath6         <--it will clear the reservation lock
# lquerypr -Vh /dev/vpath6          <--checking again will show it is OK now

How to Encrypt a File System in AIX?

Encrypting Filesystem on AIX 6.1.

EFS offers 2 modes of operation:

Root Admin mode
This is the default mode. Root can reset user and group keystore passwords.

Root Guard mode
Root does not have access to user’s encrypted files and cannot change their passwords.

Note: NFS exports of EFS filesystems are not supported.

1. Prerequisites:
RBAC has to be enabled. It should be by default on AIX 6.1; if not, use chdev to enable it (see the example below).

# lsattr -El sys0 | grep RBAC
enhanced_RBAC   true         Enhanced RBAC Mode        True
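
If the attribute shows false, enhanced RBAC can be turned on with chdev, for example (a sketch; the mode change only takes effect after a reboot):

# chdev -l sys0 -a enhanced_RBAC=true
# shutdown -Fr          (reboot so the RBAC mode change takes effect)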

CryptoLite needs to be installed, verify using below command

bash-3.2# lslpp -l | grep  CryptoLite
  clic.rte.kernext           4.7.0.1  COMMITTED  CryptoLite for C Kernel
  clic.rte.lib               4.7.0.1  COMMITTED  CryptoLite for C Library
  clic.rte.kernext           4.7.0.1  COMMITTED  CryptoLite for C Kernel

2. EFS Commands:

efsenable – Enables EFS on a given system. This is run only once
efskeymgr – Encryption Key Management tool
efsmgr – File encryption and decryption

3. Setup:
To enable EFS on the system use:

# efsenable -a
Enter password to protect your initial keystore:
Enter the same password again:

If your EFS password is identical to your login password, the EFS kernel extension will be loaded into the kernel automatically, so
you will be able to access the encrypted files without having to provide a password.
Otherwise `efskeymgr -o ksh` has to be executed in order to load the keys, as shown below.
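
For example, to open a Korn shell with your EFS keys loaded (you will be prompted for the keystore password if it differs from your login password):

# efskeymgr -o ksh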

In order to have the ability to encrypt files, the filesystem that will hold these files needs to be EFS enabled (efs=yes) and Extended Attribute V2 has to be activated.

This can be verified using lsfs -q

# lsfs -q /test
Name            Nodename   Mount Pt               VFS   Size    Options    Auto Accounting
/dev/fslv12     --         /test               jfs2  262144  rw         yes  no
  (lv size: 262144, fs size: 262144, block size: 4096, sparse files: yes, inline log: no, inline log size: 0, EAformat: v1, Quota: no, DMAPI: no, VIX: yes, EFS: no, ISNAPSHOT: no, MAXEXT: 0, MountGuard: no)

# chfs -a efs=yes /test

# lsfs -q /test
Name            Nodename   Mount Pt               VFS   Size    Options    Auto Accounting
/dev/fslv12     --         /test               jfs2  262144  rw         yes  no
  (lv size: 262144, fs size: 262144, block size: 4096, sparse files: yes, inline log: no, inline log size: 0, EAformat: v2, Quota: no, DMAPI: no, VIX: yes, EFS: yes, ISNAPSHOT: no, MAXEXT: 0, MountGuard: no)

Now we will have a look at the keys associated with the current shell.

# efskeymgr -V
List of keys loaded in the current process:
 Key #0:
                           Kind ..................... User key
                           Id   (uid / gid) ......... 0
                           Type ..................... Private key
                           Algorithm ................ RSA_1024
                           Validity ................. Key is valid
                           Fingerprint .............. s6295ea1:be7cae83:82g30ab8:a02379a0
 Key #1:
                           Kind ..................... Group key
                           Id   (uid / gid) ......... 7
                           Type ..................... Private key
                           Algorithm ................ RSA_1024
                           Validity ................. Key is valid
                           Fingerprint .............. 12928ecb:353f4268:e19078be:268c7d56:18928ecb
 Key #2:
                           Kind ..................... Admin key
                           Id   (uid / gid) ......... 0
                           Type ..................... Private key
                           Algorithm ................ RSA_1024
                           Validity ................. Key is valid
                           Fingerprint .............. 940201f9:89h618ac:2e555ac4:60fdb6b5:268c7d56

4. Encrypt file

Now we will create a file, try to encrypt it, have a problem with umask and finally encrypt the file.

# echo "I like black tee with milk." > secret.txt
# ls -U
total 8
-rw-r------    1 root     system           30 May 8  11:18 secret.txt
drwxr-xr-x-    2 root     system          256 Apr 30 14:10 tmp

        Encrypt file
          |
# efsmgr -e secret.txt
./.efs.LZacya: Security authentication is denied.

# umask 077

# efsmgr -e secret.txt
# ls -U
total 16
drwxr-xr-x-    2 root     system          256 May 5 12:13 lost+found
-rw-r-----e    1 root     system           30 May 8 11:18 secret.txt
          |
          Indicates that this file is encrypted

Display file encryption information:

# efsmgr -l secret.txt
EFS File information:
 Algorithm: AES_128_CBC
List of keys that can open the file:
 Key #1:
  Algorithm       : RSA_1024
  Who             : uid 0
  Key fingerprint : 00f06152:be7cae83:a02379a0:82e30ab8:f6295ea1

Now I set the file permissions to 644 and try to read the file as another user.

# chmod 644 secret.txt
# ls -la
-rw-r--r--    1 root     system          145 May 8 11:19 secret.txt

user1 # file secret.txt
secret.txt: 0653-902 Cannot open the specified file for reading.
user1 # cat secret.txt
cat: 0652-050 Cannot open secret.txt.

As root we will list the inode number of the file, get the block pointer, and read directly from the filesystem using fsdb to see if the file is stored encrypted.

      Display inode no.
      |
# ls -iU
total 32

    5 -rw-r--r--e    1 root     system          145 May 8 11:19 secret.txt

# istat 5 /dev/fslv12
Inode 5 on device 10/27 File
Protection: rw-r--r--
Owner: 0(root)          Group: 0(system)
Link count:   1         Length 145 bytes

Last updated:   Tue May 8 11:18:23 GMT+02:00 2012
Last modified:  Tue May 8 11:18:52 GMT+02:00 2012
Last accessed:  Tue May 8 11:18:52 GMT+02:00 2012

Block pointers (hexadecimal):
29
# fsdb /dev/fslv12
Filesystem /dev/fslv12 is mounted.  Modification is not permitted.

File System:                    /dev/fslv12

File System Size:               261728  (512 byte blocks)
Aggregate Block Size:           4096
Allocation Group Size:          8192    (aggregate blocks)

> display 0x29
Block: 41     Real Address 0x29000
00000000:  119CB74E 637C6FE0 C0BF2DCD 36B775BB   |...Nc|o...-.6.u.|
00000010:  569B5A6C 43476ED3 F4BFE938 7C662A3B   |V.ZlCGn....8|f*;|
00000020:  B5D89C51 FA2BE7B6 CEAF2D3E 555EAA06   |...Q.+....->U^..|
00000030:  4FF23413 B11D1170 982690B3 5F1BCA9A   |O.4....p.&.._...|
00000040:  4AD3CEA5 A3CBFAD9 C730EE00 9BD1F409   |J........0......|
00000050:  71203B85 A51320C6 04A97DA4 43002DA7   |q ;... ...}.C.-.|
00000060:  994CC67B A1AC31DF 2C8201AD 3E5B50F7   |.L.{..1.,...>[P.|
00000070:  6BA7B01D EC5CB918 17E13F46 2935FA98   |k........?F)5..|
00000080:  718DF155 D6E69A41 EF592B60 EA5F7B24   |q..U...A.Y+`._{$|
00000090:  32521FE2 7AD8EC61 1A94413D A8338A26   |2R..z..a..A=.3.&|
000000a0:  62E4A319 D6251A66 F19D4739 2FC7E83A   |b....%.f..G9/..:|
000000b0:  DE0F878A 1F95AB89 5C7F3520 C65B7896   |.........5 .[x.|
000000c0:  915A7655 EC269DFF 68E2B08A 871114A9   |.ZvU.&..h.......|
000000d0:  E30B195F 280F7DCD 4F8BE094 4B5603D8   |..._(.}.O...KV..|
000000e0:  962303B0 D957A2A5 24A2A3A5 6260EA5E   |.#...W..$...b`.^|
000000f0:  A4C62B7D FB9B1841 893D253F 72E61065   |..+}...A.=%?r..e|
-hit enter for more-
00000100:  01A150FD AD54677D A856E9B1 320257E1   |..P..Tg}.V..2.W.|
00000110:  5F023AA3 0191E0D6 4B64583B D9F2A4C7   |_.:.....KdX;....|
00000120:  F988937A E0117EB2 26E61976 E4860D7D   |...z..~.&..v...}|
00000130:  0C724A4E 50616226 BDE06FEB 10A19564   |.rJNPab&..o....d|
00000140:  17C90BB7 774338B3 8525ED90 5EADFD8B   |....wC8..%..^...|
00000150:  636FC1AF D46C2E64 6AC37082 3B0168BE   |co...l.dj.p.;.h.|
00000160:  24C0CD2E D8587254 F6DBC1BA 93BE6AD6   |$....XrT......j.|
00000170:  E89EEFF9 08000B07 E3827C10 AE0FD7DB   |..........|.....|
00000180:  162D0E6D EF94D85A 3F09CD85 A19A31FF   |.-.m...Z?.....1.|
00000190:  49E13BFC 5328F670 E0B50878 942CC4BB   |I.;.S(.p...x.,..|
000001a0:  BF1D6C4F 9DA72F3D 8DC90691 328A7053   |..lO../=....2.pS|
000001b0:  99C31EEB 1CD2208A CBF609C1 4DB86819   |...... .....M.h.|
000001c0:  E2746288 5E152ECA 0E2BD9DF D1D1D210   |.tb.^....+......|
000001d0:  7ADDF0EC 522E93E2 CAA0A36F B3CBFB05   |z...R......o....|
000001e0:  4EA56F3C ECBA1A0C AA132269 2024E065   |N.o<......"i $.e|
000001f0:  00BC51B0 88BBCD8A 9C644F66 6A16DBC8   |..Q......dOfj...|

Above we see that the file on the disk is encrypted.

5. Decrypting a file

Decrypt file
          |
# efsmgr -d secret.txt
# ls -U
total 24

-rw-r--r---    1 root     system          145 May 8 12:23 secret.txt

6. Encryption Inheritance

If you enable Encryption Inheritance on a directory all newly created files in that directory will be automatically encrypted.

To enable Encryption inheritance use:

# efsmgr -E /archive

# ls -U / | grep archive
drwxr-xr-xe    3 root     system          256 Jul 17 12:09 archive

# touch next.txt

# ls -U
total 32

-rw-------e    1 root     system            0 May 8 11:10 next.txt
-rw-r--r---    1 root     system          145 May 8 12:25 secret.txt

7. Grant access to another user
Say we are user1 and want to have a look at who has EFS access to the file.

user1 $ efsmgr -l secret.txt
EFS File information:
 Algorithm: AES_128_CBC
List of keys that can open the file:
 Key #1:
  Algorithm       : RSA_1024
  Who             : uid 0
  Key fingerprint : 00f06152:be7cae83:a02379a0:82e30ab8:f6295ea1

To grant access to a user use:

Add access to the specified file to a user or group(u/g)
          |
# efsmgr -a secret.txt -u user1
                        |
                        Add user to EFS access list

user1 $ cat secret.txt
I like black tee with milk.

Reference Redbooks:

AIX 6.1 Differences Guide, SG24-7559-00, Page 40
AIX V6 Advanced Security Features, SG24-7430-00, Page 59

How to Increase the Paging Space Logical Volume Size in AIX?

To show the current paging space volume, its size along with other information use the ‘lsps’ command:

# lsps -a
Page Space      Physical Volume   Volume Group Size %Used Active  Auto  Type Chksum
hd6             hdisk1            rootvg         512MB     2   yes   yes    lv     0

The paging space may be increased on the fly using the 'chps' command. In order to increase this paging space to 16 GB, we need to find out the PP size, since chps increases the paging space by the specified number of PPs.

# lslv hd6
LOGICAL VOLUME:     hd6                    VOLUME GROUP:   rootvg
LV IDENTIFIER:      0005e4b80000d7000000013e5a69cf95.2 PERMISSION:     read/write
VG STATE:           active/complete        LV STATE:       opened/syncd
TYPE:               paging                 WRITE VERIFY:   off
MAX LPs:            512                    PP SIZE:        256 megabyte(s)
COPIES:             1                      SCHED POLICY:   parallel
LPs:                2                      PPs:            2
STALE PPs:          0                      BB POLICY:      non-relocatable
INTER-POLICY:       minimum                RELOCATABLE:    yes
INTRA-POLICY:       middle                 UPPER BOUND:    32
MOUNT POINT:        N/A                    LABEL:          None
MIRROR WRITE CONSISTENCY: off
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?:     NO
INFINITE RETRY:     no
#

On this system the PP size is 256 MB, so in order to grow the paging space from 512 MB (2 PPs) to 16 GB (64 PPs) we need to add 62 PPs:

# chps -s 62 hd6
# lsps -a
Page Space      Physical Volume   Volume Group Size %Used Active  Auto  Type Chksum
hd6             hdisk1            rootvg       16384MB     1   yes   yes    lv     0
#
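
The same PP arithmetic can be scripted instead of worked out by hand (a sketch; it parses the lslv and lsps output shown above and assumes the PP size is reported in megabytes):

#!/usr/bin/ksh
# Sketch: how many PPs must be added to hd6 to reach a target paging space size?
TARGET_MB=16384                                                   # desired size in MB
PPSIZE=$(lslv hd6 | awk '/PP SIZE/ {print $6}')                   # PP size in MB, e.g. 256
CURRENT_MB=$(lsps -a | awk '/^hd6 / {sub("MB","",$4); print $4}') # current size in MB
echo "chps -s $(( (TARGET_MB - CURRENT_MB) / PPSIZE )) hd6"       # prints: chps -s 62 hd6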

Tuning AIX for Oracle Database

  • Memory and Paging
  • Disk I/O Issues
  • CPU Scheduling and Process Priorities
  • Oracle Real Application Clusters Information
  • Setting the AIXTHREAD_SCOPE Environment Variable

Memory and Paging

Memory contention occurs when processes require more memory than is available. To cope with the shortage, the system pages programs and data between memory and disks.

Controlling Buffer-Cache Paging Activity

Excessive paging activity decreases performance substantially. This can become a problem with database files created on journaled file systems (JFS and JFS2). In this situation, a large number of SGA data buffers might also have analogous file system buffers containing the most frequently referenced data. The behavior of the AIX file buffer cache manager can have a significant impact on performance. It can cause an I/O bottleneck, resulting in lower overall system throughput.

On AIX, tuning buffer-cache paging activity is possible but you must do it carefully and infrequently. Use the /usr/samples/kernel/vmtune command to tune the following AIX system parameters:

Parameter Description
minfree The minimum free-list size. If the free-list space in the buffer falls below this size, the system uses page stealing to replenish the free list.
maxfree The maximum free-list size. If the free-list space in the buffer exceeds this size, the system stops using page stealing to replenish the free list.
minperm The minimum number of permanent buffer pages for file I/O.
maxperm The maximum number of permanent buffer pages for file I/O.

 

See Also:

For more information about AIX system parameters, see the AIX 5L Performance Management Guide.

Tuning the AIX File Buffer Cache

The purpose of the AIX file buffer cache is to reduce disk access frequency when journaled file systems are used. If this cache is too small, disk usage increases and potentially saturates one or more disks. If the cache is too large, memory is wasted.

See Also:

For more information about the implications of increasing the AIX file buffer cache, see “Controlling Buffer-Cache Paging Activity”.

You can configure the AIX file buffer cache by adjusting the minperm and maxperm parameters. In general, if the buffer hit ratio is low (less than 90 percent), as determined by the sar -b command, increasing the minperm parameter value might help. If maintaining a high buffer hit ratio is not critical, decreasing the minperm parameter value increases the physical memory available. Refer to the AIX documentation for more information about increasing the size of the AIX file buffer cache.

The performance gain cannot be quantified easily, because it depends on the degree of multiprogramming and the I/O characteristics of the workload.

Tuning the minperm and maxperm Parameters

AIX provides a mechanism for you to loosely control the ratio of page frames used for files rather than those used for computational (working or program text) segments by adjusting the minperm and maxperm values according to the following guidelines:

  • If the percentage of real memory occupied by file pages falls below the minperm value, the virtual memory manager (VMM) page-replacement algorithm steals both file and computational pages, regardless of repage rates.
  • If the percentage of real memory occupied by file pages rises above the maxperm value, the virtual memory manager page-replacement algorithm steals both file and computational pages.
  • If the percentage of real memory occupied by file pages is between the minperm and maxperm parameter values, the virtual memory manager normally steals only file pages, but if the repaging rate for file pages is higher than the repaging rate for computational pages, the computational pages are stolen as well.

Use the following algorithm to calculate the default values:

  • minperm (in pages) = ((number of page frames)-1024) * 0.2
  • maxperm (in pages) = ((number of page frames)-1024) * 0.8
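
For example, on a system with 1,048,576 page frames (4 GB of real memory with a 4 KB page size), the defaults work out to minperm = (1,048,576 - 1,024) * 0.2 ≈ 209,510 pages and maxperm = (1,048,576 - 1,024) * 0.8 ≈ 838,042 pages.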

Use the following command to change the value of the minperm parameter to 5 percent of the total number of page frames, and the value of the maxperm parameter to 20 percent of the total number of page frames:

# /usr/samples/kernel/vmtune -p 5 -P 20

The default values are 20 percent and 80 percent, respectively.

To optimize for quick response when opening new database connections, adjust the minfree parameter to maintain enough free pages in the system to load the application into memory without adding additional pages to the free list. To determine the real memory size (resident set size, working set) of a process, use the following command:

$ ps v process_id

Set the minfree parameter to this value or to 8 frames, whichever is larger.

If the database files are on raw devices, or if you are using Direct I/O, you can set the minperm and maxperm parameters to low values, for example 5 percent and 20 percent, respectively. This is because the AIX file buffer cache is not used either for raw devices or for Direct I/O. The memory might be better used for other purposes, such as for the Oracle System Global Area.

Allocating Sufficient Paging Space (Swap Space)

Inadequate paging space (swap space) usually causes the system to hang or suffer abnormally slow response times. On AIX, you can dynamically add paging space on raw disk partitions. The amount of paging space you should configure depends on the amount of physical memory present and the paging space requirements of your applications. Use the lsps command to monitor paging space use and the vmstat command to monitor system paging activities. To increase the paging space, use the smit pgsp command.

On platforms where paging space is pre-allocated, Oracle recommends that you set the paging space to a value larger than the amount of RAM. But on AIX paging space is not allocated until needed. The system uses swap space only if it runs out of real memory. If the memory is sized correctly, there is no paging and the page space can be small. Workloads where the demand for pages does not fluctuate significantly perform well with a small paging space. Workloads likely to have peak periods of increased paging require enough paging space to handle the peak number of pages.

As a general rule, an initial setting for the paging space is half the size of RAM plus 4 GB, with an upper limit of 32 GB. Monitor the paging space use with the lsps -a command, and increase or decrease the paging space size accordingly. The metric %Used in the output of lsps -a is typically less than 25% on a healthy system. A properly sized deployment should require very little paging space and an excessive amount of swapping is an indication that the RAM on the system might be undersized.
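
As a worked example of this rule, a server with 24 GB of RAM would start with 24/2 + 4 = 16 GB of paging space, while a server with 128 GB of RAM would be capped at the 32 GB upper limit rather than the 68 GB the formula would otherwise suggest.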

Caution:

Do not undersize the paging space. If you do, the system can terminate active processes when it runs out of space. However, over-sizing the paging space has little or no negative impact.

Controlling Paging

Constant and excessive paging indicates that the real memory is over-committed. In general, you should:

  • Avoid constant paging unless the system is equipped with very fast expanded storage that makes paging between memory and expanded storage much faster than Oracle can read and write data between the SGA and disks.
  • Allocate limited memory resource to where it is most beneficial to system performance. It is sometimes a recursive process of balancing the memory resource requirements and trade-offs.
  • If memory is not adequate, build a prioritized list of memory-requiring processes and elements of the system. Assign memory to where the performance gains are the greatest. A prioritized list might look like:
  1. OS and RDBMS kernels
  2. User and application processes
  3. Redo log buffer
  4. PGAs and shared pool
  5. Database block buffer caches

For instance, if you query Oracle dynamic performance tables and views and find that both the shared pool and database buffer cache require more memory, assigning the limited spare memory to the shared pool might be more beneficial than assigning it to the database block buffer caches.

The following AIX commands provide paging status and statistics:

  • vmstat -s
  • vmstat interval [repeats]
  • sar -r interval [repeats]

Setting the Database Block Size

You can configure the Oracle database block size for better I/O throughput. On AIX, you can set the value of the DB_BLOCK_SIZE initialization parameter to between 2 KB and 32 KB, with a default of 4 KB. If the Oracle database is installed on a journaled file system, then the block size should be a multiple of the file system block size (4 KB on JFS, 16 K to 1 MB on GPFS). For databases on raw partitions, the Oracle database block size is a multiple of the operating system physical block size (512 bytes on AIX).

Oracle recommends smaller Oracle database block sizes (2 KB or 4 KB) for online transaction processing (OLTP) or mixed workload environments and larger block sizes (8 KB, 16 KB, or 32 KB) for decision support system (DSS) workload environments.

Tuning the Log Archive Buffers

By increasing the LOG_BUFFER size you might be able to improve the speed of archiving the database, particularly if transactions are long or numerous. Monitor the log file I/O activity and system throughput to determine the optimum LOG_BUFFER size. Tune the LOG_BUFFER parameter carefully to ensure that the overall performance of normal database activity does not degrade.

Note:

The LOG_ARCHIVE_BUFFER_SIZE parameter was obsoleted with Oracle8i.

I/O Buffers and SQL*Loader

For high-speed data loading, such as using the SQL*Loader direct path option in addition to loading data in parallel, the CPU spends most of its time waiting for I/O to complete. By increasing the number of buffers, you can usually push the CPU usage harder, thereby increasing overall throughput.

The number of buffers (set by the SQL*Loader BUFFERS parameter) you choose depends on the amount of available memory and how hard you want to push CPU usage. See Oracle Database Utilities for information about adjusting the file processing options string for the BUFFERS parameter.

The performance gains depend on CPU usage and the degree of parallelism that you use when loading data.

See Also:

For more generic information about the SQL*Loader utility, see Oracle Database Utilities.

BUFFER Parameter for the Import Utility

The BUFFER parameter for the Import utility should be set to a large value to optimize the performance of high-speed networks when they are used. For instance, if you use the IBM RS/6000 Scalable POWERparallel Systems (SP) switch, you should set the BUFFER parameter to a value of at least 1 MB.

Disk I/O Issues

Disk I/O contention can result from poor memory management (with subsequent paging and swapping), or poor distribution of tablespaces and files across disks.

Make sure that the I/O activity is distributed evenly across multiple disk drives by using AIX utilities such as filemon, sar, iostat, and other performance tools to identify any disks with high I/O activity.

AIX Logical Volume Manager

The AIX Logical Volume Manager (LVM) can stripe data across multiple disks to reduce disk contention. The primary objective of striping is to achieve high performance when reading and writing large sequential files. Effective use of the striping features in the LVM allows you to spread I/O more evenly across disks, resulting in greater overall performance.

Note:

Do not add logical volumes to Automatic Storage Management (ASM) disk groups. ASM works best when you add raw disk devices to disk groups. If you are using ASM, do not use LVM for striping. Automatic Storage Management implements striping and mirroring.

Design a Striped Logical Volume

When you define a striped logical volume, you must specify the following items:

Item Recommended Settings
Drives At least two physical drives. The drives should have minimal activity when performance-critical sequential I/O is executed. Sometimes you might need to stripe the logical volume between two or more adapters.
Stripe unit size Although the stripe unit size can be any power of two from 2 KB to 128 KB, stripe sizes of 32 KB and 64 KB are good values for most workloads. For Oracle database files, the stripe size must be a multiple of the database block size.
Size The number of physical partitions allocated to the logical volume must be a multiple of the number of disk drives used.
Attributes Cannot be mirrored. Set the copies attribute to a value of 1.
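
Following those settings, a striped logical volume could be created along these lines (a sketch; the volume group, hdisk names, and LP count are placeholders, and the mklv -S strip-size flag should be verified against your AIX level):

# mklv -y oradatalv -t jfs2 -S 64K datavg 128 hdisk2 hdisk3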

 

Other Considerations

Performance gains from effective use of the LVM can vary greatly, depending on the LVM you use and the characteristics of the workload. For DSS workloads, you can see substantial improvement. For OLTP-type or mixed workloads, you can still expect significant performance gains.

Using Journaled File Systems Compared to Raw Logical Volumes

Note the following considerations when you are deciding whether to use journaled file systems or raw logical volumes:

  • File systems are continually being improved, as are various file system implementations. In some cases, file systems provide better I/O performance than raw devices.
  • File Systems require some additional configuration (AIX minservers and maxservers parameter) and add a small CPU overhead because Asynchronous I/O on file systems is serviced outside of the kernel.
  • Different vendors implement the file system layer in different ways to exploit the strengths of different disks. This makes it difficult to compare file systems across platforms.
  • The introduction of more powerful LVM interfaces substantially reduces the tasks of configuring and backing up logical disks based on raw logical volumes.
  • The Direct I/O and Concurrent I/O feature included in AIX 5L improves file system performance to a level comparable to raw logical volumes.

If you use a journaled file system, it is easier to manage and maintain database files than if you use raw devices. In earlier versions of AIX, file systems supported only buffered read and write and added extra contention because of imperfect inode locking. These two issues are solved by the JFS2 Concurrent I/O feature and the GPFS Direct I/O feature, enabling file systems to be used instead of raw devices, even when optimal performance is required.

Note:

To use the Oracle Real Application Clusters option, you must place data files in an ASM disk group on raw devices or on a GPFS file system. You cannot use JFS or JFS2. Direct I/O is implicitly enabled when you use GPFS.

File System Options

AIX 5L includes Direct I/O and Concurrent I/O support. Direct I/O and Concurrent I/O support allows database files to exist on file systems while bypassing the operating system buffer cache and removing inode locking operations that are redundant with the features provided by Oracle Database.

Where possible, Oracle recommends enabling Concurrent I/O or Direct I/O on file systems containing Oracle data files. The following list describes the file systems available on AIX and the recommended option for each.

JFS (option: dio): Concurrent I/O is not available on JFS. Direct I/O (dio) is available, but performance is degraded compared to JFS2 with Concurrent I/O.
JFS large file (option: none): Oracle does not recommend using JFS large file for Oracle Database because its 128 KB alignment constraint prevents you from using Direct I/O.
JFS2 (option: cio): Concurrent I/O (cio) is a better setting than Direct I/O (dio) on JFS2 because it supports multiple concurrent readers and writers on the same file.
GPFS (option: not applicable): Oracle Database silently enables Direct I/O on GPFS for optimum performance. GPFS Direct I/O already supports multiple readers and writers on multiple nodes, so Direct I/O and Concurrent I/O are equivalent on GPFS.

 

Considerations for JFS and JFS2

If you are placing Oracle Database logs on a JFS2 file system, the optimal configuration is to create the file system using the agblksize=512 option and to mount it with the cio option. This delivers logging performance within a few percentage points of the performance of a raw device.

Before Oracle Database 10g, Direct I/O and Concurrent I/O could not be enabled at the file level on JFS/JFS2. Therefore, the Oracle home directory and data files had to be placed in separate file systems for optimal performance: the Oracle home directory on a file system mounted with default options, and the data files and logs on file systems mounted using the dio or cio options.

With Oracle Database 10g, you can enable Direct I/O or Concurrent I/O on JFS/JFS2 at the individual file level by setting the FILESYSTEMIO_OPTIONS parameter in the server parameter file to setall (the default) or directIO. This enables Concurrent I/O on JFS2 and Direct I/O on JFS for all data file I/O, so you can place data files on the same JFS/JFS2 file system as the Oracle home directory. As mentioned above, you should still place Oracle Database logs on a separate JFS2 file system for optimal performance.
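As a sketch of the log file system configuration, assuming a hypothetical volume group oravg and mount point /oralog (adjust the size for your environment; older AIX levels expect the size in 512-byte blocks rather than the 4G shorthand):

# crfs -v jfs2 -g oravg -m /oralog -A yes -a agblksize=512 -a size=4G
# mount -o cio /oralog

Data files kept alongside the Oracle home can instead rely on FILESYSTEMIO_OPTIONS=setall, as described above.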

Considerations for GPFS

If you are using GPFS, you can use the same file system for all purposes including the Oracle home directory, data files, and logs. For optimal performance, you should use a large GPFS block size (typically at least 512 KB). GPFS is designed for scalability and there is no requirement to create multiple GPFS file systems as long as the amount of data fits in a single GPFS file system.
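A hedged sketch of creating such a file system follows; the device name, disk descriptor file, and mount point are placeholders, and the exact mmcrfs syntax varies by GPFS release, so check the GPFS documentation for your level:

# mmcrfs /gpfsora /dev/gpfsora -F /tmp/ora.disks -B 512K -A yes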

Moving from a Journaled File System to Raw Logical Volumes

To move from a journaled file system to raw devices without having to manually reload all of the data, perform the following as the root user:

  1. Create a raw device (preferably in a Big VG) using the new raw logical volume device type (-T O), which allows putting the first Oracle block at offset zero for optimal performance:
     # mklv -T O -y new_raw_device VolumeGroup NumberOfPartitions
     Note: The raw device should be larger than the existing file. Be sure to mind the size of the new raw device to prevent wasting space.
  2. Set the permissions on the raw device.
  3. Use dd to convert and copy the contents of the JFS file to the new raw device, as follows:
     # dd if=old_JFS_file of=new_raw_device bs=1m
  4. Rename the data file.
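For example, a hypothetical sequence that moves the data file /oradata/system01.dbf to a new raw logical volume (all names and the partition count are placeholders; verify that the logical volume is large enough before copying):

# mklv -T O -y rl_system01 oravg 64
# chown oracle:dba /dev/rrl_system01
# chmod 660 /dev/rrl_system01
# dd if=/oradata/system01.dbf of=/dev/rrl_system01 bs=1m

After the copy, rename the data file inside the database so that it points to the raw device (/dev/rrl_system01) instead of the JFS file.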

Moving from Raw Logical Volumes to a Journaled File System

The first Oracle block on a raw logical volume is not necessarily at offset zero, whereas the first Oracle block on a file system is always at offset zero. To determine the offset and locate the first block on a raw logical volume, use the $ORACLE_HOME/bin/offset command. The offset can be 4096 bytes or 128 KB on AIX logical volumes or zero on AIX logical volumes created with the mklv -T O option.

When you have determined the offset, you can copy over data from a raw logical volume to a file system using the dd command and skipping the offset. The following example assumes an offset of 4096 bytes:

# dd if=old_raw_device bs=4k skip=1|dd of=new_file bs=256k

You can instruct Oracle Database to use a number of blocks smaller than the maximum capacity of a raw logical volume. If you do, you must add a count clause to ensure that only data containing Oracle blocks is copied. The following example assumes an offset of 4096 bytes, an Oracle block size of 8 KB, and 150000 blocks:

# dd if=old_raw_device bs=4k skip=1|dd bs=8k count=150000|dd of=new_file bs=256k

Using Asynchronous I/O

Oracle Database takes full advantage of asynchronous I/O (AIO) provided by AIX, resulting in faster database access.

AIX 5L supports asynchronous I/O (AIO) for database files created both on file system partitions and on raw devices. AIO on raw devices is implemented fully in the AIX kernel and does not require database processes to service the AIO requests. When using AIO on file systems, the kernel server processes (aioserver) control each request from the time a request is taken off the queue until it completes. The kernel server processes are also used for I/O to virtual shared disks (VSDs) and HSDs with FastPath disabled. By default, FastPath is enabled. The number of aioserver processes determines the number of AIO requests that can be executed concurrently in the system, so it is important to tune the number of aioserver processes when using file systems to store Oracle Database data files.

Note:

If you are using AIO with VSDs and HSDs with AIO FastPath enabled (the default), the maximum buddy buffer size must be greater than or equal to 128 KB.

Use one of the following commands to set the number of servers. This applies only when using asynchronous I/O on file systems rather than raw devices:

  • smit aio
  • chdev -l aio0 -a maxservers='m' -a minservers='n'
See Also:

For more information about SMIT, see the System Management Interface Tool (SMIT) online help, and for more information about the smit aio and chdev commands, see the man pages.

Note:

Starting with AIX 5L version 5.2, there are two AIO subsystems available. Oracle Database 10g uses Legacy AIO (aio0), even though the Oracle pre-installation script enables Legacy AIO (aio0) and POSIX AIO (posix_aio0). Both AIO subsystems have the same performance characteristics.

Set the minimum value to the number of servers to be started at system boot. Set the maximum value to the number of servers that can be started in response to a large number of concurrent requests. These parameters apply to file systems only; they do not apply to raw devices.

The default value for the minimum number of servers is 1. The default value for the maximum number of servers is 10. These values are usually too low to run Oracle Database on large systems with 4 CPUs or more, if you are not using kernelized AIO. Oracle recommends that you set the parameters to the following values:

minservers: Oracle recommends an initial value equal to the number of CPUs on the system or 10, whichever is lower.
maxservers: Starting with AIX 5L version 5.2, this parameter specifies the maximum number of AIO servers per CPU, whereas on previous versions of AIX it was a system-wide value. If you are using GPFS, set maxservers to worker1threads divided by the number of CPUs. This is the optimal setting; increasing maxservers does not lead to additional I/O performance. If you are using JFS/JFS2, set the initial value to (10 * number of logical disks / number of CPUs) and monitor the actual number of aioservers started during a typical workload using the pstat or ps commands. If the actual number of active aioservers is equal to maxservers, increase the maxservers value.
maxreqs: Set the initial value to (4 * number of logical disks * queue depth). You can determine the queue depth (typically 3) by running the following command:
$ lsattr -E -l hdiskxx
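For example, on a hypothetical system with 8 CPUs, 20 logical disks, and a queue depth of 3, the formulas above give minservers=8, maxservers=25 (10 * 20 / 8), and maxreqs=240 (4 * 20 * 3); keep the existing maxreqs default if it is already larger. One hedged way to inspect and apply such values is shown below (the -P flag records the change so that it takes effect at the next restart, consistent with the note later in this section):

# lsattr -E -l aio0
# chdev -l aio0 -a minservers=8 -a maxservers=25 -P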

 

If the value of the maxservers or maxreqs parameter is set too low, you will see the following warning messages repeated:

Warning: lio_listio returned EAGAIN
Performance degradation may be seen.

You can avoid these errors by increasing the value of the maxservers parameter. To display the number of AIO servers running, enter the following commands as the root user:

# pstat -a | grep -c aios
# ps -k | grep aioserver

Check the number of active AIO servers periodically and change the values of the minservers and maxservers parameters if necessary. The changes take place when the system restarts.

I/O Slaves

I/O Slaves are specialized Oracle processes that perform only I/O. They are rarely used on AIX, as asynchronous I/O is the default and recommended way for Oracle to perform I/O operations on AIX. I/O Slaves are allocated from shared memory buffers. I/O Slaves use a set of initialization parameters, listed in the following table.

Parameter Range of Values Default Value
DISK_ASYNCH_IO true/false true
TAPE_ASYNCH_IO true/false true
BACKUP_TAPE_IO_SLAVES true/false false
DBWR_IO_SLAVES 0 – 999 0
DB_WRITER_PROCESSES 1-20 1

 

Generally, you do not need to adjust the parameters in the preceding table. However, on large workloads, the database writer might become a bottleneck. If it does, increase DB_WRITER_PROCESSES. As a general rule, do not increase the number of database writer processes above one for each 2 CPUs in the system or partition.

There are times when you need to turn off asynchronous I/O, for example, if instructed to do so by Oracle Support for debugging. You can use the DISK_ASYNCH_IO and TAPE_ASYNCH_IO parameters to switch off asynchronous I/O for disk or tape devices. Because the number of I/O slaves for each process type defaults to zero, by default no I/O Slaves are deployed.

Set the DBWR_IO_SLAVES parameter to greater than 0 only if the DISK_ASYNCH_IO or TAPE_ASYNCH_IO parameter is set to false. Otherwise, the database writer process (DBWR) becomes a bottleneck. In this case, the optimal value on AIX for the DBWR_IO_SLAVES parameter is 4.
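For instance, to force the database writer to use four I/O slaves with disk asynchronous I/O disabled, the initialization parameter file might contain lines such as the following (an illustration of the combination described above, not a general recommendation):

disk_asynch_io=false
dbwr_io_slaves=4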

Using the DB_FILE_MULTIBLOCK_READ_COUNT Parameter

By default, Oracle Database 10g uses Direct I/O or Concurrent I/O when available, and therefore the file system does not perform any read-ahead on sequential scans. The read ahead is performed by Oracle Database as specified by the DB_FILE_MULTIBLOCK_READ_COUNT initialization parameter.

Setting a large value for the DB_FILE_MULTIBLOCK_READ_COUNT initialization parameter usually yields better I/O throughput on sequential scans. On AIX, this parameter ranges from 1 to 512, but using a value higher than 16 usually does not provide additional performance gain.

Set this parameter so that its value when multiplied by the value of the DB_BLOCK_SIZE parameter produces a number larger than the LVM stripe size. Such a setting causes more disks to be used.
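For example, with an 8 KB DB_BLOCK_SIZE and a 64 KB LVM stripe size, setting DB_FILE_MULTIBLOCK_READ_COUNT to 16 produces 16 * 8 KB = 128 KB per multiblock read, which is larger than the stripe size and therefore spreads each read across at least two disks.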

Using Write Behind

The write behind feature enables the operating system to group write I/Os together up to the size of a partition. Doing this increases performance because the number of I/O operations is reduced. The file system divides each file into 16 KB partitions to increase write performance, limit the number of dirty pages in memory, and minimize disk fragmentation. The pages of a particular partition are not written to disk until the program writes the first byte of the next 16 KB partition. To set the size of the buffer for write behind to eight 16 KB partitions, enter the following command:

# /usr/samples/kernel/vmtune -c 8

To disable write behind, enter the following command:

# /usr/samples/kernel/vmtune -c 0

Tuning Sequential Read Ahead

The information in this section applies only to file systems, and only when neither Direct I/O nor Concurrent I/O is used.

The Virtual Memory Manager (VMM) anticipates the need for pages of a sequential file. It observes the pattern in which a process accesses a file. When the process accesses two successive pages of the file, the VMM assumes that the program will continue to access the file sequentially, and schedules additional sequential reads of the file. These reads overlap the program processing and make data available to the program sooner. Two VMM thresholds, implemented as kernel parameters, determine the number of pages it reads ahead:

  • minpgahead

The number of pages read ahead when the VMM first detects the sequential access pattern

  • maxpgahead

The maximum number of pages that VMM reads ahead in a sequential file

Set the minpgahead and maxpgahead parameters to appropriate values for your application. The default values are 2 and 8 respectively. Use the /usr/samples/kernel/vmtune command to change these values. You can use higher values for the maxpgahead parameter in systems where the sequential performance of striped logical volumes is of paramount importance. To set the minpgahead parameter to 32 pages and the maxpgahead parameter to 64 pages, enter the following command as the root user:

# /usr/samples/kernel/vmtune -r 32 -R 64

Set both the minpgahead and maxpgahead parameters to a power of two, for example, 2, 4, 8, … 512, 1024, and so on.

Tuning Disk I/O Pacing

Disk I/O pacing is an AIX mechanism that allows the system administrator to limit the number of pending I/O requests to a file. This prevents disk I/O intensive processes from saturating the CPU. Therefore, the response time of interactive and CPU-intensive processes does not deteriorate.

You can achieve disk I/O pacing by adjusting two system parameters: the high-water mark and the low-water mark. When a process writes to a file that already has a pending high-water mark I/O request, the process is put to sleep. The process wakes up when the number of outstanding I/O requests falls below or equals the low-water mark.

You can use the smit command to change the high and low-water marks. Determine the water marks through trial-and-error. Use caution when setting the water marks because they affect performance. Tuning the high and low-water marks has less effect on disk I/O larger than 4 KB.
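For reference, the high and low-water marks correspond to the maxpout and minpout attributes of sys0, so one hedged way to inspect and set them outside of SMIT is shown below; the values 33 and 24 are only commonly cited starting points, not a recommendation for every workload:

# lsattr -E -l sys0 | grep pout
# chdev -l sys0 -a maxpout=33 -a minpout=24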

You can determine disk I/O saturation by analyzing the result of iostat, in particular, the percentage of iowait and tm_act. A high iowait percentage combined with high tm_act percentages on specific disks is an indication of disk saturation. Note that a high iowait alone is not necessarily an indication of I/O bottleneck.

Minimizing Remote I/O Operations

Oracle Real Application Clusters running on the SP architecture uses VSDs or HSDs as the common storage that is accessible from all instances on different nodes. If an I/O request is to a VSD where the logical volume is local to the node, local I/O is performed. The I/O traffic to VSDs that are not local goes through network communication layers.

For better performance, it is important to minimize remote I/O as much as possible. Redo logs of each instance should be placed on the VSDs that are on local logical volumes. Each instance should have its own undo segments that are on VSDs mapped to local logical volumes if updates and insertions are intensive.

In each session, each user is allowed only one temporary tablespace. The temporary tablespaces should each contain at least one data file local to each of the nodes.

Carefully design applications and databases (by partitioning applications and databases, for instance) to minimize remote I/O.

Resilvering with Oracle Database

If you disable mirror write consistency (MWC) for an Oracle data file allocated on a raw logical volume (LV), the Oracle Database crash recovery process uses resilvering to recover after a system crash. This resilvering process prevents database inconsistencies or corruption.

During crash recovery, if a data file is allocated on a logical volume with more than one copy, the resilvering process performs a checksum on the data blocks of all of the copies. It then performs one of the following:

  • If the data blocks in a copy have valid checksums, the resilvering process uses that copy to update the copies that have invalid checksums.
  • If all copies have blocks with invalid checksums, the resilvering process rebuilds the blocks using information from the redo log file. It then writes the data file to the logical volume and updates all of the copies.

On AIX, the resilvering process works only for data files allocated on raw logical volumes for which MWC is disabled. Resilvering is not required for data files on mirrored logical volumes with MWC enabled, because MWC ensures that all copies are synchronized.

If the system crashes while you are upgrading a previous release of Oracle Database that used data files on logical volumes for which MWC was disabled, enter the syncvg command to synchronize the mirrored LV before starting Oracle Database. If you do not synchronize the mirrored LV before starting the database, Oracle Database might read incorrect data from an LV copy.

Note:

If a disk drive fails, resilvering does not occur. You must enter the syncvg command before you can reactivate the LV.

Caution:

Oracle supports resilvering for data files only. Do not disable MWC for redo log files.

Backing Up Raw Devices

Oracle recommends that you use RMAN to back up raw devices. If you do use the dd command to back up raw devices, use it with caution, as follows.

The offset of the first Oracle block on a raw device may be 0, 4K or 128K depending on the device type. You can use the offset command to determine the proper offset.

When creating a logical volume, Oracle recommends using an offset of zero, which is possible if you use the -T O option. However, existing raw logical volumes created with earlier versions of Oracle Database typically have a non-zero offset. The following example shows how to back up and restore a raw device whose first Oracle block is at offset 4K:

$ dd if=/dev/raw_device of=/dev/rmt0.1 bs=256k

To restore the raw device from tape, enter commands similar to the following:

$ dd if=/dev/rmt0.1 of=/dev/raw_device count=63 seek=1 skip=1 bs=4k
$ mt -f /dev/rmt0.1 bsf 1
$ dd if=/dev/rmt0.1 of=/dev/raw_device seek=1 skip=1 bs=256k

CPU Scheduling and Process Priorities

The CPU is another system component for which processes might contend. Although the AIX kernel allocates CPU effectively most of the time, many processes compete for CPU cycles. If your system has more than one CPU (SMP), there might be different levels of contention on each CPU.

Changing Process Running Time Slice

The default value for the runtime slice of the AIX RR dispatcher is ten milliseconds. Use the schedtune command to change the time slice. However, be careful when using this command. A longer time slice causes a lower context switch rate if the applications’ average voluntary switch rate is lower. As a result, fewer CPU cycles are spent on context-switching for a process and the system throughput should improve.

However, a longer runtime slice can deteriorate response time, especially on a uniprocessor system. The default runtime slice is usually acceptable for most applications. When the run queue is high and most of the applications and Oracle shadow processes are capable of running a much longer duration, you might want to increase the time slice by entering the following command:

# /usr/samples/kernel/schedtune -t n

In the previous command, choosing a value for n of 0 results in a slice of 10 milliseconds (ms), choosing a value of 1 results in a slice of 20 ms, choosing a value of 2 results in a slice of 30 ms, and so on.

Using Processor Binding on SMP Systems

Binding certain processes to a processor can improve performance substantially on an SMP system. Processor binding is available and fully functional on AIX 5L.

However, starting with AIX 5L version 5.2, specific improvements in the AIX scheduler allow Oracle Database processes to be scheduled optimally without the need for processor binding. Therefore, Oracle no longer recommends binding processes to processors when running on AIX 5L version 5.2 or later.

Oracle Real Application Clusters Information

The following sections provide information about Oracle Real Application Clusters.

UDP Tuning

Oracle Real Application Clusters uses User Datagram Protocol (UDP) for interprocess communications on AIX. You can tune UDP kernel settings to improve Oracle performance. You can modify kernel UDP buffering on AIX by changing the udp_sendspace and udp_recvspace parameters. The udp_sendspace value must always be greater than the value of the Oracle Database DB_BLOCK_SIZE initialization parameter. Otherwise, one or more of the Oracle Real Application Clusters instances will fail at startup. Use the following guidelines when tuning these parameters:

  • Set the value of the udp_sendspace parameter to the product of DB_BLOCK_SIZE and DB_FILE_MULTIBLOCK_READ_COUNT, plus 4 KB. For example, with a 16 KB block size and DB_FILE_MULTIBLOCK_READ_COUNT set to 16, set udp_sendspace to 260 KB, that is, 266240 bytes (see the example commands after this list).
  • Set the value of the udp_recvspace parameter to at least ten times the value of the udp_sendspace parameter.
  • The value of the udp_recvspace parameter must be less than the value of the sb_max parameter.
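For example, with the 16 KB block size described above, one hedged way to apply these settings with the no command is shown below (on AIX levels that support it, the -p option makes the change persistent across restarts; otherwise add the commands to /etc/rc.net):

# no -o udp_sendspace=266240
# no -o udp_recvspace=2662400
# no -o sb_max

The last command displays the current sb_max value so that you can confirm it is larger than udp_recvspace before applying the change.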

To monitor the suitability of the udp_recvspace parameter settings, enter the following command:

$ netstat -p udp | grep "socket buffer overflows"

If the number of overflows is not zero, increase the value of the udp_recvspace parameter. You can use the following command to reset error counters before monitoring again:

$ netstat -Zs -p udp
See Also:

For information about setting these parameters, see the Oracle Real Application Clusters Installation and Configuration Guide. For additional information about AIX tuning parameters, see the AIX 5L Performance Management Guide.

Network Tuning for Transparent Application Failover

If you are experiencing Transparent Application Failover time of more than 10 minutes, consider tuning network parameters rto_length, rto_low, and rto_high to reduce the failover time.

The lengthy Transparent Application Failover time is caused by a TCP timeout and retransmission problem in which clients connected to a crashed node do not receive acknowledgement from the failed instance. Consequently, the client continues to retransmit the same packet again and again using an Exponential Backoff algorithm (refer to TCP/IP documentation for more information).

On AIX, the default timeout value is set to approximately 9 minutes. You can use the no command to tune this parameter using the load time attributes rto_length, rto_low, and rto_high. Using these parameters, you can control how often and how many times a client should retransmit the same packet before it gives up. The rto_low (default is 1 second) and rto_high (default is 64 seconds) parameters control how often to transmit the packet, while the rto_length (default is 13) parameter controls how many times to transmit the packet.

For example, using the Exponential Backoff algorithm with the AIX default values, the timeout value is set to approximately 9.3 minutes. However, using the same algorithm, and setting rto_length to 7, the timeout value is reduced to 2.5 minutes.
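A hedged sketch of the corresponding no command is shown below; because these are load-time attributes, the exact mechanism (for example, no -r on newer AIX levels, or setting the values in /etc/rc.net) and the need for a reboot depend on your AIX release:

# no -o rto_low=1 -o rto_high=64 -o rto_length=7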

Note:

Check the quality of the network transmission before setting any of the parameters described in this section. You can check the quality of the network transmission using the netstat command. Bad quality network transmissions might require a longer timeout value.

Oracle Real Application Clusters and HACMP or PSSP

With Oracle Database 10g, Real Application Clusters (RAC) uses the group services provided by the AIX 5L RSCT Peer Domains (RPD). RAC no longer relies on specific services provided by HACMP or PSSP. In particular, there is no need to configure the PGSD_SUBSYS variable in the information repository.

RAC remains compatible with HACMP and PSSP. HACMP is typically present when shared logical raw volumes are used instead of a GPFS file system. PSSP is present when an SP Switch or SP Switch 2 is used as the interconnect.

If you are using an IP-based interconnect, such as Gigabit Ethernet, IEEE 802.3ad, EtherChannel, or IP over SP Switch, RAC determines the name of the interface(s) to use, as specified by the CLUSTER_INTERCONNECTS parameter in the server parameter file.

Oracle Real Application Clusters and Fault Tolerant IPC

When the interconnect (IPC) used by Real Application Clusters 10g is based on the Internet Protocol (IP), RAC takes advantage of the fault tolerance and link aggregation that is built into AIX 5L through the IEEE 802.3ad Link Aggregation and EtherChannel technologies. This replaces the Fault Tolerant IPC feature (FT-IPC) that was used in previous versions of Real Application Clusters.

Link Aggregation using 802.3ad provides the same level of fault tolerance and adds support for bandwidth aggregation. It also simplifies the configuration of Real Application Clusters.

RAC determines which IP interface(s) to use by looking up the CLUSTER_INTERCONNECTS parameter in the server parameter file. This parameter typically contains only the name of the IP interface created through IEEE 802.3ad Link Aggregation or EtherChannel. For more information, refer to the AIX System Management Guide: Communications and Networks: EtherChannel and IEEE 802.3ad Link Aggregation.

Setting the AIXTHREAD_SCOPE Environment Variable

Threads in AIX can run with process-wide contention scope (M:N) or with system-wide contention scope (1:1). The AIXTHREAD_SCOPE environment variable controls which contention scope is used.

The default value of the AIXTHREAD_SCOPE environment variable is P, which specifies process-wide contention scope. When using process-wide contention scope, Oracle threads are mapped to a pool of kernel threads. When Oracle is waiting on an event and its thread is swapped out, it may return on a different kernel thread with a different thread ID. Oracle uses the thread ID to post waiting processes, so it is important for the thread ID to remain the same. When using system-wide contention scope, Oracle threads are mapped to kernel threads statically, one to one. For this reason, Oracle recommends using system-wide contention scope. The use of system-wide contention scope is especially critical for Oracle Real Application Clusters (RAC) instances.

Additionally, on AIX 5L version 5.2 or later, if you set system-wide contention scope, significantly less memory is allocated to each Oracle process.

Oracle recommends that you set the value of the AIXTHREAD_SCOPE environment variable to S in the environment script that you use to set the ORACLE_HOME or ORACLE_SID environment variables for an Oracle database instance or an Oracle Net listener process, as follows:

  • Bourne, Bash, or Korn shell:

Add the following line to the ~/.profile or /usr/local/bin/oraenv script:

AIXTHREAD_SCOPE=S; export AIXTHREAD_SCOPE
  • C shell:

Add the following line to the ~/.login or /usr/local/bin/coraenv script:

setenv AIXTHREAD_SCOPE S

Doing this enables system-wide thread scope for running all Oracle processes.

References: Oracle® Database Administrator’s Reference
10g Release 1 (10.1) for UNIX Systems: AIX-Based Systems, Apple Mac OS X, hp HP-UX, hp Tru64 UNIX, Linux, and Solaris Operating System
Part No. B10812-06

AIX Delete problem

Today one user complained that he was not able to delete files.

Most probably this was due to a non-printable character in the file name.

# cd /oracle
# ls -ltr
total 7151632
drwxr-xr-x    2 oracle   oinstall        256 May 27 2008  lost+found
drwxr-xr-x    3 oracle   oinstall        256 May 28 2008  app
drwxr-xr-x    2 oracle   oinstall       4096 May 28 2008  TT_DB
-rwxrwxrwx    1 oracle   oinstall       3970 May 28 2008  .dtprofile
-rwxrwxrwx    1 oracle   oinstall         98 May 28 2008  .Xauthority
-rwxrwxrwx    1 oracle   oinstall        219 May 28 2008  .TTauthority
drwxrwxrwx   10 oracle   oinstall       4096 May 28 2008  .dt
-rwxrwxrwx    1 oracle   oinstall          3 May 30 2008  .wmrc
drwxr-xr-x    3 oracle   oinstall        256 May 31 2008  .java
-rw-------    1 oracle   oinstall          3 Nov 28 2008  dead.letter
-rw-r--r--    1 oracle   oinstall          0 Dec 05 2008  VALID
drwx--x--x    2 oracle   oinstall        256 Dec 05 2008  Mail
-rw-r--r--    1 oracle   oinstall         11 Dec 05 2008  .mh_profile
-rw-r--r--    1 oracle   oinstall          0 Feb 28 2009  smit.transaction
-rw-r--r--    1 oracle   oinstall          0 Feb 28 2009  smit.script
drwxr-xr-x    3 oracle   oinstall        256 May 26 2009  oradiag_oracle
-rw-r--r--    1 oracle   oinstall       2409 Feb 02 2011  back.sh
drwxr-----    3 ofm11g   dba             256 Jun 14 2011  application
drwx------    2 oracle   oinstall        256 Jun 17 2011  .ssh
-rw-r--r--    1 oracle   oinstall         54 Aug 02 2011  afiedt.buf
drwxrwxrwx    3 cash     staff           256 Nov 01 2011  dbdirectory
-rwxrwxr-x    1 oracle   oinstall        630 Nov 30 2011  .profile
drwxrwxrwx    2 bbk      staff           256 Jan 09 2012  CMS_FILE_LOCATION
-rw-r-----    1 oracle   oinstall       4817 Jun 14 13:04 sqlnet.log
-rw-------    1 oracle   oinstall     127931 Jul 31 19:19 nohup.out
-rw-r--r--    1 oracle   oinstall     127543 Jul 31 19:19 cmsdb12_27jul12_exp.log
-rw-r--r--    1 oracle   oinstall 3661254656 Jul 31 19:19 cmsdb12_27jul12.dmp
-rw-r--r--    1 oracle   oinstall        231 Aug 01 10:19 cmsdb6.txt
-rw-r--r--    1 oracle   oinstall        231 Aug 01 13:45 cmsdb6-1.txt
-rw-r--r--    1 oracle   oinstall      11199 Aug 01 16:45 kill_inactive.sql
-rwxrwxrwx    1 oracle   oinstall        356 Aug 01 17:28 .vi_history
-rwxrwxrwx    1 oracle   oinstall       1462 Aug 01 17:36 .bash_history
-rw-------    1 oracle   oinstall      13300 Aug 03 10:51 .sh_history
# file cmsdb6-1.txt
file: cannot open cmsdb6-1.txt
# ls -li cmsdb6-1.txt
ls: 0653-341 The file cmsdb6-1.txt does not exist.
# rm cmsdb6-1.txt
rm: cmsdb6-1.txt: A file or directory in the path name does not exist.
# ls -lbi *cmsdb6*
   39 -rw-r--r--    1 oracle   oinstall        231 Aug 01 10:19 cmsdb6.txt
   42 -rw-r--r--    1 oracle   oinstall        231 Aug 01 13:45 cmsdb6.txt?33[D?33[D?33[D?33[D-1.txt
# rm -i cmsdb6.txt*
rm: Remove cmsdb6.txt? n
rm: Remove cmsdb6-1.txt? y
# ls -lbi *cmsdb6*
   39 -rw-r--r--    1 oracle   oinstall        231 Aug 01 10:19 cmsdb6.txt
# rxit
ksh: rxit:  not found.
You have mail in /usr/spool/mail/root
#

So if you face the same kind of issue, please try “ls -lbi” to display non-printable characters and the inode (first column of the output) of the file(s).

Then you have different options to delete the file.

For example, run “rm -i” with a pattern that matches the file (as in “rm -i cmsdb6.txt*” above) and reply “y” only for the file you want to delete.

Or use something like

“find . -inum <inum-of-your-file-in-question> -exec rm {} \;”

to identify the file by the inode number you got from the ls -lbi output.
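For example, using the inode number 42 shown by ls -lbi in the listing above, the problem file could be removed interactively with:

# find . -xdev -inum 42 -exec rm -i {} \;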

How to reduce a filesystem in AIX

The following steps reduce the size of the /var or /tmp file system in all supported releases of AIX Versions 4 and 5. If either file system on your machine is 8192KB in size or smaller, you probably should not reduce it. The default size of the /var file system (on installation) is 4096KB, which fills up rather quickly. If you can afford the space, it is better to have /var be 8192KB total. The default size of the /tmp file system (upon installation) is 8192KB.

NOTE: Back up the data before proceeding. If you have a tape drive connected to your system, this can be achieved by executing the following sequence of commands on either /var or /tmp:

cd /
tar -cvf /dev/rmt0 /var

/dev/rmt0 can be replaced with /dev/fd0 or the full path of a directory NOT in the same file system.

Boot your system into a limited function maintenance shell (Service or Maintenance mode) from bootable AIX media.

Please refer to your system’s user’s or installation and service guide for specific IPL procedures related to the type and model of your system. Additionally, the document titled “Booting in Service Mode”, has specific procedures for most types of systems. The document is available at this location:

http://techsupport.services.ibm.com/rs6k/techbrowse

With bootable media of the same version and level as the system, boot the system into Service mode.
The bootable media can be any ONE of the following:

Bootable CD-ROM
NON_AUTOINSTALL, bootable mksysb
Bootable Install Tape
Follow the screen prompts or icons to the Welcome to Base OS menu.

Choose Start Maintenance Mode for System Recovery (Option 3). The next screen displays prompts for the Maintenance menu.

Choose Access a Root Volume Group (Option 1).
The next screen displays a warning that indicates you will not be able to return to the Base OS menu without rebooting.

Choose 0 to continue.
The next screen displays information about all volume groups on the system.

Select the root volume group by number. The logical volumes in rootvg will be displayed with two options.

Choose Access this volume group and start a shell. (Option 1).
If you get errors from the preceding option, do not continue with this procedure. Correct the problem causing the error. If you need assistance correcting the problem causing the error, contact one of the following:

Local branch office
Your point of sale
Your AIX support center
If no errors occur, proceed with the following steps.

Unmount the file system. (The following examples use /var. If you intend to reduce the /tmp file system, substitute /tmp for /var in the commands.) Execute:
umount /var

Remove the file system by executing:
rmfs /var

Determine the physical partition (PP) size of your rootvg volume group with the command:
lsvg rootvg

Create the logical volume with one of these commands:
mklv -y hd9var rootvg [x]     (for /var)
mklv -y hd3 rootvg [x]           (for /tmp)

where x is the number of logical partitions you want to allocate. If your rootvg volume group has a PP size of 4MB, and you want the total size of the /var file system to be 8MB, then x would be 2. For example:

mklv -y hd9var rootvg 2

This command makes a logical volume hd9var of size 8MB (two 4MB partitions) in the rootvg volume group.

NOTE: The logical volume name used for the /tmp file system is hd3, and hd9var is the logical volume name used for /var. These names must be used if you wish to maintain your AIX system in an IBM-supported state.

Create the file system with the following command:
crfs -v jfs -d hd9var -m /var -a check=false -a free=false -a vol=/var

NOTE: Substitute hd3 for hd9var and /tmp for /var if needed. Refer to the section Example of /etc/filesystems for the different attributes required for these filesystems.

Mount the file system:
mount /var        (OR    mount /tmp)

If you are recreating /var, now create the /var/tmp directory for the vi editor. Execute:
mkdir /var/tmp

Set your TERM variable and export it. If you are using a megapel display, try setting TERM=hft. If you are using an ASCII terminal such as an IBM 3151, set your TERM to the appropriate terminal type. For example:
TERM=hft
export TERM

Edit /etc/filesystems. If you have been recreating /tmp, invoke the vi editor by executing the following command:
vi -c "set dir=/" /etc/filesystems

If you have not been recreating /tmp, execute:

vi /etc/filesystems

Skip down to the stanza for either /var or /tmp. Within that stanza, go to the line that says mount = false and change the word false to automatic. Save the file.

Change the ownership and permissions to the proper values, as follows:
chmod g-s /var
chmod 755 /var
chown bin.bin /var

or

 

chmod g-s /tmp
chmod 1777 /tmp
chown bin.bin /tmp

Restore the files from your backup. If you used the backup method given earlier in this document, execute:
cd /
tar -xvf /dev/rmt0

Remove the bootable media if you have not already done so.

If your system has a mode select key, switch it to the Normal position.

Reboot the system into Normal mode with the following:
sync;sync;sync;reboot