How to interpret PV and VG numbers from HP-UX syslog.log

You may see error messages like the following in syslog.log:

Aug 14 00:14:05 cust1 vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x000000005ed22800), from raw device 0x1f00b400 (with priority: 0, and current flags: 0x40) to raw device 0x1f03b400 (with priority: 1, and current flags: 0x0).
Aug 14 00:14:05 cust1 vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x000000005ed22800), from raw device 0x1f03b400 (with priority: 1, and current flags: 0x0) to raw device 0x1f00b400 (with priority: 0, and current flags: 0x0).
Aug 14 00:14:05 cust1 vmunix: LVM: Recovered Path (device 0x1f00b400) to PV 11 in VG 18.
Aug 14 00:14:05 cust1 vmunix: LVM: Restored PV 11 to VG 18.

To determine the Volume Group that VG 18 refers to:

translate 18 (decimal) into 12 (hexadecimal)

then look for the "group" file whose minor number is 0x??0000, where ?? is that hex value, e.g. issue: ls -lr /dev/*/group | grep 0x120000


crw-r--r-- 1 root sys 64 0x120000 Apr 19 01:30 /dev/vgdata_02/group

So VG 18 is volume group vgdata_02.
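The two lookup steps above can be combined into a short shell sketch. The ls against /dev/*/group has to run on the HP-UX host itself, so it is left commented out here:

```shell
# Convert the decimal VG number from syslog to hex and build the
# minor-number pattern for the VG's group file.
vg=18
hex=$(printf '%02x' "$vg")
echo "pattern: 0x${hex}0000"            # for VG 18 this prints: pattern: 0x120000
# ls -lr /dev/*/group | grep "0x${hex}0000"   # run this part on the HP-UX host
```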


Next, to determine the Physical Volume that PV 11 refers to:

issue: strings /etc/lvmtab

then look at the section for the relevant volume group, e.g. vgdata_02:

/dev/vgdata_02
/dev/dsk/c0t2d0
/dev/dsk/c0t2d3
/dev/dsk/c0t2d4
/dev/dsk/c0t3d0
/dev/dsk/c0t3d3
/dev/dsk/c0t3d4
/dev/dsk/c0t4d2
/dev/dsk/c0t4d5
/dev/dsk/c0t5d1
/dev/dsk/c0t11d2
/dev/dsk/c0t11d3
/dev/dsk/c0t11d4
/dev/dsk/c0t11d5
/dev/dsk/c0t11d6
/dev/dsk/c0t11d7
/dev/dsk/c0t12d0
/dev/dsk/c3t2d0
/dev/dsk/c8t2d0
/dev/dsk/c11t2d0
/dev/dsk/c3t2d3
/dev/dsk/c8t2d3
/dev/dsk/c11t2d3
/dev/dsk/c3t2d4
/dev/dsk/c8t2d4
/dev/dsk/c11t2d4
/dev/dsk/c3t3d0
/dev/dsk/c8t3d0
/dev/dsk/c11t3d0
/dev/dsk/c3t3d3
/dev/dsk/c8t3d3
/dev/dsk/c11t3d3
/dev/dsk/c3t3d4
/dev/dsk/c8t3d4
/dev/dsk/c11t3d4
/dev/dsk/c3t4d2
/dev/dsk/c8t4d2
/dev/dsk/c11t4d2
/dev/dsk/c3t4d5
/dev/dsk/c8t4d5
/dev/dsk/c11t4d5
/dev/dsk/c3t5d1
/dev/dsk/c8t5d1
/dev/dsk/c11t5d1
/dev/dsk/c3t11d2
/dev/dsk/c8t11d2
/dev/dsk/c11t11d2
/dev/dsk/c3t11d3
/dev/dsk/c8t11d3
/dev/dsk/c11t11d3
/dev/dsk/c3t11d4
/dev/dsk/c8t11d4
/dev/dsk/c11t11d4
/dev/dsk/c3t11d5
/dev/dsk/c8t11d5
/dev/dsk/c11t11d5
/dev/dsk/c3t11d6
/dev/dsk/c8t11d6
/dev/dsk/c11t11d6
/dev/dsk/c3t11d7
/dev/dsk/c8t11d7
/dev/dsk/c11t11d7
/dev/dsk/c3t12d0
/dev/dsk/c8t12d0
/dev/dsk/c11t12d0

Note that PV 0 is the first c#t#d# entry. So here PV 11 refers to the 12th entry: /dev/dsk/c0t11d4.
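This counting can be automated. The helper below (`pv_of_vg` is a hypothetical name, not an HP-UX command) numbers the device entries of one VG section; on the real host you would pipe `strings /etc/lvmtab` into it, but a short inlined sample is used here for illustration:

```shell
# Number the device entries of one VG section from `strings /etc/lvmtab`
# output. PV numbering starts at 0 with the first c#t#d# entry.
pv_of_vg() {
  awk -v vg="/dev/$1" '
    $0 == vg          { in_vg = 1; n = 0; next }
    /^\/dev\/dsk\//   { if (in_vg) printf "PV %d  %s\n", n++, $0; next }
    /^\/dev\//        { in_vg = 0 }   # a new VG name ends the section
  '
}

# Sample input; replace with: strings /etc/lvmtab | pv_of_vg vgdata_02
printf '%s\n' /dev/vgdata_02 /dev/dsk/c0t2d0 /dev/dsk/c0t2d3 /dev/dsk/c0t2d4 |
  pv_of_vg vgdata_02
```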

To double-check, determine which c#t#d# device the raw device number 0x1f00b400 refers to: it is c0t11d4.
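As a sketch of that double-check, assuming the classic HP-UX legacy disk minor-number layout (8 bits of card instance, 4 bits of target, 4 bits of device in the upper half of the 24-bit minor; verify against your driver's documentation), the c#t#d# name can be extracted from the raw device number:

```shell
# 0x1f00b400: major 0x1f (31) is the disk driver; the 24-bit minor is
# 0x00b400. Assuming the legacy layout 0xIITDxx (II = card instance,
# T = target, D = device), extract c#t#d#:
devno=0x1f00b400
minor=$(( devno & 0xffffff ))
card=$(( (minor >> 16) & 0xff ))
tgt=$((  (minor >> 12) & 0xf  ))
dev=$((  (minor >>  8) & 0xf  ))
echo "c${card}t${tgt}d${dev}"     # prints: c0t11d4
```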

HP-UX: Checking and Modifying the UNIX Kernel

Manual Configuration of the Kernel

1. Change the kernel parameters according to the table Recommended Kernel Parameters for HP-UX in the file:

/stand/system

2. Generate a new kernel after making the changes using the following command:

mk_kernel -o /stand/vmunix -s /stand/system

3. Reboot your system.

Configuration of the Kernel Using SAM

  • Enter the command:

/usr/sbin/sam

  • Select:

Kernel Configuration -> Configurable Parameters

  • Choose the parameter you want to modify and select:

Actions -> Modify Configurable Parameter

  • Modify all kernel parameters according to the table Recommended Kernel Parameters for HP-UX.
  • Select Process New Kernel from the Actions menu.
  • Exit SAM.
  • Reboot your system.


HP-UX: Mounting a CD-ROM

Mounting a CD-ROM Manually:

1. Log on as user root.

2. Create a mount point for the CD-ROM with the command:

mkdir <CD-mountdir>

(usually <CD-mountdir> is /sapcd).

3. Make sure that the driver is part of the kernel (skip this step if the CD drive is already working):

grep cdfs /stand/system

If the driver is not configured, add the string cdfs to the file /stand/system, rebuild the kernel, and reboot the system.

4. Mount the CD-ROM with the command:

mount -r -F cdfs /dev/dsk/<diskdevice> <CD-mountdir>

(for example, <diskdevice> is c0t4d0 for a CD drive at hardware address 4).

Mounting a CD-ROM Using SAM:

1. Enter the command:

/usr/sbin/sam

2. Select:

Disks and Filesystems -> Disk Devices -> Actions -> Mount

3. Enter the mount directory:

<CD-mountdir>

(for example, <CD-mountdir> is /sapcd).

4. Perform the task.

5. Exit SAM.


How to check and install missing perl modules

  • Check whether a module is installed (an error means the module is missing):

    # perl -MModule::Name -e 1

  • See the documentation of the module, if it is installed:

    # perldoc Module::Name

  • Open a CPAN shell:

    # perl -MCPAN -e shell

  • Reconfigure the CPAN shell if needed:

    cpan> o conf init

  • Install an available module:

    cpan> install HTML::Template

  • You can also run the Perl CPAN module from the command line and install a module in a single step:

    # perl -MCPAN -e 'install HTML::Template'

  • Force the install if the tests fail:

    cpan> force install Module::Name

  • To install a Perl module manually, unpack it and change into the module directory:

    # tar -zxvf HTML-Template-2.8.tar.gz
    # cd HTML-Template-2.8

  • Then build and install it:

    # perl Makefile.PL
    # make
    # make test
    # make install
    

How to get a copy of root's email or forward it to an SMTP email address

Edit the aliases file for the system and set your own address as the destination for root: open /etc/aliases (or wherever the aliases file lives on your system) and look at the existing aliases. There is probably an entry for root (possibly commented out) that points to a fictitious user. Add an entry like:

root: your_id@your_email.address

and save the file. Then run the newaliases command, and all mail to root will go to you.
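A quick sketch for checking where root's mail currently goes, using a couple of lines modeled on the example file below (on the host, read /etc/aliases itself instead of the inlined sample):

```shell
# Print the destination of the root alias, stripping whitespace.
sample='postmaster:root
root: Unix-Support@123software.com'
printf '%s\n' "$sample" | awk -F: '$1 == "root" { gsub(/ /, "", $2); print $2 }'
```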

Example:

# cat /etc/aliases
# @(#)87        1.3  src/bos/usr/sbin/sendmail/aliases, cmdsend, bos530 6/15/90 23:21:43
# IBM_PROLOG_BEGIN_TAG
# This is an automatically generated prolog.
#
# bos530 src/bos/usr/sbin/sendmail/aliases 1.3
#
# Licensed Materials - Property of IBM
#
# (C) COPYRIGHT International Business Machines Corp. 1985,1989
# All Rights Reserved
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
# IBM_PROLOG_END_TAG
#
# COMPONENT_NAME: CMDSEND aliases
#
# FUNCTIONS:
#
# ORIGINS: 10  26  27
#
# (C) COPYRIGHT International Business Machines Corp. 1985, 1989
# All Rights Reserved
# Licensed Materials - Property of IBM
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
##
#  Aliases in this file will NOT be expanded in the header from
#  Mail, but WILL be visible over networks or from /bin/bellmail.
#
#       >>>>>>>>>>      The command "sendmail -bi" must be run after
#       >> NOTE >>      this file is updated for any changes to
#       >>>>>>>>>>      affect sendmail operation.
##

# Alias for mailer daemon
MAILER-DAEMON:root

# Following alias is required by the new mail protocol, RFC 822
postmaster:root

# Aliases to handle mail to msgs and news
nobody: /dev/null

# Alias to which SSA related warnings are mailed
ssa_adm: root
root:Unix-Support@123software.com
#


AIX: Hot Spot Management in Logical Volumes

You can identify hot spot problems with your logical volumes and remedy those problems without interrupting the use of your system.

A hot-spot problem occurs when some of the logical partitions on your disk have so much disk I/O that your system performance noticeably suffers.

The first step toward solving the problem is to identify it. By default, the system does not collect statistics for logical volume use. After you enable the gathering of these statistics, the first time you enter the lvmstat command, the system displays the counter values since the previous system reboot. Thereafter, each time you enter the lvmstat command, the system displays the difference since the previous lvmstat command.

By interpreting the output of the lvmstat command, you can identify the logical partitions with the heaviest traffic. If you have several logical partitions with heavy usage on one physical disk and want to balance these across the available disks, you can use the migratelp command to move these logical partitions to other physical disks.

In the following example, the gathering of statistics is enabled and the lvmstat command is used repeatedly to gather a baseline of statistics:

# lvmstat -v rootvg -e
# lvmstat -v rootvg -C
# lvmstat -v rootvg

The output is similar to the following:

Logical Volume            iocnt      Kb_read     Kb_wrtn      Kbps
  hd8                         4            0          16      0.00
  paging01                    0            0           0      0.00
  lv01                        0            0           0      0.00
  hd1                         0            0           0      0.00
  hd3                         0            0           0      0.00
  hd9var                      0            0           0      0.00
  hd2                         0            0           0      0.00
  hd4                         0            0           0      0.00
  hd6                         0            0           0      0.00
  hd5                         0            0           0      0.00

The previous output shows that all counters have been reset to zero. In the following example, the /unix file is copied to the /tmp directory. The lvmstat command output reflects the activity for the rootvg volume group:

# cp -p /unix /tmp
# lvmstat -v rootvg

Logical Volume            iocnt      Kb_read     Kb_wrtn      Kbps
  hd3                       296            0        6916      0.04
  hd8                        47            0         188      0.00
  hd4                        29            0         128      0.00
  hd2                        16            0          72      0.00
  paging01                    0            0           0      0.00
  lv01                        0            0           0      0.00
  hd1                         0            0           0      0.00
  hd9var                      0            0           0      0.00
  hd6                         0            0           0      0.00
  hd5                         0            0           0      0.00

The output shows activity on the hd3 logical volume, which is mounted on the /tmp directory; on hd8, which is the JFS log logical volume; on hd4, which is / (root); and on hd2, which is the /usr directory. The following output provides details for hd3 and hd2:

# lvmstat -l hd3

Log_part    mirror#    iocnt    Kb_read    Kb_wrtn     Kbps
       1         1       299          0       6896     0.04
       3         1         4          0         52     0.00
       2         1         0          0          0     0.00
       4         1         0          0          0     0.00
# lvmstat -l hd2
Log_part    mirror#    iocnt    Kb_read    Kb_wrtn     Kbps
       2         1         9          0         52     0.00
       3         1         9          0         36     0.00
       7         1         9          0         36     0.00
       4         1         4          0         16     0.00
       9         1         1          0          4     0.00
      14         1         1          0          4     0.00
       1         1         0          0          0     0.00

The output for a volume group provides a summary for all the I/O activity of a logical volume. It is separated into the number of I/O requests (iocnt), the kilobytes read and written (Kb_read and Kb_wrtn, respectively), and the transferred data in KB/s (Kbps). If you request the information for a logical volume, you receive the same information, but for each logical partition separately. If you have mirrored logical volumes, you receive statistics for each of the mirror volumes. In the previous sample output, several lines for logical partitions without any activity were omitted. The output is always sorted in decreasing order on the iocnt column.
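Because the output is sorted on iocnt, picking migration candidates amounts to keeping the rows with nonzero iocnt. A small awk sketch, with a few lines from the hd2 listing above inlined for illustration (on the host, pipe `lvmstat -l <lv>` in instead):

```shell
lvmstat_out='Log_part    mirror#    iocnt    Kb_read    Kb_wrtn     Kbps
       2         1         9          0         52     0.00
       4         1         4          0         16     0.00
       1         1         0          0          0     0.00'
# Skip the header row, keep partitions with iocnt > 0 (column 3):
printf '%s\n' "$lvmstat_out" | awk 'NR > 1 && $3 > 0 { print "partition " $1 }'
```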

The migratelp command uses, as parameters, the name of the logical volume, the number of the logical partition (as it is displayed in the lvmstat output), and an optional number for a specific mirror copy. If information is omitted, the first mirror copy is used. You must specify the target physical volume for the move; in addition, you can specify a target physical partition number. If successful, the output is similar to the following:

# migratelp hd3/1 hdisk1/109
  migratelp: Mirror copy 1 of logical partition 1 of logical volume
        hd3 migrated to physical partition 109 of hdisk1.

After the hot spot feature is enabled, either for a logical volume or a volume group, you can define your reporting and statistics, display your statistics, select logical partitions to migrate, specify the destination physical partition, and verify the information before committing your changes.

migratelp Command

Purpose

Moves an allocated logical partition from one physical partition to another on a different physical volume.

Syntax

migratelp LVname/LPartnumber[ /Copynumber ] DestPV[/PPartNumber]

Description

The migratelp command moves the specified logical partition LPartnumber of the logical volume LVname to the DestPV physical volume. If the destination physical partition PPartNumber is specified, it is used; otherwise a destination partition is selected using the intra-region policy of the logical volume. By default, the first mirror copy of the logical partition in question is migrated. A value of 1, 2, or 3 can be specified for Copynumber to migrate a particular mirror copy.

Notes:
  1. You must consider the partition usage, reported by lvmstat, on the other active concurrent nodes in case of a concurrent volume group.
  2. Strictness and upper bound settings are not enforced when using migratelp.

The migratelp command fails to migrate partitions of striped logical volumes.

Security

To use migratelp, you must have root user authority.

Examples

  1. To move the first logical partition of logical volume lv00 to hdisk1, type:
    migratelp lv00/1 hdisk1
  2. To move the second mirror copy of the third logical partition of logical volume hd2 to hdisk5, type:
    migratelp hd2/3/2 hdisk5
  3. To move the third mirror copy of the 25th logical partition of logical volume testlv to the 100th partition of hdisk7, type:
    migratelp testlv/25/3 hdisk7/100

AIX: HACMP Installation and Configuration [Overview]

1. Overview

The High Availability (HA) feature allows a properly configured application system to automatically recover from a number of possible failures, with the goal of eliminating all single points of failure in the system. The same functionality can be used to minimize the impact of regularly scheduled maintenance and software upgrades.

The High Availability feature is only available on AIX platforms.

Failures of the following components will be protected against when using a properly configured HA application system:

  • Core server
  • Network-related:
      • adapters
      • cables
  • Disk-related:
      • adapters
      • cables
      • disks
  • Power-related:
      • node power supply
      • disk power supply
      • power distribution strip

High availability is not the same as fault tolerance. The failures above are "protected against" in the sense that the HA application system will be able to return to an operational state without intervention when any one of them occurs. There may certainly be some downtime, especially when the core server fails (crashes).

After a recovery, application will function properly, but it will no longer be in a Highly Available state. A subsequent failure may not be recoverable. For instance, if the core server crashes and the backup takes over, there is no longer a backup node. It will be necessary to correct the original failure in order to return the system to a Highly Available state.

1.1 Architecture

The following diagram shows the necessary components for an HA application configuration:

Figure G-1 HA application Architecture

This diagram does not include the power system, but it does have several features that are very important:

  • At any point in time, either Node 1 or Node 2 can act as the core application server.
  • The two shared disk busses are mirrored to one another and accessed by each node using separate adapter cards so that any single failure (disk, adapter, or bus) will result in accessibility of at least one good copy of the data.
  • Each node has two connections to the Ethernet network. One is a "standby" adapter that can take over the IP and hardware addresses of the primary adapter in case of failure.
  • There is an RS-232 serial cable connecting Node 1 and Node 2 to enable communication even in the event that the main network fails.