Logical volumes not recognized or mounted; even varyonvg does not work

After two LUNs were accidentally removed from the host, we ran into problems, so
I rebooted the system. Now I am having serious problems with two LVs in my user-defined volume group, which consists
of three disks. When I try to vary the group on, I get "PVREMOVED" for two
of the disks. These are the two external disks that went offline when the LUNs were accidentally removed. Here is the output we get when we try to vary on the datavg volume group:

#  varyonvg datavg
PV Status:      hdisk3  00f63e0e510b4815        PVREMOVED
                hdisk4  00f63e0e510b48b4        PVREMOVED
                hdisk5  00f63e0e510b4954        PVACTIVE

varyonvg: Volume group datavg is varied on.

Response:

This is happening because the two external disks were marked as
removed when the LUNs disappeared out from under the host. You had
been forcing the import and varyon of the volume group, so the
external disks were forced to a removed state.

You should try making the two drives active with the following
commands:

  chpv -v a hdisk3
  chpv -v a hdisk4

After you run those commands, vary off and vary on datavg and see
if that corrects the problem.
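
Once the volume group is back online, a quick way to confirm the fix (a sketch using the datavg names from the session below; substitute your own volume group) is to check the physical volume states:

  varyoffvg datavg
  varyonvg datavg
  lsvg -p datavg

lsvg -p should now report a PV STATE of "active" for all three disks.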

Raw Work:

*******************************************************************************
*                                                                             *
*                                                                             *
*  Welcome to AIX Version 7.1!                                                *
*                                                                             *
*                                                                             *
*  Please see the README file in /usr/lpp/bos for information pertinent to    *
*  this release of the AIX Operating System.                                  *
*                                                                             *
*                                                                             *
*******************************************************************************
# lsfs
Name            Nodename   Mount Pt               VFS    Size     Options   Auto Accounting
/dev/hd4        --         /                      jfs2   1048576  --        yes  no
/dev/hd1        --         /home                  jfs2   524288   --        yes  no
/dev/hd2        --         /usr                   jfs2   15204352 --        yes  no
/dev/hd9var     --         /var                   jfs2   11534336 --        yes  no
/dev/hd3        --         /tmp                   jfs2   4194304  --        yes  no
/dev/hd11admin  --         /admin                 jfs2   524288   --        yes  no
/proc           --         /proc                  procfs --       --        yes  no
/dev/hd10opt    --         /opt                   jfs2   1048576  --        yes  no
/dev/livedump   --         /var/adm/ras/livedump  jfs2   524288   --        yes  no
/dev/cd0        --         /mnt                   cdrfs  --       ro        no   no
/dev/fslv03     --         /data01                jfs2   --       rw        yes  no
/dev/fslv04     --         /data02                jfs2   --       rw        yes  no
# bash
ksh: bash:  not found.
# vi /etc/filesystems
# mount /data01
mount: 0506-324 Cannot mount /dev/fslv03 on /data01: There is a request to a device or address that does not exist.
# lspv
hdisk1          00f63e023551f39d                    None
hdisk0          00f63e0e32fddc85                    rootvg          active
hdisk2          00f63e0e510b49f3                    None
hdisk5          00f63e0e510b4954                    datavg
hdisk6          00f63e0e7061217b                    None
# lspv -l hdisk6
0516-320 : Physical volume 00f63e0e7061217b0000000000000000 is not assigned to
a volume group.

# lspv -l hdisk5
0516-010 : Volume group must be varied on; use varyonvg command.
# varyonvg datavg
0516-052 varyonvg: Volume group cannot be varied on without a
quorum. More physical volumes in the group must be active.
Run diagnostics on inactive PVs.

# mpio_get_config -Av
Frame id 0:
Storage Subsystem worldwide name: 600a0b80006e0fc4000000004d59db49
Controller count: 2
Partition count: 1
Partition 0:
Storage Subsystem Name = 'DS5300_PRI'
hdisk#           LUN #   Ownership          User Label
hdisk2               4   B (preferred)      6
hdisk5               3   A (preferred)      5
hdisk6               5   A (preferred)      7
# hostname
NUSW2WDB1
# redefinevg -d hdisk5 datavg
# lspv
hdisk1          00f63e023551f39d                    None
hdisk0          00f63e0e32fddc85                    rootvg          active
hdisk2          00f63e0e510b49f3                    None
hdisk5          00f63e0e510b4954                    datavg
hdisk6          00f63e0e7061217b                    None
# lspv -l hdisk5
0516-010 : Volume group must be varied on; use varyonvg command.
# varyonvg datavg
0516-052 varyonvg: Volume group cannot be varied on without a
quorum. More physical volumes in the group must be active.
Run diagnostics on inactive PVs.
# cfgmgr
# lspv
hdisk1          00f63e023551f39d                    None
hdisk0          00f63e0e32fddc85                    rootvg          active
hdisk2          00f63e0e510b49f3                    None
hdisk3          00f63e0e510b4815                    None
hdisk4          00f63e0e510b48b4                    None
hdisk5          00f63e0e510b4954                    datavg
hdisk6          00f63e0e7061217b                    None
# bootinfo -s hdisk5
256000
# varyonvg -f datavg
PV Status:      hdisk5  00f63e0e510b4954        PVACTIVE
                        00f63e0e510b48b4        NONAME
                        00f63e0e510b4815        NONAME
varyonvg: Volume group datavg is varied on.
# lspv
hdisk1          00f63e023551f39d                    None
hdisk0          00f63e0e32fddc85                    rootvg          active
hdisk2          00f63e0e510b49f3                    None
hdisk3          00f63e0e510b4815                    None
hdisk4          00f63e0e510b48b4                    None
hdisk5          00f63e0e510b4954                    datavg          active
hdisk6          00f63e0e7061217b                    None
# lspv -l hdisk5
hdisk5:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
fslv04                499     499     100..100..99..100..100 /data02
# mount /data02
Replaying log for /dev/fslv04.
mount: 0506-324 Cannot mount /dev/fslv04 on /data02: The media is not formatted or the format is not correct.
0506-342 The superblock on /dev/fslv04 is dirty.  Run a full fsck to fix.
# varryoffvg datavg
ksh: varryoffvg:  not found.
# varyoffvg datavg
#  lspv -l hdisk5
0516-010 : Volume group must be varied on; use varyonvg command.
# redefinevg -d hdisk5 datavg
# lspv
hdisk1          00f63e023551f39d                    None
hdisk0          00f63e0e32fddc85                    rootvg          active
hdisk2          00f63e0e510b49f3                    None
hdisk3          00f63e0e510b4815                    datavg
hdisk4          00f63e0e510b48b4                    datavg
hdisk5          00f63e0e510b4954                    datavg
hdisk6          00f63e0e7061217b                    None
#  mpio_get_config -Av
Frame id 0:
Storage Subsystem worldwide name: 600a0b80006e0fc4000000004d59db49
Controller count: 2
Partition count: 1
Partition 0:
Storage Subsystem Name = 'DS5300_PRI'
hdisk#           LUN #   Ownership          User Label
hdisk2               4   B (preferred)      6
hdisk3               0   A (preferred)      3
hdisk4               1   B (preferred)      4
hdisk5               3   A (preferred)      5
hdisk6               5   A (preferred)      7
# rmdev -Rdl hdisk2
hdisk2 deleted
# rmdev -Rdl hdisk6
hdisk6 deleted
# cfgmgr
Method error (/usr/lib/methods/cfgscsidisk -l hdisk3 ):
0514-082 The requested function could only be performed for some
of the specified paths.
Method error (/usr/lib/methods/cfgscsidisk -l hdisk4 ):
0514-082 The requested function could only be performed for some
of the specified paths.
# lspv
hdisk1          00f63e023551f39d                    None
hdisk0          00f63e0e32fddc85                    rootvg          active
hdisk3          00f63e0e510b4815                    datavg
hdisk4          00f63e0e510b48b4                    datavg
hdisk5          00f63e0e510b4954                    datavg
# varyonvg datavg
PV Status:      hdisk3  00f63e0e510b4815        PVREMOVED
                hdisk4  00f63e0e510b48b4        PVREMOVED
                hdisk5  00f63e0e510b4954        PVACTIVE
varyonvg: Volume group datavg is varied on.
# varyoffvg datavg
You have mail in /usr/spool/mail/root
# importvg
0516-604 importvg: Physical volume name not entered.
Usage: importvg [ [ [-V MajorNumber] [-y VGname] [-f] [-c] [-x] ] | [-L VGname] ]
                [-n] [-F] [-R] [-O] PVname
Imports the definition of a volume group.
#
ksh: ^[[A^[[B:  not found.
# importvg -y datavg hdisk3
0516-360 getvgname: The device name is already used; choose a
different name.
0516-776 importvg: Cannot import hdisk3 as datavg.
#  varyonvg datavg
PV Status:      hdisk3  00f63e0e510b4815        PVREMOVED
                hdisk4  00f63e0e510b48b4        PVREMOVED
                hdisk5  00f63e0e510b4954        PVACTIVE
varyonvg: Volume group datavg is varied on.
# lspv -l hdisk3
hdisk3:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
fslv03                499     499     100..100..99..100..100 /data01
# lspv -l hdisk4
hdisk4:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
loglv00               1       1       00..01..00..00..00    N/A
fslv03                1       1       00..01..00..00..00    /data01
fslv04                1       1       00..01..00..00..00    /data02
# lspv -l hdisk5
hdisk5:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
fslv04                499     499     100..100..99..100..100 /data02
# mount /data01
mount: 0506-324 Cannot mount /dev/fslv03 on /data01: There is an input or output error.
# chpv -v a hdisk3
# chpv -v a hdisk4
# varyoffvg datavg
# varyonvg datavg
# mount /data01
Replaying log for /dev/fslv03.
# mount /data02
# df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           0.50      0.25   51%    10624    16% /
/dev/hd2           7.25      5.14   30%    49315     4% /usr
/dev/hd9var        5.50      3.85   31%     9868     2% /var
/dev/hd3           2.00      1.99    1%      105     1% /tmp
/dev/hd1           0.25      0.25    1%        8     1% /home
/dev/hd11admin      0.25      0.25    1%        5     1% /admin
/proc                 -         -    -         -     -  /proc
/dev/hd10opt       0.50      0.20   60%     8331    15% /opt
/dev/livedump      0.25      0.25    1%        7     1% /var/adm/ras/livedump
/dev/fslv03      250.00    171.20   32%    41784     1% /data01
/dev/fslv04      250.00    249.96    1%        4     1% /data02
#

Raw vs JFS Logical Volumes I/O

Question

Does the Virtual Memory Manager (VMM) play a role in updates to raw logical volumes, and if so, how?
If raw logical volumes do not use the block I/O buffer cache, does sync update raw logical volumes, or does the VMM?

Answer

When an application directly accesses a raw logical volume, the VMM is not involved. The VMM is involved when accessing the Journaled File System (JFS).
sync updates only JFS, so neither sync nor the VMM updates raw logical volumes. All writes to raw logical volumes are synchronous: a write does not return until the data is on disk, so no sync is required.
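
One way to observe the difference, as a rough sketch (rawlv is a hypothetical scratch logical volume with no file system on it; never write to a logical volume that holds data):

  # Writes through the raw character device are synchronous:
  # the data is on disk when dd returns.
  dd if=/dev/zero of=/dev/rrawlv bs=4k count=256

  # Writes through a JFS file can sit in VMM-cached pages until
  # syncd or an explicit sync flushes them.
  dd if=/dev/zero of=/tmp/testfile bs=4k count=256
  sync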

AIX: Hot spot management in logical volumes

You can identify hot spot problems with your logical volumes and remedy those problems without interrupting the use of your system.

A hot-spot problem occurs when some of the logical partitions on your disk have so much disk I/O that your system performance noticeably suffers.

The first step toward solving the problem is to identify it. By default, the system does not collect statistics for logical volume use. After you enable the gathering of these statistics, the first time you enter the lvmstat command, the system displays the counter values since the previous system reboot. Thereafter, each time you enter the lvmstat command, the system displays the difference since the previous lvmstat command.

By interpreting the output of the lvmstat command, you can identify the logical partitions with the heaviest traffic. If you have several logical partitions with heavy usage on one physical disk and want to balance these across the available disks, you can use the migratelp command to move these logical partitions to other physical disks.

In the following example, the gathering of statistics is enabled and the lvmstat command is used repeatedly to gather a baseline of statistics:

# lvmstat -v rootvg -e
# lvmstat -v rootvg -C
# lvmstat -v rootvg

The output is similar to the following:

Logical Volume            iocnt      Kb_read     Kb_wrtn      Kbps
  hd8                         4            0          16      0.00
  paging01                    0            0           0      0.00
  lv01                        0            0           0      0.00
  hd1                         0            0           0      0.00
  hd3                         0            0           0      0.00
  hd9var                      0            0           0      0.00
  hd2                         0            0           0      0.00
  hd4                         0            0           0      0.00
  hd6                         0            0           0      0.00
  hd5                         0            0           0      0.00

The previous output shows that all counters have been reset to zero. In the following example, the /unix file is copied to the /tmp directory, and the lvmstat output reflects the resulting activity in rootvg:

# cp -p /unix /tmp
# lvmstat -v rootvg

Logical Volume            iocnt      Kb_read     Kb_wrtn      Kbps
  hd3                       296            0        6916      0.04
  hd8                        47            0         188      0.00
  hd4                        29            0         128      0.00
  hd2                        16            0          72      0.00
  paging01                    0            0           0      0.00
  lv01                        0            0           0      0.00
  hd1                         0            0           0      0.00
  hd9var                      0            0           0      0.00
  hd6                         0            0           0      0.00
  hd5                         0            0           0      0.00

The output shows activity on the hd3 logical volume, which is mounted at /tmp; on hd8, the JFS log logical volume; on hd4, which is / (root); and on hd2, which holds /usr. The following output provides details for hd3 and hd2:

# lvmstat -l hd3

Log_part    mirror#    iocnt    Kb_read    Kb_wrtn     Kbps
       1         1       299          0       6896     0.04
       3         1         4          0         52     0.00
       2         1         0          0          0     0.00
       4         1         0          0          0     0.00
# lvmstat -l hd2
Log_part    mirror#    iocnt    Kb_read    Kb_wrtn     Kbps
       2         1         9          0         52     0.00
       3         1         9          0         36     0.00
       7         1         9          0         36     0.00
       4         1         4          0         16     0.00
       9         1         1          0          4     0.00
      14         1         1          0          4     0.00
       1         1         0          0          0     0.00

The output for a volume group provides a summary for all the I/O activity of a logical volume. It is separated into the number of I/O requests (iocnt), the kilobytes read and written (Kb_read and Kb_wrtn, respectively), and the transferred data in KB/s (Kbps). If you request the information for a logical volume, you receive the same information, but for each logical partition separately. If you have mirrored logical volumes, you receive statistics for each of the mirror volumes. In the previous sample output, several lines for logical partitions without any activity were omitted. The output is always sorted in decreasing order on the iocnt column.
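
Because the output is sorted on iocnt, you can watch just the busiest partitions over time. For example (a sketch; it assumes statistics are already enabled for hd2):

# lvmstat -l hd2 -c 5 10 6

This reports the five busiest logical partitions of hd2 every 10 seconds, six times, which helps separate a sustained hot spot from a one-off burst.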

The migratelp command takes as parameters the name of the logical volume, the number of the logical partition (as displayed in the lvmstat output), and an optional number for a specific mirror copy. If the copy number is omitted, the first mirror copy is used. You must specify the target physical volume for the move; in addition, you can specify a target physical partition number. If successful, the output is similar to the following:

# migratelp hd3/1 hdisk1/109
  migratelp: Mirror copy 1 of logical partition 1 of logical volume
        hd3 migrated to physical partition 109 of hdisk1.

After the hot spot feature is enabled, either for a logical volume or a volume group, you can define your reporting and statistics, display your statistics, select logical partitions to migrate, specify the destination physical partition, and verify the information before committing your changes.
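
Statistics gathering adds a small bookkeeping overhead, so once the hot spot is resolved you can turn it off again, per logical volume or for the whole volume group:

# lvmstat -l hd3 -d
# lvmstat -v rootvg -d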

migratelp Command

Purpose

Moves an allocated logical partition from one physical partition to another physical partition on a different physical volume.

Syntax

migratelp LVname/LPartnumber[ /Copynumber ] DestPV[/PPartNumber]

Description

The migratelp command moves the specified logical partition LPartnumber of the logical volume LVname to the DestPV physical volume. If the destination physical partition PPartNumber is specified, it is used; otherwise, a destination partition is selected using the intra-region policy of the logical volume. By default, the first mirror copy of the logical partition in question is migrated. A value of 1, 2, or 3 can be specified for Copynumber to migrate a particular mirror copy.

Notes:
  1. You must consider the partition usage, reported by lvmstat, on the other active concurrent nodes in case of a concurrent volume group.
  2. Strictness and upper bound settings are not enforced when using migratelp.
  3. The migratelp command fails to migrate partitions of striped logical volumes.

Security

To use migratelp, you must have root user authority.

Examples

  1. To move the first logical partition of logical volume lv00 to hdisk1, type:
    migratelp lv00/1 hdisk1
  2. To move the second mirror copy of the third logical partition of logical volume hd2 to hdisk5, type:
    migratelp hd2/3/2 hdisk5
  3. To move the third mirror copy of the 25th logical partition of logical volume testlv to the 100th partition of hdisk7, type:
    migratelp testlv/25/3 hdisk7/100
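
To verify where a partition landed after a migration, list the logical-to-physical partition map of the logical volume, for example:

  lslv -m hd3

The map shows, for each logical partition of hd3, the physical partition number and disk that hold each mirror copy.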