How to recover a deleted file in AIX / JFS2?

It is possible to recover a deleted file using the "fsdb" command (the filesystem debugger) when:

No new files have been created on the filesystem.

No files have been extended.

The filesystem can be unmounted.

Warning: I have tested this only on a test server. This procedure is undocumented, and you may run into serious problems if you follow the steps below on your systems, so try it at your own risk and avoid running it directly on production servers. Here is the output for your reference.

If you do not have the deleted file's inode number, you can get it (while a process still holds the file open) with fuser:

# fuser -dV

inode=68     size=34358697984  fd=6
inode=76     size=16106135552  fd=7
inode=65     size=34358697984  fd=16
inode=68     size=34358697984  fd=11
inode=68     size=34358697984  fd=7
inode=68     size=34358697984  fd=6

# lsvg -l testvg

testvg:

LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT

loglv00             jfs2log    1     1     1    closed/syncd  N/A

#

# crfs -a size=256M -v jfs2 -g testvg -m /new            → create the "/new" FS

File system created successfully.

261932 kilobytes total disk space.

New File System size is 524288

#

# lsvg -l testvg

testvg:

LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT

loglv00             jfs2log    1     1     1    closed/syncd  N/A

fslv00              jfs2       16    16    1    closed/syncd  /new

#

# mount /new         → mount the /new FS

#

# lsvg -l testvg

testvg:

LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT

loglv00             jfs2log    1     1     1    open/syncd    N/A

fslv00              jfs2       16    16    1    open/syncd    /new

#

# cd /new

#

# ls -l

total 0

drwxr-xr-x   2 root     system          256 Apr 03 16:47 lost+found

#

# cat >> film         → create a file named "film"

Hi this is the test file. I want to use this file for recovery test

^C#

#

# cat film

Hi this is the test file. I want to use this file for recovery test

#

# ls -il        → check the inode number of the file "film"; it is 4

total 8

4 -rw-r--r--   1 root     system           68 Apr 03 16:49 film

3 drwxr-xr-x   2 root     system          256 Apr 03 16:47 lost+found

#

#

# rm film     → remove the file "film"

#

# ls -l

total 0

drwxr-xr-x   2 root     system          256 Apr 03 16:47 lost+found

#

# cd ~

#

# umount /new     → unmount the /new FS

#

# lsvg -l testvg

testvg:

LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT

loglv00             jfs2log    1     1     1    closed/syncd  N/A

fslv00              jfs2       16    16    1    closed/syncd  /new

#

# fsdb /dev/fslv00       → use fsdb <lv_name> to recover the deleted file

File System:                    /dev/fslv00

File System Size:               523864  (512 byte blocks)

Aggregate Block Size:           4096

Allocation Group Size:          8192    (aggregate blocks)

> dir 2

idotdot = 2

3      lost+found

>

> i 4     → provide the inode number of our deleted file, which is 4

Inode 4 at block 33, offset 0x800:

[1] di_fileset:         16                 [18] di_inostamp:       0x4d98ead4

[2] di_number:          4               [19] di_gen:            3940655789

[3] di_size:    0x0000000000000044      [20] di_ixpxd.len:      4

[4] di_nblocks: 0x0000000000000001      [21] di_ixpxd.addr1:    0x00

[5] di_nlink:           0               [22] di_ixpxd.addr2:    0x00000021

[6] di_mode:            0x000081a4           di_ixpxd.address:  33

0100644 -rw-r--r--      [24] di_uid:            0

[25] di_gid:            0

[9] di_atime.tj_nsec:   0x1e8a1025      [26] di_atime.tj_sec:0x000000004d98eb7d

[10] di_ctime.tj_nsec:  0x0ca85614      [27] di_ctime.tj_sec:0x000000004d98ebac

[11] di_mtime.tj_nsec:  0x1af63892      [28] di_mtime.tj_sec:0x000000004d98eb77

[12] di_otime.tj_nsec:  0x03b74a9a      [29] di_otime.tj_sec:0x000000004d98eb24

[13] di_ea.flag:        0x00            [30] di_ea.len:         0

EAv1                               [31] di_ea.addr1:       0x00

[15] di_ea.nEntry:      0x00            [32] di_ea.addr2:       0x00000000

[16] di_ea.type:        0x0000               di_ea.address:     0

[34] di_ea.nblocks:     0

change_inode: [m]odify, [e]a, [t]ree, or e[x]it > m     → choose "m" to modify

Please enter: field-number value > 5  1   → enter field number 5 and set the di_nlink value to 1

Inode 4 at block 33, offset 0x800:

[1] di_fileset:         16              [18] di_inostamp:       0x4d98ead4

[2] di_number:          4               [19] di_gen:            3940655789

[3] di_size:    0x0000000000000044      [20] di_ixpxd.len:      4

[4] di_nblocks: 0x0000000000000001      [21] di_ixpxd.addr1:    0x00

[5] di_nlink:           1               [22] di_ixpxd.addr2:    0x00000021

[6] di_mode:            0x000081a4           di_ixpxd.address:  33

0100644 -rw-r--r--      [24] di_uid:            0

[25] di_gid:            0

[9] di_atime.tj_nsec:   0x1e8a1025      [26] di_atime.tj_sec:0x000000004d98eb7d

[10] di_ctime.tj_nsec:  0x0ca85614      [27] di_ctime.tj_sec:0x000000004d98ebac

[11] di_mtime.tj_nsec:  0x1af63892      [28] di_mtime.tj_sec:0x000000004d98eb77

[12] di_otime.tj_nsec:  0x03b74a9a      [29] di_otime.tj_sec:0x000000004d98eb24

[13] di_ea.flag:        0x00            [30] di_ea.len:         0

EAv1                               [31] di_ea.addr1:       0x00

[15] di_ea.nEntry:      0x00            [32] di_ea.addr2:       0x00000000

[16] di_ea.type:        0x0000               di_ea.address:     0

[34] di_ea.nblocks:     0

change_inode: [m]odify, [e]a, [t]ree, or e[x]it > x    → exit

> quit

#

# fsck -yp /dev/fslv00     → run fsck to repair the inconsistencies

The current volume is: /dev/fslv00

Primary superblock is valid.

J2_LOGREDO:log redo processing for /dev/fslv00

logredo start at: 1301867616 sec and end at 1301867616 sec

Primary superblock is valid.

*** Phase 1 – Initial inode scan

*** Phase 2 – Process remaining directories

*** Phase 3 – Process remaining files

*** Phase 4 – Check and repair inode allocation map

File system inode map is corrupt (FIXED)

Superblock marked dirty because repairs are about to be written.

*** Phase 5 – Check and repair block allocation map

Block allocation map is corrupt (FIXED)

Inodes not connected to the root directory

tree have been detected.  Will reconnect.

File system is clean.

Superblock is marked dirty (FIXED)

All observed inconsistencies have been repaired.

#

# mount /new   → mount the /new FS

# lsvg -l testvg

testvg:

LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT

loglv00             jfs2log    1     1     1    open/syncd    N/A

fslv00              jfs2       16    16    1    open/syncd    /new

#

# cd /new  → go to the /new FS

#

# ls -l

total 0

drwxr-xr-x   2 root     system          256 Apr 03 16:47 lost+found

#

# cd lost+found   → go to the lost+found directory

#

# pwd

/new/lost+found

#

# ls -l

total 8

-rw-r--r--   1 root     system           68 Apr 03 16:49 4     → the deleted file reappears, named after its inode number

#

# cat 4   → confirm the file content

Hi this is the test file. I want to use this file for recovery test

#

# mv 4 /new/.      → move the file back to where it was before

#

# pwd

/new/lost+found

# cd ..

#

# pwd

/new

# ls -l

total 8

-rw-r--r--   1 root     system           68 Apr 03 16:49 4

drwxr-xr-x   2 root     system          256 Apr 03 16:55 lost+found

#

# cat 4

Hi this is the test file. I want to use this file for recovery test

#

# mv 4 film  → rename the recovered file back to its old name

#

# ls -l

total 8

-rw-r--r--   1 root     system           68 Apr 03 16:49 film   → the deleted file has been recovered

drwxr-xr-x   2 root     system          256 Apr 03 16:55 lost+found

#

#

 

AWS RDS IAM Policy for Read Only Access and DB Logs Download

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "rds:Describe*",
                "rds:ListTagsForResource",
                "rds:Download*",
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeVpcs"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Action": [
                "cloudwatch:GetMetricStatistics",
                "logs:DescribeLogStreams",
                "logs:GetLogEvents"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
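
To put this policy in place from the command line rather than the console, you can create it with the AWS CLI; this is only a sketch, and the policy name rds-readonly-logs, the file name rds-readonly.json, the user name, and the account ID in the ARN are placeholders chosen for this example.

# create the managed policy from the JSON above (the policy name is arbitrary)
aws iam create-policy --policy-name rds-readonly-logs --policy-document file://rds-readonly.json
# attach it to a user; the account ID in the ARN is a placeholder
aws iam attach-user-policy --user-name some-dba-user --policy-arn arn:aws:iam::123456789012:policy/rds-readonly-logs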

Creating graphs from SAR output

As you may know, sar is a very effective tool for collecting system activity and performance information from your system.
To collect all information:

sar -o test.log -A 1 3 2>&1 >/dev/null

This command writes the output to a binary file, which you can display with the sadf command:

sadf -t -d test.log -- -A

Sometimes you want to turn this data into graphs, because diagrams are clearer and easier to understand than plain numbers.
The best way to create graphs from sar output is kSar, a Java-based application with a GUI.
You can download it from the following site: http://ksar.atomique.net/

This program can process sar text output and turn it into diagrams; it can also save the graphs as images or export them to a PDF file.
If you only have the binary output of sar, you can convert it into a text file with this command:

sar -A -f test.log >> sardata.txt

Now you can import the text file into kSar via the "Data/Load from text file…" menu entry.
kSar also has other useful functions, such as running sar remotely via SSH; see the kSar documentation for details.
Loading a text file: [screenshots: kSar main window and example graphs]
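
If you would rather graph the data in a spreadsheet instead of kSar, sadf can also emit delimiter-separated output. A minimal sketch, where test.log is the binary file from above and the -u CPU selection and cpu.csv file name are just examples:

# export CPU statistics from the binary sar file as semicolon-separated values
sadf -d test.log -- -u > cpu.csv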

Setup AWS Cloudwatch Memory and Drive Monitoring on RHEL

Download Scripts

Install Prerequisite Packages

sudo yum install wget unzip perl-core perl-DateTime perl-Sys-Syslog perl-CPAN perl-libwww-perl perl-Crypt-SMIME perl-Crypt-SSLeay

Install LWP Perl Bundles

  1. Launch cpan
    sudo perl -MCPAN -e shell
    
  2. Install Bundle
    install Bundle::LWP6 LWP YAML
    

Install Script

wget http://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.1.zip
unzip CloudWatchMonitoringScripts-1.2.1.zip -d /opt
rm -f CloudWatchMonitoringScripts-1.2.1.zip

Setup Credentials

API Access Key (Option 1)

This is good for testing, but it’s better to use IAM roles covered in Option 2.

  1. Copy awscreds template
    cp /opt/aws-scripts-mon/awscreds.template /opt/aws-scripts-mon/awscreds.conf
    
  2. Add access key id and secret access key
    vim /opt/aws-scripts-mon/awscreds.conf
    
  3. Lock down file access
    chmod 0400 /opt/aws-scripts-mon/awscreds.conf
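
For reference, the finished awscreds.conf usually looks like the sketch below; the key names follow the shipped awscreds.template, and the values shown are placeholders, not real credentials.

    AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE
    AWSSecretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY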
    

IAM Role (Option 2)

  1. Login to AWS web console
  2. Select Identity & Access Management
  3. Select Roles | Create New Role
  4. Enter Role Name
    1. i.e. ec2-cloudwatch
  5. Select Next Step
  6. Select Amazon EC2
  7. Search for cloudwatch
  8. Select CloudwatchFullAccess
  9. Select Next Step | Create Role
  10. Launch a new instance and assign the ec2-cloudwatch IAM role

You cannot add an IAM Role to an existing EC2 instance; you can only specify a role when you launch a new instance. (If you prefer the CLI to the console, a sketch follows below the link.)

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html?console_help=true
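
As an alternative to the console steps above, roughly the same role can be set up with the AWS CLI. This is only a sketch; it assumes the CLI is installed and configured, and ec2-trust.json is a hypothetical trust-policy file that lets ec2.amazonaws.com assume the role.

  # create the role with an EC2 trust policy (ec2-trust.json is your own file)
  aws iam create-role --role-name ec2-cloudwatch --assume-role-policy-document file://ec2-trust.json
  # attach the managed CloudWatchFullAccess policy
  aws iam attach-role-policy --role-name ec2-cloudwatch --policy-arn arn:aws:iam::aws:policy/CloudWatchFullAccess
  # wrap the role in an instance profile so it can be assigned at launch
  aws iam create-instance-profile --instance-profile-name ec2-cloudwatch
  aws iam add-role-to-instance-profile --instance-profile-name ec2-cloudwatch --role-name ec2-cloudwatch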

Test

This won’t send data to Cloudwatch.

/opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --verify --verbose

Example

MemoryUtilization: 31.7258903184253 (Percent)
Using AWS credentials file <./awscreds.conf>
Endpoint: https://monitoring.us-west-2.amazonaws.com
Payload: {"MetricData":[{"Timestamp":1443537153,"Dimensions":[{"Value":"i-12e1fac4","Name":"InstanceId"}],"Value":31.7258903184253,"Unit":"Percent","MetricName":"MemoryUtilization"}],"Namespace":"System/Linux","__type":"com.amazonaws.cloudwatch.v2010_08_01#PutMetricDataInput"}

Verification completed successfully. No actual metrics sent to CloudWatch.

Report to Cloudwatch Test

Test that communication to Cloudwatch works and design the command you’ll want to cron out in the next step.

/opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail

After you run this command, one point-in-time metric should show up for the instance under CloudWatch | Linux System.

Create Cron Task (as root)

Now that you've tested the command and figured out what you want to report, it's time to add a cron task so it runs every X minutes. Usually 5 minutes is good.

  1. Edit cron table
    crontab -e
    
    */5 * * * * /opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --disk-space-util --disk-path=/ --from-cron
    

Create Cron Task (as other user)

You may want to create a dedicated user that runs the cron job. Here's an example using a user named cloudwatch.

  1. Create user
    useradd cloudwatch
    
  2. Disable user login
    usermod -s /sbin/nologin cloudwatch
    
  3. Set ownership
    chown -R cloudwatch.cloudwatch /opt/aws-scripts-mon
    
  4. Edit cron table
    crontab -e -u cloudwatch
    
  5. Add cron job
    */5 * * * * /opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --swap-used --disk-space-util --disk-path=/ --from-cron
    

Verify Cron Job Ran

One way to verify the cron job ran is to look in the cron log.

less /var/log/cron
tail -f /var/log/cron

References

Monitor Script Arguments

Name Description
--mem-util Collects and sends the MemoryUtilization metrics in percentages. This option reports only memory allocated by applications and the operating system, and excludes memory in cache and buffers.
--mem-used Collects and sends the MemoryUsed metrics, reported in megabytes. This option reports only memory allocated by applications and the operating system, and excludes memory in cache and buffers.
--mem-avail Collects and sends the MemoryAvailable metrics, reported in megabytes. This option reports memory available for use by applications and the operating system.
--swap-util Collects and sends SwapUtilization metrics, reported in percentages.
--swap-used Collects and sends SwapUsed metrics, reported in megabytes.
--disk-path=PATH Selects the disk on which to report. PATH can specify a mount point or any file located on a mount point for the filesystem that needs to be reported. For selecting multiple disks, specify a --disk-path=PATH for each one of them. To select a disk for the filesystems mounted on / and /home, use the following parameters:
--disk-path=/ --disk-path=/home
--disk-space-util Collects and sends the DiskSpaceUtilization metric for the selected disks. The metric is reported in percentages.
--disk-space-used Collects and sends the DiskSpaceUsed metric for the selected disks. The metric is reported by default in gigabytes. Due to reserved disk space in Linux operating systems, disk space used and disk space available might not accurately add up to the amount of total disk space.
--disk-space-avail Collects and sends the DiskSpaceAvailable metric for the selected disks. The metric is reported in gigabytes. Due to reserved disk space in Linux operating systems, disk space used and disk space available might not accurately add up to the amount of total disk space.
--memory-units=UNITS Specifies units in which to report memory usage. If not specified, memory is reported in megabytes. UNITS may be one of the following: bytes, kilobytes, megabytes, gigabytes.
--disk-space-units=UNITS Specifies units in which to report disk space usage. If not specified, disk space is reported in gigabytes. UNITS may be one of the following: bytes, kilobytes, megabytes, gigabytes.
--aws-credential-file=PATH Provides the location of the file containing AWS credentials. This parameter cannot be used with the --aws-access-key-id and --aws-secret-key parameters.
--aws-access-key-id=VALUE Specifies the AWS access key ID to use to identify the caller. Must be used together with the --aws-secret-key option. Do not use this option with the --aws-credential-file parameter.
--aws-secret-key=VALUE Specifies the AWS secret access key to use to sign the request to CloudWatch. Must be used together with the --aws-access-key-id option. Do not use this option with the --aws-credential-file parameter.
--verify Performs a test run of the script that collects the metrics and prepares a complete HTTP request, but does not actually call CloudWatch to report the data. This option also checks that credentials are provided. When run in verbose mode, this option outputs the metrics that will be sent to CloudWatch.
--from-cron Use this option when calling the script from cron. When this option is used, all diagnostic output is suppressed, but error messages are sent to the local system log of the user account.
--verbose Displays detailed information about what the script is doing.
--help Displays usage information.
--version Displays the version number of the script.

How to find out which process is listening on a certain port on Solaris?

Scenario: I'm looking for the PID that is using port 2817 here on Solaris 11.

-bash-3.2# netstat -an | grep 2817
*.2817 *.* 0 0 49152 0 LISTEN
10.0.50.81.2817 10.0.50.81.37374 49152 0 49152 0 CLOSE_WAIT
10.0.50.81.2817 10.0.50.81.35510 49152 0 49152 0 CLOSE_WAIT
10.0.50.81.2817 10.0.50.81.34478 49152 0 49152 0 CLOSE_WAIT

Here is a script I found somewhere that actually works. It finds the PID from a port number when the lsof utility is not available on the server.

Save this script as Port_check.sh and make it executable (for example, chmod 755 Port_check.sh).

————————————————————————————-

#!/bin/ksh

# Separator line and list of all PIDs currently on the system
line='---------------------------------------------'
pids=$(/usr/bin/ps -ef | sed 1d | awk '{print $2}')

# Port can be passed as an argument or entered interactively
if [ $# -eq 0 ]; then
read ans?"Enter port you would like to know pid for: "
else
ans=$1
fi

# Check every PID's open files for the requested port
for f in $pids
do
/usr/proc/bin/pfiles $f 2>/dev/null | /usr/xpg4/bin/grep -q "port: $ans"
if [ $? -eq 0 ]; then
echo $line
echo "Port: $ans is being used by PID:\c"
/usr/bin/ps -ef -o pid -o args | egrep -v "grep|pfiles" | grep $f
fi
done
exit 0

————————————————————————————-
Now the syntax of using this script is
./Port_check.sh <port number>

For example, to find which process is using TCP port 2817,

-bash-3.2# ./Port_check.sh 2817
———————————————
Port: 2817 is being used by PID:26222 /data01/IBM/WebSphere/AppServer/java_1.7_64/bin/sparcv9/java -XX:+UnlockDiagnos

So, PID 26222 is the process using port 2817.
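
If lsof happens to be installed (the script above is mainly for hosts without it), the same answer is a one-liner. A minimal sketch, assuming lsof is in the PATH:

lsof -nP -i TCP:2817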

Thanks…

AIX: rootvg/disk mirroring

bash-4.2# bootinfo -s hdisk0
140013
bash-4.2# bootinfo -s hdisk1
140013
bash-4.2# bootinfo -s hdisk2
140013

bash-4.2# lspv
hdisk0 002b012f397c20ce None
hdisk1 002afe4f2b4c3fdb rootvg active
hdisk2 002b016f09313544 ppmvg active
bash-4.2#

bash-4.2# lsvg -p rootvg
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk1 active 546 187 10..00..00..85..92

bash-4.2# extendvg rootvg hdisk0
0516-1398 extendvg: The physical volume hdisk0, appears to belong to
another volume group. Use the force option to add this physical volume
to a volume group.
0516-792 extendvg: Unable to extend volume group.

bash-4.2# extendvg -f rootvg hdisk0

bash-4.2# lsvg -p rootvg
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk1 active 546 187 10..00..00..85..92
hdisk0 active 546 546 110..109..109..109..109

bash-4.2# mirrorvg rootvg hdisk0
0516-1804 chvg: The quorum change takes effect immediately.
0516-1296 lresynclv: Unable to completely resynchronize volume.
The logical volume has bad-block relocation policy turned off.
This may have caused the command to fail.
0516-934 /usr/sbin/syncvg: Unable to synchronize logical volume hd5.
0516-934 /usr/sbin/syncvg: Unable to synchronize logical volume hd2.
0516-934 /usr/sbin/syncvg: Unable to synchronize logical volume dppmiaslv.
0516-934 /usr/sbin/syncvg: Unable to synchronize logical volume data01lv.
0516-932 /usr/sbin/syncvg: Unable to synchronize volume group rootvg.
0516-1126 mirrorvg: rootvg successfully mirrored, user should perform
bosboot of system to initialize boot records. Then, user must modify
bootlist to include: hdisk1 hdisk0.

bash-4.2# bosboot -ad /dev/hdisk0

bosboot: Boot image is 55324 512 byte blocks.

bash-4.2# bootlist -m normal -o
hdisk1 pathid=0

bash-4.2# bootlist -m normal hdisk1 hdisk0

bash-4.2# bootlist -m normal -o
hdisk1 blv=hd5 pathid=0
hdisk0 blv=hd5 pathid=0

bash-4.2# lspv
hdisk0 002b012f397c20ce rootvg active
hdisk1 002afe4f2b4c3fdb rootvg active
hdisk2 002b016f09313544 ppmvg active

To verify that rootvg is mirrored, each LV should show a 1:2 ratio between LPs and PPs (except for the dump devices), like this:

bash-4.2# lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd5 boot 1 2 2 closed/stale N/A
hd6 paging 2 4 2 open/syncd N/A
hd8 jfs2log 1 2 2 open/syncd N/A
hd4 jfs2 2 4 2 open/syncd /
hd2 jfs2 10 20 2 open/stale /usr
hd9var jfs2 1 2 2 open/syncd /var
hd3 jfs2 13 26 2 open/syncd /tmp
hd1 jfs2 1 2 2 open/syncd /home
hd10opt jfs2 1 2 2 open/syncd /opt
hd11admin jfs2 1 2 2 open/syncd /admin
fwdump jfs2 2 4 2 open/syncd /var/adm/ras/platform
lg_dumplv sysdump 4 4 1 open/syncd N/A
dppmiaslv jfs2 120 240 2 open/stale /dppmias
data01lv jfs2 200 400 2 open/stale /data01
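
Several logical volumes above are still marked open/stale because mirrorvg could not fully resynchronize them (see the syncvg errors earlier). A follow-up resync is usually needed before the mirror is fully in sync; a minimal sketch (it can take a while depending on LV sizes):

bash-4.2# syncvg -v rootvg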

If we want to see exactly where an LV is mirrored:

bash-4.2# lslv -m hd2
hd2:/usr
LP PP1 PV1 PP2 PV2 PP3 PV3
0001 0222 hdisk1 0223 hdisk0
0002 0228 hdisk1 0224 hdisk0
0003 0229 hdisk1 0225 hdisk0
0004 0230 hdisk1 0226 hdisk0
0005 0231 hdisk1 0227 hdisk0
0006 0232 hdisk1 0228 hdisk0
0007 0523 hdisk1 0229 hdisk0
0008 0524 hdisk1 0230 hdisk0
0009 0525 hdisk1 0231 hdisk0
0010 0527 hdisk1 0232 hdisk0

Now, check your bootlist and make sure it includes both physical disks.

bash-4.2# bootlist -m normal -o
hdisk1 blv=hd5 pathid=0
hdisk0 blv=hd5 pathid=0

English Vocabulary Latin Prefixes

Latin Prefixes :

SNo. Prefix Meaning Examples
1 A- not, without amoral, APATHY, ANOMALY
2 AB-, ABS- away from, off, apart abrupt, ABSCOND, ABSTRACT
3 AD-, AC-, AN-, AS- toward, against advent, accurate, ANNOTATE, assent
4 AMBI-, AMB- around, about, on both sides AMBIGUOUS, AMBIVALENT
5 ANTE- before, in front of, early antecede, antedate, antebellum
6 ARCH main, chief archangel, archbishop, archenemy
7 BI- two bifurcate, biannually
8 BENE- well BENEFACTOR, benefit, beneficial
9 CIRCUM-, CIRCA- around, about circumflex, circumference, circa
10 CIS- on this side of cislunar, cisalpine
11 CON- with, together concur, concede, CONSCRIPT
12 COM-, COR-, COL- together, with, very COMPRISE, corrode, collateral
13 CONTRA- against contradict, controversy, contravene
14 COUNTER- against counterfeit, counterclockwise
15 DE- down, down from, off, utterly deformed, defoliate, descend, depress
16 DEMI- half, partly belongs to demisemiquaver, demigod
17 DIS-, DI-, DIF- apart, in different directions DIGRESS, divorce, dispute, DISCERN
18 DU-, DUO- two duet, duplicate
19 EM- EN- in, into embrace, enclose
20 EX-, E-, EF-, EC- out, out of, from, away EXTOL, event, expel, evade, ELUCIDATE
21 EXTRA-,EXTRO- outside of, beyond extraordinary, extrovert, EXTRAPOLATE
22 FORE before forestall, forgo, forebear
23 IN-, I-, IL-, IM-, IR- in, into, on, toward, put into, incision, impel, impulse, irrigate,
24 not, lacking, without illegal, ignominious, impure, immoral,
25 (same as above) immodest, indecent, INCOHERENT
26 INDU-, INDI- a strengthened form of IN- indigent
27 INFRA- below, beneath, inferior to, after infrared, infrasonic
28 INTER-, INTEL- among, between, at intervals intercede, intercept, intellect
29 INTRA- in, within, inside of intramural, intravenous
30 INTRO- in, into, within introduce, introspective
31 JUXTA near, beside juxtapose, juxtaposition
32 MAL-, MALE- evil, badly malformed, malicious, malaise, maladroit
33 MEDI-, MEDIO- middle median, mediocre
34 MILLI-, MILLE- thousand millennium, millimeter
35 MONO- one MONARCH, MONOTONE
36 MULTI-, MULTUS- much, many multifaceted, multiply, multilevel
37 NE- not neuter, NEUTRAL
38 NON- not (less emphatic than IN or UN) nonresident, nonconformity
39 NUL-, NULL- none, not any nullify, nullification
40 OB-, OF-, OC-, toward, against, across, down, for oblong, OBDURATE, offer, occasion, occur
41 OP-, O- toward, against, across, down, for oppose, opposite, omit, offer
42 OMNI- all, everywhere omniscient, omnivorous
43 PED-, PEDI- foot pedestrian, pedicure
44 PER-, PEL- through, by, thoroughly, away PERMEATE, perfidy, pellucid
45 POST- behind, after (in time or place) postpone, postnatal, postorbital
46 PRE- before, early, toward precedent, precept, preposition
47 PRO-, PUR- before, for, forth proceed, purport, pursue, PROLONG
48 QUADRI-, QUADR- four times, four fold quadriceps, quadrisect, quadrangle
49 RE-, RED- back, again, against, behind repel, RELEGATE, redeem, redemption
50 RETRO- backwards, behind retrogressive, retrofit, retrograde
51 SE-, SED- aside, apart, away from secure, seduce, seclude, sedition, select
52 SEMI- half semicircle, semiprivate
53 SINE without sinecure
54 SUB-, SUC-, SUF- under, beneath, inferior, suffer, SUBMISSIVE, succumb,
55 SUG-, SUM-, SUP- less than, in place of, secretly suggest, subtract, suffuse, support
56 SUR-, SUS- (same as above meanings) suspend, surplus
57 SUBTER- beneath, secretly subterfuge
58 SUPER-, SUPRA- over, above, excessively SUPERFICIAL, SUPERCILIOUS
59 SUR- over, above, excessively surcharge, surtax, surplus, surrealism
60 TRANS-, TRA- across, over, beyond, through transoceanic, transgression, transit, transition
61 TRI- three triangle, triceps
62 ULTRA- beyond, on other side ultrasound, ultraconservative
63 UN- (Old English) no, not, without unabashed, unashamed

Prepare a RHEL-Based Virtual Machine for Azure

Today we got a project to prepare RHEL VHDs for Azure. I did not find any documentation covering RHEL on Azure, so I am writing up the steps I followed to prepare a RHEL image for Azure.

Prerequisites

CentOS Installation Notes

  • The newer VHDX format is not supported in Azure. You can convert the disk to VHD format using Hyper-V Manager or the convert-vhd cmdlet.
  • When installing the Linux system it is recommended that you use standard partitions rather than LVM (often the default for many installations). This will avoid LVM name conflicts with cloned VMs, particularly if an OS disk ever needs to be attached to another VM for troubleshooting. LVM or RAID may be used on data disks if preferred.
  • NUMA is not supported for larger VM sizes due to a bug in Linux kernel versions below 2.6.37. This issue primarily impacts distributions using the upstream Red Hat 2.6.32 kernel. Manual installation of the Azure Linux agent (waagent) will automatically disable NUMA in the GRUB configuration for the Linux kernel. More information about this can be found in the steps below.
  • Do not configure a swap partition on the OS disk. The Linux agent can be configured to create a swap file on the temporary resource disk. More information about this can be found in the steps below.
  • All of the VHDs must have sizes that are multiples of 1 MB.

RHEL 6.5

  1. In Hyper-V Manager, select the virtual machine.
  2. Click Connect to open a console window for the virtual machine.
  3. Uninstall NetworkManager by running the following command:
    # sudo rpm -e --nodeps NetworkManager

    Note: If the package is not already installed, this command will fail with an error message. This is expected.

  4. Create a file named network in the /etc/sysconfig/ directory that contains the following text:
    NETWORKING=yes
    HOSTNAME=localhost.localdomain
  5. Create a file named ifcfg-eth0 in the /etc/sysconfig/network-scripts/ directory that contains the following text:
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=dhcp
    TYPE=Ethernet
    USERCTL=no
    PEERDNS=yes
    IPV6INIT=no
  6. Move (or remove) udev rules to avoid generating static rules for the Ethernet interface. These rules cause problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
    # sudo mkdir -m 0700 /var/lib/waagent
    # sudo mv /lib/udev/rules.d/75-persistent-net-generator.rules /var/lib/waagent/
    # sudo mv /etc/udev/rules.d/70-persistent-net.rules /var/lib/waagent/
  7. Ensure the network service will start at boot time by running the following command:
    # sudo chkconfig network on
  8. Install the python-pyasn1 package by running the following command:
    # sudo yum install python-pyasn1
  9. If you would like to use the OpenLogic mirrors that are hosted within the Azure datacenters, then replace the /etc/yum.repos.d/CentOS-Base.repo file with the following repositories. This will also add the [openlogic] repository that includes packages for the Azure Linux agent:
    [openlogic]
    name=CentOS-$releasever - openlogic packages for $basearch
    baseurl=http://olcentgbl.trafficmanager.net/openlogic/6/openlogic/$basearch/
    enabled=1
    gpgcheck=0
    
    [base]
    name=CentOS-$releasever - Base
    baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/os/$basearch/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
    

    Note: The rest of this guide will assume you are using at least the [openlogic] repo, which will be used to install the Azure Linux agent below.

  10. Add the following line to /etc/yum.conf:
    http_caching=packages
  11. Run the following command to clear the current yum metadata:
    # yum clean all
  12. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this open “/boot/grub/menu.lst” in a text editor and ensure that the default kernel includes the following parameters:
    console=ttyS0 earlyprintk=ttyS0 rootdelay=300 numa=off

     

    This will also ensure all console messages are sent to the first serial port, which can assist Azure support with debugging issues. This will disable NUMA due to a bug in the kernel version used by RHEL 6.azure_kernel

    In addition to the above, it is recommended to remove the following parameters:

    rhgb quiet crashkernel=auto

    Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the serial port.

    The crashkernel option may be left configured if desired, but note that this parameter will reduce the amount of available memory in the VM by 128MB or more, which may be problematic on the smaller VM sizes.

  13. Ensure that the SSH server is installed and configured to start at boot time. This is usually the default.
  14. Disable swap: comment out the swap entry in /etc/fstab (one way to do this is sketched after this list)
        # blkid | grep swap
      /dev/sda3: UUID="53-e0e3efe22612" TYPE="swap"
      # swapoff /dev/sda3
  15. Install the Azure Linux Agent by running the following command:
    # sudo yum install WALinuxAgent

    Note that installing the WALinuxAgent package will remove the NetworkManager and NetworkManager-gnome packages if they were not already removed as described in step 3.

  16. Do not create swap space on the OS disk. The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. Note that the local resource disk is a temporary disk, and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify the following parameters in /etc/waagent.conf appropriately:
    ResourceDisk.Format=y
    ResourceDisk.Filesystem=ext4
    ResourceDisk.MountPoint=/mnt/resource
    ResourceDisk.EnableSwap=y
    ResourceDisk.SwapSizeMB=8192    ## NOTE: set this to whatever you need it to be.
  17. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
    # sudo waagent -force -deprovision
    # export HISTSIZE=0
    # logout
  18. Click Action -> Shut Down in Hyper-V Manager. Your Linux VHD is now ready to be uploaded to Azure.
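
For step 14 above, here is a minimal sketch of one way to comment out the swap entry in /etc/fstab. It assumes GNU sed as shipped with RHEL; back up the file first and double-check the result, since any line containing "swap" is matched.

    cp /etc/fstab /etc/fstab.orig               # keep a backup before editing
    sed -i '/swap/ s/^[^#]/#&/' /etc/fstab      # prefix uncommented swap lines with '#'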

Reclaim Space in a VM on Thin or Thick VMDKs

Fedora/CentOs/RedHat
[root@rac1 ~]$ yum install zerofree
updates/metalink | 12 kB 00:00
updates | 4.5 kB 00:00
updates/primary_db | 4.3 MB 00:21
Setting up Install Process
Resolving Dependencies
Running transaction check
Package zerofree.i686 0:1.0.1-8.fc15 will be installed
Finished Dependency Resolution

Dependencies Resolved

================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
zerofree i686 1.0.1-8.fc15 fedora 20 k

Transaction Summary
================================================================================
Install 1 Package

Total download size: 20 k
Installed size: 20 k
Is this ok [y/N]: y
Downloading Packages:
zerofree-1.0.1-8.fc15.i686.rpm | 20 kB 00:00
Running Transaction Check
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : zerofree-1.0.1-8.fc15.i686 1/1

Installed:
zerofree.i686 0:1.0.1-8.fc15

Complete!
For Debian/Ubuntu:
[root@rac1 ~]$ apt-get install zerofree
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
zerofree
0 upgraded, 1 newly installed, 0 to remove and 17 not upgraded.
Need to get 7,272 B of archives.
After this operation, 61.4 kB of additional disk space will be used.
Get:1 http://ubuntu.cs.utah.edu/ubuntu/ oneiric/universe zerofree amd64 1.0.1-2ubuntu1 [7,272 B]
Fetched 7,272 B in 0s (41.5 kB/s)
Selecting previously deselected package zerofree.
(Reading database ... 22748 files and directories currently installed.)
Unpacking zerofree (from .../zerofree_1.0.1-2ubuntu1_amd64.deb) ...
Processing triggers for man-db ...
Setting up zerofree (1.0.1-2ubuntu1) ...

Then you need to mount the partition read-only and run zerofree on it. If you need to perform this on your OS/root partition, power off your VM and attach the OS disk to another Linux VM. Here is how it looks:

[root@rac1 ~]$ mount -o remount,ro /dev/mapper/test-lvol0
[root@rac1 ~]$ zerofree -v /dev/mapper/test-lvol0
1106/485301/512000
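
Once zerofree finishes, remount the filesystem read-write (or detach the disk and reattach it to its original VM); a minimal sketch using the same example device:

[root@rac1 ~]$ mount -o remount,rw /dev/mapper/test-lvol0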