Recreating a Missing Virtual Disk (VMDK) Descriptor File in VMware

Problem

You notice that a virtual machine's VMDK descriptor files have gone missing. As a result, the VM will not start, and when you try to add an existing hard disk, the existing .vmdk descriptor is not available to attach to the virtual machine; only the -flat.vmdk data files remain on the datastore.

Solution

We need to recreate the missing VMDK descriptor files so that the disks can be re-attached and the virtual machine started.

  1. Log in to the ESXi host over SSH (for example, with PuTTY). Make sure the SSH service is enabled and running on the host.
  2. Switch to the directory where your virtual machine is located.

cd /vmfs/volumes/<datastore>/<VM-directory>

cd /vmfs/volumes/5d406ca3-62654bd0-75fd-e4434b75ed38/APP-U1-63-25

3. Run the command below to identify and record the exact size, in bytes, of each existing -flat.vmdk file for the VM.

ls -ltr *

-rw-------    1 root     root     16106127360 Jun 21 12:20 APP-U1-63-25-flat.vmdk

-rw-------    1 root     root     214748364800 Jun 21 10:49 APP-U1-63-25_1-flat.vmdk

4. Run vmkfstools to create a new virtual disk of exactly that size. This generates a fresh descriptor file (and a temporary -flat file) for each disk.

vmkfstools -c 16106127360 -d thin -a lsilogic APP-U1-63-25-OS.vmdk

vmkfstools -c 214748364800 -d thin -a lsilogic APP-U1-63-25-DATA.vmdk

5. Two new disks are created as a result: APP-U1-63-25-OS.vmdk and APP-U1-63-25-DATA.vmdk, each with its own temporary -flat file.

ls -ltr

-rw-------    1 root     root     16106127360 Jun 21 19:21 APP-U1-63-25-OS-flat.vmdk

-rw-------    1 root     root     214748364800 Jun 21 19:21 APP-U1-63-25-DATA-flat.vmdk

-rw-------    1 root     root           546 Jun 21 21:05 APP-U1-63-25-OS.vmdk

-rw-------    1 root     root           550 Jun 21 21:05 APP-U1-63-25-DATA.vmdk

-rw-------    1 root     root     214748364800 Jun 22 05:08 APP-U1-63-25_1-flat.vmdk

-rw-------    1 root     root     16106127360 Jun 22 05:08 APP-U1-63-25-flat.vmdk

6. Rename APP-U1-63-25-OS.vmdk and APP-U1-63-25-DATA.vmdk so that each descriptor's name matches its orphaned -flat file.

mv APP-U1-63-25-OS.vmdk APP-U1-63-25.vmdk

mv APP-U1-63-25-DATA.vmdk APP-U1-63-25_1.vmdk

7. The final stage is editing the descriptor files (APP-U1-63-25.vmdk and APP-U1-63-25_1.vmdk).

In each file, find the line starting with RW and change the -flat file name on that line to match the orphaned -flat file you have. Here, that means APP-U1-63-25-flat.vmdk and APP-U1-63-25_1-flat.vmdk respectively.

vi APP-U1-63-25.vmdk

vi APP-U1-63-25_1.vmdk
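For reference, a freshly generated descriptor file looks roughly like the following; the CID, geometry, and hardware-version values are illustrative and vary by ESXi version. The RW figure is the disk size in 512-byte sectors (16106127360 / 512 = 31457280), and the quoted file name on that line is the only part that needs to change to point at the orphaned -flat file:

```
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 31457280 VMFS "APP-U1-63-25-OS-flat.vmdk"

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "1958"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
```

For the first disk, change "APP-U1-63-25-OS-flat.vmdk" on the RW line to "APP-U1-63-25-flat.vmdk".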

8. You can now delete the temporary -flat files created in step 4. They are no longer required.

-rw-------    1 root     root     16106127360 Jun 21 19:21 APP-U1-63-25-OS-flat.vmdk

-rw-------    1 root     root     214748364800 Jun 21 19:21 APP-U1-63-25-DATA-flat.vmdk
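Using the names from this example, the cleanup is just:

```shell
# Remove the placeholder -flat files produced by vmkfstools in step 4.
# The renamed descriptors now point at the original orphaned -flat files,
# so these two are safe to delete.
rm APP-U1-63-25-OS-flat.vmdk
rm APP-U1-63-25-DATA-flat.vmdk
```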

9. Re-attach the VMDK files to the virtual machine as before.

10. All set; the virtual machine is good to power on.

USING CURL TO TROUBLESHOOT

To use curl to test basic network connectivity, you need to know several things:

  • The remote server name or IP address.
  • The protocol for the service to be tested (HTTP, FTP, SMTP, etc.)
  • The port number for the network application you want to test.

To open a connection to a remote server, open a terminal window on your computer and type curl protocol://host:port, where protocol is the communication protocol to be used, host is the IP address or hostname of the server, and port is the TCP port number. The port is optional when the standard port for the given protocol is used.

C:\>curl http://asgaur.com
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved <a href="http://asgaur.com/wp/">here</a>.</p>
</body></html>

Use curl to try connecting via the SMTP protocol. Specify the port explicitly if the server listens on a custom one such as 2525.

C:\>curl smtp://asgaur.com
C:\>curl smtp://asgaur.com:2525
214-Commands supported:
214 AUTH STARTTLS HELO EHLO MAIL RCPT DATA BDAT NOOP QUIT RSET HELP

To test an FTP server, use curl to connect via the ftp protocol or directly to port 21.

C:\>curl ftp://asgaur.com
C:\>curl asgaur.com:21
220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
220-You are user number 18 of 50 allowed.
220-Local time is now 10:04. Server port: 21.
220-This is a private system - No anonymous login
220-IPv6 connections are also welcome on this server.
220 You will be disconnected after 15 minutes of inactivity.

SSH uses encrypted connections. However, you can still use curl to verify that the service is running on a server.

C:\>curl asgaur.com:22
SSH-2.0-OpenSSH_XX
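curl's exit status also makes a handy scripted check. The helper below is a small sketch (check_port is our own name, and the host and port are arbitrary examples); it relies on curl's telnet:// scheme, which opens a raw TCP connection without any protocol handshake, so it needs a curl build with telnet support:

```shell
# check_port HOST PORT -> exit 0 if a TCP connection can be opened.
check_port() {
  curl --silent --connect-timeout 3 --max-time 5 "telnet://$1:$2" \
    </dev/null >/dev/null 2>&1
}

check_port asgaur.com 22 && echo "port open" || echo "port closed"
```

Exit status 0 means the TCP connection opened; common failures are 7 (connection refused) and 28 (timeout).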

passwd: Authentication token manipulation error | RHEL 6

Problem

Getting "passwd: Authentication token manipulation error" on a RHEL 6 machine.

[root@ip-linuxbox~]# passwd user1
Changing password for user user1.
New password:
Retype new password:
passwd: Authentication token manipulation error

[root@ip-linuxbox~]# passwd -u user1 [Tried to unlock the account password.]
Unlocking password for user user1.
passwd: Libuser error at line: 179 - error creating `/etc/passwd+': Permission denied.
passwd: Error (password not set?)  [We get a Permission denied error, so the password cannot be changed or updated. We need to restore the permissions on the passwd-related files.]

[root@ip-linuxbox~]# chage -l user1
Last password change : Apr 03, 2019
Password expires : never
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7

Solution-

[root@ip-linuxbox~]# rpm --setperms passwd  [Restore the permissions of the files owned by the passwd package.]

[root@ip-linuxbox~]# which passwd | xargs chmod u+s  [Set the required setuid bit on the passwd binary.]

[root@ip-linuxbox ~]# restorecon /etc/*  [Restore the SELinux context on /etc; if a wrong context was the cause, this fixes it.]

[root@ip-linuxbox~]# passwd user1
Changing password for user user1.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

[root@ip-linuxbox~]# chage -l user1
Last password change : Jul 28, 2020
Password expires : never
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
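What chmod u+s actually grants can be sanity-checked on a scratch file rather than on the real passwd binary (the temp file below is a throwaway):

```shell
# Create a scratch file, set the setuid bit, and confirm it took effect.
scratch=$(mktemp)
chmod u+s "$scratch"
if [ -u "$scratch" ]; then echo "setuid bit set"; fi
rm -f "$scratch"
# prints: setuid bit set
```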


File system shows 100% occupied, but du reports less and the space appears unused

We faced the issue below: the /oracle mount point showed 100% used, yet when we ran du -gs on the folders under /oracle, the per-folder sizes added up to far less.

bash-4.4# df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 4.00 1.39 66% 20686 6% /
/dev/hd2 4.00 1.02 75% 45679 16% /usr
/dev/hd9var 2.00 1.01 50% 16318 7% /var
/dev/hd3 2.00 1.83 9% 407 1% /tmp
/dev/hd1 1.00 0.16 85% 6124 12% /home
/dev/hd11admin 0.25 0.25 1% 5 1% /admin
/proc – – – – – /proc
/dev/hd10opt 0.50 0.07 87% 12979 44% /opt
/dev/livedump 0.25 0.25 1% 4 1% /var/adm/ras/livedump
/dev/fslv00 198.00 0.01 100% 440205 16% /oracle

bash-4.4# cd /oracle/
bash-4.4# du -gs *
0.00 AutoDeployment
5.45 Oracle
0.03 Patch
7.54 data01
11.67 fmw_12.1.3.0.0_wls
0.01 jboss
0.20 jdk
0.00 lost+found
0.92 wls1221

You may notice 100% utilization for the /oracle mount point in the "df -g" output, yet "du -gs" shows that the files do not occupy the entire space.

This usually happens when files are deleted while a process still holds them open. The kernel does not free the blocks until the last open file descriptor is closed, so the space stays accounted to the file system.

To recover from this, we need to follow two steps:

  1. Find all processes that are still running but holding deleted files open; these are what keep the /oracle mount point at 100% used.

bash-4.4# fuser -dV /oracle/
/oracle/:
inode=670047 size=5242722 fd=389 5570776
inode=1165305 size=20278 fd=1 8061106
inode=1165313 size=182335565824 fd=1 8716486
inode=1165305 size=20278 fd=1 9044152
inode=1165313 size=182335565824 fd=1 9371672
inode=1165305 size=20278 fd=1 11141354
inode=669981 size=5514335 fd=1 13041898


2. Kill the processes found above (or, better, restart the owning applications gracefully so they close the deleted files).

bash-4.4# kill -9 5570776
bash-4.4# kill -9 8061106
bash-4.4# kill -9 8716486
bash-4.4# kill -9 9044152
bash-4.4# kill -9 9371672
bash-4.4# kill -9 11141354
bash-4.4# kill -9 13041898
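The fuser -dV output above is AIX-specific. On Linux, the same condition (deleted files still held open) can be spotted straight from /proc, assuming procfs is mounted; this is a sketch rather than a polished tool:

```shell
# Scan /proc for file descriptors that still point at deleted files.
# Such files keep their blocks allocated until the descriptor closes.
for fd in /proc/[0-9]*/fd/*; do
  target=$(readlink "$fd" 2>/dev/null) || continue
  case $target in
    *' (deleted)') echo "$fd -> $target" ;;
  esac
done
```

Where lsof is installed, `lsof +L1` (open files with a link count below 1) gives the same information along with process names.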

bash-4.4# fuser -dV /oracle/
/oracle/:

bash-4.4# df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 4.00 1.39 66% 20686 6% /
/dev/hd2 4.00 1.02 75% 45679 16% /usr
/dev/hd9var 2.00 1.01 50% 16318 7% /var
/dev/hd3 2.00 1.83 9% 407 1% /tmp
/dev/hd1 1.00 0.16 85% 6124 12% /home
/dev/hd11admin 0.25 0.25 1% 5 1% /admin
/proc – – – – – /proc
/dev/hd10opt 0.50 0.07 87% 12979 44% /opt
/dev/livedump 0.25 0.25 1% 4 1% /var/adm/ras/livedump
/dev/fslv00 198.00 170.01 15% 440205 1% /oracle
bash-4.4# cd /oracle/
bash-4.4# du -gs *
0.00 AutoDeployment
5.45 Oracle
0.03 Patch
7.54 data01
11.67 fmw_12.1.3.0.0_wls
0.01 jboss
0.20 jdk
0.00 lost+found
0.92 wls1221

How to Find Server Public IP Address in Linux Terminal

root@test:/var/log/nginx# wget -qO – icanhazip.com
www.xxx.yyy.zzz
root@test:/var/log/nginx# wget -qO- http://ipecho.net/plain | xargs echo
www.xxx.yyy.zzz
root@test:/var/log/nginx# curl icanhazip.com
www.xxx.yyy.zzz
root@test:/var/log/nginx#

sed to play with data or parsing your text

To select all the lines starting from STARTING_PATTERN up to the next blank line (^$) and delete them:

# sed '/STARTING_PATTERN/,/^$/d' filename

To edit the file in place, use the -i option.

# sed -i '/STARTING_PATTERN/,/^$/d' filename

To insert multiple lines into a file after a specified pattern:

# sed '/cdef/r add.txt' input.txt

input.txt:
abcd
accd
cdef
line
web
add.txt:
line1
line2
line3
line4
Output :
abcd
accd
cdef
line1
line2
line3
line4
line
web

If you want to apply the changes to the input.txt file itself, use -i with sed.

# sed -i '/cdef/r add.txt' input.txt

sed interprets patterns as basic regular expressions by default; to use extended regular expressions, add the -E option.

# sed -E '/RegexPattern/r add.txt' input.txt
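A quick self-contained run of the delete-to-blank-line command above (demo.txt is a throwaway file):

```shell
# Build a sample file: the block from START through the following blank
# line should be removed, everything else kept.
printf 'keep1\nSTART\ninside\n\nkeep2\n' > demo.txt

sed '/START/,/^$/d' demo.txt
# prints:
# keep1
# keep2
```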

Useful Linux commands for SAN LUN allocation in RHEL

The following is a raw shell history captured during a SAN LUN allocation session on RHEL, kept as a reference (retries and typos included).

ls -l /dev/disk/by-* | grep lun-31
ls -l /dev/disk/by-* | grep lun-33
ls -l /dev/disk/by-* | grep lun-20
ls -l /dev/disk/by-* | grep lun-10
cat /sys/class/fc_transport/*/node_name
grep 50060160bee045be /sys/class/fc_transport/*/node_name
lsscsi
./inq.LinuxAMD64 -clariion
multipath -ll
df -h
cat /etc/fstsb
cat /etc/fstab
multipath -ll | grep mpathg
ls -ltr /data*
ls -ls /data*
ls -ld /data*
df -h
mount/dev/mapper/mpathg /data10
mount /dev/mapper/mpathg /data10
df -h
cd /data10
ls -ltr
du -hs regcss
rm -rf regcss
df -h
ls -ltr
vi /etc/fstab
cat /etc/fstsb
cat /etc/fstab
cd
mount /data10
umount /data10
mount /data10
df -h
multipath -ll | grep mpathk
multipath -ll | grep mpathl
ls -l /dev/disk/by-* | grep lun-33
echo "0 5 33" > /sys/class/scsi_host/host2/scan
ls -l /dev/disk/by-* | grep lun-33
echo "0 4 33" > /sys/class/scsi_host/host2/scan
ls -l /dev/disk/by-* | grep lun-33
ls -l /dev/disk/by-* | grep lun-31
cat /sys/class/fc_transport/*/node_name
echo "0 5 33" > /sys/class/scsi_host/host1/scan
ls -l /dev/disk/by-* | grep lun-33
cd /proc/scsi
ls
cd scsi
cd sg
ls
cd device
cd devices
cat devices
grep 0x50060160bee045be  /sys/class/fc_transport/*/node_name
echo "0 0 33" > /sys/class/scsi_host/host1/scan
ls -l /dev/disk/by-* | grep lun-33
echo "0 1 33" > /sys/class/scsi_host/host1/scan
ls -l /dev/disk/by-* | grep lun-33
echo "0 3 33" > /sys/class/scsi_host/host1/scan
ls -l /dev/disk/by-* | grep lun-33
echo "0 3 33" > /sys/class/scsi_host/host2/scan
ls -l /dev/disk/by-* | grep lun-33
echo "0 1 33" > /sys/class/scsi_host/host2/scan
ls -l /dev/disk/by-* | grep lun-33
grep 0x5006016b08605821  /sys/class/fc_transport/*/node_name
cat /sys/class/fc_transport/*/node_name
grep 0x5006016088605821  /sys/class/fc_transport/*/node_name
echo "0 2 33" > /sys/class/scsi_host/host2/scan
ls -l /dev/disk/by-* | grep lun-33
echo "0 4 33" > /sys/class/scsi_host/host1/scan
ls -l /dev/disk/by-* | grep lun-33
echo "0 2 33" > /sys/class/scsi_host/host1/scan
ls -l /dev/disk/by-* | grep lun-33
grep 0x50060160bea0597f  /sys/class/fc_transport/*/node_name
echo "0 3 33" > /sys/class/scsi_host/host1/scan
ls -l /dev/disk/by-* | grep lun-33
echo "0 5 33" > /sys/class/scsi_host/host1/scan
ls -l /dev/disk/by-* | grep lun-33
ls -l /dev/disk/by-* | grep lun-34
echo "0 5 34" > /sys/class/scsi_host/host1/scan
ls -l /dev/disk/by-* | grep lun-34
echo "0 2 34" > /sys/class/scsi_host/host2/scan
echo "0 2 34" > /sys/class/scsi_host/host1/scan
echo "0 4 34" > /sys/class/scsi_host/host1/scan
echo "0 4 34" > /sys/class/scsi_host/host2/scan
ls -l /dev/disk/by-* | grep lun-34
cd
mkdir /data11
mkdir /data12
multipath ll
multipath -ll
df -h | grep mpathp
history | grep ext4
mkfs.ext4 -L DATA11 -m 0 -b 2048 /dev/mapper/mpathp
df -h | grep mpathq
mkfs.ext4 -L DATA12 -m 0 -b 2048 /dev/mapper/mpathq
df -h
mkdir /data11
mkdir /data12
mount /dev/mapper/mpathp /data11
mount /dev/mapper/mpathp /data12
umount /data12
umount /data11
mount /dev/mapper/mpathp /data11
mount /dev/mapper/mpathq /data12
df -h
umount /data12
vi /etc/fstsb
vi /etc/fstab
df -h
umount /data11
mount all
mount -all
df -h
cat /etc/fstab
df -h
ls -ld /data*
chown -R orarh11g:dba /data11 /data12
ls -ld /data*
df -h
rm -rf /data12
cat /proc/scsi/scsi | egrep -i 'Host:' | wc -l
ls /sys/class/fc_host
df -h
cat /etc/fstsb
cat /etc/fstab
vi /etc/fstab
df -h
mount all
mount -all
mkdir /data12
mount -all
df -h
chown -R orarh11g:dba  /data12
df -h
cat /sys/class/scsi_host/host*/device/fc_host/host*/node_name
for i in 0 1 2 3 4 5; do cat host$i/device/fc_host/host$i/port_name;  done
for i in 0 1 2 3 4 5 6 7 8 9 10; do cat host$i/device/fc_host/host$i/port_name;  done
cd  /sys/class/scsi_host/
for i in 0 1 2 3 4 5 6 7 8 9 10; do cat host$i/device/fc_host/host$i/port_name;  done
ls /sys/class/fc_host
fdisk -l |egrep '^Disk' |egrep -v 'dm-'
multipath -ll
lspci | grep Fibre
lspci -v -s 05:00.0
ls -l /sys/class/scsi_host
ind /sys/class/pci_bus/0000\:05/device/0000\:05\:00.0/host*/rport-*/target*/*/state | awk -F'/' '{print $11}' | sort
find /sys/class/pci_bus/0000\:05/device/0000\:05\:00.0/host*/rport-*/target*/*/state | awk -F'/' '{print $11}' | sort
find /sys/class/pci_bus/0000\:05/device/0000\:05\:00.1/host*/rport-*/target*/*/state | awk -F'/' '{print $11}' | sort
cat /proc/scsi/scsi | grep scsi2
cat /proc/scsi/scsi | grep scsi1
find   /sys/class/pci_bus/0000\:05/device/0000\:05\:00.0/host*/rport-*/target*/*/block/*/stat | awk -F'/' '{print $11,$13}'
find   /sys/class/pci_bus/0000\:05/device/0000\:05\:00.1/host*/rport-*/target*/*/block/*/stat | awk -F'/' '{print $11,$13}'
udevadm info --query=path --name /dev/sdad
df -h
udevadm info --query=path --name /dev/mapper/mpathq
udevadm info --query=path --name /devices/virtual/block/dm-13
for port in /sys/class/fc_host/host[0-9]/port_name; { echo -n "$port : "; cat $port; }
history
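The repeated echo-to-scan commands in the history above can be generated instead of typed by hand. The sketch below (gen_rescan is our own helper name) only prints the commands, taking the sysfs directory as a parameter so it can be tried against a scratch directory before being piped to sh on a real host:

```shell
# gen_rescan SYSFS_DIR CHANNEL TARGET LUN
# Print (do not execute) one scan command per SCSI host directory found
# under SYSFS_DIR, mirroring the echo "c t l" > hostN/scan pattern above.
gen_rescan() {
  root=$1; c=$2; t=$3; l=$4
  for h in "$root"/host*; do
    [ -d "$h" ] || continue
    echo "echo \"$c $t $l\" > $h/scan"
  done
}

gen_rescan /sys/class/scsi_host 0 5 33
```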

Check and list luns attached to HBA in RHEL6

This article shows the mapping from a physical HBA card to its LUNs. A SAN is used as the example below, but in general the same approach applies to any other device that uses sysfs, for example direct-attached SAS.

[root@RHEL6 scsi_host]# lspci | grep Fibre
05:00.0 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03)
05:00.1 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03)
[root@RHEL6 scsi_host]# lspci -v -s 05:00.0
05:00.0 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03)
        Subsystem: Hewlett-Packard Company Device 338f
        Physical Slot: 1
        Flags: bus master, fast devsel, latency 0, IRQ 40
        Memory at f7ff0000 (64-bit, non-prefetchable) [size=4K]
        Memory at f7fe0000 (64-bit, non-prefetchable) [size=16K]
        I/O ports at 5000 [size=256]
        [virtual] Expansion ROM at f1700000 [disabled] [size=256K]
        Capabilities: [58] Power Management version 3
        Capabilities: [60] MSI: Enable- Count=1/16 Maskable+ 64bit+
        Capabilities: [78] MSI-X: Enable- Count=32 Masked-
        Capabilities: [84] Vital Product Data
        Capabilities: [94] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [12c] Power Budgeting <?>
        Kernel driver in use: lpfc
        Kernel modules: lpfc

[root@RHEL6 scsi_host]# ls -l /sys/class/scsi_host
total 0
lrwxrwxrwx. 1 root root 0 Jun 20 18:17 host0 -> ../../devices/pci0000:00/0000:00:02.2/0000:03:00.0/host0/scsi_host/host0
lrwxrwxrwx. 1 root root 0 Jun 20 18:17 host1 -> ../../devices/pci0000:00/0000:00:03.0/0000:05:00.0/host1/scsi_host/host1
lrwxrwxrwx. 1 root root 0 Jun 20 18:17 host2 -> ../../devices/pci0000:00/0000:00:03.0/0000:05:00.1/host2/scsi_host/host2

[root@RHEL6 scsi_host]# find /sys/class/pci_bus/0000\:05/device/0000\:05\:00.0/host*/rport-*/target*/*/state | awk -F'/' '{print $11}' | sort
1:0:0:0
1:0:0:10
1:0:0:11
1:0:0:12
1:0:0:13
1:0:0:14
1:0:0:15
1:0:0:16
1:0:0:33
1:0:1:0
1:0:1:10
1:0:1:11
1:0:1:12
1:0:1:13
1:0:1:14
1:0:1:15
1:0:1:16
1:0:1:33
1:0:2:0
1:0:2:31
1:0:2:32
1:0:2:33
1:0:2:34
1:0:3:0
1:0:3:1
1:0:3:2
1:0:3:33
1:0:4:0
1:0:4:31
1:0:4:32
1:0:4:33
1:0:4:34
1:0:5:0
1:0:5:1
1:0:5:2
1:0:5:33
1:0:5:34
[root@RHEL6 scsi_host]# find /sys/class/pci_bus/0000\:05/device/0000\:05\:00.1/host*/rport-*/target*/*/state | awk -F'/' '{print $11}' | sort
2:0:0:0
2:0:0:1
2:0:0:2
2:0:1:0
2:0:1:10
2:0:1:11
2:0:1:12
2:0:1:13
2:0:1:14
2:0:1:15
2:0:1:16
2:0:1:33
2:0:2:0
2:0:2:31
2:0:2:32
2:0:2:33
2:0:2:34
2:0:3:0
2:0:3:10
2:0:3:11
2:0:3:12
2:0:3:13
2:0:3:14
2:0:3:15
2:0:3:16
2:0:3:33
2:0:4:0
2:0:4:31
2:0:4:32
2:0:4:33
2:0:4:34
2:0:5:0
2:0:5:1
2:0:5:2
2:0:5:33
[root@RHEL6 scsi_host]# cat /proc/scsi/scsi | grep scsi2
Host: scsi2 Channel: 00 Id: 02 Lun: 00
Host: scsi2 Channel: 00 Id: 02 Lun: 31
Host: scsi2 Channel: 00 Id: 02 Lun: 32
Host: scsi2 Channel: 00 Id: 04 Lun: 00
Host: scsi2 Channel: 00 Id: 04 Lun: 31
Host: scsi2 Channel: 00 Id: 04 Lun: 32
Host: scsi2 Channel: 00 Id: 05 Lun: 00
Host: scsi2 Channel: 00 Id: 05 Lun: 01
Host: scsi2 Channel: 00 Id: 05 Lun: 02
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Host: scsi2 Channel: 00 Id: 00 Lun: 01
Host: scsi2 Channel: 00 Id: 00 Lun: 02
Host: scsi2 Channel: 00 Id: 01 Lun: 00
Host: scsi2 Channel: 00 Id: 01 Lun: 10
Host: scsi2 Channel: 00 Id: 01 Lun: 11
Host: scsi2 Channel: 00 Id: 01 Lun: 12
Host: scsi2 Channel: 00 Id: 01 Lun: 13
Host: scsi2 Channel: 00 Id: 01 Lun: 14
Host: scsi2 Channel: 00 Id: 01 Lun: 15
Host: scsi2 Channel: 00 Id: 01 Lun: 16
Host: scsi2 Channel: 00 Id: 03 Lun: 00
Host: scsi2 Channel: 00 Id: 03 Lun: 10
Host: scsi2 Channel: 00 Id: 03 Lun: 11
Host: scsi2 Channel: 00 Id: 03 Lun: 12
Host: scsi2 Channel: 00 Id: 03 Lun: 13
Host: scsi2 Channel: 00 Id: 03 Lun: 14
Host: scsi2 Channel: 00 Id: 03 Lun: 15
Host: scsi2 Channel: 00 Id: 03 Lun: 16
Host: scsi2 Channel: 00 Id: 05 Lun: 33
Host: scsi2 Channel: 00 Id: 04 Lun: 33
Host: scsi2 Channel: 00 Id: 03 Lun: 33
Host: scsi2 Channel: 00 Id: 01 Lun: 33
Host: scsi2 Channel: 00 Id: 02 Lun: 33
Host: scsi2 Channel: 00 Id: 02 Lun: 34
Host: scsi2 Channel: 00 Id: 04 Lun: 34
[root@RHEL6 scsi_host]# cat /proc/scsi/scsi | grep scsi1
Host: scsi1 Channel: 00 Id: 02 Lun: 00
Host: scsi1 Channel: 00 Id: 02 Lun: 31
Host: scsi1 Channel: 00 Id: 02 Lun: 32
Host: scsi1 Channel: 00 Id: 04 Lun: 00
Host: scsi1 Channel: 00 Id: 04 Lun: 31
Host: scsi1 Channel: 00 Id: 04 Lun: 32
Host: scsi1 Channel: 00 Id: 05 Lun: 00
Host: scsi1 Channel: 00 Id: 05 Lun: 01
Host: scsi1 Channel: 00 Id: 05 Lun: 02
Host: scsi1 Channel: 00 Id: 03 Lun: 00
Host: scsi1 Channel: 00 Id: 03 Lun: 01
Host: scsi1 Channel: 00 Id: 03 Lun: 02
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Host: scsi1 Channel: 00 Id: 00 Lun: 10
Host: scsi1 Channel: 00 Id: 00 Lun: 11
Host: scsi1 Channel: 00 Id: 00 Lun: 12
Host: scsi1 Channel: 00 Id: 00 Lun: 13
Host: scsi1 Channel: 00 Id: 00 Lun: 14
Host: scsi1 Channel: 00 Id: 00 Lun: 15
Host: scsi1 Channel: 00 Id: 00 Lun: 16
Host: scsi1 Channel: 00 Id: 01 Lun: 00
Host: scsi1 Channel: 00 Id: 01 Lun: 10
Host: scsi1 Channel: 00 Id: 01 Lun: 11
Host: scsi1 Channel: 00 Id: 01 Lun: 12
Host: scsi1 Channel: 00 Id: 01 Lun: 13
Host: scsi1 Channel: 00 Id: 01 Lun: 14
Host: scsi1 Channel: 00 Id: 01 Lun: 15
Host: scsi1 Channel: 00 Id: 01 Lun: 16
Host: scsi1 Channel: 00 Id: 05 Lun: 33
Host: scsi1 Channel: 00 Id: 00 Lun: 33
Host: scsi1 Channel: 00 Id: 01 Lun: 33
Host: scsi1 Channel: 00 Id: 03 Lun: 33
Host: scsi1 Channel: 00 Id: 04 Lun: 33
Host: scsi1 Channel: 00 Id: 02 Lun: 33
Host: scsi1 Channel: 00 Id: 05 Lun: 34
Host: scsi1 Channel: 00 Id: 02 Lun: 34
Host: scsi1 Channel: 00 Id: 04 Lun: 34
[root@RHEL6 scsi_host]# find   /sys/class/pci_bus/0000\:05/device/0000\:05\:00.0/host*/rport-*/target*/*/block/*/stat | awk -F'/' '{print $11,$13}'
1:0:0:0 sdb
1:0:0:10 sdc
1:0:0:11 sdd
1:0:0:12 sde
1:0:0:13 sdf
1:0:0:14 sdg
1:0:0:15 sdh
1:0:0:16 sdi
1:0:1:0 sdj
1:0:1:10 sdk
1:0:1:11 sdl
1:0:1:12 sdm
1:0:1:13 sdn
1:0:1:14 sdo
1:0:1:15 sdp
1:0:1:16 sdq
1:0:2:0 sdr
1:0:2:31 sds
1:0:2:32 sdt
1:0:2:33 sdbi
1:0:2:34 sdbk
1:0:3:0 sdu
1:0:3:1 sdv
1:0:3:2 sdw
1:0:4:0 sdx
1:0:4:31 sdy
1:0:4:32 sdz
1:0:4:33 sdbh
1:0:4:34 sdbl
1:0:5:0 sdaa
1:0:5:1 sdab
1:0:5:2 sdac
[root@RHEL6 scsi_host]# find   /sys/class/pci_bus/0000\:05/device/0000\:05\:00.1/host*/rport-*/target*/*/block/*/stat | awk -F'/' '{print $11,$13}'
2:0:0:0 sdad
2:0:0:1 sdae
2:0:0:2 sdaf
2:0:1:0 sdag
2:0:1:10 sdah
2:0:1:11 sdai
2:0:1:12 sdaj
2:0:1:13 sdak
2:0:1:14 sdal
2:0:1:15 sdam
2:0:1:16 sdan
2:0:2:0 sdao
2:0:2:31 sdap
2:0:2:32 sdaq
2:0:2:33 sdbg
2:0:2:34 sdbj
2:0:3:0 sdar
2:0:3:10 sdas
2:0:3:11 sdat
2:0:3:12 sdau
2:0:3:13 sdav
2:0:3:14 sdaw
2:0:3:15 sdax
2:0:3:16 sday
2:0:4:0 sdaz
2:0:4:31 sdba
2:0:4:32 sdbb
2:0:4:33 sdbf
2:0:4:34 sdbm
2:0:5:0 sdbc
2:0:5:1 sdbd
2:0:5:2 sdbe
[root@RHEL6 scsi_host]# udevadm info --query=path --name /dev/sdad
/devices/pci0000:00/0000:00:03.0/0000:05:00.1/host2/rport-2:0-2/target2:0:0/2:0:0:0/block/sdad
[root@RHEL6 scsi_host]# df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/sda3            50G   20G   27G  43% /
tmpfs                64G 1012K   64G   1% /dev/shm
/dev/sda1           485M   40M  420M   9% /boot
/dev/sda6            92G  9.9G   77G  12% /home
/dev/sda5           7.9G  147M  7.4G   2% /tmp
/dev/mapper/mpathb   99G   17G   82G  18% /oracle
/dev/mapper/mpathc  985G  907G   78G  93% /data01
/dev/mapper/mpathd  985G  911G   74G  93% /data02
/dev/mapper/mpathe  985G  942G   43G  96% /data03
/dev/mapper/mpathf  985G  933G   52G  95% /data04
/dev/mapper/mpathn  985G  920G   65G  94% /data05
/dev/mapper/mpathh  985G  927G   58G  95% /data06
/dev/mapper/mpathi  985G  937G   48G  96% /data07
/dev/mapper/mpathj  985G  895G   90G  91% /data08
/dev/mapper/mpatho  985G  966G   19G  99% /data09
/dev/mapper/mpathg  985G  828G  157G  85% /data10
/dev/mapper/mpathp  985G  545G  441G  56% /data11
/dev/mapper/mpathq  985G   87M  985G   1% /data12

[root@RHEL6 scsi_host]# for port in /sys/class/fc_host/host[0-9]/port_name; { echo -n "$port : "; cat $port; }
/sys/class/fc_host/host1/port_name : 0x1000a0481ce4f1da
/sys/class/fc_host/host2/port_name : 0x1000a0481ce4f1db
[root@RHEL6 scsi_host]#

First, use lspci to get info on the HBA cards installed on the host

# lspci | grep Fibre
15:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
15:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)

HBA detail info

# lspci -v -s 15:00.0
15:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
    Subsystem: QLogic Corp. Device 015d
    Physical Slot: 2
    Flags: bus master, fast devsel, latency 0, IRQ 24
    I/O ports at 2200 [size=256]
    Memory at 97b00000 (64-bit, non-prefetchable) [size=16K]
    Expansion ROM at 90000000 [disabled] [size=256K]
    Capabilities: [44] Power Management version 3
    Capabilities: [4c] Express Endpoint, MSI 00
    Capabilities: [88] MSI: Enable- Count=1/32 Maskable- 64bit+
    Capabilities: [98] Vital Product Data
    Capabilities: [a0] MSI-X: Enable+ Count=2 Masked-
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [138] Power Budgeting <?>
    Kernel driver in use: qla2xxx
    Kernel modules: qla2xxx

This tells you that there is one HBA card in PCI slot 2, with two FC ports.

Find pci slot and scsi_host mapping

# ls -l /sys/class/scsi_host
total 0
...
lrwxrwxrwx 1 root root 0 Oct  9 12:58 host4 -> ../../devices/pci0000:00/0000:00:1f.5/host4/scsi_host/host4
lrwxrwxrwx 1 root root 0 Oct  9 12:58 host5 -> ../../devices/pci0000:00/0000:00:03.0/0000:15:00.0/host5/scsi_host/host5
lrwxrwxrwx 1 root root 0 Oct  9 12:58 host6 -> ../../devices/pci0000:00/0000:00:03.0/0000:15:00.1/host6/scsi_host/host6

You can easily tell that the first port of PCI slot 2 (15:00.0) maps to host5, and the other to host6.

Find target luns by HBA port

Once you know the PCI info of an HBA card, you can find the target LUNs or SAN devices behind each of its ports.

Note: 15:00.0 (port 0) is used in this case.

# find /sys/class/pci_bus/0000\:15/device/0000\:15\:00.0/host*/rport-*/target*/*/state | awk -F'/' '{print $11}' | sort
...
5:0:0:0
5:0:0:1
5:0:0:10
5:0:0:11
5:0:0:2
5:0:0:3
5:0:0:31
5:0:0:4
5:0:0:5
...

It should be consistent with the devices in /proc/scsi/scsi

# cat /proc/scsi/scsi | grep scsi5
...
Host: scsi5 Channel: 00 Id: 04 Lun: 04
Host: scsi5 Channel: 00 Id: 04 Lun: 05
Host: scsi5 Channel: 00 Id: 04 Lun: 06
Host: scsi5 Channel: 00 Id: 04 Lun: 07
Host: scsi5 Channel: 00 Id: 04 Lun: 08
Host: scsi5 Channel: 00 Id: 04 Lun: 09
Host: scsi5 Channel: 00 Id: 04 Lun: 10
Host: scsi5 Channel: 00 Id: 04 Lun: 11
Host: scsi5 Channel: 00 Id: 04 Lun: 31
...

Note: if you use the command for SAS direct-attached devices, change 'rport' to 'port'; the same applies to the example below.

Find block devices

If you are only interested in block devices, such as tape drives, disk LUNs, or CD-ROMs, here is a similar way.

# find   /sys/class/pci_bus/0000\:15/device/0000\:15\:00.0/host*/rport-*/target*/*/block/*/stat | awk -F'/' '{print $11,$13}'
5:0:0:0 sdb
5:0:0:1 sdc
5:0:0:10 sdl
5:0:0:11 sdm
5:0:0:2 sdd
5:0:0:3 sde
5:0:0:4 sdf
5:0:0:5 sdg
5:0:0:6 sdh

Reverse search: find the physical port that a LUN is connected to

/proc/scsi/scsi doesn't tell you which physical port the target LUNs are connected to. For the reverse lookup, given a device name such as /dev/sdd, how do I know which HBA port it is connected to?

# udevadm info --query=path --name /dev/sdd
/devices/pci0000:00/0000:00:03.0/0000:15:00.0/host5/rport-5:0-0/target5:0:0/5:0:0:2/block/sdd

The sysfs path makes the mapping clear: /dev/sdd sits behind 15:00.0, i.e. host5.

Alternatively, multipath can give a hint:

multipath -ll | grep sdd
  `- 5:0:0:2  sdd  8:48    active ready running

Or, look into /dev/disk/by-path/ tree
...
lrwxrwxrwx 1 root root 10 Aug 15 16:49 /dev/disk/by-path/pci-0000:15:00.1-fc-0x22430080e524ebac-lun-4 -> ../../sdcx
lrwxrwxrwx 1 root root 10 Aug 15 16:49 /dev/disk/by-path/pci-0000:15:00.1-fc-0x22430080e524ebac-lun-5 -> ../../sdcy

Get HBA WWN (port_name) info:

# for port in /sys/class/fc_host/host[0-9]/port_name; { echo -n "$port : "; cat $port; }
/sys/class/fc_host/host5/port_name : 0x21000024ff3434e4
/sys/class/fc_host/host6/port_name : 0x21000024ff3434e5

Dynamically insert and remove SCSI devices

On a reasonably recent kernel with sysfs (or the older /proc/scsi interface), a non-busy device can be removed and added 'on the fly'.

To hot remove a SCSI device:

    echo 1 > /sys/class/scsi_device/h:c:t:l/device/delete
    or
    echo 1 > /sys/block/<dev>/device/delete
    where <dev> is like sda or sdb etc..
    old way
    echo "scsi remove-single-device a b c d" > /proc/scsi/scsi

and similarly, to hot add a SCSI device, do

    echo "c t l" >  /sys/class/scsi_host/host<h>/scan
    or use wildcard like below
    echo "- - -" > /sys/class/scsi_host/host<h>/scan

    old way
    echo "scsi add-single-device h c t l" > /proc/scsi/scsi

where

          h == hostadapter id (first one being 0)
          c == SCSI channel on hostadapter (first one being 0)
          t == ID
          l == LUN (first one being 0)
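As a convenience, the h:c:t:l address used in the legend above can be split mechanically. This sketch (hctl_cmds is our own helper name) prints the matching delete and re-add commands rather than executing them:

```shell
# hctl_cmds H:C:T:L -- print the sysfs delete and rescan commands
# for a device address such as 5:0:0:2.
hctl_cmds() {
  IFS=: read -r h c t l <<EOF
$1
EOF
  echo "echo 1 > /sys/class/scsi_device/$h:$c:$t:$l/device/delete"
  echo "echo \"$c $t $l\" > /sys/class/scsi_host/host$h/scan"
}

hctl_cmds 5:0:0:2
# prints:
# echo 1 > /sys/class/scsi_device/5:0:0:2/device/delete
# echo "0 0 2" > /sys/class/scsi_host/host5/scan
```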

SIMPLE BACKUP SOLUTION WITH AWS S3

Data availability is one of the biggest concerns in the IT industry. After moving most of my services to the AWS cloud, I started thinking about how to ensure data availability and integrity in case of an AWS data center failure, or if my EC2 EBS volume gets corrupted.

A case study

I have a database (SQL Server) running on an EC2 instance.

  • I need to ensure I can restore data from backup on user demand, as well as in case of a data center or instance failure
  • At the same time, it must not increase my AWS monthly charges unexpectedly
  • I will only run the service during business hours

Possible solutions

  • Use AWS RDS. The managed service takes care of everything, including backups and patching, and is very reliable. But my last requirement makes it awkward: at the time of writing, an RDS instance could not be stopped, only terminated (though you can take a snapshot before terminating).
  • Use an EC2 instance and take snapshot backups of the EBS volume. But my EBS volume is 120 GB, much bigger than the actual SQL database backup, which means storing multiple snapshots costs more (120 GB x 7 days).

The solution I am using

  • Created a maintenance plan in SQL Server to take a daily DB backup
  • Created an AWS CLI script to sync data from the SQL Server backup location to an S3 bucket
  • aws s3 sync \\SERVER_NAME\backup$ s3://BUCKETNAME --exclude "*" --include "*.bak"
  • Created a batch job to move local SQL Server backup data to another folder for old-data clean-up
  • move \\SERVER_NAME\backup$\*.* \\SERVER_NAME\backup$\movedS3
  • Created a maintenance plan in SQL Server to delete older files from the movedS3 folder, which helps control unwanted data growth
  • Created an S3 lifecycle policy to delete older files from the bucket
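Stitched together, the nightly job amounts to something like the following POSIX-shell rendering of the Windows steps above. The paths, bucket name, and the run/DRY_RUN guard are all illustrative placeholders, not the original batch job:

```shell
# Nightly backup shipper: sync fresh .bak files to S3, then park them
# locally in movedS3 for later cleanup. DRY_RUN=1 only prints commands.
BACKUP_DIR=/backups            # stand-in for \\SERVER_NAME\backup$
BUCKET=s3://BUCKETNAME
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

run aws s3 sync "$BACKUP_DIR" "$BUCKET" --exclude "*" --include "*.bak"
run mkdir -p "$BACKUP_DIR/movedS3"
for f in "$BACKUP_DIR"/*.bak; do
  [ -e "$f" ] && run mv "$f" "$BACKUP_DIR/movedS3/"
done
```

Run with DRY_RUN=0 once the printed commands look right.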

 

What this solution will ensure

  • First of all, I can sleep tight at night; I don't need to worry about my backup data. 😉
  • S3 provides 99.999999999% data durability, and S3 data is replicated across multiple availability zones, so I will still be able to access it even if one availability zone fails.
  • S3 is among the cheapest cloud storage options; that's why Dropbox can afford to give away free storage space. 😉