
How to Find Server Public IP Address in Linux Terminal

root@test:/var/log/nginx# wget -qO- icanhazip.com
www.xxx.yyy.zzz
root@test:/var/log/nginx# wget -qO- http://ipecho.net/plain | xargs echo
www.xxx.yyy.zzz
root@test:/var/log/nginx# curl icanhazip.com
www.xxx.yyy.zzz
root@test:/var/log/nginx#
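
If wget and curl are unavailable, a DNS lookup can return the same information. A minimal alternative, assuming the dig utility (from bind-utils) is installed:

dig +short myip.opendns.com @resolver1.opendns.com

This asks OpenDNS's resolver to report the address your query came from, so it also works when outbound HTTP is blocked.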

SIMPLE BACKUP SOLUTION WITH AWS S3

Data availability is one of the biggest concerns in the IT industry. After moving most of my services to the AWS cloud, I started thinking about how to ensure data availability and integrity in case of an AWS data center failure, or if my EC2 EBS volume gets corrupted.

A case study

I have a SQL Server database running on an EC2 instance.

  • I need to ensure I can restore data from backup on user demand, after a data center failure, or after an instance failure
  • At the same time, it must not increase my monthly AWS charges unexpectedly
  • I will only run the service during business hours

Possible solutions

  • Use Amazon RDS. The service takes care of everything, including backups and patch updates, and it is a very reliable offering. But meeting my last requirement would mean a lot of extra work, since an RDS instance can't be stopped; you can only terminate it (yes, you can take a snapshot before terminating).
  • Use an EC2 instance and take snapshot backups of the EBS volume. But my EBS volume is 120 GB, much bigger than the native SQL database backup, which means storing multiple snapshots in S3 would cost more (120 GB x 7 days).

The solution I am using

  • Created a maintenance plan in SQL Server to take a daily database backup
  • Created an AWS CLI script to sync data from the SQL Server backup location to an S3 bucket:
  • aws s3 sync \\SERVER_NAME\backup$ s3://BUCKETNAME --exclude "*" --include "*.bak"
  • Created a batch job to move local SQL Server backup data to another folder for old-data clean-up:
  • move \\SERVER_NAME\backup$\*.* \\SERVER_NAME\backup$\movedS3
  • Created a maintenance plan in SQL Server to delete older files from the movedS3 folder, which keeps unwanted data growth under control
  • Created a lifecycle policy to delete older files from my S3 bucket
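
Taken together, the sync and clean-up steps can live in one scheduled job. Below is a minimal sketch in shell form, assuming the backup share is mounted at /mnt/backup and BUCKETNAME is a placeholder; the same logic maps one-to-one onto the Windows commands above:

#!/bin/bash
# Hypothetical nightly job: upload new SQL backups to S3, then move them aside.
# Assumes the AWS CLI is configured and the backup share is mounted at /mnt/backup.
BACKUP_DIR=/mnt/backup
BUCKET=s3://BUCKETNAME

# Upload only .bak files that are new or changed since the last run
aws s3 sync "$BACKUP_DIR" "$BUCKET" --exclude "*" --include "*.bak"

# Move uploaded files into movedS3 so the next sync stays small;
# a later clean-up task purges movedS3 of old files
mkdir -p "$BACKUP_DIR/movedS3"
mv "$BACKUP_DIR"/*.bak "$BACKUP_DIR/movedS3/"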

 

What this solution will ensure

  • First of all, I can sleep tight at night; I don't need to worry about my backup data. 😉
  • S3 provides 99.999999999% data durability. Because S3 replicates data across multiple Availability Zones, my data stays accessible even if an AWS Availability Zone fails.
  • S3 is one of the cheapest cloud storage solutions; that's why Dropbox can dare to give away so much storage space for free. 😉

AWS RDS IAM Policy for Read-Only Access and DB Log Download

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "rds:Describe*",
                "rds:ListTagsForResource",
                "rds:Download*",
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeVpcs"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Action": [
                "cloudwatch:GetMetricStatistics",
                "logs:DescribeLogStreams",
                "logs:GetLogEvents"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
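
The policy grants describe/list/download access on RDS plus the read-only EC2 and CloudWatch actions the console uses to display instance details, logs, and metrics. One way to put it in place is the AWS CLI; a sketch, assuming the JSON above is saved as rds-readonly.json and that the policy and group names below are hypothetical:

# Create a managed policy from the JSON document above
aws iam create-policy \
    --policy-name rds-readonly-logs \
    --policy-document file://rds-readonly.json

# Attach it to an IAM group of your choosing
aws iam attach-group-policy \
    --group-name db-auditors \
    --policy-arn arn:aws:iam::ACCOUNT_ID:policy/rds-readonly-logs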

Setup AWS CloudWatch Memory and Drive Monitoring on RHEL


Install Prerequisite Packages

sudo yum install wget unzip perl-core perl-DateTime perl-Sys-Syslog perl-CPAN perl-libwww-perl perl-Crypt-SMIME perl-Crypt-SSLeay

Install LWP Perl Bundles

  1. Launch cpan
    sudo perl -MCPAN -e shell
    
  2. Install Bundle
    install Bundle::LWP6 LWP YAML
    
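To confirm the bundle installed cleanly, one quick check (not part of the original setup) is printing the installed module version:

perl -MLWP -e 'print "$LWP::VERSION\n"'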

Install Script

wget http://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.1.zip
unzip CloudWatchMonitoringScripts-1.2.1.zip -d /opt
rm -f CloudWatchMonitoringScripts-1.2.1.zip
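
The scripts unpack into /opt/aws-scripts-mon. A quick smoke test that needs no credentials is printing the usage text:

/opt/aws-scripts-mon/mon-put-instance-data.pl --help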

Setup Credentials

API Access Key (Option 1)

This is good for testing, but it’s better to use IAM roles covered in Option 2.

  1. Copy awscreds template
    cp /opt/aws-scripts-mon/awscreds.template /opt/aws-scripts-mon/awscreds.conf
    
  2. Add access key id and secret access key
    vim /opt/aws-scripts-mon/awscreds.conf
    
  3. Lock down file access
    chmod 0400 /opt/aws-scripts-mon/awscreds.conf
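
For reference, awscreds.conf is just two key=value lines (the key names follow awscreds.template); the values below are AWS's documentation example credentials, used here as placeholders:

AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE
AWSSecretKey=wJalrXsXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY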
    

IAM Role (Option 2)

  1. Login to AWS web console
  2. Select Identity & Access Management
  3. Select Roles | Create New Role
  4. Enter Role Name
    1. i.e. ec2-cloudwatch
  5. Select Next Step
  6. Select Amazon EC2
  7. Search for cloudwatch
  8. Select CloudWatchFullAccess
  9. Select Next Step | Create Role
  10. Launch a new instance and assign the ec2-cloudwatch IAM role

You cannot add an IAM role to an existing EC2 instance; at the time of writing, you can only specify a role when you launch a new instance.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html?console_help=true

Test

This won't send data to CloudWatch.

/opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --verify --verbose

Example

MemoryUtilization: 31.7258903184253 (Percent)
Using AWS credentials file <./awscreds.conf>
Endpoint: https://monitoring.us-west-2.amazonaws.com
Payload: {"MetricData":[{"Timestamp":1443537153,"Dimensions":[{"Value":"i-12e1fac4","Name":"InstanceId"}],"Value":31.7258903184253,"Unit":"Percent","MetricName":"MemoryUtilization"}],"Namespace":"System/Linux","__type":"com.amazonaws.cloudwatch.v2010_08_01#PutMetricDataInput"}

Verification completed successfully. No actual metrics sent to CloudWatch.

Report to CloudWatch Test

Test that communication to CloudWatch works, and settle on the exact command you'll want to put in cron in the next step.

/opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail

After you run this command, one point-in-time metric should show up for the instance under CloudWatch | Linux System.
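
To confirm from the command line rather than the console, the metric can be queried back with the AWS CLI, assuming it is installed and configured; the instance ID and time window below are placeholders:

aws cloudwatch get-metric-statistics \
    --namespace "System/Linux" \
    --metric-name MemoryUtilization \
    --dimensions Name=InstanceId,Value=i-12e1fac4 \
    --start-time 2015-09-29T00:00:00Z --end-time 2015-09-30T00:00:00Z \
    --period 300 --statistics Average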

Create Cron Task (as root)

Now that you've tested the command and figured out what you want to report, it's time to add a cron task so it runs every X minutes. Every 5 minutes is usually a good interval.

  1. Edit cron table
    crontab -e
    
    */5 * * * * /opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --disk-space-util --disk-path=/ --from-cron
    

Create Cron Task (as other user)

You may want a dedicated user to run the cron job. Here's an example using a user named cloudwatch.

  1. Create user
    useradd cloudwatch
    
  2. Disable user login
    usermod -s /sbin/nologin cloudwatch
    
  3. Set ownership
    chown -R cloudwatch:cloudwatch /opt/aws-scripts-mon
    
  4. Edit cron table
    crontab -e -u cloudwatch
    
  5. Add cron job
    */5 * * * * /opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --swap-used --disk-space-util --disk-path=/ --from-cron
    

Verify Cron Job Ran

One way to verify the cron job ran is to look in the cron log.

less /var/log/cron
tail -f /var/log/cron
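
Filtering the log for the script name narrows the output to just these runs:

grep mon-put-instance-data /var/log/cron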

References

Monitor Script Arguments

--mem-util
  Collects and sends the MemoryUtilization metric in percentages. This option reports only memory allocated by applications and the operating system, and excludes memory in cache and buffers.

--mem-used
  Collects and sends the MemoryUsed metric, reported in megabytes. This option reports only memory allocated by applications and the operating system, and excludes memory in cache and buffers.

--mem-avail
  Collects and sends the MemoryAvailable metric, reported in megabytes. This option reports memory available for use by applications and the operating system.

--swap-util
  Collects and sends the SwapUtilization metric, reported in percentages.

--swap-used
  Collects and sends the SwapUsed metric, reported in megabytes.

--disk-path=PATH
  Selects the disk on which to report. PATH can specify a mount point or any file located on a mount point for the filesystem that needs to be reported. To select multiple disks, specify a --disk-path=PATH for each one. For example, to select the filesystems mounted on / and /home, use --disk-path=/ --disk-path=/home.

--disk-space-util
  Collects and sends the DiskSpaceUtilization metric for the selected disks, reported in percentages.

--disk-space-used
  Collects and sends the DiskSpaceUsed metric for the selected disks, reported by default in gigabytes. Due to reserved disk space in Linux operating systems, disk space used and disk space available might not add up exactly to the total disk space.

--disk-space-avail
  Collects and sends the DiskSpaceAvailable metric for the selected disks, reported in gigabytes. Due to reserved disk space in Linux operating systems, disk space used and disk space available might not add up exactly to the total disk space.

--memory-units=UNITS
  Specifies the units in which to report memory usage. If not specified, memory is reported in megabytes. UNITS may be one of: bytes, kilobytes, megabytes, gigabytes.

--disk-space-units=UNITS
  Specifies the units in which to report disk space usage. If not specified, disk space is reported in gigabytes. UNITS may be one of: bytes, kilobytes, megabytes, gigabytes.

--aws-credential-file=PATH
  Provides the location of the file containing AWS credentials. This parameter cannot be used with the --aws-access-key-id and --aws-secret-key parameters.

--aws-access-key-id=VALUE
  Specifies the AWS access key ID to use to identify the caller. Must be used together with the --aws-secret-key option. Do not use this option with the --aws-credential-file parameter.

--aws-secret-key=VALUE
  Specifies the AWS secret access key to use to sign the request to CloudWatch. Must be used together with the --aws-access-key-id option. Do not use this option with the --aws-credential-file parameter.

--verify
  Performs a test run of the script that collects the metrics and prepares a complete HTTP request, but does not actually call CloudWatch to report the data. This option also checks that credentials are provided. When run in verbose mode, this option outputs the metrics that will be sent to CloudWatch.

--from-cron
  Use this option when calling the script from cron. When this option is used, all diagnostic output is suppressed, but error messages are sent to the local system log of the user account.

--verbose
  Displays detailed information about what the script is doing.

--help
  Displays usage information.

--version
  Displays the version number of the script.

Prepare a RHEL-Based Virtual Machine for Azure

Today we got a project to prepare RHEL VHDs for Azure. I could not find any documentation for RHEL on Azure, so here are the steps I followed.

Prerequisites

CentOS Installation Notes

  • The newer VHDX format is not supported in Azure. You can convert the disk to VHD format using Hyper-V Manager or the Convert-VHD cmdlet (a qemu-img alternative is sketched after this list).
  • When installing the Linux system it is recommended that you use standard partitions rather than LVM (often the default for many installations). This will avoid LVM name conflicts with cloned VMs, particularly if an OS disk ever needs to be attached to another VM for troubleshooting. LVM or RAID may be used on data disks if preferred.
  • NUMA is not supported for larger VM sizes due to a bug in Linux kernel versions below 2.6.37. This issue primarily impacts distributions using the upstream Red Hat 2.6.32 kernel. Manual installation of the Azure Linux agent (waagent) will automatically disable NUMA in the GRUB configuration for the Linux kernel. More information about this can be found in the steps below.
  • Do not configure a swap partition on the OS disk. The Linux agent can be configured to create a swap file on the temporary resource disk. More information about this can be found in the steps below.
  • All of the VHDs must have sizes that are multiples of 1 MB.
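
If Hyper-V is not at hand, qemu-img can handle the VHDX-to-VHD conversion mentioned above; a sketch, assuming qemu-img is installed and the filenames are placeholders (Azure expects a fixed-size VHD):

# Convert a dynamic VHDX image to the fixed-size VHD format Azure expects
qemu-img convert -f vhdx -O vpc -o subformat=fixed rhel65.vhdx rhel65.vhd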

RHEL 6.5

  1. In Hyper-V Manager, select the virtual machine.
  2. Click Connect to open a console window for the virtual machine.
  3. Uninstall NetworkManager by running the following command:
    # sudo rpm -e --nodeps NetworkManager

    Note: If the package is not already installed, this command will fail with an error message. This is expected.

  4. Create a file named network in the /etc/sysconfig/ directory that contains the following text:
    NETWORKING=yes
    HOSTNAME=localhost.localdomain
  5. Create a file named ifcfg-eth0 in the /etc/sysconfig/network-scripts/ directory that contains the following text:
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=dhcp
    TYPE=Ethernet
    USERCTL=no
    PEERDNS=yes
    IPV6INIT=no
  6. Move (or remove) udev rules to avoid generating static rules for the Ethernet interface. These rules cause problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
    # sudo mkdir -m 0700 /var/lib/waagent
    # sudo mv /lib/udev/rules.d/75-persistent-net-generator.rules /var/lib/waagent/
    # sudo mv /etc/udev/rules.d/70-persistent-net.rules /var/lib/waagent/
  7. Ensure the network service will start at boot time by running the following command:
    # sudo chkconfig network on
  8. Install the python-pyasn1 package by running the following command:
    # sudo yum install python-pyasn1
  9. If you would like to use the OpenLogic mirrors that are hosted within the Azure datacenters, then replace the /etc/yum.repos.d/CentOS-Base.repo file with the following repositories. This will also add the [openlogic] repository that includes packages for the Azure Linux agent:
    [openlogic]
    name=CentOS-$releasever - openlogic packages for $basearch
    baseurl=http://olcentgbl.trafficmanager.net/openlogic/6/openlogic/$basearch/
    enabled=1
    gpgcheck=0
    
    [base]
    name=CentOS-$releasever - Base
    baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/os/$basearch/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
    

    Note: The rest of this guide will assume you are using at least the [openlogic] repo, which will be used to install the Azure Linux agent below.

  10. Add the following line to /etc/yum.conf:
    http_caching=packages
  11. Run the following command to clear the current yum metadata:
    # yum clean all
  12. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open /boot/grub/menu.lst in a text editor and ensure that the default kernel includes the following parameters:
    console=ttyS0 earlyprintk=ttyS0 rootdelay=300 numa=off

     

    This will also ensure all console messages are sent to the first serial port, which can assist Azure support with debugging issues. It also disables NUMA, due to a bug in the kernel version used by RHEL 6.

    In addition to the above, it is recommended to remove the following parameters:

    rhgb quiet crashkernel=auto

    Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the serial port.

    The crashkernel option may be left configured if desired, but note that this parameter will reduce the amount of available memory in the VM by 128MB or more, which may be problematic on the smaller VM sizes.

  13. Ensure that the SSH server is installed and configured to start at boot time. This is usually the default.
  14. Disable swap: comment out the swap entry in /etc/fstab (a scripted one-liner is sketched after these steps), then turn off the active swap device:
        # blkid | grep swap
      /dev/sda3: UUID="53-e0e3efe22612" TYPE="swap"
      # swapoff /dev/sda3
  15. Install the Azure Linux Agent by running the following command:
    # sudo yum install WALinuxAgent

    Note that installing the WALinuxAgent package will remove the NetworkManager and NetworkManager-gnome packages if they were not already removed, as described in step 3.

  16. Do not create swap space on the OS disk. The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. Note that the local resource disk is a temporary disk and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see the previous step), modify the following parameters in /etc/waagent.conf appropriately:
    ResourceDisk.Format=y
    ResourceDisk.Filesystem=ext4
    ResourceDisk.MountPoint=/mnt/resource
    ResourceDisk.EnableSwap=y
    ResourceDisk.SwapSizeMB=8192    ## NOTE: set this to whatever you need it to be.
  17. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
    # sudo waagent -force -deprovision
    # export HISTSIZE=0
    # logout
  18. Click Action -> Shut Down in Hyper-V Manager. Your Linux VHD is now ready to be uploaded to Azure.
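
As referenced in step 14, commenting out the swap entry can be scripted instead of edited by hand; a minimal sketch, assuming every /etc/fstab line whose type field is swap should be disabled:

# Comment out any fstab entry of type swap, then disable all active swap
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab
sudo swapoff -a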

Microsoft Azure: Set a Static Internal IP Address for a VM

Before you specify a static IP address from your address pool, you may want to verify that the IP address has not already been assigned. In the example below, we check whether the IP address 10.1.61.140 is available in the TestVNet virtual network.

Test-AzureStaticVNetIP -VNetName TestVNet -IPAddress 10.1.61.140

Be sure to change the variables for the cmdlets to reflect what you require for your environment before running them.

New-AzureVMConfig -Name $vmname -ImageName $img -InstanceSize Small | Set-AzureSubnet -SubnetNames $sub | Set-AzureStaticVNetIP -IPAddress 10.1.61.140 | New-AzureVM -ServiceName $vmsvc1 -VNetName TestVNet

If you want to set a static IP address for a VM that you previously created, you can do so by using the following cmdlets. If you already set an IP address for the VM and you want to change it to a different IP address, you’ll need to remove the existing static IP address before running these cmdlets. See the instructions below to remove a static IP.

For this procedure, you’ll use the Update-AzureVM cmdlet. The Update-AzureVM cmdlet restarts the VM as part of the update process. The DIP that you specify will be assigned after the VM restarts. In this example, we set the IP address for VM2, which is located in cloud service StaticDemo.

Get-AzureVM -ServiceName StaticDemo -Name VM2 | Set-AzureStaticVNetIP -IPAddress 10.1.61.140 | Update-AzureVM

When you remove a static IP address from a VM, the VM will automatically receive a new DIP after the VM restarts as part of the update process. In the example below, we remove the static IP from VM2, which is located in cloud service StaticDemo.

Get-AzureVM -ServiceName StaticDemo -Name VM2 | Remove-AzureStaticVNetIP | Update-AzureVM