root@test:/var/log/nginx# wget -qO- icanhazip.com
www.xxx.yyy.zzz
root@test:/var/log/nginx# wget -qO- http://ipecho.net/plain | xargs echo
www.xxx.yyy.zzz
root@test:/var/log/nginx# curl icanhazip.com
www.xxx.yyy.zzz
root@test:/var/log/nginx#
SIMPLE BACKUP SOLUTION WITH AWS S3
Data availability is one of the biggest concerns in the IT industry. After moving most of my services to the AWS cloud, I started thinking about how to ensure data availability and accuracy in case of an AWS data center failure, or if my EC2 EBS volume gets corrupted.
A case study
I have a SQL Server database running on an EC2 instance.
- I need to ensure I can restore data from backup on user demand, or after a data center or instance failure
- At the same time, it must not increase my monthly AWS charges unexpectedly
- I will only run the service during business hours
Possible solutions
- Use Amazon RDS for SQL Server. The service takes care of everything, including backups and patch updates, and it is a very reliable offering. But fulfilling my last requirement would mean a lot of work, since an RDS instance can't be stopped; you can only terminate it (yes, you can take a snapshot before terminating).
- Use an EC2 instance and take snapshot backups of its EBS volume. But my EBS volume is 120 GB, much bigger than the native SQL database backup, which means storing multiple snapshots in S3 would cost more (120 GB x 7 days).
The solution I am using
- Created a maintenance plan in SQL Server to take a daily database backup
- Created an AWS CLI script to sync data from the SQL Server backup location to an S3 bucket:
- aws s3 sync \\SERVER_NAME\backup$ s3://BUCKETNAME --exclude "*" --include "*.bak"
- Created a batch job to move the local SQL Server backup data to another folder for old-data clean-up:
- move \\SERVER_NAME\backup$\*.* \\SERVER_NAME\backup$\movedS3
- Created a maintenance plan in SQL Server to delete older files from the movedS3 folder, which keeps unwanted data growth under control
- Created a lifecycle policy to delete older files from my S3 bucket (see the sketch below)
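The lifecycle policy can be set from the AWS console, but it can also be scripted. A minimal sketch from any machine with the AWS CLI and a POSIX shell, assuming the bucket is BUCKETNAME and backups should expire after 7 days (the rule ID and file name are placeholders):

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-backups",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Expiration": { "Days": 7 }
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket BUCKETNAME --lifecycle-configuration file://lifecycle.json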
What this solution will ensure
- First of all, I can sleep tight at night; I don't need to worry about my backup data. 😉
- S3 gives me 99.999999999% data durability, which means my S3 data stays accessible even during an AWS availability zone failure, because S3 replicates data across multiple availability zones.
- S3 is one of the cheapest cloud data storage solutions; that's why Dropbox can dare to give you so much storage for free. 😉
AWS RDS IAM Policy for Read Only Access and DB Logs Download
{ "Version": "2012-10-17", "Statement": [ { "Action": [ "rds:Describe*", "rds:ListTagsForResource", "rds:Download*", "ec2:DescribeAccountAttributes", "ec2:DescribeAvailabilityZones", "ec2:DescribeSecurityGroups", "ec2:DescribeVpcs" ], "Effect": "Allow", "Resource": "*" }, { "Action": [ "cloudwatch:GetMetricStatistics", "logs:DescribeLogStreams", "logs:GetLogEvents" ], "Effect": "Allow", "Resource": "*" } ] }
AWS EC2 Instance Metadata
There is an easy way to access instance information from within the instance itself, which is very useful for scripts that run inside the instance. The method is to query the instance metadata service with an HTTP GET call to the IP 169.254.169.254. This works on any EC2 instance, and the IP address is always the same.
- Obtaining the instance ID:
$ wget -q -O - http://169.254.169.254/latest/meta-data/instance-id
i-87eef4e2
- Public hostname:
$ wget -q -O - http://169.254.169.254/latest/meta-data/public-hostname
ec2-50-17-85-234.compute-1.amazonaws.com
- Public IPv4 address:
$ wget -q -O - http://169.254.169.254/latest/meta-data/public-ipv4
50.17.85.234
$ ec2-describe-instances `wget -q -O - http://169.254.169.254/latest/meta-data/instance-id` --show-empty-fields | grep TAG
TAG instance i-87eef4e2 Another Tag Another Value
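Because every field is just an HTTP GET away, these endpoints compose nicely in scripts. A small sketch that gathers a few fields; the placement/availability-zone path is a standard metadata path not shown above:

#!/bin/sh
# Collect a few instance facts from the metadata service.
MD=http://169.254.169.254/latest/meta-data
INSTANCE_ID=$(wget -q -O - $MD/instance-id)
AZ=$(wget -q -O - $MD/placement/availability-zone)
PUBLIC_IP=$(wget -q -O - $MD/public-ipv4)
echo "Instance $INSTANCE_ID in $AZ has public IP $PUBLIC_IP"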
Setup AWS CloudWatch Memory and Drive Monitoring on RHEL
Download Scripts
Install Prerequisite Packages
sudo yum install wget unzip perl-core perl-DateTime perl-Sys-Syslog perl-CPAN perl-libwww-perl perl-Crypt-SMIME perl-Crypt-SSLeay
Install LWP Perl Bundles
- Launch cpan:
sudo perl -MCPAN -e shell
- Install the bundle:
install Bundle::LWP6 LWP YAML
Install Script
wget http://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.1.zip
unzip CloudWatchMonitoringScripts-1.2.1.zip -d /opt
rm -f CloudWatchMonitoringScripts-1.2.1.zip
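The zip unpacks into an aws-scripts-mon directory, so a quick check that the main script landed where the rest of this guide expects it:

ls /opt/aws-scripts-mon/mon-put-instance-data.pl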
Setup Credentials
API Access Key (Option 1)
This is good for testing, but it’s better to use IAM roles covered in Option 2.
- Copy the awscreds template:
cp /opt/aws-scripts-mon/awscreds.template /opt/aws-scripts-mon/awscreds.conf
- Add your access key ID and secret access key:
vim /opt/aws-scripts-mon/awscreds.conf
- Lock down file access:
chmod 0400 /opt/aws-scripts-mon/awscreds.conf
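Filled in, awscreds.conf is just two key=value lines (the values below are placeholders):

AWSAccessKeyId=YOUR_ACCESS_KEY_ID
AWSSecretKey=YOUR_SECRET_ACCESS_KEY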
IAM Role (Option 2)
- Login to AWS web console
- Select Identity & Access Management
- Select Roles | Create New Role
- Enter Role Name
- e.g., ec2-cloudwatch
- Select Next Step
- Select Amazon EC2
- Search for cloudwatch
- Select CloudWatchFullAccess
- Select Next Step | Create Role
- Launch a new instance and assign the ec2-cloudwatch IAM role
You cannot add an IAM role to an existing EC2 instance; you can only specify a role when you launch a new instance.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html?console_help=true
Test
This performs a dry run and won't send data to CloudWatch.
/opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --verify --verbose
Example
MemoryUtilization: 31.7258903184253 (Percent)
Using AWS credentials file <./awscreds.conf>
Endpoint: https://monitoring.us-west-2.amazonaws.com
Payload: {"MetricData":[{"Timestamp":1443537153,"Dimensions":[{"Value":"i-12e1fac4","Name":"InstanceId"}],"Value":31.7258903184253,"Unit":"Percent","MetricName":"MemoryUtilization"}],"Namespace":"System/Linux","__type":"com.amazonaws.cloudwatch.v2010_08_01#PutMetricDataInput"}
Verification completed successfully. No actual metrics sent to CloudWatch.
Report to CloudWatch Test
Test that communication to CloudWatch works and design the command you'll want to cron out in the next step.
/opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail
After you run this command, one point-in-time metric should show up for the instance under CloudWatch | Linux System.
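If you have the AWS CLI configured, you can also confirm the metric arrived without opening the console; this assumes the CLI's default region matches the instance's region:

aws cloudwatch list-metrics --namespace System/Linux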
Create Cron Task (as root)
Now that you've tested the command and figured out what you want to report, it's time to add a cron task so it runs every X minutes. Usually 5 minutes is good.
- Edit the cron table:
crontab -e
- Add the cron job:
*/5 * * * * /opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --disk-space-util --disk-path=/ --from-cron
Create Cron Task (as other user)
You may want to create a dedicated user to run the cron job. Here's an example using a user named cloudwatch.
- Create the user:
useradd cloudwatch
- Disable user login:
usermod -s /sbin/nologin cloudwatch
- Set ownership:
chown -R cloudwatch:cloudwatch /opt/aws-scripts-mon
- Edit the user's cron table:
crontab -e -u cloudwatch
- Add the cron job:
*/5 * * * * /opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --swap-used --disk-space-util --disk-path=/ --from-cron
Verify Cron Job Ran
One way to verify the cron job ran is to look in the cron log.
less /var/log/cron
tail -f /var/log/cron
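For example, to see just this script's most recent entries:

grep mon-put-instance-data /var/log/cron | tail -n 5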
References
Monitor Script Arguments
Name | Description
---|---
--mem-util | Collects and sends the MemoryUtilization metric in percentages. This option reports only memory allocated by applications and the operating system, and excludes memory in cache and buffers.
--mem-used | Collects and sends the MemoryUsed metric, reported in megabytes. This option reports only memory allocated by applications and the operating system, and excludes memory in cache and buffers.
--mem-avail | Collects and sends the MemoryAvailable metric, reported in megabytes. This option reports memory available for use by applications and the operating system.
--swap-util | Collects and sends the SwapUtilization metric, reported in percentages.
--swap-used | Collects and sends the SwapUsed metric, reported in megabytes.
--disk-path=PATH | Selects the disk on which to report. PATH can specify a mount point or any file located on a mount point for the filesystem that needs to be reported. To select multiple disks, specify a --disk-path=PATH for each one. For example, to select the filesystems mounted on / and /home, use: --disk-path=/ --disk-path=/home
--disk-space-util | Collects and sends the DiskSpaceUtilization metric for the selected disks. The metric is reported in percentages.
--disk-space-used | Collects and sends the DiskSpaceUsed metric for the selected disks. The metric is reported by default in gigabytes. Due to reserved disk space in Linux operating systems, disk space used and disk space available might not accurately add up to the amount of total disk space.
--disk-space-avail | Collects and sends the DiskSpaceAvailable metric for the selected disks. The metric is reported in gigabytes. Due to reserved disk space in Linux operating systems, disk space used and disk space available might not accurately add up to the amount of total disk space.
--memory-units=UNITS | Specifies the units in which to report memory usage. If not specified, memory is reported in megabytes. UNITS may be one of: bytes, kilobytes, megabytes, gigabytes.
--disk-space-units=UNITS | Specifies the units in which to report disk space usage. If not specified, disk space is reported in gigabytes. UNITS may be one of: bytes, kilobytes, megabytes, gigabytes.
--aws-credential-file=PATH | Provides the location of the file containing AWS credentials. This parameter cannot be used with the --aws-access-key-id and --aws-secret-key parameters.
--aws-access-key-id=VALUE | Specifies the AWS access key ID to use to identify the caller. Must be used together with the --aws-secret-key option. Do not use this option with the --aws-credential-file parameter.
--aws-secret-key=VALUE | Specifies the AWS secret access key to use to sign the request to CloudWatch. Must be used together with the --aws-access-key-id option. Do not use this option with the --aws-credential-file parameter.
--verify | Performs a test run of the script that collects the metrics and prepares a complete HTTP request, but does not actually call CloudWatch to report the data. This option also checks that credentials are provided. When run in verbose mode, this option outputs the metrics that will be sent to CloudWatch.
--from-cron | Use this option when calling the script from cron. When this option is used, all diagnostic output is suppressed, but error messages are sent to the local system log of the user account.
--verbose | Displays detailed information about what the script is doing.
--help | Displays usage information.
--version | Displays the version number of the script.
Prepare a RHEL-Based Virtual Machine for Azure
Today we got a project to prepare RHEL VHDs for Azure. I did not find any documentation specifically for RHEL on Azure, so I am writing up the steps I followed.
Prerequisites
CentOS Installation Notes
- The newer VHDX format is not supported in Azure. You can convert the disk to VHD format using Hyper-V Manager or the Convert-VHD cmdlet (see the qemu-img sketch after this list for a Linux-side alternative).
- When installing the Linux system it is recommended that you use standard partitions rather than LVM (often the default for many installations). This will avoid LVM name conflicts with cloned VMs, particularly if an OS disk ever needs to be attached to another VM for troubleshooting. LVM or RAID may be used on data disks if preferred.
- NUMA is not supported for larger VM sizes due to a bug in Linux kernel versions below 2.6.37. This issue primarily impacts distributions using the upstream Red Hat 2.6.32 kernel. Manual installation of the Azure Linux agent (waagent) will automatically disable NUMA in the GRUB configuration for the Linux kernel. More information about this can be found in the steps below.
- Do not configure a swap partition on the OS disk. The Linux agent can be configured to create a swap file on the temporary resource disk. More information about this can be found in the steps below.
- All of the VHDs must have sizes that are multiples of 1 MB.
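As a Linux-side alternative to the Hyper-V conversion mentioned in the first note, qemu-img can also produce the fixed-size VHD that Azure expects; a sketch with placeholder file names, not part of the Microsoft-documented path:

qemu-img convert -f vhdx -O vpc -o subformat=fixed disk.vhdx disk.vhd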
RHEL 6.5
- In Hyper-V Manager, select the virtual machine.
- Click Connect to open a console window for the virtual machine.
- Uninstall NetworkManager by running the following command:
# sudo rpm -e --nodeps NetworkManager
Note: If the package is not already installed, this command will fail with an error message. This is expected.
- Create a file named network in the /etc/sysconfig/ directory that contains the following text:
NETWORKING=yes
HOSTNAME=localhost.localdomain
- Create a file named ifcfg-eth0 in the /etc/sysconfig/network-scripts/ directory that contains the following text:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
USERCTL=no
PEERDNS=yes
IPV6INIT=no
- Move (or remove) udev rules to avoid generating static rules for the Ethernet interface. These rules cause problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
# sudo mkdir -m 0700 /var/lib/waagent
# sudo mv /lib/udev/rules.d/75-persistent-net-generator.rules /var/lib/waagent/
# sudo mv /etc/udev/rules.d/70-persistent-net.rules /var/lib/waagent/
- Ensure the network service will start at boot time by running the following command:
# sudo chkconfig network on
- Install the python-pyasn1 package by running the following command:
# sudo yum install python-pyasn1
- If you would like to use the OpenLogic mirrors that are hosted within the Azure datacenters, then replace the /etc/yum.repos.d/CentOS-Base.repo file with the following repositories. This will also add the [openlogic] repository that includes packages for the Azure Linux agent:
[openlogic]
name=CentOS-$releasever - openlogic packages for $basearch
baseurl=http://olcentgbl.trafficmanager.net/openlogic/6/openlogic/$basearch/
enabled=1
gpgcheck=0

[base]
name=CentOS-$releasever - Base
baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Note: The rest of this guide will assume you are using at least the [openlogic] repo, which will be used to install the Azure Linux agent below.
- Add the following line to /etc/yum.conf:
http_caching=packages
- Run the following command to clear the current yum metadata:
# yum clean all
- Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open /boot/grub/menu.lst in a text editor and ensure that the default kernel includes the following parameters:
console=ttyS0 earlyprintk=ttyS0 rootdelay=300 numa=off
This will also ensure all console messages are sent to the first serial port, which can assist Azure support with debugging issues. This will disable NUMA due to a bug in the kernel version used by RHEL 6.
In addition to the above, it is recommended to remove the following parameters:
rhgb quiet crashkernel=auto
Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the serial port.
The crashkernel option may be left configured if desired, but note that this parameter reduces the amount of available memory in the VM by 128 MB or more, which may be problematic on the smaller VM sizes.
- Ensure that the SSH server is installed and configured to start at boot time. This is usually the default.
- Disable swap: comment out the swap entry in /etc/fstab:
# blkid | grep swap
/dev/sda3: UUID="53-e0e3efe22612" TYPE="swap"
# swapoff /dev/sda3
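One way to comment out that swap line without opening an editor; a sketch, so back up /etc/fstab first:

# cp /etc/fstab /etc/fstab.bak
# sed -i '/\sswap\s/ s/^/#/' /etc/fstab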
- Install the Azure Linux Agent by running the following command:
# sudo yum install WALinuxAgent
Note that installing the WALinuxAgent package will remove the NetworkManager and NetworkManager-gnome packages if they were not already removed as described above.
- Do not create swap space on the OS disk. The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. Note that the local resource disk is a temporary disk and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see the previous step), modify the following parameters in /etc/waagent.conf appropriately:
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=8192   ## NOTE: set this to whatever you need it to be.
- Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
# sudo waagent -force -deprovision
# export HISTSIZE=0
# logout
- Click Action -> Shut Down in Hyper-V Manager. Your Linux VHD is now ready to be uploaded to Azure.
Microsoft Azure Set a Static Internal IP Address for a VM
Before you specify a static IP address from your address pool, you may want to verify that the IP address has not already been assigned. In the example below, we're checking whether the IP address 10.1.61.140 is available in the TestVNet virtual network.
Test-AzureStaticVNetIP -VNetName TestVNet -IPAddress 10.1.61.140
Be sure to change the variables for the cmdlets to reflect what you require for your environment before running them.
New-AzureVMConfig -Name $vmname -ImageName $img -InstanceSize Small | Set-AzureSubnet -SubnetNames $sub | Set-AzureStaticVNetIP -IPAddress 10.1.61.140 | New-AzureVM -ServiceName $vmsvc1 -VNetName TestVNet
If you want to set a static IP address for a VM that you previously created, you can do so by using the following cmdlets. If you already set an IP address for the VM and you want to change it to a different IP address, you’ll need to remove the existing static IP address before running these cmdlets. See the instructions below to remove a static IP.
For this procedure, you’ll use the Update-AzureVM cmdlet. The Update-AzureVM cmdlet restarts the VM as part of the update process. The DIP that you specify will be assigned after the VM restarts. In this example, we set the IP address for VM2, which is located in cloud service StaticDemo.
Get-AzureVM -ServiceName StaticDemo -Name VM2 | Set-AzureStaticVNetIP -IPAddress 10.1.61.140 | Update-AzureVM
When you remove a static IP address from a VM, the VM will automatically receive a new DIP after the VM restarts as part of the update process. In the example below, we remove the static IP from VM2, which is located in cloud service StaticDemo.
Get-AzureVM -ServiceName StaticDemo -Name VM2 | Remove-AzureStaticVNetIP | Update-AzureVM