
Operating System Tuning for Oracle Database

This chapter describes how to tune Oracle Database. It contains the following sections:

  • Importance of Tuning
  • Operating System Tools
  • Tuning Memory Management
  • Tuning Disk I/O
  • Monitoring Disk Performance
  • System Global Area
  • Tuning the Operating System Buffer Cache

1.1 Importance of Tuning

Oracle Database is a highly optimizable software product. Frequent tuning optimizes system performance and prevents data bottlenecks.

Before tuning the database, you must observe its normal behavior by using the tools described in the “Operating System Tools” section.

1.2 Operating System Tools

Several operating system tools are available to enable you to assess database performance and determine database requirements. In addition to providing statistics for Oracle processes, these tools provide statistics for CPU usage, interrupts, swapping, paging, context switching, and I/O for the entire system.

This section provides information about the following common tools:

  • vmstat
  • sar
  • iostat
  • swap, swapinfo, swapon, or lsps
  • AIX Tools
  • HP-UX Tools
  • Linux Tools
  • Solaris Tools
  • Mac OS X Tools

See Also:

The operating system documentation and man pages for more information about these tools

1.2.1 vmstat


Note: On Mac OS X, the vm_stat command displays virtual memory information. Refer to the vm_stat man page for more information about using this command.

Use the vmstat command to view process, virtual memory, disk, trap, and CPU activity, depending on the switches that you supply with the command. Run one of the following commands to display a summary of CPU activity six times, at five-second intervals:

  • On HP-UX and Solaris:

    $ vmstat -S 5 6

  • On AIX, Linux, and Tru64 UNIX:

    $ vmstat 5 6

The following is sample output of this command on HP-UX:

procs     memory            page            disk          faults      cpu
 r b w   swap  free  si  so pi po fr de sr f0 s0 s1 s3   in   sy   cs us sy id
 0 0 0   1892  5864   0   0  0  0  0  0  0  0  0  0  0   90   74   24  0  0 99
 0 0 0  85356  8372   0   0  0  0  0  0  0  0  0  0  0   46   25   21  0  0 100
 0 0 0  85356  8372   0   0  0  0  0  0  0  0  0  0  0   47   20   18  0  0 100
 0 0 0  85356  8372   0   0  0  0  0  0  0  0  0  0  2   53   22   20  0  0 100
 0 0 0  85356  8372   0   0  0  0  0  0  0  0  0  0  0   87   23   21  0  0 100
 0 0 0  85356  8372   0   0  0  0  0  0  0  0  0  0  0   48   41   23  0  0 100

The w subcolumn, under the procs column, shows the number of potential processes that have been swapped out and written to disk. If the value is not zero, then the system is swapping and is short of memory.

The si and so columns under the page column indicate the number of swap-ins and swap-outs per second, respectively. Swap-ins and swap-outs should always be zero.

The sr column under the page column indicates the scan rate. High scan rates are caused by a shortage of available memory.

The pi and po columns under the page column indicate the number of page-ins and page-outs per second, respectively. It is normal for the number of page-ins and page-outs to increase. Some paging always occurs even on systems with sufficient available memory.
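
The checks above can be scripted. The following is a minimal sketch that scans vmstat output for the swapping indicators just described (the w, si, and so columns); the field positions are assumptions based on the HP-UX sample layout and differ across platforms:

```shell
# Scan vmstat-style output for swapping indicators. Field positions here
# ($3 = w, $6 = si, $7 = so) are assumptions tied to the HP-UX sample
# layout above; adjust them for your platform.
vmstat_sample='0 0 0 1892 5864 0 0 0 0 0 0 0 0 0 0 0 90 74 24 0 0 99
0 0 3 85356 8372 5 2 0 0 0 0 0 0 0 0 0 46 25 21 0 0 100'

echo "$vmstat_sample" | awk '
  $3 > 0 || $6 > 0 || $7 > 0 { print "possible swapping: w=" $3 " si=" $6 " so=" $7 }
'
```

In a real check, you would pipe live `vmstat 5 6` output into the same awk filter instead of the canned sample.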


Note: The output from the vmstat command differs across platforms.

See Also:

Refer to the man page for information about interpreting the output

1.2.2 sar

Depending on the switches that you supply with the command, use the sar (system activity reporter) command to display cumulative activity counters in the operating system.


Note: On Tru64 UNIX systems, the sar command is available in the UNIX SVID2 compatibility subset, OSFSVID.

On an HP-UX system, the following command displays a summary of I/O activity ten times, at ten-second intervals:

$ sar -b 10 10

The following example shows the output of this command:

13:32:45 bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s
13:32:55       0      14     100       3      10      69       0       0
13:33:05       0      12     100       4       4       5       0       0
13:33:15       0       1     100       0       0       0       0       0
13:33:25       0       1     100       0       0       0       0       0
13:33:35       0      17     100       5       6       7       0       0
13:33:45       0       1     100       0       0       0       0       0
13:33:55       0       9     100       2       8      80       0       0
13:34:05       0      10     100       4       4       5       0       0
13:34:15       0       7     100       2       2       0       0       0
13:34:25       0       0     100       0       0     100       0       0

Average        0       7     100       2       4      41       0       0

The sar output provides a snapshot of system I/O activity at a given point in time. If you specify more than one option, then the output can become difficult to read. If you specify an interval time of less than 5 seconds, then the sar activity itself can affect the output.

See Also:

The man page for more information about sar

1.2.3 iostat

Use the iostat command to view terminal and disk activity, depending on the switches that you supply with the command. The output from the iostat command does not include disk request queues, but it shows which disks are busy. This information can be used to balance I/O loads.

The following command displays terminal and disk activity five times, at five-second intervals:

$ iostat 5 5

The following is sample output of the command on Solaris:

tty          fd0           sd0           sd1           sd3          cpu
 tin tout Kps tps serv  Kps tps serv  Kps tps serv  Kps tps serv  us sy wt id
   0    1   0   0    0    0   0   31    0   0   18    3   0   42   0  0  0 99
   0   16   0   0    0    0   0    0    0   0    0    1   0   14   0  0  0 100
   0   16   0   0    0    0   0    0    0   0    0    0   0    0   0  0  0 100
   0   16   0   0    0    0   0    0    0   0    0    0   0    0   0  0  0 100
   0   16   0   0    0    0   0    0    2   0   14   12   2   47   0  0  1 98

Use the iostat command to look for large disk request queues. A request queue shows how long the I/O requests on a particular disk device must wait to be serviced. Request queues are caused by a high volume of I/O requests to that disk or by I/O with long average seek times. Ideally, disk request queues should be at or near zero.
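
As a small illustration, the per-disk service times can be pulled out of a line of the Solaris sample above with awk. The field positions are assumptions tied to that sample's four-disk layout (tin tout, then a Kps/tps/serv triplet for each of fd0, sd0, sd1, and sd3):

```shell
# Extract the serv (average service time, ms) field for each disk from one
# line of the Solaris iostat sample. Positions assume the four-disk layout
# shown above.
line='0    1   0   0    0    0   0   31    0   0   18    3   0   42   0  0  0 99'
echo "$line" | awk '{
  print "fd0:", $5 "ms  sd0:", $8 "ms  sd1:", $11 "ms  sd3:", $14 "ms"
}'
```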

1.2.4 swap, swapinfo, swapon, or lsps

See Also:

“Determining Available and Used Swap Space” for information about swap space on Mac OS X systems

Use the swap, swapinfo, swapon, or lsps command to report information about swap space usage. A shortage of swap space can cause processes to stop responding and fail with Out of Memory errors. The following table lists the appropriate command to use for each platform.

Platform               Command
AIX                    lsps -a
HP-UX                  swapinfo -m
Linux and Tru64 UNIX   swapon -s
Solaris                swap -l and swap -s
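
The per-platform commands in the table can be wrapped in one helper. This is a sketch only; the uname identifiers (AIX, HP-UX, SunOS for Solaris, OSF1 for Tru64 UNIX) are assumptions to verify on your systems:

```shell
# Map each platform to its documented swap-reporting command. The uname
# strings used in the case labels are assumptions; confirm them locally.
show_swap() {
  case "$(uname -s)" in
    AIX)        lsps -a ;;
    HP-UX)      swapinfo -m ;;
    Linux|OSF1) swapon -s ;;
    SunOS)      swap -l && swap -s ;;
    *)          echo "unsupported platform: $(uname -s)" >&2; return 1 ;;
  esac
}
show_swap || true
```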


The following example shows sample output from the swap -l command on Solaris:

swapfile             dev        swaplo blocks        free
/dev/dsk/c0t3d0s1    32,25      8      197592        162136

1.2.5 AIX Tools

The following sections describe tools available on AIX systems.

  • Base Operating System Tools
  • Performance Toolbox
  • System Management Interface Tool

See Also:

The AIX operating system documentation and man pages for more information about these tools

Base Operating System Tools

The AIX Base Operating System (BOS) contains performance tools that are historically part of UNIX systems or are required to manage the implementation-specific features of AIX. The following table lists the most important BOS tools.

Tool Function
lsattr Displays the attributes of devices
lslv Displays information about a logical volume or the logical volume allocations of a physical volume
netstat Displays the contents of network-related data structures
nfsstat Displays statistics about Network File System (NFS) and Remote Procedure Call (RPC) activity
nice Changes the initial priority of a process
no Displays or sets network options
ps Displays the status of one or more processes
reorgvg Reorganizes the physical-partition allocation within a volume group
time Displays the elapsed execution, user CPU processing, and system CPU processing time
trace Records and reports selected system events
vmo Manages Virtual Memory Manager tunable parameters

Performance Toolbox

The AIX Performance Toolbox (PTX) contains tools for monitoring and tuning system activity locally and remotely. PTX consists of two main components, the PTX Manager and the PTX Agent. The PTX Manager collects and displays data from various systems in the configuration by using the xmperf utility. The PTX Agent collects and transmits data to the PTX Manager by using the xmserd daemon. The PTX Agent is also available as a separate product called Performance Aide for AIX.

Both PTX and Performance Aide include the monitoring and tuning tools listed in the following table.

Tool Description
fdpr Optimizes an executable program for a particular workload
filemon Uses the trace facility to monitor and report the activity of the file system
fileplace Displays the placement of blocks of a file within logical or physical volumes
lockstat Displays statistics about contention for kernel locks
lvedit Facilitates interactive placement of logical volumes within a volume group
netpmon Uses the trace facility to report on network I/O and network-related CPU usage
rmss Simulates systems with various memory sizes for performance testing
svmon Captures and analyzes information about virtual-memory usage
syscalls Records and counts system calls
tprof Uses the trace facility to report CPU usage at module and source-code-statement levels
BigFoot Reports the memory access patterns of processes
stem Permits subroutine-level entry and exit instrumentation of existing executables


See Also:

  • Performance Toolbox for AIX Guide and Reference for information about these tools
  • AIX 5L Performance Management Guide for information about the syntax of some of these tools

System Management Interface Tool

The AIX System Management Interface Tool (SMIT) provides a menu-driven interface to various system administrative and performance tools. By using SMIT, you can navigate through large numbers of tools and focus on the jobs that you want to perform.

1.2.6 HP-UX Tools

The following performance analysis tools are available on HP-UX systems:

  • GlancePlus/UX

This HP-UX utility is an online diagnostic tool that measures the activities of the system. GlancePlus displays information about how system resources are used. It displays dynamic information about the system I/O, CPU, and memory usage on a series of screens. You can use the utility to monitor how individual processes are using resources.

  • HP PAK

HP Programmer’s Analysis Kit (HP PAK) consists of the following tools:

  • Puma

This tool collects performance statistics during a program run. It provides several graphical displays for viewing and analyzing the collected statistics.

  • Thread Trace Visualizer (TTV)

This tool displays trace files produced by the instrumented thread library in a graphical format. It enables you to view how threads are interacting and to find where threads are blocked waiting for resources.

HP PAK is bundled with the HP Fortran 77, HP Fortran 90, HP C, HP C++, HP ANSI C++, and HP Pascal compilers.

The following table lists the performance tuning tools that you can use for additional performance tuning on HP-UX.

Tools Function
caliper (Itanium only) Collects run-time application data for performance analysis tasks such as cache misses, translation look-aside buffer (TLB) misses, and instruction cycles, and supports fast dynamic instrumentation. It is a dynamic performance measurement tool for C, C++, Fortran, and assembly applications.
gprof Creates an execution profile for programs.
monitor Monitors the program counter and calls to certain functions.
netfmt Monitors the network.
netstat Reports statistics on network performance.
nfsstat Displays statistics about Network File System (NFS) and Remote Procedure Call (RPC) activity.
nettl Captures network events or packets by logging and tracing.
prof Creates an execution profile of C programs and displays performance statistics for your program, showing where your program is spending most of its execution time.
profil Copies program counter information into a buffer.
top Displays the top processes on the system and periodically updates the information.


1.2.7 Linux Tools

On Linux systems, use the top, free, and cat /proc/meminfo commands to view information about swap space, memory, and buffer usage.
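
For example, the key memory and swap figures can be read directly from /proc/meminfo without an interactive tool (values are reported in kB on Linux):

```shell
# Pull the headline memory and swap figures out of /proc/meminfo and
# convert them from kB to MB.
awk '/^(MemTotal|MemFree|SwapTotal|SwapFree|Buffers|Cached):/ {
  printf "%-10s %8.1f MB\n", $1, $2 / 1024
}' /proc/meminfo
```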

1.2.8 Solaris Tools

On Solaris systems, use the mpstat command to view statistics for each processor in a multiprocessor system. Each row of the table represents the activity of one processor. The first row summarizes all activity since the last system restart. Each subsequent row summarizes activity for the preceding interval. All values are events per second unless otherwise noted. The arguments are for time intervals between statistics and number of iterations.

The following example shows sample output from the mpstat command:

CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    0   0    1    71   21   23    0    0    0    0    55    0   0   0  99
  2    0   0    1    71   21   22    0    0    0    0    54    0   0   0  99
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    0   0    0    61   16   25    0    0    0    0    57    0   0   0 100
  2    1   0    0    72   16   24    0    0    0    0    59    0   0   0 100

1.2.9 Mac OS X Tools

You can use the following additional performance tuning tools:

  • Use the top command to display information about running processes and memory usage.
  • Use the Apple Computer Hardware Understanding Developer (CHUD) tools, such as Shark and BigTop, to monitor system activity and tune applications.

See Also:

The Apple documentation for more information about the CHUD tools

1.3 Tuning Memory Management

Start the memory tuning process by measuring paging and swapping space to determine how much memory is available. After you determine your system memory usage, tune the Oracle buffer cache.

The Oracle buffer manager ensures that the most frequently accessed data is cached longer. If you monitor the buffer manager and tune the buffer cache, then you can significantly improve Oracle Database performance. The optimal Oracle Database buffer size for your system depends on the overall system load and the relative priority of Oracle Database over other applications.

This section includes the following topics:

  • Allocating Sufficient Swap Space
  • Controlling Paging
  • Adjusting Oracle Block Size

1.3.1 Allocating Sufficient Swap Space

Try to minimize swapping because it causes significant operating system overhead. To check for swapping, use the sar or vmstat commands. For information about the appropriate options to use with these commands, refer to the man pages.

If your system is swapping and you must conserve memory, then:

  • Avoid running unnecessary system daemon processes or application processes.
  • Decrease the number of database buffers to free some memory.
  • Decrease the number of operating system file buffers, especially if you are using raw devices.


Note: On Mac OS X systems, swap space is allocated dynamically. If the operating system requires more swap space, then it creates additional swap files in the /private/var/vm directory. Ensure that the file system that contains this directory has sufficient free disk space to accommodate additional swap files. Refer to “Determining Available and Used Swap Space” for more information about allocating swap space.

To determine the amount of swap space, run one of the following commands, depending on your platform:

Platform     Command
AIX          lsps -a
HP-UX        swapinfo -m
Linux        swapon -s
Solaris      swap -l and swap -s
Tru64 UNIX   swapon -s


To add swap space to your system, run one of the following commands, depending on your platform:

Platform     Command
AIX          chps or mkps
HP-UX        swapon
Linux        swapon -a
Solaris      swap -a
Tru64 UNIX   swapon -a


Set the swap space to between two and four times the physical memory. Monitor the use of swap space, and increase it as required.
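
The two-to-four-times guideline can be computed directly. The sketch below uses the Linux /proc/meminfo interface as an assumption; MemTotal approximates installed physical memory:

```shell
# Compute the suggested swap range (2x to 4x physical memory) from
# /proc/meminfo. MemTotal is an approximation of installed RAM.
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "physical memory : $((ram_kb / 1024)) MB"
echo "suggested swap  : $((ram_kb * 2 / 1024)) MB to $((ram_kb * 4 / 1024)) MB"
```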

See Also:

The operating system documentation for more information about these commands

1.3.2 Controlling Paging

Paging may not present as serious a problem as swapping, because an entire program does not have to be stored in memory to run. A small number of page-outs may not noticeably affect the performance of your system.

To detect excessive paging, run measurements during periods of fast response or idle time to compare against measurements from periods of slow response.

Use the vmstat (vm_stat on Mac OS X) or sar command to monitor paging.

See Also:

The man pages or your operating system documentation for information about interpreting the results for your platform

The following table lists the important columns from the output of these commands.

Platform Column Function
Solaris vflt/s Indicates the number of address translation page faults. Address translation faults occur when a process refers to a valid page not in memory.
Solaris rclm/s Indicates the number of valid pages that have been reclaimed and added to the free list by page-out activity. This value should be zero.
HP-UX at Indicates the number of address translation page faults. Address translation faults occur when a process refers to a valid page not in memory.
HP-UX re Indicates the number of valid pages that have been reclaimed and added to the free list by page-out activity. This value should be zero.


If your system consistently has excessive page-out activity, then consider the following solutions:

  • Install more memory.
  • Move some of the work to another system.
  • Configure the System Global Area (SGA) to use less memory.

1.3.3 Adjusting Oracle Block Size

During read operations, entire operating system blocks are read from the disk. If the database block size is smaller than the operating system file system block size, then I/O bandwidth is inefficient. If you set Oracle Database block size to be a multiple of the file system block size, then you can increase performance by up to 5 percent.

The DB_BLOCK_SIZE initialization parameter sets the database block size. However, to change the value of this parameter, you must re-create the database.

To see the current value of the DB_BLOCK_SIZE parameter, run the SHOW PARAMETER DB_BLOCK_SIZE command in SQL*Plus.
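
To check the multiple relationship described above, you can compare the file system block size with a database block size. This sketch uses the GNU coreutils form of stat (a Linux assumption; BSD stat uses different flags), and the 8192-byte DB_BLOCK_SIZE is an example value:

```shell
# Compare the file system's fundamental block size with an example
# database block size. stat -fc %s is the GNU coreutils form.
fs_block=$(stat -fc %s /)
db_block=8192                     # example DB_BLOCK_SIZE value (assumption)
if [ $((db_block % fs_block)) -eq 0 ]; then
  echo "DB block ($db_block) is a multiple of FS block ($fs_block)"
else
  echo "DB block ($db_block) is NOT a multiple of FS block ($fs_block)"
fi
```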

1.4 Tuning Disk I/O

Balance I/O evenly across all available disks to reduce disk access times. For smaller databases and those not using RAID, ensure that different data files and tablespaces are distributed across the available disks.

1.4.1 Using Automatic Storage Management

If you choose to use Automatic Storage Management for database storage, then all database I/O is balanced across all available disk devices in the Automatic Storage Management disk group. Automatic Storage Management provides the performance of raw device I/O without the inconvenience of managing raw devices.

By using Automatic Storage Management, you avoid manually tuning disk I/O.

1.4.2 Choosing the Appropriate File System Type

Depending on your operating system, you can choose from a range of file system types. Each file system type has different characteristics. This fact can have a substantial impact on database performance. The following table lists common file system types.

File System Platform Description
S5 HP-UX and Solaris UNIX System V file system
UFS AIX, HP-UX, Mac OS X, Solaris, Tru64 UNIX Unified file system, derived from BSD UNIX. Note: On Mac OS X, Oracle does not recommend the use of the UFS file system for either software or database files.
VxFS AIX, HP-UX, and Solaris VERITAS file system
None All Raw devices (no file system)
ext2/ext3 Linux Extended file system for Linux
OCFS Linux Oracle cluster file system
AdvFS Tru64 UNIX Advanced file system
CFS Tru64 UNIX Cluster file system
JFS/JFS2 AIX Journaled file system
HFS Plus, HFSX Mac OS X HFS Plus is the standard hierarchical file system used by Mac OS X. HFSX is an extension to HFS Plus that enables case-sensitive file names.
GPFS AIX General parallel file system


The suitability of a file system for an application is usually not documented. For example, even different implementations of the Unified file system are hard to compare. Depending on the file system that you choose, performance differences can be up to 20 percent. If you choose to use a file system, then:

  • Make a new file system partition to ensure that the hard disk is clean and unfragmented.
  • Perform a file system check on the partition before using it for database files.
  • Distribute disk I/O as evenly as possible.
  • If you are not using a logical volume manager or a RAID device, then consider placing log files on a different file system from data files.

1.5 Monitoring Disk Performance

The following sections describe the procedure for monitoring disk performance.

Monitoring Disk Performance on Mac OS X

Use the iostat and sar commands to monitor disk performance. For more information about using these commands, refer to the man pages.

Monitoring Disk Performance on Other Operating Systems

To monitor disk performance, use the sar -b and sar -u commands.

The following table describes the columns of the sar -b command output that are significant for analyzing disk performance.

Columns Description
bread/s, bwrit/s Blocks read and blocks written per second (important for file system databases)
pread/s, pwrit/s Number of reads and writes per second from or to raw character devices.


An important sar -u column for analyzing disk performance is %wio, the percentage of CPU time spent waiting on blocked I/O.


Note: Not all Linux distributions display the %wio column in the output of the sar -u command. For detailed I/O statistics, use the iostat -x command.

Key indicators are:

  • The sum of the bread, bwrit, pread, and pwrit column values indicates the level of activity of the disk I/O subsystem. The higher the sum, the busier the I/O subsystem. The larger the number of physical drives, the higher the sum threshold number can be. A good default value is no more than 40 for 2 drives and no more than 60 for 4 to 8 drives.
  • The %rcache column value should be greater than 90 and the %wcache column value should be greater than 60. Otherwise, the system may be disk I/O bound.
  • If the %wio column value is consistently greater than 20, then the system is I/O bound.
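
These indicator thresholds can be applied to a line of sar -b output. The field positions below are assumptions matching the HP-UX sample earlier (bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s after the timestamp):

```shell
# Apply the key-indicator thresholds to one line of sar -b output.
# Column order assumes the HP-UX sample layout shown above.
sar_line='Average 0 7 100 2 4 41 0 0'
echo "$sar_line" | awk '{
  activity = $2 + $5 + $8 + $9             # bread + bwrit + pread + pwrit
  print "I/O activity sum:", activity
  if ($4 < 90) print "%rcache below 90: possibly disk I/O bound"
  if ($7 < 60) print "%wcache below 60: possibly disk I/O bound"
}'
```

For the Average line of the sample output, this reports an activity sum of 2 and flags the low %wcache value.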

1.6 System Global Area

The SGA is the Oracle structure that is located in shared memory. It contains static data structures, locks, and data buffers. Sufficient shared memory must be available to each Oracle process to address the entire SGA.

The maximum size of a single shared memory segment is specified by the shmmax (shm_max on Tru64 UNIX) kernel parameter.

The following table shows the recommended value for this parameter, depending on your platform.

Platform Recommended Value
HP-UX The size of the physical memory installed on the system. See Also: “HP-UX Shared Memory Segments for an Oracle Instance” for information about the shmmax parameter on HP-UX
Linux Half the size of the physical memory installed on the system
Mac OS X Half the size of the physical memory installed on the system
Solaris and Tru64 UNIX 4294967295 or 4 GB minus 16 MB. Note: The value of the shm_max parameter must be at least 16 MB for the Oracle Database instance to start. If your system runs both Oracle9i Database and Oracle Database 10g instances, then you must set the value of this parameter to 2 GB minus 16 MB. On Solaris, this value can be greater than 4 GB on 64-bit systems.


If the size of the SGA exceeds the maximum size of a shared memory segment (shmmax or shm_max), then Oracle Database attempts to attach more contiguous segments to fulfill the requested SGA size. The shmseg kernel parameter (shm_seg on Tru64 UNIX) specifies the maximum number of segments that can be attached by any process. Set initialization parameters to control the size of the SGA.


Alternatively, set the SGA_TARGET initialization parameter to enable automatic tuning of the SGA size.

Use caution when setting values for these parameters. When values are set too high, too much of the physical memory is devoted to shared memory. This results in poor performance.

An Oracle Database configured with Shared Server requires a higher setting for the SHARED_POOL_SIZE initialization parameter, or a custom configuration that uses the LARGE_POOL_SIZE initialization parameter. If you installed the database with Oracle Universal Installer, then the value of the SHARED_POOL_SIZE parameter is set automatically by Oracle Database Configuration Assistant. However, if you created a database manually, then increase the value of the SHARED_POOL_SIZE parameter in the parameter file by 1 KB for each concurrent user.

1.6.1 Determining the Size of the SGA

You can determine the SGA size in one of the following ways:

  • Run the following SQL*Plus command to display the size of the SGA for a running database:
·         SQL> SHOW SGA

The result is shown in bytes.

  • When you start your database instance, the size of the SGA is displayed next to the Total System Global Area heading.
  • On systems other than Mac OS X, run the ipcs command as the oracle user.
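
As a hedged sketch of the ipcs approach, the listing can be filtered down to the segments owned by the oracle user; on most UNIX platforms the size column then approximates the SGA allocation:

```shell
# List shared memory segments, keeping the header lines plus any rows
# owned by the oracle user. The header is assumed to span the first
# three lines of ipcs output.
command -v ipcs >/dev/null && ipcs -m | awk 'NR <= 3 || /oracle/' || true
```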

1.6.2 Shared Memory on AIX


Note: The information in this section applies only to AIX.

Shared memory uses common virtual memory resources across processes. Processes share virtual memory segments through a common set of virtual memory translation resources, for example, tables and cached entries, for improved performance.

Shared memory can be pinned to prevent paging and to reduce I/O overhead. To do so, set the LOCK_SGA parameter to true. On AIX 5L, the same parameter activates the large page feature whenever the underlying hardware supports it.

Run the following command to make pinned memory available to Oracle Database:

$ /usr/sbin/vmo -r -o v_pinshm=1

Run a command similar to the following to set the maximum percentage of real memory available for pinned memory, where percent_of_real_memory is the maximum percent of real memory that you want to set:

$ /usr/sbin/vmo -r -o maxpin%=percent_of_real_memory

When using the maxpin% option, ensure that the amount of pinned memory exceeds the Oracle SGA size by at least 3 percent of the real memory on the system, leaving free pinnable memory for use by the kernel. For example, if you have 2 GB of physical memory and the SGA size is 400 MB (20 percent of the RAM), then run the following command:

$ /usr/sbin/vmo -r -o maxpin%=23
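
The arithmetic behind the 23 percent figure can be sketched as follows (the SGA's share of real memory, rounded up, plus the 3 percent headroom reserved for the kernel):

```shell
# Reproduce the maxpin% arithmetic from the example above.
ram_mb=2048   # 2 GB of physical memory
sga_mb=400    # SGA size to pin
maxpin=$(( (sga_mb * 100 + ram_mb - 1) / ram_mb + 3 ))   # ceil(%) + 3
echo "maxpin% = $maxpin"
```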

Use the svmon command to monitor the use of pinned memory during the operation of the system. Oracle Database attempts to pin memory only if the LOCK_SGA parameter is set to true.

Large Page Feature on AIX POWER4- and POWER5-Based Systems

To turn on and reserve 10 large pages, each of size 16 MB, on a POWER4 or POWER5 system, run the following command:

$ /usr/sbin/vmo -r -o lgpg_regions=10 -o lgpg_size=16777216

This command proposes running bosboot and warns that a restart is required for the changes to take effect.

Oracle recommends specifying enough large pages to contain the entire SGA. The Oracle Database instance attempts to allocate large pages when the LOCK_SGA parameter is set to true. If the SGA size exceeds the size of memory available for pinning, or large pages, then the portion of the SGA exceeding these sizes is allocated to ordinary shared memory.

See Also:

The AIX documentation for more information about enabling and tuning pinned memory and large pages

1.7 Tuning the Operating System Buffer Cache

To take full advantage of raw devices, adjust the size of Oracle Database buffer cache. If memory is limited, then adjust the operating system buffer cache.

The operating system buffer cache holds blocks of data in memory while they are being transferred from memory to disk, or from disk to memory.

Oracle Database buffer cache is the area in memory that stores Oracle Database buffers. Because Oracle Database can use raw devices, it does not use the operating system buffer cache.

If you use raw devices, then increase the size of Oracle Database buffer cache. If the amount of memory on the system is limited, then make a corresponding decrease in the operating system buffer cache size.

Use the sar command to determine which buffer caches you must increase or decrease.

See Also:

The man page on Tru64 UNIX for more information about the sar command


Note: On Tru64 UNIX, do not reduce the operating system buffer cache, because the operating system automatically resizes the amount of memory that it requires for buffering file system I/O. Restricting the operating system buffer cache can cause performance issues.

How to Set the Time Zone of the VNX Data Mover?

You can update the time zone information on the Data Mover by using simple, decipherable strings that correspond to the time zones available on the Control Station. You can also update the daylight saving time on the Data Mover for the specified time zone.

Set Data Mover or blade time zone manually

To set the time zone on a Data Mover by using the Linux time zone method, use this command syntax:

$ server_date <movername> timezone -name <timezonename>

<movername> = name of the Data Mover
<timezonename> = a Linux-style time zone specification

Note: A list of valid Linux time zones is located in the /usr/share/zoneinfo directory.

To set the time zone to Indian Standard Time and adjust the daylight saving time for a Data Mover by using the Linux method, type:

$ server_date server_2 timezone -name Asia/Kolkata
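
Before running server_date, it can be worth confirming that the time zone name actually exists under /usr/share/zoneinfo, since a misspelled name would leave the Data Mover clock wrong. This is a hedged pre-check; the server_date call itself is shown commented out because it runs only on a VNX Control Station:

```shell
# Verify a Linux-style time zone name against /usr/share/zoneinfo before
# passing it to server_date.
tz="Asia/Kolkata"
if [ -f "/usr/share/zoneinfo/$tz" ]; then
  echo "valid zone: $tz"
  # server_date server_2 timezone -name "$tz"   # Control Station only
else
  echo "unknown zone: $tz" >&2
fi
```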

How to Halt the VNX Data Movers?

The following procedure explains how to perform an orderly, timed, or immediate halt of a network server’s Data Mover or blade. This procedure applies to all VNX unified and VNX for file systems.

Note: A Data Mover for a VNX for file server is also called a blade. There is no functional difference between a Data Mover and a blade. They both serve the same purpose in a VNX for file server.

To immediately halt a Data Mover or blade, use this command syntax:

$ server_cpu <movername> -halt <time>
<movername> = name of the Data Mover or blade
<time> = when the Data Mover or blade is to be halted, specified as one of the following:
{ now | +<min> | <hour>:<min> }

To halt server_2 immediately, type:
$ server_cpu server_2 -halt now

server_2 : done

EMC NAS / VNX Health Checkup using command line

Log in as nasadmin and verify the system’s health by typing:
$ /nas/bin/nas_checkup
The checkup command reports back on the state of the Control Station, Data Movers, and storage system.
Note: This health check ensures that there are no major errors in the system that would prevent the system from being turned on during the power up process.

[nasadmin@VNXCS01 ~]$ /nas/bin/nas_checkup
Check Version:
Check Command: /nas/bin/nas_checkup
Check Log    : /nas/log/checkup-run.120807-113919.log

Control Station: Checking statistics groups database………………….. Pass
Control Station: Checking if file system usage is under limit………….. Pass
Control Station: Checking if NAS Storage API is installed correctly…….. Pass
Control Station: Checking if NAS Storage APIs match…………………… Pass
Control Station: Checking if NBS clients are started………………….. Pass
Control Station: Checking if NBS configuration exists…………………. Pass
Control Station: Checking if NBS devices are accessible……………….. Pass
Control Station: Checking if NBS service is started…………………… Pass
Control Station: Checking if PXE service is stopped…………………… Pass
Control Station: Checking if standby is up…………………………… Pass
Control Station: Checking integrity of NASDB…………………………. Pass
Control Station: Checking if primary is active……………………….. Pass
Control Station: Checking all callhome files delivered………………… Warn
Control Station: Checking resolv conf……………………………….. Pass
Control Station: Checking if NAS partitions are mounted……………….. Pass
Control Station: Checking ipmi connection……………………………. Pass
Control Station: Checking nas site eventlog configuration……………… Pass
Control Station: Checking nas sys mcd configuration…………………… Pass
Control Station: Checking nas sys eventlog configuration………………. Pass
Control Station: Checking logical volume status………………………. Pass
Control Station: Checking valid nasdb backup files……………………. Pass
Control Station: Checking root disk reserved region…………………… Pass
Control Station: Checking if RDF configuration is valid………………..  N/A
Control Station: Checking if fstab contains duplicate entries………….. Pass
Control Station: Checking if sufficient swap memory available………….. Pass
Control Station: Checking for IP and subnet configuration……………… Pass
Control Station: Checking auto transfer status……………………….. Warn
Control Station: Checking for invalid entries in etc hosts…………….. Pass
Control Station: Checking the hard drive in the control station………… Pass
Control Station: Checking if Symapi data is present…………………… Pass
Control Station: Checking if Symapi is synced with Storage System………. Pass
Blades         : Checking boot files………………………………… Pass
Blades         : Checking if primary is active……………………….. Pass
Blades         : Checking if root filesystem is too large……………… Pass
Blades         : Checking if root filesystem has enough free space……… Pass
Blades         : Checking network connectivity……………………….. Pass
Blades         : Checking status……………………………………. Pass
Blades         : Checking dart release compatibility………………….. Pass
Blades         : Checking dart version compatibility………………….. Pass
Blades         : Checking server name……………………………….. Pass
Blades         : Checking unique id…………………………………. Pass
Blades         : Checking CIFS file server configuration………………. Pass
Blades         : Checking domain controller connectivity and configuration. Pass
Blades         : Checking DNS connectivity and configuration…………… Pass
Blades         : Checking connectivity to WINS servers………………… Pass
Blades         : Checking I18N mode and unicode translation tables……… Pass
Blades         : Checking connectivity to NTP servers…………………. Warn
Blades         : Checking connectivity to NIS servers…………………. Pass
Blades         : Checking virus checker server configuration…………… Pass
Blades         : Checking if workpart is OK………………………….. Pass
Blades         : Checking if free full dump is available………………. Pass
Blades         : Checking if each primary Blade has standby……………. Pass
Blades         : Checking if Blade parameters use EMC default values……. Pass
Blades         : Checking VDM root filesystem space usage………………  N/A
Blades         : Checking if file system usage is under limit………….. Pass
Blades         : Checking slic signature…………………………….. Pass
Storage System : Checking disk emulation type………………………… Pass
Storage System : Checking disk high availability access……………….. Pass
Storage System : Checking disks read cache enabled……………………. Pass
Storage System : Checking disks and storage processors write cache enabled. Pass
Storage System : Checking if FLARE is committed………………………. Pass
Storage System : Checking if FLARE is supported………………………. Pass
Storage System : Checking array model……………………………….. Pass
Storage System : Checking if microcode is supported……………………  N/A
Storage System : Checking no disks or storage processors are failed over… Pass
Storage System : Checking that no disks or storage processors are faulted.. Pass
Storage System : Checking that no hot spares are in use……………….. Pass
Storage System : Checking that no hot spares are rebuilding……………. Pass
Storage System : Checking minimum control lun size……………………. Pass
Storage System : Checking maximum control lun size…………………….  N/A
Storage System : Checking maximum lun address limit…………………… Pass
Storage System : Checking system lun configuration……………………. Pass
Storage System : Checking if storage processors are read cache enabled….. Warn
Storage System : Checking if auto assign are disabled for all luns………  N/A
Storage System : Checking if auto trespass are disabled for all luns…….  N/A
Storage System : Checking storage processor connectivity………………. Pass
Storage System : Checking control lun ownership……………………….  N/A
Storage System : Checking if Fibre Channel zone checker is set up……….  N/A
Storage System : Checking if Fibre Channel zoning is OK………………..  N/A
Storage System : Checking if proxy arp is setup………………………. Pass
Storage System : Checking if Product Serial Number is Correct………….. Pass
Storage System : Checking SPA SPB communication………………………. Pass
Storage System : Checking if secure communications is enabled………….. Pass
Storage System : Checking if backend has mixed disk types……………… Pass
Storage System : Checking for file and block enabler………………….. Pass
Storage System : Checking if nas storage command generates discrepancies… Pass
Storage System : Checking if Repset and CG configuration are consistent…. Pass
Storage System : Checking block operating environment…………………. Pass
Storage System : Checking thin pool usage…………………………….  N/A
Storage System : Checking for domain and federations health on VNX……… Pass

One or more warnings have occurred. It is recommended that you follow the
instructions provided to correct the problem then try again.
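Because each check line ends in a result column, a saved checkup log lends itself to scripted triage. A sketch, assuming lines end in Pass, Warn, or N/A as in the output above (summarize_checkup is our own name, not an EMC tool):

```shell
# Sketch: count Pass / Warn / N/A results in a saved nas_checkup log.
summarize_checkup() {
  awk '/ Pass$/ {p++} / Warn$/ {w++} /N\/A$/ {n++}
       END { printf "pass=%d warn=%d na=%d\n", p, w, n }' "$1"
}

# Usage against a real run (path from the output above):
# summarize_checkup /nas/log/checkup-run.120807-113919.log
```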

Control Station: Check if standby is up
Information HC_CS_27389984778: The standby Control Station is
currently powered on. It will be powered off during upgrade, and then
later restarted and upgraded.


Control Station: Check all callhome files delivered
Warning HC_CS_18800050328: There are 36 undelivered Call Home
incidents and 3 scheduled Call Home files left in the
/nas/log/ConnectHome directory(es)
Action :

Check the /nas/log/connectemc/ConnectEMC log to ensure the connection
is established correctly. To test your Callhome configuration, you can
run /nas/sbin/nas_connecthome -test { -email_1 | -email_2 | -ftp_1 |
-ftp_2 | -modem_1 | -modem_2 } command. View the RSC*.xml files under
the /nas/log/ConnectHome directory(es) and inspect the CDATA content
to find out and possibly resolve the problem. To remove the call home
incidents and files, run the command "/nas/sbin/nas_connecthome
-service clear". Otherwise escalate this issue through your support
provider.

Control Station: Check auto transfer status
Warning HC_CS_18800050417: The automatic transfer feature is disabled.
Action :

EMC recommends enabling the automatic transfer feature via

/nas/tools/automaticcollection -enable

or from Unisphere:

1. Select VNX > [VNX_name] > System. Click the link for “Manage Log
Collection for File” Under Service Tasks.
2. Select Enable Automatic Transfer.
3. Click Apply.

By default, support materials will be transferred to,
but you can modify the location in the
/nas/site/automaticcollection.cfg file. For more information, search
the Knowledgebase on Powerlink as follows:
1. Log in to and go to Support >
Knowledgebase Search> Support Solutions Search.
2. Use ID emc221733 to search.

Blades : Check connectivity to NTP servers
Warning HC_DM_18800115743:
* server_2: Only one NTP server is configured. It is recommended to
define at least two different NTP servers for high availability.
If the clock of the Data Mover is not correct, Kerberos
authentication errors (time skew) may occur.
Action : Use the server_date command to define another NTP server on
the Data Mover. Read the man pages for details and examples.

Storage System : Check if storage processors are read cache enabled
Warning HC_BE_18799984735: SPA Read Cache State on VNX FCN0xxxxxxxx5
is not enabled
Action : Please contact EMC Customer Service for assistance. Include
this log with your support request.

Storage System : Check if storage processors are read cache enabled
Warning HC_BE_18799984735: SPB Read Cache State on VNX FCNxxxxxxxxx5
is not enabled
Action : Please contact EMC Customer Service for assistance. Include
this log with your support request.


[nasadmin@VNXCS01 ~]$

Fibre Channel Topologies




Point-to-Point: In this topology, devices are connected directly to each other. This topology allows the devices to communicate using the full bandwidth of the link.

Arbitrated Loop (FC-AL): In this topology, devices are attached to a shared "loop". FC-AL is analogous to the token ring topology. Each device must contend to perform I/O on the loop through a process called "arbitration", and at any given time only one device can "own" I/O on the loop. This results in a shared-bandwidth environment. Private arbitrated loops are restricted to 126 devices (plus the initiator).
Each device is identified on the loop by a unique ID called an ALPA (Arbitrated Loop Physical Address). In a loop environment, each time the topology changes (that is, when devices are added or removed), the loop has to be re-initialized by a process known as a LIP (loop initialization protocol) reset, which causes a momentary pause in I/O. For this reason, arbitrated loop environments do not scale well and are limited to a few devices. Most arbitrated loop implementations provide a star topology to the loop through a device called a hub. Hubs have won wide acceptance in JBOD (Just a Bunch of Disks) environments because, just as JBOD costs less than enterprise storage, hubs cost less than switches.

Switched Fabric (FC-SW): In this topology, each device has a unique dedicated I/O path to another device. This is accomplished with a device known as a fabric switch, which is analogous to an IP switch. When a device is physically connected to a switch port, it establishes a point-to-point connection with that port, logs into the fabric by a process called fabric logon (defined in the FC-2 layer), and registers itself with the fabric name server, a virtual database that keeps track of the devices connected to the switch. The device then sends a request to access another device connected to the same switch; in most cases this is a storage array or a tape drive. Once the request is granted, the switch notes the connection and a dedicated path is established. This path is independent of any topology changes on the switch (devices being added or removed) and provides dedicated bandwidth. Switched fabric environments use a 24-bit addressing scheme to identify devices and hence can accommodate more than 15 million devices. Switched fabric provides higher performance than arbitrated loop because the switch provides full bandwidth between the nodes in the fabric: at any given time there can be n/2 full-bandwidth connections between nodes (one connection for the initiator and one for the target).
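The scalability gap between the two topologies falls directly out of the address sizes, which a quick arithmetic check makes concrete (2^24 is the theoretical fabric address space; the number of usable addresses is somewhat lower):

```shell
# Address-space arithmetic behind the loop vs. fabric device limits.
alpa_ports=126            # valid NL_Port addresses on a private loop
fabric_ids=$((1 << 24))   # 24-bit fabric address (domain/area/port bytes)
echo "loop=$alpa_ports fabric=$fabric_ids"
```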

How to configure a VMware ESX / ESXi host with a QLogic HBA to boot from SAN in a CLARiiON environment



This procedure explains how to configure the QLogic HBA to boot ESX/ESXi from SAN. The procedure involves enabling the QLogic HBA BIOS, enabling the selectable boot, and selecting the boot LUN.


1. While booting the server, press Ctrl+Q to enter the Fast!UTIL configuration utility.

2. Perform the appropriate action depending on the number of HBAs.

One HBA: If you have only one host bus adapter (HBA), the Fast!UTIL Options page appears. Skip to Step 3.
Multiple HBAs: If you have more than one HBA, select the HBA manually:

1. In the Select Host Adapter page, use the arrow keys to position the cursor on the appropriate HBA.

2. Press Enter.

3. In the Fast!UTIL Options page, select Configuration Settings and press Enter.

4. In the Configuration Settings page, select Adapter Settings and press Enter.

5. Set the BIOS to search for SCSI devices.

a. In the Host Adapter Settings page, select Host Adapter BIOS.

b. Press Enter to toggle the value to Enabled.

c. Press Esc to exit.

6. Enable the selectable boot.

a. Select Selectable Boot Settings and press Enter.

b. In the Selectable Boot Settings page, select Selectable Boot.

c. Press Enter to toggle the value to Enabled.

7. Use the cursor keys to select the Boot Port Name entry in the list of storage processors (SPs) and press Enter to open the Select Fibre Channel Device screen.

8. Use the cursor keys to select the specific SP and press Enter.

If you are using an active-passive storage array, the selected SP must be on the preferred (active) path to the boot LUN. If you are not sure which SP is on the active path, use your storage array management software to find out. The target IDs are created by the BIOS and might change with each reboot.

9. Perform the appropriate action depending on the number of LUNs attached to the SP.

One LUN: The LUN is selected as the boot LUN. You do not need to enter the Select LUN screen.
Multiple LUNs: The Select LUN screen opens. Use the cursor to select the boot LUN, then press Enter.

EMC Client installation and checking

This web page is a quick guide on what to install and how to check that the EMC SAN is attached and working.


Install Emulex driver/firmware, san packages (SANinfo, HBAinfo, lputil), EMC powerpath
Use lputil to update firmware
Use lputil to disable boot bios
Update /kernel/drv/lpfc.conf
Update /kernel/drv/sd.conf
Install ECC agent


Note: when adding disks presented on a different FA, a server reboot may be required.

List HBAs:
  /usr/sbin/hbanyware/hbacmd listHBAs   (use to get WWNs)
  /opt/HBAinfo/bin/gethbainfo           (script wrapped around hbainfo)
  grep 'WWN' /var/adm/messages
HBA attributes:
  /opt/EMLXemlxu/bin/emlxadm
  /usr/sbin/hbanyware/hbacmd HBAAttrib 10:00:00:00:c9:49:28:47
HBA port:
  /opt/EMLXemlxu/bin/emlxadm
  /usr/sbin/hbanyware/hbacmd PortAttrib 10:00:00:00:c9:49:28:47
HBA firmware:
  /opt/EMLXemlxu/bin/emlxadm
Fabric login:
  /opt/HBAinfo/bin/gethbainfo           (script wrapped around hbainfo)
Adding additional disks:
  cfgadm -c configure c2
Disk available:
  cfgadm -al -o show_SCSI_lun
  inq                                   (use to get serial numbers)
Labelling:
  format
Partitioning:
  vxdiskadm
Filesystem:
  newfs or mkfs



Install Emulex driver, san packages (saninfo, hbanyware), firmware (lputil)
Configure /etc/modprobe.conf
Use lputil to update firmware
Use lputil to disable boot bios
Create a new ram disk so that changes to modprobe.conf can take effect.
Install ECC agent

List HBAs:
  /usr/sbin/hbanyware/hbacmd listHBAs   (use to get WWNs)
  cat /proc/scsi/lpfc/*
HBA attributes:
  /usr/sbin/hbanyware/hbacmd HBAAttrib 10:00:00:00:c9:49:28:47
  cat /sys/class/scsi_host/host*/info
HBA port:
  /usr/sbin/hbanyware/hbacmd PortAttrib 10:00:00:00:c9:49:28:47
HBA firmware:
  lputil
Fabric login:
  cat /sys/class/scsi_host/host*/state
Disk available:
  cat /proc/scsi/scsi
  fdisk -l | grep -i Disk | grep sd
  inq                                   (use to get serial numbers)
Labelling:
  parted -s /dev/sda mklabel msdos      (like labelling in Solaris)
  parted -s /dev/sda print
Partitioning:
  fdisk
Filesystem:
  mkfs -j -L <disk label> /dev/vx/dsk/datadg/vol01
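Several of the commands above are run mainly to harvest WWNs for zoning and LUN masking, so the extraction can be scripted. A sketch that pulls colon-delimited WWNs out of saved tool output; list_wwns is our own helper, and the regex assumes the 8-byte colon format shown above:

```shell
# Sketch: extract unique WWNs (xx:xx:xx:xx:xx:xx:xx:xx) from saved
# hbacmd / hbainfo output captured to a file.
list_wwns() {
  grep -Eio '([0-9a-f]{2}:){7}[0-9a-f]{2}' "$1" | sort -u
}

# Usage (hypothetical capture):
# /usr/sbin/hbanyware/hbacmd listHBAs > /tmp/hbas.txt && list_wwns /tmp/hbas.txt
```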


HBA info: /etc/powermt display
Disk info: /etc/powermt display dev=all
Rebuild /kernel/drv/emcp.conf: /etc/powercf -q
Reconfigure PowerPath using emcp.conf: /etc/powermt config
Save the configuration: /etc/powermt save
Enable and disable HBA cards (used for testing): /etc/powermt display   (get the card ID)

/etc/powermt disable hba=3072
/etc/powermt enable hba=3072
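When using powermt display for path testing as above, checking for failed paths can be scripted. This is a parsing sketch only; it assumes failed paths are reported with the word "dead" in the path state column, and dead_paths is our own helper:

```shell
# Sketch: count failed paths in captured `powermt display dev=all` output.
dead_paths() {
  grep -c 'dead' "$1"
}

# Usage (hypothetical capture):
# /etc/powermt display dev=all > /tmp/paths.txt && dead_paths /tmp/paths.txt
```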

Solaris : Emulex Firmware Upgrade

You will require the following files before you begin the upgrade:


  • solaris-2.1a18-6.02f-1a.tar
  • lpfc-6.02f-sparc.tar
  • EmlxApps300a39-Solaris.tar
1. Copy the configuration files
# cp -p /kernel/drv/lpfc.conf /kernel/drv/
# cp -p /kernel/drv/sd.conf /kernel/drv/
# cp -p /kernel/drv/st.conf /kernel/drv/
# cp -p /etc/path_to_inst /etc/
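The per-file copies in step 1 can be collapsed into a small loop. This is a sketch: the original destination names were cut off in this guide, so the .save suffix and the backup_driver_confs name are our own convention, with the directory parameterized so the logic is testable:

```shell
# Sketch: back up the Emulex-related driver config files before an upgrade.
backup_driver_confs() {
  # $1 = driver config directory (/kernel/drv on a live Solaris host)
  for f in lpfc.conf sd.conf st.conf; do
    if [ -f "$1/$f" ]; then
      cp -p "$1/$f" "$1/$f.save"
    fi
  done
}

# On the live system: backup_driver_confs /kernel/drv
```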

2. Copy the driver/firmware updates from the shared area to local disk
# mkdir /var/tmp/emulex
# cp -p /proj/gissmo/HBA/EMC/Emulex/* /var/tmp/emulex/

3. Shut down the server to single-user mode
# reboot -- -rs

4. Remove the HBAnyware package
# pkgrm HBAnyware

5. Remove the lpfc driver
# pkgrm lpfc

6. Copy back the saved path_to_inst file
# cp -p /etc/ /etc/path_to_inst

7. Untar the files containing the driver and the Emulex Application Kit
# tar xvf solaris-2.1a18-6.02f-1a.tar
# tar xvf lpfc-6.02f-sparc.tar
# pkgadd -d .
# tar xvf EmlxApps300a39-Solaris.tar
# gunzip HBAnyware-*-sparc.tar.gz
# tar xvf HBAnyware-*-sparc.tar
# pkgadd -d .                      (Note: select the HBAnyware package)

8. Revert the sd.conf file
# cp -p /kernel/drv/sd.conf /kernel/drv/sd.conf.post_upgrade
# cp -p /kernel/drv/ /kernel/drv/sd.conf

9. Convert the lpfc.conf file from version 5 to version 6
# /usr/sbin/lpfc/update_lpfc /kernel/drv/ /kernel/drv/lpfc.conf > /kernel/drv/lpfc.conf.updated
# cp -p /kernel/drv/lpfc.conf /kernel/drv/lpfc.conf_post_upgrade
# cp /kernel/drv/lpfc.conf.updated /kernel/drv/lpfc.conf

10. Reboot the system back into single-user mode
# reboot -- -rs

11. Copy the firmware into /usr/sbin/lpfc
# cd /var/tmp/emulex
# unzip
# cp -p cd392a3.awc /usr/sbin/lpfc/

12. Update the firmware
# cd /usr/sbin/lpfc
# ./lputil
> Select option 3 - Firmware Maintenance
> Select the adapter number to update
> Select option 1 - Load Firmware Image
> Type in the full name of the image: cd392a3.awc

Repeat the above steps for all Emulex HBAs.

13. Reboot into single-user mode and ensure that the devices can be seen
# reboot -- -rs
# /etc/powermt display

14. Reboot the server
# reboot


EMC Symmetrix Architecture

This document describes the EMC Symmetrix configuration. There are a number of EMC Symmetrix models, but they all use the same architecture, detailed below.

Front End Director Ports (SA-16b:1)
Front End Director (SA-16b)
Back End Director (DA-02b)
Back End Director Ports (DA-02b:c)
Disk Devices

Front End Director
A channel director (front-end director) is a card that connects a host to the Symmetrix; each card can have up to four ports.

Symmetrix cache memory buffers I/O transfers between the director channels and the storage devices. The cache is divided into regions to eliminate contention.

Back End Director
A disk director (back-end director) transfers data between disk and cache. Each back-end director can have up to four interfaces (C, D, E and F), and each interface can handle seven SCSI IDs (0-6).

Disk Devices
The disk devices that are attached to the back-end directors could be either SCSI or FC-AL.

The Direct Matrix interconnect is a matrix of high-speed connections to all components, with bandwidth up to 64 Gb/s.
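The per-director drive fan-out follows from the figures above (four interfaces, seven SCSI IDs each):

```shell
# Arithmetic from the back-end director description above.
interfaces=4          # back-end interfaces C, D, E, F
ids_per_interface=7   # SCSI IDs 0-6 per interface
drives_per_director=$((interfaces * ids_per_interface))
echo "drives_per_director=$drives_per_director"
```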

SAN Components

There are many components to a SAN architecture. A host can connect to a SAN via a direct connection or via a SAN switch.

Host HBA: Host bus adapter cards are used to access SAN storage systems.
SAN cables: There are many types of cables and connectors:
  Types: multimode (<500 m), single mode (>500 m) and copper
  Connectors: ST, SC (1 Gb), LC (2 Gb)
SAN switches: The primary function of a switch is to provide a physical connection and logical routing of data frames between the attached devices.
  Supported protocols: Fibre Channel, iSCSI, FCIP, iFCP
  Switch types: workgroup, directors

SAN zoning: Zoning is used to partition a Fibre Channel switched fabric into subsets of logical devices. Each zone contains a set of members that are permitted to access each other. Members are HBAs, switch ports and SAN ports.
  Types of zoning: hard, soft and mixed
Zone sets: A group of zones that relate to one another; only one zone set can be active at any one time.
Storage arrays: The storage array is where all the disk devices are located.
Volume access control: Also known as LUN masking. The storage array maintains a database that contains a map of the storage volumes and the WWNs that are allowed to access them. In a Symmetrix, the VCM database contains the LUN masking information.
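The "set of members permitted to access each other" rule can be made concrete: two ports can communicate only if they share at least one zone. A toy sketch over a simple "zone: member member" text format of our own (not a real switch config):

```shell
# Sketch: succeed (exit 0) if two WWNs share at least one zone in a
# "zonename: member member ..." style file.
same_zone() {
  # $1 = zoning file, $2 and $3 = WWNs to check
  awk -v a="$2" -v b="$3" '
    { ina = 0; inb = 0
      for (i = 2; i <= NF; i++) { if ($i == a) ina = 1; if ($i == b) inb = 1 }
      if (ina && inb) found = 1 }
    END { exit found ? 0 : 1 }' "$1"
}
```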

SAN Login

The table below documents the processes that occur when a Fibre Channel device is connected to a SAN.

FLOGI (fabric login)
- What is needed: link initialization, cable, HBA and driver, switch port
- Who does the communication: N_Port to F_Port
- Where to find the information: syslog and switch utilities (Unix); Event Viewer and switch viewer (Windows)

PLOGI (port login)
- What is needed: zoning, persistent binding, driver setting
- Who does the communication: N_Port to N_Port
- Where to find the information: syslog and driver utilities (Unix); driver utilities (Windows)

PRLI (process login)
- What is needed: device masking (target), device mapping (initiator), driver setting (initiator)
- Who does the communication: ULP (SCSI-3 to SCSI-3)
- Where to find the information: syslog and host-based volume management (Unix); driver utilities, host-based volume management and Device Manager (Windows)

Information passed during these logins (the per-login column boundaries were lost in the original layout): WWN, S_ID, protocol, class of service, zoning, BB Credit.
If any one of the above fails, the host will not be able to access the disks on the SAN.

VCM Database

The Symmetrix Volume Configuration Management (VCM) database stores access configurations that are used to grant host access to logical devices in a Symmetrix storage array.

The VCM database resides on a special system resource logical device, referred to as the VCMDB device, on each Symmetrix storage array.

Information stored in the VCM database includes, but is not limited to:

  • Host and storage World Wide Names
  • SID Lock and Volume Visibility settings
  • Native logical device data, such as the front-end directors and storage ports to which they are mapped

Masking operations performed on Symmetrix storage devices result in modifications to the VCM database in the Symmetrix array. The VCM database can be backed up, restored, initialized and activated. The Symmetrix SDM Agent must be running in order to perform VCM database operations (except deleting backup files).


There are three models of switches: M-series (McDATA), B-series (Brocade) and MDS-series (Cisco). Each switch offers a web interface and a CLI. The following tasks can be configured on most switches:

  • Configure network params
  • Configure fabric params (BB Credit, R_A_TOV, E_D_TOV, switch PID format, Domain ID)
  • Enable/Disable ports
  • Configure port speeds
  • Configure Zoning
BB Credit: Configures the number of buffers available to attached devices for frame receipt. The default is 16; values range from 1 to 16.
R_A_TOV: Resource allocation timeout value. This works with the E_D_TOV to determine switch actions when presented with an error condition.
E_D_TOV: Error detect timeout value. This timer is used to flag a potential error condition when an expected response is not received within the set time.

Host HBAs

The table below outlines which card works with a particular OS:

Solaris Emulex PCI (lputil)
HPUX PCI-X gigabit fibre channel and ethernet card
AIX FC6227/6228/6239 using IBM native drivers
Windows Emulex (HBAnyware or lputilnt)
Linux Emulex PCI (lputil)

Linux: iSCSI Initiator installation & configuration

Installation Instructions:

Red Hat Supplied iSCSI Initiator:

Find the RPM on the Red Hat media, then install it using the rpm -ivh command as follows:

# rpm -ivh iscsi-initiator-utils-

NOTE: This is the version for Enterprise Linux AS 5. Your version may be different.

An alternative to installing this package manually in Red Hat Enterprise Linux (ES or AS) 5 or greater is to use the “Add/Remove Applications” menu item in the “System Settings” menu. In the details for the “Network Servers” package list, the iscsi-initiator-utils is one of the packages listed. This same choice is available in the same location during the initial install of Red Hat, so this can also be done at that time.

Once installed, there will be a file in the /etc directory named iscsi.conf. If this file does not exist, it may indicate a problem with the installation. This file can be created with the following minimal entries:



This needs to be set to the Group IP Address of your UIT Array.


For the initiator to receive Vendor Specific async events from the target.


To globally specify that all discovery sessions be kept open.

Within the iscsi.conf file itself there are many more options available that can be set. You can look through the iscsi.conf file for information on what these variables are and what they are used for.

Once these values are either placed in a newly created /etc/iscsi.conf file, or the respective lines are uncommented and edited where necessary, the iscsi service can be started:

# service iscsi start

To verify that the iscsi service will be started at boot time, the chkconfig command can be used as follows:

# chkconfig --list iscsi

iscsi 0:off 1:off 2:off 3:off 4:off 5:off 6:off

By default, the newly added iscsi initiator is not enabled at boot, which is why every run level listed shows the service set to off. To enable it at boot, again use the chkconfig command as follows:

# chkconfig --add iscsi

# chkconfig iscsi on

The first command checks that the necessary scripts to start and stop the service exist; the second sets the service to on for the appropriate runlevels.

Then check to be sure the changes took effect:

# chkconfig --list iscsi

iscsi 0:off 1:off 2:on 3:on 4:on 5:on 6:off
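When auditing many hosts, the runlevel check itself is easy to script. A parsing sketch over the chkconfig --list output format shown above (iscsi_on_at_boot is our own helper):

```shell
# Sketch: succeed if a chkconfig --list line shows the service on in the
# multi-user runlevels 3 and 5.
iscsi_on_at_boot() {
  # $1 = one line of `chkconfig --list iscsi` output
  echo "$1" | grep -q '3:on' && echo "$1" | grep -q '5:on'
}

# Usage (hypothetical): iscsi_on_at_boot "$(chkconfig --list iscsi)" || echo "not enabled"
```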

To verify that you can see your iscsi devices, you can run the following command:

# iscsi-ls


SFNet iSCSI Driver Version … 6.2 (27-July-2009 )



TARGET ALIAS : pat-rhel5-vol2


BUS NO : 0






SESSION ID : ISID 00023d000001 TSIH 06


To see greater details of the devices, you can run the above command with the -l option:

# iscsi-ls -l


SFNet iSCSI Driver Version … 6.2 (27-Jun-2009 )



TARGET ALIAS : pat-rhel5-vol2


BUS NO : 0






SESSION ID : ISID 00023d000001 TSIH 06



LUN ID : 0

Vendor: EQLOGIC Model: 100E-00 Rev: 2.1

Type: Direct-Access ANSI SCSI revision: 05

page83 type3: 0690a018007082143638c4d6ef067098

page80: 3036393041303138303037303832313433363338433444364546303637303938

Device: /dev/sdc


As can be seen in the example iscsi-ls -l output above, the device in question is mapped to the /dev/sdc device.

Linux-iscsi Sourceforge Initiator:

If you are not running the required update of Red Hat Linux to have their precompiled iSCSI Initiator, you can try to compile the iSCSI Initiator supplied by the Sourceforge linux-iscsi project.

Beyond the required kernel revision as noted above, all development packages need to be installed to compile the initiator, as well as the kernel sources. The easiest way to install these items is to use "Add/Remove Applications" in the "System Settings" menu within the desktop GUI. The version of Red Hat you are running determines what to select for installation:

Red Hat AS 3:

Development Tools (Default packages have all required packages)

Kernel Development (Again, default is fine)

Red Hat AS 4:

Development Tools (Default packages have all required packages)

NOTE: If there is no Kernel Development choice, the Kernel Source files need to be found and installed prior to compilation.

Once these OS packages are installed, it should be as easy as getting the source package from the Sourceforge linux-iscsi project, then making the initiator. Refer to the README file that comes with the source for detailed instructions on how to make the initiator. If there are problems compiling the initiator, check the linux-iscsi Sourceforge project for assistance. You are able to search and post to their mailing lists to get information and assistance with this product.

Persistent Device naming:

Devices using the Red Hat software initiators do not have a persistent naming scheme, but there are a few ways to set up persistent naming for the different versions of Red Hat:

Red Hat Enterprise Linux (ES or AS) 3:

Devlabel (see the devlabel man page):

This will only work on Red Hat 2.4.x kernels.

Use devlabel to setup symlinks from known names to the current device name.

A basic add command to set up a devlabel link is as follows:

# devlabel add -d <device> -s <symlink>

An example:

# devlabel add -d /dev/sdc -s /dev/iscsi/vollink

# ls -l /dev/iscsi/vollink

lrwxrwxrwx 1 root root 8 Dec 1 16:31 vollink -> /dev/sdc

Red Hat Enterprise Linux (ES or AS) 4:

Use the udev facility (man udev, man scsi_id):

This is only available on Red Hat EL 4/Kernel 2.6.*

This creates device links to the device files when the device nodes are created. Udev uses a rules file (see man udev) to determine what the link names or device names it should create for different devices.

This is the least elegant of the solutions to configure and there is no straightforward example to provide on how this needs to be setup.

Red Hat may be able to provide additional information on persistent device naming for iSCSI devices using their iSCSI initiator with udev.

Both Red Hat Enterprise Linux 3 and 4:

Use filesystem LABELs (see the e2label man page):

This will work on all ext2/3 filesystem partitions.

Place an ext2/3 filesystem label on your filesystem partition. Once the Label has been added, use the LABEL identifier to identify the filesystem you want to mount in the fstab (man fstab and/or man mount). Following is an example of using the e2label command and what a resulting line in the fstab file would look like:

# e2label /dev/sdc1 EMC

# mkdir /EMC

# echo “LABEL=EMC /EMC ext3 _netdev,defaults 0 0” >> /etc/fstab

NOTE: _netdev delays the mounting of this filesystem until after the Network has been started and ensures that the filesystem is unmounted before stopping the Network.

# mount -a

# df -k | grep EMC

/dev/sdc1 5166332 43072 4860816 1% /EMC
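Appending the LABEL line blindly would duplicate it on re-runs, so a guarded append keeps the step idempotent. A sketch with the fstab path parameterized for illustration (on a real host it is /etc/fstab; add_label_mount is our own helper):

```shell
# Sketch: append a LABEL-based fstab entry only if it is not already present.
add_label_mount() {
  # $1 = fstab path, $2 = filesystem label, $3 = mount point
  grep -q "LABEL=$2[[:space:]]" "$1" 2>/dev/null ||
    echo "LABEL=$2 $3 ext3 _netdev,defaults 0 0" >> "$1"
}

# On a live host: add_label_mount /etc/fstab EMC /EMC
```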

Red Hat Linux iSCSI Configuration

Supported iSCSI Initiators:

Enterprise Linux (ES or AS) 3 Update 6:

Disc 2 of 4:


linux-iscsi 3.4.x: Minimum kernel release: 2.4.21

linux-iscsi 3.6.x: Minimum kernel release: 2.4.21

NOTE: Versions of the linux-iscsi Initiator above 3.x are not compatible with the 2.4.x and below kernel release.

Enterprise Linux (ES or AS) 4 Update 2:

Disc 4 of 4:



4.0.2 - Minimum kernel release: 2.6.10

4.0.1 - Minimum kernel release: 2.6.0

NOTE: Versions of the linux-iscsi Initiator below 4.x are not compatible with the 2.6.x and higher kernel release.