I/O Tuning Parameters:
- numfsbufs (vmtune -b) specifies the number of file system buffer
structures. This value is critical, as the VMM will put a process on the wait
list if there are insufficient free buffer structures.
- Run vmtune -a (pre 5.2) or vmstat -v (5.2 and later) and monitor
fsbufwaitcnt. This counter is incremented each time an I/O operation has to
wait for file system buffer structures.
- A general technique is to double the numfsbufs value (up to a maximum of
512) until fsbufwaitcnt no longer increases. Because the setting is dynamic
and applies only to file systems mounted afterwards, the command should be
re-executed on boot, prior to any mount all command.
- hd_pbuf_cnt (vmtune -B) determines the number of pbufs assigned to the LVM.
pbufs are pinned memory buffers used to hold pending I/O requests.
- Again, examine vmtune -a and review psbufwaitcnt. If it is increasing,
multiply the current hd_pbuf_cnt by 2 until psbufwaitcnt stops incrementing.
- Because hd_pbuf_cnt can only be reduced via a reboot (this is pinned
memory), be frugal when increasing this value.
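The doubling technique above can be sketched as a small script. The starting value 93, the helper name, and the loop are illustrative only; on a live system the decision to keep doubling would come from re-checking the wait counts in vmtune -a or vmstat -v, not from a fixed number of iterations.

```shell
#!/bin/sh
# Sketch: double numfsbufs toward the 512 ceiling described above.
# On AIX the new value would then be applied with
#   /usr/samples/kernel/vmtune -b <value>   (pre-5.2)
#   ioo -o numfsbufs=<value>                (5.2 and later)
# and re-applied at boot before `mount all`.
double_numfsbufs() {
    cur=$1
    cap=512
    next=$((cur * 2))
    if [ "$next" -gt "$cap" ]; then
        next=$cap
    fi
    echo "$next"
}

v=93                       # illustrative starting value
v=$(double_numfsbufs $v)   # 186
v=$(double_numfsbufs $v)   # 372
v=$(double_numfsbufs $v)   # capped at 512
echo "candidate numfsbufs: $v"
```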
I/O Tuning:
- I/O wait consistently over 35% should be investigated.
- Oracle databases like async I/O; DB2 and Sybase do not care. A good place
to start would be AIO parameters of MINSERVERS = 80, MAXSERVERS = 200,
MAXREQUESTS = 8192.
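As a sketch, those starting values could be applied through the AIO pseudo-device. The device name aio0 and the attribute names (minservers, maxservers, maxreqs) follow the classic pre-6.1 AIX convention and should be verified with lsattr -El aio0 before use; the helper below only builds the command string.

```shell
#!/bin/sh
# Build the chdev invocation for the suggested AIO starting point.
# aio0 and its attribute names are assumptions from classic AIX;
# confirm with: lsattr -El aio0
aio_chdev() {
    min=$1; max=$2; reqs=$3
    echo "chdev -l aio0 -a minservers=$min -a maxservers=$max -a maxreqs=$reqs"
}

aio_chdev 80 200 8192
```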
- Recent technology disks will support higher LTG (logical track group)
sizes.
- lvmstat (must be enabled prior to use) provides detailed information on
I/O contention.
- filemon is an excellent I/O tool. It is trace-based, so ensure you turn
the trace off when finished.
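A typical session with both tools might look like the following. The commands are echoed rather than executed, since both are AIX-specific, and the volume group name datavg and output path are illustrative.

```shell
#!/bin/sh
# Print a sample lvmstat/filemon workflow; nothing is executed here.
io_session() {
    cat <<'EOF'
lvmstat -e -v datavg             # enable statistics collection first
lvmstat -v datavg 5 3            # three 5-second samples for the VG
filemon -o /tmp/fmon.out -O all  # starts a system trace
trcstop                          # always stop the trace when done
EOF
}

io_session
```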
- numfsbufs and hd_pbuf_cnt should be adjusted until the wait counts
reported by vmtune -a or vmstat -v stop increasing.
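On 5.2 and later, the two wait counters can be pulled out of vmstat -v with a short awk filter. The sample variable below stands in for real vmstat -v output; it mimics the report's line format, but the counts are invented.

```shell
#!/bin/sh
# Extract the fsbuf/pbuf wait counters from vmstat -v style output.
# `sample` stands in for `vmstat -v`; the counts are made up.
sample='          123456 filesystem I/Os blocked with no fsbuf
            7890 pending disk I/Os blocked with no pbuf'

echo "$sample" | awk '/blocked with no fsbuf/ {print "fsbuf waits:", $1}
                      /blocked with no pbuf/  {print "pbuf waits:",  $1}'
```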