</td>
<td headers="r5c1 r1c2" align="left" colspan="1" rowspan="1">0 - 999
</td>
<td headers="r5c1 r1c3" align="left" colspan="1" rowspan="1">0
</td>
</tr>
<tr align="left" valign="top">
<td id="r6c1" headers="r1c1" align="left" colspan="1" rowspan="1">DB&#095;WRITER&#095;PROCESSES
</td>
<td headers="r6c1 r1c2" align="left" colspan="1" rowspan="1">1-20
</td>
<td headers="r6c1 r1c3" align="left" colspan="1" rowspan="1">1
</td>
</tr></tbody>
</table>
<p>There are times when the use of asynchronous I/O is not desirable or not possible. The first two parameters in the preceding table, DISK_ASYNCH_IO and TAPE_ASYNCH_IO, allow asynchronous I/O to be switched off for disk or tape devices, respectively. Because the number of I/O slaves for each process type defaults to zero, no I/O slaves are deployed by default.
</p>
<p>Set the DBWR_IO_SLAVES parameter to a value greater than 0 only if the DISK_ASYNCH_IO or TAPE_ASYNCH_IO parameter is set to FALSE; otherwise, the database writer process (DBWR) becomes a bottleneck. In this case, the optimal value on AIX for the DBWR_IO_SLAVES parameter is 4.
</p>
<p>The DB&#095;WRITER&#095;PROCESSES parameter specifies the initial number of database writer processes for an instance. If you use the DBWR&#095;IO&#095;SLAVES parameter, only one database writer process is used, regardless of the setting of the DB&#095;WRITER&#095;PROCESSES parameter.
</p>
</div>
<div class="sect2"><a id="sthref649" name="sthref649"></a>
<h3>
<font face="arial, helvetica, sans-serif" color="#330099">
Using the DB&#095;FILE&#095;MULTIBLOCK&#095;READ&#095;COUNT Parameter
</font>
</h3>
<p>A large value for the DB&#095;FILE&#095;MULTIBLOCK&#095;READ&#095;COUNT initialization parameter usually yields better I&#047;O throughput. On AIX, this parameter ranges from 1 to 512, but using a value higher than 16 usually does not provide additional performance gain.
</p>
<p>Set this parameter so that its value when multiplied by the value of the DB&#095;BLOCK&#095;SIZE parameter produces a number that is larger than the LVM stripe size. Such a setting causes more disks to be used.
</p>
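<p>As a quick sanity check of this rule, the following sketch (plain shell, with hypothetical values for all three settings) verifies that the multiblock read size exceeds the LVM stripe size:</p>

```shell
# Hypothetical values for illustration; substitute your actual settings.
DB_BLOCK_SIZE=8192                  # bytes, the DB_BLOCK_SIZE parameter
DB_FILE_MULTIBLOCK_READ_COUNT=16    # blocks fetched per multiblock read
LVM_STRIPE_SIZE=65536               # bytes, the LVM stripe size

# The multiblock read size is the product of the two Oracle parameters.
read_size=$((DB_BLOCK_SIZE * DB_FILE_MULTIBLOCK_READ_COUNT))

if [ "$read_size" -gt "$LVM_STRIPE_SIZE" ]; then
    echo "OK: multiblock read size ($read_size bytes) exceeds the stripe size"
else
    echo "WARNING: multiblock read size ($read_size bytes) is too small"
fi
```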
</div>
<div class="sect2"><a id="sthref650" name="sthref650"></a>
<h3>
<font face="arial, helvetica, sans-serif" color="#330099">
Using RAID Capabilities
</font>
</h3><a id="i631512" name="i631512"></a>
<p>RAID 5 enhances sequential read performance, but decreases overall write performance. Oracle Corporation recommends using RAID 5 only for workloads that are not write-intensive. Intensive writes on RAID 5 might result in a performance degradation compared to a non-RAID environment.
</p><a id="i631514" name="i631514"></a>
<p>RAID 0 and 1 generally result in better performance, as they introduce striping and mirroring at the hardware level, which is more efficient than at the AIX or Oracle level. RAID 7 is capable of providing better small and large read and write performance than RAID 0 to 6.
</p>
</div><a id="i631520" name="i631520"></a>
<div class="sect2"><a id="sthref651" name="sthref651"></a>
<h3>
<font face="arial, helvetica, sans-serif" color="#330099">
Using Write Behind
</font>
</h3>
<p>The write behind feature enables the operating system to group write I&#047;Os together up to the size of a partition. Doing this increases performance because the number of I&#047;O operations is reduced. The file system divides each file into 16 KB partitions to increase write performance, limit the number of dirty pages in memory, and minimize disk fragmentation. The pages of a particular partition are not written to disk until the program writes the first byte of the next 16 KB partition. To set the size of the buffer for write behind to eight 16 KB partitions, enter the following command:
</p>
<pre>&#035; vmtune -c 8

</pre>
<p>To disable write behind, enter the following command:
</p>
<pre>&#035; vmtune -c 0
</pre>
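<p>The arithmetic behind these settings is simple: with <code>vmtune -c n</code>, roughly n 16 KB partitions of a sequential file can accumulate before write behind schedules them. A back-of-the-envelope sketch (illustrative only, not an AIX utility):</p>

```shell
# Each write-behind partition (cluster) is 16 KB, as described above.
PARTITION_BYTES=$((16 * 1024))
NUMCLUST=8                          # the value passed to vmtune -c

# Approximate amount of dirty data buffered before write behind kicks in.
buffer_bytes=$((NUMCLUST * PARTITION_BYTES))
echo "write-behind buffer: $buffer_bytes bytes"
```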
</div>
<div class="sect2"><a id="sthref652" name="sthref652"></a>
<h3>
<font face="arial, helvetica, sans-serif" color="#330099">
Tuning Sequential Read Ahead
</font>
</h3><a id="i631528" name="i631528"></a>
<p>The Virtual Memory Manager (VMM) anticipates the need for pages of a sequential file. It observes the pattern in which a process accesses a file. When the process accesses two successive pages of the file, the VMM assumes that the program will continue to access the file sequentially, and schedules additional sequential reads of the file. These reads overlap the program processing and make data available to the program sooner. Two VMM thresholds, implemented as kernel parameters, determine the number of pages it reads ahead: 
</p><a id="i631530" name="i631530"></a>
<ul>
<li type="disc">
<p>MINPGAHEAD
</p>
<p>The number of pages read ahead when the VMM first detects the sequential access pattern
</p>
</li>
<li type="disc">
<p>MAXPGAHEAD
</p>
<p>The maximum number of pages that VMM reads ahead in a sequential file
</p>
</li>
</ul>
<p>Set the MINPGAHEAD and MAXPGAHEAD parameters to appropriate values for your application. The default values are 2 and 8 respectively. Use the <code>vmtune</code> command to change these values. You can use higher values for the MAXPGAHEAD parameter in systems where the sequential performance of striped logical volumes is of paramount importance. To set the MINPGAHEAD parameter to 32 pages and the MAXPGAHEAD parameter to 64 pages, enter the following command:
</p>
<pre>&#035; vmtune -r 32 -R 64

</pre>
<p>Set both the MINPGAHEAD and MAXPGAHEAD parameters to a power of two, for example: 2, 4, 8, ... 512, 1024, and so on.
</p>
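<p>Because <code>vmtune</code> accepts any integer, a small helper such as the following (illustrative only, not part of AIX) can confirm that a candidate page-ahead value is a power of two before you apply it:</p>

```shell
# Return success (0) if the argument is a power of two, failure otherwise.
is_power_of_two() {
    n=$1
    [ "$n" -ge 1 ] || return 1
    while [ "$n" -gt 1 ]; do
        [ $((n % 2)) -eq 0 ] || return 1
        n=$((n / 2))
    done
    return 0
}

is_power_of_two 64 && echo "64 is a valid page-ahead value"
is_power_of_two 48 || echo "48 is not a power of two"
```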
</div>
<div class="sect2"><a id="sthref653" name="sthref653"></a>
<h3>
<font face="arial, helvetica, sans-serif" color="#330099">
Tuning Disk I&#047;O Pacing
</font>
</h3><a id="i631542" name="i631542"></a>
<p>Disk I/O pacing is an AIX mechanism that allows the system administrator to limit the number of pending I/O requests to a file. This prevents disk I/O-intensive processes from saturating the CPU, so the response time of interactive and CPU-intensive processes does not deteriorate.
</p><a id="i631549" name="i631549"></a>
<p>You can achieve disk I/O pacing by adjusting two system parameters: the high-water mark and the low-water mark. When a process writes to a file that already has the high-water mark number of pending I/O requests, the process is put to sleep. The process wakes up when the number of outstanding I/O requests falls to or below the low-water mark.
</p>
<p>You can use the <code>smit</code> command to change the high and low-water marks. Determine the water marks through trial-and-error. Use caution when setting the water marks because they affect performance. Tuning the high and low-water marks has less effect on disk I&#047;O larger than 4 KB.
</p>
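<p>The sleep/wake rule described above can be sketched as a small decision function. The marks shown (a high-water mark of 33 and a low-water mark of 24) are example values only; determine suitable marks for your system by experiment:</p>

```shell
HIGH_WATER=33   # example high-water mark
LOW_WATER=24    # example low-water mark

# Decide what happens to a writer given its count of pending I/O requests.
pacing_state() {
    pending=$1
    if [ "$pending" -ge "$HIGH_WATER" ]; then
        echo sleep          # at or above the high-water mark: put to sleep
    elif [ "$pending" -le "$LOW_WATER" ]; then
        echo run            # at or below the low-water mark: wake up
    else
        echo unchanged      # between the marks: state is left as it was
    fi
}

pacing_state 40    # → sleep
pacing_state 10    # → run
```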
</div>
<div class="sect2"><a id="sthref654" name="sthref654"></a>
<h3>
<font face="arial, helvetica, sans-serif" color="#330099">
Disk Geometry Considerations
</font>
</h3>
<p>On AIX, you can, to some extent, control the placement of a logical volume on a disk. Placing logical volumes with high disk activity close to each other can reduce disk seek time, resulting in better overall performance.
</p>
</div><a id="i631556" name="i631556"></a>
<div class="sect2"><a id="sthref655" name="sthref655"></a>
<h3>
<font face="arial, helvetica, sans-serif" color="#330099">
Minimizing Remote I&#047;O Operations
</font>
</h3>
<p>Oracle9<em>i</em> Real Application Clusters running on the SP architecture uses VSDs or HSDs as the common storage that is accessible from all instances on different nodes. If an I&#047;O request is to a VSD where the logical volume is local to the node, local I&#047;O is performed. The I&#047;O traffic to VSDs that are not local goes through network communication layers.
</p>
<p>For better performance, it is important to minimize remote I&#047;O as much as possible. Redo logs of each instance should be placed on the VSDs that are on local logical volumes. Each instance should have its own private rollback segments that are on VSDs mapped to local logical volumes if updates and insertions are intensive.
</p>
<p>Each user is allowed only one temporary tablespace in each session. Each temporary tablespace should therefore contain at least one datafile that is local to each node.
</p>
<p>Carefully design applications and databases (by partitioning applications and databases, for instance) to minimize remote I&#047;O. 
</p>
</div><a id="i631562" name="i631562"></a>
<div class="sect2"><a id="sthref656" name="sthref656"></a>
<h3>
<font face="arial, helvetica, sans-serif" color="#330099">
VSD Cache Buffers
</font>
</h3>
<p>Under normal circumstances, do not use VSD cache buffers (use the <code>nocache</code> option), for the following reasons: 
</p>
<ul>
<li type="disc">
<p>VSD LRU cache buffers use pinned kernel memory, which can be put to more effective use. 
</p>
</li>
<li type="disc">
<p>When the cache buffer is enabled, every physical read incurs the overhead of searching the cache blocks for overlapping pages and copying data in and out of the cache buffers. 
</p>
</li>
</ul>
<p>Use the <code>statvsd</code> command to check the performance of the VSD. If the <code>statvsd</code> command consistently shows requests queued waiting for buddy buffers, do not add more buddy buffers. Instead, increase the size of the switch send pool:
</p>
<pre>&#035; &#047;usr&#047;lpp&#047;ssp&#047;css&#047;chgcss -l css0 -a spoolsize&#061;<em>new&#095;size&#095;in&#095;bytes</em>

</pre>
<p>If you increase the send pool size, you should also increase the upper limit on the memory available for mbufs (the <code>thewall</code> parameter):
</p>
<pre>&#035; &#047;etc&#047;no -o thewall&#061;<em>new&#095;size&#095;in&#095;kbytes</em>

</pre>
<div align="center">
<br /><table summary="This is a layout table to format a note" title="This is a layout table to format a note" dir="ltr" border="1" width="80%" frame="hsides" rules="groups" cellpadding="3" cellspacing="0"><tbody>
<tr>
<td align="left" colspan="1" rowspan="1">
<p>
<font face="arial, helvetica, sans-serif">
<strong>Note:</strong>
</font>
</p>The maximum value that you can specify is 64 MB.
</td>
</tr></tbody>
</table><br />
</div>
<p>The <code>thewall</code> parameter sets the ceiling on the amount of memory that can be used for network buffers. To check the current sizes of the send and receive pools, enter the following command:
</p>
<pre>&#036; &#047;usr&#047;sbin&#047;lsattr -El css0
</pre>
<br /><table summary="This is a layout table to format a tip" title="This is a layout table to format a tip" dir="ltr" border="1" width="80%" frame="hsides" rules="groups" cellpadding="3" cellspacing="0"><tbody>
<tr>
<td align="left" colspan="1" rowspan="1">
<p>
<font face="arial, helvetica, sans-serif">
<strong>See Also:</strong>
</font>
</p><em>Oracle9i Release Notes Release 2 (9.2.0.1.0) for AIX-Based Systems</em> for information on IBM Web addresses. 
</td>
</tr></tbody>
</table><br />
</div>
</div><a id="i631579" name="i631579"></a>
<div class="sect1"><a id="sthref657" name="sthref657"></a>
<h2>
<font face="arial, helvetica, sans-serif" color="#330099">CPU Scheduling and Process Priorities
</font>
</h2>
<p>The CPU is another system component for which processes might contend. Although the AIX kernel allocates CPU effectively most of the time, many processes compete for CPU cycles. If your system has more than one CPU (SMP), there might be different levels of contention on each CPU. 
</p>
<div class="sect2"><a id="sthref658" name="sthref658"></a>
<h3>
<font face="arial, helvetica, sans-serif" color="#330099">
Changing Process Running Time Slice
</font>
</h3>
<p>The default value for the runtime slice of the AIX RR dispatcher is ten milliseconds. Use the <a id="i632130" name="i632130"></a><code>schedtune</code> command to change the time slice. However, be careful when using this command. A longer time slice causes a lower context switch rate if the applications&#039; average voluntary switch rate is lower. As a result, fewer CPU cycles are spent on context-switching for a process and the system throughput should improve. 
</p>
<p>However, a longer runtime slice can deteriorate response time, especially on a uniprocessor system. The default runtime slice is usually acceptable for most applications. When the run queue is high and most of the applications and Oracle shadow processes are capable of running a much longer duration, you might want to increase the time slice by entering the following command:
</p>
<pre>&#035; &#047;usr&#047;samples&#047;kernel&#047;schedtune -t <em>n</em>

</pre>
<p>In the previous example, choosing a value for <em>n</em> of 0 results in a slice of 10 milliseconds (ms), choosing a value of 1 results in a slice of 20 ms, choosing a value of 2 results in a slice of 30 ms, and so on.
</p>
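<p>The mapping from <em>n</em> to the slice length can be written out directly; a minimal sketch:</p>

```shell
# schedtune -t n sets the RR time slice to (n + 1) * 10 milliseconds,
# per the mapping described above (n=0 -> 10 ms, n=1 -> 20 ms, ...).
slice_ms() {
    echo $(( ($1 + 1) * 10 ))
}

slice_ms 0    # → 10
slice_ms 2    # → 30
```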
</div>
<div class="sect2"><a id="sthref659" name="sthref659"></a>
<h3>
<font face="arial, helvetica, sans-serif" color="#330099">
Using Processor Binding on SMP Systems
</font>
</h3>
<p>Binding certain processes to a processor can improve performance substantially on an SMP system. Processor binding is available and fully functional with AIX version 4<a id="i631596" name="i631596"></a> and higher.
</p>
<p>Processor binding offers the following benefits:
</p>
<ul>
<li type="disc">
<p>Provides higher-priority applications with a relatively larger share of CPU time
</p>
</li>
<li type="disc">
<p>Maintains the process context for a longer period
</p>
</li>
</ul>
<p>Processor binding on AIX is not automatic. On a multiprocessor system, you must explicitly bind a process to a processor by using the <code>bindprocessor</code> command. Only the <code>root</code> user or the Oracle software owner can bind an Oracle process to a processor. The child processes inherit the processor binding. 
</p>
<p>Oracle Corporation recommends binding the various Oracle background processes (except the database writer process) to different processors and leaving one processor free to service the database writer process. This guarantees the database writer a processor on which to execute and at the same time allows the database writer process to migrate freely to the other processors if it becomes CPU bound.
</p>
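<p>A dry-run sketch of this scheme follows. The PIDs and the CPU count are hypothetical; the script only prints the <code>bindprocessor</code> commands it would run, cycling over CPUs 1 and up so that CPU 0 stays free for the database writer. On a real system, obtain the background process PIDs with <code>ps</code> and run the printed commands as <code>root</code> or the Oracle software owner:</p>

```shell
NCPUS=4                             # hypothetical number of processors
pids="2001 2002 2003 2004 2005"     # hypothetical background process PIDs

cpu=1                               # start at CPU 1; CPU 0 is left free
for pid in $pids; do
    echo "bindprocessor $pid $cpu"
    cpu=$((cpu + 1))
    if [ "$cpu" -ge "$NCPUS" ]; then
        cpu=1                       # wrap around, still skipping CPU 0
    fi
done
```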