of interaction involved in the Oracle RDBMS. As you look in detail at the tuning
of the server processes and applications later in this book, you can use this overview
as a reference to the basics of how the Oracle RDBMS operates. Because of the differences
in operating systems, minor variances in different environments will be discussed
individually.</P>
<H2><FONT COLOR="#000077"><B>RDBMS Functionality</B></FONT></H2>
<P>For the RDBMS to operate, certain functions must be provided, including
data integrity, recovery from failure, error handling, and so on. These functions are
provided through mechanisms such as checkpointing, logging, and archiving. The following sections
list and describe some of these functions.</P>
<H3><FONT COLOR="#000077"><B>Checkpointing</B></FONT></H3>
<P>You know that Oracle uses either the CKPT background process or the LGWR process
to signal a checkpoint; but what is a checkpoint and why is it necessary?</P>
<P>Because all modifications to data blocks are done on the block buffers, some changes
to data in memory are not necessarily reflected in the blocks on disk. Because caching
is done using a least recently used algorithm, a buffer that is constantly modified
is always marked as recently used and is therefore unlikely to be written by the
DBWR. A checkpoint is used to ensure that these buffers are written to disk by forcing
all dirty buffers to be written out on a regular basis. This does not mean that all
work stops during a checkpoint; the checkpoint process has two methods of operation:
the normal checkpoint and the fast checkpoint.</P>
<P>In the normal checkpoint, the DBWR merely writes a few more buffers each time
it is active. This type of checkpoint takes much longer to complete but affects the system less
than the fast checkpoint. In the fast checkpoint, the DBWR writes a large number
of buffers at each checkpoint request. This type of
checkpoint completes much more quickly and is more efficient in terms of the I/Os generated,
but it has a greater effect on system performance at the time of the checkpoint.</P>
<P>You can adjust the time between checkpoints to improve instance recovery. More frequent
checkpoints reduce the time required to recover in the event of a system failure, at the
cost of more checkpoint overhead during normal operation. A checkpoint automatically occurs
at a log switch.</P>
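<P>As an illustration of how you control this behavior, the following init.ora settings and command
sketch one way to manage checkpoint frequency. The parameter names are standard Oracle parameters,
but the values shown are examples only; choose values that match your own recovery requirements.</P>
<PRE>
# init.ora -- checkpoint-related parameters (values are illustrative only)
LOG_CHECKPOINT_INTERVAL = 10000    # redo log blocks written between checkpoints
LOG_CHECKPOINT_TIMEOUT  = 1800     # maximum number of seconds between checkpoints

-- From Server Manager or SQL*Plus, a DBA can also force a checkpoint manually:
ALTER SYSTEM CHECKPOINT;
</PRE>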
<H3><FONT COLOR="#000077"><B>Logging and Archiving</B></FONT></H3>
<P>The redo log records all changes made to the Oracle database. The purpose of the
redo log is to ensure that, in the event a datafile is lost as a result of
some sort of system failure, the database can be recovered. After the datafiles have been
restored to a known good state from backups, the redo log files (including the archived
log files) can be used to replay all the transactions against the restored datafiles, thus recovering
the database to the point of failure.</P>
<P>When a redo log file is filled in normal operation, a log switch occurs and the
LGWR process starts writing to a different redo log file. When this switch occurs,
the ARCH process copies the filled redo log file to an archive log file. When the ARCH
process has finished copying the entire redo log file, the redo log file is marked as
available. It's critical that the archive log file be safely stored because it might be
needed for recovery.</P>


<BLOCKQUOTE>
	<P>
<HR>
<FONT COLOR="#000077"><B>NOTE:</B></FONT><B> </B>Remember that a transaction has
	not been committed until the redo log file has been written. Slow I/Os to the redo
	log files can slow down the entire system. 
<HR>


</BLOCKQUOTE>
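<P>On a database running in ARCHIVELOG mode, you can verify and exercise this mechanism from
Server Manager or SQL*Plus. The commands below are standard Oracle commands; the archive destination
and file format shown in the init.ora fragment are examples only--substitute values appropriate
to your own system.</P>
<PRE>
# init.ora -- archiving parameters (destination and format are examples only)
LOG_ARCHIVE_START  = TRUE                # start the ARCH process automatically
LOG_ARCHIVE_DEST   = /u01/oradata/arch   # where archived log files are written
LOG_ARCHIVE_FORMAT = arch_%s.arc         # naming pattern for archived log files

-- Display the archiving status and the current log sequence number:
ARCHIVE LOG LIST

-- Force a log switch (which also triggers a checkpoint and an archive copy):
ALTER SYSTEM SWITCH LOGFILE;
</PRE>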

<H2><FONT COLOR="#000077"><B>What Affects Oracle Performance?</B></FONT></H2>
<P>Because one of the roles of the DBA is to anticipate, find, and fix performance
problems, you must know what types of things affect performance. To understand why
these things affect performance, you must first review the basics of how a computer
system works.</P>
<H3><FONT COLOR="#000077"><B>Overview of Computer Architecture</B></FONT></H3>
<P>Your computer system consists of thousands of individual components that work
in harmony to process data. Each of these components has its own job to perform,
and each has its own performance characteristics.</P>
<P>The brainpower of the system is the Central Processing Unit (CPU), which processes
all the calculations and instructions that run on the computer. The job of the rest
of the system is to keep the CPU busy with instructions to process. A well-tuned
system runs at maximum performance if the CPU or CPUs are busy 100% of the time.</P>
<P>So how does the system keep the CPUs busy? In general, the system consists of
different layers, or tiers, of progressively slower components. Because faster components
are typically the most expensive, you must perform a balancing act between speed
and cost efficiency.</P>
<H3><FONT COLOR="#000077"><B>CPU and Cache</B></FONT></H3>
<P><FONT COLOR="#000077"><B>New Term:</B></FONT><B> </B>The CPU and the CPU's cache
are the fastest components of the system. The cache is high-speed memory used to
store recently used data and instructions so that it can provide quick access if
this data is used again in a short time. Most CPU hardware designs have a cache built
into the CPU chip. This internal cache is known as a <I>Level 1</I> (or <I>L1</I>)
<I>cache</I>. Typically, an L1 cache is quite small--8-16KB.</P>
<P>When a certain piece of data is wanted, the hardware looks first in the L1 cache.
If the data is there, it's processed immediately. If the data is not available in
the L1 cache, the hardware looks in the L2 cache, which is external to the CPU chip
but located close to it. The L2 cache is connected to the CPU chip(s) on the same
side of the memory bus as the CPU. To get to main memory, you must use the memory
bus, which affects the speed of the memory access.</P>
<P>Although the L2 cache is roughly half as fast as the L1 cache, it's usually much larger.
Its larger size means you have a better chance of getting a cache hit. Typical L2
caches range in size from 128KB to 4MB.</P>
<P>Slower yet is the speed of the system memory--it's probably five times slower
than the L2 cache. The size of system memory can range from 4MB for a small desktop
PC to 2-4GB for large server machines. Some supercomputers have even more system
memory than that.</P>
<P>As you can see from the timeline shown in Figure 2.4, there is an enormous difference
between retrieving data from the L1 cache and retrieving data from the disk. This
is why you spend so much time trying to take advantage of the SGA in memory. This
is also why hardware vendors spend so much time designing CPU caches and fast memory
buses.</P>
<P><A NAME="04"></A><A HREF="04.htm"><B>Figure 2.4.</B></A></P>
<P><I>Component speed comparison.</I></P>
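<P>To see how this applies to Oracle, one rough indicator of how well the SGA is being used
is the buffer cache hit ratio, which you can compute from the V$SYSSTAT view. The query below
is only a sketch of the idea; the tuning lessons later in this book look at cache performance
in much more detail.</P>
<PRE>
-- Statistics used to estimate the buffer cache hit ratio:
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('db block gets', 'consistent gets', 'physical reads');

-- Hit ratio = 1 - (physical reads / (db block gets + consistent gets)).
-- The closer this value is to 1, the more often Oracle finds data in the
-- SGA rather than reading it from disk.
</PRE>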
<H3><FONT COLOR="#000077"><B>CPU Design</B></FONT></H3>
<P>Most instruction processing occurs in the CPU. Although certain intelligent devices,
such as disk controllers, can process some instructions, the instructions these devices
can handle are limited to the control of data moving to and from the devices. The
CPU works from the system clock and executes instructions based on clock signals.
The clock rate and type of CPU determine how quickly these instructions are executed.</P>
<P>The CPU usually falls into one of two groups of processors: Complex Instruction
Set Computer (CISC) or Reduced Instruction Set Computer (RISC).</P>
<P><FONT COLOR="#000077"><B>CISC Processors</B></FONT></P>
<P>CISC processors (like the ones Intel builds) are by far the most popular processors.
They are more traditional and offer a large instruction set to the program developer.
Some of these instructions can be quite complicated; most instructions require several
clock cycles to complete.</P>
<P>CISC processors are complex and difficult to build. Because these chips contain
millions of internal components, the components are extremely close together. The
physical closeness causes problems because there is no room for error. Each year,
technology allows more complex and faster chips to be built, but eventually, physics
will limit what can be done.</P>
<P>CISC processors carry out a wide range of tasks and can sometimes perform two
or more instructions at a time in parallel. CISC processors perform most tasks, such
as RDBMS processing, very well.</P>
<P><FONT COLOR="#000077"><B>RISC Processors</B></FONT></P>
<P>RISC processors are based on the principle that if you can reduce the number of
instructions processed by the CPU, the CPU can be simpler to build and can run faster.
By putting fewer internal components inside the chip, the speed of the chip can be
accelerated. One of the most popular RISC chips on the market is the DEC Alpha.</P>
<P>The system compiler determines which instructions are executed on the CPU chips.
When the instruction set was reduced, compilers were written to exploit the simpler design
and to compensate for the missing instructions.</P>
<P>By reducing the instruction set, RISC manufacturers have been able to increase
the clock speed to many times that of CISC chips. Although the faster clock speed
is beneficial in some cases, it offers little improvement in others. One effect of
a faster CPU is that the surrounding components, such as the L2 cache and memory, must
also run faster, which increases their cost.</P>
<P>One goal of some RISC manufacturers is to design the chip so that the majority
of instructions complete within one clock cycle. Some RISC chips can already do this.
But because an operation that requires a single instruction on a CISC chip might
require many instructions on a RISC chip, a direct clock-speed comparison between the
two cannot be made.</P>

<DL>
	<DD>
<HR>
<B>CISC versus RISC</B>
	<P>Both CISC and RISC processors have their advantages and disadvantages; it's up
	to you to determine whether a RISC processor or a CISC processor will work best for
	you. When comparing the two types of processors, be sure to look at performance data
	and not just clock speed. Although the RISC chips have a much faster clock speed,
	they do less work per instruction. The performance of the system cannot be determined
	by clock speed alone. 
<HR>

</DL>

<H3><FONT COLOR="#000077"><B>Multiprocessor Systems</B></FONT></H3>
<P>Multiprocessor systems can provide significant performance with very good value.
With such a system, you can start with one or two processors and add more as needed.
Multiprocessors fall into several categories; two of the main types of multiprocessor
systems are the Symmetric Multiprocessor (SMP) system and the Massively Parallel
Processing (MPP) system.</P>
<P><FONT COLOR="#000077"><B>SMP Systems</B></FONT></P>
<P>SMP systems usually consist of a standard computer architecture with two or more
CPUs that share the system memory, I/O bus, and disks. The CPUs are called <I>symmetric</I>
because each processor is identical to any other processor in terms of function.
Because the processors share system memory, each processor looks at the same data
and the same operating system. In fact, the SMP architecture is sometimes called
<I>tightly coupled </I>because the CPUs can even share the operating system.</P>
<P>In the typical SMP system, only one copy of the operating system runs. Each processor
works independently by taking the next available job. Because the Oracle architecture
is based on many processes working independently, you can see great improvement by
adding processors.</P>
<P>The SMP system has these advantages:

<UL>
	<LI>It's cost effective--The addition of a CPU or CPU board is much less expensive
	than adding another entire system.
	<P>
	<LI>It's high performing--For most applications, each additional CPU provides an incremental
	performance improvement.
	<P>
	<LI>It's easily upgradable--Simply add a CPU to the system to instantly and significantly
	increase performance.
</UL>

<P>A typical SMP system supports between four and eight CPUs. Because the SMP system
shares the system bus and memory, only a certain amount of activity can occur before
the bandwidth of the bus is saturated. To add more processors, you must go to an
MPP architecture.</P>
<P><FONT COLOR="#000077"><B>MPP Systems</B></FONT></P>
<P>MPP systems are based on many independent units. Each processor in an MPP system
typically has its own resources (such as its own local memory and I/O system). Each
processor in an MPP system runs an independent copy of the operating system and its
own independent copy of Oracle. An MPP system is sometimes called <I>loosely coupled</I>.</P>
<P>Think of an MPP system as a large cluster of independent units that communicate
through a high-speed interconnect. As with SMP systems, you will eventually hit the
bandwidth limitations of the interconnect as you add processors. However, the number
of processors with which you hit this limit is typically much larger than with SMP
systems.</P>
<P>If you can divide the application among the nodes in the cluster, MPP systems
can achieve quite high scalability. Although MPP systems can achieve much higher
performance than SMP systems, they are less economical: MPP systems are typically
much higher in cost than SMP systems.</P>
<H3><FONT COLOR="#000077"><B>CPU Cache</B></FONT></H3>
<P>Regardless of whether you use a single-processor system, an SMP system, or an
MPP system, the basic architecture of the CPUs is similar. In fact, you can find
the same Intel processors in both SMP and MPP systems.</P>
<P>As you learned earlier today, the CPU cache is important to system performance. The
cache allows quick access to recently used instructions or data. A cache is always
used to store and retrieve data more quickly than the next level of storage (the
L1 cache is faster than the L2 cache, the L2 cache is faster than main memory, and
so on).</P>
<P>By caching frequently used instructions and data, you increase the likelihood
of a cache hit. This can save precious clock cycles that would otherwise have been
spent retrieving data from memory or disk.</P>
<H2><FONT COLOR="#000077"><B>System Memory Architecture</B></FONT></H2>
<P>The system memory is basically a set of memory chips, either protected or not
protected, that stores data and instructions used by the system. System memory can
be protected by parity or by a more sophisticated advanced ECC correction method.
Data parity will detect an incorrect value in memory and flag it to the system. An
advanced ECC correction method will not only detect an incorrect value in memory,
but in many cases can correct it. The system memory can range in size from 4MB on
a small PC to 4GB on a large SMP server.</P>
<P>Typically, the more memory available to Oracle, the better your performance. Allocation
of a large SGA allows Oracle to cache more data, thus speeding access to that data.</P>
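<P>The size of the SGA is determined by init.ora parameters such as the following. The parameter
names are standard Oracle parameters, but the sizes shown are examples only and should be adjusted
to fit the memory available on your own system. You can display the resulting SGA size with the
SHOW SGA command in Server Manager or SQL*Plus.</P>
<PRE>
# init.ora -- SGA-related parameters (sizes are illustrative only)
DB_BLOCK_SIZE     = 8192       # size of a database block, in bytes
DB_BLOCK_BUFFERS  = 16000      # number of blocks in the database buffer cache
SHARED_POOL_SIZE  = 50000000   # shared pool (shared SQL and dictionary cache), in bytes
LOG_BUFFER        = 1048576    # redo log buffer, in bytes

-- Display the total size of the SGA and its major components:
SHOW SGA
</PRE>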
<P><FONT COLOR="#000077"><B>New Term:</B></FONT><B> </B>System memory is accessed
by the CPUs through a high-speed memory bus that allows large amounts of data and instructions
to be quickly moved between main memory and the L2 cache. Data and instructions are typically
read from memory in large chunks and put into the cache. Because the CPU expects
that memory will be read sequentially, in most cases it will read ahead the data
