<HTML><HEAD>
<TITLE>Intro to Algorithms: CHAPTER 2: GROWTH OF FUNCTIONS</TITLE></HEAD><BODY BGCOLOR="#FFFFFF">
<a href="chap03.htm"><img align=right src="../../images/next.gif" alt="Next Chapter" border=0></A>
<a href="toc.htm"><img align=right src="../../images/toc.gif" alt="Return to Table of Contents" border=0></A>
<a href="parti.htm"><img align=right src="../../images/prev.gif" alt="Previous Chapter" border=0></A>
<h1><a name="06e8_0001">CHAPTER 2: GROWTH OF FUNCTIONS<a name="06e8_0001"></h1><P>
The order of growth of the running time of an algorithm, defined in Chapter 1, gives a simple characterization of the algorithm's efficiency and also allows us to compare the relative performance of alternative algorithms. Once the input size <I>n</I> becomes large enough, merge sort, with its <img src="../images/bound.gif">(<I>n</I> lg <I>n</I>) worst-case running time, beats insertion sort, whose worst-case running time is <img src="../images/bound.gif">(<I>n</I><SUP>2</SUP>). Although we can sometimes determine the exact running time of an algorithm, as we did for insertion sort in Chapter 1, the extra precision is not usually worth the effort of computing it. For large enough inputs, the multiplicative constants and lower-order terms of an exact running time are dominated by the effects of the input size itself.<P>
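The crossover is easy to see numerically. The following Python sketch (ours, not the book's; the constant factors 2 and 50 are purely hypothetical, chosen only to make the effect visible) compares a stand-in for an <img src="../images/bound.gif">(<I>n</I><SUP>2</SUP>) cost against a stand-in for an <img src="../images/bound.gif">(<I>n</I> lg <I>n</I>) cost:<P>
<pre>
# Sketch: where an n lg n algorithm overtakes an n^2 one.
# The constants 2 and 50 are hypothetical, chosen so the n^2
# algorithm wins at first; real constants depend on the
# implementation and the machine.
import math

def quadratic_cost(n):      # stand-in for c * n^2
    return 2 * n * n

def linearithmic_cost(n):   # stand-in for c' * n * lg n
    return 50 * n * math.log2(n)

for n in [2 ** k for k in range(1, 12)]:
    winner = "n lg n" if linearithmic_cost(n) < quadratic_cost(n) else "n^2"
    print(f"n = {n:5d}: {winner} is cheaper")
# Past the crossover (here near n = 190), n lg n wins for every larger n.
</pre><P>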
When we look at input sizes large enough to make only the order of growth of the running time relevant, we are studying the <I><B>asymptotic</I></B> efficiency of algorithms. That is, we are concerned with how the running time of an algorithm increases with the size of the input <I>in the limit</I>, as the size of the input increases without bound. Usually, an algorithm that is asymptotically more efficient will be the best choice for all but very small inputs.<P>
This chapter gives several standard methods for simplifying the asymptotic analysis of algorithms. The next section begins by defining several types of "asymptotic notation," of which we have already seen an example in <img src="../images/bound.gif">-notation. Several notational conventions used throughout this book are then presented, and finally we review the behavior of functions that commonly arise in the analysis of algorithms.<P>
<h1><a name="06ea_1177">2.1 Asymptotic notation<a name="06ea_1177"></h1><P>
<a name="06ea_1176">The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers <B>N</B> = {0, 1, 2, . . .}. Such notations are convenient for describing the worst-case running-time function <I>T</I>(<I>n</I>), which is usually defined only on integer input sizes. It is sometimes convenient, however, to <I>abuse</I> asymptotic notation in a variety of ways. For example, the notation is easily extended to the domain of real numbers or, alternatively, restricted to a subset of the natural numbers. It is important, however, to understand the precise meaning of the notation so that when it is abused, it is not <I>misused</I>. This section defines the basic asymptotic notations and also introduces some common abuses.<P>
<h2><img src="../images/bound.gif">-notation</h2><P>
<a name="06eb_1177">In Chapter 1, we found that the worst-case running time of insertion sort is <I>T</I>(<I>n</I>) = <img src="../images/bound.gif">(<I>n</I><SUP>2</SUP>). Let us define what this notation means. For a given function <I>g</I>(<I>n</I>), we denote by <img src="../images/bound.gif">(<I>g</I>(<I>n</I>)) the <I>set of functions</I><P>
<img src="../images/bound.gif">(<I>g</I>(<I>n</I>)) = {â(<I>n</I>) : there exist positive constants <I>c</I><SUB>1</SUB>, <I>c</I><SUB>2</SUB>, and <I>n</I><SUB>0 </SUB>such that<P>
0 <img src="../images/lteq12.gif"> <I>c</I><SUB>1</SUB><I>g</I>(<I>n</I>) <img src="../images/lteq12.gif"> â(<I>n</I>) <img src="../images/lteq12.gif"> <I>c</I><SUB>2</SUB><I>g</I>(<I>n</I>) for all <I>n</I> <img src="../images/gteq.gif"> <I>n</I><SUB>0</SUB>}.<P>
A function â(<I>n</I>) belongs to the set <img src="../images/bound.gif">(<I>g</I>(<I>n</I>)) if there exist positive constants <I>c</I><SUB>1</SUB> and <I>c</I><SUB>2</SUB> such that it can be "sandwiched" between <I>c</I><SUB>1</SUB><I>g</I>(<I>n</I>) and +<I>c</I><SUB>2</SUB><I>g</I>(<I>n</I>), for sufficiently large <I>n</I>. Although <img src="../images/bound.gif">(<I>g</I>(<I>n</I>)) is a set, we write "â(<I>n</I>) = <img src="../images/bound.gif">(<I>g</I>(<I>n</I>))" to indicate that â(<I>n</I>) is a member of <img src="../images/bound.gif">(<I>g</I>(<I>n</I>)), or "â(<I>n</I>) <img src="../images/memof12.gif"> <img src="../images/bound.gif">(<I>g</I>(<I>n</I>))." This abuse of equality to denote set membership may at first appear confusing, but we shall see later in this section that it has advantages.<P>
<a name="06eb_1178"><a name="06eb_1179">Figure 2.1 (a) gives an intuitive picture of functions â(<I>n</I>) and <I>g</I>(<I>n</I>), where â(<I>n</I>) = <img src="../images/bound.gif">(<I>g</I>(<I>n</I>)). For all values of <I>n</I> to the right of <I>n</I><SUB>0</SUB>, the value of â(<I>n</I>) lies at or above <I>c</I><SUB>1</SUB><I>g</I>(<I>n</I>) and at or below <I>c</I><SUB>2</SUB><I>g</I>(<I>n</I>). In other words, for all <I>n</I> <img src="../images/gteq.gif"> <I>n</I><SUB>0</SUB>, the function â(<I>n</I>) is equal to <I>g</I>(<I>n</I>) to within a constant factor. We say that <I>g</I>(<I>n</I>) is an <I><B>asymptotically tight bound</I></B> for â(<I>n</I>).<P>
<a name="06eb_117a">The definition of <img src="../images/bound.gif">(<I>g</I>(<I>n</I>)) requires that every member of <img src="../images/bound.gif">(<I>g</I>(<I>n</I>)) be <I><B>asymptotically nonnegative</I></B>, that is, that â(<I>n</I>) be nonnegative whenever <I>n</I> is sufficiently large. Consequently, the function <I>g</I>(<I>n</I>) itself must be asymptotically nonnegative, or else the set <img src="../images/bound.gif">(<I>g</I>(<I>n</I>)) is empty. We shall therefore assume that every function used within <img src="../images/bound.gif">-notation is asymptotically non-negative. This assumption holds for the other asymptotic notations defined in this chapter as well.<P>
In Chapter 1, we introduced an informal notion of <img src="../images/bound.gif">-notation that amounted to throwing away lower-order terms and ignoring the leading coefficient of the highest-order term. Let us briefly justify this intuition by using the formal definition to show that 1/2<I>n</I><SUP>2</SUP> - 3<I>n</I> = <img src="../images/bound.gif">(<I>n</I><SUP>2</SUP>). To do so, we must determine positive constants <I>c</I><SUB>1</SUB>, <I>c</I><SUB>2</SUB>, and <I>n</I><SUB>0</SUB> such that<P>
<img src="24_a.gif"><P>
for all <I>n</I> <img src="../images/gteq.gif"> <I>n</I><SUB>0</SUB>. Dividing by <I>n</I><SUP>2</SUP> yields<P>
<img src="24_b.gif"><P>
<img src="25_a.gif"><P>
<h4><a name="06eb_117b">Figure 2.1 Graphic examples of the <img src="../images/bound.gif">, O, and <img src="../images/omega12.gif"> notations. In each part, the value of n<SUB>0</SUB> shown is the minimum possible value; any greater value would also work. (a) <img src="../images/bound.gif">-notation bounds a function to within constant factors. We write <img src="../images/scrptf12.gif">(n) = <img src="../images/bound.gif">(g(n)) if there exist positive constants n<SUB>0</SUB>, c<SUB>1</SUB>, and c<SUB>2</SUB> such that to the right of n<SUB>0</SUB>, the value of â(n) always lies between c<SUB>1</SUB>g(n) and c<SUB>2</SUB>g(n) inclusive. (b) O-notation gives an upper bound for a function to within a constant factor. We write â(n) = O(g(n)) if there are positive constants n<SUB>0</SUB> and c such that to the right of n<SUB>0</SUB>, the value of â(n) always lies on or below cg(n). (c) <img src="../images/omega12.gif">-notation gives a lower bound for a function to within a constant factor. We write â(n) = <img src="../images/omega12.gif">(g(n)) if there are positive constants n<SUB>0</SUB> and c such that to the right of n<SUB>0</SUB>, the value of â(n) always lies on or above cg(n).<a name="06eb_117b"></sub></sup></h4><P>
The right-hand inequality can be made to hold for any value of <I>n</I> <img src="../images/gteq.gif"> 1 by choosing <I>c</I><SUB>2</SUB> <img src="../images/gteq.gif"> 1/2. Likewise, the left-hand inequality can be made to hold for any value of <I>n</I> <img src="../images/gteq.gif"> 7 by choosing <I>c</I><SUB>1</SUB> <img src="../images/lteq12.gif"> 1/14. Thus, by choosing <I>c</I><SUB>1</SUB> = 1/14, <I>c</I><SUB>2</SUB> = 1/2, and <I>n</I><SUB>0</SUB> = 7, we can verify that 1/2<I>n</I><SUP>2</SUP> - 3<I>n</I> = <img src="../images/bound.gif">(<I>n</I><SUP>2</SUP>). Certainly, other choices for the constants exist, but the important thing is that some choice exists. Note that these constants depend on the function 1/2<I>n</I><SUP>2</SUP> - 3<I>n</I>; a different function belonging to <img src="../images/bound.gif">(<I>n</I><SUP>2</SUP>) would usually require different constants.<P>
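To see these constants at work, here is a quick numeric sanity check in Python (ours, not part of the text); it confirms the sandwich inequality over a finite range, which of course is evidence rather than a proof:<P>
<pre>
# Verify that with c1 = 1/14, c2 = 1/2, and n0 = 7, the sandwich
#   0 <= c1*n^2 <= (1/2)*n^2 - 3*n <= c2*n^2
# holds for every n in a finite test range starting at n0.
def f(n):
    return 0.5 * n * n - 3 * n

c1, c2, n0 = 1 / 14, 1 / 2, 7

for n in range(n0, 10_000):
    assert 0 <= c1 * n * n <= f(n) <= c2 * n * n, f"fails at n = {n}"
print("sandwich holds for all n in [7, 9999]")
</pre><P>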
We can also use the formal definition to verify that 6<I>n</I><SUP>3</SUP> <img src="../images/noteq.gif"> <img src="../images/bound.gif">(<I>n</I><SUP>2</SUP>). Suppose for the purpose of contradiction that <I>c</I><SUB>2</SUB> and <I>n</I><SUB>0</SUB> exist such that 6<I>n</I><SUP>3</SUP> <img src="../images/lteq12.gif"> <I>c</I><SUB>2</SUB><I>n</I><SUP>2</SUP> for all <I>n</I> <img src="../images/gteq.gif"> <I>n</I><SUB>0</SUB>. But then <I>n</I> <img src="../images/lteq12.gif"> <I>c</I><SUB>2</SUB>/6, which cannot possibly hold for arbitrarily large <I>n</I>, since <I>c</I><SUB>2</SUB> is constant.<P>
Intuitively, the lower-order terms of an asymptotically positive function can be ignored in determining asymptotically tight bounds because they are insignificant for large <I>n</I>. A tiny fraction of the highest-order term is enough to dominate the lower-order terms. Thus, setting <I>c</I><SUB>1</SUB> to a value that is slightly smaller than the coefficient of the highest-order term and setting <I>c</I><SUB>2</SUB> to a value that is slightly larger permits the inequalities in the definition of <img src="../images/bound.gif">-notation to be satisfied. The coefficient of the highest-order term can likewise be ignored, since it only changes <I>c</I><SUB>1</SUB> and <I>c</I><SUB>2</SUB> by a constant factor equal to the coefficient.<P>
As an example, consider any quadratic function <I>f</I>(<I>n</I>) = <I>an</I><SUP>2</SUP> + <I>bn</I> + <I>c</I>, where <I>a</I>, <I>b</I>, and <I>c</I> are constants and <I>a</I> > 0. Throwing away the lower-order terms and ignoring the constant yields <I>f</I>(<I>n</I>) = <img src="../images/bound.gif">(<I>n</I><SUP>2</SUP>). Formally, to show the same thing, we take the constants <I>c</I><SUB>1</SUB> = <I>a</I>/4, <I>c</I><SUB>2</SUB> = 7<I>a</I>/4, and <I>n</I><SUB>0</SUB> = 2 &#183; max(|<I>b</I>|/<I>a</I>, (|<I>c</I>|/<I>a</I>)<SUP>1/2</SUP>). The reader may verify that 0 <img src="../images/lteq12.gif"> <I>c</I><SUB>1</SUB><I>n</I><SUP>2</SUP> <img src="../images/lteq12.gif"> <I>an</I><SUP>2</SUP> + <I>bn</I> + <I>c</I> <img src="../images/lteq12.gif"> <I>c</I><SUB>2</SUB><I>n</I><SUP>2</SUP> for all <I>n</I> <img src="../images/gteq.gif"> <I>n</I><SUB>0</SUB>. In general, for any polynomial <I>p</I>(<I>n</I>) = &#931;<SUB><I>i</I>=0</SUB><SUP><I>d</I></SUP> <I>a<SUB>i</SUB>n<SUP>i</SUP></I>, where the <I>a<SUB>i</SUB></I> are constants and <I>a<SUB>d</SUB></I> > 0, we have <I>p</I>(<I>n</I>) = <img src="../images/bound.gif">(<I>n<SUP>d</SUP></I>) (see Problem 2-1).<P>
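Because the choice of constants is mechanical, it can also be checked mechanically. The sketch below (ours; finite-range numeric evidence only) tests the witnesses <I>c</I><SUB>1</SUB> = <I>a</I>/4, <I>c</I><SUB>2</SUB> = 7<I>a</I>/4, and <I>n</I><SUB>0</SUB> = 2 &#183; max(|<I>b</I>|/<I>a</I>, (|<I>c</I>|/<I>a</I>)<SUP>1/2</SUP>) for a sample quadratic:<P>
<pre>
# For f(n) = a*n^2 + b*n + c with a > 0, check that c1 = a/4,
# c2 = 7a/4, and n0 = 2*max(|b|/a, sqrt(|c|/a)) witness
# f(n) = Theta(n^2) over a finite range of n >= n0.
import math

def check_quadratic(a, b, c, n_max=10_000):
    assert a > 0
    c1, c2 = a / 4, 7 * a / 4
    n0 = 2 * max(abs(b) / a, math.sqrt(abs(c) / a))
    for n in range(math.ceil(n0), n_max):
        f = a * n * n + b * n + c
        assert 0 <= c1 * n * n <= f <= c2 * n * n, f"fails at n = {n}"
    return n0

print(check_quadratic(3, -30, 5))   # prints 20.0; no assertion fires
</pre><P>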
Since any constant is a degree-0 polynomial, we can express any constant function as <img src="../images/bound.gif">(<I>n</I><SUP>0</SUP>), or <img src="../images/bound.gif">(1). This latter notation is a minor abuse, however, because it is not clear what variable is tending to infinity.<SUP>1 </SUP>We shall often use the notation <img src="../images/bound.gif">(1) to mean either a constant or a constant function with respect to some variable.<P>
<SUP><FONT FACE="Times" SIZE=1>1</SUP><FONT FACE="Times" SIZE=2>The real problem is that our ordinary notation for functions does not distinguish functions from values. In <img src="../images/lambdauc.gif">-calculus, the parameters to a function are clearly specified: the function <I>n</I><SUP><FONT FACE="Times" SIZE=1>2</SUP><FONT FACE="Times" SIZE=2> could be written as <img src="../images/lambdauc.gif"><I>n.n</I><SUP><FONT FACE="Times" SIZE=1>2, </SUP><FONT FACE="Times" SIZE=2>or even <img src="../images/lambdauc.gif"><I>r.r</I><SUP><FONT FACE="Times" SIZE=1>2</SUP><FONT FACE="Times" SIZE=2>. Adopting a more rigorous notation, however, would complicate algebraic manipulations, and so we choose to tolerate the abuse.</FONT></FONT></FONT></FONT></FONT></FONT></FONT></FONT><P>
<P>
<h2>O-notation</h2><P>
<a name="06ec_117b"><a name="06ec_117c"><a name="06ec_117d">The <img src="../images/bound.gif">-notation asymptotically bounds a function from above and below. When we have only an <I><B>asymptotic upper bound</I></B>, we use <I>O</I>-notation. For a given function <I>g</I>(<I>n</I>), we denote by <I>O</I>(<I>g</I>(<I>n</I>)) the set of functions<P>
<I>O</I>(<I>g</I>(<I>n</I>)) = {<I>f</I>(<I>n</I>) : there exist positive constants <I>c</I> and <I>n</I><SUB>0</SUB> such that<P>
0 <img src="../images/lteq12.gif"> <I>f</I>(<I>n</I>) <img src="../images/lteq12.gif"> <I>cg</I>(<I>n</I>) for all <I>n</I> <img src="../images/gteq.gif"> <I>n</I><SUB>0</SUB>}.<P>
We use <I>O</I>-notation to give an upper bound on a function, to within a constant factor. Figure 2.1(b) shows the intuition behind <I>O</I>-notation. For all values <I>n</I> to the right of <I>n</I><SUB>0</SUB>, the value of the function <I>f</I>(<I>n</I>) is on or below <I>cg</I>(<I>n</I>).<P>
To indicate that a function <I>f</I>(<I>n</I>) is a member of <I>O</I>(<I>g</I>(<I>n</I>)), we write <I>f</I>(<I>n</I>) = <I>O</I>(<I>g</I>(<I>n</I>)). Note that <I>f</I>(<I>n</I>) = <img src="../images/bound.gif">(<I>g</I>(<I>n</I>)) implies <I>f</I>(<I>n</I>) = <I>O</I>(<I>g</I>(<I>n</I>)), since <img src="../images/bound.gif">-notation is a stronger notion than <I>O</I>-notation. Written set-theoretically, we have <img src="../images/bound.gif">(<I>g</I>(<I>n</I>)) <img src="../images/rgtubar.gif"> <I>O</I>(<I>g</I>(<I>n</I>)). Thus, our proof that any quadratic function <I>an</I><SUP>2</SUP> + <I>bn</I> + <I>c</I>, where <I>a</I> > 0, is in <img src="../images/bound.gif">(<I>n</I><SUP>2</SUP>) also shows that any quadratic function is in <I>O</I>(<I>n</I><SUP>2</SUP>). What may be more surprising is that any <I>linear</I> function <I>an</I> + <I>b</I> is in <I>O</I>(<I>n</I><SUP>2</SUP>), which is easily verified by taking <I>c</I> = <I>a</I> + |<I>b</I>| and <I>n</I><SUB>0</SUB> = 1.<P>
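The witnesses for the linear case are just as easy to check numerically (again ours, and again evidence rather than proof):<P>
<pre>
# Check that f(n) = a*n + b satisfies f(n) <= c*n^2 for all tested
# n >= n0, with the witnesses c = a + |b| and n0 = 1 given above.
def linear_in_O_n2(a, b, n_max=10_000):
    c, n0 = a + abs(b), 1
    return all(a * n + b <= c * n * n for n in range(n0, n_max))

print(linear_in_O_n2(5, 100))   # True: 5n + 100 <= 105*n^2 for n >= 1
</pre><P>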
Some readers who have seen <I>O</I>-notation before may find it strange that we should write, for example, <I>n</I> = <I>O</I>(<I>n</I><SUP>2</SUP>). In the literature, <I>O</I>-notation is sometimes used informally to describe asymptotically tight bounds, that is, what we have defined using <img src="../images/bound.gif">-notation. In this book, however, when we write â(<I>n</I>) = <I>O</I>(<I>g</I>(<I>n</I>)), we are merely claiming that some constant multiple of <I>g</I>(<I>n</I>) is an asymptotic upper bound on â(<I>n</I>), with no claim about how tight an upper bound it is. Distinguishing asymptotic upper bounds from asymptotically tight bounds has now become standard in the algorithms literature.<P>
Using <I>O</I>-notation, we can often describe the running time of an algorithm merely by inspecting the algorithm's overall structure. For example, the doubly nested loop structure of the insertion sort algorithm from Chapter 1 immediately yields an <I>O</I>(<I>n</I><SUP>2</SUP>) upper bound on the worst-case running time: the cost of the inner loop is bounded from above by <I>O</I>(1) (constant), the indices <I>i</I> and <I>j</I> are both at most <I>n</I>, and the inner loop is executed at most once for each of the <I>n</I><SUP>2</SUP> pairs of values for <I>i</I> and <I>j.</I><P>
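For concreteness, here is the insertion sort of Chapter 1 transcribed into Python (the transcription is ours); the comments mark where the <I>O</I>(<I>n</I><SUP>2</SUP>) bound comes from:<P>
<pre>
# Insertion sort, after the Chapter 1 pseudocode. The outer loop runs
# at most n times and the inner loop at most n times per iteration, so
# the inner-loop body executes O(n^2) times in all; the upper bound
# can be read directly off the doubly nested loop structure.
def insertion_sort(A):
    for j in range(1, len(A)):          # outer loop over j
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:    # inner loop: at most j steps
            A[i + 1] = A[i]             # shift larger elements right
            i -= 1
        A[i + 1] = key                  # insert key into its place
    return A

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]
</pre><P>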
Since <I>O</I>-notation describes an upper bound, when we use it to bound the worst-case running time of an algorithm, by implication we also bound the running time of the algorithm on arbitrary inputs as well. Thus, the <I>O</I>(<I>n</I><SUP>2</SUP>) bound on the worst-case running time of insertion sort also applies to its running time on every input. The <img src="../images/bound.gif">(<I>n</I><SUP>2</SUP>) bound on the worst-case running time of insertion sort, however, does not imply a <img src="../images/bound.gif">(<I>n</I><SUP>2</SUP>) bound on the running time of insertion sort on <I>every</I> input. For example, we saw in Chapter 1 that when the input is already sorted, insertion sort runs in <img src="../images/bound.gif">(<I>n</I>) time.<P>
Technically, it is an abuse to say that the running time of insertion sort is <I>O</I>(<I>n</I><SUP>2</SUP>), since for a given <I>n</I>, the actual running time depends on the particular input of size <I>n.</I> That is, the running time is not really a function of <I>n.</I> What we mean when we say "the running time is <I>O</I>(<I>n</I><SUP>2</SUP>)" is that the worst-case running time (which is a function of <I>n</I>) is <I>O</I>(<I>n</I><SUP>2</SUP>), or equivalently, no matter what particular input of size <I>n</I> is chosen for each value of <I>n</I>, the running time on that set of inputs is <I>O</I>(<I>n</I><SUP>2</SUP>).<P>
<P>
<h2><img src="../images/omega12.gif">-notation</h2><P>
<a name="06ed_117e"><a name="06ed_117f"><a name="06ed_1180">Just as <I>O</I>-notation provides an asymptotic <I>upper</I> bound on a function, <img src="../images/omega12.gif">-notation provides an <I><B>asymptotic lower bound</I>.</B> For a given function <I>g</I>(<I>n</I>), we denote by <img src="../images/omega12.gif">(<I>g</I>(<I>n</I>)) the set of functions<P>
<img src="../images/omega12.gif">(<I>g</I>(<I>n</I>)) = {<I>f</I>(<I>n</I>): there exist positive constants <I>c</I> and <I>n</I><SUB>0</SUB> such that<P>
0 <img src="../images/lteq12.gif"> <I>cg</I>(<I>n</I>) <img src="../images/lteq12.gif"> <I>f</I> (<I>n</I>) for all <I>n</I> <img src="../images/gteq.gif"> <I>n</I><SUB>0</SUB>} .<P>
The intuition behind <img src="../images/omega12.gif">-notation is shown in Figure 2.1(c). For all values <I>n</I> to the right of <I>n</I><SUB>0</SUB>, the value of <I>f</I>(<I>n</I>) is on or above <I>cg</I>(<I>n</I>).<P>
From the definitions of the asymptotic notations we have seen thus far, it is easy to prove the following important theorem (see Exercise 2.1-5).<P>
<a name="06ed_1185">Theorem 2.1<a name="06ed_1185"><P>
For any two functions <I>f</I>(<I>n</I>) and <I>g</I>(<I>n</I>), <I>f</I>(<I>n</I>) = <img src="../images/bound.gif">(<I>g</I>(<I>n</I>)) if and only if <I>f</I>(<I>n</I>) = <I>O</I>(<I>g</I>(<I>n</I>)) and <I>f</I>(<I>n</I>) = <img src="../images/omega12.gif">(<I>g</I>(<I>n</I>)).<P>
As an example of the application of this theorem, our proof that <I>an</I><SUP>2</SUP> + <I>bn</I> + <I>c</I> = <img src="../images/bound.gif">(<I>n</I><SUP>2</SUP>) for any constants <I>a</I>, <I>b</I>, and <I>c</I>, where <I>a</I> > 0, immediately implies that <I>an</I><SUP>2</SUP> + <I>bn</I> + <I>c</I> = <img src="../images/omega12.gif">(<I>n</I><SUP>2</SUP>) and <I>an</I><SUP>2</SUP> + <I>bn</I> + <I>c</I> = <I>O</I>(<I>n</I><SUP>2</SUP>). In practice, rather than using Theorem 2.1 to obtain asymptotic upper and lower bounds from asymptotically tight bounds, as we did for this example, we usually use it to prove asymptotically tight bounds from asymptotic upper and lower bounds.<P>
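Theorem 2.1 also suggests a mechanical test of a conjectured tight bound: check an upper-bound witness and a lower-bound witness separately. The sketch below (ours; finite-range evidence, not a proof) does exactly that for the quadratic checked earlier:<P>
<pre>
# A Theta bound decomposed, as in Theorem 2.1, into an O check and an
# Omega check over a finite range of n.
def is_O(f, g, c, n0, n_max=10_000):
    return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max))

def is_Omega(f, g, c, n0, n_max=10_000):
    return all(0 <= c * g(n) <= f(n) for n in range(n0, n_max))

def looks_Theta(f, g, c1, c2, n0):
    return is_Omega(f, g, c1, n0) and is_O(f, g, c2, n0)

f = lambda n: 3 * n * n - 30 * n + 5     # a = 3, b = -30, c = 5
g = lambda n: n * n
print(looks_Theta(f, g, c1=3 / 4, c2=21 / 4, n0=20))   # True
</pre><P>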
<a name="06ed_1181"><a name="06ed_1182">Since <img src="../images/omega12.gif">-notation describes a lower bound, when we use it to bound the best-case running time of an algorithm, by implication we also bound the running time of the algorithm on arbitrary inputs as well. For example, the best-case running time of insertion sort is <img src="../images/omega12.gif">(<I>n</I>), which implies that the running time of insertion sort is <img src="../images/omega12.gif">(<I>n</I>).<P>
<a name="06ed_1183"><a name="06ed_1184">The running time of insertion sort therefore falls between <img src="../images/omega12.gif">(<I>n</I>) and <I>O</I>(<I>n</I><SUP>2</SUP><I>),</I> since it falls anywhere between a linear function of <I>n</I> and a quadratic function of <I>n.</I> Moreover, these bounds are asymptotically as tight as possible: for instance, the running time of insertion sort is not<I> </I><img src="../images/omega12.gif">(<I>n</I><SUP>2</SUP>), since insertion sort runs in <img src="../images/bound.gif"><I></I>(<I>n</I>) time when the input is already sorted. It is not contradictory, however, to say that the <I>worst-case</I> running time of insertion sort is <img src="../images/omega12.gif">(<I>n</I><SUP>2</SUP>), since there exists an input that causes the algorithm to take <img src="../images/omega12.gif">(n<SUP>2</SUP>) time. When we say that the <I>running time</I> (no modifier) of an algorithm is <img src="../images/omega12.gif">(<I>g</I>(<I>n</I>)), we mean that <I>no matter what particular input of size n is chosen for each value of n</I>, the running time on that set of inputs is at least a constant times <I>g</I>(<I>n</I>), for sufficiently large <I>n.</I><P>
<P>
<h2>Asymptotic notation in equations</h2><P>
We have already seen how asymptotic notation can be used within mathematical formulas. For example, in introducing <I>O</I>-notation, we wrote "<I>n</I> = <I>O</I>(<I>n</I><SUP>2</SUP>)." We might also write 2<I>n</I><SUP>2</SUP> + 3<I>n</I> + 1 = 2<I>n</I><SUP>2</SUP> + <img src="../images/bound.gif">(<I>n</I>). How do we interpret such formulas?<P>
When the asymptotic notation stands alone on the right-hand side of an equation, as in <I>n</I> = <I>O</I>(<I>n</I><SUP>2</SUP>), we have already defined the equal sign to mean set membership: <I>n</I> <img src="../images/memof12.gif"> <I>O</I>(<I>n</I><SUP>2</SUP>). In general, however, when asymptotic notation appears in a formula, we interpret it as standing for some anonymous function that we do not care to name. For example, the formula 2<I>n</I><SUP>2</SUP> + 3<I>n</I> + 1 = 2<I>n</I><SUP>2</SUP> + <img src="../images/bound.gif">(<I>n</I>) means that 2<I>n</I><SUP>2</SUP> + 3<I>n</I> + 1 = 2<I>n</I><SUP>2</SUP> + <I>f</I>(<I>n</I>), where <I>f</I>(<I>n</I>) is some function in the set <img src="../images/bound.gif">(<I>n</I>). In this case, <I>f</I>(<I>n</I>) = 3<I>n</I> + 1, which indeed is in <img src="../images/bound.gif">(<I>n</I>).<P>
Using asymptotic notation in this manner can help eliminate inessential detail and clutter in an equation. For example, in Chapter 1 we expressed the worst-case running time of merge sort as the recurrence<P>
<pre><I>T</I>(<I>n</I>) = 2<I>T</I>(<I>n</I>/2) + <img src="../images/bound.gif">(<I>n</I>).</pre><P>
If we are interested only in the asymptotic behavior of <I>T</I>(<I>n</I>), there is no point in specifying all the lower-order terms exactly; they are all understood to be included in the anonymous function denoted by the term <img src="../images/bound.gif">(<I>n</I>).<P>
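To see that the anonymous <img src="../images/bound.gif">(<I>n</I>) term does not disturb the <I>n</I> lg <I>n</I> growth, we can evaluate the recurrence with a concrete stand-in for it, say exactly <I>n</I> (the stand-in and the base case <I>T</I>(1) = 1 are our illustrative choices, not the book's):<P>
<pre>
# Evaluate T(n) = 2*T(n/2) + n with T(1) = 1 at powers of two; the
# plain n stands in for the anonymous Theta(n) term. The ratio
# T(n) / (n lg n) settles toward a constant (here it tends to 1).
import math

def T(n):
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

for n in [2 ** k for k in range(1, 16)]:
    print(n, T(n), round(T(n) / (n * math.log2(n)), 4))
</pre><P>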
The number of anonymous functions in an expression is understood to be equal to the number of times the asymptotic notation appears. For example, in the expression<P>
<img src="29_a.gif"><P>
there is only a single anonymous function (a function of <I>i</I>). This expression is thus <I>not</I> the same as <I>O</I>(1) + <I>O</I>(2) + <img src="../images/dot10.gif"> <img src="../images/dot10.gif"> <img src="../images/dot10.gif"> + <I>O</I>(<I>n</I>), which doesn't really have a clean interpretation.<P>
In some cases, asymptotic notation appears on the left-hand side of an equation, as in<P>
<pre>2<I>n</I><SUP>2</SUP> + <img src="../images/bound.gif">(<I>n</I>) = <img src="../images/bound.gif">(<I>n</I><SUP>2</SUP>).</pre><P>