Use mathematical induction to show that the solution of the recurrence<P>
<img src="15_b.gif"><P>
<a name="06e0_1172">1.3-4<a name="06e0_1172"><P>
Insertion sort can be expressed as a recursive procedure as follows. In order to sort <I>A</I>[1 . . <I>n</I>], we recursively sort <I>A</I>[1 . . <I>n</I> - 1] and then insert <I>A</I>[<I>n</I>] into the sorted array <I>A</I>[1 . . <I>n</I> - 1]. Write a recurrence for the running time of this recursive version of insertion sort.<P>
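One way to make the recurrence concrete is the following Python sketch (the function name and 0-based indexing are illustrative, not from the text):<P>
<PRE>
# Recursive insertion sort: sort A[0 .. n-2], then insert A[n-1].
# Illustrative sketch; the text's A[1 .. n] is 1-based.
def recursive_insertion_sort(A, n=None):
    if n is None:
        n = len(A)
    if n <= 1:                          # length 0 or 1: already sorted
        return
    recursive_insertion_sort(A, n - 1)  # T(n - 1): sort the prefix
    key = A[n - 1]
    i = n - 2
    while i >= 0 and A[i] > key:        # up to n - 1 shifts in the worst case
        A[i + 1] = A[i]
        i -= 1
    A[i + 1] = key

# The running time satisfies T(n) = T(n - 1) + Theta(n), giving Theta(n^2).
</PRE><P>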
<a name="06e0_1173">1.3-5<a name="06e0_1173"><P>
<a name="06e0_1168"><a name="06e0_1169"><a name="06e0_116a">Referring back to the searching problem (see Exercise 1.1-3), observe that if the sequence <I>A</I> is sorted, we can check the midpoint of the sequence against <I>v</I> and eliminate half of the sequence from further consideration. <I><B>Binary search</I></B><I> </I>is an algorithm that repeats this procedure, halving the size of the remaining portion of the sequence each time. Write pseudocode, either iterative or recursive, for binary search. Argue that the worst-case running time of binary search is <img src="../images/bound.gif">(lg <I>n</I>).<P>
<a name="06e0_1174">1.3-6<a name="06e0_1174"><P>
<a name="06e0_116b"><a name="06e0_116c"><a name="06e0_116d">Observe that the <B>while</B> loop of lines 5-7 of the <FONT FACE="Courier New" SIZE=2>INSERTION</FONT>-<FONT FACE="Courier New" SIZE=2>SORT</FONT> procedure in Section 1.1 uses a linear search to scan (backward) through the sorted subarray A[1 . . <I>j</I> - 1]. Can we use a binary search (see Exercise 1.3-5) instead to improve the overall worst-case running time of insertion sort to <img src="../images/bound.gif">(<I>n</I> lg <I>n</I>)?<P>
<a name="06e0_1175">1.3-7<a name="06e0_1175"><P>
Describe a Θ(<I>n</I> lg <I>n</I>)-time algorithm that, given a set <I>S</I> of <I>n</I> real numbers and another real number <I>x</I>, determines whether or not there exist two elements in <I>S</I> whose sum is exactly <I>x</I>.<P>
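One standard approach (a sketch, not necessarily the intended solution): sort <I>S</I> in Θ(<I>n</I> lg <I>n</I>) time, then scan inward with two pointers in Θ(<I>n</I>) time.<P>
<PRE>
# Determine whether two elements of S sum to exactly x.
def has_pair_with_sum(S, x):
    A = sorted(S)                # Theta(n lg n)
    i, j = 0, len(A) - 1
    while i < j:                 # Theta(n) scan
        s = A[i] + A[j]
        if s == x:
            return True
        elif s < x:
            i += 1               # need a larger sum
        else:
            j -= 1               # need a smaller sum
    return False
</PRE><P>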
<P>
<P>
<h1><a name="06e1_0001">1.4 Summary<a name="06e1_0001"></h1><P>
A good algorithm is like a sharp knife--it does exactly what it is supposed to do with a minimum amount of applied effort. Using the wrong algorithm to solve a problem is like trying to cut a steak with a screwdriver: you may eventually get a digestible result, but you will expend considerably more effort than necessary, and the result is unlikely to be aesthetically pleasing.<P>
Algorithms devised to solve the same problem often differ dramatically in their efficiency. These differences can be much more significant than the difference between a personal computer and a supercomputer. As an example, let us pit a supercomputer running insertion sort against a small personal computer running merge sort. They each must sort an array of one million numbers. Suppose the supercomputer executes 100 million instructions per second, while the personal computer executes only one million instructions per second. To make the difference even more dramatic, suppose that the world's craftiest programmer codes insertion sort in machine language for the supercomputer, and the resulting code requires 2<I>n</I><SUP>2</SUP> supercomputer instructions to sort <I>n</I> numbers. Merge sort, on the other hand, is programmed for the personal computer by an average programmer using a high-level language with an inefficient compiler, with the resulting code taking 50<I>n</I> lg <I>n</I> personal computer instructions. To sort a million numbers, the supercomputer takes<P>
<img src="16_a.gif"><P>
By using an algorithm whose running time has a lower order of growth, even with a poor compiler, the personal computer runs 20 times faster than the supercomputer!<P>
This example shows that algorithms, like computer hardware, are a <B><I>technology</I></B>. Total system performance depends on choosing efficient algorithms as much as on choosing fast hardware. Just as rapid advances are being made in other computer technologies, they are being made in algorithms as well.<P>
<h2><a name="06e2_1170">Exercises<a name="06e2_1170"></h2><P>
<a name="06e2_1171">1.4-1<a name="06e2_1171"><P>
<a name="06e2_116e"><a name="06e2_116f">Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For inputs of size <I>n</I>, insertion sort runs in 8<I>n</I><SUP>2</SUP> steps, while merge sort runs in 64<I>n</I> 1g <I>n</I> steps. For which values of <I>n </I>does insertion sort beat merge sort? How might one rewrite the merge sort pseudocode to make it even faster on small inputs?<P>
<a name="06e2_1172">1.4-2<a name="06e2_1172"><P>
What is the smallest value of <I>n</I> such that an algorithm whose running time is 100<I>n</I><SUP>2</SUP> runs faster than an algorithm whose running time is 2<SUP><I>n</I></SUP> on the same machine?<P>
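A short search (illustrative Python) locates the answer:<P>
<PRE>
# Smallest n with 100 n^2 < 2^n.
n = 1
while 100 * n * n >= 2**n:
    n += 1
print(n)   # 15, since 100 * 15**2 = 22500 < 2**15 = 32768
</PRE><P>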
<P>
<P>
<h1><a name="06e3_1174">Problems<a name="06e3_1174"></h1><P>
<a name="06e3_1175">1-1 Comparison of running times<a name="06e3_1175"><P>
For each function <I>f</I>(<I>n</I>) and time <I>t</I> in the following table, determine the largest size <I>n</I> of a problem that can be solved in time <I>t</I>, assuming that the algorithm to solve the problem takes <I>f</I>(<I>n</I>) microseconds.<P>
<img src="17_a.gif"><P>
<a name="06e3_1176">1-2 Insertion sort on small arrays in merge sort<a name="06e3_1176"><P>
<a name="06e3_1170"><a name="06e3_1171">Although merge sort runs in <img src="../images/bound.gif">(<I>n</I> lg <I>n</I>) worst-case time and insertion sort runs in <img src="../images/bound.gif">(<I>n</I><SUP>2</SUP>) worst-case time, the constant factors in insertion sort make it faster for small <I>n</I>. Thus, it makes sense to use insertion sort within merge sort when subproblems become sufficiently small. Consider a modification to merge sort in which <I>n</I>/<I>k</I> sublists of length <I>k</I> are sorted using insertion sort and then merged using the standard merging mechanism, where<I> k</I> is a value to be determined.<P>
<B><I>a.</I></B> Show that the <I>n</I>/<I>k</I> sublists, each of length <I>k</I>, can be sorted by insertion sort in Θ(<I>nk</I>) worst-case time.<P>
<B><I>b.</I></B> Show that the sublists can be merged in Θ(<I>n</I> lg(<I>n</I>/<I>k</I>)) worst-case time.<P>
<B><I>c.</I></B> Given that the modified algorithm runs in Θ(<I>nk</I> + <I>n</I> lg(<I>n</I>/<I>k</I>)) worst-case time, what is the largest asymptotic (Θ-notation) value of <I>k</I> as a function of <I>n</I> for which the modified algorithm has the same asymptotic running time as standard merge sort?<P>
<B><I>d.</I></B> How should <I>k</I> be chosen in practice?<P>
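To make the modification concrete, here is a Python sketch (the cutoff <I>k</I> = 16 is arbitrary; part (<B>d</B>) asks how it should really be chosen):<P>
<PRE>
# Merge sort that hands sublists of length <= k to insertion sort.
def merge_sort_with_cutoff(A, k=16):
    if len(A) <= k:
        for j in range(1, len(A)):      # insertion sort the short sublist
            key, i = A[j], j - 1
            while i >= 0 and A[i] > key:
                A[i + 1] = A[i]
                i -= 1
            A[i + 1] = key
        return A
    mid = len(A) // 2
    left = merge_sort_with_cutoff(A[:mid], k)
    right = merge_sort_with_cutoff(A[mid:], k)
    merged, i, j = [], 0, 0             # the standard merging mechanism
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
</PRE><P>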
<a name="06e3_1177">1-3 Inversions<a name="06e3_1177"><P>
<a name="06e3_1172"><a name="06e3_1173">Let A[1 . . <I>n</I>] be an array of <I>n</I> distinct numbers. If <I>i</I> < <I>j</I> and <I>A</I>[<I>i</I>] > <I>A</I>[<I>j</I>], then the pair (<I>i, j</I>) is called an <I><B>inversion</I></B> of <I>A</I>.<P>
<B><I>a.</I></B> List the five inversions of the array ⟨2, 3, 8, 6, 1⟩.<P>
<B><I>b.</I></B> What array with elements from the set {1, 2, . . . , <I>n</I>} has the most inversions? How many does it have?<P>
<B><I>c.</I></B> What is the relationship between the running time of insertion sort and the number of inversions in the input array? Justify your answer.<P>
<B><I>d.</I></B> Give an algorithm that determines the number of inversions in any permutation on <I>n</I> elements in Θ(<I>n</I> lg <I>n</I>) worst-case time. (<I>Hint</I>: Modify merge sort.)<P>
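For part (<B>d</B>), a sketch of the hinted modification (illustrative Python): during each merge, every time an element of the right half is emitted first, all elements remaining in the left half form inversions with it.<P>
<PRE>
# Count inversions in Theta(n lg n) by piggybacking on merge sort.
# Returns (sorted copy of A, number of inversions in A).
def count_inversions(A):
    if len(A) <= 1:
        return A, 0
    mid = len(A) // 2
    left, inv_left = count_inversions(A[:mid])
    right, inv_right = count_inversions(A[mid:])
    merged, i, j, crossing = [], 0, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            crossing += len(left) - i   # left[i:] all exceed right[j]
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, inv_left + inv_right + crossing

print(count_inversions([2, 3, 8, 6, 1])[1])   # 5, matching part (a)
</PRE><P>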
<P>
<h1>Chapter notes</h1><P>
There are many excellent texts on the general topic of algorithms, including those by Aho, Hopcroft, and Ullman [4, 5], Baase [14], Brassard and Bratley [33], Horowitz and Sahni [105], Knuth [121, 122, 123], Manber [142], Mehlhorn [144, 145, 146], Purdom and Brown [164], Reingold, Nievergelt, and Deo [167], Sedgewick [175], and Wilf [201]. Some of the more practical aspects of algorithm design are discussed by Bentley [24, 25] and Gonnet [90].<P>
In 1968, Knuth published the first of three volumes with the general title <I>The Art of Computer Programming</I> [121, 122, 123]. The first volume ushered in the modern study of computer algorithms with a focus on the analysis of running time, and the full series remains an engaging and worthwhile reference for many of the topics presented here. According to Knuth, the word "algorithm" is derived from the name "al-Khowârizmî," a ninth-century Persian mathematician.<P>
Aho, Hopcroft, and Ullman [4] advocated the asymptotic analysis of algorithms as a means of comparing relative performance. They also popularized the use of recurrence relations to describe the running times of recursive algorithms.<P>
<a name="06e4_1174"><a name="06e4_1175">Knuth [123] provides an encyclopedic treatment of many sorting algorithms. His comparison of sorting algorithms (page 381) includes exact step-counting analyses, like the one we performed here for insertion sort. Knuth's discussion of insertion sort encompasses several variations of the algorithm. The most important of these is Shell's sort, introduced by D. L. Shell, which uses insertion sort on periodic subsequences of the input to produce a faster sorting algorithm.<P>
Merge sort is also described by Knuth. He mentions that a mechanical collator capable of merging two decks of punched cards in a single pass was invented in 1938. J. von Neumann, one of the pioneers of computer science, apparently wrote a program for merge sort on the EDVAC computer in 1945.<P>
<P>
<P>
<P>
</BODY></HTML>