% chap2.tex
\enddisplay
to conclude that
\begindisplay
2S=\sum_{1\le j,k\le n}(a_j-a_k)(b_j-b_k)
 \;-\sum_{1\le j=k\le n}\!(a_j-a_k)(b_j-b_k)\,.
\enddisplay
The second sum here is zero; what about the first? It expands into
four separate sums, each of which is vanilla flavored:
\begindisplay \openup4pt
&\sum_{1\le j,k\le n}a_jb_j \;-\;\sum_{1\le j,k\le n}a_jb_k
 \;-\;\sum_{1\le j,k\le n}a_kb_j \;+\;\sum_{1\le j,k\le n}a_kb_k\cr
\noalign{\nobreak}
&\quad=2\!\!\sum_{1\le j,k\le n}a_kb_k \;-\;2\!\!\sum_{1\le j,k\le n}a_jb_k\cr
&\quad=2n\!\sum_{1\le k\le n}a_kb_k
 \;-\;2\biggl(\sum_{k=1}^n a_k\biggr)\biggl(\sum_{k=1}^n b_k\biggr)\,.
\enddisplay
In the last step both sums have been simplified according to the
general distributive law \eq(|gen-dist-law|). If the manipulation of
the first sum seems mysterious, here it is again in slow motion:
\begindisplay \openup5pt
2\!\!\sum_{1\le j,k\le n}a_kb_k
 &=2\!\!\sum_{1\le k\le n}\,\sum_{1\le j\le n}a_kb_k\cr
 &=2\!\!\sum_{1\le k\le n}a_kb_k\sum_{1\le j\le n}1\cr
 &=2\!\!\sum_{1\le k\le n}a_kb_kn
  =2n\!\sum_{1\le k\le n}a_kb_k\,.\cr
\enddisplay
An index variable that doesn't appear in the summand (here~$j$) can
simply be eliminated if we multiply what's left by the size of that
variable's index set (here~$n$).

Returning to where we left off, we can now divide everything by~$2$ and
rearrange things to obtain an interesting formula:
\begindisplay
\biggl(\sum_{k=1}^n a_k\mskip-2mu\biggr)
\biggl(\sum_{k=1}^n b_k\mskip-2mu\biggr)
 = n\sum_{k=1}^n a_kb_k
 -\sum_{1\le j<k\le n}\!(a_k-a_j)(b_k-b_j)\,.\eqno
\enddisplay
This identity yields {\it "Chebyshev's monotonic
inequalities"\/} as a special case:
"!inequalities, Chebyshev"
\g \vskip-10pt
("Chebyshev"~[|chebyshev-mono|] actually~proved
the analogous result for~integrals\newline instead of sums,
\par{\smallskip$\bigl(\int_a^b f(x)\,dx\bigr)$
\par\baselineskip=11.5pt\hfill${}\cdot\bigl(\int_a^b g(x)\,dx\bigr)$
\par\ $\le{(b-a)}$
\par\hfill${}\cdot\bigl(\int_a^b\!f(x)g(x)\,dx\bigr)$,\smallskip}
if\/ $f(x)$ and $g(x)$ are monotone nondecreasing functions.)\g
\begindisplay \openup4pt
\biggl(\sum_{k=1}^n \!a_k\mskip-2mu\biggr)
 \biggl(\sum_{k=1}^n\! b_k\mskip-2mu\biggr)
 \le n\sum_{k=1}^n \!a_kb_k\,,\mskip-2mu
 \quad\hbox{if $a_1\le\cdots\le a_n$ and $b_1\le\cdots\le b_n$};\cr
\biggl(\sum_{k=1}^n \!a_k\mskip-2mu\biggr)
 \biggl(\sum_{k=1}^n\! b_k\mskip-2mu\biggr)
 \ge n\sum_{k=1}^n \!a_kb_k\,,\mskip-2mu
 \quad\hbox{if $a_1\le\cdots\le a_n$ and $b_1\ge\cdots\ge b_n$}.\cr
\enddisplay
(In general, if $a_1\le\cdots\le a_n$ and if $p$ is a permutation of
$\{1,\ldots,n\}$, it's not difficult to prove that
the largest value of $\sum_{k=1}^n a_kb_{p(k)}$ occurs when
$b_{p(1)}\le\cdots\le b_{p(n)}$, and the smallest value occurs when
$b_{p(1)}\ge\cdots\ge b_{p(n)}$.)
\goodbreak
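For instance, here is a small numerical case, chosen purely as an
illustration: let $n=3$ and $a_k=k$. Taking $b_k=k$ as well (so that both
sequences are nondecreasing) gives $a_1b_1+a_2b_2+a_3b_3=1+4+9=14$, while
the reversed choice $(b_1,b_2,b_3)=(3,2,1)$ gives $3+4+3=10$; so the two
inequalities assert that
\begindisplay
6\cdot6\;\le\;3\cdot14=42 \qquad\hbox{and}\qquad 6\cdot6\;\ge\;3\cdot10=30\,,
\enddisplay
and both indeed hold, since $6\cdot6=36$.
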
Multiple summation has an interesting connection with the general
operation of "changing the index of summation" in {\it single\/} sums. We know
by the commutative law that
\begindisplay
\sum_{k\in K}a_k=\sum_{p(k)\in K}a_{p(k)}\,,
\enddisplay
if $p(k)$ is any permutation of the integers. But what happens when we
replace $k$ by $f(j)$, where $f$ is an arbitrary function
\begindisplay
f\!\!: J\rightarrow K
\enddisplay
that takes an integer $j\in J$ into an integer $f(j)\in K$?
The general formula for index replacement is
\begindisplay
\sum_{j\in J}a_{f(j)} = \sum_{k\in K}a_k\,\#f^-(k)\,,\eqno\eqref|gen-repl|
\enddisplay
where $\#f^-(k)$ stands for the number of elements in the set\tabref|nn:card|%
\begindisplay
f^-(k)=\bigl\{\,j\,\big\vert\,f(j)=k\,\bigr\}\,,
\enddisplay
that is, the number of values of $j\in J$ such that $f(j)$ equals~$k$.
It's easy to prove \eq(|gen-repl|) by interchanging the order
of summation,
\begindisplay
\sum_{j\in J}a_{f(j)}
 =\sum\twoconditions{j\in J}{k\in K}a_k\,\bigl[f(j)=k\bigr]
 =\sum_{k\in K}a_k\sum_{j\in J}\bigl[f(j)=k\bigr]\,,
\enddisplay
since $\sum_{j\in J}\bigl[f(j)=k\bigr]=\#f^-(k)$. In the special case that
$f$ is a one-to-one correspondence
\g My other math teacher calls this a ``"bijection"'';
maybe I'll learn to love that word some day.
\medskip
And then again\thinspace\dots\g
between $J$ and~$K$, we have $\#f^-(k)=1$ for all~$k$, and the
general formula \eq(|gen-repl|) reduces to
\begindisplay
\sum_{j\in J}a_{f(j)}=\sum_{f(j)\in K}a_{f(j)}=\sum_{k\in K}a_k\,.
\enddisplay
This is the commutative law \eq(|comm-law|) we had before, slightly disguised.
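A small non-bijective example, chosen here just for illustration, may help
to fix the idea. Let $J=\{1,2,3,4\}$ and $K=\{1,2\}$, with $f(1)=f(2)=1$
and $f(3)=f(4)=2$, so that $\#f^-(1)=\#f^-(2)=2$. Then
\begindisplay
\sum_{j\in J}a_{f(j)} \;=\; a_1+a_1+a_2+a_2 \;=\; 2a_1+2a_2
 \;=\; \sum_{k\in K}a_k\,\#f^-(k)\,,
\enddisplay
just as \eq(|gen-repl|) says.
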
Our examples of multiple sums so far have all involved general terms
like $a_k$ or~$b_k$. But this book is supposed to be concrete,
so let's take a look at a multiple sum that involves actual numbers:
\begindisplay
S_n=\sum_{1\le j<k\le n}{1\over k-j}\,.
\enddisplay
\g \vskip-47pt Watch out---\par
the authors\par
seem to think that
$j$, $k$, and $n$ are ``actual numbers.\qback''\g
For example, $S_1=0$; $S_2=1$;
$S_3={1\over2-1}+{1\over3-1}+{1\over3-2}={5\over2}$.
The normal way to evaluate a double sum is to sum first on $j$ or first on~$k$,
so let's explore both options.
\begindisplay
S_n&=\hbox to44mm{$\displaystyle
 \sum_{1\le k\le n}\,\,\sum_{1\le j<k}{1\over k-j}$\hfil}%
 &\xbox{summing first on $j$}\cr
&=\sum_{1\le k\le n}\,\,\sum_{1\le k-j<k}{1\over j}
 &\xbox{replacing $j$ by $k-j$}\cr
&=\sum_{1\le k\le n}\,\,\sum_{0<j\le k-1}{1\over j}
 &\xbox{simplifying the bounds on $j$}\cr
&=\sum_{1\le k\le n}H_{k-1}&\xbox{by \eq(|h-def|), the definition of $H_{k-1}$}\cr
&=\sum_{1\le k+1\le n}\!H_k&\xbox{replacing $k$ by $k+1$}\cr
&=\sum_{0\le k< n}H_k\,.&\xbox{simplifying the bounds on $k$}\cr
\enddisplay
Alas!
We don't know how to get a sum of harmonic numbers
\g Get out the whip.\g
into closed form.
If we try summing first the other way, we get
\begindisplay
S_n&=\hbox to44mm{$\displaystyle
 \sum_{1\le j\le n}\,\,\sum_{j<k\le n}{1\over k-j}$\hfil}%
 &\xbox{summing first on $k$}\cr
&=\sum_{1\le j\le n}\,\sum_{j<k+j\le n}{1\over k}
 &\xbox{replacing $k$ by $k+j$}\cr
&=\sum_{1\le j\le n}\,\sum_{0<k\le n-j}{1\over k}
 &\xbox{simplifying the bounds on $k$}\cr
&=\sum_{1\le j\le n}H_{n-j}&\xbox{by \eq(|h-def|), the definition of $H_{n-j}$}\cr
&=\sum_{1\le n-j\le n}\!H_j\,&\xbox{replacing $j$ by $n-j$}\cr
&=\sum_{0\le j< n}H_j\,.&\xbox{simplifying the bounds on $j$}\cr
\enddisplay
We're back at the same impasse.
But there's {\it another\/} way to proceed, if we replace $k$ by $k+j$
{\it before\/} deciding to reduce $S_n$ to a sum of sums:
\begindisplay
S_n&=\hbox to44mm{$\displaystyle\sum_{1\le j<k\le n}{1\over k-j}$\hfil}%
 &\xbox{recopying the given sum}\cr
&=\sum_{1\le j<k+j\le n}{1\over k}
 &\xbox{replacing $k$ by $k+j$}\cr
&=\sum_{1\le k\le n}\,\,\sum_{1\le j\le n-k}{1\over k}
 &\xbox{summing first on $j$}\cr
&=\sum_{1\le k\le n}{n-k\over k}&\xbox{the sum on $j$ is trivial}\cr
&=\sum_{1\le k\le n}{n\over k}-\sum_{1\le k\le n}1&\xbox{by the associative law}\cr
&=n\biggl(\sum_{1\le k\le n}{1\over k}\biggr)-n&\xbox{by gosh}\cr
\noalign{\smallskip}
&=nH_n-n\,.&\xbox{by \eq(|h-def|), the definition of $H_n$}\cr
\enddisplay
Aha!
\g \vskip-2in It's smart to say $k\le n$ instead of $k\le n-1$ here.
Simple bounds save energy.\g
We've found $S_n$. Combining this with the false starts we made gives
"!harmonic numbers, sum of"
us a further identity as a bonus:
\begindisplay
\sum_{0\le k<n}H_k=nH_n-n\,.\eqno\eqref|harm-sum|
\enddisplay
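As a quick numerical check of \eq(|harm-sum|), and of the value
$S_3={5\over2}$ that we computed by hand, take $n=3$:
\begindisplay
H_0+H_1+H_2 \;=\; 0+1+{3\over2} \;=\; {5\over2}
\qquad\hbox{and}\qquad
3H_3-3 \;=\; {11\over2}-3 \;=\; {5\over2}\,.
\enddisplay
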
\looseness-1
We can understand the trick that worked here in two ways, one algebraic and one
geometric. (1)~Algebraically, if we have a double sum whose terms involve
$k+f(j)$, where $f$ is an arbitrary function, this example indicates that it's
a good idea to try replacing $k$ by $k-f(j)$ and summing on~$j$.
(2)~Geometrically, we can look at this particular sum $S_n$
as follows, in the case $n=4$:
\begindisplay \openup1.5pt \def\preamble{&$\hfil##\hfil$}%
\advance\abovedisplayskip-1.5pt \advance\belowdisplayskip-1.5pt
&\quad&k=1&\phantom+&k=2&&k=3&&k=4\cr
j=1&&&&{1\over1}&+&\half &+&{1\over3}\cr
j=2&&&&&&{1\over1}&+&\half \cr
j=3&&&&&&&&{1\over1}\cr
j=4\cr
\enddisplay
Our first attempts, summing first on $j$ (by columns) or on $k$ (by rows),
gave us $H_1+H_2+H_3=H_3+H_2+H_1$. The winning idea was essentially to sum by
diagonals, getting ${3\over1}+{2\over2}+{1\over3}$.

\beginsection 2.5 General Methods

Now let's consolidate what we've learned, by looking at a single example
from several different angles. On the next few pages we're going to
try to find a closed form for the sum of the first $n$ squares, which
"!sum of consecutive squares" "!squares, sum of consecutive"
we'll call~$\Sq_n$:
\begindisplay
\Sq_n = \sum_{0 \le k \le n} k^2 \,, \qquad\hbox{for $n \geq 0$.}\eqno
\enddisplay
We'll see that there are at least seven different ways to solve this problem,
and in the process we'll learn useful strategies for attacking sums
in general.
\goodbreak
First, as usual, we look at some small cases.
\begindisplay \let\preamble=\tablepreamble
n&&0&1&2&3&4&5&6&7&8&9&10&11&12\cr
\noalign{\hrule}
n^2&&0&1&4&9&16&25&36&49&64&81&100&121&144\cr
\noalign{\hrule}
\Sq_n&&0&1&5&14&30&55&91&140&204&285&385&506&650\cr
\enddisplay
No closed form for~$\Sq_n$ is immediately evident; but
when we do find one, we can use these values as a check.

\subhead Method 0: You could look it up.
"!Stengel, Charles Dillon (Casey)"
A problem like the sum of the first $n$ squares has probably been solved
before, so we can most likely find the solution in a handy "reference book".
Sure enough,
page~36 of the {\sl CRC Standard Mathematical Tables\/}~[|crc-tables|]
has the answer:
\begindisplay
\Sq_n = {n(n+1)(2n+1)\over6} \,, \qquad\hbox{for $n \geq 0$}.\eqno
\enddisplay
Just to make sure we haven't misread it,
we check that this formula correctly
gives $\Sq_5=5 \cdt 6 \cdt 11 / 6 = 55$.
Incidentally, page 36 of the
{\sl CRC Tables\/} has further information
about the sums of cubes, \dots, tenth powers.

The definitive reference for mathematical formulas is the
{\sl Handbook of Mathematical Functions}, edited by "Abramowitz" and
\g(Harder sums can be found in~"Hansen"'s\par
comprehensive table~[|hansen|].)\g
"Stegun"~[|abramowitz-stegun|]. Pages 813--814 of that book list the values
of $\Sq_n$ for $n\le100$; and pages 804 and 809 exhibit formulas equivalent
to \thiseq, together with the analogous formulas for sums of cubes, \dots,
fifteenth powers, with or without alternating signs.

But the best source for answers to questions about sequences is an amazing
little book called the {\sl Handbook of Integer Sequences}, by "Sloane"~[|sloane|],
which lists thousands of sequences by their numerical values. If you
come up with a "recurrence" that you suspect has already been studied, all
you have to do is compute enough terms to distinguish your recurrence from
other famous ones; then chances are you'll find a pointer to the relevant
literature in Sloane's
{\sl Handbook}. For example, %the sequence
$1$, $5$, $14$, $30$, \dots\ turns out to be Sloane's sequence
number~1574, and it's called the sequence of ``"square pyramidal
numbers"'' (because there are $\Sq_n$ balls in a pyramid that
has a square base of $n^2$ balls). Sloane gives three references, one of
which is to the handbook of Abramowitz and Stegun that we've already mentioned.
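In the pyramid picture, for example, a pyramid whose base is a $4\times4$
square of balls has successive layers of $16$, $9$, $4$, and~$1$ balls;
and indeed
\begindisplay
16+9+4+1 \;=\; 30 \;=\; \Sq_4\,,
\enddisplay
in agreement with our little table.
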
Still another way to probe the world's store of accumulated mathematical
wisdom is to use a computer program (such as Axiom, MACSYMA, Maple, or
Mathematica)
that provides tools for symbolic manipulation. Such programs are
indispensable, especially for people who need to deal with large formulas.

It's good to be familiar with standard sources of information, because they can
be extremely helpful. But Method~0 isn't really consistent with the spirit
of this book, because we want to know how to figure out the
answers by ourselves.
The look-up method is limited to problems that other
\g Or, at least to problems having the same \undertext{answers} as problems
that other people have decided to consider.\g
people have decided are worth considering; a new problem won't be there.

\subhead Method 1: Guess the answer, prove it by "induction".
Perhaps a little bird has told us the answer to a problem, or we have
arrived at a closed form by some other less-than-rigorous means.
Then we merely have to prove that it is correct.
We might, for example, have noticed that the values of $\Sq_n$ have rather
small prime factors, so we may have come up with formula \thiseq\ as
something that works for all small values of~$n$. We might also
have conjectured the equivalent formula
\begindisplay
\Sq_n={n(n+{1\over2\lower1pt\null})(n+1)\over3},\qquad
\hbox{for $n\ge 0$},\eqno\eqref|pyramid|
\enddisplay
which is nicer because it's easier to remember.
The preponderance of the evidence supports~\thiseq,
but we must prove our conjectures beyond all reasonable doubt.
"Mathematical induction" was invented for this purpose.
``Well, Your Honor, we know that $\Sq_0 = 0 = 0(0+\half )(0 + 1)/3$,
so the basis is easy.
For the induction,
suppose that $n>0$, and assume that \thiseq~holds when $n$ is replaced
by~$n-1$. Since
\begindisplay
\Sq_{n} = \Sq_{n-1} + n^2,
\enddisplay
we have
\begindisplay \openup2pt
3\Sq_{n}&= \textstyle(n-1)(n-\half )(n)\;+\;3n^2\cr
 &=\textstyle (n^3-{3\over2}n^2 + \half n)\;+\;3n^2\cr
 &=\textstyle (n^3+{3\over2}n^2 + \half n)\cr
 &=\textstyle n(n+\half )(n+1)\,.\cr
\enddisplay
Therefore \thiseq\ indeed holds,
beyond a reasonable doubt, for all $n \geq 0$.''
Judge Wapner,
"!Wapner, Joseph A."
in his infinite wisdom, agrees.

Induction has its place, and it is
somewhat more defensible than trying to look up the answer.
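
Incidentally, a quick line of algebra confirms that the conjectured form
\eq(|pyramid|) really is equivalent to the formula we looked up in Method~0,
since $n+\half=(2n+1)/2$:
\begindisplay
{n(n+\half)(n+1)\over3} \;=\; {n(2n+1)(n+1)\over6} \;=\; {n(n+1)(2n+1)\over6}\,.
\enddisplay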