chap9.tex
Concrete_Mathematics_2nd_Ed_TeX_Source_Code
and so it's natural to try summing all these approximations:
\begindisplay \openup4pt
{1\over n^2+1}&={1\over n^2}&-{1\over n^4}&+{1^2\over n^6}&-{1^3\over n^8} &+O\Bigl({1^4\over n^{10}}\Bigr)\cr
{1\over n^2+2}&={1\over n^2}&-{2\over n^4}&+{2^2\over n^6}&-{2^3\over n^8} &+O\Bigl({2^4\over n^{10}}\Bigr)\cr
&\vdots\cr
{1\over n^2+n}&={1\over n^2}&-{n\over n^4}&+{n^2\over n^6}&-{n^3\over n^8} &+O\Bigl({n^4\over n^{10}}\Bigr)\cr
\noalign{\kern8pt\hrule\kern7pt}
S_n&={n\over n^2}-{n(n+1)\over2n^4}+\cdots\,.\hidewidth
\enddisplay
It looks as if we're getting $S_n=n^{-1}-{1\over2}n^{-2}+O(n^{-3})$,
based on the sums of the first two columns; but the calculations are
getting hairy. If we persevere in this approach, we will ultimately
reach the goal; but we won't bother to sum the other columns, for
two reasons: First, the last column is going to give us terms that
are $O(n^{-6})$, when $n/2\le k\le n$, so we will have an error
of~$O(n^{-5})$; that's too big, and we will have to include yet another
column in the expansion. Could the exam-giver have been so
\g Do pajamas have buttons?\g
sadistic? We suspect that there must be a better way.
Second, there is indeed a much better way, staring us right in the face.
Namely, we know a closed form for $S_n$: It's just $H_{n^2+n}-H_{n^2}$.
And we know a good approximation for harmonic numbers, so we just
apply it twice:
\begindisplay
H_{n^2+n}&=\ln(n^2+n)+\gamma+{1\over2(n^2+n)}-{1\over12(n^2+n)^2} +O\Bigl({1\over n^8}\Bigr)\,;\cr
H_{n^2}&=\ln n^2+\gamma+{1\over2n^2}-{1\over12n^4} +O\Bigl({1\over n^8}\Bigr)\,.\cr
\enddisplay
Now we can pull out large terms and simplify, as we did when looking at
Stirling's approximation.
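As a quick numerical sanity check of the harmonic-number expansion used twice here (a sketch, not part of the book's source; `harmonic` and `harmonic_approx` are hypothetical helper names), one can compare the exact $H_m$ with the truncated asymptotic series:

```python
import math

GAMMA = 0.57721566490153286  # Euler's constant

def harmonic(m):
    """Exact H_m = 1 + 1/2 + ... + 1/m by direct summation."""
    return sum(1.0 / j for j in range(1, m + 1))

def harmonic_approx(m):
    """Truncated expansion H_m = ln m + gamma + 1/(2m) - 1/(12m^2) + O(m^-4)."""
    return math.log(m) + GAMMA + 1 / (2 * m) - 1 / (12 * m**2)

# With n = 10, both H_{n^2} and H_{n^2+n} match the expansion to
# roughly the O(m^-4) error, i.e. about eight decimal places.
n = 10
for m in (n * n, n * n + n):
    print(m, harmonic(m), harmonic_approx(m))
```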
We have
\begindisplay \openup3pt
\ln(n^2+n)&=\ln n^2+\ln\Bigl(1+{1\over n}\Bigr)=\ln n^2+{1\over n} -{1\over2n^2}+{1\over3n^3}-\cdots\,;\cr
{1\over n^2+n}&={1\over n^2}-{1\over n^3}+{1\over n^4}-\cdots\,;\cr
{1\over(n^2+n)^2}&={1\over n^4}-{2\over n^5}+{3\over n^6}-\cdots\,.\cr
\enddisplay
So there's lots of helpful cancellation, and we find
\begindisplay \let\displaystyle=\textstyle \openup3pt
S_n&=n^{-1}&-\half n^{-2}&+{1\over3}n^{-3}&-{1\over4}n^{-4}&+{1\over5}n^{-5} &-{1\over6}n^{-6}\cr
&&&-\half n^{-3}&+\half n^{-4}&-\half n^{-5}&+\half n^{-6}\cr
&&&&&+{1\over6}n^{-5}&-{1\over4} n^{-6}\cr
\enddisplay
plus terms that are $O(n^{-7})$. A bit of arithmetic and we're home free:
\begindisplay \advance\medmuskip-.5mu
\textstyle S_n=n^{-1}-{1\over2}n^{-2}-{1\over6}n^{-3}+{1\over4}n^{-4}-{2\over15}n^{-5} +{1\over12}n^{-6}+O(n^{-7})\,.\eqno
\enddisplay
It would be nice if we could check this answer numerically, as we did when
we derived exact results in earlier chapters. Asymptotic formulas are
harder to verify; an arbitrarily large constant may be hiding in
a $O$~term, so any numerical test is inconclusive.
But in practice, we
have no reason to believe that an adversary is trying to trap us, so
we can assume that the unknown $O$-constants are reasonably small.
With a pocket calculator we find that $S_4={1\over17}+{1\over18}+{1\over19}+{1\over20}=0.2170107$; and our asymptotic estimate when $n=4$ comes to
\begindisplay
\textstyle\def\\{{1\over4}}
\\\bigl(1+\\\bigl(-\half+\\(-{1\over6}+\\(\\+\\(-{2\over15}+\\\cdot{1\over12})))\bigr)\bigr)=0.2170125\,.
\enddisplay
If we had made an error of, say, ${1\over12}$ in the term for $n^{-6}$, a difference of ${1\over12}{1\over4096}$
would have shown up in the fifth decimal place; so our asymptotic answer is probably
correct.

\subhead Problem 5: An infinite sum.
We turn now to an asymptotic question posed by Solomon "Golomb"~[|golomb-sum|]:
What is the approximate value of
\begindisplay
S_n=\sum_{k\ge1}{1\over k N_n(k)^2}\,,\eqno
\enddisplay
where $N_n(k)$ is the number of digits required to write $k$ in
"radix~$n$ notation"?
First let's try again for a ballpark estimate. The number of digits, $N_n(k)$,
is approximately $\log_n k=\log k/\!@\log n$; so the terms of this sum are roughly
$(\log n)^2\!/k(\log k)^2$. Summing on~$k$ gives $\approx(\log n)^2\sum_{k\ge2}1/k(\log k)^2$, and this sum converges to a constant value because it can
be compared to the integral
\begindisplay
\int_2^\infty{dx\over x(\ln x)^2}=-\thinspace{1\over\ln x}\bigg\vert_2^\infty={1\over\ln2}\,.
\enddisplay
Therefore we expect $S_n$ to be about $C(\log n)^2$, for some constant~$C$.
Hand-wavy analyses like this are useful for orientation, but we need
better estimates to solve the problem. One idea is to express $N_n(k)$
exactly:
\begindisplay
N_n(k)=\lfloor\log_n k\rfloor+1\,.\eqno
\enddisplay
Thus, for example, $k$ has three radix~$n$ digits when $n^2\le k<n^3$, and
this happens precisely when $\lfloor\log_n k\rfloor=2$.
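The digit-count formula is easy to confirm by machine (an illustrative check, not part of the book's source; `digits` and `floor_log` are hypothetical helpers), using integer arithmetic throughout to avoid floating-point trouble at exact powers of $n$:

```python
def digits(k, n):
    """N_n(k): number of radix-n digits of k, by repeated division."""
    d = 0
    while k > 0:
        k //= n
        d += 1
    return d

def floor_log(k, n):
    """floor(log_n k), computed with integer arithmetic only."""
    e, p = 0, 1
    while p * n <= k:
        p *= n
        e += 1
    return e

n = 8
for k in range(1, 5000):
    assert digits(k, n) == floor_log(k, n) + 1
# and every k with n^2 <= k < n^3 has exactly three radix-n digits:
assert all(digits(k, n) == 3 for k in range(n**2, n**3))
print("N_n(k) = floor(log_n k) + 1 verified for 1 <= k < 5000, n = 8")
```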
It follows that
$N_n(k)>\log_n k$, hence $S_n=\sum_{k\ge1}1/kN_n(k)^2<1+(\log n)^2\sum_{k\ge2}1/k(\log k)^2$.
Proceeding as in Problem 1, we can try to write $N_n(k)=\log_n k+O(1)$ and
substitute this into the formula for~$S_n$. The term represented here by~$O(1)$
is always between $0$ and~$1$, and it is about $\half$ on the average,
so it seems rather well-behaved. But still, this isn't a good enough
approximation to tell us about~$S_n$; it gives us zero significant figures
(that is, high relative error)
when $k$ is small, and these are the terms that contribute the most to
the sum. We need a different idea.
The key (as in Problem 4) is to use our manipulative skills to put the
sum into a more tractable form, before we resort to asymptotic estimates.
We can introduce a new variable of summation, $m=N_n(k)$:
\begindisplay
S_n&=\sum_{k,m\ge1}{\bigl[m=N_n(k)\bigr]\over km^2}\cr
&=\sum_{k,m\ge1}{\bigl[n^{m-1}\le k<n^m\bigr]\over km^2}\cr
&=\sum_{m\ge1}{1\over m^2}\bigl(H_{n^m-1}-H_{n^{m-1}-1}\bigr)\,.
\enddisplay
This may look worse than the sum we began with, but it's actually a
step forward, because we have very good approximations for the
harmonic numbers.
Still, we hold back and try to simplify some more.
No need to rush into asymptotics. Summation by parts
allows us to group the terms for each value of $H_{n^m-1}$ that we
need to approximate:
\begindisplay
S_n=\sum_{k\ge1}H_{n^k-1}\Bigl({1\over k^2}-{1\over(k+1)^2}\Bigr)\,.
\enddisplay
For example, $H_{n^2-1}$ is multiplied by $1/2^2$ and then by $-1/3^2$.
(We have used the fact that $H_{n^0-1}=H_0=0$.)
Now we're ready to expand the harmonic numbers.
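The regrouping by $m=N_n(k)$ is exact, not asymptotic, so it can be checked numerically on a truncated range (a sketch, not part of the book's source; `harmonic` and `N` are hypothetical helper names). Summing over $k<n^K$ covers exactly the terms with $N_n(k)=1,\ldots,K$, so the two forms must agree to floating-point accuracy:

```python
def harmonic(m):
    """Exact harmonic number H_m by direct summation."""
    return sum(1.0 / j for j in range(1, m + 1))

def N(k, n):
    """Number of radix-n digits of k."""
    d = 0
    while k > 0:
        k //= n
        d += 1
    return d

n, K = 5, 4
# Original form of Golomb's sum, truncated at k < n^K.
direct = sum(1.0 / (k * N(k, n) ** 2) for k in range(1, n ** K))
# Same terms grouped by m = N_n(k); note harmonic(n^0 - 1) = H_0 = 0.
grouped = sum((harmonic(n ** m - 1) - harmonic(n ** (m - 1) - 1)) / m ** 2
              for m in range(1, K + 1))
print(direct, grouped)
```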
Our experience with
estimating $(n-1)!$ has taught us that it will be easier to estimate
$H_{n^k}$ than $H_{n^k-1}$, since the $(n^k-1)$'s will be messy;
therefore we write
\begindisplay \openup3pt
H_{n^k-1}=H_{n^k}-{1\over n^k}&=\ln n^k+\gamma+{1\over2n^k}+O\Bigl({1\over n^{2k}}\Bigr)-{1\over n^k}\cr
\noalign{\vskip2pt}
&=k\ln n+\gamma-{1\over2n^k}+O\Bigl({1\over n^{2k}}\Bigr)\,.\cr
\enddisplay
Our sum now reduces to
\begindisplay \openup3pt
S_n&=\sum_{k\ge1}\,\Bigl(k\ln n+\gamma-{1\over2n^k} +O\Bigl({1\over n^{2k}}\Bigr)\Bigr) \Bigl({1\over k^2}-{1\over(k+1)^2}\Bigr)\cr
&=(\ln n)\Sigma_1+\gamma\Sigma_2-\textstyle\half\Sigma_3(n)+ O\bigl(\Sigma_3(n^2)\bigr)\,.\eqno\eqref|golomb-pieces|
\enddisplay
There are four easy pieces left: $\Sigma_1$, $\Sigma_2$, $\Sigma_3(n)$,
and $\Sigma_3(n^2)$.
Let's do the $\Sigma_3$'s first, since $\Sigma_3(n^2)$ is the $O$ term; then we'll
see what sort of error we're getting. (There's no sense carrying out
other calculations with perfect accuracy if they will be absorbed
\g Into a Big Oh.\g
into a $O$ anyway.) This sum is simply a power series,
\begindisplay
\Sigma_3(x)=\sum_{k\ge1}\,\Bigl({1\over k^2}-{1\over(k+1)^2}\Bigr)x^{-k}\,,
\enddisplay
and the series converges when $x\ge1$ so we can truncate it at any
desired point. If we stop $\Sigma_3(n^2)$ at the term
for $k=1$, we get $\Sigma_3(n^2)=O(n^{-2})$; hence \eq(|golomb-pieces|) has an
absolute error of $O(n^{-2})$. (To decrease this absolute error,
we could use a better approximation to $H_{n^k}$; but $O(n^{-2})$ is
good enough for now.)
If we truncate $\Sigma_3(n)$ at the term for $k=2$, we get
\begindisplay
\Sigma_3(n)=\textstyle{3\over4}n^{-1}+O(n^{-2})\,;
\enddisplay
this is all the accuracy we need.
We might as well do $\Sigma_2$ now, since it is so easy:
\begindisplay
\Sigma_2=\sum_{k\ge1}\,\Bigl({1\over k^2}-{1\over(k+1)^2}\Bigr)\,.
\enddisplay
This is the telescoping series $(1-{1\over4})+({1\over4}-{1\over9})+({1\over9}-{1\over16})+\cdots\,=1$.
\vskip1pt
Finally, $\Sigma_1$ gives us the leading term of $S_n$, the
coefficient of $\ln n$ in \eq(|golomb-pieces|):
\begindisplay
\Sigma_1=\sum_{k\ge1}\,k\Bigl({1\over k^2}-{1\over(k+1)^2}\Bigr)\,.
\enddisplay
This is $(1-{1\over4})+({2\over4}-{2\over9})+({3\over9}-{3\over16})+\cdots\,={1\over1}+{1\over4}+{1\over9}+\cdots\,=H_\infty^{(2)}=\pi^2\!/6$.
(If we hadn't applied summation by parts earlier, we would have seen
directly that $S_n\sim \sum_{k\ge1}(\ln n)/k^2$, because
$H_{n^k-1}-H_{n^{k-1}-1}\sim\ln n$; so summation by parts didn't
help us to evaluate the leading term, although it did make some
of our other work easier.)
Now we have evaluated each of the $\Sigma$'s in \eq(|golomb-pieces|), so we can
put everything together and get the answer to Golomb's problem:
\begindisplay
S_n={\pi^2\over6}\ln n+\gamma-{3\over8n}+O\Bigl({1\over n^2}\Bigr)\,.\eqno\eqref|golomb-ans|
\enddisplay
Notice that this grows more slowly than our original hand-wavy estimate of
$C(\log n)^2$. Sometimes a discrete sum fails to obey a continuous intuition.

\subhead Problem 6: Big Phi.
Near the end of Chapter 4, we observed that the number of fractions in the
"Farey series" $\Fscr_n$ is $1+\Phi(n)$, where
\begindisplay
\Phi(n)=\varphi(1)+\varphi(2)+\cdots+\varphi(n)\,;
\enddisplay
and we showed in \equ(4.|bigphi-gen|) that
\begindisplay
\Phi(n)={1\over2}\sum_{k\ge1}\mu(k)\lfloor n/k\rfloor\lfloor1+n/k\rfloor\,.\eqno\eqref|big-phi-repeat|
\enddisplay
Let us now try to estimate $\Phi(n)$ when $n$ is large.
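Before leaving Problem 5, the answer to Golomb's problem can be spot-checked by machine. Exact harmonic numbers of index $n^k$ are out of reach for large $k$, so this sketch (not from the book's source; `h_pow` is a hypothetical helper) reuses the expansion of $H_{n^k-1}$ derived above; it therefore checks the $\Sigma$-bookkeeping that leads to the closed form, rather than the expansion itself:

```python
import math

GAMMA = 0.57721566490153286  # Euler's constant

def h_pow(k, n):
    """H_{n^k - 1} via the expansion k ln n + gamma - 1/(2 n^k) + O(n^{-2k});
    exp(-k ln n) underflows harmlessly to 0.0 for large k."""
    return k * math.log(n) + GAMMA - 0.5 * math.exp(-k * math.log(n))

# Summation-by-parts form of S_n, truncated at M terms (tail ~ 2 ln n / M).
n, M = 1000, 200_000
s_n = sum(h_pow(k, n) * (1 / k**2 - 1 / (k + 1)**2) for k in range(1, M + 1))
closed = math.pi**2 / 6 * math.log(n) + GAMMA - 3 / (8 * n)
print(s_n, closed)  # agree to within the truncation and O(n^-2) errors
```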
(It was sums like
"!phi" "!mu" "!M\"obius function"
this that led "Bachmann" to invent $O$-notation in the first place.)
Thinking {\sc big} tells us that $\Phi(n)$ will probably be
proportional to~$n^2$. For if the final factor were just $\lfloor n/k\rfloor$
instead of $\lfloor1+n/k\rfloor$, we would have $\bigl\vert\Phi(n)\bigr\vert\le\half\sum_{k\ge1}\lfloor n/k\rfloor^2\le\half\sum_{k\ge1}(n/k)^2={\pi^2\over12}n^2$,
because the "M\"obius function" $\mu(k)$ is either $-1$, $0$, or~$+1$. The
additional `$1+{}$' in that final factor adds $\half\sum_{k\ge1}\mu(k)\lfloor n/k\rfloor$; but this is zero for $k>n$, so it cannot be more than $nH_n=O(n\log n)$ in absolute value.
This preliminary analysis indicates that we'll find it advantageous to write
\begindisplay \openup3pt
\Phi(n)=\half\sum_{k=1}^n\mu(k)\biggl(\Bigl({n\over k}\Bigr)+O(1)\biggr)^{\!2}&=\half\sum_{k=1}^n\mu(k)\biggl(\Bigl({n\over k}\Bigr)^{\!2}+ O\Bigl({n\over k}\Bigr)\biggr)\cr
&=\half\sum_{k=1}^n\mu(k)\Bigl({n\over k}\Bigr)^{\!2}  +\sum_{k=1}^n O\Bigl({n\over k}\Bigr)\cr
&=\half\sum_{k=1}^n\mu(k)\Bigl({n\over k}\Bigr)^{\!2}\,+\, O(n\log n)\,.\cr
\enddisplay
This removes the floors; the remaining problem is to evaluate the unfloored sum
$\half\sum_{k=1}^n\mu(k)n^2\!/k^2$ with an accuracy of $O(n\log n)$; in other words, we want
to evaluate $\sum_{k=1}^n \mu(k)/k^2$ with an accuracy of $O(n^{-1}\log n)$.
But that's easy; we can simply run the sum all the way up to $k=\infty$, because
the newly added terms are
\begindisplay
\sum_{k>n}{\mu(k)\over k^2}=O\Bigl(\sum_{k>n}{1\over k^2}\Bigr) &=O\Bigl(\sum_{k>n}{1\over k(k-1)}\Bigr)\cr
&=O\biggl(\sum_{k>n}\Bigl({1\over k-1}-{1\over k}\Bigr)\biggr) =O\Bigl({1\over n}\Bigr)\,.\cr
\enddisplay
We proved in \equ(7.|inverse-zeta|) that $\sum_{k\ge1}\mu(k)/k^z=1/\zeta(z)$.
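The value $\sum_{k\ge1}\mu(k)/k^2=6/\pi^2$ is easy to confirm numerically (an illustrative sketch, not from the book's source; `moebius_sieve` is a hypothetical helper), with the tail bounded by $O(1/\hbox{limit})$ exactly as argued above:

```python
import math

def moebius_sieve(limit):
    """mu(0..limit) by a sieve: flip sign at each prime factor,
    zero out multiples of any square of a prime."""
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(p, limit + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] = -mu[m]
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0
    return mu

LIMIT = 10**5
mu = moebius_sieve(LIMIT)
s = sum(mu[k] / k**2 for k in range(1, LIMIT + 1))
print(s, 6 / math.pi**2)  # partial sum vs. 1/zeta(2)
```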
Hence $\sum_{k\ge1}\mu(k)/k^2=1\big/\bigl(\sum_{k\ge1}1/k^2\bigr)=6/\pi^2$, and we have our answer:
\g\vskip20pt
(The error term was shown to be at most
$O(n(\log n)^{2/3}\*\null\quad(\log\log n)^{1+\epsilon})$ by "Saltykov" in
1960~[|salty-phi|]. On the other hand, it is not as small as
$o(n(\log\mskip-1mu\log\mskip-1mu n)^{1/2})$, according to "Montgomery"~[|monty-phi|].)\g
\begindisplay
\Phi(n)={3\over \pi^2}n^2+O(n\log n)\,.\eqno\eqref|o-phi|
\enddisplay

\beginsection 9.4 Two Asymptotic Tricks

Now that we have some facility with $O$ manipulations, let's look at what
we've done from a slightly higher perspective. Then we'll have some
important weapons in our asymptotic arsenal, when we need to do battle with
tougher problems.

\subhead Trick 1: Bootstrapping.
When we estimated the $n$th prime $P_n$ in Problem~3 of Section~9.3,
we solved an asymptotic recurrence of the form
\begindisplay
P_n=n\ln P_n\bigl(1+O(1/\!@\log n)\bigr)\,.
\enddisplay
We proved that $P_n=n\ln n+O(n)$ by first using the recurrence to show the
weaker result $P_n=O(n^2)$. This is a special case of a general method called
{\it"bootstrapping"}, in which we solve a recurrence asymptotically by
starting with a rough estimate and plugging it into the recurrence; in this
way we can often derive better and better estimates, ``pulling ourselves
up by our bootstraps.''
Here's another problem that illustrates bootstrapping nicely: What is
the asymptotic value of the coefficient $g_n=[z^n]\,G(z)$ in the
generating function
\begindisplay
G(z)=\exp\Bigl(@\sum_{k\ge1}{z^k\over k^2}\Bigr)\,,\eqno
\enddisplay
as $n\to\infty$? If we differentiate this equation with respect to~$z$, we find
\begindisplay
G'(z)=\sum_{n=0}^\infty ng_n@z^{n-1}=\Bigl(@ \sum_{k\ge1}{z^{k-1}
