\lefthead{Sava \& Fomel}
\righthead{Spectral factorization}
\footer{SEP--100}
\title{Spectral factorization revisited}
\email{paul@sep.stanford.edu, sergey@sep.stanford.edu}
\input{macros}
\author{Paul Sava and Sergey Fomel}
\maketitle
\begin{abstract}
In this paper, we review some of the iterative methods for the square root, showing that all these methods belong to the same family, for which we find a general formula. We then explain how those iterative methods for real numbers can be extended to the spectral factorization of auto-correlations. The iteration based on the Newton-Raphson method is optimal from the convergence standpoint, though it is not optimal as far as stability is concerned. Finally, we show that other members of the iteration family are more stable, though slightly more expensive and slower to converge.
\end{abstract}
\inputdir{Matlab}

\section{Introduction}
Spectral factorization has recently been revived by the advent of the helical coordinate system. Several methods are reported in the literature, ranging from Fourier-domain methods, such as Kolmogoroff's \cite[]{Claerbout.blackwell.92,kolmog}, to iterative methods, such as the Wilson-Burg method \cite[]{gee,mywilson,Sava.sep.97.paul1}.
\par
In this paper, after reviewing the general theory of root estimation by iterative methods, we derive a general square-root relationship applicable both to real numbers and to auto-correlation functions. We introduce a new spectral factorization relation and show how it relates to the Wilson-Burg method.

\section{The square root of real numbers}
This section briefly reviews some well-known square-root iterative algorithms and derives the Newton-Raphson and Secant methods.
It also shows that Muir's iteration for the square root \cite[]{Claerbout.bei.95} belongs to the same family of iterative methods, if we make an appropriate choice of the generating function.

\subsection{Root-finding recursions}
Given a function $f(x)$ and an approximation $x_n$ for one of its roots, we can find a better approximation for the root by linearizing the function around $x_n$
\[ f(x) \approx f(x_n)+(x_{n+1}-x_n) f'(x_n) \]
and by setting $f(x)$ to zero for $x=x_{n+1}$. We find that
\beq \label{eqn:newton-raphson}
\fbox{$ \displaystyle x_{n+1}=x_{n}-\frac{f(x_n)}{f'(x_n)} $} \eeq
\begin{enumerate}
%
\item {\bf Newton-Raphson's method for the square root} \par
A common choice of the function $f$ is $f(x)=x^2-s$. This function has the advantage that it is easily differentiable, with $f'(x)=2x$. The recursion relation thus becomes
\[ x_{n+1}=x_{n}-\frac{x_n^2-s}{2x_n}=\frac{x_n}{2}+\frac{s}{2x_n} \]
or
\[ x_{n+1}=\frac{1}{2}\left(x_n+\frac{s}{x_n}\right) \]
or, after rearrangement,
\beq \label{eqn:newton}
\fbox{$ \displaystyle x_{n+1}=\frac{s+x_n^2}{2 x_n} $} \eeq
\par
The recursion (\ref{eqn:newton}) converges to $\pm\sqrt{s}$, depending on the sign of the starting guess $x_0\ne 0$.
%
\item {\bf Secant method for the square root} \par
A variation of the Newton-Raphson method is to use a finite-difference approximation of the derivative instead of the differential form.
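Recursion (\ref{eqn:newton}), including its sign behavior, is easy to verify numerically before we derive the Secant variant. A minimal sketch (in Python rather than the paper's Matlab; the function name is ours):

```python
# Newton-Raphson recursion for the square root: x_{n+1} = (s + x_n^2) / (2 x_n)
def newton_sqrt(s, x0, niter=8):
    x = x0
    for _ in range(niter):
        x = (s + x * x) / (2.0 * x)
    return x

# Converges to +sqrt(s) or -sqrt(s), depending on the sign of x0
pos = newton_sqrt(2.0, 1.0)
neg = newton_sqrt(2.0, -1.0)
```

Because of the quadratic convergence shown later, eight iterations already reach machine precision for moderate $s$.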
In this case, the approximate value of the derivative at step $n$ is
\[ f'(x_n)=\frac{f(x_n)-f(x_{n-1})}{x_{n}-x_{n-1}} \]
\par
For the same choice of the function $f$, $f(x)=x^2-s$, we obtain
\[ x_{n+1}=x_{n}-\frac{x_n^2-s}{x_n+x_{n-1}} \]
and
\beq \label{eqn:secant}
\fbox{$ \displaystyle x_{n+1}=\frac{s+x_{n}x_{n-1}}{x_n+x_{n-1}} $} \eeq
\par
In this case, recursion (\ref{eqn:secant}) also converges to $\pm\sqrt{s}$, depending on the sign of the starting guesses $x_0$ and $x_1$.
%
\item {\bf Muir's method for the square root} \par
Another possible iterative relation for the square root is Francis Muir's, described by \cite{Claerbout.bei.95}:
\beq \label{eqn:muir}
\fbox{$ \displaystyle x_{n+1}=\frac{s+x_{n}}{x_n+1} $} \eeq
\par
This relation belongs to the same family of iterative schemes as Newton and Secant, if we make the following special choice of the function $f(x)$ in (\ref{eqn:newton-raphson}):
\beq \label{eqn:muf}
f(x)= |x+\sqrt{s}|^{\frac{\sqrt{s}-1}{2\sqrt{s}}} \, |x-\sqrt{s}|^{\frac{\sqrt{s}+1}{2\sqrt{s}}}
\eeq
\par
Figure~\ref{fig:muf} is a graphical representation of the function $f(x)$.
%%
\sideplot{muf}{width=3.0in}{The graph of the function defined in Equation~(\ref{eqn:muf}), used to generate Muir's iteration for the square root (solid line). The dashed lines are the plots of the two factors in Equation~(\ref{eqn:muf}).}
%%
\item {\bf A general formula for the square root} \par
From the analysis of equations (\ref{eqn:newton}), (\ref{eqn:secant}), and (\ref{eqn:muir}), we can derive the following general form for the square root iteration:
\beq \label{eqn:general}
\fbox{$ \displaystyle x_{n+1}=\frac{s+x_{n}\gamma}{x_n+\gamma} $} \eeq
where $\gamma$ can be either a fixed parameter or the value of the iteration at the preceding step, as shown in Table~\ref{recursion}.
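All three recursions are instances of the general form (\ref{eqn:general}); they differ only in the rule used to pick $\gamma$ at each step. A small sketch (in Python, for illustration; the helper name is ours):

```python
import math

# General square-root recursion x_{n+1} = (s + gamma * x_n) / (x_n + gamma),
# where gamma is chosen per step: 1 (Muir), x_{n-1} (Secant), or x_n (Newton).
def sqrt_iterate(s, x0, gamma_rule, niter):
    x_prev, x = x0, x0
    for _ in range(niter):
        g = gamma_rule(x, x_prev)
        x_prev, x = x, (s + g * x) / (x + g)
    return x

s = 2.0
muir   = sqrt_iterate(s, 1.0, lambda x, xp: 1.0, 60)  # linear convergence
secant = sqrt_iterate(s, 1.0, lambda x, xp: xp, 12)   # quasi-quadratic
newton = sqrt_iterate(s, 1.0, lambda x, xp: x, 8)     # quadratic
```

The iteration counts reflect the convergence rates discussed below: Muir's fixed $\gamma=1$ needs many more steps than Newton's $\gamma=x_n$ to reach the same accuracy.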
%%
\begin{table} \centering
\caption{Recursions for the square root}
\label{recursion}
\begin{tabular}{|c|c|c|}\hline
\R & $\gamma$ & Recursion \\\hline
\R Muir & $1$ & $x_{n+1}=\frac{s+x_n}{x_n+1}$ \\\hline
\R Secant & $x_{n-1}$ & $x_{n+1}=\frac{s+x_n x_{n-1}}{x_n+x_{n-1}}$ \\\hline
\R Newton & $x_n$ & $x_{n+1}=\frac{s+x_n^2}{2 x_n}$ \\\hline
\R Ideal & $\sqrt{s}$ & $x_{n+1}=\frac{s+x_n\sqrt{s}}{x_n+\sqrt{s}}$ \\\hline
\end{tabular}
\end{table}
%%
The parameter $\gamma$ is the estimate of the square root at the given step (Newton), the estimate of the square root at the preceding step (Secant), or a constant value (Muir). Ideally, this value should be as close as possible to $\sqrt{s}$.
\end{enumerate}

\subsection{The convergence rate}
We can now analyze which of the particular choices of $\gamma$ is most appropriate as far as the convergence rate is concerned.
\par
If we consider the general form of the square root iteration
\[ x_{n+1}=\frac{s+x_{n}\gamma}{x_n+\gamma} \]
we can estimate the convergence rate from the difference between the estimate at step $(n+1)$ and the analytical value $\sqrt{s}$.
For the general case, we obtain
\[ x_{n+1}-\sqrt{s} = \frac{s+\gamma x_n -x_n \sqrt{s}-\gamma \sqrt{s}}{x_n+\gamma} \]
or
\beq \label{eqn:convergence}
\fbox{$ \displaystyle x_{n+1}-\sqrt{s} = \frac{(x_n-\sqrt{s})(\gamma-\sqrt{s})}{x_n+\gamma} $} \eeq
%%
\sideplot{sqroot}{width=3.0in}{Convergence plots for the different recursive algorithms shown in Table~\ref{recursion}.}
%%
The possible selections of $\gamma$ from Table~\ref{recursion} clearly show that the recursions described in the preceding subsection generally have a linear convergence rate (that is, the error at step $n+1$ is proportional to the error at step $n$), but can converge quadratically for an appropriate selection of the parameter $\gamma$, as shown in Table~\ref{convergence}. Furthermore, the convergence is faster when $\gamma$ is closer to $\sqrt{s}$.
\begin{table} \centering
\caption{Convergence rate}
\label{convergence}
\begin{tabular}{|c|c|c|}\hline
\R & $\gamma$ & Convergence \\\hline
\R Muir & $1$ & linear \\\hline
\R Secant & $x_{n-1}$ & quasi-quadratic \\\hline
\R Newton & $x_n$ & quadratic \\\hline
\end{tabular}
\end{table}
\par
We therefore conclude that Newton's iteration has the potential to achieve the fastest convergence rate. Ideally, however, we could use a fixed $\gamma$ that is a good approximation to the square root.
The convergence would then be slightly faster than for the Newton-Raphson method, as shown in Figure~\ref{fig:sqroot}.

\section{Spectral factorization}
We can now extend the equations derived for real numbers to polynomials of $Z$, with $Z=e^{i\omega t}$, and obtain spectral factorization algorithms similar to the Wilson-Burg method \cite[]{Sava.sep.97.paul1}, as follows:
\beq \label{eqn:genspecfac}
\fbox{$ \displaystyle X_{n+1}=\frac{S+X_{n} \bar{G}}{\bar{X}_n + \bar{G}} $} \eeq
\par
If $L$ represents the limit of the iteration (\ref{eqn:genspecfac}), then
\[ L \bar{L} + L \bar{G} = S + L \bar{G} \]
and so
\[ L \bar{L} = S \]
\par
Therefore, $L$ represents the causal or anticausal part of the given spectrum $S=X\bar{X}$.
\par
Table~\ref{specfac} summarizes the spectral factorization relationships equivalent to those established for real numbers in Table~\ref{recursion}.
%%
\begin{table} \centering
\caption{Spectral factorization}
\label{specfac}
\begin{tabular}{|c|c|}\hline
\R General & $X_{n+1} =\frac{S+X_n \bar G}{\bar X_n+\bar G}$ \\\hline
\R Muir & $X_{n+1} =\frac{S+X_n}{\bar X_n+1}$ \\\hline
\R Secant & $X_{n+1} =\frac{S+X_n \bar X_{n-1}}{\bar X_n+\bar X_{n-1}}$ \\\hline
\R Newton & $X_{n+1} =\frac{S+X_n \bar X_n}{2\bar X_n}$ \\\hline
\R Ideal & $X_{n+1} =\frac{S+X_n\sqrt{S}}{\bar X_n+\sqrt{S}}$ \\\hline
\end{tabular}
\end{table}
%%
\par
The convergence properties are similar to those derived for real numbers. As shown above, the Newton-Raphson method should have the fastest convergence.
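The Newton-type update from Table~\ref{specfac} can be tried out on a discrete frequency grid: at each step, the causal ratio $X_{n+1}/X_n$ is obtained by keeping the causal half (with the zero lag shared) of $1+S/(X_n\bar X_n)$. The sketch below is ours (Python rather than the paper's Matlab, and the function name is hypothetical); it factors a toy auto-correlation into its minimum-phase part:

```python
import numpy as np

def newton_factor(s, nfft=256, niter=30):
    """Factor an auto-correlation (lags 0..m in `s`) into its causal,
    minimum-phase part, using the Newton-type update
    X_{n+1}/X_n + bar(X_{n+1})/bar(X_n) = 1 + S/(X_n bar(X_n))."""
    lags = np.zeros(nfft)
    lags[0] = s[0]
    for k in range(1, len(s)):          # symmetric positive and negative lags
        lags[k] = s[k]
        lags[-k] = s[k]
    S = np.fft.fft(lags).real           # spectrum: real and non-negative
    X = np.full(nfft, np.sqrt(s[0]), dtype=complex)   # constant first guess
    for _ in range(niter):
        T = 1.0 + S / (X * np.conj(X)).real
        t = np.fft.ifft(T).real
        t[0] *= 0.5                     # share the zero lag (and the Nyquist
        t[nfft // 2] *= 0.5             # lag) between the causal and
        t[nfft // 2 + 1:] = 0.0         # anticausal halves; keep the causal one
        X *= np.fft.fft(t)              # X_{n+1} = X_n [1 + S/(X_n bar X_n)]_+
    return np.fft.ifft(X).real

# Auto-correlation of the filter (1, 0.5): lags (1.25, 0.5)
x = newton_factor([1.25, 0.5])
```

For this example the iteration recovers the filter $(1,\,0.5)$, whose auto-correlation matches the input lags.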
\section{A comparison with the Wilson-Burg method}
For reasons of symmetry, we can take Newton's relation from Table~\ref{specfac}
\[ X_{n+1} =\frac{S+X_n \bar X_n}{2\bar X_n} \]
and convert it to
\[ \frac{X_{n+1}}{2 X_n} =\frac{S+X_n \bar X_n}{(2 X_n)(2\bar X_n)}. \]
We can then consider a symmetrical relation where, on the left side, we insert the anticausal part of the spectrum, and obtain
\[ \frac{\bar X_{n+1}}{2 \bar X_n} =\frac{S+X_n \bar X_n}{(2 X_n)(2\bar X_n)}. \]
Finally, we can sum the preceding two equations and get
\beq \fbox{$ \displaystyle
\frac{X_{n+1}}{2 X_n} + \frac{\bar X_{n+1}}{2 \bar X_n} = \frac{2S+ X_n \bar X_n + \bar X_n X_n}{(2 X_n)(2\bar X_n)}
$} \eeq
which can easily be shown to be equivalent to the Wilson-Burg relation
\beq \label{eqn:wilsonburgSpF}
\frac{X_{n+1}}{X_n} + \frac{\bar X_{n+1}}{\bar X_n} = 1 + \frac{S}{X_n \bar X_n}
\eeq
\par
In an analogous way, we can take the general relation from Table~\ref{specfac}
\[ X_{n+1} =\frac{S+X_n \bar G}{\bar X_n+\bar G} \]
and convert it to
\[ \frac{X_{n+1}}{X_n + G} =\frac{S+X_n \bar G}{(X_n + G)(\bar X_n + \bar G)}. \]
We can then consider a symmetrical relation where, on the left side, we insert the anticausal part of the spectrum, and obtain
\[ \frac{\bar X_{n+1}}{\bar X_n + \bar G} =\frac{S+\bar X_n G}{(X_n + G)(\bar X_n + \bar G)}. \]
Finally, we can sum the preceding two equations and get
\beq \label{eqn:generalSpF} \fbox{$ \displaystyle
\frac{X_{n+1}}{X_n + G} + \frac{\bar X_{n+1}}{\bar X_n + \bar G} = \frac{2S+X_n \bar G + \bar X_n G}{(X_n + G)(\bar X_n + \bar G)}
$} \eeq
\par
Equation~(\ref{eqn:generalSpF}) represents our general formula for spectral factorization.
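The reduction of Equation~(\ref{eqn:generalSpF}) to the Wilson-Burg relation (\ref{eqn:wilsonburgSpF}) for $G=X_n$ holds pointwise in frequency, so it can be sanity-checked with scalar stand-ins for $S$, $X_n$, and $X_{n+1}$. A small check of ours in Python (the residual of the general relation comes out as exactly half the Wilson-Burg residual):

```python
import numpy as np

rng = np.random.default_rng(0)
# Scalar stand-ins for X_n and X_{n+1} at one frequency, and a positive spectrum S
X, X1 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
Xb, X1b = np.conj(X), np.conj(X1)
S = (X * Xb).real + 1.0

G, Gb = X, Xb   # the particular case G = X_n
# Residual of the general spectral factorization relation ...
res_general = X1 / (X + G) + X1b / (Xb + Gb) \
    - (2 * S + X * Gb + Xb * G) / ((X + G) * (Xb + Gb))
# ... versus the residual of the Wilson-Burg relation
res_wb = X1 / X + X1b / Xb - (1.0 + S / (X * Xb))
equivalent = np.isclose(res_general, res_wb / 2.0)
```

Both relations therefore vanish for the same $X_{n+1}$, which is the equivalence claimed above.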
If we consider the particular case when $G$ is $X_n$, we obtain Equation~(\ref{eqn:wilsonburgSpF}), which we have shown to be equivalent to the Wilson-Burg formula.
\par
From the computational standpoint, our equation is more expensive than the Wilson-Burg one, because it requires two more convolutions in the numerator of the right-hand side. However, our equation offers more flexibility in the convergence rate. If we aim for quick convergence, we can take $G$ to be $X_n$ and get the Wilson-Burg equation. On the other hand, if we worry about stability, especially when some of the roots of the auto-correlation function are close to the unit circle and we fear losing the minimum-phase property of the factors, we can take $G$ to be some damping function, more tolerant of numerical errors.
\par
Moreover, by using Equation~(\ref{eqn:generalSpF}), we can achieve fast convergence in cases when the auto-correlations we are factorizing have very similar forms, for example, in nonstationary filtering. In such cases, the solution at the preceding step can be used as the $G$ function in the new factorization. Since $G$ is already very close to the solution, convergence is likely to be quite fast.

\section{Conclusions}
The general iterative formula for the square root that we derived can be extended to the factorization of auto-correlation functions. The Wilson-Burg algorithm is a special case of our more general formula. Using such a general formula provides flexibility in choosing between fast convergence and stability. We can achieve fast convergence when factorizing auto-spectra that have very similar forms. This improvement in convergence rate can have a useful application, for instance, in nonstationary preconditioning.

\section{Acknowledgments}
We thank Jon Claerbout, who brought Muir's iterative scheme to our attention and suggested its application to spectral factorization.

\bibliographystyle{seg}
\bibliography{SEG,SEP2,paper}