% Started 11/29/00
%\shortnote
\lefthead{}
\righthead{}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}

\title{The Wilson-Burg method of spectral factorization \\
  with application to helical filtering}

\author{Sergey Fomel\/\footnotemark[1],
  Paul Sava\/\footnotemark[2],
  James Rickett\/\footnotemark[3],
  and Jon F. Claerbout\/\footnotemark[2]}

\footnotetext[1]{\emph{Bureau of Economic Geology, Jackson School of
    Geosciences, The University of Texas at Austin, University Station,
    Box X, Austin, Texas 78713-8972, USA}}
\footnotetext[2]{\emph{Stanford Exploration Project, Department of
    Geophysics, Stanford University, Stanford, California 94305, USA}}
\footnotetext[3]{\emph{ChevronTexaco Exploration and Production Technology
    Company, 6001 Bollinger Canyon Road, San Ramon, California 94583-2324}}

\maketitle

\begin{abstract}
  Spectral factorization is a computational procedure for constructing
  minimum-phase (stable inverse) filters required for recursive inverse
  filtering. We present a novel method of spectral factorization. The method
  iteratively constructs an approximation of the minimum-phase filter with
  the given autocorrelation by repeated forward and inverse filtering and
  rearranging the terms. This procedure is especially efficient in the
  multidimensional case, where the inverse recursive filtering is enabled by
  the helix transform.

  To exemplify a practical application of the proposed method, we consider
  the problem of smooth two-dimensional data regularization. Splines in
  tension are smooth interpolation surfaces whose behavior in unconstrained
  regions is controlled by the tension parameter. We show that such surfaces
  can be efficiently constructed with recursive filter preconditioning and
  introduce a family of corresponding two-dimensional minimum-phase filters.
  The filters are created by spectral factorization on a helix.
\end{abstract}

\section{Introduction}

Spectral factorization is the task of estimating a minimum-phase signal from
a given power spectrum. The advent of the helical coordinate system
\cite[]{helix0,GEO63-05-15321541} has led to renewed interest in spectral
factorization algorithms, since they now apply to multi-dimensional
problems. Specifically, spectral factorization algorithms provide the key to
rapid multi-dimensional recursive filtering with arbitrary functions, which
in turn has geophysical applications in preconditioning inverse problems
\cite[]{SEG-1998-1851,GEO68-02-05770588}, wavefield extrapolation
\cite[]{SEG-1998-1124,EAE-2000-P0145,SEG-2000-08620865,SEG-2001-10571060},
and 3-D noise attenuation
\cite[]{SEG-1999-12311234,EAE-1999-6039,EAE-2001-P167}.

The Kolmogoroff (cepstral or Hilbert transform) method of spectral
factorization \cite[]{kolmog,Claerbout.fgdp.76,oppenheim} is often used by
the geophysical community because of its computational efficiency. However,
as a frequency-domain method, it has certain limitations. For example, the
assumption of periodic boundary conditions often requires extreme amounts of
zero-padding for a stable factorization. This is one of the limitations
which make this method inconvenient for multi-dimensional applications.

The Wilson-Burg method, introduced in this paper, is an iterative algorithm
for spectral factorization based on Newton's iterations. The algorithm
exhibits quadratic convergence. It provides a time-domain approach that is
potentially more efficient than the Kolmogoroff method. We include a
detailed comparison of the two methods.

Recent surveys \cite[]{goodman,kailath} discuss some other methods for
spectral factorization, such as the Schur method \cite[]{schur}, the Bauer
method \cite[]{bauer}, and Wilson's original method \cite[]{mywilson}. The
latter is noted for its superb numerical properties.
We introduce Burg's modification to this algorithm, which raises the
computational attractiveness of the method to a new level. The Wilson-Burg
method avoids the matrix inversion that is essential to the original Wilson
algorithm, and it reduces the computational effort from $O(N^3)$ to $O(N^2)$
operations per iteration. A different way to accelerate Wilson's iteration
was suggested by \cite{laurie}. We have found the Wilson-Burg algorithm to
be especially suitable for applications of multidimensional helical
filtering, where the number of filter coefficients can be small, and the
cost effectively reduces to $O(N)$ operations.

The second part of the paper contains a practical example of the introduced
spectral factorization method. The method is applied to the problem of
two-dimensional smooth data regularization. This problem often occurs in
mapping potential-fields data and in other geophysical problems. Applying
the Wilson-Burg spectral factorization method, we construct a family of
two-dimensional recursive filters, which correspond to different values of
tension in the tension-spline approach to data regularization
\cite[]{GEO55-03-02930305}. We then use the constructed filters for an
efficient preconditioning of the data regularization problem. The
combination of an efficient spectral factorization and an efficient
preconditioning technique provides an attractive practical method for
multidimensional data interpolation. The technique is illustrated with
bathymetry data from the Sea of Galilee (Lake Kinneret) in Israel.

\section{Method description}

Spectral factorization constructs a minimum-phase signal from its spectrum.
The algorithm, suggested by \cite{mywilson}, approaches this problem
directly with Newton's iterative method.
In $Z$-transform notation, Wilson's method implies solving the equation
\begin{equation}
S(Z) = A(Z)\bar{A}(1/Z)
\label{eqn:specfac}
\end{equation}
for a given spectrum $S(Z)$ and unknown minimum-phase signal $A(Z)$ with an
iterative linearization
\begin{eqnarray}
\nonumber
S(Z) & = & A_t(Z)\,\bar A_t(1/Z)
+ A_t(Z)\,[\bar A_{t+1}(1/Z)-\bar A_t(1/Z)]
+ \bar A_t(1/Z)\,[A_{t+1}(Z)-A_t(Z)] \\
& = & A_t(Z)\,\bar A_{t+1}(1/Z) + \bar A_t(1/Z)\,A_{t+1}(Z)
- A_t(Z)\,\bar A_t(1/Z)\;,
\label{eqn:wilson}
\end{eqnarray}
where $A_t(Z)$ denotes the signal estimate at iteration $t$. Starting from
some initial estimate $A_0(Z)$, such as $A_0(Z)=1$, one iteratively solves
the linear system~(\ref{eqn:wilson}) for the updated signal $A_{t+1}(Z)$.
Wilson \shortcite{mywilson} presents a rigorous proof that
iteration~(\ref{eqn:wilson}) operates with minimum-phase signals provided
that the initial estimate $A_0(Z)$ is minimum-phase. The original algorithm
estimates the new approximation $A_{t+1}(Z)$ by the matrix inversion implied
in the solution of the system.

Burg (1998, personal communication) recognized that dividing both sides of
equation~(\ref{eqn:wilson}) by $\bar A_t(1/Z)\,A_t(Z)$ leads to a
particularly convenient form, where the terms on the left are symmetric, and
the two terms on the right are correspondingly strictly causal and
anticausal:
\begin{equation} \label{eqn:burg}
1 \ +\ {S(Z) \over \bar A_t(1/Z)\ A_t(Z)} =
{A_{t+1}(Z) \over A_t(Z)}\ +\ {\bar A_{t+1}(1/Z) \over \bar A_t(1/Z)}
\end{equation}

Equation~(\ref{eqn:burg}) leads to the Wilson-Burg algorithm, which
accomplishes spectral factorization by a recursive application of
convolution (polynomial multiplication) and deconvolution (polynomial
division). The algorithm proceeds as follows:
\begin{enumerate}
\item Compute the left side of equation~(\ref{eqn:burg}) using forward and
  adjoint polynomial division.
\item Abandon negative lags to keep only the causal part of the signal, and
  also keep half of the zero lag.
  This gives us $A_{t+1}(Z)/A_t(Z)$.
\item Multiply out (convolve) the denominator $A_t(Z)$. Now we have the
  desired result $A_{t+1}(Z)$.
\item Iterate until convergence.
\end{enumerate}

\tabl{tablerate}{Example convergence of the Wilson-Burg iteration}{
\begin{center}
\begin{tabular}{|r||r|r|r|r|} \hline
 iter & $a_0$ & $a_1$ & $a_2$ & $a_3$ \\ \hline \hline
 0 &  1.000000 &  0.000000 & 0.000000 & 0.000000 \\
 1 & 36.523964 & 23.737839 & 6.625787 & 0.657103 \\
 2 & 26.243151 & 25.726116 & 8.471050 & 0.914951 \\
 3 & 24.162354 & 25.991493 & 8.962727 & 0.990802 \\
 4 & 24.001223 & 25.999662 & 9.000164 & 0.999200 \\
 5 & 24.000015 & 25.999977 & 9.000029 & 0.999944 \\
 6 & 23.999998 & 26.000002 & 9.000003 & 0.999996 \\
 7 & 23.999998 & 26.000004 & 9.000001 & 1.000000 \\
 8 & 23.999998 & 25.999998 & 9.000000 & 1.000000 \\
 9 & 24.000000 & 26.000000 & 9.000000 & 1.000000 \\ \hline
\end{tabular}
\end{center}}

An example of Wilson-Burg convergence for a simple 1-D signal is shown in
Table~\ref{tbl:tablerate}. The autocorrelation $S(Z)$ in this case is
$1334 + 867\left(Z + 1/Z\right) + 242\left(Z^2 + 1/Z^2\right)
+ 24\left(Z^3 + 1/Z^3\right)$, and the corresponding minimum-phase signal is
$A(Z) = (2+Z)(3+Z)(4+Z) = 24 + 26 Z + 9 Z^2 + Z^3$. A quadratic rate of
convergence is visible from the table. The convergence slows down for
signals whose polynomial roots are close to the unit circle
\cite[]{mywilson}.

The clear advantage of the Wilson-Burg algorithm in comparison with the
original Wilson algorithm is the elimination of the expensive
matrix-inversion step: only convolution and deconvolution operations are
used at each iteration.

\subsection{Comparison of Wilson-Burg and Kolmogoroff methods}

The Kolmogoroff (cepstral or Hilbert transform) spectral factorization
algorithm \cite[]{kolmog,Claerbout.fgdp.76,oppenheim} is widely used because
of its computational efficiency.
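As a point of reference, the Kolmogoroff construction can be sketched in a few lines of NumPy. This is a minimal 1-D sketch of the classical cepstral recipe, not code from this paper; the function name `kolmogoroff` and the padding length `nfft` are our own choices. The idea: halve the log-spectrum, fold its (symmetric) cepstrum onto non-negative lags, and exponentiate back.

```python
import numpy as np

def kolmogoroff(s, nfft=1024):
    """Kolmogoroff spectral factorization, 1-D sketch.

    s: one-sided autocorrelation lags [s0, s1, ..., sm].
    Returns a causal, minimum-phase sequence whose autocorrelation is ~s.
    """
    # Two-sided autocorrelation, zero-padded to nfft samples (circular layout)
    r = np.zeros(nfft)
    r[0] = s[0]
    for k in range(1, len(s)):
        r[k] = s[k]
        r[-k] = s[k]
    spec = np.fft.fft(r).real              # power spectrum S(w), real, positive
    cep = np.fft.ifft(0.5 * np.log(spec))  # cepstrum of log sqrt(S)
    # Fold the symmetric cepstrum onto non-negative quefrencies
    cep[1:nfft // 2] *= 2.0
    cep[nfft // 2 + 1:] = 0.0
    # Exponentiating the causal cepstrum yields the minimum-phase factor
    return np.fft.ifft(np.exp(np.fft.fft(cep))).real
```

For the autocorrelation of $(2+Z)(3+Z)(4+Z)$ used in Table~\ref{tbl:tablerate}, `kolmogoroff([1334, 867, 242, 24])` should return a sequence whose first four samples approximate $(24, 26, 9, 1)$, since the roots are well away from the unit circle and the padding is ample.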
While this method is easily extended to the multi-dimensional case with the
help of the helical transform \cite[]{SEG-1999-16751678}, there are several
circumstances that make the Wilson-Burg method more attractive in
multi-dimensional filtering applications.
\begin{itemize}
\item The Kolmogoroff method takes $O(N \log N)$ operations, where $N$ is
  the length of the auto-correlation function. The cost of the Wilson-Burg
  method is proportional to the [number of iterations] $\times$ [filter
  length] $\times N$. If we keep the filter small and limit the number of
  iterations, the Wilson-Burg method can be cheaper (linear in $N$). In
  comparison, the cost of the original Wilson method is the [number of
  iterations] $\times$ $O(N^3)$.
\item The Kolmogoroff method works in the frequency domain and assumes
  periodic boundary conditions. Auto-correlation functions, therefore, need
  to be padded with zeros before they are Fourier transformed. For functions
  with zeros near the unit circle, the padding may need to be many orders of
  magnitude greater than the original filter length, $N$. The Wilson-Burg
  method is implemented in the time domain, where no extra padding is
  required.
\item Newton's method (the basis of the Wilson-Burg algorithm) converges
  quickly when the initial guess is close to the solution. If we take
  advantage of this property, the method may converge in one or two
  iterations, reducing the cost even further. It is impossible to make use
  of an initial guess with the Kolmogoroff method.
\item The Kolmogoroff method, when applied to helix filtering, involves the
  dangerous step of truncating the filter coefficients to reduce the size of
  the filter. If the auto-correlation function has roots close to the unit
  circle, truncating filter coefficients may easily lead to
  non-minimum-phase filters. The Wilson-Burg method allows us to fix the
  shape of the filter from the very beginning.
  This does not guarantee that we will find the exact solution, but at least
  we can obtain a reasonable minimum-phase approximation to the desired
  filter. The safest practical strategy in the case of an unknown initial
  estimate is to start by finding the longest possible filter, remove those
  of its coefficients that are smaller than a certain threshold, and repeat
  the factoring process with the shorter filter.
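To make the recipe concrete, the Wilson-Burg iteration described above can be sketched in NumPy for the 1-D case. This is a minimal illustration under our own assumptions, not the authors' implementation: the working length `L` for truncating the power series and the helper `causal_divide` are our choices, and the filter shape is fixed to `n` coefficients from the start, as the text recommends.

```python
import numpy as np

def causal_divide(x, a):
    """Causal polynomial division x / a (requires a[0] != 0).

    x is treated as a series starting at its first array index;
    the quotient is truncated to the same length as x.
    """
    y = np.zeros_like(x)
    for i in range(len(x)):
        acc = x[i]
        for j in range(1, min(len(a), i + 1)):
            acc -= a[j] * y[i - j]
        y[i] = acc / a[0]
    return y

def wilson_burg(s, n, niter=10, L=64):
    """Wilson-Burg spectral factorization, 1-D sketch.

    s: one-sided autocorrelation lags [s0, s1, ..., sm]
    n: number of filter coefficients to estimate
    Returns a minimum-phase filter a with autocorrelation ~s.
    """
    # Two-sided autocorrelation on lags -L..L (array index L is zero lag)
    ss = np.zeros(2 * L + 1)
    ss[L] = s[0]
    for k in range(1, len(s)):
        ss[L + k] = s[k]
        ss[L - k] = s[k]
    a = np.zeros(n)
    a[0] = np.sqrt(s[0])                     # constant initial estimate A_0
    for _ in range(niter):
        # Step 1: b = S / (A_t Abar_t) by forward and adjoint division
        b = causal_divide(ss, a)             # divide by A_t (causal)
        b = causal_divide(b[::-1], a)[::-1]  # divide by Abar_t (anticausal)
        # Step 2: causal part of 1 + b, keeping half of the zero lag
        c = np.zeros(L + 1)
        c[0] = 0.5 * (b[L] + 1.0)
        c[1:] = b[L + 1:]                    # c ~ A_{t+1}(Z) / A_t(Z)
        # Step 3: multiply out the denominator A_t to get A_{t+1}
        a = np.convolve(a, c)[:n]
    return a
```

Starting from the constant estimate $A_0=\sqrt{s_0}$, the first iterate for the example autocorrelation of Table~\ref{tbl:tablerate} matches the table's row 1, and the iteration converges to $(24, 26, 9, 1)$.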
