%arises in the theory below where we are required
%to take the logarithm of the spectrum.
%The spectrum of the Laplacian operator is $k_x^2+k_y^2$,
%which vanishes at the point $(k_x,k_y)=(0,0)$
%where the logarithm is minus infinity.
%The .00004 prevents that.
%\end{comment}
\begin{comment}
\par
Subroutine \texttt{lapfac()}
finds the helical derivative (factored negative Laplacian)
and then prepares the required filter
coefficient tables for the helix convolution
and deconvolution subroutines.
\moddex{lapfac}{factor 2-D Laplacian}
Subroutine \texttt{lapfacn()} has its main job
done by subroutine \texttt{wilson\_factor()} \vpageref{lst:wilson}
shown after the Wilson-Burg theory.
\end{comment}
%
\subsection{Wilson-Burg theory}
Newton's iteration for square roots
\begin{equation}
a_{t+1} \eq {1\over 2} \ \left( a_t \ +\ {s\over a_t} \right)
\label{eqn:newton}
\end{equation}
converges quadratically starting from any real initial guess $a_0$ except zero.
When $a_0$ is negative,
Newton's iteration converges to the negative square root.
\par
Quadratic convergence means that the square of the error
$a_t-\sqrt{s}$
at one iteration is proportional to the error at the next iteration
\begin{eqnarray}
\label{eqn:quadconv}
a_{t+1} - \sqrt{s} \quad\sim\quad ( a_t-\sqrt{s})^2 \eq  a_t^2 - 2 a_t \sqrt{s} + s \quad > \quad 0
\end{eqnarray}
so, for example if the error is one significant digit at one iteration,
at the next iteration it is two digits, then four, etc.
We cannot use equation (\ref{eqn:quadconv}) in place of the Newton iteration itself,
because it uses the answer $\sqrt{s}$ to get the answer $a_{t+1}$,
and also we need the factor of proportionality.  Notice, however,
if we take the factor to be $1/(2a_t)$,
then $\sqrt{s}$ cancels and
equation (\ref{eqn:quadconv}) becomes itself the Newton iteration (\ref{eqn:newton}).
\par
Another interesting feature of the Newton iteration is that
all iterations (except possibly the initial guess)
are above the ultimate square root.
This is obvious from equation (\ref{eqn:quadconv}).
\par
We can insert spectral functions in the Newton square-root iteration,
for example $s(\omega)$ and $a(\omega)$.
Where the first guess $a_0$ happens to match $\sqrt{s}$,
it will match $\sqrt{s}$ at all iterations.
The Newton iteration is
\begin{equation}
2\ {a_{t+1}\over a_t} \eq   1 \ +\ {s\over a_t^2}
\end{equation}
Something inspires Wilson to
express the spectrum $S=\bar A A$ as a $Z$-transform
and then write the iteration
%\cite{wilson}
\begin{equation}
{\bar A_{t+1}(1/Z) \over \bar A_t(1/Z)} \ +\  {A_{t+1}(Z) \over A_t(Z)}
\eq
1 \ +\ {S(Z) \over \bar A_t(1/Z)\  A_t(Z)}
\label{eqn:wilson}
\end{equation}
\par
Now we are ready for the algorithm:
Compute the right side of (\ref{eqn:wilson})
by polynomial division forwards and backwards and then add 1.
Then abandon negative lags and take half of the zero lag.
Now you have $A_{t+1}(Z) / A_t(Z)$.
Multiply out (convolve) the denominator $A_t(Z)$,
and you have the desired result $A_{t+1}(Z)$.
Iterate as long as you wish.
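\par
The following is a minimal sketch of this algorithm in Python/NumPy.
It is not the \texttt{wilson} module shown below;
the function name \texttt{wilson\_burg}, its arguments, and the padding
choices are hypothetical, and the two polynomial divisions are done with
\texttt{scipy.signal.lfilter} on a padded 1-D array rather than with
helix convolution modules:
\begin{verbatim}
import numpy as np
from scipy.signal import lfilter

def wilson_burg(s, na, niter=20):
    # s: autocorrelation on lags -(len(s)//2)..+(len(s)//2) (odd length)
    # returns a causal, minimum-phase filter a of na coefficients
    ns = len(s)
    n = 8 * max(ns, na)                 # generous working length
    lag0 = n // 2                       # index of the zero lag
    S = np.zeros(n)
    S[lag0 - ns // 2 : lag0 + ns // 2 + 1] = s
    a = np.zeros(na)
    a[0] = np.sqrt(s[ns // 2])          # spike guess with the right power
    for _ in range(niter):
        # right side of the Wilson iteration, term by term:
        t = lfilter([1.0], a, S)              # divide by A(Z), forwards
        t = lfilter([1.0], a, t[::-1])[::-1]  # divide by Abar(1/Z), backwards
        t[lag0] += 1.0                        # ... then add 1
        c = t[lag0:].copy()                   # abandon negative lags
        c[0] *= 0.5                           # take half of the zero lag
        a = np.convolve(c, a)[:na]            # multiply out A_t(Z)
    return a
\end{verbatim}
For instance, \texttt{wilson\_burg(np.array([-1., 2.01, -1.]), na=10)}
should factor the damped second-difference autocorrelation into a short
causal, minimum-phase filter.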
\par
(Parenthetically, for those people familiar with the idea
of minimum phase (if not, see FGDP or PVI),
we show that $A_{t+1}(Z)$ is minimum phase:
Both sides of (\ref{eqn:wilson}) are positive, as noted earlier.
Both terms on the right are positive.
Since the Newton iteration always overestimates,
the 1 dominates the rightmost term.
After masking off the negative powers of $Z$ (and half the zero power),
the right side of (\ref{eqn:wilson}) adds two wavelets.
The 1/2 is wholly real,
and hence its real part always dominates the real part of the rightmost term.
Thus (after masking negative powers) the wavelet on
the right side of (\ref{eqn:wilson}) has a positive real part,
so the phase cannot loop about the origin.
This wavelet multiplies $A_t(Z)$ to give the final wavelet $A_{t+1}(Z)$,
and the product of two minimum-phase wavelets is minimum phase.)
\par
%I plan to factor
%the Laplacian
%\begin{equation}
%S(Z) \eq -{1\over Z^{100}} - {1\over Z} + 4 -Z -Z^{100}
%\end{equation}
%by the Wilson-Burg method.
%It seems less confusing than the Kolmogoroff method.
%Furthermore, it might be better
%because we can think of
%upper and lower triangular matrices,
%instead of recursive filters.
%The advantage of the triangular matrices
%is that they embody time-variable filters.
\par
The input of the program is the spectrum $S(Z)$
and the output is the factor $A(Z)$,
a function with the spectrum $S(Z)$.
I mention here that in later chapters of this book,
the factor $A(Z)$ is known as the inverse Prediction-Error Filter (PEF).
In the Wilson-Burg code below,
$S(Z)$ and $A(Z)$ are $Z$-transform polynomials
but their lead coefficients are extracted off,
so for example, $A(Z) = (a_0) + (a_1 Z + a_2 Z^2 + \cdots)$
is broken into the two parts \texttt{a0} and \texttt{aa}.
\moddex{wilson}{Wilson-Burg spectral factorization}{48}{87}{user/gee}
%
\begin{exer}
\begin{comment}
\item
You hear from three different people
that a more isotropic representation
of the Laplacian is minus one sixth of
$$\begin{array} {rrr}
-1     & -4 &  -1 \\
-4     & 20 &  -4 \\
-1     & -4 &  -1
\end{array}$$
What changes need to be made to subroutine \texttt{lapfac()}?
%
\end{comment}
\item
Fomel's factorization:
A simple trick to avoid division in square root computation is to run
Newton's method on the inverse square root instead.
The iteration is then
$R' = {1\over 2} R(3-R^2 X^2)$
where $R$ converges (quadratically) to $1/\sqrt{X^2}$.
To get the square root, just multiply $R$ by $X^2$.
This leads to a reciprocal version of the Wilson-Burg algorithm,
$A'/A + \bar A'/ \bar A = 3 - A \bar A  S$.
Here is how it can work:
Construct an inverse autocorrelation ---
for example,
an ideal isotropic smoother;
make a guess for $A$ (min-phase roughener); iterate:
(1) compute $3 - A \bar A S$,
(2) take its causal part,
(3) convolve with $A$ to get $A'$.
Each iteration involves just three convolutions
(and could even be done without the helix);
a sketch of one possible implementation follows this exercise list.
\end{exer}
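\par
Here is a minimal sketch of how that reciprocal iteration might look
in Python/NumPy. The function name \texttt{fomel\_factor\_inverse} and its
details are hypothetical; the zero lag is halved when taking the causal
part, by analogy with Wilson-Burg, and plain 1-D convolutions stand in
for helix convolution:
\begin{verbatim}
import numpy as np

def fomel_factor_inverse(s, na, niter=20):
    # s: an inverse autocorrelation (e.g. the spectrum of a smoother),
    #    given on lags -(len(s)//2)..+(len(s)//2) (odd length).
    # returns a causal filter a (min-phase roughener) with a*abar*s ~ delta.
    ns = len(s)
    a = np.zeros(na)
    a[0] = 1.0 / np.sqrt(s[ns // 2])      # spike guess with the right power
    for _ in range(niter):
        # (1) three convolutions:  r = 3*delta - A(Z) Abar(1/Z) S(Z)
        aas = np.convolve(np.convolve(a, a[::-1]), s)
        lag0 = (na - 1) + ns // 2         # index of the zero lag of aas
        r = -aas
        r[lag0] += 3.0
        # (2) causal part, with half of the zero lag
        c = r[lag0:].copy()
        c[0] *= 0.5
        # (3) convolve with a to get the updated filter
        a = np.convolve(c, a)[:na]
    return a
\end{verbatim}
No division appears anywhere in the loop, which is the point of the trick.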
%
\end{notforlecture}
\section{HELIX LOW-CUT FILTER}
If you want to see some tracks on the side of a hill,
you want to subtract the hill and see only the tracks.
Usually, however, you don't have a very good model for the hill.
As an expedient you could apply a low-cut filter to remove all
slowly variable functions of altitude.
In chapter \ref{ajt/paper:ajt} we found the Sea of Galilee
in Figure \ref{ajt/fig:galbin} to be too smooth for viewing pleasure,
so we made the roughened versions
in Figure \ref{ajt/fig:galocut}
using a filter based
on equation (\ref{ajt/eqn:lowcut}),
a one-dimensional filter that we could apply
over the $x$-axis or the $y$-axis.
In Fourier space such a filter has a response that is
a function of $k_x$ or of $k_y$.
The isotropy of physical space tells us
it would be more logical to design a filter that
is a function of $k_x^2+k_y^2$.
In Figure \ref{fig:helocut} we saw that the helix derivative
$\bold H$
does a nice job.
The Fourier magnitude of its impulse response is $k_r=\sqrt{k_x^2+k_y^2}$.
There is a little anisotropy connected with phase (which way should
we wind the helix, on $x$ or $y$?), but it is
not nearly so severe as that of either component of the gradient,
the two components having wholly different spectra,
amplitude $|k_x|$ or $|k_y|$.
\inputdir{mam}
\par
It is nice having the 2-D helix derivative,
but we can imagine even nicer 2-D low-cut filters.
In one dimension, in equations (\ref{ajt/eqn:lowcut})
and (\ref{lsq/eqn:lowcut}),
we designed filters with an adjustable parameter,
a cutoff frequency.
We don't have such an object in 2-D, so
I set out to define one.
It came out somewhat
abstract and complicated,
and didn't work very well, but along the way
I found a simpler parameter that is very effective in practice.
We'll look at it first.
\sideplot{mam}{width=3.5in,height=3.5in}{
  Mammogram (medical X-ray).
  The cancer is the ``spoked wheel.''
  (I apologize for the inability of paper publishing technology
  to exhibit a clear grey image.)
  The white circles are metal foil used for navigation.
  The little halo around a circle exhibits the impulse
  response of the helix derivative.}
\par
First I had a problem preparing Figure \ref{fig:mam}.
It shows the application of the helix derivative
to a medical X-ray.
The problem was that the original X-ray was all positive
values of brightness, so there was a massive amount of
spatial low frequency present.
Obviously an $x$-derivative or a $y$-derivative would
eliminate the low frequency, but the helix derivative did not.
This unpleasant surprise arises
because the filter in equation (\ref{eqn:lapfac})
was truncated after a finite number of terms.
Adding up the terms actually displayed in equation (\ref{eqn:lapfac}),
they sum to .183, whereas theoretically the sum of all the terms should be zero.
From the ratio .183/1.791 we can say that the filter
pushes the zero-frequency amplitude only about 90\% of the way to zero.
When the image contains very much zero-frequency amplitude,
this is not good enough.
Better results could be obtained with more coefficients,
and I did use more coefficients,
but simply removing the mean saved me
from needing a costly number of filter coefficients.
\inputdir{helgal}
\par
We can visualize a plot of the magnitude of the 2-D
Fourier transform of the filter (\ref{eqn:lapfac}).
It is a 2-D function of $k_x$ and $k_y$, and it should
resemble $k_r=\sqrt{k_x^2+k_y^2}$.
It does look like this even when the filter (\ref{eqn:lapfac})
has been truncated.
The point of the cone $k_r=\sqrt{k_x^2+k_y^2}$ becomes
rounded, and the truncated approximation of
$k_r$ does not reach zero at the origin of the $(k_x,k_y)$-plane.
We can force it to vanish at zero frequency
by subtracting .183 from the lead coefficient 1.791.
I did not do that subtraction in Figure \ref{fig:helgal},
which explains the whiteness in the middle of the lake.
\plot{helgal}{width=6in,height=3.7in}{
  Galilee roughened by gradient and
  by helical derivative.}
%Experimentation with this figure showed that the helix filter
%need not be long (16 points was enough) but,
%since the inputs were all positive numbers,
%the output was best displayed with its mean removed
%because the short 16 point filter did not remove the mean.
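\par
In code, either remedy amounts to a line or two; here is a minimal sketch
(the function names \texttt{zero\_dc} and \texttt{remove\_mean} are
hypothetical, and the filter is assumed to be stored as a 1-D array of
helix coefficients with the lead coefficient first):
\begin{verbatim}
import numpy as np

def zero_dc(h):
    # Force a truncated filter to reject zero frequency exactly:
    # subtract the coefficient sum (the .183 above) from the lead
    # coefficient (the 1.791 above), so the coefficients sum to zero.
    h = np.asarray(h, dtype=float).copy()
    h[0] -= h.sum()
    return h

def remove_mean(img):
    # The cheaper expedient used for the mammogram: subtract the image
    # mean before applying the truncated helix derivative.
    return img - img.mean()
\end{verbatim}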
\par
Now let us return to my more logical but less effective approach.
I prepared a half dozen medical X-rays like Figure \ref{fig:mam}.
The doctor brought her young son to my office one evening
to evaluate the results.
In a dark room I would show the original X-ray on a big screen
and then suddenly switch to the helix derivative.
Every time I did this, her son would exclaim ``Wow!''
The doctor was not so easily impressed, however.
She was not accustomed to the unfamiliar image.
Fundamentally, the helix derivative applied to her data
does compress the dynamic range,
making weaker features more readily discernible.
We were sure of this from theory and from
various geophysical examples.
The subjective problem was her unfamiliarity with our display.
I found that I could always spot anomalies more quickly
on the filtered display, but then I would feel more comfortable
when I discovered those same anomalies also present
(though less evident) in the original data.
Thinking this through, I decided the doctor would likely have
been more impressed
had I used a spatial lowcut filter instead of the helix derivative.
That would have left the details of her image
(above the cutoff frequency)
unchanged, altering
only the low frequencies,
thereby allowing me to increase the gain.
%\par
%As we set out, keep in mind that many applications call for
%isotropic filters
%(because physics often tells us there is no preferred orientation).
%This requires us to build upon the operator $-\nabla^2$,
%because all isotropic functions are functions of it.
%It might look anisotropic (because rotating
%$45^\circ$ the finite difference representation of $-\nabla^2$
%seems to change it),
%but in Fourier space
%the well sampled data is at low frequencies,
%and the FT of $-\nabla^2$ is the isotropic expression
%$k_x^2+k_y^2$ which is independent of coordinate rotation.
%
%\par
%Recall the one-dimensional causal lowcut filters (\ref{ajt/eqn:lowcut})
%that we invented earlier.
%Applying one of these to its time reverse gives
%an expression recognizable as a low-cut filter.
%\begin{equation}
%{1-Z^{-1}\over 1-\rho Z^{-1}}\ \
%{1-Z     \over 1-\rho Z     }
%\quad\approx\quad
%{\omega^2 \over \omega^2 + \omega_0^2}
%\label{eqn:lofac}
%\end{equation}
%As you look at this expression,
%think of the numerator and denominator
%as expressions that could be sent to a spectral factorization module.
%For example, in 1-D the numerator
%is the autocorrelation $(-1,2,-1)$.
%The denominator is an autocorrelation like $(-1,2.05,-1)$.
%These two autocorrelations can be factored
%by Kolmogoroff or Burg-Wilson methods.
%Equation
%(\ref{eqn:lofac}) shows them as already factored.
%In two dimensions we can use the same approach.
%The factor of the numerator is the helix derivative,
%and the factor of the
%denominator is found likewise.
%\par
%The di
