1.791&-.651&-.044&-.024&\cdots&\cdots&-.044&-.087&-.200&-.558 \\ \hline
\end{array}
\label{eqn:lapfacrow}
\end{equation}
According to the Kolmogoroff theory, if we form the autocorrelation of $\mathcal{H}$, we will get $\mathcal{R}$. This is not obvious from the numbers themselves, because the computation requires a little work.
\par
Let the time-reversed version of $\mathcal{H}$ be denoted $\mathcal{H}'$. This notation is consistent with an idea from Chapter \ref{ajt/paper:ajt} that the adjoint of a filter matrix is another filter matrix with a reversed filter. In engineering it is conventional to use the asterisk symbol ``$\ast$'' to denote convolution. Thus, the idea that the autocorrelation of a signal $\mathcal{H}$ is a convolution of the signal $\mathcal{H}$ with its time reverse (adjoint) can be written as $\mathcal{H}' \ast \mathcal{H} = \mathcal{H} \ast \mathcal{H}' = \mathcal{R}$.
\par
Wind the signal $\mathcal{R}$ around a vertical-axis helix to see its two-dimensional shape $\mathcal{R}_2$:
\begin{equation}
\mathcal{R}
\quad\xrightarrow{\rm helical \; boundaries}\quad
\begin{array}{|r|r|r|} \hline
   & -1 &    \\ \hline
-1 &  4 & -1 \\ \hline
   & -1 &    \\ \hline
\end{array}
\eq \mathcal{R}_2
\label{eqn:lap2d}
\end{equation}
This 2-D filter is the negative of the finite-difference representation of the Laplacian operator, generally denoted $\nabla^2 = {\partial^2\over \partial x^2} + {\partial^2\over \partial y^2}$. Now for the magic: wind the signal $\mathcal{H}$ around the same helix to see its two-dimensional shape $\mathcal{H}_2$:
\begin{equation}
\label{eqn:lapfac}
{\mathcal{H}_2} \eq
\begin{array}{|r|r|r|r|r|r|r|r|r|} \hline
       &       &       &       & 1.791 & -.651 & -.044 & -.024 & \cdots \\ \hline
\cdots & -.044 & -.087 & -.200 & -.558 &       &       &       &        \\ \hline
\end{array}
\end{equation}
In the representation (\ref{eqn:lapfac}) we see the coefficients diminishing rapidly away from the maximum value 1.791. My claim is that the two-dimensional autocorrelation of (\ref{eqn:lapfac}) is (\ref{eqn:lap2d}). You verified this idea earlier when the numbers were all ones. You can check it again in a few moments if you drop the small values, say 0.2 and smaller.
\inputdir{helocut}
\par
Since the autocorrelation of $\mathcal{H}$, namely $\mathcal{H}' \ast \mathcal{H} = \mathcal{R} = -\nabla^2$, is a second derivative, the operator $\mathcal{H}$ must be something like a first derivative. As a geophysicist, I found it natural to compare the operator ${\partial\over\partial y}$ with $\mathcal{H}$ by applying them to a local topographic map. The result, shown in Figure \ref{fig:helocut}, is that $\mathcal{H}$ enhances drainage patterns, whereas ${\partial\over\partial y}$ enhances mountain ridges.
\plot{helocut}{width=6in,height=8.6in}{ Topography, helical derivative, slope south.}
\par
The operator $\mathcal{H}$ has curious similarities to, and differences from, the familiar gradient and divergence operators. In two-dimensional physical space, the gradient maps one field to {\em two} fields (north slope and east slope). The factorization of $-\nabla^2$ with the helix gives us the operator $\mathcal{H}$, which maps one field to {\em one} field. Being a one-to-one transformation (unlike gradient and divergence), the operator $\mathcal{H}$ is potentially invertible by deconvolution (recursive filtering).
\par
I have chosen the name\footnote{Any fact this basic should be named in some earlier field of mathematics or theoretical physics. Admittedly, the concept exists on an infinite cartesian plane without a helix, but all my codes in a finite space involve the helix, and the helix concept led me to it.
}``helix derivative'' or ``helical derivative'' for the operator $\mathcal{H}$. A telephone pole has a narrow shadow behind it. The helix integral (middle frame of Figure \ref{fig:lapfac}) and the helix derivative (left frame) show shadows with an angular bandwidth approaching $180^\circ$.
\par
Our construction makes $\mathcal{H}$ have the energy spectrum $k_x^2+k_y^2$, so the magnitude of its Fourier transform is $\sqrt{k_x^2+k_y^2}$. This is a cone centered at the origin, with value zero there. By contrast, the components of the ordinary gradient have amplitude responses $|k_x|$ and $|k_y|$, which vanish along entire lines across the $(k_x,k_y)$-plane.
\par
The rotationally invariant cone in the Fourier domain contrasts sharply with the nonrotationally invariant shape of the function in $(x,y)$-space. The difference must arise from the phase spectrum. The factorization (\ref{eqn:lapfac}) is nonunique in that the causality associated with the helix mapping can be defined along either the $x$- or the $y$-axis; thus the operator (\ref{eqn:lapfac}) can be rotated or reflected.
\inputdir{helicon}
\par
This is where the story all comes together. One-dimensional theory, whether the old Kolmogoroff spectral factorization or the new Wilson-Burg spectral-factorization method, produces not merely a causal wavelet with the required autocorrelation; it produces one that is stable in deconvolution. Using $\mathcal{H}$ in one-dimensional polynomial division, we can solve many formerly difficult problems very rapidly. Consider the Laplace equation with sources (Poisson's equation). Polynomial division and its reverse (adjoint) give us $\mathcal{P} = (\mathcal{Q}/\mathcal{H})/\mathcal{H}'$, which means that we have solved $\nabla^2 \mathcal{P} = -\mathcal{Q}$ by using polynomial division on a helix. Using the seven coefficients shown, the cost is fourteen multiplications (because we need to run both ways) per mesh point. An example is shown in Figure \ref{fig:lapfac}.
\plot{lapfac}{width=6in,height=2.0in}{ Deconvolution by a filter whose autocorrelation is the
two-dimensional Laplacian operator. This amounts to solving the Poisson equation. Left is $\mathcal{Q}$; middle is $\mathcal{Q}/\mathcal{H}$; right is $(\mathcal{Q}/\mathcal{H})/\mathcal{H}'$.}
\par
Figure \ref{fig:lapfac} contains both the helix derivative and its inverse. Contrast them to the $x$- or $y$-derivatives (doublets) and their inverses (axis-parallel lines in the $(x,y)$-plane). Simple derivatives are highly directional, whereas the helix derivative is only slightly directional, achieving its meagre directionality entirely from its phase spectrum.
\par
In practice we often require an isotropic filter. Such a filter is a function of $k_r=\sqrt{k_x^2 + k_y^2}$. It could be represented as a sum of helix derivatives raised to integer powers.
\subsection{Matrix view of the helix}
Physics on a helix can be viewed through the eyes of matrices and numerical analysis. This is not easy, because the matrices are so huge. Discretize the $(x,y)$-plane to an $N\times M$ array and pack the array into a vector of $N\times M$ components. Likewise pack the Laplacian operator $\partial_{xx}+\partial_{yy}$ into a matrix. For a $4\times 3$ plane, that matrix is shown in equation (\ref{eqn:huge}).
\begin{equation}
\label{eqn:huge}
-\ \nabla^2 \eq
\left[
\begin{array}{rrrr|rrrr|rrrr}
     4 &    -1 & \cdot & \cdot &    -1 & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
    -1 &     4 &    -1 & \cdot & \cdot &    -1 & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
 \cdot &    -1 &     4 &    -1 & \cdot & \cdot &    -1 & \cdot & \cdot & \cdot & \cdot & \cdot \\
 \cdot & \cdot &    -1 &     4 &     h & \cdot & \cdot &    -1 & \cdot & \cdot & \cdot & \cdot \\ \hline
    -1 & \cdot & \cdot &     h &     4 &    -1 & \cdot & \cdot &    -1 & \cdot & \cdot & \cdot \\
 \cdot &    -1 & \cdot & \cdot &    -1 &     4 &    -1 & \cdot & \cdot &    -1 & \cdot & \cdot \\
 \cdot & \cdot &    -1 & \cdot & \cdot &    -1 &     4 &    -1 & \cdot & \cdot &    -1 & \cdot \\
 \cdot & \cdot & \cdot &    -1 & \cdot & \cdot &    -1 &     4 &     h & \cdot & \cdot &    -1 \\ \hline
 \cdot & \cdot & \cdot & \cdot &    -1 & \cdot & \cdot &     h &     4 &    -1 & \cdot & \cdot \\
 \cdot & \cdot & \cdot & \cdot & \cdot &    -1 & \cdot & \cdot &    -1 &     4 &    -1 & \cdot \\
 \cdot & \cdot & \cdot & \cdot & \cdot & \cdot &    -1 & \cdot & \cdot &    -1 &     4 &    -1 \\
 \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot &    -1 & \cdot & \cdot &    -1 &     4
\end{array}
\right]
\end{equation}
\par\noindent
The two-dimensional matrix of coefficients for the Laplacian operator is shown in (\ref{eqn:huge}), where, on a cartesian space, $h=0$, and, in the helix geometry, $h=-1$. (A similar partitioned matrix arises from packing a cylindrical surface into a $4\times 3$ array.) Notice that the partitioning becomes transparent for the helix, $h=-1$. With the partitioning thus invisible, the matrix simply represents one-dimensional convolution, and we have an alternative analytical approach, the one-dimensional Fourier transform. We often need to solve sets of simultaneous equations with a matrix similar to (\ref{eqn:huge}). The method we use is triangular factorization.
\par
Although the autocorrelation $\mathcal{R}$ has mostly zero values, the factored autocorrelation $\mathcal{A}$ has a great number of nonzero terms. Fortunately, they seem to converge rapidly (in the middle), so truncation (of the middle coefficients) seems reasonable. I wish I could show you a larger matrix, but all I can do is pack the signal $\mathcal{A}$ into shifted columns of a lower triangular matrix $\mathbf{A}$, like this:
\begin{equation}
\bold A \ = \ \
\left[
\begin{array}{rrrrrrrrrrrr}
  1.8&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\
  -.6&  1.8&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\
  0.0&  -.6&  1.8&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\
  -.2&  0.0&  -.6&  1.8&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\
  -.6&  -.2&  0.0&  -.6&  1.8&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\
\cdot&  -.6&  -.2&  0.0&  -.6&  1.8&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\
\cdot&\cdot&  -.6&  -.2&  0.0&  -.6&  1.8&\cdot&\cdot&\cdot&\cdot&\cdot\\
\cdot&\cdot&\cdot&  -.6&  -.2&  0.0&  -.6&
  1.8&\cdot&\cdot&\cdot&\cdot\\
\cdot&\cdot&\cdot&\cdot&  -.6&  -.2&  0.0&  -.6&  1.8&\cdot&\cdot&\cdot\\
\cdot&\cdot&\cdot&\cdot&\cdot&  -.6&  -.2&  0.0&  -.6&  1.8&\cdot&\cdot\\
\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&  -.6&  -.2&  0.0&  -.6&  1.8&\cdot\\
\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&  -.6&  -.2&  0.0&  -.6&  1.8
\end{array}
\right]
\label{eqn:lapfacmat}
\end{equation}
If you will allow me some truncation approximations, I now claim that the Laplacian represented by the matrix in equation (\ref{eqn:huge}) is factored into two parts, $-\nabla^2 = \bold A'\bold A$, which are upper and lower triangular matrices whose product forms the autocorrelation seen in (\ref{eqn:huge}). Recall that triangular matrices allow quick solutions of simultaneous equations by backsubstitution. That is what we do with our deconvolution program.
\section{CAUSALITY AND SPECTRAL FACTORIZATION}
Mathematics sometimes seems a mundane subject, as when it does the ``accounting'' for an engineer. Other times it brings unexpected, amazing new concepts into our lives. This is the case with the study of causality and spectral factorization. There are many little-known, amazing, fundamental ideas here that I would like to tell you about. We won't get to the bottom of any of them, but it is fun and useful to see what they are and how to use them.
\par
Start with an example.
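Before leaving the matrix view: the factorization claim $-\nabla^2 \approx \bold A'\bold A$ can be checked numerically. The following sketch is my illustration, not part of the original text; it builds the $12\times 12$ lower triangular matrix of equation (\ref{eqn:lapfacmat}) from its rounded column coefficients and confirms that an interior row of $\bold A'\bold A$ approximately reproduces the helix-Laplacian stencil of equation (\ref{eqn:huge}).

```python
# Sketch (not from the text): check -Laplacian ~ A'A numerically, using
# the rounded coefficients 1.8, -.6, 0.0, -.2, -.6 of the factored
# autocorrelation, on a 4 x 3 plane (helix circumference 4).
import numpy as np

a = [1.8, -0.6, 0.0, -0.2, -0.6]   # truncated helix filter, lags 0..4
n = 12                             # 4 x 3 grid packed into one vector
A = np.zeros((n, n))
for j in range(n):                 # shifted columns of a lower triangular matrix
    for lag, c in enumerate(a):
        if j + lag < n:
            A[j + lag, j] = c

B = A.T @ A                        # should approximate -Laplacian with h = -1
i = 6                              # an interior row, away from the edges
# Ideal helix-Laplacian row: 4 on the diagonal, -1 at offsets +-1 and +-4.
# With the rounded coefficients the match is only approximate:
#   B[i, i] = 4.00,  B[i, i-1] = B[i, i+1] = -0.96,  B[i, i-4] = B[i, i+4] = -1.08
print(np.round(B[i], 2))
```

The largest off-pattern residual in that interior row is about 0.12 (at offsets $\pm 2$); keeping more of the coefficients of (\ref{eqn:lapfacrow}) at full precision would shrink these residuals further.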
Consider a mechanical object. We can strain it and watch it stress, or we can stress it and watch it strain. We feel that knowledge of the present and past stress history is all we need to determine the present value of strain. Likewise, conversely, the history of strain should tell us the stress. We could say there is a filter that takes us from stress to strain; likewise, another filter takes us from strain to stress. What we have here is a pair of filters that are mutually inverse under convolution. In the Fourier domain, one is literally the inverse of the other. What is remarkable is that in the time domain, both are causal. They both vanish before zero lag $\tau=0$.
\par
Not all causal filters have a causal inverse. The best known name for one that does is ``minimum-phase filter.'' Unfortunately, this name is not suggestive of