\lefthead{Fomel}
\righthead{Forward interpolation}
\footer{SEP--107}
\title{Forward interpolation}
\email{sergey@sep.stanford.edu}
\author{Sergey Fomel}
\newtheorem{property}{Property}
\maketitle
\label{chapter:forwd}

\begin{abstract}
As I will illustrate in later chapters, the crucial part of data
regularization problems lies in the choice and implementation of the
regularization operator $\mathbf{D}$ or the corresponding
preconditioning operator $\mathbf{P}$. The choice of the forward
modeling operator $\mathbf{L}$ is less critical. In this chapter, I
discuss the nature of forward interpolation, which has been one of the
traditional subjects in computational mathematics. \cite{wolberg}
presents a detailed review of different conventional approaches. I
discuss a simple mathematical theory of interpolation from a regular
grid and derive the main formulas from a very general idea of function
bases.

Forward interpolation plays only a supplementary role in this
dissertation, but it has many primary applications in seismic data
processing and imaging, such as trace resampling, NMO, Kirchhoff and
Stolt migrations, log-stretch, and the radial transform. Two simple
examples appear at the end of this chapter.
\end{abstract}

\section{Interpolation theory}

Mathematical interpolation theory considers a function $f$, defined on
a regular grid $N$. The problem is to find $f$ in a continuum that
includes $N$. I am not defining the dimensionality of $N$ and $f$ here
because it is not essential for the derivations. Furthermore, I am
not specifying the exact meaning of ``regular grid,'' since it will
become clear from the analysis that follows. The function $f$ is
assumed to belong to a Hilbert space with a defined dot product.
%\newtheorem{property}{Property}

If we restrict our consideration to a linear case, the desired
solution will take the following general form
\begin{equation}
\label{eq:linear}
 f (x) = \sum_{n \in N} W (x, n) f (n)\;,
\end{equation}
where $x$ is a point from the continuum, and $W (x, n)$ is a linear
weight function that can take both positive and negative values. If
the grid $N$ itself is considered as continuous, the sum in formula
(\ref{eq:linear}) transforms to an integral in $dn$. Two general
properties of the linear weighting function $W (x, n)$ are evident
from formula (\ref{eq:linear}).
\begin{property}
\begin{equation}
\label{eq:wnn}
 W (n, n) = 1\;.
\end{equation}
\end{property}
Equality (\ref{eq:wnn}) is necessary to assure that the interpolation
of a single spike at some point $n$ does not change the value $f (n)$
at the spike.
\begin{property}
\begin{equation}
\label{eq:sum}
 \sum_{n \in N} W (x, n) = 1\;.
\end{equation}
\end{property}
This property is the normalization condition. Formula (\ref{eq:sum})
assures that interpolation of a constant function $f(n)$ remains
constant.

One classic example of the interpolation weight $W (x, n)$ is the
Lagrange polynomial, which has the form
\begin{equation}
\label{eq:lagrange}
 W (x, n) = \prod_{i \neq n} \frac{(x-i)}{(n-i)}\;.
\end{equation}
The Lagrange interpolation provides a unique polynomial, which goes
exactly through the data points $f (n)$\footnote{It is interesting to
note that the interpolation and finite-difference filters developed
by \cite{Karrenbach.sepphd.83} from a general approach of
self-similar operators reduce to a localized form of Lagrange
polynomials.}.
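A minimal Python sketch of the weights~(\ref{eq:lagrange}) on a small
local stencil is shown below; the function name
\texttt{lagrange\_weights} and the four-point stencil are illustrative
assumptions, not part of the processing code used elsewhere in this
chapter.
\begin{verbatim}
import numpy as np

def lagrange_weights(x, nodes):
    """Lagrange weights W(x, n) of equation (eq:lagrange) for the given nodes."""
    nodes = np.asarray(nodes, dtype=float)
    w = np.ones(len(nodes))
    for k, n in enumerate(nodes):
        others = np.delete(nodes, k)            # all grid points i != n
        w[k] = np.prod((x - others) / (n - others))
    return w

# Example: 4-point local Lagrange interpolation of f(x) = x**2 at x = 1.3
nodes = np.array([0.0, 1.0, 2.0, 3.0])
w = lagrange_weights(1.3, nodes)
print(w.sum())               # weights sum to 1, consistent with Property 2
print(np.dot(w, nodes**2))   # exact for cubics and below: 1.69
\end{verbatim}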
The local 1-point Lagrange interpolation is equivalent
to the nearest-neighbor interpolation, defined by the formula
\begin{equation}
\label{eq:bin}
 W (x, n) = \left\{\begin{array}{lcr}
 1, & \mbox{for} & n - 1/2 \leq x < n + 1/2 \\
 0, & \mbox{otherwise} &
 \end{array}\right.
\end{equation}
Likewise, the local 2-point Lagrange interpolation is equivalent
to the linear interpolation, defined by the formula
\begin{equation}
\label{eq:lin}
 W (x, n) = \left\{\begin{array}{lcr}
 1 - |x-n|, & \mbox{for} & n - 1 \leq x < n + 1 \\
 0, & \mbox{otherwise} &
 \end{array}\right.
\end{equation}

\inputdir{Math}

Because of their simplicity, the nearest-neighbor and linear
interpolation methods are very practical and easy to apply. Their
accuracy is, however, limited and may be inadequate for
interpolating high-frequency signals. The shapes of
interpolants~(\ref{eq:bin}) and~(\ref{eq:lin}) and their spectra are
plotted in Figures~\ref{fig:nnint} and~\ref{fig:linint}. The spectral
plots show that both interpolants act as low-pass filters, preventing
the high-frequency energy from being correctly interpolated.
\par
\plot{nnint}{width=6in}{Nearest-neighbor interpolant (left) and its spectrum (right).}
\plot{linint}{width=6in}{Linear interpolant (left) and its spectrum (right).}

The Lagrange interpolants of higher order correspond to more
complicated polynomials. Another popular practical approach is cubic
convolution \cite[]{keys}. The cubic convolution interpolant is a local
piece-wise cubic function:
\begin{equation}
\label{eq:keys}
 W (x, n) = \left\{\begin{array}{lcr}
 3/2 |x-n|^3 - 5/2 |x-n|^2 + 1, & \mbox{for} & 0 \leq |x-n| < 1 \\
 -1/2 |x-n|^3 + 5/2 |x-n|^2 - 4 |x-n| + 2, & \mbox{for} & 1 \leq |x-n| < 2 \\
 0, & \mbox{otherwise} &
 \end{array}\right.
\end{equation}
The shapes of interpolant~(\ref{eq:keys}) and its spectrum are plotted
in Figure~\ref{fig:ccint}.
\plot{ccint}{width=6in}{Cubic-convolution interpolant (left) and its spectrum (right).}

\inputdir{chirp}

I compare the accuracy of different forward interpolation methods on a
one-dimensional signal shown in Figure~\ref{fig:chirp}. The ideal
signal has an exponential amplitude decay and a quadratic frequency
increase from the center towards the edges. It is sampled at a regular
50-point grid and interpolated to 500 regularly sampled locations. The
interpolation result is compared with the ideal one.
Figures~\ref{fig:binlin} and~\ref{fig:lincub} show the interpolation
error steadily decreasing as we proceed from 1-point nearest-neighbor
to 2-point linear and 4-point cubic-convolution interpolation. At the
same time, the cost of interpolation grows proportionally to the
interpolant length.
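A short sketch of how a local interpolant is applied through
equation~(\ref{eq:linear}) is given below, using the cubic-convolution
weights~(\ref{eq:keys}). It is not the code used to produce the
figures; the helper names \texttt{keys\_kernel} and
\texttt{interpolate} and the cosine test signal are assumptions for
this illustration.
\begin{verbatim}
import numpy as np

def keys_kernel(u):
    """Cubic-convolution weight of equation (eq:keys), as a function of u = x - n."""
    a = np.abs(u)
    w = np.zeros_like(a)
    w = np.where(a < 1, 1.5 * a**3 - 2.5 * a**2 + 1, w)
    w = np.where((a >= 1) & (a < 2), -0.5 * a**3 + 2.5 * a**2 - 4 * a + 2, w)
    return w

def interpolate(f, x, kernel, radius):
    """Apply equation (eq:linear): f(x) = sum_n W(x, n) f(n) over nearby grid points."""
    out = np.zeros(len(x))
    for j, xj in enumerate(x):
        lo = int(np.floor(xj)) - radius + 1
        ns = np.arange(lo, lo + 2 * radius)
        ns = ns[(ns >= 0) & (ns < len(f))]      # stay inside the grid
        out[j] = np.sum(kernel(xj - ns) * f[ns])
    return out

# Example: resample a coarsely sampled cosine onto a 10-times finer grid
n = np.arange(50)
f = np.cos(0.3 * n)
x = np.linspace(0, 49, 491)
f_cubic = interpolate(f, x, keys_kernel, radius=2)
\end{verbatim}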
\sideplot{chirp}{height=2.5in}{One-dimensional test signal. Top: ideal. Bottom: sampled at 50 regularly spaced points. The bottom plot is the input in a forward interpolation test.}
\sideplot{binlin}{height=2.5in}{Interpolation error of the nearest-neighbor interpolant (dashed line) compared to that of the linear interpolant (solid line).}
\sideplot{lincub}{height=2.5in}{Interpolation error of the linear interpolant (dashed line) compared to that of the cubic convolution interpolant (solid line).}

\section{Function basis}

A particular form of the solution (\ref{eq:linear}) arises from
assuming the existence of a basis function set $\{\psi_k(x)\},\;k \in
K$, such that the function $f (x)$ can be represented by a linear
combination of the basis functions in the set, as follows:
\begin{equation}
\label{eq:basis}
 f (x) = \sum_{k \in K} c_k \psi_k (x)\;.
\end{equation}
We can find the linear coefficients $c_k$ by taking the dot product of
both sides of equation (\ref{eq:basis}) with one of the basis functions
(e.g., $\psi_j (x)$). Inverting the equality
\begin{equation}
\label{eq:dotprod}
 \left( \psi_j (x), f (x)\right) = \sum_{k \in K} c_k \Psi_{jk}\;,
\end{equation}
where the parentheses denote the dot product, and
\begin{equation}
\label{eq:psi0}
\Psi_{jk} = \left( \psi_j (x), \psi_k (x)\right) \;,
\end{equation}
leads to the following explicit expression for the coefficients
$c_k$:
\begin{equation}
\label{eq:ck}
 c_k = \sum_{j \in K} \Psi^{-1}_{kj} \left( \psi_j (x), f (x)\right) \;.
\end{equation}
Here $\Psi^{-1}_{kj}$ refers to the $kj$ component of the matrix
$\Psi^{-1}$, the inverse of $\Psi$. The matrix $\Psi$ is invertible as
long as the basis set of functions is linearly independent. In the
special case of an orthonormal basis, $\Psi$ reduces to the identity
matrix:
\begin{equation}
\label{eqn:deltajk}
\Psi_{jk} = \Psi^{-1}_{kj} = \delta_{jk}\;.
\end{equation}
Equation (\ref{eq:ck}) is a least-squares estimate of the coefficients
$c_k$: one can alternatively derive it by minimizing the least-squares
norm of the difference between $f(x)$ and the linear
decomposition~(\ref{eq:basis}). For a given set of basis functions,
equation~(\ref{eq:ck}) approximates the function $f(x)$ in formula
(\ref{eq:linear}) in the least-squares sense.

\section{Solution}

The usual (although not unique) mathematical definition of the
continuous dot product is
\begin{equation}
\label{eq:dot}
 (f_1, f_2) = \int \bar{f}_1 (x) f_2 (x) dx \;,
\end{equation}
where the bar over $f_1$ stands for complex conjugate (in the case of
complex-valued functions). Applying definition (\ref{eq:dot}) to the
dot product in equation~(\ref{eq:ck}) and approximating the integral
by a finite sum on the regular grid $N$, we arrive at the approximate
equality
\begin{equation}
\label{eq:grid}
 (\psi_j (x), f (x)) = \int \bar{\psi}_j (x) f (x) dx \approx
 \sum_{n \in N} \bar{\psi}_j (n) f (n)\;.
\end{equation}
We can consider equation (\ref{eq:grid}) not only as a useful
approximation, but also as an implicit \emph{definition} of the
regular grid. Grid regularity means that approximation (\ref{eq:grid})
is possible. According to this definition, the more regular the grid
is, the more accurate the approximation.
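As a small numerical illustration of equations~(\ref{eq:ck})
and~(\ref{eq:grid}), the sketch below forms the matrix $\Psi$ and the
coefficients $c_k$ from samples on a regular grid, approximating both
dot products by grid sums. The basis choice (Gaussians centered on the
grid points), the test function, and all names in the code are
assumptions made for this example only.
\begin{verbatim}
import numpy as np

# Regular grid n = 0, 1, ..., N-1 and samples of a smooth test function f(n)
N = 50
n = np.arange(N, dtype=float)
f = np.exp(-0.002 * (n - 25)**2) * np.cos(0.4 * n)

# Assumed basis for this sketch: Gaussians centered on every grid point
def psi(k, x):
    return np.exp(-0.5 * (x - k)**2)

# Gram matrix Psi_jk = (psi_j, psi_k) and right-hand side (psi_j, f),
# both approximated by sums over the grid, as in equation (eq:grid)
K = np.arange(N)
Psi = np.array([[np.sum(psi(j, n) * psi(k, n)) for k in K] for j in K])
rhs = np.array([np.sum(psi(j, n) * f) for j in K])

# Coefficients c_k from equation (eq:ck); then f(x) from equation (eq:basis)
c = np.linalg.solve(Psi, rhs)
x = np.linspace(0, N - 1, 10 * N)
f_interp = sum(c[k] * psi(k, x) for k in K)
\end{verbatim}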
Substituting equality~(\ref{eq:grid}) into equations~(\ref{eq:ck})
and~(\ref{eq:basis}) yields a solution to the interpolation problem.
The solution takes the form of equation (\ref{eq:linear}) with
\begin{equation}
\label{eq:solution}
 W (x, n) = \sum_{k \in K} \sum_{j \in K}
 \Psi^{-1}_{kj} \psi_k (x) \bar{\psi}_j (n)\;.
\end{equation}
We have found a constructive way of creating the linear
interpolation operator from a specified set of basis functions.

It is important to note that the adjoint of the linear operator in
formula (\ref{eq:linear}) is the continuous dot product of the
functions $W (x, n)$ and $f (x)$. This simple observation follows from
the definition of the adjoint operator and the simple equality
\begin{eqnarray}
\label{eq:dottest}
 \left(f_1 (x), \sum_{n \in N} W (x, n) f_2 (n)\right) =
 \sum_{n \in N} f_2 (n) \left(f_1 (x), W (x, n) \right) = \nonumber \\
 \left(\left(W (x, n), f_1 (x)\right), f_2 (n) \right) \;.
\end{eqnarray}
In the final equality, we have assumed that the discrete dot product
is defined by the sum
\begin{equation}
\label{eq:ddot}
 (f_1 (n), f_2 (n)) = \sum_{n \in N} \bar{f}_1 (n) f_2 (n) \;.
\end{equation}
Applying the adjoint interpolation operator to the function $f$,
defined with the help of formula (\ref{eq:solution}), and employing
formulas (\ref{eq:basis}) and (\ref{eq:ck}), we discover that
\begin{eqnarray}
\label{eq:adjoint}
 \left(W (x, n), f (x)\right) =
 \sum_{k \in K} \sum_{j \in K} \Psi^{-1}_{kj} \bar{\psi}_j (n)
 \left(\psi_k (x), f (x)\right) = \nonumber \\
 \sum_{j \in K} \bar{\psi}_j (n) \sum_{k \in K} \Psi^{-1}_{jk}
 \left(\psi_k (x), f (x)\right) =
 \sum_{j \in K} c_j \psi_j (n) = f (n)\;.
\end{eqnarray}
This remarkable result shows that although the forward linear
interpolation is based on approximation (\ref{eq:grid}), the adjoint
interpolation produces an exact value of $f (n)$! The approximate
nature of equation (\ref{eq:solution}) reflects the fundamental
difference between adjoint and inverse linear operators
\cite[]{Claerbout.blackwell.92}.

When adjoint interpolation is applied to a constant function $f (x)
\equiv 1$, it is natural to require the constant output $f (n) = 1$.
This requirement leads to yet another general property of the
interpolation functions $W (x,n)$:
\begin{property}
\begin{equation}
\label{eq:intwdx}
\int W (x, n) dx = 1\;.
\end{equation}
\end{property}

The functional basis approach to interpolation is well developed in
sampling theory \cite[]{garcia}. Some classic examples are discussed
in the next section.
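To make the adjoint relation~(\ref{eq:dottest}) concrete, the sketch
below pairs forward linear interpolation (equation~(\ref{eq:linear})
with the weights~(\ref{eq:lin})) with its adjoint, which spreads each
value back onto the grid with the same weights, and checks the
dot-product equality numerically. The function names
\texttt{lin\_forward} and \texttt{lin\_adjoint} and the random test
vectors are assumptions for this illustration.
\begin{verbatim}
import numpy as np

def lin_forward(f, x):
    """Forward linear interpolation: out[j] = sum_n W(x[j], n) f(n), weights (eq:lin)."""
    out = np.zeros(len(x))
    for j, xj in enumerate(x):
        n = int(np.floor(xj))
        u = xj - n
        out[j] = (1 - u) * f[n] + u * f[min(n + 1, len(f) - 1)]
    return out

def lin_adjoint(g, x, nf):
    """Adjoint: spread each g[j] onto the grid with the same weights W(x[j], n)."""
    f = np.zeros(nf)
    for j, xj in enumerate(x):
        n = int(np.floor(xj))
        u = xj - n
        f[n] += (1 - u) * g[j]
        f[min(n + 1, nf - 1)] += u * g[j]
    return f

# Dot-product test, in the spirit of equation (eq:dottest): (g, L f) = (L' g, f)
rng = np.random.default_rng(0)
nf, x = 20, rng.uniform(0, 19, size=100)
f, g = rng.standard_normal(nf), rng.standard_normal(100)
print(np.dot(g, lin_forward(f, x)), np.dot(lin_adjoint(g, x, nf), f))
\end{verbatim}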
\section{Interpolation with Fourier basis}

To illustrate the general theory with familiar examples, I consider
in this section the most famous example of an orthonormal function
basis, the Fourier basis of trigonometric functions. What kind of
linear interpolation does this basis lead to?

\subsection{Continuous Fourier basis}
\inputdir{Math}

For the continuous Fourier transform, the set of basis functions is
defined by
\begin{equation}
\label{eq:cft}
 \psi_\omega (x) = \frac{1}{\sqrt{2 \pi}} e^{i \omega x} \;,
\end{equation}
where $\omega$ is the continuous frequency. For a $1$-point
sampling interval, the frequency is limited by the Nyquist
condition: $|\omega| \leq \pi$. In this case, the interpolation
function $W$ can be computed from equation~(\ref{eq:solution}) to be
\begin{equation}
\label{eq:w-cft}
 W (x, n) = \frac{1}{2 \pi} \int_{-\pi}^{\pi} e^{i \omega (x-n)} d\omega =
 \frac{\sin \left[\pi (x - n) \right]}{\pi (x - n)} \;.
\end{equation}
The shape of the interpolation function~(\ref{eq:w-cft}) and its
spectrum are shown in Figure~\ref{fig:sincint}. The spectrum is
identically equal to 1 in the Nyquist frequency band.
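A minimal sketch of applying the weights~(\ref{eq:w-cft}) numerically
is given below; in exact arithmetic the sum in
equation~(\ref{eq:linear}) runs over the entire grid, so truncating it
to the available samples, as done here, is an additional
approximation. The name \texttt{sinc\_interpolate} and the test signal
are assumptions for this example.
\begin{verbatim}
import numpy as np

def sinc_interpolate(f, x):
    """Evaluate f(x) = sum_n W(x, n) f(n) with W from equation (eq:w-cft),
    W(x, n) = sin(pi (x - n)) / (pi (x - n)); note np.sinc(u) = sin(pi u)/(pi u)."""
    n = np.arange(len(f))
    # Outer difference x[j] - n, then a weighted sum over the grid axis
    return np.sinc(x[:, None] - n[None, :]) @ f

# Band-limited example: a low-frequency cosine sampled on a unit grid
n = np.arange(50)
f = np.cos(0.3 * n)
x = np.linspace(10.0, 39.0, 291)                  # stay away from the grid ends
approx = sinc_interpolate(f, x)
print(np.max(np.abs(approx - np.cos(0.3 * x))))   # truncation error of the finite sum
\end{verbatim}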