\inputdir{bob}
Choosing the size of $\bold \epsilon$ chooses
the stiffness of the curve that connects regions of good data.
Our first test cases gave solutions
that we interpreted to be
too stiff at early times and too flexible at later times.
This leads to two possible ways to deal with the problem.
One way modifies the model shaping
and the other modifies the data fitting.
The program below weakens the data-fitting weight with time.
This has the same effect as stiffening the model shaping with time.
\plot{clapp}{width=6in,height=3in}{
	Raw CMP gather (left),
	semblance scan (middle),
	and semblance value used for the weighting function (right).  (Clapp)
}
\plot{stiff}{width=6in,height=3in}{
	Observed RMS velocity and that predicted by a stiff model
	with $\epsilon=4$.  (Clapp)
}
\plot{flex}{width=6in,height=3in}{
	Observed RMS velocity and that predicted by a flexible model
	with $\epsilon=.25$.  (Clapp)
}
%\subsection{Bigsolver}
%The regression (\ref{eqn:model}) includes a weighting function,
%so we need yet another solver module
%very much like the regularized and preconditioned solvers
%that we used earlier.
%The idea is essentially the same.
%Instead of preparing such a solver here,
%I refer you to the end of the book for
%\texttt{solver\_mod} \vpageref{lst:bigsolver},
%a big solver that incorporates everything
%that we need in the book.
%Hopefully we will not need to look at solvers again for a while.
%Module \texttt{vrms2int} \vpageref{lst:vrms2int}
%was written by Bob Clapp to get the results
%in Figures \ref{fig:clapp} to \ref{fig:flex}.
%Notice that he is using
%\texttt{solver\_mod} \vpageref{lst:bigsolver}.
\begin{comment}
\moddex{vrms2int}{Converting RMS to interval velocity}
\end{comment}
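\par
To make the time-dependent weighting concrete without reproducing Clapp's
module, here is a minimal NumPy sketch of the same fitting idea:
the causal-integration operator $\bold C$ maps squared interval velocity to
$t\,V_{\rm RMS}^2(t)$, the $(1,-1)$ roughener scaled by $\epsilon$ shapes the
model, and the data weight decays with time.
The grid, the pick locations, and the $1/t$ weight are illustrative
assumptions, not the values behind Figures \ref{fig:clapp}--\ref{fig:flex}.
\begin{verbatim}
# Minimal sketch (not Clapp's vrms2int): weighted RMS-to-interval fit
# with a data weight that decays with time.
import numpy as np

nt, dt = 100, 0.04                      # regular time grid (illustrative)
t = (np.arange(nt) + 1) * dt
C = np.tril(np.ones((nt, nt))) * dt     # causal integration
D = np.diff(np.eye(nt), axis=0)         # (1,-1) roughener on the model

u_true = 4.0 + 2.0 * (t > 2.0)          # made-up squared interval velocity
d_full = C @ u_true                     # t * Vrms(t)^2 on the full grid
known = np.zeros(nt, bool)
known[[10, 30, 55, 80]] = True          # times with reliable semblance picks

W = np.diag(np.where(known, 1.0 / t, 0.0))  # pass the picks, weaken with time
eps = 1.0

# Fitting goals:  W (C u - d) ~ 0   and   eps D u ~ 0
A = np.vstack([W @ C, eps * D])
b = np.concatenate([W @ d_full, np.zeros(nt - 1)])
u_est = np.linalg.lstsq(A, b, rcond=None)[0]
\end{verbatim}
Making the weight decay faster has the same effect as stiffening the model
shaping at late times.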
\subsection{Lateral variations}
The analysis above appears one dimensional in depth.
Conventional interval velocity estimation builds a velocity-depth model
independently at each lateral location.
Here we have a logical path for combining measurements
from various lateral locations.
We can change the regularization
to something like $\bold 0 \approx \nabla \bold u$.
Instead of merely minimizing the vertical gradient of velocity,
we minimize its spatial gradient.
Luckily we have preconditioning and the helix to speed the solution.
\par
%Likewise the construction of the data quality screen $\bold G$
%would naturally involve the full three-dimensional setting.
\subsection{Blocky models}
\inputdir{.}
Sometimes we seek a velocity model that increases smoothly
with depth through our scattered
measurements of good-quality RMS velocities.
Other times, we seek a blocky model.
(Where seismic data is poor,
a well log could tell us whether to choose smooth or blocky.)
Here we see an estimation method that can choose the blocky alternative,
or some combination of smooth and blocky.
\par
Consider the five-layer model in Figure \ref{fig:rosales}.
Each layer has unit traveltime thickness
(so integration is simply summation).
Let the squared interval velocities be $(a,b,c,d,e)$
with strong, reliable reflections at the base of layer $c$ and layer $e$,
and weak, incoherent, ``bad'' reflections at the bases of $(a,b,d)$.
Thus we measure $V_c^2$, the RMS velocity squared of the top three layers,
and $V_e^2$, that for all five layers.
Since we have no reflection from the base of the fourth layer,
the velocity in the fourth layer is not measured but a matter for choice.
In a smooth linear fit we would want $d=(c+e)/2$.
In a blocky fit we would want $d=e$.
\sideplot{rosales}{width=1.5in}{
	A layered earth model.
	The layer interfaces cause reflections.
	Each layer has a constant velocity in its interior.
}
\par
Our screen for good reflections looks like $(0,0,1,0,1)$
and our screen for bad ones looks like the complement $(1,1,0,1,0)$.
We put these screens on the diagonals of the diagonal matrices
$\bold G$ and $\bold B$.
Our fitting goals are:
\begin{eqnarray}
3V_c^2 &\approx& a+b+c
\\
5V_e^2 &\approx& a+b+c+d+e
\\
u_0 &\approx& a
\\
0 &\approx& -a+b
\\
0 &\approx& -b+c
\\
0 &\approx& -c+d
\label{eqn:block}
\\
0 &\approx& -d+e
\end{eqnarray}
For the blocky solution, we do not want the fitting goal (\ref{eqn:block}).
Further explanations await completion of examples.
%We can remove it by multiplying the model goals by a diagonal-matrix
%badpass screen $\bold B$
%eliminating the goal $0 \approx -c+d$.
%In abstract, our fitting goals become
%\begin{eqnarray}
%\bold 0 &\approx& \bold r \eq \bold C \bold u - \bold d
%\\
%\bold 0 &\approx& \bold p \eq \bold B\bold D\bold u - \bold u_0
%\end{eqnarray}
%where $\bold u_0$ is a zero vector with a top component of $u_0$.
%Since $\bold B$ is not invertible,
%we cannot backsolve the preconditioned variable $\bold p$ for
%the squared interval velocity
%$\bold u = \bold D^{-1}\bold B^{-1}(\bold p + \bold u_0)$.
%Instead, we use $\bold G$ for $\bold B^{-1}$,
%thus redefining
%the implicit relationship for $\bold u$:
%\begin{equation}
%\bold u \eq \bold u_0 + \bold D^{-1}\bold G\bold p
%\label{eqn:ufromp}
%\end{equation}
%where $\bold G$ is the goodpass screen.
%Since $\bold D^{-1}=\bold C$, the fitting goals become
%
%\begin{equation}
%\begin{array}{lll}
%\bold 0 &\approx& \bold r \eq \bold C^2\bold G \bold p
%+\bold C \bold u_0
%-\bold d
%\\
%\bold 0 &\approx& \bold p
%\end{array}
%\label{eqn:logical}
%\end{equation}
%After fitting with $\bold p$,
%we define the squared interval velocity
%$\bold u$ using (\ref{eqn:ufromp}).
%
%\par
%The formulation
%(\ref{eqn:logical})
%is so logical that we might have guessed it:
%The goal $\bold 0 \approx \bold p$ says that $\bold p$ is mostly zero.
%What emerges from $\bold G$ is a sprinkling of impulses.
%Then $\bold C^2$ converts the pulses to ramp functions
%(zero until a certain place, then growing linearly),
%which are used to fit the data (integrated velocity).
%Differentiating the data-fitting ramps
%converts them to the desired blocks of constant velocity.
%One iteration is required for each impulse.
%
%\par
%Choosing $\bold G$ to be an identity $\bold I$ gives smooth velocity models,
%such as caused by the increasing consolidation of the rocks with depth.
%Choosing the screen $\bold G$ to have a sprinkling of pass locations
%picks the boundaries of blocks of constant velocity.
%The choice can be made by people with subjective criteria (like geologists),
%or we can assist by using the data itself
%in various ways to select our degree of preference between
%the blocky and smooth models.
%For example,
%we could put seismic coherency or amplitude
%on the goodpass diagonal matrix $\bold G$.
%Clearly, much remains to be gained from experience.
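\par
As a quick numerical check of the goals above (a hedged sketch with made-up
layer values, not one of the book's modules), the script below solves them by
least squares twice: once with all the model-shaping goals, and once with the
goal (\ref{eqn:block}) at the ``bad'' interface deleted.
Keeping that goal spreads the velocity jump across layer $d$;
deleting it lets $d$ snap to $e$, the blocky answer.
\begin{verbatim}
# Five-layer check: smooth fit versus blocky fit (illustrative values).
import numpy as np

a, b, c, d, e = 4.0, 4.0, 4.0, 9.0, 9.0   # "true" squared interval velocities
Vc2 = (a + b + c) / 3                     # RMS^2 measured at base of layer c
Ve2 = (a + b + c + d + e) / 5             # RMS^2 measured at base of layer e
u0 = a

data_rows = np.array([[1, 1, 1, 0, 0],    # 3 Vc^2 ~ a+b+c
                      [1, 1, 1, 1, 1],    # 5 Ve^2 ~ a+b+c+d+e
                      [1, 0, 0, 0, 0]])   # u0 ~ a
data_rhs = np.array([3 * Vc2, 5 * Ve2, u0])

model_rows = np.array([[-1, 1, 0, 0, 0],  # 0 ~ -a+b
                       [0, -1, 1, 0, 0],  # 0 ~ -b+c
                       [0, 0, -1, 1, 0],  # 0 ~ -c+d   (eqn:block)
                       [0, 0, 0, -1, 1]]) # 0 ~ -d+e
eps = 0.1                                 # small: data goals dominate

def solve(keep_block):
    rows = model_rows if keep_block else np.delete(model_rows, 2, axis=0)
    A = np.vstack([data_rows, eps * rows])
    rhs = np.concatenate([data_rhs, np.zeros(len(rows))])
    return np.linalg.lstsq(A, rhs, rcond=None)[0]

print("smooth (all goals):     ", solve(True))   # d spread between c and e
print("blocky (drop eqn:block):", solve(False))  # d snaps to e
\end{verbatim}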
%
%\begin{notforlecture}
\section{INVERSE LINEAR INTERPOLATION}
\inputdir{sep94}
\sideplot{data}{width=3in,height=1.5in}{
	The input data are irregularly sampled.
}
The first example is a simple synthetic test for 1-D inverse
interpolation.
The input data were randomly subsampled (with decreasing density)
from a sinusoid (Figure \ref{fig:data}).
The forward operator $\bold L$ in this case is linear interpolation.
We seek a regularly sampled model that could predict the data with a
forward linear interpolation.
The sparse, irregular distribution of the input data makes
enforcing regularization a necessity.
I applied convolution with the simple $(1,-1)$
difference filter as the operator $\bold D$ that forces model continuity
(the first-order spline).
An appropriate preconditioner $\bold S$ in this
case is recursive causal integration.
%Figures \ref{fig:im1} and
%\ref{fig:fm1} show the results of inverse interpolation after
%exhaustive 300 iterations of the conjugate-direction method.
%As a result of using the causal integration for preconditioning,
%the rightmost part of the model in the data-space case stays at a
%constant level instead of decreasing to zero. If we specifically
%wanted a zero-value boundary condition, it wouldn't be difficult to
%implement it by adding a zero-value data point at the boundary.
%\sideplot{im1}{width=3in,height=1.5in}{ Estimation of a continuous
% function by regularization. The regularization operator $\bold A$ is
% the derivative operator (convolution with $(1,-1)$).}
%\sideplot{fm1}{width=3in,height=1.5in}{ Estimation of a continuous
% function by preconditioning model regularization. The
% preconditioning operator $\bold P$ is causal integration.}
\plot{conv}{width=6in,height=7in}{
	Convergence history of inverse linear interpolation.
	Left: regularization, right: preconditioning.
	The regularization operator $\bold A$ is
	the derivative operator (convolution with $(1,-1)$).
	The preconditioning operator $\bold S$ is causal integration.
}
%\plot{conv2}{width=6in,height=7in}{Convergence history of inverse
% linear interpolation. Left: regularization, right: preconditioning.
% The regularization operator $\bold A$ is
% the second derivative operator (convolution with $(1,-2,1)$). The
% preconditioning operator $\bold P$ is the corresponding inverse filtering.}
As expected,
preconditioning provides a much faster rate of convergence.
Since iteration to the exact solution
is never achieved in large-scale problems,
the results of iterative optimization may turn out quite differently.
Bill Harlan points out that the two goals
in (\ref{eq:main}) conflict with each other:
the first one enforces ``details'' in the model,
while the second one tries to smooth them out.
Typically, regularized optimization creates
a complicated model at early iterations.
At first, the data-fitting goal (\ref{eq:main}) plays a more important role.
Later, the regularization goal (\ref{eq:main}) comes into play
and simplifies (smooths) the model as much as needed.
Preconditioning acts differently.
The very first iterations create a simplified (smooth) model.
Later, the data-fitting goal adds more details into the model.
If we stop the iterative process early,
we end up with an insufficiently complex model,
not an insufficiently simplified one.
Figure \ref{fig:conv} provides a clear illustration of Harlan's observation.
\par
Figure \ref{fig:schwab1}
measures the rate of convergence by the model residual,
which is the distance from the current model to the final solution.
It shows that preconditioning saves many iterations.
Since the cost of each iteration for each method is roughly equal,
the efficiency of preconditioning is evident.
\sideplot{schwab1}{width=2.4in}{
	Convergence of the iterative optimization,
	measured in terms of the model residual.
	The ``p'' points stand for preconditioning; the ``r'' points, regularization.
}
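\par
The sketch below mimics this experiment on a small grid;
it is not the module used for the figures,
and the grid size, $\epsilon$, and iteration count are illustrative assumptions.
Both formulations run through the same few conjugate-gradient iterations,
so plotting \texttt{m\_reg} against \texttt{m\_pre} lets you check
Harlan's observation about early iterations.
\begin{verbatim}
# 1-D inverse linear interpolation: regularization with D = (1,-1)
# versus preconditioning with S = causal integration (a sketch,
# not the book's invint2 module).
import numpy as np

def cgls(A, b, niter):
    """Conjugate gradients on the normal equations of min |Ax - b|^2."""
    x = np.zeros(A.shape[1]); r = b.copy(); s = A.T @ r; p = s.copy()
    for _ in range(niter):
        q = A @ p
        alpha = (s @ s) / (q @ q)
        x += alpha * p
        r -= alpha * q
        s_new = A.T @ r
        p = s_new + (s_new @ s_new) / (s @ s) * p
        s = s_new
    return x

n, nd = 100, 30
rng = np.random.default_rng(0)
coords = np.sort(rng.uniform(0, n - 1, nd))    # irregular data locations
d = np.sin(2 * np.pi * coords / n)             # data picked from a sinusoid

L = np.zeros((nd, n))                          # forward linear interpolation
i = coords.astype(int)
w = coords - i
L[np.arange(nd), i] = 1 - w
L[np.arange(nd), i + 1] = w

D = np.diff(np.eye(n), axis=0)                 # roughener (1,-1)
S = np.tril(np.ones((n, n)))                   # preconditioner: causal integration
eps, niter = 0.1, 5                            # stop early on purpose

# Regularized:     min |Lm - d|^2 + eps^2 |Dm|^2
m_reg = cgls(np.vstack([L, eps * D]),
             np.concatenate([d, np.zeros(n - 1)]), niter)

# Preconditioned:  m = Sp,  min |LSp - d|^2 + eps^2 |p|^2
p_est = cgls(np.vstack([L @ S, eps * np.eye(n)]),
             np.concatenate([d, np.zeros(n)]), niter)
m_pre = S @ p_est
\end{verbatim}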
\begin{comment}
\par
The module \texttt{invint2} \vpageref{lst:invint2}
invokes the solvers to make
Figures \ref{fig:conv}
and
\ref{fig:schwab1}.
We use convolution with
\texttt{helicon} \vpageref{lst:helicon}
for the regularization,
and we use deconvolution with
\texttt{polydiv} \vpageref{lst:polydiv}
for the preconditioning.
The code looks fairly straightforward except for
the oxymoron
\texttt{known=aa\%mis}.
\moddex{invint2}{Inverse linear interpolation}
\end{comment}
%\par
%This example suggests that the philosophy of image creation by
%optimization has a dual orthonormality: First, Gauss (and common
%sense) tells us that the data residuals should be roughly equal in
%size. Likewise in Fourier space they should be roughly equal in size,
%which means they should be roughly white, i.e., orthonormal. (I use
%the word ``orthonormal'' because white means the autocorrelation is an
%impulse, which means the signal is statistically orthogonal to shifted
%versions of itself.) Second, to speed convergence of iterative
%methods, we need a whiteness, another orthonormality, in the solution.
%The map image, the physical function that we seek, might not be itself
%white, so we should solve first for another variable, the whitened map
%image, and as a final step, transform it to the ``natural colored''
%map.
%
%\par
%Often geophysicists create a preconditioning matrix $\bold B$ by
%inventing columns that ``look like'' the solutions that they seek.
%Then the space $\bold x$ has many fewer components than the space of
%$\bold m$. This approach is touted as a way of introducing geological
%and geophysical prior information into the solution. Indeed, it
%strongly imposes the form of the solution. Perhaps this approach
%deserves the diminutive term ``curve fitting'' instead of the
%grandiloquent ``geophysical inverse theory.'' Our preferred approach
%is not to invent the columns of the preconditioning matrix, but to
%estimate the prediction-error filter of the model and use its inverse.
%
\section{EMPTY BINS AND PRECONDITIONING}
There are at least three ways to fill empty bins.
Two require a roughening operator $\bold A$, while
the third requires a smoothing operator, which
(for comparison purposes) we denote $\bold A^{-1}$.
The three methods are generally equivalent,
though they differ in important details.
\par
The original way in
Chapter \ref{iin/paper:iin} is to
restore missing data
by ensuring that the restored data,
after specified filtering,
has minimum energy, say
$\bold A\bold m \approx \bold 0$.
Introduce the selection mask operator $\bold K$, a diagonal matrix with
ones on the known data and zeros elsewhere.
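\par
A minimal sketch of this first method, with a hypothetical mask and made-up
data values: hold the known bins fixed and choose the empty bins so that the
roughened model $\bold A \bold m$ has minimum energy.
With the $(1,-1)$ difference as $\bold A$, the filled bins ramp linearly
between known values and stay flat beyond the first and last known bins.
\begin{verbatim}
# Fill empty bins by minimizing the energy of the roughened model A m,
# holding the known bins fixed (illustrative mask and values).
import numpy as np

n = 12
known = np.zeros(n, bool)
known[[1, 4, 9]] = True                   # hypothetical known bins
d = np.zeros(n)
d[known] = [2.0, 5.0, 1.0]                # known data values

A = np.diff(np.eye(n), axis=0)            # roughening operator (1,-1)
J = np.eye(n)[:, ~known]                  # scatters unknowns into the model

# m = d + J x;  minimize |A (d + J x)|^2 over the unknowns x
x = np.linalg.lstsq(A @ J, -(A @ d), rcond=None)[0]
m = d + J @ x
print(m)   # linear ramps between known bins, flat past the ends
\end{verbatim}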