  0    & a_3 & a_2 & a_1 \\
  0    & 0   & a_3 & a_2 \\
  0    & 0   & 0   & a_3
\end{array} \right] \;
\left[ \begin{array}{c}
  m_1 \\  m_4 \\  m_5 \\  m_6
\end{array} \right]
\label{eqn:wristpain}
\end{equation}
This is the familiar form of an overdetermined system of equations
$\bold y \approx \bold F_u \bold m_u$,
which we could solve for $\bold m_u$
as illustrated earlier by conjugate directions,
or by a wide variety of other well-known methods.
\par
The trouble with this matrix approach is that it is awkward to program
the partitioning of the operator into the known and missing parts,
particularly if the application of the operator uses arcane techniques,
such as those used by the fast-Fourier-transform operator
or various numerical approximations to differential or partial
differential operators that depend on regular data sampling.
Even for the modest convolution operator,
we already have a library of convolution programs
that handle a variety of end effects,
and it would be much nicer to use the library as it is
rather than recode it for all possible geometrical arrangements
of missing data values.
\par
Note: here I take the main goal to be the clarity of the code,
not the efficiency or accuracy of the solution.
So, if your problem consumes too many resources,
and if you have many more known points than missing ones,
maybe you should fit
$\bold y \approx \bold F_u \bold m_u$
and ignore the suggestions below.
\subsubsection{Operator approach to missing data}
\sx{missing data}
\par
For the operator approach to the fitting goal
$-\bold F_k \bold m_k \approx \bold F_u \bold m_u$,
we rewrite it as
$-\bold F_k \bold m_k \approx \bold F \bold J \bold m$, where
\begin{equation}
-\bold F_k \bold m_k
\quad \approx \quad
\left[ \begin{array}{cccccc}
  a_1 & 0   & 0   & 0   & 0   & 0   \\
  a_2 & a_1 & 0   & 0   & 0   & 0   \\
  a_3 & a_2 & a_1 & 0   & 0   & 0   \\
  0   & a_3 & a_2 & a_1 & 0   & 0   \\
  0   & 0   & a_3 & a_2 & a_1 & 0   \\
  0   & 0   & 0   & a_3 & a_2 & a_1 \\
  0   & 0   & 0   & 0   & a_3 & a_2 \\
  0   & 0   & 0   & 0   & 0   & a_3
\end{array} \right] \;
\left[ \begin{array}{cccccc}
  1   & .   & .   & .   & .   & .   \\
  .   & 0   & .   & .   & .   & .   \\
  .   & .   & 0   & .   & .   & .   \\
  .   & .   & .   & 1   & .   & .   \\
  .   & .   & .   & .   & 1   & .   \\
  .   & .   & .   & .   & .   & 1
\end{array} \right] \;
\left[ \begin{array}{c}
  m_1 \\  m_2 \\  m_3 \\  m_4 \\  m_5 \\  m_6
\end{array} \right]
\quad = \quad \bold F \bold J \bold m
\label{eqn:migraine}
\end{equation}
Notice the introduction of the new diagonal matrix $\bold J$,
called a \bx{mask}ing matrix or a \bx{constraint-mask} matrix
because it multiplies constrained variables by zero,
leaving freely adjustable variables untouched.
Experience shows that a better name than ``mask matrix'' is
``\bx{selector} matrix'':
naming the matrix for what it selects (what passes through it)
is less confusing than naming it for what it rejects.
With a selector matrix the whole data space seems freely adjustable,
both the missing data values and the known values.
We see that the CD method does not change the known (constrained) values.
In general, we derive the fitting goal (\ref{eqn:migraine}) by
\begin{eqnarray}
\label{eqn:misfitgoal}
 \bold 0 &\approx& \bold F \bold m \\
 \bold 0 &\approx& \bold F ( \bold J + (\bold I - \bold J) ) \bold m \\
 \bold 0 &\approx& \bold F \bold J \bold m + \bold F (\bold I - \bold J) \bold m \\
\label{eqn:misfitgoal2}
 \bold 0 &\approx& \bold F \bold J \bold m + \bold F \bold m_{\rm known} \\
 \bold 0 \quad\approx\quad \bold r &=& \bold F \bold J \bold m + \bold r_0
\end{eqnarray}
As usual, we find a direction to go, $\Delta \bold m$,
from the gradient of the residual energy:
\begin{equation}
\Delta\bold m
\eq {\partial \over\partial \bold m'}\ \bold r'\bold r
\eq \left( {\partial \over\partial \bold m'}\ \bold r' \right) \bold r
\eq \left( {\partial \over\partial \bold m'}\
           (\bold m'\bold J'\bold F' + \bold r'_0) \right) \bold r
\eq \bold J' \bold F' \bold r
\label{eqn:fooo}
\end{equation}
\par
We begin the calculation with
the known data values, missing data values being replaced by zeros, namely
$(\bold I-\bold J)\bold m$.
Filter this data,
getting $\bold F(\bold I-\bold J)\bold m$,
and load it into the residual $\bold r_0$.
With this initialization completed,
we begin an iteration loop.
First we compute $\Delta \bold m$ from equation (\ref{eqn:fooo}):
\begin{equation}
\Delta\bold m \quad\longleftarrow\quad \bold J' \bold F' \bold r
\end{equation}
$\bold F'$ applies a {\it crosscorrelation} of the filter to the residual,
and then $\bold J'$ sets to zero any changes proposed to known data values.
Next, compute the change in residual $\Delta\bold r$
from the proposed change in the data $\Delta\bold m$:
\begin{equation}
\Delta\bold r \quad \longleftarrow \quad \bold F \bold J \Delta \bold m
\label{eqn:lastmistheory}
\end{equation}
This applies the filtering again.
Then use the method of steepest descent (or conjugate direction)
to choose the appropriate scaling (or inclusion of the previous step)
of $\Delta \bold m$ and $\Delta \bold r$,
update $\bold m$ and $\bold r$ accordingly,
and iterate.
\begin{comment}
\par
I could have passed a new operator $\bold F \bold J$
into the old solver,
but found it worthwhile to write a new,
more powerful solver having built-in constraints.
To introduce the masking operator $\bold J$ into the \texttt{solver\_smp}
subroutine \vpageref{lst:solver_tiny},
I introduce an optional operator \texttt{Jop},
which is initialized with a logical array of the model size.
%The values of \texttt{known} are \texttt{.true.}~for the known data locations,
%and \texttt{.false.}~for the unknown data.
Two lines in the \texttt{solver\_tiny} module \vpageref{lst:solver_tiny}
\par\indent
\footnotesize
\begin{verbatim}
stat = Fop( T, F, g, rd)                          #  g = F' Rd
stat = Fop( F, F, g, gd)                          #  G = F  g
\end{verbatim}
\normalsize
\par\noindent
become three lines in the standard library module \texttt{solver\_smp}.
(We use a temporary array \texttt{tm} of the size of model space.)
$\Delta \bold m$ is {\tt g} and
$\Delta \bold r$ is {\tt gg}.
\par\noindent
\footnotesize
\begin{verbatim}
stat = Fop( T, F, g, rd)                                  # g = F' Rd
if ( present( Jop)) { tm=g;  stat= Jop( F, F, tm, g) }    # g = J g
stat = Fop( F, F, g, gg)                                  # G = F g
\end{verbatim}
\normalsize
\par
The full code includes all the definitions we had earlier
in the \texttt{solver\_tiny} module \vpageref{lst:solver_tiny}.
Merging it with the above bits of code we have
the simple solver \texttt{solver\_smp}.
%In Fortran77, I simply multiplied by $\bold J$.
%In Fortran90, subroutines can have optional arguments.
%The expression
%\texttt{if( present( known))} says, ``if the argument
%\texttt{known}, a logical array, is an argument of the call.''
%The expression
%\texttt{where( known) dm = 0} means that each component
%of the logical array \texttt{known} is examined;
%if the value of that component is \texttt{.true.}~then
%the corresponding component of $\Delta \bold m$ is set to zero.
%In other words, we are not going to change the given, known data values.
\moddex{solver_smp}{simple solver}
\par
There are two methods of invoking the solver.
Comment cards in the code indicate the slightly more verbose
method of solution, which matches the theory presented in the book.
\end{comment}
\par
The subroutine to find missing data is \texttt{mis1()}.
It assumes that zero values in the input data
correspond to missing data locations.
It uses our convolution operator
\texttt{tcai1()} \vpageref{lst:tcai1}.
You can also check the Index for other
\bx{operators} and \bx{modules}.
%\progdex{mis1}{1-D missing data}
\moddex{mis1}{1-D missing data}{54}{76}{user/gee}
\noindent
%Here \texttt{known} is declared in \texttt{solver\_mod} to be a logical vector.
%The call argument \texttt{ known=(xx/=0.)} sets the mask vector components
%to \texttt{.true.} where components of $\bold x$ are nonzero and sets it
%to \texttt{.false.} where the components are zero.
\par
I sought reference material on conjugate gradients with constraints
and did not find any,
leaving me to fear that this chapter was in error
and that I had lost the magic property of convergence
in a finite number of iterations.
I tested the code, and it did converge in a finite number of iterations.
The explanation is that these constraints are almost trivial.
We pretended we had extra variables,
and computed a $\Delta \bold m = \bold g$ for each of them.
Then we set those components of $\Delta \bold m = \bold g$ to zero,
hence making no changes to anything,
as if we had never calculated the extra $\Delta \bold m$'s.
\begin{exer}
\item
        Figures~\ref{fig:mlines}--\ref{fig:moscil}
        seem to extrapolate to vanishing signals at the side boundaries.
        Why is that so, and what could be done to leave the sides
        unconstrained in that way?
\item
        Show that the interpolation curve in Figure~\ref{fig:mparab} is not
        parabolic as it appears, but cubic.
        ({\sc hint}:  First show that $(\nabla^2)'\nabla^2 u = \bold 0$.)
\item
        Verify by a program example that the number of iterations
        required with simple constraints is the number of free parameters.
        \todo{ idoc to reference printout}
\item
        A signal on a uniform mesh has missing values.
        How should we estimate the mean?
\end{exer}
\section{WELLS NOT MATCHING THE SEISMIC MAP}
\inputdir{chevron}
Accurate knowledge comes from a \bx{well},
but wells are expensive and far apart.
Less accurate knowledge comes from surface seismology,
but this knowledge is available densely in space
and can indicate significant \bx{trend}s between the wells.
For example,
a prospective area may contain 15 wells
but 600 or more seismic stations.
To choose future well locations,
it is helpful to match the known well data with the seismic data.
Although the seismic data is delightfully dense in space,
it often mismatches the wells
because there are systematic differences in the nature of the measurements.
These discrepancies
are sometimes attributed to velocity \bx{anisotropy};
the empirical corrections for them are often called ``\bx{fudge factor}s''.
To work with such measurements,
we do not need to track down the physical model;
we need only to merge the information somehow
so we can appropriately \bx{map} the trends between wells
and make a proposal for the next drill site.
Here we consider only a scalar value at each location.
Take $\bold w$ to be a vector of 15 components,
each component being the seismic travel time to some fixed depth in a well.
Likewise let $\bold s$ be a 600-component vector,
each component being the seismic travel time to that fixed depth
as estimated wholly from surface seismology.
An example is the Chevron oil field in Figure \ref{fig:wellseis}.
%\activesideplot{wellseis90}{width=3in,height=2in}{ER}{
%\activeplot{wellseis90}{width=6in,height=4in}{ER}{
\plot{wellseis}{width=4.5in,height=3in}{
  Binning by data push.
  Left is seismic data.
  Right is well locations.
  Values in bins are divided by numbers in bins.
  (Toldi)
}
The binning of the seismic data in Figure \ref{fig:wellseis}
is not really satisfactory when we have available
the techniques of missing data estimation
to fill the empty bins.
Using the ideas of subroutine \texttt{mis1()} \vpageref{lst:mis1},
we can extend the seismic data into the empty part of the plane.
We use the same principle:
minimize the energy in the filtered map,
subject to the map matching the data where it is known.
I chose the filter $\bold A = \nabla'\nabla = -\nabla^2$,
the Laplacian operator (actually, its negative),
to obtain the result in Figure \ref{fig:misseis}.
%\activesideplot{misseis90}{width=3in,height=2in}{ER}{
%\activeplot{misseis90}{width=6in,height=4in}{ER}{
\plot{misseis}{width=4.5in,height=3in}{
  Seismic binned (left) and extended (right)
  by minimizing energy in $\nabla^2 \bold s$.
}
%\begin{notforlecture}
\par
Figure \ref{fig:misseis} also involves a \bx{boundary condition calculation}.
Many differential equations have a solution that becomes infinite
at infinite distance, and in practice this means that the largest
solutions may often be found on the boundaries of the plot,
exactly where there is the least information.
To obtain a more pleasing result,
I placed artificial ``average'' data
along the outer boundary.
Each boundary point was given the value of a weighted average
of the interior data values,
each weight being an inverse power of the separation distance
of the boundary point from the interior point.
\par
Parenthetically, we notice that all the unknown interior points
could be guessed by the same method we used on the outer boundary.
After some experience guessing what inverse power would
be best for the weighting functions, I do not recommend this method.
Like gravity, the forces of interpolation from the weighted sums
are not blocked by intervening objects.
But the temperature in a house is not a function of the temperature
in its neighbor's house.
To further isolate the more remote points,
I chose weights to be the inverse fourth power of distance.
\par
The first job is to fill the gaps in the seismic data.
We just finished doing a job like this in one dimension.
I'll give you more computational details later.
Let us call the extended seismic data $\bold s$.
\par
Think of a map of a model space $\bold m$
of infinitely many hypothetical wells,
which must match the real wells where we have them.
We must find a map that matches the wells exactly
and somehow matches the seismic information elsewhere.
Let us define the vector $\bold w$ as shown in Figure \ref{fig:wellseis},
so $\bold w$ holds the
observed values at the wells and zeros elsewhere.
\par
Where the seismic data contains sharp bumps or streaks,
we want our final earth model to have those features.
The wells cannot provide the rough features, because the wells
are too far apart to provide high spatial frequencies.
The well information generally conflicts with the seismic data
at low spatial frequencies because of systematic discrepancies between
the two types of measurements.
Thus we must accept that $\bold m$ and $\bold s$ may differ
at low spatial frequencies (where gradient and Laplacian are small).
\par
Our final map $\bold m$ would be very unconvincing if

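The masked steepest-descent loop described earlier in this section
(load the filtered known data into the residual, then iterate
$\Delta\bold m = \bold J'\bold F'\bold r$ followed by
$\Delta\bold r = \bold F\bold J\,\Delta\bold m$)
can be sketched in a few lines.
The Python below is a minimal illustration only,
not the book's Fortran \texttt{mis1()}/\texttt{tcai1()} modules:
the dense transient-convolution matrix and all the names here
are assumptions made for the sketch.

```python
import numpy as np

def tcai(a, n):
    # Transient convolution y = a * m as a dense (n+len(a)-1) x n matrix.
    # (The book applies the filter through an operator subroutine instead;
    # a dense matrix keeps this sketch short.)
    na = len(a)
    F = np.zeros((n + na - 1, n))
    for i in range(na):
        for j in range(n):
            F[i + j, j] = a[i]
    return F

def mis1(m, known, F, niter=100):
    # Fill missing model values (where known[i] is False) by minimizing
    # |F J m + F m_known|^2 with steepest descent; known values never change.
    m = m.copy()
    r = F @ m                      # r0: missing values enter as zeros
    for _ in range(niter):
        g = F.T @ r                # Delta m = J' F' r ...
        g[known] = 0.0             # ... with J' zeroing changes to known values
        dr = F @ g                 # Delta r = F J Delta m
        denom = dr @ dr
        if denom == 0.0:
            break
        alpha = -(r @ dr) / denom  # exact line search for the quadratic
        m += alpha * g
        r += alpha * dr
    return m
```

For example, with the second-difference filter $(1,-2,1)$ and the data
$(1,0,0,4)$, zeros marking missing values, the two interior points converge
to $3$ and $4.5$ rather than the straight line through the end values,
because the transient (end-effect) terms of the convolution
also enter the minimized energy.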