  Mirror had 1-D PEF applied, 30-point filter.}
\par
Given that all visual (or audio) displays have a bounded range
of amplitudes, increasing the frequency content (bandwidth)
means that we will need to turn down the amplification,
so we do not wish to increase the bandwidth unless we are adding signal.
\par
\boxit{Increasing the spectral bandwidth always requires us to diminish the gain.}
\par
The same ideas, but with a two-dimensional PEF, are in
Figure~\ref{fig:decon1} (the same data, but with more of it squeezed onto the page).
As usual, the raw data is dominated by events arriving later at greater distances.
After the PEF, we tend to see equal energy in dips in all directions.
We have strongly enhanced the ``backscattered'' energy,
those events that arrive later at {\it shorter} distances.
\plot{decon1}{width=6in,height=8.4in}{
  A 2-D filter (here $20\times 5$) brings out the backscattered energy.}
\par
Figure~\ref{fig:zof} shows echoes from all the shots,
using the nearest receiver on each shot.
This picture of the earth is called a ``near-trace section.''
This earth picture shows us why there is so much backscattered energy in
Figure~\ref{fig:decon1} (which is located at the left side of
Figure~\ref{fig:zof}).
The backscatter comes from any of the many near-vertical faults.
\par
We have been thinking of the PEF as a tool for shaping the spectrum of a display.
But does it have a physical meaning?  What might it be?
Referring back to the beginning of the chapter, we are inclined to
regard the PEF as the convolution of the source waveform with
some kind of water-bottom response.
In Figure~\ref{fig:decon1} we used many different shot-receiver
separations.  Since each different separation has a different
response (due to differing moveouts), the water-bottom reverberation
might average out to be roughly an impulse.
Figure~\ref{fig:zof} is a different story.
Here, for each shot location, the distance to the receiver is constant.
Designing a single-channel PEF, we can expect the PEF to contain
both the shot waveform and the water-bottom layers, because
both are nearly identical in all the shots.
We would rather have a PEF that represents only the shot waveform
(and perhaps a radiation pattern).
\plot{zof}{width=6in,height=8.4in}{
  Raw data, near-trace section (top).
  Filtered with a two-channel PEF (bottom).
  The movie has other shaped filters.}
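\par
For experimenting outside the library modules, a minimal single-channel sketch
in Python/NumPy follows (an illustration only, not the library's code; the
trace, the filter length, and the function names are invented here): estimate
the coefficients of a PEF $(1, a_1, \ldots, a_n)$ from one trace by least
squares, then convolve the filter back over the trace to get the prediction
error.
\begin{verbatim}
import numpy as np

def estimate_pef(trace, nf):
    """Fit (1, a_1, ..., a_nf) by least squares so that the prediction
    error e_t = x_t + a_1 x_{t-1} + ... + a_nf x_{t-nf} has minimum power."""
    n = len(trace)
    # Column k holds the trace delayed by (k+1) samples, aligned with trace[nf:]
    X = np.column_stack([trace[nf - 1 - k : n - 1 - k] for k in range(nf)])
    a, *_ = np.linalg.lstsq(X, -trace[nf:], rcond=None)
    return np.concatenate(([1.0], a))

def apply_pef(trace, pef):
    # Keep only the outputs where the filter fully overlaps the data
    return np.convolve(trace, pef, mode="valid")

# A made-up trace: a slowly decaying oscillation plus a little noise
rng = np.random.default_rng(0)
t = np.arange(500)
trace = np.exp(-0.01 * t) * np.cos(0.3 * t) + 0.1 * rng.standard_normal(t.size)

pef = estimate_pef(trace, nf=30)    # a 30-point filter, as in the figure caption
resid = apply_pef(trace, pef)
print("variance before %.4f, after %.4f" % (trace.var(), resid.var()))
\end{verbatim}
The residual trace has a flatter spectrum than the input, which is exactly why,
as noted above, it must be displayed at reduced gain.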
\par
Let us consider how we might work to push the water-bottom reverberation
out of the PEF.
This data is recorded in water 600 meters deep.
A consequence is that the sea bottom is made of fine-grained sediments
that settled very slowly and rather similarly from place to place.
In shallow water the situation is different.
The sands near estuaries are always shifting.
Sedimentary layers thicken and thin.
They are said to ``on-lap and off-lap.''
Here I do notice that where the water bottom is sloped,
the layers do thin a little.
To push the water-bottom layers out of the PEF,
our idea is to base its calculation not on the raw data,
but on the spatial prediction error of the raw data.
On a perfectly layered earth,
a perfect spatial prediction-error filter would zero all traces
but the first one.
Since a 2-D PEF includes spatial prediction as well as temporal
prediction, we can expect it to contain much less of the sea-floor
layers than the 1-D PEF.
If you have access to the electronic book, you can blink
the figure back and forth with various filter shapes.

\section{PEF ESTIMATION WITH MISSING DATA}
\par
If we are not careful, our calculation of the PEF
%, \texttt{gdecon()} \vpageref{lst:gdecon} and \texttt{rnpef1()},
could have the pitfall that it would try to use the missing
data to find the PEF, and hence it would get the wrong PEF.
To avoid this pitfall,
imagine a PEF finder that uses weighted least squares,
where the weighting function vanishes on
those fitting equations that involve missing data.
The weighting would be unity elsewhere.
Instead of weighting bad results by zero,
we simply will not compute them.
The residual there will be initialized to zero and never changed.
Likewise for the adjoint:
these components of the residual will never contribute to a gradient.
So now we need a convolution program that
produces no outputs where missing inputs would spoil it.
\par
Recall that there are two ways of writing convolution:
equation (\ref{ajt/eqn:contran1})
when we are interested in finding the filter {\em inputs}, and
equation (\ref{ajt/eqn:contran2})
when we are interested in finding the {\em filter itself}.
We have already coded
equation (\ref{ajt/eqn:contran1}) as
operator \texttt{helicon} \vpageref{lst:helicon}.
That operator was useful in missing-data problems.
Now we want to find a prediction-error filter,
so we need the other case,
equation (\ref{ajt/eqn:contran2}),
and we need to ignore the outputs
that will be broken because of missing inputs.
The operator module \texttt{hconest} does the job.
\opdex{hconest}{helix convolution, adjoint is the filter}{47}{56}{user/gee}
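\par
To make the distinction between the two forms concrete, here is a small
Python/NumPy sketch (an illustration only, not the \texttt{helicon} or
\texttt{hconest} code): the data are packed into a matrix $\bold X$ so that
$\bold X \bold a$ is the convolution output, and the adjoint maps a residual
back onto the filter coefficients.  The numbers are made up.
\begin{verbatim}
import numpy as np

def data_matrix(x, nf):
    """Pack the data x into a matrix X so that X @ a is the full
    convolution of x with a filter a of nf coefficients."""
    n = len(x)
    X = np.zeros((n + nf - 1, nf))
    for k in range(nf):              # column k is x shifted down by k rows
        X[k:k + n, k] = x
    return X

x = np.array([1., 2., 3., 4., 5., 6.])    # x_1 ... x_6
a = np.array([1., 0.5, 0.25])             # (1, a_1, a_2)

X = data_matrix(x, len(a))
r1 = X @ a                     # filter as the unknown (the hconest-style form)
r2 = np.convolve(x, a)         # data as the unknown (the helicon-style form)
print(np.allclose(r1, r2))     # True: the two forms give the same output

grad_a = X.T @ r1              # the adjoint puts the residual onto the filter
\end{verbatim}
In the missing-data case, the rows of $\bold X$ that touch missing inputs are
the ones we will refuse to compute.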
\par
We are seeking a prediction-error filter $(1, a_1, a_2)$, but some of the data is missing.
The data is denoted $\bold y$ or $y_i$ above and $x_i$ below.
Because some of the $x_i$ are missing,
some of the regression equations in (\ref{eqn:exmiss}) are worthless.
When we figure out which are broken, we will put zero weights on those equations.
\begin{equation}
\bold 0 \ \approx\ \bold r = \bold W \bold X \bold a =
\left[
        \begin{array}{cccccccc}
          w_1& . & . & . & . & . & . & .  \\
           . &w_2& . & . & . & . & . & .  \\
           . & . &w_3& . & . & . & . & .  \\
           . & . & . &w_4& . & . & . & .  \\
           . & . & . & . &w_5& . & . & .  \\
           . & . & . & . & . &w_6& . & .  \\
           . & . & . & . & . & . &w_7& .  \\
           . & . & . & . & . & . & . &w_8
          \end{array}
\right]
        \left[
        \begin{array}{ccc}
          x_1 & 0   & 0    \\
          x_2 & x_1 & 0    \\
          x_3 & x_2 & x_1  \\
          x_4 & x_3 & x_2  \\
          x_5 & x_4 & x_3  \\
          x_6 & x_5 & x_4  \\
          0   & x_6 & x_5  \\
          0   & 0   & x_6
          \end{array} \right]
        \left[
        \begin{array}{c}
          1   \\
          a_1 \\
          a_2 \end{array} \right]
\label{eqn:exmiss}
\end{equation}
\par
Suppose that $x_2$ and $x_3$ were missing or known bad.
That would spoil the 2nd, 3rd, 4th, and 5th fitting equations
in (\ref{eqn:exmiss}).
In principle, we want $w_2$, $w_3$, $w_4$, and $w_5$ to be zero.
In practice, we simply want those components of $\bold r$ to be zero.
\par
What algorithm will enable us to identify the regression equations
that have become defective, now that $x_2$ and $x_3$ are missing?
Take the filter coefficients $(a_0, a_1, a_2, \ldots)$ to be all ones.
Let $\bold d_{\rm free}$ be a vector like $\bold x$ but containing 1's for
the missing (or ``freely adjustable'') data values and 0's for
the known data values.
Recall that our very first definition of filtering showed we can put
the filter in a vector and the data in a matrix, or vice versa.
Thus $\bold X \bold a$ above gives the same result as $\bold A \bold x$ below.
\begin{equation}
\left[
        \begin{array}{c}
          m_1 \\
          m_2 \\
          m_3 \\
          m_4 \\
          m_5 \\
          m_6 \\
          m_7 \\
          m_8
          \end{array}
\right]
\eq
\left[
        \begin{array}{c}
          0 \\
          1 \\
          2 \\
          2 \\
          1 \\
          0 \\
          0 \\
          0
          \end{array}
\right]
\eq
        \left[
        \begin{array}{cccccc}
          1   & 0   & 0 & 0 & 0 & 0 \\
          1   & 1   & 0 & 0 & 0 & 0 \\
          1   & 1   & 1 & 0 & 0 & 0 \\
          0   & 1   & 1 & 1 & 0 & 0 \\
          0   & 0   & 1 & 1 & 1 & 0 \\
          0   & 0   & 0 & 1 & 1 & 1 \\
          0   & 0   & 0 & 0 & 1 & 1 \\
          0   & 0   & 0 & 0 & 0 & 1
          \end{array} \right]
        \;
        \left[
        \begin{array}{c}
          0 \\
          1 \\
          1 \\
          0 \\
          0 \\
          0 \end{array} \right]
\eq
\bold A \bold d_{\rm free}
\label{eqn:twomissing}
\end{equation}
\par
The numeric value of each $m_i$ tells us how many of its inputs are missing.
Where none are missing, we want unit weights $w_i = 1$.
Where any are missing, we want zero weights $w_i = 0$.
The desired residual under partially missing inputs is computed
by module \texttt{misinput} \vpageref{lst:misinput}.
\moddex{misinput}{mark bad regression equations}{26}{59}{user/gee}
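\par
The counting trick of equation (\ref{eqn:twomissing}) takes only a few lines
to state.  The sketch below (Python/NumPy, an illustration only, not the
\texttt{misinput} module) convolves an all-ones filter with the indicator of
missing data; a fitting equation keeps unit weight only where its count of
missing inputs is zero.
\begin{verbatim}
import numpy as np

nf    = 3                               # filter length, e.g. (1, a_1, a_2)
known = np.array([1, 0, 0, 1, 1, 1])    # 1 = known sample; x_2, x_3 missing

d_free = 1 - known                      # indicator of missing ("free") data
m = np.convolve(d_free, np.ones(nf))    # m = A d_free, as in the equation
w = (m == 0).astype(float)              # unit weight only where nothing is missing

print(m)    # [0. 1. 2. 2. 1. 0. 0. 0.]
print(w)    # [1. 0. 0. 0. 0. 1. 1. 1.]
\end{verbatim}
With these weights, the residual of equation (\ref{eqn:exmiss}) is simply the
elementwise product of the $w_i$ with $\bold X \bold a$, and the zero-weighted
components never contribute to the gradient.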
