% copyright (c) 1997 Jon Claerbout
\long\def\HIDE#1{#1}
\title{Multidimensional autoregression}
\author{Jon Claerbout}
\maketitle
\label{paper:mda}
The many applications of least squares to the one-dimensional
convolution operator constitute the subject known as
``\bx{time-series analysis}.''
The {\it \bx{autoregression}} filter,
also known as the \bx{prediction-error filter} (\bx{PEF}),
gathers statistics for us: not the autocorrelation or the spectrum directly,
but indirectly, as the inverse of the amplitude spectrum of its input.
The PEF plays the role of the so-called ``inverse-covariance matrix''
in statistical estimation theory.
Given the PEF, we use it to find missing portions of signals.
%Here we examine applications of time-series analysis to reflection seismograms.
%These applications further illuminate the theory of least squares
%in the area of \bx{weighting function}s and stabilization.
%\HIDE{
\subsection{Time domain versus frequency domain}
\par
In the simplest applications, solutions can be most easily found
in the frequency domain.
When complications arise, it is better to use the time domain,
to apply the convolution operator and the method of least squares directly.
\par
A first complicating factor in the frequency domain is a required boundary
in the time domain, such as that between past and future,
or a requirement that a filter be nonzero only in a stated time interval.
Another factor that attracts us to the time domain
rather than the frequency domain is \bx{weighting function}s.
\par
Weighting functions are appropriate whenever a signal or image amplitude
varies from place to place.
Much of the literature on \bx{time-series analysis}
applies to the limited case of uniform weighting functions.
Such time series are said to be ``stationary,''
meaning that their statistical properties do not change in time.
In real life, particularly in the analysis of echoes,
signals are never stationary in time and space.
A \bx{stationarity} assumption is a reasonable starting
assumption, but we should know how to go beyond it
so we can take advantage of the many opportunities that do arise.
In order of increasing difficulty in the frequency domain are
the following complications:
\begin{enumerate}
\item A time boundary, such as that between past and future.
\item More time boundaries, such as those delimiting a filter.
\item More time boundaries, such as erratic locations of missing data.
\item Nonstationary signal, i.e., time-variable weighting.
\item Time-axis stretching, such as normal moveout.
\end{enumerate}
\par
We will not have difficulty with any of these complications here,
because we will stay in the time domain
and set up and solve optimization problems by use of
the conjugate-direction method.
Thus we will be able to cope with great complexity in goal formulation
and get the right answer without approximations.
By contrast, analytic or partly analytic methods
can be more economical, but they generally solve
somewhat different problems than those given to us by nature.
\section{SOURCE WAVEFORM, MULTIPLE REFLECTIONS}
\sx{source waveform}
\sx{multiple reflection}
Here we devise a simple mathematical model
for deep \bx{water bottom} multiple reflections.\footnote{
For this short course I am omitting many interesting examples
of multiple reflections shown in my 1992 book, PVI.
}
There are two unknown waveforms,
the source waveform $S(\omega )$
and the ocean-floor reflection $F(\omega )$.
The water-bottom primary reflection $P(\omega )$
is the convolution of the source waveform
with the water-bottom response, so $P(\omega )=S(\omega )F(\omega )$.
The first multiple reflection $M(\omega )$ sees the same source waveform,
the ocean floor, a minus one for the free surface, and the ocean floor again.
Thus the observations $P(\omega )$ and $M(\omega )$
as functions of the physical parameters are
\begin{eqnarray}
P(\omega )&=&S(\omega )\,F(\omega ) \label{eqn:PP} \\
M(\omega )&=&-S(\omega )\,F(\omega )^2 \label{eqn:MM}
\end{eqnarray}
Algebraically, the solutions of equations (\ref{eqn:PP}) and (\ref{eqn:MM}) are
\begin{eqnarray}
F(\omega )&=& - M(\omega )/P(\omega ) \label{eqn:FF} \\
S(\omega )&=& - P(\omega )^2/M(\omega ) \label{eqn:SS}
\end{eqnarray}
\par
These solutions can be computed in the Fourier domain by simple division.
The difficulty is that the divisors in
equations~(\ref{eqn:FF}) and~(\ref{eqn:SS})
can be zero, or small.
This difficulty can be attacked by use of a positive number $\epsilon$
to \bx{stabilize} the division.
For example, multiply equation~(\ref{eqn:FF}) on top and bottom
by $P'(\omega )$ and add $\epsilon >0$ to the denominator.
This gives
\begin{equation}
F(\omega )\eq- \ {M(\omega ) P'(\omega ) \over P(\omega )P'(\omega ) + \epsilon}
\label{eqn:epsilon}
\end{equation}
where $P'(\omega )$ is the complex conjugate of $P(\omega )$.
Although the $\epsilon$ stabilization seems nice,
it apparently produces a nonphysical model:
for $\epsilon$ large or small, the time-domain response
could turn out to be of much greater duration than is physically reasonable.
This should not happen with perfect data, but in real life,
data always have a limited spectral band of good quality.
\par
Functions that are rough in the frequency domain will be long in
the time domain.
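As a check on the algebra, the stabilized division of
equation~(\ref{eqn:epsilon}) can be sketched numerically.
The waveforms below are invented for this illustration
(a three-term source and a lagged-spike sea-floor response);
they are not from the text.
With $\epsilon$ small compared with $|P(\omega )|^2$,
the quotient recovers $F(\omega )$ almost exactly.

```python
import numpy as np

# Illustrative sketch of the stabilized division, equation (eqn:epsilon).
# The waveforms s and f are invented for this example.
n = 64
s = np.zeros(n); s[:3] = [1.0, -0.5, 0.2]   # source waveform, time domain
f = np.zeros(n); f[3] = 0.4                  # sea-floor response: a lagged spike

S, F = np.fft.fft(s), np.fft.fft(f)
P = S * F                 # primary:        P(w) =  S(w) F(w)
M = -S * F**2             # first multiple: M(w) = -S(w) F(w)^2

eps = 1e-6                # stabilizer; here |P|^2 is O(0.1), so eps is negligible
F_est = -M * np.conj(P) / (np.abs(P)**2 + eps)   # equation (eqn:epsilon)
f_est = np.fft.ifft(F_est).real                  # back to the time domain
```

With these full-band synthetic data the estimate lands on the lagged spike
almost exactly.
With band-limited field data, however, $|P(\omega )|$ is small over part of
the band, the quotient becomes rough there, and a rough spectrum corresponds
to a long time-domain response.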
This suggests making a short function in the time domain
by local smoothing in the frequency domain.
Let the notation $< \cdots >$ denote smoothing by local averaging.
Thus, to specify filters whose time duration is not unreasonably long,
we can revise equation~(\ref{eqn:epsilon}) to
\begin{equation}
F(\omega )\eq- \ {<M(\omega ) P'(\omega )> \over <P(\omega )P'(\omega ) >}
\label{eqn:smoothit}
\end{equation}
where instead of deciding a size for $\epsilon$,
we need to decide how much smoothing.
I find that smoothing has a simpler physical interpretation
than choosing $\epsilon$.
The goal of finding the filters $F(\omega )$ and $S(\omega )$ is to
best model the multiple reflections so that they can
be subtracted from the data,
and thus enable us to see what primary reflections
have been hidden by the multiples.
\par
These frequency-duration difficulties do not arise in a
time-domain formulation.
Unlike in the frequency domain,
in the time domain it is easy and natural
to limit the duration and location
of the nonzero time range of $F(\omega)$ and $S(\omega)$.
First express (\ref{eqn:FF}) as
\begin{equation}
0 \eq P(\omega )F(\omega ) + M(\omega ) \label{eqn:floor}
\end{equation}
\par
To imagine equation (\ref{eqn:floor})
as a fitting goal in the time domain,
instead of scalar functions of $\omega$,
think of vectors with components as a function of time.
Thus $\bold f$ is a column vector
containing the unknown sea-floor filter,
$\bold m$ contains the ``multiple'' portion of a seismogram,
and $\bold P$ is a matrix of down-shifted columns,
each column being the ``primary.''
\begin{equation}
\bold 0 \quad\approx\quad \bold r \eq
\left[ \begin{array}{c}
 r_1 \\ r_2 \\ r_3 \\ r_4 \\ r_5 \\ r_6 \\ r_7 \\ r_8
\end{array} \right] \eq
\left[ \begin{array}{ccc}
 p_1 & 0   & 0   \\
 p_2 & p_1 & 0   \\
 p_3 & p_2 & p_1 \\
 p_4 & p_3 & p_2 \\
 p_5 & p_4 & p_3 \\
 p_6 & p_5 & p_4 \\
 0   & p_6 & p_5 \\
 0   & 0   & p_6
\end{array} \right] \;
\left[ \begin{array}{c}
 f_1 \\ f_2 \\ f_3
\end{array} \right]
\ +\
\left[ \begin{array}{c}
 m_1 \\ m_2 \\ m_3 \\
 m_4 \\ m_5 \\ m_6 \\ m_7 \\ m_8
\end{array} \right]
\label{eqn:findmult}
\end{equation}
%\par
%To minimize $\bold r'\bold r$,
%we could use the conjugate-direction subroutine \texttt{cgmeth()} \vpageref{lst:cgmeth},
%but we would remove its call to
%the matrix multiply subroutine
%and replace it by a convolution subroutine
%with boundary conditions of our choice.
%
%}% end HIDE
\section{TIME-SERIES AUTOREGRESSION}
Given $y_t$ and $y_{t-1}$, you might like to predict $y_{t+1}$.
The prediction could be a scaled sum or difference
of $y_t$ and $y_{t-1}$.
This is called ``\bx{autoregression}''
because a signal is regressed on itself.
To find the scale factors, you would optimize the fitting goal below
for the \bx{prediction filter} $(f_1,f_2)$:
\sx{filter ! prediction}
\begin{equation}
\bold 0 \quad \approx \quad \bold r \eq
\left[ \begin{array}{cc}
 y_1 & y_0 \\
 y_2 & y_1 \\
 y_3 & y_2 \\
 y_4 & y_3 \\
 y_5 & y_4
\end{array} \right] \;
\left[ \begin{array}{c}
 f_1 \\ f_2
\end{array} \right]
\ -\
\left[ \begin{array}{c}
 y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6
\end{array} \right]
\label{eqn:simplepe}
\end{equation}
(In practice, of course, the system of equations would be
much taller, and perhaps somewhat wider.)
A typical row in the matrix of (\ref{eqn:simplepe})
says that $y_{t+1} \approx y_t f_1 + y_{t-1} f_2$,
hence the description of $f$ as a ``prediction'' filter.
The error in the prediction is simply the residual.
Define the residual to have opposite polarity
and merge the column vector into the matrix, so you get
\begin{equation}
\left[ \begin{array}{c}
 0 \\ 0 \\ 0 \\ 0 \\ 0
\end{array} \right] \quad \approx \quad
\bold r \eq
\left[ \begin{array}{ccc}
 y_2 & y_1 & y_0 \\
 y_3 & y_2 & y_1 \\
 y_4 & y_3 & y_2 \\
 y_5 & y_4 & y_3 \\
 y_6 & y_5 & y_4
\end{array} \right] \;
\left[ \begin{array}{c}
 1 \\ -f_1 \\ -f_2
\end{array} \right]
\label{eqn:simplepef}
\end{equation}
which is a standard form for autoregressions and prediction error.
\par
\bxbx{Multiple reflections}{multiple reflection} are predictable.
It is the unpredictable
part of a signal, the prediction residual,
that contains the primary information.
The output of the filter
$(1,-f_1, -f_2) = (a_0, a_1, a_2)$
is the unpredictable part of the input.
This filter is a simple example of
a ``prediction-error'' (PE) filter.
\sx{prediction-error filter}
\sx{filter ! prediction-error}
It is one member of a family of filters called ``error filters.''
\par
The error-filter family are filters with one coefficient constrained
to be unity and various other coefficients constrained to be zero.
Otherwise, the filter coefficients are chosen to have minimum power output.
Names for various error filters follow:

\begin{tabular}{ll}
 $(1, a_1,a_2,a_3, \cdots ,a_n)$ &
   \bxbx{prediction-error (PE) filter}{prediction-error filter} \\
 $(1, 0, 0, a_3,a_4,\cdots ,a_n)$ &
   gapped PE filter with a gap \\
 $(a_{-m}, \cdots, a_{-2}, a_{-1}, 1, a_1, a_2, a_3, \cdots ,a_n)$ &
   \bxbx{interpolation-error (IE) filter}{interpolation-error filter}
\end{tabular}
\sx{filter ! prediction-error}
\sx{filter ! interpolation-error}
\par
%\begin{notforlecture}
We introduce a
\bx{free-mask matrix} $\bold K$,
which ``passes'' the freely variable coefficients in the filter
and ``rejects'' the constrained coefficients
(which in this first example is merely the first coefficient, $a_0=1$).
\begin{equation}
\bold K \eq
\left[ \begin{array}{ccc}
 0 & . & . \\
 . & 1 & . \\
 . & . & 1
\end{array} \right]
\label{eqn:pefconstraint}
\end{equation}
\par
To compute a simple prediction-error filter $\bold a =(1, a_1, a_2)$
with the CD method, we write
(\ref{eqn:simplepe}) or
(\ref{eqn:simplepef}) as
\begin{equation}
\bold 0 \quad \approx \quad \bold r \eq
\left[ \begin{array}{ccc}
 y_2 & y_1 & y_0 \\
 y_3 & y_2 & y_1 \\
 y_4 & y_3 & y_2 \\
 y_5 & y_4 & y_3 \\
 y_6 & y_5 & y_4
\end{array} \right] \;
\left[ \begin{array}{ccc}
 0 & \cdot & \cdot \\
 \cdot & 1 & \cdot \\
 \cdot & \cdot & 1
\end{array} \right] \;
\left[