To make this really concrete,
consider its meaning in one dimension,
where the signal is white, $\bold S'\bold S=1$,
and the noise has the frequency $\omega_0$,
which is killable with the multiplier $\bold N'\bold N=(\omega-\omega_0)^2$.
Now we recognize that equation (\ref{eqn:notchfilter})
is a notch filter and equation (\ref{eqn:narrowfilter})
is a narrow-band filter.
\par
The analytic solutions in equations~(\ref{eqn:notchfilter})
and~(\ref{eqn:narrowfilter})
are valid in 2-D \bx{Fourier space} or dip space too.
I prefer to compute them in the time and space domain
to give me tighter control on window boundaries,
but the Fourier solutions give insight
and offer a computational speed advantage.
\par
Let us express the fitting goal in the form needed in computation.
\begin{equation}                                        \label{eqn:signoireg}
\left[ \begin{array}{c}
        \bold 0  \\
        \bold 0
       \end{array} \right]
\quad\approx\quad
\left[ \begin{array}{c}
        \bold N  \\
        \epsilon \bold S
       \end{array} \right] \ \bold s
\ +\
\left[ \begin{array}{c}
        -\bold N \bold d  \\
         \bold 0
       \end{array} \right]
\end{equation}
\par
%Subroutine \texttt{signoi2()}
%using \texttt{pef2()} \vpageref{lst:pef2}
%first computes the prediction-error
%filters $\bold S$ and $\bold N$.
%Then it loads the residual vector $\bold r$
%with the negative of the noise-whitened data $-\bold N\bold d$.
%Then it enters the usual conjugate-direction iteration loop
%where it makes adjustments of
%$\Delta \bold r$ and
%$\Delta \bold s$.
%Finally, it defines noise by $\bold n = \bold d - \bold s$.
\opdex{signoi}{signal and noise separation}{51}{67}{user/gee}
As with the missing-data subroutines,
the potential number of iterations is large,
because the dimensionality of the space of unknowns
is much larger than the number of iterations we would find acceptable.
Thus, sometimes changing the number of iterations {\tt niter}
can create a larger change than changing {\tt epsilon}.
Experience shows that helix preconditioning saves the day.
\par
%Subroutine \texttt{signoi2()} \vpageref{lst:signoi2} is set up in a bootstrapping way.
%It uses not only data, but also signal and noise.
%The first time in,
%I used a crude estimate of the signal and noise
%that is merely a copy of the raw data.
%Notice that these copies are used only to compute
%$\bold S$ and $\bold N$,
%the signal and noise \bx{whiteners}.
%Because the program
%uses models of signal and noise
%to enhance the separation of signal and noise,
%we might consider a second invocation of the program.
%I tried multiple invocations to little avail.
%Another thing for contemplation
%is some automatic way of choosing $\epsilon$.
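To make fitting goal~(\ref{eqn:signoireg}) concrete for readers without the
\texttt{user/gee} library, here is a minimal stand-alone sketch in Python/NumPy
of the same idea: a conjugate-direction (CGLS) loop minimizing
$|\bold N(\bold s - \bold d)|^2 + \epsilon^2 |\bold S \bold s|^2$
on a one-dimensional toy.
It is not the \texttt{signoi} code;
the helper \texttt{convmtx()}, the toy data, and the filter choices
(identity signal whitener, noise whitener with a zero at $\omega_0$)
are assumptions made only for illustration.
\begin{verbatim}
import numpy as np

def convmtx(filt, n):
    # Toy helper (assumed): matrix that convolves a short filter with a
    # length-n trace, output trimmed back to length n.
    m = len(filt)
    A = np.zeros((n + m - 1, n))
    for j in range(n):
        A[j:j + m, j] = filt
    return A[:n, :]

def cgls(A, b, niter):
    # Conjugate-gradient least squares for x minimizing |A x - b|^2.
    x = np.zeros(A.shape[1])
    r = b.copy()                 # residual b - A x (x starts at zero)
    g = A.T @ r                  # gradient
    p = g.copy()
    gamma = g @ g
    for _ in range(niter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        g = A.T @ r
        gamma_new = g @ g
        p = g + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Toy data: a smooth signal plus monochromatic noise at frequency omega0.
nt, omega0 = 200, 0.9
t = np.arange(nt)
d = np.exp(-0.5 * ((t - 100.0) / 10.0) ** 2) + 0.5 * np.cos(omega0 * t)

S = np.eye(nt)                               # white signal: S'S = 1
N = convmtx(np.array([1.0, -2.0 * np.cos(omega0), 1.0]), nt)  # zero at omega0

epsilon, niter = 1.0, 50
A = np.vstack([N, epsilon * S])              # operator of the fitting goal
b = np.concatenate([N @ d, np.zeros(nt)])    # right-hand side
s_est = cgls(A, b, niter)                    # estimated signal
n_est = d - s_est                            # estimated noise
\end{verbatim}
Playing with \texttt{niter} and \texttt{epsilon} in such a toy makes it easy
to see the behavior noted above:
with few iterations, changing \texttt{niter} can alter the answer
more than changing \texttt{epsilon}.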
\subsection{Signal/noise decomposition examples}
\sx{filter ! multidip}
\sx{multidip filtering}
\inputdir{signoi}
Figure~\ref{fig:signoi} demonstrates the
signal/noise decomposition concept on synthetic data.
The signal and noise have similar frequency spectra
but different dip spectra.
\plot{signoi}{width=6in,height=2.0in}{
  The input signal is on the left.
  Next is that signal with noise added.
  Next, for my favorite value of {\tt epsilon=1.},
  are the estimated signal and the estimated noise.}
\par
%\par
%Ray Abma
%\sx{Abma, Ray}
%first noticed that different results were obtained when the
%fitting goal was cast in terms of $\bold n$ instead of $\bold s$.
%At first I could not believe his result,
%but after repeating the computations independently I had to agree that
%the result does depend on the choice of independent variable.
%I sought an explanation in terms of differing null spaces,
%but this is not yet satisfactory.
\par
Before I discovered helix preconditioning,
Ray Abma found that different results were obtained when the
fitting goal was cast in terms of $\bold n$ instead of $\bold s$.
Theoretically it should not make any difference.
Now I believe that with preconditioning, or even without it,
if there are enough iterations,
the solution should be independent
of whether the fitting goal is cast with $\bold n$ or with $\bold s$.
\par
Figure~\ref{fig:signeps} shows the result of experimenting with
the choice of $\epsilon$.
As expected, increasing $\epsilon$
weakens $\bold s$ and increases $\bold n$.
When $\epsilon$ is too small,
the noise is small and
the signal is almost the original data.
When $\epsilon$ is too large,
the signal is small and
coherent events are pushed into the noise.
(Figure~\ref{fig:signeps}
rescales both signal and noise images for the clearest display.)
\plot{signeps}{width=6in,height=2.0in}{
  Left is an estimated signal-noise pair where {\tt epsilon=4}
  has improved the appearance of the estimated signal, but
  some coherent events have been pushed into the noise.
  Right is a signal-noise pair where {\tt epsilon=.25}
  has improved the appearance of the estimated noise, but
  the estimated signal looks no better than the original data.}
\par
Notice that the leveling operators
$\bold S$ and $\bold N$ were both estimated
from the original signal and noise mixture
$\bold d = \bold s +\bold n$
shown in Figure~\ref{fig:signoi}.
Presumably we could do even better if we were to reestimate
$\bold S$ and $\bold N$ from the estimates
$\bold s$ and $\bold n$ in Figure~\ref{fig:signeps}.
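The leveling operators $\bold S$ and $\bold N$ are prediction-error filters
found by least squares;
the operators used for these figures are two-dimensional dip filters,
so the following one-dimensional sketch in Python/NumPy
is only a reminder of what the estimation amounts to.
The function names and the use of \texttt{np.linalg.lstsq}
are illustrative assumptions, not the library code.
\begin{verbatim}
import numpy as np

def estimate_pef(x, na):
    # Least-squares prediction-error filter (1, a_1, ..., a_na): pick the
    # a_k minimizing the sum over t of
    # |x_t + a_1 x_{t-1} + ... + a_na x_{t-na}|^2.
    n = len(x)
    X = np.column_stack([x[na - k - 1:n - k - 1] for k in range(na)])
    a, *_ = np.linalg.lstsq(X, -x[na:], rcond=None)
    return np.concatenate([[1.0], a])

def apply_pef(pef, x):
    # Convolve the PEF with a trace, keeping the original length.
    return np.convolve(x, pef)[:len(x)]

# Example: a PEF estimated on a sinusoid learns to destroy (whiten) it.
t = np.arange(500)
trace = np.cos(0.7 * t) + 0.05 * np.random.default_rng(0).standard_normal(500)
pef = estimate_pef(trace, 3)
residual = apply_pef(pef, trace)   # small, except for the start-up samples
\end{verbatim}
Reestimating $\bold S$ and $\bold N$ from the current estimates
$\bold s$ and $\bold n$, as suggested above,
amounts to calling such an estimator again on the new traces.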
\subsection{Spitz for variable covariances}
\par
Since signal and noise are uncorrelated,
the spectrum of the data is the spectrum of the signal plus that of the noise.
An equation for this idea is
\begin{equation}
\label{eqn:sigma}
\sigma_d^2 \eq \sigma_s^2 \ + \ \sigma_n^2
\end{equation}
This says that resonances in the signal
and resonances in the noise
will both be found in the data.
When we are given $\sigma_d^2$ and $\sigma_n^2$, it seems a simple
matter to subtract to get $\sigma_s^2$.
Actually, it can be very tricky.
We are never given $\sigma_d^2$ and $\sigma_n^2$;
we must estimate them.
Further, they can be functions of frequency, wavenumber, or dip,
and these can be changing during measurements.
We could easily find ourselves with a negative estimate for
$\sigma_s^2$, which would ruin any attempt to segregate signal from noise.
An idea of Simon Spitz can help here.
\par
Let us reexpress equation (\ref{eqn:sigma}) with prediction-error filters.
\begin{equation}
{1\over \bar A_d A_d} \eq
{1\over \bar A_s A_s} \ + \ {1\over \bar A_n A_n}
\eq
{ {\bar A_s A_s} \ +\ {\bar A_n A_n}
  \over
  ( {\bar A_s A_s} ) ( {\bar A_n A_n}) }
\end{equation}
Inverting, we have
\begin{equation}
{ \bar A_d A_d} \eq
{ ( {\bar A_s A_s} ) \ ( {\bar A_n A_n})
  \over
  {\bar A_s A_s} \ +\ {\bar A_n A_n} }
\end{equation}
The essential feature of a PEF is its zeros.
Where a PEF approaches zero, its inverse is large and resonating.
When we are concerned with the zeros of a mathematical function,
we tend to focus on numerators and ignore denominators.
The zeros in ${\bar A_s A_s}$
compound with the zeros in ${\bar A_n A_n}$
to make the zeros in ${\bar A_d A_d}$.
This motivates the ``Spitz approximation.''
\begin{equation}
{ \bar A_d A_d} \eq
( {\bar A_s A_s} )\ ( {\bar A_n A_n})
\end{equation}
\par
It usually happens that we can
find a patch of data where no signal is present.
That's a good place to estimate the noise PEF $A_n$.
It is usually much harder to find a patch of data where no noise is present.
This motivates the Spitz approximation, which, by saying
$A_d = A_s A_n$,
tells us that the hard-to-estimate $A_s$ is the ratio
$A_s = A_d / A_n$
of two easy-to-estimate PEFs.
\par
It would be computationally convenient to have
$A_s$ expressed not as a ratio.
For this, form the signal
$\bold u = \bold A_n \bold d$
by applying the noise PEF $A_n$ to the data $\bold d$.
The spectral relation is
\begin{equation}
\sigma_u^2 \eq \sigma_d^2 / \sigma_n^2
\end{equation}
Inverting this expression
and using the Spitz approximation,
we see that
a PEF estimated on $\bold u$ is the required $A_s$ in numerator form, because
\begin{equation}
A_u \eq A_d / A_n \eq A_s
\end{equation}
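In code, the Spitz recipe is only a few lines.
The sketch below (Python/NumPy, reusing the same toy least-squares PEF
estimator as in the earlier sketch) is an illustration under stated
assumptions, not production code:
the traces, the patch taken to be signal-free, and the filter length
\texttt{na} are placeholders.
It estimates $A_n$ on the noise-only patch,
forms $\bold u = \bold A_n \bold d$,
and then estimates the PEF of $\bold u$,
which by $A_u = A_d / A_n = A_s$ serves as the signal PEF.
\begin{verbatim}
import numpy as np

def estimate_pef(x, na):
    # Same toy least-squares PEF estimator as in the earlier sketch.
    X = np.column_stack([x[na - k - 1:len(x) - k - 1] for k in range(na)])
    a, *_ = np.linalg.lstsq(X, -x[na:], rcond=None)
    return np.concatenate([[1.0], a])

# Placeholder traces: "signal" at one frequency, "noise" at another.
rng = np.random.default_rng(0)
t = np.arange(500)
signal = np.sin(0.3 * t)
noise = 0.5 * np.cos(2.0 * t) + 0.05 * rng.standard_normal(500)
d = signal + noise                     # the observed data

noise_patch = noise[:200]              # stand-in for a signal-free window
na = 10                                # assumed filter length

A_n = estimate_pef(noise_patch, na)    # noise PEF from the noise-only patch
u = np.convolve(d, A_n)[:len(d)]       # u = A_n d: data with noise whitened
A_s = estimate_pef(u, na)              # Spitz: the PEF of u is A_s
\end{verbatim}
On field data the noise-only patch would be a window of the section with
no visible reflections;
the point of the approximation is that a noise-free patch is never needed.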
\subsection{Noise removal on Shearer's data}
\inputdir{ida}
Professor Peter \bx{Shearer}\footnote{
        I received the data for this stack from Peter Shearer
        at the \bx{Cecil and Ida Green} Institute of Geophysics
        \sx{Green, Cecil and Ida}
        and Planetary Physics of the Scripps Institution of Oceanography.
        I also received his permission to redistribute it
        to friends and colleagues.
        Should you have occasion to copy it, please reference it properly
        \cite{Shearer.jgr.91.18147,Shearer.jgr.91.20535}.
        Examples of earlier versions of these stacks
        are found in the references.
        Professor Shearer may be willing to supply newer and better stacks.
        His electronic mail address is {\tt shearer@mahi.ucsd.edu}.
        }
gathered the earthquakes from the IDA network,
an array of about 25 widely distributed gravimeters
donated by Cecil Green.
He selected most of the shallow-depth earthquakes
of magnitude greater than about 6 over the 1981--91 time interval,
sorted them by epicentral distance into bins $1^\circ$ wide,
and stacked them.
He generously shared his edited data with me,
and I have been restacking it,
compensating for amplitude in various ways,
and planning time and filtering compensations.
\par
Figure~\ref{fig:sneps} shows a test of noise subtraction
by multidip narrow-pass filtering
on the \bx{Shearer-IDA stack}.
As with prediction, there is a general reduction of the noise.
Unlike with prediction, weak events are preserved
and noise is subtracted from them too.
\plot{sneps}{width=6.5in,height=8.0in}{
  Stack of Shearer's IDA data (left).
  Multidip filtered (right).
  It is pleasing that the noise is reduced while
  weak events are preserved.}
\par
Besides the difference in theory,
the separation filters are much smaller,
because their size is determined by
the concept that ``two dips will fit anything locally''
({\tt a2=3}),
versus the prediction filters
``needing a sizeable window to do statistical averaging.''
The same aspect ratio {\tt a1/a2} is kept, and
the page is now divided into
11 vertical patches and 24 horizontal patches
(whereas previously the page was divided into $3\times 4$ patches).
In both cases the patches overlap about 50\%.
In both cases I chose to have about ten times as many equations
as unknowns on each axis in the estimation.
This factor of ten could be distributed differently
along the two axes, but I saw no reason to do so.
\subsection{The human eye as a dip filter}
\par
Although the filter seems to be performing as anticipated,
no new events are apparent.
I believe the reason that we see no new events is
that the competition is too tough.
We are competing with the human eye, which
through aeons of survival has become a highly skilled filter.
Does this mean that there is no need for filter theory and filter subroutines
because the eye can do it equally well?
It would seem so.
Why then pursue the subject matter of this book?
\par
The answer is 3-D.
The human eye is not a perfect filter.
It has a limited (though impressive) dynamic range.
A nonlinear display (such as wiggle traces)
can prevent it from averaging.
The eye is particularly good at dip filtering,
because the paper can be looked at from a range of grazing angles,
and averaging window sizes miraculously adjust to the circumstances.
The eye can be overwhelmed by too much data.
The real problem with the human eye
is that the retina is only two-dimensional.
The world contains many three-dimensional data volumes.
I don't mean the simple kind of 3-D where the contents of the room
are nicely mapped onto your 2-D retina.
I mean the kind of 3-D found inside a bowl of soup or inside a rock.
A rock can be sliced and sliced and sliced again, and each slice is a picture.
The totality of these slices is a movie.
The eye has a limited ability to deal with \bx{movies} by optical persistence,
an averaging of all pictures shown in about a 1/10-second interval.
Further, the eye can follow a moving object and perform the same
averaging.
I have learned, however, that the eye really cannot follow two objects
at two different speeds and average them both over time.
Now think of the third dimension in Figure~\ref{fig:sneps}.
It is the dimension that I summed over to make the figure.
It is the $1^\circ$ range bin.
If we were viewing the many earthquakes in each bin,
we would no longer be able to see the out-of-plane information
