model and $\beta$ for the FOGM noise model. As above, you can fix these parameters as follows
\begin{verbatim}
--model gg:b23.0k-1
\end{verbatim}
The equation for the power spectrum in {\it Langbein} [2004] (equation 16) is incorrect; the correct form is
\begin{equation}
P = P_0 \left( \left(\frac{\beta}{S}\right)^2 + 4 \pi^2 f^2 \right)^{\frac{\kappa}{2}}
\end{equation}
To scale these equations so that they match the power-law-only power spectrum and the First-Order Gauss Markov power spectrum we have
\begin{equation}
P = \frac{2 \sigma^2 S^{\frac{\kappa}{2}}}{f_s^{\frac{\kappa}{2} + 1}} \left( \left(\frac{\beta}{S}\right)^2 + 4 \pi^2 f^2 \right)^{\frac{\kappa}{2}}
\end{equation}
The step-variable white noise model is an ad-hoc model devised so that we can introduce a step function in the size of the white noise component between two specified times. You can let the minimization solve for the times, but this is very unstable (as it is not a continuous function) and is not recommended. This noise model should ALWAYS be used in conjunction with a white noise or variable white noise model. The model should be specified as follows
\begin{verbatim}
--model sw:b1997.0e1998.0
\end{verbatim}
where $b$ is the start time of when the white noise changes and $e$ is the end time. You can specify a start or end time outside the range of the time series to give a single step function in the noise (but not both). For each model you may want to specify the sigmas by appending the additional parameter as follows.
\begin{verbatim}
--model wh:s0.0005/s0.0005/s0.0015
\end{verbatim}
The / separation allows you to specify different sigmas for each column of data. The sigma ``option'' is almost meaningless at the moment, since I have virtually eliminated the need for starting values. However, if you want to fully specify a model and just compute the weighted least squares (--method W) then the sigmas are required and should be input as shown.
\item[-\--method $<$M/S/E/W$>$ (-B)] Estimation method.
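To make the scaled spectrum above concrete, here is a literal numerical transcription of the second equation. This is only a sketch: the function name and the treatment of $\sigma$, $\kappa$, $\beta$, $S$ and $f_s$ as plain inputs are my own, and nothing here comes from the CATS code itself.

```python
import math

def scaled_psd(f, sigma, kappa, beta, S, fs):
    """Literal transcription of the scaled power spectrum above:

    P(f) = (2 sigma^2 S^(kappa/2) / fs^(kappa/2 + 1))
           * ((beta/S)^2 + 4 pi^2 f^2)^(kappa/2)
    """
    prefactor = 2.0 * sigma**2 * S ** (kappa / 2.0) / fs ** (kappa / 2.0 + 1.0)
    return prefactor * ((beta / S) ** 2 + 4.0 * math.pi**2 * f**2) ** (kappa / 2.0)
```

Note that with $\beta = 0$ the bracket reduces to $(2\pi f)^{\kappa}$, i.e. the pure power-law case, and with $\kappa = 0$ the spectrum is flat (white noise), as the text above requires.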
Choose from Maximum Likelihood Estimation (MLE), Empirical estimation, Spectral estimation or Weighted Least Squares Estimation. The additional parameter should be M, E, S or W.
\item[-\--columns $<$number$>$ (-C)] Only use certain columns from the data file. The number after -\--columns defines which columns to use. The number is the decimal representation of a binary sequence; 1 for use and 0 for ignore. So if there are three columns and you wish to use the middle column, this would be 010 in binary, which is 2 in decimal, so the number would be 2. If you want to use all 3 columns then the number would be 7. This is essentially scalable: if in the future there is a file with more columns then it can cope with this. Of course, if you want to use all columns then you do not need to use this option.
\item[-\--scale scale$\_$factor (-S)] Divide the data by scale$\_$factor before estimating the model.
\item[-\--sinusoid $<$period$>$$<$time$\_$flag$>$$<$harmonics$>$ (-A)] Include a sinusoid (and $n$ harmonics) of period $<$period$>$ in the linear parameters to solve for. Use time$\_$flag to describe what units the period is in:
\begin{description}
\item[y] years
\item[d] days
\item[h] hours
\item[m] minutes
\item[s] seconds
\end{description}
To solve for an annual sinusoid use {\bf - -sinusoid 1y}. To solve for an annual sinusoid and a semi-annual harmonic use {\bf - -sinusoid 1y1}.
\item[-\--verbose (-V)] Verbose. Output more information.
\item[-\--speed $<$0/1/2/3$>$ (-Z)] Speed up the computation. At every point in the minimization algorithm the noise parameters are chosen, the covariance matrix is calculated, a weighted least squares is performed on the data and the residuals are calculated. The residuals, along with the noise parameters, are used to calculate the MLE value (which is the objective function in the minimization).
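The binary encoding used by -\--columns can be sketched as a small helper. This is hypothetical illustration code, not part of CATS; column indices here count from 1, left to right:

```python
def columns_number(used, total):
    """Decimal value for the --columns option: the left-most data column is
    the most significant binary digit; 1 = use, 0 = ignore.

    used  -- iterable of 1-based column indices to keep
    total -- total number of data columns in the file
    """
    return sum(1 << (total - i) for i in used)

# Middle column of three -> binary 010 -> 2; all three -> binary 111 -> 7.
```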
However, if you have good starting parameters then the residuals will not alter much between different noise parameter choices, and so the minimization can be made faster by using the same residuals for many parts of the algorithm. In general this gives results that are a tenth of a mm or less different from the full-blown algorithm, for a considerable increase in speed. Note this was more of an issue in the previous version of the software, when using the simplex algorithm. This new version performs at the same speed as the previous one even without this ``cheat''. The higher the number, the fewer times the residuals are estimated in the algorithm.
\item[-\--help (-H)] Bring up a help page.
\item[-\--delta $<$Delta$>$ (-D)] This sets the accuracy of the starting parameters. Delta is 1.1 for 10\% accuracy, 1.01 for 1\%, etc. This is used in the simplex algorithm for controlling the size of the initial walks through parameter space. {\bf Currently defunct.}
\item[-\--tolerance $<$tol1$>$/$<$tol2$>$/$<$tol3$>$ (-T)] Tolerance values for the simplex algorithm. {\bf tol1} is the PRECISION required for all values of the noise parameters at solution. {\bf tol2} is the PRECISION required for the objective function (the MLE) at solution. {\bf tol3} is the MAXIMUM VALUE of all the noise parameters (OR-ed with {\bf tol2}). {\bf Currently defunct.}
\item[-\--output $<$output$\_$file$>$ (-O)] All results are written to this file. If this option is not set then the output goes to {\bf stdout}.
\item[-\--filetype $<$cats/psmsl$>$ (-F)] Filetype, either {\bf cats} (default) or {\bf psmsl}. See above.
\item[-\--psdfile $<$filename$>$ (-P)] Create the power spectrum for this data set and output it to $<$filename$>$. If the dataset is complete (no gaps or missing data) then the PSD is generated from the FFT; otherwise Scargle's periodogram is used.
\item[-\--notrend (-E)] Do not compute a trend to the data.
\item[-\--cov$\_$form $<$1/2$>$ (-X)] Specify the form of the covariance matrix.
In previous versions the covariance matrix for power-law noise was formed as described in {\it Williams} [2003], i.e. the transformation matrix (which is used to produce the covariance matrix) is scaled by the individual $\Delta T$s ($\Delta T_j = T_j - T_{j-1}$). For extremely unevenly spaced data this is the only way to do it, but it can lead to uncertain assumptions when trying to interpret the results, in particular with reference to an evenly spaced dataset. Alternatively, one can create a covariance matrix assuming a certain time-sampling and then remove the columns and rows associated with missing data. The second method is, I believe, generally more stable and corresponds to the method used by {\it Langbein} [2004]. It is also the only real way of computing the covariance matrix for certain noise models such as band-pass, autoregressive and moving average. The default is 1, removing columns and rows of missing data.
\end{description}
\subsection{Example Command Line}
To estimate the amount of flicker noise and white noise in the file vyas.neu, assuming there is an annual signal in the series, we can use the command
{\scriptsize\begin{verbatim}
cats vyas.neu --sinusoid 1y --model pl:k-1 --verbose --output vyas.fn_mle
\end{verbatim}}
To estimate the spectral index plus the amplitudes of white and power-law noise in the east and up components of file vyas.neu, assuming an annual and semi-annual signal and using the fast option, we can use the command
{\scriptsize\begin{verbatim}
cats vyas.neu --sinusoid 1y1 --model pl: --columns 3 --verbose --speed 2 --output vyas.pl_mle
\end{verbatim}}
\section{Typical Output}
Below is the output for the North component of the file vyas.neu using the command
{\scriptsize\begin{verbatim}
cats --model pl:k-1 --model wh: --columns 4 --sinusoid 1y --verbose --output vyas.fn_mle vyas.neu
\end{verbatim}}
The long list of numbers are the vertices in the stages of the simplex algorithm together with the MLE value and the noise
amplitudes.
{\scriptsize\begin{verbatim}
 Sampling frequency 1.15664e-05 (Hz), 1.00 days
 Number of samples 1 period apart = 565 of 587
 Number of points in full series  = 612
 Cats Version : 3.1.1
 Cats command : cats --model pl:k-1 --columns 4 --sinusoid 1y --verbose --output vyas.fn_mle vyas.neu
 Series[0] = 1
 Series[1] = 0
 Series[2] = 0
 Number of series to process : 1
 Data from file : vyas.neu
 cats : running on bilai Linux release 2.4.21-27.0.1.EL (version #1 Mon Dec 20 18:47:51 EST 2004) on i686
 userid : sdwil
 Start Time : Tue Jan 25 14:21:23 2005
 work(1)  = 19992.000000
 info     = 0
 Time taken to create covariance matrix and compute eigen value and vectors : 8 seconds
 wh_only = 0.00113253 (3154.2463), cn_only = 0.00503155 (3141.9072)
 Starting a one-dimensional minimisation : initial angle 45.00
 angle = 45.000000 mle = 3176.04935824 radius =     1.476166 wh =     1.043807 cn =     1.043807
 Next choice of angle = 27.811530
 angle = 27.811530 mle = 3184.98028986 radius =     2.038823 wh =     0.951243 cn =     1.803313
 Next choice of angle = 17.188471
 angle = 17.188471 mle = 3181.49799071 radius =     2.810471 wh =     0.830539 cn =     2.684949
 Next choice of angle = 28.089645
 angle = 28.089645 mle = 3184.92628502 radius =     2.024794 wh =     0.953379 cn =     1.786298
 Next choice of angle = 25.922946
 angle = 25.922946 mle = 3185.20570106 radius =     2.140598 wh =     0.935788 cn =     1.925217
 Next choice of angle = 22.586673
 angle = 22.586673 mle = 3184.87538613 radius =     2.352437 wh =     0.903525 cn =     2.172004
 Next choice of angle = 25.139212
 angle = 25.139212 mle = 3185.21929412 radius =     2.186426 wh =     0.928835 cn =     1.979324
 Next choice of angle = 25.390604
 angle = 25.390604 mle = 3185.22046980 radius =     2.171482 wh =     0.931103 cn =     1.961729
 Next choice of angle = 25.644510
 angle = 25.644510 mle = 3185.21630040 radius =     2.156624 wh =     0.933357 cn =     1.944189
 Finished :
 Angle = 45.000000 mle = 3176.04935824 wh =     1.043807 cn =     1.043807
 Angle = 27.811530 mle = 3184.98028986 wh =     0.951243 cn =     1.803313
 Angle = 17.188471 mle = 3181.49799071 wh =     0.830539 cn =     2.684949
 Angle = 28.089645 mle = 3184.92628502 wh =     0.953379 cn =     1.786298
 Angle = 25.922946 mle = 3185.20570106 wh =     0.935788 cn =     1.925217
 Angle = 22.586673 mle = 3184.87538613 wh =     0.903525 cn =     2.172004
 Angle = 25.139212 mle = 3185.21929412 wh =     0.928835 cn =     1.979324
 Angle = 25.390604 mle = 3185.22046980 wh =     0.931103 cn =     1.961729
 Angle = 25.644510 mle = 3185.21630040 wh =     0.933357 cn =     1.944189
 Number of Parameters = 7
+NORT MLE = 3185.22046980
+NORT               INTER :    -33.2048 +-     0.5869
+NORT               SLOPE :     16.3914 +-     0.5012
+NORT             0 SIN   :     -0.2780 +-     0.2122
+NORT             0 COS   :     -0.0141 +-     0.2237
+NORT 1999.79190000 OFFST :      2.9781 +-     0.4382
+NORT               WH    :      0.9311 +-     0.0412
+NORT               PL    :      1.9617 +-     0.2468
+NORT FIXED         INDEX :    -1.000000
+COVAR
XX     0.3445    -0.2495     0.0467     0.0225     0.0694    -0.0011     0.0084
XX    -0.2495     0.2512    -0.0311    -0.0056    -0.1049     0.0006    -0.0049
XX     0.0467    -0.0311     0.0450     0.0002     0.0008     0.0000    -0.0001
XX     0.0225    -0.0056     0.0002     0.0501    -0.0339    -0.0003     0.0022
XX     0.0694    -0.1049     0.0008    -0.0339     0.1920     0.0006    -0.0044
XX    -0.0011     0.0006     0.0000    -0.0003     0.0006     0.0017    -0.0059
XX     0.0084    -0.0049    -0.0001     0.0022    -0.0044    -0.0059     0.0609
-COVAR
 End Time : Tue Jan 25 14:21:38 2005
 Total Time : 15
\end{verbatim}}
\section{Processing Speeds}
The time taken to process a time series depends on the length of the time series and the model you are running.
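As an aside, the wh and cn values in the log of the previous section are consistent with a simple polar parameterisation of the two noise amplitudes, wh $= r \sin\theta$ and cn $= r \cos\theta$. This relationship is inferred purely from the printed numbers, not taken from the CATS source, so treat it as a sketch:

```python
import math

def amplitudes_from_angle(angle_deg, radius):
    """Inferred mapping from the (angle, radius) search variables to the
    white-noise (wh) and coloured-noise (cn) amplitudes in the log above."""
    theta = math.radians(angle_deg)
    return radius * math.sin(theta), radius * math.cos(theta)

# Checking against one logged line:
# angle = 25.390604, radius = 2.171482 -> wh close to 0.9311, cn close to 1.9617
```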
For example, the white noise only model is almost instantaneous, whereas a stochastic model such as power-law noise plus white noise, where the spectral index is also being estimated, takes the longest. I will come up with some graphs based on a fixed data set for different machines. From my experience, however, SUNs (Ultra II) are slower than SGIs, and PCs running Linux or CYGWIN (under NT/XP) are around 10 times faster than the SGI O2. PCs running Solaris are also quite slow compared to an equivalent machine running Linux. A gain of 10 times can mean a lot when a time series can take over a day to process! I would recommend a machine with at least 512 Mb (and preferably 1024 Mb or more) of RAM for this sort of work. I have not attempted so far to run this on a cluster and try some parallelization of the code.
\begin{thebibliography}{}
\bibitem{gar78} Gardner, M., Mathematical Games: White and brown music, fractal curves and one-over-f fluctuations, {\it Scientific American, 238}, 4, 16-32, 1978.
\bibitem{hos81} Hosking, J. R. M., Fractional differencing, {\it Biometrika, 68}, 1, 165-176, 1981.
\bibitem{} Langbein, J., Noise in two-color electronic distance meter measurements revisited, {\it J. Geophys. Res., 109}, B04406, doi:10.1029/2003JB002819, 2004.
\bibitem{} Langbein, J., and H. Johnson, Correlated errors in geodetic time series: Implications for time-dependent deformation, {\it J. Geophys. Res., 102}, B1, 591-603, 1997.
\bibitem{} Mao, A., C. G. A. Harrison, and T. H. Dixon, Noise in GPS coordinate time series, {\it J. Geophys. Res., 104}, 2797-2816, 1999.
\bibitem{} Williams, S. D. P., The effect of coloured noise on the uncertainties of rates estimated from geodetic time series, {\it J. Geodesy, 76} (9-10), 483-494, 2003.
\bibitem{} Williams, S. D. P., Y. Bock, P. Fang, P. Jamason, R. M. Nikolaidis, L. Prawirodirdjo, M. Miller, and D. J. Johnson, Error analysis of continuous GPS position time series, {\it J. Geophys. Res., 109}, B03412, doi:10.1029/2003JB002741, 2004.
\bibitem{} Zhang, J., Y. Bock, H. Johnson, P. Fang, S. Williams, J. Genrich, S. Wdowinski, and J. Behr, Southern California Permanent GPS Geodetic Array: Error analysis of daily position estimates and site velocities, {\it J. Geophys. Res., 102}, B8, 18035-18055, 1997.
\end{thebibliography}
\end{document}
