chaostest.m - toolbox for chaos system (MATLAB)
            A = StartValue * ones(q, m); % Contains the coefficients gamma(i,j).
            B = StartValue * ones(q, 1); % Contains the coefficients gamma(0,j).
            C = StartValue * ones(1, q); % Contains the coefficients beta(j).
            D = StartValue; % Constant.
            %E = StartValue * ones(1, m); % Linear coefficients for lagged X.
            
            Theta0 = [reshape(A', 1, q*m), B', C, D];
            %Theta0 = [reshape(A', 1, q*m), B', C, D, E];
            
            Theta = lsqnonlin('chaosfn', Theta0, [], [], Options, X, L, m, q, ActiveFN);
        
            T      =  length(X) - m*L; % New sample size.

%
% Use the helper function JACOBMATX to compute the Jacobian needed to
% compute the Lyapunov exponent LAMBDA. This Jacobian is taken with
% respect to X, not with respect to the parameter THETA. N.B.: the
% Jacobian returned by LSQNONLIN is with respect to THETA.
%

            JacobianMatrix   =   jacobmatx(Theta, X, L, m, q, ActiveFN);
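
%
% Each slice J(:,:,t) of the result is an m-by-m companion-type matrix:
% its sub-diagonal block is the identity (the lagged states simply shift),
% while its first row holds the partial derivatives of the fitted
% single-hidden-layer map with respect to the m lagged values of X,
%
%     dF/dX(t-i*L) = sum_{j=1}^{q} beta(j) * psi'(u(j,t)) * gamma(i,j),
%     u(j,t)       = gamma(0,j) + sum_{k=1}^{m} gamma(k,j) * X(t-k*L),
%
% where psi is the selected activation function (see JACOBMATX below).
%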

%
%
% We distinguish between the "sample size" T used for estimating the
% Jacobian and the "block length" M, which is the number of evaluation
% points used for estimating LAMBDA (M <= T).
%

            n   =   3;
            h   =   round(T^(1/n)); % Step between evaluation points.

% We can choose the full sample by setting h = 1. When h > 1,
% the obtained exponent is the Local Lyapunov Exponent.

            newIndex    =   (h:h:T);

            M   =   length(newIndex); % Equally spaced subsample's size.

%
% Compute the dominant Lyapunov exponent LAMBDA. The dominant exponent
% corresponds to the largest eigenvalue of Tn(M)'*Tn(M). The following
% procedure uses the sample estimator of the Lyapunov exponent described
% by Whang Y. and Linton O. (1999).
%

            Tn  =   JacobianMatrix(:, :, newIndex);
            for t = 2:M
                Tn(:, :, t)  =   Tn(:, :, t) * Tn(:,:,t-1);
            end

%
% Compute the "QR" estimate of LAMBDA as suggested by Abarbanel et al.
% (1991) by multiplying Tn by a unit vector U0, which reduces the
% systematic positive bias in the formal estimate of LAMBDA. In theory U0
% is chosen at random with respect to the uniform measure on the unit
% sphere; in practice we simply take U0 = [1; 0; 0; ...; 0].
%

            U0    =   zeros(m, 1);
            U0(1) =   1;

            v   =   eig((Tn(:, :, M) * U0)' * (Tn(:, :, M) * U0));
            
            % LAMBDA is the largest Lyapunov exponent for the specified orders.
            % To avoid taking the log of zero, max(v) is replaced with REALMIN
            % when it is zero.
            
            Lambda1  =  1/(2*M) * log(max([v ; realmin])); 
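
%
% In closed form (ignoring the REALMIN safeguard), the estimate just
% computed is
%
%     Lambda1 = 1/(2*M) * log( (Tn(:,:,M)*U0)' * (Tn(:,:,M)*U0) )
%             = 1/M * log( norm(Tn(:,:,M)*U0) ),
%
% i.e. the average exponential growth rate, per evaluation point, of the
% unit perturbation U0 propagated through the product of the subsampled
% Jacobians.
%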

%
% Choose the largest Lyapunov exponent over all regressions obtained by
% varying L, m, and q, so there are L*m*q regressions in total and the
% chosen LAMBDA is the largest of all. This differs from the Nychka et
% al. (1992) method, which first selects the regression that minimizes the
% Schwarz Information Criterion to fix (L, m, q) and then computes the
% corresponding Lyapunov exponent; that approach rejects some chaotic maps
% when the initial conditions change (even though the map itself is
% unchanged). The triplet (L, m, q) can be seen as the degree of complexity
% of the chaotic map: if a process is not chaotic, the test rejects the
% null hypothesis for all orders (L, m, q); otherwise it accepts the null
% for the orders at which the chaotic map becomes more complex.
%

            if Lambda1 >= Lambda
                
                Lambda  = Lambda1;
                Orders  =   [L, m, q];
%
% Compute the asymptotic variance of the Lyapunov exponent for noisy
% systems, following Shintani M. and Linton O. (2002).
%

                esp     =   realmin; % Small positive floor used to avoid log(0) and division by zero.
                Eta     =   zeros(M, 1);
                Eta(1)  =   0.5 * log(max(esp, max(eig(Tn(:, :, 1)' * ...
                    Tn(:, :, 1))))) - Lambda;

% Ensure that the obtained eigenvalue is not zero; if it is, replace it with
% the small positive number REALMIN to avoid 'log of zero' and 'divide by zero'.

                for t = 2:M
                    Eta(t)  =   0.5 * log(max(esp, max(eig(Tn(:, :, t)' * ...
                        Tn(:, :, t))))./ max(esp, max(eig(Tn(:, :, t-1)' * ...
                        Tn(:, :, t-1))))) - Lambda;
                end
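
%
% Written out, with nu(t) = max eig( Tn(:,:,t)'*Tn(:,:,t) ), the terms are
%
%     Eta(1) = 0.5*log(nu(1)) - Lambda,
%     Eta(t) = 0.5*log( nu(t)/nu(t-1) ) - Lambda,   t = 2,...,M,
%
% i.e. the one-step increments of the log growth rate, centered at the
% estimated LAMBDA; they feed the kernel (HAC) variance estimator below.
%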

                gamm    =   zeros(M, 1);                
                for i = 1:M
                    gamm(i) =   1/M * Eta(M-i+1:M)' * Eta(1:i);
                end

                gamm   =   [gamm(1:end-1); flipud(gamm)];
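
%
% gamm now holds the sample autocovariances of Eta,
%
%     gamm_hat(j) = 1/M * sum_t Eta(t+|j|) * Eta(t),
%
% stacked for lags j = -(M-1),...,0,...,M-1 so that they line up with the
% kernel weights computed below.
%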

%
% Compute the lag truncation parameter Sm. This is the optimal lag for the
% Quadratic Spectral kernel as defined by Andrews (1991), p. 830. The
% coefficient a(2) defined by Andrews converges to 1, and so does its
% inverse: a(2) -> 1 and 1/a(2) -> 1 (Andrews (1991), p. 837). Because
% computing a(2) is very difficult for a neural network model, we use the
% limiting value a(2) = 1. Note that Sm -> Inf and Sm/M -> 0 as M -> Inf.
%

                Sm      =   1.3221 * (1 * M)^(1/5);
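
%
% For reference, this is Andrews' (1991) plug-in bandwidth for the QS
% kernel,
%
%     Sm = 1.3221 * ( a(2) * M )^(1/5),
%
% evaluated at a(2) = 1 as discussed above.
%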
                
%
% Compute the kernel function needed to estimate the asymptotic variance
% of LAMBDA. The kernel used is the Quadratic Spectral kernel as defined
% by Andrews D. (1991), p. 821.
%

                j           =   -M+1:M-1;
                KernelFN    =   ones(size(j));

                z           =   6 * pi * (j./Sm) / 5;

                % The kernel tends to 1 as z -> 0, so remove all z == 0 and
                % handle them through the initial value of KernelFN (ones).
                z(j==0)     =   [];

                KernelFN(j~=0)    =   (3 ./ z.^2) .* (sin(z) ./ z - cos(z));
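
%
% This is the Quadratic Spectral kernel
%
%     k(x) = 25/(12*pi^2*x^2) * ( sin(6*pi*x/5)/(6*pi*x/5) - cos(6*pi*x/5) ),
%
% written in terms of z = 6*pi*x/5 with x = j/Sm, in which case it reduces
% to (3/z^2)*(sin(z)/z - cos(z)); its limit as z -> 0 is 1, which is why
% z == 0 is handled separately above.
%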

%
% Compute the asymptotic variance of LAMBDA and prevent it from being
% negative or NaN. Then compute the test statistic for LAMBDA.
%

                varLambda       =   max(esp, KernelFN * gamm);
                LambdaStatistic =   Lambda / sqrt(varLambda / M);
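
%
% Equivalently, LambdaStatistic = sqrt(M)*Lambda / sqrt(varLambda), where
% varLambda = sum_j k(j/Sm)*gamm_hat(j) is the kernel (HAC) estimate of
% the long-run variance of Eta.
%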
                
            end

        end
    end
end

%
% The statistic just computed is for the one-sided test of the null
% hypothesis LAMBDA >= 0 (presence of chaos) against the alternative
% LAMBDA < 0 (no chaos). The statistic is asymptotically standard normal.
% This test corresponds to TAIL = 1.
%

pValue   =   normcdf(LambdaStatistic, 0, 1);
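
%
% Under the null LAMBDA >= 0 the statistic is asymptotically N(0,1) and the
% test rejects for small (negative) values, so the one-sided p-value is the
% lower tail probability Phi(LambdaStatistic) computed above.
%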

%
% Compute the critical value and the confidence interval CI for the true
% LAMBDA only when requested, because NORMINV is computationally intensive.
%

if nargout >=5
    crit     = norminv(1 - alpha, 0, 1) * sqrt(varLambda / M);
    CI       = [(Lambda - crit), Inf];
end
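
%
% The reported interval is the one-sided 100*(1-alpha) percent confidence
% interval for LAMBDA,
%
%     [ Lambda - norminv(1-alpha)*sqrt(varLambda/M) ,  Inf ),
%
% consistent with the one-sided alternative LAMBDA < 0.
%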

%
% To maintain consistency with existing Statistics Toolbox hypothesis
% tests, returning 'H = 0' implies that we 'Do not reject the null 
% hypothesis of chaotic dynamics at the significance level of alpha'
% and 'H = 1' implies that we 'Reject the null hypothesis of chaotic
% dynamics at significance level of alpha.'
%

H  = (alpha >= pValue);

%--------------------------------------------------------------------------%
%                       Helper function JACOBMATX                          %
%--------------------------------------------------------------------------%

function    J   =   jacobmatx(Theta, X, L, m, q, ActiveFN)
%JACOBMATX computes the Jacobian matrix needed to compute the Lyapunov
%   exponent LAMBDA. This Jacobian is taken with respect to X, i.e. dF(X)/dX,
%   and is not to be confused with the Jacobian needed to estimate the
%   parameter THETA; the latter is taken with respect to THETA, not X.

XLAG   =  lagmatrix(X, L:L:L*m); % size(XLAG) = [T, m].
XLAG   =  XLAG(m*L+1:end, :); % Remove all NaN observations.
T      =  length(X) - m*L; % New sample size.

A   =   reshape(Theta(1:q*m), m, q)'; % size(A) = [q, m].
B   =   Theta(q*m+1:q*m+q)'; % size(B) = [q, 1].
C   =   Theta(q*m+q+1:q*m+q+q); % size(C) = [1, q].

u   =   A * XLAG' + repmat(B,1,T); % size(u) = [q, T].

J   =   zeros(m, m, T);

J(2:end, 1:end-1, :)  =   repmat(eye(m-1), [1 1 T]);

switch upper(ActiveFN)
    case 'LOGISTIC'
        J(1, :, :)  =   A' * (repmat(C', 1, T) .* (exp(- u) ./ ...
            (1 + exp(- u)).^2));
        
    case 'TANH'
        J(1, :, :)  =   A' * (repmat(C', 1, T) .* (sech(u).^2));
        
    case 'FUNFIT'
        J(1, :, :)  =   A' * (8 * repmat(C', 1, T) .* ...
                (1 + abs(u)) ./ (3 + (1 + abs(u)).^2).^2);
            
    otherwise
        error(' Unrecognized activation function!')
end
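
%
% For each activation psi, the first row of J implements the derivative,
% with respect to the lag vector x, of the fitted map implied by the
% coefficient layout above, F(x) = const + sum_j C(j)*psi( A(j,:)*x + B(j) )
% (the constant term drops out of the derivative). 'LOGISTIC' uses
% psi(u) = 1/(1+exp(-u)) with psi'(u) = exp(-u)/(1+exp(-u))^2, and 'TANH'
% uses psi(u) = tanh(u) with psi'(u) = sech(u)^2. The 'FUNFIT' branch is
% assumed to be the matching derivative of the FUNFIT activation used in
% the companion estimation routine CHAOSFN.
%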
