
dct_lms.m

Some classic MATLAB example programs
function [W, e, Lambda] = dct_lms(u, d, M, alpha, beta, gamma, verbose)
% function [W, e, Lambda] = dct_lms(u, d, M, alpha, beta, gamma, verbose)
%
% dct_lms.m - use DCT-LMS algorithm to estimate optimum weight vectors
% for linear estimation
% written for MATLAB 4.0
%
% Reference: Ch.10 of Haykin, _Adaptive Filter Theory_, 3rd edition
%
%
% Input parameters:
%                u       : vector of inputs (real scalars)
%                d       : vector of desired outputs
%                M       : final order of predictor
%                alpha   : base step size for weight updates
%                beta    : remembering factor for sliding DCT coefficient updates
%                gamma   : forgetting factor for estimated eigenvalue updates
%                verbose : set to 1 for interactive processing
%
%
% Output parameters:
%                W       : row-wise matrix of Hermitian transposed weights
%                          at each iteration
%                e       : row vector of prediction errors at each time step
%                Lambda  : row-wise matrix of estimates of process eigenvalues
%                          at each iteration
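%
% Example (a minimal usage sketch; the signal and parameter values below are
% illustrative assumptions, not taken from the reference):
%   u = randn(1000, 1);                        % white input sequence
%   d = filter([1 0.5 0.25], 1, u);            % desired response from a short FIR channel
%   [W, e, Lambda] = dct_lms(u, d, 8, 0.05, 0.99, 0.99, 0);
%   plot(abs(e) .^ 2);                         % squared prediction error at each time step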

% maximum number of time steps that can be predicted
N = min(length(u),length(d));

% initialize weight matrix and associated parameters for LMS predictor
W = zeros(M, N+1);
Lambda = zeros(M, N);

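% DCT bin indices, normalization (k(1) = 1/sqrt(2) is the usual DCT-II scaling
% of the zero-frequency bin), and complex phase factors for the sliding-DCT recursion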
m = [0:(M-1)]';
k = ones(M, 1); k(1) = 1 / sqrt(2);
W2M = exp(-j * pi / M);
W2M2 = exp(-j * pi / 2 / M);
W2Mm = exp(-j * pi * m / M);
W2M2m = exp(-j * pi * m / 2 / M);
F1 = W2M2m;
F2 = conj(W2M2m);
y = zeros(N, 1);
e = zeros(N, 1);
Lambda_n1 = zeros(M, 1);

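% seed the sliding-DCT recursion by forming the two auxiliary phase-rotated
% sums directly over the first M input samples (n = M)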
n = M;
i = n-M+1:n;
A1n1 = W2M.^(m*(n-i)) * u(i);
A2n1 = W2M.^(m*(i-n+2*M-1)) * u(i);

for n = M+1:N

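    % recursively update the auxiliary phase-rotated sums A1 and A2 as the data
    % window slides by one sample, then combine them into C(:, n), the sliding
    % DCT of the most recent M input samples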
    A1(:, n) = W2M.^m    .*  A1n1 + u(n) * ones(M, 1) - u(n-M) * (-1).^m;
    A2(:, n) = W2M.^(-m) .* (A2n1 + u(n) * ones(M, 1) - u(n-M) * (-1).^m);
    C(:, n)  = 1/2 * k .* (-1).^m .* W2M2.^m .* (A1(:, n) + A2(:, n));

    A1n1 = A1(:, n);
    A2n1 = A2(:, n);

    % predict next sample and compute error
    y(n) = W(:, n).' * C(:, n);
    e(n) = d(n) - y(n);

    if (verbose ~= 0)
       disp(['time step ', int2str(n), ': mag. pred. err. = ', num2str(abs(e(n)))])
    end;

% adapt eigenvalue estimates and weight vector, adjusting for the
% offset of M+1 in the starting index for the eigenvalue step size
    Lambda(:, n) = gamma * Lambda_n1 + 1 / (n-M) * (C(:, n) .^2 - gamma * Lambda_n1);
    Lambda_n1    = Lambda(:, n);
    W(:, n+1)    = W(:, n) + alpha ./ Lambda(:, n) .* C(:, n) * e(n);

end; % for n

% discard last update to W since no more data to predict
W = W(1:M, 1:N);

% end; % dct_lms

