
📄 comp_wls.m

📁 Demonstration of the Wiener filter, the LMS filter, and the steepest-descent algorithm.
% Demonstration of the Wiener filter, the LMS filter, and the steepest-descent algorithm
clear;
clc;

N = 10000;          %----- the length of the observation sequence
M = 2;              %----- the filter order (the filters below use M+1 taps)

v = randn(1,N);     %----- white process as the AR excitation
a = poly(sign(randn(1,M)).*rand(1,M));      %----- AR coefficients from M random roots inside the unit circle (stable model)

u = filter(1,a,v);          %-----the input sequence

d = v;              %----- the desired response
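%----- Note: d = v and v = filter(a,1,u), i.e. the desired response is obtained
%----- from the input by the FIR (whitening) filter with coefficients a, so the
%----- optimal (M+1)-tap filter that each method below estimates is a itself.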

rf = xcorr(u,M,'biased');   %----- biased autocorrelation estimate of u at lags -M..+M
rv = rf(M+1:2*M+1);         %----- keep the lags 0..M

R = toeplitz(rv);       %----- the correlation matrix of the input

pf = xcorr(d,u,M,'biased'); %----- biased cross-correlation estimate of d and u at lags -M..+M

pv = pf(M+1:2*M+1).';   %----- the cross-correlation vector between the input and the desired response

%----- the optimal tap weight vector for Wiener filter-----
wopt = R \ pv;          %----- solve the Wiener-Hopf equations R*wopt = pv
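%----- Optional check: since d = v = filter(a,1,u), the theoretical Wiener
%----- solution equals the AR coefficient vector, so the estimate wopt
%----- (built from finite-data correlations) should be close to a.'
disp([wopt a.']);       %----- estimated Wiener weights vs. true AR coefficients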
[V,D] = eig(R);         %-----selection of a stable step size mu
lambda_max = max(diag(D));
mu = 0.9 * 2/lambda_max;            %----- 90% of the stability bound 0 < mu < 2/lambda_max

%----- the steepest descent learning-----
wsd = randn(M+1,1);                 %-----initial weight vector for steepest descent
total_iteration_number = 100;       %-----total iteration number         

for i=1:total_iteration_number
    wsd = wsd + mu * (pv - R*wsd);  %----- step opposite to the MSE gradient, which is proportional to (R*wsd - pv)
end
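%----- Optional check: for 0 < mu < 2/lambda_max the recursion converges to the
%----- Wiener solution; the remaining gap shrinks as the iteration count grows
%----- (more slowly when the eigenvalue spread of R is large)
disp(norm(wsd - wopt));             %----- distance between steepest-descent and Wiener weights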

%----- the LMS learning-----
wlms = randn(M+1,1);                %-----initial weight vector for LMS
uv = zeros(M+1,1);                  %-----initial input vector

mu = 0.1*2/lambda_max;              %----- smaller step size mu for LMS, since it uses a noisy instantaneous gradient estimate

for n=1:N
    uv(2:M+1) = uv(1:M);            %----- shift the tapped delay line
    uv(1) = u(n);                   %----- insert the newest input sample
    
    y = wlms' * uv;                 %----- filter output
    e = d(n) - y;                   %----- estimation error
    wlms = wlms + mu * uv * conj(e);    %----- LMS weight update
end
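%----- Optional summary: since d = v = filter(a,1,u), all three weight vectors
%----- should approximate the AR coefficient vector a.' (the LMS weights remain
%----- a noisy, sample-by-sample estimate)
disp([a.' wopt wsd wlms]);          %----- columns: true a, Wiener, steepest descent, LMS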
    
