%% Compressed Sensing Acquisition
% Compressed sensing acquisition corresponds to a random projection of a
% signal |x| on a few linear vectors. For the recovery of |x| to be
% possible, it is assumed to be sparsely represented in an orthogonal
% basis. Up to a change of basis, we suppose that the vector |x| itself is
% sparse.

%%
% We begin by generating a sparse signal with |s| randomized values.

% dimension
n = 512;
% sparsity
s = 17;
% location of the diracs
set_rand_seeds(123456,123456);
sel = randperm(n); sel = sel(1:s);
x = zeros(n,1); x(sel) = 1;
% randomization of the signs and values
x = x.*sign(randn(n,1)).*(1-.5*rand(n,1));
% save for future use
x0 = x;

%%
% We create a random Gaussian matrix and normalize its columns. This
% defines the acquisition operator |U|. The columns are thus random points
% on the unit sphere of |R^p|.

% number of measures
p = 100;
% Gaussian matrix
U = randn(p,n);
% normalization
U = U ./ repmat( sqrt(sum(U.^2)), [p 1] );

%%
% We perform random measurements without noise.

y = U*x;

%%
% Compressed sensing recovery corresponds to solving the inverse problem
% |y=U*x|, which is ill-posed because |x| is higher dimensional than |y|.
% The reconstruction can be performed with L1 minimization, which
% regularizes the problem by using the sparsity of the solution.

%%
% |min_{x1} norm(x1,1) subject to U*x1=y|.

%%
% Using the homotopy algorithm, we compute the solution path for a large
% number of different regularization parameter |lambda| values, and
% retrieve the last step, which corresponds to the minimum L1 norm
% solution.

options.niter = Inf;
[X1,lambda_list,sparsity_list] = perform_homotopy(U,y,options);
xbp = X1(:,end);

%%
% We display the solution.
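%%
% _Aside:_ |perform_homotopy| is a toolbox function. As a rough,
% toolbox-free stand-in, the following NumPy sketch solves the penalized
% variant |min_{x1} 1/2*norm(y-U*x1)^2 + lambda*norm(x1,1)| by proximal
% gradient descent (ISTA). The sizes, seed and |lam| value below are
% illustrative choices, not the tour's, and ISTA is a different (simpler,
% slower) algorithm than the homotopy path.

```python
import numpy as np

def ista_l1(U, y, lam, n_iter=500):
    """ISTA for min_x 0.5*||y - U x||^2 + lam*||x||_1."""
    L = np.linalg.norm(U, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(U.shape[1])
    for _ in range(n_iter):
        z = x - U.T @ (U @ x - y) / L        # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft-thresholding
    return x

# tiny demo with illustrative sizes
rng = np.random.default_rng(123456)
n, p, s = 256, 80, 8
U = rng.standard_normal((p, n))
U /= np.sqrt((U ** 2).sum(axis=0))           # unit-norm columns, as in the tour
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = U @ x0
x1 = ista_l1(U, y, lam=0.01)
print("residual norm:", np.linalg.norm(y - U @ x1))
```

% Driving |lam| toward 0 pushes the iterates toward the minimum L1 norm
% solution that the homotopy path reaches at its last step.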
%%
% You can see that the locations of the Diracs are perfectly recovered.

clf;
subplot(2,1,1);
plot_sparse_diracs(x);
set_graphic_sizes([], 20);
title('Original Signal');
subplot(2,1,2);
plot_sparse_diracs(xbp);
set_graphic_sizes([], 20);
title('Recovered by L1 minimization');

%%
% _Exercise 1:_ (the solution is <../private/inverse_compressed_sensing/exo1.m exo1.m>)
% For sparsity values |s| in [14,34], measure the ratio of vectors that
% are perfectly recovered. To do this, generate many input signals
% (something like 100) and perform CS recovery on these signals to see
% whether they are recovered or not.

exo1;

%% Compressed Sensing Recovery with Orthogonal Matching Pursuit
% Instead of using L1 minimization, it is possible to use matching pursuit
% with the columns of |U| as atoms of a redundant dictionary.

x = x0;
y = U*x;
s = sum(x~=0);

%%
% We perform |s| steps of orthogonal matching pursuit. This also recovers
% the signal perfectly.

options.nbr_max_atoms = s;
xomp = perform_omp(U,y,options);
clf;
subplot(2,1,1);
plot_sparse_diracs(x);
set_graphic_sizes([], 20);
title('Original Signal');
subplot(2,1,2);
plot_sparse_diracs(xomp);
set_graphic_sizes([], 20);
title('Recovered by OMP');

%%
% _Exercise 2:_ (the solution is <../private/inverse_compressed_sensing/exo2.m exo2.m>)
% Solve the compressed sensing reconstruction using |s| steps of
% orthogonal matching pursuit.

exo2;

%% Compressed Sensing on Compressible Signals
% Exactly |s|-sparse signals are of little practical use because natural
% signals are compressible rather than exactly sparse. We study here the
% relative performance of non-linear approximation and compressive
% sampling approximation on signals with polynomially decaying
% coefficients.

%%
% A typical example of a signal that is compressible in the Dirac basis is
% a signal with fast decaying coefficients.
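%%
% _Aside:_ the greedy selection performed by |perform_omp| above can be
% sketched in a few lines of NumPy. The matrix, signal and sizes below are
% illustrative stand-ins, not the tour's data; the least-squares re-fit at
% each step is what makes the pursuit _orthogonal_.

```python
import numpy as np

def omp(U, y, n_atoms):
    """Orthogonal matching pursuit: greedily pick the column of U most
    correlated with the residual, then re-fit by least squares."""
    support, r = [], y.copy()
    for _ in range(n_atoms):
        k = int(np.argmax(np.abs(U.T @ r)))          # best matching atom
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(U[:, support], y, rcond=None)
        r = y - U[:, support] @ coef                 # residual after re-fit
    x = np.zeros(U.shape[1])
    x[support] = coef
    return x

# illustrative demo
rng = np.random.default_rng(0)
n, p, s = 256, 80, 8
U = rng.standard_normal((p, n))
U /= np.sqrt((U ** 2).sum(axis=0))                   # unit-norm columns
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = 1 + rng.random(s)
y = U @ x0
xomp = omp(U, y, s)
print("support size:", np.count_nonzero(xomp))
```

% After each least-squares re-fit, the residual is exactly orthogonal to
% the span of the selected atoms, which prevents re-selecting them.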
% We flip the signs here, although this is not relevant (because the
% sensing matrix is random).

n = 512;
alpha = 1.5;
x = (1:n)'.^(-alpha);
x(2:2:n) = -x(2:2:n);

%%
% Random signals are generated from this template by randomizing the
% coefficients.

clf;
subplot(2,1,1);
plot_sparse_diracs( x(randperm(n)) );
subplot(2,1,2);
plot_sparse_diracs( x(randperm(n)) );

%%
% The measure of sparsity here is |alpha|, which should be >1 for a
% compressible signal, and in (1/2,1) for a not so compressible signal.

%%
% The non-linear approximation error with |M| coefficients of such a
% signal decays like |norm(f-f_M)^2=M^{-2*alpha+1}|, so that the slope in
% a log/log plot is |-2*alpha+1|.

xs = sort(abs(x));
if xs(1)>xs(n)
    xs = reverse(xs);
end
err_nl = reverse(cumsum( xs.^2 ));
err_nl = err_nl/err_nl(1);
% display
clf;
plot( log10(1:n), log10(err_nl) );
axis('tight');
set_label('log10(M)', 'log10(|f-f_M|^2)');

%%
% _Exercise 3:_ (the solution is <../private/inverse_compressed_sensing/exo3.m exo3.m>)
% Compute the |alpha| slope for the wavelet coefficients of a natural
% image (for instance 'lena'). You should perform a linear fit of the
% slope of the ordered coefficients, for |M| for instance in the range
% |[.005 .1]*n^2| (where |n^2| is the number of pixels). Use the function
% |polyfit| to compute the linear regression.

exo3;

%%
% The compressed sensing recovery of a compressible signal from noiseless
% measurements can be performed by pure L1 minimization. It is however
% better to use a relaxed L1 minimization

%%
% |min_{x1} 1/2*norm(y-U*x1)^2 + lambda*norm(x1,1)|

%%
% for a well chosen |lambda|.

%%
% We compute the full regularization path for all the |lambda| values.

% measures
y = U*x;
% L1 resolution
options.niter = Inf;
[X1,lambda_list,sparsity_list] = perform_homotopy(U,y,options);

%%
% We can display the L2 compressed sensing approximation error as a
% function of the iteration step. You can see that it is not strictly
% decaying. A slight regularization somewhat decreases the error.
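%%
% _Aside:_ the |-2*alpha+1| slope of the non-linear approximation error
% claimed above can be checked numerically. This NumPy sketch mirrors the
% MATLAB computation, with |np.polyfit| standing in for |polyfit|; the fit
% range |[10,100]| is an arbitrary mid-range choice.

```python
import numpy as np

n, alpha = 512, 1.5
x = np.arange(1, n + 1, dtype=float) ** (-alpha)   # k-th coefficient ~ k^(-alpha)

xs = np.sort(np.abs(x))[::-1]                      # magnitudes, decreasing
err_nl = np.cumsum((xs ** 2)[::-1])[::-1]          # err_nl[M] = energy left after
err_nl /= err_nl[0]                                # keeping the M largest (normalized)

# linear fit of log10(err) vs log10(M) over a mid range
M = np.arange(10, 101)
slope, _ = np.polyfit(np.log10(M), np.log10(err_nl[M]), 1)
print("fitted slope:", slope, " theory:", -2 * alpha + 1)
```

% The fitted slope is close to the theoretical value |-2*alpha+1 = -2|;
% it is slightly steeper because of the finite-length truncation of the tail.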
% approximation error along the path
m = size(X1,2);
err = sum( (X1 - repmat(x, [1 m])).^2 );
clf;
plot(err / norm(x)); axis([1 m 0 .2]);
set_label('#step', 'Error');

%%
% The number of samples is |p|. We can compute the number |q| of
% non-linear approximation terms that gives the same approximation error.
% The ratio |p/q>1| gives the compressed sensing sub-optimality constant.
% The closer it is to 1, the better.

% the best possible compressed sensing error
[e,i] = compute_min(err(:), 1);
xcs = X1(:,i);
% compute the equivalent number of NL terms
[tmp,q] = compute_min(abs(e-err_nl(:)), 1);
disp(strcat(['Compressed sensing sub-optimality factor: p/q=' num2str(p/q,2)]));
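%%
% _Aside:_ to make the |p/q| computation concrete, here is a NumPy sketch
% on made-up inputs: the error level |e_cs| and the modelled decay of
% |err_nl| below are hypothetical stand-ins for the quantities computed
% above, not actual results.

```python
import numpy as np

p = 100                                       # number of CS measurements, as in the tour
alpha = 1.5
M = np.arange(1, 513, dtype=float)
err_nl = M ** (-2 * alpha + 1)                # model NL error decay M^(-2*alpha+1)
e_cs = 2e-4                                   # hypothetical best CS error on the path

# q = number of non-linear approximation terms achieving the same error
q = int(np.argmin(np.abs(err_nl - e_cs))) + 1
print("sub-optimality factor p/q =", p / q)
```

% The factor is larger than 1: compressed sensing needs more measurements
% than the ideal non-linear approximation needs coefficients.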