
📄 pls_nonneg.m

📁 A digital image restoration program written in MATLAB
💻 MATLAB (M-file)
function [uh,gradnormvec] = pls_nonneg(...
               b_mat,k_hat,alpha,L,newt_params,cg_params)
%
%  Minimize the L2-norm-penalized least squares (standard Tikhonov) functional
%    f(u) = 1/2*(||K*u-z||^2 + alpha*<L*u,u>)
%  subject to the constraint u >= 0.
%
%  K is a first kind integral operator of convolution type; z represents
%  noisy, blurred data. alpha is a positive parameter controlling the tradeoff
%  between fit to the data and the amount of regularization.
%
%  The constrained minimizer is obtained using a projected Newton iteration.
%  Objective functional: f(u) = 1/2*u'*H*u - u'*b,  b = K'*z
%  Gradient:             g(u) = H*u - b
%  Hessian:              H = K'*K + alpha*L
%
%  Reference: Dimitri P. Bertsekas, "Constrained Optimization and Lagrange
%  Multipliers", Academic Press, 1982. See Section 1.5 (Algorithms for
%  Minimization Subject to Simple Constraints) on p. 76.
%  See also the paper by D.P. Bertsekas, "Projected Newton methods for
%  optimization problems with simple constraints", SIAM J. Control Opt.,
%  20 (1982), pp. 221-246.
%
%  Code was written by C. Vogel in August of 1995. It was modified by
%  Matthias Heinkenschloss in Sept. 1995, and then further modified by
%  Vogel.

  global tvplotfigsize;

  newt_maxiter  = newt_params(1);
  newt_steptol  = newt_params(2);
  newt_gradtol  = newt_params(3);
  newt_tab_flag = newt_params(4);
  newt_fig      = newt_params(5);
  GA_param      = newt_params(6);
  ls_maxiter    = newt_params(7);
  tol0          = newt_params(8);

  %  Initialization.
  gradnormvec = [];
  newt_stepnormvec = [];
  cg_conv_rate = [];
  newt_iter = 0;
  stop_flag = 0;
  [nx,ny] = size(b_mat);
  u_low = zeros(nx,ny);      %  Lower bound for u
  u_mat = zeros(nx,ny);      %  Initial guess
  No_constr = zeros(nx,ny);  %  Indicates active set is empty.
  debug = 0;                 %  Set debug = 1 to print some diagnostics.

  %  Perform projected Newton iterations. Keep track of the norm
  %  of the projected gradient.
  while stop_flag == 0
    newt_iter = newt_iter + 1;

    %  Compute gradient g = H*u - b and projected gradient.
    Hu_mat = mult_hbar(u_mat,k_hat,alpha,L,No_constr);
    g_mat = Hu_mat - b_mat;                   %  Gradient
    pgrad = u_mat - max(u_mat-g_mat,u_low);   %  Projected gradient
    gradnorm = norm(pgrad(:));

    %  Compute tolerance used in determining the active set.
    tol = min(tol0,gradnorm);

    %  Determine active indices.
    Active = ((u_low <= u_mat & u_mat <= u_low+tol) & g_mat > 0);

    %  Compute projected Newton direction, s = -Hbar\g, where
    %    Hbar = diag(Active) + diag(1-Active)*H*diag(1-Active).
    [s_mat,cg_residnormvec] = cg(k_hat,-g_mat,Active,alpha,L,cg_params);

    %  Step size selection.
    lambda = 1;
    ls_iter = 0;
    tau1 = 1;          %  Default when no inactive index violates the bound.
    Hu_mat = mult_hbar(u_mat,k_hat,alpha,L,No_constr);
    f_old = .5*u_mat(:)'*Hu_mat(:) - u_mat(:)'*b_mat(:);
    u_old = u_mat;
    decrease = 1;      %  Set for initialization.
    f_new = f_old;     %  Set for initialization.
    if debug == 1
      fprintf(1,' Step size selection \n')
    end

    %  Armijo-like condition (Bertsekas' SICON paper, p. 229).
    while( f_old - f_new < decrease )
      %  Trial iterate.
      u_mat = max(u_old+lambda*s_mat,u_low);
      mask = (u_mat+s_mat < u_low) & ~Active;  %  Inactive indices for which
                                               %  the full step violates the
                                               %  lower bound.
      if sum(sum(mask)) > 0
        tau1 = min(min((u_low(mask)-u_mat(mask))./s_mat(mask)));
      end
      %  The scalar gamma is computed following the footnote on page 229
      %  in Bertsekas' SICON paper.
      gam = min(1,tau1);

      %  Compute required decrease in objective function.
      decrease = GA_param * sum(sum(g_mat.*(-lambda*gam*(1-Active).*s_mat + ...
                 Active.*(u_old-u_mat))));

      %  New function value.
      Hu_mat = mult_hbar(u_mat,k_hat,alpha,L,No_constr);
      f_new = .5*u_mat(:)'*Hu_mat(:) - u_mat(:)'*b_mat(:);
      if debug == 1
        fprintf(1,' Line search iteration %d \n', ls_iter)
        fprintf(1,'   f_old - f_new    = %12.6e \n', f_old-f_new)
        fprintf(1,'   req. decrease    = %12.6e \n', decrease)
        fprintf(1,'   step size lambda = %12.6e \n', lambda)
      end
      lambda = 0.5*lambda;
      ls_iter = ls_iter + 1;
      if( ls_iter > ls_maxiter )
        stop_flag = 4;   %  Set error flag. Max line search steps exceeded.
        disp(' *** Error in pls_nonneg.m. Max no. line search iter exceeded.');
        break            %  Leave the line search with the current iterate.
      end
    end

    stepnorm = norm(u_mat(:)-u_old(:));
    newt_stepnormvec = [newt_stepnormvec; stepnorm];
    gradnormvec = [gradnormvec; gradnorm];
    cg_conv_rate = [cg_conv_rate; -mean(diff(log10(cg_residnormvec)))];

    %  Check stopping criteria.
    if newt_iter >= newt_maxiter
      stop_flag = 1;
    elseif stepnorm < newt_steptol
      stop_flag = 2;
    elseif gradnorm/gradnormvec(1) < newt_gradtol
      stop_flag = 3;
    end

    %  Output results of current iteration.
    if newt_tab_flag == 1
      fprintf(1,'\n\n Projected Newton iteration %3.0f \n',newt_iter)
      fprintf(1,'   ||Proj. grad.|| = %12.6e \n', gradnorm)
      fprintf(1,'   Tol. used for estimation of active ind. = %12.6e \n', tol)
      fprintf(1,'   Number of indices estimated to be active = %d \n',...
         sum(sum(Active)))
      fprintf(1,'   CG mean conv rate = %12.6e \n', cg_conv_rate(newt_iter))
      fprintf(1,'   No. of LS backtrack steps =%3.0f \n', ls_iter)
    end
    if newt_fig > 0
      figure(newt_fig)
      set(newt_fig,...
        'units','pixels',...
        'pos',tvplotfigsize,...
        'numbertitle','off',...
        'name','"Non-Negative Non-Linear Solver"');
      subplot(221)
        imagesc(u_mat), colormap(jet), colorbar
        title('Approx. Solution')
      subplot(222)
        semilogy(gradnormvec,'o')
        title('||proj. gradient||')
      subplot(223)
        semilogy(newt_stepnormvec,'o')
        title('||Newton step||')
      drawnow
    end
  end

  uh = u_mat;   %  Return the computed solution in the first output argument.

  global figure_list;
  uicontrol(newt_fig, ...
    'style','push', ...
    'callback','close(gcf),figure_list(8) = -1;', ...
    'string','Close');
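The core loop above can be seen in miniature on a tiny dense quadratic. The following NumPy sketch (not the original MATLAB, and not using its FFT-based `mult_hbar`/`cg` helpers) mirrors the same quantities: the gradient g = H*u - b, the projected gradient used in the stopping test, the tolerance-based active-set estimate, and the reduced Hessian Hbar = diag(Active) + diag(1-Active)*H*diag(1-Active). For simplicity it solves the reduced system directly instead of by conjugate gradients and uses a plain backtracking line search rather than Bertsekas' exact step-acceptance condition; `projected_newton` and the example matrices are hypothetical names introduced here.

```python
import numpy as np

def projected_newton(H, b, tol0=1e-3, maxiter=50):
    """Minimize f(u) = 1/2*u'*H*u - u'*b subject to u >= 0 (dense demo)."""
    n = b.size
    u = np.zeros(n)                                # initial guess, as in pls_nonneg.m
    for _ in range(maxiter):
        g = H @ u - b                              # gradient g(u) = H*u - b
        pgrad = u - np.maximum(u - g, 0.0)         # projected gradient
        if np.linalg.norm(pgrad) < 1e-10:
            break
        tol = min(tol0, np.linalg.norm(pgrad))     # active-set tolerance
        active = (u <= tol) & (g > 0)              # estimated active indices
        D = np.diag((~active).astype(float))
        Hbar = np.diag(active.astype(float)) + D @ H @ D
        s = np.linalg.solve(Hbar, -g)              # projected Newton direction
        lam = 1.0
        f = 0.5 * u @ H @ u - u @ b
        for _ in range(30):                        # simple backtracking line search
            u_new = np.maximum(u + lam * s, 0.0)   # project trial step onto u >= 0
            if 0.5 * u_new @ H @ u_new - u_new @ b < f:
                break
            lam *= 0.5
        u = u_new
    return u

# Small SPD Hessian H = K'*K + alpha*L with L = I, and data b = K'*z.
K = np.array([[2.0, 1.0], [0.0, 1.0]])
alpha = 0.1
H = K.T @ K + alpha * np.eye(2)
z = np.array([1.0, -2.0])          # the negative component drives u[1] to the bound
b = K.T @ z
u = projected_newton(H, b)
print(u)
```

In this example the second component hits the nonnegativity bound (it is held at exactly 0 by the projection), while the first converges to its unconstrained value along the free subspace, illustrating how the active-set estimate lets the Newton step act only on the inactive variables.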
