
nonlin_gg.m

Collection: Sparse Estimation Algorithms by Blumensath and Davies: min ||x||_0 subject to ||y - Ax||_2 < e
Page 1 of 2
function [s, err_cost, iter_time]=nonlin_gg(x,F,C,m,varargin)
% nonlin_gg: Nonlinear sparse approximation by greedy gradient search.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Usage
% [s, err_cost, iter_time]=nonlin_gg(x,F,C,m,'option_name','option_value')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Input
%   Mandatory:
%               x   Observation vector to be decomposed
%               F   Function mapping from the sparse coefficient domain to
%                   the observation domain.
%               C   Function handle to cost function to be optimised
%               m   length of s
%
%   Possible additional options:
%   (specify as many as you want using 'option_name','option_value' pairs)
%   See below for explanation of options:
%__________________________________________________________________________
%   option_name    |     available option_values                | default
%--------------------------------------------------------------------------
%   stopCrit       | M, corr, cost, cost_change                 | M
%   stopTol        | number (see below)                         | n/4
%   maxIter        | positive integer (see below)               | n
%   verbose        | true, false                                | false
%   start_val      | vector of length m                         | zeros
%   weights        | vector of length m containing weights      | ones
%                  | to bias atom selection                     |
%   grad_pert      | perturbation size to numerically evaluate  | 1e-6
%                  | gradient                                   |
%   step_size      | step size for gradient optimisation step   | 0.1
%   grad_stop      | change in the norm of the gradient below   | 1e-3
%                  | which gradient optimisation stops          |
%   max_grad       | maximum number of gradient steps           | 1000
%   grad           | function handle to gradient of cost        |
%                  | function to be optimised                   |
%   optimiser      | function handle to a function that         |
%                  | optimises F for given initial value and    |
%                  | given subset of coefficients               |
%   PlotFunc       | function handle to a function that         |
%                  | plots estimate of signal given sparse      |
%                  | coefficient estimate                       |
%
%   Available stopping criteria:
%               M           -   Extracts exactly M = stopTol elements.
%               corr        -   Stops when maximum correlation between
%                               residual and atoms is below stopTol value.
%               cost        -   Stops when cost C is below stopTol value.
%               cost_change -   Stops when the change in cost C falls below
%                               stopTol value.
%
%   stopTol: Value for stopping criterion.
%
%   maxIter: Maximum number of allowed iterations.
%
%   verbose: Logical value to allow algorithm progress to be displayed.
%
%   start_val: Allows the algorithm to start from a partial solution.
%
%   optimiser: Allows the specification of a function OPT(s,x,INDEX).
%              OPT must return the full coefficient vector s after
%              optimisation of the cost function C over a given subset of
%              coefficients in s indexed by INDEX.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Outputs
%    s              Solution vector
%    err_cost       Vector containing cost of approximation error for each
%                   iteration
%    iter_time      Vector containing times for each iteration
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Description
%   The algorithm greedily selects new elements in each iteration based on
%   the gradient of the cost function to be minimised with respect to the
%   coefficients. If the exact gradient is not specified, the gradient is
%   approximated using the difference between the cost at s and at
%   s+DELTA. Once a new element has been selected, gradient optimisation
%   is used to minimise the cost function based on the selected elements.
%
% References: T. Blumensath, M. E. Davies; "Gradient Pursuit for
%             Non-Linear Sparse Signal Modelling", submitted to EUSIPCO,
%             2008.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                    Default values and initialisation
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

[n1 n2]=size(x);
if n2 == 1
    n=n1;
elseif n1 == 1
    x=x';
    n=n2;
else
   display('x must be a vector.');
   return
end

sigsize         = x'*x/n;
initial_given   = 0;
do_plot         = 0;
err_cost        = [];
iter_time       = [];
STOPCRIT        = 'M';
STOPTOL         = ceil(n/4);
MAXITER         = n;
verbose         = false;
s_initial       = zeros(m,1);
w               = ones(m,1);
g_tol           = 1e-6;
step_size       = 1;
grad_tol        = 1e-3;
grad_given      = 0;
optimiser_given = 0;
Max_Grad        = 100;

if verbose
   display('Initialising...')
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                           Output variables
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

switch nargout
    case 3
        comp_err=true;
        comp_time=true;
    case 2
        comp_err=true;
        comp_time=false;
    case 1
        comp_err=false;
        comp_time=false;
    case 0
        error('Please assign output variable.')
    otherwise
        error('Too many output arguments specified')
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                       Look through options
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Put options into nice format
Options={};
OS=nargin-4;
c=1;
for i=1:OS
    if isa(varargin{i},'cell')
        CellSize=length(varargin{i});
        ThisCell=varargin{i};
        for j=1:CellSize
            Options{c}=ThisCell{j};
            c=c+1;
        end
    else
        Options{c}=varargin{i};
        c=c+1;
    end
end
OS=length(Options);
if rem(OS,2)
   error('Something is wrong with argument name and argument value pairs.')
end

for i=1:2:OS
    switch Options{i}
        case {'stopCrit'}
            if (strmatch(Options{i+1},{'M'; 'corr'; 'cost'; 'cost_change'},'exact'));
                STOPCRIT = Options{i+1};
            else error('stopCrit must be char string [M, corr, cost, cost_change]. Exiting.'); end
        case {'stopTol'}
            if isa(Options{i+1},'numeric'); STOPTOL = Options{i+1};
            else error('stopTol must be number. Exiting.'); end
        case {'grad_pert'}
            if isa(Options{i+1},'numeric'); g_tol = Options{i+1};
            else error('grad_pert must be number. Exiting.'); end
        case {'grad'}
            if isa(Options{i+1},'function_handle'); grad_given = 1; Grad = Options{i+1};
            else error('grad must be function handle. Exiting.'); end
        case {'grad_stop'}
            if isa(Options{i+1},'numeric'); grad_tol = Options{i+1};
            else error('grad_stop must be number. Exiting.'); end
        case {'step_size'}
            if isa(Options{i+1},'numeric'); step_size = Options{i+1};
            else error('step_size must be number. Exiting.'); end
        case {'weights'}
            if isa(Options{i+1},'numeric') & length(Options{i+1}) == m;
                w = Options{i+1};
            else error('weights must be a vector of length m. Exiting.'); end
        case {'max_grad'}
            if isa(Options{i+1},'numeric') & length(Options{i+1}) == 1;
                Max_Grad = Options{i+1};
            else error('max_grad must be scalar. Exiting.'); end
        case {'maxIter'}
            if isa(Options{i+1},'numeric'); MAXITER = Options{i+1};
            else error('maxIter must be a number. Exiting.'); end
        case {'verbose'}
            if isa(Options{i+1},'logical'); verbose = Options{i+1};
            else error('verbose must be a logical. Exiting.'); end
        case {'start_val'}
            if isa(Options{i+1},'numeric') & length(Options{i+1}) == m;
                s_initial = Options{i+1};
                initial_given=1;
            else error('start_val must be a vector of length m. Exiting.'); end
        case {'optimiser'}
            if isa(Options{i+1},'function_handle'); OPT = Options{i+1}; optimiser_given=1;
            else error('optimiser must be function handle. Exiting.'); end
        case {'PlotFunc'}
            if isa(Options{i+1},'function_handle'); PlotFunc = Options{i+1}; do_plot=1;
            else error('PlotFunc must be function handle. Exiting.'); end
        otherwise
            error('Unrecognised option. Exiting.')
    end
end

if strcmp(STOPCRIT,'M')
    maxM=STOPTOL;
else
    maxM=MAXITER;
end
if nargout >=2
    err_cost = zeros(maxM,1);
end
if nargout ==3
    iter_time = zeros(maxM,1);
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                        Do we start from zero or not?
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

if initial_given ==1;
    IND          = find(s_initial);
    s            = s_initial;
else
    IND          = [];
    s            = s_initial;
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                        Main algorithm
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

if verbose
   display('Main iterations...')
end
tic
t=0;
done = 0;
ne   = C(s,x);
