
📄 solverb.m

📁 Pattern recognition toolbox that I used for my graduation thesis; I hope it is useful to everyone!
💻 MATLAB (M-file)
function [w1,b1,w2,b2,k,tr] = solverb(p,t,tp)
%SOLVERB Design radial basis network.
%
%	[W1,B1,W2,B2,TE,TR] = SOLVERB(P,T,DP)
%	  P - RxQ matrix of Q input vectors.
%	  T - SxQ matrix of Q target vectors.
%	  DP - Design parameters (optional).
%	Returns:
%	  W1 - S1xR weight matrix for radial basis layer.
%	  B1 - S1x1 bias vector for radial basis layer.
%	  W2 - S2xS1 weight matrix for linear layer.
%	  B2 - S2x1 bias vector for linear layer.
%	  NR - the number of radial basis neurons used.
%	  TR - training record: [row of errors]
%
%	Design parameters are:
%	  DP(1) - Iterations between updating display, default = 25.
%	  DP(2) - Maximum number of neurons, default = # vectors in P.
%	  DP(3) - Sum-squared error goal, default = 0.02.
%	  DP(4) - Spread of radial basis functions, default = 1.0.
%	Missing parameters and NaN's are replaced with defaults.
%
%	See also NNSOLVE, RADBASIS, SIMRB, SOLVERB.

% Mark Beale, 12-15-93
% Copyright (c) 1992-94 by the MathWorks, Inc.
% $Revision: 1.1 $  $Date: 1994/01/11 16:29:16 $

if nargin < 2, error('Not enough input arguments'),end

% TRAINING PARAMETERS
if nargin == 2, tp = []; end
[r,q] = size(p);
tp = nndef(tp,[25 q 0.02 1]);
df = tp(1);
eg = tp(3);
b = sqrt(-log(.5))/tp(4);
[s2,q] = size(t);
mn = min(q,tp(2));

% PLOTTING FLAG
plottype = max(r,s2) == 1;

% RADIAL BASIS LAYER OUTPUTS
P = exp(-(distm(p'))*(b*b));
PP = sum(P.*P)';
d = t';
dd = sum(d.*d)';

% CALCULATE "ERRORS" ASSOCIATED WITH VECTORS
e = ((P' * d)' .^ 2) ./ (dd * PP');

% PICK VECTOR WITH MOST "ERROR"
ee = sum(e.^2,1);
[me,pick] = max(ee);
%pick = nnfmc(e);
used = [];
left = 1:q;
W = P(:,pick);
P(:,pick) = []; PP(pick,:) = [];
e(:,pick) = [];
used = [used left(pick)];
left(pick) = [];

% CALCULATE ACTUAL ERROR
w1 = p(:,used)';
%a1 = exp(-(distm(zeros(1,2),0.5*ones(4,2)))*(b*b))
a1 = exp(-(distm(w1,p'))*(b*b));
%a1 = radbas(dist(w1,p)*b);
[w2,b2] = solvelin(a1,t);
a2 = w2*a1 + b2*ones(1,size(a1,2));
%a2 = purelin(w2*a1,b2);
sse = sumsqr(t-a2);

% TRAINING RECORD
tr = zeros(1,mn);
tr(1) = sse;

% PLOTTING
%clg
%if plottype
%  h = plotfa(p,t,p,a2);
%else
%  h = ploterr(tr(1),eg);
%end

for k = 1:(mn-1)

  % CHECK ERROR
  if (sse < eg), k=k-1; break, end

  % CALCULATE "ERRORS" ASSOCIATED WITH VECTORS
  wj = W(:,k);
  %---- VECTOR CALCULATION
  a = wj' * P / (wj'*wj);
  P = P - wj * a;
  PP = sum(P.*P)';
  e = ((P' * d)' .^ 2) ./ (dd * PP');

  % PICK VECTOR WITH MOST "ERROR"
  ee = sum(e.^2,1);
  [me,pick] = max(ee);
  %pick = nnfmc(e);
  W = [W, P(:,pick)];
  P(:,pick) = []; PP(pick,:) = [];
  e(:,pick) = [];
  used = [used left(pick)];
  left(pick) = [];

  % CALCULATE ACTUAL ERROR
  w1 = p(:,used)';
  a1 = exp(-(distm(w1,p'))*(b*b));
  %a1 = radbas(dist(w1,p)*b);
  [w2,b2] = solvelin(a1,t);
  a2 = w2*a1 + b2*ones(1,size(a1,2));
  %a2 = purelin(w2*a1,b2);
  sse = sumsqr(t-a2);

  % TRAINING RECORD
  tr(k+1) = sse;

  % PLOTTING
  % if rem(k,df) == 0
  %   if plottype
  %     delete(h);
  %     h = plot(p,a2,'m');
  %     drawnow;
  %   else
  %     h = ploterr(tr(1:(k+1)),eg,h);
  %   end
  % end
end

[S1,R] = size(w1);
b1 = ones(S1,1)*b;

% TRAINING RECORD
tr = tr(1:(k+1));

% PLOTTING
%if rem(k,df) ~= 0
%  if plottype
%    delete(h);
%    plot(p,a2,'m');
%    drawnow;
%  else
%    ploterr(tr,eg,h);
%  end
%end

% WARNINGS
%if sse > eg
%  disp(' ')
%  disp('SOLVERB: Network error did not reach the error goal.')
%  disp('  More neurons may be necessary, or try using a')
%  disp('  wider or narrower spread constant.')
%  disp(' ')
%end
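The help text above documents the call signature and design parameters but gives no worked call. The following is a minimal usage sketch, not part of the original file: it assumes the companion routines referenced in the code (nndef, distm, solvelin, sumsqr) from the same legacy Neural Network Toolbox release are on the MATLAB path, and the 1-D sine data and the dp values are made-up illustrations.

% Usage sketch (hypothetical data; assumes nndef, distm, solvelin and
% sumsqr from the same legacy toolbox release are on the path).
p  = -1:0.1:1;            % 1x21 row of input vectors (R = 1, Q = 21)
t  = sin(pi*p);           % 1x21 row of targets (S = 1)
dp = [25 21 0.02 0.8];    % [display interval, max neurons, SSE goal, spread]

[w1,b1,w2,b2,nr,tr] = solverb(p,t,dp);

% Re-simulate the designed network with the same expressions used in the
% function body (the help text also lists SIMRB for this purpose).
a1 = exp(-(distm(w1,p'))*(b1(1)*b1(1)));   % radial basis layer outputs
a2 = w2*a1 + b2*ones(1,size(a1,2));        % linear layer outputs

fprintf('Final SSE %g reached with %d neurons.\n', tr(end), nr);

The radial basis centers returned in w1 are simply the input vectors selected by the error criterion, and b1 repeats the single bias value derived from the spread parameter, so the network can be evaluated with the two lines above alone.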
