collapse.m — from the collection "Implementation of the partial least squares (PLS) algorithm in MATLAB"

function [W12,W23,B2,B3]=collapse(NEURAL,W,Q,P,fac)
%COLLAPSE Calculates final neural net model from NNPLS
% The inputs are parameters in the neural network inner model, (NEURAL),
% the PLS X-block weights (W), Y-block weights (Q), X-block loadings
% (P) and the number of factors to be used (fac). The outputs are the
% weights to the hidden nodes (W12), weights to the output nodes (W23),
% biases on the hidden nodes (B2) and biases on the output nodes (B3). 
% I/O syntax is: [W12,W23,B2,B3]=collapse(NEURAL,W,Q,P,fac);
%
% See also NNPLS, NNPLSPRD

%  Copyright
%  Thomas Mc Avoy
%  1994
%  Distributed by Eigenvector Technologies
%  Modified by BMW on 5-8-95
f=0;
% Need as many copies of the weights as there are sigmoids
temp=eye(size(W(:,1)*W(:,1)'));
% temp accumulates the X-block deflation, so the collapsed weights act on
% the raw inputs rather than on the residuals used while training NNPLS
for i=1:fac
  for j=1:NEURAL(1,i)
% NEURAL(1,i) gives the number of sigmoids used
    W12(f+j,:)=NEURAL(2*NEURAL(1,i)+2+j,i)*W(:,i)'*temp';
    B2(f+j)=NEURAL(NEURAL(1,i)+2+j,i);
  end
  f=f+NEURAL(1,i);
  temp=temp*(eye(size(W(:,i)*W(:,i)'))-W(:,i)*P(:,i)');
end
% Hidden to Output Node Weights are W23
f=0;
B3=zeros(size(Q(:,1)'));
% Need to have as many copies of weights as there are sigmoids
for i=1:fac
  for j=1:NEURAL(1,i)
    W23(j+f,:)=NEURAL(2+j,i)*Q(:,i)';
  end
  B3=B3+NEURAL(2,i)*Q(:,i)';
  f=f+NEURAL(1,i);
end
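The multiplication by `temp'` in the first loop is what lets the collapsed network read factor scores directly from the raw inputs. A minimal NumPy sketch of the underlying identity (stand-in random weights and loadings, not real PLS vectors): deflating the X-block factor by factor and then projecting gives the same scores as projecting the raw X onto the accumulated weights `temp*W(:,i)`:

```python
# Sketch (not part of the original file): why collapse.m multiplies by temp'.
# PLS deflates the X-block each factor: X_{k+1} = X_k - t_k p_k' with
# t_k = X_k w_k, i.e. X_{k+1} = X_k (I - w_k p_k').  Accumulating
# temp = (I - w_1 p_1') ... (I - w_{i-1} p_{i-1}') gives t_i = X (temp w_i).
import numpy as np

rng = np.random.default_rng(0)
n, m, fac = 20, 5, 3
X = rng.standard_normal((n, m))
W = rng.standard_normal((m, fac))   # stand-in X-block weights
P = rng.standard_normal((m, fac))   # stand-in X-block loadings

# Scores via explicit deflation, one factor at a time
Xd = X.copy()
t_deflated = []
for i in range(fac):
    t = Xd @ W[:, i]
    t_deflated.append(t)
    Xd = Xd - np.outer(t, P[:, i])          # X <- X - t p'

# Scores via the collapsed (direct) weights, mirroring collapse.m
temp = np.eye(m)
t_direct = []
for i in range(fac):
    t_direct.append(X @ (temp @ W[:, i]))   # raw X times collapsed weight
    temp = temp @ (np.eye(m) - np.outer(W[:, i], P[:, i]))

err = max(np.max(np.abs(a - b)) for a, b in zip(t_deflated, t_direct))
print(err)  # numerically zero
```

The same accumulation appears in the MATLAB loop as `temp=temp*(eye(...)-W(:,i)*P(:,i)')`, applied after the factor's rows of `W12` are written.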
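For context, a hedged sketch of how the collapsed parameters would be used for prediction (the actual routine is NNPLSPRD; this assumes the inner-model nonlinearity is the logistic sigmoid and follows the row conventions above, where `W12` has one row per hidden sigmoid and `W23` one row per hidden node):

```python
# Assumed forward pass through the collapsed single-hidden-layer network.
# The choice of logistic sigmoid here is an assumption; the exact squashing
# function is defined by the NNPLS toolbox, not by this sketch.
import numpy as np

def nnpls_forward(X, W12, W23, B2, B3):
    """Map raw inputs X through the collapsed network: sigmoid hidden
    layer (W12, B2), then a linear output layer (W23, B3)."""
    H = 1.0 / (1.0 + np.exp(-(X @ W12.T + B2)))  # hidden sigmoid activations
    return H @ W23 + B3                          # linear output layer

# Shape check with dummy parameters: 5 inputs, 6 hidden sigmoids, 2 outputs
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 5))
Y = nnpls_forward(X, rng.standard_normal((6, 5)), rng.standard_normal((6, 2)),
                  rng.standard_normal(6), rng.standard_normal(2))
print(Y.shape)  # (4, 2)
```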