lda.m
function [DS2, discrimVec, eigValues] = lda(DS, discrimVecNum)
%lda: Linear discriminant analysis
% Usage:
% [DS2, discrimVec, eigValues] = lda(DS, discrimVecNum)
% DS: input data set.
% Please try "DS=prData('iris')" to get an example of DS.
% discrimVecNum: No. of discriminant vectors
% DS2: output data set, with new feature vectors
%
% Reference:
% J. Duchene and S. Leclercq, "An Optimal Transformation for
% Discriminant and Principal Component Analysis," IEEE Trans. on
% Pattern Analysis and Machine Intelligence,
% Vol. 10, No. 6, November 1988.
%
% Type "lda" for a self-demo.
% Roger Jang, 19990829, 20030607
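%
% Example (a minimal sketch; assumes prData and dcprDataPlot from the same toolbox are on the path):
%     DS = prData('iris');
%     [DS2, discrimVec, eigValues] = lda(DS, 2);    % Keep the 2 best discriminant vectors
%     figure; dcprDataPlot(DS2);                    % Plot the projected data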
if nargin<1, selfdemo; return; end
if ~isstruct(DS)
    fprintf('Please try "DS=prData(''iris'')" to get an example of DS.\n');
    error('The input DS should be a structure variable!');
end
if nargin<2, discrimVecNum=size(DS.input,1); end
% ====== Initialization
m = size(DS.input,1);    % Dimension of each data point
n = size(DS.input,2);    % No. of data points
A = DS.input;
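% For crisp class labels, the scatter matrices below are computed via the identities
%   W = \sum_k \sum_{x in class k} (x-mu_k)*(x-mu_k)' = A*A' - \sum_k n_k*mu_k*mu_k'
%   B = \sum_k n_k*(mu_k-mu)*(mu_k-mu)' = M - n*mu*mu'
% where n_k and mu_k are the size and mean of class k, and mu is the overall mean.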
if size(DS.output, 1)==1    % Crisp output
    classLabel = DS.output;
    [diffClassLabel, classSize] = elementCount(classLabel);
    classNum = length(diffClassLabel);
    mu = mean(A, 2);
    % ====== Compute B and W
    % ====== B: between-class scatter matrix
    % ====== W: within-class scatter matrix
    % M = \sum_k n_k*mu_k*mu_k', with n_k the size and mu_k the mean of class k
    M = zeros(m, m);
    for i = 1:classNum
        index = find(classLabel==diffClassLabel(i));
        classMean = mean(A(:, index), 2);
        M = M + length(index)*classMean*classMean';
    end
    W = A*A'-M;
    B = M-n*mu*mu';
else    % Potential fuzzy output
    error('Fuzzy (non-crisp) DS.output is not supported by this implementation.');
end
% ====== Find the best discriminant vectors
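% Each discriminant vector d maximizes the Fisher criterion (d'*B*d)/(d'*W*d),
% whose maximizers are the leading eigenvectors of inv(W)*B. After each vector is
% found, Q is deflated so that the next eigenvector is orthogonal to the columns
% already collected in D (the scheme of Duchene & Leclercq cited above).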
invW = inv(W);
Q = invW*B;
D = [];
eigValues = zeros(1, discrimVecNum);
for i = 1:discrimVecNum
    [eigVec, eigVal] = eig(Q);
    [eigValues(i), index] = max(diag(eigVal));
    D = [D, eigVec(:, index)];    % Each column of D is an eigenvector
    Q = (eye(m)-invW*D*inv(D'*invW*D)*D')*invW*B;
end
DS2=DS;
DS2.input = D(:,1:discrimVecNum)'*A;
discrimVec = D;
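% Rows of DS2.input correspond to the discriminant vectors in the order they were
% extracted, so keeping only the first k rows keeps the k leading LDA features.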
% ====== Self demo
function selfdemo
% ====== Self demo using IRIS dataset
DS=prData('iris');
DS2=feval(mfilename, DS);
figure; dcprDataPlot(DS2); xlabel('Input 1'); ylabel('Input 2');
DS2.input=DS2.input(3:4, :);
figure; dcprDataPlot(DS2); xlabel('Input 3'); ylabel('Input 4');
% ====== Leave-one-out errors after using LDA for dimension reduction
DS=prData('iris');
recogRate = knnrLoo(DS);
fprintf('Full data ===> LOO recog. rate = %g%%\n', 100*recogRate);
DS2 = feval(mfilename, DS);
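% LOO recognition rate when only the first i LDA features are kept, for i = 1 to 4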
for i = 1:4
    DS3=DS2; DS3.input=DS3.input(1:i, :);
    [recogRate(i), hitIndex] = knnrLoo(DS3);
    fprintf('LDA dim = %d ===> LOO recog. rate = %g%%\n', i, 100*recogRate(i));
end