% kisomap_train.m
function [TrainY, U, BK0, D, BMLam] = kisomap_train(TrainX, K, dims)
% [TrainY, U, BK0, D, BMLam] = kisomap_train(TrainX, K, dims);
%
% INPUT
% TrainX: training data (one data point per column)
% K: number of nearest neighbors
% dims: target (reduced) dimension
%
% OUTPUT
% TrainY: low-dimensional coordinates of the training data (dims x N)
% U: eigenvectors of the kernel matrix, scaled by 1/sqrt(eigenvalue)
% BK0: Gram (kernel) matrix of the training data in feature space
% D: Euclidean distance matrix of the training data
% BMLam: additive constant that makes the kernel positive semidefinite
%
% This code reuses some functions from Tenenbaum's Isomap implementation.
%
% For details, see
% H. Choi, S. Choi, "Kernel Isomap," Electronics Letters, vol. 40, no. 25, pp. 1612-1613, 2004
%
% 2004.06.04
% by hychoi@postech.ac.kr, http://home.postech.ac.kr/~hychoi/
% Heeyoul Choi
% Dept. of Computer Science
% POSTECH, Korea

% Initialize
opt.disp = 0;                        % suppress eigs display output
D = L2_distance(TrainX, TrainX, 1);  % pairwise Euclidean distances
N = size(D, 1);
landmarks = 1:N;                     % use every point as a landmark
% Neighborhood graph: keep only each point's K nearest neighbors
[tmp, ind] = sort(D);
for i = 1:N
    D(i, ind((2+K):end, i)) = 0;  % drop all but self and the K nearest
end
D = sparse(D);
D = max(D, D');  % symmetrize: keep an edge if either endpoint keeps it
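The neighborhood-graph step above can be sketched in NumPy. The `knn_graph` helper below is hypothetical (not part of this file); it assumes points are stored as columns, as `L2_distance` does, and mirrors the zero-out-then-`max` symmetrization:

```python
import numpy as np

def knn_graph(X, K):
    """X: (d, n) data, one point per column; keep each point's K nearest neighbors."""
    sq = np.sum(X**2, axis=0)
    D2 = sq[:, None] + sq[None, :] - 2 * X.T @ X   # pairwise squared distances
    D = np.sqrt(np.maximum(D2, 0.0))
    n = D.shape[0]
    order = np.argsort(D, axis=0)                  # sort each column, like sort(D)
    G = D.copy()
    for i in range(n):
        G[order[K + 1:, i], i] = 0.0               # drop all but self and K nearest
    return np.maximum(G, G.T)                      # symmetrize, like max(D, D')
```

An edge survives whenever either endpoint counts the other among its K nearest, which keeps the graph connected more often than requiring mutual neighbors.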
% Shortest paths: geodesic distances on the neighborhood graph
GD = dijkstra(D, landmarks);
% Kernel matrix: double centering of the squared geodesic distances,
% B0 = -1/2 * J * (GD.^2) * J with J = I - ones(N)/N
B0 = -.5*(GD.^2 - sum(GD.^2)'*ones(1,N)/N - ones(N,1)*sum(GD.^2)/N + sum(sum(GD.^2))/(N^2));
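The long expression for `B0` is the double-centering formula of classical MDS. An equivalent NumPy sketch (with a hypothetical `double_center` helper) writes it compactly with the centering matrix J = I - (1/N)*ones(N,N):

```python
import numpy as np

def double_center(GD):
    """B0 = -1/2 * J * (GD^2) * J with J = I - ones(n,n)/n (classical MDS)."""
    S = GD**2
    n = S.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return -0.5 * J @ S @ J
```

Expanding J S J term by term recovers exactly the four-term MATLAB expression: the raw squared distances minus row means, minus column means, plus the grand mean.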
% Modify the kernel to be positive semidefinite: BMLam, the largest
% eigenvalue of the block matrix BM, is the smallest constant c such
% that the shifted distances GD + c*(1 - I) yield a PSD kernel
B0_ = -.5*(GD - sum(GD)'*ones(1,N)/N - ones(N,1)*sum(GD)/N + sum(sum(GD))/(N^2));
BM = [zeros(N, N), 2*B0; -eye(N), -4*B0_];
[BMU, BMLam] = eigs(BM, 1, 'LR', opt);
%% Constant adding: shift every off-diagonal geodesic distance by BMLam
TKD = GD + BMLam .* (1 - eye(N));
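The constant-adding step can be sketched as follows, using a hypothetical NumPy helper `constant_shift`: the largest real eigenvalue of the 2N-by-2N block matrix is the smallest additive constant that makes the shifted distances yield a positive semidefinite kernel, even when the input distances are not Euclidean:

```python
import numpy as np

def constant_shift(GD):
    """Return (c, shifted distances) for the constant-adding step."""
    n = GD.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B0  = -0.5 * J @ (GD**2) @ J            # from squared distances
    B0_ = -0.5 * J @ GD @ J                 # from raw distances
    M = np.block([[np.zeros((n, n)), 2 * B0],
                  [-np.eye(n),      -4 * B0_]])
    c = np.max(np.linalg.eigvals(M).real)   # largest real eigenvalue
    return c, GD + c * (1 - np.eye(n))      # shift off-diagonal entries only
```

A three-point example with d(1,3) = 10 but d(1,2) + d(2,3) = 5 violates the triangle inequality, so the unshifted kernel is indefinite; after the shift it is PSD.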
% Kernel matrix from the shifted geodesic distances
BK0 = -.5*(TKD.^2 - sum(TKD.^2)'*ones(1,N)/N - ones(N,1)*sum(TKD.^2)/N + sum(sum(TKD.^2))/(N^2));
%% Re-centering is unnecessary: BK0 is already doubly centered
% BK0 = BK0 - ones(N, N)/N*BK0 - BK0 *ones(N,N)/N + ones(N,N)*BK0*ones(N,N)/N^2;
[U, Lam] = eigs(BK0, dims, 'LR', opt);
% U = cnp_m(BK0, dims, 30);
% Lam = (U'*BK0*U);
for i = 1:dims
    U(:, i) = U(:, i) ./ Lam(i,i)^0.5;  % scale each eigenvector by 1/sqrt(lambda_i)
end
TrainY = Lam * U';                      % embedding: sqrt(Lam) * eigenvectors'
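The final step is the classical-MDS embedding: dividing each eigenvector by sqrt(lambda) and then left-multiplying by Lam gives Y = Lam^{1/2} * V', so the Gram matrix of the embedding reproduces the best rank-`dims` approximation of the kernel. A NumPy sketch with a hypothetical `embed` helper:

```python
import numpy as np

def embed(BK, d):
    """Top-d classical-MDS coordinates from a kernel matrix BK."""
    w, V = np.linalg.eigh(BK)
    idx = np.argsort(w)[::-1][:d]   # d largest eigenvalues
    lam, V = w[idx], V[:, idx]
    U = V / np.sqrt(lam)            # U(:,i) = V(:,i)/sqrt(lam_i)
    return np.diag(lam) @ U.T       # Y = Lam * U' = sqrt(Lam) * V'
```

Because Y'Y = V diag(lam) V', pairwise distances between the columns of Y approximate the geodesic distances encoded in the kernel.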