
fscl.m

From: a MATLAB toolbox for competitive learning
% MATLAB implementation of Frequency-Sensitive Competitive Learning (FSCL)
% algorithm using the SOMTOOLBOX
%
% Source: 
%    S. C. Ahalt, A. K. Krishnamurthy, P. Chen, and D. E. Melton (1990)
%          "Competitive learning algorithms for vector quantization", 
%          Neural Networks, vol. 3, no. 3, pp. 277-290.
% 
% Code author: Guilherme A. Barreto
% Date: October 21, 2005

clear; clc; close all;

% Load data
load dataset1.dat;
Dw=dataset1; clear dataset1

% Get size of data matrix (1 input vector per row)
[LEN_DATA, DIM_INPUT] = size(Dw);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Create a SOM structure  %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Mx = 10;                  % Number of neurons
MAP_SIZE = [Mx 1];        % Size of SOM map (always use 1-D map)

sMap = som_map_struct(DIM_INPUT,'msize', MAP_SIZE,'hexa','sheet');

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Different weights initialization methods %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% sMap  = som_randinit(Dw, sMap);   % Random weight initialization
% sMap  = som_lininit(Dw, sMap);   % Linear weight initialization
I=randperm(LEN_DATA); sMap.codebook=Dw(I(1:Mx),:);  % Select Mx data vectors at random

% Train the Sequential FSCL algorithm
disp('----------------------------------------');
disp('Running the FSCL Algorithm... Please wait!');

Ep=100;      % number of training epochs

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Learning rate scheduling %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
ai=0.1;    % Initial learning rate (try values from 0.1 to 0.5)
af=0.01;   % Final learning rate
Tmax=LEN_DATA*Ep;      % Maximum number of training iterations
Ts=0:Tmax;                      % Discrete time steps
alpha=ai*power(af/ai,Ts/Tmax);  % "alpha" decays exponentially from "ai" towards "af"

% Parameters of the FSCL algorithm
counter=zeros(Mx,1);  % Counter for the number of victories
freq=ones(Mx,1);      % Relative frequency of victories
z=0.1;                % Constant exponent
Qerr=zeros(Ep,1);     % Quantization error per epoch (preallocated)

for t=1:Ep    % Loop over the training epochs
    fprintf('Epoch %d of %d\n', t, Ep);  % Show current epoch

    % Shuffle input data vectors at each training epoch
    I=randperm(LEN_DATA);  % shuffle the row indices
    Dw=Dw(I,:);

    for tt=1:LEN_DATA   % Loop over samples within an epoch
        Di=sqrt(som_eucdist2(sMap,Dw(tt,:))); % Compute Euclidean distances for all neurons
        
        WDi=freq.*Di;                 % Compute weighted Euclidean distances 
        [WDi_min win]=min(WDi);       % Find the winner (BMU) using WDi
        
        counter(win)=counter(win)+1;   % Increment the number of victories of the winner
        
        % Update the weights of the winning neuron only
        iter=(t-1)*LEN_DATA+tt;   % Current global iteration (do not reuse the schedule variable)
        sMap.codebook(win,:)=sMap.codebook(win,:)+alpha(iter)*(Dw(tt,:)-sMap.codebook(win,:));
        
        % Update the relative victory frequency of the winner
        freq(win) = power(counter(win)/iter,z);
    end
        
    % Quantization error per training epoch
    Qerr(t) = som_quality(sMap, Dw);
end


% Plot prototypes and data together (assumes 2-D input data)
figure, plot(Dw(:,1),Dw(:,2),'+r'), hold on
plot(sMap.codebook(:,1),sMap.codebook(:,2),'b*')
title('Prototype vectors in input space'), hold off

% Plot quantization error evolution per training epoch
figure, plot(Qerr) 
title('Quantization Error per Training Epoch')

% A bar plot of the number of victories per neuron throughout training epochs
figure, bar(1:Mx,counter)
title('Victories per neuron')
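For readers without MATLAB or the SOM Toolbox, the core of the script — frequency-weighted winner selection, exponential learning-rate decay from `ai` to `af`, and the conscience update `freq = (wins/iter)^z` — can be sketched in plain NumPy. The toy two-cluster data and all variable names below are illustrative stand-ins, not part of the original script or `dataset1.dat`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters in 2-D (a stand-in for dataset1.dat).
data = np.vstack([rng.normal(0.0, 0.1, (100, 2)),
                  rng.normal(1.0, 0.1, (100, 2))])

n_units = 4          # number of neurons (Mx in the script)
epochs = 20          # Ep
z = 0.1              # conscience exponent
ai, af = 0.1, 0.01   # initial / final learning rates
t_max = epochs * len(data)

# Prototypes initialized from randomly chosen data vectors, as in the script.
codebook = data[rng.permutation(len(data))[:n_units]].copy()
wins = np.zeros(n_units)   # victory counters
freq = np.ones(n_units)    # relative victory frequencies

it = 0
for _ in range(epochs):
    for x in data[rng.permutation(len(data))]:   # shuffle each epoch
        it += 1
        alpha = ai * (af / ai) ** (it / t_max)   # exponential decay ai -> af
        d = np.linalg.norm(codebook - x, axis=1) # Euclidean distances
        win = int(np.argmin(freq * d))           # frequency-weighted winner
        wins[win] += 1
        codebook[win] += alpha * (x - codebook[win])
        freq[win] = (wins[win] / it) ** z        # conscience update
```

Because frequent winners see their distances inflated by `freq`, under-used prototypes eventually win, which is what spreads the codebook across both clusters instead of letting one neuron capture everything.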
