
llm.m (from the collection "MATLAB toolbox for competitive learning")
% MATLAB implementation of local linear mapping for time series prediction
% 
% OBS: This approach can be understood as an online version of the model
% simulated in the file 'local_AR_models_using_data_vectors.m' ('lard.m')
%
% Based on the prediction method described in the following paper:
% 
%  J. Walter, H. Ritter & K. Schulten (1990). "Non-linear prediction with
%  the self-organizing map", Proceedings of the IEEE International Joint
%  Conference on Neural Networks, pp. 587-592.
%
% Author: Guilherme A. Barreto
% Date: September 21, 2006
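%
% Model summary (as implemented below): each SOM neuron i stores two
% vectors of length p: a clustering prototype w_i, matched against the
% current input window, and a coefficient vector a_i of a local linear
% (AR) model. For each window x(t), the winner (BMU) is found from the
% prototypes, the one-step-ahead prediction is yhat = dot(a_win, x(t)),
% and then w_i is updated by the usual SOM rule while a_i is updated by
% a neighborhood-weighted LMS rule, for the winner and its neighbors.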

clear; clc; close all;

%--------------- Organize the training data ------------
load train_series_data.dat;    % Load the time series to be clustered

%% Building the input vectors from an univariate time series
p=5;       % Dimension of the input vector (length of the time window)
lap=p-1;      % Amount of overlapping between consecutive input vectors
Dw=buffer(train_series_data,p,lap);   % Build the data vectors
if lap > 0
    Dw = Dw(:,p:end)';   % Eliminate the first 'p-1' vectors padded with zeros
else
    Dw = Dw';
end
Dw=fliplr(Dw);               % Reverse column order: most recent sample first
Dw=Dw+0.01*randn(size(Dw));  % Add some gaussian noise to the data
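% Illustration of the windowing (before the added noise): with p=5 and the
% series y = 1,2,...,8, the buffer/trim/fliplr steps above yield the rows
%   [5 4 3 2 1]
%   [6 5 4 3 2]
%   [7 6 5 4 3]
%   [8 7 6 5 4]
% so row t holds the window ending at sample t+p-1, most recent first, and
% the target for row t is the first entry of row t+1 (Ytrue below).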

Ytrue=Dw(2:end,1);           % One-step-ahead targets: first entry of the next window

[LEN_DATA DIM_INPUT]=size(Dw);  % Data matrix size (1 input vector per row)

%-------------------------------------------------------
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Create the network structure  %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Mx = 5;   % Number of neurons in the X-dimension
My = 1;    % Number of neurons in the Y-dimension
MAP_SIZE = [Mx My];        % Size of SOM map
sMap = som_map_struct(2*DIM_INPUT,'msize',MAP_SIZE,'rect','sheet');
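% Codebook layout: each row of sMap.codebook has 2*DIM_INPUT entries.
% Columns 1:DIM_INPUT hold the clustering prototype w_i; columns
% DIM_INPUT+1:end hold the coefficient vector a_i of the local AR model.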

sMap.codebook = rand(size(sMap.codebook));   % Random weight initialization

Co=som_unit_coords(sMap); % Coordinates of neurons in the map

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Specification of some training parameters %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
si=round(max(Mx,My)/2);  % Initial neighborhood
sf=0.001;                % Final neighborhood
ei=0.1;                  % Initial learning rate
ef=0.001;              % Final learning rate
Nep=10;                % Number of epochs
Tmax=LEN_DATA*Nep;     % Maximum number of iterations
T=0:Tmax;              % Time index for training iteration
eta=ei*power(ef/ei,T/Tmax);  % Learning rate vector
sig=si*power(sf/si,T/Tmax);  % Neighborhood width vector
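% Both schedules decay geometrically between their initial and final values:
%   eta(T) = ei*(ef/ei)^(T/Tmax),   sig(T) = si*(sf/si)^(T/Tmax)
% Note that T is reused below as the scalar iteration counter that indexes
% these precomputed vectors.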

%%%%%%%%%%%%%%%%%%%%%%%%
% Train Kohonen Map   %%
%%%%%%%%%%%%%%%%%%%%%%%%
for t=1:Nep,  % loop for the epochs
    
    fprintf('epoch = %d\n', t);   % Show current epoch
    for tt=1:LEN_DATA-1,
        
         T=(t-1)*LEN_DATA+tt;    % iteration throughout the epochs
         
         % Compute distances of all prototype vectors to current input
         Di=sqrt(som_eucdist2(sMap.codebook(:,1:DIM_INPUT),Dw(tt,:)));

         [Di_min win] = min(Di);         % Find the winner (BMU) 
         
         % Prediction procedure
         c=sMap.codebook(win,DIM_INPUT+1:end);  % Coefficient vector of the winner
         Yhat(tt) = dot(c,Dw(tt,:));            % Estimated output
         error_win(tt)= Ytrue(tt)-Yhat(tt);     % Prediction error
     
         % Update the clustering weight vector and the coefficient vector
         % of the winner and of its neighbors
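         % (SOM rule: w_i <- w_i + eta*H*(x - w_i);
         %  LMS rule: a_i <- a_i + eta*H*e_i*x, where e_i is neuron i's own error)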
         for i=1:Mx*My,
             % Squared distance (in map coordinates) between winner and neuron i
             D2=power(norm(Co(win,:)-Co(i,:)),2);
             
             % Compute corresponding value of the neighborhood function
             H=exp(-0.5*D2/(sig(T)*sig(T)));
             
             % Update the clustering weights of neuron i
             sMap.codebook(i,1:DIM_INPUT)=sMap.codebook(i,1:DIM_INPUT) + eta(T)*H*(Dw(tt,:)-sMap.codebook(i,1:DIM_INPUT));
             
             % Update the coefficient vector of the i-th neuron (LMS-like rule).
             % Local variables are used here so that Yhat(tt) above is not
             % clobbered and the built-in function 'error' is not shadowed.
             c = sMap.codebook(i,DIM_INPUT+1:end);  % Coefficient vector of the i-th neuron
             yhat_i = dot(c,Dw(tt,:));              % Neuron i's own prediction
             err_i = Ytrue(tt)-yhat_i;              % Neuron i's own prediction error
             sMap.codebook(i,DIM_INPUT+1:end)=sMap.codebook(i,DIM_INPUT+1:end)+eta(T)*H*err_i*Dw(tt,:);
         end
    end
    SSE(t)=sum(error_win.^2);   % Sum of squared winner errors for this epoch
end

figure; plot(SSE);      % Plot the learning curve
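% The SSE curve typically decreases and flattens as the learning rate and
% neighborhood width anneal; persistent oscillation suggests that the
% initial learning rate 'ei' may be too large.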

%--------------- Organize the testing data ------------
clear Yhat error_win;   % Also clear error_win: stale training errors would
                        % corrupt the NMSE below if the test series is shorter

load test_series_data.dat;    % Load testing time series
Dw=buffer(test_series_data,p,lap); % Build input data vectors

if lap > 0
    Dw = Dw(:,p:end)';   % Eliminate the first 'p-1' vectors padded with zeros
else
    Dw = Dw';
end
Dw=fliplr(Dw);               % Most recent sample first (as in training)
Dw=Dw+0.01*randn(size(Dw));  % Add some gaussian noise to the data

Ytrue=Dw(2:end,1);       % Desired output values
%------------------------------------------------------

[LEN_DATA DIM_INPUT]=size(Dw);  % Data matrix size (1 input vector per row)
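% Testing uses the frozen codebook: only the BMU search and the local AR
% prediction are performed; no weight or coefficient updates take place.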

for tt=1:LEN_DATA-1,     
         % Compute distances of all prototype vectors to current input
         Di=sqrt(som_eucdist2(sMap.codebook(:,1:DIM_INPUT),Dw(tt,:)));

         [Di_min win] = min(Di);         % Find the winning neuron 
         
         % Prediction procedure
         c=sMap.codebook(win,DIM_INPUT+1:end);  % Coefficient vector of the winner
         Yhat(tt) = dot(c,Dw(tt,:));            % Estimated output
         error_win(tt)= Ytrue(tt)-Yhat(tt);     % Prediction error
end

% Plot target and predicted time series
figure; hold on; plot(Ytrue,'r-'); plot(Yhat,'b-'); legend('True','Predicted'); hold off

% Normalized mean squared error
NMSE=var(error_win)/var(Ytrue)
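% NMSE interpretation: the error variance is normalized by the variance of
% the targets, so NMSE << 1 means the model captures most of the signal,
% while NMSE close to 1 is no better than predicting the series mean.
% (No trailing semicolon above, so the value is printed to the console.)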
