📄 jadeR.m
function B = jadeR(X,m)
% B = jadeR(X, m) is an m*n matrix such that Y=B*X are separated sources
% extracted from the n*T data matrix X.
% If m is omitted, B=jadeR(X) is a square n*n matrix (as many sources as sensors)
%
% Blind separation of real signals with JADE.  Version 1.8.  May 2005.
%
% Usage:
%   * If X is an nxT data matrix (n sensors, T samples) then
%     B=jadeR(X) is a nxn separating matrix such that S=B*X is an nxT
%     matrix of estimated source signals.
%   * If B=jadeR(X,m), then B has size mxn so that only m sources are
%     extracted.  This is done by restricting the operation of jadeR
%     to the m first principal components.
%   * Also, the rows of B are ordered such that the columns of pinv(B)
%     are in order of decreasing norm; this has the effect that the
%     `most energetically significant' components appear first in the
%     rows of S=B*X.
%
% Quick notes (more at the end of this file)
%
%  o this code is for REAL-valued signals.  An implementation of JADE
%    for both real and complex signals is also available from
%    http://sig.enst.fr/~cardoso/stuff.html
%
%  o This algorithm differs from the first released implementations of
%    JADE in that it has been optimized to deal more efficiently
%    1) with real signals (as opposed to complex)
%    2) with the case when the ICA model does not necessarily hold.
%
%  o There is a practical limit to the number of independent
%    components that can be extracted with this implementation.  Note
%    that the first step of JADE amounts to a PCA with dimensionality
%    reduction from n to m (which defaults to n).  In practice m
%    cannot be `very large' (more than 40, 50, 60... depending on
%    available memory)
%
%  o See more notes, references and revision history at the end of
%    this file and more stuff on the WEB
%    http://sig.enst.fr/~cardoso/stuff.html
%
%  o This code is supposed to do a good job!  Please report any
%    problem to cardoso@sig.enst.fr

% Copyright : Jean-Francois Cardoso.  cardoso@sig.enst.fr

verbose = 1 ;   % Set to 0 for quiet operation

% Finding the number of sources
[n,T] = size(X);
if nargin==1, m=n ; end;   % Number of sources defaults to # of sensors
if m>n ,    fprintf('jade -> Do not ask more sources than sensors here!!!\n'), return, end
if verbose, fprintf('jade -> Looking for %d sources\n',m); end ;

% to do: add a warning about complex signals

% Mean removal
%=============
if verbose, fprintf('jade -> Removing the mean value\n'); end
X = X - mean(X')' * ones(1,T);

%%% whitening & projection onto signal subspace
% ===========================================
if verbose, fprintf('jade -> Whitening the data\n'); end

[U,D]  = eig((X*X')/T) ;  %% An eigen basis for the sample covariance matrix
[Ds,k] = sort(diag(D)) ;  %% Sort by increasing variances
PCs    = n:-1:n-m+1    ;  %% The m most significant princip. comp. by decreasing variance

%% --- PCA ----------------------------------------------------------
B = U(:,k(PCs))' ;         % At this stage, B does the PCA on m components

%% --- Scaling ------------------------------------------------------
scales = sqrt(Ds(PCs)) ;   % The scales of the principal components
B = diag(1./scales)*B ;    % Now, B does PCA followed by a rescaling = sphering

%% --- Sphering -----------------------------------------------------
X = B*X;  %% We have done the easy part: B is a whitening matrix and X is white.

clear U D Ds k PCs scales ;

%%% NOTE: At this stage, X is a PCA analysis in m components of the real data, except that
%%% all its entries now have unit variance.  Any further rotation of X will preserve the
%%% property that X is a vector of uncorrelated components.  It remains to find the
%%% rotation matrix such that the entries of X are not only uncorrelated but also `as
%%% independent as possible'.  This independence is measured by correlations of order
%%% higher than 2.  We have defined such a measure of independence which
%%%   1) is a reasonable approximation of the mutual information
%%%   2) can be optimized by a `fast algorithm'
%%% This measure of independence also corresponds to the `diagonality' of a set of
%%% cumulant matrices.  The code below finds the `missing rotation' as the matrix which
%%% best diagonalizes a particular set of cumulant matrices.
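%% Added sanity-check sketch (illustration only; not part of Cardoso's original jadeR.m).
%% After the sphering above, the rows of X should be uncorrelated with unit variance, so
%% the sample covariance of the whitened data should be numerically the identity matrix.
%% The variable Rw below is introduced here purely for this check.
if 0,
  Rw = (X*X')/T ;                                                    % covariance of whitened X
  fprintf('jade -> whiteness check: %g\n', norm(Rw-eye(m),'fro')) ;  % should be a numerical zero
end;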
%%% Estimation of the cumulant matrices.
% ====================================
if verbose, fprintf('jade -> Estimating cumulant matrices\n'); end

%% Reshaping of the data, hoping to speed up things a little bit...
X = X';

dimsymm = (m*(m+1))/2;     % Dim. of the space of real symm matrices
nbcm    = dimsymm ;        % number of cumulant matrices
CM      = zeros(m,m*nbcm); % Storage for cumulant matrices
R       = eye(m);
Qij     = zeros(m);        % Temp for a cum. matrix
Xim     = zeros(m,1);      % Temp
Xijm    = zeros(m,1);      % Temp
Uns     = ones(1,m);       % for convenience
%% I am using a symmetry trick to save storage.  I should write a short note one of these
%% days explaining what is going on here.
Range   = 1:m ;            % will index the columns of CM where to store the cum. mats.

for im = 1:m
  Xim  = X(:,im) ;
  Xijm = Xim.*Xim ;
  %% Note to myself: the -R on the next line can be removed: it does not affect
  %% the joint diagonalization criterion
  Qij         = ((Xijm(:,Uns).*X)' * X)/T - R - 2 * R(:,im)*R(:,im)' ;
  CM(:,Range) = Qij ;
  Range       = Range + m ;
  for jm = 1:im-1
    Xijm        = Xim.*X(:,jm) ;
    Qij         = sqrt(2) * (((Xijm(:,Uns).*X)' * X)/T - R(:,im)*R(:,jm)' - R(:,jm)*R(:,im)') ;
    CM(:,Range) = Qij ;
    Range       = Range + m ;
  end ;
end;

%%%% Now we have nbcm = m(m+1)/2 cumulant matrices stored in a big m x m*nbcm array.
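%% Added layout sketch (illustration only; not part of Cardoso's original jadeR.m).
%% The k-th estimated cumulant matrix is the k-th m x m block of columns of CM, and each
%% block is symmetric by construction.  The variables kk and Mk below are introduced here
%% purely for this check.
if 0,
  kk = 1 ;                          % index of the cumulant matrix to inspect (1..nbcm)
  Mk = CM(:, (kk-1)*m+1 : kk*m) ;   % the kk-th m x m cumulant matrix
  fprintf('jade -> symmetry check: %g\n', norm(Mk-Mk','fro')) ;  % should be a numerical zero
end;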
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% The inefficient code below does the same as above: computing the big CM cumulant matrix.
%% It is commented out, but you can check that it produces the same result.
%% This is supposed to help people understand the (rather obscure) code above.
%% See section 4.2 of the Neural Comp paper referenced below.  It can be found at
%% "http://www.tsi.enst.fr/~cardoso/Papers.PS/neuralcomp_2ppf.ps".
%%
%% if 1,
%%
%%   %% Step one: we compute the sample cumulants
%%   Matcum = zeros(m,m,m,m) ;
%%   for i1=1:m,
%%     for i2=1:m,
%%       for i3=1:m,
%%         for i4=1:m,
%%           Matcum(i1,i2,i3,i4) = mean( X(:,i1) .* X(:,i2) .* X(:,i3) .* X(:,i4) ) ...
%%                                 - R(i1,i2)*R(i3,i4) ...
%%                                 - R(i1,i3)*R(i2,i4) ...
%%                                 - R(i1,i4)*R(i2,i3) ;
%%         end
%%       end
%%     end
%%   end
%%
%%   %% Step 2: we compute a basis of the space of symmetric m*m matrices
%%   CMM = zeros(m, m, nbcm) ;  %% Holds the basis.
%%   icm = 0 ;                  %% index to the elements of the basis
%%   vi  = zeros(m,1);          %% the ith basis vector of R^m
%%   vj  = zeros(m,1);          %% the jth basis vector of R^m
%%   Id  = eye(m) ;             %% convenience
%%   for im=1:m,
%%     vi  = Id(:,im) ;
%%     icm = icm + 1 ;
%%     CMM(:, :, icm) = vi*vi' ;
%%     for jm=1:im-1,
%%       vj  = Id(:,jm) ;
%%       icm = icm + 1 ;
%%       CMM(:, :, icm) = sqrt(0.5) * (vi*vj'+vj*vi') ;
%%     end
%%   end
%%   %% Now CMM(:,:,i) is the ith element of an orthonormal basis for the space of
%%   %% m*m symmetric matrices.
%%
%%   %% Step 3: we compute the image of each basis element by the cumulant tensor
%%   %% and store it back into CMM.
%%   mat = zeros(m) ;  %% tmp
%%   for icm=1:nbcm
%%     mat = squeeze(CMM(:,:,icm)) ;
%%     for i1=1:m
%%       for i2=1:m
%%         CMM(i1, i2, icm) = sum(sum(squeeze(Matcum(i1,i2,:,:)) .* mat )) ;
%%       end
%%     end
%%   end;
%%   %% This is doing something like \sum_kl [ Cum(xi,xj,xk,xl) * mat_kl ]
%%
%%   %% Step 4: now we can check that CMM and CM are equivalent
%%   Range = 1:m ;
%%   for icm=1:nbcm,
%%     M1    = squeeze( CMM(:,:,icm) ) ;
%%     M2    = CM(:,Range) ;
%%     Range = Range + m ;
%%     norm(M1-M2, 'fro') ,  %% This should be a numerical zero.
%%   end;
%%
%% end;  %% End of the demo code for the computation of cumulant matrices
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% joint diagonalization of the cumulant matrices
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Init
if 0,
  %% Init by diagonalizing a *single* cumulant matrix.  It seems to save
  %% some computation time `sometimes'.  Not clear if initialization is really worth
  %% it since Jacobi rotations are very efficient.  On the other hand, it does not
  %% cost much...
  if verbose, fprintf('jade -> Initialization of the diagonalization\n'); end
  [V,D] = eig(CM(:,1:m));   % Selecting a particular cumulant matrix.
  for u=1:m:m*nbcm,         % Accordingly updating the cumulant set given the init
    CM(:,u:u+m-1) = CM(:,u:u+m-1)*V ;
  end;
  CM = V'*CM;
else,
  %% The dont-try-to-be-smart init
  V = eye(m) ;   % the initial rotation
end;
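%% Added monitoring sketch (illustration only; not part of Cardoso's original jadeR.m).
%% The rotation sought from here on is the one that makes every m x m block of CM as
%% diagonal as possible; the Jacobi rotations drive down the total squared off-diagonal
%% energy of the cumulant set, which can be watched as follows.  The variables off, idx
%% and Mu below are introduced here purely for this check.
if 0,
  off = 0 ;
  idx = 1:m ;
  for u = 1:nbcm,
    Mu  = CM(:,idx) ;                              % u-th cumulant matrix (current basis)
    off = off + norm(Mu-diag(diag(Mu)),'fro')^2 ;  % its off-diagonal energy
    idx = idx + m ;
  end;
  fprintf('jade -> off-diagonal energy of the cumulant set: %g\n', off) ;
end;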