%% Multiplying tensors
%% Tensor times vector (ttv for tensor)
% Compute a tensor times a vector (or vectors) in one (or more) modes.
rand('state',0);
X = tenrand([5,3,4,2]); %<-- Create a dense tensor.
A = rand(5,1); B = rand(3,1); C = rand(4,1); D = rand(2,1); %<-- Some vectors.
%%
Y = ttv(X, A, 1) %<-- X times A in mode 1.
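%%
% A quick equivalence check (an added sketch, assuming the default column
% ordering of |tenmat|): multiplying by a vector in mode 1 is the same as
% applying the vector to the mode-1 unfolding and folding the result back.
Ycheck = tensor(reshape(A' * double(tenmat(X,1)), [3 4 2]));
norm(Y - Ycheck) %<-- should be ~zero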
%%
Y = ttv(X, {A,B,C,D}, 1) %<-- Same as above.
%%
Y = ttv(X, {A,B,C,D}, [1 2 3 4]) %<-- All-mode multiply produces a scalar.
%%
Y = ttv(X, {D,C,B,A}, [4 3 2 1]) %<-- Same as above.
%%
Y = ttv(X, {A,B,C,D}) %<-- Same as above.
%%
Y = ttv(X, {C,D}, [3 4]) %<-- X times C in mode-3 & D in mode-4.
%%
Y = ttv(X, {A,B,C,D}, [3 4]) %<-- Same as above.
%%
Y = ttv(X, {A,B,D}, [1 2 4]) %<-- 3-way multiplication.
%%
Y = ttv(X, {A,B,C,D}, [1 2 4]) %<-- Same as above.
%%
Y = ttv(X, {A,B,D}, -3) %<-- Same as above.
%%
Y = ttv(X, {A,B,C,D}, -3) %<-- Same as above.
%% Sparse tensor times vector (ttv for sptensor)
% This is the same as in the dense case, except that the result may be
% either dense or sparse (or a scalar).
X = sptenrand([5,3,4,2],5); %<-- Create a sparse tensor.
%%
Y = ttv(X, A, 1) %<-- X times A in mode 1. Result is sparse.
%%
Y = ttv(X, {A,B,C,D}, [1 2 3 4]) %<-- All-mode multiply.
%%
Y = ttv(X, {C,D}, [3 4]) %<-- X times C in mode-3 & D in mode-4.
%%
Y = ttv(X, {A,B,D}, -3) %<-- 3-way multiplication. Result is *dense*!
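%%
% A quick way (added here) to confirm which storage class was returned.
class(Y) %<-- 'tensor' if the result is dense, 'sptensor' if it stayed sparse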
%% Kruskal tensor times vector (ttv for ktensor)
% The special structure of a ktensor allows an efficient implementation of
% vector multiplication. The result is a ktensor or a scalar.
X = ktensor([10;1],rand(5,2),rand(3,2),rand(4,2),rand(2,2)); %<-- Ktensor.
Y = ttv(X, A, 1) %<-- X times A in mode 1. Result is a ktensor.
%%
norm(full(Y) - ttv(full(X),A,1)) %<-- Result is the same as dense case.
%%
Y = ttv(X, {A,B,C,D}) %<-- All-mode multiply -- scalar result.
%%
Y = ttv(X, {C,D}, [3 4]) %<-- X times C in mode-3 & D in mode-4.
%%
Y = ttv(X, {A,B,D}, [1 2 4]) %<-- 3-way multiplication.
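%%
% A rough sketch (added here, assuming the |X.lambda| and |X.U| field access)
% of why the ktensor case is cheap: the all-mode product reduces to small
% inner products with each factor matrix, weighted by lambda.
s = sum(X.lambda .* (X.U{1}'*A) .* (X.U{2}'*B) .* (X.U{3}'*C) .* (X.U{4}'*D));
s - ttv(X, {A,B,C,D}) %<-- should be ~zero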
%% Tucker tensor times vector (ttv for ttensor)
% The special structure of a ttensor allows an efficient implementation of
% vector multiplication. The result is a ttensor or a scalar.
X = ttensor(tenrand([2,2,2,2]),rand(5,2),rand(3,2),rand(4,2),rand(2,2));
Y = ttv(X, A, 1) %<-- X times A in mode 1.
%%
norm(full(Y) - ttv(full(X),A, 1)) %<-- Same as dense case.
%%
Y = ttv(X, {A,B,C,D}, [1 2 3 4]) %<-- All-mode multiply -- scalar result.
%%
Y = ttv(X, {C,D}, [3 4]) %<-- X times C in mode-3 & D in mode-4.
%%
Y = ttv(X, {A,B,D}, [1 2 4]) %<-- 3-way multiplication.
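%%
% A similar rough sketch (added here, assuming the |X.core| and |X.U| field
% access) for the ttensor case: the vectors are first projected through the
% factor matrices and then applied to the small core.
s = ttv(X.core, {X.U{1}'*A, X.U{2}'*B, X.U{3}'*C, X.U{4}'*D});
s - ttv(X, {A,B,C,D}) %<-- should be ~zero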
%% Tensor times matrix (ttm for tensor)
% Compute a tensor times a matrix (or matrices) in one (or more) modes.
X = tensor(rand(5,3,4,2));
A = rand(4,5); B = rand(4,3); C = rand(3,4); D = rand(3,2);
%%
Y = ttm(X, A, 1); %<-- X times A in mode-1.
Y = ttm(X, {A,B,C,D}, 1); %<-- Same as above.
Y = ttm(X, A', 1, 't') %<-- Same as above.
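%%
% A minimal check (added here, assuming the default |tenmat| unfolding):
% mode-1 matrix multiplication acts on the mode-1 unfolding, i.e.
% Y_(1) = A * X_(1).
norm(double(tenmat(Y,1)) - A * double(tenmat(X,1))) %<-- should be ~zero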
%%
Y = ttm(X, {A,B,C,D}, [1 2 3 4]); %<-- 4-way multiply.
Y = ttm(X, {D,C,B,A}, [4 3 2 1]); %<-- Same as above.
Y = ttm(X, {A,B,C,D}); %<-- Same as above.
Y = ttm(X, {A',B',C',D'}, 't') %<-- Same as above.
%%
Y = ttm(X, {C,D}, [3 4]); %<-- X times C in mode-3 & D in mode-4.
Y = ttm(X, {A,B,C,D}, [3 4]) %<-- Same as above.
%%
Y = ttm(X, {A,B,D}, [1 2 4]); %<-- 3-way multiply.
Y = ttm(X, {A,B,C,D}, [1 2 4]); %<-- Same as above.
Y = ttm(X, {A,B,D}, -3); %<-- Same as above.
Y = ttm(X, {A,B,C,D}, -3) %<-- Same as above.
%% Sparse tensor times matrix (ttm for sptensor)
% It is also possible to multiply an sptensor times a matrix or series of
% matrices. The arguments are the same as for the dense case. The result may
% be dense or sparse, depending on its density.
X = sptenrand([5 3 4 2],10);
Y = ttm(X, A, 1); %<-- X times A in mode-1.
Y = ttm(X, {A,B,C,D}, 1); %<-- Same as above.
Y = ttm(X, A', 1, 't') %<-- Same as above.
%%
norm(full(Y) - ttm(full(X),A, 1) ) %<-- Same as dense case.
%%
Y = ttm(X, {A,B,C,D}, [1 2 3 4]); %<-- 4-way multiply.
Y = ttm(X, {D,C,B,A}, [4 3 2 1]); %<-- Same as above.
Y = ttm(X, {A,B,C,D}); %<-- Same as above.
Y = ttm(X, {A',B',C',D'}, 't') %<-- Same as above.
%%
Y = ttm(X, {C,D}, [3 4]); %<-- X times C in mode-3 & D in mode-4.
Y = ttm(X, {A,B,C,D}, [3 4]) %<-- Same as above.
%%
Y = ttm(X, {A,B,D}, [1 2 4]); %<-- 3-way multiply.
Y = ttm(X, {A,B,C,D}, [1 2 4]); %<-- Same as above.
Y = ttm(X, {A,B,D}, -3); %<-- Same as above.
Y = ttm(X, {A,B,C,D}, -3) %<-- Same as above.
%%
% The result may be dense or sparse.
X = sptenrand([5 3 4],1);
Y = ttm(X, A, 1) %<-- Sparse result.
%%
X = sptenrand([5 3 4],50);
Y = ttm(X, A, 1) %<-- Dense result.
%%
% Sometimes the product may be too large to reside in memory. For
% example, try the following:
% X = sptenrand([100 100 100 100], 1e4);
% A = rand(1000,100);
% ttm(X,A,1); %<-- too large for memory
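%%
% A rough back-of-the-envelope estimate (added here; assumes the result of
% the example above is stored densely in double precision, 8 bytes/entry;
% the mode-1 product replaces dimension 1 of X with size(A,1) = 1000):
% resultsize = [1000 100 100 100];
% 8 * prod(resultsize) / 2^30  %<-- roughly 7.5 GB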
%% Kruskal tensor times matrix (ttm for ktensor)
% The special structure of a ktensor allows an efficient implementation of
% matrix multiplication. The arguments are the same as for the dense case.
X = ktensor({rand(5,1) rand(3,1) rand(4,1) rand(2,1)});
%%
Y = ttm(X, A, 1); %<-- X times A in mode-1.
Y = ttm(X, {A,B,C,D}, 1); %<-- Same as above.
Y = ttm(X, A', 1, 't') %<-- Same as above.
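%%
% A rough check (added here, assuming the |X.lambda| and |X.U| field access)
% of what |ttm| does to a ktensor: the matrix is applied to the mode-1
% factor and everything else is left alone.
Xhat = ktensor(X.lambda, {A*X.U{1}, X.U{2}, X.U{3}, X.U{4}});
norm(full(Y) - full(Xhat)) %<-- should be ~zero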
%%
Y = ttm(X, {A,B,C,D}, [1 2 3 4]); %<-- 4-way multiply.
Y = ttm(X, {D,C,B,A}, [4 3 2 1]); %<-- Same as above.
Y = ttm(X, {A,B,C,D}); %<-- Same as above.
Y = ttm(X, {A',B',C',D'}, 't') %<-- Same as above.
%%
Y = ttm(X, {C,D}, [3 4]); %<-- X times C in mode-3 & D in mode-4.
Y = ttm(X, {A,B,C,D}, [3 4]) %<-- Same as above.
%%
Y = ttm(X, {A,B,D}, [1 2 4]); %<-- 3-way multiply.
Y = ttm(X, {A,B,C,D}, [1 2 4]); %<-- Same as above.
Y = ttm(X, {A,B,D}, -3); %<-- Same as above.
Y = ttm(X, {A,B,C,D}, -3) %<-- Same as above.
%% Tucker tensor times matrix (ttm for ttensor)
% The special structure of a ttensor allows an efficient implementation of
% matrix multiplication.
X = ttensor(tensor(rand(2,2,2,2)),{rand(5,2) rand(3,2) rand(4,2) rand(2,2)});
%%
Y = ttm(X, A, 1); %<-- computes X times A in mode-1.
Y = ttm(X, {A,B,C,D}, 1); %<-- Same as above.
Y = ttm(X, A', 1, 't') %<-- Same as above.
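%%
% The analogous rough check (added here, assuming the |X.core| and |X.U|
% field access) for a ttensor: the matrix is applied to the mode-1 factor
% and the core is unchanged.
Xhat = ttensor(X.core, {A*X.U{1}, X.U{2}, X.U{3}, X.U{4}});
norm(full(Y) - full(Xhat)) %<-- should be ~zero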
%%
Y = ttm(X, {A,B,C,D}, [1 2 3 4]); %<-- 4-way multiply.
Y = ttm(X, {D,C,B,A}, [4 3 2 1]); %<-- Same as above.
Y = ttm(X, {A,B,C,D}); %<-- Same as above.
Y = ttm(X, {A',B',C',D'}, 't') %<-- Same as above.
%%
Y = ttm(X, {C,D}, [3 4]); %<-- X times C in mode-3 & D in mode-4.
Y = ttm(X, {A,B,C,D}, [3 4]) %<-- Same as above.
%%
Y = ttm(X, {A,B,D}, [1 2 4]); %<-- 3-way multiply.
Y = ttm(X, {A,B,C,D}, [1 2 4]); %<-- Same as above.
Y = ttm(X, {A,B,D}, -3); %<-- Same as above.
Y = ttm(X, {A,B,C,D}, -3) %<-- Same as above.
%% Tensor times tensor (ttt for tensor)
X = tensor(rand(4,2,3)); Y = tensor(rand(3,4,2));
Z = ttt(X,Y); %<-- Outer product of X and Y.
size(Z)
%%
Z = ttt(X,X,1:3) %<-- Inner product of X with itself.
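%%
% As an added sanity check (assuming the all-mode product returns a plain
% scalar here), the inner product of X with itself equals its squared
% Frobenius norm.
Z - norm(X)^2 %<-- should be ~zero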
%%
Z = ttt(X,Y,[1 2 3],[2 3 1]) %<-- Inner product of X & Y.
%%
Z = innerprod(X,Y) %<-- Same as above.
%%
Z = ttt(X,Y,[1 3],[2 1]) %<-- Product of X & Y along specified dims.
%% Sparse tensor times sparse tensor (ttt for sptensor)
X = sptenrand([4 2 3],3); Y = sptenrand([3 4 2],3);
Z = ttt(X,Y) %<-- Outer product of X and Y.
%%
norm(full(Z)-ttt(full(X),full(Y))) %<-- Same as dense.
%%
Z = ttt(X,X,1:3) %<-- Inner product of X with itself.
%%
X = sptenrand([2 3],1); Y = sptenrand([3 2],1);
Z = ttt(X, Y) %<-- Sparse result.
%%
X = sptenrand([2 3],20); Y = sptenrand([3 2],20);
Z = ttt(X, Y) %<-- Dense result.
%%
Z = ttt(X,Y,[1 2],[2 1]) %<-- Inner product of X & Y.
%% Inner product (innerprod)
% The function |innerprod| efficiently computes the inner product
% between two tensors X and Y. The computation is specialized according
% to the classes of X and Y.
X = tensor(rand(2,2,2))
Y = ktensor({rand(2,2),rand(2,2),rand(2,2)})
%%
z = innerprod(X,Y)
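%%
% An added sanity check: the inner product equals the sum of the elementwise
% product of the two (densified) tensors.
z - sum(sum(sum(double(X) .* double(full(Y))))) %<-- should be ~zero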
%% Contraction on tensors (contract for tensor)
% The function |contract| sums the entries of X along dimensions I and
% J. Contraction is a generalization of matrix trace. In other words,
% the trace is performed along the two-dimensional slices defined by
% dimensions I and J. It is possible to implement tensor
% multiplication as an outer product followed by a contraction.
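% First, a small added illustration of the trace analogy: contracting two
% equal-sized dimensions of a dense tensor matches an explicit sum over its
% "diagonal" slices.
T = rand(3,4,2,3);
C = contract(tensor(T),1,4);  %<-- generalized trace over dims 1 and 4
Cexplicit = zeros(4,2);
for i = 1:3
    Cexplicit = Cexplicit + squeeze(T(i,:,:,i)); %<-- accumulate T(i,:,:,i)
end
norm(double(C) - Cexplicit) %<-- should be zero (up to roundoff)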
X = sptenrand([4 3 2],5);
Y = sptenrand([3 2 4],5);
%%
Z1 = ttt(X,Y,1,3); %<-- Normal tensor multiplication.
%%
Z2 = contract(ttt(X,Y),1,6); %<-- Outer product + contract.
%%
norm(Z1-Z2) %<-- Should be zero.
%%
% Using |contract| on either sparse or dense tensors gives the same
% result
X = sptenrand([4 2 3 4],20);
Z1 = contract(X,1,4) %<-- Sparse version of contract.
%%
Z2 = contract(full(X),1,4) %<-- Dense version of contract.
%%
norm(full(Z1) - Z2) %<-- Should be zero.
%%
% The result may be dense or sparse, depending on its density.
X = sptenrand([4 2 3 4],8);
Y = contract(X,1,4) %<-- Should be sparse.
%%
X = sptenrand([4 2 3 4],80);
Y = contract(X,1,4) %<-- Should be dense.
%% Relationships among ttv, ttm, and ttt
% The three "tensor times ___" functions (|ttv|, |ttm|, |ttt|) all perform
% specialized calculations, but they are all related to some degree.
% Here are several relationships among them:
%%
X = tensor(rand(4,3,2));
A = rand(4,1);
%%
% Tensor times vector gives a 3 x 2 result
Y1 = ttv(X,A,1)
%%
% When |ttm| is used with the transpose option, the result is almost
% the same as |ttv|
Y2 = ttm(X,A,1,'t')
%%
% We can use |squeeze| to remove the singleton dimension left over
% from |ttm| to give the same answer as |ttv|
squeeze(Y2)
%%
% Tensor outer product may be used in conjunction with contract to
% produce the result of |ttm|. Please note that this is more expensive
% than using |ttm|.
Z = ttt(tensor(A),X);
size(Z)
%%
Y3 = contract(Z,1,3)
%%
% Finally, use |squeeze| to remove the singleton dimension to get
% the same result as |ttv|.
squeeze(Y3)
%% Frobenius norm of a tensor
% The Frobenius norm of any type of tensor may be computed with the
% function |norm|. Each class is optimized to calculate the norm
% in the most efficient manner.
X = sptenrand([4 3 2],5)
norm(X)
norm(full(X))
%%
X = ktensor({rand(4,2),rand(3,2)})
norm(X)
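%%
% A rough sketch (added here, assuming the |X.lambda| and |X.U| field access)
% of the ktensor shortcut: the squared norm needs only lambda and the small
% factor Gram matrices, so the full tensor is never formed.
W = (X.U{1}'*X.U{1}) .* (X.U{2}'*X.U{2}); %<-- Hadamard product of Gram matrices
sqrt(X.lambda' * W * X.lambda) - norm(X)  %<-- should be ~zero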
%%
X = ttensor(tensor(rand(2,2)),{rand(4,2),rand(3,2)})
norm(X)
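%%
% The analogous rough sketch (added here, assuming the |X.core| and |X.U|
% field access) for a ttensor: push the factor Gram matrices onto the small
% core and take an inner product with the original core.
G = ttm(X.core, {X.U{1}'*X.U{1}, X.U{2}'*X.U{2}});
sqrt(innerprod(G, X.core)) - norm(X) %<-- should be ~zero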