g_1([a_0 b_2 b_1 b_0])
&=&
W_{16}^{a_0 [b_2 b_1 b_0]}
\sum_{b_3=0}^1 (-1)^{a_0 b_3} g([b_3 b_2 b_1 b_0]) \\
g_2([a_0 a_1 b_1 b_0])
&=&
W_{8}^{a_1 [b_1 b_0]}
\sum_{b_2=0}^1 (-1)^{a_1 b_2} g_1([a_0 b_2 b_1 b_0]) \\
g_3([a_0 a_1 a_2 b_0])
&=&
W_{4}^{a_2 b_0}
\sum_{b_1=0}^1 (-1)^{a_2 b_1} g_2([a_0 a_1 b_1 b_0]) \\
h(a)
=
g_4([a_0 a_1 a_2 a_3])
&=&
\sum_{b_0=0}^1 (-1)^{a_3 b_0} g_3([a_0 a_1 a_2 b_0])
\end{eqnarray}
%
The final pass leaves the data for $h(a)$ in bit-reversed order, but
this is easily fixed by a final bit-reversal of the ordering.
The basic in-place calculation or butterfly for each pass is slightly
different from the decimation-in-time version,
%
\begin{equation}
\left(
\begin{array}{c}
g({\hat a} + {\hat b}) \\
g({\hat a} + \Delta + {\hat b}) \\
\end{array}
\right)
\leftarrow
\left(
\begin{array}{c}
g({\hat a} + {\hat b}) + g({\hat a} + \Delta + {\hat b})\\
W_{2\Delta}^{\hat b}
\left( g({\hat a} + {\hat b}) - g({\hat a} + \Delta + {\hat b}) \right)
\end{array}
\right)
\end{equation}
%
In each pass ${\hat b}$ runs from $0 \dots \Delta-1$ and ${\hat a}$
runs through the values $0, 2\Delta, 4\Delta, \dots, N-2\Delta$. On
the first pass we start with $\Delta=N/2$ ($\Delta=8$ in the example
above), and on subsequent passes $\Delta$ takes the values $N/4, N/8,
\dots, 1$.
This leads to the canonical radix-2 decimation-in-frequency FFT
algorithm for $2^n$ data points stored in the array $g(0) \dots
g(2^n-1)$.
%
\begin{algorithm}
\STATE {$\Delta \Leftarrow 2^{n-1}$}
\FOR {$\mbox{pass} = 1 \dots n$}
\STATE {$W \Leftarrow \exp(-2 \pi i / 2\Delta)$}
\FOR {$(b = 0 ; b < \Delta ; b++)$}
\FOR{$(a = 0 ; a < N ; a += 2*\Delta)$}
\STATE{$t_0 \Leftarrow g(a+b) + g(a+\Delta+b)$}
\STATE{$t_1 \Leftarrow W^b \left( g(a+b) - g(a+\Delta+b) \right)$}
\STATE{$g(a+b) \Leftarrow t_0$}
\STATE{$g(a+\Delta+b) \Leftarrow t_1$}
\ENDFOR
\ENDFOR
\STATE{$\Delta \Leftarrow \Delta/2$}
\ENDFOR
\STATE bit-reverse ordering of $g$
\end{algorithm}
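As a concrete illustration, here is a minimal C sketch of this
algorithm. It is not the code of any particular library: the function
name \texttt{fft\_radix2\_dif}, the use of C99 \texttt{complex
double} and the simple bit-reversed counter are my own choices, and
the twiddle factor is recomputed with \texttt{cexp} for clarity
rather than built up by repeated multiplication.
%
\begin{verbatim}
#include <complex.h>
#include <math.h>

/* Radix-2 decimation-in-frequency FFT of N = 2^n complex values,
   transformed in place, following the algorithm above.  The result
   is left in bit-reversed order and then reordered at the end. */
void fft_radix2_dif (complex double g[], unsigned int n)
{
  unsigned int N = 1u << n;

  for (unsigned int delta = N / 2; delta >= 1; delta /= 2)
    {
      for (unsigned int b = 0; b < delta; b++)
        {
          /* W^b with W = exp(-2 pi i / (2 delta)) */
          complex double Wb =
            cexp (-2.0 * M_PI * I * b / (2.0 * delta));

          for (unsigned int a = 0; a < N; a += 2 * delta)
            {
              complex double t0 = g[a + b] + g[a + delta + b];
              complex double t1 = Wb * (g[a + b] - g[a + delta + b]);
              g[a + b] = t0;
              g[a + delta + b] = t1;
            }
        }
    }

  /* undo the bit-reversed ordering */
  for (unsigned int i = 0, j = 0; i < N; i++)
    {
      if (i < j)
        {
          complex double tmp = g[i];
          g[i] = g[j];
          g[j] = tmp;
        }
      unsigned int bit = N >> 1;      /* bit-reversed increment of j */
      while (j & bit)
        {
          j ^= bit;
          bit >>= 1;
        }
      j |= bit;
    }
}
\end{verbatim}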
%
\section{Self-Sorting Mixed-Radix Complex FFTs}
%
This section is based on the review article {\em Self-sorting
Mixed-Radix Fast Fourier Transforms} by Clive
Temperton~\cite{temperton83}. You should consult his article for full
details of all the possible algorithms (there are many
variations). Here I have annotated the derivation of the simplest
mixed-radix decimation-in-frequency algorithm.
For general-$N$ FFT algorithms the simple binary notation of radix-2
algorithms is no longer useful. The mixed-radix FFT has to be built
up using products of matrices acting on a data vector. The aim is to
take the full DFT matrix $W_N$ and factor it into a set of small,
sparse matrices corresponding to each factor of $N$.
We'll denote the components of matrices using either subscripts or
function notation,
%
\begin{equation}
M_{ij} = M(i,j)
\end{equation}
%
with (C-like) indices running from 0 to $N-1$. Matrix products will be
denoted using square brackets,
%
\begin{equation}
[AB]_{ij} = \sum_{k} A_{ik} B_{kj}
\end{equation}
%
%
Three special matrices will be needed in the mixed-radix factorization
of the DFT: the identity matrix, $I$, a permutation matrix, $P$, and a
matrix of twiddle factors, $D$, as well as the normal DFT matrices
$W_n$.
We write the identity matrix of order $r$ as $I_r$, with components
$I_r(n,m)$,
%
\begin{equation}
I_r(n,m) = \delta_{nm}
\end{equation}
%
for $0 \leq n,m \leq r-1$.
We also need to define a permutation matrix $P^a_b$ that performs
digit reversal of the ordering of a vector. If the index of a vector
$j = 0\dots N-1$ is factorized as $j = la + m$, with $0 \leq l \leq
b-1$ and $0 \leq m \leq a-1$, then the matrix $P$ exchanges positions
$la+m$ and $mb+l$ in the vector (this sort of digit reversal is the
generalization of bit-reversal to a number system with bases $a$ and
$b$).
In mathematical terms $P$ is a square matrix of size $ab \times ab$
with the property,
%
\begin{eqnarray}
P^a_b(j,k) &=& 1 ~\mbox{if}~ j=ra+s ~\mbox{and}~ k=sb+r \\
&=& 0 ~\mbox{otherwise}
\end{eqnarray}
%
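In code, the action of $P^a_b$ on a vector follows directly from this
definition. The fragment below is only an illustrative sketch; the
function name \texttt{apply\_P} and the use of C99 complex arithmetic
are my own choices.
%
\begin{verbatim}
#include <complex.h>

/* Apply the digit-reversal permutation P^a_b to a vector v of length
   a*b:  out[r*a + s] = v[s*b + r]  for 0 <= r <= b-1, 0 <= s <= a-1,
   i.e. the element at position s*b+r moves to position r*a+s. */
void apply_P (int a, int b, const complex double v[],
              complex double out[])
{
  for (int r = 0; r < b; r++)
    for (int s = 0; s < a; s++)
      out[r * a + s] = v[s * b + r];
}
\end{verbatim}
%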
Finally the FFT algorithm needs a matrix of twiddle factors, $D^a_b$,
for the trigonometric sums. $D^a_b$ is a diagonal square matrix of
size $ab \times ab$ with the definition,
%
\begin{eqnarray}
D^a_b(j,k) &=& \omega^{sr}_{ab} ~\mbox{if}~ j=k=sb+r \\
&=& 0 ~\mbox{otherwise}
\end{eqnarray}
%
where $\omega_{ab} = e^{-2\pi i/ab}$.
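Similarly, because $D^a_b$ is diagonal, applying it to a vector is a
single elementwise multiplication. Again this is only a sketch, with
\texttt{apply\_D} an illustrative name.
%
\begin{verbatim}
#include <complex.h>
#include <math.h>

/* Multiply a vector v of length a*b elementwise by the diagonal
   twiddle matrix D^a_b:  v[s*b + r] *= omega_{ab}^{s r},
   where omega_{ab} = exp(-2 pi i/(a b)),
   0 <= s <= a-1, 0 <= r <= b-1. */
void apply_D (int a, int b, complex double v[])
{
  for (int s = 0; s < a; s++)
    for (int r = 0; r < b; r++)
      v[s * b + r] *= cexp (-2.0 * M_PI * I * s * r / (a * b));
}
\end{verbatim}
%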
\subsection{The Kronecker Product}
The Kronecker matrix product plays an important role in all the
algorithms for combining operations on different subspaces. The
Kronecker product $A \otimes B$ of two square matrices $A$ and $B$, of
sizes $a \times a$ and $b \times b$ respectively, is a square matrix
of size $a b \times a b$, defined as,
%
\begin{equation}
[A \otimes B] (tb+u, rb+s) = A(t,r) B(u,s)
\end{equation}
%
where $0 \leq u,s < b$ and $0 \leq t,r < a$. Let's examine a specific
example. If we take a $2 \times 2$ matrix and a $3
\times 3$ matrix,
%
\begin{equation}
\begin{array}{ll}
A =
\left(
\begin{array}{cc}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)
&
B =
\left(
\begin{array}{ccc}
b_{11} & b_{12} & b_{13} \\
b_{21} & b_{22} & b_{23} \\
b_{31} & b_{32} & b_{33}
\end{array}
\right)
\end{array}
\end{equation}
%
then the Kronecker product $A \otimes B$ is,
%
\begin{eqnarray}
A \otimes B &= &
\left(
\begin{array}{cc}
a_{11} B & a_{12} B \\
a_{21} B & a_{22} B
\end{array}
\right) \\
&=&
\left(
\begin{array}{cccccc}
a_{11} b_{11} & a_{11} b_{12} & a_{11} b_{13} &
a_{12} b_{11} & a_{12} b_{12} & a_{12} b_{13} \\
a_{11} b_{21} & a_{11} b_{22} & a_{11} b_{23} &
a_{12} b_{21} & a_{12} b_{22} & a_{12} b_{23} \\
a_{11} b_{31} & a_{11} b_{32} & a_{11} b_{33} &
a_{12} b_{31} & a_{12} b_{32} & a_{12} b_{33} \\
a_{21} b_{11} & a_{21} b_{12} & a_{21} b_{13} &
a_{22} b_{11} & a_{22} b_{12} & a_{22} b_{13} \\
a_{21} b_{21} & a_{21} b_{22} & a_{21} b_{23} &
a_{22} b_{21} & a_{22} b_{22} & a_{22} b_{23} \\
a_{21} b_{31} & a_{21} b_{32} & a_{21} b_{33} &
a_{22} b_{31} & a_{22} b_{32} & a_{22} b_{33}
\end{array}
\right)
\end{eqnarray}
%
When the Kronecker product $A \otimes B$ acts on a vector of length
$ab$, each matrix operates on a different subspace of the vector.
Writing the index $i$ as $i=t b + u$, with $0\leq u \leq b-1$
and $0\leq t\leq a-1$, we can see this explicitly by looking at components,
%
\begin{eqnarray}
[(A \otimes B) v]_{(tb+u)}
& = & \sum_{t'=0}^{a-1} \sum_{u'=0}^{b-1}
[A \otimes B]_{(tb+u,t'b+u')} v_{t'b+u'} \\
& = & \sum_{t'u'} A_{tt'} B_{uu'} v_{t'b+u'}
\end{eqnarray}
%
The matrix $B$ operates on the ``index'' $u'$, for all values of $t'$, and
the matrix $A$ operates on the ``index'' $t'$, for all values of $u'$.
%
The most important property needed for deriving the FFT factorization
is that the matrix product of two Kronecker products is the Kronecker
product of the two matrix products,
%
\begin{equation}
(A \otimes B)(C \otimes D) = (AC \otimes BD)
\end{equation}
%
This follows straightforwardly from the original definition of the
Kronecker product.
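This identity is easy to check numerically for small matrices. The
sketch below, with arbitrarily chosen entries and illustrative helper
names \texttt{kron} and \texttt{matmul}, builds both sides of the
identity for a $2\times 2$ and a $3\times 3$ pair and prints the
largest difference, which should be zero.
%
\begin{verbatim}
#include <stdio.h>
#include <math.h>

/* Kronecker product of an a x a matrix A and a b x b matrix B:
   [A (x) B](t*b+u, r*b+s) = A(t,r) * B(u,s).  Matrices are stored
   row-major; K must have room for (a*b)*(a*b) entries. */
static void kron (int a, int b, const double A[], const double B[],
                  double K[])
{
  int n = a * b;
  for (int t = 0; t < a; t++)
    for (int u = 0; u < b; u++)
      for (int r = 0; r < a; r++)
        for (int s = 0; s < b; s++)
          K[(t * b + u) * n + (r * b + s)] = A[t * a + r] * B[u * b + s];
}

/* Plain matrix product C = X Y of two n x n matrices. */
static void matmul (int n, const double X[], const double Y[], double C[])
{
  for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
      {
        double sum = 0.0;
        for (int k = 0; k < n; k++)
          sum += X[i * n + k] * Y[k * n + j];
        C[i * n + j] = sum;
      }
}

int main (void)
{
  int a = 2, b = 3, n = a * b;
  double A[4] = { 1, 2, 3, 4 }, C[4] = { 0, 1, -1, 2 };
  double B[9] = { 1, 0, 2, -1, 3, 0, 0, 1, 1 };
  double D[9] = { 2, 1, 0, 0, 1, 1, 1, 0, 1 };

  double AB[36], CD[36], lhs[36];    /* (A (x) B)(C (x) D) */
  double AC[4], BD[9], rhs[36];      /* (AC) (x) (BD)      */

  kron (a, b, A, B, AB);
  kron (a, b, C, D, CD);
  matmul (n, AB, CD, lhs);

  matmul (a, A, C, AC);
  matmul (b, B, D, BD);
  kron (a, b, AC, BD, rhs);

  double maxdiff = 0.0;
  for (int i = 0; i < n * n; i++)
    maxdiff = fmax (maxdiff, fabs (lhs[i] - rhs[i]));
  printf ("max |(A(x)B)(C(x)D) - (AC)(x)(BD)| = %g\n", maxdiff);
  return 0;
}
\end{verbatim}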
\subsection{Two factor case, $N=ab$}
%
First consider the simplest possibility, where the data length $N$ can
be divided into two factors, $N=ab$. The aim is to reduce the DFT
matrix $W_N$ into simpler matrices corresponding to each factor. To
make the derivation easier we will start from the known factorization
and verify it (the initial factorization can be guessed by
generalizing from simple cases). Here is the factorization we are
going to prove,
%
\begin{equation}
W_{ab} = (W_b \otimes I_a) P^a_b D^a_b (W_a \otimes I_b).
\end{equation}
%
We can check it by expanding the product into components,
%
\begin{eqnarray}
\lefteqn{[(W_b \otimes I_a) P^a_b D^a_b (W_a \otimes I_b)](la+m,rb+s)} \\
& = &
\sum_{u=0}^{b-1} \sum_{t=0}^{a-1}
[(W_b \otimes I_a)](la+m,ua+t) [P^a_b D^a_b (W_a \otimes I_b)](ua+t,rb+s)
\end{eqnarray}
%
where we have split the indices to match the Kronecker product, $0 \leq
m, r \leq a-1$, $0 \leq l, s \leq b-1$. The first term in the sum can
easily be reduced to its component form,
%
\begin{eqnarray}
[(W_b \otimes I_a)](la+m,ua+t)
&=& W_b(l,u) I_a(m,t) \\
&=& \omega_b^{lu} \delta_{mt}
\end{eqnarray}
%
The second term is more complicated. We can expand the Kronecker
product like this,
\begin{eqnarray}
(W_a \otimes I_b)(tb+u,rb+s)
&=& W_a(t,r) I_b(u,s) \\
&=& \omega_a^{tr} \delta_{us}
\end{eqnarray}
%
and use this term to build up the product, $P^a_b D^a_b (W_a \otimes
I_b)$. We first multiply by $D^a_b$,
%
\begin{equation}
[D^a_b (W_a \otimes I_b)](tb+u,rb+s)
=
\omega^{tu}_{ab} \omega^{tr}_{a} \delta_{su}
\end{equation}
%
and then apply the permutation matrix, $P^a_b$, which digit-reverses
the ordering of the first index, to obtain,
%
\begin{equation}
[P^a_b D^a_b (W_a \otimes I_b)](ua+t,rb+s)
=
\omega^{tu}_{ab} \omega^{tr}_{a} \delta_{su}
\end{equation}
%
Combining the two terms in the matrix product we can obtain the full
expansion in terms of the exponential $\omega$,
%
\begin{eqnarray}
[(W_b \otimes I_a) P^a_b D^a_b (W_a \otimes I_b)](la+m,rb+s)
&=&
\sum_{u=0}^{b-1} \sum_{t=0}^{a-1}
\omega_b^{lu} \delta_{mt} \omega^{tu}_{ab} \omega^{tr}_{a} \delta_{su}
\end{eqnarray}
%
If we evaluate this sum explicitly we can make the connection between
the product involving $W_a$ and $W_b$ (above) and the expansion of the
full DFT matrix $W_{ab}$,
%
\begin{eqnarray}
\sum_{u=0}^{b-1} \sum_{t=0}^{a-1}
\omega_b^{lu} \delta_{mt} \omega^{tu}_{ab} \omega^{tr}_{a} \delta_{su}
&=& \omega^{ls}_b \omega^{ms}_{ab} \omega^{mr}_a \\
&=& \omega^{als + ms +bmr}_{ab} \\
&=& \omega^{als + ms +bmr}_{ab} \omega^{lrab}_{ab} \quad\mbox{using~} \omega^{ab}_{ab} =1\\
&=& \omega^{(la+m)(rb+s)}_{ab} \\
&=& W_{ab}(la+m,rb+s)
\end{eqnarray}
%
The final line shows that the matrix product given above is identical to
the full two-factor DFT matrix, $W_{ab}$.
%
Thus the full DFT matrix $W_{ab}$ for two factors $a$, $b$ can be
broken down into a product of sub-transforms, $W_a$ and $W_b$, plus
permutations, $P$, and twiddle factors, $D$, according to the formula,
%
\begin{equation}
W_{ab} = (W_b \otimes I_a) P^a_b D^a_b (W_a \otimes I_b).
\end{equation}
%
This relation is the foundation of the general-$N$ mixed-radix FFT algorithm.
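As a concrete check of this result, the sketch below applies the four
factors to a vector from right to left, i.e.\ $(W_a \otimes I_b)$,
then $D^a_b$, then $P^a_b$, then $(W_b \otimes I_a)$, and compares
the result with a direct DFT. The helper \texttt{omega}, the factor
choice $a=3$, $b=4$ and the test data are all arbitrary illustrative
choices; the printed difference should be at the level of rounding
error.
%
\begin{verbatim}
#include <stdio.h>
#include <complex.h>
#include <math.h>

static complex double omega (int n, int k)   /* omega_n^k */
{
  return cexp (-2.0 * M_PI * I * k / n);
}

int main (void)
{
  enum { a = 3, b = 4, N = a * b };          /* illustrative factors */
  complex double x[N], v[N], w[N], y[N];

  for (int j = 0; j < N; j++)                /* arbitrary test data */
    x[j] = cos (j) + I * sin (3 * j);

  /* (W_a (x) I_b) x :  v[t*b+u] = sum_r omega_a^{t r} x[r*b+u] */
  for (int t = 0; t < a; t++)
    for (int u = 0; u < b; u++)
      {
        complex double sum = 0.0;
        for (int r = 0; r < a; r++)
          sum += omega (a, t * r) * x[r * b + u];
        v[t * b + u] = sum;
      }

  /* D^a_b :  multiply v[t*b+u] by omega_{ab}^{t u} */
  for (int t = 0; t < a; t++)
    for (int u = 0; u < b; u++)
      v[t * b + u] *= omega (a * b, t * u);

  /* P^a_b :  w[u*a+t] = v[t*b+u] */
  for (int t = 0; t < a; t++)
    for (int u = 0; u < b; u++)
      w[u * a + t] = v[t * b + u];

  /* (W_b (x) I_a) w :  y[l*a+m] = sum_u omega_b^{l u} w[u*a+m] */
  for (int l = 0; l < b; l++)
    for (int m = 0; m < a; m++)
      {
        complex double sum = 0.0;
        for (int u = 0; u < b; u++)
          sum += omega (b, l * u) * w[u * a + m];
        y[l * a + m] = sum;
      }

  /* compare with the direct DFT  sum_k omega_N^{j k} x[k] */
  double maxdiff = 0.0;
  for (int j = 0; j < N; j++)
    {
      complex double sum = 0.0;
      for (int k = 0; k < N; k++)
        sum += omega (N, j * k) * x[k];
      maxdiff = fmax (maxdiff, cabs (y[j] - sum));
    }
  printf ("max difference = %g\n", maxdiff);
  return 0;
}
\end{verbatim}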
\subsection{Three factor case, $N=abc$}
%
The result for the two-factor expansion can easily be generalized to
three factors. We first consider $abc$ as being a product of two
factors $a$ and $(bc)$, and then further expand the product $(bc)$ into
$b$ and $c$. The first step of the expansion looks like this,
%
\begin{eqnarray}
W_{abc} &=& W_{a(bc)}\\
&=& (W_{bc} \otimes I_a) P^a_{bc} D^a_{bc} (W_a \otimes I_{bc}) .
\end{eqnarray}
%
And after using the two-factor result to expand out $W_{bc}$ we obtain
the factorization of $W_{abc}$,
%
\begin{eqnarray}
W_{abc} &=& (((W_c \otimes I_b) P^b_c D^b_c (W_b \otimes I_c)) \otimes I_a )
P^a_{bc} D^a_{bc} (W_a \otimes I_{bc}) \\
&=& (W_c \otimes I_{ab}) (P^b_c D^b_c \otimes I_a) (W_b \otimes I_{ac}) P^a_{bc} D^a_{bc} (W_a \otimes I_{bc})
\end{eqnarray}
%
We can write this factorization in a product form, with one term for
each factor,
%
\begin{equation}
W_{abc} = T_3 T_2 T_1
\end{equation}
%
where we read off $T_1$, $T_2$ and $T_3$,
%
\begin{eqnarray}
T_1 &=& P^a_{bc} D^a_{bc} (W_a \otimes I_{bc}) \\
T_2 &=& (P^b_c D^b_c \otimes I_a) (W_b \otimes I_{ac}) \\
T_3 &=& (W_c \otimes I_{ab} )
\end{eqnarray}
%
\subsection{General case, $N=f_1 f_2 \dots f_{n_f}$}
%
If we continue the procedure that we have used for two and three
factors, then a general pattern begins to emerge in the
factorization of $W_{f_1 f_2 \dots f_{n_f}}$. To see the beginning of
the pattern we can rewrite the three-factor case as,
%
\begin{eqnarray}
T_1 &=& (P^a_{bc} D^a_{bc} \otimes I_1) (W_a \otimes I_{bc}) \\
T_2 &=& (P^b_c D^b_c \otimes I_a) (W_b \otimes I_{ac}) \\
T_3 &=& (P^c_1 D^c_1 \otimes I_{ab}) (W_c \otimes I_{ab} )
\end{eqnarray}
%
using the special cases $P^c_1 = D^c_1 = I_c$.
%
In general, we can write the factorization of $W_N$ for $N= f_1 f_2
\dots f_{n_f}$ as,
%
\begin{equation}
W_N = T_{n_f} \dots T_2 T_1
\end{equation}
%
where the matrix factors $T_i$ are,
%
\begin{equation}
T_i = (P^{f_i}_{q_i} D^{f_i}_{q_i} \otimes I_{p_{i-1}}) ( W_{f_i}
\otimes I_{m_i})
\end{equation}
%
where $p_i = f_1 f_2 \dots f_i$ (with $p_0 = 1$), $q_i = N/p_i$ and
$m_i = N/f_i$, consistent with the three-factor case above.