RADIX 2 DECIMATION IN FREQUENCY (DIF) ALGORITHM
Posted by seminar class, 21-02-2011, 10:51 AM
Attachment: DIT and DIF algorithms.docx (Size: 170.15 KB / Downloads: 57)

1.1 INTRODUCTION

A DFT decomposes a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is often too slow to be practical. A Fast Fourier Transform (FFT) is a way to compute the same result more quickly: computing a DFT of N points directly from the definition takes O(N^2) arithmetic operations, while an FFT can compute the same result in only O(N log N) operations. The difference in speed can be substantial, especially for long data sets where N may be in the thousands or millions; in practice the computation time can be reduced by several orders of magnitude, and the improvement is roughly proportional to N/log(N). This huge improvement makes many DFT-based algorithms practical. FFTs are of great importance to a wide variety of applications, from digital signal processing and the solution of partial differential equations to algorithms for the quick multiplication of large integers.

1.2 FAST FOURIER TRANSFORMS ARE FAST DFT ALGORITHMS

It may be noted that the number of complex multiply and add operations required by the simple forms (both the DFT and the IDFT) is of order N^2. This is because there are N output points to calculate, each of which requires N complex arithmetic operations, so these forms have algorithmic complexity O(N^2). The DFT would therefore not be very useful for the majority of practical DSP applications if this complexity could not be reduced. Reducing it is exactly what the various 'Fast Fourier Transform' (FFT) algorithms do.

1.2.1 RADIX 2 ALGORITHMS

The popular 'Radix 2' algorithms are applicable when N is a regular power of 2 (N = 2^p). These algorithms have complexity of only O(N log N). If we assume that algorithmic complexity provides a direct measure of execution time (and that the relevant logarithm base is 2), then the ratio of execution times for the DFT vs. the Radix 2 FFT can be expressed as:

    speed-up = N^2 / (N * log2(N)) = N / p = 2^p / p

For a 1024-point transform (p = 10, N = 1024) this gives approximately a 100-fold speed improvement. This is why FFTs are important. Of course, the actual speed improvement achieved in practice depends on other factors associated with algorithm implementation and coding efficiency, but the above expression is useful for 'ball park' estimates.

There are two different Radix 2 algorithms, namely:

a) the 'Decimation In Time' (DIT) algorithm;
b) the 'Decimation In Frequency' (DIF) algorithm.

Both rely on the recursive decomposition of an N-point transform into two (N/2)-point transforms. This decomposition process can be applied to any composite (non-prime) N; it just so happens that it is particularly simple if N is divisible by 2 (and if N is a regular power of 2, the decomposition can be applied repeatedly until the trivial '1 point' transform is reached). The so-called Radix 4 algorithms are similar in concept to the Radix 2 algorithms and are sometimes slightly faster. There is also an efficient algorithm for evaluating the DFT of sequences whose length is a prime number. In short, there is always an FFT algorithm (and usually several) which can be applied to any sequence length.

1.3 THE RADIX 2 DECIMATION IN FREQUENCY (DIF) ALGORITHM

The DFT of an N-point sequence f[n] is defined as:

    F[k] = SUM(n=0..N-1) f[n] * T(N)^(n*k),   k = 0..N-1,   where T(N) = exp(-j*2*pi/N)

If N is even, the above sum can be split into 'top' (n = 0..N/2-1) and 'bottom' (n = N/2..N-1) halves and rearranged as follows:

    F[k] = SUM(n=0..N/2-1) f[n] * T(N)^(n*k) + SUM(n=0..N/2-1) f[n+N/2] * T(N)^((n+N/2)*k)
         = SUM(n=0..N/2-1) ( f[n] + T(N)^(k*N/2) * f[n+N/2] ) * T(N)^(n*k)
         = SUM(n=0..N/2-1) ( f[n] + (-1)^k * f[n+N/2] ) * T(N)^(n*k)

If we now consider the form of this result for even- and odd-valued k separately, we can see how it enables us to express an N-point FFT in terms of two (N/2)-point FFTs.

Even k, k = 2k' (k' = 0..N/2-1):

    F[2k'] = SUM(n=0..N/2-1) ( f[n] + f[n+N/2] ) * T(N)^(2*n*k')

or equivalently, since T(N)^2 = T(N/2):

    F[2k'] = SUM(n=0..N/2-1) fe[n] * T(N/2)^(n*k'),   where fe[n] = f[n] + f[n+N/2]

Odd k, k = 2k'+1 (k' = 0..N/2-1):

    F[2k'+1] = SUM(n=0..N/2-1) ( f[n] - f[n+N/2] ) * T(N)^n * T(N)^(2*n*k')

or equivalently:

    F[2k'+1] = SUM(n=0..N/2-1) fo[n] * T(N/2)^(n*k'),   where fo[n] = ( f[n] - f[n+N/2] ) * T(N)^n

That is, the even-indexed outputs are the (N/2)-point DFT of fe, and the odd-indexed outputs are the (N/2)-point DFT of fo. The process of dividing the frequency components into even and odd parts is what gives this algorithm its name, 'Decimation In Frequency'. If N is a regular power of 2, we can apply this method recursively until we get to the trivial 1-point transform.
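The even/odd split above can be checked numerically. The following Python sketch (the use of NumPy and the variable names are this example's assumptions, not part of the original text) builds fe and fo from a random sequence and confirms that their half-size DFTs reproduce the even and odd output bins of the full DFT:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Full N-point DFT: F[k] = sum_n f[n] * exp(-j*2*pi*n*k/N)
F = np.fft.fft(f)

n = np.arange(N // 2)
T = np.exp(-2j * np.pi * n / N)        # twiddle factors T(N)^n

fe = f[:N // 2] + f[N // 2:]           # even subset: f[n] + f[n+N/2]
fo = (f[:N // 2] - f[N // 2:]) * T     # odd subset: (f[n] - f[n+N/2]) * T(N)^n

# The (N/2)-point DFTs of fe and fo give the even and odd output bins
assert np.allclose(np.fft.fft(fe), F[0::2])
assert np.allclose(np.fft.fft(fo), F[1::2])
```

Both assertions hold for any even N, which is exactly the decomposition derived above.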
The factors T(N) are conventionally referred to as 'twiddle factors'.

1.4 A RECURSIVE DIF FFT ROUTINE

Given the above results, we can now have a 'first stab' at a recursive routine to implement this algorithm (in a hypothetical Pascal-like programming language which supports complex numbers and allows arrays as function arguments and results):

    FUNCTION DIF(N,f);
    LOCAL N',n,fe,fo,Fe,Fo,k',F;
    IF N==1 THEN RETURN(f);              {trivial if N==1}
    ELSE BEGIN                           {perform 2 subtransforms}
      N':=N/2;                           {size of subtransforms}
      FOR n:=0 TO (N'-1) DO              {perform N' DIF 'butterflies'}
      BEGIN
        fe[n]:= f[n]+f[n+N'];            {even subset}
        fo[n]:=(f[n]-f[n+N'])*T(N,n);    {odd subset}
      END;
      Fe:=DIF(N',fe);                    {even k}
      Fo:=DIF(N',fo);                    {odd k}
      FOR k':=0 TO (N'-1) DO
      BEGIN
        F[2*k' ]:= Fe[k'];               {even k}
        F[2*k'+1]:= Fo[k'];              {odd k}
      END;
      RETURN(F);
    END;

This is the simplest form of DIF implementation and directly reflects the mathematical derivation of the algorithm. The process of calculating the fe and fo components is conventionally referred to as a (DIF) 'butterfly', and it is the basic primitive operation of the FFT.

1.5 DIF FFT ALGORITHMIC COMPLEXITY (WHY IT'S FAST)

The algorithmic complexity of an FFT is usually quantified in terms of the total number of butterfly operations performed. Let this number be C(p) for a 2^p-point transform. Looking at the DIF routine above, it is easy to see that C(p) must satisfy the following recurrence relation (each call performs N/2 = 2^(p-1) butterflies and then two transforms of half the size):

    C(p) = 2^(p-1) + 2*C(p-1),   C(0) = 0

This has the solution:

    C(p) = p * 2^(p-1)

or, in terms of N (= 2^p):

    C = (N/2) * log2(N)

Dropping the constant scaling factors (including the log base), we get an algorithmic complexity of O(N log N).

1.6 A RECURSIVE 'IN PLACE' DIF FFT ROUTINE
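For readers who want to run the routine, here is a direct transcription of the Pascal-like pseudocode into Python (the function name dif and the use of Python lists and cmath are choices of this sketch, not part of the original):

```python
import cmath

def dif(N, f):
    """Recursive radix-2 DIF FFT; mirrors the pseudocode above."""
    if N == 1:
        return f                         # trivial if N == 1
    Np = N // 2                          # size of subtransforms
    # perform N' DIF 'butterflies'
    fe = [f[n] + f[n + Np] for n in range(Np)]           # even subset
    fo = [(f[n] - f[n + Np]) * cmath.exp(-2j * cmath.pi * n / N)
          for n in range(Np)]                            # odd subset
    Fe = dif(Np, fe)                     # even k
    Fo = dif(Np, fo)                     # odd k
    F = [0j] * N
    for kp in range(Np):
        F[2 * kp] = Fe[kp]               # even k
        F[2 * kp + 1] = Fo[kp]           # odd k
    return F
```

A quick check against the DFT definition: dif(8, x) agrees with SUM(n) x[n]*exp(-j*2*pi*n*k/8) for every k.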
A better and more efficient (recursive) in-place DIF routine is given below:

    { Perform in-place DIF of N points starting at position BaseE.
      DIF(0,N,f) performs a DIF FFT on the entire array f, N = size of f }
    PROCEDURE DIF(BaseE,N, VAR f);       {f is an external array}
    LOCAL N',BaseO,n,e,o;
    IF N==1 THEN {do nothing}
    ELSE BEGIN
      N':=N>>1;                          {shift right to get size of subtransforms}
      BaseO:=BaseE+N';                   {split block into 2 halves}
      FOR n:=0 TO (N'-1) DO
      BEGIN
        e:= f[BaseE+n]+f[BaseO+n];
        o:=(f[BaseE+n]-f[BaseO+n])*T(N,n);
        f[BaseE+n]:=e;
        f[BaseO+n]:=o;
      END;
      DIF(BaseE,N',f);                   {even subtransform}
      DIF(BaseO,N',f);                   {odd subtransform}
    END;

This version of the DIF routine is a little simpler and more efficient than the first, but it has the disadvantage that the output is in 'jumbled' (bit-reversed) order. This may or may not be a serious problem, depending on what the output is to be used for and on whether or not the processor/programming language supports bit-reversed addressing (most DSPs do). If bit-reversed addressing is not available, you may need to produce a bit-reversed index look-up table to access the output.

It is worth noting that this is the simplest formulation of the 'in place' DIF algorithm. It takes normal-order input and generates bit-reversed-order output. However, this is not fundamental to the algorithm; it is merely a consequence of the simplest implementation. It is possible to write a reverse-order DIF FFT which takes bit-reversed-order input and generates normal-order output. This requires the DIF FFT routine to be amended in two ways:

a) Passing an additional parameter which specifies the spacing between elements in each subarray to be transformed. In the simple code above this spacing is always 1; in the reverse-order FFT it will start at 1 and double for each subtransform. This parameter is also a useful feature for multidimensional transforms.
b) Since the input to each subtransform is now in bit-reversed order, the twiddle factors must also be used in bit-reversed order.
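The in-place routine transcribes to Python as follows (a sketch: the function names dif_inplace and bit_reverse, and the unscrambling helper itself, are additions of this example, not part of the original pseudocode):

```python
import cmath

def dif_inplace(base_e, N, f):
    """In-place DIF FFT on f[base_e : base_e+N]; leaves output in
    bit-reversed order, mirroring the pseudocode above."""
    if N == 1:
        return                           # do nothing
    Np = N >> 1                          # shift right: size of subtransforms
    base_o = base_e + Np                 # split block into 2 halves
    for n in range(Np):
        e = f[base_e + n] + f[base_o + n]
        o = (f[base_e + n] - f[base_o + n]) * cmath.exp(-2j * cmath.pi * n / N)
        f[base_e + n] = e
        f[base_o + n] = o
    dif_inplace(base_e, Np, f)           # even subtransform
    dif_inplace(base_o, Np, f)           # odd subtransform

def bit_reverse(k, bits):
    """Reverse the low 'bits' bits of k (a software substitute for
    the bit-reversed addressing that most DSPs provide)."""
    r = 0
    for _ in range(bits):
        r = (r << 1) | (k & 1)
        k >>= 1
    return r
```

After dif_inplace(0, N, f), output bin F[k] sits at index bit_reverse(k, p), so a bit-reversed index table recovers natural order.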
This isn't difficult if the twiddle factors are taken from a table in bit-reversed order, or if bit-reversed addressing is available.

1.7 TWIDDLE FACTOR TRICKS

The twiddle factor is given by:

    T(N,n) = exp(-j*2*pi*n/N) = cos(2*pi*n/N) - j*sin(2*pi*n/N)

In practice, calculating these directly would require time-consuming COSs and SINs. What is needed instead is a look-up table (an array of size N/2) which is calculated just once. The important thing to realise is that one doesn't need a separate table for each DIF pass (N halves on each pass). Instead we use the same table on each pass and introduce an additional 'twiddle_step_size' parameter which starts at 1 and doubles after each pass. The twiddle factor used is then the one at index n*twiddle_step_size. (The array bounds will never be exceeded, because N halves on each pass and the maximum value of n is N/2-1.)

The next trick is to notice that often the twiddle factor will be either +1 or -j (it will never be -1 or +j). In such cases there is no need to do a full-blown complex multiply (remember this requires 4 real multiplies and 2 real adds); simple addition and subtraction will suffice. We shall call butterflies that use one or other of these twiddle factors 'trivial butterflies'. Specifically:

Let us consider the last DIF FFT pass (N=2). In this case T(2,0)=1 is the only twiddle factor used, so the entire last pass consists of trivial butterflies. Similarly, the first butterfly of any subblock will use T(N,0)=1. In general, in pass P (P=0..p-1) there are B_P (=2^P) subblocks and therefore 2^P butterflies which use a twiddle factor of +1.

In the penultimate (N=4) pass the butterflies will use either T(4,0)=1 or T(4,1)=-j, so this pass also uses only trivial butterflies. As in the above case, in each of the earlier (N>4, P<p-2) passes there are B_P (=2^P) subblocks and therefore 2^P butterflies which use a twiddle factor of -j (at n=N/4).
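The single-table trick can be sketched as an iterative version of the in-place routine. One table of N/2 twiddle factors is computed once; twiddle_step starts at 1 and doubles after each pass as the block size halves (the variable names here are this example's, not the original's, and the trivial-butterfly shortcut is omitted for clarity):

```python
import cmath

def dif_fft_table(f):
    """Iterative in-place radix-2 DIF FFT using one twiddle table of
    size N/2, shared across all passes. Output is in bit-reversed order."""
    N = len(f)
    # table[n] = T(N,n) = exp(-j*2*pi*n/N), computed once
    table = [cmath.exp(-2j * cmath.pi * n / N) for n in range(N // 2)]
    twiddle_step = 1                     # doubles after each pass
    block_size = N                       # halves after each pass
    while block_size > 1:
        half = block_size // 2
        for base in range(0, N, block_size):   # every subblock in this pass
            for n in range(half):
                e = f[base + n] + f[base + half + n]
                # T(block_size, n) = table[n * twiddle_step];
                # the index never exceeds N/2 - 1
                o = (f[base + n] - f[base + half + n]) * table[n * twiddle_step]
                f[base + n] = e
                f[base + half + n] = o
        twiddle_step *= 2
        block_size = half
    return f
```

Because twiddle_step equals N/block_size on every pass, table[n*twiddle_step] is exactly T(block_size, n), so no per-pass table is ever needed.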


