Instructor (Brad Osgood): We're on the air. Pay attention. I think the exams will be back sometime today. I'll send out a note when everything is all sorted out. I don't really know how things are gonna turn out yet. I graded my part over the weekend; I graded the first problem, and people did actually quite well on that. I think Rajib is working on it now, and I'll send you a note when everything's all set for you guys to pick up. All right? I am going to do a little demo on aliasing today that I didn't want you to miss. Unfortunately, I loaned my strobe light to somebody and I can't find it; I can't remember who I lent it to, so I can't give you the demonstration of the spinning fan, which is the classic demonstration of aliasing. If I can get my strobe light back, then maybe we'll have a chance to do that a little bit later. So I'm gonna give you a brief demonstration of what happens when you undersample music, that is, when you turn analog music into the digital music that we're all so used to these days, and you do it the wrong way, or not accurately enough. So how rapidly do you have to sample a piece of music in order to be able to interpolate it, in order to be able to reconstruct it digitally? Well, for that we need to know something about human hearing, that is, how high a note you can hear. You can hear up to roughly a note with a pitch of 20,000 hertz; that's the high end. Dogs go up much higher, of course. That's way beyond anything you'd really hear musically, way beyond where music goes, but that's roughly the range of human hearing: from about 20 hertz to about 20,000 hertz.
So according to the way we do things, the spectrum of a slice of music would go roughly up to 20,000 hertz and down to minus 20,000, and beyond that it's essentially zero. At least as far as you're concerned it's zero: you can't hear anything out there. So if that's a picture of the spectrum of a slice of music, it's between minus 20,000 and 20,000, which in the way we write things would be p/2 on either side. The bandwidth is 20,000, so p is about 40,000, which means that if you wanna sample and reconstruct music, you should do it roughly at a rate of 40,000 hertz. That is to say, you should take a sample every 1/40,000th of a second. So this should be the sampling rate for music: roughly 40,000 times per second, or one sample every 1/40,000 of a second, in order to have confidence that you are sampling rapidly enough so that your reconstruction based on the sinc function is gonna interpolate the actual music. Now in fact, as people probably know, for CDs they sample at a rate higher than that. Do you know what it is off the top of your head? 44.1, or something like that? Is that right? In fact, the sampling rate for CDs is 44.1 kilohertz. And as far as I know, and some of you may know the history of this better than I do, it had to be something bigger than 40,000, but the precise number comes from the equipment that was in use when they were making the transition from analog to digital. I'm sorry, do you know?
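The arithmetic here is simple enough to write down as a small sketch; the figure B = 20,000 Hz is the one from the discussion, and the variable names are just illustrative:

```python
# Nyquist-rate arithmetic from the lecture.
# Assumption: the audible band runs up to B = 20,000 Hz, so the
# two-sided spectrum occupies (-B, B), total width p = 2B.

B = 20_000          # highest audible frequency, Hz
p = 2 * B           # two-sided spectral width
fs = p              # minimum sampling rate, samples per second
dt = 1 / fs         # spacing between samples, seconds

print(fs)           # 40000 samples per second
print(dt)           # 2.5e-05, i.e. one sample every 1/40,000 s
```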
Instructor (Brad Osgood): Is that right? Oh, really? The story that I had heard - I tracked this down but haven't looked at it really carefully - was that everything was set up for analog recording, of course, and the way the machines were built, it was just most natural to sample at roughly that rate; they got the most accurate, most reliable sampling if they did it at that rate. The number comes out of practical considerations, not out of any sort of theoretical inspiration. Anyway, that's how fast you should sample, okay? And if you sample at less than that rate, then you're not gonna get an accurate reproduction of the music. So here is Gershwin's Rhapsody in Blue sampled at a high enough rate, 44.1 kilohertz. It has a famous clarinet gliss at the beginning. Everybody heard this? Okay. Isn't that nice? Nice way to start the day. So then what we did was sample this at a low rate, around 16 kilohertz - not 16 hertz, 16,000 - roughly a little less than half of 44.1. Now remember what that means in terms of the picture. As I've said many times, for me the sampling theorem is identical with the proof of the sampling theorem, so I always try to imagine what things look like in the frequency domain, in the spectrum. If the real spectrum extends over the full range and you are cutting off short of that, then you're cutting off a bunch of high frequencies, so to speak. And what does the proof of the sampling theorem say? It says to periodize the spectrum by a Shah function whose spacing is now too narrow, and then cut off.
So if you take the first shift over by the corresponding Shah spacing, the shifted copy overlaps the original, and shifting it over the other way does the same on the other side, so when this signal adds with its shifted versions, you're getting overlap. And then you cut off, so the spectrum of the signal that you're reconstructing, instead of the original spectrum, looks like that overlapped, added-up version. People talk about low frequencies being aliased as high frequencies, and high frequencies being aliased as low frequencies. What they mean by that is the lower part of the spectrum is getting shifted over, and is getting added in as if it were part of the high frequencies. The lower frequencies are being aliased as high frequencies because you're not shifting things far enough to move the spectrum off itself. So you're reproducing a signal based on this corrupted picture of the spectrum instead of the true one, and here is what you get. [Inaudible] The lower parts are not as affected; the distortion is more on the high end. You get the idea. The distortion really is more on the high end of things, because it's the high end where the overlap is occurring: you're cutting off by not enough, you're shifting things over, you're adding up, and there's more of a mistake made in the higher part of the spectrum than in the lower part. The lower part of the spectrum is pretty much unchanged; it's the higher part that's getting messed up by an inadequate shift, that is, by too low a sampling rate.
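A minimal sketch of this folding, assuming the 16 kHz undersampling rate mentioned above: a pure tone above half the sampling rate shows up at a lower, aliased frequency, exactly where the shifted spectrum copy lands. The tone frequency here is a hypothetical choice for illustration.

```python
import numpy as np

fs = 16_000           # undersampled rate from the demo, Hz
f0 = 18_000           # tone above fs/2 = 8 kHz, so it must alias
N = 16_000            # one second of samples -> 1 Hz bin spacing

t = np.arange(N) / fs
x = np.cos(2 * np.pi * f0 * t)

# The peak of the magnitude spectrum lands at the aliased frequency.
spectrum = np.abs(np.fft.rfft(x))
f_peak = np.argmax(spectrum) * fs / N

# Folding: the spectrum copies shift by multiples of fs and add,
# so the tone shows up at |f0 - round(f0/fs)*fs|.
f_alias = abs(f0 - round(f0 / fs) * fs)
print(f_peak, f_alias)   # both 2000.0 Hz
```

The 18 kHz tone is indistinguishable, at these sample points, from a 2 kHz tone: that is the aliasing the music demo makes audible.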
And there are all sorts of experiments and tricks you can do with this. Aliasing should not always be thought of as the enemy, by the way. There are times when you wanna alias on purpose. As a matter of fact, there's a problem, I believe on the problem set, on the basis for the so-called sampling oscilloscope: if you have a very fast signal to sample, whose frequencies are higher than an ordinary oscilloscope can actually handle, then there are ways of aliasing effectively to get a picture of the signal by sampling at a lower rate. So it's not that aliasing is always the enemy, but it's something that certainly has to be understood. Then again, as far as I'm concerned, understanding aliasing is understanding the proof of the sampling theorem, which is what we've been talking about. Okay. The other famous example with aliasing is one you had a problem on also, and if I can find my strobe light, we'll do it sometime, because it is sort of fun to do: you freeze a rotating object that's rotating periodically - it's described by a periodic signal - and if you sample it inappropriately, say at too low a rate once again, you can freeze it. You've probably seen demos like that. It's very impressive. Okay. And then what I'd really like to do - I have done this before - is some other examples of sampling and actually analyzing spectra with a real, honest-to-god spectrum analyzer: bring my trombone in, bring some other musical instruments in, so you see how they look in the frequency domain. You can also do a bunch of interesting demos with that. We'll see if we have the time. Any questions about that? Anything on anybody's mind? Okay. So today, it is welcome to the modern world. Today, we're gonna make the transition from the continuous to the discrete, from the analog to the digital, the modern world.
I want to introduce - you've actually been using it any time you've used MATLAB with a Fourier transform, but now we're gonna really talk about why it works, how it works, and where it comes from - the DFT, the discrete Fourier transform. That is to say, we're not saying farewell to the continuous and the analog world, but we are now having a meeting between the two, the old world and the new world. So we wanna move from the continuous and the analog to the discrete and the digital.
Now like so much else in this course, there are choices to make. Some people would say this is not a transition at all: you should never have been in a continuous world in the first place, you Neanderthal. The modern world is digital, the modern world is discrete, and you should always understand things in terms of exactly those sorts of operations. That's a defensible choice, and there are courses on digital signal processing where you don't really talk much about the continuous case; everything is discrete and digital right from the beginning, and all the formulas, demonstrations, and theorems go that way. So that's defensible. I don't like it; I think you miss a number of things by doing it that way, and I don't think it always simplifies things, but nevertheless, it's a defensible choice. For us, however, we wanna make the transition from the continuous to the discrete, from the analog to the digital. And in doing so, I wanna leverage all the hard-won skill and intuition that we built up on the continuous Fourier transform. We spent a lot of time, and before this class I'm sure you spent a certain amount of time also, just developing those ideas, understanding analog signals. And what I wanna convince you of is that a lot of the skill and intuition that you've built up carries over - so much so that there's even a similarity in the formulas. The formal aspects of the subject are similar enough in the discrete case that you can carry over what you learned then to what you're gonna learn now. That's the other reason for doing it this way: so we can use all that we have learned so far in the continuous and analog case to understand, help us analyze, and help us work with the discrete and digital case. So that's the choice that we made, but it is a choice.
It's not necessarily the only way of doing things, by any means. So here's how we're gonna proceed. Here's the plan. It comes really in three parts. You're starting off with a continuous signal, f(t), as usual. When I say that, I'm not meaning it in a formal mathematical sense; I'm thinking of it as a function of a continuous variable t, and that's what's gonna get replaced: f is gonna get replaced by a discrete function. All right? Think of this as the analog case.
So the first step is I wanna find a reasonable discrete approximation to f(t). Secondly, I wanna find a reasonable discrete approximation to its Fourier transform. All right? I know how to take the Fourier transform of a continuous signal; the question is whether I can discretize that in a way that provides a reasonable approximation. And the third step, which really combines the first two - you'll see this immediately once you see how the development goes - is to find a reasonable way of passing from one to the other, from the discrete approximation of f to the discrete approximation of the Fourier transform, in the most natural way; that is, it should approximate the passage from the continuous function to the continuous Fourier transform. So: find a reasonable way of passing from the discrete form of f to the discrete form of the Fourier transform that mimics - I won't write the rest of the sentence down - the way you pass from the signal to its Fourier transform in the continuous case. Okay? And again, I wanna make this look as much like the continuous case as possible. There are different ways of making this argument, different choices for how you can execute it. We're actually gonna base it on the sampling theorem, misapplied a little bit - yeah, misapplied is probably the right word. That is to say, I'm gonna base the test of reasonableness on sampling. So: base this on a misuse of the sampling theorem.
Rather than saying it's based on a misuse of the sampling theorem - I'm not gonna write down the sinc interpolation - I'll say I'm gonna base it on a misuse of sampling. Here's what I have in mind. I'm gonna make some assumptions that I know are wrong right from the outset, but play along. Assume first of all that f(t) is time-limited: it lives on 0 ≤ t ≤ L. And I'm gonna also assume that the Fourier transform of f is limited to 0 ≤ s ≤ 2B, so B is the bandwidth; I'm writing B for bandwidth instead of p. Now both of those assumptions cannot hold together. You can't have both. This cannot be, because a signal cannot be both time-limited and band-limited. That's one thing. The other thing here is that I'm writing the Fourier transform as if it lives only on the interval from 0 to 2B - sorry, you can't see it very well here - so F(s) is limited to 0 ≤ s ≤ 2B. Now, we know the Fourier transform is symmetric; you'd normally talk about frequencies going from -B to +B. The only reason I'm saying between 0 and 2B is the indexing of the discrete variable that's gonna go along with this. It's purely a formal statement, just to make the indexing of the discrete variable easier in a minute. You'll see. So just play along.
All right, now I have a signal that's limited in time and limited in frequency. We know that can't happen, but let's play along. What would the sampling theorem tell us? If I wanted to reconstruct f, how many samples would I need - or rather, what should the sampling rate be? Well, the sampling rate I need to consider myself as getting a reasonable approximation of f from its samples is dictated by the properties of f in the frequency domain, that is to say, by its bandwidth. So to get a reasonable discrete approximation of f by sampling, I take samples spaced 1/2B apart. That is, the sampling rate should be 2B, and so the samples should be 1/2B apart in the time domain. Okay? Now, this is the time domain, so the function is limited to the interval from 0 to L; the length of the interval is L. If I take samples spaced 1/2B apart - let's say I take N of them - then I should have N·(1/2B) = L. That is the relationship between N, the number of samples I take, the spacing, and the length of the interval they're supposed to fill up: N = 2BL. Okay? And let's call the sample points t_0 = 0, t_1 = 1/2B, t_2 = 2/2B, and so on, all the way up to t_(N-1) = (N-1)/2B. Those are my sample points; there are N of them, indexed from 0 to N-1. So what is the sampled form of f? For us, sampling a function always means multiplying the function by delta functions with the given spacing: take f(t) times the sum from k = 0 to N-1 of δ(t - t_k). That's what we always mean by sampling. All right?
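As a sketch, with hypothetical values of B and L chosen just so that N = 2BL comes out to a whole number, the sample grid looks like this:

```python
import numpy as np

# Hypothetical numbers, chosen so the count comes out evenly:
B = 4.0              # "bandwidth": spectrum limited to [0, 2B]
L = 2.0              # signal limited in time to [0, L]
N = int(2 * B * L)   # number of samples: N = 2BL = 16

t = np.arange(N) / (2 * B)   # sample points t_k = k / 2B, k = 0..N-1
print(N)            # 16
print(t[:3])        # [0.    0.125 0.25 ]
print(t[-1])        # 1.875 = (N-1)/2B, just short of L
```

Note that the N points, spaced 1/2B apart, fill up the interval of length L, exactly the relation N·(1/2B) = L from above.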
So again, the t_k's are 0, 1/2B, 2/2B, all the way to (N-1)/2B, and the sum goes from 0 to N-1. So this is the sum of f(t_k)·δ(t - t_k), for k from 0 up to N-1. That's the sampled signal. We'll just call it f_sampled.
That's still a function of a continuous variable, but it's recording what we think of as a reasonable approximation to f. It's reasonable in the sense that the sampling theorem would tell us that if I base my interpolation on those values, I should be able to reconstruct f exactly. Those points weren't chosen arbitrarily; they were chosen with that spacing because that's what the sampling theorem would tell us to do. Okay, now the sampled version of f is still a function of a continuous variable, so I can take its Fourier transform. That's not so hard. The Fourier transform of f_sampled is also a function of the continuous frequency variable s, and it's just the sum from k = 0 to N-1 of f(t_k) times the Fourier transform of the shifted delta function, which is e^(-2πi s t_k). Okay? So where are we? I've got what I think is a reasonable approximation to the continuous function in the time domain, and I've taken its Fourier transform, but the Fourier transform is not yet discretized itself; it's a function of the continuous variable. So I want to sample, or discretize, the Fourier transform of the sampled function. How do I do that? Well, think of the sampling theorem again, viewed as going from the frequency domain back to the time domain. That is, think of this as a function that I wanna sample - never mind that it's the Fourier transform of something. How rapidly should I sample this thing so that the sampling theorem will tell me I'm getting a reasonable discrete approximation of it? That is, enough sample points that if I wanted to interpolate it, I'd be getting the function back again.
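The transform of the sampled signal can be sketched directly from this formula; the particular sample values below are hypothetical placeholders. Note one thing the code makes visible: because the t_k are multiples of 1/2B, this function of the continuous variable s is periodic with period 2B, which is exactly the periodization that drives aliasing.

```python
import numpy as np

# The Fourier transform of the sampled signal: a finite sum of complex
# exponentials, still a function of the *continuous* frequency s.
# B, L, and the sample values are placeholders for illustration.

B, L = 4.0, 2.0
N = int(2 * B * L)
t = np.arange(N) / (2 * B)        # sample points t_k = k/2B
f_samples = np.exp(-t)            # hypothetical signal values f(t_k)

def F_sampled(s):
    """Evaluate sum_k f(t_k) * exp(-2*pi*i*s*t_k) at a continuous s."""
    return np.sum(f_samples * np.exp(-2j * np.pi * s * t))

print(F_sampled(0.0))             # at s = 0, just the sum of the samples
```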
How do I sample this in the frequency domain so I get a reasonable - let's say a reasonable discrete version, if I start using that sort of terminology? All right. In the frequency domain the signal is limited to 0 to 2B. How rapidly to sample in that domain is dictated to us by what happens in the other domain, where the signal is limited to 0 to L. So how far apart should the sample points be spaced in the frequency domain in order that I get a reasonable approximation if I take samples at those points? Was that an answer or a cell phone ringing? 1/L, all right? In the time domain, remember, the signal is limited to an interval of length L, so in the frequency domain the samples should be spaced 1/L apart. How fast you sample in one domain is dictated by the properties of the function in the other domain. All right? So the answers go back and forth here between the function and its Fourier transform, and you should be able to talk about one in terms of the other in either direction. So once again, let me say it: how fast to sample a signal in this domain has to do with its properties in the other domain. I'm not speaking so much in terms of time and frequency here; I'm speaking in terms of the one domain and the other domain. In the other domain, the function is limited to 0 to L, so in this domain the samples should be taken 1/L apart. All right? That's what the sampling theorem would tell us to do if we were applying it in this direction.
So you take samples spaced 1/L apart, and they're supposed to cover an interval of length 2B. Okay? How many sample points do I take in the frequency domain? If I have M sample points spaced 1/L apart, then M·(1/L) = 2B, that is, M = 2BL. Oh, but that's N, the number of sample points in the time domain! So if I suppose M points, then M = N. That is to say, I take the same number of sample points in the frequency domain as in the time domain. So again - that's what I meant by saying "again" here - in the frequency domain you take N sample points. What are they? They are spaced 1/L apart, so they are s_0 = 0, s_1 = 1/L, s_2 = 2/L, and so on, up to s_(N-1) = (N-1)/L. Okay? Again, one feels, by an appeal to the sampling theorem, that if you are sampling the function at those points, you're getting a reasonable enough approximation, in the sense that the sampling theorem would tell you you could reconstruct the function exactly from those sample values. Never mind that all this is being misapplied; play along. All right? Okay. So what is the sampled form of the Fourier transform of the sampled version of f? Well, how do I sample a function? I multiply it by the corresponding train of delta functions. So this would be the Fourier transform of f_sampled times the sum of the corresponding delta functions. I've gotta call the index something - what did I call it over there? I called it k in that sum, so let me call it something else, m. So: the sum from m = 0 to N-1 - again, I have N points - of δ(s - s_m), where the s_m are those sample points. Okay.
That's the sampled version of the Fourier transform. What is that? Let's write it down. I'm erasing this, but I'm gonna write it again - oh, there it is. Okay, so the Fourier transform of f_sampled at s is, once again, this sum: the sum from k = 0 up to N-1 of f(t_k)·e^(-2πi s t_k). All right? So I multiply that by the corresponding sum of delta functions. So it's the sum from k = 0 up to N-1 of f(t_k)·e^(-2πi s t_k), times the sum of deltas from m = 0 up to N-1 - I'm calling it m, right, just to be consistent - of δ(s - s_m). That gives me these terms times these terms, so it gives me the double sum over k and m, each going from 0 up to N-1. f(t_k) is a constant, and I get a complex exponential times the delta function. We know what that does: it pulls out the value of the exponential at the point where the delta function is shifted, times the corresponding delta function. So: f(t_k)·e^(-2πi s_m t_k)·δ(s - s_k). Cool. Okay? Cool, cool, cool. Pardon me?
Instructor (Brad Osgood): s_m, right - it should be δ(s - s_m). Now it's cool. Is it cool?
Instructor (Brad Osgood): Are we cool? Okay. All right. So what are the sampled values of the Fourier transform? It is there before us, actually. All right. I wanna repeat how we got to this point. We got here by misapplying the sampling theorem, and that has led to something that actually turns out to be reasonable and useful. And again, as is the tradition here, once you reach this point you sort of cover your tracks and just say, "All right. Now I'm gonna turn this into a definition."
How did we get to this point? We said we wanted to choose sample points of the function little f in the time domain that we thought were gonna provide a reasonable approximation of f. We take the Fourier transform of that. That gives us a continuous signal, a function of a continuous variable in the frequency domain.
Then I wanna sample that, and you say to yourself, "How do I sample that?" Well, you sample it according to what the properties are in the other domain. That says I take sample values 1/L apart, and I sample according to that criterion. And then the sampled version of the Fourier transform of the sampled function is this. I'm gonna say that one more time to make sure I've said it right: the sampled version of the Fourier transform of the sampled function looks like this.
So the sampled values of the Fourier transform of the sampled function are - let me use a different notation and call it capital F - F(s_0) = the sum from k = 0 to N-1, with m fixed at 0 for the zeroth entry, of f(t_k)·e^(-2πi s_0 t_k); F(s_1) = the sum from k = 0 to N-1 of f(t_k)·e^(-2πi s_1 t_k); and so on, down to F(s_(N-1)) = the sum from k = 0 to N-1 of f(t_k)·e^(-2πi s_(N-1) t_k). I get N sampled values of the Fourier transform. All right? That's the discrete approximation to the Fourier transform of the discrete approximation of f. I approximate f by taking N samples in the time domain; I approximate the Fourier transform by N values in the frequency domain according to this formula. All right, let me take stock here, because we are almost there. So again: f(t) is discretized to the values f(t_0), ..., f(t_(N-1)), and the Fourier transform F(s) is discretized to F(s_0), ..., F(s_(N-1)), where the formula relating capital F to little f - relating these values to those values - is the one I just have: F(s_m) = the sum from k = 0 to N-1 of f(t_k)·e^(-2πi s_m t_k).
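Written as code, the N values F(s_m) are the naive double sum below; B, L, and the sampled signal are placeholders for illustration:

```python
import numpy as np

# The N sampled transform values F(s_m) = sum_k f(t_k) e^(-2 pi i s_m t_k),
# computed directly from the sample grids in both domains.

B, L = 4.0, 2.0
N = int(2 * B * L)
t = np.arange(N) / (2 * B)        # t_k = k / 2B
s = np.arange(N) / L              # s_m = m / L
f = np.cos(2 * np.pi * t)         # hypothetical sampled signal f(t_k)

F = np.array([np.sum(f * np.exp(-2j * np.pi * s_m * t)) for s_m in s])
print(F.shape)                    # (16,): N inputs give N outputs
```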
Okay. Now there's one more step in defining the Fourier transform as people usually think of it. The continuous side of things is still in sight. We started off by looking at a continuous function and trying to approximate it; you still see the continuous variables here, you still see the continuous picture. And in fact, that's most often where the discrete Fourier transform comes up when you're actually gonna apply it: there's usually some continuous process working in the background that you are approximating discretely. That's where it comes from. But the definition of the discrete Fourier transform, as it's usually given, is purely in terms of discrete data and discrete signals. So the final step in defining the DFT is to eliminate the continuous completely, so that everything is defined in terms of, and uses only, discrete signals - digital signals, however you wanna call it. Now how do you do that?
Well, it's not hard, actually. According to the way we set this up, the k-th sample point in the time domain is t_k = k/2B; the points are spaced 1/2B apart: 0, 1/2B, 2/2B, 3/2B, and so on. In the frequency domain, the sample points are s_m = m/L, spaced 1/L apart: 0, 1/L, 2/L, and so on. All right? So what is t_k·s_m? If I use these relations, it's km/(2BL), but remember that 2BL, in the way we did the setup, is exactly the number of sample points: 2BL = N. All right? So what do I have in my complex exponential? If I still keep the shadow of the continuous variable, I have e^(-2πi s_m t_k), but using this relation I can write it as e^(-2πi km/(2BL)), and that's e^(-2πi km/N). All right? Only the discrete indices k and m now appear in that complex exponential. No more do you see on the right-hand side where the k and the m, so to speak, came from. They came from the m-th sample of the continuous variable s and the k-th sample of the continuous variable t, but there's a relationship here: we sampled at a certain rate, t_k was k/2B, s_m was m/L, the product is km/(2BL), and 2BL is the total number of sample points you take in either domain. All right? So the exponent is just -2πi·km/N. That's the first step in eliminating the continuous variable from the picture.
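The reduction in the exponent can be written out in one line:

```latex
t_k \, s_m \;=\; \frac{k}{2B}\cdot\frac{m}{L} \;=\; \frac{km}{2BL} \;=\; \frac{km}{N},
\qquad\text{so}\qquad
e^{-2\pi i \, s_m t_k} \;=\; e^{-2\pi i \, km/N}.
```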
The last step in eliminating the continuous picture is to identify the value of the function, and of its Fourier transform, at the sample points with just the index that determines that value. So finally, you identify the values f(t_k) with the values of a discrete signal - let me call them f[k] - so that the discrete signal f is the N-tuple f[0], ..., f[N-1]. I'm following the common notation here of using brackets when you talk about a discrete variable, rather than parentheses when you talk about a continuous variable. The entries are nothing but the values of the measured, continuous signal at the points t_k: f[k] = f(t_k). So you have to be a little careful here; I'm trying to make a distinction between the discrete signal, indexed by k, and the values of the continuous signal at the sample points t_k from which it comes. Okay? And likewise, in the frequency domain, you replace in your mind, or in the definition, the continuous s_m by the index m that determines it. That is, you replace F by the discrete signal capital F - underlined, which I'm using as an indication that I have a discrete signal - the N-tuple F[0], ..., F[N-1], where its value at the m-th index is what I was calling F(s_m) before. All right?
The right-hand side made sense from what I had before: it's the value of this approximate Fourier transform at the m-th sample point. But identify that sample point with just its index, and consider that to be defining a discrete signal F[0], F[1], F[2], F[3], and so on. Okay? And if I do that, then all traces of the continuous variable are gone, and I now have a transformation from one discrete signal to another discrete signal. That is to say, you start off with the discrete signal little f, which I'm writing as an N-tuple f[0], ..., f[N-1] - just N discrete values, indexed from 0 to N-1 - and then its discrete Fourier transform, the DFT of this discrete signal, is the discrete signal capital F, which is again an N-tuple indexed from 0 to N-1, the way I've indexed things.
And what is the definition? The definition is that F[m] is the sum from k equals zero up to N minus one of f[k] – I'll write that a little bit nicer – times e to the minus 2 pi i mk over N:

F[m] = Σ_{k=0}^{N-1} f[k] e^{-2πimk/N}

Right? All traces of the continuous have disappeared, and some would consider this a great step forward. All right? Oftentimes, when you see treatments of the discrete Fourier transform – if you look at a book devoted to the discrete Fourier transform, or sometimes even other books on Fourier analysis when they talk about the discrete Fourier transform – they often jump right to this definition. All right? And that's defensible. You can say the Fourier transform is for functions of continuous variables, and the discrete Fourier transform is for functions of discrete variables – it's for discrete signals. Here's the definition in the continuous case; here's the definition in the discrete case. Don't bother me. All right? It's a definition. I get to do what I want. But what you often miss in that treatment – [inaudible] – it's a choice. And I chose to do it this way because I wanted to make the connection with the continuous case, because I think that in applications, that's how you most often see it. You wanna pass from the continuous to the discrete – from a continuous signal to a discrete approximation – and you still want the tools of Fourier analysis that you worked so hard to learn in the continuous case to be available to you in the discrete case.
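Spelled out in code, the definition is just a double sum. Here is a naive O(N²) sketch that follows the formula literally (in practice the FFT computes the same thing faster); the test signal is a made-up example:

```python
import cmath

def dft(f):
    """Discrete Fourier transform: F[m] = sum_{k=0}^{N-1} f[k] e^{-2 pi i m k / N}."""
    N = len(f)
    return [sum(f[k] * cmath.exp(-2j * cmath.pi * m * k / N)
                for k in range(N))
            for m in range(N)]

# A pure discrete exponential at "frequency" 1: its DFT should pile all
# of its weight onto the single index m = 1 (with value N there).
N = 8
f = [cmath.exp(2j * cmath.pi * k / N) for k in range(N)]
F = dft(f)
print([round(abs(Fm)) for Fm in F])   # [0, 8, 0, 0, 0, 0, 0, 0]
```

Note that the time samples t_k and frequency samples s_m appear nowhere in the code: only the integer indices k and m survive, which is exactly the point of the definition.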
So what you have to do most often in applying this is to say to yourself, "All right. That's the definition. Fine." That's what the discrete Fourier transform looks like. I have a discrete signal, little f. I have its discrete Fourier transform, capital F. This is how they're related: the values of capital F at the discrete points 0, 1, 2, 3, and so on are related to the values of little f according to this formula. Everything is discrete, but you have to realize where it comes from: these are values of a continuous signal approximated at a discrete set of points, and these are values of the approximation of the continuous Fourier transform at a discrete set of points. Okay? That is the burden of my remarks. Okay, now what we're gonna do – like I say, and this is where we're gonna wrap up today – is try to make the discrete Fourier transform look as much as possible like the continuous Fourier transform. I'm gonna write the formulas for the discrete Fourier transform in a way that looks like the formulas for the continuous Fourier transform. I'm gonna write the theorems for the discrete Fourier transform to look as much as possible like the theorems for the continuous Fourier transform, and the inverse of the discrete Fourier transform to look like the inverse of the continuous Fourier transform – there's a catch there, as it turns out. But by and large, you can take this quite far. You can take the analogy quite far. Now again, that's a choice. You can just study this object the way it is. You can apply it the way it is to the particular cases that come up. But I think, again from my experience, you gain much more by trying to make a connection to the continuous case than by just taking this as the accepted definition and working with it as it is. All right? So that's gonna be our modus operandi.
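On the inversion point: in the usual convention, the inverse DFT is the same sum with the sign of the exponent flipped, plus an extra factor of 1/N out front – which I take to be the sort of catch alluded to here, though that reading is my gloss, not the lecture's. A round-trip sketch:

```python
import cmath

def dft(f):
    N = len(f)
    return [sum(f[k] * cmath.exp(-2j * cmath.pi * m * k / N) for k in range(N))
            for m in range(N)]

def idft(F):
    # Same sum with the opposite sign in the exponent, plus a 1/N
    # normalization that has no counterpart in the continuous inversion formula.
    N = len(F)
    return [sum(F[m] * cmath.exp(2j * cmath.pi * m * k / N) for m in range(N)) / N
            for k in range(N)]

# Round trip on an arbitrary made-up discrete signal.
f = [1.0, -2.0, 3.5, 0.25]
g = idft(dft(f))
print([round(gk.real, 6) for gk in g])   # [1.0, -2.0, 3.5, 0.25]
```

The round trip recovers the original signal up to floating-point error, which is the discrete analogue of Fourier inversion.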
Now, there are lots of little points along the way, lots of little observations about the similarities and the differences. I don't want to talk about all of them, because it's difficult to see the whole picture at once, so I wanna try to do this at a level where you can see the whole argument go. So be sure to read through these sections carefully before you come to class – you should always do this, naturally. There are some small points I won't always talk about, but I wanna assume you'll have some familiarity with how some of the calculations go. I'm gonna do a certain number of them so you can see how the techniques go and how the formulas come about, but there are a number of little points along the way, having to do with the discrete versus the continuous and so on, that I don't always wanna emphasize. So I'm asking you to keep up with that as we go ahead and develop some of the properties of this. All right? So next time, we're gonna start to unwind this a little bit and see how it works – how it works in analogy with the continuous case. Thank you.
[End of Audio]
Duration: 52 minutes