TheFourierTransformAndItsApplications-Lecture03

Instructor (Brad Osgood):I love show biz, you know. Good thing. Okay. All right, anything on anybody's mind out there? Any questions about anything? Are we all enjoying our first problem set of the class? I guess Thomas posted some typos. I'll correct those. I just had a chance to look at it this morning. There were evidently some minor typos in the problem set. So I'll look it over and repost a version of it. I don't think there's anything there that would confuse anybody. All right? Okay. Let me remind you where we finished up last time. We took an important first step in understanding and undertaking the analysis of periodic phenomena, trying to represent a general periodic phenomenon as a sum of much simpler periodic phenomena, and for us those are the complex exponentials. So think in terms of sines and cosines, all right?

So last time, I say, we took the first step in analyzing general periodic phenomena via a sum, a linear combination of simple building blocks, simple periodic phenomena. So let me remind you what we did, because it's very important that you realize what we did and what we didn't do. We said: suppose that you can write a periodic signal in a certain form; what has to happen? So we start off by saying F of T is a given periodic function, a periodic signal. Function, signal, same thing. And just to be definite we took it to have period one, all right? And the question is, can it be represented in terms of other simple signals of period one, namely the complex exponentials?

So suppose we can write F of T as a sum, say something like this, okay? Sum, K from minus N to N, of C sub K, E to the two pi I, KT. There was a question, by the way, somebody sent me an e-mail about why the sum is symmetric, why it goes from minus N to N, and we talked a little bit about this last time. This is also discussed a little bit more in the notes. You can think in terms of sines and cosines, all right? And the idea is that if you have a real signal the coefficients [inaudible], but it bears repeating. The coefficients satisfy an important symmetry relationship. As complex numbers, C sub minus K is equal to C sub K bar, and because of that the positive frequencies combine with the negative frequencies. The positive terms combine with the negative terms to give you a real part. So it will give you essentially a sum of cosines. Okay?

Or a sum of sines and cosines, because the values are complex. And it's a symmetric sum. Instead of going from one to N or zero to N, if you use complex exponentials it goes from minus N to N. The helpfulness of this representation will just become apparent as we use it, all right? As I said before, the algebraic work in the analysis is made incomparably easier by using complex exponentials rather than real sines and cosines. The calculations just become that much easier. All right.

So, once again, suppose we can do this. Then what we found is the coefficients had to be given by a certain formula. So the coefficients are given by: C sub K is the integral from zero to one of E to the minus two pi I, KT, times F of T, DT, all right? It's an explicit formula for the coefficients. And in principle that's known, all right? If you know the function then you can carry out the integration, in principle. And I want to remind you that there were two parts to deriving that formula. There was a sort of algebraic part, where we just tried to isolate the Kth coefficient in a certain way, but then the analytic part invoked a little calculus, where to solve for the coefficient I had to integrate. This depended on a very important relationship of the complex exponentials, which I'll write right here.

The integral from zero to one of E to the, say, two pi I, NT, times E to the minus two pi I, MT, DT. If I combine the two, that's the integral from zero to one of E to the two pi I, (N minus M) T, DT. That's equal to one if M is equal to N; in that case I'm just integrating E to the zero, I'm just integrating one. Whereas it's equal to zero if M is different from N, all right? Fundamental relationship. We'll see that making its triumphant return a little later on. All right.
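This orthogonality relation is easy to check numerically. A quick sketch (my own illustration, not from the lecture; the helper name `inner` and the sample count are made up): for integers N and M, an equally spaced Riemann sum of the product of the two exponentials reproduces the integral essentially exactly, because the sampled roots of unity cancel.

```python
import numpy as np

def inner(n, m, samples=4096):
    # Approximate the integral from 0 to 1 of e^{2πint} * e^{-2πimt} dt
    # by an equally spaced Riemann sum. For integers n, m with
    # |n - m| < samples, the sampled exponentials cancel exactly
    # (a finite geometric series), so the sum matches the integral.
    t = np.arange(samples) / samples
    return np.mean(np.exp(2j * np.pi * (n - m) * t))

print(abs(inner(3, 3)))  # -> 1.0 : same frequency
print(abs(inner(3, 5)))  # -> ~0 (order 1e-16) : different frequencies cancel
```

The same cancellation is what makes the coefficient formula in the lecture work: integrating against E to the minus two pi I, KT picks out exactly one term of the sum.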

Now, that's fine, and that was a very important first step, but it was only the first step. The second step is to turn the question around, all right? The first step says: suppose we can write the function in this form; then the coefficients have to be given by this formula. The second step is to turn that around and ask the following: When is that really possible? So I want to turn this around. So, again, you're given F of T, periodic of period one, all right? I know what the answer has to be, all right? In a sense, I know what the coefficients have to be. They have to be given by this, so I'm gonna define them, all right?

I want to introduce a different notation that's quite standard in this subject. Define F hat of K to be this integral: the integral from zero to one of E to the minus two pi I, KT, F of T, DT. So, again, in principle you can compute this, all right? You're given F. You can carry out the integration of F against this complex exponential. And it's denoted by F hat of K, and this is called the Kth Fourier coefficient of F, all right? So it depends on F and, in fact, it can be viewed as a transform of F, although I won't stress that terminology quite so much now, because it will become much more useful when we talk about the Fourier transform. It's not the Fourier transform. This is the Fourier coefficient.
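As a sanity check on this definition (my own sketch, not from the lecture; the helper name `fourier_coeff` is made up), here is one way to approximate F hat of K numerically. For cos(2πt) = (e^{2πit} + e^{-2πit})/2, the coefficients at K = ±1 should both come out to 1/2 and all others to 0.

```python
import numpy as np

def fourier_coeff(f, k, n=4096):
    # Approximate F̂(k) = ∫_0^1 e^{-2πikt} f(t) dt by a Riemann sum
    # over n equally spaced sample points in [0, 1).
    t = np.arange(n) / n
    return np.mean(np.exp(-2j * np.pi * k * t) * f(t))

f = lambda t: np.cos(2 * np.pi * t)
print(fourier_coeff(f, 1))  # ≈ 0.5, since cos(2πt) = (e^{2πit} + e^{-2πit})/2
```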

So it's a transform of F, but evaluated on the integers. For each K, I have a corresponding number F hat of K given by this integral. And the question is: can we write the function, can we synthesize the function, in terms of its Fourier coefficients? Can we write F of T as the sum, K equals minus N to N, of F hat of K, E to the two pi I, KT, for some N? Is there some sum of these things that will allow us to express the given periodic function in terms of these simple building blocks? That's the question. As I say, I know what the answer has to be if we can do it; such an expression has to involve these coefficients. That's what I talked about last time. The question is, does it really work?

If it does, if a statement like this is true, you have to believe that it gives you incredible power because, again, a general periodic function can be decomposed in this way into very simple terms. Analyzing a very complex system with periodic inputs and periodic outputs might be possible by analyzing what the system does to the relatively simple inputs and outputs given by the complex exponentials, all right? So if so, we can analyze complex systems via simple building blocks. Not a great sentence here, but you get what I mean. Simple components, simple building blocks. I'd like another try at that sentence, but I think I'd just rather leave it there, all right? Okay.

Now, this method is only gonna be really helpful if it's fairly general. And that's the question I've raised a couple of times. How general really is this? How general can we expect this to be if it works? All right. Is it worth putting the time and effort into even addressing this question if it's only gonna work in a few specialized cases that we could somehow handle on an ad hoc basis when they came up? Well, let me give you a little warning here, all right? Let me show you the kind of things, not to be negative about it, but let me show you how high the stakes are, all right? This is a very high stakes question. And let me do that by a couple of examples, all right?

Examples that are natural enough that they can come up fairly easily in applications. Let's look at some examples of the kind of signals that could come up. For example, you could take a signal that looks like this, all right? I'll draw it over here. I'll just draw the graph. Something that looks like this. It could model a switch that's periodically on or off, zero or one, current is flowing or it's not flowing. So it's one for half a second and then zero for half a second, all right? That's the basic period and then it repeats. So one down to zero at one-half, and then it repeats: it's on, then it's off, then it's on, then it's off, and so on, at one, three-halves, two, and so on, all right? That's a periodic function, and it repeats also for the negative numbers. That's a periodic function of period one, all right? So, switch on for half a second, off for half a second, and repeat. All right. Can we expect to write, let's call this F of T, all right? There's the graph, and I can write down the formula for it. Can we write F in that form? I can compute the coefficients. I won't do it, but, in fact, the coefficients are calculated in the notes, all right? You can carry out the integration. It's not a complicated integral to carry out. You're just integrating the function one over the interval from zero to a half, that's all, the function times a complex exponential. So you can easily compute F hat of K. F hat of K would be, in this case, the integral from zero to a half, let's say, I won't carry it out, of E to the minus two pi I, KT times one, DT, all right? The function is one on the interval from zero to one-half and zero on the interval from one-half to one, so you only compute it over one period, on the interval from zero to one-half, and that's an easy integral to figure out. And can we write F of T as the sum from K equals minus N to N of F hat of K, E to the two pi I, KT? That's the question.
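The lecture leaves the integral to the notes; here is a sketch of it (my own, with made-up names, not the lecture's notation). The antiderivative gives F hat of 0 = 1/2 and F hat of K = (1 - e^{-πiK})/(2πiK), which is 1/(πiK) for odd K and 0 for even K ≠ 0, and a midpoint-rule integration agrees:

```python
import numpy as np

def fhat_switch(k, n=200_000):
    # F̂(k) = ∫_0^{1/2} e^{-2πikt} · 1 dt  (the switch is 0 on [1/2, 1)),
    # approximated by the midpoint rule on n subintervals of [0, 1/2].
    h = 0.5 / n
    t = np.arange(n) * h + h / 2
    return np.sum(np.exp(-2j * np.pi * k * t)) * h

# Closed form from the antiderivative: F̂(0) = 1/2,
# F̂(k) = (1 - e^{-πik}) / (2πik) = 1/(πik) for odd k, 0 for even k ≠ 0.
print(fhat_switch(0))  # ≈ 0.5
print(fhat_switch(1))  # ≈ -0.3183j, i.e. 1/(πi)
```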
And the answer is no, or at least not with a finite sum. I hate to build up all this drama asking this fundamental question only to answer it no for a relatively simple example. But the answer is no, not for a finite sum. Why? Because the complex exponentials, think of them as just sines and cosines, are continuous, all right? And the sum of a finite number of continuous functions is continuous. It can't possibly represent a discontinuous phenomenon. You can't represent a discontinuous phenomenon by a continuous phenomenon, all right? You learned that theorem back in calculus, you know. Some jerk of a math teacher probably said something like, and the sum of two continuous functions, of course, is continuous. And you said, yeah, right. Okay, fine. What's the point? Well, here's the point, right now, for the first time. This limits what you can do. It limits what you might want to do. You can't represent a discontinuous function by the sum of a finite number of continuous functions. Bad luck, all right? Well, what if the function is at least continuous, but maybe has a corner? So let's look at a second example, say a triangle wave. Something that looks like this: up, down, up, down, up, down, and so on, on the interval from zero to one. So it's periodic of period one and, again, the pattern repeats along the negative axis, for negative values of T. So, again, we can easily compute the Fourier coefficients. That's not a problem. They exist. It's not a problem to carry out the integration and, again, I think it's worked out in the notes explicitly, and if not it's easy enough to do. I will not do it in public. Again, you can compute the Fourier coefficients. It has to be integrated in two pieces.
The integral from zero to a half of the function T times the complex exponential, plus the integral from one-half to one of whatever the function is on the corresponding interval, times the complex exponential. But you can do it, all right? It requires a little bit of work, but it's not hard. You can do it.

Can we write, I'll call this F of T again, my generic function is always called F. Can we write, again, the corresponding sum? Sum, K equals minus N to N, of F hat of K, E to the two pi I, KT. And, again, the answer is no. Not for a finite sum. Why? Well, because your jerk of a calculus teacher probably also told you that if two functions are differentiable then the sum of the two functions is differentiable. And you said, fine, fine, whatever. And it didn't seem to matter at the time, but now it matters, because this function is not differentiable. It has a corner, all right? But the complex exponentials are just sines and cosines. They are differentiable. Their sum is differentiable. You can't represent a non-differentiable function as a finite sum of differentiable functions. It won't work. You just can't do it. The right-hand side is differentiable. The left-hand side is not differentiable. Okay? Now, we could go on like this with more and more bad news, all right? Here there's a discontinuity in the first derivative; there's a corner. We could draw examples that would look smoother, examples where there's a discontinuity in the second derivative, or maybe the first and second derivatives are fine but there's a discontinuity in the third derivative, and so on and so on. If there is any lack of smoothness, if there is any corner, no matter how smooth that corner looks, if there is some discontinuity in some high derivative, you're screwed, to use a technical term. All right?
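To make the triangle-wave case concrete (my own sketch, with made-up helper names, not the lecture's), take the triangle wave that rises to 1/2 at t = 1/2 and falls back. Carrying out the two-piece integration gives F hat of 0 = 1/4 and F hat of K = -1/(π²K²) for odd K, zero for even K ≠ 0, and a numerical integration agrees:

```python
import numpy as np

def tri(t):
    # Triangle wave of period 1: equals t on [0, 1/2], then 1 - t on [1/2, 1]
    t = np.asarray(t) % 1.0
    return np.where(t < 0.5, t, 1.0 - t)

def fhat_tri(k, n=100_000):
    # F̂(k) = ∫_0^1 e^{-2πikt} f(t) dt, approximated by the midpoint rule
    t = (np.arange(n) + 0.5) / n
    return np.mean(np.exp(-2j * np.pi * k * t) * tri(t))

print(fhat_tri(0))  # ≈ 0.25
print(fhat_tri(1))  # ≈ -0.1013, i.e. -1/π²
print(fhat_tri(2))  # ≈ 0
```

Note the 1/K² decay here versus the 1/K decay for the switch: the corner is "gentler" than the jump, but the coefficients still never terminate, which is exactly the point of the argument above.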

Any discontinuity in any derivative precludes writing F of T as this sum, the sum from K equals minus N to N of F hat of K, E to the two pi I, KT, for a finite sum, because these functions are infinitely differentiable. These functions are as smooth as they can be. They're sines and cosines. And you can't take the sum of those sines and cosines, put them together, and get something that's not infinitely differentiable, all right? So much for this great idea; we might as well quit now, or take the rest of the quarter off, because it doesn't look very general at all. I mean, the kinds of signals you come up against may have jumps, may have corners, may have discontinuities, whatever. All right. Maybe we want to make an approximation that they're as smooth as can be and then we can use this. And it's an argument, but it's getting away from what we really hope to accomplish here.

Let me, before I go any further, say that there's a maxim lurking here that's important, all right? That is, if we can't represent it as a finite sum then we have to turn to infinite sums, or at least larger and larger finite sums, sums of more and more terms. And the maxim that's lurking, and I'll say it now and then we'll go back to the general discussion, is that it takes high frequencies to make sharp corners, or any corners for that matter. The maxim is: it takes high frequencies to make sharp corners, or really any kind of corner; it sounds better if you say it the first way, but it's really any kind of corner, all right? Any time there's some kind of discontinuity in some high derivative, that means that you're gonna have trouble representing that phenomenon as a finite sum. You're gonna have to take N larger and larger to try to represent it more and more accurately. It takes more and more terms, higher and higher frequencies, to make that bend, all right? Even for a relatively high degree of smoothness. All right? Now, by the way, you may think, again, that this is an artificial maxim, that real signals don't have sharp corners, but that's not true. Later, when we talk about filtering, producing corners as sharp as you can is actually an important part of signal processing. Sometimes you want to take a signal and cut it off after a certain point, either cut it off in time or cut it off in frequency. Some of you, I'm sure, have had some experience with this, all right?
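You can put numbers on the maxim (my own illustration, not from the lecture). For a 0/1 switch signal, the coefficient magnitudes are 1/(π|K|) for odd K; for a triangle wave they are 1/(π²K²) for odd K. A jump makes the coefficients die off only like 1/K, while a mere corner gives 1/K²: both need infinitely many high frequencies, but the jump needs far more of them before the tail is negligible.

```python
import numpy as np

# Fourier coefficient magnitudes (odd k only; even ones vanish):
# 0/1 switch  (jump discontinuity): |c_k| = 1/(π|k|)   ... ~1/k decay
# triangle wave (corner only):      |c_k| = 1/(π² k²)  ... ~1/k² decay
switch_mag = lambda k: 1.0 / (np.pi * abs(k))
tri_mag = lambda k: 1.0 / (np.pi**2 * k**2)

for k in (1, 11, 101):
    # The ratio tri_mag/switch_mag shrinks like 1/k: the sharper the
    # singularity, the more slowly the high frequencies can be ignored.
    print(k, switch_mag(k), tri_mag(k))
```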

Sometimes your signal starts out pretty smooth, but for other reasons you want to make it somehow less smooth. You want to cut things off, and cutting things off can introduce high frequencies in trying to represent the signal, and that is something else that has to be dealt with, all right? So there are some very tricky and very practical questions that go along with this maxim. All right. Now, like I said, at this point you say, well, what's happened to this grand general program that you've been announcing? If it's not gonna work, and if I can't use finite sums to represent all but the most specialized phenomena, if there's any slight discontinuity in there at any level of smoothness, then what good is any of this? And the comeback to that is, we have to consider infinite sums, all right?

To represent general periodic phenomena, periodic signals, we have to consider infinite sums, all right? That's a mathematical point. As a practical matter, of course, you can't sum up an infinite series. You can only sum up an approximation, but if you want to have some confidence in what you're doing, and if you want to know what errors you might have to analyze, the first thing you have to do is realize that you have to expand your purview from finite sums to infinite sums. So to represent a more general periodic phenomenon we must consider infinite sums of the form, say, sum from minus infinity to infinity of F hat of K, E to the two pi I, KT. It may be that not all these coefficients are non-zero. Some of them may be zero and so on, but for sure, if the function has any sort of discontinuity at any level of derivatives, then you're gonna have non-zero coefficients as far out as you go, all right?

Any non-smooth phenomenon or signal will generate infinitely many, not just finitely many, Fourier coefficients, all right? The only way you could possibly have a finite Fourier series is if the function you start out with were infinitely smooth, all right? Now, that's a problem, all right? I just want to say the stakes are high here because if you're gonna deal with an infinite sum mathematically, and even for applications, you have to talk about issues of convergence, all right? How accurate is this gonna be? If I cut it off after a finite number of terms, how accurate is it gonna be? All right? If the series is converging and I cut it off after a finite number of terms, maybe I have a certain amount of confidence that I'm getting a pretty good approximation to my function, all right?

But if the series is not converging and I still try to cut it off after a finite number of terms, what confidence do I have that I'm taking a reasonable approximation to the signal that I really want? So you have to deal with issues of convergence. You are forced to, if you want this theory to apply at all generally. Okay? And, again, not just for mathematical reasons, but also for practical reasons. Now, that's hard, all right? There are a lot of very hard questions here, and we're not gonna go into the mathematical analysis of all of them. I do want to give you a big picture. I want to give you some good news and some hard news, but ultimately pretty good news, about how this is dealt with, because it has all been sorted out. But it took generations of mathematicians and scientists and engineers working on this to finally resolve all these issues.

Why is this so hard? I mean, talking about convergence of series is hard anyway. It's particularly hard in cases like this because the terms are oscillating, all right? The complex exponential, again, think in terms of sines and cosines, all right? If I split this up into its real and imaginary parts, sometimes the cosine is positive, sometimes the cosine is negative, sometimes the sine is positive, sometimes the sine is negative. So you have positive terms and negative terms and you're adding infinitely many of them up, all right? So convergence for infinite sums like this has to depend on some type of cancellations that are going on. There has to be some sort of conspiracy that's making this series converge for a given value of T, all right?

You need a conspiracy of cancellations, how about that? To make such a series converge, because of the oscillation, all right? And that's hard to study. That can be hard to study in a given case. I just want you to be aware that the stakes are high and the issues are real. Now, here's what I want to do. I want to talk about the situation. I want to give you a number of statements, theorems that cover what the story is here. And, again, we're not going to go through the proofs of these things. The mathematical details are not so crucial for us. A lot of them are covered in the notes, not all of them, however, and I'll say a little bit more about that as well. But I do want you aware of where the hard parts are and what the answers are, all right?

So what I want to do is give a summary of the main results. That is, first, the convergence when the function is continuous, which happens often enough that you want to know about it, or, let me say, smooth. It has to be an infinite series if it's not infinitely smooth, all right? Then the convergence, or what passes for convergence, when you have certain discontinuities, and here there is a nice and helpful statement about the case of a jump discontinuity, all right? These two cases are actually relatively straightforward. They're easy to remember, and it's easy to have a certain amount of confidence in what the results are, although I won't go through the proofs.

And, finally, the convergence issues in general, and they involve some really quite deep changes in the perspective that you have to adopt toward this circle of problems. Convergence in general, all right? You actually do have quite a broad general statement that covers pretty much all situations that come up reasonably in practice. But the notion of convergence is a little bit different, and the mathematics involved here took a long time to sort out, all right? So this involves a fundamental change of perspective.

As I said, I'm not gonna do the details, but I at least want to say some of the words, and I'll tell you why. Because it has become so pervasive, that is, the framework for studying convergence in general, even as it applies to these simple situations, has become so standard that you will see it in all the literature, all right? You'll see in the engineering literature the terms that I'm gonna use: orthogonality, mean square convergence, L2, things like that. I'll say all those words and I'll tell you what they mean, but it's become absolutely the way of talking about these things. And if you look at modern treatments of signal processing, I'm thinking, in particular, here of wavelet analysis, which has become very popular in recent years.

You'll hear about an orthonormal basis for so and so, as distinguished from the complex exponentials as an orthonormal basis. You'll hear all the terminology that goes along with this point of view. So I want you to come away from this with some understanding and familiarity with the terminology that goes along with it, all right? That's as far as I want to go with it. But even that, I hope, is gonna be helpful for you. All right. So let me look at these cases, because these cases are pretty straightforward and they're good to know. Convergence when the signal is continuous. Yes, good news. It converges, all right? So, the continuous case, all right?

So, again, you can form the Fourier series, all right? You have that the series, sum K equals minus infinity to infinity, F hat of K, E to the two pi I, KT, converges for each T to the value F of T, pointwise. What that means is that if you plug a value of T into this sum and add it all up, then you have a series of constants, and what will it add up to? It will add up to the value of the function, F of T. So that's good, all right? If the function's continuous then you know the series is gonna converge and it's gonna give you the right value, all right? Good. Not so easy to prove, all right? It takes a little work to prove that and, again, that's sort of sketched out in the notes, all right? Not all the details, but a number of them. You can at least see the broad outlines. And you will find this discussed at various levels of abstraction in most mathematical books on Fourier analysis and Fourier series, all right?

But that's what to keep in mind there. So the continuous case is good. We call this pointwise convergence, again, because you plug in a value of T, a point in time, add up the series, which is then a sum of constants, and you're guaranteed that the sum will converge to the value F of T. In the case the function is smooth you actually get a little bit more. If the function is differentiable, the smooth case. There are various degrees of differentiability: it can have one derivative, two derivatives, and then there's the question about whether the derivatives are continuous and so on, so I don't want to split hairs on this. And, again, there's a fairly precise statement that's given in the notes, but it says this. If it's smooth, and in particular continuous, then you know the series converges.

So, again, the series converges, the Fourier series, sum K equals minus infinity to infinity, F hat of K, E to the two pi I, KT, converges to F of T. That's the same as in the continuous case, but there's actually more to it than this that, again, can be helpful sometimes if you're trying to estimate errors. You actually get what's called uniform convergence, and the way you should think about this is that for different values of T you can control the rate at which the series converges, the same rate for different values of T. I can make this precise, but I don't want to, because it requires too much notation. So think of it this way: you can control, or estimate, the rate of convergence, how fast the series is converging.

At different values of T, depending on the degree of smoothness. What this often means is you get estimates on the size of the coefficients; you get estimates on the difference between the function and a finite approximation, a finite version of the series, depending on the smoothness, all right? The smoother the function is, the faster it converges. That's one way of looking at it. And, again, without giving a technical definition of it. Yeah?

Student:[Inaudible]

Instructor (Brad Osgood):Well, what uniform convergence means, roughly, and again, I don't want to write a lot of epsilons and N's and things like that, is this: say this is the function, all right? It's periodic; the pattern repeats. Then what I mean by uniform convergence is that if you look at a finite approximation, a finite version of the sum just going from minus N to N, then that will track the function. So this is F of T, all right? That's the original function. And the approximation does something like, you know, it tracks the function along the way. This is the [inaudible] sum, all right? You can estimate, uniformly over the interval, how far the approximation is from the function, all right?

So instead of just saying that at a particular point the series is converging, all right? Again, if I pick a particular point here, then the value of the approximation is approaching the value of the function. That's fine. That's what happens in the continuous case. In the smooth case, you can say more than that. You can say uniformly how close the approximation is to the function over the entire interval, over all of zero to one, all right? And you can give an estimate for that, all right? That's what the smoothness does for you, all right? And, again, you could write that down precisely, but I'm not gonna do that.

There is actually a statement of this theorem given in the notes, if you're interested. And it's interesting. Again, it can have some practical implications, because you're never gonna work with an infinite series in practice. You're always gonna work with a finite approximation. So the question is, can you estimate the error? If you're called upon, can you give some reasonable estimate for how far off you are? Not just at a point, but uniformly over the interval where you're interested in making the approximation. So it can come up. So you have a uniform estimate of the closeness. Okay? All right.
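Here is a numerical illustration of this uniform tracking (my own sketch under my own conventions, not the lecture's): for the triangle wave that peaks at 1/2, whose coefficients are 1/4 at K = 0 and -1/(π²K²) for odd K, the worst-case error of the partial sum over the whole interval shrinks as N grows.

```python
import numpy as np

def tri(t):
    # Triangle wave of period 1: t on [0, 1/2], then 1 - t on [1/2, 1]
    t = np.asarray(t) % 1.0
    return np.where(t < 0.5, t, 1.0 - t)

def partial_sum(t, N):
    # S_N(t) = 1/4 - sum over odd k <= N of (2/(π²k²)) cos(2πkt),
    # the real form of the symmetric sum from -N to N with
    # c_k = -1/(π²k²) for odd k (pairing +k with -k gives cosines).
    t = np.asarray(t, dtype=float)
    out = 0.25 * np.ones_like(t)
    for k in range(1, N + 1, 2):
        out -= (2.0 / (np.pi**2 * k**2)) * np.cos(2 * np.pi * k * t)
    return out

grid = np.linspace(0.0, 1.0, 1001)
sup_err = lambda N: float(np.max(np.abs(partial_sum(grid, N) - tri(grid))))
print(sup_err(5), sup_err(51))  # the uniform (sup) error drops as N grows
```

The sup error is controlled by the coefficient tail, sum of 2/(π²k²) over odd k > N, which is roughly 1/(π²N): a concrete instance of "estimates on the size of the coefficients give estimates on the approximation."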

So that's the continuous case and the smooth case. So that's good news, all right? That's good news. That's nice. If the function is continuous the series converges. If the function is smooth the series converges and it actually, sort of, stays uniformly close to the given function. Now, I want to jump back to the discontinuous case, because there is one situation that comes up often enough in practice that it's useful to know about, and that's when you have a jump discontinuity. And, again, I'm not gonna prove this, but I think you should be aware of it, and there's actually gonna be a problem that uses it. Functions can fail to be continuous in a lot of different ways. The simplest way a function can fail to be continuous is if it has a jump discontinuity, like the first example that I showed.

So the example of a switch where it's on, then off, all right? Something like this, where the signal jumps down or up. Okay? So, e.g., the theorem there is, and this is a really cool result actually: if T naught is a point of jump discontinuity, then the sum from minus infinity to infinity, F hat of K, E to the two pi I, KT, does converge at T naught. It converges at the jump discontinuity, even though the function doesn't really have a value at T naught, because it jumps. But it converges, actually, to the average value, all right? It converges to the point in the middle of the jump, to the average of the jump, i.e., to one-half, let me write it like this: one-half of F of T naught plus, plus F of T naught minus, all right? This is usually the way it's written.

So what I mean by that is: you approach T naught from the left, that's F of T naught minus, and it has this value; you approach T naught from the right, that's F of T naught plus, and it has this value, all right? There are two different values at the jump, and if you look at the average of the jump, right in the middle, that's what the series converges to, all right? Kind of cool. And this wasn't proved until, I think, sometime in the early part of the 1900s. So this was way after Fourier had done his initial work, and people were struggling with a lot of these problems. And this is useful enough that it comes up in applications. So, for example, for the switch periodic signal here, if it jumps from zero to one, then at a point of discontinuity the series converges to one-half, all right?

So sometimes people even define this function to be one on the open interval from zero to one-half, one-half at the discontinuity, and zero from here to here, right? You sometimes see that definition given. And the reason why people sometimes give that definition is because they want to anticipate this result. If they want to use Fourier series, then the definition of the function is consistent with the convergence behavior of its Fourier series. This is not so easy to prove, all right? None of these things are really easy to prove. It requires a lot of estimates and careful analysis, but it's nice. It's very satisfying, in some sense, that at least it tells you what the situation is. I'm not saying it's easy to establish, but at least you know what the facts are, all right? That's good. That's good. Okay?
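You can watch this averaging happen numerically (my own sketch; the names are made up, not the lecture's). For the 0/1 switch with coefficients c_0 = 1/2 and c_k = 1/(πik) for odd k, pairing the +k and -k terms gives the real form of the symmetric partial sum: 1/2 plus a sum of sines. At the jump t = 0 every sine term vanishes, so every partial sum sits exactly at the average value 1/2, while away from the jump the sums approach 1 or 0.

```python
import numpy as np

def partial_sum(t, N):
    # Real form of the symmetric sum from -N to N for the 0/1 switch:
    # c_0 = 1/2, c_k = 1/(πik) for odd k; pairing +k with -k yields
    # S_N(t) = 1/2 + sum over odd k <= N of (2/(πk)) sin(2πkt).
    s = 0.5
    for k in range(1, N + 1, 2):
        s += (2.0 / (np.pi * k)) * np.sin(2 * np.pi * k * t)
    return s

print(partial_sum(0.0, 99))    # exactly 0.5: the average across the jump
print(partial_sum(0.25, 999))  # ≈ 1.0: middle of the "on" half-period
```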

So any questions on that? This should be, sort of, part of your vocabulary. Yeah?

Student:[Inaudible] continuity?

Instructor (Brad Osgood):Then that's not a jump discontinuity, all right? What I mean by jump discontinuity, I mean it jumps between two finite values. Okay? Yeah. I'm trying to avoid, I mean, I'm talking to you in a somewhat informal way. I'm trying to avoid giving very precise definitions to all these things because you, sort of, know it when you see it, all right? You can do that, of course. You can give very precise statements here about the uniformity of approximation, about how it converges, and so on, but that's not so crucial for us. It's just you should know in general what the big picture is. Yeah?

Student:[Inaudible] every function with jumps, so what about points where there are no jumps?

Instructor (Brad Osgood):Sorry?

Student:What about points where there are no jumps?

Instructor (Brad Osgood):All right. What about points where there are no jumps? So, actually, perhaps, what I should have done is give a more careful statement of the continuous case. The more careful statement of the continuous case is: the series converges at any point of continuity, all right? So if the function doesn't jump, if it's continuous at that point, then the series converges at that point. Okay? So that would actually be a more precise statement of it. Oh, I'd better stop asking for questions here, all right? As I say, I'm trying to avoid the infinite regression of making the statements as precise as I can. I can do that, but you don't want to see that. All right.

And, again, don't underestimate the effort that it took to do this, all right? When Fourier first came out with his ideas, and we'll see his original application, actually, on Monday, he was very bold. He actually said any function, not just any periodic function, because he was thinking of extending a function to be periodic by just repeating it, you know. He was thinking of functions which die off, and then repeating them. So he made statements like: any function can be represented by such an infinite series. And people were scandalized by that, especially if they were French, and the French are easily scandalized. Bonjour, any function, you are a fool, monsieur.

So it caused a great deal of consternation, and it caused a lot of work to get done to try to sort these things out. So don't underestimate the effort that went into getting even this much. But now, this was still ultimately not satisfying, because Fourier really set his sights very high: that any periodic phenomenon, never mind smoothness, whatever, could be represented in some sense by a Fourier series, by this sort of infinite sum. And to sort that out, and to get to the truth of that, really required an entirely different perspective. So I wanted to say a little bit about that. I'll only get to a little bit today and I'll finish it up on Monday, and then we'll see some applications of this, all right? All right.

So, the general case. That is, not talking about continuity, not talking about smoothness, and so on, all right? Here, what's really involved is you need, we learned after decades, centuries of bitter experience, you need a different notion of convergence of the infinite series. You learn not to talk about what you would think would be the most natural thing, point-wise convergence, all right? You learned by hard lessons. You learned it because it didn't work. I mean, ultimately you couldn't get an answer that was very satisfying. In these cases we're fine, but somehow, under natural and fairly general situations, you learned not to ask for convergence of a sum like this, let me just put general coefficients in there: c sub K, E to the two pi I, KT. Even for general series, all right?

You learn not to ask for convergence of that at particular points, at values of T, all right? You've moved away from plugging in values of T and looking at the series of constants and asking whether or not it converged, all right? And that was a hard step to take. Rather, what was ultimately learned was you get a satisfying answer if you ask for convergence in the mean, convergence on average. I mean, the proof that it was a good idea is the results that you can get if you take this point of view, and it was a hard-won point of view. So you get a better picture, a more satisfying picture, if you ask for convergence in the average sense, also sometimes called convergence in the mean, and I will write that down, all right?

In engineering terms it's also sometimes called convergence in energy. You do see this term, for reasons which I'll explain, probably not today, but next time. Okay? Remind me. All right. Now, what does that mean? Well, you need to make some assumption on the function, all right? It's maybe not completely general, in the sense that the function isn't arbitrary, but the assumption you make on the function is pretty minimal. So, again, I'm assuming the function's periodic. So really everything takes place on the interval zero to one. It's only the properties of the function on the interval from zero to one that matter for us, because everything is just repeated after that, all right?

So, suppose, again, F of T is periodic, period one, and you also suppose that it has the following property: that the integral of its square, the integral of F squared DT, is finite. If the integral of the square of F is finite on the interval, and that's not too hard a thing to insist on, it's not too restrictive a thing to insist on. There are functions that don't satisfy that: if a function goes off to infinity at a certain rate on the interval from zero to one, if it's unbounded, then it may not satisfy this. But certainly the functions that come up most often in applications are gonna satisfy a condition like this, and this is sometimes called the hypothesis of finite energy, for reasons which we'll understand a little bit more later. This integral is often taken to be the total energy of the function, depending on what the function represents. And so you're assuming that somehow the signal has finite energy, which is a reasonable physical assumption, all right?
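In symbols, the standing assumptions just stated read as follows (an editorial transcription, using the notation of the lecture; the switch-signal computation is added as an example):

```latex
% Standing assumptions: f has period 1, and finite energy on one period:
\[
  f(t+1) = f(t), \qquad \int_0^1 |f(t)|^2 \, dt < \infty .
\]
% For the switch signal (1 on (0,1/2), 0 on (1/2,1)) this is immediate:
\[
  \int_0^1 |f(t)|^2 \, dt = \int_0^{1/2} 1 \, dt = \tfrac{1}{2} < \infty .
\]
```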

Finite energy. All right. Then, as it turns out, you can form, the Fourier coefficients do exist. That's something that actually has to be proved separately, because it's a question of integrability, but it's true. You can form the coefficients F hat of K equals the integral from zero to one of E to the minus two pi I, KT, F of T, DT, as before, all right? Again, that's actually now a separate issue that has to be verified, because you're not assuming the function is continuous or anything else, but it can be done. And the punch line is, and it's quite a punch line, that you still get convergence of the series, of the infinite Fourier series, but not, and this board is floating up here, let me do it over here, but not by plugging in points, but rather in an average sense, as follows.
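To see these coefficients concretely, here is a small numerical sketch (editorial, not from the lecture): it approximates the defining integral by a midpoint Riemann sum for the switch signal. The closed-form values quoted in the comments, one-half for K equals zero and one over pi I K for odd K, are a standard computation for this signal, included as an assumption for comparison.

```python
import numpy as np

def fourier_coefficient(f, k, n=100_000):
    """Approximate f_hat(k) = integral_0^1 e^{-2 pi i k t} f(t) dt
    by a midpoint Riemann sum with n subintervals."""
    t = (np.arange(n) + 0.5) / n               # midpoints of [0, 1]
    return np.sum(np.exp(-2j * np.pi * k * t) * f(t)) / n

# The switch signal: 1 on (0, 1/2), 0 on (1/2, 1), extended with period 1.
f = lambda t: ((t % 1.0) < 0.5).astype(float)

# Closed forms for this signal: f_hat(0) = 1/2, f_hat(k) = 1/(pi*i*k)
# for odd k, and f_hat(k) = 0 for even k != 0.
print(fourier_coefficient(f, 0))               # ~ 0.5
print(fourier_coefficient(f, 1))               # ~ -0.3183j, i.e. 1/(pi*i)
print(fourier_coefficient(f, 2))               # ~ 0
```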

So then I'll write it like this. I want to look at how close the function is to an approximating series, so that means a finite version of the sum. So the integral from zero to one of the sum, from K equals minus N to N, of F hat of K, E to the two pi I, KT, minus F of T, all squared, DT, all right? So I look at the average of the square of the difference between a finite approximation of the sum, makes perfect sense to form that, and the function. And the statement is that this tends to zero as N tends to infinity, all right? The series converges to the function in the average sense. The series converges to the function in the mean, all right?
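One can actually watch this quantity tend to zero numerically. Here is an editorial sketch (not from the lecture) for the switch signal, again assuming its standard coefficients, c of zero equals one-half and c of K equals one over pi I K for odd K, and approximating the integral by a midpoint sum:

```python
import numpy as np

def mean_square_error(N, n=20_000):
    """Approximate integral_0^1 |S_N(t) - f(t)|^2 dt for the switch
    signal, where S_N is the partial Fourier sum with |k| <= N."""
    t = (np.arange(n) + 0.5) / n           # midpoint grid on [0, 1]
    f = (t < 0.5).astype(float)            # 1 on (0, 1/2), 0 on (1/2, 1)
    S = np.full(n, 0.5, dtype=complex)     # the k = 0 term, c_0 = 1/2
    for k in range(1, N + 1, 2):           # only odd k are nonzero
        c_k = 1.0 / (np.pi * 1j * k)
        S += c_k * np.exp(2j * np.pi * k * t)
        S += np.conj(c_k) * np.exp(-2j * np.pi * k * t)
    return np.mean(np.abs(S - f) ** 2)

for N in (1, 10, 100):
    print(N, mean_square_error(N))         # the error shrinks as N grows
```

Note that the error goes to zero even though the point-wise behavior at the jump never improves: the mean square sense simply does not see a single point.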

The idea of integrating a function to get an average value is probably not so unfamiliar to you, one second. The idea of integrating the square, or the square of the difference, is also probably more or less familiar to you and, as a matter of fact, this is exactly, again, for reasons I can't explain today, exactly related to the least squares approximation of a function by a combination of complex exponentials. Now, you had a question?

Student:[Inaudible] convergence in the mean square?

Instructor (Brad Osgood):Pardon me?

Student:Is this mean square convergence?

Instructor (Brad Osgood):Yeah. Mean square convergence, thank you. I say convergence in the mean; maybe I should even call it mean square convergence. If you're familiar with that term, by all means use it, all right? So it's mean square convergence. All right. So the result is, and I'm afraid we'll have to quit for today, that the series converges to the function, right? That is, it makes sense to write F of T equals its Fourier series, the sum from K equals minus infinity to infinity of F hat of K, E to the two pi I, KT, but you have to understand what this equal sign means, all right? In this context, all right?

In this context you have to be careful what this equal sign means. It doesn't mean plug in a value of T and watch the series converge to the value of the function. It does not mean that. It means that if you compute that integral for a finite sum, and let the degree go to infinity here, then that integral will tend to zero. The mean square difference will tend to zero. That's what that equal sign means, all right? That the difference between this and a finite approximation, in the integral of the square, tends to zero as the approximation gets better and better.

Now, that was a big change of view, all right? That was a big change in attitude, to adopt that notion of equality, that notion of convergence, and so on. And it had profound, far-reaching consequences, all right? And, again, it took a long time to sort out. And we're out of time for today, so I can't tell you that. So on Monday I want to wrap this up, not in all the mathematical details, only so far as to give you what the general picture is, because you're gonna see it beyond this class. You're gonna see people use this terminology, use these ideas, well beyond what we do in here, and it's really quite satisfying. It's a really quite thorough and satisfying, coherent picture, all right? So more on that on Monday.

[End of Audio]

Duration: 52 minutes