Instructor (Brad Osgood):We're on the air. Okay. A few quick announcements. First of all, the second problem set is posted on the website. Actually, I posted it last evening. So for those of you that are very eager and check the website all the time, it was there. And secondly, the TAs are beginning their office hours this week, today in fact; is that right?

Okay, so if you have any questions for them, they will be available to help. All right, anything on anybody's mind, any questions about anything?


Instructor (Brad Osgood):Yeah.


Instructor (Brad Osgood):Anybody else have any issues with the online lectures? I don't know -- I haven't, I'm afraid to look at myself, so I don't know what they're like.

Student:I was [Inaudible].

Student:Nothing happens.

Instructor (Brad Osgood):Nothing happens when you click on it?


Instructor (Brad Osgood):It's a little trick we like to play on people.

Student:[Inaudible] which browser you are using, so in the Mac [inaudible], should be [inaudible].

Instructor (Brad Osgood):So the question may be which browser you're using. I honestly don't know; I've never tried to do it before [inaudible].

Student:If you're using a Mac, you have to use Safari; it doesn't work on anything else.

Instructor (Brad Osgood):It doesn't work on anything else -- on the Mac, the word from over there is you have to use Safari, which is the one that comes with it. And I don't know about other ones. Anybody else have issues with this? I can find out and I can post an announcement, I suppose.

But I haven't heard; actually, I haven't tried it, so I don't know -- how do they look, the lectures?


Instructor (Brad Osgood):Great. Thank you; that was the right answer. Anything else? All right, so I'll check into it, but try that on the Mac -- try Safari or try other browsers. Any problem with PCs?

Student:They work fine.

Instructor (Brad Osgood):PCs work fine; okay. Don't -- I don't want to see. All right, anything else? All right, so I have two things in mind today. I want to wrap up our discussion of some of the theoretical aspects of Fourier series. We're skimming the surface on this a little bit, and it really, you know, kind of kills me, because it's such wonderful material and it really is important in its own way.

But as I've said before, and now you'll hear me say again, the subject is so rich and so diverse that sometimes you just have to make choices -- if you went into any one topic, you could easily spend most of the quarter on it and it would be worthwhile, but that would mean we wouldn't do other things which are equally worthwhile.

And so it's always a constant trade-off. It's always a question of which choices to make. So again, there are more details in the notes than I've been able to do in class, and will be able to do in class, but I do want to say a few more things about it today. That's one thing.

And the second thing is I want to talk about an application to heat flow. That's a very important application historically, certainly, and it also points the way to other things that we will be talking about quite a bit as the course progresses. All right, so let me wrap up, again, some of the theoretical side of things.

And I'll remind you what the issue is that we're studying -- so this is our Fourier series, fine, all right? Last time we talked about the problem of trying to make sense out of infinite sums, infinite Fourier series, and the important thing to realize is that that's by no means the exception, all right?

We want to make sense of infinite sums of complex exponentials: the sum from k equals minus infinity to infinity of c sub k times e to the 2 pi i k t. I'm thinking of these things as Fourier coefficients, but the problem is general. How do you make sense of such an infinite sum? And the tricky thing about it is that if you think in terms of sines and cosines, these functions are oscillating.

All right, everything here in sight is a complex number and complex functions, but think in terms of the real functions, sines and cosines, where they're oscillating between positive and negative. So for this thing to converge, there's got to be some sort of conspiracy of cancellations that makes it work.

Of course, the size of the coefficients is going to play a role, as it always does when you study issues of convergence. But it's more than that, because the function is bopping around from positive to negative, see, all right, and that makes it trickier to do. That makes it trickier to study.

And again, realize that this is by no means the exception, and so in particular if F of T, again, is periodic, period 1, we want to write with some confidence that it's equal to its Fourier series.

We want to write with some confidence -- at least we want to know what we're talking about -- that F of T, say, is equal to its Fourier series, the sum from minus infinity to infinity of c sub k times e to the 2 pi i k t. And again, if you want to deal with any degree of generality, it's going to be the rule rather than the exception that you'll have an infinite sum, because any small lack of smoothness in the function or in any of its derivatives is gonna force an infinite number of terms.

A finite number of terms, a finite trigonometric sum, will be infinitely smooth. The function and all its derivatives will be infinitely differentiable, so if there's any discontinuity in any derivative, you can't have a finite sum.

So any lack of smoothness forces an infinite sum. Again, since the method is trumpeted as being so general, you have to face the fact that you're dealing with an infinite number of terms here, all right?

Now, by the way, I don't mean to say that all the terms are necessarily non-zero, that all the coefficients are necessarily non-zero. That's not true. Some of the terms may be zero.

For example, when you have certain symmetries, the even coefficients may be zero or the odd coefficients may be zero in special cases, or a finite number may be zero or a block of them may be zero. You don't know exactly what's gonna happen.

But all I'm saying is you can't resort to only a finite sum if there's any lack of smoothness in there. All right, so again, that's the issue. Yeah.


Instructor (Brad Osgood):That's f-hat of k -- that actually is a k, although it looks like a t -- the kth Fourier coefficient. I'll remind you what the definition is, since we're gonna use that. So f-hat of k is the integral from 0 to 1 of e to the minus 2 pi i k t times F of T DT. Is somebody's phone ringing?
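The definition just stated can be checked numerically. The following is my own sketch, not part of the lecture: it approximates f-hat of k by a Riemann sum, using an illustrative test function, cos(2 pi t), whose only nonzero coefficients are f-hat of 1 = f-hat of -1 = 1/2.

```python
import numpy as np

# Sketch (not from the lecture): approximate the kth Fourier coefficient
# f-hat(k) = integral_0^1 e^{-2 pi i k t} f(t) dt by a Riemann sum.
def fourier_coefficient(f, k, n=4096):
    t = np.arange(n) / n                       # n sample points in [0, 1)
    return np.mean(np.exp(-2j * np.pi * k * t) * f(t))

# Illustrative test function: f(t) = cos(2 pi t), whose only nonzero
# coefficients are f-hat(1) = f-hat(-1) = 1/2.
f = lambda t: np.cos(2 * np.pi * t)
print(abs(fourier_coefficient(f, 1) - 0.5) < 1e-9)   # True
print(abs(fourier_coefficient(f, 3)) < 1e-9)         # True
```

On the uniform grid the geometric-sum cancellation is exact, so even a modest number of samples reproduces the coefficients to machine precision.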

All right, now, last time we dealt with, at least in statement, the cases where that function was smooth -- or is smooth -- and you get all that nice sort of convergence that you want. All right, so if F of T is continuous, smooth, or even if you have a jump discontinuity, then you get the sort of convergence that you want. You get satisfactory convergence.

I'll just leave it at that, because the precise statement we talked about last time, and it's also in the notes -- you get satisfactory convergence results. So that's fine. So again, that gives you a certain amount of confidence that you can write the series down, you can manipulate it and plug into it and things like that, and nothing terrible is gonna happen.

But to deal with more general functions, the more general signals that arise, really requires a different point of view, all right? Greater generality requires a different point of view, and that's where we finished up last time.

Now, it's not just for fun, even for mathematical fun. That is, this point of view turns out to have far-reaching consequences and really does frame a lot of the understanding and discussion, not only for Fourier series, but for other subjects that are very similar, and also in sort of everyday use in a lot of fields of signal processing.

So greater generality requires a different point of view, different terminology, different language, and a whole sort of re-orientation. All right, and again, I set that up last time, and I'm gonna remind you where we finished up, and I wanna put in one more important aspect of it today, and that's all we're gonna do, sad to say, all right?

So again, the condition is integrability, of all things. Instead of smoothness, instead of differentiability, the condition that turns out to be important is integrability of the function. All right, the important condition, and a relatively easy one to verify: integrability, all right?

So you say that a function, say F of T, is square integrable -- or briefly, you say that F is in L2 of the interval from 0 to 1; I'm only working on the interval from 0 to 1 here. And L2 stands for, well, the 2 stands for square and the L stands for Lebesgue, and I'll say a little bit more about that in a second. F is square integrable if the integral of the square is finite: the integral from 0 to 1 of the absolute value of F of T squared DT is less than infinity.

I want to allow complex-valued functions here, although many of the applications are real, but I want to allow complex-valued functions, so I put the absolute value of F of T squared there, all right?

That's an easy condition to satisfy. Physically, one encounters this condition in the context of this integral representing energy, and so one also says that the signal has finite energy. That's another way of saying it, and you see that terminology.

All right? So if you have a periodic function which is square integrable, like so -- so again, if F of T is periodic and square integrable -- then you form the Fourier coefficients, and they exist; actually, the square integrability is enough to imply that the Fourier coefficients exist.

Then you form f-hat of k, as before, the integral from 0 to 1 of e to the minus 2 pi i k t times F of T DT, and the sum converges -- the infinite Fourier series is equal to the function -- in the sense of mean-square convergence.

And you have, and this is the fundamental result, that the integral from 0 to 1 of the square of the difference between the function and the finite approximation -- the integral of the square of the difference -- tends to 0 as N tends to infinity. Takes up the entire board, and it deserves to. All right, it's an important result, okay?
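This fundamental result can be watched happening numerically. The following is my own illustration, not the lecture's: for a period-1 square wave (a function with a jump, so the series is genuinely infinite), the mean-square error of the Nth partial sum shrinks as N grows.

```python
import numpy as np

# Sketch (my illustration): for a period-1 square wave, estimate the
# mean-square error  integral_0^1 |f(t) - S_N f(t)|^2 dt  of the Nth
# partial sum S_N f, and watch it decrease as N grows.
n = 4096
t = np.arange(n) / n
f = np.where(t < 0.5, 1.0, -1.0)              # square wave, period 1

def mean_square_error(N):
    """Riemann-sum estimate of the L2 error of the Nth partial sum."""
    S = np.zeros(n, dtype=complex)
    for k in range(-N, N + 1):
        c_k = np.mean(f * np.exp(-2j * np.pi * k * t))   # f-hat(k)
        S += c_k * np.exp(2j * np.pi * k * t)
    return np.mean(np.abs(f - S) ** 2)

print(mean_square_error(51) < mean_square_error(5))   # True: error decreases
```

Note that the error goes to zero on average over the whole interval even though, at the jump itself, the partial sums never settle down pointwise.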

Okay, now I really feel like I have to say a little bit more here, but only a little bit more. The fact is, you only get these wonderful convergence results if you not only generalize your point of view towards convergence, but you also generalize the integral.

All right, this actually only holds for -- the whole circle of ideas really only holds for -- a generalization of the Riemann integral. Do you have to worry about this? No. All right. But the fact is, you only get these convergence results, such convergence results -- you can only really prove them -- in a slightly more general context of integration.

You get such convergence results if you use a generalization of the integral, which is a whole other subject, due to Lebesgue -- that's what the L stands for in L2. Lebesgue was a French mathematician at the turn of the 20th century who, in the context of these sorts of applications -- trying to extend integration, trying to extend limiting processes to more general circumstances where the results couldn't be proved classically; mathematicians were worried about this -- had a more general definition of the integral.

And with that more general, more flexible definition of the integral, the limiting processes are easier to handle. You can do things you'd like to do but somehow don't feel justified in doing. And in that context -- this is something you don't have to know about, all right -- he generalized the integral, and by doing so, it was perfectly suited to solving these sorts of problems.

It was really a quite compelling case. It was really a beautiful theory, but, you know, there's a famous quote. The usual integral that you studied when you studied integrals in calculus is called the Riemann integral. All right, and that suffices for just about every application, but there are more general integrals.

On the other hand, John Tukey, who was a famous applied mathematician, was quoted as saying, "I certainly don't want to fly in an airplane whose design depended on whether a function was Riemann integrable or Lebesgue integrable." That's not the point.

All right, the point really is a theoretical one, not a practical one. But nevertheless, somehow honesty compels me to say what's involved here. All right, so it's in that sense that you have to talk about convergence, and it's not an unreasonable condition, all right? That is, it's also called, as I said last time, convergence in energy, convergence in the mean, or mean-square convergence.

And what that means is that on average this sum is converging to the function -- on average in the sense that if you look at the difference between the function and the finite approximation, square that, and integrate it, and integrating is sort of taking an average, then that tends to zero as N tends to infinity.

It's approximating over the entire interval on average, rather than concentrating its efforts on approximating it at a single point. And again, if you look at the literature -- and I'm talking about the engineering literature -- you'll see these terms all the time. You'll see L2 all the time.

As a matter of fact, some of you have probably had courses in quantum mechanics, and if you take at all sort of an advanced course in quantum mechanics -- not that I ever have, but if you do -- it's also framed very much in the context of L2 spaces and things like that, Hilbert spaces and L2 spaces and so on.

I'm not making this up. It really has become the sort of framework for a lot of the discussion. If you want to be sure that you have a certain amount of confidence in applying mathematical formulas, you need the right kind of general framework to put them in, and this is, for many problems, exactly that.

Now, there is one further aspect of it, and that's as far as I'm gonna go, that brings back that fundamental property of the complex exponentials that we used to solve for the Fourier coefficients. So I want to highlight that now.

So remember, in solving for the Fourier coefficients way back when -- I mentioned this last time, and I wanted to bring it back now in a more general context -- we used a simple integration fact that was actually everything. It was essential: the integral from 0 to 1 of e to the 2 pi i n t times e to the minus 2 pi i m t DT -- combining those complex exponentials, that's e to the 2 pi i (n minus m) t -- is equal to zero if m is different from n. If m is equal to n, then it's equal to 1, all right?
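This integration fact is easy to confirm numerically. Here is a small sanity check of my own (not part of the lecture), taking the inner product of two complex exponentials on [0, 1] by a Riemann sum:

```python
import numpy as np

# Sanity check (my sketch): the inner product of e^{2 pi i n t} and
# e^{2 pi i m t} on [0, 1] is 0 when n != m and 1 when n == m.
def exp_inner(n_, m_, samples=4096):
    t = np.arange(samples) / samples
    return np.mean(np.exp(2j * np.pi * n_ * t) * np.exp(-2j * np.pi * m_ * t))

print(abs(exp_inner(3, 5)) < 1e-12)       # n != m: integral is 0
print(abs(exp_inner(4, 4) - 1) < 1e-12)   # n == m: integral is 1
```

The uniform grid makes the cancellation exact: summing e^{2 pi i (n-m) t} over equally spaced points is a geometric sum that vanishes whenever n - m is not a multiple of the sample count.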

That simple fact, that simple calculus fact, turns out to be the cornerstone for understanding these spaces of square-integrable functions -- for introducing geometry into those spaces. So this simple observation, simple fact, is a cornerstone for introducing, if I can say it that way, "geometry" -- I put it in quotes because it's geometry like you don't even think of geometry, but the features are there by analogy.

Geometry into the space of square-integrable functions, L2(0, 1). And when I say introduce geometry, again, I'm reasoning by analogy here, although it's a very powerful analogy. And the thing that makes geometry geometry, as far as Euclid is concerned, is the notion of perpendicularity.

All right, that's one of the most important notions of geometry, and that's exactly what gets carried over here. It allows you -- it allows one -- to define orthogonality or perpendicularity, same word, same thing, all right, via an inner product or dot product.

All right, so let me give you the definition. I'm not gonna justify it, because there's justification in the book, actually, in the notes. But it looks like this: so again, if F and G are square-integrable functions -- and I'm gonna assume they're complex; there's a little distinction here between what happens in the real case and what happens in the complex case -- square integrable on (0, 1), as always, that's the basic assumption we make.

Then you define their inner product, which is a generalization of the usual dot product for vectors, by an integral formula. All right, it's denoted by (F, G) -- and you're right, there are various notations for it, but a common notation is just to put them in a pair, or sometimes people will write them with angled brackets, or sometimes people do all sorts of things.

Sometimes physicists call them, you know, bra vectors and ket vectors and all sorts of bizarre, unnatural things. But I'll take the simplest notation: the integral from zero to one of F of T times G of T bar -- complex conjugate -- DT. That's because I'm allowing complex-valued functions.

If G of T were a real-valued function, then I would just have the integral of F times G. It's because things are complex that -- for reasons which, again, are explained a little bit better in the notes -- you put the complex conjugate in there.

Now, what you have to believe is that this is sort of a continuous, infinite-dimensional generalization of the dot product of two vectors. How do you take the dot product of two vectors? You multiply the corresponding components together and add, all right?

So it's like you're multiplying the values together -- although it's the function times the conjugate of the other function, those are the values -- and you're adding, but continuously, in the sense of taking the integral instead of the sum, all right? That's sort of where it comes from.

Now, that's fine, but the real benefit is it allows you to define, if you ever want to do that, when two vectors -- when two functions -- are perpendicular. So you say that F and G are orthogonal -- you could say they're perpendicular, but it sounds ever so much more mathematical if you say they're orthogonal, all right.

They're orthogonal if their inner product is 0, if (F, G) is equal to 0, period -- that's a definition, all right? That's a definition. Now, where does the definition come from? Again, because I can't say everything as much as I'd like to, let me refer to the notes, because it's actually not so unreasonable. This definition comes exactly from wanting the Pythagorean theorem to hold for inner products.

Now, the calculation we did with the complex exponentials shows exactly that the different complex exponentials are orthogonal. One more thing, actually: let me introduce the length of a function, or the norm of a function, in terms of the inner product -- or in terms of the integral, same thing.

So the norm of F, the norm of a function F, is defined by: the square of the norm is just the inner product of the function with itself, just like the square of the length of a vector is the inner product of the vector with itself. So the inner product of F with itself is the norm of F squared, and that's the notation you use. And what is that? The norm of F squared is exactly the integral from 0 to 1 of the absolute value of F of T squared DT.

And let me just tell you what the Pythagorean theorem is then. All right, the Pythagorean theorem -- I'll do it over here, because that's where this comes from; that's exactly where this definition comes from, and that's exactly where the property comes from. The Pythagorean theorem is exactly this: F is orthogonal to G if and only if the norm of F plus G squared is equal to the norm of F squared plus the norm of G squared, and that is if and only if the inner product is 0.
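This version of the Pythagorean theorem can be tested on concrete functions. The following is my own sketch, not the lecture's, using sin(2 pi t) and cos(2 pi t), which are orthogonal on [0, 1]:

```python
import numpy as np

# Sketch (my illustration): for orthogonal functions f and g on [0, 1],
# check that ||f + g||^2 = ||f||^2 + ||g||^2 (the Pythagorean theorem).
t = np.arange(4096) / 4096
f = np.sin(2 * np.pi * t)
g = np.cos(2 * np.pi * t)

def norm_sq(h):
    return np.mean(np.abs(h) ** 2)         # integral_0^1 |h(t)|^2 dt

inner_fg = np.mean(f * np.conj(g))         # inner product (f, g)
print(abs(inner_fg) < 1e-12)                                     # orthogonal
print(abs(norm_sq(f + g) - (norm_sq(f) + norm_sq(g))) < 1e-12)   # Pythagoras
```

Here each squared norm is 1/2 and the squared norm of the sum is 1, exactly as the identity demands.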

Why is that the Pythagorean theorem? Because of vector addition, all right? If I draw vectors U and V, then this is the vector U plus V, all right, and the Pythagorean theorem says the square of the hypotenuse is the sum of the squares of the sides for a right triangle, and only for a right triangle. That characterizes the right triangle, all right.

So that says the norm of U plus V squared -- the square of the length of the hypotenuse -- is the sum of the squares of the other two sides, the norm of U squared plus the norm of V squared, and that holds exactly when the two vectors are perpendicular, and that's where the definition of the inner product comes from, and everything else.

Beautiful -- it's beautiful, all right. So the trick, if you like this sort of thing, is in extending that from vectors to functions and reasoning by analogy, all right. Now let me say what you should think about and what you shouldn't think about. The analogy is very strong. In many ways, your geometric intuition for what happens for vectors you can draw carries over, at least algebraically, to this more general situation; however -- so let me say one more thing then.

These complex exponentials are exactly orthonormal functions -- orthogonal functions of length one. That calculation that I've just erased with complex exponentials says this: e to the 2 pi i n t, inner product with e to the 2 pi i m t -- all right, that's the integral of this times the conjugate of this, and the conjugate of e to the 2 pi i m t is e to the minus 2 pi i m t; that's where the minus sign comes from, all right.

This is equal to zero if m is different from n, and it's equal to one if m is equal to n. All right, these are orthonormal vectors with respect to this inner product. They're orthogonal -- their inner product is zero -- when they're distinct, and they have length one -- their norm is equal to 1 -- when they're equal.

Okay, now, as I said, you reason by analogy. You can visualize what it means for vectors to be perpendicular; you can draw this picture, all right. So you might say to yourself, I should be able to visualize this. I should be able to sit in a quiet room, turn the lights off, and visualize what it means for complex exponentials to be orthogonal. Yes, yes, I see it.

No, you don't. Let me relieve you of this burden. There is no reason in hell that you should be able to visualize when two functions, let alone complex exponentials, are orthogonal. Don't beat yourself up trying to do that, and don't say to yourself you're less of a person if you cannot visualize when two functions are perpendicular -- not only complex exponentials, but sines and cosines. All sorts of innocent-looking functions out there that you have worked with for all of your life turn out to have this sort of orthogonality relationship.

But you might say to yourself, like sines and cosines -- so sine of 2 pi t is orthogonal to cosine of 2 pi t, all sorts of interesting results like that -- you might say to yourself, "Gee, if I look at the graph, I should be able to visualize this." Don't bother.

All right, there's no point. There's no point. It's reasoning by analogy, all right. The fact is, you establish these formulas -- you establish they're orthogonal -- and then you apply your intuition for orthogonal vectors, where you can visualize it, to situations where you can't visualize it, all right? And that's the real power of this line of reasoning, because you can apply your intuition to places where it should have no business applying, somehow, all right.

All right, now, almost, almost, almost, almost, almost the final thing then -- the final piece of this approach to Fourier series is to realize that the Fourier coefficients are projections of the function onto these complex exponentials, all right. So again, I want to remind you that one of the ways you use an inner product is to define projections -- to define orthogonal projections, in particular -- so you use the inner product for vectors to define and compute projections.

All right, if U and V are two vectors, unit vectors, say -- norm of U equals 1 and norm of V equals 1 -- then what is the projection of V onto U? The projection of V onto U is just the inner product (V, U). That's how much V knows U, all right. The projection here is the inner product of V with U, okay. And how much does U know V? The inner product, same thing.

All right, so that's the length of the projection, all right, and then the vector that goes in the direction of U and has this length -- the vector projection -- is the inner product of V with U, times U.

All right, U is a unit vector, so you go in that direction, this length, and that's how you project. And it shouldn't be shocking -- it should be somewhere in your background; you've certainly had classes in linear algebra -- to realize that decomposing a vector into its projections onto given vectors can be a very useful thing.

It's breaking a vector down into its component parts, breaking a vector down into its component parts, all right. Now, the situation for functions is exactly analogous. I don't want to say it's exactly the same, because you can't draw the picture, but you can write down the formulas, and the formulas are a good guide. The formulas are a good guide.

What is the Fourier coefficient? The Fourier coefficient is exactly a projection, all right. If I compute the inner product of the function F with the complex exponential e to the 2 pi i n t, then that's exactly the integral from zero to one of F of T times e to the 2 pi i n t bar DT. That is the integral from zero to one of F of T times e to the minus 2 pi i n t DT -- that is the nth Fourier coefficient. The nth Fourier coefficient is exactly the projection of the function onto the nth complex exponential.

All right, cool. Way cool. Infinitely cool. So cool. And what is writing the Fourier series? To write F of T is equal to the sum from k going from minus infinity to infinity of f-hat of k times e to the 2 pi i k t is to write F of T as the sum from k going from minus infinity to infinity of the inner product of F with the kth complex exponential, times the kth complex exponential.

This is a number, all right -- that's the Fourier coefficient. That's the length of the projection of F onto its kth component, and there's the kth component. All right, that's exactly what that is, and it's this point of view that is so ubiquitous, all right, so ubiquitous, not only in Fourier series, but in other versions of essentially the same ideas.

And you see this all the time in signal processing. I'll mention wavelets just very briefly at some point, because wavelets are such a hot topic. It's the same sort of thing. You're trying to decompose the function into its simpler components, and in this case the simpler components are the e to the 2 pi i k t's, the complex exponentials, all right.

So to write an expression like this down -- to be able to say this, in the appropriate notions of convergence -- is to say, I'll do it over here, that the complex exponentials form an orthonormal basis for the square-integrable functions, all right.

To be able to write this statement and understand what it means in terms of convergence and all the rest of that jazz is to say that the complex exponentials -- that is, all of these, e to the 2 pi i k t, k going from minus infinity to infinity -- form an orthonormal basis for these periodic functions, square-integrable periodic functions, all right.

And then sometimes the game is to take different orthonormal bases. Wavelets are nothing but -- well, I'm not going to say nothing but, because they have their own fascination -- they're another orthonormal basis for square-integrable functions.

The complex exponentials are not the only orthonormal basis, just as any vector space has lots of different orthonormal bases. These are particularly useful ones.

So to write this synthesis formula is to express F in terms of its components. What components? The components in terms of these elementary building blocks, all right. And what are the coefficients? The coefficients, like they are for any orthonormal basis, are the projections of the vector onto those directions, all right.

It's very satisfying, and you should try to put this in your head. Yeah.

Student:[Inaudible] in the sense that you can't get one from the other in the rotation [inaudible].

Instructor (Brad Osgood):Well, it's more complicated. So the question is, are the bases, like in finite-dimensional spaces, related to each other by essentially a rotation -- a unitary or orthogonal matrix? And in this space, yes, you have unitary transformations, linear operators that are unitary, but the definitions are a little bit more complicated.

But you have similar sets of things. All right, you can get from one orthonormal basis to another orthonormal basis. Okay. All right, like I say, one could, of course, say much more about this. I don't want to -- all right, well, I do want to, but I can't, all right.

All right, it's this point of view that is important for you to carry forward, all right. That's all I'm saying, because you will see it, you will see it, all right. And again, the idea is you reason by analogy, all right. It's hard to write down this formula, all right -- that's something new. Writing down a formula, writing down an inner product in terms of an integral -- all right, I gotta deal with that. That's something hard, and I can't visualize it, all right.

But the words you use in the case where you can visualize things are almost identical to the words you use in the situation where you can't visualize things. All right, and you can carry that intuition over from the one case to the other case, and it's extremely important. And I'll give you one -- I keep saying one -- final thing.

So, this time for sure: an application of this is what's also called Rayleigh's identity, which is nothing more than to say the length of a vector can be obtained in terms of the sum of the squares of its components, all right.

You know how to find the length of a vector in Rn: you add up the squares of the components, right? You do the same thing here. Rayleigh's identity -- and I will not derive it for you, but it is derived in the book, and I even say something like, "Do not go to bed until you understand -- do not go to sleep until you understand -- every step in the derivation."

It says the integral from 0 to 1 of the absolute value of F of T squared DT -- you write F in terms of its Fourier coefficients; I didn't write that down -- is the sum of the squares of the Fourier coefficients: the sum from minus infinity to infinity of the absolute value of f-hat of k squared, all right. That's Rayleigh's identity.

It says the length of a vector is the sum of the squares of its components, all right; the components of the function are its Fourier coefficients, all right. This is the square of the length of a function -- it's the inner product of the function with itself -- and it is the sum of the squares of its components. That's all this says. And it follows algebraically in exactly the same way as you would prove this using inner products for vectors, exactly, exactly.
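Rayleigh's identity is also easy to confirm numerically. Here is my own check, not the lecture's, for the illustrative function cos(2 pi t), whose energy is 1/2 and whose only nonzero coefficients are f-hat of 1 = f-hat of -1 = 1/2:

```python
import numpy as np

# Sketch (my check): verify Rayleigh's identity
#   integral_0^1 |f(t)|^2 dt = sum_k |f-hat(k)|^2
# for f(t) = cos(2 pi t): energy 1/2 on the left, 1/4 + 1/4 on the right.
t = np.arange(4096) / 4096
f = np.cos(2 * np.pi * t)

energy = np.mean(np.abs(f) ** 2)                              # left-hand side
coeffs = [np.mean(f * np.exp(-2j * np.pi * k * t)) for k in range(-10, 11)]
coeff_energy = sum(abs(c) ** 2 for c in coeffs)               # right-hand side

print(abs(energy - coeff_energy) < 1e-10)   # True
```

Truncating the sum at |k| = 10 costs nothing here because all the other coefficients vanish; for a general square-integrable f, the tail of the sum would carry the remaining energy.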

All right, now, this was known before any of this stuff was put in place, all right. This identity was known before the general framework -- orthogonal functions, square-integrable functions, all the rest of that jazz -- was put in place, and it was viewed as an identity for energy, all right. This is the energy of the function, and these are somehow, you know, the individual components here.

And that's why one often says you can compute the energy in the time domain or the frequency domain -- we're gonna find an analog of this for Fourier transforms, all right. But it really says nothing other than the length of the vector is the sum of the squares of the components.

So let me write down -- here's F of T, here's the Fourier series: the sum of f-hat of k times e to the 2 pi i k t, all right. All right, how much energy does F have as a signal? It has this much. All right, how much energy does each one of its components have?

Well, the energy of each one of the complex exponentials is one, because they're of length one. So how much energy does each one of the components have? It's the magnitude of this thing squared; it's the multiplier, the projection out front.

The coefficient squared times the energy of the complex exponential, which is 1. So what is the total energy here? It's the sum of the contributions of energy from the individual components. Each individual component contributes the square of the magnitude of its coefficient to the whole sum, the whole number, okay? Pretty cool; pretty cool.

All right, here ends the sermon. Don't leave without putting something into the collection plate. Here ends the sermon on inner products, squared lengths, and so on, okay? All I can say is trust me; you'll run into it again.

Now, what I want to do, and I probably won't finish it today, I'll finish it up on Wednesday, although it goes pretty fast, is a classic application of Fourier series to a classic physical problem, in fact, the problem that launched the whole subject. So I want to do an application to heat flow.

All right, this is a very important part of your intellectual heritage; I'm serious. If you're going to be practicing scientists and engineers, you want to know something about where the subject came from, because you somehow wind up revisiting a lot of these ideas in different contexts, but often in similar contexts.

And this was the problem that really started it all: the problem of studying how the temperature varies in time when there's some initial temperature distribution.

All right, so you have a region in space with an initial distribution f(x) of temperature. I say f(x) here just to indicate that x is some sort of spatial variable; the dimension of x is the dimension of the region. And the question is how the temperature evolves.

You have an initial distribution of temperature; that's what happens at t equals zero. Then as t increases, the temperature changes. Heat flows from one part of the body to another part of the body, and you want to know how that is governed.

How does the temperature change, I should say vary, well, change is fine, in both position and time? This was an important problem, and it still is, actually. We're only going to handle one very special case of it: the original case, the problem that Fourier studied.

It's the case where periodicity comes into the problem naturally, because of periodicity in space. To study this problem means to say, first of all, what the region is, and then to say what the initial distribution of temperature is, or at least to say that an initial distribution of temperature is given.

So we look at a heated ring, something like this. We're given an initial temperature, say f(x), where x is a position on the circle, a point on the circle, okay? And we let u(x, t) be the temperature at position x at time t. The question is whether we can say something about it; that's the function we want to find.

We want to study u(x, t). Now, the fact is that periodicity enters into this problem, periodicity in space, because of the circle: the temperature here is the same as the temperature at the same place if I go all the way around the circle, so obviously the temperature is periodic in position.

Okay, so the temperature is periodic as a function of x. Let's normalize things and assume the period is 1, so the circle has circumference 1, or, another way of looking at it, just imagine the circle is the interval from 0 to 1 with the endpoints identified. So, just because we've been dealing with functions of period 1, let's suppose that. Let's suppose period 1.

Period 1, all right. So the initial temperature distribution f(x) is periodic of period 1, and so is u(x, t) in x, not in t; it's not periodic as a function of time, but it is periodic as a function of the spatial variable: u(x + 1, t) is the same thing as u(x, t) for any value of t, okay?

That's fundamental, and that's how Fourier got into this; that's why he introduced those ideas. You consider this problem where there's periodicity in space, coming from the symmetry of the object that you're heating up, and that has consequences. All right, so now, with a certain amount of confidence, with a certain amount of bravado, we write down the Fourier series.

So we write the Fourier series: u(x, t) is the sum from k going from minus infinity to infinity of c sub k, I'll write it like this, c sub k times e to the 2πikx. It's periodic in the spatial variable, so the variable in the complex exponential is x. Where is the time dependence? The time dependence is in the coefficient, c; that's where the time dependence has to be. K is just a constant; k is just an integer.

The time dependence is in the c_k. That is, a better and more accurate way of writing it is like so: u(x, t) is the sum from k going from minus infinity to infinity of c sub k of t. I know what it is; it's the Fourier coefficient, and I'll bring that in later, but let me just call it c right now.

c sub k of t times e to the 2πikx: periodic in x and varying in t. How does it vary in t? That's what we want to know. The mystery here is the coefficients. We could solve the problem if we could find the coefficients in terms of the initial temperature distribution. So what are the c_k? That's the question.
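Written out, the series with time-dependent coefficients being described is:

```latex
u(x, t) \;=\; \sum_{k=-\infty}^{\infty} c_k(t)\, e^{2\pi i k x}
```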

All right, now, we're going to be able to attack this problem because, independent of periodicity, independent of anything else, heat flow is governed by a partial differential equation. The flow of heat on a circle or on other regions is in itself a big subject, but this is one of the basic partial differential equations of mathematical physics, which you have probably seen somewhere in your life and will see again. I'll just do heat flow in one dimension.

I'm talking about one-dimensional problems here, but there are also ways of analyzing flow over a two-dimensional or three-dimensional region. We have the heat equation, which says that the time derivative of the temperature is proportional to the second x-derivative: u sub t equals a times u sub xx, where maybe I should call the constant a, since I've already used k. That's the one-dimensional heat equation.

All right, I'm not going to derive that. Actually, I have a derivation in the book that sort of follows Bracewell's discussion of the heat equation, but it's a fundamental equation of mathematical physics. And here is one of the great dodges of all time: the constant a depends on the physical characteristics of the region, which no one wants to talk much about, but that is what affects the size of a. I should say, more generally, this equation governs not just the flow of heat; in general it's called the diffusion equation.

It governs how things diffuse. What things? Charge through a wire is studied by this equation; holes through a semiconductor are governed by a higher-dimensional version of the equation, but it's the same idea. So this is also called the diffusion equation, and it governs phenomena associated with diffusion.

It's a general term, but it's a term you'll probably run across. All right, now, I'm not going to get too far today, but I'm going to get a little way along. I want to apply this equation to study this function; I'm going to use it for the ring. And just to simplify my calculation, although it does not make any substantive difference, I'm going to choose the constant a equal to 1/2.

That's a standard choice, certainly for the mathematical analysis, but it doesn't matter; you could have the constant tagging along in the whole thing and it wouldn't affect the analysis. So I'm going to choose a equal to 1/2, and the equation looks like u sub t is equal to 1/2 times u sub xx, all right.

Now, I'm going to short-cut the discussion a little bit. There's one way of doing it in the book, in the notes, which is a little bit more rigorous than what I'm going to do, although both can be justified very easily. We have a formula for u in some sense; no, not just in some sense, we have a formula for u in this sense.

And I'm going to plug it into the equation: plug u(x, t) equals the sum over k of c_k(t) times e to the 2πikx into the equation u sub t equals 1/2 u sub xx. What happens?

What happens? Well, u sub t, and I'm sorry, I should have said this, but I assumed everybody knew, u sub t is the partial derivative with respect to t of that function over there. So what is it? The only thing that depends on t here is the coefficients c sub k, so u sub t is the sum over k of c_k prime of t times the complex exponential, which is left alone: e to the 2πikx. And what is u sub xx of this expression?

Well, here the derivatives fall on the complex exponential, and I differentiate twice; differentiating a complex exponential is like differentiating an ordinary exponential, so the constant comes down. I get c_k(t) left alone, because I'm differentiating with respect to x, and then if I differentiate e to the 2πikx twice with respect to x, I get (2πik) squared times e to the 2πikx.

Nothing up my sleeve, no tricks, no deceptions. Okay, one more step: that's the sum over k of c sub k of t times minus 4π²k², because i squared is minus 1, times the complex exponential e to the 2πikx. All right, now to put the two together.
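The differentiation step can be spot-checked numerically: for e_k(x) = e^(2πikx), a centered second difference in x should agree with the analytic answer (2πik)² e_k(x) = -4π²k² e_k(x). The choices of k, the test point, and the step size below are illustrative.

```python
import math
import cmath

# Spot-check: the second x-derivative of e_k(x) = exp(2*pi*i*k*x)
# should be (2*pi*i*k)^2 e_k(x) = -4*pi^2*k^2 * e_k(x).
k = 2

def e_k(x):
    return cmath.exp(2j * math.pi * k * x)

x0, h = 0.37, 1e-4
# Centered second-difference approximation to the second derivative.
second_diff = (e_k(x0 + h) - 2 * e_k(x0) + e_k(x0 - h)) / h**2
analytic = -4 * math.pi**2 * k**2 * e_k(x0)
```

The two complex numbers agree to within the discretization error of the finite difference, confirming that differentiating twice just multiplies the exponential by -4π²k².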

Now equate the two sides; use the heat equation. Plugging into u sub t equals 1/2 u sub xx: the sum over k of c_k prime of t times e to the 2πikx is equal to 1/2 times that thing, so it's the sum over k of minus 2π²k² times c_k(t) times the complex exponential e to the 2πikx. That's not a hard step.

Okay, all right, how did we do? Equate like terms; equate the coefficients. If I equate the coefficients, something great happens, whose consequences I'll do next time. When I say equate the coefficients, I mean the coefficients of the complex exponential e to the 2πikx: c_k prime of t on one side, and the corresponding coefficient over here, so c_k prime of t is equal to minus 2π²k² times c_k(t).
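In symbols, matching the coefficient of each exponential on the two sides gives, term by term:

```latex
c_k'(t)\, e^{2\pi i k x} \;=\; \tfrac{1}{2}\left(-4\pi^2 k^2\right) c_k(t)\, e^{2\pi i k x}
\quad\Longrightarrow\quad
c_k'(t) \;=\; -2\pi^2 k^2\, c_k(t)
```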

But my friends, you know, that is a simple equation. That is an ordinary differential equation for c_k, and I can solve this initial value problem: c_k(t) is c_k(0), the initial condition, times e to the minus 2π²k²t. Blow me down, all right.
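Putting the pieces together, a single mode c_k(0) e^(-2π²k²t) e^(2πikx) should then satisfy the heat equation u_t = 1/2 u_xx. Here is a minimal numerical sketch using finite differences; the mode number, test point, and step size are illustrative choices, not anything from the lecture.

```python
import math
import cmath

# One Fourier mode with the solved coefficient c_k(t) = e^{-2 pi^2 k^2 t}
# (taking c_k(0) = 1 for simplicity); check numerically that it satisfies
# the heat equation u_t = (1/2) u_xx.
k = 1

def u(x, t):
    return cmath.exp(-2 * math.pi**2 * k**2 * t) * cmath.exp(2j * math.pi * k * x)

x0, t0, h = 0.3, 0.05, 1e-4
# Centered difference in t for u_t, centered second difference in x for u_xx.
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
```

Within discretization error, `u_t` equals `0.5 * u_xx`: the time decay rate -2π²k² exactly balances half the spatial factor -4π²k², which is precisely how the coefficient equation was solved.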

I've found my coefficients; pretty cool. All right, extremely cool. Very cool. What I haven't done is bring back in the initial distribution of temperature, and next time I want to manipulate this solution a little bit more, and something absolutely magical happens. Wait for it on Wednesday. Okay.

[End of Audio]

Duration: 54 minutes