The Fourier Transform and Its Applications - Lecture 23

Instructor (Brad Osgood):Are we on? I can't see. It looks kinda dark. I don't know. It looks a little dim there.

All right. So today, assuming this is working — or even if it's not working — we are going to spend a little bit of time over the next couple of days talking about linear systems, particularly linear time-invariant systems, because those are the ones that are most naturally associated with the Fourier Transform, and some aspects of them can be understood and analyzed in terms of the Fourier Transform.

But before doing that, we wanna talk about the general setup — the idea of linear systems in general — and talk about some of their general properties, fascinating as they are.

Now it's a pretty limited treatment that we're gonna do of this. So I would say this is more an appreciation rather than anything like a detailed study. It's a vast field, and in many ways, I think it was one of the defining fields of the 20th century. I think I even made this bold statement in the notes: the 20th century was, in a lot of ways, a century of linearity.

The 21st century — I say this as a sweeping, bold statement, but I stand by it — may be the century of non-linearity. We don't know yet, but non-linear problems are becoming increasingly more tractable because of computational techniques. One of the reasons why linear problems were studied so extensively and were so useful is because a lot could be done theoretically even if you couldn't compute. And then, of course, later on when the computational power was there, they were able to be exploited even more. What I wanna get to is the connection between the Fourier Transform and linear systems — we definitely wanna see how the Fourier Transform applies to linear systems, again in a fairly limited way.

And here, the main ideas are the impulse response and the transfer function. These are the major topics that I wanna be sure that we hit. The impulse response and the transfer function — these are terms, actually, we've used already, but now we're gonna see them a little bit more systematically and a little bit more generally. And, again, they're probably terms and probably ideas that you've run across before if you've had some of this material earlier in signals and systems. And the other thing — again, somewhat limited and maybe even to a lesser extent — is to talk a little bit about complex exponentials appearing as eigenfunctions of certain linear time-invariant systems. So we'll put that up here.

Complex exponentials as eigenfunctions. I'll explain the term later if you haven't heard it, although I suspect many of you have. Eigenfunctions of linear time-invariant systems. All right. So this is, I guess, sort of a preview of the main things that we wanna discuss. But before getting to that, I do have to do a certain amount of background work and frame things in somewhat general terms. So let's get the basic definitions in the picture. First, a basic definition of a linear system. A linear system, for us, is a method of associating an output to an input that satisfies the principle of superposition. All right? So it's a very general concept. It's a mapping from inputs to outputs. In other words, it's a function. But this is usually the engineering terminology that's associated with it. Outputs that satisfy the principle of superposition. And you know what that is, but I will write it down. Super — not supervision. Superposition. I get it. I'll get it. Superposition.

So you think of the linear system L as a black box. It takes an input V to an output W, and to say this satisfies the principle of superposition says that if you add the inputs, then the outputs also add; if you scale the inputs, then the outputs also scale. So it says that L of V1 plus V2 — whatever their nature — is L of V1 plus L of V2, and it says that L of alpha times V is alpha times L of V. By the way, it's sort of a common convention here, when you're dealing with linear systems, not to write the parentheses, because it's supposed to be reminiscent of matrix multiplication, where you don't always write the parentheses when you're multiplying by a matrix. As a matter of fact, I'll have more to say about that in just a little bit. All right?

That's the definition of linearity. To say that it's a system is just to say that it's a mapping from inputs to outputs. Again, that doesn't really say very much. Everything we study is sort of a mapping from inputs to outputs, but this extra condition of linearity is what makes it interesting. And it took a long time before this simple principle was isolated for special attention, but it turned out to be extremely valuable. I mean, nature provides you with many varied phenomena, and to make some progress, you have to somehow isolate what's common to the various phenomena. And in the applications of mathematics, the way it works is that you wanna turn that around — take what you observe in common and turn that into a definition. So the definition that came from studying many different phenomena in many different contexts was this simple notion of linearity, or superposition — same thing. All right? So it really is quite striking how fundamental and important these simple conditions turned out to be in so many different contexts. And that really, I'd say, almost defines a lot of the practical applications of mathematics in the 20th century — just isolating systems that satisfy this sort of property.

Now, there are additional properties that it might satisfy, and we'll talk about some of them. But the basic property of superposition is the one that really started the whole ball rolling. Okay? I should say, as an extension of this: if you have finite sums, then I can take L applied to, say, the sum from I equals one to N of alpha I times VI — that's a linear combination of the inputs — and what linearity says is that a linear combination of the inputs goes to the same linear combination of the outputs. This is the sum from I equals one to N of alpha I times L of VI. Now it's also true, in most cases, that this extends to infinite sums. But any time you deal with infinite sums, you have to deal with questions of convergence and extra properties of the operators — we are not gonna make a big deal out of this. I won't tell you anything that's not true, I hope. But again, I'm not gonna always state the assumptions carefully. You can extend these ideas to infinite sums and even to integrals, which we will talk about a little bit. But generally, that requires additional assumptions on the operator L, which, again, I'm not gonna make — and usually the assumptions are fairly mild, all right? They are gonna be satisfied in any real applications. The basic assumption that you often make, again without really talking about it in detail, is some sort of continuity property. Any time limiting operations are involved — we've seen this in a number of instances — there has to be some extra assumption on the operations you're working with. And it's generally some sort of continuity assumption that allows you to take limits. So you assume some kind of continuity.

But the problem is defining what continuity means and so on, so I'm not gonna get into it. And again, it's not gonna be an issue for us, but I thought I ought to mention it, to be honest. Now, what's a basic example? Any time you learn a new concept — or even revisit a familiar concept — you should have examples in mind. What is an example of a linear system? There is actually only one example of a linear system. They're all the same. It is the relationship of direct proportionality. The outputs are directly proportional to the inputs: L of V is equal to alpha times V. All right? That is certainly linear. It certainly satisfies the properties of superposition. L of V1 plus V2 is alpha times V1 plus V2. So that's alpha times V1 plus alpha times V2. So that's L of V1 plus L of V2. And likewise, if I scale — I already called the constant alpha there, so let me use A for the scale factor — L of A times V is equal to A times L of V, for the same reason. All right? The relationship of direct proportionality is the prototype — the archetype — for a linear system. In fact, it's the only example. All right? All linear systems essentially can be understood in terms of direct proportionality. That's one of the things that I wanna convince you of. That's one of the things that I wanna try to explain. It's the only example.

And that's sort of a bold statement, but I stand by it. Maybe a little shakily, but I stand by it. All linear systems [inaudible] back somehow to the operation of direct proportionality. All right? So don't lose sight of that. Now, it can look very general, all right? It can look very general. Direct proportionality is also known as multiplication. So any system that is given by multiplication is a linear system. All right? So, a little bit more generally, it's multiplication. That is to say, you can think of multiplying by a constant, but if your signal is not a constant but a function of T or a function of X, then I can multiply it by another function. So L of V of T can be alpha of T times V of T. Okay? The constant of proportionality doesn't have to be constant. It can also depend on T. But nevertheless, the relationship is one of direct proportionality. And for the same simple reason as up here, that defines a linear system.

And there are many practical examples of that — a switch! A switch can be modeled as a linear system. If it's on for a certain duration of time, then that's multiplication by, say, a rectangle function of a certain duration. So, e.g., a switch: L of V of T is, say, a rectangle function of duration A times V of T. All right? So you switch on for duration A. Then you switch off. Now you don't necessarily think of flipping the switch as a linear operation, but it is. Why? Because it's multiplication. Somebody could say to you, "Verify that the act of switching on a light bulb is a linear operation." But the fact is that it's modeled mathematically by multiplication by a function, which is one for a certain period of time and zero for the rest of the time. And as multiplication, it is just expressing direct proportionality, and that's always linear.
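
A minimal numerical sketch of that point, assuming NumPy; the function name switch, the test signals, and the duration are made up for illustration. It just checks that multiplication by a rect function obeys superposition.

import numpy as np

# Model the switch: multiplication by a rectangle function that is "on"
# for 0 <= t < a (the exact convention doesn't matter for linearity).
def switch(v, t, a):
    rect = np.where((t >= 0) & (t < a), 1.0, 0.0)
    return rect * v

t = np.linspace(-2, 2, 1001)
v1 = np.sin(2 * np.pi * t)     # arbitrary test inputs
v2 = np.exp(-t**2)
alpha = 3.7

# Superposition: adding inputs adds outputs, scaling inputs scales outputs,
# simply because the system is pointwise multiplication by a fixed function.
assert np.allclose(switch(v1 + v2, t, 1.0), switch(v1, t, 1.0) + switch(v2, t, 1.0))
assert np.allclose(switch(alpha * v1, t, 1.0), alpha * switch(v1, t, 1.0))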

Sampling is a linear operation. Sampling at a certain rate: L of V of T could be a Shah function of spacing P times V of T [inaudible]. All right? It's multiplication. It's direct proportion. It's linear. So again, somebody could say to you, "Say, is it true that the sample of the sum of two functions is the sum of the sampled functions?" And you might be puzzled by that question, or it might take you a while to sort it out. You might try to show something — I don't know what you might try to show. You might try to show that sort of directly, but in fact, yes, it must be true that the sample of the sum of two functions is the sum of the sampled functions. All right? But that's true because sampling — the act of sampling — is a linear operation, a linear system. Okay? It's multiplication. It's direct proportion.
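
Again a rough numerical sketch, assuming NumPy; the grid, spacing, and signals are invented for illustration. Sampling is modeled by multiplying by a comb (Shah) that is 1 at the sample instants and 0 elsewhere, and the sample of the sum equals the sum of the samples.

import numpy as np

dt = 0.01
t = np.arange(0, 5, dt)

def sample(v, p):
    # multiply by a comb of spacing p: 1 at the sample instants, 0 elsewhere
    comb = np.isclose(np.mod(t, p), 0, atol=dt / 2).astype(float)
    return comb * v

v1 = np.cos(2 * np.pi * 0.7 * t)
v2 = t**2

# sample(v1 + v2) == sample(v1) + sample(v2): sampling is multiplication, hence linear
assert np.allclose(sample(v1 + v2, 0.5), sample(v1, 0.5) + sample(v2, 0.5))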

Now, a slight generalization of direct proportion is direct proportion plus adding up the results. That is to say, matrix multiplication. I should say a slight, but important, generalization: direct proportion — two linear operations — plus adding. And what I have in mind here is matrix multiplication. So, i.e., matrix multiplication. All right? Say A is — and let me see if I can even do this more generally — an N by M matrix, all right? So it's N rows by M columns, and V is an M-vector, so it's a column vector with M rows. Then A times V is an N-vector, and the operation of multiplying the matrix A by the column vector V is a linear operation. It is a combination exactly of direct proportion, or multiplication, with adding. So what is it? If you write A as the matrix, say, AIJ — so it's indexed by rows and columns — then the Ith entry of A times V is a sum over J, J equals one to M, of AIJ times VJ — isn't that right? Probably not. Let's see. Do I have an M? I go across the row — an N by M — I hate this stuff. Man, I can never get this right.

N by M matrix, so yeah. Right, M columns. Right, okay. That's fine. That gives you all the entries. If it does, fine. If it doesn't, then switch M and N. Okay. Each component VJ is multiplied by AIJ — that's direct proportionality — and then they're all added up. And as you know, the basic property of matrix multiplication is that A applied to the sum of two vectors is A of V plus A of W. And A of alpha times V is alpha times A of V. Okay? It's a slight generalization, but it turns out to be, actually, a crucial generalization. And it comes up in all sorts of different applications.
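
Here is a small sketch in NumPy (sizes and data invented) of both points: the entrywise formula — each component multiplied by a matrix entry, then summed — and the superposition property of matrix multiplication.

import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 6                          # an N-by-M matrix acting on M-vectors
A = rng.standard_normal((n, m))
v = rng.standard_normal(m)
w = rng.standard_normal(m)
alpha = 2.5

# Entrywise: (A v)_i = sum over j = 1..M of A[i, j] * v[j]
Av_by_hand = np.array([sum(A[i, j] * v[j] for j in range(m)) for i in range(n)])
assert np.allclose(Av_by_hand, A @ v)

# Superposition: A(v + w) = Av + Aw and A(alpha v) = alpha Av
assert np.allclose(A @ (v + w), A @ v + A @ w)
assert np.allclose(A @ (alpha * v), alpha * (A @ v))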

Those of you who are taking 266 [inaudible] have done nothing but study matrix multiplication. Well, that may be a little bit of an extreme statement, right? So, e.g., EE 263, where you study the linear dynamical system X dot is equal to AX, and you solve that, say with X of zero equal to V as the initial condition. All right? Then that's solved by X of T equals E to the TA times X of zero, which is E to the TA times V. All right? It's a matrix times the fixed vector V that gives you how the system evolves in time. All right? And you wanna be able to compute that, and you wanna be able to study that. And you spend your life doing that — many people do. Now, again, without going into detail — and we'll say a little bit more about this later — the property of linearity is extremely general. There are special cases that are important, some of which I'm sure you've seen. So let me just mention special linear systems — let's just stick with the case of matrix multiplication right now. All right?
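
As a toy sketch of that, assuming NumPy and SciPy; the particular matrix A and initial condition are made up, and scipy.linalg.expm supplies the matrix exponential.

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, -0.1]])         # a lightly damped oscillator, just for illustration
v = np.array([1.0, 0.0])             # initial condition x(0) = v

def x(t):
    # x(t) = exp(tA) x(0): a matrix, depending on t, times the fixed vector v
    return expm(t * A) @ v

assert np.allclose(x(0.0), v)        # the initial condition is reproduced at t = 0
print(x(2.0))                        # the state at a later time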

So special linear systems — with special properties — derive from special properties of the matrix A. Some of the main examples, some of the most important examples, are these. If A is symmetric, then you sometimes call it a self-adjoint system or a symmetric system. To say that A is symmetric is to say that it's equal to its transpose. So if A is symmetric, that's a special type of linear system. As a matter of fact, I'll tell you why that's important in just a second. [Inaudible] transpose is equal to A. Or A can be Hermitian, which is the complex version of this, where the condition is A star is equal to A. So this is the complex case. All right? That is, A star is the conjugate transpose. These are both very important special cases. They come up often enough that, again, it was important to single them out for special study.

Or another possibility — those are, maybe, the two main ones — is that A can be unitary or orthogonal. Unitary means that A times its conjugate transpose — its adjoint — is equal to the identity, or A star times A is equal to the identity. I should say here, I'm talking about square matrices — an N-by-N matrix. So it's square. Okay? Now, a very important problem — and we'll talk about this more when we talk about general linear systems — a very important approach to understanding the properties of linear systems is to understand the eigenvalues and eigenvectors associated with them. I'm saying these things to you fairly quickly because I'm going under the assumption that this is, by and large, review. All right? That you've seen these things in other classes and other contexts.

So you often look for eigenvectors and eigenvalues of the matrix A. All right? And we are going to, likewise, talk about eigenvectors and eigenvalues for general linear systems, and that's where the Fourier Transform comes in. But just to remind you what happens here in this case, just to give you the basic definition: you say V is an eigenvector if A times V is equal to lambda times V for some scalar lambda, where V is non-zero — a non-zero eigenvector. There's some non-zero vector that's transformed into a multiple of itself. So there you really see the relationship with direct proportionality, all right? For an eigenvector, the relationship is exactly direct proportionality. A times V is just a scaled version of V. The output is directly proportional to the input. All right?

Now it may be that you have a whole family of eigenvectors that span the set of all possible inputs — that form a basis for the set of all possible inputs. If you have eigenvectors, say, V1 through VN, with corresponding eigenvalues lambda one through lambda N, that form a basis for all the inputs — all the signals that you're gonna input into the system — then you can analyze the action of A easily. All right? That's because, if they form a basis for all the inputs — if V is any input — then you can write V as some combination, the sum from I equals one to N of alpha I times VI. That's what it means to say that they form a basis. And then A operating on V — by linearity I can pull A inside the sum and have it operate on the individual scaled eigenvectors. So A of V is A of the sum, which is the sum from I equals one to N of A of alpha I times VI. But again, the scalar alpha I comes out by linearity. That's the sum from I equals one to N of alpha I times A times VI. But A just takes VI to a scaled version of itself. So this is the sum from I equals one to N of alpha I times lambda I times VI. The action of A on an arbitrary input is all right here. You see you're getting direct proportionality plus adding. It's very simple to understand. Each component is stretched, and then the whole thing is scaled by whatever initially scaled the inputs. All right? If the inputs are scaled by alpha I, the outputs are also scaled by alpha I. In addition, they're scaled by how much the individual eigenvectors are stretched. Okay?
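
A quick numerical sketch of that calculation, assuming NumPy; the symmetric matrix and the input are random, purely for illustration. The coefficients alpha come from the eigenbasis, and A v is recovered as the sum of alpha_i lambda_i v_i.

import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                    # a symmetric matrix, so an eigenbasis is guaranteed

lams, V = np.linalg.eigh(A)          # columns of V are eigenvectors v_1, ..., v_n

v = rng.standard_normal(5)           # an arbitrary input
alpha = V.T @ v                      # coefficients: v = sum_i alpha_i v_i (V is orthonormal)

# A v = sum_i alpha_i * lambda_i * v_i: each component is just stretched by lambda_i
assert np.allclose(A @ v, V @ (lams * alpha))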

It's a very satisfactory picture and an extremely useful picture. So the question is, for example, when do linear systems have a basis of eigenvectors? When can you do this? And that's when these special properties come in. All right? That's when these special properties come in. I'm not gonna prove this — it's, again, fundamental linear algebra that I assume you've probably seen in some context. But because this is so important, you gotta ask yourself when you can actually do this. And the spectral theorem, in finite dimensions, for matrices, says that if A is a Hermitian operator — or a symmetric operator in the real case — then it has a basis of eigenvectors; you can find a basis of eigenvectors. The spectral theorem says when you can do this. If A is symmetric or, in the complex case, Hermitian, then you can find a basis — actually, an orthonormal basis, even better — of eigenvectors. All right?
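
A sketch of the spectral theorem in action, again with NumPy and a random symmetric matrix: numpy.linalg.eigh returns an orthonormal basis of eigenvectors, and they diagonalize A.

import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                    # real symmetric

lams, V = np.linalg.eigh(A)

assert np.allclose(V.T @ V, np.eye(4))               # the eigenvectors are orthonormal
assert np.allclose(A, V @ np.diag(lams) @ V.T)       # and they diagonalize A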

Now if you're thinking that this looks sort of vaguely familiar — that I'm using similar sorts of words to when we talked about Fourier series, and I talked about complex exponentials forming an orthonormal basis and so on — it's very similar. All right? It's very similar. And the whole idea of diagonalizing the Fourier Transform — finding eigenvectors, or eigenfunctions as you call them in that case, for the Fourier Transform, or how they come up in Fourier series — is exactly the sort of thing that's going on here. Okay? These are simple ideas, right? We started with this idea of superposition — that the sum of the inputs goes to the sum of the outputs and a scaled version of the input goes to a scaled version of the output — and the structure that that entails is really quite breathtaking. It's really quite astounding. All right? Now, there's one other important fact about the finite dimensional case — the case of just finite N-by-N square matrices — that's very important. And all these things have some analogue in the infinite dimensional, continuous case, which is where we're gonna spend most of our time. All right? But this you should know. This should be your touchstone for understanding what happens more generally — what happens in the case of N by N matrices, what happens in the finite dimensional case, what you learned in linear algebra. So, one more property: it's not that matrix multiplication is just a good example of linear systems. All right? It's like this — it's not just that direct proportionality is an example of linear systems. Direct proportionality is the only example of linear systems. All right?

Well, slightly more generally, it's not just that matrix multiplication is a good example — a natural example — of, let's call it, finite dimensional linear systems. All right? So, like, an N-by-N matrix operating on an N-vector, whatever. It's the only example. All right? Now you learned this in linear algebra, although you may not have learned it quite that way. What that means is — I'll say it very mathematically, and then I'll give you an example — any linear operator on a finite dimensional space can be realized as matrix multiplication. And I'm gonna give you a problem to think about. Any linear system — let me put it this way. Any finite dimensional linear system — so a finite number of degrees of freedom, a finite number of ways of describing any input, inputs described by a finite set of vectors, a finite set of signals — can be realized as matrix multiplication. All right? It's not just that it's a good example. It's the only example.

Now let me just take a little poll here. Raise your hand if you saw this in linear algebra — saw this theorem in linear algebra. Not so widespread. All right. Well, you did. If you took a linear algebra class, you probably saw this result. All right? Maybe not phrased quite this way, but this is one of the fundamental results of linear algebra. Now mathematicians are quick to say, "Yes, but we don't like matrices. We would rather stay with the linear operators, per se. Beautiful and pristine as they are, to introduce matrices is an obscene act."

Went out like that. All right? We find it useful to manipulate matrices. We find it useful, often, to have this sort of representation. I'll give you one example you can try out for yourself. So for example — some of you may have done this example — let me look at all polynomials of degree less than or equal to N. All right? That's the space of inputs. Inputs are polynomials of degree less than or equal to N. So N is fixed. All right? So any input looks like A0 plus A1 times X, and so on — a constant term, a coefficient of X, a coefficient of X squared, up to a coefficient of X to the N. I'll allow myself to have some zero coefficients in here, so I don't necessarily have to go all the way up to degree N, but I go up, at most, to X to the N. All right?

So any input looks like that. Now, what is a familiar linear operator on polynomials — one that takes polynomials to polynomials? The derivative. If I differentiate a polynomial, I get a polynomial of lower degree. So take L to be d/dX. All right? That's the linear operator. That's a linear system. And the space of polynomials of degree at most N is a finite dimensional space; it has a finite number of degrees of freedom. The degrees of freedom are exactly described by the N plus one coefficients — there are N plus one of them because you have a constant term, a degree-one term, and so on up to degree N. All right? So L can be described by an N plus one by N plus one matrix. Find it.

Any linear operator on a finite dimensional space can be described as matrix multiplication — can be written in terms of matrix multiplication. All right? There's a linear operator on a finite dimensional space. It doesn't look like a matrix, but it can be described as a matrix. Find the matrix. Yeah. Thank you. And — well, yes, actually, yeah. So I'll leave it to you to think about that. That's right — it actually drops the degree by one. So you can describe it either way; if you do N plus one by N plus one — let me give you a hint — you're gonna have either a row or a column of zeros in there. All right? But in general, if I'm thinking of it just as a map from polynomials of degree at most N to polynomials of degree at most N, it'd be a square matrix. All right? So I'll let you start this out. This is a problem, actually — so let me take a brief poll again. Anybody do this problem in a linear algebra class?
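
One way to do the exercise, sketched in NumPy under the convention that a polynomial of degree at most N is identified with its coefficient vector (a_0, ..., a_N) in the basis 1, x, ..., x^N; the helper name derivative_matrix is mine, not from the notes.

import numpy as np

def derivative_matrix(N):
    # (N+1) x (N+1) matrix representing d/dx on polynomials of degree <= N
    D = np.zeros((N + 1, N + 1))
    for k in range(N):
        D[k, k + 1] = k + 1          # d/dx of x^(k+1) is (k+1) x^k
    return D                          # note the row (and column) of zeros

# p(x) = 2 + 3x + 5x^3, so N = 3 and the coefficient vector is (2, 3, 0, 5)
a = np.array([2.0, 3.0, 0.0, 5.0])
Da = derivative_matrix(3) @ a
# p'(x) = 3 + 15 x^2, i.e. coefficients (3, 0, 15, 0)
assert np.allclose(Da, [3.0, 0.0, 15.0, 0.0])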

Yeah. Okay. You probably hated it then. You may hate it now. And again, it's a sort of scattered minority response out there. All right? But it just shows, again, that it's not just that this is a good idea. It's the only idea. Representing a linear operator — a linear system on a finite dimensional space — as matrix multiplication is not just a clever thing. It's not just a nice example. It's the only example. All right? And in fact, we're gonna see that the same statement, more or less, and for our purposes, holds in the infinite dimensional, continuous case. That's what I wanna get to. I don't think I'll quite get there today, but I'll get a good part of the way. All right?

I wanna see that a very similar statement holds — there is an analogous statement for the infinite dimensional, continuous case — a very satisfactory state of affairs. There is an analogous statement for the infinite dimensional, continuous case. All right? So let's understand that now, first in terms of an example rather than a general statement. The example that generalizes matrix multiplication is integration against a kernel — or, I should say, the operation that generalizes matrix multiplication is integration against a kernel. Something we have seen. Something I will write down now for you. So the operation — the linear system — that generalizes matrix multiplication is the operation of integration against a kernel. That's the phrase that you would use to describe it.

So what is it? What do I have in mind here? Well, again, the inputs this time are gonna be functions. We'll do it over here. All right. So the input, instead of just a column vector, is going to be a function, say V of X. All right?

And the kernel — a fixed kernel for the operator, the thing that defines the operation — is a function of two variables. So the kernel is a function — let's call it K, K for kernel — K of X Y. All right? Integration against the kernel is the operation L of V. It's gonna produce a new function; I'll say that's also a function of the variable X. There's also a little bit of a problem here, like there is in this whole subject, with writing variables, but let me write it. It's gonna be the integral from minus infinity to infinity of K of X Y times V of Y, dY. All right? K is a function of two variables. I integrate K of X Y against V of Y, dY. What remains is a function of X. That, by definition, is the output evaluated at X. All right?

So L of V is another function. What is its value at X? I integrate K of X Y against V of Y, dY. What remains in this integration is a function of X — it depends on X. Okay? That's what I mean by integration against a kernel. The kernel K defines the operation — defines the linear system. And it is linear because integration is linear. The integral of the sum of two functions is the sum of the integrals. The integral of a scalar times a function is the scalar times the integral of the function, and so on. So that's the first thing. I won't write that down, but I will say it: L is a linear system. L is linear. All right? Now, first of all, if you open your mind a little bit, you can really think of this as a sort of infinite dimensional, continuous analogue of matrix multiplication. It's the infinite dimensional, continuous analogue of matrix multiplication.
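
A rough discretized sketch, assuming NumPy: on a grid the integral becomes a Riemann sum, so integration against the kernel is literally a matrix (K sampled on the grid) times a vector, scaled by the grid spacing. The Gaussian kernel and rectangle input here are invented for illustration.

import numpy as np

y = np.linspace(-5, 5, 2001)
dy = y[1] - y[0]
x = y                                         # evaluate the output on the same grid

K = np.exp(-(x[:, None] - y[None, :])**2)     # K(x, y), sampled on the grid
v = np.where(np.abs(y) < 1, 1.0, 0.0)         # an input function
w = np.exp(-y**2)                             # another input function

Lv = K @ v * dy                               # (L v)(x) ~ sum over y of K(x, y) v(y) dy

# and the system is linear, because integration (here, summation) is linear
assert np.allclose(K @ (v + w) * dy, K @ v * dy + K @ w * dy)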

Why? What do I have in mind by a statement like that? Well, what I have in mind is that you think of V as, somehow, an infinite, continuous column vector. All right? You can even make this more precise if you actually use [inaudible] sums, but I don't wanna do that. Let me just write it out like this, as an infinite column vector. All right?

And think of this operation — the integral from minus infinity to infinity of K of X Y times V of Y, dY — what's going on here? So V is like a column vector. K of X Y is like a matrix — a doubly infinite, continuous matrix. X is the index of the row. Y is the index of the column. You are, like, summing across a row of the matrix — that's integrating with respect to Y — times the corresponding column entry V of Y. So Y is like a column index, X is like a row index, and an integral, of course, is like a sum. Okay? This is exactly what's going on. Exactly what's going on.

K of X Y — you're summing across the X row, right? K of X Y1, K of X Y2, and so on, times V of Y1, V of Y2, and so on, and you're adding them all up according to the integral, and you're getting the X component of the output. All right? Now, if it's such a good analogue, are there analogues to the other statements that went along with the finite dimensional case? Well, just as in the finite dimensional case there are special linear systems that are characterized by special properties of the matrix, so too, in the [inaudible] continuous case, there are special systems that are characterized by special properties of the kernel. All right? And although I'm not gonna use them now, I at least wanna mention them, because I wanna continue this analogy between the finite dimensional, discrete case and the infinite dimensional, continuous case.

So special linear systems arise by extra assumptions on the kernel — on K of X Y. All right? So, for example — now, what do you think is the analogue of the symmetric case? For a matrix, it's that the transpose of the matrix is equal to the matrix. So what do you suppose the analogue of the transpose is for a kernel K of X Y? What should the condition be? What should the symmetry condition be?

Yes. Be bold. I'll help you. I won't help you. All right. What should a symmetry condition be that's sort of analogous to a matrix being equal to its transpose? If K of X Y is the analogue of the matrix, where X is the row and Y is the column, how do you get the transpose? You interchange the column and the row.

Student:[Inaudible].

Instructor (Brad Osgood):Pardon me?

Student:Time invariance?

Instructor (Brad Osgood):No, not time invariance. We'll get to that.

Student:[Inaudible].

Instructor (Brad Osgood):Right. I think I heard it there. All right.

Symmetry, or self-adjointness, is the property K of X Y is equal to K of Y X. If the kernel satisfies this property, you say it's a symmetric — or, sometimes, a self-adjoint — linear system. They have special properties. I'm not gonna talk about the properties now; again, I'm just pursuing the analogy between the discrete case and the continuous case. All right? And what's Hermitian symmetry? Hermitian symmetry, in the case of a complex kernel, would be K of X Y equal to K of Y X bar. Okay? Complex conjugate. This is all [inaudible] and so on and so on, and I won't go into that very much now. Now, we have seen many examples of linear systems that are given by integration against a kernel. What is a fundamental example, in this class, of a linear system that is given by integration against a kernel? The Fourier Transform. Good. So, for example, the Fourier Transform: F f of S is the integral from minus infinity to infinity of E to the minus two pi i S T times F of T, dT — that is exactly integration against a kernel. What is the kernel? The kernel K of S T is E to the minus two pi i S T. All right? It fits into that category. It has special properties — many special properties. That's why we have a course on it. Okay? But nonetheless, it fits under the general category of a linear system. And actually, you can check that K of S T is equal to K of T S — it's actually symmetric. All right? If I switch S and T, the kernel doesn't change. So it's a symmetric linear system, and so on. What is another example of an important linear system that can be described by integration against a kernel — another example that we have studied extensively and use every day, almost, on good days?
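
Before going on to the next example, here is a rough numerical sketch, assuming NumPy, of the Fourier Transform as integration against the kernel K(s, t) = e^(-2 pi i s t): the sampled kernel is a symmetric (complex) matrix, and applying it to a Gaussian — which is its own Fourier transform — reproduces the Gaussian. The grid sizes are chosen only so the Riemann sum is accurate.

import numpy as np

t = np.linspace(-6, 6, 1201)
dt = t[1] - t[0]

K = np.exp(-2j * np.pi * np.outer(t, t))      # kernel K(s, t) = e^(-2 pi i s t) on a grid
assert np.allclose(K, K.T)                    # symmetric: K(s, t) = K(t, s)

f = np.exp(-np.pi * t**2)                     # the Gaussian, its own Fourier transform
F = K @ f * dt                                # the transform as integration against the kernel
assert np.allclose(F, f, atol=1e-6)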

Student:[Inaudible].

Instructor (Brad Osgood):Convolution. All right? Fix a function H, all right? Then if I define L of V to be H convolved with V, that is a linear system. Convolution is linear, but what is that in terms of the operator? L of V of X is the integral from minus infinity to infinity of H of X minus Y times V of Y, dY. All right? Convolution is a linear system that falls under the general category of integration against a kernel. It's a special one, actually — and, as it turns out, a very important special case — because the kernel here doesn't depend on X and Y separately. It depends only on their difference. All right? So note: for convolution — that is, for a linear system given by convolution — the kernel depends on X minus Y. The kernel is a function of only one variable, the difference X minus Y between the two variables, not X and Y separately. All right? Now, for reasons which you've probably seen, actually, and which we'll talk about in a little more detail, this particular special case — this property — leads to so-called shift invariance or time invariance. All right?
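
A sketch of that in NumPy, with an invented impulse response and input: build the kernel matrix K(x, y) = h(x - y) on a grid — notice it is constant along diagonals — and check that applying it agrees with a direct discrete convolution.

import numpy as np

x = np.linspace(-5, 5, 501)
dx = x[1] - x[0]

h = np.exp(-x**2)                              # an illustrative impulse response
v = np.where(np.abs(x) < 1, 1.0, 0.0)          # an illustrative input

K = np.exp(-(x[:, None] - x[None, :])**2)      # K[i, j] = h(x_i - x_j): depends only on the difference
Lv_kernel = K @ v * dx                         # integration against the kernel

Lv_direct = np.convolve(h, v, mode="same") * dx    # direct discrete convolution
assert np.allclose(Lv_kernel, Lv_direct, atol=1e-6)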

So in particular, suppose we shift X and Y by the same amount, say some number A. So X goes to X minus A if I delay it by A, and Y goes to Y minus A. And then, of course, X minus Y goes to X minus A, minus, Y minus A, which is X minus Y. The difference is unchanged. All right? So the convolution is unchanged if I shift X and Y. All right? I don't wanna say too much more about it than that, but this is what leads to the so-called shift invariance or time invariance of convolution. That is, this observation leads to the phrase you hear — and we'll talk about this — convolution as a linear shift-invariant, or time-invariant, system. People usually say time invariant, but it's really better to say shift invariant somehow; it's more descriptive. A linear time-invariant system. All right? But we'll get back to that. The fact is that, again, convolution is of the form integration against a kernel, but it's a special kernel because it depends only on the difference of the variables, not on the variables separately. Okay?
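
As a last quick sketch, assuming NumPy and a periodic (circular) model chosen only to sidestep boundary effects: delaying the input to a convolution delays the output by the same amount, which is the shift invariance just described.

import numpy as np

n = 512
x = np.arange(n) * (10.0 / n)
h = np.exp(-((x - 5.0) ** 2))                  # an illustrative impulse response
v = np.where(np.abs(x - 3.0) < 0.5, 1.0, 0.0)  # an illustrative input

def L(u):
    # circular convolution with h, computed via the FFT (periodic toy model)
    return np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(u)))

shift = 37                                      # delay by 37 grid points
assert np.allclose(L(np.roll(v, shift)), np.roll(L(v), shift))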

In general, integration against a kernel is integration against a function of two variables. Now, it's not just that this is a good idea — it's not just that this is a good example of linear systems. I'm not talking about convolution here; I'm talking generally about integrating against a kernel. All right? So the words that I said, like, ten minutes ago, I'm gonna say again, but in this different context. It's not just that integration against a kernel is a good example of linear systems — in this case, continuous, infinite dimensional linear systems — just like it's not just that matrix multiplication is a good example of finite dimensional linear systems. It's the only example. Okay? It's the only example. Any linear system — now, this statement has to be qualified, because there are assumptions you have to make and so on, but that's not the point. The point is that any linear system can be realized somehow as integration against a kernel. Yeah.

Student:[Inaudible] manifest in a matrix operator?

Instructor (Brad Osgood):Oh, that's a good question, and we'll come back to that, actually. It's in the notes. The matrix has special properties —

Student:[Inaudible].

Instructor (Brad Osgood):Circulant, actually. It's a little bit more than Toeplitz. Yeah. RTFN, man. It's in the notes. Okay? We'll come back to that.

All right. For now, don't spoil my drama. All right?

Again, in the finite dimensional case, it's not just that matrix multiplication is a good example. It's the only example. In the infinite dimensional — the continuous — case, it's not just that integration against the kernel is a good example. It's the only example. All right?

Any linear system can be realized as integration against the kernel. All right?

Now on that fantastically provocative statement, I think we will finish for today. And I will show you why this works next time.

[End of Audio]

Duration: 51 minutes