Instructor (Brad Osgood): Hey. Jeez. Man, I just got back and I'm tired. And welcome back, everybody. Let's see if we can get our heads back in the game. It's not so easy, somehow. I'm sure I speak for all of you. Anyway, I hope everybody had a very good holiday, either here or elsewhere, and we can now sprint to the finish. All right.
I want to wrap up the discussion on LTI systems today. There are a lot of topics, a lot of little things to do, a lot of big things to do, and like many things in this class, it goes off in a lot of different directions and is often the subject of very specialized courses. I was going to do a little more material on filters, on digital filters, today, but I decided not to do that. It's discussed in the notes, and of course there's an entire course on digital filters, so you'll have plenty of opportunity to see it. If you don't see it there, you'll see it elsewhere, worked into other courses.
So I thought I'd do some fairly general things: do a couple of sample calculations and talk about the relationship, the connection, between LTI (Linear Time Invariant) systems and the Fourier transform, which is the main sort of important, fundamental, foundational information that I think everybody should know.
So I want to remind you where we finished up last time. Last time we got a pretty satisfactory answer about the general structure of linear systems in terms of the impulse response. So if L is the linear system, I'll label it a linear system, we introduce the impulse response, a function of two variables separately: h(x, y) = L(δ(x − y)), the response to a delta function shifted to y. And then the basic result, which is known in the theory of distributions as the Schwartz kernel theorem, but for us is something you probably heard about when you had your first course in signals and systems, is that the output of the system can be given by integrating the impulse response against the input. All right.
So if w(x) = Lv(x), then you actually get w(x) equals the integral from minus infinity to infinity of h(x, y) v(y) dy. All right. So once again, the output of the system is obtained by taking the input of the system and integrating it against the impulse response. It's a very satisfactory result.
Now, in the special case where we have an LTI system, the integration reduces to convolution. So let me remind you what an LTI system is. You say that L is time invariant, or shift invariant, if the following happens: if w(x) = Lv(x), then w(x − y) = L(v(x − y)). I write it in symbols, but it's easier to say in words: L is time invariant if a delay of the input results in a corresponding delay of the output. All right. So if v is the input and w is the output, and v(x − y) is the delayed form of v, then w(x − y) is the delayed form of w, and the delayed input corresponds to the delayed output. And a system is time invariant, or shift invariant, if and only if the integral is given by convolution. Okay. So in this case, the impulse response for an LTI system is a little bit easier. Let me get it right: h(x) is L applied to the unshifted delta function, h(x) = L(δ(x)), and then by time invariance, h(x − y) = L(δ(x − y)), so that is the impulse response. And the action of the system is given by: the output w(x) is the integral from minus infinity to infinity of h(x − y) v(y) dy. Same form, right, you get the output of the system by integrating against the impulse response, but now the impulse response is of a special form. It doesn't depend on x and y separately; it depends only on the difference. And we recognize this as a convolution, (h ∗ v)(x). And in fact, this characterizes time invariant systems. That is to say, a linear system is time invariant if, and only if, it's given by convolution. All right. That's where we finished up last time.
All right, and it's a very satisfactory state of affairs, as far as the structure of linear systems goes. Any linear system is given by integration against the impulse response, and it is time invariant if, and only if, that integration reduces to a convolution. So it's another indication of how fundamental convolution is in the whole theory, all right, just as an operation.
We're going to see how the Fourier transform comes into this in just a second, because anytime, and this should be one of the great lessons of this class, anytime anybody mentions convolution, bells should go off in your head suggesting that you take the Fourier transform. But wait! We'll do that in just a minute. All of this is in the context of continuous time systems, but I did want to comment, as a matter of fact just write down a simple example, that the same sort of considerations hold when you have discrete systems. All right. Now, any discrete linear system, remember, is multiplication by a matrix. If w = Lv, and I'm thinking of w and v as vectors here, then this is given by multiplication by a matrix. We talked about that before: any linear operator in the discrete, finite-dimensional setting is given by multiplication by a matrix. And the definition of shift invariance, of time invariance, is the same as before, except this time you're shifting a discrete variable instead of a continuous variable. And again, a system is time invariant, or shift invariant, if, and only if, it is given by convolution, in this case vector convolution. So L is LTI if, and only if, w = h ∗ v. Okay?
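To make that discrete statement concrete, here is a minimal sketch in Python with NumPy (my own code, not from the lecture; the helper name `circ_conv` is an assumption) checking the shift-invariance property: for circular convolution, delaying the input just delays the output by the same amount.

```python
import numpy as np

def circ_conv(h, v):
    """Circular (cyclic) convolution of two equal-length vectors:
    w[m] = sum_n h[(m - n) mod N] * v[n]."""
    N = len(h)
    return np.array([sum(h[(m - n) % N] * v[n] for n in range(N))
                     for m in range(N)])

h = np.array([1, 2, 3, 4])   # impulse response
v = np.array([5, 6, 7, 8])   # an arbitrary input

# Time invariance: shifting the input shifts the output.
shift = 1
w = circ_conv(h, v)
w_shifted_input = circ_conv(h, np.roll(v, shift))
print(np.array_equal(w_shifted_input, np.roll(w, shift)))  # True
```

Note that `np.roll` does a cyclic shift, which matches the periodic extension of discrete signals discussed below.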
Now, in this case, here h is the impulse response, v is the input vector and w is the output vector. All right. So again, h = L(δ), the response to the unshifted delta function, and by shift invariance, the response to the delta function shifted to m is h shifted by m. Now, it's interesting, and I just wanted to share this example; I want to do a couple of calculations today so you'll feel sort of comfortable with how these things work out. The matrix A that realizes the linear system has a special form in the case of a time invariant system. It's cute, and actually it's very important in a lot of numerical calculations. So again, the operator is given by matrix multiplication. If we write the system as a matrix multiplication, say w = Av, where A is a matrix, then A has a special form for time invariant systems. Rather than try to give you the general statement here, let me just do an example so you see how it works.
All right. So let's take, e.g., a four by four system. I'm going to take h to be the vector (1, 2, 3, 4). All right, just to take a random example that I happened to work out in detail before I got here so I wouldn't make any mistakes. So if w = Av, which is also given by h convolved with v, the question is: what is A? I'm telling you the system is given by convolution, h ∗ v, where h is this vector. So even when your system's given by matrix multiplication, the question is, what is the matrix? Is it clear what I'm asking here? Well, how do you find the matrix A? How do you find any matrix? You have to find the images of the basis vectors. The columns of A are given by A applied to the basis vectors. The first basis vector is what we're calling, in fancy language, delta-naught: δ₀ = (1, 0, 0, 0), if I use the language of delta functions instead of the language of linear algebra. That gives the first column. The second column is A δ₁, where δ₁ = (0, 1, 0, 0). The next column is A δ₂, where δ₂ = (0, 0, 1, 0). And remember, there's always this issue when you're working in the context of discrete linear systems, the DFT and so on, that the index usually runs from zero to N minus one instead of one to N. So δ₀ has a one in the zero slot, and the slots are indexed 0, 1, 2, 3, and so on. And the final column is A δ₃, where δ₃ is the last basis vector, (0, 0, 0, 1).
All right, so how do I compute all these things? Well, I compute them by convolution, because by definition the system is given to you as convolution with the vector h. So A δ₀ is h convolved with δ₀. Now, what is h convolved with δ₀? Wait! Don't tell me, I know: it's h. Convolving with the unshifted delta function doesn't do anything to the vector. So that's (1, 2, 3, 4); that's the first column. Okay. What about A δ₁? It's h convolved with δ₁. Now, what do you get if you convolve h with the shifted delta function? It's a shifted h. (h ∗ δ₁) at the index m is h(m − 1). Just like the continuous case, all right. So what is that? Well, now here's where you have to say something a little extra. If h is the vector (1, 2, 3, 4), what is h shifted by one? Now you have to use the fact that h is assumed to be periodic. Any time convolution comes into the picture, and we haven't brought the DFT in yet, although we will, you always have to assume that your discrete signals are extended to be periodic, so that it makes sense to consider h for values other than the indices, remember they're indexed 0, 1, 2, 3. It makes sense to consider h defined for all integers; you just keep repeating the pattern. So what is h convolved with δ₁ as a vector? Let me look at my notes to make sure. It's shifted by one: (4, 1, 2, 3). All right, make sure you see this, okay?
Again, you have to assume that h is extended to be periodic, and it's shifted down by one. So if it's shifted down by one, the four goes up top. Or you can think about it this way: the zero component here is h(−1). Right? δ₁ convolved with h, at zero, is h(−1). But h(−1) is the same thing as h(3) because of the periodicity, and h(3) is the third component, remember, we're indexing 0, 1, 2, 3, so that's four, and so on. Okay? What about the rest of them? Now you see what the pattern is. What is A δ₂? That's h convolved with δ₂, so that's h shifted by two, or this thing shifted one more. So what would this be? I ask. Pardon me?
Instructor (Brad Osgood): Be bold.
Instructor (Brad Osgood): Thank you! All right. Shift it down again: (3, 4, 1, 2). And finally, A δ₃ is h convolved with δ₃; that's just h shifted by three, and that is (2, 3, 4, 1), right? Yeah. All right.
Now, again, those are the four columns of the matrix A. So what is the matrix A? What is the matrix A? Or simply: what is the Matrix, Neo? The first column of A is (1, 2, 3, 4). The second column is (4, 1, 2, 3). The third column is (3, 4, 1, 2); it's a four by four matrix. And the fourth column is (2, 3, 4, 1). All right.
So again, you can check that this is a different description of the system. The system is given by convolution, but it's also given by matrix multiplication, w = Av. That is, multiplying out Av is the same thing as convolving v with h.
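As a sanity check on this example, here is a short sketch (my own code, not the lecture's; `circ_conv` is an assumed helper) that builds A column by column, by convolving h with the shifted deltas exactly as above, and confirms that multiplying by A agrees with convolving with h.

```python
import numpy as np

h = np.array([1, 2, 3, 4])
N = len(h)

def circ_conv(h, v):
    # circular convolution: w[m] = sum_n h[(m - n) mod N] * v[n]
    return np.array([sum(h[(m - n) % N] * v[n] for n in range(N))
                     for m in range(N)])

# Columns of A are the responses to the shifted deltas delta_0, ..., delta_3.
A = np.column_stack([circ_conv(h, np.roll(np.eye(N, dtype=int)[0], k))
                     for k in range(N)])
print(A)
# [[1 4 3 2]
#  [2 1 4 3]
#  [3 2 1 4]
#  [4 3 2 1]]

# Matrix multiplication and convolution agree on any input.
v = np.array([5, 6, 7, 8])
print(np.array_equal(A @ v, circ_conv(h, v)))  # True
```

The constant diagonals of the printed matrix are the circulant structure discussed next.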
Now, this is a kind of cool matrix. If you look at this matrix, this is what's called a circulant matrix in the biz. I think I actually mentioned this once before; someone asked a little bit about it. A circulant matrix is a special case of more general matrices called Toeplitz matrices. They come up in a lot of different applications in discrete systems, all right. A circulant matrix is constant on the diagonals. The columns are periodic, as they are in this case, so the pattern just repeats, cycles around, and each column is obtained from the previous column by a shift. Consequently, it's constant on the diagonals: all ones on the main diagonal, fours here, threes here, twos there, and so on. Okay. And it's called circulant. That sort of property, being constant on the diagonals, and I'm a little hesitant to give you the general definition here because I don't want to get it wrong, but there's standard terminology for the standard matrices that come up a lot in various applications. Typically it's Toeplitz matrices and circulant matrices, and circulant matrices are like Toeplitz matrices except they have the additional property that the columns are periodic, each one obtained from the previous one by a shift. Okay. Bob Gray, in our department, has a whole book on, well, he has whole books on a lot of things, actually, but there's a whole book in particular on Toeplitz matrices and their applications. So they come up a lot. We'll come back to this matrix a little bit later. It's kind of cool. And it's the sort of calculation you should be able to do: you should be able to take this result in the continuous case, bring it over to the discrete case, realize what form it takes, and realize that it's not so different from what you're doing in the continuous case.
We set up the formalism this way just so that the symbols and everything else would look, as much as possible, like the continuous case. It's nice. Okay.
Now, at long last, let's bring in the Fourier transform for LTI systems. Okay. LTI systems give you convolution; bells should go off in your head, buzzers should go off in your pocket, who knows what else should happen, but whatever happens, you should think of the Fourier transform. So with an LTI system we have convolution: w = h ∗ v, where v is the input, w is the output, and h is the impulse response; h is fixed and v varies over different inputs. And if you take the Fourier transform you get, via the convolution theorem, that the Fourier transform of w is the Fourier transform of h times the Fourier transform of v, or, as it is universally written in uppercase letters, W(s) = H(s) V(s). And in this context, in terminology I'm sure you have heard, H(s) is called the transfer function. Little h is called the impulse response; capital H is called the transfer function. You have to be a little careful here about whether you call the impulse response h(x) or h(x − y), but it's not important in this case.
Anyway, the standard terminology, which I'm sure you have heard, and which I think we even used back when we first talked about filters and convolution, is that capital H is called the transfer function. It's also sometimes called the system function. Any other terms for this anybody knows, just out of curiosity? Either you call it the system function or the transfer function. Now, I want to point out something here, again in the spirit of the beautiful structure of linear systems. When we started talking about linear systems, I made the bold statement that the most basic example of a linear system is the relationship of direct proportionality: the output is directly proportional to the input. And I said, boldly, that any linear system is somehow a generalization of that, that you can somehow trace direct proportionality through any linear system. And for LTI systems, this is staring you in the face. Because what this says is that for an LTI system, in the frequency domain, the system is exactly described by multiplication, exactly described by direct proportionality. In the time domain it's a little more complicated; in the time domain it involves convolution. But in the frequency domain, the relationship between the input and the output really is given by direct proportion, the most basic relationship that underlies linearity. All right.
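Here is the convolution theorem in the discrete setting as a quick numerical sketch (my own example values, not from the lecture): the DFT of a circular convolution equals the componentwise product of the DFTs, i.e. in the frequency domain the system really is just multiplication.

```python
import numpy as np

h = np.array([1.0, 2.0, 3.0, 4.0])   # impulse response
v = np.array([5.0, 6.0, 7.0, 8.0])   # input
N = len(h)

# Circular convolution in the time domain...
w = np.array([sum(h[(m - n) % N] * v[n] for n in range(N))
              for m in range(N)])

# ...is plain multiplication in the frequency domain: W = H * V.
W = np.fft.fft(w)
H = np.fft.fft(h)   # the (discrete) transfer function
V = np.fft.fft(v)
print(np.allclose(W, H * V))  # True
```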
And of course, part of the point of this course is that the time domain and the frequency domain are in some sense equivalent. You can pass back and forth between the two. They're different pictures of the same thing, different views of the same phenomena, and you can use one to study the other. So again, I just want to point this out because I think it's another example of how beautifully unified and coherent this subject is: when you talk about linearity and time invariance and convolution, this whole idea of direct proportion comes out very strongly. It's not just almost there, it's there! It's right in front of you. Now, another important consequence of bringing the Fourier transform to LTI systems is a fact that would not be obvious if you didn't use Fourier transforms: complex exponentials are eigenfunctions for all LTI systems. Let me write that down and then I'll explain what I mean. This is the last general fact that we're going to talk about for linear systems, and LTI systems in particular: complex exponentials are eigenfunctions. Now, this is actually an extremely important result, but we are not going to take it anywhere, I've got to say, because again, it goes off in a lot of different directions, and you see it more in special applications. I would be surprised if you didn't see it in special applications. But for us, I just want to make sure you understand where it comes from, and why, and how it happens, what the basic definition is. We're not going to do any particular application with this, because to do one application is to do dozens of applications, probably, and you'll probably see these more in other courses. It comes up in quantum mechanics, it comes up in a lot of various aspects of signal processing, but I just want to make sure you see the basic fact.
So here's what I mean. Lv is given by h ∗ v; it's a time invariant system. Let's call the output w, so w is the output, v is the input, and if I take the Fourier transform I get W(s) = H(s) V(s). Now, what happens if I input a complex exponential into the system? Input v(x) = e^(2πiνx), for any ν. Okay. And the question is, what is Lv(x)? I think sometimes people call this the frequency response, because you're inputting a pure frequency, a pure harmonic, but I tend not to use that term.
Anyway, what is it? Well, first of all, what is the Fourier transform of e^(2πiνx)? Let's work in the frequency domain, that is, work with the relationship between the Fourier transform of the output, the Fourier transform of the input, and the transfer function. The Fourier transform of e^(2πiνx) is δ(s − ν); it gives you the shifted delta function. And so the output in the frequency domain is W(s) = H(s) δ(s − ν). But now there's the fundamental sampling property of the delta function: H(s) δ(s − ν) = H(ν) δ(s − ν), a constant times the shifted delta. All right. Now take the inverse Fourier transform on both sides and go back to the time domain. If I go back to the time domain, then H(ν) just comes out, it's kind of along for the ride because it's a constant, and the inverse Fourier transform of the shifted delta function is a complex exponential again, so I get w(x) = H(ν) e^(2πiνx). In other words, remember, the input was e^(2πiνx), and the output is a multiple of that, namely the value of the transfer function at ν. So, i.e., L(e^(2πiνx)) = H(ν) e^(2πiνx). That says exactly that the complex exponential e^(2πiνx) is an eigenfunction with eigenvalue H(ν). That's exactly what that statement means. This says exactly that e^(2πiνx) is an eigenfunction of any LTI system. Now, it doesn't always have the same eigenvalue, because that depends on the particular transfer function. The eigenvalues for a given LTI system are the values of the transfer function at the frequency ν.
That is, the eigenvalue is H(ν), the value of the transfer function at ν. All right. That's a fundamental fact. And again, some people interpret this, and I think I probably even put this in the notes, as a further indication of how natural and important convolution and time invariance are. The fact that complex exponentials are eigenfunctions is, I don't know, either a further statement of how important shift invariant, linear time invariant systems are, or of how important complex exponentials are. The fact that they enter into the theory of linear systems in such an important way, I mean, it's very important. If you're analyzing a linear system, as I'm sure you've seen in various classes, you want to know the possible eigenvalues and eigenvectors, and to know that there are eigenvalues and eigenvectors. I say eigenfunction here, instead of eigenvector, because I'm thinking of functions instead of vectors, that is, functions of a continuous variable. We'll do a discrete version shortly. But it's the same idea, the same terminology from linear algebra, just applied to the continuous case. Okay.
So any LTI system has the complex exponentials as, actually, as it turns out, a basis of eigenfunctions, and that turns out to be very important. It allows you to diagonalize the operators associated with LTI systems and understand how they operate in a much more natural way. But again, as I say, to do one application is to do a lot of applications, so I would rather let that wait for other occasions. But it's also important to realize something. We often talk about how you can work with complex exponentials even when you're really thinking about real signals: you take the real part, and properties of complex exponentials become properties of sines and cosines, and so on. But this is a case where that's not true. That is, it's the complex exponentials that are eigenfunctions of linear time invariant systems, not the sine and cosine separately. So let me show you that. It's not true without additional assumptions. There are some cases where it's true, but generally it's not true that sine and cosine are themselves eigenfunctions of an LTI system. Watch what happens here. Let's take cosine, for example. So, e.g., take v(x) = cos(2πνx). And the question is, what is Lv(x)? If it's an eigenfunction, Lv(x) has to be a multiple of v(x). All right. Well, is it or not? We can calculate this by expressing the cosine in terms of complex exponentials. So L(cos(2πνx)) = L(½ (e^(2πiνx) + e^(−2πiνx))). L applies to the sum: the ½ comes out, and L is linear, so L of the sum of the complex exponentials is the sum of L applied to the complex exponentials. And we know what happens in that case. So this is ½ (L(e^(2πiνx)) + L(e^(−2πiνx))).
All right. And the complex exponentials are eigenfunctions. So this is ½ (H(ν) e^(2πiνx) + H(−ν) e^(−2πiνx)); since e^(−2πiνx) is e^(2πi(−ν)x), its eigenvalue is H(−ν). And now you're stuck. All right. You are stuck, because unless you have additional assumptions, you can't combine these terms. You have H(ν) and H(−ν), and without further assumptions they don't have anything to do with each other. Okay. So you are now stuck: without further assumptions, it's not an eigenfunction.
Now, there are assumptions often made. Actually, one of the most natural assumptions, which almost gets you there, but not quite, is to assume that h, the impulse response, is real, as it will be in most cases. If h is real, then what symmetry does the Fourier transform have? If h is real, then H has the symmetry H(−ν) equals the conjugate of H(ν). That's the basic symmetry of the Fourier transform of a real signal. Okay. So let's go with that assumption and plug this in. Then L(cos(2πνx)) = ½ (H(ν) e^(2πiνx) + conj(H(ν)) conj(e^(2πiνx))), where I've used that e^(−2πiνx) is the conjugate of e^(2πiνx). So that is the real part of H(ν) e^(2πiνx). Now, you're still not quite there, right? You're still stuck. If H(ν) were real, then you'd be okay, but I'm not assuming H(ν) is real, just that H satisfies the symmetry property. You're still stuck in the sense that it's not an eigenfunction; it's just not. So don't say that it is. Don't make me mad. All right? There is a little more you can do, though, and it's the common thing to do: write H in its polar form, H(ν) = |H(ν)| e^(iφ), the magnitude times the phase; you've got to name the phase before you can use it. Then H(ν) e^(2πiνx) = |H(ν)| e^(iφ) e^(2πiνx) = |H(ν)| e^(i(2πνx + φ)). Okay.
So, here we go: e^(i(2πνx + φ)). Okay. So the real part of this does give you a cosine, but it's a phase shifted cosine. The real part of H(ν) e^(2πiνx) is |H(ν)| cos(2πνx + φ); the |H(ν)| is already real, and the real part of the exponential is a cosine with a phase shift. Okay. And phase shifts always throw people off. So it's still not an eigenfunction, right, but it's as close as you're going to get. This says L(cos(2πνx)) = |H(ν)| cos(2πνx + φ). All right, the cosine is not an eigenfunction, but it's close. Okay? Yeah.
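You can watch both halves of this numerically in a discrete sketch (my own example, not from the lecture; h = (1, 2, 3, 4) is real, so the symmetry assumption holds): the cosine input does not come back as a multiple of itself, but it does come back as a scaled, phase-shifted cosine.

```python
import numpy as np

h = np.array([1.0, 2.0, 3.0, 4.0])   # real impulse response
N = len(h)
k = 1
n = np.arange(N)
v = np.cos(2 * np.pi * k * n / N)    # discrete cosine input, approx [1, 0, -1, 0]

# Circular convolution of h with the cosine.
w = np.array([sum(h[(m - j) % N] * v[j] for j in range(N))
              for m in range(N)])
print(w)  # approx [-2, -2, 2, 2]: not a multiple of v, so cos is no eigenvector

# But for real h the output is a scaled, phase-shifted cosine:
Hk = np.fft.fft(h)[k]                # H(k) = -2 + 2j
predicted = np.abs(Hk) * np.cos(2 * np.pi * k * n / N + np.angle(Hk))
print(np.allclose(w, predicted))     # True
```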
Instructor (Brad Osgood): Well, then you're in business. The more assumptions you can put on this, the better, because then you know you'll be okay. All right?
There are extra assumptions you can put on here, but all I'm saying is that if you don't put those extra assumptions on, it's just not the case that the sines and cosines are separately eigenfunctions for LTI systems. Only the complex exponentials are. That's really interesting. I mean, it's almost, sort of, the fundamental difference between the complex exponential world and the real world: for any LTI system, the complex exponentials are eigenfunctions, but the real and imaginary parts are not eigenfunctions without extra assumptions. Okay.
All right, let's finish up. Let's do the discrete case, the discrete version of this. Again, the same considerations hold. The discrete case is w = Lv = h ∗ v, but everything here now is a discrete signal. Again, W(m) = H(m) V(m), the Fourier transform of h times the transform of v, componentwise, where everything is discrete. Okay. And again, discrete complex exponentials are eigenfunctions. All right. Maybe in this case I should call them eigenvectors, I suppose. All right.
So, for example, what if I input v = ω^k, for any k? That's the discrete vector complex exponential that we've used many times. Well, the Fourier transform of ω^k is, if you recall, and I don't recall so I have to write it down, N δ_k. All right. There's that extra damn factor of N in there that comes in. What are you going to do? It just is; it's a pain in the neck, but there it is.
So again, what is Lv? To find L(ω^k), look in the frequency domain, same argument as before. And I get W = H times N δ_k, and that's equal to H(k) times N δ_k, because of the sampling property of the discrete delta function. Same property, same damn thing. It's the same damn thing over and over again. It's the same argument.
So back in the time domain, it's the same thing: w = H(k) ω^k. Okay. That is to say, i.e., L(ω^k) = H(k) ω^k: applying L to the discrete complex exponential multiplies it by H(k).
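A minimal numerical check of this eigenvector fact (my own sketch; I'm using NumPy's FFT conventions, under which `np.fft.fft(h)` plays the role of the transfer function H):

```python
import numpy as np

h = np.array([1.0, 2.0, 3.0, 4.0])
N = len(h)
H = np.fft.fft(h)                # values of the transfer function H[0..3]
omega = np.exp(2j * np.pi / N)

def circ_conv(h, v):
    # circular convolution: w[m] = sum_n h[(m - n) mod N] v[n]
    return np.array([sum(h[(m - n) % N] * v[n] for n in range(N))
                     for m in range(N)])

# Each discrete complex exponential omega^k is an eigenvector of the
# system, with eigenvalue H[k].
checks = [np.allclose(circ_conv(h, omega ** (k * np.arange(N))),
                      H[k] * omega ** (k * np.arange(N)))
          for k in range(N)]
print(all(checks))  # True
```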
Now, an interesting thing happens here, because whereas in the continuous case you had an infinite family of complex exponentials, e^(2πiνx) where ν can be anything, a continuous variable between minus infinity and plus infinity, here the powers cycle: ω^0, ω^1, and so on up to ω^(N−1), and then they start repeating. All right. So what you have, and this is maybe a difference, a special feature of the discrete case, is that 1, ω, ω², up to ω^(N−1) form a basis of eigenvectors for any linear time invariant system. They're independent, they're orthogonal, they're each eigenvectors, they actually form a basis, and almost an orthonormal basis. They're not quite orthonormal, right, because the lengths are √N, not one. Damn. So they form a basis of eigenvectors for any LTI system. All right. Now, I want to make sure you understand, again, what this says and what it doesn't say. For any LTI system, these discrete complex exponentials, 1 (the vector with all ones in it), ω, ω², up to ω^(N−1), are eigenvectors, and they form a basis of eigenvectors. But the eigenvalues are different; the eigenvalues depend on the system, because the eigenvalues are the values of the transfer function at the index k, and that's going to be different for different systems, because H is going to be different for different systems. To define a discrete LTI system is to give the h, and so that gives the eigenvalues in these terms. But the vectors themselves, 1, ω, ω², up to ω^(N−1), are eigenvectors for any LTI system. Another way of putting this is: any LTI system, a system given by convolution in the discrete case, is diagonalized by the complex exponentials. All right. This is another important property; makes for good quals questions. All right.
It's an important property of the discrete complex exponentials that they form a basis of eigenvectors for any LTI system.
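As a quick check on that claim, here is a minimal sketch in Python (not from the lecture; the function names `cyclic_convolve`, `omega_k`, and `transfer` are mine) showing that each discrete complex exponential omega to the k is an eigenvector of cyclic convolution with an arbitrary h, with eigenvalue H(k), the DFT of h at index k:

```python
import cmath

N = 4
h = [1, 2, 3, 4]  # the impulse response from the lecture's example

def cyclic_convolve(h, v):
    """Cyclic convolution: w[n] = sum over m of h[m] * v[(n - m) mod N]."""
    return [sum(h[m] * v[(n - m) % N] for m in range(N)) for n in range(N)]

def omega_k(k):
    """The discrete complex exponential omega^k: entries e^(2*pi*i*k*n/N)."""
    return [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]

def transfer(k):
    """H(k), the DFT of h at index k -- the eigenvalue attached to omega^k."""
    return sum(h[m] * cmath.exp(-2j * cmath.pi * k * m / N) for m in range(N))

# Convolving h with omega^k just scales omega^k by the eigenvalue H(k).
for k in range(N):
    v = omega_k(k)
    w = cyclic_convolve(h, v)
    Hk = transfer(k)
    assert all(abs(w[n] - Hk * v[n]) < 1e-9 for n in range(N))
```

The same loop works for any choice of h, which is the point: the eigenvectors do not depend on the system, only the eigenvalues H(k) do.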
All right, now, I'll do one more calculation for you. Let's take that system we had before. Let's do a 263 problem in a slightly different way. All right. Let's take, again, w equal to h convolved with v, in the discrete case, where h was the vector one, two, three, four. All right. And we found that this was given by matrix multiplication, w equal to A times v, where A was this matrix, right. I'll write it down again: one, two, three, four, that's the first column, and then I start shifting. Four, one, two, three; then three, four, one, two; he said, looking at his notes desperately; then two, three, four, one. All right. That's the matrix.

Now, eigenvectors of the system are therefore eigenvectors of A. I should say eigenvectors and eigenvalues of the system are eigenvectors and eigenvalues of the matrix A. All right. Let's find them. Now, you know how to do that by matrix methods, where you look at A minus lambda times the identity, figure out the determinant, and find the roots of the characteristic polynomial. No, no, that's the thing where you just plug it into Matlab and let Matlab chug away, and so on, and so on. Right. So you can do this. But let's solve this problem by using what we know. Okay. So let's do this via the theory of LTI systems. The eigenvalues are given by values of the transfer function. All right. So we need the transfer function. The eigenvalues are H of zero, H of one, H of two, and H of three, remember, for that same h, one, two, three, four. Those are the eigenvalues. And, actually, I already know what the eigenvectors are. The eigenvectors, of course, are the complex exponentials. All right.
So how do I find those numbers? Well, H is the Fourier transform of little h, of course. All right. So I can calculate this directly. The Fourier transform of little h is the sum from i equals zero to three of the values h of i times omega to the minus i. Maybe I shouldn't use i, maybe I should use k: h of k, omega to the minus k. All right. That's the discrete Fourier transform of h. Now, h is easy; h is given explicitly. This is the sum from k equals zero to three of k plus one times omega to the minus k. Right, h of zero is one, h of one is two, then three, four, so it's k plus one times omega to the minus k.

All right. Write this out in terms of vectors; I'll do it. Okay. It's a sum of vectors, k plus one times omega to the minus k, where omega to the zero is all ones, then omega to the minus one, omega to the minus two, and so on, and here's what you get. Very easily, very quickly you get, I'll write it out for you: ten, minus two plus two i, minus two, and minus two minus two i. All right. Just by evaluating the sum, very, very easy, you can do it by hand. Okay. That's what you get. And that tells you exactly the eigenvalues. The eigenvalues of the matrix A are given by ten, minus two plus two i, minus two, and minus two minus two i. Okay. It drops out like a piece of ripe fruit. And in fact, I wasn't so sure my, well, I was sure of myself, of course, but I decided to check this, and if you put this into Matlab and ask for the eigenvalues of that matrix, sure enough, this is what you get. All right. So it works like a charm.

Once again, how does it work? An LTI system is given by convolution; the convolution is also realized by matrix multiplication, this is the matrix; therefore, eigenvectors of the system are eigenvectors of the matrix.
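That sum is also easy to evaluate numerically. A minimal sketch (mine, not part of the lecture), computing the discrete Fourier transform of h = (1, 2, 3, 4) straight from the definition H(m) = sum over k of h(k) omega^(-km):

```python
import cmath

h = [1, 2, 3, 4]
N = len(h)

# DFT straight from the definition: H[m] = sum over k of h[k] * omega^(-k*m),
# with omega = e^(2*pi*i/N).
H = [sum(h[k] * cmath.exp(-2j * cmath.pi * k * m / N) for k in range(N))
     for m in range(N)]

# Round away floating-point dust before printing.
print([complex(round(z.real), round(z.imag)) for z in H])
# -> [(10+0j), (-2+2j), (-2+0j), (-2-2j)]
```

These four numbers, 10, -2+2i, -2, -2-2i, are exactly the eigenvalues claimed above.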
But I know the matrix corresponds to an LTI system, so the eigenvectors are powers of the complex exponential: one, omega, omega squared, omega cubed, all right. And the eigenvalues are the values of the corresponding transfer function at the indices zero, one, two, three, all right.
So all I have to do is find the transfer function to find the eigenvalues of the matrix. And to find the transfer function, I calculate the discrete Fourier transform of h directly from the definition. All right. It's just the components of h times omega to the minus k; add up those vectors, and you get this vector, and the entries here happen to be the eigenvalues of the system, okay, of the matrix. It's cute. All right. It's pretty cute.

All right. We will now leave the theory of linear systems, as much more as there is to do, and there's plenty more to do. I want to finish up the course with a discussion of how two-dimensional transforms work, so we'll start on that on Wednesday. As always, please sort of read around and read ahead in the section. Again, our pursuit is going to be to try to make it look as much like the one-dimensional case as possible, and that will mean more to you if you read ahead a little bit and familiarize yourself with how the formulas look, so I can jump back and forth more easily. Okay. See you on Wednesday.
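To make the "check it in Matlab" step reproducible, here is a sketch (again mine, using the same conventions as above) that builds the circulant matrix A from h and confirms, without calling any eigenvalue solver, that each vector (1, omega^k, omega^(2k), omega^(3k)) satisfies A v = H(k) v:

```python
import cmath

h = [1, 2, 3, 4]
N = len(h)

# Circulant matrix realizing cyclic convolution with h; its columns are
# 1,2,3,4; 4,1,2,3; 3,4,1,2; 2,3,4,1, i.e. A[n][m] = h[(n - m) mod N].
A = [[h[(n - m) % N] for m in range(N)] for n in range(N)]

def dft(m):
    """H(m): the transfer function, i.e. the DFT of h at index m."""
    return sum(h[k] * cmath.exp(-2j * cmath.pi * k * m / N) for k in range(N))

# Each complex-exponential vector is an eigenvector of A with eigenvalue H(k).
for k in range(N):
    v = [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]
    Av = [sum(A[n][m] * v[m] for m in range(N)) for n in range(N)]
    lam = dft(k)
    assert all(abs(Av[n] - lam * v[n]) < 1e-9 for n in range(N))
```

This is the whole story of the calculation in miniature: the matrix is circulant, so its eigenvectors are the discrete complex exponentials, and its eigenvalues are read off the DFT of its first column.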
[End of Audio]
Duration: 53 minutes