Instructor (Stephen Boyd):Let me make a couple of announcements. I guess the first announcement is that the midterms are actually graded. So those are done and they'll be available for pickup in Packard after class today. And Denise, if she's in, you'll get them from Denise's office; her daughter was sick yesterday so I'm not sure if she'll actually be in today. If not, you can get them from my office or possibly, if I'm not there, from the TAs' office or something like that, so we'll figure that out. But these are available in Packard, the midterms.

We posted the solutions literally minutes ago, a few minutes before the lecture. And the midterms are in Packard for pickup. Let me say a few things about it. Oh, I should say that usually the way everything gets scheduled has to do with a shift in the schedule. By tradition, we return the midterms the day after the grade change option deadline has passed; that's the tradition. This time, however, because of the new schedule, we're actually able to give you graded midterms beforehand. That doesn't apply to the vast majority of people, but there might be a handful of people who, I don't know, decide they want to change their grading option.

So I'll say a little bit about the midterms. I guess we can all speak freely about the midterms now. They were good. I do have to apologize formally, on air, because I think we undershot a little bit. Normally when people turn them in, we should see, like, every third person looking like they just finished a marathon or something like that. And people were too happy and too civilized and all that, so we really feel that we undershot.

And I mention this because, you know, it's a very small world and I'm sure the rumors of this undershoot have already made it to MIT and Chennai and all these other places where people are gonna be disappointed. In fact, I already had one comment from current grad students who saw it, and they said, "They had that? That's not a midterm. Ours was a real midterm." So that's what they said, so anyway, just wanted to apologize.

Nevertheless, it seemed like it provided some amusement for you, at least, so that's good. And you could hardly have done it and then say you didn't do anything, so I think that's not possible to say.

Let me make a couple of other comments about it. Oh, and by the way, we do make mistakes grading, so sometimes we add things wrong, that's entirely possible; sometimes we've missed whole things or something like that. So do feel free to come forward and ask us about things, but only do that after you have looked at our solutions very, very carefully. And you better be prepared to defend yourself. We saw some serious nonsense, stuff that looked kinda right but was basically wrong in some cases, so just be prepared to defend yourself if you feel something's not there.

By the way, we do make mistakes, and so I would encourage you to look at it. You should definitely do a postmortem: look at our solutions, look at yours, try to figure out what happened, if anything. So that's it. Any questions about the midterm? How'd you like it? It's okay? Yeah, see, we really should get a much more visceral response, you see. That's how we know we didn't hit it just right. I mean it wasn't too far off, but at this point it should have produced, like, a major trauma from the weekend. But all right, I'll move on.

There's another announcement. It has to do with the numbering of exercises. I guess all of you know we went ahead and assigned Homework 5. It's very short. It covers some of the material we're doing now. That's just to keep us in the loop on the homework. There is one problem.

In the printed readers, the exercises are sort of done by lecture, they're numbered by lecture, so there's, like, Problem 2.6, 2.7. Anyway, they went up to, like, 9.25 or something like that, and then in the next lecture it started at 9.1 again. So it's just a mistake; it means that the numbering, but only the numbering, in the printed readers is wrong. So just be aware of that.

We've updated the PDF file on the website. I think that's it. But anyway, nothing else is wrong. The problems are right or whatever. So if you want to look at it, please do: Exercise 10.5 is the exercise labeled 9.5 in your reader; you're welcome to do that. Circle the correct 9.5 and then do it later or something. So that's it, okay.

That homework is gonna be due Friday. And in fact, I know this is big midterm week. Yes? This is big midterm week, so there's an option, which would be to make it due next Tuesday.


Instructor (Stephen Boyd):You'll exercise the option?


Instructor (Stephen Boyd):Okay, and then that's done. Okay, so Homework 5 will be due next Tuesday. But we're gonna pipeline here. Oh, now look, this is modern times, okay? You can't just sit around; that's how processors work. We're not gonna wait till you turn in Homework 5 before you start Homework 6. That's silly; it doesn't work that way. In fact, you should've been doing speculative execution the whole time. You should've been guessing what exercises we might assign and doing them ahead of time, just in case we might assign them. I mean for speed, that is.

Okay, so you don't have to do speculative execution, but we will assign Homework 6 on Thursday. And we might even back off on our natural tendencies on Homework 6, just a little bit, because it's pipelined. So Homework 5 is due next Tuesday, and then Homework 6 comes out Thursday, and then we're back on a Thursday-Thursday schedule. So how's that sound? Okay.

Let's see, I'm trying to think what else. Oh, the email I sent out about our progress in grading the midterms yesterday, no, Sunday, I can't remember when I did it, Sunday. I sent it to last year's 263 class first. Didn't know it for two hours until somebody actually came and found us in Packard and said, "Oh, you know, thanks for letting us know about the grading, but we took it last year." So I got a bunch of good responses from that, including some people who asked whether they really had to do Homework 5.

So I sent a new email out to the entire class saying, "No, you don't have to do Homework 5." I mean, I told them it will be on the final so they can choose to do it or not; it's their choice, but they don't have to do it. Anyway, if you know other people who are asking, like, what's wrong with your professor, I don't know, you can just say he's lost it. That's all. I guess that's the best thing to say.

Okay, any more questions about any of these things? Okay. We'll move on.

And we're gonna do one thing today, but it's pretty cool and it's this. We're gonna look at the autonomous linear dynamical system X dot equals AX, and we are gonna overload. So far, that's a scalar equation. Everyone here knows the solution of that: it's X of T equals E to the TA times X of zero, like that. So everyone knows this. We are gonna overload all of these things to the vector matrix case.

So we've already overloaded this simple scalar differential equation by capitalizing A and making A an n-by-n matrix and X a vector. So we've already overloaded the differential equation itself. Later today, we're gonna overload the exponential to apply to matrices. So that's our goal today.

And the nice thing about overloading and extending a notion is you want it to connect to things you already know, so it should remind you of things you know, it should make you guess a bunch of things, only some of which are true. That's what real overloading should do. Right? That's how it should work.

If everything you guessed were true, then it's kinda stupid. You should've defined it more generally in the first place, and I wouldn't even really call it a real generalization. So if you really want to do it, you want to extend it in such a way that it suggests many things, some of which are true. Okay, so the first thing we'll do is we'll just solve this via the Laplace transform, and I'll review this very quickly, even though it's a prerequisite, so here it is.

Suppose you have a function that maps R plus into P by Q matrices. So we're gonna go straight to matrices from scalars. And so Z itself is a function that maps non-negative scalars into P by Q matrices. So it's a P by Q matrix valued function on R plus; that's what Z is.

Now the Laplace transform, that's written several ways. One is to actually have a calligraphic or script L, which is an operator. And it takes as argument a function of this form and returns the Laplace transform, which is another function. It's a function from some subset of the complex plane into complex P by Q matrices. So that's what it is. Now it turns out, for us, we're not gonna worry too much about what this domain is. I'll say a little bit about that, but not much.

So the Laplace transform is actually quite a complicated object. It's actually very useful, maybe just once, to sit down and think about what it is. For example, how would you declare it in a computer language, right? For example, C or something like that, just so you understand. It's very easy to casually write down, you know, little things with a few ASCII characters which pack a lot of meaning. So L is itself a function. It is a function that accepts as argument something which is itself a function: a function which accepts as argument a non-negative real number and returns a P by Q matrix. Okay?

So the data type L returns is another function: a function which accepts as argument certain complex numbers and returns a P by Q complex matrix. Okay? So it's important to sort of think about this at least once. After a while, of course, you'd go insane if you thought about this every time somebody wrote down a Laplace transform. And so it's not advised that you think of it all the time, but you should definitely think of it once.
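
Just to make that declaration concrete, here is a minimal sketch using Python type hints rather than C (the alias names Signal, Transform, and LaplaceOperator are made up for illustration; this is just one way to write down the types involved):

```python
from typing import Callable
import numpy as np

# A "signal" maps a nonnegative real t to a p-by-q real matrix.
Signal = Callable[[float], np.ndarray]          # R_+ -> R^{p x q}

# Its Laplace transform maps a complex s (in the region of convergence)
# to a p-by-q complex matrix.
Transform = Callable[[complex], np.ndarray]     # subset of C -> C^{p x q}

# The Laplace transform operator takes a signal and returns a transform.
LaplaceOperator = Callable[[Signal], Transform]
```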

I should also add something here. And that is that the value of things like the Laplace transform is shifting, if not decreasing. Because a generation or two ago, this was actually one of the main tools for actually figuring out how things work, for actually simulating things and all that sort of stuff. It's basically not now. So it's mostly to give you the conceptual ideas, to understand how things work and all that sort of stuff. So things are shifting and it's not as important, I think, as it used to be.

By the way, there are those who scream and turn red when I say that. So. Okay.

Now, the integral here: you have the integral of a matrix, and of course that's extended or overloaded to be term-by-term or entry-by-entry. And the convention is that an uppercase letter denotes the Laplace transform of the signal. This would be called maybe a signal; some people call that a time domain signal, something like that. Obviously, T does not even have to represent time here. It makes no difference whatsoever what this means. It often means time, but it doesn't have to.

Now D is called the domain or region of convergence of Z. There are long discussions of this in books that are actually mostly, in my opinion, completely idiotic. I mean there's absolutely no reason for this discussion; it makes no sense. It actually also has no particular use these days, other than confusing students. So I'll say a little about this later.

It includes at least a strip; it's a right half-plane to the right of some value A. And that value A is any number for which this signal Z grows slower than an exponential with A here, E to the AT, something like that. So that's what the domain is. It's at least that.

Now you might ask, you know, "Why do you even care about signals that diverge?" That's a good question. Actually, you need to care about signals that diverge for a couple of reasons. First of all, it might be a pathology in something you're making. So if you want the error in something to go to zero, a tracking error or a decoding error or something, and you design the thing wrong, then instead your tracking error will diverge. So it's a pathology and you need to have the language to describe divergence.

Also, by the way, there are lots of cases where, although it's often bad if a signal diverges, that's by no means universally the case. If you're working out the dynamics of an economy, then divergence is probably a good thing in that case.

Okay, so let's look at the derivative property. There are only a few things you use in the Laplace transform. It says the Laplace transform of the time derivative of a signal is S times the Laplace transform of the signal minus the initial value. Now this is the basic property. This is the whole point of Laplace transforms, essentially.

It's actually reasonably easy to just work out why this is the case. You look at the Laplace transform of Z dot, evaluated at a point S, so that's a P by Q complex matrix. And by definition it's the integral of E to the minus ST times Z dot of T, dT. Now integrate by parts: this is E to the minus ST times Z of T, so I guess this is U dV. That's UV evaluated over the interval, then minus the integral of V dU, and that's what this is here. Okay?

Now here we're gonna use the fact that the real part of S is large, because that's the domain that we're looking at. And that means that this goes to zero very rapidly; even if Z is growing, this will swamp it. By the way, if you don't pick the real part of S large enough here, this integral actually has no meaning whatsoever. It does not exist. Okay? So this is not sort of a convenience here; it's because this has no meaning unless the integrand here is integrable. And if this is diverging, the only thing you can say is that it simply has no meaning; it's like one over zero.

Okay, so this thing here, of course, the term at infinity goes away, and this becomes minus Z of zero, because I plug in T equals zero here and it doesn't matter what S is. And this gives me S times Z of S, so that's your derivative property. And now we can very quickly solve X dot equals AX. That's an autonomous linear dynamical system.

So what we're gonna do is this. We'll take the Laplace transform of both sides. On the left-hand side, and these are all vectors, I get S capital X of S minus X of zero, and that's A capital X of S. X of S is the Laplace transform of X here. And what I'll do is I'll move this over to the other side, and I'll write this as SI minus A times capital X of S equals X of zero. Now I've isolated stuff I know, that's this, from what I want, which is right here, and it's appearing in the right way. And therefore, at least formally, X of S is the inverse of this matrix times X of zero.

Now we're actually gonna talk a lot about that, but this matrix here, of course, you can't just casually write the inverse of a matrix. If you write the inverse of a non-square matrix, that's just terrible. Actually, as far as I know, no one did that on the midterm, which makes us very happy. So the matrix police didn't file any complaints, I think. Actually, I don't know that, but I think that's true. Okay.

However, SI minus A can fail to be invertible. We're gonna get to that later. It turns out SI minus A is invertible for almost all complex numbers, except a handful. And we'll get to the meaning of those soon, but for the moment let's just say this is for SI minus A invertible. And now we take the inverse Laplace transform and we have the answer. So X of T equals the inverse Laplace transform of SI minus A inverse, times X of zero.

Notice here that X of zero comes out. There's a question?

Student:Does it make any [inaudible]?

Instructor (Stephen Boyd):We are saying that, yes. That's exactly right. Right. So linearity is the other thing I'm using here, which I didn't mention but probably should have. So I'm using linearity of the Laplace transform, and now this is the matrix vector case.

The way you can check it is very simple. The Laplace transform is an integral, entry by entry. So if you work out what a matrix vector multiply is, just write it out with all the horrible indices, then the integral appears outside the sums. Put the integral inside, recognize it for what it is, and then put it back outside and so on and so forth, and you'll get that.

I don't know if that made any sense, but anyway. I'm using linearity of the Laplace transform. Okay.

So you get this. Okay, now actually a bunch of things appearing here are very famous; they come up in zillions of problems. This matrix SI minus A inverse comes up all the time, in lots and lots of fields, and in mathematics it's called the resolvent of A. Notice it's a function of S, this complex number. So it's a complex square matrix valued function. It's the resolvent, SI minus A inverse.

Now it's defined, of course, whenever this is invertible. If it's not invertible, of course, this inverse has no meaning here. So the places where it has no meaning are actually called the eigenvalues of A. And these are complex numbers for which det of SI minus A is zero, so these are the eigenvalues of A. We're gonna say an enormous amount about this. There are only N of those or fewer, and we'll talk about that later.

So when you write SI minus A inverse, you have to understand something. When you see SI minus A, there's just no problem at all. When you see SI minus A inverse, you have to have the understanding that, with this expression, you don't want to put a little star or something like this and have a little footnote down here that says "provided S is not an eigenvalue of A," or something like that. It gets silly. It's basically like if you write down a function like this: S plus two squared over S minus one. Okay?

And every time you write this, you don't want to have a footnote that says, "Defined for all S except S equals one." So after a while you get used to it and you just write this. By the way, you can get into trouble by forgetting that there is, in fact, a footnote there for this one. There's a footnote, and the footnote says: whatever you're doing, you plug in S equals one here and all bets are off. Okay? So there is that footnote.

But as long as you just remember that that footnote is in place, everything is okay. And the same thing is true here. So we'll write SI minus A inverse, that's the resolvent of A, and it should just be understood that there are up to N complex numbers for which this is not invertible and you shouldn't be writing the inverse. Okay.

Now when you take this inverse Laplace transform here, this thing, that is a matrix valued function of time. It's gonna have a name real soon, but first of all we're gonna give it a general name. And it's the state transition matrix, and it's denoted phi of T. Okay? It's called the state transition matrix and it looks like this. Actually, we already have an interesting conclusion. We see that the state at any given time is a linear function of the initial state. Not surprising, it's a linear differential equation, but there it is.

And if it's a linear function of the initial state, it's given by matrix multiplication. There's some matrix in effect, and the matrix is the state transition matrix. Okay? So we get that. You can actually work this out; you know everything here in principle. You can take a matrix A, you can calculate SI minus A inverse, at least in principle, and you can take the inverse Laplace transform of that: the entries are rational functions, so you can go get some Laplace transform table and take the inverse.

So in some sense, it's done. You now know the dynamics of linear dynamical systems, I mean of autonomous linear dynamical systems. You know everything now, in some weird theoretical sense. Okay.
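
If you want to carry out that in-principle recipe symbolically, here is a minimal sketch, assuming SymPy is available; the matrix A and initial state below are made up purely for illustration. It forms the resolvent, takes the entrywise inverse Laplace transform to get the state transition matrix, and multiplies by x(0):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Example matrix and initial state, chosen just for illustration.
A = sp.Matrix([[ 0,  1],
               [-2, -3]])
x0 = sp.Matrix([1, 0])

resolvent = (s * sp.eye(2) - A).inv()          # (sI - A)^{-1}

# Entrywise inverse Laplace transform gives the state transition matrix.
# (SymPy may leave Heaviside(t) factors, which equal 1 for t > 0.)
Phi = resolvent.applyfunc(lambda e: sp.inverse_laplace_transform(e, s, t))

x_t = sp.simplify(Phi * x0)                    # x(t) = Phi(t) x(0)
print(x_t)
```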

So this is called the state transition matrix, so let's look at some examples real quick. The first one is this one. The harmonic oscillator is the name of the system. And it looks like this. It says X one dot is X two, X two dot is minus X one. Okay? Now if you plot the vector field, it looks like this. And here it's certainly plausible that the trajectories are circular, plausible, but I think it's not quite circular. Sorry. Scratch that.

Actually, I just had a discussion this morning with the people doing the video production. And I said, "I'd like you to just remove it when I say things like that, just remove it, so it never happened." And then they said, "Oh no, no. There are huge, huge expenses associated with that, so I can't remove these now." But that's the kind of thing, by the way, I'd like to remove. All right, so let's just rewind, pretend I didn't say that, and go back.

When you look at this, you can imagine, with your eyeball I guess, that the trajectories are circular or nearly circular, something like that. Now it turns out they're actually circular. We'll get to that. Let's see how this works.

So we form SI minus A, well, the inverse. This is the one inverse you should kinda know by heart, certainly. Well, there are a few special cases, but the two by two inverse everyone should know. It's one over the determinant, and then I guess you switch these and negate these, so that's one thing that's reasonable to know.

And so SI minus A inverse, that's the resolvent, is this. And notice that this matrix makes perfect sense for all complex numbers except plus minus J. But J, that's just because this course is officially listed in electrical engineering. This should be I. The truth is, in mixed company, you shouldn't use J because outside electrical engineering it's a dialect, so you shouldn't really use J.

And my feeling is you shouldn't use J in mixed company. Okay? But because the course is in EE, I'm gonna use J. But I'm making it explicit: this is not the high BBC mathematical phrasing that would be used. I is absolutely universal in all fields, except electrical engineering, where you have this. Okay?

And the reason, I think it goes back 100 years: an I apparently represented current. Now how I got connected to current, I do not know, but nevertheless the two got stuck together in the late 19th century and here we are 120 years later with J. Sad, but okay. I might change that someday because it's a bit weird, but not this quarter. Okay.

Now, the state transition matrix: you simply take the inverse Laplace transform of this. No problem. You go look it up in some table or something like that, and you'll find that the inverse Laplace transform, entry by entry, is cosine T, sine T, minus sine T, cosine T. And you saw this matrix before. That is a rotation matrix of minus T radians; that's what it is. Okay? So it simply rotates.

What that means is this state transition matrix, and let's remember what it does, it maps initial states into the state T seconds later. This matrix here, it simply takes a vector, the initial vector, and rotates it negative T radians. Okay? So that's what it does.

So we've actually now verified that the motion in this system is perfect periodic motion. You simply take a state vector, the initial vector, and you just rotate it at a constant angular velocity, in fact of one radian per second here. So that's our complete time domain solution.
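
A quick numerical sanity check of that claim, assuming NumPy and SciPy are available (the time t = 0.7 is arbitrary): expm of tA for the harmonic oscillator should match the rotation matrix by minus t radians.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])     # harmonic oscillator: x1' = x2, x2' = -x1

t = 0.7                          # arbitrary time
Phi = expm(t * A)                # state transition matrix e^{tA}

# Rotation by -t radians, as read off from the inverse Laplace transform.
R = np.array([[ np.cos(t), np.sin(t)],
              [-np.sin(t), np.cos(t)]])

print(np.allclose(Phi, R))       # True: the state just rotates at 1 rad/s
```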

Now, by the way, I want to point something out right off the bat. We are generalizing this. That's a scalar differential equation. Now the solutions of this look like E to the AT, oops, TA. You'll know why in a minute I keep writing TA. So that's the solution, okay? Now, qualitatively, the solutions of a first order scalar linear differential equation are pretty boring. Basically, there are only three qualitative possibilities.

No. 1, if A is positive you get exponential growth. If A is negative, you get exponential decay. If A is zero, you get a constant. Okay? There is no way you can get an oscillation or any other kind of qualitative behavior out of this thing, other than exponential growth, exponential decay, or a constant. Well, this is X dot equals AX, where A is two by two. And we just got something out of a first-order linear differential equation that you are not gonna get out of a scalar one. We got oscillation. Okay?

So when you generalize, when you go to vectors and you look at the vector version, X dot equals AX, you get solutions that don't just have exponentials in them; they can have cosines and sines. Okay? I mean, this is kind of obvious, but I just want to point out that with our generalization here you've already seen behavior you could not possibly see in the scalar case.

Okay, the next case is the double integrator. So for the double integrator, you have X dot is zero one zero zero times X, like that, so X one dot is X two, and X two dot is zero. And the reason it's called a double integrator is the block diagram would look something like this. And this is gonna be maybe X one and maybe that's X two. Everyone agree with that? Because X two dot, this is zero going in; X two dot, over here, is zero. And X one dot, that's what went into the integrator, is X two. And I think that's this. So that's the block diagram.

In fact, when you saw this matrix, you should have had an overwhelming urge to draw a block diagram. Come to think of it, that should've happened here too. So let's just do this one for fun. Here's X one and, oh, I'm gonna try to do this right, that's the output, that's X two. Okay? And that's one over S.

So let's read this one. It says X one dot is X two, so I'll just connect up this wire here, like that. Okay? And this one says X two dot is minus X one, so I'd put a minus one like that. There you go. So that's our block diagram for this thing, so that's the picture. It's basically two integrators hooked into a feedback loop with a minus one in the feedback loop. That's what it is.

Okay. The double integrator, that looks like this. And the solution, you know, is totally obvious. You don't need to know anything; you certainly don't need to know anything about matrices and things like that to solve this. If X two dot is zero, X two is a constant, and if X one dot is a constant, X one grows linearly. It's a constant plus a second constant times T. So we could just work out the solution immediately, but let's just see if all this Laplace and other stuff works.

So, oh, here's the vector field, which shows you what it is. If you start here, depending on your height, that tells you how fast you're moving to the right, or if you're down here you're moving to the left, and so on. Let's work it out.

SI minus A is equal to S, minus one, zero, S. That's a two by two, sorry, upper triangular matrix; you should be able to invert that. SI minus A inverse is this: one over S, one over S squared, zero, and one over S. Now this is defined for all S except for one complex number, which is zero. You can list either zero or a pair of zeros; we'll see why that is in a minute, but I should really say something like: the only eigenvalue is zero. At this point, I should say that.

Okay. So that's that. The inverse Laplace transform is this: phi of T, that's the state transition matrix, is one, T, zero, one. Okay? By the way, you've now seen something else that is just absolutely impossible in the scalar case. In the scalar case, if the solution of X dot equals AX grows, it grows exponentially; it cannot grow linearly in time. That's for a scalar.

And yet, in the matrix case, you can have the solution of X dot equals AX grow linearly in time. Look at that. Okay? So it's very important to point out that you're seeing qualitatively different behavior than you could possibly see in a scalar differential equation. This is not a big deal. If you work out what this is, it says X two of T is X two of zero. We knew that because it was constant. And then it says X one of T is equal to X one of zero plus T times X two of zero. That's obvious because X two is the derivative of X one. So it all works out and makes sense.
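
Here is the same check done numerically, assuming SciPy is available; the time and initial state are arbitrary, chosen just to illustrate the linear growth.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])          # double integrator

t = 3.0
Phi = expm(t * A)
print(Phi)                           # [[1, 3], [0, 1]]: the (1,2) entry grows like t

x0 = np.array([2.0, 5.0])            # arbitrary initial state
print(Phi @ x0)                      # [17, 5]: x1(3) = x1(0) + 3*x2(0), x2 constant
```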

Okay, so let me just ask some quick questions here about this matrix phi of T. We'll get to this. Let me ask the following. What does the first column of phi of T mean? What does it mean? What's the first column of phi of T?


Instructor (Stephen Boyd):It says what? It's what? X one


Instructor (Stephen Boyd):Correct.


Instructor (Stephen Boyd):Okay, so what does the first column of phi mean? It has a meaning.


Instructor (Stephen Boyd):Yeah, it says the following. The first column of this matrix tells you what the state trajectory is if the initial condition was E one. That's what it tells you. Okay? What does the first row of phi of T tell you?

All right, let's write down this: phi of ten equals zero, zero, minus one, 30, 5, and, of course, that's got to be a square matrix, there you go. Strange placement, but anyway, let's live with it. What does that mean? That's phi of ten. There's a very specific meaning. What does it tell you?


Instructor (Stephen Boyd):Which state at ten?


Instructor (Stephen Boyd):All of them?


Instructor (Stephen Boyd):Thank you. This tells you, this row is what maps the entire vector X of zero into X sub one of ten. That's what it does. So otherwise, I agree with your interpretation. So now, give me your interpretation again.


Instructor (Stephen Boyd):Exactly. So these two tell you, well, at least to the precision I've written them down, that X one of ten doesn't depend on the first two components of the initial state. Okay? This says it depends a whole lot, and positively, on the fourth component of the initial state. Everybody got this? Okay. And this says the third component of the initial state actually has sort of an inhibitory effect on X one of ten.

By the way, we're gonna see interesting things where, when you plug in ten you get one thing, and when you plug in 100 or 0.1 you get something totally different. So now you can actually talk about when something has an effect, when an initial condition has an effect. Okay? All right. Okay.

So let's talk about the characteristic polynomial. This is also very, very famous. The determinant of SI minus A comes up all over the place, and it's called the characteristic polynomial of the matrix A. Sometimes you put a subscript here to indicate the matrix. This is absolutely standard language, so this is not some weird strange dialect from electrical engineering or something.

So it's called the characteristic polynomial. And it's a polynomial of degree N and it's got a leading coefficient of one. By the way, some people call that a monic polynomial. Not some people, actually, just people; it's called a monic polynomial, which means its leading coefficient is one.

And you can check that, I don't know, over here for example. Det of SI minus A, it's the determinant of this thing, and it's just S squared. Well, that's about as simple as characteristic polynomials go. Let's do this one. So for the matrix which is zero, one, minus one, zero, the characteristic polynomial is det of this thing, which is of course S squared plus one. So that's the characteristic polynomial of this thing.

Okay, so that's the characteristic polynomial. And the roots of this polynomial, basically by definition, are the eigenvalues of the matrix A. Okay? So how many people have seen this somewhere else? Okay. So this should be review. So the roots are simply defined to be the eigenvalues of the matrix A.

Now this polynomial has real coefficients here, I mean assuming A is real. By the way, sometimes you look at complex linear dynamical systems; they do come up, they come up in communications, they come up in physics, for example, and they come up in all sorts of places. But generally speaking, we look at the real case, and then on an exceptional basis we'll look at what happens in the complex case. All right? So A, if I don't say anything else, is real.

So this has real coefficients. Now, a polynomial with real coefficients has a root symmetry property. It says that the roots are either real or they occur in conjugate pairs. In other words, if lambda is a complex root of this characteristic polynomial, so is lambda bar, its conjugate. Okay. So now you can see why people talk about N eigenvalues.

When you have a polynomial of degree N, maybe the correct way to say it is something like this. You can have anywhere between one and N roots of an Nth order polynomial. Okay? It could be a full N and it could actually be just one root. And a good example would be S to the N. The only zero of this polynomial is S equals zero. Okay?

Now what people do, in order to make the aesthetics of the fundamental theorem of algebra work, the one that says that a polynomial of degree N has N roots, to make that statement have no footnote, is you have to agree to count the multiplicity of the roots here. And so this here would count N of them, and then you can actually make the beautiful statement that an Nth order polynomial has N roots. Of course, they might all be the same, but that's dealt with elsewhere. Okay.
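
As a small numerical illustration, assuming NumPy is available: np.poly gives the monic characteristic polynomial of a square matrix, and its roots, counted with multiplicity, are the eigenvalues.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])      # the example from above

# Coefficients of the monic characteristic polynomial chi(s) = det(sI - A).
chi = np.poly(A)
print(chi)                        # [1. 0. 1.]  i.e. s**2 + 1

# Its roots are, by definition, the eigenvalues of A (here plus/minus j).
print(np.roots(chi))
print(np.linalg.eigvals(A))       # same values, up to ordering and roundoff
```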

Now, for the resolvent, which is SI minus A inverse, it is likely, I guess, that you were at some point tortured with something called Cramer's rule. Is that correct? This was this method for inverting matrices where you cross out a row and a column, take the determinant of what is left, then you divide by something else, and sometimes you put a minus one in front of it. How many people actually saw that? Okay. How many people know how useful that is?


Instructor (Stephen Boyd):Do you know how useful it is?

Student:Yeah, it solves equations. [Inaudible].

Instructor (Stephen Boyd):It solves equations. Yeah. It was useful only for you to take that class. It has no use of any kind. Well, other than right now, briefly, we're gonna use it, but not really. No, it has absolutely no practical use whatsoever. No, under no circumstances are linear equations solved using this method, at least after the mid to late 1820s. So no, no, no, that's not true. People maybe did it all the way up into the 40s or something, but only because they didn't know what was going on. And yet, there it is in the curriculum. There it is; might as well teach people how to do long division with Roman numerals. That would actually be more useful, come to think of it. So anyway, all right.

Sorry, pardon me. Okay.

So this rule basically said: to calculate this matrix, you cross out a row and a column, like the Jth row and the Ith column, of this matrix. Calculate the determinant, that's this thing. Divide by the determinant of the whole thing; well, at least we have a name for that, that's the characteristic polynomial here. And then you put a minus one to the I plus J in front, and that gives you the entry.

Now, I don't actually care to do this. This is computationally completely intractable in any case, because the number of terms in this is growing hugely and the whole thing is silly. There's one thing I want out of this, and that's this: every entry in the resolvent is a rational function, and they all have the same denominator, which is the characteristic polynomial. The numerator is another polynomial. It's the determinant of SI minus A when you cross out one row and one column and take the determinant of what's there. That's what that is. Okay?

Now when you do that, the degree of this numerator polynomial is less than N. So what that says is that every entry of the resolvent looks like this: a polynomial of degree less than N, divided by this polynomial whose degree is definitely N. Because this thing, the coefficient of S to the N in chi of S here, that's one. Okay, so they all look like that.

Let's see. There's a name for that. If you have a rational function, which is a ratio of two polynomials, and the denominator has a bigger degree than the numerator, it's called strictly proper. So again, you don't need to know this, but that's just what it's called, so every entry of the resolvent is strictly proper.

One way to say that is, as S goes to infinity, the entries of SI minus A inverse all go to zero. Okay? Which is kind of easy to see. Well, is it? I don't know. It sort of makes sense. As S goes to infinity, you're inverting something like a huge number times I, minus A. And it's plausible, at least, that that inverse should be a matrix that's small. Okay.

Now comes the tricky part. It turns out that not all eigenvalues of A are gonna show up as poles of each entry. Because although each entry looks like this, here's what's gonna happen. In some cases, the numerator polynomial will also have some of the eigenvalues, the roots of chi, as roots, and those actually will cancel. Okay? I think this will be clearer with examples and things like that. Let me see if I have one here. Oh, I did have one; aha, yes, we have one. A perfect example, if I can find it. Here's our perfect example. Great. Okay.

The eigenvalues are zero and zero. Here is the resolvent, okay? That's the resolvent right there. Now I'll ask you about the poles of each entry of the resolvent. What are the poles of the one-one entry? Zero. Well sure, they're the eigenvalues. Okay. You could say the poles here are zero and zero for this. Right? You can say zero and zero, and those are the eigenvalues, no surprise here. But now I ask you about this entry, the two-one entry. What are the poles of the two-one entry of the resolvent? There are none. Okay? So the two-one entry is a case where there's an entry in the resolvent that does not inherit a pole from the set of eigenvalues.

Now, what if this had looked like this, like that? What would you have said? Well, if I'd asked what the poles of the two-one entry are now, what would you say? One. And then what would you say? You'd say it's impossible, because for the entries here, the poles have to be among the eigenvalues, but they don't have to include all of them, as this zero entry shows. Okay. The significance of that, I think only examples and fiddling around with these things is gonna make clear. Okay.
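
To make the cancellation concrete, here is a small symbolic sketch, assuming SymPy is available, using the double integrator whose eigenvalues are zero and zero: every entry of the resolvent has denominator dividing chi of S, which is S squared, and the two-one entry is identically zero, so it has no pole at all.

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[0, 1],
               [0, 0]])                  # double integrator, eigenvalues 0, 0

chi = (s * sp.eye(2) - A).det()
print(sp.expand(chi))                    # s**2, the characteristic polynomial

R = (s * sp.eye(2) - A).inv()
print(R)                                 # [[1/s, 1/s**2], [0, 1/s]]

# Each entry is strictly proper with denominator dividing s**2; the (2,1)
# entry is 0, so the eigenvalue at s = 0 never shows up as a pole there.
```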

Next topic is this. We are now going to overload. Oh, by the way, we have overloaded this. Suppose you didn't remember how to solve that, the scalar case, but for some reason you did remember all about Laplace transforms. I've always found that a little bit implausible, but anyway, let's just go with that story. You would've said: S capital X of S minus X of zero equals A X of S. And you would've gotten a formula that looks like this: X of S is X of zero divided by S minus A. Did I do that right? Something like that; you would've got that. Right?

And I'm allowed to write this because these are scalars. Okay? That is the scalar version of that. Okay? But this is what it looked like when you took an undergraduate class. And then someone would say, "Well, so what is X of T?" And you'd say, "Well, it's the inverse Laplace transform of this." Okay? So we've just worked out all of that. We've overloaded it now to the matrix case. And the only thing is that what had been a fraction like this, well, it really couldn't have worked out many other ways: it came out in front as SI minus A inverse, and that has its own name, which is the resolvent.

Okay, but now we are going to overload the exponential. All right? So we'll start with a series expansion of I minus C inverse. This is actually the matrix version of the scalar series you've seen. So I minus C inverse is I plus C plus C squared plus C cubed, and so on. And that's if the series converges; actually, quite soon we'll know exactly when it converges. But it certainly converges when C is small enough, small enough that the powers of C are getting smaller fast enough. Then this for sure converges.


Instructor (Stephen Boyd):I said we'll get to it later.


Instructor (Stephen Boyd):In a sense, actually, within one lecture you'll know exactly what it is. I don't mind. I'll tell you: the eigenvalues of C have to have magnitude less than one. That's the exact condition. Okay? So.

Okay, so let's look at this. And, you know, how would you show this? You'd show this by terminating the series at some point and then multiplying, you know, telescoping the series. You'd multiply this by that and find that what would be left over would be C to the N, where N is where you truncated it. And then if C to the N goes to zero, you'd get this. So you have that; that's your series thing. And we could just take this as formal.

Now let's look at SI minus A inverse and let's do this. Let's first pull S out of this, and we get I minus A over S inside. And the S we pull out becomes a one over S outside, looks like that. It's a scalar. And you get I minus A over S, inverse; now that's this formula here. And I'm gonna use this power series expansion, here, of I minus C inverse. And if anyone bugs me about convergence, I'll wave my hands and say, "Oh, yeah. Right. This is only valid for S large." Okay? If anyone bugs me about it, that's what I'm gonna say.

Okay? Because if S is large, A over S is small, and then, in the way in which I didn't quite say, if C is small enough this will converge. Okay, so we get this. And this is simply I over S plus A over S squared plus A squared over S cubed, and so on. Oh, by the way, of course that's slang. Right? Everybody recognizes that? That's considerable slang, but a lot of people write it. Maybe the correct way to write that is this, but then you get too many parentheses and it starts looking really unattractive and stuff like that. But I figure now, post-midterm, I can be a little bit more informal, so that's slang, just wanted to mention it.

I still, I don't know that I can actually take things like this. That just looks weird for some reason. You know, maybe I'll get used to it or whatever. And this looks kind of sick, and I just, like, why would you do that. I don't know, it just seems odd, anyway. But for some reason this, just the S, this seems to flow. And it sure beats that, because it'd be a lot of parentheses otherwise.

Okay, so I write it this way. Oh, that's slang too. There we go. See? Right there. That's a lot of slang, but that's okay. You know what's meant by it.

So you take this series expansion, and now let's take the inverse Laplace transform term by term. Well, if I do that, the inverse Laplace transform of I over S, that's easy, that's I. A over S squared, that's easy, that's TA. Then A squared over S cubed, that's TA squared over two factorial, and so on. So I get a power series that looks like that, okay?

Well, that's interesting, because that looks just like this: E to the AT, like that. Except I'm gonna start writing these as E to the TA. You'll see why in a minute. E to the TA; looks just like that. So here's what we're gonna do. We're simply going to define. All of that was just sort of a little background. We're simply gonna define the matrix exponential this way: E to the M is I plus M plus M squared over two factorial plus M cubed over three factorial, and so on. Okay?
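
As a minimal sketch of that definition, assuming NumPy and SciPy are available, here is the truncated power series compared against scipy.linalg.expm (which, as discussed later, computes the matrix exponential by a more reliable method; the example matrix is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

def expm_series(M, terms=30):
    """Truncate e^M = I + M + M^2/2! + ...  (for illustration only)."""
    total = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k        # now term equals M^k / k!
        total = total + term
    return total

M = np.array([[0.0, 1.0],
              [-1.0, 0.0]])        # arbitrary square matrix
print(np.allclose(expm_series(M), expm(M)))   # True for this well-behaved M
```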

Now, just the way the power series for the ordinary exponential of a scalar, real or complex, converges for any number. Right? Any number, even a big number; what happens is these terms get way big, but they will converge, okay? Same way here: this series converges for any matrix M. How well does the series do for non-square matrices? Exp of a two by three matrix, what is it?


Instructor (Stephen Boyd):Yeah, it just makes no sense. And, in fact, where in the syntax pass would you halt?


Instructor (Stephen Boyd):Here? You'd stop already, right here. I'd stop the minute I parsed it, when I pulled the token M off and then asked somebody somewhere to add a two by three matrix to an identity; that'd be the problem. But you're right, I could say, "You know what? I'm gonna let one go. Just keep going." And then the M squared

Yeah, it's like, "No problem." Right? No, that's actually what compilers do, right? They try to get through as much as possible, because the more they can get through, maybe the more informative their description of the exact kind of idiocy you suffer from can be. So, you know, you'd say, "Okay, fine. This person is adding an identity to a two by three matrix. No problem. Let's just keep going." And then, indeed, you'd get to the M squared and you'd say, "All right, I know what we're dealing with here." And then you return with a nice message. Okay.

So yeah, matrix exponentials don't exist for non-square matrices, but for any square matrix they exist. By the way, we've now just overloaded the exponential: it takes as argument a square matrix. Okay? Whenever you do an overloading, you want to check that in any context where the two different meanings overlap, they better agree. So for example, if someone says E to the A and A is a scalar, there's this weird thing where you could say, "No, no, no. It's a one by one matrix." And you have to make sure it's the same thing. But of course it is the same thing, so everything's cool here. Okay.

All right, so that's the matrix exponential, just defined for any square matrix, and now it turns out that's just what the state transition matrix is. It's E to the TA. And so what we've done is we've come around and we've figured out the following: the solution of X dot equals AX is X of T equals E to the TA times X of zero. And I'm gonna try to do this right. You know, the problem is, I guess, if you learn, or teach in my case, the undergraduate classes, it always looks like this. It's always E to the AT. Did people see that? Is that what you saw?

You know, it's kind of like cosine omega T. Right? There's nothing wrong with a person writing it the other way, but it's just weird and kinda goes against convention, and I don't know what. Does everyone know what I'm talking about? Okay, so for some reason, I have no idea why, you put the T like this. So that was so ingrained in me from teaching undergraduate classes that for a long time I wrote E to the AT. And actually, a lot of people will do that. But that's kind of weird. That's the scalar post multiplication of a matrix.

It's cool in some, you know, depending on the social situation, it can be okay to post multiply a matrix by a scalar. Certainly among friends, on weekends, I don't see any problem with it. But somehow it's just not right, so I'm now retraining myself to write this as E to the TA. I don't know, just so that when I teach this class I have that. So anyway, I'll slip up a few times and that's fine. Okay.

So there you go. Now we have a name, and we know that the solution of X dot equals AX is X of T equals E to the TA times X of zero. When A is a scalar, this goes back to your undergraduate days; there's nothing here you didn't know about. When A is a matrix, that's the matrix exponential. Okay? And it's the solution. So okay, there you go.

So the solution of that is this. Now, as I just said, that generalizes the scalar case; note it's written here as TA. Now, a couple of warnings here, and in fact this is what makes this fun. If everything just worked out, it wouldn't really be fun. And if it didn't really require, like, outer cortical activity, I mean if it was just notation, it's not interesting. So here's the idea behind this.

The matrix exponential, of course, is meant to look like the scalar exponential. That's absolutely by design; it's supposed to look like it. Okay? Now, what that means is that some things you will guess from your knowledge of the scalar exponential hold. Okay? I'll show you one right now.

So for example, E to the minus A is, in fact, E to the A inverse. That's true, okay? But there are lots of things from your undergraduate scalar exponential knowledge base which absolutely do not extend to the matrix case. So here's an example. You might guess that E to the A plus B is E to the A times E to the B. That is absolutely the case if A and B are scalars; in general it is false. In fact, if you randomly pick an A and B, it will be false.

By the way, you will know soon why. When you understand the dynamic interpretation of what E to the A means, and you've thought about it carefully, as opposed to just notationally, you would not even imagine that this would be the case, because it's making a very strong statement.

Anyway, this is false. Quick, we've actually worked out explicitly two matrix exponentials, so we'll use that work. If A is this thing, E to the A is a negative one radian rotation matrix. E to the B is this thing; that's just straight from our formula. You work out what E to the A plus B is. We did not work that out, but I worked it out to a couple of significant figures, and it's not equal to the product of the other two. Okay? They're just way different animals. Okay? So be very, very careful with the matrix exponential, and actually with a bunch of the other stuff that we've overloaded.

By the way, it's not like you haven't seen this kind of thing before. I'll show you an example. You know, for example, that if these are scalars and I say something like AB equals zero, you know that either A or B is zero. That's true. But if A and B are matrices, it is false that either A or B must be zero, just false. Now, it becomes true with some assumptions about A and B and their size and rank and all that stuff. But the point is, it's just not true that that implies A equals zero or B equals zero. And after a while you get used to it, and it's kind of the same thing for the matrix exponential, so it's not like you haven't seen stuff like this before. Okay.

However, if A and B commute, so if AB is BA, if the matrices commute, then in fact this formula holds, okay? And that's easy to do. You just simply work out the power series. You take the powers, and then you're free to rearrange the As and the Bs, and you can make this power series look like that. Okay? And that tells you immediately the following. If you have two numbers, T and S, then E to the TA plus SA is actually E to the TA times E to the SA, like that. Okay? And if S is minus T, you get E to the TA times E to the minus TA, which is E to the zero, which is the identity. Okay?

So that says that the exponential of TA is always non-singular, and its inverse, E to the TA inverse, is just E to the minus TA. This'll make a lot of sense momentarily. All right.
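
Here is a quick numerical check of both points, assuming SciPy is available; the non-commuting pair A, B below is made up for illustration and is not necessarily the pair used on the slide.

```python
import numpy as np
from scipy.linalg import expm

# A non-commuting pair, chosen just for illustration.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
B = np.array([[0.0, 1.0],
              [0.0, 0.0]])

print(np.allclose(A @ B, B @ A))                       # False: they don't commute
print(np.allclose(expm(A + B), expm(A) @ expm(B)))     # False: e^(A+B) != e^A e^B here

# But tA and sA always commute, so the scalar-style identities do hold:
t, s_ = 0.8, 1.7
print(np.allclose(expm((t + s_) * A), expm(t * A) @ expm(s_ * A)))   # True
print(np.allclose(expm(t * A) @ expm(-t * A), np.eye(2)))            # True: inverse
```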

So how do you find the matrix exponential? Well, let's take zero one zero zero. There are lots of ways to find it. We already worked out E to the TA, so that's kinda silly; we just plug in T equals one and we get this. But we can also do it by power series. So by power series, we just take I plus A plus A squared over two. What is A squared for this A? It's zero, because this matrix is, oh, okay, all right. Someone give me the English for what that matrix does, give a name for it. What does it do to a two vector?


Instructor (Stephen Boyd):What does it do? I think I heard it.


Instructor (Stephen Boyd):Shift up. Okay, let's call it the upshift matrix. So that's the upshift matrix. It takes a two vector, pushes the bottom entry up to the top, and zero pads, so it fills in a zero for the bottom entry. So if you do that twice to a vector, there's nothing left. So A squared is zero; A cubed is zero. And actually, now, this is something you don't see in the scalar case.

In the scalar case, when you work out the infinite series for the exponential, it's infinite, oh, except for one case, when the argument is zero. But other than E to the zero, that series is infinite. Here, for a non-zero matrix, the series was finite. It only looked like an infinite series. It was finite. So that's one way to get this matrix exponential. Okay.
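
A tiny check of that, assuming SciPy is available: the upshift matrix is nilpotent, so the exponential series stops after the linear term.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # upshift matrix: A @ [x1, x2] = [x2, 0]

print(A @ A)                       # the zero matrix: A is nilpotent
print(expm(A))                     # [[1, 1], [0, 1]]
print(np.allclose(expm(A), np.eye(2) + A))   # True: the series terminates at I + A
```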

Now, the interpretation. How many people have seen the matrix exponential, by the way? I'm just sort of curious as to how many. Some have. Okay, so. All right.

Now, here, oh, I should say, let me just give you one warning about this, and that's this. If you type exp of A in MATLAB, for example, but actually in many systems, what you'll get is actually not what you think. What you'll get here is actually a matrix that looks like this: it's E to the A one one, E to the A one two, and so on. It's basically exponentiating all the entries.

Now, let's forget the fact that there's probably one out of 100 million possible cases where you'd ever want to do such a thing. Okay? But nevertheless, that's what happens. Just to warn you. So in fact, the way you do this is actually expm of A. And that means the matrix exponential. So that's what people call this.

So just be aware of this; when you start fiddling with this, just be aware of it. And you'll make this mistake. There'll be many ways to check what you're doing. By the way, the two would agree in, I think, almost no cases or something like that. But the worst part is you might get something that's, like, plausible; that's the worst part, so you just have to check and be aware of this. Okay.
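
The same pitfall exists in NumPy and SciPy, so here is a minimal illustration, assuming they are available: np.exp exponentiates entry by entry, while scipy.linalg.expm is the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

print(np.exp(A))     # entrywise: [[1, e], [1, 1]], almost never what you want here
print(expm(A))       # matrix exponential: [[1, 1], [0, 1]]
```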

By the way, the way to compute the matrix exponential: it is not done by any of these methods. Nothing is computing a Laplace transform, I assure you. You'll know a little bit about how it's done soon.

It turns out it's actually not that easy to calculate the matrix exponential. And there's a wonderful paper about computing the matrix exponential. The title is "Nineteen Dubious Ways to Compute the Exponential of a Matrix." It goes through 19 methods that people have used and shows how each one can, in the wrong circumstances with the wrong A, give you, like, totally wrong results and things like that. So that's it. But for paper titles, I thought that was right up there.

Okay, so we'll be able to finish today. And actually, it's very important to know what the meaning of the matrix exponential is; this is extremely important. It's this. So far, it has a very specific meaning. E to the TA is an n-by-n matrix. It maps the initial state of X dot equals AX into the state at time T. So I think of it as a time propagator; it propagates from the initial time to time T. Okay?

Now it turns out, actually, you can work out the following: that X of tau plus T is equal to E to the TA times X of tau, for any tau here. So in fact, the matrix E to the TA propagates a state forward in time T seconds. It propagates X of zero into X of T. But, for example, it will propagate X of 17.3 into X of 17.3 plus T. Okay? This times E to the TA is gonna equal that, because this propagates the state of a linear system forward T seconds.

By the way, with a minus sign it works just as well here. You can check that. It works just as well for a minus sign. So E to the minus A is a matrix that propagates the state backwards in time one second. That's what it means. Okay? So these are kind of basic facts. That's what the matrix exponential means, right? It's gonna mean all sorts of interesting things. And from that, you can derive all sorts of interesting facts about linear dynamical systems, how they propagate forward and backward in time, and things like that.

Okay, so now the interesting thing here is, if you know the state at any time, any one fixed time, you know it for all times, because you can now propagate it forward in time with this exponential and you can propagate it backward in time. So, for example, I can go to some chemical reaction or some bioreactor described by X dot equals AX. I can take a measurement of X at time 12, and then from that I can infer what X of zero was, even if I didn't measure it.

Why didn't I measure it? Maybe because the numbers were so small, the colonies hadn't grown yet, and I could only measure them when they got to the billions or trillions, or something like that. Everybody see what I'm saying here? So in fact, how do you get X of zero if I tell you what X of 12 is? What do you write here?


Instructor (Stephen Boyd):E to the minus 12 A, that'll do it. Okay? So E to the minus 12 A is the matrix that goes backwards in time 12 seconds, okay? So that's what it is.
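
Here is a small numerical check of that forward and backward propagation, assuming SciPy is available; the matrix, initial state, and times are made up for illustration.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.3, 1.0],
              [-1.0, -0.3]])            # example dynamics, for illustration
x0 = np.array([1.0, 0.0])

x12 = expm(12 * A) @ x0                  # propagate forward 12 seconds
x0_back = expm(-12 * A) @ x12            # propagate backward 12 seconds
print(np.allclose(x0_back, x0))          # True: x(0) recovered from x(12)

# e^{tA} propagates any state forward t seconds, not just the initial one:
t = 5.0
print(np.allclose(expm(t * A) @ x12, expm((12 + t) * A) @ x0))   # True
```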

Now we can actually connect a few things up, and that's kind of cool. We looked earlier at a forward Euler approximate state update. Now, the forward Euler approximate state update said: suppose you want to know what X is at time tau plus T. What am I doing?

If you want to know what X of tau plus T is, you'd say, "Well, that's about equal to X of tau plus T times X dot of tau," like that, okay? This requires T small and it's an approximation, so I squigglize these. There we go. It's a new verb. Now, that's an approximation, and it's based on, some people call it, by the way, dead reckoning, because basically you say you're going in that direction, you check your watch, check the elapsed time, and say, "Where are you now?"

We're, like, that bearing times the time, that's where we are. So that's the approximation. Now this thing is A X of tau, and so this is I plus TA times X of tau, like that. So this is an approximate T second forward propagator. It's the forward Euler propagator, is what people would call it. But now we know the exact T second forward propagator.

The exact T second forward propagator is the exponential. And look at this: this thing is merely the first two terms in the Taylor series. Okay? So now you can see forward Euler is basically just the first terms of the exponential series. You could take two and three terms and all that kind of stuff. So that's the idea. Okay.
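
A quick comparison, assuming SciPy is available, of the forward Euler propagator I plus tA against the exact propagator e to the tA; the matrix and step sizes are arbitrary, and the error shrinks roughly like t squared.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])              # arbitrary example matrix

for t in (0.1, 0.01):
    euler = np.eye(2) + t * A            # forward Euler: first two series terms
    exact = expm(t * A)
    print(t, np.max(np.abs(euler - exact)))   # error is roughly t**2 / 2
```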

So let's take a look at this and let's talk about the idea of sampling. There are actually a lot of applications of what you've seen already, just simple ones, immediately. So if someone says, "I've got some measurements of X of T, you know, at different times, but I don't know what it was in between," how would you handle that? In fact, let's talk about that.

You have X dot is AX. Let's make it a bioreactor; we talked about that before. And suppose you make an assay, you measure the thing, at, like, X of 13.1, X of 15, you know, X of 22, like that. And someone comes along and says, "What was the state?" And the state might be, by the way, the volume of different colonies or concentrations or whatever. Okay? And they want to know what that is.

And the first answer is: sorry, we didn't do an assay at T equals ten hours. What do you do? Let's say you measured at eight, too. What do you do? Give me some methods. Give me a method. You know A. You've measured X of 8, X of 13.1, X of 15, X of 22, and I want to get X of 10. Don't worry; so far, the measurements have been perfect. They're absolutely perfect. A is not a lie. What do you do?

Student:You measure [inaudible].

Instructor (Stephen Boyd):Perfect. So here's one. Ready? Reconstruction Formula No. 1, tell me what to write, please. What do I write here?


Instructor (Stephen Boyd):E to the two A. And the comment is: propagate forward two seconds, oh, hours, or whatever we said, whatever the unit is. Right? How about this: you said we could take this one, X of 13.1, E to the what?


Instructor (Stephen Boyd):Okay. And this is propagate backwards, no, no, no, come on. That's not right. This is E to the minus 3.1. Okay, great. I said that before; that reflects on you, you know, not me. So it's the lag between when I write something idiotic and when you correct it.


Instructor (Stephen Boyd):Thank you. I knew that, I was just testing you. Okay. Fine, so we have that. All right. Oh by the way, which of these is better?


Instructor (Stephen Boyd):Hmm?


Instructor (Stephen Boyd):They're what?


Instructor (Stephen Boyd):This one. You like that one. Why?


Instructor (Stephen Boyd):You think the, so we got two people over here saying the former. They like propagating forward. But you, oh, because you propagated forward two hours, is that it?


Instructor (Stephen Boyd):Oh you have the okay.


Instructor (Stephen Boyd):Ooh, okay. All right. So


Instructor (Stephen Boyd):All right. Could you have calculated it from X of 15? Sure, no problem: E to the minus five A times X of 15. Okay, so which of these is better? Well, if there's no noise and A is exactly what you think it is, they're all exactly the same. So this could actually be an assertion here. And by the way, if you calculate these and you get two different answers, it means you're gonna have to do something more sophisticated. Okay?

And just for fun, just given where we are in the course, what would you do if someone gave you all this data? Just a quick thing, quick: what would you do?


Instructor (Stephen Boyd):You might do some least squares, exactly. I mean, first of all, you might propagate all of these to time ten. Okay? If they're, like, all over the map, you'd go back to the person and you'd say, "Can we talk?" Okay, that's what you'd do here.

Now, if they're not all over the map but just sort of, you know, a little bit different, one estimating one thing, one another, and they're not, like, weird numbers varying by factors of ten, if that's the case, that's gonna come out really, really nicely, by the way, on the tape. That was me talking while inserting this thing back into its, okay.

What you might do is take all those things back and then do some kind of least squares fit. That's what you might do. Right? And by the way, that'd be a very, very good method. That would be a perfectly practical method. Actually, methods like that are used plenty. So. Okay.
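
Here is a sketch of both ideas, assuming SciPy is available; the dynamics, sample times, and noise level are all made up for illustration. With perfect data, every sample propagated to time ten gives the same answer; with noisy data, you stack the relations x(t_i) is approximately e to the (t_i minus 10) A times x(10) and solve a least squares problem.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.1, 1.0],
              [-1.0, -0.1]])                 # example dynamics, for illustration
x_true = np.array([2.0, -1.0])               # "true" x(0), used only to simulate data

times = [8.0, 13.1, 15.0, 22.0]
samples = {t: expm(t * A) @ x_true for t in times}    # simulated exact measurements

# Perfect data: every sample, propagated to time 10, gives the same x(10).
estimates = [expm((10.0 - t) * A) @ samples[t] for t in times]
print(np.allclose(estimates[0], estimates[-1]))       # True

# Noisy data: stack x(t_i) ~ e^{(t_i - 10) A} x(10) and solve least squares.
rng = np.random.default_rng(0)
y = np.concatenate([samples[t] + 0.01 * rng.standard_normal(2) for t in times])
G = np.vstack([expm((t - 10.0) * A) for t in times])
x10_ls, *_ = np.linalg.lstsq(G, y, rcond=None)
print(x10_ls)                                          # least squares estimate of x(10)
```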

So we'll quit here and continue next time. And let me say, for those of you who came in late: the midterms actually are graded. Solutions are posted. They'll be available, I guess, if you follow me up to Packard.

[End of Audio]

Duration: 77 minutes