Instructor (Stephen Boyd): Great. It looks like we're on. Let me make a couple of announcements to start. You can turn off all amplification in here. So if you go down to the pad, I can make a couple of announcements here.
First, let me remind you, here we go, that today I'm going to have extra office hours from 1:00 p.m. to 3:00 p.m., and the other important announcement is that next Monday's section is going to be cancelled, so we'll let you rest. I'm sure I don't have to mention this, but of course the midterm is tomorrow, or rather, it's either tomorrow to Saturday or Saturday to Sunday. Your choice. And of course, all the information is on the web. Any questions about the midterm or anything like that before we continue to the material?
Student: Where do we go to pick it up?
Instructor (Stephen Boyd): That's on the website announcement. It's actually Bytes Café, on the ground floor of the Packard building, is where you'll go. I hope it's there. It should be there. Yeah.
Okay, the only other announcement I should make: an alpha tester took the exam. He survived, and he's recovering steadily, so no big deal. Just some extra fluids and some light antibiotics. He'll be fine. He's going to be just fine. And we've got a beta tester taking it now, and as of last night around midnight, he was okay too. And by the way, you can thank them, because they caught a couple of bugs and typos, and we rephrased a few things to make them clearer. So they're suffering for you, just so you know. All right. Any questions? Any other questions? Are you ready? Okay.
Just remember, we put a lot more time into it than you will. Okay, we'll continue.
So we're looking at examples of autonomous linear dynamical systems, basically systems of the form x-dot = Ax. Maybe zoom out a bit here so we can get the whole page in there. There you go.
So we're looking at systems that look like this, and the equation basically just says that the derivative of each component of the state is a linear function of the state itself.
So here's a very standard model. It looks like this. It's a reaction where some species a converts, or decays, to species b which, in turn, decays into species c, and it looks something like this. The first row of A is [-k1, 0, 0], and we can actually work out what that means.
This is basically x1-dot = -k1 x1. Here x1 is the amount of species a present, and this says that the rate at which that amount decreases is proportional to the amount itself. Something like that. So k1 is the reaction rate constant here.
The second row is actually very interesting. As with y = Ax, you should never look at x-dot = Ax and just say, yeah, okay, that's fine. You need to actually understand exactly what everything is here.
So the second row says x2-dot = k1 x1 - k2 x2, and how would you explain it? What is that first term? What is it?
Instructor (Stephen Boyd): That term is minus x1-dot, but how would you explain, just in words, the meaning of this term?
It's a buildup of x2, a buildup of species b, because b is a byproduct of the decay of species a, and the second term is the decay of x2, because some of species b is turning into species c. And the final row is x3-dot = k2 x2, meaning that species c only comes from the decay of species b. That's this bottom row. Okay?
And what if I were to put something like minus k3 in the bottom-right entry, where k3 is another positive constant? It would have a meaning. What would it mean?
Student: x3 decays.
Instructor (Stephen Boyd): Right, x3 decays, and where does it go? To vacuum, to somewhere we don't account for, to the environment. Somewhere else. Everybody see what I'm saying here? So we put a zero there and that's fine.
Let's see, there are a couple of interesting things about this matrix. One is that the column sums are zero. That has a meaning, actually. I'll just mention it briefly; we'll go into this later in much more detail, but let's see what it means that the column sums are zero. (We should restore that zero there.)
Let's try one thing. What is the interpretation of 1-transpose x(t)? Here 1 is the vector of all ones, so 1-transpose x(t) is a sum. How would you say it? It's the total amount of all species in your system, right? All right.
What is d/dt of 1-transpose x(t)? Well, it's 1-transpose times x-dot(t), since x-dot(t) is dx/dt. But wait a minute: x-dot(t) is A x(t), and for this A the column sums are zero, so 1-transpose A is zero. It's zero exactly because the column sums are zero. What does all this mean?
It says that the time derivative of the total amount of material in the system is zero. You know what that says? It says that 1-transpose x(t) is actually equal to 1-transpose x(0). It's constant. So you know what zero column sums mean in x-dot = Ax? It's actually conservation of the sum, conservation of the total. Okay, so there's a name for systems like this. I mention this because it's very important, just as when you see y = Ax, to look at it and understand what every entry, every feature, means. They all have meanings.
In this case, the column sums being zero corresponds precisely to conservation of mass, or material, whatever you want to call it. Okay? That's just an aside.
Well, let's see what happens here. Here's an example where we start with x(0) = (1, 0, 0). So we start with one unit of species a. By the way, you will very shortly know how to actually work out the solution of x-dot = Ax, but here what happens is kind of obvious. x1 immediately starts decaying. In fact, what is the initial slope right here? At t = 0.01, what is the amount of x1 present?
What is it? Well, what is x1-dot(0)?
Instructor (Stephen Boyd): It's minus k1, okay? k1 is one, so the initial slope here is minus one. So if I ask you, after, for example, ten milliseconds, how much of x1 has decayed, the answer is 0.01. That's how much.
So at t = 0.01, x1 is very, very close to 0.99. It's not exactly 0.99, but it's very close. Okay? That's the initial slope.
Now, as x1 decays, x2 builds up, and in fact, if you were to add x1 and x2 together, you would get something that at first is almost constant, but not quite constant, because x2 is further decaying into x3. So as x1 decays, that corresponds to x2 building up.
Now, by the way, as x2 builds up here, once x2 is high, you can see that x1 slows down here. No, sorry, it doesn't. What I just said was wrong, so scratch that. When we get the ability to edit the videos, that'll be fine; I can just cut those things out, and I'll claim I never said them. Okay.
All right, so x2 builds up, and x3 doesn't really start appearing at first, because the only way species c, x3, can go up is for x2 to first build up, and then a significant amount of x2 to decay. And by the way, as you sweep along here and add x1, x2, and x3, what you'd find, of course, is that the sum is constant and equal to one. That's what we just worked out with conservation of mass here. Okay?
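To make this concrete, here is a small Python sketch of the a → b → c model. The rate constants k1 = k2 = 1 are my assumption, chosen to match the initial-slope discussion above, not values stated on the slide; the small-step simulation is just the crudest way to trace x-dot = Ax.

```python
import numpy as np

# Compartmental model a -> b -> c; rate constants are assumed to be 1.
k1, k2 = 1.0, 1.0
A = np.array([[-k1, 0.0, 0.0],
              [ k1, -k2, 0.0],
              [0.0,  k2, 0.0]])

# Column sums of A are zero: that's what gives conservation of mass.
assert np.allclose(A.sum(axis=0), 0.0)

# Simulate x_dot = A x from x(0) = (1, 0, 0) with a small time step.
h, steps = 1e-3, 10_000
x = np.array([1.0, 0.0, 0.0])
traj = [x]
for _ in range(steps):
    x = x + h * (A @ x)          # x(t + h) is about x(t) + h * x_dot(t)
    traj.append(x)
traj = np.array(traj)

# Total amount of material stays constant: 1^T x(t) = 1^T x(0) = 1.
assert np.allclose(traj.sum(axis=1), 1.0)

# Initial slope of x1 is -k1 = -1, so after 10 ms, x1 is about 0.99.
print(traj[10][0])
```

Because 1-transpose A = 0, the sum is preserved exactly at every step of the iteration, not just in the limit of small h.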
I mean, this is a little toy problem. You could do this with undergraduate methods. It's not a big deal. The important part, in all of the things I'm looking at here, is to understand the following. In the same way, least squares is not used to combine four range measurements to get two position coordinates. Least squares methods are used to do things like blend 1,000 sensor measurements to make an estimate of 100 parameters. That's the real thing. So these are just little vignettes.
This is called a compartmental model, and whole books are written on models like this. It's used, obviously, in chemistry. It's actually used in economics, to describe the flow of materials and goods. It's also used in pharmacokinetics, which is where you see this used a lot, where you trace things moving from one place to another.
Some of those models are not linear, but linear ones are sort of the basic ones. What's nice about our abstraction is that, if you look at the source code I wrote to draw this plot, it defines A, and A is this thing, but the same code would work just as well if A were 1,000 by 1,000.
Now, I promise you, if I gave you a compartmental system with 1,000 species, with different things leaking into other things, making multiple byproducts, and things like that, what happens is not obvious at all. Same as least squares: if you give me four range measurements, anybody can make a good guess as to what your position is. If I give you 1,000 samples of a signal, or tomographic projections, and ask you what the image is, there's no way. So, same here. I just mention that.
The next example is actually a discrete-time linear dynamical system. Of course, that's just an iteration. It's just this: x(t+1) = A x(t). For which, by the way, I can write out the general solution right now. No problem: it's A to the t times x(0). There. By the way, this formula even works for t = 0, because the convention is that a matrix to the zeroth power is the identity. So there's the full solution of that, but let's look at an example.
The example here is a discrete-time Markov chain. By the way, probability is not a prerequisite for this course, and these are just examples, to show you that these things do come up. Actually, you really should know it. I can't imagine not knowing about Markov chains. It would be a mistake in pretty much any field I can think of, but okay.
So if you have the background to understand this example, great. If you don't, don't sweat it, but maybe you can get the idea. So here's what happens. You have a random sequence, but the values in the sequence come from a finite set: z(t) can only be 1, 2, 3, up to n. Okay? Generally, these are called states, and we'll look at an example shortly. So the states could be things like: the system is up, the system is down, it's in standby mode. They're also called modes of a system. You could have admission blocking in a network or something like that. The state could also be a queue length: the number of jobs waiting to be processed, or the number of packets queued in a communications system to go out on some outgoing link, for example. So that would be the type of place where you'd see this.
Now what we're given is this. If you know what z(t) is, that's one of these states or modes, you actually don't know what z(t+1) is, but you have a conditional probability. So here's what you know: given that you're in state j at time t, the probability that you're in state i at the next step is given by some number p_ij. I should warn you here that people who do Markov chains use the transpose, and they use row vectors. Okay?
So what you see here is not what you'd see in a whole course on Markov chains; they'll use a different notation. Actually, it's just the transpose. Everything is transposed. Probability vectors are row vectors, and the i and the j are switched. So, for us, p_ij is the transition probability from state j to state i. Why? Well, because that's our standard: if someone walks up to you and says y = Ax, and you ask what a_ij is, you say, well, that's the gain from input j to output i. So we'll keep our standard. They do it the other way around: for them, p_ij is the transition probability from i to j. This is just to warn you.
Okay. So this is a given matrix of transition probabilities, and that means, for example, that for us (not in probability or statistics), the first column is going to be a probability vector. It's a bunch of numbers that add up to one, and it tells you, if you're in state one, the probabilities of where you'll be at the next step.
So, for example, if the first column of P were (0, 0, 1), you would say: if you're in state one, then with 100 percent probability, at the next time step you will be in state three. Okay? If it looked like (0.5, 0.5, 0), this says: if you are in state one at time t, then at the next step there's a 50 percent chance you'll still be in state one and a 50 percent chance you will move up to state two.
By the way, if the state represented some queue length, it would be something like this. It would say that, with a 50 percent chance, the queue length remains one, and with a 50 percent chance it goes up to two, which means maybe a new packet or a new job arriving, for example. Okay.
So the way you analyze these is, you represent the probability distribution of z(t) as an n-vector. As I said, in probability, in statistics, and in OR, people represent these as row vectors. Some people also do it this way.
So you make it a vector, like this: a vector whose entries are nonnegative and add up to one, and it gives you the probability of each state. Okay. Now, if you want to calculate something like the probability that the state is one, two, or three, that's a linear function: it's simply the sum of the first three entries, so you'd have a row vector here times your probability vector.
Now if you simply write out what this is in matrix form, it's nothing more than this. It basically says that the next probability distribution on states is capital P times the current probability distribution on states, and so this is a discrete-time linear dynamical system. So you'd say: for a Markov chain, the probability distribution propagates according to a discrete-time linear dynamical system.
Now, the P here is often, but not always, sparse, and a Markov chain is often depicted graphically, and I'll give you an example. I'll just jump to an example. So here's a baby, baby example, but here it is.
I have three states, and we've attached a meaning, a comment, to each one. State one is: the system is working. State two: the system is down. And state three: the system is being repaired.
This is the Markov chain. Now, of course, you could go through it and look at rows and columns, and figure out exactly what each entry means. These zeros have various meanings, for example. That 0.9 has a meaning. But let's look up here. This is the way you would draw it. So you imagine a particle sitting at one of these states. You flip a coin; actually, you get a random variable that has a 90 percent chance of coming up heads, or whatever. So what this says is: if you're in state one, which means the system is okay, there's a 90 percent chance that at the next step it will also be okay, and a 10 percent chance that it will go into state two, the system being down. If the system is down, there's a 70 percent chance that, at the next step, it will be up and running again, a 10 percent chance it will remain down, and a 20 percent chance you'll actually have to call in somebody and repair it, or something like that.
However, if it's being repaired, this 1.0 says that the repairs are infallible. They always work, okay? And this is not even so silly, because there are some very interesting things you can do. You can take this linear dynamical system and do all sorts of interesting things. You can run it for a long time, to see what happens.
Now, in general, of course, you don't know what's going to happen here. Does anyone have a rough idea of what would happen here, if you run this for a long time? You can just guess. By the way, if you know about Markov chains, don't answer; I want someone who's going on pure intuition. Let me ask you this. If I run this, say, 100 steps, what would you say about where you think the state is?
Let's just answer that question. We'll start it off in state one, so the distribution starts at (1, 0, 0), and I run this 100 times. What do you think the probability distribution looks like? This is pure intuition. I just want intuition, but I claim you have the intuition.
Well, what do you think is the probability that you're in this state? Obviously, I don't want a number to four significant figures here, right? I want a number between zero and one, and, in fact, all you have to do is say: small, large, zero, one?
Instructor (Stephen Boyd): It's small. Okay. So, good, fine, small. And for this one, it's what?
Instructor (Stephen Boyd): It's larger, but maybe still relatively small. And for this one it should be, I don't know. In fact, do you want to guess some numbers? Go ahead and guess the number on that. What is it?
Student: Point nine five.
Instructor (Stephen Boyd): Point nine five? Really?
Instructor (Stephen Boyd): No, no, no, because this basically says that one in ten times it goes here, okay? Actually, if it always went straight back, if this were a 1.0 here, it would be 0.9 exactly, wouldn't it? Because one in ten times it would be here. But the point is, sometimes it actually spends a couple of steps here, and a couple of times it goes over here, so it's got to be less than 0.9, I think. Are you buying this? Yeah.
So I don't know what it is. I'll make up a number: 0.85 or so. Hey, Jacob, do you want to calculate the steady-state probability distribution for this guy? Only if your laptop is on. Is it? Okay. Don't worry, take your time, and we'll come back to it.
In this case, I guess, it's not the left but the right eigenvector. Okay, so you'll be able to see, you'll be able to do this immediately. Within two or three weeks, you'll be able to say very cool things like this.
Someone can take this system and ask things like: when the system goes down, what's the average number of time periods it takes before it comes back up again? That's some number, and you could calculate it. These are incredibly useful. You could ask: what fraction of the time is this machine up and running? That's very important. That will tell you your throughput, and if the machine costs $14 million, it's very important to know what the throughput is, because the throughput is going to be in the denominator.
Okay. All right, so this is the type of thing. By the way, once again, this is a baby problem. We can eyeball it. If I put ten states here, a little queuing system with ten states and maybe 50 or 80 transition probabilities, I guarantee you, you have no idea what the probabilities are going to converge to, and all that. What is it?
Instructor (Stephen Boyd): And point what?
Instructor (Stephen Boyd): Thank you. Great. There you go. So here it is. If you run this for a long time, not even a long time, probably 10, 15, 20 steps will do the trick, the probabilities will converge to this. Okay? These are extremely important numbers in practice. They tell you that, after a little while, 88 percent of the time the system is up and running, .1 percent of the time it's actually down, and .2 percent of the time... what's that? Thank you. It's exactly 10 percent. What did I say?
Instructor (Stephen Boyd): I said .1 percent. Well, you know what I meant. When I say .1 percent, I mean 10 percent. Sometimes. Look, come on, this is a graduate class; that's allowed, so that's fine. In an undergraduate class, you have to distinguish between .1 percent and 10 percent. Not here. You can check, that's in the official rules. Okay.
So these are the numbers, and that's what it looks like. You'll be able to work these out. These are interesting things. So now we know the answer: the system is under repair 2 percent of the time. Now, by the way, you can estimate your repair bill. And you could do things like say, maybe I can upgrade to a fancier machine or whatever, and these probabilities change, and someone can put a price tag on it and ask: how long would it take to pay for itself?
By the way, if that's infinity or a negative number, it probably means you shouldn't buy it or something like that. I guess not a negative number. If it's infinity, you shouldn't buy it. So these would be the types of questions you'd be able to answer. Okay. All right.
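Propagating the distribution is a one-line iteration. Here's a sketch in Python; the transition matrix below is my reading of the three-state example above (column j is the next-state distribution given state j), so treat the exact entries as an assumption.

```python
import numpy as np

# Assumed transition matrix for the lecture's 3-state example:
# state 1 = OK, state 2 = down, state 3 = under repair.
# Column j gives the distribution of the next state, given state j.
P = np.array([[0.9, 0.7, 1.0],
              [0.1, 0.1, 0.0],
              [0.0, 0.2, 0.0]])
assert np.allclose(P.sum(axis=0), 1.0)   # each column is a probability vector

# Propagate the distribution: p(t+1) = P p(t), starting in state 1.
p = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    p = P @ p

print(np.round(p, 3))   # roughly 88% up, 10% down, 2% under repair
```

The limit is the eigenvector of P with eigenvalue one (the right eigenvector, in our column-vector convention), normalized so its entries sum to one.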
The next topic, or next example (these are just examples): let's look at a numerical method for solving x-dot = Ax. Actually, I was involved in something like just this two or three weeks ago. Okay. We have x-dot = Ax, with x(0) = x0. By the way, within about a week you're going to have the exact solution of this. Exact in an analytical sense, and you'll be able to compute it and all that sort of stuff very, very soon. You'll be able to work out the solution of this, and you'll know a lot about it.
But here's a method for approximately solving x-dot = Ax, and the idea is this: you take a small time step h, and the hope is that the state should not change much in h seconds. This is the simplest possible way to get an approximate solution of a differential equation. It's really simple.
Basically it says this. You're at x(t) at time t, that's your current state, and someone says: where will you be h seconds in the future, where h is some small number? Well, a very good approximation is this: where you will be is where you are, plus where you're going (roughly, your velocity), multiplied by the time increment h. Now this is, of course, an approximation. It is not actually exact, except in certain very special cases. The reason is, the instant you move away from x(t), your velocity vector changes, and this approximation instead assumes that your velocity vector stays the same for the whole h seconds.
So this says x(t+h) is about equal to x(t) + h x-dot(t), but x-dot(t) is A x(t), and so this is, basically, I + hA. Did I say A plus hA? If I point to I and say A, you should interpret it as I, unless what I've written is wrong, in which case you should interpret it as A. That's clear, isn't it? Good, okay.
So x(t+h) is approximately (I + hA) x(t), and if we change the indexing, this gives you a discrete-time linear dynamical system, and it means you can actually work out an approximate solution. That's what it looks like: that's the approximation to x(kh). By the way, we could say more here, but I'll keep at this for now.
Now, actually, this method is never used in practice, for a couple of reasons. The first is this. You start at one place and make an approximation of where you'll be at the next step. Then you make an approximation based on that approximation, and so on and so forth, and although this can actually work, the error can and does build up in lots of systems.
Actually, you'll later be able to do a precise analysis of which types of systems the error builds up in, and where it doesn't, and there are also much better methods, but this is just the simplest one. Okay.
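Here is a sketch of the forward Euler iteration x((k+1)h) = (I + hA) x(kh) in Python. The particular 2-by-2 A below is my own illustrative choice, not from the lecture; it's picked because the exact solution is a sine and cosine, so you can watch the approximation error shrink as h shrinks.

```python
import numpy as np

# Illustrative A (an oscillator): x1_dot = x2, x2_dot = -x1.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
x0 = np.array([1.0, 0.0])

def euler(h, T):
    """Approximate x(T) for x_dot = A x, x(0) = x0, with step size h."""
    F = np.eye(2) + h * A            # the one-step matrix I + hA
    x = x0.copy()
    for _ in range(round(T / h)):
        x = F @ x                    # x((k+1)h) = (I + hA) x(kh)
    return x

# For this A and x0, the exact solution is x(t) = (cos t, -sin t),
# so we can measure the error at T = 1 for several step sizes.
exact = np.array([np.cos(1.0), -np.sin(1.0)])
errs = [np.linalg.norm(euler(h, 1.0) - exact) for h in (0.1, 0.01, 0.001)]
print(errs)   # the error shrinks roughly in proportion to h
```

Each step commits a small error, and the errors compound from step to step, which is exactly the buildup described above; that's also why better methods (smaller per-step error) are used in practice.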
So those were just a bunch of examples, to show you that these things do in fact come up. Now let's get one thing out of the way. We are looking at x-dot = Ax. That's a first-order, vector, linear differential equation. It's nothing more. Depending on how you count, it's five or so ASCII characters. Very compact. But what about second order, and third order, and fourth order, and all those sorts of things?
By the way, second-order systems come up constantly in mechanics and dynamics. There, basically everything is second order. Lots of other things are second order too. Actually, for discrete-time systems, you have the same thing. For a lot of things, the next value (I shouldn't call it the state) depends not just on the current value, but on the previous one as well. So that would be a higher-order recursion.
Okay, so let's take a look at this. This is x differentiated k times, so this is a kth-order linear dynamical system, like that. Lucky for you, we can reduce these to first order; otherwise there'd have to be another class after this one. This would just be the first-order one, then you'd have second and third, and it would get very boring anyway.
So here it is. There's a standard trick. By the way, this doesn't work in the scalar case, so this is actually a payoff of taking the abstraction of matrices. When you're an undergraduate and you study the scalar case, hopefully not for too long, and then someone comes along and says, yeah, but I have to satisfy this second-order equation, unfortunately you've got to dig in and solve it separately, right? Because there's no way you can reduce the one to the other, because these are scalars.
Now the cool part is, once you have passed that boundary, grown in sophistication, and overloaded this equation so that these are vectors and A is a matrix, it turns out you don't have to worry about higher-order stuff ever again, because it comes for free at this higher level of abstraction. So this is a very standard method. It works like this.
We're going to take a new variable, where we stack the derivatives. By the way, you cannot call x the state here. I might accidentally say it, but if I do, it's wrong. So you take the vector variable x, and you stack x, its derivative, its second derivative, all the way up to the penultimate derivative, the (k-1)st derivative. That's this guy here. I'm going to call that stacked vector z, and that's the state of this system. Now I want to work out what z-dot is.
So I take z-dot. Well, if I differentiate all of these, that's easy. The first block, differentiated, is just x-dot, but that's the second block of the vector, and so on all the way down to the bottom. When I differentiate the bottom block, I get x differentiated k times, and now I use my formula here. So if you look at this matrix, you will see that it faithfully reproduces this equation.
The first block row, for example, says that x-dot, that's x differentiated one time, is equal to: you multiply this block row by (x, x-dot, x-double-dot, and so on), going across here and down here. It's block multiplication, and you get exactly x-dot, so it's correct. And the bottom block row tells you what x differentiated k times is; that's this thing, okay? By the way, you're going to see matrices like this come up a lot. You should already have some ideas about what this matrix does.
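The stacking construction can be sketched in a few lines of Python. This shows the second-order case, x-double-dot = A0 x + A1 x-dot; the particular 2-by-2 blocks A0 and A1 are made-up numbers just to exercise the construction, not values from the lecture.

```python
import numpy as np

# Illustrative (assumed) coefficient blocks for x_ddot = A0 x + A1 x_dot.
n = 2
A0 = np.array([[-1.0, 0.0], [0.0, -2.0]])
A1 = np.array([[-0.1, 0.0], [0.0, -0.1]])

# State z = (x, x_dot); then z_dot = Z z with a block companion matrix Z.
Z = np.block([[np.zeros((n, n)), np.eye(n)],
              [A0,               A1]])

# Check that the companion structure reproduces the original dynamics.
x, xdot = np.array([1.0, 1.0]), np.array([0.5, -0.5])
z = np.concatenate([x, xdot])
zdot = Z @ z
assert np.allclose(zdot[:n], xdot)                 # top block row: d/dt x = x_dot
assert np.allclose(zdot[n:], A0 @ x + A1 @ xdot)   # bottom block row: the 2nd derivative
```

For a kth-order system, the same pattern continues: identity blocks on the superdiagonal and the coefficient blocks across the bottom row.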
This is called a block companion matrix. You don't need to know the name, but you should already have a feel for what its pattern does, and let me ask you this, just for fun, to make sure. Suppose I showed you a matrix that looks like this, with ones on the superdiagonal, and everything I haven't shown is zero. Tell me what it does. Just describe it. If we looked at y = Ax for this A, can you explain to me, just in words, what y is? Is it an up shift or a down shift? It's an up shift. You take the vector x, like this, and you simply shift its entries up. And what do you do at the bottom? Yeah, okay.
So here's how you would say it. The slang for this on the streets is: you up-shift x and you zero-pad. Up shift is kind of obvious. The zero pad means that when you shift up, you have some spaces where there was nothing. Well, unless you're the kind of person who likes indexing out of the bounds of arrays, which we frown on here, you don't do that, so you zero-pad. That's just what to do when your formula for shifting indexes goes outside the array bounds. Everybody see what I'm saying?
So when you see that matrix, you should say: that's an up shift with zero padding. Actually, when I fill this in with a bunch of matrix blocks, like here, it's really cool. It's an up shift, an up shift, and then, down at the bottom, instead of zero padding, you fill in with a linear combination of all the blocks. I guess padding is usually used to mean filling in with zeros. Okay. Yeah.
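A quick sketch of the up-shift-and-zero-pad matrix, to make the pattern concrete:

```python
import numpy as np

# Ones on the superdiagonal, zeros everywhere else.
n = 4
S = np.diag(np.ones(n - 1), k=1)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = S @ x
print(y)   # [2. 3. 4. 0.] -- entries shift up, bottom entry is zero-padded
```

Multiplying by S moves each entry up one slot and fills the vacated bottom slot with zero, which is exactly the "up shift and zero pad" described above.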
Instructor (Stephen Boyd): Why? Yeah, it would work. I'll show you. Yeah, no problem.
Instructor (Stephen Boyd): So, no, it does work, and that's why I think the whole thing is silly. Let's go back to that second-order equation and say I really want to solve it like that, okay? So we can write it out this way. It's no problem. We write it this way, that's the state, dot equals, and then this is easy, that's zero, one, and then let's see if I can get this right. Maybe that's b and a, times (x, x-dot). Did I do it right? Bear in mind, this reflects on you, collectively, as a class. If I write something down and it's totally and completely wrong, and you just passively sit there and go, yeah, sure, it's fine, then, well, basically I don't care, it doesn't bother me, but it will reflect on you if I say something really stupid and wrong and you don't correct it.
Actually, I once yelled at a class, well, by email, after it was over, because it turns out those class email lists persist, because I found some horrible error in a homework problem or a lecture, and I collectively held the class responsible. I said, you know, you took this class last year; this was completely wrong, and I remember all of you looking at me like, yeah, it's totally clear. I got various responses to that. What's that? What's not right?
Instructor (Stephen Boyd):Here?
Instructor (Stephen Boyd):Down here? Good.
Instructor (Stephen Boyd):This is x dot. Let's do it together, and we'll see, but no, no.
Instructor (Stephen Boyd):You think? No, no, this is what I want.
Instructor (Stephen Boyd):Yeah, the dot is on the whole thing. Here, watch. There we go. No, listen, this is what we want. That's what we want, see. That's good. In fact, better, for your collective honor, what you really want is to raise your hand and say, is that right? And then you want me to go on and give a long story and say, yeah, at first it looks like this should be a minus, but in fact, it should be a plus, and then it can turn out wrong. Then I'm the one that looks bad and not you; your honor is preserved, because you protested. Okay. No, good, so keep being honest. All right.
Back to the question, why can't we do this? Well, we can do it; it's fine. Here's the problem: when you first learned about this, no one had told you about matrices yet, or at least that was the case. So that's the problem. It's basically why people should be taught linear algebra a lot earlier than they are now, because it just short-circuits a lot of really stupid and painful and idiotic material, such as, for example, multiple, multiple weeks studying second order equations.
So, but I'll stop. So a block diagram. By the way, when you see z dot equals A z, and A is big and has got structure crying out at you, you should have an overwhelming, uncontrollable urge to draw a block diagram. So here's the block diagram, it's this, and you can check that this is right.
By the way, you can see this shift business immediately here, because when we're actually calculating A z, and we're shifting z, we're actually getting the derivative. So that's right; that corresponds exactly. By the way, there's a phrase for this. It's called a chain of integrators. You'll find it very frequently. It comes up in lots of things, and it's quite beautiful.
It says something like this. By the way, if you look here, you'll see that all the arrows go down, so you'll see that, in fact, that's just a chain of integrators. In fact, you can even say, if that's the input signal, this is it integrated once. That's double integrated, triple integrated, and so on. That's what these are, okay?
It says take these things, which are integrated, form a linear combination, and feed that back into the input, so that's what this picture is. I mean, not that this matters, but, if these were z minus one blocks, this might look like some horrible filter you might have encountered in some stupid class on signal processing, right? No? Yes? I don't know. Are you still tortured with these signals? This has some name. What do they call this? It's like an IIR direct form blah blah blah? Is that it? It's what?
Instructor (Stephen Boyd):No, no, no. That would be FIR. No, I'm sorry. If I did this, and then pulled this off, that would be FIR, but with this guy here, that's an IIR. By the way, if some of you don't know what we're talking about, you're very lucky. Okay. All right. You should aim to keep it that way. All right.
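The "up shift and zero pad" companion matrix, and the chain-of-integrators simulation it drives, can be sketched in a few lines of plain Python. This is a hypothetical illustration, not from the course materials; the forward Euler integrator and the harmonic-oscillator test case are my own choices.

```python
import math

def companion(coeffs):
    """Companion ("up shift and zero pad") matrix for
    y^(n) = coeffs[0]*y + coeffs[1]*y' + ... + coeffs[n-1]*y^(n-1)."""
    n = len(coeffs)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1.0                  # the up-shift part
    A[n - 1] = [float(c) for c in coeffs]  # last row: fill in the coefficients
    return A

def euler(A, x0, dt, steps):
    """Integrate x' = A x with forward Euler (a chain of integrators)."""
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        x = [x[i] + dt * sum(A[i][j] * x[j] for j in range(n))
             for i in range(n)]
    return x

# y'' = -y (harmonic oscillator): state (y, y'), y(0)=1, y'(0)=0, so y(t)=cos t
A = companion([-1.0, 0.0])
x = euler(A, [1.0, 0.0], 1e-4, int(math.pi / 1e-4))
print(round(x[0], 2))   # y(pi), close to -1
```

The companion matrix here is exactly the "write the second order equation as a first order system" move from the board: ones on the superdiagonal, coefficients padded into the bottom row.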
Mechanical systems. There's a huge number of these; this is a perfect example of a higher order system. Again, by the way, this is a beautiful example where, once you have matrix notation, a lot of stuff works out very, very cleanly and beautifully.
So q is a vector of generalized displacements in some system. That means, basically, a displacement in a certain direction; q three, for example, might be the horizontal displacement of some point on a system, but it can also be an angular displacement or something like that.
So you get m q double dot, that's the mass matrix times the acceleration vector, plus d q dot, that's a damping matrix, plus k q. And this is nothing more than, here, I'll draw it, the matrix analogue of this: there's a spring with a stiffness constant k, a mass here with mass m, and then, of course, we have the famous dashpot. This is one of my favorite things. It's damping, but it's drawn like that, and it has been since the early 19th century.
I don't know, have people seen this? A figure like this? You have. Did you ever actually, except for maybe a shock absorber in a car, have you ever, I mean, so you don't see these? In the early 19th century, when they built all these machines, actually, even earlier, they actually added damping, and there would be things like this. They would add a little piston, a little nozzle, and some oil or something like that that would be circulating, and it looked just like that, and it's called a dashpot. Don't ask me why, but that's the history of it, so there it is, and that's d.
And what this is, is, you know, this is basically f equals m a. That's the force, and the force is equal to minus d q dot minus k q. Now k, we'll go back to this one, simply multiplies the displacement, so the units of k would be newtons per meter, so it's a stiffness, and d is in newtons per (meter per second), okay? That's the unit of d, and mass is kilograms, say. Okay.
Now what's actually kind of cool about this is it's the same thing. There's your high school version, here, and all you do is you capitalize things, and this describes a lot. I mean, a whole lot. Same number of ASCII characters as you saw in some physics class for children, okay? This thing describes a whole lot. The dimension of q can be hundreds, in fact, typically hundreds or thousands, and then this would be, for example, a model of a bridge undergoing small motions, or a building or something like that, and that would be described by this.
So m is called the mass matrix, k is the stiffness matrix, d is the damping matrix, and you do the same, this so-called phase variable trick that we just did. You take the state x to be q and q dot, so you get positions and velocities, and you have x dot is q dot and q double dot, but q double dot I can get from here by putting m inverse through here, and I get this. And I think this was maybe on Homework 1 or something anyway, except you didn't know about it then.
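The phase variable trick can be sketched for the simplest case, a single mass-spring-damper rather than a thousand-node structure. The numbers m, d, k and the forward Euler loop below are made up for illustration; M inverse is just 1/m in the scalar case.

```python
# Single mass-spring-damper: m*q'' + d*q' + k*q = 0.
# With state x = (q, q'), the block-matrix recipe gives
#   A = [[0, 1], [-k/m, -d/m]]   (M^-1 is just 1/m here).
m, d, k = 2.0, 0.5, 8.0
A = [[0.0, 1.0],
     [-k / m, -d / m]]

# Release the mass at q = 1, at rest, and step forward with Euler.
x = [1.0, 0.0]
dt = 1e-4
for _ in range(int(20.0 / dt)):          # simulate 20 seconds
    q, v = x
    x = [q + dt * v,
         v + dt * (A[1][0] * q + A[1][1] * v)]

print(abs(x[0]) < 0.2)   # the damping has pulled the mass back toward q = 0
```

For the full vector case, the same recipe gives the block matrix with zero and identity blocks on top and minus M inverse K, minus M inverse D on the bottom.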
So there are some actually rather interesting things here, and just for fun, I would like you to explain to me, what is the meaning of k one two? The units are newtons per meter, and I want you to tell me, what is k one two in a mechanical system? What is it?
Instructor (Stephen Boyd):It has a very specific, physical meaning. Wanna help us? No? You gotta ...
Instructor (Stephen Boyd):That is it exactly. It's something like a cross stiffness. There probably is some name for it, like a trans-stiffness or something. I mean, I know what you'd call it in circuits, but I don't know what you'd call it mechanically, but something like a trans-stiffness. It says, basically, that when you displace node two in some structure, you feel a force at node one, and it's proportional to this, so this is the trans-stiffness. It's the number of newtons of force you feel at node one when node two moves one meter. That's what it is.
Whereas, in general, k one one, for example, is basically what you think of as a stiffness. It's basically, if you grab node one and push it a meter, that might not be the right unit, but I guess you can grab the top of a big tall building and move it a meter, and it pushes back with some number of newtons. That's k one one. Okay.
So we should also mention linearization as a general source of autonomous linear dynamical systems. Some systems really are well modeled by x dot equals a x; there are a number of cases where that's true, and there's a bunch of others where it's less true.
In general, you get things that look like this: x dot is f of x. That's an autonomous, time invariant, differential equation, and this comes up all the time. This could describe an economy that's propagating, it could describe the dynamics of a vehicle, or a circuit. Essentially all circuits would have a description like that.
And, here, f is simply a function from R n to R n, and, in fact, the meaning of f is simple. It basically maps where you are to where you're going. It's a vector field. Okay.
Now, an equilibrium point of a general, time invariant, differential equation is simply a point in state space where f is zero, and what that means is interesting. It says, if you're at one of these points, your derivative is zero, and that means you stay there. So it says that the constant solution x e satisfies the differential equation, because x e dot is zero, x e is a constant, and, on the right hand side, you plug in f of x e. That's zero too. So an equilibrium point corresponds to a constant solution of the differential equation. That's an equilibrium point.
Now suppose you're near an equilibrium point, but not exactly at it. Then you write, x dot, well, that's f of x, but you're near this equilibrium point, so we'll use a first order Taylor expansion. By the way, to connect to a discussion we had last week, you'd probably do better if you knew the approximate range x wiggles around in. You might, instead of using this matrix, use one that you got from sort of a particle filter method, which would be to evaluate a bunch of points and then fit a least squares model, but let's just move on.
So we'll just take the derivative, the Jacobian here, and this says that your f is about equal to the f where you are plus the Jacobian where you are multiplied by the deviation from where you are, like that. Okay?
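The Jacobian here can be formed numerically by finite differences. This is a minimal sketch of my own, assuming f maps R^n to R^n as above; the forward-difference scheme and the toy vector field are illustrative choices, not anything from the lecture.

```python
def jacobian(f, x0, h=1e-6):
    """Forward-difference Jacobian Df(x0) of f: R^n -> R^n."""
    n = len(x0)
    fx0 = f(x0)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x0)
        xp[j] += h                          # perturb one coordinate
        fxp = f(xp)
        for i in range(n):
            J[i][j] = (fxp[i] - fx0[i]) / h
    return J

# Toy vector field with an equilibrium at the origin: f(x) = (x2, -x1 - x2)
f = lambda x: [x[1], -x[0] - x[1]]
J = jacobian(f, [0.0, 0.0])
print([[round(v, 3) for v in row] for row in J])   # [[0.0, 1.0], [-1.0, -1.0]]
```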
Now, this is zero, right here, like that. This is x dot, but I could just as well write it as x dot minus x equilibrium dot, because x equilibrium dot is zero, so I can do that. A traditional term for that is delta x. That's, by the way, to be interpreted as a single token. The question?
Instructor (Stephen Boyd):I didn't say that.
Instructor (Stephen Boyd):What?
Instructor (Stephen Boyd):That was true. What I said was true this time, really. What I said was, if you start at an equilibrium point, you will stay there, period. That is a true statement. You're getting onto our next topic. The question you're asking is exactly what we're gonna start looking at very soon: what happens if you're not exactly at the equilibrium? What if you're a little bit off? And there are, roughly speaking, two dramatic things that could happen. Actually, there are three, qualitatively.
One is that, if you're a little bit off, you could actually start moving back towards the equilibrium point. That would be stability.
Another one is that, if you're a little bit off, you start veering away in the wrong direction. You move farther away. That would be an unstable equilibrium point.
And then there's weird stuff in the middle, where, for example, it could just sit there happily and stay nearby. That's marginal stability. Now, had I made not a mathematical statement but a practical statement: in general, if I have a mechanical system in front of me here, you won't actually physically observe a system sitting in an unstable equilibrium, for obvious reasons, because the derivative, it's not really this, it's this, plus something like w of t, where w of t is tiny little noises acting on it. It really doesn't matter what they are. They could be just thermal noise or minor things.
If it's an unstable equilibrium and there's a little bit of extra noise here, you move off immediately. Once you're off, you start diverging, and you don't stay there. That was a very long and weird answer, but I go back and I say that what I said was correct. If you start in an equilibrium position, you stay there.
Now we're gonna talk about what happens if you start very near an equilibrium point. So, here, delta x is this deviation, and you can write it this way: delta x dot is D f, that's the Jacobian, times that, and that is exactly what we have been calling an autonomous, linear, dynamical system, right?
By the way, a lot of people get bored with the deltas, and they drop them. So, for example, if you ask somebody who is studying aircraft dynamics, they'll just say, here's the system, and you'll say, what's x, and they go, well, that's the roll angle, that's the roll rate, that's the angle of attack, the angle of attack rate, and all that kind of stuff, and you'll say, really? And they'll say, well, no, no, these are really the deviations around level flight of a 747 at 40,000 feet and 580 miles an hour, something like that. So a lot of people just drop these.
I guess in electrical engineering, what do you call these little delta x's? It's called small signal, right? What do you call an equilibrium point?
Instructor (Stephen Boyd):Bias. So you call it a bias, I think, right? Do they ever call it an equilibrium point? I don't think so. You have a little transistor circuit, and you figure out, they call it the DC operating point? That's a really old one, that there. So you can call it DC operating point or bias condition. And what do you call it in Aero Astro? I know there's people in here who are in that department. Isn't it called the trim condition? I think it's called the trim condition, but anyway. Someone will correct me if I'm wrong.
So all sorts of fields have their own names for an equilibrium point and things like that, but it's just the equilibrium point. That's sort of the high language, to describe this as an equilibrium point. These other ones are all just dialects. Okay.
So when you approximate the right hand side of a differential equation, this is like a forward Euler approximation. If you just approximate a function, you can say intelligent things about it like, well, if delta x is smaller than such and such a thing, your error is no more than 3 percent. You can make specific claims.
When you approximate the right hand side of a differential equation, you might be in trouble, because when you look at the trajectory, you're really building approximations on approximations. You're really saying, where am I going right now? And I go, well, you're going about in that direction; that's what D f of x e times delta x is. And then you go, great, so you step over here, and you say, where am I going now? And I go, well, you're going about in that direction. So the point is, you're building up approximations on approximations on approximations, and so you might be lucky and you might not be.
So the best verb to describe the way delta x dot equals D f of x e delta x approximates the actual trajectory would be hope, and we'll talk about that a bit later today.
Okay. So let's look at an example, a pendulum. So we have a pendulum here, hanging at an angle of theta, and that's gonna put a torque on it of minus l m g sine theta, so that's the torque. G is nine point eight meters per second squared, or whatever gravitational acceleration is, and that's the rotational inertia, so this is basically rotational inertia times the angular acceleration is equal to the torque on it. And the minus means it's a restoring torque. It means that if you're this way, that appears to be how I drew theta positive, the torque actually twists it back the other way, you can't quite see that. So that's a restoring torque here.
Now we can write that as a first order differential equation, non-linear, this way: x one dot is x two, x two dot is minus g over l sine x one, so it looks like that, okay?
Now, the first thing you do when you see something like this is you analyze the equilibrium points. I mean, unless it's obvious, unless someone gives it to you and says, we're interested around this point. But here, let's figure out what the equilibrium points are.
An equilibrium point says that this, f of x, here, vanishes. Well, if this vanishes, x two is zero. By the way, x two is the angular velocity, so that says, at an equilibrium point, the pendulum's not moving. The second one says g over l sine x one is zero. That means that x one is a multiple of pi, and so that means, in fact, there's an infinite number of equilibrium positions, okay?
So you could have zero, and that's pendulum down, like that. You could have pi, and that's pendulum up. You could also have two pi, but that's basically the same as pendulum down again, so it's kind of silly, but it's an equilibrium position. Okay.
Now, actually, we're gonna get to your question about stability and things like that. You probably have a very good idea of how this works.
If a pendulum is hanging down, and you poke it a little bit and let go, it will just oscillate. There's no damping in this, so it'll just oscillate forever. On the other hand, what happens if you put a pendulum straight up, and then maybe just knock it a little bit? What would happen, with no damping? I want you to integrate the differential equation by intuition. Anyway, you know what happens. What happens? What?
Well, it goes down here. It has a high velocity at the bottom. In fact, at the bottom, its potential energy is as low as it can get, so all of the potential energy up here has now been converted to kinetic, and it's moving fast, and then it goes all the way up to the other side. It depends how I hit it and all that kind of stuff, but if I hit it at a velocity like that, it will arrive with just enough to keep going and do it again, and it'll just oscillate like that. Right?
Had I done something like released it stationary at 89 degrees from horizontal, one degree from vertical, what would have happened is it would've gone like this, gone all the way around, slowed way, way down, got to one degree on the other side, stopped for an instant, and then gone back, and it would just keep making a long oscillation like that, okay? So that's what would happen.
So let's look at the linearized approximation near these two equilibrium points. If you linearize near the equilibrium point, all I have to do is go back over here and take the Jacobian of this. So I take the partial derivative of the first row with respect to x one, that's zero. The partial derivative with respect to x two, that's one, here, and I fill in this matrix, and at the bottom, I take the partial derivative of this thing with respect to x one, that turns the sine into a cosine, and I plug in x one equals zero, and I get minus g over l over here, like that, and then the partial derivative with respect to x two, that's zero, and I get this.
Now, you'll know soon enough that this, in fact, defines an oscillation, and this one would be a pretty good approximation of what actually happens.
By the way, let's calculate the linearized system near x e equals pi. So let's do the pendulum up. In this case, nothing happens up here; it's the same. That's a zero and a one. And what'll happen here is, I take the derivative again, it's cosine, but I plug in pi now, and I'm gonna get the following: plus g over l, and that's it. That's the difference between the two linearized versions of a pendulum in the up and down positions.
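That sign flip in the lower-left Jacobian entry can be checked numerically. This is a hypothetical sketch, assuming g = 9.8 and l = 1; the finite-difference step h is an arbitrary choice.

```python
import math

g, l = 9.8, 1.0
f = lambda x: [x[1], -(g / l) * math.sin(x[0])]   # pendulum vector field

def a21(theta_e, h=1e-6):
    """Lower-left Jacobian entry d(x2 dot)/d(x1) at equilibrium (theta_e, 0)."""
    return (f([theta_e + h, 0.0])[1] - f([theta_e, 0.0])[1]) / h

print(round(a21(0.0), 3))        # pendulum down: about -g/l (restoring)
print(round(a21(math.pi), 3))    # pendulum up: about +g/l (anti-restoring)
```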
Now, this is kind of interesting, because this one corresponds to a restoring torque. That's what this is. If you simply work out what it means, it means that the second derivative is proportional to your displacement, but with a negative sign. So however you're displaced, the torque is a restoring torque.
Do you see that? That is not a restoring torque. I'll just say it in English. What's the opposite of a restoring torque? That is a not restoring torque. That's what this is, meaning that if you move off, it puts a torque on it, but a torque that exacerbates your deviation, okay?
And so it actually all makes sense. This predicts exactly what you pointed out, what happens if you're near there. It tells you, if you're at the bottom and you move a little bit, it's actually gonna have a restoring torque on it. Up here, you're gonna have a torque that pushes you away.
Okay, so this brings up the question, which we will look at later, I think, actually, I can't remember, it might be in a different class: does linearization work? And the basic answer is yes, but with some footnotes. You have to kind of sign a release when you do a linearization; there are some conditions.
So here it is. The answer is this. A linearization usually, but not always, gives a good idea of the system behavior near an equilibrium point, and to give an example where it fails, here's one. Let's take x dot equals minus x cubed. It's a scalar differential equation. By the way, that's sort of restoring. If x is positive, it says your derivative is negative; it pushes you down.
But it's quite interesting what this looks like, and forget the analytical solution. This is one of the 13 differential equations you can actually solve analytically, but that's not relevant. It's much more important to understand what this says.
So x dot equals minus x cubed says this. If x is big and positive, what is x dot? If x is big and positive.
Instructor (Stephen Boyd):Okay, really big and negative. I like your answer very, very much, except for one thing. Let's be a little more precise about it. Ready? Instead of really big, I would say it's really, really big and negative. You know why? Because it was big cubed. So big is big, really big is big squared, and really, really big is big cubed. So your answer was right, but a slightly more correct answer would be: if x is big and positive, x dot is really, really big and negative. So it means if you're big, you are shooting towards the origin very quickly.
What happens as you approach the origin, as x gets small? What is x dot?
Instructor (Stephen Boyd):What is it?
Instructor (Stephen Boyd):Thank you. If x is small and positive, x dot is really, really small. So we can already predict what this differential equation does. Compared to x dot equals minus x, which gives you a solution e to the minus t, we can say, this thing, when x is big, gets smaller way faster than an exponential. It shoots towards the origin as x passes through one. Once x gets small, this thing decays way slower than an exponential. Okay? So we got the qualitative behavior a very simple way.
If x is negative, the story repeats, and the point is, you're always moving towards the right place. Okay. And indeed, the solution looks like this. It falls like one over square root t, which is very slow.
Okay. Now let's flip the sign and study z dot equals z cubed. Here what happens is it's the same story, except, if you're big, your derivative is positive and it's really, really big, so once you're big, you really, really accelerate upwards. Okay?
So this one, you can predict just by looking at it, is gonna be wildly unstable. Actually, this is so unstable it has a phenomenon called a finite escape time. What actually happens is the solution goes like this, and at a finite time, it just goes to infinity. Okay? Which is a fairly dramatic form of instability, which, by the way, we haven't formally defined. We will later. Okay.
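Both behaviors can be seen by naively integrating the two equations. A hedged sketch of my own: forward Euler with a fixed step, and an arbitrary threshold of 10^6 standing in for "infinity"; starting from 1, the exact solutions are x(t) = 1/sqrt(1 + 2t), which decays slowly, and z(t) = 1/sqrt(1 - 2t), which escapes at t = 1/2.

```python
# x' = -x^3 decays slower than any exponential near 0;
# z' =  z^3 blows up at the finite escape time t* = 1/(2 z0^2) = 1/2.
dt = 1e-5
x, z = 1.0, 1.0
t, t_escape = 0.0, None
while t < 2.0:
    x += dt * (-x ** 3)          # always pushed toward the origin
    if t_escape is None:
        z += dt * z ** 3         # pushed away from the origin
        if z > 1e6:              # call this "escaped"
            t_escape = t
    t += dt

print(round(x, 2))               # exact value is 1/sqrt(5), about 0.45
print(round(t_escape, 2))        # about 0.5
```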
Now that we know how the system really behaves, let's look at the linearization. If you linearize x dot equals minus x cubed near zero, here's what you get: delta x dot equals zero. Why? Because you say, what's f of x when x is near zero? And the answer is, it's really, really small, but someone says, yeah, but to first order, what is it? Really, really small is zero, so it's this.
So the linearization would predict that x is sort of constant, actually, for this one and for this one, so, in a sense, neither is right. Neither linearization predicts the correct long-term behavior, but if we were to zoom way in, way down here, you'd actually see that both of them are correct in the following sense. For short times, they both give excellent predictions, because both the wildly unstable and the stable system, when x is small, are in fact moving really, really slowly near the origin. One is moving really, really slowly and increasing, and that's the one that, at some point, is gonna get big and then go through a finite escape time. The other one is moving really, really slowly towards the origin, and it's just gonna keep going and, very slowly, move to the origin.
However, most of the time, linearization makes good predictions. Later, we'll find out exactly when linearization makes a good prediction.
Another version of this is linearization along a trajectory, as opposed to linearization around an equilibrium point. This would come up if you're designing a circuit, or if you are looking at something like vehicle stability or vehicle dynamics. You wanna find out how a vehicle does: what happens if there's a wind gust under an airplane, or a wind shear or something like that, and it's off a little bit.
By the way, it would also come up in bigger things, like a big queueing system or network. So you'd take a big network, and you'd say, what happens if, all of a sudden, 10,000 packets arrive at that node destined for this one? That's supposed to be a small number, right? But whatever. You could actually work out the changes in the queues and everything. Linearization would work quite well.
But now we're gonna talk about linearization around a trajectory, which goes like this. I have a trajectory, and now I'm actually gonna consider a time varying differential equation. So I have x dot equals f of x and t, and I have a trajectory, something that actually satisfies that differential equation.
So this could be, for example, the dynamics of a rocket or something like that. It doesn't matter. So that's what it does.
Now, in fact, this could be some sort of proposed or calculated trajectory; it doesn't matter. And what you want is to take another trajectory, which is near the original one. I don't wanna be vague about what that means. It means, basically, at all times, you're never too far away. And we wanna work out what happens now.
So here you have d d t of the difference, that's x minus x trajectory. That's f of x, t, minus f of x trajectory, t, and that's about equal to the Jacobian of f with respect to x, evaluated along the trajectory, times x minus x trajectory, and this gives you a time varying, linear, dynamical system. It looks like that. And that's called a linearized or variational system along a trajectory, and this is constantly used, for example, to evaluate trajectories.
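The variational system can be checked numerically on the pendulum from before. This is a sketch with assumed values g = 9.8 and l = 1, an initial perturbation of 10^-3, and forward Euler throughout; none of the numbers come from the lecture. It integrates the nominal trajectory, a truly perturbed trajectory, and the variational state side by side.

```python
import math

g, l = 9.8, 1.0
f = lambda x: [x[1], -(g / l) * math.sin(x[0])]   # pendulum vector field

def Df(x):
    """Jacobian of f, evaluated along the trajectory."""
    return [[0.0, 1.0], [-(g / l) * math.cos(x[0]), 0.0]]

dt, eps = 1e-4, 1e-3
x  = [1.0, 0.0]        # nominal trajectory
xp = [1.0 + eps, 0.0]  # true perturbed trajectory
d  = [eps, 0.0]        # variational state: d' = Df(x(t)) d

for _ in range(int(2.0 / dt)):
    A = Df(x)                                      # time varying A(t)
    d = [d[0] + dt * (A[0][0] * d[0] + A[0][1] * d[1]),
         d[1] + dt * (A[1][0] * d[0] + A[1][1] * d[1])]
    fx, fxp = f(x), f(xp)
    x  = [x[0] + dt * fx[0],  x[1] + dt * fx[1]]
    xp = [xp[0] + dt * fxp[0], xp[1] + dt * fxp[1]]

mismatch = abs((xp[0] - x[0]) - d[0])
print(mismatch < 5e-4)   # the variational system tracks the true deviation
```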
Here you have an idea like stability for an equilibrium point. You'd say, I just calculated a trajectory for a vehicle, and then you'd ask a question like this. What if you get slightly off? What if there's a little wind gust, and you're slightly off? What will happen? There's the trajectory you want, like this, and then the thing that has been blown off, so the deviation is small, and the question is, will the trajectories diverge? That would be one possible behavior. If they diverge, by how much will they diverge before this goes where it's supposed to go? Or will the trajectories converge, for example? That would be something like stability, and we'll see things like that.
So here's just a classic example: a linearized oscillator. An oscillator is a nonlinear dynamical system which has a T periodic solution. So that's an oscillator with frequency one over T, and there's a solution which is T periodic, like this.
Well, the linearized system is delta x dot equals A of t delta x, and A of t here is actually periodic, because it's the Jacobian of this thing plugged in along the trajectory. By the way, there's a whole field for studying perturbations of periodic systems. It comes up all the time. It's called Floquet theory. You don't have to know this; this is just for fun.
So here A of t is T periodic, and so you have a T periodic linear system, and you would use this to study all sorts of things, and I'll give you an example. One that's actually quite important would be in circuits. So you might design, for example, an oscillator. It could be an LC oscillator, or it could be a ring oscillator, or something like that. If you don't know what I'm talking about, it really doesn't matter for the purpose of this example. You might build a ring oscillator, and then you'd ask a question like this. You'd say, what happens in this thing if there's thermal noise? Or what if some other circuit on the chip draws a lot of power and the supply voltage to the oscillator drops 30 mV, for example?
That's gonna knock you off this trajectory, and now, let's imagine the trajectory sort of looks like this, right? You're going around like that. You get knocked off, and one of several things could happen. You might converge back in to this trajectory, or you might diverge. If you diverge, it's what we call a nonfunctional oscillator, right? Although you could ask interesting questions like this: how big a hit can you take here and actually reconverge to the solution? Generally speaking, real oscillators will recapture from a huge range, but you can ask all sorts of cool questions like, when you come back, the time it took you to go around changed a little bit; that's called timing jitter, and so you might ask, how much does a 30 mV step in VDD affect the timing jitter? And, in fact, that would be exactly analyzed by a linear dynamical system like this.
But is there anyone who knows what I'm talking about? Because, if not, I'll just stop talking about these things. That's no. For those of you watching this on TV, that's a no. Okay, fine. So you don't care about circuits, right? No. There we got an actual, explicit response, which is people shaking their heads no. Okay, fine. No problem. I'm not wedded to them either.
Okay, so this finishes up; maybe this is a good time to quit. Does anybody have any last questions about the midterm? I'll take that as a no. So have fun. We'll see you tomorrow, or we'll see some of you tomorrow.
[End of Audio]
Duration: 74 minutes