MAIN POINTS
A limit cycle is an isolated closed trajectory: the trajectories neighboring it are not closed. It can be stable (attracting), unstable, or even half-stable. Limit cycles cannot exist in linear systems, because closed orbits of a linear system are never isolated (any closed trajectory can be rescaled inward or outward to give another one). 7.1 gives examples. The first example shows all trajectories attracted to a closed orbit on the unit circle. The second example shows a limit cycle that is not a circle. 7.2 shows that closed orbits are impossible in gradient systems, and this is used to show that a given system has no closed orbit. One way this is done is to write an integral for the change in energy over one supposed periodic solution, and show that it is nonzero, reaching a contradiction. Liapunov functions (energy-like functions that decrease along trajectories) can also be used to rule out periodic solutions. Otherwise, Dulac's criterion can be used: find a function g such that the divergence of g times the vector field has one sign throughout a simply connected region.
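If the first example is the standard one in polar coordinates, r' = r(1 - r^2), theta' = 1 (an assumption on my part; the book may write it differently), a quick numerical sketch shows trajectories being pulled onto the unit circle from both sides:

```python
def simulate_r(r0, dt=0.01, steps=2000):
    """Euler-integrate r' = r(1 - r^2). The angle theta decouples, so the
    radius alone measures the distance from the limit cycle at r = 1."""
    r = r0
    for _ in range(steps):
        r += dt * r * (1 - r**2)
    return r

# Trajectories starting inside and outside the circle both flow onto it:
inside = simulate_r(0.1)
outside = simulate_r(3.0)
```

Both values end up essentially at 1, which is exactly what "isolated and attracting" means here: nearby trajectories spiral onto the cycle instead of being closed themselves.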
CHALLENGES
The first two sections were quite short and light on math. Thus, I consider the third:
In Example 7.2.2, they use an energy function... when would we not be able to construct a function like this? When it's impossible to show that the integral is nonzero? Is there a typical case where we would use Liapunov functions instead? Is there a rule for deciding which method to use to show that a system has no isolated, closed orbit?
REFLECTIONS
He says that Liapunov functions usually require "divine inspiration" to construct. If we are not prone to divine inspiration, should we still remember their use in this respect?
Monday, November 17, 2008
Monday, November 10, 2008
Tue 11/11
MAIN POINTS
6.5 states Newton's law, then defines potential energy within the equation of motion to show that the total energy is conserved. This makes it a conservative system, and the text shows that a conservative system cannot have attracting fixed points. Ex. 6.5.3 shows the energy surface, which seems to be a convenient way of generalizing the potential diagram to 3 dimensions.
Reversible systems work the same when time runs forward as when it runs backward... more precisely, they are invariant under t -> -t and y -> -y. Example 6.6.1 is crazy looking. We are given more terminology with names about trajectories and saddles (could we possibly get a handout that distinguishes all of the different trajectory/saddle terms we've encountered so far?)
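To see conservation concretely, here's a sketch that integrates the double-well system x'' = x - x^3 discussed in 6.5 (assuming that's the system; the leapfrog scheme below is also my own choice, not the text's) and checks that E = (1/2)v^2 - (1/2)x^2 + (1/4)x^4 stays constant along the trajectory:

```python
def leapfrog(x, v, dt, steps, force):
    """Kick-drift-kick leapfrog integration of x'' = force(x);
    this scheme is known for keeping energy drift bounded."""
    for _ in range(steps):
        v += 0.5 * dt * force(x)
        x += dt * v
        v += 0.5 * dt * force(x)
    return x, v

# Double-well system x'' = x - x^3, i.e. potential V(x) = -x^2/2 + x^4/4.
force = lambda x: x - x**3
energy = lambda x, v: 0.5 * v**2 - 0.5 * x**2 + 0.25 * x**4

x0, v0 = 0.5, 0.0
E0 = energy(x0, v0)
x, v = leapfrog(x0, v0, dt=0.001, steps=10000, force=force)
drift = abs(energy(x, v) - E0)   # stays tiny: energy is conserved
```

The marble really does keep rolling: the trajectory traces out a level curve of E on the energy surface instead of settling onto an attracting fixed point.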
CHALLENGES
I don't understand the concept of the homoclinic orbit... What does the text mean when it says "centers are more robust when the system is conservative"? (p. 163)
On p.165 they say curves "sufficiently close to the origin" are closed. I understand the logic based on symmetry... but is there a closed curve for every reversible system? Do you never see completely divergent behavior?
REFLECTION
Where do reversible systems show up in the real world? Potential energy/velocity is the main example of a conservative system... in this idealized model, does this mean a trajectory will always be moving, because no attracting fixed points are allowed? So a marble would always be rolling around the energy surface...
Wednesday, November 5, 2008
Thu 11/6
MAIN POINTS
6.0 states that we will begin studying nonlinear systems. (That was fast...) 6.1 defines a vector field on a plane, where each point on the plane has a velocity vector. A phase point traces out a solution in the vector field. Because finding solutions analytically is so difficult, we focus on the qualitative behavior of nonlinear systems, i.e., fixed points, closed orbits, stability. The chapter introduces Runge-Kutta. 6.2 gives a theorem for the existence of solutions, which are guaranteed if f is continuously differentiable. Different trajectories do not intersect. There is also a theorem that a trajectory confined to a closed, bounded region without a fixed point will eventually approach a closed orbit. 6.3 looks at stability near a fixed point by substituting a disturbance into the Taylor expansion to find the Jacobian matrix, which is the many-variable analogue of f'(x). The text defines various sorts of fixed-point behavior: repellers, attractors, saddles, and centers.
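Since I wanted to go through Runge-Kutta anyway, here is a sketch of the classical fourth-order step (my own generic version, not the book's notation), tested on the harmonic oscillator x' = v, v' = -x, whose exact solution from (1, 0) is x(t) = cos t:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h/6 * (a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Harmonic oscillator x' = v, v' = -x, started at (1, 0), run to t = 10.
f = lambda t, y: [y[1], -y[0]]
y, h, n = [1.0, 0.0], 0.01, 1000
for i in range(n):
    y = rk4_step(f, i * h, y, h)
err = abs(y[0] - math.cos(n * h))   # fourth-order accurate: err ~ h^4
```

The same step function works for any system once you write it as y' = f(t, y), which is also the form the Jacobian linearizes.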
CHALLENGES
I would like to practice making and using a Jacobian matrix. Substitution of polar coordinates (used on p. 153) is something I need to brush up on, and going through Runge-Kutta would be nice, too! It's something I didn't really get in Scientific Computation. Also, an explanation of topological equivalence would help.
REFLECTION
So yeah, we did Runge-Kutta in Comp 365. We also used the Jacobian matrix then, but I was never sure of its definition. I see the Taylor expansion show up a lot in proofs, but we have never had to use it in practice... will we ever?
Thursday, October 30, 2008
Thu 10/30
MAIN POINTS
Chapter 5 examines higher-dimensional phase spaces for linear systems. It begins with the simplest case, a two-dimensional system. The text shows the general form, and shows how it can be written in terms of a matrix A as x' = Ax. Fixed points also exist in 2D phase space... the simple harmonic oscillator in Ex. 5.1.1 oscillates in 2-space. Curves trace out the orbits to form the phase portrait. In Ex. 5.1.2, the two equations are uncoupled, so they may be solved separately. Stability is described in terms of 'attraction': whether or not the system is drawn toward a fixed point for some initial condition. 5.2 looks at the more general case, which is similar to what we read about in the handout. It defines eigenvalues/eigenvectors, etc. When the eigenvalues are complex, the fixed point is a center or spiral. Fig. 5.2.8 gives a great classification of the various stabilities based on trace and determinant. 5.3 examines love affairs between two Shakespearean characters. It discusses the character of the love based on the parameters, and solves it as a 2x2 matrix system.
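The trace/determinant classification in Fig. 5.2.8 is easy to turn into code. This sketch (my own helper, and it deliberately skips the borderline degenerate cases like Delta = 0 or tau^2 = 4*Delta) labels the fixed point of x' = Ax for A = [[a, b], [c, d]]:

```python
def classify(a, b, c, d):
    """Classify the fixed point of x' = Ax, A = [[a, b], [c, d]],
    from the trace tau and determinant delta (as in Fig. 5.2.8).
    Degenerate boundary cases (delta = 0, tau^2 = 4*delta) are omitted."""
    tau, delta = a + d, a * d - b * c
    if delta < 0:
        return "saddle"                      # real eigenvalues of opposite sign
    if tau == 0 and delta > 0:
        return "center"                      # purely imaginary eigenvalues
    kind = "spiral" if tau**2 - 4 * delta < 0 else "node"
    return ("stable " if tau < 0 else "unstable ") + kind

classify(0, 1, -1, 0)    # simple harmonic oscillator: a center
classify(-1, 0, 0, -2)   # two negative real eigenvalues: a stable node
```

The simple harmonic oscillator of Ex. 5.1.1, with A = [[0, 1], [-1, 0]], comes out as a center, matching its closed orbits.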
CHALLENGES
How would one illustrate a Liapunov-stable FP on a phase portrait?
What do systems with an eigenvalue of multiplicity 2 look like? Drawing a phase portrait like the one in Fig. 5.2.2 looks really difficult... how would one do that?
REFLECTION
I'm taking linear algebra very late in my college career... I'm taking it concurrently with this course, so we're covering eigen* topics right now, conveniently. To find the eigenvalues of an nxn matrix, we have to solve an nth-degree polynomial, so how would we find the eigenvalues of a 4x4 or above? (Perhaps it's in the numerical analysis textbook...)
Monday, October 20, 2008
Tue Oct 21
MAIN POINTS
The premise of the reading is that we want to solve a differential equation given in terms of a matrix, i.e., y' = Ay. The text defines an eigenvalue: a constant lambda such that Av = lambda*v for some nonzero vector v. Then one solution to x' = Ax is x(t) = e^(lambda*t)v.
We can find eigenvalues by noting that 0 = Av - lambda*v = (A - lambda*I)v has a nonzero solution v only when det(A - lambda*I) = 0. We solve this ("characteristic") polynomial, and its roots are the eigenvalues.
Linear systems of dimension 2 are planar systems. The eigenvalues of the matrix A that represents a planar system can be calculated simply by using the quadratic formula (to find the roots of the characteristic polynomial of A). Because we use the quadratic formula, we can see that we could have two distinct real roots, two complex roots, or one root of multiplicity 2. The text explores these three cases. In the case of a root of multiplicity 2, there is essentially a single solution that exhausts all the exponential solutions.
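A quick sketch of the quadratic-formula computation (my own helper names, not the handout's), checked on a matrix whose eigenvalues I can verify by hand:

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    """Roots of the characteristic polynomial
    lambda^2 - (a + d)*lambda + (a*d - b*c) = 0, via the quadratic formula."""
    tau, delta = a + d, a * d - b * c
    disc = cmath.sqrt(tau**2 - 4 * delta)
    return (tau + disc) / 2, (tau - disc) / 2

# A = [[2, 1], [1, 2]] has eigenvalues 3 and 1; (1, 1) is an eigenvector for 3.
l1, l2 = eigenvalues_2x2(2, 1, 1, 2)
v = (1, 1)
Av = (2 * v[0] + 1 * v[1], 1 * v[0] + 2 * v[1])   # equals 3 * v, confirming Av = lambda*v
```

Using cmath instead of math means the complex-root case falls out automatically when tau^2 - 4*delta < 0.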
CHALLENGES
What are the qualitative changes we see in a planar system when it has complex roots, or a root of multiplicity 2? It would be good to see an illustration of a planar system, because we haven't really thought about diff eqs above one dimension.
REFLECTIONS
I'm assuming that you can rewrite a system of differential equations in matrix form, and then use the eigenvalues of that matrix to solve the system. Otherwise, what is the difference between a differential equation defined in terms of matrices and one defined by a system of equations?
Again, it would be cool to see some examples so we can visualize what's actually happening.
Thursday, October 9, 2008
Thu Oct 9
MAIN POINTS
Fireflies have a method of synchronizing their flashes with each other, but only if the stimulus is at a rate that they can learn to match. The first part of the model describes the firefly as a simple oscillator on the circle with frequency little omega, and the stimulus as an oscillator with frequency big Omega. Equation (2) gives a simple model of the attempt to synchronize. Terms are introduced to nondimensionalize the model, and varying the mu term changes the nature of the fixed points, with saddle-node bifurcation behavior. A fixed point represents a state in which the firefly's rhythm is phase-locked to the stimulus.
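If equation (2) is the usual sinusoidal coupling, the phase difference phi = Theta - theta obeys phi' = Omega - omega - A*sin(phi) (I'm assuming this form and the parameter values below; the book's exact equation and symbols may differ). A quick simulation then shows phi settling onto the stable fixed point phi* = arcsin((Omega - omega)/A) whenever |Omega - omega| <= A, which is the phase-locked state:

```python
import math

def phase_difference(Omega, omega, A, phi0=2.0, dt=0.01, steps=5000):
    """Euler-integrate phi' = Omega - omega - A*sin(phi), the drift of the
    phase difference between the stimulus and the firefly."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (Omega - omega - A * math.sin(phi))
    return phi

# |Omega - omega| = 0.2 < A = 0.5, so the firefly can entrain:
phi = phase_difference(Omega=1.2, omega=1.0, A=0.5)
phi_star = math.asin((1.2 - 1.0) / 0.5)   # the stable fixed point
```

With A smaller than |Omega - omega| there is no fixed point at all and phi drifts forever: the firefly simply can't keep up with the stimulus.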
CHALLENGES
There's no indication of how the model in equation (2) was derived: what were the logical steps in producing that equation? Also, I don't know what steps they took in deciding how to nondimensionalize to create the new terms... it would be nice to step through that.
REFLECTIONS
When I arrived at this section I realized that I have read a book by this same author when I was in high school, about synchronization, that was more of a popular science book. It's cool that we can model something biological already, just by knowing bifurcations and flow on a circle.
Sunday, October 5, 2008
Tues 10/7
MAIN POINTS
Chapter 4 considers differential equations defined on a circle, rather than as a vector field on a line. This seems to be the simplest setting in which periodic behavior is possible. Example 4.1.1 solves the very simple theta' = sin(theta), which has one stable fixed point and one unstable fixed point on opposite sides of the circle. It is simpler than considering the equation on a line. 4.2 defines a uniform oscillator: simply constant motion around the circle. If you subtract two oscillators you get a phase-difference equation, which shows how the two oscillators go in and out of phase with each other.
4.3 defines a nonuniform oscillator, theta' = omega - a*sin(theta). The value of the parameter determines whether or not there are fixed points. Where theta' is smallest, the flow passes through a "bottleneck," which is where fixed points appear as the parameter changes. Just past the bifurcation, the "ghost" of the vanished fixed point still slows the flow through the bottleneck... hm. They use an integral to calculate the oscillation period, and the square-root scaling law gives the time spent in the bottleneck.
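For theta' = omega - a*sin(theta) with omega > a > 0, the period is the integral of d(theta)/(omega - a*sin(theta)) over one revolution, and that integral works out to T = 2*pi/sqrt(omega^2 - a^2); the sqrt in the denominator is where the square-root scaling comes from, since T blows up as a approaches omega. A sketch checking the integral numerically against the closed form (the midpoint rule is my own choice here):

```python
import math

def period_numeric(omega, a, n=10000):
    """Midpoint-rule approximation of the period
    T = integral over [0, 2*pi] of dtheta / (omega - a*sin(theta))
    for the nonuniform oscillator theta' = omega - a*sin(theta)."""
    h = 2 * math.pi / n
    return sum(h / (omega - a * math.sin((k + 0.5) * h)) for k in range(n))

omega, a = 1.0, 0.9                                  # close to the bifurcation at a = omega
T_exact = 2 * math.pi / math.sqrt(omega**2 - a**2)   # square-root scaling law
T_num = period_numeric(omega, a)
```

With a this close to omega, almost all of the period is spent crawling through the bottleneck near theta = pi/2, where omega - a*sin(theta) is nearly zero.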
CHALLENGES
What's the difference between a ghost and a bottleneck? They sort of seem like the same thing. How did they arrive at the integral used to calculate the oscillation period in 4.3? What is the relevance of the square-root scaling law?
REFLECTIONS
I know that earlier in the book they defined an oscillator, just not on the circle. There seems to be an interesting relationship between defining something on a line and defining it on the circle. For instance, x' = sin(x) on the line has infinitely many fixed points, while on the circle the same equation has only two.