Monday, November 17, 2008

Tues Nov 18

MAIN POINTS
A limit cycle is an isolated closed trajectory, i.e. one surrounded by non-closed trajectories. It can be stable (attracting), unstable, or even half-stable. Limit cycles cannot exist in linear systems because there they can never be isolated: any closed trajectory of a linear system can be scaled inward or outward to give a whole family of closed trajectories. 7.1 gives examples. The first example shows all trajectories attracted to a circular limit cycle of radius 1; the second shows a limit cycle that is not a circle. 7.2 shows that closed orbits are impossible in gradient systems, and this is used to show that a given system has no closed orbit. One way this is done is to assume there is a closed periodic solution, write an integral for the change of some energy-like quantity around it, and show the integral is nonzero, reaching a contradiction. Liapunov functions (energy-like functions that decrease along trajectories) can also be used to rule out periodic solutions. Otherwise, Dulac's criterion can be used: find a function g such that the divergence of g times the vector field has a single sign throughout the region.
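To convince myself that the circular attractor in the first example really pulls trajectories in, here is a little Python sketch of my own (not from the book), integrating the polar-coordinate system r' = r(1 - r^2), theta' = 1 from starting points inside and outside the circle:

import numpy as np
from scipy.integrate import solve_ivp

# Polar form of the first example: r' = r(1 - r^2), theta' = 1.
# Every trajectory except the origin should approach the circle r = 1.
def rhs(t, y):
    r, theta = y
    return [r * (1 - r**2), 1.0]

for r0 in (0.1, 2.0):                      # start inside and outside the cycle
    sol = solve_ivp(rhs, (0, 20), [r0, 0.0], t_eval=[0, 5, 10, 20])
    print(f"r0 = {r0}: r(t) = {np.round(sol.y[0], 4)}")
# Both runs should show r(t) -> 1, i.e. attraction to the limit cycle.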

CHALLENGES
The first two chapters were quite short and low on math. Thus, I consider the third:
In example 7.2.2, they use an energy function... when will we not be able to consider a function like this? When it's impossible to show that the integral is nonzero? Is there a typical case when we would use Liapunov functions? Is there a rule to see which method we should use to show that a system has no isolated, closed orbit?

REFLECTIONS
He says that Liapunov functions usually require "divine inspiration" to construct. If we are not prone to divine inspiration, should we still remember their use in this respect?

Monday, November 10, 2008

Tue 11/11

MAIN POINTS
6.5 starts from Newton's law, then introduces the potential energy into the equation of motion to show that the total energy is conserved. That makes the system conservative. A conservative system, by definition, cannot have attracting fixed points. Ex. 6.5.3 shows the energy surface, which seems to be a convenient way of generalizing the potential diagram to 3 dimensions.
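As a quick numerical check (my own, not from the text) that the total energy really does stay constant along a trajectory of a conservative system, here is a sketch for x'' = x - x^3, for which V(x) = -x^2/2 + x^4/4:

import numpy as np
from scipy.integrate import solve_ivp

# Conservative system x'' = x - x^3, written as a first-order system in (x, v).
def rhs(t, y):
    x, v = y
    return [v, x - x**3]

def energy(x, v):
    return 0.5 * v**2 - 0.5 * x**2 + 0.25 * x**4   # kinetic + potential

sol = solve_ivp(rhs, (0, 50), [0.5, 0.0], t_eval=np.linspace(0, 50, 6),
                rtol=1e-10, atol=1e-12)
E = energy(sol.y[0], sol.y[1])
print(np.round(E, 8))   # all entries should agree, up to integration error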

Reversible systems behave the same forward and backward in time... more precisely, they are invariant under t -> -t and y -> -y. Example 6.6.1 is crazy looking. We are given more terminology with names about trajectories and saddles (could we possibly get a handout that distinguishes all of the different trajectory/saddle terms we've encountered so far?)

CHALLENGES
I don't understand the concept of the homoclinic orbit... What does the text mean when it says "centers are more robust when the system is conservative"? (p. 163)

On p.165 they say curves "sufficiently close to the origin" are closed. I understand the logic based on symmetry... but is there a closed curve for every reversible system? Do you never see completely divergent behavior?

REFLECTION
Where do reversible systems show up in the real world? Potential energy/velocity are the main example of conservative systems... in this idealized model, does this mean a trajectory will always be moving, because no attracting fixed points are allowed? So... a marble would always be rolling around the energy surface...

Wednesday, November 5, 2008

Thu 11/6

MAIN POINTS
6.0 states that we will begin studying nonlinear systems. (That was fast...) 6.1 defines a vector field on a plane, where each point on the plane has a velocity vector. A phase point traces out a solution in the plane. Because analytically finding solutions is so difficult, we focus on qualitative behavior of nonlinear systems, i.e. fixed points, closed orbits, stability. The chapter introduces the Runge-Kutta method. 6.2 gives a theorem for the existence and uniqueness of solutions, which are guaranteed if f is continuously differentiable. Different trajectories do not intersect. There is also a result that a trajectory confined to a closed, bounded region with no fixed points must eventually approach a closed orbit. 6.3 looks at the stability near a fixed point by substituting a disturbance into the Taylor expansion to find the Jacobian matrix, which is the analogue of f'(x) for many variables. The text defines the various sorts of fixed points: repellers, attractors, saddles, and centers.
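Since I want practice with Jacobians anyway, here is a sketch I tried (my own made-up example, not from the book): linearize a damped pendulum at the origin, write down the Jacobian by hand, and classify the fixed point from its eigenvalues.

import numpy as np

# Damped pendulum (my example): x' = y, y' = -sin(x) - 0.5*y.
# Jacobian of (f, g) with respect to (x, y), evaluated at the fixed point (0, 0):
#   [ df/dx  df/dy ]   [  0        1   ]
#   [ dg/dx  dg/dy ] = [ -cos(x)  -0.5 ]
x_star = 0.0
J = np.array([[0.0, 1.0],
              [-np.cos(x_star), -0.5]])

eigvals = np.linalg.eigvals(J)
print("eigenvalues:", eigvals)
if np.all(eigvals.real < 0):
    kind = "spiral" if np.any(eigvals.imag != 0) else "node"
    print("the fixed point is a stable", kind)
# Here the eigenvalues are complex with negative real part, so it's a stable spiral.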

CHALLENGES
I would like to practice making and using a Jacobian matrix. Substitution of polar coordinates (used on p. 153) is something I need to brush up on, and going through the Runge-Kutta method would be nice, too! It's something I didn't really get in Scientific Computation. Also... an explanation of topological equivalence would help.

REFLECTION
So yeah, we did Runge-Kutta in Comp 365. We also used the Jacobian matrix then, but I was never sure of its definition. I see the Taylor expansion show up a lot in proofs, but we have never had to use it in practice. Will we ever?

Thursday, October 30, 2008

Thu 10/30

MAIN POINTS
Chapter 5 examines higher-dimensional phase spaces arising from linear systems. It begins with the simplest case: a two-dimensional system. The text shows the general form and shows how it can be written in terms of a matrix A as x'=Ax. Fixed points also exist in 2D phase space... the simple harmonic oscillator in Ex 5.1.1 oscillates in 2-space. The orbits are drawn as curves to form the phase portrait. In Ex 5.1.2, the two equations are uncoupled, so they may be solved separately. Stability is described in terms of "attraction": whether or not the system is attracted toward a fixed point for some initial condition. 5.2 looks at the more general case, which is similar to what we read about in the handout. It defines eigenvalues/vectors, etc. When the eigenvalues are complex, the FP is a center or spiral. Fig. 5.2.8 gives a great classification of the various stabilities based on trace and determinant. 5.3 examines love affairs between two Shakespearean characters. It discusses the character of the love based on parameters, and solves it as a 2x2 matrix system.
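To go along with the trace-determinant picture in Fig. 5.2.8, here is a small sketch I wrote (mine, not the book's) that classifies the fixed point of x' = Ax using only tr(A) and det(A):

import numpy as np

def classify(A):
    """Rough classification of the fixed point of x' = Ax via trace and determinant."""
    tau, delta = np.trace(A), np.linalg.det(A)
    if delta < 0:
        return "saddle"
    disc = tau**2 - 4 * delta
    if tau == 0:
        return "center (borderline case)"
    kind = "spiral" if disc < 0 else "node"
    return ("stable " if tau < 0 else "unstable ") + kind

A = np.array([[0.0, 1.0],     # simple harmonic oscillator x'' = -x
              [-1.0, 0.0]])
print(classify(A))                                      # -> center (borderline case)
print(classify(np.array([[-1.0, 0.0], [0.0, -2.0]])))   # -> stable node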

CHALLENGES
How would one illustrate a Liapunov-stable FP on a phase portrait?
What do systems with eigenvalue of multiplicity 2 look like? Drawing a phase portrait like that in Fig 5.2.2 looks really difficult... how would one do that?

REFLECTION
I'm taking linear algebra very late in my college career... I'm taking it concurrently with this course, so we're covering eigen* topics right now, conveniently. To find an eigenvalue of an nxn matrix, we have to solve an nth-degree polynomial, so how would we find the eigenvalues of a 4x4 or larger matrix? (Perhaps it's in the numerical analysis textbook...)
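Partial answer to my own question: numerically you don't solve the degree-n polynomial by hand, you hand the matrix to a library routine (numpy's eig, which I believe uses the QR algorithm under the hood). A tiny sketch of my own:

import numpy as np

A = np.random.rand(4, 4)            # some 4x4 matrix
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                      # may be complex even though A is real

# sanity check: A v = lambda v for the first eigenpair
v = eigvecs[:, 0]
print(np.allclose(A @ v, eigvals[0] * v))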

Monday, October 20, 2008

Tue Oct 21

MAIN POINTS
The premise of the reading is that we want to solve a differential equation stated in terms of a matrix, i.e. y'=Ay. The text defines an eigenvalue: a constant λ such that Av=λv for some nonzero vector v. Then a solution to x'=Ax is x(t)=e^(λt)v.

We can find eigenvalues by noting that 0=Av-λv=(A-λI)v for nonzero v requires det(A-λI)=0. Setting this ("characteristic") polynomial equal to zero and solving, the roots are the values of λ that are eigenvalues.

Linear systems of dimension 2 are planar systems. The eigenvalues of the matrix A that represents a planar system can be calculated simply by using the quadratic formula (to find the roots of the characteristic polynomial of A). Because we use the quadratic formula, we can see that there could be two distinct real roots, two complex conjugate roots, or one root of multiplicity 2. The text explores these three cases. In the case of a root of multiplicity 2, there may be only a single exponential solution (up to scaling), which exhausts all the exponential solutions.
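Here is a sketch of my own checking the recipe on a planar example: get the eigenvalues from the characteristic polynomial (the quadratic formula step), then verify that x(t) = e^(λt) v really solves x' = Ax by checking Av = λv.

import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])                 # example planar system x' = Ax (my choice)

# Characteristic polynomial: lambda^2 - tr(A)*lambda + det(A) = 0.
tau, delta = np.trace(A), np.linalg.det(A)
roots = np.roots([1.0, -tau, delta])       # quadratic formula, done numerically
print("roots of characteristic polynomial:", roots)   # should be 3 and -1

# Same eigenvalues from numpy directly, plus an eigenvector check:
lams, V = np.linalg.eig(A)
lam, v = lams[0], V[:, 0]
print("A v   =", A @ v)
print("lam v =", lam * v)   # equal, so x(t) = e^(lam*t) * v solves x' = Ax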

CHALLENGES
What are the qualitative changes we see in a planar system when it has complex roots, or roots of multiplicity 2? It would be good to see an illustration of a planar system, because we haven't really thought about diff eq's above 1 dimension.

REFLECTIONS
I'm assuming that you can rewrite a system of differential equations as a matrix, and then use eigenvalues of that matrix to solve for that system. Otherwise, what is the difference between a differential equation defined in terms of matrices, and one defined by a system of equations?
Again, it would be cool to see some examples so we can visualize what's actually happening.

Thursday, October 9, 2008

Thu Oct 9

MAIN POINTS
Fireflies have a method of synchronizing their flashes with each other, but only if the stimulus is at a rate that they can learn to match. The first part of the model describes the firefly as a uniform oscillator on the circle with natural frequency little omega, and the stimulus as an oscillator with frequency big Omega. Equation (2) shows a simple model of the attempt to synchronize. New variables are introduced to nondimensionalize the model, and varying the parameter mu changes the nature of the fixed points, with saddle-node bifurcation behavior. A fixed point of the phase-difference equation corresponds to the firefly's rhythm being phase-locked to the stimulus.
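If I'm reading the model right, the phase difference phi between stimulus and firefly ends up obeying phi' = (Omega - omega) - A*sin(phi), and phase-locking happens exactly when |Omega - omega| <= A (i.e. |mu| <= 1 after nondimensionalizing). Here is a sketch of mine, with made-up numbers, checking both sides of that threshold:

import numpy as np
from scipy.integrate import solve_ivp

# phi' = (Omega - omega) - A*sin(phi); parameter values below are made up.
A = 1.0
for Omega_minus_omega in (0.5, 1.5):
    sol = solve_ivp(lambda t, y: [Omega_minus_omega - A * np.sin(y[0])],
                    (0, 100), [0.0], t_eval=[100])
    drift = sol.y[0, -1] / 100          # average drift rate of the phase difference
    locked = abs(Omega_minus_omega) <= A
    print(f"Omega-omega = {Omega_minus_omega}: mean drift = {drift:.3f}, "
          f"phase-locked prediction: {locked}")
# Below the threshold the drift is ~0 (the firefly locks on); above it the phase
# difference keeps growing (phase drift).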

CHALLENGES
There's no indication of how the model in equation (2) was derived— what were the logical steps in producing that equation? Also, I don't know what steps they took to decide how to nondimensionalize to create the new terms... it would be nice to step through that.

REFLECTIONS
When I arrived at this section I realized that I had read a book by this same author in high school, about synchronization; it was more of a popular science book. It's cool that we can model something biological already, just by knowing bifurcations and flow on a circle.

Sunday, October 5, 2008

Tues 10/7

MAIN POINTS
Chapter four considers differential equations defined on a circle, rather than as a vector field on a line. This seems to be the simplest setting in which periodic behavior is possible. Example 4.1.1 analyzes the very simple theta' = sin(theta), which has one stable FP and one unstable FP on opposite sides of the circle. It is simpler than considering the equation on a line. 4.2 defines a uniform oscillator: simply constant motion around the circle. If you subtract the phases of two oscillators you get a phase-difference equation, which shows how the two oscillators go in and out of phase with each other.

4.3 defines a nonuniform oscillator, theta' = omega - a*sin(theta). The value of the parameter determines whether or not there are fixed points. Where theta' is smallest the motion slows down in a "bottleneck," which occurs near the place where the fixed points appear as the parameter changes. This slow passage is blamed on "ghosts," the ghosts of the fixed points left behind by the bifurcation... hm. They use the square-root scaling law to estimate the time spent in the bottleneck.
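Trying to answer my own question about the integral: since theta' = omega - a*sin(theta), the time around the circle is T = integral of d(theta)/(omega - a*sin(theta)) from 0 to 2*pi, which I believe works out to 2*pi/sqrt(omega^2 - a^2). A quick check of mine (made-up numbers) that the numerical integral matches that formula and blows up like a square root as a -> omega:

import numpy as np
from scipy.integrate import quad

# Nonuniform oscillator theta' = omega - a*sin(theta), with omega > a (no fixed points).
omega = 1.0
for a in (0.5, 0.9, 0.99, 0.999):
    T_num, _ = quad(lambda th: 1.0 / (omega - a * np.sin(th)), 0, 2 * np.pi)
    T_exact = 2 * np.pi / np.sqrt(omega**2 - a**2)
    print(f"a = {a}: T(numerical) = {T_num:.3f}, 2*pi/sqrt(omega^2 - a^2) = {T_exact:.3f}")
# T grows like 1/sqrt(omega - a) as a -> omega: the bottleneck dominates the period.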

CHALLENGES
What's the difference between a ghost and a bottleneck? They sort of seem like the same thing. How did they arrive at the integral that they use to calculate the oscillation period in 4.3? What is the relevance of this square-root scaling law?

REFLECTIONS
I know that earlier in the book they defined an oscillator, just not on the circle. There seems to be an interesting relationship between defining something on a straight line and defining it on the circle. For instance, x' = sin x on the line has infinitely many fixed points, while defined on the circle it has just the two.

Monday, September 29, 2008

Tue Sep 30

MAIN POINTS
Pitchfork bifurcation is a third type of bifurcation that arises in problems with a symmetry. The supercritical pitchfork bifurcation has the form x'=rx-x^3. This equation has a single, central stable fixed point for all values of r ≤ 0, and three fixed points for positive r. In that case, the two symmetric fixed points on either side are stable, and the central fixed point is unstable. Plotting this on a bifurcation diagram shows a pitchfork shape. Example 3.4.2 shows how this idea carries over to a diagram of the potential: as r increases past 0, two pits form on either side of the origin. The subcritical bifurcation has two symmetric FP's that are unstable and converge toward the origin as r is increased to zero from a negative value.
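A little sketch of my own that tabulates the fixed points of the supercritical pitchfork x' = rx - x^3 and their stability on both sides of r = 0; it's basically the bifurcation diagram in table form:

import numpy as np

# Supercritical pitchfork: x' = f(x) = r*x - x^3, with f'(x) = r - 3*x^2.
def fixed_points(r):
    return [0.0] if r <= 0 else [0.0, np.sqrt(r), -np.sqrt(r)]

for r in (-1.0, 1.0):
    for x_star in fixed_points(r):
        slope = r - 3 * x_star**2
        stability = "stable" if slope < 0 else "unstable"
        print(f"r = {r:+.1f}: x* = {x_star:+.3f} is {stability}  (f'(x*) = {slope:+.1f})")
# r = -1: single stable FP at 0.  r = +1: 0 becomes unstable, +/- sqrt(r) are stable.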

3.5 discusses the overdamped bead on a rotating hoop. They find a way to simplify the second order Newton equation to a first order.

CHALLENGES
The bead/hoop example was kind of impenetrable to me. I expect that we'll go over this in class. On p. 59, the book says that the detailed analysis of (3) is left to the exercises... perhaps it would be good to go over how the bifurcation diagram for that equation with the 5th-degree term is derived.

REFLECTION
What are some examples of situations where symmetry causes a pitchfork bifurcation? Are there any simple ones? Does it arise in chemistry or physics?

Sunday, September 21, 2008

tues sep 23

MAIN POINTS
Some fixed points must exist for all values of a given parameter; however, their stability may change as the parameter changes. The transcritical bifurcation is the model for this, given by the equation x'=rx-x^2. As r varies, the parabola of x' vs. x shifts from left to right, but it always crosses the axis at x=0. Thus the fixed point at the origin never disappears, as it does in a saddle-node bifurcation.

The next section discusses an application: lasers. Energy excites a material, which emits light. Above a certain threshold, the energy is released in phase as a laser. The text shows that the differential equation for the number of photons, n' = gain - loss, can be rewritten in a form similar to the transcritical bifurcation. As pump strength increases, the fixed point at n = 0 goes from stable to unstable. This means that the number of photons grows given any slight prodding. A new stable fixed point appears at some positive value of n. The pump strength at which the origin becomes unstable is called the laser threshold.
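To go with the laser example, here is a sketch of mine (made-up numbers) for the transcritical normal form x' = rx - x^2: the fixed points x* = 0 and x* = r swap stability as r passes through 0, which is the laser-threshold picture.

import numpy as np

# Transcritical normal form: x' = f(x) = r*x - x^2, with f'(x) = r - 2*x.
for r in (-0.5, 0.5):
    for x_star in (0.0, r):          # the two fixed points (they coincide at r = 0)
        slope = r - 2 * x_star
        print(f"r = {r:+.1f}: x* = {x_star:+.1f} is "
              f"{'stable' if slope < 0 else 'unstable'}")
# Below threshold (r < 0) the origin is stable; above threshold it is unstable and
# the new fixed point x* = r > 0 (nonzero photon number) takes over as the attractor.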

CHALLENGES
Some practice drawing bifurcation diagrams would be nice... it seems like you just have to draw a vector field for many values of a parameter, and pretend that they're cross-sections of the bifurcation diagram.

Is there a graphical way to envision Fig. 3.2.1 on the computer?

REFLECTION
3.3 provided a way of thinking about physics that I hadn't considered before. It really adds a new dimension when you can state behaviors in terms of their derivatives rather than explicitly...

Also, the text emphasizes that first-order systems are interesting because of the way we can change parameters... Wouldn't this be similar to simply adding another variable and making the system second order?

Thursday, September 18, 2008

Thu Sep 18

Main Points
The book notes that 1-dimensional systems are interesting in the way they change as parameters change. These qualitative changes (in fixed points and their stability) are called bifurcations. The saddle-node bifurcation is the typical situation: the parabola-shaped graph of x' vs. x can cross the x-axis (creating two FP's), touch it (giving a half-stable FP), or miss it entirely, and changing the parameter moves between these cases. Several examples are shown where this behavior happens: parameters vary, creating fixed points "out of the blue sky." The book discusses "normal forms," but it doesn't go into much detail about how to use them; it only shows how simple quadratic functions are prototypes for the saddle-node bifurcation.
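Here is a sketch of my own for the saddle-node normal form x' = r + x^2: for r < 0 there are two fixed points, they merge at r = 0, and for r > 0 they are gone, which is the "out of the blue sky" behavior run in reverse.

import numpy as np

# Saddle-node normal form: x' = f(x) = r + x^2, with f'(x) = 2*x.
for r in (-1.0, 0.0, 1.0):
    if r < 0:
        fps = [-np.sqrt(-r), np.sqrt(-r)]     # one stable, one unstable
    elif r == 0:
        fps = [0.0]                           # single half-stable fixed point
    else:
        fps = []                              # no fixed points at all
    labels = ["stable" if 2 * x < 0 else ("half-stable" if x == 0 else "unstable")
              for x in fps]
    print(f"r = {r:+.1f}: fixed points {fps} -> {labels}")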

CHALLENGES
The bifurcation diagram looks a little weird to me. Why are they drawn like that? What do they demonstrate? What does the Taylor expansion on page 50 prove?

REFLECTIONS
It's interesting that variability in FP's can be condensed down to thinking about saddle-shapes. Thinking about it, it is inevitable—if a single FP appears when a parameter changes, a second one has to show up, unless it's tangential to a saddle shape. I'm just curious about how this topic is relevant— what it can be used for.

Monday, September 15, 2008

Tue Sep 16

MAIN POINTS
Dimensional analysis is based on the premise that physical quantities have dimensions, and physical laws are not changed by a change in units. Dimensions (e.g. mass, length, time) are multiplied along with their associated values; each carries an exponent, and when all the exponents are 0 the quantity is called "dimensionless." You can see immediately whether an equation is even possible by checking that the dimensions on both sides are compatible. This is used to show how the dimensions of a pendulum work out. It is also demonstrated that you can infer what kind of dependence an equation must have if you just know the units of the answer.
Buckingham's theorem states that a dimensionally homogeneous equation can be restated as a relation among dimensionless products. Applying this yields a system of linear equations for the exponents, which can be solved (with linear algebra, for instance) to set up the form of the function.
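To make the "system of equations" step concrete for myself, here is a sketch (mine, not from the reading) for the pendulum: assume the period has the form T ~ m^a * L^b * g^c, match the exponents of M, L, T on both sides, and solve the little linear system.

import numpy as np

# Columns of D are the (M, L, T) exponents of m, L (length), and g:
#   [m] = M^1 L^0 T^0,   [L] = M^0 L^1 T^0,   [g] = M^0 L^1 T^-2
D = np.array([[1, 0,  0],    # mass exponents
              [0, 1,  1],    # length exponents
              [0, 0, -2]])   # time exponents
target = np.array([0, 0, 1]) # we want the combination to have dimensions of time, T^1

a, b, c = np.linalg.solve(D, target)
print(f"T ~ m^{a:.1f} * L^{b:.1f} * g^{c:.1f}")   # -> m^0.0 * L^0.5 * g^-0.5, i.e. sqrt(L/g)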

CHALLENGES
I don't quite understand the concept of "dimensionally homogeneous," and how it is important that the laws of physics be this way. For instance, I'd like to know, if physics weren't dimensionally homogeneous, what kind of contradictions would we see?
I'd like to see another problem worked out too!

REFLECTION
In science classes we got used to the idea of treating units just like numbers, and multiplying/dividing/adding them so far as they were compatible. That's what dimensional analysis means to me— taking a known equation and seeing that the units cancel out. This approaches that problem from the other end.

Monday, September 8, 2008

Thurs. 9/4

MAIN POINTS
Linear stability analysis is a way of examining stability without looking at the system graphically. The derivation uses a Taylor expansion of the behavior of a small perturbation around the point in question, which yields the first-derivative term; the higher-order terms are negligible. Thus f'(x*) determines the stability at that point.
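A quick sketch of mine applying f'(x*) to x' = sin x: the fixed points are x* = k*pi, and the sign of f'(x*) = cos(x*) decides their stability without drawing anything.

import numpy as np

# Linear stability analysis for x' = f(x) = sin(x): fixed points at x* = k*pi.
for k in range(4):
    x_star = k * np.pi
    slope = np.cos(x_star)            # f'(x*)
    print(f"x* = {k}*pi: f'(x*) = {slope:+.0f} -> "
          f"{'stable' if slope < 0 else 'unstable'}")
# Stability alternates: 0 unstable, pi stable, 2*pi unstable, ...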

Section 2.5 gives an overview of when systems have unique solutions, or when solutions exist at all. A simple example with multiple solutions is dx/dt = x^(1/3). The text explains the Existence and Uniqueness Theorem, which requires that f(x) and f'(x) be continuous on an open interval containing the initial condition. Even then, the theorem only guarantees a solution locally in time; it doesn't mandate that solutions exist for all t.

Section 2.6 shows how oscillations are impossible in a first-order system. This is because overshoots can't occur— and obviously, the system is restricted to one dimension.

Section 2.7: I don't understand what the notion of potential is for.

CHALLENGES
2.7 defines Potential, and it seems like it's just a pedagogical tool for explaining dynamics, but I don't understand how this view contrasts with how they previously explained it.

REFLECTIONS
The Taylor Expansion is used for the derivation of Linear Stability Analysis. Is this necessary? I haven't taken Analysis, but is a Taylor expansion generally regarded as a solid proof, whereas simply taking the derivative isn't?

Tues. 9/2

MAIN POINTS
The beginning of the first chapter serves as an overview of the history of chaos/dynamics. Then it turns to some examples of differential equations (an oscillator, the heat equation) and defines the general form of a dynamical system. The text introduces dummy variables to put equations into standard first-order form, and defines the difference between linear and nonlinear systems (whether the x variables on the right-hand side appear only to the first power, with no products or other nonlinear functions of them). It also defines autonomous vs. nonautonomous systems (whether time appears explicitly on the right-hand side), trajectories, and phase space.

Ch. 2 defines a first-order system, x' = f(x), with a single dependent variable. It gives the example dx/dt = sin x, which it analyzes graphically. Stability is defined by whether a small perturbation decays back to the fixed point or grows and sends the trajectory somewhere drastically different. It gives a number of example systems, particularly population growth, which uses the logistic equation.
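Partly to answer my own logistic-equation question below: the standard form (as I understand it) is N' = r*N*(1 - N/K), where the carrying capacity K is how the limitation gets built into the model. A sketch of mine with made-up numbers, showing the population approaching K from above and below:

import numpy as np
from scipy.integrate import solve_ivp

# Logistic growth: N' = r*N*(1 - N/K). The carrying capacity K is a stable fixed point.
r, K = 1.0, 100.0
def rhs(t, y):
    N = y[0]
    return [r * N * (1 - N / K)]

for N0 in (5.0, 250.0):                         # start below and above K
    sol = solve_ivp(rhs, (0, 15), [N0], t_eval=[0, 5, 15])
    print(f"N0 = {N0}: N(t) = {np.round(sol.y[0], 1)}")
# Both runs approach N = 100 (= K); N = 0 is the unstable fixed point.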

CHALLENGES:
I'm a little unclear on the precise definition of a logistic equation— and how the limitations are incorporated into the system.

REFLECTION:
We did a quick exercise on chaos in Scientific Computation. I also read the book by Gleick at some point in high school. As for differential equations, I have little experience working with them apart from using the Runge-Kutta method, etc.

Thursday, September 4, 2008

First Post

Name: Casey B

Year: Senior

Majors: Math/Computer Science

Math Classes: Discrete Math, Multivariable Calculus, Scientific Computation, Theory of Computation, Abstract Algebra, Graph Theory. Taking: Discrete Applied Math, Linear Algebra, Statistical Modeling

Weakest Part: Probability and Statistics, haven't taken a real course in Linear Algebra

Strongest Part: Discrete + Graph Theory, Calculus (but it's been a while)

Why I'm Taking the Course: Want a more solid background in continuous math, Diff Eqs seem like a lot of fun.

What I want out of it: Becoming very comfortable with the notion of DE's, including terminology and the various methods of solving them.

Interests: Electronic Music / Analog synthesizers, Computer Science, Food, Birds, WMCN

Worst Math Teacher Experience: Too easy, didn't assign enough problem sets to solidify knowledge, didn't energize the class, taught very slowly.

Best Math Teacher Experience: Many problem sets, challenging material, readily available out of class, brought food to class :), had some supplementary handouts but didn't rely on them.