New Home

I’ve moved over to a new website, and hopefully, once the DNS records finish updating, that is where “colindcarroll.com” will send you.  I hope to have new, exciting material here in the future!

Good linear algebra projects?

Have been talking up learning linear algebra to some of the programmers at work (in particular, using Gil Strang’s fantastic lectures), and am now looking for projects to work on with them at various levels.  Specifically, I’m after interesting programs/scripts/projects that use only matrix subspaces.

Excited to move on to perhaps interactive curve fitting once they cover least squares, PageRank when they get to eigenvalues, and maybe image compression using singular values.  Just having a hard time coming up with a good project that *actually* starts from matrix subspaces (i.e., not casting an eigenvalue as a mysteriously useful null space value).
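
For the least-squares stage, here is a bare-bones sketch of what a curve-fitting exercise might look like in numpy (the data, the quadratic model, and the noise level are all invented just for illustration):

```python
import numpy as np

# Made-up noisy samples from a quadratic; any (x, y) data would do.
x = np.linspace(0, 1, 20)
y = 2.0 * x**2 - 1.0 * x + 0.5 + 0.05 * np.random.randn(x.size)

# Design matrix whose columns span the subspace we project y onto.
A = np.column_stack([x**2, x, np.ones_like(x)])

# Least-squares fit: coefficients of the projection of y onto the column space of A.
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)  # should land near [2.0, -1.0, 0.5]
```

The interactive version would presumably come from letting them tweak the data and the columns of A and watch the fit change.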

A great end result of all this might be a project investigating large networks and locating clusters.
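
One sketch of a project that really does start from a matrix subspace (and points toward the networks-and-clusters idea): the null space of a graph Laplacian has one basis vector per connected component, so “count the pieces of this network” is literally a null space computation.  A toy version, with a made-up adjacency matrix:

```python
import numpy as np

# Tiny made-up undirected graph: two disjoint triangles (nodes 0-2 and 3-5).
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0

# Graph Laplacian L = D - A.
L = np.diag(A.sum(axis=1)) - A

# dim(null space of L) = number of connected components of the graph.
eigenvalues = np.linalg.eigvalsh(L)
print(np.sum(np.abs(eigenvalues) < 1e-10))  # prints 2
```

Loosening “exactly zero eigenvalue” to “small eigenvalue” is one road toward spectral clustering, i.e., locating clusters in large networks.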

A primer on the Euler-Lagrange equations

To me, one of the most beautiful pieces of mathematics is the Euler-Lagrange equations.  The idea is this: you are seeking to minimize a functional, which is a function that eats functions and gives numbers.  You won’t lose much by assuming that this functional is an integral.  In particular, we will assume we are dealing with minimizing

L[u] = \int_{\Omega} F(Du(x),u(x),x)~dx,

where u: \Omega \subset \mathbb{R}^n \to \mathbb{R}, and (as usual) Du is the vector of partial derivatives of u.  For example, perhaps L[u] = \int_0^1 |u(x)|~dx, which would be minimized by choosing u(x) identically equal to zero.  Or maybe we want to minimize L[u] = \int_0^1 (u'(x))^2~dx, subject to the requirement that u(0) = u(1) = 1, in which case the constant function u(x) = 1 is the minimizer.  Note that our desired “solution” will be a function on, in this case, the unit interval.

One could spend a while talking about classic problems in the calculus of variations (which is what the study of these problems is called), but I would like to move right to how to solve these problems.  What we will do is extract a partial differential equation which must be satisfied by the minimizer (though other functions might satisfy it too).  The strategy is as follows: assuming we have a minimizer, “poking” the function in any direction will make L[u] larger.  This is similar to the calculus argument that a relative extremum will have derivative zero.  But in (single variable) calculus, we may “perturb” a point by moving it to the right or left on the real line.  How do we “perturb” a function?

Well, we add a “small” function to it.  Namely, we will suppose that u is a minimizer with the necessary boundary data (one typically specifies boundary data in these sorts of problems, to guarantee uniqueness), and let v be a smooth, compactly supported function.  Then u+tv has the desired boundary data for any real number t (since v vanishes near the boundary), and so L[u] \leq L[u+tv].  So now we have a function of a single real variable, call it f(t), and have deduced that it has a minimum when t = 0.

Specifically, we have

f(t) = L[u+tv] = \int_{\Omega} F(Du(x)+tDv(x),u(x)+tv(x),x)dx.

The notation for differentiating this gets sort of bad.  In practice, just remembering that you should differentiate now is pretty good, but let’s forge on.  We notice that F is a map from a big vector space to the reals.  Namely, F: \mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}, since x and Du are vectors.  I will denote the derivative of F with respect to its first n variables by D_pF, and the derivative of F with respect to u by D_uF.  The p in the first derivative is conventional, and notice that D_uF is a number, not a vector.  Also, I won’t need the derivative with respect to the last n variables, but if I did, I’d write D_xF.

Now then, differentiating with respect to t, we get

f'(t) = \int_{\Omega}Dv\cdot D_pF(Du+tDv,u+tv,x) + vD_uF(Du+tDv,u+tv,x)~dx,

so

f'(0) = \int_{\Omega} Dv \cdot D_pF(Du, u,x) + vD_uF(Du,u ,x)~dx.

Now if you’re really sharp with integration by parts (and remember that v has compact support, and so vanishes on the boundary of \Omega), you can transform the first summand and, since t = 0 is a minimum of f, replace f'(0) with 0, to get

0=\int_{\Omega}-v div_x(D_pF(Du,u,x)) + vD_uF(Du,u,x)~dx,

which can be rewritten as

0 = \int_{\Omega}(D_uF(Du,u,x)-div_x(D_pF(Du,u,x)))v~dx.

This equality must hold for all smooth, compactly supported functions v.  Hence if the big ugly guy multiplying v is nonzero anywhere in \Omega, we choose a v that is supported right there, and break the equality.  This argument can be fleshed out into a full proof that the big ugly guy (which maybe I’ll call the Euler-Lagrange equation instead) must vanish at any critical point of the functional.  Once more for emphasis:
If u is a critical point of the functional L, then u satisfies the Euler-Lagrange equation
D_uF(Du,u,x)-div_x(D_pF(Du,u,x)) = 0.
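
As a quick sanity check (a standard example, not one treated above): take F(Du,u,x) = \frac{1}{2}|Du|^2, the Dirichlet energy.  Then D_uF = 0 and D_pF(Du,u,x) = Du, so the Euler-Lagrange equation becomes

0 = -div_x(Du) = -\Delta u,

that is, critical points of the Dirichlet energy are exactly the harmonic functions.
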
I’ve had enough integral signs for one day, but at least I got to post another nice .gif I had lying around!

Joint Meetings

Day 2 of the Joint Math Meetings.  Gave a talk this morning on my dissertation research, and then was delighted to see Frank Morgan give a talk arising from some observations on the work we did in his geometry group REU five (!) years ago.  Have a full afternoon of talks to attend now.


Some quick thoughts on yesterday.  Maybe some day I’ll have a backlog of posts that will post automatically, but today is not that day.

Poor choice: Walking 2.5 miles to and from the (long) day in dress shoes.  Ran a few miles on the Charles at night with a friend, and my feet were killing me.

Good choice: Grilled cheese on a cold day.  Way to go, New England!


Starting up

I’m trying to avoid defining what I will be talking about here, as I wait and see what I end up being interested in on any given day.  I’d anticipate some mix of math, programming, and running, with some general geekery thrown in.  On the other hand, who knows?

Coarea Formula, part I: the Jacobian

There’s a formula called the coarea formula which I have been researching for the past year or so.  There are two good ways to think about it.  One is to look at the so-called “Jacobian” and seek to interpret the integral of that number.  The second is to look at it as a natural dual (in a colloquial, rather than mathematical, sense) to its more famous brother, the area formula.  We deal with the Jacobian today.

The Jacobian is typically introduced in calculus courses, and associated with a change of variables.  In a typical case, you would like to take an integral in one set of coordinates, (x,y), and change to a set of coordinates (u,v), according to a map \phi: \mathbb{R}^2 \to \mathbb{R}^2 which, since the domain is two dimensional, we may write as \phi(x,y) = (u(x,y),v(x,y)).  In this case, we have

\int_{\mathbb{R}^2}f(x,y)J\phi(x,y)~dx~dy = \int_{\mathbb{R}^2} f(u,v) ~du~dv

Let me finally define precisely what the Jacobian from calculus is: for a general map \phi: \mathbb{R}^n \to \mathbb{R}^n, we define

J\phi(x_1,\ldots, x_n) = \det \left(\phi^j_{x_k}\right)_{j,k = 1}^n,

where we are writing \phi = (\phi^1,\ldots, \phi^n), and \phi^j_{x_k} := \frac{\partial \phi^j}{\partial x_k}.  As a quick example, one might recall changing coordinates from Euclidean (rectangular) to polar.  Typically it went x = r \cos{\theta} and y = r \sin{\theta} (that is to say, our change of coordinates is \phi(r, \theta) = (r\cos{\theta},r\sin{\theta})).  Then, using the “absolute value” notation for determinant, we have

J \phi(r,\theta) = \left| \begin{array}{cc} \cos{\theta} & \sin{\theta} \\ -r \sin{\theta} & r \cos{\theta} \end{array} \right| = r \cos^2 \theta + r \sin^2 \theta = r,

which returns us to the (somewhat) familiar formula,

\int f(x,y)~dx~dy = \int f(r,\theta) r~dr~d \theta.
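
Just to see the formula in action numerically (a throwaway check, with the integrand and the region chosen arbitrarily): integrating f(x,y) = e^{-(x^2+y^2)} over the unit disk in both coordinate systems with scipy gives matching answers, namely \pi(1 - e^{-1}).

```python
import numpy as np
from scipy.integrate import dblquad

# Left side: integrate f(x, y) = exp(-(x^2 + y^2)) over the unit disk in (x, y).
cartesian, _ = dblquad(
    lambda y, x: np.exp(-(x**2 + y**2)),
    -1, 1,
    lambda x: -np.sqrt(1 - x**2),
    lambda x: np.sqrt(1 - x**2),
)

# Right side: same integral in (r, theta), with the Jacobian factor r.
polar, _ = dblquad(
    lambda theta, r: np.exp(-r**2) * r,
    0, 1,
    lambda r: 0.0,
    lambda r: 2 * np.pi,
)

print(cartesian, polar)  # both approximately pi * (1 - 1/e), about 1.986
```

Nothing deep, but it makes the extra factor of r feel less like a bookkeeping rule.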

That seems like well over enough for a first post.  Next up: an intuition for what the Jacobian measures, as well as a definition of Jacobian for maps between spaces of different dimensions.

Also! I should mention that the words I use are almost surely wrong: typically, what I call the “Jacobian” is called the “Jacobian determinant”, while the actual “Jacobian” is the matrix of derivatives.  “Jacobian determinant” just seems like a mouthful.