Busy days.

Somehow spring break has turned into one of the busier weeks of my year.  Trying to keep up with real life work has not left a ton of time for writing anything thoughtful/reasonable, though at least for continuity I will try to keep a paragraph or so up here each day with my favorite thought of the day.  This also means I can reuse some old graphics!

Today I really enjoyed a particular fact about Sobolev functions.  Recall that these are actually equivalence classes of functions, since they are defined under an integral sign, which “can’t see” sets of measure zero.  However, the following quantifies exactly how small the bad set has to be:

If f \in W^{1,p}(\Omega) for \Omega \subset \mathbb{R}^n, then the limit \lim_{r \to 0} \frac{1}{\alpha(n)r^n}\int_{B(x,r)}f(y)~dy exists for all x outside a set E with \mathcal{H}^{n-p+\epsilon}(E) = 0 for all \epsilon > 0.

Put another way, every Sobolev function may be “precisely defined” outside a set of small dimension, where the dimension gets smaller as p gets larger.  A given representative may be worse, but this allows you to choose a member of the equivalence class of Sobolev functions with some nice properties.
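To see what the bad set can look like, here is a quick numerical sketch (the example function and the code are my own illustration, not part of the theorem).  In \mathbb{R}^2, the function f(x) = |x|^{-1/2} lies in W^{1,p} for p < 4/3, and its ball averages converge everywhere except at the origin, where they blow up like (4/3) r^{-1/2}:

```python
import numpy as np

def ball_average(f, x0, r, n=400):
    # average of f over the ball B(x0, r), via a uniform grid on the ball
    xs = np.linspace(-r, r, n)
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= r**2
    pts = np.stack([X[inside] + x0[0], Y[inside] + x0[1]], axis=1)
    return f(pts).mean()

# f(x) = |x|^(-1/2), a Sobolev function on the unit disk for p < 4/3
f = lambda p: np.linalg.norm(p, axis=1) ** -0.5

for r in [0.1, 0.01, 0.001]:
    good = ball_average(f, (0.5, 0.0), r)  # converges to f(0.5, 0)
    bad = ball_average(f, (0.0, 0.0), r)   # blows up like (4/3) * r**(-1/2)
    print(r, good, bad)
```

The averages at (0.5, 0) settle down to 0.5^{-1/2}, while the averages at the origin grow without bound, so the origin is (part of) the exceptional set E.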

The fibers of two functions in a sequence. I was thinking the above argument might imply that the limit was not Sobolev, but the limit is precisely represented outside a set with positive 1-dimensional measure, so the result is silent on this issue.


More geometry with inverse images

Yesterday’s post was on inverse images of functions as sets, and ways to visualize them.  Today, I realized that even though my early series of posts on the Jacobian derailed, I probably have enough background to describe the area and coarea formulae.  The two give a relationship between the “size” of the fibers of a map and its derivative.  The first thing I’ll need to do is define the Jacobian for maps f: \mathbb{R}^m \to \mathbb{R}^n.  The definition is slightly different depending on whether m or n is larger: if n \geq m, then

|Jf(x)| := \sqrt{|Df(x)^T \cdot Df(x)|},

and if m \geq n, then

|Jf(x)| := \sqrt{|Df(x) \cdot Df(x)^T|}

where Df is the n x m matrix of partial derivatives of f, and we use the absolute value bars to indicate a determinant.  Notice that if m = n, then the definitions agree, and it is just the absolute value of the determinant of the matrix of partial derivatives.  If n = 1 so that f is a real-valued function, then the Jacobian is the length of the gradient of f.
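To make the rectangular-matrix case concrete, here is a short sympy computation (the paraboloid map is my own example): for f(u,v) = (u, v, u^2 + v^2), so n = 3 \geq m = 2, the definition recovers the familiar surface-area element \sqrt{1 + |\nabla g|^2} for the graph of g(u,v) = u^2 + v^2.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# parametrize the paraboloid z = u^2 + v^2 as a map R^2 -> R^3
f = sp.Matrix([u, v, u**2 + v**2])
Df = f.jacobian([u, v])           # the 3 x 2 matrix of partial derivatives
Jf = sp.sqrt((Df.T * Df).det())   # n >= m case: sqrt(det(Df^T Df))

print(sp.simplify(Jf))            # sqrt(4*u**2 + 4*v**2 + 1)
```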

Now then, the area formula says that for a Lipschitz f:\mathbb{R}^m \to \mathbb{R}^n with m \leq n, and any Lebesgue measurable U \subset \mathbb{R}^m,

\int_U |Jf(x)| d\mathcal{L}^m(x) = \int_{\mathbb{R}^n} \#(f^{-1}(y) \cap U)~d\mathcal{H}^m(y),

A hyperboloid projecting onto a circle.

where, for a set S, \#(S) denotes the cardinality of S, i.e., how many points are in S.  We expect this number to be finite (for most functions f I think of, each inverse image has cardinality either 1 or 0).  Indeed, notice that if f is a smooth embedding, then f is one-to-one, so the integrand on the right hand side is 1 at each point of f(U) and 0 elsewhere.  Hence the right hand side will be \mathcal{H}^m(f(U)), the area of the image of U under f.  This explains why it is called the area formula: it agrees with the classical area of parametrized surfaces.
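As a numerical sanity check (the helix example and code are my own, not from the formula's standard treatment): take the injective curve f(t) = (\cos t, \sin t, t) on U = [0, 2\pi], so m = 1, n = 3.  Then |Jf| = \sqrt{\sin^2 t + \cos^2 t + 1} = \sqrt{2}, and both sides of the area formula should equal the arclength 2\pi\sqrt{2}:

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 10001)

# Left side: Riemann sum of |Jf(t)| = sqrt(2) over U = [0, 2*pi]
jac = np.sqrt(np.sin(t)**2 + np.cos(t)**2 + 1)
lhs = (jac[:-1] * np.diff(t)).sum()

# Right side: f is injective, so the integral of #(f^{-1}(y)) is just
# H^1 of the image, approximated here by polygonal arclength
curve = np.stack([np.cos(t), np.sin(t), t], axis=1)
rhs = np.linalg.norm(np.diff(curve, axis=0), axis=1).sum()

print(lhs, rhs, 2 * np.pi * np.sqrt(2))  # all approximately equal
```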

The coarea formula (the subject of my research) keeps all the conditions above, but now f maps from high dimensions into a lower one, so m \geq n.  We have

\int_U |Jf(x)| d\mathcal{L}^m(x) = \int_{\mathbb{R}^n} \mathcal{H}^{m-n}(f^{-1}(y) \cap U)~d\mathcal{H}^n(y).

In plain English, the integral of the Jacobian of f is equal to the integral of the \mathcal{H}^{m-n}-measure of the fibers of f.  [Technical sentences coming up!] One surprising fact is that the coarea formula was first proven in 1959 in Herbert Federer’s paper “Curvature Measures”, while the area formula was (is) a basic calculus fact, at least for smooth functions.  The formula has since played a role in image processing: when f is a real valued function, the left hand side is usually referred to as the total variation.  De Giorgi showed that the fibers of functions which minimize the left hand side are actually minimal surfaces.
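Here is a quick symbolic check of the coarea formula (the example is my own choosing): take f(x,y) = \sqrt{x^2 + y^2} on the unit disk U, so m = 2, n = 1.  Away from the origin |Jf| = |\nabla f| = 1, so the left side is the area of the disk, while the fibers f^{-1}(r) are circles of length 2\pi r:

```python
import sympy as sp

x, y, r, theta = sp.symbols('x y r theta', positive=True)

f = sp.sqrt(x**2 + y**2)
grad = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
Jf = sp.simplify(sp.sqrt(grad.dot(grad)))        # |grad f| = 1 off the origin

# Left side: integral of |Jf| over the unit disk, in polar coordinates
lhs = sp.integrate(Jf * r, (r, 0, 1), (theta, 0, 2 * sp.pi))

# Right side: integral over r of H^1 of the fiber, a circle of length 2*pi*r
rhs = sp.integrate(2 * sp.pi * r, (r, 0, 1))

print(Jf, lhs, rhs)  # both sides come out to pi
```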

A string hyperboloid.

I’ve included a few illustrations of how the coarea formula might relate to “projections” of hyperboloids onto the circle.  The first shows such a hyperboloid, along with the fibers of the “projection”.  The coarea of this map will be the surface area of the hyperboloid.  Such a hyperboloid can be made with string: stretch strings along a cylinder and twist the top.  See the second figure.  The final .gif illustrates what happens as you continue to twist the top, and the resulting surfaces.  In each case, integrating the function whose level sets are these straight lines will return the surface area of the hyperboloid.

Twisting hyperboloids

Fibers of functions


Something that is easy to miss in early calculus classes is that the inverse of a function is typically not a function.  We go through this whole confusing notion first with the square root (because while it is true that if x^2 = 16 then x = \pm 4, we all know that we like +4 better), then with trig functions.  I would argue that it is helpful to think about the inverse of a function as a set, and then point out the wonderful fact that if every inverse image of an individual point has only one or zero members, then there is a function g so that g(f(x)) = x.
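To drive home the inverse-image-as-a-set point of view, here is a toy computation (the code is my own illustration): over a finite domain, f^{-1}(y) is just the set of solutions of f(x) = y, which may have two members, one, or none.

```python
# inverse image of y under f, restricted to a finite domain
def inverse_image(f, y, domain):
    return {x for x in domain if f(x) == y}

domain = range(-10, 11)
square = lambda x: x * x

print(inverse_image(square, 16, domain))  # two preimages: -4 and 4
print(inverse_image(square, 0, domain))   # exactly one preimage: 0
print(inverse_image(square, 3, domain))   # empty set: no integer squares to 3
```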

Typically though, inverse images will have more than one point.  Indeed, for a map f: \mathbb{R}^m \to \mathbb{R}^n, you will expect f^{-1}(y) to be m-n dimensional if m is bigger than n, and a point otherwise.  Intuitively, this is because we have n equations and m unknowns, leaving us with m-n free variables.  This suggests a way of visualizing functions that I have actually never seen used (references to where it has been used are welcome).
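The equations-and-unknowns heuristic can be seen in a one-line sympy example (my own illustration): for f(x,y,z) = x + y + z, a map \mathbb{R}^3 \to \mathbb{R}, the fiber f^{-1}(1) should be 3 - 1 = 2 dimensional.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# one equation in three unknowns: solve for z, leaving x and y free
sol = sp.solve(sp.Eq(x + y + z, 1), z)
print(sol)  # z = 1 - x - y: a plane parametrized by the two free variables
```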

What I have in mind is that, if you have a function f: U \subset \mathbb{R}^m \to \mathbb{R}^n, and it so happens that f(U) can be isometrically embedded back into U by choosing well from the sets f^{-1}(y), then we may plot the inverse images of f on the same graph as we draw the domain of f.

That last paragraph was confusing, so let me give an example right away.  We will look at the function f which maps from the solid torus (donut) to the real numbers, so that f(x) is the distance of x from the center of the solid torus.  Hence f^{-1}(r) will be the (not solid) torus of radius r. I have made the graph I describe above for this map.  Notice that the image of the torus under f, a circle, is indicated in blue in the left of the graph.

The map of the torus that gives the radius of a point. The line in red is the range of the map. Notice it intersects with every shell exactly once.

This picture has a nice intuition: each surface will map down to one point (so our intuition earlier holds up, as f maps a three dimensional object down onto one dimension, so the inverse images are all two dimensional), so we can easily look at this and see the domain, range and action of f on the domain.  Notice also that to plot this in a traditional manner it would take either 4 dimensions as a graph, or 1 overloaded dimension as a parametric plot.  This particular example *could* be displayed using a movie, though again we would be displaying fibers of the map.

The last image of this sort is where we instead map a torus (again, non-solid) to a circle.  Notice that now the map is from a 2-D surface to a 1-D curve, so we expect (and see that) the fibers to be 1 dimensional.

The inverse images of the torus-radius map, as the radius goes from 0 to 1.

Inverse images of a projection-of-sorts of the torus onto a circle.

Coarea Formula, part I: the Jacobian

There’s a formula called the coarea formula which I have been researching for the past year or so.  There are two good ways to think about it.  One is to look at the so-called “Jacobian” and seek to interpret the integral of that number.  The second is to look at it as a natural dual (in a colloquial, rather than mathematical, sense) to its more famous brother, the area formula.  We deal with the Jacobian today.

The Jacobian is typically introduced in calculus courses, and associated with a change of variables.  In a typical case, you would like to take an integral in one set of coordinates, (x,y), and change to a set of coordinates (u,v), according to a map \phi: \mathbb{R}^2 \to \mathbb{R}^2 which, since the domain is two dimensional, we may write as \phi(x,y) = (u(x,y),v(x,y)).  In this case, we have, for an open set U \subset \mathbb{R}^2,

\int_{U}f(\phi(x,y))\,|J\phi(x,y)|~dx~dy = \int_{\phi(U)} f(u,v) ~du~dv.

Let me finally define precisely what the Jacobian from calculus is: for a general map \phi: \mathbb{R}^n \to \mathbb{R}^n, we define

J\phi(x_1,\ldots, x_n) = \det \left(\phi^j_{x_k}\right)_{j,k = 1}^n,

where we are writing \phi = (\phi^1,\ldots, \phi^n), and \phi^j_{x_k} :=\frac{\partial \phi^j}{\partial x_k}.  As a quick example, one might recall changing coordinates from Euclidean (rectangular) to polar.  Typically it went x = r \cos{\theta} and y = r \sin{\theta} (that is to say, our change of coordinates is \phi(r, \theta) = (r\cos{\theta},r\sin{\theta})).  Then, using the “absolute value” notation for determinant, we have

J \phi(r,\theta) = \left| \begin{array}{cc} \cos{\theta} & \sin{\theta} \\ -r \sin{\theta} & r \cos{\theta} \end{array} \right| = r \cos^2 \theta + r \sin^2 \theta = r,

which returns us to the (somewhat) familiar formula,

\int f(x,y)~dx~dy = \int f(r,\theta) r~dr~d \theta.
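The polar computation above is easy to reproduce with a computer algebra system; here is a short sympy sketch (the code is my own):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# the change of coordinates phi(r, theta) = (r*cos(theta), r*sin(theta))
phi = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])
J = phi.jacobian([r, theta]).det()  # determinant of the matrix of partials

print(sp.simplify(J))  # r, matching the hand computation
```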

That seems like well over enough for a first post.  Next up: an intuition for what the Jacobian measures, as well as a definition of Jacobian for maps between spaces of different dimensions.

Also! I should mention that the words I use are almost surely wrong: typically what I call the “Jacobian” is called the “Jacobian determinant”, while the actual “Jacobian” is the matrix of derivatives.  “Jacobian determinant” just seems like a mouthful.