At a conference in Palo Alto, I heard a few clever riddles:
- Suppose you have a 15 x 20 sheet of chocolate. What is the least number of breaks you must make to end up with only 1 x 1 pieces? (Each break splits a single piece along a straight line.)
- Suppose you are blindfolded, sitting in front of a table with 40 tiles on it. Each tile is black on one side and white on the other. You are told that precisely 10 of the tiles are turned white side up. You are asked to split the tiles into two groups, each containing the same number of white-side-up tiles. You are allowed to flip some tiles if you would like (though, remember, you are blindfolded). How do you do it?
- Also, to satisfy my mathematically curious reader(s): a speaker at the conference yesterday was discussing “optimal Lipschitz extensions”. Roughly, the question is this: given an open, bounded region in ℝⁿ and a Lipschitz function defined on its boundary, is there a “best” Lipschitz function defined on the entire region? By “best”, we’ll mean that on every open, bounded subset, the function has the smallest Lipschitz constant among all functions agreeing with it on that subset’s boundary.
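In symbols, the “best” condition above can be sketched as follows (this is the standard “absolutely minimizing” formulation; I am assuming the talk’s notion matches it):

```latex
% Lipschitz constant of u on an open set V:
\operatorname{Lip}_V(u) \;=\; \sup_{\substack{x, y \in V \\ x \neq y}}
  \frac{|u(x) - u(y)|}{|x - y|}.

% u is a "best" Lipschitz extension of g : \partial\Omega \to \mathbb{R}
% if u = g on \partial\Omega and, for every open, bounded V \subset \Omega
% and every Lipschitz v with v = u on \partial V,
\operatorname{Lip}_V(u) \;\le\; \operatorname{Lip}_V(v).
```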
For example, these two graphs are both Lipschitz with the same boundary data, though the second graph has very small crinkles in it. Really, we are trying to define a notion of “best” that selects the first extension but not the second. It is not obvious that such an extension always exists, since we are imposing a number of conditions on the map, nor, if it does exist, that it is unique (existence, in fact, does not hold for vector-valued functions). In any case, there has been some research into this, and the speaker mentioned a proof of the existence and uniqueness of such extensions using a game-theoretic argument (pdf link to the paper by Peres, Schramm, Sheffield and Wilson). This is great.
It goes like this: choose a point x in the interior of the region, and have two players play the following game. A fair coin is flipped, and the winner gets to move the point by a fixed distance d in any direction. The game ends when the point reaches the boundary of the region, at which point player 2 pays player 1 the value of the boundary function there. Intuitively, player 1 steers towards high values and player 2 towards low ones. Now if each player plays optimally, you can compute an expected payoff for player 1 at each starting point x (that is, the average payoff as the game is played over and over). As d approaches zero, this expected payoff converges, as a function of x, to the optimal Lipschitz extension. I think this is a fantastic result.
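As a sanity check, the one-dimensional case is easy to simulate: there, player 1’s optimal move is always to pull toward the better endpoint and player 2’s toward the worse one, so under optimal play the point just performs a simple random walk. A minimal Monte Carlo sketch (the interval, boundary values, and step size below are my own illustrative choices, not from the talk):

```python
import random

def tug_of_war_value(start, d, f_left, f_right, trials=20000, rng=None):
    """Estimate player 1's expected payoff for the random-turn game on [0, 1].

    Under optimal 1-D play the point moves +d or -d with equal probability
    (each coin winner pulls toward their preferred endpoint), so we just run
    a simple random walk until it hits an endpoint and average the payoffs.
    """
    rng = rng or random.Random(0)
    n = round(1 / d)        # number of d-sized steps across the interval
    k0 = round(start / d)   # starting position, measured in steps
    total = 0.0
    for _ in range(trials):
        k = k0
        while 0 < k < n:
            k += 1 if rng.random() < 0.5 else -1
        total += f_right if k == n else f_left
    return total / trials

# Boundary data f(0) = 0, f(1) = 1: the optimal Lipschitz extension of this
# data is the line u(x) = x, so the game value at x = 0.3 should come out
# close to 0.3 (and closer still as d shrinks).
value = tug_of_war_value(start=0.3, d=0.05, f_left=0.0, f_right=1.0)
print(value)
```

In higher dimensions the players’ optimal strategies are no longer this simple, which is exactly where the analysis in the paper does its work.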
Back to Houston on Saturday.