Busy days.

Somehow spring break has turned into one of the busier weeks of my year.  Trying to keep up with real life work has not left a ton of time for writing anything thoughtful/reasonable, though at least for continuity I will try to keep a paragraph or so up here each day with my favorite thought of the day.  This also means I can reuse some old graphics!

Today I really enjoyed a particular fact about Sobolev functions.  Recall that these are actually equivalence classes of functions, as they are really defined under an integral sign, which “can’t see” sets of measure zero.  However, the following result quantifies exactly how small the bad set has to be:

If f \in W^{1,p}(\Omega) for \Omega \subset \mathbb{R}^n, then the limit \lim_{r \to 0} \frac{1}{\alpha(n)r^n}\int_{B(x,r)}f(y)~dy exists for all x outside a set E with \mathcal{H}^{n-p+\epsilon}(E) = 0 for all \epsilon > 0.

Put another way, every Sobolev function may be “precisely defined” outside a set of small dimension, where the dimension gets smaller as p gets larger.  A given representative may be worse, but this lets you pick a member of the equivalence class of Sobolev functions with some nice properties.
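To see that bad points really can occur, here is a toy numerical experiment (my own example, chosen for convenience, not anything from the theorem's proof): f(x) = \log\log(1/|x|) is a standard example of an unbounded function in W^{1,2} of a small ball in \mathbb{R}^2, and its averages over B(0,r) grow without bound as r \to 0, so the origin is a genuine bad point.  Here n - p = 0, so the theorem allows exactly this kind of dimension-zero bad set.

from math import log, pi

def avg_over_ball(r, steps=100000):
	#average of f(x) = log(log(1/|x|)) over the ball B(0,r) in the plane,
	#computed in polar coordinates with a midpoint rule
	total = 0.0
	ds = r/steps
	for j in range(steps):
		s = (j + 0.5)*ds
		total += log(log(1/s))*2*pi*s*ds
	return total/(pi*r**2)

for r in [1e-1, 1e-3, 1e-5]:
	print(avg_over_ball(r))	#the averages keep growing, slowly, as r shrinks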

The fibers of two functions in a sequence. I was thinking the above argument might imply that the limit was not Sobolev, but the limit is precisely represented outside a set with positive 1-dimensional measure, so the result is silent on this issue.

L’Hopital’s rule.

Two photos from a recent trip up north. Major bonus points for knowing which of New England’s many trails these were taken on.

L’Hopital’s rule is really how every student of calculus (and I believe Leibniz, though I cannot find a reference) wishes the quotient rule worked.  Specifically, that

\lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f'(x)}{g'(x)}.

Of course, it can’t be that easy.  We also need that f and g are differentiable in a neighborhood of a (with g'(x) \neq 0 there, except possibly at a itself), that both functions approach 0, or both approach \infty, or both approach -\infty as x approaches this point a, and finally that the limit on the right hand side exists (though we all recall that if it does not work the first time, we may continue to apply L’Hopital until the limit does exist, which then justifies using L’Hopital in the first place).

I was thinking of this rule in relation to generating interesting examples of limits.  In particular, if we are in a situation where L’Hopital’s applies, then we can apply the rule in two ways:

\lim_{x\to a}\frac{f'(x)}{g'(x)}=\lim_{x \to a}\frac{f(x)}{g(x)}=\lim_{x\to a}\frac{\left(\frac{1}{g(x)}\right)'}{\left(\frac{1}{f(x)}\right)'}.

Proceeding informally (i.e., I’m not going to keep track of hypotheses), and using that \left(\frac{1}{g}\right)' = -\frac{g'}{g^2} and \left(\frac{1}{f}\right)' = -\frac{f'}{f^2}, the right hand side of this evaluates to
\lim_{x\to a}\frac{f(x)}{g(x)}=\lim_{x\to a}\frac{f(x)^2}{g(x)^2}\frac{g'(x)}{f'(x)}.
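As a sanity check on that algebra, here is a quick symbolic verification (assuming you have sympy installed; any computer algebra system would do):

from sympy import Function, Symbol, simplify

x = Symbol('x')
f = Function('f')(x)
g = Function('g')(x)

lhs = (1/g).diff(x)/(1/f).diff(x)	#ratio of the derivatives of the reciprocals
rhs = f**2/g**2*g.diff(x)/f.diff(x)	#the "ugly" right hand side
print(simplify(lhs - rhs))	#prints 0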

This is all well and good; the right hand side looks appropriately ugly, but now the trick is picking f and g to get interesting limits.  I have worked out two reasonable examples:

1. Choosing f(x) = \sin{x} and g(x) = x, we get

\lim_{x \to 0} \frac{\sin{x}}{x^2}\tan{x} = 1.

Also, moderate amounts of bonus points for naming (at least) two universities in the northeast with this mascot.

2. Choosing f(x) = e^x-1 and g(x) = \log{x}, and applying (hopefully correctly!) a logarithm rule to disguise the denominator, we can get

\lim_{x \to 0^+} \frac{(e^x-1)^2}{xe^x\log{(x^{\log{x}})}} = 0.

What would be interesting is to find an example where it is difficult/impossible to evaluate without recognizing that it was created using this process.  This second example might fit the “difficult” bill, as I would not want to take the derivative of the denominator directly, but expanding the logarithm, you might recognize it as xe^x (\log{x})^2, and then be able to reverse engineer this process, somehow.
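For what it is worth, sympy confirms both examples (again assuming sympy, and writing the second denominator in its factored form; the second limit is taken from the right, since \log{x} is undefined for x \leq 0):

from sympy import Symbol, sin, tan, exp, log, limit

x = Symbol('x')
print(limit(sin(x)/x**2*tan(x), x, 0))	#prints 1
print(limit((exp(x)-1)**2/(x*exp(x)*log(x)**2), x, 0, '+'))	#prints 0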

As usual, just a thought I’ve been playing with.

More with fibers of functions

I posted earlier on a way of visualizing the fibers of certain maps from high dimensions to low dimensions.  Specifically, if the range can be embedded in the domain so that f is the identity on the image of the range, then we can draw the inverse image of each point.  I had some images of functions whose inverse image was a torus, but had trouble making these sorts of images for maps f: \Omega \subset \mathbb{R}^3 \to \mathbb{R}^2, where the inverse image of a point is a line.  Well, no more!  Here are two images: one is the projection of a cube onto a square, and the other, somewhat more complicated, is the string hyperboloid map.  See the previous post for more details on these specific maps, but I just thought these were nice images!
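For anyone who wants to play along at home, here is a minimal matplotlib sketch (my own, and much cruder than the script behind the images below) that draws the fibers of the first map: the fiber over each point (x_0, y_0) of the square is the vertical segment above it.

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

#the fiber over (x0,y0) is the vertical segment {(x0,y0,t) : 0 <= t <= 1}
for x0 in np.linspace(0, 1, 6):
	for y0 in np.linspace(0, 1, 6):
		ax.plot([x0, x0], [y0, y0], [0, 1])

plt.show()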

Fibers of the projection map from the cube to the square.

Fibers of the "twisted cylinder", which are again straight lines.

Patterns in numbers

I am partial to some examples of coincidence (or not). Hint: see the left column of letters.

I tend to shy away from even faint whiffs of mysticism, numerology, and in general attaching undue weight to circumstance.  As such, I was initially a little upset when I was sent this link by my friend over at the fantastic Austin music blog ovrld.com.  For those who don’t wish to click, the link points out that the fraction 1/998,001 prints out the digits from 000 to 999, in order, except for 998.  I mean, if we look at the first 1,000,000 fractions, we’ll find 1,000,000 decimal expansions that might hide some patterns significant to someone – and that’s only fractions of the form 1/n.

Also, recall that if you want a repeating decimal with some pattern, there is a nice algorithm to find the associated fraction: to find the fraction equal to x = 0.19851985… (being both my day of birth and my year of birth), we notice that

10000x-x = 1985.1985… – 0.19851985… = 1985. 

Then solving for x leaves us with x = 1985/9999.  Here’s the thing though: this is not very pretty; the numerator is pretty big.
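If you would rather let Python do the checking, the standard decimal module makes it easy to print as many digits of the expansion as you like:

from decimal import Decimal, getcontext

getcontext().prec = 28	#ask for 28 significant digits
print(Decimal(1985)/Decimal(9999))	#0.1985198519851985198519851985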

So I guess some kudos should go to whoever noticed this fact about 998001.  Let’s just not attach any special significance to it.

Today's filler photos come from Shiner, TX. Czech it out!

As an aside, I assume this was done by noticing that

1/98 = 0. 01 02 04 08 16 32 65 30 61 22 44 89 79 59 18 36 73 46 93 87 75 5…,

which is almost the sequence of powers of two (once you get to 128, the “1” gets added to the “4” in “64”, and so on), which is somewhat interesting.  Looking at higher denominators like this, you might get to

1/9801 = 0. 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 99…

Still don't know what Lady Liberty was doing in Texas...

Which might somehow suggest getting to 998001… eventually.  Finally, it turns out that this fraction is not so special, in the following way: like any other fraction, its decimal expansion eventually repeats, though the period here is long.  In particular, it goes through 2997 digits, then cycles back to the “000001002003…” pattern.

I wrote a very short Python script to play around with this sort of thing; it is a pretty quick way to perform division to arbitrary accuracy, and just uses the built-in Python function divmod.

def long_div(p,q,digs):
	#finds the first digs digits of p/q, returned as a string.  p and q must be integers.
	a,r = divmod(p,q)
	n = str(a)+'.'
	for j in range(digs-len(str(a))):
		r *=10
		a,r = divmod(r,q)
		n += str(a)
	return n
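For instance, running it on the fractions from this post:

print(long_div(1, 9801, 40))	#0.000102030405060708091011121314151617181
print(long_div(1, 998001, 40))	#0.000001002003004005006007008009010011012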

A function for displaying bits

A very busy week, so today’s post is just a code snippet which converts a decimal number to bits.  Notice that you can go inside the code and adjust how many bits are given to the exponent and to the mantissa.  I’ve adjusted the code a bit to try to help the formatting, but I want to prioritize copying and pasting.  Let me know any problems/criticisms of the code!

from math import log,floor

def bitrep(mbits,ebits):
	#this function writes a float N = sign*M*2**(E-offset), where sign is 0 or
	#1 and M and E are integers: M carries the leading mbits+1 binary digits
	#of N, and E records the size of N.  The numbers sign, E, M are then
	#converted to binary and printed.  To get the bit representation of a
	#double, set mbits = 52, ebits = 11.
	mbits +=1
	offset = 2**(ebits-1)-1
	N = input('What decimal do you want the bit representation of?\n> ')

	#find the sign of the number
	if N >= 0:
		S = '0'
	else:
		S = '1'

	N = abs(N)
	#find the size of the number's exponent
	E= int(floor(log(N,2)))+offset-mbits

	#convert N into the fraction n/d, where d is a power of 10.
	n = str(N)
	try:
		d = 10**(len(n)-n.index('.')-1)
		n = int(d*N)
	except(ValueError):
		try:
			d = 10**(len(n)-n.index('e')-1)
			n = int(d*N)
		except(ValueError):
			d = 1
			n = N

	#finds the mantissa of N
	if offset>=E:
		if d == 1:
			M = N*2**(-E+offset)
		else:
			M,r = divmod(n*2**(-E+offset),d)
			if r>=d/2:
				M+=1
	else:
		M,r = divmod(int(N),2**(E-offset))
		if r >= 2**(E-offset-1):
			M+=1

	#converts both the exponent and mantissa to binary
	Ebin = bin(E)
	Ebin = Ebin[Ebin.index('b')+1:]
	Mbin = bin(M)[3:]

	#pads each number up to the total number of bits
	Ebin = Ebin.zfill(ebits)
	Mbin = Mbin.zfill(mbits)
	#truncates the mantissa to the mbits-1 = 52 bits that get stored
	Mbin = Mbin[:mbits-1]

	#print out with the mantissa and exponent, then carrying out the 
	#calculation with python, then in bits
	print N,'\t"="',M,'* (2**[',E,'-',offset,'])\n','\t = ',M*(2**(E-offset)),'\n',S,Ebin,Mbin

bitrep(52,11)

Getting started with Python, on a mac

I think the default Mac terminal colors are different from this, but you can change them in "preferences"

This is intended as a quick guide, since one of the things that got me into programming was realizing how quickly I could start on my Apple laptop without installing anything.  As a practice first project, we’ll first compute the first 10 Fibonacci numbers, then write a function that will do this.
1. Finding Python.  The easiest way to get started is to just hit “cmd+space” (this opens spotlight: my favorite Mac feature), then type “terminal”, and open the terminal.  After that, it is as easy as typing python, and you have either a powerful calculator, or a way to start writing scripts.
2. Fibonacci. As a nice first calculation, we’ll save two variables to initialize the Fibonacci sequence, then run a loop 99 times (I had to play around a bit to see how the loop count corresponded to where in the Fibonacci sequence I was). See the image for this section.  Note that the spaces are very important.  You need to end lines with a colon, and start lines after the colon by indenting.  See below for more proper indenting.

Wikipedia claims the first Fibonacci number is 0, which I am ignoring for this post.
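Since terminal screenshots do not copy and paste well, here is roughly what that session looks like (a reconstruction, not a verbatim transcript; the 99 in the loop lands on the 100th Fibonacci number):

>>> a,b = 1,0
>>> for j in range(99):
...     a,b = a+b,a
... 
>>> a
354224848179261915075L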

3. Function. Now to write a helper function that I can use to calculate many such numbers, I will open a text file (just in TextEdit), type in the code below, and save it as Fib.py:

def Fib(n):
	#calculate the nth term of the Fibonacci sequence
	a,b = 1,0
	for j in range(n):
		a,b = a+b,a
	return a

for j in range(10):
	print Fib(j)

You can also see this post in progress behind the semi-opaque terminal!

4. Running a script. Now go back to the terminal window, hit “ctrl+d” to get out of Python, and navigate to the folder where you saved Fib.py. Mine was in “colcarroll/mystuff/blog”. Once there, just type “python Fib.py”, and the terminal will print out anything you asked to be printed out. Above, I print out the first 10 Fibonacci numbers.

Cheers! Now go find a proper introduction. Or use Project Euler.

Floating point, continued

As a followup to yesterday’s post on a floating point problem I encountered using the arctangent, I want to discuss how floating point numbers are stored in computers.  Apologies in advance for my misrepresenting a lot of this (corrections are, as always, welcome), but it is helpful for me to work it out.  If you want to get straight to experimenting, I am writing a Python script that converts decimals to bits, which will be posted Wednesday, and a quick guide to getting started with Python, for mac users, will appear tomorrow.

Each of these bits is either 0 or 1, and goes towards storing a “double”. From Wikipedia.

We represent each (double precision) number x as (to use the lingo)

x \approx sign*mantissa*2^{exponent-1023},

where the mantissa and exponent are each an integer.  We will deal with doubles, which means we get 64 bits, each either 0 or 1, to work with (see the nice figure above).  Then

1. The first bit (#63 above) records the sign of the number: 0 for positive, 1 for negative.
2. Bits 62-52 (11 bits) record an exponent E, the “size” of the number.
3. Bits 51-0 (52 bits) record the “mantissa” M of the number itself.  We always choose the mantissa to be as large as possible, which means its leading digit will always be 1, so we actually squeeze out 53 bits of information by leaving out the leading 1.

Also notice that with 64 bits, we can hope to distinguish at most 2^{64} unique numbers.

Some nice hills in Montana.

As an example, we might store the number 1 as follows:
1. The sign is positive, so the first digit is 0.
2. E should be chosen so that the mantissa lies between 2^{53} and 2^{54}-1.  Some inspection shows that E = 970 is the only choice, so the exponent factor is 2^{970-1023} = 2^{-53}. Note that in binary, 970 = 512+256+128+64+8+2 = 01111001010.
3. M should then be 2^{53}, stored in binary, with the leading 1 omitted and the result truncated to 52 bits.  This is just a string of 52 zeros.
4. So the 64 bits to store 1 are 0 01111001010 0000000000000000000000000000000000000000000000000000.
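By the way, if you want to see the bits Python itself uses, the standard struct module will show you.  A small sketch (note that actual IEEE 754 doubles normalize the mantissa to lie between 1 and 2 rather than storing an integer, so the stored exponent field works out to E + 53 = 1023, not the 970 above):

import struct

def double_bits(x):
	#pack x as a big-endian double, then write out all 64 bits
	(n,) = struct.unpack('>Q', struct.pack('>d', x))
	return bin(n)[2:].zfill(64)

print(double_bits(1.0))	#0, then 01111111111, then 52 zeros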

Now the problem: we cannot store most decimals exactly.  Let’s write the number 1.1.  Notice that if you type 1.1 into a Python prompt and hit return, 1.1000000000000001 is displayed, which is already a problem.  But anyways:

1. The sign is positive, so the first digit is 0.
2. Again we choose E so that the mantissa is in the correct range; again E will be 970.
3. Now we want to choose M so that M\cdot 2^{-53} is as close to 1.1 as possible, so we want to round the number \frac{11}{10}\cdot 2^{53} to the nearest integer, and then encode that integer as a binary number.  This is easy to do with the Python command
v,r = divmod(11*(2**53),10),
noting that the remainder is 2, which is less than 5, so we round down and keep v = 9907919180215091. Then
bin(v) writes M = v as a binary number.  Dropping the leading 1 and keeping the first 52 bits, we find that
1.1 “=” 0 01111001010 0001100110011001100110011001100110011001100110011001

This bench was mentioned in a sign in an earlier photo...

This is awfully close, but not exactly it.  Two last points:
1. The approximation is off by at most 2^{-53} as compared to the leading digit.  So the approximation to 1.1 was this close, whereas 4.1 might be wrong by 2^{-51}, since its leading digit is about 4 times as big.
2. We can represent every integer between 0 and 2^{53} exactly.  After that, there are gaps.
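That last point is easy to check at the prompt (a quick demonstration of my own: 2^{53}+1 needs 54 bits of mantissa, so it rounds back down to 2^{53}):

print(int(2.0**53))	#9007199254740992
print(int(2.0**53 + 1))	#still 9007199254740992
print(int(2.0**53 + 2))	#9007199254740994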