Hamming codes in Python

As a bit of an experiment, I’ve started up a GitHub repository with some code from this blog here [LATE EDIT: you need numpy (and might as well get scipy and matplotlib) to run this].  In particular, this week I’ve written a Python script that generates the Hamming matrices (one for encoding, one for parity checking, one for decoding), which constitutes as much of a proof of existence as I’m willing to get into.

There is also a “Message” class inside where you can play around with how many corrupted bits your message might have versus how long your message is versus the size of Hamming matrix you use.  The defaults are set at 3 corrupt bits in a set of 4000 bits, with the error checking done with the Hamming(7,4) code.  You can run this by downloading hamming_codes.py and running “python hamming_codes.py” from the command line.
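The heart of that script is small.  Here is a minimal numpy sketch (an illustration of the idea, not the repository code itself) that builds the parity-check matrix of a Hamming code by writing the numbers 1 through 2^r - 1 in binary down the columns:

import numpy as np

def parity_check_matrix(r):
    # Parity-check matrix H of the Hamming (2^r - 1, 2^r - r - 1) code:
    # column j (1-indexed) holds the binary digits of j, least significant
    # bit in the top row.
    n = 2**r - 1
    H = np.zeros((r, n), dtype=int)
    for j in range(1, n + 1):
        for bit in range(r):
            H[bit, j - 1] = (j >> bit) & 1
    return H

print(parity_check_matrix(3))  # the 3 x 7 check matrix of Hamming(7,4)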

This project lives inside the “HammingCodes” folder.  Possible experiments with this code later, but now I need sleep!


Hamming Codes III

We are now in a position to actually write down the Hamming (7,4) code.  As explained in the previous three entries, we want some way of both detecting that an error occurred and correcting that error.  As with many problems, this is much easier once you know it is possible (thanks, Richard Hamming!).  In particular, last time we proved that sending a 7-bit message is necessary in our scheme for correcting errors in a 4-bit message; but is it sufficient?  An easy way to deduce the solution in this case (and then to see the pattern that proves the general case) is to require that our parity check detect the location of any 1-bit error.

Specifically, someone will receive a (possibly corrupt) 7-bit string v, and we want a matrix that will output 0 if all is well, or communicate a number from 1 to 7 if one of the bits is corrupt.  It takes 3 bits to communicate 8 numbers (8 = 2^3), so our parity-check matrix H (following Wikipedia’s notation) must be 3 x 7.  To make it easy to remember, we’ll even have column j be the binary representation of j.  More directly:

H=\begin{bmatrix}1&0&1&0&1&0&1\\ 0&1&1&0&0&1&1\\ 0&0&0&1&1&1&1\end{bmatrix}

(here the top row records the 1s bit, the middle row the 2s bit, and the bottom row the 4s bit, so column j is the number j in binary).

Now we can work backwards (again, we’re assuming an answer exists), and for reasons that will become clear in a moment, we’ll set our three parity bits to be the three “singleton” columns of H, so that the “coded” message is v = (p1,p2,d1,p3,d2,d3,d4).  Then if everything goes smoothly, we have Hv = 0, so that (mod 2)

0 = p1+d1+d2+d4
0 = p2+d1+d3+d4
0 = p3+d2+d3+d4.

Notice also that if one bit gets corrupted, this is equivalent to sending the message v+ej, and
H(v+ej) = Hv + Hej = 0 + hj,
where hj is the jth column of H (which is the binary representation of the number j). Hence multiplying a received message with a 1-bit mistake by H gives us the index of the corrupt bit.
But this tells us how we must encode our message m = (d1,d2,d3,d4) as well.  We want a matrix G so that Gm = (p1,p2,d1,p3,d2,d3,d4).  The three equations above give a linear condition on what this matrix must look like (and an explanation for why the parity bits are all “singletons”): each parity bit is the mod-2 sum of the data bits appearing in its equation, e.g. p1 = d1+d2+d4.

Writing those conditions out row by row gives

G=\begin{bmatrix}1&1&0&1\\ 1&0&1&1\\ 1&0&0&0\\ 0&1&1&1\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix}

Finally we want to “decode” our message, which is also straightforward at this point, since it will just be the matrix which returns the non-parity bits from the encoded message.
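Concretely, with our bit ordering this decoding matrix, call it R, just picks off positions 3, 5, 6 and 7:

R=\begin{bmatrix}0&0&1&0&0&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&0&0&1&0\\ 0&0&0&0&0&0&1\end{bmatrix}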

As a review, and to wrap everything up (all arithmetic still mod 2):

1. Start with a message m = (1,0,0,1).
2. Transmit the encoded message v = Gm = (0,0,1,1,0,0,1).
3. Check the parity by confirming that Hv = (0,0,0).
4. Decode the message: Rv = (1,0,0,1), as desired.
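If you would rather let the computer run those four steps, here is a small numpy sketch (mirroring the matrices above, not the repository script), including flipping one bit in transit and locating it from the parity check:

import numpy as np

# The Hamming(7,4) matrices derived above; all arithmetic is mod 2.
G = np.array([[1,1,0,1],[1,0,1,1],[1,0,0,0],[0,1,1,1],
              [0,1,0,0],[0,0,1,0],[0,0,0,1]])
H = np.array([[1,0,1,0,1,0,1],[0,1,1,0,0,1,1],[0,0,0,1,1,1,1]])
R = np.array([[0,0,1,0,0,0,0],[0,0,0,0,1,0,0],
              [0,0,0,0,0,1,0],[0,0,0,0,0,0,1]])

m = np.array([1, 0, 0, 1])
v = G @ m % 2                 # encode: v = (0,0,1,1,0,0,1)
v[4] ^= 1                     # corrupt bit 5 (index 4) in transit
syndrome = H @ v % 2          # nonzero, so something went wrong
bad = syndrome @ [1, 2, 4]    # read the syndrome as a binary number: 5
if bad:
    v[bad - 1] ^= 1           # flip the offending bit back
print(R @ v)                  # decode: recovers [1 0 0 1]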

Wikipedia’s also got an explanation involving Venn diagrams which I did not much like, though I may write a bit about Venn diagrams themselves in the future…

Expectations

Hard thinkin' being done today.

It is useful (for me!) to think about the importance of math as teaching us how to think about problems, rather than providing us with useful factoids (I’m looking at you, history class).  There are a lot of problems/puzzles/patterns in the world, and the chance of seeing the same problem twice is very low (and really, I’ve never seen Batman use the Pythagorean theorem even once, so what’s the point?), so we focus on solving problems in as broad a context as possible.  In this way, I’d argue, mathematicians become very good problem solvers (“toot! toot!” <-- my own horn).

One method of problem solving I would like to focus on today is to name and describe your answer before you have found it.  As a simple example, in order to answer the question “what number squared is equal to itself?”, we would:
1. Name the answer: Suppose x squared is equal to x.
2. Describe the answer: This is where previously developed machinery comes in: we know that x^2 = x, so we deduce that x satisfies x(x-1) = 0, and conclude that either x = 0 or x = 1.

A geometric way of looking at the word problem. NOT TO SCALE.

As a second example, much of linear algebra is naming objects, describing them, and then realizing you have accidentally completely described them.  For example, suppose we wanted to identify every matrix with a number, and make sure that every singular matrix is identified with 0:
1. Name the answer: Let’s call the answer the determinant, or det() for short.
2. Describe the answer: det() should be a function from matrices to numbers, and at least satisfy the following properties: (i) det(I) = 1, so that the identity matrix is associated with the number 1 (so at least some nonsingular matrices will not have determinant zero); (ii) if the matrix A has a row of zeros, then det(A) = 0 (so that at least some singular matrices will have determinant zero); and (iii) the determinant is multilinear and alternating in the rows of the matrix, which takes some motivation, but is exactly the condition that makes it respect singularity.

Well, it turns out that these three properties have already completely determined the object we are looking for!  If I had been greedy and also asked for (iv) each nonsingular matrix is associated with a unique number, then I would have deduced that no such map exists.  If I had not included property (iii), then I would have found that there are many such maps.  It is a fairly enjoyable exercise to deduce the other properties of determinants starting from just these three rules; one sample deduction follows.
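For instance, multilinearity plus the alternating property already force the familiar rule that swapping two rows flips the sign.  Writing \det(u,v) for the determinant as a function of two of its rows (all other rows held fixed), expand

0 = \det(u+v, u+v) = \det(u,u) + \det(u,v) + \det(v,u) + \det(v,v) = \det(u,v) + \det(v,u),

so \det(v,u) = -\det(u,v).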

More filler photos! This is from Cinque Terre in Italy, between some two of the towns.

Another nice theorem

Trying to visualize the projection map using fibers. You'll have to take my word that the lines stop before getting to the origin.

Today’s Theorem of the Day (TM) I used to compute the Jacobian of a radial projection.  In particular, consider the map F: \mathbb{R}^n \to \mathbb{R}^n where x \mapsto x/|x| for all |x| > 1.  This projects all of n-space outside the unit ball onto the unit sphere, and leaves the interior untouched.  Then for |x| > 1 we may compute the derivative \frac{\partial F_j}{\partial x_k} = \frac{\delta_{j,k}|x|^2 - x_j x_k}{|x|^3}.

To calculate the Jacobian of F means we have to calculate the determinant of that matrix.  With a little figuring (treating x as a row vector, so that x^Tx is an n x n matrix), we can write that last sentence as |JF(x)| = \det \left(\frac{1}{|x|} \left( I - \frac{x^Tx}{|x|^2} \right) \right) = \frac{1}{|x|^n} \det \left(I-\frac{x^Tx}{|x|^2}\right).

Now we apply The Theorem, which Terry Tao quoted Percy Deift as calling (half-jokingly) “the most important identity in mathematics”, and which Wikipedia calls, less impressively, “Sylvester’s determinant formula“.  Its usefulness derives from turning the computation of a very large determinant into a much smaller one.  At the extreme, we apply the formula to row vectors u and v, and it says that \det (I+u^Tv) = 1+vu^T, a 1 x 1 determinant, i.e. a number.  In our case, since xx^T = |x|^2, it yields \det\left(I-\frac{x^Tx}{|x|^2}\right) = 1-\frac{xx^T}{|x|^2} = 0, and hence |JF(x)| = 0.  Thus we turned the problem of calculating the determinant of an n x n matrix into calculating the determinant of a 1 x 1 matrix.
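If you don’t trust the algebra, here is a quick numerical sanity check (purely illustrative, with random vectors):

import numpy as np

n = 5
u, v = np.random.randn(n), np.random.randn(n)
# Sylvester: det(I + u^T v) = 1 + v u^T, i.e. one plus a dot product
print(np.linalg.det(np.eye(n) + np.outer(u, v)), 1 + v @ u)

x = np.random.randn(n) * 3
JF = (np.eye(n) - np.outer(x, x) / (x @ x)) / np.linalg.norm(x)
print(np.linalg.det(JF))  # the Jacobian determinant: zero up to rounding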

Pretty nifty.

Prime matrices II

Someday soon I'll have a post where figures will be useful. Until then, more New England.

Yesterday, I asked for the “smallest” 3×3 singular matrix, each of whose entries is a distinct prime.  By “smallest”, I mean the matrix minimizing the sum of all its entries.

It turns out that there is an optimal matrix, by which I mean there is a way to arrange the smallest 9 primes so that the matrix is singular; the sum is then 100, the smallest it could conceivably be.  One such way (remember, we can exchange rows, exchange columns, or take transposes and still have a solution) is this:

M=\begin{bmatrix}3 & 2 & 5\\ 11 & 13 & 7\\ 19 & 17 & 23\end{bmatrix}

As to how I found this solution, I used the following observation: for such a solution M, the columns of M are linearly dependent over the integers.  Thus I wrote a short program in MATLAB that looked through the first few primes, took them two at a time, say p and q, and then checked whether ap+bq was also a prime, for parameters a and b that I could change.
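In Python rather than the original MATLAB (a reconstruction of the idea, with helper names of my own; sympy does the primality testing), the search might look like this:

from itertools import combinations
from sympy import isprime, primerange

def candidate_rows(a, b, limit=60):
    # Rows (p, q, a*p + b*q) of primes: stacking three such rows gives a
    # singular matrix, since the third column is a times the first plus
    # b times the second.
    rows = []
    primes = list(primerange(2, limit))
    for p in primes:
        for q in primes:
            r = a * p + b * q
            if p != q and r > 0 and isprime(r):
                rows.append((p, q, r))
    return rows

def best_matrix(a, b):
    # Brute-force all triples of rows, keeping those with nine distinct
    # entries and the smallest total.
    best = None
    for triple in combinations(candidate_rows(a, b), 3):
        entries = [x for row in triple for x in row]
        if len(set(entries)) == 9:
            total = sum(entries)
            if best is None or total < best[0]:
                best = (total, triple)
    return best

print(best_matrix(3, -2))  # recovers the sum-100 matrix above, up to row order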

Using a = 1, b = -2, the best matrix I found was

M=\begin{bmatrix}11 & 2 & 7\\ 23 & 3 & 17\\ 31 & 13 & 5\end{bmatrix},


So much New England.

whose sum is 112, and I thought this was pretty good, since it only missed 19 and 29.  The next choice of a = 3, b = -2 turned out to be the winner above (notice, as expected, that 3 times the first column minus twice the second is the third).  I was also surprised to find that choosing a = 5, b = -6 did very well, with a matrix total of 106 (this is, by necessity, the second-best possible answer, since it only swaps out a 23 for a 29).

Since yesterday, I was able to use the same sort of ad-hoc method to find a similarly optimal 4×4 “prime” matrix (again, “optimal” means I use only the first 16 primes):

M=\begin{bmatrix}7 & 3 & 2 & 13\\ 23 & 5 & 11 & 29\\ 31 & 19 & 17 & 47\\ 43 & 41 & 37 & 53\end{bmatrix},

which is enough for me to ask the following question, verified only for the first two cases:

 For n > 2, is there a singular n x n matrix whose entries are the first n^2 prime numbers?

Too much New England! Creepy woods women!

Prime matrices

Trip up to New England tomorrow. Be there.

Flying to New England for a while, so I’m continuing with extra-short posts. For today, here is a (I think) nice question I gave to my linear algebra class last summer, as a problem meant to encourage the use of technology:

It is clear that any 2 x 2 matrix, each of whose entries is a distinct prime, will be nonsingular: for \begin{bmatrix}p&q\\ r&s\end{bmatrix} to be singular we would need ps = qr, which is impossible for distinct primes by unique factorization.  For example, the matrix

M = \begin{bmatrix} 2&5\\ 17&11 \end{bmatrix}

is nonsingular.  However, there exist 3 x 3 matrices, each of whose entries is a distinct prime, which are singular.  For example, the matrix

M=\begin{bmatrix}7&3&5\\23&11&13\\47&19&37\end{bmatrix}

is singular.
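For the technology-inclined, exact integer arithmetic makes checking this painless; for instance, a two-line sympy check:

from sympy import Matrix

print(Matrix([[7, 3, 5], [23, 11, 13], [47, 19, 37]]).det())  # prints 0, so singular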

Out of all such matrices (3 x 3, distinct prime entries, singular), which one has the smallest sum of entries?  (The answer will be reported as this sum: exchanging rows or columns, or transposing, preserves both singularity and the sum, so the minimizing matrix itself is not unique.)

Solution Friday.  As an aside, I have only upper bounds for the 4 x 4 case.

Linear transformations of the plane

The untransformed photo, or the photo acted on by the identity matrix.

As a continuation of my post on graphs of various functions, I am looking at a particular subclass of functions f: \mathbb{R}^2 \to \mathbb{R}^2.   Specifically, these will be linear transformations of the plane (as opposed to more general functions; for example, here is a fantastic video on Möbius transformations).

Before going further, let me state a fact that took me a surprisingly long time to realize: linear transformations of the plane are the same as 2 x 2 matrices.  This is a theorem and not a definition, in the following sense: a linear transformation from a real vector space V to a real vector space W is a function f so that f(av+bw) = af(v)+bf(w) for any vectors v and w in V and real numbers a and b.  An m x n matrix A can be thought of as a function A: \mathbb{R}^n \to \mathbb{R}^m, since it acts on n-dimensional vectors by multiplication and gives back an m-dimensional vector.

So any matrix is a linear transformation: we would write A(av+bw) = aAv+bAw, where v and w are vectors, a and b real numbers.  

Acted on by the matrix (-1 0;0 1)

The only difficult part is realizing that any linear transformation can be represented by a matrix.  Rather than get bogged down in notation, note that if f: \mathbb{R}^2 \to \mathbb{R}^2 is a linear transformation, then we only need to know the values of f on a basis of \mathbb{R}^2.  That is to say, if f((1,0)) = (a,c) and f((0,1)) = (b,d), then the matrix

A = \left( \begin{array}{cc} a &b \\ c& d \end{array} \right)

is the associated matrix.  We check this by noting that f((x,y)) = f((x,0)+(0,y)) = xf((1,0)) + yf((0,1)) = x(a,c)+y(b,d) = (ax + by, cx + dy).  Similarly,

Av=\left(\begin{array}{cc}a&b\\c&d\end{array}\right)\left(\begin{array}{c}x\\y\end{array}\right)=\left(\begin{array}{c}ax+by\\cx+dy\end{array}\right)

Acted on by -1 times the identity matrix. This is a rotation.

Thus, for at least finite dimensional vector spaces, we can use the words “matrix” and “linear transformation” interchangeably.  Notice where I used the fact that a (finite) basis for the domain exists.  Just to drive the point home, I will state the result in a different way: any linear transformation of the plane has real numbers a,b,c,d so that each point (x,y) is sent to the point (ax + by,cx+dy).

Now how do we visualize these transformations?  If we wanted to graph them, we would need four dimensions: two for the domain and two for the range.  If we wanted to view it as a parametric plot, we’ll only need two dimensions, but most such maps are surjective, that is to say, the image of the plane under a (nonsingular) matrix is the entire plane.  So using our previous strategies, we would just display a picture of the plane, which is not very helpful.

Shearing the image with the matrix (1 0;1 1). Also, a joke: how do you shear a sheep? Multiply it by the matrix (1 0;1 1). Hey-o!

So what do we do?  Well, we really care about where individual points go.  Intuitively, one might think of these maps as stretching, pulling and rotating the plane.  If you had a sandbox with a rainbow of colored sand inside, and then the wind blew on this sandbox all day, you would be able to see how the colors changed during the day.  This is an example of a function from the plane (the sandbox before the wind) to the plane (the sandbox after the wind).  We can do this pretty well with pictures.  What we will do is take an image and move the pixel in the (i,j)-th position to the (ai+bj, ci+dj)-th position (or rather, the nearest integer point), as in the sketch below.
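Here is a minimal numpy version of that pixel-pushing idea (an illustrative sketch, not the exact script that produced these figures):

import numpy as np

def transform_image(img, a, b, c, d):
    # Send the pixel at (i, j) to (a*i + b*j, c*i + d*j), rounded to the
    # nearest integers; pixels that land outside the frame are dropped.
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            ii = int(round(a * i + b * j))
            jj = int(round(c * i + d * j))
            if 0 <= ii < h and 0 <= jj < w:
                out[ii, jj] = img[i, j]
    return out

# e.g. the shear from the caption above: sheared = transform_image(img, 1, 0, 1, 1)

(A fancier version would map each output pixel backwards through the inverse matrix, which avoids the small holes this forward version can leave.)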

I have displayed a few examples of matrices acting on the same image.

A random matrix that reflects, rotates and shears.