This is part 1 of what will hopefully become a series on Hamming error-correcting codes, to be continued on Friday. The problem is this: suppose you wish to send a message composed entirely of 0’s and 1’s, but suspect part of it might get “corrupted”. More concretely, I am going to send you the message

01101,

but there’s a chance that you receive instead something like 00101 (so the second bit was flipped), or 01111 (so the fourth bit was flipped). Is there any way to control for this? An idea that is both natural and wrong is to send two copies of the message. In this case, I’d send along

0110101101.

Now if a bit gets flipped (note that there are now more bits that could be flipped), you can see exactly where — perhaps you receive

0**0**1010**1**101,

where the non-matching bits are highlighted. The problem here is that you cannot tell whether the offending bit should be interpreted as a 0 or a 1 (which might be ok if you are only interested in error *detection*). But if we want to be able to correct the error, we’ll have to be a little more clever. As a very broad outline of our strategy, we are going to take each of the symbols we would like to transmit and encode them into a “big enough” space so that each symbol has a buffer zone around it to account for any errors in transmission.
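To see why sending two copies only detects errors, here is a minimal sketch in Python (the function names are my own, not from the post): comparing the two halves tells you *where* the copies disagree, but not which copy is right.

```python
def encode_repeat2(bits):
    """Duplicate the message: send two copies back to back."""
    return bits + bits

def check_repeat2(received):
    """Compare the two halves and report the positions where they disagree.

    A mismatch tells us *that* an error occurred at that position, but
    not whether the first or the second copy holds the correct bit.
    """
    n = len(received) // 2
    first, second = received[:n], received[n:]
    return [i for i in range(n) if first[i] != second[i]]

sent = encode_repeat2("01101")     # "0110101101"
received = "0010101101"            # second bit of the first copy flipped
print(check_repeat2(received))     # [1] -- an error at position 1, but is it 0 or 1?
```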

In some sense, this is a familiar strategy: on touchscreen keyboards you do not have to hit the *exact* spot for a key to work, only be close enough. In this case, the “symbols” I’m referring to are letters, and the “buffer zone” is the area of the screen you can touch so the computer recognizes that you are trying to touch this particular letter.

The trick here (and what is sort of confusing about this) is that the symbols we wish to transmit are bits, and the space that we will be embedding our symbols into will also be bits (rather than a shiny retina display!). As a hint of things to come, I’ve displayed below a representation of the space of 2-bit strings (which will not be big enough to correct 1-bit errors), and a representation of the space of 3-bit strings, which is of perfect size to correct 1-bit errors in a 1-bit message.

All 2-bit strings, with a line connecting them if the distance between them is 1, so you can get to one from the other by switching a single bit.
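The “distance” in these pictures is the Hamming distance: the number of positions at which two equal-length strings differ. A quick Python sketch (my own helper, not from the post):

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length bit strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Adjacent vertices in the picture differ in exactly one bit:
print(hamming_distance("00", "01"))  # 1
print(hamming_distance("00", "11"))  # 2
```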

All 3-bit strings, connected whenever the distance between them is 1. Notice that the points (0,0,0) and (1,1,1) have disjoint balls of radius 1 around them. This will be the basis of the error-correcting code: by agreeing that anything received in the red “ball” will be interpreted as the symbol (0,0,0), and anything received in the blue “ball” will be interpreted as (1,1,1), we get a code that corrects any 1-bit error.
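The ball-based decoding above can be sketched in a few lines of Python (a sketch of the length-3 repetition code; the function names are my own). Since the two balls of radius 1 are disjoint, decoding to the nearer of 000 and 111 is just a majority vote:

```python
def encode(bit):
    """Repetition code: send each message bit three times."""
    return bit * 3

def decode(word):
    """Majority vote: a word within distance 1 of 000 decodes to 0,
    and a word within distance 1 of 111 decodes to 1."""
    return "1" if word.count("1") >= 2 else "0"

print(decode(encode("0")))  # 0
# A single flipped bit still lands inside the right ball and is corrected:
print(decode("010"))        # 0 -- distance 1 from 000
print(decode("110"))        # 1 -- distance 1 from 111
```

Note that two flipped bits would push the received word into the wrong ball, so this code corrects exactly one error per transmitted symbol.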
