No reasonable pictures for today! Have a cactus!

So now we know the general idea behind Hamming error correcting codes, and how one might construct and visualize hypercubes. Now suppose we want to encode, in an error correcting way, 4 bits. Recall that this means finding a hypercube with enough vertices that we can designate 16 (=2^{4}) of them, *and* pick those 16 so that no two “symbol vertices” are within distance 2 of each other (i.e., any two are at distance at least 3). This guarantees that each “symbol vertex” has its own disjoint neighborhood of radius 1.

A back-of-the-envelope calculation gives a necessary condition: an *n*-dimensional hypercube has 2^{n} vertices and each vertex has *n* neighbors (so a “symbol neighborhood” takes up *n*+1 vertices). Hence it is necessary that *n* satisfy

16*(*n*+1) ≤ 2^{n}.

More generally, to encode *m* bits in *n* bits, we require 2^{m}*(*n*+1) ≤ 2^{n}. Note without proof (for now, hopefully soon by construction) that this is also a sufficient condition. From an efficiency point of view, it is interesting to see where equality holds.
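As a quick sanity check on the bound, one can search for the smallest *n* that works for each *m*. A minimal sketch (the function name here is mine, not from the post):

```python
# For m message bits, find the smallest hypercube dimension n
# satisfying the packing bound 2^m * (n + 1) <= 2^n.
def smallest_dimension(m):
    n = m  # we certainly need at least m dimensions
    while 2**m * (n + 1) > 2**n:
        n += 1
    return n

for m in (1, 2, 3, 4):
    n = smallest_dimension(m)
    tight = 2**m * (n + 1) == 2**n
    print(m, "->", n, "(equality)" if tight else "(slack)")
```

For *m* = 4 this lands on *n* = 7, where the bound holds with equality: 16·8 = 128 = 2^{7}. (For *m* = 1 it gives *n* = 3, another equality case; *m* = 2 and *m* = 3 leave slack.)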

Taking logs (base 2) of both sides of the equality case gives *m* = *n* − log(*n*+1). Since log(*n*+1) is an integer only when *n*+1 is a power of 2, write *n* := 2^{k}−1; then *m* = 2^{k}−1 − *k*. In fact, one may (and we *may*) describe a whole class of Hamming (2^{k}−1, 2^{k}−1 − *k*) codes.
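Tabulating the first few members of this family, and checking that each one meets the packing bound exactly, takes only a short loop (an illustrative sketch):

```python
# Parameters (n, m) = (2^k - 1, 2^k - 1 - k) of the Hamming family,
# each meeting the packing bound 2^m * (n + 1) = 2^n with equality.
for k in range(2, 6):
    n = 2**k - 1
    m = n - k
    assert 2**m * (n + 1) == 2**n  # the radius-1 spheres exactly tile the cube
    print(f"Hamming ({n}, {m})")
```

This prints the familiar (3, 1), (7, 4), (15, 11), and (31, 26) parameters; the (7, 4) case is exactly the 4-bit encoding above.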
