21 September 2003

POSTSCRIPT to "Coding for Interactive Communication," Special issue on
Codes and Complexity of the IEEE Trans. Information Theory, 42(6) Part
I, 1745-1756, Nov. 1996.

In the discussion section of this paper I alluded to two known
constructions of weak versions of tree codes. One of those allusions
even referenced an unpublished (and to remain so) manuscript of Will
Evans, Michael Klugerman and myself. On several occasions I've been
asked about the missing constructions, so I decided I should post
copies of the emails explaining these on my web page. So far everyone
seems to have been satisfied either by the first email or by the
clarification in the second; but I'll be happy to answer any questions
that remain. The poly-size alphabet construction is joint with Evans
and Klugerman.

Leonard Schulman

-------------------------------------------------------------------------
1 February 2002

Hi -----, ...
The only known
facts beyond what's in my paper weren't published. One is that you can
effectively construct a finite-depth tree code with a poly-size (as a
function of the depth of the tree) alphabet. This is by induction,
doubling the depth of the tree at each step: to go from depth n/2 to
depth n, you glue a copy of the depth-n/2 tree at each of the leaves
of another copy. Then you extend each of the labels in the bottom half
of the new tree with a constant-length string, to take care of pairs
of paths that cross the middle level. To make these extensions you use
standard asymptotically good codes of lengths 1, 2, 4, etc.: the first
code at level n/2 + 1, the next code across levels n/2 + 2 and
n/2 + 3, the next across the succeeding 4 levels, and so on. Each code
serves to Hamming-separate pairs of paths that are rooted within the
corresponding number of levels above the midlevel, in the upper half
of the tree.
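The level-to-code schedule described above (and made concrete in the second
email below: the length-8 codeword sits at levels (n/2)+8 through (n/2)+15)
can be sketched as follows. This is only an illustrative sketch; the
function name is mine, and levels are 1-indexed from the root.

```python
def code_block_for_level(level, n):
    """For a level in the bottom half (n/2 < level <= n) of the doubled
    tree, return (code_length, first_level, last_level): the length of
    the asymptotically good code whose codeword is spread across the
    block of levels containing `level`, and that block's extent.  The
    code of length 2^k covers levels n/2 + 2^k ... n/2 + 2^(k+1) - 1."""
    half = n // 2
    offset = level - half            # 1, 2, 3, ... for successive levels
    k = offset.bit_length() - 1      # largest k with 2^k <= offset
    L = 1 << k                       # code length used in this block
    return L, half + L, half + 2 * L - 1

# Example with n = 64 (midlevel n/2 = 32):
print(code_block_for_level(33, 64))  # (1, 33, 33): length-1 code at n/2 + 1
print(code_block_for_level(35, 64))  # (2, 34, 35): length-2 code at levels n/2 + 2, n/2 + 3
print(code_block_for_level(45, 64))  # (8, 40, 47): length-8 code at n/2 + 8 ... n/2 + 15
```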

The other fact is that you can effectively construct a weak tree code
in which all you demand is Hamming-separation on path-pairs of length
at most log n. (Good enough for a communication protocol to fail with
only polynomially small probability.) This is because you can (by
exhaustive search) construct a tree code of logarithmic depth in
polynomial time. Then you can construct a depth-n tree code with this
weak property by overlapping small trees like shingles (and
concatenating the two labels from the overlapping trees), so that for
every node there is a small tree which contains the node between its
(log n)/4'th and (3 log n)/4'th levels.
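One natural way to realize the shingle overlap is to root a new small tree
every (log n)/2 levels of the big tree; that spacing is my assumption (the
email only states the property the placement must satisfy), and the sketch
below just computes where each node falls.

```python
import math

def shingle_start(d, n):
    """Sketch: small tree codes of depth D = log2(n) are overlapped
    like shingles, a new one rooted every D/2 levels of the big tree
    (an assumed spacing).  Return the starting depth of a small tree
    that contains a node at depth d between its D/4'th and 3D/4'th
    levels; nodes within the top D/4 levels of the big tree simply sit
    near the top of the first small tree."""
    D = int(math.log2(n))
    step = D // 2
    if d < D // 4:
        return 0
    return ((d - D // 4) // step) * step

# Example with n = 2**16, so small trees have depth D = 16:
n = 1 << 16
for d in (45, 1000):
    s = shingle_start(d, n)
    print(d, s, d - s)   # d - s always lands in [D/4, 3D/4] = [4, 12]
```

With this spacing, every node at depth at least D/4 lands between levels
D/4 and 3D/4 of some small tree, so every path-pair of length up to
(log n)/2 around it is contained in a single small tree code.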

I hope these make sense... I can explain over the phone if not.

Leonard

-------------------------------------------------------------------------
13 May 2003

> Dear Leonard,
> 
> I'm afraid I don't quite understand how the extensions to the bottom-level
> trees go using the asymptotically good codes. When one uses a code of
> length say 8 for 8 levels, where exactly do all the codewords go?

You would spread that length-8 codeword across levels (n/2)+8
... (n/2)+15. Suppose the top vertex here (the one at level (n/2)+8)
is called v; let w be its ancestor at level n/2. By the address of a
vertex we'll mean the left-right binary string which leads to it from
the root. Then the "message" which is encoded in that length-8
codeword, is the last 8 bits of the address of w.
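In code, the rule just described looks like this (a minimal sketch;
`block_message` is a name I've made up, and addresses are written as
binary strings from the root):

```python
def block_message(address, n, L):
    """For the length-L codeword spread across levels n/2 + L ...
    n/2 + 2L - 1: the "message" it encodes is the last L bits of the
    address of w, the level-n/2 ancestor of the block's top vertex.
    `address` is the root-to-vertex left/right bit string of any
    vertex at or below level n/2 on the path in question."""
    half = n // 2
    w_address = address[:half]   # address of the ancestor w at level n/2
    return w_address[-L:]        # the L bits encoded by this codeword

# Example with n = 64, L = 8 (the case asked about):
addr = "0110" * 16               # some length-64 address
print(block_message(addr, 64, 8))  # last 8 bits of the first 32: "01100110"
```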

> It also seems that if one keeps the top tree unchanged, then the fact
> that there are 2^{n/2} leaves means the first, say, 2 labels on many of
> the paths
> emanating from these leaves must be the same, and then one would need a
> stronger induction hypothesis about the top tree than just the tree code
> property.

The new bits are required only for separating pairs of paths that
cross level n/2, and which moreover have a majority of their length
below level n/2; for the others you can just settle for a weaker
constant in the Hamming distance. Once the majority of the path is
below the midlevel, you get a bound on the Hamming distance from the
asymptotically good code.

Is this clearer? If not let's try by phone.
Leonard