Lecture 4

***

Comment on the Theorem from last time: we showed that IF rank r implies a monochromatic rectangle of size 2^{-c(r)}|X||Y|, then (essentially) D(f) \le \approx c(r).
Note that if D(f) \le c(r), then there must be a monochromatic rectangle of size 2^{-c(r)}|X||Y| [a protocol of cost c partitions the matrix into at most 2^c monochromatic rectangles, so the largest one has size at least 2^{-c}|X||Y|].
So an upper bound on D(f) in terms of rank is *equivalent* to a lower bound, in terms of rank, on the size of the largest monochromatic rectangle.

****

Nondeterministic CC.

RECALL: let C^0(f) be the size of the smallest *cover* of the 0's by (0-monochromatic) combinatorial rectangles; respectively, C^1(f) the smallest cover of the 1's.

* Define N^1(f) = log_2 C^1(f) and N^0(f) = log_2 C^0(f).
  Then N^1 is "nondeterministic CC" and N^0 is "co-nondeterministic CC" (analogy w/ NP, coNP).

* Example showing an exponential gap:
  D(EQ) = n + 1
  N^0(EQ) \le log_2 n + 1   [announce an index i where x and y differ, together with the bit x_i; equivalently, the 2n rectangles {x : x_i = b} x {y : y_i = 1-b} cover the 0's]
  Related claim: N^0(EQ) \ge log_2 n.
  Proof: general statement: D(f) \le C^0(f) + 1   [also D(f) \le C^1(f) + 1]
  why? Alice sends the list of 0-rectangles x is in (a C^0(f)-bit characteristic vector);
  Bob compares it with the list of 0-rectangles y is in;
  f(x,y) = 0 iff some rectangle is on both lists, and Bob announces the answer (one more bit).
  Hence C^0(EQ) \ge D(EQ) - 1 = n, i.e., N^0(EQ) \ge log_2 n.
  [A small simulation of this protocol on EQ appears at the end of these notes.]

* Comment: the 0/1-fooling-set and 0/1-rectangle-size techniques give lower bounds on C^0, C^1.
  Example: N^1(EQ) \ge n   [the diagonal is a 1-fooling set of size 2^n].

* Comment: not the rank method -- rank only gives lower bounds on the size of the best *disjoint* cover.

* Theorem: D(f) = O(N^0(f) * N^1(f))   ("P = NP \cap coNP")
  Corollary: log_2 C^D(f) \le D(f) \le O(log_2^2 C^D(f)), where C^D(f) is the smallest *disjoint* cover by monochromatic rectangles.
  Proof? [Left inequality: a protocol of cost D(f) induces a disjoint cover of size at most 2^{D(f)}. Right inequality: C^0(f), C^1(f) \le C^D(f), so N^0(f), N^1(f) \le log_2 C^D(f); apply the Theorem.]

* Proof of Theorem:
  General idea: kill off 1/2 of the remaining ("live") 0-rectangles in each phase.
  Phase of the protocol:
    Alice looks for a 1-rectangle containing x that intersects \le 1/2 of the live 0-rectangles in rows, and sends its name to Bob, or else says "couldn't find one".
    If Alice couldn't find one, Bob looks for a 1-rectangle containing y that intersects \le 1/2 of the live 0-rectangles in columns, and sends its name to Alice, or else "couldn't find".
    If either succeeds, discard every live 0-rectangle that does not meet the announced rectangle (in rows if Alice spoke, in columns if Bob spoke); any 0-rectangle containing (x,y) survives, and the number of live 0-rectangles drops by at least half.
    No 0-rectangles left? Then output f(x,y) = 1.
    Both fail? Then output f(x,y) = 0.
    [If f(x,y) had been equal to 1, the 1-rectangle of the cover containing (x,y) would intersect \le 1/2 of the live 0-rectangles either in rows or in columns, since no 0-rectangle can intersect it in both rows and columns -- they would then share an entry.]
  Each phase costs N^1(f) + O(1) bits.
  At most N^0(f) = log_2 C^0(f) phases, since each successful phase halves the number of live 0-rectangles, starting from C^0(f).
  [A small simulation of this phase protocol on EQ also appears at the end of these notes.]

* This result is tight. To show this, we study the Disjointness variant Disj^n_k: x and y are k-element subsets of [n], and Disj^n_k(x,y) = 1 iff x \cap y = \emptyset. Note |X| = |Y| = (n choose k).
  [Tightness: take k \approx log n; then N^0 * N^1 = O(log^2 n) by 1. and 2. below, while D \ge log_2 (n choose k) = \Omega(log^2 n) by 3.]

  1. N^0(Disj^n_k) \le log_2 n   [why? announce an element i \in x \cap y; equivalently, the n rectangles {x : i \in x} x {y : i \in y} cover the 0's]

  2. N^1(Disj^n_k) \le O(k + log log n)
     Proof: will show C^1(Disj^n_k) \le 2^{2k} ln((n choose k)^2), by the probabilistic method.
     Choose a random set S \subseteq [n] (each element in/out with prob. 1/2, independently).
     Define R_S = {x : x \subseteq S} x {y : y \subseteq complement of S}; every entry of R_S is a 1.
     For a fixed 1-entry (x,y): Pr_S[(x,y) \in R_S] = 2^{-2k}   [x and y are disjoint, so the 2k membership events are independent].
     So there exists an S covering at least a 2^{-2k} fraction of the remaining (still uncovered) 1's;
     repeat 2^{2k} ln((n choose k)^2) times to reduce the number of remaining 1's below 1, since there are at most (n choose k)^2 of them.
     Note: this critically uses non-disjointness of the cover.
     [A greedy simulation of this covering argument appears at the end of these notes.]

  3. D(Disj^n_k) \ge log_2 (n choose k)
     Proof by showing the matrix D^n_k (rows and columns indexed by k-subsets of [n], entry 1 iff the sets are disjoint) has full rank. Refer to picture: order the k-subsets so that those not containing n come first, so that in block form

       D^n_k = ( D^{n-1}_k | E ; F | 0 )

     where E[x,y] = 1 iff x \cap (y \ {n}) = \emptyset (for x not containing n, y containing n), and F[x,y] = 1 iff (x \ {n}) \cap y = \emptyset. Then

       ( I | 0 ; M | -(n-2k) I ) * ( D^{n-1}_k | E ; F | 0 ) = ( D^{n-1}_k | E ; 0 | (n-2k+1) D^{n-1}_{k-1} )

     where M[x, z] = 1 if z (of size k, not containing n) contains x \ {n}, and 0 otherwise.
     Base cases: D^n_1 and D^n_{n/2} have full rank   [D^n_1 = J - I; D^{2k}_k is a permutation matrix, since a k-set is disjoint from exactly one other k-set, its complement].
     Checking the bottom blocks of the product:
       x containing n vs. y omitting n: two cases (x \ {n} and y intersect or are disjoint); both give 0
       [if they intersect, every z \supseteq x \ {n} also meets y and F[x,y] = 0; if disjoint, the n-2k choices of z = (x \ {n}) \cup {j} with j \notin x \cup y are cancelled by -(n-2k) F[x,y]].
       x containing n vs. y containing n: let x' = x \ {n} and y' = y \ {n}; two cases (x', y' intersect or are disjoint); the first gives 0, the second gives n - 2k + 1   [the number of choices of z = x' \cup {j} with j \notin x' \cup y'].
     For n > 2k the left factor is nonsingular and the right-hand side is block upper triangular with full-rank diagonal blocks (by induction on n), so D^n_k has full rank.
     Hence D(Disj^n_k) \ge log_2 rank(D^n_k) = log_2 (n choose k).
     [A numerical check of the full-rank claim for small n, k appears at the end of these notes.]
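****

Appendix (scribe's addition, not from the lecture): small Python sketches of the arguments above. All variable names, parameters, and the choice of EQ and of small n, k as test cases are illustrative assumptions.

* A minimal sketch of the D(f) \le C^0(f) + 1 protocol, run on EQ_n with the explicit 2n-rectangle 0-cover behind N^0(EQ) \le log_2 n + 1: Alice sends the characteristic vector of the 0-rectangles containing x, and Bob announces the answer.

from itertools import product

n = 3
inputs = range(2 ** n)

def EQ(x, y):
    return 1 if x == y else 0

# 0-cover of EQ: one rectangle {x : x_i = b} x {y : y_i = 1 - b} per index i and bit b
zero_cover = [({x for x in inputs if (x >> i) & 1 == b},
               {y for y in inputs if (y >> i) & 1 == 1 - b})
              for i in range(n) for b in (0, 1)]

def trivial_protocol(x, y):
    """Alice: characteristic vector of the 0-rectangles containing x (C^0 bits).
    Bob: answer is 0 iff one of those rectangles also contains y (1 more bit)."""
    alice_msg = [x in A for (A, _) in zero_cover]
    answer = 0 if any(sent and y in B
                      for sent, (_, B) in zip(alice_msg, zero_cover)) else 1
    return answer, len(zero_cover) + 1          # bits communicated

for x, y in product(inputs, inputs):
    answer, bits = trivial_protocol(x, y)
    assert answer == EQ(x, y) and bits == len(zero_cover) + 1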
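* A sketch of the phase protocol from the proof of D(f) = O(N^0(f) * N^1(f)), run on EQ_n with the explicit covers C^1 = 2^n (diagonal singletons) and C^0 = 2n. The phase-count bookkeeping is an illustrative choice; only the phase logic follows the proof above.

from itertools import product
from math import ceil, log2

n = 3
inputs = list(range(2 ** n))

def EQ(x, y):
    return 1 if x == y else 0

one_cover = [({x}, {x}) for x in inputs]                       # covers the 1's
zero_cover = [({x for x in inputs if (x >> i) & 1 == b},       # covers the 0's
               {y for y in inputs if (y >> i) & 1 == 1 - b})
              for i in range(n) for b in (0, 1)]

def phase_protocol(x, y):
    """Each phase, Alice (then Bob) tries to announce a 1-rectangle containing
    her (his) input that meets at most half of the live 0-rectangles in rows
    (columns); only the 0-rectangles meeting it survive, so the live set at
    least halves. Returns (answer, number of phases used)."""
    live = list(zero_cover)
    phases = 0
    while live:
        phases += 1
        alice = next(((A, B) for (A, B) in one_cover if x in A and
                      sum(1 for (C, D) in live if A & C) <= len(live) / 2), None)
        if alice is not None:
            A, B = alice
            live = [(C, D) for (C, D) in live if A & C]    # keep row-intersecting
            continue
        bob = next(((A, B) for (A, B) in one_cover if y in B and
                    sum(1 for (C, D) in live if B & D) <= len(live) / 2), None)
        if bob is not None:
            A, B = bob
            live = [(C, D) for (C, D) in live if B & D]    # keep column-intersecting
            continue
        return 0, phases       # both failed: some live 0-rectangle contains (x, y)
    return 1, phases           # no live 0-rectangle left: f(x, y) = 1

max_phases = ceil(log2(len(zero_cover))) + 1               # about N^0(f) + 1 phases
for x, y in product(inputs, inputs):
    answer, phases = phase_protocol(x, y)
    assert answer == EQ(x, y) and phases <= max_phases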
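* A brute-force version of the covering argument in 2.: in every round some S gives a 1-rectangle R_S = {x : x \subseteq S} x {y : y \subseteq [n] \ S} covering at least a 2^{-2k} fraction of the still-uncovered 1's (by averaging over all S), so 2^{2k} ln((n choose k)^2) rounds suffice. Here the best S is found by exhaustive search; n = 8, k = 2 are illustrative parameters.

from itertools import combinations
from math import comb, log

n, k = 8, 2
universe = range(n)
ksets = [frozenset(c) for c in combinations(universe, k)]
all_S = [frozenset(c) for r in range(n + 1) for c in combinations(universe, r)]

# 1-entries of Disj^n_k: ordered pairs of disjoint k-subsets
uncovered = {(x, y) for x in ksets for y in ksets if x.isdisjoint(y)}
total = len(uncovered)

rounds = 0
while uncovered:
    rounds += 1
    # pick the S whose rectangle R_S covers the most still-uncovered 1-entries
    S = max(all_S, key=lambda s: sum(1 for (x, y) in uncovered
                                     if x <= s and y.isdisjoint(s)))
    uncovered = {(x, y) for (x, y) in uncovered
                 if not (x <= S and y.isdisjoint(S))}

bound = 2 ** (2 * k) * log(comb(n, k) ** 2)    # 2^{2k} ln((n choose k)^2)
print(f"covered all {total} ones of Disj^{n}_{k} with {rounds} rectangles; bound = {bound:.1f}")
assert rounds <= bound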
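* A numerical check of the full-rank claim in 3.: D^n_k, indexed by k-subsets of [n] with entry 1 iff the subsets are disjoint, has rank (n choose k) whenever n \ge 2k. The (n, k) values below are small illustrative choices.

from itertools import combinations
from math import comb
import numpy as np

def disjointness_matrix(n, k):
    ksets = [frozenset(c) for c in combinations(range(n), k)]
    return np.array([[1 if x.isdisjoint(y) else 0 for y in ksets] for x in ksets])

for n, k in [(4, 1), (4, 2), (6, 2), (6, 3), (8, 3)]:
    D = disjointness_matrix(n, k)
    rank = np.linalg.matrix_rank(D)
    print(f"D^{n}_{k}: size {comb(n, k)}, rank {rank}")
    assert rank == comb(n, k)          # full rank, so D(Disj^n_k) >= log_2 (n choose k)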