
September 22nd.
Guy Rothblum,
Samsung Research America.
Title: Constant-Round Interactive Proofs for Delegating Computation
Abstract:
Interactive proofs have had a dramatic impact on Complexity Theory and Cryptography. The celebrated IP=PSPACE Theorem [LFKN92, Shamir92] allows an all-powerful but untrusted prover to convince a polynomial-time verifier of the validity of extremely complicated statements (as long as they can be evaluated using polynomial space). The interactive proof system designed for this purpose requires a polynomial number of communication rounds. This talk will focus on studying the power of more efficient interactive proof systems.
Our main result is that for every statement that can be evaluated in polynomial time and bounded-polynomial space, there exists an interactive proof that satisfies the following strict efficiency requirements:
(1) The honest prover runs in polynomial time.
(2) The verifier runs in almost linear time (and under some conditions even sub-linear time).
(3) The interaction consists of only a constant number of communication rounds.
To obtain this result, we introduce and study several new notions for interactive proofs, which may be of independent interest. One of these notions is that of unambiguous interactive proofs, where the prover has a unique successful strategy. Another notion is that of probabilistically checkable interactive proofs (PCIPs) where the verifier only reads a few bits of the transcript in checking the proof (this could be viewed as an interactive extension of PCPs).
Joint work with Omer Reingold and Ron Rothblum.

September 29th.
Arnaud Marsiglietti,
Caltech.
Title: The entropy power inequality for the Renyi entropy
Abstract:
The entropy power inequality, fundamental in Information Theory, states that for every pair of independent continuous random vectors X, Y in R^n, one has N(X+Y) \geq N(X) + N(Y). Here N(X) denotes the entropy power of X, defined as N(X) = e^{2h(X)/n}, where h(X) is the differential entropy of X.
In this talk, we will see that the entropy power inequality can be extended to the Renyi entropy.
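For Gaussians the inequality holds with equality, which gives a quick numerical sanity check of the definitions above (a minimal sketch of my own; the variance values are arbitrary):

```python
import math

def gaussian_entropy_power(var: float, n: int = 1) -> float:
    # For X ~ N(0, var * I_n): h(X) = (n/2) ln(2*pi*e*var),
    # so N(X) = exp(2 h(X) / n) = 2*pi*e*var.
    h = 0.5 * n * math.log(2 * math.pi * math.e * var)
    return math.exp(2 * h / n)

v1, v2 = 1.5, 0.7                        # variances of independent X and Y
n_sum = gaussian_entropy_power(v1 + v2)  # N(X+Y): independent variances add
n_parts = gaussian_entropy_power(v1) + gaussian_entropy_power(v2)
assert abs(n_sum - n_parts) < 1e-9       # Gaussians attain equality in the EPI
```

Since entropy power of a Gaussian is proportional to its variance, additivity of variances for independent Gaussians is exactly the equality case of the inequality.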
(based on a joint work with S. Bobkov)

October 13th.
Vitaly Feldman,
IBM Almaden.
Title: From algorithms to lower bounds and back via the statistical query complexity
Abstract:
In this talk I'll show how algorithms for convex optimization can be used to derive lower bounds against general convex relaxations for well-studied constraint satisfaction problems. This technique relies on several recent advances in the understanding of statistical query (SQ) algorithms*:
1. A notion of SQ dimension of a problem that lower bounds the SQ complexity of the problem.
2. Analysis of the SQ dimension of a class of constraint satisfaction problems.
3. SQ algorithms for stochastic convex optimization.
I'll give an overview of these results and some of their additional applications.
* Statistical query algorithms [Kearns 1993] are algorithms that, instead of random samples from an input distribution D over a domain X, have access to an SQ oracle for D. Given a real-valued query function g over X, the oracle returns an estimate of the expectation of g on a sample chosen randomly from D.
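A statistical query oracle is easy to mock up; in this sketch (the names and the tolerance handling are my own, purely illustrative) the oracle answers a query with an empirical mean that is tau-accurate with high probability:

```python
import random

def make_sq_oracle(sample, tau):
    """STAT(tau) oracle for the distribution given by sample():
    returns an estimate of E[g] within additive tolerance tau,
    here realized as an empirical mean over enough fresh draws."""
    def oracle(g):
        m = int(4 / tau ** 2)      # enough samples so the empirical mean
        return sum(g(sample()) for _ in range(m)) / m   # is tau-close whp
    return oracle

random.seed(0)
oracle = make_sq_oracle(lambda: random.random(), tau=0.05)
est = oracle(lambda x: x)          # estimate E[X] for X ~ Uniform[0,1]
```

An SQ algorithm interacts with the distribution only through such expectation queries, never seeing individual samples.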
Based on joint works with C. Guzman, W. Perkins and S. Vempala.

October 27th.
Michael Forbes,
Stanford.
Title: On Probabilistic Checking in Perfect Zero Knowledge
Abstract:
The sumcheck protocol of Lund, Fortnow, Karloff and Nisan is an interactive proof for #P, enabling the verifier to count the number of satisfying assignments of a boolean formula. This celebrated protocol sparked the subsequent probabilistically checkable proof (PCP) revolution. We revisit this protocol from the perspective of zero-knowledge, that is, interactive proofs that reveal no information other than their correctness. While the protocol as-is is not zero-knowledge, fundamental work of Ben-Or, Goldreich, Goldwasser, Hastad, Kilian, Micali, and Rogaway leveraged cryptographic tools (one-way functions) to show that any interactive proof (such as sumcheck) can be made zero-knowledge. However, the resulting zero-knowledge is computationally secure, as opposed to statistically (or even perfectly) secure.
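As a concrete reference point, here is a minimal sketch of the classical (non-zero-knowledge) sumcheck protocol for a multilinear polynomial, with an honest brute-force prover; the field choice and interface are my own illustrative assumptions, not the construction from the talk:

```python
import random

P = 2_147_483_647  # field modulus (the Mersenne prime 2^31 - 1)

def sumcheck(g, n, p=P, rng=random):
    """Sumcheck for a multilinear g: F_p^n -> F_p. The honest prover
    convinces the verifier that S = sum over x in {0,1}^n of g(x).
    Returns (S, accepted)."""
    def partial_sum(prefix):
        # brute-force prover: sum g over all boolean completions of prefix
        k = n - len(prefix)
        total = 0
        for m in range(1 << k):
            total = (total + g(prefix + [(m >> i) & 1 for i in range(k)])) % p
        return total

    S = claim = partial_sum([])
    r = []
    for _ in range(n):
        g0 = partial_sum(r + [0])            # round polynomial at X = 0
        g1 = partial_sum(r + [1])            # and at X = 1 (linear in X)
        if (g0 + g1) % p != claim:
            return S, False                  # verifier rejects
        ri = rng.randrange(p)                # verifier's random challenge
        claim = (g0 + (g1 - g0) * ri) % p    # interpolate the line at ri
        r.append(ri)
    return S, claim == g(r) % p              # final check at a random point
```

A prover lying about the sum must lie about some round polynomial, and the random challenge catches the lie except with probability O(n/p).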
In this work we study how to make the sumcheck protocol perfect zero-knowledge, by working in an extension of the interactive proof model called interactive probabilistically checkable proofs (interactive PCPs), due to Kalai and Raz. In this model the prover can send an exponentially long string to the verifier, which the verifier can read via random access, after which an interactive proof proceeds as usual. While there is a natural candidate for a zero-knowledge sumcheck protocol in this setting, it turns out to be nontrivial to prove the required zero-knowledge property.
To establish that this protocol is perfect zero-knowledge, we use derandomization techniques from algebraic complexity theory due to Raz-Shpilka and Bogdanov-Wee. These algorithms efficiently and deterministically solve succinctly described, exponentially large linear systems of equations arising from questions about low-degree polynomials, which have a natural coding-theory interpretation.
Joint work with Eli Ben-Sasson, Alessandro Chiesa, Ariel Gabizon, Michael Riabzev, and Nicholas Spooner. arXiv:1610.03798.

November 3rd.
Ilias Diakonikolas,
USC.
Title: Computational Efficiency and Robust Statistics
Abstract:
Suppose we would like to estimate the mean of a high-dimensional Gaussian. It is easy to see that the sample mean is an accurate estimator of the mean. Now suppose that an adversary corrupts a small fraction of our samples. Can we still recover the true mean, up to a small error? A moment's thought reveals that the sample mean is not a good estimator in this noisy setting.
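The failure is already visible in one dimension, where the median survives the same corruption (a toy illustration of my own, not the high-dimensional setting of the talk):

```python
import random
import statistics

random.seed(3)
clean = [random.gauss(0.0, 1.0) for _ in range(1000)]
corrupted = clean[:950] + [1e6] * 50     # adversary replaces 5% of the samples

mean_clean = statistics.fmean(clean)     # close to the true mean 0
mean_bad = statistics.fmean(corrupted)   # dragged far away by the outliers
med_bad = statistics.median(corrupted)   # still close to the true mean
```

The hard part, which the talk addresses, is that in high dimensions no such simple coordinate-wise fix achieves near-optimal error.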
The Tukey median is a robust estimator of the mean in the noisy setting under quite general conditions. Unfortunately, it is hard to compute in the worst case, even approximately. This prompts the following question: can we reconcile robustness and computational efficiency in high-dimensional distribution estimation?
In this talk, I will describe a polynomial-time spectral technique to robustly estimate the mean of a high-dimensional Gaussian, up to *near-optimal* accuracy. Time permitting, I will also discuss a plausible computational barrier towards improving the performance of this algorithm.
The aforementioned spectral technique forms the basis of a general recipe to efficiently detect corruptions in high dimensions. As an application of this recipe, we obtain the first polynomial-time algorithms for robustly learning various families of high-dimensional distributions, including Gaussians (with unknown mean and covariance), discrete product distributions, mixtures thereof (under some natural conditions), and certain families of graphical models.
The talk will be based on joint works with (different subsets of) G. Kamath, D. Kane, J. Li, A. Moitra, and A. Stewart.

November 17th.
Gregory Valiant,
Stanford.
Title: Learning from Untrusted Data
Abstract:
We consider the problems of estimation, learning, and optimization over a large dataset of which a subset consists of points drawn from the distribution of interest; we make no assumptions on the remaining points, which could be well-behaved, extremely biased, or adversarially generated. We investigate this question via the introduction of two new models for studying robust estimation, learning, and optimization. One of these models, which we term the "semi-verified" model, assumes access to a second, much smaller (typically constant-sized) set of "verified" or trusted data points that have been drawn from the distribution of interest. The key question in this model is how a tiny but trusted dataset can allow for the accurate extraction of the information contained in the large but untrusted dataset. The second model we propose, which we term "list-decodable learning", considers the task of returning a small list of proposed answers. Underlying this model is the question of whether the presence of a significant number of adversarial data points can hide the structure of the "good" data points, or whether it can simply create additional false copies of the structure of the good points. (For example, if half the points are drawn from a high-dimensional Gaussian, can the remaining adversarial points obscure this structure, or is the most pernicious use of such points simply drawing them from a different Gaussian, in which case the learning/estimation algorithm could output a list of 2 answers?) We present several results for these models, for a large class of mathematically clean and practically relevant robust estimation and learning tasks.
The talk is based on joint works with Jacob Steinhardt and Moses Charikar, and with Michela Meister.

December 8th.
Avishay Tal,
IAS.
Title: Time-Space Hardness of Learning Sparse Parities
Abstract:
How can one learn a parity function, i.e., a function of the form f(x) = a_1 x_1 + a_2 x_2 + ... + a_n x_n (mod 2) where a_1, ..., a_n are in {0,1}, from random examples?
One approach is to gather O(n) random examples and perform Gaussian elimination. This requires memory of size O(n^2) and poly(n) time. Another approach is to go over all 2^n possible parity functions and verify each guess against O(n) random examples. This requires memory of size O(n), but O(2^n * n) time.
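The first approach is just linear algebra over GF(2); a compact sketch (bit-packed rows, my own illustrative code) that exhibits the O(n^2)-bit memory footprint:

```python
import random

def learn_parity(examples, n):
    """Recover a in {0,1}^n with <a, x> = b (mod 2) for all examples (x, b),
    by Gaussian elimination over GF(2). Each row is one int: bits 0..n-1
    hold x, bit n holds the label b (the augmented column)."""
    pivots = {}                                  # pivot position -> reduced row
    for x, b in examples:
        row = (b << n) | sum(xi << i for i, xi in enumerate(x))
        for p, prow in pivots.items():           # reduce against known pivots
            if (row >> p) & 1:
                row ^= prow
        xpart = row & ((1 << n) - 1)
        if xpart == 0:
            continue                             # linearly dependent example
        p = (xpart & -xpart).bit_length() - 1    # lowest set bit becomes pivot
        for q in pivots:                         # keep the system fully reduced
            if (pivots[q] >> p) & 1:
                pivots[q] ^= row
        pivots[p] = row
    a = [0] * n
    for p, prow in pivots.items():               # at full rank, row p reads a_p
        a[p] = (prow >> n) & 1
    return a
```

The result of the talk says that no algorithm can get the best of both worlds: small memory and fast runtime simultaneously, even for very sparse parities.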
In a recent work, Raz [FOCS, 2016] shows that if an algorithm has memory of size much smaller than n^2, then it has to spend roughly 2^n time in order to learn a parity function. In other words, fast learning requires good memory.
In this work, we show that even if the parity function is known to be extremely sparse, with only log(n) of the a_i's nonzero, the learning task is still time-space hard. That is, we show that any algorithm with linear-size memory and polynomial time fails to learn log(n)-sparse parities.
Consequently, the classical tasks of learning linear-size DNF formulas, linear-size decision trees, and logarithmic-size juntas are all time-space hard.
Based on joint work with Gillat Kol and Ran Raz.

January 8th.
Ryan Williams,
Stanford.
Title: Super-Linear Gate and Super-Quadratic Wire Lower Bounds for Depth-Two and Depth-Three Threshold Circuits
Abstract:
Low-depth threshold circuits provide a clean way to model low-depth neural networks. We prove the first super-linear gate lower bounds and the first super-quadratic wire lower bounds for depth-two linear threshold circuits with arbitrary weights, and depth-three majority circuits, computing an explicit function. The key is a new random restriction lemma for linear threshold functions. Our main analytical tool is the Littlewood-Offord Lemma from additive combinatorics.
Joint work with Daniel Kane (UCSD). Paper available at http://arxiv.org/abs/1511.07860.

January 22nd.
Raghu Meka,
UCLA.
Title: Pseudorandomness via the Discrete Fourier Transform
Abstract:
We present a new approach to constructing unconditional pseudorandom generators against functions that involve computing a linear function of the inputs. We give an explicit construction of a pseudorandom generator that fools the discrete Fourier transforms of linear functions with seed length that is nearly logarithmic (up to poly(log log) factors) in the input size and the desired error parameter. Our result gives a single pseudorandom generator that fools several important classes of tests computable in logspace that have been considered in the literature, including halfspaces (over general domains), modular tests, and combinatorial shapes. For all these classes, our generator is the first that achieves a nearly logarithmic seed length in both the input length and the error parameter. Getting such a seed length is a natural challenge in its own right, which needs to be overcome in order to derandomize RL, a central question in complexity theory.
Our construction combines ideas from a large body of prior work, ranging from a classical construction of [NN93] to the recent gradually increasing independence paradigm of [KMN11, CRSW13, GMRTV12], while also introducing some novel analytic machinery which might find other applications.

February 4th (Note unusual day, time and location!), 3pm, 107 Annenberg.
Antonina Kolokolova,
Memorial University of Newfoundland.
Title: Does looking inside the circuit help?
Abstract:
The celebrated Rice's theorem in computability theory states that any nontrivial semantic property of Turing machines is undecidable. That is, to check if the language accepted by a Turing machine satisfies some property, essentially the only thing one can do is to run the machine.
Is there a similar phenomenon in the world of complexity theory? Following a conjecture posed by Barak et al. in their paper on the impossibility of obfuscation, we ask whether working with a description of a Boolean circuit gives any computational advantage for deciding properties of a Boolean function, as opposed to black-box (oracle) access.
We show that for read-once branching programs there is indeed such an advantage. For general Boolean circuits, we make a step towards resolving this question by showing that if properties of a certain type are easier to decide given a circuit description, then there is a nontrivial algorithm for the SAT problem.
Joint work with Russell Impagliazzo, Valentine Kabanets, Pierre McKenzie and Shadab Romani.

February 5th.
Valentine Kabanets,
Simon Fraser University.
Title: Algorithms from natural lower bounds
Abstract:
Based on Hastad's (1986) circuit lower bounds, Linial, Mansour, and Nisan (J. ACM, 1993) gave a quasi-polynomial-time learning algorithm for AC^0 (constant-depth circuits with AND, OR, and NOT gates), in the PAC model over the uniform distribution. It was an open question to get a learning algorithm (of any kind) for the class of AC^0[p] circuits (constant-depth, with AND, OR, NOT, and mod-p gates for a prime p).
Our main result is a quasi-polynomial-time learning algorithm for AC^0[p] in the PAC model over the uniform distribution with membership queries. This algorithm is an application of the general connection between natural properties (in the sense of Razborov and Rudich (JCSS, 1997)) and learning algorithms. We show that a natural circuit lower bound against any (sufficiently powerful) circuit class yields a learning algorithm for the same circuit class. Then the known lower bounds against AC^0[p] by Razborov (1987) and Smolensky (1987), shown to be natural by Razborov and Rudich, imply our main result.
Joint work with Marco Carmosino (UCSD), Russell Impagliazzo (UCSD), and Antonina Kolokolova (MUN).

February 12th.
Shachar Lovett,
UCSD.
Title: Structure of protocols for XOR functions
Abstract:
Communication complexity studies the structure of optimal communication protocols. Even in the simplest case of deterministic two-party protocols, the structure of optimal protocols is unknown. For example, the famous log-rank conjecture postulates that the communication cost is related to the rank of an associated matrix.
In this work, we study XOR functions: two players receive inputs x, y in F_2^n, and their goal is to compute f(x + y), where f: F_2^n \to F_2 is a boolean function. A natural way to construct such a protocol is to simulate a parity decision tree for f. This is a decision tree model in which each node is allowed to query a linear function of the input. A parity decision tree of depth d gives a communication protocol with 2d bits of communication.
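The easy direction of this simulation can be sketched directly: at a node querying <v, z> on z = x + y, Alice announces <v, x> and Bob announces <v, y>, so both learn <v, z> at a cost of two bits per node (toy code, my own notation):

```python
def inner(v, w):
    # inner product mod 2
    return sum(a & b for a, b in zip(v, w)) % 2

def pdt_as_protocol(tree, x, y):
    """tree: a leaf value, or (v, zero_child, one_child) querying <v, z>
    on z = x + y. Returns (f(x + y), bits communicated)."""
    bits = 0
    node = tree
    while isinstance(node, tuple):
        v, zero_child, one_child = node
        a, b = inner(v, x), inner(v, y)        # Alice's bit, Bob's bit
        bits += 2                              # both bits are exchanged
        node = one_child if a ^ b else zero_child
    return node, bits

# depth-1 tree computing f(z) = z_1 XOR z_2 on 3-bit inputs
tree = ((1, 1, 0), 0, 1)
```

The hard direction, from an arbitrary c-bit protocol back to a depth-poly(c) parity decision tree, is the main result of the talk.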
Our main result establishes the reverse direction: any protocol which sends c bits implies a parity decision tree of depth poly(c). Our proof technique differs from most "simulation" techniques in communication complexity: we do not argue about individual rectangles, but instead about the structure of the entire protocol. Our result also sheds some light on quantitative differences between coloring and density versions of structural results in additive combinatorics.
Joint work with Kaave Hosseini.

February 16th.
Gil Cohen,
Caltech.
Title: Bi-Lipschitz Bijection between the Boolean Cube and the Hamming Ball
Abstract:
We construct a bi-Lipschitz bijection from the Boolean cube to the Hamming ball of equal volume. More precisely, we show that for all even n there exists an explicit bijection f from the n-dimensional Boolean cube to the Hamming ball of equal volume embedded in the (n+1)-dimensional Boolean cube, such that for all x and y the distances dist(x,y) and dist(f(x),f(y)) agree up to a constant multiplicative factor.
This result gives a strong negative answer to an open problem of Lovett and Viola [CC 2012], who raised the question in the context of sampling distributions in low-level complexity classes. The conceptual implication is that the problem of proving lower bounds in the context of sampling distributions requires ideas beyond the sensitivity-based structural results of Boppana [IPL 97].
No prior knowledge is assumed.
Joint work with Itai Benjamini and Igor Shinkar.

April 22nd.
Cris Moore,
Santa Fe Institute.
Title: Informationtheoretic bounds and phase transitions in community detection and highdimensional clustering
Abstract:
Over the past decade, a rich interaction has grown up between statistical physics and statistical inference. In particular, physics often predicts phase transitions where, if a data set becomes too sparse or too noisy, it suddenly becomes impossible to find its underlying structure, or even to distinguish it from a "null model" with no structure at all. For instance, in the community detection problem in networks, there is a transition below which the vertices cannot be labeled better than chance, and the network cannot be distinguished from an Erdos-Renyi random graph. Similarly, in the clustering problem in Gaussian mixture models, it becomes impossible to find the clusters, or even to tell whether clusters exist, i.e., to distinguish the data from a single Gaussian.
Many of the physics predictions have been made rigorous, but there is still a very lively interchange between the fields, both about these transitions and about asymptotically optimal algorithms. In particular, while efficient message-passing and spectral algorithms are known that succeed down to the so-called Kesten-Stigum bound, in a number of problems we believe that it is information-theoretically possible (but perhaps not computationally feasible) to go further. I will give a friendly introduction to this field, and present some new results proving this conjecture.
This is joint work with Jess Banks, Joe Neeman, Praneeth Netrapalli, and Jiaming Xu.

May 6th.
Virginia Vassilevska Williams,
Stanford.
Title: Fast distance product of bounded difference matrices with applications to language edit distance and RNA folding
Abstract:
The distance product of two matrices A and B is the matrix C defined as C[i,j] = min_k A[i,k] + B[k,j]. Computing the distance product of arbitrary n x n matrices is equivalent (with respect to worst-case runtime) to the All Pairs Shortest Paths problem on n-node graphs, a problem whose best runtime has been notoriously stuck at n^{3-o(1)} for decades. Nevertheless, when A and B are structured, truly subcubic algorithms, running in O(n^{3-eps}) time for some constant eps > 0, are sometimes possible.
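Concretely, the distance product is the (min, +) analogue of matrix multiplication; the naive algorithm is cubic (illustrative code of my own):

```python
def distance_product(A, B):
    """Naive O(n^3) (min, +) product: C[i][j] = min over k of A[i][k] + B[k][j]."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

Repeatedly squaring a graph's weight matrix with this product (log n times) computes all-pairs shortest paths, which is why the two problems are runtime-equivalent.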
An interesting structured case that has so far resisted improvements (and whose best distance product algorithm is not known to take truly subcubic time) is that of bounded difference (BDD) matrices. A matrix A is said to be an L-bounded difference (L-BDD) matrix if every two horizontally or vertically adjacent entries differ by at most L, i.e., for all i, j: |A[i,j] - A[i,j+1]| <= L and |A[i,j] - A[i+1,j]| <= L.
In this work we design the first truly subcubic algorithm for L-BDD matrices for small L, thus obtaining the first truly subcubic algorithms for RNA Folding and Context-Free Language Edit Distance.

September 25th.
Ravi Kannan,
Microsoft Research.
Title: Topic Modeling and NMF using Soft Clustering
Abstract:
The model fitting problem in Topic Modeling is a special case of Non-Negative Matrix Factorization (NMF), and both are computationally hard. Soft clustering, where each data point can belong fractionally to several clusters, is a useful tool for both. We make two main assumptions on the data: that each data point belongs dominantly to one cluster, and that each cluster has some dominant features. We prove that an algorithm with three natural steps (thresholding, SVD, and k-means) performs the soft clustering in polynomial time. We use this to solve Topic Modeling and NMF with provable error guarantees that are better than those of known algorithms. We also demonstrate good empirical performance of the algorithm, as well as the reasonableness of the assumptions.
Joint work with T. Bansal, C. Bhattacharyya, N. Goyal, and J. Pani.

October 2nd.
Ronen Shaltiel,
University of Haifa.
Title: Incompressible Functions, Relative Error Extractors, and the Power of Nondeterministic Reductions
Abstract:
A circuit $C$ compresses a function $f$ from $n$ bits to $m$ bits if, given an $n$-bit input $x$, the circuit $C$ can shrink $x$ to a shorter $\ell$-bit string $C(x)=x'$ such that later, a computationally unbounded solver $D$ will be able to compute $f(x)$ based only on $x'$. We study the existence of functions which are incompressible by circuits of some fixed polynomial size $s=n^c$. Motivated by cryptographic applications, we focus on average-case incompressibility, which guarantees that on a random input $x$, for every size-$s$ circuit $C$ and any unbounded solver $D$, the success probability $\Pr_x[D(C(x))=f(x)]$ is upper bounded by $2^{-m}+\epsilon$. While this notion of incompressibility appeared in several works (e.g., Dubrov and Ishai, STOC 06), so far no explicit constructions of efficiently computable incompressible functions were known.
Assuming that $E=DTIME(2^{O(n)})$ is hard for exponential-size nondeterministic circuits, we construct a polynomial-time computable boolean function which is incompressible by size-$n^c$ circuits with communication $\ell=(1-o(1)) \cdot n$ and error $\epsilon=n^{-c}$. Our technique generalizes to the case of PRGs against non-boolean circuits, improving and simplifying the previous construction of Shaltiel and Artemenko (STOC 14).
We can achieve a negligible error of $\epsilon=n^{-c} \cdot 2^{-m}$ when constructing non-boolean incompressible functions (under a related, stronger assumption).
We also show that "black-box techniques" cannot obtain negligible error when constructing an incompressible boolean function. This result is quite general and translates to limitations on "hardness amplification" and "constructions of pseudorandom generators", explaining in retrospect why previous constructions were not able to achieve negligible error in these settings.
This is based on joint work with Benny Applebaum, Sergei Artemenko and Guang Yang.

October 23rd.
Omer Tamuz,
Caltech.
Title: The Speed of Social Learning
Abstract:
We consider Bayesian agents who learn from private signals as well as from the actions of others. We show that learning from actions is slower than learning from signals, and that increased interaction between the agents can lower the speed of learning.
Joint with Matan Harel, Elchanan Mossel and Philipp Strack.

January 9th.
Jordan Greenblatt,
UCLA.
Title: The Interplay Between Structure of Finite Graphs and Maximal Averages on Their Cartesian Powers
Abstract:
In 2013, Harrow, Kolla, and Schulman published a proof that the spherical maximal averaging operator on the hypercube satisfies an L^2 bound independent of dimension. Later, Krause extended the bound to all L^p with p > 1 and, together with Kolla and Schulman, we extended the result to arbitrary finite cliques. I will provide exposition of the classical theory and applications of maximal operators, and discuss the proof of dimension-independent bounds for clique powers/hypercubes. Then I will talk about my current research concerning asymptotic bounds for more general graphs.

February 13th.
Alexander Sherstov,
UCLA.
Title:
Breaking the Minsky-Papert Barrier for Constant-Depth Circuits
Abstract:
The threshold degree of a Boolean function f is the minimum degree of a real polynomial p that represents f in sign: f(x) = sgn p(x). In a seminal 1969 monograph, Minsky and Papert constructed a polynomial-size constant-depth AND/OR circuit in n variables with threshold degree Omega(n^{1/3}). This bound underlies the fastest known algorithms for constant-depth circuits. It has been an open problem (O'Donnell and Servedio, STOC 2003) to improve Minsky and Papert's bound to n^{1/3+Omega(1)}.
We give a detailed solution to this problem. For any k >= 1, we construct an AND/OR formula of size n and depth k with threshold degree Omega(n^{(k-1)/(2k-1)}). This lower bound nearly matches the famous O(sqrt(n)) bound for arbitrary formulas (Ambainis, Childs, Reichardt, Spalek, Zhang, FOCS 2007). Our result proves a conjecture due to O'Donnell and Servedio (STOC 2003) and a different conjecture due to Bun and Thaler (2013).
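The notion of sign-representation is easy to check by brute force on small examples; a toy sketch (the polynomial and function here are my own illustrations, not the construction from the talk):

```python
from itertools import product

def sign_represents(p, f, n):
    """Check that sgn(p(x)) = f(x) on {0,1}^n, with f valued in {-1, +1}
    and p required to be nonzero everywhere on the cube."""
    return all(p(x) != 0 and ((p(x) > 0) == (f(x) > 0))
               for x in product((0, 1), repeat=n))

AND = lambda x: 1 if all(x) else -1        # +1 iff every bit is 1
p = lambda x: sum(x) - (len(x) - 0.5)      # degree-1 sign-representation of AND
```

AND itself has threshold degree 1, as the linear polynomial above shows; the difficulty, and the content of the lower bound, is that AND/OR circuits built from such gates force the degree up polynomially.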

February 20th.
Yuval Rabani,
The Hebrew University of Jerusalem.
Title:
Learning Arbitrary Statistical Mixtures of Discrete Distributions
Abstract:
We study the problem of learning, from unlabeled samples, very general statistical mixture models on large finite sets. Specifically, we want to learn a model which is a mixture of distributions $p$ on $\{1,2,\dots,n\}$. When we sample from the model, we do not observe a specific $p$ from the mixture directly, but rather indirectly and in a very noisy fashion: we observe $K$ independent samples from $\{1,2,\dots,n\}$ distributed according to a single $p$ drawn from the mixture. We give efficient algorithms for learning this mixture model without making any restricting assumptions on the structure of the mixture. We bound the quality of the solution as a function of the size of the samples $K$ and the number of samples used. Our model and results have applications to a variety of unsupervised learning scenarios, including learning topic models and collaborative filtering.
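The observation model is a two-stage sampling procedure (a generative sketch under my own naming; K plays the role of the per-observation batch size):

```python
import random

def sample_batch(mixture, K, rng=random):
    """Draw one observation: first pick a distribution p from the mixture
    (a list of (weight, p) pairs, p a distribution on {0,...,n-1}), then
    draw K i.i.d. symbols from p. The learner never sees p itself."""
    weights = [w for w, _ in mixture]
    p = rng.choices([d for _, d in mixture], weights=weights, k=1)[0]
    return rng.choices(range(len(p)), weights=p, k=K)

random.seed(5)
mixture = [(0.5, [0.9, 0.1, 0.0]), (0.5, [0.0, 0.1, 0.9])]
batch = sample_batch(mixture, K=10)
```

In topic-model terms, each batch is a K-word document generated from a hidden topic distribution p.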
Most of the talk is based on a joint paper with Jian Li, Leonard J. Schulman, and Chaitanya Swamy. We will also mention some earlier work with some of the above coauthors.

April 17th.
Quentin Berthet,
Caltech.
Title:
Detection of Planted Solutions for Flat Satisfiability Problems
Abstract:
We study the detection problem of finding planted solutions in random instances of flat satisfiability problems, a generalization of boolean satisfiability formulas. We describe the properties of random instances of flat satisfiability, as well as the optimal rates of detection for the associated hypothesis testing problem. We also study the performance of an algorithmically efficient testing procedure. Introducing a modification of our model, the light planting of solutions, allows us to strongly hint at the difficulty of detecting planted flat satisfiability for a wide class of tests, by relating this problem to learning parity with noise.

May 1st.
Jaikumar Radhakrishnan,
Tata Institute of Fundamental Research (TBC).
Title:
Set membership with two bit probes
Abstract:
We will consider the bit-probe complexity of the set membership problem, where a set S of size at most n from a universe of size m is to be represented as a short bit vector in order to answer membership queries of the form "Is x in S?" by adaptively probing the bit vector at t places. Let s(m,n,t) be the minimum number of bits of storage needed for such a scheme. Alon and Feige showed that for t=2 (two bit probes) such schemes can be obtained from dense graphs with large girth. In particular, they showed that for n at most log m, s(m,n,2) = O(m n log((log m) / n) / log m). We improve their analysis and obtain a better upper bound and a corresponding lower bound.
Upper bound: There is a constant C>0 such that for all large m, s(m,n,2) is at most C m^{1-1/(4n+1)}.
Lower bound: There is a constant D>0 such that for n>=4 and all large m, s(m,n,2) is at least D m^{1-4/n}.
(This is joint work with Mohit Garg.)

May 15th.
Sergey Yekhanin,
Microsoft Research.
Title:
Kolmogorov width of discrete linear spaces: an approach to matrix rigidity
Abstract:
A square matrix V (over any field) is called rigid if every matrix obtained by altering a small number of entries of V has sufficiently high rank. While random matrices are rigid with high probability, no explicit constructions of rigid matrices are known to date. Obtaining such explicit matrices would have major implications in computational complexity theory. One approach to establishing rigidity of a matrix V is to come up with a property that is satisfied by any collection of vectors arising from a low-dimensional space, but is not satisfied by the rows of V even after alterations. In this work we propose such a candidate property that has the potential of establishing rigidity of combinatorial design matrices over the binary field. Stated informally, we conjecture that under a suitable embedding of the Boolean cube into the Euclidean space, vectors arising from a low-dimensional linear space modulo two always have somewhat small Kolmogorov width, i.e., admit a nontrivial simultaneous approximation by a low-dimensional Euclidean space. This implies rigidity of combinatorial designs, as their rows do not admit such an approximation even after alterations. Our main technical contribution is a collection of results establishing weaker forms and special cases of the conjecture above.
(Joint work with Alex Samorodnitsky and Ilya Shkredov).

May 29th.
Alistair Sinclair,
UC Berkeley.
Title:
Dynamics for the random-cluster model
Abstract:
The random-cluster model is a unifying framework for studying random graphs, spin systems in physics and random spanning trees. The model is closely related to, though much more general than, the classical Ising and Potts models, but its dynamics are much less well understood. In this talk we study a natural non-local Markov chain known as the Chayes-Machta dynamics for the mean-field case of the random-cluster model, and identify a critical interval $(\lambda_s, \lambda_S)$ of the model parameter $\lambda$ in which the dynamics undergoes an exponential slowdown. Namely, we prove that the mixing time is $\Theta(\log n)$ for $\lambda$ outside the above interval, and $\exp(\Omega(\sqrt{n}))$ for $\lambda$ inside the interval. These results hold for all values of the second model parameter $q>1$. Thus, we obtain the first analysis of a dynamics for the random-cluster model for values of $q$ other than the already well-understood special case $q=2$ (which corresponds to the Ising model), over almost the full range of values of $\lambda$. In addition, we prove that the local heat-bath dynamics undergoes a similar exponential slowdown in $(\lambda_s, \lambda_S)$.
This is joint work with Antonio Blanca.

Thursday
October 2nd, 12pm (unusual day!).
Mor Weiss,
Technion.
Title:
Probabilistically Checkable Proofs of Proximity with Zero-Knowledge.
Abstract:
A Probabilistically Checkable Proof (PCP) allows a randomized verifier, with oracle access to a purported proof, to probabilistically verify an input statement of the form ``$x\in L$'' by querying only a few bits of the proof. A \emph{PCP of proximity} (PCPP) has the additional feature of allowing the verifier to query only a few bits of the {\em input} $x$, where if the input is accepted then the verifier is guaranteed that (with high probability) the input is \emph{close} to some $x'\in L$.
Motivated by their usefulness for sublinear-communication cryptography, we initiate the study of a natural zero-knowledge variant of PCPP (ZKPCPP), where the view of any verifier making a bounded number of queries can be efficiently simulated by making the same number of queries to \emph{the input oracle alone}. This new notion provides a useful extension of the standard notion of zero-knowledge PCPs. We obtain two types of results.
Constructions. We obtain the first constructions of query-efficient ZK-PCPPs via a general transformation which combines standard query-efficient PCPPs with protocols for secure multiparty computation. As a byproduct, our construction provides a conceptually simpler alternative to a previous construction of honest-verifier zero-knowledge PCPs due to Dwork et al. (Crypto '92).
Applications. We motivate the notion of ZK-PCPPs by applying it towards sublinear-communication implementations of commit-and-prove functionalities. Concretely, we present the first sublinear-communication commit-and-prove protocols which make a {\em black-box} use of a collision-resistant hash function, and the first such multiparty protocols which offer {\em information-theoretic} security in the presence of an honest majority.

Thursday
October 16th.
(Special seminar:
unusual day!)
Christos Papadimitriou,
UC Berkeley.
Title:
Computational Insights and the Theory of Evolution
Abstract:
Covertly, computational ideas have influenced the Theory of Evolution from the very start. This talk is about recent work on Evolution that was inspired and informed by computation. Considerations about the performance of genetic algorithms led to a novel theory of the role of sex in Evolution based on the concept of mixability, while the equations describing the evolution of a species can be reinterpreted as a repeated game between genes. The complexity of local optimality informs the mystery of variation, and a theorem on Boolean functions helps us understand better the emergence of complex adaptations.

October 17th.
No seminar:
SoCal Theory Day!
See here for the program and registration.

October 31st.
David Kempe,
USC.
Title:
Incentivizing Exploration
(joint work with Peter Frazier, Jon Kleinberg, Robert Kleinberg).
Abstract:
We study a Bayesian multi-armed bandit (MAB) setting in which a principal seeks to maximize the sum of expected time-discounted rewards obtained by pulling arms, when the arms are actually pulled by selfish and myopic individuals. Since such individuals pull the arm with highest expected posterior reward (i.e., they always exploit and never explore), the principal must incentivize them to explore by offering suitable payments. Among other applications, this setting models crowdsourced information discovery and funding agencies incentivizing scientists to perform high-risk, high-reward research.
We explore the tradeoff between the principal's total expected time-discounted incentive payments and the total time-discounted rewards realized. Specifically, with a time-discount factor gamma in (0,1), let OPT denote the total expected time-discounted reward achievable by a principal who pulls arms directly without having to incentivize selfish agents, in a MAB problem. We call a (payment, reward) combination (b,rho) in [0,1]^2 achievable if for every MAB instance, using expected time-discounted payments of at most b*OPT, the principal can guarantee an expected time-discounted reward of at least rho*OPT. Our main result is a complete characterization of achievable (payment, reward) pairs: (b,rho) is achievable if and only if sqrt(b) + sqrt(1-rho) >= sqrt(gamma).
In proving this characterization, we analyze so-called time-expanded policies, which in each step let the agents choose myopically with some probability p, and incentivize them to choose "optimally" with probability 1-p. The analysis of time-expanded policies leads to a question that may be of independent interest: if the same MAB instance (without selfish agents) is considered under two different time-discount rates gamma and eta, how small can the ratio of OPT(eta) to OPT(gamma) be? We give a complete answer to this question, showing that OPT(eta) >= ((1-gamma)^2/(1-eta)^2) OPT(gamma), and that this bound is tight.
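As a small numerical illustration of the characterization sqrt(b) + sqrt(1-rho) >= sqrt(gamma) (this sketch and its helper names are not from the talk), one can check achievability of a (payment, reward) pair and solve the frontier equation for the best reward at a given budget:

```python
import math

def achievable(b: float, rho: float, gamma: float) -> bool:
    """Check whether the pair (b, rho) in [0,1]^2 lies on the achievable
    side of the frontier sqrt(b) + sqrt(1 - rho) >= sqrt(gamma)."""
    return math.sqrt(b) + math.sqrt(1.0 - rho) >= math.sqrt(gamma)

def max_reward(b: float, gamma: float) -> float:
    """Largest reward fraction rho achievable with payment budget b*OPT,
    obtained by solving the frontier equation for rho."""
    if b >= gamma:                  # enough budget to recover the full OPT
        return 1.0
    return 1.0 - (math.sqrt(gamma) - math.sqrt(b)) ** 2
```

For instance, with gamma = 0.9 and budget b = 0.1, the best achievable reward is rho = 1 - (sqrt(0.9) - sqrt(0.1))^2 = 0.6, i.e. a 10% payment budget already guarantees 60% of OPT.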

November 7th.
Lorenzo Orecchia,
MIT/Boston University.
Title:
First-Order Iterative Methods in the Design of Fast Algorithms: from Multiplicative Weight Updates to Nesterov's Method.
Abstract:
Fast iterative methods from Convex Optimization play a crucial role in a number of recent breakthroughs in the design of nearly-linear-time algorithms for fundamental combinatorial problems, such as maximum flow and graph partitioning. Multiplicative Weight Updates, Gradient Descent, and Nesterov's Method have become a mainstay in the construction and analysis of fast algorithms.
The power of these approaches raises a number of questions: What is the exact relation between Multiplicative Weight Updates and Gradient Descent? Why do Multiplicative Weight Updates show up in so many settings? What is the intuition behind Nesterov's iterative algorithm that achieves the asymptotically optimal iteration count for smooth convex functions?
In this survey talk, I will provide answers by presenting a unified framework that reveals the power and limitations of each method and provides much needed geometric intuition. Among other insights, I will explain how Multiplicative Weight Updates are a particular instance of a dual iterative method, known as Mirror Descent, and how Nesterov's algorithm can be naturally derived as a combination of Mirror and Gradient Descent.
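The claim that Multiplicative Weight Updates are an instance of Mirror Descent can be seen concretely: over the probability simplex with the negative-entropy mirror map, the mirror-descent step (map to the dual via log, take a gradient step, map back via exp, renormalize) is algebraically identical to the exponential-weights update. The sketch below (my own, not from the talk) verifies this numerically:

```python
import numpy as np

def mwu_step(w, loss, eta):
    """One multiplicative-weights update: reweight by exp(-eta*loss), renormalize."""
    w_new = w * np.exp(-eta * loss)
    return w_new / w_new.sum()

def mirror_descent_step(w, loss, eta):
    """One mirror-descent step over the simplex with the negative-entropy
    mirror map. The mirror map sends w to log(w); the gradient step happens
    in that dual space; exp maps back; the Bregman projection onto the
    simplex is just renormalization."""
    dual = np.log(w) - eta * loss   # gradient step in the dual (mirror) space
    w_new = np.exp(dual)            # map back to the primal space
    return w_new / w_new.sum()
```

Since exp(log w - eta*loss) = w * exp(-eta*loss), the two functions agree coordinate-by-coordinate, which is exactly the "MWU = Mirror Descent with entropy" correspondence the abstract refers to.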
As an example of the application of these insights, I will also discuss their crucial role in recent progress in the nearly-linear-time solution of packing linear programs, both in parallel and sequentially.
The material in this talk is joint work with Zeyuan Allen-Zhu (MIT).

Wednesday
November 12th, 4pm (unusual day and time!).
Moritz Hardt,
IBM Almaden.
Title:
The Reusable Holdout: Preserving Validity in Interactive Data Analysis.
Abstract:
A great deal of effort has been devoted to reducing the risk of spurious scientific discoveries, from the use of holdout sets and sophisticated cross-validation techniques, to procedures for controlling the false discovery rate in multiple hypothesis testing. However, there is a fundamental disconnect between the theoretical results and the practice of science: the theory assumes a fixed collection of hypotheses to be tested, or learning algorithms to be applied, selected non-adaptively before the data are gathered, whereas science is by definition an adaptive process, in which data are shared and reused, and hypotheses and new studies are generated on the basis of data exploration and previous outcomes.
Surprisingly, the challenges of adaptivity can be addressed using insights from differential privacy, a field of study supporting a definition of privacy tailored to private data analysis. As a corollary we show how to safely reuse a holdout set a great many times without undermining its validation power, even when hypotheses and computations are chosen adaptively. Armed with this technique, the analyst is free to explore the data ad libitum, generating and evaluating hypotheses, verifying results on the holdout, and backtracking as needed.
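A minimal sketch of a reusable holdout in the style of the Thresholdout mechanism from this line of work (simplified; the class name, parameter values, and noise scales here are illustrative, not the paper's): each adaptive query is answered with the training estimate unless it disagrees with the holdout estimate by more than a noisy threshold, in which case a noised holdout estimate is returned.

```python
import numpy as np

class ReusableHoldout:
    """Simplified Thresholdout-style mechanism (illustrative parameters).

    A query is a function mapping a sample point to [0, 1]. The mechanism
    returns the training-set mean unless it strays from the holdout mean
    by more than a noisy threshold; only then is a noised holdout mean
    revealed. The Laplace noise limits how much information adaptive
    queries can extract from the holdout set."""

    def __init__(self, train, holdout, threshold=0.04, sigma=0.01, seed=0):
        self.train, self.holdout = train, holdout
        self.threshold, self.sigma = threshold, sigma
        self.rng = np.random.default_rng(seed)

    def query(self, phi):
        a_train = np.mean([phi(x) for x in self.train])
        a_hold = np.mean([phi(x) for x in self.holdout])
        noisy_gap = abs(a_train - a_hold) + self.rng.laplace(0, self.sigma)
        if noisy_gap <= self.threshold:
            return a_train   # estimates agree: reveal nothing about the holdout
        return a_hold + self.rng.laplace(0, self.sigma)  # noised holdout answer
```

The key design choice is that agreeing queries are answered from the training set alone, so the holdout's "validation budget" is only spent on the (few) queries where the analyst has overfit.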
Joint work with Cynthia Dwork, Vitaly Feldman, Toni Pitassi, Omer Reingold and Aaron Roth.

December 5th.
Andrea Montanari,
Stanford.
Title:
The hidden subgraph problem.
Abstract:
I will discuss the problem of finding a hidden subgraph in an otherwise random graph. The distinguishing feature of the hidden subgraph is that its average degree is higher than the background. This problem has applications in a variety of contexts, from network analysis to clustering, and is intimately connected to dimensionality reduction and principal component analysis.
I will consider two different regimes: a sparse graph regime, where the graph degree remains bounded as the number of vertices diverges, and a dense graph regime, where the average degree is proportional to the number of vertices. The latter setting includes as a special case the hidden clique problem. I will show that both regimes are characterized by computational and statistical phase transitions, and that indeed these phase transitions can be described in a unified manner.
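To make the hidden clique special case concrete, here is a sketch of the classical spectral approach (one standard method for the dense regime, not necessarily the talk's; all names are mine): after centering the adjacency matrix of G(n, 1/2), a planted clique that is large enough shifts the top eigenvector toward the clique's indicator, so the largest entries of that eigenvector locate the clique.

```python
import numpy as np

def plant_clique(n, k, rng):
    """Adjacency matrix of G(n, 1/2) with a clique planted on vertices 0..k-1."""
    A = rng.random((n, n)) < 0.5
    A = np.triu(A, 1)
    A = (A | A.T).astype(float)   # symmetrize
    A[:k, :k] = 1.0               # plant the clique
    np.fill_diagonal(A, 0.0)
    return A

def spectral_candidates(A, k):
    """Indices of the k largest-magnitude entries of the top eigenvector of
    the centered matrix A - J/2 (centering removes the G(n,1/2) mean)."""
    n = A.shape[0]
    M = A - 0.5 * (np.ones((n, n)) - np.eye(n))
    vals, vecs = np.linalg.eigh(M)
    v = vecs[:, np.argmax(np.abs(vals))]
    return set(np.argsort(-np.abs(v))[:k])
```

Heuristically this works once k is a large constant times sqrt(n): the clique contributes an eigenvalue of order k/2, while the random background's spectrum stays within roughly +/- sqrt(n).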
[Based on joint work with Yash Deshpande]