Chris Umans Research Summary

WARNING: outdated! This page was written in 2007 and hasn't been updated since then.

The following summary is organized around four research areas: derandomization, explicit constructions, algorithms for algebraic problems, and the complexity of problems in the Polynomial-Time Hierarchy. Online versions of all papers listed below can be found on my research page.

Derandomization:

One of the central open questions in Computational Complexity concerns the power of randomness in computation. Randomness is a key resource that is by now ubiquitous in Computer Science: many problems have clean and powerful solutions when the (idealized) computer is permitted to "flip coins", and there are even problems that have an efficient randomized solution but no known efficient deterministic solution. The core question, then, is this: does randomized computation afford greater computational power than ordinary deterministic computation?
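
A standard example of such a problem, mentioned here only for illustration, is polynomial identity testing: deciding whether an arithmetic expression computes the identically-zero polynomial has a simple and efficient randomized algorithm based on the Schwartz-Zippel lemma, but no efficient deterministic algorithm is known. A minimal sketch (the black-box polynomial, field size, and number of trials below are placeholder choices):

    import random

    # Minimal illustration of randomized polynomial identity testing.
    # A nonzero polynomial of degree at most d vanishes on at most a d/|S|
    # fraction of points drawn from S^n (Schwartz-Zippel lemma), so a random
    # evaluation detects nonzero polynomials with high probability.

    def is_probably_zero(poly, num_vars, degree_bound, trials=20, prime=2**31 - 1):
        """poly: a black box mapping a tuple of field elements to a field element
        (e.g., an arithmetic circuit evaluated mod `prime`).  Returns True if every
        random evaluation is zero; a nonzero polynomial escapes detection with
        probability at most (degree_bound/prime)**trials."""
        for _ in range(trials):
            point = tuple(random.randrange(prime) for _ in range(num_vars))
            if poly(point) % prime != 0:
                return False      # witness found: definitely not the zero polynomial
        return True               # no witness found: zero with high probability

    # Example: (x + y)^2 - (x^2 + 2xy + y^2) is identically zero.
    p = 2**31 - 1
    f = lambda v: ((v[0] + v[1])**2 - (v[0]**2 + 2*v[0]*v[1] + v[1]**2)) % p
    print(is_probably_zero(f, num_vars=2, degree_bound=2))   # True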

While the P vs. NP question asks whether there is a generic algorithmic technique for efficiently simulating non-determinism, the above question can be regarded as asking: Is there a generic algorithmic technique for efficiently and deterministically simulating all efficient randomized algorithms? Unlike most other questions of this sort, the answer appears to be "yes" -- although strong evidence supporting this view only emerged relatively recently. Derandomization theory, which investigates the conditions under which such wholesale derandomization is possible, has developed into one of the most active research areas of Computational Complexity, and the general techniques that have been devised for "removing randomness" from computation are among the most sophisticated and technically involved in the field.

My papers in this area:

Explicit Constructions:

In some cases a computational procedure's reliance on randomness is (often implicitly) confined to exploiting some combinatorial structure with specific "random-like" properties. Perhaps the most venerable example is that of expander graphs, which are sparse yet highly-connected networks. Other fundamental combinatorial objects include error-correcting codes, families of hash functions, and randomness extractors. All of these objects have proven to be extremely useful tools in the design of algorithms and of network and distributed protocols, as well as in a diverse array of applications within Computational Complexity itself. Indeed, it is not surprising that objects possessing certain extremal combinatorial properties (as these objects do) should naturally appear in arguments that explore the boundary of what is computationally possible.

Probabilistic constructions exist for all of these objects; however, most applications require something much stronger -- an efficient deterministic, or explicit, construction. Explicit constructions are typically much harder to achieve, and indeed constitute a form of derandomization.
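
As a small, classical illustration of what "explicit" means -- a textbook construction, not one of the objects from my papers -- consider pairwise-independent hash functions: the family is given by a simple formula, so sampling and evaluating a random member is trivially efficient, with no exhaustive search over random objects. A minimal sketch (the prime modulus below is an arbitrary placeholder):

    import random

    # A classical explicit construction: the hash family
    #   h_{a,b}(x) = ((a*x + b) mod p) mod m
    # of Carter and Wegman.  Each function is specified by just two field
    # elements, and for any fixed x != y the pair (h(x), h(y)) is (nearly)
    # uniformly distributed when a and b are chosen at random.

    P = 2**61 - 1          # a prime larger than the universe size (placeholder)

    def sample_hash(m):
        """Sample a random member of the family mapping {0,...,P-1} -> {0,...,m-1}."""
        a = random.randrange(1, P)
        b = random.randrange(P)
        return lambda x: ((a * x + b) % P) % m

    h = sample_hash(1024)
    print(h(42), h(43))    # two (pseudo)random-looking bucket indices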

My research has focused primarily on obtaining explicit constructions of randomness extractors, which are bipartite graphs with the following "random-like" property: any distribution of nodes on the left with sufficient entropy induces a nearly-uniform distribution on the nodes on the right. The original motivation for extractors is that they supply the algorithmic component necessary for using "real" randomness emanating from a physical source to execute randomized computations (as opposed to ad-hoc simulation on an otherwise inherently deterministic machine). But extractors have since emerged as a fundamental combinatorial object by dint of numerous applications in a wide variety of settings unrelated to their original motivation. Explicit extractor constructions have been the subject of a long and intensive line of research over the past 15 or more years.
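
In standard notation (recorded here for reference; the parameters are the usual ones and not specific to any particular construction), the bipartite-graph picture above corresponds to the following definition:

    A function $\mathrm{Ext}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m$ is a
    $(k,\varepsilon)$-extractor if for every distribution $X$ on $\{0,1\}^n$ with
    min-entropy $H_\infty(X) \ge k$, the output distribution $\mathrm{Ext}(X, U_d)$
    is within statistical distance $\varepsilon$ of the uniform distribution $U_m$.
    In graph terms, the $2^n$ left vertices are the possible source samples, each
    with $2^d$ outgoing edges (one per seed), and the $2^m$ right vertices are the
    possible outputs.  An explicit construction computes $\mathrm{Ext}$ in time
    polynomial in $n$ while keeping the seed length $d$ and the entropy loss
    $k - m$ as small as possible.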

My papers in this area:

Algorithms for Algebraic Problems:

Matrix multiplication is perhaps the most fundamental problem in algorithmic linear algebra. This claim is justified by the fact that numerous other important problems are known to have the same algorithmic complexity as matrix multiplication, including computing the matrix inverse, computing the determinant, solving a system of linear equations, and computing various matrix decompositions. These problems lie at the core of application areas within scientific computing, information indexing and retrieval, cryptanalysis, data mining, network routing, and others. So, resolving the complexity of matrix multiplication automatically impacts the complexity of a host of other problems, via known reductions.
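
One representative reduction, included only to illustrate the claim, is Strassen's classical observation that inversion reduces to multiplication (the version below assumes the leading block is invertible; the general case needs extra care):

    For $A = \begin{pmatrix} B & C \\ D & E \end{pmatrix}$ with $B$ invertible and
    Schur complement $S = E - D B^{-1} C$,
    \[
      A^{-1} \;=\; \begin{pmatrix}
        B^{-1} + B^{-1} C S^{-1} D B^{-1} & -B^{-1} C S^{-1} \\
        -S^{-1} D B^{-1} & S^{-1}
      \end{pmatrix},
    \]
    so inverting an $n \times n$ matrix costs two recursive inversions of size
    $n/2$ plus a constant number of $(n/2) \times (n/2)$ multiplications, giving
    $I(n) \le 2\,I(n/2) + O(M(n)) = O(M(n))$ whenever $M(n) = \Omega(n^{2+\delta})$
    for some $\delta > 0$.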

Given the importance of matrix multiplication, it is surprising that the best known algorithms are far from optimal. The best known algorithm for n x n matrix multiplication, due to Coppersmith and Winograd, has running time O(n^{2.376}), whereas most researchers believe that an algorithm with running time O(n^{2+epsilon}) is possible for every epsilon > 0 ("the exponent of matrix multiplication is 2").
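
To give a sense of how sub-cubic running times arise at all, here is the classical starting point, Strassen's algorithm: seven multiplications of half-size blocks replace the eight of the naive method, giving running time O(n^{log_2 7}) = O(n^{2.81}). The sketch below is for illustration only (it assumes the dimension is a power of two and recurses all the way down to 1 x 1 blocks) and is not the group-theoretic approach described next:

    import numpy as np

    # Strassen's recursion: 7 multiplications of half-size blocks instead of 8,
    # giving T(n) = 7*T(n/2) + O(n^2) = O(n^{log_2 7}) ~ O(n^{2.81}).

    def strassen(A, B):
        n = A.shape[0]
        if n == 1:
            return A * B
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        M1 = strassen(A11 + A22, B11 + B22)
        M2 = strassen(A21 + A22, B11)
        M3 = strassen(A11, B12 - B22)
        M4 = strassen(A22, B21 - B11)
        M5 = strassen(A11 + A12, B22)
        M6 = strassen(A21 - A11, B11 + B12)
        M7 = strassen(A12 - A22, B21 + B22)
        C11 = M1 + M4 - M5 + M7
        C12 = M3 + M5
        C21 = M2 + M4
        C22 = M1 - M2 + M3 + M6
        return np.block([[C11, C12], [C21, C22]])

    A = np.random.randint(0, 10, (8, 8))
    B = np.random.randint(0, 10, (8, 8))
    assert np.array_equal(strassen(A, B), A @ B)   # matches the naive product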

Coppersmith and Winograd's algorithm, first published in 1987, marked the end of a sequence of improvements that began with Strassen's breakthrough algorithm in 1969; over dozens of papers, numerous authors developed increasingly sophisticated algorithms for matrix multiplication. We propose a new approach that uses the Discrete Fourier Transform (DFT) over non-abelian finite groups to perform fast matrix multiplication. This approach imports the problem into the domain of Group Theory and Representation Theory; the challenge of designing a fast algorithm for matrix multiplication is thereby exposed to a rich set of mathematical tools and techniques.

There is a SIAM News article on this work.

In the same way that matrix multiplication is a fundamental problem in the domain of linear-algebraic computations, modular composition is a fundamental problem in the domain of computations with polynomials. It plays a similar role in that the fastest known algorithms for a diverse array of problems (most notably polynomial factorization, but also irreducibility testing, computing minimal polynomials, and manipulating normal bases) depend on fast algorithms for modular composition. Like matrix multiplication, it has stubbornly resisted progress for a long time: the only non-trivial algorithm for modular composition dates to 1978 (and, coincidentally, that algorithm relies on fast matrix multiplication).
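
For concreteness, a definition by example rather than anything from the fast algorithms just mentioned: modular composition asks, given polynomials f, g, and h over a field, for f(g(x)) mod h(x). The straightforward Horner-style computation below uses about deg(f) multiplications of polynomials modulo h; the 1978 algorithm already does substantially better. (The coefficient representation and the small prime field are placeholder choices.)

    # Modular composition by Horner's rule: given f, g, h over GF(P), compute
    # f(g(x)) mod h(x).  Polynomials are lists of coefficients, lowest degree
    # first; h is assumed monic.

    P = 101  # a small prime, for illustration

    def poly_mulmod(a, b, h):
        """Multiply polynomials a and b, then reduce modulo the monic polynomial h."""
        prod = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                prod[i + j] = (prod[i + j] + ai * bj) % P
        d = len(h) - 1
        while len(prod) > d:                       # cancel the leading term using h
            lead = prod.pop()
            for k in range(d):
                prod[len(prod) - d + k] = (prod[len(prod) - d + k] - lead * h[k]) % P
        return prod

    def modular_composition(f, g, h):
        """Compute f(g(x)) mod h(x) by Horner's rule: ~deg(f) multiplications mod h."""
        result = [f[-1] % P]
        for coeff in reversed(f[:-1]):
            result = poly_mulmod(result, g, h)
            result[0] = (result[0] + coeff) % P
        return result

    # Example: f = x^2 + 1, g = x + 2, h = x^3 - 1.
    # f(g(x)) = (x+2)^2 + 1 = x^2 + 4x + 5, already reduced mod h.
    print(modular_composition([1, 0, 1], [2, 1], [(-1) % P, 0, 0, 1]))  # [5, 4, 1]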

This work gives a new algorithm that solves the problem optimally (up to lower order terms) in small characteristic:

Complexity and Approximability of Problems in the Polynomial Hierarchy:

The complexity classes that form the Polynomial-Time Hierarchy (PH) were defined in the 1970s to capture the complexity of a number of natural problems that are "harder than NP-complete." The best-known of these are circuit minimization problems, fundamental problems in logic synthesis and circuit design that are solved in the real world using a variety of heuristics. Just as with NP problems, a mathematical understanding of these problems' complexity can expose the solution techniques that have a chance to succeed, and those likely to fail. However, achieving such an understanding seems more difficult than at the NP level: since the 1970s, researchers have been unable to prove (the conjectured) completeness of circuit minimization problems for the second level of the PH.

I showed that the most important special case of circuit minimization, DNF minimization, is complete for the second level of the PH, resolving a longstanding conjecture. My subsequent work in the area addresses the complexity of problems in the PH, limits on their approximability, and approximation algorithms for some of them. We establish that more than a dozen natural optimization problems drawn from logic synthesis, learning theory, game theory, and combinatorics are complete for various classes within the PH, and that many are hard to approximate even to within very large factors.
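
The quantifier structure behind the DNF result can be made concrete. In one natural formulation, the question is whether a given DNF formula phi has an equivalent DNF with at most k terms: there must EXIST a small candidate formula such that FOR ALL assignments it agrees with phi. The brute-force sketch below only displays this exists-forall structure; it is not the reduction from the paper, and the precise size measure there is handled more carefully.

    from itertools import combinations, product

    # Brute-force illustration of the Sigma_2 structure of DNF minimization:
    #   EXISTS a DNF psi with at most k terms  such that
    #   FOR ALL assignments x, psi(x) == phi(x).
    # A DNF is a list of terms; a term maps variable indices to required values.

    def eval_dnf(dnf, assignment):
        return any(all(assignment[v] == val for v, val in term.items()) for term in dnf)

    def has_equivalent_dnf(phi, num_vars, k):
        # every possible term assigns each variable 0, 1, or "absent" (None)
        all_terms = [{v: s for v, s in enumerate(signs) if s is not None}
                     for signs in product([0, 1, None], repeat=num_vars)]
        assignments = [dict(enumerate(bits)) for bits in product([0, 1], repeat=num_vars)]
        for size in range(k + 1):                        # EXISTS: a candidate psi ...
            for psi in combinations(all_terms, size):
                if all(eval_dnf(psi, a) == eval_dnf(phi, a) for a in assignments):
                    return True                          # ... agreeing FOR ALL assignments
        return False

    # Example: (x0 AND x1) OR (x0 AND NOT x1) is equivalent to the single term x0.
    phi = [{0: 1, 1: 1}, {0: 1, 1: 0}]
    print(has_equivalent_dnf(phi, num_vars=2, k=1))      # True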

My papers in this area:

