Sparse approximation: Theory and algorithms

Sparse approximation is the problem of finding a sparse approximate solution to an underdetermined system of linear equations. This problem arises in applications as diverse as imaging sciences, machine learning, communications, and statistics. Sparse approximation is NP-hard in the general case. Nevertheless, it has recently been established that there are tractable algorithms for solving these problems in a variety of interesting cases.

My research identifies situations where sparse approximation problems can be solved using efficient computational algorithms. In particular, I have analyzed the performance of greedy pursuit methods, which are popular with practitioners because of their speed. I have also developed a substantial body of results for techniques based on convex programming. A third strand of work addresses the tractability of random instances of sparse approximation. This research has yielded new results on the behavior of random matrices.
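
As a concrete illustration, here is a minimal sketch of orthogonal matching pursuit, the prototypical greedy method, in Python/NumPy. The function name and interface are illustrative choices, not code from the papers below.

    import numpy as np

    def omp(A, b, k):
        """Greedily select k columns of A to approximate b."""
        residual = b.copy()
        support = []
        for _ in range(k):
            # Pick the column most correlated with the current residual.
            j = int(np.argmax(np.abs(A.T @ residual)))
            support.append(j)
            # Re-fit b on all selected columns by least squares.
            x_s, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
            residual = b - A[:, support] @ x_s
        x = np.zeros(A.shape[1])
        x[support] = x_s
        return x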

    Collaborators:

  • A. C. Gilbert and M. J. Strauss

    Selected papers:

  • [ .pdf ] "The sparsity gap: Uncertainty principles proportional to dimension," 2010
  • [ .pdf ] "Norms of random submatrices and sparse approximation," 2008
  • [ .pdf ] "On the linear independence of spikes and sines," 2007
  • [ .pdf ] "On the conditioning of random subdictionaries," 2006
  • [ .pdf ] "Algorithms for simultaneous sparse approximation, Part I: Greedy Pursuit," 2005
  • [ .pdf ] "Algorithms for simultaneous sparse approximation, Part II: Convex Relaxation," 2005
  • [ .pdf ] "Average-case analysis of greedy pursuit," 2005
  • [ .pdf | Correction .pdf ] "Just relax: Convex programming methods for identifying sparse signals," 2004
  • [ .pdf ] "Greed is good: Algorithmic results for sparse approximation," 2003

    Funding:

  • Supported in part by NSF DMS 0503299 and an NSF graduate fellowship


Randomized algorithms for matrix analysis

Classically, the field of numerical linear algebra has focused on deterministic algorithms that produce highly accurate results. Over the last decade, researchers have observed that randomized methods can quickly produce approximate answers to familiar problems, such as computing the SVD. Randomness also expands the range of problems that can be solved efficiently.

I am currently developing randomized techniques that can produce highly regular submatrices of a given matrix. In particular, I have developed an algorithm that can extract a large, exceptionally well-conditioned set of columns from a fixed matrix. I am also exploring whether these ideas can be used to accelerate large-scale computations in numerical linear algebra and sparse approximation.
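
The flavor of these methods is captured by the basic randomized scheme for a truncated SVD: sketch the range of the matrix with a random test matrix, then solve a small deterministic problem. Below is a minimal sketch assuming a Gaussian test matrix; the oversampling parameter is an illustrative choice.

    import numpy as np

    def randomized_svd(A, rank, oversample=10, seed=None):
        """Approximate rank-k SVD of A via a random Gaussian sketch."""
        rng = np.random.default_rng(seed)
        G = rng.standard_normal((A.shape[1], rank + oversample))
        Q, _ = np.linalg.qr(A @ G)   # orthonormal basis for the sampled range
        B = Q.T @ A                  # small matrix with (nearly) the same row space
        U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
        return Q @ U_small[:, :rank], s[:rank], Vt[:rank]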

    Collaborators:

  • P.-G. Martinsson and N. Halko

    Selected papers:

  • [ .pdf ] "Improved analysis of the subsampled randomized Hadamard transform," 2010
  • [ .pdf ] "User-friendly tail bounds for sums of random matrices," 2010
  • [ .pdf ] "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions," 2009
  • [ .pdf ] "Column subset selection, matrix factorization, and eigenvalue optimization," 2008
  • [ .pdf ] "The random paving property for uniformly bounded matrices," 2007

    Funding:

  • Supported in part by NSF DMS 0503299 and ONR N00014-08-1-0883


Architectures for compressive sampling

Although many signals of interest contain very little information in comparison with their ambient dimension, traditional approaches to sampling collect a complete representation of the signal and then compress it to remove redundancies. Compressive sampling is a new paradigm for signal acquisition that attempts to address this inefficiency. It is based on the idea that for certain common classes of signals, there are random sampling schemes that automatically sift out the information in the signal.

With colleagues at Rice University and the Technion, I have worked to develop and analyze novel sampling schemes that can be implemented efficiently with modern analog hardware. The two main approaches, random demodulation and random filtering, may have a substantial impact on high-end analog-to-digital converters.
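
In discrete time, the random demodulator admits a very short simulation: multiply the signal by a pseudorandom ±1 chipping sequence, then integrate and dump over blocks. The following toy model is schematic, not a hardware specification.

    import numpy as np

    def random_demodulator(x, num_samples, seed=None):
        """Schematic random demodulation of a length-n discrete signal x."""
        rng = np.random.default_rng(seed)
        n = x.size
        assert n % num_samples == 0, "block length must divide signal length"
        chips = rng.choice([-1.0, 1.0], size=n)   # pseudorandom demodulation
        block = n // num_samples
        # Integrate-and-dump: sum each block of the chipped signal.
        y = (chips * x).reshape(num_samples, block).sum(axis=1)
        return y, chips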

    Collaborators:

  • R. G. Baraniuk, M. Duarte, Y. Eldar, J. Laska, M. Mishali, H. Rauhut, J. Romberg, and M. B. Wakin

    Selected papers:

  • [ .pdf ] "Restricted isometries for partial random circulant matrices," 2010
  • [ .pdf ] "Beyond Nyquist: Efficient sampling of sparse, bandlimited signals," 2010
  • [ .pdf ] "Efficient sampling and stable reconstruction of wide band sparse analog signals," 2008
  • [ .pdf ] "Random filters for compressive sampling and reconstruction," 2006

    Funding:

  • Supported by DARPA/ONR N66001-06-1-2011 and ONR N00014-08-1-0883


Algorithms for compressive sampling

One of the major challenges in compressive sampling is to find practical algorithms for reconstructing a target signal from random samples. With D. Needell, I have developed a pursuit algorithm, called CoSaMP, that accepts data from a wide variety of sampling schemes and produces a signal approximation with guaranteed accuracy in near-linear time. CoSaMP supersedes earlier work with A. C. Gilbert demonstrating that a simpler greedy pursuit, orthogonal matching pursuit, is effective for reconstruction. Other projects have resulted in sublinear-time reconstruction algorithms; these methods, however, require specially coded samples.
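
The core CoSaMP iteration is short enough to sketch directly. The loop below follows the published description; the stopping rule and the dense least-squares solve are simplified stand-ins for the choices analyzed in the paper.

    import numpy as np

    def cosamp(Phi, u, s, max_iter=50, tol=1e-8):
        """Recover an s-sparse signal from samples u ~ Phi @ x."""
        n = Phi.shape[1]
        a = np.zeros(n)
        for _ in range(max_iter):
            residual = u - Phi @ a
            if np.linalg.norm(residual) <= tol:
                break
            proxy = Phi.T @ residual                    # form signal proxy
            omega = np.argsort(np.abs(proxy))[-2 * s:]  # 2s largest components
            T = np.union1d(omega, np.flatnonzero(a))    # merge supports
            b_T, *_ = np.linalg.lstsq(Phi[:, T], u, rcond=None)
            b = np.zeros(n)
            b[T] = b_T
            keep = np.argsort(np.abs(b))[-s:]           # prune to s terms
            a = np.zeros(n)
            a[keep] = b[keep]
        return a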

    Collaborators:

  • A. C. Gilbert, D. Needell, M. J. Strauss, and R. Vershynin

    Selected papers:

  • [ .pdf ] "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," 2008
  • [ .pdf ] "A Tutorial on Fast Fourier Sampling," 2007
  • [ .pdf ] "One sketch for all: Fast algorithms for Compressed Sensing," 2006
  • [ .pdf ] "Algorithmic linear dimension reduction in the l1 norm for sparse vectors," 2005
  • [ .pdf ] "Signal recovery from random measurements via Orthogonal Matching Pursuit," 2004

    Funding:

  • Supported by NSF DMS 0503299, DARPA/ONR N66001-06-1-2011 and ONR N00014-08-1-0883


Frames and packing

A frame is a generalization of a basis: a redundant collection of vectors that spans the space. Because a frame offers more vectors to choose from, it can often represent a signal more succinctly than a basis can. As a result, frames arise in sparse approximation problems, in quantization, in communications, and in many other areas. Some natural questions about frames lead to problems in extremal combinatorics, such as packing and covering. I have studied the properties of these extremal configurations and attempted to devise algorithms for producing them.
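
A concrete instance of the packing question asks how small the coherence of a unit-norm frame can be. The Welch bound gives the answer, and equality characterizes equiangular tight frames; the snippet below checks a random frame against the bound as a stand-in for an optimized packing.

    import numpy as np

    def coherence(F):
        """Largest inner product between distinct unit-norm columns of F."""
        G = np.abs(F.T @ F)
        np.fill_diagonal(G, 0.0)
        return G.max()

    # Welch bound: n unit vectors in R^d have coherence at least
    # sqrt((n - d) / (d * (n - 1))).
    d, n = 4, 7
    rng = np.random.default_rng(0)
    F = rng.standard_normal((d, n))
    F /= np.linalg.norm(F, axis=0)    # normalize the columns
    print(coherence(F), np.sqrt((n - d) / (d * (n - 1))))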

    Collaborators:

  • I. S. Dhillon, R. W. Heath Jr., T. Strohmer, S. Sra, and M. Sustik

    Selected papers:

  • [ .pdf ] "On the existence of equiangular tight frames," 2004
  • [ .pdf ] "Constructing packings in Grassmanian manifolds via alternating projection," 2004
  • [ .pdf ] "Designing structured tight frames via alternating minimization," 2004
  • [ .pdf ] "Generalized finite algorithms for constructing Hermitian matrices with prescribed diagonal and spectrum," 2004

    Funding:

  • Supported in part by an NSF graduate fellowship


Matrix nearness problems and data analysis

Many problems in data analysis can be reduced to a matrix approximation problem or a matrix factorization problem. That is, we attempt to approximate a fixed data matrix by a structured matrix or a product of structured matrices in a fashion that reveals patterns in the data. A classical example is principal component analysis (PCA), where we seek a low-rank matrix that explains most of the variation in the data.

With colleagues at UT-Austin, I have studied how matrix approximations and factorizations can be used for a variety of data analysis tasks. In my dissertation, I showed that most clustering formulations in the literature can be rephrased as structured matrix factorizations. We have also demonstrated that matrix Bregman divergences can be used to automatically enforce important structural properties, such as rank constraints.
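
As an example of the factorization viewpoint, nonnegative matrix approximation can be computed by alternating updates of the two factors. The sketch below uses the classical multiplicative rules as a generic stand-in; it is not the specific algorithm from the paper listed below.

    import numpy as np

    def nmf(V, r, iters=200, eps=1e-9, seed=None):
        """Approximate a nonnegative V by W @ H with W, H >= 0."""
        rng = np.random.default_rng(seed)
        m, n = V.shape
        W = rng.random((m, r)) + eps
        H = rng.random((r, n)) + eps
        for _ in range(iters):
            # Multiplicative updates decrease ||V - W @ H||_F^2.
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H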

    Collaborators:

  • J. Brickell, I. S. Dhillon, and S. Sra

    Selected papers:

  • [ .pdf ] "Matrix nearness problems with Bregman divergences," 2006
  • [ .pdf ] "The metric nearness problem," 2006
  • [ .pdf ] "Clustering and sparse matrix approximation," Chapter 8 of Topics in Sparse Approximation, 2004
  • [ .pdf ] "An alternating minimization algorithm for nonnegative matrix approximation," 2003

    Funding:

  • Supported in part by NSF DMS 0503299 and an NSF graduate fellowship



Last Modified: 27 December 2010