Sparse approximation: Theory and algorithms

Sparse approximation refers to the challenge of finding a sparse approximate solution to an underdetermined system of linear equations. This problem arises in applications as diverse as imaging sciences, machine learning, communications, and statistics. It is known that sparse approximation is computationally hard in the general case. Nevertheless, it has recently been established that there are tractable algorithms for solving these problems in a variety of interesting cases. My research identifies situations where sparse approximation problems can be solved using efficient computational algorithms. In particular, I have analyzed the performance of greedy pursuit methods, which are popular with practitioners because of their speed. I have also developed a substantial body of results for techniques based on convex programming. A third strand of work addresses the tractability of random instances of sparse approximation. This research has yielded new results on the behavior of random matrices.
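To give the flavor of a greedy pursuit, the sketch below implements orthogonal matching pursuit, one of the simplest methods of this kind: at each step, select the column most correlated with the current residual, then re-fit the chosen columns by least squares. This is a minimal illustrative sketch, not a tuned reference implementation.

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily build an s-sparse
    approximate solution x to the linear system A x ~ y."""
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(s):
        # Greedy selection: column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the chosen columns by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x
```

In the underdetermined setting the interesting question, addressed by the theory described above, is when this greedy selection provably identifies the correct columns.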
Randomized algorithms for matrix analysis

Classically, the field of numerical linear algebra has focused on deterministic algorithms that produce highly accurate results. Over the last decade, researchers have observed that randomized methods can quickly produce approximate answers to familiar problems, such as computing the SVD. Randomness also expands the range of problems that can be solved efficiently. I am currently developing randomized techniques that can produce highly regular submatrices of a given matrix. In particular, I have developed an algorithm that can extract a large, exceptionally well-conditioned set of columns from a fixed matrix. I am also exploring whether these ideas can be used to accelerate large-scale computations in numerical linear algebra and sparse approximation.
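A minimal sketch of one such randomized method, a Gaussian range-finder for an approximate SVD: probe the range of the matrix with a few random vectors, orthonormalize, and then solve a small dense SVD. The function name, the oversampling default, and the fixed seed are illustrative choices, not details taken from the text.

```python
import numpy as np

def randomized_svd(A, k, oversample=10):
    """Sketch of a randomized approximate SVD: capture the range of A
    with k + oversample Gaussian probes, then factor a small matrix."""
    m, n = A.shape
    rng = np.random.default_rng(0)  # fixed seed for reproducibility
    # Stage A: sample the range of A and orthonormalize the samples.
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)
    # Stage B: project onto the small subspace and take a dense SVD there.
    B = Q.T @ A
    Uhat, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Uhat
    return U[:, :k], s[:k], Vt[:k, :]
```

When A has (numerical) rank at most k, the random probes capture its range almost surely, and the factorization is accurate to roundoff; the oversampling parameter controls the failure probability in the general case.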
Architectures for compressive sampling

Although many signals of interest contain very little information in comparison with their ambient dimension, traditional approaches to sampling collect a complete representation of the signal and then compress it to remove redundancies. Compressive sampling is a new paradigm for signal acquisition that attempts to address this inefficiency. It is based on the idea that, for certain common classes of signals, there are random sampling schemes that automatically sift out the information in the signal. With colleagues at Rice University and the Technion, I have worked to develop and analyze novel sampling schemes that can be implemented efficiently with modern analog hardware. The two main approaches, random demodulation and random filtering, may have a substantial impact on high-end analog-to-digital converters.
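As a rough illustration of the first approach, the random demodulator can be modeled in discrete time as multiplication by a random ±1 chipping sequence followed by integrate-and-dump over equal blocks, so that m measurements are collected from an n-sample signal. This is a toy discrete model for intuition only, not a description of the analog hardware.

```python
import numpy as np

def random_demodulator(x, m, rng):
    """Toy discrete model of a random demodulator: multiply the signal
    by a random +/-1 chipping sequence, then integrate over m blocks."""
    n = x.size
    assert n % m == 0, "block integration requires m to divide n"
    chips = rng.choice([-1.0, 1.0], size=n)
    # Integrate-and-dump: sum each block of n // m chipped samples.
    return (chips * x).reshape(m, n // m).sum(axis=1)
```

The measurement operator is linear in the signal for a fixed chipping sequence; a sparse signal would then be recovered from the m measurements by an algorithm such as the ones described in the next section.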
Algorithms for compressive sampling

One of the major challenges in compressive sampling is to find practical algorithms for reconstructing a target signal from random samples. With D. Needell, I have developed a pursuit algorithm, called CoSaMP, that accepts data from a wide variety of sampling schemes and produces a signal approximation with guaranteed accuracy in near-linear time. This algorithm supersedes earlier work with A. C. Gilbert that demonstrates the effectiveness of a reconstruction algorithm based on a simpler greedy pursuit. Other projects have resulted in sublinear-time reconstruction algorithms; these methods, however, require specially coded samples.
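The main loop of CoSaMP can be sketched as follows: form a signal proxy from the residual, merge the 2s largest proxy entries with the current support, solve a least-squares problem on the merged support, and prune back to the s largest coefficients. The simplified NumPy version below conveys the structure but omits the fast matrix-vector techniques that yield the near-linear running time.

```python
import numpy as np

def cosamp(A, y, s, iters=20, tol=1e-10):
    """Simplified sketch of CoSaMP for recovering an s-sparse signal
    from measurements y = A x (plus noise, in general)."""
    m, n = A.shape
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(iters):
        # Signal proxy: correlate the residual with the columns of A.
        proxy = A.T @ residual
        # Merge the 2s largest proxy entries with the current support.
        omega = np.argsort(np.abs(proxy))[-2 * s:]
        T = np.union1d(omega, np.flatnonzero(x))
        # Least-squares estimate on the merged support.
        b, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
        xt = np.zeros(n)
        xt[T] = b
        # Prune: keep only the s largest coefficients.
        keep = np.argsort(np.abs(xt))[-s:]
        x = np.zeros(n)
        x[keep] = xt[keep]
        residual = y - A @ x
        if np.linalg.norm(residual) < tol:
            break
    return x
```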
Frames and packing

A frame is a generalization of a basis that contains a redundant collection of vectors. This redundancy allows a frame to represent certain signals more succinctly than a basis can. As a result, frames arise in sparse approximation problems, in quantization, in communications, and in many other areas. Some natural questions about frames lead to problems in extremal combinatorics, such as packing and covering. I have studied the properties of these extremal configurations and attempted to devise algorithms for producing them.
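As a small illustration of the packing connection, the coherence of a unit-norm frame (the largest absolute inner product between distinct frame vectors) is bounded below by the Welch bound, and frames that meet the bound are optimal line packings. The three-vector "Mercedes-Benz" frame in the plane is a standard textbook example that achieves the bound exactly; it is used here for illustration, not drawn from the text.

```python
import numpy as np

def coherence(F):
    """Largest absolute inner product between distinct unit-norm
    frame vectors, stored as the columns of F."""
    G = np.abs(F.T @ F)
    np.fill_diagonal(G, 0.0)
    return G.max()

def welch_bound(d, n):
    """Lower bound on the coherence of n unit vectors in d dimensions;
    frames meeting it with equality are optimal line packings."""
    return np.sqrt((n - d) / (d * (n - 1)))

# Mercedes-Benz frame: three unit vectors in the plane, 120 degrees apart.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
F = np.vstack([np.cos(angles), np.sin(angles)])
```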
Matrix nearness problems and data analysis

Many problems in data analysis can be reduced to a matrix approximation problem or a matrix factorization problem. That is, we attempt to approximate a fixed data matrix by a structured matrix or a product of structured matrices in a fashion that reveals patterns in the data. A classical example is PCA, where we seek a low-rank matrix that explains most of the variation in the data. With colleagues at UT-Austin, I have studied how matrix approximations and factorizations can be used for a variety of data analysis tasks. In my dissertation, I showed that most clustering formulations in the literature can be rephrased as structured matrix factorizations. We have also demonstrated that matrix Bregman divergences can be used to automatically enforce important structural properties, such as rank constraints.
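For concreteness, the PCA example can be phrased exactly as a structured matrix approximation: center the data and replace it with its best rank-k approximation, which the truncated SVD delivers by the Eckart-Young theorem. This is a textbook construction, sketched here in NumPy.

```python
import numpy as np

def pca_approx(X, k):
    """Rank-k PCA as a matrix approximation: center the rows of X,
    keep the top k singular directions, and add the mean back."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Best rank-k approximation of the centered data (Eckart-Young).
    return mean + (U[:, :k] * s[:k]) @ Vt[:k, :]
```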
Last Modified: 27 December 2010