Research Projects

__The Computational Power of Relative Entropy__

The relative entropy function plays a
prominent role in information theory and in statistics in
the characterization of the performance of a variety of
inferential procedures as well as in proofs of a number of
fundamental inequalities. By
virtue of its convexity, the relative entropy function
also has a number of attractive *computational*
properties. This project is aimed
at investigating the benefits of the relative entropy
function from an algorithmic viewpoint. We
demonstrate the utility of relative entropy convex
programs in providing a tractable approach for challenging
problems arising in combinatorial optimization, dynamical
systems, and quantum information. Building on ideas
in real algebraic geometry (developed in the context of
Hilbert's 17th problem), we also describe a family of
convex relaxations based on relative
entropy for signomial programs, which
are a difficult class of non-convex problems commonly
encountered in domains such as chemical process control,
circuit design, and resource allocation in wireless
networks.
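For orientation, the two central objects of this project can be stated as follows (standard formulations; the notation here is ours, not the papers'):

```latex
% Relative entropy of u with respect to v, jointly convex in (u, v):
D(u, v) = \sum_{i=1}^{n} u_i \log \frac{u_i}{v_i}, \qquad u, v \in \mathbb{R}^n_{++}.

% A signomial in exponential form: a sum of exponentials of linear
% functionals with coefficients c_j of arbitrary sign, which is what
% makes signomial programs non-convex in general:
f(x) = \sum_{j=1}^{m} c_j \exp\!\left(a_j^\top x\right), \qquad a_j \in \mathbb{R}^n, \; c_j \in \mathbb{R}.
```

The joint convexity of $D$ is what makes relative entropy programs tractable, and it is the structure that the signomial relaxations exploit.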

- V. Chandrasekaran and P. Shah, Relative Entropy Optimization
and its Applications,
*Mathematical Programming*, Vol. 161, No. 1, January 2017.

- V. Chandrasekaran and P. Shah, Relative Entropy
Relaxations for Signomial Optimization,
*SIAM Journal on Optimization*, Vol. 26, No. 2, May 2016.

__Atomic Norms for Parsimonious Modeling__

Deducing a system or statistical model
given partial and imprecise information is a fundamental
task throughout the sciences and engineering.
Regularization techniques offer a general
optimization-based methodology for addressing such
ill-posed inverse problems. These methods are based
on augmenting the objective with a penalty function, which
is specified based on prior domain-specific expertise to
induce a desired structure in the solution. This
project proposes a unified framework to transform notions
of structure based on parsimonious models into convex
penalty functions known as *atomic norms*, which can
subsequently be employed in convex optimization approaches
for inverse problems. Atomic norm regularization
techniques expand upon the significant success of methods
that utilize the L1 norm and the nuclear norm for inducing
models with sparse and low-rank structure. We
demonstrate the utility of atomic norm regularization in
linear inverse problems, statistical denoising,
change-point estimation, and in the design of controllers
with a simple architecture. Recent efforts have also
been directed towards learning suitable atomic norms from
data in settings in which precise domain knowledge is not
directly available.
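As a pointer to the construction: given a collection of atoms $\mathcal{A}$ representing the simplest instances of a model class, the atomic norm is the gauge function of the convex hull of $\mathcal{A}$ (this is the standard definition from the convex-geometry literature, stated here for orientation):

```latex
% Atomic norm induced by an atomic set A (the gauge of conv(A)):
\|x\|_{\mathcal{A}} = \inf \{ t > 0 : x \in t \cdot \mathrm{conv}(\mathcal{A}) \}.
% Special cases: taking A to be the signed standard basis vectors
% recovers the L1 norm; taking A to be the unit-norm rank-one
% matrices recovers the nuclear norm.
```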

- Y. S. Soh and V. Chandrasekaran, A Matrix Factorization Approach for Learning Semidefinite-Representable Regularizers, preprint.

- Y. S. Soh and V. Chandrasekaran, High-Dimensional Change-Point
Estimation: Combining Filtering with Convex Optimization,
*Applied and Computational Harmonic Analysis*, accepted.

- N. Matni and V. Chandrasekaran, Regularization for Design,
*IEEE Transactions on Automatic Control*, Vol. 61, No. 12, December 2016.

- V. Chandrasekaran, B. Recht, P. A. Parrilo, and A.
S. Willsky, The Convex
Geometry of Linear Inverse Problems,
*Foundations of Computational Mathematics*, Vol. 12, No. 6, December 2012.

__Latent-Variable Statistical Modeling__

A central task in data analysis is to learn simple or concisely described models that characterize the statistical dependencies among a collection of variables. Concisely specified models avoid problems associated with overfitting, and they often provide useful interpretations about the relationships inherent in the underlying variables. However, latent or unobserved phenomena complicate this task significantly as latent variables induce confounding relationships among the observed variables that are not succinctly described. In this project we develop principled and computationally tractable methodology for identifying the effects of latent variables in statistical models.
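A representative instance of this methodology, paraphrased from the latent-variable graphical model selection work below (notation is ours), decomposes the precision matrix of the observed variables into a sparse term minus a low-rank term:

```latex
% Marginal precision matrix of the observed variables: S captures
% conditional dependencies among the observed variables, while the
% low-rank term L captures the confounding effect of latent variables:
K_O = S - L, \qquad S \text{ sparse}, \quad L \succeq 0 \text{ low-rank}.

% Regularized maximum likelihood with an l1 penalty on S and a
% trace penalty on L (a convex program):
(\hat{S}, \hat{L}) \in \arg\min_{S, L} \; -\ell(S - L) + \lambda_n \left( \gamma \|S\|_1 + \mathrm{tr}(L) \right)
\quad \text{s.t.} \quad S - L \succ 0, \; L \succeq 0.
```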

- A. Taeb and V. Chandrasekaran, Interpreting Latent Variables in Factor Models via Convex Optimization, preprint.

- V. Chandrasekaran, P. A. Parrilo, and A. S. Willsky, Latent Variable Graphical Model Selection via Convex Optimization, *Annals of Statistics* (with discussion), Vol. 40, No. 4, August 2012. (software)

- V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky, Rank-Sparsity Incoherence for Matrix Decomposition, *SIAM Journal on Optimization*, Vol. 21, No. 2, June 2011.

__Convex Graph Parameters__

Inverse problems over unlabeled graphs
arise in a range of application domains as graphs are
widely used to model dependencies among large numbers of
interacting entities; examples include gene interaction
networks in computational biology and social
networks. To address the combinatorial complexity
typically associated with these problems, the goal of this
project is to develop a conceptually unified optimization
framework for the solution of inverse problems over spaces
of graphs. We introduce and characterize *convex*
graph parameters, which are functions of a graph with
appealing structural and convexity properties. These
graph invariants enable the formulation of tractable
algorithmic approaches for several problems on graphs and
networks.
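To make the notion concrete, here is a minimal sketch of one such function: the largest eigenvalue of a symmetric adjacency matrix, which is both convex in the matrix and invariant to node relabelings. This is an illustrative example of our own, not one of the invariants studied in the papers below.

```python
import numpy as np

def spectral_invariant(adj: np.ndarray) -> float:
    """Largest eigenvalue of a symmetric adjacency matrix: a convex,
    permutation-invariant function (an illustrative example only)."""
    return float(np.linalg.eigvalsh(adj)[-1])  # eigvalsh sorts ascending

# A small graph (the 4-cycle) and a relabeling of its nodes.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
perm = np.array([2, 0, 3, 1])
P = np.eye(4)[perm]                  # permutation matrix
A_relabeled = P @ A @ P.T

# Invariance: the value does not depend on the node labeling.
assert np.isclose(spectral_invariant(A), spectral_invariant(A_relabeled))

# Convexity, checked at one point: f((A+B)/2) <= (f(A)+f(B))/2, since
# the top eigenvalue of a symmetric matrix is a convex function.
B = np.ones((4, 4)) - np.eye(4)      # complete graph K4
mid = spectral_invariant((A + B) / 2)
assert mid <= (spectral_invariant(A) + spectral_invariant(B)) / 2 + 1e-9
```

Because such functions respect the symmetry of unlabeled graphs while remaining convex, they can serve as building blocks in tractable relaxations of combinatorial graph problems.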

- U. Candogan and V. Chandrasekaran, Finding Planted Subgraphs with Few Eigenvalues using the Schur-Horn Relaxation, preprint.

- V. Chandrasekaran, P. A. Parrilo, and A. S. Willsky, Convex Graph Invariants, SIAM Review, Vol. 54, No. 3, September 2012.

__Computational and Statistical Tradeoffs in Data Analysis__

The rapid growth in the size and scope of
datasets in science and technology has created a need for
novel foundational perspectives on data analysis that
blend computer science and statistics. That
classical perspectives from these fields are not adequate
to address emerging challenges with massive datasets is
apparent from their sharply divergent nature at an
elementary level – in computer science, the growth of the
number of data points is a source of "complexity" that
must be tamed via algorithms or hardware, whereas in
statistics, the growth of the number of data points is a
source of "simplicity" in that inferences are generally
stronger and asymptotic results can be invoked. In
classical statistics, one usually considers the increase
in inferential accuracy as the number of data points grows
(with little formal consideration of computational
complexity), while in classical numerical computation, one
typically analyzes the improvement in accuracy as more
computational resources such as space or time are employed
(with the size of a dataset not formally viewed as a
resource). This project addresses the question of
trading off the amount of data and the amount of
computation required to achieve a desired inferential
accuracy.
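The sense in which more data is a statistical "resource" can be seen in a toy simulation (our own illustration, not drawn from the papers below): the error of the sample mean shrinks at the familiar $1/\sqrt{n}$ rate as the number of samples grows, even as the computational cost of processing them grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimation_error(n: int, trials: int = 2000) -> float:
    """Average |sample mean - true mean| over many trials, with n
    standard normal samples per trial (true mean is 0).  The error
    decays at the 1/sqrt(n) rate as n grows."""
    samples = rng.standard_normal((trials, n))
    return float(np.mean(np.abs(samples.mean(axis=1))))

errors = {n: estimation_error(n) for n in (10, 100, 1000)}

# More data -> smaller error: the statistical view of data as a
# source of "simplicity" -- while each larger n costs more computation,
# the computer-science view of data as a source of "complexity".
assert errors[10] > errors[100] > errors[1000]
```

The tradeoff question this project studies is how to navigate between these two regimes: when computation is scarce, can a cruder (cheaper) procedure applied to more data match a refined procedure applied to less?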

- Q. Berthet and V. Chandrasekaran, Resource Allocation for
Statistical Estimation,
*Proceedings of the IEEE*, Vol. 104, No. 1, January 2016.

- V. Chandrasekaran and M. I. Jordan, Computational and
Statistical Tradeoffs via Convex Relaxation,
*Proceedings of the National Academy of Sciences*, Vol. 110, No. 13, March 2013.