SIAM Annual Meeting '12 Minisymposium:

Low Rank and Sparse Modeling

Wednesday & Thursday, July 11-12, 2012
Hyatt Regency Minneapolis
1300 Nicollet Mall
Minneapolis, Minnesota, USA 55403
(specific times and locations of sessions are below)

Synopsis

Low-rank and sparse modeling are two of the most ubiquitous paradigms of data analysis. Low-rank modeling reduces the dimension of the data, whereas sparse modeling reduces the description of the data by selecting a few features from a large dictionary. These two paradigms can be combined, e.g., when modeling data with a few low-rank models or with a single low-rank model with a sparse residual. The goal of this workshop is to discuss some of the recent and exciting developments in such modeling and to highlight the fundamental mathematical theories behind them. The workshop will also discuss interesting areas of application for these developments.

Organizer

Gilad Lerman, University of Minnesota, Twin Cities

Program

The official program for the minisymposium is online: MS50 (Session I), MS63 (Session II) and MS79 (Session III)

Session I (MS50)

Time: 10:30 AM - 12:30 PM, July 11, 2012
Room: Greenway C - Level 2


10:30-10:55 Robust image recovery via total variation minimization

Rachel Ward, University of Texas at Austin, USA; Deanna Needell, Claremont McKenna College, USA

Abstract: Discrete images, consisting of slowly-varying pixel values except across edges, have sparse or compressible representations with respect to the discrete gradient. Despite being a primary motivation for compressed sensing, stability results for total-variation minimization do not follow directly from the standard "l1" theory of compressed sensing. In this talk, we present near-optimal reconstruction guarantees for total-variation minimization and discuss several related open problems.

Slides of Talk
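
A common way to make this setup concrete (a generic formulation for illustration; the exact program analyzed in the talk may differ in details) is the constrained total-variation program

\min_{x} \ \|x\|_{TV} \quad \text{subject to} \quad \|\mathcal{A}x - y\|_2 \le \varepsilon,

where \|x\|_{TV} = \|\nabla x\|_1 is the \ell_1 norm of the discrete gradient of the image x, \mathcal{A} is the measurement operator, y the noisy measurements, and \varepsilon the noise level. The near-optimal guarantees concern how well the minimizer approximates the true image in terms of the compressibility of its gradient.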

11:00-11:25 Fast global convergence of gradient methods for high-dimensional statistical recovery

Sahand Negahban, Massachusetts Institute of Technology, USA

Abstract: Many statistical M-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient methods for solving such problems, working within a high-dimensional framework that allows the data dimension d to grow with (and possibly exceed) the sample size n. This high-dimensional structure precludes the usual global assumptions (namely, strong convexity and smoothness conditions) that underlie much of classical optimization analysis. We define appropriately restricted versions of these conditions and show that they are satisfied with high probability for various statistical models. Under these conditions, our theory guarantees that projected gradient descent has a globally geometric rate of convergence up to the statistical precision of the model, meaning the typical distance between the true unknown parameter θ* and an optimal solution θ̂. This result is substantially sharper than previous convergence results, which yielded sublinear convergence, or linear convergence only up to the noise level. Our analysis applies to a wide range of M-estimators and statistical models, including sparse linear regression using the Lasso (ℓ1-regularized regression); group Lasso for block sparsity; log-linear models with regularization; low-rank matrix recovery using nuclear norm regularization; and matrix decomposition.
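
As an illustration of the class of methods analyzed here, the following is a minimal sketch (not the authors' code; the function names and step-size rule are illustrative assumptions) of projected gradient descent for ℓ1-constrained least squares, where projection onto the ℓ1 ball plays the role of the norm-based regularizer:

import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1 ball of the given radius."""
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]                 # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient_lasso(X, y, radius, n_iter=500):
    """Projected gradient for min (1/2n)||y - X theta||^2  s.t.  ||theta||_1 <= radius."""
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2         # 1 / Lipschitz constant of the gradient
    theta = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ theta - y) / n
        theta = project_l1_ball(theta - step * grad, radius)
    return theta

Under the restricted strong convexity and smoothness conditions described in the abstract, iterates of this type contract geometrically toward an optimum until they reach the statistical precision of the problem.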

11:30-11:55 Sharp Recovery Bounds for Convex Deconvolution, With Applications

Michael McCoy and Joel A. Tropp, California Institute of Technology, USA

Abstract: This work investigates the limits of a convex optimization procedure for the deconvolution of structured signals. The geometry of the convex program leads to a precise yet intuitive characterization of successful deconvolution. Coupling this geometric picture with a random model reveals sharp thresholds for the success and failure of the deconvolution procedure.
These generic results are applicable to a wide variety of problems. This work considers deconvolving two sparse vectors, analyzes a spread-spectrum coding scheme for impulsive noise, and shows when it is possible to deconvolve a low-rank matrix corrupted by a special type of noise. As an additional benefit, this analysis recovers and extends known weak and strong thresholds for the basis pursuit problem.
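
Schematically (notation mine, covering only the two-sparse-vector case and assuming the ℓ1 norm of one component is known), the deconvolution of z = x_0 + Q y_0 with Q a known rotation can be posed as the convex program

\min_{x,\,y} \ \|x\|_1 \quad \text{subject to} \quad \|y\|_1 \le \|y_0\|_1, \quad z = x + Q y,

and the geometric analysis characterizes when (x_0, y_0) is the unique solution as the sparsity levels and the ambient dimension grow.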

12:00-12:25 Exploiting Saliency in Compressive and Adaptive Sensing

Jarvis Haupt, University of Minnesota, USA

Abstract: Recent developments in compressive and adaptive sensing have demonstrated the tremendous improvements in sensing resource efficiency that can be achieved by exploiting sparsity in high-dimensional inference tasks. In this talk we describe how compressive sensing techniques can be extended to exploit saliency. We discuss our recent work quantifying the effectiveness of a compressive sensing strategy that accurately identifies salient features from compressive measurements, and we demonstrate the performance of this technique in a two-stage active compressive imaging approach to automated surveillance.

Session II (MS63)

Time: 4:00 PM - 6:00 PM, July 11, 2012
Room: Greenway C - Level 2


4:00-4:25 Performance limits of non-local means

Ery Arias-Castro, University of California, San Diego, USA; Joseph Salmon and Rebecca Willett, Duke University, USA

Abstract: In this talk I describe a novel theoretical characterization of the performance of non-local means (NLM) for noise removal. NLM has proven effective in a variety of empirical studies, but little is understood fundamentally about how it performs relative to classical methods based on wavelets or how various parameters (e.g., patch size) should be chosen. For cartoon images and images which may contain thin features and regular textures, the error decay rates of NLM are derived and compared with those of linear filtering, oracle estimators, variable-bandwidth kernel methods, Yaroslavsky’s filter and wavelet thresholding estimators. The trade-off between global and local search for matching patches is examined, and the bias reduction associated with the local polynomial regression version of NLM is analyzed. The theoretical results are validated via simulations for 2D images corrupted by additive white Gaussian noise. This is joint work with Ery Arias-Castro and Joseph Salmon.

Slides of Talk
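
For reference, here is a minimal sketch of the basic non-local means estimator being analyzed (a standard formulation, not the speakers' implementation; patch size, search radius, and bandwidth h are illustrative choices):

import numpy as np

def non_local_means(img, patch=3, search=7, h=0.1):
    """Basic NLM: each pixel becomes a weighted average of nearby pixels,
    with weights decaying in the squared distance between surrounding patches.
    Expects a float-valued grayscale image (e.g., intensities in [0, 1])."""
    p, s = patch // 2, search // 2
    padded = np.pad(img, p + s, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + p + s, j + p + s
            ref = padded[ci - p:ci + p + 1, cj - p:cj + p + 1]
            weights, values = [], []
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    cand = padded[ci + di - p:ci + di + p + 1,
                                  cj + dj - p:cj + dj + p + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-d2 / h ** 2))
                    values.append(padded[ci + di, cj + dj])
            out[i, j] = np.dot(weights, values) / np.sum(weights)
    return out

The trade-off between global and local search for matching patches discussed in the talk corresponds here to the choice of the search parameter.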

4:30-4:55 A Novel M-Estimator for Robust PCA

Teng Zhang and Gilad Lerman, University of Minnesota, USA

Abstract: We formulate a convex minimization to robustly recover a subspace from a contaminated data set, partially sampled around it, and propose a fast iterative algorithm to achieve the corresponding minimum. We establish exact recovery by this minimizer, quantify the effect of noise and regularization, explain how to take advantage of a known intrinsic dimension and establish linear convergence of the iterative algorithm. We compare our method with many other algorithms for Robust PCA on synthetic and real data sets and demonstrate state-of-the-art speed and accuracy.

Slides of Talk
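
As I understand the formulation (from the companion paper; the talk's notation may differ), the convex minimization is an M-estimator over symmetric matrices of unit trace,

\min_{Q}\ \sum_{i=1}^{n} \|Q x_i\|_2 \quad \text{subject to} \quad Q = Q^{T}, \ \operatorname{tr}(Q) = 1,

where x_1, \dots, x_n are the (centered) data points; the robustly recovered d-dimensional subspace is then spanned by the bottom eigenvectors of the minimizer \hat{Q}.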

5:00-5:25 Compressive Principal Component Pursuit

Yi Ma, University of Illinois at Urbana-Champaign, USA; John Wright, Columbia University, USA

Abstract: We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components from a small set of linear measurements. This problem arises in compressed sensing of videos and hyperspectral images, as well as in the analysis of transformation-invariant low-rank matrix recovery. We analyze the performance of the natural convex heuristic for solving this problem, under the assumption that measurements are chosen uniformly at random. We prove that this heuristic exactly recovers the low-rank and sparse terms, provided the number of observations exceeds the number of intrinsic degrees of freedom by a polylogarithmic factor. Our analysis introduces several ideas that may be of independent interest for the more general problem of compressed sensing of superpositions of structured signals.

Slides of Talk
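
The natural convex heuristic referred to in the abstract can be written schematically (notation mine) as principal component pursuit constrained to the measured subspace:

\min_{L,\,S} \ \|L\|_{*} + \lambda \|S\|_{1} \quad \text{subject to} \quad \mathcal{P}_{Q}[L + S] = \mathcal{P}_{Q}[M],

where \|\cdot\|_* is the nuclear norm, \|\cdot\|_1 the entrywise \ell_1 norm, M is the target matrix with low-rank plus sparse structure, and \mathcal{P}_Q is the orthogonal projection onto the random subspace of linear measurements.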

5:30-5:55 Robust locally linear analysis with applications to image denoising and blind inpainting


Yi Grace Wang, University of Minnesota, USA; Arthur Szlam, New York University, USA; Gilad Lerman, University of Minnesota, USA

Abstract: I will talk about the problems of denoising images corrupted by impulsive noise and of blind inpainting (i.e., inpainting when the deteriorated region is unknown). Our basic approach is to model the set of patches of pixels in an image as a union of low-dimensional subspaces, corrupted by sparse but possibly large-magnitude noise. For this purpose, we develop a robust and iterative method for single-subspace modeling and extend it to an iterative algorithm for modeling multiple subspaces. I will also cover the convergence of the algorithm and demonstrate state-of-the-art performance of our method for both imaging problems.

Slides of Talk

Session III (MS79)

Time: 10:30 AM - 12:30 PM, July 12, 2012
Room: Greenway C - Level 2


10:30-10:55 Poisson tensor factorization for sparse count data

Tamara G. Kolda, Sandia National Laboratories, USA; Eric Chi, University of California, Los Angeles, USA

Abstract: In data mining and analysis, we often encounter data that is indexed by three or more indices. Tensor factorizations can be used for such "multidimensional" data and have already found application in a variety of fields, including high-dimensional function approximations, signal processing, chemometrics, multi-attribute graph analysis, image recognition, and more. We consider the problem of tensor factorizations for sparse count data. For example, we might have a collection of email messages and arrange them by sender, receiver, and date; the data is sparse because not all potential senders and receivers communicate and even those that do communicate do not necessarily do so every day. Typically, we would fit a tensor factorization model to data by minimizing the least squares difference between the data and the tensor factorization model. Statistically speaking, we have assumed that the random variation is best described by a Gaussian distribution. We propose, however, that the random variation may be better described via a Poisson distribution because the count observations (especially the zeros) are more naturally explained. Under this Poisson assumption, we can fit a model to observed data using the negative log-likelihood score, also known as Kullback-Leibler divergence. We discuss an alternating optimization approach and propose solving the subproblem using a majorization-minimization approach that yields multiplicative updates. In fact, the method we propose is a generalization of the Lee-Seung multiplicative updates but has the advantage of being faster and also having provable convergence even for solutions on the boundary (i.e., with zeros in the factorization). Moreover, we show how to prevent the algorithm from converging to non-stationary points (i.e., fixing the problem of getting stuck at undesirable zero values). We will show several examples to demonstrate the utility of the approach.
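
To give a feel for the multiplicative updates described above, here is a minimal sketch of the two-way (matrix) analogue: nonnegative factorization under the KL-divergence / Poisson objective with Lee-Seung-style updates. This is only an illustration of the update mechanism, not the CP tensor algorithm from the talk.

import numpy as np

def kl_nmf(X, rank, n_iter=200, eps=1e-12):
    """Fit X ~ W @ H with nonnegative factors by minimizing the generalized
    KL divergence (the Poisson negative log-likelihood) using multiplicative
    majorization-minimization updates."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (X / WH)) / (W.sum(axis=0)[:, None] + eps)   # update H
        WH = W @ H + eps
        W *= ((X / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)   # update W
    return W, H

The tensor method generalizes updates of this form to the CP factors of a multi-way count array, with the modifications mentioned above to guarantee convergence even when factor entries hit zero.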

11:00-11:25 Posterior Rates of Contraction for Sparse Bayesian Models

Artin Armagan, SAS Institute, Inc., USA; Larry Carin, David Dunson, Rayan Saab and Nate Strawn, Duke University, USA

Abstract: Previous theoretical analysis of Bayesian Compressed Sensing has focused purely upon the consistency behavior of MAP estimators. In this talk, we shall demonstrate that much stronger statements can be made about the posterior in Bayesian Compressed Sensing, which can be useful for uncertainty quantification. Leveraging metric geometry and nonparametric Bayesian techniques, we exhibit universal finite undersampling concentration bounds of Bayesian posteriors for linear regression. The form of these bounds makes it clear that only priors with sufficient concentration on sparse signals enjoy strong contraction around the true parameter in this regime. Based upon these bounds, we also obtain asymptotic contraction rates for well-structured priors.

11:30-11:55 Clustering and Embedding of High-Dimensional Multi-Manifold Data using Sparse Representation

Ehsan Elhamifar and Rene Vidal, Johns Hopkins University, USA

Abstract: In many problems across different areas of science and engineering, we deal with massive collections of high-dimensional data. Although the data may have thousands or even millions of dimensions, they often lie in or close to low-dimensional structures, called manifolds. Separating the data into the underlying low-dimensional models and finding compact representations for them is one of the most fundamental problems in high-dimensional data analysis. We address the problem of clustering and embedding of data on multiple manifolds using sparse representation techniques. We use the collection of data points as a self-expressive dictionary in which each point can be efficiently represented as a sparse combination of a few other points from the same manifold, with appropriate weights that encode locality information. We propose a convex optimization program whose solution is used in a spectral graph theory framework to infer the clustering and embedding of the data. When the manifolds correspond to low-dimensional subspaces, we show that, under appropriate conditions on the principal angles between the subspaces and the distribution of the data, the solution of the proposed convex program recovers the desired solution. We demonstrate the effectiveness of the proposed algorithm through experiments on several real-world problems.

Slides of Talk
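
A simplified sketch of the self-expressive step followed by spectral clustering (for the linear-subspace case, using off-the-shelf solvers; the program in the talk uses different constraints and locality-encoding weights):

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_self_expressive_clustering(X, n_clusters, alpha=0.01):
    """Cluster the columns of X (one data point per column): write each point as
    a sparse combination of the other points, then spectrally cluster the graph
    built from the sparse coefficients."""
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(X[:, others], X[:, i])          # sparse code of point i
        C[others, i] = lasso.coef_
    A = np.abs(C) + np.abs(C).T                   # symmetric affinity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(A)

The key point, reflected in the abstract's guarantees, is that under suitable conditions each point selects neighbors from its own subspace, so the affinity graph splits along the underlying clusters.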

12:00-12:25 Recovery of Low-Rank Plus Compressed Sparse Matrices with Application to Unveiling Traffic Anomalies

Morteza Mardani, Gonzalo Mateos and Georgios B. Giannakis, University of Minnesota, USA

Abstract: Given the superposition of a low-rank matrix plus the product of a known fat compression matrix times a sparse matrix, this work establishes deterministic conditions under which exact recovery of the low-rank and sparse components becomes possible. This fundamental identifiability issue arises with traffic anomaly detection in backbone networks, and subsumes compressed sensing as well as the timely low-rank plus sparse matrix recovery tasks encountered in matrix decomposition problems.

Slides of Talk
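
Schematically (notation mine, following the abstract), the observation model is Y = L_0 + C A_0, with L_0 low rank, A_0 sparse, and C a known fat compression matrix, and a natural convex estimator is

\min_{L,\,A} \ \|L\|_{*} + \lambda \|A\|_{1} \quad \text{subject to} \quad Y = L + C A,

where \|\cdot\|_* denotes the nuclear norm and \|\cdot\|_1 the entrywise \ell_1 norm; the deterministic conditions identify when (L_0, A_0) is the unique solution.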

Contact

For further information on this event, please email Gilad Lerman at lerman(at)umn.edu.