Invited Talks at RSL-CV 2019
Title: Bridging Robust PCA and Variational Autoencoder Models (David Wipf)
Abstract: Variational autoencoders (VAEs) represent a popular, flexible form of deep generative model that can be stochastically fit to samples from a given random process using an information-theoretic variational bound on the true underlying distribution. Once obtained in this way, the model can putatively be used to generate new samples from this distribution, or to provide a low-dimensional latent representation of existing samples. However, as a highly non-convex approach involving high-dimensional integrals, it remains unclear exactly how minima of the underlying energy relate to the original design purposes. We attempt to better quantify these issues by analyzing a series of tractable special cases. In doing so, we unveil interesting connections with more traditional dimensionality reduction techniques, as well as an intrinsic yet underappreciated propensity for robustly dismissing sparse outliers when estimating latent manifolds. With respect to the latter, we demonstrate that the VAE can be viewed as the natural evolution of robust PCA models, capable of learning nonlinear manifolds of unknown dimension obscured by gross corruptions. But quite unlike robust PCA, the VAE can also be leveraged to generate realistic samples that mirror the data distribution within such manifolds.
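For orientation on the two objectives this abstract connects, the standard VAE training bound and the classical convex robust PCA program can be sketched as follows; these are textbook formulations added here for reference, not results from the talk itself.

```latex
% VAE evidence lower bound (ELBO), maximized over encoder q_\phi and decoder p_\theta:
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]
  \;-\; \mathrm{KL}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right)

% Convex robust PCA: split the data matrix X into a low-rank part L
% (a latent linear manifold) plus a sparse outlier part S:
\min_{L,\, S} \; \|L\|_{*} + \lambda \|S\|_{1}
  \quad \text{subject to} \quad X = L + S
```

The thesis of the talk is that minima of the (non-convex, nonlinear) VAE objective inherit the outlier-dismissing behavior of the second program, while additionally handling nonlinear manifolds of unknown dimension and supporting sample generation.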
Biography: David Wipf is a researcher with the Visual Computing Group at Microsoft Research in Beijing, where he has been employed full-time since 2011. Prior to this position, he received the B.S. degree with highest honors in electrical engineering from the University of Virginia, and the M.S. and Ph.D. degrees in electrical and computer engineering from the University of California, San Diego. He was later an NIH Postdoctoral Fellow in the Biomagnetic Imaging Lab at the University of California, San Francisco. His research interests include non-convex optimization, sparse/structured estimation, deep generative models, and deep network compression. He is the recipient of numerous fellowships and awards including the 2012 Signal Processing Society Best Paper Award, the Biomag 2008 Young Investigator Award, and the 2006 NIPS Outstanding Student Paper Award. He is currently an Action Editor for the Journal of Machine Learning Research.
Title: L1-norm Analysis of Matrices and Tensors: Theory, Algorithms, and Some Applications (Panos P. Markopoulos)
Abstract: In an increasing number of applications in machine learning, computer vision, bioinformatics, and other fields of science and engineering, multi-modal data are stored and processed as multi-way arrays known as tensors. This approach has been shown to enhance inference and learning by leveraging innate inter- and intra-modality dependencies in the data. Tucker decomposition is a cornerstone method for tensor analysis, used, e.g., for compression, feature extraction, and denoising in a wide range of applications. For 2-way tensors (i.e., matrices), Tucker decomposition simplifies to standard Principal Component Analysis (PCA). Due to their L2-norm-based formulation as L2-residual-error minimization or L2-projection-variance maximization, standard PCA and Tucker tensor decomposition are known to be highly sensitive to faulty entries in the processed data. Recent research has demonstrated that significant robustness can be achieved if the L2-norm in PCA and Tucker is substituted with the corruption-resistant L1-norm. In this talk, we will review theoretical developments and recent algorithmic solutions (exact and practical) for L1-norm-based analysis of data matrices and tensors.
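To make the L2-versus-L1 contrast above concrete, the two matrix-case formulations can be written side by side; these are the standard statements of the problems, included here for illustration with a data matrix X of size D x N and K <= D principal components.

```latex
% Standard (L2) PCA: maximize the projection variance over orthonormal Q \in \mathbb{R}^{D \times K}
\max_{Q:\, Q^{\top} Q = I_K} \; \| Q^{\top} X \|_{F}^{2}

% L1-norm PCA: the Frobenius norm is replaced by the corruption-resistant L1-norm
\max_{Q:\, Q^{\top} Q = I_K} \; \| Q^{\top} X \|_{1}
```

A single grossly corrupted column of X can dominate the squared-error objective of the first problem, whereas its influence on the L1 objective grows only linearly; this is the source of the robustness discussed in the talk.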
Biography: Dr. Panos P. Markopoulos received the Ph.D. degree in Electrical Engineering from The State University of New York at Buffalo, Buffalo, NY, USA, in 2015. Since 2015, he has been an Assistant Professor of Electrical Engineering at the Rochester Institute of Technology, Rochester, NY, USA, where he directs the Signal Processing for Data Analysis and Learning (SPAN) Lab. In Summer 2018, he was a Visiting Research Faculty member at the U.S. Air Force Research Laboratory (Information Directorate) in Rome, NY. His research is in the areas of machine learning, signal processing, and data analysis, with a current focus on L1-norm-based analysis of multi-modal data. In these areas, he has co-authored multiple peer-reviewed journal and conference articles. Dr. Markopoulos’s research has been supported by multiple grants from the U.S. National Science Foundation, the U.S. Air Force Office of Scientific Research, and the U.S. Air Force Research Laboratory, as well as by industry partners. He is a member of the IEEE Signal Processing, Computer, and Communications Societies, with an active service record that most recently includes the organization of the IEEE International Workshop on Machine Learning for Signal Processing (IEEE MLSP 2019) and the Symposium on Tensor Methods for Signal Processing and Machine Learning at the IEEE Global Conference on Signal and Information Processing (IEEE GlobalSIP 2019).
Title: Tensor Methods for Robust Deep Learning (Jean Kossaifi)
Abstract: Tensor methods are a natural extension of traditional matrix algebraic methods to an arbitrary number of dimensions (higher orders). The data manipulated by most traditional deep learning methods naturally has such multi-dimensional structure (e.g., images are third-order tensors, videos fourth-order, etc.). However, while convolutional layers have the ability to preserve some local (spatial) structure in the data, that structure is not fully leveraged by existing methods. In this talk, we will show how to go beyond linear algebraic methods and instead leverage the structure of the data directly using tensor methods. We will show that methods obtained by combining deep learning and tensor methods can result in large parameter savings, computational speed-ups, and superior performance on a wide range of applications. We will also show how, using tensor methods, we can easily make models more robust to noise in the input, both random and adversarial. Finally, we will introduce how to implement these methods in practice with TensorLy, a Python library that provides a high-level API for tensor algebra, decomposition, and regression, as well as deep tensorized architectures.
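As a small illustration of the kind of workflow TensorLy enables, here is a generic usage sketch (not code from the talk; the exact API may vary slightly across TensorLy releases): a random third-order tensor is Tucker-decomposed into a compact core and factor matrices, then reconstructed to check the approximation error.

```python
# Illustrative TensorLy sketch (assumes a recent TensorLy release; API details
# may differ between versions): Tucker-decompose a synthetic 3rd-order tensor
# and measure the relative reconstruction error.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

X = tl.tensor(np.random.rand(16, 16, 16))      # synthetic 3rd-order data tensor
core, factors = tucker(X, rank=[4, 4, 4])      # compressed Tucker representation
X_hat = tl.tucker_to_tensor((core, factors))   # reconstruct from core and factors

rel_error = tl.norm(X - X_hat) / tl.norm(X)
print(f"Relative reconstruction error: {rel_error:.3f}")
```

The same decompositions can be applied to the weight tensors of convolutional and fully connected layers, which is where the parameter savings and robustness gains mentioned above come from.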
Biography: Jean Kossaifi is a research scientist at Samsung AI Cambridge and at Imperial College London. His research is primarily focused on face analysis and facial affect estimation in natural conditions, a field that bridges the gap between computer vision and machine learning. He is currently working on tensor methods and on how to efficiently combine them with deep learning to develop more robust models that are also memory- and computation-efficient. He created TensorLy, a high-level API for tensor methods and deep tensorized neural networks in Python that aims to make tensor learning simple and accessible.