Invited Talks

Four Invited Talks at RSL-CV 2021

Title: A Critical View of Self-Expressive Deep Subspace Clustering (Ben Haeffele, Johns Hopkins Mathematical Institute for Data Science, USA)

Abstract:

Subspace clustering is an unsupervised clustering technique designed for data supported on a union of linear subspaces, with each subspace defining a cluster of dimension lower than that of the ambient space. Many existing formulations of this problem are based on exploiting the self-expressive property of linear subspaces: any point within a subspace can be represented as a linear combination of other points within the same subspace. To extend this approach to data supported on a union of non-linear manifolds, numerous studies have proposed learning an embedding of the original data with a neural network, regularized by a self-expressive loss in the embedded space to encourage a union-of-linear-subspaces prior there. Here we show that there are a number of potential flaws with this approach which have not been adequately addressed in prior work. In particular, we show that the model formulation is often ill-posed in multiple ways, which can lead to a degenerate embedding of the data that need not correspond to a union of subspaces at all. Further, we show experimentally that a significant portion of the previously claimed performance benefits can be attributed to an ad hoc post-processing step rather than to the clustering model itself.
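As a concrete illustration of the self-expressive property the abstract refers to (not the speaker's method: the data is synthetic and a ridge-regularized least-squares formulation is chosen purely for simplicity), the sketch below represents each point as a combination of the remaining points and checks that data on a union of subspaces is reconstructed almost exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two random 2-D subspaces in R^10, with 20 points sampled from each
# (columns of X are the data points).
d, k, n = 10, 2, 20
X = np.hstack([rng.standard_normal((d, k)) @ rng.standard_normal((k, n))
               for _ in range(2)])
N = X.shape[1]

# Self-expression: solve min_c ||x_j - A c||^2 + lam ||c||^2 for each point,
# where A contains all the OTHER points (the constraint c_j = 0 is enforced
# by excluding column j). A tiny ridge term keeps the system well posed.
lam = 1e-6
C = np.zeros((N, N))
for j in range(N):
    mask = np.arange(N) != j
    A = X[:, mask]
    c = np.linalg.solve(A.T @ A + lam * np.eye(N - 1), A.T @ X[:, j])
    C[mask, j] = c

# Each point is reconstructed almost exactly by the remaining points.
residual = np.linalg.norm(X - X @ C) / np.linalg.norm(X)
print(residual)  # relative reconstruction error, near zero
```

Self-expressive clustering methods additionally penalize the coefficients (e.g., with an l1 norm) so that each point is preferentially expressed by points from its own subspace; the coefficient matrix C then defines an affinity used for spectral clustering.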

Biography: Ben Haeffele is an associate research scientist in the Johns Hopkins Mathematical Institute for Data Science (MINDS) and Center for Imaging Science (CIS). His research is broadly focused on developing theory and algorithms for processing high-dimensional data at the intersection of machine learning, optimization, and computer vision. In addition to basic research in data science he also works on a variety of applications in medicine, microscopy, and computational imaging.

Title: Unlabeled Principal Component Analysis (Manolis Tsakiris, School of Information Science and Technology, ShanghaiTech University, China)

Abstract:

This talk introduces Unlabeled Principal Component Analysis (UPCA): principal component analysis of a data matrix whose columns have had their entries corrupted by unknown permutations. Using algebraic geometry, we establish that for generic data and up to a permutation of the coordinates, there is a unique subspace of minimal dimension that explains the data. We show that a polynomial system of equations has finitely many solutions, with each solution corresponding to a row permutation of the ground-truth data matrix. Allowing for missing entries on top of permutations leads to Unlabeled Matrix Completion, for which we give theoretical results of a similar flavor. We also describe a two-stage algorithmic pipeline for UPCA for the case where only a fraction of the data have been permuted. Stage I employs outlier-robust PCA methods to estimate the ground-truth column space. Equipped with the column space, Stage II applies methods for linear regression without correspondences to restore the permuted data. A case study is presented in which face images from Extended Yale-B with arbitrarily permuted patches of arbitrary size are restored within 0.3 seconds on a desktop computer.

Biography: Manolis C. Tsakiris holds a PhD in machine learning from Johns Hopkins University, advised by Rene Vidal, and a PhD in commutative algebra from the University of Genova, advised by Aldo Conca. He also holds a master's degree in signal processing from Imperial College London and a diploma in electrical engineering and computer science from the National Technical University of Athens. His research interests include robust PCA, subspace clustering, matrix completion, and algebraic-geometric aspects of subspace arrangements. Currently, he is an assistant professor with the School of Information Science and Technology at ShanghaiTech University.

Title: Deep Unfolded Robust PCA: Method and Applications (Nikos Deligiannis, Vrije Universiteit Brussel and IMEC, Belgium, and Yonina C. Eldar, Weizmann Institute of Science, Israel)

Abstract:

Deep unfolding neural networks are designed as unrolled iterations of optimization algorithms. Such networks achieve faster convergence and higher accuracy than their optimization counterparts. Moreover, they naturally inherit the domain knowledge (e.g., sparsity priors) of the optimization methods rather than having to learn it from extensive training data. As a result, they typically generalize better and are more interpretable than traditional deep neural networks. Deep unfolding has been effective in designing efficient deep models that incorporate mainly sparsity priors, for example the LISTA and ADMM-Net networks.
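To make the unrolling idea concrete, here is a minimal sketch of ISTA for sparse coding written as a fixed number of "layers". In LISTA the matrices W1, W2 and the threshold theta below would become trainable per-layer parameters; here they are fixed to their ISTA values, and the problem instance is synthetic, for illustration only:

```python
import numpy as np

def soft_threshold(x, theta):
    """Elementwise shrinkage: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_ista(y, D, n_layers, lam):
    """Unrolled ISTA for min_x 0.5*||y - D x||^2 + lam*||x||_1.

    Each loop pass is one 'layer'; in LISTA, W1, W2 and theta are
    learned per layer instead of being fixed as below.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    W1 = D.T / L                           # input weight
    W2 = np.eye(D.shape[1]) - D.T @ D / L  # recurrent weight
    theta = lam / L                        # shrinkage threshold
    x = np.zeros(D.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(W2 @ x + W1 @ y, theta)
    return x

# Synthetic sparse-coding instance: 3-sparse code, noiseless measurements.
rng = np.random.default_rng(1)
D = rng.standard_normal((30, 50))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary columns
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 1.0]
y = D @ x_true
x_hat = unfolded_ista(y, D, n_layers=200, lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 0.1))  # indices of the recovered support
```

The point of unfolding is that once the iteration is written as a feed-forward network like this, the per-layer weights can be trained end to end, typically reaching a given accuracy in far fewer layers than the original algorithm needs iterations.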

This talk will present deep-unfolding-based network designs for the problem of Robust Principal Component Analysis (RPCA), which refers to a low-rank and sparse signal decomposition. The talk will present the unfolding of various algorithms (proximal methods, the alternating direction method of multipliers, re-weighting schemes) that solve different versions of the optimization problem, including versions that incorporate (temporal) side information. This leads to networks with innovative forms of layers, activation functions, and loss functions. Furthermore, various applications of the presented networks will be discussed, including video foreground-background separation and clutter suppression in ultrasound imaging.

Biography:

Nikos Deligiannis is an associate professor at the department of Electronics and Informatics (ETRO) at Vrije Universiteit Brussel and a senior scientist at the IMEC research institute in Belgium. In 2006, he received the Diploma in electrical and computer engineering from the University of Patras in Greece and, in 2012, the Ph.D. degree (Hons.) in engineering sciences from Vrije Universiteit Brussel in Belgium. From 2013 to 2015, he was a senior researcher at the Department of Electronic and Electrical Engineering at University College London, UK. His current research interests focus on interpretable machine learning, signal processing, and distributed learning for computer vision, data mining, and natural language processing. He has authored over 130 journal and conference publications, 5 book chapters, and 5 international patent applications. Since 2021, he has served as chair of the EURASIP Technical Area Committee on Signal and Data Analytics for Machine Learning and as Associate Editor for the IEEE Transactions on Image Processing. He is a member of the IEEE and EURASIP.

Yonina C. Eldar is a Professor in the Department of Mathematics and Computer Science, Weizmann Institute of Science, Rehovot, Israel. She was previously a Professor in the Department of Electrical Engineering at the Technion, where she held the Edwards Chair in Engineering. She is also a Visiting Professor at MIT, a Visiting Scientist at the Broad Institute, and an Adjunct Professor at Duke University and was a Visiting Professor at Stanford. She received the B.Sc. degree in physics and the B.Sc. degree in electrical engineering both from Tel-Aviv University (TAU), Tel-Aviv, Israel, in 1995 and 1996, respectively, and the Ph.D. degree in electrical engineering and computer science from the Massachusetts Institute of Technology (MIT), Cambridge, in 2002. She is a member of the Israel Academy of Sciences and Humanities, an IEEE Fellow and a EURASIP Fellow. She has received many awards for excellence in research and teaching, including the IEEE Signal Processing Society Technical Achievement Award (2013), the IEEE/AESS Fred Nathanson Memorial Radar Award (2014) and the IEEE Kiyo Tomiyasu Award (2016). She was a Horev Fellow of the Leaders in Science and Technology program at the Technion and an Alon Fellow. She received the Michael Bruno Memorial Award from the Rothschild Foundation, the Weizmann Prize for Exact Sciences, the Wolf Foundation Krill Prize for Excellence in Scientific Research, the Henry Taub Prize for Excellence in Research (twice), the Hershel Rich Innovation Award (three times), the Award for Women with Distinguished Contributions, the Andre and Bella Meyer Lectureship, the Career Development Chair at the Technion, the Muriel & David Jacknow Award for Excellence in Teaching, and the Technion’s Award for Excellence in Teaching (two times). 
She received several best paper awards and best demo awards together with her research students and colleagues, was selected as one of the 50 most influential women in Israel, and was a member of the Israel Committee for Higher Education. She is the Editor in Chief of Foundations and Trends in Signal Processing and a member of several IEEE Technical Committees and Award Committees.


Title: Unveiling the Power of RPCA for Computer Vision Applications (Sajid Javed, Assistant Professor, Khalifa University of Science and Technology, UAE)

Abstract:

Robust Principal Component Analysis (RPCA) is a machine learning tool that estimates low-rank and sparse matrices from a noisy input data matrix. Over the last decade, RPCA has attracted considerable attention from the machine learning and computer vision communities and has been shown to be a potential solution for many computer vision applications, especially background modeling and moving object segmentation. In this talk, I will first give a brief overview of RPCA and its application to low-rank and sparse background-foreground modeling, presenting the potential, significance, and systematic progress of RPCA on moving object segmentation tasks. Second, I will show how generic tracking of a single object, an important task in computer vision, can be posed as a low-rank learning problem using RPCA. Third, I will present a cellular-level community detection method that uses low-rank subspace clustering to unveil biologically meaningful tissue phenotypes from multi-gigapixel whole-slide images of colorectal cancer. For each application, the progress to date and future directions will be discussed to highlight the power of unsupervised learning.
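As background for the talk, the low-rank plus sparse decomposition behind RPCA is commonly posed as principal component pursuit: min ||L||_* + lam*||S||_1 subject to L + S = M. The sketch below solves it with a basic fixed-penalty ADMM loop on synthetic data; the parameter choices are standard heuristics picked for illustration, not details taken from the talk:

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Elementwise soft thresholding: prox of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_admm(M, lam, rho, n_iter=500):
    """Principal component pursuit via a fixed-penalty ADMM loop:
    min ||L||_* + lam*||S||_1  s.t.  L + S = M  (illustration only)."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # scaled dual variable for the constraint
    for _ in range(n_iter):
        L = svt(M - S + Y / rho, 1.0 / rho)
        S = shrink(M - L + Y / rho, lam / rho)
        Y = Y + rho * (M - L - S)
    return L, S

# Synthetic data: rank-2 "background" plus 5% large sparse "foreground".
rng = np.random.default_rng(0)
m, n, r = 60, 40, 2
L0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
S0 = np.zeros((m, n))
idx = rng.choice(m * n, size=m * n // 20, replace=False)
S0.flat[idx] = 10.0 * rng.choice([-1.0, 1.0], size=idx.size)
M = L0 + S0

lam = 1.0 / np.sqrt(max(m, n))            # standard PCP weight
rho = (m * n) / (4.0 * np.abs(M).sum())   # common heuristic penalty
L, S = rpca_admm(M, lam, rho)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))  # relative error in L
```

In background subtraction the columns of M are vectorized video frames, L captures the static background, and the support of S marks moving objects; this is the decomposition the talk builds on.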

Biography: Sajid Javed is an Assistant Professor of Computer Vision in the Electrical and Computer Engineering (ECE) department at Khalifa University of Science and Technology, UAE. Prior to that, he was a research scientist at the Khalifa University Center for Autonomous Robotic Systems, UAE, from February 2019 to April 2021, and before joining Khalifa University he was a research fellow at the University of Warwick from October 2017 to December 2018. He received his B.Sc. (Hons) degree in computer science from the University of Hertfordshire, United Kingdom, in 2010. He then completed his combined Master's and Ph.D. studies in computer science at Kyungpook National University, South Korea, from the fall of 2012 to August 2017. He is interested in computer vision, image processing, machine learning, and deep learning research problems. More specifically, he works on background-foreground modeling, multiple object tracking, and single object tracking in video sequences. His research draws on deep learning, robust principal component analysis, low-rank matrix completion, subspace learning, and unsupervised machine learning.