MLLC-Net: A Progressive Multilayer Latent Low-Rank Coding Network for Deep Subspace Discovery

EasyChair Preprint 5370, 13 pages. Date: April 24, 2021

Abstract

Low-rank representation is a powerful and popular algorithm for discovering the subspaces underlying a set of samples and has achieved impressive performance; however, because of its single-layer structure, it cannot capture the deep hierarchical information hidden in data. In this paper, we explore deep image representation in a progressive way by presenting a new strategy that extends existing single-layer latent low-rank models to multiple layers. Technically, we propose a Multilayer Latent Low-rank Coding Network, termed MLLC-Net, to uncover deep features and the clustering structures embedded in the latent subspace. The basic idea of MLLC-Net is that each layer progressively refines the principal and salient features of the previous layers by fusing their subspaces, which can potentially yield more accurate features and subspaces for image representation learning and clustering. To learn deep hidden information, MLLC-Net feeds the shallow features from earlier layers into the subsequent layers. It then recovers hierarchical information and deeper features by congregating the projective subspaces and the representation subspaces in each layer, respectively. As such, one can learn deeper subspaces while also ensuring that the representation learning of the deeper layers removes noise and discovers the underlying clean subspaces, which is verified by our simulations. It is noteworthy that the framework of MLLC-Net is applicable to most existing latent low-rank representation models; that is, existing latent models can be easily extended to a multilayer scenario using MLLC-Net. Extensive results on real databases show that our models deliver enhanced performance over existing related techniques.
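The progressive layering described above can be sketched in a few lines. This is only a toy illustration: it uses singular value thresholding (the proximal operator of the nuclear norm) as a stand-in for a full latent low-rank solver, and the function names, the single threshold parameter, and the feed-forward loop are illustrative assumptions rather than the authors' actual MLLC-Net implementation.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink each singular value of M by tau.
    This is the proximal operator of the nuclear norm, a common building
    block in low-rank representation solvers."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def multilayer_low_rank_coding(X, n_layers=3, tau=0.5):
    """Toy progressive pipeline (assumed simplification of MLLC-Net):
    each layer recovers a low-rank 'principal' part of its input and
    feeds that recovered part forward to the next layer."""
    features = X
    codings = []
    for _ in range(n_layers):
        low_rank = svt(features, tau)  # principal/salient component
        codings.append(low_rank)
        features = low_rank            # shallow features feed the next layer
    return codings

# Usage: run three layers on random data; each layer further shrinks
# the spectrum, so the nuclear norm decreases monotonically.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))
codings = multilayer_low_rank_coding(X, n_layers=3, tau=0.5)
```

The design choice mirrored here is the paper's progressive refinement: deeper layers never see the raw noisy input again, only the cleaned subspace estimate from the layer before.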
Keyphrases: clustering, deep subspace discovery, image representation, progressive multilayer latent low-rank coding network