Abstract:
Tensor Decomposition (TD) is a powerful technique for solving high-dimensional optimization problems and has been widely used in machine learning and data science. Many TD models aim to strike a trade-off between computational complexity and representation ability. However, they suffer from the problems of tensor rank selection and latent factor arrangement, and they neglect the internal correlation between different modes. In this paper, we propose a data-adaptive TD model built on a generalized tensor rank, named adaptive tensor network (ATN) decomposition, which constructs an optimal topological structure for TD according to the intrinsic properties of the data. We exploit the generalized tensor rank to measure the correlation between two modes of the data and establish a multilinear connection between the corresponding latent factors with an adaptive rank. ATN possesses the merits of permutation invariance and strong robustness, and it represents high-order data with fewer parameters. We verify ATN's effectiveness and superiority on three typical tasks: tensor completion, image denoising, and neural network compression. Experimental results on synthetic data and real datasets demonstrate that the overall performance of ATN surpasses that of state-of-the-art TD methods.