A1 Journal article (refereed)
Sparse nonnegative tensor decomposition using proximal algorithm and inexact block coordinate descent scheme (2021)
Wang, D., Chang, Z., & Cong, F. (2021). Sparse nonnegative tensor decomposition using proximal algorithm and inexact block coordinate descent scheme. Neural Computing and Applications, 33(24), 17369-17387. https://doi.org/10.1007/s00521-021-06325-8
Publication details
All authors or editors: Wang, Deqing; Chang, Zheng; Cong, Fengyu
Journal or series: Neural Computing and Applications
ISSN: 0941-0643
eISSN: 1433-3058
Publication year: 2021
Publication date: 04/10/2021
Volume: 33
Issue number: 24
Pages range: 17369-17387
Publisher: Springer
Publication country: United Kingdom
Publication language: English
DOI: https://doi.org/10.1007/s00521-021-06325-8
Publication open access: Openly available
Publication channel open access: Partially open access channel
Publication is parallel published (JYX): https://jyx.jyu.fi/handle/123456789/78051
Abstract
Nonnegative tensor decomposition is a versatile tool for multiway data analysis, in which the extracted components are nonnegative and usually sparse. Nevertheless, this sparsity is only a side effect and cannot be controlled explicitly without additional regularization. In this paper, we investigate nonnegative CANDECOMP/PARAFAC (NCP) decomposition with an l1-norm sparsity regularization term (sparse NCP). When high sparsity is imposed, the factor matrices contain more zero entries and lose full column rank. Sparse NCP is therefore prone to rank deficiency, and its algorithms may fail to converge. We propose a novel sparse NCP model based on the proximal algorithm, whose subproblems are strongly convex within the block coordinate descent (BCD) framework. The new model thus maintains the full-column-rank condition and is guaranteed to converge to a stationary point. In addition, we propose an inexact BCD scheme for sparse NCP, in which each subproblem is updated multiple times to speed up the computation. To demonstrate the effectiveness and efficiency of sparse NCP with the proximal algorithm, we solve the model with two optimization algorithms: inexact alternating nonnegative quadratic programming and inexact hierarchical alternating least squares. We evaluate the proposed sparse NCP methods on synthetic and real-world tensor data at both small and large scales. The experimental results demonstrate that our algorithms efficiently impose sparsity on the factor matrices, extract meaningful sparse components, and outperform state-of-the-art methods.
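To make the abstract concrete, the following is a minimal NumPy sketch of one proximal hierarchical-ALS sweep for a single factor matrix of a 3-way sparse NCP, in the spirit of the approach described above. It is not the authors' implementation: the function name, parameter values, and column-ordering convention for the mode-1 unfolding are assumptions for illustration. The l1 weight `beta` promotes zero entries, while the proximal weight `rho` keeps each column subproblem strongly convex even when a column of the Gram matrix is near-singular.

```python
import numpy as np

def prox_hals_update(X1, A, B, C, A_prev, beta=0.1, rho=1e-3):
    """One proximal-HALS sweep over the columns of factor A (illustrative sketch).

    X1     : mode-1 unfolding of the tensor, shape (I, J*K), column index j + J*k
    A      : current factor (I, R), updated column by column
    B, C   : remaining factors, shapes (J, R) and (K, R)
    A_prev : proximal anchor (factor value from the previous outer iteration)
    beta   : l1 sparsity weight (hypothetical value)
    rho    : proximal weight making each column subproblem strongly convex
    """
    R = A.shape[1]
    # Khatri-Rao product C (x) B, shape (J*K, R); then the MTTKRP W = X1 (C (x) B)
    KR = np.einsum('kr,jr->kjr', C, B).reshape(-1, R)
    W = X1 @ KR
    V = (C.T @ C) * (B.T @ B)  # Hadamard product of Gram matrices, shape (R, R)
    for r in range(R):
        # Closed-form minimizer of the strongly convex column subproblem,
        # with soft-thresholding by beta and projection onto the nonnegative orthant
        num = W[:, r] - A @ V[:, r] + V[r, r] * A[:, r] + rho * A_prev[:, r] - beta
        A[:, r] = np.maximum(0.0, num / (V[r, r] + rho))
    return A
```

In a full decomposition this update would be applied in turn to each factor (and, under the inexact BCD scheme the abstract describes, repeated several times per factor before moving on), with the proximal anchor refreshed at each outer iteration.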
Keywords: signal processing; algorithms
Free keywords: tensor decomposition; nonnegative CANDECOMP/PARAFAC decomposition; sparse regularization; proximal algorithm; inexact block coordinate descent
Ministry reporting: Yes
Reporting Year: 2021
JUFO rating: 1