28,216 Hits in 2.3 sec

Block-term Tensor Neural Networks [article]

Jinmian Ye, Guangxi Li, Di Chen, Haiqin Yang, Shandian Zhe, Zenglin Xu
2020 arXiv   pre-print
We name the corresponding new structure block-term tensor layers (BT-layers), which can be easily adapted to neural network models, such as CNNs and RNNs.  ...  In this paper, we explore the correlations in the weight matrices and approximate them with low-rank block-term tensors.  ...  This paper extends the block-term layer in LSTM [15] to more general neural network architectures.  ...
arXiv:2010.04963v2 fatcat:ncf7pf4tuzd5hprdym725vfgii

Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition [article]

Jinmian Ye, Linnan Wang, Guangxi Li, Di Chen, Shandian Zhe, Xinqi Chu, Zenglin Xu
2018 arXiv   pre-print
Recurrent Neural Networks (RNNs) are powerful sequence modeling tools.  ...  To overcome this problem, we propose a compact and flexible structure, namely Block-Term tensor decomposition, which greatly reduces the parameters of RNNs and improves their training efficiency.  ...  Other tensor decomposition methods have also been applied in Deep Neural Networks (DNNs) for various purposes [19, 49, 18].  ...
arXiv:1712.05134v2 fatcat:g4fdyr5jvvebpffc7qae34j2my
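
The parameter savings claimed in this entry can be illustrated with a back-of-the-envelope count. The shapes below are purely hypothetical (not the configuration reported in the paper), and the count assumes the usual block-term parameterization of an order-3-input, order-3-output weight tensor: one small core plus one factor of shape I_k x O_k x R per mode for each block term.

    # Hypothetical shapes, not the paper's reported configuration.
    I, O = (8, 16, 16), (4, 16, 16)   # input reshaped to 8x16x16, output to 4x16x16
    R, N = 2, 4                       # assumed core rank per mode and number of block terms
    dense_params = (8 * 16 * 16) * (4 * 16 * 16)                       # 2,097,152 for a dense matrix
    bt_params = N * (R ** 3 + sum(i * o * R for i, o in zip(I, O)))    # 4,384 for the block-term form
    print(dense_params / bt_params)   # roughly a 478x reduction under these assumptions

Under these assumed shapes the factorized weight uses a few thousand parameters where the dense matrix needs about two million, which is the kind of compression the abstract refers to.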

Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition

Jinmian Ye, Linnan Wang, Guangxi Li, Di Chen, Shandian Zhe, Xinqi Chu, Zenglin Xu
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
Recurrent Neural Networks (RNNs) are powerful sequence modeling tools.  ...  To overcome this problem, we propose a compact and flexible structure, namely Block-Term tensor decomposition, which greatly reduces the parameters of RNNs and improves their training efficiency.  ...  Other tensor decomposition methods have also been applied in Deep Neural Networks (DNNs) for various purposes [19, 49, 18].  ...
doi:10.1109/cvpr.2018.00977 dblp:conf/cvpr/YeWLCZCX18 fatcat:mclr7pbyfvghndnx36v4fgobkq

A variable projection method for block term decomposition of higher-order tensors

Guillaume Olikier, Pierre-Antoine Absil, Lieven De Lathauwer
2018 The European Symposium on Artificial Neural Networks  
In this paper, we focus on the best approximation in the least-squares sense of a higher-order tensor by a block term decomposition.  ...  decomposing a tensor.  ...  In this paper, we focus on a recently introduced tensor decomposition called block term decomposition (BTD) [3, 4, 5].  ...
dblp:conf/esann/OlikierAL18 fatcat:c6hga4zcxzc7pixnmqdynyyhqi

BT-Nets: Simplifying Deep Neural Networks via Block Term Decomposition [article]

Guangxi Li, Jinmian Ye, Haiqin Yang, Di Chen, Shuicheng Yan and Zenglin Xu
2017 arXiv   pre-print
In this paper, we propose the Block Term networks (BT-nets) in which the commonly used fully-connected layers (FC-layers) are replaced with block term layers (BT-layers).  ...  In BT-layers, the inputs and the outputs are reshaped into two low-dimensional high-order tensors, then block-term decomposition is applied as tensor operators to connect them.  ...  Block Term Format Tensor Network Diagrams and Notations A tensor in a tensor network, also known as a multi-way array, can be viewed as a higher-order extension of a vector (i.e., an order-1 tensor) and  ...
arXiv:1712.05689v1 fatcat:ealwokazsjfstnu4l4ijn5lqki
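
The BT-nets snippet above describes replacing a fully-connected layer with a BT-layer: input and output are reshaped into low-dimensional high-order tensors and connected through a block-term-decomposed weight. The PyTorch sketch below is a minimal, hypothetical rendering of that idea for order-2 input/output tensors; the class name, default shapes, and single shared rank R are assumptions for illustration, not the authors' implementation.

    import torch

    class BTLayer(torch.nn.Module):
        """Illustrative block-term (BT) layer standing in for a fully-connected layer."""
        def __init__(self, in_shape=(8, 8), out_shape=(8, 8), rank=4, num_blocks=2):
            super().__init__()
            I1, I2 = in_shape
            O1, O2 = out_shape
            R = rank
            self.in_shape = in_shape
            # one core tensor and four factor matrices per block term
            self.cores = torch.nn.Parameter(torch.randn(num_blocks, R, R, R, R) * 0.1)
            self.A = torch.nn.Parameter(torch.randn(num_blocks, I1, R) * 0.1)
            self.B = torch.nn.Parameter(torch.randn(num_blocks, I2, R) * 0.1)
            self.C = torch.nn.Parameter(torch.randn(num_blocks, O1, R) * 0.1)
            self.D = torch.nn.Parameter(torch.randn(num_blocks, O2, R) * 0.1)

        def forward(self, x):                    # x: (batch, I1 * I2)
            X = x.view(-1, *self.in_shape)       # reshape the input into an order-2 tensor
            # contract the input with every block term and sum the contributions
            Y = torch.einsum('bij,npqrs,nip,njq,nkr,nls->bkl',
                             X, self.cores, self.A, self.B, self.C, self.D)
            return Y.reshape(x.shape[0], -1)     # flatten back to (batch, O1 * O2)

With the default shapes a dense layer would store 64 x 64 = 4,096 weights, while this sketch stores num_blocks * (R^4 + (I1 + I2 + O1 + O2) * R) = 2 * (256 + 128) = 768.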

TedNet: A Pytorch Toolkit for Tensor Decomposition Networks [article]

Yu Pan, Maolin Wang, Zenglin Xu
2021 arXiv   pre-print
TedNet implements 5 kinds of tensor decomposition (i.e., CANDECOMP/PARAFAC (CP), Block-Term Tucker (BT), Tucker-2, Tensor Train (TT) and Tensor Ring (TR)) on traditional deep neural layers, the convolutional  ...  Tensor Decomposition Networks (TDNs) prevail for their inherent compact architectures.  ...  We implemented 5 variants of tensor decomposition methods, namely CP, Tucker, Tensor Ring, Tensor Train, and Block-term Tucker. Tensor decomposition can be applied in convolutional neural networks.  ...
arXiv:2104.05018v1 fatcat:hv3as72nazcu5glofmuh6xfdeq
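
TedNet's own classes and call signatures are not quoted in this entry, so the snippet below deliberately avoids them. It is a minimal plain-PyTorch sketch of the simplest of the listed decompositions, a CP/low-rank factorization of a linear layer's weight matrix; the class name and initialization scale are assumptions chosen for illustration only.

    import torch

    class CPLinear(torch.nn.Module):
        """Low-rank (CP on an order-2 weight) factorized linear layer, for illustration."""
        def __init__(self, in_features, out_features, rank):
            super().__init__()
            self.A = torch.nn.Parameter(torch.randn(in_features, rank) * 0.1)
            self.B = torch.nn.Parameter(torch.randn(rank, out_features) * 0.1)

        def forward(self, x):
            # W = A @ B is never materialized; parameters drop from I*O to (I + O) * R
            return (x @ self.A) @ self.B

The other listed decompositions (Tucker-2, Tensor Train, Tensor Ring, Block-Term Tucker) follow the same pattern of storing small factors and contracting them with the input at forward time.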

Sharing Residual Units Through Collective Tensor Factorization in Deep Neural Networks [article]

Chen Yunpeng, Jin Xiaojie, Kang Bingyi, Feng Jiashi, Yan Shuicheng
2017 arXiv   pre-print
Then, based on the new explanation, we propose a new architecture, Collective Residual Unit (CRU), which enhances the parameter efficiency of deep neural networks through collective tensor factorization  ...  term decomposition.  ...  of recurrent neural networks.  ... 
arXiv:1703.02180v2 fatcat:zbgfmpqibnbjbkckhklkus25ci

Introduction to the Special Issue on Tensor Decomposition for Signal Processing and Machine Learning

Hongyang Chen, Sergiy A. Vorobyov, Hing Cheung So, Fauzia Ahmad, Fatih Porikli
2021 IEEE Journal on Selected Topics in Signal Processing  
Rontogiannis et al. develop some interesting new results on the block-term decomposition (BTD) tensor model.  ...  Zhang et al. propose a tensor-train deep neural network (TT-DNN)-based channel estimator.  ...
doi:10.1109/jstsp.2021.3065184 fatcat:qbvihejwkfaa5hoztety77pnwi

A Tensorized Transformer for Language Modeling [article]

Xindian Ma, Peng Zhang, Shuai Zhang, Nan Duan, Yuexian Hou, Dawei Song, Ming Zhou
2019 arXiv   pre-print
In this paper, based on the ideas of tensor decomposition and parameter sharing, we propose a novel self-attention model (namely Multi-linear attention) with Block-Term Tensor Decomposition (BTD).  ...  The latest development of neural models has connected the encoder and decoder through a self-attention mechanism.  ...  We use Block Term Tensor decomposition (BTD) to construct a new representation, namely Multi-linear attention, which is an order-3 tensor.  ...
arXiv:1906.09777v3 fatcat:2o7u4242wbfipixrjotrv47eby

Matrix and tensor decompositions for training binary neural networks [article]

Adrian Bulat and Jean Kossaifi and Georgios Tzimiropoulos and Maja Pantic
2019 arXiv   pre-print
While prior methods for neural network binarization binarize each filter independently, we propose to instead parametrize the weight tensor of each layer using matrix or tensor decomposition.  ...  This paper is on improving the training of binary neural networks in which both activations and weights are binary.  ...  Related work In this section, we review the related work, in terms of neural network architectures (2.1), network binarization (2.2) and tensor methods (2.3).  ... 
arXiv:1904.07852v1 fatcat:z37niixs7rc3bhk35ljcteec2a

T-Net: Parametrizing Fully Convolutional Nets with a Single High-Order Tensor [article]

Jean Kossaifi, Adrian Bulat, Georgios Tzimiropoulos, Maja Pantic
2019 arXiv   pre-print
In this paper, we propose to fully parametrize Convolutional Neural Networks (CNNs) with a single high-order, low-rank tensor.  ...  of the network (e.g. number of convolutional blocks, depth, number of stacks, input features, etc).  ...  Other methods for network decomposition. There are also other methods, besides tensor-based ones, for reducing the redundancy and number of parameters in neural networks.  ... 
arXiv:1904.02698v1 fatcat:s2io24rw2vf4pjj6v5n4ltye3m

Polynomial Networks in Deep Classifiers [article]

Grigorios G Chrysos, Markos Georgopoulos, Jiankang Deng, Yannis Panagakis
2021 arXiv   pre-print
Deep neural networks have been the driving force behind the success in classification tasks, e.g., object and audio recognition.  ...  The expressivity of the proposed models is highlighted both in terms of increased model performance as well as model compression.  ...  Tensor decompositions have also been used for modeling the components of deep neural networks.  ... 
arXiv:2104.07916v1 fatcat:44xa72ypf5bfbpjgctimghqfam

Sharing Residual Units Through Collective Tensor Factorization To Improve Deep Neural Networks

Yunpeng Chen, Xiaojie Jin, Bingyi Kang, Jiashi Feng, Shuicheng Yan
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
In this work, we revisit the standard residual function as well as its several successful variants and propose a unified framework based on tensor Block Term Decomposition (BTD) to explain these apparently  ...  CRU further enhances parameter efficiency of deep residual neural networks by sharing core factors derived from collective tensor factorization over the involved residual units.  ...  Tensor Block Term Decomposition A tensor is a multi-dimensional array and the order of a tensor is the number of its dimensions.  ... 
doi:10.24963/ijcai.2018/88 dblp:conf/ijcai/ChenJKFY18 fatcat:zuwfslc4ebhbdgkbhdkactve7m
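
The snippet above introduces the block term decomposition used by CRU: a tensor is a multi-dimensional array, and a BTD writes an order-3 tensor as a sum of R Tucker blocks, X ≈ sum_r G_r x_1 A_r x_2 B_r x_3 C_r. Below is a minimal NumPy sketch of that reconstruction; the function and variable names are ours, chosen for illustration.

    import numpy as np

    def btd_reconstruct(cores, A, B, C):
        """Rebuild an order-3 tensor from its block terms.
        cores: list of R core tensors, each of shape (L1, L2, L3)
        A, B, C: lists of R factor matrices of shapes (I, L1), (J, L2), (K, L3)"""
        X = np.zeros((A[0].shape[0], B[0].shape[0], C[0].shape[0]))
        for G, a, b, c in zip(cores, A, B, C):
            # mode products G x_1 a x_2 b x_3 c, accumulated over the R block terms
            X += np.einsum('pqr,ip,jq,kr->ijk', G, a, b, c)
        return X

Setting every core to size 1 x 1 x 1 recovers CP, and R = 1 recovers plain Tucker, which is what makes BTD a convenient umbrella for the residual-unit variants the paper unifies.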

Recurrent Graph Tensor Networks: A Low-Complexity Framework for Modelling High-Dimensional Multi-Way Sequence [article]

Yao Lei Xu, Danilo P. Mandic
2021 arXiv   pre-print
reduce parameter complexity, resulting in a novel Recurrent Graph Tensor Network (RGTN).  ...  Recurrent Neural Networks (RNNs) are among the most successful machine learning models for sequence modelling, but tend to suffer from an exponential increase in the number of parameters when dealing with  ...  Recurrent Neural Networks Recurrent Neural Networks (RNNs) [4] [13] are among the most successful deep learning tools for sequence modelling.  ... 
arXiv:2009.08727v5 fatcat:eoul7vqcrjd4zknyn3cfjbh3ki

Graph Neural Network for Senior High Student's Grade Prediction

Yang Yu, Jinfu Fan, Yuanqing Xian, Zhongjie Wang
2022 Applied Sciences  
The proposed grade prediction model, based on a graph neural network, is tested on a dataset from Ningbo Xiaoshi High School.  ...  Therefore, graph network blocks can be composed to build the graph neural network.  ...  Figure 2 shows the graph network block. The second is to apply the neural network to graph-structured data.  ...
doi:10.3390/app12083881 fatcat:gm6ebszpsbaujngdtul4o5i364
Showing results 1 — 15 out of 28,216 results