
Super-resolved multi-temporal segmentation with deep permutation-invariant networks [article]

Diego Valsesia, Enrico Magli
2022 arXiv   pre-print
Multi-image super-resolution from multi-temporal satellite acquisitions of a scene has recently enjoyed great success thanks to new deep learning models.  ...  We expand upon recently proposed models exploiting temporal permutation invariance with a multi-resolution fusion module able to infer the rich semantic information needed by the segmentation task.  ...  We leveraged recent results on the importance of temporal permutation invariance in the design of deep neural networks that deal with multi-temporal data, and specifically for MISR.  ...
arXiv:2204.02631v1 fatcat:wc4ngkj4fzbalnyvpnjo2vofwq
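
The fusion idea lends itself to a compact illustration. Below is a minimal sketch, not the authors' architecture, of a temporally permutation-invariant fusion: a shared per-acquisition encoder followed by a mean over the temporal axis (module names and channel sizes are hypothetical).

```python
import torch
import torch.nn as nn

class TemporalFusion(nn.Module):
    """Permutation-invariant fusion of T co-registered acquisitions:
    a shared encoder per frame, then a mean over the temporal axis,
    so reordering the inputs cannot change the output."""
    def __init__(self, in_ch=1, feat_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):                        # x: (B, T, C, H, W)
        b, t, c, h, w = x.shape
        f = self.encoder(x.reshape(b * t, c, h, w))   # shared weights per frame
        f = f.reshape(b, t, -1, h, w)
        return f.mean(dim=1)                     # mean over T: order-invariant
```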

AutoShuffleNet: Learning Permutation Matrices via an Exact Lipschitz Continuous Penalty in Deep Convolutional Neural Networks [article]

Jiancheng Lyu, Shuai Zhang, Yingyong Qi, Jack Xin
2019 arXiv   pre-print
ShuffleNet is a state-of-the-art lightweight convolutional neural network architecture. Its basic operations include grouped convolution, channel-wise convolution, and channel shuffling.  ...  We present examples of permutation optimization through graph matching and two-layer neural network models where the loss functions are calculated in closed analytical form.  ...  Light convolutional deep neural networks (LCNN) are attractive in resource-limited conditions for delivering high performance at low costs.  ...
arXiv:1901.08624v1 fatcat:26a4wy2cd5brjpgcxdgldhk2ui
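
For context, the fixed channel shuffle that AutoShuffleNet replaces with learned permutation matrices can be written in a few lines; this is the standard ShuffleNet operation, not the paper's Lipschitz-penalty method.

```python
import torch

def channel_shuffle(x, groups):
    """ShuffleNet's fixed channel shuffle: reshape the channel axis to
    (groups, c // groups), transpose, and flatten back. AutoShuffleNet
    instead *learns* this permutation end to end."""
    b, c, h, w = x.shape
    x = x.reshape(b, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.reshape(b, c, h, w)
```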

Train-by-Reconnect: Decoupling Locations of Weights from their Values [article]

Yushi Qiu, Reiji Suda
2020 arXiv   pre-print
What makes untrained deep neural networks (DNNs) different from the trained performant ones?  ...  to that of a well-trained fully initialized network; when the initial weights share a single value, our method finds a weight-agnostic neural network with far better-than-chance accuracy.  ...  Acknowledgements: We greatly appreciate the reviewers for the time and expertise they have invested in the reviews.  ...
arXiv:2003.02570v6 fatcat:pcuo4egw3rewjguhcjim3oefsu
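
A toy illustration of the decoupling, under the assumption that training only relocates a fixed multiset of weight values; the reconnection below is sampled at random purely for demonstration, whereas the paper learns it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed multiset of weight values; "training" only reorders them.
values = rng.normal(size=(64, 32)).ravel()

def reconnect(values, perm, shape=(64, 32)):
    """Reassign the same values to new locations (a weight permutation);
    the values themselves are never updated."""
    return values[perm].reshape(shape)

perm = rng.permutation(values.size)   # learned in Train-by-Reconnect, random here
W = reconnect(values, perm)
```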

Permutation Invariant Training of Deep Models for Speaker-Independent Multi-talker Speech Separation [article]

Dong Yu, Morten Kolbæk, Zheng-Hua Tan, Jesper Jensen
2017 arXiv   pre-print
This strategy cleverly solves the long-lasting label permutation problem that has prevented progress on deep-learning-based techniques for speech separation.  ...  We propose a novel deep learning model, which supports permutation invariant training (PIT), for speaker-independent multi-talker speech separation, commonly known as the cocktail-party problem.  ...  In their architecture, N frames of feature vectors of the mixed signal |Y| are used as the input to some deep learning models, such as deep neural networks (DNNs), convolutional neural networks (CNNs),  ...
arXiv:1607.00325v2 fatcat:sp4qdeik5rdvzf7e4gy5g7bmvm
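
The PIT criterion itself is easy to state in code: score every assignment of estimated sources to reference speakers and keep the best one. A minimal sketch with an MSE utterance loss (the loss choice is an assumption; the paper's objective differs in detail):

```python
import itertools
import torch

def pit_mse(est, ref):
    """Permutation invariant training loss: evaluate the MSE under every
    assignment of estimated sources to reference speakers and take the
    minimum, so the network is not penalized for output ordering.
    est, ref: (S, T) tensors with S sources."""
    S = est.shape[0]
    losses = [
        torch.stack([((est[i] - ref[p[i]]) ** 2).mean() for i in range(S)]).mean()
        for p in itertools.permutations(range(S))
    ]
    return torch.stack(losses).min()
```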

PINE: Universal Deep Embedding for Graph Nodes via Partial Permutation Invariant Set Functions [article]

Shupeng Gui, Xiangliang Zhang, Pan Zhong, Shuang Qiu, Mingrui Wu, Jieping Ye, Zhengdao Wang, Ji Liu
2019 arXiv   pre-print
In this paper, we propose a novel graph node embedding method (named PINE) via a novel notion of partial permutation invariant set function, to capture any possible dependence.  ...  Our method 1) can learn an arbitrary form of the representation function from the neighborhood, without losing any potential dependence structures, and 2) is applicable to both homogeneous and heterogeneous  ...  Neural network based approaches such as graph convolutional networks (GCN) [16] and GraphSAGE [17] define fixed-depth neural network layers to capture the neighborhood information from one-step neighbors  ...
arXiv:1909.12903v1 fatcat:2b6hv6qiiffvzjgx3iu3kyhm6i
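
As a point of reference, the fully permutation-invariant special case of such a set function is the familiar sum decomposition; PINE's partial invariance relaxes this to hold only within groups of neighbors (e.g., per edge type). A hedged sketch of the special case, with hypothetical phi and rho:

```python
import numpy as np

def embed_node(neighbor_feats, phi, rho):
    """Minimal permutation-invariant set function over a node's neighbors:
    f(X) = rho(sum_i phi(x_i)). PINE's partial invariance generalizes this;
    only the fully invariant case is shown here."""
    pooled = sum(phi(x) for x in neighbor_feats)
    return rho(pooled)

phi = lambda x: np.tanh(x)                       # hypothetical per-neighbor map
rho = lambda s: s / (1.0 + np.linalg.norm(s))    # hypothetical readout
z = embed_node([np.ones(4), np.zeros(4)], phi, rho)
```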

PINE: Universal Deep Embedding for Graph Nodes via Partial Permutation Invariant Set Functions

Shupeng Gui, Xiangliang Zhang, Pan Zhong, Shuang Qiu, Mingrui Wu, Jieping Ye, Zhengdao Wang, Ji Liu
2021 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Index Terms: Graph embedding, partial permutation invariant set function, representation learning. S. Gui is with the University of Rochester (sgui2@ur.rochester.edu), X.  ...  In this paper, we propose a novel graph node embedding method (named PINE) via a novel notion of partial permutation invariant set function, to capture any possible dependence.  ...  Neural network based approaches such as graph convolutional networks (GCN) [17] and GraphSAGE [18] define fixed-depth neural network layers to capture the neighborhood information from one-step neighbors  ...
doi:10.1109/tpami.2021.3061162 fatcat:jgmhyduzvfel3ljpkdigldwzdq

Norm-preserving Orthogonal Permutation Linear Unit Activation Functions (OPLU) [article]

Artem Chernodub, Dimitri Nowicki
2017 arXiv   pre-print
The OPLU activation function ensures norm preservation of the backpropagated gradients; it is therefore potentially well suited for training deep, extra-deep, and recurrent neural networks.  ...  We propose a novel activation function that implements piece-wise orthogonal non-linear mappings based on permutations.  ...  Acknowledgments: We thank FlyElephant (http://flyelephant.net) and Dmitry Spodarets for computational resources kindly given for our experiments.  ...
arXiv:1604.02313v5 fatcat:oysngjh275hydhw4cptvipclne
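
A minimal sketch of the pairwise sorting that, on our reading, underlies OPLU: each pair of pre-activations is replaced by its (max, min). Sorting a pair is a permutation, hence orthogonal and norm-preserving; the pairing of consecutive units is an assumption of this sketch.

```python
import numpy as np

def oplu(x):
    """OPLU over consecutive pairs of units: each pair (a, b) maps to
    (max(a, b), min(a, b)), a data-dependent permutation that preserves
    the norm of activations and of backpropagated gradients."""
    a, b = x[..., 0::2], x[..., 1::2]
    hi, lo = np.maximum(a, b), np.minimum(a, b)
    out = np.empty_like(x)
    out[..., 0::2], out[..., 1::2] = hi, lo
    return out

x = np.array([3.0, -1.0, 0.5, 2.0])
print(oplu(x))                                   # [ 3.  -1.   2.   0.5]
assert np.isclose(np.linalg.norm(oplu(x)), np.linalg.norm(x))
```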

Using Deep CNN with Data Permutation Scheme for Classification of Alzheimer's Disease in Structural Magnetic Resonance Imaging (sMRI)

Bumshik Lee, Waqas Ellahi, Jae Young Choi
2019 IEICE transactions on information and systems  
A data permutation scheme including slice integration, outlier removal, and entropy-based sMRI slice selection is proposed to utilize the benefits of AlexNet.  ...  To overcome the problems of conventional machine learning methods, the AlexNet classifier, a deep learning architecture, was employed for training and classification.  ...  [21] and Recurrent Neural Network (RNN) [17].  ...
doi:10.1587/transinf.2018edp7393 fatcat:epevpsjdvbhhvbntkjmuba5bkm
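
The entropy-based slice selection admits a short sketch: rank axial slices by the Shannon entropy of their intensity histograms and keep the most informative ones. Bin count, slice count k, and slicing axis below are assumptions, not the paper's settings.

```python
import numpy as np

def slice_entropy(slice_2d, bins=64):
    """Shannon entropy of a slice's intensity histogram."""
    hist, _ = np.histogram(slice_2d, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def select_slices(volume, k=32):
    """Hypothetical entropy-based selection: keep the k highest-entropy
    axial slices of an sMRI volume (shape: slices x H x W)."""
    ent = np.array([slice_entropy(s) for s in volume])
    return volume[np.argsort(ent)[-k:]]
```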

DeepPermNet: Visual Permutation Learning

Rodrigo Santa Cruz, Basura Fernando, Anoop Cherian, Stephen Gould
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
Unfortunately, permutation matrices are discrete, thereby posing difficulties for gradient-based methods.  ...  We present a principled approach to uncover the structure of visual data by solving a novel deep learning task coined visual permutation learning.  ...  Acknowledgements: This research was supported by the Australian Research Council (ARC) through the Centre of Excellence for Robotic Vision (CE140100016) and was undertaken with the resources from the National  ... 
doi:10.1109/cvpr.2017.640 dblp:conf/cvpr/CruzFCG17 fatcat:xqpnfjtak5hezkhqtotgsha3pm
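
The standard differentiable workaround for this discreteness is Sinkhorn normalization, which DeepPermNet adopts as a layer: alternating row and column normalizations push a non-negative matrix toward a doubly-stochastic relaxation of a permutation matrix. A minimal sketch (the iteration count is an assumption):

```python
import torch

def sinkhorn(logits, n_iters=20):
    """Sinkhorn normalization: alternately normalize rows and columns of
    exp(logits) so the square matrix approaches a doubly-stochastic one,
    a differentiable surrogate for a hard permutation matrix."""
    m = torch.exp(logits)
    for _ in range(n_iters):
        m = m / m.sum(dim=1, keepdim=True)   # row normalization
        m = m / m.sum(dim=0, keepdim=True)   # column normalization
    return m
```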

DeepPermNet: Visual Permutation Learning [article]

Rodrigo Santa Cruz, Basura Fernando, Anoop Cherian, Stephen Gould
2017 arXiv   pre-print
Unfortunately, permutation matrices are discrete, thereby posing difficulties for gradient-based methods.  ...  We present a principled approach to uncover the structure of visual data by solving a novel deep learning task coined visual permutation learning.  ...  Acknowledgements: This research was supported by the Australian Research Council (ARC) through the Centre of Excellence for Robotic Vision (CE140100016) and was undertaken with the resources from the National  ... 
arXiv:1704.02729v1 fatcat:gf5abm7mxfcdhgq2ikrusksioi

Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks [article]

Julieta Martinez, Jashan Shewakramani, Ting Wei Liu, Ioan Andrei Bârsan, Wenyuan Zeng, Raquel Urtasun
2021 arXiv   pre-print
We then establish a connection to rate-distortion theory and search for permutations that result in networks that are easier to compress.  ...  Compressing large neural networks is an important step for their deployment in resource-constrained computational platforms.  ...  State-of-the-art approaches to many computer vision tasks are currently based on deep neural networks.  ...
arXiv:2010.15703v3 fatcat:dj37pnxzpjfkhithbqffgd7lfm
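
Why permutations change compressibility is easy to demonstrate with a toy group quantizer: distortion depends on which weights end up coded together. The quantizer and random search below are deliberately simplistic stand-ins for the paper's vector quantization and its guided permutation search.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_quant_error(W, perm, d=4):
    """Toy stand-in for product quantization: permute columns, split them
    into contiguous groups of size d, and code each group by its mean
    column. Unlike scalar quantization, this distortion depends on which
    columns are grouped together, so the permutation matters."""
    Wp = W[:, perm]
    err = 0.0
    for g in range(0, Wp.shape[1], d):
        block = Wp[:, g:g + d]
        err += ((block - block.mean(axis=1, keepdims=True)) ** 2).sum()
    return err

W = rng.normal(size=(64, 32))
best = min((rng.permutation(32) for _ in range(200)),
           key=lambda p: group_quant_error(W, p))   # naive random search
```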

Permutation Matters: Anisotropic Convolutional Layer for Learning on Point Clouds [article]

Zhongpai Gao, Guangtao Zhai, Junchi Yan, Xiaokang Yang
2020 arXiv   pre-print
Behind the success story of convolutional neural networks (CNNs) is the fact that the data (e.g., images) are Euclidean-structured. However, point clouds are irregular and unordered.  ...  In this paper, we propose a permutable anisotropic convolutional operation (PAI-Conv) that calculates soft-permutation matrices for each point using dot-product attention according to a set of evenly distributed  ...  Recently, many deep neural networks have been developed to handle point clouds and achieved promising results.  ...
arXiv:2005.13135v2 fatcat:a5awrzi33faf7a7n7d34amaa2y
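
A hedged sketch of the soft-permutation idea for a single point: dot-product attention between fixed kernel points and the point's K neighbors, row-softmaxed so each kernel slot softly selects a neighbor. Shapes and the temperature are assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_permutation(neighbors, kernel_pts, tau=1.0):
    """neighbors, kernel_pts: (K, 3) arrays. Dot-product attention scores
    between K fixed, evenly distributed kernel points and K neighbors,
    row-softmaxed into a row-stochastic soft-permutation matrix that
    reorders the neighbors into canonical kernel slots."""
    scores = kernel_pts @ neighbors.T / tau            # (K, K) similarities
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    P = e / e.sum(axis=1, keepdims=True)               # soft permutation
    return P @ neighbors                               # softly reordered neighbors
```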

Style Permutation for Diversified Arbitrary Style Transfer

Pan Li, Dan Zhang, Lei Zhao, Duanqing Xu, Dongming Lu
2020 IEEE Access  
Arbitrary neural style transfer aims to render a content image in a randomly given artistic style using the features extracted from a well-trained convolutional neural network.  ...  The core of our style permutation algorithm is to multiply the deep image feature maps by a permutation matrix.  ...  Index Terms: Convolutional neural network, diversified style transfer, feature transformation, permutation matrix.
doi:10.1109/access.2020.3034653 fatcat:lnfpa2zvovcj3p4gqylud2ijo4
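
Since multiplying a feature map by a permutation matrix just reindexes its channels, the core operation reduces to a channel shuffle of the deep features; sampling different permutations then yields different stylizations from one network. A minimal sketch with a hypothetical feature shape:

```python
import numpy as np

rng = np.random.default_rng(0)

def permute_channels(feat, perm):
    """Apply a channel permutation to a (C, H, W) feature map; because the
    matrix is a permutation, P @ feat (per pixel) is plain reindexing."""
    return feat[perm]

feat = rng.normal(size=(8, 4, 4))        # hypothetical VGG-style feature map
out = permute_channels(feat, rng.permutation(8))
```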

End-to-end Microphone Permutation and Number Invariant Multi-channel Speech Separation [article]

Yi Luo, Zhuo Chen, Nima Mesgarani, Takuya Yoshioka
2020 arXiv   pre-print
Based on the filter-and-sum network (FaSNet), a recently proposed end-to-end time-domain beamforming system, we show how TAC significantly improves the separation performance across various numbers of  ...  Conventional optimization-based beamforming techniques satisfy these requirements by definition, while for deep learning-based end-to-end systems those constraints are not fully addressed.  ...  Deep learning-based beamforming systems, sometimes called neural beamformers, have been an active research topic in recent years [1, 2].  ...
arXiv:1910.14104v3 fatcat:svwgp3pxr5bmvbq5f2kluupiti
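
The transform-average-concatenate (TAC) design can be sketched compactly: a shared per-microphone transform, an average across microphones, and a concatenation of that global summary back onto each channel, making the block invariant to microphone order and count. Layer sizes below are assumptions.

```python
import torch
import torch.nn as nn

class TAC(nn.Module):
    """Transform-average-concatenate sketch: shared transform per channel,
    mean across channels, global summary concatenated back per channel.
    The mean makes the block invariant to microphone order and number."""
    def __init__(self, dim=64, hid=128):
        super().__init__()
        self.transform = nn.Sequential(nn.Linear(dim, hid), nn.PReLU())
        self.average = nn.Sequential(nn.Linear(hid, hid), nn.PReLU())
        self.concat = nn.Sequential(nn.Linear(2 * hid, dim), nn.PReLU())

    def forward(self, x):                  # x: (B, M, dim), M microphones
        f = self.transform(x)              # shared weights across channels
        g = self.average(f.mean(dim=1, keepdim=True)).expand_as(f)
        return self.concat(torch.cat([f, g], dim=-1))
```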

Deep Sets [article]

Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan Salakhutdinov, Alexander Smola
2018 arXiv   pre-print
We also derive the necessary and sufficient conditions for permutation equivariance in deep models.  ...  This family of functions has a special structure which enables us to design a deep network architecture that can operate on sets and which can be deployed on a variety of scenarios including both unsupervised  ...  Our deep neural network consists of 9 2D-convolution and max-pooling layers followed by 3 permutation-equivariant layers, and finally a softmax layer that assigns a probability value to each set member  ... 
arXiv:1703.06114v3 fatcat:ce5ijlok4jgjhfovcsur7z2eom
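
The paper's sum-decomposition result, f(X) = rho(sum over x of phi(x)), translates directly into a small permutation-invariant network; the dimensions below are arbitrary.

```python
import torch
import torch.nn as nn

class DeepSet(nn.Module):
    """Permutation-invariant set function as a sum decomposition: a shared
    per-element network phi, a sum over the set, and a readout rho."""
    def __init__(self, in_dim=3, hid=64, out_dim=1):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(),
                                 nn.Linear(hid, hid))
        self.rho = nn.Sequential(nn.Linear(hid, hid), nn.ReLU(),
                                 nn.Linear(hid, out_dim))

    def forward(self, x):                  # x: (B, N, in_dim), any set size N
        return self.rho(self.phi(x).sum(dim=1))
```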
Showing results 1-15 of 5,630.