2,493 Hits in 7.5 sec

Residual Convolutional Neural Network Revisited with Active Weighted Mapping [article]

Jung HyoungHo, Lee Ryong, Lee Sanghwan, Hwang Wonjun
2018 arXiv   pre-print
It results in multiple paths of data flow within a network, and the paths are merged with equal weights.  ...  In this paper, we introduce the active weighted mapping method, which infers proper weight values from the characteristics of the input data on the fly.  ...  It has recently been noted that stacking the layers of a convolutional neural network more deeply leads to better accuracy in visual recognition.  ... 
arXiv:1811.06878v1 fatcat:onyaea26yret7nkjwplhfpslpa
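
The snippet only hints at the mechanism: replacing the equal-weight merge of residual paths with weights inferred from the input on the fly. The PyTorch sketch below illustrates one way such an input-dependent merge could look; the pooling-based gate and all layer choices are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class WeightedResidualBlock(nn.Module):
    """Residual block whose two paths (identity and conv branch) are merged
    with input-dependent weights instead of a fixed 1:1 sum.
    Illustrative sketch only; the gating design is an assumption."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        # Infer two merge weights from globally pooled input statistics.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, 2),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        w = self.gate(x)                                    # (N, 2) weights per sample
        w_id, w_res = w[:, 0:1, None, None], w[:, 1:2, None, None]
        return torch.relu(w_id * x + w_res * self.body(x))

block = WeightedResidualBlock(16)
print(block(torch.randn(2, 16, 8, 8)).shape)  # torch.Size([2, 16, 8, 8])
```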

DVMN: Dense Validity Mask Network for Depth Completion [article]

Laurenz Reichardt, Patrick Mangat, Oliver Wasenmüller
2021 arXiv   pre-print
We develop a guided convolutional neural network focused on gathering dense and valid information from sparse depth maps.  ...  State-of-the-art methods use image-guided neural networks for dense depth completion.  ...  Current state-of-the-art solutions rely on neural networks to complete depth maps.  ... 
arXiv:2107.06709v1 fatcat:sykaqvo7sngdblfkgavqfdeedi
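
As a rough illustration of what "gathering dense and valid information from sparse depth maps" can mean in practice, the sketch below shows a generic validity-mask-guided convolution that aggregates only valid pixels and renormalizes by their count. It is an assumption-laden toy, not the DVMN architecture.

```python
import torch
import torch.nn.functional as F

def masked_conv(depth, valid_mask, weight, eps=1e-8):
    """Convolve a sparse depth map while ignoring invalid pixels: zero them out,
    renormalize each response by the number of valid pixels under the kernel,
    and propagate an updated validity mask."""
    num = F.conv2d(depth * valid_mask, weight, padding=1)
    den = F.conv2d(valid_mask, torch.ones_like(weight), padding=1)
    out = num / (den + eps)
    new_mask = (den > 0).float()
    return out, new_mask

depth = torch.rand(1, 1, 6, 6)
mask = (torch.rand(1, 1, 6, 6) > 0.7).float()   # sparse validity pattern
w = torch.randn(1, 1, 3, 3)
out, new_mask = masked_conv(depth, mask, w)
print(out.shape, new_mask.mean().item())
```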

Camera-Based Blind Spot Detection with a General Purpose Lightweight Neural Network

Yiming Zhao, Lin Bai, Yecheng Lyu, Xinming Huang
2019 Electronics  
Many new convolutional neural network (CNN) structures have been proposed, and most of these networks are very deep in order to achieve state-of-the-art performance on benchmarks.  ...  Subsequently, a series of experiments is conducted to design an efficient neural network by comparing some of the latest deep learning models.  ...  The extreme case for weight representation is binary notation. Some researchers directly applied binary weights or activations during the model training process [28, 29].  ... 
doi:10.3390/electronics8020233 fatcat:le3byzn2h5axrekdzwbii5h6im
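
The snippet mentions binary weight representation as the extreme case of weight compression. A minimal NumPy sketch of the common scale-times-sign approximation (in the spirit of XNOR-style binarization, chosen here as an assumption) shows what replacing real-valued filter weights with a binary pattern means:

```python
import numpy as np

def binarize_filter(w):
    """Approximate a real-valued filter w by alpha * sign(w), where alpha is the
    mean absolute value.  Shown only to illustrate binary weight representation;
    the specific approximation rule is an assumption."""
    alpha = np.abs(w).mean()
    b = np.sign(w)
    b[b == 0] = 1.0          # avoid zero entries in the binary pattern
    return alpha, b

rng = np.random.default_rng(0)
w = rng.standard_normal((3, 3))
alpha, b = binarize_filter(w)
print(alpha)
print(b)
print(np.abs(w - alpha * b).mean())   # approximation error
```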

Deep Anchored Convolutional Neural Networks

Jiahui Huang, Kshitij Dwivedi, Gemma Roig
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
We name it Deep Anchored Convolutional Neural Network (DACNN).  ...  Convolutional Neural Networks (CNNs) have proven to be extremely successful at solving computer vision tasks.  ...  We introduced a new convolutional neural network architecture, which we refer to as Deep Anchored Convolutional Neural Network (DACNN).  ... 
doi:10.1109/cvprw.2019.00089 dblp:conf/cvpr/HuangDR19 fatcat:ozr6po45kjcplcmv7d3qzso76m

Deep Anchored Convolutional Neural Networks [article]

Jiahui Huang, Kshitij Dwivedi, Gemma Roig
2019 arXiv   pre-print
We name it Deep Anchored Convolutional Neural Network (DACNN).  ...  Convolutional Neural Networks (CNNs) have proven to be extremely successful at solving computer vision tasks.  ...  We introduced a new convolutional neural network architecture, which we refer to as Deep Anchored Convolutional Neural Network (DACNN).  ... 
arXiv:1904.09764v1 fatcat:6hksfylrzzbf7bwedjwdwyj4vy

Perturbative Neural Networks

Felix Juefei-Xu, Vishnu Naresh Boddeti, Marios Savvides
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
Empirically, deep neural networks with perturbation layers, called Perturbative Neural Networks (PNNs), in lieu of convolutional layers perform comparably with standard CNNs on a range of visual datasets  ...  The perturbation layer does away with convolution in the traditional sense and instead computes its response as a weighted linear combination of non-linearly activated additive noise perturbed inputs.  ...  Networks with binary weights [3, 2, 23], networks with sparse convolutional weights [20, 21, 18], networks with efficient factorization of the convolutional weights [10, 15], and networks with a hybrid  ... 
doi:10.1109/cvpr.2018.00349 dblp:conf/cvpr/Juefei-XuBS18 fatcat:4r35f3ppprb4phabfgfjhcj2ta
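
The abstract describes the perturbation layer as a weighted linear combination of non-linearly activated, additive-noise-perturbed inputs, i.e. roughly y = Σᵢ wᵢ · σ(x + nᵢ) with fixed noise masks nᵢ and learned weights wᵢ. The PyTorch sketch below follows that description; the per-channel noise offsets, mask count, and noise level are simplifying assumptions (the published method uses spatially varying noise masks, which this shape-agnostic sketch simplifies away).

```python
import torch
import torch.nn as nn

class PerturbationLayer(nn.Module):
    """Replaces a convolution by (i) adding several fixed random noise masks to
    the input, (ii) applying a non-linearity, and (iii) linearly combining the
    perturbed responses with learned weights (a 1x1 convolution)."""
    def __init__(self, in_channels, out_channels, n_masks=4, noise_level=0.1):
        super().__init__()
        # Fixed (non-trainable) additive perturbations, one set per input channel.
        self.register_buffer(
            "noise", noise_level * torch.randn(n_masks, in_channels, 1, 1))
        self.act = nn.ReLU(inplace=True)
        # Learned linear combination across the perturbed copies and channels.
        self.combine = nn.Conv2d(n_masks * in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        perturbed = [self.act(x + self.noise[i]) for i in range(self.noise.shape[0])]
        return self.combine(torch.cat(perturbed, dim=1))

layer = PerturbationLayer(3, 16)
print(layer(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 16, 32, 32])
```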

Denoising single images by feature ensemble revisited [article]

Masud An Nur Islam Fahim, Nazmus Saqib, Shafkat Khan Siam, Ho Yub Jung
2022 arXiv   pre-print
The proposed architecture has fewer parameters than most previous networks and still achieves significant improvements over the current state-of-the-art networks  ...  The proposed architecture revisits the concept of modular concatenation, instead of long and deep cascaded connections, to recover a cleaner approximation of the given image.  ...  These modules are built upon a customized convolution and residual setup with supportive activation functions.  ... 
arXiv:2207.05176v1 fatcat:k5ve53ppojcntkwlnn2mhzj53e
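
To make "modular concatenation instead of long cascaded connections" concrete, here is a hedged PyTorch toy in which several shallow modules run in parallel and their outputs are concatenated and fused; the module design, fusion by 1x1 convolution, and residual skip are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ConcatEnsemble(nn.Module):
    """Several shallow feature modules run in parallel; their outputs are
    concatenated and fused, rather than stacked into one long cascade."""
    def __init__(self, channels, n_modules=3):
        super().__init__()
        self.modules_list = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            for _ in range(n_modules)
        ])
        self.fuse = nn.Conv2d(n_modules * channels, channels, kernel_size=1)

    def forward(self, x):
        feats = torch.cat([m(x) for m in self.modules_list], dim=1)
        return x + self.fuse(feats)          # residual connection to the input

net = ConcatEnsemble(8)
print(net(torch.randn(1, 8, 16, 16)).shape)  # torch.Size([1, 8, 16, 16])
```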

Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer [article]

Sergey Zagoruyko, Nikos Komodakis
2017 arXiv   pre-print
CNN network by forcing it to mimic the attention maps of a powerful teacher network.  ...  To that end, we propose several novel methods of transferring attention, showing consistent improvement across a variety of datasets and convolutional neural network architectures.  ...  Visualizing attention maps in deep convolutional neural networks is an open problem.  ... 
arXiv:1612.03928v3 fatcat:vz5we7vsrbhatjrq723jobandy
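
Activation-based attention transfer is commonly implemented by comparing channel-pooled, normalized attention maps of student and teacher features. The sketch below shows that general recipe; it is a hedged reconstruction from the abstract, not necessarily the exact loss used in the paper.

```python
import torch
import torch.nn.functional as F

def attention_map(feat):
    """Spatial attention map: sum of squared activations over the channel
    dimension, flattened and L2-normalized (one common activation-based variant)."""
    a = feat.pow(2).sum(dim=1).flatten(1)          # (N, H*W)
    return F.normalize(a, dim=1)

def attention_transfer_loss(student_feat, teacher_feat):
    """Penalize the distance between student and teacher attention maps."""
    return (attention_map(student_feat) - attention_map(teacher_feat)).pow(2).mean()

s = torch.randn(4, 32, 14, 14, requires_grad=True)   # student activations
t = torch.randn(4, 64, 14, 14)                        # teacher activations (channel counts may differ)
loss = attention_transfer_loss(s, t)
loss.backward()
print(loss.item())
```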

Dilated Deep Residual Network for Image Denoising [article]

Tianyang Wang, Mingxuan Sun, Kaoning Hu
2017 arXiv   pre-print
Variations of deep neural networks such as convolutional neural network (CNN) have been successfully applied to image denoising.  ...  Specifically, we enlarge receptive field by adopting dilated convolution in residual network, and the dilation factor is set to a certain value.  ...  Variations of deep neural networks such as convolutional neural network (CNN) have been successfully applied to image denoising [5] , [6] .  ... 
arXiv:1708.05473v3 fatcat:zrws7qe7jjbyre37kakd5ogcoi
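
Below is a minimal PyTorch sketch of the core ingredient named in the snippet: a residual block whose middle convolution is dilated to enlarge the receptive field. The layer count and the dilation factor of 2 are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """Residual block for denoising in which the middle convolution uses a
    dilation factor to enlarge the receptive field without extra parameters."""
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            # dilation=2 with padding=2 keeps the spatial size while a 3x3
            # kernel now covers a 5x5 neighborhood.
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

block = DilatedResidualBlock(16)
print(block(torch.randn(1, 16, 40, 40)).shape)  # torch.Size([1, 16, 40, 40])
```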

Spatial Channel Attention for Deep Convolutional Neural Networks

Tonglai Liu, Ronghai Luo, Longqin Xu, Dachun Feng, Liang Cao, Shuangyin Liu, Jianjun Guo
2022 Mathematics  
The proposed attention mechanism can be seamlessly integrated into any convolutional neural network since it is a lightweight general module.  ...  Recently, the attention mechanism combining spatial and channel information has been widely used in various deep convolutional neural networks (CNNs), proving its great potential in improving model performance.  ... 
doi:10.3390/math10101750 fatcat:lluoelwotve77hfrsfmc5cdg4m
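
As a hedged illustration of a lightweight module combining channel and spatial attention that can be dropped into any CNN, the sketch below follows a common CBAM-like design; it shows the general idea only and is not the paper's exact mechanism.

```python
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    """Reweight a feature map with channel attention (from global pooling)
    followed by spatial attention (from a channel-pooled map)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)                         # channel attention
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial_conv(pooled)                # spatial attention

attn = SpatialChannelAttention(32)
print(attn(torch.randn(2, 32, 28, 28)).shape)  # torch.Size([2, 32, 28, 28])
```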

Sparsely Aggregated Convolutional Networks [article]

Ligeng Zhu and Ruizhi Deng and Michael Maire and Zhiwei Deng and Greg Mori and Ping Tan
2018 arXiv   pre-print
We explore a key architectural aspect of deep convolutional neural networks: the pattern of internal skip connections used to aggregate outputs of earlier layers for consumption by deeper layers.  ...  This is a primary reason for the widespread adoption of residual networks, which aggregate outputs via cumulative summation.  ...  Highway Networks [14] and ResNets [6] create shortcut connections between layers with an identity mapping and are among the first works that successfully trained convolutional neural networks with  ... 
arXiv:1801.05895v1 fatcat:saasbivlhjanpo62y2nso745vy
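
The snippet contrasts dense aggregation (every layer consuming all earlier outputs) with sparser connection patterns. A tiny Python sketch of one plausible sparse pattern, exponentially spaced offsets, is shown below; the specific offset rule is an assumption for illustration.

```python
def sparse_aggregation_sources(layer_index):
    """Indices of earlier layers whose outputs layer `layer_index` would
    aggregate under an exponential (sparse) connection pattern: offsets of
    1, 2, 4, 8, ... instead of connecting to every predecessor."""
    sources, offset = [], 1
    while offset <= layer_index:
        sources.append(layer_index - offset)
        offset *= 2
    return sources

for i in range(1, 10):
    print(i, sparse_aggregation_sources(i))
# Layer 8 aggregates layers [7, 6, 4, 0]: log-many inputs instead of all eight.
```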

Performance Guaranteed Network Acceleration via High-Order Residual Quantization [article]

Zefan Li, Bingbing Ni, Wenjun Zhang, Xiaokang Yang, Wen Gao
2017 arXiv   pre-print
In particular, the proposed scheme recursively performs residual quantization and yields a series of binary input images with decreasing magnitude scales.  ...  Input binarization has been shown to be an effective way to accelerate networks.  ...  The idea of BWN is to constrain the weights of a convolutional neural network to binary values.  ... 
arXiv:1708.08687v1 fatcat:cv2csvlwbbanjmzu3mgwnehgku
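
The recursion described in the abstract, repeatedly binarizing the current residual so that each stage gets a smaller magnitude scale, can be sketched in a few lines of NumPy. The mean-absolute-value scale and the fixed order are assumptions; this is not the full HORQ scheme.

```python
import numpy as np

def residual_binary_quantization(x, order=3):
    """Recursively approximate a real-valued input by a sum of scaled binary
    tensors: at each step the current residual is binarized with its mean
    magnitude as the scale, and the remaining residual shrinks."""
    residual = x.astype(np.float64)
    scales, binaries = [], []
    for _ in range(order):
        beta = np.abs(residual).mean()       # magnitude scale for this order
        b = np.where(residual >= 0, 1.0, -1.0)
        scales.append(beta)
        binaries.append(b)
        residual = residual - beta * b       # pass the residual to the next order
    approx = sum(beta * b for beta, b in zip(scales, binaries))
    return scales, binaries, approx

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 4))
scales, _, approx = residual_binary_quantization(x, order=3)
print("scales (decreasing):", np.round(scales, 3))
print("mean abs error:", np.abs(x - approx).mean())
```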

Neural Networks for Lorenz Map Prediction: A Trip Through Time [article]

Denisa Roberts
2020 arXiv   pre-print
The article is a reflection on the evolution of neural networks with respect to prediction performance on this canonical task.  ...  In this article, the Lorenz dynamical system is revived and revisited, and current state-of-the-art results for one-step-ahead forecasting of Lorenz trajectories are published.  ...  In the convolutional neural network case, the objective also includes an L2 penalty term on the weights.  ... 
arXiv:1903.07768v5 fatcat:r6ey47umtjc7hnulqai23c66sm
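
For context, the Lorenz system itself is a small set of ODEs, and one-step-ahead forecasting data can be generated directly from an integrated trajectory. The sketch below uses a plain Euler integrator with the classical parameters; the step size, trajectory length, and integrator are illustrative choices, not the article's setup.

```python
import numpy as np

def lorenz_trajectory(n_steps=2000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate dx/dt = sigma(y-x), dy/dt = x(rho-z)-y, dz/dt = xy - beta*z with
    Euler steps and return (state, next_state) pairs for one-step-ahead forecasting."""
    states = np.empty((n_steps, 3))
    x, y, z = 1.0, 1.0, 1.0
    for i in range(n_steps):
        states[i] = (x, y, z)
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return states[:-1], states[1:]           # inputs and one-step-ahead targets

inputs, targets = lorenz_trajectory()
print(inputs.shape, targets.shape)           # (1999, 3) (1999, 3)
```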

Widening and Squeezing: Towards Accurate and Efficient QNNs [article]

Chuanjian Liu, Kai Han, Yunhe Wang, Hanting Chen, Qi Tian, Chunjing Xu
2020 arXiv   pre-print
Then, a compact quantized neural network with sufficient representation ability will be established.  ...  Quantization neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full precision  ...  In particular, binary neural networks with weights and activations constrained to +1 or −1 have many advantages.  ... 
arXiv:2002.00555v2 fatcat:cfksfi7yg5cyxocragvccdf25a

Perturbative Neural Networks [article]

Felix Juefei-Xu, Vishnu Naresh Boddeti, Marios Savvides
2018 arXiv   pre-print
Empirically, deep neural networks with perturbation layers, called Perturbative Neural Networks (PNNs), in lieu of convolutional layers perform comparably with standard CNNs on a range of visual datasets  ...  The perturbation layer does away with convolution in the traditional sense and instead computes its response as a weighted linear combination of non-linearly activated additive noise perturbed inputs.  ...  Networks with binary weights [3, 2, 23], networks with sparse convolutional weights [20, 21, 18], networks with efficient factorization of the convolutional weights [10, 15], and networks with a hybrid  ... 
arXiv:1806.01817v1 fatcat:fbxfwepy2ferjaszofn2igojvy
Showing results 1 — 15 out of 2,493 results