16,641 Hits in 3.0 sec

Towards Distortion-Predictable Embedding of Neural Networks [article]

Axel Angel
2015 arXiv   pre-print
Our contribution takes a step towards embeddings where features of distorted inputs are related and can be derived from each other by the intensity of the distortion.  ...  Current research in Computer Vision has shown that Convolutional Neural Networks (CNN) give state-of-the-art performance in many classification tasks and Computer Vision problems.  ...  In this work, a few steps are presented towards predictable embeddings with respect to distortions, and a simple qualitative measure is presented to compare similar methods.  ...
arXiv:1508.00102v1 fatcat:giqhxsenpfbklkevy2vkkvedza
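A minimal sketch of the kind of qualitative measure the abstract alludes to: tracking how far an embedding drifts as the distortion intensity grows. The `embed_fn` and `distort_fn` callables and the intensity grid are illustrative placeholders, not the paper's actual measure.

```python
import numpy as np

def embedding_drift(embed_fn, images, distort_fn, intensities):
    """Mean L2 displacement of embeddings as distortion intensity grows.

    A smooth, monotone curve suggests that distorted features can be
    related back to the clean ones by the distortion intensity.
    """
    clean = embed_fn(images)                          # (N, D) clean embeddings
    drift = []
    for t in intensities:
        distorted = embed_fn(distort_fn(images, t))   # (N, D) at intensity t
        drift.append(np.linalg.norm(distorted - clean, axis=1).mean())
    return np.array(drift)
```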

Bayesian Neural Networks for Reversible Steganography [article]

Ching-Chun Chang
2022 arXiv   pre-print
A fundamental pillar of reversible steganography is predictive modelling which can be realised via deep neural networks.  ...  Bayesian neural networks can be regarded as self-aware machinery; that is, a machine that knows its own limitations.  ...  It has been reported that deep neural networks can serve the role of predictive models [23] - [25] .  ... 
arXiv:2201.02478v1 fatcat:3sgon3ktundc5pfqqzixdddhxa
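One common way to make a predictive network "aware of its own limitations" is Monte-Carlo dropout, which approximates Bayesian inference by averaging stochastic forward passes. The sketch below assumes a PyTorch `model` containing dropout layers and is not necessarily the approximation used in the paper.

```python
import torch

def mc_dropout_predict(model, x, n_samples=30):
    """Predictive mean and per-pixel uncertainty via MC dropout."""
    model.train()                      # keep dropout stochastic at test time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    mean = samples.mean(dim=0)         # predicted intensities
    std = samples.std(dim=0)           # high std = the model "knows" it is unsure
    return mean, std
```

In a reversible-steganography pipeline, pixels with high predictive uncertainty would typically be skipped or given a smaller embedding payload.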

Bayesian Neural Networks for Reversible Steganography

Ching-Chun Chang
2022 IEEE Access  
A fundamental pillar of reversible steganography is predictive modelling which can be realised via deep neural networks.  ...  Bayesian neural networks bring a probabilistic perspective to deep learning and can be regarded as self-aware intelligent machinery; that is, a machine that knows its own limitations.  ...  It has been reported that deep neural networks can be used as advanced predictive models [22] - [24] .  ... 
doi:10.1109/access.2022.3159911 fatcat:gcov63thsfgvtg7or4cfmctruy

Improving the Robustness of Deep Neural Networks via Stability Training [article]

Stephan Zheng, Yang Song, Thomas Leung, Ian Goodfellow
2016 arXiv   pre-print
In this paper we address the issue of output instability of deep neural networks: small perturbations in the visual input can significantly distort the feature embeddings and output of a neural network  ...  We present a general stability training method to stabilize deep networks against small input distortions that result from various types of common image processing, such as compression, rescaling, and  ...  Applying stability training to the Inception network makes the class predictions of the network more robust to input distortions.  ... 
arXiv:1604.04326v1 fatcat:kfqoovo4x5ae7nq6v7au3efkd4
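The idea of stability training can be sketched as a task loss on the clean input plus a penalty that ties the output on a distorted copy to the output on the clean copy. The KL form below is one variant for classification; the weight `alpha` and the choice of divergence are illustrative rather than the paper's exact settings.

```python
import torch.nn.functional as F

def stability_training_loss(model, x_clean, x_distorted, labels, alpha=0.01):
    """Cross-entropy on the clean input plus a stability term that makes
    the prediction on the distorted input track the clean prediction."""
    logits_clean = model(x_clean)
    logits_dist = model(x_distorted)
    task_loss = F.cross_entropy(logits_clean, labels)
    stability = F.kl_div(F.log_softmax(logits_dist, dim=1),
                         F.softmax(logits_clean, dim=1),
                         reduction="batchmean")
    return task_loss + alpha * stability
```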

Robustness of Neural Networks against Storage Media Errors [article]

Minghai Qin and Chao Sun and Dejan Vucinic
2017 arXiv   pre-print
We study the trade-offs between storage/bandwidth and prediction accuracy of neural networks that are stored in noisy media.  ...  We study the robustness of deep neural networks when bit errors exist but ECCs are turned off for different neural network models and datasets.  ...  [11] provides adversarial attack algorithms on input and defensive distillation towards evaluating the robustness of neural networks.  ... 
arXiv:1709.06173v1 fatcat:nssvysrbsrbgtjq6cbtzzbuygq
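To probe this kind of robustness experimentally, one can inject bit errors directly into the stored float32 weights and re-measure accuracy. The NumPy sketch below flips each stored bit independently with a given probability; the function name and this simple error model are assumptions, not the paper's protocol.

```python
import numpy as np

def flip_random_bits(weights, bit_error_rate, rng=None):
    """Flip each bit of the float32 weight representation independently
    with probability `bit_error_rate` (a crude storage-error model)."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.ascontiguousarray(weights, dtype=np.float32)
    bits = w.view(np.uint32)
    mask = np.zeros_like(bits)
    for b in range(32):                 # build a random XOR mask bit by bit
        flips = rng.random(bits.shape) < bit_error_rate
        mask |= flips.astype(np.uint32) << np.uint32(b)
    return (bits ^ mask).view(np.float32)
```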

Deep Learning for Predictive Analytics in Reversible Steganography [article]

Ching-Chun Chang, Xu Wang, Sisheng Chen, Isao Echizen, Victor Sanchez, Chang-Tsun Li
2022 arXiv   pre-print
The objective of this study is to evaluate the impacts of different training configurations on predictive neural networks and to provide practical insights.  ...  Therefore, instead of reinventing the wheel, we can adopt neural network models originally designed for such computer vision tasks to perform intensity prediction.  ...  The authors would like to thank the anonymous reviewers and the associate editor for their insightful comments and valuable suggestions that helped improve the quality of the article.  ... 
arXiv:2106.06924v2 fatcat:frp7oyqhinajlflhtsvvudxss4
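Intensity prediction here is essentially an image-restoration task: predict the masked half of a checkerboard-partitioned image from the visible half. The tiny PyTorch module below is a stand-in for the computer-vision backbones the study evaluates; the two-channel input (masked image plus mask) is an assumption for illustration.

```python
import torch
import torch.nn as nn

class IntensityPredictor(nn.Module):
    """Minimal convolutional predictor for masked pixel intensities."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, masked_image, mask):
        # concatenate the visible context and the mask as two input channels
        return self.net(torch.cat([masked_image, mask], dim=1))
```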

Improving the Robustness of Deep Neural Networks via Stability Training

Stephan Zheng, Yang Song, Thomas Leung, Ian Goodfellow
2016 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
output instability of deep neural networks: small perturbations in the visual input can significantly distort the feature embeddings and output of a neural network.  ...  Deep neural networks are easily fooled: High confidence predictions for unrecognizable images.  ...
doi:10.1109/cvpr.2016.485 dblp:conf/cvpr/ZhengSLG16 fatcat:rhlsyrmek5durapxai52e4ortm

Task and Model Agnostic Adversarial Attack on Graph Neural Networks [article]

Kartik Sharma, Samidha Verma, Sourav Medya, Sayan Ranu, Arnab Bhattacharya
2021 arXiv   pre-print
Graph neural networks (GNNs) have witnessed significant adoption in the industry owing to impressive performance on various predictive tasks. Performance alone, however, is not enough.  ...  The proposed algorithm, GRAND (Graph Attack via Neighborhood Distortion) shows that distortion of node neighborhoods is effective in drastically compromising prediction performance.  ...  Towards more practical adversarial attacks on graph Lin Wang, Jun Zhu, and Le Song. Adversarial attack on neural networks.  ... 
arXiv:2112.13267v1 fatcat:ixldolkfkfctrbk33akaorjqsu
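GRAND's central observation is that distorting a node's neighbourhood degrades GNN predictions. The sketch below is only a random rewiring baseline on a dense adjacency matrix that conveys the notion of neighbourhood distortion; the actual attack selects perturbations through a task- and model-agnostic optimisation.

```python
import numpy as np

def distort_neighborhood(adj, node, budget, rng=None):
    """Randomly rewire up to `budget` edges incident to `node`:
    drop an existing neighbour and connect a random non-neighbour."""
    rng = np.random.default_rng() if rng is None else rng
    adj = adj.copy()
    n = adj.shape[0]
    for _ in range(budget):
        neighbours = np.flatnonzero(adj[node])
        others = np.setdiff1d(np.arange(n), np.append(neighbours, node))
        if len(neighbours) == 0 or len(others) == 0:
            break
        drop, add = rng.choice(neighbours), rng.choice(others)
        adj[node, drop] = adj[drop, node] = 0   # remove one neighbour
        adj[node, add] = adj[add, node] = 1     # attach a new one
    return adj
```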

2020 Index IEEE Transactions on Circuits and Systems for Video Technology Vol. 30

2020 IEEE transactions on circuits and systems for video technology (Print)  
Tang, F., +, TCSVT Dec. 2020 4739-4754 METEOR: Measurable Energy Map Toward the Estimation of Resampling Rate via a Convolutional Neural Network.  ...  Liu, R., +, TCSVT Dec. 2020 4861-4875 History METEOR: Measurable Energy Map Toward the Estimation of Resampling Rate via a Convolutional Neural Network.  ...  A Memory-Efficient Hardware Architecture for Connected Component Labeling in Embedded System.  ... 
doi:10.1109/tcsvt.2020.3043861 fatcat:s6z4wzp45vfflphgfcxh6x7npu

Problem-Agnostic Speech Embeddings for Multi-Speaker Text-to-Speech with SampleRNN [article]

David Álvarez, Santiago Pascual, Antonio Bonafonte
2019 arXiv   pre-print
We finally show that, with a small increase in speech duration in the embedding extractor, we dramatically reduce the spectral distortion to close the gap towards the target identities.  ...  In this paper we first propose the use of problem-agnostic speech embeddings in a multi-speaker acoustic model for TTS based on SampleRNN.  ...  We have concluded that the use of these new embeddings helps the network converge more quickly and obtain better objective results in terms of likelihood and spectral distortion.  ...
arXiv:1906.00733v3 fatcat:2btf7fdazrhljajubcssloneo4
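Two small utilities make the reported effect concrete: an utterance-level embedding obtained by averaging frame-level embeddings over a chosen duration, and the mel-cepstral distortion commonly used as an objective spectral-distortion measure. The frame rate, coefficient layout and function names are assumptions for illustration.

```python
import numpy as np

def utterance_embedding(frame_embeddings, seconds, frames_per_second=100):
    """Average frame-level embeddings over the first `seconds` of speech;
    longer excerpts give the extractor more evidence about the speaker."""
    n_frames = int(seconds * frames_per_second)
    return frame_embeddings[:n_frames].mean(axis=0)

def mel_cepstral_distortion(mc_ref, mc_syn):
    """Frame-averaged mel-cepstral distortion in dB (c0 excluded),
    a standard proxy for spectral distortion."""
    diff = mc_ref - mc_syn                        # (frames, coefficients)
    return float(np.mean(10.0 / np.log(10.0)
                         * np.sqrt(2.0 * np.sum(diff ** 2, axis=1))))
```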

Preventing Copyrights Infringement of Images by Watermarking in Transform Domain Using Full Counter Propagation Neural Network

Chitralekha Dwivedi
2012 International Journal of Information Sciences and Techniques  
Earlier techniques embedded the watermark in the image itself, but it has been observed that the synapses of a neural network provide a better platform for reducing the distortion and increasing the message capacity  ...  The fast and effective full counter propagation neural network helps in successful watermark embedding without deteriorating the image perception.  ...  Dinu Coltuc [9] reduced the distortions due to watermarking by embedding the expanded difference into the current pixel and its prediction context.  ...
doi:10.5121/ijist.2012.2602 fatcat:xtpryba3xfaxteg2m5a4qhvluy
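The Coltuc technique cited in the snippet belongs to the family of prediction-error expansion schemes. A textbook single-bit version is sketched below (overflow/underflow handling omitted); it illustrates the expansion idea only and is not the full counter propagation method of the paper itself.

```python
def embed_bit(pixel, predicted, bit):
    """Prediction-error expansion: hide one bit in the expanded error."""
    error = pixel - predicted
    return predicted + 2 * error + bit

def extract_bit(marked_pixel, predicted):
    """Recover the hidden bit and restore the original pixel exactly."""
    expanded = marked_pixel - predicted
    bit = expanded & 1
    return bit, predicted + (expanded >> 1)
```

Because the expansion is invertible, extraction returns both the payload bit and the exact original pixel, which is what makes the scheme reversible.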

Automatic Prediction of Speech Intelligibility based on X-vectors in the context of Head and Neck Cancer

Sebastião Quintas, Julie Mauclair, Virginie Woisard, Julien Pinquier
2020 Zenodo  
In this paper we investigate the automatic prediction of speech intelligibility using the x-vector paradigm, in the context of head and neck cancer.  ...  Our approach also showed the possibility of achieving very high correlation values (ρ = 0.95) when adapting the evaluation to each individual speaker, yielding a significantly more accurate prediction  ...  Shallow Neural Network: As previously stated, to predict an intelligibility score based on the embedding representations, a shallow neural network was modeled to fit our data.  ...
doi:10.5281/zenodo.4263951 fatcat:7cs33dqb3zcndggvtmzzpkbpai
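The "shallow neural network" on top of the embeddings can be approximated with a one-hidden-layer regressor. The scikit-learn sketch below assumes per-utterance x-vectors and matching intelligibility scores; the hidden-layer size and other hyperparameters are placeholders, not the authors' configuration.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_intelligibility_regressor(x_vectors, scores):
    """Map x-vector embeddings to a perceptual intelligibility score
    with a single-hidden-layer network."""
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0),
    )
    return model.fit(x_vectors, scores)
```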

Neural Video Compression using Spatio-Temporal Priors [article]

Haojie Liu, Tong Chen, Ming Lu, Qiu Shen, Zhan Ma
2019 arXiv   pre-print
All of these parts are connected and trained jointly towards the optimal rate-distortion performance.  ...  Spatial priors are generated using downscaled low-resolution features, while temporal priors (from previous reference frames and residuals) are captured using a convolutional neural network based long-short  ...  To improve the quality of warped frames, we propose to apply a processing network using ten residual blocks with embedded re-sampling to enlarge the receptive field, resulting in X̂_t^wp.  ...
arXiv:1902.07383v2 fatcat:brynmcohtzdtdo3nyymhsshubi
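"Trained jointly towards the optimal rate-distortion performance" usually means minimising a single Lagrangian of estimated bitrate and reconstruction distortion. A minimal PyTorch form is sketched below, with MSE as the distortion and `lmbda` as an assumed trade-off weight; the paper's actual rate estimate comes from its learned entropy model.

```python
import torch

def rate_distortion_loss(frame, reconstruction, bits_per_pixel, lmbda=0.01):
    """Joint objective: estimated rate plus weighted distortion."""
    distortion = torch.mean((frame - reconstruction) ** 2)   # MSE distortion
    return bits_per_pixel + lmbda * distortion
```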

Hardware based Spatio-Temporal Neural Processing Backend for Imaging Sensors: Towards a Smart Camera [article]

Samiran Ganguly, Yunfei Gu, Mircea R. Stan, Avik W. Ghosh
2018 arXiv   pre-print
sensor materials, and inferencing and spatio-temporal pattern recognition capabilities of these networks with applications in object detection, motion tracking and prediction.  ...  We then show designs of unit hardware cells built using complementary metal-oxide semiconductor (CMOS) and emerging materials technologies for ultra-compact and energy-efficient embedded neural processors  ...  We described the three classes of neural networks used in the work and demonstrated learning and prediction tasks individually from these networks.  ... 
arXiv:1803.08635v1 fatcat:btfh4lpdmrh5fmdfrafcwgbdoq

Domain Aware Training for Far-Field Small-Footprint Keyword Spotting

Haiwei Wu, Yan Jia, Yuanfei Nie, Ming Li
2020 Interspeech 2020  
Our baseline system is built on a convolutional neural network trained with pooled data of both far-field and close-talking speech.  ...  To cope with the distortions, we develop three domain aware training systems, including the domain embedding system, the deep CORAL system, and the multi-task learning system.  ...  As for modeling, many structures based on Convolutional Neural Network (CNN) [3], Recurrent Neural Network (RNN), Convolutional Recurrent Neural Network (CRNN) [4], Long Short-Term Memory (LSTM) [5]  ...
doi:10.21437/interspeech.2020-1412 dblp:conf/interspeech/WuJNL20 fatcat:aghbj3lqjjefbpphkmt5q3np6u
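Of the three domain-aware systems, the deep CORAL variant is the easiest to sketch: it aligns the second-order statistics of close-talking (source) and far-field (target) feature batches. The PyTorch function below implements the standard CORAL loss and is an illustration, not the authors' exact configuration.

```python
import torch

def coral_loss(source_feats, target_feats):
    """Squared Frobenius distance between the feature covariances of the
    two domains, normalised as in Deep CORAL."""
    def covariance(f):
        f = f - f.mean(dim=0, keepdim=True)
        return (f.t() @ f) / (f.shape[0] - 1)
    d = source_feats.shape[1]
    diff = covariance(source_feats) - covariance(target_feats)
    return (diff ** 2).sum() / (4.0 * d * d)
```

In practice this term is added to the keyword-spotting task loss so that the shared feature extractor cannot drift apart across domains.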
Showing results 1 — 15 out of 16,641 results