63,886 Hits in 5.1 sec

On 1/n neural representation and robustness [article]

Josue Nassar, Piotr Aleksander Sokol, SueYeon Chung, Kenneth D. Harris, Il Memming Park
2020 arXiv   pre-print
We use adversarial robustness to probe Stringer et al.'s theory regarding the causal role of a 1/n covariance spectrum.  ...  A pressing question in these areas is understanding how the structure of the representation used by neural networks affects both their generalization, and robustness to perturbations.  ...
arXiv:2012.04729v1 fatcat:64n46oqxdzg63ouzhkmiscdx2u

Optimal linear compression under unreliable representation and robust PCA neural models

K.I. Diamantaras, K. Hornik, M.G. Strintzis
1999 IEEE Transactions on Neural Networks  
Finally, we show that standard Hebbian-type PCA learning algorithms are not optimally robust to noise, and propose a new Hebbian-type learning algorithm which is optimally robust in the NPCA sense.  ...  Abstract: In a typical linear data compression system the representation variables resulting from the coding operation are assumed totally reliable and therefore the solution in the mean-squared-error  ...  i = 1, ..., n, and we define the input and output vectors x = [x_1, ..., x_n], y = [y_1, ..., y_n].  ...
doi:10.1109/72.788657 pmid:18252619 fatcat:ijf2z2g6jnfvxkrl3ydagvvlje

Analysis of Spatio-temporal Representations for Robust Footstep Recognition with Deep Residual Neural Networks

Omar Costilla Reyes, Ruben Vera-Rodriguez, Patricia Scully, Krikor B. Ozanyan
2018 IEEE Transactions on Pattern Analysis and Machine Intelligence  
We perform a feature analysis of deep residual neural networks showing effective clustering of clients' footstep data and provide insights into the feature learning process.  ...  We propose spatio-temporal footstep representations from floor-only sensor data in advanced computational models for automatic biometric verification.  ...  Foster, Hujun Yin and Bernardino Romera-Paredes for useful discussions. O. Costilla-Reyes would like to acknowledge CONACyT (Mexico) for a studentship.  ...
doi:10.1109/tpami.2018.2799847 pmid:29994418 fatcat:7vra2lo26rennoamvuo6fuh4vu

Incomplete Graph Representation and Learning via Partial Graph Neural Networks [article]

Bo Jiang, Ziyan Zhang
2021 arXiv   pre-print
To address this problem, we develop novel partial aggregation based GNNs, named Partial Graph Neural Networks (PaGNNs), for attribute-incomplete graph representation and learning.  ...  Graph Neural Networks (GNNs) are gaining increasing attention on graph data learning tasks in recent years.  ...  In this paper, we use G(A, H, M) to represent this incomplete graph, where M ∈ {0, 1}^{n×d} denotes the indicative matrix in which M_ij = 0 indicates that H_ij is missing/unknown and M_ij = 1 otherwise, as  ...
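A minimal sketch of the masked aggregation idea in the entry above, not the authors' code: each node averages each feature only over neighbors whose indicative-matrix entry marks that feature as observed. The mean aggregator and the zero fallback when no neighbor observes a feature are illustrative assumptions.

```python
# Partial (masked) neighbor aggregation on an attribute-incomplete graph
# G(A, H, M), where M[k][j] = 0 marks entry H[k][j] as missing.
def partial_aggregate(A, H, M):
    n, d = len(H), len(H[0])
    out = [[0.0] * d for _ in range(n)]
    for i in range(n):
        for j in range(d):
            # Average feature j over neighbors k of i that observe it.
            vals = [H[k][j] for k in range(n) if A[i][k] and M[k][j]]
            out[i][j] = sum(vals) / len(vals) if vals else 0.0
    return out

A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]     # adjacency of a 3-node graph
H = [[1.0, 5.0], [3.0, 0.0], [2.0, 4.0]]  # features; H[1][1] is missing
M = [[1, 1], [1, 0], [1, 1]]              # 0 = missing entry
print(partial_aggregate(A, H, M))
```

Missing entries are simply excluded from the mean rather than imputed first, which is the distinguishing point of partial aggregation.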
arXiv:2003.10130v2 fatcat:2whdcqkyvrg7dprqlchn5rhmnq

A binary Hopfield network with 1/log(n) information rate and applications to grid cell decoding [article]

Ila Fiete, David J. Schwab, Ngoc M. Tran
2014 arXiv   pre-print
As an example, we apply our network to a binary neural decoder of the grid cell code to attain information rate 1/log(n).  ...  A Hopfield network is an auto-associative, distributive model of neural memory storage and retrieval.  ...  Thus, the probability that no cluster will switch to the wrong state is at least (1 − n^{-1})^{n/log(n)} → 1 as n → ∞. A neural grid cell decoder: Recall the discrete grid cell code defined by (2).  ...
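Reading the probability bound in the entry above as (1 − 1/n)^{n/log(n)} (one plausible reading of the garbled excerpt), a quick numeric check confirms it tends to 1 as n grows; the reading itself is an assumption here.

```python
import math

def no_switch_bound(n):
    # Lower bound (1 - 1/n)^(n / log n) on the probability that no
    # cluster switches to the wrong state, under the assumed reading.
    return (1.0 - 1.0 / n) ** (n / math.log(n))

for n in (10, 1000, 10**6):
    print(n, no_switch_bound(n))
```

The exponent n/log(n) grows slower than the 1/(1 − 1/n) decay compensates for, so the bound increases toward 1, though slowly (it is still below 0.95 at n = 10^6).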
arXiv:1407.6029v1 fatcat:6tmx5nfih5g6zfhkfwjokr7kxe

Advances in Distributed Computing and Artificial Intelligence Journal, 2013, vol. 1, n. 4, pp. 1-66

Ediciones Universidad de Salamanca
2013 Advances in Distributed Computing and Artificial Intelligence Journal  
Its application in distributed environments, such as the Internet, electronic commerce, mobile communications, wireless devices, distributed computing and so on, is increasing and becoming an element  ...  and artificial intelligence, and their application in different areas.  ...  These approaches are focused on the retrieval mechanisms and the associated case representation and indexing.  ...
doaj:5945a03d7dab4b20be1b3dbd6fef9c7c fatcat:eyn3fum7eja5zp3uugateci4km

Effects of Loss Functions And Target Representations on Adversarial Robustness [article]

Sean Saito, Sujoy Roy
2020 arXiv   pre-print
In this work, we present interesting experimental results that suggest the importance of considering other loss functions and target representations, specifically, (1) training on mean-squared error and  ...  Understanding and evaluating the robustness of neural networks under adversarial settings is a subject of growing interest.  ...  s.t. x ∈ [0, 1]^n. The above formulation aims to minimize two objectives: the left term measures the distance (L2 norm) between the input and the adversarial example, while the right term represents the  ...
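A sketch of the two-term objective described in the snippet above: an L2 distance between the input and the adversarial example, plus a weighted model-loss term, with the adversarial example projected onto [0, 1]^n. The toy loss function and the weighting constant c are illustrative assumptions, not the paper's setup.

```python
import math

def adversarial_objective(x, x_adv, model_loss, c=1.0):
    # Left term: L2 distance ||x - x_adv||_2; right term: weighted loss.
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x_adv)))
    return dist + c * model_loss(x_adv)

def clip_01(x):
    # Projection enforcing the constraint x_adv in [0, 1]^n.
    return [min(1.0, max(0.0, v)) for v in x]

x = [0.2, 0.8]
x_adv = clip_01([0.2 + 0.3, 0.8 - 1.0])  # perturb, then project to [0, 1]^2
loss = lambda z: (sum(z) - 1.0) ** 2     # stand-in differentiable loss
print(adversarial_objective(x, x_adv, loss))
```

An attack would minimize this objective over x_adv; here we only evaluate it once to show how the two terms combine.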
arXiv:1812.00181v3 fatcat:g3m3qzecz5cm5on4judfalkdta

Anomaly Detection using One-Class Neural Networks [article]

Raghavendra Chalapathy (University of Sydney and Capital Markets Cooperative Research Centre), Aditya Krishna Menon (Data61/CSIRO and the Australian National University), Sanjay Chawla (Qatar Computing Research Institute)
2019 arXiv   pre-print
We propose a one-class neural network (OC-NN) model to detect anomalies in complex data sets.  ...  A comprehensive set of experiments demonstrates that on complex data sets (like CIFAR and GTSRB), OC-NN performs on par with state-of-the-art methods and outperformed conventional shallow methods in some  ...  Algorithm 1: one-class neural network (OC-NN) algorithm. Input: a set of points X_{n:}, n = 1, ..., N. Output: a set of decision scores S_{n:} = ŷ_{n:}, n = 1, ..., N.  ...
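A hedged sketch of an OC-NN-style decision score, not the authors' code: a one-hidden-layer scoring function ŷ = w·g(Vx) − r, flagging inputs whose score falls below the learned threshold r as anomalies. The particular weights, the sigmoid activation, and the threshold value here are illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def decision_score(x, V, w, r):
    # One hidden layer: score = w . sigmoid(V x) - r.
    hidden = [sigmoid(sum(v * xi for v, xi in zip(row, x))) for row in V]
    return sum(wi * hi for wi, hi in zip(w, hidden)) - r

V = [[1.0, -1.0], [0.5, 0.5]]  # hidden-layer weights (illustrative)
w = [1.0, 1.0]                 # output weights (illustrative)
r = 0.9                        # learned threshold (illustrative)
x = [0.2, 0.1]
s = decision_score(x, V, w, r)
print("anomaly" if s < 0 else "normal")
```

In the full method the network weights and r are learned jointly from one-class (normal-only) data; this sketch only shows how a score is turned into a decision.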
arXiv:1802.06360v2 fatcat:vtqz2ym2uneb3kgdd3oouq5jia

Robust Face Recognition Based on Convolutional Neural Network

Ying Xu, Hui Ma, Lu Cao, He Cao, Yikui Zhai, Vincenzo Piuri, Fabio Scotti
2018 DEStech Transactions on Computer Science and Engineering  
The network is trained on the self-expanding CASIA-WebFace database and tested on the Labeled Faces in the Wild (LFW) database.  ...  Moreover, the designed model, named the Lightened Convolutional Neural Network model, and the center loss layer are combined jointly to enhance the discriminative power of the designed network features.  ...  Experiments on LFW: the improved Lightened Convolutional Neural Network model is evaluated by the (1:1) protocol [7] and the probe-gallery identification protocol (1:N) [8] on unconstrained-environment face recognition  ...
doi:10.12783/dtcse/icmsie2017/18635 fatcat:yy3lj5ad4valfglet2rkokprsq

On Tractable Representations of Binary Neural Networks [article]

Weijia Shi and Andy Shih and Adnan Darwiche and Arthur Choi
2020 arXiv   pre-print
First, we consider the task of verifying the robustness of a neural network, and show how we can compute the expected robustness of a neural network, given an OBDD/SDD representation of it.  ...  We consider the compilation of a binary neural network's decision function into tractable representations such as Ordered Binary Decision Diagrams (OBDDs) and Sentential Decision Diagrams (SDDs).  ...  Acknowledgements This work has been partially supported by NSF grant #ISS-1910317, ONR grant #N00014-18-1-2561, and DARPA XAI grant #N66001-17-2-4032.  ... 
arXiv:2004.02082v2 fatcat:ujwdghblefe77oiczmdhuql5uy

Robust Feature Extraction for Speaker Recognition Based on Constrained Nonnegative Tensor Factorization

Qiang Wu, Li-Qing Zhang, Guang-Chuan Shi
2010 Journal of Computer Science and Technology  
and find a robust sparse representation for speech signal.  ...  A novel feature extraction framework based on the cortical representation in primary auditory cortex (A1) is proposed for robust speaker recognition.  ...  (2RM + Σ_{d=1}^{M} N_d) Π_{i=1}^{M} N_i); PARAFAC: O(TMR Π_{i=1}^{M} N_i); Tucker: O(T(M − 1) Π_{k=1}^{M} J_k Π_{i=1}^{M} N_i). Cortical Representation Based on Tensor Structure: In this section, we employ multi-resolution spectrotemporal  ...
doi:10.1007/s11390-010-9365-6 fatcat:6h4wzfjq7ng6no3477k6e7yac4

Noise Robust Classification Based On Spread Spectrum [chapter]

Joern David
2009 Proceedings of the 2009 SIAM International Conference on Data Mining  
In this paper we develop a robust classification mechanism based on a connectionist model in order to learn and classify objects from arbitrary feature spaces.  ...  robustness against noisy or incomplete data.  ...  Φ_{c,c}(n) = (1/N) Σ_{m=1}^{N} c[m] · c[m + n]. A spreading code c holds a good autocorrelation if its inner product Φ_{c,c}(0) with itself is high and Φ_{c,c}(n) is low for all shifts n = 1, ..., N − 1.  ...
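The autocorrelation Φ_{c,c}(n) from the snippet above can be computed directly; the wrap-around (cyclic) indexing for c[m + n] is an assumption, since the excerpt does not say how indices past the end of the code are handled.

```python
def autocorrelation(c, n):
    # Normalized cyclic autocorrelation of spreading code c at shift n:
    # Phi_{c,c}(n) = (1/N) * sum_{m=1}^{N} c[m] * c[m + n].
    N = len(c)
    return sum(c[m] * c[(m + n) % N] for m in range(N)) / N

# A +/-1 m-sequence-like code: peak at shift 0, small values elsewhere.
code = [1, 1, 1, -1, -1, 1, -1]  # length-7 example (illustrative only)
print(autocorrelation(code, 0))  # peak value 1.0 at zero shift
print(autocorrelation(code, 3))  # off-peak value -1/7
```

The high zero-shift peak and uniformly low off-peak values are exactly the "good autocorrelation" property the snippet describes.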
doi:10.1137/1.9781611972795.57 dblp:conf/sdm/David09 fatcat:mhtjazqc4zax7dyuhedvkejdoa

Significance measure of Local Cluster Neural Networks

Ralf Eickhoff, Joaquin Sitte
2007 Neural Networks (IJCNN), International Joint Conference on  
In this paper we analyze the robustness of Local Cluster Neural Networks and determine upper bounds on the mean square error for noise contaminated weights and inputs.  ...  Artificial neural networks are intended to be used in future nanoelectronics since their biological examples seem to be robust to noise.  ...  Each gradient can be bounded: ‖∇_w y_m(ξ)‖ = Σ_{μ=1}^{m} Σ_{i=1}^{n} Σ_{j=1}^{n} α_μ ∂L_μ/∂w_{μij} (21) ≤ (k_1 k_2 / 8) Σ_{μ=1}^{m} Σ_{i=1}^{n} Σ_{j=1}^{n} |α_μ| |x_j − r_{μj}| (22) = (k_1 k_2 / 8) Σ_{μ=1}^{m} n |α_μ| ‖x − r_μ‖_1 (23), and ‖∇_r y_m(ξ)‖ ≤ Σ_{μ=1}^{m}  ...
doi:10.1109/ijcnn.2007.4370950 dblp:conf/ijcnn/EickhoffS07 fatcat:yhiqertgr5h6hgwcc2azlmk6ca

A Note on Lewicki-Sejnowski Gradient for Learning Overcomplete Representations

Zhaoshui He, Shengli Xie, Liqing Zhang, Andrzej Cichocki
2008 Neural Computation  
Overcomplete representations have greater robustness in noisy environments and also have greater flexibility in matching structure in the data.  ...  Lewicki and Sejnowski (2000) proposed an efficient extended natural gradient for learning the overcomplete basis and developed an overcomplete representation approach.  ...  To get robust solutions, we set the initial basis matrix A^(0) such that ‖a_i^(0)‖_2^2 = 1, i = 1, ..., n in the following examples.  ...
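The initialization in the snippet above, normalizing each column a_i of the initial basis matrix so that ‖a_i‖_2^2 = 1, can be sketched as follows; the example matrix is illustrative, not from the paper.

```python
import math

def normalize_columns(A):
    # Scale each column of A to unit L2 norm: ||a_i||_2^2 = 1 for all i.
    n_rows, n_cols = len(A), len(A[0])
    out = [row[:] for row in A]
    for j in range(n_cols):
        norm = math.sqrt(sum(A[i][j] ** 2 for i in range(n_rows)))
        for i in range(n_rows):
            out[i][j] = A[i][j] / norm
    return out

A0 = [[3.0, 0.0], [4.0, 2.0]]  # illustrative 2x2 initial basis
A0n = normalize_columns(A0)
print(A0n)  # each column now has unit L2 norm
```

Unit-norm columns remove the scale ambiguity between basis vectors and their coefficients, which is what makes the subsequent gradient learning well-conditioned.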
doi:10.1162/neco.2007.07-06-296 pmid:18047437 fatcat:jo5nauldirdhxhq4zut6i7bjau

On visual self-supervision and its effect on model robustness [article]

Michal Kucer, Diane Oyen, Garrett Kenyon
2021 arXiv   pre-print
Recent self-supervision methods have found success in learning feature representations that could rival ones from full supervision, and have been shown to be beneficial to the model in several ways: for  ...  example improving models robustness and out-of-distribution detection.  ...  from [15] in that they omit the averaging term sults can be found in the appendix. 1/N .  ... 
arXiv:2112.04367v1 fatcat:lgoa4vaozfhaxgipkvbyo7famq
Showing results 1 — 15 out of 63,886 results