5 Hits in 12.9 sec

Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?

Raja Giryes, Guillermo Sapiro, Alex M. Bronstein
2016 IEEE Transactions on Signal Processing  
Similar points at the input of the network are likely to have a similar output.  ...  properties of a classification machinery are: (i) the system preserves the core information of the input data; (ii) the training examples convey information about unseen data; and (iii) the system is able to  ...  correct it in the locations that result in classification errors.  ... 
doi:10.1109/tsp.2016.2546221 fatcat:w2jok2hppfhczc7ueszdaahoze
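
The claim in the snippet, that nearby inputs stay nearby under a layer of random Gaussian weights, is easy to probe numerically. Below is a minimal sketch, assuming a single ReLU layer with i.i.d. Gaussian weights; the width, variance scaling, and perturbation size are illustrative choices, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 64, 4096  # wide layer: concentration sharpens as width grows

# i.i.d. Gaussian weights, scaled so that E||Wx||^2 = ||x||^2
W = rng.normal(0.0, 1.0 / np.sqrt(d_out), size=(d_out, d_in))

def layer(x):
    return np.maximum(W @ x, 0.0)  # ReLU(W x)

x = rng.normal(size=d_in)
x /= np.linalg.norm(x)
y = x + 0.05 * rng.normal(size=d_in)  # a nearby input point
y /= np.linalg.norm(y)

print("input distance: ", np.linalg.norm(x - y))
print("output distance:", np.linalg.norm(layer(x) - layer(y)))
```

The output distance tracks the input distance up to a roughly constant factor, which is the informal version of the distance-preservation property the paper analyzes.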

Comments on "Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?" [article]

Talha Cihad Gulcu, Alper Gungor
2019 arXiv pre-print
This leads us to conclude that Theorem 3 and Figure 5 in [1] are not accurate.  ...  Consequently, the behavior of networks consisting of random Gaussian weights only is not useful to explain how DNNs achieve state-of-the-art results in a large variety of problems.  ...  be corrected, and its corrected version is given below, along with the proof.  ... 
arXiv:1901.02182v2 fatcat:qgregjnljbdpzcy6vi7kf5cla4

2020 Index IEEE Transactions on Signal Processing Vol. 68

2020 IEEE Transactions on Signal Processing  
Xiang, Q., +, TSP 2020 4336-4351 Comments on "Deep Neural Networks With Random Gaussian Weights: A Universal Classification Strategy?".  ...  Garcia-Fernandez, A.F., +, TSP 2020 1300-1314 Corrections to "Deep Neural Networks With Random Gaussian Weights: A Universal Classification Strategy?" [Jul 1, 2016 3444-3457].  ... 
doi:10.1109/tsp.2021.3055469 fatcat:6uswtuxm5ba6zahdwh5atxhcsy

Deep learning generalizes because the parameter-function map is biased towards simple functions [article]

Guillermo Valle-Pérez, Chico Q. Camargo, Ard A. Louis
2019 arXiv pre-print
If we assume that the training algorithm samples parameters close to uniformly within the zero-error region then the PAC-Bayes theorem can be used to guarantee good expected generalization for target functions  ...  While many proposals for some kind of implicit regularization have been made to rationalise this success, there is no consensus on the fundamental reason why DNNs do not strongly overfit.  ...  Deep neural networks with random Gaussian weights: a universal classification strategy? IEEE Trans. Signal Processing, 64(13): 3444-3457, 2016. Noah Golowich, Alexander Rakhlin, and Ohad Shamir.  ... 
arXiv:1805.08522v5 fatcat:rvoxcyrkzzhd5mm2loiaarrmhm
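
The "biased parameter-function map" in the title can be made concrete with a toy experiment: sample random weights for a small network over a finite input domain and count how often each induced function appears. The sketch below is an assumption-laden illustration, not the paper's experiment; the 5-16-1 architecture, 5-bit domain, and sample count are all invented for brevity.

```python
import numpy as np
from collections import Counter
from itertools import product

rng = np.random.default_rng(0)
X = np.array(list(product([0.0, 1.0], repeat=5)))  # all 32 five-bit inputs

def sample_function():
    # One random draw of a tiny 5-16-1 ReLU network's parameters.
    W1, b1 = rng.normal(size=(5, 16)), rng.normal(size=16)
    w2, b2 = rng.normal(size=16), rng.normal()
    h = np.maximum(X @ W1 + b1, 0.0)
    return tuple((h @ w2 + b2 > 0).astype(int))  # induced boolean function

counts = Counter(sample_function() for _ in range(20_000))
print(len(counts), "distinct functions sampled out of 2**32 possible")
print(counts.most_common(3))  # a few simple functions take most of the mass
```

If the map were unbiased, 20,000 draws from a space of 2**32 functions would almost never repeat; in practice a handful of simple (near-constant) functions dominate the counts, which is the bias the paper's PAC-Bayes argument builds on.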

Discovering and Deciphering Relationships Across Disparate Data Modalities [article]

Joshua T. Vogelstein, Eric Bridgeford, Qing Wang, Carey E. Priebe, Mauro Maggioni, Cencheng Shen
2018 arXiv pre-print
(linear, quadratic, cubic), trigonometric (sinusoidal, circular, ellipsoidal, spiral), geometric (square, diamond, W-shape), and other functions, with dimensionality ranging from 1 to 1000.  ...  applications, including brain imaging and cancer genetics, MGC is the only method that can both detect the presence of a dependency and provide specific guidance for the next experiment and/or analysis to  ...  Deep neural networks with random Gaussian weights: A universal classification strategy? CoRR, abs/1504.08291, 2015.  ... 
arXiv:1609.05148v8 fatcat:wccit3ohp5f33hxtp24gcmkmgq
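
MGC itself is beyond a short snippet, but the dependence-testing setting the abstract describes, nonlinear relationships that linear correlation misses, can be illustrated with distance correlation, a simpler relative of the MGC statistic. The sketch below is a hedged stand-in: the sinusoidal relationship and sample size are assumptions, and distance correlation is used in place of the paper's MGC.

```python
import numpy as np

def distance_correlation(x, y):
    # Sample distance correlation (Szekely-Rizzo dCor) for 1-D samples.
    def double_centered(a):
        D = np.abs(a[:, None] - a[None, :])  # pairwise distance matrix
        return D - D.mean(0) - D.mean(1)[:, None] + D.mean()
    A, B = double_centered(x), double_centered(y)
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=500)
y = np.cos(4 * x) + 0.1 * rng.normal(size=500)  # nonlinear sinusoidal link

print("Pearson:", np.corrcoef(x, y)[0, 1])     # near zero, misses the link
print("dCor:   ", distance_correlation(x, y))  # clearly above zero
```

Pearson correlation is blind to the cosine relationship because the dependence has no linear component, while the distance-based statistic picks it up; MGC extends this family of statistics across multiple scales, which is what lets it also characterize the dependency it detects.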