A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL. The file type is application/pdf.
Training Matters: Unlocking Potentials of Deeper Graph Convolutional Neural Networks
[article]
2020
arXiv
pre-print
The performance limit of Graph Convolutional Networks (GCNs) and the fact that we cannot stack more of them to increase the performance, which we usually do for other deep learning paradigms, are pervasively ...
This paper first identifies the training difficulty of GCNs from the perspective of graph signal energy loss. ...
Introduction Being a structure particularly capable of modeling relational information [14, 16, 19, 10, 21, 7], graphs have inspired the emergence of Graph Neural Networks (GNNs), machine learning paradigms ...
arXiv:2008.08838v1
fatcat:3nysiorxmbbgvix5fvc36bcpsu
Describing condensed matter from atomically resolved imaging data: from structure to generative and causal models
[article]
2021
arXiv
pre-print
given one of the most powerful tools in condensed matter physics arsenal. ...
The development of high-resolution imaging methods such as electron and scanning probe microscopy and atomic probe tomography has provided a wealth of information on the structure and functionalities of solids ...
This approach is somewhat analogous to convolutional filters in deep convolutional neural networks and corresponds to the case when no prior information on the objects of interest is available. ...
arXiv:2109.07350v1
fatcat:jfj57u24izgltkj3a4emvmjmte
Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
[article]
2020
arXiv
pre-print
Despite being the workhorse of deep learning, the backpropagation algorithm is no panacea. It enforces sequential layer updates, thus preventing efficient parallelization of the training process. ...
Here, we challenge this perspective, and study the applicability of Direct Feedback Alignment to neural view synthesis, recommender systems, geometric learning, and natural language processing. ...
Acknowledgments and Disclosure of Funding We thank Igor Carron and Laurent Daudet for the general guidance on the subject of this investigation and the insightful comments, as well as the larger LightOn ...
arXiv:2006.12878v2
fatcat:bisorv4p7bdebendx52ocwo6ri
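The Direct Feedback Alignment entry above notes that backpropagation forces sequential layer updates; DFA instead projects the output error to each hidden layer through a fixed random matrix, so layer updates no longer depend on one another. A minimal numpy sketch of that mechanic (the toy sizes, initialization, and learning rate are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 4 -> 8 -> 2 network trained with Direct Feedback Alignment (DFA):
# the hidden layer receives the output error through a FIXED random
# matrix B1 rather than through W2.T as in backpropagation.
W1 = rng.normal(0, 0.5, (8, 4)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (2, 8)); b2 = np.zeros(2)
B1 = rng.normal(0, 0.5, (8, 2))   # fixed random feedback matrix

x = rng.normal(size=4)
y = np.array([1.0, 0.0])

lr = 0.1
for _ in range(500):
    h1 = np.tanh(W1 @ x + b1)          # hidden activations
    y_hat = W2 @ h1 + b2               # linear output layer
    e = y_hat - y                      # output error
    # DFA: project e straight back with B1 -- no transpose of W2,
    # so the two layer updates could be computed in parallel.
    d1 = (B1 @ e) * (1 - h1 ** 2)
    W2 -= lr * np.outer(e, h1); b2 -= lr * e
    W1 -= lr * np.outer(d1, x); b1 -= lr * d1

loss = float(np.sum((W2 @ np.tanh(W1 @ x + b1) + b2 - y) ** 2))
print(loss)  # squared error after training; should be near zero
```

The random feedback path still drives the loss down because the forward weights gradually align with the feedback matrix, which is the core empirical claim DFA rests on.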
EmgAuth: Unlocking Smartphones with EMG Signals
[article]
2021
arXiv
pre-print
When training the Siamese network, we design a special data augmentation technique to make the system resilient to the rotation of the armband, which makes EmgAuth free of calibration. ...
In this paper, we propose EmgAuth, a novel electromyography(EMG)-based smartphone unlocking system based on the Siamese network. ...
Fig. 2. Examples of the raw data from the Myo armband. In the model training step, all pairs are fed into the neural network to train a convolutional Siamese neural network.
arXiv:2103.12542v1
fatcat:bwrkiq4qfbfxxafmhlogg6hdsi
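The EmgAuth entry above trains a Siamese network on pairs of EMG recordings. A minimal sketch of the shared-tower idea with a contrastive loss (the loss choice, shapes, and weights here are illustrative assumptions, not EmgAuth's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0, 0.1, (4, 16))    # weights SHARED by both inputs of a pair

def embed(x):
    # One shared "tower" maps an input to a low-dimensional embedding.
    return np.tanh(W @ x)

def contrastive_loss(x1, x2, same, margin=1.0):
    # Matching pairs are pulled together; mismatched pairs are pushed
    # apart until they are at least `margin` away in embedding space.
    d = np.linalg.norm(embed(x1) - embed(x2))
    return d ** 2 if same else max(0.0, margin - d) ** 2

a, b = rng.normal(size=16), rng.normal(size=16)
loss_same = contrastive_loss(a, a, same=True)
print(loss_same)  # identical inputs embed identically -> loss 0.0
```

Because the two towers share weights, training on pairs teaches a similarity metric rather than a per-user classifier, which is what lets such a system verify users it was calibrated for without retraining per device orientation.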
Rapid training of deep neural networks without skip connections or normalization layers using Deep Kernel Shaping
[article]
2021
arXiv
pre-print
Using an extended and formalized version of the Q/C map analysis of Poole et al. (2016), along with Neural Tangent Kernel theory, we identify the main pathologies present in deep networks that prevent ...
In our experiments we show that DKS enables SGD training of residual networks without normalization layers on Imagenet and CIFAR-10 classification tasks at speeds comparable to standard ResNetV2 and Wide-ResNet ...
at deeper layers of the network. ...
arXiv:2110.01765v1
fatcat:p4dxhkmczngubjt6xhzdgrlf7q
Investigating the Relationship Between Dropout Regularization and Model Complexity in Neural Networks
[article]
2021
arXiv
pre-print
We explore the relationship between the dropout rate and model complexity by training 2,000 neural networks configured with random combinations of the dropout rate and the number of hidden units in each ...
Turning to Deep Learning models, we build neural networks that predict the optimal dropout rate given the number of hidden units in each dense layer, the desired cost, and the desired accuracy of the model ...
Govind Tatachari for his tremendous help and mentorship over the course of this research. We very much appreciate his valuable inputs and feedback. ...
arXiv:2108.06628v2
fatcat:hiy4hfbj25axbia67nqtabkaea
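The study above sweeps the dropout rate against the number of hidden units across 2,000 networks. As a reminder of the mechanic being varied, a minimal inverted-dropout sketch (the layer size and rate here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, rate):
    # Inverted dropout: keep each unit with probability (1 - rate) and
    # rescale survivors so the expected activation is unchanged, which
    # lets inference run without any correction.
    keep = rng.random(h.shape) >= rate
    return h * keep / (1.0 - rate)

h = np.ones(10_000)
out = dropout(h, rate=0.4)
print(out.mean())  # close to 1.0 in expectation; ~40% of entries zeroed
```

Varying `rate` jointly with layer width, as the paper does, probes the trade-off between the regularization dropout provides and the effective capacity it removes.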
Fuzzy Logic Interpretation of Quadratic Networks
[article]
2019
arXiv
pre-print
Recently, we proposed artificial quadratic neural networks consisting of second-order neurons in potentially many layers. ...
neural networks. ...
Due to the dominating role of convolution in the convolutional neural network, we only processed the neurons in the convolutional layers. ...
arXiv:1807.03215v3
fatcat:crfsmcjax5fdxl47asskrmzd5e
Graph-Based Deep Learning for Medical Diagnosis and Analysis: Past, Present and Future
2021
Sensors
As such, graph neural networks have attracted significant attention by exploiting implicit information that resides in a biological system, with interacting nodes connected by edges whose weights can be ...
We also outline the limitations of existing techniques and discuss potential directions for future research. ...
Conflicts of Interest: The authors declare no conflict of interest. ...
doi:10.3390/s21144758
fatcat:jytyt4u2pjgvhnhcto3vcvd3a4
Data Curation with Deep Learning [Vision]
[article]
2019
arXiv
pre-print
In most organizations, data curation plays an important role so as to fully unlock the value of big data. ...
Despite decades of effort from both researchers and practitioners, it is still one of the most time-consuming and least enjoyable tasks of data scientists. ...
(c) Convolutional Neural Networks (CNNs) CNNs are feed-forward neural networks (see Figure 2 (c)). ...
arXiv:1803.01384v2
fatcat:ymch4jazxzanzpv7dbmhl5beiy
Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems
[article]
2022
arXiv
pre-print
The widely accepted narrative attributes this progress to massive increases in the quantity of computational and data resources available to support statistical learning in deep artificial neural networks ...
neural computing. ...
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or Microsoft. ...
arXiv:2205.01128v1
fatcat:b5z7omfvcba7zgmiu5b64db2qy
MCADNNet: Recognizing Stages of Cognitive Impairment through Efficient Convolutional fMRI and MRI Neural Network Topology Models
2019
IEEE Access
We developed a state-of-the-art deep learning algorithm based on an optimized convolutional neural network (CNN) topology called MCADNNet that simultaneously recognizes MCI, AD, and normally aging brains ...
A decision-making algorithm was also designed to stabilize the outcome of the trained models. ...
The rotation of the weights derives from a delta error in the convolutional neural network. ...
doi:10.1109/access.2019.2949577
pmid:32021737
pmcid:PMC6999050
fatcat:u5hpsrvvgbgfvdthprndg3u7tq
Deep Learning for Design and Retrieval of Nano-photonic Structures
[article]
2017
arXiv
pre-print
In this context, nano-photonics has revolutionized the field of optics in recent years by enabling the manipulation of light-matter interaction with subwavelength structures. ...
However, despite the many advances in this field, its impact and penetration in our daily life has been hindered by a convoluted and iterative process, cycling through modeling, nanofabrication and nano-characterization ...
We observe a significant gain from training one network on all of the training set over the alternative of training multiple separate networks. ...
arXiv:1702.07949v3
fatcat:q4c447frhrg2rlya63jeqcvwpa
Exploring order parameters and dynamic processes in disordered systems via variational autoencoders
[article]
2021
arXiv
pre-print
This approach is predicated on the synergy of two concepts, the parsimony of physical descriptors and the general rotational invariance of non-crystalline solids, and is implemented using a rotationally-invariant extension of the variational autoencoder applied to semantically segmented atom-resolved data, seeking the most effective reduced representation for the system that still contains the maximum amount of ...
Note that the same principle is used in convolutional neural networks. ...
arXiv:2006.10267v2
fatcat:skmrjsmhtvblvjxcclblrqbkma
Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges
[article]
2021
arXiv
pre-print
network architectures, such as CNNs, RNNs, GNNs, and Transformers. ...
Such a 'geometric unification' endeavour, in the spirit of Felix Klein's Erlangen Program, serves a dual purpose: on one hand, it provides a common mathematical framework to study the most successful neural ...
Acknowledgements This text represents a humble attempt to summarise and synthesise decades of existing knowledge in deep learning architectures, through the geometric lens of invariance and symmetry. ...
arXiv:2104.13478v2
fatcat:odbzfsau6bbwbhulc233cfsrom
Survey of Generative Methods for Social Media Analysis
[article]
2021
arXiv
pre-print
We included two important aspects that are currently gaining importance in mining and modeling social media: dynamics and networks. ...
This survey draws a broad-stroke, panoramic picture of the State of the Art (SoTA) of the research in generative methods for the analysis of social media data. ...
matter of scale and training data [224]. ...
arXiv:2112.07041v1
fatcat:xgmduwctpbddfo67y6ack5s2um
Showing results 1 — 15 out of 253 results