
Parameterized Hypercomplex Graph Neural Networks for Graph Classification [article]

Tuan Le, Marco Bertolini, Frank Noé, Djork-Arné Clevert
2021 arXiv   pre-print
neural networks that leverage the properties of hypercomplex feature transformation.  ...  Despite recent advances in representation learning in hypercomplex (HC) space, this subject is still vastly unexplored in the context of graphs.  ...  Conclusion We have introduced a model class we named Parameterized Hypercomplex Graph Neural Networks.  ... 
arXiv:2103.16584v1 fatcat:fmovzq7v5vhm3c2lzwkspyhbda
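The core trick behind parameterized hypercomplex (PHM) layers is to build a weight matrix as a sum of Kronecker products, which cuts the parameter count to roughly 1/n of a dense layer for large widths. A minimal numpy sketch of that construction (the function name `phm_linear` and all shapes are illustrative, not taken from the paper):

```python
import numpy as np

def phm_linear(x, A, B):
    """Sketch of a parameterized hypercomplex multiplication (PHM) layer.

    The weight matrix is assembled as W = sum_i kron(A[i], B[i]); for
    d_out = d_in = d this stores about n^3 + d^2/n parameters instead
    of the d^2 of a dense layer.
    """
    W = sum(np.kron(A[i], B[i]) for i in range(A.shape[0]))
    return W @ x

n = 4                      # hypercomplex dimension (n = 4 mimics quaternions)
d_in = d_out = 64          # layer widths; must be divisible by n
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n, n))                    # n small (n x n) factors
B = rng.normal(size=(n, d_out // n, d_in // n))   # n (d_out/n x d_in/n) factors
x = rng.normal(size=d_in)
y = phm_linear(x, A, B)    # y has shape (d_out,)
```

Here `A.size + B.size = 1088` versus `64 * 64 = 4096` for a dense weight, close to the 1/n reduction the PHM literature advertises.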

Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks

Yi Tay, Aston Zhang, Anh Tuan Luu, Jinfeng Rao, Shuai Zhang, Shuohang Wang, Jie Fu, Siu Cheung Hui
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
Many state-of-the-art neural models for NLP are heavily parameterized and thus memory inefficient.  ...  This paper proposes a series of lightweight and memory efficient neural architectures for a potpourri of natural language processing (NLP) tasks.  ...  Acknowledgements The authors thank the anonymous reviewers of ACL 2019 for their time, feedback and comments.  ... 
doi:10.18653/v1/p19-1145 dblp:conf/acl/TayZLRZWFH19 fatcat:d6lbidfwsjfqjel6u5hmxzk3qe

Quaternion Knowledge Graph Embeddings [article]

Shuai Zhang, Yi Tay, Lina Yao, Qi Liu
2019 arXiv   pre-print
In this work, we move beyond the traditional complex-valued representations, introducing more expressive hypercomplex representations to model entities and relations for knowledge graph embeddings.  ...  Experimental results demonstrate that our method achieves state-of-the-art performance on four well-established knowledge graph completion benchmarks.  ...  Neural-network-based methods have also been adopted, e.g., Neural Tensor Network [Socher et al., 2013] and ER-MLP [Dong et al., 2014] are two representative neural-network-based methodologies.  ... 
arXiv:1904.10281v3 fatcat:a2wr2eycbrb7dhbmzmsfu4xgcu
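The hypercomplex representations mentioned here score a triple by rotating the head entity with a unit relation quaternion via the Hamilton product, then taking an inner product with the tail. A self-contained sketch of that scoring idea (the helper names `hamilton` and `score` are illustrative, not the paper's API):

```python
import numpy as np

def hamilton(q, p):
    """Hamilton product of quaternions a + bi + cj + dk, stored as 4-vectors."""
    a1, b1, c1, d1 = q
    a2, b2, c2, d2 = p
    return np.array([
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,   # real part
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,   # i component
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,   # j component
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,   # k component
    ])

def score(head, relation, tail):
    """QuatE-style plausibility: rotate the head by the normalized
    relation quaternion, then take the inner product with the tail."""
    relation = relation / np.linalg.norm(relation)   # unit quaternion => pure rotation
    return float(hamilton(head, relation) @ tail)
```

Normalizing the relation to a unit quaternion makes the Hamilton product norm-preserving, so the relation acts as a rotation rather than a scaling, which is the extra expressiveness over complex-valued rotations in one plane.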

High-Order Pooling for Graph Neural Networks with Tensor Decomposition [article]

Chenqing Hua and Guillaume Rabusseau and Jian Tang
2022 arXiv   pre-print
Graph Neural Networks (GNNs) are attracting growing attention due to their effectiveness and flexibility in modeling a variety of graph-structured data.  ...  CP decomposition to efficiently parameterize permutation-invariant multilinear maps for modeling node interactions.  ...  Introduction Graph neural networks (GNNs) generalize traditional neural network architectures for data in the Euclidean domain to data in non-Euclidean domains [24, 38, 31] .  ... 
arXiv:2205.11691v1 fatcat:ltn4xqvcgfdo7pqgitufvalzqi
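The CP-decomposition pooling described in this snippet can be illustrated with a tiny sketch: project each node embedding through a shared rank-R factor, take the elementwise product across nodes (a symmetric, hence permutation-invariant, multilinear map), and mix the rank components into a graph-level readout. The name `cp_pool` and all shapes below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cp_pool(X, U, v):
    """Permutation-invariant high-order pooling via a rank-R CP
    parameterization of a multilinear map over node features."""
    Z = X @ U                # (num_nodes, R): per-node rank activations
    prod = Z.prod(axis=0)    # (R,): product over nodes, order-independent
    return float(prod @ v)   # scalar graph-level readout

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 8))     # 5 nodes with 8-dim features
U = rng.normal(size=(8, 16))    # shared CP factor, rank R = 16
v = rng.normal(size=16)         # mixing weights over rank components
g = cp_pool(X, U, v)
```

Because the product over nodes is symmetric, shuffling the rows of `X` leaves the output unchanged, which is the permutation-invariance property the snippet refers to; the CP factorization keeps the parameter count linear in the rank rather than exponential in the interaction order.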

Knowledge Embedding Based Graph Convolutional Network [article]

Donghan Yu, Yiming Yang, Ruohong Zhang, Yuexin Wu
2021 arXiv   pre-print
Recently, a considerable literature has grown up around the theme of Graph Convolutional Network (GCN).  ...  Experimental results on benchmark datasets show the advantageous performance of KE-GCN over strong baseline methods in the tasks of knowledge graph alignment and entity classification.  ...  ACKNOWLEDGMENTS We thank the reviewers for their helpful comments.  ... 
arXiv:2006.07331v2 fatcat:lbbure7zsvhljdecmz6fssq6wi

A Survey on Knowledge Graphs: Representation, Acquisition and Applications [article]

Shaoxiong Ji and Shirui Pan and Erik Cambria and Pekka Marttinen and Philip S. Yu
2021 IEEE Transactions on Neural Networks and Learning Systems   accepted
For knowledge acquisition, especially knowledge graph completion, embedding methods, path inference, and logical rule reasoning are reviewed.  ...  In this survey, we provide a comprehensive review of knowledge graphs covering overall research topics about 1) knowledge graph representation learning, 2) knowledge acquisition and completion, 3) temporal  ...  [109] proposed a transductive meta-learning framework, called Graph Extrapolation Networks (GEN), for few-shot out-of-graph link prediction in knowledge graphs. 6) Triple Classification: Triple classification  ... 
doi:10.1109/tnnls.2021.3070843 pmid:33900922 arXiv:2002.00388v4 fatcat:4l2yxnf3wbg4zpzdumduvyr4he

Graph Neural Networks with Learnable Structural and Positional Representations [article]

Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, Xavier Bresson
2022 arXiv   pre-print
Graph neural networks (GNNs) have become the standard learning architectures for graphs.  ...  Possible graph PEs are Laplacian eigenvectors. In this work, we propose to decouple structural and positional representations to make it easy for the network to learn these two essential properties.  ...  VPD would like to thank Andreea Deac for her helpful feedback, Quan Gan for his support on the DGL library, Gabriele Corso for answering questions related to the PNA model, and Chaitanya K.  ... 
arXiv:2110.07875v2 fatcat:db3uuj6idjfxvd7q2nnevpy53m

A Birds Eye View on Knowledge Graph Embeddings, Software Libraries, Applications and Challenges [article]

Satvik Garg, Dwaipayan Roy
2022 arXiv   pre-print
We discuss existing KGC approaches, including the state-of-the-art Knowledge Graph Embeddings (KGE), not only on static graphs but also for the latest trends such as multimodal, temporal, and uncertain knowledge graphs.  ...  Graph Neural Networks: The overall idea of graph neural networks was introduced in [46]; however, various neural-network-based models have been introduced for inferring multi-relational representations.  ... 
arXiv:2205.09088v1 fatcat:c4gfzg4ldras3axpf5wvbldstm

Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics [article]

Prajjwal Bhargava, Aleksandr Drozd, Anna Rogers
2021 arXiv   pre-print
Siamese neural networks for one-shot image recognition. In ICML deep learning workshop, volume 2. Lille.  ...  We use f to represent the classification layer parameterized by ξ. The output vectors F_A = f([U, V]; ξ) and F_G = f([0, V]; ξ) are concatenated along the non-batch dimension.  ... 
arXiv:2110.01518v1 fatcat:orftxwh7uvcwdkpamabwqzxgti

Ocular biometrics: A survey of modalities and fusion approaches

Ishan Nigam, Mayank Vatsa, Richa Singh
2015 Information Fusion  
We also propose a path forward to advance the research on ocular recognition by (i) improving the sensing technology, (ii) heterogeneous recognition for addressing interoperability, (iii) utilizing advanced machine learning algorithms for better representation and classification, (iv) developing algorithms for ocular recognition at a distance, (v) using multimodal ocular biometrics for recognition, and (  ...  Hugo Proença for their insightful comments.  ... 
doi:10.1016/j.inffus.2015.03.005 fatcat:ph2katoyuzdylamlesnt7vzbay

Cortical connections and parallel processing: Structure and function

Dana H. Ballard
1986 Behavioral and Brain Sciences  
One of the deepest mysteries of the function of cortex is that neural processing times are only about one hundred times faster than the fastest response times for complex behavior.  ...  A detailed consideration of this model has several implications for the underlying anatomy and physiology.  ...  Finally, many thanks go to Peggy Meeker for preparing the numerous drafts of the manuscript.  ... 
doi:10.1017/s0140525x00021555 fatcat:63ratfgnivhptl723dvvg7n5hi

Some Fundamental Theorems in Mathematics [article]

Oliver Knill
2022 arXiv   pre-print
Up to isomorphism, the two-dimensional hypercomplex algebras over the reals are the complex numbers x + iy with i^2 = −1, the split-complex numbers x + jy with  ...  Hypercomplexity A hypercomplex algebra is a finite-dimensional algebra over R which is unital and distributive.  ...  "Calculus of variations" is illustrated by the Kakeya needle set in "geometric measure theory", "Fourier analysis" appears when looking at functions which have fractal graphs, "spectral theory" as part  ... 
arXiv:1807.08416v4 fatcat:lw7lbsxyznfrnaozilxapihmdy

International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering : July 20 - 22 2015, Bauhaus-University Weimar

Klaus Gürlebeck, Tom Lahmer
Architects, computer scientists, mathematicians, and engineers from all over the world will meet in Weimar for an interdisciplinary exchange of experiences, to report on their results in research, development  ...  This paper proposes a conceptual approach for extending the current IFC object model in order to include monitoring-related information.  ...  Acknowledgments: The research is supported by the German Academic Exchange Service (DAAD), project 91531467, and the Russian Foundation for Basic Research (RFBR), project 13-01-00003.  ... 
doi:10.25643/bauhaus-universitaet.2451 fatcat:uy35r7npord7znzqfzg3spwyxu

Neural computation of depth from binocular disparity

Nuno Reis Goncalves, Apollo-University Of Cambridge Repository, Apollo-University Of Cambridge Repository, Zoe Kourtzi, Andrew Welchman
Using convolutional neural networks, we found that disparity encoding in primary visual cortex can be explained by shallow, feed-forward networks optimized to extract absolute depth from naturalistic images.  ...  These networks develop physiologically plausible receptive fields, and predict neural responses to highly unnatural stimuli commonly used in the laboratory.  ...  N-way classification: In addition to the binary case, I also trained a network to perform N-way classification.  ... 
doi:10.17863/cam.27049 fatcat:jq246y4lrfgsfadwjy6qddxgmu

Computational models of contrast and orientation processing in primary visual cortex [article]

Marcel Stimberg, Technische Universität Berlin, Technische Universität Berlin, Klaus Obermayer
Although it is the best studied part of the visual system — and one of the best studied areas in the brain in general — many questions about the involved neural mechanisms remain unclear to date.  ...  By systematically exploring two classes of network models we show that the experimentally observed dependence of tuning properties on position in this map is best explained in a network that operates in  ...  Tuning of V_m, f, g_i, and g_e for a sample parameterization (point REC in Figure 3.5) of the Hodgkin-Huxley network.  ... 
doi:10.14279/depositonce-2961 fatcat:hrtconup5vfgheudg6sttrmjza