1,555 Hits in 4.0 sec

LSP : Acceleration and Regularization of Graph Neural Networks via Locality Sensitive Pruning of Graphs [article]

Eitan Kosman, Joel Oren, Dotan Di Castro
2021 arXiv   pre-print
Graph Neural Networks (GNNs) have emerged as highly successful tools for graph-related tasks.  ...  In this paper, we take a further step towards demystifying this phenomenon and propose a systematic method called Locality-Sensitive Pruning (LSP) for graph pruning based on Locality-Sensitive Hashing.  ...  INTRODUCTION Graph neural networks have become extremely popular for tasks involving graph data.  ... 
arXiv:2111.05694v1 fatcat:4k2me4wgenanjdwfbv6tnzrs4e
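The core idea named in the abstract, pruning a graph with locality-sensitive hashing, can be sketched as follows. This is only an illustration of random-hyperplane LSH applied to edge pruning, under my own assumptions; the function names and thresholds are hypothetical and not taken from the paper's actual LSP algorithm.

```python
import numpy as np

def lsh_signatures(features, n_planes=8, seed=0):
    """Hash each node's feature vector to a bit signature via random hyperplanes."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((features.shape[1], n_planes))
    return (features @ planes > 0).astype(np.uint8)

def prune_edges(edges, sigs, max_hamming=2):
    """Keep only edges whose endpoint signatures are close in Hamming distance."""
    kept = []
    for u, v in edges:
        if np.count_nonzero(sigs[u] != sigs[v]) <= max_hamming:
            kept.append((u, v))
    return kept

# Two tight clusters of nodes; the edge (2, 3) crosses between them.
feats = np.vstack([np.ones((3, 4)), -np.ones((3, 4))])
sigs = lsh_signatures(feats)
edges = [(0, 1), (1, 2), (2, 3), (4, 5)]
print(prune_edges(edges, sigs))  # the cross-cluster edge (2, 3) is dropped
```

Because nodes with similar features collide under the same hash buckets with high probability, edges between dissimilar endpoints can be discarded cheaply, without computing exact pairwise distances.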

Deep Learning Acceleration Techniques for Real Time Mobile Vision Applications [article]

Gael Kamdem De Teyou
2019 arXiv   pre-print
As a consequence, the possibility of implementing deep neural networks in mobile environments has attracted a lot of researchers.  ...  This paper presents emerging deep learning acceleration techniques that can enable the delivery of real time visual recognition into the hands of end users, anytime and anywhere.  ...  Additionally, on Android Devices that support it, the interpreter can also use the Android Neural Networks API for hardware acceleration, otherwise it will default to the CPU for execution [48] .  ... 
arXiv:1905.03418v2 fatcat:mxtgdesm2fafbjmyuck5jkphpa

Low-Bit Quantization for Attributed Network Representation Learning

Hong Yang, Shirui Pan, Ling Chen, Chuan Zhou, Peng Zhang
2019 Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence  
Experiments on real-world node classification and link prediction tasks validate the promising results of the proposed LQANR model.  ...  Attributed network embedding plays an important role in transferring network data into compact vectors for effective network analysis.  ...  The link prediction results are shown in Figure 3 (a).  ... 
doi:10.24963/ijcai.2019/562 dblp:conf/ijcai/YangP0Z019 fatcat:q4jwh3xq2fbd7fm2h7jnq3kcyi
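The storage saving behind low-bit embedding quantization can be illustrated with a toy uniform scheme: each embedding entry is rounded to the nearest of a few evenly spaced levels and stored as a small integer index. LQANR's learned layer-wise scheme is more sophisticated; everything below is an assumed stand-in for illustration only.

```python
import numpy as np

def quantize(vecs, bits=2):
    """Map each entry to the nearest of 2**bits evenly spaced levels in [-1, 1]."""
    levels = np.linspace(-1, 1, 2 ** bits)
    idx = np.abs(vecs[..., None] - levels).argmin(axis=-1)
    return idx.astype(np.uint8), levels  # store bits-per-entry indices + tiny codebook

def dequantize(idx, levels):
    """Recover approximate embeddings from indices and the codebook."""
    return levels[idx]

v = np.array([[0.9, -0.8, 0.1, -0.2]])
idx, levels = quantize(v)
print(dequantize(idx, levels))  # each entry snapped to {-1, -1/3, 1/3, 1}
```

At 2 bits per entry instead of 32-bit floats, the embedding table shrinks by roughly 16x, at the cost of the rounding error visible above.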

A Survey on Efficient Processing of Similarity Queries over Neural Embeddings [article]

Yifan Wang
2022 arXiv   pre-print
To measure data similarity semantically, neural embeddings are applied.  ...  Embedding techniques work by representing the raw data objects as vectors (so-called "embeddings" or "neural embeddings" since they are mostly generated by neural network models) that expose the hidden  ...  on a multi-layer neural network whose input is raw data and whose output is a hash code.  ... 
arXiv:2204.07922v1 fatcat:u5osyghs6vgppnj5gpnrzhae5y
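The baseline that the surveyed index structures aim to beat is a brute-force similarity query over the embedding vectors. A minimal sketch, with illustrative names of my own choosing, using cosine similarity:

```python
import numpy as np

def top_k_cosine(query, embeddings, k=2):
    """Return indices of the k stored embeddings most similar to the query."""
    q = query / np.linalg.norm(query)
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = E @ q                     # cosine similarity against every vector
    return np.argsort(-sims)[:k]     # highest similarity first

emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
print(top_k_cosine(np.array([1.0, 0.05]), emb))  # nearest two: indices 0 and 1
```

This exact scan is linear in the collection size, which is exactly why the survey's approximate methods (hashing, quantization, graph indexes) exist.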

Learning to Hash with Graph Neural Networks for Recommender Systems [article]

Qiaoyu Tan, Ninghao Liu, Xing Zhao, Hongxia Yang, Jingren Zhou, Xia Hu
2020 arXiv   pre-print
In this work, we investigate the problem of hashing with graph neural networks (GNNs) for high quality retrieval, and propose a simple yet effective discrete representation learning framework to jointly  ...  Specifically, a deep hashing with GNNs (HashGNN) is presented, which consists of two components, a GNN encoder for learning node representations, and a hash layer for encoding representations to hash codes  ...  ACKNOWLEDGMENTS The authors thank the anonymous reviewers for their helpful comments. The work is in part supported by NSF IIS-1718840, IIS-1750074, and IIS-1900990.  ... 
arXiv:2003.01917v1 fatcat:wpexdptpeze7tdumqb6r4lhn7u
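The two-component design named in the abstract, an encoder producing node representations followed by a hash layer encoding them to binary codes, can be sketched minimally. This is not the paper's HashGNN implementation: the mean-neighbor aggregation and the sign() hash layer below are assumed stand-ins for the learned components.

```python
import numpy as np

def encode(adj, features):
    """One mean-aggregation step: each node averages itself and its neighbors."""
    deg = adj.sum(axis=1, keepdims=True) + 1
    return (features + adj @ features) / deg

def hash_layer(reps):
    """Binarize continuous representations into +/-1 hash codes."""
    return np.where(reps >= 0, 1, -1)

adj = np.array([[0, 1], [1, 0]], dtype=float)   # two connected nodes
feats = np.array([[0.6, -0.2], [0.2, -0.6]])
codes = hash_layer(encode(adj, feats))
print(codes)  # both nodes map to the same binary code after aggregation
```

In the real model the two stages are trained jointly so that the binarization loses as little retrieval quality as possible; here they are fixed functions purely to show the data flow.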

Reconfigurable Hardware Accelerators: Opportunities, Trends, and Challenges [article]

Chao Wang, Wenqi Lou, Lei Gong, Lihui Jin, Luchao Tan, Yahui Hu, Xi Li, Xuehai Zhou
2017 arXiv   pre-print
In the end, we anticipate future trends in accelerator architectures, hoping to provide a reference for computer architecture researchers.  ...  To better review recent work on reconfigurable computing accelerators, this survey covers the latest research on reconfigurable accelerator architectures and algorithm applications  ...  on neural networks, graph-based accelerators, and data mining algorithm accelerators.  ... 
arXiv:1712.04771v1 fatcat:3lxv45qb4zaqpagtn3eghrmroe

Semi-supervised Network Embedding with Differentiable Deep Quantisation [article]

Tao He, Lianli Gao, Jingkuan Song, Yuan-Fang Li
2021 arXiv   pre-print
Our evaluation on four real-world networks of diverse characteristics shows that d-SNEQ outperforms a number of state-of-the-art embedding methods in link prediction, path prediction, node classification  ...  Learning accurate low-dimensional embeddings for a network is a crucial task as it facilitates many downstream network analytics tasks.  ...  Although we could construct the original network by link prediction and then use graph search algorithms, such as Depth First Search (DFS) [24] or Breadth First Search (BFS) [25], to identify paths  ... 
arXiv:2108.09128v1 fatcat:ix7x3rk5fvfuppjn4rsltylssa

BenchNN: On the broad potential application scope of hardware neural network accelerators

Tianshi Chen, Yunji Chen, Marc Duranton, Qi Guo, Atif Hashmi, Mikko Lipasti, Andrew Nere, Shi Qiu, Michele Sebag, Olivier Temam
2012 2012 IEEE International Symposium on Workload Characterization (IISWC)  
Alternatively, allocating die area towards accelerators targeting an application, or an application domain, appears quite promising, and this paper makes an argument for a neural network hardware accelerator  ...  In this paper, we want to highlight that a hardware neural network accelerator is indeed compatible with many of the emerging high-performance workloads, currently accepted as benchmarks for high-performance  ...  efficient neural network accelerator is best.  ... 
doi:10.1109/iiswc.2012.6402898 dblp:conf/iiswc/ChenCDGHLNQST12 fatcat:jvseyojvzneqjlhtc4aixthzii

Rubik: A Hierarchical Architecture for Efficient Graph Learning [article]

Xiaobing Chen, Yuke Wang, Xinfeng Xie, Xing Hu, Abanti Basak, Ling Liang, Mingyu Yan, Lei Deng, Yufei Ding, Zidong Du, Yunji Chen, Yuan Xie
2020 arXiv   pre-print
However, learning from graphs is non-trivial because of its mixed computation model involving both graph analytics and neural network computing.  ...  Such a hierarchical paradigm facilitates the software and hardware accelerations for GCN learning.  ...  We enhance neural network accelerators in a lightweight way, so that minimum effort is needed to tailor the neural network accelerator for efficient GNN computing. A.  ... 
arXiv:2009.12495v1 fatcat:c7alktpjfjdzhbfmnsbwivv74a

Design and Evaluation of an Ultra Low-power Human-quality Speech Recognition System

Dennis Pinto, Jose-María Arnau, Antonio González
2020 ACM Transactions on Architecture and Code Optimization (TACO)  
Current ASR systems have taken advantage of the tremendous improvements in AI during the past decade by incorporating Deep Neural Networks into the system and pushing their accuracy to levels comparable  ...  used for the most compute-intensive tasks.  ...  The specific DNN included as acoustic model for the Kaldi system is a Time Delay Neural Network (TDNN) [33] .  ... 
doi:10.1145/3425604 fatcat:73qovytiebbddjvjrfce5igvny

Deep learning for drug repurposing: methods, databases, and applications [article]

Xiaoqin Pan, Xuan Lin, Dongsheng Cao, Xiangxiang Zeng, Philip S. Yu, Lifang He, Ruth Nussinov, Feixiong Cheng
2022 arXiv   pre-print
Repurposing existing drugs for new therapies is an attractive solution that accelerates drug development at reduced experimental costs, specifically for Coronavirus Disease 2019 (COVID-19), an infectious  ...  Next, we discuss recently developed sequence-based and graph-based representation approaches as well as state-of-the-art deep learning-based methods.  ...  Naturally, DTI prediction can also be modeled as link prediction in a KG.  ... 
arXiv:2202.05145v1 fatcat:5oqujy2daffdpa33b4cbrg6hqy

Binarized Graph Neural Network [article]

Hanchen Wang, Defu Lian, Ying Zhang, Lu Qin, Xiangjian He, Yiguang Lin, Xuemin Lin
2020 arXiv   pre-print
This motivates us to develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters following the GNN-based paradigm.  ...  Recently, there have been some breakthroughs in graph analysis by applying the graph neural networks (GNNs) following a neighborhood aggregation scheme, which demonstrate outstanding performance in many  ...  To effectively and efficiently support important analytic tasks on graph data, such as node/graph classification, node clustering, community detection, node recommendation, link prediction and graph visualization  ... 
arXiv:2004.11147v1 fatcat:qdaw22pbwjbgdk3kxjpnsak7ea
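One reason binary node representations with binary parameters pay off is that a dot product of +/-1 vectors reduces to XNOR and popcount on packed bits. The sketch below shows that general binarized-network trick under my own packing convention, not the paper's specific architecture:

```python
def pack_bits(vec):
    """Pack a +/-1 vector into an int, with bit i set when vec[i] == +1."""
    word = 0
    for i, v in enumerate(vec):
        if v == 1:
            word |= 1 << i
    return word

def binary_dot(a_bits, b_bits, n):
    """Dot product of two +/-1 vectors given their packed bit representations."""
    mismatches = bin((a_bits ^ b_bits) & ((1 << n) - 1)).count("1")
    agree = n - mismatches
    return 2 * agree - n  # each match contributes +1, each mismatch -1

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
print(binary_dot(pack_bits(a), pack_bits(b), 4))  # -> 0, same as sum(x*y)
```

A 64-entry dot product thus collapses to one XOR and one popcount on machine words, which is the arithmetic saving that motivates binarizing both representations and parameters.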

2019 Index IEEE Transactions on Knowledge and Data Engineering Vol. 31

2020 IEEE Transactions on Knowledge and Data Engineering  
., +, TKDE Nov. 2019 2079-2092 DeepClue: Visual Interpretation of Text-Based Deep Stock Prediction. Shi, L., +, TKDE June 2019 1094-1108 Differentially Private Mixture of Generative Neural Networks.  ...  Zhang, L., +, TKDE Nov. 2019 2035-2050 Differentially Private Mixture of Generative Neural Networks.  ... 
doi:10.1109/tkde.2019.2953412 fatcat:jkmpnsjcf5a3bhhf4ian66mj5y

Hardware accelerator design for data centers

Serif Yesil, Muhammet Mustafa Ozdal, Taemin Kim, Andrey Ayupov, Steven Burns, Ozcan Ozturk
2015 2015 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)  
In this paper, we summarize existing hardware accelerators for data centers and discuss the techniques to implement and embed them along with the existing SOCs.  ...  To overcome this problem, customized application-specific accelerators are becoming integral parts of modern system on chip (SOC) architectures.  ...  applications can be solved by using neural networks [40] .  ... 
doi:10.1109/iccad.2015.7372648 dblp:conf/iccad/YesilOKABO15 fatcat:swlgxlan55ezhcuzifg2hr2fga

A Survey on Deep Hashing Methods

Xiao Luo, Haixin Wang, Daqing Wu, Chong Chen, Minghua Deng, Jianqiang Huang, Xian-Sheng Hua
2022 ACM Transactions on Knowledge Discovery from Data  
Hashing is one of the most widely used methods for its computational and storage efficiency. With the development of deep learning, deep hashing methods show more advantages than traditional methods.  ...  Moreover, deep unsupervised hashing is categorized into similarity reconstruction-based methods, pseudo-label-based methods and prediction-free self-supervised learning-based methods based on their semantic  ...  In conclusion, this model optimizes the training process by adopting a gradient attention network for acceleration.  ... 
doi:10.1145/3532624 fatcat:7lxtu2qzvvhrpnjngefli2mvca
Showing results 1 — 15 out of 1,555 results