71,750 Hits in 4.6 sec

SNE: Signed Network Embedding [article]

Shuhan Yuan, Xintao Wu, Yang Xiang
2017 arXiv   pre-print
However, these models based on skip-gram cannot be applied to signed networks because they can only deal with one type of link.  ...  We conduct two experiments, node classification and link prediction, on both directed and undirected signed networks and compare with four baselines including a matrix factorization method and three state-of-the-art  ...  The log-bilinear model then trains word embeddings v and position weight vectors c by optimizing an objective function similar to that of the skip-gram.  ... 
arXiv:1703.04837v1 fatcat:rfwrq45aubfbxajqwkuxc3i4bq
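The snippet above mentions a skip-gram-style log-bilinear objective. Below is a minimal sketch of one such update with negative sampling, which also illustrates the limitation the abstract points out: the binary "link / no link" target leaves no room for a link's sign. Dimensions, learning rate, and node indices are invented for illustration, not taken from the SNE paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
v = rng.normal(scale=0.1, size=(5, dim))  # node ("word") embeddings
c = rng.normal(scale=0.1, size=(5, dim))  # context / position weight vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(u, w, label, lr=0.5):
    """One negative-sampling update on the pair (u, w); label is 1 or 0."""
    grad = label - sigmoid(v[u] @ c[w])  # gradient of the log-likelihood
    vu = v[u].copy()
    v[u] += lr * grad * c[w]
    c[w] += lr * grad * vu

before = sigmoid(v[0] @ c[1])
for _ in range(200):
    sgd_step(0, 1, 1)  # observed co-occurrence: push the score up
    sgd_step(0, 2, 0)  # negative sample: push the score down
after = sigmoid(v[0] @ c[1])
```

Note that the only supervision available is whether a pair co-occurs, which is exactly why a signed "enemy" link cannot be distinguished from a "friend" link without modifying the objective.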

Supervised Random Walks: Predicting and Recommending Links in Social Networks [article]

L. Backstrom, J. Leskovec
2010 arXiv   pre-print
We formulate a supervised learning task where the goal is to learn a function that assigns strengths to edges in the network such that a random walker is more likely to visit the nodes to which new links  ...  We develop an efficient training algorithm to directly learn the edge strength estimation function.  ...  Co-authorship networks.  ... 
arXiv:1011.4071v1 fatcat:e3rkkcrhirgxvcixmhscxmxpxe
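The setup described above can be sketched in two parts: edge strengths as a logistic function of edge features, followed by a random walk with restart that scores candidate nodes. The toy graph, feature values, and parameter vector below are invented for illustration; the paper's key contribution, learning the parameters by gradient descent so that future link endpoints rank highly, is omitted here.

```python
import numpy as np

def edge_strengths(features, w):
    """features: (n, n, k) edge-feature tensor -> (n, n) strengths in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(features @ w)))

def walk_scores(A, strengths, seed, alpha=0.15, iters=100):
    """Random walk with restart at `seed` over strength-weighted edges."""
    W = A * strengths                      # zero out non-edges
    row_sums = W.sum(axis=1, keepdims=True)
    P = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)
    p = np.full(A.shape[0], 1.0 / A.shape[0])
    restart = np.zeros(A.shape[0])
    restart[seed] = 1.0
    for _ in range(iters):
        p = alpha * restart + (1 - alpha) * (p @ P)
    return p

# Toy graph: node 0 neighbors 1 and 2, with a stronger feature on edge (0, 2).
A = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
feats = np.zeros((3, 3, 1))
feats[0, 1, 0] = -1.0   # weak edge
feats[0, 2, 0] = 2.0    # strong edge
scores = walk_scores(A, edge_strengths(feats, np.array([1.0])), seed=0)
```

With these made-up features, the walker visits node 2 more often than node 1, which is the ranking signal the supervised training would shape.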

Feature Hashing for Network Representation Learning

Qixiang Wang, Shanfeng Wang, Maoguo Gong, Yue Wu
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
More specifically, we first derive a proximity measurement called expected distance as the target, which combines the position distribution and co-occurrence statistics of nodes over random walks so as to build  ...  There are two main mapping functions in this framework. The first is an encoder to map each node into high-dimensional vectors.  ...  In this function, we define a proximity measurement called expected distance, which combines the pairwise position distribution and the co-occurrence statistics between nodes over random walks.  ... 
doi:10.24963/ijcai.2018/390 dblp:conf/ijcai/WangWG018 fatcat:fvnu43y5nnbsdbhkyj6immusfe

Supervised random walks

Lars Backstrom, Jure Leskovec
2011 Proceedings of the fourth ACM international conference on Web search and data mining - WSDM '11  
We formulate a supervised learning task where the goal is to learn a function that assigns strengths to edges in the network such that a random walker is more likely to visit the nodes to which new links  ...  We develop an efficient training algorithm to directly learn the edge strength estimation function.  ...  Co-authorship networks.  ... 
doi:10.1145/1935826.1935914 dblp:conf/wsdm/BackstromL11 fatcat:5glee4eo5rfbfb6sug5vxifvnm

Learning Multigraph Node Embeddings Using Guided Lévy Flights [chapter]

Aman Roy, Vinayak Kumar, Debdoot Mukherjee, Tanmoy Chakraborty
2020 Lecture Notes in Computer Science  
The transition probabilities are learned in a supervised fashion as a function of node attributes (metadata based and/or network structure based).  ...  for learning multigraph network representation.  ...  In a scientific network, researchers can share a link by virtue of being co-authors on a paper or by citing each other's works.  ... 
doi:10.1007/978-3-030-47426-3_41 fatcat:wv65mhiuyfgiblswvubpsguuqy

Exploring chromatin conformation and gene co-expression through graph embedding

Marco Varrone, Luca Nanni, Giovanni Ciriello, Stefano Ceri
2020 Bioinformatics  
We first define a gene chromatin interaction network where each gene is associated with its physical interaction profile; then, we apply two graph embedding techniques to extract a low-dimensional vector representation of each gene from the interaction network; finally, we train a classifier on gene embedding pairs to predict whether they are co-expressed.  ...  For the training of the random forest classifier, we sampled a number of inter-chromosomal links from the whole-genome co-expression network equal to the total number of intra-chromosomal links.  ... 
doi:10.1093/bioinformatics/btaa803 pmid:33381846 fatcat:3hlmoep64ze3vnauodyk74pkrm
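The final step described above, classifying embedding pairs, can be sketched as follows: build one feature vector per gene pair (here the Hadamard product of the endpoint embeddings, a common choice) and fit a binary classifier. The tiny logistic-regression trainer stands in for the paper's random forest purely to keep the example self-contained; the embeddings and labels are invented.

```python
import numpy as np

def pair_features(emb, pairs):
    """Hadamard product of the two endpoint embeddings for each pair."""
    return np.array([emb[u] * emb[v] for u, v in pairs])

def fit_logreg(X, y, lr=0.1, epochs=500):
    """Plain gradient ascent on the logistic log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w += lr * X.T @ (y - p) / len(y)
    return w

# Toy embeddings: co-expressed genes point in similar directions.
emb = {0: np.array([1.0, 0.0]), 1: np.array([0.9, 0.1]),
       2: np.array([-1.0, 0.2]), 3: np.array([0.95, -0.05])}
X = pair_features(emb, [(0, 1), (0, 3), (0, 2), (1, 2)])
y = np.array([1.0, 1.0, 0.0, 0.0])
w = fit_logreg(X, y)
probs = 1.0 / (1.0 + np.exp(-(X @ w)))
```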

Layer Information Similarity Concerned Network Embedding

Ruili Lu, Pengfei Jiao, Yinghui Wang, Huaming Wu, Xue Chen
2021 Complexity  
First, we introduce a common vector for each node, shared by all layers, and a layer vector for each layer, where the common vectors capture the overall structure of the multiplex network and the layer vectors  ...  To evaluate our proposed model, we conduct node classification and link prediction tasks to verify its effectiveness, and the results show that LISCNE can achieve better or comparable performance  ...  Performance on Link Prediction. For single-layer methods, we train the node embedding for each layer and use it to predict links in the corresponding layer.  ... 
doi:10.1155/2021/2260488 doaj:96847eb8600244d5920fef38d23c1e97 fatcat:uzdeghl3gjg3zepjm7ggk7lo4a

Network representation learning: models, methods and applications

Anuraj Mohan, K. V. Pramod
2019 SN Applied Sciences  
attribute vector of node i.  ...  Definition 5 A signed network is a network G = (V, E), v ∈ V, e ∈ E, where for each edge e_ij = +1 or e_ij = −1, denoting a positive or a negative link between v_i and v_j.  ...  An author citation network links authors when one author cites the other; a paper citation network links papers when one paper cites the other; and a co-authorship network links authors if they co-author at least  ... 
doi:10.1007/s42452-019-1044-9 fatcat:zvlbj4qozzfw3dxoyevb6wgska

Principled Multilayer Network Embedding [article]

Weiyi Liu, Pin-Yu Chen, Sailung Yeung, Toyotaro Suzumura, Lingli Chen
2017 arXiv   pre-print
network into a continuous vector space.  ...  The evaluation shows that, compared with regular link prediction methods, "layer co-analysis" achieved the best performance on most of the datasets, while "network aggregation" and "results  ...  The mapping function f is defined by training on the merged network G using node2vec.  ... 
arXiv:1709.03551v3 fatcat:xa2tptlng5dmdaka3hyxdiy6vy
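The "network aggregation" strategy the snippet refers to can be sketched as merging the layers of a multilayer network into a single weighted graph, which a single-layer method (node2vec in the paper) can then embed. Summing per-layer edge weights is one simple merge rule, not necessarily the paper's exact definition.

```python
def aggregate_layers(layers):
    """layers: list of {(u, v): weight} dicts -> one merged weighted graph."""
    merged = {}
    for layer in layers:
        for edge, weight in layer.items():
            merged[edge] = merged.get(edge, 0.0) + weight
    return merged

layer1 = {("a", "b"): 1.0, ("b", "c"): 1.0}
layer2 = {("a", "b"): 1.0, ("c", "d"): 2.0}
merged = aggregate_layers([layer1, layer2])
```

Edges present in several layers, like ("a", "b") here, end up heavier in the merged graph, so the downstream random walks favor relationships confirmed across layers.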

Exponential Family Graph Embeddings [article]

Abdulkadir Çelikkanat, Fragkiskos D. Malliaros
2019 arXiv   pre-print
Representing networks in a low dimensional latent space is a crucial task with many interesting applications in graph learning problems, such as link prediction and node classification.  ...  We introduce the generic exponential family graph embedding model, that generalizes random walk-based network representation learning techniques to exponential family conditional distributions.  ...  that co-occur within a random walk.  ... 
arXiv:1911.09007v1 fatcat:lgwiuh3zuffznb7rdsnf2cghre

Network Embedding For Link Prediction in Bipartite Networks

2021 European Journal of Science and Technology  
Random Forest models trained with embedding vectors obtained from the BiNE method achieved the highest performances.  ...  Network embedding, which maps each node in the network to a low-dimensional feature vector, is used to solve many problems.  ...  Therefore, the similarity between two nodes u and v is defined as the probability of their co-occurrence on a random walk through the network.  ... 
doi:10.31590/ejosat.937722 fatcat:fgl3ran6lzdfxps3guzuqddu2i
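The similarity notion quoted above can be estimated by sampling: count how often two nodes co-occur within a context window on truncated random walks, as in DeepWalk-style methods. Walk length, walk count, and window size below are illustrative choices, not taken from the paper.

```python
import random
from collections import defaultdict

def random_walk(adj, start, length, rng):
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(adj[walk[-1]]))
    return walk

def cooccurrence_counts(adj, walks_per_node=20, length=10, window=2, seed=0):
    rng = random.Random(seed)
    counts = defaultdict(int)
    for start in adj:
        for _ in range(walks_per_node):
            walk = random_walk(adj, start, length, rng)
            for i, u in enumerate(walk):
                for v in walk[i + 1 : i + 1 + window]:
                    counts[frozenset((u, v))] += 1  # unordered pair
    return counts

# Toy path graph 0-1-2-3: adjacent nodes co-occur often, distant ones rarely.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
counts = cooccurrence_counts(adj)
```

Normalizing these counts by the total number of windows gives the empirical co-occurrence probability used as the similarity.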

Community aware random walk for network embedding

Mohammad Mehdi Keikha, Maseud Rahgozar, Masoud Asadpour
2018 Knowledge-Based Systems  
Social network analysis provides meaningful information about the behavior of network members that can be used for diverse applications such as classification and link prediction.  ...  CARE builds customized paths, which consist of the local and global structure of network nodes, as a basis for network embedding and uses the Skip-gram model to learn representation vectors of nodes.  ...  Wikipedia: As another evaluation, we test CARE on the word co-occurrence network of Wikipedia articles.  ... 
doi:10.1016/j.knosys.2018.02.028 fatcat:bx76vmee75da3knb2t4ahaikvu

Paper2vec: Citation-Context Based Document Distributed Representation for Scholar Recommendation [article]

Han Tian, Hankz Hankui Zhuo
2017 arXiv   pre-print
Despite their success, previous approaches are based on co-occurrence of items. When no co-occurring items are available in documents, they do not work well.  ...  After that we explore a variant of the matrix factorization approach to train distributed representations of papers on the matrix, and leverage the distributed representations to measure similarities of papers  ...  When the cost function is satisfied, according to the weighting scheme, the exponential of the inner product of the paper vector and the context vector represents the random-walk probability of the context  ... 
arXiv:1703.06587v1 fatcat:ay7e6cydgnexff7wixp7qyudfi

SSNE: Effective Node Representation for Link Prediction in Sparse Networks [article]

Min-Ren Chen, Ping Huang, Yu Lin, Shi-Min Cai
2020 arXiv   pre-print
In this paper, we propose a model, Sparse Structural Network Embedding (SSNE), to obtain node representations for link prediction in sparse networks.  ...  Graph embedding is gaining popularity for link prediction in complex networks and achieving excellent performance.  ...  Y_u^{1×n}, the output vector for node u_i; end for; normalize the matrix SPCO^h by row, SNHAM = Normal(SPCO^h).  ... 
arXiv:2011.07788v1 fatcat:d45vkr3farhnvc6quaqd6o3hi4

Healthy Cognitive Aging: A Hybrid Random Vector Functional-Link Model for the Analysis of Alzheimer's Disease

Peng Dai, Femida Gwadry-Sridhar, Michael Bauer, Michael Borrie, Xue Teng
We propose a hybrid pathological analysis model, which integrates manifold learning and the random vector functional-link (RVFL) network to better extract discriminant information  ...  Support Vector Machine (SVM), Random Forest (RF), Decision Tree and Multilayer Perceptron (MLP).  ...  Random vector functional-link (RVFL) network The idea of the functional-link network was suggested by Pao and co-workers in 1988 (Klassen, Pao, and Chen 1988).  ... 
doi:10.1609/aaai.v31i1.11181 fatcat:4u3a3o7pqzd7lkk7kw5x23q4r4
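A basic RVFL regressor, in the spirit of the model named above, can be sketched as follows: input-to-hidden weights are random and fixed, the input is also connected directly to the output (the "functional link"), and only the output weights are fitted, here in closed form by ridge-regularized least squares. The hyperparameters and the toy regression task are illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, y, hidden=50, ridge=1e-3):
    W = rng.normal(size=(X.shape[1], hidden))   # random, never trained
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                      # random hidden features
    D = np.hstack([X, H])                       # direct links + hidden layer
    beta = np.linalg.solve(D.T @ D + ridge * np.eye(D.shape[1]), D.T @ y)
    return W, b, beta

def rvfl_predict(model, X):
    W, b, beta = model
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
pred = rvfl_predict(rvfl_fit(X, y), X)
```

Because only the output weights are solved, training reduces to a single linear system, which is what makes RVFL attractive for hybrid pipelines like the one proposed.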
Showing results 1 — 15 out of 71,750 results