
Personalised Search Time Prediction using Markov Chains

Vu Tran, David Maxwell, Norbert Fuhr, Leif Azzopardi
2017 Proceedings of the ACM SIGIR International Conference on Theory of Information Retrieval - ICTIR '17  
For personalising the predictions based upon a few user events observed, we devise appropriate parameter estimation methods.  ...  As a prerequisite, in any search situation, the system must be able to estimate the time the user will need for finding the next relevant document.  ...  Considering the interaction log data we acquired, we propose five different models based upon discrete-time, discrete-state Markov chains with costs as times spent in each state (refer to Figure 1).  ...
doi:10.1145/3121050.3121085 dblp:conf/ictir/TranMFA17 fatcat:ciwstegqwfgynke3lfpded4o54
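
A minimal sketch of the kind of model the snippet describes: a discrete-time, discrete-state Markov chain whose states carry time costs, used to estimate the expected time until an absorbing "relevant document found" state is reached. The states, transition probabilities, and costs below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical states of a search session (not from the paper):
# 0 = issue query, 1 = scan results, 2 = read document, 3 = relevant found (absorbing)
states = ["query", "scan", "read", "found"]

# Transition probabilities between states (rows sum to 1).
P = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.3, 0.0, 0.6, 0.1],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 1.0],
])

# Average time (seconds) spent in each state -- the "cost" attached to a state.
cost = np.array([5.0, 10.0, 30.0, 0.0])

# Expected time to absorption from each transient state:
# t = c + Q t  =>  (I - Q) t = c, where Q is P restricted to transient states.
Q = P[:3, :3]
t = np.linalg.solve(np.eye(3) - Q, cost[:3])
for name, seconds in zip(states[:3], t):
    print(f"expected time from '{name}': {seconds:.1f}s")
```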

Supervised rank aggregation

Yu-Ting Liu, Tie-Yan Liu, Tao Qin, Zhi-Ming Ma, Hang Li
2007 Proceedings of the 16th international conference on World Wide Web - WWW '07  
As a case study, we focus on Markov Chain based rank aggregation in this paper. The optimization for Markov Chain based methods is not a convex optimization problem, however, and is thus hard to solve.  ...  This paper is concerned with rank aggregation, the task of combining the ranking results of individual rankers at meta-search.  ...  and Markov Chain based rank aggregation [7].  ...
doi:10.1145/1242572.1242638 dblp:conf/www/LiuLQML07 fatcat:axe6pj5c2neijo3e4guctojjre
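
For context, a minimal unsupervised sketch of Markov-chain rank aggregation in the spirit of what the snippet refers to (an MC4-style chain, not the supervised method proposed in the paper): a transition matrix is built from pairwise majority preferences among the input rankings, and the stationary distribution gives the aggregate order. All inputs are made up.

```python
import numpy as np

def mc4_aggregate(rankings, damping=0.85):
    """Aggregate rankings with a simple MC4-style Markov chain (illustrative only).

    rankings: list of lists, each a permutation of the same item ids (best first).
    Returns items ordered by the stationary probability of the chain.
    """
    items = sorted(set(rankings[0]))
    n = len(items)
    idx = {it: i for i, it in enumerate(items)}
    pos = [{it: r.index(it) for it in r} for r in rankings]

    # From item i, move to item j if a majority of rankers prefer j to i.
    P = np.zeros((n, n))
    for i in items:
        better = [j for j in items if j != i
                  and sum(p[j] < p[i] for p in pos) > len(rankings) / 2]
        for j in better:
            P[idx[i], idx[j]] = 1.0 / n
        P[idx[i], idx[i]] = 1.0 - P[idx[i]].sum()

    # Damping (as in PageRank) keeps the chain ergodic, then power-iterate.
    P = damping * P + (1 - damping) / n
    pi = np.full(n, 1.0 / n)
    for _ in range(100):
        pi = pi @ P
    return [it for _, it in sorted(zip(-pi, items))]

print(mc4_aggregate([["a", "b", "c", "d"],
                     ["a", "c", "b", "d"],
                     ["b", "a", "c", "d"]]))
```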

Enhanced Model of Web Page Prediction using Page Rank and Markov Model

Soumen Swarnakar, A. Thakur, D. Misra, D. Paul, M. Pakira, S. Roy
2016 International Journal of Computer Applications  
[3] presented a new document clustering method based on correlation implementing indexing.  ...  Later, his research work came to be known as Markov processes and Markov chains.  ...
doi:10.5120/ijca2016909410 fatcat:tl6ex6jjbfdpxivj224dgomqty

Google PageRank Algorithm: Markov Chain Model and Hidden Markov Model

Prerna Rai, Arvind Lal
2016 International Journal of Computer Applications  
The Markov model is based on the probability that the user will select a page, and ranks for the pages are determined based on the number of incoming and outgoing links.  ...  Keywords: Markov chain model, PageRanking, finite state machine.  ...  on the rank of page based on their score.  ...
doi:10.5120/ijca2016908942 fatcat:pzdiqg7vwnd3zmxbyqkhogklbq
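
The description in the snippet matches the usual random-surfer reading of PageRank: a Markov chain over pages whose stationary probabilities are the page ranks. A minimal power-iteration sketch on a made-up link graph:

```python
import numpy as np

# Hypothetical link graph: A[i][j] = 1 if page i links to page j.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

n = len(A)
d = 0.85  # damping factor: probability of following a link vs. jumping at random

# Row-stochastic transition matrix of the random surfer.
out_degree = A.sum(axis=1, keepdims=True)
P = A / out_degree

# Power iteration until the rank vector stops changing.
rank = np.full(n, 1.0 / n)
for _ in range(100):
    new_rank = (1 - d) / n + d * rank @ P
    if np.allclose(new_rank, rank, atol=1e-10):
        break
    rank = new_rank

print("PageRank scores:", np.round(rank, 3))
```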

PostRank

Mohsen Sayyadiharikandeh, Mohammad Ghodsi, Mohammad Naghibi
2012 Proceedings of the 2nd International Conference on Web Intelligence, Mining and Semantics - WIMS '12  
It ranks the posts using a Markov chain model, like PageRank in Google. We used the ranking model under the assumption that top-ranked nodes contain the blog's best representative words.  ...  When new instances of posts arrive, we update the blog graph by setting the initial scores of old nodes in the Markov chain to their final scores from the last run and continue the PostRank iterations  ...  The PostRank algorithm is based on a regular Markov chain, whose iterations will eventually converge according to the Markov chain fundamental limit theorem.  ...
doi:10.1145/2254129.2254152 dblp:conf/wims/SayyadiharikandehGN12 fatcat:hquu7sjnavdy5bb3jdvb4aicdi
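
A hedged sketch of the incremental idea the snippet describes: when new posts arrive, the PageRank-style iteration is restarted with old nodes initialised to their scores from the previous run, so the chain needs fewer iterations to reconverge. The function names, the new-node initialisation, and the toy graph are illustrative choices, not taken from the paper.

```python
import numpy as np

def power_iterate(P, start, d=0.85, tol=1e-10, max_iter=1000):
    """PageRank-style iteration on a row-stochastic matrix P from a given start vector."""
    n = len(P)
    rank = np.array(start, dtype=float)
    rank /= rank.sum()
    for i in range(max_iter):
        new_rank = (1 - d) / n + d * rank @ P
        if np.allclose(new_rank, rank, atol=tol):
            return new_rank, i + 1
        rank = new_rank
    return rank, max_iter

def warm_start(old_scores, n_new):
    """Initial vector for the grown graph: old nodes keep their final scores,
    new nodes get the average old score (an arbitrary illustrative choice)."""
    filler = float(np.mean(old_scores))
    return np.concatenate([old_scores, np.full(n_new, filler)])

# Scores of a 3-node blog graph from a previous run (illustrative numbers),
# then one new post arrives and the graph grows to 4 nodes.
old = np.array([0.5, 0.3, 0.2])
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
scores, iters = power_iterate(P, warm_start(old, n_new=1))
print(scores, "converged in", iters, "iterations")
```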

A Hidden Markov Model for the TREC Novelty Task

John M. Conroy
2004 Text Retrieval Conference  
In the HMM developed for this evaluation, we used a joint distribution for the feature set, which varied based upon the position in the document.  ...  All of the features used by the HMM were based upon the terms (as defined in Section 2.1) found in a sentence.  ...
dblp:conf/trec/Conroy04 fatcat:clyvccf4rfezhg3qktcp44kolm
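
A generic sketch of HMM decoding, to make the setting concrete; it is not the paper's model (which uses position-dependent joint feature distributions), just a standard Viterbi pass over made-up two-state log-probabilities.

```python
import numpy as np

def viterbi(log_emit, log_trans, log_start):
    """Most likely state sequence for a generic discrete-time HMM.

    log_emit:  (T, S) log-probability of each observation under each state
    log_trans: (S, S) log transition probabilities
    log_start: (S,)   log initial state probabilities
    """
    T, S = log_emit.shape
    score = log_start + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans          # (S, S): prev state -> next state
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    states = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        states.append(int(back[t, states[-1]]))
    return states[::-1]

# Toy example with two hidden states (e.g. relevant / non-relevant sentence)
# and precomputed per-sentence log-likelihoods -- all numbers are made up.
log_emit = np.log(np.array([[0.7, 0.3], [0.4, 0.6], [0.2, 0.8]]))
log_trans = np.log(np.array([[0.8, 0.2], [0.3, 0.7]]))
log_start = np.log(np.array([0.6, 0.4]))
print(viterbi(log_emit, log_trans, log_start))  # [0, 1, 1]
```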

From TREC to DUC to TREC Again

John M. Conroy, Daniel M. Dunlavy, Dianne P. O'Leary
2003 Text Retrieval Conference  
In the HMM developed for this evaluation, we used a joint distribution for the feature set, which varied based upon the position in the document.  ...  All of the features used by the HMM were based upon the terms (as defined in Section 2.1) found in a sentence.  ...
dblp:conf/trec/ConroyDO03 fatcat:tnkysqcc4fhkzhvvjqs6v24fcm

Building user profiles from topic models for personalised search

Morgan Harvey, Fabio Crestani, Mark J. Carman
2013 Proceedings of the 22nd ACM international conference on Conference on information & knowledge management - CIKM '13  
In this work we use query logs to build personalised ranking models in which user profiles are constructed based on the representation of clicked documents over a topic space.  ...  Our experiments show that by subtly introducing user profiles as part of the ranking algorithm, rather than by reranking an existing list, we can provide personalised ranked lists of documents which improve  ...  Gibbs sampling is a Markov chain Monte Carlo method where a Markov chain is constructed that slowly converges to the target distribution of interest over a number of iterations.  ... 
doi:10.1145/2505515.2505642 dblp:conf/cikm/HarveyCC13 fatcat:t43z7b6yqva3rnhyznrusm47fm
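
The Gibbs-sampling description in the snippet can be made concrete with a toy collapsed Gibbs sampler for LDA-style topic assignments; the corpus, hyperparameters, and dimensions below are invented, and none of the paper's profile-building machinery is reproduced.

```python
import numpy as np

def lda_gibbs(docs, n_topics, n_words, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for a toy LDA model.

    docs: list of lists of word ids. Returns (doc-topic counts, topic-word counts).
    This is an illustrative sketch, not the implementation used in the paper.
    """
    rng = np.random.default_rng(seed)
    n_docs = len(docs)
    ndk = np.zeros((n_docs, n_topics))          # doc-topic counts
    nkw = np.zeros((n_topics, n_words))         # topic-word counts
    nk = np.zeros(n_topics)                     # words per topic
    z = []                                      # topic assignment of every token

    # Random initialisation of topic assignments.
    for d, doc in enumerate(docs):
        zd = rng.integers(n_topics, size=len(doc))
        z.append(zd)
        for w, k in zip(doc, zd):
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

    # Each sweep resamples every token's topic from its full conditional;
    # the resulting Markov chain converges to the posterior over assignments.
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_words * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw

# Tiny corpus over a 4-word vocabulary: two documents using "topic 0" words,
# two using "topic 1" words (made-up data).
docs = [[0, 1, 0, 1], [1, 0, 0], [2, 3, 2], [3, 2, 3, 3]]
ndk, nkw = lda_gibbs(docs, n_topics=2, n_words=4)
print("document-topic counts:\n", ndk)
```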

Vertical Data Migration in Large Near-Line Document Archives Based on Markov-Chain Predictions

Achim Kraiss, Gerhard Weikum
1997 Very Large Data Bases Conference  
The integrated migration policy is based on a continuous-time Markov-chain (CTMC) model for predicting the expected number of accesses to a document within a specified time horizon.  ...  Large multimedia document archives hold most of their data in near-line tertiary storage libraries for cost reasons.  ...  The continuous-time Markov chain model upon which our method is based may be used not only for vertical data migration, but also for data placement (e.g., clustering) on both secondary and tertiary storage  ...
dblp:conf/vldb/KraissW97 fatcat:ai5cl2faw5fmjbjqqqna4dz6qi
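
A simulation sketch of the quantity the snippet says the CTMC model predicts: the expected number of accesses to a given document within a time horizon. The paper derives this analytically; here it is simply estimated by simulating a made-up generator matrix.

```python
import numpy as np

def expected_accesses(Q, start, target, horizon, n_runs=20000, seed=0):
    """Monte Carlo estimate of the expected number of entries into `target`
    within `horizon` time units for a CTMC with generator matrix Q.

    Illustrative only: the states, rates, and horizon below are made up.
    """
    rng = np.random.default_rng(seed)
    n = len(Q)
    total = 0
    for _ in range(n_runs):
        state, t = start, 0.0
        while True:
            rate = -Q[state, state]
            t += rng.exponential(1.0 / rate)      # exponential holding time
            if t > horizon:
                break
            # Jump probabilities are proportional to the off-diagonal rates.
            probs = Q[state].copy()
            probs[state] = 0.0
            state = rng.choice(n, p=probs / probs.sum())
            if state == target:
                total += 1
    return total / n_runs

# Hypothetical 3-document access model (rates per hour; rows sum to zero).
Q = np.array([[-1.0, 0.6, 0.4],
              [ 0.2, -0.5, 0.3],
              [ 0.1, 0.1, -0.2]])
print("expected accesses to doc 2 in 8h:",
      round(expected_accesses(Q, start=0, target=2, horizon=8.0), 2))
```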

Query-Log Based Authority Analysis for Web Information Search [chapter]

Julia Luxenburger, Gerhard Weikum
2004 Lecture Notes in Computer Science  
Thus the QRank model is based on a time-discrete, finite-state, homogeneous Markov chain.  ...  We know from the theory of Markov chains that the limiting probabilities exist if the considered Markov chain is ergodic.  ...  Table B.9: Top-10 result rankings for the query "East Germany 1989"  ...
doi:10.1007/978-3-540-30480-7_11 fatcat:you5loysgrcfzbu3tookpsyuw4
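
The ergodicity argument in the snippet can be illustrated directly: for an irreducible, aperiodic finite chain the limiting probabilities exist and equal the stationary distribution, obtainable as the left eigenvector of the transition matrix for eigenvalue 1. The matrix below is made up, not the QRank chain.

```python
import numpy as np

# A small row-stochastic transition matrix (made up; irreducible and aperiodic).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])

# Limiting probabilities = left eigenvector of P for eigenvalue 1, normalised.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()

# The same vector is reached by iterating the chain from any start distribution.
mu = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    mu = mu @ P

print(np.round(pi, 4), np.round(mu, 4))  # the two agree
```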

Pythia: AI-assisted Code Completion System

Alexey Svyatkovskiy, Ying Zhao, Shengyu Fu, Neel Sundaresan
2019 Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining - KDD '19  
We describe the architecture of the system, perform comparisons to a frequency-based approach and an invocation-based Markov Chain language model, and discuss challenges serving Pythia models on lightweight  ...  It generates ranked lists of method and API recommendations which can be used by software developers at edit time.  ...  Invocation-based Markov Chain model: Markov Chain models have demonstrated great strength at modeling stochastic transitions, from uncovering sequential patterns [10, 11] to modeling decision processes  ...
doi:10.1145/3292500.3330699 dblp:conf/kdd/SvyatkovskiyZFS19 fatcat:wtjj6z7ed5gbpetyrvqj4rv2zm
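
A hedged sketch of the kind of invocation-based Markov Chain baseline the abstract mentions (not Pythia itself, and not the paper's implementation): a first-order chain over method-call sequences that ranks candidate next calls by transition frequency. The call names and training sequences are made up.

```python
from collections import defaultdict

class InvocationMarkovModel:
    """First-order Markov chain over method-invocation sequences (illustrative)."""

    def __init__(self):
        # counts[prev_call][next_call] = number of observed transitions
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sequences):
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):
                self.counts[prev][nxt] += 1

    def recommend(self, prev_call, top_k=3):
        """Rank candidate next calls by their transition probability from prev_call."""
        nexts = self.counts.get(prev_call, {})
        total = sum(nexts.values()) or 1
        ranked = sorted(nexts.items(), key=lambda kv: kv[1], reverse=True)
        return [(call, cnt / total) for call, cnt in ranked[:top_k]]

# Hypothetical training sequences of API calls (method names are made up).
model = InvocationMarkovModel()
model.train([
    ["open", "read", "close"],
    ["open", "read", "read", "close"],
    ["open", "write", "close"],
])
print(model.recommend("open"))   # "read" is the most likely next call
print(model.recommend("read"))
```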

Exploiting Popularity and Similarity for Link Recommendation in Twitter Networks

Jun Zou, Faramarz Fekri
2014 ACM Conference on Recommender Systems  
The first approach employs the rank aggregation technique to combine rankings generated by popularity-based and similarity-based recommendation algorithms.  ...  Acknowledgments: This material is based upon work supported by the National Science Foundation under Grant No. IIS-1115199.  ...  There are various rank aggregation algorithms such as Borda's method, median rank aggregation, and Markov chain methods. We choose the Markov chain method for its superior performance.  ...
dblp:conf/recsys/ZouF14 fatcat:4o7hqnauivcttjag7pyzs656dq
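
Of the aggregation methods the snippet names, Borda's method is the simplest to sketch (a Markov chain variant would follow the same pattern as the MC4-style sketch shown earlier in this list). The rankings below are invented.

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Borda's method: each item scores (n - position) points per ranking;
    the aggregate list is sorted by total score. Illustrative sketch only."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - position
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical popularity-based and similarity-based rankings of candidate users.
popularity = ["u3", "u1", "u2", "u4"]
similarity = ["u1", "u2", "u3", "u4"]
print(borda_aggregate([popularity, similarity]))  # ['u1', 'u3', 'u2', 'u4']
```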

Saccade selection when reward probability is dynamically manipulated using Markov chains

Samuel U. Nummela, Lee P. Lovejoy, Richard J. Krauzlis
2008 Experimental Brain Research  
Markov chains (stochastic processes where probabilities are assigned based on the previous outcome) are commonly used to examine the transitions between behavioral states, such as those that occur during  ...  However, relatively little is known about how well primates can incorporate knowledge about Markov chains into their behavior.  ...  Introduction: Markov chains are stochastic processes in which probabilities are assigned based on the previous outcome.  ...
doi:10.1007/s00221-008-1306-z pmid:18330552 pmcid:PMC3400467 fatcat:cts2b4co5vdijei3xoatly2hta
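
A toy rendering of the definition in the snippet: a two-state Markov chain in which the probability of reward on the next trial depends only on the outcome of the previous trial. The transition probabilities are made up, not those used in the experiments.

```python
import random

# Probability of the next trial's outcome given the previous one (made up).
P = {
    "rewarded":   {"rewarded": 0.7, "unrewarded": 0.3},
    "unrewarded": {"rewarded": 0.2, "unrewarded": 0.8},
}

def simulate(n_trials, start="unrewarded", seed=1):
    random.seed(seed)
    state, history = start, []
    for _ in range(n_trials):
        state = "rewarded" if random.random() < P[state]["rewarded"] else "unrewarded"
        history.append(state)
    return history

trials = simulate(10000)
print("fraction rewarded:", trials.count("rewarded") / len(trials))  # ~0.4, the stationary value
```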

Prediction and Semantic Association

Thomas L. Griffiths, Mark Steyvers
2002 Neural Information Processing Systems  
Here, we present a novel approach to inference in this model, using Markov chain Monte Carlo with a symmetric Dirichlet(α) prior on θ(di) for all documents and a symmetric Dirichlet(β) prior on φ(j)  ...  Markov chain Monte Carlo is a procedure for obtaining samples from complicated probability distributions, allowing a Markov chain to converge to the target distribution and then drawing samples from the states  ...
dblp:conf/nips/GriffithsS02 fatcat:hoiv567krfhlbo666tubhnfozu

21st Century Search and Recommendation: Exploiting Personalisation and Social Media [chapter]

Morgan Harvey, Fabio Crestani
2014 Lecture Notes in Computer Science  
We describe a novel approach which uses query logs to build personalised ranking models in which user profiles are constructed based on the representation of clicked documents over a topic space.  ...  Our experiments show that this model can provide personalised ranked lists of documents which improve significantly over a non-personalised baseline.  ...  Gibbs sampling is a Markov chain Monte Carlo method where a Markov chain is constructed that slowly converges to the target distribution of interest over a number of iterations.  ... 
doi:10.1007/978-3-319-12511-4_5 fatcat:glhlh6czive2hfz67acdhof3ua
Showing results 1–15 of 10,964.