Bayesian Opponent Modeling in a Simple Poker Environment
2007
2007 IEEE Symposium on Computational Intelligence and Games
In this paper, we use a simple poker game to investigate Bayesian opponent modeling. ...
The opponent-modeling player plays well against non-reactive player styles, and also performs well compared with a player that knows each opponent's exact style in advance. ...
in a four-player environment. ...
doi:10.1109/cig.2007.368088
dblp:conf/cig/BakerC07
fatcat:z6qg6f3e6zd7bd5nylczwqloha
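The Bayesian opponent-modeling idea summarised in the entry above can be sketched very compactly: keep a posterior over a few candidate opponent styles and update it after each observed action. The sketch below is a minimal Python illustration; the style names, action frequencies and uniform prior are assumptions for the example, not values taken from the paper.

# Minimal Bayesian opponent-style inference (illustrative sketch).
# Styles and their assumed action frequencies are hypothetical.
STYLES = {
    "tight":      {"fold": 0.60, "call": 0.30, "raise": 0.10},
    "loose":      {"fold": 0.20, "call": 0.60, "raise": 0.20},
    "aggressive": {"fold": 0.15, "call": 0.25, "raise": 0.60},
}

def update_posterior(prior, action):
    """One Bayes step: P(style | action) proportional to P(action | style) * P(style)."""
    unnorm = {s: prior[s] * STYLES[s][action] for s in prior}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

# Start from a uniform prior and update on a short observed action sequence.
posterior = {s: 1.0 / len(STYLES) for s in STYLES}
for observed in ["raise", "raise", "call", "raise"]:
    posterior = update_posterior(posterior, observed)

print(posterior)   # posterior mass shifts towards the "aggressive" style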
Bayesian Poker
[article]
2013
arXiv
pre-print
We describe our Bayesian Poker Program (BPP), which uses a Bayesian network to model the program's poker hand, the opponent's hand and the opponent's playing behaviour conditioned upon the hand, and betting ...
The history of play with opponents is used to improve BPP's understanding of their behaviour. We compare BPP experimentally with: a simple rule-based system; a program which depends exclusively on hand ...
govern play. 3 A BAYESIAN NETWORK FOR POKER. 3.1 NETWORK STRUCTURE. BPP uses a simple Bayesian network structure (Figure 1) for modeling the relationships between current hand type ...
arXiv:1301.6711v1
fatcat:jxvyx3jk3fbsdne7y72qsjl7cm
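As a rough illustration of the kind of structure the BPP abstract describes, the toy Bayesian network below relates a hidden opponent hand strength to an observable betting action and inverts it by enumeration. The node states and conditional probability tables are invented for the example; BPP's actual network (Figure 1 of the paper) is larger and also models the program's own hand and betting behaviour.

# Toy two-node Bayesian network: hidden hand strength -> observed action.
# States and probabilities are invented for illustration only.
P_HAND = {"weak": 0.5, "medium": 0.3, "strong": 0.2}          # prior
P_ACTION_GIVEN_HAND = {                                        # CPT
    "weak":   {"fold": 0.70, "call": 0.25, "raise": 0.05},
    "medium": {"fold": 0.20, "call": 0.60, "raise": 0.20},
    "strong": {"fold": 0.05, "call": 0.35, "raise": 0.60},
}

def infer_hand(action):
    """P(hand | action) by direct enumeration over the two-node network."""
    joint = {h: P_HAND[h] * P_ACTION_GIVEN_HAND[h][action] for h in P_HAND}
    z = sum(joint.values())
    return {h: p / z for h, p in joint.items()}

print(infer_hand("raise"))   # posterior belief about the opponent's hand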
Can opponent models aid poker player evolution?
2008
2008 IEEE Symposium On Computational Intelligence and Games
We investigate the impact of Bayesian opponent modeling upon the evolution of a player for a simplified poker game. ...
We test the effectiveness of this model against various collections of dynamic and partially randomized opponents and find that using a Bayesian opponent model enhances our AI players even when dealing ...
Opponent modeling has been seen as having a greater impact on success in poker than in most other games; indeed, poker is an important testbed for opponent modeling research. ...
doi:10.1109/cig.2008.5035617
dblp:conf/cig/BakerCRJ08
fatcat:nxexkyavpzgi7dnz2levheiktu
Estimating Winning Probability for Texas Hold'em Poker
2013
International Journal of Machine Learning and Computing
Index Terms: Opponent modeling, support vector machine, Texas Hold'em poker, winning probability. ...
Among all the technologies in creating a good poker agent, estimating winning probability is a key issue. ...
ACKNOWLEDGMENT We would like to thank Ruobing Li and Yujing Hu for their help with the experiments, and Bing Xue for her comments and help in improving the paper. ...
doi:10.7763/ijmlc.2013.v3.275
fatcat:46uiqlwjjjckxbj4nlycjfjd2q
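The entry above treats estimating winning probability as the key quantity and learns it with a support vector machine over hand features. As a much simpler baseline, the hedged sketch below estimates winning probability by Monte Carlo simulation in a toy one-card game (higher card wins); a full Texas Hold'em hand evaluator is deliberately out of scope here.

import random

# Monte Carlo estimate of winning probability in a toy one-card game:
# each player is dealt a single card 2..14 and the higher card wins.
# This is only a stand-in for the hand-strength estimates that a learned
# model (such as the SVM in the entry above) would be trained to produce.
DECK = list(range(2, 15)) * 4

def win_probability(my_card, trials=100_000):
    wins = ties = 0
    for _ in range(trials):
        remaining = DECK.copy()
        remaining.remove(my_card)
        opp = random.choice(remaining)
        if my_card > opp:
            wins += 1
        elif my_card == opp:
            ties += 1
    return (wins + 0.5 * ties) / trials

print(round(win_probability(13), 3))   # a king wins most showdowns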
CASPER: A Case-Based Poker-Bot
[chapter]
2008
Lecture Notes in Computer Science
This paper investigates the use of the case-based reasoning methodology applied to the game of Texas hold'em poker. The development of a CASe-based Poker playER (CASPER) is described. ...
CASPER uses knowledge of previous poker scenarios to inform its betting decisions. ...
Introduction The game of poker provides an interesting environment to investigate how to handle uncertain knowledge and issues of chance and deception in hostile environments. ...
doi:10.1007/978-3-540-89378-3_60
fatcat:wugqe54i5vhi5c26yqaazyrzxu
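The case-based reasoning loop behind CASPER, as the abstract describes it, amounts to retrieving the most similar previously seen betting scenario and reusing its decision. The sketch below shows a minimal 1-nearest-neighbour version; the feature vector (hand strength, pot odds, players left) and the stored cases are hypothetical, not CASPER's actual case representation.

import math

# Sketch of case-based betting: retrieve the nearest stored scenario
# and reuse its action. Cases and features are invented for illustration.
CASE_BASE = [
    # (hand_strength, pot_odds, players_left) -> action
    ((0.85, 0.30, 2), "raise"),
    ((0.40, 0.20, 4), "call"),
    ((0.15, 0.35, 3), "fold"),
]

def decide(situation):
    """Return the action of the nearest stored case (1-NN retrieval)."""
    best_case = min(CASE_BASE, key=lambda case: math.dist(case[0], situation))
    return best_case[1]

print(decide((0.80, 0.25, 2)))   # -> "raise", reusing the closest scenario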
Computer poker: A review
2011
Artificial Intelligence
Approaches to constructing exploitive agents are reviewed and the challenging problems of creating accurate and dynamic opponent models are addressed. ...
The poker domain has often featured in previous review papers that focus on games in general; however, a comprehensive review paper with a specific focus on computer poker has so far been lacking in the ...
Finally, we wish to thank the University of Alberta Computer Poker Research Group and those involved with organising the Annual Computer Poker Competitions. ...
doi:10.1016/j.artint.2010.12.005
fatcat:scqlvti56jdv5o7tn4lyoq675m
Approximating n-player behavioural strategy nash equilibria using coevolution
2011
Proceedings of the 13th annual conference on Genetic and evolutionary computation - GECCO '11
In order to support our case we provide a set of experiments in both games of known and unknown equilibria. ...
In this paper we propose a coevolutionary algorithm that approximates behavioural strategy Nash equilibria in n-player zero sum games, by exploiting the minimax solution concept. ...
Acknowledgements This research is partly funded by a postgraduate studentship from the Engineering and Physical Sciences Research Council. ...
doi:10.1145/2001576.2001726
dblp:conf/gecco/SamothrakisL11
fatcat:hcw7pblre5esjaaayzrjcwv7ta
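To make the coevolutionary idea in the entry above concrete, the sketch below runs two populations of mixed strategies against each other on rock-paper-scissors, a two-player zero-sum matrix game with a known equilibrium; one population maximises the row payoff while the other minimises it, mirroring the minimax solution concept. The population sizes, mutation step and selection scheme are arbitrary choices for illustration, and the paper's algorithm targets n-player behavioural strategies rather than this matrix-game toy.

import random

# Coevolutionary sketch on rock-paper-scissors (zero-sum matrix game).
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]   # row player's payoff

def random_strategy():
    w = [random.random() for _ in range(3)]
    s = sum(w)
    return [x / s for x in w]

def payoff(p, q):
    return sum(p[i] * q[j] * PAYOFF[i][j] for i in range(3) for j in range(3))

def mutate(p, step=0.05):
    w = [max(1e-6, x + random.gauss(0, step)) for x in p]
    s = sum(w)
    return [x / s for x in w]

pop_a = [random_strategy() for _ in range(20)]
pop_b = [random_strategy() for _ in range(20)]
for _ in range(200):
    # fitness of A maximises the row payoff, fitness of B minimises it
    pop_a.sort(key=lambda p: -sum(payoff(p, q) for q in pop_b))
    pop_b.sort(key=lambda q: sum(payoff(p, q) for p in pop_a))
    pop_a = pop_a[:10] + [mutate(p) for p in pop_a[:10]]
    pop_b = pop_b[:10] + [mutate(q) for q in pop_b[:10]]

# best row strategy found; the exact Nash equilibrium here is (1/3, 1/3, 1/3)
print([round(x, 2) for x in pop_a[0]])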
Improving a case-based texas hold'em poker bot
2008
2008 IEEE Symposium On Computational Intelligence and Games
This paper describes recent research that aims to improve upon our use of case-based reasoning in a Texas hold'em poker bot called CASPER. ...
CASPER uses knowledge of previous poker scenarios to inform its betting decisions. ...
INTRODUCTION The game of poker provides an interesting environment to investigate how to handle uncertain knowledge and issues of chance and deception in hostile environments. ...
doi:10.1109/cig.2008.5035661
dblp:conf/cig/WatsonLRW08
fatcat:6jkxym2dzfgxhpkwqfurbsez2e
Skill in Games
1997
Management science
The analytic core of the paper is a detailed analysis of a game of skill, Sum Poker. ...
This paper uses a simplified version of stud poker to better understand the concept of differential player skill in games. ...
A Bayesian scheme would require a model of the opponent to generate likelihoods. The complexity of the strategies imaginable in this game makes this task quite difficult. ...
doi:10.1287/mnsc.43.5.596
fatcat:o3ev64unw5h4bmjczbilnxbbxi
A View on Deep Reinforcement Learning in Imperfect Information Games
2020
Studia Universitatis Babes-Bolyai: Series Informatica
In this paper, I want to explore the power of reinforcement learning in such an environment; that is why I look at one of the most popular games of this type, no-limit Texas Hold'em Poker, yet unsolved ...
When applied to no-limit Hold'em Poker, deep reinforcement learning agents clearly outperform agents with a more traditional approach. ...
Games: Texas Hold'em Poker [16] ) which contains many more details about the implementation and methods used for each Poker agent. ...
doi:10.24193/subbi.2020.2.03
fatcat:5umr7wvphba23ltwqibeotyqau
Opponent Modeling by Expectation–Maximization and Sequence Prediction in Simplified Poker
2017
IEEE Transactions on Computational Intelligence and AI in Games
a sequence prediction method, and finally, to simulate games between our agent and our opponent model in-between games against the opponent. ...
Experiments in simplified poker games show that it increases the average payoff per game of a state-of-the-art no-regret learning algorithm. ...
For example, Korb et al. propose a Bayesian Poker Program for two-player five-card stud poker, which learns through experience using a Bayesian network to model each player's hand, opponent behaviour conditioned ...
doi:10.1109/tciaig.2015.2491611
fatcat:gw5p3l3wqzh7jae3pdquo5v2ny
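The sequence-prediction component mentioned in the entry above can be illustrated with a simple frequency model over recent opponent actions. The order-2 n-gram predictor below is only a stand-in assumption, not the predictor the paper actually uses.

from collections import Counter, defaultdict

# Predict an opponent's next action from counts of what followed each
# recent pair of actions (a simple order-2 n-gram model).
def train(action_history, order=2):
    model = defaultdict(Counter)
    for i in range(order, len(action_history)):
        context = tuple(action_history[i - order:i])
        model[context][action_history[i]] += 1
    return model

def predict(model, recent, order=2):
    counts = model.get(tuple(recent[-order:]))
    return counts.most_common(1)[0][0] if counts else None

history = ["call", "raise", "raise", "call", "raise", "raise", "call"]
model = train(history)
print(predict(model, ["call", "raise"]))   # -> "raise"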
The challenge of poker
2002
Artificial Intelligence
Opponent modeling is another difficult problem in decision-making applications, and it is essential to achieving high performance in poker. ...
In addition to methods for hand evaluation and betting strategy, Poki uses learning techniques to construct statistical models of each opponent, and dynamically adapts to exploit observed patterns and ...
Statistics-based opponent modeling: In poker, opponent modeling is used in at least two different ways. ...
doi:10.1016/s0004-3702(01)00130-8
fatcat:5cxf5itov5awppfw6iveh3kp44
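A minimal version of the statistics-based opponent modeling described for Poki is per-opponent action frequency counting, keyed by betting round. The class below sketches only that bookkeeping; the context key and the example data are assumptions, and Poki's actual model conditions on richer context and feeds the statistics back into its hand evaluation.

from collections import Counter, defaultdict

# Accumulate per-opponent action frequencies per betting round and read
# them back as estimated action probabilities.
class OpponentStats:
    def __init__(self):
        self.counts = defaultdict(Counter)   # (opponent, round) -> action counts

    def observe(self, opponent, betting_round, action):
        self.counts[(opponent, betting_round)][action] += 1

    def action_probs(self, opponent, betting_round):
        c = self.counts[(opponent, betting_round)]
        total = sum(c.values())
        return {a: n / total for a, n in c.items()} if total else {}

stats = OpponentStats()
for action in ["raise", "raise", "call", "raise"]:
    stats.observe("player_3", "preflop", action)
print(stats.action_probs("player_3", "preflop"))   # {'raise': 0.75, 'call': 0.25}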
A Reinforcement Learning Algorithm Applied to Simplified Two-Player Texas Hold'em Poker
[chapter]
2001
Lecture Notes in Computer Science
In a perfect information game, this has little point, as the opponent knows the game state at all times. ...
We give a reinforcement learning algorithm for two-player poker based on gradient search in the agents' parameter spaces. ...
It estimates the utilities of different actions by approximate Bayesian analysis based on simulations with the current state of the opponent models. ...
doi:10.1007/3-540-44795-4_8
fatcat:wovthwv2gra5jolrzdis2lww4y
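In the spirit of the gradient search in the agents' parameter spaces mentioned above, the sketch below runs a REINFORCE-style update on a single logistic betting policy in a made-up one-decision game: the agent sees a hand strength in [0, 1], betting wins +1 with that probability and loses 1 otherwise, and checking pays 0. The game, payoffs and parameterisation are illustrative assumptions, not the paper's setup.

import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Policy: bet with probability sigmoid(t0 + t1 * hand_strength).
t0, t1, lr = 0.0, 0.0, 0.05
for _ in range(50_000):
    s = random.random()
    p_bet = sigmoid(t0 + t1 * s)
    if random.random() < p_bet:                        # sampled action: bet
        reward = 1.0 if random.random() < s else -1.0
        grad0, grad1 = (1 - p_bet), (1 - p_bet) * s    # d log pi(bet) / d theta
    else:                                              # sampled action: check
        reward = 0.0
        grad0, grad1 = -p_bet, -p_bet * s
    t0 += lr * reward * grad0                          # stochastic gradient ascent
    t1 += lr * reward * grad1

# The policy learns to bet strong hands far more often than weak ones.
print(round(sigmoid(t0 + t1 * 0.9), 2), round(sigmoid(t0 + t1 * 0.1), 2))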
Adversarial problem solving: Modeling an opponent using explanatory coherence
1992
Cognitive Science
In APS, you use your model of the opponent, your model of yourself, and the environment to make a decision about the best course of action. ...
In military decision making, and even in poker where a player can be more interested in having fun than in winning money, a major part of the task is to infer what the opponent wants, generally and in ...
(proposition 'C2 "The invasion of Normandy is a diversion.") (proposition 'C3 "There is a large Allied force in southeast England preparing to invade.") ...
doi:10.1016/0364-0213(92)90019-q
fatcat:uhrsgcpfenbntdhjpr2hgpvgie
Robust Opponent Modeling via Adversarial Ensemble Reinforcement Learning in Asymmetric Imperfect-Information Games
[article]
2020
arXiv
pre-print
In order to maximize the reward, the protagonist agent has to infer the opponent type through agent modeling. ...
In order to achieve a good trade-off between the robustness of the learned policy and the computation complexity, we propose to train a separate opponent policy against the protagonist agent for evaluation ...
that the explicit opponent modeling outperforms a black-box RNN approach, and that the stochastic optimization yields better results (in terms of the robustness-complexity trade-off) than standard ensemble ...
arXiv:1909.08735v4
fatcat:ne2qkof3lvf7rdjww4xhtdsjhi