Commentary on Abstraction Methods for Game Theoretic Poker
by Nick Christenson
This article originally appeared in the June, 2009 issue of
TwoPlusTwo Magazine.
Motivation
Mathematics and computer science researchers are performing a great deal
of research into determining better strategies for playing poker. In
fact, I will make the prediction that going forward, a majority of
new breakthroughs in poker strategy will be achieved through a
combination of advanced mathematics and computer simulation.
As a consequence, those poker players who wish to learn (or continue)
to play at the highest levels will want to understand the research
being performed by academics on the game of poker. Unfortunately,
few people without a strong background in the mathematical sciences
are going to be able to interpret this research. Moreover, the authors
of these papers are commonly not advanced poker players themselves,
and one can be certain that few among their target audience are
either.
Some poker players may find it useful to have the findings of
these papers explained in plain language, especially with an eye to how
they might be adapted to real-world poker situations. The goal of this
article series is to do just that. These articles are intended
to bridge the gap between the sophisticated poker player and poker-related
research that results in a scientific publication.
Each of these articles is based on a specific publication. I expect
that the reader of these articles will also want to read the publications
on which they are based, but it is my hope that this will not prove
to be strictly necessary, and that the reader will still find something
worthwhile here even if they choose not to study the original research.
Despite the effort to reduce the number and complexity of advanced
mathematical concepts, and despite every attempt to translate the most
obscure jargon into plainer language, I expect that these articles
will still demand more effort on the part of the reader than the typical
poker article. I hope the topics discussed here prove worth the effort.
Introduction
The first article I've chosen to discuss is motivated by a discussion on
the twoplustwo.com Books and Publications forum. The paper in question is
titled "Abstraction Methods for Game Theoretic Poker", and it was published
in the proceedings of Computers and Games: Second International Conference
(CG 2000), held in Hamamatsu, Japan, in October 2000. It is available online
in PostScript format at:
athos.rutgers.edu/~mlittman/attweb/papers/cg00poker.ps.
Not everyone can easily decode a document in PostScript format, but
recent versions of Adobe Acrobat(TM) should be able to handle it.
Objectives
The paper begins by describing two approaches for developing a game
strategy: (1) a learning approach, and (2) a game-theoretic approach.
The authors attribute the learning approach to poker researcher Darse Billings
(of the University of Alberta games research group, the folks responsible
for Poki Poker, Polaris, and a great deal of the most important poker
research performed). I believe the implication that Billings is dismissive
of the game-theoretic approach to be unfair, but perhaps he would accept
this mantle.
The game-theoretic approach, by contrast, mathematically determines
a game's optimal strategy. This is the approach taken for most examples
in The Mathematics of Poker by Bill Chen and Jerrod Ankenman.
There are certainly advantages to a game-theoretic approach, not the least
of which is that if you come up with a game-theoretic solution, you can
prove its correctness. You can't do this with a learned strategy,
and being able to prove correctness is strong stuff. The authors also
point out that finding an optimal strategy, even for "toy" poker games,
"... can lead to unexpected insights into the structure of the game."
In Section 2, the authors provide some background about game-theoretic
solutions to games with hidden information (such as poker) using some
simple games as examples (including a game with only three cards, a K,
a Q, and a J, that is identical to the AKQ game that gets significant
mention in The Mathematics of Poker). Then they provide an
equation for counting the number of decision points in a game, thus
providing a measure of complexity for two-player poker-like card
games with various deck sizes, numbers of betting rounds, numbers of
bets and raises allowed, and numbers of cards dealt in each round.
If one or more of these parameters grows beyond a modest size, solving
the game directly becomes computationally intractable.
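The paper's counting equation isn't reproduced here, but the idea behind it can be illustrated with a small sketch of my own (not the authors' formula): enumerate the decision points in a single two-player limit betting round by walking the action tree.

```python
# Illustrative sketch (my own, not the paper's equation): count decision
# points in one two-player limit betting round by walking the action tree.
# A "decision point" is any spot where a player must choose an action.

def count_decision_points(raises_left, facing_bet=False, checks=0):
    total = 1  # the current node is itself a decision point
    if facing_bet:
        # Facing a bet: fold and call end the round; a raise continues it.
        if raises_left > 0:
            total += count_decision_points(raises_left - 1, facing_bet=True)
    else:
        # No bet yet: a first check passes the action to the other player;
        # a second check ends the round. A bet puts the opponent to a choice.
        if checks == 0:
            total += count_decision_points(raises_left, checks=1)
        if raises_left > 0:
            total += count_decision_points(raises_left - 1, facing_bet=True)
    return total

# One bet and no raises allowed: 4 decision points (P1 opens; P2 after a
# check; P2 facing P1's bet; P1 facing P2's bet after checking).
print(count_decision_points(1))   # 4
```

The paper's complexity measure then multiplies counts like this across betting rounds and over all the ways the cards can be dealt, which is where the intractability comes from as deck size and round count grow.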
So, the authors discuss two methods by which the complexity of these
games can be reduced, and each of these gets its own section of the
paper. These are the "Abstraction" methods mentioned in the title of
the paper. The first of these methods the authors call "binning",
the second they call "independence of betting rounds".
Primary Research
Suppose I'm playing 7-card stud and my starting cards are split
queens with an offsuit 6. Assuming there's no difference in the liveness
of my kicker, I'm going to play this hand exactly the same way I'd
play split queens with an offsuit 7. In fact, I can come up with
a range of hands that I'd play in exactly the same way. Whenever
I have hands I would play in exactly, or very nearly, the same
way, I can consider them all to be the same from a strategic point
of view, and I can put them all in the same "bin".
The fewer the total number of bins, the easier it becomes to do the
math, but the resulting simplified strategy becomes less accurate.
The authors created a test game with an ante of 1 chip, a possible
bet and call or fold of 1 chip, and 200 possible starting hands.
They found an optimal strategy for the complete game version of this
game, and then divided the starting hand space into different numbers
of bins and compared the effect of this on the win rate. Twenty bins
works pretty well in this game, and once you get into the 40-50 bin
range, the results look pretty similar to the results for the unabstracted
game.
They made the game more complex and a little more poker-like and
ran it again, achieving similar results. Clearly, for complex
games grouping similar starting hands can reduce the overall
complexity of the calculations necessary to come up with a strong
strategy, and if the grouping is done judiciously, the impact
to the bottom line is likely to be minimal. Of course, this
doesn't help us come up with good groupings for real poker games,
but it's a proof of concept.
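As a sketch of what binning means computationally (my own illustration, using hypothetical strength ranks; the paper does not publish its exact groupings), the 200 starting hands of the authors' test game can be mapped to bins by quantile of hand strength:

```python
# Hedged sketch of "binning": map each of the 200 possible starting hands
# in the authors' test game to one of n_bins buckets by its strength rank.
# The strength ranking itself is a placeholder, not taken from the paper.

def bin_index(strength_rank, n_hands=200, n_bins=20):
    """strength_rank: 0 (weakest) through n_hands - 1 (strongest)."""
    return strength_rank * n_bins // n_hands

# With 20 bins, ranks 0-9 fall in bin 0, ranks 10-19 in bin 1, and so on.
# The solver then computes one strategy per bin instead of one per hand,
# shrinking the game from 200 hand types to 20.
print(bin_index(0), bin_index(9), bin_index(10), bin_index(199))   # 0 0 1 19
```

The choice of 20 here mirrors the bin count the authors found to work well; fewer bins would shrink the game further at a larger cost in accuracy.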
In these simplified games, there's no such thing as a hand with
"potential". That is, there are no drawing hands. In order to
explore this aspect, they modified the game to include drawing-type
hands and reran their binning experiment, obtaining similar
results.
In Section 4 the authors introduce a new game that's a little
more complex than their previous two. They call this game "Rhode
Island hold'em", as a smaller, but structurally similar, game to
"Texas hold'em". In case it isn't immediately obvious, this is
the sort of joke that mathematics PhDs find amusing. As an aside,
there exists a paper about finding an optimal solution for this game:
"Optimal Rhode Island Hold'em Poker" by Andrew Gilpin and Tuomas
Sandholm. The solution is too large to be given in text form, so
the paper serves as an outline of the methods by which the authors
derived their results. That's an issue for another article, though.
Rhode Island hold'em is played as follows:
- Each player antes.
- Each of two players is dealt one down card, followed by a round of
betting.
- On each round of betting, the first player may check or bet. If
player 1 checks, player 2 may check or bet. If one player bets, the
other player may call or fold. If both players check, or one player
bets and the other calls, we proceed to the next round.
- Next, one shared board card is dealt, followed by a round of betting.
- Finally, one more board card is dealt, followed by a round of
betting.
- If there is a showdown, the best hand wins, using the rules of
the casino game "3-Card Poker" to determine a winner. That is,
the order of hands is 3-card straight flush, trips, 3-card straight,
3-card flush, pair, high card.
Alternatively, the rules of this game can be expressed as Texas
hold'em with no flop and only one down card, where the hands are
evaluated according to the hand rankings for 3-Card Poker.
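To make the showdown rule concrete, here is a minimal 3-Card Poker hand evaluator of my own (the paper gives no code): it orders hands as straight flush > trips > straight > flush > pair > high card, with a simplified treatment of the A-2-3 straight.

```python
from collections import Counter

def evaluate_3card(hand):
    """hand: three (rank, suit) tuples, with rank 2..14 (14 = ace).
    Returns a tuple; when two hands are compared, the higher tuple wins."""
    counts = Counter(rank for rank, suit in hand)
    # Order ranks by multiplicity first, then value, so a pair's rank
    # outranks its kicker when breaking ties.
    ranks = sorted(counts, key=lambda r: (counts[r], r), reverse=True)
    is_flush = len({suit for rank, suit in hand}) == 1
    distinct = sorted(counts)
    is_straight = len(distinct) == 3 and (
        distinct[2] - distinct[0] == 2 or distinct == [2, 3, 14])
    if distinct == [2, 3, 14]:
        ranks = [3, 2, 1]          # treat A-2-3 as the lowest straight
    if is_straight and is_flush:
        category = 5               # 3-card straight flush
    elif counts.most_common(1)[0][1] == 3:
        category = 4               # trips
    elif is_straight:
        category = 3               # 3-card straight
    elif is_flush:
        category = 2               # 3-card flush
    elif counts.most_common(1)[0][1] == 2:
        category = 1               # pair
    else:
        category = 0               # high card
    return (category, ranks)
```

For example, any trips beat any straight, and a pair of fives beats a pair of threes regardless of kickers, matching the ordering listed above.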
In the next section of the paper, instead of using binning, they
reduce the complexity of finding a solution to a poker game by
considering each round of a multi-round poker game to be independent.
That is, instead of finding the solution to the complete game, we
treat each betting round as an independent game.
Here's how this might work in "real life": A Texas hold'em hand is
played by a team of four players. The first player plays the preflop
betting round with none of the teammates in the room or able to see
the action in any way. After the preflop betting round is complete,
that player leaves the room without communicating with anyone, and
player two comes in to play the flop. This player knows the cards and
the size of the pot, but does not know whether the first player bet
or called to get there (assuming the first player didn't fold).
After the flop betting round, the second player is replaced for the
turn and the river in a similar manner.
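The information each "room player" acts on can be sketched as a small data structure (mine, with illustrative field names, not the paper's representation): cards and pot size only, with the earlier action sequence erased.

```python
from dataclasses import dataclass

# Hedged sketch of the independent-betting-rounds abstraction: the
# strategy for each round is a function of only the visible cards and
# the pot, never the action history. Field names are illustrative.

@dataclass(frozen=True)
class RoundState:
    hole_card: int     # this player's down card (or its bin index)
    board: tuple       # shared board cards dealt so far
    pot: int           # chips in the pot entering this round

# Two full-game histories that reach this round with the same cards and
# the same pot collapse to a single state, so the round strategy cannot
# distinguish between them.
state_a = RoundState(hole_card=12, board=(9,), pot=4)
state_b = RoundState(hole_card=12, board=(9,), pot=4)
print(state_a == state_b)   # True
```

Solving each round over states like these, rather than over full action histories, is what makes the abstracted game so much smaller.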
This simplification precludes deliberately planning a multi-street betting
strategy. However, it doesn't preclude any of the multi-street strategies
with which poker players are familiar; it just means that they have to
occur by "accident" rather than by a priori design. For example, a
certain amount of the time a player will bluff on the first round, as
game theory suggests they should, and, just by accident, a certain
amount of the time they will also bluff on the second and even the third
betting rounds.
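As a toy calculation (the bluffing frequencies below are made up, and card correlations between rounds are ignored), independence means a bluff that persists across streets happens with the product of the per-round bluffing frequencies:

```python
# Hypothetical per-round bluffing frequencies under independent rounds.
# Because each round's decision is made separately, a bluff that runs
# through all three streets occurs only when all three independent
# decisions happen to be bluffs.
b1, b2, b3 = 0.20, 0.15, 0.10

p_three_barrel = b1 * b2 * b3
print(round(p_three_barrel, 4))   # 0.003, i.e. about 0.3% of such spots
```

This is why multi-street plays still arise under this abstraction, but rarely, exactly as the next paragraph describes.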
Thus, it is possible, albeit rare, for a player to
make a three-barrel bluff, but since most players should agree that
this strategy should be adopted rarely (unless one's opponent has demonstrated
themselves to be ridiculously weak on the river), that doesn't seem to
be a huge impediment. Similarly, slowplaying, semi-bluffing, floating,
and other popular strategies can all occur "by accident" as well; however,
they might not occur as frequently or effectively as they would under a
strategy that considers the totality of all betting rounds. Nonetheless, they
can all arise even if the betting rounds are considered independently.
So, while there can be a significant disadvantage to treating betting
rounds independently, for some types of games it's possible that such
an abstraction may not be as horrendous as it first appears.
In any case, the authors demonstrate that by adopting this approximation
they can save a great deal of compute time in evaluating positions in
Rhode Island hold'em, although since they don't have an exact solution
to this game (at least not at the time the paper was written), they
don't know how much EV this abstraction method costs.
However, they can play their independent betting-round
strategy against some sample programmed algorithms. One opponent is
a "rule-based" player, that is, a player whose play is governed by a
lookup table with a preprogrammed strategy that approximates what the
authors think a good strategy for Rhode Island hold'em looks like.
This is essentially the same method used by such programs as Wilson's
Turbo Texas Hold'em. The second and third opponents are
based on the rule-based strategy, skewed to be too aggressive and too
passive, respectively. The fourth opponent is one that implements an
"opponent-modeling" method. That is, it analyzes opponent behavior,
attempts to figure out its opponent's strategy, and determines a proper
counter-strategy. This is the approach taken by the Loki program
developed by the University of Alberta research group. This algorithm
can be found in the "Poker Academy" and "STACKED" commercial computer
software packages.
In any case, the independent betting model beat all four of these
opponents. The normal rule-based model outperformed both the too-aggressive
and too-passive models, and the opponent-modeling program
beat all three rule-based models. Does this indicate that an independent
model will do well against a better algorithm? No, but it does indicate
that the notion of independent betting isn't a horrible idea. How bad
it is remains to be seen, and it almost certainly depends on the structure
of the game.
As I already mentioned, since the publication of this paper the game
of Rhode Island hold'em has been solved. This means that the authors
could now play their independent abstraction model against an opponent
who plays optimally. From this we could see how much one actually loses
by treating each betting round independently. Such a study would be an
excellent follow-up to this line of research.
One parenthetical comment the authors made that I found especially
interesting is that the independent betting program used both bluffing
and slowplaying strategies in its mix. I fully expected it to bluff;
bluffing has been mathematically known to be an
important part of poker since the beginnings of game theory. However,
I was a little surprised to find that a simplified strategy for a game
with three betting rounds would include slowplaying. I don't think
I've seen that in an algorithm before, outside of a rule-based or
opponent-modeling approach. I'm curious to find out whether the complete
solution to the game also includes slowplaying.
Conclusion
Of course, real poker is more complex than the games examined here,
and it's not clear exactly how far we can generalize these results to
real games. Nonetheless, the authors' point is that for some games it
is possible to develop abstractions, namely binning and treating each
betting round independently, and that under the right circumstances
these abstractions still produce good strategies for poker-like games.
