The First Man-Machine Poker Championship
Card Sharks to Battle Computer at Poker
Saturday, July 21, 2007. 12:22 PM
Matt Crenson, Associated Press
NEW YORK (AP) -- Poker champion Phil Laak has a good chance of winning when he sits down this week to play 2,000 hands of Texas Hold'em - against a computer.
It may be the last chance he gets. Computers have gotten a lot better at poker in recent years; they're good enough now to challenge top professionals like Laak, who won the World Poker Tour invitational in 2004.
But it's only a matter of time before the machines take a commanding lead in the war for poker supremacy. Just as they already have in backgammon, checkers and chess, computers are expected to surpass even the best human poker players within a decade. They can already beat virtually any amateur player.
"This match is extremely important, because it's the first time there's going to be a man-machine event where there's going to be a scientific component," said University of Alberta computing science professor Jonathan Schaeffer.
The Canadian university's games research group is considered the best of its kind in the world. After defeating an Alberta-designed program several years ago, Laak was so impressed that he estimated his edge at a mere 5 percent. He figures he would have lost if the researchers hadn't let him examine the programming code and practice against the machine ahead of time.
"This robot is going to do just fine," Laak predicted.
The Alberta researchers have endowed the $50,000 contest with an ingenious design, making this the first man-machine contest to eliminate the luck of the draw as much as possible.
Laak will play with a partner, fellow pro Ali Eslami. The two will be in separate rooms, and their games will be mirror images of one another, with Eslami getting the cards that the computer received in its hands against Laak, and vice versa.
That way, a lousy hand for one human player will result in a correspondingly strong hand for his partner in the other room. At the end of the tournament the chips of both humans will be added together and compared to the computer's.
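To make the comparison concrete, here is a minimal sketch of that scoring arithmetic in Python. The hand results are made up, and treating each heads-up hand as zero-sum is an assumption for the sake of illustration, not a detail from the article.

```python
# Illustrative sketch of the duplicate-match scoring described above.
# The hand-by-hand results are hypothetical; only the summing rule
# comes from the tournament design.

# Net chips won (+) or lost (-) by each human on the same dealt cards,
# played in separate rooms against the computer.
laak_results = [+120, -340, +60]     # Laak's side of three example hands
eslami_results = [-90, +310, -45]    # Eslami holding the mirrored cards

human_total = sum(laak_results) + sum(eslami_results)
# Assuming each heads-up hand is zero-sum, the computer wins what the humans lose.
computer_total = -human_total

print(f"Humans combined: {human_total:+d} chips")
print(f"Computer:        {computer_total:+d} chips")
```

Because a hand that is unlucky for one partner is correspondingly lucky for the other, summing the two human totals cancels much of the luck of the cards and leaves a cleaner measure of skill.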
The two-day contest, beginning Monday, takes place not at a casino, but at the annual conference of the Association for the Advancement of Artificial Intelligence in Vancouver, British Columbia. Researchers in the field have taken an increasing interest in poker over the past few years because one of the biggest problems they face is how to deal with uncertainty and incomplete information.
"You don't have perfect information about what state the game is in, and particularly what cards your opponent has in his hand," said Dana S. Nau, a professor of computer science at the University of Maryland in College Park. "That means when an opponent does something, you can't be sure why."
As a result, it is much harder to teach a computer to play poker than these other games. In chess, checkers and backgammon, every contest starts the same way, then evolves through an enormous, but finite, number of possible states according to a consistent set of rules. With enough computing power, a computer could simply build a tree with a branch for every possible future move in the game, then choose the branch that leads most directly to victory.
That's essentially the strategy IBM's Deep Blue computer used to defeat chess champion Garry Kasparov in their famous 1997 match. No computer can calculate every single possible move in a chess game, but today's best chess programs can see an astounding 18 moves ahead.
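For readers curious what a game-tree search actually looks like, here is a toy sketch in Python for a simple perfect-information game: players alternately take one or two stones, and whoever takes the last stone wins. The game and code are purely illustrative; real chess engines layer heuristics and aggressive pruning on top of the same basic idea.

```python
# Minimal game-tree search for a toy perfect-information game (not the
# article's poker bot). The search expands every possible future move and
# returns the value of the best branch for the side to move.

from functools import lru_cache

@lru_cache(maxsize=None)
def best_value(stones, my_turn):
    """+1 if I can force a win from this state, -1 if I am bound to lose."""
    if stones == 0:
        # The previous player took the last stone; whoever is to move has lost.
        return -1 if my_turn else +1
    outcomes = [best_value(stones - take, not my_turn)
                for take in (1, 2) if take <= stones]
    # I pick the branch best for me; the opponent picks the branch worst for me.
    return max(outcomes) if my_turn else min(outcomes)

print(best_value(10, True))   # +1: the side to move can force a win from 10 stones
```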
Yet poker involves not just myriad possibilities but uncertainty, both about what cards the opponent is holding and more importantly, how he is going to play them.
"It's mandatory for you to understand how the other guy approaches the game. This is critical information in poker, and it's not true of any of these other games that we've studied in academia," said Darse Billings, a recent Alberta Ph.D. who has worked on the robot for 15 years - except for a three-year break to play poker professionally.
The game-tree approach doesn't work in poker because in many situations there is no one best move. There isn't even a best strategy. A top-notch player adapts his play over time, exploiting his opponent's behavior. He bluffs against the timid and proceeds cautiously when players who only raise on the strongest hands are betting the limit. He learns how to vary his own strategy so others can't take advantage of him.
That kind of insight is very hard to program into a computer. You can't just give the machine some rules to follow, because any reasonably competent human player will quickly intuit what the computer is going to do in various situations.
"What makes poker interesting is that there is not a magic recipe," Schaeffer said.
In fact, the simplest poker-playing programs fail because they are just a recipe, a set of rules telling the computer what to do based on the strength of its hand. A savvy opponent can soon gauge what cards the computer is holding based on how aggressively it is betting.
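A hypothetical "recipe" bot makes the weakness plain: when the action is a fixed function of hand strength, the betting pattern gives the hand away. The thresholds below are invented for illustration.

```python
# Hypothetical fixed-recipe bot of the kind the article says is easy to read:
# its action depends only on hand strength, so an observant opponent can
# infer the strength of its cards from how it bets.

def recipe_bot_action(hand_strength):
    """hand_strength in [0, 1]; returns the bot's action."""
    if hand_strength > 0.8:
        return "raise"
    if hand_strength > 0.5:
        return "call"
    return "fold"

# Because the mapping never varies, "raise" always means a strong hand and
# "fold" always means a weak one; there is no bluffing to disguise it.
print(recipe_bot_action(0.9))  # raise
print(recipe_bot_action(0.3))  # fold
```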
That's how Laak was able to defeat a program called Poker Probot in a contest two years ago in Las Vegas. As the match progressed Laak correctly intuited that the computer was playing a consistently aggressive game, and capitalized on that observation by adapting his own play.
Programmers can eliminate some of that weakness with game theory, a branch of mathematics pioneered by John von Neumann, who also helped develop the hydrogen bomb. In 1950 mathematician John Nash, whose life inspired the movie "A Beautiful Mind," showed that in such games there is a set of strategies, one for each player, such that no player can improve his expected return by unilaterally switching to a different strategy.
In the simple game "Rock, Paper, Scissors," for example, the best strategy is to choose each of the three options at random, an equal proportion of the time. If a player deviated from that strategy by following a pattern or favoring one option over the others, opponents would soon notice and adapt their own play to take advantage of it.
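A quick calculation, not from the article, shows why the uniform mix works: its expected payoff is zero against any opposing strategy, while a biased player can be exploited. The example mixes below are made up.

```python
# Check that the uniform 1/3-1/3-1/3 mix in Rock, Paper, Scissors is
# unexploitable, while a biased player can be beaten.

# payoff[my_move][opponent_move] from my point of view: +1 win, 0 tie, -1 loss
PAYOFF = {
    "rock":     {"rock": 0,  "paper": -1, "scissors": +1},
    "paper":    {"rock": +1, "paper": 0,  "scissors": -1},
    "scissors": {"rock": -1, "paper": +1, "scissors": 0},
}

def expected_payoff(my_mix, opp_mix):
    """Expected payoff when both sides play randomized (mixed) strategies."""
    return sum(p_me * p_opp * PAYOFF[m][o]
               for m, p_me in my_mix.items()
               for o, p_opp in opp_mix.items())

uniform = {"rock": 1/3, "paper": 1/3, "scissors": 1/3}
rock_heavy = {"rock": 0.6, "paper": 0.2, "scissors": 0.2}     # a biased player
exploit_rock = {"rock": 0.0, "paper": 1.0, "scissors": 0.0}   # always play paper

print(expected_payoff(uniform, rock_heavy))       # ~0.0: the uniform mix breaks even against anything
print(expected_payoff(exploit_rock, rock_heavy))  # ~0.4: the biased player gets exploited
```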
Texas Hold'em is a little more complicated than "Rock, Paper, Scissors," but Nash's math still applies. With game theory, computers know to vary their play so an opponent has a hard time figuring out whether they are bluffing or employing some other strategy.
But game theory has inherent limits. In Nash equilibrium terms, success doesn't mean winning - it means not losing.
"You basically compute a formula that can at least break even in the long run, no matter what your opponent does," Billings said.
That's about where the best poker programs are today. Though the best game theory-based programs can usually hold their own against world-class human poker players, they aren't good enough to win big consistently.
Squeezing that extra bit of performance out of a computer requires combining the sheer mathematical power of game theory with the ability to observe an opponent's play and adapt to it. Many legendary poker players do that by being experts of human nature. They quickly learn the tics, gestures and other "tells" that reveal exactly what another player is up to.
A computer can't detect those, but it can keep track of how an opponent plays the game. It can observe how often an opponent tries to bluff with a weak hand, and how often she folds. Then the computer can take that information and incorporate it into the calculations that guide its own game.
"The notion of forming some sort of model of what another player is like ... is a really important problem," Nau said.
Computer scientists are only just beginning to incorporate that ability into their programs; days before their contest with Laak and Eslami, the University of Alberta researchers are still trying to tweak their program's adaptive elements. Billings will say only this about what the humans have in store: "They will be guaranteed to be seeing a lot of different styles."
Even so, Laak and Eslami are top-notch players with a deep understanding of poker's mathematical fundamentals. They should be able to keep up with the computer - this time.