Always happy to see some game theory on HN. If you're looking for a good book that focuses more on how game theory is actually used in practice versus the more computational exposition here, then I'd recommend a very readable and cheap book called "Game Theory for Applied Economists" by Robert Gibbons (Google Preview: http://books.google.com/books/p/princeton?id=8ygxf2WunAIC). The book has only 4 chapters which cover the 4 different types of games:
1. Static Games of Complete Information
2. Dynamic Games of Complete Information
3. Static Games of Incomplete Information
4. Dynamic Games of Incomplete Information
This segmentation covers all possible types of games. It's great because then you only have to decide if the game is static vs. dynamic and whether it's a game of complete vs. incomplete information (remember, perfect/imperfect information is not the same as complete/incomplete information). If you can answer those two questions, then you know what kind of equilibrium is relevant. For example, if it's a game of incomplete information (meaning that there is a move of nature, or equivalently, that the players don't necessarily know the types/payoffs of the other players), then you know that you are playing a Bayesian game, and hence the equilibrium (if it exists) will be some kind of Bayesian Nash equilibrium.
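To make the simplest cell of that 2x2 classification concrete: for a static game of complete information, the relevant concept is plain Nash equilibrium, and in small finite games you can find the pure-strategy equilibria by brute force. Here's a minimal sketch using the standard textbook Prisoner's Dilemma payoffs (the numbers are the usual illustrative ones, not taken from Gibbons):

```python
# Pure-strategy Nash equilibria of a 2x2 static game of complete
# information, using the standard Prisoner's Dilemma payoffs.
# Payoff tuples are (row player, column player).
payoffs = {
    ("Cooperate", "Cooperate"): (-1, -1),
    ("Cooperate", "Defect"):    (-3,  0),
    ("Defect",    "Cooperate"): ( 0, -3),
    ("Defect",    "Defect"):    (-2, -2),
}
actions = ["Cooperate", "Defect"]

def is_nash(row, col):
    """A profile is Nash if neither player gains by deviating unilaterally."""
    u_row, u_col = payoffs[(row, col)]
    if any(payoffs[(r, col)][0] > u_row for r in actions):
        return False
    if any(payoffs[(row, c)][1] > u_col for c in actions):
        return False
    return True

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
print(equilibria)  # [('Defect', 'Defect')]
```

The same brute-force check scales to any finite strategic-form game; the other three cells of the classification just need refinements of this concept (subgame perfection, Bayesian Nash, perfect Bayesian).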
You can always express a game of incomplete information as a game of imperfect information (see: Harsanyi transformation). However, here's something to think about: What do you lose when you transform a game from extensive form (a tree) to strategic form (a matrix)? The answer: Timing.
For me it's less intuitive than a tree, but you can use a matrix to express all possible strategies of a two-player sequential game. This can help you visualize credible vs. noncredible threats from the second player. Ultimately I think the tree is more helpful in solving it visually, though.
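The credible-vs-noncredible distinction is easy to demonstrate with a toy entry-deterrence game (my payoff numbers are illustrative, not from the book): the strategic form admits an equilibrium sustained by a threat that backward induction on the tree throws out.

```python
# Entry-deterrence game: an entrant moves first (Enter / Stay Out);
# if entry occurs, the incumbent responds (Fight / Accommodate).
# Payoff tuples are (entrant, incumbent); numbers are illustrative.
strategic_form = {
    ("Enter",    "Fight"):       (-1, -1),
    ("Enter",    "Accommodate"): ( 1,  1),
    ("Stay Out", "Fight"):       ( 0,  2),
    ("Stay Out", "Accommodate"): ( 0,  2),
}
entrant_moves = ["Enter", "Stay Out"]
incumbent_moves = ["Fight", "Accommodate"]

def is_nash(e, i):
    """Nash check on the matrix: no profitable unilateral deviation."""
    ue, ui = strategic_form[(e, i)]
    if any(strategic_form[(e2, i)][0] > ue for e2 in entrant_moves):
        return False
    if any(strategic_form[(e, i2)][1] > ui for i2 in incumbent_moves):
        return False
    return True

# The matrix has two Nash equilibria, one of which ('Stay Out', 'Fight')
# is propped up by the incumbent's threat to fight.
nash = [(e, i) for e in entrant_moves for i in incumbent_moves if is_nash(e, i)]
print(nash)  # [('Enter', 'Accommodate'), ('Stay Out', 'Fight')]

# Backward induction on the tree exposes the threat as noncredible:
# once entry has actually happened, Fight pays -1 vs. Accommodate's 1.
incumbent_best = max(incumbent_moves, key=lambda i: strategic_form[("Enter", i)][1])
entrant_best = max(entrant_moves, key=lambda e: strategic_form[(e, incumbent_best)][0])
print((entrant_best, incumbent_best))  # ('Enter', 'Accommodate')
```

This is exactly the timing information the matrix discards: both equilibria look fine in strategic form, but only the second one survives as subgame perfect on the tree.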
This book provides a great introduction to game theory. What I like best about it is the way that it introduces simple examples in the first chapter, and then expands upon these examples in the following chapters, adding new complications each time.