Chapter 3. Moving from Static to Dynamic Games
In dynamic games, the players' strategies depend at least in part on past actions. Such games are characterized by the way in which the environment, the players' objectives and the order of play are modeled.
Repeated games iterate a static game some number of times (possibly infinitely often), with each player aiming to optimize his average utility over the stages. In this context, a Nash equilibrium becomes a profile of action plans rather than a profile of one-shot actions. Folk theorems characterize the set of utilities that are achievable at equilibrium in repeated games.
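As a toy illustration of these ideas (not taken from the text, and using the standard Prisoner's Dilemma payoffs as an assumed stage game), the sketch below plays a static game repeatedly and computes each player's average utility under a profile of action plans, here tit-for-tat against always-defect:

```python
# Toy sketch: a repeated Prisoner's Dilemma (assumed stage-game payoffs),
# where each player's action plan maps the observed history to an action.

# Stage-game payoffs: (row player, column player).
# Actions: "C" (cooperate), "D" (defect).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(history):
    """Cooperate first, then repeat the opponent's last action."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def average_utilities(plan1, plan2, stages):
    """Play the stage game `stages` times; return the average utilities."""
    history = []  # list of (a1, a2) action profiles, from player 1's view
    total1 = total2 = 0
    for _ in range(stages):
        a1 = plan1(history)
        # Player 2 sees the history from its own perspective (opponent first).
        a2 = plan2([(b, a) for (a, b) in history])
        u1, u2 = PAYOFFS[(a1, a2)]
        total1 += u1
        total2 += u2
        history.append((a1, a2))
    return total1 / stages, total2 / stages

print(average_utilities(tit_for_tat, always_defect, 100))  # → (0.99, 1.04)
```

After the first stage, tit-for-tat mirrors the defection, so both average utilities converge toward the mutual-defection payoff of 1 as the number of stages grows.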
Stochastic games generalize repeated games by allowing the game itself to evolve over time: a player's stage utility depends not only on the players' actions but also on the current state of the game. This ...
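A minimal sketch of this state dependence (the two states, payoffs, and transition probabilities below are all assumed for illustration, not from the text): the stage utility is indexed by the current state, and the joint actions drive the state transition.

```python
import random

# Toy sketch (hypothetical numbers): a two-state stochastic game in which
# player 1's stage utility depends on both the action profile and the
# current state, and the joint actions determine the next state.

# Stage utilities for player 1: UTILITY[state][(a1, a2)].
UTILITY = {
    "good": {("C", "C"): 4, ("C", "D"): 1, ("D", "C"): 5, ("D", "D"): 2},
    "bad":  {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1},
}

def next_state(state, a1, a2, rng):
    """Mutual cooperation makes the 'good' state much more likely."""
    p_good = 0.9 if (a1, a2) == ("C", "C") else 0.2
    return "good" if rng.random() < p_good else "bad"

def average_stage_utility(plan1, plan2, stages, seed=0):
    """Average stage utility of player 1 under stationary strategies."""
    rng = random.Random(seed)
    state, total = "good", 0.0
    for _ in range(stages):
        a1, a2 = plan1(state), plan2(state)
        total += UTILITY[state][(a1, a2)]  # stage utility depends on the state
        state = next_state(state, a1, a2, rng)
    return total / stages

# Stationary strategies: the action depends only on the current state.
cooperator = lambda state: "C"
print(average_stage_utility(cooperator, cooperator, 1000))
```

With both players cooperating, the game spends roughly 90% of the stages in the "good" state, so player 1's average stage utility settles near 0.9 · 4 + 0.1 · 2 = 3.8; a repeated game is recovered as the special case with a single state.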
