Zero-Sum Discrete-Time Markov Games with Unknown Disturbance Distribution

Discounted and Average Criteria

Article no.: 9783030357207
Published: 2020
Binding: eBook
Pages: 120
Author: J. Adolfo Minjárez-Sosa
Series: SpringerBriefs in Probability and Mathematical Statistics
eBook type: PDF
eBook format: Reflowable eBook
Copy protection: Digital Watermark [Social-DRM]
Language: English
Description:

This SpringerBrief deals with a class of discrete-time zero-sum Markov games with Borel state and action spaces and possibly unbounded payoffs, under discounted and average criteria, whose state process evolves according to a stochastic difference equation. The corresponding disturbance process is an observable sequence of independent and identically distributed random variables whose distribution is unknown to both players. Unlike the standard case, the game is played over an infinite horizon and evolves as follows. At each stage, once the players have observed the state of the game and before choosing their actions, players 1 and 2 implement a statistical estimation procedure to obtain estimates of the unknown distribution. Then, independently, the players adapt their decisions to these estimators in order to select their actions and construct their strategies. This book presents a systematic analysis of recent developments in this kind of game. Specifically, it introduces the theoretical foundations of the procedures combining statistical estimation and control techniques for the construction of the players' strategies, together with illustrative examples. In this sense, the book is an essential reference for theoretical and applied researchers in the fields of stochastic control and game theory and their applications.
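As a rough illustration (not taken from the book), the following Python sketch simulates the estimation-and-play loop described above for a toy scalar game. The dynamics F, the payoff r, the finite action sets, and the one-step look-ahead decision rule are hypothetical placeholders; the only point being illustrated is that the players observe the i.i.d. disturbances, maintain an empirical estimate of their unknown distribution, and adapt their actions to that estimate at every stage.

```python
# Minimal sketch, assuming a scalar state, finite action sets, and a linear
# stochastic difference equation x_{t+1} = F(x_t, a_t, b_t, xi_t).
# All model ingredients below are illustrative, not the book's examples.
import numpy as np

rng = np.random.default_rng(0)

ACTIONS_1 = np.array([-1.0, 0.0, 1.0])   # player 1 (maximizer) actions
ACTIONS_2 = np.array([-1.0, 0.0, 1.0])   # player 2 (minimizer) actions

def F(x, a, b, xi):
    """Stochastic difference equation driving the state process."""
    return 0.9 * x + a - b + xi

def r(x, a, b):
    """One-stage payoff (player 1 maximizes, player 2 minimizes)."""
    return -x**2 + a * b

def one_step_value(x, a, b, xi_samples, gamma=0.95):
    """Current payoff plus a discounted expected next-state payoff,
    where the expectation is taken under the *empirical* disturbance
    distribution built from the observed samples."""
    next_x = F(x, a, b, xi_samples)
    return r(x, a, b) + gamma * np.mean(r(next_x, 0.0, 0.0))

def greedy_pair(x, xi_samples):
    """Each player responds to the matrix of one-step values computed
    from the estimated disturbance distribution."""
    M = np.array([[one_step_value(x, a, b, xi_samples)
                   for b in ACTIONS_2] for a in ACTIONS_1])
    a_idx = np.argmax(M.min(axis=1))   # maximin choice for player 1
    b_idx = np.argmin(M.max(axis=0))   # minimax choice for player 2
    return ACTIONS_1[a_idx], ACTIONS_2[b_idx]

x = 0.0
observed_xi = []                        # disturbances observed so far
for t in range(50):
    xi_samples = np.array(observed_xi) if observed_xi else np.zeros(1)
    a, b = greedy_pair(x, xi_samples)   # actions adapted to the estimate
    xi = rng.normal(0.0, 0.5)           # true law, unknown to the players
    observed_xi.append(xi)              # estimation step: enlarge the sample
    x = F(x, a, b, xi)                  # state transition
```

In the book the estimation step and the optimality analysis are far more general (Borel spaces, unbounded payoffs, discounted and average criteria); the sketch only mirrors the overall structure of observing disturbances, updating an estimate, and playing against it.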
Contents:
Zero-sum Markov games
Discounted optimality criterion
Average payoff criterion
Empirical approximation-estimation algorithms in Markov games
Difference-equation games: examples
Elements from analysis
Probability measures and weak convergence
Stochastic kernels
Review on density estimation
