Simulation-based Algorithms for Markov Decision Processes

Item no.: 9781846286902
Published: 2007
Binding: eBook
Pages: 189
Author: Hyeong Soo Chang
Series: Communications and Control Engineering
eBook type: PDF
eBook format: Reflowable eBook
Copy protection: Digital watermark (social DRM)
Language: English

Description:

Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. It is well known that many real-world problems modeled by MDPs have huge state and/or action spaces, leading to the notorious curse of dimensionality that makes practical solution of the resulting models intractable. In other cases, the system of interest is complex enough that it is not feasible to specify some of the MDP model parameters explicitly, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based numerical algorithms have been developed recently to overcome the difficulties of computing an optimal solution in terms of a policy and/or value function. Specific approaches, illustrated by the sketch after the list below, include:

• multi-stage adaptive sampling;

• evolutionary policy iteration;

• evolutionary random policy search; and

• model reference adaptive search.
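
All of these methods assume only sampling access to the MDP: a simulator that, given a state and action, returns a random next state and cost. The toy Python sketch below illustrates that access pattern on an invented inventory-style model and uses a simplified UCB-style budget allocation to suggest the flavor of multi-stage adaptive sampling; every name and parameter in it is hypothetical, and it is not code from the book.

```python
import math
import random

# Toy illustration of the simulation setting described above: the MDP is
# accessed only through a sampler that returns a random next state and cost.
# The inventory-style model and all names here are invented for illustration;
# this is not code from the book.

ACTIONS = [0, 1, 2]   # order quantities
HORIZON = 3           # finite decision horizon

def simulate(state, action):
    """One sampled transition: returns (next_state, cost)."""
    demand = random.randint(0, 2)
    next_state = max(state + action - demand, 0)
    cost = 1.0 * action + 0.5 * next_state   # ordering plus holding cost
    return next_state, cost

def v_estimate(state, stage, budget):
    """Sampled value of `state`, spending `budget` simulations at this stage,
    allocated across actions with a UCB-style index to suggest the flavor of
    multi-stage adaptive sampling (simplified; costs are minimized)."""
    if stage == HORIZON:
        return 0.0
    counts = {a: 0 for a in ACTIONS}
    sums = {a: 0.0 for a in ACTIONS}

    def sample(a):
        nxt, cost = simulate(state, a)
        sums[a] += cost + v_estimate(nxt, stage + 1, budget)
        counts[a] += 1

    for a in ACTIONS:                  # try each action once
        sample(a)
    for n in range(len(ACTIONS), budget):
        # sample the action whose optimistic (low-cost) index looks best
        sample(min(ACTIONS, key=lambda a: sums[a] / counts[a]
                   - math.sqrt(2.0 * math.log(n) / counts[a])))
    return min(sums[a] / counts[a] for a in ACTIONS)

print(round(v_estimate(0, 0, budget=10), 2))
```

The book's algorithms differ in their estimators and come with rigorous convergence guarantees; the sketch conveys only the simulation-based access pattern and the idea of spending more samples on promising actions.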

Simulation-based Algorithms for Markov Decision Processes brings this state-of-the-art research together for the first time and presents it in a manner that makes it accessible to researchers with varying interests and backgrounds. In addition to providing numerous specific algorithms, the exposition includes both illustrative numerical examples and rigorous theoretical convergence results. The algorithms developed and analyzed differ from the successful computational methods for solving MDPs based on neuro-dynamic programming or reinforcement learning and will complement work in those areas. Furthermore, the authors show how to combine the various algorithms introduced with approximate dynamic programming methods that reduce the size of the state space and ameliorate the effects of dimensionality.
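
As a rough illustration of the state-space reduction mentioned above, the sketch below applies one standard approximate dynamic programming device, state aggregation, to the same invented inventory model: many raw states share a single value through a coarse bucket. The bucketing and step size are hypothetical choices, not a method prescribed by the book.

```python
import random

# Hypothetical sketch of state aggregation: raw states share a value through
# a coarse bucket, shrinking the effective state space from one entry per
# state to a handful of entries. Model, bucketing, and step size are invented.

DISCOUNT = 0.9
ACTIONS = (0, 1, 2)
NUM_BUCKETS = 5

def simulate(state, action):
    """Sampled transition of the same toy inventory model as above."""
    demand = random.randint(0, 2)
    nxt = max(state + action - demand, 0)
    return nxt, 1.0 * action + 0.5 * nxt        # next state and cost

def bucket(state):
    """Aggregate raw inventory levels into a few coarse states."""
    return min(state // 4, NUM_BUCKETS - 1)

v = [0.0] * NUM_BUCKETS                          # one value per bucket

for _ in range(5000):                            # asynchronous sampled sweeps
    s = random.randint(0, 19)                    # draw a raw state
    target = min(c + DISCOUNT * v[bucket(nxt)]   # sampled one-step lookahead
                 for a in ACTIONS
                 for nxt, c in (simulate(s, a),))
    v[bucket(s)] += 0.05 * (target - v[bucket(s)])  # smoothed update

print([round(x, 2) for x in v])
```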

The self-contained approach of this book will appeal not only to researchers in MDPs, stochastic modeling and control, and simulation, but will also be a valuable source of instruction and reference for students of control and operations research.

Often, real-world problems modeled by Markov decision processes (MDPs) are difficult to solve in practice because of the curse of dimensionality. In other cases, explicit specification of the MDP model parameters is not feasible, but simulation samples are available. For these settings, various sampling and population-based numerical algorithms for computing an optimal solution in terms of a policy and/or value function have been developed recently.

Here, this state-of-the-art research is brought together in a way that makes it accessible to researchers of varying interests and backgrounds. Many specific algorithms, illustrative numerical examples and rigorous theoretical convergence results are provided. The algorithms differ from the successful computational methods for solving MDPs based on neuro-dynamic programming or reinforcement learning. The algorithms can be combined with approximate dynamic programming methods that reduce the size of the state space and ameliorate the effects of dimensionality.

Contents:

• Markov Decision Processes
• Multi-stage Adaptive Sampling Algorithms
• Population-based Evolutionary Approaches
• Model Reference Adaptive Search
• On-line Control Methods via Simulation
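
The chapters on population-based evolutionary approaches and model reference adaptive search both revolve around sampling candidate solutions from a distribution that is iteratively refined toward better performers. The generic cross-entropy-style sketch below, over a single hypothetical policy parameter with an invented noisy objective, conveys that idea; it is not the book's MRAS algorithm.

```python
import math
import random

# Generic cross-entropy-style sketch: draw candidates from a sampling
# distribution, keep the elite fraction, and refit the distribution toward
# the elites. The objective is an invented stand-in for a noisy estimate of
# policy performance; this is not the book's MRAS algorithm.

def performance(x):
    """Hypothetical noisy score of a one-parameter policy (higher is better)."""
    return -(x - 2.0) ** 2 + random.gauss(0.0, 0.1)

mu, sigma = 0.0, 3.0                      # Gaussian sampling distribution
for gen in range(30):
    pop = [random.gauss(mu, sigma) for _ in range(50)]
    pop.sort(key=performance, reverse=True)
    elites = pop[:10]                     # keep the top 20 percent
    mu = sum(elites) / len(elites)        # refit mean to the elites
    sigma = max(0.05, math.sqrt(          # refit spread, floored to avoid
        sum((x - mu) ** 2 for x in elites) / len(elites)))  # collapse

print(round(mu, 2))                       # should approach 2.0
```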
