Simulation-based Algorithms for Markov Decision Processes

Not available | Delivery time: not available

142,99 €*

All prices incl. VAT | Free shipping
Item no.: 9781846286896
Pages: 184
Author: Hyeong Soo Chang
Weight: 440 g
Format: 235x188x35 mm
Series: Communications and Control Engineering
Language: English
Description:

Steven I. Marcus received his Ph.D. and S.M. from the Massachusetts Institute of Technology in 1975 and 1972, respectively. He received a B.A. from Rice University in 1971. From 1975 to 1991, he was with the Department of Electrical and Computer Engineering at the University of Texas at Austin, where he was the L.B. (Preach) Meaders Professor in Engineering. He was Associate Chairman of the Department during the period 1984-89. In 1991, he joined the University of Maryland, College Park, where he was Director of the Institute for Systems Research until 1996. He is currently a Professor in the Electrical Engineering Department and the Institute for Systems Research.
Steven Marcus is a Fellow of IEEE, and a member of SIAM, AMS, and the Operations Research Society of America. He is an Editor of the SIAM Journal on Control and Optimization, and Associate Editor of Mathematics of Control, Signals, and Systems, Journal on Discrete Event Dynamic Systems, and Acta Applicandae Mathematicae. He has authored or co-authored more than 100 articles, conference proceedings, and book chapters.
Dr. Marcus's research interests lie in the areas of control and systems engineering, analysis and control of stochastic systems, Markov decision processes, stochastic and adaptive control, learning, fault detection, and discrete event systems, with applications in manufacturing, acoustics, and communication networks.
Dr. Fu received his Ph.D. and M.S. degrees in applied mathematics from Harvard University in 1989 and 1986, respectively. He received S.B. and S.M. degrees in electrical engineering and an S.B. degree in mathematics from the Massachusetts Institute of Technology in 1985. Since 1989, he has been at the University of Maryland, College Park, in the College of Business and Management.
Dr. Fu is a member of IEEE and the Institute for Operations Research and the Management Sciences (INFORMS). He is the Simulation Area Editor for Operations Research, an Associate Editor for Management Science, and has served on the Editorial Boards of the INFORMS Journal on Computing, Production and Operations Management, and IIE Transactions. He was on the program committee for the Spring 1996 INFORMS National Meeting, in charge of contributed papers. In 1995, he received the Maryland Business School's annual Allen J. Krowe Award for Teaching Excellence. He is the co-author (with Jian-Qiang Hu) of the book Conditional Monte Carlo: Gradient Estimation and Optimization Applications (0-7923-9873-4, 1997), which received the 1998 INFORMS College on Simulation Outstanding Publication Award. Other awards include the 1999 IIE Operations Research Division Award and a 1998 IIE Transactions Best Paper Award. In 2002, he received ISR's Outstanding Systems Engineering Faculty Award.
Dr. Fu's research interests lie in the areas of stochastic derivative estimation and simulation optimization of discrete-event systems, particularly with applications to manufacturing systems, inventory control, and the pricing of financial derivatives.
Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. This book provides practical modeling methods for many real-world problems with high dimensionality or complexity which have not hitherto been treatable with Markov decision processes. In addition to providing numerous specific algorithms, coverage includes both illustrative numerical examples and rigorous theoretical convergence results. The algorithms developed and analyzed differ from the successful computational methods for solving MDPs based on neuro-dynamic programming or reinforcement learning and will complement work in those areas. In addition, the book shows how to combine the various algorithms introduced with approximate dynamic programming methods that reduce the size of the state space and ameliorate the effects of dimensionality.
Real-world problems modeled by Markov decision processes (MDPs) are often difficult to solve in practice because of the curse of dimensionality. In other cases, explicit specification of the MDP model parameters is not feasible, but simulation samples are available. For these settings, various sampling and population-based numerical algorithms for computing an optimal solution, in terms of a policy and/or value function, have been developed recently.
Here, this state-of-the-art research is brought together in a way that makes it accessible to researchers of varying interests and backgrounds. Many specific algorithms, illustrative numerical examples and rigorous theoretical convergence results are provided. The algorithms differ from the successful computational methods for solving MDPs based on neuro-dynamic programming or reinforcement learning. The algorithms can be combined with approximate dynamic programming methods that reduce the size of the state space and ameliorate the effects of dimensionality.
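To make the setting concrete, here is a minimal sketch (not taken from the book) of the basic idea behind simulation-based MDP algorithms: when only a black-box simulator of the transition and reward dynamics is available, finite-horizon Q-values can be estimated by recursive Monte Carlo sampling. The toy two-state MDP, the uniform sampling budget, and all names below are illustrative assumptions; the adaptive multi-stage sampling algorithms developed in the book refine this scheme considerably.

# Sketch of simulation-based value estimation for an MDP whose model is
# available only through a simulator (illustrative toy example, not the
# book's algorithm).
import random

ACTIONS = (0, 1)
GAMMA = 0.9              # discount factor
SAMPLES_PER_ACTION = 8   # simulation budget per (state, action) pair


def simulate(state, action, rng):
    """Hypothetical black-box simulator: returns (reward, next_state)."""
    if action == 0:
        reward = 1.0
        next_state = state           # safe action: stay put, sure reward
    else:
        reward = 2.0 if rng.random() < 0.5 else 0.0
        next_state = 1 - state       # risky action: random reward, flip state
    return reward, next_state


def sampled_value(state, horizon, rng):
    """Multi-stage sampling estimate of the optimal value over `horizon` steps."""
    if horizon == 0:
        return 0.0
    best = float("-inf")
    for action in ACTIONS:
        total = 0.0
        for _ in range(SAMPLES_PER_ACTION):
            reward, next_state = simulate(state, action, rng)
            total += reward + GAMMA * sampled_value(next_state, horizon - 1, rng)
        best = max(best, total / SAMPLES_PER_ACTION)
    return best


if __name__ == "__main__":
    rng = random.Random(0)
    print("estimated V(0) over 3 stages:", round(sampled_value(0, 3, rng), 3))

Allocating the per-action simulation budget adaptively across actions, rather than uniformly as above, is the kind of refinement treated in the multi-stage sampling material.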
Contents: Markov Decision Processes; Multi-stage Sampling; Population-based Evolutionary Approaches; Ordinal Comparison Method; Combining Multiple Policies for On-line Control.

Customer Reviews

There is no review for this item yet.