Algorithms for Reinforcement Learning

Csaba Szepesvari
ISBN: 9781608454921 | PDF ISBN: 9781608454938
Hardcover ISBN: 9781681732138
Copyright © 2010 | 103 Pages | Publication Date: 01/01/2010


Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner’s predictions. Further, the predictions may have long-term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms’ merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications it can be used to address, ranging from problems in artificial intelligence to operations research and control engineering. In this book, we focus on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming. We give a fairly comprehensive catalog of learning problems, describe the core ideas, present a large number of state-of-the-art algorithms, and discuss their theoretical properties and limitations.
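
To give a flavor of the dynamic-programming foundation the description mentions, here is a minimal sketch (not taken from the book) of value iteration on a small Markov decision process. The two-state, two-action MDP, its transition probabilities, rewards, and discount factor are all invented for illustration.

import numpy as np

n_states, n_actions = 2, 2
gamma = 0.9  # discount factor (hypothetical)

# P[a][s, s'] = probability of moving from state s to s' under action a (hypothetical numbers)
P = np.array([
    [[0.8, 0.2],
     [0.1, 0.9]],   # action 0
    [[0.5, 0.5],
     [0.6, 0.4]],   # action 1
])

# R[s, a] = expected immediate reward for taking action a in state s (hypothetical numbers)
R = np.array([
    [1.0, 0.0],
    [0.0, 2.0],
])

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman optimality backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    V_new = Q.max(axis=1)          # V(s) = max_a Q(s,a)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)          # greedy policy with respect to the converged values
print("Optimal state values:", V)
print("Greedy policy:", policy)

Algorithms of this kind assume the transition model P and rewards R are known; the learning algorithms cataloged in the book estimate such quantities, or the value functions directly, from interaction with the system.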

Table of Contents

Markov Decision Processes
Value Prediction Problems
Control
For Further Exploration

About the Author(s)

Csaba Szepesvari, University of Alberta
Csaba Szepesvari received his PhD in 1999 from “Jozsef Attila” University, Szeged, Hungary. He is currently an Associate Professor in the Department of Computing Science at the University of Alberta and a principal investigator of the Alberta Ingenuity Center for Machine Learning. Previously, he held a senior researcher position at the Computer and Automation Research Institute of the Hungarian Academy of Sciences, where he headed the Machine Learning Group. Before that, he spent five years in the software industry. In 1998, he became Research Director of Mindmaker, Ltd., working on natural language processing and speech products, and from 2000 he served as Vice President of Research at the Silicon Valley company Mindmaker Inc. He is the coauthor of a book on nonlinear approximate adaptive controllers, has published over 80 journal and conference papers, serves as an Associate Editor of IEEE Transactions on Adaptive Control and AI Communications, is on the editorial boards of the Journal of Machine Learning Research and the Machine Learning journal, and is a regular member of program committees at various machine learning and AI conferences. His areas of expertise include statistical learning theory, reinforcement learning, and nonlinear adaptive control.
