
Learning Representation and Control in Markov Decision Processes: New Frontiers

Part of the Foundations and Trends® in Machine Learning series

Learning Representation and Control in Markov Decision Processes describes methods for automatically compressing Markov decision processes (MDPs) by learning a low-dimensional linear approximation defined by an orthogonal set of basis functions.

A unique feature of the text is the use of Laplacian operators, whose matrix representations have non-positive off-diagonal elements and zero row sums.
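To make this property concrete, here is a minimal sketch (an illustrative toy graph, not an example from the book) that builds the combinatorial Laplacian L = D - A of a small state graph and checks both properties:

```python
import numpy as np

# Adjacency matrix of a 4-state chain graph (illustrative assumption).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # combinatorial graph Laplacian

# Off-diagonal entries are non-positive and every row sums to zero.
assert np.all(L - np.diag(np.diag(L)) <= 0)
assert np.allclose(L.sum(axis=1), 0.0)
```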

The generalized inverses of Laplacian operators, in particular the Drazin inverse, are shown to be useful in the exact and approximate solution of MDPs. The author goes on to describe a broad framework for solving MDPs, generically referred to as representation policy iteration (RPI), in which both the basis-function representation used to approximate value functions and the optimal policy within its linear span are learned simultaneously.
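As one hedged illustration of the Drazin-inverse claim: for an ergodic Markov chain with transition matrix P, the Laplacian L = I - P has index one, so its Drazin inverse coincides with the group inverse and can be obtained from the stationary projector via Meyer's fundamental-matrix construction. The chain below is an arbitrary example, not one from the book:

```python
import numpy as np

# Transition matrix of an ergodic 3-state chain (illustrative assumption).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
n = P.shape[0]

# Stationary distribution pi: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

P_star = np.tile(pi, (n, 1))                      # limiting matrix: every row is pi
L = np.eye(n) - P
L_drazin = np.linalg.inv(L + P_star) - P_star     # group (= Drazin) inverse of L

# Check the defining group-inverse identities.
assert np.allclose(L @ L_drazin @ L, L)
assert np.allclose(L_drazin @ L @ L_drazin, L_drazin)
assert np.allclose(L @ L_drazin, L_drazin @ L)
```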

Basis functions are constructed by diagonalizing a Laplacian operator or by dilating the reward function or an initial set of bases by powers of the operator.
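A minimal sketch of both constructions on a small chain MDP (the graph, reward, and basis sizes are illustrative assumptions, not the book's examples):

```python
import numpy as np

# Graph Laplacian of a 4-state chain (illustrative assumption).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# (i) Diagonalization: the smoothest eigenvectors of L serve as bases
# (often called proto-value functions in this literature).
_, eigvecs = np.linalg.eigh(L)            # L is symmetric here
diag_basis = eigvecs[:, :2]               # two lowest-frequency bases

# (ii) Dilation: span the reward by powers of the operator,
# i.e. the Krylov space {r, Lr, L^2 r, ...}.
r = np.array([0.0, 0.0, 0.0, 1.0])        # reward at the last state (assumed)
krylov = np.column_stack([np.linalg.matrix_power(L, k) @ r
                          for k in range(3)])
dilation_basis, _ = np.linalg.qr(krylov)  # orthonormalize the dilated bases
```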

The idea of decomposing an operator by finding its invariant subspaces is shown to be an important principle in constructing low-dimensional representations of MDPs.
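A brief hedged illustration, again on an assumed toy Laplacian: each eigenspace of a symmetric operator is an invariant subspace, and projecting onto a few of them yields the small diagonal block that constitutes the compressed representation:

```python
import numpy as np

# Toy Laplacian of a 4-state chain (illustrative assumption).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
lam, V = np.linalg.eigh(L)

# L maps span{v} into itself: Lv is a scalar multiple of v.
v = V[:, 1]
assert np.allclose(L @ v, lam[1] * v)

# Projecting L onto k eigenvectors gives a k x k diagonal block:
# the operator restricted to a low-dimensional invariant subspace.
B = V[:, :2]
assert np.allclose(B.T @ L @ B, np.diag(lam[:2]))
```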

Theoretical properties of these approaches are discussed, and they are also compared experimentally on a variety of discrete and continuous MDPs.

Finally, challenges for further research are briefly outlined. This is a timely exposition of a topic with broad interest within machine learning and beyond.

Price: £84.60 (RRP £94.00, save 10%)
Product Details
Publisher: now publishers Inc
ISBN: 1601982380 / 9781601982384
Format: Paperback / softback
Dewey classification: 519.233
Publication date: 02/06/2009
Country of publication: United States
Extent: 184 pages
Dimensions: 156 x 234 mm, 268 grams
Readership: Professional & Vocational