A State-Space Representation Model and Learning Algorithm for Real-Time Decision-Making Under Uncertainty

Andreas A. Malikopoulos, Panos Y. Papalambros, Dennis N. Assanis
Proceedings of ASME IMECE 2007, Volume 9: Mechanical Systems and Control, Parts A, B, and C
Modeling dynamic systems subject to stochastic disturbances in order to derive a control policy is a ubiquitous task in engineering. In some instances, however, obtaining a model of a system may be impractical or impossible. Alternative approaches have been developed using a simulation-based stochastic framework, in which the system interacts with its environment in real time and obtains information that can be processed to produce an optimal control policy. In this context, the problem of developing a policy for controlling the system's behavior is formulated as a sequential decision-making problem under uncertainty. This paper considers real-time sequential decision-making under uncertainty modeled as a Markov Decision Process (MDP). A state-space representation model is constructed through a learning mechanism and is used to improve system performance over time. The model allows decision making based on gradually enhanced knowledge of the system's response as it transitions from one state to another, in conjunction with the actions taken at each state. A learning algorithm realizes, in real time, the optimal control policy associated with the state transitions. The proposed method is demonstrated on the single cart-pole balancing problem and a vehicle cruise-control problem.
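The abstract's setting, learning an optimal policy purely from real-time interaction with an unmodeled stochastic system, can be illustrated with standard tabular Q-learning. This is a minimal sketch on a toy two-state MDP, not the paper's own algorithm or its cart-pole/cruise-control formulations; the transition table, rewards, and hyperparameters below are illustrative assumptions.

```python
import random

N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# Toy MDP (an assumption for illustration): transition[s][a] = (next_state, reward).
# Action 1 yields reward 1.0 in both states, so the optimal policy is "always act 1".
transition = {
    0: {0: (1, 0.0), 1: (0, 1.0)},
    1: {0: (0, 0.0), 1: (1, 1.0)},
}

def q_learn(episodes=2000, steps=20, seed=0):
    """Learn Q-values from simulated interaction, without a prior system model."""
    rng = random.Random(seed)
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        for _ in range(steps):
            # Epsilon-greedy action selection: mostly exploit, occasionally explore.
            if rng.random() < EPSILON:
                a = rng.randrange(N_ACTIONS)
            else:
                a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
            s2, r = transition[s][a]
            # Temporal-difference update toward the one-step lookahead target.
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learn()
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)  # greedy policy recovered from interaction alone: [1, 1]
```

The point of the sketch is the one the abstract makes: knowledge of state-transition behavior accumulates gradually from observed (state, action, next-state, reward) tuples, and the control policy is read off from that accumulated estimate rather than from an a priori system model.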
doi:10.1115/imece2007-41258