[{"type":"paper-conference","title":"Finite-Horizon Optimal State-Feedback Control of Nonlinear Stochastic Systems Based on a Minimum Principle","issued":{"date-parts":[["2006"]]},"page":"371-376","container-title":"Proceedings \/ 2006 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, 3-4 Sept. 2006, Heidelberg, Germany","author":[{"family":"Deisenroth","given":"Marc P."},{"family":"Ohtsuka","given":"Toshiyuki"},{"family":"Weissel","given":"Florian"},{"family":"Brunn","given":"Dietrich"},{"family":"Hanebeck","given":"Uwe D."}],"publisher":"IEEE Service Center","publisher-place":"Piscataway (NJ)","ISBN":"1-4244-0566-1","abstract":"In this paper, an approach to the finite-horizon optimal state-feedback control problem of nonlinear, stochastic, discrete-time systems is presented. Starting from the dynamic programming equation, the value function will be approximated by means of Taylor series expansion up to second-order derivatives. Moreover, the problem will be reformulated, such that a minimum principle can be applied to the stochastic problem. Employing this minimum principle, the optimal control problem can be rewritten as a two-point boundary-value problem to be solved at each time step of a shrinking horizon. To avoid numerical problems, the two-point boundary-value problem will be solved by means of a continuation method. Thus, the curse of dimensionality of dynamic programming is avoided, and good candidates for the optimal state-feedback controls are obtained. The proposed approach will be evaluated by means of a scalar example system.","kit-publication-id":"1000013896"}]