
Adaptive Optimal Control for Reference Tracking Independent of Exo-System Dynamics [in press]

Köpf, Florian; Westermann, Johannes; Flad, Michael; Hohmann, Sören

Abstract:
Model-free control based on Reinforcement Learning is a promising approach that has recently gained extensive attention. However, existing Reinforcement-Learning-based control methods either focus solely on the regulation problem or learn to track references generated by a time-invariant exo-system. In the latter case, the controller can only track the reference dynamics it was trained on and must be re-trained whenever those dynamics change. Consequently, such methods fail in applications where the reference trajectory is not generated by an exo-system, autonomous driving being one prominent example. This paper presents, for the first time, an adaptive optimal control method capable of tracking arbitrary reference trajectories that are provided on a moving horizon. The main innovation is a novel Q-function that directly incorporates the given reference trajectory. This Q-function exhibits a particular structure that allows the design of an efficient, iterative, provably convergent Reinforcement Learning algorithm for optimal tracking. Two real-world examples demonstrate the effectiveness of the new method.
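The paper itself defines the Q-function and the model-free learning algorithm precisely; the abstract only names the idea. As a rough, hypothetical illustration of a Q-function whose argument includes a reference preview, the following sketch augments the state of a scalar linear plant with the next N reference samples and computes the resulting quadratic Q-matrix by model-based value iteration. All numbers, the scalar plant, and the zero-fill of the preview window are assumptions of this sketch, not the paper's method (which learns the Q-matrix model-free and handles the preview rigorously).

```python
import numpy as np

# Hedged sketch: scalar plant x+ = a*x + b*u should track a reference of
# which only the next N samples r_k, ..., r_{k+N-1} are visible.  The
# Q-function is quadratic in the augmented vector [z; u] with
# z = [x, r_k, ..., r_{k+N-1}], illustrating how the reference preview
# becomes part of the Q-function's argument.
a, b = 0.9, 1.0      # plant parameters (assumed for this sketch)
q, rho = 1.0, 0.1    # tracking-error and input weights (assumed)
N = 3                # length of the visible reference preview

n = 1 + N            # augmented state dimension
# Augmented dynamics: the plant state evolves, the preview shifts left;
# the unknown sample r_{k+N} is zero-filled (an assumption of this sketch).
F = np.zeros((n, n))
F[0, 0] = a
for i in range(1, N):
    F[i, i + 1] = 1.0
G = np.zeros((n, 1))
G[0, 0] = b

# Stage cost q*(x_k - r_k)^2 + rho*u^2 written in augmented coordinates.
C = np.zeros((n, n))
C[0, 0] = q
C[0, 1] = C[1, 0] = -q
C[1, 1] = q

# Value iteration on the Q-matrix H = [[Hzz, Hzu], [Hzu^T, Huu]].
# (The paper learns H model-free; here it is computed from the model
# purely to expose the structure.)
P = np.zeros((n, n))
for _ in range(500):
    Hzz = C + F.T @ P @ F
    Hzu = F.T @ P @ G
    Huu = rho + (G.T @ P @ G)[0, 0]
    P_new = Hzz - Hzu @ Hzu.T / Huu   # V(z) = min_u Q(z, u)
    if np.linalg.norm(P_new - P) < 1e-10:
        P = P_new
        break
    P = P_new

K = -(Hzu / Huu).ravel()              # greedy policy u = K @ z

# Usage: track a sinusoid while seeing only the next N samples.
r = np.sin(0.2 * np.arange(60))
x, err = 0.0, []
for k in range(60 - N):
    z = np.concatenate(([x], r[k:k + N]))
    err.append(abs(x - r[k]))
    x = a * x + b * (K @ z)
```

Note one structural consequence the sketch makes visible: the gain on the current sample r_k is exactly zero, since u_k can no longer influence the cost term (x_k - r_k)^2; the controller acts on the *future* preview samples, which is precisely why providing the reference on a moving horizon matters.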



Associated KIT institution(s): Institut für Regelungs- und Steuerungssysteme (IRS)
Institut für Anthropomatik und Robotik (IAR)
Publication type: Journal article
Publication year: 2020
Language: English
Identifier: ISSN 0925-2312, 1872-8286
KITopen ID: 1000118892
Published in: Neurocomputing
Keywords: Adaptive Dynamic Programming, Optimal Tracking, Reinforcement Learning, Optimal Control