
Markov Decision Processes of the Third Kind: Learning Distributions by Policy Gradient Descent

Bäuerle, Nicole 1; Vasileiadis, Athanasios 1
1 Institut für Stochastik (STOCH), Karlsruher Institut für Technologie (KIT)

Abstract:

The goal of this paper is to analyze distributional Markov Decision Processes as a class of control problems in which the objective is to learn policies that steer the distribution of a cumulative reward toward a prescribed target law, rather than to optimize an expected value or a risk functional. To solve the resulting distributional control problem in a model-free setting, we propose a policy-gradient algorithm based on neural-network parameterizations of randomized Markov policies defined on an augmented state space, together with a sample-based evaluation of a characteristic-function loss. Under mild regularity and growth assumptions, we prove convergence of the algorithm to stationary points using stochastic approximation techniques.
Several numerical experiments illustrate the ability of the method to match complex target distributions, recover classical optimal policies when they exist, and reveal intrinsic non-uniqueness phenomena specific to distributional control.
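Illustrative sketch: the following minimal Python/PyTorch snippet shows one way a sample-based characteristic-function loss, as mentioned in the abstract, could be evaluated and differentiated. The function names, frequency grid, weighting, and the assumption that simulated returns are differentiable in the policy parameters are our own illustrative choices, not the paper's implementation.

# Minimal sketch (assumptions, not the paper's implementation): compare the
# empirical characteristic function of cumulative rewards generated under a
# parameterized policy with that of samples from the prescribed target law.
import torch

def ecf(samples: torch.Tensor, freqs: torch.Tensor):
    # Empirical characteristic function E[exp(i t X)] at frequencies `freqs`,
    # returned as separate real and imaginary parts (shape: (n_freqs,)).
    phase = freqs[None, :] * samples[:, None]         # (batch, n_freqs)
    return torch.cos(phase).mean(dim=0), torch.sin(phase).mean(dim=0)

def cf_loss(rewards, target_samples, freqs, weights):
    # Weighted squared distance between the two empirical characteristic
    # functions, evaluated on a finite frequency grid.
    re_p, im_p = ecf(rewards, freqs)
    re_t, im_t = ecf(target_samples, freqs)
    return (weights * ((re_p - re_t) ** 2 + (im_p - im_t) ** 2)).sum()

# Hypothetical usage: `rewards` stands in for cumulative rewards simulated
# under the current (neural-network) policy in a differentiable way;
# `target_samples` are drawn from the prescribed target law.
freqs = torch.linspace(-5.0, 5.0, 64)
weights = torch.exp(-freqs ** 2)                      # damp high frequencies
rewards = torch.randn(1024, requires_grad=True)
target_samples = 0.5 + 1.5 * torch.randn(4096)
loss = cf_loss(rewards, target_samples, freqs, weights)
loss.backward()                                       # gradient w.r.t. policy parameters

In the actual algorithm, the gradient of such a loss would be estimated with policy-gradient (score-function) techniques for randomized Markov policies on the augmented state space; this sketch only illustrates the sample-based loss itself.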


Full text
DOI: 10.5445/IR/1000190868
Published on 23.02.2026
Original publication
DOI: 10.48550/arXiv.2602.06567
Associated KIT institution(s): Institut für Stochastik (STOCH)
Publication type: Research report / Preprint
Publication year: 2026
Language: English
Identifier: KITopen-ID: 1000190868
Publisher: Karlsruher Institut für Technologie (KIT)
Extent: 31 pages
Keywords: Distributional MDPs, Policy Gradient, Randomized controls
Indexed in: arXiv