This work deals with the optimal control problem for Piecewise Deterministic Markov Processes (PDMP) under Partial Observation (PO).
The total expected discounted cost over the lifetime of the process is to be minimized, while neither the states of the PDMP nor the current or accumulated cost can be observed. Only noisy measurements (with known noise distribution) of the post-jump states are available. The cost function, however, depends on the trajectory of the unobservable PDMP as well as on the observable noisy measurements of the post-jump states.
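To fix ideas, a schematic form of such a criterion (the notation is illustrative and not taken from this work): writing $(X_t)_{t\ge 0}$ for the PDMP, $Z_n$ for its post-jump states, $Y_n$ for the noisy measurements of $Z_n$, $\beta>0$ for a discount factor and $c$ for a running cost, one may think of minimizing $E\big[\int_0^\infty e^{-\beta t}\, c(X_t, Y_{N_t})\, dt\big]$, where $N_t$ counts the jumps up to time $t$, so that the cost may depend both on the unobservable trajectory and on the latest observable measurement; the precise dependence assumed in this work may differ.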
Admissible control strategies are history-dependent relaxed piecewise open-loop strategies: for each point in time, and depending on the observable history up to that time, a probability distribution on the action space is selected. This probability distribution defines an expected control action on the jump rate, the drift, and the transition kernel at jump times of the PDMP.
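Recall that a PDMP is determined by its local characteristics: a deterministic flow (drift) between jumps, a jump rate $\lambda$, and a transition kernel $Q$ governing the post-jump states. As an illustrative sketch of the relaxed formulation (notation not from this work): if $r_t(da)$ denotes the probability distribution selected on the action space $A$ at time $t$, the controlled jump rate becomes the average $\int_A \lambda(x,a)\, r_t(da)$, the controlled drift becomes $\int_A \mu(x,a)\, r_t(da)$, and the distribution of the post-jump state is obtained by averaging $Q(\,\cdot \mid x,a)$ over $r_t(da)$ in an analogous (suitably rate-weighted) manner.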
We first transform the initial continuous-time optimization problem under PO into an equivalent discrete-time optimization problem under PO. For the latter, we derive a recursive formulation of the filter, i.e. the probability distribution of the unobservable post-jump state of the PDMP given the observable history. This leads to an equivalent fully observable optimization problem in discrete time.
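Schematically (again with illustrative notation), the filter is the conditional distribution $\theta_n = P(Z_n \in \cdot \mid Y_0,\dots,Y_n, \text{past actions})$ of the $n$-th post-jump state given the observable history, and the recursion has a Bayes-type form $\theta_{n+1} = \Phi(\theta_n, a_n, Y_{n+1})$, where the updating operator $\Phi$ combines the controlled transition kernel of the post-jump states with the known observation noise distribution. The filter then serves as the fully observable state variable of the discrete-time problem.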
Classical approaches of stochastic dynamic programming in combination with results for measurable selection of optimizers are then applied to prove the existence of optimal control strategies.
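In schematic form, the value function $v$ on the set of filter distributions satisfies a Bellman-type equation $v(\theta) = \inf_{a \in A} \big\{ \hat c(\theta,a) + \int v\big(\Phi(\theta,a,y)\big)\, \hat Q(dy \mid \theta,a) \big\}$ (illustrative notation, with the discounting absorbed into $\hat c$ and $\hat Q$), and measurable selection theorems ensure that the infimum is attained by a measurable map $\theta \mapsto f(\theta)$, from which an optimal strategy can be assembled.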
We derive sufficient conditions for the existence of optimal control strategies for lower semi-continuous cost functions and in the case of finite-dimensional filters, i.e. when the set of possible post-jump states of the PDMP is finite.
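As a purely illustrative numerical sketch of the finite-dimensional case (all names, matrices, and numbers below are hypothetical and not taken from this work), one Bayes update of the filter can be written as follows: the filter is a probability vector over the finite set of possible post-jump states, propagated through an action-dependent transition matrix and reweighted by the observation likelihoods.

    import numpy as np

    def filter_update(theta, P_a, likelihood_y):
        # theta:        current filter, probability vector over the finite set of post-jump states
        # P_a:          transition matrix of the post-jump states under the chosen action a
        # likelihood_y: likelihood of the observed noisy measurement under each post-jump state
        predicted = theta @ P_a                   # propagate through the controlled kernel
        unnormalized = predicted * likelihood_y   # reweight by the observation likelihood
        return unnormalized / unnormalized.sum()  # normalize (Bayes' rule)

    # hypothetical three-state illustration (numbers made up):
    theta0 = np.array([1/3, 1/3, 1/3])                  # uniform prior over three post-jump states
    P_a = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.8, 0.1],
                    [0.2, 0.2, 0.6]])                   # assumed controlled transition matrix
    likelihood_y = np.array([0.5, 0.3, 0.2])            # assumed noise densities at the observed y
    print(filter_update(theta0, P_a, likelihood_y))     # updated filter, again a probability vector

The updated filter again lies in the finite-dimensional probability simplex, which is what keeps the discrete-time dynamic programming tractable in this case.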
Finally, we apply the theory developed in this work to a concrete example of a three-state problem that could arise, e.g., from moving particles whose trajectories are disturbed at random points in time.