We consider stochastic control problems with jump-diffusion processes
and formulate an algorithm that produces, starting from a given admissible control Pi, a new control with a better value; if no improvement is possible, then Pi is optimal. Such an algorithm is well known for discrete-time Markov Decision Problems under the name of Howard's policy improvement algorithm, an idea that can be traced back to Bellman. Here we show, with the help of martingale techniques, that such an algorithm can also be formulated for stochastic control problems with jump-diffusion processes. As an application, we derive some results in portfolio optimization.
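As a point of reference, the discrete-time analogue named in the abstract, Howard's policy improvement for a finite Markov Decision Problem, can be sketched as below. This is only an illustration of the classical algorithm, not the continuous-time method of the paper; the two-state MDP (transition tensor `P`, reward matrix `r`, discount `gamma`) is a hypothetical example chosen for brevity.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustration only, not from the paper).
# P[a, s, s'] = transition probability from s to s' under action a.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
# r[s, a] = one-step reward for taking action a in state s.
r = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

def policy_iteration(P, r, gamma):
    n_actions, n_states, _ = P.shape
    pi = np.zeros(n_states, dtype=int)  # start from an arbitrary admissible policy
    while True:
        # Policy evaluation: solve the linear system (I - gamma * P_pi) v = r_pi.
        P_pi = P[pi, np.arange(n_states)]          # rows of P selected by pi
        r_pi = r[np.arange(n_states), pi]
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: act greedily with respect to the value v.
        q = r.T + gamma * P @ v                    # q[a, s]
        pi_new = q.argmax(axis=0)
        # If no improvement is possible, the current policy is optimal.
        if np.array_equal(pi_new, pi):
            return pi, v
        pi = pi_new

pi_opt, v_opt = policy_iteration(P, r, gamma)
```

Each iteration produces a policy whose value is at least as good in every state, mirroring the improvement step that the paper carries over, via martingale arguments, to controlled jump-diffusions.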