When training a classifier, the choice of loss function heavily influences the characteristics of the resulting model.
The most commonly used loss function for classification is cross entropy. In image segmentation problems, where each pixel is assigned to a particular class, overlap-based losses have recently been shown to improve classifier performance, especially for datasets with an imbalanced class distribution. This is particularly relevant to segmentation because the class-imbalance mitigation strategies used in regular classification are often not applicable. Overlap-based losses, however, come with their own drawbacks. We aim to combine the strengths of different losses through a simple scheduling scheme during training while minimizing their weaknesses. Gradually transitioning from an overlap-based Dice loss to cross entropy allows us to reliably select a distinct minimum in the optimization landscape, providing a valuable alternative to results obtained with traditional unscheduled loss functions. We demonstrate the efficacy of our approach on different combinations of loss functions, datasets, and models.
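To make the idea of a scheduled transition concrete, the following is a minimal NumPy sketch of one possible implementation: a soft Dice loss and a binary cross-entropy loss blended by a weight that rises linearly from 0 to 1 over training. The linear schedule, the function names, and the binary (per-pixel probability) setting are illustrative assumptions, not necessarily the exact scheme described here.

```python
import numpy as np

def soft_dice_loss(probs, targets, eps=1e-6):
    # probs, targets: flat arrays of per-pixel foreground probabilities / labels in [0, 1]
    intersection = np.sum(probs * targets)
    denom = np.sum(probs) + np.sum(targets)
    # 1 - Dice coefficient; eps avoids division by zero on empty masks
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

def cross_entropy_loss(probs, targets, eps=1e-7):
    # Binary cross entropy averaged over pixels; clip to keep log() finite
    probs = np.clip(probs, eps, 1.0 - eps)
    return -np.mean(targets * np.log(probs) + (1.0 - targets) * np.log(1.0 - probs))

def scheduled_loss(probs, targets, epoch, total_epochs):
    # alpha rises linearly from 0 (pure Dice) to 1 (pure cross entropy)
    alpha = min(epoch / max(total_epochs - 1, 1), 1.0)
    return (1.0 - alpha) * soft_dice_loss(probs, targets) \
        + alpha * cross_entropy_loss(probs, targets)
```

Because the combined loss is a convex combination of the two terms, its value at any epoch lies between the pure Dice and pure cross-entropy values; at the first epoch it equals the Dice loss and at the last epoch it equals cross entropy.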