Outlier detection searches for unusual, rare observations in large, often high-dimensional data sets.
One of the fundamental challenges of outlier detection is that ``unusual'' typically depends on the perception of a user, the recipient of the detection result.
This makes it difficult to find a formal definition of ``unusual'' that matches user expectations.
One way to deal with this issue is active learning, i.e., methods that ask users to provide auxiliary information, such as class label annotations, to return algorithmic results that are more in line with the user input.
Active learning is well-suited for outlier detection, and many such methods have been proposed in recent years.
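To make the setting concrete, active learning for outlier detection can be sketched as a pool-based feedback loop: score the unlabeled pool, query the user about the most suspicious point, and refine the model with the feedback. Everything in the snippet below (the distance-based score, the simulated oracle, the query budget) is an illustrative assumption, not a method from this thesis.

```python
# Minimal sketch of a pool-based active learning loop for outlier detection.
# The score function, oracle, and budget are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled pool: inliers around the origin plus a few planted outliers.
inliers = rng.normal(0.0, 1.0, size=(200, 2))
outliers = rng.normal(6.0, 1.0, size=(5, 2))
pool = np.vstack([inliers, outliers])
true_labels = np.array([0] * 200 + [1] * 5)  # hidden ground truth

def score(points, center):
    """Outlier score: Euclidean distance from the current center estimate."""
    return np.linalg.norm(points - center, axis=1)

center = pool.mean(axis=0)
labeled = {}   # index -> user feedback (0 = inlier, 1 = outlier)
budget = 10    # number of annotations the user is willing to provide

for _ in range(budget):
    s = score(pool, center)
    s[list(labeled)] = -np.inf           # never query the same point twice
    query = int(np.argmax(s))            # ask about the most suspicious point
    labeled[query] = int(true_labels[query])  # simulated user feedback
    # Re-estimate the center from points the user confirmed as inliers.
    confirmed = [i for i, y in labeled.items() if y == 0]
    if confirmed:
        center = pool[confirmed].mean(axis=0)

found = sum(labeled.values())
print(f"outliers found with {budget} queries: {found}")
```

The loop queries greedily by score; real methods differ mainly in the scoring model and the query strategy, which is exactly where user-centric design choices enter.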
However, existing methods build upon strong assumptions.
One example is the assumption that users can always provide accurate feedback, regardless of how algorithmic results are presented to them -- an assumption which is unlikely to hold when data is high-dimensional.
It is an open question to what extent existing assumptions stand in the way of realizing active learning in practice.
In this thesis, we study this question from different perspectives with a differentiated, user-centric view on active learning.
We begin by structuring and unifying the research area of active learning for outlier detection.
Specifically, we present a rigorous specification of the learning setup, structure the basic building blocks, and propose novel evaluation standards.
Throughout our work, this structure has proven essential for selecting a suitable active learning method and for assessing novel contributions in this field.
We then present two algorithmic contributions to make active learning for outlier detection user-centric.
First, we bring together two research areas that have so far been studied independently: outlier detection in subspaces and active learning.
Subspace outlier detection comprises methods that improve detection quality in high-dimensional data and make detection results easier to interpret.
Our approach combines these methods with active learning so that one can balance detection quality against annotation effort.
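The motivation behind subspace methods can be illustrated with a small experiment; the kNN-distance score and the planted outlier below are our own simplification, not the method proposed in this thesis. An observation that is clearly anomalous in a two-dimensional subspace can be much harder to spot when scores are computed over all dimensions.

```python
# Illustrative sketch: an outlier visible in a 2-D subspace is diluted
# when outlier scores are computed over all dimensions.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))   # 10-dimensional data, all inliers so far
X[0, :2] = [4.0, 4.0]            # plant an outlier in subspace {0, 1}

def knn_dist(data, k=10):
    """Outlier score: distance to the k-th nearest neighbor."""
    d = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]  # column 0 holds the self-distance of 0

full_scores = knn_dist(X)          # scores over all 10 dimensions
sub_scores = knn_dist(X[:, :2])    # scores in the relevant subspace only

print("rank of planted outlier, full space:",
      np.argsort(-full_scores).tolist().index(0))
print("rank of planted outlier, subspace :",
      np.argsort(-sub_scores).tolist().index(0))
```

In the subspace, the planted point receives a far larger score than typical points, while the eight irrelevant dimensions wash out the distance contrast in the full space. Searching for such subspaces is what makes these methods costly, and where annotation effort becomes a relevant trade-off.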
Second, we address one of the fundamental difficulties with adapting active learning to specific applications: selecting good hyperparameter values.
Existing methods for estimating hyperparameter values are heuristics, and it is unclear in which settings they work well.
We therefore propose the first principled method for hyperparameter estimation.
Our approach relies on active learning and, in addition, returns a quality estimate for the selected values.
In the last part of the thesis, we turn to the practical validation of active learning for outlier detection.
We identify several technical and conceptual challenges that we have experienced firsthand in our research.
We structure and document them, and finally derive a roadmap towards validating active learning for outlier detection with user studies.