The growing importance of 3D scene understanding and interpretation is inherently connected to the rise of autonomous driving and robotics. Semantic segmentation of 3D point clouds is a key enabler for this task, providing geometric information enhanced with semantics. To use Convolutional Neural Networks, a proper representation of the point clouds must be chosen. Various representations have been proposed, each with different advantages and disadvantages. In this work, we present a twin-representation architecture, composed of a 3D point-based branch and a 2D range-image branch, to efficiently extract and refine point-wise features, supported by strong context information. Additionally, a feature propagation strategy is proposed to connect both branches. The approach is evaluated on the challenging SemanticKITTI dataset and considerably outperforms the baseline overall as well as for every individual class. In particular, the predictions for distant points are significantly improved.
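The 2D range-image representation mentioned above is commonly obtained by spherically projecting each LiDAR point onto an image grid. The following is a minimal illustrative sketch of such a projection, not the paper's implementation: the function name is hypothetical, and the 64x2048 resolution and vertical field of view are assumed defaults typical for the 64-beam sensor used to record SemanticKITTI. The returned per-point pixel indices are what make it possible to propagate features between the 2D and 3D branches.

```python
import numpy as np

def project_to_range_image(points, H=64, W=2048, fov_up=3.0, fov_down=-25.0):
    """Spherically project an (N, 3) LiDAR point cloud to an (H, W) range image.

    Each pixel stores the range (depth) of the point falling into it.
    Returns the image plus per-point (row, col) indices, which allow 2D
    features to be gathered back to the individual 3D points.
    """
    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)  # range of each point

    yaw = np.arctan2(y, x)  # azimuth angle in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    # normalize angles to pixel coordinates
    col = ((1.0 - yaw / np.pi) * 0.5 * W).astype(np.int32)
    row = ((fov_up_rad - pitch) / fov * H).astype(np.int32)
    col = np.clip(col, 0, W - 1)
    row = np.clip(row, 0, H - 1)

    img = np.full((H, W), -1.0, dtype=np.float32)  # -1 marks empty pixels
    # write far points first so nearer points overwrite them on collision
    order = np.argsort(r)[::-1]
    img[row[order], col[order]] = r[order]
    return img, row, col
```

In practice such images carry additional channels (e.g. x, y, z, intensity) alongside range, but the index bookkeeping shown here is the part that connects the two representations.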