In this work, we improve the semantic segmentation of multi-layer top-view grid maps in the context of LiDAR-based perception for autonomous vehicles. To achieve this goal, we fuse sequential information from multiple consecutive LiDAR measurements with respect to the driven trajectory of an autonomous vehicle. By doing so, we enrich the multi-layer grid maps that are subsequently used as the input of a neural network. Our approach enables LiDAR-only 360° surround-view semantic scene segmentation while remaining suitable for real-time critical systems. We evaluate the benefit of fusing sequential information based on a dense ground truth and discuss the effect on different semantic classes.
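The core idea of the abstract (fusing consecutive LiDAR scans along the driven trajectory into multi-layer top-view grid maps) can be illustrated with a minimal sketch. This is an illustrative example only, not the paper's implementation: the function names, the 4x4 homogeneous pose interface, and the choice of layers (point density and maximum height) are assumptions for the sketch.

```python
import numpy as np

def transform_points(points, pose_src, pose_dst):
    """Map 3-D points from a source frame into a destination frame.

    Hypothetical interface: poses are 4x4 homogeneous transforms of the
    ego vehicle, e.g. from odometry along the driven trajectory.
    """
    rel = np.linalg.inv(pose_dst) @ pose_src     # relative transform src -> dst
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homog @ rel.T)[:, :3]

def rasterize(points, cell_size=0.5, extent=10.0):
    """Accumulate ego-motion-compensated points into two example top-view
    grid-map layers: per-cell point density and maximum height."""
    n = int(2 * extent / cell_size)
    density = np.zeros((n, n))
    max_h = np.full((n, n), -np.inf)
    ix = ((points[:, 0] + extent) / cell_size).astype(int)
    iy = ((points[:, 1] + extent) / cell_size).astype(int)
    ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
    for x, y, z in zip(ix[ok], iy[ok], points[ok, 2]):
        density[x, y] += 1
        max_h[x, y] = max(max_h[x, y], z)
    return density, max_h

# Fuse a previous scan into the current frame, then rasterize both together.
pose_prev = np.eye(4); pose_prev[0, 3] = 1.0   # ego moved 1 m along x
pose_curr = np.eye(4)
scan_prev = np.array([[0.0, 0.0, 1.5]])        # point in previous ego frame
scan_curr = np.array([[0.0, 0.0, 2.0]])
fused = np.vstack([transform_points(scan_prev, pose_prev, pose_curr), scan_curr])
density, max_h = rasterize(fused, cell_size=1.0, extent=5.0)
```

The stacked layers (`density`, `max_h`, and any others) would then form the multi-channel input tensor for a segmentation network.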