There are real-world data sets where a linear approximation such as the principal components may not capture the intrinsic characteristics of the data. Nonlinear dimensionality reduction, or manifold learning, uses a graph-based approach to model the local structure of the data. Manifold learning algorithms assume that the data resides on a low-dimensional manifold embedded in a higher-dimensional space. For real-world data sets this assumption may not be evident. However, using manifold learning for a classification task can yield better performance than a corresponding procedure based on the principal components of the data. We show that this is the case for our hyperspectral data set using two manifold learning algorithms: Laplacian eigenmaps and locally linear embedding.
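To make the graph-based idea concrete, the following is a minimal sketch of the Laplacian eigenmaps construction, not the authors' implementation: it assumes a simple unweighted symmetric k-nearest-neighbour graph (rather than, e.g., heat-kernel weights) and solves the generalized eigenproblem via the normalized Laplacian. The function name and parameter choices are illustrative.

```python
import numpy as np

def laplacian_eigenmaps(X, n_components=2, k=10):
    """Embed rows of X into n_components dimensions via Laplacian eigenmaps.

    Assumes an unweighted symmetric kNN graph; heat-kernel weights are a
    common alternative.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    # k nearest neighbours per point (column 0 of the argsort is the point
    # itself at distance 0, so skip it).
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i]] = 1.0
    W = np.maximum(W, W.T)  # symmetrize the adjacency matrix
    deg = W.sum(axis=1)
    L = np.diag(deg) - W  # unnormalized graph Laplacian
    # Solve L v = lambda D v via the symmetric form D^{-1/2} L D^{-1/2}.
    Dm = np.diag(1.0 / np.sqrt(deg))
    vals, vecs = np.linalg.eigh(Dm @ L @ Dm)
    # Drop the trivial constant eigenvector (smallest eigenvalue ~ 0) and
    # map back from the symmetric form.
    return Dm @ vecs[:, 1:n_components + 1]

# Toy usage: embed points sampled near a noisy circle (a 1-D manifold in 2-D).
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((200, 2))
Y = laplacian_eigenmaps(X, n_components=2, k=10)
print(Y.shape)  # (200, 2)
```

In a classification pipeline, the low-dimensional coordinates `Y` would replace the principal-component scores as input features; locally linear embedding follows the same graph-then-eigenproblem pattern but reconstructs each point from affine combinations of its neighbours.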