We investigate quality assurance and motivation in peer-production settings, focusing on the collaborative creation of structured knowledge. We study how rating-based incentive mechanisms can increase the quality of the knowledge created, and how classification accuracy can be improved, particularly in the presence of low-competence raters. Finally, we analyze how authors at a scientific conference rate peer reviews, and how authors' ratings can increase the quality of the reviews.