Algorithms, including artificial intelligence, are used in many ways to differentiate between people and to allocate services, products or positions. Using examples, this study illustrates the technical and organisational causes of discrimination risks and analyses the resulting forms of discrimination. Its particular focus is on the societal risks of algorithmic differentiation and automated decision-making, including injustice through generalisation, the treatment of people as mere objects, restrictions on the free development of personality and on informational self-determination, accumulation effects and growing inequality, as well as risks to societal goals of equality and social policy. These risks call not only for reforms of anti-discrimination and data protection law, but also for societal deliberation on, and definition of, which kinds of algorithmic differentiation a society considers acceptable in order to protect fundamental rights and values. Finally, the study discusses tasks for anti-discrimination agencies and equality bodies, ranging from identifying and proving algorithm-based discrimination to preventive and cooperative measures.