The Cambridge Handbook of Responsible Artificial Intelligence has been launched today. The Handbook offers conceptual, technical, ethical, social, and legal perspectives on “Responsible AI” and is available in print as well as Open Access.
Our director Prof. Dr. Antje von Ungern-Sternberg authored the chapter “Discriminatory AI and the Law”. The chapter examines the legality of discriminatory AI, which is increasingly used to assess people (profiling).
Intelligent algorithms – free of human prejudices and stereotypes – would prevent discriminatory decisions, or so the story goes. However, many studies show that the use of AI can lead to discriminatory outcomes. From a legal point of view, this raises the question of whether the law as it stands prohibits objectionable forms of differential treatment and detrimental impact. In the legal literature on automated profiling, some authors have suggested that we need a ‘right to reasonable inferences’, i.e. a certain methodology for AI algorithms affecting humans. von Ungern-Sternberg takes up this idea with respect to discriminatory AI and argues that such a right already exists in antidiscrimination law: the need to justify differential treatment and detrimental impact implies that profiling methods must meet certain standards. Developing and establishing those methodological standards is now a major challenge for lawyers as well as data and computer scientists.