Algorithmic Fairness

Machine learning and data analysis have enjoyed tremendous success in a broad range of domains. These advances hold the promise of great benefits to individuals, organizations, and society. Undeniably, algorithms are informing decisions that reach ever more deeply into our lives, from news article recommendations to criminal sentencing decisions to healthcare diagnostics. This progress, however, raises (and is impeded by) a host of concerns regarding the societal impact of computation. A prominent concern is that these algorithms should be fair.

Unfortunately, the hope that automated decision-making might be free of social biases is dashed by the data on which the algorithms are trained and by the choices made in their construction: left to their own devices, algorithms will propagate, and even amplify, the existing biases of the data, the programmers, the choice of features to incorporate, and the measures of 'fitness' to be applied. Addressing wrongful discrimination by algorithms is not only mandated by law and by ethics but is essential to maintaining public trust in the current computation-driven revolution.

The study of fairness is ancient and multi-disciplinary: philosophers, legal experts, economists, statisticians, social scientists, and others have been concerned with fairness for as long as these fields have existed. Nevertheless, the scale of decision making in the age of big data, the computational complexities of algorithmic decision making, and simple professional responsibility mandate that computer scientists contribute to this research endeavor. This course is an introduction to this booming area of research.

Course Page
$4,368.00 (subject to change)
Online, instructor-led
Stanford School of Engineering