Designing Reliable and Robust AI Systems

As artificial intelligence is integrated into our daily lives, its reliability and safety are more important than ever. In this one-week course, you will learn the fundamental concepts behind designing reliable AI systems for a range of applications.

You will gain an understanding of core principles and techniques for building safe and robust machine learning models. In addition, you will explore the application of these concepts and techniques in short assignments, supported by course facilitators and the instructor.

By the end of the course, you will be able to:

  • Understand the key assumptions underlying modern statistical machine learning, and identify situations where these assumptions can break down
  • Apply techniques for improving model robustness, including data augmentation, distributionally robust optimization, and fine-tuning from pre-trained models
  • Evaluate models using explainability and failure identification techniques, including importance sampling for probability of failure estimation
  • Validate machine learning models, including using stochastic methods such as randomized smoothing, and formal methods such as neural network verification
  • Quantify uncertainty in machine learning models, including aleatoric vs. epistemic uncertainty, and apply common algorithms for estimating uncertainty
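To give a flavor of one topic above, here is a minimal sketch of importance sampling for probability-of-failure estimation. The toy failure model (a standard normal "performance metric" that fails above a threshold of 4) is an illustrative assumption, not taken from the course materials; the idea is to draw samples from a proposal centered on the rare failure region and reweight them by the likelihood ratio, which dramatically reduces the variance of the estimate compared with naive Monte Carlo.

```python
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of a normal distribution at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def importance_sample_failure_prob(threshold, n_samples=100_000, seed=0):
    """Estimate P(X > threshold) for X ~ N(0, 1) by sampling from a
    proposal N(threshold, 1) centered on the failure region and
    reweighting each failure by p(x) / q(x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(threshold, 1.0)          # draw from the proposal q
        if x > threshold:                      # failure indicator
            total += normal_pdf(x) / normal_pdf(x, mu=threshold)
    return total / n_samples

# Naive Monte Carlo would need ~10^7 samples to see even a handful of
# failures at this threshold; the reweighted estimate converges with far fewer.
estimate = importance_sample_failure_prob(4.0)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # true tail probability, ~3.2e-5
```

The proposal distribution here is a simple mean shift; in practice, choosing a good proposal is itself a central design question covered by more advanced treatments of rare-event estimation.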

The week-long online course includes live daily lectures, daily assignments, office hours, and working sessions where you can receive hands-on guidance as you work through the course materials. You will be invited to join a dedicated Slack channel so you can connect directly with other students. (Review the course outline.)

Online, instructor-led
Jul 24 - Jul 28, 2023
One week: 2-5 hours a day
Stanford School of Engineering