Machine Learning Theory

Note: This course will not be offered AY23-24

How do we use mathematical thinking to design better machine learning methods? This course develops the mathematical tools for answering this question, covering fundamental concepts and principled algorithms in machine learning, with a special focus on modern large-scale non-linear models such as matrix factorization models and deep neural networks. In particular, we will cover concepts and phenomena such as uniform convergence, the double descent phenomenon, and implicit regularization, as well as problems such as matrix completion, bandits, and online learning (and, more generally, sequential decision making under uncertainty).

Topics Include

  • Concentration inequalities
  • Generalization bounds via uniform convergence
  • Non-convex optimization
  • Implicit regularization effect in deep learning
  • Unsupervised learning
  • Domain adaptation

Note: This course is cross-listed with CS229M.


Course Page

Price: $4,200.00 (subject to change)
Delivery: Online, instructor-led
Level: Introductory
Commitment: 10 weeks, 9-15 hrs/week
Credit: Artificial Intelligence Graduate Certificate; Statistics Graduate Certificate
School: Stanford School of Humanities and Sciences
Language: English