Natural Language Understanding

From conversational agents to automated trading and search queries, natural language understanding underpins many of today’s most exciting technologies. How do we build these models to understand language efficiently and reliably? In this project-oriented course you will develop systems and algorithms for robust machine understanding of human language. The course draws on theoretical concepts from linguistics, natural language processing, and machine learning.

In the first half of the course, you will explore three fundamental tasks in natural language understanding: contextual language representation, information retrieval, and advanced behavioral evaluation of NLU models. Each topic includes a hands-on component where you will build baseline models; these baselines will help you develop your own original models, which you will enter into informal class-wide competitions.

In the second half of the course, you will pursue an original project in natural language understanding with a focus on following best practices in the field. Additional lectures and materials will cover important topics to help expand and improve your original system, including evaluations and metrics, semantic parsing, and grounded language understanding. Sample projects from previous learners in the course are available for reference.

  • Develop systems and algorithms for robust machine understanding of human language.
  • Build neural information retrieval systems using large language models.
  • Understand semantic and syntactic relations between words with Transformer-based contextual word representation models such as BERT, ELECTRA, and GPT.
  • Access desired information from texts with classical and neural information retrieval methods.
  • Design and conduct an NLU research project of your choosing.
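As a taste of the classical information retrieval methods mentioned above, the sketch below implements BM25 ranking, a standard term-statistics scoring function, in plain Python. The function name, documents, and parameter defaults (k1=1.5, b=0.75) are illustrative choices, not course-specified code.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each document against a query with BM25 (classical IR)."""
    tokenized = [doc.lower().split() for doc in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N  # average document length
    # document frequency: how many docs contain each term
    df = Counter()
    for d in tokenized:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for term in query_terms:
            if term not in tf:
                continue
            # inverse document frequency: rarer terms carry more weight
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            # term-frequency saturation with length normalization
            norm = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores

docs = [
    "neural language models learn contextual representations",
    "classical retrieval ranks documents by term statistics",
    "retrieval augmented generation combines both approaches",
]
scores = bm25_scores(["retrieval", "neural"], docs)
```

Neural retrieval systems replace these hand-built term statistics with learned dense representations, but BM25 remains the usual baseline to beat in the class competitions described above.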

Course Page
Online, instructor-paced
Feb 5 - Apr 14, 2024
10-15 hours per week
Artificial Intelligence Professional Program
Stanford School of Engineering