Natural Language Processing

Instructor: Manning, Christopher D.



This course is designed to introduce students to the fundamental concepts and ideas in natural language processing (NLP), and to get them up to speed with current research in the area. It develops an in-depth understanding of both the algorithms available for the processing of linguistic information and the underlying computational properties of natural languages. Word-level, syntactic, and semantic processing are considered from both a linguistic and an algorithmic perspective. The focus is on modern quantitative techniques in NLP: the use of large corpora and statistical models for acquisition, disambiguation, and parsing. The course also examines and constructs representative systems.

Prerequisites:
• Adequate experience with programming and formal structures (e.g., CS106B/X and CS103B/X).
• Programming projects will be written in Java 1.5, so knowledge of Java (or a willingness to learn on your own) is required.
• Knowledge of standard concepts in artificial intelligence and/or computational linguistics (e.g., CS121/221 or Ling 180).
• Basic familiarity with logic, vector spaces, and probability.

Intended Audience:
• Graduate students and advanced undergraduates specializing in computer science, linguistics, or symbolic systems.

Due to copyright issues, video downloads and lecture slides are not available for Natural Language Processing.


Christopher D. Manning

Manning works on systems that can intelligently process and produce human languages. Particular research interests include probabilistic models of language, statistical natural language processing, information extraction, text mining, robust textual inference, statistical parsing, grammar induction, constraint-based theories of grammar, and computational lexicography.

His current research focuses on robust but linguistically sophisticated probabilistic natural language processing, and on opportunities to use it in real-world domains. Particular topics include richer models for probabilistic parsing; grammar induction; text categorization and clustering; the incorporation of probabilistic models into constraint-based syntactic theories such as Head-driven Phrase Structure Grammar and Lexical Functional Grammar; electronic dictionaries and their usability, particularly for indigenous languages; information extraction and presentation; and linguistic typology.

His research at Stanford is currently supported by an IBM Faculty Partnership Award, ARDA, Scottish Enterprise, and DARPA. Previous funding at Stanford came from a Terman Fellowship, the NSF (for GIB), NTT, NHK, and the Australian Research Council.

Complete Course Material Downloads:

Course Handouts: The ZIP file below contains all of the handouts for this course. If you do not need the complete set, individual documents can be downloaded from the course content pages.

This work is licensed under a Creative Commons Attribution 3.0 United States License.