neural-networks.io

MIT Open Course on Artificial Intelligence, Part 1/3

This page references the MIT 6.034 Artificial Intelligence open course (Fall 2010), taught by Patrick Winston. You can view the complete course at http://ocw.mit.edu/6-034F10. Creative Commons BY-NC-SA License.

 

# 1. Introduction and Scope

In this lecture, Prof. Winston introduces artificial intelligence and provides a brief history of the field. The last ten minutes are devoted to information about the course at MIT.

 

# 2. Reasoning: Goal Trees and Problem Solving

This lecture covers a symbolic integration program from the early days of AI. We use safe and heuristic transformations to simplify the problem, and then consider broader questions of how much knowledge is involved, and how the knowledge is represented.

 

# 3. Reasoning: Goal Trees and Rule-Based Expert Systems

We consider a block-stacking program, which can answer questions about its own behavior, and then identify an animal given a list of its characteristics. Finally, we discuss how to extract knowledge from an expert, using the example of bagging groceries.
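The rule-based identification described above can be sketched as a tiny forward-chaining loop. The rules and facts below are hypothetical illustrations, not Winston's actual program: each rule fires when all of its antecedent facts are known, and firing continues until no new conclusions appear.

```python
# Hypothetical if-then rules: a set of required facts implies a conclusion.
RULES = [
    ({"has hair"}, "is a mammal"),
    ({"eats meat"}, "is a carnivore"),
    ({"is a mammal", "is a carnivore", "has tawny color", "has dark spots"},
     "is a cheetah"),
]

def forward_chain(facts):
    """Fire every rule whose antecedents hold; repeat until nothing new is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in RULES:
            if antecedents <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has hair", "eats meat", "has tawny color", "has dark spots"}))
```

Because each fired rule records which facts triggered it, a system like this can also answer questions about its own behavior by replaying that chain.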

 

# 4. Search: Depth-First, Hill Climbing, Beam

This lecture covers algorithms for depth-first and breadth-first search, followed by several refinements: keeping track of nodes already considered, hill climbing, and beam search. We end with a brief discussion of commonsense vs. reflective knowledge.
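Beam search, one of the refinements above, can be sketched in a few lines: like breadth-first search, but at each level only the `beam_width` most promising paths survive. The graph and heuristic values below are hypothetical, not from the lecture.

```python
from heapq import nsmallest

# Hypothetical graph (adjacency lists) and heuristic estimates to the goal G.
GRAPH = {"S": ["A", "B"], "A": ["C", "D"], "B": ["D", "G"],
         "C": [], "D": ["G"], "G": []}
H = {"S": 10, "A": 4, "B": 3, "C": 5, "D": 2, "G": 0}

def beam_search(start, goal, beam_width=2):
    frontier = [[start]]  # paths under consideration at the current level
    while frontier:
        # Extend every frontier path by one step, avoiding revisited nodes.
        candidates = [path + [n] for path in frontier
                      for n in GRAPH[path[-1]] if n not in path]
        for path in candidates:
            if path[-1] == goal:
                return path
        # Keep only the beam_width extensions with the best heuristic score.
        frontier = nsmallest(beam_width, candidates, key=lambda p: H[p[-1]])
    return None
```

Setting `beam_width=1` recovers hill climbing; an unbounded width recovers breadth-first search, which is why beam search sits between the two in this lecture.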

 

# 5. Search: Optimal, Branch and Bound, A*

This lecture covers strategies for finding the shortest path. We discuss branch and bound, which can be refined by using an extended list or an admissible heuristic, or both (known as A*). We end with an example where the heuristic must be consistent.
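The combination described above, branch and bound with an extended list and an admissible heuristic, can be sketched as follows. The toy graph, edge costs, and heuristic values are hypothetical; the heuristic is admissible (never overestimates) and also consistent, which is what makes the extended list safe.

```python
import heapq

# Hypothetical weighted graph and admissible heuristic estimates to goal G.
GRAPH = {
    "S": {"A": 2, "B": 5},
    "A": {"B": 2, "G": 6},
    "B": {"G": 2},
    "G": {},
}
H = {"S": 4, "A": 3, "B": 2, "G": 0}

def a_star(start, goal):
    # Priority queue of (f = g + h, g, node, path); settled nodes go on the
    # extended list and are never expanded again.
    frontier = [(H[start], 0, start, [start])]
    extended = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in extended:
            continue
        extended.add(node)
        for nbr, cost in GRAPH[node].items():
            if nbr not in extended:
                heapq.heappush(frontier,
                               (g + cost + H[nbr], g + cost, nbr, path + [nbr]))
    return None, float("inf")
```

With `H` set to zero everywhere, the same code degrades to plain branch and bound with an extended list.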

 

# 6. Search: Games, Minimax, and Alpha-Beta

In this lecture, we consider strategies for adversarial games such as chess. We discuss the minimax algorithm, and how alpha-beta pruning improves its efficiency. We then examine progressive deepening, which ensures that some answer is always available.
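The minimax-with-alpha-beta idea above fits in a short recursion. The game tree below is a hypothetical illustration: leaves are static evaluation scores, internal nodes are lists of children, and a branch is cut off as soon as the current player can prove the opponent would never let play reach it.

```python
def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning over nested-list game trees."""
    if not isinstance(node, list):      # leaf: return its static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, minimax(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:           # cutoff: the minimizer blocks this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, minimax(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:           # cutoff: the maximizer blocks this line
                break
        return value

tree = [[3, 5], [2, 9]]  # depth-2 tree: max over two min nodes
print(minimax(tree, True))
```

In this tree the 9 leaf is never evaluated: once the second min node finds 2, the maximizer already has 3 available, so the branch is pruned.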

 

# 7. Constraints: Interpreting Line Drawings

How can we recognize the number of objects in a line drawing? We consider how Guzman, Huffman, and Waltz approached this problem. We then solve an example using a method based on constraint propagation, with a limited set of junction and line labels.

 

# 8. Constraints: Search, Domain Reduction

This lecture covers map coloring and related scheduling problems. We develop pseudocode for the domain reduction algorithm and consider how much constraint propagation is most efficient, and whether to start with the most or least constrained variables.
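The domain reduction idea above can be sketched for map coloring. The four-region map below is hypothetical, not the lecture's example: assigning a color strikes it from every uncolored neighbor's domain, an emptied domain triggers backtracking, and the most-constrained region (smallest remaining domain) is chosen first.

```python
# Hypothetical map: each region's list of adjacent regions.
NEIGHBORS = {"A": ["B", "C"], "B": ["A", "C", "D"],
             "C": ["A", "B", "D"], "D": ["B", "C"]}
COLORS = ["red", "green", "blue"]

def color(assignment, domains):
    """Backtracking search with domain reduction (forward checking)."""
    if len(assignment) == len(NEIGHBORS):
        return assignment
    # Most-constrained-variable heuristic: smallest remaining domain first.
    var = min((v for v in NEIGHBORS if v not in assignment),
              key=lambda v: len(domains[v]))
    for c in domains[var]:
        # Domain reduction: strike c from every uncolored neighbor's domain.
        reduced = dict(domains)
        for n in NEIGHBORS[var]:
            if n not in assignment:
                reduced[n] = [x for x in reduced[n] if x != c]
        # If any uncolored region is left with no legal color, try another c.
        if all(reduced[v] for v in NEIGHBORS if v not in assignment):
            result = color({**assignment, var: c}, reduced)
            if result:
                return result
    return None

solution = color({}, {v: COLORS[:] for v in NEIGHBORS})
```

Checking only the neighbors of the assigned variable, as here, is the cheap end of the propagation spectrum the lecture discusses; propagating through domains reduced to one value costs more per step but prunes more of the search.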

 

# 9. Constraints: Visual Object Recognition

We consider how object recognition has evolved over the past 30 years. In alignment theory, 2-D projections are used to determine whether an additional picture is of the same object. To recognize faces, we use intermediate-sized features and correlation.

 

# 10. Introduction to Learning, Nearest Neighbors

This lecture begins with a high-level view of learning, then covers nearest neighbors using several graphical examples. We then discuss how to learn motor skills such as bouncing a tennis ball, and consider the effects of sleep deprivation.
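The nearest-neighbors method above reduces to a short sketch: classify a query point by majority vote among the k closest labeled samples. The 2-D points and labels below are hypothetical, not the lecture's graphical examples.

```python
import math
from collections import Counter

# Hypothetical labeled training samples: ((x, y), label).
TRAIN = [((1.0, 1.0), "red"), ((1.5, 2.0), "red"),
         ((5.0, 5.0), "blue"), ((6.0, 5.5), "blue"), ((5.5, 4.0), "blue")]

def classify(query, k=3):
    """k-nearest-neighbors: vote among the k samples closest to the query."""
    by_distance = sorted(TRAIN, key=lambda s: math.dist(query, s[0]))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

print(classify((5.2, 4.8)))
```

With `k=1` this is pure nearest neighbor; larger odd values of k smooth the decision boundary at the cost of blurring small clusters.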