neural-networks.io

MIT open course on Artificial Intelligence (part 3/3)

This page references the MIT 6.034 Artificial Intelligence open course, taught by Patrick Winston in Fall 2010. You can view the complete course at http://ocw.mit.edu/6-034F10. It is licensed under Creative Commons BY-NC-SA.


# 21. Probabilistic Inference I

We begin this lecture with basic probability concepts, and then discuss belief nets, which capture causal relationships between events and allow us to specify the model more simply. We can then use the chain rule to calculate the joint probability table.
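
To make the chain-rule step concrete, here is a minimal Python sketch (not from the lecture; the rain/wet-grass net and its numbers are made up) that rebuilds a joint probability table from a factored model:

```python
# A tiny belief net: Rain -> WetGrass. The joint P(R, W) factors by the
# chain rule as P(R) * P(W | R), so we only store the small local tables.

P_rain = {True: 0.2, False: 0.8}                     # P(R)
P_wet_given_rain = {True:  {True: 0.9, False: 0.1},  # P(W | R=True)
                    False: {True: 0.2, False: 0.8}}  # P(W | R=False)

def joint(rain, wet):
    """Chain rule: P(R=rain, W=wet) = P(R=rain) * P(W=wet | R=rain)."""
    return P_rain[rain] * P_wet_given_rain[rain][wet]

# Recover the full joint probability table from the factored model.
table = {(r, w): joint(r, w) for r in (True, False) for w in (True, False)}
assert abs(sum(table.values()) - 1.0) < 1e-9  # a joint table must sum to 1
print(table)
```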


# 22. Probabilistic Inference II

We begin with a review of inference nets, then discuss how to use experimental data to develop a model, which can be used to perform simulations. If we have two competing models, we can use Bayes' rule to determine which is more likely to be accurate.
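
As a rough illustration of that last step, here is a short Python sketch with made-up likelihoods that applies Bayes' rule to two hypothetical models M1 and M2:

```python
# Bayes' rule for model comparison after observing data D:
#   P(M | D) = P(D | M) * P(M) / P(D),  where P(D) sums over the models.
# The priors and likelihoods below are illustrative numbers only.

prior = {"M1": 0.5, "M2": 0.5}          # equal prior belief in each model
likelihood = {"M1": 0.08, "M2": 0.02}   # P(D | M): how well each model predicts D

evidence = sum(likelihood[m] * prior[m] for m in prior)       # P(D)
posterior = {m: likelihood[m] * prior[m] / evidence for m in prior}

print(posterior)  # {'M1': 0.8, 'M2': 0.2} -> M1 is more likely to be accurate
```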


# 23. Model Merging, Cross-Modal Coupling, Course Summary

This lecture begins with a brief discussion of cross-modal coupling. Prof. Winston then reviews big ideas of the course, suggests possible next courses, and demonstrates how a story can be understood from multiple points of view at a conceptual level.


# Mega-R1. Rule-Based Systems

In this mega-recitation, we cover Problem 1 from Quiz 1, Fall 2009. We begin with the rules and assertions, then spend most of our time on backward chaining and drawing the goal tree for Part A. We end with a brief discussion of forward chaining.
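
For readers who want to experiment with the idea, here is a tiny backward-chaining sketch in Python; the rules and assertions are invented, not the quiz's:

```python
# Backward chaining: to prove a goal, either find it among the assertions or
# find a rule whose consequent matches and recursively prove its antecedents.
# The indented trace printed along the way mirrors the goal tree.

RULES = [  # (antecedents, consequent)
    (["has fur", "says meow"], "is a cat"),
    (["is a cat", "is friendly"], "is a pet"),
]
ASSERTIONS = {"has fur", "says meow", "is friendly"}

def backchain(goal, depth=0):
    print("  " * depth + "goal:", goal)
    if goal in ASSERTIONS:
        return True
    for antecedents, consequent in RULES:
        if consequent == goal and all(backchain(a, depth + 1) for a in antecedents):
            return True
    return False

print(backchain("is a pet"))  # True
```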


# Mega-R2. Basic Search, Optimal Search

This mega-recitation covers Problem 2 from Quiz 1, Fall 2008. We start with depth-first search and breadth-first search, using a goal tree in each case. We then discuss branch and bound and A*, and why they give different answers in this problem.
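
The contrast between the two basic searches fits in a few lines of Python; the graph here is hypothetical, not the quiz's:

```python
# Depth-first vs. breadth-first search. Both keep an agenda of partial paths;
# the only difference is whether the agenda is used as a stack or a queue.

GRAPH = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "C": ["G"], "D": ["G"], "G": []}

def search(start, goal, dfs=True):
    agenda = [[start]]                                 # partial paths
    while agenda:
        path = agenda.pop() if dfs else agenda.pop(0)  # stack vs. queue
        node = path[-1]
        if node == goal:
            return path
        # Extend the path, skipping nodes already on it to avoid loops.
        agenda.extend(path + [n] for n in GRAPH[node] if n not in path)
    return None

print(search("S", "G", dfs=True))   # ['S', 'B', 'D', 'G']: dives down one branch
print(search("S", "G", dfs=False))  # ['S', 'A', 'C', 'G']: expands level by level
```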


# Mega-R3. Games, Minimax, Alpha-Beta

This mega-recitation covers Problem 1 from Quiz 2, Fall 2007. We start with a minimax search of the game tree, and then work an example using alpha-beta pruning. We also discuss static evaluation and progressive deepening (Problem 1-C, Fall 2008 Quiz 2).
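
Here is a minimal minimax-with-alpha-beta sketch in Python over a made-up game tree; with these leaf values the leaf b2 is never evaluated, which is the kind of saving the recitation traces by hand:

```python
# Minimax with alpha-beta pruning over a tiny explicit game tree.
# Internal nodes map to children; leaves carry static evaluation values.

TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
STATIC = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if node in STATIC:                    # leaf: return its static evaluation
        return STATIC[node]
    for child in TREE[node]:
        value = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            alpha = max(alpha, value)
        else:
            beta = min(beta, value)
        if alpha >= beta:                 # remaining children cannot matter,
            break                         # so prune them
    return alpha if maximizing else beta

print(alphabeta("root", maximizing=True))  # 3 (b2 is never evaluated)
```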


# Mega-R4. Neural Nets

We begin by discussing neural net formulas, including the sigmoid and performance functions and their derivatives. We then work Problem 2 of Quiz 3, Fall 2008, which includes running one step of back propagation and matching neural nets with classifiers.
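
Those formulas fit in a few lines. Below is a sketch of a single backpropagation step on one sigmoid neuron, using a 6.034-style performance function P = -1/2 (d - o)^2 with made-up inputs and weights:

```python
import math

# One step of backpropagation on a single sigmoid neuron:
#   o = sigmoid(w*x + b),   sigmoid'(z) = o * (1 - o),
#   performance P = -1/2 * (d - o)**2,   w <- w + rate * dP/dw.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, d = 1.0, 1.0             # input and desired output (illustrative values)
w, b, rate = 0.5, 0.0, 1.0  # initial weight, bias, learning rate

o = sigmoid(w * x + b)            # forward pass
delta = (d - o) * o * (1 - o)     # dP/d(net): error times the sigmoid slope
w += rate * delta * x             # dP/dw = delta * x
b += rate * delta                 # the bias acts like a weight on input 1

print(o, w, b)  # output before the update, then the updated weight and bias
```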


# Mega-R5. Support Vector Machines

We start by discussing what a support vector is, using two-dimensional graphs as an example. We work Problem 1 of Quiz 4, Fall 2008: identifying support vectors, describing the classifier, and using a kernel function to project points into a new space.
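
To make the kernel step concrete, here is a small Python sketch (the points are made up, not the quiz's) where 1-D data that no single threshold can separate becomes linearly separable after projecting with phi(x) = (x, x**2):

```python
# The kernel idea: points that are not separable in the original space can be
# separable after a projection phi, and the kernel computes dot products in
# the projected space without building phi explicitly.

points = [(-2, +1), (-0.5, -1), (0.5, -1), (2, +1)]  # (x, label)

def phi(x):
    return (x, x * x)             # project 1-D points into 2-D

def K(u, v):
    return u * v + (u * v) ** 2   # equals phi(u) . phi(v)

for x, label in points:
    print(x, label, phi(x))       # in 2-D, the line x2 = 1 separates the classes

# Kernel-trick check: K agrees with the explicit dot product in the new space.
u, v = -2.0, 0.5
assert abs(K(u, v) - sum(a * b for a, b in zip(phi(u), phi(v)))) < 1e-9
```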


# Mega-R6. Boosting

This mega-recitation covers the boosting problem from Quiz 4, Fall 2009. We determine which classifiers to use, then perform three rounds of boosting, adjusting the weights in each round. This gives us an expression for the final classifier.
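
The round-by-round mechanics can be sketched in Python; the samples and decision stumps below are invented rather than the quiz's, but the weight updates follow the standard AdaBoost formulas:

```python
import math

# AdaBoost-style boosting: each round picks the weak classifier with the
# lowest weighted error, gives it a vote alpha = 1/2 * ln((1 - err) / err),
# and reweights the samples so the next round focuses on the mistakes.

samples = [(0, +1), (1, -1), (2, -1), (3, +1)]  # (x, label)
stumps = [lambda x: +1 if x < 0.5 else -1,      # h1
          lambda x: +1 if x > 2.5 else -1,      # h2
          lambda x: +1]                         # h3: always says +1

weights = [1.0 / len(samples)] * len(samples)
ensemble = []                                   # (alpha, stump) pairs

for _ in range(3):                              # three rounds of boosting
    errs = [sum(w for (x, y), w in zip(samples, weights) if h(x) != y)
            for h in stumps]
    i = min(range(len(stumps)), key=lambda j: errs[j])
    alpha = 0.5 * math.log((1 - errs[i]) / errs[i])
    ensemble.append((alpha, stumps[i]))
    # Up-weight misclassified points, down-weight correct ones, renormalize.
    weights = [w * math.exp(alpha if stumps[i](x) != y else -alpha)
               for (x, y), w in zip(samples, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]

def H(x):  # final classifier: sign of the weighted vote of all three rounds
    return +1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

print([H(x) for x, _ in samples])  # [1, -1, -1, 1]: every sample correct
```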


# Mega-R7. Near Misses, Arch Learning

This mega-recitation covers a question from the Fall 2007 final exam, in which we teach a robot how to identify a table lamp. Given a starting model, we identify a heuristic and adjust the model for each example; examples can be hits or near misses.
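
As a rough sketch of the procedure (with invented features rather than the exam's lamp model), here is how hits and near misses might update a set of required and forbidden features in Python:

```python
# Winston-style near-miss learning. The model is a set of required features
# plus a set of forbidden ones. A hit generalizes the model (drop-link);
# a near miss specializes it (forbid-link) or confirms it (require-link).

required = {"has base", "has shade", "has bulb"}   # starting model
forbidden = set()

def learn(example, hit):
    global required, forbidden
    if hit:
        required &= example    # drop-link: requirements a hit lacks are optional
    else:
        extra = example - required
        if len(extra) == 1:    # forbid-link: one extra feature explains the miss
            forbidden |= extra
        # A near miss that merely lacks a required feature confirms that the
        # feature is essential (require-link), so the model is unchanged.

learn({"has base", "has bulb"}, hit=True)                 # hit: shade is optional
learn({"has base", "has bulb", "is on fire"}, hit=False)  # near miss: forbid-link
learn({"has bulb"}, hit=False)                            # near miss: require-link
print("required:", required)     # {'has base', 'has bulb'}
print("forbidden:", forbidden)   # {'is on fire'}
```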