by James Bailey

It costs $49/month to follow the Deep Learning sequence in its entirety, but you get a seven-day peek for free. That is enough to listen to the lectures in Week One, take the Week One quiz, and then listen to the Week Two lectures. You should.

The lectures are frankly preparatory to the nontrivial programming exercise at the end, so the goal here is simply to understand the problem that the exercise will address. Lurking in the background are 400 images, some of cats. There is also a file that says which ones actually are of cats and which are not. Each image is 64×64 pixels, with each pixel being a combination of some red, some green, and some blue. (See, this isn't so bad. You actually know about pixels and RGB. Your parents didn't when they were your age.) Anyway, that comes to 64×64×3 = 12,288 numbers, which is all the learning algorithm gets. And it gets them in one row. It does not know that some values were physically adjacent to others in the original image. It does not know that the numbers group into RGB triplets. It doesn't even know what color is. But off it goes.
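To make that concrete, here is a minimal sketch in Python of what "getting the numbers in one row" looks like. The array names and the random stand-in data are illustrative only; the actual assignment loads real images and may arrange the matrix differently.

```python
import numpy as np

# Stand-in data with the shapes the course describes:
# 400 images, each 64x64 pixels, each pixel an RGB triplet.
images = np.random.randint(0, 256, size=(400, 64, 64, 3))
labels = np.random.randint(0, 2, size=(400, 1))  # 1 = cat, 0 = not cat

# Flatten every image into one row of 64*64*3 = 12,288 numbers.
# Adjacency and the RGB grouping are simply thrown away; each image
# becomes a flat list of values, which is all the learner ever sees.
X = images.reshape(400, -1) / 255.0  # scale pixel values to [0, 1]
print(X.shape)  # (400, 12288)
```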

It takes in the list of these 12,288 numbers and, based on it, guesses “cat” or “not cat.” Then it peeks at the correct answer and “back-propagates” in the way explained in Week One. Then it does the same for the next example. Gradually, it starts to learn. Not great yet, but that is why the course is more than two weeks long. All it is doing at this point is very shallow learning.
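What “very shallow learning” means here, concretely, is a single sigmoid unit: multiply the 12,288 numbers by 12,288 weights, guess, peek at the answer, and nudge the weights. A minimal sketch, assuming the flattened data above; the function names and the learning rate are illustrative, not the assignment's actual code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr=0.005, steps=1000):
    """One 'neuron' learning cat vs. not-cat from flattened pixels.

    X: (m, 12288) flattened images; y: (m, 1) labels, 1 = cat.
    """
    m, n = X.shape
    w = np.zeros((n, 1))  # one weight per pixel value
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)      # forward: guess "cat" probability
        dw = X.T @ (p - y) / m      # backward: peek at the answers and
        db = float(np.mean(p - y))  # propagate the error into gradients
        w -= lr * dw                # nudge the weights a little
        b -= lr * db
    return w, b
```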

There are two pedagogical points here. The first is that, if all you do is read somewhere (like here) that a program can learn, you will never truly get it. It is only when you press on and do the programming assignment that you realize that a program you wrote actually learned something you did not teach it. Nobody ever forgets their first neural network.

The second pedagogical point is that there is no separate document describing the programming exercises. When Prof. Ng created the original Machine Learning course (still available; you can read much more about it in Self Schooling the Book: Brain Science, MOOCs, and the Reviving Cash Value of the Liberal Arts), there was a PDF for each assignment that spelled out the task. You read the PDF and then wrote your code separately.

Now it's all together. Every few paragraphs, the text breaks out in code, sort of the way a Broadway musical breaks out in song. This new notebook style of computing (familiar to Mathematica and Wolfram Language users) means that the course can jump in and help students with their programming, not just say what to do and leave them to their own devices. It is this new style of presentation that makes it plausible to push subjects like Deep Learning down into the K-12 curriculum.
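For readers who have never seen a notebook, here is the flavor of the format (an invented illustration, not an actual cell from the course): the surrounding prose explains a function, the scaffold leaves a blank of roughly one line, and the student fills it in and runs the cell for immediate feedback.

```python
import numpy as np

def sigmoid(z):
    # Exercise: implement the sigmoid function (about 1 line).
    s = 1.0 / (1.0 + np.exp(-z))
    return s

# Running the cell checks the work on the spot.
print(sigmoid(np.array([0.0, 2.0])))  # expected: [0.5, 0.88079708]
```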
