by James Bailey

We have been following Andrew Ng's new course in Deep Learning. The first unit finishes this week, so next week we will try to summarize the implications for the K-12 curriculum. For now, two news articles from the weekend relate to the subject.

The first is an article about the chip maker Nvidia, which notes that:


A new phase began in 2012 after Canadian researchers began to apply [Nvidia chips] to unusually large neural networks, the many-layered software required for deep learning.


Not just any old Canadian researchers. This work was done in the lab of Geoffrey Hinton, whom we met in a previous post.

The second item was an op-ed by the leader of a major AI lab on "How To Regulate Artificial Intelligence." His suggestions are all sensible, but the contrast with the days of Sputnik is notable. Back then there were no pronouncements from on high about how to regulate satellites. Instead there was a bottom-up push to bring the K-12 curriculum up to snuff.

Today we need to do the same for AI, for two reasons. First, if society as a whole is going to stay in control, society as a whole needs to understand the game. Trusting the experts to tell each other how to behave is not the answer.

The more important reason is that the study of AI algorithms installs better, more nuanced habits of thought than the ones currently being installed by the algebra and calculus curriculum. We noted in a previous post how former CIA director Michael Hayden fell back on algebra to try to understand the interplay of America and Israel over Iran. It was the wrong tool for the job.

Meanwhile, Anne-Marie Slaughter is back in the news. Self Schooling is a fan of her recent book contrasting the mental metaphor of chess with that of networks. She offers network habits of thought as the right ones for understanding exactly the kinds of geopolitical issues that Dir. Hayden was grappling with. So why, in her current dustup, does she herself appear to be playing chess?

To be continued.