by James Bailey

This week in its magazine section, the New York Times proffers an extended riff on the inexplicability of deep learning algorithms. As it notes:

"

Artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us.

"

No argument there. It then goes on to note a new European Union request/demand that:

"

AI must nevertheless conform to the society we've built-one in which decisions require explanations

"

Today's non-fatal car crashes meet this criterion. The driver tells the investigator what happened. That tale is accepted as the "explanation," and the wrecked car is towed away. Again and again. The option of self-driving cars changes the parameters. There are far fewer crashes, but no ensuing narratives "explaining" the ones that still occur. Or, at least, none that "conform to the society we've built."

So we need to look at the society we've built. What counts as an explanation? In his "Model Thinking" MOOC, Prof. Scott Page nominates algebraic equations:

"

There is a movement towards what people call Big Data, and the Big Data movement says that maybe we don’t even need the [algebraic] models any more. Right? This is what some people say … No need for the model. Now I want to make the point that I think that’s not true. I think that’s an overstatement … Big Data does not obviate the use of models.

First, let’s think of the broad reasons we have [algebraic] models. One is just to understand how the world works. So even if you see the pattern, right, the identification of the pattern is completely different than understanding where it came from. Right? See you could recognize, wow, we’ve got a ton of experience and force seems to equal mass times acceleration, right?

"

So we encapsulate that repetition in F = ma and declare that to be an explanation. Prof. Alasdair MacIntyre points out that this explanation of explanation is circular:

"

[I]f social science does not present its findings in the form of law-like generalizations, the grounds for employing social scientists as expert advisors to government or to private corporations become unclear 

"

So the Times article illustrates deep learning with a wall full of algebraic equations and, for historical completeness, geometric diagrams. What we accept as explanation is what society as a whole has trained itself to accept. And where did that training come from? Readers of this blog will not be surprised to see that it goes right back to that shining boarding school on a hill in Puritan Boston, where geometry and algebra were reified as peeks into God's own habits of thought. They were, wait for it, true.

The Puritan form of analytic philosophy (Works. Because True.) has been exposed by the pragmatism of philosophers like John Dewey (True. Because Works.). As Prof. Andrew Ng points out in his current Deep Learning MOOC, practitioners of the field, himself included, are instinctive pragmatists, as when they "explain" the ubiquity of so-called "pooling" layers in today's deep learning algorithms:

"

I have to admit, I think the main reason people are using "Max pooling" is because it's been found in a lot of experiments to work well… I don't know of anyone who fully knows if that is the real underlying reason.

"

Those who cling to old Age of Enlightenment fantasies of truth are on the wrong side of history.
