Saturday, April 11, 2009

Strings are not Meanings Part 2

Strings are not Meanings Part 2: Matt refines his earlier points:

Data may be unreasonably effective, but effective at what?

In asking this, I was really drawing attention firstly to the ability of large volumes of data (and not much else) to deliver interesting and useful results, and secondly to its inability to tell us how humans produce and interpret this data. One of the original motivations for AI was not simply to create machines that play chess better than people, but to actually understand how people’s minds work.

The data we were discussing in the original paper tells us a lot about how people “produce and interpret” it. Links, clicks, and markup, together with proximity, syntactic relations, and textual similarities and parallelisms, are powerful traces of how writers and readers organize and relate pieces of information to each other and to the environments in which they live. As David Lewis once said, just as the Web was emerging, they form a bubble chamber that records the traces of much of human activity and cognition. As with a bubble chamber, the record is noisy, it requires serious computation to interpret, and, most important of all, it requires prior hypotheses about what we are looking at to organize those computations. How much those hypotheses depend on fine-grained models of “how people's minds work,” we really have no idea.

If we were to measure the success of AI by its progress on creating such models, we'd have to see AI as a dismal, misguided failure. AI's successes, such as they are, are not about human minds, but about computational systems that behave more adaptively in complex environments, including those involving human communication. Indeed, neither AI researchers, nor psychologists, nor linguists, nor neuroscientists have made much progress (not since I came into AI 30 years ago, anyway) in figuring out the division of labor between task-specific cognitive mechanisms and representations, on the one hand, and shallower statistical, neural, and social learning systems, on the other, in enabling human cognition and communication. If anything, we have increasing reason to be humble about the alleged uniquely fine-grained nature of human cognition, and to credit instead the broader, shallower power of a few inference-from-experience hacks, social interaction, and external situated memory (territory marking, as it were), not just in humans, for constructing complex symbolic processing systems: Language as Shaped by the Brain, Out of our Heads, The Origins of Meaning, Cultural Transmission and the Evolution of Human Behavior, ....
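To make the point about shallow statistical traces a bit more concrete, here is a toy sketch, not anything from the paper under discussion: counting which tokens occur near each other in raw text yields associations with no model of anyone's mind, only prior choices (tokenization, a window size) to organize the computation. The corpus and the window size below are invented for illustration.

```python
from collections import Counter

# Toy corpus and window size are purely illustrative assumptions.
corpus = [
    "the hobbit bilbo carried the ring to rivendell",
    "frodo inherited the ring from bilbo the hobbit",
    "the ring was forged in mordor",
]

window = 3  # how many following tokens count as "nearby"
cooc = Counter()
for line in corpus:
    tokens = line.split()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1 : i + 1 + window]:
            if w != v:
                cooc[tuple(sorted((w, v)))] += 1

# Frequently co-occurring pairs become "associated" without any
# cognitive model, just prior hypotheses baked into the counting.
for pair, count in cooc.most_common(5):
    print(pair, count)
```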

Despite all the ontology nay-sayers, a big chunk of our world is structured due to the well-organized, systematic, and predictable ways in which industry, society, and even biology create stuff.

Here, I want to draw attention to the skepticism around ontologies. Yes, they come at a cost, but it is also the case that they offer true grounding of interpretations of textual data. Let me give an example. "The Lord of the Rings" is a string used to refer to a book (in three parts), a sequence of films, various video games, board games, and so on. The ambiguity of the phrase requires a plurality of interpretations to be available to it. This is a 1-many mapping. The 1 is a string, but what is the type of the many? I actually see the type of work described in the paper as being wholly complementary to categorical knowledge structures.

Hey, you gave your own answer! The many are "a book", "a sequence of films", "[a] video game", ... Sure, the effect of the (re)presentation of those strings in certain media (including our neural media) in certain circumstances has causal connections to action involving various physical and social situations, such as that of buying an actual, physical book from a book seller. But that causal aspect of meaning — which I contend is primary — is totally ignored by ontologies. Ontologies may pretend to be somehow more fundamental than mere text, but they are just yet another form of external memory, like all the others we already use, whose value will be determined by practical, socially-based activity, and not by somehow being magically imbued with “true grounding.” Grounding is not true or false; it's the side effect of causally-driven mental and social learning and construction. What a symbol means is what it does to us and what we do with it, not some essence of the symbol somehow acquired by having it sit in a particular formal representation. No one has provided any evidence that by having "Harry Potter" sit somewhere in WordNet, the string becomes more meaningful than what we can glean from its manifold occurrences out there. It may be more useful to summarize the symbol's associations in a data structure for further processing; I'm all for useful data structures. But they don't add anything informationally (though they may add something in computational efficiency, of course), and they often lose a lot, because context of use gets washed out or lost (the toy sketch below tries to make this concrete).

Let's be serious — and a bit humbler — about what we are really doing with these symbolic representations: engineering — which is cool, don't worry — not philosophy or cognitive science. Much of this was already said or implied in McDermott's classic (unfortunately I can find it only behind the ACM paywall, so much for “Advancing Computing as a Science and a Profession,” but I digress...), which we'd do well to (re)read annually on the occasion of every AAAI conference, and whenever semantic delusions strike us. (Via Data Mining.)
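Here is the promised toy sketch of the 1-many mapping: one surface string mapped to many typed referents, keeping around the contexts of use that a bare ontology entry tends to wash out. The types, example contexts, and names are invented for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Referent:
    kind: str                                      # "book", "film series", ...
    contexts: list = field(default_factory=list)   # traces of actual use

# The 1 is a string; the many are typed entries, each dragging along
# some of the contexts in which the string actually occurred.
lexicon = {
    "The Lord of the Rings": [
        Referent("book", ["bought the paperback", "Tolkien's trilogy"]),
        Referent("film series", ["watched the extended editions"]),
        Referent("video game", ["co-op campaign on a console"]),
    ]
}

# Whatever the type labels buy us still has to be settled by how this
# structure is used downstream, not by the labels alone.
for ref in lexicon["The Lord of the Rings"]:
    print(ref.kind, ref.contexts)
```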

Update: Partha (thanks!) found this freely accessible copy of Artificial Intelligence Meets Natural Stupidity.
