I read this interview of Judea Pearl on AI when it first came out a year or so ago. Lots of important points in there. He's absolutely right about the rut AI is stuck in, but I think he is partly wrong about the way out, which he thinks will involve engineers building models of reality inside robots:
"We have to equip machines with a model of the environment. If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans"
That representation-based, "information processing" view of intelligence is the problem, not the solution. The models real intelligence uses are, for the most part, implicit in the system, not built explicitly as models. They emerge naturally from the interaction of the adaptive organism with its environment, and become embedded in the physics of the system (its tissue, its joints, its neural networks, etc.). The capacity to build explicit models appears very late in evolution, and even then, it is more the capacity to "feel as though" there is a model than there actually being a model that is used in decision-making. Robots will have the same capacity when they become able to make sufficiently complex decisions. At that point, they too will have theories and hypotheses about the world, i.e., models of the kind Pearl talks about. Free will, consciousness, and other such fictional things will also emerge then, as Pearl himself says. I don't think we should worry about implementing these things.

I am also very skeptical about correct causality as the basis of intelligence. The bee does not "know" the cause of anything but does very intelligent things. The estimation of "true" causes in complex systems is mostly futile; what we care about are relationships, and since some of them are temporally ordered, they can be seen as cause and effect, but only in a post facto descriptive framework such as language. And yes, statistical learning alone is likely not sufficient to discover these relationships, contrary to what too many machine learning people seem to think today, but that is just an issue of levels. Ultimately, all our knowledge about the world is statistical, except that a lot of it is acquired at evolutionary time scales and is encoded in genes that generate specific bodies and brains, and in the developmental process. Learning comes in late to build on this scaffolding of constraints, instincts, and intuitions.
A robotics/AI colleague and I had an interesting discussion yesterday, and agreed that, rather than projects like the Human Brain Project, AI should have a Real Insect Project: building a robotic insect that can live independently in the real world, survive, find food, find shelter, etc., completely autonomously. Once that is possible, it's just a question of scaling up to get human intelligence :-). We can call it Project Kafka! I once said something like this at an AI conference. People were not pleased....