This article just gets lost in anecdote, but Gary Smith is making an important point here: Good priors matter. When we look at lots of data and try to make sense of it, the expectations we start with make a crucial difference. If those expectations are productive and realistic, data will tell us important things. If the expectations are wrong or poor, we'll just find lots of garbage and think it is gold. Unfortunately, when we're faced with data, we often don't know what expectations to go in with, and the reasonable option seems to be to assume nothing, i.e., to treat everything as equally possible - a uniform prior. After all, what could be better than looking at data with an open mind? It turns out that this is usually a bad choice. Without the inherent discrimination provided by prior expectations, any large dataset can show all sorts of "patterns", leading to false conclusions.
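To make this concrete, here is a minimal sketch (in Python with numpy; the dataset sizes are arbitrary assumptions of mine, not from the article) of how an "open-minded" search over pure noise reliably turns up impressive-looking patterns:

```python
import numpy as np

# Pure noise: 200 "variables" with 30 observations each, no real structure.
rng = np.random.default_rng(0)
data = rng.standard_normal((200, 30))

# With a uniform prior over hypotheses ("anything could be correlated"),
# we simply search for the strongest pairwise correlation.
corr = np.corrcoef(data)          # 200 x 200 correlation matrix
np.fill_diagonal(corr, 0.0)       # ignore trivial self-correlations
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"'Best' pattern: variables {i} and {j}, r = {corr[i, j]:.2f}")
# Typically reports r around 0.6-0.7 despite the data being pure noise:
# an unconstrained search over enough hypotheses always finds "gold".
```

The more hypotheses the search entertains, the stronger the best spurious correlation becomes; an informative prior works precisely by refusing to entertain most of them.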
There has recently been justifiable criticism of the use of motivated thinking, confirmation biases, and such in scientific investigations. What is often not discussed is that unmotivated thinking and unbiased analysis are often far more dangerous. The inevitable lesson is that, rather than rejecting all motivation and bias, we need to identify good motivations and appropriate biases. Unfortunately, there is no good way to do this in a purely mathematical or computational way. In animals (and, in a broad sense, in all living organisms), evolution has successfully configured useful biases. In fact, that can be seen as its most amazing accomplishment. We give these prior biases many names: Instinct, intuition, heuristics. But ultimately, they shape expectations based not only on the animal's own experience, but also on the experience of all its ancestors going back to the origin of life. As animals, we sense everything in the context of our biases, and make instinctive sense of it. Our physical body is an instrument sculpted by evolution to accomplish this task every instant of our lives. This is the essence of cognition, consciousness, and intelligence. And this is what even our best AI systems lack. They do have biases of course - every computer program, every circuit, every robot is biased by its architecture - but these biases are not the result of an adaptive process such as evolution. Rather, they reflect mathematical convenience, engineering constraints, and sometimes just plain ignorance or laziness. Not surprisingly, then, such systems have a hard time learning the right thing.
This also leads into another subtle point. Our most successful AI systems are those that use supervised learning, i.e., where some type of "ground truth" is used to correct the behavior of the system during learning. But that is just an implicit and very strong way to bring in prior biases based on reality. Where we have the most difficulty is in unsupervised learning, where the AI system goes looking for patterns in data without much prior bias, e.g., finding correlations or clusters. Unfortunately, the use of supervised learning is limited by the fact that, most of the time, the ground truth just isn't available. Real animals do almost all of their learning unsupervised, and mostly succeed because instinct substitutes for the missing ground truth. That is what we will need in any real AI system, and that is where the AI project should concentrate its greatest effort.
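As an illustration of why unsupervised pattern-finding needs prior bias, here is a minimal sketch (using scikit-learn's KMeans on made-up noise, not a real dataset):

```python
import numpy as np
from sklearn.cluster import KMeans

# Structureless data: 300 points scattered uniformly in the unit square.
rng = np.random.default_rng(1)
points = rng.uniform(size=(300, 2))

# An unsupervised learner asked for 4 clusters will dutifully report 4
# clusters, complete with centroids, even though none actually exist.
km = KMeans(n_clusters=4, n_init=10, random_state=1).fit(points)
print("Cluster centers found in pure noise:")
print(km.cluster_centers_)
# Nothing in the algorithm can say "there are no clusters here"; that
# judgment has to come from a prior expectation about the data.
```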
There is one type of AI that does try to approximate this: Reinforcement learning, where a system learns to critique its own options, and ultimately to make better decisions. The spectacular success of AI programs like AlphaGo and AlphaZero is based on a good marriage between the algorithms of supervised learning and the principles of reinforcement learning. However, there is still one big difference. In (most) reinforcement learning, the internal critic itself learns by pattern recognition. It builds instinct from the ground up based on data, albeit in collaboration with the decision-making and feedback from the environment. This is why reinforcement learning works best when the system is operating in the real world with real feedback, and why it is so slow. It's trying to build a new mind every time it is applied! In an animal, evolution has already configured a mind in the physical structure of the body (including the brain). That mind already has instincts, and needs very little experience to learn to be (mostly) right enough.
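For a sense of what "building instinct from the ground up" looks like, here is a minimal tabular TD(0) sketch - a toy stand-in for a learned critic, not AlphaZero's actual algorithm; the environment and constants are invented for illustration:

```python
import random

# A toy 5-state chain: start at state 0, drift right with reward 0,
# reach state 4 for reward 1. The "critic" is a table of state values
# that starts completely blank - no instinct, no inherited priors.
N_STATES, GAMMA, ALPHA = 5, 0.9, 0.1
values = [0.0] * N_STATES          # the critic, built from scratch

random.seed(0)
for episode in range(500):         # hundreds of episodes for 5 states
    state = 0
    while state < N_STATES - 1:
        next_state = state + random.choice([0, 1])  # noisy progress
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # TD(0) update: the critic corrects itself from raw experience.
        target = reward + GAMMA * values[next_state]
        values[state] += ALPHA * (target - values[state])
        state = next_state

print([round(v, 2) for v in values])
# Even this trivial problem takes hundreds of interactions, because the
# critic must learn everything from data; an evolved instinct would not.
```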
There's much more to say about this, but I'm going to start by ordering and reading Gary Smith's book.....