Friday, January 7, 2011

In Praise of Error

One of the most interesting attributes of complex systems is the way they leverage noise and error. Our engineering tradition -- of which I am a part -- has fetishized precision, accuracy and correctness (which are all slightly different things), and thus stigmatized imprecision, noise and error. However, in complex, adaptive, growing and evolving systems, these nemeses of good engineering are a source of strength. Indeed, nothing illustrates the limitations of conventional engineering and explains its inability to produce truly useful complex adaptive systems (e.g., intelligent robots) better than this issue.

I will be writing much more on this in the future, but, thanks to the wonderful 3QuarksDaily blog, I happened to come upon a very nice article that gives insight into one aspect of the issue: The benefits of imprecision. The article focuses mainly on the precision and imprecision of computers, but the point applies much more broadly. Many things that we do require only low precision and have many "right" answers. And when we try to replicate them in artificial systems, e.g., getting computers to see or getting robots to act, we spend far too much effort on needless precision or correctness.
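
To make this concrete, here is a toy sketch in Python (the "image" is just random numbers standing in for a camera frame, and all the sizes are arbitrary). Cutting the storage precision of every pixel in half changes a typical perceptual quantity -- the average brightness -- by at most a few parts in ten thousand:

    import numpy as np

    rng = np.random.default_rng(0)
    frame = rng.random((480, 640))    # stand-in for a camera frame, values in [0, 1)

    # "Full precision" answer vs. the same computation with storage cut to 16 bits.
    hi = frame.mean(dtype=np.float64)
    lo = float(frame.astype(np.float16).mean(dtype=np.float32))

    print(hi, lo)
    print(abs(hi - lo) / hi)    # relative error: a few parts in ten thousand at worst

For a question like "is the scene bright or dark?", that error is pure noise -- yet we routinely pay for the extra bits.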

Consider the simple act of taking a sip from your grande cinnamon spice caramel cranberry cucumber honey chai latte. You can pick it up with either hand (or both), grip it high or low, with thumb and one finger or two, three or four, bring it to your mouth straight up, with a rightward curve, leftward curve or any fancier trajectory of your choice, and as long as some part of the rim of the cup ends up at an appropriate part of your mouth, you'll be able to enjoy your fix. From a purely engineering point of view, implementing all this in a robot presents serious problems. First of all, there is no "right" answer, which means that the poor engineers don't know what they are looking for -- oh, the horror! Second, even if one could arbitrarily define a "right" answer (e.g., shortest path, fewest joints involved, precise middle of the lower lip target, etc.), coming up with the right control signals to get exactly the right joint movements is a monumental task. And the robot one gets after all that is doomed to drink all its lattes with the same stereotypical action.
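
Just to see how forgiving the task really is, here is a sketch with a toy two-link planar arm (the link lengths, mouth position and tolerance are hypothetical numbers made up for illustration). Even blind random sampling of joint angles finds a whole family of configurations that land the cup "close enough":

    import math
    import random

    # Toy two-link planar arm: shoulder at the origin, lengths in arbitrary units.
    L1, L2 = 0.30, 0.25      # hypothetical upper-arm and forearm lengths
    MOUTH = (0.35, 0.30)     # hypothetical position of the mouth
    TOL = 0.02               # the rim of the cup only has to land this close

    def fingertip(t1, t2):
        # Forward kinematics: joint angles -> position of the cup.
        x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
        y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
        return x, y

    random.seed(0)
    hits = 0
    for _ in range(100_000):
        t1 = random.uniform(-math.pi, math.pi)
        t2 = random.uniform(-math.pi, math.pi)
        x, y = fingertip(t1, t2)
        if math.hypot(x - MOUTH[0], y - MOUTH[1]) < TOL:
            hits += 1

    print(hits)    # many distinct joint configurations all "work"

There is no single right answer to search for -- the solution set is a whole region of joint space, and almost any way of stumbling into it will do.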

But what if engineers welcomed this allowance for imprecision as an opportunity rather than seeing it as an impediment? After all, the second difficulty -- achieving precise control to get a desired action -- was created by trying to resolve the first -- having too many options. If having too many options is no longer regarded as a difficulty at all, the control problem is greatly mitigated: a "sloppier," less precise controller will do. As the linked article shows, this can reduce the cost of the resulting system dramatically with no real loss of performance. Why isn't this always done?

The root of the problem is that engineers insist on precision, control, predictability and efficiency. The engineering paradigm is based on these attributes (and a few others -- I have written about it elsewhere). The reasons they want an optimal trajectory for the robot's arm are: a) so they know exactly what control signals to apply; b) so they can tell whether the robot is behaving correctly; c) so they can correct it if it is not; d) so they can measure its performance; and e) so they can ensure that resources are being used efficiently. From the robot's viewpoint, none of these is especially important; all it cares about is enjoying its latte at not too high a cost in energy. A much simpler, less precise controller could achieve that, though the result would probably not be "optimal" and might vary from trial to trial -- just as in humans! The extra precision being required -- at great cost in terms of design difficulty and expense -- is there to satisfy the gods of engineering, not to meet the actual goal.
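
For concreteness, here is what such a simpler controller might look like in one dimension (a caricature, with invented numbers): its commands are quantized to a coarse grain, and it stops the moment the result is close enough.

    # A deliberately sloppy 1-D reaching controller: coarse steps, stop at "good enough".
    TARGET = 0.461    # hypothetical distance to the mouth, arbitrary units
    TOL = 0.02        # acceptable error
    STEP = 0.03       # coarse actuation grain (STEP/2 < TOL guarantees termination)

    pos, moves = 0.0, 0
    while abs(pos - TARGET) > TOL:
        pos += STEP if pos < TARGET else -STEP
        moves += 1

    print(pos, moves)    # lands near 0.45 in 15 moves: imprecise, cheap, good enough

No model, no trajectory planning, no gains to tune -- and the latte still arrives.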

Of course, this doesn't mean that solving the less precise engineering problem is easy, but one reason it isn't easy is that all the tools of engineering are geared towards optimization. The great Herb Simon observed a long time ago that intelligent systems do not optimize; they just "satisfice," i.e., find solutions that work "well enough." Satisficing is a lot cheaper than optimizing, but requires judgment rather than blind rules. With optimization, we can measure how far a system is from the optimum and use this knowledge systematically to improve it, i.e., just apply an algorithm from the engineering toolkit. Discovering satisfactory solutions, in contrast, often requires trial and error, exploration, and deciding when things are acceptable -- and therein lies the problem. There are (so far) no universally applicable canonical paradigms for satisficing because it is, by definition, a messy process with uncertain goals.
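
A crude contrast on a toy one-dimensional cost surface (both the function and the "good enough" threshold are invented for this sketch): the satisficer samples blindly until something acceptable turns up, while the optimizer pays for evaluations the task never asked for.

    import random

    def cost(x):
        # Toy cost surface (invented): the effort of one way of raising the cup.
        return (x - 3.7) ** 2 + 1.0

    GOOD_ENOUGH = 1.5    # satisficing threshold: anything below this "works"

    random.seed(2)

    # Satisficer: sample until something acceptable turns up.
    tries = 0
    while True:
        tries += 1
        if cost(random.uniform(0.0, 10.0)) < GOOD_ENOUGH:
            break
    print(tries)    # typically a handful of evaluations

    # Optimizer: pin the minimum down to three decimal places by brute force.
    evals = [cost(i / 1000.0) for i in range(10_001)]
    print(len(evals), min(evals))    # 10001 evaluations for a marginally better sip

The catch is the threshold itself: someone, or something, has to judge what counts as acceptable, and no algorithm from the standard toolkit supplies that judgment.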

One important point to remember here is that it isn't imprecision per se that is good (or harmless), but the judicious use of imprecision. Some things do need to be precise, while others need far less. Appropriate allocation of precision can be the difference between success and failure, or even life and death. Since we seldom have generic rules for deciding how and where to be imprecise, engineering practice has been to play it safe and always seek the maximum possible precision everywhere. Over time, this has become a fundamental part of the engineering ethos, and has led to the pursuit of precision as an end in itself -- often at great cost.
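
Numerical computing offers a miniature of what judicious allocation means in practice (a sketch with arbitrary numbers): cheap 16-bit storage of the data is harmless here, but using the same low precision in the accumulator -- the one place where precision actually matters -- is fatal.

    import numpy as np

    values = np.full(10_000, 0.1, dtype=np.float16)    # cheap 16-bit storage is fine

    # Careless: accumulate in float16 as well. Once the running sum outgrows the
    # rounding grain of float16, each added 0.1 rounds away to nothing.
    s = np.float16(0.0)
    for v in values:
        s = np.float16(s + v)
    print(float(s))    # stalls around 256 instead of ~1000

    # Judicious: the same cheap storage, with a wide accumulator where it matters.
    print(float(values.sum(dtype=np.float64)))    # ~999.76 (0.1 is itself inexact in 16 bits)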

However, if one looks at living organisms -- which, we all agree, are marvels far beyond our engineering capabilities -- we see that imprecision is not only prevalent, but is actively exploited to great benefit. Imprecision is, in fact, a primary driver of evolution and adaptation in organisms. It is genetic "errors" that, when beneficial, lead to fitter organisms; and it is cognitive/behavioral "errors" (i.e., mistakes) that, when productive, lead to learning. The strategies that organisms have evolved to exploit such errors while avoiding their dangers are among the most useful lessons we can learn from life as we try to build new complex systems (e.g., intelligent robots) or manage existing ones (e.g., financial markets, social networks, economies, etc.). Systems that have been engineered to squeeze out all possibility of error are systems incapable of growth.
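
The standard toy illustration is a mutation-driven hill climber -- a sketch, not a model of real genetics -- in which random copying "errors," filtered by selection, are the only search operator:

    import random

    GENES = 50    # toy genome: 50 binary loci; fitness is simply the count of 1s
    MUTATION_RATE = 1.0 / GENES

    def fitness(genome):
        return sum(genome)

    random.seed(1)
    genome = [0] * GENES
    for generation in range(3000):
        # Copying "errors": each locus flips with small probability.
        child = [g ^ (random.random() < MUTATION_RATE) for g in genome]
        # Selection: keep the mutant unless it is strictly worse.
        if fitness(child) >= fitness(genome):
            genome = child

    print(fitness(genome))    # typically climbs all the way to the maximum of 50

Set the mutation rate to zero and the system never moves; crank it too high and selection cannot hold on to what works. The benefit lies not in the errors themselves but in how they are harnessed.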

More to come on this.
