Tuesday, February 26, 2019

What AI Fails to Understand - For Now

Pearl on AI


I read this interview of Judea Pearl on AI when it first came out a year or so ago. Lots of important points in there. He's absolutely right about the rut AI is stuck in, but I think he is partially wrong about the way out, which he thinks will involve engineers building models of reality inside robots:

"We have to equip machines with a model of the environment. If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans"

That representation-based, "information processing" view of intelligence is the problem, not the solution. The models real intelligence uses are, for the most part, implicit in the system, not built explicitly as models. They emerge naturally from the interaction of the adaptive organism with its environment, and become embedded in the physics of the system (its tissue, its joints, its neural networks, etc.). The capacity to build explicit models appears very late in evolution, and even then, it is more the capacity to "feel as though" there is a model rather than there actually being a model that is used in decision-making. Robots will have the same capacity when they become able to make sufficiently complex decisions. At that point, they too will have theories and hypotheses about the world, i.e., models of the kind Pearl talks about. Free will, consciousness, and other such fictional things will also emerge then, as Pearl himself says. I don't think we should worry about implementing these things.

I am also very skeptical about correct causality as the basis of intelligence. The bee does not "know" the cause of anything, but does very intelligent things. The estimation of "true" causes in complex systems is mostly futile; what we care about are relationships, and since some of them are temporally ordered, they can be seen as cause and effect, but only in a post facto descriptive framework such as language. And yes, statistical learning alone is likely not sufficient to discover relationships - contrary to what too many machine learning people seem to assume today - but that is just an issue of levels. Ultimately, all our knowledge about the world is statistical, except that a lot of it is acquired at evolutionary scales and is encoded in genes that generate specific bodies and brains, and in the developmental process. Learning comes in late to build on this scaffolding of constraints, instincts, and intuitions.

A robotics/AI colleague and I had an interesting discussion yesterday, and agreed that, rather than projects like the Human Brain Project, AI should have a Real Insect Project - building an insect that can live independently in the real world, survive, find food, find shelter, etc., completely autonomously. Once that is possible, it's just a question of scaling up to get human intelligence :-). We can call it Project Kafka! I once said something like this at an AI conference. People were not pleased....

Sunday, January 27, 2019

(Machine) Learning Biases

In a recent tweet, Congresswoman Alexandria Ocasio-Cortez - widely known as AOC - responded to a report from Amazon that facial recognition technology sometimes identified women as men when they have darker skin. She said:

"When you don’t address human bias, that bias gets automated. Machines are reflections of their creators, which means they are flawed, & we should be mindful of that. It’s one good reason why diversity isn’t just “nice,” it’s a safeguard against trends like this"

While I agree with the sentiment underlying her tweet, she is profoundly wrong about what is at play here - which can happen when you apply your worldview (i.e., your biases) to things you're not really familiar with. To be fair, we all do it, but here it is AOC doing it, and as an opinion-maker she should be more careful. The error she makes here, though, is an interesting one, and gets at some deep issues in AI.

The fact that machine learning algorithms misclassify people with respect to gender, or even confuse them with animals, is not because they are picking up human biases, as AOC claims here. In fact, it is because they are not picking up human biases - those pesky intuitions gained from instinct and experience that allow us to perceive subtle cues and make correct decisions. The machine, lacking both instinct and experience, focuses only on visual correlations in the data used to train it, making stupid errors such as relating darker skin with male gender. This is also why machine learning algorithms end up identifying humans as apes, dogs, or pigs - animals with whom humans do share many visual similarities. As humans, we have a bias to look past those superficial similarities in deciding whether someone is a human. Indeed, it is when we decide to override our natural biases and sink (deliberately) to the same superficial level as the machine that we start calling people apes and pigs. The errors being made by machines do not reflect human biases; they expose the superficial and flimsy nature of human bigotry.

There is also a deeper lesson in this for humans. Our “good” biases are not all just coded in our genes. They are mostly picked up through experience. When human experience becomes limited, we can end up having the same problem as the machine. If a human has never seen a person of a race other than their own, it is completely natural for them to initially identify such a person as radically different or even non-human. That is the result of a bias in the data (experience, in this case), not a fundamental bias in the mind. This is why travelers in ancient times brought back stories of alien beings in distant lands, which were then exaggerated into the monstrous figures drawn on maps. This situation no longer exists in the modern world, except when humans try to create it artificially through racist policies.

The machine too is at the mercy of data bias, but its situation is far worse than that of a human. Even if it is given an "unbiased" data set that includes faces of all races, genders, etc., fairly, it is being asked to learn to recognize gender (in this instance) purely from pictures. We recognize gender not only from a person’s looks, but also from how they sound, how they behave, what they say, their name, their expressions, and a thousand other things. We deprive the machine of all this information and then ask it to make the right choice. That is a huge data bias, comparable to learning about the humanity of people from distant lands through travelers’ tales. On top of that, the machine also has much simpler learning mechanisms. It is simply trying to minimize its error based on the data it was given. Human learning involves much more complicated things that we cannot even fully describe yet except in the most simplistic or metaphorical terms.
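This shortcut-learning failure is easy to reproduce in a toy setting. The sketch below (pure NumPy; every number and feature name is invented for illustration) trains a simple logistic-regression classifier on data where an irrelevant "nuisance" feature happens to track the label far more cleanly than the genuinely informative cue. The learner, which does nothing but minimize its error on the data it is given, latches onto the shortcut - and collapses when the accidental correlation disappears:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

def make_data(spurious):
    """Label truly depends on a real (but noisy) cue; a nuisance feature
    tracks the label in the training sample but not in the real world."""
    cue = rng.normal(0, 1, n)
    y = (cue > 0).astype(float)
    noisy_cue = cue + rng.normal(0, 2, n)         # informative, but noisy
    if spurious:
        nuisance = y + rng.normal(0, 0.1, n)      # accidentally clean shortcut
    else:
        nuisance = rng.normal(0, 0.1, n)          # shortcut gone at test time
    X = np.column_stack([np.ones(n), noisy_cue, nuisance])  # intercept + 2 features
    return X, y

def fit_logreg(X, y, lr=0.5, steps=3000):
    """Plain gradient descent on the logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

X_tr, y_tr = make_data(spurious=True)
X_te, y_te = make_data(spurious=False)
w = fit_logreg(X_tr, y_tr)

def accuracy(w, X, y):
    return float((((X @ w) > 0) == y).mean())

train_acc = accuracy(w, X_tr, y_tr)
test_acc = accuracy(w, X_te, y_te)
# The learned weights lean on the shortcut feature (w[2]), so accuracy
# collapses once the accidental correlation is gone.
```

Nothing in the code is "biased" in the human sense; the model simply has no way to know which correlation matters.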

The immediate danger in handing over important decision-making to intelligent machines is not so much that they will replicate human bigotries, but that, with their limited capacities and limited data, they will fail to replicate the biases that make us fair, considerate, compassionate, and, well, human.

Tuesday, January 15, 2019

The Perils of Unmotivated Thinking

This article just gets lost in anecdote, but Gary Smith is making an important point here: Good priors matter. When we look at lots of data and try to make sense of it, the expectations we start with make a crucial difference. If those expectations are productive and realistic, data will tell us important things. If the expectations are wrong or poor, we'll just find lots of garbage and think it is gold. Unfortunately, when we're faced with data, we often don't know what expectations to go in with, and the reasonable option seems to be to assume nothing, i.e., to go in assuming that everything is equally possible - a uniform prior. After all, what could be better than looking at data with an open mind? It turns out that this is usually a bad choice. Without the inherent discrimination provided by prior expectations, any large dataset can show all sorts of "patterns" leading to false conclusions.
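A quick sketch makes the point concrete (pure NumPy; all numbers arbitrary). Generate a target and a thousand candidate "explanatory" features that are all pure noise, then - with a uniform prior that treats every hypothesis as equally plausible - scan for the feature that best correlates with the target. Something always looks impressive, and a held-out check shows it was garbage:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 1000

features = rng.normal(size=(n, k))  # 1000 candidate "explanations" -- all noise
target = rng.normal(size=n)         # the thing we "want to explain" -- also noise

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# With no prior to discriminate among hypotheses, we just scan everything
# on the first half of the data...
half = n // 2
train_corrs = np.array([corr(features[:half, j], target[:half]) for j in range(k)])
best = int(np.abs(train_corrs).argmax())

r_train = train_corrs[best]                           # looks like a real "pattern"
r_test = corr(features[half:, best], target[half:])   # vanishes on held-out data
```

The winning correlation is typically sizable purely by chance, because the maximum over a thousand noise correlations is large even when every individual one is meaningless.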

There has recently been justifiable criticism of the use of motivated thinking, confirmation biases, and such in scientific investigations. What is often not discussed is that unmotivated thinking and unbiased analysis is often far more dangerous. The inevitable lesson is that, rather than rejecting all motivation and bias, we need to identify good motivations and appropriate biases. Unfortunately, there is no good way to do this in a purely mathematical or computational way. In animals (and, in a broad sense, in all living organisms), evolution has successfully configured useful biases. In fact, that can be seen as its most amazing accomplishment. We give these prior biases many names: Instinct, intuition, heuristics. But ultimately, they shape expectations based not only on the animal's own experience, but the experience of all its ancestors going back to the origin of life. As animals, we sense everything in the context of our biases, and make instinctive sense of it. Our physical body is an instrument sculpted by evolution to accomplish this task every instant of our lives. This is the essence of cognition, consciousness, and intelligence. And this is what even our best AI systems lack. They do have biases of course - every computer program, every circuit, every robot is biased by its architecture - but these biases are not the result of an adaptive process such as evolution. Rather, they reflect mathematical convenience, engineering constraints, and sometimes just plain ignorance or laziness. Not surprisingly, then, such systems have a hard time learning the right thing.

This also leads into another subtle point. Our most successful AI systems are those that use supervised learning, i.e., where some type of "ground truth" is used to correct the behavior of the system during learning. But that is just an implicit and very strong way to bring in prior biases based on reality. Where we have the most difficulty is in unsupervised learning, where the AI system goes looking for patterns in data without much prior bias, e.g., finding correlations or clusters. Unfortunately, the use of supervised learning is limited by the fact that, most of the time, the ground truth just isn't available. Real animals do almost all of their learning unsupervised, and mostly succeed because their instinct substitutes for the absence of the ground truth. That is what we will need in any real AI systems, and that is where the AI project should concentrate its greatest effort.
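The unsupervised side of this problem is just as easy to demonstrate. In the sketch below (a bare-bones k-means written from scratch; all parameters arbitrary), the data are uniformly random points with no cluster structure at all, yet the algorithm dutifully reports five "clusters" and a large reduction in within-cluster variance. Without a prior bias about which patterns are meaningful, finding a pattern tells you very little:

```python
import numpy as np

def kmeans(X, k, iters=30, seed=0):
    """Bare-bones Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest centroid
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centroids[j] = pts.mean(axis=0)
    return labels, centroids

rng = np.random.default_rng(2)
X = rng.uniform(size=(500, 2))   # structureless: no clusters actually exist

labels, centroids = kmeans(X, k=5)
total_sse = ((X - X.mean(axis=0)) ** 2).sum()
within_sse = ((X - centroids[labels]) ** 2).sum()
# k-means reports "clusters" and a big variance reduction anyway.
```

The algorithm is working exactly as designed; the problem is that "reduce within-cluster variance" is a bias of mathematical convenience, not a bias grounded in what clusters mean in the world.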

There is one type of AI that does try to approximate this: Reinforcement learning, where a system learns to critique its own options, and ultimately to make better decisions. The spectacular success of AI programs like AlphaGo and AlphaZero is based on a good marriage between the algorithms of supervised learning and the principles of reinforcement learning. However, there is still one big difference. In (most) reinforcement learning, the internal critic itself learns by pattern recognition. It builds instinct from the ground up based on data, albeit in collaboration with the decision-making and with feedback from the environment. This is why reinforcement learning works best when the system is operating in the real world with real feedback, and why it is so slow. It's trying to build a new mind every time it is applied! In an animal, evolution has already configured a mind in the physical structure of the body (including the brain). That mind already has instincts, and needs very little experience to learn to be (mostly) right.
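The "building a mind from scratch" point shows up even in the smallest possible reinforcement-learning setup. The sketch below (standard tabular Q-learning on a toy five-state corridor; all parameters are my arbitrary choices) shows both sides: after one episode the agent has learned almost nothing, because value estimates propagate backwards one step at a time, while after hundreds of episodes the greedy policy is finally correct. An animal born with the right instincts would not need to start from zero:

```python
import numpy as np

N_STATES, GOAL = 5, 4   # corridor: states 0..4, reward on reaching state 4
MOVES = (-1, +1)        # actions: left, right

def step(s, a):
    s2 = min(max(s + MOVES[a], 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def q_learning(episodes, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, 2))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = int(rng.integers(2))
            else:  # greedy with random tie-breaking
                best = np.flatnonzero(Q[s] == Q[s].max())
                a = int(rng.choice(best))
            s2, r, done = step(s, a)
            target = r + (0.0 if done else gamma * Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q

Q1 = q_learning(episodes=1)      # only the final step before the goal learned anything
Q500 = q_learning(episodes=500)  # greedy policy is now "always move right"
```

After a single episode, only the state-action pair that directly produced reward has a nonzero value; everything upstream is still zero, which is exactly the slowness the paragraph above describes.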

There's much more to say about this, but I'm going to start by ordering and reading Gary Smith's book.....

Sunday, January 6, 2019

Impeachment Talk

Now that the Democrats have taken the U.S. House of Representatives, discussions about the impeachment of Donald Trump have quickly reached fever pitch - aided in no small part by newly elected Rep. Rashida Tlaib's graphic statement (at a private event) that "we" were going to "impeach the motherf****r". While many American voters agree with both aspects of this expression, it has been seen in the elite media as a political blunder and a "gift to Trump". The usual suspects have gone forth tut-tutting Tlaib's language and recommending that she apologize. Some, including the newly elected Rep. Alexandria Ocasio-Cortez, have risen to defend Tlaib, pointing out that the outcry reflects a double standard for men and women, and fails to take into account the much more persistent profanity of Donald Trump himself. Whataboutism and well-placed accusations of misogyny aside, the question is whether such loud talk of impeachment is politically wise or foolish for the Democrats. That question deserves to be looked at through a strategic rather than an emotional lens.

The case for why Donald Trump deserves to be removed from office has now been made so well by Trump's own actions and pronouncements that it really requires no elaboration. But if such elaboration was needed, this piece by David Leonhardt in the New York Times provides it in great detail. And this does not even take into account the possibility that Trump may have conspired with the Russians to get elected, and may still be getting his talking points from Putin. Others, such as John Dean and Tom Steyer, have argued that removing Trump from office is an urgent national imperative simply because of the damage Trump is doing to America's institutions. In contrast, Democratic leadership - especially since retaking the House - has been very cautious, saying that they prefer to wait for Special Counsel Robert Mueller to submit his report before deciding on impeachment.

The truth is that, on the impeachment issue, Democrats are caught between two opposing realities. The first is the explosive urge that has built up within the Democratic base to indeed "impeach the motherf****r" as soon as possible, if not sooner. Now that the Democrats have some actual power, this is no longer just an aspirational idea. The second, and perhaps more important, reality is that the country as a whole - including many voters that the Democrats would like to win over to their side in 2020 - does not yet support impeachment. The Democrats argue - correctly - that just impeaching Trump in the House will not help, since conviction requires a two-thirds vote in the Senate, which is still majority Republican. They also make the case - again correctly - that impeaching and failing to convict will only make Trump stronger, and that any impeachment requires bipartisan support that can only come from a damning Mueller report. The impeachment of a president is a traumatic process for the country, and making it partisan only makes the target more popular - as demonstrated by the impeachment of Bill Clinton. The alternative - suggested, among others, by incoming House Judiciary Committee Chair Rep. Jerry Nadler - is to put aside impeachment until Mueller reports, but to go full steam ahead now with Congressional investigations of the President, which may also yield rich deposits of offenses. The hope is that this will slake the Democratic base's thirst for going after Trump, and eventually lead to a soft landing on the whole issue. It may well work, but the Democrats need to think more strategically.

People often say that the Republican Party has fallen off the cliff of craziness in the era of Trump, but those paying any attention know that this fall began long, long before Trump's arrival. The Republican Party has been on a journey away from reality for almost forty years, getting further and further away into the wilderness of conspiracies where Rush Limbaugh, Ann Coulter, Sean Hannity, and other such specters are their primary voices of "reality". But, until the advent of Trump, there remained something of a convenient fiction that - at least for the elite "thinkers" in the party - this was all part of a cynical ploy to stay in power. Perhaps that was true, and certainly, those who now call themselves Never Trumpers have implicitly validated that claim, but for the rank and file of the Party, the journey to La-La Land was real. Now they are there, have built houses, and are fully invested in their imaginary homeland. It is not getting rid of Trump that is a national imperative; it is getting rid of the current madness possessing one-third of the country, and Trump offers potentially the best opportunity to do so.

For all his talk of being a nationalist, Trump has never sought to serve any interest broader than his own. But, by becoming President, Donald Trump is now performing his first great service to the nation: Exposing the reality of the modern Republican Party, and waking up the 65% of the country that had been in denial about it. As the record voter turnout in 2018 shows, those 65% are now fully awake. Donald Trump is the first president in decades - probably ever - to have a sustained disapproval rating above 50%, and occasionally as high as 60%. That is a signal achievement, and a great one for the country. There is finally a majority believing strongly that the Limbaugh-Coulter-Hannity-Trump vision of America is utterly unacceptable. Keeping Trump around is necessary to maintain this fervor, which is one big strategic reason why Democrats should not impeach Donald Trump, and should instead give him every opportunity to be the Republican nominee again in 2020. Yes, there will be naysayers pointing out that this argument was made in 2016 and backfired. This is not 2016. Indeed, November 8, 2016 was as big a hinge event politically as September 11, 2001. The U.S. is a new country today, and Donald Trump is toxic to a majority in this new country.

But what of the talk of impeachment? Should that be tempered in the interest of the strategic goal of keeping Trump around? And this is where strategy really matters. There has been broad perplexity among the pundit class regarding Trump's decision to follow a "base-only" strategy, i.e., playing so strongly to his base that he keeps the rest of the country alienated. But, as some observers have begun to note, this is actually a strategy born of desperate need. The 35% of the country that supports Donald Trump blindly is, in fact, 80% of the voter base of the Republican Party, which means that most Republican senators and congressmen/women cannot get elected without its support. This is the only reason why the vast majority of these otherwise sentient human beings continue to fawn before Trump and support everything he says. Trump's primary hope for avoiding impeachment is to retain this loyalty, which means retaining the loyalty of his (and their) base - hence the "base only" approach. But while this strategy increases Trump's chances of surviving through this term and getting nominated again in 2020, it diminishes catastrophically his chances of actually getting re-elected. And, in the bargain, it also makes election more difficult for House and Senate Republicans in states with high urban and suburban populations. Well, this is exactly what Democrats want - or at least should want: A toxic Donald Trump hung like an albatross around the neck of the Republican Party going into 2020. So how can they make sure that this happens? Why, by continuing to talk about impeachment as a real possibility without actually doing it, thus pinning Trump and the Republicans to their base strategy. This is why statements such as Tlaib's, while tactically problematic, are strategically useful, and one can be sure that Nancy Pelosi and Chuck Schumer understand this.

Two further issues bear discussion here. First, the strategic imperatives described above notwithstanding, it may well become impossible even for Nancy Pelosi to keep a lid on impeachment fervor in the House. The prospect of investigating and removing Donald Trump was a huge driver for Democratic voters in the 2018 midterms, and it may be impossible for Democrats to resist this demand for too long. The hope is that, once election fever picks up, progressive voters will see diminishing value in impeaching Trump and look to an electoral removal. When this shift in balance may occur is an interesting question, but late 2019 looks to be a reasonable guess.

The second important issue is the one raised by those who say that removing Trump as soon as possible is a national imperative to preserve the fabric of the country. In his recent New York Times op-ed, David Leonhardt also makes a very persuasive case that a president must not be seen as escaping punishment for crimes such as self-dealing, obstruction of justice, and campaign finance violations. These arguments are absolutely valid, and the damage Trump is doing to the institutions of the United States will take a long time to recover from. But it is important to remember that, as many have noted, Trump is not the problem, just a symptom. The real problem is the fundamentally undemocratic attitude of the Republican base, which is a much more pervasive problem than one person or a few people at the top. The malignancy afflicting the American body politic cannot be treated simply by removing the visible tumor and stitching things back up. Rather, it requires systemic treatment that removes all traces of the disease throughout the body. This does not mean disenfranchising Republican voters or turning them into pariahs, but it does mean finally demonstrating the futility of their reactionary vision for America and allowing them a graceful way back. To all those who say "I want my country back", events must show that the country will move inexorably towards a more diverse and open future; that it is already too late to turn back. This will have a cost, but there is no other way. Easy fixes like partisan impeachment will only make things worse. Ultimately, a healthy American democracy requires a healthy conservative political party to temper the idealism of liberals and progressives. Democracy is fundamentally a dialectic, and no functioning democracy can long afford for one of the interlocutors to be out of their mind.

Finally, it is fair to ask if the hope that American democracy will survive and thrive is, in fact, justified. Is it really true that the United States cannot turn towards a darker vision? The lesson of history is that this dark vision lurks deep within human minds and human societies. No amount of education and enlightenment can guarantee that the nightmares of yesterday cannot again become the reality of tomorrow. It has happened - is happening - in too many places for those who cherish the delicate dream of liberal democracy to rest easy. As abolitionist Wendell Phillips said more than 150 years ago, eternal vigilance is still the price of liberty.

Saturday, January 5, 2019

The History of Genghis Khan

Few personalities in history have aroused more interest and held greater fascination than Genghis Khan. Even eight centuries after his death, he stands as a symbol of both power and destruction - perhaps unfairly so. In a short time, he created what still remains the largest contiguous land empire in history. In doing so, he generated an unprecedented mixing of cultures, churning of ideas, and flourishing of trade across most of the Old World. The world that followed was so indelibly stamped with his influence that his personal biological signature can still be seen in populations across the world. In a real sense, Genghis Khan was fundamental to the making of the world we live in today.

The Mongols had no systematic writing system, and were quite cagey about their own history. Thus, a lot of what we know about the Great Khan and his immediate successors comes from the writings of others - mostly writers among the conquered. Historian-administrator Ata Malik Juvaini, though also from a conquered group, became an insider in the Mongol administration of Iran, accompanied Hulegu, Genghis Khan's grandson, on his conquests, and eventually became governor of Mesopotamia. He also visited the Mongol capital, Karakorum, on multiple occasions. His history of Genghis Khan, titled History of the World Conqueror (Tarikh-e Jahan-Gusha), is the most direct, near-contemporary history of the great conqueror, and one of the most valuable historical treatises from the entire period.

An updated version of the definitive 1958 English translation of the book by J.A. Boyle is available here to read for free.

A Dynamic Levant

Genetic analysis of populations in the Levant shows how much relatively recent history has shaped the genetic distribution in the region.

Quantum Sense

A beautiful, accessible talk by David Tong on the mysteries of quantum field theory.

The New Reading

Real problem or crying wolf? An interesting piece on how changes in the way we read might be having much wider - mostly bad - effects. This paragraph, in particular, drew my attention:

“Multiple studies show that digital screen use may be causing a variety of troubling downstream effects on reading comprehension in older high school and college students. In Stavanger, Norway, psychologist Anne Mangen and her colleagues studied how high school students comprehend the same material in different mediums. Mangen’s group asked subjects questions about a short story whose plot had universal student appeal (a lust-filled, love story); half of the students read Jenny, Mon Amour on a Kindle, the other half in paperback. Results indicated that students who read on print were superior in their comprehension to screen-reading peers, particularly in their ability to sequence detail and reconstruct the plot in chronological order.”

Many years ago, I wrote something (can’t recall if it was shared anywhere) on the way hypertext is changing our notions of reading. For five thousand years, reading has fundamentally been a linear process, moving sequentially from one word to the next. That changed with hypertext. Reading is now a multi-dimensional experience. It is not unreasonable to think that this has made it difficult for some people to read old-fashioned linear text. We won’t really know the effects of this until we have a generation that grew up only using hypertext. We still don’t have such a generation since schools still begin by using print books, but we’re getting there. It will either supercharge our minds or destroy precious capacities developed painstakingly over millennia.

The issue this article talks about is closely related, though not the same. There is an interesting connection that I can relate to personally. I am an exceptionally slow reader, and have tried to analyze why. The conclusion I have come to is that there are two reasons. First, I do actually read every word as well as its connections within the text. This is probably a consequence of having read too much poetry very early in life. About 80% or more of my extracurricular reading until age 9 was poetry, and poetry has to be read slowly. The second reason is that, as I read, I often jump off on other tangents suggested by what I’m reading - a related idea, the history of a person mentioned in the text, or just the beauty of a particular sentence. In a sense, I read linear text like hypertext, except that the hyperlinks are in my mind, not on the page. Now, I’m sure this is the case for all readers, but those who read fast are better able to streamline this process and probably resist the distraction of tangents. Ultimately, this comes down to a broader habit of mind. Some of us are better at concentration, others prefer ramification. This may also be related to the dichotomy between convergent and divergent thinking. Perhaps, by destabilizing the linearity of reading in young minds, we are also making them more creative.

Complexity is complicated....

Why Complexity is Different

Yaneer Bar-Yam, one of the world's leading experts on complex systems, explains why complexity really is very different. Much more to be said on this deep topic, but these are some great insights.

The Algorithm Knows

AI is finally being put to good use -- predicting whether two people in a relationship are likely to break up. How long before the app begins to send you "You have 35 minutes till breakup. Traffic is moderate" notifications?

Faith in the U.S. Congress

Pew Research has a very interesting piece on the religious affiliations of members of the US House and Senate in the 116th Congress which began this month. As expected, Christians of various denominations dominate, with 293 Protestants, 163 Catholics, 5 Orthodox, and 10 Mormons out of 535 total members. As for the rest, 34 members are Jewish, 3 Muslim, 3 Hindu, 2 Buddhist, 2 Unitarian, and 1 unaffiliated. Eighteen members did not provide the information. Apparently, not a single member had the guts to call themselves an atheist.

Wednesday, September 13, 2017

Asma on the Nature of Imagination

In this fascinating piece, the philosopher Stephen Asma lays out a compelling case for why the faculty of imagination may be older than - and conceptually prior to - language. In his formulation, wordless imagination was an earlier step above the "lizard brain" state of purely stimulus-response and instinctual behavior. It represented the ability of the brain to generate states that were not triggered directly by stimuli, i.e., a kind of confabulation. Language is then seen as a further complexification, where an apparatus evolved to harness the power of imagination to more useful ends.

This view resonates strongly with my own conception of the nature of the mind, which must be seen in an evolutionary and developmental context to be fully understood. The "higher" capabilities of human and animal cognition are essentially layers built on top of the basic stimulus-response animal. As these layers of intertwining and autonomously activatable neural networks developed, they created increasingly complex ways for information to "dwell" and ramify within the brain, thus creating room first for imagination and then for language. And next, who knows for what?

This is especially significant in the quest for AI. The entire AI enterprise began with a top-down symbol-centered view. Even though that has now given way to more emergence-based conceptions such as neural networks and subsumption, the phylogenetic sequencing by which intelligence arose in the world is still not appreciated. Nor is the similar ontogenetic sequencing whereby a developing animal moves from a "primitive" stimulus-response functionality towards an increasingly "linguistic" one based on explicit communication. This is true not just in humans but also in primates, birds, probably cetaceans, and many other vertebrates. Perhaps we need to abandon all the "shortcut" methods we are currently using to produce artificial intelligence, and follow Nature's evolutionary pathway. Start with the simplest Braitenberg vehicles, and let them evolve brains capable first of imagination and then of language. It would be especially interesting to see whether these things emerge spontaneously as a consequence of complexification in brain and body.

Monday, April 10, 2017

Calamari Editors

It seems that octopuses, squids and cuttlefish are really good at using RNA editing to produce proteins that their genes don't code for. This interesting article argues that this flexibility may be an important component of their intelligence. It would be interesting to see if this also plays a big role in the remarkable ability of these animals to change color and camouflage themselves. However, this comes at a price, since it slows down evolution. So yeah, you can open bottles and change colors like nobody's business, but you'll always have eight tentacles growing out of your head....

Your Snout is Showing...

Never forget that when you eat chicken, you're eating a dinosaur. Now some scientists are beginning to unmask the chicken and show us its true face - snout and all. Be very afraid....

One quibble with this piece: There was nothing "accidental" about what happened here. These scientists were out to do exactly what they did. Now if only they'd hatch some of these ugly devils.

Brain Hacking

Getting earworms into your head isn't enough anymore. It appears that social media and other online businesses are hacking into our brains in quest of clicks, eyeballs, and ka-ching. If you are holding your cell phone, put it down and move away slowly until you can neither see the screen nor hear alerts. Sit down and breathe normally. You'll feel much better in an hour... that is, if you can survive the anxiety of being far from your phone for an hour. Good luck!

Emergent Art - the Reddit Version

This really cool experiment on Reddit demonstrates how complex order can emerge bottom-up from relatively simple rules under constraints. While it isn't clear that this says all that much about the genesis of art, it certainly provides very interesting insights into the self-organization process - and especially into the delicate balance between randomness and constraint that seems to be essential to produce truly non-trivial and useful order.
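The flavor of bottom-up order can be illustrated with a toy model (this is not the Reddit experiment itself, just an analogy): a majority-rule cellular automaton in which each cell repeatedly adopts the most common color among its neighbors. A purely local rule, acting under the constraint of a fixed grid, turns random noise into large ordered patches. All names and parameters below are illustrative.

```python
import random

random.seed(42)
N = 40  # side length of a toroidal (wrap-around) grid

# Start from pure noise: each cell randomly 0 or 1.
grid = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

def neighbor_sum(g, i, j):
    """Sum of the 8 neighbors of cell (i, j), with wrap-around edges."""
    total = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                total += g[(i + di) % N][(j + dj) % N]
    return total

def agreement(g):
    """Fraction of horizontally adjacent cell pairs with the same color:
    a crude measure of local order (about 0.5 for random noise)."""
    same = sum(g[i][j] == g[i][(j + 1) % N]
               for i in range(N) for j in range(N))
    return same / (N * N)

before = agreement(grid)
for _ in range(20):
    # Synchronous update: every cell adopts its neighborhood majority
    # (ties, i.e. exactly 4 of 8 neighbors on, resolve to 0).
    grid = [[1 if neighbor_sum(grid, i, j) > 4 else 0 for j in range(N)]
            for i in range(N)]
after = agreement(grid)
print(before, after)  # local agreement rises as patches form
```

Nothing in the rule mentions "patches", yet patches are what you get - the same kind of surprise the Reddit canvas produced at a much larger and messier scale.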


(via Yaneer Bar-Yam)

Fair Inequality

In an interesting review piece in Nature Human Behaviour, Starmans et al. argue that, in realistic settings involving large groups, people show a preference for "fair inequality" over "unfair equality" in economic terms. The studies with children are the most interesting, because studies involving adults are probably too confounded by political leanings and social norms.

"There is immense concern about economic inequality, both among the scholarly community and in the general public, and many insist that equality is an important social goal. However, when people are asked about the ideal distribution of wealth in their country, they actually prefer unequal societies. We suggest that these two phenomena can be reconciled by noticing that, despite appearances to the contrary, there is no evidence that people are bothered by economic inequality itself. Rather, they are bothered by something that is often confounded with inequality: economic unfairness. Drawing upon laboratory studies, cross-cultural research, and experiments with babies and young children, we argue that humans naturally favour fair distributions, not equal ones, and that when fairness and equality clash, people prefer fair inequality over unfair equality. Both psychological research and decisions by policymakers would benefit from more clearly distinguishing inequality from unfairness."

Thursday, March 30, 2017

Reflecting on AI

A nice reflection on AI, its current status, and future possibilities by Namit Arora - with a very useful set of links.

"As for the more dramatic claims about AI, my view, which I articulated in The Dearth of Artificial Intelligence (2009), remains that even if we develop ‘intelligent’ machines (much depends here on what we deem ‘intelligent’), odds are near-zero that machines will come to rival human-level general intelligence if their creation bypasses the particular embodied experience of humans forged by eons of evolution. By human-level intelligence (or strong AI, versus weak or domain-specific AI), I mean intelligence that’s analogous to ours: rivaling our social and emotional intelligence; mirroring our instincts, intuitions, insights, tastes, aversions, adaptability; similar to how we make sense of brand new contexts and use our creativity, imagination, and judgment to breathe meaning and significance into novel ideas and concepts; to approach being and time like we do, informed by our fear, desire, delight, sense of aging and death; and so on. Incorporating all of this in a machine will not happen by combining computing power with algorithmic wizardry. Unless machines can experience and relate to the world like we do—which no one has a clue how—machines can’t make decisions like we do. (Another way to say this is that reductionism has limits, esp. for highly complex systems like the biosphere and human mind/culture, when the laws of nature run out of descriptive and predictive steam—not because our science is inadequate but due to irreducible and unpredictable emergent properties inherent in complex systems.)"

Looking at the World with Human Eyes

A few weeks ago, I wrote a piece on 3 Quarks Daily arguing that thinking of realistic AI as being hyper-rational was a mistake, and that AI that is convincingly "real" will, in fact, be convincingly irrational - albeit not necessarily in the same ways that humans are.

This article reports on an attempt to develop a machine learning system for image analysis that makes mistakes similar to humans. This is much more than just a "cute" idea. As the report says, quoting David Cox, the study's lead researcher:

"Algorithms that make decisions in a similar way to us could also be easier to understand and trust, says Cox. Computer systems sometimes make mistakes that humans wouldn’t – like Tesla’s Autopilot system failing to notice a white trailer against a bright sky. Systems trained on brain data would make mistakes in a more human way. “And if you make mistakes that a human would make, humans will continue to trust that system,” says Cox."

Ultimately, the effort to make the irrationality of intelligent machines similar to that of humans will fail, because machines capable of autonomous learning will go in unpredictable directions - but it isn't a bad place to start.