A nice reflection by Namit Arora on AI, its current status, and its future possibilities, with a very useful set of links.
"As for the more dramatic claims about AI, my view, which I articulated in The Dearth of Artificial Intelligence (2009), remains that even if we develop ‘intelligent’ machines (much depends here on what we deem ‘intelligent’), odds are near-zero that machines will come to rival human-level general intelligence if their creation bypasses the particular embodied experience of humans forged by eons of evolution. By human-level intelligence (or strong AI, versus weak or domain-specific AI), I mean intelligence that’s analogous to ours: rivaling our social and emotional intelligence; mirroring our instincts, intuitions, insights, tastes, aversions, adaptability; similar to how we make sense of brand new contexts and use our creativity, imagination, and judgment to breathe meaning and significance into novel ideas and concepts; to approach being and time like we do, informed by our fear, desire, delight, sense of aging and death; and so on. Incorporating all of this in a machine will not happen by combining computing power with algorithmic wizardry. Unless machines can experience and relate to the world like we do—which no one has a clue how—machines can’t make decisions like we do. (Another way to say this is that reductionism has limits, esp. for highly complex systems like the biosphere and human mind/culture, when the laws of nature run out of descriptive and predictive steam—not because our science is inadequate but due to irreducible and unpredictable emergent properties inherent in complex systems.)"
"As for the more dramatic claims about AI, my view, which I articulated in The Dearth of Artificial Intelligence (2009), remains that even if we develop ‘intelligent’ machines (much depends here on what we deem ‘intelligent’), odds are near-zero that machines will come to rival human-level general intelligence if their creation bypasses the particular embodied experience of humans forged by eons of evolution. By human-level intelligence (or strong AI, versus weak or domain-specific AI), I mean intelligence that’s analogous to ours: rivaling our social and emotional intelligence; mirroring our instincts, intuitions, insights, tastes, aversions, adaptability; similar to how we make sense of brand new contexts and use our creativity, imagination, and judgment to breathe meaning and significance into novel ideas and concepts; to approach being and time like we do, informed by our fear, desire, delight, sense of aging and death; and so on. Incorporating all of this in a machine will not happen by combining computing power with algorithmic wizardry. Unless machines can experience and relate to the world like we do—which no one has a clue how—machines can’t make decisions like we do. (Another way to say this is that reductionism has limits, esp. for highly complex systems like the biosphere and human mind/culture, when the laws of nature run out of descriptive and predictive steam—not because our science is inadequate but due to irreducible and unpredictable emergent properties inherent in complex systems.)"