Sunday, September 7, 2008

The Singularity Is Far

The Singularity Is Far: In this post, I wish to propose for the reader’s favorable consideration a doctrine that will strike many in the nerd community as strange, bizarre, and paradoxical, but that I hope will at least be given a hearing.  The doctrine in question is this: while it is possible that, a century hence, humans will have built molecular nanobots and superintelligent AIs, uploaded their brains to computers, and achieved eternal life, these possibilities are not quite so likely as commonly supposed, nor do they obviate the need to address mundane matters such as war, poverty, disease, climate change, and helping Democrats win elections. [...] The fifth reason is my (limited) experience of AI research. [...] For whatever it’s worth, my impression was of a field with plenty of exciting progress, but which has (to put it mildly) some ways to go before recapitulating the last billion years of evolution.  The idea that a field must either be (1) failing or (2) on track to reach its ultimate goal within our lifetimes, seems utterly without support in the history of science (if understandable from the standpoint of both critics and enthusiastic supporters).  If I were forced at gunpoint to guess, I’d say that human-level AI seemed to me like a slog of many more centuries or millennia (with the obvious potential for black swans along the way). [...] And I can’t help thinking that, before we transcend the human condition and upload our brains to computers, a reasonable first step might be to bring the 17th-century Enlightenment to the 98% of the world that still hasn’t gotten the message. (Via Shtetl-Optimized.)

Read the whole thing. I'm not sure I'd give as much credit to the Singularitarians as Scott does, but I'm in complete agreement with his estimation of AI: “a field with plenty of exciting progress, but which has (to put it mildly) some ways to go before recapitulating the last billion years of evolution.”
