At BetaNews, Robert X. Cringely writes that fifty years ago attempts to create artificial intelligence (AI) failed because there was not enough “processing power” to do it: “But thanks to Map Reduce and the cloud we have more than enough computing power to do AI today.”
At The Washington Post, Dominic Basulto tells us that AI is “the next big thing” for Silicon Valley start-ups. Not only will it be created; it will be created easily and cheaply:
AI will move from something that took tens of millions of dollars and thousands of people to create, to something that takes tens of thousands of dollars and can be created by a group of kids after an all-night Red Bull session. When they do, then we’ll know that visionaries like Erik Brynjolfsson and Andrew McAfee were right – we are entering the dawn of the age of artificial intelligence.
The problem with AI is that we don’t really know what the problem is, nor do we agree on what success would look like. With your cellphone (or any number of similarly rapidly improving technologies), we know exactly what constitutes success, and we know pretty well how to achieve it. With AI, defining the questions remains a major task, and defining success remains a major disagreement. That is fundamentally different from challenges like increasing processor power, squeezing more pixels onto a screen, or speeding up wireless internet. Failing to see that difference is massively unhelpful.
If people want to reflect meaningfully on this issue, they should start with the central controversy in artificial intelligence: probabilistic vs. cognitive models of intelligence.
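To make the divide concrete, here is a toy sketch (my own illustration, not from any of the articles above) of the two camps applied to a trivial task: deciding whether a sentence is a greeting. The probabilistic side learns word frequencies from labeled examples; the cognitive/symbolic side encodes a hand-written rule about structure. All data and function names here are invented for the illustration.

```python
from collections import Counter

# --- Probabilistic approach: learn word frequencies from labeled examples ---
greetings = ["hello there", "hi friend", "good morning"]
non_greetings = ["the cat sat", "stock prices fell", "rain tomorrow"]

def word_probs(sentences):
    """Estimate per-word probabilities from a small labeled corpus."""
    counts = Counter(w for s in sentences for w in s.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

p_greet = word_probs(greetings)
p_other = word_probs(non_greetings)

def probabilistic_is_greeting(sentence, smoothing=1e-3):
    # Compare the sentence's likelihood under the two word-frequency models,
    # with a small smoothing value for unseen words.
    score_g = score_o = 1.0
    for w in sentence.split():
        score_g *= p_greet.get(w, smoothing)
        score_o *= p_other.get(w, smoothing)
    return score_g > score_o

# --- Cognitive/symbolic approach: a hand-written structural rule ---
GREETING_WORDS = {"hello", "hi", "hey", "morning"}

def symbolic_is_greeting(sentence):
    # Rule: a greeting is any sentence that opens with a known greeting word.
    words = sentence.split()
    return bool(words) and words[0] in GREETING_WORDS
```

The two functions agree on easy cases (“hello friend” is a greeting to both; “stock prices fell” is not), but they disagree about what intelligence consists of: one generalizes from data it has seen, the other applies knowledge someone wrote down. That disagreement, scaled up, is the controversy.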