In my last post, which recapped a fascinating lunch I had with a bunch of economists and AI researchers at MIT, I wrote:
Computers are getting bigger and faster, but not ‘smarter’ in any human sense of the word. Artificial intelligence bears very little relationship to the human variety, and the two are not going to merge. One of the AI researchers referred to the idea of the Singularity as a ‘category mistake,’ which is a great academic insult.
A couple of people asked for more details about this ‘category mistake,’ so I went back and reviewed my notes from the meeting. Here’s an edited, imperfect transcript of what one of the attendees said (I’ll follow the Chatham House Rule):
There’s a type error there, which is conflating FLOPS with intelligence. [Some Singularity advocates] are just plotting FLOPS against brain size, saying they cross in 2035 or whenever and therefore… and that’s just obviously a mistake. There’s something missing there. [Intelligence is] not just counting cycles.
The AI professionals were pretty adamant that faster machines were not automatically smarter machines, and that all the work they were doing to accomplish amazing feats like speech recognition, automatic translation, robot mobility and manipulation, driverless cars, and so on was not causing computers to become any more human.
To which I can only say, whew!