“FLOPS Are Not Intelligence”: The Type Error of the Singularity

by Andrew McAfee on May 8, 2012

In my last post, which recapped a fascinating lunch I had with a bunch of economists and AI researchers at MIT, I wrote:

Computers are getting bigger and faster, but not ‘smarter’ in any human sense of the word. Artificial intelligence bears very little relationship to the human variety, and the two are not going to merge. One of the AI researchers referred to the idea of the Singularity as a ‘category mistake,’ which is a great academic insult.

A couple of people asked for more details about this ‘category mistake,’ so I went back and reviewed my notes from the meeting. Here’s an edited, imperfect transcript of what one of the attendees said (I’ll follow the Chatham House rule):

There’s a type error there, which is conflating FLOPS with intelligence. [Some Singularity advocates] are just plotting FLOPS against brain size, saying they cross in 2035 or whenever and therefore… and that’s just obviously a mistake. There’s something missing there. [Intelligence is] not just counting cycles.
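
For concreteness, here is what the criticized projection amounts to: a few lines of arithmetic extrapolating machine FLOPS forward until they cross an estimate of the brain’s raw compute. The Python sketch below uses illustrative numbers of my own choosing (a commonly cited ~10^16 operations per second for the brain, ~10^12 affordable machine FLOPS in 2012, and an eighteen-month doubling time); none of these figures come from the lunch discussion, and the researcher’s point is precisely that the crossing year such a calculation spits out says nothing about intelligence.

    import math

    # Illustrative assumptions only -- not figures from the post or the lunch:
    # an often-cited estimate of the brain's raw compute, a rough figure for
    # affordable machine FLOPS in 2012, and a Moore's-law-ish doubling time.
    BRAIN_FLOPS = 1e16          # assumed: ~10 petaFLOPS, one common brain estimate
    MACHINE_FLOPS_2012 = 1e12   # assumed: ~1 teraFLOPS affordable in 2012
    DOUBLING_YEARS = 1.5        # assumed: machine FLOPS double every ~18 months

    # Doublings needed, times years per doubling, gives the crossing year.
    years_to_cross = DOUBLING_YEARS * math.log2(BRAIN_FLOPS / MACHINE_FLOPS_2012)
    print(f"FLOPS parity, under these assumptions, around {2012 + years_to_cross:.0f}")
    # The crossing year says nothing about whether the machine is intelligent;
    # counting cycles is not the same as thinking -- that is the type error.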

The AI professionals were pretty adamant that faster machines were not automatically smarter machines, and that all the work they were doing to accomplish amazing feats like speech recognition, automatic translation, robot mobility and manipulation, driverless cars, and so on was not causing computers to become any more human.

To which I can only say, whew!

  • Laurence Hart (http://twitter.com/piewords)

    It all boils down to algorithms. Engineering can make faster processors with more memory, but what the software does with them still matters. What the increased power does is allow us to run more complicated algorithms without hurting perceived performance.

    The thing is, even with inefficient algorithms, we could hit that point eventually. Say we are able to map basic cognitive processes and add the ability to acquire new processes through trial and error. That could create the intelligence. It might require more power than we will have in 2035, but with continued technological growth, consider 2040 or 2045.

    It is a matter of time. The only real question is whether or not we’ll live to see it.

    -Pie

  • Mona M. Vernon (http://twitter.com/monavernon)

    AI+crowd is emerging in enterprise-grade solutions, and I wonder whether the question is even relevant when, instead of choosing, we get to have both (for example, Soylent from MIT CSAIL).

  • Jon Perry

    I think this line of criticism mischaracterizes the arguments of Singularity advocates. It mostly seems to be responding to a small portion of Ray Kurzweil’s argument (and Ray himself represents only a small subset of the actual views held in the Singularity community). Specifically, these AI professionals appear to be addressing the part of Kurzweil’s argument where he discusses possible timelines for when we could expect to have the requisite hardware to fully model the brain at the level of individual neurons. So the argument is mostly about a theoretical project of whole brain emulation, which doesn’t bear any incremental resemblance to driverless cars and other modern narrow AI projects. Also, I don’t think anyone in the Singularity camp would be so facile as to suggest that software innovation is not also a requirement for achieving human-level intelligence, or that hardware progress in the form of more cycles is by itself sufficient. So at the end of the day I don’t think anything very new or controversial is being said here.

  • Jed Harris

    Thanks for following up on this. 

    There are really two separate points in what you say (channeling the AI experts):

    1)  FLOPS are not intelligence.  I guess we can all agree on that.  There are projections of machine intelligence (Moravec, Kurzweil) that do place a lot of weight on basic computational power, but their authors also know very clearly that algorithms are needed.  So I agree with Jon Perry that the Singularity folks are NOT making this category error. 

    2)  The ongoing work to turn FLOPS into “amazing feats… [is] not causing computers to become any more human.”  I don’t know that the Singularity folks are claiming computers will “become more human” — just that at some point “anything we can do, they can do better.”  And I don’t see the argument against that in anything you’ve said. 

    But let’s try to figure out what they may have meant.  Maybe there’s an implicit argument here of the following form — I’m very much speculating:

    1)  All “amazing feats” are enabled by invention of “intelligent” algorithms (plus FLOPS). 

    2)  This invention is done by humans, and for the foreseeable future, can’t be done by computers. 

    3)  These algorithms are fairly task-specific; they won’t in the foreseeable future lead to general intelligence. 

    4)  Thus human invention sets the rate and direction of increased computer “intelligence”, and computers can’t take off and become ultra-intelligent (as the Singularity story would require). 

    Whether or not your experts would endorse this argument, it fits what they did say, and it is interesting and somewhat plausible.  I see one strong point (2) and one weak point (3). 

    I tend to believe (2) because we haven’t seen much, if any, progress in computer self-programming since Sussman’s thesis in 1973.  I guess we could count various attempts at evolutionary programming as progress, but they haven’t been very useful. 

    But I tend not to believe (3).  Algorithms are becoming capable of increasingly general learning, and I see no reason to think this trend will stop.  At some point in the near future computers will be able to learn most human skills by analyzing corpuses of examples.  Whether that makes them “more human” is a delicate philosophical question — and I think not the one that your experts were addressing. 

    But as long as (2) is true, I agree we won’t get a discontinuous Singularity, since computers won’t be able to advance their own core abilities.  Perhaps the experts were saying that the two categories that shouldn’t be confused are AI systems and the intelligence needed to invent better AI systems — that we can’t create machines that do what AI researchers do. 

    I’d be very interested in getting their comments on this question if / when you discuss it further. 

  • Jed Harris

    Also, a minor issue:  You embedded a JPG ( http://www.computableminds.com/img/singularidad/Hans-Moravec.jpg )  but the tags are messed up.  In Chrome it displays as a broken image icon, in Firefox it doesn’t display at all.

  • Chris L

    What Jon said about the straw-man nature of this, and what Jed said about “anything we can do, they can do better.”

  • multiplex (http://twitter.com/multiplex)

    My main issue is not the growth in FLOPS or the type error; it is the assumption that “once we create an intelligence with the capacity to make another that surpasses itself, this new intelligence could, in turn, do the same.” Why would that hold? In effect: can we think of a self-limiting phenomenon here?

  • tourisma

    Interesting point: “The AI professionals were pretty adamant that faster machines were not automatically smarter machines.”
    Even so, a lot of researchers in this paradigm still seem to pin their hopes on “nano…”
    http://for-tourisma.com/2014/01/05/human-robots/
