The Intuitionists

by Andrew McAfee on May 13, 2010

I gave a talk at Palantir Technologies’ way-cool DC offices on Monday as part of their “Palantir Night Live” series. My topic was “In the Age of Smart Machines, What are We Good For?” I expanded on a couple of recent posts (here, here, and here) and talked about what computers are good at, what people are good at, and how the two types of ability can be combined to yield good results.

I’d post the slides from the talk, but I fear they really wouldn’t make much sense without the words that went along with them. Palantir will make a video of the talk available, and I’ll let you know when it’s up.

I want to highlight here what was for me the most interesting moment of the talk. The audience was a group of 70+ folks from the DC geek community. This is a deep community because of the government’s vast spending on technology, and there was a lot of intellectual horsepower in the room. Most folks seemed to be in their 20s, and I got the impression that a lot of them worked directly with code. This was, in short, a highly educated and technologically sophisticated group.

One slide in my talk was titled “People in the Age of Fast, Dumb Computers: Good News and Bad News.” I explained that the good news and bad news here were being evaluated from the perspective of a person who wanted to be considered a valuable contributor and keep his job.

The next slide just said “Human Intuition vs. Algorithmic Predictions.” I asked the audience whether they thought that this was the good news portion of the talk, or the bad news one. I asked them, in other words, if they thought that human intuition was holding up well against fast, dumb computers applying algorithms to data.

After counting hands a couple of times, the audience and I agreed that about 60% of the people thought that this was going to be the good news portion of the talk – that I was going to say next how robust and valuable human intuition is proving to be. I then had to tell them that they were wrong.

The blog posts referenced above (to repeat, they’re here, here, and here) make the case that in many if not most cases our intuition is biased and faulty. If we’re interested in making good decisions and predictions (rather than protecting the feelings and reputations of the alleged intuitive experts), we should in such cases replace intuition with data and algorithms. I appreciate that this stance is cold-hearted and removes ‘the human element.’ I also believe, based on the research I’ve seen, that the human element is the weak link in many decision-making and prediction tasks. When a stronger link becomes available, it should be used instead.
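
To make “data and algorithms” concrete, here is a minimal sketch (in Python, with hypothetical cues and made-up numbers) of the kind of simple unit-weighted scoring rule that this research, going back at least to Robyn Dawes’s work on “improper linear models,” has found remarkably hard for expert intuition to beat:

    import statistics

    # Toy "improper linear model": standardize each cue, weight all cues
    # equally, and sum. Candidates, cues, and numbers are all made up.
    candidates = {
        "A": [610, 3.1, 4],  # e.g., test score, GPA, interview rating
        "B": [540, 3.8, 3],
        "C": [680, 2.9, 5],
    }

    def zscores(column):
        """Put one cue on a common scale across all candidates."""
        mean = statistics.mean(column)
        sd = statistics.pstdev(column) or 1.0  # guard against zero spread
        return [(v - mean) / sd for v in column]

    names = list(candidates)
    cue_columns = [zscores(col) for col in zip(*candidates.values())]
    score = {name: sum(col[i] for col in cue_columns)
             for i, name in enumerate(names)}

    for name in sorted(score, key=score.get, reverse=True):
        print(f"{name}: {score[name]:+.2f}")

There is no holistic judgment anywhere in that rule: standardize each cue, weight the cues equally, add them up. That something so crude can so often match or beat the experts is the uncomfortable sort of finding the posts above describe.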

I don’t want to rehash here all the arguments in support of my stance. Take a look at the posts if you’re interested, and let us know what you think of them. With this post I just want to underscore that decreasing our reliance on human intuition is going to be a long, slow haul.

The clear majority of one of the most data- and algorithm-friendly groups I’ve seen in a while thought I was going to give them good news about the comparative efficacy of human intuition over the digital tools they themselves were working on. This optimism persists among highly educated people despite an extensive body of research on biases and heuristics in human decision making.

We are very fond of ourselves, as well we should be. Like Hamlet, I marvel at “What a piece of work is a man, how noble in reason, how infinite in faculties… in apprehension how like a god!” The bulk of my talk, in fact, was devoted to the paltry progress of Artificial Intelligence research in building anything remotely close to a computerized mind. I think The Singularity, the Age of the Smart Machine, or whatever else you want to call it is a long, long way off. What our brains do is extraordinarily weird, difficult, powerful, and unique. So while I really like the remade Battlestar Galactica, I don’t think a war against intelligent machines is anything we need to start worrying about yet.

But I do think that we need to start seriously questioning our assumption that our brains are good at every task to which they’re currently applied. When better alternatives come along we need to recognize them and act accordingly, even if these alternatives make us feel a bit less exalted.

What do you think? Am I being too hard on human decision making and self-regard? What are the good arguments, if any, for continuing to let people make decisions and predictions in areas where algorithms have been shown to be superior? Leave a comment, please, and let us know.

  • chrisbeveridge

    This reminds me of something I like to call “California-style navigation”: driving around sort of aimlessly, no maps needed. When you get to an area that looks like it might be the intended destination, the intuition kicks in, and you hear the driver say “this feels right.” Not exactly the accuracy of GPS, or even Google Maps. But somehow we still practice it… and we drive around for hours and are usually late and stressed out…

  • http://acleanlife.org robotchampion

    Andy – great talk and I loved your point about human + computer = the most powerful combo yet.

    I did differ with you on a subtle point. I would posit that intuition is a complex process made up of many algorithms: a composite of raw data, statistical comparisons, decision making, and sometimes even experiential emotional learning.

    In essence, AI is about building the brain one process/algorithm at a time. The most basic processes come first (sentence structure, meaning through sentences, chess moves). The next process builds on those, getting ever more complex, until finally we can recreate the most complex of brain processes, which I would guess are emotions, intuition, and creativity.

    It's like you can't have translation without understanding basic English sentence structure. Or you can't have AI in chess without first programming all the rules and then programming a statistical response to those rules.
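
    A minimal sketch of that “rules first, then a decision procedure” idea, using Nim instead of chess to keep it short (illustrative only, not any real engine):

        # Nim: players alternate removing 1-3 sticks; whoever takes the
        # last stick wins. Step 1 encodes the rules; step 2 searches them.

        def legal_moves(sticks):
            """The rules: take 1, 2, or 3 sticks, never more than remain."""
            return [n for n in (1, 2, 3) if n <= sticks]

        def wins(sticks):
            """True if the player to move can force a win from here."""
            if sticks == 0:
                return False  # the previous player just took the last stick
            return any(not wins(sticks - take) for take in legal_moves(sticks))

        def best_move(sticks):
            """Pick any move that leaves the opponent in a losing position."""
            for take in legal_moves(sticks):
                if not wins(sticks - take):
                    return take
            return min(legal_moves(sticks))  # every move loses; stall

        print(best_move(10))  # takes 2, leaving 8 (multiples of 4 lose)

    The point of the sketch: the search in wins() and best_move() has nothing to reason over until legal_moves() encodes the rules.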

  • http://art2science.org Roger Bohn

    Remember that people's decisions are robust, while computers' are not. So computers will make spectacular errors.
    I've been researching the evolution of flying from craft to science – very similar situation. An evolved algorithm beats an average human pilot on any standard task. (But it chokes on things it was not programmed for. Hence pilots will be around for a long time.)

  • ssickels

    A couple of thoughts, robotchampion:
    - I've always liked the great quote from, I believe, Stuart Dreyfus (from long ago): “Current claims and hopes for progress in models for making computers intelligent are like the belief that someone climbing a tree is making progress toward reaching the moon.”
    - Actually, you can have translation without understanding. That's how Google translation works (http://research.google.com/about.html): “We employ statistical techniques to learn translation models from very large quantities of parallel and monolingual text relying heavily on our large computing clusters and corresponding systems infrastructure. Our learning process is largely language-independent which allows us to build machine translation systems for many languages (assuming the availability of training data) very efficiently.” I think this is at the core of Andy’s point.
    So we’re shifting from the early AI assumption that humans are algorithmic (and the failed attempts to capture those presumed algorithms in “good old fashioned AI,” per Dreyfus’s point) to embracing the power of statistics (which humans are notoriously bad at) with our computers. The results (as in Google’s ability to find what we’re looking for, and even to do decent translations) are astounding!
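
    A toy version of that statistics-without-understanding idea, using a hypothetical six-pair “parallel corpus” (real systems use billions of words plus EM-style alignment models; this just counts co-occurrences):

        from collections import Counter, defaultdict

        # Hypothetical aligned sentence pairs (English, Spanish).
        corpus = [
            ("the house", "la casa"),
            ("a house", "una casa"),
            ("the cat", "el gato"),
            ("a cat", "un gato"),
            ("the white house", "la casa blanca"),
            ("the white cat", "el gato blanco"),
        ]

        # Pure statistics: count which foreign words appear alongside each
        # English word. No grammar, no dictionary, no "understanding."
        cooccurrence = defaultdict(Counter)
        for english, foreign in corpus:
            for e in english.split():
                for f in foreign.split():
                    cooccurrence[e][f] += 1

        def translate_word(word):
            """Guess: the foreign word most often seen alongside this one."""
            if word not in cooccurrence:
                return word  # unseen word: pass it through
            return cooccurrence[word].most_common(1)[0][0]

        print(translate_word("house"), translate_word("cat"))  # casa gato

    No grammar and no semantics anywhere, yet it maps “house” to “casa” and “cat” to “gato.” Scaled up by many orders of magnitude, that is the flavor of the approach Google describes above.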

  • http://acleanlife.org robotchampion

    Yaya, but I would debate the claim that we are notoriously poor at stats. I mean, we are born with mostly empty brains, and we learn by interpreting massive amounts of data signals.

    You could even say that we have simple rules at birth (hungry means cry, milk is good) and progress from those rules to more complex rules or maybe even algorithms (hunger means suck on thumb, milk and other foods are good).

    I think we are often too quick to assume that AI should function like a mature adult brain. Wouldn't it be more appropriate to see if AI can mimic the most basic brain functions of babies?

  • http://twitter.com/natedlee Nate Lee

    AGREED! I'd also suggest one can't have intuition without the capacity for independent and “organic” growth – i.e., not manufactured.

  • http://twitter.com/deb_lavoy deb louison lavoy

    Hi Andrew – I was at Palantir and we spoke briefly afterward. Great talk. I've been thinking about these issues since. I think there are several concepts that are relevant here, though these are probably more free-form than well argued:

    1. Free will. An actuarial approach will tell you the most likely outcome, but it offers no opportunity to recognize an individual's limitless ability to achieve, given enough determination. The recognition of the individual over society, mercy, and ultimately self-determination is horribly undermined by the actuarial approach to human behavior.

    2. The Why – humans have a great capacity for wonder that leads to new ways of thinking and insight. We ask why. I do not believe that cybernetic actuaries will be capable of asking, let alone answering this question.

    3. Diversity of approach and experience. Computers (at least as we currently imagine them) will always be limited by the fundamental paradigm that they are created with. The incredible diversity of the human experience – the one that forms our world views and powers the collective intelligence we're so happy with these days – is unlikely to be represented in artificial intelligence. Diversity is the universal imperative for both survival and advancement (I ran across Darwin's quote yesterday: “Variation proposes and selection disposes.”).

    4. Empathy and the human experience. These are things that should be highly valued. I'm just noting them rather than making a case here. Philosophy, empathy, art, and all that – as you have already exempted from the computational domain.

    That said – the integration of human expertise, powerful computers, and excellent process (the winning combination in the Kasparov article) is something that may enable and empower human creativity, strategy, innovation, and problem solving, while freeing us from the tedium of the grunt work involved in knowledge work. This is the angle to blow out and explore. This is the angle I think will help transform our ability to tackle both hard and wicked problems.

    Apologies for length/volume.

  • http://twitter.com/deb_lavoy deb louison lavoy

    This is a very, very good point.

  • http://www.makemoneyonlineyo.com faiz

    You are right about the point of smart machines; that time is really far away. Actually, I don't think machines will become better thinkers than humans. Movies make it seem as if machines will very soon become far superior to humans, but in my view that's just fantasy.


