I gave a talk at Palantir Technologies’ way-cool DC offices on Monday as part of their “Palantir Night Live” series. My topic was “In the Age of Smart Machines, What are We Good For?” I expanded on a couple of recent posts (here, here, and here) and talked about what computers are good at, what people are good at, and how the two types of ability can be combined to yield good results.
I’d post the slides from the talk, but I fear they really wouldn’t make much sense without the words that went along with them. Palantir will make a video of the talk available, and I’ll let you know when it’s up.
I want to highlight here what was for me the most interesting moment of the talk. The audience was a group of 70+ folks from the DC geek community. This community runs deep because of the government’s vast spending on technology, and there was a lot of intellectual horsepower in the room. Most attendees seemed to be in their 20s, and I got the impression that a lot of them worked directly with code. This was, in short, a highly educated and technologically sophisticated group.
One slide in my talk was titled “People in the Age of Fast, Dumb Computers: Good News and Bad News.” I explained that the good news and bad news here were being evaluated from the perspective of a person who wanted to be considered a valuable contributor and keep his job.
The next slide just said “Human Intuition vs. Algorithmic Predictions.” I asked the audience whether they thought that this was the good news portion of the talk, or the bad news one. I asked them, in other words, if they thought that human intuition was holding up well against fast, dumb computers applying algorithms to data.
After counting hands a couple times, the audience and I agreed that about 60% of the people thought that this was going to be the good news portion of the talk — that I was going to say next how robust and valuable human intuition is proving to be. I then had to tell them that they were wrong.
The blog posts referenced above (to repeat, they’re here, here, and here) make the case that in many if not most cases our intuition is biased and faulty. If we’re interested in making good decisions and predictions (rather than protecting the feelings and reputations of the alleged intuitive experts), we should in such cases replace intuition with data and algorithms. I appreciate that this stance is cold-hearted and removes ‘the human element.’ I also believe, based on the research I’ve seen, that the human element is the weak link in many decision-making and prediction tasks. When a stronger link becomes available, it should be used instead.
I don’t want to rehash here all the arguments in support of my stance. Take a look at the posts if you’re interested, and let us know what you think of them. With this post I just want to underscore that decreasing our reliance on human intuition is going to be a long, slow haul.
The clear majority of one of the most data- and algorithm-friendly groups I’ve seen in a while thought I was going to give them good news about how well human intuition stacks up against the digital tools they themselves were working on. This optimism persists among highly educated people despite an extensive body of research on biases and heuristics in human decision making.
We are very fond of ourselves, as well we should be. Like Hamlet, I marvel at “What a piece of work is a man, how noble in reason, how infinite in faculties… in apprehension how like a god!” The bulk of my talk, in fact, was devoted to the paltry progress of Artificial Intelligence research in building anything remotely close to a computerized mind. I think The Singularity, the Age of the Smart Machine, or whatever else you want to call it is a long, long way off. What our brains do is extraordinarily weird, difficult, powerful, and unique. So while I really like the remade Battlestar Galactica, I don’t think a war against intelligent machines is anything we need to start worrying about yet.
But I do think that we need to start seriously questioning our assumption that our brains are good at every task to which they’re currently applied. When better alternatives come along we need to recognize them and act accordingly, even if these alternatives make us feel a bit less exalted.
What do you think? Am I being too hard on human decision making and self-regard? What are the good arguments, if any, for continuing to let people make decisions and predictions in areas where algorithms have been shown to be superior? Leave a comment, please, and let us know.