The Diminishment of Don Draper

I am a huge fan of the TV series “Mad Men,” which has aired for two seasons on AMC. The show revolves around the employees of Sterling Cooper, a fictional Madison Avenue ad agency in the early 1960s. It’s written and filmed with the intelligence and attention to detail that we’ve come to expect from the best television since “The Sopranos” showed us just how good the small screen can be. The third season of “Mad Men” will start in August; my withdrawal symptoms are becoming acute, but I think I’ll be able to make it that long.

The show’s main character is Don Draper, who is in many ways not a nice person. He’s selfish, deceitful, unpleasant to coworkers, and serially unfaithful to his wife. He is, however, extraordinarily good at his job, which is to think up compelling ad campaigns for the agency’s clients (he also looks damn good in a suit). Many episodes center on Draper’s efforts to come up with the slogan that will differentiate a company’s offerings and cause them to fly off the shelves. He has a clear idea of his own skills; as he said during a boardroom battle, “I sell products, not advertising.”

And does he?  Well, the show is ambiguous on this point. He’s clearly creative, insightful about how consumers and markets work, and ridiculously effective in pitch meetings. The show’s first season, for example, opens with him figuring out a whole new way to advertise cigarettes as health claims were being disallowed (“Lucky Strike: It’s toasted.”) and closes with a meeting in which he convinces Kodak to stop referring to its new slide projector as a wheel (“It’s not called a wheel. It’s called a carousel.”).

But “Mad Men” spends very little time on whether these ad campaigns actually worked — whether they led to greater sales. I’m not sure if this omission is deliberate, but it is pretty typical, especially for the era. It’s historically been quite difficult to assess the effectiveness of a given campaign, or even of total spending on the Don Drapers and Sterling Coopers of the world. As the pioneering 19th-century retailer John Wanamaker said, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”

When there’s this much uncertainty, it’s a common strategy to place all one’s trust (and dollars) in a business oracle like Draper. But is this still the right strategy when times change and it becomes possible to see through the fog of business more clearly? Or is it at least possible to combine insights from business oracles with other sources and methods, and thereby make better business decisions?

I got to thinking about these questions during the conferences I attended at MIT last week, which were all about IT’s impact on the business world. The presentations I heard and the data I saw indicated to me that the era of Don Draper — of wholehearted and unquestioning trust in business oracles — might well be coming to an end.

To explain why this is, let me first present a strawman of the oracle-based mode of making important business decisions, then describe some alternatives, or at least complements, to it. It won’t surprise most readers to see that these alternatives and complements have a strong information technology component.

Business decisions that spring from the work of oracles like Draper have a few common attributes. They tend to be:

  • Opaque. Draper couldn’t explain his creative process if he tried. He just ‘knows’ what will work and what won’t. Watching him, I was reminded of Cayce Pollard, the heroine of William Gibson’s novel Pattern Recognition. Pollard is a savant about brands, and one of her services is an expensive but brutally simple evaluation of a proposed logo. She looks at it, once, and says yes or no. Her clients are not allowed to ask any follow-up questions because she wouldn’t have answers for them. She can’t explain how she knows whether or not it’s good; she just does.
  • Not amendable. Decisions by Draper, Pollard, and other oracles tend to be take-it-or-leave-it propositions, not subject to refinement.
  • Not disconfirmable.  It’s very hard to know if Draper’s decisions are good ones. How would Kodak know if it’s really a better idea to call its slide projector a carousel rather than a wheel?
  • Not revisited. Because they’re not amendable and not disconfirmable, there’s little point in going back and reviewing an oracle’s decisions.
  • Universal. Draper gives a single right answer, one that leads to mass media campaigns. There is one market out there, and he knows the single best way to reach it.

In addition, old-school business oracles like Don Draper share a few characteristics themselves. They are:

  • Individualistic. Draper works largely alone. He has a small staff of writers and graphic artists, but they exist mainly to flesh out his ideas and carry out his orders.
  • Accepting of few inputs. Draper takes ideas from his staff, but not from anyone else. He brushes off input from account executive Pete Campbell (who is admittedly a tool and a back-stabber) and throws away a research report on smoking written by a Freudian psychologist (OK, that might not be such a bad idea…).
  • Charismatic. People at Sterling Cooper know that if they can just get current or prospective clients into a meeting with Draper, he likely will close the deal. He can project the ‘reality distortion field’ ascribed to real-world folk like Steve Jobs and Shai Agassi (wish I had it).
  • Intolerant of competition, second-guessing, and unofficial channels. Good ideas come from Don and Don alone. He tries to fire Campbell for pitching an idea to a client.
  • Credentialed. Draper is the head of creative for Sterling Cooper, so he must be good.  In the business world, common credentials include degrees (especially MBAs) from good schools, impressive job titles, time spent at leading companies, accumulated years of experience, and reputation among insiders. Credentials are externally visible signals of quality, of oracle-dom.

The above lists of characteristics are focused on a single fictional character in the advertising industry, but in my experience they’re fairly common across business oracles and their decisions in many real-world settings as well. When I reflect on how I’ve seen strategy, marketing, planning, and product design decisions made at large organizations, I see a lot of the stuff listed above.

To be sure, I also see business oracles gathering lots of data, commissioning studies, and sometimes even running experiments. But I often get the sense that the point of all this activity is to confirm the soundness of the oracle’s initial idea, rather than to test it (a state of affairs captured elegantly by this New Yorker cartoon). Several people at last week’s workshop on business experimentation observed that it takes months for many companies to set up even a simple experiment today, and opined that this is because of the great care taken to ensure the desired outcome. I found myself nodding in agreement as I read the following passage in last Sunday’s New York Times Magazine, written by Matthew Crawford, who dropped out of organizational life to open a motorcycle repair shop:

“[C]ertain perversities became apparent as I settled into the job. It sometimes required me to reason backward, from desired conclusion to suitable premise. The organization had taken certain positions, and there were some facts it was more fond of than others… Further, my boss seemed intent on retraining me according to [the] cognitive style… of the corporate world… This style demanded that I project an image of rationality but not indulge too much in actual reasoning.”

The main problem with relying on old-school oracles to make important business decisions, though, isn’t their backward reasoning or sometimes false rationality. It’s the fact that they might be wrong. When you follow the advice of a Don Draper you’re making a big bet on a black box —  you’re committing substantial resources (like the money and effort required for a national ad campaign) based on a decision process that you don’t understand very well and can’t understand better. You’re just trusting in the oracle’s wisdom, and if he turns out to be not that wise you’re in trouble.

Is there an alternative? Can businesses lessen their dependence on the world’s Don Drapers, even in nebulous, touchy-feely, hard-to-quantify areas like advertising and brand-building?

I think they can. To show how, I’ll stick with the topic of advertising and describe a few things I’ve seen recently that point to an alternative to relying so heavily on business oracles. I am by no means an expert on advertising, digital or otherwise; I’m concentrating on this discipline because it has historically been very rich in oracles like Draper, and because I’ve recently been exposed to some advertising and marketing technologies I find fascinating.

The first of these relates to the alchemy of coming up with a good idea. Kamal Malek and Noubar Afeyan, two MIT-trained engineers, realized that new products, packaging, brands, ad campaigns, etc. are actually combinations of a relatively small number of elements. The packaging for a new line of eco-friendly copier paper, for example, is a combination of colors, logos, other design elements, descriptive words, and a few other important features. A branding oracle would come up with one combination of these, or direct his team to put a few options together so he could pick the ‘best.’ A more scientifically inclined oracle might put a few options in front of a few focus groups to get some consumer input. But all of these approaches ignore the vast majority of possible combinations of attributes; they don’t ever consider most of the ways that colors, logos, words, and so on can be combined.
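
To make the combinatorics concrete, here’s a toy sketch; the attribute names and counts are hypothetical, not Staples’ actual options, but they show how quickly the design space outgrows anything a focus group could ever review:

```python
from itertools import product

# Hypothetical packaging attributes -- not the real Staples options.
attributes = {
    "color":    ["green", "red", "blue", "gold", "white"],
    "logo":     ["leaf", "globe", "arrows", "none"],
    "tagline":  ["30% recycled", "Earth-friendly", "Prints like new", "Less waste"],
    "layout":   ["band-top", "band-bottom", "corner-badge"],
    "typeface": ["serif", "sans", "rounded"],
}

# Every full design is one combination of attribute levels.
designs = list(product(*attributes.values()))
print(f"Possible designs: {len(designs)}")  # 5 * 4 * 4 * 3 * 3 = 720

# A focus group that looks at, say, a dozen mock-ups sees under 2% of them.
print(f"Share seen by a 12-concept focus group: {12 / len(designs):.1%}")
```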

Malek and Afeyan started a company called Affinnova because they realized that in the era of the Web a very different approach was possible. Affinnova uses some pretty high-powered math (including genetic algorithms and conjoint analysis) to figure out which combination of attributes appears ‘best’ to target customers. Staples actually did want to figure out packaging for its eco-friendly papers, so according to a BusinessWeek article:

Affinnova set up a panel of 750 consumers across the country who, over the course of a week, participated in a 20-minute study of Staples’ paper line. Each was shown a screen of three possible packaging designs and asked to select their favorite. The software analyzed their choices in real time, and presented three new designs. “In total, we put 22,000 choices in front of consumers for the Staples test,” says Steve Lamoureux, Affinnova chief marketing officer. By looking at selections over multiple generations and across the whole panel, the software identified preference patterns—a tendency toward a certain color or font or wording—and ultimately identified the top concepts.

As a result of the Affinnova study, Staples made several changes. For instance, the company ditched the special green packaging of its recycled papers, instead incorporating its eco-offerings into its regular line, which is packaged in red (basic), blue (midrange), or gold (premium). A green band across the top of the new packaging indicates the percentage of recycled content, and a large triangular recycling icon just below the paper type reiterates its environmental credentials.

The only thing opaque about this process is the details of the algorithms used, which very few of us would understand anyway. It’s based on constant amendment, competition, and revisiting instead of being a one-shot, take-it-or-leave-it proposition. It’s also not universal; Affinnova’s software can identify whether the target population breaks down into discrete segments, each with its own preferences, or whether there really is only one best answer. This process also depends on tons of inputs instead of only a few, and is blind to the charisma and credentials of any participant. Affinnova presented case studies of its effectiveness, but I’m not aware of any systematic research investigating whether its methods do in fact yield better results in the marketplace than oracles do.
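
Affinnova’s actual algorithms aren’t public, and I’m not reproducing them here, but the choice-driven loop the BusinessWeek piece describes (show a screen of designs, record the picks, breed the next generation from the winners) is easy to sketch as a basic genetic algorithm. Everything below, from the attribute counts to the stand-in ‘appeal’ function that simulates panelists, is invented for illustration:

```python
import random

# Each design is a tuple of attribute levels; the counts are hypothetical.
LEVELS = [5, 4, 4, 3, 3]  # e.g. colors, logos, taglines, layouts, typefaces

# A made-up stand-in for real consumer preferences: panelists favor designs
# that share more attribute levels with a hidden "ideal" design.
IDEAL = tuple(random.randrange(n) for n in LEVELS)

def appeal(design):
    return sum(d == i for d, i in zip(design, IDEAL))

def random_design():
    return tuple(random.randrange(n) for n in LEVELS)

def choose(screen):
    """Simulate one panelist picking a favorite from a screen of three designs."""
    return max(screen, key=appeal)

def breed(parent_a, parent_b, mutation_rate=0.1):
    """Cross over two winning designs, with occasional random mutation."""
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    for k, n in enumerate(LEVELS):
        if random.random() < mutation_rate:
            child[k] = random.randrange(n)
    return tuple(child)

population = [random_design() for _ in range(30)]
for generation in range(20):
    # Show screens of three designs, keep the picks, and breed the winners.
    winners = [choose(random.sample(population, 3)) for _ in range(30)]
    population = [breed(*random.sample(winners, 2)) for _ in range(30)]

best = max(population, key=appeal)
print("Top concept:", best, "matches the ideal on", appeal(best), "of", len(LEVELS), "attributes")
```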

I did learn, though, about research that both presents an alternative to one-size-fits-all marketing and tests its effectiveness. It’s well established that people have differing cognitive styles; some are visual while others are verbal, some are analytical while others are holistic, etc. On the Web it’s not difficult at all to serve up different versions of the same ad tailored to each style, but it is hard to quickly figure out what someone’s cognitive style is. People don’t just show up and announce it when they visit a site, and they’re likely not willing to take a short quiz to find out.

So MIT’s John Hauser, Glen Urban, Guilherme Liberali, and Michael Braun fell back on some old math to help them serve up appropriately customized ads. The Gittins index was formulated by the statistician J. C. Gittins in 1979 in response to a nasty problem that had been around since at least World War II: how can you get the most money over time out of a slot machine with two arms, each of which has unknown payout odds? The Gittins index helps with this (do not ask me how), and can also be used to arrive at a quick and good guess about cognitive styles based on a person’s initial clicks around a site (think of each click as a pull on one arm of a multi-armed slot machine).
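
Computing the Gittins index itself is laborious, so here is a sketch of the two-armed problem using Thompson sampling, a related Bayesian bandit heuristic that also balances trying an arm against exploiting what you’ve already learned. The payout odds and pull counts are invented:

```python
import random

# Two slot-machine arms with unknown payout odds (hidden from the player).
TRUE_ODDS = [0.45, 0.55]  # invented for the simulation

# Beta(1, 1) priors over each arm's payout probability, stored as [wins, losses].
beliefs = [[1, 1], [1, 1]]

total = 0
for pull in range(10_000):
    # Thompson sampling: draw a plausible payout rate for each arm from its
    # posterior and pull the arm whose draw is highest.
    draws = [random.betavariate(wins, losses) for wins, losses in beliefs]
    arm = draws.index(max(draws))

    payout = random.random() < TRUE_ODDS[arm]
    total += payout
    beliefs[arm][0 if payout else 1] += 1

# Pull counts start at the prior's 2 pseudo-observations, hence the "- 2".
print(f"Winnings: {total}, pulls of the better arm: {beliefs[1][0] + beliefs[1][1] - 2}")
```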

Hauser, Urban, and their colleagues souped up the Gittins index for the Web, and used it in combination with a bunch of other hardcore techniques to serve up ads with the same content but differentiated look and feel to prospective customers of BT’s broadband Internet service at an experimental site. In other words, they ‘morphed’ the website as they learned more about the visitor’s cognitive style. They found that sales increased by almost 20% during the trial period, which would translate into approximately $80 million in additional sales if implemented at full scale.
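
The team’s actual machinery is far richer than anything I could reproduce, but the core ‘morphing’ idea is simple to sketch: keep a running guess about the visitor’s cognitive style, update it after each click, and serve the look and feel that best matches the current guess. The styles, click probabilities, and morphs below are all invented:

```python
# Two hypothetical cognitive styles and the (invented) probability that each
# style clicks on a given kind of page element.
CLICK_PROBS = {
    "visual": {"chart": 0.7, "photo": 0.8, "spec_table": 0.2, "faq_text": 0.3},
    "verbal": {"chart": 0.3, "photo": 0.2, "spec_table": 0.7, "faq_text": 0.8},
}
MORPHS = {"visual": "image-heavy layout", "verbal": "text-heavy layout"}

def update_posterior(posterior, clicked_element):
    """Bayes update on the visitor's style after one observed click."""
    unnormalized = {
        style: p * CLICK_PROBS[style][clicked_element]
        for style, p in posterior.items()
    }
    z = sum(unnormalized.values())
    return {style: p / z for style, p in unnormalized.items()}

# Start with no opinion, then watch the first few clicks.
posterior = {"visual": 0.5, "verbal": 0.5}
for click in ["photo", "chart", "photo"]:  # a hypothetical visitor's clicks
    posterior = update_posterior(posterior, click)

best_guess = max(posterior, key=posterior.get)
print(f"P(visual) = {posterior['visual']:.2f} -> serve the {MORPHS[best_guess]}")
```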

As I said, I’m not an advertising expert, but isn’t a 20% sales jump remarkable? Here again, we see a departure from one-shot and one-size-fits-all approaches to important decisions, and a move toward experimentation, iteration, and contingent answers. We also see that the MIT team, in the tradition of all good scientists, reached back deep into relevant prior work in a few disciplines (cognitive psychology, statistics, operations research, etc.) instead of just relying on their personal storehouse of knowledge. They also substituted quantitative rigor for qualitative intuition whenever possible, and they tested all their hypotheses and conclusions instead of patting themselves on the back.

At the workshop on business experimentation I heard many examples of similar approaches to making important business decisions. Not all of them involved collecting data via the Web. Jim Manzi, for example, related how Applied Predictive Technologies uses all kinds of corporate data to design and evaluate its experiments; the example he gave concerned changing convenience store layouts. All the presentations, though, stressed decision-making processes that were very far from those employed at Sterling Cooper. And while some of the presenters were charismatic and all had excellent credentials, they sounded not at all like Don Draper.
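
I don’t know the details of Applied Predictive Technologies’ methods, but the basic shape of that kind of evaluation is straightforward: compare how much the test stores improved against how much a matched control group improved over the same period. The sales figures below are made up:

```python
# Weekly sales (in $000s) before and after a layout change -- invented numbers.
test_stores    = {"before": [52, 48, 61, 57, 49], "after": [58, 55, 66, 60, 54]}
control_stores = {"before": [50, 47, 63, 55, 51], "after": [51, 48, 64, 55, 52]}

def mean(xs):
    return sum(xs) / len(xs)

# Difference-in-differences: how much more did test stores improve than controls?
test_lift    = mean(test_stores["after"])    - mean(test_stores["before"])
control_lift = mean(control_stores["after"]) - mean(control_stores["before"])
print(f"Estimated effect of the new layout: {test_lift - control_lift:+.1f} ($000s/week)")
```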

Other examples I’ve come across recently show me that even if a company wants to rely heavily on old-school oracles to make important decisions, it’s becoming much easier to assess over time whether or not those were good decisions. In the realm of digital marketing and advertising, a rich toolkit now exists to measure the effectiveness of different campaigns, channels, adword buys, and so on. HubSpot and ClickFox are two companies that specialize in this work.
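
I won’t pretend to reproduce what those tools do, but the core measurement isn’t exotic: tie conversions back to the channel that produced them and compare conversion rates and cost per acquisition. A toy sketch with invented numbers:

```python
# Invented campaign data: spend, visits, and conversions by channel.
channels = {
    "search ads":  {"spend": 12_000, "visits": 40_000, "conversions": 900},
    "display ads": {"spend":  8_000, "visits": 55_000, "conversions": 300},
    "email":       {"spend":  1_500, "visits":  9_000, "conversions": 350},
}

for name, c in channels.items():
    conversion_rate = c["conversions"] / c["visits"]
    cost_per_acq    = c["spend"] / c["conversions"]
    print(f"{name:12s}  conv rate {conversion_rate:5.1%}   CPA ${cost_per_acq:,.2f}")
```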

With this toolkit in place, the key skill becomes not so much coming up with ideas that might work as walking away from ideas that aren’t working. Michael Sikorsky, founder of the crowdsourced business idea incubator Cambrian House, helped me realize how important this is. He described how Cambrian House set up a series of tests for every idea, then immediately stopped working on a given idea once it failed even if Sikorsky or other senior people were personally enamored of it. This kind of discipline is rare —  most of us hold on to our brainchildren far too tightly, and far too long.
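
I don’t know what Cambrian House’s tests actually looked like, but the discipline amounts to writing down pass/fail gates before you fall in love with the idea, and then honoring them. A toy sketch with invented metrics and thresholds:

```python
# Hypothetical go/kill gates for a new business idea -- thresholds are invented.
gates = [
    # (metric, minimum acceptable value, observed value)
    ("landing-page signup rate", 0.05, 0.062),
    ("paid-conversion rate",     0.02, 0.009),
    ("30-day retention",         0.40, 0.350),
]

for metric, threshold, observed in gates:
    if observed < threshold:
        print(f"KILL: {metric} came in at {observed:.1%}, below the {threshold:.1%} bar")
        break
    print(f"pass: {metric} = {observed:.1%}")
else:
    print("All gates passed -- keep investing in the idea")
```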

Mounds of data, clever algorithms, and the ability to run rapid-yet-rigorous experiments let us lessen our dependence on the Don Drapers of the business world, and treat their work not as the final and complete answer but rather as part of a broader, rigorous process. I used the word diminishment in the title of this post, not disappearance. I’m not saying that creative and insightful people are no longer valuable, or that the flash of inspiration can (or should) be automated. I’m just saying that such flashes should be subject to as much useful scrutiny as is feasible, and that technology is vastly increasing both the quantity and quality of the scrutiny available.

I’m also not saying that there are no true business oracles out there; the track records of Steve Jobs and Warren Buffett speak for themselves. But the number of people who think they’re oracles is much larger than the number who actually are. Mind you, I’m not calling the rest con men or charlatans, just folk who are overconfident in their predictive abilities. Overconfidence is a well-established cognitive bias, and it is reinforced by our tendency to remember what we got right and forget what we got wrong.

Malcolm Gladwell wrote a fascinating New Yorker article a while back on criminal profilers —  the guys who look around a murder scene then tell the police to look for (for example) a white male in his thirties who lives alone, enjoys outdoor activities, is personally neat, works in a low-level white collar job, and has difficulty talking to women. Gladwell calls this kind of prediction the “Hedunit” (from “He done it…”), but after looking carefully at profilers’ track records concludes that:

“[The profiler] did not really understand the mind of the [criminal]. He seems to have understood only that, if you make a great number of predictions, the ones that were wrong will soon be forgotten, and the ones that turn out to be true will make you famous. The Hedunit is not a triumph of forensic analysis. It’s a party trick.”

Making good business decisions is not as important as catching murderers, but they’re both too important to be addressed with party tricks. I’d like to see the conception of a business oracle undergo a substantial change. The old-school business oracle was someone like Don Draper who said to a company “I know what will work.” The new version, I hope, is someone who says to a company “I know how to figure out what’s working.”

What do you think —  am I being too hard on Don Draper-style business oracles?  Is my fondness for technology, analytical rigor, and experimentation blinding me to something important about making business decisions? Or do you share my belief that old-school business oracles need to be consulted with a bit less blind reverence these days?  And are you looking forward to the third season of “Mad Men” as much as I am?  Leave a comment, please, and let us know.