The Guesswork of Science

by Andrew McAfee on September 22, 2010

I’ve been arguing for a while now that businesses should be more scientific in their decision making — less dependent on intuition, experience, credentials, and charisma when making important calls and figuring out what to do, and more reliant on evidence, analysis, and logic. Some people hear this as an argument against people in general, and a recommendation that organizations get humans out of the loop wherever possible and make decisions by pressing a button and doing whatever the computer tells them to.

Nothing could be further from the truth. Science is a deeply human enterprise, even if it’s sometimes carried out by folk who lack advanced social skills. The scientific method is a process for coming up with ideas, then evaluating and refining them over time. Advances in equipment, from the microscope to the cyclotron to the computer, are a great help in the work of evaluation and refinement, but they’re pretty hopeless at coming up with ideas. That’s what the people — the scientists — are for, and that job is not going away.

So an argument for more scientific organizations is not an argument for less human ones. It’s simply a recommendation that we test our ideas and hypotheses about what’s going on and what will work, rather than just assuming that we’re right because we’re smart, or experienced, or the guru or the boss.

To support this argument, I’ve been looking around for great writing on what the scientific method is and the role of people within it, and it’s been surprisingly hard to find (if you’ve come across any such writing, please let us know about it). I should have just started by searching for “Feynman scientific method,” since very few people have been as articulate about science as Richard Feynman, the Nobel Prize-winning American physicist who’s held in awe by many of his colleagues (and by anyone who aspires to live a rich life).

In a 1964 lecture to Cornell students, Feynman nails it when talking about how physicists uncover a new law of nature. He says:

First we guess it. Don’t laugh, that’s really true.

Then we compute the consequences of the guess to see what – if this law that we guessed is right – we see what it would imply. And then we compare those computation results to nature. Or we say, compare to experiment or experience [by which he means ‘observation,’ not personal experience]. Compare it directly with observation to see if it works. If it disagrees with experiment, it’s wrong.

In that simple statement is the key to science. It doesn’t make a difference how beautiful your guess is, it doesn’t make a difference how smart you are, who made the guess, or what his name is… If it disagrees with experiment, it’s wrong. That’s all there is to it.

It sounds so simple, but we know how hard it is to do. We all love our guesses, have faith in them, and desperately want them to be right. It’s tempting to skip altogether the step of testing them, or to construct the test so that it’s most likely to confirm our hypothesis rather than overturn it. The scientific method is, in large part, the work of overcoming our egos and biases and subjecting our ideas to stern, fair tests. The data, algorithms, and computing horsepower available to the modern enterprise are a huge help with this, which is why I say that the time is now for more scientific organizations.
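
To make the loop concrete, here is a minimal sketch (in Python) of what guess-compute-compare can look like for a business decision. Everything in it is hypothetical: the page redesign, the visitor counts, the conversion numbers. It’s a sketch of the process, not a prescription.

    # The guess: "the redesigned checkout page converts better than the old one."
    # The experiment: a simple A/B test with made-up counts.
    from statistics import NormalDist

    old_visitors, old_conversions = 10_000, 480
    new_visitors, new_conversions = 10_000, 540

    # Compute the consequences of the guess: if there were really no difference,
    # how surprising would the observed gap be? (A two-proportion z-test.)
    p_old = old_conversions / old_visitors
    p_new = new_conversions / new_visitors
    p_pool = (old_conversions + new_conversions) / (old_visitors + new_visitors)
    se = (p_pool * (1 - p_pool) * (1 / old_visitors + 1 / new_visitors)) ** 0.5
    z = (p_new - p_old) / se
    p_value = 1 - NormalDist().cdf(z)  # one-sided: "new is better"

    print(f"old: {p_old:.1%}  new: {p_new:.1%}  z = {z:.2f}  p = {p_value:.4f}")

    # Compare with experiment: if the data don't back the guess, the guess is
    # wrong -- no matter how beautiful it is or how senior its author.
    print("guess survives this test" if p_value < 0.05 else "guess fails this test")

The arithmetic is the easy part; the discipline is in stating the guess up front, in a form the data could contradict, before anyone falls in love with it.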

Gary Loveman, who brought the scientific method to the gaming industry and is now the CEO of Harrah’s, wrote the foreword to the great book Competing on Analytics by Tom Davenport and Jeanne Harris. Loveman, too, nails it. He talks about how hard it is for an organization to be scientific about the boss’s ideas, yet how important it is.

The owners of any company, especially mine, deserve the very best ideas to be put into practice. While I hope Harrah’s Entertainment shareholders have some faith in me, they are far better served to have confidence in the capability of my team to gather and test the best ideas available within and outside Harrah’s Entertainment and use only those that lead to sustained superior performance and growth… It is not my job to have all the answers, but it is my job to ask lots of penetrating, disturbing, and occasionally almost offensive questions as part of the analytic process that leads to insight and refinement.

My journey is complicated by the fact that all organizations seek to please the leader, so there is constant pressure to give my otherwise lame and ill-considered views far more gravitas than they deserve… Hence, it is critical to create and constantly cultivate an environment that views ideas as separable from people, insists on the use of rigorous evidence to distinguish among ideas, and includes a healthy dose of people both sufficiently dedicated and skillful to do the heavy lifting.

How many enterprises are well on their way to achieving this vision? What have you observed? Is your organization trying to become more scientific? If so, how’s it going? Leave a comment, please, and let us know what your experiences have been.

  • http://www.ddmcd.com Dennis D. McDonald

    One of the more interesting books I’ve read recently is Len Fisher’s “Weighing the Soul: The Evolution of Scientific Beliefs.” The theme of the book is that scientists do sometimes believe things that seem opposed to reality or common sense, but that the way these beliefs are challenged and change over time often — but not always — leads to progress.

    Fisher catalogs ideas that turned out to be just nutty but which nonetheless stimulated good work by others. Does that make up for the nutty ideas? No, but many of the ideas he discusses simply could not be measured or evaluated at the time, due to an overall lack of instrumentation as well as a lack of fundamental scientific understanding.

    Regarding your point that businesses need to be more scientific and fact-driven, I couldn’t agree more. But as in Fisher’s book, if you lack the fundamental ability to frame and answer a question scientifically, you fall back on intuition, gut, experience – maybe even superstition. The problem is that many folks lack a basic understanding of the scientific method.

    Even if we overcame that problem, there remains the problem of how to measure inherently difficult events or impacts. Take one example – advertising. No one could argue that measuring the impact of advertising hasn’t become more “scientific” since the 19th Century. But if that’s the case, why can’t we do a better job of predicting the impact of advertising on sales?

    (If any one is interested, my review of Fisher’s book is here: http://www.ddmcd.com/soul.html .)

  • Dave Sims

    Interesting post, thanks Andy. It’s true that some parts of the organization resist putting greater reliance on data. I’ve heard that sales teams are particularly prone to avoiding the numbers and trusting their gut. IT’s generally a lot more open to it, of course.

    I’m actually working on an article right now about the ways that industry leaders use analytics to gain an edge. Some of the examples are very well known (Walmart’s use of RFID, Capital One’s analysis of its customers’ financial needs & capabilities, Amazon’s recommendation engine). Others, less so.

    One anecdote in the article tells how the sales team at a pharma company resisted using the results of new analysis, saying that their customer relationships told them what they needed to know. Management asked for a pilot: it found 20% of the sales force willing to invest six months in changing the way they sold, whom they invested time in, and what they offered. At the end of the pilot, results for the 20% were so much improved that the rest of the team wised up and got with the program.

    I, too, would be interested in hearing more stories about organizations that changed themselves to make better use of the data around them.

    Dave Sims, editor
    managementexchange.com

  • http://twitter.com/chadothompson Chad Thompson

    How many enterprises are well on their way to achieving this vision? What have you observed?

    In my observation, the hardest part about organizations becoming ‘scientific’ is that we often use data that is only marginally applicable to the task at hand. In order to see if our experiment succeeded or failed, we need to define our problems in measurable terms – or at a minimum see if the data that we do gather is consistent.

    For example, one of the measurements that is often used is the ‘online survey’ asking about customer satisfaction with a website or service – but the results of that survey often don’t tell the entire story. If a newspaper invites people to take a quick survey about changes to its website, respondents can give glowing reviews of a certain redesign, commenting system or other feature – but do the analytics support the responses given in the survey? If the site features are easier to use, are they being used more often? Are survey responses a representative sample? Could we be collecting better data? What kind of data should we be collecting? (A minimal sketch of one such consistency check follows below.)

    (In my experience, the planning of data collection, analysis and decision making is often completely left out of a planning process, or the data that is collected is strictly metrics that are useful for developers and systems administrators, not business development.)
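
    One way to operationalize that consistency check is to put stated satisfaction and observed behavior side by side for the same users and see whether they agree. A minimal sketch in Python, with entirely made-up numbers:

        # Hypothetical consistency check: do survey answers agree with behavior?
        from statistics import correlation  # Python 3.10+

        # Post-redesign satisfaction scores (1-5) and weekly uses of the
        # redesigned feature, for the same made-up panel of ten users.
        survey_scores = [5, 4, 5, 3, 4, 5, 2, 4, 3, 5]
        weekly_uses   = [1, 0, 2, 4, 1, 0, 3, 1, 2, 0]

        r = correlation(survey_scores, weekly_uses)
        print(f"survey vs. usage correlation: r = {r:.2f}")

        # A near-zero or negative r is a warning sign: people who say they
        # love the feature aren't actually using it, so at least one of the
        # two measures (or the sample behind it) is off.

    In this toy panel, r comes out strongly negative, exactly the kind of disagreement between stated and revealed preference that should trigger a second look at both data sources.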

  • http://jdmoyer.com J.D.

    There are examples of successful companies led by intuitive leaders, who respond not to market research but to anecdotal reports and customer complaints; J. Crew’s Mickey Drexler (profiled recently in the New Yorker) is a good example. As someone who runs an independent music label, I find the idea of making decisions (about what kinds of music to put out) based on scientific testing to be distasteful (even though I acknowledge it might make our label more commercially successful). If the bottom line is the only thing that matters, maybe scientific testing makes sense for all companies. But what about art? What about helping to expose or create a new trend that the public doesn’t yet have an appetite for (what people want but don’t yet know they want)?

    TV may provide the best examples … many ultimately successful shows “tested badly” (South Park, I Love Lucy, Batman).

  • http://www.TractionSoftware.com grlloyd

    Thanks for the Feynman references! He also set a very high bar for simplified explanations of complex topics: truth, clarity, respect for your audience, no backtracking. I wish all discussions of complex and controversial topics could be held to the same standard – political, social and technical. The “no backtracking” criterion is the killer:

    “I will tell you about quantum electrodynamics without math, so my explanation will be necessarily incomplete. But I promise that my simplified explanation will not tell you anything that I would need to retract in a more complete explanation.” ~ Richard Feynman, opening a Cornell University talk on Quantum Electrodynamics, to a general audience.

    Feynman compares science and chess: http://is.gd/4z5Fs
    Such an admirable person.

  • http://acleanlife.org robotchampion

    Two things:

    1.) This reminds me a lot of the story in Moneyball, where the author explores how baseball teams are/were run more by scouting and intuition than by stats and context. Further, even though many teams have found ultimate success through it in the World Series (I believe the 2002 Angels were one) and in serious playoff runs – many teams are still not adopting the “science based” approach. Old habits die hard.

    2.) Don’t you think the term “scientific method” is out of date? — Isn’t that like saying I use the artistic method or the computer method? I would prefer it if we all incorporated a deeper understanding of science into our lexicon. Start using terms like empirically based, proven experiment, theoretical research, etc.

  • http://www.elimueller.com Eli Mueller

    Good business decisions are becoming increasingly reliant on logical intuition. Decisions are typically more complex because of our reliance on technology. This is the world we live in. Consequently, more and more business decisions warrant close consideration of analytics and proven data. Decision-makers who are reluctant (or under-prepared) to incorporate more data in their decision-making processes will lose ground to those who embrace the data-demanding business environment we live in.

    I just finished reading the MIT article which outlines the preliminary findings of “The New Intelligent Enterprise,” a group that aims to explore how management can (and should) approach data-induced challenges. I am assuming you will have a large role in this initiative, and I look forward to further research and findings. Thanks for the post.

  • Gil Press

    Great post, Andy. I like the analogy with science (specifically, the way science advances) because I think there is too much emphasis today on collecting and “mining” data, looking for invisible patterns that emerge miraculously if you throw enough computing power and statistical models at the data. Instead, managers should clearly define their ideas (theories), specify what data would prove them wrong, and then actively search for this data. To quote myself, if I may (this was in response to Peter Drucker’s 1994 HBR article “The Theory of the Business”): “Science advances by eliminating the theories that do not explain reality as well as others do. Managers, too, must articulate their theories and how they can be refuted and then seek data that proves their theories wrong. That will prevent them from falling into the trap of discarding successful theories. In both business and science, theories help us reduce the uncertainty of the world we live in. Theories also help us make sense of data, and they provide us with lenses to view and interpret the world. Without a theory, we cannot understand the causes of business success or failure.”

  • Trish (HR Ringleader)

    I love this post because it really speaks to the way I like to learn as well as approach new ideas in the HR industry. Working in a hospital, I have the luxury of being surrounded by employees who use the scientific method in everything they do. This encourages our HR Team to take a similar approach. In addition, I think we can use it as a different lens with which to look at issues. By reading resources that you mention in addition to sites like http://www.scienceblogs.com, I can open my mind to find new ways to approach old business problems. Enjoyed your post!

  • http://twitter.com/zornwil Wilson Zorn

    Shows that became successful but tested badly largely tested so because the tests were (and remain) at best incomplete and at worst just plain flawed, or because the evaluations of those tests were incomplete or flawed. (Outside of television, probably the most infamous example is the testing of New Coke. Many have defended the testing and evaluation process, but as countless articles have pointed out, while some majority clearly liked New Coke better, the same tests showed that a large percentage (IIRC 20%) said they would reject outright any change in Coke’s taste, and these results were ignored in the evaluation process. Had they not been ignored, it would have been clear to Coca-Cola that it could not sustain a 20% (or whatever other significant percentage) outrage while trying to promote a change in taste.)

    You raise a great point, which I hope I’m not misdirecting by framing it in commercial, business terms (which, to be fair, Mr. McAfee’s post is about, as opposed to art for art’s sake, which your business necessarily has to balance somehow): how does one deal with testing for sentiment, taste, and, even harder, taste-making/market-changing? As you have a for-profit business (I presume), you might not care about being “more” profitable merely for the sake of it (and I wish more people would think that way), but you do care about at least sustenance. To sustain your business, you have to sign bands and publish music that “enough” people (whatever that might mean given the reach and scale of your label) not only will like but will buy.

    As your post suggests, merely putting a focus group in a room and having them listen, for the first and only time, to some new music won’t reveal how likely it is that acceptance will grow over time. But what about your current loyal followers? Shouldn’t you be able to get some idea by leveraging them? Here I realize the other problem is ensuring the cat isn’t out of the bag too early. But still, wouldn’t you say that some objective process makes sense, to identify as closely as possible how near the idea “this band is good for our label” is to the truth? As you say, that can’t be perfect. But taking such an approach is better than not, it would seem, as Mr. McAfee suggests.

    Some of a scientific approach isn’t about quantitative analysis but rather a sound and objective qualitative method. Looking at historical examples of bands that can be qualitatively compared via an objective process should yield some notions of acceptance and of what is required to change market tastes; one might devise a typology of bands according to how they fit social and musicological trends and see how those match historical trends. For example, if we look at the Velvet Underground as fusing experimental music with retro (even in its time) basic rock and roll while being an urban, “street level” focused band, we could see how those elements appealed to a niche audience at the time and yet predated a mass popular media focus on candid exploration of street-level themes, and how Lou Reed’s later work correlated to that exploration (e.g., the acceptance of “Walk on the Wild Side” in mainstream AOR of the ’70s). Now, as you can already see, that’s fuzzy and difficult, but the point is simply to break variables into blocks that an organization can analyze consistently, even “just” qualitatively, and to force itself to address inconvenient truths rather than say “I really like Melt Banana so everyone else should.”

    I do come back, though, to the idea that if your label is already having some success AND has any consistency in the kinds of music you produce (I realize you may not, for deliberate reasons), then your current audience, especially its most engaged and loyal membership, should provide at least a sanity checkpoint in evaluating how far a prospective band has to go to reach an “acceptable” audience (or at least your degree of risk). The point is as objective a process as is possible. Not just in clearly artistic endeavors but even in less clearly artistic ones, such as the aesthetics of a new car design, we simply can’t read people’s minds for the foreseeable future, so we’ll always be guessing in some part at how to drive a market (or releasing something into the wild and hoping for the best).

  • http://www.TractionSoftware.com grlloyd

    “Some of a scientific approach isn’t about quantitative analysis but rather a sound and objective qualitative method.” A very good point. Sociology, anthropology and other fields of social science are highly relevant to business and use the scientific method that Feynman describes. They aren’t limited to quantitative analysis, but can use case studies, historical research and other methods to test theories and understand behavior.

  • http://twitter.com/terrigriffith terrigriffith

    I’m always looking for great ways to teach evidence-based management to MBA students. I would add Pfeffer & Sutton’s book Hard Facts (and the related HBR article) to the list of resources. They also provide info here: http://www.evidence-basedmanagement.com/

    Also good: video of Marissa Mayer talking at Stanford: Data is Apolitical http://ecorner.stanford.edu/authorMaterialInfo.html?mid=1529

  • http://twitter.com/deb_orton Deb Orton

    Andrew, your post reminds me of a debate between Malcolm Gladwell and Tom Davenport that I listened to a year ago at one of our executive conferences (I work for SAS, so you can infer which side of the debate I landed on).

    I’m wondering what advice you would give to the IT departments of companies who are trying to adopt analytical methods for decision-making. Is there a change afoot to create environments for business discovery and analysis?

  • Matt Moore

    There are important differences between business and science. In science, you can spend years researching a problem, exploring your hypotheses, carrying out further tests. You often select problems that you know are more tractable than others. If you get incomplete or inconsistent data you can simply not publish it.

    Businesspeople tend not to have these luxuries. Often you are faced with an ambiguous problem, conflicting data and a very short timeframe. It’s in these situations that you make a gut call. You can make such calls in a better or worse way, but you cannot escape them.

    I do think that organizational decision-making should be more evidence-based. I am a fan of science. I like analytics. But analytics enthusiasts worry me with their disdain for intuition and for what they consider to be “irrational” decision making. I think they need to spend a bit more time actually watching decisions being made in the field.

  • Jonah Lopin

    We do lots of experimentation & testing at my current startup. In fact, I’d give us an A on being scientific managers. We do have a couple of challenges, however, that keep us from using the scientific method as much as I’d like:

    1. You have to pay a premium on every decision for the extra time it takes to gather and analyze the right information. It’s hard to justify paying this premium when there are other things you can spend the money on.

    2. Experiments & data always reflect the present and the past, but startups (and many larger businesses) change so fast that you often have reason to believe that the results of your experiment “may be different in 6 months anyway… so… let’s just do it even though the data suggests it’s a bad idea!”

    3. The scientific method (at least the way I do it) requires some healthy skepticism. Sometimes skepticism and startups don’t mix because startups thrive on optimism.

    Any help on these would be much appreciated!

  • http://haydenduncan.wordpress.com/ Hayden

    Hi Andrew, this post has made me think about how modern web 2.0 technologies, such as Facebook and Twitter, have given businesses tools that help with decision making. There is much evidence of businesses using the feedback from these tools to inform their corporate decisions.

    I was wondering if you have personally come across anything that might add to this, or if you have any further thoughts.

  • http://www.coerverdrills.info Coerverdrills

    Moneyball came to my mind right away as well. It seems like so many organizations are intimidated by the “scary” science and prefer to do things the old way. Even if the old way isn’t working.

  • Dkabell

    Gary Loveman did nail it, but he turned down the opportunity for a timely investment in Macau! But I agree: ever since I first entered the business world, I have been astounded at how little quantitative technique was applied to typical business decisions, let alone the full scientific method. Gary Loveman did prove that these methods yield significant results.

  • http://www.infovark.com Dean Thrasher

    There’s quite a lot of work being done on applying scientific research methods to business startups. Two thinkers adored by the venture capital community are Steve Blank (http://steveblank.com/) and Eric Ries (http://www.startuplessonslearned.com/). Both talk at great length about strategies and tactics for forming hypotheses and testing them as rapidly as possible.

    Your comments on the human elements involved in the pursuit of science immediately made me think of Thomas Kuhn’s Structure of Scientific Revolutions (http://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions) which coined the now overused phrase “paradigm shift.” It’s still the canonical reference for explaining how science advances in cycles of revolution and evolution.
