I’m really looking forward to our annual faculty research symposium, which is taking place later this week. It’s a great opportunity to catch up on some of the most interesting work being done around the school, and to learn what my colleagues in other departments have been up to.
Last year I heard David Yoffie talk about the course he’s put together on strategy and technology (at HBS, course development is often considered a form of research). Yoffie has been studying technology-producing companies for some time now, has written extensively for both academics and managers, and has put together a rich and fascinating course.
As he was finishing his presentation, a question occurred to me. "Dave," I asked, "when, if ever, are technology companies going to get boring?" Luckily, he understood what I was really asking, which was something like "As you know, at HBS we don’t study industries for their own sake. We don’t have, for example, specialists in the trucking or reinsurance industries, even though those industries are large and important. If and when we study specific industries in depth, it’s usually because they illustrate phenomena of broader interest. Early in the school’s history, for example, faculty studied railroads to understand the challenges and opportunities inherent in building nationwide, distributed, and capital-intensive companies that were local monopolies. But none of us study railroads any more. Is the same thing going to be true of high tech? Are we going to absorb all it has to teach us, and then move on? If so, is this going to happen anytime soon?"
If I recall Yoffie’s answer correctly, he said that he didn’t think high tech was about to get boring any time soon. Yoffie’s work, and that of many others, shows that the dynamics of high-tech companies are fascinating at every level: project, product, business unit, company, sector, and industry. At each of these levels, research reveals high levels of both variation and flux: there are large spreads in performance, and lots of turbulence (the ‘winner’ in one period is not necessarily the winner in the next). Both of these kinds of variance attract academics like blood in the water draws sharks.
I mention this discussion not only because this year’s symposium is coming up, but also because I think it sheds light on an important question about IT’s impact. Two weeks ago, Erik Brynjolfsson and I published an article jointly in the Wall Street Journal and Sloan Management Review titled "Dog Eat Dog." In it, we presented some of the results of our research, conducted in collaboration with Feng Zhu and Michael Sorell, showing that industries that spend a lot on IT have experienced more intense and dynamic competition since the mid-1990s: more than was the case prior to that time, and more than was the case for industries that don’t spend as much on IT (an earlier blog post accompanying the article’s publication is here).
One common response to the article and the underlying research has been the argument / assumption / statement / hypothesis that these changed dynamics are real and linked to IT, but that they’re temporary. According to this argument, the new information technologies that appeared in the mid-1990s — the Web, commercial enterprise IT, XML, and others — were major innovations, and ones that will not be absorbed quickly or evenly by companies. But they will eventually be absorbed, and once this occurs competition will revert to its ‘pre-IT’ state. As G. Oliver Young commented on my original post:
IT doesn’t matter in the long-run. In the short-run, as firms adopt IT at different rates and rationalize those purchases with different levels of success there will be an amplification of turbulence. In the long-run, however, these forces will shake out as all firms reach the same plateau, rationalizing much of the same technology for the same business benefit. This is not to say that IT doesn’t matter at all — in the short-run it is tremendously important and can change the competitive landscape quite drastically — but the major game-changing shifts do not occur every day, they occur at most once every 10-12 years.
My concern is that your analysis is, due to our place in history, examining the short-term impact of a major shift — where we would expect the competitive landscape to change.
This comment echoes some of the points Nick Carr made in his May 2003 HBR article "IT Doesn’t Matter:"
The trap that executives often fall into… is assuming that opportunities for advantage will be available indefinitely. In actuality, the window for gaining advantage from an infrastructural technology is open only briefly… the opportunities for gaining IT-based advantage are already dwindling… It may well be that, in terms of business strategy at least, the future [of reduced IT impact on competition] has already arrived.
I certainly agree that the IT innovations of the mid-1990s were major ones, and quite difficult for many companies to internalize. Several of my case studies, in fact, have been about how hard it is to adopt some types of IT. But my colleagues and I think that the phenomenon of slow and uneven adoption is actually not the most important phenomenon relating to IT and competition. The most important competitive impact of IT, we believe, is to make technology-consuming industries a lot more like technology-producing ones.
The research cited above reveals that the tech-producing industries experience great variation and flux in part because rates of product innovation are so high, and replication of these innovations is fast and easy. As we wrote in "Dog Eat Dog:"
Competition [in high-tech industries] is constant, fierce and characterized by only temporary advantage, fueled by the ease with which software makers and other high-tech companies can copy and distribute new products and services. Instantaneous delivery through the Internet to hundreds of millions of consumers means a company with a slightly better online marketplace or search engine, for example, can quickly dominate the market, and just as easily be dethroned by a rival with a new approach.
We think that what’s going on in technology-consuming industries now is high rates of process innovation and fast and faithful process replication that are exactly analogous to the product innovation and replication we’ve been seeing for a while in technology-producing industries. And IT is the enabler of this new style of process innovation and replication. As we wrote:
Just as Google Inc. can take an improved search algorithm and make it immediately available via the Web, or Apple Inc. can quickly distribute an updated version of iTunes, companies in other industries can invent a new way of doing business, embed it in internal software, and deploy it as widely as necessary across locations, divisions and departments to gain an advantage.
I’ve seen, firsthand, IT used to embed and replicate process innovations in companies as diverse as Zara, CVS, Los Grobo, Alibris, MK Taxi, and SYSCO. These examples, and many others I’m familiar with, strongly suggest to me that IT’s impact on business processes is not limited to a few geographies or industries. Instead, it’s very broad-based. I regard IT spending as a proxy for the amount of IT-enabled process innovation and propagation taking place. It’s an imperfect proxy, of course, but results from our research indicate that it works fairly well.
The distinction between IT-as-innovation and IT-as-enabler-of-process-innovations is critically important. It’s the distinction between a single shock to a system, and a system in which shocks become much more common. In the former situation it might be reasonable to expect that many or most members of the system (i.e., most firms in an industry) would eventually figure out how to respond properly to the shock. For a shock consisting of the mid-1990s IT innovations of the Web and commercial enterprise software, a proper response would mean something like implementing ERP without crashing and burning, and designing an easy-to-use eCommerce site. After that, it would be back to business as usual.
However, if the deepest impact of the mid-1990s innovations in corporate IT is to increase the pace and scope of process innovation, then simply succeeding with a couple of system installations is clearly insufficient. It’s insufficient because at least some competitors won’t stop using IT to change existing processes or to invent and deploy new ones.
When this happens, and when the process innovations are competitively valuable, rivals will really have no choice except to keep up with the new, higher pace of innovation. Not doing so would be the equivalent of a hardware or software vendor electing to sit out a couple of generations of product innovation in its industry. That would pretty quickly prove a fatal decision.
Of course, only time will tell which of the two theories is correct — whether IT-consuming industries experienced a one-time shock in the mid-1990s, or whether they’re now experiencing more frequent and profound process shocks because of IT. For what it’s worth, our data show that the competitive dynamics of high-IT industries (those that consume a lot of IT) are getting more intense, not less so. In fact, just as "IT Doesn’t Matter" appeared in 2003, turbulence, concentration, and performance spread were all increasing in high-IT industries, and they continued to do so through 2005 (the most recent year for which we have data).
What do you think? What’s the correct way to think about IT’s effects going forward: diminished competitive significance because the technology innovations are being absorbed, or sustained significance because IT has moved industries into permanently higher rates of process innovation and replication? And what logic or evidence supports your argument? Leave a comment and let us know.