Are Enterprise Systems Part of the Problem or the Solution?

On August 8, the website of MIT Sloan Management Review (one of my favorite journals) published an article by Cynthia Rettig called "The Trouble With Enterprise Software." Rettig writes that

"Software promises evolutions, revolutions and even transformations in how companies do business. The triumphant vision many buy into is that enterprise software in large organizations is fully integrated and intelligently controls infinitely complex business processes while remaining flexible enough to adapt to changing business needs."

then spends most of the article advancing the argument that most companies are nowhere close to that vision, and that the new technologies they’ve been buying in recent years have been making things worse instead of better. The heart of her argument is that most companies’ IT infrastructures have become both too complex and too rigid to deliver on the vision, and that installing new systems serves primarily to increase complexity and hence worsen the problem.

Rettig makes some insightful points about the perils of software complexity, and I share her deep skepticism about Service-Oriented Architecture (SOA) as a solution to the challenge of high and increasing complexity. 

Rettig, however, also paints commercial enterprise systems such as those sold by SAP and Oracle as part of the problem. She writes that "The concept of a single monolithic system failed for many companies… In the end, ERP systems became just another subset of the legacy systems they were supposed to replace… Try as they might to measure the productivity gains of ERP implementations or IT in general, researchers have yet to arrive at any coherent or consistent conclusions."

It is certainly true that enterprise systems have failed in many companies, and it’s also true that, as she points out, many others have not been able to shut off legacy systems to the extent they expected after ERP went live. But it is simply not the case that researchers have been unable to draw any coherent conclusions about these technologies.

"ERP doesn’t help" is a testable hypothesis, and some colleagues of mine have tested it. NYU’s Sinan Aral, Georgia Tech’s D.J. Wu, and my friend and coauthor Erik Brynjolfsson at MIT recently published a wonderful paper, titled "Which Came First, IT or Productivity? Virtuous Cycle of Investment and Use in Enterprise Systems." I’ll quote from the paper’s abstract:

"While it is now well established that IT intensive firms are more productive, a critical question remains: Does IT cause productivity or are productive firms simply willing to spend more on IT? We address this question by examining the productivity and performance effects of enterprise systems investments in a uniquely detailed and comprehensive data set of 623 large, public U.S firms. The data represent all U.S. customers of a large vendor during 1998–2005 and include the vendor’s three main enterprise system suites: Enterprise Resource Planning (ERP), Supply Chain Management (SCM), and Customer Relationship Management (CRM). A particular benefit of our data is that they distinguish the purchase of enterprise systems from their installation and use. Since enterprise systems often take years to implement, firm performance at the time of purchase often differs markedly from performance after the systems go live. Specifically, in our ERP data, we find that purchase events are uncorrelated with performance while go-live events are positively correlated. This indicates that the use of ERP systems actually causes performance gains rather than strong performance driving the purchase of ERP. In contrast, for SCM and CRM, we find that performance is correlated with both purchase and go-live events. Because SCM and CRM are installed after ERP, these results imply that firms that experience performance gains from ERP go on to purchase SCM and CRM… These results provide an explanation of simultaneity in IT value research that fits with rational economic decision-making: Firms that successfully implement IT, react by investing in more IT…"

The paper is well worth downloading and reading in its entirety; it’s a great example of rigorously conducted research aimed at an important open question. It’s the same kind of research Erik and I, along with Michael Sorell and Feng Zhu, have been striving for as we test the hypothesis that "IT Doesn’t Matter" in competitive battles; our results strongly suggest that it does (see these papers and these blog posts).

The sober and understated language of this paper’s abstract contains a vital insight for people who question the overall value delivered to companies by their information technologies: if IT were not delivering value, rational decision makers would not keep investing in it. Rettig’s argument falls into a long line of pessimistic writing about the value of corporate IT. Much of this writing takes the implicit, and at times explicit, view that the executives who make technology decisions are dupes, perennially falling for a "triumphant vision" of software. These executives are presumably swayed by vendors’ sales pitches and the consistent message from IT’s ‘helper industries’ — an ecosystem of analysts, journalists, consultants, and (yes) academics — that everything’s different now, so investments must be made.

There is plenty of anecdotal evidence to support this pessimistic view, and US companies even seem to have collectively lost their senses for a while when presented with a particularly appealing IT-based vision (remember how B2B exchanges like Chemdex were going to change everything?). But to believe that corporate executives have been sold technological snake oil for the entire history of the IT industry is to believe that these executives are essentially idiots. This belief underlies a lot of funny Dilbert cartoons and episodes of The Office, but it is at odds with any realistic and logical view of corporate decision making.

Managers would not be spending more than 20% of their capital budgets each year on IT if they didn’t perceive substantial benefits, and their companies wouldn’t stay in business very long if this perception were hugely inaccurate. The only way I can see for the IT pessimists to be right is if the delusion about IT’s benefits is both persistent and virtually universal. And I don’t buy that, if for no other reason than that we have no other examples of such a delusion: a massive and longstanding economy-wide misallocation of resources within a capitalist system. (Some might point out that our continued reliance on fossil fuels is just such a misallocation, but the companies and individuals burning these fuels are not feeling the effects of global warming in the short term; they are not, in other words, bearing the full costs of their actions. This is clearly not the case with IT.)

I agree that it’s important not to naively accept anyone’s triumphant vision of corporate IT. But it’s also important not to make claims in the other direction that are too sweeping. Perhaps most fundamentally, it’s critical at some point to stop floating hypotheses about IT’s impact (or lack thereof) and to start testing them. We have enough history and enough data to permit more excellent studies like the one conducted by Aral, Brynjolfsson, and Wu. Designing and executing research that is both rigorous and relevant is difficult, at times dismayingly so, but as these three show, it’s well worth the effort.