Data From The Edge

by Andrew McAfee on February 25, 2011

Deloitte’s Center for the Edge has just published a report titled “Social Software for Business Performance.” Its accurate subtitle is “The missing link in social software: measurable business performance improvements,” and it takes some important first steps toward providing this link.

In the real world (as opposed to the lab), it’s incredibly difficult to accurately and confidently assess the performance improvement associated with any technology adoption. The gold standard for doing so is to take a bunch of independent but similar business units, randomly divide them into two groups, A and B, and start tracking the performance measures you’re interested in at each of them. After doing this for long enough, turn on the technology in all the units in group A and in none of the units in group B. Keep measuring performance in all of them over time.

If performance improves in group A but not group B, you have pretty high confidence that the change is due to the technology, and not to anything else. If it were due to something else, the reasoning goes, that something would have affected the units in group B as well, since they’re so similar.

This is the corporate equivalent of a randomized controlled trial in medicine. As I say, it’s the gold standard. It’s also hugely difficult to do. How many businesses do you know with lots of virtually identical units? And how many of those are willing to let a researcher come in, start tracking sensitive performance measures (with the aim of publication), then mess with half the units and continue to track performance?
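To make the logic concrete, here is a minimal sketch in Python of that two-group comparison. Everything in it is invented for illustration (the unit names, the performance numbers, the assumed 8-point effect); the point is that the estimate comes from comparing group A’s change to group B’s, not from either group’s change alone.

```python
import random
import statistics

# Illustrative only: fabricated performance scores for ten similar
# business units, tracked before and after the rollout date.
units = [f"unit_{i}" for i in range(10)]
random.seed(42)
random.shuffle(units)
group_a, group_b = units[:5], units[5:]  # A gets the technology, B does not

# before[u] and after[u] hold each unit's average performance measure
# in the pre- and post-rollout periods (hypothetical numbers, with an
# assumed 8-point boost for the units that got the technology).
before = {u: random.gauss(100, 5) for u in units}
after = {u: before[u] + random.gauss(0, 2) + (8 if u in group_a else 0)
         for u in units}

def mean_change(group):
    """Average before-to-after change across a group of units."""
    return statistics.mean(after[u] - before[u] for u in group)

# If the technology (and not some company-wide factor) drove the
# improvement, group A's change should exceed group B's.
effect = mean_change(group_a) - mean_change(group_b)
print(f"Change in A: {mean_change(group_a):+.1f}")
print(f"Change in B: {mean_change(group_b):+.1f}")
print(f"Estimated technology effect: {effect:+.1f}")
```

In econometrics this comparison of changes is essentially a difference-in-differences estimate.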

I don’t know many organizations that fit the bill, and I’ve been looking for them. When I propose this design, I typically hear at least one of the following responses:

“We don’t want our key performance measures published for the world to see (even if they’re disguised / normalized / whatever).”

“I’m not confident this technology is going to improve things, so I don’t want it in any of my units.”

“I am confident this technology’s going to improve things, so I want it in all my units, pronto.”

“What’s in it for me? I ‘get to help advance the state of management knowledge?’ Let’s see…  nope. That’s not one of my objectives this year…”

It’s pretty great news if you can convince the leaders of a company to let you access performance data at even a single unit before and after technology goes live. Your confidence that the change is due to the tech is significantly lower with this research design, but it’s a whole lot better than not having data at all.
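The weaker design is easy to express the same way: drop the control group and just compare the unit to itself before and after go-live. Again, the numbers below are invented for illustration.

```python
import statistics

# Hypothetical monthly measure for a single unit, e.g. hours spent
# on compliance work: six months before and six months after go-live.
before = [52, 49, 55, 51, 48, 53]
after = [44, 41, 38, 40, 37, 36]

change = statistics.mean(after) - statistics.mean(before)
print(f"Average change after go-live: {change:+.1f} hours/month")

# With no control group, nothing here rules out seasonality, staffing
# changes, or anything else that happened at the same time as the
# rollout. That is why confidence in this design is lower.
```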

The Deloitte team carried out research using this design with Alcoa Fastening Systems and OSIsoft. Alcoa put in place Traction Software’s wiki tool to reduce the amount of time spent on compliance activities. OSIsoft adopted Socialtext’s Enterprise 2.0 suite in order to shorten the amount of time needed to resolve customer issues. Here are graphs (reproduced here with permission) showing what happened to performance over time at a single unit at each company:

[Figure: Performance improvement following Enterprise 2.0 at Alcoa Fastening Systems (reproduced from “Social Software for Business Performance,” by the Deloitte Center for the Edge)]

[Figure: Performance improvement following Enterprise 2.0 at OSIsoft (reproduced from “Social Software for Business Performance,” by the Deloitte Center for the Edge)]

The report’s authors, Megan Miller, Aliza Marks, and Marcelus DeCoulode, are careful to mention other factors that could have influenced these performance changes, and to recalibrate OSIsoft’s raw numbers to take unresolved customer issues into account. So they’re doing what careful researchers are supposed to do: presenting the data while acknowledging its limitations.

They present the first long-term, before-and-after data on Enterprise 2.0 that I’m aware of, and I’m grateful for the team’s work. We need a great deal more of it, of course, and we need to get closer to gold standard research designs, but the Deloitte team has shown us something new. I’m glad that it confirms my intuition about the business value of Enterprise 2.0, and I look forward to more good quantitative research in this area.

What do you think? Are you persuaded by the data in this report? Do you know of other similar research you’d like to share? Leave a comment, please, and let us know.

  • Brian Tullis (http://twitter.com/briantullis)

    Hi Andy. You are right, all of those questions do exist, because I asked them myself when I worked with Deloitte to participate in this study. In the end, though, I believed enough in our results and the potential to change the world of work that I felt it was worth our time to participate. I know that this sounds incredibly hokey, but changing the world is easy as long as you start in your little corner of it and hopefully get others excited about what you are doing. At least I’d like to think that it’s more about that than about shameless self-promotion.

    I had the support of our business unit, IT, and communications leadership to do so. It is a real shame that we don’t see more of this because I know that the stories are out there; they just need to be told. The conclusions of the study are sound and the benefits of social business software are real. In our particular case, “correlation does not equal causality” as the saying goes, but our results would have been extremely difficult to achieve without an enabling tool like Traction TeamPage. And of course there are other tools out there that will do the trick, for sure. It’s not so much the particular tool that you use, but how you use it.
    Brian Tullis
    Alcoa Fastening Systems

  • Greg Lloyd (http://twitter.com/roundtrip)

    Andy — I join you in thanking the Deloitte team, Alcoa, Ensign Bickford, and OSIsoft for taking the time to develop this very well done case study. As you say, studies like this are the gold standard, and they are difficult and expensive to do. A while back I found and quoted a great story from Edward Tufte on why large-scale experimental studies based on overall business success are problematic except in hindsight; see the surgeon story in the Enterprise 2.0 Schism link below.

    I hope this study encourages Deloitte, business schools, and others to use it as a model for measuring Enterprise 2.0 value, reflected in improvements in business activities that can be assigned values that are the ground truth for that organization.

    Needless to say I’m also delighted that the measurable results back up what I’ve seen in daily work at Traction Software, and from the experience of many TeamPage customers and partners shared formally and informally on our server in the context of work.

    My deepest thanks to great TeamPage customers like Alcoa and Ensign Bickford.

    Enterprise 2.0 Schism
    http://traction.tractionsoftware.com/traction/permalink/Blog1163

  • jacobmorgan (http://www.jmorganmarketing.com)

    Hi Andrew,

    I’ve put together four in-depth case studies on enterprise collaboration so far. Three of them are here: http://www.chessmediagroup.com/resources_cat/case-studies/ and the fourth, on Penn State University, I’m writing up live at socialbusinessadviser.com. I have also looked at a few reports that you will find interesting, here: http://www.jmorganmarketing.com/the-impact-of-collaboration-on-enterprise-business-performance/ and here: http://www.jmorganmarketing.com/collaboration-impacts-business-performance/

  • Dibyendu De (http://twitter.com/SparkingInsight)

    The case is illustrative of the effect of an E2.0 implementation, but I am sure more effects will surface in due course.


