Deloitte’s Center for the Edge has just published a report titled “Social Software for Business Performance.” Its subtitle, “The missing link in social software: measurable business performance improvements,” is accurate, and the report takes some important first steps toward providing that link.
In the real world (as opposed to the lab), it’s incredibly difficult to accurately and confidently assess the performance improvement associated with any technology adoption. The gold standard for doing so is to take a bunch of independent but similar business units, randomly divide them into two groups, A and B, and start tracking the performance measures you’re interested in at each unit. After doing this for long enough, turn on the technology in all the units in group A but none in group B. Keep measuring performance in all of them over time.
If performance improves in group A but not group B, you have pretty high confidence that the change is due to the technology, and not to anything else. If it were due to something else, the reasoning goes, that something would have affected the units in group B as well, since they’re so similar.
This is the corporate equivalent of a randomized controlled trial in medicine. As I say, it’s the gold standard. It’s also hugely difficult to do. How many businesses do you know with lots of virtually identical units? And how many of those are willing to let a researcher come in, start tracking sensitive performance measures (with the aim of publication), then mess with half the units and continue to track performance?
I don’t know many such organizations, and I’ve been looking for them. When I propose this design, I typically hear at least one of the following responses:
“We don’t want our key performance measures published for the world to see (even if they’re disguised / normalized / whatever).”
“I’m not confident this technology is going to improve things, so I don’t want it in any of my units.”
“I am confident this technology’s going to improve things, so I want it in all my units, pronto.”
“What’s in it for me? I ‘get to help advance the state of management knowledge?’ Let’s see… nope. That’s not one of my objectives this year…”
It’s pretty great news if you can convince the leaders of a company to let you access performance data at even a single unit before and after technology goes live. Your confidence that the change is due to the tech is significantly lower with this research design, but it’s a whole lot better than not having data at all.
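To make the contrast between the two designs concrete, here’s a minimal sketch with made-up numbers (every value below is hypothetical, not from the report): if some economy-wide trend lifts performance at every unit, a single-unit before-and-after comparison folds that trend into its estimate, while differencing against a control group nets it out.

```python
import random

random.seed(0)

# Ten similar units, randomly split into treatment (A) and control (B),
# as in the two-group design described above.
units = list(range(10))
random.shuffle(units)
group_a, group_b = units[:5], units[5:]

# Simulated performance: a shared upward trend affects every unit,
# but only treated units in the "after" period get the technology's
# (assumed, illustrative) +3.0 boost.
def performance(unit, period, treated):
    base = 50.0 + unit                          # unit-specific baseline
    trend = 2.0 * period                        # trend hits both groups
    effect = 3.0 if (treated and period == 1) else 0.0
    return base + trend + effect

def mean(xs):
    return sum(xs) / len(xs)

pre_a  = mean([performance(u, 0, False) for u in group_a])
post_a = mean([performance(u, 1, True)  for u in group_a])
pre_b  = mean([performance(u, 0, False) for u in group_b])
post_b = mean([performance(u, 1, False) for u in group_b])

# Before-and-after on group A alone conflates the trend with the effect...
naive = post_a - pre_a                          # trend + effect = 5.0
# ...while subtracting group B's change isolates the effect.
did = (post_a - pre_a) - (post_b - pre_b)       # effect only = 3.0
print(naive, did)
```

The second estimate is the control group doing its job: anything that would have moved group B as well gets subtracted away, which is exactly the confidence you give up with a single-unit before-and-after design.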
The Deloitte team carried out research using this design with Alcoa Fastening Systems and OSIsoft. Alcoa put in place Traction Software’s wiki tool to reduce the amount of time spent on compliance activities. OSIsoft adopted Socialtext’s Enterprise 2.0 suite in order to shorten the amount of time needed to resolve customer issues. Here are graphs (reproduced with permission) showing what happened to performance over time at a single unit at each company:
The report’s authors Megan Miller, Aliza Marks, and Marcelus DeCoulode are careful to mention other factors that could have influenced these performance changes, and to recalibrate OSIsoft’s raw numbers to take into account unresolved customer issues. So they’re doing what careful researchers are supposed to do, which is present the data while acknowledging its limitations.
They present the first long-term, before-and-after data on Enterprise 2.0 that I’m aware of, and I’m grateful for the team’s work. We need a great deal more of it, of course, and we need to get closer to gold standard research designs, but the Deloitte team has shown us something new. I’m glad that it confirms my intuition about the business value of Enterprise 2.0, and I look forward to more good quantitative research in this area.
What do you think? Are you persuaded by the data in this report? Do you know of other similar research you’d like to share? Leave a comment, please, and let us know.