Should Knowledge Workers have E2.0 Ratings, Part 3

My previous two posts on measuring knowledge workers’ participation in Enterprise 2.0 generated a good bit of discussion. Many of the comments I received relate to the eternal debate over optimal incentive design, and whether it’s desirable (or even possible) to measure and reward effort vs. activities vs. outcomes. Rather than trying to summarize this debate (which I’d do poorly), let me instead try to make it specific to the topic at hand: whether it would be a good idea to measure, using some kind of multidimensional scale, the contributions of knowledge workers to emergent social software platforms (ESSPs) as well as the popularity of these contributions.
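
To make that concrete, here's a rough sketch (in Python, using an invented event log and made-up field names) of what a two-dimensional E2.0 measure might look like: one dimension for how much someone contributes to the ESSPs, one for how well those contributions are received by colleagues. It's an illustration of the idea, not a prescription for any particular platform or formula.

```python
from collections import defaultdict

# Hypothetical ESSP event log: one record per contribution.
# 'views' and 'ratings' stand in for whatever popularity signals
# a real platform exposes (inbound links, comments received, stars, etc.).
events = [
    {"author": "alice", "kind": "blog_post", "views": 240, "ratings": [5, 4, 5]},
    {"author": "alice", "kind": "wiki_edit", "views": 60,  "ratings": [4]},
    {"author": "bob",   "kind": "comment",   "views": 30,  "ratings": []},
]

def e20_scores(events):
    """Return per-person (activity, popularity) pairs, each scaled 0-1
    relative to the most active / most popular colleague."""
    activity = defaultdict(int)      # how much each person contributes
    popularity = defaultdict(float)  # how those contributions are received
    for e in events:
        activity[e["author"]] += 1
        popularity[e["author"]] += e["views"] + sum(e["ratings"])

    max_act = max(activity.values()) or 1   # guard against empty/zero signals
    max_pop = max(popularity.values()) or 1
    return {
        person: (activity[person] / max_act, popularity[person] / max_pop)
        for person in activity
    }

print(e20_scores(events))
# {'alice': (1.0, 1.0), 'bob': (0.5, 0.09...)}
```

Note that the two dimensions stay separate rather than being collapsed into a single number; that's the "multidimensional" part, and it's what lets a prolific contributor whose material nobody reads look different from an occasional contributor whose material everyone links to.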

Many of the comments on my previous two posts pointed out problems with the approach I advocated, which was to measure each knowledge worker's relative levels of activity/contribution along with the popularity of their contributions. I can boil a lot of the excellent points raised down to three archetypal objections, phrased here as questions:

If you measure activity, aren't you just going to get activity? Yes! This is exactly the point! The objection is that activity does not always lead to desirable results, and that it's possible to have large amounts of unproductive activity. And it is. But all the evidence I've seen indicates that thriving ESSPs yield useful stuff. They get questions answered. They serve as large and dynamic knowledge repositories. They help people find each other and stay close. They transmit good ideas. They harness collective intelligence. And they work in concert with the goals of the organization, not at cross purposes.
So the basic goal is pretty simple: to encourage more activity in these environments. It's only a small leap of faith, I find, to believe that activity will yield results. And the activity doesn't have to be totally self-directed. Instead, the organization's leaders can guide Enterprise 2.0 by signaling and stressing where they want people to focus their contributions. In this economic environment, a focus on cost cutting (and survival) seems like a good idea.

Why not measure instead what we're really interested in: innovativeness, productivity, service levels, etc.? For one thing, these outcomes can be hard to measure. For another, few companies would think to measure receptionists based on their contributions to innovativeness, or R&D scientists based on their contributions to customer service. But these kinds of contributions can and do occur on ESSPs. So I advocate measuring and evaluating people based on their contributions to E2.0, and I have some faith that E2.0 helps with innovation, productivity, service, and so on. Let people figure out for themselves how they want to contribute, participate, and be helpful to each other, and let their abilities to do these things become clear over time, instead of assuming that their place on the org chart completely specifies their areas of expertise or dictates how they should be spending all their time. Believe instead that expertise is emergent (I hope this phrase becomes a bumper sticker).
One other problem with measuring high-level outcomes like innovativeness and productivity is that they're typically measured at the level of the group or the entire enterprise. This gives rise to the free rider problem: some people don't pull their weight and instead count on others to do the work. With group-level outcome measures it's hard to detect and deter free riding. With individual-level measures, in contrast, it's easy to see who's not pulling their weight.

Wouldn't some people treat ESSP contribution as a chore, doing the minimum necessary, and with minimal thoughtfulness? Yes, and so what? This would be a problem if others in the organization (the people of good will) came to believe that this approach of least possible effort (call it the 'phoning it in' strategy) worked as well as a more conscientious approach. If that were the case, more and more people would start phoning it in over time. But if people see instead that sincere effort is rewarded, and that those who phone it in get treated as if they're phoning it in, then it'll be perceived as a losing strategy and avoided.
Another worry is that the poor content generated by the phoning-it-in strategy will clutter up the Intranet or Extranet, obscuring the good stuff and making it harder for people to navigate, search, etc. But think how much clutter there is on the Internet, and how little it impedes us. Thanks to Googleish link-based search, tagging, rating resources like Technorati, Digg, and Yelp, and many other such mechanisms of emergence, the cream rises to the top on the Internet. We can find what we're looking for, navigate efficiently, and pretty accurately assess quality. The whole point of E2.0 is to make Intranets a lot more like the Internet in this regard: to make them places where there's a lot of content, and where the bad or irrelevant does not get in the way of finding the good and relevant.
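
To illustrate what those mechanisms of emergence might look like inside the firewall, here's a small sketch, again with invented data and field names, of a rating-and-link-weighted ordering an intranet search could apply. Content that colleagues link to, tag, and rate well floats up; phoned-in clutter sinks without anyone having to police or delete it.

```python
# Hypothetical intranet items carrying the emergent signals mentioned above:
# inbound links, tags applied by colleagues, and an average star rating.
items = [
    {"title": "Cost-cutting playbook",  "links_in": 18, "tags": 9, "avg_rating": 4.6},
    {"title": "Customer FAQ wiki page", "links_in": 7,  "tags": 5, "avg_rating": 4.1},
    {"title": "Phoned-in status note",  "links_in": 0,  "tags": 0, "avg_rating": 1.5},
]

def emergence_score(item):
    """Toy relevance score: links and tags act like votes from colleagues,
    and the average rating scales how much those votes count."""
    votes = item["links_in"] + item["tags"]
    return votes * item["avg_rating"]

# Rank content by community signal, best first.
for item in sorted(items, key=emergence_score, reverse=True):
    print(f'{emergence_score(item):6.1f}  {item["title"]}')
```

The particular weighting is arbitrary; the point is only that the ordering is driven by what colleagues actually link to, tag, and rate, not by who wrote the content or how much of it they produced.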

So I hear the objections and am trying hard not to dismiss them out of hand, but they don’t yet dissuade me from advocating an individual-level multidimensional E2.0 measurement program. What do you think? Am I leaving out or misrepresenting any of the main objections? Are my answers to the archetypal objections above wrong, naive, incomplete, or otherwise bad? Leave a comment, please, and let us know.