Imagine that an organization has deployed a full suite of emergent social software platforms (ESSPs) for its members — blogs, wikis, discussion and Q&A forums, facilities for uploading photos, videos, and other content, Digg-like utilities to flag and vote on digital content, prediction markets, some kind of enterprise Twitter, and whatever else a ‘full suite’ consists of, now or in the future. And imagine further that the leaders of the organization are sincerely interested in pursuing Enterprise 2.0 and getting their people to actually use the new tools. What would they then do? What would be their smart course(s) of action?
Virtually everyone agrees that coaching, training, explaining, and leading by example would be appropriate and beneficial activities. But what about measuring? It’s a technical no-brainer to measure how much each individual has contributed and to generate some kind of absolute or relative metric. Would doing so be helpful or harmful? Would it lead to negative outcomes and perverse behaviors, or would measuring E2.0 contributions stimulate and encourage the right kinds of actions?
These are fundamental questions, and they touch both on uncharted territory (ESSPs are new, after all) and on longstanding debates about motivations, incentives, and the interplay between them. It’ll likely take a few posts to cover all this territory, so consider this the first in a series of posts about the utility and desirability of an "E2.0 Rating" for knowledge workers.
One immediate objection is that E2.0 is simply too broad a phenomenon to be reduced to any single metric. Furthermore, a single metric that’s too simplistic, like ‘number of blog posts per month,’ leads to bad and easily predictable results: people blog at the expense of using any of the other tools, and have a strong temptation to put up lots of short (even trivial) posts in order to up their scores.
There are a couple of responses to this objection. One is that a score can be composed of many elements, even if it’s expressed as a single number. In American football, the passer rating is an example of such a metric. The NFL’s passer rating takes into account most of the things a quarterback is supposed to do when he throws — complete passes, throw for a lot of yards and a lot of touchdowns, and not get intercepted — and combines them into a number that varies between 0 and 158.3. It seems to work pretty well at capturing the player’s contributions, and is widely referenced.
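To make the idea concrete, here’s a short sketch of how the NFL passer rating folds four per-attempt components into one number. Each component is clamped to the same range before averaging, which is what keeps any single stat from dominating the score — the same design problem an E2.0 metric would face. (The formula below is the standard published NFL one; the sample stat line is invented.)

```python
def passer_rating(attempts, completions, yards, touchdowns, interceptions):
    """NFL passer rating: four per-attempt components, each clamped to
    [0, 2.375], then averaged and scaled so the maximum is 158.3."""
    clamp = lambda x: max(0.0, min(2.375, x))
    a = clamp((completions / attempts - 0.3) * 5)     # completion percentage
    b = clamp((yards / attempts - 3) * 0.25)          # yards per attempt
    c = clamp(touchdowns / attempts * 20)             # touchdown rate
    d = clamp(2.375 - interceptions / attempts * 25)  # interception rate
    return (a + b + c + d) / 6 * 100

# A statistically perfect game maxes out all four components:
print(round(passer_rating(20, 20, 250, 5, 0), 1))  # 158.3
```

Note that the clamping means the rating stops rewarding improvement past a threshold — a property an organization might or might not want in an E2.0 score.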
But an E2.0 score wouldn’t have to be distilled all the way down to a single number. Instead, it could be multidimensional. For example, an off-the-top-of-my-head list of productive activities using ESSPs includes authoring (on a blog, for example), editing (wikis), interacting with others (on discussion boards and Q&A forums), tagging online content, and uploading such content or pointing to it (using something like Digg). All of these activities can be done well or poorly, and there are lots of tools like votes and ratings that colleagues can use to give positive feedback that someone is doing them well.
So one approach would be to graph where everyone stands within the organization along six dimensions: authoring, editing, interacting, tagging, uploading, and positive feedback. A simple radar graph would instantly show where an individual is on each, based on their contributions to various ESSPs and relative to everyone else in the organization (in the graph below, ‘100’ means that they’re at the 100th percentile — in other words, the top contributor).
In the hypothetical graph below the individual is a relatively heavy uploader and interactor, does some authoring and tagging, and not much editing. This person has also received a lot of positive feedback — enough to put her in the 75th percentile:
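The percentile ranks behind such a radar graph are straightforward to compute: for each dimension, count what fraction of colleagues a person meets or beats. A minimal sketch follows — the names, dimension keys, and raw contribution counts are all invented for illustration:

```python
# Hypothetical raw contribution counts per person, per dimension.
scores = {
    "alice": {"authoring": 4, "editing": 0, "interacting": 30,
              "tagging": 12, "uploading": 25, "feedback": 40},
    "bob":   {"authoring": 9, "editing": 7, "interacting": 5,
              "tagging": 2,  "uploading": 3,  "feedback": 10},
    "carol": {"authoring": 1, "editing": 3, "interacting": 12,
              "tagging": 20, "uploading": 8,  "feedback": 22},
}

def percentile_rank(person, dimension):
    """Percentage of people (self included) whose count this person
    meets or exceeds on the given dimension."""
    values = [s[dimension] for s in scores.values()]
    at_or_below = sum(1 for v in values if v <= scores[person][dimension])
    return 100 * at_or_below / len(values)

def radar_profile(person):
    """One person's position on all six axes of the radar graph."""
    return {d: round(percentile_rank(person, d), 1)
            for d in scores[person]}

print(radar_profile("alice"))
```

With real data you’d feed these six numbers straight into a radar-chart widget; the point is that the underlying arithmetic is trivial — the hard questions are about what to do with the result.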
Is this good or bad participation in ESSPs overall? That’s for the organization to decide, but this kind of contribution looks pretty good to me.
The benefits of measuring this way are that it doesn’t weight some kinds of contribution more heavily than others and that it provides for easy comparisons across people.
But is this latter really a benefit? Should participants in E2.0 be compared in this way? Or would it be counterproductive, or in violation of the whole spirit of contribution to organizational ESSPs? As I said, later posts will consider these questions; I just wanted to get the ball rolling with this post.
Leave a comment, please, and tell us what you think of the whole idea of quantitative measurement and E2.0 ratings. What, if anything, would you do with them if they were available? Would you include them in formal objectives and/or performance reviews? Do you think they’d lead to perverse incentives, or proper ones? Hold forth, please, and let us know what you think and what’s on your mind when it comes to this topic. I’ll try to shape later posts around these comments.