Should Knowledge Workers Have Enterprise 2.0 Ratings?

by Andrew McAfee on September 23, 2008

Imagine that an organization has deployed a full suite of emergent social software platforms (ESSPs) for its members — blogs, wikis, discussion / Q&A forums, upload facilities for photos, videos, and the like, Digg-like utilities to flag and vote on digital content, prediction markets, some kind of enterprise Twitter, and whatever else a ‘full suite’ consists of, now or in the future. And imagine further that the leaders of the organization are sincerely interested in pursuing Enterprise 2.0 and getting their people to actually use the new tools. What would they then do? What would be their smart course(s) of action?

Virtually everyone agrees that coaching, training, explaining, and leading by example would be appropriate and beneficial activities. But what about measuring? It’s a technical no-brainer to measure how much each individual has contributed and to generate some kind of absolute or relative metric. Would doing so be helpful or harmful? Would it lead to negative outcomes and perverse behaviors, or would measuring E2.0 contributions stimulate and encourage the right kinds of actions? 

These are fundamental questions, and they touch both on uncharted territory (ESSPs are new, after all) and on longstanding debates about motivations, incentives, and the interplay between them. It’ll likely take a few posts to cover all this territory, so consider this the first in a series of posts about the utility and desirability of an "E2.0 Rating" for knowledge workers.

One immediate objection is that E2.0 is simply too broad a phenomenon to be reduced to any single metric. Furthermore, a single metric that’s too simplistic, like ‘number of blog posts per month,’ leads to bad and easily predictable results: people blog at the expense of using any of the other tools, and have a strong temptation to put up lots of short (even trivial) posts in order to up their scores. 

There are a couple of responses to this objection. One is that a score can be composed of many elements, even if it’s just a single number. In American football, passer ratings are an example of such a metric. The NFL’s passer rating takes into account most of the things a quarterback is supposed to do when he throws — complete passes, throw for a lot of yards and a lot of touchdowns, and not get intercepted — and combines them into a number that varies between 0 and 158.3. It seems to work pretty well at capturing the player’s contributions, and is widely referenced.

But an E2.0 score wouldn’t have to be distilled all the way down to a single number. Instead, it could be multidimensional. For example, an off-the-top-of-my-head list of productive activities using ESSPs includes authoring (on a blog, for example), editing (wikis), interacting with others (on discussion boards and Q&A forums), tagging online content, and uploading such content or pointing to it (using something like Digg). All of these activities can be done well or poorly, and there are lots of tools like votes and ratings that colleagues can use to give positive feedback that someone is doing them well. 

So one approach would be to graph where everyone stands within the organization along six dimensions: authoring, editing, interacting, tagging, uploading, and positive feedback. A simple radar graph would instantly show where an individual is on each, based on their contributions to various ESSPs and relative to everyone else in the organization (in the graph below, ‘100’ means that they’re at the 100th percentile, in other words the top contributor).
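As a back-of-the-envelope illustration (my sketch, not part of the post), such per-dimension percentile scores could be computed from raw activity counts; every name and number below is a made-up assumption:

```python
# Hypothetical sketch: convert raw activity counts into percentile
# ranks along the six dimensions described above. People and counts
# are illustrative assumptions, not data from the post.

DIMENSIONS = ["authoring", "editing", "interacting",
              "tagging", "uploading", "positive_feedback"]

# Raw counts per person, e.g. pulled from each ESSP's activity logs.
raw = {
    "alice": {"authoring": 12, "editing": 2, "interacting": 40,
              "tagging": 15, "uploading": 30, "positive_feedback": 55},
    "bob":   {"authoring": 30, "editing": 20, "interacting": 10,
              "tagging": 5, "uploading": 8, "positive_feedback": 20},
    "carol": {"authoring": 3, "editing": 1, "interacting": 25,
              "tagging": 9, "uploading": 12, "positive_feedback": 70},
}

def percentile_scores(raw):
    """For each dimension, rank everyone and express the rank as a
    0-100 percentile (100 = top contributor, as in the post)."""
    scores = {person: {} for person in raw}
    n = len(raw)
    for dim in DIMENSIONS:
        ranked = sorted(raw, key=lambda p: raw[p][dim])
        for rank, person in enumerate(ranked):
            # The sole top contributor gets 100; the sole bottom gets 0.
            scores[person][dim] = round(100 * rank / (n - 1))
    return scores

scores = percentile_scores(raw)
```

The six numbers per person could then be plotted directly on a radar chart like the one below.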

In the hypothetical graph below the individual is a relatively heavy uploader and interactor, does some authoring and tagging, and not much editing. This person has also received a lot of positive feedback — enough to put her in the 75th percentile:

Hypothetical E2.0 Radar Chart

Is this good or bad participation in ESSPs overall? That’s for the organization to decide, but this kind of contribution looks pretty good to me.

The benefits of measuring this way are that it doesn’t weight some kinds of contribution more heavily than others and that it provides for easy comparisons across people. 

But is the latter really a benefit? Should participants in E2.0 be compared in this way? Or would it be counterproductive, or a violation of the whole spirit of contribution to organizational ESSPs? As I said, later posts will consider these questions; I just wanted to get the ball rolling with this post.

Leave a comment, please, and tell us what you think of the whole idea of quantitative measurement and E2.0 ratings. What, if anything, would you do with them if they were available? Would you include them in formal objectives and/or performance reviews? Do you think they’d lead to perverse incentives, or proper ones? Hold forth, and let us know what’s on your mind when it comes to this topic. I’ll try to shape later posts around these comments.

 

  • http://www.lyzasoft.com Scott Davis

    A few questions that come to mind re: SNS rating in the workplace are…
    1) Is “value” a supply-derivative metric or a demand-derivative metric? Or some mixture?
    2) What is the weight of quality, and how is qualitative value established? Does the transition of SNS from popular media (in which popularity is per se the standard of value) into the corporate world require a new/different notion of quality besides popularity?

  • http://dinesht.typepad.com Dinesh Tantri

    Incidentally, I wrote a post about reputation systems for Enterprise 2.0 [http://tinyurl.com/42crcg] a couple of days back and also had a fantastic conversation with Vanderwal around some of the related ideas.
    There is definitely a need to come up with a global reputation index, if you will, for any Enterprise 2.0 ecosystem. To get there, we would need to be able to roll up or aggregate reputation across multiple social platforms.
    While looking at “tasks” within each social platform is important to begin with, I think it is important to look at each social system holistically and choose the right reputation pattern based on more strategic objectives. For instance, a social Q&A platform holds good potential for acting as a “proxy” for a user’s expertise, and perhaps a pattern like a “Leaderboard” or “Named Levels” [Guru, Expert, Beginner, etc.] makes sense. Please see these reputation patterns from Yahoo – http://tinyurl.com/6jlul3 – I guess they are very relevant to enterprises as well.

  • Hans

    I often experience that lazy or uncommitted managers favor and rely on ratings.

    I think that E2.0 ratings can indeed visualize the productivity of a knowledge worker, but ratings should really only be the affirmation of what you have already experienced. In my view it is best to review the actual (E2.0) ‘products’ of a knowledge worker, to ask him or her what he or she is proud of, and finally to ask a few peers for review. Ratings can then affirm your conclusion.

  • http://blog.k1v1n.com Kevin Gamble

    Good questions Andrew.

    I have no doubt that some organizations will attempt to quantify participation. Enabling the various workstreaming tools is an essential component of the whole E2.0 suite. It’s important for colleagues to know, in a radically transparent way, where and how people are contributing.

    As soon as you try to quantify it, however, you will kill the goose… This would be something akin to paying people to contribute to a KM system. You’ll get nothing but garbage. People will try to game the system and it will distort the very results you were hoping to achieve.

  • http://www.ijsolutions.ca/blog/ Joel Halse

    I think these stats would be very interesting for those of us building the tools. I also think these stats will be of little interest to current organizational leaders unless you are able to draw a direct line between these stats and their impact against the problems the tools are designed to solve.

    I spend most of my working hours meeting and talking with organizational leaders from the business community, government, and academic institutions about E2.0 (we have developed and are now installing a collaborative workspace called FlowThink). Rarely am I able to talk about ‘Enterprise 2.0′ and maintain their interest. Instead, I have to focus on the effects that proper group communications and knowledge capture will have on their workforce’s ability to achieve organizational goals. I also stress the positive impact these tools have on the lives of their organizational leaders. With these tools they will gain the ability to manage group work without having to be physically present 24/7. This message really resonates with professionals who can never seem to take time off because they are central to all group activities given that the information flows through them. Remove the leader and work stops. E2.0 addresses this problem.

    So, I suspect that you’ll need to ensure that the results of your performance measurements can be easily understood in the context of Enterprise goals. The NFL’s passer rating is useful because it directly relates to the probability that your team will win the game.

    We need to remember that E2.0 is as much about the Enterprise as it is about the 2.0. We need to be careful that these measurements put the use of 2.0 into context for the Enterprise. Business leaders won’t care how much their workforce uses these tools if they aren’t able to also see the impact the increased use of these tools is having on their organization.

  • Ben Plouviez

    Useful approach. One of the things that’s clear is that people use social networking tools to select the information they want to see – even though those tools deliver huge quantities of stuff, we actually look at quite small amounts (http://news.bbc.co.uk/1/hi/technology/7562475.stm).

    The evidence is similar in the workplace (http://hbswk.hbs.edu/item/6011.html) – the silo lives. So the question is, can we use the kind of approach you’re suggesting to incentivise people towards broadening their horizons and exploring more widely? For example, if I tag a resource prepared by my near neighbour, should this score lower than if I tag one from a very different business unit a continent away? Maybe we could build a 3D radar chart with distance as the third dimension…

  • Brent Blum

    I’m staffed at a large energy company that is implementing an E2.0 system. HR’s stance is that there is only one measurement of employee performance (namely, the performance review), and it is disallowing all outwardly visible rating of individual contributors. We’ll see if I can talk them out of this…

  • Bryan Labutta

    Excellent post and something that I have been thinking about and struggling with for a little while now. Although the company I work for has a corporate intranet that allows for basic collaboration and a couple of us have introduced new tools recently around social bookmarking and micro-blogging, usage remains minimal.

    I have given some thought to whether it would be valuable to incent people to use the tools since I do believe that people would find them useful. However, I see two problems with that approach. First, I agree with others’ comments that it would be difficult to determine the correct incentive that does not produce “garbage” while also not requiring large amount of manual effort to review and rate content. Second, I tend to believe that Enterprise 2.0 tools should be collaboration outlets that employees want to use willingly and not something that is force fed to them. To me, that is the time when you see the greatest benefit.

    As far as a rating system is concerned, I would be hesitant to implement one around Enterprise 2.0 because I would not want to stifle an individual’s willingness to contribute. As I mentioned above, I feel like the biggest benefits to Enterprise 2.0 should come when employees realize the benefits on their own and contribute at will. If a regular contributor sees that their personal feedback rating is very low will that reduce the amount of time they spend editing wikis and writing blog posts? Perhaps believing that people will see the value on their own is an idealistic point of view and I’ll come around to wanting some sort of rating system eventually…

    One other note, Dinesh, thanks for the link to the Gartner article. Definitely gave me some insight into some viewpoints about why our Enterprise 2.0 software is not being utilized to its fullest potential.

  • http://www.evolutionofbpr.com GregoryY

    I am a big enthusiast of measuring performance as much as possible, but it is better not to measure at all than to measure wrong thing. What you are proposing is exactly that. Unless people are employed by an organization to “socialize”, it is distraction to judge their performance based on their use of tools. The metrics need to be designed to measure contribution of their use of the tools to the improvement of their job performance.

    I had a lot of experience with salespeople, CRM utilization champions, who could not meet their sales quotas and IT trying to keep them from firing.

    Don’t put a carriage in front of a horse. It never works.

  • http://www.cisco.com Yatman Lai

    I believe we should only use individual ratings as positive reinforcement, assuming they offer evidence as to how they help his or her job performance. Otherwise, group-level ratings may be more appropriate and less controversial.

  • http://atulrai1.blogspot.com Atul Rai

    Hi Andrew,

    Awesome idea. I do quite agree with you that participation in social computing cannot be reduced to one number, but as you said, this could be treated as a set of numbers … some folks would be good at edits, others at authoring … In addition, this is making the entire idea of social computing what it should be … fun!

    Something along the lines I wrote about:

    http://atulrai1.blogspot.com/2008/09/sabre-social-networking.html

    Check out the way Sabre is giving “Karma” to folks based on how they are answering other people’s questions, etc.

  • http://www.workliteracy.com Tony Karrer

    I’m not sure I buy this kind of rating scheme, but there is an underlying issue that most knowledge workers have big time skill gaps. This hinders adoption and effectiveness of E2.0. So while this might not be the right scheme, something that highlights the gaps is important.

  • http://www.infovark.com Dean Thrasher

    I think measuring community participation will become less controversial as employees and employers become more familiar with social software.

    If social software is to become mainstream in the enterprise, you’ll have to measure community participation. Otherwise you can’t calculate return on investment or the effectiveness of training programs, or compare one suite of tools with another.

    It’s a short step from measuring participation in the aggregate to measuring the contributions of an individual employee. At many companies — and in academia — employees are already measured by the amount they contribute to the organization’s intellectual capital. Whether it’s objective measures such as number of articles published or patents filed, or more subjective measures like reputation or peer reviews will depend on the nature of the enterprise.

    On the public internet, “karma” rankings on social networking sites such as Slashdot or Digg have been around for some time.

  • tjelliott

    As a previous commenter noted, what matters most are the effects. A QB rating in the National Football League factors in scores; here, that means touchdowns (TDs).

    a = (((Comp/Att) * 100) -30) / 20
    b = ((TDs/Att) * 100) / 5
    c = (9.5 – ((Int/Att) * 100)) / 4
    d = ((Yards/Att) – 3) / 4

    a, b, c and d cannot be greater than 2.375 or less than zero.

    QB Rating = (a + b + c + d) / .06
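    The formula above translates directly into code. A quick Python sketch (mine, with hypothetical variable names) that also shows where the 158.3 ceiling mentioned in the post comes from:

```python
def passer_rating(comp, att, yards, tds, ints):
    """NFL passer rating, per the formula above. Each component is
    clamped to the range [0, 2.375] before the final division."""
    clamp = lambda x: max(0.0, min(2.375, x))
    a = clamp((((comp / att) * 100) - 30) / 20)   # completion percentage
    b = clamp(((tds / att) * 100) / 5)            # touchdown percentage
    c = clamp((9.5 - ((ints / att) * 100)) / 4)   # interception percentage
    d = clamp(((yards / att) - 3) / 4)            # yards per attempt

    # A perfect game maxes out every component:
    # 4 * 2.375 / 0.06 = 158.33..., the ceiling cited in the post.
    return (a + b + c + d) / 0.06
```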

    Rating knowledge workers based on their activities, even when in collaboration, might miss the point: create something that was not there out of what you have. Rating the results of knowledge work makes more sense, although there may be many other factors affecting a person’s ability to produce those results. This scale overemphasizes activities; they should only matter if they produce something that the organization values.

  • http://AlexBain.com Alex Bain

    I think a scoreboard can be extremely motivational. I remember the Cambrian House founder mentioning in your class that the virtual currency they created for their community led to a surge in contribution.

    I’ve also seen a scoring system work within a company. I know the designers that work at Zurb, and they boil down their contribution to their company’s blog to a single number, and keep track of who’s winning: http://www.zurb.com/article/88/team-motivation-for-us-its-just-a-game [they say this has led to both more and better work]

  • http://pflix.com Mark Bean

    I think you could get a lot of metrics from the following items:

    1. Attention. The amount of traffic to your “content” for a given period of time.
    2. Participation. The extent to which users engage with your content in a channel. Think blog comments, surveys, wall posts, ratings, or widget interactions.
    3. Authority. (Like Technorati.) The inbound links to your content: trackbacks and inbound links to a blog post, or people linking to a YouTube video.
    4. Influence. The size of the user base subscribed to your content. For blogs, feed or email subscribers; followers on Twitter or FriendFeed; or fans of your Facebook page.
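    One hypothetical way to fold the four metrics above into a single channel score; the weights and numbers are arbitrary illustrations, not something proposed in the comment:

```python
# Hypothetical: combine the four metrics above into one 0-100 score.
# Each raw metric is scaled against the best value observed among
# peer channels, then weighted. Weights are arbitrary assumptions.

WEIGHTS = {"attention": 0.3, "participation": 0.3,
           "authority": 0.2, "influence": 0.2}

def channel_score(metrics, peer_max):
    """Weighted sum of peer-normalised metrics, yielding 0-100."""
    total = 0.0
    for name, weight in WEIGHTS.items():
        if peer_max[name]:  # skip dimensions nobody has scored on yet
            total += weight * (metrics[name] / peer_max[name])
    return round(100 * total, 1)

# e.g. a blog that leads on traffic but trails on subscribers
score = channel_score(
    {"attention": 900, "participation": 40, "authority": 12, "influence": 150},
    peer_max={"attention": 1000, "participation": 80, "authority": 30, "influence": 600},
)
```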

  • http://jedsundwall.com Jed Sundwall

    I agree with Joel that “these stats will be of little interest to current organizational leaders unless you are able to draw a direct line between these stats and their impact against the problems the tools are designed to solve.”

    We developed an algorithm for a client to rate Facebook Pages by looking at a handful of FB Page activity metrics. The results helped us identify some techniques to develop an active FB Page.

    Then we realized that we don’t care about having an active FB Page unless it’s helping accomplish some other strategic objective. Focusing on the metrics we’d identified (which are similar to yours here) would make our clients more popular, but, as Scott’s second point addresses, the value of that popularity is hardly fungible.

    The activities you’re looking at in this example could certainly lead to the creation of some good content, but looking at them per se, even in aggregate, doesn’t seem that useful at the end of the day.

  • http://www.ultimedium.com Soenke Dohrn

    I fully concur with Kevin Gamble’s comment. You can only manage what you can measure. But what do we want to measure in the first place? Measuring self-organising processes has to be different from measuring linear processes. By measuring usage frequencies of particular items, like authoring, you incentivise their use – whether or not that supports the quality of self-organisation or the organisational goal. Is that really helpful?

    After all, you do not know what the best outcome would be, or else you could design and formalise the process. But then, why introduce self-organising collaboration tools in the first place?

    You also have to consider that the tools, like authoring, serve different purposes in a knowledge discovery and innovation process. Authoring aids creativity teams in forming, managers in transparently discussing and communicating decisions, and departments in getting an understanding of concerns present in the organisation, as well as channelling customers’ feedback.

    So what we should measure instead is not the frequency with which people made use of E2.0 tools, but the frequency with which they have been applied to processes or decisions in the different areas of the organisation. The core question is: which processes have benefited the most, which have emerged, and which have ceased to exist?

    I think your radar chart has good categories, though you will find that tagging is more likely to sky-rocket, since it is the most efficient and a highly effective tool to connect person to people to content (three-tier). My hypothesis is that if you use that chart to analyse the fuzzy front end of innovation processes, you will probably find that the authoring, editing, and interacting categories dominate. At later stages, once the ideas have become more focused, positive feedback becomes stronger as the transparency of the decision-making process increases.

    Finally, it would be interesting to measure where those collaborative tools help in reaching a decision. I.e., when managers have to make a decision on, for example, going ahead with or stopping an idea project, there should be qualitative measures such as the top three most influential discussions (discussion name + link + summary 150). This decision should be an aggregated info dossier, tagged accordingly and made available to the organisation’s network.

  • Fenton Travers

    I think people need to think about this without fear… that always screws it up. E2.0 is about embracing the good, and not being afraid of the ‘bad’. A rating that your boss is going to look at and beat you up about is a pretty pointless management activity. Please GET IT, GANG: E2.0 is not for the bosses, it’s for you! These ratings are not for your boss to give you a raise! They’re for you to find the right person to give you information on a certain topic! The comparison is for your personal feedback too. Do people think I’m a jerk? I.e., one of the spokes on the radar graph would be “Is good to talk to”. We are trying to get technology to serve our greater good; that’s what E2.0 is about, IMO. I am currently using some technology that is making my job easier to do, and it’s great. We are really at the Model-T days of computing, let alone E2.0 computing. We want to hire a smart grad, plug him into an E2.0 company, and let him make the company millions and serve the world simultaneously, right?

  • http://www.8.to Lim Boon Chuan

    IMHO no; it is hard to quantify traditional work and services, and even now there isn’t really a good measure of current performance, not to mention Enterprise 2.0, which is much more abstract. Measurement can help gauge the level of effectiveness of an enterprise, but I qualify that by adding “accurate measurement”. To date I do not see any accurate barometers or quantitative tools to measure Enterprise 2.0 performance. Inaccurate measurements are worse than no measurements at all, as they will bring uncertainty, frustration, and distrust, which will work against the organization.

    It is certainly useful to be able to quantify some aspects of Enterprise 2.0. But until we have sufficiently accurate tools, let’s not even try it.

  • Daniel Mintz

    Right now at the US Department of Transportation, my focus is on getting 2.0 pilots started so people can get familiar with, and less fearful of, the tools; thus I am not sure I can draw upon practical experience to respond to this kind of suggestion.

    My thoughts are that focusing on what we are trying to use Enterprise 2.0 ‘stuff’ for might be more useful than measuring the 2.0 activities themselves. Or, if possible, some combination.

    For example, a performance plan could require at least one cross-organizational project with a 2.0 technology. It seems to me that sometimes we focus too much on symptoms and work hard to mask or overcome them, rather than looking at the problem and how to solve it.

  • http://mashable.com/2008/09/22/government-intelligence-renaissance-networks Chris Rasmussen

    In any open social system you’re always going to have a power law distribution (long tail). You can generate what the “average” user does, but it doesn’t tell you much about the system as a whole. So when it comes to performance evals comparing against the average doesn’t help.

    There are all sorts of neat mashups that can be made to help judge interaction, but when it comes to qualitative measures the best approach is for both worker and management to work in the tools together. I could easily write my colleagues’ performance evals because I read their blogs, subscribe to their social bookmarks, and watch the same wiki pages.

    Management cannot view this interaction from a distance.

    Clay Shirky does a great job of explaining power law distributions in this talk.

  • http://www.steptwo.com.au/columntwo James Robertson

    Like others, I’d highlight that this is really a business issue. Or perhaps an HR issue.

    If these activities are part of people’s job/position descriptions, then let’s measure them as part of people’s performance reviews. This type of graph could then be one good way.

    If it’s not part of their job, then who are we to impose this measurement? And to what end? How is this aligned with their job and business performance, not to mention business outcomes?

    I think it’s great to talk all this through, but we have to keep reminding ourselves that collaborative organisations will be created by management decisions, not our enthusiasm.

    Cheers, James

  • http://info-architecture.blogspot.com Samuel

    It would be great if we could do this. But doesn’t KM research show that KPIs hurt knowledge sharing? Would measuring E2.0 contribution do the same? Furthermore, there have been some interesting experiments trying to measure social media ROI, but it’s still hard to do this objectively. Will the comparison between my E2.0 contribution and that of my colleague be fair? Can’t we ‘just’ ask for stories and try to quantify them? Ask employees to tell managers how the tools helped them or others become more productive.

  • http://www.bitinsight.com Patrick McHugh

    While tracking participation on social networking sites is important for understanding I would not broadly incorporate such measures into the formal objective or performance reviews of all the participants. Research on virtual groups indicates that the dearth of social cues in the available media enhances those cues that are available and increases their impact. I would be concerned that broadly implementing a scheme such as proposed could have negative impact on the quality of the participation and overall group trust.

    Interestingly virtual group research also indicates that the active behavior of a small number of individuals in a group can drive overall behavior norms. Given this fact I believe leveraging the proposal to set objectives for a small group of “rainmakers” responsible for the success of the social networking site could prove effective.

    Patrick McHugh
    Managing Director
    BitInsight LLC

  • Kuba

    Great post. In my company we are currently implementing a set of Web 2.0 tools (blogs, wikis); our main focus right now is to attract all the early adopters and creators. I believe we should offer them some incentives, mainly non-financial, as most creators and contributors seek recognition and feedback from the community. Still, I’m not sure how to approach it long term. I will be following this post. Thanks.

  • http://www.bestipodtips.info Dereck

    I would like to make a suggestion. A great deal of money is spent on knowledge or skill upgrades for workers at every level. Yet there are no methods, at least none I have encountered, to measure the lifetime of a knowledge upgrade and how much money is lost by not taking measures to refresh the skills and knowledge learned. I think this is the starting point, and after that some of your suggestions would follow.

  • http://andrewmcafee.org/blog/?p=789 Three Mantras : Andrew McAfee’s Blog

    [...] of the reason I  advocate Enterprise 2.0 ratings for knowledge workers (see this 3-post sequence) is to harness this addiction –  to find the Justin McCurrys of the world, [...]

  • http://bhc3.wordpress.com/2009/07/10/enterprise-2-0-culture-is-as-culture-does/ Enterprise 2.0: Culture Is as Culture Does « I’m Not Actually a Geek

    [...] in effecting change. Or companies could take it even further, following Andrew McAfee’s suggestion that social software participation be baked into performance [...]

  • micheleadrian

    We think it’s difficult to create a sensible rating that would produce more and better posts or wiki entries. It’s better to keep an eye on your knowledge workers and encourage them to use the E2.0 framework. We don’t use E2.0 ratings, because we think the E2.0 platform would lose quality. So we, the management, try to set a good example for our knowledge workers by using our E2.0 platform ourselves.

  • reanmw

    We would advise against the use of such ratings for measuring the performance of any employee. While we do not have statistics on the main reasons why people use social software, their motivations seem to be largely intrinsic (i.e., making the best use of one’s tools to do great work, connecting with colleagues, working in groups, sharing). The theory of motivation clearly shows that any form of extrinsic motivation comes with the danger of eliminating existing intrinsic motivation, possibly to the point where it does more harm than good.

    However, such ratings show great promise for reviewing the general use of the available infrastructure as well as tendencies amongst employees. Social software is most likely not adopted at the same speed everywhere. Improvements to the software and specific training can be offered to those groups of employees that have not yet adopted new possibilities.

    The diagrams do have another potential weakness. As a power-law distribution will most likely always be present, most of the rating charts will consist of a mere blip in the very center. The graphs will possibly need to be improved so that the lower percentages account for more space on the chart.

  • harobu

    We believe that in order for such tools to work, i.e. the employees to use them, the company culture has to enable / support this.

    Our own personal incentive derives from the participation in internal Wikis. However, this is not the sole reason for the participation. In addition to this, we enjoy sharing and spreading our own knowledge, and we see this as advantageous for our reputation.

    For us, who actively participate in the company Wiki out of conviction, the ranking is not a disadvantage, instead we consider this a confirmation of our position.

    On the other hand, what about employees who only participate in ESSPs for the ranking? We consider this a “mechanical” participation, and this brings up the question on whether time will transform it into conviction.

    We believe that currently these ESSPs are still in an early phase considering their use in industry, and that therefore they need certain degrees of freedom regarding the measurement / ranking of employees. As mentioned before, a change in culture is required, which can be reached, in our opinion, by convincing people rather than forcing them into participation. One way to do this is to present to employees successes achieved by using ESSPs.

    Once such a change of culture has begun, it may make sense to implement a ranking. The presented six-point radar chart, generated automatically, would allow each employee to check their status and to determine which aspects can still be “improved”.

    Not least, such a chart would reflect an employee’s methods and social competence.

    One issue not yet considered: what happens to an employee who has reached the maximum in all six dimensions? Will they keep participating or rest on their laurels? For that reason, measurement and ranking can serve as a personal status check, but not as a comparison to co-workers.
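    The automatically generated six-axis radar chart this comment describes could be driven by simple per-dimension normalization. A minimal sketch, in which the dimension names and monthly targets are entirely hypothetical:

```python
def radar_scores(raw, caps):
    """Normalize raw activity counts to a 0-100 scale per radar axis,
    capping at the target so one hyperactive dimension can't dominate."""
    return {dim: min(100, round(100 * raw.get(dim, 0) / caps[dim]))
            for dim in caps}

# hypothetical six dimensions and monthly targets
caps = {"blog": 4, "wiki": 10, "forum": 8, "tags": 20, "votes": 30, "comments": 15}

# one employee's raw activity for the month (missing dimensions count as zero)
alice = {"blog": 2, "wiki": 12, "forum": 4, "votes": 15}
print(radar_scores(alice, caps))
```

    Capping each axis at 100 also surfaces the “maxed out in all six dimensions” problem the comment raises: once every axis reads 100, the chart no longer rewards further participation.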

  • staffing321

    Interestingly, virtual-group research also indicates that the active behavior of a small number of individuals can drive a group’s overall behavior norms. Given this, I believe the proposal to set objectives for a small group of “rainmakers” responsible for the success of the social networking site could prove effective.


  • http://speakerinteractive.com/ Swan

    Adding information to a knowledge base only has value if others derive value from it. One of the best things we can do as employees is use, and improve upon, corporate best practices as much as possible.

    Thus, shouldn’t one of the most important metrics be how much what we draw from the E2.0 environment improves our work? That’s hard to measure, but far more meaningful than counting edits or authored items.

    Andrew, would love to have you moderate a chat at http://KMers.org on this topic. Slots are open for Tuesdays in April.

    Swan

  • petemodigliani

    Another option is establishing E2.0 rating levels. Level I requires users to set up a profile, add a document, contribute to a wiki, and author a blog post and a microblog post. Additional levels could be earned through a certain number of contributions to the enterprise: followers, retweets, engagement, connections, influence, etc. Once a level is reached, there would be minimum requirements to maintain it (contributions, followers). I would love to see companies and Government agencies establish cash awards for employees with the top E2.0 ratings, or a bonus for achieving the next E2.0 level.
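    A sketch of how such level rules might be encoded: Level I requires one of everything, and higher levels require volume and reach. All field names and thresholds below are hypothetical illustrations, not part of the original proposal:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    has_profile: bool = False
    documents: int = 0
    wiki_edits: int = 0
    blog_posts: int = 0
    microblog_posts: int = 0
    followers: int = 0

# hypothetical thresholds for Levels II+: (total contributions, followers)
LEVEL_THRESHOLDS = [(10, 5), (50, 25), (200, 100)]

def e20_level(a: Activity) -> int:
    """Level 0 = not yet rated; Level I = one of each contribution type;
    each further level requires both more contributions and more followers."""
    if not (a.has_profile and a.documents and a.wiki_edits
            and a.blog_posts and a.microblog_posts):
        return 0
    total = a.documents + a.wiki_edits + a.blog_posts + a.microblog_posts
    level = 1
    for contributions, followers in LEVEL_THRESHOLDS:
        if total >= contributions and a.followers >= followers:
            level += 1
    return level
```

    The same thresholds could double as the maintenance check: rerun the function each quarter and demote anyone whose rolling totals drop below their current level.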

  • Mike Ricard

    I think it can be like wielding a hammer in a china shop if you expect everyone to come out and ‘be social’ and then measure outcomes on an equal basis. The truth is that some people are more social than others (human nature). If you force the unsocial to participate for the sake of their KPIs, their lower-quality submissions will count against them even though they may otherwise do their jobs very well.

    Don’t get me wrong: I’m an enterprise community manager for a global publisher, and my job is to get everyone on the employee community to become active members. But some will take to it better than others (the old 1-9-90 rule). We should accept the reality that people are infinitely variable and avoid shoe-horning them into arbitrary ratings and measures.

    What these ratings and measurements may be validly used for is to identify those social high-flyers who can be co-opted for innovative, knowledge-based or more customer-focused work. It will be interesting to see when HR starts hiring on the basis of social reputation/capital; some jobs will be more relevant to this than others.

    I’m writing this comment after having read a reference to this post from Hutch Carpenter in his topic called Reputation and Game Mechanics Are the Future of Social Software. http://bit.ly/deQOxo. He calls us to ‘Mix fun with achievement’. This approach may work better at Toys ‘R Us and Google than at my legal publishing and B2B company. There is too much corporate stodginess still in place to allow for fun in most workplaces.

    A great provocative post as always Andrew.

  • http://twitter.com/bunchball bunchball

    Hey Andrew – we should talk – my company has a platform that measures participation online and then uses the statistics combined with game mechanics to incent and motivate behavior. Drop me a line at partners [at] bunchball.com if you'd like to learn more.

    best, – rajat

  • http://georgezapo.com George Zapo

    Thank you for this valuable information! I especially like the radar chart. Quantitative measurements are far more reliable; they deal with facts. In addition, Enterprise 2.0 ratings can add quality to social platforms, which, in my opinion, is necessary.

    Thank you for leading us into uncharted territory!

    George Zapo
    http://georgezapo.com

  • http://freebacklinkspot.com Free Backlinks

    Rating workers can be a good idea or a bad one; the workers themselves must also have a say in that.
    I think it is a bad idea to participate in ESSPs.

  • http://pulse.yahoo.com/_I57OEBTJSOMFHXR6OPGLDS7U3U Theresa

    This information means a lot for business collaboration. It is a significant element nowadays that every entrepreneur should apply.
