
Developing a social media policy for your enterprise? Use bottom-up design principles

In response to the explosion in the use of social media and collaboration tools over the past 12 months, many organization leaders (e.g., CIOs and CPOs) are developing formal Social Media Policies to guide their staff in approved use of these tools inside the enterprise. Their challenge is to ensure staff use social media in ways that comply with both the enterprise’s mission and general policies—without overly inhibiting the benefits of open collaboration. By starting from bottom-up design principles, leaders can create Social Media Policies that productively encourage creativity without risking their enterprise’s mission and reputation.

Enterprises routinely start with a top-down approach

Enterprises traditionally employ top-down approaches when defining standards, policies and procedures. This is natural and unsurprising, as the majority of enterprises are hierarchical entities.

Top-down approaches are great at driving compliance

Top-down approaches are very effective when the goal is to prevent outliers and discrepancies. Their use is ideal when you want to drive adherence to things like plans and regulatory compliance.

However, top-down approaches are counter-productive to encouraging creativity

Creativity is not something you can drive on demand, from the top downward. Have you ever tried to order a team to be creative according to a plan? If so, did this produce the results you desired? Likely not. Creativity needs to be encouraged, not driven.

Social media requires a bottom-up approach

Social media is inherently non-hierarchical. It creates a “flat” network that enables all members to participate in the same way, regardless of level, time or location.

Social media results develop along embryonic lines

Social media-based creativity follows a rather organic approach. Members initially join social media networks and share information about topics of personal concern or interest. Members with similar interests then link together to collaboratively develop initial thoughts into fleshed-out Ideas. These more complete Ideas then compete with the Ideas of others for attention and support. Those that “rise to the top” attract increased interest and collaboration, resulting in fully-vetted solutions to problems or unmet needs. While the social media community has coined words like “crowdsourcing” and “wikification” to describe this process, it is essentially embryology at work (albeit embryology of Ideas).

Embryology works from the bottom-up, following local rules

Embryology forms rich, complex works from simple beginnings by following a bottom-up process. Richard Dawkins elegantly described the power of this on page 220 of his latest book, The Greatest Show on Earth:

“The key point is that there is no choreographer and no leader. Order, organization, structure—these all emerge as by-products of rules which are obeyed locally and many times over… That is how embryology works…this kind of programming is self-assembly.
… [I]t seems impossible to believe that the genes that program their development don’t function as a blueprint, a design, a master plan. But no: … it is all done by individual cells obeying local rules. The beautifully ‘designed’ body emerges as consequence of rules being locally obeyed by individual cells, with no reference to … an overall global plan.”

Dawkins’s major point is that you will obtain richer, more robust results by defining bottom-up, local rules for the evolution of Ideas (instead of driving them top-down from a master plan or policy).

Many examples of this exist throughout the technology world

Bottom-up self-assembly of robust, complex systems through use of local rules is not confined to the biological world. Some of the most successful technological innovations were built on the same approach. Just take a look at everything from Internet routing and open source technologies to Google’s PageRank algorithm and Apple’s iPhone application development model.

Creating a social media policy based on bottom-up principles

Using bottom-up, locally followed rules to develop a Social Media Policy looks very different in structure from a traditional top-down policy.

Below is an outline of sample rules (and how they would locally execute throughout a social media ideation process) that I would initially consider when developing an effective Social Media Policy. For simplicity’s sake, my unit is an Idea. An Idea could be a plan, policy, design, rule, product or anything else you can imagine.

A) Define the stages of ideation

Define what stages a collaborative idea should pass through from a root concept to completion. This is the skeleton for all other local rules. An example:

  1. Brainstorming of Ideas to consider
  2. Competition of Ideas to see which should be elaborated upon
  3. Elaboration of Winning Ideas into a critical level of detail
  4. Editing of Elaborated Ideas to a Released State

Once Ideas are Released, they become subject to further Brainstorming efforts to adapt them to changing business conditions (evolution at work).
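To make this skeleton concrete, here is a minimal sketch (in Python, with illustrative names of my own choosing, not part of any particular platform) of how the stages and a basic Idea record might be encoded:

```python
# A minimal sketch of the stage skeleton; names are hypothetical.
STAGES = ["brainstorming", "competition", "elaboration", "editing", "released"]

class Idea:
    """One unit of collaboration: a plan, policy, design, rule or product."""
    def __init__(self, title, author):
        self.title = title
        self.author = author
        self.stage = "brainstorming"  # every Idea starts as a root concept
        self.votes = 0                # support gathered during Competition
        self.flags = 0                # "offensive/disruptive" reports
```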

B) Define the allowed actions at each stage

Define what staff can do to an Idea at each Stage. For example, staff can—

  • Create or delete Ideas during Brainstorming
  • Vote, share (internally) or comment on them during Competition
  • Add or remove whole Idea Components during Elaboration
  • Refine existing Idea Components (only) during Editing

Limiting what can be done at each stage provides just enough organization to reduce chaos and encourage productive collaboration. Brainstorming is all done in one place. You do not waste time fleshing out Ideas until they proceed through the Competition Stage. Similarly, you focus on Elaborating upon and Editing late-stage Ideas (instead of chaotically replacing them with an unexplored, pre-Brainstormed half-Idea).
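As a sketch of how such stage-gating might be enforced (building on the hypothetical Idea class above), a simple lookup table of local rules is enough:

```python
# Hypothetical table of local rules: the actions each Stage permits.
ALLOWED_ACTIONS = {
    "brainstorming": {"create", "delete"},
    "competition":   {"vote", "share_internally", "comment"},
    "elaboration":   {"add_component", "remove_component"},
    "editing":       {"refine_component"},
}

def is_allowed(idea, action):
    """An action is valid only if the Idea's current stage permits it."""
    return action in ALLOWED_ACTIONS.get(idea.stage, set())
```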

C) Define the transitions between each stage

Define what conditions trigger movement of an Idea from one stage to another (forward or backward). By defining the conditions, you let the network act without requiring extensive oversight. Samples for movement out of Competition could include the following:

  • When an Idea gets enough votes it moves into Elaboration
  • When an Idea gets flagged as offensive or disruptive enough times it moves back to Brainstorming
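A sketch of these two transition rules, with placeholder thresholds (the real values would be tuned to your community’s size and culture):

```python
# Placeholder thresholds; tune these to your community.
VOTES_TO_ELABORATE = 10   # support needed to advance
FLAGS_TO_DEMOTE = 3       # reports needed to move backward

def next_stage(idea):
    """Apply the local transition rules to an Idea in Competition."""
    if idea.flags >= FLAGS_TO_DEMOTE:
        return "brainstorming"  # flagged as offensive/disruptive: move back
    if idea.votes >= VOTES_TO_ELABORATE:
        return "elaboration"    # enough votes: move forward
    return "competition"        # otherwise the Idea stays where it is
```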

D) Define who can see what at each stage

For example, only I would be able to see my Idea until I advance it for Competition. Once this occurs, only my Organization would be able to see and vote on it until it reaches a particular threshold (or is approved by the Organization Leader).

This type of rule set encourages two things. First, it enables edge-condition “long tail” idea creators to participate. Second, it makes department heads feel safer encouraging their employees to ideate and collaborate.
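A sketch of stage-based visibility, assuming each member record carries an org attribute (my assumption for illustration, not part of the outline above):

```python
def can_see(idea, member):
    """Visibility widens as an Idea advances through the stages."""
    if idea.stage == "brainstorming":
        return member == idea.author          # private to its creator
    if idea.stage == "competition":
        return member.org == idea.author.org  # visible within the organization
    return True                               # later stages: visible to all
```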

E) Define who can do what to an idea at each stage

For example—

  • Only I may be able to edit my Idea in Brainstorming
  • Only my department Colleagues (i.e., my friends) may be able to add or remove Components of an Idea in Elaboration
  • Everyone may refine Idea Components in Editing

The first rule protects the individual and encourages Ideation. The second protects the Department, encouraging the Department Head to allow social media-based Ideation. The third protects the mission of the enterprise (and can even ensure regulatory compliance).
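Expressed as code, these three permission rules might look like the following sketch (again assuming the hypothetical org attribute introduced earlier; in a full system this check would be combined with the is_allowed() rule above):

```python
def can_act(idea, member, action):
    """Per-stage permissions mirroring the three rules above."""
    if idea.stage == "brainstorming":
        return member == idea.author          # only the author may edit
    if idea.stage == "elaboration":
        return member.org == idea.author.org  # department colleagues only
    if idea.stage == "editing":
        return action == "refine_component"   # everyone may refine, no more
    return False
```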

These rules are just a brainstorm to start

These Rules are only Ideas at the Brainstorming stage. They require a full cycle of collaboration to see which win out and which do not. (After all, defining these as the rules for social media and collaboration would be Top-Down thinking.)

Social Networks for Business Tip #9: Create a SAFE Environment

I have found ten common tips that apply irrespective of what your enterprise does, what your market is, or what technology platform you are using. This is my ninth tip in this series. There will be 10 posts in total, each with a particular theme. These are intended to be read in the order presented, as they build upon each other…


Too Many Communities are Not Safe

I don’t mean to be an alarmist, but too many enterprise (i.e., mission-focused) communities are simply not safe. I routinely look at newly launched Enterprise 2.0 and Government 2.0 communities and immediately spot holes that I could easily compromise to do any of the following, within minutes or hours:

  • Hijack the community’s core mission and message with distracting, embarrassing or even detrimental content
  • Shift the community’s focus or value through manipulated rating and voting
  • Discourage or even harass contributing members from continuing to engage with the community
  • Capture personal information for anything from masquerading as members or stealing their identities to using private information for personal gain or exploitation

Of course, I would never do this. However, I am always happy to evaluate communities and share my insights on their vulnerabilities to make them safer (as this ultimately helps the entire movement to use social media to foster engagement, collaboration and outreach).

Four MUST-HAVE Tools for a Safe Community

Any community should be created with four “tools” (really four key design and administration attributes) to be safe. While these are “nice-to-haves” for recreational communities, they are absolutely essential for mission-focused ones.

1. Authentication-based Attribution

Authentication is the process of verifying the identities of members of your community when they visit. Attribution is the process of matching every contribution (from rating and voting to content creation and comments) to a member. When you combine the two, you know which members are contributing what (and they know this as well). This simple action drives wholesale changes in behavior:

  • Members are more likely to contribute valuable content. (They are also far less likely to create damaging content.)
  • Members will be more polite to each other (as their interactions are no longer hidden by anonymity). This will foster a much more constructive dialog (ultimately creating more value for all).
  • Your community manager is now able to recognize and reward constructive members—and penalize the opposite (see some of the other tools below to do this).

You do not necessarily have to publicize attribution to all members (this is important when you want to encourage comments without fear of being ostracized by others, as is critical in many Government 2.0 communities). Simply attributing members’ contributions will result in the above behavioral benefits.
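A minimal sketch of attribution in practice (the session object and its fields are hypothetical, standing in for whatever authentication layer you use):

```python
contributions = []  # in a real system, a database table

def contribute(session, kind, body):
    """Refuse anonymous input; record who contributed what."""
    if not session.is_authenticated:  # hypothetical auth check
        raise PermissionError("Sign in before contributing")
    contributions.append({
        "member": session.member_id,  # attribution: contribution -> member
        "kind": kind,                 # e.g. "comment", "vote", "post"
        "body": body,
    })
```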

2. Privacy Controls

People will not join your community (or contribute) if they are afraid that their privacy will be violated (by you or other members). As such, you should follow the Golden Rule of Social Networking Privacy:

Keep all profile-related information private for any given person unless the member tells you otherwise.

When you do this, you build trust with your members by enabling them to maintain control of their identities. While this is highly valuable in any network, it is often a requirement for statutory compliance in communities that support regulated industries (see my prior post for more details on this).
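A sketch of the Golden Rule as a default-deny setting (the field names are illustrative):

```python
DEFAULT_VISIBILITY = "private"  # the Golden Rule: private unless told otherwise

class Profile:
    def __init__(self, member_id):
        self.member_id = member_id
        # Every profile field starts private; nothing leaks by default.
        self.visibility = {field: DEFAULT_VISIBILITY
                           for field in ("name", "email", "location")}

    def share(self, field, audience):
        """Only an explicit member action widens visibility."""
        self.visibility[field] = audience  # e.g. "members" or "public"
```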

If you don’t believe this, look at how its use has affected the growth of consumer social communities. For all the complaints about the arcane nature of Facebook’s privacy controls, they are still some of the strongest out there. In addition, Facebook (at least initially) followed the Golden Rule of Social Networking Privacy for its members. As a result, it was a safe environment for people to join. This is reflected in Facebook’s dominance (when compared to other recreational communities) not only in total membership size, but also in participation by people 25 and older (i.e., people with a higher interest in maintaining privacy).

3. Member-based Content Flagging

One of the key purposes in creating a business-focused social community in the first place is to tap the input and creative thought of your customers, employees and partners. You should not limit this engagement to simply getting input and insight from your members; you should extend it to enable them to police the community themselves. This requires you to put several items in place:

  1. Hooks on every piece of member-generated content that enable members to “flag” and report content of concern for review by your community manager
  2. View rules that automatically hide content that has been deemed of concern by a sufficient number of distinct members (here is where attribution again comes into play) in a given period of time
  3. Automated workflows and administration tools to enable community managers to review and act upon reported content (see Tool #4 below)
[Image: example of a member reporting copyrighted material]

You can optionally decide to hide any content that a member has flagged as offensive from that given member (preventing further offense as the member engages with your community). The first company I saw do this was AOL, which enabled its members to effectively “stop listening to” offensive chat room members without infringing on their freedom of speech.
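As a sketch of how the automatic hide rule might work, assuming flags are attributed (so each member counts only once) and timestamped:

```python
import time

HIDE_THRESHOLD = 5           # distinct members; a placeholder value
WINDOW_SECONDS = 24 * 3600   # rolling one-day window; also a placeholder

def should_auto_hide(flags):
    """flags: list of (member_id, timestamp) pairs for one piece of content.

    Hide the content when enough distinct members flag it within the
    window, pending review by the community manager."""
    cutoff = time.time() - WINDOW_SECONDS
    recent_members = {member for (member, ts) in flags if ts >= cutoff}
    return len(recent_members) >= HIDE_THRESHOLD
```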

Letting members police themselves provides many benefits:

  • You empower your members, strengthening their trust and engagement
  • You get free 24×7 support for moderation: if a 14-year-old publishes offensive content at 2 a.m., other members may detect it and force its suspension before your community manager even comes in the next morning
  • You tap the “collective intelligence” of your members to steer your community in a direction that is more welcoming to all.

4. Moderation Console

This is the tool that pulls everything together. The moderation console is where your community leaders will actually manage your community. To enable them to provide members a safe community, it must provide the following functionality:

  1. Promotion of members and their content. This is intrinsic to rewarding good members and featuring them as examples to others.
  2. Removal of bad or offensive content. Without this, you cannot project the message and mission of your community
  3. Management of which members can publish content immediately and which must have their content reviewed by a community leader before publication
  4. Banning or blocking of members who violate your terms of service. This is a key tool for protecting your community from being hijacked. (However, banning provides no safety if you do not require members to authenticate and attribute themselves before adding content.)
  5. Automated review of content reported as offensive (so you can respond to actions members have taken to police the community)
  6. Full editorial privileges to correct content that contains inaccuracies, false claims or simple typos and to remove offensive or copyright-infringing media. (Depending on your terms of service, your community leaders may directly publish these changes or send them back to authoring members for review.)

The moderation console builds upon the three other tools to enable you to provide an environment that is safe for your enterprise, its mission and the members of your business community.
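A skeletal sketch of what such a console’s interface might look like, with one method stub per capability (all names are hypothetical):

```python
class ModerationConsole:
    """Method stubs only; each maps to one capability listed above."""
    def promote(self, content): ...                   # 1. feature good work
    def remove(self, content): ...                    # 2. take down bad content
    def set_requires_review(self, member, flag): ...  # 3. pre-moderation
    def ban(self, member): ...                        # 4. block ToS violators
    def review_queue(self): ...                       # 5. member-reported content
    def edit(self, content, changes): ...             # 6. correct or redact
```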

Is Your Social Network Safe?

Does your community have all the tools to make it safe? If not, it is simply a matter of time until something happens (and a question of degree as to how extensive it will be).