Tag Archives: privacy

Social Networks for Business Tip #9: Create a SAFE Environment

I have found ten common tips that apply irrespective of what your enterprise does, what your market is, or what technology platform you are using. This is the ninth tip in this series. There will be 10 posts in total, each with a particular theme. They are intended to be read in the order presented, as they build upon each other…


Too Many Communities are Not Safe

I don’t mean to be an alarmist, but too many enterprise (i.e., mission-focused) communities are simply not safe. I routinely look at newly launched Enterprise 2.0 and Government 2.0 communities and immediately spot holes that I could easily exploit to do any of the following, within minutes or hours:

  • Hijack the community’s core mission and message with distracting, embarrassing or even detrimental content
  • Shift the community’s focus or value through manipulated rating and voting
  • Discourage or even harass contributing members until they stop engaging with the community
  • Capture personal information for anything from masquerading as members or stealing their identities to using private information for personal gain or exploitation

Of course, I would never do this. However, I am always happy to evaluate communities and share my insights on their vulnerabilities to make them safer (as this ultimately helps the entire movement to use social media to foster engagement, collaboration and outreach).

Four MUST-HAVE Tools for a Safe Community

Any community should be created with four “tools” (really four key design and administration attributes) to be safe. While these are “nice-to-haves” for recreational communities, they are absolutely essential for mission-focused ones.

1. Authentication-based Attribution

Authentication is the process of verifying who the members of your community are when they visit. Attribution is the process of matching every contribution (from rating and voting to content creation and commenting) to a member. When you combine these together, you know which members are contributing what (and they know this as well). This simple action drives wholesale changes in behavior:

  • Members are more likely to contribute valuable content. (They are also far less likely to create damaging content.)
  • Members will be more polite to each other (as their interactions are no longer hidden by anonymity). This will foster a much more constructive dialog (ultimately creating more value for all).
  • Your community manager is now able to recognize and reward constructive members, and penalize the opposite (see some of the other tools below to do this).

You do not necessarily have to publicize attribution to all members (this matters when you want to encourage comments without fear of being ostracized by others, a common concern in many Government 2.0 communities). Simply attributing members’ contributions will produce the behavioral benefits above.
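The combination of authentication and attribution described above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`Community`, `register`, `contribute`), not any particular platform's API; the point is simply that no contribution is accepted without an authenticated member to attribute it to.

```python
# Minimal sketch of authentication-based attribution (hypothetical class and
# method names). Every contribution is tied to a registered, authenticated
# member before it is accepted.

class Community:
    def __init__(self):
        self.members = {}        # member_id -> display name
        self.contributions = []  # each entry attributes content to a member

    def register(self, member_id, display_name):
        # Stands in for a real registration/authentication flow.
        self.members[member_id] = display_name

    def contribute(self, member_id, content):
        # Reject anonymous contributions: attribution requires authentication.
        if member_id not in self.members:
            raise PermissionError("unauthenticated members cannot contribute")
        self.contributions.append({"member": member_id, "content": content})

community = Community()
community.register("alice", "Alice")
community.contribute("alice", "Great idea for the Q3 outreach plan")
```

Whether the attribution is shown publicly or kept visible only to community managers is a separate display decision, as noted above.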

2. Privacy Controls

People will not join your community (or contribute) if they are afraid that their privacy will be violated (by you or other members). As such, you should follow the Golden Rule of Social Networking Privacy:

Keep all profile-related information private for any given person unless the member tells you otherwise.

When you do this, you build trust with your members by enabling them to maintain control of their identities. While this is highly valuable in any network, it is often a requirement for statutory compliance in communities that support regulated industries (see my prior post for more details on this).
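The Golden Rule above translates into a simple default in code: every profile field starts private, and only an explicit member action makes it visible. The sketch below uses hypothetical names (`Profile`, `share`, `visible_to_others`); it shows the design principle, not a specific product's implementation.

```python
# Sketch of the Golden Rule of Social Networking Privacy (hypothetical names):
# every profile field is private unless the member explicitly opts in.

DEFAULT_VISIBILITY = "private"

class Profile:
    def __init__(self, **fields):
        # Each field starts private, regardless of how innocuous it looks.
        self.fields = {name: {"value": value, "visibility": DEFAULT_VISIBILITY}
                       for name, value in fields.items()}

    def share(self, field_name):
        # Only the member's explicit action makes a field visible to others.
        self.fields[field_name]["visibility"] = "public"

    def visible_to_others(self):
        return {name: f["value"] for name, f in self.fields.items()
                if f["visibility"] == "public"}

profile = Profile(display_name="jdoe", email="jdoe@example.com", city="Naples")
profile.share("display_name")   # the member opts in to sharing one field
```

The key design choice is that privacy is the default state, so forgetting to configure anything can never expose a member's information.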

If you don’t believe this, look at how privacy has affected the growth of consumer social communities. For all the complaints about the arcane nature of Facebook’s privacy controls, they are still some of the strongest out there. In addition, Facebook (at least initially) followed the Golden Rule of Social Networking Privacy for its members. As a result, it was a safe environment for people to join. This is reflected in Facebook’s dominance (when compared to other recreational communities) not only in total membership size, but also in participation by people 25 and older (i.e., people with a higher interest in maintaining privacy).

3. Member-based Content Flagging

One of the key purposes in creating a business-focused social community in the first place is to tap the input and creative thought of your customers, employees and partners. You should not limit this engagement to simply getting input and insight from your members; you should extend it to enable them to police the community themselves. This requires you to put several items in place:

  1. Hooks on every piece of member-generated content that enable members to “flag” and report content of concern for review by your community manager
  2. View rules that automatically hide content that has been deemed of concern by a sufficient number of distinct members (here is where attribution again comes into play) in a given period of time
  3. Automated workflows and administration tools to enable community managers to review and act upon reported content (see Tool #4 below)
Example of a Member Reporting Copyrighted Material

You can optionally decide to hide any content that a member has deemed offensive from that given member (preventing further offense as the member engages your community). The first company I saw do this was AOL, which enabled its members to effectively “stop listening to” offensive chat room members without infringing on their freedom of speech.
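The view rule in item 2 above — auto-hide once enough distinct members flag content within a time window — can be sketched as follows. The threshold, window, and class names here are hypothetical values for illustration; note how attribution prevents one member from triggering the rule alone.

```python
# Sketch of member-based content flagging (threshold and window values are
# hypothetical): content is auto-hidden for manager review once enough
# *distinct* members flag it within a given period.

import time

FLAG_THRESHOLD = 3          # distinct members needed to hide content
FLAG_WINDOW_SECONDS = 3600  # flags must arrive within this window

class FlaggedContent:
    def __init__(self, content):
        self.content = content
        self.flags = {}      # member_id -> timestamp; attribution means
                             # one member counts only once, however often
                             # they re-flag
        self.hidden = False

    def flag(self, member_id, now=None):
        now = now if now is not None else time.time()
        self.flags[member_id] = now
        recent = [t for t in self.flags.values()
                  if now - t <= FLAG_WINDOW_SECONDS]
        if len(recent) >= FLAG_THRESHOLD:
            self.hidden = True   # queued for community-manager review

post = FlaggedContent("questionable material")
for member in ["alice", "bob", "carol"]:
    post.flag(member, now=1000.0)
```

Because flags are keyed by member id, a single malicious member flagging repeatedly can never hide content on their own.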

Letting members police themselves provides many benefits:

  • You empower your members, strengthening their trust and engagement
  • You get free 24×7 support for moderation: if a 14-year-old publishes offensive content at 2 a.m., other members may detect and force its suspension before your community manager even comes in the next morning
  • You tap the “collective intelligence” of your members to steer your community in a direction that is more welcoming to all.

4. Moderation Console

This is the tool that pulls everything together. The moderation console is where your community leaders will actually manage your community. To enable them to provide members a safe community, it must provide the following functionality:

  1. Promotion of members and their content. This is intrinsic to rewarding good members and featuring them as examples to others.
  2. Removal of bad or offensive content. Without this, you cannot project the message and mission of your community.
  3. Management of which members can publish content immediately and which must have their content reviewed by a community leader before publication
  4. Banning or blocking of members who violate your terms of service. This is a key tool for protecting your community from being hijacked. (However, banning provides no safety if you do not require members to authenticate and attribute themselves before adding content.)
  5. Automated review of content reported as offensive (so you can respond to actions members have taken to police the community)
  6. Full editorial privileges to correct content that contains inaccuracies, false claims or simple typos, and to remove offensive or copyright-infringing media. (Depending on your terms of service, your community leaders may directly publish these changes or send them back to the authoring members for review.)
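Several of the console functions above (items 3 and 4 especially) can be sketched together. The class and field names below are hypothetical; the sketch shows how trusted members publish immediately, untrusted members enter a review queue, and banning only works because content is attributed to authenticated members.

```python
# Sketch of a moderation console's core actions (hypothetical data model):
# publish-vs-review management and banning, all keyed to attributed members.

class ModerationConsole:
    def __init__(self):
        self.banned = set()
        self.trusted = set()      # trusted members publish immediately
        self.review_queue = []    # content from others awaits review

    def submit(self, member_id, content):
        if member_id in self.banned:
            raise PermissionError("banned members cannot contribute")
        if member_id in self.trusted:
            return {"member": member_id, "content": content,
                    "status": "published"}
        item = {"member": member_id, "content": content, "status": "pending"}
        self.review_queue.append(item)
        return item

    def approve(self, item):
        # A community leader reviews pending content and publishes it.
        item["status"] = "published"

    def ban(self, member_id):
        # Banning is only meaningful because contributions are attributed
        # to authenticated members (Tool #1 above).
        self.banned.add(member_id)
```

Promotion, flagged-content review, and editorial corrections would be additional methods on the same console, following the same attributed-member pattern.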

The moderation console builds upon the three other tools to enable you to provide an environment that is safe for your enterprise, its mission and the members of your business community.

Is Your Social Network Safe?

Does your community have all the tools to make it safe? If not, it is simply a matter of time until something happens (and a question of degree as to how extensive it will be).

Health 2.0 Challenge: Managing UGC in the regulated environment

Update: I originally posted this on May 3, 2009. I updated it on July 26, 2009 to add advice in response to calls to action for Health 2.0 — the use of Web 2.0, Gov 2.0 and Enterprise 2.0 technologies to help improve medicine and health care. Its focus now outlines the major HHS and FDA regulations any Health 2.0 service provider will have to navigate to deliver a regulatory-compliant solution.

Why this focuses on the management of UGC

Open collaboration intrinsically involves the collection, moderation and management of user-generated content (UGC). In general, moderation of UGC is not a simple proposition. Moderation of UGC in a regulated space is even tougher – especially in the very highly regulated biotech, pharmaceutical and health care industries, where UGC can now include disclosure of personal health history or inadvertent reporting of adverse events. Given the sensitivity of any discussion of regulatory compliance, it is worth diverting a little of your attention to some disclaimers and background information:

  • I am not currently affiliated with any biotech, pharmaceutical or health care company. Nor am I affiliated with any PAC or PR firm supporting those industries. I am the Chief Information Officer for an enterprise social networking company, Neighborhood America.
  • Prior to this, I worked at Amgen (the world’s largest biotech). Most of my tenure there was in their Regulatory Affairs & Safety Operations organization, leading a program to scale closeout of clinical trial data and submission of Biologic and Drug Licensing Applications to the FDA (and its global counterparts)–a highly regulated process–through combined use of process re-design and Enterprise 2.0 technologies
  • Before this, I worked at AOL where I owned many systems subject to compliance with numerous financial regulations (especially Regulations E and FD, and Section 404 of the Sarbanes-Oxley Act)
  • Prior to AOL, I spent nearly seven years at Booz Allen Hamilton, Lockheed Martin and the US National Laboratory System, where I learned strict adherence to control of information at various classification levels.

I state this so you will understand that, while I am deeply experienced in compliant information management, I am not a doctor, an FDA or EMEA official, or a similarly certified compliance professional.

What regulations do I need to consider?

The range and depth of biotech, pharma and health care regulations are vast. They cover a wide range of areas, spanning how you manage clinical trials, manufacturing, sales and control of patient information. For this reason, it is absolutely critical to separate the social networking components of your Health 2.0 infrastructure from your other enterprise systems. This directly contradicts what some analysts are calling for in the evolution of enterprise social networking. However, if you do not do this, you will subject your social networking infrastructure to so many regulations that it will be impossible to manage it as an effective network AND maintain regulatory compliance. (My preferred method of separation is the publish/subscribe model—however, that is a subject for another blog post.)

With this understanding in mind, I am assuming—

  • You are using your social network to manage outreach to bring interested parties into the fold to inform them of where to get information, gather their ideas, priorities and interests, and connect them with other professionals with related interests and expertise and…
  • You are not using your social network to manage clinical trial subject data; drug, biologic or medical device manufacturing data; or safety data

If these are true you have two bodies of regulation to watch in particular:

  1. Title 21 CFR Part 11
  2. HIPAA Title II

In addition, you will need to ensure your social networking infrastructure enables mining and export of UGC to support your organization’s pharmacovigilance practices.

Another Disclaimer: Of course, you may have many other regulations to consider based on your unique company and its pipeline and products. I do not need to point out the need to engage your Compliance and Regulated Information Technology teams for a full and complete assessment of your risks and needs.

The impact of Title 21 CFR Part 11 on your social network

Title 21, Part 11 of the Code of Federal Regulations (CFR) deals with the FDA guidelines on electronic records and electronic signatures. In the social networking area this means you must do three things:

  1. Never delete: In general, it is bad practice to delete data. It is much better practice to set the status of data to “Inactive” or “Archived” so you can find it later (if needed as part of a legal or similar investigation). To assure Part 11 compliance, you will need to ensure your system does not delete data (and that your back office systems administration processes archive data prior to any removal as part of hardware tuning or decommissioning)
  2. Use secure, electronic signatures: Here is where user attribution of UGC is so very important. You cannot let unauthenticated users provide content. You must register and authenticate them first. When you register them, you must confirm their identity (e.g., confirm provided email addresses) and authenticate them with encrypted, strong passwords. You then must attribute all UGC to each authenticated user. (It would also not hurt to have SAFE review your registration and authentication approach.)
  3. Document that you do this: You will need to demonstrate that you have designed, built and tested a system that does the above. This includes documenting requirements, design, test cases and successful completion of those test cases. It also includes demonstration that your configuration management processes ensure that the code you have in production has completed full documentation of the above before going to production. (For software, this is known as Validation; for infrastructure, this is known as Qualification.)
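The "never delete" rule in item 1 is usually implemented as a soft delete: records change status rather than disappearing. The sketch below uses hypothetical names (`RecordStore`, `retire`) and statuses; it illustrates the pattern, not a validated Part 11 system, which would also need audit trails and the documentation described in item 3.

```python
# Sketch of the "never delete" rule (hypothetical names and statuses):
# records are marked Inactive rather than destroyed, so they remain
# retrievable for legal or regulatory investigation.

class RecordStore:
    def __init__(self):
        self.records = {}

    def add(self, record_id, content, author):
        # Content is stored with its attributed author (see item 2 above).
        self.records[record_id] = {"content": content, "author": author,
                                   "status": "Active"}

    def retire(self, record_id):
        # No hard delete: only the status changes.
        self.records[record_id]["status"] = "Inactive"

    def find(self, record_id):
        return self.records.get(record_id)

store = RecordStore()
store.add("rec-1", "community comment", author="alice")
store.retire("rec-1")   # the record survives, flagged Inactive
```

A production system would layer timestamps and an immutable audit trail on top of this status field, but the core design decision is the same: state changes, never deletions.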

The impact of HIPAA Title II on your social network

In general, the Health Insurance Portability and Accountability Act (HIPAA) protects the ability of workers and their families to retain access to health care when they switch employers or jurisdictions (i.e., when they move). Title II of HIPAA contains something called the Privacy Rule, which governs the use and disclosure of Protected Health Information (PHI). This is where social networks—even when they are not used to manage medical information—cross into HIPAA regulation.

Imagine you have a social networking site where patients are discussing places to go for cancer recovery support. On this site, a person starts to discuss their medical history. They list enough of their identity that anyone accessing the site can see that they (or a family member) have certain health conditions. This leads to an insurance company declining coverage to them or a family member when they change jobs due to “pre-existing conditions.” Now you potentially have Privacy Rule compliance risk.

However, you can easily guard against this, if you build the following elements into your enterprise social network:

  1. Make it a closed network. Your network needs to be more like Facebook (where you need to be a member to see UGC) than Twitter (where everything is open). In addition, you need to apply White List / Black List rules to enforce who can join the network (e.g., a pre-filtered list of doctors or patients and/or blocking of users from specific domains such as insurance companies).
  2. Strictly manage profile information. You need to help your members protect themselves by limiting profile information. Do not capture any PHI data fields. Strongly encourage Display Names that do not include real names or other identifiers (this includes either prohibiting Avatars or only allowing members to pick from a list of generic Avatar icons). Finally, encrypt all profile information (and – to assure Part 11 compliance – never delete past profile information).
  3. Moderate all UGC prior to publication. Yes, this slows down the dynamics of your network. However, it protects you and your patients. By moderating all UGC before publishing it, you can protect members from disclosing information that would make maintaining their privacy difficult or impossible to anyone reading their content.
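The white list / black list join rule in item 1 can be sketched in a few lines. The lists, domains, and function name below are hypothetical examples: a pre-approved member list grants access, while an entire blocked domain (e.g., an insurance company's) is rejected outright.

```python
# Sketch of closed-network join rules (lists and domains are hypothetical):
# a white list pre-approves members; a black list blocks whole domains.

WHITE_LIST = {"dr.smith@clinic.example", "patient42@mail.example"}
BLACKLISTED_DOMAINS = {"insurer.example"}   # e.g., insurance-company domains

def may_join(email):
    domain = email.split("@")[-1]
    if domain in BLACKLISTED_DOMAINS:
        return False                 # blocked regardless of the white list
    return email in WHITE_LIST       # only pre-approved members get in
```

Checking the black list first means a blocked domain can never be overridden by a stray white-list entry, which is the safer default for a regulated community.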

Additional support for pharmacovigilance

The WHO defines pharmacovigilance as “the pharmacological science relating to the detection, assessment, understanding and prevention of adverse effects, particularly long term and short term side effects of medicines.”

From a social networking perspective, this means you need to make provisions to handle situations where someone (inadvertently) reports an adverse effect (AE) via UGC. This could be a real-life AE or a fake AE provided by a malicious member. (Adhering to the six 21 CFR Part 11 and HIPAA Title II recommendations above significantly reduces the risk of malicious AE reporting.)

You should implement the following two items to ensure your social networking supports strong pharmacovigilance:

  1. Moderate all UGC prior to publication. If you are following the HIPAA recommendation above, you are already doing this. However, not only are you protecting patient privacy, you are also monitoring for reported AEs. This lets you both prevent publication of malicious reports and detect and direct AE data to your Safety Reporting Systems
  2. House all UGC in a true enterprise data warehouse. Pharmacovigilance does not simply span the processing of AE reports; it also includes the mining of information sources to detect safety signals. By pulling social networking UGC into an enterprise data warehouse and providing your safety monitoring team access to it, you give them a new channel to mine and monitor safety information.

While these two recommendations can “sound scary,” following them will let you exploit the social networking medium to create a stronger, timelier pharmacovigilance function and capability.
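A first-pass screen for AE language during pre-publication moderation can be sketched as below. The term list and routing labels are hypothetical and deliberately crude: real pharmacovigilance screening would rely on curated medical dictionaries and human safety reviewers, with a screen like this serving only to prioritize the moderation queue.

```python
# Sketch of pre-publication screening for adverse-event language.
# The term list and routing labels are hypothetical; a production system
# would use curated dictionaries and human review, not a keyword match.

AE_TERMS = {"side effect", "adverse", "reaction", "hospitalized"}

def screen_ugc(text):
    """Route UGC that may report an adverse event to the safety team;
    everything else goes to the normal moderation queue."""
    lowered = text.lower()
    if any(term in lowered for term in AE_TERMS):
        return "route_to_safety"
    return "publish_queue"
```

Routing suspect posts before publication satisfies both goals of recommendation 1: the content never reaches the public site, and the potential AE data still reaches the safety organization.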

Should I take the dive into social networking?

I can only imagine how many people are saying, “Social Networking in Biotech, Pharma and Health Care = Unwarranted Risks.” This is a natural reaction to the many challenges imposed by this new and dynamically expanding medium of interaction.

However, social networking is here to stay – not as the “next great technology” but as an expected medium for interacting with others. With these recommendations in mind, companies, associations and research organizations can tap this new medium to:

  • Foster greater collaboration on new products
  • Improve internal processes
  • Increase the effectiveness and efficiency of managing regulatory compliance
  • Enable doctors and patients to more easily access needed information
  • Increase the efficiency of health care delivery through innovation and collaboration
  • Strengthen post-marketing pharmacovigilance of their products

Of course, given the push for Health 2.0 and the agenda of the Obama Administration, you have heard all these arguments. You only need to search “#Health20” on Twitter to find the latest.