Since the birth of Web 2.0, there has been a proliferation of user-generated content (UGC) websites, as brands clamour to interact with customers in open and engaging dialogue. However, while these sites undoubtedly bring brands and their audiences closer together, companies must be mindful that overtly negative or inappropriate submissions from site users can cause lasting damage to a brand’s reputation.
Thankfully, measures can be implemented to prevent such submissions from ever reaching the online community and spoiling the positive user experience.
1. Craft your guidelines: Terms and conditions seldom make for a fun read, not least because they are strewn with complex legal jargon. If a user cannot understand them, how can they be expected to know what is and is not acceptable behaviour?
Begin by using a less intimidating title, such as ‘community guidelines’ and continue the user-friendly approach by drafting the guidelines in an easily digestible manner. Users will develop a sense of ownership for the site as the community grows, so don’t be afraid to involve them in any guideline modifications or extensions that you make. Open consultation with them will foster trust and deepen their relationship with your site and brand. Remember, users are far less likely to breach site guidelines if they understand them in the first place.
2. Build automated filters: Automating the moderation process using smart filters is the first step to thwarting offensive, litigious and hijack-marketing submissions. Therefore, make sure that the filters you put in place are versatile enough to capture a myriad of malicious submissions. Implementing multiple filters allows you to fight a war on several fronts, as they will block not only blacklisted words, but also entries containing URLs, duplicate submissions, or even entries from users with a poor prior history. However, while filters are a powerful weapon for battling undesirable submissions, they are not perfect. For example, offensive slang or expressions can be coined in an instant – filters cannot adjust themselves at the same speed.
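The layered approach above can be sketched as a simple filter pipeline. Everything here is illustrative: the word list, URL rule, duplicate store and history threshold are placeholder assumptions, not part of any real moderation product.

```python
import re

# Illustrative blacklist and duplicate store; a real system would load
# these from configuration and a persistent database.
BLACKLIST = {"spamword", "offensiveword"}
URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)
seen_submissions = set()

def passes_filters(text, user_rejection_rate=0.0):
    """Run each filter in turn; return (accepted, reason)."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    if words & BLACKLIST:
        return False, "blacklisted word"
    if URL_PATTERN.search(text):
        return False, "contains URL"
    normalised = " ".join(text.lower().split())
    if normalised in seen_submissions:
        return False, "duplicate submission"
    if user_rejection_rate > 0.5:  # prior-history filter (assumed threshold)
        return False, "poor user history"
    seen_submissions.add(normalised)
    return True, "ok"
```

Each filter fights on a separate front: the same clean text is accepted once and rejected as a duplicate the second time, while URL-bearing or blacklisted entries never get that far.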
3. Embrace your technology: As outlined above, automating the moderation process brings invaluable benefits. But sooner or later human moderators will need to review the content. This applies whether your website adopts a pre-moderation approach (whereby nothing goes live before specifically being approved) or a post-moderation approach (whereby content goes live immediately, to be reviewed later by a moderator who may then decide to accept, reject or edit it). Simple rule-based algorithms can help your moderators prioritise this review work.
For example, a system can be configured to flag any user making a certain number of submissions within a one-hour timeframe. Analysing user history can also be very helpful. If a user has made complaints about content on previous occasions and these complaints have been predominantly upheld by a moderator, it makes sense to shift that user's report to the top of the moderation queue. Keep an eye out, too, for heavily trafficked content, as it can indicate undesirable subject matter.
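Both rules can be sketched in a few lines. The hourly limit and the neutral score for unknown reporters are arbitrary assumptions chosen for illustration:

```python
import time
from collections import deque

class RateFlagger:
    """Flag a user who exceeds a submission count within a sliding window."""
    def __init__(self, limit=10, window_seconds=3600):  # assumed: 10 per hour
        self.limit = limit
        self.window = window_seconds
        self.history = {}  # user_id -> deque of submission timestamps

    def record(self, user_id, now=None):
        now = time.time() if now is None else now
        q = self.history.setdefault(user_id, deque())
        q.append(now)
        while q and now - q[0] > self.window:  # drop timestamps outside window
            q.popleft()
        return len(q) > self.limit  # True means "flag for moderator review"

def report_priority(upheld, total):
    """Rank a user's new report by how often their past reports were upheld."""
    if total == 0:
        return 0.5  # unknown reporters get a neutral priority
    return upheld / total
```

A moderation queue could then be sorted by `report_priority`, so reports from users whose complaints are predominantly upheld rise to the top.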
4. Enlist your users: One of your biggest allies in the fight against untoward submissions is your users themselves. Many are only too happy to assist in the moderation process. This not only lightens the moderators' load, but also entrenches users more deeply in the site.
With this in mind, make sure that you have different moderation tools available to appeal to different users. Some are willing to take a very active role and become volunteer moderators, while others can be encouraged to demonstrate diligence by clicking the ‘report this content’ link. Either way, ensure that the website includes multiple tiers of participation in order to capture any level of interest in the moderation process.
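One way to make those tiers count is to weight reports by the reporter's standing. The tier names, weights and threshold below are illustrative assumptions, not a prescribed scheme:

```python
# Illustrative weights: a trusted volunteer moderator's report carries
# more weight than a regular user's click on 'report this content'.
TIER_WEIGHT = {"volunteer_moderator": 3.0, "regular_user": 1.0}
REVIEW_THRESHOLD = 3.0  # assumed score at which content is queued for review

def should_queue_for_review(reports):
    """reports: list of (user_id, tier) tuples filed against one item."""
    score = sum(TIER_WEIGHT.get(tier, 1.0) for _, tier in reports)
    return score >= REVIEW_THRESHOLD
```

Under this sketch a single volunteer-moderator report sends content straight to review, while it takes several regular-user reports to do the same.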
It is also a very worthwhile exercise to engage directly with the most enthusiastic users by sending them a personal email. This promotes and rewards their loyalty and encourages them to remain committed to keeping ‘their’ site as positive and safe as possible.
5. Make moderation actions visible: One of the dangers of keeping moderation controls concealed is that it unwittingly extends an invitation to those intent on abusing the site. If the community is aware that all submissions are subject to review and must adhere to guidelines, they are far less likely to waste their time testing the boundaries in the first place.
It is also important to notify users when their submissions have been moderated so they are aware of what content has been accepted or declined. If a submission has been declined or edited, explain to them why, as they will be far less likely to make a similar submission in the future. Finally, remember that users make honest mistakes, so make sure they are given a chance to rectify a submission.
6. Moderation tools need love too: Take the time to develop well-designed tools and implement usability test plans. Moderators will spend many hours using the tools, so make sure that their experience of using them is a happy one.
Be sure to account for multiple workflows so that when more than one moderator is working, each moderator is aware of who is responsible for which part of the site. With the right design and preparation, moderation tools and staff can deliver even greater success to UGC projects while finding the right balance between user freedom and brand protection.
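A minimal sketch of that workflow idea is a claim-based queue: a moderator claims an item before reviewing it, so no two moderators duplicate the same work. This in-memory version is purely illustrative; a production tool would enforce claims with database locks or transactions:

```python
class ModerationQueue:
    """Each moderator claims an item before working on it, making
    responsibility visible and preventing duplicated review effort."""
    def __init__(self):
        self.claims = {}  # item_id -> moderator_id currently responsible

    def claim(self, item_id, moderator_id):
        if item_id in self.claims:
            return False  # already claimed by another moderator
        self.claims[item_id] = moderator_id
        return True

    def release(self, item_id, moderator_id):
        # Only the claiming moderator may release the item back to the pool.
        if self.claims.get(item_id) == moderator_id:
            del self.claims[item_id]
            return True
        return False
```

The claims dictionary doubles as a live view of who is responsible for which part of the site at any moment.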