Social media gatekeepers

Social media is now an integral part of almost everyone’s life. People use these platforms to communicate with each other, exchange messages, comment on posts, and share photos. Unfortunately, social networks are also exploited by cybercriminals to spread controversial and often inflammatory information, and modern digital technologies attract the attention of drug traffickers, terrorists, pedophiles, and human traffickers. Social media companies are working diligently to eradicate potentially harmful content, and an entire industry has grown up around this work, employing the very real gatekeepers of the Internet. These gatekeepers are crucial to maintaining the integrity of digital spaces: they work tirelessly behind the scenes, often under significant pressure, to protect users and create a safer online community for everyone. Let’s get to know them and understand the important role they play.

Internet Guardians: Who are they?

Today there are many different social networks, and on each of them one can come across inappropriate, dangerous and illegal content. On Facebook alone, users upload more than 350 million photos every day, along with millions of animations, videos and text messages. Any item in that vast stream may contain malicious material that violates the law.
When social networking sites first appeared, volunteers took on the role of protecting users from risks. Moderators scrutinised online information, filtered it and removed inappropriate content. They were later replaced by professional Internet advocates who monitored the network 24 hours a day, seven days a week.

A new industry was born at the intersection of crisis management for advertising and communications, public opinion analytics, and online security.

The industry built around protecting social media from dangerous content is expanding almost daily. This is mainly due to the rapid development of the Internet itself, the significant increase in the amount of content posted by users, and the move of many companies from offline activity to an online social media presence. As businesses transition to digital spaces, safeguarding users from harmful content has become a critical priority.

Statistics cited by Hemanshu Nigam, MySpace’s former chief security officer, suggest that between 250,000 and 300,000 people worldwide are involved in social media monitoring today. Roughly a million more are responsible for protecting personal information and online security. Nigam also notes that these figures are conservative and constantly changing.


What do Internet gatekeepers do?

Emma Monks is head of moderation and security at Crisp Thinking, based in Leeds, UK. Today’s social media gatekeepers, she says, are direct descendants of those moderators who pioneered social networking and served as both editors and community owners.

The team Monks leads brings together leading experts in the social media risk protection industry. According to her, this work used to be done mostly by volunteers: it was a kind of hobby, and members of the moderator community decided whether or not to remove certain content based solely on their own judgement. Over time, however, the landscape has changed dramatically. With the rise of online platforms and the growing reach of social media, the need for professional oversight has become more pressing. Today the work is more structured and regulated, with comprehensive guidelines and standards in place to ensure consistency and fairness. The expertise of these professionals is now indispensable in managing online reputations and mitigating the risks associated with user-generated content, a shift that reflects a growing recognition of how complex moderating online spaces is and how valuable specialised knowledge in this field has become.

Now the field of social media moderation is staffed by professionals rather than amateurs. Crisp pairs its analysts with sophisticated computer algorithms that search for potentially inappropriate and dangerous content.
Experts in social media analytics and risk assessment scour the websites and social media pages of popular companies every day, looking for dangerous posts containing illegal information.
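
The article does not describe how such algorithms work internally. As a rough illustration only, a first automated pass often combines simple keyword or pattern rules with escalation to a human analyst. The sketch below is hypothetical: the rule list, the threat categories and the `flag_post` helper are invented for this example and do not represent Crisp’s actual system.

```python
import re
from dataclasses import dataclass

# Hypothetical threat categories, loosely echoing the risk types mentioned
# in the article (user-safety threats, reputational risk). Not a real taxonomy.
RULES = {
    "violent_threat": re.compile(r"\b(bomb|kill|attack)\b", re.IGNORECASE),
    "reputational":   re.compile(r"\b(boycott|scam|fraud)\b", re.IGNORECASE),
}

@dataclass
class Flag:
    post_id: str
    category: str
    excerpt: str

def flag_post(post_id: str, text: str) -> list[Flag]:
    """First-pass rule check; anything flagged is escalated to a human analyst."""
    flags = []
    for category, pattern in RULES.items():
        if pattern.search(text):
            flags.append(Flag(post_id, category, text[:80]))
    return flags

if __name__ == "__main__":
    for f in flag_post("p42", "There is a bomb at the airport gate"):
        print(f"escalate to analyst: {f.category} in post {f.post_id}")
```

In practice such keyword rules only pre-filter the stream of posts; as the article stresses, trained analysts make the final call on what is removed or reported.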

Virtual cleaners

Virtual cleaners, or content managers, operate on the principle that the presence of consumers and brands on social media should be trouble-free. According to Crisp chief executive Adam Hildreth, compliance matters first and foremost to the companies themselves and spares them a great deal of trouble. Where offline security is the job of the police, online security is handled by Crisp’s own Internet enforcers.

Hildreth has been working in social media since he was 14. It was then that he developed the social networking site Dubit Limited, which went on to become the most visited teen site in the UK. As Dubit grew in popularity, the risk of inappropriate content increased, along with fears of child molesters who might seek out victims on social media. Dubit staff uncovered other threats to the site’s visitors as well, and the creators’ natural response was to set up Crisp in 2006 under Hildreth’s leadership.

Over the years Crisp has expanded its reach significantly. Today the company monitors social media for issues and threats ranging from aggressive animal rights activists posting threats on the websites of fur garment manufacturers to bomb threats against aircraft or airports posted on an airline’s Facebook page.

Adam Hildreth says his company’s customer base includes about 200 international brands, whose enormous volume of content is reviewed by 200 professional analysts every month. When risks arise or inappropriate content appears, the analysts fix the problem themselves (by removing the post from the brand’s page, for example).

Crisp’s social media content analysts are professionally trained to identify the various types of risk, including reputational risk from negative publicity and risks to user safety, such as threats of physical reprisal.

Once a risk is identified, analysts apply these criteria to the content, classifying it by threat type. Their tasks also include notifying customers of emerging risks and taking comprehensive measures to eliminate unwanted content (deleting it, or reporting illegal material to network administrators or law enforcement agencies).

Experience not necessary

Emma Monks notes that prior experience is not necessary for social media analysis. People applying for the job need only be observant, able to quickly identify different types of risk, and able to apply the assessment criteria consistently to social media content. Consistency, according to Monks, is what ensures a high-quality customer experience and contributes to a healthy, safe online environment.

The social media analyst must be impartial, objective and stress-resistant, as the content can include child pornography, dismembered human bodies and other shocking material. Optimism and communication skills are also useful in analytical work.

“Many modern companies need quality protection against risks in social networks. It is simply impossible to exist in today’s online space without such protection,” says Hemanshu Nigam.