The company is the target of criticism over how it treats some of the content on the site

May 29, 2013 12:19 GMT  ·  By

Facebook has responded to criticism over its tolerance of hate or harmful speech. Several groups have been ramping up pressure on the company to deal with the problem.

While Facebook does have terms of service that forbid certain types of speech, critics believe it has been failing to enforce them properly.

Not only that, but Facebook is actually profiting from the content by serving ads next to it, something advertisers aren't too keen on either.

In the recent campaign, several women's rights groups focused in particular on violence against women as portrayed on the social platform.

Facebook responded first by clarifying its approach to this type of content. The company only bans speech that constitutes a credible threat or is targeted at a specific individual; otherwise, even offensive content is allowed.

However, it did admit that it hasn't been living up to its own standards and that it is working on improving its response.

"In recent days, it has become clear that our systems to identify and remove hate speech have failed to work as effectively as we would like, particularly around issues of gender-based hate. In some cases, content is not being removed as quickly as we want," Facebook explained.

"In other cases, content that should be removed has not been or has been evaluated using outdated criteria. We have been working over the past several months to improve our systems to respond to reports of violations, but the guidelines used by these systems have failed to capture all the content that violates our standards. We need to do better – and we will," it added.

Facebook did provide some specifics on the areas it wants to improve. The company is reviewing its guidelines, particularly those concerning hate speech.

Facebook is also improving the training of the team that reviews content flagged as hateful by users, and it is bringing in several outside groups to assist with that training.

The company is also developing a system to hold people who post offensive jokes or other offensive content that nonetheless falls within the community guidelines more accountable, by requiring them to use their real identities. Granted, that's already the case on Facebook, at least in theory.