If you don't want to make the decision yourself, then the default will be whatever the majority in your region wants

Feb 17, 2017 10:20 GMT

In a 6,000-word manifesto, Mark Zuckerberg lays out his view of the future and how he wants to further empower people while pursuing his main goal for the social network: building communities that are supportive, safe, civically engaged, informed, and inclusive.

One of the points Zuckerberg addresses is that everyone is different and holds different views on the same topics, especially those of a more sensitive nature.

With that in mind, Zuckerberg revealed that the Community Standards policy is going to go through a massive shift. Until now, the rules have been the same for everyone, which has led to missteps such as the censorship of historical photos, content wrongly removed only to be reinstated hours later after media backlash. Now, more customization options are coming.

"The idea is to give everyone in the community options for how they would like to set the content policy for themselves. Where is your line on nudity? On violence? On graphic content? On profanity? What you decide will be your personal settings. We will periodically ask you these questions to increase participation, and so you don’t need to dig around to find them. For those who don’t make a decision, the default will be whatever the majority of people in your region selected, like a referendum. Of course you will always be free to update your personal settings anytime," Zuckerberg said.

This seems to be a far better approach, since everyone views the world differently, whether they share a house, a city, or a country, or live on opposite sides of the globe. Everyone is different, and Facebook finally seems to embrace this.

While Zuckerberg's letter managed to answer many questions, it also raised others. When it comes to these content control settings, will teenagers get the same power over their accounts, or will they be locked to a default setting? We all know where giving them full control would lead, and we're sure parents everywhere are less than pleased with that idea.

More AI to help out

There's a catch here, however. In order to classify potentially objectionable content, Facebook will rely on artificial intelligence, which already generates about a third of all content flags. Over time, the Facebook chief hopes, the AI will learn to make more nuanced distinctions.

"Artificial intelligence can help provide a better approach. We are researching systems that can look at photos and videos to flag content our team should review. This is still very early in development, but we have started to have it look at some content, and it already generates about one-third of all reports to the team that reviews content for our community," the Facebook CEO writes.

Zuckerberg admits that it will take many years to fully develop these systems, but the company is already exploring ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda. The hope, he said, is that Facebook will be able to remove anyone trying to use the social network to recruit for a terrorist organization.