Once again, social media mogul Mark Zuckerberg has failed to deliver on his promise to make Facebook's content-filtering AI the "future of content moderation."

Facebook's own engineers and scientists have stated that an AI tasked with filtering out inappropriate content is unlikely to ever be effective, completely dismantling the vision of a hate-speech-free environment within five to ten years that Zuckerberg presented to a Senate committee back in 2018.

Internal estimates are that only about 2% of hate speech and related content actually gets moderated, and the platform's own developers say that, without a drastic change of course, that figure won't climb above 10-20% in the coming years.

The system has yet to be tuned to work even at a basic level, commonly mistaking cockfights for car accidents despite being fed thousands of clips of injured animals in the hope that it would learn to isolate footage containing illegal content.

Far worse content has slipped past the AI's filter over the years, including the livestream of the 2019 Christchurch shooting, which ended the lives of 51 innocent men, women, and children.

The clip stayed online for hours because the system, thrown off by the first-person camera perspective, had mistaken the footage for a video game.

The disparity in results grows as more and more inside info is presented to the public

Facebook spokesman Andy Stone tried to refute claims that the system would never become fully functional by calling the information "outdated," but yet another group of Facebook moderators and employees presented up-to-date figures showing the system still removes only 3-5% of hate speech and a miserable 0.6% of content that violates Facebook's policies overall.

This info surfaced even as the multi-billion-dollar company insisted that the AI was working properly and that paying for human moderation had become too expensive.

However, the numbers never add up when it comes to Facebook.

The social media giant has claimed that nearly 98% of the hate speech it takes down is caught before it even reaches users' news feeds to be flagged, a figure that describes how removed content was detected, not how much hate speech was removed overall.

Rashad Robinson, who helped organize an advertiser boycott of Facebook, stated that the company always refuses to show its work, presenting only the results without ever explaining how, or whether, it actually got there.

Facebook's VP of integrity, Guy Rosen, tried to downplay the AI's shortcomings by claiming that the real goal is bringing down the prevalence of hate speech, meaning how often users actually encounter it, as opposed to removing it entirely.

He then attributed the vast majority of the supposed drop in hate speech prevalence to the AI team's efforts, despite the huge disparity in the results presented.

Internal affairs and the aftermath

Internal documents leaked by former product manager turned whistleblower Frances Haugen show that human moderation costs around $104 million per year, while Facebook claims to have spent nearly $13 billion on safety and security overall.

After nearly two years at the company working on preventing election interference, Haugen left and testified before Congress, revealing that executives were aware of these failings but decided to turn a blind eye to them.