Facebook has disabled 583 million fake accounts in the past three months

Entrance to Facebook's Menlo Park office

837 million pieces of spam were removed in Q1 2018, all of which were found and flagged by Facebook's systems before anyone reported them.

Responding to calls for transparency after the Cambridge Analytica data privacy scandal, Facebook yesterday said those closures came on top of blocking millions of attempts to create fake accounts every day.

Facebook said it removed 2.5 million pieces of content deemed unacceptable hate speech during the first three months of this year, up from 1.6 million during the previous quarter.

Elsewhere, Facebook removed 21 million pieces of content classified as adult nudity or sexual activity and took down or applied warning labels to almost 3.5 million pieces of violent content during the quarter.

In addition to removing these fake accounts, the company acted on 21 million pieces of nudity and sexual activity, 3.5 million posts that displayed violent content, 2.5 million examples of hate speech and 1.9 million pieces of terrorist content. If Facebook cracks down on bad content, as some analysts predict it will, it is unlikely to lose users or advertisers; advertising accounts for 98% of its annual revenue.

The social network's global scale - and the extensive efforts it undertakes to keep the platform from descending into chaos - were outlined Tuesday in its first-ever transparency report. In April, Facebook published its internal guidelines on how it decides to remove posts that include hate speech, violence, nudity, terrorism and more.

"We have a lot of work still to do to prevent abuse". Facebook released its Community Standards Enforcement Preliminary Report on Tuesday, providing a look at the social network's methods for tracking content that violates its standards, how it responds to those violations, and how much content the company has recently removed.

It admitted, however, that 3% to 4% of its accounts are fake. The report also indicates Facebook is having trouble detecting hate speech, and becomes aware of most of it only when users report the problem.

Hate speech is checked by human review teams rather than by automated technology.

Facebook does not fully know why people are posting more graphic violence but believes continued fighting in Syria may have been one reason, said Alex Schultz, Facebook's vice president of data analytics.

The report also covers fake accounts, an issue that has drawn more attention in recent months after it was revealed that Russian agents used fake accounts to buy ads in an attempt to influence the 2016 elections.

Facebook, the world's largest social media firm, has never previously released detailed data about the kinds of posts it takes down for violating its rules.
