Facebook admits 4% of accounts were fake

Facebook removed more than 20 million pieces of adult nudity or pornography in three months

Facebook's latest transparency move is showing you how much objectionable content it removes

Most recently, the scandal involving digital consultancy Cambridge Analytica, which allegedly improperly accessed the data of up to 87 million Facebook users, put the company's content moderation into the spotlight.

The company took down 837 million pieces of spam in Q1 2018, almost all of which was flagged before any users reported it.

Guy Rosen, Facebook's vice president of product management, said the company had substantially increased its efforts over the past 18 months to flag and remove inappropriate content.

The company removed, or placed a warning screen in front of, 3.4 million pieces of graphically violent content in the first quarter, almost triple the 1.2 million a quarter earlier, according to the report.

Of the 2.5 million hate speech posts removed, only 38 percent were flagged by Facebook's technology before users reported them. Compare that with the 95.8 percent of nudity and 99.5 percent of terrorist propaganda that Facebook purged automatically.
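To put those proactive-detection rates in concrete terms, here is a minimal sketch using the figures quoted above (the 2.5 million total and the 38 percent rate come from the report; the function and variable names are illustrative, not Facebook's):

```python
# Split removed content into machine-flagged vs. user-reported pieces,
# given a total removal count and a proactive-detection rate.
def proactive_counts(total_removed, proactive_rate):
    flagged_first = round(total_removed * proactive_rate)
    user_reported = total_removed - flagged_first
    return flagged_first, user_reported

# Hate speech in Q1 2018: 2.5 million removals, 38% flagged before any report.
flagged, reported = proactive_counts(2_500_000, 0.38)
print(flagged, reported)  # 950000 1550000 -> most removals still begin with a user report
```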

The report also explains some of the reasons for large swings in the number of violations found between Q4 and Q1; these are usually external factors, or advances in the technology used to detect objectionable content.

Facebook said that for every 10,000 content views, an average of 22 to 27 contained graphic violence, up from 16 to 19 in the previous quarter, a rise it attributed to the growing volume of graphic content being shared on Facebook. But the report also indicates Facebook is having trouble detecting hate speech, and only becomes aware of a majority of it when users report the problem.

The first of what will be quarterly reports on standards enforcement should be as notable to investors as the company's quarterly earnings reports. The company estimates that between 0.22 percent and 0.27 percent of content views violated Facebook's standards for graphic violence in the first quarter of 2018.
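That prevalence estimate and the views-per-10,000 figure reported earlier are the same measurement expressed two ways; a minimal sketch of the conversion (the 0.22 to 0.27 percent range comes from the report, the helper name is illustrative):

```python
# Convert a prevalence percentage into the "per 10,000 content views"
# form used elsewhere in the report.
def per_ten_thousand(prevalence_pct):
    # e.g. 0.22% of views = 0.0022 * 10,000 = 22 views per 10,000
    return round(prevalence_pct / 100 * 10_000, 2)

print(per_ten_thousand(0.22), per_ten_thousand(0.27))  # 22.0 27.0
```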

Though Facebook extolled its forcefulness in removing content, the average user may not notice any change.

"We aim to reduce violations to the point that our community doesn't regularly experience them", Rosen and vice president of data analytics Alex Schultz write in the report, adding: "While not always ideal, this combination helps us find and flag potentially violating content at scale before many people see or report it".

However, it declined to say how many minors - legal users who are between the ages of 13 and 17 - saw the offending content.

Spam: Facebook says it took action on 837 million pieces of spam content in Q1, up 15% from 727 million in Q4.

Facebook noted in the report that, "Hate speech content often requires detailed scrutiny by our trained reviewers to understand context and decide whether the material violates standards".

Facebook banned 583 million fake accounts in the first three months of 2018, the social network has revealed. "Our metrics can vary widely for fake accounts acted on", the report notes, "driven by new cyberattacks and the variability of our detection technology's ability to find and flag them". But a recent report from The Washington Post found that Facebook's facial recognition technology may be limited in how effectively it can catch fake accounts: the tool doesn't yet scan a new photo against all of the images posted by the site's 2.2 billion users.
