Facebook is back in the crosshairs after a blistering New York Times article landed on Wednesday, but the social media giant is pointing out that it is making progress detecting hate speech, graphic violence and other violations of its rules, even before users see and report them.
Facebook said that during the April-to-September period, it doubled the amount of hate speech it detected proactively, compared with the previous six months.
The findings were spelled out Thursday in Facebook’s second semiannual report on enforcing community standards. The reports come as Facebook grapples with challenge after challenge, ranging from fake news to Facebook’s role in election interference, hate speech and incitement to violence in the U.S., Myanmar, India and elsewhere.
The company also said it disabled more than 1.5 billion fake accounts in the latest six-month period, compared with 1.3 billion during the previous six months. Facebook said most of the fake accounts it found were financially motivated, rather than aimed at misinformation. The company has nearly 2.3 billion users.
Facebook’s report comes a day after The New York Times published an extensive report on how Facebook has dealt with crisis after crisis over the past two years. The Times described Facebook’s strategy as “delay, deny and deflect.”
Facebook said Thursday it has cut ties with a Washington public relations firm, Definers, which the Times said Facebook hired to discredit opponents. Facebook CEO Mark Zuckerberg said during a call with reporters that he learned about the company’s relationship with Definers only when he read the Times report.
On community guidelines, Facebook also released metrics on issues such as child nudity and sexual exploitation, terrorist propaganda, bullying and spam. While it is disclosing how many violations it is catching, the company said it can’t always reliably measure how prevalent these things are on Facebook overall. For instance, while Facebook took action on 2 million instances of bullying in the July-September period, this does not mean there were only 2 million instances of bullying during this time.
Clifford Lampe, a professor of information at the University of Michigan, said it’s difficult for people to agree on what constitutes bullying or hate speech — so that makes it difficult, if not impossible, to teach artificial intelligence systems how to detect them.
Overall, though, Lampe said Facebook is making progress on rooting out hate, fake accounts and other objectionable content, but added that it could be doing more.
“Some of this is tempered by (the fact that) they are a publicly traded company,” he said. “Their primary mission isn’t to be good for society. It’s to make money. There are business concerns.”
Facebook also plans to set up an independent body by next year for people to appeal decisions to remove — or leave up — posts that may violate its rules. Appeals are currently handled internally.
Facebook employs thousands of people to review posts, photos, comments and videos for violations. Some things are also detected without humans, using artificial intelligence. Zuckerberg said creating an independent appeals body will prevent the concentration of “too much decision-making” within Facebook.
Zuckerberg also published an extensive Facebook post, which he labeled his blueprint for content governance and enforcement, laying out his strategy for keeping people safe on the company’s platforms. The Times investigation had revealed how the company attempted to deny and deflect blame for Russia’s manipulation of its platform.
“The past two years have shown that without sufficient safeguards, people will misuse these tools to interfere in elections, spread misinformation, and incite violence,” Zuckerberg wrote. “One of the most painful lessons I’ve learned is that when you connect two billion people, you will see all the beauty and ugliness of humanity.”
“This matters,” Zuckerberg wrote, “both for ensuring we’re not mistakenly stifling people’s voices or failing to keep people safe, but also for building a sense of legitimacy in the way we handle enforcement and community governance.”
Facebook has faced accusations of bias against conservatives — something it denies — as well as criticism that it does not go far enough in removing hateful content.