Facebook vice president of integrity Guy Rosen wrote in a blog post Sunday that the prevalence of hate speech on the platform had dropped by 50 percent over the past three years, and that "a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress" was false.
"We don't want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it," Rosen wrote. "What these documents demonstrate is that our integrity work is a multi-year journey. While we'll never be perfect, our teams continually work to develop our systems, identify issues and build solutions."
The post appeared to be a response to a Sunday article in the Wall Street Journal, which said the Facebook employees tasked with keeping offensive content off the platform don't believe the company is able to reliably screen for it.
The WSJ report states that internal documents show that two years ago, Facebook reduced the time that human reviewers focused on hate speech complaints, and made other adjustments that reduced the number of complaints. That in turn helped create the appearance that Facebook's artificial intelligence had been more successful in enforcing the company's rules than it actually was, according to the WSJ.
A team of Facebook employees found in March that the company's automated systems were removing posts that generated between 3 and 5 percent of the views of hate speech on the social platform, and less than 1 percent of all content that violated its rules against violence and incitement, the WSJ reported.
But Rosen argued that focusing on content removals alone was "the wrong way to look at how we fight hate speech." He said the technology to remove hate speech is just one method Facebook uses to fight it. "We need to be confident that something is hate speech before we remove it," Rosen said.
Instead, he said, the company believes that focusing on the prevalence of hate speech people actually see on the platform, and how it reduces that prevalence using a variety of tools, is a more important measure. He claimed that for every 10,000 views of a piece of content on Facebook, there were five views of hate speech. "Prevalence tells us what violating content people see because we missed it," Rosen wrote. "It's how we most objectively evaluate our progress, as it provides the most complete picture."
But the internal documents obtained by the WSJ showed that some significant pieces of content were able to evade Facebook's detection, including videos of car crashes that showed people with graphic injuries, and violent threats against trans children.
The WSJ has produced a series of reports about Facebook based on internal documents provided by whistleblower Frances Haugen. She testified before Congress that the company was aware of the negative impact its Instagram platform could have on teenagers. Facebook has disputed the reporting based on the internal documents.
https://www.theverge.com/2021/10/17/22731214/facebook-disputes-report-artificial-intelligence-hate-speech-violence | Facebook disputes report that its AI can't detect hate speech or violence consistently