
Facebook Researchers Say Higher-Ups Ignored Their Findings on Instagram’s Racist Algorithm: Report

During a study in mid-2019, a team of Facebook employees found that proposed rules for Instagram's automated account removal system disproportionately flagged and banned Black users. When the team approached CEO Mark Zuckerberg and his upper echelon of cronies with this alarming information, they were purportedly ignored and told to halt any further research regarding racial bias in the company's moderation tools, NBC News reported Thursday.
That's according to eight current and former Facebook employees who spoke with the outlet on the condition of anonymity. Under these proposed rules, Black Instagram users were roughly 50% more likely than white users to see their accounts automatically disabled for ToS infractions like posting hate speech and bullying.
The issue evidently stemmed from an attempt by Facebook, which owns Instagram, to keep its automated moderation systems neutral by creating the algorithmic equivalent of "You know, I don't really see color."
The company's hate speech policy holds disparaging remarks against privileged groups (i.e. white people and men) to the same scrutiny as disparaging remarks against marginalized groups (i.e. Black people and women). In practice, this meant that the company's proactive content moderation tools detected hate speech directed at white people at a much higher rate than it did hate speech directed at Black people, in large part because it was flagging comments widely considered innocuous. For example, the phrase "white people are trash" isn't anywhere near as offensive as the phrase "Black people are trash," and if you disagree, I hate to be the one to tell you this, but you might be a racist.
"The world treats Black people differently from white people," one employee told NBC. "If we are treating everyone the same way, we are already making choices on the wrong side of history."
Another employee who posted about the research on an internal forum said the findings indicated that Facebook's automated tools "disproportionately defend white men." Per the outlet:
According to a chart posted internally in July 2019 and leaked to NBC News, Facebook proactively took down a higher proportion of hate speech against white people than was reported by users, indicating that users didn't find it offensive enough to report, but Facebook deleted it anyway. In contrast, the same tools took down a lower proportion of hate speech targeting marginalized groups, including Black, Jewish, and transgender users, than was reported by users, indicating that these attacks were considered to be offensive but Facebook's automated tools weren't detecting them.
These proposed rules never saw the light of day, as Instagram purportedly ended up implementing a revised version of this automated moderation tool. However, employees told NBC they were barred from testing it for racial bias after it'd been tweaked.
In response to the report, Facebook claimed that the researchers' original methodology was flawed, though the company didn't deny that it had issued a moratorium on investigating possible racial bias in its moderation tools. Alex Schultz, Facebook's VP of growth and analytics, cited ethics and methodology concerns for the decision in an interview with NBC.
The company added that it's currently researching better ways to test for racial bias in its products, which falls in line with Facebook's announcement earlier this week that it's assembling new teams to study potential racial impacts on its platforms.
"We are actively investigating how to measure and analyze internet products along race and ethnic lines responsibly and in partnership with other companies," Facebook spokeswoman Carolyn Glanville said in a statement to multiple outlets.
In his interview with NBC, Schultz added that racial bias on Facebook's platforms is "a very charged topic," and that the company has "massively increased our investment" in investigating algorithmic bias and understanding its effects on moderating hate speech.
Given Facebook's penchant for hosting racist, transphobic, sexist, and generally god-awful content, the fact that some algorithmic magic behind the scenes might be helping to stamp out marginalized voices is hardly surprising. Disappointing, for sure, but not surprising.
[NBC]
