Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, according to leaked documents obtained by The Associated Press, even as its own employees cast doubt over the company's motivations and interests.
From research as recent as March of this year to company memos dating back to 2019, the internal company documents on India highlight Facebook's constant struggle to quash abusive content on its platforms in the world's biggest democracy and the company's largest growth market. Communal and religious tensions in India have a history of boiling over on social media and stoking violence.
According to the documents, Facebook saw India as one of the most "at risk countries" in the world and identified both Hindi and Bengali as priorities for "automation on violating hostile speech." Yet Facebook didn't have enough local-language moderators or content flagging in place to stop misinformation that at times led to real-world violence.
This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen's legal counsel. The redacted versions were obtained by a consortium of news organizations, including the AP.
In a note titled "An Indian Test User's Descent into a Sea of Polarizing, Nationalistic Messages," a Facebook employee, whose name is redacted, said they were "shocked" by the content flooding the test user's news feed, which "has become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore."
One post included a man holding the bloodied head of another man covered in a Pakistani flag, with an Indian flag in the place of his head. The platform's "Popular Across Facebook" feature showed a slew of unverified content related to the retaliatory Indian strikes into Pakistan after the bombings, including an image of a napalm bomb from a video game clip that one of Facebook's fact-check partners had debunked.
"Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?" the researcher asked in their conclusion.