Facebook found that a Russian firm used hundreds of social media influencers to run a coordinated smear campaign against Western Covid-19 vaccines.
Facebook removed hundreds of anti-vaccine accounts after finding them linked to a Russian advertising agency.
“We removed 65 Facebook accounts and 243 Instagram accounts from Russia that we linked to Fazze, a subsidiary of a UK-registered marketing firm, whose operations were primarily conducted from Russia,” the social media platform said in a report on Tuesday.
“Fazze is now banned from our platform. This cross-platform operation targeted audiences primarily in India, Latin America, and to a much lesser extent the United States. We found this network after reviewing public reporting about an off-platform portion of this activity.”
Labelling the operation a “disinformation laundromat,” the social media network said the fake accounts had spread misinformation about the Pfizer and AstraZeneca vaccines.
The move came after a BBC report last month revealed that influencers had been offered money to spread false information about vaccines.
In May 2021, when a number of countries, including India, the United States, and others in Latin America, were reportedly discussing the emergency authorisation of the Pfizer and AstraZeneca vaccines, an anti-vaccine campaign targeting the two jabs suddenly appeared. It even circulated a fake AstraZeneca document to sully the company’s reputation.
One influencer with more than 1.5 million subscribers on YouTube, Mirko Drotschmann, said he was asked to claim that the death rate among people who had received the Pfizer vaccine was almost three times that of the AstraZeneca jab.
Facebook stated this was the second wave of a coordinated disinformation campaign against Western vaccines, saying that the network had carried out another campaign in November last year. One of the more bizarre claims to come out of these campaigns was that the AstraZeneca vaccine could turn humans into chimpanzees.
A Change.org online campaign targeting these vaccines followed, generating a debate among those concerned about Covid-19 vaccine safety. The unfounded claims also appeared on Reddit and Medium. But despite hashtags and multi-platform efforts, the campaign failed to gain major traction.
“Some questions remain about aspects of this campaign — including who commissioned Fazze to run it — that would benefit from further research by the defender community,” the Facebook report said.
“Another question relates to how the ‘hacked and leaked’ document came into Fazze’s hands. As with any influence operation, understanding the motive behind leaks like these is key to putting the operation in context.”
Weeding out manipulation from the public debate
It’s not the first time Facebook has banned accounts and pages en masse. In July this year, the tech giant said it removed 79 Facebook accounts, 13 Pages, eight groups, and 19 Instagram accounts in Myanmar that targeted domestic audiences and were linked to individuals associated with the Myanmar military.
Over the past four years, the social network has been publishing its findings on what it calls Coordinated Inauthentic Behaviour (CIB).
The social network also aims to eliminate Foreign Government Interference (FGI) and to monitor whether suspicious accounts and pages reappear on the platform.
Although Facebook’s removal of accounts spreading vaccine disinformation was praised by health officials, the tech giant has often come under criticism for being selective in censoring or promoting content.
The social media platform announced plans to hold politicians accountable for hate speech, as much as anyone else, in a decision originally aimed at “preventing outside interference” in the US elections.
In one incident, Facebook refused to take down a widely viewed ad attacking US Representative Ilhan Omar with claims that she had links to Hamas, even though her office said it could lead to harassment and death threats. The ad offered no proof, and Omar has denied the claim.
The social media giant was also widely criticised for taking down millions of posts by Palestinians in the days after Israel attacked Gaza, killing 248 people, including 66 children. Users saw warnings saying their posts had violated hate speech rules but were offered no further explanation. Facebook later came up with an easy excuse, saying the mass removal of posts was caused by a technical glitch.