Misinformation about vaccines is widely recognized as a motivator for vaccine hesitancy and anti-vax conspiracy theories. Both attitudes could hamper COVID-19 vaccine rollouts across the country, and the government is very aware of the risk: Ottawa plans to invest $64 million in education campaigns to fight vaccine hesitancy and misinformation.

Misinformation can range from unwarranted suspicions about what vaccines are made of to claims that vaccines cause infertility. Social media platforms are a major source of this misinformation, and the companies that run them know it.

On March 1, Twitter introduced a new labelling policy to alert users to misinformation, along with a strike system that locks users out of the app if they repeatedly violate the company’s COVID-19 policy. Facebook and Instagram had already announced a blanket ban on vaccine misinformation the month before.

Vaccine misinformation on social media predates the pandemic. In 2016, news spread on social media about an illegal distribution network in China’s Shandong province that had circulated unrefrigerated or expired vaccines; surveyed parents’ willingness to vaccinate their children subsequently dropped by 43.7 per cent. Most of the people surveyed had learned about the story exclusively through social media.

How social media platforms shape beliefs and attitudes

To understand the roots of the vaccine misinformation problem, one has to understand how social media algorithms recommend content to users in the first place.  

Social media allows anyone to share information. This is its primary strength, but it becomes a weakness when that information is unchecked, unverified, or unedited. Social media feeds can become catalysts for misinformation and for distrust of public officials. They have the power to change people’s minds on many subjects, largely by suggesting the same ideas over and over.

Algorithms on Facebook and Twitter push the accounts that users interact with most to the top of their feeds. As posts or tweets become more popular, they are amplified and spread to more users. When those posts confirm biases users already hold, misinformation can spread. For example, someone who is on the fence about vaccine safety and efficacy might interact with a few posts questioning vaccine efficacy, and the algorithm will then serve them even more posts like those.
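To make this feedback loop concrete, here is a small, hypothetical sketch of an engagement-ranked feed written in Python. The scoring formula, field names, and weights are invented for illustration and are not drawn from any platform’s actual system.

```python
# Toy sketch of engagement-based feed ranking (illustrative only, not any
# platform's real code). Posts that are popular overall, and posts on topics
# the user has already interacted with, score higher and float to the top.

from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    topic: str
    likes: int = 0
    shares: int = 0

@dataclass
class User:
    user_id: str
    # Count of past interactions per topic, e.g. {"vaccine-skeptic": 3}
    topic_interactions: dict = field(default_factory=dict)

def score(post: Post, user: User) -> float:
    # Popularity term: widely liked and shared posts are amplified for everyone.
    popularity = post.likes + 2 * post.shares
    # Personalization term: topics the user already engages with rank higher,
    # which is how a few clicks on vaccine-skeptic posts beget more of them.
    affinity = 1 + user.topic_interactions.get(post.topic, 0)
    return popularity * affinity

def rank_feed(posts: list, user: User) -> list:
    # Highest-scoring posts appear first in the user's feed.
    return sorted(posts, key=lambda p: score(p, user), reverse=True)
```

In this toy model, a user who has already clicked on three vaccine-skeptic posts sees new posts on that topic scored four times higher than a fresh account would, so each interaction makes the next encounter more likely.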

Misinformation researchers Claire Wardle and Eric Singerman wrote in the British Medical Journal that while Facebook, Twitter, and Google have “stated that they will take more action against false and misleading information,” it is the personal stories and anecdotes on their platforms, which the companies do not control, that are potentially detrimental to users’ collective understanding of vaccine safety, necessity, and efficacy.

The duo also highlights the complexity of the situation: removing posts can be seen as censorship and a violation of freedom of speech, yet there is still an argument for platforms taking down misinformation entirely.

Closer to home, Deena Abul-Fottouh, an assistant professor in the Faculty of Information, researches the impacts social media networks have on their users. A recent paper she co-wrote with researchers from U of T and Ryerson University analyzes how YouTube handles vaccine misinformation. 

The YouTube algorithm is built on homophily, the principle that “like-minded individuals… tend to act in a similar way”: it pushes content that a user already finds interesting or important onto other users judged to have similar tastes. According to the study, this creates a filter bubble, “which occurs when a recommender system makes assumptions of user preferences based on prior collected information about that user, making it less likely that the user would be exposed to diverse perspectives.”
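The filter-bubble dynamic the study describes can be sketched with a tiny nearest-neighbour recommender, again in Python. The similarity measure, the data, and the video names are all made up for the example and do not reflect YouTube’s actual system.

```python
# Toy sketch of a homophily-based recommender (illustrative only). Users with
# similar watch histories are treated as "like-minded", and each user is shown
# what their nearest neighbour watched, so a user inside an anti-vaccine
# cluster keeps being recommended more of the same.

def jaccard(a: set, b: set) -> float:
    # Overlap between two users' watch histories (0 = nothing shared).
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user: str, histories: dict, k: int = 1) -> set:
    # Find the k most similar users: the "like-minded" neighbours.
    neighbours = sorted(
        (u for u in histories if u != user),
        key=lambda u: jaccard(histories[user], histories[u]),
        reverse=True,
    )[:k]
    # Suggest whatever the neighbours watched that this user has not.
    suggested = set()
    for n in neighbours:
        suggested |= histories[n] - histories[user]
    return suggested

histories = {
    "alice": {"vax-myth-1", "vax-myth-2"},
    "bob": {"vax-myth-1", "vax-myth-3"},
    "carol": {"vax-explainer-1", "vax-explainer-2"},
}

# Alice's nearest neighbour is Bob, so she is recommended "vax-myth-3" rather
# than Carol's pro-vaccine explainers: a filter bubble in miniature.
print(recommend("alice", histories))
```

Because recommendations are drawn only from the most similar users, this model never surfaces Carol’s pro-vaccine videos to Alice, which is exactly the narrowing of perspectives the researchers call a filter bubble.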

How are social media companies responding to misinformation? 

Facebook and Twitter began taking steps to prevent the spread of health misinformation in 2018. These were small measures, such as adding educational pop-ups and suppressing false claims deemed threatening. Meanwhile, Pinterest changed its search so that the term “vaccines” would only return information from reliable sources such as the World Health Organization.

However, social media companies remain under growing pressure from governments, the public, and health authorities to change their policies around public health. Under its new guidelines, Facebook has been removing posts that contain false information about the vaccines and adding labels to posts that need clarification.

Wardle and Singerman describe these measures as positive but insufficient: they tackle individual pieces of misinformation rather than the broader climate of suspicion and fear that misinformation generates. Their article concludes, “What’s required is more innovative, agile responses that go beyond the simple questions of whether to simply remove, demote, or label.”

YouTube has also made changes to its policies and is now more likely to recommend pro-vaccine videos. But Abul-Fottouh and her colleagues wrote that the “filter bubble” effect is still prevalent and that those who engage with anti-vaccine content will be on the receiving end of more anti-vaccine content.