Earlier this summer, reports surfaced that possibly automated pro-Trump Twitter accounts based in the United States were using hashtags to interfere in Canada’s upcoming federal election.
These alleged bots — broadly defined as non-human actors created to mimic human behaviour online — can exacerbate the existing problem of disinformation and ‘fake news.’
While Twitter has denied any large-scale disinformation campaigns, others have suggested that manipulation attempts are simply a reality of today’s social media landscape.
Amid the proliferation of false information online, how can users spot bots in their feeds?
Perhaps the most infamous case of social media election interference is the Russian online disinformation campaign, which the Mueller report alleges contributed to the election of Donald Trump. However, as campaigning heats up for Canada’s federal election on October 21, U of T researchers have been looking into how automated social media accounts could be generating and spreading digital disinformation at home.
Dr. Alexei Abrahams, a research fellow at The Citizen Lab at the Munk School of Global Affairs & Public Policy, has assisted researcher Dr. Marc Owen Jones in exploring the contentious issue. By examining 34,000 tweets posted between September 3 and 5 of this year, Jones found that 15 per cent of the approximately 4,896 accounts using #TrudeauMustGo were linked to American far-right politics. According to Jones, the behaviour of these accounts was consistent with that of political bots or orchestrated ‘trolls.’
In July, the National Observer reported on similar bot interference after #TrudeauMustGo became a trending topic on Canadian Twitter. In this instance, 31,600 tweets posted between July 16–17 were analyzed, with some accounts displaying “indicators of inauthentic activity.”
In an email to The Varsity, Abrahams confirmed that he and Jones were collecting data, but maintained that the “Canadian elections are not a major target for inauthentic, coordinated behavior.”
Abrahams discussed the potential consequences of disinformation online in a recent interview with CTV News. “You reach a place, when you’re exposed to so much misinformation, that you’re agnostic toward any sort of information,” he said.
“It ultimately leads to a sort of withdrawal from political life and from the activity of inquiring, because you just become frustrated and skeptical, then ultimately disenchanted.”
While much of the conversation around automated social media accounts and their contribution to new concerns surrounding ‘fake news’ involves the United States and the United Kingdom, there have been multiple documented cases of attempted election interference in Canada.
In 2017, university professors Fenwick McKelvey and Elizabeth Dubois released a study on the role of bots in the Canadian media landscape. The study found that Canada has not critically engaged with the role of bots in its democratic processes.
Citing the 2015 federal election campaign, McKelvey and Dubois illustrated how frequent automated tweets using the #cdnpoli hashtag amplified anti-Stephen Harper sentiment.
However, the researchers also highlighted the potential of bots for positive political engagement, including automated accounts created to increase government transparency.
More recently, Global Affairs Canada shared a report by the Rapid Response Mechanism — a G7 response coordination group — outlining how “coordinated inauthentic [online] behaviour” was present during Alberta’s 2019 provincial election. While the report notes that the inauthentic behaviour did not seriously interfere in the election, the existence of coordinated disinformation has some questioning the power of bots in democratic processes.
Other reports have suggested that bot activity amplified the tweets of Doug Ford, now Premier of Ontario, during his provincial election campaign last year.
How to spot and prevent disinformation
Dr. Brett Caraway, an assistant professor at U of T’s Institute of Communication, Culture, Information & Technology, discussed the pressing concerns of false reporting in democratic institutions in an interview with The Varsity.
“When you have bots or fake news outlets, any sort of party that is interested in influencing a political outcome in an election, it creates some very real level of confusion over facts,” he said. “And that’s the part that I think is so dangerous to a healthy thriving democracy.”
When asked how users could protect themselves from being exposed to or perpetuating disinformation, Caraway outlined several measures. Users should question anonymous sources, examine URLs for proper sourcing, identify the dates on articles, read beyond headlines, check multiple sources, and put in effort when reading and sharing content.
Broadly, however, he believes that the government should take more measures to promote media literacy because it is “just as important as learning to read and write at this stage.”
According to Caraway, media literacy education should focus on three components: how to find authoritative information, how to value different kinds of information, and how to meaningfully participate in political discourse online.
“All of us are in the position of being broadcasters today,” he said. “And being in that position of a broadcaster comes with responsibility and obligation to engage in ethical political discourse.”
Disclosure: Kaitlyn Simpson previously served as Volume 139 Managing Online Editor of The Varsity, and currently serves on the Board of Directors of Varsity Publications Inc.