Hypertabs is The Varsity’s online features subsection about all things Internet. Our goal is to explore the depths of the online world and understand how it shapes our habits and affects our communities.


Content warning: descriptions of violence and sexual violence

In early January 2017, four young adults in Chicago allegedly kidnapped their mentally disabled classmate, chanting anti-Trump phrases while repeatedly assaulting him. There's another chilling element to this gruesome attack: it was all broadcast on Facebook Live. The roughly 30-minute video has since been viewed millions of times.

In April 2017, a man in Thailand killed his infant daughter on Facebook Live. It took the social networking giant roughly 24 hours to remove the clip.

Soon afterward, Facebook founder Mark Zuckerberg posted a statement on his timeline referencing the issue: "Over the last few weeks, we've seen people hurting themselves and others on Facebook — either live or in video posted later… It's heartbreaking, and I've been reflecting on how we can do better for our community."

Alongside these broadcast crimes comes a question of responsibility: at what point are social media platforms responsible for removing content? And at what point are we, the users, responsible for enabling and disseminating toxic behaviour?

Before Facebook Live

The first recorded instance of internet-based livestreaming was a live internet radio broadcast of a 1995 baseball game between the New York Yankees and the Seattle Mariners, streamed by a Seattle startup called Progressive Networks.

A year later came the first video livestream: Marc Scarpa, an early proponent of live participatory media, streamed the Tibetan Freedom Concert to 36,000 online viewers.

Commercial livestreaming tools expanded alongside social media platforms themselves; as public use of mobile phones and social media grew, so did livestreaming technology. Services like Ustream — a platform initially created to let US military personnel contact their families while overseas — and Meerkat gradually grew in popularity.

Periscope, launched in 2015, lets users livestream whatever they choose to their audience. Viewers can interact with a stream through comments and by giving 'hearts.'

On February 27, 2016, Marina Lonina, an Ohio teenager, livestreamed her friend's sexual assault on Periscope. Lonina's defence lawyer argued that she had "got caught up in the likes." Though she initially faced additional charges, including rape, Lonina eventually pleaded guilty to obstructing justice as part of a plea deal and received a nine-month sentence.

Since Periscope's debut, livestreaming services have spread significantly across social media. Twitter acquired Periscope in 2015, and livestreaming has since been adopted by a variety of other platforms, including Instagram, YouTube, and of course, Facebook.

On Motives

Criminals have always sought fame and attention for their crimes, but with platforms like Facebook Live and Periscope, they can broadcast their actions instantly. Where perpetrators once had to rely on traditional media outlets to report on their crimes, now all it takes is a retweet, a share, or a like.

With the added element of a livestream, criminals can now perform their crimes for an audience in real time. In an increasingly connected world, users should perhaps expect more of these performances.

Moving Forward

Recently, a trove of leaked documents revealed Facebook's policies on removing content from the site. The leak, published by The Guardian as the "Facebook Files," offers insight into what the platform will allow — like illustrated sexual activity and animal abuse — and what it will not, like graphic sexual activity. Many of these rules, however, are the result of user dissatisfaction and uproar.

The Chicago kidnapping and the Thailand murder both illustrate a powerful shift in the culture of livestreaming. With nearly a third of the world's population using Facebook, the social media site has promised to hire an additional 3,000 moderators for its Community Operations team — bringing the total to 7,500 moderators for its nearly two billion users.

It has been suggested that technology companies will eventually turn to automated tools to flag content for possible removal. Until then, it's up to users to flag inappropriate content, and up to human moderators to sift through it all and make judgement calls.
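That workflow — users flag, human moderators review — boils down to a priority queue. The following Python sketch is purely illustrative: the class names, threshold, and logic are assumptions for the sake of example, not a description of Facebook's actual tooling.

```python
import heapq
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    content: str
    flag_count: int = 0


class ModerationQueue:
    """Hypothetical flag-and-review pipeline: users flag posts,
    and human moderators review the most-flagged posts first."""

    def __init__(self, review_threshold: int = 3):
        # A post only reaches a moderator once enough users flag it.
        self.review_threshold = review_threshold
        self._posts: dict[str, Post] = {}
        self._heap: list[tuple[int, str]] = []  # (-flag_count, post_id)

    def flag(self, post: Post) -> None:
        """Record one user flag; queue the post for review at the threshold."""
        self._posts[post.post_id] = post
        post.flag_count += 1
        if post.flag_count >= self.review_threshold:
            heapq.heappush(self._heap, (-post.flag_count, post.post_id))

    def next_for_review(self) -> Post | None:
        """Hand the most-flagged pending post to a human moderator."""
        while self._heap:
            _, post_id = heapq.heappop(self._heap)
            post = self._posts.pop(post_id, None)
            if post is not None:
                return post  # the moderator makes the judgement call
        return None


# Usage: four flags push a post past the threshold and into review.
queue = ModerationQueue()
clip = Post("p1", "possibly violent livestream clip")
for _ in range(4):
    queue.flag(clip)
print(queue.next_for_review())  # moderator sees 'p1' first
```

Even in this toy version, the bottleneck is obvious: nothing is removed until enough people flag it and a human gets to it — which is exactly the gap that let the Thailand video stay up for a day.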

Given the rise in performative crime and livestreaming, it’s up to users to consider what content they want blocked — and at what cost.