Rawls’ ghost in the machine

Why a just society may require socialist robots

Karl Marx anticipated the rise of artificial intelligence (AI) over 170 years ago. Buried within a dense, unfinished manuscript that he began in the 1850s, but that remained unpublished until 1939, is his prediction that “the means of labour passes through different metamorphoses, whose culmination is the… automatic system of machinery… set in motion by an automaton, a moving power that moves itself”: a complete and utter change in both the form and function of the means of production.

AI can be loosely defined as the capacity of a machine to imitate human intelligence, whether in intensive pattern recognition, decision-making, or visual perception. It is set to reshape our society in ways we can barely imagine.

A 2016 study from the University of Oxford suggests that AI could eliminate about half of the jobs currently performed by human labour in the United States. In countries like Ethiopia, where agriculture composes a substantial part of the economy, that number rises to 85 per cent.

This massive replacement of human labour by AI will radically change the structuring of our economic institutions. Some, including Elon Musk and Jack Ma, have argued that AI poses a cataclysmic threat to humankind — one that must be curbed.

Unlike these billionaires, who may have very good reasons to resist change to a system that has worked quite well for them, I believe that the introduction of AI gives us an unprecedented opportunity to radically transform our economic and social systems for the better. Working not against AI, but with it, will allow us to build a truly just society out of the upcoming ‘fourth industrial revolution.’

So what is justice, anyway?

Navigating this crossroads is no easy task, so I opt to employ a guide to assist in building a just society. John Rawls, widely considered one of the most influential philosophers of the twentieth century, presented a landmark model of a just society in his aptly titled book, A Theory of Justice. In it, Rawls argues that justice is, in essence, fairness.

The proposed system is elegantly simple: it includes only two basic principles. The first is the “Liberty Principle.” In a just society, each person should have an equal set of basic liberties, familiar to those of us who live in liberal societies: freedom of speech, assembly, conscience, and thought; the right to vote for and hold public office; and so on. However, as Marx noted, while individuals can be religiously or politically free in a liberal state, they may not be able to act on those freedoms due to material constraints.

This brings Rawls to his second rule, the “Difference Principle.” This principle is egalitarian in nature: it prohibits all positions of inequality that are not in principle open to all (think equality of opportunity in employment) and ensures that any inequalities benefit the least advantaged.

Having this roadmap in mind can help us assess the two questions that I think are at the heart of the debate over AI in the context of a just society: should AI be embraced, and who should own it?

Rise of the robots

Many are convinced that we should be worried about AI. The headlines write themselves. A Mother Jones article warned against job replacement, saying that “mass unemployment is a lot closer than we feared.” A piece in The Guardian discussed a “hollowing out” of the global middle class. Entrepreneur published a piece criticizing the European Union for its lax guidelines on AI.

Calls for halting the progress and implementation of AI are understandable, given that changes to labour have always been feared under capitalism. The iconic example of this is the Luddites, who destroyed textile machinery in the early nineteenth century in protest of the replacement of traditional labour.

However, the Luddites weren’t necessarily protesting the machines as such. I imagine they would have been perfectly content with the implementation of textile machinery, which made their jobs less strenuous, safer, and more efficient, had their quality of life not suffered in the process.

The main concern people have with machinery, past or present, is not whether it will replace human beings; it is about income security, but we will get to that later. Setting aside the loss of human labour, there is enormous potential for AI to create a more just society by improving upon our civil liberties, including our rights to free expression, assembly, a fair trial, voting, life, and privacy.

It may, however, be tempting to ask whether liberal freedoms can be improved at all. After all, it seems that in an ideal liberal-democratic society, everyone already possesses these freedoms. There are no bars stopping racial or sexual minorities from voting, no Ministry of Public Security to thwart peaceful protests or democratic dissent, no ‘disappearance’ of dissenting journalists.

It is true that we enjoy an overwhelmingly inclusive set of civil liberties. However, the problem lies in the fact that their potential for application is unequal across the board. For example, a homeless man may very well have equal freedom of expression as the editor-in-chief of The Globe and Mail, but they do not have an equal chance to employ this freedom.

This pattern is much the same with other basic civil liberties: someone who lives in a rural community has a much harder time getting to a polling station than urban dwellers do; a person who holds major office can disseminate their voice much more effectively than a regular Joe; a Christian has an easier time practicing their religion than a Muslim. Measured against Rawls’ theory, this “scheme” of civil liberties is unequal across the board, and is therefore, at least to some degree, unjust.

AI could act as the great equalizer for civil liberties. It can make the world more accessible and freer for all. Its capacity for advanced, logical problem solving can help overcome many barriers to civil liberties.

A self-directed and corporeal algorithm can, for example, help organize a protest, direct people to polling stations (or, better yet, allow them to vote directly from their personal devices), disseminate voices to a large and diverse audience, and help ensure a fair legal process by representing clients.

It can allow for more accessible and integrative methods for religious practice, enhance our healthcare system, and protect our personal information and data from malicious software. A capable, accessible, and sophisticated intelligence has almost unlimited potential for the encouragement of positive liberties.

However, I would like to caution that current use of AI can also corrupt, rather than amplify, civil liberties. Predictive algorithms are used by police to disrupt peaceful protests, screen social media posts, or censor political opposition. Nevertheless, I believe that there are two sides to the AI story, and AI has the potential to ensure that our society is all the more just.

Some may scoff and point out that the picture laid above would be simply impossible to realistically implement. Why, they may ask, would the companies that create these algorithms be interested in enhancing personal liberties?

Indeed, it may be very well in their interest, and the interest of their shareholders, to limit our personal freedoms. They may do this by keeping us in echo chambers, blocking us from consuming content that is critical of companies or governments, or by selling personal information gathered by the algorithms to maximize profits.

How do we trust that AI will not exploit our civil liberties in the name of profit? The answer lies in the nature of ownership and control of the technology.

All ‘bout the money

Here, we arrive at the more pressing question: who should own AI? I mentioned previously that the main concern about machinery is not its nature; rather, it lies with a potential decrease in quality of life. This is where Rawls’ second principle comes in handy.

There is a lot of money to be made in AI. The McKinsey Global Institute estimates that AI has the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries and to boost global productivity by around 1.2 per cent.

That may not sound like much, but the introduction of the steam engine and the spread of information technology only increased productivity by 0.3 and 0.6 per cent, respectively.

This outpour of wealth, however, would not satisfy Rawls’ second principle of justice under our current capitalist system. There would inevitably be an unequal distribution of the benefits and burdens of AI, one which would make the rich richer and the poor poorer.

A fully automated capitalist state would have two main outcomes, both of which would produce financial inequalities that are neither open to all nor beneficial to the least advantaged. First, there would be a marked loss of jobs.

Many middle-class jobs are classified as routine abstract work, such as accounting or customer service. Given that AI could easily perform most desk jobs, those slots would eventually disappear. Additionally, routine manual work, such as package delivery or agriculture, would also be gone.

The two fields that machines would presumably not replace are non-routine manual work, such as childcare or healthcare, and non-routine abstract work, such as architecture, coding, or researching.

This lack of jobs would be coupled with low money circulation. The number of transactions would fall, and money would instead be used for investments or savings. Given that only a select few would be privileged enough to hold their jobs, this would result in the gross accumulation of wealth in their hands because only they would have a disposable income.

This would, in turn, contribute to economic shrinkage, a terrifying concept for capitalists since it means that resources would be harder to acquire. The shrinkage would, in turn, result in more automation, since the capitalist class would benefit most from it, and more jobs would be lost. On and on the cycle would go, until the system presumably collapses.

The ‘just’ invisible hand

An automated capitalist state cannot uphold any of Rawls’ principles of justice. First, civil liberties could not be upheld in an equitable manner.

Outsized economic power often translates into political power. Lobbying groups for the capitalist class would inevitably spring up, and with them a host of candidates who latch onto the rich for their own benefit.

White-collar crime is rarely prosecuted as severely as blue-collar crime, and the rich are often given much more leeway in the political and judicial system. The fair value of civil liberties would be controlled by the handful who managed to keep their jobs, leaving large swaths of the population with little to no political power.

This heightened and inequitable distribution of civic freedoms would render such a society unjust. Decisions of societal importance would be made by the market, rather than by democratic process, to a much greater degree than today.

Jobs would also not be equally accessible to all. Today, we can see that inequalities drive inaccessibility. This effect would be multiplied by the extreme inequalities caused by automated capitalism. The recent college admissions scandal shows this on the micro scale. Admission to elite American colleges should, in theory, be open to all on equal terms. The fact that students from affluent families could blatantly pay for spots is obviously unjust.

On the macro level, accumulated wealth allows systemic inequalities, where high-paying jobs are reserved for a limited demographic. Upward social mobility has been on the decline in the United States since the 1980s — coinciding with the introduction of Reaganomics — with middle- and low-income children struggling for upward mobility while children of the upper class inherit fortunes.

This perpetual cycle, through which high-income jobs are reserved for a chosen few since others could not hope to acquire the needed skills, would only be amplified by the dramatic inequalities that would stem from AI.

Lastly, any benefits that AI might generate would disproportionately go to the privileged. The means of production are currently held by a small group of individuals, and the loss of jobs and money circulation caused by AI would only accentuate this.

The Economic Policy Institute reported that in 2017, the average CEO at one of the 350 largest firms in the United States saw a compensation increase of 17.6 per cent from the previous year, bringing average CEO pay to $18.9 million USD.

This jump could only be deemed just if it benefits the least advantaged in society. However, just by looking at employee compensation, we can deduce that this is not the case. While the rich got richer, average employee compensation remained essentially flat, rising by only 0.3 per cent.

This is not just a one-off year — this pattern has been observed in the capitalist system for some time. The gap between executives and workers has widened for decades, with the CEO-to-worker compensation ratio jumping to 312-to-one from the 20-to-one ratio in 1965.

Clearly, the inequalities are not benefiting those who are least advantaged, which points to a systemic flaw. AI would increase the wealth of the few, but the current system will ensure that the wealth does not trickle down.

An automated capitalist state, then, would not fulfil any of Rawls’ principles of justice, and would therefore be unjust.

If not capitalism, then what?

I’ll be blunt. I think that the only way that a more just, fair society can emerge from this would be the implementation of a fully automated democratic socialist state.

Democratic socialism is defined as a system by which all major means of production are publicly owned. This means that as a constitutional right, all citizens own and administer the assets they require in order to be cooperating members of society. In an automated society, AI used in major companies would be administered and profited from by citizens.

Many may point out that historically, socialism has not always reached its full potential. In countries such as China, Venezuela, or Cuba, the administrators of the system have amassed vast amounts of wealth through illegitimate means, which has led many skeptics to conclude that human nature itself rules out the possibility of true socialism.

However, in an automated world, human nature may not need to play a part in the distribution of the means of production.

In an article for The Washington Post, Feng Xiang wrote that, “If AI rationally allocates resources through big data analysis, and if robust feedback loops can supplant the imperfections of ‘the invisible hand’ while fairly sharing the vast wealth it creates, a planned economy that actually works could at last be achievable.”

Moreover, a nationalized AI system would mean that the elimination of wage labour would not pose a problem, unlike under a capitalist system. Human capital would be largely replaced by resource and information capital. Whereas this would spell the inevitable collapse of capitalism, it would help a socialist system flourish.

The work and wealth that would stem from AI would be owned and administered by society at large, ensuring that the fruits of production would benefit all. This means that the first clause of Rawls’ second principle would be upheld. Since there would be no “offices” to be held, as human labour would be essentially eliminated, there would be no inequalities in attaining different offices.

This may sound like a frightening idea. After all, we are used to dedicating a majority of our lives toward a profession, whether out of a sense of duty, need, or passion. Eliminating that requirement could make life seem strangely barren.

However, I think that if we really internalize what the abolition of mandatory labour would mean, we can come to an understanding that we would, in fact, be much happier people.

Lack of work would make us much freer than we are today. We would have time to finally get to the book we’ve been meaning to write, to spend more time eating a nice brunch with friends, or to finally master juggling.

A world with little to no required work would mean that the work we do perform would be all the more meaningful. It would be done solely for our own personal enjoyment, or for the benefit of others, not for a perceived duty.

Formal positions would become defunct, replaced only by equally accessible activities that create no tangible inequalities, thus fulfilling the first clause of the second principle.

In an automated socialist state, natural resources and machinery, the main means of production, would belong to and be administered democratically by the public.

Each citizen would then have a guaranteed source of income, which would mean that any inequalities that arise out of luck, such as in the capitalist case, would be eliminated. The remaining amount of administrative labour would be equally divided among the population as part of their civic duties.

Thus, the benefits and burdens of AI would be equally distributed. Collaboration, rather than competition, would rule the economic market, with an increase of output meaning an equal rise in profits for all. Any fluctuations in capital would equally impact all, fulfilling Rawls’ second principle.

There are, of course, many other benefits of AI that I didn’t touch upon. Better healthcare, decision-making, research, forecasting, disaster response, energy distribution, and much more can emerge from the implementation of AI.

Most of all though, AI provides us with the unprecedented opportunity to transform our society into one that is more just.

Following Rawls’ classic theory of justice, we can safely conclude that implementing a fully automated democratic socialist state could create a truly just society. Some may argue that doing so would be like building castles in the sky — however, we’ll never know until we lay down the first brick.

Two “cis, straight Indian dudes” talk 2062

Hari Kondabolu and Max FineDay joke, dread, hope for the future of work

Victoria College’s Isabel Bader Theatre hosted “2062: Beyond a Cartoon Future for Millennial Workers” on January 23. Ausma Malik, U of T alum and the Director of Social Engagement at the Atkinson Foundation, moderated a “seriously funny” conversation between US-based comedian Hari Kondabolu and Canadian Roots Exchange Executive Director and nêhiyaw activist Max FineDay. 

The two discussed the future of work in the context of millennial anxiety about access to decent and fair employment. The event took an innovative format: it was recorded as a live podcast with audience participation.

The Jetsons — a cartoon series that aired in 1962 and is set 100 years later — framed the conversation around the kind of dystopian future that we might want to avoid: technology and robots as a stand-in for disposable labour that used to be performed by poorly paid racialized workers; departure from an Earth beyond saving from environmental disaster; and isolated, non-communal living in space.

On success

Kondabolu and FineDay demonstrated remarkable comedic chemistry as they brought their individual racial identities and professions — one a South Asian-American in the entertainment industry and former human rights organizer, the other an Indigenous youth leader at the helm of a national non-profit charity — to the discussion. 

No matter how successful they have become in their careers, though, Kondabolu and FineDay are kept grounded by their families. It can be difficult to convey to them how big they’ve made it. “I’m a stand-up comedian, which is hard to explain to relatives in India, first of all, and then I’m a podcaster, which you don’t really mention to relatives in India at all,” explained Kondabolu. “I’m like a shitty Renaissance man.”

On complexity

Kondabolu described improvements in the stand-up comedy industry over the last decade, namely the acknowledgement that his community — which he specifies as “straight, cis, Indian dudes” — are multi-layered, complex human beings. Poking fun at how Indigenous peoples and South Asians are ‘identified’ in the same way, FineDay went on to joke about how he and Kondabolu had “really bonded over being cis, straight Indians” backstage.

At the same time, other subcommunities, whether women or trans folks, continue to struggle with barriers. The “complexity of identity” for Kondabolu is to recognize that people of colour are “not all victims.” 

“Some of us are assholes,” he said. “The oppressed can also oppress others…You can be a victim of racism and then still be homophobic and transphobic.”

On commodification 

For FineDay, the increasing recognition of his community can be problematic. It is often non-Indigenous people who have access to funding, resources, and networks, and who ultimately decide what reconciliation should look like. 

For Kondabolu, the trendiness of being Indian or Asian American is a product not just of people of colour pushing for more diversity, but of those at the top — producers and directors — realizing that there is money to be made through these heretofore neglected communities. Hence, both negotiate between the opportunities that come with diversity and the commodification of identity that often comes as its price.

But diversity is, nonetheless, an inevitability. In anticipation of the fact that white people will become a minority in Canada and the US in 2036 and 2042 respectively, Malik and FineDay acknowledged that Indigenous communities are the fastest growing in Canada. Kondabolu beamed at the prospect of “fuck[ing] colonialists out of the country.”

On disposability 

Kondabolu described the impact of the recent US government shutdown on labour as “embarrassing,” while acknowledging that “it’s a part of a proud, American tradition of not paying its workers — which I believe began with slavery. So it’s a little retro.”

Malik focused on problematizing a capitalist version of automation, in which technology is used to dispose of labour rather than make our lives easier. As FineDay pointed out, unlike in our parents’ generations, finding decent, well-paying employment or owning property may be out of reach. On the future of the sharing economy, Kondabolu is more supportive of socialism than of the current version concerning “Uber and shit.”

While many skills may become obsolete, FineDay expects that telling stories is something “no machine will ever be able to do.” Kondabolu is less optimistic, fearing that the human ability to create art might become something that is easily duplicated by artificial intelligence and therefore no longer be as valued. He stressed the need for re-training workers and investing in new industries. “It’s not about investing in coal,” he noted, criticizing the current political reluctance to divest from fossil fuels for climate change.

On empowerment 

To change the power dynamics of the future, Kondabolu hopes to see people of colour become leaders who call the shots — in the case of stand-up comedy, producers and directors. People who belong to a certain community should tell the stories of that community — not just anyone who claims to be ‘woke.’ 

He cautioned, “People do lots of shitty things when they’re awake. Just because you know what’s going on doesn’t mean you’re going to do the right thing.” 

For FineDay, the “Mr. Spacely” of Indigenous peoples has always been Canada. The country continues to break treaties, while portraying itself as an international defender of human rights.

Yet, while many systemic challenges remain, FineDay is focused on changing hearts and minds on a one-on-one level. Informing and encouraging Canadians to learn about youth suicide rates, residential schools, or lower health outcomes in Indigenous communities is part of what he calls “little wins.” 

On organizing 

Reconciliation, for FineDay, means that space is made for Indigenous peoples in higher education and workplaces that don’t require them to sacrifice culture and pride. He takes inspiration from the generosity of youth and communities who, despite Canada’s ongoing wrongdoings, are still willing to reconcile.

If we are to take anything away from the discussion, it’s that an alternative, just, and diverse future, one that overcomes colonialism, racism, and capitalism, is possible — if we take organized action. 

And that starts with any number more than one. At some point, Kondabolu told FineDay, “We should be friends, man.” I don’t really listen to podcasts, but if the two of them were to organize one called “cis, straight Indian dudes,” I would definitely jump on that bandwagon. 

The web: a museum of our everyday lives

How researchers might examine our digital data centuries from now

Whenever I make a post on social media, I wonder who it will reach — not just in the present, but in the future. Hundreds of years from now, will a researcher studying a hashtag on Instagram labelled ‘dog’ meticulously analyze the editing choices I made for a photo of my dog? Will historians piecing together the lives of millennial university students investigate my tweets? Will my social media accounts exist at all?

I learned while writing this piece that my curiosity might not be as weird or narcissistic as it sounds: archives of our generation’s social media and web pages are currently being compiled, investigated, and utilized across U of T and the world.

Like cave paintings in France or clay tablets from Mesopotamia, our social media posts are artifacts that will offer future historians insights into our daily lives, our society, and our politics. Our social media accounts are museums of our everyday lives, self-curated time capsules for future researchers. Such a large — and constantly expanding — collection of the thoughts and behaviours of ordinary people has never been available to researchers before. While this wealth of data will be invaluable to future researchers and historians, it also presents unique problems that don’t have conclusive solutions.

PRESERVING OUR DIGITAL DATA

Last December, volunteers gathered at U of T to archive climate change and environmental data that was at “high risk” of being deleted or of being made unavailable to the public under Donald Trump’s then-incoming presidency.

This “Guerrilla Archiving” event was done in collaboration with the Internet Archive’s “End of Term 2016” project. The Internet Archive is an online non-profit library that has recorded around 279 billion web pages for future historians to use. Its Canadian headquarters are located on the seventh floor of Robarts Library at UTSG.

Matt Price, a sessional lecturer at U of T’s Department of History, was one of the organizers of the event. Price explained it was important to copy these pages not just for historical reasons, but for the sake of documenting the truth: our understanding of climate and its relation to human health comes from these long stretches of data, which is why it’s imperative for them to stay publicly accessible.

Sam-chin Li is the Reference/Government Publications Librarian at Robarts Library who assisted volunteers at the archiving event. According to Li and Nich Worby, a Government Information and Statistics Librarian at Robarts Library, government information is now only available digitally and only on government websites. Without strong enforcement, this digital content could be at risk of being edited or deleted.

“That is why preserving government websites is not only essential for researchers, historians and scientists to do their work in the future, it is also critical for the opposition and public to keep government accountable,” wrote Li and Worby in an email.

According to Li and Worby, future historians and researchers can use archived web content to gain a better understanding of our “history and heritage.” Platforms like Twitter reveal valuable information about the lives of ordinary people and contain relevant interactions between governments and citizens.

Wendy Duff, a professor and dean in the Faculty of Information at U of T, thinks our social media archives will be “incredibly valuable” to future researchers trying to understand our societies, and that they will be able to exclusively provide information about certain demographics. Primary sources from the past, like letters and diaries, came from a small, specific group of people: those who were literate and had the free time to write. Now, tons of different groups have access to the internet — and the ability to inadvertently share glimpses of their daily lives with future historians.

PIECING TOGETHER OUR LIVES

Back in April 2010, the Library of Congress announced that it would preserve all public tweets — excluding private account information or deleted tweets, as well as pictures and links — for future generations and historians. In addition to tweets, the Library of Congress is also collecting online information about American and select international election candidates, select Facebook pages and news sites, and websites related to important historical events.

Price underscored that an archive of the lives of ordinary people has never been available to historians before. Historians of earlier centuries have a “scarcity of sources,” while historians of the early 21st century will be overwhelmed by sources. “Their problem is going to be that there’s so many documents that it’s going to be very difficult to sort them,” said Price.

“There will be a massive amount of records, and you will not be able to read them all,” agreed Duff.

For example, a researcher studying a president from the 1800s might have the ability to read every letter sent from the president’s office, but a researcher studying a president from the 21st century almost certainly could not read all the relevant emails and tweets sent out, Duff explained.

“So you will have to have electronic tools to be able to understand certain patterns.”

To sort through these sources, historians of the early 21st century will need to use computational methods — such as searching for keywords or more complex queries — as well as physical analyses of outside texts or sources, explained Price. For some media, like tweets, statistical analysis is the only way to interact with them. One tweet doesn’t reveal enough; historians would have to examine an aggregation of tweets and consult relevant Twitter threads in order to gauge enough context.

‘FAKE NEWS’ AND SELF-CURATION

Our social media accounts are near-shrines of our idealized versions of ourselves: we only post edited photos, we only tweet our wittiest thoughts, and we only share our most ‘likeable’ life events.

A more insidious issue is the spread of misinformation — popularly known as ‘fake news’ — on platforms like Facebook and Twitter. The proliferation of false news stories and even fake first-hand accounts has been a pressing concern, especially over the past year. How will researchers hundreds of years from now be able to navigate our social media posts, all of which have varying degrees of reliability and bias?

Fiorella Foscarini, an associate professor and Director of the Concurrent Registration Option at U of T’s Faculty of Information, says that fake news, forged records, and unreliable information have always been around, especially in the personal sphere or other environments with little outside control.

“What we are experiencing with social media, with the current proliferation of partial accounts or completely fabricated facts, is an interesting cultural phenomenon,” said Foscarini. “But it is also worrisome, because many people do not seem to have the critical instruments necessary to evaluate their sources.”

Archivists can prevent the spread of unreliable information by verifying the identity of the data at hand, providing resources for cross-examination, and monitoring the use of information to detect any modifications, Foscarini explained. However, outside of official archival spaces, these best practices might not be implemented.

Price explained that, regardless of genre, every source historians deal with has an “agenda,” and that historians have to learn to “read between the lines” of people’s self-presentations.

“Social media today are different in genre from the kinds of texts produced 100 or 200 years ago, in part because they offer a very strange hybrid of public and private with highly curated visions of oneself,” said Price.

Instead of looking for answers about what people were “really like,” future researchers should turn to social media to see how people curated themselves and the conventions for this self-curation — or, in Price’s words, what “kind of cultural representations were dominant in a particular moment.”

Price also said it would be a good idea to use tweets as ways to learn about how events or ideas “travelled and became meaningful to the historical actors,” rather than to learn what was really happening during an event or crisis.

ARCHIVING SOCIAL MEDIA 

While trying to capture tweets about the Hong Kong Umbrella Movement as part of a school project, Alexander Herd and his fellow group members ran into a problem: some of these Twitter accounts were being blocked or shut down by authorities trying to censor the information and ideas being shared.

In order to prevent these deleted tweets from being lost forever, Herd — who completed a master’s degree in Library and Information Science at U of T in 2016 — and his group members placed them in a “dark archive.” Dark archives usually contain “sensitive information” about an ongoing event and can hold political tweets for 25–50 years.

Copyright was another issue for Herd’s group. Despite tweets being public record, “many users are not comfortable with their tweets being archived for eternity.”

“By extension, there has been discussion over who owns copyright of a tweet,” said Herd. According to Herd, there isn’t a clear resolution yet, but dark archiving tweets for a long period of time is a possible solution.

Herd and his group also consulted U of T librarians for their project, including Li and Worby.

According to Li and Worby, Twitter’s Developer Agreement currently prohibits permanently sharing archived tweets; researchers can only share ‘Tweet IDs’ with the public. Tweet IDs are unique, unsigned integers that encode a timestamp, worker number, and sequence number, and other researchers can use them to retrieve the full content of the corresponding tweets. However, deleted tweets are no longer retrievable, which raises yet another ethical issue when it comes to public figures and their ability to delete their tweets.
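Tweet IDs follow Twitter’s published ‘Snowflake’ scheme. The sketch below unpacks one into its component fields; the bit layout (a millisecond timestamp, ten bits of machine identifiers, and a twelve-bit sequence counter) and the custom epoch are taken from that scheme and should be treated as assumptions rather than a guaranteed contract:

```python
# Twitter's Snowflake custom epoch (November 4, 2010, in ms).
TWITTER_EPOCH_MS = 1288834974657

def decode_tweet_id(tweet_id: int) -> dict:
    """Split a 64-bit Tweet ID into timestamp, worker, and sequence fields."""
    timestamp_ms = (tweet_id >> 22) + TWITTER_EPOCH_MS
    worker = (tweet_id >> 12) & 0x3FF  # 10 bits of machine identifiers
    sequence = tweet_id & 0xFFF        # 12-bit per-millisecond counter
    return {"timestamp_ms": timestamp_ms, "worker": worker, "sequence": sequence}
```

This is why a bare Tweet ID is useful to researchers even before ‘rehydration’: the ID alone reveals when the tweet was created.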

A ‘DIGITAL DARK AGE’

In grad school, Price was part of a digital preservation effort at Stanford University that involved lucrative restriction enzyme patents. The archives at the library were given a collection of all the emails sent between the researchers working on this project in the 1970s.

However, the data was on eight-inch floppy disks. Price watched as the researchers moved the data to 5.25-inch floppy disks, and then to 3.5-inch floppy disks, and then to disk drives, and then to small hard drives before printing out the emails.

Researchers hundreds of years from now will inevitably run into the same technological problems. The technology of the future may not be able to support our current hardware or read our files — leaving future researchers in a ‘digital dark age.’

Duff said it would be a “huge detriment” if we were to lose all records of our digital data. Data loss is already happening every time someone accidentally deletes a file or breaks a phone full of pictures, Duff pointed out.

Price said that there could be some “massive social upheavals” in the future, especially in the wake of global climate change, which might compromise digital sources — which are currently stored in large buildings that depend on electricity to stay online.

“We know that paper can survive, sometimes for thousands of years, but there’s no evidence that digital data can survive in that way,” said Price.

PRESERVING DIGITAL DATA

Unlike books, web pages change “unpredictably and continuously,” explained Price, which means that archivists need to frequently make copies of these pages in order to truly capture our history.

“Archiving dynamic, interactive, ubiquitous digital information is much more challenging than archiving stable, almost unchanging analog records,” said Foscarini.

Despite the difficulties posed by technological obsolescence, Foscarini said that preserving websites and social media is no longer perceived as completely “unsurmountable.” The problem lies in ensuring these digital materials will still “make sense” hundreds of years from now.

“What kind of metadata do we need to retain, or to add, in order to provide enough context that would allow future generations to understand what that tweet or that meme meant to communicate?” said Foscarini.
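To make Foscarini’s question concrete, a hypothetical (purely illustrative, not any archival standard) metadata record that an archive might keep alongside a tweet could look like this — every field name and value below is invented:

```python
import json

# An invented metadata record preserving the context around a tweet.
record = {
    "tweet_id": "1234567890123456789",      # made-up ID
    "captured_at": "2018-03-01T12:00:00Z",  # when the archive took its copy
    "platform": "Twitter",
    "in_reply_to": None,                     # thread context, if any
    "embedded_media": [],                    # images, video, link previews
    "notes": "part of a collection on a student archiving project",
}
serialized = json.dumps(record, sort_keys=True)
```

Without fields like these, a future reader sees only the bare text of the tweet, stripped of the thread, media, and moment that gave it meaning.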

Emily Maemura is a fourth-year PhD candidate at U of T’s Faculty of Information whose research centres on archiving and preserving the web.

“Long-term preservation of digital media is perhaps less like letters or newspapers, and more like audio-visual collections, which requires monitoring and attention since software and hardware become obsolete over time,” said Maemura.

Maemura is researching another challenge web archivers must face: deciding which social media posts to actually keep, since archiving the web takes time, money, and resources.

“I think there’s an assumption that it’s possible to capture ‘everything’ that’s out there,” said Maemura.

Maemura explained that this is an “impossible goal” because there is a finite amount of data that can be sustained and because technological limits make it difficult to capture certain kinds of dynamic data.

“So it’s important to be aware of, and be critical of, the kinds of selection processes that happen, who decides what is preserved, and who is responsible for the ongoing access and maintenance,” said Maemura.

So, next time you retweet a viral meme or make an online post, consider the possibility of researchers and archivists centuries from now studying it. What will it say about who we are today?

Read the rest of The Varsity Magazine, on stands and online soon.
