The University of Toronto’s Student Newspaper Since 1880

Neural networking

Ilya Sutskever is not a computer geek. He doesn’t spend endless hours in front of a computer, and his desktop background is a picture of a sunflower — not an atom. He also isn’t as wired as one would initially expect: as a matter of fact, he is one of the few students without a smartphone.

Nevertheless, Sutskever is nothing short of a computer whiz. As a graduate student in the University of Toronto’s Department of Computer Science, he is the only Canadian winner of the Google Fellowship, an international prize awarded to graduate students doing exceptional work in computer science and related disciplines.

Walking into Sutskever’s office, I find him drawing up a network of connected squares and circles. The meaning of the illustration is simple: the circles represent neurons linked up in a network of inputs and outputs.

Sutskever’s area of interest is neural networks, and his work often involves tasks that humans are not good at but that computers could potentially excel at. He is quick to point out my naiveté in thinking that neural networks model the human brain: they are only inspired by it, and do not attempt to replicate it.

Sutskever’s specific projects have focused on training neural networks and applying them to new settings. Similar to connectionism — a paradigm commonly used to describe human cognition — neural networks use models representing interconnected networks of units. Sutskever’s work involves changing the strength of connections between units, which most commonly represent neurons, to achieve a desired performance. The network is allowed to run and is examined for errors, and the connections are re-weighted so that an input results in the desired output.
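The training loop described above can be sketched in a few lines. The example below is purely illustrative and far simpler than anything in Sutskever’s research: a single artificial neuron runs on each input, its output is compared against the desired one, and the connection weights are nudged slightly after every error.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def train_neuron(examples, epochs=100, lr=0.1):
    """examples: list of (inputs, target) pairs; returns learned weights and bias."""
    n = len(examples[0][0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            # Run the network: weighted sum of the inputs, then a threshold.
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1.0 if activation > 0 else 0.0
            # Examine the error, and re-weight the connections slightly.
            error = target - output
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function from four input/output examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_neuron(data)

def predict(x):
    return 1.0 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0.0

print([predict(x) for x, _ in data])  # → [0.0, 0.0, 0.0, 1.0]
```

Each small weight update moves the neuron’s output closer to the target, which is the same principle — scaled up enormously — behind training the networks Sutskever works on.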

An example Sutskever gives is that of driving: “You can change the strength of the connections slightly, so as to change the direction of the car.” More recently, Sutskever has focused on programs that generate speech from text. “You can adjust the weights to better represent the correct speech from the text. We want the program to be able to extract semantic info from sentences, such as who did what, to whom.”

I ask if the topic of Sutskever’s work can be described as machine learning. “It is not machine learning, but machine performance,” he replies unequivocally. The statement underscores a clear and imperative position: his work has nothing to do with artificial intelligence — at least not yet.

Today’s neural networks are operational tools with a narrow focus, whereas AI, by the very nature of its title, has encountered a number of setbacks over the course of its development.

“The problem with intelligence is that it’s slippery, very difficult to define. It’s unclear how to proceed, we don’t know how to build intelligent machines,” says Sutskever. He adds, “No researcher has the explicit goal of achieving artificial intelligence. Instead they focus on tasks involving computer vision, speech, object recognition, planning, and decision-making. Researchers must focus on tractable problems.” He mentions, however, that he hopes that “neural networks will eventually lead to smart robots.”

What’s stopping AI from being achieved sooner? It’s not a lack of funding, he assures me, but a “lack of ideas.”

“What we need are bigger computers that can handle more data and can train larger neural networks with better training algorithms,” says Sutskever. Advancements in robotic intelligence take place sporadically. He describes a recent advancement of note in which “researchers at Stanford University trained a robot to accurately fold napkins,” a task involving complex and hierarchical planning.

As a graduate student, Sutskever is himself involved in advanced and innovative research. His current work involves a program that, once fed a sequence of characters (letters and numbers), can accurately predict the characters that follow. He trained it by first feeding all of Wikipedia’s 16 million articles into the program, which extracted regularities and other vital information and updated its internal state. The result: the program could predict foreign names as well as invent plausible-sounding ones.
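The task itself — guessing the next character from what came before — can be illustrated with a toy model. Sutskever’s program is a trained neural network; the sketch below substitutes simple bigram frequency counts for the learned model, using a made-up one-line corpus in place of Wikipedia, just to show the idea.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each character, which characters tend to follow it."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, char):
    """Return the character most likely to follow `char`."""
    return counts[char].most_common(1)[0][0]

def generate(counts, seed, length):
    """Extend a seed by repeatedly predicting the next character."""
    out = seed
    for _ in range(length):
        out += predict_next(counts, out[-1])
    return out

# A tiny stand-in corpus; Sutskever's model read all of Wikipedia.
corpus = "the theory of neural networks then and there"
model = train_bigrams(corpus)

print(predict_next(model, "t"))  # → 'h', the commonest follower of 't'
print(generate(model, "t", 10))  # invents a plausible-looking string
```

A real character-level neural network conditions on far more than the single previous character, which is what lets it invent whole plausible-sounding names rather than just likely letter pairs.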

Sutskever, originally a student at the Open University of Israel, completed his B.Sc. in mathematics at U of T and is currently completing his PhD in computer science. His supervisor is Geoffrey Hinton, a world-renowned computer scientist known for his pioneering work on neural networks.

The Google Fellowship includes tuition and fees, a $25,000 yearly stipend, $5,000 toward a personal computer, an invitation to the Google Fellowship Forum, a new Android phone, and a Google Research mentor.

As to when the next big advancement in machine intelligence is to take place, Sutskever candidly replies, “Check back with me in six months.”