Eye spy a fruit fly

Drosophila melanogaster can distinguish other flies

Most of us don’t think much of fruit flies other than as noisy nuisances with their sights set on spoiled food. 

However, according to Jonathan Schneider and Joel Levine, researchers in UTM’s Department of Biology, fruit flies, or Drosophila melanogaster, have a higher capacity for visual comprehension than previously believed.

Schneider, a postdoctoral fellow, and his supervisor, Levine, Chair of UTM’s Biology Department and a senior fellow at the Canadian Institute for Advanced Research (CIFAR) Child & Brain Development program, detailed their research in a paper published in the October issue of PLOS ONE.

The research was funded by a CIFAR Catalyst grant and conducted in collaboration with Nihal Murali, a machine learning researcher at the University of Guelph’s School of Engineering, and Graham Taylor, a Canada Research Chair in Machine Learning.

Though fruit flies see at low resolution, they possess a remarkably layered and organized visual system, including hyperacute photoreceptors.

Schneider and Levine wanted to determine whether fruit flies, despite the limited image their eyes receive, could distinguish individual flies.

To do so, the researchers built a machine with 25,000 artificial neurons to mimic the eye of a fruit fly. They then recorded 20 individual flies (10 male, 10 female) for 15 minutes a day over three days using a machine vision camera. From these recordings, they developed standardized images, which they resized to imitate the images the flies themselves perceive.
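
The downscaling step might look something like the minimal Python sketch below. This is not the authors’ code; the 29 × 29 target resolution and the file names are assumptions for illustration.

    # A toy sketch of the preprocessing described above: shrinking a
    # standardized fly image to a coarse resolution meant to approximate
    # the fly eye's limited input.
    from PIL import Image

    FLY_EYE_RES = (29, 29)  # assumed effective resolution of the fly eye

    def to_fly_eye(path):
        img = Image.open(path).convert("L")  # grayscale frame from the camera
        return img.resize(FLY_EYE_RES, Image.BILINEAR)  # coarse "fly eye" view

    to_fly_eye("fly_frame_0001.png").save("fly_frame_0001_fly_eye.png")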

They showed the images to ResNet18, a computer algorithm without the constraints of the ‘fly eye’ model; to their ‘fly eye’ machine; and to human participants. All three were tasked with re-identifying the fly whose images they had been shown.
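
For the unconstrained algorithm, the re-identification setup could be sketched as follows, assuming torchvision’s standard ResNet18 with its final layer swapped for a 20-way classifier (one class per recorded fly); the training details here are illustrative assumptions, not the study’s actual configuration.

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_FLIES = 20  # 10 male and 10 female flies, as in the study

    # Standard ResNet18 with its classification head replaced so that it
    # predicts which of the 20 flies appears in an input frame.
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, NUM_FLIES)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(images, labels):
        """One gradient step on a batch of fly images (N, 3, 224, 224)."""
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()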

The results indicated that fruit flies can extract meaning from their visual surroundings and can even recognize individual fruit flies, something that fly biologists themselves have had trouble doing.

“So, when one [fruit fly] lands next to another,” explains Schneider to Science Daily, “it’s ‘Hi Bob, Hey Alice.’”

The extent of fruit flies’ visual comprehension has implications for their social behaviour, and this study could help researchers learn how they communicate.

These findings are also significant because most programs designed to mimic a natural capacity, such as virtual assistants like Siri, Alexa, and Google Assistant, come close to that capacity but rarely surpass it; the ‘fly eye’ machine did.

Machines like these can bridge the gap between engineers and neurobiologists. The former can use these findings to design machines that are as biologically realistic as possible.

The latter can use that biological accuracy to hypothesize how visual systems process information and, as Schneider and his colleagues put it, “uncover not just how [fruit flies], but all of us, see the world.”

Like humans do

Can U of T researchers help turn computers into mini-minds?

While calculators can be helpful when tackling some math equations, they can’t compete with the complex thought processes of humans, at least not yet. Dr. Richard Zemel and Dr. Raquel Urtasun of U of T’s Computer Science Department are trying to speed that research along; the two are working to build computers that think more like humans when processing data.

The two are part of a team of scientists and mathematicians, led by the Baylor College of Medicine, that is trying to understand the computational building blocks of the brain, with the goal of creating more advanced learning machines.

For this project, researchers at the University of Toronto will be partnering with the California Institute of Technology, Columbia University, Cornell University, Rice University, and the Max Planck Institute in Tübingen.

Their research is supported by a program known as Machine Intelligence from Cortical Networks (MICrONS), which operates under the umbrella of the Intelligence Advanced Research Projects Activity (IARPA), a US agency that invests in high-risk, high-reward research offering solutions to the needs of US intelligence agencies. MICrONS is also part of the broader BRAIN Initiative, launched in 2013 by President Obama with an eye toward understanding devastating brain diseases and developing new technologies, treatments, and cures.

This research will not only help scientists understand the computational workings of the brain, but will also advance the study of artificial neural networks used to predict events such as cyberattacks, financial crashes, and hazardous weather.

Algorithms based on neural networks are already used in a wide range of areas, from the consumer level to military intelligence, as seen in “speech recognition, text analysis, object classification, [as well as] image and video analysis programs. The applications are broad,” says Dr. Zemel, adding that the “aim is to extend some of the most popular types of machine-learning models using deep neural networks.”

The massive amounts of data produced across the world on a daily basis affect everything from spam in your inbox to military intelligence operations. Smarter and more discerning learning machines will help to manage and present that information in a more comprehensible way.

“Currently the rules by which activities in a network are defined are mostly ad hoc, and validated and improved by experience. Here we hope to gain some insight from natural deep neural networks to refine these rules,” said Zemel.

In other words, the ways in which current algorithms represent, transform, and learn from data are determined largely through trial and error.   
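
As a toy illustration (ours, not the project’s), the ‘rules’ in question include choices like the activation function and the weight-update rule in the sketch below, both of which are typically picked by trial and error rather than derived from biology.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 3))  # one layer's weights, randomly initialized

    def activity(x):
        # ReLU: an ad hoc but experience-validated rule for unit activity
        return np.maximum(0.0, W @ x)

    def update(weights, grad, lr=0.01):
        # plain gradient descent: another hand-picked, experience-refined rule
        return weights - lr * grad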

Neural networks are based on models dating back to the 1980s, and advancements have long been constrained by scientists’ ability to measure the activity of only a few neurons at a time. Today, more accurate and plentiful data allows researchers to take a far more detailed look at brain activity, allowing for a computational, rather than merely architectural, understanding.

The availability of better tools, techniques, and technology will allow MICrONS researchers to measure the activity of 100,000 neurons while a subject is engaged in visual perception and learning tasks. Although the research teams will be mapping the activity of one cubic millimetre of a rodent’s brain (a volume less than one-millionth the size of the human brain), these tools will allow them to analyze neural circuits in ways that were unimaginable just a few years ago. 

The precision and microscopic scale of this research are challenging, as scientists are aiming to obtain a highly detailed and complete understanding of one small part of the brain, rather than a coarse structural understanding of the brain as a whole.

From this data, the team hopes to develop new message-passing methods, which describe how information travels between the model neurons in a large network.
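
In the abstract, one round of message passing can be sketched as below; this toy example assumes nothing about the project’s actual models and simply shows each model neuron combining weighted ‘messages’ from its neighbours.

    import numpy as np

    def message_passing_step(states, adjacency):
        """states: activities of n model neurons; adjacency: n-by-n weights."""
        messages = adjacency @ states  # each neuron sums its weighted inputs
        return np.tanh(messages)       # a nonlinearity sets the new activities

    rng = np.random.default_rng(1)
    A = rng.normal(scale=0.5, size=(5, 5))  # hypothetical connection weights
    x = rng.normal(size=5)                  # initial neuron activities
    for _ in range(3):                      # three rounds of message passing
        x = message_passing_step(x, A)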

The research that Drs. Zemel and Urtasun are conducting could bring computers closer to actual brain levels of functioning, allowing for more powerful performance that better aligns with human needs.