Microsoft hosted its first Imagine Cup in 2003. The international competition, now in its 16th year, challenges student tech developers to use Microsoft’s cloud-based technologies to address global problems.
This year, two Canadians won the Cup: Samin Khan, a fourth-year at University College pursuing a double major in Computer Science and Cognitive Science with a minor in Psychology, and Hamayal Choudhry, a third-year at University of Ontario Institute of Technology studying Mechatronics Engineering. In July, Khan and Choudhry were one of 49 teams to advance to the 2018 Imagine Cup World Finals in Seattle.
The pair’s winning project was smartARM, a 3D-printed robotic prosthetic hand designed to be highly functional and affordable at an estimated cost of $100. smartARM operates in a three-step process: First, a camera embedded in the palm of smartARM senses the shape of an unfamiliar object. A Raspberry Pi analyzes the shape’s data and calculates the kind of grip required to grasp it. Second, a machine learning algorithm powered by Microsoft Azure improves the grip each time the device is used. Last, this information is stored on the Microsoft cloud so the user can switch to another smartARM and retain functionality.
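The three-step loop above — recognize a shape, pick a grip, learn and sync the result — can be sketched in a few lines. This is a minimal illustrative sketch only: the shape labels, grip names, and profile structure are hypothetical, and the real smartARM relies on a palm camera, a Raspberry Pi, and Azure machine learning rather than a lookup table.

```python
# Hypothetical sketch of the smartARM control loop; names are illustrative,
# not taken from the actual device's software.

# Step 1: the vision stage reduces an object to a shape label, which maps
# to a default grip.
GRIP_FOR_SHAPE = {
    "cylinder": "power_grip",
    "sphere": "spherical_grip",
    "card": "pinch_grip",
}

class SmartArmSketch:
    def __init__(self):
        # Per-user grip refinements; in the real system this is the data
        # stored in the cloud so a replacement arm retains functionality.
        self.profile = {}

    def choose_grip(self, shape_label):
        # Prefer a grip learned from past use (step 2), falling back to
        # the default mapping for an unfamiliar shape (step 1).
        base = GRIP_FOR_SHAPE.get(shape_label, "power_grip")
        return self.profile.get(shape_label, base)

    def record_feedback(self, shape_label, better_grip):
        # Step 2: improve the grip each time the device is used.
        self.profile[shape_label] = better_grip

    def export_profile(self):
        # Step 3: hand the learned profile off to another arm.
        return dict(self.profile)
```

For example, a new arm loaded with an exported profile would immediately use the refined grip for a shape the previous arm had already learned.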
Khan and Choudhry won a mentoring session with Microsoft CEO Satya Nadella, a $50,000 Microsoft Azure grant, and $85,000 in cash prizes.
The Varsity had the opportunity to sit down with Khan and talk about his team’s win.
The Varsity: How did you hear about the Imagine Cup?
Samin Khan: I heard about it at UofTHacks — it’s a hackathon run by U of T — and there were sponsors there from various companies. So, Microsoft actually had a few representatives… and after we won the Microsoft API award, they told us about the Imagine Cup.
TV: How did you meet your partner Hamayal Choudhry?
SK: We actually met in middle school. He was a year younger than me; we didn’t know each other too well. We split off into different high schools — we hadn’t seen each other in four, five, six years. We just happened to run into each other at UofTHacks… We started exchanging philosophies that we had for the tech industry. We were tired of seeing innovation that was so focused on making the next cell phone slimmer or the next car sleeker — we were thinking that this is not really focusing on impacting the lives of people… Neither of us came into the competition with an idea or a team, and based on that conversation, I think we realized we had to put something together. And it was all history from there.
TV: What is it that makes smartARM so novel as a prosthetic?
SK: What we noticed is that within the prosthetics industry, there’s a big divide between the products. On one hand you have ‘cosmetic’ prosthetics: these are going to be pretty cheap, they’re affordable, they’re [less than] $1,000. However, they don’t provide much motor functionality, like fingers, so you can’t actually grasp objects.
On the other hand, you’re going to have these very complex robotic arms. You’ll find that these arms all immediately go up to tens of thousands of dollars. The main reason for that is they are striving for that functionality, and what that requires is a lot of complex neuromuscular interfacing with myoelectric sensors.
What smartARM leverages is that, by using computer vision to automate some of that grasping process, it limits the amount of myoelectric sensor usage. The costs are under $100, and all we have are these very simple motors and this onboard Raspberry Pi. We’re able to get a good bit of functionality.
TV: Were there any experiences at U of T that helped you design smartARM?
SK: Definitely. I mean, I started programming not so long ago; I only really started in second year. I really feel that once you know the fundamentals of programming, there’s a lot you can do. Just those introductory courses, like Intro to [Computer] Programming where you first learn Python, give you the basics of being able to manipulate integers and strings — this basic kind of algorithmic way of thinking. Honestly, that got us started when we were using object recognition to turn images into strings and those into grips.
What also really helped was that I volunteered for some computer science research opportunities within U of T. Last semester, I was working with Professor Yang Xu — he’s in the Department of Computer Science and also Cognitive Science, which is what I’m in. I started to get comfortable learning some of the basics of machine learning libraries.
TV: What advice do you have for anyone who might be interested in pursuing a tech venture similar to yours, but is unsure of where to start?
SK: Just starting conversations with people in computer science, or people who have some programming experience, or who have worked in industry, or research, or started their own company. And having that vision for yourself, and not being afraid of working with programmers, and trying to get any experience, even if you haven’t had any — it would be great to just get a footing in it. I would highly recommend learning the fundamentals of Python, for instance, or learning the fundamentals of some machine learning library.
This interview has been edited for length and clarity.
Khan has offered to speak to anyone who may be interested in pursuing a tech venture. He can be reached at firstname.lastname@example.org.