Have you ever wanted to be more than just human?

Last Monday, the Toronto Transhumanist Association hosted a talk by Eliezer Yudkowsky, a research fellow at the Singularity Institute for Artificial Intelligence in Atlanta, Georgia, who explained the basics of “transhumanism.” This movement proposes that emerging technologies such as artificial intelligence, genetic engineering, and nanotechnology will make it possible for humanity to vastly improve itself. One hot topic is the possibility of vastly extending the human lifespan, either through genetic manipulation or by downloading someone’s entire mind into a computer.

“A lot of transhumanists have had to deal with rationalizations that people develop in order to suppress their fear of death: the idea that death gives meaning to life, and other comforting lies,” said Yudkowsky, noting that people often react strangely to the prospect of immortality.

“There are many philosophical [arguments] for why life must suck in order to be meaningful: that all the pain and death and catastrophe and the minor annoyances that drain out your life force are somehow necessary. Transhumanism just says: ‘this sucks, let’s fix it.’”

Yudkowsky cited Zain Hashmi, a four-year-old British child suffering from a rare blood disorder who will die without a bone-marrow transplant. With no matching donors available, Hashmi’s parents decided to have another child, screened before birth to ensure that the new sibling would be a match. As Zain’s death neared, public debate raged over his right to life versus the unborn child’s right not to be turned into a commodity, a tissue bank for another person.

In Yudkowsky’s opinion, such resistance is irrational. “I would make a serious case that this is where bioethics gets you…you start believing that death is a good thing, and you will eventually be willing to murder four-year-old children for political purposes. If you can use tissue screening to save a child’s life, or you can use genetically modified golden wheat to prevent blindness due to malnutrition, why on earth wouldn’t you?”

In a nutshell, transhumanism is a movement dedicated to ongoing improvement of our quality of life through technology. The World Transhumanist Association (WTA) website summarizes: “We foresee the feasibility of redesigning the human condition, including such parameters as the inevitability of aging, limitations on human and artificial intellects, unchosen psychology, suffering, and our confinement to the planet Earth.”

According to Yudkowsky, artificial intelligence (AI) could accelerate technological advancement to the point where world-changing discoveries are made in a matter of minutes, hours, or days instead of decades or centuries. However, he added, it is essential that AI be “friendly”: any AI we develop must have a moral understanding superior to our own, or we risk bringing about a dark, Matrix-like future. Many cognitive scientists consider any concrete definition of intelligence, let alone morality, impossible. Yudkowsky acknowledged that giving AI a moral understanding would be complicated, but said that “there is [in fact] a definition of intelligence—it’s an actual math equation.”

If the transhumanists’ predictions are correct, there may be no limit to how far we can chart our future and transcend the limits of our biology.