Holy singularity, time to test Turing.

Why does Keanu Reeves act like a robot?
The “singularity” (in its non-mathematical sense at least) is the purported point in time when machines take over the world – Matrix style. Ok, maybe that is a crude description of the Technological Singularity, which has more to do with the notion of artificial intelligence surpassing human intelligence; but a bleak image of robot overlords gets the idea across quickly. The singularity concept, quite popular amongst science fiction writers and fans alike, also has many proponents in the scientific and academic realm. The most visible speaker on the topic is probably Ray Kurzweil, a noted inventor and futurist. He posits an exponential rate of advancement for technology, leading to a point where there is no distinction between man and machine. The following PBS snippet pushes the hot-button issues related to “technological enhancement” of humans. They interview (amongst others) both Kurzweil and a religious bioethicist who cringes at the thought of microchips implanted in someone’s brain:

Alan Turing, a pre-computer-era mathematician, first showed that a single general-purpose machine could carry out any computation – i.e. that a machine, rather than a person, could be a “computer”. This godfather of computers and artificial intelligence had misgivings about the ethical concerns that would arise as machines got better and better at thinking. His sympathies, however, lay with these thinking (sentient) machines, which he predicted would surely be marginalized within society (shades of Blade Runner). Turing himself knew what marginalization meant. Openly homosexual, he was arrested in the 1950s in the UK and forced to undergo chemical castration via massive doses of estrogen (causing him to grow breasts at one point). He spiraled into a deep depression and eventually committed suicide. Turing was a technologist who thought about the ethical ramifications of his work; sadly, he himself was done in by the prevailing ethics and technology of the time: the notion that homosexuality was a pathology and the belief that it could be “cured” with estrogen. The following NPR story gives a brief account of Turing, his contributions, and his conviction:

The same old new
Turing not only questioned the assumed impossibility of a “thinking machine”, he argued that it was entirely possible. Turing and Kurzweil can both be characterized as innovators looking at the present and future ramifications of their work. New technologies will inevitably raise new ethical issues, both anticipated and previously unimaginable. The integration of new technologies within society will always provoke skepticism, criticism, and scrutiny. For every evangelistic promoter of a new technology there are legions who will question its effects, and only time and empirical evidence will show who was closer to the truth. You can see the backlash to new things in even the most basic of novel technologies. When I was complaining to a friend about the widespread vandalism the Montreal Bixi (bike-sharing) system faced upon its introduction to our urban landscape, he told me the same thing happened when parking meters were first introduced in Oklahoma City in 1935, when “vigilantes from Alabama and Texas attempted to destroy the meters en masse”. Woes (social, health-related, and environmental) about rampant cell phone use echo those of a bygone era, when people wondered about the implications of that new disaster machine called “the telephone”. The predictions at the time about the benefits to global communication, and the fears about privacy, could easily apply to modern concerns about cell phones as well as other means of communication via the Internet. The inherent enthusiasm for, and inherent apprehension toward, novel technology are in themselves nothing new; the trick is to learn from the past when looking to the future.

We’ll just have to trust the ethical robots of the future
Is machine consciousness a dirty word? According to Hod Lipson, a Cornell roboticist interviewed below…Yes! He feels it is more important to see whether machines can be “creative” and learn things. He talks of a robot they built that learned how to walk – it was not programmed to walk – it learned how to walk through trial and error. He discusses the fact that robots often acquire human-ish behaviours from first principles in these learning algorithms, not because they are explicitly programmed to do so, but because it is the natural thing to do. Lipson stresses that discussions of how and where we want these learning algorithms to go should take place out in the open – and now is the time to have them. In the same piece, psychologist Adam Waytz talks of our perceptions of blame when a self-driving car gets into an accident. Not surprisingly, cars that have “human features”, including talking to the passenger, were blamed less. Perhaps less expected, these cars also left their passengers feeling more relaxed (as measured through physiological indicators). Waytz also brings up the ethical concern of whom to blame when a self-driving car really does end up in an accident…the programmer? And how will the car itself make a split-second decision that might hurt one person in one scenario and a different person in the alternative? Apparently the decision will come from studying what humans do in the same situation. Why am I not reassured by that news?
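To make the trial-and-error idea a little more concrete, here is a minimal sketch in Python of the kind of loop such a robot might run: propose a small random tweak to its gait parameters, keep the tweak only if it walks farther, and repeat. The simulate_gait function is entirely hypothetical (a stand-in for a physics simulation or a real-world trial), so treat this as an illustration of the principle rather than Lipson’s actual method.

```python
import random

def simulate_gait(params):
    # Hypothetical stand-in for a physics simulation or a real trial:
    # returns a score for how far the robot walked with these parameters.
    # (A toy function here, just so the sketch runs on its own.)
    stride, lift, sway = params
    return -((stride - 0.6) ** 2 + (lift - 0.3) ** 2 + (sway - 0.1) ** 2)

def learn_to_walk(trials=1000, step=0.05):
    """Trial-and-error (hill-climbing) search over gait parameters.

    Nothing here encodes *how* to walk; the loop simply keeps whatever
    random tweak makes the robot travel farther.
    """
    params = [random.random() for _ in range(3)]   # start with a random gait
    best = simulate_gait(params)
    for _ in range(trials):
        candidate = [p + random.uniform(-step, step) for p in params]
        score = simulate_gait(candidate)
        if score > best:                           # keep only improvements
            params, best = candidate, score
    return params, best

if __name__ == "__main__":
    gait, distance = learn_to_walk()
    print("learned gait parameters:", gait, "score:", distance)
```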

Despite being a dirty word amongst roboticists, machine consciousness remains a perennial hot topic of conversation. Even Turing himself discussed the subject as he was first dreaming up the very concept of a computer. The Turing Test is essentially a test to see whether a computer can fool humans into thinking that it is human. There was recently a lot of press about the Turing Test finally being passed at the Royal Society. Apparently the computer that fooled (more than 30% of) the judges into believing it was human was passed off as a 13-year-old boy from Ukraine (note – judges had a five-minute exchange with it via written communication). A radio piece discussing this recent “milestone” proposes that what we expect from “conscious” beings has more to do with an emotional connection than an intellectual one; hence the recent emphasis on “affective computing”, a field that works on the ability of machines to detect and express emotions.
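For what that “pass” actually amounted to, here is a tiny Python sketch of the reported criterion: after the five-minute written exchanges, count the judges who believed they were talking to a human and check whether they exceed 30%. The verdicts below are made up purely for illustration.

```python
def passes_turing_test(verdicts, threshold=0.30):
    """Return True if more than `threshold` of judges were fooled.

    `verdicts` is a list of booleans, one per judge, where True means
    the judge believed the hidden interlocutor was human.
    """
    fooled = sum(verdicts)
    return fooled / len(verdicts) > threshold

# Illustrative only: 10 judges, 4 of whom were convinced they spoke to a human.
example_verdicts = [True, False, True, False, False, True, False, False, True, False]
print(passes_turing_test(example_verdicts))  # True (40% > 30%)
```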

Kurzweil’s view of the singularity is a positive one. He is not prone to thoughts of technology-induced doom for the human race. Rather, he envisions an amalgamation that will eventually lead to a lack of division between human and machine. Turing also had something to say on the matter. Surprisingly, in addition to his views on the inevitable maltreatment of thinking machines, Turing also pondered an eventual tip in the balance of power between humans and machines. In his seminal 1951 lecture, Intelligent Machinery: A Heretical Theory, Turing began with the following lines: “‘You cannot make a machine to think for you.’ This is a commonplace that is usually accepted without question. It will be the purpose of this paper to question it.” He ends the lecture with the following: “…it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers…. At some stage therefore we should have to expect the machines to take control….”

I for one am going to treat my toaster with more respect (both intellectual and emotional) from now on.

Cup of Robots by a gantry robot. Photo credit: Rola Harmouche


Prasun Lala has been interested in and studying matters related to cognition for many years.