“About halfway through a particularly tense game of Go held in Seoul, South Korea, between Lee Sedol, one of the best players of all time, and AlphaGo, an artificial intelligence created by Google, the AI program made a mysterious move that demonstrated an unnerving edge over its human opponent. On move 37, AlphaGo chose to put a black stone in what seemed, at first, like a ridiculous position. It looked certain to give up substantial territory—a rookie mistake in a game that is all about controlling the space on the board. Two television commentators wondered if they had misread the move or if the machine had malfunctioned somehow. In fact, contrary to any conventional wisdom, move 37 would enable AlphaGo to build a formidable foundation in the center of the board. The Google program had effectively won the game using a move that no human would’ve come up with.”
Will Knight, MIT Technology Review, July 31, 2017
Recently, a letter and appeal signed by Stephen Hawking, Tesla/SpaceX/Solar Tile and Hyperloop founder Elon Musk, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, linguist and activist Noam Chomsky and more than 1,000 other robotics researchers and sociologists warned that the ultimate (mis)direction of Artificial Intelligence could be the convergence of the military-industrial complex with disruptive technology. https://futureoflife.org/ai-open-letter/
They forewarn that it could result in automated robo-military warfare that would be uncontrollable. An ultimate unholy alliance. Very much like the chain reaction that Rutherford did not really foresee in splitting the atom, and what Oppenheimer, the father of the atom bomb, lamented in resignation: “Now I am become Death, the destroyer of worlds.” https://www.youtube.com/watch?v=lb13ynu3Iac
Facebook founder Mark Zuckerberg, sounding millennially effluvious, criticized Musk, saying that he was pressing a panic button unnecessarily. This is possibly better understood when one realizes that Elon Musk, a manufacturer of tangible products – be it an electric computer that drives you around, a solar tile that is affordable, or experiments with a public transportation system that can get you from Montreal to Toronto in an hour, on the ground – is clearly someone who has integrated at least some of his entrepreneurial zeal into the idea of creating an economy for all, where work, employment, cash or credit flow, eating, living, education and leisure become a reality for the common person. Meanwhile, Zuckerberg deals with virtual, social, flirtatious, emoji-based long-distance fucking as the basis for increasing the stock value of his company. Incidentally, those graduate students who map the floor of the ocean and marvel with starry eyes at the shapes and forms of sea creatures, topography, ocean shelves and tectonic peculiarities should wake up when they realize that their studies are funded by naval research. The navy is not exactly interested in the mating behaviour of starfish. Hello!
A marketing and sales force automation company is developing an artificial neural network-based customer relations software module that will mimic a biological brain-signalling system. It will try to gauge customer sentiment through voice recognition, email or optical character recognition evaluations. In fact, this is already happening in other sectors of industry. Such systems are attempting to learn to recognize and identify voices, images, vocal tones and facial expressions, and develop a response that will go beyond a databank-based response system. Innocently disruptive technology, is it not? It will, however, not end there. This is evident to some of us who are concerned or already traumatized, while the rest of us – those who are simply uninformed or unserious – are blissfully busy picking our noses, imagining that all science is good science.
The idea that is now quite current is that data-based analytics – the business of spewing out aggregated responses to input data – is not adequate any longer in the world we are intending to live in. Supercomputers based on classical binary logic are already somewhat passé. We are on the threshold, we are told, of quantum computers. In other words, computers built on qubits – quantum bits that can be held in superposition. Or to put it another way, computers that operate in a two-in-one state, a superposition of 1 and 0, rather than on two separate entities like 1 and 0. Computers that make their own selection criteria while being fed data. They can essentially operate on both values at once, with a mind of their own operating outside the realm of binary logic.
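The “two-in-one state” described above can be made concrete without any quantum hardware. The following is a minimal illustrative sketch, not a claim about any real quantum computer: a qubit is represented as a pair of complex amplitudes, the Hadamard gate puts a definite 0 into equal superposition, and the Born rule gives the odds of observing each outcome. All function names here are our own illustrative choices.

```python
import math

def hadamard(amplitudes):
    """Apply the Hadamard gate, which puts a basis state into equal superposition."""
    a, b = amplitudes
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure_probabilities(amplitudes):
    """Born rule: the probability of observing 0 or 1 is the squared amplitude."""
    a, b = amplitudes
    return (abs(a) ** 2, abs(b) ** 2)

qubit = (1.0, 0.0)       # starts as a definite 0
qubit = hadamard(qubit)  # now genuinely "both at once"
p0, p1 = measure_probabilities(qubit)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

The point of the toy: before measurement the machine is not storing “1 or 0” but a weighted blend of both, which is what lets quantum algorithms explore many branches at once.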
The quantum target, so to speak, is to develop machines that can be taught to make their own independent choices, learn the language of emotions and short attention-span reactions, and build the language systems that can create satisfaction (or anger) quotients in a fast-moving world. In short, machines will be taught code – sorry, the language that constitutes the building blocks of code – so that they can interact “without data” in a near intelligent and autonomous format. They will create their own machine language, develop their own “emotion.” In effect, they will be “armed” to create their own code, which could be totally indecipherable to the authors of the original code, and detached from its original intent.
Traditionally, the word “algorithm” has referred to rules-based programs that attempt to achieve a linear, exponential or logarithmic response by mining massive amounts of data. There are obvious limitations to such binary code-based decision-making systems, in terms of both data handling and encroachment on that threshold where emotions, feelings and instinctive impulses can be binarized. The shift to systems inspired by biological networks is significantly different. Artificial synaptic connections will release neuron-like signals that have their own decision-making thresholds. For example (and we are not being mischievous here), if a customer hasn’t yet reached a critical degree of irritation, ploys of counter-offers or alternatives may be used to push him/her further (which, incidentally, are quite “human” techniques that any car salesman could warm up to).
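The contrast drawn here between a databank lookup and a neuron with its own firing threshold can be sketched in a few lines. This is a hypothetical toy, not any vendor’s actual module; the signal names, weights and the 0.7 threshold are assumptions chosen purely for illustration.

```python
def rules_based(keyword):
    """Classic databank response: an exact match in a lookup table, or a default."""
    table = {"refund": "offer_coupon", "cancel": "escalate"}
    return table.get(keyword, "default_reply")

def neuron_response(signals, weights, threshold=0.7):
    """Neuron-like unit: weighted evidence of irritation fires only past a threshold."""
    activation = sum(s * w for s, w in zip(signals, weights))
    return "escalate" if activation >= threshold else "counter_offer"

# Hypothetical inputs scaled 0..1: voice pitch, email sentiment, facial cue.
signals = [0.4, 0.3, 0.2]
weights = [0.5, 0.3, 0.2]
print(neuron_response(signals, weights))  # activation 0.33, below 0.7 -> counter_offer
```

The difference is exactly the one the paragraph names: the lookup table can only answer what it was given, while the threshold unit blends several soft signals into a single go/no-go decision – and keeps pushing counter-offers until the customer crosses the line.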
The original intent of Artificial Intelligence (AI) experimentation was to come closer to the functioning of the human brain in decision-making processes – a matter of efficiently “handling” large data, as it has now come to be referred to. Today, with an increasing ability to process larger amounts of information, we are looking at adjusting the intelligence of the machine itself.
In other words, if we leapfrog forward somewhat – let’s say, as humans – to a point in time where humanity is on the verge of going criminally insane… then instead of applying biological or therapeutic antidotes to insanity, an artificially intelligent machine may try to adjust the insanity level by adjusting its own ethics and morals (using self-diagnosis), creating a state of “being” that could unleash ummm… a robot apocalypse! Ok! That may be a bit far-fetched, right? Wrong!
Last month, when Chinese scientists split an entangled photon pair and sent one partner into space on a satellite 1,400 km away, the two began changing spin direction instantaneously in relation to each other when one of them was tweaked on the ground, and for the first time teleportation was no longer a Star Trek script word in hippie physics. And did we talk about it much? Not really. We’ve now become gradually immune to impossible physics being made casually possible. Also, coincidentally, our attention span and landmark-achievement retention capabilities are now indescribably fleeting, and our responses are non sequiturs, so to speak. Laika, Yuri Gagarin and Neil Armstrong are as resonant in our memory banks today as the explanation of gravity, momentum and buoyancy was a couple of centuries ago. We say, “When it happens, we shall see.” And when it does happen, we say, “Ho hum. So what!”
If there is to be an alliance of the unfettered and scientific with the conniving and domineering camp, then this alliance between artificial intelligence and the deep, dark world of military strategists would qualify as the most unholy of alliances. The imperative of regulating AI is undeniable. For my part, I will choose to delude myself that clearer heads will prevail and the sanctimonious new kids on the block won’t meddle in artificial ethics. I will therefore end on an optimistic note:
“Perhaps it was a passing moment of madness after all. There is no trace of it any more. My odd feelings of the other week seem to me quite ridiculous today: I can no longer enter into them.”
Jean-Paul Sartre, Nausea