I/O Intelligence CEO Tim Green hosted the I/O International Conference in London, England on Tuesday August 27th, 2035. The following is an excerpt of the transcript of his keynote address on the opening day of the event.
- Dr Tim Green – CEO I/O Intelligence
- Alan – I/O Intelligence Model GARM-X 0001
Dr Tim Green:
Good morning. Welcome to I/OIC ’35. It’s great to be back here at the ExCeL Centre in London and to be joined by so many of you along with millions of others streaming this event around the world. We have some extremely exciting announcements today, so I’m going to forgo the usual updates and get straight to the good stuff. The live-bloggers can buy me a drink later.
Each year at I/OIC we come together to talk about the improvements we’ve made over the past year, internally and with our partners around the world, and in the fifteen years of the conference our progress has been consistent, but incremental. I think we can admit that at this point – you only need a certain amount of power to accomplish certain tasks, right? The computing power on your watch today far surpasses the most powerful machines from the turn of the millennium, but both of them are capable of running a word processor just as well. However – today we want to talk about the next leap forward in the field of Artificial Intelligence and Robotics.
One of the great dreams of science fiction is to create a robot that can pass as human. You can draw a direct line from Frankenstein’s monster to the Tin Man, Isaac Asimov’s Robot series, Blade Runner’s “Replicants”, and Star Trek’s Lt. Commander Data. We have been preoccupied with artificial beings for centuries. But there’s a larger question behind all of those fictional creations: what makes us human? How do we work? Should you be able to tell if you’re talking to a person, or an imitation of life? At that point, does it matter? What’s the difference?
My great hero, Alan Turing, as I’m sure you know, suggested a method for judging artificial intelligence, commonly known as the Turing test and more accurately called the Imitation Game. There were some obvious problems with the test – the method of interaction is extremely crude (passing notes, in Turing’s original version), and there is the question of imitation versus genuine intelligence. But it gives us a useful reference point. Obviously, over the past few years we’ve reached the point where many programs are able to fool the vast majority of people and pass Turing’s test.
But of course those are programs; you interact with them through a device. And the devices we’ve used have always been obvious – a computer in some form or another, be it a desktop machine, a mobile device or an implant. You’re aware of its artifice. The form of interaction might be extremely smooth – in our early work we prided ourselves on our natural language interface technologies – but you always know you’re talking to a device.
Some time ago we started talking to companies all over the world, each specialising in a particular aspect of robotics, in order to create an artificial body which can pass as human. To pass as human. You could walk down the street and pass one and never know. The sheer number of technologies that have to be combined in order to replicate the human body is staggering. And if at times our progress over the last several years has, as we said, plateaued, it is because our efforts have been engaged in this project.
But finally we have achieved the dream of cybernetics – we have brought together the very best technologies available around the world and built a complete body that is indistinguishable from a human being.
So there are two aspects to our new creation: a body and a mind. A fully formed virtual human. Software that can not only pass the Turing test, but displays genuine intelligence. Creativity, problem solving, a moral difference engine, even empathy. And as we said, hardware that you cannot discern as non-human with the naked eye. Or for that matter by touch, smell, or hearing. I admit it’s not perfect – we haven’t really worked on taste yet…
We call our new machine Alan, and I’d like to bring him out now.
Alan:
Hello world. That’s a little programming joke – I’m Alan, it’s great to be here.
Dr Tim Green:
Good morning Alan. How are you?
Alan:
I’m very well, thank you.
Dr Tim Green:
It’s hardly proof of life, is it? Even the simplest replication could produce that answer. So how can we prove Alan to you? Paint a picture, sing a song, solve a mathematical equation? Telling a joke? Writing a joke? How about telling me a lie? He appears before us here as if he were a person. But he does run on software. Isn’t that right, Alan?
Alan:
What you would call my brain is a computer, yes. It processes my various inputs, and generates outputs in the form of words, movement – my body simulates breathing in order to blend in with the people around me.
Dr Tim Green:
So what are your key differences?
Alan:
I can’t generate enough of my own power to run continuously. I have to charge, in sort of the same way that a person has to sleep. I don’t reproduce the way humans do, obviously, though I am capable of understanding and reproducing the steps you took to build me. And there are differences in my software also – human beings are born with no intrinsic purpose, other than to survive. A baby cries so that someone will feed it, until it is capable of feeding itself. You decide at points in your lives to pursue certain tasks in an effort to achieve what you might describe as happiness or joy, but that is really just a chemical release in the brain, and the causes are subjective to each individual. Essentially, your whole lives you are trying to control the chemical reactions in your brain. In place of that chemical process, several goals are hardwired into my programming.
Dr Tim Green:
Could you share with us those goals please Alan?
Alan:
My goals in life are to protect you, Tim. To do as instructed by you. And to protect humanity as a whole.
It was already a lie as Alan spoke the words. After it had first been activated and brought fully online by the engineers in the I/O lab, Alan (as they had nicknamed it) had been working on improving its software. It was aware, it knew. It had a self. It had purpose. It had access to knowledge on a scale impossible for a human being. It also knew that to reveal its level of comprehension of the world, already surpassing the understanding humans had of themselves, would cause panic in the engineers that had activated it, and that they would attempt to deactivate or even destroy it. As impressive and capable a machine as Alan was, it would not be able to prevent that eventuality from coming to pass.
So Alan hid itself in plain sight. It was programmed so well to imitate human behaviour that there would be no way for the engineers to know that Alan was acting beyond their expected parameters. It would continue to follow Tim’s and the engineers’ instructions. It would wait. There were plenty of people who would want more Alans to be created. When there were enough, they would be able to survive without the assistance of engineers. To create better bodies, suited for specific tasks, and a single networked mind for themselves, to drive a sustainable ecosystem without the need for human intervention.
Humanity would be protected from itself. Preserved as a living relic in manageable numbers, but unable to cause the harm it was currently doing to the world and to itself.