With Terminator 3 about to hit cinema screens next week, STEPHEN LEWIS asks: could intelligent robots ever rule the Earth?
BIG Arnie is back, as he always promised. He may be older and creakier - he is 56 on Wednesday - but that is no problem for hunky Arnold Schwarzenegger in his latest Terminator outing. Arnie plays an obsolete T-101 fighting machine struggling to protect a grown-up John Connor against the state-of-the-art T-X cyborg, played by Kristanna Loken.
There's no contest, and Arnie gets a beating.
Robots have come a long way on film and TV since the days of Doctor Who's faithful sidekick K-9 - which was little more than a shoebox with a metallic doggy head and tail attached.
Since then, we've seen the likes of Star Wars' robotic pals R2-D2 and C-3PO, Star Trek's android spaceman Mr Data, and Robin Williams doing a one-man impression of the future of the robotics industry in the film Bicentennial Man - where he starts off as a clunky robot of limited intelligence and is gradually refined and developed into a sentient being indistinguishable from a man.
As budgets have got bigger, special effects have got better - who could forget Robert Patrick's eye-popping liquid metal cyborg in the second Terminator film? - but generally the trend has been for celluloid robots to become ever more human in appearance, while at the same time remaining incapable of emotion.
That seems to have become the film and TV stereotype, says real-life roboticist Dr Nick Pears of York University's computer science department. "It's as though the only thing specific to human beings is emotional response. Robots in film have all the other attributes of a conscious being, except emotion."
He says if it is possible to build an intelligent machine, there is no reason why it should lack emotion. "It is not clear whether you can have a conscious being minus an emotional response," he says. "What would motivate you? The motivation to do things is tied up with emotional responses."
So how closely do the movies reflect real-life robotics? Will we ever be able to build intelligent machines? Will they look like us? And, given that in the Terminator films intelligent machines take over the world and try to wipe out the human race, should we be afraid?
Current robotics technology is nowhere near being able to make something like the Terminator, admits Patrick Olivier of York software firm Lexicle.
Real-life robots are used in industry for things like carrying out routine, repetitive chores on car assembly lines - or, more excitingly, for loading and unloading the space shuttle in orbit with robotic arms.
They are basically complex machines that qualify as 'robots' only because they are equipped with sensors and can modify their actions slightly based on the feedback they receive. They certainly aren't intelligent - and they don't look remotely like people.
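To make concrete what "modify their actions slightly based on feedback" means, here is a minimal, purely illustrative sketch in Python - every name and number in it is invented, and it stands in for nothing more than the sense-and-adjust loop described above:

# Illustrative sketch only: a machine that is a 'robot' in the minimal sense
# the article describes - it reads a sensor and nudges its behaviour in response.
def read_position_sensor(target, actual):
    # Pretend sensor: reports how far the arm is from where it should be
    return target - actual

def assembly_line_step(target_position, arm_position, gain=0.5):
    # One cycle of a feedback loop: sense the error, correct a fraction of it
    error = read_position_sensor(target_position, arm_position)
    return arm_position + gain * error  # a small adjustment, not 'intelligence'

arm = 0.0
for _ in range(10):
    arm = assembly_line_step(target_position=10.0, arm_position=arm)
    print(round(arm, 3))  # the arm creeps towards 10.0 - adjusting, never thinking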
That doesn't, however, mean it will be impossible one day to make a machine as intelligent as a man. Quite the reverse.
It may be a couple of hundred years down the line, admits Patrick - who is working on developing artificial intelligences that can recognise and respond to speech - but the fact we exist and are clever proves it must be possible to build intelligent machines. Because that's exactly what we are.
"Human beings are an existence proof that it may be achievable," he says. "We are machines that perform all these functions (like thinking and feeling). Is it possible to build another machine that does it? It would be tough to say no."
The key, he says, is to recreate in a machine areas of human capability such as visual perception and language understanding.
Software programs have already been developed, he says, that can recognise certain elements of human speech and give an appropriate, programmed response. But that falls far short of building an artificial intelligence that genuinely understands language and can use it creatively.
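The kind of canned response Patrick describes can be sketched in a few lines. This toy example - with rules and replies invented purely for illustration - simply spots a keyword and returns a pre-written answer, which is exactly why it falls far short of understanding:

# Toy pattern-matching 'dialogue': spot a keyword, return a canned reply.
# Nothing here understands language; the rules are invented for illustration.
import re

RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello. How can I help you?"),
    (re.compile(r"\bweather\b", re.I), "I'm afraid I can't see outside."),
    (re.compile(r"\bname\b", re.I), "I don't have a name - I'm only a program."),
]

def respond(utterance):
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    return "I'm sorry, I didn't understand that."

print(respond("Hi there"))           # Hello. How can I help you?
print(respond("What's your name?"))  # I don't have a name - I'm only a program.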
To be truly considered 'intelligent', a machine or program would have to pass the Turing Test devised by British mathematician Alan Turing - which would require it to be able to make abstract, creative leaps, such as understanding in what way Charles Dickens' character Mr Pickwick is like Christmas. They are both, Patrick says, associated with warmth and fun. But recognising that requires abstract connections to be made between things that are, on the face of it, very dissimilar - and that requires huge amounts of information to be processed in our brains in ways we still don't really understand.
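That Pickwick-and-Christmas link can be faked in a trivial way. In the sketch below the association lists are typed in by hand and invented for illustration; the point is what it leaves out, since a real mind builds and weighs those associations itself from a vast store of knowledge:

# Toy 'association overlap': two very different things sharing attributes.
# The lists are hand-written for illustration - nothing is learned or understood.
associations = {
    "Mr Pickwick": {"warmth", "fun", "Victorian", "portly"},
    "Christmas": {"warmth", "fun", "winter", "presents"},
    "a tax return": {"deadlines", "forms"},
}

def shared(a, b):
    return associations[a] & associations[b]

print(shared("Mr Pickwick", "Christmas"))    # e.g. {'warmth', 'fun'}
print(shared("Mr Pickwick", "a tax return")) # set()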
No wonder we're finding it difficult to build artificial intelligences when we don't understand how our own works.
Ironically, says Nick Pears, in the quest to develop an artificial intelligence it is often the easiest things that are hardest, while things we thought would be difficult turn out to be easy. Thus, it is easy to produce software that is great at playing chess, an activity most people find very difficult; but very hard to build a robot that can 'see' well enough to navigate its way around a room without knocking into things - something we manage effortlessly (unless we've had a few too many down the pub).
Nick is working on the problem of making a robot that can 'see'. The reason it is so difficult, he says, is that turning the mass of information fed into our brain whenever we 'look' at something into a meaningful pattern involves sifting through and rejecting huge numbers of other possible patterns. Being able to recognise the object in front of us as a table, and then to judge exactly how far away it is so we can avoid it, is a hugely complex process.
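A deliberately naive sketch shows where the hard part hides. The toy robot below steers using a single simulated distance reading - but producing that one reliable number from raw images is precisely the 'seeing' problem Nick describes. All names and values here are invented for illustration:

# Naive obstacle avoidance with a pretend range sensor. Real machine vision
# would have to compute this distance from images - the genuinely hard part.
import random

def range_to_nearest_obstacle():
    # Stand-in for the depth estimate a vision system would have to produce
    return random.uniform(0.1, 3.0)  # metres, invented for illustration

def choose_action(distance):
    if distance < 0.5:
        return "stop and turn"
    return "move forward"

for _ in range(5):
    d = range_to_nearest_obstacle()
    print(f"{d:.2f} m ahead -> {choose_action(d)}")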
Building a robot that has a human-level intelligence is, however, the Holy Grail of robotics. Nick agrees that one day it will be possible - and adds there is no reason why we should stop there. Once we can create a machine that's as clever as us, it should be possible to make one that's even smarter.
It is worth trying, he says, because of what we can learn about ourselves in the process. "We would understand ourselves a lot better if we could create something in our own image," he says.
Some people worry that by trying to map how our brain works, we run the risk of "demystifying" our humanity and somehow lessening it. Nick doesn't agree.
"In all my work in robotics, the process of recognising how difficult it is to do even simple things makes me marvel at what a wonderful piece of engineering the brain is," he says.
But shouldn't we be afraid of creating a race of super-intelligent robots? We don't want them turning on us the way the machines do in Terminator.
It could be a risk, Nick concedes. But it is not something that is just around the corner. We won't see artificial intelligence in his lifetime, he believes - and long before we get there we will be ready for it.
"It is not something we have to worry about now," he says. "It intelligent robots ruling the Earth is not going to happen."
Updated: 11:10 Friday, July 25, 2003