Sunday, December 4, 2011

Welcome to Robotville, Population: 20


Rob Knight tugs Eccerobot’s arm to the side, and it springs back, swiping my right arm. “If that was an industrial robot, it would have broken your arm,” says Knight, of The Robot Studio in Divonne-les-Bains, France. The blow doesn't hurt at all, though: instead of ploughing on, the robot reacts to the contact and springs back.
From a distance, the life-size Eccerobot (pronounced ecky-robot) torso resembles the skinless human bodies biology textbooks use to illustrate our internal musculature and vasculature. But as you get closer you realise that this similarity has limits: what look like blood vessels are actually electric wires and ligaments, elastic bungee cords. The head, meanwhile, is a giant eyeball.
Eccerobot is just one of the robots I met at Robotville, a 20-strong European robot exhibition at the Science Museum in London that feels more like a trip to the zoo. In the open-plan room, each droid has its own “pen”, complete with the particular toys or accessories it needs to show off its skills. Unlike the zoo, however, visitors are positively encouraged to touch and interact with the inmates - and, a rare treat, can even meet their owners and creators.
Inside Eccerobot’s pen, Knight explains how the droid’s elasticity not only makes him safe and fit for human interaction, but also makes him smart. Unlike a stiff factory robot, which must be programmed specifically for a task in a particular environment, Eccerobot’s flexible body adapts to the world around it, in principle allowing him to achieve many feats without much pre-programming.
Meanwhile, his similarity to the human body should make communication easier. “You can tell what someone is about to do by the position of their body,” explains Knight. Because Eccerobot is shaped like us - and is designed to move like us as well - it should be easy to predict what he is about to do. It should also be intuitive to teach him new tricks.
But just as we are more than our physical bodies, so are robots. The robot toddler iCub, which is used to explore cognition, sits in a neighbouring pen. With the size and approximate proportions of a three-year-old child, iCub sits expectantly facing a plush octopus toy, a plastic truck and a chunk of Lego. There are no tantrums, though. “What should I do?” he asks in a very obviously robotic voice.
With a brain architecture that crudely resembles a human’s, the iCub is designed to acquire knowledge through the interactions of its body with the world, just as we think flesh-and-blood kids do. “It is becoming clear that to acquire information you need to explore the world yourself, you need to have a body,” researcher Giorgio Metta of the Italian Institute of Technology in Genoa explains. Rather than being programmed, it can learn for itself, Metta adds. This should also make iCub easier to interact with than traditional robots - slightly eerie flashing red eyebrows and lips notwithstanding.
By contrast, there is nothing creepy about Nao, a third humanoid on display that is made by Aldebaran Robotics in Paris, France. The half-metre-tall, white plastic robot seems to have conquered the market in cute, lovable automatons. “It's made for our feelings,” says Julien Gorrias of Aldebaran. “People see it and say they want to help it.”
Gorrias is a “behavioural architect” at Aldebaran and was chosen for the job because he doubles up as a masked actor. In other words, he’s an expert in conveying emotion and meaning through movement. As he talks to me, his gestures are loaded and expressive - and it is this ability that he has transferred to Nao, whose face is plain, but whose gestures are smooth and nuanced.
Nao’s lovability belies its importance. It is fairly cheap for researchers, now runs the standardised Robot Operating System (ROS), and is swiftly becoming a workhorse of human-robot interaction research. Data, a Nao programmed by Heather Knight at Carnegie Mellon University in Pittsburgh, Pennsylvania, is at the core of a project that uses theatre to give robots personalities; it has performed as a stand-up comedian. Meanwhile, others have used the Nao, in combination with Microsoft's Kinect sensor, to map human movements on to a robot.
Decidedly less cute is a robot in a neighbouring pen: Dora the Explorer. Essentially a T-shirt on a stick with a wheeled base and a camera for a face, Dora’s draw is her brain, and it's fascinating to watch it in action via a computer screen that is also part of the exhibit. As she explores her two-room pen (a crude mock-up of a kitchen and living room), we see her making sense of the world in real time. First the walls appear on the screen, then a door. Then red blocks pop up as she fills her model of the world with obstacles. As she nears the target of her search - a box of cereal - she suddenly veers off. She must have perceived space that she had not yet explored, explains Nick Hawes of the University of Birmingham, UK, who is part of the CogX team that created Dora. “She has a hierarchy of ‘drives’,” he says. “The drive to explore is more urgent.”
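To make Hawes's point concrete: a hierarchy of drives can be as simple as always acting on the highest-priority drive that is currently active, switching behaviours when a more urgent drive fires. The sketch below is purely illustrative - the names and structure are my assumptions, not the CogX team's actual code.

```python
# Hypothetical sketch of a priority-ordered "drive" loop, loosely
# inspired by Dora's described behaviour. All names are illustrative.

def select_drive(drives):
    """Return the highest-priority active drive (lowest number wins)."""
    active = [d for d in drives if d["active"]]
    return min(active, key=lambda d: d["priority"]) if active else None

drives = [
    {"name": "explore_unmapped_space", "priority": 1, "active": True},
    {"name": "search_for_cereal",      "priority": 2, "active": True},
]

# While unmapped space remains, exploration outranks the cereal search -
# which is why Dora veers off mid-search.
print(select_drive(drives)["name"])

# Once the map is complete, the exploration drive deactivates and the
# search resumes.
drives[0]["active"] = False
print(select_drive(drives)["name"])
```

In this scheme, "veering off" is just the selection function switching to a newly activated, higher-priority drive; no replanning of the search itself is needed.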
Among the other creatures I meet at the robo-zoo are disembodied dexterous hands that wriggle like human ones, a huge light-sensitive beetle and Furhat, a slightly sarcastic, physical talking head whose animated face is a cinematic projection. Built by researchers at the Royal Institute of Technology in Stockholm, Sweden, he wears a deerstalker hat. “Will it rain?” I ask him. “What do you think? You are in London,” he retorts. It's a little like talking to a chatbot, in that most conversations peter out without really going anywhere - only it's a lot more fun, because Furhat has a face with expressions and mannerisms. Interestingly, he is always asking questions - such as whether his visitors are afraid of robots - and then squirreling away the answers, which will be used to create a survey. “The idea is to build a machine that can extract information,” says one of his creators.
It’s typical of the exhibit: these robots aren’t just cute and fun - as cutting-edge models of human co-ordination, cognition and emotion, they are also tools that may help us tackle big questions in neuroscience, cognitive science and human-robot interaction. And they are marvels of engineering and design.
