Any robot that moves, performs. But robots that are built or programmed explicitly to perform can accentuate that movement with a repertoire of articulated gestures, naturalistic motion and interaction.
One of the hit exhibits at the CeBIT technology trade show in Hannover in March 2011 was the performing RoboThespian by Engineered Arts from Cornwall. This gangly robot performer was connected to a Microsoft Kinect games controller so it could read the body movements of visitors. It has a certain cheekiness, and a Shakespearean repertoire. Its movements are more explosive than those of many robots. The designers also exploit lighting and stage sets to good effect.
RoboThespian was built by a company of ten, and engineered over seven years. At least twenty have been installed, including one at Questacon in Canberra.
Another notable recent robotic performance was at TED, featuring Aldebaran’s NAO playing a stand-up robot comic called Data. He was partnered by Heather Knight from Marilyn Monrobot Labs. Data tells a number of pretty old jokes (but I guess he wasn’t invented yet), and apparently uses software developed at Carnegie Mellon to respond to audience reactions.
It’s apparent that the audience’s experience of the robot’s performance is distinct from their experience of the uncanny appearance of an ultra-realistic robot such as Hiroshi Ishiguro’s.
At another level, Knight’s use of NAO as Data shows that robotic innovation can legitimately take place in software alone.
This video illustrates another practice in robotics that enhances and distorts the human gait: the exoskeleton. In the tradition of bionics, wearers strap a motorised assemblage to their body; the device senses nerve signals running through the limbs and amplifies these into movements. It is designed for people with poor mobility (a broken leg, old age etc.) and for rehabilitation. The “Hybrid Assistive Limb” (HAL) is being developed by Japanese scientists at Cyberdyne Corporation and Professor Sankai of Tsukuba University.
The very deliberate (robotic) gait that wearers adopt when strapped into this device is reminiscent of cinematic clichés about how robots move. Rather than allowing free movement in between each step, the device regulates the gait, while giving enhanced strength.
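The sense-and-amplify principle at work here can be caricatured in a few lines. This is a hypothetical sketch, not Cyberdyne’s actual control scheme: the function name, gains, threshold and units are all invented for illustration.

```python
# Hypothetical sketch of an exoskeleton assist loop: read a bioelectric
# signal picked up at the skin, ignore readings below a noise threshold,
# and command a joint torque proportional to the signal, clamped to a
# safe maximum. All values here are illustrative only.

def assist_torque(signal_mv: float, gain: float = 2.0,
                  threshold_mv: float = 0.1, max_nm: float = 40.0) -> float:
    """Map a surface bioelectric reading (mV) to an assistive torque (Nm)."""
    if signal_mv < threshold_mv:
        return 0.0                                    # treat as noise
    torque = gain * (signal_mv - threshold_mv) * 100  # amplify intent
    return min(torque, max_nm)                        # safety clamp

# A faint signal gives no assistance; a strong one is clamped.
weak, strong = assist_torque(0.05), assist_torque(2.0)
```

The clamp and threshold are the telling design choices: movement only happens in discrete, bounded bursts of assistance, which is one way to think about why the assisted gait feels regulated rather than free.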
(thanks to Andrew Murphie for the link)
The robot from Cornell University in this video ‘generates a conception of itself’ and improvises ways of moving around. The design has deliberately been left incomplete, and the robot itself finishes it: as it starts up, it moves all its parts to establish its own morphology. If it has been damaged or reorganised, it can adapt to its new body and still improvise a way of getting around.
Unlike the programmed gaits in the previous Following Robots post, this robot belongs to a tradition of self-generative designs. In the documentation, the developers emphasise that this robot generates internal models — diagrams in the robot’s mind that represent its body. The principle of creating mathematical models of the robotic body (and of the artificially intelligent mind) is the dominant approach to designing self-aware autonomous systems.
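The internal-model loop can be caricatured in a few lines: the robot acts, compares what candidate body models predict against what its sensors actually report, and keeps the model that best explains its own behaviour. This is a toy sketch with an invented one-parameter ‘body’, not the Cornell team’s algorithm.

```python
# Toy sketch of a self-modelling loop: explore, compare candidate body
# models against sensed outcomes, keep the best-fitting model.
import random

def simulate(model: float, action: float) -> float:
    """Toy forward model: predict the sensor reading an action produces."""
    return model * action

def self_model(true_body: float, n_trials: int = 20) -> float:
    """Estimate the robot's own morphology (here a single parameter)
    by testing candidate models against real exploratory movements."""
    candidates = [random.uniform(0.0, 2.0) for _ in range(50)]
    errors = [0.0] * len(candidates)
    for _ in range(n_trials):
        action = random.uniform(-1.0, 1.0)   # exploratory motion
        sensed = true_body * action          # what the body actually did
        for i, m in enumerate(candidates):
            errors[i] += abs(simulate(m, action) - sensed)
    best = min(range(len(candidates)), key=errors.__getitem__)
    return candidates[best]

# After 'damage' changes the body, rerunning the loop re-estimates the
# new morphology, and a gait can be replanned against the revised model.
estimate = self_model(true_body=1.3)
```

The point of the toy is the shape of the loop, not the physics: morphology is inferred from the discrepancy between prediction and sensation, which is why the same loop also handles damage.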
Against the internal model approach, an alternative view proposes bottom-up designs, such as in Simon Penny’s work (see his paper ‘Trying to be Calm: Ubiquity, Cognitivism and Embodiment’). This tradition critiques the assumption that robotic movement requires models, and that models explain robotic movement and ‘awareness’.
Watching this mangle of motors, sensors and connections struggle to its feet, it is clear that, whatever the mathematics of the internal model, the information in play comes from the bottom up. The gait is not calculated in the internal model and then applied to the outside; it is generated in the encounter of robot with the gravity-bound world. The model is a vectorial diagram of the forces at play in the robot body, and the ‘model’ is inseparably part of the world.
Most robots I’ve seen move at a very deliberate pace. The computational challenge of processing multiple signals and deciding what to do next (while not draining the battery too much) means that most research robots take a long time to do pretty much anything. It is common for engineers to speed up the video of their projects to make watching them tolerable.
The videos in this recent post on BotJunkie show that snail-like speed is not necessarily a feature of all semi-autonomous and autonomous robots. These ‘sumos’ are specialised fighting robots under 3kg that need to work really fast, for a short time.
Picture the adapted bomb-disposal robot sitting at the mouth of the mine near Greymouth, New Zealand. A long optical fibre tail trails behind the Remote Positioning Device Wheelbarrow Revolution. So much is riding on its tracks to reveal the fate of the 29 miners, who have not been heard from since the explosion three days before. The robot begins to move, four cameras pointing into the darkness of the mine. It rides into the hole, but 550m in, water somehow gets into its electricals, and it breaks down, apparently beyond repair. The call goes out around the world for a rescue robot that can help (see ZDNet story and ABC story). Two more robots, from Australia and the US, converge on the mine site (story). The Australian robot, owned by Water Corporation in WA, is usually used for wastewater and drainage. It is controlled via optical fibre, giving it a 6km range.
Rescue robots are among the more compelling robot applications, particularly if they can prove themselves as reliable explorers of the places where people can’t go. They promise to reveal truth in the unknown, and provide hope where it is dwindling. In Japan, robot researchers have worked for some time on rescue robots, particularly for earthquakes and toxic/radioactive events. I met Prof Fumitoshi Matsuno in Korea recently. He said his research direction was transformed after experiencing the earthquake in Kobe in 1995. (Use Firefox 3.6.12 for translation on this page.)
RoboRescue competitions have been held since 2001 (e.g. RoboRescue 2009 and 2010). Competitors must perform a series of increasingly difficult tasks, such as traversing stairs, avoiding obstacles and finding victims. The UNSW CASualty team came second in the competition in 2009, and performed well in Mobility and Autonomy in 2010. This competition seems more connected to actual robot applications than the original RoboCup competition, founded in 1997 with the supposed aspiration of creating soccer-playing robots that can challenge World Cup human teams by 2050. Rather than compare robots with humans, and model human capabilities, it seems more likely that the differences will be the greatest sources of value.
The question with the robot in Greymouth must be the extent to which a robot designed for one application (e.g. searching a car for bombs) can be applied to another application (descending 2km into a mine). Will it be possible to build general purpose robots, or will it be more effective to develop special purpose robots for each task domain?
The HRP-4, ‘Diva-Bot’ robot singer, which premiered at the CEATEC Japan 2010 trade show in October 2010, is another in a series of virtuosabots. Virtuosabots deliver uncannily human performances, always mimicking a prized human talent: trumpet playing, violin playing or dancing to Bolero.
The robot designers often claim to aspire to create emotional resonances. For example, the YouTube text for the dancing robots claims: ‘This also marks the first time robots have supported an artistic field evoking emotions.’ It is an industry reaction to the accusation that their progeny are too robotic.
Virtuosabots extend a long tradition of automata that perform for human amusement. Performance sometimes makes the performer culturally admirable (Hollywood stars, and the high tech robot), but at other times being a performer requires a symbolism of deference and self-deprecation. The court clown, or the dancing ‘negro’ (see this clip of an early 20th century gramophone automaton toy) cast a perceived threat to the establishment as ridiculous and servile. Virtuosabots are not necessarily framed as ridiculous, but they have something of the function of a cultured native, in the mould of the early indigenous Australian Bennelong who was kidnapped and enculturated. These robots are natives from a perceived future. They make something exotic and potentially threatening feel safer and more familiar. They open communication to the non-human.
Kismet was an early robotic research project at MIT Media Lab that helped draw popular attention to the possibility of expressive communication between robots and people. Rather than head down the uncanny valley by making a robot appear human, Kismet’s designers created a shiny elfin face, more like a pet than a conversation partner. Kismet’s ears popped up and down like a dog that’s curious one moment and cowering the next. Only later in life did Kismet speak English.
The researcher most associated with Kismet is Cynthia Breazeal, a PhD student at the time, who is now on the faculty of Media Arts and Sciences at MIT. Her participation was essential to Kismet’s identity. Breazeal appears in a large number of videos documenting her research, showing her complex expressive interactions with Kismet, and discussing the significance of the work.
When cultural anthropologist Lucy Suchman (2006) visited the lab, though, Kismet was not so responsive. It seemed Breazeal and Kismet were inseparable.
…in contrast to the interactants pictured in the website videos, none of our party was successful in eliciting coherent or intelligible behaviors from it. Framed as an autonomous entity Kismet… must be said to have failed in its encounters with my colleagues and me… The contrast between my own encounter with Kismet and that recorded on the demonstration videos makes clear the ways in which Kismet’s affect is an effect not simply of the device itself, but of Breazeal’s trained reading of Kismet’s actions and her extended history of labors with the machine. In the absence of Breazeal, correspondingly, Kismet’s apparent randomness attests to the robot’s reliance on the performative capabilities of its very particular ‘human caregiver’.
Suchman’s observations align with my inclination to approach all human-robot relationships (and all human-technology interactions) as deeply relational. That is, both robots and humans are formed through their interactions. During Suchman’s visit, the unresponsive Kismet and the disappointed visitors are co-produced. What emerges is different with different actors. When a robot is touched by the instrumentalist hands of an engineer, the aesthetic gaze of the artist, or the sticky fingers of the child in the museum, a new hybrid assemblage comes out. The ‘same’ robot immediately becomes something else, depending upon the expectations, capabilities and resources of both robot and human. Even if the expressive resources of the Kismet include ‘a number of features for expressiveness, including movements of its eyelids, eyebrows, ears, jaw, lips, neck, and head orientation’ (Bekey 2005: 460), the intended meanings of the movements of these components are not necessarily available to all those who interact with him.
The singular distinctiveness of apparently sentient robots is not reducible to the sum of their expressive components, or to the performance of a certain system of code. It is their capacity to reach a threshold of empathic connection or co-presence that produces more intense human-robot connections. This occurs not so much because a robotic form replicates life-like behaviours, but because it is becoming humanoid, or becoming animal.
In a dense and explosive chapter of A thousand plateaus, Deleuze and Guattari (2004) develop a conceptualisation of becoming that is distinct from copying or sharing proportions.
…becoming is not to imitate or identify with something or someone. Nor is it to proportion formal relations. Neither of these two figures of analogy is applicable to becoming: neither the imitation of subject nor the proportionality of a form. Starting from the forms one has, the subject one is, the organs one has, or the functions one fulfills, becoming is to extract particles between which one establishes the relations of movement and rest, speed and slowness that are closest to what one is becoming, and through which one becomes. This is the sense in which becoming is the process of desire. This principle of proximity or approximation is entirely particular and reintroduces no analogy whatsoever. It indicates as rigorously as possible a zone of proximity or co-presence of a particle, the movement into which any particle that enters the zone is drawn (Deleuze et al 2004: 300-301).
Technological objects become socially present even without resembling people or behaving responsively. For their book The Media Equation, Reeves and Nass (1996) conducted a series of psychological experiments showing that people are polite to computers, feel their social space invaded by large screens, and attribute personalities to devices. By implication, the challenge for social robotics is not to reach the threshold of human-computer social interaction (which is almost a given), but to generate a becoming sentient with aesthetic, ethical and affective richness and resonance.
Bekey, G.A., 2005. Autonomous Robots: From Biological Inspiration to Implementation and Control, The MIT Press.
Deleuze, G., Guattari, F. & Massumi, B., 2004. A thousand plateaus, Continuum International Publishing Group.
Reeves, B. & Nass, C., 1996. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places, CSLI Publications and Cambridge University Press.
Suchman, L., 2006. Reconfiguring Human-Robot Relations. In The 15th IEEE International Symposium on Robot and Human Interactive Communication, 2006. ROMAN 2006. pp. 652–654.