This video illustrates another practice in robotics that enhances and distorts the human gait: the exoskeleton. In the tradition of bionics, the wearer straps a motorised assemblage to their body; the device senses nerve signals running through the limbs and amplifies them into movements. It is designed for people with impaired mobility (a broken leg, old age, etc.) and for rehabilitation. The “Hybrid Assistive Limb” (HAL) is being developed by Japanese scientists at Cyberdyne Inc. and by Professor Sankai of the University of Tsukuba.
The very deliberate (robotic) gait that wearers adopt when strapped into this device is reminiscent of cinematic clichés about how robots move. Rather than allowing free movement in between steps, the device regulates the gait while giving enhanced strength.
(thanks to Andrew Murphie for the link)
The robot from Cornell University in this video ‘generates a conception of itself’ and improvises ways of moving around. Its design has been left deliberately incomplete, and the robot itself finishes the design: as it starts up, it moves all its parts to establish its own morphology. If it is damaged or reorganised, it can adapt to its new body and still improvise a way of getting around.
Unlike the programmed gaits in the previous Following Robots post, this robot belongs to a tradition of self-generative designs. In the documentation, the developers emphasise that this robot generates internal models — diagrams in the robot’s mind that represent its body. The principle of creating mathematical models of the robotic body (and of the artificially intelligent mind) is the dominant approach to designing self-aware autonomous systems.
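To make the idea of an internal model concrete, here is a minimal sketch (in Python, with a toy one-dimensional ‘body’ and invented parameters) of the kind of continuous self-modelling loop the documentation describes: the robot holds several candidate models of its own body, tests them against what its actuators and sensors actually report, and keeps whichever model predicts its movements best. This is an illustration of the principle, not the Cornell team’s actual algorithm.

```python
import random

# Toy "physical" robot: one joint whose true gain the robot does not know.
TRUE_GAIN = 0.7   # how far the body actually moves per unit of motor command

def execute(command):
    """Send a motor command to the (simulated) body and observe displacement."""
    noise = random.gauss(0, 0.02)
    return TRUE_GAIN * command + noise

# Candidate self-models: each guesses a different gain for the same joint.
candidate_gains = [0.2, 0.5, 0.7, 1.0, 1.3]
errors = {g: 0.0 for g in candidate_gains}

# Self-modelling loop: try exploratory actions, score each candidate model
# by how well it predicts what the body actually did.
for _ in range(20):
    command = random.uniform(-1, 1)
    observed = execute(command)
    for g in candidate_gains:
        predicted = g * command
        errors[g] += (predicted - observed) ** 2

best_gain = min(errors, key=errors.get)
print(f"best self-model gain: {best_gain}")

# The winning model can then be used to plan,
# e.g. what command should move the body 0.35 units?
planned_command = 0.35 / best_gain
print(f"planned command: {planned_command:.2f}")
```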
Against the internal model approach, an alternative view proposes bottom-up designs, such as in Simon Penny’s work (see his paper ‘Trying to be Calm: Ubiquity, Cognitivism and Embodiment’). This tradition critiques the assumption that robotic movement requires models, and that models explain robotic movement and ‘awareness’.
Watching this mangle of motors, sensors and connections struggle to its feet, it is clear that, whatever the mathematics of its internal model, the information in play comes from the bottom up. The gait is not calculated in the internal model and then applied to the outside. It is generated in the encounter of the robot with the gravity-bound world. The model is a vectoral diagram of the forces at play in the robot body, and the ‘model’ is inseparably part of the world.
How a robot walks, runs and jumps is critical to how it moves through its environment. Beyond these instrumental questions, how a robot moves can’t help establishing a sense of its perceived character. We’ve faced these questions of movement, embodiment and identity before — in animation. The problems of designing the gait of robots recall (and deviate from) the technique of creating walk cycles in cel animation, which date back to the earliest days of cinema.
Both robotic and animated bodies use rhythms to generate economies in movement. For animators, walk cycles can continue indefinitely to fill any duration in the linear sequence of a final animation. The walk cycle helps establish character by communicating the urgency, competence and mood in the figure’s movements. Shape, style and frame-to-frame changes give the character an implied history by adding deliberate distortions: squashing and stretching the body, and manipulating the apparent forces of acceleration, inertia and gravity on head, torso and limbs.
For the roboticist, a well-designed gait is also economical, because it allows the robot to establish rhythms in movement that maximise its use of energy. A well-tuned gait takes advantage of the dynamics in between the points at which the robotic motors activate. It uses the weight and inertia of the robot body to maintain balance and stability at speed. This is also read by observers as creating a kind of character. The aesthetic inevitably returns.
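As a rough illustration of what such a rhythm looks like in code, here is a minimal sketch (Python, with assumed frequency and amplitude values) of a simple rhythmic gait generator: two phase-locked oscillators produce alternating, periodic joint angles, so each ‘step’ flows into the next without being recomputed from scratch — much as an animator’s walk cycle loops to fill any duration. It is not drawn from any of the robots discussed above.

```python
import math

# A minimal rhythmic gait generator: two hip joints driven by oscillators
# half a cycle out of phase, like alternating left and right legs.
STEP_FREQ = 1.5          # steps per second (assumed value)
AMPLITUDE = 0.4          # joint swing in radians (assumed value)
PHASE_OFFSET = math.pi   # left and right legs alternate

def hip_angles(t):
    """Return (left, right) hip angles in radians at time t seconds."""
    phase = 2 * math.pi * STEP_FREQ * t
    left = AMPLITUDE * math.sin(phase)
    right = AMPLITUDE * math.sin(phase + PHASE_OFFSET)
    return left, right

# Sample one second of the gait at 10 Hz: the rhythm repeats indefinitely.
for i in range(10):
    t = i / 10
    left, right = hip_angles(t)
    print(f"t={t:.1f}s  left={left:+.2f} rad  right={right:+.2f} rad")
```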
In designing movement, the animator seems to begin with an aesthetic problem, whereas the roboticist seems to start with instrumental problems. Of course, the animator must resolve the aesthetic through technical means, whether that is the use of cameras and in-betweening or 3D computer animation (quite similar to robot simulators). The roboticist, on the other hand, cannot escape the aesthetic, as the human eye inevitably reads movement as life and finds in it a face and a character. See also Cholodenko 2007 and Sobchack 2009.
An example of an animated walk cycle (Todd Wheeler).
A fast-walking robot built by a researcher in Thailand, Weerayut.
Cholodenko, A., 2007. The Illusion of Life 2: More Essays on Animation. Power Publications.
Sobchack, V., 2009. Animation and automation, or, the incredible effortfulness of being. Screen, 50(4), pp. 375–391.
Van Breemen, A.J.N., 2004. Bringing robots to life: Applying principles of animation to robots. In Proceedings of the Shaping Human-Robot Interaction Workshop, CHI 2004.
‘But you can’t always be there…’
The mobile sensing system Mobileye uses a single camera, mounted on the windscreen, to judge whether the car is drifting out of the lane, or about to hit a vehicle, pedestrian, or kangaroo. It can give up to 2.7 seconds warning if it calculates there is a potential collision. But it may not see a wombat (too short), according to the person presenting this product at the Australian Centre for Field Robotics this morning.
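As an illustration of the kind of calculation behind that warning (not Mobileye’s actual, proprietary method), here is a sketch of a simple time-to-collision check: divide the estimated distance to the object ahead by the speed at which the car is closing on it, and warn if the result falls below the 2.7-second horizon quoted above.

```python
WARNING_THRESHOLD_S = 2.7  # warning horizon quoted in the presentation

def time_to_collision(distance_m, closing_speed_ms):
    """Seconds until impact, assuming a constant closing speed."""
    if closing_speed_ms <= 0:
        return float("inf")  # not closing on the object at all
    return distance_m / closing_speed_ms

def should_warn(distance_m, own_speed_ms, object_speed_ms):
    closing = own_speed_ms - object_speed_ms
    return time_to_collision(distance_m, closing) < WARNING_THRESHOLD_S

# Example: following a car 25 m ahead at 100 km/h while it slows to 60 km/h.
own_speed = 100 / 3.6
lead_speed = 60 / 3.6
print(should_warn(25.0, own_speed, lead_speed))  # True: roughly 2.3 s to impact
```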
This $2000 device from Israel seems most often to be designed to correct the driving behaviour of other people. It does so by beeping whenever it senses poor driving. You can avoid the warning beep by using the indicators properly before changing lanes. In the advertisement above, it is the woman’s rogue texting that would have made her crash her car; luckily the Mobileye operates as the surrogate eye of the father, and warns her of the danger. Another common use case is in truck fleets: if the device reports too many instances of a truck drifting out of its lane, the driver can be fired. ‘If you have a serial tailgater you can prove it and dismiss him for it.’
Some car manufacturers are already using Mobileye in production cars: BMW uses it only for keeping drivers in the lane, not for collision monitoring, while Volvo and GM are working on a system that will actually read road signs and warn speeding drivers.
The researchers at the ACFR were interested in adapting the Mobileye for use on robots. One limitation of the system is that it won’t recognise obstacles unless the vehicle is moving at over 5 km/h (presumably stereo-optical systems would work better here). This is unfortunate, because starting to move is precisely where robots have problems. The researchers also need to access the Mobileye as a data source, not as a black-boxed commodity system: they need information, not beeping. The presenter assured the roboticists that it should be possible to hack the eye so that it would be usable in that way (but they would need to contact the head office).
Most robots I’ve seen move at a very deliberate pace. The computational challenges of processing multiple signals and deciding what to do next (while not draining the battery too much) mean that most research robots take a long time to do pretty much anything. It is common for engineers to speed up the video of their projects just to make watching them tolerable.
The videos in this recent post on BotJunkie show that snail-like speed is not necessarily a feature of all semi-autonomous and autonomous robots. These ‘sumos’ are specialised fighting robots, under 3 kg, that need to move extremely fast for a short time.