Robot searching in belief space: field robots and their contingent encodings of unknown environments [CODE Abstract]
CODE conference: A Media, Games & Art Conference, Swinburne – 21-23 Nov 2012
Robotics research since the 1980s has been establishing codes, conventions and practices that are likely to govern a generation of autonomous robots that is becoming ready for the field. Today’s engineering choices will define the domains of possibility for robots that will inhabit domestic, public and professional spaces in the future. Among their distinctive features are algorithms that degrade gracefully to allow robots to act in environments that they do not fully ‘understand’.
Field robots are distinguished from industrial robots by their capacity to sense, encode and move around unfamiliar spaces. If robots are a kind of medium, their defining features are their capacity to sense and measure new spaces autonomously, identify salient features, and calculate optimal pathways to move and act. The ‘optimal’ pathways calculated from on-board sensor data are necessarily imperfect, but because the robot is a physical entity, its agency must always be recoverable. In one engineering approach to this problem of imperfect information, the robot is said to translate space using ‘heuristic search in belief space’ (Bertoli & Cimatti 2002), where belief space is a kind of formalism of contingency opening onto a certain uncertainty. There is a poetry in engineering discourses as they grapple with the unpredictable and the infinite.
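To make the formalism a little more concrete: in its simplest (‘conformant’) form, search in belief space operates over sets of possible states rather than single states. The toy Python sketch below is my own invented illustration, not Bertoli & Cimatti’s formulation; the grid, the names and the deterministic movement model are all assumptions. The robot knows the map but not where it is, and searches for a plan that succeeds from every possible starting position.

```python
from collections import deque

# Toy conformant planning: the robot knows the map but not its own
# position, so it searches over SETS of possible locations (beliefs)
# rather than over single states. Grid and names are invented here.
GRID = [
    "#####",
    "#...#",
    "#.#.#",
    "#...#",
    "#####",
]
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(cell, move):
    """Move one cell in the given direction; walls block (robot stays put)."""
    r, c = cell
    dr, dc = MOVES[move]
    nr, nc = r + dr, c + dc
    return (nr, nc) if GRID[nr][nc] == "." else (r, c)

def plan_in_belief_space(goal):
    """Breadth-first search over belief states (frozensets of cells)."""
    start_belief = frozenset(
        (r, c) for r, row in enumerate(GRID)
        for c, ch in enumerate(row) if ch == "."
    )
    queue = deque([(start_belief, [])])
    seen = {start_belief}
    while queue:
        belief, plan = queue.popleft()
        if belief == frozenset([goal]):   # certainty: one possibility left
            return plan
        for move in MOVES:
            next_belief = frozenset(step(cell, move) for cell in belief)
            if next_belief not in seen:
                seen.add(next_belief)
                queue.append((next_belief, plan + [move]))
    return None

print(plan_in_belief_space((1, 1)))
```

Pushing against walls collapses the belief set, so the plan itself engineers the uncertainty away: the ‘certain uncertainty’ rendered as set arithmetic.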
As autonomous technical actors become able to adapt to unpredictability, they themselves become less predictable, moving from striated to smooth spaces (Deleuze & Guattari 1987), and from the high degree of control characteristic of simulation to systems of code that are adaptable through constant adjustments and compensations. Unlike the GUI of personal computers, robots will not necessarily present users with interactive interfaces. Instead, the robot has its own parasocial integrity and autonomy. However, the conventions for relationships with human actors sharing the same physical and social spaces as field robots have yet to be clearly defined.
This paper will explore these ontological and ethical questions about the operation of code in the world as manifest in field robots.
Notes for Chris Chesher on ABC Northwest (Karratha)
September 3, 2012.
At 10.30am I talked with Cristy-Lee Macqueen from ABC Northwest.
Mine sites are changing, as robotic technologies are taking on communication and control roles previously held by people. These changes have been coming for some time, but there has recently been a shift from trialling autonomous systems towards using them in production.
In 2008 the first autonomous trucks were introduced experimentally, carrying waste products at Rio Tinto’s West Angelas mine. The trials seem to have been a success: between them, the five Komatsu autonomous trucks covered 570,000 kilometres over 897 days at work until February this year.
The old model: Komatsu 830 with human drivers.
In late 2011, the autonomous trucks were reassigned, entering the iron ore production process along with five new trucks, hauling ore at the Junction South East pit of Rio’s Yandicoogina mine.
These ten trucks will undoubtedly be joined by more new autonomous trucks. Rio Tinto reached an understanding with Komatsu in November 2011 to buy 150 Komatsu Autonomous Haulage System trucks over the following four years. It’s not clear what impact the iron ore price slump will have on these acquisitions, or how they will fit into Rio’s overall processes.
Komatsu’s documentation shows that these imposing trucks are fitted with a range of sensors that allow them to operate safely and accurately. They use laser, radar, GPS and communications systems to follow a digital map of the mine site with high precision. The trucks are coordinated from Rio’s control centre 1500 km away, in Perth.
In addition to these developments, Rio has committed over $400 million to automating trains over the next few years. Other parts of the mining process, such as drills, are being automated, or being tagged with location beacons.
Safety is one of the motivations for introducing autonomous systems. A driverless vehicle can’t injure the driver. Autonomous systems don’t have lapses in attention, or drive erratically.
Another reason is to increase production efficiency. Autonomous trucks don’t take breaks. They don’t need to work in shifts. Together, these autonomous systems can work towards the goal of continuous production, where the mine produces an uninterrupted stream of ore.
I’m an academic at the University of Sydney. I am here in Karratha trying to get a sense of how people in the Pilbara feel about the changes to mining work as mining automation is introduced. I’d appreciate it if anyone with experience or opinions about mine automation would call in. I’m recording this program, and I’d like to use the transcript in my research. You can find out more about my project on my blog https://followingrobots.wordpress.com
Whether or not these goals of safety and efficiency are achieved, it seems likely there will be changes to the experience of mining, and these may affect the social life of mining towns.
To bring up a very different example, when mobile phones became available, they seemed at first to be just a phone you could carry around. In fact, they were quite different from fixed phones. They allowed people to change the way they organised their lives. Rather than make detailed arrangements ahead of time, people with mobiles could easily change plans at the last minute. With smart phones, people could make images and change them, making their own media.
Of course, an automated mine is very different from a community of mobile users. The control centre (opened in 2010) gathers detailed information across several mine sites, centralises control, and provides a place for collective expert decision-making. Remote operation allows operators to take over some stages of production, and allows a small number of people to control many machines. The mine site increasingly becomes a rationalised, controlled and regulated rock factory.
Advocates point to potential benefits of automation for workers. It can take away dangerous, dull and dirty work that nobody wants to do. Mine automation may reduce risks of injury and death. By reducing workers on site, it may reduce fly-in-fly-out work, allowing expert operators to work in urban control rooms. This may take social and economic pressure away from remote mining communities. See also BAEconomics Report.
But there are some potential drawbacks: some people may lose their jobs to autonomous systems, and these changes may raise industrial tensions. The high degree of control over mine sites may be extended into new expectations of those working alongside autonomous systems. The dependence on planned communications systems and GPS-guided technology may bring some fragility to autonomous operations, in comparison with the more resilient and adaptable human-operated systems.
The long term implications of large scale use of autonomous systems are yet to be revealed. As WA will soon host the largest fleet of autonomous mining vehicles in the world, the unanticipated implications, and the qualitative shifts in mining practices, are likely to play out here.
If you have experience or opinions about mining automation, please leave your comments below. I may use these comments in my research.
Recently I presented a paper called ‘Materialising robot platforms’ on the affordances, environments and networks of three Korean service robots. The topic of my paper was something of an outlier in a conference called ‘Platform Politics’ at Anglia Ruskin University, Cambridge, organised by Jussi Parikka and Joss Hands.
Most other papers identified either with political theory and technology, or with platform studies: analysing how the underlying technological infrastructures play out in fostering certain social and political outcomes. My paper was closer to the latter category, examining in particular some of the political implications of technological artefacts: the placement of sensors and motors in robots that respond to touch, allow remote teaching, and bow to indicate subservience.
The conference was video recorded in a fairly rudimentary way using UStream, and it is hard to follow the paper from the video. The abstract is below (although of course this doesn’t really reflect what I talked about).
Chris Chesher

Research and development in robotics is currently developing a range of network-connected material platforms. This practice is producing robots increasingly tuned towards particular lifeworlds: language teaching robots in classrooms; service robots in public spaces; container-handling robots in ports; rescue robots in earthquake zones, and so on. These specific platforms diverge significantly from the general-purpose robot of popular imagination: robots are made increasingly real as they are formed by their multiple attachments across physical, social and institutional spaces.

This paper draws on recent interviews with researchers at the Australian Centre for Field Robotics, and with company representatives at the Robotworld tradeshow in Korea. The interviews examine the rhetoric and practices by which robot platforms are increasingly blackboxed as technical innovations, in ways informed by narratives of the application environments and by strategic connections with institutional networks. A robot platform is constituted by a singular combination of elements: sensors, operating systems, programming and effectors (motors, screens, speakers, etc). However, these components must work together to create a robot that can perform as an autonomous actor, forming relations within specific environments. In talking about the robots, engineers, developers and salespeople often provide rich narratives featuring the robots in particular physical and social environments. Developers are also aware of the institutional connections in operation that will be crucial in securing the robot’s current and future existence. The Korean company Dasarobot’s English language teaching robot must capture the interest of teachers, while operating outside the company’s direct affiliations with schools.

Development communities are establishing core features of contenders for future robot platforms, abstracted below the level of particular applications. For example, many robots use similar autocharging systems to respond autonomously to the common problem of a low battery. Some robots use custom operating systems, while others adopt shared robot operating systems, such as Willow Garage’s open-source ROS or Microsoft’s Robotics Developer Studio. The range of issues in robotic platforms gives the problem of software platforms a material base, as seen in the collaborations and conflicts between the key mechatronics disciplines of software engineering, mechanical engineering and electrical engineering. Meanwhile, as robotic platforms stabilise, there are increasing enrolments of other disciplines: media art; media practice; performance; design; marketing; cinema and so on.
Any robot that moves, performs. But those robots that are built or programmed explicitly to perform can accentuate a repertoire of multiply articulated gestures with naturalistic movements and interaction.
One of the hit exhibits at the technology trade show CeBIT 2011 in Hannover in March was the performing RoboThespian by Engineered Arts from Cornwall. This gangly robot performer was connected to a Microsoft Kinect games controller so it could read the body movements of visitors. It has a certain cheekiness, and a Shakespearean repertoire. Its movements are somewhat more explosive than those of many robots. The designers also exploit lighting and stage sets to good effect.
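The Kinect connection implies a mapping from tracked visitor joints to robot actuator targets. The sketch below is my own toy illustration of that idea, not Engineered Arts’ actual pipeline; all joint names, servo names and ranges are invented assumptions.

```python
# Toy sketch: depth-camera skeleton joints driving a robot's actuators.
# Joint names, servo names and the mirroring scheme are invented here.

def to_servo(angle_deg, lo=-90.0, hi=90.0):
    """Clamp a tracked human joint angle into the servo's safe range."""
    return max(lo, min(hi, angle_deg))

# One 'frame' of tracked visitor joint angles (degrees), as a skeleton
# tracker might report them.
visitor_pose = {"left_elbow": 120.0, "right_elbow": -30.0, "head_yaw": 15.0}

# Map each human joint to a robot actuator, swapping left and right so
# the robot mirrors the visitor facing it.
MIRROR = {"left_elbow": "right_elbow_servo",
          "right_elbow": "left_elbow_servo",
          "head_yaw": "neck_servo"}

robot_targets = {MIRROR[joint]: to_servo(angle)
                 for joint, angle in visitor_pose.items()}
print(robot_targets)
```

Even in this reduced form, the clamping step matters: the performance is bounded by the safe envelope of the machine, not by what the visitor’s body can do.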
RoboThespian was built by a company of ten, and engineered over seven years. At least twenty have been installed, including one at Questacon in Canberra.
Another notable recent robotic performance was at TED, featuring Aldebaran’s NAO playing a robot stand-up comic called Data. He was partnered by Heather Knight from Marilyn Monrobot Labs. Data tells a number of pretty old jokes (but then, he wasn’t yet invented when they were new), and apparently uses software developed at Carnegie Mellon to respond to audience reactions.
It’s apparent that the audience’s experience of the robot’s performance is distinct from their experience of the uncanny appearance of an ultra-realistic robot such as Hiroshi Ishiguro’s.
At another level, Knight’s use of Nao as Data shows that robotic innovation can legitimately take place in software alone.
The Prius accelerates, tyres squealing as it hits the first turn. The car navigates expertly around a course constructed on the vacant top level of a car park in Long Beach, California. Then the camera moves across to reveal that the steering wheel is spinning on its own, between the driver’s fingers. The wheel remains magically out of his grasp as the G-forces throw around the passenger in the driver’s seat.
This is a close-up of the kind of robot car that Google first talked about last year, and was reported in the New York Times among other outlets. This demo is appropriately connected with a TED event. This is not only a demonstration that the car works. It’s a geeky expression of robot car machismo.
This video illustrates another practice in robotics that enhances and distorts the human gait: the exoskeleton. In the tradition of bionics, wearers strap a motorised assemblage to their body; the device senses the nerve signals running through the limbs, and amplifies these into movements. It is designed for people with poor mobility (a broken leg, the aged, etc.) and for rehabilitation. The “Hybrid Assistive Limb” (HAL) is being developed by Japanese scientists at Cyberdyne Corporation and Professor Sankai of Tsukuba University.
The very deliberate (robotic) gait that wearers adopt when strapped into this device is reminiscent of cinematic clichés about how robots move. Rather than allowing free movement between steps, the device regulates the gait while giving enhanced strength.
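The sense-and-amplify loop can be reduced to a caricature in a few lines. This is my own invented sketch, not HAL’s actual control scheme (which Cyberdyne does not publish in this form); the gains, limits and joint names are all assumptions.

```python
# Toy sketch of an exoskeleton assist loop: read a (simulated) bioelectric
# signal at each joint and amplify it into motor torque, clamped to a
# safe limit. All names and numbers are invented for illustration.

ASSIST_GAIN = 3.0      # how strongly the suit amplifies the wearer's effort
TORQUE_LIMIT = 40.0    # hypothetical safety clamp (N*m)

def assist_torque(signal):
    """Map a bioelectric signal (arbitrary units) to clamped motor torque."""
    torque = ASSIST_GAIN * signal
    return max(-TORQUE_LIMIT, min(TORQUE_LIMIT, torque))

# One sampled 'step': knee flexing, hip extending, a strong ankle burst.
readings = {"knee": 8.0, "hip": -5.0, "ankle": 20.0}
torques = {joint: assist_torque(s) for joint, s in readings.items()}
print(torques)
```

The clamp is what produces the regulated, ‘robotic’ quality of the gait: effort beyond the device’s safe envelope is simply cut off rather than amplified.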
(thanks to Andrew Murphie for the link)
The robot from Cornell University in this video ‘generates a conception of itself’ and improvises ways of moving around. At startup, the design has been left incomplete, and the robot itself finishes the design. As the robot starts up, it moves all its parts to establish its own morphology. If it has been damaged or reorganised, it can adapt to its new body and still improvise getting around.
Unlike the programmed gaits in the previous Following Robots post, this robot belongs to a tradition of self-generative designs. In the documentation, the developers emphasise that this robot generates internal models — diagrams in the robot’s mind that represent its body. The principle of creating mathematical models of the robotic body (and of the artificially intelligent mind) is the dominant approach to designing self-aware autonomous systems.
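As a toy of what ‘generating an internal model’ means in its simplest case: the robot probes its own body with motor commands, fits a model from the observed outcomes, and re-fits after damage. Everything below (the one-parameter body, the names) is my own invented sketch, not the Cornell algorithm, which evolves far richer morphological models.

```python
import random

# Toy self-modelling loop: probe the body, fit an internal model, use it,
# then re-fit after 'damage'. The one-parameter body is invented here.

class Body:
    """The real, possibly damaged, body: displacement = gain * command."""
    def __init__(self, gain):
        self.gain = gain
    def execute(self, command):
        noise = random.uniform(-0.01, 0.01)   # sensor/actuator noise
        return self.gain * command + noise

def fit_self_model(body, trials=20):
    """Estimate the body's gain by least squares over random probes."""
    num = den = 0.0
    for _ in range(trials):
        c = random.uniform(-1.0, 1.0)
        d = body.execute(c)
        num += c * d
        den += c * c
    return num / den

random.seed(0)
body = Body(gain=2.0)
model_gain = fit_self_model(body)
command = 1.0 / model_gain        # use the model: aim to move 1.0 units
print(round(body.execute(command), 2))   # close to 1.0

body.gain = 0.5                   # 'damage' the morphology
model_gain = fit_self_model(body) # the robot re-derives its self-model
command = 1.0 / model_gain
print(round(body.execute(command), 2))   # close to 1.0 again
```

Even in this caricature, the Penny-style objection is visible: the ‘model’ is nothing but a compression of encounters between body and world, with no existence prior to them.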
Against the internal model approach, an alternative view proposes bottom-up designs, such as in Simon Penny’s work (see his paper ‘Trying to be Calm: Ubiquity, Cognitivism and Embodiment’). This tradition critiques the assumption that robotic movement requires models, and that models explain robotic movement and ‘awareness’.
Watching this mangle of motors, sensors and connections struggle to get to its feet, irrespective of the mathematics of its internal model, the information in play clearly comes from the bottom up. The gait is not calculated in the internal model and then applied to the outside. It is generated in the encounter of robot with the gravity-bound world. The model is a vectoral diagram of the forces at play in the robot body, and the ‘model’ is inseparably part of the world.