One important area for mobile robots is how they interact with humans. Ideally, we would like a mobile robot that we could ask to "fetch us a beer from the fridge". Even simple interaction of this sort is complicated and requires considerable study. One of my active research areas is human-robot interaction for the Questacon tour-guide robot. Users should be able to interact with the robot even in Questacon's noisy environment, so we have been studying how to augment traditional speech recognition with visual cues. Work is on-going in face detection, face tracking and tracking of facial movements.
One basic task for a mobile robot is to be deployed in a new place and to build a map while exploring the unknown environment. This is the simultaneous localisation and mapping (SLAM) problem. The difficulty is that the map estimate is correlated with the robot's position estimate: any error in the robot's pose corrupts the positions of the landmarks it records, and vice versa.
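The correlation can be seen even in a one-dimensional toy example. The sketch below uses my own illustrative numbers, not our actual implementation: it keeps a joint state [robot position, landmark position] with covariance P, and shows that a single observation of the landmark *relative to the robot* makes the cross-covariance term non-zero, coupling the two estimates.

```python
# Minimal 1-D SLAM sketch (hypothetical numbers). State x = [robot, landmark],
# joint covariance P. The measurement is the landmark position relative to
# the robot: z = landmark - robot + noise.

def slam_update(x, P, z, R):
    """Kalman update for the relative measurement z, with noise variance R."""
    H = [-1.0, 1.0]                        # measurement Jacobian
    PHt = [P[0][0] * H[0] + P[0][1] * H[1],  # P @ H^T
           P[1][0] * H[0] + P[1][1] * H[1]]
    S = H[0] * PHt[0] + H[1] * PHt[1] + R    # innovation covariance
    K = [PHt[0] / S, PHt[1] / S]             # Kalman gain
    y = z - (x[1] - x[0])                    # innovation
    x_new = [x[0] + K[0] * y, x[1] + K[1] * y]
    # P_new = (I - K H) P
    IKH = [[1 - K[0] * H[0], -K[0] * H[1]],
           [-K[1] * H[0], 1 - K[1] * H[1]]]
    P_new = [[IKH[0][0] * P[0][0] + IKH[0][1] * P[1][0],
              IKH[0][0] * P[0][1] + IKH[0][1] * P[1][1]],
             [IKH[1][0] * P[0][0] + IKH[1][1] * P[1][0],
              IKH[1][0] * P[0][1] + IKH[1][1] * P[1][1]]]
    return x_new, P_new

# Robot position well known, landmark completely unknown:
x1, P1 = slam_update([0.0, 0.0], [[1.0, 0.0], [0.0, 100.0]],
                     z=5.0, R=0.25)   # robot sees the landmark 5 m ahead
```

After this single update the off-diagonal entry of P1 is no longer zero: the landmark estimate now depends on the robot estimate, which is the correlation problem described above.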
Many people have implemented SLAM (see above) for mobile robots using SICK scanning laser range-finders. While the SICK is ideal for this because of the accurate and detailed range information it provides, it has drawbacks: it is expensive, heavy and bulky. We are working on visual techniques for SLAM, using video cameras for input. Some simple but effective visual SLAM techniques have been developed using (visual) vertical edge features (ideal for indoor environments) and visually unique landmarks detected by our SAD template detector. Future work will investigate a feature detector suited to outdoor environments.
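To illustrate the idea behind SAD (sum of absolute differences) template matching, the sketch below exhaustively slides a template over a greyscale image and returns the position with the lowest SAD score. This is a generic textbook version, not our detector, and real implementations restrict the search window and subsample for speed.

```python
def sad_match(image, template):
    """Exhaustive SAD template search over a 2-D greyscale image.

    image, template: 2-D lists of intensities.
    Returns ((row, col) of the best match, its SAD score).
    """
    th, tw = len(template), len(template[0])
    ih, iw = len(image), len(image[0])
    best_score, best_pos = float("inf"), None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = sum(abs(image[r + i][c + j] - template[i][j])
                        for i in range(th) for j in range(tw))
            if score < best_score:      # lower SAD means a better match
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Toy example: the template appears exactly at row 1, column 1.
tpl = [[9, 1],
       [1, 9]]
img = [[0, 0, 0, 0],
       [0, 9, 1, 0],
       [0, 1, 9, 0],
       [0, 0, 0, 0]]
pos, score = sad_match(img, tpl)
```

A perfect match gives a SAD score of zero; in practice a landmark is only accepted if its best score is well below the second-best, so that the match is visually unique.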
Almost all mobile robot mapping algorithms assume that the environment is static. This is, of course, patently false. We are investigating techniques for mapping in dynamic environments, and also trying to discover what additional information the robot can extract from the dynamic aspects. Recent work has concentrated on detecting motion in the environment; the detections can be used both to discard measurements corrupted by moving objects and to track people. Detecting motion is hard because the robot itself moves, so apparent motion in the sensor data must be separated from the robot's own ego-motion. We have developed both visual and laser scanner-based techniques. This work ties in with the human-robot interaction aspects of the Questacon project.
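One laser-based approach can be sketched as follows: use odometry to project both scans' endpoints into a common world frame, then flag current points that have no nearby counterpart in the previous scan. This is an illustrative simplification (the poses, scan values and the 0.3 m threshold are invented, and real odometry is itself noisy), not our implementation:

```python
import math

def scan_to_world(pose, ranges, bearings):
    """Project laser endpoints into the world frame using the robot pose."""
    x, y, theta = pose
    return [(x + r * math.cos(theta + b), y + r * math.sin(theta + b))
            for r, b in zip(ranges, bearings)]

def dynamic_points(prev_pts, curr_pts, thresh=0.3):
    """Flag current points with no previous point nearby: likely motion."""
    moved = []
    for p in curr_pts:
        nearest = min(math.hypot(p[0] - q[0], p[1] - q[1]) for q in prev_pts)
        if nearest > thresh:
            moved.append(p)
    return moved

# Demo (hypothetical scans): a wall at x = 5 m and a person who moves from
# (3, 0) to (3, 1) while the robot advances 1 m along the x-axis.
prev = scan_to_world((0.0, 0.0, 0.0),
                     [5.0, math.hypot(5, 1), 3.0],
                     [0.0, math.atan2(1, 5), 0.0])
curr = scan_to_world((1.0, 0.0, 0.0),
                     [4.0, math.hypot(4, 1), math.hypot(2, 1)],
                     [0.0, math.atan2(1, 4), math.atan2(1, 2)])
moved = dynamic_points(prev, curr)   # only the person should be flagged
```

The wall endpoints land on the same world coordinates in both scans despite the robot's motion, so only the person is reported as dynamic; those flagged returns can then be excluded from mapping or handed to a people tracker.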
Considerable work has been done on recognising objects from images of the object. Visual object recognition is a hard problem and only limited success has been achieved, especially if one is interested in deployment in the real world. Humans use much more information than a single image of an object to recognise real objects; for example, we use both eyes to obtain range information as well as appearance information. This project is currently investigating the addition of range information to the object recognition problem. We are focussing on techniques suitable for real-time deployment on a robot, unlike many current approaches which require considerable computation. An advantage of a robot-deployed object recogniser is that the robot has the option of acquiring additional information (e.g. by changing viewpoint).
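One cheap way range information can help is a size-consistency check based on the standard pinhole camera model: an appearance match whose apparent size and measured range imply an implausible physical size can be rejected outright. The sketch below is only an illustration of that idea (the focal length, object sizes and 20% tolerance are hypothetical), not our recognition system:

```python
def physical_width(pixel_width, range_m, focal_px):
    """Pinhole-camera estimate of an object's real width (metres) from its
    apparent width in pixels and its measured range."""
    return pixel_width * range_m / focal_px

def size_consistent(pixel_width, range_m, focal_px, expected_m, tol=0.2):
    """Reject appearance matches whose implied physical size is wrong."""
    est = physical_width(pixel_width, range_m, focal_px)
    return abs(est - expected_m) <= tol * expected_m

# A 0.1 m-wide cup imaged at 1 m with a 500-pixel focal length spans ~50 px.
near_ok = size_consistent(50, 1.0, 500, 0.1)   # plausible cup
far_bad = size_consistent(50, 4.0, 500, 0.1)   # same pixels at 4 m: too big
```

The check costs almost nothing at run time, which is in keeping with the goal of real-time deployment on the robot.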
We humans appear to be able to recognise some objects by their colour; put another way, humans make far better use of colour information than robots currently can. Colour-based detection of objects (colour segmentation) is notoriously unreliable and can only really be achieved reliably under controlled lighting conditions. Humans seem to have a deeper understanding of colour and can utilise it more effectively. We are studying how colour is perceived by robots and how this colour sense can be improved. Note that humans can also be fooled (see the checker-shadow example image on the right). That example illustrates that humans use significant higher-level information in perceiving colour: the colour of the shadowed squares is in effect corrected for the shadow, so we never notice that squares A and B are actually the same colour.
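One standard partial remedy, sketched below, is to classify pixels by normalised chromaticity rather than raw RGB values, which discounts overall brightness (though not the colour of the illumination itself, which is why deeper colour-constancy work is still needed). The target chromaticity and tolerance here are hypothetical:

```python
def chromaticity(r, g, b):
    """Normalised (r, g) chromaticity: discards overall brightness."""
    s = (r + g + b) or 1          # avoid division by zero for black pixels
    return (r / s, g / s)

def matches_target(pixel, target=(0.6, 0.3), tol=0.05):
    """Classify a pixel by chromaticity rather than raw RGB values."""
    nr, ng = chromaticity(*pixel)
    return abs(nr - target[0]) <= tol and abs(ng - target[1]) <= tol

# The same red surface under bright light and in shadow: raw RGB values
# differ by a factor of three, but the chromaticity is unchanged.
bright = matches_target((180, 90, 30))
shadow = matches_target((60, 30, 10))
blue   = matches_target((30, 90, 180))
```

A raw RGB threshold tuned on the bright pixel would miss the shadowed one entirely; the chromaticity test accepts both while still rejecting the blue surface.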
Mobile robotics is today reaching the point where deployment into real-world situations seems possible. Enabling technologies such as path planning, localisation and obstacle avoidance have all been proven in laboratory settings. However, long-term experiments with mobile robots are still quite rare. I'm trying to get our Nomad XR4000 running 24x7, with autonomous recharging as necessary.
Owning and maintaining a mobile robot is fairly expensive and beyond the reach of the vast majority of people. This project is developing an interface that allows remote users to develop and execute programs controlling the mobile robot at the Australian National University. The interface permits far more control than prior publicly available tele-operation or tele-programming interfaces, since it executes arbitrary user programs within a safe environment.