Invited Speakers

Abstracts

P. Berczi, M. Paton, T. Barfoot – Long-Term Visual Route Following for Mobile Robots

In this talk we will describe a particular approach to visual route following for mobile robots that we have developed, called Visual Teach & Repeat (VT&R), and what we think the next steps are to make this system usable in real-world applications. We can think of VT&R as a simple form of simultaneous localization and mapping (without the loop closures) combined with a path-tracking controller; the idea is to pilot a robot manually along a route once and then be able to repeat the route (in its own tracks) autonomously many, many times using only visual feedback. VT&R is useful for such applications as load delivery (mining), sample return (space exploration), and perimeter patrol (security). Although we have demonstrated this technique over more than 500 km of driving on several different robots, there are still many challenges to meet before we can say it is ready for real-world applications. These include (i) visual scene changes such as lighting, (ii) physical scene changes such as path obstructions, and (iii) vehicle changes such as tire wear. We will discuss our progress to date in addressing these issues and the next steps. There will be lots of videos.
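
To make the teach-and-repeat idea concrete, the sketch below shows the bare structure of such a system: a teach pass stores keyframes along the manually driven route, and a repeat pass turns the lateral and heading errors with respect to the current keyframe into a steering correction. This is a generic illustration under our own assumptions (the `Keyframe`, `teach` and `repeat_step` names are hypothetical), not the VT&R implementation; in the real system the errors come from visual localization against the keyframe's stored landmarks.

```python
# Minimal sketch of a teach-and-repeat loop (illustrative only; not the VT&R implementation).
# Keyframe, teach and repeat_step are hypothetical names introduced for this example.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Keyframe:
    index: int
    x: float        # position along the taught route (m)
    y: float        # lateral offset (m)
    heading: float  # rad

def teach(poses: List[Tuple[float, float, float]], spacing: float = 0.5) -> List[Keyframe]:
    """Store a keyframe every `spacing` metres along the manually driven route."""
    keyframes, last_x = [], None
    for x, y, th in poses:
        if last_x is None or abs(x - last_x) >= spacing:
            keyframes.append(Keyframe(len(keyframes), x, y, th))
            last_x = x
    return keyframes

def repeat_step(kf: Keyframe, est_y: float, est_heading: float,
                k_y: float = 0.8, k_th: float = 1.5) -> float:
    """Turn the lateral/heading error w.r.t. the keyframe into a steering correction.
    In a real system these errors come from visual localization; here they are given."""
    lateral_error = est_y - kf.y
    heading_error = est_heading - kf.heading
    return -(k_y * lateral_error + k_th * heading_error)

if __name__ == "__main__":
    route = [(i * 0.1, 0.0, 0.0) for i in range(100)]   # taught path: straight 10 m
    kfs = teach(route)
    print(len(kfs), "keyframes; steer cmd:", repeat_step(kfs[0], est_y=0.2, est_heading=0.05))
```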

S. Behnke – Real-time SLAM, Traversability Analysis and Navigation Planning in Rough Terrain based on 3D Lidar

The mobile manipulation robot Momaro, developed at the University of Bonn, is equipped with four compliant legs that end in steerable pairs of wheels. It demonstrated flexible locomotion at the DARPA Robotics Challenge. The robot is equipped with a 3D laser range finder for spherical distance perception. Its measurements are aggregated into local multiresolution surfel maps, and the local maps are aligned into consistent allocentric 3D maps by GraphSLAM. We analyze the traversability of the terrain based on local height differences. Cost-optimal navigation plans are obtained with A*. Autonomous navigation in rough terrain has been demonstrated at the DLR SpaceBot Camp 2015. Recent extensions of the method consider the detailed robot footprint and make use of the hybrid wheeled-legged base for omnidirectional driving, and of stepping locomotion for overcoming non-drivable height differences.
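
As a rough illustration of the last two stages of such a pipeline, the following sketch computes a traversability cost from local height differences on an elevation grid and plans over it with A*. It is a simplified stand-in under assumed parameters (cell size, step threshold, cost function), not the Momaro software.

```python
# Minimal sketch (assumptions, not the Momaro code): traversability cost from local
# height differences on a 2D elevation grid, followed by A* planning over the cost map.
import heapq
import math
import numpy as np

def traversability_cost(height: np.ndarray, max_step: float = 0.2) -> np.ndarray:
    """Cost per cell = largest height difference to an 8-neighbour; inf if above max_step."""
    h, w = height.shape
    cost = np.zeros_like(height)
    for r in range(h):
        for c in range(w):
            diffs = [abs(height[r, c] - height[r + dr, c + dc])
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr or dc) and 0 <= r + dr < h and 0 <= c + dc < w]
            d = max(diffs) if diffs else 0.0
            cost[r, c] = math.inf if d > max_step else d
    return cost

def astar(cost: np.ndarray, start, goal):
    """A* on the 8-connected grid; edge weight = step length + traversability cost."""
    h, w = cost.shape
    open_set = [(0.0, start)]
    g, came = {start: 0.0}, {}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nxt = (cur[0] + dr, cur[1] + dc)
                if (dr or dc) and 0 <= nxt[0] < h and 0 <= nxt[1] < w and np.isfinite(cost[nxt]):
                    new_g = g[cur] + math.hypot(dr, dc) + cost[nxt]
                    if new_g < g.get(nxt, math.inf):
                        g[nxt], came[nxt] = new_g, cur
                        f = new_g + math.hypot(goal[0] - nxt[0], goal[1] - nxt[1])
                        heapq.heappush(open_set, (f, nxt))
    return None

if __name__ == "__main__":
    elevation = np.zeros((20, 20)); elevation[8:12, 5:15] = 1.0   # a non-traversable ledge
    plan = astar(traversability_cost(elevation), (0, 0), (19, 19))
    print("path length:", len(plan))
```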

D. Belter, P. Skrzypczyński – Feature-based RGB-D SLAM with Dense Terrain Mapping for a Walking Robot

Although advanced motion planning algorithms allow legged robots to autonomously traverse challenging terrain, to achieve full autonomy these robots need both environment mapping and self-localization implemented on-board. There are two issues that make the Simultaneous Localization and Mapping (SLAM) problem particularly hard on legged robots: (i) legged robots that plan their motion require very accurate pose estimates. Our earlier experiments demonstrated that a localization accuracy of about 2 cm (i.e. the size of our robot's foot) is necessary to register the range data into a local elevation map that enables reliable foothold planning; (ii) the environment maps employed by typical SLAM algorithms are rarely suitable for supporting motion planning functions (foothold selection, leg trajectory planning, etc.). An autonomous walking robot requires a dense representation of the terrain for planning. While some state-of-the-art 3-D SLAM systems build voxel-based maps, they usually rely on massive parallel processing to handle the huge amount of range data. Such an approach is not suitable for most legged robots, due to the power consumption of a high-end GPGPU. Therefore, an autonomous legged robot needs a SLAM algorithm that is very accurate, runs in real time without hardware acceleration, and is coupled with a dense mapping method that can reflect the uncertainty of both the range measurements and the localization data.
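
One common way to meet that last requirement is to treat every elevation cell as a small one-dimensional filter, so that both the range-measurement and localization uncertainties enter each fusion step. The snippet below is a generic sketch of that idea, not the mapping method used on Messor II.

```python
# Minimal, generic sketch of an uncertainty-aware elevation-map cell update
# (a 1-D Kalman fusion per cell); illustrative only, not the authors' method.
from dataclasses import dataclass

@dataclass
class ElevationCell:
    height: float = 0.0
    variance: float = 1e6   # start effectively unknown

    def update(self, z: float, meas_var: float, localization_var: float = 0.0) -> None:
        """Fuse a new height measurement; sensor and pose uncertainties both
        inflate the measurement variance."""
        r = meas_var + localization_var
        k = self.variance / (self.variance + r)       # Kalman gain
        self.height += k * (z - self.height)
        self.variance *= (1.0 - k)

cell = ElevationCell()
for z in (0.11, 0.09, 0.10):
    cell.update(z, meas_var=0.01 ** 2, localization_var=0.02 ** 2)
print(round(cell.height, 3), cell.variance)
```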

In the workshop presentation we demonstrate the algorithms and techniques we used to create a system that meets the aforementioned requirements and is implemented on our autonomous walking robot Messor II. We focus on the architecture of our real-time PUT SLAM system, which provides accurate trajectory estimates from RGB-D data, and on coupling this system with dense mapping techniques, also running in real time. Moreover, we discuss the methods used to evaluate the system in both qualitative and quantitative terms. We present results of experiments demonstrating that our approach produces, in real time, robot state and terrain map estimates that are suitable for advanced motion planning on a multi-legged robot. Last but not least, we point out the need to create benchmark data sets appropriate for the evaluation of SLAM/mapping in challenging environments.

M. Camurri, C. Semini – Multi-Sensor State Estimation on Dynamic Quadruped Robots

Dynamic legged robots are expected to successfully traverse rugged and irregular terrains that are out of reach for wheeled and tracked vehicles. Locomoting autonomously on such terrains requires a robot to carefully plan its steps by reasoning about the environment's morphology. We present the state estimation algorithms we use to allow our Hydraulic Quadruped robot (HyQ) to autonomously control its trajectory, self-localize, and map the environment. We describe the sensor devices HyQ is equipped with, and how their different outputs are dispatched, processed and fused together inside an efficient Extended Kalman Filter running at 500 Hz. Localization results are demonstrated with experiments that show HyQ performing a variety of locomotion tasks.
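
The snippet below sketches the generic predict/update structure of such a filter: a high-rate proprioceptive prediction interleaved with lower-rate corrections (here a velocity measurement standing in for leg odometry). The state, models, rates and noise values are illustrative assumptions, not HyQ's actual filter design.

```python
# Generic EKF predict/update loop fusing a high-rate prediction with a lower-rate
# velocity measurement; illustration only, not the filter used on HyQ.
import numpy as np

dt = 1.0 / 500.0                        # filter rate quoted in the talk
x = np.zeros(2)                         # assumed toy state: [position, velocity]
P = np.eye(2)                           # state covariance
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
Q = np.diag([1e-6, 1e-4])               # process noise
H = np.array([[0.0, 1.0]])              # we measure velocity only
R = np.array([[1e-3]])                  # measurement noise

def predict(x, P, accel):
    x = F @ x + np.array([0.0, accel * dt])   # integrate an IMU-like acceleration
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, v_meas):
    y = np.array([v_meas]) - H @ x            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for step in range(500):                       # one simulated second
    x, P = predict(x, P, accel=0.1)
    if step % 5 == 0:                         # leg-odometry-style update at 100 Hz
        x, P = update(x, P, v_meas=0.1 * (step + 1) * dt)
print(x)
```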

G. Cielniak, T. Duckett – 3D Sensing and Mapping for Agricultural Robotics

Environment perception and mapping serve as an enabling technology for many practical and novel applications in agricultural robotics. Mapping techniques based on 3D sensors such as laser range finders or time-of-flight cameras can be used to reconstruct the geometry of the terrain to aid robot navigation, but also to identify and localise objects of interest, enabling agricultural operations at the individual plant level. In this talk, I will focus on 3D perception and mapping techniques for two specific applications: terrain modelling and control of agricultural sprayers, and localisation and head size estimation for selective harvesting of broccoli. The presentation will conclude with an overview of our new agricultural robotic platform Thorvald, which will rely on mapping techniques to enable exciting new applications such as soil quality monitoring, selective weeding, yield prediction, etc.

A. Elfes – Beyond 3D: Estimation of Multi-Property Augmented World Models

CSIRO’s Autonomous Systems Program has been highly successful in developing 3D lidar-based SLAM algorithms and incorporating them in the Zebedee handheld system, as well as in unmanned ground and aerial vehicles. In this talk we will discuss how we are building on this foundational research and extending it to estimate multi-property augmented world models (AWMs). AWMs fuse 3D maps with additional sensing modalities, including RGB, thermal, hyperspectral, haptics and others. Some applications only require the association of measurements of multiple properties to spatial coordinates in the 3D map, while others use multiple sensor streams to minimize model uncertainty or employ inferences from one sensor stream to analyze other sensor streams. The talk will also present results from several domains, including exploration of natural and built environments, and in situ sensing for agricultural, environmental and industrial applications.

S. Leutenegger – Real-Time Vision-Based State Estimation and (Dense) Mapping

Vision-based on-board state estimation and mapping for mobile robots has seen much progress recently. On the one hand, algorithms have been devised to handle complementary sensors such as Inertial Measurement Units (IMUs) tightly coupled with vision, pushing up robustness and accuracy. On the other hand, with the advent of depth cameras as well as GPUs, we have been able to reconstruct much denser, more useful maps. I will go through (probabilistic) formulations and a suite of algorithms recently developed at the Dyson Robotics Lab in this space, together with their application to a variety of robots on the ground and in the air. We are addressing challenges such as tractable visual-inertial fusion, loop closure with dense maps, and reconstruction at multiple levels of detail. Finally, I will also talk about the use of recognition capabilities provided by Convolutional Neural Networks (CNNs) fused into dense maps, with the goal of simultaneously reconstructing geometry, appearance and scene semantics.

C. Mericli, M. Laverne, V. Rajagopalan, A. Kelly – Data-Driven Virtual Proving Ground for Unmanned Ground Vehicles

The virtual proving ground is a data-driven simulation of a robot’s operating environment.  It feeds simulated perception, positioning, and vehicle dynamics into the robot’s autonomy system and responds to the system’s actions.  In effect, it fools the autonomy system into thinking that the robot is moving around the test site.  The development team can then see how the system reacts under test conditions.

The real world is too rich to model from scratch, so we model it with data. Sensor data recorded at the test site is the heart of the virtual proving ground. Using recorded data eliminates the need to construct complex, physics-based models, allowing high-fidelity simulations to be created quickly and inexpensively.

The project’s goal is to make it faster, easier, safer, and less expensive to develop reliable, high-performance unmanned systems.  The virtual proving ground would not totally eliminate field testing, but it would dramatically reduce it.  Development teams could use it to test earlier and more often during the process.  They could see how systems perform under conditions that might not occur during an actual field test – for example, under different weather conditions.  They could also perform experiments that are too dangerous or expensive to do in the field, such as testing a system to destruction.

A. Valada, B. Suger, W. Burgard – Techniques for Reliable Robot Perception in Unstructured Environments

In this talk we will describe two approaches that enable our autonomous robot to reliably navigate over extended periods in an unstructured forested environment. In the first part, we present a terrain-adaptive approach to obstacle detection that relies on 3D lidar data and combines computationally cheap and fast geometric features, like step height and steepness, which are updated at the frequency of the lidar sensor, with semantic terrain information, which is updated at a lower frequency. We provide experiments in which we evaluate our approach on a real robot during an autonomous run of several kilometers over different terrain types. The experiments demonstrate that our approach is suitable for autonomous systems that have to navigate reliably over different terrain types, including concrete, dirt roads and grass.
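
As an illustration of the kind of cheap geometric features mentioned above, the sketch below bins lidar points into a 2D grid and computes a per-cell step height and steepness (via a local plane fit). The cell size and feature definitions are assumptions for illustration, not the implementation used on the robot.

```python
# Illustrative per-cell geometric terrain features (step height, steepness) from
# binned lidar points; parameter values and structure are assumptions.
import numpy as np

def grid_features(points: np.ndarray, cell: float = 0.25):
    """points: Nx3 array (x, y, z). Returns dicts of step height and slope per cell."""
    idx = np.floor(points[:, :2] / cell).astype(int)
    step, slope = {}, {}
    for key in {tuple(k) for k in idx}:
        mask = (idx[:, 0] == key[0]) & (idx[:, 1] == key[1])
        z = points[mask, 2]
        step[key] = float(z.max() - z.min())          # step height inside the cell
        if mask.sum() >= 3:                           # plane fit for steepness
            A = np.c_[points[mask, 0], points[mask, 1], np.ones(mask.sum())]
            (a, b, _), *_ = np.linalg.lstsq(A, z, rcond=None)
            slope[key] = float(np.degrees(np.arctan(np.hypot(a, b))))
        else:
            slope[key] = 0.0
    return step, slope

pts = np.random.rand(1000, 3) * [5.0, 5.0, 0.1]       # flat-ish synthetic patch
step, slope = grid_features(pts)
print(max(step.values()), max(slope.values()))
```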

In the second part, we present a novel semantic segmentation architecture and deep fusion techniques that enable a multi-stream deep neural network to learn features from complementary modalities and spectra, each of which is specialized in a subset of the input space. Our model learns class-specific features of expert networks based on the scene condition and further learns fused representations to yield robust segmentation. We present results from experiments on three publicly available datasets that contain diverse conditions including rain, summer, winter, dusk, fall, night and sunset, and show that our approach exceeds the state of the art. In addition, we evaluate the performance of autonomously traversing several kilometers of a forested environment using only the segmentation for perception.
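
The sketch below illustrates the general idea of a multi-stream network with a learned fusion stage: two small expert encoders (e.g. RGB and depth) are combined through a per-pixel gate before a shared classifier. The architecture, channel counts and gating scheme are assumptions chosen for brevity, not the authors' model.

```python
# Toy two-stream fusion segmentation network with a learned per-pixel gate;
# illustrative only, not the architecture presented in the talk.
import torch
import torch.nn as nn

class Expert(nn.Module):
    """Tiny convolutional encoder standing in for a full expert network."""
    def __init__(self, in_ch, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class FusionSegNet(nn.Module):
    def __init__(self, n_classes=5, feat=32):
        super().__init__()
        self.rgb_expert = Expert(3, feat)
        self.depth_expert = Expert(1, feat)
        self.gate = nn.Conv2d(2 * feat, 1, 1)          # per-pixel mixing weight
        self.classifier = nn.Conv2d(feat, n_classes, 1)

    def forward(self, rgb, depth):
        f_rgb = self.rgb_expert(rgb)
        f_d = self.depth_expert(depth)
        w = torch.sigmoid(self.gate(torch.cat([f_rgb, f_d], dim=1)))
        fused = w * f_rgb + (1.0 - w) * f_d            # convex combination of experts
        return self.classifier(fused)

net = FusionSegNet()
logits = net(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64))
print(logits.shape)   # (1, n_classes, 64, 64)
```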

K. Yoshida – Recent Progress on Modeling and Estimation of Wheel Slip and Traction Mechanics for Light Weight Rovers in Loose Soil

The importance of wheel slip and traction control of remote rovers in loose-soil environments was recognized during the operation of NASA's Mars rovers Spirit and Opportunity. There is a good number of research papers and textbooks on wheel-soil traction mechanics, so-called terramechanics, but most of them are based on Bekker's classical model, which was supported by empirical knowledge from relatively heavy machines. Today, some researchers in the robotics community question the validity of the classical model and try to develop new and more useful models.
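
For reference, the classical Bekker pressure-sinkage relation referred to here is commonly written as

```latex
% Bekker pressure--sinkage relation (standard form)
p = \left( \frac{k_c}{b} + k_\phi \right) z^{n}
```

where p is the normal pressure under the contact patch, b the smaller dimension of the contact patch (e.g. wheel width), z the sinkage, n the sinkage exponent, and k_c and k_phi the cohesive and frictional moduli of soil deformation.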

This talk will provide a quick summary of key issues and recent progress in modeling and estimation of wheel slip and traction mechanics for lightweight rovers in loose soil. Some results on vision-based perception of wheel slip and sinkage will also be mentioned.
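
One common definition of the longitudinal slip ratio that such estimation methods aim to recover, for a driven wheel, is

```latex
% Longitudinal slip ratio of a driven wheel (one common definition)
s = \frac{r\omega - v_x}{r\omega}
```

with r the wheel radius, omega its angular velocity and v_x the actual forward velocity of the wheel; s ranges from 0 (no slip) to 1 (spinning in place) when the wheel is driving.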
