• Key dates

    Abstract submission opening:
    February 2017

    Abstract submission deadline:
    15 May 2017

    Notification to authors:
    8 June 2017

    Full paper submission deadline:
    31 July 2017

    Provision of peer review evaluation:
    20 September 2017

    Deadline for final paper and presentation submission:
    24 November 2017
  • "Young Researchers" Session: Navigation for Robots and Drones

    • Vertical Protection Level (VPL) derivation of multi-sensor integrated LAD-GNSS for UAV applications, Jinsil Lee (KAIST, South Korea)

    The concept of Local-Area Differential GNSS (LADGNSS) for unmanned aerial vehicles (UAVs) has been proposed to provide high-accuracy, high-integrity navigation solutions to UAVs operating within a 20 km radius [1]. LADGNSS consists of a ground module, which monitors navigation faults and broadcasts corrections and integrity information to UAVs, and an onboard module, which computes navigation solutions and protection levels (PLs) by applying the messages from the ground module. While LADGNSS can serve as the primary source of navigation for UAVs, a UAV also utilizes additional sensors such as an inertial measurement unit (IMU), a compass and a barometer to provide the reliable navigation and attitude solutions required for flight control. In a multi-sensor integrated LADGNSS, several modifications to the onboard module are required: a new navigation algorithm, an onboard sensor fault monitoring algorithm, and a redesign of system requirements such as integrity and continuity that accounts for the additional sensors. In this study, an IMU fault hypothesis is considered together with the LADGNSS integrity risk to design a system integrity fault tree. With the inclusion of an IMU, an extended Kalman filter (EKF) is used as the navigation algorithm. An IMU fault monitor is designed using the EKF innovations, and PLs are computed from the output statistics of this monitor. Since the IMU sensors embedded in commercial UAVs generally have large noise levels, the PLs computed for the IMU fault modes have the largest values among all navigation fault hypotheses. In addition, noise levels differ across IMU grades. Thus, the conventional pre-defined integrity allocation to each fault hypothesis, as applied in Global Navigation Satellite System (GNSS) augmentation systems, would result in large PLs.
    Tighter PLs can be achieved by dynamically allocating the total integrity risk to each fault mode according to the onboard sensor grades. In this study, dynamic integrity allocation algorithms are developed that minimize the total vertical protection level (VPL) using optimization theory [2]. Simulated VPLs before and after applying the developed algorithms are compared for different grades of IMU sensors.

    [1] S. Pullen, P. Enge, and J. Lee, "High-Integrity Local-Area Differential GNSS Architectures Optimized to Support Unmanned Aerial Vehicles (UAVs)," Proceedings of ION ITM 2013, San Diego, CA, Jan. 28-30, 2013.
    [2] J. Blanch, T. Walter, and P. Enge, "RAIM with Optimal Integrity and Continuity Allocations under Multiple Failures," IEEE Transactions on Aerospace and Electronic Systems, Vol. 46, No. 3, 2010.
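    The dynamic allocation idea can be illustrated with a minimal sketch. This is not the algorithm of the study or of [2], only a simplified model in which the protection level under hypothesis i is K(p_i)·σ_i, where p_i is the integrity risk allocated to that hypothesis; equalizing the per-hypothesis VPLs (found here by bisection on a common level V) then minimizes the overall VPL. All names and σ values below are illustrative assumptions.

```python
from statistics import NormalDist

_ND = NormalDist()

def k_factor(p):
    # one-sided tail multiplier for an allocated integrity risk p
    return _ND.inv_cdf(1.0 - p)

def vpl(sigmas, allocations):
    # overall VPL is the worst per-hypothesis protection level K(p_i) * sigma_i
    return max(k_factor(p) * s for p, s in zip(allocations, sigmas))

def optimal_allocation(sigmas, total_risk, iters=60):
    """Bisect on a common protection level V: each hypothesis gets the risk
    p_i = Q(V / sigma_i) it needs to meet V, and V is tuned so the allocated
    risks sum to the total budget (equal per-hypothesis VPLs are optimal here)."""
    lo, hi = 0.0, 100.0 * max(sigmas)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        risk = sum(1.0 - _ND.cdf(mid / s) for s in sigmas)
        if risk > total_risk:
            lo = mid  # V too small: the needed risks exceed the budget
        else:
            hi = mid
    v = 0.5 * (lo + hi)
    return [1.0 - _ND.cdf(v / s) for s in sigmas], v
```

    With two hypotheses of σ = 0.5 m and σ = 2.0 m and a 1e-7 budget, almost the entire budget flows to the noisier (low-grade IMU) hypothesis, yielding a smaller VPL than a uniform 50/50 split.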

    • ARDEA: An MAV for autonomous navigation in unstructured, unknown, GPS-denied environments, Marcus Müller (DLR, Germany)

    Micro Aerial Vehicles (MAVs) have become more and more popular in a variety of applications in recent years. They also have great potential for future exploration missions on celestial bodies, since they can explore places that would be difficult or even impossible to reach for other mobile robots or human beings. In most space operations, the MAV cannot be controlled manually because no pilot is present and maneuvering the vehicle from Earth is infeasible. Therefore, the system has to be able to navigate mostly autonomously. Furthermore, the system has to be able to operate without any GNSS, since such systems might not be available on a foreign planet or their signal quality might be too poor in certain locations. I will present Ardea, our multicopter designed to tackle the aforementioned challenges. This flying robot was developed completely in house, from the mechanics to the electronics and software. The system uses only cameras and an inertial measurement unit (IMU) to navigate. With its two pairs of wide-angle stereo cameras, it maps a large area of interest in a short amount of time. The four cameras cover a 180-degree field of view in the vertical axis, which also makes the system suitable for cramped environments such as caves. All algorithms run in real time on board the MAV.

    • Towards Autonomous Planetary Exploration: Collaborative Multi-Robot Localization and Mapping in GPS-denied Environments, Martin Schuster (DLR, Germany)
    Exploring the surface of foreign moons and planets is an important current and future application for mobile robots.
    Large communication delays between the systems and operators on Earth lead to a requirement for autonomous robot behavior in order to conduct efficient exploration missions. Furthermore, on foreign celestial bodies, external global means of localization, like (D)GPS on Earth, might either not be available at all or be degraded near scientifically interesting sites such as crater walls or inside caves. Thus, on-board and online localization and mapping constitutes the basis for local robot autonomy as well as for coordinated joint action within multi-robot teams. In this talk, I will present our 6D multi-robot localization and mapping system, suitable for deployment on heterogeneous robot teams, which addresses the aforementioned challenges. A major design goal is the loose coupling of fast local estimation and online global estimation, supporting both local robot autonomy and collaborative behavior based on joint pose and map estimates, such as the collaborative exploration of unknown terrain. We use stereo-vision data to classify the traversability of the local terrain and to match discriminative features in unstructured outdoor environments. This yields inter- and intra-robot loop closures for global optimization.
    We employ the resulting 3D maps as input to autonomous exploration methods and present preliminary results from recent experiments at a moon-analogue test site on Mt. Etna, Sicily, that we conducted as part of the ROBEX (Robotic Exploration of Extreme Environments) Helmholtz Alliance.
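    The global optimization over inter- and intra-robot loop closures can be illustrated on a toy 1-D pose graph (real systems optimize 6-DoF poses with dedicated solvers; this sketch and its measurement values are purely illustrative): odometry edges chain the poses, a loop-closure edge ties two poses together, and a least-squares pass redistributes the accumulated drift.

```python
def optimize(poses, edges, anchor=0, iters=2000, lr=0.1):
    """Minimize the sum of ((x_j - x_i) - d)^2 over all edges (i, j, d) by
    gradient descent; the anchor pose is held fixed to remove the gauge
    freedom (a global shift of all poses)."""
    x = list(poses)
    for _ in range(iters):
        g = [0.0] * len(x)
        for i, j, d in edges:
            r = (x[j] - x[i]) - d  # residual of this edge
            g[i] -= 2.0 * r
            g[j] += 2.0 * r
        g[anchor] = 0.0            # keep the anchor pose fixed
        for k in range(len(x)):
            x[k] -= lr * g[k]
    return x
```

    Two odometry edges of 1.1 m each disagree with a 2.0 m loop closure; the optimizer spreads the 0.2 m of drift evenly along the chain instead of leaving it at the last pose.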

    • Visual SLAM contribution in a multi-sensor fusion architecture for land vehicle navigation, Amani Ben Afia (ENAC, France)

    For land vehicles, the requirements on the navigation solution in terms of accuracy, integrity, continuity and availability are increasingly stringent, especially with the development of autonomous vehicles. This type of application requires a navigation system that not only continuously provides an accurate and reliable position, velocity and attitude solution, but also does so at reasonable cost. In recent decades, GNSS has been the most widely used navigation system, especially as receiver costs have decreased over the years. However, despite its capability to provide absolute navigation information, this system suffers from weaknesses related to signal propagation, especially in urban environments where buildings, trees and other structures hinder the reception of GNSS signals and degrade their quality. A possible way to overcome these problems is to fuse the good GNSS measurements with other sensors having complementary characteristics.
    Generally, the most widely implemented hybridization algorithms for land vehicles fuse GNSS measurements with inertial and/or odometric data. However, the performance achieved by this hybridization depends strongly on the quality of the inertial/odometric sensors used, especially when GNSS signals are degraded or unavailable.
    A Ph.D. study has been conducted to extend the classical hybridization architecture with other sensors capable of improving the navigation performance in constrained environments, while keeping the solution cost constraint in mind. Vision-based navigation techniques were chosen to provide the additional information. Thus, visual SLAM was investigated in this work, and a multi-sensor fusion architecture integrating this visual information with the previously mentioned sensors was developed. This presentation introduces the proposed multi-sensor navigation solution and discusses, using tests on real data, how the visual information improves on the vision-free navigation solution.
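    The fuse-what-is-available idea behind such loosely coupled architectures can be sketched with a one-dimensional Kalman filter (the actual work uses a full multi-sensor EKF with attitude states; everything below, including the class name and noise values, is an illustrative assumption): dead-reckoning from inertial/odometric data drives the prediction, and whichever absolute position source is currently healthy (GNSS in open sky, visual SLAM in urban canyons) drives the update.

```python
from dataclasses import dataclass

@dataclass
class LooselyCoupledFilter:
    x: float = 0.0    # along-track position estimate (m)
    P: float = 100.0  # estimate variance (m^2)
    q: float = 0.01   # process noise added per prediction step (m^2)

    def predict(self, dx):
        # dead-reckoning increment from pre-integrated inertial/odometric data
        self.x += dx
        self.P += self.q

    def update(self, z, r):
        # fuse an absolute position fix z (GNSS or visual SLAM) of variance r
        k = self.P / (self.P + r)
        self.x += k * (z - self.x)
        self.P *= 1.0 - k
```

    During a GNSS outage the filter keeps predicting and takes its updates from the (typically noisier) visual SLAM position instead, so the estimate variance stays bounded rather than growing without limit.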