The University of Auckland

Project #6: Model-based Reinforcement Learning simulator for robotic navigation

Description:

The long-term goal of automation is to enable robots to learn and adapt to their environment through their own interactions, allowing them to operate fully autonomously in complex and dynamic environments without human supervision. The aim is to achieve this via reinforcement learning, in particular model-based learning, i.e. predicting the effect of one's actions on the environment and following a series of desired effects.
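To make the model-based idea concrete, the sketch below learns a dynamics model from random interaction data and then plans by predicting the effect of candidate action sequences (random-shooting planning). The 2-D point-robot dynamics, goal, and all parameter values are illustrative assumptions, not part of the project brief.

```python
# Minimal model-based sketch: fit a dynamics model, then plan with it.
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(state, action):
    """Ground-truth environment: a point robot with velocity commands."""
    return state + 0.1 * action

# 1. Collect transitions (s, a, s') by acting randomly.
states = rng.uniform(-1, 1, size=(500, 2))
actions = rng.uniform(-1, 1, size=(500, 2))
next_states = np.array([true_dynamics(s, a) for s, a in zip(states, actions)])

# 2. Fit a simple linear dynamics model: s' ~ [s, a, 1] @ W.
X = np.hstack([states, actions, np.ones((500, 1))])
W, *_ = np.linalg.lstsq(X, next_states, rcond=None)

def predicted_dynamics(state, action):
    return np.hstack([state, action, 1.0]) @ W

# 3. Plan with the learned model: random-shooting model-predictive control.
def plan(state, goal, horizon=5, n_candidates=200):
    """Return the first action of the sampled sequence whose predicted
    final state lands closest to the goal."""
    best_action, best_cost = None, np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1, 1, size=(horizon, 2))
        s = state
        for a in seq:
            s = predicted_dynamics(s, a)
        cost = np.linalg.norm(s - goal)
        if cost < best_cost:
            best_cost, best_action = cost, seq[0]
    return best_action

state, goal = np.array([0.0, 0.0]), np.array([0.8, -0.5])
for step in range(20):
    action = plan(state, goal)
    state = true_dynamics(state, action)  # execute in the "real" environment
print("final distance to goal:", np.linalg.norm(state - goal))
```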

Prior work has developed a mobile robotic simulator using ROS and Gazebo (https://github.com/SfTI-Robotics/Autonav-RL-Gym) that can train mobile robotic platforms to navigate a range of simple environments via model-free learning, i.e. if in this state, take this action.
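For contrast, the sketch below shows the model-free idea in its simplest form: tabular Q-learning on a toy 1-D corridor. The agent never predicts what its actions do; it only learns a direct state-to-action mapping. The corridor environment and hyperparameters are illustrative assumptions and are far simpler than the ROS/Gazebo setups in the repository above.

```python
# Minimal model-free sketch: tabular Q-learning on a toy corridor.
import numpy as np

rng = np.random.default_rng(0)
n_states, goal = 10, 9          # corridor cells 0..9, goal at the right end
Q = np.zeros((n_states, 2))     # actions: 0 = left, 1 = right

for episode in range(300):
    s = 0
    eps = max(0.1, 1.0 - episode / 100)   # decaying exploration rate
    for _ in range(200):                   # cap episode length
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
        r = 1.0 if s_next == goal else -0.01
        # One-step temporal-difference update: no model of the environment,
        # only a value estimate for each (state, action) pair.
        Q[s, a] += 0.1 * (r + 0.95 * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == goal:
            break

print("greedy policy (0=left, 1=right):", Q.argmax(axis=1))
```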

The aim of this project is to extend this simulation environment to handle model-based learning and to compare the navigation performance of model-based learning against model-free learning. The second part of the project is to increase the complexity of the training environments (outdoor/dynamic terrain) in which the robot learns, and to investigate in which situations the training algorithms fail to learn, and why.
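One way the comparison might be structured is sketched below: run each navigation approach over a set of environments of increasing difficulty and record success rate and steps to goal. The Environment/Policy interfaces, environment names, and metrics are assumptions for illustration; a real version would plug in the ROS/Gazebo environments from Autonav-RL-Gym and the trained model-based, model-free, and traditional path-planning agents.

```python
# Hypothetical evaluation harness for comparing navigation approaches.
from dataclasses import dataclass
from typing import Callable, Dict, List
import random

@dataclass
class EpisodeResult:
    success: bool
    steps: int

def run_episode(env_name: str, policy: Callable,
                max_steps: int = 200) -> EpisodeResult:
    """Placeholder rollout: a real version would step a Gazebo simulation
    and query the policy for velocity commands at each step."""
    steps = random.randint(20, max_steps)   # stand-in outcome only
    return EpisodeResult(success=steps < max_steps, steps=steps)

def evaluate(policies: Dict[str, Callable], environments: List[str],
             episodes: int = 20) -> None:
    for env_name in environments:
        for name, policy in policies.items():
            results = [run_episode(env_name, policy) for _ in range(episodes)]
            rate = sum(r.success for r in results) / episodes
            avg = sum(r.steps for r in results) / episodes
            print(f"{env_name:>18} | {name:<12} "
                  f"success={rate:.0%} avg_steps={avg:.0f}")

if __name__ == "__main__":
    random.seed(0)
    evaluate(
        policies={"model_based": lambda s: s,
                  "model_free": lambda s: s,
                  "path_planner": lambda s: s},
        environments=["empty_room", "static_obstacles", "outdoor_terrain"],
    )
```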

Type:

Undergraduate

Outcome:

1) An extended simulation tool to train model-based navigation algorithms

2) An extension of the simulation tool to generate outdoor terrains, increasing the complexity of the training environments

3) An exemplary set of tests demonstrating the performance of model-based navigation algorithms against model-free and traditional path planning algorithms in a range of environmental setups.

Prerequisites:

None

Lab:

Lab allocations have not been finalised