The long-term goal of automation is to enable robots to learn and adapt to their environment through their own interactions, allowing them to operate fully autonomously in complex and dynamic environments without human supervision. The aim is to achieve this via reinforcement learning, in particular model-based learning: predicting the effect of one's actions on the environment and choosing actions that bring about a series of desired effects.
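The model-based control loop described above can be sketched in a few lines. This is a minimal illustration only, with hypothetical names: `predict` stands in for a learned dynamics model (in this project, an action-conditioned frame predictor), and planning is done by random shooting, i.e. sampling candidate action sequences, rolling them through the model, and executing the first action of the sequence whose predicted outcome is closest to the desired effect.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(state, action):
    """Stand-in for a learned dynamics model: given the current state
    and an action, return the predicted next state. Here a simple
    linear response is assumed purely for illustration."""
    return state + 0.1 * action

def plan(state, goal, horizon=5, n_candidates=64):
    """Random-shooting planner: sample action sequences, roll each one
    through the learned model, and return the first action of the
    sequence whose predicted final state is closest to the goal."""
    candidates = rng.uniform(-1.0, 1.0,
                             size=(n_candidates, horizon, state.shape[0]))
    best_action, best_cost = None, np.inf
    for seq in candidates:
        s = state
        for a in seq:
            s = predict(s, a)          # roll out the model, no real actions taken
        cost = np.linalg.norm(s - goal)
        if cost < best_cost:
            best_cost, best_action = cost, seq[0]
    return best_action

state = np.zeros(3)                     # e.g. end-effector position
goal = np.array([0.5, 0.2, 0.0])        # desired effect: reach this point
for _ in range(20):                     # closed-loop control: replan every step
    state = predict(state, plan(state, goal))
```

The key property of this scheme is that the environment is only ever queried one step at a time; all lookahead happens inside the learned model, which is what makes it "model-based".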
Prior work (https://github.com/SfTI-Robotics/ActionConditionedFramePrediction) developed a simulated (ROS + Gazebo) setup for training a robotic arm to learn a model of its environment for the task of reaching towards a target object in an otherwise empty space. The objective of this project is to extend that work to more complex environments containing obstacles, and to enable the robot to also grasp the target object and move it to a given target area via model-based reinforcement learning.
1) Extended the simulation to handle the pick-and-place task
2) Evaluated the ability of model-based learning to learn the pick-and-place task
Lab allocations have not been finalised.