Aim
Produce autonomous robots which can, via learning-based control, cooperatively navigate challenging, complex and adversarial environments.
Objectives
- Measure which types of data fusion lead to the most successful navigation of real-world environments by a robot.
- Research multimodal transformer architectures and their application in deep reinforcement learning.
- Utilize multimodal transformer architectures in multi-agent settings.
- Contribute to the development of autonomous platforms that enhance the autonomous capabilities of the UK armed forces, reducing reliance on human battlefield combatants for tasks such as ISR missions and battlefield replenishment.
Description
Firstly, multimodal transformers within a deep reinforcement learning architecture would be investigated for a single-agent system. Multimodal transformer architectures with multiple online exteroceptive and proprioceptive inputs would be developed, and testing would be performed to establish which data fusion combinations lead to superior agent behaviour. Environment complexity would be varied across training and real-world scenarios to test this rigorously. Performance testing would include comparison of mean episodic return against baseline and state-of-the-art algorithms and architectures. In addition, more specific results relating to robotic function in real environments would be sought, with a view to applying these insights to multi-agent systems.
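To make the intended fusion mechanism concrete, a minimal sketch is given below, assuming a PyTorch-style actor-critic policy. The module names, input dimensions and two-modality setup are illustrative assumptions rather than the project's prescribed architecture; in practice the exteroceptive input might be a sequence of lidar or image tokens rather than a single vector.

```python
# Minimal sketch (PyTorch assumed): fusing proprioceptive and exteroceptive
# observations with a transformer encoder before an actor-critic head.
# Module names, dimensions and the two-modality setup are illustrative.
import torch
import torch.nn as nn


class MultimodalFusionPolicy(nn.Module):
    def __init__(self, proprio_dim=12, extero_dim=64, d_model=128, n_actions=4):
        super().__init__()
        # Per-modality encoders project raw inputs into a shared token space.
        self.proprio_enc = nn.Linear(proprio_dim, d_model)
        self.extero_enc = nn.Linear(extero_dim, d_model)
        # Learned modality embeddings let the fusion layers distinguish token types.
        self.modality_emb = nn.Parameter(torch.zeros(2, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=256, batch_first=True
        )
        self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Actor and critic heads read a pooled fused representation.
        self.policy_head = nn.Linear(d_model, n_actions)
        self.value_head = nn.Linear(d_model, 1)

    def forward(self, proprio, extero):
        # Each modality becomes one token here; longer token sequences
        # (e.g. lidar beams or image patches) would slot in the same way.
        tokens = torch.stack(
            [self.proprio_enc(proprio), self.extero_enc(extero)], dim=1
        ) + self.modality_emb
        fused = self.fusion(tokens).mean(dim=1)  # simple mean pooling over tokens
        return self.policy_head(fused), self.value_head(fused)


# Usage with dummy observations (batch of 8 agents/timesteps).
policy = MultimodalFusionPolicy()
logits, value = policy(torch.randn(8, 12), torch.randn(8, 64))
print(logits.shape, value.shape)  # torch.Size([8, 4]) torch.Size([8, 1])
```

Evaluating different fusion combinations would then amount to swapping the set of modality encoders and comparing mean episodic return under a common RL training loop.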
This project will be carried out in collaboration with industrial stakeholders and DSTL, and aims to contribute to the development of autonomous systems. Applications include ISR and search-and-rescue operations, with the goal of reducing the armed forces' dependence on manpower for these capabilities.
Principal supervisor:
Dr Stefano Albrecht
University of Edinburgh, School of Informatics
s.albrecht@ed.ac.uk