Runtime AI Verification and Validation for Reliable Deployment of Robots

Aim

Runtime verification and validation of AI-driven robots for safe and reliable deployment

Objectives

  1. Conceptualizing frameworks to monitor and explain AI outputs in diverse and challenging environments.
  2. Designing intelligent monitoring systems to examine the robustness, fairness and safety of AI systems, and to trigger fail-safe solutions when corner cases are identified.
  3. Integrating explainable methods into the monitoring system to deliver intelligible explanations of AI decisions, supporting responsible applications.
  4. Testing the approach using both simulations and real-world data to demonstrate its efficacy.

Description

AI is widely used in robotic sensing and planning, and with the introduction of transformers, data-driven robots are gaining popularity. However, the quality and diversity of training data typically limit the performance of AI systems, making runtime monitoring necessary to prevent harmful AI decisions and to ensure fail-safe behaviour. Traditional monitoring approaches, such as running a redundant system for cross-comparison, are impractical for AI: deploying a redundant AI system not only doubles the computational effort but also adds further opacity, and designing a redundant system whose behaviour is comparable to that of the AI being monitored is itself difficult.
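To make the monitoring idea concrete, the sketch below shows one common pattern: a lightweight runtime monitor wraps a learned model, checks each output against a reliability signal, and substitutes a fail-safe action when the check fails. All names here (PerceptionModel, RuntimeMonitor, safe_stop_action) and the simple confidence-threshold check are illustrative assumptions, not part of this project's specification.

    import numpy as np

    class PerceptionModel:
        """Stand-in for a learned perception/planning model (illustrative only)."""
        def predict(self, observation: np.ndarray):
            # A real model would be a trained network; a toy score suffices here.
            score = float(np.tanh(observation.sum()))
            confidence = abs(score)  # toy confidence in [0, 1]
            action = "move_forward" if score > 0 else "turn_left"
            return action, confidence

    def safe_stop_action() -> str:
        """Fail-safe behaviour used when the monitor vetoes the AI output."""
        return "stop"

    class RuntimeMonitor:
        """Wraps a model and vetoes low-confidence outputs at runtime.

        Assumes a single scalar confidence signal and a fixed threshold; a
        full monitor would combine several reliability signals, e.g.
        out-of-distribution scores or temporal consistency checks.
        """
        def __init__(self, model: PerceptionModel, confidence_threshold: float = 0.7):
            self.model = model
            self.threshold = confidence_threshold
            self.log = []  # decision record, usable later for explanations

        def act(self, observation: np.ndarray) -> str:
            action, confidence = self.model.predict(observation)
            accepted = confidence >= self.threshold
            self.log.append(
                {"action": action, "confidence": confidence, "accepted": accepted}
            )
            return action if accepted else safe_stop_action()

    monitor = RuntimeMonitor(PerceptionModel())
    print(monitor.act(np.array([0.9, 0.8, 0.7])))    # high confidence: accepted
    print(monitor.act(np.array([0.01, -0.01, 0.0]))) # low confidence: "stop"

A veto-style monitor of this kind sidesteps the redundancy problem described above: it adds only a lightweight check on top of the monitored model rather than a second full AI system, and its decision log provides material for the explanations this project targets.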

Under these circumstances, dedicated monitoring systems for AI-driven robots are essential to ensure their runtime reliability and to illuminate their inner workings through explanations. This project therefore aims to create effective runtime monitoring frameworks by studying methods for real-time reliability and robustness analysis, and by analyzing explainable AI methods to reinforce monitoring performance.

Research theme: 

Principal supervisor: 

Dr Cheng Wang
Heriot-Watt University, School of Engineering & Physical Sciences, AI Safety
Cheng.Wang@hw.ac.uk

Assistant supervisor: 

Dr Pavlos Tafidis
University of Edinburgh, School of Engineering
pavlos.tafidis@ed.ac.uk