Recovering 4D structure from drone video

Aim

The goal of this project is to recover 3D structure from video captured by a drone or a similar autonomous agent.


Objectives

  1. Be able to leverage knowledge from other domains (e.g., self-driving-car datasets) to train models that can reconstruct 3D from substantially different viewpoints, such as those of drones.
  2. Be able to deploy these models efficiently on low-compute embedded devices.
  3. Be able to address scenarios with moving objects (i.e., 4D reconstruction) and incomplete views.


Description

3D reconstruction models have progressed greatly over the last few years as a result of core advances such as Gaussian splatting. However, most existing models are trained on data from a narrow range of viewpoints, such as those of self-driving cars or indoor cameras. We propose to leverage the knowledge these models embed about the structure of the 3D world to recover 3D information from dramatically different viewpoints, such as those of drones. This raises interesting challenges, including computation fast enough for on-device deployment, 4D reconstruction of moving scenes, and handling occlusion.
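For background, the core operation in Gaussian splatting is projecting each 3D Gaussian (mean mu, covariance Sigma) to a 2D image-plane Gaussian via the camera rotation W and the Jacobian J of the perspective projection, giving Sigma' = J W Sigma W^T J^T. The sketch below is illustrative only and is not part of the proposal; the pinhole camera model and all variable names are assumptions.

```python
import numpy as np

def project_gaussian(mu_world, cov_world, R, t, fx, fy):
    """Project a 3D Gaussian (world frame) to a 2D image-plane Gaussian.

    Uses the EWA-splatting linearisation employed by 3D Gaussian splatting:
    Sigma' = J W Sigma W^T J^T, where W is the camera rotation and J is the
    Jacobian of the pinhole projection evaluated at the camera-space mean.
    """
    # Transform the mean into camera coordinates.
    mu_cam = R @ mu_world + t
    x, y, z = mu_cam

    # Pinhole projection of the mean onto the image plane.
    mean_2d = np.array([fx * x / z, fy * y / z])

    # Jacobian of (x, y, z) -> (fx*x/z, fy*y/z), evaluated at mu_cam.
    J = np.array([
        [fx / z, 0.0,    -fx * x / z**2],
        [0.0,    fy / z, -fy * y / z**2],
    ])

    # 2D covariance of the splatted Gaussian.
    cov_2d = J @ R @ cov_world @ R.T @ J.T
    return mean_2d, cov_2d

# Example: an isotropic Gaussian 5 m in front of an identity camera.
mean_2d, cov_2d = project_gaussian(
    mu_world=np.array([0.0, 0.0, 5.0]),
    cov_world=0.01 * np.eye(3),
    R=np.eye(3), t=np.zeros(3),
    fx=500.0, fy=500.0,
)
```

Note that the 2D covariance shrinks quadratically with depth z, which is one reason viewpoint shifts as large as car-to-drone change the rendering statistics so much.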

Research theme: 

Principal supervisor: 

Dr Laura Sevilla
University of Edinburgh, School of Informatics
lsevilla@ed.ac.uk