The University of Auckland

Project #71: Image Motion Elimination for Aquaculture Vision

Description:

The Centre for Automation and Robotic Engineering Science (CARES) is investigating robot-driven underwater farm (aquaculture) automation. The motivation is that aquaculture has the potential to help us manage our marine resources better. A bottleneck to the uptake of aquaculture is the challenge of working underwater, and we believe robotics can help.

This project proposal is related to the CARES aquaculture investigation. However, it targets a fundamental problem with the camera-based vision systems of underwater robots: scene motion. Scene motion is typical because much of what we capture is non-rigid matter that flutters in the constant ebb and flow of underwater currents. This makes image sequence processing difficult, since the motion frustrates attempts to reconcile commonality across images.
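
To make this correspondence difficulty concrete, here is a minimal sketch (file paths and parameters are illustrative placeholders, not project materials) that matches ORB features between two consecutive frames with OpenCV and reports how many matches are consistent with a single rigid-scene epipolar geometry; on scenes with fluttering non-rigid matter, the inlier ratio tends to drop.

    import cv2
    import numpy as np

    # Paths and parameters below are illustrative placeholders.
    frame_a = cv2.imread("frames/t000.png", cv2.IMREAD_GRAYSCALE)
    frame_b = cv2.imread("frames/t001.png", cv2.IMREAD_GRAYSCALE)

    # Detect and describe keypoints in each frame.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)

    # Cross-checked brute-force matching keeps only mutually best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)

    # For a rigid, static scene most matches obey one epipolar geometry;
    # fluttering non-rigid matter violates it, so the inlier ratio drops.
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    F, inliers = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 1.0, 0.999)
    print(f"{int(inliers.sum())}/{len(matches)} matches fit a rigid-scene model")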

A potential solution to this problem is Artificial Neural Network (ANN) based image synthesis. Such systems take image sequences as input and use ANN technology to "learn" relationships between light ray directions and projected colours, allowing novel perspectives to be synthesised through inference by interpolation. Several of these approaches also incorporate a time element into their models [1, 2], allowing the capture of dynamic scenes (scenes with movement). The claim is that, given a sequence captured of a moving scene, these systems can synthesise images at a "fixed time", producing a synthetic image set from the perspective of a moving camera in which the scene appears static.
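
As a rough illustration of the input/output relationship these models learn, here is a toy PyTorch sketch of a time-conditioned radiance field. It is not the architecture of [1] or [2] (which add positional encodings, deformation fields, and volume rendering); it only sketches the core mapping from point, view direction, and time to colour and density.

    import torch
    import torch.nn as nn

    class DynamicRadianceField(nn.Module):
        """Toy time-conditioned radiance field (not the models of [1, 2]).

        Maps a 3D point, a viewing direction, and a time value to a colour
        and a volume density. Real systems add positional encodings,
        deformation fields, and volume rendering on top of this mapping.
        """

        def __init__(self, hidden: int = 256):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Linear(3 + 3 + 1, hidden), nn.ReLU(),  # point + direction + time
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.rgb = nn.Sequential(nn.Linear(hidden, 3), nn.Sigmoid())
            self.sigma = nn.Sequential(nn.Linear(hidden, 1), nn.Softplus())

        def forward(self, xyz, view_dir, t):
            h = self.backbone(torch.cat([xyz, view_dir, t], dim=-1))
            return self.rgb(h), self.sigma(h)

    # Freezing t while varying the camera queries the scene "at a fixed
    # time", which is how a synthetic static image set would be rendered.
    model = DynamicRadianceField()
    xyz = torch.rand(1024, 3)       # sample points along camera rays
    view_dir = torch.rand(1024, 3)  # ray directions
    t = torch.full((1024, 1), 0.5)  # one fixed time for the whole render
    rgb, sigma = model(xyz, view_dir, t)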

The key deliverable of the proposed project is an assessment of some state-of-the-art versions of these algorithms from the perspective of using their synthetic outputs for 3D spatial measurement and 3D/2.5D scene reconstruction.
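
One plausible way to run such an assessment (a sketch only; it assumes the COLMAP binary is installed and on the PATH, and the directory layout is hypothetical) is to feed each algorithm's synthesised "fixed-time" image set through a standard structure-from-motion pipeline and compare reconstruction statistics across algorithms:

    import subprocess
    from pathlib import Path

    # Hypothetical layout: one directory of synthesised "fixed-time" views
    # per algorithm under test. Assumes the COLMAP binary is on the PATH.
    for algo_dir in Path("synth_outputs").iterdir():
        if not algo_dir.is_dir():
            continue
        workspace = Path("recon") / algo_dir.name
        workspace.mkdir(parents=True, exist_ok=True)
        # Sparse + dense reconstruction; afterwards, compare registered
        # image counts, reprojection errors, and point counts per algorithm.
        subprocess.run(
            ["colmap", "automatic_reconstructor",
             "--workspace_path", str(workspace),
             "--image_path", str(algo_dir)],
            check=True,
        )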

References

[1] Z. Li, S. Niklaus, N. Snavely, and O. Wang. Neural scene flow fields for space-time view synthesis of dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6498–6508, 2021.

[2] A. Pumarola, E. Corona, G. Pons-Moll, and F. Moreno-Noguer. D-NeRF: Neural radiance fields for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10318–10327, 2021.


Type:

Undergraduate

Outcome:

In this project, you are expected to obtain implementations of some state-of-the-art dynamic view synthesis algorithms, configure them to run in your local environment, run them on image sequences of moving scenes, and assess their synthetic outputs for 3D spatial measurement and 3D/2.5D scene reconstruction.

Extra credit will be given if you perform multiple rounds of experiments showing clear progress towards troubleshooting the issues you encounter.

Prerequisites

Enrolled students should be competent programmers: getting the algorithms to work will mean obtaining them from repositories and configuring them in your local environment. The theory around the relationship between images and 3D space requires a working knowledge of linear algebra.
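
For orientation, that linear algebra centres on the pinhole model x ~ K[R|t]X. The sketch below (with made-up intrinsics and pose, purely for illustration) projects one 3D point into pixel coordinates.

    import numpy as np

    # Pinhole projection x ~ K [R | t] X with made-up camera parameters.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])          # intrinsics (fx, fy, cx, cy)
    R = np.eye(3)                            # world-to-camera rotation
    t = np.array([[0.0], [0.0], [2.0]])      # world-to-camera translation

    X = np.array([[0.1], [-0.2], [1.0], [1.0]])  # homogeneous 3D point
    x = K @ np.hstack([R, t]) @ X                # project into the image
    u, v = (x[:2] / x[2]).ravel()                # dehomogenise to pixels
    print(f"pixel coordinates: ({u:.1f}, {v:.1f})")  # -> (346.7, 186.7)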


Lab

Robotics (405.652, Lab)