The University of Auckland

Project #32: Learning Visual-Verbal Mappings on an Assistive Robot

Description:

Consider a robot assisting humans in offices, homes, or other complex application domains. To truly interact with and assist humans in such domains, the robot and the humans need to have a common vocabulary to describe objects and events based on what is seen and heard. However, it is difficult for humans to equip the robot with a comprehensive and accurate vocabulary for complex domains. Robots thus need the ability to represent, reason with, and incrementally augment incomplete domain knowledge. This project seeks to develop algorithms for interactive and cumulative learning of the vocabularies for visual and verbal cues, and the associations between these vocabularies. This capability will be implemented and demonstrated in the specific context of a simulated robot interacting with and assisting humans in an indoor office domain by finding and moving specific objects to specific locations.
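One simple family of methods for cumulatively learning such visual-verbal associations is cross-situational learning: the robot accumulates co-occurrence counts between words it hears and object labels it sees across interaction episodes, and associations that are initially ambiguous become clear with more observations. The sketch below is only illustrative, not the project's intended method; all class and method names (`VisualVerbalMapper`, `observe`, `association`) are assumptions for the example.

```python
from collections import defaultdict

class VisualVerbalMapper:
    """Toy cross-situational learner: accumulates co-occurrence
    counts between heard words and visually detected object labels,
    and incrementally estimates word-to-object associations."""

    def __init__(self):
        # word -> object label -> co-occurrence count
        self.counts = defaultdict(lambda: defaultdict(int))
        # word -> total number of word/object pairings seen
        self.word_totals = defaultdict(int)

    def observe(self, words, objects):
        """One interaction episode: words heard while objects were visible."""
        for w in words:
            for o in objects:
                self.counts[w][o] += 1
            self.word_totals[w] += len(objects)

    def association(self, word, obj):
        """Estimate P(object | word) from accumulated counts."""
        total = self.word_totals[word]
        return self.counts[word][obj] / total if total else 0.0

    def best_object(self, word):
        """Most strongly associated object label for a word, if any."""
        if not self.counts[word]:
            return None
        return max(self.counts[word], key=lambda o: self.counts[word][o])
```

For example, after observing the word "mug" in two episodes with different distractor objects, the learner already prefers the correct mapping, even though no single episode was unambiguous.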

Type:

Undergraduate

Outcome:

Expected project outcomes:
1. Formal description of algorithms developed to achieve the stated objectives.
2. Software implementation and results of experimental evaluation on robots.
3. A paper describing the algorithms and experimental results.

Prerequisites:
1. Proficiency in object-oriented programming.
2. Proficiency in probability, statistics, calculus and linear algebra.
3. Interest in (and/or passion for) robotics and AI.

Some knowledge of artificial intelligence, machine learning, and/or robotics would be useful but is not essential.

Lab:

Lab allocations have not been finalised