The University of Auckland

Project #84: Visualising speech for second language pronunciation learning

Description:

Learning a second language is no easy feat, especially if you want to sound like a native speaker. Most language aids (think Duolingo) use speech recognition systems to evaluate the user's pronunciation, but these evaluations are often inaccurate, not to mention unintuitive for the learner.

This project follows on from a previous project in which speech acoustics and visualisation were used to help learners acquire better pronunciation in a second language. But we mostly learn our first language by ear. This project aims to answer whether visualisation helps with second language acquisition, or whether our ears alone are sufficient to learn new sounds.

The project will involve continuing development of an existing web-based software application for second language sound learning, and conducting subjective experiments on the platform to determine what is effective in learning pronunciation.
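By way of illustration only (this is not the existing platform's code), the sketch below shows one kind of browser-based speech visualisation such a platform could use: capturing microphone audio with the Web Audio API and drawing a live frequency spectrum on a canvas. The element id "spectrum" and the function name visualiseSpeech are hypothetical placeholders.

// Minimal TypeScript sketch: live spectrum of microphone input on a <canvas id="spectrum">.
async function visualiseSpeech(): Promise<void> {
  const canvas = document.getElementById("spectrum") as HTMLCanvasElement;
  const ctx = canvas.getContext("2d")!;

  // Ask the browser for microphone access and route it into an AnalyserNode,
  // which exposes an FFT of the incoming audio on demand.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const audioCtx = new AudioContext();
  const source = audioCtx.createMediaStreamSource(stream);
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);

  const bins = new Uint8Array(analyser.frequencyBinCount);

  function draw(): void {
    requestAnimationFrame(draw);
    analyser.getByteFrequencyData(bins);

    // Clear the canvas, then draw one bar per frequency bin.
    ctx.fillStyle = "#ffffff";
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    const barWidth = canvas.width / bins.length;
    ctx.fillStyle = "#0077cc";
    for (let i = 0; i < bins.length; i++) {
      const barHeight = (bins[i] / 255) * canvas.height;
      ctx.fillRect(i * barWidth, canvas.height - barHeight, barWidth, barHeight);
    }
  }
  draw();
}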

Students suitable for this project should be interested in linguistics, acoustics, and software application development. An understanding of the difficulties of learning a second language is a plus, and competent software skills are essential.

Type:

Undergraduate

Prerequisites

This project requires good software skills. Mechatronics students taking this project must have passed MECHENG313 and MECHENG270.

Experience with R, JavaScript, and web-based development is desirable.

Lab

Acoustics Lab (City 422.154, Lab)