The University of Auckland

Project #38: Exploring manual communication in virtual reality




Sign language, at its core, is a language that is solely visual in nature. However, its dynamism and the fluidity between signs are aspects that cannot be captured adequately through static visuals or text. As such, individuals interested in learning the language often need to resort to classrooms – a commitment that proves a barrier to entry for prospective learners. This barrier limits the availability of sign language throughout society, and so the voices of those who cannot speak remain unheard. Sign languages are not unique in facing this problem: many endangered oral languages depend heavily on visual hand gestures and are similarly difficult to learn. It is within this context that we ask: how capable are modern technologies as strategies for learning visually intensive languages?


Our research question asks whether current technology can provide a foundation for a sign-language education platform. In particular, we wish to examine rapidly developing virtual reality (VR) technology, coupled with motion detection. A natural follow-up question is whether such a platform can provide a realistic solution for vocally fluent individuals seeking to learn sign language. We believe such a solution is advantageous over conventional learning environments in three respects: availability, validation, and gamification.

Availability is an issue with conventional classroom-based learning. Students must attend classes during set periods to exercise the language – a luxury that is often unavailable in today's overly committed society. Through a software-based VR approach, we expect that students can learn sign language at their own pace, in their own time. Although we concede that online video tutorials and textbooks share this advantage, what they lack is validation.

With a motion-sensing approach, it may be possible to produce a platform on which students can exercise and subsequently validate their new knowledge. With conventional self-learning tools, it is easy to fall into damaging habits, particularly in a language as finely tuned and prone to misunderstanding as sign language. With motion sensing, we hope to prevent such habits from developing from the outset through validation and feedback, simulating a live trainer.
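As a minimal sketch of what such validation might look like, the following compares a learner's captured hand trajectory against a reference recording of a sign. Every name, the choice of tracking a single joint, and the error threshold are illustrative assumptions, not part of the proposed system; a real implementation would track many joints and time-align the trajectories (e.g. with dynamic time warping) before comparing them.

```python
import math

def frame_distance(a, b):
    """Euclidean distance between two (x, y, z) position samples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def validate_sign(recorded, reference, threshold=0.05):
    """Hypothetical check: does the learner's motion stay within
    `threshold` average distance of the reference trajectory?
    Returns (passed, mean_error)."""
    if len(recorded) != len(reference):
        # A real system would time-align the two trajectories first.
        return False, float("inf")
    errors = [frame_distance(r, s) for r, s in zip(recorded, reference)]
    mean_error = sum(errors) / len(errors)
    return mean_error <= threshold, mean_error

# Usage: a near-match passes, a wildly different motion fails.
reference = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.0, 0.0)]
attempt = [(0.0, 0.0, 0.01), (0.1, 0.0, 0.01), (0.2, 0.0, 0.01)]
passed, error = validate_sign(attempt, reference)
```

The mean-error output is what would drive the feedback loop described above: rather than a bare pass/fail, the platform could report where in the trajectory the error concentrates, approximating the corrections a live trainer would give.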

By mediating education through a software platform, avenues open towards gamification of the learning experience. Without a driving factor, it can be difficult to maintain commitment to learning a language, particularly when occasions to exercise it are sparse. Through modern gamification techniques combined with VR technology, we propose that such barriers can be overcome. With these advantages, we believe VR-based motion detection is a direction worth exploring in the search for effective sign language education.










