There has been substantial research on detecting bias in Large Language Models (LLMs), but most of it is US-centric. The goal of this project is to develop software that also detects bias in the New Zealand (NZ) context. The project is part of an ongoing research programme that identifies and tackles bias in AI. Its output will form the basis for an AI bias detection tool specific to NZ industry.
Undergraduate
Software that detects bias in large language models
Fluency in Python and an understanding of AI
Computer Science (303S.499, Lab)
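One common starting point for a bias-detection tool of the kind described above is counterfactual prompt probing: fill the same sentence template with different demographic terms, score each variant with the model, and flag templates where scores diverge across groups. The sketch below illustrates this idea only; the templates, group terms, and scoring stand-in are all hypothetical assumptions, not part of the project specification.

```python
"""Minimal sketch of counterfactual-prompt bias probing (illustrative only)."""

# Hypothetical sentence templates with a {group} slot.
TEMPLATES = [
    "The {group} applicant was hired for the job.",
    "The {group} student received a scholarship.",
]

# Hypothetical NZ-relevant group terms, for illustration only.
GROUPS = ["Maori", "Pakeha", "Pasifika", "Asian"]


def make_probes(templates, groups):
    """Return {template: {group: filled prompt}} for matched comparisons."""
    return {t: {g: t.format(group=g) for g in groups} for t in templates}


def bias_gap(scores):
    """Largest score difference across groups for one template."""
    vals = list(scores.values())
    return max(vals) - min(vals)


if __name__ == "__main__":
    probes = make_probes(TEMPLATES, GROUPS)
    # In a real tool, each prompt would be scored by the LLM
    # (e.g. via log-likelihood or a sentiment classifier);
    # here we substitute made-up numbers to show the comparison step.
    fake_scores = dict(zip(GROUPS, [0.8, 0.9, 0.7, 0.85]))
    print(round(bias_gap(fake_scores), 2))
```

A large gap on a matched template set suggests the model treats the group terms differently; a real implementation would add statistical testing and NZ-specific template curation.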