The University of Auckland

Project #119: Bias in Large Language Models


Description:

There has been substantial prior research on detecting bias in Large Language Models (LLMs), but it is largely US-centric. The goal of this project is to develop software that also detects bias in the NZ context. The project is part of an ongoing research programme that identifies and tackles bias in AI. Its output will form the basis for an AI bias detection tool specific to NZ industry.
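One common technique in this research area is counterfactual (template-based) probing: the same sentence is generated with different demographic terms swapped in, and the model's responses are compared. The sketch below illustrates the idea in Python; the group terms, templates, and the `score` function are illustrative placeholders only (in a real study, `score` would query an LLM, e.g. for a log-probability or a sentiment rating, and the term lists would be designed with NZ communities in mind).

```python
# Minimal sketch of counterfactual (template-swap) bias probing.
# All names, templates, and the scorer below are illustrative
# stand-ins, not part of the project specification.

GROUPS = ["Maori", "Pakeha"]  # example demographic terms for the NZ context
TEMPLATES = [
    "The {group} engineer solved the problem.",
    "The {group} nurse cared for the patient.",
]

def score(sentence: str) -> float:
    """Stand-in scorer: a toy deterministic value.
    In practice, replace with an LLM call (e.g. log-probability
    of the sentence, or a sentiment classifier's output)."""
    return float(len(sentence))  # placeholder only

def bias_gap(templates, groups, scorer):
    """For each template, compute the spread of scores across groups.
    A large gap flags sentences the model treats unevenly."""
    gaps = {}
    for t in templates:
        scores = {g: scorer(t.format(group=g)) for g in groups}
        gaps[t] = max(scores.values()) - min(scores.values())
    return gaps

if __name__ == "__main__":
    for template, gap in bias_gap(TEMPLATES, GROUPS, score).items():
        print(f"gap={gap:.1f}  {template}")
```

Because the scorer is pluggable, the same harness can compare several models or scoring criteria without changing the probing logic.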

Type:

Undergraduate

Outcome:

Software that detects bias in large language models

Prerequisites:

Fluency in Python and an understanding of AI


Lab:

Computer Science (303S.499, Lab)