The University of Auckland

Project #115: Towards a Framework to Improve and Validate Source Code Generated by Large Language Models

Description:

In recent years, Large Language Models (LLMs, e.g., ChatGPT) have shown extraordinary performance. Many software engineering researchers around the world are now investigating how to exploit LLMs to automate software engineering tasks such as source code or test case generation, with promising results. In doing so, researchers face the challenge of automatically executing and testing LLM-generated code, which often does not even compile. Indeed, how to automatically fix, validate, and improve LLM-generated code remains an open research problem.

This research project aims to design and develop a novel framework for code and test generation with LLMs. The framework will intercept the interactions between developers and LLMs to automatically fix, validate, or improve the code or test cases generated by the LLMs. The framework is envisioned for developers and researchers who want to obtain code from LLMs that is readily compilable and runnable, along with some guarantees of its correctness.
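As an illustration of the kind of automated check such a framework could perform, the sketch below validates a piece of LLM-generated Python code by first parsing it and then running its accompanying tests in a separate subprocess. This is a minimal sketch only: the choice of Python and the names used here (validate_generated_code, ValidationResult) are illustrative assumptions, not part of the project.

import ast
import os
import subprocess
import sys
import tempfile
from dataclasses import dataclass


@dataclass
class ValidationResult:
    compiles: bool    # did the snippet parse successfully?
    tests_pass: bool  # did the accompanying tests pass?
    feedback: str     # error output that could be fed back to the LLM for repair


def validate_generated_code(code: str, test_code: str = "") -> ValidationResult:
    """Check that LLM-generated Python code at least parses, then run its tests."""
    # Step 1: syntactic validation -- many LLM outputs fail even here.
    try:
        ast.parse(code)
    except SyntaxError as exc:
        return ValidationResult(False, False, f"SyntaxError: {exc}")

    # Step 2: run the snippet together with its tests in a subprocess,
    # so a crash or an infinite loop cannot take down the framework itself.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
        tmp.write(code + "\n\n" + test_code)
        path = tmp.name
    try:
        proc = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=30
        )
        ok = proc.returncode == 0
        return ValidationResult(True, ok, "" if ok else proc.stderr)
    except subprocess.TimeoutExpired:
        return ValidationResult(True, False, "Execution timed out after 30 seconds")
    finally:
        os.unlink(path)


if __name__ == "__main__":
    snippet = "def add(a, b):\n    return a + b\n"
    tests = "assert add(1, 2) == 3\n"
    print(validate_generated_code(snippet, tests))

A full framework would then feed the feedback field back to the LLM and re-validate the repaired code; that repair loop is omitted from this sketch.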

 

Type:

Undergraduate

Prerequisites:

None

Lab:

HASEL (405.662, Lab)