Previous research has indicated gender bias in how contributors behave, but is the reliability of the surrounding documentation also affected? We want to investigate whether biases in open source documentation impede its accuracy, and to compare human-written documentation with AI-summarised and AI-generated documentation, to see whether AI could be used to lower entry barriers for new open source contributors.
This also has the potential to be useful in the open source security space, for example to help detect malicious code or packages.
Undergraduate
- Insights into existing gender biases in open source software project documentation, identified through user studies or other suitable research methods
- New tooling
None
HASEL (405.662, Lab)