A Lightweight Model for Balancing Efficiency and Precision in PEFT-Optimized Java Unit Test Generation

dc.contributor.advisor: Beakal Gizachew (PhD)
dc.contributor.author: Sintayehu Zekarias
dc.date.accessioned: 2025-07-17T07:03:39Z
dc.date.available: 2025-07-17T07:03:39Z
dc.date.issued: 2025-06
dc.description.abstract: Software testing accounts for nearly 50% of development costs while remaining critical for ensuring software quality, creating an urgent need for more efficient testing solutions. This work addresses the challenge with a framework that combines Parameter-Efficient Fine-Tuning (PEFT) techniques with transformer models to automate Java unit test generation. The study systematically evaluates three PEFT approaches, Low-Rank Adaptation (LoRA), Quantized LoRA (QLoRA), and Adapters, through a methodology involving specialized assertion pretraining on the Atlas dataset (1.2M Java method-assertion pairs), PEFT optimization, targeted fine-tuning with Methods2Test (780K test cases), and comprehensive validation on the unseen Defects4J benchmark to assess cross-project generalization. Experimental results show that LoRA retains 92% of full fine-tuning effectiveness (38.12% correct test cases) while reducing GPU memory requirements by 17% and improving generation speed by 23%. QLoRA achieves even greater efficiency, with a 36% memory reduction, making it particularly suitable for resource-constrained environments. On Defects4J, however, LoRA achieved 43.1% correct assertions against a full fine-tuning baseline of 46.0%, indicating a minor loss of cross-project generalization alongside the efficiency gains. These findings are currently limited to the Java programming language and the specific datasets used in the experiments. They nonetheless offer practical guidance for deploying AI-powered test generation, highlighting both the potential of PEFT techniques to reduce testing costs and the need for further research on maintaining test quality across diverse projects.
dc.identifier.uri: https://etd.aau.edu.et/handle/123456789/5650
dc.language.iso: en_US
dc.publisher: Addis Ababa University
dc.subject: Software testing
dc.subject: Parameter-Efficient Fine-Tuning (PEFT)
dc.subject: Low-Rank Adaptation (LoRA)
dc.subject: Quantized LoRA (QLoRA)
dc.subject: Adapters
dc.subject: Java unit test generation
dc.title: A Lightweight Model for Balancing Efficiency and Precision in PEFT-Optimized Java Unit Test Generation
dc.type: Thesis
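The core mechanism behind the LoRA approach evaluated in the abstract can be sketched as follows: a frozen pretrained weight matrix is augmented with a trainable low-rank update, so only a small fraction of parameters is fine-tuned. This is a minimal NumPy illustration of the general technique, not the thesis's implementation; all dimensions and the rank are hypothetical.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=8):
    """Forward pass through a frozen weight W plus a rank-r update B @ A."""
    delta = (alpha / r) * (B @ A)  # low-rank correction; only A, B are trained
    return x @ (W + delta).T

rng = np.random.default_rng(0)
d_out, d_in, r = 512, 512, 8                 # illustrative layer sizes
W = rng.normal(size=(d_out, d_in))           # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01        # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection, zero-init
                                             # so training starts at W exactly

x = rng.normal(size=(4, d_in))               # a small batch of inputs
y = lora_forward(x, W, A, B)

full_params = W.size                         # 512 * 512 = 262144
lora_params = A.size + B.size                # 2 * 512 * 8 = 8192
print(y.shape, lora_params / full_params)    # → (4, 512) 0.03125
```

With these sizes the adapter trains about 3% of the layer's parameters, which is the kind of reduction that underlies the memory and speed gains the abstract reports; QLoRA additionally quantizes the frozen W to shrink memory further.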

Files

Original bundle
Name: Sintayehu Zekarias.pdf
Size: 7.17 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed to upon submission